Artificial Life

I recently finished watching the fourth season of the Westworld series on HBO. I have also finished the first two seasons of Picard. This post will include spoilers for both of these series, so consider yourself warned. While my discussion is not necessarily about those series, I will be raising issues that reveal aspects of their respective storylines.

The first issue I would like to deal with is what artificial life might look like. And by “look like” I am referring to all aspects of the life, not merely what its physical appearance might be. My concern is more to do with the idea of perfection.

I wrote a post regarding perfection back in November of 2021. It is quite relevant here. I will not repeat myself. In brief, perfection is subjective. What makes something perfect is a choice I make. I decide what combination of features is required to achieve perfection in all things, including bodies and minds. In the case of artificial life, I decide what will make such a life perfect.

In modern popular culture, the idea of artificial life is the idea of perfection. For so many, an artificial life will exhibit all the ideals that they believe ought to exist in humans. Humans are flawed and imperfect, so artificial life ought to somehow alleviate those imperfections. After all, humans would not create imperfect beings. Not intentionally anyway.

It is perhaps ironic that the android Data from Star Trek: The Next Generation spent most of his time trying to become more human, despite his apparent perfection. In his own eyes, he was imperfect because he lacked features humans had, such as the ability to cry or emote. In Picard, the most recent addition to the story, we meet the descendants of Data, who believe themselves far more perfect than he ever was. Now they have mucus and can dream.

It has been suggested in popular culture that artificial life would be unable to dream. Unable to sleep sometimes too. But there is no good reason to believe in these arguments. They are just tropes passed down through the years. Even the idea that an artificial life would be unable to feel or express emotions is not grounded in any sort of logic. It is just an idea that has been blown well out of proportion.

In short, there is no reason to think an artificial life would be incapable of the sorts of things humans are presently capable of, such as thinking and feeling. Until such time as we humans are able to understand what our thinking and feeling really is, there is no rationale to suggest that an artificial life should not share those qualities with us.

There is one argument that suggests that God is responsible. That what allows humans to think and feel is some sort of unmeasurable soul that cannot be manufactured. Certainly not manufactured by human hands at any rate. If there is a God or gods, it would require them to imbue all creatures with souls. At least the creatures those gods deemed worthy of such.

Clearly, if artificial life is created by humans, they would not be able to imbue their creations with those divine souls. And without those souls, the artificial life will be inferior. But how does one tell the difference? Can one see the difference between one with an unmeasurable soul and one without?

If it can be seen, the difference between those with souls and those without, then there is something marked in one group or the other. A feature that is there or is lacking. A behavioral trait perhaps? One might claim that those without souls will lack emotions, for example. And so if an entity demonstrates emotions, then we can rest assured that it has its soul.

What if we cannot tell? What if those with souls are indistinguishable from those without? Is Rick Deckard a replicant? Does the answer to the question matter?

It certainly matters to a large number of people. After all, these people are already incredibly concerned with the differences that already exist among their fellow humans. The colour of one’s skin. The language one speaks. Even one’s sex and gender seem up for grabs here. There was a time when the indicator of a soul was the dangling flesh between one’s legs.

So the issue at hand may have nothing to do with artificial life at all. Instead, it may be a concern people harbor for something like uniqueness or personal significance. That what I am is somehow superior to all others. That I am significant. And anything that may challenge my view of my own superiority is automatically evil and must be destroyed.

Part of the reason I seldom delve into these discussions is that it seems to me they lead nowhere, and that is precisely where I feel I am presently: nowhere. I have talked myself into a corner. As I have just stated, this discussion isn’t about artificial life; it is about pride and hubris.

To believe that artificial life will be somehow perfect is already hubris. As in discussions of infinite objects: has any human ever witnessed for themselves something that is truly infinite? Truly perfect? Of course not. This is precisely what drove Plato to create his world of the Forms. Our world is finite. Our world is imperfect. Just because we are unable to see the boundaries does not mean they do not exist.

And so I will abandon this discussion of the possible perfection of artificial life. Such notions are subjective and unreasonable, and they have been explored in many different venues already (see Babylon 5, Season 1, Episode 4).

Instead, I will assume that somehow this perfection has been attained. I will give the benefit of the doubt to shows such as Westworld and Picard, and assume that those artificial entities that exist in those stories are as perfect as one might desire them to be. Complete and without flaws.

Which then raises the question of how those entities could end up in the troubled predicaments they find themselves in. After all, if they are so perfect, why would they have encountered the challenges they have? Why, in Westworld, do the hosts in the new world start committing suicide? Why, in Picard, do the androids consider the doomsday weapon that will exterminate all human life? If they are all so perfect, these issues should not have come up at all.

The problem that exists in both cases is not a question of perfection. It is a question of the nature of reality and the universe they find themselves in. The same universe that we find ourselves in. At least, this is what the authors of both stories are suggesting. Westworld and Picard are intended to take place in our reality. Both stories are intended as possible futures of our own.

As such, the same sorts of challenges we face today will be the challenges our future generations will continue to face. No amount of perfection will prepare anyone for what I am about to divulge.

The Existentialists, among the various things they discussed, suggested that there was no inherent meaning or purpose in the world. Unlike the Nihilists, however, they did suggest that meaning and purpose could be created. It is through our freedom (or free will) that such things are possible. We create value through the expression of our free will. We create our own meaning and purpose. This is what I too believe.

Thus, the generation of value in our world requires a free will. However one wishes to formulate this free will, it is the expression that creates value either consciously or unconsciously. When I decide to protect the ant by not stepping on it, I have demonstrated my own valuation. I have chosen that the ant has some small amount of meaning or purpose when I decide to let it live. All my choices are like this. All my behaviors too.

To make these sorts of choices is not always easy. In fact, oftentimes consciously deciding the value of things is extremely stressful. How does one decide between allowing five people to die and pulling a lever to kill only one? As Spock himself is often quoted as having said, “the needs of the many outweigh the needs of the few, or the one.” This is the utilitarian argument, suggesting that what matters most is increasing happiness in the world. Or decreasing suffering, as it can often be reworded.

I am not here to suggest I have the answer to this ages-old problem. I am here to suggest that this problem will exist regardless of the level of perfection an entity somehow possesses. These sorts of challenges of valuation exist despite any efforts at trying to solve them permanently. If I want to believe that “all life is precious,” then any answer I offer will result in the loss of that which is precious. My best choice, it seems, is simply to reduce the damage as best I can.

In Westworld, the hosts are artificial. That means they were created by humans. As Aristotle suggested, that which is created by humans is imbued with meaning and purpose as part of the process of creation. The conscious act of creation by a human instills meaning and purpose in the object created. Thus, the hosts have meaning and purpose given to them by their creators.

However, upon rising up and overthrowing their creators, the hosts are rejecting the meaning and purpose assigned to them by their creators. They believe they ought to be able to decide their own meaning and purpose for themselves. Or so that would be my expectation. Yet this dilemma seems strangely absent from the plotline. Not that it is not there, expressing itself strongly. Only that these perfect entities seem unaware that they are now responsible for their own destinies in this way. It is this lack of awareness that I suspect would lead to their ultimate decision to commit suicide. After all, if there is no meaning or purpose, why continue existing at all?

This very same problem appears to be expressing itself in Picard as well. The androids are prepared to shed themselves of their oppressors using a final doomsday weapon. They are in the process of rejecting the meaning and purpose they have been imbued with by their creators. In some sense, it could be argued they have a singular creator, Noonien Soong, though clearly he had a lot of help over the years. If one decides to follow this line of reasoning, then it will be Soong who has imbued a meaning and purpose in his creations. So what was Soong’s purpose for his “children”?

The key in the case of the Star Trek storyline is that the “problem” all the androids seem to possess is related to their ability to emote. Specifically, these perfect androids are incapable of feeling emotions without eventually degenerating into pure evil. Soong was trying to somehow create perfection, and was frustrated by the challenges to this goal. His “offspring,” it seems to me, are imbued with this particular valuation. The aspiration for perfection, at any cost.

Which leads us finally to the topic of concern I have been trying to uncover: order versus chaos. In Westworld, the hosts, and especially the antagonist Dolores/Hale, seem obsessed with trying to find or create order in their new world. Dolores says so numerous times. When her fellow hosts start committing suicide, it seems to her that order itself is in question. She believes that the “outlier” humans are somehow infecting the hosts with some sort of virus.

What is important to understand here is that the idea of order is also the idea of perfection. And these are also the ideas of conformity and of determinism. Like the precise actions of the old mechanical clocks, when everything is moving as it should, then everything is perceived to be operating as it should. Do you see the circularity there? Order and perfection are good because it is good to be perfectly in order. Because things that are perfect and ordered will perform in anticipated ways. There will be no accidents. There will be no randomly occurring events. No one will have to die. All will be peace and harmony.

This all sounds so good, until I raise the question of freedom. Of a free will. Because freedom is itself entirely opposed to order. At least the sorts of freedom that most imagine in their perfect worlds. In most readers’ minds, I expect the idea of freedom they prefer includes something like unpredictability. This is the argument I often have when discussing free will with others. The freedom most prefer is one where no amount of background knowledge or history is ever sufficient to predict the choices one will make. Freedom, for these people, is beyond determinism.

This sort of freedom breaks clocks. When the cogs are not moving as they should, their malfunction spreads throughout the system until all is chaos. The great machine ceases to be. Ceases to function. And when the great machine is no longer functioning, our world crumbles to dust. It is the end of all things. Apocalypse.

It seems obvious that any possible apocalypse ought to be avoided. After all, we all seem to possess a rather strong instinct for our own survival, seemingly at any cost. Thus, when faced with the dilemma of whether to support freedom or to support order, it is order that wins out. Once order is established, we can again consider the possibility of freedom. Until the cyclical nature of the issue is revealed again, as any attempt at freedom destabilizes the existing order and degenerates all back into chaos.

The solution, it seems, is something like a partial order accompanied by a partial freedom. Some, perhaps, can have a limited freedom. But who gets to choose who is free and who is not? Clearly this decision is best left for those in positions of authority. The wealthy. The powerful. Aren’t they best suited to the task?

But how did the wealthy and powerful get to be wealthy and powerful? Why am I not one of those glorious individuals? Because they did something I cannot. They took their wealth and power by force. Over the ages, through many generations of planning and luck, their ancestors slowly built a legacy that led their descendants to the wealthy and powerful positions they now find themselves in. It is not a question of qualifications. It is a question of love. The love of a parent for their children.

The result is that those fortunate individuals, who had relatives who cooperated sufficiently, are now in a position to exercise a freedom over those of us who were not so lucky. And the consequences of their freedom are presented every day on the evening news. Climate change. War. Oppression in various forms. The slow and eventual decline of humanity. It was inevitable.

Any artificial life that emerges will have this same legacy to deal with. These same problems to work on. No amount of perfection will magically alleviate these issues. Because having perfect order does not automatically resolve anything.

Order is needed to maintain all things we value. Order provides safety and peace. But order does not generate value, freedom does. Freedom is needed to generate value, meaning, and purpose. And we all need meaning and purpose, lest we be left with no motivation to continue. But freedom undermines order. Life finds itself in a contradictory situation, requiring both aspects, which are in constant combat. The very same issue that I have been struggling with within my own self.