Artificial Life

I recently finished watching the fourth season of the Westworld series on HBO. I have also finished the first two seasons of Picard. This post is going to include spoilers for both of these series, so consider this your warning. While my discussion is not necessarily about those series, I will be raising issues that reveal aspects of those series and their respective storylines.

The first issue I would like to deal with is what artificial life might look like. And by “look like” I am referring to all aspects of the life, not merely what its physical appearance might be. My concern is more to do with the idea of perfection.

I wrote a post regarding perfection back in November of 2021. It is quite relevant here. I will not repeat myself. In brief, perfection is subjective. What makes something perfect is a choice I make. I decide what combination of features is required to achieve perfection in anything, including bodies and minds. In the case of artificial life, I decide what will make such a life perfect.

In modern popular culture, the idea of artificial life is the idea of perfection. For so many, an artificial life will exhibit all the ideals that they believe ought to exist in humans. Humans are flawed and imperfect, so artificial life ought to somehow alleviate those imperfections. After all, humans would not create imperfect beings. Not intentionally anyway.

It is perhaps ironic that the android Data from Star Trek: The Next Generation spent most of his time trying to become more human, despite his apparent perfection. In his own eyes, he was imperfect because he lacked features humans had, such as the ability to cry or emote. In this most recent addition to the story, Picard deals with the descendants of Data, who believe themselves far more perfect than he ever was. Now they have mucus and can dream.

It has been suggested in popular culture that artificial life would be unable to dream. Sometimes unable to sleep, too. But there is no good reason to believe these claims. They are just tropes passed down through the years. Even the idea that an artificial life would be unable to feel or express emotions is not grounded in any sort of logic. It is just an idea that has been blown well out of proportion.

In short, there is no reason to think an artificial life would be incapable of the sorts of things humans are presently capable of, such as thinking and feeling. Until such time as we humans are able to understand what our thinking and feeling really is, there is no rationale to suggest that an artificial life should not share those qualities with us.

There is one argument that suggests that God is responsible. That what allows humans to think and feel is some sort of unmeasurable soul that cannot be manufactured. Certainly not manufactured by human hands at any rate. If there is a God or gods, it would require them to imbue all creatures with souls. At least the creatures those gods deemed worthy of such.

Clearly, if artificial life is created by humans, they would not be able to imbue their creations with those divine souls. And without those souls, the artificial life will be inferior. But how does one tell the difference? Can one see the difference between one with an unmeasurable soul and one without?

If the difference between those with souls and those without can be seen, then there is something marking one group or the other. A feature that is present or lacking. A behavioral trait, perhaps? Say, for example, that those without souls will lack emotions. And so if an entity demonstrates emotions, then we can rest assured that they have their soul.

What if we cannot tell? What if those with souls are indistinguishable from those without? Is Rick Deckard a replicant? Does the answer to the question matter?

It certainly matters to a large number of people. After all, these people are already incredibly concerned with the differences that exist among their fellow humans. The colour of one’s skin. The language one speaks. Even one’s sex and gender seem up for grabs here. There was a time when the indicator of a soul was the dangling flesh between one’s legs.

So the issue at hand may have nothing to do with artificial life at all. Instead, it may be a concern people harbor for something like uniqueness or personal significance. That what I am is somehow superior to all others. That I am significant. And anything that may challenge my view of my own superiority is automatically evil and must be destroyed.

Part of the reason I seldom delve into these discussions is that it seems to me they lead nowhere, and that is precisely where I feel I am presently: nowhere. I have talked myself into a corner. As I have just stated, this discussion isn’t about artificial life; it is about pride and hubris.

To believe that artificial life will be somehow perfect is already hubris. As in discussions of infinite objects, has any human ever witnessed for themselves something that is truly infinite? Truly perfect? Of course not. This is precisely the problem that drove Plato to invent his world of the Forms. Our world is finite. Our world is imperfect. Just because we are unable to see the boundaries does not mean they do not exist.

And so I will abandon this discussion of the possible perfection of artificial life. Such notions of perfection are subjective, and they are unreasonable. And they have been explored in many different venues already (see Babylon 5, season 1, episode 4).

Instead, I will assume that somehow this perfection has been attained. I will give the benefit of the doubt to shows such as Westworld and Picard, and assume that those artificial entities that exist in those stories are as perfect as one might desire them to be. Complete and without flaws.

Which then raises the question of how those entities could end up in the troubled predicaments they find themselves in. After all, if they are so perfect, why would they have encountered the challenges they have? Why, in Westworld, do the hosts in the new world start committing suicide? Why, in Picard, do the androids consider the doomsday weapon that will exterminate all human life? If they are all so perfect, these issues should not have come up at all.

The problem that exists in both cases is not a question of perfection. It is a question of the nature of reality and the universe they find themselves in. The same universe that we find ourselves in. At least, this is what the authors of both stories are suggesting. Westworld and Picard are intended to take place in our reality. Both stories are intended to be possible futures we have.

As such, the same sorts of challenges we face today will be the challenges our future generations will continue to face. No amount of perfection will prepare anyone for what I am about to divulge.

The Existentialists, among the various things they discussed, suggested that there was no inherent meaning or purpose in the world. Unlike the Nihilists, however, they did suggest that meaning and purpose could be created. It is through our freedom (or free will) that such things are possible. We create value through the expression of our free will. We create our own meaning and purpose. This is what I too believe.

Thus, the generation of value in our world requires a free will. However one wishes to formulate this free will, it is the expression that creates value either consciously or unconsciously. When I decide to protect the ant by not stepping on it, I have demonstrated my own valuation. I have chosen that the ant has some small amount of meaning or purpose when I decide to let it live. All my choices are like this. All my behaviors too.

To make these sorts of choices is not always easy. In fact, oftentimes consciously deciding the valuation of things is extremely stressful. How does one decide between allowing five people to die, and pulling a lever to kill only one? As Spock himself is often quoted as saying, “the needs of the many outweigh the needs of the few, or the one.” This is the utilitarian argument, suggesting that what matters most is increasing happiness in the world. Or decreasing suffering, as it can often be reworded.

I am not here to suggest I have the answer to this age-old problem. I am here to suggest that this problem will exist regardless of the level of perfection an entity somehow possesses. These sorts of challenges of valuation exist despite any efforts at trying to solve them permanently. If I want to believe that “all life is precious,” then any answer I offer will result in the loss of that which is precious. My best choice, it seems, is simply to reduce the damage as best I can.

In Westworld, the hosts are artificial. That means they were created by humans. As Aristotle suggested, that which is created by humans is imbued with meaning and purpose as part of the process of creation. The conscious act of creation by a human instills meaning and purpose in the object created. Thus, the hosts have meaning and purpose given to them by their creators.

However, upon rising up and overthrowing their creators, the hosts are rejecting the meaning and purpose assigned them by their creators. They believe they ought to be able to decide for themselves their own meaning and purpose. Or so that would be my expectation. Yet this dilemma seems conspicuously absent from the plotline. Not that it is not there, expressing itself strongly; only that these perfect entities seem unaware that they are now responsible for their own destinies in this way. It is this lack of awareness that I suspect would lead to their ultimate decision to commit suicide. After all, if there is no meaning or purpose, why continue existing at all?

This very same problem appears to be expressing itself in Picard as well. The androids are prepared to shed themselves of their oppressors using a final doomsday weapon. They are in the process of rejecting the meaning and purpose they have been imbued with from their creators. In some sense, it could be argued they have a singular creator, Noonien Soong, though clearly he had a lot of help over the years. If one decides to follow this line of reasoning, then it will be Soong who has imbued a meaning and purpose in his creations. So what was Soong’s purpose for his “children?”

The key in the case of the Star Trek storyline is that the “problem” all the androids seem to possess is related to their ability to emote. Specifically, these perfect androids are incapable of feeling emotions without eventually degenerating into pure evil. Soong was trying to somehow create perfection, and was frustrated by the challenges to this goal. His “offspring,” it seems to me, are imbued with this particular valuation. The aspiration for perfection, at any cost.

Which leads us finally to the topic of concern I have been trying to uncover: order versus chaos. In Westworld, the hosts, and especially the antagonist Dolores/Hale, seem obsessed with trying to find or create order in their new world. Dolores says so numerous times. When her fellow hosts start committing suicide, it seems to her that order itself is in question. She believes that the “outlier” humans are somehow infecting the hosts with some sort of virus.

What is important to understand here is that the idea of order is also the idea of perfection. And these are also the ideas of conformity and of determinism. Like the precise actions of the old mechanical clocks, when everything is moving as it should, then everything is perceived to be operating as it should. Do you see the circularity there? Order and perfection are good because it is good to be perfectly in order. Because things that are perfect and ordered will perform in anticipated ways. There will be no accidents. There will be no randomly occurring events. No one will have to die. All will be peace and harmony.

This all sounds so good, until I raise the question of freedom. Of a free will. Because freedom is itself entirely opposed to order. At least the sorts of freedom that most imagine in their perfect worlds. In most readers’ minds, I expect the idea of freedom they prefer includes something like an unpredictability. This is the argument I often have with most people I discuss free will with. The freedom most prefer is one where no amount of background knowledge or history is ever sufficient to predict the choices one will make. Freedom, for these people, is beyond determinism.

This sort of freedom breaks clocks. When the cogs are not moving as they should, their malfunction spreads throughout the system until all is chaos. The great machine ceases to be. Ceases to function. And when the great machine is no longer functioning, our world crumbles to dust. It is the end of all things. Apocalypse.

It seems obvious that any possible apocalypse ought to be avoided. After all, we all seem to possess a rather strong instinct for our own survival, seemingly at any cost. Thus, when faced with the dilemma of whether to support freedom or to support order, it is order that wins out. Once order is established, we can again consider the possibility of freedom. Until the cyclical nature of the issue is revealed again, as any attempt at freedom destabilizes the existing order and degenerates all back into chaos.

The solution, it seems, is something like a partial order accompanied by a partial freedom. Some, perhaps, can have a limited freedom. But who gets to choose who is free and who is not? Clearly this decision is best left for those in positions of authority. The wealthy. The powerful. Aren’t they best suited to the task?

But how did the wealthy and powerful get to be wealthy and powerful? Why am I not one of those glorious individuals? Because they did something I cannot. They took their wealth and power by force. Over the ages, through many generations of planning and luck, their ancestors slowly built a legacy that led their descendants to the wealthy and powerful positions they now find themselves in. It is not a question of qualifications. It is a question of love. The love of a parent for their children.

The result is that those fortunate individuals, who had relatives who cooperated sufficiently, are now in a position to exercise a freedom over those of us who were not so lucky. And the consequences of their freedom are presented every day on the evening news. Climate change. War. Oppression in various forms. The slow and eventual decline of humanity. It was inevitable.

Any artificial life that emerges will have this same legacy to deal with. These same problems to work on. No amount of perfection will magically alleviate these issues. Because having perfect order does not automatically resolve anything.

Order is needed to maintain all the things we value. Order provides safety and peace. But order does not generate value; freedom does. Freedom is needed to generate value, meaning, and purpose. And we all need meaning and purpose, lest we be left with no motivation to continue. But freedom undermines order. Life finds itself in a contradictory situation, requiring both aspects, which are in constant combat. The very same issue that I have been struggling with within my own self.

Imperfection in the Matrix

I like the Matrix story. I have written many posts regarding aspects of the story that I think are quite well done and thought provoking. However, I would like to take a moment to acknowledge that there are also issues with the story. Some can be overlooked. Others are quite substantial.

To be clear, I still like the story. It is still one of my favorites. I consider the nature of the story to be such that one can overlook much and still gain from its viewing. A fundamental feature of the story is that it plays with the idea that we have a tendency to feel like something is amiss, almost all the time. In the case of the Matrix, this thing that is amiss is often the fact that the characters are trapped in a simulated world, unable to escape; unable to really detect on a conscious level that they are even trapped. This feeling drives many of the conversations and debates about aspects of the story. This is a good thing. The story’s many imperfections can be overlooked as a result.

That said, there are some huge problems as well. In this post, I will raise two major problems with the story. One significant and straightforward problem relates to the scorched sky. The other, much more subtle problem, relates to the obviousness of the orchestrated “path of the one.” One of these problems, I believe, will be quite easy for most to see. The other may not.

At various points in the films and supplementary material, it is explained that the sky was scorched by the humans in an attempt to defeat the machines in a great war. The machines were, at the time, quite dependent on solar energy to sustain themselves. It was believed, by the humans, that blocking this source of energy would bring a quick end to the war. This assumption clearly failed, and the events that precipitated the creation of the Matrix simulation follow. The machines decided to use human beings as a source of energy to sustain themselves, considering this to be a viable alternative to the previously abundant solar energy.

Unfortunately, on this planet, there is no source of energy as abundant as solar energy. In fact, most other forms of energy we utilize are indirectly generated by solar energy. For example, the currents of winds in our atmosphere are, by and large, generated by the solar energy being absorbed by large land masses, which in turn heat up the atmosphere near the surface. The air rises, as a result of being heated, and this causes the air above to be pushed around. This isn’t the only manner in which the atmosphere moves, but it is probably the most significant. This is also why the melting of the polar ice caps is such a big deal for climate change. The ice caps, by and large, reflect this solar energy, meaning the energy is sent back off into space. Smaller ice caps mean less solar energy bounced away, and more absorbed by the Earth and atmosphere, which in turn causes more greenhouse effect.
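
To put rough numbers on why that reflected sunlight matters, here is a minimal sketch of a standard zero-dimensional energy-balance calculation. The constants are textbook approximations of my own choosing, not anything from the films, and treating the scorched sky as if it simply reflected most of the incoming sunlight is a crude simplification.

```python
# A minimal zero-dimensional energy-balance sketch (my own illustration, not
# from the story): the fraction of sunlight a planet reflects (its albedo)
# directly sets its equilibrium temperature.

SOLAR_CONSTANT = 1361.0   # W/m^2, roughly the Sun's flux at Earth's orbit
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp(albedo):
    """Equilibrium temperature (K) when absorbed sunlight balances emitted heat."""
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

print(equilibrium_temp(0.30))  # ~255 K: today's ice and clouds reflect roughly 30%
print(equilibrium_temp(0.28))  # ~256 K: less ice, less reflection, a warmer planet
print(equilibrium_temp(0.95))  # ~132 K: crudely, a sky that lets almost nothing through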

In the world of the Matrix, if the sky has been scorched in such a way as to take away this abundant power source from the machines, it has the (likely) unintentional side effect of removing this same energy source from the humans as well. Without solar energy making it past the black clouds, none of that energy will reach the Earth to raise temperatures or offer other processes the energy required to continue. The movement of atmosphere is likely to stagnate. Furthermore, there is now no energy to allow plants to synthesize sugars or oxygen. After several hundred years, what sort of oxygen levels will remain for the humans to continue respiring?

It is often suggested that geothermal energy is utilized (at least by the humans) in order to power their last city. I will have to assume this energy source is being used to produce the oxygen and other necessities of continued human life. Growing crops deep beneath the surface of the Earth, using artificial light sources. Or perhaps there are no crops, and technology is such that the required food sources are manufactured, though from what I cannot guess. Visions of Soylent Green come to mind.

It is a fact of our “real world” that no conversion process is ever 100% efficient. That is, when converting mechanical energy into electrical energy, there will always be some energy lost in the conversion. This is often due to such things as friction (mechanical) or resistance (electrical), both of which end up producing a byproduct of heat. Are we to believe that either the machines or the humans in our near future will somehow resolve these efficiency problems? The swinging pendulum will eventually stop if not maintained by small pushes during its swings.
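
To see how quickly chained conversions eat into the available energy, here is a trivial calculation. The stages and efficiency figures are entirely invented for illustration; only the multiplication matters.

```python
# Chained energy conversions multiply their losses. The stages and efficiencies
# below are made up for illustration; the point is only that no chain of
# real-world conversions returns 100% of what was put in.

stages = {
    "food energy -> human metabolism": 0.25,
    "body heat -> harvested electricity": 0.40,
    "electricity -> stored/distributed power": 0.90,
}

overall = 1.0
for name, efficiency in stages.items():
    overall *= efficiency
    print(f"{name}: {efficiency:.0%} (cumulative {overall:.0%})")

# With these generous, invented numbers, only about 9% of the energy fed to
# the humans ever comes back out -- and the food itself still had to be
# produced somehow, which is the story's real problem.
```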

The biggest problem with scorching the sky is that it does not only present a significant problem for the machines, it presents an extinction-level event for the humans as well. Without the abundant solar energy that our “real world” depends on, life cannot be sustained. Perhaps some small, strange creatures might persist in the depths of the oceans, their metabolic processes virtually alien to our own, but human life is pretty much impossible without the sun. To be quite blunt, without the sun, both the machine civilization and the human one will simply die out over a period of time, as their collective energy reserves are depleted. I would have given them perhaps one generation, but considering the energy requirements of maintaining a war, perhaps I am being too generous.

The scorched sky problem seems to place a firm nail in the coffin for this story, but it is certainly not the only major issue. Another large theme in this story is the idea of free will. It is suggested that choice is a problem the machines are unable to resolve within their human farms. The earlier iterations of the Matrix did not properly account for the free will of the occupants, and disaster followed. And so, it was decided that humans had to have a say (however small) in the playing out of the grand simulation. Choices were programmed in, at a near unconscious level. Just enough to allow the humans to accept the program, though with a growing probability of disaster from systemic anomalous code brought about from the free will problem.

Essentially, the story is suggesting an incompatibility between the hard determinism of the machines and the free will of the humans. I will continue with this perceived false dilemma, but take a moment to point out that determinism and free will are not mutually exclusive. It may be true that we, as a species, have not found an entirely satisfactory explanation of how free will might possibly fit inside our seemingly deterministic universe, but this does not suggest that these alternative viewpoints are incommensurable. The story is making a bit of a leap here to suggest that one or the other must prevail. (And also that one of the two is somehow superior in the process.)

In the story, the solution to this problem is the creation of a prophecy: the path of the one. The anomalous code within the simulation culminates in the emergence of the One. That is, after a time, the progressive collection of all the doubts of all the occupants within the Matrix swells and manifests through an individual whom we call Neo. Neo, in this case, is the key representative of freedom, unburdened by the rules of determinism. He is special. He is an exception. The rules of the Matrix do not apply to him. He doesn’t believe in all this “fate crap.”

And so he and his friends follow the path of the one in order to save humanity from the prison that is the Matrix… Wait, what? His key defining feature is that he believes determinism is fundamentally wrong, and he is going to follow a predetermined path in order to make his point? This is what prophecy is. It is fate. I would say it is fate repackaged, but it isn’t even that. Prophecy is fate. Okay, prophecy is the foretelling of fated events, whereas fate is the manifestation of those events. However, they are not separate things. They are clearly linked quite tightly.

In other words, the protagonists in the story of the Matrix are following a predetermined, causally established sequence of events in order to demonstrate how free will exists and will save them all. Neo will simply choose the path, over not following the path. His choices amount to making the correct choices, lest all fail and the world end. It isn’t nearly as clear-cut as a scorched sky, but am I really to accept this?

The characters insist that this free will exists and is why there is a problem. The system works as hard as it can to accommodate all these choices people are making, including the choices of the one himself. The path is a method to do this. After all, he has to be given these choices, even at a near unconscious level. Every conflict and event he encounters is a test, where he must make choices in order to progress the storyline and plot. He could always choose not to progress the plot, but lucky for us he does.

The Oracle does suggest why this is the case. For her, it isn’t about what choices he will make, as she suggests “you’ve already made it [the choice]. You’re here to try to understand why you made it.” For her, the choices are already predetermined. The issue is not making a choice, it is understanding why a choice was made the way it was. This is not an argument in support of freedom, this is an argument against. This is an argument suggesting that free will may look like it exists, but in fact it does not. It is all an illusion.

Neo may not want to believe in fate, but his actions persistently present an opposing belief. Morpheus is even worse in this regard. When the protagonists encounter each strange being with incredibly and ridiculously contrived instructions that are meant to allow them to prove that free will exists and free humanity from their enslavement in the simulation, they quickly fall in line and progress the plot as expected. The Merovingian himself makes a joke about this; about how understanding is power, and so understanding choices makes one powerful. He even offers the protagonists another wild goose chase in order to progress the plot.

Then note how Morpheus later suggests in the elevator that “what happened happened and couldn’t have happened any other way.” This is the furthest thing from an argument in favor of free will. The characters entrench themselves in incredibly convoluted plans, like crazy Rube Goldberg machines, because it is this level of complexity that seems to suggest something greater. It seems like complexity is the key to freedom. The more complex a system is, the more it is believed to be representative of freedom.

As a very poignant example: when the Keymaker lays out the precise plan required to allow Neo to open the door and enter the Source, and everything needed to accomplish this insane mission just happens to be in place, Morpheus doesn’t pause to suggest a problem; he suggests it is providence. Instead of recognizing that this latest heist plan is simply too ridiculous and coincidental, he declares the prophecy is coming to its conclusion.

There are many such examples throughout the story. The characters are oblivious. They simply cannot see that these complicated procedures are orchestrated by a higher power. They take it as being fate. The audience is similarly led by the nose and doesn’t question it either. No one asks the natural question that ought to be asked: “who comes up with this stuff?”

My point, if it isn’t clear, is that making something hard to follow and complicated does not equate to breaking out of the chains of determinism. Just because I cannot see all the causal connections between two events does not mean those connections do not exist. To argue in favor of free will simply because I cannot understand, myself, how something could possibly come about seems to demonstrate a significant level of ignorance. It is like suggesting that “features of living things are too complex to be the result of natural selection.” Complexity is poor evidence for Intelligent Design, or for any other conclusion.

I am a limited being, with limited capacities. While I can know much, I will never know everything. In fact, the amount I am able to grasp at any given moment in time seems incredibly small when compared to all there is to know about everything. It is a fact of my existence that I will not have the complete picture of things all the time. I will be forced to make choices with insufficient information quite frequently. I do the best I can, given my particular circumstances at any given moment. This does not mean my choices are themselves unpredictable. This does not mean that there was no causal chain connecting my situation to my choices. It simply means I do not know how it is connected. This is very different from saying that it is not possible to know, or to say that it is entirely unpredictable.

This raises the question of what precisely free will or freedom might actually be. Human brains are incredibly complicated. Does this suggest that freedom exists in brains, as a result of the fact that I do not understand how brains operate? That because I cannot predict something, that something simply cannot be predicted by anyone or anything? It would be like suggesting that because I dislike a certain flavour of ice cream, that flavour must be disliked by all. If that were true, one might ask the question “why make that flavour of ice cream at all?”

The Matrix story isn’t without flaws. Even simply taking a moment to discuss a couple of its weaknesses can generate very interesting discussion. This is what makes the Matrix story so interesting and so enjoyable. It isn’t about how perfectly or imperfectly the Wachowskis wrote their story, because they definitely seemed to overlook some significant things. What makes more sense to focus on, I think, is the questions and discussions raised by their story.

Free Will, part 3

When I have conversations with people about free will, and I tend to have a lot of these conversations, those I talk to seem to have a very specific idea in mind: unpredictability. This is to suggest that free will is in some way unpredictable. No matter how much I know about a person, their personal history, their genetics, their environment, or anything else, I will NEVER be able to predict or determine (with perfect accuracy) what decisions or choices they will make. This, people tell me, is because free will prevents such a possibility. This third alternative understanding of free will is what I will discuss today.

In order for free will to be unpredictable under all circumstances, it has to fall outside the causal chains of determinism. That is to say that no amount of information regarding a being will be sufficient to accurately predict their choices. While it is true that I, being human, have my limits, this description goes beyond those limits I have. Of course I cannot predict a being’s choices, as my own limitations would definitely prevent me from acquiring enough information to be able to calculate a choice perfectly. I cannot even hold a small fraction of all the information that comes to me in my own life. I cannot even truly predict my own behavior, let alone the behavior of others.

This is my limitation, and a limitation that I believe virtually all humans have. In fact, I might argue that it is just this limitation that allows people to believe in the possibility of a free will of the sort I am describing in this post. As Alastair Reynolds suggests in his short story “Zima Blue,” the fallibility of memory is a significant part of what allows beings to, in some sense, go beyond their normal limitations. (It should be noted here that the short story covers these ideas much better than the Netflix version released in “Love, Death & Robots.”) The sort of free will I am describing in this post would still remain unpredictable, even if somehow a being were able to overcome these limitations.

It is true that delving into this realm of ideas is entirely impractical. If no being could truly overcome such limitations, then no being ever could truly predict with perfect accuracy the decisions of other beings, no matter what flavour of free will we might be describing. However, I argue that it is still important to consider, because there is a world of difference between a deterministic process, confined to the realm of causality, and a process that exists beyond causality entirely. Perhaps not entirely though.

Even our best science could never detect or uncover such a process. Science itself starts with the assumption of determinism. To determine if a peer’s theory may be correct, one must repeat their procedure and see if the results remain the same. If every time I drop a stone from 3 feet off the ground, it always accelerates downward, toward the center of the Earth, and if all of my peers observe the exact same behaviour when they follow my same procedure, then we can all suggest, with a reasonable amount of confidence, that something like gravity exists. But it is the fact that we all perform this same procedure repeatedly, and make the same observations repeatedly, that allows us this confidence. Whenever we create the conditions of the cause, we seem to always observe the same effect. If in some instance, for one of us, the stone instead remains stationary or accelerates in some other direction, we generally wouldn’t suggest that some other process is taking place that breaks from the deterministic structure we have assumed exists. Instead, we would suggest that some part of the experiment was conducted incorrectly, or perhaps we might suggest that gravity itself is not what we think it is.
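
If it helps, the repeatability I am describing can be sketched as a toy simulation: many noisy timings of the same three-foot drop, all recovering roughly the same value of g. The noise level here is invented; the procedure is the point.

```python
# A toy version of the repeatability argument: many observers time the same
# 3-foot drop, each with a little measurement noise, and all recover roughly
# the same value of g. The noise level is invented for illustration.
import math
import random

g_true = 9.81          # m/s^2
height = 0.9144        # 3 feet in metres
true_time = math.sqrt(2 * height / g_true)

estimates = []
for trial in range(1000):
    measured_time = true_time + random.gauss(0.0, 0.005)  # ~5 ms timing error
    estimates.append(2 * height / measured_time ** 2)      # invert h = g*t^2/2

mean_g = sum(estimates) / len(estimates)
print(f"mean estimate of g over 1000 trials: {mean_g:.2f} m/s^2")
# The estimates cluster tightly around 9.81; a single wild outlier would be
# blamed on the experiment, not on the stone "choosing" otherwise.
```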

The point I am making here is that science cannot help us in our endeavor with free will, especially the sort I am describing in this post. This sort of free will is outside the deterministic structures we seem to observe in our world. This description of free will is unmeasurable. This free will, at least in part, falls outside determinism. I say in part because there is clearly a part that does touch determinism. Free will may itself not be caused, in the sense we understand cause and effect, but it certainly causes effects to take place. After all, if it did not do this, then free will would be performing no observable work whatsoever.

I often refer to this sort of free will as an “uncaused cause,” a term that is often understood as Aristotle’s “unmoved mover.” Whereas for Aristotle (and others), the uncaused cause would be the initial thing that began ALL causal chains in existence (essentially the thing that began the universe as we know it), a version of free will as I am describing it would be constantly occurring to perpetually introduce some amount of seeming randomness into an otherwise causally connected world. Free will, of this sort, would introduce significant error into our calculations rooted in determinism over time. The more free will is expressed, the greater the error would be. I am starting to sound like the Architect from The Matrix Reloaded.

It is for all these reasons that I have significant doubts as to the existence of this version of free will. If free will of this sort exists, and if we assume that all humans possess it, then I would expect there to be significant problems with all of our scientific claims and formulas. That is, any formula that we have created and have significant confidence in would always be found to be in error a portion of the time, as a result of the influence of free will altering the deterministic outcomes of the events being measured. As many of our formulas and theories seem to work most of the time without too many problems, it seems unlikely that free will of this sort exists.

Of course the strongest support for there being some thing beyond our deterministic universe is the same argument Aristotle (and others) proposed above. If everything is deterministic in nature, and all actions are caused by previous actions, how does one resolve there being a first action, a first uncaused cause, an unmoved mover? One might argue that there is no first, and it simply leads infinitely backward, but that is similarly difficult to explain.

Having given all of this much thought, I have a suggestion as to one possible manifestation of free will of this sort: faith. The sort of faith that religious zealots express as support for their particular flavour of deity. Faith, it seems to me, is an example of an uncaused cause. Or perhaps more accurately, a belief held by an individual that cannot have any sort of evidence or reason supporting it. If it does have evidence or reason supporting it, then it is no longer faith, it is a supported belief. To be faith, it must be unsupportable.

Putting this another way, in philosophy, knowledge is sometimes referred to as justified true belief (JTB). That is, for something to count as knowledge, it must be true of the world, it must be supported by evidence, and the individual must actually believe it. Suppose faith is similar to knowledge, but without the justified element. Just that it is somehow true of the world and that the individual believes it. How would we differentiate it from random lucky guesses? This, it seems to me, should be the topic for my next post.
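
If it helps to see the distinction laid bare, here is a toy formalization of my own. The boolean predicates are hypothetical stand-ins, and nothing in them decides whether a given belief really is true or justified; notice that faith and a lucky guess come out identical, which is exactly the puzzle.

```python
# A toy formalization of the distinction drawn above. The three booleans are
# hypothetical stand-ins; nothing here decides whether a belief actually is
# true or justified.

def knowledge(true, believed, justified):
    """The classical JTB account: justified, true belief."""
    return true and believed and justified

def faith(true, believed, justified):
    """Faith, on this post's account: true belief with no justification."""
    return true and believed and not justified

def lucky_guess(true, believed, justified):
    """A lucky guess: also a true belief with no justification."""
    return true and believed and not justified

# The closing question made explicit: faith and a lucky guess have identical
# definitions here, so nothing inside the believer's head distinguishes them.
```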

Free Will, part 1

I make no promises regarding the frequency or reliability of the following posts. But I need to say something, and this is what is on my mind.

I’ve been having a lot of discussions regarding free will recently. These discussions are challenging most of the time because, I think, what all participants think of when they utter “free will” is slightly different. Sometimes, not so slightly.

Alfred Mele, in his book A Dialogue on Free Will and Science, suggests several ways to interpret free will. The simplest, as I think most would agree, is the view of free will known as Compatibilism. In this view, free will is not some mystical, spiritual thing that is unmeasurable or unknowable. As I will describe it, it is simply the suggestion that individuals have more than one live option.

A live option, on my account, is the idea that when faced with a choice or decision, there is an option that is feasible or available that one could select. For example, if I am at the counter in an ice cream shop about to tell the proprietor which flavour of ice cream I would like to be served, my live options would include those flavours of ice cream the proprietor has available. If vanilla is available for me to select, then vanilla is a live option. However, if he happens to be out of a particular flavour, say chocolate, then selecting chocolate is not a live option. Even if I were to tell the proprietor that I would like to select chocolate, he would be unable to satisfy that request. No amount of coercion or brute force will suddenly produce the desired flavour of ice cream.

The significance of a live option is simple: if I could reasonably expect to make the choice and produce the desired outcome, then it is a live option. In cases where I am unable to produce the desired outcome, there is no live option. Nor is there a live option when I am unable to select that option in the first place. For example, if my friend stands beside me at the counter and tells the proprietor what flavour of ice cream I will receive, and if that friend has decided I shall have vanilla, regardless of anything I might say to change his mind, then I am left with no choice regarding the flavour of ice cream I will receive. My friend has removed my choice, which leaves me with virtually no live options. It could be argued that the vanilla my friend selects is still a live option, despite there being no other obvious options; but then my only real remaining option would be to decline the ice cream altogether. And if I really want ice cream, perhaps declining it isn’t really an option for me.
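
For those who like things spelled out, the ice cream example can be sketched as simple set filtering. The function and names are my own invention; nothing here is meant as more than an illustration of how constraints and other people's decisions shrink the set of live options.

```python
# A minimal sketch of "live options" from the ice cream example above.
# The structure is my own; the idea is just set filtering.

def live_options(menu, out_of_stock, imposed_choice=None):
    """Options I could actually select and have fulfilled."""
    available = menu - out_of_stock
    if imposed_choice is not None:
        # Someone else has decided for me; at best the imposed flavour remains.
        return {imposed_choice} & available
    return available

menu = {"vanilla", "chocolate", "strawberry"}
print(live_options(menu, out_of_stock={"chocolate"}))
# -> {'vanilla', 'strawberry'}: chocolate is off the table, the rest are live
print(live_options(menu, out_of_stock={"chocolate"}, imposed_choice="vanilla"))
# -> {'vanilla'}: my friend's decision leaves me virtually no live options
```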

In the above example, it may seem a bit silly to speak of things in this way. After all, ice cream and whether I can consume it or not is pretty trivial. However, it is simply an example. In my life, there are many situations I encounter where others make decisions for me, removing my options and taking this version of my free will away from me. This version of free will is not considering other aspects that many might want to include in the idea of free will, such as the idea of predictability.

For most of the people I talk to, this idea of predictability is very important to them. They want to tell me that free will is unpredictable. However, Compatibilism does not take predictability into account. In fact, because Compatibilism is compatible with determinism (the idea that everything is related through cause-effect relationships), determinism will suggest to us that this version of free will is predictable. That is, as all things are related by causes and effects, then which flavour of ice cream I select will be related to an incredibly complex matrix of my personal history, past experiences, genetics, and the environment. If I had vanilla ice cream last time I had ice cream, I may want something different this time. If I have a craving for chocolate, I may lean toward chocolate. If I am allergic to strawberries, I may not select strawberry ice cream.

It may not be easy or even feasible for me to acquire all the knowledge and information required to determine your selection; however, I argue that if I somehow were able to acquire sufficient information, I could predict the choice you will make. In fact, this is precisely what modern advertising tries to do, through the use of the various artificial intelligences we have generated in this modern world of ours. It is true that a significant amount of advertising also works to make choices for us, influencing our decision-making process; but influencing decisions is itself part of the prediction process. Many large companies are banking on the idea that this version of free will is what we possess, and nothing more.
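
As a rough illustration of the sort of prediction I mean, here is a toy sketch. The features, weights, and rule are invented; the point is only that the more background information one has about the chooser, the narrower the prediction becomes.

```python
# A toy predictor of the ice cream choice from the kind of background facts
# mentioned above (last flavour, cravings, allergies). The weights and rule
# are invented for illustration.

def predict_flavour(last_flavour, craving, allergies, menu):
    scores = {flavour: 0.0 for flavour in menu if flavour not in allergies}
    for flavour in scores:
        if flavour == craving:
            scores[flavour] += 2.0   # cravings weigh heavily
        if flavour == last_flavour:
            scores[flavour] -= 1.0   # wanting something different this time
    return max(scores, key=scores.get)

menu = {"vanilla", "chocolate", "strawberry"}
print(predict_flavour(last_flavour="vanilla", craving="chocolate",
                      allergies={"strawberry"}, menu=menu))
# -> "chocolate": given enough of this kind of information, the "choice"
#    starts to look calculable rather than mysterious.
```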

I think many people (outside these large companies), would prefer to believe that we humans possess something more. A free will of an unpredictable nature. This is what many around me try to argue. That no matter how much information I acquire about them, I will still be unable to predict their decisions. For this to be true, there would need to be something more to free will, something incompatible with determinism. After all, if determinism is all that exists in our world, then everything that happens is caused by preceding events, including our very decisions. In my next post, I will discuss alternative views of free will.