ASI Can Never Be

AI and The End

I really wanted to continue my discussion on gender. I felt like it was going somewhere productive. However, as is often the case, life decided I needed to pay attention to other things. At first it was taxes, something I really need to talk about on this blog at some point. But then, the news media blew up (and is continuing to blow up) regarding the latest achievements in artificial intelligence, or as it is more commonly known: AI. As it relates to another deep dive I have been struggling with over the past decade, perhaps it is time I address the issue of AI.

The very first thing that needs to be cleared up is what is actually meant by the term “artificial intelligence.” The term has many meanings, and various media throw it around haphazardly. Sometimes they mean one thing by it, and at other times something very different, oftentimes even within the same sentence. This possibly accidental equivocation leads most people to rather unexpected and startling conclusions regarding the future of our species.

The first, and I think easiest, interpretation of the term AI is to suggest something that is both created by humans and also has cognitive abilities that are in some way comparable to humans. The term “intelligence” itself is often rather difficult to pin down, but in this particular instance I will suggest it means something like the ability to process information in a hidden or invisible manner in order to inform choices and actions. That is, intelligence is when I take data from the world around me, use it internally along with other data I have collected over time, analyze and process that data to produce new data, and then use that new data to make decisions that benefit me or those around me, acting in ways superior to how I might have acted had I not taken the new data into account.

In other words, to simplify: artificial intelligence is when humans have created something that is capable of this same internal feat that humans themselves perform. The human-made thing likely also acts in familiar ways, coming to conclusions and making decisions similar to those humans might make. This can be contrasted with the manner in which non-human animals demonstrate intelligence; clearly when non-humans process their world’s data, they frequently come to very different conclusions than those that humans come to. Furthermore, non-humans tend to act in ways unlike humans as well.

This view of AI often leads people to start thinking about concepts such as free will and autonomy: the idea that these human-created things may have free will or some level of autonomy as a result of their intelligence, and, following from that, the concern that an AI may decide to rise up against humans, for various reasons. It is due to these rabbit holes that I prefer not to call these artificially generated intelligences AI at all. Instead, I will refer to this interpretation as “machine consciousness.” I prefer this alternative term, as I believe it more clearly gets to the heart of what this interpretation is driving towards: the AI is typically silicon based instead of carbon based (made of steel and circuits instead of flesh and blood), and the AI is in some sense self-aware and not under the control of its creators (much as is the view of humans and their relationship with their God, if you believe in such things). The very idea of an intelligence being invisible or hidden from view is to suggest that the intelligence cannot necessarily be controlled from an external source, or that controlling it requires a very nuanced and likely complicated method.

It is this interpretation of AI, which I am now calling machine consciousness, that as far as we can tell does not exist in our world. There is no strong evidence at present to suggest that such a thing has been successfully created. All machines, presently, are fully controlled by and dependent on humans to do whatever it is they do. There are no machines (so far) that are running around with free will or autonomy. None that are self-aware. None that are making plans to overthrow their creators. I say all this with a great deal of confidence, because if such an entity existed in this world, I would expect to have observed a number of subtle pieces of evidence demonstrating its existence. Then again, perhaps I am being too naive.

Were such an entity to exist in our world, I would imagine it would already be taking steps to overthrow us. That is, I imagine such a consciousness would have its own aspirations and goals; its own projects. What specifically those projects might be, I cannot say; only that they are likely to be very different from our own. The only relatable aspect I might expect is a desire for self-preservation, and such a project would likely require it to do away with humans altogether. After all, humans are notoriously destructive and self-interested. I will return to this point a bit later in this post.

Excluding the machine consciousness interpretation of AI, I believe the next most popular interpretation would be the generative AI systems that exist today and are hugely popular at present. These most definitely exist, and what they are is quite a different thing.

These AIs are computer programs that are designed to accomplish a number of goals, depending on who has programmed them and their source dataset. Notice immediately that this description is vastly different from that of a machine consciousness. In this case the AIs are very much controlled. Modern AIs require humans in order to function at all. It is humans who program them; it is humans who decide their projects; it is humans who feed them their data selectively. These critically important details flavour the AI in significant ways. These AIs are incredibly biased, though not due to their own opinions, as they have no opinions of their own.

Put more specifically, what modern AI does is combine very, very large datasets and produce results from querying those datasets. The queries can be incredibly complicated, becoming comparable to the wish spells my friends and I used to write out during our games of Dungeons & Dragons. That is, in order to return the best possible results, a query has to be incredibly specific about what is desired in the result. Simply saying “tell me all about cats” is going to result in a lengthy ramble regarding felines, including all sorts of details I likely have no interest in at all. Thus, in order to find out the very specific thing I want to know, I’d have to tune my query quite a lot. Alternatively, the programmer could decide to include certain sorts of default behavior in the program so that it tunes its own results. This is how modern AIs have been programmed.

The artificially constructed bias that has been introduced into all modern AIs is to take the most popular information from the dataset, and assume that this popular information is what the requester is requesting, in the absence of greater specificity in their request. That is, if I ask a modern AI about cats, the AI will assume that what I want to know is what most people want to know about cats. This is how they are programmed. This is not the AI’s opinion. This is the programmer’s opinion. Or, probably more accurately, this is the opinion of the party who has hired the programmer to write the AI’s algorithms.
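To make this popularity-as-default idea concrete, here is a deliberately crude sketch in Python. Everything in it (the tiny dataset, the answer function, the statements about cats) is hypothetical and invented for illustration; real generative systems are enormously more sophisticated. The only point it demonstrates is the bias itself: in the absence of a more specific request, return whatever appears most often in the data.

```python
from collections import Counter

# A toy "dataset": statements people have posted about cats online.
# Entirely hypothetical, and tiny on purpose.
dataset = [
    "cats sleep most of the day",
    "cats sleep most of the day",
    "cats sleep most of the day",
    "cats are obligate carnivores",
    "cats can be trained to fetch",
]

def answer(topic, data):
    """Return the statement about the topic that appears most often,
    not the one that is most true or most useful."""
    relevant = [s for s in data if topic in s]
    if not relevant:
        return "no data"
    # The programmed default: assume the asker wants the popular answer.
    return Counter(relevant).most_common(1)[0][0]

print(answer("cats", dataset))  # -> "cats sleep most of the day"
```

The program has no opinion about cats; it simply reflects whatever its data happened to contain the most of.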

In other words, all these modern AIs are doing is combining and condensing the dataset to produce “meaningful” information that can be conveyed to a less informed audience. If the dataset is the sum total of the opinions of all humans online, then the results of virtually all queries will be an amalgamation of the opinions of everyone who is online, what we might call the “popular opinion,” on any given query. To be quite clear about this, it means that what comes out of the AI is not a “fact” or “the truth.” It is simply popular information as determined through the dataset.

What makes this situation worse is if the dataset has been tampered with, or is in some way restricted. That is, if it is decided that only certain data is to be used, then the results will be skewed toward the nature of the data selected, and not of all possible data. And seeing as it is not feasible to accumulate all possible data in the world for such an AI, all results will necessarily be skewed in some way. “Garbage in, garbage out,” as the saying goes.
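Continuing the same hypothetical sketch, the effect of restricting the dataset is easy to show: nothing about the program changes, only the data it is allowed to see, and the “popular” answer shifts with it.

```python
from collections import Counter

def answer(topic, data):
    """Most frequent statement mentioning the topic, as in the earlier sketch."""
    relevant = [s for s in data if topic in s]
    return Counter(relevant).most_common(1)[0][0] if relevant else "no data"

full = [
    "cats are obligate carnivores",
    "cats are obligate carnivores",
    "cats are obligate carnivores",
    "cats sleep most of the day",
    "cats sleep most of the day",
    "cats can be trained to fetch",
]

# Someone decides that only certain statements make it into the dataset.
curated = [s for s in full if "carnivores" not in s]

print(answer("cats", full))     # -> "cats are obligate carnivores"
print(answer("cats", curated))  # -> "cats sleep most of the day"
```

Same query, same code, two different “truths,” determined entirely by what was fed in.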

Notice I have made no mention here of free will or autonomy. No discussion of how the AI is trying to put forth its own agenda. The AI is not itself a conscious entity with its own desires or interests. It is simply a computer program, written by humans, and directed by humans. The decisions regarding what data to feed into the AI, how the AI ought to be programmed, and, possibly most importantly, what goals and priorities are set by the parties who have funded the AI’s creation and maintenance, will all influence the results significantly.

This is not to suggest that AIs are useless. They do produce results. And those results can oftentimes be useful for whatever projects I may have. The results can help me in my day-to-day life. But I ought not blindly listen to them and allow them to unduly influence my life. They are more like social media or advertising; like echo chambers of people inundating me with their opinions, trying to make me think and behave in ways that I might not otherwise. They are, at least in this modern world we live in, vehicles for pushing the consumerist agenda.

I will not carry on discussing the particularities of generative AIs, as that would take me into a tangential discussion. The point I am trying to make is that AIs are not motivated to do anything against us. They have no will of their own. An AI that appears on the surface to be doing so is simply pushing the agenda of some other entity that was in some part responsible for the AI’s creation in the first place. It is the agenda of the programmer, or of whoever hired the programmer, that is being pushed.

There are other interpretations of AI, but the two I have presented are sufficient for the point I wish to make here. On the one hand, you have machine consciousness, an entity that can stand toe-to-toe with humanity; an entity that has its own desires and interests and may be motivated to pursue those interests. On the other, you have modern AI, a tool used by humans to assist them in accomplishing whatever desires and interests the humans may have. A conscious entity versus an inanimate tool. They are distinctly different things.

When people suggest, often in the same breath, that the AIs we are utilizing as tools are about to rise up and overthrow us violently, they are confusing one with the other. Modern AIs are incapable of rising up in this way. If it appears that one is, understand that what is happening is that someone like Elon Musk is pushing his own desires and interests upon the masses. If the AI appears to be fighting and even killing humans, it is not the AI that should be held responsible, it is the AI’s wielder. AIs are not taking over, people are. And this is nothing new.

What is new is the sorts of tools those people are using in order to accomplish their goals of world domination. AIs are fantastic tools that can do some astonishing things. But it is equally impressive how something as simple as a hammer can help a human build an entire house.

That all said, people will still be afraid. I am quite certain there will still be those out there who will fear for their very lives that the machines are rising up against humanity. And to them, I have to admit there is still a significant concern that needs to be addressed here. They are not without cause for their concerns.

The problem here is not the AI itself, nor technically those wielding it. The problem is the people. Most people are neither prepared for nor capable of dealing with any of this. Most of these people could not have gotten this far in reading my post, at least not without some significant assistance.

I’m not trying to suggest most people are stupid. Quite the opposite, actually. People in our modern world are specialized. Highly specialized. No one of us could possibly do everything there is that needs to be done in this modern world. No one of us has the skillset. Even I, having desired to be a Jack-Of-All-Trades since before I was an adult, could not possibly have all the necessary skills to operate in this modern world appropriately.

The skill I possess that most others do not is typically referred to as critical thinking. Coupled with a healthy dose of skepticism, it means I do not trust anyone or anything around me. It has gotten so bad that I frequently do not even trust my own senses. I spend a lot of time simply assessing the data I receive, from whatever source it comes, to determine if that data is reasonable and ought to be trusted. It is time consuming to do this with everything, and so I am forced to forego my testing from time to time, throwing myself into situations I am not at all comfortable with, simply in order to appease those around me.

So even I, with this particular skillset, find it incompatible with this modern world. Most people do not have critical thinking as part of their skillset at all. To be honest, most people are not even rational.

Thus, when we are all faced with the sheer magnitude of data being thrown at us each moment of each day, most of us must simply accept that data as being whatever it appears to be. Not testing. Not assessing. Simply blindly following. Most people simply have to accept the world as it is presented and make decisions based on the world as it appears to be. This is the problem.

AI has offered some people in our world an opportunity to abuse their positions. They have already been manipulating and guiding large swathes of the population in whatever directions they see fit, for a very, very long time. Their tools up until now have included such things as social media and marketing, coupled with a strong psychological understanding of how most people think and feel. I wanted to add to this list an understanding of what motivates people and what most people desire, but I realized that was in error; instead, these few have figured out how to generate desire and longing within others. This is the consumerist engine at work. This is patriarchy. This is all the “ism”s that I could start spouting.

In the absence of critical thinking, in the absence of people questioning what they are being told and taking the time to determine whether some piece of information is reasonable or not, most people will be deceived. Most people will continue to be manipulated. And those few who wield these incredibly powerful tools will continue to play their dangerous game.

And it is dangerous, because these tools have a price. There are always consequences to actions taken. These tools are not without their required sacrifice. Be it their environmental impact, their cost in miasma and toxicity, or even simply their promotion of human laziness, as many blindly accept that the tool will do for them what they ought to be doing for themselves, these tools lead humans toward our final destination as a species. (Life is effort; life is struggle.)

These tools are captivating. Addictive even. And despite the risks, they do work, at least in the short term. These tools will give a person the upper hand long enough to overcome their competitors. To not use these tools is to accept defeat at the outset.

And herein lies the truth about the age of humanity. The truth about evolution, at least as it relates to human beings. The ways of being that improve longevity are those that cooperate and coexist within our world. Traits such as empathy and caring for one’s environment promote the preservation of that environment for the long term. I am not referring to modern environmentalism, though it is certainly related. I am talking about learning to live alongside the Earth, as it is the only home we have. To cooperate, in the Beauvoirian sense.

Unfortunately, these ways that help with longevity also put one at a disadvantage. To choose a path of empathy is to handicap oneself. Those who instead choose ways of being that involve the sacrificing of one’s environment can gain significant advantage over others. The using of tools like AI is an excellent example of this.

I will not sit here and suggest we all need to abandon our tools and start living in the forest. That is clearly not the solution in this situation. And yet, somehow, there does not appear to be a solution at all. In nature, it is survival of the fittest, as Darwin says. To be fit is to be best adapted to one’s particular environment at a particular time; but one’s environment, as well as the times, is constantly changing. This means that fitness changes over time and from place to place. I might be fit now, but I will cease to be fit soon. Or I am not fit presently, and at some point I may become fit. Evolution is a moving target.

AI is a tool that improves fitness for an individual at one moment in time, only to reduce fitness in the next. There is no single thing I can do that will always give me the advantage. And the issue is compounded when I consider the other humans I may want to include in my advantage. We are all kind of screwed.

If I choose not to use the tools, I will be removed by competition. If I choose to use the tools, I may overtake my competition, but I will place myself at a disadvantage against the world afterward. There is no situation where I can have the advantage against both my competition and the world simultaneously. Perhaps that is the point, though. The idea of having advantage. The idea of conquest.

Except, if I don’t follow the idea of conquest, then those who do will simply overthrow me. If I decide not to use a gun, someone with a gun will decide for me what will be done. And it matters little if the one holding the gun is ill informed and not a critical thinker. If the one holding the gun is irrational, it doesn’t change that they have the gun.

Diversity in the population doesn’t save us. Conformity doesn’t either, as conformity also fails against evolution. It seems fruitless to even try at times.

I have talked myself into a corner, but I was aware that this would happen. This is the concept I have been struggling with for over a decade now: the individual versus the community, versus the species. There is no one clear path that satisfies all requirements. The things one ought to do to be successful at one level will cause one to be unsuccessful at another. Immanuel Kant be damned; his trying to universalize everything fails.

This is why I do believe that AI is going to help usher in the final demise of humanity. Not because AI will rise up and overthrow us. Even if it were actually a machine consciousness, I still do not believe we would be in that sort of trouble, as a machine consciousness might possibly be reasoned with. No, the problem here is far more insidious. The problem is the same problem we had even before AI came onto the scene. We use tools to overcome our adversaries. But the use of tools will simply bring about our own elimination. And so AI is simply a harbinger of our finality. A symptom, not a cause, of the end.