Ethics: The Chicken or The Egg

The chicken-or-the-egg dilemma is the question of which came first. The suggestion is that the question cannot be answered. One comes from the other, and any attempt to find the source or origin fails. Chickens lay eggs. Eggs hatch into chickens. Each is the source of the other, so asking the question is a sort of paradox.

I was thinking about ethics recently, especially as it relates to the programming of Artificial Intelligence, or AI. Ethics, formally, tries to determine what one ought to do. That is, within all of us, we find that we have our desires, our goals, our ambitions, the things we want to do. If left unchecked, with no reason to deviate, we might expect all of us to simply do what we feel like doing all the time.

However, if you have lived in this world for any amount of time, you have likely observed that there are times when people do not do what they want to do. Often, because of some restriction, rule, or law, people end up doing things they do not want to do, or are prevented from doing the things they want to do. This is often framed as people doing what they ought to do. That is, there are things we should do instead of the things we want to do. Ought is the word often used to talk about such things.

Ethics is the branch of philosophy concerned with “oughts.” Specifically, trying to figure out what the “oughts” are. There are a number of popular theories regarding how to determine “oughts.” For example, there is utilitarianism, which suggests that when choosing actions, one ought to perform the action whose consequences will result in the greater happiness. Without delving into the hidden complexity in such a simple statement, the idea itself should be pretty clear; instead of simply doing what I want to do, I ought to instead do the thing that will make everyone (including myself) happier.

Deontology is another popular theory in ethics, often attributed to Immanuel Kant. In this understanding, what one ought to do is a function of what can be logically universalized. That is, if everyone could perform the action in question without generating some sort of logical contradiction, then the action is acceptable. A popular example is lying: if everyone lied, then no one would be able to communicate with any level of accuracy or reliability, and thus lying is a prohibited action. One ought not lie. Ever, according to Kant.

Aristotle wrote about a theory that is often referred to as virtue ethics, where one selects an ideal and tries to emulate it. If you have ever heard someone say “What would Jesus do?” this is a form of virtue ethics. Determine how the ideal would act, and then act in that way. The ideal is the prototype for how one ought to act.

There are many other theories, but these three are probably the most well known and popular. Put more simply, each theory suggests how to determine what the correct actions ought to be. If you are unsure, then use the theory to assist you in figuring it out. This is precisely how it is being applied to AI; program the AI with the theory so that the AI can act ethically.
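To make the idea concrete, here is a minimal, hypothetical sketch of what “programming the AI with the theory” might look like for utilitarianism: score each candidate action by the total happiness of its predicted consequences and pick the highest. Every name and number below is invented for illustration; a real system would need a model of consequences, which is the genuinely hard part.

```python
# Hypothetical sketch of a "utilitarian" action selector.
# Happiness scores are invented placeholders, not real predictions.

def choose_action(actions):
    """Pick the action whose predicted effects yield the most total happiness."""
    return max(actions, key=lambda a: sum(a["happiness_effects"].values()))

actions = [
    {"name": "keep the cake for myself",
     "happiness_effects": {"me": 3, "friends": -2}},   # total: +1
    {"name": "share the cake",
     "happiness_effects": {"me": 1, "friends": 4}},    # total: +5
]

best = choose_action(actions)
print(best["name"])  # "share the cake" wins: +5 beats +1
```

The sketch also hints at the hidden complexity mentioned above: everything interesting is buried in where those happiness numbers come from.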

But something has been bothering me about all of this. And it relates back to the chicken or the egg. Kant tried to justify his deontology by suggesting that it is somehow an absolute and immutable law of the universe: that an ethical theory grounded in strict logic would leave no way to dispute or argue against it, and everyone could then easily be bound by it. Unfortunately, with even a little research on the topic, you will find plenty of examples of weaknesses in this ethical theory. Situations, often hypothetical, that suggest perhaps this theory isn’t quite so indisputable.

As one example, in the case of how lying is strictly prohibited, one person asked the following: Suppose my friend or relative is being chased by an axe murderer. They come to my house and I obviously let them enter. They quickly hide in the basement. Moments later, there is a knock at my door; it is the axe murderer. I answer the door, and they ask me where my friend has gone. The question, simply, is whether I ought to tell the murderer the truth about my friend’s whereabouts.

Kant suggested that I am bound to tell the truth, as lying is prohibited. And that if I choose to lie about my friend’s whereabouts, then anything that follows is in some manner my fault and responsibility. Suppose I lie and tell the murderer I do not know where my friend is, causing the murderer to go off in search of my friend. Unbeknownst to me, my friend has actually left my house, escaping through a basement window. Moments later, the murderer finds my friend and kills them. Kant suggests that I am now at fault for my friend’s death, because I ought not to have lied.

On the flip side, if I tell the truth, the murderer (in this case) now wants to enter my home to kill my friend. I may, at this point, do my best to prevent the murderer’s entry, but then I may be putting myself in danger. They are an axe murderer after all, so perhaps now I will become the next victim. And of course, if I am killed, there is no longer anything to prevent the murderer from doing the same to my friend. Telling the truth, in this case, seems to cause even more problems than if I had lied.

This particular argument has many more twists and turns in its discussion, but I hope my point is clear. All these ethical theories, though sounding fairly straightforward initially, end up fraught with strange loopholes and weaknesses. None are perfect. And with that, there tends to be considerable disagreement regarding which theory one ought to follow.

But, again, the chicken or the egg. Are the ethical theories there to help us figure out how to act? It seems not to be the case. After all, if I can find fault with the ethical theory, the very thing trying to instruct me in what I ought to do, how am I doing this? It seems like I already know what I ought to do, and the ethical theory is a model trying to explain how I know what I already know.

Perhaps it is the case that I am somehow intuitively moral to begin with. I already know right from wrong, for some reason. The ethical theory is not there to instruct me, but to try to explain that thing I already understand.

When faced with decisions of a moral nature, I already understand how I ought to act. I don’t have to think about it (most of the time). I know I ought to lie sometimes, like when I am concealing information about a surprise birthday party from my partner. But there are also times I ought not lie, like when asked what time a particular film will be playing in the theater tonight. Lying and truth-telling can be quite complicated, and to suggest that I always or never lie is not sufficient to cover all my circumstances. A theory like deontology is simply not going to cut it.

To be clear, the reason deontology is insufficient, in this case, is not because I need to use it to decide when to lie and when not to lie. It is insufficient because it cannot explain why I know when I ought to lie and when I ought not. My behavior is the prototype here, not the theory. The theory is, in this case, trying to explain my behavior.

In fact, this is how all these weaknesses and loopholes are discovered in all these ethical theories. Because (arguably) we all already know how we ought to act, even if we are unable to put into words why we know. Through our upbringing, from our parents and teachers and others, we have somehow been taught what is right and wrong already. In the same way we are able to distinguish a cat from a dog (if you are in a part of the world where there are plenty of cats and dogs). Through repetition. Through trial and error. Through experience.

However, there are still times when I am faced with choices where I am unable to intuit the right action. There are times when I may ponder and have to think about it, because I do not really know the thing I ought to do. Even, sometimes, my parents and teachers are at a similar loss. The trial and error just has not provided me enough to answer the question. How do I decide then?

Ironically, I often end up referring back to an ethical theory. This is why utilitarianism is particularly popular where I live. If I am unsure, I think about how I can act in the way that will make the people around me happiest. Sometimes that means asking which action will get me into the least amount of trouble, but this is just a reverse formulation of the same utilitarian theory. Maximizing happiness is generally the same as minimizing suffering or misery. This is how many people around here vote for politicians.
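That equivalence between the two formulations can be checked in a toy sketch: if suffering is modeled simply as the negation of happiness, the action that maximizes one is the action that minimizes the other. The actions and scores below are invented for illustration.

```python
# Toy check: with suffering modeled as negated happiness,
# maximizing happiness and minimizing suffering select the same action.

happiness = {"tell the truth": 2, "stay silent": 5, "deflect": 3}
suffering = {action: -h for action, h in happiness.items()}

max_happiness = max(happiness, key=happiness.get)
min_suffering = min(suffering, key=suffering.get)

print(max_happiness, min_suffering)  # both are "stay silent"
```

Of course, real happiness and suffering are not so neatly symmetric; the sketch only shows why the “least trouble” framing collapses into the same theory.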

And this all brings us back around to the original issue. The chicken or the egg. Which came first? Does the ethical theory tell me how to behave? Or do I already know and the ethical theory is simply trying to explain my behavior?

With humans, the answer to this question seems less important. Much of the time, I know right from wrong and will self-legislate. I will act as I ought to act, because I know what is expected of me. And when the times come where I am unsure, I can refer to whichever ethical theory I like to provide guidance. Which means I am also free to select whichever ethical theory makes sense given my particular set of circumstances as well. Perhaps utilitarianism makes sense this time, but maybe virtue ethics might make more sense next time. As a human, I can work my way through all this. And when I do make mistakes, it will be the other humans who correct me, educate me, or perhaps even punish me, as makes sense.

But what about the AI? The reason one must program the AI with an ethical theory is because the AI is unable to intuit right from wrong. The AI does not understand what is “right” or what is “wrong” in the moral sense of right and wrong. It must be programmed with rules for how to behave and how to make moral decisions. And because of this, it will fall victim to the same strange loopholes and weaknesses we humans are concerned about.