When given the option to choose between two loves, most of us end up choosing neither! For instance, last night before I arrived at mom’s place, she asked me if I wanted fried potato (yum) or vegetable fried noodles (yum). How does the mind decide that the post-meal satisfaction derived from one choice would far exceed the satisfaction quotient of the other (yum versus yum)?
To escape this conflict, the mind chooses neither option. In this way, we end up forgoing at least one option that could have satisfied us. Sad, no? You see, human tendency is such that we are never content with the idea that, when presented with two choices, settling for one of them is enough. We keep ruminating, chewing cud like cows, on the satisfaction we might lose by not choosing the other option as well. This is because the choices are not absolute, but circumstantial.
Does this mean we humans simply lack the ability to make logical decisions based on the facts of the case? This goes completely against the construct that man is a rational being with the capacity to make informed decisions from the choices given. But as the potato-versus-noodles dilemma illustrates, when presented with two equally compelling choices, our circuits go for a toss and we remain in a state of indecision. This is not really about food; it's about college careers, choosing a lover, planning a trip, buying a house. Most of us end up compromising, making a decision and then forever thinking of the other choice we missed out on. This regret should not even be a by-product if we truly made the best decision possible. But 'best' is a qualitative state, and it can keep changing.
Perhaps, then, it is not about the difficulty of the choices themselves, but about the psychology of our minds. In effect, if we reduce this dilemma to a mathematical/linguistic equation of conditional clauses, we get:
Condition A: If you are a rational, liberal and modern human, then choosing between choice yum (potato) and choice yum (noodles) should be second nature, easy.
Condition B: If you cannot choose between choice yum (potato) and choice yum (noodles), then you must be irrational, conservative and traditional.
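Read formally, the two conditions are not even two claims. Writing them out in propositional notation (my shorthand, not the original), with P for "you are a rational, liberal, modern human" and Q for "choosing between yum and yum is easy":

```
Condition A:  P → Q
Condition B:  ¬Q → ¬P     (the contrapositive of A — logically the same statement)
```

So the trap is airtight: accept A, and your indecision at the dinner table brands you irrational by B automatically.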
This brings me to Isaac Asimov’s Three Laws of Robotics, which are designed so as not to override or contradict each other:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov also added a fourth, or Zeroth, Law to precede the others: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
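What makes these laws workable for a machine is their strict hierarchy: each law yields to the ones above it, so any conflict has a determinate answer. A minimal sketch of that priority ordering in Python (the `Action` type, its fields, and the `permitted` function are my own hypothetical illustration, not anything from Asimov):

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action, reduced to the questions the laws ask."""
    harms_human: bool = False   # would carrying this out injure a human?
    harms_self: bool = False    # would it endanger the robot itself?

def permitted(action: Action, ordered_by_human: bool) -> bool:
    # First Law is absolute: nothing that harms a human is ever permitted.
    if action.harms_human:
        return False
    # Second Law: a First-Law-safe order must be obeyed, even if obeying
    # endangers the robot -- obedience outranks self-preservation.
    if ordered_by_human:
        return True
    # Third Law: with no higher law engaged, self-preservation decides.
    return not action.harms_self

# An order to walk into a furnace: no human is harmed, so the robot complies.
print(permitted(Action(harms_self=True), ordered_by_human=True))   # True
# The same act, unordered, is forbidden by the Third Law.
print(permitted(Action(harms_self=True), ordered_by_human=False))  # False
```

Notice that for the robot there is no rumination: the ordering collapses every "yum versus yum" into a single determinate output.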
“In his short story “Evidence” Asimov lets his recurring character Dr. Susan Calvin expound a moral basis behind the Three Laws. Calvin points out that human beings are typically expected to refrain from harming other human beings (except in times of extreme duress like war, or to save a greater number) and this is equivalent to a robot’s First Law. Likewise, according to Calvin, society expects individuals to obey instructions from recognized authorities such as doctors, teachers and so forth, which equals the Second Law of Robotics. Finally, humans are typically expected to avoid harming themselves, which is the Third Law for a robot.” (Source: Wikipedia). When I read Asimov’s Robot series, I get a sense that the author is very subtly letting us know of the impossibility of making such moral, aka logical, choices unless you are a machine coded with these programmatic decisions.
“The plot of “Evidence” revolves around the question of telling a human being apart from a robot constructed to appear human – Calvin reasons that if such an individual obeys the Three Laws he may be a robot or simply “a very good man”. Another character then asks Calvin if robots are very different from human beings after all. She replies, “Worlds different. Robots are essentially decent.” (Source: Wikipedia). While it appears that Dr. Calvin is deriding the essential flaws of a human, it is also ironic that the ‘good behavior’ expected of a robot is coded in by this flawed human. It is also at this point that questions about the ethical treatment of intelligent objects arise. A thing that walks, talks, thinks and makes decisions is as good as alive, albeit in a bio-mechanical structure. Why, then, isn’t its own life worth preserving against a human life? Why do we teach a robot that a human life is above its own, when we live by the rules of self-preservation?
How do we end up making either of these choices and rationalizing the consequences? It seems there is nothing rational about morals and nothing moral about logic, and logic by itself cannot be the foundation for decision making. Perhaps we are all random bots of chance and happenstance.