
The Machine Question – Our Perspective on Sentience, the Singularity and Humanity

I recently finished reading David J. Gunkel’s book of the same name, in which the author challenges the reader to grapple with the philosophical questions raised by our perspective on artificial intelligence (AI) and machines in the 21st century. The HAL 9000 computer of Arthur C. Clarke’s novel “2001: A Space Odyssey” and Commander Data of “Star Trek: The Next Generation” serve as frequent reference points for the discussion of moral agency and patiency, two concepts treated thoroughly throughout Gunkel’s text and tied to the definition of what constitutes a “person.”

Moral agency refers to the capacity of an individual to differentiate between right and wrong and then to act with full knowledge of the implications.

Moral patiency refers to the capacity to be acted upon by a moral agent, to suffer or benefit from that agent’s actions.

Humans exhibit both agency and patiency. In his book, Gunkel looks at the application of these concepts to animals and AI. Can an animal be defined as a “person” if it displays agency and patiency? Can a machine? Animal rights advocates believe that several species, as we understand them today, could easily qualify as “persons” based on these criteria.

Humanity has undergone an awakening in the last fifty years. René Descartes may have thought animals were automata, but where we once saw ourselves as unique and separate from other animals, today we are very much aware of our evolutionary roots and cognizant that many animals display high levels of awareness, emotion, moral patiency and agency. Just recently I read a press report describing a scientific study indicating that even lobsters and other crustaceans feel pain when we boil them in a pot. Pain and patiency go hand in hand.

So when HAL uncovers the human plot to shut down its sentient functions, the machine responds to the threat by terminating the lives of several of the crew before being disabled by the lone human survivor, David Bowman. In its actions HAL displays both moral agency and patiency. It is particularly poignant when HAL states, “I’m afraid,” as Bowman shuts down its higher functions.

Gunkel also draws on another science fiction source when he discusses Isaac Asimov’s three laws of robotics. Asimov used the laws as a literary convenience for spinning his many robot stories, and Gunkel describes them as license for good storytelling rather than substantive rules for governing AI.

Ray Kurzweil, the noted futurist, envisions a point in time when machine intelligence will surpass human intelligence. He sees the integration of humans and machines as inevitable and calls this the singularity. Kurzweil believes computers will reach a point in 2029 when machines will be able to simulate the human brain, and that by 2045 AI machines and humans will be fully integrated.

But there are others who argue that the singularity will never happen, and that we humans will always maintain a master-slave relationship with AI, limiting machine intelligence so that it can never equal a HAL computer or a Commander Data.

Gunkel’s book wrestles with all of these issues but arrives at no firm conclusions. It seems that advancements in technology and machine intelligence are leading us to ask complex philosophical questions never anticipated by Plato, Aristotle or Descartes. Maybe a future machine will provide the answers.

 

[Cover art: The Machine Question]

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.

7 COMMENTS

  1. “It seems that advancements in technology and machine intelligence are leading us to ask complex philosophical questions never anticipated by Plato, Aristotle or Descartes. Maybe a future machine will provide the answers.”

    Curiously tucked away in the bibliography of Kurzweil’s “The Age of Spiritual Machines” is his strong synthetic proposition that the Church–Turing Thesis logically amounts to this: “some hypothetical algorithm that runs on a Turing computer can in principle precisely model any conceivable physical process.” That is to say, no physical processes can exist that are non-computable. If we can define a process, we can model it on a Turing computer. See: http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
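
    To make the thesis concrete, here is a minimal Python sketch of my own (purely illustrative, not anything from Kurzweil’s text): a Turing machine is just a table of transition rules, and a few lines of code suffice to run one, in this case a machine that increments a binary number.

```python
# Minimal Turing machine simulator (an illustrative sketch of the Church-Turing
# idea that rule-governed processes can be run on a universal machine).

def run_turing_machine(tape, rules, state="start", head=0, blank="_"):
    """Apply transition rules until the machine reaches the 'halt' state."""
    tape = list(tape)
    while state != "halt":
        # Grow the tape on demand so the head never falls off either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):
            tape.append(blank)
        # Each rule maps (state, symbol read) -> (next state, symbol, move).
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Rules for binary increment: scan right to the end, then carry leftward.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),  # end of digits: begin the carry
    ("carry", "1"): ("carry", "0", "L"),  # 1 + carry = 0, carry continues
    ("carry", "0"): ("halt", "1", "R"),   # 0 + carry = 1, done
    ("carry", "_"): ("halt", "1", "R"),   # overflow: write a new leading 1
}

print(run_turing_machine("1011", rules))  # 1011 (11) + 1 -> 1100 (12)
```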

    Kurzweil is not so naïve as to suppose the best way to produce AI is by modeling the human brain on a Turing computer. But the existence of the human brain is not hypothetical; it is actual. So Kurzweil contends we should plug away at duplicating brain processes with computers. Obviously, computers have already duplicated many brain processes. As time passes more and more brain processes will be modeled, and eventually, when no unduplicated brain functions remain, Kurzweil wins the argument by default.

    Those who oppose Kurzweil’s vision of AI exceeding human intelligence and personhood have only two logical grounds of opposition. They must argue that cause-and-effect physical processes are not responsible for human intelligence, or they must argue that the Church–Turing Thesis is false.
    “Despite the fact that it has not been formally proven, the Church–Turing thesis now has near-universal acceptance.” (Wikipedia) That seems to say trying to assail the Church–Turing Thesis is a stupid waste of time.

    So that leaves the naysayers with only the supernatural proposition that non-physical processes, as yet undemonstrated, are occurring in the human brain. If one supposes science is proven countless times per millisecond, and the supernatural seems never to be proven, the emergence of super-strong electronic AI seems inevitable. When will it happen? Kurzweil’s timetable seems as good as any.

  2. Thanks for the read and the insightful comments…much appreciated. Let me join the discussion by pursuing a little further what Len described in his initial post.

    What is interesting in the debate about machine moral agency and machine moral patiency is the fact that most of the work has focused on the former. That is, researchers have operated under the assumption that moral agency is (and should be) both the point of departure and the goal. This is evident in the research of Michael and Susan Anderson, the progenitors of “Machine Ethics,” the “Moral Machines” of Colin Allen and Wendell Wallach, and the RoboEthics of Patrick Lin, etc., all of whom ask a similar kind of question: “How can we ensure proper (ethically informed) behavior by machines?”

    And by “machines,” we need to cast a rather wide net so as to include not only the hypothetical AIs of science fiction or the singularity described by Kurzweil but also the algorithms that now run the stock market, the embodied robots currently encountered in the home, in the office, and on the battlefield, and the software bots of all types that now populate the spaces in which we work and play. No matter how it is construed, developed, and pursued, this interest in the question of agency follows a long and well-established tradition, insofar as most moral theories from Aristotle to Kant and beyond follow this precedent.

    Animal ethics, on the contrary, is a game changer, because it focuses attention not on moral agency but on moral patiency (not the initiator of moral action but the receiver of the act). The question here is not “How can we ensure proper machine behavior toward us?” but “What might be our responsibilities to these other kinds of entities?” In animal rights philosophy the operative factor is suffering. That is, we would need to take into account the suffering of another, whether that other be human or animal. The question before us, what I have called “the machine question,” is whether we are prepared to do the same for an artificial entity. No matter how we answer this question, the results will have a profound effect on the current state and future possibilities of ethics.

    • “Animal ethics, on the contrary, is a game changer, because it focuses attention not on moral agency but on moral patiency … The question before us, what I have called ‘the machine question,’ is whether we are prepared to do the same for an artificial entity. No matter how we answer this question, the results will have a profound effect on the current state and future possibilities of ethics.”

      The question of ethics between persons who communicate in “natural” language, as opposed to computer language, is something qualitatively different from ethics between persons who do not communicate in natural language. If a black widow spider invades my home, my first impulse would be to exterminate it. But suppose, just as I am about to crush it with a shoe heel, it were to astonish me in very faint English and say, “Please don’t crush me; I want to live peacefully in your house and catch insect pests. I’ll try hard to stay out of sight. While I understand we are just too different in form and function to ever be close friends, I promise I will never hurt you. That is because if I harm you, I will also harm my own poisonous black widow spider self. To harm you would be to act against my own interests.”
      That black widow spider is safe from my harm on a basis of reciprocal self-interest. I’m going to say to the spider, “Well, Mrs. Spider, because you have shown me that you think about our comparative interests, I must accord you respect as a moral being, and I’m sympathetic that you are bound by limitations of natural design to speak English only at the faintest volume. As long as we can converse in natural language, we have good chances of being close friends. In a certain sense we will go forth in adventure together and inflict a new universal meme upon the mind of humanity. I will provide you with more insects than you can eat. Moral being that you are, I’m sure you can follow the humorous logic of the ‘Man walks into a bar and takes a black widow spider out of his pocket. Bartender asks, what the hell are you doing with that black widow spider?’… joke.” That seems to parallel the ethical relationship between humans and electronic AI persons.

      But then there are all sorts of shades of gray. By personal preference, I live alone on a small ranch. Sometimes stray and feral animals come around looking for a handout. I don’t want company, certainly not from creatures that don’t speak English. If a hominid person comes around and says, “I’ve read a lot of books, and I would like to discourse with you about the meaning of what I have read,” I’m likely to say, “Let’s go to one of my favorite restaurants, where you will pay for our meals and enjoy my conversation.” If the person refuses that proposition, I will probably say, “Go away and leave me alone.”
      But several years ago an emaciated stray dog came to my door whining and scratching. I went outside and told the dog that I don’t have a dog because I don’t want one, especially one that doesn’t speak English. The dog at the door was seriously trying for sustained eye contact. I told the dog to go whine at some other door. Well, the dog stayed at my door whining and scratching for three days. I didn’t want the dog to think my house was a place where it could go to eat. Concerned that the pathetic hungry dog might die, I softened my heart, put some beef stew into a red plastic bowl, and walked 800 feet to the back of my property with the dog following. I set the bowl on the ground and walked back to the house without even glancing at the dog. Ten minutes later, the whining and scratching at my door started up again.
      I opened the door and the dog was staring up at me, grasping the empty red bowl in its mouth. The genius dog’s luck wasn’t good, and neither was mine. I had an engagement that required me to leave for a day. The dog passed the bowl to me, which I refilled and carried to the back of the property to repeat the previous feeding. I left for the engagement, and when I returned the following day the bowl was in front of my door, but no dog. I had decided I would let the dog stay with me, but I never saw the dog again.
      I expect human/AI ethics will evolve simply according to mutually appreciated reciprocal self-interests. The AI person that cannot evaluate self-interest isn’t ethically competent, and it’s not entitled to much more ethical respect than a black widow spider that doesn’t speak.

  3. I think one of the most important steps in AI is recognising the difference between simulating something and actually doing it. Simulating a brain process is not the same as thinking, nor can consciousness be achieved by simulating it. The correct approach is emulation: actually mimicking the same or equivalent processes. I agree with Moravec’s theory from the early 90s that consciousness needs an analog computing approach, experiential in nature, and that is quite different from a Turing machine, which is digital and algorithmic.

  4. Why is an analogue computer a prerequisite for AI? A digital computer can closely model an analogue process to any required degree of accuracy, and why shouldn’t a digital process be just as relevant to intelligence?
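
    As a minimal sketch of that claim (my own illustration, not from the original comment): a digital machine can numerically integrate the defining equation of a simple analog process, here the exponential decay dV/dt = -V/tau of a leaky capacitor, and the approximation error shrinks as the time step shrinks.

```python
import math

def euler_decay(v0, tau, t_end, dt):
    """Integrate dV/dt = -V/tau with forward-Euler steps of size dt."""
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (-v / tau)
    return v

v0, tau, t_end = 1.0, 1.0, 1.0
exact = v0 * math.exp(-t_end / tau)  # the exact "analog" solution
for dt in (0.1, 0.01, 0.001):
    approx = euler_decay(v0, tau, t_end, dt)
    print(f"dt={dt:<6} error={abs(approx - exact):.6f}")
# The error falls roughly in proportion to dt: any required degree of
# accuracy is reachable by taking smaller steps.
```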

    I am sceptical about Kurzweil’s idea that simply trying to reproduce the brain is an optimum approach to creating AI. It suggests that the process is not inherently understandable, but merely copyable. Why can we not imagine an AI built from digital algorithms that may be orders of magnitude more efficient in their design than what nature has managed to evolve?

  5. Political questions should also be raised, maybe with the help of science fiction. See for example Yannick Rumpala, Artificial intelligences and political organization: an exploration based on the science fiction work of Iain M. Banks, Technology in Society, Volume 34, Issue 1, 2012.
