For computers to become more human they will have to exhibit far more intelligence than the technologies we have in place here at the end of the first decade of this 21st century. When Deep Blue, the IBM supercomputer, defeated Garry Kasparov in 1997, its ability to analyze 200 million positions per second represented artificial intelligence specific to a single task: playing chess. But to be truly human-like, future computers must handle many kinds of tasks.
Back in the 1990s, when I was working for a large software company, we developed neural agents: bits of code that could be added to a device or a network to sense patterns in the data flow or in the operation of the equipment. As the neural agents learned what "normal" looked like, they could also alert when abnormal patterns occurred. When something abnormal happened, the neural agents would send messages to human observers through a computer display, or would engage other neural agents specifically designed to compensate for the abnormality and restore normal operations. This type of pattern intelligence is not the same as human intelligence, but it is intelligence nonetheless.
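The core idea behind such an agent can be sketched in a few lines. This is a hypothetical illustration, not the original product's code: the agent learns a baseline mean and spread from a stream of readings, then flags values that deviate sharply from that learned normal.

```python
# Minimal sketch of a pattern-sensing "agent": it learns the normal
# range of a metric from a stream and flags readings that deviate.
# Hypothetical illustration only; thresholds and names are assumptions.

class PatternAgent:
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.threshold = threshold

    def observe(self, value):
        """Update the learned baseline (Welford's online algorithm)."""
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def is_abnormal(self, value):
        """Flag values more than `threshold` std deviations from normal."""
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(value - self.mean) > self.threshold * std

agent = PatternAgent()
for reading in [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0]:
    agent.observe(reading)

normal_flag = agent.is_abnormal(10.1)   # within the learned pattern
spike_flag = agent.is_abnormal(55.0)    # far outside it
```

A real deployment would, as described above, chain such agents together so that one agent's alert triggers another agent's corrective action.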
When we think about our intelligence versus what I have described in the previous paragraph, what are the differences? Humans as well as many other animals exhibit many intelligence traits:
– the ability to reason
– the ability to acquire knowledge and retain it for later use
– the ability to solve problems
– the ability to plan
– the ability to communicate through vocalization and to understand the vocalizations of others
– the means, through mind to limb interaction, to manipulate objects.
These are key traits that define our intelligence.
Deep Blue fulfills the first trait of intelligence, the ability to reason. In part, Deep Blue also fulfills the second trait, the ability to acquire knowledge. But the knowledge Deep Blue acquired is specific to the space of possible moves within the game of chess. Deep Blue, therefore, meets only a minimal standard when one is talking about creating a computer that is human, even though it could outplay the world's top-ranked chess player.
If you think of the way we as humans gather knowledge, we do it through observation, interaction with other humans, reading, and trial and error. Sometimes we learn something in one task that we then can apply to another with very different circumstances.
As much as we can give a computer access to all of the content of the Internet from which to acquire knowledge, how do we give it the ability to apply that knowledge? The field of computational intelligence focuses on developing computers that use techniques such as fuzzy logic to solve problems. When we talked about quantum and biological computing in earlier parts of this multi-part article, we described the attributes of these types of systems and their ability to go beyond the logic of silicon-based computers. Programmers working in the field of artificial intelligence talk about algorithms that embrace techniques such as swarm intelligence, fractals, and chaos theory. Computational intelligence that approaches our way of assimilating knowledge involves the creation of programs that combine learning and adaptation.
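To make the fuzzy-logic idea concrete, here is a toy example, my own illustration rather than anything from the systems above. Instead of a crisp hot/cold threshold, a temperature belongs partially to each category, and a control decision blends the overlapping rules:

```python
# Toy fuzzy-logic controller: temperatures belong to "cold" and "hot"
# by degree (0.0 to 1.0), and the fan speed blends the two rules.
# The membership ranges are arbitrary illustrative choices.

def membership_cold(t):
    # fully cold at <= 10 degrees C, not cold at all by 25, linear between
    return max(0.0, min(1.0, (25 - t) / 15))

def membership_hot(t):
    # not hot below 15 degrees C, fully hot by 30, linear between
    return max(0.0, min(1.0, (t - 15) / 15))

def fan_speed(t):
    # Rule 1: if hot, run the fan at full speed (1.0)
    # Rule 2: if cold, keep the fan off (0.0)
    # Defuzzify by weighting each rule by its degree of truth.
    hot, cold = membership_hot(t), membership_cold(t)
    if hot + cold == 0:
        return 0.5
    return (hot * 1.0 + cold * 0.0) / (hot + cold)

speed_cold = fan_speed(10)   # clearly cold: fan off
speed_mild = fan_speed(20)   # partly both: fan at half speed
speed_hot = fan_speed(30)    # clearly hot: fan at full speed
```

The point is the graded, overlapping categories: at 20 degrees the input is partly "cold" and partly "hot" at once, which silicon-style true/false logic cannot express directly.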
How close are we today to creating human-like intelligence in our computer systems? Ray Kurzweil and David Gelernter, both noted authors and futurists, described computing technology's future and the rise of conscious, creative, volitional, and even spiritual machines in a debate held at MIT in December 2006. The event marked the 70th anniversary of a seminal paper by Alan Turing, the inventor of the Turing machine and a key figure in breaking the Nazi Enigma code at Bletchley Park in the Second World War. Turing is a foundational individual in modern computing. In 1948 he wrote "Intelligent Machinery," a paper that first described artificial intelligence in terms similar to what I have written here.
Kurzweil describes computing technology that has mastered human emotion and subjectivity; think of Star Trek's Data and his discovery of emotion. To Kurzweil, emotion defines the most intelligent aspect of being human. Subjectivity, or consciousness, gives an artificial intelligence the means to learn from experience and relate that experience to a self. For Kurzweil the technology to achieve this is just around the corner, a mere twenty years from now. Where Kurzweil sees consciousness as achievable in artificial intelligence, Gelernter does not: he argues that no software can be built to create consciousness and self-awareness. Kurzweil backs up his prediction by describing the acceleration of information technology and its exponential growth. He points to current experimentation by IBM in modeling the human cerebral cortex, and discounts Gelernter's definition of software as limited to what we see today.
An artificial intelligence would mimic our brains, which, when we break them down, are massively parallel processors featuring over 100 trillion connections, all computing simultaneously. Can we model and simulate a neuron? We are already well on our way. Can we design a machine with 100 trillion parallel processes? We have already seen in Parts 2 and 3 of this discussion the evolution of quantum and biological computing, with the potential to approach if not exceed the capacity of the human brain.
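At its simplest, the kind of neuron model used in such brain simulations can be written in a few lines. This is a minimal sketch of a standard artificial neuron, with arbitrary illustrative weights, not IBM's cortical model: incoming signals are weighted like synapses, summed, and squashed into a firing rate.

```python
import math

# A minimal artificial neuron: weighted synaptic inputs are summed
# and passed through a sigmoid to produce a firing rate in (0, 1).
# Weights and bias below are arbitrary illustrative values.

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid activation

# Three synaptic inputs feeding one neuron
firing_rate = neuron([0.5, 0.9, 0.1], [1.2, -0.6, 0.4], bias=0.1)
```

A brain-scale simulation is, conceptually, an enormous number of these simple units wired together and updated in parallel; the challenge described above is the 100 trillion connections, not any single neuron.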