“A good metaphor is something even the police should keep an eye on.” – G.C. Lichtenberg
Although the brain-computer metaphor has served cognitive psychology well, research in cognitive neuroscience has revealed many important differences between brains and computers. Appreciating these differences may be crucial to understanding the mechanisms of neural information processing, and ultimately to creating artificial intelligence. Below, I review the most important of these differences (and the consequences for cognitive psychology of failing to recognize them); similar ground is covered in this excellent (though lengthy) lecture.
Difference # 1: Brains are analogue; computers are digital
It’s easy to think that neurons are essentially binary, given that they fire an action potential if they reach a certain threshold, and otherwise do not fire. This superficial similarity to digital “1’s and 0’s” belies a wide variety of continuous and non-linear processes that directly influence neuronal processing.
For example, one of the primary mechanisms of information transmission appears to be the rate at which neurons fire – an essentially continuous variable. Similarly, networks of neurons can fire in relative synchrony or in relative disarray; this coherence affects the strength of the signals received by downstream neurons. Finally, inside each and every neuron is a leaky integrator circuit, composed of a variety of ion channels and continuously fluctuating membrane potentials.
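To make the contrast concrete, here is a minimal sketch in Python of a leaky integrate-and-fire neuron (the parameter values are illustrative, not measured constants): the membrane potential varies continuously, the threshold crossing is the only binary-looking event, and the information-bearing firing rate changes smoothly with input strength.

```python
# Minimal leaky integrate-and-fire sketch: a continuous, "analogue" membrane
# potential sits behind the apparently binary fire / don't-fire event, and the
# firing *rate* varies smoothly with input strength. Parameters are illustrative.

def simulate_lif(input_current, steps=1000, dt=1.0,
                 tau=20.0, threshold=1.0, v_reset=0.0):
    """Return the number of spikes produced over `steps` time steps."""
    v = 0.0          # membrane potential: a continuously varying quantity
    spikes = 0
    for _ in range(steps):
        # Leaky integration: the potential decays toward rest while input pushes it up.
        v += dt * (-v / tau + input_current)
        if v >= threshold:   # the only "digital-looking" event
            spikes += 1
            v = v_reset      # reset after the spike
    return spikes

# Sub-threshold drive produces no spikes at all; above threshold, the rate
# grows smoothly with the input -- a continuous code, not a binary one.
for current in (0.04, 0.08, 0.16, 0.32):
    print(f"input {current:.2f} -> {simulate_lif(current)} spikes")
```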
Failure to recognize these important subtleties may have contributed to Minsky & Papert’s infamous mischaracterization of perceptrons, neural networks without an intermediate layer between input and output. In linear networks, any function computed by a 3-layer network can also be computed by a suitably rearranged 2-layer network. In other words, combinations of multiple linear functions can be modeled precisely by just a single linear function. Since their simple 2-layer networks could not solve many important problems, Minsky & Papert reasoned that larger networks also could not. In contrast, the computations performed by more realistic (i.e., nonlinear) networks are highly dependent on the number of layers – thus, “perceptrons” grossly underestimate the computational power of neural networks.
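The linear-network point is easy to check directly. The snippet below (a sketch using numpy, with hand-picked weights for illustration) shows that two stacked linear layers collapse into a single linear layer, whereas inserting a simple nonlinear step between layers lets a tiny two-layer network compute XOR, a function no single linear layer can compute.

```python
import numpy as np

x = np.array([0.7, -1.2])
W1 = np.random.randn(3, 2)   # first linear layer
W2 = np.random.randn(1, 3)   # second linear layer

# Two stacked linear layers...
two_layer = W2 @ (W1 @ x)
# ...are exactly equivalent to one linear layer with the combined weight matrix.
one_layer = (W2 @ W1) @ x
assert np.allclose(two_layer, one_layer)

# With a nonlinearity between layers, two layers suffice for XOR.
step = lambda z: (z > 0).astype(float)
W_hidden = np.array([[1.0, 1.0], [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])   # hidden units: "at least one input on", "both on"
w_out = np.array([1.0, -2.0])       # output: "at least one, but not both"

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(W_hidden @ np.array([a, b]) + b_hidden)
    print((a, b), int(step(w_out @ h - 0.5)))   # prints the XOR truth table
```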
Difference # 2: The brain uses content-addressable memory
In computers, information in memory is accessed by polling its precise memory address. This is known as byte-addressable memory. In contrast, the brain uses content-addressable memory, such that information can be accessed in memory through “spreading activation” from closely related concepts. For example, thinking of the word “fox” may automatically spread activation to memories related to other clever animals, fox-hunting horseback riders, or attractive members of the opposite sex.
The end result is that your brain has a kind of “built-in Google,” in which just a few cues (key words) are enough to cause a full memory to be retrieved. Of course, similar things can be done in computers, mostly by building massive indices of stored data, which then also need to be stored and searched through for the relevant information (incidentally, this is pretty much what Google does, with a few twists).
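A toy contrast (with hypothetical items and features, purely for illustration) makes the difference concrete: address-style access needs the exact key, while content-addressable access retrieves whichever stored memory best overlaps a handful of cues.

```python
memories = {
    "fox":    {"animal", "clever", "red", "bushy tail"},
    "crow":   {"animal", "clever", "black", "bird"},
    "hammer": {"tool", "heavy", "metal"},
}

# Address-style access: you must name the exact key (location).
print(memories["fox"])

# Content-addressable access: a few overlapping cues retrieve the
# best-matching memory, even though the item itself is never named.
def recall(cues):
    return max(memories, key=lambda item: len(memories[item] & set(cues)))

print(recall(["clever", "red"]))    # -> "fox"
print(recall(["heavy", "metal"]))   # -> "hammer"
```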
Although this may seem like a rather minor difference between computers and brains, it has profound effects on neural computation. For example, a lasting debate in cognitive psychology concerned whether information is lost from memory simply because of decay or because of interference from other information. In retrospect, this debate was partially based on the false assumption that these two possibilities are dissociable, as they can be in computers. Many are now realizing that this debate represents a false dichotomy.
Difference # 3: The brain is a massively parallel machine; computers are modular and serial
An unfortunate legacy of the brain-computer metaphor is the tendency for cognitive psychologists to seek out modularity in the brain. For example, the idea that computers require memory has led some to search for the “memory area,” when in fact these distinctions are far messier. One consequence of this over-simplification is that we are only now learning that “memory” regions (such as the hippocampus) are also important for imagination, the representation of novel goals, spatial navigation, and other diverse functions.
Similarly, one could imagine there being a “language module” in the brain, as there might be in computers with natural language processing programs. Cognitive psychologists even claimed to have found this module, based on patients with damage to a region of the brain known as Broca’s area. More recent evidence has shown that language too is computed by widely distributed and domain-general neural circuits, and Broca’s area may also be involved in other computations (see here for more on this).
Difference # 4: Processing speed is not fixed in the brain; there is no system clock
The speed of neural information processing is subject to a variety of constraints, including the time for electrochemical signals to traverse axons and dendrites, axonal myelination, the diffusion time of neurotransmitters across the synaptic cleft, differences in synaptic efficacy, the coherence of neural firing, the current availability of neurotransmitters, and the prior history of neuronal firing. Although there are individual differences in something psychometricians call “processing speed,” this does not reflect a monolithic or unitary construct, and certainly nothing as concrete as the speed of a microprocessor. Instead, psychometric “processing speed” probably indexes a heterogeneous combination of all the speed constraints mentioned above.
Similarly, there does not appear to be any central clock in the brain, and there is debate as to how clock-like the brain’s time-keeping devices actually are. To use just one example, the cerebellum is often thought to calculate information involving precise timing, as required for delicate motor movements; however, recent evidence suggests that time-keeping in the brain bears more similarity to ripples on a pond than to a standard digital clock.
Difference # 5: Short-term memory is not like RAM
Although the apparent similarities between RAM and short-term or “working” memory emboldened many early cognitive psychologists, a closer examination reveals strikingly important differences. Although RAM and short-term memory both seem to require power (sustained neuronal firing in the case of short-term memory, and electricity in the case of RAM), short-term memory seems to hold only “pointers” to long-term memory, whereas RAM holds data that is isomorphic to the data held on the hard disk. (See here for more about “attentional pointers” in short-term memory.)
Unlike RAM, the capacity limit of short-term memory is not fixed; the capacity of short-term memory seems to fluctuate with differences in “processing speed” (see Difference #4) as well as with expertise and familiarity.
Difference # 6: No hardware/software distinction can be made with respect to the brain or mind
For years it was tempting to imagine that the brain was the hardware on which a “mind program” or “mind software” was executing. This gave rise to a variety of abstract, program-like models of cognition, in which the details of how the brain actually executed those programs were considered irrelevant, in the same way that a Java program can accomplish the same function as a C++ program.
Unfortunately, this appealing hardware/software distinction obscures an important fact: the mind emerges directly from the brain, and changes in the mind are always accompanied by changes in the brain. Any abstract information processing account of cognition will always need to specify how neuronal architecture can implement those processes – otherwise, cognitive modeling is grossly underconstrained. Some blame this misunderstanding for the infamous failure of “symbolic AI.”
Difference # 7: Synapses are far more complex than electrical logic gates
Another pernicious feature of the brain-computer metaphor is that it seems to suggest that brains might also operate on the basis of electrical signals (action potentials) traveling through individual logic gates. Unfortunately, this is only half true. The signals which are propagated along axons are actually electrochemical in nature, meaning that they travel much more slowly than electrical signals in a computer, and that they can be modulated in myriad ways. For example, signal transmission is dependent not only on the putative “logic gates” of synaptic architecture but also on the presence of a variety of chemicals in the synaptic cleft, the relative distance between synapse and dendrites, and many other factors. This adds to the complexity of the processing taking place at each synapse – and it is therefore profoundly wrong to think that neurons function merely as transistors.
Difference # 8: Unlike computers, processing and memory are performed by the same components in the brain
Computers process information from memory using CPUs, and then write the results of that processing back to memory. No such distinction exists in the brain. As neurons process information they are also modifying their synapses – which are themselves the substrate of memory. As a result, retrieval from memory always slightly alters those memories (usually making them stronger, but sometimes making them less accurate – see here for more on this).
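A rough Hopfield-style sketch (illustrative only, not a model of any particular brain region) captures the flavor: the same weight matrix both performs the retrieval computation and is the stored memory, and each retrieval applies a small Hebbian update that alters the stored trace.

```python
import numpy as np

rng = np.random.default_rng(0)
pattern = np.where(rng.standard_normal(50) > 0, 1.0, -1.0)  # a stored memory
W = np.outer(pattern, pattern) / 50.0   # the synapses ARE the storage
np.fill_diagonal(W, 0.0)

def recall(cue, W, learning_rate=0.05, steps=5):
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)                 # processing uses the weights...
    W += learning_rate * np.outer(state, state)    # ...and retrieval modifies them
    np.fill_diagonal(W, 0.0)
    return state, W

noisy_cue = pattern.copy()
noisy_cue[:10] *= -1                               # a degraded retrieval cue
recalled, W = recall(noisy_cue, W)
print("fraction of the pattern recovered:", np.mean(recalled == pattern))
```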
Difference # 9: The brain is a self-organizing system
This point follows naturally from the previous point – experience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit – something known as “trauma-induced plasticity” kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism), and others that can result in profound cognitive dysfunction (as is unfortunately far more typical in traumatic brain injury and developmental disorders).
One consequence of failing to recognize this difference has been in the field of neuropsychology, where the cognitive performance of brain-damaged patients is examined to determine the computational function of the damaged region. Unfortunately, because of the poorly-understood nature of trauma-induced plasticity, the logic cannot be so straightforward. Similar problems underlie work on developmental disorders and the emerging field of “cognitive genetics”, in which the consequences of neural self-organization are frequently neglected.
Difference # 10: Brains have bodies
This is not as trivial as it might seem: it turns out that the brain takes surprising advantage of the fact that it has a body at its disposal. For example, despite your intuitive feeling that you could close your eyes and know the locations of objects around you, a series of experiments in the field of change blindness has shown that our visual memories are actually quite sparse. In this case, the brain is “offloading” its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice? A surprising set of experiments by Jeremy Wolfe has shown that even after being asked hundreds of times which simple geometrical shapes are displayed on a computer screen, human subjects continue to answer those questions by gaze rather than rote memory. A wide variety of evidence from other domains suggests that we are only beginning to understand the importance of embodiment in information processing.
Bonus Difference: The brain is much, much bigger than any [current] computer
Accurate biological models of the brain would have to include some 225,000,000,000,000,000 (225 million billion) interactions between cell types, neurotransmitters, neuromodulators, axonal branches and dendritic spines, and that doesn’t include the influences of dendritic geometry, or the approximately 1 trillion glial cells which may or may not be important for neural information processing. Because the brain is nonlinear, and because it is so much larger than all current computers, it seems likely that it functions in a completely different fashion. (See here for more on this.) The brain-computer metaphor obscures this important, though perhaps obvious, difference in raw computational power.
Some of today's top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that are likely to arise as a result of machines with motives. Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us--or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper. In fact, Dr. Hawking told the BBC, "The development of full artificial intelligence could spell the end of the human race."
Indeed, there is little doubt that future A.I. will be capable of doing significant damage. For example, it is conceivable that robots could be programmed to function as tremendously dangerous autonomous weapons unlike any seen before. Additionally, it is easy to imagine an unconstrained software application that spreads throughout the Internet, severely mucking up our most efficient and relied upon medium for global exchange.
But these scenarios are categorically different from ones in which machines decide to turn on us, defeat us, make us their slaves, or exterminate us. In this regard, we are unquestionably safe. On a sadder note, we are just as unlikely to someday have robots that decide to befriend us or show us love without being specifically prompted by instructions to do so.
This is because such intentional behavior from an A.I. would undoubtedly require a mind, as intentionality can only arise when something possesses its own beliefs, desires, and motivations. The type of A.I. that includes these features is known amongst the scientific community as "Strong Artificial Intelligence". Strong A.I., by definition, should possess the full range of human cognitive abilities. This includes self-awareness, sentience, and consciousness, as these are all features of human cognition.
On the other hand, "Weak Artificial Intelligence" refers to non-sentient A.I. The Weak A.I. Hypothesis states that our robots--which run on digital computer programs--can have no conscious states, no mind, no subjective awareness, and no agency. Such A.I. cannot experience the world qualitatively, and although they may exhibit seemingly intelligent behavior, it is forever limited by the lack of a mind.
A failure to recognize the importance of this strong/weak distinction could be contributing to Hawking and Musk's existential worries, both of whom believe that we are already well on a path toward developing Strong A.I. (a.k.a. Artificial General Intelligence). To them it is not a matter of "if", but "when".
But the fact of the matter is that all current A.I. is fundamentally Weak A.I., and this is reflected by today's computers' total absence of any intentional behavior whatsoever. Although there are some very complex and relatively convincing robots out there that appear to be alive, upon closer examination they all reveal themselves to be as motiveless as the common pocket calculator.
This is because brains and computers work very differently. Both compute, but only one understands--and there are some very compelling reasons to believe that this is not going to change. It appears that there is a more technical obstacle that stands in the way of Strong A.I. ever becoming a reality.
Turing Machines Aren't Thinking Machines
All digital computers are binary systems. This means that they store and process information exclusively in terms of two states, which are represented by different symbols--in this case 1s and 0s. It is an interesting fact of nature that binary digits can be used to represent most things: numbers, letters, colors, shapes, images, and even audio, with near-perfect accuracy.
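A few lines of Python illustrate the point: the same two symbols encode a number, a letter, and a color, simply as different bit patterns.

```python
number = 42
letter = "A"
color  = (255, 160, 0)   # an orange, as an RGB triple

print(format(number, "08b"))                       # 00101010
print(format(ord(letter), "08b"))                  # 01000001 (ASCII 'A')
print("".join(format(c, "08b") for c in color))    # 24 bits encoding one pixel
```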
This two-symbol system is the foundational principle that all of digital computing is based upon. Everything a computer does involves manipulating two symbols in some way. As such, they can be thought of as a practical type of Turing machine--an abstract, hypothetical machine that computes by manipulating symbols.
A Turing machine's operations are said to be "syntactical", meaning they only recognize symbols and not the meaning of those symbols--i.e., their semantics. Even the word "recognize" is misleading because it implies a subjective experience, so perhaps it is better to simply say that computers are sensitive to symbols, whereas the brain is capable of semantic understanding.
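A minimal sketch of such a machine (the transition table below is invented purely for illustration) makes the point vivid: the machine rewrites symbols by matching them against a table, and nothing in it consults, or could consult, what those symbols mean.

```python
# A tiny Turing-style machine that flips every 1 to 0 and every 0 to 1.
# It is pure syntax: the table matches symbol shapes and rewrites them,
# with no notion of what (if anything) the symbols stand for.
tape = list("1011001")
transitions = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "_"): ("_",  0, "halt"),   # blank cell: stop
}

state, head = "scan", 0
while state != "halt":
    symbol = tape[head] if head < len(tape) else "_"
    write, move, state = transitions[(state, symbol)]
    if head < len(tape):
        tape[head] = write
    head += move

print("".join(tape))   # 0100110 -- symbols rewritten, no meaning consulted
```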
It does not matter how fast the computer is, how much memory it has, or how complex and high-level the programming language. The Jeopardy- and chess-playing champions Watson and Deep Blue fundamentally work the same way as your microwave. Put simply, a strict symbol-processing machine can never be a symbol-understanding machine. The influential philosopher John Searle has cleverly depicted this fact by analogy in his famous and highly controversial "Chinese Room Argument", which has been convincing minds that "syntax is not sufficient for semantics" since it was published in 1980. And although some esoteric rebuttals have been put forth (the most common being the "Systems Reply"), none successfully bridge the gap between syntax and semantics. But even if one is not fully convinced by the Chinese Room Argument alone, it does not change the fact that Turing machines are symbol-manipulating machines and not thinking machines, a position taken by the great physicist Richard Feynman over a decade earlier.
Feynman described the computer as "A glorified, high-class, very fast but stupid filing system," managed by an infinitely stupid file clerk (the central processing unit) who blindly follows instructions (the software program). Here the clerk has no concept of anything--not even single letters or numbers. In a famous lecture on computer heuristics, Feynman expressed his grave doubts regarding the possibility of truly intelligent machines, stating that, "Nobody knows what we do or how to define a series of steps which correspond to something abstract like thinking."
These points present very compelling reasons to believe that we may never achieve Strong A.I., i.e., truly intelligent artificial agents. Perhaps even the most accurate of brain simulations will not yield minds, nor will software programs produce consciousness. It just might not be in the cards for a strict binary processor. There is nothing about processing symbols or computation that generates subjective experience or psychological phenomena like qualitative sensations.
Upon hearing this, one might be inclined to ask, "If a computer can't be conscious, then how can a brain?" After all, it is a purely physical object that works according to physical law. It even uses electrical activity to process information, just like a computer. Yet somehow we experience the world subjectively--from a first person perspective where inner, qualitative and ineffable sensations occur that are only accessible to us. Take for example the way it feels when you see a pretty girl, drink a beer, step on a nail, or hear a moody orchestra.
The truth is, scientists are still trying to figure all this out. How physical phenomena, like biochemical and electrical processes, create sensation and unified experience is known as the "Hard Problem of Consciousness", and it is widely recognized by neuroscientists and philosophers. Even neuroscientist and popular author Sam Harris--who shares Musk's robot-rebellion concerns--acknowledges the hard problem when stating that whether a machine could be conscious is "an open question". Unfortunately he doesn't seem to fully realize that for machines to pose an existential threat arising from their own self-interests, consciousness is required.
Yet although the problem of consciousness is admittedly hard, there is no reason to believe that it is not solvable by science. So what kind of progress have we made so far?
Consciousness Is A Biological Phenomenon
Much like the components of a computer, neurons communicate with one another by exchanging electrical signals in a binary fashion. Either a neuron fires or it doesn't, and this is how neural computations are carried out. But unlike digital computers, brains contain a host of analogue cellular and molecular processes, biochemical reactions, electrostatic forces, globally synchronized neuron firing at specific frequencies, and unique structural and functional connections with countless feedback loops.
Even if a computer could accurately create a digital representation of all these features, which in itself involves many serious obstacles, a simulation of a brain is still not a physical brain. There is a fundamental difference between the simulation of a physical process and the physical process itself. This may seem like a moot point to many machine learning researchers, but when considered at length it appears anything but trivial.
Simulation Does Not Equal Duplication
The Weak A.I. hypothesis says that computers can only simulate the brain, and according to some like John Searle--who coined the terms Strong and Weak A.I.--a simulation of a conscious system is very different from the real thing. In other words, the hardware of the "machine" matters, and mere digital representations of biological mechanisms have no power to cause anything to happen in the real world.
Let's consider another biological phenomenon, like photosynthesis. Photosynthesis refers to the process by which plants convert light into energy. This process requires specific biochemical reactions that are only viable in a material with specific molecular and atomic properties. A perfect computer simulation--an emulation--of photosynthesis will never be able to convert light into energy, no matter how accurate it is and no matter what type of hardware you provide the computer with. However, there are in fact artificial photosynthesis machines. These machines do not merely simulate the physical mechanisms underlying photosynthesis in plants, but instead duplicate the biochemical and electrochemical forces, using photoelectrochemical cells that perform photocatalytic water splitting.
In a similar way, a simulation of water isn't going to possess the quality of 'wetness', which is a product of a very specific molecular formation of hydrogen and oxygen atoms held together by electrochemical bonds. Liquidity emerges as a physical state that is qualitatively different from that expressed by either element alone.
Even the hot new consciousness theory from neuroscience, Integrated Information Theory, makes very clear that a perfectly accurate computer simulation of a brain would not have consciousness like a real brain, just as a simulation of a black hole won't cause your computer and room to implode. Neuroscientists Giulio Tononi and Christof Koch, who established the theory, do not mince words on the subject:
"IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing."
With this in mind, we can still speculate about whether non-biological machines that support consciousness can exist, but we must realize that these machines may need to duplicate the essential electrochemical processes (whatever those may be) that are occurring in the brain during conscious states. If this were possible without organic materials--which have unique molecular and atomic properties--it would presumably require more than Turing machines, which are purely syntactic processors (symbol manipulators), and digital simulations, which may lack the necessary physical mechanisms.
The best approach to achieving Strong A.I. requires first finding out how the brain does what it does, and machine learning researchers' biggest mistake is to think they can take a shortcut around that step. As scientists and humans, we must be optimistic about what we can accomplish. At the same time, we must not be overly confident in ways that steer us in wrong directions and blind us from making real progress.
The Myth of Strong A.I.
Since as early as the 1960s, A.I. researchers have been claiming that Strong A.I. is just around the corner. But despite monumental increases in computer memory, speed, and processing power, we are no closer than before. So for now, just like the brainy sci-fi films of the past that depict apocalyptic A.I. scenarios, truly intelligent robots with inner conscious experience remain a fanciful fantasy.