In the years that followed the introduction of computing machines into commerce, library cataloging, banking, the military and so on, literally thousands of mathematicians, physicists, engineers, philosophers and many others were involved in the rise of the computer. More and more tasks, traditionally handled by human beings, were performed by computers. At the same time, more and more people took up the project of computational psychology, that is the attempt to read back from the simulation of human tasks by computers to computational hypotheses as to the way human beings perform human tasks.
Marvin Minsky made two essential contributions to the working out of the implications of the Turing Test. Following up the analogy between brains and computers, in 1951 he designed the first working model of a neural net, a structure resembling the fine structure of the brain. It rejoiced in the acronym SNARC (Stochastic Neural Analog Reinforcement Calculator).
The practical project of designing programmes that could be used to simulate cognition required a way of simplifying the huge amount of data and the multiple rules involved in any actual cognitive task. Could a practical means be found of constraining the procedures to handle this embarrassment of riches? Was there any way of selecting, from among the infinitely many ways that a computer could be programmed to perform a task, those that seemed human-like? Minsky's answer was the concept of the ‘frame’, which went some way towards resolving these problems, at least in principle.
Marvin Minsky was born in New York on 9 August 1927. His schooling included the highly regarded Bronx High School of Science and the Phillips Academy in Massachusetts. He went directly from Phillips into the US Navy until just after the end of the Second World War. He studied mathematics as an undergraduate at Harvard, and completed his doctorate in 1954 at Princeton. After a short spell at Harvard, he joined MIT, where he held a number of posts.
Wisely, American universities do not now cast out their best at the whim of an arbitrary retiring age, and Minsky is currently Toshiba Professor of Media Arts and Sciences at MIT. In cast of mind, Minsky shared Turing's mix of skills in creating abstract systems and coming up with powerful insights into the practical devices that could be built to realize them.
Minsky explored some of the mathematical issues that arose from the fact that the Universal Turing Machine places no restriction on the infinitely many programmes which any actual Turing Machine could use to produce a given output from a given input. Without some such restriction, the project of simulating human thinking with a plausible model of actual cognition is doomed to failure, however successful it might be as artificial intelligence (AI) or knowledge engineering.
He realized that there must be certain local schemata or ‘frames’ with which human beings actually work. A frame must include a manageable and finite set of data and rules, abstracted from an indefinitely large range of theoretical possibilities. The most accessible account of frames is to be found in Minsky (1987: Ch. 24), from which this exposition is taken. Here is how he defines a ‘frame’:

A frame is a data structure for representing a stereotype situation, like being in a certain kind of living room or going to a child's birthday party. Attached to each frame are several kinds of information. Some of this information is about how to use the frame. Some is about what one can expect to happen next. Some is about what to do if these expectations are not confirmed.

We can think of a frame as a network of nodes and relations. The ‘top levels’ of a frame are fixed, and represent things that are always true about the supposed situation. The lower levels have many terminals - ‘slots’ that must be filled by specific instances of data.

Minsky made several suggestions about the fine structure of his frames, for instance that the requirements for filling slots may themselves require subframes, and the requirements for completing subframes yet more detailed subsubframes. There may also be requirements that have to be met for filling more than one slot at once.
A frame-array is a set of frames with the same terminals. This idea captures the commonsense observation that the same people, things, events and so on may be thought about in a huge variety of ways.
A fragment of a frame for the relatives present at a child's birthday party might look like this:
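One way to picture such a fragment is as a structure of fixed top-level facts plus slots awaiting specific fillers. The sketch below is a hypothetical illustration in Python; the slot names, the requirements attached to them, and the name ‘Wendy’ (taken from the episode discussed below) are illustrative, not Minsky's own notation.

```python
# Hypothetical sketch of a Minsky-style frame fragment for the
# relatives present at a child's birthday party.

BIRTHDAY_PARTY_FRAME = {
    # Top level: fixed, always true of the stereotyped situation.
    "situation": "child's birthday party",
    # Lower level: terminals ('slots') to be filled by specific data.
    "slots": {
        "GUEST-OF-HONOUR": {"requires": "child", "filler": None},
        "MOTHER": {"requires": "adult female relative", "filler": None},
        "FATHER": {"requires": "adult male relative", "filler": None},
        "GRANDMOTHER": {"requires": "elderly female relative", "filler": None},
    },
}

def fill_slot(frame, slot, instance):
    """Fill a terminal with a specific instance of data."""
    frame["slots"][slot]["filler"] = instance

fill_slot(BIRTHDAY_PARTY_FRAME, "GUEST-OF-HONOUR", "Wendy")
fill_slot(BIRTHDAY_PARTY_FRAME, "MOTHER", "Wendy's mother")

# Slots still awaiting specific instances of data.
unfilled = [name for name, slot in BIRTHDAY_PARTY_FRAME["slots"].items()
            if slot["filler"] is None]
print(unfilled)  # the terminals not yet filled
```

On Minsky's account, the requirement attached to each slot could itself be a subframe (a ‘person’ frame for the MOTHER slot, say), and there may be constraints spanning several slots at once; the flat dictionary above elides that fine structure for brevity.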
Minsky allows for the instability of a frame during an episode in which a person is carrying out some cognitive task. A frame may need updating moment-by-moment, modifying the frame ‘axioms’. Wendy falls ill on the morning of the birthday party. How will we now celebrate the anniversary? There may be several ways in which the existing frame could be updated. It could be cancelled. It could be postponed until next Saturday, and so on.
Minsky's main technical contribution to the computational model was the demonstration of the viability of the idea of a network model of artificial nodes connected in complex ways. The idea goes back long before his work, but Minsky's was the first workable simulation of such a net.
Turing's implicit analogy between thinking and computing included an unspoken assumption that the thinking organ, the brain, must resemble the computing machine in various fundamental ways. The principle of multiple realizability declares that all sorts of material set-ups could be used as physical realizations of Turing machines. However, Newell had committed himself and the Carnegie-Rand group to a certain kind of computing machine, defined by a certain kind of ‘architecture’. It contained a symbol system and consisted of a central processor and a memory. This is the basic architecture proposed by Von Neumann.
Studies of the fine structure of the brain have revealed a net-like structure of neurons, linked by dendrites and axons and connected by synapses. Could a machine be built that would be a model of a neural net? If such a machine could be built, would it function as a brain does; in particular, could it perform as a Turing machine carrying out routine cognitive tasks? In a mathematical tour de force, Minsky demonstrated a working model of such a machine which could be set up on an ordinary computer. This led to the development of a new branch of cognitive science, connectionism, or parallel distributed processing (PDP). Could it do more? Could it perform operations that a Von Neumann machine could not?
Who was ultimately responsible for this innovation? It seems clear that the first inkling of such a project can be found in some remarks by Turing in 1947. The history of the development of the rudimentary idea can be followed in McClelland and Rumelhart (1986). Would one have to build a network of interlinked mini-computers to simulate a neural net? The key to all subsequent developments was Minsky's (1951) demonstration that it is possible to create a simulation of a neural net that is itself a simulation of a net of real neurons. The simulation could be run on a computing machine that is based on the Von Neumann architecture of memory stores and a central processing unit, thus bypassing the hardware problem of making a material analogue of the net of neurons. Subsequent research on neural nets and connectionist models has been based on Minsky's innovation, and its further developments (Minsky & Papert, 1988).
Here is how a PDP machine works. Imagine an array of nodes, representing neurons, connected in all possible ways (see Figure 4.1). Each node receives inputs and emits outputs via its connections with other nodes. Suppose that edge A receives inputs from the environment, switching some input nodes (artificial neurons) on (giving them the binary value 1) while others remain off (giving them the binary value 0). In this way the state of the input edge represents a binary number. The outputs from these nodes flow through the net to edge B. In the end, the nodes at edge B will display a certain on/off pattern of 1s and 0s, also expressing a binary number. By adjusting the strengths of the connections, a net can be ‘trained’ to emit any desired output relative to some given input. For example, we might input a binary pattern representing ‘sparrow’; the net could be trained to output a binary pattern of activated nodes representing ‘bird’. Such a net could be thought of as a classifying subsystem of an imaginary brain, one among a huge number of very extensive nets trained to perform other cognitive tasks. It has been shown that a net trained to perform one task can, in some circumstances, be trained to perform other tasks without interfering with the requirements of the original task.
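The training process described above can be sketched in a few lines of code. What follows is a minimal illustration, not a reconstruction of Minsky's SNARC: a single layer of artificial nodes whose connection strengths are adjusted (here by the classic perceptron learning rule) until a given binary input pattern at edge A yields the desired binary output pattern at edge B. The ‘sparrow’, ‘trout’, ‘bird’ and ‘fish’ codes are arbitrary patterns invented for the example.

```python
# Minimal sketch of training a net of artificial nodes by adjusting
# connection strengths (perceptron rule). Illustrative only.

def step(x):
    # A node fires (1) if its total weighted input is positive, else 0.
    return 1 if x > 0 else 0

def run(weights, inputs):
    # Each output node sums its weighted inputs and fires or stays off.
    return [step(sum(w * i for w, i in zip(row, inputs))) for row in weights]

def train(patterns, n_in, n_out, rate=0.5, epochs=50):
    # Start with all connection strengths at zero, then nudge each
    # weight toward producing the target output for each input.
    weights = [[0.0] * n_in for _ in range(n_out)]
    for _ in range(epochs):
        for inputs, target in patterns:
            output = run(weights, inputs)
            for j in range(n_out):
                error = target[j] - output[j]
                for i in range(n_in):
                    weights[j][i] += rate * error * inputs[i]
    return weights

# Binary patterns at 'edge A' (inputs) and 'edge B' (desired outputs).
SPARROW, TROUT = [1, 0, 1, 0], [0, 1, 0, 1]
BIRD, FISH = [1, 0], [0, 1]

weights = train([(SPARROW, BIRD), (TROUT, FISH)], n_in=4, n_out=2)
print(run(weights, SPARROW))  # after training: the 'bird' pattern [1, 0]
```

Notice that, as the section goes on to stress, the trained knowledge lives nowhere in particular: it is distributed across the whole table of connection strengths rather than stored at an addressable memory location.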
In a PDP or connectionist device there is no memory store and no central processing module. The whole net incorporates the data it has learned; there is no specific location for an item of knowledge. A computational model based on this principle takes the brain itself as the source of its design, derived by abstraction from the complexity of the actual organ. The test of the adequacy of the model is whether it can be trained to perform a cognitive task in good conformity with the way a person performs the task. This is a stronger requirement than just achieving a match between patterns of input and output. A striking example of a successful connectionist model of the hippocampus and its memory functions can be found in McLean, Plunkett and Rolls (1998).
Minsky's proposals seem to be bearing fruit, albeit slowly, in ways that the original efforts to follow up Turing's conjecture have not.