Artificial intelligence (AI) refers to the science and engineering of creating intelligent machines and is generally considered a domain of computer science. Intelligence in this sense refers to the computational component of achieving goals. The ultimate goal for scientists in this field is to create a machine that can solve problems and achieve goals as well as humans can. Accordingly, artificial intelligence models have been developed to help understand human memory, particularly the organization of information in memory. Intelligence is usually discussed in relation to humans, but it can occur in animals and machines as well.
Artificial intelligence emerged as a field after World War II. One of the earliest researchers was the English mathematician Alan Turing, who gave a lecture on the topic in 1947. He thought that AI could best be researched by having people program computers rather than by building machines. In 1956, John McCarthy, a young professor of mathematics at Dartmouth College, organized a summer conference to provide a forum for researchers to discuss ways in which computers could be programmed to carry out intelligent behavior. It was the title of this conference, the Summer Research Project on Artificial Intelligence, that first used the term “artificial intelligence.” At this same conference, Herbert Simon and Allen Newell, two pioneers of cognitive science, presented a computer program that could create proofs of mathematical theorems involving principles of logic. The program, called the Logic Theorist, was revolutionary because it used human-like reasoning to solve problems. By the late 1950s, the study of AI was flourishing, with most researchers working on programming computers.
In 1950, Turing published an article, “Computing Machinery and Intelligence,” that discussed the conditions for considering a machine intelligent. He developed what has come to be called the Turing test, arguing that if a machine could successfully convince a knowledgeable observer that it was human, the machine should be considered intelligent. To avoid requiring the machine to look or sound human, it would communicate with human observers only through typed messages. Thus, during a Turing test, the observer would correspond with both a human and a machine, each trying to convince the observer that it was the human. It was thought that this technique would convince the general public of a machine's intelligence, though philosophers remained unpersuaded by such evidence.
Daniel Dennett, a philosopher at Tufts University, discussed the Turing test and examples of its implementation, describing situations in which only observers with limited knowledge of AI took the test (Dennett, 1998). These examples showed that some people were rather easily led into believing that a program, even a not very sophisticated one, was intelligent.
One reemerging idea for the advancement of artificial intelligence is to create a “child machine” that can increase its intelligence by reading and learning from experience just as children do. As of yet, however, AI programs cannot learn most of what a child learns from physical experience or from reading. At other levels, though, AI programs do show signs of being able to increase their own intelligence. For example, Hacker, a program developed at the Massachusetts Institute of Technology (MIT), can write programs for the computer it runs on.
One early example of AI is the chess programs used to play human chess players. As many a frustrated human can attest, these programs are skilled at chess. Might this suggest that, at least in one area, AI programs have reached human levels of intelligence? Unfortunately, it does not. These programs require far more computational resources to play chess than a human player needs. Only when chess programs can play well using resources closer to those a human uses will their performance be considered truly comparable.
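The resource-hungry technique behind these chess programs is game-tree search: the program examines every sequence of moves and countermoves before choosing. A minimal sketch of that idea, applied to tic-tac-toe rather than chess for brevity (all names here are illustrative, not from any historical program):

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player                       # try the move
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None                         # undo it
    # X picks the best score for X; O picks the worst for X
    return max(scores) if player == 'X' else min(scores)

print(minimax([None] * 9, 'X'))  # 0: perfect play from both sides is a draw
```

Even this toy game forces the program to visit hundreds of thousands of positions, which illustrates the brute-force character of the approach.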
Metacognition is another issue in artificial intelligence. Humans are able to “think about thinking.” We can understand and explain why we know certain things, how we went about solving a particular problem, and how we might improve on future learning. If machines were able to perform metacognitive processes, like humans, they might be able to increase their own levels of intelligence to the levels of humans. However, the field of AI is still a ways off from accomplishing this large undertaking.
While great advancements have been made in artificial intelligence, scientists still have a ways to go before machines achieve problem-solving and goal-attainment abilities akin to those of humans.
Research on artificial intelligence incorporates both theoretical and experimental approaches. The experimental approaches examine both the basic problems of AI and the application of AI concepts. Research in AI examines both the biological and the phenomenal sides of human intelligence; that is, researchers attempt to understand human intelligence in terms of neurological functions and in terms of the manner in which humans solve problems and achieve goals. The latter includes the study of common sense, general knowledge, and biases in decision making.
One of the major limitations in defining and measuring intelligence in human and nonhuman species, as well as machines, is the lack of adequate measurement tools. Most human measurement tools rely on rather basic reading, comprehension, and mathematical problems. For instance, to examine working memory on an IQ test, children must repeat back a list of digits. However, working memory, as a component process of intelligence, is far more complex than simply repeating back a series of digits. Thus, there are many types of intelligence that are measured inadequately or not at all and that therefore cannot contribute to the final measure of intelligence.
Artificial intelligence, then, refers to creating machines that display intelligence. As the reader may already have inferred, this effort is limited by the lack of measurement tools for certain types, or components, of intelligence. Nonetheless, much progress has been made in creating machines that show intelligence in a way that is measurable. Of course, allowances must be made for the differences between machines and humans. While a digit span test may be useful in estimating a child's intelligence, it would be an incredibly easy task for any computer.
A common misconception is that AI simply tries to use a machine to simulate human intelligence. While this is sometimes the case, the relationship is much more bidirectional. Researchers may study humans in order to learn how to make machines solve problems, and they may try problem-solving techniques on machines that have never been tried with humans. Using machines to find better problem-solving strategies can lead both to technological advances and to insights about how humans might solve problems better.
Comparisons between human and computer intelligence can be difficult. Arthur Jensen (1998) commented that all biologically normal humans have similar intellectual mechanisms, so that differences in intelligence must be attributed to “quantitative biochemical and physiological conditions” (p. 20). Jensen further suggested that individual differences show up specifically in short-term (working) memory, speed, and the ability to form accurate and retrievable long-term memories. John McCarthy (2007) suggested that these same conditions are not a limitation for computers. In fact, computer programs have ample speed and memory but are limited by the intellectual mechanisms programmed into them by their designers. If the program designers do not know much about how to create intellectual components in a computer, the computer will not have those components. In this sense, then, humans and computers have opposite limitations: humans generally share the same basic intellectual components but are limited in speed and memory (capacity and formation), whereas computers have plenty of speed and memory but vary in their intellectual components. Of course, it may be that humans vary considerably in their basic intellectual abilities and we simply lack the measurement tools to detect these differences.
Memory research has a history of relying on models developed within an artificial intelligence framework. For example, HAM (human associative memory) is a computer program developed to model the organization of knowledge in semantic memory. Developed in the late 1960s and early 1970s by Gordon Bower and John Anderson, HAM was an early precursor to the ACT, ACT*, and ACT-R computer models. These later models used an AI framework for modeling human memory and problem solving.
Many people consider AI to be at the core of neural network models, which may, in the future, replace information processing models as the dominant paradigm in cognitive psychology. The basic premise of neural networks is that the analogy for understanding the mind is the interconnectedness of a neural network in the brain. The models that have been developed for understanding such interconnectedness stem from work in AI even though the goal is to understand a biological network, not a machine network. Nevertheless, in developing neural network models, psychologists have relied on concepts borrowed from artificial intelligence, including concepts of computational models within an adaptive system that changes its structure as information flows through the network. In particular, artificial neural networks, as these have come to be called, include models of the long- and short-term neural systems as they relate to learning and memory.
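The core idea of an adaptive system that changes its structure as information flows through it can be sketched with a single artificial neuron, a perceptron, whose connection weights are adjusted as each example passes through. This is a deliberately minimal illustration, not any of the memory models named above:

```python
def step(x):
    """Threshold activation: the neuron either fires (1) or does not (0)."""
    return 1 if x >= 0 else 0

def train_perceptron(examples, epochs=20, rate=0.1):
    """Learn two connection weights and a bias via the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Each error nudges the weights: the structure adapts to the data.
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    return w, b

# Teach the neuron logical OR purely from experience.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in examples])  # [0, 1, 1, 1]
```

Full neural network models chain many such units in parallel, which is what distinguishes them from the serial processors of earlier information-processing models.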
Memory becomes a central component of most artificial neural network models, both in terms of its machine properties and human properties—and most important in terms of the differences between them. An excellent example is provided by ACT, which was developed to model the organization of semantic memory and the search for information in memory, but was also developed to model a neural network rather than the prior cognitive models of information processing psychology. The prior models relied on a serial processor rather than the parallel processing that occurs in neural networks.
The study of artificial intelligence focuses on many different areas. These include logical AI, search, pattern recognition, representation, inference, common sense knowledge and reasoning, learning from experience, planning, epistemology, ontology, and heuristics. They are defined below.
Logical AI refers to the general knowledge a program has about the world and what it takes to reach its goal in specific situations. The types of steps taken by the program to achieve its goal are of interest.
Search refers to how a program looks through several possibilities when deciding which action it should take next to achieve its goal. Studies in this area examine how a program might do this search in a more efficient manner.
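The idea of looking through possibilities to reach a goal can be made concrete with a breadth-first search over a toy problem: turn a starting number into a target using only the actions +1 and *2. The problem and all names are invented for illustration:

```python
from collections import deque

def search(start, goal):
    """Breadth-first search: return the shortest list of actions to the goal."""
    frontier = deque([(start, [])])   # states waiting to be examined
    seen = {start}                    # avoid revisiting possibilities
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, nxt in (("+1", state + 1), ("*2", state * 2)):
            if nxt <= goal * 2 and nxt not in seen:  # prune hopeless states
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(search(2, 9))  # ['*2', '*2', '+1']
```

Examining states breadth-first guarantees the shortest action sequence, but the `seen` set and the pruning test show the kind of bookkeeping that research on efficient search is about: without them, the space of possibilities explodes.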
Those who focus on pattern recognition examine how a computer compares one pattern to another in order to recognize it.
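Comparing one pattern to another can be sketched as template matching: the program measures how closely an input matches each stored template and labels it with the nearest one. The tiny 3x3 "images" below are purely illustrative:

```python
# Stored templates: 3x3 pixel patterns flattened into lists of 9 values.
TEMPLATES = {
    "X": [1, 0, 1,
          0, 1, 0,
          1, 0, 1],
    "O": [1, 1, 1,
          1, 0, 1,
          1, 1, 1],
}

def distance(a, b):
    """Count the pixels where two patterns disagree (Hamming distance)."""
    return sum(1 for p, q in zip(a, b) if p != q)

def recognize(pattern):
    """Return the label of the stored template closest to the input."""
    return min(TEMPLATES, key=lambda label: distance(pattern, TEMPLATES[label]))

noisy_x = [1, 0, 1,
           0, 1, 0,
           1, 0, 0]   # an "X" with one corrupted pixel
print(recognize(noisy_x))  # X
```

Even with a corrupted pixel the comparison succeeds, which is the appeal of recognition by similarity rather than by exact match.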
Representation refers to the manner in which AI programs represent facts about the world, usually using language or mathematical algorithms.
How AI programs make inferences about some facts when given other facts is another avenue of study.
The area of study in AI that is furthest from mimicking humans is common sense knowledge and reasoning, despite its having been an active area of research since the 1950s.
Much like humans, computers can learn from experience. However, they do so at a far more limited level than humans do. How to make computers learn from experience in a manner and at a level akin to humans is one of the goals of AI.
How computer programs plan to achieve a goal is of interest to AI researchers.
Epistemology refers to the study of the kinds of knowledge that are required for solving problems in the world.
Ontology deals with the study of the various kinds of objects and their properties.
Heuristics is the manner in which something is discovered. This usually refers to a general approach to problem solving instead of a specific algorithm.
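The contrast between a general heuristic and a specific algorithm can be sketched with coin change: the greedy heuristic (always take the largest coin that fits) is fast but can fail, while an exhaustive dynamic-programming method guarantees the minimum. The coin values are invented for illustration:

```python
def greedy_change(amount, coins):
    """Heuristic: repeatedly take the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result if amount == 0 else None

def fewest_coins(amount, coins):
    """Exact method: dynamic programming over every sub-amount."""
    best = [0] + [None] * amount   # best[v] = fewest coins summing to v
    for value in range(1, amount + 1):
        options = [best[value - c] for c in coins
                   if c <= value and best[value - c] is not None]
        best[value] = min(options) + 1 if options else None
    return best[amount]

coins = [1, 3, 4]
print(greedy_change(6, coins))  # [4, 1, 1] -- the heuristic uses 3 coins
print(fewest_coins(6, coins))   # 2         -- the optimum is 3 + 3
```

The heuristic's answer is usually good enough and found almost instantly; the exact method pays for its guarantee with extra work, which is the trade-off heuristics exist to make.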
There are many applications of AI. These include game playing, speech recognition, understanding of natural language, computer vision, expert systems, and heuristic classification. The last three are not common terms and so deserve further explanation.
Computer vision refers to attempts to make programs that can recognize three-dimensional objects using two-dimensional inputs, as the human eye does. So far the programs fall short of human visual abilities.
Expert systems refer to the creation of a computer “expert” in a specific domain. For instance, in 1974 programmers created an expert system, named MYCIN, that diagnosed bacterial infections of the blood and suggested treatments. While the program did a better job of diagnosing patients than medical students or practicing doctors did, it had some serious limitations. The program's ontology included bacteria, symptoms, and treatments but did not account for peripheral factors outside the single patient (e.g., doctors, hospitals, other patients). In other words, the program had neither the full context of the patient's situation nor the ability to tap into common sense. (See McCarthy for a complete discussion of MYCIN and its shortcomings.)
Finally, heuristic classification refers to the manner in which a problem can be solved generally. What type of information is important to take into account? How might each relevant factor be weighted in the decision-making process? These questions are analyzed in the study of heuristic classification.
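The questions above, which information matters and how each factor is weighted, can be sketched as a weighted-sum classifier. The factors, weights, and threshold below are invented purely for illustration and bear no relation to any real diagnostic system:

```python
# Each relevant factor carries a weight; evidence is summed and compared
# with a threshold. All values here are hypothetical.
WEIGHTS = {"fever": 2.0, "cough": 1.0, "fatigue": 0.5}
THRESHOLD = 2.5

def classify(observed):
    """Return 'likely' or 'unlikely' from a weighted sum of observed factors."""
    score = sum(WEIGHTS[f] for f in observed if f in WEIGHTS)
    return "likely" if score >= THRESHOLD else "unlikely"

print(classify(["fever", "cough"]))    # likely   (score 3.0)
print(classify(["cough", "fatigue"]))  # unlikely (score 1.5)
```

Choosing which factors enter the table and what weight each deserves is precisely the problem that the study of heuristic classification analyzes.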