Artificial intelligence
"AI" redirects here. For other uses, see Ai and Artificial
intelligence (disambiguation).
An AI-themed mug.
Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behavior. Major AI researchers and textbooks define this field as "the study and design of intelligent agents",[1] in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]
AI research is highly
technical and specialized, and is deeply divided into subfields that often fail
to communicate with each other.[5] Some of the division is due to social and cultural
factors: subfields have grown up around particular institutions and the work of
individual researchers. AI research is also divided by several technical
issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches, on the use of a particular tool, or on the accomplishment of particular applications.
The central problems
(or goals) of AI research include reasoning, knowledge, planning, learning, natural
language processing (communication), perception and the ability to move and manipulate objects.[6] General
intelligence is still
among the field's long-term goals.[7] Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are a large
number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics,
and many others. The AI field is interdisciplinary: a number of sciences and professions converge in it, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as more specialized fields such as artificial psychology.
The field was founded
on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—"can be so precisely described that a machine can
be made to simulate it."[8] This raises philosophical issues about the nature of the mind and the
ethics of creating artificial beings endowed with human-like intelligence,
issues which have been addressed by myth, fiction and philosophy since antiquity.[9] Artificial intelligence has been the subject of
tremendous optimism[10] but has also suffered stunning setbacks.[11] Today it has become an essential part of the technology
industry, providing the heavy lifting for many of the most challenging problems
in computer science.[12]
History
Main
articles: History
of artificial intelligence and Timeline
of artificial intelligence
Thinking machines and
artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea.[13] Human likenesses believed to have intelligence were built
in every major civilization: animated cult images were worshiped in Egypt and Greece[14] and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari.[15] It was also widely believed that artificial beings had
been created by Jābir ibn Hayyān, Judah Loew and Paracelsus.[16] By the 19th and 20th centuries, artificial beings had
become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R.
(Rossum's Universal Robots).[17] Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods".[9] Stories of these creatures and their fates discuss many
of the same hopes, fears and ethical
concerns that are
presented by artificial intelligence.
Mechanical or "formal"
reasoning has been
developed by philosophers and mathematicians since antiquity. The study of
logic led directly to the invention of the programmable digital electronic
computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple
as "0" and "1", could simulate any conceivable act of
mathematical deduction.[18][19] This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of
researchers to begin to seriously consider the possibility of building an
electronic brain.[20]
The field of AI
research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[21] The attendees, including John
McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel, and Herbert Simon,
became the leaders of AI research for many decades.[22] They and their students wrote programs that were, to most
people, simply astonishing:[23] computers were winning at checkers, solving word problems
in algebra, proving logical theorems and speaking English.[24] By the middle of the 1960s, research in the U.S. was
heavily funded by the Department
of Defense[25] and laboratories had been established around the world.[26] AI's founders were profoundly optimistic about the future
of the new field: Herbert Simon predicted that "machines will be capable, within
twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ...
the problem of creating 'artificial intelligence' will substantially be
solved".[27]
They had failed to
recognize the difficulty of some of the problems they faced.[28] In 1974, in response to the criticism of Sir James Lighthill[29] and ongoing pressure from the US Congress to fund more
productive projects, both the U.S. and British governments cut off all
undirected exploratory research in AI. The next few years would later be called
an "AI winter",[30] a period when funding for AI projects was hard to find.
In the early 1980s, AI
research was revived by the commercial success of expert systems,[31] a form of AI program that simulated the knowledge and
analytical skills of one or more human experts. By 1985 the market for AI had
reached over a billion dollars. At the same time, Japan's fifth generation
computer project
inspired the U.S. and British governments to restore funding for academic
research in the field.[32] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a
second, longer lasting AI winter began.[33]
In the 1990s and early
21st century, AI achieved its greatest successes, albeit somewhat behind the
scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry.[12] The success was due to several factors: the increasing
computational power of computers (see Moore's law), a greater emphasis on solving
specific subproblems, the creation of new ties between AI and other fields
working on similar problems, and a new commitment by researchers to solid
mathematical methods and rigorous scientific standards.[34]
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a
reigning world chess champion, Garry Kasparov.[35] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question
answering system, Watson,
defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[36] The Kinect, which provides a 3D
body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from
lengthy AI research[37] as do intelligent
personal assistants in smartphones.[38]
Research
Goals
You awake one morning
to find your brain has another lobe functioning. Invisible, this auxiliary lobe
answers your questions with information beyond the realm of your own memory,
suggests plausible courses of action, and asks questions that help bring out
relevant facts. You quickly come to rely on the new lobe so much that you stop
wondering how it works. You just use it. This is the dream of artificial
intelligence.
The general problem of
simulating (or creating) intelligence has been broken down into a number of
specific sub-problems. These consist of particular traits or capabilities that
researchers would like an intelligent system to display. The traits described below
have received the most attention.[6]
Deduction, reasoning, problem solving
Early AI researchers
developed algorithms that imitated the step-by-step reasoning that humans use
when they solve puzzles or make logical deductions.[40] By the late 1980s and 1990s, AI research had also
developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[41]
For difficult
problems, most of these algorithms can require enormous computational resources
– most experience a "combinatorial
explosion": the amount of memory or computer time required
becomes astronomical when the problem goes beyond a certain size. The search
for more efficient problem-solving algorithms is a high priority for AI
research.[42]
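To make the idea of step-by-step search concrete, here is a minimal sketch in Python; the toy puzzle (unscrambling a word by swapping adjacent letters) is a hypothetical example, not drawn from the literature. It runs a breadth-first search over the space of letter orderings; because the number of orderings grows factorially with word length, the same code also illustrates why such searches run into a combinatorial explosion on larger problems.

from collections import deque

def bfs(start, goal, neighbours):
    """Breadth-first search from start to goal; returns the first path found."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def swaps(word):
    """Neighbouring states: all words obtained by swapping two adjacent letters."""
    return [word[:i] + word[i + 1] + word[i] + word[i + 2:]
            for i in range(len(word) - 1)]

print(bfs("tca", "cat", swaps))   # ['tca', 'cta', 'cat']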
Human beings solve
most of their problems using fast, intuitive judgements rather than the
conscious, step-by-step deduction that early AI research was able to model.[43] AI has made some progress at imitating this kind of
"sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning;neural net research attempts to simulate the structures inside the
brain that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to
guess.
Knowledge representation
An ontology represents knowledge as a set of concepts
within a domain and the relationships between those concepts.
Main articles: Knowledge
representation and Commonsense knowledge
Knowledge
representation[44] and knowledge engineering[45] are central to AI research. Many of the problems machines
are expected to solve will require extensive knowledge about the world. Among
the things that AI needs to represent are: objects, properties, categories and
relations between objects;[46] situations, events, states and time;[47] causes and effects;[48] knowledge about knowledge (what we know about what other
people know);[49] and many other, less well researched domains. A
representation of "what exists" is an ontology:
the set of objects, relations, concepts and so on that the machine knows about.
The most general are called upper ontologies, which attempt to provide a
foundation for all other knowledge.[50]
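As a minimal sketch of what such a representation can look like, the Python fragment below encodes a toy ontology of concepts linked by "is-a" relations, with a most general concept standing in for an upper ontology. The concept names are hypothetical illustrations, not taken from any real knowledge base.

IS_A = {
    "canary": "bird",
    "bird": "animal",
    "animal": "physical_object",
    "physical_object": "thing",   # "thing" plays the role of an upper-ontology root
}

def ancestors(concept):
    """Walk the is-a chain from a concept up to the most general concept."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def is_a(concept, category):
    """True if the ontology implies that the concept falls under the category."""
    return category == concept or category in ancestors(concept)

print(ancestors("canary"))        # ['bird', 'animal', 'physical_object', 'thing']
print(is_a("canary", "animal"))   # True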
Among the most
difficult problems in knowledge representation are:
Default reasoning and the qualification problem
Many of
the things people know take the form of "working assumptions." For
example, if a bird comes up in conversation, people typically picture an animal
that is fist sized, sings, and flies. None of these things are true about all
birds. John
McCarthy identified
this problem in 1969[51] as the qualification problem: for any commonsense rule
that AI researchers care to represent, there tend to be a huge number of
exceptions. Almost nothing is simply true or false in the way that abstract
logic requires. AI research has explored a number of solutions to this problem.[52] (A small sketch of such default rules with exceptions appears after this list.)
The
breadth of commonsense knowledge
The number
of atomic facts that the average person knows is astronomical. Research
projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc)
require enormous amounts of laborious ontological
engineering—they must be built, by hand, one complicated concept at
a time.[53] A major goal is to have the computer understand enough
concepts to be able to learn by reading from sources like the internet, and
thus be able to add to its own ontology.
The
subsymbolic form of some commonsense knowledge
Much of
what people know is not represented as "facts" or
"statements" that they could express verbally. For example, a chess
master will avoid a particular chess position because it "feels too
exposed"[54] or an art critic can take one look at a statue and
instantly realize that it is a fake.[55] These are intuitions or tendencies that are represented
in the brain non-consciously and sub-symbolically.[56] Knowledge like this informs, supports and provides a
context for symbolic, conscious knowledge. As with the related problem of
sub-symbolic reasoning, it is hoped that situated
AI, computational
intelligence, or statistical AI will provide ways to represent this
kind of knowledge.[56]
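The sketch promised in the first item above is given here: a Python toy in which the default rule that birds fly is only a working assumption, overridden by more specific exceptions. The individuals and the exception class are hypothetical, and real default-reasoning formalisms are considerably more elaborate.

DEFAULTS   = {"bird": {"can_fly": True, "sings": True}}
EXCEPTIONS = {"penguin": {"can_fly": False, "sings": False}}
KIND_OF    = {"tweety": "bird", "pingu": "penguin"}
IS_A       = {"penguin": "bird"}

def holds(individual, prop):
    """Apply the most specific rule that mentions prop; fall back to the default."""
    kind = KIND_OF[individual]
    while kind is not None:
        if prop in EXCEPTIONS.get(kind, {}):
            return EXCEPTIONS[kind][prop]
        if prop in DEFAULTS.get(kind, {}):
            return DEFAULTS[kind][prop]
        kind = IS_A.get(kind)
    return None

print(holds("tweety", "can_fly"))  # True  (the default for birds)
print(holds("pingu", "can_fly"))   # False (the penguin exception wins)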
Planning
A hierarchical
control system is a
form of control system in which a set of devices and
governing software is arranged in a hierarchy.
Main article: Automated
planning and scheduling
Intelligent agents
must be able to set goals and achieve them.[57] They need a way to visualize the future (they must have a
representation of the state of the world and be able to make predictions about
how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.[58]
In classical planning
problems, the agent can assume that it is the only thing acting on the world
and it can be certain what the consequences of its actions may be.[59] However, if the agent is not the only actor, it must
periodically ascertain whether the world matches its predictions and it must
change its plan as this becomes necessary, requiring the agent to reason under
uncertainty.[60]
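A minimal sketch of classical planning under exactly those assumptions: in the Python fragment below, each action is described by preconditions, facts it adds and facts it deletes, and a breadth-first search over world states returns a sequence of actions that reaches the goal. The delivery domain is a hypothetical example, and real planners use far more sophisticated representations and heuristics.

from collections import deque

ACTIONS = {
    "go_A_to_B": {"pre": {"robot_in_A"}, "add": {"robot_in_B"}, "del": {"robot_in_A"}},
    "go_B_to_A": {"pre": {"robot_in_B"}, "add": {"robot_in_A"}, "del": {"robot_in_B"}},
    "pick_up":   {"pre": {"robot_in_A", "package_in_A"}, "add": {"holding"}, "del": {"package_in_A"}},
    "drop_in_B": {"pre": {"robot_in_B", "holding"}, "add": {"package_in_B"}, "del": {"holding"}},
}

def plan(initial, goal):
    """Breadth-first search through world states for a sequence of actions."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, act in ACTIONS.items():
            if act["pre"] <= state:
                nxt = frozenset((state - act["del"]) | act["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"robot_in_A", "package_in_A"}, {"package_in_B"}))
# ['pick_up', 'go_A_to_B', 'drop_in_B']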
Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary
algorithms and swarm intelligence.[61]
Learning
Main article: Machine learning
Machine learning is
the study of computer algorithms that improve automatically through experience[62][63] and has been central to AI research since the field's inception.[64]
Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression.
Classification is used to determine what category something belongs in, after
seeing a number of examples of things from several categories. Regression is
the attempt to produce a function that describes the relationship between
inputs and outputs and predicts how the outputs should change as the inputs
change. In reinforcement
learning[65] the agent is rewarded for good responses and punished for
bad ones. The agent uses this sequence of rewards and punishments to form a
strategy for operating in its problem space. These three types of learning can
be analyzed in terms of decision theory, using concepts like utility. The
mathematical analysis of machine learning algorithms and their performance is a
branch of theoretical computer science known as computational learning theory.[66]
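As a concrete instance of the regression flavour of supervised learning, the sketch below fits a straight line to a handful of labelled examples by closed-form least squares and then predicts the output for an unseen input. The data points are hypothetical.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 9.1]          # roughly y = 2x + 1, with some noise

def fit_line(xs, ys):
    """Closed-form least-squares estimates of slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

a, b = fit_line(xs, ys)
print(a, b)            # learned slope and intercept (about 2.03 and 1.0)
print(a * 5.0 + b)     # prediction for the unseen input x = 5 (about 11.15)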
Within developmental robotics, developmental learning approaches have been developed to let a robot cumulatively acquire repertoires of novel skills over its lifetime, through autonomous self-exploration and social interaction with human teachers, and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.[67][68][69][70]
Natural language processing (communication)
A parse tree represents the syntactic structure of a sentence according to some formal grammar.
Main article: Natural
language processing
Natural
language processing[71] gives machines the ability to read and understand the languages that humans speak. A sufficiently powerful
natural language processing system would enable natural
language user interfaces and the
acquisition of knowledge directly from human-written sources, such as newswire
texts. Some straightforward applications of natural language processing include information retrieval (or text mining), question answering[72] and machine translation.[73]
A common method of
processing and extracting meaning from natural language is through semantic
indexing. Increases in processing speed and the falling cost of data storage make indexing large volumes of abstractions of the user's input much more efficient.
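A much-simplified sketch of the indexing idea, in Python: a keyword-based inverted index over a few hypothetical documents, answering conjunctive queries. Real semantic indexing works over richer abstractions of the text than raw words, but the retrieval mechanics are similar in spirit.

from collections import defaultdict

docs = {
    1: "the robot picked up the red block",
    2: "a chess program defeated the champion",
    3: "the program translated the sentence into French",
}

# Build an inverted index: word -> set of documents containing that word.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return the documents that contain every word of the query."""
    matches = [index[word] for word in query.lower().split()]
    return set.intersection(*matches) if matches else set()

print(search("the program"))   # {2, 3}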
Perception
Main articles: Machine perception, Computer vision and Speech recognition
Machine perception[74] is the ability to use input from sensors (such as
cameras, microphones, tactile sensors, sonar and others more exotic)
to deduce aspects of the world. Computer vision[75] is the ability to analyze visual input. A few selected
subproblems are speech recognition,[76] facial
recognition and object recognition.[77]
Motion and manipulation
Main article: Robotics
The field of robotics[78] is closely related to AI. Intelligence is required for
robots to be able to handle such tasks as object manipulation[79] and navigation, with sub-problems of localization (knowing where you are, or finding out where other things
are), mapping (learning what is around you, building a map of the
environment), and motion planning (figuring out how to get there)
or path planning (going from one point in space to another point, which may
involve compliant motion – where the robot moves while maintaining physical
contact with an object).[80][81]
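A minimal sketch of path planning on a grid map: the Python fragment below runs A* search with a Manhattan-distance heuristic to find a route from a start cell to a goal cell around obstacles. The map is a hypothetical example; real robot motion planning works in continuous configuration spaces.

import heapq

GRID = ["....#",
        "..#.#",
        "..#..",
        "....."]          # '.' is a free cell, '#' an obstacle

def neighbours(cell):
    """Free cells adjacent to the given cell (4-connected grid)."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == ".":
            yield (nr, nc)

def astar(start, goal):
    """A* search with a Manhattan-distance heuristic; returns the path as a list of cells."""
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for nxt in neighbours(cell):
            if cost + 1 < best.get(nxt, float("inf")):
                best[nxt] = cost + 1
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None

print(astar((0, 0), (2, 4)))
# [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (2, 4)]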
Long-term goals
Among the long-term
goals in the research pertaining to artificial intelligence are: (1) Social
intelligence, (2) Creativity, and (3) General intelligence.
Social intelligence
Affective computing is
the study and development of systems and devices that can recognize, interpret,
process, and simulate human affects.[83][84] It is an interdisciplinary field spanning computer science, psychology, and cognitive science.[85] While the origins of the field may be traced as far back
as to early philosophical inquiries into emotion,[86] the more modern branch of computer science originated
with Rosalind Picard's
1995 paper[87] on affective computing.[88][89] A motivation for the research is the ability to simulate empathy. The machine
should interpret the emotional state of humans and adapt its behaviour to them,
giving an appropriate response for those emotions.
Emotion and social
skills[90] play two roles for an intelligent agent. First, it must
be able to predict the actions of others, by understanding their motives and
emotional states. (This involves elements of game theory, decision theory,
as well as the ability to model human emotions and the perceptual skills to
detect emotions.) Second, in an effort to facilitate human-computer
interaction, an intelligent machine might want to be able to display emotions—even
if it does not actually experience them itself—in order to appear sensitive to
the emotional dynamics of human interaction.
Creativity
A sub-field of AI
addresses creativity both theoretically (from a philosophical and
psychological perspective) and practically (via specific implementations of
systems that generate outputs that can be considered creative, or systems that
identify and assess creativity). Related areas of computational research are Artificial intuition and Artificial thinking.
General intelligence
Main articles: Artificial
general intelligence and AI-complete
Many researchers think
that their work will eventually be incorporated into a machine with general intelligence
(known as strong AI),
combining all the skills above and exceeding human abilities at most or all of
them.[7] A few believe that anthropomorphic features like artificial
consciousness or an artificial brain may be required for such a project.[91][92]
Many of the problems
above may require general intelligence to be considered solved. For example,
even a straightforward, specific task like machine translation requires that the machine read and write in both
languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the
author's intention (social intelligence). A problem like machine translation is therefore considered "AI-complete": to solve this one problem well, a machine would have to solve all of these problems at once.[93]
Approaches
There is no
established unifying theory or paradigm that guides AI research. Researchers disagree about many
issues.[94] A few of the most long-standing questions that have
remained unanswered are these: should artificial intelligence simulate natural
intelligence by studying psychology or neurology? Or is human biology as irrelevant
to AI research as bird biology is to aeronautical engineering?[95] Can intelligent behavior be described using simple,
elegant principles (such as logic or optimization)?
Or does it necessarily require solving a large number of completely unrelated
problems?[96] Can intelligence be reproduced using high-level symbols,
similar to words and ideas? Or does it require "sub-symbolic"
processing?[97] John Haugeland, who coined the term GOFAI (Good
Old-Fashioned Artificial Intelligence), also proposed that AI should more
properly be referred to as synthetic
intelligence,[98] a term which has since been adopted by some non-GOFAI
researchers.[99][100]
Cybernetics and brain simulation
In the 1940s and
1950s, a number of researchers explored the connection between neurology, information theory,
and cybernetics. Some of
them built machines that used electronic networks to exhibit rudimentary
intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast.
Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[20] By 1960, this approach was largely abandoned, although
elements of it would be revived in the 1980s.