Perceptrons (book)
Perceptrons is a book by Marvin Minsky and Seymour Papert, published in 1969. Its full title is "Perceptrons: An Introduction to Computational Geometry". An edition with handwritten corrections and additions was released in the early 1970s, and an expanded edition followed in 1987, containing a chapter dedicated to countering the criticisms made of the book in the 1980s.
The main subject of the book is the perceptron, an important kind of artificial neural network developed in the late 1950s and early 1960s. The principal researcher on perceptrons was Frank Rosenblatt, author of the book "Principles of Neurodynamics". Rosenblatt and Minsky had known each other since adolescence, having studied one year apart at the Bronx High School of Science. They became central figures in a debate inside the AI research community, and are known to have had loud discussions at conferences. Despite the dispute, the corrected version of the book, released after Rosenblatt's tragic early death, contains a dedication to him.
This book is the center of a long-standing controversy in the study of artificial intelligence. Critics claim that the pessimistic predictions made by the authors were responsible for a misguided change in the direction of AI research, concentrating efforts on so-called "symbolic" systems and contributing to the so-called AI winter. That shift, the argument goes, proved unfortunate in the 1980s, when new discoveries showed that the book's prognoses were wrong.
The book contains a number of mathematical proofs regarding perceptrons, and while it highlights some of their strengths, it also establishes some previously unknown limitations. The most important concerns the computation of certain predicates, such as the XOR function, as well as the important connectedness predicate. The problem of connectedness is illustrated by the awkwardly colored cover of the book, intended to show how humans themselves have difficulty computing this predicate.
The XOR affair
Critics of the book state that the authors imply that, since a single artificial neuron is incapable of implementing some functions, such as the XOR logical function, larger networks also have similar limitations and should therefore be abandoned. Later research on three-layered perceptrons showed how to implement such functions, thereby saving the technique from obliteration.
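The XOR point itself is easy to demonstrate. The following is an illustrative sketch, not code from the book, with hand-chosen weights: a brute-force search over a weight grid finds no single linear threshold unit that realizes XOR, while a small network of such units (one hidden layer, in the book's "three-layered" sense) computes it directly.

```python
# Illustrative sketch (hand-chosen weights, not from the book): a single
# linear threshold unit cannot compute XOR, but a small network of them can.

def step(x):
    return 1 if x >= 0 else 0

def unit(w1, w2, bias):
    # A linear threshold unit over two inputs.
    return lambda a, b: step(w1 * a + w2 * b + bias)

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Brute-force search over a coarse weight grid: no single unit matches XOR
# (XOR is not linearly separable, so no weights can work).
grid = [x / 2 for x in range(-6, 7)]  # -3.0 .. 3.0 in steps of 0.5
single = any(
    all(unit(w1, w2, b)(a, c) == out for (a, c), out in XOR.items())
    for w1 in grid for w2 in grid for b in grid
)
print(single)  # False

# A network with one hidden layer: XOR(a, b) = AND(OR(a, b), NAND(a, b)).
h_or = unit(1, 1, -1)       # fires when a + b >= 1
h_nand = unit(-1, -1, 1.5)  # fires unless both inputs are 1
out = unit(1, 1, -2)        # AND of the two hidden units

xor_net = lambda a, b: out(h_or(a, b), h_nand(a, b))
print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The grid search is not a proof, only an illustration; the impossibility follows from XOR not being linearly separable.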
There are many mistakes in this story. Although a single neuron can in fact compute only a small number of logical predicates, it was widely known that networks of such elements can compute any possible boolean function. This was known to Warren McCulloch and Walter Pitts, who even proposed how to build a Turing machine from their formal neurons; it is mentioned in Rosenblatt's book, and even in Perceptrons itself. Minsky also makes extensive use of formal neurons to build simple theoretical computers in his book "Computation: Finite and Infinite Machines".
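The boolean-completeness point can be sketched as follows, assuming simple step-threshold "formal neurons" in the McCulloch-Pitts style (this construction is a standard illustration, not taken from any of the books mentioned): AND, OR, and NOT are each a single threshold unit, and composing them in disjunctive normal form yields any boolean function, including three-input parity, which no single unit computes.

```python
# Hedged illustration: threshold units give AND, OR, NOT, so any boolean
# function can be built as a two-stage AND/OR (disjunctive normal form) network.
from itertools import product

def step(x):
    return 1 if x >= 0 else 0

def AND(*bits):
    return step(sum(bits) - len(bits))   # fires only when all inputs are 1

def OR(*bits):
    return step(sum(bits) - 1)           # fires when any input is 1

def NOT(bit):
    return step(0.5 - bit)               # 0 -> 1, 1 -> 0

def dnf_network(truth_table):
    """Build an AND/OR network computing an arbitrary boolean function
    given as a {input_tuple: output_bit} table."""
    minterms = [ins for ins, out in truth_table.items() if out == 1]
    def net(*bits):
        fired = [AND(*(b if want else NOT(b) for b, want in zip(bits, ins)))
                 for ins in minterms]
        return OR(*fired) if fired else 0
    return net

# Three-input parity (odd number of 1s): beyond any single threshold unit,
# but exactly realized by the DNF network.
parity = {ins: sum(ins) % 2 for ins in product((0, 1), repeat=3)}
net = dnf_network(parity)
print(all(net(*ins) == out for ins, out in parity.items()))  # True
```

The construction is exponential in the worst case, which is beside the point here: it shows computability, not efficiency.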
What the book does prove is that in three-layered feed-forward perceptrons (with a so-called "hidden" or "intermediary" layer), certain predicates cannot be computed unless at least one of the neurons in the first layer (the "intermediary" layer) is connected with a non-null weight to each and every input. This ran contrary to the hope of some researchers who relied mostly on networks with a few layers of "local" neurons, each connected to only a small number of inputs. A feed-forward machine with "local" neurons is much easier to build and use than a larger, recurrent neural network, so researchers at the time concentrated on these instead of on more complicated models.
Analysis of the controversy
Although the book is widely available, many scientists discuss Perceptrons only by echoing what others have said, which helps spread misconceptions about it. Minsky has even compared the book to the fictional Necronomicon in H. P. Lovecraft's tales, a book known to many but read by few [http://www.ucs.louisiana.edu/~isb9112/dept/phil341/histconn.html]. In the expanded edition the authors discuss the criticism of the book that began in the 1980s, with a new wave of research symbolized by the PDP book.
How Perceptrons was used first by one group of scientists to drive AI research in one direction, and then later by a new group to drive it in another, has been the subject of a peer-reviewed sociological study of scientific development [http://www.jstor.org/pss/285702].