Kenneth Colby

Kenneth Mark Colby, M.D. (1920–2001) was an American psychiatrist who devoted his career to applying computer science and artificial intelligence to psychiatry. Colby was a pioneer in using computers to model cognitive function and to assist both patients and doctors in the treatment process. He is perhaps best known for developing PARRY, a computer program that mimicked a paranoid schizophrenic and could "converse" with others. PARRY sparked serious debate about the possibility and nature of machine intelligence.

Early Life and Education

Colby was born in Waterbury, Connecticut in 1920. He graduated from Yale University in 1941 and from Yale Medical School in 1943.

Career

Colby began his career in psychoanalysis as a clinical associate at the San Francisco Institute of Psychoanalysis in 1951. During this time, he published "A Primer for Psychotherapists," an introduction to psychodynamic psychotherapy. He joined the Department of Computer Science at Stanford University in the early 1960s, beginning his pioneering work in the relatively new field of artificial intelligence. In 1967 the National Institute of Mental Health recognized his research with a Career Research Scientist Award. Colby came to UCLA as a professor of psychiatry in 1974, and was jointly appointed professor in the Department of Computer Science a few years later. Over the course of his career, he wrote numerous books and articles on psychiatry, psychology, psychotherapy and artificial intelligence.

Psychoanalysis

Early in his career, in 1955, Colby published "Energy and Structure in Psychoanalysis," an effort to bring Freud's basic doctrines into line with modern concepts of physics and the philosophy of science. ["Energy and Structure in Psychoanalysis" (1955)] This, however, would be one of Colby's last attempts to reconcile psychoanalysis with what he saw as important developments in science and philosophical thought. Central to Freud's method is his employment of a hermeneutics of suspicion, a mode of inquiry that refuses to take the subject at his or her word about internal processes. Freud sets forth explanations for a patient's mental state without regard for whether the patient agrees; if the patient does not agree, he or she has repressed the truth, a truth that the psychoanalyst alone can be entrusted with uncovering. The psychoanalyst's authority to decide the nature or validity of a patient's state, and the lack of empirical verifiability behind that decision, were not acceptable to Colby.

Colby's disenchantment with psychoanalysis found further expression in several publications, including his 1958 book "A Skeptical Psychoanalyst". He vigorously criticized psychoanalysis for failing to satisfy the most fundamental requirement of a science: the generation of reliable data. In his 1983 book "Fundamental Crisis in Psychiatry", he wrote, "Reports of clinical findings are mixtures of facts, fabulations, and fictives so intermingled that one cannot tell where one begins and the other leaves off. …we never know how the reports are connected to the events that actually happened in the treatment sessions, and so they fail to qualify as acceptable scientific data." ["Fundamental Crisis in Psychiatry" (1983)]

Likewise, in "Cognitive Science and Psychoanalysis", he stated, "In arguing that psychoanalysis is not a science, we shall show that few scholars studying this question get to the bottom of the issue. Instead, they start by accepting, as do psychoanalytic theorists, that the reports of what happens in psychoanalytic treatment -- the primary source of the data -- are factual, and then they lay out their interpretations of the significance of facts for theory. We, on the other hand, question the status of the facts." ["Cognitive Science and Psychoanalysis" (1988)] These issues would shape his approach to psychiatry and guide his research efforts.

Computer Science

In the 1960s Colby began thinking about the ways in which computer theory and application could contribute to the understanding of brain function and mental illness. One early project involved an Intelligent Speech Prosthesis which allowed individuals suffering from aphasia to “speak” by helping them search for and articulate words using whatever phonemic or semantic clues they were able to generate. [ [http://www.universityofcalifornia.edu/senate/inmemoriam/KennethMarkColby.htm Kenneth Mark Colby ] ]

Later, Colby would be one of the first to explore the possibilities of computer-assisted psychotherapy. In 1989, with his son Peter Colby, he formed the company Malibu Artificial Intelligence Works to develop and market a natural language version of cognitive behavioral therapy for depression, called "Overcoming Depression." "Overcoming Depression" went on to be used as a therapeutic learning program by the U.S. Navy and the Department of Veterans Affairs, and was distributed to individuals who used it without supervision from a psychiatrist. This practice drew challenges from the media, to which Colby replied that the program could be better than human therapists because "After all, the computer doesn't burn out, look down on you or try to have sex with you." [quoted in "Mind as Machine: A History of Cognitive Science" By Margaret A. Boden]

Artificial Intelligence

In the 1960s at Stanford University, Colby embarked on the creation of software programs known as "chatterbots," which simulate conversations with people. One well-known chatterbot of the time was ELIZA, a computer program developed by Joseph Weizenbaum in 1966 to parody a Rogerian psychotherapist. ELIZA, by Weizenbaum's own admission, was developed more as a language-parsing tool than as an exercise in machine intelligence. Named for the Eliza Doolittle character in "Pygmalion," it was the first conversational computer program, designed to imitate a psychotherapist who asks questions instead of giving advice. It appeared to give conversational answers, although it could be led to lapse into obtuse nonsense.
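ELIZA's core mechanism was keyword-driven pattern matching: find a keyword in the input, "reflect" the matched fragment from first into second person, and slot it into a canned response template. The short Python sketch below illustrates the idea only; the rules, keywords, and responses are invented for illustration and are not taken from Weizenbaum's original MAD-SLIP script.

```python
import re

# A few illustrative rules in the spirit of an ELIZA script. Each rule
# pairs a keyword pattern with a response template; the captured
# fragment is echoed back inside the template.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

# First-person words are "reflected" into second person so the echoed
# fragment reads naturally ("my job" -> "your job").
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return DEFAULT  # fallback when no keyword matches

print(respond("I am sad about my job"))
# -> How long have you been sad about your job?
```

The fallback response explains why ELIZA "always had something to say": when no rule fires, a content-free prompt keeps the conversation going.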

In 1972, at the Stanford Artificial Intelligence Laboratory, Colby built upon the idea of ELIZA to create a natural language program called PARRY that simulated the thinking of a paranoid individual. This thinking entails the consistent misinterpretation of others' motives – others must be up to no good, they must have concealed motives that are dangerous, or their inquiries into certain areas must be deflected – which PARRY achieved via a complex system of assumptions, attributions, and "emotional responses" triggered by shifting weights assigned to verbal inputs.

PARRY: A Computer Model of Paranoia

Colby's aim in writing PARRY had been practical as well as theoretical. He thought of PARRY as a virtual reality teaching system for students before they were let loose on real patients. ["Mind as Machine: A History of Cognitive Science" By Margaret A. Boden p. 370] However, PARRY's design was driven by Colby's own theories about paranoia. Colby saw paranoia as a degenerate mode of processing symbols where the patient's remarks "are produced by an underlying organized structure of rules and not by a variety of random and unconnected mechanical failures." ["Artificial Paranoia : A Computer Simulation of Paranoid Processes" p. 99-100] This underlying structure was an algorithm, not unlike a set of computer processes or procedures, which is accessible and can be reprogrammed, in other words "cured."

Shortly after it was introduced, PARRY went on to create intense discussion and controversy over the possibility and nature of machine intelligence. PARRY was the first program to pass the "Turing test," named for the British mathematician Alan Turing, who in 1950 suggested that if a computer could successfully impersonate a human by carrying on a typed conversation with a person, it could be called intelligent. PARRY passed this test when human interrogators, interacting with the program via remote keyboard, could not distinguish PARRY from an actual paranoid individual at better than chance accuracy.

As philosopher D.C. Dennett stated in "Alan Turing: Life and Legacy of a Great Thinker," "To my knowledge, the only serious and interesting attempt by any program designer to win even a severely modified Turing test has been Kenneth Colby. He had genuine psychiatrists interview PARRY. He did not suggest that they might be talking or typing to a computer; rather he made up some plausible story about why they were communicating with a real live patient via teletype. Then he took the PARRY transcript, inserted it into a group of teletype transcripts and gave them to another group of experts -- more psychiatrists -- and said, 'One of these was a conversation with a computer. Can you figure out which one it was?' They couldn't." [In: "Alan Turing: Life and Legacy of a Great Thinker" By Christof Teuscher, Douglas Hofstadter, p. 304]

Much of the criticism of ELIZA as a model for artificial intelligence focused on the program's lack of an internal world model that could influence and track the conversation. PARRY simulates paranoid behavior by tracking its own internal emotional state along several dimensions, such as fear, anger, and mistrust. To illustrate the importance of this state, Colby created another program called RANDOM-PARRY, which chose its responses at random; its responses did not model the human patients' responses as well as standard PARRY's did. Some have argued that PARRY fooled its judges because paranoid behavior makes inappropriate responses or non sequiturs appropriate; yet such responses retain a logic of their own, which PARRY simulates effectively. Simulating paranoid behavior may be simpler than simulating ordinary conversation, but it is not trivial. [http://robot-club.com/lti/pub/aaai94.html "Chatterbots, Tinymuds, And The Turing Test: Entering The Loebner Prize Competition" by Michael L. Mauldin]

The two computer programs, ELIZA and the paranoid PARRY, each running separately, eventually met for a series of "talks." One such exchange went like this:

PARRY: I know the mob controls the big rackets.

ELIZA: Who are you really talking about?

PARRY: Have you ever really thought about the Mafia?

ELIZA: Suppose you ever really thought about the Mafia?

PARRY: Let's talk about something else.

ELIZA: I am not sure I understand you fully.

PARRY: You keep asking me the same things, why not change the topic?

ELIZA: In what way? [http://www.stanford.edu/group/SHR/4-2/text/dialogues.html "Dialogues with colorful personalities of early AI"]

Colby claimed that PARRY mimics the natural process by which a person (in this case a paranoid schizophrenic) engages in conversation: the structure of the program that makes PARRY's linguistic decisions is isomorphic to the 'deep structure' of the mind of the paranoiac. As Colby stated: "Since we do not know the structure of the 'real' simulative processes used by the mind-brain, our posited structure stands as an imagined theoretical analogue, a possible and plausible organization of processes analogous to the unknown processes and serving as an attempt to explain their workings". ["Artificial Paranoia: A Computer Simulation of Paranoid Processes," p. 21]

Yet some critics of PARRY expressed the concern that the program does not "understand" the way a person understands, and asserted that its idiosyncratic, partial, and idiolectic responses cover up its limitations. [http://cultronix.eserver.org/sengers/ "Wallowing in the Quagmire of Language: Artificial Intelligence, Psychiatry, and the Search for the Subject". Phoebe Sengers, Cultronix.] Colby attempted to answer these and other criticisms in a 1974 publication entitled "Ten Criticisms of Parry." [http://portal.acm.org/citation.cfm?doid=1045200.1045202 "Ten Criticisms of Parry" by Kenneth Colby]

Colby also raised his own ethical concerns over the application of his work to real life situations. In 1984, he wrote, "With the great amount of attention now being paid by the media to artificial intelligence, it would be naive, shortsighted, and even self-deceptive to think that there will not be public interest in scrutinizing, monitoring, regulating, and even constraining our efforts. What we do can affect people’s lives as they understand them. People are going to ask not only what we are doing but also whether it should be done. Some might feel we are meddling in areas best left alone. We should be prepared to participate in open discussion and debate on such ethical issues." ["Reloading a Human Memory: A New Ethical Question for Artificial Intelligence Technology." AI Magazine 6(4) (1986), pp. 63-64 ]

Still, PARRY has withstood the test of time and for many years has continued to be acknowledged by researchers in computer science for its apparent achievements. In a 1999 review of human-computer conversation, Yorick Wilks and Roberta Catizone from the University of Sheffield comment: "The best performance overall in HMC (Human-machine conversation) has almost certainly been Colby’s PARRY program since its release on the net around 1973. It was robust, never broke down, always had something to say and, because it was intended to model paranoid behaviour, its zanier misunderstandings could always be taken as further evidence of mental disturbance, rather than the processing failures they were." [arXiv:cs.CL/9906027 v1 25 Jun 1999 "Human-Computer Conversation" by Yorick Wilks and Roberta Catizone]

Other Areas of Study

During his career, Colby also ventured into more esoteric areas of research, including the classification of dreams in "primitive tribes." His findings suggested that men and women of such tribes differ in their dream life, differences that might provide an empirical basis for theoretical constructs of masculinity and femininity. ["Sex Differences in Dreams of Primitive Tribes," American Anthropologist, New Series, Vol. 65, No. 5: 1116-1122]

Books

* (1951) "A Primer for Psychotherapists." (ISBN 978-0826020901)
* (1955) "Energy and Structure in Psychoanalysis."
* (1957) "An exchange of views on psychic energy and psychoanalysis."
* (1958) "A Skeptical Psychoanalyst."
* (1960) "Introduction to Psychoanalytic Research"
* (1973) "Computer Models of Thought and Language."
* (1975) "Artificial Paranoia: A Computer Simulation of Paranoid Processes" (ISBN 9780080181622)
* (1983) "Fundamental Crisis in Psychiatry: Unreliability of Diagnosis" (ISBN 9780398047887)
* (1988) "Cognitive Science and Psychoanalysis" (ISBN 9780805801774)

Publications

* "Sex Differences in Dreams of Primitive Tribes" American Anthropologist, New Series, Vol. 65, No. 5, Selected Papers in Method and Technique (Oct., 1963), pp. 1116-1122
* "Computer Simulation of Change in Personal Belief Systems." Behavioral Science, 12 (1967), pp. 248-253
* "Dialogues Between Humans and an Artificial Belief System." IJCAI (1969), pp. 319-324
* "Experiments with a Search Algorithm for the Data Base of a Human Belief System." IJCAI (1969), pp. 649-654
* "Artificial Paranoia." Artif. Intell. 2(1) (1971), pp. 1-25
* "Turing-like Indistinguishability Tests for the Validation of a Computer Simulation of Paranoid Processes." Artif. Intell. 3(1-3) (1972), pp. 199-221
* "Idiolectic Language-Analysis for Understanding Doctor-Patient Dialogues." IJCAI (1973), pp. 278-284
* "Pattern-matching rules for the recognition of natural language dialogue expressions." Stanford University, Stanford, CA, 1974
* "Appraisal of four psychological theories of paranoid phenomena." Journal of Abnormal Psychology. Vol 86(1) (1977), pp. 54-59
* "Conversational Language Comprehension Using Integrated Pattern-Matching and Parsing." Artif. Intell. 9(2) (1977), pp. 111-134
* "Cognitive therapy of paranoid conditions: Heuristic suggestions based on a computer simulation model." Journal Cognitive Therapy and Research Vol 3 (1) (March 1979)
* "A Word-Finding Algorithm with a Dynamic Lexical-Semantic Memory for Patients with Anomia Using a Speech Prosthesis." AAAI (1980), pp. 289-291
* "Reloading a Human Memory: A New Ethical Question for Artificial Intelligence Technology." AI Magazine 6(4) (1986), pp. 63-64

See also

* Artificial Intelligence
* Chatterbot
* Cognitive Science
* ELIZA
* natural language processing
* Psychoanalysis
* Turing test

References

External links

* http://query.nytimes.com/gst/fullpage.html?res=9501E7DD1E3BF931A25756C0A9679C8B63
* http://www.stanford.edu/group/SHR/4-2/text/dialogues.html

