- Haskins Laboratories
Haskins Laboratories [http://www.haskins.yale.edu] is an independent, international, multidisciplinary community of researchers conducting basic research on spoken and written language. Founded in 1935 and located in New Haven, Connecticut since 1970, Haskins Laboratories is a private, non-profit research institute with a primary focus on speech, language, and reading, and their biological basis. Haskins Laboratories has a long history of technological and theoretical innovation, from creating the rules for speech synthesis and the first working prototype of a reading machine for the blind to developing the landmark concept of phonemic awareness as a critical preparation for learning to read.
History
Scores of researchers have contributed to scientific breakthroughs at Haskins Laboratories since its founding. All of them are indebted to the pioneering work and leadership of Caryl Parker Haskins [http://www.haskins.yale.edu/staff/cph.html], Franklin S. Cooper [http://www.haskins.yale.edu/staff/fsc.html], Alvin Liberman [http://www.haskins.yale.edu/staff/aml.html], Seymour Hutner [http://appserv.pace.edu/execute/page.cfm?doc_id=18832], and Luigi Provasoli [http://www.jstor.org/view/00243590/dm995012/99p0218l/0]. This history focuses on the research program of the main division of Haskins Laboratories, which, since the 1940s, has been best known for its work in the areas of speech, language, and reading. [http://www.haskins.yale.edu/sciencespoken.html Haskins Laboratories, "The Science of the Spoken and Written Word".]
1930s
Caryl Haskins and Franklin S. Cooper established Haskins Laboratories in 1935. It was originally affiliated with Harvard University, MIT, and Union College in Schenectady, NY. Caryl Haskins conducted research in microbiology, radiation physics, and other fields in Cambridge, MA, and Schenectady. In 1939 the Laboratories moved its center to New York City. Seymour Hutner joined the staff to set up a research program in microbiology, genetics, and nutrition. The descendant of this program [http://appserv.pace.edu/execute/page.cfm?doc_id=18327] is now part of Pace University in New York.
1940s
The U.S. Office of Scientific Research and Development, under Vannevar Bush, asked Haskins Laboratories to evaluate and develop technologies for assisting blinded World War II veterans. Experimental psychologist Alvin Liberman joined the Laboratories to assist in developing a "sound alphabet" to represent the letters in a text for use in a reading machine for the blind. Luigi Provasoli joined the Laboratories to set up a research program in marine biology. The program in marine biology moved to Yale University in 1970 and disbanded with Provasoli's retirement in 1978.
1950s
Franklin S. Cooper invented the pattern playback [http://cobweb.ecn.purdue.edu/~malcolm/interval/1994-036/] [http://www.haskins.yale.edu/featured/patplay.html], a machine that converts pictures of the acoustic patterns of speech back into sound. With this device, Alvin Liberman, Cooper, and Pierre Delattre [http://www.mindspring.com/~ssshp/ssshp_cd/ss_hask.htm] (later joined by Katherine Safford Harris [http://www.jstor.org/view/00978507/ap020297/02a00600/0], Leigh Lisker, and others) discovered the acoustic cues for the perception of phonetic segments (consonants and vowels). Liberman and colleagues proposed a "motor theory" [http://www.percepp.demon.co.uk/motorthy.htm] of speech perception to resolve the acoustic complexity: they hypothesized that we perceive speech by tapping into a biological specialization, a speech module, that contains knowledge of the acoustic consequences of articulation. Liberman, aided by Frances Ingemann [http://linguistlist.org/people/personal/get-personal-page2.cfm?PersonID=4996] and others, organized the results of the work on speech cues into a groundbreaking set of rules for speech synthesis by the Pattern Playback [http://www.haskins.yale.edu/featured/patplay.html].
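The machine's principle, in which a painted spectrotemporal pattern gates the harmonics of a fixed-frequency tone source, can be suggested in a short sketch. This is a minimal illustration of the idea, not a model of the actual optical-electronic device; the 120 Hz fundamental approximates the original tone wheel, and the frame duration and example pattern are assumptions chosen for the demonstration.

```python
import numpy as np

def pattern_playback(pattern, f0=120.0, sr=16000, frame_dur=0.01):
    """Resynthesize sound from a painted time-frequency pattern.

    pattern: 2-D array, shape (n_harmonics, n_frames); pattern[h, t] is the
    "brightness" of harmonic h+1 during frame t (0 = dark, 1 = fully painted).
    Each harmonic of f0 is amplitude-modulated by its row of the pattern,
    loosely mimicking light shone through the painted spectrogram.
    """
    n_harm, n_frames = pattern.shape
    n = int(n_frames * frame_dur * sr)
    t = np.arange(n) / sr
    # Hold each frame's amplitudes constant over that frame's samples.
    amps = np.repeat(pattern, n // n_frames + 1, axis=1)[:, :n]
    out = np.zeros(n)
    for h in range(n_harm):
        out += amps[h] * np.sin(2 * np.pi * f0 * (h + 1) * t)
    return out / n_harm  # crude normalization

# Example: a single "formant" band sweeping upward across harmonics.
pat = np.zeros((20, 50))
for frame in range(50):
    pat[3 + frame // 10, frame] = 1.0
audio = pattern_playback(pat)  # 0.5 s of audio at 16 kHz
```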
1960s
Franklin S. Cooper and Katherine Safford Harris, working with Peter MacNeilage [http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JASMAN000035000011001911000004&idtype=cvips&gifs=yes], were the first researchers in the U.S. to use electromyographic techniques, pioneered at the University of Tokyo, to study the neuromuscular organization of speech. Leigh Lisker and Arthur Abramson [http://www.haskins.yale.edu/staff/abramson.html] looked for simplification at the level of articulatory action in the voicing of certain contrasting consonants. They showed that many acoustic properties of voicing contrasts arise from variations in voice onset time, the relative phasing of the onset of vocal cord vibration and the end of a consonant. Their work has been widely replicated and elaborated, in the United States and abroad, over the following decades. Donald Shankweiler [http://www.haskins.yale.edu/staff/shankweiler.html] and Michael Studdert-Kennedy [http://www.haskins.yale.edu/staff/msk.html] used a dichotic listening technique (presenting different nonsense syllables simultaneously to opposite ears) to demonstrate the dissociation of phonetic (speech) and auditory (nonspeech) perception, finding that phonetic structure devoid of meaning is an integral part of language, typically processed in the left cerebral hemisphere. Liberman, Cooper, Shankweiler, and Studdert-Kennedy summarized and interpreted fifteen years of research in "Perception of the Speech Code," still among the most cited papers in the speech literature. It set the agenda for many years of research at Haskins and elsewhere by describing speech as a code in which speakers overlap (or coarticulate) segments to form syllables. Researchers at Haskins connected their first computer to a speech synthesizer designed by the Laboratories' engineers. Ignatius Mattingly [http://www.haskins.yale.edu/staff/IGM.html], with British collaborators John N. Holmes [http://www.amazon.co.uk/dp/0748408576] and J. N. Shearme [http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JASMAN000035000011001911000004&idtype=cvips&gifs=yes], adapted the Pattern Playback rules to write the first computer program for synthesizing continuous speech from a phonetically spelled input. A further step toward a reading machine for the blind combined Mattingly's program with an automatic look-up procedure for converting alphabetic text into strings of phonetic symbols.
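Voice onset time itself is a simple interval measure, and a minimal sketch makes the definition concrete; the event times below are illustrative values, not Lisker and Abramson's data.

```python
def voice_onset_time(release_ms, voicing_onset_ms):
    """VOT = onset of vocal cord vibration minus the consonant's release burst.

    Positive lag (voicing follows the release) characterizes English
    voiceless stops such as /p t k/; near-zero values (voicing begins at
    or just after release) characterize voiced stops such as /b d g/.
    """
    return voicing_onset_ms - release_ms

print(voice_onset_time(100.0, 160.0))  # +60 ms: long-lag, /p/-like
print(voice_onset_time(100.0, 105.0))  # +5 ms: short-lag, /b/-like
```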
1970s
In 1970 Haskins Laboratories moved to New Haven, Connecticut, and entered into affiliation agreements with Yale University and the University of Connecticut. Isabelle Liberman, Donald Shankweiler, and Alvin Liberman teamed up with Ignatius Mattingly to study the relationship between speech perception and reading, a topic implicit in the Laboratories' research program since its inception. They developed the concept of phonemic awareness, the knowledge that would-be readers must have of the phonemic structure of their language in order to be able to read. Under the broad rubric of the "alphabetic principle," this is the core of the Laboratories' present program of reading pedagogy. Patrick Nye [http://www.haskins.yale.edu/staff/nye.html] joined the Laboratories to lead a team working on the reading machine for the blind. The project culminated when the addition of an optical character recognizer allowed investigators to assemble the first automatic text-to-speech reading machine. By the end of the decade this technology had advanced to the point where commercial concerns assumed the task of designing and manufacturing reading machines for the blind [http://www.kurzweiltech.com/kesi.html].
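The look-up step of such a reading machine, turning recognized text into phonetic symbols for a synthesis-by-rule program, can be suggested with a toy sketch; the miniature lexicon and the ARPABET-style symbols here are assumptions invented for the example, not the notation or tables the Laboratories actually used.

```python
# Toy illustration of automatic look-up: alphabetic text in, a string of
# phonetic symbols out, ready to feed a synthesis-by-rule program.
LEXICON = {
    "this": "DH IH S",
    "machine": "M AH SH IY N",
    "reads": "R IY D Z",
}

def text_to_phonetic_symbols(text: str) -> str:
    """Convert alphabetic text to phonetic symbols by dictionary look-up."""
    words = text.lower().split()
    return " ".join(LEXICON.get(w, "<unknown>") for w in words)

print(text_to_phonetic_symbols("this machine reads"))
# -> DH IH S M AH SH IY N R IY D Z
```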
In 1973 Franklin S. Cooper was selected to form a panel of six experts [Time Magazine (1973). "The Secretary and the Tape Tangle." "Time Magazine", Dec. 10, 1973.] charged with investigating the famous 18-minute gap in the White House office tapes of President Richard Nixon related to the Watergate scandal. [http://www.paloaltoonline.com/weekly/morgue/community_pulse/1999_Mar_10.LEADOBIT.html]
Building on earlier work, Philip Rubin developed the sinewave synthesis program, which was then used by Robert Remez, Rubin, and colleagues to show that listeners can perceive continuous speech without traditional speech cues from a pattern of sinewaves that track the changing resonances of the vocal tract. This paved the way for a view of speech as a dynamic pattern of trajectories through articulatory-acoustic space. Philip Rubin and colleagues developed Paul Mermelstein's anatomically simplified vocal tract model [http://www.mindspring.com/~ssshp/ssshp_cd/ss_btl2.htm], originally worked on at Bell Laboratories, into the first articulatory synthesizer [http://www.haskins.yale.edu/facilities/asy.html] that can be controlled in a physically meaningful way and used for interactive experiments.
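The logic of sinewave synthesis, replacing each vocal tract resonance with a single time-varying sinusoid, can be sketched briefly; the formant trajectories and parameter values below are schematic assumptions for a rough /da/-like syllable, not output of Rubin's program.

```python
import numpy as np

def sinewave_speech(formant_tracks, amp_tracks, sr=16000):
    """Sum one frequency-modulated sinusoid per formant track.

    formant_tracks / amp_tracks: matched lists of arrays sampled at sr,
    giving each formant's center frequency (Hz) and amplitude over time.
    """
    out = np.zeros(len(formant_tracks[0]))
    for freq, amp in zip(formant_tracks, amp_tracks):
        # Integrate instantaneous frequency to get a smoothly varying phase.
        phase = 2 * np.pi * np.cumsum(freq) / sr
        out += amp * np.sin(phase)
    return out / len(formant_tracks)

# Schematic /da/-like trajectories over 300 ms: F1 rises into the vowel,
# F2 falls from an alveolar-like locus, F3 stays roughly flat.
sr, n = 16000, 4800
f1 = np.linspace(250, 700, n)
f2 = np.linspace(1800, 1200, n)
f3 = np.full(n, 2500.0)
amps = [np.ones(n)] * 3
audio = sinewave_speech([f1, f2, f3], amps, sr)
```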
1980s
Studies of different writing systems supported the controversial hypothesis that all reading necessarily activates the phonological form of a word before, or at the same time as, its meaning. Work included experiments by George Lukatela [http://www.haskins.yale.edu/STAFF/lukatela.html], Michael Turvey [http://www.sp.uconn.edu/~wwwpsyc/Faculty/Turvey/Turvey.html], Leonard Katz [http://web.uconn.edu/psychology/people/Faculty/Katz/Katz.html], Ram Frost [http://micro5.mscc.huji.ac.il/~frost/], Laurie Feldman [http://www.albany.edu/psy/feldman.html], and Shlomo Bentin [http://pissaro.soc.huji.ac.il/Shlomo/people/shlomo.html], in a variety of languages. Various researchers developed compatible theoretical accounts of speech production [Gloria J. Borden and Katherine S. Harris. "Speech Science Primer: Physiology, acoustics, and perception of speech". Second Edition. Williams & Wilkins, Baltimore, MD, 1984], speech perception, and phonological knowledge. Carol Fowler [http://www.haskins.yale.edu/staff/caf.html] proposed a direct realism theory of speech perception: listeners perceive gestures not by means of a specialized decoder, as in the motor theory, but because information in the acoustic signal specifies the gestures that form it. J. A. Scott Kelso and colleagues demonstrated functional synergies in speech gestures experimentally. Elliot Saltzman [http://www.haskins.yale.edu/staff/saltzman.html] developed a dynamical systems theory of synergetic action and implemented the theory as a working model of speech production. Linguists Catherine Browman [http://www.haskins.yale.edu/staff/browman.html] and Louis Goldstein [http://www.yale.edu/linguist/faculty/louis.html] developed the theory of articulatory phonology [http://www.haskins.yale.edu/research/gestural.html], in which gestures are the basic units of both phonetic action and phonological knowledge. Articulatory phonology, the task dynamic model, and the articulatory synthesis model are combined into a gestural computational model of speech production. [http://www.haskins.yale.edu/research/gestural.html] Saltzman and Rubin started the IS group [http://www.wikinfo.org/wiki.php?title=IS_group] to explore cutting-edge developments in science and technology and foster collaboration across institutions and disciplines. The group, not formally affiliated with Haskins Laboratories, continues to meet.
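Saltzman's task-dynamic treatment models each gesture as a damped mass-spring system drawn toward its target. A minimal sketch of such a point attractor follows; the stiffness, damping, and aperture values are illustrative assumptions, not parameters of the working model.

```python
import numpy as np

def gesture(x0, target, k=200.0, dur=0.3, dt=0.001):
    """Critically damped point attractor: x'' + b*x' + k*(x - target) = 0.

    A vocal-tract variable (e.g. lip aperture) starts at x0 and settles on
    its target; with critical damping (b = 2*sqrt(k), unit mass) it
    approaches without overshoot, like a gesture reaching its goal.
    """
    b = 2.0 * np.sqrt(k)            # critical damping for unit mass
    x, v = x0, 0.0
    traj = []
    for _ in range(int(dur / dt)):  # simple Euler integration
        a = -b * v - k * (x - target)
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

# Example: lip aperture closing from 10 mm toward 0 mm, as for a /b/ closure.
closure = gesture(x0=10.0, target=0.0)
```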
1990s
Katherine Safford Harris [Frederica Bell-Berti. "Producing Speech: Contemporary Issues, for Katherine Safford Harris". Springer, 1995.], Frederica Bell-Berti [http://www.stjohns.edu/academics/graduate/liberalarts/departments/speech], and colleagues studied the phasing and cohesion of articulatory speech gestures. Kenneth Pugh [http://www.yalereadingcenter.com/] was among the first scientists to use functional magnetic resonance imaging (fMRI) to reveal brain activity associated with reading and reading disabilities. Pugh, Donald Shankweiler [http://www.sp.uconn.edu/~wwwpsyc/Faculty/Shankweiler/Shankweiler.html], Weija Ni [http://www.csr.nih.gov/photodisplay/finalinter.aspx?id=1258&orgid=340010003&other=0], Einar Mencl [http://www.haskins.yale.edu/staff/mencl.html], and colleagues developed novel applications of neuroimaging to measure brain activity associated with understanding sentences. Philip Rubin, Louis Goldstein, and Mark Tiede [http://www.haskins.yale.edu/staff/tiede.html] designed a radical revision of the articulatory synthesis model, known as CASY [http://www.haskins.yale.edu/facilities/casy.html], the configurable articulatory synthesizer. This three-dimensional model of the vocal tract permits researchers to replicate MRI images of actual speakers. Douglas Whalen [http://www.haskins.yale.edu/staff/whalen.html], Goldstein, Rubin, and colleagues extended this work to study the relation between speech production and perception. [http://www.haskins.yale.edu/newsrelease/A93-2006.html] Donald Shankweiler, Susan Brady [http://www.uri.edu/artsci/psy/schpsy/Faculty.html] [http://www.haskins.yale.edu/staff/brady.html], Anne Fowler [http://www.haskins.yale.edu/staff/fowlera.html], and others explored whether weak memory and perception in poor readers are tied specifically to phonological deficits. The evidence argued against broader cognitive deficits as the basis of reading difficulties and raised questions about impaired phonological representations in disabled readers.
2000s
Anne Fowler [http://www.haskins.yale.edu/staff/fowlera.html] and Susan Brady [http://www.haskins.yale.edu/mrin/staff/brady.html] launched the Early Reading Success (ERS) program [http://www.haskins.yale.edu:16080/ers/], part of the Haskins Literacy Initiative [http://www.haskins.yale.edu/hli.html], which promotes the science of teaching reading. The ERS program was a demonstration project examining the efficacy of professional development in reading instruction for teachers of children in kindergarten through second grade. The Mastering Reading Instruction program [http://www.haskins.yale.edu/mrin.html], which combines professional development with Haskins-trained mentors, was a continuation of ERS. David Ostry [http://www.psych.mcgill.ca/labs/mcl/David%20J_%20Ostry.htm] and colleagues explored the neurological underpinnings of motor control using a robot arm to influence jaw movement. Douglas Whalen and Khalil Iskarous [http://www.haskins.yale.edu/staff/iskarous.html] pioneered the pairing of ultrasound, used to monitor articulators that cannot be seen, with Optotrak [http://www.ndigital.com/certus.php], an opto-electronic position-tracking device, used to monitor visible articulators. Donald Shankweiler [http://www.sp.uconn.edu/~wwwpsyc/Faculty/Shankweiler/Shankweiler.html] and David Braze [http://www.haskins.yale.edu/staff/braze.html] developed an eye movement laboratory that combines eye tracking data with brain activity measures for investigating reading processes in normal and disabled readers. In March 2005 Haskins Laboratories moved to a new state-of-the-art facility on George Street in New Haven.
Also see (people)
* Arthur S. Abramson
* Susan Brady
* [http://www.haskins.yale.edu/staff/braze.html David Braze]
* Catherine Browman
* Franklin S. Cooper
* Carol Fowler
* Louis M. Goldstein
* Katherine Safford Harris
* Caryl Parker Haskins
* Leonard Katz
* J. A. Scott Kelso
* Alvin Liberman
* Isabelle Liberman
* Philip Lieberman
* Leigh Lisker
* Ignatius Mattingly
* David Ostry
* Robert Remez
* Philip Rubin
* Elliot Saltzman
* Donald Shankweiler
* Michael Studdert-Kennedy
* Michael Turvey
* Douglas Whalen
Also see (topics)
* alphabetic principle
* articulatory phonology
* articulatory synthesis
* categorical perception
* coarticulation
* cognitive science
* cognitive neuroscience
* dichotic listening
* direct realism
* experimental psychology
* eye tracking
* [http://www.wikinfo.org/wiki.php?title=IS_group IS group]
* linguistics
* motor control
* Pattern playback
* phonemic awareness
* phonological awareness
* reading
* reading machine
* sinewave synthesis
* speech perception
* speech synthesis
* voice onset time
* Watergate tapes
References
* Frederica Bell-Berti. "Producing Speech: Contemporary Issues, for Katherine Safford Harris". Springer, 1995.
* Gloria J. Borden and Katherine S. Harris. "Speech Science Primer: Physiology, acoustics, and perception of speech. Second Edition". Williams & Wilkins, Baltimore, MD, 1984.
* Alice B. Dadourian. "A Bio-Biography of Caryl Parker Haskins". Yvonix, New Haven, Connecticut, 2000.
* Haskins Laboratories. "The Science of the Spoken and Written Word". Haskins Laboratories, New Haven, CT, 2005.
* James F. Kavanagh and Ignatius G. Mattingly (eds.), "Language by Ear and by Eye: The Relationships between Speech and Reading". The MIT Press, Cambridge, MA: 1972. (Paperback edition, 1974, ISBN 0262610159).
* Alvin M. Liberman. "Speech: a special code". The MIT Press, Cambridge, MA: 1996.
* A. M. Liberman, F. S. Cooper, D. S. Shankweiler, and M. Studdert-Kennedy. Perception of the speech code. "Psychological Review", 74, 1967, 431-461.
* A. M. Liberman, K. S. Harris, H. S. Hoffman, and B. C. Griffith. The discrimination of speech sounds within and across phoneme boundaries. "Journal of Experimental Psychology", 54, 1957, 358-368.
* Ignatius G. Mattingly & Michael Studdert-Kennedy (Eds.), "Modularity and the Motor Theory of Speech Perception": Proceedings of a Conference to Honor Alvin M. Liberman. Hillsdale, NJ: Lawrence Erlbaum: 1991. (Paperback, ISBN 0805803319)
* Patrick W. Nye, Smithsonian Speech Synthesis History Project, August 1, 1989 [http://www.mindspring.com/~ssshp/ssshp_cd/ss_hask.htm]
* Malcolm Slaney. Pattern playback from 1950 to 1955. "Proceedings of the 1995 IEEE Systems, Man and Cybernetics Conference", October 22-25, 1995, Vancouver, Canada. Copyright 1995, IEEE. [http://cobweb.ecn.purdue.edu/~malcolm/interval/1994-036/]