Question answering

Question answering (QA) is a type of information retrieval. Given a collection of documents (such as the World Wide Web or a local collection) the system should be able to retrieve answers to questions posed in natural language. QA is regarded as requiring more complex natural language processing (NLP) techniques than other types of information retrieval such as document retrieval, and it is sometimes regarded as the next step beyond search engines.

QA research attempts to deal with a wide range of question types including: fact, list, definition, "How", "Why", hypothetical, semantically constrained, and cross-lingual questions. Search collections vary from small local document collections, to internal organization documents, to compiled newswire reports, to the World Wide Web.

* "Closed-domain" question answering deals with questions under a specific domain (for example, medicine or automotive maintenance), and can be seen as an easier task because NLP systems can exploit domain-specific knowledge frequently formalized in ontologies.
* "Open-domain" question answering deals with questions about nearly everything, and can only rely on general ontologies and world knowledge. On the other hand, these systems usually have much more data available from which to extract the answer.

(Alternatively, "closed-domain" might refer to a situation where only limited types of questions are accepted, such as questions asking for descriptive rather than procedural information.)

Architecture

The first QA systems were developed in the 1960s and they were basically natural-language interfaces to expert systems that were tailored to specific domains. In contrast, current QA systems use text documents as their underlying knowledge source and combine various natural language processing techniques to search for the answers.

Current QA systems typically include a question classifier module that determines the type of question and the type of answer. After the question is analysed, the system typically uses several modules that apply increasingly complex NLP techniques on a gradually reduced amount of text. Thus, a document retrieval module uses search engines to identify the documents or paragraphs in the document set that are likely to contain the answer. Subsequently, a filter preselects small text fragments that contain strings of the same type as the expected answer. For example, if the question is "Who invented penicillin?", the filter returns text that contains names of people. Finally, an answer extraction module looks for further clues in the text to determine whether the answer candidate can indeed answer the question.
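
A minimal sketch of such a pipeline follows; every function name and heuristic in it is an illustrative assumption, not the API of any real QA system.

```python
# A minimal sketch of the pipeline described above. All names and
# heuristics here are illustrative assumptions, not a real QA library.

def classify_question(question):
    """Question classifier: map the question word to an expected answer type."""
    q = question.lower()
    if q.startswith("who"):
        return "PERSON"
    if q.startswith("when"):
        return "DATE"
    if q.startswith("where"):
        return "LOCATION"
    return "OTHER"

def retrieve(question, corpus):
    """Document retrieval: keep passages sharing a word with the question."""
    keywords = set(question.lower().split())
    return [p for p in corpus if keywords & set(p.lower().split())]

def filter_by_type(passages, answer_type):
    """Filter: keep passages likely to contain the expected answer type.
    A crude stand-in: mid-sentence capitalised words suggest a person name."""
    if answer_type == "PERSON":
        return [p for p in passages if any(w.istitle() for w in p.split()[1:])]
    return passages

def answer(question, corpus):
    """Answer extraction, radically simplified: return the best surviving passage."""
    candidates = filter_by_type(retrieve(question, corpus),
                                classify_question(question))
    return candidates[0] if candidates else None

corpus = ["Penicillin was discovered by Alexander Fleming in 1928.",
          "Aspirin is a common analgesic."]
print(answer("Who discovered penicillin?", corpus))
```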

Question answering methods

QA is heavily dependent on a good search corpus, for without documents containing the answer there is little any QA system can do. Larger collections therefore generally lend themselves to better QA performance, unless the question domain is orthogonal to the collection. The notion of data redundancy in massive collections, such as the web, means that nuggets of information are likely to be phrased in many different ways in differing contexts and documents, leading to two benefits: (1) by having the right information appear in many forms, the burden on the QA system to perform complex NLP techniques to understand the text is lessened; and (2) correct answers can be filtered from false positives by relying on the correct answer appearing more often in the documents than incorrect ones.
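
A minimal sketch of the second benefit, assuming candidate answer strings have already been extracted from many independently retrieved documents: the answer supported by the most documents wins.

```python
# A sketch of redundancy-based answer filtering. The candidate strings are
# assumed to have been extracted already from independent documents.

from collections import Counter

def vote(candidates):
    """Pick the candidate answer that recurs in the most documents."""
    counts = Counter(c.strip().lower() for c in candidates)
    best, _count = counts.most_common(1)[0]
    return best

# Hypothetical strings pulled from different pages for "Who invented penicillin?"
print(vote(["Alexander Fleming", "alexander fleming",
            "Howard Florey", "Alexander Fleming"]))
# -> "alexander fleming" (supported by 3 of 4 documents)
```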

Shallow

Some methods of QA use keyword-based techniques to locate interesting passages and sentences in the retrieved documents and then filter them based on the presence of the desired answer type within the candidate text. Ranking is then done based on syntactic features such as word order or location, and on similarity to the query.
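
A minimal sketch of such keyword-based ranking, with plain word overlap standing in for the syntactic and positional features mentioned above:

```python
# A sketch of keyword-based passage ranking: candidates are ordered by how
# many query words they share. The scoring scheme is an illustrative
# assumption, not a standard.

def rank_passages(query, passages):
    """Order candidate passages by word overlap with the query."""
    q_words = set(query.lower().split())
    def overlap(passage):
        return len(q_words & set(passage.lower().split()))
    return sorted(passages, key=overlap, reverse=True)

passages = ["Alexander Fleming discovered penicillin in 1928.",
            "Penicillin is an antibiotic.",
            "The weather was mild in 1928."]
print(rank_passages("Who discovered penicillin in 1928?", passages))
```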

When using massive collections with good data redundancy, some systems use templates to find the final answer in the hope that the answer is just a reformulation of the question. If you posed the question "What is a dog?", the system would detect the substring "What is a X" and look for documents which start with "X is a Y". This often works well on simple "factoid" questions seeking factual tidbits of information such as names, dates, locations, and quantities.
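
A sketch of this template trick using a regular expression; the single "What is a X?" pattern handled here is an illustrative assumption.

```python
# A sketch of template-based reformulation: "What is a X?" is rewritten into
# the declarative pattern "X is a Y", which is then searched for directly.

import re

def definition_pattern(question):
    """Turn 'What is a X?' into a regex that captures Y in 'X is a Y'."""
    m = re.match(r"what is an? (.+?)\??$", question.strip(), re.IGNORECASE)
    if not m:
        return None
    term = re.escape(m.group(1))
    return re.compile(rf"{term} is an? ([\w ]+)", re.IGNORECASE)

pattern = definition_pattern("What is a dog?")
match = pattern.search("A dog is a domesticated descendant of the wolf.")
print(match.group(1))  # -> "domesticated descendant of the wolf"
```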

Deep

However, in the cases where simple question reformulation or keyword techniques will not suffice, more sophisticated syntactic, semantic and contextual processing must be performed to extract or construct the answer. These techniques might include named-entity recognition, relation detection, coreference resolution, syntactic alternations, word sense disambiguation, logic form transformation, logical inference (abduction), commonsense reasoning, temporal or spatial reasoning, and so on. These systems also very often utilize world knowledge found in ontologies such as WordNet or the Suggested Upper Merged Ontology (SUMO) to augment the available reasoning resources through semantic connections and definitions.
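
As one small example of tapping such an ontology, the sketch below consults WordNet through the NLTK library (assuming nltk is installed and the WordNet corpus has been downloaded) to check a hypernym relation:

```python
# A sketch of consulting WordNet as a world-knowledge ontology via NLTK.
# Assumes `pip install nltk` and nltk.download("wordnet") have been run.

from nltk.corpus import wordnet as wn

def is_a(word, category):
    """True if any noun sense of `word` has `category` among its hypernyms."""
    for synset in wn.synsets(word, pos=wn.NOUN):
        hypernym_chain = synset.closure(lambda s: s.hypernyms())
        if any(category in h.lemma_names() for h in hypernym_chain):
            return True
    return False

print(is_a("dog", "animal"))  # -> True: WordNet knows a dog is an animal
```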

More difficult queries, such as "Why" or "How" questions, hypothetical postulations, spatially or temporally constrained questions, dialogue queries, and badly worded or ambiguous questions, all require this kind of deeper understanding of the question. Complex or ambiguous document passages likewise need more NLP techniques applied to understand the text.

Statistical QA, which introduces statistical question processing and answer extraction modules, is also growing in popularity in the research community. Many of the lower-level NLP tools used, such as part-of-speech tagging,
parsing, named-entity detection, sentence boundary detection, and
document retrieval, are already available as probabilistic applications.
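
For instance, a probabilistic part-of-speech tagger is available off the shelf in NLTK (assuming the tokenizer and tagger models have been downloaded):

```python
# A sketch of calling an off-the-shelf probabilistic NLP tool: NLTK's
# part-of-speech tagger. Requires nltk.download("punkt") and
# nltk.download("averaged_perceptron_tagger") beforehand.

import nltk

tokens = nltk.word_tokenize("Who invented penicillin?")
print(nltk.pos_tag(tokens))
# e.g. [('Who', 'WP'), ('invented', 'VBD'), ('penicillin', 'NN'), ('?', '.')]
```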

Issues

In 2002 a group of researchers wrote a roadmap of research in question answering (see external links). The following issues were identified.

Question classes: Different types of questions require the use of different strategies to find the answer. Question classes are arranged hierarchically in taxonomies.
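
Such a hierarchical taxonomy might be represented as a nested mapping; the class names in the sketch below are illustrative, not a standard taxonomy.

```python
# A sketch of a hierarchical question taxonomy as a nested mapping.
# The class names are illustrative assumptions.

TAXONOMY = {
    "FACTOID": {
        "PERSON": {},
        "LOCATION": {"CITY": {}, "COUNTRY": {}},
        "NUMERIC": {"DATE": {}, "COUNT": {}, "MONEY": {}},
    },
    "LIST": {},
    "DEFINITION": {},
}

def leaf_classes(taxonomy, prefix=()):
    """Enumerate root-to-leaf class paths, e.g. ('FACTOID', 'NUMERIC', 'DATE')."""
    for name, children in taxonomy.items():
        path = prefix + (name,)
        if children:
            yield from leaf_classes(children, path)
        else:
            yield path

print(list(leaf_classes(TAXONOMY)))
```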

Question processing: The same information request can be expressed in various ways, some interrogative, some assertive. A semantic model of question understanding and processing is needed, one that would recognize equivalent questions regardless of the speech act or of the words, syntactic inter-relations or idiomatic forms used. Such a model would enable the translation of a complex question into a series of simpler questions, and would identify ambiguities and treat them in context or by interactive clarification.

Context and Q&A: Questions are usually asked within a context and answers are provided within that specific context. The context can be used to clarify a question, resolve ambiguities or keep track of an investigation performed through a series of questions.

Data sources for Q&A: Before a question can be answered, it must be known what knowledge sources are available. If the answer to a question is not present in the data sources, then no matter how well we perform question processing, retrieval and answer extraction, we shall not obtain a correct result.

Answer extraction: Answer extraction depends on the complexity of the question, on the answer type provided by question processing, on the actual data where the answer is searched, on the search method, and on the question focus and context. Because answer processing depends on so many factors, it should be tackled with great care and given special importance.

Answer formulation: The result of a Q&A system should be presented as naturally as possible. In some cases, simple extraction is sufficient. For example, when the question classification indicates that the answer type is a name (of a person, organization, shop, disease, etc.), a quantity (monetary value, length, size, distance, etc.) or a date (e.g. the answer to the question "On what day did Christmas fall in 1989?"), the extraction of a single datum is sufficient. For other cases, the presentation of the answer may require the use of fusion techniques that combine partial answers from multiple documents.
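
A sketch of single-datum extraction for a date-typed answer, assuming question processing has already determined that the expected answer type is a date; the regex covers only "Month D, YYYY" dates, an illustrative assumption.

```python
# A sketch of single-datum answer extraction for a DATE answer type.

import re

DATE = re.compile(r"\b(?:January|February|March|April|May|June|July|August|"
                  r"September|October|November|December) \d{1,2}, \d{4}\b")

passage = "That year, Christmas fell on Monday, December 25, 1989."
match = DATE.search(passage)
print(match.group(0))  # -> "December 25, 1989"
```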

Real-time question answering: There is a need for Q&A systems capable of extracting answers from large data sets in several seconds, regardless of the complexity of the question, the size and multitude of the data sources, or the ambiguity of the question.

Multilingual question answering: The ability to develop Q&A systems for languages other than English is very important, as is the ability to find answers in texts written in languages other than English when the question is asked in English.

Interactive Q&A: It is often the case that the information need is not well captured by a Q&A system, because the question processing part fails to classify the question properly or the information needed for extracting and generating the answer is not easily retrieved. In such cases, the questioner might want not only to reformulate the question, but also to have a dialogue with the system.

Advanced reasoning for Q&A: More sophisticated questioners expect answers which are outside the scope of written texts or structured databases. To upgrade a Q&A system with such capabilities, we need to integrate reasoning components operating on a variety of knowledge bases, encoding world knowledge and common-sense reasoning mechanisms as well as knowledge specific to a variety of domains.

User profiling for Q&A: The user profile captures data about the questioner, comprising context data, domain of interest, reasoning schemes frequently used by the questioner, common ground established within different dialogues between the system and the user, etc. The profile may be represented as a predefined template, where each template slot represents a different profile feature. Profile templates may be nested one within another.

History

Some of the early AI systems were question answering systems. Two of the most famous QA systems of that time are BASEBALL and LUNAR, both of which were developed in the 1960s. BASEBALL answered questions about the US baseball league over a period of one year. LUNAR, in turn, answered questions about the geological analysis of rocks returned by the Apollo moon missions. Both QA systems were very effective in their chosen domains. In fact, LUNAR was demonstrated at a lunar science convention in 1971 and it was able to answer 90% of the questions in its domain posed by people untrained on the system. Further restricted-domain QA systems were developed in the following years. The common feature of all these systems is that they had a core database or knowledge system that was hand-written by experts in the chosen domain.

Some of the early AI systems included question-answering abilities. Two of the most famous early systems are SHRDLU and ELIZA. SHRDLU simulated the operation of a robot in a toy world (the "blocks world"), and it offered the possibility to ask the robot questions about the state of the world. Again, the strength of this system was the choice of a very specific domain and a very simple world with rules of physics that were easy to encode in a computer program. ELIZA, in contrast, simulated a conversation with a psychologist. ELIZA was able to converse on any topic by resorting to very simple rules that detected important words in the person's input. It had a very rudimentary way to answer questions, and on its own it led to a series of chatterbots such as the ones that participate in the annual Loebner Prize.

The 1970s and 1980s saw the development of comprehensive theories in computational linguistics, which led to ambitious projects in text comprehension and question answering. One example of such a system was the Unix Consultant (UC), which answered questions pertaining to the Unix operating system. The system had a comprehensive hand-crafted knowledge base of its domain, and it aimed at phrasing the answer to accommodate various types of users. Another project was LILOG, a text-understanding system that operated on the domain of tourism information in a German city. The systems developed in the UC and LILOG projects never went past the stage of simple demonstrations, but they helped the development of theories of computational linguistics and reasoning.

In the late 1990s the annual Text Retrieval Conference (TREC) included a question-answering track, which has been running to the present. Systems participating in this competition were expected to answer questions on any topic by searching a corpus of text that varied from year to year. This competition fostered research and development in open-domain text-based question answering. The best system in the 2004 competition answered 77% of the fact-based questions correctly.

In 2007 the annual Text Retrieval Conference (TREC) included a blog data corpus for question answering. The corpus contained both "clean" English and noisy text, including badly formed English and spam. The introduction of noisy text moved question answering to a more realistic setting: real-life data is inherently noisy, as people are less careful when writing in spontaneous media like blogs. In earlier years the TREC corpus consisted only of newswire data, which was very clean.

An increasing number of systems include the World Wide Web as one more corpus of text. Currently there is increasing interest in the integration of question answering with web search. Ask.com is an early example of such a system, and Google and Microsoft have started to integrate question-answering facilities into their search engines. One can expect to see even tighter integration in the near future.

External links

QA systems regularly compete in the TREC competition and in the CLEF evaluation campaign and some of them have demos available on the World Wide Web.

Evaluation Forums

* [http://trec.nist.gov/ TREC competition]
* [http://www.clef-campaign.org/ CLEF evaluation campaign]
* [http://research.nii.ac.jp/ntcir/ NTCIR project]

QA Systems & Demos

* [http://www.ask.com/ Ask Jeeves search engine]
* [http://www.brainboost.com/ Automatic question answering engine]
* [http://start.csail.mit.edu/ START Web-based Question Answering system at MIT]
* [http://demos.inf.ed.ac.uk:8080/qualim/ University of Edinburgh QA system - Search Wikipedia]
* [http://sourceforge.net/projects/openephyra/ OpenEphyra open source question answering system]
* [http://www.answerbus.com/ AnswerBus]
* [http://experimental-quetal.dfki.de/ DFKI Experimental Open Domain Web QA system]
* [http://qa.wpcarey.asu.edu/ ASU-QA prototype Web-based QA system]
* [http://wikiferret.com askEd! - a multilingual question answering system] ( [http://wikiferret.com English] , [http://wikiferret.com/edw/pc/index_j.html Japanese] , [http://wikiferret.com/edw/pc/index_cn.html Chinese] , [http://wikiferret.com/edw/pc/index_ru.html Russian] and [http://wikiferret.com/edw/pc/index_sw.html Swedish] )
* [http://www.ics.mq.edu.au/~pizzato/tellme TellMe QA: A prototype QA system]
* [http://www.laancor.com/technology/quadra/ QUADRA: Question Answering Digital Research Assistant]

Domain-specific QA Systems

* [http://eagl.unige.ch/EAGLi/ EAGLi: MEDLINE question answering engine]

Miscellaneous

* [http://www-nlpir.nist.gov/projects/duc/papers/qa.Roadmap-paper_v2.doc QA roadmap (Word file)]
* [http://www.languagecomputer.com/ Language Computer Corporation (LCC)]
* [http://www.laancor.com/ LAANCOR, the Language Analytic Corporation]
* [http://questsin.blogspot.com/2005/06/algorithm-for-generic-question.html Questsin, Blog on a simple do it yourself algorithm you could implement]
* [http://www.linckels.lu/chest/ CHESt, an e-Librarian Service that can be used as virtual private teacher]
* [http://1aiway.com/nlp4net/services/enparser/question.aspx Natural Language Question-Answer] QA demo and code for .NET Framework developers.
* [http://www.cnlp.org Center for Natural Language Processing at Syracuse University]
* [http://www.ephyra.info/ Ephyra question answering project at Carnegie Mellon]
* [http://thesis.liljenback.com/ Thesis on Restricted-Domain Question Answering]

