Co-training

Co-training is a machine learning algorithm used when there are only small amounts of labeled data and large amounts of unlabeled data. One of its uses is in text mining for search engines. It was introduced by Avrim Blum and Tom Mitchell in 1998.

Algorithm design

Co-training is a semi-supervised learning technique that requires two views of the data. It assumes that each example is described using two different feature sets that provide different, complementary information about the instance. Ideally, the two views are conditionally independent (i.e., the two feature sets of each instance are conditionally independent given the class) and each view is sufficient (i.e., the class of an instance can be accurately predicted from each view alone). Co-training first learns a separate classifier for each view using any labeled examples. The most confident predictions of each classifier on the unlabeled data are then used to iteratively construct additional labeled training data.[1]
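The iterative loop described above can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the original implementation: a toy nearest-centroid learner stands in for an arbitrary per-view classifier, and all function names (`train_centroids`, `predict_with_confidence`, `co_train`) are hypothetical.

```python
import math

def train_centroids(examples, labels):
    # Fit a toy nearest-centroid classifier on one view:
    # the centroid of a class is the mean of its feature vectors.
    sums, counts = {}, {}
    for x, y in zip(examples, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict_with_confidence(centroids, x):
    # Return (label, confidence), where confidence is the margin
    # between the nearest and second-nearest class centroid.
    ranked = sorted((math.dist(x, c), y) for y, c in centroids.items())
    margin = ranked[1][0] - ranked[0][0] if len(ranked) > 1 else ranked[0][0]
    return ranked[0][1], margin

def co_train(view1, view2, labels, unlabeled1, unlabeled2, rounds=3):
    # Each round, each view's classifier self-labels the unlabeled
    # example it is most confident about; the new (example, label)
    # pair is added to the shared training set for both views.
    v1, v2, y = list(view1), list(view2), list(labels)
    pool = list(range(len(unlabeled1)))
    for _ in range(rounds):
        for view in (unlabeled1, unlabeled2):
            if not pool:
                break
            train = v1 if view is unlabeled1 else v2
            centroids = train_centroids(train, y)
            best = max(pool,
                       key=lambda i: predict_with_confidence(centroids, view[i])[1])
            label, _ = predict_with_confidence(centroids, view[best])
            v1.append(unlabeled1[best])
            v2.append(unlabeled2[best])
            y.append(label)
            pool.remove(best)
    # Return one classifier per view, trained on the grown labeled set.
    return train_centroids(v1, y), train_centroids(v2, y)
```

The key design point is that each classifier sees only its own view when predicting, but the label it assigns to a confidently-predicted example is added to the training data of both views, which is how information flows between the two classifiers.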

The original co-training paper described experiments that classified web pages as "academic course home page" or not; starting from only 12 labeled web pages, the classifier correctly categorized 95% of 788 web pages.[2] The paper has been cited over 1000 times and received the 10-Year Best Paper Award at the 25th International Conference on Machine Learning (ICML 2008).[3][4]

Krogel and Scheffer showed in 2004 that co-training is only beneficial if the classifiers built on the two views are independent. Co-training can only help when one classifier correctly labels an example that the other classifier would misclassify; if the two classifiers are so dependent that they agree on all the unlabeled data, labeling that data creates no new information. When they applied co-training to problems in functional genomics, it worsened the results, as the dependence between the classifiers exceeded 60%.[5]

Uses

Co-training has been used to classify web pages using the text on the page as one view and the anchor text of hyperlinks on other pages that point to the page as the other view. Simply put, the text in a hyperlink on one page can give information about the page it links to.[2] Co-training can work on "unlabeled" text that has not already been classified or tagged, which is typical for the text appearing on web pages and in emails. According to Tom Mitchell, "The features that describe a page are the words on the page and the links that point to that page. The co-training models utilize both classifiers to determine the likelihood that a page will contain data relevant to the search criteria." The text on a page can thus be used to check the judgments of the link-based classifier, and vice versa; this mutual teaching is the source of the term "co-training". Mitchell claims that other search algorithms are 86% accurate, whereas co-training is 96% accurate.[6]

Co-training was used on FlipDog.com, a job search site, and by the U.S. Department of Labor, for a directory of continuing and distance education.[6] It has been used in many other applications, including statistical parsing and visual detection.[7]

References

  1. ^ Blum, A.; Mitchell, T. (1998). "Combining labeled and unlabeled data with co-training". COLT: Proceedings of the Workshop on Computational Learning Theory. Morgan Kaufmann. pp. 92–100.
  2. ^ a b Committee on the Fundamentals of Computer Science: Challenges and Opportunities, National Research Council (2004). "6: Achieving Intelligence". Computer Science: Reflections on the Field, Reflections from the Field. The National Academies Press. ISBN 0-309-09301-5. http://books.nap.edu/openbook.php?isbn=0309093015&page=101. 
  3. ^ McCallum, Andrew (2008). "Best Papers Awards". ICML Awards. http://videolectures.net/icml08_mccallum_papers/. Retrieved 2009-05-03. 
  4. ^ Shavlik, Jude (2008). "10 Year Best Paper: Combining labeled and unlabeled data with co-training". ICML Awards. http://videolectures.net/icml08_shavlik_clud/. Retrieved 2009-05-03. 
  5. ^ Krogel, Marc-A; Tobias Scheffer (2004). "Multi-Relational Learning, Text Mining, and Semi-Supervised Learning for Functional Genomics". Machine Learning (Kluwer Academic Publishers) 57: 61–81. doi:10.1023/B:MACH.0000035472.73496.0c. http://www.cs.uni-potsdam.de/ml/publications/mlj2004.pdf. 
  6. ^ a b Aquino, Stephen (24 April 2001). "Search Engines Ready to Learn". Technology Review. http://www.technologyreview.com/web/12353/. Retrieved 2009-05-03. 
  7. ^ Xu, Qian; Derek Hao Hu, Hong Xue, Weichuan Yu, and Qiang Yang (2009). "Semi-supervised protein subcellular localization". BMC Bioinformatics (London: BioMed Central) 10: S47. doi:10.1186/1471-2105-10-S1-S47. ISSN 1471-2105. PMC 2648770. PMID 19208149. http://www.biomedcentral.com/1471-2105/10/S1/S47. 
