Content-based image retrieval

Content-based image retrieval (CBIR), also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), is the application of computer vision techniques to the image retrieval problem, that is, the problem of searching for digital images in large databases (see this survey[1] for a recent scientific overview of the field). Content-based image retrieval is opposed to concept-based approaches (see concept-based image indexing).

"Content-based" means that the search analyzes the actual contents of the image rather than metadata such as keywords, tags, or descriptions associated with the image. The term 'content' in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. CBIR is desirable because most web-based image search engines rely purely on metadata, which produces many irrelevant results. Having humans manually enter keywords for images in a large database is also inefficient and expensive, and may not capture every keyword that describes the image. A system that can filter images based on their content would therefore provide better indexing and return more accurate results.



History

The term Content-Based Image Retrieval (CBIR) seems to have originated in 1992, when it was used by T. Kato to describe experiments into automatic retrieval of images from a database, based on the colors and shapes present.[2] Since then, the term has been used to describe the process of retrieving desired images from a large collection on the basis of syntactical image features. The techniques, tools and algorithms that are used originate from fields such as statistics, pattern recognition, signal processing, and computer vision.

Technical progress

There is a growing interest in CBIR because of the limitations inherent in metadata-based systems, as well as the large range of possible uses for efficient image retrieval. Textual information about images can be easily searched using existing technology, but requires humans to personally describe every image in the database. This is impractical for very large databases, or for images that are generated automatically, e.g. from surveillance cameras. It is also possible to miss images that use different synonyms in their descriptions. Systems based on categorizing images in semantic classes like "cat" as a subclass of "animal" avoid this problem but still face the same scaling issues.

Potential uses for CBIR include:

  • Art collections
  • Photograph archives
  • Retail catalogs
  • Medical diagnosis
  • Crime prevention
  • The military
  • Intellectual property
  • Architectural and engineering design
  • Geographical information and remote sensing systems

CBIR software systems

  • University of Washington FIDS Demo[3]
  • CIRES: Content Based Image Retrieval System[4]
  • Impezzeo Image Suite Visual Search
  • LTU-Corbis Visual Search[5]
  • TinEye[6]
  • Cortina[7]
  • Octagon[8]
  • Windsurf[9]
  • Visual recognition factory[10]

See CBIR engines for other examples of publicly available and accessible CBIR systems.

CBIR techniques

Many CBIR systems have been developed, but the problem of retrieving images on the basis of their pixel content remains largely unsolved.

Query techniques

Different implementations of CBIR make use of different types of user queries.

Query by example is a query technique that involves providing the CBIR system with an example image that it will then base its search upon. The underlying search algorithms may vary depending on the application, but result images should all share common elements with the provided example.

Options for providing example images to the system include:

  • A preexisting image may be supplied by the user or chosen from a random set.
  • The user draws a rough approximation of the image they are looking for, for example with blobs of color or general shapes.[11]

This query technique removes the difficulties that can arise when trying to describe images with words.
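As a minimal sketch of query by example, assume each image in the collection has already been reduced to a fixed-length feature vector by some feature extractor; the file names and vectors below are made up for illustration:

```python
import math

# Toy index: image id -> precomputed feature vector.
# In a real system these vectors would come from color, texture,
# or shape analysis of the images themselves.
INDEX = {
    "beach.jpg":  [0.9, 0.8, 0.1],
    "forest.jpg": [0.1, 0.9, 0.2],
    "city.jpg":   [0.5, 0.5, 0.5],
}

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def query_by_example(example_vector, index, k=2):
    """Rank the indexed images by distance to the example image's
    feature vector and return the k closest image ids."""
    ranked = sorted(index, key=lambda name: euclidean(example_vector, index[name]))
    return ranked[:k]

print(query_by_example([0.8, 0.7, 0.2], INDEX))  # -> ['beach.jpg', 'city.jpg']
```

The search never looks at keywords: the user supplies (or draws) an image, the system extracts the same features from it, and ranking is purely a matter of distances in feature space.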

Semantic retrieval

The ideal CBIR system from a user perspective would involve what is referred to as semantic retrieval, where the user makes a request like "find pictures of dogs" or even "find pictures of Abraham Lincoln". This type of open-ended task is very difficult for computers to perform: pictures of chihuahuas and Great Danes look very different, and Lincoln may not always be facing the camera or be in the same pose. Current CBIR systems therefore generally make use of lower-level features like texture, color, and shape, although some systems take advantage of very common higher-level features like faces (see facial recognition system). Not every CBIR system is generic. Some systems are designed for a specific domain, e.g. shape matching can be used for finding parts inside a CAD-CAM database.

Other query methods

Other query methods include browsing for example images, navigating customized/hierarchical categories, querying by image region (rather than the entire image), querying by multiple example images, querying by visual sketch, querying by direct specification of image features, and multimodal queries (e.g. combining touch, voice, etc.) [1].

CBIR systems can also make use of relevance feedback, where the user progressively refines the search results by marking images in the results as "relevant", "not relevant", or "neutral" to the search query, then repeating the search with the new information.
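One common way to implement relevance feedback is a Rocchio-style query update borrowed from text retrieval: the query vector is moved toward the centroid of images the user marked relevant and away from those marked not relevant. The sketch below assumes images are represented as feature vectors; the weights are the conventional defaults, not a fixed standard:

```python
def refine_query(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style relevance feedback update.

    Moves the query feature vector toward the centroid of the
    vectors marked "relevant" and away from the centroid of those
    marked "not relevant"; "neutral" results are simply omitted.
    """
    dims = len(query)

    def centroid(vectors):
        if not vectors:
            return [0.0] * dims
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dims)]

    r = centroid(relevant)
    nr = centroid(non_relevant)
    return [alpha * query[i] + beta * r[i] - gamma * nr[i] for i in range(dims)]
```

Repeating the search with the refined vector lets the system converge on what the user meant without the user ever describing it in words.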

Content comparison using image distance measures

The most common method for comparing two images in content-based image retrieval (typically an example image and an image from the database) is an image distance measure. An image distance measure compares the similarity of two images in various dimensions such as color, texture, shape, and others. A distance of 0 signifies an exact match with the query, with respect to the dimensions that were considered, while values greater than 0 indicate increasing degrees of dissimilarity. Search results can then be sorted by their distance to the query image.[11] A long list of distance measures can be found in [12].
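A combined measure along these lines can be sketched as a weighted sum of per-dimension distances; the dimension names and weights below are illustrative, not part of any standard:

```python
def combined_distance(img_a, img_b, weights):
    """Overall image distance as a weighted sum of per-dimension
    distances. Each image is a dict mapping a dimension name
    ("color", "texture", ...) to a feature vector; here each
    per-dimension distance is the L1 (city-block) distance.
    Identical images yield a distance of 0."""
    total = 0.0
    for dim, w in weights.items():
        a, b = img_a[dim], img_b[dim]
        total += w * sum(abs(x - y) for x, y in zip(a, b))
    return total

img_a = {"color": [0.2, 0.5], "texture": [0.7]}
img_b = {"color": [0.4, 0.5], "texture": [0.7]}
print(combined_distance(img_a, img_a, {"color": 0.6, "texture": 0.4}))  # exact match: 0.0
```

The weights let a system (or a user) decide how much color similarity matters relative to texture or shape for a given query.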


Color

Computing distance measures based on color similarity is achieved by computing a color histogram for each image that identifies the proportion of pixels within an image holding specific values (that humans express as colors). Current research is attempting to segment color proportion by region and by spatial relationship among several color regions. Examining images based on the colors they contain is one of the most widely used techniques because it does not depend on image size or orientation. Color searches will usually involve comparing color histograms, though this is not the only technique in practice.
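A minimal sketch of histogram-based color comparison, assuming images are given as lists of (r, g, b) tuples; quantizing each channel into a few bins and normalizing by pixel count is what makes the comparison independent of image size:

```python
def color_histogram(pixels, bins=4):
    """Histogram over quantized (r, g, b) values in 0..255,
    normalized so the entries sum to 1 (size-invariant)."""
    hist = {}
    step = 256 // bins
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    n = len(pixels)
    return {k: v / n for k, v in hist.items()}

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: sums the overlap in each color bin.
    1.0 means the two images have identical color distributions."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))
```

Note that the histogram discards all spatial layout, which is exactly the limitation the region-based research mentioned above tries to address.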


Texture

Texture measures look for visual patterns in images and how they are spatially defined. Textures are represented by texels which are then placed into a number of sets, depending on how many textures are detected in the image. These sets not only define the texture, but also where in the image the texture is located.

Texture is a difficult concept to represent. The identification of specific textures in an image is achieved primarily by modeling texture as a two-dimensional gray level variation. The relative brightness of pairs of pixels is computed such that degree of contrast, regularity, coarseness and directionality may be estimated (Tamura, Mori & Yamawaki, 1978). However, the problem is in identifying patterns of co-pixel variation and associating them with particular classes of textures such as silky, or rough.
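The gray-level-variation idea can be illustrated with a co-occurrence matrix and the contrast statistic computed from it; this is a simplified version of standard GLCM texture features, with the image given as a 2-D list of gray levels:

```python
def glcm(gray, offset=(0, 1)):
    """Gray-level co-occurrence matrix for a 2-D list of gray
    levels: counts how often value i occurs next to value j at the
    given (row, col) offset, normalized to probabilities."""
    dr, dc = offset
    rows, cols = len(gray), len(gray[0])
    counts, total = {}, 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                pair = (gray[r][c], gray[r2][c2])
                counts[pair] = counts.get(pair, 0) + 1
                total += 1
    return {p: n / total for p, n in counts.items()}

def contrast(co):
    """Contrast statistic: weights co-occurring pairs by squared
    gray-level difference, so strongly varying (rough) textures
    score higher than smooth ones."""
    return sum(p * (i - j) ** 2 for (i, j), p in co.items())
```

A perfectly flat region scores 0; a checkerboard-like region scores high, which is the kind of numeric handle needed before labels such as "silky" or "rough" can even be attempted.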


Shape

Shape does not refer to the shape of an image but to the shape of a particular region that is being sought out. Shapes will often be determined by first applying segmentation or edge detection to an image. Other methods, such as [Tushabe and Wilkinson 2008], use shape filters to identify given shapes of an image. In some cases accurate shape detection will require human intervention, because methods like segmentation are very difficult to completely automate.
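As a simplified illustration of the edge-detection step, the sketch below marks pixels where the gray level changes sharply relative to a neighbor; production systems would use operators such as Sobel or Canny, but the principle is the same:

```python
def edge_map(gray, threshold=1):
    """Crude edge detector over a 2-D list of gray levels: a pixel
    is marked as an edge (1) when the absolute difference to its
    right or lower neighbor exceeds the threshold."""
    rows, cols = len(gray), len(gray[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            right = abs(gray[r][c] - gray[r][c + 1]) if c + 1 < cols else 0
            down = abs(gray[r][c] - gray[r + 1][c]) if r + 1 < rows else 0
            if max(right, down) > threshold:
                edges[r][c] = 1
    return edges
```

The resulting binary map is the raw material for shape descriptors: contours traced from it can be matched against the shape the query is seeking.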


Some software producers are trying to push CBIR-based applications into the filtering and law-enforcement markets for the purpose of identifying and censoring images with skin tones and shapes that could indicate the presence of nudity, with controversial results.

References

  1. ^ Content-based Multimedia Information Retrieval: State of the Art and Challenges, Michael Lew, et al., ACM Transactions on Multimedia Computing, Communications, and Applications, pp. 1-19, 2006.
  2. ^ Content-based Image Retrieval, John Eakins and Margaret Graham, University of Northumbria at Newcastle
  3. ^ University of Washington FIDS Demo
  4. ^ CIRES: Content Based Image Retrieval System
  5. ^ LTU Technologies Corbis Visual Search
  6. ^ Idée Inc. TinEye Reverse Image Search Engine.
  7. ^ Vision Research Lab, UCSB
  8. ^ Octagon by Viitala
  9. ^ Windsurf (University of Bologna, Italy)
  10. ^ Visual recognition factory
  11. ^ a b Shapiro, Linda; George Stockman (2001). Computer Vision. Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-030796-3. 
  12. ^ Eidenberger, Horst (2011). Fundamental Media Understanding. atpress. ISBN 978-3842379176.

