Camera resectioning
Camera resectioning (often called camera calibration) is the process of finding the true parameters of the camera that produced a given photograph or video. These parameters characterize the transformation that maps 3D points in the scene to 2D points in the image plane. Among these parameters are the focal length, format size, principal point, and lens distortion. Camera calibration is often used as an early stage in computer vision, and especially in the field of augmented reality.
When a camera is used, light from the environment is focused on an image plane and captured. This process reduces the dimensions of the data taken in by the camera from three to two (light from a 3D scene is stored on a 2D image). Each pixel on the image plane therefore corresponds to a ray of light from the original scene. Camera resectioning determines which incoming light is associated with each pixel on the resulting image. In an ideal pinhole camera, a simple projection matrix is enough to do this. With more complex camera systems, errors resulting from misaligned lenses and deformations in their structures can produce more complex distortions in the final image. The camera projection matrix is derived from the intrinsic and extrinsic parameters of the camera, and is often represented as a series of transformations; e.g., a matrix of camera intrinsic parameters, a 3×3 rotation matrix, and a translation vector. The camera projection matrix can be used to associate points in a camera's image space with locations in 3D world space.
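The composition described above can be sketched numerically. In this minimal sketch the intrinsic values (focal lengths, principal point) and the extrinsic pose are illustrative assumptions, not taken from any real camera:

```python
import numpy as np

# Intrinsic matrix K: assumed focal lengths fx, fy (in pixels)
# and principal point (cx, cy). Illustrative values only.
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsic parameters: a 3x3 rotation R (identity here for simplicity)
# and a translation vector t placing the scene 5 units in front of the camera.
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])

# Camera projection matrix P = K [R | t], a 3x4 matrix.
P = K @ np.hstack([R, t])

def project(P, X_world):
    """Map a 3D world point to 2D pixel coordinates via P."""
    X_h = np.append(X_world, 1.0)   # homogeneous coordinates (X, Y, Z, 1)
    x_h = P @ X_h                   # 3-vector (u*w, v*w, w)
    return x_h[:2] / x_h[2]         # perspective division by the depth w

uv = project(P, np.array([1.0, 0.5, 0.0]))  # → pixel (480, 320)
```

The projected point lands at (480, 320): the world point is 5 units deep in camera coordinates, so each world unit spans 800/5 = 160 pixels from the principal point.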
Camera resectioning is often used in the application of stereo vision, where the camera projection matrices of two cameras are used to calculate the 3D world coordinates of a point viewed by both cameras.
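This stereo use case can be illustrated with the standard linear (DLT) triangulation technique: given the two projection matrices and a point's pixel coordinates in both images, the 3D point is recovered as the null vector of a small linear system. The camera parameters below are illustrative assumptions:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point whose projections
    through P1 and P2 are the pixel coordinates x1 and x2."""
    # Each image measurement (u, v) contributes two rows of the form
    # (u * p3 - p1) and (v * p3 - p2), where p_i are the rows of P.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X_world):
    x_h = P @ np.append(X_world, 1.0)
    return x_h[:2] / x_h[2]

# Two assumed cameras sharing intrinsics K; the second is shifted
# sideways by one unit (a stereo baseline).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known point into both views, then recover it.
X_true = np.array([0.2, -0.1, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free measurements, as here, the recovered point matches the original exactly (up to floating-point precision); with real detections, the SVD gives a least-squares solution.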
Some people call this camera calibration, but many restrict the term camera calibration to the estimation of the internal, or intrinsic, parameters only.
There are many different approaches to calculating the intrinsic and extrinsic parameters for a specific camera setup. A classical approach is Roger Y. Tsai's algorithm, a two-stage method: the first stage calculates the pose (3D orientation, and x-axis and y-axis translation); the second stage computes the focal length, distortion coefficients, and z-axis translation.
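A different, simpler approach than Tsai's two-stage method, and a common baseline, is direct linear transformation (DLT) resectioning: given at least six known 3D points and their measured pixel positions, the full 3×4 projection matrix is estimated in one linear step (lens distortion is ignored here). The points and camera values below are illustrative assumptions:

```python
import numpy as np

def resection_dlt(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 correspondences
    between 3D points X (n x 3) and 2D pixel points x (n x 2).
    Returns P up to an arbitrary scale factor."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]  # homogeneous 3D point
        # Two linear equations per correspondence, from x cross (P X) = 0.
        A.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # smallest singular vector -> rows of P

def project(P, X_world):
    x_h = P @ np.append(X_world, 1.0)
    return x_h[:2] / x_h[2]

# Synthetic ground truth: an assumed camera viewing six non-coplanar points.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
X_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0], [1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
x_pts = np.array([project(P_true, Xi) for Xi in X_pts])

P_est = resection_dlt(X_pts, x_pts)
x_re = np.array([project(P_est, Xi) for Xi in X_pts])  # reprojection check
```

Because P is only determined up to scale, the estimate is checked by reprojecting the 3D points rather than comparing matrix entries directly. In practice the measurements are noisy, so coordinates are usually normalized first and the linear estimate refined by nonlinear minimization of reprojection error.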
See also: Pinhole camera model
* [http://campar.in.tum.de/twiki/pub/Far/AugmentedRealityIISoSe2004/L3-CamCalib.pdf Camera Calibration] - Augmented reality lecture at TU Muenchen, Germany
* [http://www.cs.cmu.edu/~rgw/TsaiDesc.html Tsai's Approach]
* [http://www.hitl.washington.edu/artoolkit/documentation/usercalibration.htm Camera calibration] (using ARToolKit)
* [http://www.vision.caltech.edu/bouguetj/calib_doc/papers/heikkila97.pdf A Four-step Camera Calibration Procedure with Implicit Image Correction]
Wikimedia Foundation. 2010.