Motion compensation
Motion compensation is an algorithmic technique used in the encoding of video data for video compression, for example in the generation of MPEG-2 files. It describes a picture in terms of the transformation of a reference picture to the current picture. The reference picture may be previous in time or even from the future. When images can be accurately synthesized from previously transmitted or stored images, compression efficiency improves.
How it works
Motion compensation exploits the fact that, often, for many frames of a movie, the only difference between one frame and another is the result of either the camera moving or an object in the frame moving. In reference to a video file, this means much of the information that represents one frame will be the same as the information used in the next frame.
Motion compensation takes advantage of this to provide a way to create frames of a movie from a reference frame. For example, in principle, if a movie is shot at 24 frames per second, motion compensation would allow the movie file to store the full information for every fourth frame. The only information stored for the frames in between would be the information needed to transform the previous frame into the next frame.
If each uncompressed frame is 1 MB, one second of this film would be 24 MB. With motion compensation, the file size for one second of the film can often be reduced to around 6 MB for typical video material.
The following is a simplified illustration of how motion compensation works. Two successive frames were captured from the movie Elephants Dream. As the images show, the bottom (motion-compensated) difference between the two frames contains significantly less detail than the preceding images, and thus compresses much better.
- Original: the full original frame, as shown on screen.
- Difference: differences between the original frame and the next frame.
- Motion-compensated difference: differences between the original frame and the next frame shifted right by 2 pixels. Shifting the frame compensates for the panning of the camera, so there is greater overlap between the two frames.
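The effect of motion-compensating a camera pan can be reproduced numerically. The sketch below (plain NumPy; all variable names are illustrative) models a 2-pixel pan using 1-D scanlines: the plain frame difference carries residual energy almost everywhere, while the shifted difference cancels exactly in the overlap.

```python
import numpy as np

# Two 1-D "scanlines" standing in for frames; frame2 is frame1 panned
# right by 2 pixels, with new content entering on the left.
frame1 = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=np.int16)
frame2 = np.array([ 5,  7, 10, 20, 30, 40, 50, 60], dtype=np.int16)

# Plain difference: large almost everywhere.
plain_diff = frame2 - frame1

# Motion-compensated difference: shift frame1 right by 2 before
# subtracting, so the overlapping region cancels exactly.
shifted = np.roll(frame1, 2)
mc_diff = frame2 - shifted
mc_diff[:2] = frame2[:2]          # first 2 pixels have no reference

print(np.abs(plain_diff).sum())   # 138: large residual energy
print(np.abs(mc_diff[2:]).sum())  # 0: the overlap cancels entirely
```

Only the small motion-compensated residual (plus the 2-pixel shift itself) would need to be coded.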
Motion compensation in MPEG
In MPEG, images are predicted from previous frames (P frames) or bidirectionally from previous and future frames (B frames). B frames are more complex because the image sequence must be transmitted/stored out of order so that the future frame is available to generate the B frames.
After predicting frames using motion compensation, the coder finds the prediction error (residual), which is then compressed using the discrete cosine transform (DCT) and transmitted.
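The residual-plus-transform step can be sketched as follows. This uses the textbook orthonormal 8-point DCT-II (real codecs use integer-scaled variants and quantization, which are omitted here); a residual that is flat over the block compacts into a single DC coefficient.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (the transform family used for
    # residual coding in MPEG, up to scaling).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_matrix(8)
predicted = np.full((8, 8), 128.0)   # block predicted by motion compensation
actual = predicted + 3.0             # actual block differs by a flat offset
residual = actual - predicted        # the coder transmits only this error
coeffs = C @ residual @ C.T          # 2-D DCT of the residual

# A flat residual compacts into the single DC coefficient (= 8 * 3 = 24);
# every other coefficient is (numerically) zero.
print(coeffs[0, 0])
```

Because the residual is small and smooth, most transform coefficients quantize to zero, which is where the compression comes from.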
Global motion compensation
In global motion compensation, the motion model reflects only camera motions, such as:
- Dolly - moving the camera forward or backwards
- Track - moving the camera left or right
- Boom - moving the camera up or down
- Pan - rotating the camera around its Y axis, moving the view left or right
- Tilt - rotating the camera around its X axis, moving the view up or down
- Roll - rotating the camera around the view axis
It works best for still scenes without moving objects.
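As a minimal sketch of the idea, the code below predicts a whole frame from a reference using a single global translation (modelling a track or boom). A full GMC model, such as MPEG-4 ASP with three reference points, would use an affine warp instead; the function name and edge handling here are illustrative.

```python
import numpy as np

def global_translate(frame, dx, dy):
    """Predict a frame from a reference using one global motion vector
    (dx, dy). Edge pixels with no reference are clamped for simplicity."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - dy, 0, h - 1)   # sample the reference at the
    src_x = np.clip(xs - dx, 0, w - 1)   # position before the camera moved
    return frame[src_y, src_x]

reference = np.arange(36).reshape(6, 6)
# Camera tracked such that the content shifts right by 1 pixel:
predicted = global_translate(reference, dx=1, dy=0)
```

The entire prediction is described by the two parameters (dx, dy), which is why the bit-rate cost of global motion parameters is negligible.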
There are several advantages of global motion compensation:
- It models the dominant motion usually found in video sequences with just a few parameters, whose share of the bit rate is negligible.
- It does not partition the frames. This avoids artifacts at partition borders.
- A straight line (in the time direction) of pixels with equal spatial positions in the frame corresponds to a continuously moving point in the real scene. Other MC schemes introduce discontinuities in the time direction.
MPEG-4 ASP supports GMC with three reference points, although some implementations can only make use of one. A single reference point allows only translational motion, which, for its relatively large computational cost, provides little advantage over block-based motion compensation.
Moving objects within a frame are not sufficiently represented by global motion compensation. Thus, local motion estimation is also needed.
Block motion compensation
In block motion compensation (BMC), the frames are partitioned into blocks of pixels (e.g. macroblocks of 16×16 pixels in MPEG). Each block is predicted from a block of equal size in the reference frame. The blocks are not transformed in any way apart from being shifted to the position of the predicted block. This shift is represented by a motion vector.
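Finding the motion vector is the encoder's job (motion estimation). The sketch below does an exhaustive block-matching search minimizing the sum of absolute differences (SAD); a real encoder uses 16×16 macroblocks, larger search ranges, and faster search patterns, and the function name here is illustrative.

```python
import numpy as np

def best_motion_vector(ref, cur, by, bx, bsize=4, search=2):
    """Exhaustive block matching: find the shift (dy, dx) of a reference
    block that best predicts the current block at (by, bx), by minimum SAD."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue                 # candidate falls outside the frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

ref = np.zeros((8, 8), dtype=np.uint8)
ref[2:6, 2:6] = 255                      # a bright square in the reference
cur = np.roll(ref, shift=1, axis=1)      # the square moved right by 1 pixel
mv, sad = best_motion_vector(ref, cur, by=2, bx=3)
print(mv, sad)                           # (0, -1) 0: block came from 1 px to the left
```

The decoder only ever sees the resulting vector; it simply copies the indicated reference block into place.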
To exploit the redundancy between neighboring block vectors (e.g. for a single moving object covered by multiple blocks), it is common to encode only the difference between the current and previous motion vector in the bit-stream. The result of this differencing process is mathematically equivalent to a global motion compensation capable of panning. Further down the encoding pipeline, an entropy coder takes advantage of the resulting statistical distribution of the motion vectors around the zero vector to reduce the output size.
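The differencing step itself is trivial, as this sketch shows (vectors chosen for illustration): neighbouring blocks covering one moving object share a vector, so the coded differences are mostly zero, which is exactly what the entropy coder exploits.

```python
# Motion vectors (dy, dx) of five neighbouring blocks on one moving object.
vectors = [(3, 1), (3, 1), (3, 1), (3, 2), (3, 2)]

# Encode each vector as its difference from the previous one.
prev = (0, 0)
diffs = []
for v in vectors:
    diffs.append((v[0] - prev[0], v[1] - prev[1]))
    prev = v

print(diffs)   # [(3, 1), (0, 0), (0, 0), (0, 1), (0, 0)] - mostly zeros
```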
It is possible to shift a block by a non-integer number of pixels, which is called sub-pixel precision. The in-between pixels are generated by interpolating neighboring pixels. Commonly, half-pixel or quarter-pixel precision (QPel, used by H.264 and MPEG-4 ASP) is used. The computational expense of sub-pixel precision is much higher due to the extra processing required for interpolation and, on the encoder side, the much greater number of potential source blocks to be evaluated.
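The simplest form of sub-pixel interpolation is bilinear, sketched below for half-pel positions (codecs such as H.264 use longer separable filters for half-pel positions instead; the function name and coordinate convention are illustrative, and only interior positions are handled).

```python
import numpy as np

def half_pel_sample(frame, y2, x2):
    """Sample a frame at half-pixel precision by bilinear interpolation.
    (y2, x2) are coordinates in half-pel units: (1, 1) means (0.5, 0.5)
    in full pixels. Valid only for interior half-pel positions."""
    y, x = y2 // 2, x2 // 2
    fy, fx = (y2 % 2) / 2.0, (x2 % 2) / 2.0
    p = frame.astype(np.float64)
    return ((1 - fy) * (1 - fx) * p[y, x] + (1 - fy) * fx * p[y, x + 1] +
            fy * (1 - fx) * p[y + 1, x] + fy * fx * p[y + 1, x + 1])

f = np.array([[0, 10], [20, 30]])
print(half_pel_sample(f, 1, 1))   # 15.0: the mean of all four neighbours
```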
The main disadvantage of block motion compensation is that it introduces discontinuities at the block borders (blocking artifacts). These artifacts appear in the form of sharp horizontal and vertical edges which are easily spotted by the human eye and produce ringing effects (large coefficients in high frequency sub-bands) in the Fourier-related transform used for transform coding of the residual frames.
Block motion compensation divides up the current frame into non-overlapping blocks, and the motion compensation vector tells where those blocks come from (a common misconception is that the previous frame is divided up into non-overlapping blocks, and the motion compensation vectors tell where those blocks move to). The source blocks typically overlap in the source frame. Some video compression algorithms assemble the current frame out of pieces of several different previously-transmitted frames.
Frames can also be predicted from future frames. The future frames then need to be encoded before the predicted frames and thus, the encoding order does not necessarily match the real frame order. Such frames are usually predicted from two directions, i.e. from the I- or P-frames that immediately precede or follow the predicted frame. These bidirectionally predicted frames are called B-frames. A coding scheme could, for instance, be IBBPBBPBBPBB.
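The reordering this requires can be sketched directly: each B-frame depends on the anchor (I- or P-frame) that follows it in display order, so that anchor must be coded first.

```python
# Display order vs coding order for an IBBP... structure: each B-frame
# needs the anchor frame after it, so that anchor is coded first.
display = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]

coding = []
pending_b = []
for frame in display:
    if frame.startswith("B"):
        pending_b.append(frame)      # hold B-frames until their anchor is sent
    else:
        coding.append(frame)         # I/P anchor goes out first
        coding.extend(pending_b)     # then the B-frames that depend on it
        pending_b = []

print(coding)   # ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']
```

This is why B-frames add latency and buffering requirements: the decoder must hold the anchors and restore display order on output.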
Variable block-size motion compensation
Variable block-size motion compensation (VBSMC) is the use of BMC with the ability for the encoder to dynamically select the size of the blocks. When coding video, the use of larger blocks can reduce the number of bits needed to represent the motion vectors, while the use of smaller blocks can result in a smaller amount of prediction residual information to encode. Older designs such as H.261 and MPEG-1 video typically use a fixed block size, while newer ones such as H.263, MPEG-4 Part 2, H.264/MPEG-4 AVC, and VC-1 give the encoder the ability to dynamically choose what block size will be used to represent the motion.
Overlapped block motion compensation
Overlapped block motion compensation (OBMC) is a good solution to blocking artifacts because it not only increases prediction accuracy but also avoids the artifacts themselves. When using OBMC, blocks are typically twice as big in each dimension and overlap quadrant-wise with all 8 neighbouring blocks. Thus, each pixel belongs to 4 blocks, giving 4 predictions per pixel that are combined into a weighted mean. For this purpose, blocks are associated with a window function with the property that the sum of the 4 overlapping windows equals 1 everywhere.
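Such a window is easy to construct: a separable triangular window of twice the block size has exactly this sum-to-one property when shifted by the block size (standards like H.263 instead specify integer weight tables, so this is only a generic sketch).

```python
import numpy as np

N = 4                                           # block size; OBMC windows are 2N wide
i = np.arange(2 * N)
w1d = np.minimum(i + 0.5, 2 * N - i - 0.5) / N  # triangular window, length 2N
w2d = np.outer(w1d, w1d)                        # separable 2-D window

# The 4 blocks covering any pixel apply the 4 quadrants of this window
# (offset by N in each axis); their weights sum to 1 everywhere, so the
# weighted predictions form a true mean:
total = w2d[:N, :N] + w2d[N:, :N] + w2d[:N, N:] + w2d[N:, N:]
print(np.allclose(total, 1.0))   # True
```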
Studies of methods for reducing the complexity of OBMC have shown that the contribution to the window function is smallest for the diagonally adjacent block. Reducing the weight for this contribution to zero and increasing the other weights by an equal amount leads to a substantial reduction in complexity without a large penalty in quality. In such a scheme, each pixel then belongs to 3 blocks rather than 4, and rather than using 8 neighboring blocks, only 4 are used for each block to be compensated. Such a scheme is found in the H.263 Annex F Advanced Prediction mode.
Quarter-pixel (QPel) and half-pixel motion compensation
In motion compensation, quarter- or half-pel samples are interpolated sub-samples addressed by fractional motion vectors. Based on the vectors and the full samples, the sub-samples are calculated using bicubic or bilinear 2-D filtering. See the subclause "Fractional sample interpolation process" of the H.264 standard.
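For luma half-pel positions, H.264 uses a 6-tap filter with taps (1, -5, 20, 20, -5, 1) and a rounding shift by 5. The sketch below shows this in one dimension, omitting the standard's clipping step and boundary handling (the function name is illustrative).

```python
import numpy as np

def h264_half_pel_1d(samples):
    """Interior half-sample positions via the 6-tap filter
    (1, -5, 20, 20, -5, 1) used for luma in H.264, without clipping."""
    taps = np.array([1, -5, 20, 20, -5, 1])
    out = []
    for i in range(2, len(samples) - 3):
        acc = int(np.dot(taps, samples[i - 2:i + 4]))
        out.append((acc + 16) >> 5)     # rounding shift, as in the standard
    return out

row = np.array([10, 10, 10, 30, 30, 30, 30, 30])
print(h264_half_pel_1d(row))   # [20, 33, 29]
```

Note the slight over- and undershoot around the edge (33 and 29): the longer filter preserves sharpness better than bilinear interpolation would.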
3D image coding techniques
Motion compensation is also utilized in stereoscopic video coding.
In video, time is often considered the third dimension, so still-image coding techniques can be extended by an extra dimension.
JPEG 2000 uses wavelets, and these can also be used to encode motion adaptively, without gaps between blocks. Fractional-pixel affine transformations cause bleeding between adjacent pixels; unless a higher internal resolution is used, the delta images mostly end up counteracting this smearing. The delta image can also be encoded with wavelets, so that the borders of the adaptive blocks match.
2D+Delta encoding techniques utilize H.264- and MPEG-2-compatible coding and can use motion compensation to compress the redundancy between stereoscopic images.
Expanding the 8×8 JPEG blocks into the third dimension, i.e. into 8×8×8 cubes, and modifying the DCT toward a DFT enables compression of linear translations at speeds below and around one pixel per frame (sub-pixel precision).