Latency (audio)

Latency refers to a short period of delay (usually measured in milliseconds) between when an audio signal enters and when it emerges from a system. Potential contributors to latency in an audio system include analog-to-digital conversion, buffering, digital signal processing, transmission time, digital-to-analog conversion and the speed of sound in air.
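Of the contributors listed above, buffering is the easiest to quantify: each buffer of audio adds a delay equal to its length divided by the sampling rate. A minimal sketch (the function name and example figures are illustrative, not from any particular audio API):

```python
def buffer_latency_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """Latency contributed by one audio buffer, in milliseconds.

    Each buffering stage in the signal path adds this much delay, so a
    chain of several buffers multiplies the total accordingly.
    """
    return buffer_frames / sample_rate_hz * 1000.0

# For example, a 256-frame buffer at 48 kHz adds roughly 5.3 ms per stage.
```

This is why low-latency audio configurations favour small buffers and high sampling rates, at the cost of more frequent interrupts and a higher risk of dropouts.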

Latency in broadcast audio

Audio latency can be experienced in broadcast systems where someone contributes to a live broadcast over a satellite or similar high-delay link, and the presenter in the main studio has to wait for the contributor at the far end of the link to react to questions. Latency in this context can range from several hundred milliseconds to a few seconds. Dealing with audio latencies this high takes special training to make the resulting combined audio output reasonably acceptable to listeners. Wherever practical, it is important to keep live production audio latency low throughout the production system so that the reactions and interchange of participants remain as natural as possible. A latency of 10 milliseconds or better is the target for audio circuits within professional production structures,[1] and local circuits should ideally have a latency of 1 millisecond or better.

Latency in telephone calls

Latency in telephone calls is sometimes referred to as mouth-to-ear delay. VoIP systems typically have a minimum of about 20 ms of latency and target 150 ms as a practical maximum. Latency is a larger consideration in these systems when an echo is present.[2] Mouth-to-ear delay consists of three components: codec delay, playout delay and network delay.
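The three components simply add up, which makes the 150 ms budget easy to reason about. A minimal sketch (function names and the sample figures are illustrative):

```python
def mouth_to_ear_delay_ms(codec_ms: float, playout_ms: float,
                          network_ms: float) -> float:
    """Total one-way mouth-to-ear delay as the sum of its three components."""
    return codec_ms + playout_ms + network_ms

def within_voip_target(total_ms: float, target_ms: float = 150.0) -> bool:
    """Check a delay budget against the commonly cited 150 ms maximum."""
    return total_ms <= target_ms

# For example: 20 ms codec + 60 ms playout + 40 ms network = 120 ms,
# which still fits within the 150 ms target.
```

In practice the network component is the most variable, so playout (jitter) buffers are sized to trade added delay against the risk of late packets.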

Latency in computer audio

Latency can be a particular problem on Microsoft Windows audio platforms, but is much less so in Apple's Mac OS X and most Linux operating systems. Mac OS X uses Apple's built-in Core Audio architecture, which is designed for low-latency operation (as opposed to Windows' WDM architecture). A popular solution on Windows is Steinberg's ASIO, which bypasses these layers and connects audio applications directly to the sound card's hardware. Most professional and semi-professional audio applications use ASIO drivers, allowing Windows users to work with audio in real time.[3]

With most Linux operating systems, latency tends to be lower than with the MME or DirectX drivers of Microsoft Windows, provided the modern ALSA sound architecture is used.

The RT kernel (realtime kernel)[4] is a modified Linux kernel that alters the standard timer frequency and gives any process or thread the ability to run with realtime priority. This means that a time-critical process such as an audio stream can take priority over a less critical process such as network activity. Priorities are also configurable per user; for example, the processes of user "tux" could take priority over the processes of user "nobody" or over those of several system daemons. On a standard Linux system, only one process at a time can be given such priority.
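On Linux, a process can request realtime scheduling through the standard `sched_setscheduler` interface, which Python exposes as `os.sched_setscheduler`. A minimal sketch, assuming a Linux system (the function name and the priority value 80 are illustrative; the call requires root or the CAP_SYS_NICE capability):

```python
import os

def try_realtime_priority(priority: int = 80) -> bool:
    """Request SCHED_FIFO realtime scheduling for the current process.

    On a stock kernel this needs root or CAP_SYS_NICE; on an RT kernel
    the same call applies, but scheduling latencies are bounded more
    tightly. Returns True on success, False if the platform or the
    process's permissions do not allow it.
    """
    if not hasattr(os, "sched_setscheduler"):
        return False  # not available on this platform (e.g. Windows)
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except (PermissionError, OSError):
        return False
```

Audio servers such as JACK perform essentially this request for their processing threads, which is why they are typically run by users granted realtime limits.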

Audio latency in live performance

Professional digital audio equipment has latency associated with two general processes: conversion from one format to another, and digital signal processing (DSP) tasks such as equalization, compression and routing. Analog audio equipment has no appreciable latency.

Digital conversion processes include analog-to-digital converters (ADC), digital-to-analog converters (DAC), and various changes from one digital format to another, such as from AES3, which carries low-voltage electrical signals, to ADAT, an optical transport. Any such process takes a small amount of time to accomplish; typical latencies are in the range of 0.2 to 1.5 milliseconds, depending on sampling rate, bit depth, software design and hardware architecture.[5]

DSP can take several forms; for instance, finite impulse response (FIR) and infinite impulse response (IIR) filters take two different mathematical approaches to the same end and can have different latencies, depending on the lowest audio frequency being processed as well as on software and hardware implementations. Typical latencies range from 0.5 to 10 milliseconds, with some designs having as much as 30 milliseconds.[6]
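The dependence on the lowest processed frequency can be made concrete for the FIR case: a symmetric (linear-phase) FIR filter delays all frequencies equally by (N − 1)/2 samples, and controlling low frequencies requires more taps N. A minimal sketch (function name and figures are illustrative):

```python
def fir_linear_phase_latency_ms(num_taps: int, sample_rate_hz: int) -> float:
    """Group delay of a symmetric (linear-phase) FIR filter.

    Such a filter delays every frequency equally by (N - 1) / 2 samples,
    so the longer filters needed to shape low frequencies cost more latency.
    """
    delay_samples = (num_taps - 1) / 2
    return delay_samples / sample_rate_hz * 1000.0

# A 97-tap filter at 48 kHz delays by 48 samples, i.e. 1 ms, while a
# 961-tap filter at the same rate delays by 480 samples, i.e. 10 ms.
```

IIR filters avoid this fixed group delay but introduce frequency-dependent phase shift instead, which is the trade-off the paragraph above alludes to.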

Individual digital audio devices can be designed with a fixed overall latency from input to output or they can have a total latency that fluctuates with changes to internal processing architecture. In the latter design, engaging additional functions adds latency.

Latency in digital audio equipment is most noticeable when a singer's voice is transmitted through their microphone, through digital audio mixing, processing and routing paths, and then sent to their own ears via in-ear monitors or headphones. In this case, the singer's vocal sound is conducted to their ear through the bones of the head, and then a few milliseconds later through the digital pathway.

Latency for other musical activity, such as playing a guitar, is not as critical a concern. Ten milliseconds of latency is much less noticeable to a performer who is not hearing his or her own voice.[7]

Latency used for delayed loudspeakers

In audio reinforcement for music or speech presentation in large venues, it is optimal to deliver sufficient sound volume to the back of the venue without resorting to excessive sound volumes near the front. One way for audio engineers to achieve this is to use additional loudspeakers placed at a distance from the stage but closer to the rear of the audience. Sound travels through air at the speed of sound, around 343 metres (1,125 ft) per second depending on air temperature and humidity. By measuring or estimating the difference in arrival time between the loudspeakers near the stage and those nearer the audience, the audio engineer can introduce an appropriate delay in the signal feeding the latter loudspeakers. Because of the Haas effect, approximately 15 milliseconds can be added to that delay time to focus the audience's attention on the stage rather than on the local loudspeaker. The slightly later sound from the delayed loudspeakers simply increases the perceived sound level without negatively affecting localization.
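The required delay follows directly from the distance and the speed of sound, plus the Haas-effect offset. A minimal sketch (function name and the 15 ms default are illustrative, following the figures above):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees Celsius

def delay_speaker_ms(distance_m: float, haas_offset_ms: float = 15.0) -> float:
    """Delay to apply to a loudspeaker placed distance_m behind the mains.

    The propagation time over the distance, plus a Haas-effect offset so
    the nearer loudspeaker fires slightly *after* the sound arriving from
    the stage, keeping the apparent source located at the stage.
    """
    propagation_ms = distance_m / SPEED_OF_SOUND_M_S * 1000.0
    return propagation_ms + haas_offset_ms

# A delay tower 34.3 m behind the main loudspeakers would be fed a signal
# delayed by about 100 ms of propagation time plus 15 ms, i.e. about 115 ms.
```

In practice engineers fine-tune this value by ear or with measurement tools, since temperature and humidity shift the speed of sound by a few percent.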

