Low-latency

Overview

The term "low-latency" is used in digital audio to describe processes that have a delay in the range of 1-5 milliseconds. The length of this delay is most significant in situations where live audio signals are monitored through a digital audio system and can be heard at the same time as the live signal. The time between when the analog signal enters the digital audio system and when the corresponding signal exits that system is referred to as latency.

History

Before the advent of digital audio, the delay between when an analog audio signal entered an analog audio system and when the corresponding analog signal was output was too short to be perceptible. Discussion of latency in audio only began with the advent of digital recording systems.

Early hardware-based systems used converter designs that had lower latency than today's, but that also could not achieve the quality of newer designs. The delay introduced by the AD and DA converters is referred to as conversion delay. Digital processing time was also relatively short, due to the use of specialized digital signal processing (DSP) circuitry.

As computers became more common in digital audio recording, additional delays were introduced by the need for RAM buffering to move the digital audio data between the interface, memory, and processor/software. Audio processing plug-ins introduce further delay in the playback of digital audio signals, because, unlike dedicated DSP, general-purpose computer processors are not optimized for this task and need more time to process the signal.
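
As a rough illustration of how buffering contributes to latency, the following Python sketch computes the delay added by a single buffer. The buffer sizes and the 48kHz sample rate are illustrative assumptions, not values from any particular interface.

    def buffer_latency_ms(buffer_size_frames, sample_rate_hz):
        # Time spent filling one buffer before the audio can move on
        # to the next stage (software, or the output converter).
        return 1000.0 * buffer_size_frames / sample_rate_hz

    # Illustrative buffer sizes at an assumed 48kHz sample rate:
    for frames in (64, 256, 1024):
        print(frames, "frames ->", round(buffer_latency_ms(frames, 48000), 2), "ms")
    # 64 frames -> 1.33 ms
    # 256 frames -> 5.33 ms
    # 1024 frames -> 21.33 ms

Note that a round trip through recording software typically involves at least one input buffer and one output buffer, doubling these figures.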

In response to these significant delays, a number of approaches were developed to generate a lower-latency mix for use in recording and overdubbing.

Earlier forms employed dedicated DSP on PCI soundcards that also contained the converters used for input and output, or at least the interface for converters housed in an external unit connected directly to the PCI card. A software mixer control program running on the computer communicated with the DSP on the soundcard; the DSP performed the processing for the digital mixer function as well as the routing of signals to and from the recording software.

As computer processors and computer bus speeds increased, computer processors became capable of handling the "DSP" functions needed for mixing as an additional task. Instead of dedicated DSP hardware, it is now common for the digital audio interface's control software to include a software mixer, or for this mixer function to be part of the recording software.

Basics

When minimizing the total latency in a monitor mix in a computer-based digital audio system, some sources of delay can be eliminated and some cannot. For example, conversion delay cannot be eliminated, because the signal must be digitized to perform digital processing and converted back to analog to be heard. Processing time, however, can be minimized by not routing the signal through the recording software or audio processing plug-ins.

In non-DSP-based systems, this means that functions such as equalization and effects are not available for the live inputs, because these are functions of the recording software and plug-ins. The signal is routed from the output of the AD converter through a streamlined software mixer process, then directly back out through the DA converter. In this case, the majority of the latency results from the conversion delay of the AD and DA converters combined.

Most contemporary high-quality audio converters employ some form of oversampling and have delays in the range of approximately 0.5 to 2 milliseconds. This means that the lower limit on monitoring latency is primarily a function of conversion delay.
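
As a hedged sketch of the arithmetic, the following Python example sums illustrative component delays for the two monitoring paths described above. All of the numbers are assumptions for illustration, not measurements of any product.

    AD_MS = 1.0      # assumed AD conversion delay (within the ~0.5 to 2 ms range above)
    DA_MS = 1.0      # assumed DA conversion delay
    MIXER_MS = 0.1   # assumed streamlined software-mixer pass
    BUFFER_MS = 1000.0 * 256 / 48000  # one assumed 256-frame buffer at 48kHz

    # Direct monitoring: AD -> streamlined mixer -> DA
    direct_ms = AD_MS + MIXER_MS + DA_MS
    print("direct:", round(direct_ms, 2), "ms")      # direct: 2.1 ms

    # Monitoring through recording software adds buffering in both directions
    software_ms = AD_MS + 2 * BUFFER_MS + DA_MS
    print("software:", round(software_ms, 2), "ms")  # software: 12.67 ms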

Issues

Sample Frequency

One important factor in conversion delay is the sample frequency: higher sample frequencies yield shorter conversion delays, as well as shorter processing and buffer delays. One approach to minimizing conversion delay is therefore to convert at sample frequencies higher than 96kHz. Although this has the benefit of minimizing monitoring latency while recording, converting at frequencies higher than 96kHz carries a penalty of reduced audio quality compared to converting at sample frequencies of 96kHz or lower. For more details, see The Optimal Sample Rate For Quality Audio
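
To see why higher sample frequencies shorten conversion delay, note that an oversampling converter's digital filter imposes a latency of roughly a fixed number of samples, so the delay in milliseconds falls as the sample rate rises. The 64-sample figure in this Python sketch is an assumption for illustration only.

    FILTER_DELAY_SAMPLES = 64  # assumed fixed filter latency, in samples

    for rate_hz in (44100, 96000, 192000):
        delay_ms = 1000.0 * FILTER_DELAY_SAMPLES / rate_hz
        print(rate_hz, "Hz ->", round(delay_ms, 2), "ms")
    # 44100 Hz -> 1.45 ms
    # 96000 Hz -> 0.67 ms
    # 192000 Hz -> 0.33 ms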

"Low-latency Conversion”

Due to the demand for low-latency monitoring, some converter manufacturers offer low-latency conversion settings with correspondingly lower quality in the converted signal. High-quality digital filtering requires processing the signal over a number of output samples, and reducing the latency can result in lower quality than when the same converter is given the time to perform the calculations needed to achieve high-quality results.
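
As a sketch of the underlying trade-off: a linear-phase FIR filter with N taps delays the signal by (N - 1) / 2 samples, so shortening the filter lowers latency but leaves fewer taps for steep, accurate filtering. The tap counts in this Python example are illustrative assumptions, not the design of any actual converter.

    def fir_latency_ms(num_taps, sample_rate_hz):
        # Group delay of a linear-phase FIR filter: (N - 1) / 2 samples.
        return 1000.0 * (num_taps - 1) / 2.0 / sample_rate_hz

    # Assumed tap counts at an assumed 96kHz sample rate:
    print(round(fir_latency_ms(255, 96000), 2), "ms")  # 1.32 ms: higher quality, longer delay
    print(round(fir_latency_ms(63, 96000), 2), "ms")   # 0.32 ms: lower latency, coarser filtering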