Sample frequency


Overview

In order to digitize analog audio, most contemporary systems use a process referred to as "sampling" to repeatedly measure the voltage of the analog audio waveform at a regular time interval. Each voltage measurement results in a binary number of a given word length. The series of binary "words" is typically stored consecutively in a file for later reconstruction of the analog voltage waveform by a digital to analog converter. The sample frequency is the rate at which the samples are generated and is measured in Hertz (cycles per second).
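As a rough illustration of this process, the following Python sketch (with arbitrary example figures, not the behavior of any particular converter) measures a simulated analog waveform at a regular time interval and stores each measurement as a 16-bit binary word:

    import numpy as np

    SF = 44100          # sample frequency in Hertz (samples per second)
    word_length = 16    # bits per sample
    duration = 0.001    # seconds of "analog" signal to digitize

    # Simulated analog waveform: a 1 kHz sine tone at half of full scale.
    t = np.arange(int(SF * duration)) / SF          # the regular sampling instants
    voltage = 0.5 * np.sin(2 * np.pi * 1000 * t)    # "voltage" in the range -1..+1

    # Quantize each voltage measurement to a signed integer word.
    full_scale = 2 ** (word_length - 1) - 1
    words = np.round(voltage * full_scale).astype(np.int16)

    print(words[:8])    # the first few 16-bit words that would be stored in a file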

The term sample rate and its abbreviation "SR" are used interchangeably with sample frequency and its abbreviation "SF."

The sample period is the duration of each sample, which is equal to 1/SF.
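For example, the sample period at a few common sample frequencies can be computed directly from this relationship:

    # Sample period = 1 / SF
    for SF in (44100, 48000, 96000):
        print(f"SF = {SF} Hz -> sample period = {1 / SF * 1e6:.2f} microseconds")
    # 44100 Hz -> 22.68, 48000 Hz -> 20.83, 96000 Hz -> 10.42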

Basics

Virtually all contemporary analog audio equipment operates on the principle of an analog voltage waveform being analogous to the original sound's air pressure "waveform." Typically, the original sound is translated from pressure variations to electrical variations by a microphone (a type of transducer). The resulting voltage waveform can be transmitted on wires to an amplifier and a power amplifier, then translated back into sound pressure variations by a speaker.

One important consideration is how this analog waveform can be stored for later reproduction or transmission. All analog storage and transmission schemes are prone to loss of signal quality, with storage being particularly problematic. As the technology became available, digital audio systems were developed to address these issues. In order to generate digital information that can be used for these purposes, the analog voltage waveform is sampled repeatedly at a fixed time interval; the rate at which these samples are taken is the sample frequency. Please refer to analog to digital conversion for more details.

By sampling the analog audio signal at a fixed time interval, the need to record when each sample was taken along with the digitized voltage information is eliminated. As long as the sample frequency (SF) of a recording is known and the playback system operates at virtually the same SF, it can be assumed that the timing will be accurate when the analog waveform is reconstructed during digital to analog conversion. It also makes it critical that the "clock" signal used as the timing reference during both analog to digital conversion and digital to analog conversion be extremely accurate, in order to keep distortion to an acceptable level.
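A small sketch may make this concrete (the 1-nanosecond figure below is purely illustrative): because the interval is fixed, the time of any sample can be reconstructed from the SF alone, and even a very small clock error translates into a voltage error wherever the signal is changing rapidly.

    import math

    SF = 44100

    def sample_time(n, sf=SF):
        # Time (in seconds) at which sample number n was taken, implied by SF alone.
        return n / sf

    print(sample_time(44100))    # sample number 44100 falls exactly 1 second in

    # Near the zero-crossing of a full-scale 20 kHz sine, the signal changes at
    # roughly 2*pi*20000 of full scale per second, so a 1 ns timing error shifts
    # the reconstructed voltage by about 0.0126% of full scale.
    slope = 2 * math.pi * 20000
    print(slope * 1e-9)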

In order to record and reproduce audio properly, the SF must be high enough to allow recording a pure tone at the highest frequency the human ear can hear. The waveform of a pure tone is the sine wave, which contains only one frequency, unlike the more complex waveforms that most acoustic sources produce. Because each full cycle of a sine wave has two distinct "half-cycles," two samples are required to represent it in a manner that allows the original waveform to be reconstructed. It is commonly accepted that the highest frequency the human ear can hear is 20,000 Hertz or 20 kHz (20 kilo-Hertz), so the sample frequency must be a minimum of twice that: higher than 40 kHz.
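The "two samples per cycle" requirement can be checked with a short calculation (the tone frequencies below are simply examples):

    def samples_per_cycle(tone_hz, sf):
        # Number of samples captured during one full cycle of a pure tone.
        return sf / tone_hz

    print(samples_per_cycle(20000, 44100))   # 2.205 -> above the minimum of two
    print(samples_per_cycle(20000, 40000))   # 2.0   -> exactly at the limit
    print(samples_per_cycle(25000, 44100))   # 1.764 -> too few; this tone cannot be reconstructed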

When a signal containing frequencies higher than one-half the SF is digitized, additional information is generated in the form of sum and difference frequencies. These are referred to as "alias frequencies." The "sum" frequencies would be supersonic with a 44.1 kHz SF, but the difference frequencies would be lower than 20 kHz and would appear as gross distortion of a non-harmonic nature (not pleasing to hear like some forms of harmonic distortion). This means that some form of audio filter is required to prevent frequencies higher than one-half the SF from reaching the input of the AD converter. Because it is difficult to make a filter with a very "steep cut-off," and because filters with steep cut-offs tend to cause audible degradation such as phase response errors, the SF is typically higher than exactly twice the highest frequency to be recorded. This allows for the range in which the filter is starting to take effect but has not yet completely removed the problematic signals. This is the primary reason why the CD standard is 44.1 kHz instead of 40 kHz.
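A brief demonstration of the effect (a sketch, assuming no anti-alias filter in front of the converter): a 25 kHz tone digitized at a 44.1 kHz SF appears in the digital signal at the difference frequency 44.1 kHz - 25 kHz = 19.1 kHz.

    import numpy as np

    SF = 44100
    N = 44100                                  # one second of samples
    t = np.arange(N) / SF
    samples = np.sin(2 * np.pi * 25000 * t)    # tone above one-half the SF

    spectrum = np.abs(np.fft.rfft(samples))
    peak_hz = np.argmax(spectrum) * SF / N     # frequency bin holding the most energy
    print(peak_hz)                             # ~19100 Hz, not 25000 Hz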

As the sample frequency is increased, the filter requirements are eased by the additional frequency bandwidth available between 20 kHz and one-half the SF. This is part of the reason why most contemporary converters operate internally at a higher sample rate than 44.1 kHz, but the full advantage in this regard is obtained when the SF is increased to 88.2 or 96 kHz. Sample frequencies higher than 96 kHz do not offer an increase in accuracy and are not necessary to address the filter issue.
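The extra room available to the filter at each SF can be seen by comparing one-half the SF with the 20 kHz limit of hearing (a simple calculation, not a filter design):

    # Bandwidth available for the anti-alias filter between 20 kHz and one-half the SF.
    for SF in (44100, 48000, 88200, 96000):
        print(f"SF = {SF} Hz -> transition band = {SF / 2 - 20000:.0f} Hz")
    # 44.1 kHz leaves only 2050 Hz; 96 kHz leaves 28000 Hz, greatly easing the filter design.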