Chapter Five: Digital Audio

### 6. Quantizing, approximation errors and sample size

Samples are then assigned numeric values that the computer or digital circuit can use, in a process called **quantization**. The number of available values is determined by the number of bits (0's and 1's) used for each sample, also called the **bit depth** or **bit resolution**. Each additional bit doubles the number of available values (1-bit samples have 2 values, 2-bit samples have 4 values, etc.). When a sample is quantized, the instantaneous snapshot of its analog amplitude has to be rounded off to the nearest available digital value. This rounding-off process is called **approximation**. The fewer the bits used per sample, the farther apart the available values are, and the greater the distance an analog value may need to be rounded. The difference between the analog value and the digital value is called the approximation or **quantizing error**, as shown in the illustration below.
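The rounding-off described above can be sketched in a few lines of code. The snippet below is a minimal illustration, not a production converter: it quantizes some hypothetical sample amplitudes (a sine wave standing in for the analog snapshots) to a deliberately coarse 3-bit scale, then measures the approximation error for each sample.

```python
import math

bits = 3                       # bit depth: 2**3 = 8 available values
levels = 2 ** bits

# Hypothetical "analog" sample amplitudes in the range -1.0 to +1.0,
# standing in for the instantaneous snapshots taken during sampling.
analog = [math.sin(2 * math.pi * n / 16) for n in range(16)]

# Quantize: map -1..+1 onto the integer codes 0..levels-1, rounding
# each sample to the nearest available value (the "approximation" step).
codes = [round((a + 1.0) / 2.0 * (levels - 1)) for a in analog]

# Convert the codes back to amplitudes to see what the digital system stored.
quantized = [c / (levels - 1) * 2.0 - 1.0 for c in codes]

# The quantizing (approximation) error is the per-sample difference.
errors = [a - q for a, q in zip(analog, quantized)]
print("max quantizing error:", max(abs(e) for e in errors))
```

With only 8 levels the error is audible as noise; raising `bits` shrinks the spacing between levels, and with it the worst-case rounding distance.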

The greater the magnitude of the approximation errors, the greater the level of digital or quantizing noise produced. The solution to reducing **digital noise** is to use larger sample word sizes (greater bit depth), which in turn determines the dynamic range of the system, since bit depth sets the signal-to-noise ratio. (For digital systems, this is often measured as SQNR, or signal-to-quantization-noise ratio.) A general rule of thumb is an added 6 dB of dynamic range for every additional bit used per sample. The original CD standard proposed by Sony called for a 14-bit sample size, with a dynamic range of only 84 dB, but was changed to 16 bits before its introduction.
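The 6 dB-per-bit rule of thumb comes from the fact that each extra bit doubles the number of quantizing levels, and a doubling of amplitude resolution corresponds to 20·log10(2) ≈ 6.02 dB. A small sketch (the function name is our own, not from any standard library) reproduces the figures quoted above:

```python
import math

def dynamic_range_db(bits):
    """Approximate dynamic range in dB for a given sample bit depth.

    Each bit doubles the number of levels, adding 20*log10(2) ~= 6.02 dB.
    """
    return 20 * math.log10(2 ** bits)

# 14-bit (the original Sony CD proposal) vs. 16-bit (the adopted standard)
for bits in (14, 16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
```

Running this gives roughly 84 dB for 14-bit and 96 dB for 16-bit samples, matching the figures in the text; 24-bit systems extend the theoretical range to about 144 dB.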

Just as sample rate affects frequency response, **sample size** (i.e., bit depth) affects **dynamic range**, or the amplitude difference between the digital noise floor and the loudest possible sound before distortion.

**The CD/DAT standard of 16-bit samples, with its 65,536 quantizing values, provides a theoretical optimum playback dynamic range of 96 dB.**
