Friday, July 1, 2016

Digital audio Part 4 - Signal Reconstruction & Dithering

Reconstruction of the digitized signal using simple retention

Let's now look at the process of reconstructing a signal in more detail. The simplest procedure consists of obtaining a value proportional to the binary number of each sample using a digital-to-analogue converter and holding it constant until the next sample arrives, which is usually when a new sample cycle starts. This process is called simple retention (also known as zero-order hold), and it is the procedure used in the figure above.
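As a rough illustration (not code from the article), simple retention can be sketched in Python with NumPy; the 1 kHz test tone and the factor of 16 "analogue" points per sample are arbitrary assumptions made for the example.

```python
import numpy as np

fs = 44_100                               # sample rate (Hz), assumed
t = np.arange(0, 0.001, 1 / fs)           # 1 ms of signal
samples = np.sin(2 * np.pi * 1000 * t)    # digitized 1 kHz sine (assumed test tone)

# Simple retention (zero-order hold): each converted value is held
# constant until the next sample arrives.  Repeating every value N
# times emulates the stair-step DAC output on a finer time grid.
N = 16                                    # "analogue" points per sample period
held = np.repeat(samples, N)              # stair-step reconstructed signal
t_fine = np.arange(held.size) / (fs * N)  # matching time axis
```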

Once the signal has been reconstructed, we must apply a smoothing filter, in this case a low-pass filter that rounds off the sharp edges left by simple retention. This filter must have characteristics similar to those of the anti-aliasing filter introduced for the digitization process: a very steep slope, eliminating almost entirely the frequencies above 20 kHz while letting those below 20 kHz pass through unaffected. Filters of this kind are complex and likely to introduce phase distortion. To get around this problem, the concept of oversampling was introduced.
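The reconstruction filter itself is analogue hardware, but its required response, flat up to 20 kHz and falling off very steeply above it, can be sketched with a digital stand-in in SciPy; the elliptic design, ripple and attenuation figures below are arbitrary illustrative choices, not values from the article.

```python
from scipy import signal

fs = 44_100          # sample rate (Hz)

# Steep low-pass: pass band up to 20 kHz, stop band from 21 kHz,
# at most 0.5 dB ripple, at least 60 dB attenuation (arbitrary specs).
order, wn = signal.ellipord(20_000, 21_000, 0.5, 60, fs=fs)
b, a = signal.ellip(order, 0.5, 60, wn, fs=fs)

# Such a sharp filter has a strongly frequency-dependent group delay
# near the cutoff -- the phase distortion mentioned in the text.
w, gd = signal.group_delay((b, a), fs=fs)
```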

Oversampling consists of interleaving, between the samples of the signal that are actually obtained or stored, additional "samples" calculated by interpolation using complex algorithms; oversampling with a multiplier of 8 therefore adds 7 interpolated samples for every real sample. The result is equivalent to a sample rate 8 times higher than the original. If fM = 44.1 kHz, the new sample rate is 352.8 kHz, so the unwanted components above the audio band can be eliminated with low-pass filters that are much simpler and have less effect on the phase and transients of the signal. Oversampling is used routinely today in compact disc players, something made possible because electronics are much faster than they were when this technology first appeared.
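As a simplified stand-in for those interpolation algorithms (again an assumption, not the article's code), SciPy's polyphase resampler can insert the 7 extra samples per real sample:

```python
import numpy as np
from scipy import signal

fs = 44_100                               # original sample rate (CD)
L = 8                                     # oversampling multiplier

t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)          # the "real" stored samples

# 8x oversampling: 7 interpolated values are inserted between every
# pair of real samples; resample_poly calculates them with a
# polyphase low-pass interpolation filter.
x_ovs = signal.resample_poly(x, up=L, down=1)
fs_ovs = fs * L                           # effective rate: 352.8 kHz
```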

Dithering 


When very low-level signals are digitized (close to the converter's resolution), digitization noise turns into a distortion whose effect is more harmful than random noise. For example, if we digitized a 100 Hz sine wave whose amplitude was less than one step, the signal obtained after reconstruction would look more like a square wave, as shown in the figure below, and would therefore contain harmonics at 300 Hz, 500 Hz, 700 Hz, and so on. If instead of one sine wave we applied two or more, we would get intermodulation distortion, which is highly undesirable.

Distortion created by sampling low-level signals


One way of avoiding these problems is to add a small amount of random noise before sampling and digitization. This noise, whose effective (RMS) value is less than one step, is known as dither. Dither has the side effect of slightly worsening the signal-to-noise ratio, but from an auditory point of view the distortion is transformed into random noise, which is much more acceptable, especially at these low levels.
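A minimal numerical sketch of the idea (my own illustration, assuming a mid-riser quantizer and TPDF dither) compares quantizing a sub-step 100 Hz sine with and without dither:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100
step = 1.0                                    # one quantizer step (1 LSB)
t = np.arange(0, 1.0, 1 / fs)
x = 0.4 * step * np.sin(2 * np.pi * 100 * t)  # 100 Hz sine smaller than one step

def quantize(v, step):
    # Mid-riser quantizer: output levels at +/- step/2, +/- 3*step/2, ...
    return (np.floor(v / step) + 0.5) * step

# Without dither the tiny sine collapses into a square wave, with odd
# harmonics at 300 Hz, 500 Hz, 700 Hz, ...
q_plain = quantize(x, step)

# TPDF dither (two uniform noises summed, RMS well under one step)
# added *before* quantization turns that distortion into random noise.
dither = (rng.uniform(-0.5, 0.5, t.size) +
          rng.uniform(-0.5, 0.5, t.size)) * step
q_dither = quantize(x + dither, step)
```

Plotting the spectra of q_plain and q_dither should show the difference: discrete odd harmonics in the first case, and the 100 Hz tone sitting above a smooth noise floor in the second.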

Dither is also commonly applied in requantization, that is, when we want to reduce the resolution of a signal recorded at 20 bits down to 16 bits so that it can be transferred to a commercial format such as the compact disc. If we simply truncated the 20-bit data, discarding the 4 least significant bits, we would run into the same problems described above. In this case the noise is generated digitally and added before the bits are truncated.
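The digital version of the same idea, reducing 20-bit samples to 16 bits, might look like the following sketch; the function name and the TPDF dither scaled to the 16-bit step are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def requantize_20_to_16(samples_20bit, use_dither=True):
    """Reduce 20-bit integer samples to 16 bits (illustrative sketch)."""
    step = 1 << 4                          # one 16-bit step = 16 20-bit steps
    x = samples_20bit.astype(np.int64)
    if use_dither:
        # Digitally generated TPDF dither, spanning about +/- one step of
        # the target (16-bit) resolution, added before bits are discarded.
        d = rng.integers(-step // 2, step // 2, x.size, endpoint=True)
        d += rng.integers(-step // 2, step // 2, x.size, endpoint=True)
        x = x + d
    # Truncate: drop the 4 least significant bits.
    return (x >> 4).astype(np.int32)
```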


