Daqarta for DOS
Data AcQuisition And Real-Time Analysis
Shareware for Legacy Systems

From the Daqarta for DOS Help system:

## WAVE AVERAGER THEORY, Continued:

### DITHER: NOISE + AVERAGING = RESOLUTION:

Although we are seldom happy about needing averaging to reduce the noise in a signal, we can actually gain a hidden benefit from this situation in the form of increased bits of resolution. It may sound incredible at first, but noise is sometimes added to an otherwise clean signal deliberately to take advantage of this phenomenon, known as "dither". Here's the scoop:

For simplicity, consider an ADC such that each output value represents an input span of 1 millivolt, and that the output changes state at the exact center of each span. Thus, if we slowly raised the input voltage from 0 to 1 mV we would see a step in the output value from 0 to 1 just as we crossed the 0.5 mV point.
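As a concrete model, this 1 mV-per-step converter can be sketched in a couple of lines of Python (the `adc()` helper is a hypothetical illustration, not part of Daqarta):

```python
import math

def adc(mv):
    """1 mV-per-step ADC: the output changes state at the exact center of
    each span, so the step edges sit at +/-0.5 mV, +/-1.5 mV, and so on."""
    return math.floor(mv + 0.5)

# Slowly raising the input from 0 toward 1 mV steps the output at 0.5 mV:
print(adc(0.49))   # -> 0
print(adc(0.51))   # -> 1
```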

Now suppose we apply a small triangular wave of +/- 1 mV to this ADC. In the crude graph below, we see 32 samples of this input voltage and the corresponding ADC output values. The output certainly does not look much like the input:

```
  mV            INPUT VOLTAGE:

+1.0    ..../\................/\........
           /  \              /  \
+0.5    ../....\............/....\......
         /      \          /      \
   0    /........\......../........\....
                  \      /          \
-0.5    ...........\..../............\..
                    \  /              \
-1.0    .............\/................\

                ADC OUTPUT:

  +1      111111            111111
         :      :          :      :
   0    00      0000    0000      0000
                   :   :             :
  -1                1111              11
```

If we used a waveform averager to average together 16 sweeps of this signal, it would do so by adding the output values of each sweep into 32 sample "bins" that hold the running totals for the averager. To find the actual average waveform, the total in each bin would be divided by 16. From the graph, we can see that the first few bins would receive only zeroes from each sweep, so the totals would be zero and the average would be zero as well. Then the next several bins would receive 1s from each sweep, so the totals in those bins would reach 16, and when we divided by 16 we would get back the original 1s. In other words, the averager output would look exactly like the ADC output of any individual sweep.
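The whole no-noise experiment fits in a short Python sketch (the 16-sample triangle period and the `adc()` helper are assumptions for illustration):

```python
import math

def adc(mv):
    # 1 mV-per-step ADC, step edges at +/-0.5 mV
    return math.floor(mv + 0.5)

def tri(i):
    """+/-1 mV triangle wave, 16 samples per period (an assumed sampling)."""
    p = i % 16
    return p / 4 if p <= 4 else (2 - p / 4 if p <= 12 else p / 4 - 4)

# Add 16 sweeps into 32 running-total "bins", then divide by 16:
bins = [0] * 32
for _ in range(16):
    for i in range(32):
        bins[i] += adc(tri(i))
avg = [b / 16 for b in bins]

# With no noise, every sweep is identical, so averaging changes nothing:
print(avg == [adc(tri(i)) for i in range(32)])  # -> True
```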

But now suppose we add random noise to the input signal. At any point in time, the total waveform will be the original triangle voltage for that sample, plus a random voltage between -1 mV and +1 mV. The key fact here is that each sweep gets different random values at each time point.

Consider a time point where the triangle passes through 0 mV. On some sweeps we will now find a voltage as high as +1 mV or as low as -1 mV due to the added +/-1 mV noise. If all the random noise voltages are uniformly distributed, we expect that on about half of the sweeps the instantaneous noise at that time point will fall between +0.5 mV and -0.5 mV, which means the ADC output would remain at 0. About a quarter of the time we would get noise above +0.5 mV and the ADC output would be 1, but this would be balanced by the approximate quarter of the noise voltages that fell below -0.5 mV and caused the ADC output to be -1. So, if we averaged 16 sweeps together we would still expect the average value to be somewhere around zero for this time point.

Next, consider a time point where the triangle passes through +0.5 mV before noise is added. When +/-1 mV of noise is present, the voltage at this time point may go as high as +1.5 mV or as low as -0.5 mV. We expect that about half of the time the added noise will be between -1 mV and 0, so the total will be +0.5 or less and the ADC output will be 0 also. And about half of the time the noise will be between 0 and +1 mV so the total will be above +0.5 and the ADC output will be 1. In every 16 sweeps we thus expect about 8 zeroes and 8 ones, giving 0.5 for the average ADC output value. Notice that we could not obtain this value without averaging, since the ADC alone can only produce integer outputs.
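Here is a rough Monte Carlo check of this half-and-half argument (assuming uniformly distributed noise), using many more than 16 sweeps so the statistics settle down:

```python
import math
import random

random.seed(1)

def adc(mv):
    # 1 mV-per-step ADC, step edges at +/-0.5 mV
    return math.floor(mv + 0.5)

# Triangle value of +0.5 mV at this time point, plus uniform +/-1 mV noise:
outs = [adc(0.5 + random.uniform(-1, 1)) for _ in range(100_000)]
print(outs.count(0) / len(outs))   # about half the sweeps read 0
print(outs.count(1) / len(outs))   # about half the sweeps read 1
print(sum(outs) / len(outs))       # average near 0.5
```

With only 16 real sweeps any single run will wander around these values, but the expected average is still 0.5.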

This same approach applies at all time points: Where the original waveform is at 0.875 mV, the waveform plus noise will run between -0.125 mV and +1.875 mV. The fraction of sweeps where we expect it to be below +0.5 would be proportional to the range from -0.125 to +0.5, which is 0.625 mV. This is 5/16 of the total range of 2 mV, so we expect about 5 out of every 16 sweeps to give an ADC output of 0.

The waveform plus noise would be between +0.5 and +1.5 mV about 1/2 of the time (1 mV out of 2) so we expect the ADC output to be 1 in about 8 out of 16 sweeps. The range from +1.5 to +1.875 is 3/16 of 2 mV, so we expect an ADC output of 2 on about 3 out of 16 sweeps. The average ADC output would thus be

```
  5/16 × 0  +  8/16 × 1  +  3/16 × 2  =  14/16  =  0.875
```
which is just the waveform value we wanted.
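The same 5/16, 8/16, 3/16 split can be confirmed numerically (again assuming uniform +/-1 mV noise and the hypothetical 1 mV-per-step `adc()`):

```python
import math
import random

random.seed(3)

def adc(mv):
    # 1 mV-per-step ADC, step edges at +/-0.5 mV
    return math.floor(mv + 0.5)

# A 0.875 mV signal plus uniform +/-1 mV noise, over many sweeps:
outs = [adc(0.875 + random.uniform(-1, 1)) for _ in range(100_000)]
for v in (0, 1, 2):
    print(v, outs.count(v) / len(outs))   # near 5/16, 8/16, 3/16
print(sum(outs) / len(outs))              # near 0.875
```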

Since we are averaging 16 values together to find each averaged output, we effectively have 16 times as many possible values as the direct unaveraged ADC output. This is equivalent to 4 more bits of ADC resolution, which allows us to "fill in the gaps" of our rather coarse original ADC waveform. Notice that this process depends on the noise to make it work: because the noise adds differing voltages on different sweeps, it produces a distribution of ADC output values whose average approaches the actual noise-free input more closely.
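Putting the pieces together, this sketch averages dithered sweeps of the triangle and compares the worst-case error against the bare quantizer (the 16-sample triangle and sweep count are assumptions for illustration; more sweeps than 16 are used here just to tame the residual noise):

```python
import math
import random

random.seed(4)

def adc(mv):
    # 1 mV-per-step ADC, step edges at +/-0.5 mV
    return math.floor(mv + 0.5)

def tri(i):
    """+/-1 mV triangle wave, 16 samples per period (an assumed sampling)."""
    p = i % 16
    return p / 4 if p <= 4 else (2 - p / 4 if p <= 12 else p / 4 - 4)

SAMPLES, SWEEPS = 32, 256
bins = [0] * SAMPLES
for _ in range(SWEEPS):
    for i in range(SAMPLES):
        bins[i] += adc(tri(i) + random.uniform(-1, 1))   # dithered sweep
avg = [b / SWEEPS for b in bins]

# Worst-case error of the dithered average vs. the raw quantized sweep:
err_dither = max(abs(avg[i] - tri(i)) for i in range(SAMPLES))
err_raw    = max(abs(adc(tri(i)) - tri(i)) for i in range(SAMPLES))
print(err_dither < err_raw)   # -> True: dither + averaging beats the bare ADC
```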

You can use the Virtual Source to demonstrate this dither effect for yourself. Initially, set:

Use trace magnification to enlarge the waveform for easy viewing. If you don't see a stable wave, make sure Trig is toggled on, and that the Trigger menu is set to:

Now reduce the Bits setting until you see nothing but a square wave, typically at around Bits = 7. At this point the low Level input signal is just barely activating the "least bit" of the Virtual ADC. Since there is no Noise, this waveform will not be changed by Averaging... try it and see.

UnPause after the average, then change Noise to the same value you set for Level, and you will see a really trashy waveform. It's hard to think of this as an improvement, but hit the Average key and a ramp wave magically begins to appear. Sure, it's got a little noise on it... but notice that even for as few as 16 or 32 averager sweeps the noise is less than the original least bit steps. Try the Sine wave as well. (And think about why the Square wave doesn't improve.) Experiment with different averager sweeps versus Virtual Source Noise versus Level to get a good feel for what is going on here.

To show the amazing power of dither, you can take this game to extremes and reduce Bits to 1. You will need to reduce the trace magnification, since now the Virtual ADC puts out only two values, equivalent to positive full-scale for any positive input and negative full-scale for any negative input. Yet when we average as before, we still get the proper waveform.
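Even the 1-bit extreme can be checked numerically. This sketch (with the assumed 16-sample triangle again, and an output of +/- full scale) recovers the ramp from nothing but signs:

```python
import random

random.seed(5)

def adc1(mv):
    # 1-bit "ADC": positive full scale for any positive input, else negative
    return 1 if mv > 0 else -1

def tri(i):
    """+/-1 mV triangle wave, 16 samples per period (an assumed sampling)."""
    p = i % 16
    return p / 4 if p <= 4 else (2 - p / 4 if p <= 12 else p / 4 - 4)

SWEEPS = 4096   # a 1-bit converter needs lots of averaging
avg = []
for i in range(32):
    total = sum(adc1(tri(i) + random.uniform(-1, 1)) for _ in range(SWEEPS))
    avg.append(total / SWEEPS)

print([round(v, 2) for v in avg[:5]])   # ramps up like the triangle itself
```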

This is pretty significant stuff: We took the worst possible ADC (only a single bit) and greatly improved it by adding NOISE, which we normally try to avoid like the plague. Yet with the "sugar" of averaging, we turned two lemons into lemonade!

Don't think that dither is only a cute parlor trick: It's not. In fact, it is used in one form or another by all commercial digital music recordings. The need for this was discovered on some of the earliest CDs, where a final piano note decayed into a nasty buzz before becoming silent. The problem was that when the sound decayed sufficiently, it was only activating a few bits of the 16-bit ADCs and thus producing grossly distorted "chunky" waveforms. Since our ears provide a certain natural averager action, it was found that adding a tiny bit of noise could render the obnoxious distortion completely inaudible at the small cost of a slightly increased (and easily tolerated) background hiss.

You can use the STIM3A DAC Bits control to create a dramatic audio demonstration of this. Check out the discussion under Bits in the STIM3A Help system. (CTRL-G brings up the STIM3A control menu, then move to Bits and hit CTRL-? to bring up the relevant section.)

So, how does knowledge of dither apply to your data acquisition system? Consider a situation where you want to record a low-level physiological signal that is already contaminated with random noise, such that averaging is needed. The presence of natural dither may mean that moving from a 12-bit ADC board to a 16-bit board would not result in any useful advantage, and that in fact even an 8-bit board may already be overkill.

The thing that needs to be considered here is DYNAMIC RANGE: How large are the largest signals compared to the smallest that you want to record? You may need to add gain to the signal, either by changing ranges on the ADC board or via an external preamp, to make use of the full range of the ADC. If, after you have done this, the inherent noise is still much bigger than the resolution of the ADC, then you don't need any more bits.

But determining the optimum system gain can be tricky, since real-world situations often have unexpected level changes and artifacts. If the nature of a typical artifact is such that it produces a transient baseline shift but does not otherwise corrupt the data, then it would be reasonable to include it in an average instead of tripping the artifact rejection limits. You would thus need to know the maximum anticipated baseline shift in order to set the gain properly. In cases like this, having more bits of ADC resolution may allow you to have a bigger safety factor in your calculations, possibly allowing faster accumulation of averages with less of the sweeps rejected by the artifact limits.
