Daqarta for DOS
Data AcQuisition And Real-Time Analysis
Shareware for Legacy Systems

From the Daqarta for DOS Help system:


The previously-discussed Discrete Fourier Transform is a general application of Fourier Transform theory using digital methods. But it's rather slow, since all N samples must be multiplied by sine and cosine values at each frequency, for N/2 different frequencies, giving N² total multiplications. That's over 1 million if N = 1024, and multiplication is one of the slower instructions on most computers. Back in the '60s, however, a couple of clever gents named Cooley and Tukey noticed that the set of DFT calculations contains a lot of redundancy and wasted effort.
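The brute-force calculation can be sketched in a few lines. This is a Python illustration of the general DFT idea, not Daqarta's actual implementation: each of the N/2 output frequencies requires N sine and N cosine multiplies, which is where the N-squared growth comes from.

```python
import math

def dft(samples):
    """Naive DFT sketch.  For each of N/2 frequencies, every one of the
    N samples is multiplied by a sine and a cosine reference value, so
    the work grows as N squared."""
    N = len(samples)
    spectrum = []
    for k in range(N // 2):
        re = sum(samples[n] * math.cos(2 * math.pi * k * n / N)
                 for n in range(N))
        im = -sum(samples[n] * math.sin(2 * math.pi * k * n / N)
                  for n in range(N))
        spectrum.append((re, im))
    return spectrum
```

Feeding this a cosine that fits the sample set exactly puts all the output in one bin, with nothing but rounding error elsewhere.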

For one thing, everywhere the sine or cosine reference values go to zero, there is no point multiplying the input sample... just use the zero directly. Likewise, everywhere the multiplier is +1, the input value can simply be copied directly. If the multiplier is -1, we only need a sign change.

Similarly, a particular input sample may end up being multiplied by the same numerical value for the sine or cosine components of many different reference frequencies. For example, the value 0.7071 is repeated every 45 degrees around a sine or cosine cycle, differing only in sign. And since each sample is multiplied by the references for all the different frequencies, many of those references turn out to use the same values on the same sample. We could do a single multiply of each input sample by each of these recurring values, and then just reuse that result (perhaps with a sign change) everywhere else it is needed in that set of input samples.

These recurring conditions happen a lot, especially if the number of samples happens to be a power of 2. The whole operation becomes an accounting nightmare of keeping track of which redundant value should be plugged in where, but Cooley and Tukey came up with a method to keep everything straight, and the Fast Fourier Transform (FFT) was born. Subsequent workers have come up with all sorts of enhancements, but the basic idea remains: Don't waste time doing the same thing twice.
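The bookkeeping Cooley and Tukey devised can be illustrated with the classic recursive radix-2 form. Again, this is a Python sketch for illustration, not Daqarta's code: each half-size transform is computed once and reused for two output bins, and each "twiddle factor" multiply is done once and used with both signs.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT sketch.  Assumes len(x) is a
    power of 2.  Each half-size sub-transform serves two output bins,
    so no multiplication is ever repeated."""
    N = len(x)
    if N == 1:
        return x[:]
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0j] * N
    for k in range(N // 2):
        # One twiddle multiply, reused (with a sign change) for two bins:
        tw = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + tw
        out[k + N // 2] = even[k] - tw
    return out
```

A single time-domain spike transforms to the same value in every bin, and a constant input puts everything in bin 0, as expected.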

Where the number of multiplications in a typical DFT quadruples when N is doubled, for a typical FFT it only slightly more than doubles. For N = 1024, the FFT turns out to be about 100 times faster than the DFT.

The number of points N must be a power of 2 for almost all FFT algorithms, whereas for a DFT we could use any arbitrary number. This is not a trivial difference, since with the DFT the number of samples can be adjusted so the sample set holds an integer number of input cycles, eliminating "skirts" in the spectrum.

In practice, however, the DFT is only useful for recorded data that the user can manipulate at leisure to find the right number of points, and wait for slow computations... not feasible for a "live" system like Daqarta. Instead, FFT systems must use sample rate adjustment or windowing methods to reduce these leakage skirts.
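The leakage skirts themselves are easy to demonstrate numerically. In this Python sketch (the bin_magnitude helper is hypothetical, just for illustration), a sine that fits the sample set in an exact integer number of cycles lands entirely in one bin, while one with a half cycle left over smears energy into the surrounding bins:

```python
import math

def bin_magnitude(samples, k):
    """Magnitude of DFT bin k, by direct summation (illustrative helper)."""
    N = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(samples))
    return math.hypot(re, im)

N = 64
# Exactly 4 cycles in the sample set: all energy lands in bin 4.
exact = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]
# 4.5 cycles: the extra half cycle leaks into skirts around bin 4.
offkey = [math.sin(2 * math.pi * 4.5 * n / N) for n in range(N)]
```

With the exact fit, bins away from 4 are zero to within rounding error; with the half-cycle mismatch, even a bin well off the peak still shows substantial energy.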


Consider a continuous pure sine wave. Its waveform is spread throughout the time domain, but its spectrum is only a single line in the frequency domain. Now consider a signal consisting of only a single spike in the time domain. It has a perfectly constant spectrum spread throughout the frequency domain.

You can check this out using the STIM3A Advanced Stimulus Signal Generator, which includes a Pulse Wave option. Or, if you simply wish to view a pulse and its spectrum you can use the Virtual Source with the method discussed under Step response correction.

Flip to the spectrum display with the F-key and boost the trace magnification (or switch to the Y-log power spectrum mode) to see the constant low-level spectrum. The level is low because the single time-domain spike doesn't have a lot of energy (or power) to begin with, and the spectrum shows the level at EACH frequency.
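The same flat spectrum can be confirmed numerically. In this Python sketch a single unit spike is transformed by direct summation, and every frequency bin comes out with the same (low) magnitude:

```python
import math

N = 16
impulse = [1.0] + [0.0] * (N - 1)   # a single spike in the time domain

# Direct DFT of the impulse: every bin gets the identical magnitude,
# i.e. a perfectly constant spectrum across all frequencies.
mags = []
for k in range(N // 2):
    re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(impulse))
    im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(impulse))
    mags.append(math.hypot(re, im))
```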

Similarly, if we compare a continuous sine wave to a tone burst of that same sine wave, we find that the narrower we make the burst, the broader the spectrum. This holds as a general rule... you can't limit the extent of a signal in one domain without increasing its extent in the other. There is always a balance between the domains.
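A quick numerical check of this trade-off (a Python sketch; the magnitudes helper is hypothetical): truncating a sine wave to a short burst spreads its spectrum over many more bins than the continuous version occupies.

```python
import math

def magnitudes(samples):
    """Magnitude of every DFT bin, by direct summation (illustrative helper)."""
    N = len(samples)
    mags = []
    for k in range(N // 2):
        re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

N = 128
# Continuous tone: exactly 16 cycles in the sample set.
continuous = [math.sin(2 * math.pi * 16 * n / N) for n in range(N)]
# Same tone gated to a burst of only 16 samples (1/8 of the record).
burst = [s if n < 16 else 0.0 for n, s in enumerate(continuous)]

mc = magnitudes(continuous)
mb = magnitudes(burst)
# Count bins holding at least 10% of each spectrum's peak level:
narrow = sum(1 for m in mc if m > 0.1 * max(mc))
wide = sum(1 for m in mb if m > 0.1 * max(mb))
```

The continuous tone occupies a single bin, while the short burst spreads over many: narrowing the signal in time broadens it in frequency.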

This is also an UNCERTAINTY PRINCIPLE (like Heisenberg's) that limits our ability to resolve features in both the time domain and frequency domain simultaneously. Consider that to get fine frequency resolution, we must use a large sample size N, which means a large total time to acquire the sample set. If the input signal is rapidly changing, we will get some sort of "average" spectrum of the N-sample time interval... and we can't assign that whole spectrum to any particular event or sample in the input. Conversely, we can try to narrow down the input event times by using smaller N, but the spectral resolution becomes more coarse. This is a particular issue with spectrograms, where we are specifically interested in how the spectrum changes with time.


© Copyright 1999 - 2006 by Interstellar Research