Daqarta for DOS Contents
C: parameter on the second (ADC) line of the configuration file. Daqarta refers to each input as a "Channel" on the main title line:
    Mic       Ch0
    CD R      Ch1
    CD L      Ch2
    Line R    Ch3
    Line L    Ch4

The power-on default is 'Line L' or Ch4. The CD and Line inputs are equivalent in sensitivity, whereas the Mic input is much more sensitive.
The Line input is the upper jack on the edge connector, and the one below it is the Mic input. For "tower" cases, the "top" of the edge connector is the end opposite the multi-pin MIDI connector.
If you plug one of the standard cables that come with many boards into the Line input, the red phono connector is the Line Right input... R for Red and Right. The remaining phono connector (white or black) is then the Line Left input. If you plug the same cable into the Mic input instead, only the left phono connector is the active Mic input... the right one is not used.
The CD inputs are only on the board itself... they don't appear on the rear panel. These "extra" inputs are handy for rapid switching between different signals. One excellent use is to monitor the direct stimulus (such as from the built-in OPL3 synthesizer) that you are using to drive your experiment. This allows you to verify timing, waveform, spectrum, rise/fall shapes, and so on.
You can buy a CD-ROM cable for a few dollars and snip off the CD connector, leaving only the end that plugs into the SB16. Then wire the free end to your choice of connectors mounted on a spare card-slot cover plate.
There are different styles of connector that you may find on the board. The original white CD connector pins are spaced on 2mm centers, so they are not compatible with the standard 0.1 inch connectors that are readily available (such as in your junk box). Later models may have a larger black connector that uses the 0.1 inch spacing.
high-pass filters with -3 dB cutoffs that differ among the various models. Early SB16s have a cutoff of about 40 Hz. The sensitivity is down about 1 dB at 100 Hz, 3 dB at 40 Hz, and 15 dB at 10 Hz.
Later models like the SB32 have much better low frequency response, with -3 dB cutoffs of a few Hertz.
All models contain automatic anti-alias filtering, performed by the DSP chip. This results in the high frequency response being determined by the sample rate, but it is typically down by 0.25 dB for input signals at 90% of the maximum for that rate... for example 20 kHz where the maximum is 22 kHz when sampling at 44 kHz.
OPL3 synthesizer has its own response fall-off at low and high frequencies, as do the DAC outputs used for STIM3 stimulus generation. Use the Y-log (Power Spectrum) mode to read dB directly. Set the Spectrum Averager to Peak mode, and set Sweeps to Continuous (1 or less).
For best frequency resolution, you may want to measure the low-frequency response separately. Use the lowest sample rate the board allows (4979 Hz), and set the number of points to 1024. Start with the input frequency at about 500 Hz. Now slowly adjust it down to zero and a low-frequency response curve will appear. If you move the frequency too quickly, you will get dips in the response curve. Just go back and fill them in by moving more slowly... the ability to do this is one of the advantages of Peak mode.
To plot the high frequency response, simply repeat the above frequency adjustment for the higher input frequencies. Be sure to set a higher sample rate after the above tests... select a rate typical of your anticipated use.
The above spectral methods are not very good for measuring the extended low-frequency response of some models, since even at the lowest sample rate the frequency resolution is 4.86 Hz. Instead, you may want to determine the low-frequency response by computation from a waveform measurement of the step response time constant.
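The 4.86 Hz figure follows directly from the sample rate and the number of points; a quick check (a minimal sketch, variable names are illustrative):

```python
# Bin width of the spectrum display: sample rate divided by the number
# of points. Values are from the text; the names are illustrative.
sample_rate = 4979          # Hz, lowest rate the board allows
N = 1024                    # points per sweep
bin_width = sample_rate / N
print(f"resolution: {bin_width:.2f} Hz")   # about 4.86 Hz
```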
If you don't already have a laboratory-type function generator or other source of very slow square waves, you can use the Stimulus Pulse output from an LPT printer port. You should see an initial step transient at the trigger point, which then slowly decays. Set the minimum sample rate and N = 1024 to see as much of this as possible.
Set a slightly negative trigger delay, so you can see the waveform level just before the leading edge: It might not be what you expect, since the transient from the prior pulse going off (or square wave going negative) may not have decayed completely. Now measure the height of the peak relative to this baseline, then the height of another point near the right end of the trace, and record the time difference.
The RC time constant of the input circuit will then be the time difference divided by the natural log of the ratio of the heights:
RC = (t2 - t1) / ln(V1 / V2)
If you are able to measure the peak as indicated, the value of t1 will be zero. But often there is a little ripple on any transient like this, due to the anti-alias filter operation. You can thus measure at a nearby point... the formula works for any two points, but the greater the difference, the better the accuracy. Note that the cursor readout will show the time difference directly. (Be sure to subtract the baseline from each height measurement before taking their ratio.)
f3 = 1 / (2 × pi × RC)
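Putting the two formulas together, the cutoff can be computed from any two baseline-corrected cursor readings. A minimal sketch, using hypothetical example measurements (not figures from the manual):

```python
import math

# Worked example of the time-constant formulas above. The two cursor
# measurements (times and baseline-corrected heights) are hypothetical
# illustration values, not figures from the manual.
t1, V1 = 0.0, 1.000     # s, height at the peak (t1 = 0)
t2, V2 = 0.050, 0.287   # s, height at a later point on the decay

RC = (t2 - t1) / math.log(V1 / V2)   # time constant in seconds
f3 = 1.0 / (2.0 * math.pi * RC)      # -3 dB cutoff in Hz
print(f"RC = {RC * 1000:.1f} ms, f3 = {f3:.2f} Hz")
```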
Mic......... 550 ohms
CD.......... 100000 ohms
Line........ 100000 ohms
Y-axis if you are viewing a waveform or linear magnitude spectrum. (The Y-log Power Spectrum is always relative to full-scale sensitivity for the range, so its axis doesn't change.) Higher Range values give higher input sensitivity. The exact sensitivity depends on the Input selected, as well as any Gain parameter, so you should refer to the Y-axis values.
Each Input has its own Range setting (and could likewise have its own Gain calibration), so you can maintain different sensitivities appropriate for the signal connected to each input. When you select a different Input, the Range that was previously set for that input is automatically used.
The Range settings run from 0 to 13, with sensitivity increasing in approximately 2 dB steps. These correspond to the board's internal mixer ranges of 18 to 31. Less sensitive mixer ranges are not allowed here, since they give dangerously misleading results.
Consider that on the least-sensitive Range 0 allowed here (mixer range 18), a 1 Volt RMS input just reaches the board's input overload point of +/- 1.414 Volts. But at that level it doesn't produce a maximal output from the Analog to Digital Converter (ADC)... that would require about +/- 1.8 Volts. The Daqarta Y-axis is calibrated for this Full Scale value at unity trace magnification.
Now suppose you could use the internal mixer range 0, which would attenuate all signals a further 36 dB or a factor of 63. Full Scale on such a range would be +/-113 Volts... guaranteed to smoke your board!
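The arithmetic behind that figure is straightforward; this sketch uses only values stated above (Range 0 full scale of about +/-1.79 Volts at mixer range 18, approximately 2 dB per mixer step):

```python
# Sketch of why low mixer ranges are disallowed. Figures are from the
# text: Range 0 full scale is about +/-1.79 V (mixer range 18), and
# each mixer step is approximately 2 dB.
fs_range0 = 1.79                       # Volts, full scale at Range 0
extra_atten_db = 18 * 2                # mixer range 18 down to 0: 36 dB
factor = 10 ** (extra_atten_db / 20)   # about 63
print(f"mixer range 0 full scale: +/-{fs_range0 * factor:.0f} V")  # +/-113 V
```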
G: parameter to calibrate each of the input sources for different external Gains. The values entered via the G: parameter refer only to Range 0. If you want a certain calibration on another range, you must compute the equivalent setting on Range 0 when setting the G: parameter.
The simplest way to do this is to apply a calibration signal of known amplitude to the desired input on the desired range. A sine wave of 500 to 1000 Hz is best, since you can use FFT mode to view its magnitude directly on a cursor readout. (Very high or low frequencies run the risk of being unduly affected by the frequency response of the board.) You can use the built-in synthesizer to generate the sine wave, and measure its output with an external voltmeter that is accurate enough to serve as the calibration reference.
Adjust the sine frequency or the sample rate to get minimum spectral leakage "skirts". If you are simultaneously reading the sine amplitude with an external AC voltmeter, use the Daqarta RMS magnitude mode for direct comparison without calculations. Most meters are calibrated to read RMS Volts for a sine wave input, even if they do not measure true RMS (which would be needed for correct results with complex waveforms).
(This method only works with sine waves. If you must use a different waveform, you will need to measure peak-to-peak or zero-to-peak values while viewing the waveform directly. You can use the ZeroP option to read zero-to-peak from a single cursor readout without subtraction. Of course, you will also need a way to know the peak value by independent means. Remember that there may be waveform changes due to the frequency response of the board.)
Now divide the true (voltmeter) value by the cursor readout to find the factor that must be multiplied by the full-scale Gain value to make the cursor readout match the meter. It doesn't matter what Range you used for the measurements, since the same factor will be automatically applied to all ranges.
But the G: parameter must be specified for Range 0, so multiply by 1.79 Volts for Line or CD inputs, or by 15.45 mV for Mic input, and use that value as the G: parameter for the given input.
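The divide-then-multiply step described above can be sketched as follows; the meter and cursor readings are hypothetical example values:

```python
# Sketch of the G: calibration arithmetic described in the text.
# The meter and cursor readings below are hypothetical examples.
meter_rms = 0.500    # Volts RMS from the external voltmeter
cursor_rms = 0.470   # Volts RMS shown by the Daqarta cursor readout

factor = meter_rms / cursor_rms   # correction factor, same on any Range
g_line = factor * 1.79            # G: value for Line or CD input (Volts)
g_mic = factor * 0.01545          # G: value for Mic input (15.45 mV)
print(f"Line/CD G: {g_line:.3f} V, Mic G: {g_mic * 1000:.2f} mV")
```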
As you change Range settings, the internal Range gain and the external Gain factor are combined appropriately to ensure that the Daqarta Y-axis always shows the proper values.
If the signal is much larger than the mixer limit, the ADC itself can be overdriven. In this case, the distortion is much more spectacular, since inputs that exceed the ADC full-scale value will "roll over" and reappear as the opposite polarity. This will totally trash your signal into spiky, incoherent garbage!
        ---- CD or Line ----     --- Microphone ----
Range    RMS   0-peak    p-p     RMS   0-peak   p-p
  0     1000    1414    2828      80     113    226
  1      820    1160    2320      70      99    198
  2      657     929    1858      55      78    156
  3      515     728    1456      43      61    122
  4      410     580    1160      32      45     90
  ...

(All values in millivolts.) At higher sensitivities, the distortion threshold decreases at approximately 2 dB per range step.
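Since the thresholds fall at roughly 2 dB per range step, the rest of the table can be extrapolated from the Range 0 value. A sketch (the idealized 2 dB values differ slightly from the measured table):

```python
# Extrapolate the Line/CD RMS distortion threshold at roughly 2 dB per
# range step, starting from the Range 0 value in the table. These
# idealized values differ slightly from the measured figures.
range0_rms = 1000.0   # mV RMS, Line/CD threshold at Range 0
thresholds = [range0_rms * 10 ** (-2 * r / 20) for r in range(5)]
print([round(t) for t in thresholds])   # → [1000, 794, 631, 501, 398]
```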
OPL3 synthesizer is not really "low distortion", and its software Level control resolution is only 1 dB, but this is probably adequate for most purposes.
Adjust the signal frequency to about 500 Hz and set the level for a mid-range waveform that is not obviously distorted. Observe the power spectrum and fine-tune the signal frequency for a single line spectrum without "skirts". Start with a sample rate of around 20 kHz, but if your signal source doesn't allow fine adjustment you can tweak the sample rate to get the single line spectrum.
Now slowly raise the signal level until you see a forest of lines arise from the spectrum noise floor. This is the onset of distortion. To increase your ability to determine the exact threshold as you raise and lower the level slightly, use a small amount of signal averaging, say 16 or 32 sweeps, in the Exponential mode. This will reduce the noise floor jitter and make the onset of distortion stand out more clearly, although you must adjust levels slowly to allow time for the exponential decay of the averaging.
After you find the distortion threshold in this manner, it is quite revealing to see how much higher you must raise the input level before you can detect any distortion when viewing only the waveform display.
The AGC stage has a basic sensitivity that is 10 times greater than the Fixed stage, as long as the input signal is kept below the AGC action threshold. Daqarta assumes you will be running at these lower levels when you activate the AGC option, and reports the Y-axis values accordingly.
The AGC action is not instantaneous. If the level suddenly increases after a long period at a low level, the AGC will take about 15 or 20 msec to reduce the gain. The initial part of the signal will thus be grossly distorted until the AGC catches up. The "release" from AGC action is much slower.
If you want to see the AGC action for yourself, connect one of the synthesizer outputs back to the Mic input. Set up a tone burst on that channel that lasts most of the trace... say 400 samples if you are using N = 512 samples. Set the thumbwheel to maximum (assuming you have no external amplifier).
Now adjust the Level control and watch what happens. If you start off at a high level and suddenly reduce it, the trace will shrink and then slowly grow back to its original size. Keep reducing the level until you no longer see this growth. You might suppose that this is the AGC action threshold, but in reality the repeating tone bursts keep nudging the AGC so that it never quite reaches maximum gain.
Next, hit the S-key to activate the Single Sweep option. The trace should look almost identical. After the sweep, the system will be in Pause mode, so there will be no tone bursts going to the input and the AGC will start to increase its gain as much as possible in a futile attempt to maintain its target level. Wait about 10 seconds and take another Single Sweep. Now you should see the first part of the tone burst at a higher level, maybe even distorted, then smoothly fading down to the prior level.
The action threshold is the level below which there is no change between a live trace and a sweep taken after many seconds, after the AGC has supplied all the gain that it can.
In unusual circumstances you might consider using the AGC for its intended purpose, with larger signals whose levels fluctuate so greatly that a single fixed range would be inadequate. In this case there is no effective sensitivity calibration... every input comes out the same! But there is a way that may allow a measurement or at least an estimate of the true input signal level:
If there is some low-level component of the signal that is NOT variable, such as background noise, it will also be affected by the AGC as it operates on the higher-level signal. So if the "constant" background appears larger, it means more gain has been used to bring the overall signal to the AGC level. You can thus use the background as an indication of the AGC gain. If you are going to set up an experiment specifically to use this idea, you could deliberately add a low level tone for this purpose. The tone should stand out clearly on a spectrum display.
Besides losing the range calibration, however, there is one more penalty for using AGC in its active region: It adds a certain amount of distortion to your signal. Since one important use of AGC might be to look at the relative strengths of various frequency components even if the overall level is changing, you must take this distortion into consideration. The AGC could be changing the very components you are interested in!
Range    RMS   0-peak   peak-peak
  0     6.15    8.70     17.39
  1     6.15    8.70     17.39
  2     5.50    7.78     15.56
  3     4.34    6.14     12.28
  4     3.42    4.84      9.67
  5     2.72    3.85      7.69
  ...

At higher sensitivities, the AGC threshold decreases at approximately 2 dB per range step.
duplex mode the ADC Bits control allows you to change this during operation. Unless storage space for long Direct-to-Disk (DDisk) files is a real problem, the 16-bit mode is a better choice for most non-duplex work.
In full duplex mode the ADC Bits control is "locked out" for most models. That's because they require the ADC bits to be the opposite (8 or 16) of the DAC bits, and the DAC bits are set at start-up (via the B:Dn parameter or default of 8) for STIM3 module initialization.
Only the CT417x ViBRA / WavEffects models allow independent control of ADC bits and thus allow access with STIM3 present. In that case you will probably want to set B:D16 to use 16-bit DAC outputs, and set the ADC to 16-bit as well.
But for other models the DAC outputs will be 8-bit when the ADC is 16-bit and vice-versa, so choice of ADC bits in full duplex mode is not as simple as for non-duplex operation. For example, if the DACs will be used to produce auditory stimuli and the ADC to record physiological responses to those stimuli, 8-bit ADC resolution may be more than adequate for typical noisy electrode signals. In this case, 16-bit DAC resolution might be used to ensure minimal audible distortion in the stimuli, and natural dither in the response can allow more than 8-bit resolution from the ADC via waveform averaging.
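The averaging idea can be illustrated numerically: with natural noise ("dither") of a least-significant-bit or more, the average of many quantized sweeps resolves the underlying level far more finely than one LSB. A hedged sketch with illustrative values:

```python
import random

# Illustration of resolution gain from averaging dithered samples.
# A constant level (between two 8-bit codes) plus noise is rounded to
# whole LSB steps, then many sweeps are averaged. Values are
# illustrative, not from the manual.
random.seed(1)
true_level = 100.37   # in LSB units, between two quantizer codes
sweeps = [round(random.gauss(true_level, 2.0)) for _ in range(20000)]
estimate = sum(sweeps) / len(sweeps)
print(f"averaged estimate: {estimate:.2f}")   # close to 100.37
```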
On the other hand, the typical distortion with 8-bit DACs is on the order of -48 dB, not easily audible with continuous tone stimuli and probably of no consequence for the vast majority of "threshold response" test types. So the 16-bit ADC could be used instead with a reduced input sensitivity, allowing increased immunity to input overloads caused by low-frequency artifacts. The sensitivity could be reduced by a factor of up to 16 while still preserving 8-bit resolution.
For intermodulation distortion measurements, even on loudspeakers or other non-biological systems, the harmonic distortion of the sources may be irrelevant and 8-bit DACs would be perfectly adequate if the proper techniques are used.
Each type of measurement should be considered separately, and possibly tested with both bit configurations for optimal results.
OPL3 Synthesizer is used to generate stimuli. When this option is set, the menu below the Stimulus source shows the main output controls for the synth, and the CTRL-PgUp / PgDn keys are available to bring up the Left and Right Synth control menus.
If the STIM3 Stimulus Generator module is also loaded, then the Stimulus source may be optionally toggled to StGen for full duplex output from the DACs. (If STIM3 is not present, then the Stimulus source option is locked to Synth.)
When you toggle the Stimulus source to StGen (or use the S:1 configuration parameter option), the lower menu is replaced with the StGen Output level controls for the Left (DAC 0) and Right (DAC 1) outputs, and the CTRL-Pg keys are disabled. In addition, an information display shows the DAC Bits (8 or 16), which is the opposite of the ADC bits in use for most models.
Alternatively, whenever you activate one or both DAC outputs from the STIM3 control menu (CTRL-G), the Synth Master Outputs control will be toggled Off and the Stimulus control will be forced to StGen mode automatically.
As a safety measure, turning the DACs off from the STIM3 menu will NOT restore any prior Synth output... you must return to the SB16 Board control menu (CTRL-B) and manually toggle Stimulus to Synth and Master Outputs to On. (If you have a need to rapidly toggle between these different stimulus sources, you can easily create a Key Macro for that purpose.)
© Copyright 1999 - 2006 by Interstellar Research