Hi PeterSt, this is a reply to your post #616.
The latest PFO article, although serious in tone, is partly a joke aimed at the mid-fi PCM DACs out there. From my perspective, these DACs do a pretty bad job with both PCM and DSD. The high frequencies of PCM are rendered as harsh and gritty, while DSD playback is syrupy and rolled-off. However ... PCM converters followed by non-slewing electronics sound quite different; from my perspective, there's little to no "PCM" coloration as audiophiles think of it. DSD at the Playback Designs (and possibly Invicta) level also no longer sounds syrupy and rolled-off. So the real problem is with mid-fi converters, which do a very poor job of representing the recording.
Referring to the "transients" mentioned in post #616, I may not have been as clear in the PFO article as I could have been. I was not referring to musical transients at all, but the rising and falling edges that emerge from an R2R converter like the PCM-63, as directly measured with a 500 MHz Tek scope or an HP RF spectrum analyzer. The digital input to the converter was not fed musical transients, but a steady full-scale 20 kHz sine wave, and the current output of the PCM-63 was measured with a 50-ohm probe attached directly to the pin of the converter chip, with no I/V stage at all.
That's where I saw the nanosecond rise times: at the edge of each sample, before analog low-passing and waveform reconstruction. When fed into the HP analyzer, I saw a comb spectrum extending out to 20 MHz, fading into the analyzer's noise floor of -100 dB at 50 MHz. The PCM-63, with only a 50-ohm load and a short length of coax going into the scope, had very little overshoot, less than 5%, so it didn't appear to need deglitching.
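For anyone who wants to see where the comb comes from, here's a small Python sketch of the image levels an ideal zero-order-hold staircase produces for a full-scale 20 kHz tone at 44.1 kHz. This is an illustrative textbook model, not my bench data; the real nanosecond edges make the staircase transitions nearly ideal steps, so the model is a fair guide out to tens of MHz.

```python
import math

fs = 44_100   # DAC sample rate, Hz (illustrative case: Red Book rate)
f0 = 20_000   # test-tone frequency, Hz

def zoh_image_db(n):
    """Level of the sampling image at n*fs - f0, relative to the baseband
    tone, using the zero-order-hold sinc envelope sin(pi*f/fs)/(pi*f/fs)."""
    f = n * fs - f0
    env = abs(math.sin(math.pi * f / fs) / (math.pi * f / fs))
    base = abs(math.sin(math.pi * f0 / fs) / (math.pi * f0 / fs))
    return 20 * math.log10(env / base)

# Images near 24 kHz, 420 kHz, 4.4 MHz, and 19.8 MHz
for n in (1, 10, 100, 450):
    f = n * fs - f0
    print(f"image at {f/1e6:8.3f} MHz: {zoh_image_db(n):6.1f} dB re tone")
```

Note how slowly the comb decays: the first image at 24.1 kHz is only about 1.6 dB below the tone, and even out near 20 MHz the ideal staircase still has energy around -60 dB, which squares with a comb that's visible all the way to the analyzer's noise floor.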
The rise and fall times, though, as seen on the Tek scope, were extremely short, and were not affected by the sample rate of the PCM-63 ... pretty much the same at 44.1, 96, and 192 kHz. This wasn't surprising, since it looked like the rise and fall times were set by the speed of digital logic inside the PCM-63, instead of external clocks.
A little bit of math gave the slew rate the associated analog electronics would need if slewing was to be avoided. It was a big number, in the range of 1000 V/us, or maybe faster. Since it is very difficult to design analog electronics with low distortion from 20 Hz through 20 MHz, my feeling is that passive pre-filtering is a very good idea.
Leaving a wideband signal alone is standard practice in traditional analog spectrum analyzers; there's no active preamplifier at all. The signal goes directly from the 50-ohm input into a wideband attenuator, then right into the first mixer. The old-timers at Tektronix explained to me that a low-distortion wideband amplifier with a bandwidth of 50 kHz to 1.8 GHz could not be built, so the signal had to go through an all-passive path before hitting the first mixer.
A similar problem confronts the I/V converter: the signal has too wide a bandwidth for opamp-style ultra-low-distortion amplification. There are zero-feedback circuits that can be borrowed from video and from the vertical channels of scope preamps: cascodes, and apparent long-tailed pairs where the first transistor is really an emitter follower (collector tied straight to the power supply), the second transistor is actually a grounded-base stage (base tied to ground, not to feedback), and the output is taken from the collector of the second transistor. By comparison, audio-optimized opamps are running out of gain by 1 MHz, and barely functioning at 10 MHz. The distortion of these devices is extremely low at 1 kHz and 10 kHz, but that's not true above 1 MHz, since the feedback is mostly gone.
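To see why the feedback is "mostly gone" up there, here's a single-pole opamp sketch with illustrative numbers (120 dB DC open-loop gain, 10 MHz gain-bandwidth product; these are generic figures I've assumed, not a specific part):

```python
import math

# Single-pole opamp model, illustrative numbers only:
A0 = 10**(120 / 20)   # DC open-loop gain: 120 dB -> 1e6
GBW = 10e6            # gain-bandwidth product, Hz
fp = GBW / A0         # dominant pole, ~10 Hz

def open_loop_gain(f):
    """Open-loop gain magnitude of the single-pole model at frequency f."""
    return A0 / math.sqrt(1 + (f / fp) ** 2)

def excess_loop_gain_db(f, closed_loop_gain=10):
    """Loop gain left over above the desired closed-loop gain: this is the
    feedback available to suppress distortion at frequency f."""
    return 20 * math.log10(open_loop_gain(f) / closed_loop_gain)

for f in (1e3, 10e3, 1e6, 10e6):
    print(f"{f/1e3:8.0f} kHz: {excess_loop_gain_db(f):6.1f} dB of feedback")
```

With a closed-loop gain of 10, this model has about 60 dB of distortion-reducing feedback at 1 kHz, essentially none at 1 MHz, and is 20 dB short at 10 MHz, which is exactly the "running out of gain by 1 MHz" behavior described above.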
I think it's good practice to design analog electronics that avoid gross distortion in normal operation, and if the input signal exceeds the slew rate by a factor of 50 or more, well, that's gross distortion, even though it's for a very brief duration.
The slewing has nothing to do with musical signal; it happens with all input signals, sine waves, triangle waves, square waves, etc. The unfiltered edges (of every sample) coming out of the converter are very fast before filtering is applied, regardless of signal input. This happens with any converter (PCM or DSD) that does not have a built-in opamp. If the converter has a built-in opamp, the slewing happens inside the opamp, where it cannot be seen.
The "joke" part of the article is that it's a crude workaround for DACs that have analog stages that are too slow. By making the too-slow analog electronics slew randomly, the subjective impression might be improved, although this is really a terrible solution. It's kind of like "improving" a car with bad brakes by throwing an anchor out the window, and hoping the anchor works.