1-bit usually implies high oversampling ratio and noise shaping; multi-bit does not. There are endless variations, of course, but I think that is the main difference at a high level. Note that many delta-sigma converters use multibit quantizers and/or DACs inside the loop, and some multi-bit designs are used in systems that oversample and noise-shape.
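To make the 1-bit idea concrete, here is a rough Python sketch of a first-order delta-sigma modulator (a deliberately minimal illustration, not a production design): the loop integrates the error between input and fed-back output and quantizes to two levels, so the quantization error gets pushed toward high frequencies.

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order delta-sigma modulator: quantize an oversampled
    signal (values in [-1, 1]) to +/-1, shaping the quantization
    error toward high frequencies."""
    y = np.empty(len(x))
    integrator = 0.0
    for n, sample in enumerate(x):
        # accumulate the error between input and previous output
        integrator += sample - (y[n - 1] if n else 0.0)
        y[n] = 1.0 if integrator >= 0 else -1.0
    return y

# 64x-oversampled 1 kHz tone (DSD64-style rate)
fs = 64 * 44100
t = np.arange(fs // 100) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)
bits = delta_sigma_1bit(x)
print(sorted(np.unique(bits)))  # [-1.0, 1.0] -- one-bit output
```

The local average of the bitstream tracks the input, which is why a low-pass filter recovers the audio from the two-level stream.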
There are always discussions going on on CA, unfortunately the (partially unwritten) rules of CA doesn't allow you to challenge subjective experiences with facts, so CA tends to be a hotbed of audiophile voodoo, folklore and superstition.
Jussi/Miska knows what he is talking about, but of course he has his own bias because of his own software.
Anyway, as he points out, "reducing number of bits is usually done by feeding the input through a remodulator with n-bit output". Exactly my point - converting from 1-bit DSD to multi-bit requires a remodulation (conversion) stage.
Of course. But the re-modulation is performed at the same sample rate as the 1-bit input - no decimation filtering. That's a very different proposition than converting between formats with different sampling rates, the worst case being 1-bit to a much lower sampling rate PCM.
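A rough Python sketch of that same-rate re-modulation idea (illustrative only; `remodulate`, the moving-average loop filter, and the first-order error-feedback loop are my assumptions, not any particular product's design):

```python
import numpy as np

def remodulate(dsd, levels=8, taps=16):
    """DSD -> multi-bit at the SAME sample rate: a short moving
    average knocks down some of the shaped HF noise, then a
    first-order error-feedback loop re-quantizes to `levels`
    output levels. No decimation: output length == input length."""
    smoothed = np.convolve(dsd, np.ones(taps) / taps, mode="same")
    step = 2.0 / (levels - 1)
    out = np.empty(len(smoothed))
    err = 0.0
    for n, v in enumerate(smoothed):
        u = v + err                               # feed back previous error
        q = np.clip(np.round(u / step) * step, -1.0, 1.0)
        err = u - q
        out[n] = q
    return out

rng = np.random.default_rng(0)
dsd = np.where(rng.random(2000) > 0.5, 1.0, -1.0)  # stand-in bitstream
multi = remodulate(dsd)
print(len(multi) == len(dsd))  # True -- same rate, no decimation
```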
No prob - only reason I corrected you was because of accuracy
"worst" - how so? It is just two different representations of the audio waveform - one has minimal amplitude data but spreads it out on the time axis, the other needs less bandwidth but more bits per sample.
"worst" - how so? It is just two different representations of the audio waveform - one has minimal amplitude data but spreads it out on the time axis, the other needs less bandwidth but more bits per sample.
The reason I hear most often cited is the filter's phase effects on the resulting format's audio reconstruction. Whatever it is, it's discernible in spaciousness cues in DXD (352.8 kHz PCM), and more noticeable at lower PCM sampling rates.
Please go slower. Are you saying that the only difference between content of the files will be in the high-frequency noise? PCM 192/24 can have bandwidth up to 96 kHz, why can't the algorithm keep it unchanged?
High-frequency noise and differences caused by filters. The noise is there because a plain 1-bit quantizer only has a signal-to-noise ratio of roughly 6 dB (about 6 dB per bit). DSD gets away with it by noise shaping (pushing the noise to inaudible frequencies), but that noise (just like PCM quantization noise) has to be filtered away. The filters will have an impact on phase and impulse response. That's why a null test will show differences - but it can't say anything about the audibility of those differences.
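The "6 dB per bit" rule and the gain from noise shaping can be put in numbers with the standard textbook approximations (these are ideal upper bounds; real modulators achieve considerably less):

```python
import math

def quantizer_snr_db(bits):
    """Rule of thumb: a plain N-bit quantizer gives ~6 dB per bit."""
    return 6.02 * bits + 1.76

def delta_sigma_snr_db(bits, order, osr):
    """Textbook ideal in-band SNR for an order-L delta-sigma
    modulator at oversampling ratio OSR."""
    return (quantizer_snr_db(bits)
            - 10 * math.log10(math.pi ** (2 * order) / (2 * order + 1))
            + (2 * order + 1) * 10 * math.log10(osr))

print(round(quantizer_snr_db(1), 1))    # 7.8 -- raw SNR of 1 bit
print(round(quantizer_snr_db(16), 1))   # 98.1 -- 16-bit PCM
# Oversampling + noise shaping recover the in-band SNR, e.g. a
# 5th-order 1-bit modulator at 64x (DSD64-like numbers):
print(round(delta_sigma_snr_db(1, 5, 64)))
```

The in-band number only holds below the audio band edge; the shaped noise above it is exactly what the reconstruction filters have to remove.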
Thanks. Can we say that each transcoding operation degrades the signal - if we carry out successive DSD -> PCM -> DSD -> PCM -> DSD conversions, does each operation lose something?
You might say each transcoding adds subtly more noise. The recommendation is to transcode only once from DSD to DXD (to do edits and processing, or for a PCM release version) and then back to DSD.
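A quick way to see the accumulation is to stand in for each transcoding pass with a dithered requantization and measure the residual against the original after every pass (a toy model, not an actual DSD/PCM converter - the `requantize` helper is my assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)

def requantize(sig, bits):
    """Round to a given word length with TPDF dither,
    standing in for one transcoding pass."""
    q = 2.0 ** -(bits - 1)
    dither = (rng.random(len(sig)) - rng.random(len(sig))) * q
    return np.round((sig + dither) / q) * q

y = x.copy()
noise_floor = []
for _ in range(5):
    y = requantize(y, 16)
    noise_floor.append(float(np.std(y - x)))

# residual error grows with each additional pass
print([round(20 * np.log10(s), 1) for s in noise_floor])
```

Each pass adds a small, roughly independent noise contribution, which is why doing the round trip once is preferred over doing it repeatedly.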
Indeed. The same of course also applies to DSD to analog and then back to DSD, where you end up doing the filtering and noise shaping at least twice. DSD was developed in the '90s, when digital workstations were less common, and was intended as a replacement for tape (analog and digital) as an archival format - but the assumption was that it would only be used for the final result, after all processing and editing was done.
I see tailspin mention that filter phase effects are the usual reasons given for the audible differences heard in converting DSD to PCM
I see Julf mention that a null test will show the difference
Are there any examples of these null tests?
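For what a null test actually does, here's a minimal Python sketch (my own illustration, assuming the two files are already sample-aligned; the `null_test` helper and the simulated noise floor are made up for the example): level-match, subtract, and report the residual.

```python
import numpy as np

def null_test(a, b):
    """Level-match and subtract two takes of the same material;
    report the residual in dB relative to full scale."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    # align gains via least squares before subtracting
    g = np.dot(a, b) / np.dot(b, b)
    residual = a - g * b
    rms = np.sqrt(np.mean(residual ** 2))
    return 20 * np.log10(max(rms, 1e-12))

fs = 48000
t = np.arange(fs) / fs
original = 0.5 * np.sin(2 * np.pi * 1000 * t)
# simulate a transcode: same audio plus a tiny added noise floor
transcoded = original + 1e-4 * np.random.default_rng(1).standard_normal(fs)

print(round(null_test(original, original)))   # -240 -- deep null
print(round(null_test(original, transcoded))) # much shallower null
```

A deep null means the two files are essentially identical; a shallower one quantifies the difference - but, as noted above, it says nothing about whether that difference is audible.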
Bruce, I could be imagining things but I feel the 256fs 005 file is a little more realistic than the 128fs 005 file. More natural. If you can make it to the next PNWAS meeting we can play them through my DAC, and you can tell me if I'm delusional or not.