The 24-Bit Delusion

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
I can only give my non-scientific observations, but I would say Kostas's has a dramatic effect on the digital signal. After all my experiments I would say with certainty that digital can produce reproduced audio that is equal to or better than analogue reproduction. Even 16-bit can meet this analogue standard and, as the signal becomes washed, I think the disparity between 16-bit and higher quality becomes less and less.

Roger, I agree with the first highlighted part, but it requires implementation care & attention to factors which may be considered second-order issues/effects.
I'm not sure what "signal becomes washed" means?
 

Yuri Korzunov

Member
Jul 30, 2015
138
0
16
When considering oversampling, you need to consider how the same noise energy is distributed across the wider band of the oversampled signal.

Example: we have a 16-bit/44.1 kHz signal with -120 dB noise (for the accepted measurement method). The energy of the noise is E.

If we oversample it to 16-bit/88.2 kHz, the noise energy is preserved, but the energy E is now distributed across a band that is 2 times wider.

Energy is the square of the noise spectrum. So the width increases 2 times, and the height (the noise floor) decreases by half (3 dB).
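A quick numerical sketch of this band-spreading effect (illustrative NumPy code, not part of the thread's measurements; the 16-bit step size, TPDF dither, 1 kHz test tone and the two sample rates are assumptions chosen for the example): quantizing the same signal at twice the rate leaves the total quantization-noise energy the same, but only half of it falls in the audio band.

```python
import numpy as np

rng = np.random.default_rng(1)

def inband_noise_power(fs, bits=16, band=22050.0):
    """Quantize a dithered 1 kHz tone at rate fs; return the added-noise
    power that falls below `band` Hz."""
    t = np.arange(fs) / fs                             # 1 second of signal
    x = 0.5 * np.sin(2 * np.pi * 1000.0 * t)           # 1 kHz tone at -6 dBFS
    lsb = 2.0 ** (1 - bits)                            # step size for +/-1 full scale
    dither = (rng.random(fs) - rng.random(fs)) * lsb   # TPDF dither
    err = np.round((x + dither) / lsb) * lsb - x       # total added noise
    spec = np.abs(np.fft.rfft(err)) ** 2 / len(err) ** 2
    freqs = np.fft.rfftfreq(len(err), 1.0 / fs)
    return 2.0 * spec[freqs <= band].sum()             # power in 0..band Hz

drop_db = 10 * np.log10(inband_noise_power(44100) / inband_noise_power(88200))
print(round(drop_db, 1))  # ~3 dB less audio-band noise at the doubled rate
```

The in-band improvement comes out near 3 dB per doubling of the rate, matching the width-times-two, height-divided-by-two bookkeeping above.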
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
When considering oversampling, you need to consider how the same noise energy is distributed across the wider band of the oversampled signal.

Example: we have a 16-bit/44.1 kHz signal with -120 dB noise (for the accepted measurement method). The energy of the noise is E.

If we oversample it to 16-bit/88.2 kHz, the noise energy is preserved, but the energy E is now distributed across a band that is 2 times wider.

Energy is the square of the noise spectrum. So the width increases 2 times, and the height (the noise floor) decreases by half (3 dB).

So you are talking about FFT oversampling here, I presume?
This is actually interpolation then, where new samples are created between existing samples & the existing samples are also replaced by new ones in order to preserve the total noise power in the signal - it's not zero-padding to create new samples (which would also preserve the total noise power), nor duplication of existing samples to create new samples (which wouldn't preserve the total noise power).
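The distinction drawn here between upsampling schemes and total noise power can be illustrated with a toy sketch (hypothetical NumPy example, not from the thread; the signal is just stand-in white noise): inserting zeros between samples preserves the sum-of-squares energy, while naively duplicating each sample doubles it.

```python
import numpy as np

# Toy check of how 2x "upsampling" schemes treat total energy (sum of squares):
# zero-stuffing keeps it, naive sample duplication doubles it.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 4096)                 # stand-in noise signal

zero_stuffed = np.zeros(2 * len(x))
zero_stuffed[::2] = x                          # insert a zero between samples
duplicated = np.repeat(x, 2)                   # repeat each sample once

energy = lambda v: float(np.sum(v ** 2))
print(round(energy(zero_stuffed) / energy(x), 6))  # 1.0 -> energy preserved
print(round(energy(duplicated) / energy(x), 6))    # 2.0 -> energy doubled
```

Proper interpolation (zero-stuffing followed by a normalised low-pass filter) is what keeps the total power meaningful after rate conversion; the sketch only shows the raw energy bookkeeping of the two naive schemes.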
 

Yuri Korzunov

Member
Jul 30, 2015
138
0
16
1. I wrote "for the accepted method". FFT is a method.

It is a separate thing that may decrease the measured value of the noise.

Example: we have a 16-bit/44.1 kHz signal with -120 dB noise for the accepted method with an FFT of length 512.

If we change the length to 1024, the measured noise energy E is kept, but the energy E is distributed across 2 times more FFT bins.

Energy is the square of the noise spectrum, as in the previous example. So the number of bins increases 2 times, and the height of each bin (the noise floor) decreases by half (3 dB).

This is a way to decrease the measured noise-floor value while keeping the noise energy of the analog (measured) signal at the same level.

2. Oversampling distributes the quantization-noise energy of the analog signal across a wider band. This also leads to a decreased measured level.

Important: for case #2 (oversampling) I took the analog signal's own noise energy to be zero. In the previous post I meant quantization noise too.
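The bin-splitting arithmetic can be checked numerically. The sketch below is an illustrative NumPy example (the 2/N amplitude scaling, which keeps a sine's plotted level independent of FFT length, is an assumption about how typical analyzers normalise their display): averaging white-noise spectra at FFT lengths 512 and 1024 shows the per-bin noise power dropping by about 3 dB while the total is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1e-3, 44100)           # 1 s of white noise @ 44.1 kHz

def mean_bin_power_db(x, n):
    """Mean per-bin power (dB) using 2/n amplitude scaling, which keeps a
    sine's plotted level independent of the FFT length n."""
    frames = x[: (len(x) // n) * n].reshape(-1, n)
    amp = np.abs(np.fft.rfft(frames, axis=1)) * 2.0 / n
    return 10 * np.log10(np.mean(amp ** 2))

drop = mean_bin_power_db(noise, 512) - mean_bin_power_db(noise, 1024)
print(round(drop, 1))  # ~3.0: twice the bins, half the noise power per bin
```

This per-bin drop is what DSP texts call FFT process gain: the displayed noise floor falls as the FFT gets longer even though the noise in the signal has not changed.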
 
Last edited:

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
I wrote "for the accepted method". FFT is a method.

It is a separate thing that may decrease the measured value of the noise.

Example: we have a 16-bit/44.1 kHz signal with -120 dB noise for the accepted method with an FFT of length 512.

If we change the length to 1024, the measured noise energy E is kept, but the energy E is distributed across 2 times more FFT bins.

Energy is the square of the noise spectrum, as in the previous example. So the number of bins increases 2 times, and the height of each bin (the noise floor) decreases by half (3 dB).

This is a way to decrease the measured noise-floor value while keeping the noise energy of the analog (measured) signal at the same level.

Oversampling distributes the quantization-noise energy of the analog signal across a wider band. This also leads to a decreased measured level.

Important: I took the analog signal's own noise energy here to be zero. In the previous post I meant quantization noise too.

I find this misleading, because oversampling is typically understood to mean that new samples are created between existing samples, but that is not the case here, right?

What we are talking about in going from an FFT of length 512 to an FFT of length 1024 is simply splitting the existing samples (there are usually many samples behind each bin) across more bins - it doesn't involve any oversampling or creation of new samples, right? This is NOT oversampling.

Please be specific in answering my question, as it will clarify the confusion introduced by Amir's use of the term oversampling & his confused citing of ADC oversampling when nothing of the sort is done in FFT processing.

An example will help clarify - a 44.1 kHz sampling rate with 512 FFT bins & a 1 sec runtime gives us each bin covering 86 samples, i.e. the energy in the bin is the result of totalling 86 samples. Increasing the number of FFT bins to 1024 redistributes the same 44,100 samples equally between 1024 bins & now each bin contains the energy from 43 samples - hence the energy in each bin is lower & the 'noise floor' is therefore plotted at a lower amplitude.

No new samples have been created - they have been redistributed, i.e. it's NOT oversampling & bears absolutely no relationship to ADC oversampling - it's simply a misuse of terms (you may use such loose terminology among DSP engineers, but anyone with more than a cursory knowledge of FFTs would be aware that it is an incorrect term). It's obviously something that Amir is confused about, based on his stumbling attempts at defending it.

He may have come across the term "oversampling factor" or "adaptive oversampling factor" in FFT work & assumed it meant the same as ADC oversampling (his continuing references to & citing of ADC oversampling testify to his confusion), but his confusion should not be disseminated here.
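The bookkeeping in the example above can be sanity-checked with trivial arithmetic (illustrative only - the frame counts assume no windowing or overlap): doubling the FFT length halves the number of frames while the total number of samples consumed stays the same, so no new samples appear anywhere.

```python
# 1 second at 44.1 kHz, no overlap: doubling the FFT length halves the number
# of frames, while the total number of samples consumed stays the same.
samples = 44100
frames_512 = samples // 512      # 86 frames of 512 samples
frames_1024 = samples // 1024    # 43 frames of 1024 samples
print(frames_512, frames_1024)               # 86 43
print(frames_512 * 512, frames_1024 * 1024)  # 44032 44032 -> no new samples
```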
 
Last edited:

RogerD

VIP/Donor
May 23, 2010
3,734
319
565
BiggestLittleCity
Roger, I agree with the first highlighted part, but it requires implementation care & attention to factors which may be considered second-order issues/effects.
I'm not sure what "signal becomes washed" means?

I mean, for the most part, a signal that is predominantly audio, i.e. without current-induced noise. From my recent experience, the SNR of digital is the factor that enables it to move ahead of analog. That said, analog on the record side has many qualities that are intoxicating. For me, digital is the doorway to the beyond.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
I mean, for the most part, a signal that is predominantly audio, i.e. without current-induced noise. From my recent experience, the SNR of digital is the factor that enables it to move ahead of analog. That said, analog on the record side has many qualities that are intoxicating. For me, digital is the doorway to the beyond.

Ah, right - I should have realised the meaning based on your experiments in grounding between devices. Keeping ground noise & leakage currents away from signal ground is certainly the right approach & you are reaping the sonic rewards of it.
 

Yuri Korzunov

Member
Jul 30, 2015
138
0
16
I find this misleading, because oversampling is typically understood to mean that new samples are created between existing samples, but that is not the case here, right?

What we are talking about in going from an FFT of length 512 to an FFT of length 1024 is simply splitting the existing samples (there are usually many samples behind each bin) across more bins - it doesn't involve any oversampling or creation of new samples, right? This is NOT oversampling.

Please be specific in answering my question, as it will clarify the confusion introduced by Amir's use of the term oversampling & his confused citing of ADC oversampling when nothing of the sort is done in FFT processing.

An example will help clarify - a 44.1 kHz sampling rate with 512 FFT bins & a 1 sec runtime gives us each bin covering 86 samples, i.e. the energy in the bin is the result of totalling 86 samples. Increasing the number of FFT bins to 1024 redistributes the same 44,100 samples equally between 1024 bins & now each bin contains the energy from 43 samples - hence the energy in each bin is lower & the 'noise floor' is therefore plotted at a lower amplitude.

No new samples have been created - they have been redistributed, i.e. it's NOT oversampling & bears absolutely no relationship to ADC oversampling - it's simply a misuse of terms (you may use such loose terminology among DSP engineers, but anyone with more than a cursory knowledge of FFTs would be aware that it is an incorrect term). It's obviously something that Amir is confused about, based on his stumbling attempts at defending it.

He may have come across the term "oversampling factor" or "adaptive oversampling factor" in FFT work & assumed it meant the same as ADC oversampling (his continuing references to & citing of ADC oversampling testify to his confusion), but his confusion should not be disseminated here.

The FFT is not oversampling. The band (sample rate) remains as before.

Example:

Noise power (512-point FFT) = 1024 W = 512 bins × 2 W, sample rate 44.1 kHz => noise floor 2 W

Noise power (1024-point FFT) = 1024 W = 1024 bins × 1 W, sample rate 44.1 kHz => noise floor 1 W
 
Last edited:

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
The FFT is not oversampling. The band (sample rate) remains as before.

Example:

Noise power before = 20 W = 10 bins × 2 W, sample rate 44.1 kHz => noise floor 2 W

Noise power after = 20 W = 20 bins × 1 W, sample rate 44.1 kHz => noise floor 1 W

Thank you, Yuri - that is exactly what I have been saying to Amir, yet he references you in defense of his misunderstanding of FFT processing!!

"I managed DSP engineers for a decade, John. I don't say that to impress you, but when I am talking to one of them here, i.e. Yuri, I am going to use terminology that is common and familiar to them. That you don't understand it should just have been cause for a simple question to clarify: "Amir, what do you mean by oversampling?" Instead you have run off with a bunch of rants, insults, etc., polluting this thread."
 
Last edited:

opus112

Well-Known Member
Feb 24, 2016
462
4
148
Zhejiang
No. Oversampling explains what is going on.

You're still confused then. You claimed '...you think you're saying something different. You are not.' It's very clear that not only am I saying something different, I'm also meaning something totally different.

You claimed that everything I said was in support of your post. That's self-evidently false.

"Process gain" is a metric. I was giving an explanation and hence oversampling is most definitely the correct usage.

But your use of 'oversampling' in that context explains nothing; rather, it muddies the waters considerably.

As a parallel analogy, an amplifier amplifies sound. It has a metric that is its "gain" in dB. Here, we use the word amplifier all the time. We don't talk about "I bought this 23 dB gain device."

It's not parallel at all.

Indeed the term "process gain" is hardly used in signal processing.

I entered 'fft processing gain' into Yahoo; it produced a list of results, so Yahoo disputes your claim.

Oversampling on the other hand is well known concept and among DSP engineers...

Yes, I agree it's well known, but just because a word is well known and understood doesn't make it an appropriate word in a particular context.

...., it perfectly describes how we are effectively reducing the measured noise level through post processing.

But it doesn't. Oversampling refers to sampling, not post processing.

Here is one of countless such examples: https://www.microsemi.com/document-portal/doc_view/131569-improving-adc-results-white-paper

"Improving ADC Results
Through Oversampling and Post-Processing of Data"

Which just illustrates my point rather well. Oversampling refers to sampling, not post processing. Notice that the title of that document supports my case and undermines yours - it sees oversampling as something separate from post processing.

Search in there and the phrase "processing gain" does not appear anywhere.

I would expect that; it's about improving an ADC's signal-to-noise ratio, not about getting to a noise-floor measurement from an FFT.

Delving into the document itself, you've provided plenty more rope to hang your arguments out to dry with. For example:

The first step for improving the results of analog to digital conversions is called oversampling. As the name
implies, oversampling simply refers to sampling the signal at a rate significantly higher than the Nyquist
Frequency.
(I've added bolding)

In case that's not sufficient, there's more :

It should be clear that oversampling by itself improves the digital representation of the
signal only down to the physical dynamic range limit (minimum step size) of the ADC.


Did you get that Amir? Oversampling by itself can't do better than the LSB step size of the ADC. Let that sink in for a while.

So, to sum up so far - more obfuscation and more deflection from the questions already asked. Not a hint, not a glimmer of any understanding of your error in the misuse of this term 'oversampling'.

(I may deal with the rest of your post in a separate reply).
 
Last edited:

opus112

Well-Known Member
Feb 24, 2016
462
4
148
Zhejiang
So you are in full agreement with what I said.

That oversampling is the reason for the reduced noise floor in Stereophile's FFT? If so, no I'm not.

Yet because you didn't understand the terminology I used, you continue to talk past me.

I understood 'oversampling' perfectly well. It's the wrong term to use in that particular context. Any clearer now?
 

opus112

Well-Known Member
Feb 24, 2016
462
4
148
Zhejiang
Here, oversampling refers to oversampling of the noise in the combined noise of the DAC+ADC.

Run that phrase by any of the DSP engineers you used to be responsible for. I'd like to hear their response to 'oversampling of the noise in the combined noise...'.
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
That oversampling is the reason for the reduced noise floor in Stereophile's FFT? If so, no I'm not.
No. Once again, this is the measured dCS DAC response:



I said that the noise floor shown here at around -140 dB is NOT that of the DAC. Agree or disagree?

Second, I said that JA made a relative measurement of 16 vs 24 bit samples to arrive at this: "With a dithered 1kHz tone at –90dBFS, increasing the bit depth from 16 (fig.12, cyan and magenta traces) to 24 (blue and red) dropped the noise floor by 24dB, indicating that the Vivaldi DAC has at least 20-bit resolution, which is the state of the art."

Agree or disagree?
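For context, the ideal-case arithmetic behind JA's 16-versus-24-bit comparison can be sketched numerically (a hypothetical NumPy illustration, not JA's actual measurement; the TPDF dither and the -90 dBFS 1 kHz tone are assumptions matching his described test): perfect quantization predicts roughly a 48 dB drop for the 8 extra bits, so the 24 dB he actually observed points to an analog noise floor around the 20-bit level.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 44100
t = np.arange(fs) / fs
tone = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000.0 * t)   # 1 kHz at -90 dBFS

def noise_power(x, bits):
    """Total added-noise power after TPDF-dithered quantization to `bits`."""
    lsb = 2.0 ** (1 - bits)                                 # +/-1 full scale
    d = (rng.random(len(x)) - rng.random(len(x))) * lsb     # TPDF dither
    q = np.round((x + d) / lsb) * lsb
    return np.mean((q - x) ** 2)

drop = 10 * np.log10(noise_power(tone, 16) / noise_power(tone, 24))
print(round(drop))  # ~48: the ideal drop; a real DAC's analog noise limits it
```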
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
Run that phrase by any of the DSP engineers you used to be responsible for. I'd like to hear their response to 'oversampling of the noise in the combined noise...'.
Sure. I am visiting president Trump first though. Do you have any messages you like me to run by him?
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
I entered 'fft processing gain' into Yahoo, it produced a list of results so Yahoo disputes your claim.
But you didn't click on any of the links, now did you?

Here are the search results:



The first three are forum posts so let's not consider any of them a proper reference.

The fourth one is from analog devices: http://www.analog.com/media/en/training-seminars/tutorials/MT-001.pdf

And there we read this:

"The process gain due to oversampling for these conditions is given by: [formula] "

And that is exactly what I explained to you earlier. That process gain is the name of the metric, not the signal processing performed. The signal processing above is oversampling.

Now, it is not as if I don't know or use the term process gain myself. I do, and here is an example from four years back in this very forum, addressing the same topic:



So what is your beef again?
 

Yuri Korzunov

Member
Jul 30, 2015
138
0
16
No. Once again, this is the measured dCS DAC response:



I said that the noise floor shown here at around -140 dB is NOT that of the DAC.

In the picture we really do see a measured noise level of about -140 dB.

If there are any doubts, we need to know the exact settings of the measurement tool.

For such cases, measurement protocols are used that describe exactly all knob/virtual-knob positions and the modes of the measured apparatus and measurement tools.

If we have this information, we have the key to understanding the results.
 

KostasP.

Well-Known Member
May 6, 2016
116
74
135
Melbourne
Someone up-thread stated that perhaps digital is fatiguing because it doesn't smooth over the imperfections as analogue audio seems to do. There's logic to this, I believe, but I would express it differently. As you may know, I'm very interested in auditory perception & all my reading leads me to the following - we are born with the sense of hearing, but we train it to evaluate sound based on the sounds we encounter daily in the world. The internal model built from this exposure is what becomes our auditory perception - it's mostly an analytic process. Because we have built this model using the sounds of nature, we are therefore using a restricted analysis which best matches those familiar sounds.

Now, nature & the sounds we encounter in the world do not arrive as pure tones - they are mixed with background noise & that is what we expect to hear & how our auditory perceptual analysis works - it splits sounds into auditory streams, foreground & background, etc. Digital audio, by its very nature, delivers the audio signal in a less noisy window. This lack of background noise has less masking effect on any noise fluctuations & hence we notice their effect on the timbre & tonal characteristics of the sound. Does digital audio suffer somewhat from its own quieter background by being more prone to auditory analysis sensing that it is a somewhat unnatural soundscape we are hearing, i.e. there isn't enough noise in the background, & is the low-level noise that is captured on the recording & played back where digital audio is at its weakest? It also means that any perturbations in ground noise will be more noticeable.

The real issue is how sensitive we are to this disturbance in the sound, which I don't believe is answered by the threshold-of-hearing metric - we need far more sophisticated tests to ascertain this & possibly fMRI is the only real answer to testing it - it can reveal perceptions that operate at the subconscious level, which we are not consciously aware of but which probably affect us in long-term listening.

This tends to explain why measurements aren't a predictor of sound - they are not based on the auditory processing model of hearing - & why we can find that some devices with high levels of jitter, for instance, can sound better than a low-jitter device. As is well known by digital engineers, there is uncorrelated & signal-correlated jitter - the former being fairly acoustically benign, the latter having more psychoacoustic impact. And overlaid onto this is the frequency spectrum of the jitter. It would appear that very low-frequency jitter (or phase noise) is more detrimental to soundstage & musical timbre. Why? Not sure, but maybe because it can interfere with the envelope of the sound, i.e. how a sound changes its spectral footprint as it develops, or maybe it changes the relative relationship between sound envelopes as the music progresses through time?

Psycho-acoustics cannot be measured; they are felt!

I am at a loss trying to follow the technical battle on this thread, but empirically, as an avid listener on a highly resolving and transparent system, I can see that some pertinent points have their validity.

First of all, one should dispense with analogue/digital dogma and fundamentalism. Both analogue and digital are means to an end. The end is the music. Done properly, both are capable of excellent achievements, acknowledging that perfection is non-existent, given the objective frailties of the technical and human resources, and abandoning all delusions about recreating the original, real thing. Often, a good system can transcend reality and render a higher degree of sensory, emotional and intellectual experience than the real thing.

jkeny: Your views and hypotheses expounded in relation to auditory perception/psycho-acoustics were interesting and coherently articulated although, for me, somewhat overstretched and pre-conceived. I would rather have ideas than ideologies. The former can liberate you; the latter possess you (although, in fairness, your views were not extremist).

There is nothing wrong or deficient with experienced, honest ears as a "hearing metric". After all you, like all of us, are citing auditory inadequacies based more on what you hear (and hence postulate your theories) than on what you measure. Psychoacoustics can NOT be measured; they are felt and perhaps psycho-analysed, but not using an "auditory analysis" apparatus. Mood swings/"distortions" cannot be quantified numerically. Music personified, for me, is an experience of the ears (sensory), heart (emotional) and mind (intellectual). Maintaining a fine balance of the three is facilitated by having a highly resolving, involving and transparent system but, for many, it does not have to be like this.

I have a simpler and less sophisticated "auditory value system". I judge the end result, i.e. the music, as produced by my two means of playback - analogue and digital. The noise properties that you assign to digital and their associated auditory effects (i.e. what we actually hear) do not translate to reality in my experience, no matter how hard I try to make the "naturo-cultural" (my own neologism, just for this post) connections that you allude to.

There is a simple test: record an analogue track on a relatively high-quality, let alone state-of-the-art, digital recorder. Match levels and compare blindfolded (unless your ears are honest). You will witness an almost facsimile of the analogue. How does your "auditory analysis" model/mechanism (post #75) account for and reconcile with these findings? We hear the exact analogue on digital. The digital has not added anything of its own, as inferred in or deduced from your post.

Furthermore, I fully endorse RogerD's comment about the adequacy of Redbook for playback. Higher bit depths may well be needed for recording and mixing/mastering (for multi-processing, headroom, etc.), but I have serious reservations about the consequences of excessive/multiple processing on the tonal/timbral properties of many CDs/SACDs. Redbook presents no compromises if every link of the digital production chain is executed impeccably.

By the way, when I record (24-bit), I neither mix nor master. My mixing relies on the appropriate placement of the musicians according to the dynamic properties of their instruments, my mood at the time, and the overall tonal balance that I am seeking. Once satisfied, after much experimentation, there is no need for further processing or manipulation. The criticism of digital timbral/tonal inferiority, often cited, may well in my view be due to multiple processing and filtering implementations NOT done properly. Having said all this, and not wanting to give the impression that I am an advocate of my recording methodology, I have to confess that I envy the work of many engineers (incomparably superior to mine), regardless of their methods and practices.

A few thoughts to ponder, rather than a critique of your positions, which you defend rather admirably.

We listen, always learning. Cheers, Kostas.
 
Last edited:

opus112

Well-Known Member
Feb 24, 2016
462
4
148
Zhejiang
I said that the noise floor shown here at around -140 dB is NOT that of the DAC. Agree or disagree?

Tell you what, I'll give an answer to this after you've answered my much earlier question about the provenance of your claim about the bandwidth involved here. Until that piece of information is nailed down, it's not really possible to interpret this graph sensibly.

Second, I said that JA made a relative measurement of 16 vs 24 bit samples to arrive at this: "With a dithered 1kHz tone at –90dBFS, increasing the bit depth from 16 (fig.12, cyan and magenta traces) to 24 (blue and red) dropped the noise floor by 24dB, indicating that the Vivaldi DAC has at least 20-bit resolution, which is the state of the art."

Agree or disagree?

Agree that JA said that or agree that you said JA said that, or agree that JA's conclusion is a reasonable one?
 
