The 24-Bit Delusion

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
Not arrogant, Al - just some advice to calm down although you do seem to have drunk deeply of Amir's koolaid.
Al, if you don't understand what is so fundamentally wrong with Amir's statement you would do well to read this http://www.imeko.org/publications/iwadc-2007/IMEKO-IWADC-2007-F087.pdf but if you prefer to drink deeply of the koolaid then .......
View attachment 31788

I wait for Amir to fulfil his stated promise
Wait for what? The paper you put forward and the diagram above completely back what I wrote. Yet you keep saying there is a fundamental error in it. What is the fundamental error?

The paper, by the way, is formulaic. It does not explain the concepts behind the terminology I used, which is what led you to think something is wrong with it. This is why I said I will write an article about it.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
John, if from the start you had provided an intelligible answer to a simple question, as you finally did, all this could easily have been avoided. Your initial apparent refusal to do so was what drew my ire. We may have misread one another. If that is the case, I apologize.

I understand that. However, I perceived in this thread an arrogant attitude. I may have been mistaken.
You did not misread him at all Al. Everything he has posted is consistent with the explanation I gave at the start. Let's review again what I said, why it is correct, and the answer to the question John keeps asking me:


The -140 dB came from JA's measurements of the DAC (eyeballing the graph):


I hope we are all good with that. Normally we would take the signal-to-noise ratio, divide it by about 6 (dB per bit), and arrive at our ENOB (effective number of bits). Indeed, this is what was done at the start of the thread, which triggered my caution not to do that, and my note that the measurement shown does not reveal the DAC's true noise floor.
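
For reference, here is a minimal Python sketch of that rule of thumb (the exact constant is 6.02 dB per bit, with a 1.76 dB offset for a full-scale sine; all the numbers below are illustrative, not taken from the measurement under discussion):

# Rough effective-number-of-bits estimate from a measured SNR/SINAD figure.
# Constants follow the standard ENOB formula for a full-scale sine wave.
def enob_from_snr(snr_db: float) -> float:
    return (snr_db - 1.76) / 6.02

# Illustrative values only: a "-140 dB" reading would naively suggest ~23 bits,
# which is exactly the inference being questioned in this thread.
for snr in (96.0, 120.0, 140.0):
    print(f"SNR {snr:6.1f} dB  ->  ~{enob_from_snr(snr):.1f} bits")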

The system diagram is a DAC output connected to the ADC input of the Audio Precision. In other words, we are measuring an analog quantity (the DAC's output) with a digitizer (ADC). Any "straight" noise floor we see will therefore be the sum of the dCS DAC and the Audio Precision ADC. Should the DAC's noise floor be far lower than the analyzer ADC's, we would be reporting the ADC's noise floor, not the DAC's.

The noise floor of the Audio Precision Analyzer that JA used is about -120 dB. Based on this number and the above explanation, it would be impossible for a -140 dB measured response to include not only the Audio Precision's ADC but also the contribution of the dCS DAC! So something else must be going on here.
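
As a numerical illustration of why the analyzer dominates: uncorrelated noise sources add in power, so a quieter DAC simply disappears under a -120 dB analyzer floor. The dB values in this sketch are made up for the example:

import math

# Uncorrelated noise sources add in power (not in dB), so the combined floor
# can never be lower than the noisier of the two contributors.
def combined_noise_db(dac_db: float, adc_db: float) -> float:
    return 10 * math.log10(10 ** (dac_db / 10) + 10 ** (adc_db / 10))

# Hypothetical numbers: even a very quiet DAC measured through a -120 dB ADC
# reads back at essentially -120 dB.
print(combined_noise_db(-140.0, -120.0))   # ~ -119.96 dB
print(combined_noise_db(-160.0, -120.0))   # ~ -120.00 dB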

To understand that, we have to review the words I wrote above, namely the fact that the output of the DAC was oversampled. What is oversampling? It is a technique for trading bandwidth for bit depth. If the input signal is wideband (white) noise and we oversample, we can push down the effective noise floor of the capture system. By using more and more points in our discrete Fourier transform, we can progressively lower the measured noise floor of the ADC without its physical performance changing.
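
A minimal numpy sketch of that effect, using made-up numbers: the same white-noise record analysed with longer and longer FFTs shows a per-bin "floor" that drops roughly 3 dB per doubling of the FFT length, even though the total noise power never changes.

import numpy as np

rng = np.random.default_rng(0)
fs = 96_000                                   # sample rate, arbitrary for the illustration
noise = rng.normal(scale=1e-6, size=fs * 8)   # 8 seconds of white noise at a fixed RMS level

for n_fft in (4096, 16384, 65536):
    seg = noise[:n_fft]
    spec = np.abs(np.fft.rfft(seg * np.hanning(n_fft)))
    amp = spec / (n_fft * 0.5)                # rough amplitude scaling for a Hann window
    floor_db = 20 * np.log10(np.median(amp))  # the median tracks the visual "floor"
    print(f"N = {n_fft:6d}   bin width = {fs / n_fft:6.2f} Hz   apparent floor ~ {floor_db:6.1f} dB")

Nothing about the converter hardware changes in this sketch; only the length of the analysis does.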

The formulaic name for this effect is DFFT "process gain." This is something that John understands and is all that he has posted on the last page. It seems to me that he didn't recognize the term oversampling as an equivalent for it, which is why he objected to my statement there. He didn't understand what I was saying even though I was using common signal-processing terminology.

To wit, let's look at this excellent application note from Silicon Labs: https://www.silabs.com/documents/public/application-notes/an118.pdf

“This application note describes utilizing oversampling and averaging to increase the resolution and SNR of analog-to-digital conversions. Oversampling and averaging can increase the resolution of a measurement without resorting to the cost and complexity of using expensive off-chip ADCs.”

So now we see that the points I made are both completely valid:
1. Oversampling is used. The analyzer cannot by itself ever yield such low noise floors.
2. As a result, the measured number of -140 dB for the dCS DAC is not real.

This leaves no room for any objection from jkeny, much less at a fundamental level. If he still objects, he had better come back with something equally specific.

As an aside, why do we resort to a technique that generates false data this way? Because it allows us to see distortion products deep below the noise floor of the measurement system. If you look at the JA graph again, we see a tiny blip at 120 Hz, for example, at -135 dB. Had we not used oversampling to pull the measured noise floor down, that blip would have been buried in the analyzer's -120 dB noise floor.
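
A hedged sketch of that benefit, with all levels invented for the example: a small tone sitting well below the total in-band noise still stands clearly above the per-bin floor once the FFT is long enough.

import numpy as np

rng = np.random.default_rng(1)
fs, n = 48_000, 1 << 18                       # ~5.5 s capture; all numbers are illustrative
t = np.arange(n) / fs

# Hypothetical levels: a 120 Hz spur 135 dB below full scale, buried in noise
# whose total RMS sits roughly 115 dB below full scale.
spur = 10 ** (-135 / 20) * np.sin(2 * np.pi * 120 * t)
noise = rng.normal(scale=10 ** (-115 / 20), size=n)
x = spur + noise

win = np.hanning(n)
spec_db = 20 * np.log10(np.abs(np.fft.rfft(x * win)) / (win.sum() / 2) + 1e-30)
freqs = np.fft.rfftfreq(n, 1 / fs)

bin_120 = np.argmin(np.abs(freqs - 120))
print("level near 120 Hz:   ", round(spec_db[bin_120], 1), "dBFS")
print("median per-bin floor:", round(np.median(spec_db), 1), "dBFS")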

Most of the time we are interested in distortion products, as their audibility is something we can easily analyze (e.g., relative to the threshold of hearing). Noise audibility is another animal altogether, and when the noise is random it tends to be far less audible.

Now you see why I said I was planning to write an article on this. :) I will still do that given how obscure digital measurements can be.
 

opus112

Well-Known Member
Feb 24, 2016
462
4
148
Zhejiang
So now we see that the points I made are both completely valid:
1. Oversampling is used. The analyzer cannot by itself ever yield such low noise floors.

Epic fail Amir: oversampling is not shown in the graph you plotted. It may have been in use, sure, but it's not the reason for the 'misleading' noise picture.

The reason the 'noise floor' is shown around -144 dB is simply that how much noise gets measured depends crucially on how much bandwidth the noise is measured in. Oversampling in itself isn't going to change the bandwidth and hence cannot change the noise measurement result. What changes the apparent 'floor' in such histograms are two other, unrelated things: first, the number of bins in the FFT; second, the number of averages the analyser runs.

Let's explore the first of these a little further. The number of bins in the FFT depends on both the sample rate and the acquisition time. If we oversample then we'll capture more samples in the same acquisition time but this just extends the length of our X axis to higher and higher frequencies. No change to the measurement bandwidth.
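
A quick numerical illustration of that point (values chosen only for the example): with a fixed acquisition time, doubling the sample rate doubles both the sample count and the frequency span, so the bin width, and hence the noise per bin, stays put.

# Bin width = sample_rate / fft_length. With a fixed acquisition time T,
# fft_length = sample_rate * T, so the bin width is simply 1/T regardless
# of how hard the signal is oversampled.
T = 1.0                                  # seconds of capture (illustrative)
for fs in (48_000, 96_000, 192_000):
    n_fft = int(fs * T)
    print(f"fs = {fs:7d} Hz   N = {n_fft:7d}   bin width = {fs / n_fft:.3f} Hz")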

Averaging has the ability to further lower the measured noise, given that noise isn't correlated with itself. So we'd expect to see a reduction in the 'floor' of 3 dB for each doubling of the number of averages in use.
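
One way to see that figure, assuming time-synchronous (coherent) averaging of repeated captures so that the signal adds up while uncorrelated noise partially cancels (a sketch with invented levels, not a description of any particular analyser):

import numpy as np

rng = np.random.default_rng(2)
n = 4096
tone = 0.5 * np.sin(2 * np.pi * 50 * np.arange(n) / n)   # 50 cycles per record

def noise_floor_db(num_averages: int) -> float:
    # Coherently average `num_averages` records: the tone is identical in each
    # record, the noise is not, so the noise power drops ~3 dB per doubling.
    acc = np.zeros(n)
    for _ in range(num_averages):
        acc += tone + rng.normal(scale=1e-3, size=n)
    avg = acc / num_averages
    residual = avg - tone                 # what is left is the averaged noise
    return 20 * np.log10(np.std(residual))

for m in (1, 2, 4, 8, 16):
    print(f"{m:3d} average(s): residual noise ~ {noise_floor_db(m):6.1f} dB")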

2. As a result, the measured number of -140 dB for the dCS DAC is not real.

Second epic fail. The measured number is of course real, given that the measurement bandwidth is no longer the usual audio one (20 Hz-20 kHz) but rather a vastly restricted one (a few Hz typically, depending on the parameters mentioned above). Hence it's eminently possible to determine from these very real numbers on the plot what the total noise in the audio bandwidth is.
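
A sketch of that calculation with invented numbers: summing the bin powers across 20 Hz-20 kHz recovers the total audio-band noise, which sits roughly 10*log10(number of in-band bins) above a flat per-bin floor.

import numpy as np

rng = np.random.default_rng(3)
fs, n = 96_000, 1 << 20                            # illustrative capture length
x = rng.normal(scale=10 ** (-101 / 20), size=n)    # white noise ~101 dB below full scale

spec = np.fft.rfft(x) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
bin_power = 2 * np.abs(spec) ** 2                  # one-sided power per bin

band = (freqs >= 20) & (freqs <= 20_000)
print("per-bin floor                 ~", round(10 * np.log10(np.median(bin_power)), 1), "dB")
print("integrated 20 Hz-20 kHz noise ~", round(10 * np.log10(bin_power[band].sum()), 1), "dB")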

Now you see why I said I was planning to write an article on this. :)

I do not see why someone with such obvious ignorance of basic signal processing would wish to share his ignorance with others in an article. But by all means go right ahead Amir, do not let your own ignorance stand in your way.
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
I will repeat again: everything you and John say is in support of my post. It simply is the case that you don't understand the meaning of the term oversampling in this context.

There is no other explanation when everything you say is in support of my post yet you think you are saying something different. You are not.

Let's demonstrate that another way. Here is what started the discussion:

I think this is an "infomercial"? For one thing, he says no existing DACs have better than 20-bit resolution, but FWIW Stereophile has tested and measured quite a few that have between 23 and 24 bit resolution (delta-sigma and dCS "ring"), and even Schiit's best R2R DAC gets between 21 and 22 bit resolution. And even skimming it I noticed a few other half-truths as well.

Are you in support of the bolded statement?
 

opus112

Well-Known Member
Feb 24, 2016
462
4
148
Zhejiang
It simply is the case that you don't understand the meaning of the term oversampling in this context.

Do then explain why you used the word 'oversampling' when your real meaning was 'process gain'.

Oh and also do please answer my earlier question relating to the claim of 'under 1Hz'.

There is no other explanation when everything you say is in support of my post yet you think you are saying something different. You are not.

Epic fail on the mind-reading front.


Are you in support of the bolded statement?

No, I am not. I have yet to see a DAC that truly has in excess of 23-bit resolution, with the caveat that 'resolution' has its normal meaning, not some Humpty Dumpty new one which you decide to dream up. Oh, and another caveat I just realized - we are talking about resolution in the audio bandwidth (20 Hz-20 kHz) here, aren't we? There are indeed DACs that do 23 or 24 bits of resolution, but in a much narrower bandwidth than audio.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Amir, you are digging yourself in deeper
From the AP SYS-2722 analyser datasheet (which is what is used by Stereophile for their measurements)
"The SYS-2712 and SYS-2722 configurations add dual-channel A/D converters for
FFT and other special forms of analysis. Option “IMD” adds inter-modulation
distortion measurement capability. Option “W&F” adds wow & flutter measurement
capability."​

The specifications manual shows:
Residual Noise
22 Hz–22 kHz BW ≤1.0 µV [–117.8 dBu].
80 kHz BW ≤2.0 µV [–111.8 dBu].
500 kHz BW ≤6.0 µV [–102.2 dBu].
A-weighted ≤0.7 µV [–120.9 dBu].
CCIR-QPk ≤3.5 µV [–106.9 dBu].

Residual THD+N
At 1 kHz ≤(0.00025% + 1.0 µV) [–112 dB], 22 kHz BW
(valid only for analyzer inputs ≥8.5 Vrms).
20 Hz–20 kHz ≤(0.0003% + 1.0 µV) [–110.5 dB], 22 kHz BW,
≤(0.0005% + 2.0 µV) [–106 dB], 80 kHz BW,
≤(0.0010% + 6.0 µV) [–100 dB], 500 kHz BW.
10 Hz–100 kHz ≤(0.0040% + 6.0 µV) [–88 dB], 500 kHz BW.

Please show us where they specify that oversampling of their ADCs can achieve a -144 dB 'noise floor'?

Furthermore, the second Stereophile FFT you posted showed a lower 'noise floor' of -154 dB. What did they do to achieve this lower 'noise floor' - increase the 'oversampling' in their ADCs? Please tell us how they did that.

On page 37 there are two FFT plots - one showing a 'noise floor' lower than -170 dB - please explain how they adjusted their ADC "oversampling" to achieve this.

Edit: Ah, so "oversampling" is Amir-speak for FFT process gain & he claims it is standard terminology in DSP for FFT process gain - please show this to be the case, Amir - it would surely be easy to show this in any standard DSP text. Is this what you are now claiming or can we expect another twist in terminology to defend your latest misstep?

And yes, please answer the question that you have promised to answer - where do you ascertain that the FFT was using a bin width of <1Hz (or is this to be interpreted in some other Amir way)?
 
Last edited:

Yuri Korzunov

Member
Jul 30, 2015
138
0
16
Noise measurement results depend on the method. For example, changing the FFT length may decrease the measured noise; changing the FFT window may as well.

Besides the noise floor, one also needs to look at the signal amplitude and its level, because the plot may be shifted.

There are methods recommended by the ITU, and I suppose those methods are used for the manuals. In some countries/branches/cases, only state standards may be acceptable.

But we can use any method. For correct conclusions, the most important thing is to know which measurement method was used and under what conditions and settings.
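
On the window point, a minimal sketch (levels invented): each window has an equivalent noise bandwidth (about 1.5 bins for Hann versus 1.0 for rectangular), so switching windows shifts the apparent per-bin floor by roughly 10*log10(ENBW), about 1.8 dB here, without the underlying noise changing.

import numpy as np

rng = np.random.default_rng(4)
n = 65536
x = rng.normal(scale=1e-5, size=n)            # the same noise record for both windows

for name, win in (("rectangular", np.ones(n)), ("hann", np.hanning(n))):
    spec = np.abs(np.fft.rfft(x * win)) / (win.sum() / 2)
    floor = 20 * np.log10(np.median(spec))
    enbw = n * np.sum(win ** 2) / win.sum() ** 2   # equivalent noise bandwidth, in bins
    print(f"{name:12s} floor ~ {floor:7.1f} dB   ENBW = {enbw:.2f} bins")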
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Here's where I believe we see that Amir is BSing us: if he had truly understood oversampling to mean FFT process gain at the time he used it, then he would not have plucked a bin width of <1 Hz out of thin air, as he would have been aware that this bin width would give the lie to the FFT histogram he posted, as Opus showed - it would have meant that the dCS Vivaldi DAC had a resolution of less than 17 bits.
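
A rough arithmetic check of that figure, assuming a flat -144 dB per-bin floor and a 1 Hz bin width (both numbers taken from the discussion, not from the actual analyser settings):

import math

per_bin_floor_db = -144.0          # apparent floor read from the plot
bin_width_hz = 1.0                 # the disputed assumption
audio_bw_hz = 20_000

# Integrate the flat floor over the audio band, then convert to bits.
total_noise_db = per_bin_floor_db + 10 * math.log10(audio_bw_hz / bin_width_hz)
enob = (-total_noise_db - 1.76) / 6.02
print(f"audio-band noise ~ {total_noise_db:.0f} dB  ->  ~{enob:.1f} bits")
# roughly -101 dB and ~16.5 bits, i.e. well under the 21+ bits discussed earlier in the thread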

His failure to explain where he got this 1 Hz from is a demonstration of his lack of knowledge in this area, not of Opus's and my lack of understanding of the meaning of oversampling.

We are still waiting for his explanation for this 1 Hz.
 

KostasP.

Well-Known Member
May 6, 2016
116
74
135
Melbourne
Hello jkeny,

First of all, please excuse my technical illiteracy, even ignorance, on matters raised convincingly by you, which I am not qualified to either accept\reject or verify\refute. On the other hand, assuming serious manufacturers of digital playback systems do accept your postulations, one would presume that they would address or have addressed the issue.

Can you nominate any manufacturers who have addressed\overcome this "signal-correlated noise" ( knowing your mantra, this will most likely invite a diplomatic response). My challenged logic tells me that if this is modulated noise, why should it be initiated and manifested as an artefact arising in the PS only? Like so many other sub-domains found in a typical CD\SACD\DAC component, implementation is far more critical than theory and specifications. Any noise-related issue should be rectified wherever it occurs.

Thank you, Kostas.
 

opus112

Well-Known Member
Feb 24, 2016
462
4
148
Zhejiang
Can you nominate any manufacturers who have addressed\overcome this "signal-correlated noise" ( knowing your mantra, this will most likely invite a diplomatic response).

I can think of a couple of companies that do place an emphasis on the necessity to address it to achieve the best perceived sound quality. One is a manufacturer, Meridian and the other merely a licenser of its tech, Dolby.

My challenged logic tells me that if this is modulated noise, why should it be initiated and manifested as an artefact arising in the PS only?

It doesn't only arise from the PSU, but that is a major source for sure. In digital equipment it arises from a lack of correct dither when performing quantisation - a function included in sigma-delta (S-D) and DSD processing. It may also well arise from DEM algorithms in the back ends of low-bit S-D DACs.

Like so many other sub-domains found in a typical CD\SACD\DAC component, implementation is far more critical than theory and specifications.

Always assuming the theory itself isn't broken (as it is, for example in the case of DSD).
 

Yuri Korzunov

Member
Jul 30, 2015
138
0
16
Can you nominate any manufacturers who have addressed\overcome this "signal-correlated noise"

Kostas, what do you mean by "signal-correlated noise"?

Such noise may be generated when reducing bit depth (for example, from 24 to 16 bits).

In DSD, the noise is not correlated with the signal.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
On the other hand, assuming serious manufacturers of digital playback systems do accept your postulations, one would presume that they would address or have addressed the issue.

Can you nominate any manufacturers who have addressed\overcome this "signal-correlated noise" ( knowing your mantra, this will most likely invite a diplomatic response).
Not many, it would appear, but I'm not afraid of saying that I don't know the answer to your question (as my mantra would suggest).
My challenged logic tells me that if this is modulated noise, why should it be initiated and manifested as an artefact arising in the PS only?
I don't think it is in the PS only - for instance, I believe that a lot of the grounding discussions are actually about noise on the ground plane affecting the signal processing, as are the USB isolator products. So it's not just arising from the PS, although ultimately these issues can be traced back to PS interactions among connected devices.
Like so many other sub-domains found in a typical CD\SACD\DAC component, implementation is far more critical than theory and specifications. Any noise-related issue should be rectified wherever it occurs.

Thank you, Kostas.
I agree that implementation is critical, but one can only implement according to what one is aware of, & many manufacturers aren't aware of this aspect - it's not taught in engineering schools. Many treat power supplies as a black box: meet a certain spec & there's nothing to worry about. This is especially so in the digital audio domain, where the dissonance exists that if it can deliver the bits then the system is working as expected & no more can be done - binary thinking abounds in the digital audio domain.
 

KostasP.

Well-Known Member
May 6, 2016
116
74
135
Melbourne
Hello Yuri and opus112,

I used "signal-correlated noise" in italics because I was quoting jkeny. I take it to mean modulated noise carried or embedded in the signal. In a broader sense, my questions were rhetorical, but I wanted jkeny to respond and to hear his views as a manufacturer.

I am also interested in the possible deterioration\alteration of the timbral\tonal personality of signals and\or clusters of musical signals resulting from excessive processing or manipulation - oversampling, upsampling, converting and reconverting, etc. - all the way to the final CD\SACD product.

Thank you, Kostas.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland

Someone up-thread stated that perhaps digital is fatiguing because it doesn't smooth over the imperfections as analogue audio seems to do. There's logic to this, I believe, but I would express it differently. As you may know, I'm very interested in auditory perception & all my reading leads me to the following: we are born with the sense of hearing, but we train it to evaluate sound based on the sounds we encounter daily in the world. The internal model built from this exposure is what becomes our auditory perception - it's mostly an analytic process. Because we have built this model using the sounds of nature, we are using a restricted analysis which best matches these familiar sounds.

Now, nature & the sounds we encounter in the world do not arrive as pure tones - they are mixed with background noise. That is what we expect to hear & it is how our auditory perception works - it splits sounds into auditory streams, foreground & background, etc. Digital audio, by its very nature, delivers the audio signal in a less noisy window. This lack of background noise has less masking effect on any noise fluctuations & hence we notice their effect on the timbre & tonal characteristics of the sound. Does digital audio suffer somewhat from its own quieter background, by being more prone to our auditory analysis sensing that the soundscape we are hearing is somewhat unnatural, i.e. there isn't enough noise in the background? The low-level noise that is captured on the recording & played back is where digital audio is at its weakest. It also means that any perturbations in ground noise will be more noticeable.

The real issue is how sensitive we are to this disturbance in the sound, which I don't believe is answered by the threshold-of-hearing metric - we need far more sophisticated tests to ascertain this, & possibly fMRI is the only real answer: it can reveal perceptions that operate at the subconscious level, which we are not consciously aware of but which probably affect us in long-term listening.

This tends to explain why measurements aren't a predictor of sound - they are not based on the auditory processing model of hearing - & why we can find that some devices with high levels of jitter, for instance, can sound better than a low-jitter device. As is well known by digital engineers, there is uncorrelated & signal-correlated jitter - the former being fairly benign acoustically, the latter having more psychoacoustic impact. Overlaid onto this is the frequency spectrum of the jitter. It would appear that very low-frequency jitter (or phase noise) is more detrimental to soundstage & musical timbre. Why? Not sure, but maybe because it can interfere with the envelope of the sound, i.e. how a sound changes its spectral footprint as it develops, or maybe it changes the relative relationship between sound envelopes as music progresses through time.
 
Last edited:

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
Do then explain why you used the word 'oversampling' when your real meaning was 'process gain'.
No. Oversampling explains what is going on. "Process gain" is a metric. I was giving an explanation and hence oversampling is most definitely the correct usage.

As a parallel analogy: an amplifier amplifies sound, and it has a metric that is its "gain" in dB. Yet we use the word amplifier all the time; we don't say, "I bought this 23 dB gain device."

Indeed, the term "process gain" is hardly used in signal processing. Oversampling, on the other hand, is a well-known concept, and among DSP engineers it perfectly describes how we are effectively reducing the measured noise level through post processing.

Here is one of countless such examples: https://www.microsemi.com/document-portal/doc_view/131569-improving-adc-results-white-paper

"Improving ADC Results
Through Oversampling and Post-Processing of Data"


Search in there and the phrase "processing gain" does not appear anywhere.



Oh and also do please answer my earlier question relating to the claim of 'under 1Hz'.
This is the most fundamental concept here Opus.

Signal-to-noise ratio is exactly what it says it is. If we take the actual signal level, take the ratio of that to the noise level, and express it in dB, we have our signal-to-noise ratio.

Oversampling reduces the bandwidth of each DFFT bin, and as a result what sits there represents the noise in a sub-hertz bucket. It is that which then makes the signal-to-noise computation wrong: we can't take noise that is represented in a bucket smaller than 1 Hz and compare it to a signal measured over the full bandwidth.

Here is the application note from Audio Precision on this very topic: "FFT Scaling for Noise" (from my computer doc library -- google for original)


"Figure 3. Spectrum of a noisy signal measured at two different FFT resolutions.

So how is it that the apparent noise level of the signal changes by as much as 21 dB, based on the FFT resolution alone? This difference is due to the fact that the measurement of noise depends on the bandwidth of the measurement. For a spectrum display that contains all of the bins in the underlying FFT, each bin represents the narrow-band RMS level of the signal in that bin, equivalent to the level that would be measured by a bandpass filter with a bandwidth of Δf, the width of the FFT bin in Hz. Thus, the apparent noise floor of the spectrum depends on the bin width, or Δf, which in turn is a function of the number of FFT bins. Each time you double the number of FFT bins, the bin width is halved, reducing the "noise power" in each bin by a factor of 2. This equates to a 3 dB decrease in the RMS noise level. Therefore, in the example above, changing the FFT resolution from 256 to 32 k (a factor of 128, or 2^7) results in the RMS noise level in each bin being decreased by 3 dB x 7, or 21 dB.

Noise spectra are often displayed in a normalized format called power spectral density (PSD), or amplitude spectral density (ASD). This normalizes the data to the power spectrum (level squared) or amplitude spectrum that would be measured with a bin width of 1.0 Hz using a perfect bandpass filter centered at each point. In addition to compensating for the bin width (Δf), it corrects the spectrum for the scaling of the FFT window used."


You see the relation to a 1 Hz noise bandwidth? That is exactly what I mentioned in my original post. To understand that is to understand the concept, rather than some formula like process gain. Speaking of which, there is no reference to the term "process gain" anywhere in the Audio Precision application note above.
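
A minimal sketch of the scaling the note describes, with invented levels: the same noise record shown at 256-point and 32k-point FFT resolutions differs by about 21 dB in its per-bin floor, yet normalising either one to a 1 Hz bandwidth (PSD) gives the same answer.

import numpy as np

rng = np.random.default_rng(5)
fs = 48_000
x = rng.normal(scale=1e-4, size=fs * 4)       # 4 s of white noise at a fixed level

def per_bin_floor_db(n_fft: int) -> float:
    spec = np.abs(np.fft.rfft(x[:n_fft])) * np.sqrt(2) / n_fft
    return 20 * np.log10(np.median(spec))

for n_fft in (256, 32_768):
    floor = per_bin_floor_db(n_fft)
    df = fs / n_fft
    # Normalising to a 1 Hz bandwidth removes the dependence on bin width.
    psd = floor - 10 * np.log10(df)
    print(f"N = {n_fft:6d}   floor ~ {floor:7.1f} dB   PSD ~ {psd:7.1f} dB/Hz")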

So once again, all of this back and forth is due to a lack of familiarity, on your part and John's, with the common language DSP engineers use to talk about these concepts. It is understandable, because you don't work with other DSP engineers and have learned these concepts on your own. That's all.
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
No, I am not. I have yet to see a DAC that truly has in excess of 23 bit resolution, with the caveat that 'resolution' has its normal meaning not some Humpty Dumpty new one which you decide to dream up.
So you are in full agreement with what I said. Yet because you didn't understand the terminology I used, you continue to talk past me.
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
Amir, you are digging yourself in deeper
From the AP SYS-2722 analyser datasheet (which is what is used by Stereophile for their measurements)
"The SYS-2712 and SYS-2722 configurations add dual-channel A/D converters for
FFT and other special forms of analysis. Option “IMD” adds inter-modulation
distortion measurement capability. Option “W&F” adds wow & flutter measurement
capability."​

The specifications manual shows:
Residual Noise
22 Hz–22 kHz BW ≤1.0 µV [–117.8 dBu].
80 kHz BW ≤2.0 µV [–111.8 dBu].
500 kHz BW ≤6.0 µV [–102.2 dBu].
A-weighted ≤0.7 µV [–120.9 dBu].
CCIR-QPk ≤3.5 µV [–106.9 dBu].

Residual THD+N
At 1 kHz ≤(0.00025% + 1.0 µV) [–112 dB], 22 kHz BW
(valid only for analyzer inputs ≥8.5 Vrms).
20 Hz–20 kHz ≤(0.0003% + 1.0 µV) [–110.5 dB], 22 kHz BW,
≤(0.0005% + 2.0 µV) [–106 dB], 80 kHz BW,
≤(0.0010% + 6.0 µV) [–100 dB], 500 kHz BW.
10 Hz–100 kHz ≤(0.0040% + 6.0 µV) [–88 dB], 500 kHz BW.

Please show us where they specify that oversampling of their ADCs can achieve a -144 dB 'noise floor'?
Why? This is what I said:

The noise floor of the Audio Precision Analyzer that JA used is about -120 dB.

The -144 dB figure is achieved through signal processing, enabling us to see deep into the noise floor of the overall system.

Furthermore, the second Stereophile FFT you posted showed a lower 'noise floor' of -154 dB. What did they do to achieve this lower 'noise floor' - increase the 'oversampling' in their ADCs? Please tell us how they did that.
Again, you are totally confused by the use of these terms. Oversampling is a component of many ADCs; that is not what I am talking about. Here, oversampling refers to oversampling of the combined noise of the DAC+ADC. By oversampling, we reduce its bandwidth per DFFT bin, allowing distortion signals to become visible.

We are using the power of software to gain visibility of signals buried in the noise. It has nothing to do with how an ADC is designed (oversampling or not).

On page 37 there are two FFT plots - one showing a 'noise floor' lower than -170 dB - please explain how they adjusted their ADC "oversampling" to achieve this.

Edit: Ah, so "oversampling" is Amir-speak for FFT process gain & he claims it is standard terminology in DSP for FFT process gain - please show this to be the case, Amir - it would surely be easy to show this in any standard DSP text. Is this what you are now claiming or can we expect another twist in terminology to defend your latest misstep?

And yes, please answer the question that you have promised to answer - where do you ascertain that the FFT was using a bin width of <1Hz (or is this to be interpreted in some other Amir way)?
See my answer to Opus on both.

Summarizing, then: you did indeed get confused by the word oversampling, thinking it meant the ADC in the analyzer was operated differently. This is why I kept asking you to explain what "fundamental" problem you had with my original post. You never said it, but here you have demonstrated that you thought the operational mode of the ADC was changed. This was a clear mistake on your part, John.

I managed DSP engineers for a decade, John. I don't say that to impress you, but when I am talking to one of them here, i.e. Yuri, I am going to use terminology that is common and familiar to them. That you didn't understand it should simply have been cause for a clarifying question: "Amir, what do you mean by oversampling?" Instead you have run off with a bunch of rants, insults, etc., polluting this thread.
 

RogerD

VIP/Donor
May 23, 2010
3,734
319
565
BiggestLittleCity

I can only give my non-scientific observations, but I would say Kostas's "modulated noise carried or embedded in the signal" has a dramatic effect on the digital signal. After all my experiments I would say with certainty that the digital signal can produce reproduced audio that is equal to or better than analogue reproduction. Even 16-bit can meet this analogue standard, and as the signal becomes washed, I think the disparity between 16-bit and higher quality becomes less and less.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Amir, I'm not sure if you set out to confuse with how you communicate, or if you are just confused.

Here's what you said originally
"The actual noise floor is not -140+ db. Oversampling is used in the measurements resulting in much lower measured noise floor. Without it we would be seeing the ADC noise of the measurement system itself!"​

What you are now claiming this means is that oversampling is the commonly used DSP terminology to describe a post-processing function which can be used to increase the number of bins in an FFT window & hence reduce the FFT bin bandwidth.
"Here, oversampling refers to oversampling of the noise in the combined noise of the DAC+ADC. By oversampling, we reduce its bandwidth per DFFT bin, allowing distortion signals to become visible.

We are using the power of software to gain visibility of signals buried in the noise. It has nothing to do with how an ADC is designed (oversampling or not).


The confusion started when you cited ADCs as examples of oversampling, & again linked to a paper here, yet this has nothing to do with ADC oversampling, right - so what is your intent?
"Improving ADC Results
Through Oversampling and Post-Processing of Data"​

Now, in that paper, yes, it mentions ADC oversampling, but there is no mention of post-processing oversampling or anything that relates to what you claim it shows for FFT processes: "it perfectly describes how we are effectively reducing the measured noise level through post processing."
So what are you trying to show by citing it?

Now let's get beyond the terminology: please tell us what you understand the post-processing oversampling in the FFT process to be, & how it increases the FFT resolution. Does it zero-pad, or interpolate new samples between existing samples, in this process?
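
For what it is worth, a minimal numpy sketch contrasting the two possibilities raised in that question (with invented levels): analysing a longer stretch of real samples lowers the per-bin noise floor by about 3 dB per doubling, whereas zero-padding the same record only interpolates the spectrum and leaves the apparent floor where it was.

import numpy as np

rng = np.random.default_rng(6)
noise = rng.normal(scale=1e-5, size=1 << 16)

def floor_db(x: np.ndarray, n_fft: int) -> float:
    # Per-bin amplitude with a simple 1/len(x) scaling of the captured samples;
    # np.fft.fft zero-pads x up to n_fft when n_fft > len(x).
    spec = np.abs(np.fft.fft(x, n=n_fft)) / len(x)
    return 20 * np.log10(np.median(spec))

n = 1 << 14
print("16k real samples:                ", round(floor_db(noise[:n], n), 1), "dB")
print("32k real samples:                ", round(floor_db(noise[:2 * n], 2 * n), 1), "dB")
print("16k samples, 32k zero-padded FFT:", round(floor_db(noise[:n], 2 * n), 1), "dB")

In this sketch the lower floor comes only from analysing more real samples.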
 
Last edited:
