Hi res again?

Phelonious Ponk

New Member
Jun 30, 2010
I can't hear it. Or let me put that more precisely: Every time I've acquired a hi-res file that sounded different/better than its redbook equivalent, it ended up being the mastering, not the resolution. How do I know? Because I'm a computer audiophool and if I don't manually re-set iTunes to play the bitrate of the file, it will up/down convert it to whatever OSX is set to. And every time I've downloaded a hi-res file and heard a difference, I then re-set iTunes to play it as a 16-bit file...and I still heard the difference. So these days I just leave it set at 16, acquire the best-sounding masters I can find, and enjoy the music.

Mileage varies a lot on this issue, and I thought I more or less understood why I was hearing what I was(n't) hearing. But I just ran into this, which explains it rather clearly and well, I think. Long, but worth the read.

From "Digital Audio Explained, for the Audio Engineer," by Nika Aldrich....

It seems to me that there is a lot of misunderstanding regarding what bit depth is and how it works in digital audio. This misunderstanding exists not only in the consumer and audiophile worlds but also in some educational establishments and even among some professionals. It comes from supposition about how digital audio works rather than how it actually works. It's easy to see in a photograph the difference between a low bit depth image and one with a higher bit depth, so it's logical to suppose that a higher bit depth in audio also means better quality. This supposition is further reinforced by the fact that the term 'resolution' is often applied to bit depth, and obviously more resolution means higher quality. So 24bit is Hi-Rez audio, and 24bit contains more data, therefore higher resolution and better quality. All completely logical supposition, but I'm afraid this supposition is not entirely in line with the actual facts of how digital audio works. I'll try to explain:

When recording, an Analogue to Digital Converter (ADC) reads the incoming analogue waveform and measures it so many times a second (1*). In the case of CD there are 44,100 measurements made per second (the sampling frequency). These measurements are stored in the digital domain in the form of computer bits. The more bits we use, the more accurately we can measure the analogue waveform. This is because each bit can only store two values (0 or 1); to get more values we do the same with bits as we do in normal counting. I.e. once we get past 9, we have to add another column (the tens column), and we can keep adding columns ad infinitum for 100s, 1000s, 10,000s, etc. The exact same is true for bits, but because we only have two values per bit (rather than 10) we need more columns; each column (or additional bit) doubles the number of values we have available. I.e. 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 .... If these numbers appear a little familiar it is because all computer technology is based on bits, so these numbers crop up all over the place. In the case of 16bit we have roughly 65,000 different values available. The problem is that an analogue waveform is constantly varying. No matter how many times a second we measure the waveform or how many bits we use to store the measurement, there are always going to be errors. These errors in quantifying the value of a constantly changing waveform are called quantisation errors. Quantisation errors are bad, they cause distortion in the waveform when we convert back to analogue and listen to it.
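As a quick illustration of the doubling described above (my sketch, not part of the article), counting the values available at a few bit depths:

```python
# Each extra bit doubles the number of values available to describe
# the waveform, so the size of one quantisation step shrinks by half.
for bits in (4, 8, 16, 24):
    values = 2 ** bits                  # number of distinct levels
    step = 1.0 / values                 # one step as a fraction of full scale
    print(f"{bits:2d} bits: {values:>10,} values, step {step:.1e} of full scale")
```

16 bits gives 65,536 values, which is the "roughly 65,000" figure above.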

So far so good; what I've said until now would agree with the supposition of how digital audio works. I seem to have agreed that more bits = higher resolution. True; however, where the facts start to diverge from the supposition is in understanding the result of this higher resolution. Going back to what I said above, each time we increase the bit depth by one bit, we double the number of values we have available (e.g. 4bit = 16 values, 5bit = 32 values). If we double the number of values, we halve the size of the quantisation errors. Still with me? Because now we come to the whole nub of the matter. There is in fact a perfect solution to quantisation errors which completely (100%) eliminates quantisation distortion, the process is called 'Dither' and is built into every ADC on the market.

Dither: Essentially during the conversion process a very small amount of white noise is added to the signal, this has the effect of completely randomising the quantisation errors. Randomisation in digital audio, once converted back to analogue is heard as pure white (un-correlated) noise. The result is that we have an absolutely perfect measurement of the waveform (2*) plus some noise. In other words, by dithering, all the measurement errors have been converted to noise. (3*).
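A toy numerical sketch of what dither does (my illustration, with an exaggerated one-unit quantisation step, not the article's): without dither, a value sitting between two levels always rounds the same wrong way, a fixed error correlated with the signal; with TPDF dither the error becomes unbiased noise, so on average the quantiser reports the true value.

```python
import random

random.seed(1)

def quantise(v, dither=False):
    # TPDF dither: the sum of two uniform randoms, spanning ±1 quantisation step
    if dither:
        v += random.random() - random.random()
    return round(v)                     # quantiser with a step size of 1

x = 0.3                                 # a value between two quantiser levels
print(quantise(x))                      # always 0: a fixed, correlated error
avg = sum(quantise(x, dither=True) for _ in range(100_000)) / 100_000
print(round(avg, 2))                    # averages out very close to 0.3
```

The individual dithered conversions are still wrong, but the error carries no trace of the signal; it is just noise that averages away.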
Hopefully you're still with me, because we can now go on to precisely what happens with bit depth. Going back to the above, when we add a 'bit' of data we double the number of values available and therefore halve the size of the quantisation errors. If we halve the quantisation errors, the result (after dithering) is a perfect waveform with half the amount of noise. To phrase this using audio terminology, each extra bit of data moves the noise floor down by 6dB (half). We can turn this around and say that each bit of data provides 6dB of dynamic range (4*). Therefore 16bit x 6dB = 96dB. This 96dB figure defines the dynamic range of CD. (24bit x 6dB = 144dB).
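The 6dB-per-bit figure is really 20·log10(2) ≈ 6.02dB (halving an amplitude is a 6.02dB drop); a one-line check (my sketch):

```python
import math

db_per_bit = 20 * math.log10(2)         # halving the noise = about -6.02 dB
for bits in (16, 24):
    print(f"{bits} bits: about {bits * db_per_bit:.0f} dB dynamic range")
# 16 bits comes to roughly 96 dB and 24 bits to roughly 144 dB, as in the text
```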
So, 24bit does add more 'resolution' compared to 16bit but this added resolution doesn't mean higher quality, it just means we can encode a larger dynamic range. This is the misunderstanding made by many. There are no extra magical properties, nothing which the science does not understand or cannot measure. The only difference between 16bit and 24bit is 48dB of dynamic range (8bits x 6dB = 48dB) and nothing else. This is not a question for interpretation or opinion, it is the provable, undisputed logical mathematics which underpins the very existence of digital audio.
So, can you actually hear any benefit from the larger (48dB) dynamic range offered by 24bit? Unfortunately, no, you can't. The entire dynamic range of some types of music is sometimes less than 12dB. The recordings with the largest dynamic range tend to be symphony orchestra recordings, but even these virtually never have a dynamic range greater than about 60dB. All of these are well inside the 96dB range of the humble CD. What is more, modern dithering techniques (see 3* below) perceptually enhance the dynamic range of CD by moving the quantisation noise out of the frequency band where our hearing is most sensitive. This gives a perceivable dynamic range for CD of up to 120dB (150dB in certain frequency bands).

You have to realise that when playing back a CD, the amplifier is usually set so that the quietest sounds on the CD can just be heard above the noise floor of the listening environment (sitting room or cans). So if the average noise floor for a sitting room is, say, 50dB (or 30dB for cans), then the dynamic range of the CD starts at this point and is capable of 96dB (at least) above the room noise floor. If the full dynamic range of a CD were actually used (on top of the noise floor), the home listener (if they had the equipment) would almost certainly cause themselves severe pain and permanent hearing damage. If this is the case with CD, what about 24bit Hi-Rez? If we were to use the full dynamic range of 24bit and a listener had the equipment to reproduce it all, there is a fair chance, depending on age and general health, that the listener would die instantly. The most fit would probably just go into a coma for a few weeks and wake up totally deaf. I'm not joking or exaggerating here; think about it: 144dB plus, say, 50dB for the room's noise floor comes to 194dB. 180dB is the figure often quoted for sound pressure levels powerful enough to kill, and some people have been killed by 160dB. However, this is unlikely to happen in the real world, as no DACs on the market can output the 144dB dynamic range of 24bit (so they are not true 24bit converters), almost no one has a speaker system capable of 144dB dynamic range, and as said before, around 60dB is the most dynamic range you will find on a commercial recording.

So, if you accept the facts, why does 24bit audio even exist? What's the point of it? There are some useful applications for 24bit when recording and mixing music. In fact, when mixing it's pretty much the norm now to use 48bit resolution. The reason it's useful is due to summing artefacts, multiple processes in series, and mainly headroom. In other words, 24bit is very useful when recording and mixing but pointless for playback. Remember, even a recording with 60dB dynamic range is only using 10bits of data; the other 6bits on a CD are just noise. So, the difference in the real world between 16bit and 24bit is an extra 8bits of noise.
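The "60dB is only 10 bits" arithmetic above can be sketched the same way (my illustration, inverting the ~6.02dB-per-bit relationship):

```python
import math

def bits_needed(dynamic_range_db):
    # divide by ~6.02 dB per bit and round up to whole bits
    return math.ceil(dynamic_range_db / (20 * math.log10(2)))

print(bits_needed(60))   # a very wide-range commercial recording: 10 bits
print(bits_needed(96))   # the full Redbook range: 16 bits
```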

I know that some people are going to say this is all rubbish, and that "I can easily hear the difference between a 16bit commercial recording and a 24bit Hi-Rez version". Unfortunately, you can't; it's not that you don't have the equipment or the ears, it is not humanly possible in theory or in practice under any conditions!! Not unless you can tell the difference between white noise and white noise that is well below the noise floor of your listening environment!! If you play a 24bit recording and then the same recording in 16bit and notice a difference, it is either because something has been 'done' to the 16bit recording, some inappropriate processing was used, or you are hearing a difference because you expect a difference.

G
1* = Actually, these days the process of AD conversion is a little more complex, using oversampling (very high sampling frequencies) and only a handful of bits. Later in the conversion process this initial sampling is 'decimated' back to the required bit depth and sample rate.
2* = The concept of the perfect measurement, or of recreating a waveform perfectly, may seem like marketing hype. However, in this case it is not. It is in fact the fundamental tenet of the Nyquist-Shannon Sampling Theorem, on which the very existence and invention of digital audio is based. From WIKI: "In essence the theorem shows that an analog signal that has been sampled can be perfectly reconstructed from the samples". I know there will be some who will disagree with this idea; unfortunately, disagreement is NOT an option. This theorem wasn't invented to explain how digital audio works, it's the other way around: digital audio was invented from the theorem. If you don't believe the theorem then you can't believe in digital audio either!!
3* = In actual fact, these days there are a number of different types of dither used during the creation of a music product. Most are still based on the original TPDF (triangular probability density function), but some are a little more 'intelligent' and re-distribute the resulting noise to less noticeable areas of the hearing spectrum. This is called noise-shaped dither.
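A minimal sketch of the error-feedback idea behind noise shaping (my illustration; real mastering-grade shapers use higher-order psychoacoustic filters): subtracting the previous sample's quantisation error from the next input pushes the noise energy toward high frequencies, which shows up as negative correlation between successive noise samples.

```python
import math, random

random.seed(0)
STEP = 2.0 / (2 ** 16)                  # 16-bit quantisation step for a ±1.0 signal

def shape(signal):
    out, e = [], 0.0
    for x in signal:
        v = x - e                                       # feed back last error
        d = (random.random() - random.random()) * STEP  # TPDF dither
        y = round((v + d) / STEP) * STEP                # dithered quantiser
        e = y - v                                       # error to feed back
        out.append(y)
    return out

sig = [0.5 * math.sin(2 * math.pi * 1000 * n / 44100) for n in range(4096)]
noise = [y - x for x, y in zip(sig, shape(sig))]
# successive shaped-noise samples anti-correlate: energy has moved upward
lag1 = sum(a * b for a, b in zip(noise, noise[1:])) / (len(noise) - 1)
print(f"lag-1 autocorrelation of the shaped noise: {lag1:+.2e}")
```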
4* = Dynamic range is the range of volume between the noise floor and the maximum volume.

Tim
 

audioguy

WBF Founding Member
Apr 20, 2010
Near Atlanta, GA but not too near!
Very, very interesting.

I can just hear some who will continue to say "Well I can hear the difference and you can't prove I don't" [From your other thread on being able to prove a negative] or "Since I can hear the difference, there must be other measurements that we don't know about that would prove it".

I recently purchased a new music server which really has dramatically improved the digital audio playback in my room. (Since I heard the difference blind and didn't have any clue what I was listening to, I am comfortable saying that there was an audible difference.) In fact my digital system is so good I am considering dumping my analog hardware (and software).

As a result of this new music server, I have downloaded a number of "hi rez" albums that I also happen to have in Redbook.....and I am not able to "consistently" tell the difference. And now I know why.

As an aside, some of the hi-rez albums I've purchased do sound different, and my guess (based upon the article you provided) is that there was more done to the hi-rez version than just remaking it at a higher rez.

Very interesting ... and my guess is that this will be a most interesting thread. Thanks for posting.
 

RBFC

WBF Founding Member
Apr 20, 2010
Albuquerque, NM
www.fightingconcepts.com
In many cases, for us old guys, well-done redbook can approach the "as good as it needs to be" level. When the full redbook standard is not met, differences can be more obvious. The other factors are the same/different master and the listener knowing what to listen for when comparing the two (provided the program material contains examples of these cues).

Lee
 

mep

Member Sponsor & WBF Founding Member
Apr 20, 2010
Here is problem number one for me: “The problem is that an analogue waveform is constantly varying. No matter how many times a second we measure the waveform or how many bits we use to store the measurement, there are always going to be errors. These errors in quantifying the value of a constantly changing waveform are called quantisation errors. Quantisation errors are bad, they cause distortion in the waveform when we convert back to analogue and listen to it.”

I understand what is being said here and it is telling. Digital audio regardless of bit depth is unable to perfectly measure an analog waveform. These errors in measurement are quantization errors which results in distortion. Got it.

Here is what I don’t get: “There is in fact a perfect solution to quantisation errors which completely (100%) eliminates quantisation distortion, the process is called 'Dither' and is built into every ADC on the market.

Dither: Essentially during the conversion process a very small amount of white noise is added to the signal, this has the effect of completely randomising the quantisation errors. Randomisation in digital audio, once converted back to analogue is heard as pure white (un-correlated) noise. The result is that we have an absolutely perfect measurement of the waveform (2*) plus some noise. In other words, by dithering, all the measurement errors have been converted to noise. (3*).”

Here is where I get off the bus or I missed the bus. I don’t understand how by randomizing errors by adding noise to the signal leads to perfect waveforms. Randomizing means that you have just spread the mess around doesn’t it? I just don’t understand how data that was missed/distorted at the very beginning because the analog waveform is constantly varying and digital is incapable of capturing the ever-changing waveforms without adding distortion can be perfectly added back by introducing noise to the signal. How does adding random noise equal perfect waveform reconstruction? I’m missing something here. Just because you have converted measurement errors to noise doesn’t mean that you still haven’t lost some data because of quantization errors does it?
 

Mike Lavigne

Member Sponsor & WBF Founding Member
Apr 25, 2010
FWIW, at the recent Cali Audio Show, there were demos between redbook and hirez files and most people couldn't hear a difference.

it does take a system with a decent set-up that demonstrates differences in noise floor and soundstage to hear clearly the moves up the PCM ladder. particularly if you are not familiar with the system.

if you spent a couple of hours in a particular room at a show, and then heard 44/16, 88/24, 176/24, and so on, i think you'd easily hear the steps up. but 'cold', or with 20-30 minutes only in a room it would be challenging as you have not calibrated to the room and everything is 'new'.

i know what i hear in my room, or Bruce's room. they are quite clear.....and then when you go from, say, 44/16 to 176/24, to DSD.....those are significant differences.

i do this in my room frequently. start with redbook, play the HRX, then to analog.....where i know with certainty that redbook and HRX are the same mastering.
 

Phelonious Ponk

New Member
Jun 30, 2010
Here is problem number one for me: “The problem is that an analogue waveform is constantly varying. No matter how many times a second we measure the waveform or how many bits we use to store the measurement, there are always going to be errors. These errors in quantifying the value of a constantly changing waveform are called quantisation errors. Quantisation errors are bad, they cause distortion in the waveform when we convert back to analogue and listen to it.”

I understand what is being said here and it is telling. Digital audio regardless of bit depth is unable to perfectly measure an analog waveform. These errors in measurement are quantization errors which results in distortion. Got it.

Here is what I don’t get: “There is in fact a perfect solution to quantisation errors which completely (100%) eliminates quantisation distortion, the process is called 'Dither' and is built into every ADC on the market.

Dither: Essentially during the conversion process a very small amount of white noise is added to the signal, this has the effect of completely randomising the quantisation errors. Randomisation in digital audio, once converted back to analogue is heard as pure white (un-correlated) noise. The result is that we have an absolutely perfect measurement of the waveform (2*) plus some noise. In other words, by dithering, all the measurement errors have been converted to noise. (3*).”

Here is where I get off the bus or I missed the bus. I don’t understand how by randomizing errors by adding noise to the signal leads to perfect waveforms. Randomizing means that you have just spread the mess around doesn’t it? I just don’t understand how data that was missed/distorted at the very beginning because the analog waveform is constantly varying and digital is incapable of capturing the ever-changing waveforms without adding distortion can be perfectly added back by introducing noise to the signal. How does adding random noise equal perfect waveform reconstruction? I’m missing something here. Just because you have converted measurement errors to noise doesn’t mean that you still haven’t lost some data because of quantization errors does it?

I get what he's saying -- the errors are converted to uncorrelated noise, so they simply become a part of the noise floor, and even then, the noise floor is very, very low. But I don't understand how it's done either. And even if it's not done as perfectly as he says, and errors remain, that's a difference between analog and digital, not between redbook and hi-res. If I get what he's saying about that, it's that mathematically, at playback, there is no difference between redbook and hi-res. And digital audio is math. So what is it that we hear that we're waiting for science to catch up with?

Someone who gets the whole thing better than I will, no doubt, come along shortly to confuse me further. This guy was good, though. He was writing for Audio Engineers but he made it clearer for this dumb old musician than it has been before.

Tim
 

LenWhite

Well-Known Member
Feb 11, 2011
Florida
systems.audiogon.com
i do this in my room frequently. start with redbook, play the HRX, then to analog.....where i know with certainty that redbook and HRX are the same mastering.

Mike, how do you know with certainty the mastering is identical? In my experience direct comparisons are extremely difficult because different formats often are mastered differently. There's no question the recording and mastering are absolutely key to sonic quality, and I do own some awesome sounding RBCDs (e.g., Hadouk Trio: Live at FIP). And while I also have quite a number of awesome sounding SACDs, I can't be sure their sonic quality isn't due more to recording and mastering than to bits.
 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
Monument, CO
Note by "Audio Engineer" he is talking about audio recording engineers, not equipment designers.

Dither does not completely eliminate quantisation errors. It spreads out the energy of the sampling spurs, effectively decorrelating them from the signal and providing a smoother, if somewhat higher, noise floor. I think I may have touched upon this in my sampling tutorials (but have not gone back and looked). Nonlinearity errors in the ADC/DAC are not really affected by dither... I can say more later but have to run to church!

Aside: I have known Nika for ages; he was my sales guy at Sweetwater long ago. Small world! - Don
 

Phelonious Ponk

New Member
Jun 30, 2010
Note by "Audio Engineer" he is talking about audio recording engineers, not equipment designers.

Yeah, I know, but I assume they know this stuff better than me, too.

Tim
 

Mike Lavigne

Member Sponsor & WBF Founding Member
Apr 25, 2010
Mike, how do you know with certainty the mastering is identical? In my experience direct comparisons are extremely difficult because different formats often are mastered differently. There's no questioning the recording and mastering is absolutely key to sonic quality, and I do own some awesome sounding RBCD's (e.g., Hadouk Trio: Live at FIP). And while I also have quite a number of awesome sounding SACD's, I'm not sure their sonic quality may be due more to recording and mastering rather than bits.

Paul Stubblebine has done 100% of the RR masterings for 20 years. he uses the 176/24 as the source for the RR CD's, and obviously, the HRX's are the 176/24's.

Keith Johnson built the Pacific Microsonics ADC to do his digital recordings, and starting in the late 80's did simultaneous digital and analog masters from the same mic feed.....until the late 90's (maybe a bit later). so all RR CD's during that period were digital sourced even though there were tape masters too.

so earlier than the late 80's RR recordings might have CD's that are analog based and so then we would likely have mastering differences between redbook and higher rez PCM or SACD's. although knowing Paul would have done the masterings i'd expect they are very consistent.
 

Phelonious Ponk

New Member
Jun 30, 2010
Paul Stubblebine has done 100% of the RR masterings for 20 years. he uses the 176/24 as the source for the RR CD's, and obviously, the HRX's are the 176/24's.

Keith Johnson built the Pacific Microsonics ADC to do his digital recordings, and starting in the late 80's did simultaneous digital and analog masters from the same mic feed.....until the late 90's (maybe a bit later). so all RR CD's during that period were digital sourced even though there were tape masters too.

so earlier than the late 80's RR recordings might have CD's that are analog based and so then we would likely have mastering differences between redbook and higher rez PCM or SACD's. although knowing Paul would have done the masterings i'd expect they are very consistent.

I had a little trouble keeping up with all of that, Mike, are you saying that one of these gentlemen, Mr. Stubblebine or Mr. Johnson has produced digital discs or files at various resolutions, including redbook, from exactly the same hi-res digital master and you own them and have compared them?

Tim
 

Mike Lavigne

Member Sponsor & WBF Founding Member
Apr 25, 2010
I had a little trouble keeping up with all of that, Mike, are you saying that one of these gentlemen, Mr. Stubblebine or Mr. Johnson has produced digital discs or files at various resolutions, including redbook, from exactly the same hi-res digital master and you own them and have compared them?

Tim

Keith Johnson is the recording engineer for RR. he also is a designer of gear.......both pro audio (Pacific Microsonics ADC and his own mods to his own RTR deck used in the RR recordings ) and consumer gear (Spectral). Keith also was involved in the development of HDCD; now sold to Microsoft. Amir knows Keith thru that.

Paul Stubblebine has a studio in San Francisco where RR has their recordings mastered. Paul is also one of the principals of the Tape Project. Paul is the mastering engineer who makes the CD's from the 176/24 master files. Paul also mastered the HRx discs from those same files.

the openness of RR as to the exact recording chain used in their recordings, and mastering consistency, as well as the typically fine quality of performance and recordings, makes Reference Recordings a great source of knowledge for an audiophile. all those variables which cloud cause and effect as we try to learn are made very clear by RR.
 

fas42

Addicted To Best
Jan 8, 2011
NSW Australia
I'm going to be very bold here and suggest that the core of the problem is the quality of the playback chain, and that this is the major contributing cause of variance in perceived quality. Yes, that includes having the "same" track at different sampling frequencies going through the identical DAC and other electronics, etc. In other words, it is pointless arguing about the relative merits of different playback formats without also bringing in the actual, real-life, not theoretical, abilities of the reproduction equipment used to assess any differences in formats. The medium and the device used to process it are inextricably linked; it's one continuum as far as the ear/brain is concerned.

So I would consider the only reasonable way of deciding anything here is to take all these different tracks, mastered in different ways, and cross-sample them (is that a word??) using the very best software to all the various formats under discussion, and then do some listening tests using a set piece of equipment. This has already been tackled in a thread here, with the assistance of Bruce, and the results posted suggested minimal variation was perceived by most ...

Frank
 

fas42

Addicted To Best
Jan 8, 2011
NSW Australia
Here is where I get off the bus or I missed the bus. I don’t understand how by randomizing errors by adding noise to the signal leads to perfect waveforms. Randomizing means that you have just spread the mess around doesn’t it? I just don’t understand how data that was missed/distorted at the very beginning because the analog waveform is constantly varying and digital is incapable of capturing the ever-changing waveforms without adding distortion can be perfectly added back by introducing noise to the signal. How does adding random noise equal perfect waveform reconstruction? I’m missing something here. Just because you have converted measurement errors to noise doesn’t mean that you still haven’t lost some data because of quantization errors does it?
What's been missed out of the discussion is that the ear/brain is excellent in picking out patterns in what is heard. If you hear noise that has a consistent beat, tonal quality or emphasis, in other words it has structure to it then your mind interprets it as a sound, not noise, and if it is at cross purposes to the music then it will also certainly sound like distortion. If you get rid of every last ounce of predictability in the "error" sound then the ear/brain can let it go, it just becomes meaningless, and it is heard as pure, easily dismissed, "noise" ...

Frank
 

Phelonious Ponk

New Member
Jun 30, 2010
Keith Johnson is the recording engineer for RR. he also is a designer of gear.......both pro audio (Pacific Microsonics ADC and his own mods to his own RTR deck used in the RR recordings ) and consumer gear (Spectral). Keith also was involved in the development of HDCD; now sold to Microsoft. Amir knows Keith thru that.

Paul Stubblebine has a studio in San Francisco where RR has their recordings mastered. Paul is also one of the principals of the Tape Project. Paul is the mastering engineer who makes the CD's from the 176/24 master files. Paul also mastered the HRx discs from those same files.

the openness of RR as to the exact recording chain used in their recordings, and mastering consistency, as well as the typically fine quality of performance and recordings, makes Reference Recordings a great source of knowledge for an audiophile. all those variables which cloud cause and effect as we try to learn are made very clear by RR.

That added some substance to what was already a fine round of name-dropping, but didn't answer the question. Is someone in this line of audio monarchy producing digital files at various resolutions, from Redbook up, from exactly the same master for you to compare?

Tim
 

fas42

Addicted To Best
Jan 8, 2011
NSW Australia
I posted some files that were hi-rez and also at different sampling rates explaining what to listen for.
I've just had a look at these for the first time ...

Yes, on the extremely basic DAC and electronics of a PC, the difference between how the formats are rendered is obvious even over the primitive built-in speaker. But does that translate to a top-notch D/A setup on a properly optimised high end audio system?

Interestingly, there's a bit of clipping in the right channel, some savage and some minor. Is that par for audio these days?

Frank
 
