Conclusive "Proof" that higher resolution audio sounds different

techno-filth Vs Half-arsed - I'm sure we can sell tickets to that fight - it has a certain ring to it in the hood :)
 
Lots of things are hidden under masks, such as all of the techno-filth that is hidden under the mask of sighted evaluations. Stereophile, anybody?

Seems like Stereophile is still living rent-free in your consciousness, Mr. Krueger. :)

But seriously, that Stereophile performs sighted listening has no bearing on the issue at hand, which concerns significant flaws in the much-lauded blind tests "proving" the inaudibility of such things as digital audio with a sample rate greater than the CD's 44.1kHz.

Looking at the program for October's AES Convention in Los Angeles, it looks as if there will be a paper repeating the essence of the Meyer-Moran tests but conforming to the requirements listed in ITU BS.1116-2.

John Atkinson
Editor, Stereophile
 
Thanks, John, for the heads-up - there are some interesting conference papers listed.
 
Looking at the program for October's AES Convention in Los Angeles, it looks as if there will be a paper repeating the essence of the Meyer-Moran tests but conforming to the requirements listed in ITU BS.1116-2.
Are you referring to Convention Paper 9174 by Meridian? Luckily they tested filter and wordlength/dither aspects separately. I hope they found a few well trained listeners. Looks interesting.
 
Are you referring to Convention Paper 9174 by Meridian? Luckily they tested filter and wordlength/dither aspects separately. I hope they found a few well trained listeners. Looks interesting.

I hope they found a broad range of listeners. What can be heard when you know exactly what to listen for and where is not necessarily relevant to what can be heard when you're listening to music. This is the biggest flaw in the test protocols we're currently discussing. No matter how well-controlled they are, they are telling us very little about our actual listening experience.

Tim
 
Tim - the tests are to establish the limits of what is audibly perceivable. Do you suggest that listening tests to establish audibility limits should use only music?
What you do with the knowledge of the known limits is up to you, but establishing them first is important. You may consider them irrelevant to music listening? You may personally not be able to hear above 14kHz, so you may consider that a limited bandwidth is suitable for you. ArnyK has established that he has hearing damage, so his selection criteria for audio reproduction may well be even more limited than yours.
 
Tim - the tests are to establish the limits of what is audibly perceivable. Do you suggest that listening tests to establish audibility limits should use only music?
What you do with the knowledge of the known limits is up to you, but establishing them first is important. You may consider them irrelevant to music listening? You may personally not be able to hear above 14kHz, so you may consider that a limited bandwidth is suitable for you. ArnyK has established that he has hearing damage, so his selection criteria for audio reproduction may well be even more limited than yours.

I meant what I said, John, no more, no less. Is establishing "the known limits" of what is audible important? I don't know, but we certainly haven't done that, or even attempted to do that, here. We have established what Amir (who has said that his hearing drops off rapidly above 12kHz) and a few others can hear. Given that there are, no doubt, some adolescent girls out there who can hear 18kHz, maybe even 20kHz, we are not establishing any known limits. We now have a bit of data that says Amir and a few others have been able to hear some undefined difference when they knew precisely what to listen for. Is that useful? Yes. It contradicts previous data indicating that no audible difference could be heard between hi-res and RB files. Does it have any bearing on the experience of listening to music? Maybe. But that has not only not been established, we haven't even added to the data set for that. And that's what I meant, John, no more, no less.

Science is hard.

Tim
 
Tim, that's not science you are citing - that's your bias packaged to look like science - the fundamental point that you are blind to in this thread. Did you read the title of the paper "The Audibility of Typical Digital Audio Filters in a High-Fidelity Playback System"? Did you read BS1116 or at least scan it? See the bit about trained listeners?

Did JJ test a broad range of listeners when testing the audibility of codecs? No! They were trained listeners who knew what to listen for & used specific audio to best reveal audible characteristics that were theorised might exist. Does it have any bearing on listening to music by the general public? Yes!

Can long-term listening tests with music be done at a later stage to establish the relevance of the results to the general public? Sure, but it's not the first test that is done when trying to establish "The Audibility of Typical Digital Audio Filters in a High-Fidelity Playback System" - is it?
 
We now have a bit of data that says Amir and a few others have been able to hear some undefined difference when they knew precisely what to listen for. Is that useful? Yes. It contradicts previous data indicating that no audible difference could be heard between hi-res and RB files. Does it have any bearing on the experience of listening to music? Maybe. But that has not only not been established, we haven't even added to the data set for that.

Science is hard.

Tim
Spot on, Tim.
 
(...)

Science is hard.

Tim

Tim,

We can easily agree on this. But fortunately high-end has not been waiting for scientific confirmation of existing stereo audio knowledge to implement excellent SOTA audio products that we can currently enjoy, while a few enjoy believing that it is all bias and illusion because it was not peer-reviewed.
 
Tim, that's not science you are citing - that's your bias packaged to look like science - the fundamental point that you are blind to in this thread. Did you read the title of the paper "The Audibility of Typical Digital Audio Filters in a High-Fidelity Playback System"? Did you read BS1116 or at least scan it? See the bit about trained listeners?

Did JJ test a broad range of listeners when testing the audibility of codecs? No! They were trained listeners who knew what to listen for & used specific audio to best reveal audible characteristics that were theorised might exist. Does it have any bearing on listening to music by the general public? Yes!

Can long-term listening tests with music be done at a later stage to establish the relevance of the results to the general public? Sure, but it's not the first test that is done when trying to establish "The Audibility of Typical Digital Audio Filters in a High-Fidelity Playback System" - is it?

Let's review, John. Here's what I said:

I hope they found a broad range of listeners. What can be heard when you know exactly what to listen for and where is not necessarily relevant to what can be heard when you're listening to music. This is the biggest flaw in the test protocols we're currently discussing. No matter how well-controlled they are, they are telling us very little about our actual listening experience.

My hoping for tests involving a broad range of listeners and tests that are relevant to real listening does not mean I don't understand what these tests are intended to do. I scanned the paper and BS1116, and I get it. I get that they are testing for the audibility of small differences. I get that the differences are so small that the tests demand listeners trained to hear those specific differences, and listening protocols aimed at finding them. This is exactly why I think these tests are irrelevant to the normal listening experience, and would like to see tests run under real listening conditions, with music lovers as well as trained listeners. Got it?

Tim
 
I meant what I said, John, no more, no less. Is establishing "the known limits" of what is audible important? I don't know, but we certainly haven't done that, or even attempted to do that, here. We have established what Amir (who has said that his hearing drops off rapidly above 12kHz) and a few others can hear. Given that there are, no doubt, some adolescent girls out there who can hear 18kHz, maybe even 20kHz, we are not establishing any known limits. We now have a bit of data that says Amir and a few others have been able to hear some undefined difference when they knew precisely what to listen for. Is that useful? Yes. It contradicts previous data indicating that no audible difference could be heard between hi-res and RB files. Does it have any bearing on the experience of listening to music? Maybe. But that has not only not been established, we haven't even added to the data set for that. And that's what I meant, John, no more, no less.

Science is hard.

Tim

If differences were obvious there would not be any need for testing in the first place.
 
Let's review, John. Here's what I said:
Yes, Tim, what you said was in direct reply to Kees who quoted the paper - the full post:

Originally Posted by Kees de Visser
Are you referring to Convention Paper 9174 by Meridian? Luckily they tested filter and wordlength/dither aspects separately. I hope they found a few well trained listeners. Looks interesting.
I hope they found a broad range of listeners. What can be heard when you know exactly what to listen for and where is not necessarily relevant to what can be heard when you're listening to music. This is the biggest flaw in the test protocols we're currently discussing. No matter how well-controlled they are, they are telling us very little about our actual listening experience.

Tim

You either don't understand the nature of testing or didn't read the paper's title "The Audibility of Typical Digital Audio Filters in a High-Fidelity Playback System"

My hoping for tests involving a broad range of listeners and tests that are relevant to real listening does not mean I don't understand what these tests are intended to do.
Really? So how would testing with "a broad range of listeners" & using music as the test signal be an advantage or of use in this particular test?
I scanned the paper and BS1116, and I get it. I get that they are testing for the audibility of small differences. I get that the differences are so small that the tests demand listeners trained to hear those specific differences, and listening protocols aimed at finding them. This is exactly why I think these tests are irrelevant to the normal listening experience, and would like to see tests run under real listening conditions, with music lovers as well as trained listeners. Got it?
You are mistakenly conflating two testing concepts - one to establish the limits of audibility & a completely different one to establish the importance of those limits to the "normal listening experience". The problem, Tim, is that you can't define this "normal listening experience" - there is a wide spectrum of conditions, focus, attentiveness, etc. over which normal listening takes place. You also seem to miss the bit in BS1116 about the necessity for statistical analysis to confirm or refute the perception of small differences. Statistics are easily made meaningless by not controlling the variables in the test, which is one of the main points in this thread - mix enough untrained listeners in with trained listeners & you will dilute the test results to a null result!!
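To make that dilution concrete, here is a rough sketch in Python - the hit rates are pure assumptions for illustration (trained listeners at 80% correct, untrained listeners guessing at 50%) - of how pooling a panel drags a perfectly real positive result down to a statistical null:

```python
from math import comb

def p_value(correct, trials):
    """One-sided binomial p-value: the chance of scoring at least
    `correct` out of `trials` by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def panel_p(n_trained, n_untrained, trials_each=16, p_trained=0.8):
    """Score a pooled panel at its expected hit rate: trained
    listeners at `p_trained`, untrained listeners guessing at 0.5."""
    total = (n_trained + n_untrained) * trials_each
    expected = round(n_trained * trials_each * p_trained
                     + n_untrained * trials_each * 0.5)
    return p_value(expected, total)

print(panel_p(4, 0))   # 4 trained listeners alone: p << 0.05, a clear positive
print(panel_p(4, 60))  # the same 4 among 60 guessers: p > 0.05, a "null"
```

The trained listeners' hits are still in the data - they are just swamped, which is exactly why BS1116 insists on screening and post-screening the panel.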
 
Tim,

We can easily agree on this. But fortunately high-end has not been waiting for scientific confirmation of existing stereo audio knowledge to implement excellent SOTA audio products that we can currently enjoy, while a few enjoy believing that it is all bias and illusion because it was not peer-reviewed.

Thank you for stating the obvious.
 
If differences were obvious there would not be any need for testing in the first place.

I think there might be some gap between obvious and audible to trained listeners listening to carefully chosen examples. I think that gap is where most of us enjoy our music, and it might be worth exploring.

I'm not, as John says, misunderstanding what these tests are meant to do, Jack. I'm simply wishing for tests of the audibility of hi-res by real listeners listening to real music; tests more relevant to our listening experience. While I understand that John can argue endlessly with almost anything, I'm not sure how I can clarify my simple comment any more.

Tim
 
Statistics are easily made meaningless by not controlling the variables in the test, which is one of the main points in this thread - mix enough untrained listeners in with trained listeners & you will dilute the test results to a null result!!

Knowledge and levels, John - control them, as any competently organised blind test does, and the listeners are solely evaluating sound. Perfectly valid and meaningful.
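On the levels point: matching is straightforward to do in software before any listening starts. A minimal sketch, assuming float sample arrays and plain RMS matching (real comparisons may also need time alignment, which this ignores):

```python
import numpy as np

def match_rms(reference: np.ndarray, candidate: np.ndarray) -> np.ndarray:
    """Scale `candidate` so its RMS level equals that of `reference`."""
    gain = np.sqrt(np.mean(reference ** 2) / np.mean(candidate ** 2))
    return candidate * gain

def level_difference_db(a: np.ndarray, b: np.ndarray) -> float:
    """RMS level difference between two clips, in dB."""
    return 10 * np.log10(np.mean(a ** 2) / np.mean(b ** 2))
```

After matching, `level_difference_db` should read ~0 dB, comfortably inside the ~0.1 dB tolerance usually quoted for ABX comparisons.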


As for your trained-listener red herring: need I point out again that none of us who differentiated Arny's and Ethan's files were trained - Vital, Adamdea, you, me, etc. - bar Amir, apparently.
 
I think there might be some gap between obvious and audible to trained listeners listening to carefully chosen examples. I think that gap is where most of us enjoy our music, and it might be worth exploring.

I'm not, as John says, misunderstanding what these tests are meant to do, Jack. I'm simply wishing for tests of the audibility of hi-res by real listeners listening to real music; tests more relevant to our listening experience. While I understand that John can argue endlessly with almost anything, I'm not sure how I can clarify my simple comment any more.

Tim

I guess eventually we'll get there but I think there's much, much more that needs to be studied along with what's being discussed here. What I'm about to say is only from my admittedly limited experience so please do not take it as any sort of statement of fact. It does make some assumptions that I can't back up.

When I got my first SACD player it was a run-of-the-mill Sony DVD player. Redbook layers vs SACD layers were night and day. The immediate thought was that SACDs were better by a wide margin. In other words, the format was superior. In hindsight, of course, any number of things could have made that so - the DAC chipset used, for instance, as I doubt that the little guy had two different DAC sections. It might have just been the filters for all I know. What I'm trying to say is that focusing on the file format and format alone was not very wise. It might have been superior only within the context of that particular player. Some years later, I babysat an AudioAero. The Redbook was superior to the SACD playback. I could have jumped to the conclusion that Redbook was superior. The fact that the player converted DSD to PCM before the analog stage might have even made me jump to the conclusion that PCM is better than DSD and that DSD to PCM just plain sucks. Being just a teeny bit more skeptical both ways, it once again got chalked up to RB being superior for that particular player only, as, I have already stated, there was simply not enough evidence or understanding thereof to state any general facts.

In this all-PCM discussion, at least for those of us who have actually done conversions from 24-bit to various shorter word lengths, we know that it is night and day between 2-bit and 8-bit, but around 14-bit and up things really do start to become difficult. From 16-bit up it gets really tough if you don't know what to listen for.
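For anyone who wants to try that conversion themselves, here's a rough sketch of the kind of word-length reduction I mean, assuming float samples in [-1, 1] and simple TPDF dither (real mastering tools add noise shaping on top of this):

```python
import numpy as np

def requantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Reduce samples in [-1, 1] to `bits` of word length, adding
    TPDF dither (sum of two uniforms, +/- 1 LSB peak) before rounding."""
    steps = 2.0 ** (bits - 1)
    dither = (np.random.uniform(-0.5, 0.5, x.shape)
              + np.random.uniform(-0.5, 0.5, x.shape))
    return np.round(x * steps + dither) / steps

# listen to (or just measure) the error left at each word length
fs = 44100
tone = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
for bits in (2, 8, 14, 16):
    err = requantize(tone, bits) - tone
    print(bits, "bits: error RMS", float(np.sqrt(np.mean(err ** 2))))
```

The error shrinks by a factor of two per added bit, which matches the experience above: enormous at 2-bit, hard to hear by 14-16 bits.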

I think the big question that you are asking, correct me if I'm wrong, is that if the difference is now small, why bother at all? That is a fair question. Some believe, as my little story above hints, that what matters is the format's inherent qualities and how they are handled by hardware. On the cheap end, it appears longer word lengths and the accompanying higher sampling rates place fewer demands on the converters - things like filters, for example. The huge advantages seem to shrink with higher-end consumer and professional hardware. Thing is, if this line of reasoning is correct, the beneficiaries actually aren't the more involved hobbyists but the broader market who use cost-efficient onboard audio. The barrier, after all, has never been sound quality. The barriers have been the cost of memory storage and the cost of high-speed connections for downloading. Both of these have been dropping thanks to competition to serve not even the audio market but the video and telephony markets. Processors even on phones have been up to the task for a long time.

In my mind, there really are no losers here, at least as far as the market goes, for the simple reason that never before have I seen this broad an array of available choices. We're talking about an era where MP3 download codes come free with the purchase of an LP. It's no longer a DVD-A vs SACD world where large consortiums dictate choice. All a guy has to do is choose what's right for HIM.
 
(...) Really? So how would testing with "a broad range of listeners" & using music as the test signal be an advantage or of use in this particular test? You are mistakenly conflating two testing concepts - one to establish the limits of audibility & a completely different one to establish the importance of those limits to the "normal listening experience". The problem, Tim, is that you can't define this "normal listening experience" - there is a wide spectrum of conditions, focus, attentiveness, etc. over which normal listening takes place. You also seem to miss the bit in BS1116 about the necessity for statistical analysis to confirm or refute the perception of small differences. Statistics are easily made meaningless by not controlling the variables in the test, which is one of the main points in this thread - mix enough untrained listeners in with trained listeners & you will dilute the test results to a null result!!

Good summary John. I have posted many times that the key word is statistics - without a clear understanding of statistical analysis it is not possible to debate these issues. IMHO it is why we are in a circular debate.
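To put a number on why: the core analysis for a single ABX run is just a one-sided binomial test, and the gap between a meaningful score and a chance one is stark. A quick sketch (the scores are purely illustrative):

```python
from math import comb

def abx_p(correct: int, trials: int) -> float:
    """Chance of guessing at least `correct` of `trials` (p = 0.5 each)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p(13, 16))  # ~0.011: very unlikely to be guessing
print(abx_p(9, 16))   # ~0.40: entirely consistent with guessing
```

Without that kind of analysis, a 9/16 run and a 13/16 run look like much the same "I heard it" anecdote.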
 
Good summary John. I have posted many times that the key word is statistics - without a clear understanding of statistical analysis it is not possible to debate these issues. IMHO it is why we are in a circular debate.

Yes, unfortunately, Tim & Max are two people who don't get it, as revealed by their posts where a lack of understanding of statistics leads to suggesting the design of incorrect experiments (look at Max's repeated recent post about knowledge & level). Experiments which invariably deliver null results. This is the major theme of this thread & if they don't get it at this stage, they probably never will.

Let me just quote another paper in the AES perception stream (just above the Meridian one) which states
In audio quality evaluation, ITU-R BS.1534-1, commonly known as MUSHRA, is widely used for the subjective assessment of intermediate audio quality. Studies have identified limitations of the MUSHRA methodology [1][2], which can influence the robustness to biases and errors introduced during the testing process. Therefore ITU-R BS.1534 was revised to reduce the potential for introduction of systematic errors and biases in the resulting data. These modifications improve the validity and the reliability of data collected with the MUSHRA method. The main changes affect the post screening of listeners, the inclusion of a mandatory mid-range anchor, the number and length of test items as well as statistical analysis. In this paper the changes and reasons for modification are given.
Convention Paper 9172

Does any of this get through or is it just willfully ignored, do you think?

Fine to have an opinion that this is all just too esoteric & there are more important things to worry about in audio (never mind in life) - things like mastering, like speakers, like room treatments, etc. But then why bother posting about what you consider inconsequential? Concentrate on what you consider the big stuff & let others focus on what they are interested in. Unless, of course, you fear the status quo null results being questioned?

I'm sure there is some analogy with F1 racing, where some cutting-edge technology trickles down to the man-in-the-street's road car. Does he care about F1 racing technology? No - but he benefits from its advances.
 
Knowledge and levels, John - control them, as any competently organised blind test does, and the listeners are solely evaluating sound. Perfectly valid and meaningful.


As for your trained-listener red herring: need I point out again that none of us who differentiated Arny's and Ethan's files were trained - Vital, Adamdea, you, me, etc. - bar Amir, apparently.

Max, or should I call you Garrett (we have posted to one another long enough now to be on first name terms), your refusal to read BS1116 is showing in posts like these
 