Conclusive "Proof" that higher resolution audio sounds different

jkeny

Industry Expert, Member Sponsor
Well, Tim, the first step is knowing what the accepted standard set of controls is for reliable tests - something that was not demonstrated on this thread & is not typically known among the audiophiles who run blind tests. So their ignorance often results in arrogance about the veracity of their results.

Never heard of them before. But I've never run a blind listening test for the purpose of proving anything to anyone but myself, so I've never needed to look into telecommunications ABX testing methodologies.
Yes, Tim, that's my point - you & Max argued that a couple of controls (blind & level matched) were the only controls needed & challenged me to name other biases/factors that affected the test, without having read the standards document that nominates the factors needing to be controlled. You argued with me that stress was not a factor. You basically never admitted any other controls were necessary & this is because you had never read BS1116 or were even aware of its existence. JJ's list you may or may not have been aware of - I don't believe you knew of it or who he was - but that list was ignored too. So, up to recently, you had not bothered to read the landmark documents in the field. Max even refused to read it & questioned the document, & you yourself did too - stating it was only a recommendation, etc.

Now, Tim, I'm afraid I give no credence to your position or arguments, as a result.


No need. I think we're about done here.


Or maybe not. You still seem to be trying to invalidate anything that doesn't include everything, even though there is no agreement on what constitutes everything. You're having a lot of trouble letting this one go, John.
You can't seem to understand that I'm not saying it's all or nothing, but I am saying that you need to know which remaining biases are of importance before you can justify ignoring their influence. But you seem not to want to deal with this logic. For instance, how much is stress (a new factor introduced by the test itself) influencing the results? Really, Tim, it's a simple & necessary question of logic.

I've never defined a stopping point, John, or identified a small number of controls. Nor have I said blind tests are all about bias removal. I don't think anyone has said that, actually. The blind part is, of course, about avoiding bias, but it indeed takes more than lack of knowledge to make a test.
Yes, you have in the past - if you now want to retract that, fine.

Perhaps you did, but that's not what you and I have been debating. What we've been debating is so simple, and I've stated it so many times at this point, that it's amazing you're still arguing sidebars. You stated, unapologetically, unambiguously...somewhere in the mid hundreds by post count here :)...that without all the controls - and I believe at that time you were referring to JJ's summary of BS1116 - an unsighted test was no better than no controls. You seem to have backed off of that hard line of unreasoning, thankfully...or maybe not.
It might be considered a hard line but it's logical & it forces you & others into considering the logic behind the factors involved in producing valid tests. I also gave a shortcut that could possibly avoid all these controls, which was to include positive & negative controls/anchors within the test. You see, it's all tied together, Tim. So yes, without due consideration & some reasonable attempt to control influencing factors in a blind test, it is the same as an anecdotal sighted listening report.

Except I can evaluate sighted listening reports better as there are usually more data points involved, especially when people report specific differences they have heard with specific tracks. I can usually ask further questions of those providing sighted listening reports & get reasonable answers. The binary yes/no style of blind tests usually means that there is nothing more to know about the test.

Product development, marketing, pharma, and many other fields make extensive use of controls in blind studies. I'm sure Olive and Toole used controls in their studies at Canada's National Research Council and at Harman. Are you talking about hobbyists self-testing and reporting back on Internet forums? We can agree that those "tests," including the ones that started this thread, are interesting, but anecdotal.

Tim

Tim, I'm not talking about professionally run tests that stretch over a number of weeks, as you well know!
 

jkeny

Industry Expert, Member Sponsor
John, esldude and Tim are right. You're digging yourself deeper and deeper here. The fact is, as you've been told many, many times now by me and others, that no form of sighted listening can ever be as good as any type of blind testing if identifying potential audible differences is the goal.

Also, removing knowledge and level matching, as you've also been told many times now by me and others, removes the most important biases, and just comparing sighted observations to those made with knowledge removed is often enough to demonstrate that differences reported sighted were imagined.

I suggest, John, that you consider giving up desperately arguing to the contrary while you still have some credibility.

I hope this post is taken in the spirit in which it was intended.

But this is my point - you & others are arguing that any kind of blind test is better than any sighted test (false positives).
Well, I disagree - I believe that bad blind tests (by giving false negatives) are just as erroneous as sighted tests - no better, no worse, just wrong.

So the discussion is about what makes a bad blind test. You & Tim didn't even know what controls were necessary for a good blind test, so how can you argue about what is good or bad when you are ignorant of the necessary controls? I'm afraid your argument up to this point lacks credibility & logic & is simply playing to the crowd. To me this is one of the roots of the problem with blind testing in this hobby - it's full of this attitude that a bad blind test is better than a sighted test - because this is then used by you, Max, to make statements like "all DACs are the same" & any attempt to say otherwise is just foo being peddled by commercial interests. A line you have tested on this thread, but when rebuffed you have not tried it again. It's this sort of attitude to badly run blind listening, & the false evidence that results from it, that is a great disservice to the hobby.

Tim now wants to eliminate some of these (newly discovered) controls & my simple question is - which ones & why? I ask you the same question.

With sighted tests, I can deal with false positives because I have information from the test - tracks used, differences heard, etc. - which I can then test myself & evaluate against what was reported.
With false negatives, I really don't have anything to evaluate, except the lack of controls.

So, we are talking about pragmatism here as it relates to this hobby, not commercially organised blind testing.
 

maxflinn

New Member
John, your typical forum-run get-together, where a few guys listen to various kit level-matched and sighted, and then level-matched and blind, is usually good enough to demonstrate to those involved the effects of expectation bias, and thus to put into perspective the differences perceived sighted and claimed by the audio press. For example, many of the guys who took part in Vital's tests no longer feel the need to buy expensive DACs in order to improve their listening experience, yet the audio press claim night-and-day differences between the DACs that Vital tested. Just matching levels and removing knowledge exposed these reviews as questionable at best, if not contrived.
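
As a minimal sketch of what "level matched" means in practice (the names and the example offset below are illustrative, not Vital's documented procedure), the gain needed to match two devices is computed from their RMS levels:

import numpy as np

def rms_dbfs(x):
    # RMS level of a float signal (full scale = 1.0), in dBFS
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

def matching_gain_db(reference, device_under_test):
    # gain in dB to apply to the DUT so its RMS level matches the reference;
    # listening-test practice is usually to hold the match within ~0.1 dB
    return rms_dbfs(reference) - rms_dbfs(device_under_test)

# example: a 1 kHz tone vs. the same tone 1.5 dB hotter
fs = 44100
t = np.arange(fs) / fs
ref = 0.1 * np.sin(2 * np.pi * 1000 * t)
dut = ref * 10 ** (1.5 / 20)
print(matching_gain_db(ref, dut))   # ~ -1.5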

Nothing is ever proved, and sure, such tests aren't the absolute best way to allow differentiation between very, very similar-sounding products/files etc. Following procedures such as those outlined by Arny, JJ etc. is better if you're looking to identify these minute and likely irrelevant differences (should they even exist), but this does not render all other blind tests worthless, and it certainly doesn't put them on a par with sighted listening in terms of their usefulness to the participants.
 

jkeny

Industry Expert, Member Sponsor
John, your typical forum-run get-together, where a few guys listen to various kit level-matched and sighted, and then level-matched and blind, is usually good enough to demonstrate to those involved the effects of expectation bias, and thus to put into perspective the differences perceived sighted and claimed by the audio press. For example, many of the guys who took part in Vital's tests no longer feel the need to buy expensive DACs in order to improve their listening experience, yet the audio press claim night-and-day differences between the DACs that Vital tested. Just matching levels and removing knowledge exposed these reviews as questionable at best, if not contrived.
Yes, Max, a badly run blind listening session shows that expectation bias can lead you to false positives. What it didn't show was that negative expectation bias can lead you to false negatives, as the organiser of the session later stated after he persevered with ABX testing on Winer's loopback files.

So I ask you again - why did this test not reveal this negative bias? What controls do you want to drop from the list of standard controls & why?
 

arnyk

New Member
I'm open to any result my ears hear.

That statement is typical of a common self-deceit among subjectivists. It is basically: my ears uber alles, technology and science be $amned.

If a BS-1116 compliant system demonstrates superior performance to my ears then that would be good.

Why should all of science and technology be slaves to one man's ears and why should that one man be you?

please point one of those critters out to us here to listen to and ponder about.

The BS 1116 recommendation does that; it is online and you can read it for free: https://www.itu.int/rec/R-REC-BS.1116/en

I'll save you the pain of reading ideas that may be too difficult for dyed-in-the-wool subjectivists, such as: "The double blind ... method has been found to be especially sensitive, stable and to permit accurate detection of small impairments." and suggest that you page forward to section 7: "Reproduction Devices".

and it's not so much that science is not important,

Except that science has this nasty tendency to mock overblown egos...


it's that the point of putting together a high performance system is to listen and enjoy, not to prove anything.

A statement that is belied by listening to high end subjectivist audiophiles brag about their massive equipment expenditures and esoteric choices.
 

Phelonious Ponk

New Member
Now, Tim, I'm afraid I give no credence to your position or arguments, as a result.

Devastating, given that we held each other in such high regard before I dared to disagree.

You can't seem to understand that I'm not saying it's all or nothing

What I understand, John, is that you're now qualifying a position that you took very directly, without qualification for many pages. I'd take that, frankly, if I thought you would stick to it, but I doubt you will.

Yes, you have in the past - if you now want to retract that, fine.

I have said that removing knowledge is all about removing bias, John, not that the test itself is about removing bias. Is the objective of the test to remove bias? Is the number of biases removed the result? You thought this was what I was saying, and you're only now pointing out that I don't even understand that the objective of listening tests is to identify and evaluate what is being listened to? You wouldn't have let me be that stupid for this long, John.

It might be considered a hard line but it's logical & it forces you & others into considering the logic behind the factors involved in producing valid tests.

So you did make the all or nothing argument, but you only made it as an educational tool? John, I'm touched that you would sacrifice your own credibility for my enlightenment. How selfless. Are you doing that again now? :)

Except I can evaluate sighted listening reports better as there are usually more data points involved, especially when people report specific differences they have heard with specific tracks. I can usually ask further questions of those providing sighted listening reports & get reasonable answers. The binary yes/no style of blind tests usually means that there is nothing more to know about the test.

John, I'm going to give you the benefit of the doubt again, and assume you inadvertently failed to add "these" to the last sentence quoted above, rather than assume that you have doggedly argued your small point for days without even understanding the scope of blind listening tests.

Tim, I'm not talking about professionally run tests that stretch over a number of weeks, as you well know!

Are you talking (sometimes...for my educational purposes) about requiring documents full of recommendations for controls, conditions and methods to increase the efficacy of ABX testing held by hobbyists on online forums, where a sample large enough to be statistically significant is never going to be reached anyway? No? Then what kind of ABX testing are you talking about?
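
For what it's worth, the statistical-significance point is easy to make concrete: an ABX score is normally judged against the binomial null hypothesis of guessing (p = 0.5). A minimal sketch, with illustrative trial counts:

from math import comb

def abx_p_value(correct, trials):
    # one-tailed probability of scoring >= `correct` out of `trials` by guessing
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(7, 8))   # ~0.035: just significant at the usual 0.05 level
print(abx_p_value(4, 5))   # ~0.19: proves nothing either way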

Tim
 

Orb

New Member
On the AVS forum I gave a relevant technical answer that explained the failings of the graph above, and here I see it again being waved in my face like I never said anything. In the face of such disrespect, I feel no need to explain the failings of the analysis once again.

For the benefit of people who are unfamiliar with FFTs the problem involves comparing the levels of a broadband incoherent signal (music) with test tones. There is a right way to do it, and Amir does not appear to know how to do so but refuses to allow more knowledgeable people to instruct him, and instead just repeats the same errors again and again and again and again...
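
To illustrate the pitfall being described - a minimal sketch, not Arny's actual analysis - a tone's energy lands in a single FFT bin, while broadband energy is spread across thousands of bins, so the per-bin levels of the two are not directly comparable and shift with FFT length:

import numpy as np

fs, n = 96000, 65536
t = np.arange(n) / fs
tone = 0.5 * np.sin(2 * np.pi * 1500 * t)    # sine at -6 dBFS peak (bin-centred)
rng = np.random.default_rng(0)
noise = rng.standard_normal(n)
noise *= 0.5 / np.abs(noise).max()           # broadband noise, same -6 dBFS peak

def peak_bin_db(x):
    # Hann-windowed FFT, normalised so a full-scale sine reads 0 dB in its bin
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) / (len(x) / 4)
    return 20 * np.log10(spec.max())

print(peak_bin_db(tone))    # ~ -6 dB: all the tone's energy in one bin
print(peak_bin_db(noise))   # tens of dB lower, despite the identical peak level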

But Arny,
as JA has also clearly pointed out, you boosted the keys to a very false dBFS level.
This, IMO, in essence causes the test to be flawed, because ABX testing is meant to be done with the gear used within its limits.
[SEE EDIT] However, by deliberately boosting signals by another 10dB, to close to 0dBFS or so, you are potentially pushing the equipment - and indirectly the ABX test - into unpredictable behaviour, meaning any results become unpredictable, and critically, the electronics are guaranteed to be pushed into overload/clipping when test subjects are told to, or allowed to, turn up the volume without it being measured.
The last sentence is more in the context of the test tones provided at the end of the keys to show whether electronics suffer IMD; this is a skewed test for the reasons mentioned, and IMD was only heard by those who turned up the volume (in reality causing the electronics to clip/overload).

Anyway, JA showed that by reducing the IM tones down to just below -10dBFS for the one product that had problems with near-0dBFS tones, the issue was clearly resolved, and jangling keys WILL NEVER have content stronger than -10dBFS if left at correct levels and well recorded. Worth noting that the keys were below the level that caused IMD with the test tones, although ideally the key recording should still have been at a level lower than -10dBFS at its ultrasonic peak.
How do I know?
Because another forum member decided to record his jangling keys and posted it in this thread, and I then provided two external sources of recorded keys, both using excellent hi-res-capable mics, and the pattern was pretty clear that the ultrasonic strength was not anywhere near 0dBFS, while both recordings also behaved comparably.
These two sources I linked much earlier are well known: one for its understanding and capability in setting up and doing high-resolution recording of sounds and instruments (a university lab), the other a physicist specialising in sound and music who was principal scientist at Lexicon.
Anyway, why did listeners historically, over the years, not hear the IMD (it should have been a cue if it was as bad as you say) if electronics were compromised by ultrasonics, yet still fail this ABX of the full hi-rez keys compared to the brickwalled version?
Thanks
Orb

Edit:
Added further clarification pointing out the level difference between the test tones and the keys.
Actually, looking back, it seems the jangling keys are well below even -10dBFS going by the chart provided by Amir; there is no way they can cause IMD if their peak ultrasonic signal is -50dBFS, unless the listener pushes the loudness to extreme levels - and this can only be done IF the sub-20kHz content is removed, as it would otherwise be incredibly loud.
With the sub-20kHz content removed it could be possible to cause clipping, as a listener could turn the volume to max.
However, the additionally provided ultrasonic test tones at the end of the jangling keys would be an issue, as I mentioned above, and are more likely to cause false results/conclusions; they have nothing to do with real-world music, even considering the many harmonics involved (this conclusion was also reached by a chief scientist at Lexicon whom I referenced much earlier in the thread, btw).

Sorry, but can we have it clarified what the peak dBFS is for the various jangling keys files used, including the ones with sub-20kHz content removed? I've done a new post explaining my confusion about the levels, as there were the jangling keys, the test tones, and the sub-20kHz-removed version, I think.
Thanks
 

maxflinn

New Member
Yes, Max, a badly run blind listening session shows that expectation bias can lead you to false positives.

Who said anything about 'badly run'?

Sorry, John, but I'm not playing your game. You've just disregarded the points I felt I made well and are now asking me questions in an attempt to deflect from these points. Your every reply and response ends in a deflective question mark.

John, there are some very knowledgeable people posting on this forum, far more knowledgeable than I, and possibly you too, and I believe that you are testing the patience of many of them with your dogged refusal to logically explain the points of view that you put forward as the be-all and end-all.

I'll leave you to debate with those who find debate with you fruitful, I simply find it frustrating, with respect.
 

Phelonious Ponk

New Member
Jkeny -

Except I can evaluate sighted listening reports better as there are usually more data points involved, especially when people report specific differences they have heard with specific tracks. I can usually ask further questions of those providing sighted listening reports & get reasonable answers. The binary yes/no style of blind tests usually means that there is nothing more to know about the test.

As an aside, this sounds an awful lot like a focus group. Research professionals, and those of us who have hired them and made use of their work, will recognize this as qualitative research, a completely different animal from quantitative research. They barely belong in the same conversation, and when they do, the focus group is typically being used only to define the issues, to help focus the questions to be addressed in the quantitative research that will hopefully deliver statistically sound, actionable data. You will see marketing and business people who try to shortcut this and take focus-group results as actionable data, over the warnings and objections of the research professionals - usually when they can manage to conclude from the focus-group results what the client or the boss wants to hear.

Tim
 

jkeny

Industry Expert, Member Sponsor
I've asked Tim & Max which controls from the standard lists - those documented in BS1116, JJ's list or ArnyK's list - they feel should be dropped, & their reasons why. I await their answers.

Instead of arguing about sighted vs blind, why not examine what blind tests will/won't reveal? I'm arguing for better blind tests & it's being resisted - why?

I can bet that if blind tests were returning positive results, much focus would be put on this very aspect - the controls. This is evidenced by the reaction to Amir's positive results: forensic investigation of equipment IMD, resamplers, dither, proctoring, gaming, tells, etc. Is there ever any such examination of a null test report? Why is there such a great reluctance to do the same for negative results? I propose it's because they feed the expectations of those who push such tests.

How does anyone know what weighting to attach to the influence of such negative expectation bias in these blind test results? What level of bias, & how common is it in these tests? Why has this not been examined? It's the first thing I would be interested in answering if I wanted to get at the truth & not just some biased version of it.

Once I start seeing blind listening reports that have positive & negative controls embedded in them, I will then begin to believe that people are indeed interested in ensuring that their test is capable of evaluating what they claim it is capable of, i.e. small differences.
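
To be concrete about what embedding such controls could look like - a hypothetical sketch, not a documented procedure, with illustrative trial counts - the trial list is simply salted with hidden null trials (A and B identical, so listeners should score at chance) and hidden anchor trials (a known-audible difference such as a 1dB level offset, so listeners should score near 100%):

import random

def make_trial_list(n_real=12, n_null=4, n_anchor=4, seed=1):
    # 'real'   : the comparison actually under test
    # 'null'   : negative control - A and B are the same file (expect ~50% correct)
    # 'anchor' : positive control - a known-audible difference (expect ~100% correct)
    trials = ['real'] * n_real + ['null'] * n_null + ['anchor'] * n_anchor
    random.Random(seed).shuffle(trials)
    return trials

print(make_trial_list())

If the anchors are missed, the session was not sensitive enough for a null result on the real trials to mean much; if the null trials are "heard", something other than the sound is leaking through.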

So far all we have seen is a lot of hand-waving about this, & arguing about these controls, & arguing about sighted listening - most of which is a deflection from the primary point: demonstrate/show/prove that the tests are capable of doing what you claim they are doing - differentiating small differences. So come on guys, the solution is in your hands - prove that you really are interested in getting at the truth of the matter.
 

Orb

New Member
Just a reminder of why using 0dBFS tones needs to be put into context, and why measuring is critical rather than just relying on a subjective "listen to these provided tones to prove whether your electronics suffer ultrasonic distortion"; the key point is knowing whether the product is starting to clip/overload, or whether the design implementation of its digital filter affects stopband rejection and alias effects (look at NOS DACs and their IMD).
Furthermore, note that these signals are above the level of the jangling keys; critically, it should be noted that the test tones included at the end of the jangling keys file are not at its level but boosted to 0dBFS (which brings us back to JA's point and context).
Thanks
Orb
The Meridian Prime has an AC power supply. I next measured a Meridian Explorer D/A headphone amplifier ($299), which is powered from the 5V USB bus.

[Attachment 16663: spectrum of the Meridian Explorer with the 39+41kHz signal at 0dBFS]
You can see that with the 39+41kHz signal at 0dBFS (above), there are audio-band products visible as high as -50dBFS. However, the oscilloscope reveals that the amplifier is starting to clip with this maximum-level signal. Reducing the level to -10dBFS (below), which is still above the level of the jangling keys in Arny Krueger's file, results in any audio-band products dropping to below -100dB and the higher-order products above the audio-band disappearing.

[Attachment 16664: the same measurement with the signal reduced to -10dBFS]
I repeated these tests with a bus-powered AudioQuest Dragonfly ($149), with very similar results to the Meridian Explorer. So, given that musical signals never have ultrasonic content at anything close to 0dBFS, I think it appropriate, other than with pathologically poor-performing products, to rule out added intermodulation distortion as being the reason people can detect differences between 44.1kHz and 96kHz-sampled versions of the same music.

John Atkinson
Editor, Stereophile
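
To make JA's numbers concrete - a minimal sketch with illustrative parameters, not his actual test signal - an equal-amplitude 39+41kHz twin tone is inaudible in itself, but second-order intermodulation in overloaded electronics produces a difference product at 41 - 39 = 2kHz, squarely in the audio band:

import numpy as np

def twin_tone(f1=39000, f2=41000, level_dbfs=-10.0, fs=192000, seconds=1.0):
    # equal-amplitude ultrasonic two-tone IMD test signal, scaled so the summed
    # waveform peaks at level_dbfs; second-order IMD in a nonlinear device shows
    # up at f2 - f1 (2 kHz) and at f1 + f2 (80 kHz)
    t = np.arange(int(fs * seconds)) / fs
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    return x * 10 ** (level_dbfs / 20) / np.abs(x).max()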
 

jkeny

Industry Expert, Member Sponsor
Jkeny -
As an aside, this sounds an awful lot like a focus group. Research professionals, and those of us who have hired them and made use of their work, will recognize this as qualitative research, a completely different animal from quantitative research. They barely belong in the same conversation, and when they do, the focus group is typically being used only to define the issues, to help focus the questions to be addressed in the quantitative research that will hopefully deliver statistically sound, actionable data. You will see marketing and business people who try to shortcut this and take focus-group results as actionable data, over the warnings and objections of the research professionals - usually when they can manage to conclude from the focus-group results what the client or the boss wants to hear.

Tim
Tim, I was asked by esldude to give an example of when a sighted test was better than a blind test. I gave my answer. He didn't seem to like it, calling it BS.
 

Phelonious Ponk

New Member
I've asked Tim & Max which controls from the standard lists - those documented in BS1116, JJ's list or ArnyK's list - they feel should be dropped, & their reasons why. I await their answers.

Instead of arguing about sighted vs blind, why not examine what blind tests will/won't reveal? I'm arguing for better blind tests & it's being resisted - why?

I can bet that if blind tests were returning positive results, much focus would be put on this very aspect - the controls. This is evidenced by the reaction to Amir's positive results: forensic investigation of equipment IMD, resamplers, dither, proctoring, gaming, tells, etc. Is there ever any such examination of a null test report? Why is there such a great reluctance to do the same for negative results? I propose it's because they feed the expectations of those who push such tests.

How does anyone know what weighting to attach to the influence of such negative expectation bias in these blind test results? What level of bias, & how common is it in these tests? Why has this not been examined? It's the first thing I would be interested in answering if I wanted to get at the truth & not just some biased version of it.

Once I start seeing blind listening reports that have positive & negative controls embedded in them, I will then begin to believe that people are indeed interested in ensuring that their test is capable of evaluating what they claim it is capable of, i.e. small differences.

So far all we have seen is a lot of hand-waving about this, & arguing about these controls, & arguing about sighted listening - most of which is a deflection from the primary point: demonstrate/show/prove that the tests are capable of doing what you claim they are doing - differentiating small differences. So come on guys, the solution is in your hands - prove that you really are interested in getting at the truth of the matter.

My apologies in advance for addressing the poster instead of the post, but there is no other way to address this nonsense:

The answer to your diversionary question above is "it depends." An appropriately diversionary answer, but accurate enough. You're arguing for better blind tests? Which argument was that, John? Was it the one in which you said that without JJ's full set of controls blind listening was no better than sighted listening, or the one in which you allegedly asked Max and me to pick what we think is important from the BS1116 set of controls? By the way, I have no intention of going down that road because A) it does depend, and B) it would only perpetuate this argument.

You're all over the place, John, contradicting yourself, denying yourself, at one point actually attributing your own statement to me in a surreal case of arguing with yourself, using me as a proxy. You have claimed that arguments you emphatically made for days and pages on end were only meant to educate. You have inaccurately defined the scope of blind testing, either to suit your position or because you really don't understand what it is; neither is a good choice.

You are sadly and absolutely devoid of credibility, but keep digging.

Tim
 

Phelonious Ponk

New Member
Tim, I was asked by esldude to give an example of when a sighted test was better than a blind test. I gave my answer. He didn't seem to like it, calling it BS.

And you gave an answer that demonstrates a fundamental misunderstanding of research. You shouldn't like it either.

Tim
 

jkeny

Industry Expert, Member Sponsor
Who said anything about 'badly run'?

Sorry, John, but I'm not playing your game. You've just disregarded the points I felt I made well and are now asking me questions in an attempt to deflect from these points. Your every reply and response ends in a deflective question mark.

John, there are some very knowledgeable people posting on this forum, far more knowledgeable than I, and possibly you too, and I believe that you are testing the patience of many of them with your dogged refusal to logically explain the points of view that you put forward as the be-all and end-all.

I'll leave you to debate with those who find debate with you fruitful, I simply find it frustrating, with respect.

I'm sorry, Max, you wanted the long version of my reply when I thought succinctness would suffice?

John, your typical forum-run get-together, where a few guys listen to various kit level-matched and sighted, and then level-matched and blind, is usually good enough to demonstrate to those involved the effects of expectation bias, and thus to put into perspective the differences perceived sighted and claimed by the audio press.
Yes, good enough to show the influence of sightedness in returning false positive results. So now, what is the possibility of the blind test returning false negative results? Do you know or care? Have you evaluated this influence? Have you evaluated any other factors that might influence the results either positively or negatively?
For example, many of the guys who took part in Vital's tests no longer feel the need to buy expensive DACs in order to improve their listening experience, yet the audio press claim night-and-day differences between the DACs that Vital tested. Just matching levels and removing knowledge exposed these reviews as questionable at best, if not contrived.
And are these guys doing this on the basis of unqualified results?

Nothing is ever proved, and sure, such tests aren't the absolute best way to allow differentiation between very, very similar-sounding products/files etc. Following procedures such as those outlined by Arny, JJ etc. is better if you're looking to identify these minute and likely irrelevant differences (should they even exist), but this does not render all other blind tests worthless, and it certainly doesn't put them on a par with sighted listening in terms of their usefulness to the participants.
You accept that the tests you are proposing "aren't the absolute best". So, OK, tell me which influencing factors you wish to ignore & why. Tell me the reliability of the results so I can make a judgement about how far I should believe them.
 

Orb

New Member
Just a heads up: I added an edit to my previous post, as poor me was getting confused about what the signal levels were for the various jangling keys - sorry, and doh :)
In summary, it looks like the jangling keys are at -64dBFS, which is over 58dB quieter than required to push the products JA tested earlier into clipping and producing IM.
The test tones that were added to the file, for listeners to test whether their electronics produce ultrasonic IMD, are flawed, because a listener would nearly always clip/overload their electronics; furthermore, using these at 0dBFS has nothing to do with being comparable to real-world music, even considering how many harmonics/partials exist for instruments, music and sounds. The 19+20kHz 0dBFS international standard tone test was originally created to stress-test electronics and has nothing to do with real-world behaviour or content, apart from showing tolerances, engineering and linearity.
There are no standards extending this test to the ultrasonic range; as mentioned and shown before, there is no reason for one to exist in the context of audio/visual broadcasting.

Cheers
Orb
 

jkeny

Industry Expert, Member Sponsor
My apologies in advance for addressing the poster instead of the post, but there is no other way to address this nonsense:

The answer to your diversionary question above is "it depends." An appropriately diversionary answer, but accurate enough.
OK, then no answer it is - unless you want to give one "it depends" example?
You're arguing for better blind tests? Which argument was that, John? Was it the one in which you said that without JJ's full set of controls blind listening was no better than sighted listening, or the one in which you allegedly asked Max and me to pick what we think is important from the BS1116 set of controls? By the way, I have no intention of going down that road because A) it does depend, and B) it would only perpetuate this argument.
Ah, I see you've answered my question above - so it's a NO then. You prefer to ignore any evaluation of the reliability of the results of blind tests. Are we just supposed to accept these results as unqualified truth because you are not bothered?

I already gave a get-out - simply include positive & negative controls in such blind tests that can be used as a form of self-evaluation of the test. Why the continual arguing when this solution is possible?

I think, for all concerned, this should now be dropped, as we have all aired our views & it's getting nowhere & getting personal.
 

arnyk

New Member
But Arny,
as JA has also clearly pointed out, you boosted the keys to a very false dBFS level.

He made a claim, and I explained why it was incorrect. I presume this post means that, in the view of many around here, as soon as JA says something it is inviolate truth as far as this forum is concerned.

Why should I bother to reply?
 

arnyk

New Member
Just a heads up: I added an edit to my previous post, as poor me was getting confused about what the signal levels were for the various jangling keys - sorry, and doh :)
In summary, it looks like the jangling keys are at -64dBFS, which is over 58dB quieter than required to push the products JA tested earlier into clipping and producing IM.

False.

The peak value of the keys-jangling segment of the file is about -2dBFS (left) and -1dBFS (right).

[Attachment: keys jangling statistics - peak-level readout]

In order to be reproduced without distortion, the peaks of the file must be reproduced cleanly. The peak value of the test signal is -1dBFS, which is the same as the louder of the two channels.
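
Part of the disagreement over levels may simply be two different measurements: the peak of the whole file (dominated by the audible content) versus the peak of the ultrasonic band alone. A minimal sketch of both, assuming NumPy/SciPy and a float sample array (the names are illustrative):

import numpy as np
from scipy.signal import butter, sosfilt

def peak_dbfs(x):
    # peak sample level in dBFS (full scale = 1.0)
    return 20 * np.log10(np.abs(x).max())

def ultrasonic_peak_dbfs(x, fs):
    # peak level of the content above 20 kHz only (8th-order high-pass)
    sos = butter(8, 20000, btype='highpass', fs=fs, output='sos')
    return peak_dbfs(sosfilt(sos, x))

# peak_dbfs(keys) gives the whole-file figure (about -1/-2 dBFS here);
# ultrasonic_peak_dbfs(keys, 96000) measures only what lies above 20 kHz.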
 

maxflinn

New Member
Yes, good enough to show the influence of sightedness in returning false positive results.

Exactly - good enough to remove all non-audible stimuli so that it is only the sound being evaluated. This routinely demonstrates that the sighted perceptions were influenced by bias: unsighted, the differences reported usually vanish (DACs, cables etc.) - the controls have done their job, and it is these controls that make such testing several orders of magnitude better/more reliable/more valid than sighted testing can ever be!

You accept that the tests you are proposing "aren't the absolute best".
John, I said - aren't the absolute best way to allow differentiation between very, very similar-sounding products/files etc.

Following procedures such as those outlined by Arny, JJ etc. is better if you're looking to identify these minute and likely irrelevant differences (should they even exist).

Having said that, several of us did not require mirroring such guidelines, prior training or expert-listener status to differentiate Ethan's or Arny's files using ABX when, during normal sighted listening, none of us could differentiate them, so small were the differences.

So I'm not really sure just how small (infinitesimally?) differences would need to be before requiring prior training and the strictest protocols in order to unearth them, or what point there would be in doing so; however, I certainly don't see anything wrong with what could be termed "best practice", in principle.
 
