Conclusive "Proof" that higher resolution audio sounds different

Intermodulation distortion measurements back in the 1990s.
OK, first, thanks to Stereophile for keeping such an extensive database of their reviews with measurements.
All this talk about IM being cues in the past made me decide to have a look at the main CD products from the likes of Rotel/Philips/Panasonic/etc.
And interestingly, all of them had low IM for the 19kHz+20kHz test signal. Before anyone complains that these are not ultrasonic tones, please remember that a little while ago JA showed with actual measurements that even in cheap and small products, ultrasonic IM behaviour is pretty comparable to the 19+20kHz result and negligible for IM in the audio band when measured with "normal level" tones rather than 0 to -6dBFS ultrasonic signals (which would be silly to use in perception tests on real-world products).
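As an aside, the twin-tone test itself is easy to play with numerically. A minimal sketch (the quadratic "device" below is a hypothetical model for illustration, not any product measured here) that generates the 19+20kHz stimulus and reads off the 1kHz difference product:

```python
import numpy as np

def im_difference_db(device, fs=96000, n=96000):
    """Level of the 1 kHz (f2 - f1) intermodulation product relative to
    one stimulus tone, for a 19 kHz + 20 kHz twin-tone near full scale."""
    t = np.arange(n) / fs
    # Each tone at -6 dBFS (amplitude 0.5) so the summed peak approaches 0 dBFS.
    x = 0.5 * np.sin(2 * np.pi * 19000 * t) + 0.5 * np.sin(2 * np.pi * 20000 * t)
    spec = np.abs(np.fft.rfft(device(x) * np.hanning(n)))
    # With n == fs, each FFT bin is exactly 1 Hz wide, so tones sit on exact bins.
    return 20 * np.log10(spec[1000] / spec[19000])

# Hypothetical mildly nonlinear device: a 1% second-order term.
print(round(im_difference_db(lambda x: x + 0.01 * x ** 2)))  # -46 (dB)
```

The second-order term is what produces the f2-f1 = 1kHz difference tone; a purely odd (cubic) nonlinearity would instead show up at 2f1-f2 = 18kHz and 2f2-f1 = 21kHz.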

Some examples (let's not go down the boutique road):
http://www.stereophile.com/content/panasonic-prism-lx-1000-cdld-player-measurements
http://www.stereophile.com/content/philips-cdr880-cd-rrw-cd-recorder-measurements
http://www.stereophile.com/content/rotel-rcd-955ax-and-rcd-965bx-cd-players-rcd-965bx-measurements

Yeah they do have other measurement notables though :)
Edit:
And here is a Creek Audio integrated that was nothing special on the internationally recommended 19+20kHz IM test:
http://www.stereophile.com/content/creek-4240-special-edition-integrated-amplifier-measurements
Even just below clipping into 4 ohms, the IM products are -66dB below the 0dB test tones.
Reduce the power a bit further from clipping and the IM result is much cleaner.

True, a lot of stuff works well below 22 KHz, and that's good enough for a CD player. However, I have personally been surprised by how much equipment falls on its face between 22 and 44 KHz, and JA's measurements show much the same.
 
You are not evolving your test with the new controls that arise, Tim, you are wilfully ignoring existing known controls that are there to try to ensure that the result is sensitive & reliable & not skewed.

Thanks for telling me what I'm doing and thinking again, John. It's always so refreshing, and it contributes so much to the discussion. Let's review...

if you just want to say that removing some biases appears to be more useful than removing no biases but gets us nowhere as far as trusting the result...

Here's the problem with your reasoning, John -- Everybody is just "removing some biases."

JJ's controls, the controls outlined in BS1116, the controls used in Meyer and Moran... none of them are perfect, or even complete. They will continue to be updated, questioned, revised, doubted... they will continue to evolve. And with any luck, in all of that, they will remove more biases. So by your logic that "removing some biases... gets us nowhere as far as trusting a result," you can trust nothing, John.

One more time, with feeling...

if you just want to say that removing some biases appears to be more useful than removing no biases but gets us nowhere as far as trusting the result...

Everybody is just "removing some biases," John.

Are more controls better? Depends, but yes, I hope so. Am I "willfully ignoring them?" No. But you are trying to establish that any test that does not contain x is useless. What is x, John? BS1116? JJ? Which one has achieved perfection? Where is the complete list of all possible biases and the controls to mitigate them?

Everybody is just "removing some biases," John. And it's either all pointless, or the testing gets stronger and the results more reliable as more biases are removed. You can't claim an absolute outcome -- this is valid, this is not -- without an absolute standard. You don't have one, and neither do I. The difference is I'm ok with that. I'm ok with understanding that better methodology probably leads to more reliable results. I'm not willfully ignoring anything, John, but I'm ok with the ambiguity (science is hard, huh?), because I'm not looking for a hard line beyond which I can declare everything no better than what I think I hear with all my biases firmly intact.

Tim
 
<snip>
Here's a news flash that includes a truth about life that I don't see a lot of understanding of: Nothing in this world is perfect. But, some things are far better than others, and a well run DBT is far better than a sighted evaluation.

lol
 
I'm still waiting for an explanation as to how long-term sighted comparisons can be more valid than blind-testing in terms of identifying audible differences.
 
I'm still waiting for an explanation as to how long-term sighted comparisons can be more valid than blind-testing in terms of identifying audible differences.

Ain't gonna happen, because blind tests can be long-term if that is an advantage. Sighted evaluations involving small differences are so susceptible to false positives that they must be dismissed by every thinking person.

The benefit of long term listening comes when it is used to identify usually brief critical passages that make the difference most audible.

If you have proper listener training samples (such as are in the package of files I mentioned above) and allow people to search through a reasonably short sample to find critical passages (which is facilitated by Foobar2000), the results are going to be as sensitive as they are going to get with the given listener.
 
True, a lot of stuff works well below 22 KHz, and that's good enough for a CD player. However, I have personally been surprised by how much equipment falls on its face between 22 and 44 KHz, and JA's measurements show much the same.
Maybe we read JA's measurements differently, but his conclusion was that most equipment does not fall on its face between 22 and 44kHz regarding IMD; and a fair few of these products were both digital and headphone amp/line output, so we have digital and analogue performance.

I am going by the measurements he provided in this thread not long ago, the caveat being that one device did have problems when fed IM tones stronger than -6dBFS, and that is way beyond the normal signals presented to real-world products unless one wants to deliberately push an amp into clipping/massive distortion at any reasonable wattage.
His measurements showed that the 19+20kHz test in reality is enough to show whether an electronic device can handle ultrasonics. Ideally, I appreciate, more than 5 mainstream products need to be tested (albeit those recently chosen had design constraints making it even tougher), but it is a very strong indicator considering how the 19+20 results compare across products and how comparably the 30+kHz tones performed. And with that in mind I provided a Leak integrated from the 1990s with average IMD results at 19+20kHz that still shows it is more than linear enough.

It would be interesting if you have a product in mind that shows this failing without being pushed into or close to clipping or overload, and measured with a good analyser rather than by subjective results (which usually force clipping or massive distortion by being played too loud).
Let's keep it to products that are meant to be well engineered, not extreme budget/esoteric products. Maybe, if someone has time, they should find a product that is less linear than those JA tested and shows different behaviour on the 19+20 test compared to ultrasonic tones, albeit tested within spec (meaning -10dBFS or quieter, as some products will have issues closer to 0dBFS) and not driven to stress by being close to clipping/overload/etc.
One aspect that is interesting, and that I highlighted much earlier using Stereophile measurements, is that digital filters CAN influence IMD, in rare cases pretty badly, but this requires poor design or extremely weak stopband/alias rejection (check NOS DACs).
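As a one-line illustration of why stopband/alias rejection matters: ultrasonic energy that a weak filter fails to suppress folds straight back into the audio band (the frequencies below are arbitrary examples, not measurements of any product):

```python
fs = 44100        # example output sample rate (CD)
f = 30000         # example ultrasonic component, in Hz

# Energy the stopband fails to suppress folds to |f - round(f/fs)*fs|:
alias = abs(f - round(f / fs) * fs)
print(alias)      # 14100 -- squarely in the audio band, in Hz
```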

Thanks
Orb
 
arnyk said:
True, a lot of stuff works well below 22 KHz, and that's good enough for a CD player. However, I have personally been surprised by how much equipment falls on its face between 22 and 44 KHz, and JA's measurements show much the same.
Maybe we read JA's measurements differently, but his conclusion was that most equipment does not fall on its face between 22 and 44kHz regarding IMD; and a fair few of these products were both digital and headphone amp/line output, so we have digital and analogue performance.

I am going by his measurements not long ago that he provided in this thread, the caveat though being one device did have problems when fed IM tones stronger than -6dbfs and that is way beyond normal signals presented to real world products unless one wants to deliberately push an amp into clipping/massive normal distortion with any reasonable level of watts.

That's correct. Arny Krueger is misrepresenting my measured data (in the context of this thread). I found with two bus-powered USB DAC/headphone amplifiers that they were driven into clipping with a pair of ultrasonic tones, each at -6dBFS so that the peak waveform was 0dBFS. But as such high-level ultrasonic tones never, repeat never, occur with music signals, the production of audible intermodulation products in the audio band with those DACs will not be an issue with 96kHz-sampled files.
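The clipping condition described here is simple to verify numerically; a quick sketch (the tone frequencies are arbitrary illustrations, not JA's actual test frequencies):

```python
import numpy as np

fs = 192000
t = np.arange(fs) / fs                      # one second of samples
amp = 10 ** (-6 / 20)                       # -6 dBFS per tone (~0.5 amplitude)
x = (amp * np.sin(2 * np.pi * 25000 * t)
     + amp * np.sin(2 * np.pi * 26000 * t))

# The two tones periodically align in phase, so the summed waveform peaks
# within a small fraction of a dB of 0 dBFS -- enough to clip any path
# with no headroom above digital full scale.
peak_dbfs = 20 * np.log10(np.abs(x).max())
print(round(peak_dbfs, 2))
```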

John Atkinson
Editor, Stereophile
 
The more people who run the test and produce consistent results, the more I trust the results. What I'm seeing is hours and hours being spent typing posts justifying not spending 6 minutes running enough trials to be interesting. The biggest sign of distrust I see in the tests run by one of your own is the fact that nobody is trying to duplicate his results.
So what's the total number of positive results you have counted, both here & AVS? What distrust are you referring to, exactly?

Well that is obviously some people's idea of what the debate is about - debating the subject to death without spending even a minimal amount of time actually experiencing it. It takes 6 minutes to do a set of trials. A lot of the questions being asked would be answered if a few people would get their hands dirty. I see a widespread overestimation of personal ability to do abstract reasoning.
What questions would be answered by people doing the test?

Furthermore, AFAIK this forum has not been informed of a previous DBT series of tests that was a far better test - the files are in this archive: https://www.dropbox.com/sh/b35feharwc7doty/AADTO9LPjXt9KwuBTPbBa1JZa?dl=0 This one has a full set of training files - I guarantee that everybody will hear a difference, and if they pursue the matter with even minimal diligence, not so much.
I'm not sure what you mean by "I guarantee that everybody will hear a difference, and if they pursue the matter with even minimal diligence, not so much."
Some instructions are needed for this test, Arny?

Here's a news flash that includes a truth about life that I don't see a lot of understanding of: Nothing in this world is perfect. But, some things are far better than others, and a well run DBT is far better than a sighted evaluation.
Totally agree with you here! So if your previous ten requirements are met, that constitutes a well run DBT?
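For what it's worth, the "enough trials to be interesting" question is plain binomial arithmetic. A sketch (the trial counts are hypothetical examples, not anyone's reported results):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the chance of getting at least
    `correct` of `trials` ABX trials right by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(12, 16), 3))   # 0.038 -> conventionally significant
print(round(abx_p_value(9, 16), 2))    # 0.4   -> indistinguishable from guessing
```

This is why short runs are cheap but only decisive when the score is well above chance: at 16 trials, 12 correct clears the usual p < 0.05 bar while 9 correct says essentially nothing.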
 
Pushing? Literally, no. But depending upon how you define the term *pushing*, not sure. In this regard, you may recall your questioning the validity of blind testing WRT JNDs: http://www.whatsbestforum.com/showthread.php?3323-Do-blind-tests-really-prove-small-differences-don-t-exist
Hi Ron. Good to see you posting.

As to that thread, nothing about questioning the validity of blind testing in some situations is equal to me promoting and pushing sighted tests as the solution. I am transparent about the flaws in our arguments and methods. That doesn't make me a non-believer, or a promoter of the opposite approach. If we don't air these issues, the other side will discover them and roast us for them. We had better air them out. Be upfront about them. And show that we truly believe in "science" and its fundamental principles of being unbiased at all times.

That is the nature of thousands of posts from Arny and crew arguing against me. Clearly transparency is not something "we," the vocal few advocating this approach in audio, like. Or else it comes from a lack of the real-world experience needed to recognize that these problems are real.

Arny had said that his key-jingling file had produced negative results down to 32kHz sampling for some 14 (?) years. He put that test forward with complete confidence that no one could pass it. As you know now, people can and did pass it. We know from theory and many published articles that there is no way we can show transparency at 32kHz. You can't expect me to sit on the sidelines and let us sell that to forum members in the name of "let's shoot down high-res audio."

Let's review the first post in that thread:

What he's responding to is John Atkinson claiming that it's extraordinarily hard for blind tests 'to produce anything but a null result even when real audible differences exist' and that it is a test that 'does not work'. Which is simply wrong.

I am curious where I would read about that, Steven.

The power of blind testing comes from the elimination of bias. It does that powerfully, and that can be abundantly easy to see and prove using real data. Its reverse role, finding small differences, is much more difficult if not impossible to prove. To wit, I can make a change to the system that is measurable, and strongly so, yet not found in a blind test. The fact that we cannot use objective data to determine if our test is working puts us in a tough, tough situation.

Complicating matters, I can show that one person can hear such differences and another cannot using the exact same methodology. Is it that the difference is not audible to the latter person, or that the test made it harder for him? How do I disambiguate that as a matter of science?

In another thread, I hypothesized based on my personal experience that blind tests may provide too conservative a view of audible differences. The theory I put forth was that if the mind can manufacture differences, or imagine them being larger than they are, there is no reason to think that it can't do the reverse: second-guess itself in a blind test and erase a difference that may be there. And it doesn't have to do that often for the results to become "statistically insignificant."

I am very interested in figuring out how to prove that real differences that are heard by the ear and the brain are indeed always detected in blind tests. Are there papers or studies I can read about this in the field of audio?

Let's take Steven's statement. As I mentioned, Arny has claimed that for 14 (?) years no one could pass his key-jingling test. That is across thousands of listeners, as he says. All negative results, according to him. Right?

So now let's look at John Atkinson's remark as quoted by Steven: it's extraordinarily hard for blind tests 'to produce anything but a null result even when real audible differences exist.' We know now, from me and others passing the test, that "real audible differences" did exist. Yet Arny represented that no such difference existed for countless years in more forum discussions than anyone could count.

On my side, I have been through countless professional tests where I could beat other people and others could beat me. In both cases, the differences were proven to be there (both subjectively, and objectively from looking at the technology). Yet many could not hear them.

The purpose of the above thread was to investigate that scenario. If I can hear a difference, does a 50-50 outcome mean that no one else could hear it at all? Did they not hear the difference, or did they second-guess themselves?

In these tests, it was very easy even for me to second-guess myself and get random results. It took discipline to stay focused on the differences, proving what JA said to be true: that it is extraordinarily hard for blind tests to produce positive results when differences get small.

Notice my statement in red. Countless others cannot hear the difference in the double-blind tests we are discussing in this thread. Even after they are taught how to cheat or otherwise hear the difference, they still can't. Is that because that is all the difference that can be had, and most people are not in a position to hear it? Or is it that our test is not good enough, that the material selected is not critical enough to tease out differences in dither and resampling errors?
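That last question is partly one of statistical power. A sketch assuming a hypothetical listener whose true per-trial detection rate is 70% (an assumed number, purely for illustration) shows how easily a real but imperfect ability yields a "null" 16-trial result:

```python
from math import comb

def score_prob(correct_needed, trials, p):
    """Chance that a listener with true per-trial hit rate p scores
    at least `correct_needed` correct out of `trials`."""
    return sum(comb(trials, k) * p ** k * (1 - p) ** (trials - k)
               for k in range(correct_needed, trials + 1))

# 12/16 is roughly the p < 0.05 pass criterion for a 16-trial ABX run.
# A listener who genuinely hears the difference on 70% of trials:
print(round(score_prob(12, 16, 0.7), 2))  # ~0.45: fails more often than not
```

In other words, under this assumed hit rate, a genuinely audible difference still produces a non-significant run better than half the time.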
 
Yes, I agree that low-power headphone amps are pretty sweet, but when it comes to power amps and speakers, I would think that small differences in the two sampled files can excite different non-linearities in the power amp and speakers. Thus, if one had his head locked in a vice, using stereo speakers he might hear a difference due to specific and selective magnification by the components upstream, from the source DAC or whatever. The same thing can occur with headphones as well, to a lesser degree. And all this would be volume-level dependent too.

Well, look at how those cheap and very compact (meaning their performance-design is compromised) DAC-amps behaved for both the 19+20kHz and the ultrasonic tones posted earlier by JA; some correlation of performance can be seen (it may also be possible to correlate IM-related performance with a DAC's alias rejection).
And with that in mind, please note that I posted just a few posts back an early-90s Leak integrated amp that was noted for average IM performance, and it is still much better than -70dB relative to the 0dB 19+20kHz test tones when NOT pushed into clipping.
Cheers
Orb
 
That's correct. Arny Krueger is misrepresenting my measured data (in the context of this thread). I found with two bus-powered USB DAC/headphone amplifiers that they were driven into clipping with a pair of ultrasonic tones, each at -6dBFS so that the peak waveform was 0dBFS. But as such high-level ultrasonic tones never, repeat never, occur with music signals, the production of audible intermodulation products in the audio band with those DACs will not be an issue with 96kHz-sampled files.

John Atkinson
Editor, Stereophile
Thanks John.
Well, if you ever get bored, please could you consider doing the ultrasonic test with a few integrated or power amps, just to put this to rest once and for all; it might be interesting to see at what point the IMD becomes notable as the product is stressed.
It would be interesting to see if there is some correlation between the 19+20 results and the ultrasonic equivalent for these products (integrated/power amps).

Thanks
Orb
 
Thanks for telling me what I'm doing and thinking again, John. It's always so refreshing, and it contributes so much to the discussion. Let's review...



Here's the problem with your reasoning, John -- Everybody is just "removing some biases."

JJ's controls, the controls outlined in BS1116, the controls used in Meyer and Moran... none of them are perfect, or even complete. They will continue to be updated, questioned, revised, doubted... they will continue to evolve. And with any luck, in all of that, they will remove more biases. So by your logic that "removing some biases... gets us nowhere as far as trusting a result," you can trust nothing, John.

One more time, with feeling...



Everybody is just "removing some biases," John.

Are more controls better? Depends, but yes, I hope so. Am I "willfully ignoring them?" No. But you are trying to establish that any test that does not contain x is useless. What is x, John? BS1116? JJ? Which one has achieved perfection? Where is the complete list of all possible biases and the controls to mitigate them?

Everybody is just "removing some biases," John. And it's either all pointless, or the testing gets stronger and the results more reliable as more biases are removed. You can't claim an absolute outcome -- this is valid, this is not -- without an absolute standard. You don't have one, and neither do I. The difference is I'm ok with that. I'm ok with understanding that better methodology probably leads to more reliable results. I'm not willfully ignoring anything, John, but I'm ok with the ambiguity (science is hard, huh?), because I'm not looking for a hard line beyond which I can declare everything no better than what I think I hear with all my biases firmly intact.

Tim

Tim, this is tiring. You are trying to make the case that our tests are imperfect because we don't apply all controls - your last post was about controls that we don't even know yet because they will only be discovered in the future. Now you are trying to say that there are many sets of controls - BS1116, JJ's. You're going around in circles - JJ's list is his short summary of BS1116. I doubt you will find any contradictions between BS1116, JJ's, ArnyK's or whatever credible list relating to reliable testing in this area - they are all informed by the same recognised standards documents, BS1116, MUSHRA & others. Yours is an untenable position.

If we are undergoing an operation, we don't want the surgeon to tell us that washing his hands is all he is going to do before he opens us up, because he believes that this is the cause of most post-op infections. We expect him to apply the best precautions known at the time, as did those undergoing operations before knowledge of bacteria was prevalent. Wilfully ignoring best practice is totally different & yes, you are admitting to wilfully ignoring the standards.

Tim, from your posts it is pretty evident that the contents of BS1116, JJ's list of requirements & ArnyK's list of requirements are all new information to you. I'm pretty sure you have argued about well-run DBTs in the past but now seem to reject the standards agreed by the industry that define a well-run DBT. I'm sure you were unaware just what this meant - you were unaware of the best practices contained in this BS1116 standard.

I'm OK with your final statement that you are fine with the ambiguity of test results - so am I. That ambiguity means that we don't know if the semi-controlled blind test you suggest is in any way better than sighted tests, so my approach to that ambiguity is as I have said in the past - half-arsed blind tests are anecdotal evidence at best.

At worst they are misleading, because they are presented as if they were scientifically rigorous & therefore more believable than other tests. A great disservice to audio, I believe.
 
The more people who run the test and produce consistent results, the more I trust the results. What I'm seeing is hours and hours being spent typing posts justifying not spending 6 minutes running enough trials to be interesting. The biggest sign of distrust I see in the tests run by one of your own is the fact that nobody is trying to duplicate his results.
Why don't you post your results of the Scott/Mark tests, Arny? Why not post your results of the "Jitter" test you just linked to? If it is just 6 minutes, why not run those and report, instead of "hours and hours being spent typing posts"? Do you not believe in doing what you ask others to do?
 
Tomelex, thanks for your response.

The first question I would ask is how do you know what your current pre-amp is doing wrong or what is it doing wrong which would require a new one in the first place?

Pre was clipping on piano transients. Lack of dynamic headroom. Pretty obvious and audible.

Secondly, you need the appropriate test equipment to switch between the two (noise, FR, harmonic spray, group delay, etc) then all you can do is say they both measure differently, one better than the other in some or all areas,

Not available.

BUT, and then if you don't pick by measurements because you don't believe in them, thus, you pick by ear, based on preference, then you don't need to do any "official" or recommended tests do ya.

That's why I was asking you and others for suggestions, because you and others have clearly stated that my methodology for making a purchase decision is clearly flawed.

PS, a 6922 in a circuit designed for minimum distortion has a distortion spread like a solid state device as revealed by measurements.....just sayin....

?????? The sonic difference between the "stock" tube and the EAT was very obvious.

And of course, strict adherence to blind testing is rarely used IMO.

Why do some people keep espousing this method, or variations thereof, as "viable" and necessary to determine sonic differences versus "sighted / long term" listening?

And in the end (and the reason for my participation in this thread), what does ABX, DBT, etc. have to do with listening to and enjoying music? Isn't this the basic reason people get involved with this hobby?
 
Maybe we read JA's measurements differently, but his conclusion was that most equipment does not fall on its face between 22 and 44kHz regarding IMD; and a fair few of these products were both digital and headphone amp/line output, so we have digital and analogue performance.

I'm specifically focusing on his tests of 4 DACs/headphone amps that he did recently because they are relevant to the current topic.
 
That's correct. Arny Krueger is misrepresenting my measured data (in the context of this thread). I found with two bus-powered USB DAC/headphone amplifiers that they were driven into clipping with a pair of ultrasonic tones, each at -6dBFS so that the peak waveform was 0dBFS.

No misrepresentation that I can see, because that is exactly what I'm talking about. I'll stipulate that I agree with the technical details of John's tests as he posted their results in AVS if there are any differences between his accounts of them and mine.

My tests were done with test tones that summed up to -1 dB FS.

But as such high-level ultrasonic tones never, repeat never, occur with music signals, the production of audible intermodulation products in the audio band with those DACs will not be an issue with 96kHz-sampled files.

My tests were designed to be proof of performance for the "Keys Jangling" test files, which had hugely more ultrasonic content than almost any real-world music I've seen. I have seen a few instruments that put out unbelievable amounts of energy in the last half-octave of the normal audible range, and other examples that poured out lots of ultrasonics. I wanted to give listeners the best possible chance of detecting IM in their gear as they used it.

If it invalidates my headphone amplifier tests, the same argument would appear to invalidate John's own power amp tests, if they use full-power test tones at or above 20 KHz.

In my book a headphone amp is just a power amp for headphones. One difference is that headphone amps in many computer audio interfaces, portable music players and those that are USB-powered tend to be run within 3-6 dB of clipping, or even closer, while many power amps driving speakers are often run with a lot more headroom.
 
Tim, this is tiring. You are trying to make the case that our tests are imperfect because we don't apply all controls - your last post was about controls that we don't even know yet because they will only be discovered in the future. Now you are trying to say that there are many sets of controls - BS1116, JJ's. You're going around in circles - JJ's list is his short summary of BS1116. I doubt you will find any contradictions between BS1116, JJ's, ArnyK's or whatever credible list relating to reliable testing in this area - they are all informed by the same recognised standards documents, BS1116, MUSHRA & others. Yours is an untenable position.

If we are undergoing an operation, we don't want the surgeon to tell us that washing his hands is all he is going to do before he opens us up, because he believes that this is the cause of most post-op infections. We expect him to apply the best precautions known at the time, as did those undergoing operations before knowledge of bacteria was prevalent. Wilfully ignoring best practice is totally different & yes, you are admitting to wilfully ignoring the standards.

Tim, from your posts it is pretty evident that the contents of BS1116, JJ's list of requirements & ArnyK's list of requirements are all new information to you. I'm pretty sure you have argued about well-run DBTs in the past but now seem to reject the standards agreed by the industry that define a well-run DBT. I'm sure you were unaware just what this meant - you were unaware of the best practices contained in this BS1116 standard.

I'm OK with your final statement that you are fine with the ambiguity of test results - so am I. That ambiguity means that we don't know if the semi-controlled blind test you suggest is in any way better than sighted tests, so my approach to that ambiguity is as I have said in the past - half-arsed blind tests are anecdotal evidence at best.

At worst they are misleading, because they are presented as if they were scientifically rigorous & therefore more believable than other tests. A great disservice to audio, I believe.

It is getting tiring, John, because you're hearing what you think the "opposition" will say instead of what I'm actually writing. I'm not trying to say the tests are imperfect; they are imperfect. If they had been perfect, there would have been no reason to update them. Does that mean that the first two editions of BS1116 were bad, and every result found under them was wrong, because they didn't contain the updates? Or does it mean, hopefully, that the previous editions contained good, valuable controls and methods that delivered actionable information, but they've gotten even better with the updates? Is it all or nothing, John? Or is everything invalid that doesn't use the most complete set of controls available? That's all. Really, our only disagreement is your notion that less than all the controls, whatever that means, is no better than no controls at all. That would make JJ's "summary" of BS1116 no better than no controls, and the prior editions of BS1116 no better than no controls.

Is it all or nothing, John? Or not? That's about as straight as it's going to get. No circles there. Any chance I can get a straight answer to match my straight question?

Tim
 
.....
Is it all or nothing, John? Or not? That's about as straight as it's going to get. No circles there. Any chance I can get a straight answer to match my straight question?

Tim
I already proposed a mutually acceptable way for agreement on this so we could move on, & I thought we came to this agreement, Tim - you seemed to accept it then, but now you seem not to want to leave it at that - can we now leave this, please?
ME: I suggested positive & negative controls as a means of short-circuiting/bypassing this lack of attention to other possible biases in operation in a blind test. Do you object to this? If not, then we are in agreement & can move on, OK?
YOU: I would never object to anything with the potential to remove bias and avoid errors, John. Thanks.
 
Stereoeditor said:
But as such high-level ultrasonic tones never, repeat never, occur with music signals, the production of audible intermodulation products in the audio band with those DACs will not be an issue with 96kHz-sampled files.

My tests were designed to be proof of performance for the "Keys Jangling" test files, which had hugely more ultrasonic content than almost any real-world music I've seen.

I agree. But your spectral analysis, Mr. Krueger, showed that the ultrasonic content of the "keys jangling" file didn't reach 0dBFS. So even the bus-powered DACs will not be driven into clipping with this file.
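A claim like this is straightforward to check with a band-limited peak measurement; a sketch using a synthetic stand-in signal (the actual file isn't reproduced here, so the -20dBFS tone at 30kHz is an assumption purely for illustration):

```python
import numpy as np

def band_peak_dbfs(x, fs, f_lo, f_hi, frame=8192):
    """Peak spectral level, in dBFS, within [f_lo, f_hi] Hz, taken
    across overlapping Hann-windowed frames of a +/-1.0 scaled signal."""
    win = np.hanning(frame)
    gain = win.sum() / 2                      # coherent gain for a sine tone
    freqs = np.fft.rfftfreq(frame, 1 / fs)
    sel = (freqs >= f_lo) & (freqs <= f_hi)
    peak = 0.0
    for start in range(0, len(x) - frame, frame // 2):
        spec = np.abs(np.fft.rfft(x[start:start + frame] * win)) / gain
        peak = max(peak, spec[sel].max())
    return 20 * np.log10(max(peak, 1e-12))

# Synthetic stand-in for a hi-res file: a -20 dBFS tone at 30 kHz.
fs = 96000
t = np.arange(2 * fs) / fs
x = 0.1 * np.sin(2 * np.pi * 30000 * t)
print(round(band_peak_dbfs(x, fs, 24000, 48000)))  # -20
```

Run against a real file's samples, a result well below 0dBFS in the 24-48kHz band supports the point being made here.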

I have seen a few instruments that put out unbelievable amounts of energy in the last half-octave of the normal audible range, and other examples that poured out lots of ultrasonics.

But again not at levels approaching 0dBFS in the octave above 24kHz.

If it invalidates my headphone amplifier tests, the same argument would appear to invalidate John's own power amp tests, if they use full-power test tones at or above 20 KHz.

This high-frequency intermodulation test is intended to be a worst-case test using frequencies that are still regarded as being in the audio band. It has nothing to do with full-scale signals an octave higher in frequency.

John Atkinson
Editor, Stereophile
 
My tests were designed to be proof of performance for the "Keys Jangling" test files, which had hugely more ultrasonic content than almost any real-world music I've seen. I have seen a few instruments that put out unbelievable amounts of energy in the last half-octave of the normal audible range, and other examples that poured out lots of ultrasonics. I wanted to give listeners the best possible chance of detecting IM in their gear as they used it.
That you did, Arny. Here is the spectrum of the file and test tones together:

[attached image: spectrum of the key-jingling file and the test tones]


If your key jingling has more ultrasonic content than any real world music, where does that leave the test tones?

I said this before on AVS Forum. If you insist on this being a useful test, its repercussions will go way past the borders of this discussion...
 
