Conclusive "Proof" that higher resolution audio sounds different

Orb

New Member
Sep 8, 2010
3,010
2
0
I ran the test and heard no IMD. I also did not at any point support IMD as being the reason for the positive results. I am afraid your obsession with Arny filled in some blanks there. David Griesinger's surprising result that IMD usually only shows up at the level of the electronics rather than the transducers is also what I have found (also to my surprise).
..................
As for the test file of jangling keys not being a reasonable test we will have to disagree.

Just to clarify: in the context of this thread, Griesinger's results and conclusions only showed audible IM with electronics when clipping. Sorry to mention it again, but some might draw the wrong conclusion from reading that sentence.
Cheers
Orb
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
As for the test file of jangling keys not being a reasonable test we will have to disagree.
I thought I explained this. So once more: there is no issue with the keys jingling part. All of these discussions are around the appropriateness of the ultrasonic tones added to the end to create the type of distortion that would occur when playing the key-jingling part.
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
My concern is with the test tool, specifically the Foobar-PC ABX software. By allowing the user the ability to specify the start/stop points of the file, it makes it trivial to "game" the system, either intentionally or unintentionally. So when I read that someone used PC ABX and heard a difference, I cannot reach the conclusion that they actually heard a difference between the two test files. They could simply have been hearing differences due to switching transients. The PC ABX tool does not provide adequate control over false positive results. Controls of this type may not be needed if a single hobbyist is conducting a listening test for their own purposes, but as a means of gathering evidence with a chance of convincing others, the tool simply does not cut it. If the possibility of convincing others is not a requirement, then there is really no need for any "objective" tests in the first place.
The whole purpose of these tests and this thread is that last sentence, but the other way around.

The litmus test for any statement regarding fidelity on forums is to demand a double blind test. Until now, foobar2000 ABX would not only have been accepted as such proof but actually demanded! Indeed, that is why I used it: Arny kept insisting that we report on the results of foobar2000. So I did. We didn't wake up one morning deciding to create our own test and run foobar2000 ABX to convince anyone. We were given the prescription and we followed it as requested.

To say now that the results are not useful to convince anyone turns the whole affair on its head. Were all the years these tests have been demanded just for the amusement of the person taking the test? I hope not. If there were issues with the content, the playback equipment distortion, or the ABX tool, and we call ourselves experts, then the test should never have been asked to be run in the first place. That we ran them and folks are only now realizing there can be issues is indicative of how shallow our knowledge is in regards to blind testing, and of the weakness of forced-choice tests like ABX, where the answer is always one of the two given. If there is a problem with the test, you can be assured someone will beat the game, and beat it well.

I created this thread and put the word "proof" in quotes for precisely this reason. Passing such tests was supposed to be the proof that we demanded. Arny has said that in 14 years or so no one could tell such differences apart with this very content and this software-based ABX tool. Now that we have beaten the test, what are we to make of that? We don't get to just say, "well, the test as run is wrong." How come it was not wrong for the 14 years before? How come it was supposed to be convincing then but not now?

You talk about switching problems. I have heard others say that, but no one has provided any data that this is what we heard. What a DAC on a laptop like mine does on opening/closing the device is not deterministic a priori. Indeed, the first test I did was to see whether the switching glitches were consistent, and on my machine they were not at all.

We have taken the negative results of such tests to the bank every day and twice on Sunday. We don't even consider that the test harness could be faulty, the test content not revealing, or the listener not trained. We got negative results, so all is well. Get a positive result, and any and all things could be wrong.

Circling back, what we need to be convinced of is that our unsinkable Titanic was not so. The trust we put in these tests being "objective" is now being doubted by us objectivists just the same. This leaves us bare in the future with respect to these challenges. And that is what it should do, as opposed to saying, "you all don't understand this and that." There is nothing hard to understand about defeating a test that was supposed to be undefeatable.

If the objectivist camp, which by the way I am part of, walks on as if nothing has happened and this is a mere technicality, then let's declare that we have no common sense at all.

As to your switching theory, it is a fine theory, but it lacks any data confirming that is how the positive outcome was achieved. How a DAC behaves under an operating system when you open and close the device and feed it non-zero-crossing data is unknown. You can't predict whether all the samples get played or what DC levels they generate. But again, none of this matters with respect to the scope of this thread.

The other point that is critical to note is that despite a number of people such as yourself giving clues as to how the system can be gamed, hardly anyone has managed to produce such an outcome. Why? Because even if these cheats exist, they are such small differences that the vast majority of people are not able to detect them even with full knowledge of what to look for. It demonstrates vividly that there are two camps: those with critical listening abilities and those without. As such, tests that did not explicitly select and screen for such listeners are invalid when the outcome was negative: we cannot disambiguate whether the difference was not audible or the listener was not capable of finding a difference that was audible.

There is no need for the PC ABX software to have this fault. One way would be to fix the PC ABX software to fade in and out at the start points.
That would force the minimum segments to be quite a bit longer so as to allow for the fade up and down. Often a single note may be at stake here. By lengthening the segment you start to strain short-term auditory memory and reduce the chances of hearing such differences. Having used ABX tools that do this, I can say there is also another non-obvious problem: soft dissolves reduce the ability to tell differences as compared to immediate switching. Go in the shower and suddenly increase the amount of hot water versus doing it gradually; the former will be much more noticeable, because there is no time for the brain to adapt.
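For illustration, here is a minimal sketch of the kind of fade being discussed, assuming mono float samples in a NumPy array; the 10 ms raised-cosine ramp is an illustrative choice, not the behavior of any specific ABX tool:

============

# Minimal sketch of a click-suppressing fade for ABX segment boundaries.
# Assumes mono float samples; fade_ms and the raised-cosine shape are
# illustrative choices, not what any particular ABX tool does.
import numpy as np

def fade_segment(x: np.ndarray, sr: int, fade_ms: float = 10.0) -> np.ndarray:
    n = min(int(sr * fade_ms / 1000), len(x) // 2)  # keep the two ramps from overlapping
    ramp = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n)))  # 0 -> 1
    y = x.copy()
    y[:n] *= ramp          # fade in
    y[-n:] *= ramp[::-1]   # fade out
    return y

============

Note how the ramps consume part of the segment and soften the instant A/B switch, which is exactly the trade-off described above.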

Another way would be to remove these buttons from the tool (or operate on the honor system and do all the testing without using these buttons at all). If this made the testing too hard, then shorter segments could be selected as part of the test software and distributed to the group. It would then be possible to vet these sequences for artifacts related to start/stop. If I were serious about this test, this is precisely what I would do: find a promising short segment, edit it with a fade in / fade out and then test that using PC ABX, playing the complete segment.
The first approach is done routinely in the industry, but not the second part. Again, you want to take maximum advantage of the immediate switchover for the brain to compare the two segments. These "safeguards" readily tilt the test toward more negative outcomes.

Also, when we select such segments we do it with full knowledge of where the differences may be. DIY tests such as the ones we are talking about fail at the outset here. Who says Arny's test clip is revealing enough? And which segment would you have picked? This is why we need to do away with these hobby tests/challenges.

Also, if these tests were intended to be a serious scientific experiment they would not have mixed apples and oranges. They would have tested a single aspect of PCM formats, e.g. 44/16 vs 44/24 or 44/24 vs. 96/24.
The tests were for a real-life scenario: getting the master files versus the ones converted to 16/44.1. There is no real-world scenario that is 24/44.1, so it is not a useful test case.

Note that there is one such test, which I also passed. It tested just the word-length difference, with the sampling rate kept the same. It also used a number of countermeasures against electronic detection: http://www.whatsbestforum.com/showt...unds-different&p=279735&viewfull=1#post279735

So here is another set of results I just posted in response to this person's comment on AVS:

---------



Speaking of Archimago, he put forward his own challenge of 16 vs 24 bit a while ago (keeping the sampling rate constant). I had downloaded his files but up to now had forgotten to take a listen. This post prompted me to do that. On two of the clips I had no luck finding the difference in the couple of minutes I devoted to them. On the third one, though, I managed to find the right segment quickly and tell them apart:

============

foo_abx 1.3.4 report
foobar2000 v1.3.2
2014/08/02 13:52:46

File A: C:\Users\Amir\Music\Archimago\24-bit Audio Test (Hi-Res 24-96, FLAC, 2014)\01 - Sample A - Bozza - La Voie Triomphale.flac
File B: C:\Users\Amir\Music\Archimago\24-bit Audio Test (Hi-Res 24-96, FLAC, 2014)\02 - Sample B - Bozza - La Voie Triomphale.flac

13:52:46 : Test started.
13:54:02 : 01/01 50.0%
13:54:11 : 01/02 75.0%
13:54:57 : 02/03 50.0%
13:55:08 : 03/04 31.3%
13:55:15 : 04/05 18.8%
13:55:24 : 05/06 10.9%
13:55:32 : 06/07 6.3%
13:55:38 : 07/08 3.5%
13:55:48 : 08/09 2.0%
13:56:02 : 09/10 1.1%
13:56:08 : 10/11 0.6%
13:56:28 : 11/12 0.3%
13:56:37 : 12/13 0.2%
13:56:49 : 13/14 0.1%
13:56:58 : 14/15 0.0%
13:57:05 : Test finished.

----------
Total: 14/15 (0.0%)


============
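For readers unfamiliar with the log format: the percentage column is the one-sided binomial probability of scoring at least that many correct out of that many trials by pure guessing. A minimal sketch of the computation (standard library only; the function name is illustrative):

============

# The p-value in a foobar2000 ABX log is the chance of getting at least
# `correct` right out of `trials` by guessing (one-sided binomial test).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

print(abx_p_value(14, 15))  # ~0.00049, which the log above rounds to 0.0%

============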

It is now going to bug me until I can pass the test on his other two clips. :)

BTW, his test period is closed and he has reported such for the above clip:



And his commentary:
As you can see, in aggregate there is no evidence to show that the 140 respondents were able to identify the 24-bit sample. In fact it was an exact 50/50 for the Vivaldi and Goldberg! As for the Bozza sample, more respondents actually thought the dithered 16-bit version was the "better" sounding 24-bit file (statistically non-significant however, p-value 0.28).


So we see what aggregating the general public does in these tests, and why the industry recommendation is to use only trained listeners. Including a large number of testers without any prequalification pushes the results toward 50-50. Imagine 4 out of his 140 being like me; their results would have been erased by throwing them into the larger pool. If our goal is to say what that larger group can do, then this kind of averaging of the results is fine. But if we want to make a "scientific" statement about the audibility of 16 vs 24 bits, it is wrong and we run afoul of Simpson's paradox. See: http://en.wikipedia.org/wiki/Simpson's_paradox.
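A toy simulation of this pooling effect; the 4-in-140 split and the 95% accuracy for the capable listeners are purely hypothetical assumptions for illustration:

============

# Toy simulation: a few capable listeners pooled with many guessers.
# The 4/136 split and the 95% accuracy are hypothetical, not data.
import random

random.seed(1)
trials, total = 10_000, 0
for _ in range(trials):
    votes  = [random.random() < 0.50 for _ in range(136)]  # guessers
    votes += [random.random() < 0.95 for _ in range(4)]    # capable listeners
    total += sum(votes)
print(total / (trials * 140))  # ~0.51, indistinguishable from chance

============

A per-listener design with repeated trials, like the 15-trial ABX log above, separates the two groups; a single pooled vote cannot.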

As always, I want to caution people that my testing is all about finding a difference and not stating what is better. And yet again, I do not know which file was which as I did my testing. I simply characterised the difference between A and B clips and then went to town.

Of note, I don't know how he determined that the original files did indeed have better dynamic range than 16 bits. Maybe someone less lazy than me can find it and tell me :).

Here is the original thread where the files were posted: http://archimago.blogspot.ca/2014/06/24-bit-vs-16-bit-audio-test-part-i.html

And results: http://archimago.blogspot.com/2014/06/24-bit-vs-16-bit-audio-test-part-ii.html
 

thedudeabides

Well-Known Member
Jan 16, 2011
2,181
691
1,200
Alto, NM
Net, net, net, if I claim to be an EE I'm actually understating my formal credentials by quite a bit. Furthermore I've worked professionally as an EE, a ME, and a BS in IT. Enough alphabet soup? ;-)

Very impressive sir. I especially like the fact that you are "understating your formal credentials" by quite a bit. ;)

Can you, with all due respect, tell me how that qualifies you to be a more informed, astute listener within the context of judging the quality of reproduced music?

PS: Science experiments don't count. :)
 

aronjt

New Member
Aug 25, 2014
28
0
0
Speaking as an EE, Esldude seems to "get it" regardless of his formal credentials.

"… seems to get it"? I didn't ask if he had some sort of religious revelation based on your particular brand of dogmatic audio religion. I asked if he had any formal education in an audio related field. If someone agrees with you, they need no credentials? Not that credentials are always required for sufficient understanding. But if one wants to play armchair expert, he should make some effort to earn respect among the professionals, show some humility, and act honestly. Unfortunately, this is rarely the case with most internet forum armchair experts.

I've never seen someone who "gets it" need to be corrected so often.
 

aronjt

New Member
Aug 25, 2014
28
0
0
Welcome to WBF as this appears to be your first post.

Yes I have some formal education in these matters though it isn't how I make my living.

Educated armchair perhaps?

Sorry, I missed the qualifier "some" earlier in my reply. Does "some" mean completion of an EE degree and additional study, or just one related class attended?

Just curious. Thanks.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
... we cannot disambiguate whether the difference was not audible or the listener was not capable of finding a difference that was audible.
Or the equipment was not capable, or any of the many other reasons for non-differentiation. That's why I've been saying the need for controls in all such tests is paramount. I don't know how feasible/practical this would be (as I've never seen it done in home tests); that's why I raised it here - to get some opinions on its practicality/feasibility, not some wishy-washy reply that we can't control everything so why bother trying, which amounts to an attempt to preserve the status quo, i.e. a skew towards null results in 99.9% of such tests.
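To make the ask concrete, here is a minimal sketch of the two standard controls, assuming float PCM in NumPy arrays; the 1 dB positive-control level drop is an illustrative stand-in for any known-audible difference:

============

# Sketch of hidden controls for a home ABX run. Assumes float PCM arrays.
import numpy as np

def negative_control(x: np.ndarray):
    # Identical pair: any "difference" reported here is a false positive.
    return x, x.copy()

def positive_control(x: np.ndarray, drop_db: float = 1.0):
    # Known-audible pair (illustrative 1 dB level drop): a listener or
    # playback chain that misses this cannot validate a null result.
    return x, x * 10 ** (-drop_db / 20)

============

Interleaved blind among the real trials, the negative control bounds the false-positive rate and the positive control shows that the listener and chain can resolve a known difference at all.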
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
If I were to use the goading tactics we usually see from those who call for blind tests, I would be saying that these same people are afraid to put the necessary controls in their tests (the controls that would show the tests are valid) because they are afraid of exposing their null results as having been bogus all along. But I won't do that goading. Instead, I'd ask those who say their informal blind tests are relevant to now test Arny's files in the way they would have run their blind tests in the past, and tell me if they can still hear the differences (& be honest).
 

Robh3606

Well-Known Member
Aug 24, 2010
1,484
473
1,155
Destiny
Hello thedudeabides

Can you, with all due respect, tell me how that qualifies you to be a more informed, astute listener within the context of judging the quality of reproduced music?

PS: Science experiments don't count.


Well, they do actually. Take a look at the attached link, download it and give it a try. I am a much better listener now than I was before I spent the last 15 years building DIY speakers. Measurements, along with crossover design and critical listening, are all part of the experience. If you don't think DIY speakers are a science experiment, you should give it a try :)


http://harmanhowtolisten.blogspot.com/

Rob:)
 

Tony Lauck

New Member
Aug 19, 2014
140
0
0
The reaction to the demonstrated results and the resultant acrimony in two long threads does not surprise me in the slightest. In fact, it's exactly what I figured I would read before beginning the first thread. I have seen these debates too many times, starting with my introduction to S vs. O by two of my one-time summer college roommates, Clark Johnsen and Brad Meyer. I also had occasion to meet the director of the Princeton PEAR project, which led to my reading and studying the results of various positive ESP experiments and observing how these were dismissed by mainstream scientists despite overwhelming statistical evidence. Two books that shed light on scientific epistemology are "The Structure of Scientific Revolutions" and "Personal Knowledge". If anyone is seriously interested in the S vs. O debate I suggest reading these books.

If someone who reliably hears differences would be so kind as to specify the specific ABX start/stop points for a legitimate segment of the key file, I will try to produce short click-free segments no more than 200 ms longer than the specified intervals. I can then post these for others to download and play without using the problematical start/stop feature. As of now I am not convinced that anyone has legitimately heard differences in these two files, because of the click fault in the test tools.
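One plausible way to produce such excerpts, sketched under stated assumptions: the soundfile library, the 100 ms pad, and snapping cuts to zero crossings are all illustrative choices, and the file paths are hypothetical:

============

# Cut a padded excerpt around requested start/stop points, snapping each
# cut to the nearest zero crossing so the excerpt starts/ends click-free.
# Assumes the signal actually crosses zero somewhere.
import numpy as np
import soundfile as sf

def cut_click_free(path, start_s, stop_s, pad_s=0.1):
    x, sr = sf.read(path)
    mono = x if x.ndim == 1 else x.mean(axis=1)
    zc = np.nonzero(np.diff(np.signbit(mono).astype(np.int8)))[0]
    snap = lambda i: int(zc[np.argmin(np.abs(zc - i))])
    a = snap(max(int((start_s - pad_s) * sr), 0))
    b = snap(min(int((stop_s + pad_s) * sr), len(mono) - 1))
    sf.write(path.rsplit(".", 1)[0] + "_cut.wav", x[a:b], sr)

============

For stereo material a zero crossing of the channel average is not exactly zero in each channel, so short fades at the cut points (as discussed earlier in the thread) remain a useful belt-and-braces addition.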

BTW, the only solid conclusion that I reached was that the original experiment, including the test files and suggested tools, was ill advised, something that would, at best, characterize the work of an amateur and not a first-class research scientist. I have had a lifetime of experience dealing with people who are first rate. I have enjoyed working with these people despite many of them having a "difficult" personality. On the other hand, I never was able to reach a level of personal maturity where I could tolerate second raters who were difficult to get along with, particularly those who were more interested in winning an argument than uncovering knowledge.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
BTW, the only solid conclusion that I reached was that the original experiment, including the test files and suggested tools, was ill advised, something that would, at best, characterize the work of an amateur and not a first-class research scientist.
Totally agree! One of the points of this thread is to drive some rigour into such testing. This rigour is being resisted at every step of the way, with the excuse that such tests are good enough because they address the sighted vs blind debate (which is what all these amateur tests are really about: to shame/goad/"prove" the sighted listener as wrong in their perception). It may be that they are wrong, but these amateur tests (& this is one of the better ones) show nothing other than the test designer's bias. This acceptance of a test that's "good enough" is really indicative of that bias. As I said already, if the results were mainly positive you can be sure there would have already been a great search for underlying causes & some attempt to tighten up the test, but this has not begun to happen until now. At least the cause of the positive result is now being searched for, but I see no attempt at improving the test with positive & negative controls.
I have had a lifetime of experience dealing with people who are first rate. I have enjoyed working with these people despite many of them having a "difficult" personality. On the other hand, I never was able to reach a level of personal maturity where I could tolerate second raters who were difficult to get along with, particularly those who were more interested in winning an argument than uncovering knowledge.
Exactly where I stand too - sometimes I wish I could overlook hypocrisy, but I have a built-in reflex to it - probably comes from my upbringing in Catholic Ireland? :)
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
The reaction to the demonstrated results and the resultant acrimony in two long threads does not surprise me in the slightest. In fact, it's exactly what I figured I would read before beginning the first thread.
Well, you are much smarter than me, because a) I never predicted these outcomes and b) never expected the reactions it is getting.

I have seen these debates too many times, starting with my introduction to S vs. O by two of my one-time summer college roommates, Clark Johnsen and Brad Meyer.
Then perhaps you are misreading the discussion, as this is an argument between one "O" (me) and many other self-declared "Os." This is not just a hobby for me but was part of my professional life. I have organized and participated in many blind tests where the outcome had a direct bearing on my company's future (a little company called Microsoft) and on my own career. I have participated in more shoot-outs than I can count, from engineers at major recording labels to magazines. So the soundness of the test mattered, and the results had real meaning beyond winning a forum argument. There is a reason I accepted these tests with open arms and ran them. I am not sure where you will find subjectivists who have had this kind of professional experience, or who act as I have.

It is through that lens that I have been arguing with the people in "our camp" that the hardline trust we put in prior DIY tests with negative outcomes is unwise. All the experience in the world could not get forum members with no professional or educational experience to change their minds, or blunt the vigor with which they fight knowledge. It is as if, once you declare yourself an objectivist, all of a sudden whatever you type must be "science," never mind that it is almost always stuff people have read on forums.

Anyway, surprised or not, we finally have some data that points to the inaccuracy of our assumed knowledge in the "O" camp: that critical listening matters, that controls are a must, that the tools used to perform these tests need to be truly understood. That you think this is same old, same old means we have not done a good enough job of showing what is distinct about this discussion, and importantly, how this is not an argument between a subjectivist and an objectivist.

If someone who reliably hears differences would be so kind as to specify the specific ABX start/stop points for a legitimate segment of the key file, I will try to produce short click-free segments no more than 200 ms longer than the specified intervals. I can then post these for others to download and play without using the problematical start/stop feature. As of now I am not convinced that anyone has legitimately heard differences in these two files, because of the click fault in the test tools.
Unfortunately the weapon of choice, foobar2000 ABX, does not save any of the segment locations, so I have none to share. But all is not lost. You can try to use the faults you think exist here, attempt to generate positive results, and share the outcome with others. I suggest doing so not only with Arny's files but also with Scott/Mark's, and Archimago's.

BTW, the only solid conclusion that I reached was that the original experiment, including the test files and suggested tools, was ill advised, something that would, at best, characterize the work of an amateur and not a first-class research scientist. I have had a lifetime of experience dealing with people who are first rate. I have enjoyed working with these people despite many of them having a "difficult" personality. On the other hand, I never was able to reach a level of personal maturity where I could tolerate second raters who were difficult to get along with, particularly those who were more interested in winning an argument than uncovering knowledge.
I am not sure which way you are pointing this weapon of "second raters." :) If you mean me, please be direct and say so. Without that, it is extremely hard to know what you mean by this paragraph specifically, and by the whole post in general.

If you mean me, that's cool. I don't mind the personal remark at all although we heavily discourage discussing members on WBF and want to focus the discussion on the technical points. If those points show a person to be second rate, then you have made your point but without getting personal.

Anyway, back to your point: Meyer and Moran has been presented as the bible of high-res vs CD. Yet the test did not do the simplest thing of making sure the content they used actually had high-resolution information. And there were countless other flaws, such as the lack of controls, no pre-screening of the testers, etc. So that I get calibrated on your metric, do you consider the M&M test to be the work of an amateur or of a first-class research scientist?
 

Tony Lauck

New Member
Aug 19, 2014
140
0
0
Well, you are much smarter than me, because a) I never predicted these outcomes and b) never expected the reactions it is getting.

I think it a matter of accidents in my life that led me to read books like "The Structure of Scientific Revolutions" and "Personal Knowledge". Or perhaps it was more a result of having gone to Harvard rather than MIT and training in mathematics, with some concept of mathematical truth rather than engineering with its concept of "good enough". Or maybe it was getting a B in a high school physics experiment in which my lab partner and I could not get the correct value for the acceleration of gravity, because it turned out that our experiment had been using gravity to measure the power line frequency. We were pissed off at the poor grade, but when we worked with the instructor and got to the bottom of the problem, including looking at strip charts from the power company, our grade got turned into an A+. (Years later I was still somewhat pissed off, as it was likely that some of my classmates had gotten away with cheating, unless their measurements had been taken by luck when the power line frequency was 60 Hz.)



I am not sure which way you are pointing this weapon of "second raters." :) If you mean me, please be direct and say so. Without that, it is extremely hard to know what you mean by this paragraph specifically, and by the whole post in general.

If you mean me, that's cool. I don't mind the personal remark at all although we heavily discourage discussing members on WBF and want to focus the discussion on the technical points. If those points show a person to be second rate, then you have made your point but without getting personal.

Anyway, back to your point, Meyer and Moran has been presented as the bible of high-res vs CD. Yet, the test did not do the simplest thing of making sure the content they used had high resolution content. And countless other flaws such as lack of controls, pre-screening of the testers, etc. So that I get calibrated on your metric, do you consider M&M test to be work of an amateure or first-class research scientist?

Rest assured, the weapon was not pointed at you. You knew what you didn't know (you hired JJ) and you are more interested in truth than winning an argument (as evidenced by your quoted words.) The only thing I didn't like in your posts in this thread is your use of FFTs when the issue is distortion caused by peak levels. :)

M&M was not even to the level of what I would call "second rate". It was at best third rate, for the reasons you stated. But more fundamentally, the weasel wording in that paper around negative results was sufficient for me to dismiss the authors as not measuring up to the mid-dilettante level of expertise. Having done blind tests at Brad Meyer's home in past decades, I characterize Brad as more of a Technician than an Engineer or Scientist. I will go even further: after the publication of M&M I lost respect for the AES.
 

Kees de Visser

New Member
Aug 21, 2014
21
0
0
France
I go nearly deaf listening to the key jingling files at "normal" levels when that 4K tone starts to play. So we know subjectively that the single tone 4K is louder perceptually than the rest of the clip.
Amir, afaik the 4kHz tone is at -30dBFS peak and -33dB RMS level. If you go nearly deaf, doesn't that indicate that your monitoring level is rather high? Have you verified that your chain is not adding audible distortion at that level? When I did an ABX on my MacBook with cheap headphones I could easily hear distorted peaks in the hi-res version and not in the redbook. Lowering the volume on the MacBook, or using a separate analog headphone amp removed the distortion. It looks like distortion in the analog domain was responsible.
The hi-res jangling version has higher peak levels than the redbook version (almost 3 dB after the reconstruction filter), so clipping is more likely to occur in the hi-res signal. I have selected a 700 ms segment of the jangling sample and made a loop that alternates between hi-res and redbook, in order to find out when the difference becomes inaudible as monitoring gain is lowered. My MacBook output has to be set to -12 dB to avoid audible clipping of 0 dBFS signals with these headphones, making it rather soft for serious listening. With Sennheiser CX150 earbuds it was even -18 dB. Have you checked your laptop headphone amp for clipping levels?
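A minimal sketch of the alternating loop described here, assuming two 700 ms stereo excerpts already at a common sample rate and time-aligned; the file names and the 10 ms joining fades are illustrative:

============

# Build an A/B/A/B loop from two matched excerpts for level-hunting tests.
# Assumes stereo (N, 2) float arrays at the same rate; names illustrative.
import numpy as np
import soundfile as sf

hi, sr = sf.read("jangling_hires_700ms.wav")
rb, _  = sf.read("jangling_redbook_700ms.wav")

n = int(0.01 * sr)  # 10 ms raised-cosine joins to avoid splice clicks
ramp = (0.5 * (1 - np.cos(np.linspace(0, np.pi, n))))[:, None]
for seg in (hi, rb):
    seg[:n] *= ramp
    seg[-n:] *= ramp[::-1]

sf.write("ab_loop.wav", np.concatenate([hi, rb] * 8), sr)

============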
 

Orb

New Member
Sep 8, 2010
3,010
2
0
Kees.
I thought the keys had a 0.2 dB gain difference, going by what I remember others saying in this thread; although it gets confusing considering the various test files around, which also included multiple iterations of the hi-rez music (those would not have the clipping/overload distortion issues, though Amir was still able to ABX them).
Anyone else able to comment on this?
Just to add, Amir also resampled certain test files to remove time-sync/offset and gain issues; others, such as esldude, resorted to doing this as well for the keys (after this he could not differentiate between the two accurately, which looks potentially like SRC transparency with the original, although Amir was still able to identify anomalies; or the issue relates to what Tony Lauck mentioned but very few can audibly identify it). A sketch of that kind of cleanup follows below.
Cheers
Orb
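A minimal sketch of that kind of cleanup (offset and gain matching), assuming mono float arrays already at a common sample rate; SciPy's FFT cross-correlation is one way to estimate the offset, and all names are illustrative:

============

# Line up two captures and match their levels before comparing them.
# Assumes mono float arrays at the same sample rate.
import numpy as np
from scipy.signal import correlate, correlation_lags

def align_and_match(a: np.ndarray, b: np.ndarray):
    c = correlate(a, b, mode="full", method="fft")
    lag = correlation_lags(len(a), len(b), mode="full")[np.argmax(c)]
    b = np.roll(b, lag)                             # crude shift; trim ends in practice
    b = b * np.sqrt(np.mean(a**2) / np.mean(b**2))  # RMS gain match
    return a, b

============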
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
Amir, afaik the 4kHz tone is at -30dBFS peak and -33dB RMS level. If you go nearly deaf, doesn't that indicate that your monitoring level is rather high? Have you verified that your chain is not adding audible distortion at that level? When I did an ABX on my MacBook with cheap headphones I could easily hear distorted peaks in the hi-res version and not in the redbook. Lowering the volume on the MacBook, or using a separate analog headphone amp removed the distortion. It looks like distortion in the analog domain was responsible.
First, welcome to the forum Kees. I have enjoyed reading your thoughtful responses on AVS so it is nice to see you contributing here too.

As to levels, I am using my laptop. I just replayed the file, and for the key-jingling part I have the sound card volume set to 50%. With the 4 kHz tone, I had to dial it down to 30%. Given that our ears are less sensitive to high frequencies than to mid-tones, which include 4 kHz, I don't think there is an issue here. This laptop has surprisingly good fidelity, given that I have been able to find many differences on it, such as the 16 vs 24 bit files and 320 kbps MP3 vs CD.
 

Kees de Visser

New Member
Aug 21, 2014
21
0
0
France
I thought the keys had a 0.2 dB gain difference
There are several level differences, so I'm not quite sure which ones you mean and I have trouble finding anything in the huge AVS thread :)
AFAIK the 0.2 dB level difference was between the original 3 AVS music samples, caused by the specific (Sonic Studio) SRC. This only concerns level differences below 20 kHz. The level differences between the two key jangling files are for peaks only and are caused by the removal of >22 kHz frequencies. This is mathematics and perfectly normal. Comparing sample values of the peaks is not fair, since the DAC's reconstruction filter will create higher peaks, depending on the source signal and the filter. It's possible to estimate the analog peak levels by interpolation, and I found a difference of close to 3 dB. This means that in the (analog) playback chain the hi-res peaks will clip before the redbook ones. To make sure a listening test is valid, it should be verified that both peaks are undistorted, IMHO.
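The interpolation estimate mentioned here can be approximated by bandlimited oversampling and taking the maximum; a minimal sketch, with the 8x factor and resample_poly as illustrative choices:

============

# Approximate the post-reconstruction ("true") peak by bandlimited
# upsampling; compare the two files' values to estimate the headroom gap.
import numpy as np
from scipy.signal import resample_poly

def true_peak_db(x: np.ndarray, oversample: int = 8) -> float:
    up = resample_poly(x, oversample, 1)  # interpolates along axis 0
    return 20 * np.log10(np.max(np.abs(up)))

============

Running this on the hi-res and redbook excerpts should reproduce something like the close-to-3 dB peak difference reported above, if the analysis holds.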
or the issue relates to what Tony Lauck mentioned but very few can audibly identify it.
Are you referring to switching artifacts? I share his concerns and even started a topic on the HA forum about it (mod: please delete if inappropriate), with contributions from JJ (Woodinville) and a Fraunhofer expert. A pity that it didn't result in a better ABX application.
 

Johnny Vinyl

Member Sponsor & WBF Founding Member
May 16, 2010
8,570
51
38
Calgary, AB
Thank you! This looks like an interesting forum and I'm looking forward to learning and sharing :)

Welcome Kees!

So what we have here is a Dutchman(?) living in France and writing in English. Pretty cool! :D
 
