Objectivist or Subjectivist? Give Me a Break

My point is simple: unless you can account for all variables, the conclusion must acknowledge the scope and limitations.

There's no qualification necessary. Any test must show its scope.

Perhaps we agree, but it was not clear from your prior dismissal and your going on about "theory" and "fact".

There is no "fact" other than test results in science. All science is theories, accepted ones. (as opposed to rejected ones like "moon is green cheese").

Anything can be a theory, but accepted theories are as close to facts as science gets.
 
Instead of describing the harmonic structure of the distortion, I wish we could just say "I prefer/don't like an amp with flattened peaks" or "I prefer/don't like an amp with crossover distortion" because, presumably, that is what we're talking about: a transfer function that is nonlinear and gives rise to harmonics (on a single note only; otherwise it's IMD). A description of the transfer function would be much more direct.
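To make that concrete, here's a minimal sketch (the tanh soft clip and the dead-zone crossover curve are stand-ins I picked, not models of any particular amp) of how a static nonlinear transfer function on a single note gives rise to a harmonic series:

```python
import numpy as np

fs = 48000                          # sample rate (Hz)
t = np.arange(fs) / fs              # one second of signal -> 1 Hz FFT bins
x = np.sin(2 * np.pi * 1000 * t)    # a single 1 kHz "note"

# Two illustrative transfer functions (assumptions, not real amp models):
soft_clip = np.tanh(3 * x) / np.tanh(3)       # flattened peaks
crossover = np.where(np.abs(x) < 0.05, 0.0,   # dead zone at the zero crossing
                     x - 0.05 * np.sign(x))

for name, y in [("flattened peaks", soft_clip), ("crossover", crossover)]:
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y)))) / len(y)
    levels = [20 * np.log10(spectrum[k * 1000] + 1e-12) for k in range(1, 6)]
    print(name, ["%.1f dB" % v for v in levels])   # harmonics 1 through 5
```

Both curves are odd-symmetric, so both produce only odd harmonics; what differs is how the levels fall off with order, which is exactly the "harmonic structure" a transfer-function description would replace.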

More particularly, what matters is the spectrum of distortion + noise + frequency shaping vs. the spectrum of the signal.

THAT, in a nutshell, is what matters to the ear.

And the spectra need to be calculated on a frequency scale appropriate to human hearing, i.e. ERB's or critical bands, and not one point per band, either.

Until that information is available, measurement means very little.

ETA:
That spectrum must also be calculated with appropriate windows, and on a one-sample basis, give or take.
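For illustration, here's a minimal sketch of such a calculation; the Glasberg & Moore ERB-number scale, the Hann window, and four points per band are my assumptions, not a prescribed procedure:

```python
import numpy as np

def hz_to_erb(f):                                # ERB-number (Cam) scale,
    return 21.4 * np.log10(0.00437 * f + 1.0)    # Glasberg & Moore (1990)

def erb_to_hz(n):
    return (10.0 ** (n / 21.4) - 1.0) / 0.00437

def erb_spectrum(x, fs, points_per_band=4):
    """Power spectrum summed into bands spaced evenly on the ERB scale,
    several points per band rather than one point per band."""
    pxx = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    edges = erb_to_hz(np.arange(0.0, hz_to_erb(fs / 2), 1.0 / points_per_band))
    idx = np.searchsorted(freqs, edges)
    power = [pxx[a:b].sum() for a, b in zip(idx[:-1], idx[1:])]
    return edges[:-1], np.asarray(power)

fs = 48000
t = np.arange(8192) / fs
lo_edges, bands = erb_spectrum(np.sin(2 * np.pi * 1000 * t), fs)
print(np.round(10 * np.log10(bands + 1e-20), 1))   # band powers in dB
```

The same binning applied both to the distortion+noise residual and to the signal itself gives the two spectra being compared.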
 
Personally, I think both sides of the debate could do with a bit of a reality check. The subjective side should remember that not everything is audible. The objective side should remember that in the world of buying things, human nature will tend to trump even the most ineluctable brute fact if it contradicts prima facie findings.

I've had words with the hard-core SNR weenies as well as the hard-core audio fantasy types.

With the first bunch, I really get peeved when well-known perceptual effects are discounted, or when I'm told that things like positive and negative controls aren't necessary.

With the second bunch, the range is a bit wider, from the old stupidity about "intersample differences in PCM are at best one sample" (sorry, that's about 5 orders of magnitude off) to "DBT's don't work" in place of "bad DBT's don't work".
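The intersample point is easy to demonstrate numerically; in this minimal sketch (the tone frequency and delay size are my choices), a delay of one hundred-thousandth of a sample period still shifts PCM sample values by an amount near the 16-bit noise floor:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                 # one second of samples
delay = (1.0 / fs) / 100000            # 1/100,000 of a sample period
f0 = 10000                             # 10 kHz tone

a = np.sin(2 * np.pi * f0 * t)
b = np.sin(2 * np.pi * f0 * (t - delay))

# RMS of the difference, relative to the RMS of the full-scale tone
diff_db = 20 * np.log10(np.sqrt(np.mean((a - b) ** 2)) / np.sqrt(0.5))
print("delay = %.1e samples, difference = %.1f dB" % (delay * fs, diff_db))
# ~ -98 dB: right around the 16-bit floor, i.e. timing resolution in PCM
# here is on the order of 10^-5 samples, not "one sample".
```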

One of the annoying things about working in both math (signal processing) and auditory perception is that the perception side gets a great deal of guff from some of the signal processing types (especially those who do image/video, which has different problems), and the exact math in signal processing gets guff from the audio folks.

It gets tiring after a very short while, nowadays.
 
One big problem with real world DBT's (as I see it) is that they end up being valid only for the participants. That's great for those who participate, not so great for the rest of us. Even Toole's tests of different speaker radiation patterns, while pretty well done and likely useful to a speaker manufacturer, are too narrow in scope to be of much use to the broader audiophile community.
 
One big problem with real world DBT's (as I see it) is that they end up being valid only for the participants. That's great for those who participate, not so great for the rest of us. Even Toole's tests of different speaker radiation patterns, while pretty well done and likely useful to a speaker manufacturer, are too narrow in scope to be of much use to the broader audiophile community.

Not sure I understand what you mean. Depending on methodology, subject screening and the number of samples, of course, a DBT of audio equipment is as broadly applicable as any DBT. The Toole/Harman speaker studies are great examples. What do you think is too narrow about their scope?

Tim
 
Well, I thought we would all remember many of the posts re: DBT's in the many topics of this forum, but on further thought that is unrealistic.

DBT's in the audio world are limited by cost and motivation, i.e., it's hard, bordering on impossible, to get enough listeners to participate in enough tests to get statistically meaningful results in more than a very few specific instances. Not quite like pharmaceutical research, for example, where the resources and rewards can be huge.
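To put a number on "statistically meaningful," here's a minimal sketch, pure binomial arithmetic with no audio assumptions, of how many correct answers an ABX run needs before chance can be rejected at the 5% level:

```python
from math import comb

def min_correct(n_trials, alpha=0.05):
    # smallest k such that P(X >= k | guessing, p = 0.5) <= alpha
    for k in range(n_trials + 1):
        tail = sum(comb(n_trials, j) for j in range(k, n_trials + 1))
        if tail / 2 ** n_trials <= alpha:
            return k
    return None

for n in (10, 16, 25, 50, 100):
    print("%3d trials -> %d correct needed" % (n, min_correct(n)))
```

Small runs demand near-perfect scores (e.g. 12 of 16), which is part of why a few listeners and a few sessions rarely add up to a meaningful result.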

As far as Toole's tests go, how much does it mean in "your" home system that listeners prefer a speaker with wide, even dispersion when played monophonically in a LEDE room? That's just too narrow a finding to be broadly useful outside of speaker design, and even there it's not an absolute. It is typical of what to expect from good audio DBT's, though, and illustrative of the difficulties in generating more information in the audio perception arena.
 
There's no qualification necessary. Any test must show its scope.

Perhaps we agree, but it was not clear from your prior dismissal and your going on about "theory" and "fact".

There is no "fact" other than test results in science. All science is theories, accepted ones. (as opposed to rejected ones like "moon is green cheese").

Anything can be a theory, but accepted theories are as close to facts as science gets.

We do agree, and yes, I was not clear. With you, I will try to be more literal. When I say "qualification," again, I refer to the language. Proper conclusions are worded in ways that do not go beyond the scope. One need not reiterate the scope in the conclusions, so I see what you mean by there being no need for qualification there. Maybe that is the problem: some people only read the conclusions of a study and run with them, thus missing much of the context.
 
Well, I thought we would all remember many of the posts re: DBT's in the many topics of this forum, but on further thought that is unrealistic.

I remember...

DBT's in the audio world are limited by cost and motivation, i.e., it's hard, bordering on impossible, to get enough listeners to participate in enough tests to get statistically meaningful results in more than a very few specific instances. Not quite like pharmaceutical research, for example, where the resources and rewards can be huge.

It's not easy, but I don't think it is as difficult or as expensive as you imagine. Meyer and Moran did it without the financial resources or the motivation of the pharmaceutical industry.

As far as Toole's tests go, how much does it mean in "your" home system that listeners prefer a speaker with wide, even dispersion when played monophonically in a LEDE room?
It might mean nothing in my system; it could be very important if I'm shopping for speakers.

That's just too narrow a finding to be broadly useful outside of speaker design, and even there it's not an absolute.

There are no absolutes, but I think an understanding of the perception of a very basic speaker characteristic in an untreated space is broadly useful to home audio reproduction.

It is typical of what to expect from good audio DBT's, though, and illustrative of the difficulties in generating more information in the audio perception arena.

I'd say it is typical of what to expect from good audio DBTs, and illustrates how critical they can be to understanding audio perception.

Tim
 
We have to remember that Toole's study being discussed, and later continued by Sean, is about the listeners and not the speakers. The sample sizes are large enough, and the methodology IMO dealt more than adequately with issues such as selection bias. That said, these were tests of only one speaker at a time, and if one looks at the pictures of the two testing rooms and how the listeners were arranged (albeit they were also reshuffled, IIRC), it becomes understandable why there was a statistical preference for speakers with flat off-axis response.

While the results might be different with two speakers and a smaller sweet zone in which subjects are placed, that would be a different test, introducing much more than one variable, and it would not invalidate the original one. Speakers with narrow dispersion may well score high in a two-speaker test while simultaneously scoring poorly in a test such as the first. MLs scored badly in Sean's test. We already know that MLs don't sound as good when you aren't in the sweet spot, so the results are not that surprising. The takeaway for Harman is to build speakers for the home that fit the test results.

The overreach would be to interpret the results as saying all speakers with poor off-axis response are bad, or, as we see in forums, "your speakers suck because the graph says so". They are only poor off-axis. To say they are bad under the conditions an ML buyer would actually use a pair (a pair playing and the listener in the optimum position) is an overreach. There is no data to support that conclusion.
 
Meyer and Moran's study never demonstrated that listeners could hear the difference between any sources. For the test to be meaningful they really needed to include something like a 256k MP3 in addition to the CD and SACD sources; oh yeah, that would not have been distinguishable either...

You keep mentioning all these well done studies on audio perception, but they unfortunately don't exist. It would be nice if they did, and maybe someday they will, but that someday hasn't yet arrived.

Maybe you understood Jack's explanation of the limitations of Toole's studies better than mine?
 
We have to remember that Toole's study being discussed, and later continued by Sean, is about the listeners and not the speakers. The sample sizes are large enough, and the methodology IMO dealt more than adequately with issues such as selection bias. That said, these were tests of only one speaker at a time, and if one looks at the pictures of the two testing rooms and how the listeners were arranged (albeit they were also reshuffled, IIRC), it becomes understandable why there was a statistical preference for speakers with flat off-axis response.

While the results might be different with two speakers and a smaller sweet zone in which subjects are placed, that would be a different test, introducing much more than one variable, and it would not invalidate the original one. Speakers with narrow dispersion may well score high in a two-speaker test while simultaneously scoring poorly in a test such as the first. MLs scored badly in Sean's test. We already know that MLs don't sound as good when you aren't in the sweet spot, so the results are not that surprising. The takeaway for Harman is to build speakers for the home that fit the test results.

The overreach would be to interpret the results as saying all speakers with poor off-axis response are bad, or, as we see in forums, "your speakers suck because the graph says so". They are only poor off-axis. To say they are bad under the conditions an ML buyer would actually use a pair (a pair playing and the listener in the optimum position) is an overreach. There is no data to support that conclusion.

And I don't think anyone is reaching there, but I think linear off-axis response has a sonic impact in-room beyond the simple size of the sweet spot. And let's not forget that in Sean's current HK studies, the listener remains in the sweet spot; it is the speakers that shuffle.

Tim
 
Meyer and Moran's study never demonstrated that listeners could hear the difference between any sources. For the test to be meaningful they really needed to include something like a 256k MP3 in addition to the CD and SACD sources; oh yeah, that would not have been distinguishable either...

You keep mentioning all these well done studies on audio perception, but they unfortunately don't exist. It would be nice if they did, and maybe someday they will, but that someday hasn't yet arrived.

Maybe you understood Jack's explanation of the limitations of Toole's studies better than mine?

No, Meyer and Moran demonstrated that listeners could not consistently differentiate between Redbook and SACD resolution. Period. The addition of MP3 samples to Meyer and Moran would not have made it more meaningful, it would have made it a different study altogether. The introduction of a control, something subtle but clearly audible, may have revealed participants with bad hearing or a bias against hearing any audio differences at all, but that was covered by the depth and breadth of the study, the screening of the participants and their sheer numbers.

There are few studies of audio perception with enough depth and discipline to be meaningful, but there are a few. Yes, it would be nice if there were more.

Tim
 
And I don't think anyone is reaching there, but I think linear off-axis response has a sonic impact in-room beyond the simple size of the sweet spot. And let's not forget that in Sean's current HK studies, the listener remains in the sweet spot; it is the speakers that shuffle.

Tim

It definitely has an impact, but that would depend on the room too. That is why Earl (Geddes) has different recommendations for rooms with his CD loudspeakers.

IIRC, it was Sean himself who said here that in some of the groups in the large testing room, the listeners changed seating arrangements. The carousel is to switch speakers rapidly behind the blind. Pun intended :)
 
I would like to add that measurements mean "everything" (now hold on a minute :p) when it comes to characterizing the fidelity of a signal, an electronic signal in the case of audio, measured from the output of the mic to the output of the power amp. Of course, if you want to get very scientific (and we should), you can do a lot of special tests at the amp/speaker interface, given different amp topologies and their reactions to speaker loads, but still, measurements have their place in regard to signal integrity.

What you hear and prefer are other things entirely.

Where should we take the measurements: inside the components, outside in front of the speakers, or somewhere else in the room where we are most likely to sit down and relax?

How do we interpret those measurements?
 
Where should we take the measurements: inside the components, outside in front of the speakers, or somewhere else in the room where we are most likely to sit down and relax?

How do we interpret those measurements?

What are you trying to accomplish? You ask a question with many potential answers.
 
No, Meyer and Moran demonstrated that listeners could not consistently differentiate between Redbook and SACD resolution. Period. The addition of MP3 samples to Meyer and Moran would not have made it more meaningful, it would have made it a different study altogether. The introduction of a control, something subtle but clearly audible, may have revealed participants with bad hearing or a bias against hearing any audio differences at all, but that was covered by the depth and breadth of the study, the screening of the participants and their sheer numbers.
The concept of using positive and negative controls exists for exactly that reason: to validate that the test is also capable of revealing what it purports to test. If you don't understand that, then you don't understand what a valid DBT is and what one isn't.
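A minimal sketch of the idea, with simulated listeners (the response rates are assumptions, purely for illustration):

```python
import random

def abx_session(p_correct, trials=16):
    # number of correct answers from a listener with hit rate p_correct
    return sum(random.random() < p_correct for _ in range(trials))

random.seed(1)
pos  = abx_session(0.9)   # positive control: known-audible difference
neg  = abx_session(0.5)   # negative control: identical stimuli
test = abx_session(0.5)   # the comparison actually under test

print("positive %d/16, negative %d/16, test %d/16" % (pos, neg, test))
# If the positive control scores near chance, the rig or the panel is
# insensitive, and a null result on the real test means nothing.
```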

JJ probably can tell you more on this - he has mentioned it plenty of times!
 
I would like to add that measurements mean "everything" (now hold on a minute :p) when it comes to characterizing the fidelity of a signal, an electronic signal in the case of audio, measured from the output of the mic to the output of the power amp. Of course, if you want to get very scientific (and we should), you can do a lot of special tests at the amp/speaker interface, given different amp topologies and their reactions to speaker loads, but still, measurements have their place in regard to signal integrity.

What you hear and prefer are other things entirely.

Yes, this always seems like a reasonable position, but then what measurements & what procedure for recording/comparison are you suggesting should be used to check this fidelity?
 
Yes, this always seems like a reasonable position, but then what measurements & what procedure for recording/comparison are you suggesting should be used to check this fidelity?
When it comes to sources and amps, I think it should be possible to run a comprehensive battery of 'distortion' tests (THD, IMD, transient signals, linear distortions, e.g. frequency/phase response) and, based on psycho-acoustic models, to come up with a rough measure of the worst-case audibility of every deviation from perfection. As in engineering, you can't test a complete complex system exhaustively (far too many permutations, variables and dimensions), but you can test each component or sub-system exhaustively. You can only crash test a few cars, for example, giving only one or two test cases for each sub-system, but you can test each sub-system (e.g. anti-lock brakes) in a test rig many more times.
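As a minimal sketch of one item in such a battery, here's an SMPTE-style IMD measurement (60 Hz + 7 kHz mixed 4:1), with a soft clipper standing in for the real device under test:

```python
import numpy as np

fs, n = 96000, 96000                    # 1 s capture -> 1 Hz FFT bins
t = np.arange(n) / fs
x = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 7000 * t)
y = np.tanh(1.5 * x)                    # stand-in device under test

spec = np.abs(np.fft.rfft(y * np.hanning(n)))
ref = spec[7000]                        # the 7 kHz carrier
side = sum(spec[7000 + k * 60] ** 2 + spec[7000 - k * 60] ** 2
           for k in range(1, 5))        # +/- 60 Hz sideband pairs
print("IMD = %.2f %%" % (100 * np.sqrt(side) / ref))
```

Each such number would then be weighted by a psycho-acoustic audibility model, which is the hard part.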

If it could be established that certain sources were a hundred times better than they needed to be to be audibly 'perfect', at least they could be 'written out of the equation'. Doing the same for amps into realistic speaker loads (essential) might also allow us to write them out of the equation, too. We could then concentrate on speakers which would be the fun part, it seems to me!

(But of course, I am ignoring the "euphonic distortion" aspect which says that some people actually prefer a non-perfect system. I am not yet convinced that this is not merely a preference of theirs from the selection of inherently compromised systems that they have heard, and that these same people wouldn't prefer a 'perfect' one if they heard it.)
 
