Some More Evidence that Kids (American and Japanese) Prefer Good Sound

NorthStar

Member
Feb 8, 2011
Vancouver Island, B.C. Canada
I honestly don't know the cost breakdown of the Infinity speaker, but I could certainly find out. The main issue here is that you have trouble accepting that the relationship between a speaker's cost and its sound quality is not linear. To me that sounds like expectation bias.

I've been testing speakers for 26 years and have seen many examples of a well-engineered loudspeaker beating a poorly-engineered speaker costing ten or more times its price. What quality controls are in place to stop companies from building under-performing loudspeakers and charging a lot of money for them? None.

There are no federal audio agencies that require loudspeakers to pass some basic, meaningful sound-quality standard, or to be submitted to clinical listening trials to show that they cause no adverse effects on your enjoyment. The current industry loudspeaker specifications are entirely useless for indicating how the speakers sound, and the audio review process is sighted, biased and largely ineffective. Consumers today cannot reliably find a store where they can do an A/B demonstration of the product they're interested in purchasing.

So companies are free to design and manufacture speakers and charge whatever they want because there are virtually no meaningful specifications or controls in place that indicate how good the loudspeaker performs and sounds.

Welcome to the Wild West of our Loudspeaker Industry!

Great post Sean! ... And it is also the same with electronics, and many other things in life .... :b
 

Tonepub

New Member
Jun 3, 2011
Great time meeting Dr. Olive when I visited Harman last week. Got to listen to their double blind testing and it was pretty exciting...

Looking forward to spending more time with their downloadable tests.
 

mep

Member Sponsor & WBF Founding Member
Apr 20, 2010
Great time meeting Dr. Olive when I visited Harman last week. Got to listen to their double blind testing and it was pretty exciting...

Looking forward to spending more time with their downloadable tests.

So did the Harman speakers drub all of the competition while you were there? Did you get to sit on axis? Do you know what speakers you compared against the Harman speaker(s)?
 

Bjorn

VIP/Donor
Oct 12, 2010
Norway
This is nonsense. Are you saying if speakers sound bad 30 degrees off-axis it's not fair to listen at that spot? What about listeners not sitting in the sweet spot? And to be fair, the speakers were tested with listeners sitting both on and off-axis.

Also, in the case of the ML speaker, it actually was rated lower when listeners sat on-axis versus off-axis, because its spectral balance is actually better off-axis than on-axis. If you look at the anechoic measurements, the first two curves from top to bottom represent the sound received on-axis and slightly off-axis (we call this the listening window). Both curves indicate an elevated mid-treble relative to the bass, making it sound very bright, harsh and thin. As you move off-axis, the third curve from the top (first reflections) shows a more balanced, albeit slightly "dull", frequency response. For listeners sitting off-axis the speaker is more balanced. The reason the Harman listeners rated it so low compared to the other groups is that the Harman listeners were all sitting on-axis, whereas the other listening groups were distributed in seats both on-axis and slightly off-axis.

So, if anything, we were doing the speaker a favor by including listeners sitting off-axis. If we included only on-axis listening results it would have been rated even lower.
Thanks for the clarification.
I disagree when it comes to buyers listening to the ML off-axis, but you are obviously correct about the rest, and I hadn't noticed that the ML measured worse on-axis. Very interesting. I also assume correct placement (enough space to the front wall) was taken into consideration.

Thanks for sharing this with us. I completely agree with you that price doesn't necessarily mean anything.
 

Keith_W

Well-Known Member
Mar 31, 2012
Melbourne, Australia
In my field, if a pharmaceutical company publishes a paper suggesting that their drug is massively superior to the competition, we look at it with skeptical eyes. Of course, there is every possibility that the drug is indeed superior. But the very fact that the paper was sponsored and produced by the same company that makes the drug is enough to cast suspicion on the data-gathering methods and on the impartiality of the experimenters and reporters. There are many ways to massage the data to suit the needs of the pharmaceutical company. It does not even have to be done consciously (lest some people think I am accusing Dr. Olive of some nefarious motive); bias can operate unconsciously. This is why some studies also blind the people collecting the results. Was such blinding done in this case?

In this case we have a paper published by a speaker manufacturer suggesting that their inexpensive speaker is superior to some speakers many times the price. One of the speakers chosen is known to give flawed performance when listened to off-axis, yet the listening tests were conducted such that most of the listeners were listening off-axis. No statistics were supplied to show whether statistical significance was reached. There is in fact a strong likelihood that instead of the neat data point Dr. Olive presented in his graph, what you are actually looking at is a "cloud", where the data point may not correlate with reality because the small sample size did not reach statistical significance. If it is not statistically significant, then the study means nothing.

Of course, what we are looking at is a summary of the paper and not the actual paper. I would hope that the study itself is statistically significant; otherwise one would wonder what kind of organization accepts such low-quality studies for publication.
 

amirm

Banned
Apr 2, 2010
Seattle, WA
In my field, if a pharmaceutical company publishes a paper suggesting that their drug is massively superior to the competition, we look at it with skeptical eyes.
What if an independent government agency performed it? That is what we have here, as much of what is published regarding speakers was the result of Canadian government funding of the NRC:

Here is more Dr. Toole research from NRC:
http://www.aes.org/e-lib/browse.cfm?elib=5270

Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 2
Author: Toole, Floyd E.
Affiliation: National Research Council, Ottawa, Ont. K1A OR6, Canada
JAES Volume 34 Issue 5 pp. 323-348; May 1986

"Using the highly reliable subjective ratings from an earlier study, loudspeaker measurements have been examined for systematic relationships to listener preferences. The resuls has been a logical and orderly organization of measurements that can be used to anticipate listener opinion. With the restriction to listeners with near-normal hearing and loudspeakers of the conventional forward-facing configuration, the data offer convincing proof that a reliable ranking of loudspeaker sound quality can be achieved with specific combinations of high-resolution free-field amplitude-response data. Using such data obtained at several orientations it is possible to estimate loudspeaker performance in the listening room. Listening-room and sound-power measurements alone appear to be susceptible to error in that while truly poor loudspeakers can generally be identified, excellence may not be recognized. High-quality stereo reproduction is compatible with those loudspeakers yielding high sound quality; however, there appears to be an inherent trade-off between the illusions of specific image localization and the sense of spatial involvement.

"

In this case we have a paper published by a speaker manufacturer suggesting that their inexpensive speaker is superior to some speakers many times the price.
Well, they also say inexpensive speakers sound really good when they make their own expensive speakers :). Reminds me of me cringing half the time in my old job when Microsoft Research wanted to publish papers to tell the world our secret sauce :).

One of the speakers chosen is known to give flawed performance when listened to off axis, yet the listening tests were conducted such that most of the listeners were listening off axis. There were no statistics supplied to show if statistical significance was reached. There is in fact a strong likelihood that instead of a neat data point that Dr. Olive presented in his graph, what you are actually looking at is a "cloud" where the actual data point may not correlate with reality because the small sample size did not reach statistical significance. If it is not statistically significant, then the study means nothing.
The type of research presented here has been repeated over and over again with similar results. I have sat through two of these tests and the results were very consistent, both for me and for others in the room. Nothing is 100% certain, of course, but directionally there is no accident here.
 

MylesBAstor

Well-Known Member
Apr 20, 2010
New York City
What if an independent government agency performed it? That is what we have here as much of what is published here regarding speakers was the result of Canadian Government funding of NRC:




Well, they also say inexpensive speakers sound really good when they make their own expensive speakers :). Reminds me of me cringing half the time in my old job when Microsoft Research wanted to publish papers to tell the world our secret sauce :).


The type of research provided here has been repeated over and over again with similar results. I have sat through two of them and the results were very consistent both for me and for others in the room. Nothing is 100% certain of course but directionally, there is no accident here.

Amir, I'm sorry, but ANY scientific study has to be statistically validated! In addition, you know that it's a requirement nowadays for all academic studies to disclose any potential conflict of interest! That's SOP. Furthermore, I've never seen a scientific study in the last 30 years published without the corresponding statistical analysis; I'd be shocked if this data could be published without statistical analysis.

As a matter of fact, as in any properly designed study, the statistical parameters should have been established first. You and I know that 100 participants in a study is nothing given the variability, especially if only a subset, say 30 percent of the group, differs. That might increase the numbers required to 1,000 or more; ergo the need in medicine for large, inter-institutional studies.
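The sample-size point can be made concrete with a back-of-the-envelope power calculation (a sketch using the normal approximation for a simple two-group comparison; the Harman tests actually use repeated measures, which need far fewer listeners):

```python
from statistics import NormalDist
from math import ceil

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison
    (normal approximation to the t-test power formula)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# A small preference difference (Cohen's d = 0.3) needs ~175 listeners
# per group; a large one (d = 0.8) only ~25.
print(n_per_group(0.3))  # 175
print(n_per_group(0.8))  # 25
```

So whether "100 participants is nothing" depends entirely on how big the preference difference between speakers actually is.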

And I still don't understand how listeners off-axis gave the same results as those on-axis, nor was it ever established that the speakers in the test were broken in and, in the case of the ML, allowed to charge up for at least 24 hours before listening.
 

amirm

Banned
Apr 2, 2010
Seattle, WA
Amir, I'm sorry but ANY scientific study has to be statistically validated!
That wasn't the argument being made. What I read was that this was funded by people with an interest in the outcome and was therefore potentially corrupt. I showed that the research came out of government funding, so that doesn't apply here.

As for your comment, it is a generic one. What do you mean by it? There are certainly enough data points for the results to be significant: http://seanolive.blogspot.com/2008/12/loudspeaker-preferences-of-trained.html

"To study this question, the author conducted a large study (see reference 1) that compared the loudspeaker preferences of 300+ untrained and trained listeners...."

From Sean's AES paper on modelling the listening tests:

"The selection of 70 loudspeakers was based on the
competitive samples purchased for performance
benchmarking tests performed for each new JBL,
Infinity and Revel model.

The price range of samples varied from $100 to
$25,000 per pair and includes models from 22
different brands from 7 different countries: United
States, Canada, Great Britain, France, Germany,
Denmark and Japan. The loudspeakers included
designs that incorporated horns and more traditional
designs configured as 1-way to 4-ways. Some used
waveguides, while others did not. The sample also
included four professional 2-way active models
referred to as “near-field” monitors. The vast
majority of the speakers were forward-facing driver
designs, with one electrostatic dipole sample."


So we had a lot of subjects and a lot of test devices.
In addition, You know that it's a requirement nowadays for all academic studies to include any potential conflict of interest! That's SOP.
Full attribution is provided by the authors as to who they are.

Furthermore, I've never seen a scientific study in the last 30 years published without the corresponding statistical analysis; I'd be shocked that this data could be published without statistical analysis.
Sean's AES reports are chock-full of statistical analysis. Here is an example:

"RESULTS
In this section the results of the two listening tests are
presented and discussed.
3.1 Statistical Analysis
The results of the four-way and three-way tests were
analyzed separately using a repeated-measures analysis of
variance (ANOVA). In both tests the dependent variable
was preference rating.

The four-way test was analyzed as a 16 4 4 2
design where the within-subject fixed factors included
loudspeaker (4 levels), program (4 levels), and session (2
levels; morning and afternoon). The between-subjects factor,
group (16 levels), is a nominal variable representing the
16 different groups of listeners that participated in the test.
The repeated measures ANOVA for the three-way test
consisted of a 20 3 4 design that included the
between-subjects factor, group (20 levels), and the within subject
fixed factors, loudspeakers (3 levels) and program
(4 levels). There was no afternoon repetition of the test so
session was not a variable..."


There are 3 pages full of words and graphs like this. :)
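For readers unfamiliar with the method quoted above, here is a toy pure-Python sketch of the one-way repeated-measures ANOVA logic, on invented ratings (three listeners rating three speakers, vastly simpler than the 16 × 4 × 4 × 2 design in the paper):

```python
def rm_anova_f(ratings):
    """One-way repeated-measures ANOVA F statistic.
    ratings[i][j] = rating given by listener i to speaker j."""
    n = len(ratings)      # listeners (subjects)
    k = len(ratings[0])   # speakers (within-subject conditions)
    grand = sum(sum(r) for r in ratings) / (n * k)
    cond_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    subj_means = [sum(r) / k for r in ratings]

    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_error = ss_total - ss_cond - ss_subj  # residual after removing subject offsets

    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_error / df_error)

# Invented preference ratings: every listener ranks the speakers the
# same way, so the speaker effect is huge relative to the residual.
ratings = [
    [7, 5, 4],   # listener A
    [8, 6, 4],   # listener B
    [6, 4, 3],   # listener C
]
print(rm_anova_f(ratings))  # F(2, 4) ≈ 76.0
```

The point of the repeated-measures design is exactly what the quote describes: each listener's personal rating offset is removed before testing whether the speakers differ, which is why such tests need far fewer subjects than between-group designs.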

As a matter of fact, as in any properly designed study, the statistical parameters should have been established first. You and I know that 100 participants in a study is nothing given the variability especially if only a subtraction, say 30 pct of the group, differs. That might then increase the numbers required to a 1000 or more ergo the need in medicine for large, inter-institutional, studies.
This research has been going on for three decades, under multiple organizations, with consistent results that have been heavily published and reviewed. In my opinion, it provides high-confidence results and conclusions. That some doubt is left is not cause to dismiss its results, especially when my personal experience matched what they have found in not one but two sessions. That kind of spot test gives one high confidence here.

And I still don't understand how those off axis gave the same results as the on axis, nor was it ever established tha the speakers in the test were broken in and in the case of the ML allowed to charge up for at least 24 hrs before listening.
You ask about statistics. Do you have research data with statistical analysis that proves there is anything to "break in" here? I have seen tests that Harman did for their speakers and they showed zero impact of break-in. Measurements were identical for speakers new and broken in.
 

MylesBAstor

Well-Known Member
Apr 20, 2010
11,238
81
1,725
New York City
You ask about statistics. Do you have research data with statistics analysis that proves there is anything to be "broken" in here? I have seen tests that Harman did for their speakers and they showed zero impact of break in. Measurements were identical for speakers new and broken in.

Well Amir, why did Sean refuse to answer this simple request?

Personally, Amir, what do your ears tell you? And I think there's good evidence that the stators need 24 hours to be fully charged; MLs certainly sound like crap uncharged or after being unplugged. This result raises my eyebrows about the tests, then.

Are you really telling me that drivers don't need to loosen up? That caps and wires don't need to break in? That speakers don't sound bass-shy or dynamically constricted when new? C'mon! Then there's something amiss here.
 

Keith_W

Well-Known Member
Mar 31, 2012
Melbourne, Australia
Amir, I am not disputing the conclusion. I have absolutely no problem with the statement that accurate speakers are preferred by listeners.

What I am disputing is the quality of this particular study: no statistics, and a conclusion that a speaker made by the company that sponsored the study beats the competition. Am I the only one who smells a rat here?
 

amirm

Banned
Apr 2, 2010
Seattle, WA
Personally Amir, what do your ears tell you?
It is hard to answer that question, as I have not been able to A/B broken-in and non-broken-in speakers blind.

And I think there's good evidence that the stators need 24 hrs to be fully charged. I certainly think they sound like crap uncharged or unplugged. But this result certainly raises my eyebrows about the tests then.
My eyebrows would be heavily raised if someone sells a multi-thousand dollar speaker manufactured one at a time that sounds like crap and they could not be bothered to run them for 24 hours to avoid that :).

Are you really telling me that drivers don't need to loosen up? That speakers don't sound bass shy when new? Then there's something amiss here.
I am not just telling you that. I am saying folks who do this for a living have run tests and measurements, and they say it is a myth. And they have shown me the before/after data. If you have statistically valid, non-biased research that says otherwise, I'd love to see it.
 

MylesBAstor

Well-Known Member
Apr 20, 2010
New York City
I am not just telling you that. I am saying folks who do this for a living have run tests and measurements and they say it is a myth. And have shown the data to me before/after. If you have statistically valid, non-biased research that says otherwise, I love to see it.

Or there's something being missed in the testing.
 

amirm

Banned
Apr 2, 2010
Seattle, WA
Or there's something being missed in the testing.
They measured the speaker before and after. What was there to miss, when the company doing it is in the business of producing the same and has a stake in knowing the truth?

But as I asked, what data is there otherwise? How come all the companies that say break-in is required don't produce before-and-after measurements? We are talking about a mechanical change, right? Measurements should show this clearly.
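The before/after argument is easy to express numerically. A minimal sketch (the magnitude data below is invented for illustration, not Harman's): if break-in were a real mechanical change, the level difference between the "new" and "played" measurements would have to exceed normal measurement repeatability:

```python
import math

def max_db_deviation(before, after):
    """Largest level difference, in dB, between two magnitude
    responses measured at the same frequency points."""
    return max(abs(20 * math.log10(a / b)) for a, b in zip(after, before))

# Hypothetical linear magnitudes of the same driver when new and
# after 100 hours of play (invented numbers).
new = [1.00, 0.98, 1.02, 0.99]
played = [1.00, 0.97, 1.02, 1.00]

# Differences of roughly 0.1 dB or less are within typical measurement
# repeatability, i.e. no evidence of a mechanical change.
print(round(max_db_deviation(new, played), 2))  # 0.09
```

A claimed audible break-in effect that never shows up as more than a repeatability-sized wiggle in such a comparison is exactly the kind of thing the measurements would have caught.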
 

amirm

Banned
Apr 2, 2010
Seattle, WA
What I am disputing is the quality of this particular study: no statistics, and a conclusion that a speaker made by the company that sponsored the study beats the competition. Am I the only one who smells a rat here?
Sorry, did you miss my post completely? I showed that the work here dates back to the NRC in the 1980s, when none of the researchers worked for a commercial entity. And I showed a ton of statistical analysis, and the large scale of the tests, when answering Myles.

The report here is just a recent chapter; much predates this work. And remember that Harman as a company had to be pulled in this direction, kicking and screaming at times, just the same. Today, billions of dollars in business ride on this work as the technology gets designed into mid- to high-end cars in addition to professional and consumer lines.

Now, if others had conflicting research and listening tests that said otherwise, it would be one thing. But here we are since 1986, and folks have not presented them. Plenty of time has gone by. Logic would say there is goodness here.

Honestly, guys, my antenna is always up on such things just the same. Until I sat through the tests, I was cagey too. But once you do that and read all the research -- which I have done multiple times :) -- you realize there is excellent science here.
 

Keith_W

Well-Known Member
Mar 31, 2012
Melbourne, Australia
Amir, I did not miss your post or the relevant articles you quoted. Like I said, I have no problem with the statement that listeners prefer more accurate speakers. My beef is with this particular study, which reads suspiciously like a piece of Harman marketing. I have said the same thing three times now, only this time I have bolded the relevant statements.

And BTW, I might go back and take a closer look at the stats quoted in the other paper, since it appears to be the only full paper cited in this thread. I am not willing to pay $20 to download an article from the AES.
 

amirm

Banned
Apr 2, 2010
Seattle, WA
Amir, I did not miss your post or the relevant articles you quoted. Like I said, I have no problem with the statement that listeners prefer more accurate speakers. My beef is with this particular study, which reads suspiciously like a piece of Harman marketing. I have said the same thing three times now, only this time I have bolded the relevant statements.
I think we continue to talk past each other. To confirm: the fact that the same kind of findings were there in 1986, under funding from the Canadian government, does not change your views? If a researcher leaves a university and 20 years later repeats the same test for a different company, arriving at the same outcome, you would say the results are fishy then, but not while he was at the university?
 


microstrip

VIP/Donor
May 30, 2010
Portugal
(...)
My eyebrows would be heavily raised if someone sells a multi-thousand dollar speaker manufactured one at a time that sounds like crap and they could not be bothered to run them for 24 hours to avoid that :).
(...)

Amir,

I do not want to enter your fight for accurate speakers, but it seems you have never owned electrostatics for a reasonably long time. I have experience with Quads, Soundlabs, Audiostatic, Final, Martin Logan and even the old B&W DM70. The effects of burn-in on these speakers are very noticeable, and they have a slow, long time component. It seems your comfort and stability criteria for a domestic speaker rule out this type of speaker. OK, let us accept that without denigrating the manufacturers, and maybe debate it in a thread about burn-in, not about preferences.
 

Keith_W

Well-Known Member
Mar 31, 2012
Melbourne, Australia
I think we continue to talk past each other. To confirm: the fact that the same kind of findings were there in 1986, under funding from the Canadian government, does not change your views? If a researcher leaves a university and 20 years later repeats the same test for a different company, arriving at the same outcome, you would say the results are fishy then, but not while he was at the university?

Amir - once again, my issue is with this particular paper, or at least with what has been presented in public so far. I would need to read the entire original article to give a more balanced viewpoint. However, from what I see, I am doubtful that the numbers reach statistical significance, and if they don't, then there is no point publishing the paper.

I do not have an opinion on the 1986 paper, because I have not read it beyond the abstract you linked to. You cannot form an opinion on a paper from reading the abstract. I do not want to pay $20 to download the paper either. I trust that you have read it and found the data and analysis to your satisfaction?

I should also make the point that just because a paper is sponsored by a commercial entity does not mean its results can be summarily dismissed. The paper should still be evaluated on its merits, but one should pay attention to any parts of the experiment that suggest a conflict of interest may have skewed the result. Conversely, just because a paper was funded by a university or government department does not mean it is free from bias. All papers should be evaluated on what is presented (and questions asked about what has not been presented). This is much more nuanced than simply accepting or rejecting papers based on who sponsored them.

Ergo, I have no opinion on Dr. Olive's previous work because I have not read the papers.

Once again: I support the position Dr. Olive is advocating. But I do not think this paper is particularly convincing.
 
