Why the Harman mono speaker test was wrong for dipole planars

Phelonious Ponk

This has been discussed to death here. And the point currently being made, that the room placement was not ideal for the MLs, has been made many times. If that were the only test made, one could dismiss the tests completely, but it wasn't. The MLs, and all the other speakers in the test, were measured on many axes in an anechoic chamber. The MLs measured poorly. Then they performed poorly in listening tests. This is not an isolated result. I know many here don't want to believe that measurements can predict performance, but in decades of testing by Toole and Olive, measured speaker performance has been confirmed in listening tests.

Would the MLs sound better if their position and the room treatments were tweaked to suit, and the listeners were all sitting in the sweet spot? You bet. Their frequency response changes dramatically as you move off axis, so room treatment, speaker placement and listening position are all more critical than they are for most speakers. Whether you consider that a feature or a flaw is up to you. Personally, I find the single-chair listening room concept incompatible with the joy of music in my life. If I wanted to do that kind of listening, I'd go back to a near-field setup. YMMV, and if it does, MLs may be a great choice for you. That doesn't negate the research.

Tim
 

Argonaut

Obviously Noel Keywood doesn't know one end of a scope from the other! But then he doesn't have a serious 'Weed Up His Ass' where ML are concerned.

"MEASURED PERFORMANCE
The vertical high frequency panel of the CLX is a little directional, but less so than the budget Martin Logan models; moving the measuring microphone laterally in front of the CLX altered the basic high to low energy balance, rather than upsetting frequency response. This means midrange and treble remains as smooth and extended as our stepped sine wave analysis shows, moving up or down in prominence, as listening position changes, relative to bass and midrange frequencies below 1kHz. So whilst the CLX is listening position critical, it isn’t too demanding in this respect. It also drove our 28ft square measuring room well, much like the Kingsound Prince II tested in our April 09 issue, giving a consistent sound over a wide frontal area.

Frequency response of the CLX is flat from 700Hz all the way up to 20kHz, so it will sound even in its midband and upper midband/treble delivery. Below 700Hz output is on average 3dB up, right down to 55Hz no less. With a monopole this would give a fulsome balance, but with a dipole it gives a natural balance, likely because the solid radiation angle and associated acoustic power is less. A low frequency peak at 60Hz (third octave analysis, not shown here) suggests there will be no lack of punchy bass. This looks like a carefully tailored euphonic balance that will be easy on the ear.

Electrostatics are usually insensitive but the CLX isn't much different from conventional loudspeakers in this area, producing 84dB sound pressure level from one nominal Watt of input (2.84V). This is far better than the 11dB less sensitive (73dB) Kingsound Prince IIs for example. The CLX is a similar amplifier load however, comprising a huge low frequency peak, our analysis shows, reaching a maximum of 125 Ohms at 16Hz, falling to 11 Ohms DCR at 0Hz and 1.5 Ohms at 20kHz. Above 100Hz impedance falls below 15 Ohms and the overall figure measured a normal 5.5 Ohms. The CLX is reactive at low frequencies only, which should not be a problem to amplifiers. Above 300Hz it is largely resistive, making it an easy amplifier load, except for the 1.5 Ohm minimum at 20kHz which could conceivably be too demanding for some solid-state amplifiers. Again, valve amplifiers cope best.

The loudspeaker’s spectral decay over 200mS showed there is remarkably little colouration, a strength of the electrostatic, and decay is fast and even across the frequency spectrum, with a small amount of overhang at 60Hz as expected. Distortion levels were a little above conventional loudspeakers and varied across the bass panel area below 100Hz, falling from 3% at 40Hz down to 1% or so at 100Hz, then around 0.3% up to 1kHz, falling to 0.1% to 6kHz.

The CLX delivers a smooth yet extended sound into the room, free from serious frequency response anomalies. The balance emphasises lows a little, for warmth and body, and low bass output is strong and deep. Its basic accuracy and smoothness of output is excellent for such a big panel, it drives the room evenly and colouration is extremely low, so this is a quality design. NK "
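As an aside for readers checking the numbers, here is a minimal sketch (assuming the usual convention that the "1 W" reference is about 2.83-2.84 V into a nominal 8 Ohm load) of how the quoted voltage relates to power, and what the 11 dB sensitivity gap between the CLX and the Kingsound implies for amplifier power. The helper functions and figures are illustrative, not from the review:

```python
def watts_into_load(volts, impedance_ohms):
    """Electrical power into a resistive load: P = V^2 / R."""
    return volts ** 2 / impedance_ohms

def power_ratio_for_spl_gap(db_difference):
    """Power ratio needed to close an SPL gap of db_difference dB,
    assuming the same listening distance: 10^(dB/10)."""
    return 10 ** (db_difference / 10)

# 2.84 V into a nominal 8 Ohm load is roughly the 1 W reference the review uses.
print(round(watts_into_load(2.84, 8.0), 2))     # ~1.01 W

# CLX at 84 dB from 2.84 V vs Kingsound Prince II at 73 dB: an 11 dB gap means
# the less sensitive speaker needs roughly 12.6x the amplifier power to reach
# the same level.
print(round(power_ratio_for_spl_gap(11.0), 1))  # ~12.6
```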
 

Phelonious Ponk

Obviously Noel Keywood doesn't know one end of a scope from the other! But then he doesn't have a serious 'Weed Up His Ass' where ML are concerned.

"MEASURED PERFORMANCE
The vertical high frequency panel of the CLX is a little directional, but less so than the budget Martin Logan models; moving the measuring microphone laterally in front of the CLX altered the basic high to low energy balance, rather than upsetting frequency response. [...]"

What is the difference between "basic high to low energy balance" and "frequency response?"

Tim
 

Gregadd

Strange, it appears the performance of the ML in the Harman test is the death sentence for ML. However, the poor performance of Harman speakers in the Consumer Reports tests seems to be just an aside, not worthy of even a passing discussion.
 


amirm

Strange, it appears the performance of the ML in the Harman test is the death sentence for ML. However, the poor performance of Harman speakers in the Consumer Reports tests seems to be just an aside, not worthy of even a passing discussion.
Not at all. Not only has it been discussed, there is a full AES paper and research demonstrating the flaws in Consumer Reports' (CU) testing. From the AES paper, A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part I - Listening Test Results:
Sean E. Olive, AES Fellow
Harman International Industries, Inc., Northridge, CA, 91329, USA

"A sample of 13 loudspeakers was selected from
the 23 different models reviewed in the August 2001
edition of Consumer Reports [9].
The speakers tested
are listed in Table 1 in descending order based on the
overall score given by CU.

[...]
The tests produced a total of 2,912 ratings for
preference, distortion, spectral balance, in addition to
comments (13 sessions x 4 loudspeakers x 4 programs x 2 repeats x 7 listeners = 2,912 ratings). The 13 listening test sessions were performed double blind using
monophonic (single loudspeaker) comparisons.


[...]

Fig. 4 graphically compares the loudspeaker rank order based on the measured preference rating and the CU predicted overall quality rating. Apart from the lowest ranked speaker (L13) there is little correlation between the two. The top two ranked speakers in our test (L1 and L2) were ranked near the bottom according to CU (10th and 11th place). Similarly a speaker ranked 2nd by CU (L10) was rated 10th according to our tests. There is clearly no correlation between listeners’ loudspeaker preferences and CU’s objective-based sound quality ratings."


Put succinctly, Consumer Reports' (CU) ranking is based on sound power, which is the total energy coming out of a loudspeaker in all directions. That is NOT what hits you as a listener in a room. What you hear is a mix of the direct sound with reduced levels coming from other directions. Treating all of these as equally important leads to results that fail, and fail miserably, when put to the test in listening tests. You know, when we use the ear to evaluate loudspeakers rather than blind measurements, pun intended :).
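As a side note, a minimal sketch may make the weighting difference concrete. Everything below is invented for illustration (the direction labels, levels and weights are assumptions, not Harman's or CU's actual figures); it only shows how averaging all directions equally, sound-power style, can rank two hypothetical speakers differently than a listener-oriented estimate that weights the direct sound more heavily:

```python
# Hypothetical per-direction levels (dB) for two imaginary speakers in one
# frequency band. All numbers are invented purely for illustration.
speaker_a = {"on_axis": 88.0, "early_reflections": 84.0, "rear_and_sides": 83.0}
speaker_b = {"on_axis": 83.0, "early_reflections": 85.0, "rear_and_sides": 88.0}

def equal_weight_average(levels):
    """Sound-power-style figure: every direction counts the same.
    (Averaging dB directly is a simplification; true sound power sums
    intensities over a sphere, but the ranking logic is the point here.)"""
    return sum(levels.values()) / len(levels)

def listener_weighted_average(levels):
    """Listener-oriented estimate: the direct sound dominates and
    reflections contribute at reduced weight. Weights are assumptions."""
    weights = {"on_axis": 0.6, "early_reflections": 0.3, "rear_and_sides": 0.1}
    return sum(levels[k] * w for k, w in weights.items())

for name, spk in [("A", speaker_a), ("B", speaker_b)]:
    print(name,
          round(equal_weight_average(spk), 1),
          round(listener_weighted_average(spk), 1))
# Equal weighting rates A and B about the same (85.0 vs 85.3), while the
# listener-weighted estimate clearly favours A (86.3 vs 84.1): a metric that
# treats all directions as equal can misrank what we actually hear.
```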

As far as I recall, Consumer Reports accepted the results and was looking for ways to incorporate this research into its rankings. The research, as the paper's title indicates, shows another way to compute a metric from measurements, one that does correlate highly with listening test results; that is Part 2 of the above paper. This is its abstract:

"A new model is presented that accurately predicts listener preference ratings of loudspeakers based on anechoic
measurements. The model was tested using 70 different loudspeakers evaluated in 19 different listening tests. Its
performance was compared to 2 models based on in-room measurements with 1/3-octave and 1/20-octave
resolution, and 2 models based on sound power measurements, including the Consumers Union (CU) model, tested
in Part One. The correlations between predicted and measured preference ratings were: 1.0 (our model), 0.91 (in room,
1/20th-octave), 0.87 (sound power model), 0.75 (in-room, 1/3-octave), and ?0.22 (CU model).
Models based
on sound power are less accurate because they ignore the qualities of the perceptually important direct and early reflected
sounds. The premise of the CU model is that the sound power response of the loudspeaker should be flat,
which we show is negatively correlated with preference rating. It is also based on 1/3-octave measurements that are
shown to produce less accurate predictions of sound quality."


A correlation of 1.0 means the model's predictions match the listening test results perfectly; -0.22 means essentially no correlation (in fact slightly negative).
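For anyone who hasn't worked with correlation coefficients, here is a minimal sketch of how such a number is computed. The ratings below are invented, not the paper's data; the point is only to show how a value near 1.0 and a value near zero (or slightly negative) arise:

```python
import math

def pearson_correlation(xs, ys):
    """Pearson correlation: +1 = predictions track results perfectly,
    0 = no linear relationship, -1 = perfectly opposite."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    norm_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    norm_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (norm_x * norm_y)

# Invented example: measured listener preference for five speakers, plus two
# hypothetical sets of "predicted" ratings.
measured   = [7.2, 6.5, 5.9, 5.1, 4.0]
good_model = [7.0, 6.6, 6.0, 5.0, 4.2]   # tracks preference closely
poor_model = [5.0, 6.8, 4.5, 6.9, 5.5]   # essentially unrelated

print(round(pearson_correlation(measured, good_model), 2))  # ~0.99
print(round(pearson_correlation(measured, poor_model), 2))  # ~-0.15
```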
 

Gregadd

Amir, congratulations on breaking your silence. I hope you appreciate Christmas music. You always have something informative to say.
I appreciate your expansion of your previous argument that the Consumer Reports (CR) results were flawed because the test subjects were simultaneously participating in a drug trial. Indeed, in Dr. Toole's own video, posted by you, he states he responded to the Consumer Reports tests because he had been called on the carpet by the Harman CEO: they paid him good money to make our speakers sound good. He then set out to discredit the Consumer Reports test results (by his own admission). While it appears he was able to gain some capitulation from the CR hierarchy, the fact remained that the participants liked his speakers least.

Just in case you missed my point, allow me to be succinct: test methodology and analysis of the resulting data can dictate the outcome. You seem willing to admit that. That is, CR achieved the results it did because of its methodology and analysis. The test subjects made their choices, at least in part, because they were drugged. I get it (personally I tend to like everything when I'm intoxicated :)).

I am not a member of the AES, nor do I have any intention of purchasing any papers from them. When I said the CR results are treated as an aside, I was not referring to the AES paper.
Any test can be flawed. For example, 13 of the 23 speakers from the CR test were chosen by Harman. Were they chosen at random? How do we account for bias?

I wonder, did you really consider the quotes you made? Did CR inadvertently prove what subjectivists have argued all along?

"There is clearly no correlation between listeners’ loudspeaker preferences and CU’s objective-based sound quality ratings."

It says it right there. No correlation! An improper test is more likely to produce inconclusive results.
 

amirm

Amir, congratulations on breaking your silence.
I did not. My silence was regarding the results of the listening tests at Harman and the ML loudspeakers. You asked me about the Consumer Reports tests, which I have yet to hear anyone champion. So I assume there won't be any threats of committing suicide based on that discussion.

I hope you appreciate Christmas music. You always have something informative to say.
Thank you, and I hope you are enjoying some great music as well.

I appreciate your expansion of your previous argument that the Consumer Reports (CR) results were flawed because the test subjects were simultaneously participating in a drug trial.
No, there were no subjects in the CR evaluation. It was all based on measurements and ranking loudspeakers that way. They assumed, without verification, that a full measurement of the sound energy around a loudspeaker would indicate the goodness of the loudspeaker. They thought they were doing one better than the typical single-axis frequency response measurement.

Experts in loudspeaker design, I am sure, already knew that CR was wrong. They could say so, or show it in practice, and that is what Harman did: they set out to perform a listening test. The listening tests with the same loudspeakers used in the CR ratings show results that, except for one loudspeaker, did not correlate at all with the measurements CR was using.

Indeed, in Dr. Toole's own video, posted by you, he states he responded to the Consumer Reports tests because he had been called on the carpet by the Harman CEO: they paid him good money to make our speakers sound good. He then set out to discredit the Consumer Reports test results (by his own admission). While it appears he was able to gain some capitulation from the CR hierarchy, the fact remained that the participants liked his speakers least.
Again, there were no participants in the CR tests, because the rating was based on a measurement, not any kind of listening test. What Harman discredited was a measurement that did not correlate with what we actually hear.
 

thedudeabides

Now, if you have nothing to share on the topic of the thread, please stay out of it.

I tried, but obviously, from your "measure poorly equals sounds poorly" perspective, I've clearly failed. I did try to pose some "succinct" questions in an attempt to understand your perspective, but they by and large went unanswered. That's OK with me.

I will close by saying that I've really enjoyed listening to "poor sounding speakers" for some 25 years. And I suspect that my current MBLs, which I thoroughly enjoy, would measure poorly and, of course, sound subpar.

Happy New Year.
 

amirm

Just in case you missed my point, allow me to be succinct: test methodology and analysis of the resulting data can dictate the outcome. You seem willing to admit that. That is, CR achieved the results it did because of its methodology and analysis. The test subjects made their choices, at least in part, because they were drugged. I get it (personally I tend to like everything when I'm intoxicated :)).
Again, there were NO test subjects in the Consumer Reports ratings. They measured each loudspeaker's response with a computer and used that to judge its performance. That could have been the right assumption, but they should have verified it with a listening panel and did not. So Harman did, and published the results for everyone in the industry to read and judge.

I am not a member of the AES, nor do I have any intention of purchasing any papers from them. When I said the CR results are treated as an aside, I was not referring to the AES paper.
Any test can be flawed. For example, 13 of the 23 speakers from the CR test were chosen by Harman. Were they chosen at random? How do we account for bias?
If you don't want to read the paper, here is Sean Olive's blog about it: http://seanolive.blogspot.com/2008/12/are-consumer-reports-loudspeaker.html

"For over 35 years, Consumer Reports magazine recommended loudspeakers to consumers based on what many audio scientists believe to be a flawed loudspeaker test methodology. Each loudspeaker was assigned an accuracy score related to the "flatness" of its sound power response measured in 1/3-octave bands. Consumers Union (CU) - the organization behind Consumer Reports - asserted that the sound power best predicts how good the loudspeaker sounds in a typical listening room. Until recently, this assertion had never been formally tested or validated in a published scientific study."

And this graph there [figure not reproduced: it plots CU's predicted ratings against the measured listener preference ratings]:
Pretty damning results showing how CR measurements did not correlate with listening test results.

As to picking loudspeakers, they picked 13 out of a total of 23. Among those were the top 5 loudspeakers according to the CR rating. After that, the CR rankings were 8, 9, 12, 14, 18, 20, 21 and 23. So there is excellent sampling all the way from the very best to the very worst; it is not as if they picked just a couple to test with.

I wonder, did you really consider the quotes you made? Did CR inadvertently prove what subjectivists have argued all along?

"There is clearly no correlation between listeners’ loudspeaker preferences and CU’s objective-based sound quality ratings."

It says it right there. No correlation! An improper test is more likely to produce inconclusive results.
Subjectivists would be in agreement with the Harman research here. Where they fall in the ditch is that Harman shows another metric, based on a weighted sum of anechoic measurements, that has a correlation of 1.0 with listening test results. Thirty years of research has proven its value in predicting much of what makes for good sound to listeners. Subjectivists advocate listening, and then in the next breath want to jump out of a building because someone wants to share the results of listening tests. So the ear is not the final arbiter, it seems.
 

amirm

I will close by saying that I've really enjoyed listening to "poor sounding speakers" for some 25 years. And I suspect that my current MBLs, which I thoroughly enjoy, would measure poorly and, of course, sound subpar.
What does measurement have to do with this thread and topic? The Harman tests are *listening tests*, not measurements. Yes, they have identified what they think leads to good performance in listening tests, but the core thesis has been, and continues to be, validating any technical claim of superiority with listening tests in which subjects don't know what they are listening to. The continued objection to formal listening tests performed by them, as I mentioned in my last post, runs counter to everything you guys talk about. Shouldn't the sound that enters your ear, and only that, be the arbiter of good sound?

I have not seen the anechoic weighted measurements of the MBLs, nor any blind listening test results. As such, I consider them an unknown in this regard. No need to get defensive unless you know that either of these data points for sure damns the product.
 
