Griesinger's teachings show up in Klippel, Linkwitz, Toole, and Geddes

Duke LeJeune

[Industry Expert]/Member Sponsor
I’m following this thread with interest. On a forum where sound quality is all too often perceived as being proportional to the price tag of a component it’s refreshing to read a discussion where the participants are seeking to rely on science.

Thank you very much, Ian!

Unfortunately there is no peer-reviewed research which directly supports the direction I'm pursuing, but I don't have time to wait for it, so I try to find guidance in relevant words of people who are smarter than me. Not everything that is useful has yet been conclusively "proven".

I’m a fan of dipole speakers and like many others who have lived with dipoles for any length of time, I would find it very hard to go back to traditional box speakers. There are two main reasons for that, and one of them is the sense of space and openness you get with a properly set up dipole.

I'm also a SoundLab dealer, and have owned many Maggies, so I understand that good dipoles are an incredibly difficult act to follow!

Over the years I've made two sales to SoundLab owners, and almost made a third. In that third case, he placed an order for a set of my first-generation bipolar speakers after hearing them in his dedicated listening room, but then when he replicated my set-up configuration with his SoundLabs, he reconsidered and cancelled the order. He had been a SoundLab owner for over two decades and told me mine was the first speaker that really came close.

Here is some online commentary from someone who did replace his SoundLabs with one of my systems; I DO NOT claim my speakers are "better than" the SoundLabs, but perhaps they are arguably competitive in some ways:

https://forum.audiogon.com/discussions/review-audiokinesis-planetarium-beta-speaker

Here is an excerpt which is anecdotal support for some of what I've been saying in this thread, and which is probably the sort of thing you have known all along:

"The mains are placed approximately 7 feet from the front wall... As a test, I moved them to within 2 feet of the front wall. The magic went poof in a hurry. Personally, after playing with placement over these 2 months, I would allow an absolute minimum of 4-5 feet behind them."
 

KlausR.

Well-Known Member
Hello Duke,

As the amount of energy in the reverberant field goes up the time to decay by 60 dB remains constant, but not the time to decay into inaudibility. If the reverberant field starts out louder (because we are deliberately adding energy to it), then it takes longer for that reverberant energy to die out. This is one of the ways we can manipulate the reverberant field via the loudspeakers themselves, and there are others.

From Everest 2009, p.153: “Reverberation time (RT) is a measure of the rate of decay of sound. It is defined as the time in seconds required for sound intensity in a room to drop 60 dB from its original level. This reverberation time measurement is referred to as RT60. The 60-dB figure was chosen arbitrarily, but it roughly corresponds to the time required for a loud sound to decay to inaudibility.”

I don’t have exact figures for total decay time as compared to RT60, but only the initial part of the decay is of importance; the remaining part is masked by the running music (figures of 10 dB (Blesser 2001), 20 dB (Toole 2008, p.48), and 20-30 dB (Everest 2009, p.155) are mentioned), so IMO total decay time does not play a role that needs to be considered.

Blesser (2001), "An interdisciplinary synthesis of reverberation viewpoints", J. of the Audio Engineering Society, 2001, p. 867
Toole (2008), "Sound Reproduction: Loudspeakers and Rooms", Focal Press, 2008
Everest (2009), "Master Handbook of Acoustics", 5th edition, McGraw-Hill, 2009
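To put rough numbers on those masking figures, here is a minimal sketch of my own (assuming a simple linear decay and an illustrative RT60; these are not values taken from the papers above) of how long the audible portion of a decay lasts for the masking depths mentioned:

```python
# A rough numerical sketch (my own arithmetic, not from the cited papers) of how
# much of a reverberant decay stays audible under "running" masking, assuming a
# linear decay of 60/RT60 dB per second and the masking depths quoted above.

def audible_decay_window_s(rt60_s: float, masking_depth_db: float) -> float:
    """Seconds of decay heard before the tail drops below the masking floor."""
    decay_rate_db_per_s = 60.0 / rt60_s
    return masking_depth_db / decay_rate_db_per_s

rt60 = 0.4  # assumed RT60 (seconds) for a treated domestic room
for depth_db in (10, 20, 30):  # Blesser / Toole / Everest figures above
    window_ms = audible_decay_window_s(rt60, depth_db) * 1000
    print(f"{depth_db:>2} dB masking depth -> ~{window_ms:.0f} ms of audible decay")
```

With those assumptions, the audible part of the decay is on the order of 70-200 ms; the rest of the tail is hidden by the ongoing music.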

If I understand your concept correctly, your speakers minimize or even avoid reflections before 10 ms; the first one to arrive is that wall-ceiling bounce. So there is no reflection from the floor, the ceiling (other than the wall-ceiling bounce), or the side walls. Doesn’t that mean that there is less energy put into the reverberant field rather than more?

I think a white paper would be really helpful.

Klaus
 

Duke LeJeune

[Industry Expert]/Member Sponsor
... IMO total decay time does not play a role that needs to be considered.

Maybe not directly, but longer total decay time corresponds with longer time before the reflections decay into inaudibility. Also, what I am doing is different from what Everest and others have examined, so we should take the specifics into account.

The additional reverberant energy I'm injecting doesn't start to arrive until we are a good 10 milliseconds beyond the arrival of the direct sound, so there is in effect a later-than-normal (for that room size) "surge" in reverberant energy (with no corresponding increase in the direct sound), and therefore the time for the reverberation to decay into inaudibility is also pushed back in time somewhat.
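For reference, the path-length arithmetic behind that 10 ms figure is easy to check; here is a back-of-envelope sketch (the distances are illustrative only, not my actual design geometry):

```python
# Back-of-envelope check: how much longer a reflected path must be than the
# direct path to arrive >= 10 ms later. Distances here are illustrative only.

SPEED_OF_SOUND_M_S = 343.0

def extra_delay_ms(reflected_path_m: float, direct_path_m: float) -> float:
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND_M_S * 1000.0

required_extra_path = 0.010 * SPEED_OF_SOUND_M_S   # ~3.4 m extra path for a 10 ms gap
direct = 3.0                                        # assumed listening distance (m)
bounce = direct + required_extra_path               # e.g. an up-and-back wall-ceiling path
print(f"Extra path needed for 10 ms: {required_extra_path:.2f} m")
print(f"Example bounce ({bounce:.1f} m total): +{extra_delay_ms(bounce, direct):.1f} ms")
```

In other words, the late array's energy has to travel roughly three and a half metres further than the direct sound before it reaches the listener.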

If I understand your concept correctly, your speakers minimize or even avoid reflections before 10 ms; the first one to arrive is that wall-ceiling bounce. So there is no reflection from the floor, the ceiling (other than the wall-ceiling bounce), or the side walls. Doesn’t that mean that there is less energy put into the reverberant field rather than more?

My approach does not suppress the floor and ceiling bounces, but they are perceptually rather benign (on this point Geddes agrees with Toole).

There is less early-onset (pre-10-milliseconds) reverberant energy with my approach due to the narrowed radiation pattern (and aggressive toe-in angle) of the main arrays, but then we have the additional later-onset reverberant energy injected by the upwards-and-rearwards firing array. Whether the net effect is more or less reverberant energy than a "conventional" wide-pattern speaker, I'm not sure, but my reverberant energy is probably more spectrally correct than most wide-pattern speakers, and comparable to a dipole's in that regard. In practice we have found that the optimum amount of additional late-onset reverberant energy is several decibels less than what we would normally expect from a dipole speaker's backwave bounce.
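To put the direct-to-reverberant part of this in textbook terms, here is a rough statistical-acoustics sketch (the directivity, room volume, and RT60 values are assumptions for illustration, not measurements of any of my speakers): the critical distance, where direct and reverberant levels are roughly equal, scales with the square root of the directivity Q, so a narrower pattern leaves relatively less reverberant energy at the listening position.

```python
import math

# Critical distance d_c ~ 0.057 * sqrt(Q * V / RT60) from the diffuse-field
# approximation. Q (directivity), V, and RT60 below are assumed values only.

def critical_distance_m(q: float, volume_m3: float, rt60_s: float) -> float:
    """Distance at which direct and reverberant levels are roughly equal."""
    return 0.057 * math.sqrt(q * volume_m3 / rt60_s)

volume, rt60 = 70.0, 0.4  # assumed domestic room (m^3, seconds)
for label, q in [("wide-pattern box speaker", 4.0), ("narrow-pattern, toed-in", 12.0)]:
    print(f"{label:>26}: critical distance ~ {critical_distance_m(q, volume, rt60):.1f} m")
```

With those assumed numbers the critical distance moves from roughly 1.5 m out to roughly 2.6 m, which is one way of quantifying "less early-onset reverberant energy" from the main arrays.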

I think a white paper would be really helpful.

Well if it does happen, I will have YOU to thank for it - not only for encouraging me to do so, but for the information you have shared and the questions you have raised.
 

DaveC

Industry Expert
KlausR. said:
I don't have exact figures for total decay time as compared to RT60, but only the initial part of the decay is of importance; the remaining part is masked by the running music ... so IMO total decay time does not play a role that needs to be considered. ... I think a white paper would be really helpful.

IMO, decay time is more important than most think.

I don't have any hard data, but I have experienced "tweaks" that are intended to extend the harmonics and reverb of instruments or vocals, effectively increasing the decay over certain frequency ranges. This can result in the impression of greater clarity, and the sense that the added information is part of the recording, because it psychoacoustically fits what we expect to hear and sounds more complete and natural.

I was just debating with PeterA and ddk about ddk's system of tweaks, which involves removing room treatments that might reduce decay times and adding things that extend the decay of vibration, such as large steel plates as equipment rack shelving. IMO, this is an attempt to make the recording sound more live, and many members here feel it sounds more "natural". They may be right, but it's also adding the same effect to every recording. IMO, we're best off preserving the information on the recording as much as possible and not adding so much to it in the playback system. Still, this is a good example of how decay times and their effects are more multi-faceted than we think: in some cases extended decay can match what we expect to hear and have the effect of enhancing the recording.

However, with decay I think it's an incomplete discussion without also considering the effects of feedback at the same time. In the previous example I believe feedback combined with extended decay times is responsible for the effect. Vinyl and tubes both affect the electromechanical feedback of the system: the speaker's output excites the vacuum tube and turntable mechanisms just as it can with a microphone, and we get information played back again, delayed in time and at lower level. "Tweaks" like footers, racks, vacuum tube dampers, etc. are all used to adjust the feedback characteristics of the system and get them to a point where they sound psychoacoustically correct.

So IMO, if the decay and feedback match what we expect to hear, it can be beneficial; if not, it's a problem. Defining this is much more complicated than has been proposed by anyone thus far, and that is why we have so many different experiences that contradict any current theories on how reflections and decay drive preference. It's why there are so many different ways to get to "good sound", and why we don't fully understand how a wide-dispersion speaker with lots of first reflections can sound very clear and not mask information, while a directional speaker that is set up to avoid first reflections can do the same. Then we even have setups like ddk's that go way beyond the "norm" and still result in high scores for preference.
 

KlausR.

Well-Known Member
"KlausR." said:
...IMO total decay time does not play a role that needs to be considered.

Duke LeJeune said:
Maybe not directly, but longer total decay time corresponds with a longer time before the reflections decay into inaudibility.

The argument is that you don’t perceive the last/late part of the reverberation anyway, because it is masked by the music being played continuously. You would only hear the reverberation fading into inaudibility if the sound/music stopped suddenly. From Blesser 2001: “In continuous music, only the first 10 dB of decay can be perceived because the remaining reverberation is masked by the next part of the music. This is called “running reverberance.” The reverberation tail, “stopped reverberance”, is perceived only when the music stops.” How often does it happen in musical material that the music stops (other than at the end), is followed by a pause, and then resumes?

The additional reverberant energy I'm injecting doesn't start to arrive until we are a good 10 milliseconds beyond the arrival of the direct sound, so there is in effect a later-than-normal (for that room size) "surge" in reverberant energy (with no corresponding increase in the direct sound), and therefore the time for the reverberation to decay into inaudibility is also pushed back in time somewhat.

If you look at room 2 (my own room) in my “SPL in three rooms” you see that the reflection delays are in the range between 0.6 and 20.8 ms, so 10 ms and more is not later than normal. Of course, this is room dependent. Narrow dispersion plus aggressive toe-in looks like Geddes’ approach, assisted by your wall-ceiling bounce. I can’t but repeat myself: white paper.

Klaus
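For anyone who wants to reproduce reflection-delay figures like those for their own room, here is a small first-order image-source sketch (the room dimensions and the speaker/listener positions below are illustrative assumptions, not the measured geometry from "SPL in three rooms"):

```python
import math

# First-order image-source sketch of reflection delays relative to the direct
# sound, for an assumed rectangular room. All dimensions/positions are
# illustrative only.

SPEED_OF_SOUND_M_S = 343.0

def first_order_delays_ms(room, src, lst):
    lx, ly, lz = room
    direct = math.dist(src, lst)
    images = {
        "floor":      (src[0], src[1], -src[2]),
        "ceiling":    (src[0], src[1], 2 * lz - src[2]),
        "left wall":  (src[0], -src[1], src[2]),
        "right wall": (src[0], 2 * ly - src[1], src[2]),
        "front wall": (-src[0], src[1], src[2]),
        "back wall":  (2 * lx - src[0], src[1], src[2]),
    }
    return {name: (math.dist(img, lst) - direct) / SPEED_OF_SOUND_M_S * 1000.0
            for name, img in images.items()}

room = (6.0, 4.5, 2.7)        # length, width, height in metres (assumed)
speaker = (1.0, 1.2, 1.0)     # assumed speaker position (x, y, z)
listener = (4.0, 2.25, 1.1)   # assumed listening position

for surface, delay in sorted(first_order_delays_ms(room, speaker, listener).items(),
                             key=lambda kv: kv[1]):
    print(f"{surface:>10}: +{delay:4.1f} ms after the direct sound")
```

With these assumed positions the first-order bounces land a few milliseconds to a bit over 10 ms after the direct sound, in the same general range as the measured figures quoted above.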
 

KlausR.

Well-Known Member
DaveC said:
However, with decay I think it's an incomplete discussion without also considering the effects of feedback at the same time. In the previous example I believe feedback combined with extended decay times is responsible for the effect. Vinyl and tubes both affect the electromechanical feedback of the system: the speaker's output excites the vacuum tube and turntable mechanisms just as it can with a microphone, and we get information played back again, delayed in time and at lower level.

Since the proof is in the pudding I put this issue to the test many many moons ago and posted the following on Audio Asylum:

Dustcover open or closed, should your TT happen to have one, that is a question that regularly comes onto the table for discussion.

I think that we all agree that a TT should be well isolated from external sources of vibration, since these can be transmitted via the TT to the cartridge and hence audibly affect playback.
I was curious to know whether or not the dustcover can provide such isolation. So I did the test:
I recorded onto Mini-Disc a silent groove from a brand-new test record. During the recording I played the first track from Flim & the BB's "Tricycle" (dmp Gold-9000).
The speakers are on the long wall of the room, the rack with TT is on the short wall close to the corner, distance to speaker is about 4 ft.
TT is Michell Gyrodec, SME 309, Shure V15.

In a first run, the volume knob of the preamp was set to maximum, the attenuation of the power amps (of my active speakers) was set to -10 dB, which is pretty loud.
For those who don't know "Tricycle", the first track starts with low level piano, then after some 15 seconds comes a blast that makes your heart stand still (unless you know the tune).
With dustcover open, the whole tune is clearly audible on the recording. With dustcover closed, the low level piano part is not audible, the blast comes through, albeit very much deformed and lower in level than with cover open.

In a second run I set the attenuation of the power amps to -15 dB, which is still quite loud but my usual level when playing rock music and the like. With cover open, the whole tune is still audible on the recording, with cover closed only the blast comes through, deformed and at rather low level.

I tried placing a pillow on the cover in both runs and it tames the acoustic a bit, but not completely.
Conclusion: turntables should have a dustcover, since it not only protects the records from enemy no. 1, dust and dirt (think of the "Wilson effect"), but also prevents the music being played from being fed back to the cartridge by the speakers. I think that the positions of the speakers and TT, as well as the design of the TT and cover, play a role here, but a turntable without a cover is, for me, a clear no-no.

Imagine your top-of-the-line Clearaudio, Rockport, VPI, Yorke, NA, or any other coverless table for that matter, set up with care, tweaked with devotion, just to have acoustic breakthrough spoiling your pleasure.

Klaus
 

Duke LeJeune

[Industry Expert]/Member Sponsor
IMO, decay time is more important than most think.

Agreed!

I don't have any hard data, but have experienced "tweaks" that are intended to extend the harmonics and reverb of instruments or vocals, effectively increasing the decay over certain frequency ranges. This can result in the impression of greater clarity, and the sense that the added information is part of the recording as it psychoacoustically fits what we are expected to hear and it sounds more complete and natural.

Toole says that spectrally correct repetitions of the original sound - reflections - give the listener "multiple looks" at complex sounds and therefore can increase intelligibility.

It is highly desirable that these repetitions come from directions OTHER THAN that of the direct sound. Reflections which are presented from the EXACT SAME direction as the direct sound are the MOST LIKELY to be heard as coloration. So while room reflections are not "on the recording", I would argue that they are highly beneficial to our PERCEPTION of what's on the recording... and what I try to do is present those reflections in a more-beneficial-than-normal way.

I was just debating with PeterA and ddk about ddk's system of tweaks, which involves removing room treatments that might reduce decay times and adding things that extend the decay of vibration, such as large steel plates as equipment rack shelving. IMO, this is an attempt to make the recording sound more live and many members here feel it sounds more "natural". They may be right, but it's also adding the same effect to every recording...

If we're talking about the kind of highly directional vintage horn system that ddk embraces, in my experience such systems can sound rather "dry" in a home audio setting, as the direct-to-reverberant sound ratio is often (arguably) too high. In such cases preserving and making best possible use of what reverberant energy we do have can definitely pay dividends, and my guess is that's one of the things his steel equipment rack shelving contributes. My approach is to deliberately inject some additional reverberant energy, the spectral balance and relative level of which can be adjusted for room acoustics.

Your use of the word "effect" seems to imply that such additional reflections are undesirable. My experience in this area has been somewhat counter-intuitive: Reflections "done right" seem to result in greater variation in the spatial presentation from one recording to the next, rather than the "sameness" one would expect from an artificial "effect". In other words, it has been my experience that reflections "done right" tend to favor the presentation of the spatial cues on the recording over the playback room's acoustic signature. (Timbre can also benefit from reflections, at home and in a good concert or recital hall.)

The argument is that you don’t perceive the last/late part of reverberation anyway because it is masked by the music being played continuously. You only would hear reverberation fading into inaudibility when the sound/music is stopped suddenly. From Blesser 2001: “In continuous music, only the first 10 dB of decay can be perceived because the remaining reverberation is masked by the next part of the music. This is called “running reverberance.” The reverberation tail, “stopped reverberance”, is perceived only when the music stops.”

What I'm doing is adding reverberant energy which starts out about 10 dB down relative to the direct sound. If Blesser's findings apply, any effects would be imperceptible until the music stops. And if such is indeed the case, then clearly I have been deluding myself all along.
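To put rough numbers on where we differ, here is a toy energy-sum sketch (my own simplification; the levels, onset times, RT60, and masking floor are all assumptions chosen only to make the arithmetic concrete) of how an added late-onset reverberant field shifts the point at which the total reverberant level drops below a running-masking floor:

```python
import math

# Toy energy-sum model (my own framing, not Blesser's analysis or a measurement
# of my speakers) of whether a late-onset field pushes back the moment the total
# reverberant level falls below a running-masking floor. All numbers assumed.

def level_db(start_db, onset_ms, t_ms, rt60_s):
    """Linear decay from start_db beginning at onset_ms; silent before onset."""
    if t_ms < onset_ms:
        return float("-inf")
    return start_db - (60.0 / (rt60_s * 1000.0)) * (t_ms - onset_ms)

def crossing_ms(fields, floor_db, rt60_s, step=0.1):
    """First time the summed (energy-domain) level stays below floor_db."""
    t = 0.0
    while t < 2000.0:
        total = 10 * math.log10(sum(10 ** (level_db(s, o, t, rt60_s) / 10)
                                    for s, o in fields) + 1e-30)
        if total < floor_db and t > max(o for _, o in fields):
            return t
        t += step
    return t

rt60, floor = 0.4, -20.0                 # assumed RT60 and ~20 dB masking floor
room = [(-6.0, 2.0)]                     # room reflections: level (dB re direct), onset (ms)
room_plus_late = room + [(-10.0, 10.0)]  # plus the late-onset array: -10 dB, +10 ms
print(f"Room field alone crosses the floor at ~{crossing_ms(room, floor, rt60):.0f} ms")
print(f"With the added late-onset field:      ~{crossing_ms(room_plus_late, floor, rt60):.0f} ms")
```

With these assumed numbers the extra field buys roughly an extra 10 ms or so above the masking floor; whether that is perceptually significant is exactly the question on the table.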

On the other hand, perhaps you have been "listening" to what I do through the lens of papers which may or may not be directly applicable. Would you have any interest in the other kind of listening? It may be possible to configure a crude but adequate test system. If this might interest you, let me know whether you have access to a pair of small speakers which can be played at the same time as your big Klein & Hummels, and we can take a look at the feasibility.

... I can’t but repeat myself: white paper.

How would that change the legitimacy (or lack thereof) of my opinions?
 

jkeny

Industry Expert, Member Sponsor
Hope you all had a great holiday & have a great 2020?

Duke LeJeune said:
Toole says that spectrally correct repetitions of the original sound - reflections - give the listener "multiple looks" at complex sounds and therefore can increase intelligibility.

It is highly desirable that these repetitions come from directions OTHER THAN that of the direct sound. Reflections which are presented from the EXACT SAME direction as the direct sound are the MOST LIKELY to be heard as coloration. So while room reflections are not "on the recording", I would argue that they are highly beneficial to our PERCEPTION of what's on the recording... and what I try to do is present those reflections in a more-beneficial-than-normal way.
I find Toole's "multiple looks" phrase awkward & confusing (although a nice soundbite), as it doesn't really have any grounding in psychoacoustics that I know of, unless it's falling back on a very basic & general idea along the lines of 'the more we hear something, the more believable it becomes'. I prefer to think that what is being presented in your setup better matches the expectations our auditory perception has of real-world sound. I guess I may be wrong in all of this, not having heard Duke's setup in real life.

.............

What I'm doing is adding reverberant energy which starts out about 10 dB down relative to the direct sound. If Blesser's findings apply, any effects would be imperceptible until the music stops. And if such is indeed the case, then clearly I have been deluding myself all along.
I haven't read Blesser's paper yet, but I guess he is using the idea of masking - I would like to see whether he "predicted" the masking effect or actually tested it with music signals rather than synthetic test signals.

If this masking is what Klaus is suggesting, does the fact that you are injecting a reflection at a higher level than would occur naturally from a normal reflection not have a bearing on this masking? As I understand your setup, you have suggested that you are creating new, time-delayed, synthesised reflections & thereby extending the audibility of reverberation & delaying the point at which this masking takes over.

On the other hand, perhaps you have been "listening" to what I do through the lens of papers which may or may not be directly applicable. Would you have any interest in the other kind of listening? It may be possible to configure a crude but adequate test system. If this might interest you, let me know whether you have access to a pair of small speakers which can be played at the same time as your big Klein & Hummels, and we can take a look at the feasibility.
This is one of the most difficult parts - to correlate the many ideas in psychoacoustic research with real-world experiences. One has to brainstorm & be open to concepts, to the possible gaps in the research, and to the areas where it does or doesn't apply to experiences from life.



How would that change the legitimacy (or lack thereof) of my opinions?
 

DaveC

Industry Expert
KlausR. said:
Since the proof is in the pudding I put this issue to the test many many moons ago and posted the following on Audio Asylum: ... Conclusion: turntables should have a dustcover, since it not only protects the records from dust and dirt but also prevents the music being played from being fed back to the cartridge by the speakers.

Thanks for that. I also believe feedback is happening in every part of the system and contributes to what we hear to some degree. Otherwise racks, footers, etc. wouldn't make a difference.
 

DaveC

Industry Expert
Duke LeJeune said:
Toole says that spectrally correct repetitions of the original sound - reflections - give the listener "multiple looks" at complex sounds and therefore can increase intelligibility.

It is highly desirable that these repetitions come from directions OTHER THAN that of the direct sound. Reflections which are presented from the EXACT SAME direction as the direct sound are the MOST LIKELY to be heard as coloration. So while room reflections are not "on the recording", I would argue that they are highly beneficial to our PERCEPTION of what's on the recording... and what I try to do is present those reflections in a more-beneficial-than-normal way.

I don't think this is necessarily the case. I've heard resonators inserted into a horn that clearly increased the intelligibility and clarity of female vocals. This was the manufacturer's demo, so I'm sure the material chosen was ideal, but it was clearly an improvement, and the manufacturer explained that the resonator was simply extending the decay of the vocals and bringing them closer to what we expect to hear.

I also think reflections can't improve the recording, but they can bring it closer to what we expect to hear. Think of headphones vs. speakers: headphones don't lose all this information - in fact the opposite - but reflections in the room can seem more natural, and ideally they won't add coloration.

If we're talking about the kind of highly directional vintage horn system that ddk embraces, in my experience such systems can sound rather "dry" in a home audio setting, as the direct-to-reverberant sound ratio is often (arguably) too high. In such cases preserving and making best possible use of what reverberant energy we do have can definitely pay dividends, and my guess is that's one of the things his steel equipment rack shelving contributes. My approach is to deliberately inject some additional reverberant energy, the spectral balance and relative level of which can be adjusted for room acoustics.

Your use of the word "effect" seems to imply that such additional reflections are undesirable. My experience in this area has been somewhat counter-intuitive: Reflections "done right" seem to result in greater variation in the spatial presentation from one recording to the next, rather than the "sameness" one would expect from an artificial "effect". In other words, it has been my experience that reflections "done right" tend to favor the presentation of the spatial cues on the recording over the playback room's acoustic signature. (Timbre can also benefit from reflections, at home and in a good concert or recital hall.)

Well, PeterA uses Magico and reports the exact same effects, using the same language. And yes, extended decay where RT60 is far beyond norms is an effect IMO, and its desirability is a preference, as ddk and others seem to enjoy it. It's obviously closer to what one might expect to hear from a live performance, but I'd argue it's not closer to what the artist and recording engineers intended.

Not all reflections are beneficial ime, and extended decay times are not beneficial as far as what I personally prefer. I find they do sound more "live" in many recordings, but badly color others and are distracting. As I mentioned earlier, vocals and music are not processed the same in the brain and vocals become unintelligible with more decay much faster than flowing music does.

In my current room I have high ceilings and hence a lot of reflective surface, I added absorption until decay times were more in line with norms and I prefer that over no room treatments.

Also, the approach of a very live room isn't going to work as well as the space becomes larger: the "room filling sound" that one hears in smaller spaces will not happen as the delay times increase and the brain no longer sums the reflections into one sound.
 

DaveC

Industry Expert
Hope you all had a great holiday & have a great 2020?

I find Toole's "multiple looks" phrase awkward & confusing (although a nice soundbite), as it doesn't really have any grounding in psychoacoustics that I know of, unless it's falling back on a very basic & general idea along the lines of 'the more we hear something, the more believable it becomes'. I prefer to think that what is being presented in your setup better matches the expectations our auditory perception has of real-world sound. I guess I may be wrong in all of this, not having heard Duke's setup in real life.

I agree, and I think the issue is acclimation and is something Toole seems to ignore.

We acclimate to different sounds, and that is what sounds "right" to us. An example is the inexperienced audio enthusiast who doesn't hear much live sound or many other systems. These people develop a strong preference for their own systems; they may go to a show with a few excellent systems that are much better than their own, but they will still prefer their own system. That goes away with more experience hearing other systems and live sound.

Lack of accounting for acclimation is a major issue with all of Toole's preference testing IMO. If they "train" listeners using one system that has a certain dispersion pattern and room acoustics, and then ask for an opinion of a much different system, the results will be tainted by acclimation.
 

jkeny

Industry Expert, Member Sponsor
I agree, and I think the issue is acclimation and is something Toole seems to ignore.

We acclimate to different sounds and that is what sounds "right" to us. An example is the inexperienced audio enthusiast who doesn't hear much live sound or hear many other systems. These people develop a strong preference for their own systems, they may go to a show with a few excellent systems that are much better than their own, but they will still prefer their own system. That goes away with more experience hearing other systems and live sound.

Lack of accounting for acclimation is a major issue with all of Toole's preference testing imo. If they "train" listeners using one system that has a certain dispersion pattern and room acoustics, then ask for an opinion of a much different system the results will be tainted by acclimation.
I don't disagree with you - yes, we can acclimatise to different sounds, & it makes me think more about the point I was trying to make: that we have 'learned' how sound behaves in the world through constant exposure to it, i.e. we have acclimatised ourselves to it & established it as our internal reference. But you are correct - when we listen to our stereo systems, we are listening to an artistic version of what we hear in the real world. It doesn't tick all the boxes that we get when listening live - we are dropping some (a lot?) of those real-life aspects of sound that we have in our internal reference model - yet the presentation of the sound is enough to satisfy us & allow us to engage with the higher-level appreciation of the musical performance, i.e. we can immerse ourselves in the performance rather than in the shortcomings of the presentation - we accept the illusion as being close enough to the real world. The better the system, the more easily this illusion is accepted; my thinking here is that better playback systems tick more of the boxes of our internal reference model (while still not being able to tick all the boxes, due to the limitations of 2-channel stereo).

But your point about acclimatisation to the sound is well made. From birth (& before) we are analysing & internalising the sounds we hear, & eventually this becomes the internal reference model (or analytic engine) for how we evaluate sound. And this continues all through life. First, an interesting example that this is happening even before birth: newborn babies cry in the same cadence as the language they are born into, i.e. they have heard & learned this basic cadence in the womb.

Two other examples: experiments have been done in sight & sound perception showing how our internal analytic model/process accommodates to new patterns of nerve impulses from the sight & sound senses. One experiment involved prism glasses which flip the image upside down. Wearing these glasses, it took about a week before the image was again perceived as not flipped. Taking off the glasses, it then took only a couple of seconds to readjust & see the non-flipped version. Interestingly, if a week passes & the glasses are donned again, it takes only a couple of seconds to perceive the flipped image correctly (i.e. in the normal, non-flipped way). The longer the gap between wearings of the prism glasses, the longer the accommodation takes. This demonstrates to me that new processing models (or adjustments to the existing model) are continually created based on the amount of exposure to a particular stimulus, but are also lost when that exposure is removed. It also shows me that the processing model is automatically engaged based on the stimulus received.

The same happens with sound - another experiment used a silicone mould of a different pinna inserted into each ear. The ability to localise sounds was greatly diminished initially, but this localisation ability returned to normal fairly quickly (I can't remember the exact time). Again, when the silicone inserts are removed, the localisation ability is back to normal.

So my thoughts are that we accept a lot of missing elements in our stereo replay systems when judged against how our internal reference would normally analyse the stimulus. But we do this all the time - we enter a room & within seconds have acclimatised to the sound of the room vs. the sound source itself - our mother's voice sounds the same as we follow her around the house from room to room. We accept our replay system the same way we accept the change a room makes to our mother's voice, but I still have a feeling that the better the sound ticks the boxes of our internal reference, the more satisfying/believable it is perceived to be, and the better/more immersive the illusion.

As an aside, I have a 2-yr-old grandson & I've watched him experiment with & learn how sound behaves from a very young age (I've also helped) - I can see when he is paying attention to sounds that we take for granted, i.e. the sound of his steps on different surfaces, the sound of his voice in different spaces - he's a constant learning machine, which is fascinating to observe.
 

jkeny

Industry Expert, Member Sponsor
One other aspect that might have relevance to what Duke is doing with these late reflections - from Griesinger here
Even though his research tends towards large acoustic spaces, there are generic points in it which apply to sound itself & our perception of it.
Acoustic quality has been difficult to define, and it is very difficult to measure something you can't define. Fundamentally the ear/brain system needs to
  1. separate one or more sounds of interest from a complex and noisy sound field, and
  2. to identify the pitch, direction, distance, and timbre (and thus the meaning) of the information in each of the separated sound streams.
Previous research into acoustic quality has mostly ignored the problem of sound stream separation - the fundamental process by which we can consciously or unconsciously select one or more of a potentially large number of people talking at the same time (the cocktail party effect) or multiple musical lines in a concert. In the absence of separation multiple talkers become babble. Music is more forgiving. Harmony and dynamics are preserved, but much of the complexity (and the ability to engage our interest) is lost. Previous acoustic research has focused on how we perceive a single sound source under various acoustic conditions. Previous research has also concentrated primarily on how sound decays in rooms - on how notes and syllables end. But sounds of interest to both humans and animals pack most of the information they contain in the onset of syllables or notes. It is as if we have been studying the tails of animals rather than their heads.

The research presented in the preprint above shows that the ability to separate simultaneous sound sources into separate neural streams is vitally dependent on the pitch of harmonically complex tones. The ear/brain system can separate complex tones one from another because the harmonics which make up these tones interfere with each other on the basilar membrane in such a way that the membrane motion is amplitude modulated at the frequency of the fundamental of the tone (and several of its low harmonics). When there are multiple sources each producing harmonics of different fundamentals, the amplitude modulations combine linearly, and can be separately detected. Reflections and reverberation randomize the phases of the upper harmonics that the ear/brain depends upon to achieve stream separation, and the amplitude modulations become noise. When reflections are too strong and come too early, separation - and the ability to detect the direction, distance and timbre of individual sources - becomes impossible. But if there is sufficient time in the brief interval before reflections and reverberation overwhelm the onset of sounds the brain can separate one source from another, and detect direction, distance, and meaning.

So, essentially, from the underlined text - perhaps the phase of the upper harmonics is more correctly rendered by Duke's speakers?

The interesting finding from Auditory Scene Analysis (ASA), which is what Griesinger is talking about above, is that streams click into place when the correct conditions are met (as judged by the continuous processing of the internal analytic engine called auditory perception). This "clicking into place" is almost a binary effect. I perceive it as a solidity to the soundstage, with a new-found/more realistic portrayal of each individual sound object in the sound stream. In other words, the air around instruments and the 3D dimensionality & realism of the soundstream portrayal arise, IMO, because the direct & reflected sound is correct in all the ways necessary to be easily recognised by our auditory processing analytic engine as coming from the same sound object. I can easily imagine that the reflections generated by Duke's speakers better satisfy at least some of the factors necessary to achieve this better recognition, by being spectrally better matched to the direct sound, & as Griesinger states, "When there are multiple sources each producing harmonics of different fundamentals, the amplitude modulations combine linearly, and can be separately detected." I surmise that reflections (in the real world) may be expected to retain some of the relationship (amplitude modulations?) between the multiple sources; these relationships are probably somewhat fragile, and playback from point-source speakers should help retain them.
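To make Griesinger's phase point a bit more concrete, here is a quick numerical toy of my own (a simplification - no basilar-membrane filtering, and the sample rate, fundamental, and harmonic range are arbitrary choices) showing that randomising the phases of a tone's upper harmonics weakens the amplitude modulation at the fundamental:

```python
import numpy as np

# Illustrative-only demo: sum a band of upper harmonics of a 200 Hz tone with
# aligned phases vs. randomised phases, and measure how strongly the envelope
# is modulated at the fundamental. No auditory filtering is modelled.

def analytic_signal(x):
    """Analytic signal via FFT (Hilbert-transform style envelope)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def f0_modulation_index(x, fs, f0):
    env = np.abs(analytic_signal(x))
    spectrum = np.abs(np.fft.rfft(env))
    k0 = int(round(f0 * len(x) / fs))        # bin of the fundamental
    return 2.0 * spectrum[k0] / spectrum[0]  # modulation depth re: envelope mean

fs, duration, f0 = 16000, 0.5, 200.0
t = np.arange(int(fs * duration)) / fs
harmonics = np.arange(8, 29)                 # upper harmonics, ~1.6-5.6 kHz
rng = np.random.default_rng(0)

aligned = sum(np.cos(2 * np.pi * n * f0 * t) for n in harmonics)
randomized = sum(np.cos(2 * np.pi * n * f0 * t + rng.uniform(0, 2 * np.pi))
                 for n in harmonics)

print(f"Phase-aligned harmonics : f0 modulation index ~ {f0_modulation_index(aligned, fs, f0):.2f}")
print(f"Phase-randomized        : f0 modulation index ~ {f0_modulation_index(randomized, fs, f0):.2f}")
```

The exact numbers depend on the random draw, but the aligned case should show a markedly stronger envelope component at the fundamental, which is the cue Griesinger argues the ear uses for stream separation.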
 

Duke LeJeune

[Industry Expert]/Member Sponsor
Jkeny and DaveC, thank you BOTH very much for your insights and contributions!

I find Toole's "multiple looks" phrase awkward & confusing (although a nice soundbite) as it doesn't really have any grounding in psychoacoustics that I know of...

Agreed, Toole seems to put forward the phrase as his best guess as to what's going on. He (and I) may well be wrong about it.

I don't think this is necessarily the case. I've heard resonators inserted into a horn that clearly increased the intelligibility and clarity of female vocals. This was the manufacturer's demo, so I'm sure the material chosen was ideal, but it was clearly an improvement, and the manufacturer explained that the resonator was simply extending the decay of the vocals and bringing them closer to what we expect to hear.

It would be interesting to have some idea of what the resonator was doing... was it a Helmholtz resonator, or something which caused reflections inside the horn?

Not all reflections are beneficial ime, and extended decay times are not beneficial as far as what I personally prefer.... vocals and music are not processed the same in the brain and vocals become unintelligible with more decay much faster than flowing music does.

... Also, the approach of a very live room isn't going to work as well as the space becomes larger, the "room filling sound" that one hears in smaller spaces will not happen as the delay times increase and the brain no longer sums them into one sound.

When I have done a custom system for a large room, I have kept the well-controlled front-firing radiation pattern and only added a little bit of additional high-frequency energy to the reverberant field to compensate for the main tweeter's pattern-narrowing in the top octave. I have relied on the inherently long reflection paths of the room itself to separate the second (reverberant) stream from the first, and this approach seems to work well.

I think the issue is acclimation and is something Toole seems to ignore.

We acclimate to different sounds and that is what sounds "right" to us...

I think you are onto something. Acclimation would explain a lot, and thank you for mentioning it multiple times because I often need multiple prods ("multiple look", as Toole might say???) before starting to wrap my head around a new idea. At audio shows when I get a little bit of time to "walk the halls" I usually hear many systems that sound obviously "wrong" to me, which probably traces back to my being acclimated to something significantly different, for better or for worse.

The same happens with sound - another experiment used a silicon mould of a different pinna which is inserted into each ear. The ability to localise sounds was greatly diminished, initially but this localisation ability returned to normal fairly quickly (can't remember the exact time). Again, when silicon inserts are removed the localisation ability is back to normal.

This is FASCINATING!! I have a friend who has a significant deformity of one ear, and it doesn't seem to impair his enjoyment of music nor his ability to appreciate a good stereo image in the slightest.

So... perhaps the phase of the upper harmonics is more correctly rendered by Duke's speakers?

I think the most important place for phase preservation (and also time domain preservation) is in the first-arrival sound, though obviously I also think that clearly separating the sound into "two streams" is beneficial.

Griesinger makes an interesting case for the phase preservation of the upper harmonics: When the fundamentals and the harmonics line up precisely in time, they result in not only a greater instantaneous SPL peak (better dynamic contrast), but also they do a better job of grabbing our attention. I have used this information in my most recent designs and the results are encouraging, so I plan to continue in that general direction.
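As a sanity check on the "greater instantaneous peak" part, here is a small numerical sketch (the fundamental, number of harmonics, and sample rate are arbitrary illustrative choices, not a model of any particular speaker): phase-aligned harmonics give a noticeably higher crest factor than the same harmonics with scrambled phases.

```python
import numpy as np

# Illustrative check: for the same total power, time/phase-aligned harmonics
# produce a higher instantaneous peak than phase-scrambled ones.

def crest_factor_db(x):
    """Peak level relative to RMS level, in dB."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

fs, duration, f0 = 16000, 0.5, 200.0
t = np.arange(int(fs * duration)) / fs
harmonics = np.arange(1, 21)                 # fundamental plus 19 harmonics
rng = np.random.default_rng(0)

aligned = sum(np.cos(2 * np.pi * n * f0 * t) for n in harmonics)
scrambled = sum(np.cos(2 * np.pi * n * f0 * t + rng.uniform(0, 2 * np.pi))
                for n in harmonics)

print(f"Phase-aligned crest factor  : {crest_factor_db(aligned):.1f} dB")
print(f"Phase-scrambled crest factor: {crest_factor_db(scrambled):.1f} dB")
```

Typically the aligned case comes out several dB higher, which is consistent with the "bursts grab our attention" argument, though how that maps to audibility is of course a separate question.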
 

jkeny

Industry Expert, Member Sponsor
Duke LeJeune said:
This is FASCINATING!! I have a friend who has a significant deformity of one ear, and it doesn't seem to impair his enjoyment of music nor his ability to appreciate a good stereo image in the slightest.
This experiment was first done by Hofman in 1998 but repeated more recently here: https://www.jneurosci.org/content/25/22/5413

Basically, we build our internal auditory model based on the nerve signals from our physiology. Our differently shaped pinnae mean that we all receive different frequency spectra at the eardrum (Griesinger has some nice graphs of these differences). But - & here's the real underlying point - over time (from fetus/babies onwards) our auditory system builds its model(s) using all our senses (mainly vision) to correlate the nerve signals from our auditory physiology with the nerve signals from our visual physiology. In other words, the sound of a big bell being struck is registered as a somewhat different pattern of nerve signals to the sound from a small bell (both confirmed by vision). But the spectral makeup of the sound that appears at your eardrum is different from what hits my eardrum for the same sound. And yet we both hear & can distinguish a large bell strike from a small bell strike, even though my internal model (pattern) is likely somewhat different from yours.

Yes, it is fascinating - we all continually learn our own individual auditory spectral pattern that relates to real world objects & the sounds they make. We can all relate to one another's auditory experience as we all have used the same training material i.e. the real world.

Duke LeJeune said:
I think the most important place for phase preservation (and also time domain preservation) is in the first-arrival sound, though obviously I also think that clearly separating the sound into "two streams" is beneficial.

Griesinger makes an interesting case for the phase preservation of the upper harmonics: When the fundamentals and the harmonics line up precisely in time, they result in not only a greater instantaneous SPL peak (better dynamic contrast), but also they do a better job of grabbing our attention. I have used this information in my most recent designs and the results are encouraging, so I plan to continue in that general direction.
Let me explain the streaming concept. What is meant by an auditory stream is that the auditory system categorises all the sounds coming from a particular object as belonging to that object - that includes its harmonics & reflections. Not just the harmonics of the direct sound but also the reflections of those harmonics - in other words, all the sound belonging to that object. This is an ongoing process as the sound stream progresses through time, not a one-off process.

But there are multiple sound-producing objects, all in play at the same time - think of an orchestra playing a piece of music - so the solidity of a soundstage is dependent on all the audio streams being individually recognised through time during the playing.

Not only do we have sound objects producing sound, but we also have the venue producing reflections of the sound objects (the room/hall acoustic), & we also have other sources of sound which we can group as background noise (maybe external to the venue, maybe also from the people in the venue?). But this 'noise' isn't a static thing like the white noise of tape hiss, etc. - it has spectrally changing content. So this 'stream' is usually perceived as the background (although it can be focused on & brought to foreground attention).

What happens if some sounds are not properly categorised/analysed as belonging to an object stream? They can become part of the background noise or, more likely, become a separate stream from some unknown sound object.

Yes, I believe that our playback systems work best when preserving all the sound clues that our auditory system uses to identify & categorise auditory streams.
 

jkeny

Industry Expert, Member Sponsor
I may have said this before so forgive me if I have but the acclimatisation reminds me of this.
I believe very good stereo playback creates an illusion which we can just about believe in, i.e. it is a very fragile illusion because it is missing lots of the cues our auditory perception relies on.

Lesser playback systems produce the notes in the right order but with even fewer of the subtle cues necessary to create this illusion.

Because the illusion relies on a minimal set of subtle cues, any flaws in the reproduction of the cues that are addressed can upset the whole illusion, much more so than if a broader set of cues were captured & reproduced.

So we are acclimatising ourselves to our playback systems at whatever level of presentation they produce.

So when we strengthen the reproduction of some of these cues, then the illusion can be enhanced.

Just my 2c
 

Duke LeJeune

[Industry Expert]/Member Sponsor
Let me explain the streaming concept. What is meant by an auditory stream is that the auditory system categorises all the sounds coming from a particular object as belonging to that object - that includes its harmonics & reflections. Not just the harmonics of the direct sound but also the reflections of those harmonics - in other words, all the sound belonging to that object. This is an ongoing process as the sound stream progresses through time, not a one-off process....

Thank you so much for teaching me about auditory streaming! I was aware of the concept (that the ear attaches the correct reflections to each sound based on spectral content), but didn't know what it is called. And imo one of the implications is that we don't want to muddy the reflections by absorbing their high frequency content.

I believe very good stereo playback create an illusion which we can just about believe in i.e. it is a very fragile illusion because it is missing lots of the cues our auditory perception relies on...

Because it is relying on a minimal set of subtle cues, of the cues that it does address, any flaws in their reproduction can upset the whole illusion, much more so than if there was a broader set of cues being captured & reproduced...

So when we strengthen the reproduction of some of these cues, then the illusion can be enhanced.

I think your analysis that our fragile illusions are based on a poverty of cues is excellent. And it doesn't take much to destroy that illusion... but on the other hand, perhaps it doesn't take much to enhance it?

I think you have explained to me what I'm doing better than I could have explained it to myself, because your understanding is better than mine. Thank you again for entering this thread and educating me.

Here is my in-a-nutshell philosophy of what a speaker should do; it contains elements of what you are saying but is not as well articulated:

There are two things a speaker should do. First, a speaker should do SOMETHING so well that the listener can, when focusing on that something, suspend disbelief and get lost in the music. That something can be timbre, clarity, imaging, impact, inner detail, immersion, PRAT, whatever. If a speaker can do more than one of these things, so much the better. But this is the easy part.

The HARD part is, a speaker should ALSO be free of colorations and inadequacies which collapse that hard-won illusion.
 

DaveC

Industry Expert
jkeny said:
I believe very good stereo playback creates an illusion which we can just about believe in ... So when we strengthen the reproduction of some of these cues, then the illusion can be enhanced.


I agree and for me it's another point of contention WRT Toole's findings.

AFAIK, he and Harman in general don't optimize MANY parts of the system that can improve resolution. Below is just one example. Another glaring example is testing Martin Logans in the exact same setup as their box speakers, but that's another story...

Clean AC power, high quality cables and vibration control via racks, footers and speaker isolation allow a system to preserve subtle cues. If this is done, then I think it removes a lot of the benefits of, and hence the preference for, room reflections. Instead of room reflections we get more of the spatial cues in the recording; this can keep the sound from seeming too "dry" even when the system has no first reflections and minimal room interaction. It also enhances the soundstage and the immersive, 3D effect you get with a good system. I'd go so far as to say this makes the difference between a decent system and one that is truly captivating and draws the listener in. A million-dollar system can sound quite boring if it loses too much resolution. I'd even go so far as to say the best systems may add some decay that is psychoacoustically correct, as discussed previously. Look at recent threads extolling the virtues of vacuum tube dampers on Lampi DACs; this is a prime example of tuning system feedback to user preference, and to what we expect to hear psychoacoustically.

Interconnect cables are key, imo a vast majority of copper interconnect cables are incapable of preserving resolution in the music, and even the best ones from Jorma, Tara, etc. aren't as good as the best UPOCC silver cables as far as preserving the integrity of the signal.
 

jkeny

Industry Expert, Member Sponsor
Thank you so much for teaching me about auditory streaming! I was aware of the concept (that the ear attaches the correct reflections to each sound based on spectral content), but didn't know what it is called. And imo one of the implications is that we don't want to muddy the reflections by absorbing their high frequency content.



I think your analysis that our fragile illusions are based on a poverty of cues is excellent. And it doesn't take much to destroy that illusion... but on the other hand, perhaps it doesn't take much to enhance it?

I think you have explained to me what I'm doing better than I could have explained it do myself, because your understanding is better than mine. Thank you again for entering this thread and educating me.
I haven't heard your speakers, so I'm just giving my best guess based on my experiences with my own electronics. I have formulated my views based on this experience & my readings of research in the area of psychoacoustics, particularly ASA - I could well be wrong (& just a man with a hammer to whom every problem is a nail :cool:), but so far my views have helped me explain what is a complex system of all sorts of interacting sub-systems which are themselves complex - the whole area of stereo recording & playback is in itself a minefield, not to mention the even more complex area of auditory perception.

But when we consider it from the perspective of the auditory processing engine & begin to understand that it works as a complex analytic engine which has only nerve impulses as input signals, we gain some perspective. This is at the heart of ASA, i.e. how we analyse/categorise these nerve impulses together as being from the same sound object - in other words, into dynamically changing sound streams that are mapped through time. It's essentially a complex problem-solving exercise.

One of the important perspectives that I learned from my research is that, in sounds emanating from the real world (not our playback systems), there is usually not enough information in the nerve impulses being received to uniquely solve the complex problem, i.e. to be sure that a particular pattern of nerve impulses corresponds to only ONE possible set of sound objects whose sound is changing/moving through time. So we are constantly making a best-fit guess, using a lot of input from other senses (other nerve impulses). To expand this further: we are constantly building an internal model of what we are hearing, but it is always provisional, & we are constantly using the prediction/expectation from this model to evaluate the next sounds (nerve impulses) we perceive. If the incoming signals somehow mismatch the expectations/predictions, a modification is made to the model & off we go again. The more this happens, the more energy is expended - all this is happening at the subconscious level, so our only conscious indicator is a disinterest/tiredness/unease with the sound.

So when it comes to our stereo playback systems, which by their nature present a restricted set of sound signals, we are receiving even fewer of the nerve impulses than we would from real-world sound & we are further into the guesswork aspect of auditory processing - this is where we are working at the edge.

IMO, this is the underlying explanation for our acclimatisation (we can accept most systems as long as there are no major issues) & it also explains how we can become fatigued over time listening to our playback system - it doesn't engage our attention or involve us emotionally (see above) - it just replays all the notes in the correct order (I think).

Here is my in-a-nutshell philosophy of what a speaker should do; it contains elements of what you are saying but is not as well articulated:

There are two things a speaker should do. First, a speaker should do SOMETHING so well that the listener can, when focusing on that something, suspend disbelief and get lost in the music. That something can be timbre, clarity, imaging, impact, inner detail, immersion, PRAT, whatever. If a speaker can do more than one of these things, so much the better. But this is the easy part.

The HARD part is, a speaker should ALSO be free of colorations and inadequacies which collapse that hard-won illusion.
Yes, IMO, that is the goal of the whole replay system.
 

jkeny

Industry Expert, Member Sponsor
I agree with you about the Olive & Harman research - there's an obvious experimenter's bias shown in a lot of it.
As you pointed out above, ignoring the different requirements of planar speaker placement, ignoring the role of SOTA electronics, etc.

It's a problem I find when reading research that involves listening tests - the quality of the electronics/transducers is often in question.

We are at the bleeding edge in this hobby - which is not a boast but my reading of what this hobby/industry entails. The problem is that it's difficult to figure out what works from what doesn't, and for every claimed benefit there's someone shouting "snake-oil salesman" (depends on the forum, I guess?). I try to keep an open mind, but I won't go as far as crystals, etc.
 
