Griesinger's teachings show up in Klippel, Linkwitz, Toole, and Geddes

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
"Can of worms" was maybe not the appropriate term. This issue would require some extensive digging in the scientific literature, but should you have an archive well filled in that respect, give it a shot.
I don't have any literature ready to hand (or even know if there is any) on the subject of differences in perception of vertical vs. lateral reflections. I assumed that the angle of incidence of the reflected sound wave at the pinnae would have a bearing on this, & that vertical reflections would be presented to both ears without the HRTF of horizontal waves? (btw, I think the distortions of the waveforms introduced by the pinnae may have a significant role to play in distinguishing vertical-direction reflections from horizontal-direction reflections)

I was surprised by your "can of worms" phrase about this & reckoned I must be missing some obvious piece of knowledge related to this (which wouldn't be a first) & that I was being too simplistic & naive in my statement?

BTW, I still have to read the links you gave to research in the field on the other thread so I may well be missing some fundamental knowledge?
 

KlausR.

Well-Known Member
Dec 13, 2010
291
29
333
Did you include the balcony reflection in your calculation? Griesinger says the 17 millisecond reflection is a double, with the balcony reflection included, so it would be louder than the sidewall reflection alone.

No, I took only a single reflection into account, so you are correct, the other one would add its share to the overall SPL.
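For illustration, here is a minimal sketch (my own, not from any of the papers discussed) of how two reflections of roughly equal level combine - uncorrelated addition gives about +3 dB over either one alone, fully in-phase addition up to +6 dB:

```python
import math

def sum_levels_db(levels_db, coherent=False):
    """Combine sound pressure levels given in dB.

    levels_db : individual levels in dB SPL
    coherent  : True assumes in-phase pressure addition (upper bound),
                False assumes uncorrelated (power) addition.
    """
    if coherent:
        total_pressure = sum(10 ** (level / 20) for level in levels_db)
        return 20 * math.log10(total_pressure)
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

# Two reflections arriving around 17 ms, each at 60 dB SPL (example figures):
print(round(sum_levels_db([60, 60]), 1))                 # 63.0 dB (uncorrelated)
print(round(sum_levels_db([60, 60], coherent=True), 1))  # 66.0 dB (in-phase limit)
```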


But I do believe the same psychoacoustic principles apply, which I think Geddes' wording expresses well: "The earlier and the greater in level the first room reflections are, the worse they are." The implication is that it's a combination of timing and level.

I add Bech's graph on detection thresholds of early reflections. I think you will agree that in order to have an effect, positive or negative, a reflection has to be above threshold, so these thresholds indicate the potential of the respective reflection to take an active part. As the graph clearly indicates, there is no correlation between delay and potential. As the graph further indicates, the thresholds for most of the reflections are above the natural level, hence those reflections are not audible, which is good news. Bech made his experiments in a domestic-size room of 7.52 x 4.75 m. The caveat is that he used a single loudspeaker/single reflection/reverberation setup and did not use music. But still, the statement that the earlier the reflection the worse it is, is not supported by the facts.

I could find only one study in which music was used: detection thresholds for different reflections were obtained for a single loudspeaker/single reflection setup in an anechoic chamber. Only one of those reflections is (almost) identical to one of Bech's, so I've extrapolated the lower and upper thresholds, which depend on the music motive, to Bech's experimental conditions and added them to the graph. That extrapolation obviously might be riddled with errors.


InkedBech1995_LI.jpg



So we have powerful reflections at 17 milliseconds being detrimental to clarity in the concert hall, and weaker but earlier reflections potentially being detrimental to clarity in home audio.

The earliest reflections in home audio are generally from floor and ceiling, but coming from roughly the same direction as the direct sound, they are masked and detection thresholds are high.

The geometry I currently use typically results in 10 milliseconds delay with the speakers about 1 meter out into the room, as the rear-firing array is angled upwards at 45 degrees such that we get a wall bounce and a ceiling bounce before the "backwave" reaches the listening area. The spectrum of this energy is user-adjustable so some adaptation to different kinds of surfaces is possible.

If I place your speaker in our living room with the current setup of furniture (speakers along the longer wall) as indicated, that wall-ceiling bounce would be delayed by 11 ms. Maybe you should write a white paper for your website explaining the psychoacoustical background of your concept.
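For anyone who wants to check such delays for their own room, a minimal sketch (the path lengths below are hypothetical numbers of mine, not Duke's geometry or ours):

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degC

def reflection_delay_ms(direct_path_m, reflected_path_m):
    """Extra delay of a reflection relative to the direct sound, in milliseconds."""
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND * 1000.0

# Hypothetical example: direct path of 3.0 m from speaker to listener, with the
# rear-firing energy travelling ~6.5 m via a wall bounce and a ceiling bounce.
print(round(reflection_delay_ms(3.0, 6.5), 1))  # ~10.2 ms extra delay
```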

Klaus
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
I'm not a researcher in the field so I don't have a wide range or depth of reading in it. My interest stems from my experiences in audio & attempts to understand & explain what I'm perceiving. I consider I'm on a journey in trying to understand a very complex, unfinished & active field of auditory perception research, so I'm open to being corrected on anything I post.

One thing that I see mentioned in the recent literature, & it strikes me as relevant, is that there is an understanding now that tone test signals are not giving us the full picture as far as JND or thresholds of hearing are concerned. There's a difference between a single ERB (or ear's auditory filter band) being activated (as in a single tone signal) and a number of ERBs being activated by a more complex or broadband signal. The same applies to electrical test signals when evaluating audio equipment reproduction - both audio playback systems & auditory processing are complex systems with complex interactions, particularly with complex signals. I would also see fault with using a single speaker, mono signal when trying to evaluate auditory perception - it's an attempt to isolate the ear as the hearing mechanism in preference to the brain's role.

I know that it seems to make logical sense that a tone which has been determined in such tests to be below audibility at a certain dB cannot have an audible effect when played in concert with another tone or complex. But I question whether this logic is correct. First, audibility seems to mean a conscious recognition of the signal, not whether the hearing mechanism has generated a nerve impulse as a result of the signal. So if there is a nerve impulse generated, why wouldn't it be processed by the same mechanism as other nerve signals arriving from the auditory mechanism? What happens to these 'lesser' nerve signals - do they affect the processing of the 'main' signals in any way, and does this affect the perception of the main signal/sound in any way?

Again, I'm open to correction of these thoughts/ideas & you may consider I'm drawing these ideas from flimsy evidence, but I consider that the pattern matching/ASA aspects of auditory processing are where the next stage of progress in understanding how our replay systems can best serve us will occur. Of course this will be greatly aided by discovering/using measurements which have more significance for auditory perception - it's chicken & egg again, however.

I'll give one example of a conundrum in auditory processing whose mechanism of action is still not fully understood or resolved - CMR, or Comodulation Masking Release. This occurs in nature all the time & it gives rise to a complex of different phenomena - one of which is that when a signal is amplitude comodulated with a masked signal, it more easily reveals the masked signal, i.e. it decreases the audibility threshold of the masked signal. This might have a bearing here with regard to reflections also?

Another example of this CMR is found here - it's a working example.
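For anyone who wants to experiment with this, here is a rough stimulus-synthesis sketch of the kind of signals used in CMR experiments. It is my own simplified illustration (amplitude-modulated tones rather than the noise bands usually used), not the demo linked above:

```python
import numpy as np

FS = 44100           # sample rate in Hz
DUR = 1.0            # stimulus duration in seconds
t = np.arange(int(FS * DUR)) / FS
rng = np.random.default_rng(0)

def lowpass_envelope(cutoff_hz):
    """Slowly varying positive envelope made from low-pass filtered noise."""
    spec = np.fft.rfft(rng.standard_normal(len(t)))
    freqs = np.fft.rfftfreq(len(t), 1 / FS)
    spec[freqs > cutoff_hz] = 0.0
    env = np.fft.irfft(spec, len(t))
    return np.abs(env) / np.max(np.abs(env))

masker_env = lowpass_envelope(10.0)                    # common ~10 Hz envelope
target = 0.05 * np.sin(2 * np.pi * 1000 * t)           # quiet 1 kHz target tone
on_band = masker_env * np.sin(2 * np.pi * 1000 * t + rng.uniform(0, 2 * np.pi))
flank_comod = masker_env * np.sin(2 * np.pi * 1800 * t)              # same envelope as masker
flank_indep = lowpass_envelope(10.0) * np.sin(2 * np.pi * 1800 * t)  # independent envelope

# In CMR experiments the target is typically easier to detect in the
# comodulated condition than in the independent (uncorrelated) condition.
comodulated = target + on_band + flank_comod
independent = target + on_band + flank_indep
```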

BTW, I see in Bech's paper "Timbral aspects of reproduced sound in small rooms" the following quote, which seems to support my thought that vertical reflections are perceived differently from horizontal reflections:
"The reverberant field has been found to have an effect on the contribution of the individual reflections. An increase in the level of individual reflections are most likely to be audible for the first-order floor and ceiling reflections, and certain reflections from the sidewalls."
 

KlausR.

Well-Known Member
Dec 13, 2010
291
29
333
I don't have any literature ready to hand (or even know if there is any) on the subject of differences in perception of vertical vs. lateral reflections. I assumed that the angle of incidence of the reflected sound wave at the pinnae would have a bearing on this, & that vertical reflections would be presented to both ears without the HRTF of horizontal waves?

I did not specifically search for papers on this particular issue, but some of the ones I have in my archive might contain information relating to it. Angle of incidence indeed has an effect since the in-ear frequency response is different for different angles, see graph below.

Shaw1.jpg


Further, differences between individuals are substantial, see graph below, and that’s why I consider subjective impressions of others of little or no relevance, their pinnae might be different.

Shaw2.jpg


One thing that I see mentioned in the recent literature, & it strikes me as relevant, is that there is an understanding now that tone test signals are not giving us the full picture as far as JND or thresholds of hearing are concerned. There's a difference between a single ERB (or ear's auditory filter band) being activated (as in a single tone signal) and a number of ERBs being activated by a more complex or broadband signal. The same applies to electrical test signals when evaluating audio equipment reproduction - both audio playback systems & auditory processing are complex systems with complex interactions, particularly with complex signals. I would also see fault with using a single speaker, mono signal when trying to evaluate auditory perception - it's an attempt to isolate the ear as the hearing mechanism in preference to the brain's role.

As you will find in my write-up, complex signals such as speech and music have been used in threshold experiments, albeit not in a 2-channel setup with all reflections and reverberation. I'm still waiting for Naqvi to continue his research.

I know that it seems to make logical sense that a tone which has been determined in such tests to be below audibility at a certain dB cannot have an audible effect when played in concert with another tone or complex. But I question whether this logic is correct. First, audibility seems to mean a conscious recognition of the signal, not whether the hearing mechanism has generated a nerve impulse as a result of the signal. So if there is a nerve impulse generated, why wouldn't it be processed by the same mechanism as other nerve signals arriving from the auditory mechanism? What happens to these 'lesser' nerve signals - do they affect the processing of the 'main' signals in any way, and does this affect the perception of the main signal/sound in any way?

Since you know you're in a room when you are in a room, and are able to estimate its size, source distance and location, all sound events are somehow analysed. In the 1970s the physicists of Göttingen University built a system with more than 80 loudspeakers arranged on hemispheres for soundfield experiments; maybe they also investigated the perceptual effects of reflections.

Comodulation masking release: if I search for "comodulation" in titles in the J. of the Acoustical Society of America I find 7 pages of results. From a quick glance at some abstracts I think that this is of no relevance for early reflections, but I shall download one or two early papers to have more details.

Klaus
 

marty

Well-Known Member
Apr 20, 2010
3,025
4,173
2,520
United States
Duke and Klaus,
Thank you for a superb and informative thread. I am learning a lot from your dialog. I studied visual and auditory perception at the graduate level thus I appreciated your data and its assessment at both the psychophysical as well as the subjective level. Well done.
Marty
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
I did not specifically search for papers on this particular issue, but some of the ones I have in my archive might contain information relating to it. Angle of incidence indeed has an effect since the in-ear frequency response is different for different angles, see graph below.

View attachment 59840
Thanks - those graphs are as expected

Further, differences between individuals are substantial, see graph below, and that’s why I consider subjective impressions of others of little or no relevance, their pinnae might be different.

View attachment 59841
Agreed that pinnae are different between individuals. Don't agree that "subjective impressions of others [are] of little or no relevance" - to me, when there is a critical mass of anecdotal reports pointing to an audible improvement (or when someone whose evaluations I have found to match mine in the past shares a similar one), I pay attention & try to evaluate for myself or put it on my list of things to investigate.


As you will find in my write-up, complex signals such as speech and music have been used in threshold experiments, albeit not in a 2-channel setup with all reflections and reverberation. I'm still waiting for Naqvi to continue his research.
My point was that a lot of thresholds which are still referred to, were set back in the 50s or earlier with tones or noise & they tend to focus on the mechanics of the ear which, IMO, is just the tip of the iceberg, ignoring as it does the processing of stereo signals, HRTF, complex signals, etc. I'm not so sure that extrapolating from these earlier listening tests is always warranted.

Since you know you're in a room when you are in a room, and are able to estimate its size, source distance and location, all sound events are somehow analysed. In the 1970s the physicists of Göttingen University built a system with more than 80 loudspeakers arranged on hemispheres for soundfield experiments; maybe they also investigated the perceptual effects of reflections.
I'm not sure what you are saying here? I was querying the definition of "threshold signals" & whether this means that no nerve impulse is generated from such signal or whether the definition of threshold means something is consciously identified? I know this may seem like nit-picking but when a mono tone is being used as test signal it removes a lot of the possibly audible changes which complex stereo signals could reveal.

Comodulation masking release: if I search for "comodulation" in titles in the J. of the Acoustical Society of America I find 7 pages of results. From a quick glance at some abstracts I think that this is of no relevance for early reflections, but I shall download one or two early papers to have more details.

Klaus
I'm interested in what you uncover & your thoughts on CMR. I first gave it as an example of how audibility thresholds can be changed when a more complex signal than a single tone is used as the test signal - in this case a separate tone which is amplitude modulated with a masked tone.

I also see a lot in discussions (not here) regarding a defense for using single tones - that complex music signals will end up masking things which are audible when using single tones. This may well be true in many cases, but it fails to recognise the additional evaluation done by auditory processing when complex signals (particularly music, which has recognised patterns) are used, & it also fails to recognise how some signals may be revealed to auditory processing when CMR conditions prevail.

Maybe these discussions should be taken to the other thread as they are going OT from the topic?
 

KlausR.

Well-Known Member
Dec 13, 2010
291
29
333
Agreed that pinnae are different between individuals. Don't agree that "subjective impressions of others [are] of little or no relevance" - to me, when there is a critical mass of anecdotal reports pointing to an audible improvement (or when someone whose evaluations I have found to match mine in the past shares a similar one), I pay attention & try to evaluate for myself or put it on my list of things to investigate.

You don’t know the external conditions (room, audio components, set-up, music material, level etc.) under which other individuals have auditioned a particular component. Even when you know someone with identical pinnae these external conditions may be too different to rely on this someone’s assessment.

My point was that a lot of thresholds which are still referred to, were set back in the 50s or earlier with tones or noise & they tend to focus on the mechanics of the ear which, IMO, is just the tip of the iceberg, ignoring as it does the processing of stereo signals, HRTF, complex signals, etc. I'm not so sure that extrapolating from these earlier listening tests is always warranted.

If you look at thresholds obtained under similar conditions, i.e. anechoic, single speaker/single reflection (see Toole's graph below), it obviously doesn't make much sense referring to thresholds other than those for music. When I first saw Toole's graph I wondered why he used Schubert's data for the 30° angle, when Schubert also presents data for 60°. I added the average value of the 60° thresholds for the 5 music motives used by Schubert. The extrapolation in my write-up was made from anechoic to Bech's condition, using the data for speech as reference. As I said, the research of Naqvi most closely resembles the real conditions under which people listen to music, but still no follow-up.

InkedToole_LI.jpg

I'm not sure what you are saying here? I was querying the definition of "threshold signals" & whether this means that no nerve impulse is generated from such signal or whether the definition of threshold means something is consciously identified?

When you are in an anechoic chamber, you will recognize that. When you are in a room, you will recognize that too, even when all reflections are below perception threshold, which would mean that nerve impulses are fired also in that case. At threshold something changes, timbre or image, various cues were used, resulting in different thresholds, as outlined in my writeup.

I also see a lot in discussions (not here) regarding a defense for using single tones - that complex music signals will end up masking things which are audible when using single tones.

I think that for basic psychoacoustic research artificial test signals are the thing to use. To answer the question whether or not first reflections are detrimental when your stereo is playing music in your living room, one obviously must use a stereo playing music in a room of similar characteristics. Adding reflections and reverberation results in an increase of detection thresholds, and a phantom source is not the same thing as a real source.

Klaus
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
You don’t know the external conditions (room, audio components, set-up, music material, level etc.) under which other individuals have auditioned a particular component. Even when you know someone with identical pinnae these external conditions may be too different to rely on this someone’s assessment.
Correct - that's why I suggested that these anecdotal reports are useful only for shortlisting for my own investigation. I don't consider this useless information.

But I don't quite follow the logic in your statement "Further, differences between individuals are substantial, see graph below, and that’s why I consider subjective impressions of others of little or no relevance, their pinnae might be different." Does it not logically follow that because ALL listening is subjective & ALL pinnae are different that listening results are therefore of no relevance? I doubt you dismiss ALL listening as irrelevant but that seems to be what your statement says?


If you look at thresholds obtained under similar conditions, i.e. anechoic, single speaker/single reflection (see Toole's graph below), it obviously doesn't make much sense referring to thresholds other than those for music. When I first saw Toole's graph I wondered why he used Schubert's data for the 30° angle, when Schubert also presents data for 60°. I added the average value of the 60° thresholds for the 5 music motives used by Schubert. The extrapolation in my write-up was made from anechoic to Bech's condition, using the data for speech as reference. As I said, the research of Naqvi most closely resembles the real conditions under which people listen to music, but still no follow-up.

View attachment 59891
Just to fill in my lack of knowledge - how is a reflection done in an anechoic chamber - I presume by using another speaker which transmits the single reflection signal?

When you are in an anechoic chamber, you will recognize that. When you are in a room, you will recognize that too, even when all reflections are below perception threshold, which would mean that nerve impulses are fired also in that case. At threshold something changes, timbre or image, various cues were used, resulting in different thresholds, as outlined in my writeup.
I'm not sure what you are saying here - do you mean that because sight informs you that you are in a room as opposed to an anechoic chamber, the nerve signals from your tympanic membrane will be different? Are you referring to the plot above?

I think that for basic psychoacoustic research artificial test signals are the thing to use. To answer the question whether or not first reflections are detrimental when your stereo is playing music in your living room, one obviously must use a stereo playing music in a room of similar characteristics. Adding reflections and reverberation results in an increase of detection thresholds, and a phantom source is not the same thing as a real source.

Klaus
I look on this as the same issue being faced in electronic measurements of audio devices - simple test signals are fine for gross examination of individual devices but tell little to nothing about how such devices will be perceived in the intended end use (a system usually playing music). Refinement of devices based on these measurements has reached the stage where we hear statements that all DACs/amps (properly designed) sound the same, but we know this isn't the case. Hopefully this chasm between measurements & reality can be narrowed, but I think it also requires an understanding/accommodation between psychoacoustic research & audio electronics measurements to advance this situation.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
In doing some investigation I came across this very recent (2019) research paper
"Perception and preference of reverberation in small listening rooms for multi-loudspeaker reproduction"

I have yet to study the paper in detail but believe it is highly relevant to Duke & the discussion here

Some immediate quotes jump out:
- "Our aims were (i) investigate the extent to which the properties of reverberant sound field in residential-sized rooms affect the perceived sensory experience of a reproduced sound field,(ii) identify the major perceptual attributes underlying these properties and the relationships between them, and (iii)examine possible influences of the physical and sensory characteristics of these fields on assessors’ preferences.

- "The knowledge that currently pertains to sound reproduction is such spaces as complete sound fields are limited. Investigations in this domain have primarily focused on the interaction of single reflections and loudspeakers’ properties in simulated or real sound fields. It is still unknown which elements of the reverberant field in domestic listening environments evoke certain sensations and how these relationships operate. This limits our ability to address the above degradation and restricts the further domestication of advanced spatial audio

- "two principal dimensions that could summarize the sound fields of this investigation. .............In our study, the first and dominant dimension relates to the decay times and reverberation’s cognate percepts, whilst the second dimension relates to spectral characteristics.

- "Four perceptual constructs seem to characterize the sensory properties of these dimensions, relating to Reverberance, Width & Envelopment, Proximity, and Bass. Overall, the results signify the importance of reverberation in residential listening environments on the perceived sensory experience, and as a consequence, the assessors’ preferences towards certain decay times.



(PDF) Perception and preference of reverberation in small listening rooms for multi-loudspeaker reproduction. Available from: https://www.researchgate.net/public...ning_rooms_for_multi-loudspeaker_reproduction [accessed Dec 16 2019].
 

Duke LeJeune

[Industry Expert]/Member Sponsor
Jul 22, 2013
747
1,200
435
Princeton, Texas
Maybe you should write a white paper for your website explaining the psychoacoustical background of your concept.

Thank you for the suggestion, I'll think about it... but my understanding is that people who write papers in a given field are expected to have relevant credentials, and I do not.

Duke and Klaus,
Thank you for a superb and informative thread. I am learning a lot from your dialog. I studied visual and auditory perception at the graduate level thus I appreciated your data and its assessment at both the psychophysical as well as the subjective level. Well done.

Thank you very much, Marty!
 

KlausR.

Well-Known Member
Dec 13, 2010
291
29
333
But I don't quite follow the logic in your statement "Further, differences between individuals are substantial, see graph below, and that’s why I consider subjective impressions of others of little or no relevance, their pinnae might be different." Does it not logically follow that because ALL listening is subjective & ALL pinnae are different that listening results are therefore of no relevance? I doubt you dismiss ALL listening as irrelevant but that seems to be what your statement says?

I'm not dismissing all listening as irrelevant in absolute terms; it obviously is relevant for the individual(s) in question. I consider listening results of others not relevant for me personally because of different pinnae, different listening conditions, different tastes etc. What sounds good to individual X does not necessarily sound good to individual Y. In other terms, I would not listen to or even buy loudspeakers simply because someone/some reviewer has written a glowing review. When I read reviews, I normally skip the listening part.



Just to fill in my lack of knowledge - how is a reflection done in an anechoic chamber - I presume by using another speaker which transmits the single reflection signal?



For some research reflective panels were used; for other research the reflections were simulated by loudspeakers, with and without DSP, the latter taking into account the directivity of the main speaker and the acoustic properties of the wall surfaces of the room.



I'm not sure what you are saying here - do you mean that because sight informs you that you are in a room as opposed to an anechoic chamber, the nerve signals from your tympanic membrane will be different?



You will know if you are in an anechoic chamber, a living room, a bathroom, a church, a concert hall with your eyes closed. That’s why synthetic sound fields would be THE tool to see how added (amounts of) reflections and reverberation, starting at anechoic all the way up to concert hall are perceived. Maybe this has been done, I don’t know.



Refinement of devices based on these measurements has reached the stage where we hear statements that all DACs/amps (properly designed) sound the same, but we know this isn't the case.



My strictly personal take on this is that as long as listening is not done blind, i.e. with proper controls to remove potential sources of bias, I consider there is no evidence that properly designed electronic components do not sound the same. Can of worms!



Thanks for the pointer to the JASA paper.



Klaus
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
OK, Klaus, let's not get into the well worn rut that these debates usually descend into & leave it that we agree to disagree.
 

Duke LeJeune

[Industry Expert]/Member Sponsor
Jul 22, 2013
747
1,200
435
Princeton, Texas
I took a look at that paper jkeny posted a link to yesterday:

Perception and preference of reverberation in small listening rooms for multi-loudspeaker reproduction. Available from: https://www.researchgate.net/public...ning_rooms_for_multi-loudspeaker_reproduction

Here are a few things which stood out to me, along with some comments:

Early on the paper makes an observation which is imo consistent with what I have been trying to say:

“The perceived aural impression of the reproduced sound scene is distorted as the recorded signal is superimposed with the spatiotemporal response of the reproduction room.” In other words, the First Venue [recording's] cues are distorted by the Second Venue [playback room's] cues.

The researchers are looking at how perceptions of proximity [strongly related to clarity], width & envelopment, and reverberance relate to small room acoustics, and where preferences lie, using experienced trained listeners. So they are focusing on the acoustic characteristics of the room, NOT on the radiation patterns of the loudspeakers. (They also looked at bass, but I'm not going to here.)

Personally I would separate out “width” from “envelopment”, because my understanding is that soundstage width (which Toole calls "Apparent Source Width") depends on early lateral reflections, but envelopment does not.

As far as I can tell, the study didn't look for anything new and really didn't uncover anything new (though their approach of perceptually recreating real-world rooms using a spherical array in an anechoic chamber IS new, to the best of my knowledge). They found that higher direct-to-reverberant sound ratios and shorter decay times result in stronger proximity [better clarity], while lower direct-to-reverberant ratios and longer decay times result in greater width and envelopment, and more reverberance.

In the tradeoff of proximity (clarity) vs width/envelopment/reverberance, preferences leaned towards proximity (clarity), with the preferred decay times at the shorter end of the AES recommended range for small rooms.

The researchers specifically state that "there was no intention to systematically vary the early reflection patterns in a specific manner". But evidently they are aware loudspeaker radiation patterns can also play an important role. From their Conclusions:

“Understanding the acoustic influence of these environments on the reproduced sound field will enhance the system's ability to recreate a sonic experience in acoustically-dissimilar enclosures [rooms] in a more accurate and perceptually relevant way.”

My translation: If we know how to minimize the influence of Second Venue (playback room) acoustics, we can do a better job of effectively presenting the First Venue (recording's) acoustics.

Returning to the Conclusions:

“For example, one could attempt to alter the Direct to Reverberant Ratio within a field by means of directivity control in the loudspeakers, aiming to evoke certain perceptual aspects that would otherwise be dominated by the room's natural acoustical field.”

Again, they are talking about effectively favoring First Venue cues over the otherwise-dominant Second Venue cues.
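As a rough quantitative aside (my own sketch using the textbook critical-distance relation, with made-up room numbers, not anything from the paper): raising the loudspeaker's directivity factor Q, or shortening RT60, pushes the critical distance outward and so raises the direct-to-reverberant ratio at the listening seat.

```python
import math

def critical_distance_m(Q, volume_m3, rt60_s):
    """Distance at which direct and reverberant levels are roughly equal."""
    return 0.057 * math.sqrt(Q * volume_m3 / rt60_s)

def drr_db(listening_distance_m, Q, volume_m3, rt60_s):
    """Approximate direct-to-reverberant ratio at the listening position."""
    dc = critical_distance_m(Q, volume_m3, rt60_s)
    return 20 * math.log10(dc / listening_distance_m)

# Hypothetical 100 m^3 living room, listener at 3 m:
print(round(drr_db(3.0, Q=2, volume_m3=100, rt60_s=0.5), 1))  # about -8.4 dB
print(round(drr_db(3.0, Q=8, volume_m3=100, rt60_s=0.5), 1))  # about -2.4 dB (more directive)
print(round(drr_db(3.0, Q=2, volume_m3=100, rt60_s=0.3), 1))  # about -6.2 dB (shorter RT)
```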

Imo more can be done to perceptually favor the First Venue than simply manipulating the Direct to Reverberant ratio; imo a more effective approach would make the direct sound distinct from the reverberant sound by introducing a perceptually relevant time gap in between the two, resulting in a “Two Streams” presentation. My observation has been that this seems to shift perceptual dominance towards the First Venue cues.

Let's look again at that tradeoff relationship between proximity/clarity on the one hand, and width/envelopment/reverberance on the other. The “Two Streams” approach offers benefits from BOTH sides of the tradeoff: Clarity is good because the early reflections are minimized, but envelopment and reverberance are also good. One critical element: We find that there is a “sweet spot” to the level of the added relatively late-onset reverberant energy, where envelopment is enhanced but clarity is not degraded... too much late-onset reverberant energy and clarity IS degraded.

What our "Two Streams" approach does not provide are the strong early lateral reflections which enhance the Apparent Source Width, so we would not score high in “width”, but in practice the speakers can usually be spread further apart than normal with no offsetting detriment.

So to sum up:

- In a competition between speaker/room interactions which favor proximity/clarity vs those which favor envelopment/width/reverberance, proximity/clarity is preferred.

- Our “Two Streams” approach arguably offers some of the best of both worlds: Proximity/clarity AND envelopment/reverberance.
 

KlausR.

Well-Known Member
Dec 13, 2010
291
29
333
I think the following phrase in the section "Conclusions" very nicely sums up the paper: "The analysis indicated that rooms described by lower RT are preferred."

So it's all about reverberation time, and it's all about preference. Preference is not subject to discussion; your preference is as valid as mine. The perceptual effects of first order reflections have not been investigated and could be the subject of further research.

Your "2-streams" approach is all about first order reflections, if I understand it correctly, by controlling directivity to avoid/weaken reflections in the 10 ms window and generating a single reflection after that by the wall-ceiling bounce.

In any case, this paper is interesting and another piece to the RT puzzle which I will add to my write-up on reverberation time.

Klaus
 

Duke LeJeune

[Industry Expert]/Member Sponsor
Jul 22, 2013
747
1,200
435
Princeton, Texas
I think the following phrase in the section "Conclusions" very nicely sums up the paper: "The analysis indicated that rooms described by lower RT are preferred."

I agree, that is a good summation of what the study showed.

So it's all about reverberation time...

The study didn't look at the sort of things I do.

The authors state "there was no intention to systematically vary the early reflection patterns in a specific manner", so the paper doesn't tell us anything about what might have happened if they had varied the early reflection patterns in a specific manner. They went to an awful lot of trouble to acoustically recreate multiple rooms in an anechoic chamber, but they did not “look outside the box” at the effects of loudspeaker radiation patterns.

Your "2-streams" approach is all about first order reflections, if I understand it correctly, by controlling directivity to avoid/weaken reflections in the 10 ms window and generating a single reflection after that by the wall-ceiling bounce.

The wall-ceiling bounce usually ends up being a bit more complicated than what I have heretofore described. My speakers are designed to be toed-in at a roughly 45 degree angle, which means the wall/ceiling bounce energy is correspondingly toed-out at a roughly 45 degree angle. So in practice we often do not end up with all of that energy in a single reflection. (The relevant reflection path lengths are also correspondingly increased).

Also, it's not just about that initial reflection – there are subsequent reflections coming from directions which may not have been significantly illuminated ordinarily, and the overall greater amount of reverberant energy will increase the decay time.

Quoting again from the paper: “Both timbral and spatial characteristics of the reproduced sound fields were found to be altered, even when the differences of the rooms are subtle and within the proposed recommendations of audio evaluation standards.”

So SUBTLE differences in the rooms apparently have significant effects on “both timbral and spatial characteristics of the reproduced sound fields.” The sound fields in the room can be manipulated by the room's acoustics AND by the loudspeaker's radiation pattern. The fact that "subtle" differences in the rooms had audibly significant effects raises the possibility that not-so-subtle differences in loudspeaker radiation patterns could also have audibly significant effects.

Given that we don't have a study investigating my specific approach, let's do a thought experiment. Suppose we want to cultivate the perception of “listening in a small room”. How would we go about doing so? We would increase the level of the early reflections and decrease the reflection times, and we would decrease the reverberation time. Does that make sense?

Now suppose we wanted to go in the opposite direction and cultivate the perception of listening in a large room. Wouldn't we now want to decrease level of the early reflections and increase the reflection times, and increase the reverberation time?
 

KlausR.

Well-Known Member
Dec 13, 2010
291
29
333
Hello Duke,

They went to an awful lot of trouble to acoustically recreate multiple rooms in an anechoic chamber, but they did not “look outside the box” at the effects of loudspeaker radiation patterns.

Some research has been done to investigate the effects of directivity, see my write-up. Kaplanis et al. indicate that this issue might be the subject of further research.


Also, it's not just about that initial reflection – there are subsequent reflections coming from directions which may not have been significantly illuminated ordinarily, and the overall greater amount of reverberant energy will increase the decay time.

Only first-order reflections are/may be of importance; higher-order ones are too low in level to have an impact on their own, they are part of the reverberation.


Decay or reverberation time RT is defined as the time required for the level to decrease by 60 dB. The number of reflections per second is equal to cS/(4V), with

c = speed of sound
S = total surface area
V = volume

That number remains constant in a given room, regardless of whether the amounts of reverberant energy are small or great, so for changing RT you must change the acoustic properties of the surfaces.
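As a worked example of that formula (the 2.8 m ceiling height below is my assumption; only the 7.52 x 4.75 m floor dimensions of Bech's room were given earlier):

```python
SPEED_OF_SOUND = 343.0  # m/s

def reflections_per_second(length_m, width_m, height_m, c=SPEED_OF_SOUND):
    """Mean number of reflections per second in a rectangular room, n = c*S/(4*V)."""
    volume = length_m * width_m * height_m
    surface = 2 * (length_m * width_m + length_m * height_m + width_m * height_m)
    return c * surface / (4 * volume)

print(round(reflections_per_second(7.52, 4.75, 2.8)))  # roughly 120 reflections per second
```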


Given that we don't have a study investigating my specific approach, let's do a thought experiment. Suppose we want to cultivate the perception of “listening in a small room”. How would we go about doing so? We would increase the level of the early reflections and decrease the reflection times, and we would decrease the reverberation time. Does that make sense?

You mean, do all this in a setup as used in the Kaplanis paper?

Klaus
 

Duke LeJeune

[Industry Expert]/Member Sponsor
Jul 22, 2013
747
1,200
435
Princeton, Texas
Hello Klaus, thanks for your reply.

Decay or reverberation time RT is defined as the time required for the level to decrease by 60 dB...

That number remains constant in a given room, regardless of whether the amounts of reverberant energy are small or great, so for changing RT you must change the acoustic properties of the surfaces.

As the amount of energy in the reverberant field goes up the time to decay by 60 dB remains constant, but not the time to decay into inaudibility. If the reverberant field starts out louder (because we are deliberately adding energy to it), then it takes longer for that reverberant energy to die out. This is one of the ways we can manipulate the reverberant field via the loudspeakers themselves, and there are others.
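A minimal sketch of that point with made-up numbers - the decay rate in dB/s is fixed by RT60, so a reverberant field that starts out a few dB louder stays audible correspondingly longer:

```python
def time_to_inaudibility(start_db, threshold_db, rt60_s):
    """Seconds for a field decaying at 60/RT60 dB per second to reach threshold."""
    decay_rate_db_per_s = 60.0 / rt60_s
    return max(start_db - threshold_db, 0.0) / decay_rate_db_per_s

RT60 = 0.4  # seconds, unchanged in both cases
print(round(time_to_inaudibility(80, 30, RT60), 3))  # 0.333 s
print(round(time_to_inaudibility(86, 30, RT60), 3))  # 0.373 s: +6 dB start takes ~40 ms longer
```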

Given that we don't have a study investigating my specific approach, let's do a thought experiment. Suppose we want to cultivate the perception of “listening in a small room”. How would we go about doing so? We would increase the level of the early reflections and decrease the reflection times, and we would decrease the reverberation time. Does that make sense?

Do you mean, do all this in a setup as used in the Kaplanis paper?

Could be, but whether in an anechoically-simulated room (like Kaplanis used) or if we were to select a real room, how would we cultivate the perception of "listening in a small room"? One obvious way would be to USE a small room, which would result in a lot of early reflections with short delay times, and a short reverberation time. That could also be simulated via a spherical array of drivers like in Kaplanis's room. My point here being, it is the reverberant field (all reflections, both early and late) which informs us about the room size, and there are specific directions we would take the reflections (early and late) in order to evoke the perception of "listening in a small room."

And the point of my subsequent paragraph is that, if we want to cultivate the opposite perception ("listening in a large room"), we would take the reflections (early and late) in the opposite direction... which is what I have done.
 

iansr

Well-Known Member
Dec 27, 2010
129
44
933
I’m following this thread with interest. On a forum where sound quality is all too often perceived as being proportional to the price tag of a component it’s refreshing to read a discussion where the participants are seeking to rely on science.

I‘m a fan of dipole speakers and like many others who have lived with dipoles for any length of time, I would find it very hard to go back to traditional box speakers. There are two main reasons for that and one of them is the sense of space and openness you get with a properly set up dipole. Hence my interest in the points under discussion. For the same reason I’d be interested in hearing Duke’s speakers, but I’m in the UK so that’s unlikely to happen.
 
