Do Members use Live Music as a Reference

Poll: Do Members use Live Music as a Reference?

  • I use live music as a reference. — 50 votes (73.5%)
  • I do not use live music as a reference. — 18 votes (26.5%)

Total voters: 68
@853guy
I find it hard to believe that music is processed in a completely different neural pathway to other sounds - it just doesn't seem to follow the rules of efficiency that this biological machine uses. There are so many sonic features in common between music & general sound that I just can't see a doubling-up of neural processing pathways to handle these features for music separately from general sound.
I can believe that music is separately processed at a higher level of cognitive processing, & if that is the paper's conclusion then I can subscribe to it, but I'm not sure I read it this way, or that it's being interpreted this way?
 
Yes, I wonder about this - is it that mechanical & electromechanical devices are constrained by the mechanical laws of physics from deviating too far from natural sound, so their distortions are more easily accommodated by our auditory perception at a fundamental level? That's my take on it, anyway.

Auditory perception is actually analysing all incoming signals & assembling an auditory scene based on a best-guess 'identification' of the auditory objects in that scene. 'Identification' is too definitive a term - auditory processing is in a constant state of insecurity, guessing at every moment in time at the best solution for what the signals represent, & probably leaving a small remnant of the signals unresolved. How big the distortion is, & where it occurs in the full structure of the sound, determines how large an effect it has on our auditory perception. If it affects the attack stage of a sound, it will likely have a significant effect, as this stage is more important for auditory perception than the other stages (sustain, decay, release). Furthermore, any set of these signals has a number of alternative possible solutions, each of which fits the understanding we have of the auditory world & minimises the number of unresolved signal remnants.
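If it helps, the "best guess at every moment" idea can be caricatured in a few lines of code. This is purely my own toy illustration - the candidate sounds, priors and likelihoods are invented numbers, not from any model of hearing:

```python
import numpy as np

def best_interpretation(prior, likelihood):
    """Posterior over candidate auditory 'objects', via Bayes' rule."""
    posterior = prior * likelihood       # unnormalised
    return posterior / posterior.sum()   # normalise to probabilities

# Three invented interpretations of one ambiguous sound.
candidates = ["violin note", "human voice", "door creak"]
prior = np.array([0.5, 0.4, 0.1])        # expectation from context
likelihood = np.array([0.6, 0.3, 0.9])   # how well each fits the raw signal

posterior = best_interpretation(prior, likelihood)
# The winning guess can flip if either the context (prior) or the
# signal evidence (likelihood) shifts - one way to picture being
# "thrown back to one of the other possible solutions".
print(candidates[int(np.argmax(posterior))])  # → violin note
```

The point of the sketch is only that the "identification" is a ranking of alternatives that is re-computed as evidence changes, never a fixed fact.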

With that understanding comes my realisation that it's not just that an unnatural distortion is noticed as a singular issue - it can have a more disruptive & widespread effect across the auditory scene being built from moment to moment. The level of disruption perceived depends on many factors. It's not that the whole created scene falls apart; my guess is that we are just thrown back to one of the other possible solutions, or we become more unsure of our derivations.

An example may explain my meaning better - take very low-level noise that is correlated to signal processing, in other words, common-mode noise. This isn't a noise that is audible with ear to speaker when playing digital silence - it is only present, & probably fluctuates, with signal processing. As it is very low-level fluctuating noise, it is normally masked by the higher signal levels found during the sustain & decay stages of a sound. It will only audibly affect sound during the low-level stages of build-up & tail-off - the attack & release stages of a sound. The effect on the attack portion is again not directly audible as noise, as there's backward masking happening, but is more likely perceived as a less defined start to the sound. The release portion may also be perceived as a fuzzy end to the sound, but I doubt it is as perceptible. So, how are less defined starts, fuzzier endings & blurred timing perceived? Probably as less solid, less clearly defined sounds, with the result a less solid soundstage - possibly a less realistic sound? This is all coming from the electronics & a fluctuating low-level noise.
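To make the masking argument concrete, here's a rough numerical sketch - entirely my own toy model with made-up levels and durations, not a measurement of any real system. It synthesises a note with an ADSR envelope, adds a fixed low-level noise floor, and compares per-stage signal-to-noise ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000  # sample rate (Hz)

# ADSR envelope: quiet build-up (attack) and tail-off (release),
# loud middle (decay, sustain). Durations are arbitrary choices.
segments = [
    ("attack",  np.linspace(0.0, 1.0, int(0.02 * fs))),
    ("decay",   np.linspace(1.0, 0.7, int(0.05 * fs))),
    ("sustain", np.full(int(0.30 * fs), 0.7)),
    ("release", np.linspace(0.7, 0.0, int(0.10 * fs))),
]
env = np.concatenate([seg for _, seg in segments])
t = np.arange(len(env)) / fs
signal = env * np.sin(2 * np.pi * 440 * t)     # a 440 Hz note
noise = 0.005 * rng.standard_normal(len(env))  # low-level noise floor

def snr_db(sig, nse):
    """Signal-to-noise ratio of a segment, in decibels."""
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(nse ** 2))

snr = {}
start = 0
for name, seg in segments:
    end = start + len(seg)
    snr[name] = snr_db(signal[start:end], noise[start:end])
    start = end

for name in snr:
    print(f"{name:8s} SNR = {snr[name]:5.1f} dB")
```

The attack and release segments come out several dB worse than decay and sustain: the same noise floor is least masked exactly where the envelope is small, which is the sense in which a low-level noise "only audibly affects" the edges of a sound.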

I don't believe this sort of signal-correlated fluctuating noise distortion is usual in speakers? Fixed noise is perceptually far easier to accommodate. In fact, any signals which can be tied together into a separate sound stream can be accommodated by auditory perception - it's how we can focus on a conversation in a noisy room, & how we can separate our room's ambience from the ambience in the recording.

Maybe others can correct or expand on my thoughts?

In my experience, low-frequency noise has a greater effect on clarity. The only way to achieve a very high level of realism is to make a large pathway to ground. This is why some cables have higher resolution than others. It is impossible to calculate whether you have a large enough pathway. Noise, both measurable and unmeasurable, will take the path of least resistance back to ground. The trick is to have your pathway large enough to draw it away from the component. I also think that signal cables will never be capable of creating a large enough pathway unless every component's electronics has an equally high-level grounding scheme.
 

http://mobile.nytimes.com/2016/02/09/science/new-ways-into-the-brains-music-room.html

There was this story in the science section of the New York Times earlier this year, after work was published from the Massachusetts Institute of Technology on music within the brain, which points to having at last discovered where music (as opposed to most other sounds) is identified and then correlated within the brain.

Given the relative neural plasticity of humankind, it may be surprising that patterns in music are still essentially unifying in experience and shared in process. However, sounds not identified as music are processed separately in a range of places within the brain, including the limbic system, cerebellum and cerebral cortex amongst others, apparently. Music is translated in the sulcus, according to their findings.

So elements of rap, or even dialogue in opera, or perhaps poetry, are not necessarily experienced only in the speech or language zones, because the rhythmic pattern of delivery also triggers a pattern of recognition in the sulcus.

So the sulcus may be the crossroads of the sacred seat of music. Interestingly, this fluid-filled furrow essentially forms a cross through the brain, connecting cardinally across the lobes. Music may well yet prove to be at the intersection of all perception.
 

Right, it does seem like I was being a Luddite on this - after all, we have specialist areas for speech recognition, for face recognition & it appears even for some body parts.

Sorry for my push-back against this new discovery - I look forward to more details being uncovered.
 
I think we are all in the land of Ludd when it comes to grasping perceptual modelling and matching it to brain behaviour. If you ever get the time to read up on Hameroff and Penrose's work on Orchestrated Objective Reduction it is a fascinating model from neuroscience.

If you also overlaid Bregman's ASA, which you are so clearly familiar with, onto Orch-OR, you might see something interesting once you factor in quantum field theory: if neurons can respond not just with fire-and-integrate behaviour, but also have the potential capacity, in the surrounding microtubules, to respond to listening perception in quantum ways, then these ideas - with further correlations to Lanza's theory of a biocentric consciousness - give way to the notion that we create our universe of experiences in music, and that any occasion of experience within a listening session might never collapse as long as we can simply sit with it, unaware of any other measurement of reality. It could lead us, perhaps, to a core central place free of any temporal or spatial restraint other than those set in the waves of the music - that is, until you try to grasp it as a moment and realise, in that instant, that the music is separate from you and your self. Something as simple as giving the experience a name, or expressing a quality of the experience that is in any way disparate, becomes a potential reference point of change in perception; any observance of a point of focus in space or time puts distance between you and what you are hearing. Then the field of union collapses into a field of obvious separation, ready to be viewed from another, seemingly completely valid and objective, perspective.
 
Thanks for the heads up - will take a look at Orch-OR.

Just did, & it reminded me that in my undergraduate studies in Biochemistry I took electron micrographs of cells during division which clearly showed microtubule structures.
I don't know where those images are now.
 
That's really cool. I love the notion that we are each a conductor in consciousness, and the various patterns of perception we encounter are like separate sections within the orchestra, with their own voice, and wherever we point then becomes alive with potential change and connection. That each of us is an orchestration of awareness, and our consciousness is resonant, harmonic and symphonic.
 
853guy: SUCH CERTITUDE!

Apologies for totally misunderstanding your provocative, rather than factual, thoughts (since I am NOT a five-year-old), as I take offence at your absolutist views concerning «music being wholly separate and distinct from sound».
I detect either an innocent ignorance or a cultural bias (accepted as human frailties), hence my laconic responses that follow.

From post #88: «music though is the combination of pitch/frequency and rhythm and time. Always has been, always will be». And from post #277: «when pitch is organised relative to time and given dynamics, we have music. For the last 43,000 years, this is the foundation of what has and always will comprise music. Even a five-year-old can understand this».

Pitch/frequency does NOT have to be defined by rhythm and time. My music collection includes hundreds of hours of non-Western improvised music under the generic term TAKSIM/TAQASIM, namely instrumental improvisations, and vocal improvisations known as GAZEL/GAZAL (Turkish, Persian, Arabic, Greek and others). This music epitomises the most expressive, lyrical, refined and deeply felt sentiments, often far surpassing other more «organised» pitches/frequencies defined by rhythm and time. They are free of form, devoid of rhythm and time. There is content/substance (music) without form (rhythm and time).

According to your absolutist definition, this is NOT music but sounds. Let me assure you, on behalf of millions who seek this form of musical intoxication and ecstasy (TARAB), that this is music of the highest order! And the same may well be the case with Western improvised music. Totally personalising the issue, I can unequivocally tell you that if I can create a «taksim» (usually spontaneously, but not necessarily entirely so) to complement the ethos and melodic personality of a song, the pleasure and satisfaction of the former, i.e. the «taksim» (sounds, according to your definition), far exceed the latter, i.e. the song (pitch, rhythm, time, dynamics)... music, according to you. I do NOT differentiate between the two! Philological pedantics/semantics aside, in this context, music is sound; sound is music, whether organised into time and rhythm or not.

By the way, tone and timbre as elements of sound are, by extension, also elements of music. They are part and parcel of the emotional personality of the music and thus not «acoustical by-products» as you claim. An intricate, mesmerising microtonal inflection of an instrumental «taksim» or vocal «gazel», modulating into a number of related «makam»/modes will only be emotionally enhanced and accepted by an equally beautiful tone/timbre.

I am more than happy for you to continue separating the two (music and sound) and listening to «pitch organised relative to time and dynamics». This, however, is a very narrow and culturally biased (and don't forget Western improvised music) definition/position to adopt in such a dogmatic manner, as expounded in your numerous posts. For what? To support a premise/hypothesis that the brain processes music as distinct from sound and speech.

Finally, since we all wallow in frailties of all sorts (especially in this hobby), my offence at your thesis can equally be your offence at what I just wrote, or perhaps (as I would like to think) it was an innocent oversight on your behalf. In which case, my comments act as a clarification.

Thank you, Kostas Papazoglou.
 

the sound of Tao said:
I think we are all in the land of Ludd when it comes to grasping perceptual modelling and matching it to brain behaviour. If you ever get the time to read up on Hameroff and Penrose's work on Orchestrated Objective Reduction it is a fascinating model from neuroscience.

Yep, count me in, too!

Thanks for the heads up re: Hameroff and Penrose. Should I overcome my guilt at posting here rather than generating billable hours for my client, I'll try and take a look.

853guy
 

Hi Kostas,

Thanks for your reply. To me, I think this is a really interesting thread that’s bringing out a lot of really useful discussion, and I appreciate you posting your thoughts.

I intended to give offence neither to you nor to anyone, and certainly, I intend to take none. If you were offended, I very much apologise. (And I'm certainly not suggesting anyone on this thread is a child - my point was rather that any child learning to read a score or understand music theory will be confronted with the three domains of pitch, time and dynamics.)

I’m a pilgrim on a journey trying to understand better - within my own socio-cultural and intellectual limits (which are considerable), i.e., my own perspective - what it is about music that has made it so essential to our species’ development and enrichment for thousands and thousands of years. Sometimes, in the attempt at articulating what I think at any given moment, I tend to be more definitive than I mean to be, partly in an attempt to clarify my own thoughts, and hopefully, to avoid saying what I don’t mean to say.

I hope you caught this insight into my process in post #277 here where I say:

853guy said:
In my opinion - which, because I am some anonymous guy posting on an internet forum, is essentially worthless

I take this area of research very seriously (because I think it’s incredibly exciting), but myself, a lot less so. And of course, I’m constantly making assumptions on behalf of my cultural time and place, simply because I cannot not experience my own experience (except through repression or denial), and I cannot experience yours, except through immersion and shared exchange. I love that you have had a very different culturally defined existence, and I’m looking forward to what I might learn from you. But please - I’m some privileged guy who makes money in advertising and spends some of his free time pontificating on the essence of music and its reproduction via systems whose value is greater than the GDP per capita of the lowest twenty countries combined. That I have a bias - social, cultural, intellectual, experiential, and egomaniacal - is undeniable. However, I’m open to learning, and moving to Europe three years ago and becoming an “immigrant” (though I use that term very loosely) was part of the process to better understand my own socio-cultural myopia. It’s good to be proven wrong and have to check your privilege every now and then. In fact for me, it’s essential.


Part of the problem I alluded to in post #162 is that when we take one form of communication and attempt to define it via another one, we mostly end up defining the limits of one in light of the other. It’s problematic to define music through language, because we tend to use terms that make the most sense vis-a-vis language rather than music.

My definition above in post #277 is a simplistic attempt (perhaps overly so) at trying to bridge my own socio-culturally defined experiences in the exploration of music (and to reply to jkeny’s post) with the above mentioned research I’ve referenced throughout this thread, and provided a link to here:

853guy said:

Is it problematic to define music using terms we’ve invented in order to share it with others? Absolutely. But for better or worse, that ability to break music down into three basic variables has allowed us as a species to create and share ideas on a scale not only unimaginable to our ancestors, but to increase the complexity of the music we share. A piece of sheet music doesn’t capture everything about a piece of music - certainly, it’s outside of its remit to include artistry and aesthetics, the domain of the artist - but it does potentially provide us an insight into the above research of what the brain is looking for, which is fundamentally a relationship between variables.

Does much of what we call ‘music’ stretch the limits of that basic relationship? Absolutely. If you take a look at post #164, you’ll see a small collection of the type of ‘music’ I love and have continued to collect ("ambient", "illbient", "dark-ambient", "drone", "doom", "noise" - terrible names all) ever since I discovered Brian Eno’s Apollo. And in fact, one of the albums listed there, Lustmord’s The Word as Power, has a great track, “Grigori”, featuring the vocals of Soriah (Enrique Ugalde), who’s a practitioner of Khöömei (Tuvan throat singing). There’s very little rhythm - it’s essentially long drones and occasional low percussive interjections over which Soriah intonates - but it does feature pitch and dynamics, and it cannot help but be defined by time.

My belief is that music is directly related to the intent of the artist. Even the most free-form and improvised ‘music’ starts and stops at a certain point in time (I'm also a huge fan of improvisational and avant-garde classical and jazz). However, even if I create a drone made of various sub-harmonics, I still make a decision as to when it begins and when it ends - in this regard I can’t see how music cannot be defined by time, because we as practitioners are bound by it, and the decision to emit sound via an instrument is always under our control.
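For what it's worth, this three-variable framing is essentially how machines already encode music. A minimal, hypothetical sketch - the Note type and the numbers are my own invention, loosely echoing what MIDI stores (note, onset/duration, velocity):

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch_hz: float  # pitch/frequency
    onset_s: float   # when it begins (time)
    dur_s: float     # how long it lasts (time)
    level: float     # dynamics, 0.0-1.0

# Even a free, unmetered drone is still bounded by time:
# it has an onset and a duration chosen by the performer.
drone = Note(pitch_hz=55.0, onset_s=0.0, dur_s=120.0, level=0.3)

melody = [
    Note(440.0, 0.0, 0.5, 0.8),
    Note(494.0, 0.5, 0.5, 0.6),
    Note(523.0, 1.0, 1.0, 0.9),
]
total = max(n.onset_s + n.dur_s for n in melody)
print(f"melody spans {total:.1f} s")  # → melody spans 2.0 s
```

The drone has no rhythm in any metric sense, yet it cannot be written down - even in this toy form - without an onset and a duration, which is the point about time above.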

If you get a chance to look at the research above I’d love to know your thoughts on it.

Thanks again, Kostas.

853guy
 
KostasP. said:
...in this context, music is sound; sound is music, whether organised into time and rhythm or not.
Very well said Kostas. In my book there are some rules one must not break: "no audio argument can rely on first defining what music is." Music is whatever it wants to be. It is creative art and there is no limit to what someone puts in their creation. In visual arts, a blank canvas with a dot on it could be considered high art. Music can be the same way. I don't know how we can ever attempt to put any boundary on it.

An example I use is this Allan Taylor Color to the Moon track which starts with nearly pure tone:


Are we to say that tone is not music? It is, of course, and in the case of this track I have learned to expect it as part and parcel of what is music.
 
KostasP. said:
...in this context, music is sound; sound is music, whether organised into time and rhythm or not.
Very well said Kostas. In my book there are some rules one must not break: "no audio argument can rely on first defining what music is." Music is whatever it wants to be. It is creative art and there is no limit to what someone puts in their creation. In visual arts, a blank canvas with a dot on it could be considered high art. Music can be the same way. I don't know how we can ever attempt to put any boundary on it.

Hi Amir,

Music is not (always) sound. It can simply be some dots on a piece of paper. If I show it to someone who can read music, they’ll “hear” in their head whatever melody is written, without any sound needing to take place. In post #125 I referenced Daniel J. Levitin & Scott T. Grafton’s August 2016, Neurocase study “Measuring the representational space of music with fMRI: a case study with Sting”, in which they write:

“Our hypotheses were confirmed. The act of composing, and even of imagining elements of the composed piece separately, such as melody and rhythm, activated a similar cluster of brain regions, and were distinct from prose and visual art. Listened and imagined music showed high similarity, and in addition, notable similarity/dissimilarity patterns emerged among the various pieces used as stimuli: Muzak and Top 100/Pop songs were far from all other musical styles in Mahalanobis distance (Euclidean representational space), whereas jazz, R&B, tango and rock were comparatively close. Closer inspection revealed principled explanations for the similarity clusters found, based on key, tempo, motif, and orchestration.”

http://www.tandfonline.com/doi/full/10.1080/13554794.2016.1216572

(Emphasis mine.)
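For anyone unfamiliar with the distance measure named in the quote: Mahalanobis distance is Euclidean distance after scaling by the feature covariance. A sketch with entirely made-up "style" feature vectors - the study used fMRI response patterns, not these numbers:

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Distance between x and y, scaled by the feature covariance."""
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Hypothetical 3-feature descriptions (e.g. tempo, harmonic
# density, timbre) - invented for illustration only.
styles = {
    "jazz":  np.array([1.0, 2.0, 1.5]),
    "rock":  np.array([1.2, 1.8, 1.6]),
    "muzak": np.array([4.0, 0.5, 4.5]),
}
cov = np.diag([1.0, 0.5, 2.0])  # assumed diagonal feature covariance

d_close = mahalanobis(styles["jazz"], styles["rock"], cov)
d_far = mahalanobis(styles["jazz"], styles["muzak"], cov)
# In the study's terms: jazz and rock sit close together in the
# representational space, while Muzak sits far from both.
print(d_close < d_far)  # → True
```

The covariance scaling is what makes the measure "representational": two styles are close only relative to how much each feature normally varies.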

No one is trying to put a boundary on it. Some of us, myself included, are using some basic building blocks of our understanding of what constitutes music - pitch, time and dynamics being the three basics - in which to better understand the current research. As I mentioned above to Kostas, we can of course, stretch those limits, but the above two studies indicate the brain is looking for a specific relationship between variables - and sound need not be one of them.

An example I use is this Allan Taylor Color to the Moon track which starts with nearly pure tone:

Are we to say that tone is not music? It is of course and in the case of this track I have learned to expect it as part and parcel of what is music.

If the tone in question is made by a human being who intends it to be music by choosing to begin it at a certain point in time, then for all intents and purposes, one can only conclude that it is.
 
The Theremin

 
The Theremin

Hi RogerD,

The first time I ever heard a theremin (or actually, its cousin the ondes Martenot) was in Messiaen's Turangalîla-Symphonie, and then later, when Page used it for Led Zep performances. Such a cool sound.

Of course, we still have a human being making intentional choices of how to modulate its pitch over time. Messiaen's score here for Trois petites liturgies de la présence divine is a nice example of how some simple marking can convey intention, even on an instrument as mercurial as the ondes Martenot:

[image: page from Messiaen's score for Trois petites liturgies de la présence divine]

And, because it's almost a work of art in itself, an extract from Iannis Xenakis' Pithoprakta - musical intention communicated in two-dimensional form:

[image: extract from the graphic score of Xenakis' Pithoprakta]
 
We ALL hear things differently and we all have different tastes... so we should ALL only use our own EARS as a REFERENCE. ;)

I do not even like Live Music!
 
This was my initial issue with the findings of this paper - what constituted music? I'm still unsure & hope that further research will define what criteria are used in our auditory processing to direct a particular set of sounds to the music-processing sulcus area.

Maybe someone can help with the same query regarding speech, which has long been acknowledged to be processed in a defined neurological area & has been researched for a far longer time? Knowing how speech is handled/defined may help to overcome this issue of definition.
 

Hi jkeny,

Well, ultimately we're revisiting previously covered territory, and I think the reality is: No one knows yet.

Interestingly though, in a 2014 study (Edward F. Chang, Nima Mesgarani, Keith Johnson, & Connie Cheung) into how the brain recognises speech sounds researchers discovered the brain breaks speech down into acoustic features and “has a systematic organization for basic sound feature units, kind of like elements in the periodic table.” What’s more, they discovered “the arrangement in the STG is reminiscent of feature detectors in the visual system for edges and shapes, which allow us to recognize objects, like bottles, no matter which perspective we view them from. Given the variability of speech across speakers and situations, it makes sense, for the brain to employ this sort of feature-based algorithm to reliably identify phonemes.”

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4350233/

So although the brain has discrete and specialised areas for processing music, distinct from those for speech and other sounds, it's possible it's using a similar process to identify sounds, hence my supposition that it's looking for a relationship between certain features.

I'd say we're only just beginning to scratch the surface…
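The "periodic table of sound features" idea can be caricatured as matching an incoming sound against stored feature bundles. The phonetic features below are standard labels, but the particular vectors and the nearest-match rule are my own simplification, not the study's actual detectors:

```python
import numpy as np

# Each phoneme as a bundle of articulatory/acoustic features:
#                      voiced, plosive, nasal, fricative
phonemes = {
    "/b/": np.array([1, 1, 0, 0]),
    "/p/": np.array([0, 1, 0, 0]),
    "/m/": np.array([1, 0, 1, 0]),
    "/s/": np.array([0, 0, 0, 1]),
}

def identify(observed):
    """Pick the phoneme whose feature bundle best matches the input."""
    return min(phonemes, key=lambda p: np.sum(np.abs(phonemes[p] - observed)))

# A noisy observation: clearly voiced and plosive, a slight nasal cue.
heard = np.array([1, 1, 0.2, 0])
print(identify(heard))  # → /b/
```

Because the match is on features rather than on the raw waveform, the same phoneme can be recognised across different speakers and situations - the analogue of recognising a bottle from any angle.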
 
Thinking Sounds - by Marcel Cobussen

What Is Music?

"This course is an introduction to one of the most basic questions in the philosophy of music. The course includes an historical overview, though most attention will go to contemporary, (late) 20th-century ideas about the problems and (im)possibilities to define music.

What is ‘music’? A complex amalgam of melody, harmony, rhythm, timbre and silence in a particular (intended) structure (Hanslick)? A sonoric event between noise and silence (Attali)? A ‘total social fact’ (Molino)? Something in which truth has set itself to work (Heidegger)?

Music. In the first place a word. As a word, it has meaning. As a word, it gives meaning. Take sounds for example: this sound is music. Which actually conveys: ‘we’ consider this sound as music. Music – as word – frames, delimits, opens up, encloses. To call (‘consecrate’ as Pierre Bourdieu would say) something music is a political decision-making process. As a grammatical concept, ‘music’ is useful: using this concept, we differentiate between various sounds. We divide, classify, categorize, name, delimit: not every sound is music. Although, since Cage, no single sound is by definition banned from the musical domain. The word ‘music’ brings (necessary) structure and order into the (audible) world.

But, there is also an other music; there is a ‘musical dimension’ that is much more difficult to capture in words. This dimension might be indicated as ‘the sensual’, something which can and should (at least according to Søren Kierkegaard) only be expressed in its immediacy. This immediate – perhaps one could also speak of ‘the physical’ – is erased at the moment when it, through reflection, would be conceptualized; it is by definition indefinable and therefore unreachable by means of language. There is thus something in music which can only be expressed through or as music. The moment that language tries to pinpoint this something, it dissolves and is lost.

So, is it possible at all to define – that is: to incorporate into a linguistic category – music?"

______

What is Musik?
 
Hi jkeny,

Well, ultimately we're revisiting previously covered territory, and I think the reality is: No one knows yet.
Yes, I agree, & that's why I was pointing out the speech recognition issue - it helped me realise that this must have been the initial question in that speech research too, i.e. what are the characteristics that auditory processing uses to categorise signals as speech? And I see you have given a reference below - thank you!

Interestingly though, in a 2014 study (Edward F. Chang, Nima Mesgarani, Keith Johnson, & Connie Cheung) into how the brain recognises speech sounds, researchers discovered that the brain breaks speech down into acoustic features and “has a systematic organization for basic sound feature units, kind of like elements in the periodic table.” What’s more, they discovered “the arrangement in the STG is reminiscent of feature detectors in the visual system for edges and shapes, which allow us to recognize objects, like bottles, no matter which perspective we view them from. Given the variability of speech across speakers and situations, it makes sense for the brain to employ this sort of feature-based algorithm to reliably identify phonemes.”

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4350233/

So although the brain has discrete, specialised areas that process music separately from speech and other sounds, it may well be using a similar process to identify sounds - hence my supposition that it's looking for relationships between certain features.
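The “feature-based algorithm” idea can be illustrated with a toy sketch. Everything here is invented for illustration - the feature names, template values and the cosine-similarity matching are my own stand-ins, not anything measured in the study - but it shows the principle: a noisy token from a different speaker still maps to the right phoneme because identification depends on the *pattern* of features, not their exact values.

```python
# Toy sketch of feature-based identification (all values hypothetical).
from math import sqrt

def cosine(a, b):
    """Similarity between two feature vectors (1.0 = identical pattern)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented phoneme "templates" as vectors of acoustic features
# (say: voicing, frication, high-frequency energy) - not real data.
TEMPLATES = {
    "/b/": [1.0, 0.1, 0.3],   # voiced stop
    "/s/": [0.0, 0.9, 0.8],   # voiceless fricative
    "/a/": [1.0, 0.0, 0.2],   # open vowel
}

def identify(features):
    """Return the phoneme whose feature pattern best matches the input."""
    return max(TEMPLATES, key=lambda p: cosine(features, TEMPLATES[p]))

# A variable, noisy token of /s/ still classifies as /s/:
print(identify([0.05, 0.8, 0.7]))  # prints /s/
```

This is of course a caricature of whatever the STG actually does, but it captures why a feature-based scheme is robust to speaker variability in a way that exact signal matching would not be.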

I'd say we're only just beginning to scratch the surface…
Excellent!
Yes - the more people realise that all our perceptions are really the brain's analysis & correlation of very basic incoming signals, the better we will understand which signal characteristics in audio playback matter to our perception & how we come to judge such signals as realistic or life-like.

As you say, auditory perception is thought to be somewhat similar to visual perception in that there is concurrent parallel processing of certain characteristics in the signal, & correlation between these parallel processing streams is used to assign the signals to specific objects (visual or auditory).

So, yes, I would agree that the categorisation of signals as music-like by auditory processing is most likely a correlation (over time) among signal factors.
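The "correlation among parallel streams" idea can be sketched in a few lines. Again, this is purely illustrative - the two "frequency band" envelopes are invented numbers, and Pearson correlation over a short window is my stand-in for whatever grouping computation the auditory system actually performs. The point is only that components which co-vary over time get bound to the same object, while independently varying ones do not.

```python
# Toy illustration (invented signals): grouping co-varying feature
# streams into one auditory object via correlation over time.
def pearson(x, y):
    """Pearson correlation of two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Amplitude envelopes of two bands that rise & fall together
# (one hypothetical source), plus one that varies independently.
band_low   = [0.1, 0.4, 0.9, 0.5, 0.2]
band_high  = [0.2, 0.5, 1.0, 0.6, 0.3]
band_other = [0.8, 0.2, 0.1, 0.9, 0.4]

same_object = pearson(band_low, band_high) > 0.8   # co-varying
diff_object = pearson(band_low, band_other) > 0.8  # independent
print(same_object, diff_object)  # prints True False
```

Real auditory scene analysis obviously correlates far richer features (onsets, harmonicity, spatial cues) than a single envelope, but the binding-by-covariation principle is the same.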

And yes, we are only scratching the surface of our understanding. I have always felt that we, on audio forums, are dealing with one of the most complicated areas - reproducing audio that satisfies the 'rules' of auditory perception while knowing only a few of those basic rules.

It's heartening to see the possibility that this new approach to fMRI analysis will ignite further, ongoing research into the auditory processing of music. This is how research progresses - a new tool allows experiments, & the analysis of their results, for questions that could not be answered before.
 
