One of my favourite scenes in a movie is this one by Woody Allen - that's why the title of this thread. Just so you know that this is not at all a definitive answer, just the beginnings of a discussion - and I hope that other members will contribute their knowledge.
Our hobby raises a host of challenging philosophical questions - what is it that we are actually hearing? Is hearing spatial? How do our other senses affect what we hear? Both Bruce and I have previously posted this video by Evelyn Glennie where a deaf musician shows us how to listen.
In Hearing Gestures, Seeing Music, researchers Schutz and Lipscomb show that a note struck on a marimba is perceived to last longer when the striking gesture is longer, even when no corresponding acoustic difference can be measured. A similar point is made by Ms Glennie in the video, where her body language makes the passage she plays sound more expressive.
These studies all demonstrate that ignoring visual information (blind auditions, recordings) robs both the performer and the audience of a significant dimension of musical communication.
However, what then of our hifi systems? When we play a recording, with no visual information about the performance, where does the illusion come from that one performance is more expressive than another? Given what we now know, shouldn't all marimba/xylophone/vibraphone recordings sound flat and boring?
Just listen to John Cocuzzi to hear how much expression and emotion can come through in a vibraphone performance, even without the visual.
Even imaging and soundstage can be influenced by what we see. Both audition and vision are used by the brain to construct spatial maps of the world. But the eyes are movable and the ears are not, so the brain must accurately account for eye position to keep the two maps aligned. Studies have shown that changing eye position can shift sound localization: Auditory Spatial Perception Dynamically Realigns with Changing Eye Position.
As a corollary, the ventriloquism effect makes us perceive sound as coming from the location of a visual stimulus. Even monkeys have been found to exhibit this plasticity. Just as an actor's speech appears to come directly from the actor's mouth on a movie screen (even with very badly set up loudspeakers), we perceive the image and soundstage depending on how strong our mental image of the soundstage is.
It has been theorized that music predated language, and that music is evolutionarily important. There have even been studies showing that music can cause the release of endorphins (and hence relieve pain), lower levels of cortisol (reducing stress and arousal), and raise levels of melatonin (inducing sleep). Poets and philosophers have always acknowledged the emotions and sensations that music can elicit.
In a paper to the Conference of the European Society for the Cognitive Sciences of Music, Vincent Meelberg introduced the concept of a sonic stroke: a sound that can create involuntary bodily reactions (like chills up and down the spine). A succession of such sonic strokes bodily invokes meaning; the emotions can be treated as a causal consequence of these bodily changes. Philosophically, music in this way has an impact on the listener, it induces affects in the body, and so we must also explore the ethical aspect of sound.
This understanding serves as a starting point for an approach to music in which not only the listener's mind but the listener's whole body is the main focus. This leads to Embodied Music Cognition, which treats the human body as a mediator between the mind and the physical environment containing the musical sound.
Finally, what is the nature of "sound" itself? There are three philosophical answers:
1) The proximal theory claims that sound is where the listener is. (If a tree falls in the forest, it does not make a sound if there is no one there to listen.) It is a very egocentric view of what sound is. It also distinguishes between the source and what is perceived, which may include the source, the intervening distance, echoes, reverberations, etc. Hence there are as many sounds as there are actual (or potential) listeners, lending weight to Ethan's position that moving a microphone as little as 3mm changes the "sound".
2) The medial theory holds that sound exists between the object making the sound and the listener. (If a tree falls in the forest, it does not make a sound if the forest is in a vacuum.) This goes back to Aristotle, Galileo and Descartes, and is basically the wave theory of sound: sound exists as waves in the medium of air, it is what acoustic theory studies, and it is what we all classically understand. Unfortunately, it is not that simple. Sound waves propagate in all directions from a sounding object (say a musical instrument) and may differ in character depending on direction. Also, a soft sound heard close to us is different from a loud sound heard far away. (You could also say that the whole train of soundwaves from the source to our ears is the "sound".)
3) The distal theory considers that the sound is located at the object making the sound. (If a tree falls in the forest, the falling of the tree is the sound.) Hence, sound is a temporal event created by an object, with a precise location and time at which it "happens". In this theory, even a bell vibrating in a vacuum is making a sound; surrounding the bell with air merely reveals the sound to the listener. (An apple is still red in the dark; turning on the light merely reveals it as red.) This gives me the biggest headache: if the sound is an event located at an object (a musical instrument), then what we hear from our hifi system is a different sound from a different object, a loudspeaker. When we see an object in a mirror, we are not seeing another immaterial object located in an immaterial space beyond the mirror; there is no such immaterial object, only the real one, mislocated.
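As an aside on the medial theory's "soft near versus loud far" point, the level part is simple arithmetic. Here is a minimal sketch of the free-field inverse-square law (an idealisation that ignores room reflections and air absorption; the function name and figures are mine, for illustration only):

```python
import math

def spl_at_distance(spl_ref_db: float, r_ref_m: float, r_m: float) -> float:
    """Free-field sound pressure level at distance r_m, given a reference
    level spl_ref_db measured at r_ref_m. Level falls about 6 dB per
    doubling of distance; rooms and absorption are ignored."""
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)

# A "loud" 90 dB source heard from 8 m arrives at roughly the same level
# as a "soft" 72 dB source heard from 1 m:
print(round(spl_at_distance(90.0, 1.0, 8.0), 1))  # 71.9
```

Yet even when the levels at the ear match, the two do not sound the same: the direct-to-reverberant balance and high-frequency losses differ with distance, which is exactly why one might count the whole train of soundwaves, and not just the pressure at the eardrum, as the "sound".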
In summary, sounds are here (proximal), there (distal), or everywhere (medial). Some have even denied that sounds have any location at all, giving rise to aspatial theories, which give me a major headache.
So, dear readers, if I haven't given you a headache yet, in a future installment I'll discuss how some of these philosophies give rise to different ways in which I think as a loudspeaker designer.