What does it mean when people describe Digital as Sounding like "Analog"? Best term?

Al M.

VIP/Donor
Sep 10, 2013
8,685
4,474
963
Greater Boston
So yes, I’d believe so, Tim - the digital distinction is something of a straw man, as nobody is actually listening to digital; happily, we're all in the analogue together.

Agreed. Our posts crossed.
 
  • Like
Reactions: the sound of Tao

the sound of Tao

Well-Known Member
Jul 18, 2014
3,620
4,839
940
There is no "gestalt of digital", since we don't hear in sample points (which are not staircases, for the uninitiated ;)). The output from a DAC is an analog waveform, so digital also delivers a "gestalt of analog".
In those very early days of awkward staircases we all pretty much fell down. Ouch. And in the attempt to lower noise, for quite a long time we also lost connection with the essential signal... but thankfully it's back.
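If anyone wants to see the "no staircases" point rather than take it on faith, here's a toy numpy sketch (my own illustration with made-up numbers - real DAC reconstruction filters are finite approximations of this ideal): Whittaker-Shannon interpolation of a sampled sine lands back on the smooth sine, while the zero-order-hold "staircase" picture clearly doesn't.

```python
import numpy as np

fs = 48_000                                  # sample rate (Hz)
f = 1_000                                    # test tone (Hz)
N = 512                                      # number of stored samples
n = np.arange(N)
x = np.sin(2 * np.pi * f * n / fs)           # the "digital" samples

# Evaluate *between* samples, away from the edges of the finite record
t = np.linspace(200 / fs, 300 / fs, 2_000)

# Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)
recon = x @ np.sinc(fs * t[None, :] - n[:, None])

# Zero-order hold - the "staircase" picture - for comparison
zoh = x[np.floor(fs * t).astype(int)]

ideal = np.sin(2 * np.pi * f * t)
print(np.max(np.abs(recon - ideal)))   # small: the reconstruction is the smooth sine
print(np.max(np.abs(zoh - ideal)))     # large: the staircase is an artefact of ZOH, not of sampling
```

The finite sinc sum has a small truncation error, which is why the comparison stays away from the record's edges.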

Music is a rich part of the experience, and how we now access it is, for me, less important than the sheer amount of access we now have.
 

JackD201

WBF Founding Member
Apr 20, 2010
12,308
1,425
1,820
Manila, Philippines
When my analog references were my SL1200s with crappy slipmats and Shure carts, digital was superior LOL

These days my answer is usually "It depends". Analog is notoriously easy to screw up just like anything else where fine mechanical calibration is required.
 
  • Like
Reactions: jeff1225

andromedaaudio

VIP/Donor
Jan 23, 2011
8,355
2,731
1,400
Amsterdam holland
For me this makes digital kinda listenable / old school.
Fully serviced - picking it up tomorrow (if I'm not stopped at the border, lol)

360 S.
Maybe it's colouration, maybe not - who cares in the end; as long as it sounds nice, it's all good.
 
Last edited:

microstrip

VIP/Donor
May 30, 2010
20,806
4,698
2,790
Portugal
Huh? How does one preference vs another relate to the ability to distinguish between live and recorded music? And yes, everything happens in a context - I think we know that.

I am simply stating that in some conditions people will easily distinguish between real and recording, in others they will not. What can we conclude from that?
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
My 2 cents for what it's worth - all this is IMO

We don't know what the original event or the final cut sounded like so how do we judge the sound of our replay systems?

If we look on recordings as pieces of art - the art of the musicians as interpreted by the art of the recording engineer - then each recording is one representative interpretation. I'm sure that, given different recording engineers & producers, we would get different interpretations.

So you want what you hear to be what the recording professionals intended but you can't really know this precisely. So how do we judge our replay systems? I suggest that we judge the sound in terms of how 'real' it sounds. When it sounds 'real' we find engagement & immersion in the performance - we transcend the sound of our playback system. This can only happen if the 'realism' of the sound is maintained during the playback.

Most of our auditory perception is happening below the level of consciousness, with the end result presented to consciousness. What I mean is that our brain is analysing the nerve impulses from our two ears & making sense of them, organising & categorising them into an auditory world model that makes sense - it's a heavy-duty analytical process that evaluates what we perceive through our senses. This can only work efficiently if we have an internal series of rules/models against which the analysis is performed - rules/models that have been built (& continue to be built) over years of exposure to sound in the world.

So my contention is that it is this sub-conscious analysis which determines how real we perceive the sound to be.

To explain in a bit more detail - the real-time analysis seems to work along the lines that, at any point in time, it is finding the best fit of the collection of nerve impulses into a working model & as a result predicts what should come next according to that working model. An example of this prediction function is how alien a sound seems when played backwards - it doesn't match the behaviour of sound as we usually experience it & have built our internal analytic models around.
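To make the backwards-sound point concrete, here's a toy sketch (my own, with invented numbers): natural percussive sounds put their energy up front - sharp attack, long decay - and even a crude statistic flags the reversed version as violating that regularity.

```python
import numpy as np

fs = 16_000
t = np.arange(fs) / fs                       # one second

# Toy "bell": sharp attack, long exponential decay - the way struck objects behave
bell = np.exp(-4 * t) * np.sin(2 * np.pi * 440 * t)
reversed_bell = bell[::-1]                   # same magnitude spectrum, alien time course

def attack_fraction(x, frac=0.9):
    """Fraction of the duration needed to accumulate `frac` of the total energy."""
    e = np.cumsum(x ** 2)
    return np.searchsorted(e, frac * e[-1]) / len(x)

print(attack_fraction(bell))           # small: energy is front-loaded, as expected of a strike
print(attack_fraction(reversed_bell))  # near 1: energy arrives "too late" - it sounds wrong
```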

So what is happening when we listen to our playback system? We are analysing in the exact same way. If it sounds natural & real, it's because the sound ticks all the analytical boxes that match our internal real-world models of sound. If, at any point in the sound stream, it doesn't match the prediction in some aspect, the working analytic model is changed to best fit the new collection of nerve impulses, and so on. Too many of these deviations from the model's expectations means too much modification of the working model & too much energy expended, as this best-fit analysis & changing of models is heavy on resources. But all this is happening below consciousness - what emerges at a conscious level depends on how many such misfits there are: disinterest in the music at one end, perhaps even a wish to turn it off because it is disturbing/jarring in some way at the other. We generally don't get into the relaxed state where music transports us, as our brain's energy is mostly being used in figuring out the nerve impulses it is being presented with.

The opposite, where fewer resources are expended listening to playback, allows the saved energy to be used by higher levels of brain function - which, I believe, is why we feel engagement, immersion in the sound & enjoyment in listening to the music playback. This only happens when the rightness of the nerve impulses (the music stream) is in concurrence with our inbuilt models of natural sound.

I'm not sure what all the characteristics are that determine the 'realness' of the sound - it may be that we form a statistical analysis of the ongoing collection of sounds we call music, i.e. it's not individual frequencies or amplitudes or timings but an ongoing statistical analysis/abstraction, moment to moment - a sort of sophisticated ongoing pattern analysis with prediction. So what has occurred in the music some moments ago (how many moments, I don't know) is of importance to this ongoing statistical analysis.

All this leads me to kinda try answering the question posed in the o/p - an all-analogue system can be wrong, but the mistakes are of a certain type - a type that auditory perception finds easier to accommodate, perhaps? Digital audio system errors may be more unnatural to auditory perception? For instance, I've often seen wow & flutter compared to jitter or close-in clock phase noise as if they were equivalent, but I don't believe that to be the case. Perhaps digital audio is focused on the wrong goal - removing noise? By doing so it may expose patterns of errors which were previously buried in the base noise of analogue? Perhaps patterns are more easily exposed in digital audio, & it's patterns that our auditory system uses for its analysis? Again, take all my statements as working hypotheses & IMO best guesses - not set opinions or definitive descriptions of the way auditory perception works.
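To put a number on why wow & flutter and jitter aren't equivalent, here's a rough numpy sketch (toy, deliberately exaggerated figures - not measurements of any real deck or DAC): both are timing errors on a 1 kHz tone, but the slow periodic error stays as close-in pitch wobble hugging the tone, while the fast random error smears into a wideband noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000
f = 1_000
N = 1 << 16
t = np.arange(N) / fs

# Wow/flutter: slow (0.5 Hz) periodic timing error, ~1 ms peak (tape-like, huge)
t_wow = t + 1e-3 * np.sin(2 * np.pi * 0.5 * t)
# Jitter: fast random timing error, ~300 ns rms (grossly exaggerated for visibility)
t_jit = t + 3e-7 * rng.standard_normal(N)

def leaked_power(timebase):
    """Fraction of spectral power landing more than 20 Hz away from the tone."""
    x = np.sin(2 * np.pi * f * timebase)
    P = np.abs(np.fft.rfft(x * np.hanning(N))) ** 2
    freqs = np.fft.rfftfreq(N, 1 / fs)
    return P[np.abs(freqs - f) > 20].sum() / P.sum()

p_wow = leaked_power(t_wow)
p_jit = leaked_power(t_jit)
print(p_wow)   # tiny: the (huge) wow error stays close-in around the tone
print(p_jit)   # far larger: the (tiny) jitter error spreads wideband
```

The wow error here is roughly a thousand times bigger in seconds than the jitter, yet it is the jitter that raises a broadband floor - a different kind of pattern, not just a different amount.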

So it's not so much does the replay sound like a strad/fender or whatever but is it internally consistent when analysed by auditory perception as real-world sound?

And on top of all this, we are listening to a very limited 2-channel version of what we would hear in the real world - which adds another complexity to the scenario.
 
Last edited:

Empirical Audio

Industry Expert
Oct 12, 2017
1,169
207
150
Great Pacific Northwest
www.empiricalaudio.com
For me, there is an "ease" and "effortlessness" to the music which is best conveyed by 1/2", 30 ips, half-track tapes. The closest I've heard to that is DSD 256. Hope everyone is well!

This is exactly what I hear when jitter and distortion in a digital system are very low, but it also requires the power subsystem to deliver the high-frequency transient currents. All three are really needed in order to achieve digital that sounds like analog.

The lack of fatigue, or effortlessness, is the audible manifestation of these features. I get this even with 44.1 Redbook tracks.
 
  • Like
Reactions: jkeny

Empirical Audio

Industry Expert
Oct 12, 2017
1,169
207
150
Great Pacific Northwest
www.empiricalaudio.com
I am simply stating that in some conditions people will easily distinguish between real and recording, in others they will not. What can we conclude from that?

If your track is non-music, such as water running/splashing, gravel on a road being walked on or similar, everyone knows what these sound like. These are pretty good test tracks for determining liveness.

I like to compare my real analog system - my acoustic Victor II gramophone - to my digital system. This is true analog, no electronics involved.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
This post was messed up when I lost my internet connection - I'll write it again so that it makes sense.
 
Last edited:

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
This is exactly what I hear when jitter and distortion in a digital system are very low, but it also requires the power subsystem to deliver the high-frequency transient currents. All three are really needed in order to achieve digital that sounds like analog.

The lack of fatigue, or effortlessness, is the audible manifestation of these features. I get this even with 44.1 Redbook tracks.
Yeah, we all try to correlate the ease & effortlessness - I call it realism - in sound reproduction with some factor in the individual electronics or the system component chain that is responsible for it. I tend to agree with you, but would express it in a wider context: electrical noise in general is paramount in achieving this realism. My experience tells me that power is a big factor in all of this, but so is electrical noise coming from connected devices - common-mode noise infiltrating DACs from the connection to a computer, network server, etc.

Exactly what the mechanism is whereby this noise gets through, affects the ground reference & becomes audibly evident is a complex puzzle yet to be spelled out. Indeed, standard measurements of the analogue waveform at DAC outputs do not immediately reveal the differences considered audible between the sort of top-class systems we are describing & run-of-the-mill digital audio devices. The answer will be interesting when it arrives.

And one other factor - I believe this realism is mostly a factor of the source: the further from the source a change is made, the less ability it has to achieve this.

I know this sounds like "to a man with a hammer, every problem is a nail"
 
Last edited:
  • Like
Reactions: RogerD

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
If your track is non-music, such as water running/splashing, gravel on a road being walked on or similar, everyone knows what these sound like. These are pretty good test tracks for determining liveness.

I like to compare my real analog system - my acoustic Victor II gramophone - to my digital system. This is true analog, no electronics involved.
These examples - water running, rain on a tin roof, audience applause, fire crackling, etc. - are called texture sounds in the acoustics research literature & are used as examples of how auditory perception uses summary statistics to analyse & categorise them. I extrapolate this function to suggest that we use it to some extent in all our auditory perception. Knowing how efficient biological organisms are in the use of their limited resources, it seems that when we discover a working mechanism like this, it is seldom used only for a very specific scenario like these examples.
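As a toy illustration of what such a summary statistic might look like (my own sketch, only loosely inspired by that literature): a steady rain-like hiss and sparse fire-crackle-like pops can be told apart - without remembering any individual sample - by simple moments of their envelopes.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16_000
N = 2 * fs                                   # two seconds of "texture"

steady = rng.standard_normal(N)              # steady, rain-like hiss
crackle = np.zeros(N)                        # sparse, fire-crackle-like pops
pos = rng.choice(N, size=20, replace=False)
crackle[pos] = 40.0 * rng.standard_normal(20)

def envelope_stats(x, win=256):
    """Crude envelope (rectify + smooth), then its normalised 3rd and 4th moments."""
    env = np.convolve(np.abs(x), np.ones(win) / win, mode="valid")
    z = (env - env.mean()) / env.std()
    return (z ** 3).mean(), (z ** 4).mean()  # (skewness, kurtosis)

print(envelope_stats(steady))    # near (0, 3): the envelope hovers around its mean
print(envelope_stats(crackle))   # strongly positive skew & high kurtosis: a spiky envelope
```

No per-sample memory is involved - just a couple of running numbers per texture, which is the appeal of the summary-statistics idea.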
 
Last edited:

Ron Resnick

Site Co-Owner, Administrator
Jan 24, 2015
16,017
13,347
2,665
Beverly Hills, CA
My 2 cents for what it's worth - all this is IMO

We don't know what the original event or the final cut sounded like so how do we judge the sound of our replay systems?

If we look on recordings as pieces of art - the art of the musicians as interpreted by the art of the recording engineer - then each recording is one representative interpretation. I'm sure that, given different recording engineers & producers, we would get different interpretations.

So you want what you hear to be what the recording professionals intended but you can't really know this precisely. So how do we judge our replay systems? I suggest that we judge the sound in terms of how 'real' it sounds. When it sounds 'real' we find engagement & immersion in the performance - we transcend the sound of our playback system. This can only happen if the 'realism' of the sound is maintained during the playback.

Most of our auditory perception is happening below the level of consciousness, with the end result presented to consciousness. What I mean is that our brain is analysing the nerve impulses from our two ears & making sense of them, organising & categorising them into an auditory world model that makes sense - it's a heavy-duty analytical process that evaluates what we perceive through our senses. This can only work efficiently if we have an internal series of rules/models against which the analysis is performed - rules/models that have been built (& continue to be built) over years of exposure to sound in the world.

So my contention is that it is this sub-conscious analysis which determines how real we perceive the sound to be.

To explain in a bit more detail - the real-time analysis seems to work along the lines that, at any point in time, it is finding the best fit of the collection of nerve impulses into a working model & as a result predicts what should come next according to that working model. An example of this prediction function is how alien a sound seems when played backwards - it doesn't match the behaviour of sound as we usually experience it & have built our internal analytic models around.

So what is happening when we listen to our playback system? We are analysing in the exact same way. If it sounds natural & real, it's because the sound ticks all the analytical boxes that match our internal real-world models of sound. If, at any point in the sound stream, it doesn't match the prediction in some aspect, the working analytic model is changed to best fit the new collection of nerve impulses, and so on. Too many of these deviations from the model's expectations means too much modification of the working model & too much energy expended, as this best-fit analysis & changing of models is heavy on resources. But all this is happening below consciousness - what emerges at a conscious level depends on how many such misfits there are: disinterest in the music at one end, perhaps even a wish to turn it off because it is disturbing/jarring in some way at the other. We generally don't get into the relaxed state where music transports us, as our brain's energy is mostly being used in figuring out the nerve impulses it is being presented with.

The opposite, where fewer resources are expended listening to playback, allows the saved energy to be used by higher levels of brain function - which, I believe, is why we feel engagement, immersion in the sound & enjoyment in listening to the music playback. This only happens when the rightness of the nerve impulses (the music stream) is in concurrence with our inbuilt models of natural sound.

I'm not sure what all the characteristics are that determine the 'realness' of the sound - it may be that we form a statistical analysis of the ongoing collection of sounds we call music, i.e. it's not individual frequencies or amplitudes or timings but an ongoing statistical analysis/abstraction, moment to moment - a sort of sophisticated ongoing pattern analysis with prediction. So what has occurred in the music some moments ago (how many moments, I don't know) is of importance to this ongoing statistical analysis.

All this leads me to kinda try answering the question posed in the o/p - an all-analogue system can be wrong, but the mistakes are of a certain type - a type that auditory perception finds easier to accommodate, perhaps? Digital audio system errors may be more unnatural to auditory perception? For instance, I've often seen wow & flutter compared to jitter or close-in clock phase noise as if they were equivalent, but I don't believe that to be the case. Perhaps digital audio is focused on the wrong goal - removing noise? By doing so it may expose patterns of errors which were previously buried in the base noise of analogue? Perhaps patterns are more easily exposed in digital audio, & it's patterns that our auditory system uses for its analysis? Again, take all my statements as working hypotheses & IMO best guesses - not set opinions or definitive descriptions of the way auditory perception works.

So it's not so much does the replay sound like a strad/fender or whatever but is it internally consistent when analysed by auditory perception as real-world sound?

And on top of all this, we are listening to a very limited 2-channel version of what we would hear in the real world - which adds another complexity to the scenario.


I think this is very thoughtfully and beautifully written. A group here developed in 2016 four alternative, but not mutually exclusive, objectives of high-end audio:

1) recreate the sound of an original musical event,
2) reproduce exactly what is on the tape, vinyl or digital source being played,
3) create a sound subjectively pleasing to the audiophile, and
4) create a sound that seems live.
I think that your objective of achieving a sound that sounds "real" is substantially similar to objective 4) "create a sound that seems live."
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
This was a long post; I posted it earlier but lost my internet connection, & when I looked back the post was garbled - so I'm trying to remember my thoughts & rewrite it.
I hate when that happens !!

I often see the idea put forth that our auditory memory is unreliable & flawed, so how can we judge whether what we hear in playback sounds like a real model X instrument?

A number of things strike me.

Sure, we don't remember exact frequency/amplitude/timing that we could compare between sounds separated by more than a few seconds. But that is not the role auditory perception was designed for, so let's not insist it is fatally flawed because it doesn't perform like a measuring instrument. We need to accept that & live with the uncertainty of all our perceptions - they are designed for timely analysis of nerve signals so that we can usefully interact with the physical world. Listening to music is an enjoyable side-benefit of this primary role.

It's not a case of remembering what a particular instrument/voice/etc sounds like (in terms of frequency/amplitude/time); it's more a case that the sounds we are exposed to & listen to regularly automatically generate in us a sonic fingerprint (the literature calls this a statistical summary) at a subconscious level, & it's this fingerprint that our auditory perception uses. So over time, I believe, we do this with our playback systems & build up this sonic fingerprint. When we audition new gear we often take time to evaluate its sonic characteristics, allowing those characteristics, or sonic personality, to reveal themselves over time. Maybe a characteristic doesn't reveal itself until we swap our previous device back into the playback system. What I believe is happening here is a sped-up version of the usual sonic fingerprinting we do all the time: we are comparing one sonic fingerprint vs another.

Somebody mentioned gestalt earlier in the thread; I consider it gist listening - more a focus on how the music is affecting us in terms of engagement, immersion, wanting to rediscover our music library, etc - a delight in the new insights it gives us into the music. I suspect this sense of engagement & delight is about our higher-level brain functions being more engaged. When we are able to engage in relaxed listening like this, it's a soothing balm for our psyche. This is not about hearing distant trains running while the recording was made; it's about achieving a better insight into the musical portrayal.

An example of how all this works is given by experiments examining the role of the pinna in sound-source localisation. Even though we all have differently shaped pinnae, we generally converge on the same ability to localise sound - so even though the spectral fingerprints of the reflections from the pinna are wildly different between individuals, individuals don't have wildly different localisation abilities. That's because each individual learns the correlation between spectral fingerprint & sound-source location.

So the experiment used a silicone mould inserted in the ear, changing the spectral fingerprint - localisation ability was greatly diminished but returned after a number of days of wearing the insert. On taking out the insert, individuals almost immediately reverted to their former localisation acuity, & when the mould was re-inserted after a day it again gave the same (new) acuity; after a week, less so; a month, less so; & so on - the new correlation model, learned subconsciously with the mould inserted, faded over time from lack of use.

Same thing happens with visual perception & prism glasses - I'll let you extrapolate - I'm too lazy to spell it out.

The point being that we form internal models of what our perceptions are exposed to on a regular basis. I extrapolate this to the replay system we probably listen to every day - we form an internal model of its sound: a sonic-fingerprint characteristic, or maybe a statistical summary of its sound?
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
I think this is very thoughtfully and beautifully written. A group here developed in 2016 four alternative, but not mutually exclusive, objectives of high-end audio:

1) recreate the sound of an original musical event,
2) reproduce exactly what is on the tape, vinyl or digital source being played,
3) create a sound subjectively pleasing to the audiophile, and
4) create a sound that seems live.
I think that your objective of achieving a sound that sounds "real" is substantially similar to objective 4) "create a sound that seems live."
Thanks
I don't think we can achieve 1) - or at least we can't know if/when we have achieved it.
2) I don't know how to achieve this - correlation systems comparing input with output waveform measurements are fraught with many issues.
3) can probably be achieved (at least for a short time) with non-accurate reproduction.
4) seems to me the core issue, but I call it perceived realism, not "seems live" :)
 
Last edited:

tima

Industry Expert
Mar 3, 2014
5,778
6,820
1,400
the Upper Midwest
I was trying to tease out whether there can be degrees of aural memory, or if discriminating live from recorded music was something other than 'remembering'.

If one hears enough live music I think one knows what it sounds like - one has learned what it sounds like. I was asking whether that is the same as remembering, whether knowledge is different from memory. If you know what a live piano sounds like, I don't know that that requires remembering what a specific note sounds like from a specific piano.


Most of our auditory perception is happening below the level of consciousness, with the end result presented to consciousness. What I mean is that our brain is analysing the nerve impulses from our two ears & making sense of them, organising & categorising them into an auditory world model that makes sense - it's a heavy-duty analytical process that evaluates what we perceive through our senses. This can only work efficiently if we have an internal series of rules/models against which the analysis is performed - rules/models that have been built (& continue to be built) over years of exposure to sound in the world. ...

So it's not so much does the replay sound like a strad/fender or whatever but is it internally consistent when analysed by auditory perception as real-world sound?

This suggests we may be close in how we think about gauging what we hear, at the minimum how we distinguish live from reproduced music.

The opposite, where fewer resources are expended listening to playback, allows the saved energy to be used by higher levels of brain function - which, I believe, is why we feel engagement, immersion in the sound & enjoyment in listening to the music playback. This only happens when the rightness of the nerve impulses (the music stream) is in concurrence with our inbuilt models of natural sound.

For a while now I've believed the most immersive level of enjoyment of reproduced music happens when one is mostly or wholly focused on, or in touch with, the music to the point where we are not thinking about other stuff - when we are not thinking about system or reproduction, when the music, as it were, takes us. And I've read that the state of such immersion occurs largely in the limbic area of the brain, which I read is more primitive in terms of evolution and not an area where higher levels of brain function occur. This is Copland's "sensuous plane" -- "a kind of brainless but attractive state of mind [that] is engendered by the mere sound appeal of music." Whether higher or lower order brain function, I agree that we 'do less work', expend fewer thought resources, at that level of appreciation. (Admittedly though, sometimes the Kingsway Hall underground can break in.)

One thing you wrote that I find interesting is the notion of the "internal series of rules/models" built "over the years of exposure to sound in the world." Perhaps their application might be considered a kind of memory, but in themselves they are a kind of knowledge. We might think of them as our preferences, or as reflecting our preferences (I'm not sure on this), and ask whether they come entirely from exposure to live music or are perhaps more fungible across individuals in terms of their origin. Being audiophiles, we dicker amongst ourselves over the sound of this system or that system.

I enjoyed your write-up - it was interesting, well written and cogent.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
This suggests we may be close in how we think about gauging what we hear, at the minimum how we distinguish live from reproduced music.
What I'm suggesting is that before we consciously think about the timbre of an instrument & whether it accurately matches our memory of the live instrument, our analytic engine has done a lot of work subconsciously. This work at the subconscious level is very basic but also very complex.

It's best to use an analogy to explain what I mean - imagine you are sitting at the edge of a swimming pool with only your two feet dipped in the water, & you can't see or hear. There are many people in the pool splashing, moving around, swimming, etc. All these pool activities happening at the same time cause composite waves which arrive at your feet. Working out where people are & what they are doing in the pool, just from the waves arriving at your two feet, is the equivalent of what auditory processing is doing. In other words, out of the mixture of waveforms, the ones a given swimmer is creating are identified & grouped together out of the composite mix of waves, & this grouping is maintained as the swimmer moves through the pool. The same applies to all the people/objects creating waves: each is separately identified just by its waveform, not as a one-off but on an ongoing basis. As you can see, this is a very complex inference engine, which requires many past examples from which to learn how waveforms in pools behave with different people/objects & actions creating the waves - so it's a heuristic inference engine.
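A drastically simplified, single-snapshot version of that wave-sorting (my own toy sketch - real auditory scene analysis is vastly more sophisticated and works on far messier evidence): two "swimmers" each make waves at their own rate, and the composite at "our feet" can still be pulled apart into its dominant components.

```python
import numpy as np

fs = 8_000
t = np.arange(fs) / fs                       # one second of waves arriving at "our feet"

# Two "swimmers", each disturbing the water at a different rate; we only sense the sum
mix = 1.0 * np.sin(2 * np.pi * 300 * t) + 0.7 * np.sin(2 * np.pi * 770 * t)

# Group the composite wave by frequency content and keep the two strongest components
spec = np.abs(np.fft.rfft(mix * np.hanning(fs)))
freqs = np.fft.rfftfreq(fs, 1 / fs)
sources = np.sort(freqs[np.argsort(spec)[-2:]])
print(sources)   # recovers the two underlying "swimmers" from the single mixed signal
```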

So an internal working model is built based on the best fit to the sensed waves. This model has within it expectations of how these waves will behave (the heuristic element). But what happens if a wave arrives which doesn't properly fit into the existing model - or, more likely, our best-fit analysis was wrong & this is only now discovered because an arriving wave doesn't fit? The working model has to change to best accommodate it. (This is what is meant by saying our perceptions are an interpretation of what's out there - really a best guess at any point in time, though usually fairly accurate, or accurate enough for our continued existence in the world. It's not necessary for the model to be highly accurate; rather, it needs to be fast & adapted to our needs.)

This is the job auditory processing is performing at a subconscious level: a working model created in real time, representing the current auditory objects & tracking their movement/progress over time, all subconsciously. And all this is happening before we even come to consciously consider whether the timbre of an instrument is correct (in our judgement).

So when we listen to our 2-channel playback systems we are suspending some of the rules & expectations in this analysis - much the same as we do when watching TV, video, etc. Listening to our 2-channel stereo creates a working model which just about satisfies enough criteria to be judged realistic - in other words, it is just close enough to the working model that would be created if we were listening to the same event live that we can enter the engagement/immersion state we could easily reach at the live event (I'm using "live event" as shorthand). If there is some anomaly in the sound from our reproduction system such that perception has to change its working model, then the more this happens the more energy is consumed & the more fatigue/disinterest/discomfort results (again, this is happening subconsciously).

But I believe 2-channel stereo is a precarious thing, on the edge of this division between "realism" & bland/uninteresting sound - it takes a lot of the small things being correct in the reproduced sound to satisfy these criteria. It's a surrogate for reality, in much the same way as the actual recording is a surrogate for a musical event.

IMO, this explains a lot about this hobby but from a different perspective perhaps?

For a while now I've believed the most immersive level of enjoyment of reproduced music happens when one is mostly or wholly focused on, or in touch with, the music to the point where we are not thinking about other stuff - when we are not thinking about system or reproduction, when the music, as it were, takes us. And I've read that the state of such immersion occurs largely in the limbic area of the brain, which I read is more primitive in terms of evolution and not an area where higher levels of brain function occur. This is Copland's "sensuous plane" -- "a kind of brainless but attractive state of mind [that] is engendered by the mere sound appeal of music." Whether higher or lower order brain function, I agree that we 'do less work', expend fewer thought resources, at that level of appreciation. (Admittedly though, sometimes the Kingsway Hall underground can break in.)
Yes, I agree that we are transported when listening to a good system, & even music we are not familiar with is interesting - maybe not as interesting/engaging as music we know & love, but there's still enough realism in it to engage us. My quip about the sound of background trains was really to point out that this detail isn't the goal but rather that realism/engagement is the goal. I do believe that this sort of low-level detail is necessary for realism, though.

One thing you wrote that I find interesting is the notion of the "internal series of rules/models" built "over the years of exposure to sound in the world." Perhaps their application might be considered a kind of memory, but in themselves they are a kind of knowledge. We might think of them as our preferences, or as reflecting our preferences (I'm not sure on this), and ask whether they come entirely from exposure to live music or are perhaps more fungible across individuals in terms of their origin. Being audiophiles, we dicker amongst ourselves over the sound of this system or that system.
I'm considering this at a lower level initially, as you see above. What I'm suggesting is that from babyhood onwards we absorb the world of sound, correlate it with the world of images, & with these two senses build internal models of how objects behave in the world, both in their visual aspects & in their auditory aspects. So a bell sound has a sharp attack & a long decay (not the other way around); a small bell produces a higher frequency than a large bell; etc. For the visual model, I think of the scene from Father Ted: "small cow or far away".

I don't think this defines preference, as it happens to everybody as part of the development of our senses from birth. With regard to exposure to our replay systems - yes, I think we become familiar with a system's sound signature & in that way we evaluate new devices inserted into it. Listening to live music on a daily basis should instil in us an innate expertise in how instruments/voices sound, I guess?
 
Last edited:
  • Like
Reactions: Al M. and Lagonda

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
BTW, I only just came across this website, where the very first problem of recording is addressed: microphone configuration & placement. I don't know if anyone has seen it before or checked out recordings made with their approach? I will be checking them out (they have a free streaming section), but the reason I posted it is that it highlights the unspoken problem underlying the o/p question: in almost all the recordings we listen to, we are hearing a poor surrogate of the original event, which greatly confuses any evaluation of our replay systems' ability to give us something approaching the original.

I do believe that our replay systems, even though they start from a substandard product (the recording), can offer increasing levels of reproduction quality, which provide increasing levels of realism.
 
Last edited:

jeff1225

Well-Known Member
Jan 29, 2012
3,011
3,256
1,410
51
CDs so easily replaced vinyl because listening to CDs is easy and getting great vinyl playback is hard. To get great vinyl sound you must have clean vinyl, good pressings, a decent record player, a decent tonearm, a decent needle and a decent phono stage.

I started collecting records because much of the hard bop and avant-garde jazz I listen to was only available on vinyl. I love my record collection, but it's a pain in the ass. Honestly, the best investment I've made has been my KLAudio record cleaner.
 
