Someone up-thread suggested that perhaps digital is fatiguing because it doesn't smooth over imperfections the way analogue audio seems to. There's logic to this, I believe, but I would express it differently. As you may know, I'm very interested in auditory perception & all my reading leads me to the following: we are born with the sense of hearing, but we train it to evaluate sound based on the sounds we encounter daily in the world. The internal model built from this exposure is what becomes our auditory perception - it's mostly an analytic process. Because we build this model using the sounds of nature, we are therefore using a restricted analysis which best matches these familiar sounds. Now, the sounds we encounter in the world do not arrive as pure tones - they are mixed with background noise. That is what we expect to hear & it is how our auditory analysis works - it splits sound into auditory streams, foreground & background, etc. Digital audio, by its very nature, delivers the audio signal against a quieter background. This lack of background noise provides less masking of any noise fluctuations & hence we notice their effect on the timbre & tonal characteristics of the sound. Does digital audio suffer somewhat from its own quieter background, by being more prone to auditory analysis sensing that we are hearing a somewhat unnatural soundscape, i.e. there isn't enough noise in the background, & is the low-level noise that is captured on the recording & played back where digital audio is at its weakest? It also means that any perturbations in ground noise will be more noticeable.
The real issue is how sensitive we are to this disturbance in the sound, which I don't believe is answered by the threshold-of-hearing metric - we need far more sophisticated tests to ascertain this & possibly fMRI is the only real answer - it can reveal perceptions that operate at the subconscious level, ones we are not consciously aware of but which probably affect us in long-term listening.
This tends to explain why measurements aren't a predictor of sound - they are not based on the auditory processing model of hearing - & why we can find that some devices with high levels of jitter, for instance, can sound better than a low-jitter device. As digital engineers well know, there is uncorrelated & signal-correlated jitter - the former being fairly acoustically benign, the latter having more psychoacoustic impact. Overlaid onto this is the frequency spectrum of the jitter. It would appear that very low frequency jitter (or phase noise) is more detrimental to soundstage & musical timbre. Why? I'm not sure, but maybe because it can interfere with the envelope of the sound, i.e. how a sound changes its spectral footprint as it develops, or maybe it changes the relative relationship between sound envelopes as music progresses through time?
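To make the correlated-jitter point concrete, here is a minimal numpy sketch (my own illustration, not anything measured in this thread) that wobbles the sampling instants of a 1 kHz tone sinusoidally. The 10 ns peak jitter and 100 Hz modulation rate are illustrative assumptions. Periodic, signal-correlated-style jitter shows up as discrete sidebands flanking the tone, rather than the broadband noise floor that random jitter produces - one reason it is considered psychoacoustically less benign:

```python
import numpy as np

fs = 48_000    # sample rate (Hz)
f0 = 1_000     # test tone (Hz)
fj = 100       # jitter modulation frequency (Hz) - illustrative
tj = 10e-9     # peak jitter amplitude (10 ns)    - illustrative
N = 1 << 16

n = np.arange(N)
t_ideal = n / fs
# Periodic jitter: each sampling instant wobbles sinusoidally
t_jittered = t_ideal + tj * np.sin(2 * np.pi * fj * t_ideal)

x = np.sin(2 * np.pi * f0 * t_jittered)

spectrum = np.abs(np.fft.rfft(x * np.hanning(N)))
freqs = np.fft.rfftfreq(N, 1 / fs)

def level_db(f):
    """Peak level (dB re carrier) in a small window around frequency f."""
    idx = int(np.argmin(np.abs(freqs - f)))
    peak = spectrum[max(idx - 3, 0): idx + 4].max()
    return 20 * np.log10(peak / spectrum.max())

print("carrier        :", level_db(f0))       # 0.0 dB by construction
print("lower sideband :", level_db(f0 - fj))  # roughly -90 dB here
print("upper sideband :", level_db(f0 + fj))  # roughly -90 dB here
```

For these parameters the sidebands sit about 90 dB below the carrier (small-angle phase modulation predicts 20·log10(π·f0·tj) ≈ -90 dB); the same amount of uncorrelated random jitter would instead smear into a noise floor.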
Psycho-acoustics cannot be measured; they are felt!
I am at a loss trying to follow the technical battle in this thread but, empirically, as an avid listener with a highly resolving and transparent system, I find that some of the pertinent points have their validity.
First of all, one should dispense with analogue/digital dogma and fundamentalism. Both analogue and digital are means to an end, and the end is the music. Done properly, both are capable of excellent results, acknowledging that perfection is non-existent, given the objective frailties of the technical and human resources involved, and abandoning all delusions about recreating the original, real thing. Often a good system can transcend reality and render a higher degree of sensory, emotional and intellectual experience than the real thing.
jkeny: Your views and hypotheses on auditory perception/psycho-acoustics were interesting and coherently articulated although, for me, somewhat overstretched and preconceived. I would rather have ideas than ideologies. The former can liberate you; the latter possess you (although, in fairness, your views were not extremist).
There is nothing wrong or deficient with experienced, honest ears as a "hearing metric". After all you, like all of us, are citing auditory inadequacies based more on what you hear (and hence postulate your theories) than on what you measure. Psychoacoustics can NOT be measured; they are felt and perhaps psycho-analysed, but not with an "auditory analysis" apparatus. Mood swings/"distortions" cannot be quantified numerically. Music personified, for me, is an experience of the ears (sensory), heart (emotional) and mind (intellectual). Maintaining a fine balance of the three is facilitated by having a highly resolving, involving and transparent system but, for many, it does not have to be like this.
I have a simpler and less sophisticated "auditory value system": I judge the end result, i.e. the music, as produced by my two means of playback, analogue and digital. The noise properties that you assign to digital and their associated auditory effects (i.e. what we actually hear) do not translate to reality in my experience, no matter how hard I try to make the "naturo-cultural" (my own neologism, just for this post) connections that you allude to.
There is a simple test: record an analogue track on a relatively high-quality, let alone state-of-the-art, digital recorder. Match levels and compare blindfolded (unless your ears are honest). You will hear an almost exact facsimile of the analogue. How does your "auditory analysis" model/mechanism (post #75) account for and reconcile with this finding? We hear the exact analogue on digital; the digital has not added anything of its own, as inferred in or deduced from your post.
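The level-matching step of this test can be sketched in a few lines; `match_rms` is a hypothetical helper name of my own, and a real analogue-vs-digital comparison would also need the two captures time-aligned before any blind listening:

```python
import numpy as np

def match_rms(reference, candidate):
    """Scale `candidate` so its RMS level equals that of `reference`."""
    ref_rms = np.sqrt(np.mean(reference ** 2))
    cand_rms = np.sqrt(np.mean(candidate ** 2))
    return candidate * (ref_rms / cand_rms)

# Toy example: the same tone captured at two different levels
t = np.linspace(0, 1, 48_000, endpoint=False)
analogue_capture = 0.5 * np.sin(2 * np.pi * 440 * t)
digital_capture = 0.2 * np.sin(2 * np.pi * 440 * t)

matched = match_rms(analogue_capture, digital_capture)
```

Level matching matters because even a fraction of a dB of mismatch is reliably heard as a quality difference rather than a loudness difference, which would bias the comparison.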
Furthermore, I fully endorse RogerD's comment about the adequacy of Redbook for playback. Higher bit depths may well be needed for recording and mixing/mastering (for multi-stage processing, headroom, etc.) but I have serious reservations about the consequences of excessive/multiple processing on the tonal/timbral properties of many CDs/SACDs. Redbook presents no compromises if every link of the digital production chain is executed impeccably.
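The Redbook-adequacy point rests on a textbook figure: an ideal N-bit quantiser yields roughly 6.02·N + 1.76 dB of signal-to-noise ratio for a full-scale sine, so 16-bit playback already exceeds the noise floor of most rooms, while the extra 24-bit range is chiefly headroom for recording and processing. A one-line sketch:

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantiser for a full-scale
    sine wave: 6.02 * N + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
# prints: 16-bit: 98.1 dB / 24-bit: 146.2 dB
```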
By the way, when I record (24-bit), I neither mix nor master. My mixing relies on the appropriate placement of the musicians according to the dynamic properties of their instruments, my mood at the time, and the overall tonal balance that I am seeking. Once satisfied, after much experimentation, there is no need for further processing or manipulation. The criticism of digital timbral/tonal inferiority, so often cited, may well be due, in my view, to multiple processing stages and filtering implementations NOT done properly. Having said all this, and not wanting to give the impression that I am an advocate of my own recording methodology, I have to confess that I envy the work of many engineers (incomparably superior to mine), regardless of their methods and practices.
A few thoughts to ponder, rather than a critique of your positions, which you defend rather admirably.
We listen, always learning. Cheers, Kostas.