We've had people here report astonishing differences between cables at first listen. But I'm sure it takes weeks of sighted listening and discussion to hear subtle differences and reach audiophile consensus.
I am always addressing the same thing: the constraints imposed by real audiophile life, added to our ignorance about blind-test methodology, make the so-called casual tests invalid. So the only thing left for forum debates is the gedankenexperiment.
BTW, did you ever carry out a blind test you consider valid?
I guess what I am trying to understand is why casual blind tests are considered invalid when casual sighted tests are not.
That was the question I was trying to ask in the hypothetical example I gave in post #69.
To answer your question: I have conducted many casual blind tests, which I consider to be more valid than the casual sighted tests I have conducted.
I did, micro. It's just another example of holding the listening method that removes a ton of potential bias to a much higher standard of scrutiny than the method that leaves all the possibilities for bias in play. It doesn't make a bit of sense.
Do you want to describe them in detail and their conclusions?
BTW, I also consider most individual casual sighted tests invalid. But the statistical gathering of many sighted tests, made with great care and knowledge, can yield valuable conclusions. A typical case is when three or more independent people, with no knowledge of each other, reach compatible conclusions. Of course, we have to be sure they are not being influenced by marketing literature or an earlier review.
But casual sighted tests are what the vast majority of audiophiles use when selecting components for their system. Are they therefore making invalid choices?
No; most of the time the differences are large enough to overcome bias, and thanks to their previous knowledge and experience they very often make the choice they really wanted. Remember that, happily, our preferences span a broad range. But this is why it is important to have the help of knowledgeable and serious people when buying high-end gear, if you do not have the time and conditions to listen carefully. Remember that the processes we debate are statistical and should be debated according to the rules of statistics.
Again, do you want to describe your blind sessions in detail and their conclusions?
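On the point that these debates should follow the rules of statistics: a minimal sketch of what that means in practice for a forced-choice blind (ABX) listening session. The function below is purely illustrative, not from any post in this thread; it computes the one-sided binomial p-value, i.e. the chance of scoring at least that many correct identifications by pure guessing.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Chance of getting at least `correct` out of `trials`
    ABX trials right by guessing alone (each trial p = 0.5)."""
    total = sum(comb(trials, k) for k in range(correct, trials + 1))
    return total / 2 ** trials

# 12 correct out of 16 trials: p ≈ 0.038, below the
# conventional 0.05 threshold, so chance alone is unlikely.
print(round(abx_p_value(12, 16), 3))
```

The same arithmetic shows why a handful of trials proves little: even a perfect 4-out-of-4 run still has a 1-in-16 chance of being luck.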
Most of the time, in my experience of sighted testing, I *thought* the differences were big enough to overcome my biases. However, once my knowledge of which component was playing was removed, more often than not the differences became pretty much imperceptible.
What should I make of that?
IMHO, better listening and more statistics and correlation are needed in your sighted tests. Or, most probably, your secret blind tests were not valid because the methodology obfuscated the differences. Or a mix of both.
As you do not want to share the details of your experience, our conversation is becoming too hypothetical, and nothing new or interesting is being added. Perhaps we should stop now, on friendly terms. :b
You know what a major and very basic problem is in these tests? The heart of the problem? The basic question? What the hell does "sounds different" actually mean? It's such a nebulous question as to make any test meaningless.
Couple that with the perceptual-science studies showing that as the complexity of a task increases, perceptual skill severely decreases (which is why a QB throws an interception when being blitzed). It is simply and utterly impossible for listeners to reliably concentrate on the many parameters involved in musical reproduction and get any sort of consistent, reproducible results. That also explains in large part why Olive's studies are misguided: e.g., novices score as well as "established" audiophiles. Novices simply don't know what to listen for, nor the ins and outs of audio reproduction; ergo, they don't tax their perceptual skills.