I know what a control is, John. I understand the concept of testing audibility within the systems to make sure it is possible for the subject material to be audible. And as I already said, I don't know what controls M&M did or did not use. Do you? Or are you just looking for reasons to dismiss the study without further investigation? And what stimuli would you suggest to test whether a system could reveal differences that are theoretically and measurably inaudible, all of which lie above the known threshold of human hearing? None, I'm sure, that could not be dismissed by anyone who dislikes the subsequent conclusions.
Here: this is not the study itself, and it doesn't address your specific question, but it's a pretty good overview. I'm sure you'll find plenty to argue with here.
http://mixonline.com/recording/mixing/audio_emperors_new_sampling/
Tim
Tim, you have cited M & M as a valid, well-run test, yet you don't seem able to deal with questions about its procedure & validity. Instead you treat any such questions as attempts to dismiss the study. Well, if the study isn't rigorous then it should be dismissed & certainly not cited as well run or valid. So if you don't know what controls, if any, were used, how can you stand over the test?
If you're going to run a test then run it properly. An excerpt from ITU-R BS.1116, "Methods for the subjective assessment of small impairments in audio systems including multichannel sound systems":
"It must be empirically and statistically shown that any failure to find differences among systems is not due to experimental insensitivity because of poor choices of audio material, or any other weak aspects of the experiment, before a "null" finding can be accepted as valid. In the extreme case where several or all systems are found to be fully transparent, then it may be necessary to program special trials with low or medium anchors for the explicit purpose of examining subject expertise (see Appendix 1).

These anchors must be known, (e.g. from previous research), to be detectable to expert listeners but not to inexpert listeners. These anchors are introduced as test items to check not only for listener expertise but also for the sensitivity of all other aspects of the experimental situation.

If these anchors, either embedded unpredictably within the context of apparently transparent items or else in a separate test, are correctly identified by all listeners in a standard test method (§ 3 of this Annex) by applying the statistical considerations outlined in Appendix 1, this may be used as evidence that the listener's expertise was acceptable and that there were no sensitivity problems in other aspects of the experimental situation. In this case, then, findings of apparent transparency by these listeners is evidence for "true transparency", for items or systems where those listeners cannot differentiate coded from uncoded version"