When listening for distortions, I find it best to listen to music I dislike -- polkas, new age noodling, modern country. If I listen to good music, even pretty poorly recorded good music, I get distracted far too quickly.
Tim
The second problem is that you will get tired of your favorite music! A long time ago, I was building a darkroom to process my own pictures. I needed a timer to tell me to change chemicals every few seconds. Instead of getting an expensive programmable timer, someone suggested using music and inserting your voice at the right times to move to the next step. Worked like a charm on a $10 cassette deck. The secondary advice given? Don't use your favorite music. Indeed, by the 10th print, I never wanted to hear that music ever again.
I assume you are using jitter in a broad sense, to cover timing errors that are common to analog but typically not referred to as jitter?
Timing distortion is FM distortion whether you call it jitter or flutter.
Yes, but when it's analog, it's musical.
Tim
orb said: Just to be clear though, Arny.
Paul Miller differentiates between jitter and wow-and-flutter, with the measured values not supporting the same conclusions in terms of their technical context and audibility.
However, I appreciate this does not mean Paul Miller is correct.
Still, he is pretty much what I would deem an expert on this subject, given his skill and experience in developing very technical test and measurement tools while also engaging with some exceptional academics.
If memory serves, a common kind of FM distortion that people are concerned about these days is related to HDMI data blocking and buffering, and is centered around 100 Hz. The filter characteristic above shows that considerable attention has been paid to this frequency range, so the concern could very well be valid.
The filter shown above is also the one generally recommended for measuring wow and flutter in analog tape equipment. It is well known that analog tape recorders were subject to a form of FM distortion called scrape flutter, which could easily reach into the same frequency range as HDMI FM distortion.
Combine this information with the fact that analog tape machine wow and flutter is a mere million times larger, and we are hard-pressed to dismiss the connection, or the severity of analog tape machine FM distortion, completely out of hand.
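To make the "timing error is FM distortion" point concrete, here is a minimal Python/NumPy sketch of my own (not anything posted in this thread): a 1 kHz tone reconstructed with a 100 Hz sinusoidal timing wobble shows sidebands at 900 Hz and 1100 Hz, exactly where frequency/phase modulation predicts them. The tone frequency, wobble rate, and 50 ns peak timing error are arbitrary assumed values, not measurements of any HDMI link or tape machine.

```python
import numpy as np

fs = 48_000                      # sample rate, Hz
n = np.arange(fs)                # 1 second of samples -> 1 Hz FFT bins
f_tone = 1_000.0                 # test tone, Hz (assumed)
f_wobble = 100.0                 # timing-error rate, Hz (assumed)
t_err = 50e-9                    # peak timing error, seconds (assumed)

# Ideal sample instants plus a sinusoidal timing error: mathematically this
# is phase/frequency modulation of the tone by the timing wobble.
t = n / fs + t_err * np.sin(2 * np.pi * f_wobble * n / fs)
x = np.sin(2 * np.pi * f_tone * t)

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

# Expect sidebands at f_tone +/- f_wobble, roughly -76 dB below the carrier
# for these assumed values (sideband ratio ~ pi * f_tone * t_err).
for f in (f_tone - f_wobble, f_tone, f_tone + f_wobble):
    k = int(np.argmin(np.abs(freqs - f)))
    print(f"{freqs[k]:6.0f} Hz: {20 * np.log10(spectrum[k] / spectrum.max()):6.1f} dB re carrier")
```

Running the same script with a much larger timing error (analog wow and flutter territory) simply raises those sidebands accordingly, which is the comparison being made above.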
I'm fully aware of Paul Miller's biases in this matter. As soon as he provides a justification for his conclusions that are supported by reliable listening tests, I'll stop characterizing him as someone who seems to be chasing numbers for the sake of numbers. ;-)
Have any of those who have done these tests (whether running them or participating) also done what I would call A/X comparing, meaning that there is always only one constant (A), while X is what can change and could be A or something different?
So in our audio example A could be a Krell SA50 and B a Crown amp; for the 12+ trials a listener would need to decide whether X matches A or is different.
From a cognitive perspective this removes the need to hold two constants in mind, as ABX requires, and simplifies a process that should still provide matching results.
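For what it is worth, here is a minimal Python sketch of the A/X procedure described above: the reference A stays fixed, X is secretly either A or B on each trial, and the listener only answers "same" or "different". It also reports the chance probability for the run, since a listener who hears no difference is still making a 50/50 guess on each trial. This is my own illustration, not anyone's actual test software, and the prompt wording and function name are made up.

```python
import random
from math import comb

def run_ax_trials(n_trials: int = 12) -> None:
    """Run an A/X session: X is secretly A or B on each trial."""
    correct = 0
    for i in range(1, n_trials + 1):
        x_is_a = random.choice([True, False])   # hidden assignment of X for this trial
        answer = input(f"Trial {i}: is X the same as A? [y/n] ").strip().lower()
        if answer.startswith("y") == x_is_a:    # "same" is right only when X really is A
            correct += 1
    # One-sided binomial: probability of scoring at least this well by guessing (p = 0.5)
    p_chance = sum(comb(n_trials, k) for k in range(correct, n_trials + 1)) / 2 ** n_trials
    print(f"{correct}/{n_trials} correct; probability of doing this well by chance: {p_chance:.3f}")

if __name__ == "__main__":
    run_ax_trials()
```

With 12 trials, 10 or more correct answers would happen by guessing only about 2% of the time, which is one reason a run of 12+ trials is a sensible minimum.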
I am with JA in that, IMO, it is incredibly difficult to remove all unwanted factors when it comes to blind testing; this is covered in the discussion between JA and Arny on the ABX debate, so if you want to hear more, with both sides presenting useful info, it is worth listening to.
Personally, I would like to see more ABX tests done using a rather complex hardware/software setup that records and analyses the listener's responses: how many times they use A and B and for how long, how many times they switch, how long they take to reach a decision, and so on.
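As a purely hypothetical illustration of the per-trial data such an instrumented rig could capture, here is a small Python record type; the field names are my own and do not come from any existing ABX tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrialLog:
    """Listener-behaviour data for a single blind trial (hypothetical fields)."""
    trial_number: int
    switches: int = 0               # how many times the listener toggled between A, B and X
    seconds_on_a: float = 0.0       # cumulative listening time spent on A
    seconds_on_b: float = 0.0       # cumulative listening time spent on B
    seconds_on_x: float = 0.0       # cumulative listening time spent on X
    seconds_to_decide: float = 0.0  # time from first audition to the final answer
    answer: str = ""                # e.g. "X is A" or "X is B"
    correct: Optional[bool] = None  # filled in only after un-blinding
```

Logging these per trial would let the analysis look for exactly the kind of order and behaviour effects mentioned below, such as whether long deliberation or frequent switching correlates with wrong answers.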
I cannot find the paper, and IMO it is not complete, but they identified a subtle A/B order bias when doing a sound perception study using trained/professional staff.
Arny, any reason why you feel the need to denigrate someone who, IMO and probably in the opinion of many others, understands this subject much better than any of us?
I doubt you will find he needs to "justify" anything to you or me, and I seriously doubt he even cares, but as I mentioned he does work with some exceptional academics as well.
Anyway, it is fair to say that you both disagree on comparing wow, flutter and jitter, and that can be said without denigrating anyone.
Orb said: I am with JA in that, IMO, it is incredibly difficult to remove all unwanted factors when it comes to blind testing; this is covered in the discussion between JA and Arny on the ABX debate, so if you want to hear more, with both sides presenting useful info, it is worth listening to.
The irony of JA complaining about uncontrolled factors in blind tests seems pretty extreme given his apparent strong preference for listening tests where well known strong influencing factors such as sight are intentionally not controlled. ;-)
(...) My position is that to perform an uncontrolled, unvalidated blind test is pointless. Many such tests have been published by others; all produce null results, but there is no indication that this was due to there not being an audible difference or whether it was due to the poor design or implementation of the test, thus producing a false negative. (...)
John Atkinson
Editor, Stereophile
Yes, Stereophile magazine's reviewers practice sighted evaluations, and the danger is that that can produce false positives. But as that is self-evident and as we do encourage our readers always to test our review findings for themselves, (...)
Stereoeditor said: Many such tests have been published by others; all produce null results, but there is no indication that this was due to there not being an audible difference or whether it was due to the poor design or implementation of the test, thus producing a false negative.

Trouble is, you only think it is a false negative because you have, a priori, decided that there should be a positive?
Any thoughts on how we can change the tests to help remove the types of errors being discussed?
The irony of JA complaining about uncontrolled factors in blind tests seems pretty extreme given his apparent strong preference for listening tests where well known strong influencing factors such as sight are intentionally not controlled. ;-)
Ironic, maybe. My position is that to perform an uncontrolled, unvalidated blind test is pointless.
Many such tests have been published by others; all produce null results, but there is no indication that this was due to there not being an audible difference or whether it was due to the poor design or implementation of the test, thus producing a false negative.
Yes, Stereophile magazine's reviewers practice sighted evaluations, and the danger is that that can produce false positives.
I don't understand the angst over Stereophile reviews.