

Thread: Audible Jitter/amirm vs Ethan Winer

  1. #21 - Ethan Winer (New Milford, CT; joined Jul 2010; 1,232 posts)

    Quote Originally Posted by amirm
    Beyond noise, you don't know if what you heard was true to the original. It could have been super distorted and you wouldn't know it. The only way to make sure you are linear down to the last bit is to measure it.
    I'm sure it was super distorted! A sine wave that occupies just the lowest one or two bits is by definition totally distorted.
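    A minimal sketch of why that is so, purely for illustration (the 1 kHz tone, 16-bit depth, and absence of dither are assumptions, not anything either poster specified): quantize a sine that peaks at only about 1.5 LSB and most of its energy spills into harmonics.

    Code:
        import numpy as np

        fs = 48000                       # sample rate, Hz; 1 s of samples gives 1 Hz FFT bins
        f = 1000                         # test tone, Hz
        bits = 16
        lsb = 1.0 / 2 ** (bits - 1)      # one LSB when full scale is +/-1.0

        t = np.arange(fs) / fs
        x = 1.5 * lsb * np.sin(2 * np.pi * f * t)     # sine peaking at only ~1.5 LSB
        xq = np.round(x / lsb) * lsb                  # undithered quantization to 16 bits

        spec = np.abs(np.fft.rfft(xq * np.hanning(len(xq))))
        spec_db = 20 * np.log10(spec / spec.max() + 1e-12)

        fund = spec_db[f]                                        # bin index == frequency in Hz
        worst_harm = max(spec_db[k * f] for k in range(2, 8))
        print(f"fundamental: {fund:6.1f} dB   strongest harmonic: {worst_harm:6.1f} dB")

    With proper dither those harmonics turn back into plain noise, which is why a well-made recording does not behave this badly at low levels.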

    Quote Originally Posted by amirm
    I don't know why you keep mentioning 100db. I have explained why that is not the right way to look at this.
    All I have to go on is the graph I showed from Ken Pohlmann's book, repeated here for your convenience:

    [graph not shown]

    There's also one from John Watkinson's Art of Digital Audio, attached below.

    Quote Originally Posted by amirm
    Let's review what the paper said:

    "It was shown that the detection threshold for random jitter was several hundreds ns for well-trained listeners under their preferable listening conditions."

    Now let's look at the AP graph I posted which was for a periodic jitter of just 7 nanoseconds, not "several hundreds" mentioned in the article:
    If anything that confirms my point, that even when not way down at -100 dB, such artifacts are still not audible. I got the lower than -100 dB figure from Pohlmann's graphs. I do understand the difference between individual components that soft, versus the sum of all components which is of course much louder. So I probably should have been clearer when I said the artifacts from jitter are 100+ dB down.

    In practice, 60 or 70 dB is soft enough for artifacts to be inaudible even under the most favorable conditions. When I tested this I made a recording of a 100 Hz tone at nearly full scale, then added a 3 KHz sine wave that pulsed on and off at various levels below the music. These two frequencies are far enough apart that masking is not a factor, and 3 KHz is where our ears are most sensitive. So this was a worst-case test favoring audibility. When the 3 KHz sine wave was 40 dB below the 100 Hz tone I could hear it start and stop. At 60 dB below the 100 Hz tone I could just barely hear it with the playback very loud. At -80 I could not hear it at any playback level.
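    That test is straightforward to reproduce. Below is a sketch of the kind of file involved; the 10-second length, once-per-second gating, -60 dB probe level, and file name are illustrative choices, not Ethan's exact procedure.

    Code:
        import numpy as np
        from scipy.io import wavfile

        fs = 44100
        t = np.arange(int(fs * 10.0)) / fs                  # 10 seconds

        tone = 0.9 * np.sin(2 * np.pi * 100 * t)            # 100 Hz tone near full scale
        probe_db = -60                                      # probe level relative to the tone
        probe = 0.9 * 10 ** (probe_db / 20) * np.sin(2 * np.pi * 3000 * t)
        gate = ((t % 1.0) < 0.5).astype(float)              # switch the 3 kHz probe on and off

        mix = tone + probe * gate                           # stays comfortably below full scale
        wavfile.write("tone100_plus_3k_probe.wav", fs, (mix * 32767).astype(np.int16))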

    So even in your AP example of jitter sidebands at -80, it makes sense to me that nobody could hear it. Especially at those high and nearby frequencies. Indeed, in the example for my AES video where I played that nasty noise below a gentle passage in my cello concerto, once the noise was 40 dB softer than the music I could no longer hear it at a normal (or any) playback level. BTW, is that AP example at -80 real or simulated? If real, what device did you measure?

    Quote Originally Posted by amirm
    Now let's again look at real music capture using my audio precision:
    What am I looking at?

    Quote Originally Posted by amirm
    Per above, sounds like you want a situation where on purpose, jitter would not be audible.
    I might not have been clear enough. Forget the -60 part unless you can show that jitter is ever that high in functioning gear. All I am asking for is an example where the amount of jitter from a cheap CD player or other consumer digital device is ever audible.

    Quote Originally Posted by amirm
    Show me where high frequency content is at 0db and I buy your arguments.
    Then we're getting closer. Didn't you already acknowledge in your Post #8 that jitter is a fixed level below the signal, rather than steady as is noise?

    --Ethan
    [Attached image: the Watkinson graph referenced above]

  2. #22 - Ethan Winer (New Milford, CT; joined Jul 2010; 1,232 posts)

    Quote Originally Posted by Ethan Winer
    BTW, is that AP example at -80 real or simulated? If real, what device did you measure?
    Just for reference, this page states that the processor in the old SoundBlaster Live card has jitter around 110 ps:

    http://ixbtlabs.com/articles2/multim...tive-x-fi.html

    Quote Originally Posted by Creative Labs article
    For reference, the 10Kx processors have jitter in the neighborhood of 110pSecs.
    I have no idea if that's true, but if it is that means a typical $25 sound card has 1/64th the amount of jitter as whatever device is shown in your -80 dB example.

    I'll continue to search for jitter specs for various sound cards, though so far it's been tough to find anything concrete!

    --Ethan

  3. #23 - amirm (Seattle, WA; joined Apr 2010; 16,044 posts)
    Quote Originally Posted by Ethan Winer
    I'm sure it was super distorted! A sine wave that occupies just the lowest one or two bits is by definition totally distorted.
    I hope the larger point is not missed, Ethan. Those low order bits are what represent the finer detail in the music. Without them, we might as well say that CD is overkill and we should have had a 14-bit system at 44.1 kHz.

    All I have to go on is the graph I showed from Ken Pohlmann's book, repeated here for your convenience:
    The graph is showing the effect of a 2ns jitter. The paper said hundreds of nanoseconds is inaudible. So other than proving the same point I did, that random jitter simply raises the noise floor, in what way is it proving your point?

    It also says the system signal-to-noise ratio is now nearly 80 dB in the presence of jitter noise, not 100 dB. I personally don't feel comfortable with a system that has such a low signal-to-noise ratio.
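    The "nearly 80 dB" figure can be sanity-checked with a small simulation. The assumptions below are not from the graph itself: a full-scale 10 kHz tone and 2 ns rms of purely random jitter; a lower signal frequency would give a better number.

    Code:
        import numpy as np

        fs, n = 44100, 1 << 16
        f = 10_000.0                 # assumed: a full-scale 10 kHz tone
        tj = 2e-9                    # 2 ns rms of random (Gaussian) jitter

        t = np.arange(n) / fs
        x_ideal = np.sin(2 * np.pi * f * t)
        x_jit = np.sin(2 * np.pi * f * (t + np.random.normal(0.0, tj, n)))   # jittered sample instants

        err = x_jit - x_ideal
        snr_sim = 10 * np.log10(np.mean(x_ideal ** 2) / np.mean(err ** 2))
        snr_formula = -20 * np.log10(2 * np.pi * f * tj)     # standard closed form for a full-scale sine
        print(f"simulated: {snr_sim:.1f} dB   closed form: {snr_formula:.1f} dB")   # both land near 78 dB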

    There's also one from John Watkinson's Art of Digital Audio, attached below.
    That is also for random jitter, not for periodic or correlated jitter, which are the two we also worry about. At the outset I mentioned that the ear is not as sensitive to broadband noise as it is to noise that comes and goes and/or is related to music. So neither one of these references is helpful in this regard.

    I have already addressed random jitter, showing how it is quite different from periodic or program-related jitter.

    If anything that confirms my point, that even when not way down at -100 dB, such artifacts are still not audible. I got the lower than -100 dB figure from Pohlmann's graphs. I do understand the difference between individual components that soft, versus the sum of all components which is of course much louder. So I probably should have been clearer when I said the artifacts from jitter are 100+ dB down.
    I showed you graphs where the noise spikes were at -80db. If you want to stick to some graph, why not consider those also? I also showed you that music could be at that level. Note that there, I used real content showing that its high frequency level dips to the same level as the jitter.

    In practice, 60 or 70 dB is soft enough for artifacts to be inaudible even under the most favorable conditions.
    Really? Here is another graph I have saved up on my server of real music:

    [graph not shown]

    You want to tell me that that decay is illegal and can't occur in real music? I assume not. As that signal decays into nothing, we hear the room reverb and other cues which give music its life and character. If the DAC is non-linear or the jitter is too high, you get an abrupt and distorted finish as that signal decays.

    Sure, when the signal is loud, such as in your example, none of this matters. We are not disputing that. Digital is king when it comes to loud signals; that is where it achieves the kind of perfection that lets it kill analog. Invert the equation, though, and the tables are turned. You have to look at how well digital can reproduce those fine details if you want luscious sound that can replace analog.

    Again, you can't pick test tones or even sample music tracks to prove your point. Your point must be true of all music or it is not valid. We can't say this noise is this many dB below the music unless you can show that music can never be fainter than that. As I have shown, music can be, and very often is, that faint.

    When I tested this I made a recording of a 100 Hz tone at nearly full scale, then added a 3 KHz sine wave that pulsed on and off at various levels below the music. These two frequencies are far enough apart that masking is not a factor, and 3 KHz is where our ears are most sensitive. So this was a worst-case test favoring audibility.
    It is not. It is the best case per above. We are not interested in whether we can hear faint noise when the music is blasting our woofers and ears. We are in agreement there. Where we are talking past each other is that you seem to be assuming that music is always that loud and that it only has a single component like the 100 Hz tone. In reality, music has high frequency detail whose level is very low relative to the low and mid bands. Yet that faint level conveys how "bright" the music is. Mess with those high frequencies even a little, and the tone changes.

    That is a reason why compressed music can sound "bright." The increased quantization noise is not audible in the way you imagine it above, but instead, spreads into the high frequency bins and causes that increased edginess.

    So even in your AP example of jitter sidebands at -80, it makes sense to me that nobody could hear it.
    Would you say that is true if the signal is at -60db?

    BTW, is that AP example at -80 real or simulated? If real, what device did you measure?
    I did not measure the jitter diagram; it comes from other sites. AP graphs are real measurements; for simulations we would not need that device. My measurements were done using a Blu-ray player feeding my AP.

    Quote Originally Posted by Ethan Winer
    Then we're getting closer. Didn't you already acknowledge in your Post #8 that jitter is a fixed level below the signal, rather than steady as is noise?

    --Ethan
    I am sorry but I don't understand the question. I have said that jitter has infinite profiles so it is not any one thing anyway. But that if you want to put it in buckets, you have three kinds:

    1. Random jitter. This raises the noise floor and reduces the dynamic range of the system. At high enough level, it reduces fidelity of the system but it is not as bothersome as other forms below. All of your testing and papers you have cited fall in this category.

    2. Periodic. This is one or more pure tones which change the signal timing. This could be the USB frame buffer timing, power supply noise, front panel high voltage oscillator, video clock related (e.g. as in HDMI, which has video as the master timing), etc. This can be more audible as each one of these tones modulates the music and creates sidebands that could fall within the music's level, especially at high frequencies. I have shown examples of this type of jitter and consider it a potentially audible problem (depending on frequency). (A small simulation of this sideband mechanism appears after this list.)

    3. Program related. This is jitter that depends on the signal itself! A good example is cable-induced jitter. A poor digital audio interconnect changes the shape of the pulses, causing the time at which they are accepted by the receiver to change. This is again from Julian's book:

    [figure not shown]

    Unfortunately, those pulses change as the music changes, modifying the waveform as seen by the receiver and hence its timing. There, jitter comes and goes with the music, which can be most offensive to listeners. It can also be very tricky to test for, as you need to find conditions which aggravate it.
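    Here is a minimal simulation of the periodic case, item 2 above. The 10 kHz carrier, 1 kHz jitter tone, and the reading of the earlier 7 ns figure as peak-to-peak are assumptions; read it as peak amplitude instead and the sidebands come out about 6 dB higher.

    Code:
        import numpy as np

        fs = n = 65536                   # 1 s at 65536 Hz puts every tone on an exact FFT bin
        fc, fj = 10_000.0, 1_000.0       # carrier tone and jitter-tone frequencies
        aj = 3.5e-9                      # 3.5 ns peak timing error (reading 7 ns as peak-to-peak)

        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * fc * (t + aj * np.sin(2 * np.pi * fj * t)))   # periodically jittered sine

        S = np.abs(np.fft.rfft(x))
        S_db = 20 * np.log10(S / S.max() + 1e-15)
        for f_side in (fc - fj, fc + fj):                    # sidebands land at fc +/- fj
            print(f"{f_side:7.0f} Hz sideband: {S_db[int(f_side)]:6.1f} dB relative to the carrier")

    With these assumptions the two sidebands land roughly 80 dB below the carrier, the same range as the AP example discussed earlier.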

  4. #24 - Ethan Winer (New Milford, CT; joined Jul 2010; 1,232 posts)

    Quote Originally Posted by amirm
    Those low order bits are what represent the finer detail in the music. Without them, we might as well say that CD is overkill and we should have had a 14-bit system at 44.1 kHz.
    I've heard 14 bit audio and it sounds fine to me. At least for music recorded at sensible levels. I'd never use less than 16 bits for a live classical music recording, and that's the one place where using 24 bits makes sense. Even 12 bits is pretty good for pop music when the music is already normalized. But that's all beside the point.

    The graph is showing the effect of a 2ns jitter.
    But is it a real measurement, or a simulation of what could happen if the jitter were that high? If it's a real measurement, what specific piece of gear is it showing?

    Do we know how much jitter is typical in consumer level CD players and sound cards etc? So far all I could find is that spec for SoundBlaster cards that cited 110 picoseconds.

    It also says the system signal-to-noise ratio is now nearly 80 dB in the presence of jitter noise, not 100 dB. I personally don't feel comfortable with a system that has such a low signal-to-noise ratio.
    That's not relevant for sidebands at very high frequencies, which is why A-weighting is used to correlate noise with its actual audibility. Further, as you yourself pointed out, no real music contains a 10 KHz tone at 0 dB FS!

    At the outset I mentioned that the ear is not as sensitive to broadband noise as it is to noise that comes and goes and/or is related to music.
    I don't know why distortion would sound worse, or be more noticeable, than noise with a similar spectrum. If anything, distortion is masked by the signal, and it stops when the signal stops, so it seems less likely to be noticed. Again, I'm not talking about your past example of a loud blower fan your ears eventually ignore. I'm talking about stuff 60 or 80 or 100 dB below the music.

    I showed you graphs where the noise spikes were at -80db. If you want to stick to some graph, why not consider those also?
    I'll gladly consider that graph if those -80 sidebands represent what you could possibly get from normal functioning audio gear playing real music versus a 10 KHz test tone at full scale.

    I also showed you that music could be at that level. Note that there, I used real content showing that its high frequency level dips to the same level as the jitter.
    Okay, but that still doesn't mean it's ever audible! People often use the example of reverb tails that decay as an example of why 24-bit audio is "better" than 16 bits. I have tested this several times, and was never able to hear the "fizz" people describe as a reverb tail decays unless I cranked the volume much louder than normal during that decay. So yeah, music and decays can have very soft components. That doesn't mean the soft stuff is ever audible. This is why I keep returning to what is practical rather than theoretical.

    Digital is king when it comes to loud signals; that is where it achieves the kind of perfection that lets it kill analog. Invert the equation, though, and the tables are turned. You have to look at how well digital can reproduce those fine details if you want luscious sound that can replace analog.
    What analog tape or LP comes within even 20 dB of the low noise floor of 16-bit digital? Unless by "analog" you mean a direct console feed of the microphones before being recorded.

    Again, you can't pick test tones or even sample music tracks to prove your point. Your point must be true of all music or it is not valid.
    I agree completely! This is why I ask again and again for someone to provide an audio example showing that artifacts at a level typical for jitter are ever audible. As soon as someone does this I'll change my opinion in an instant. Not a graph of what could happen with broken or poorly designed gear having jitter 50 times more than usual. But actual music containing artifacts of the nature and level typical for jitter.

    Would you say that is true if the signal is at -60db?
    For those high frequencies? Probably! Now, at -40 I agree it could be a problem. But what gear has jitter artifacts at -80 let alone -60?

    My measurements were done using a Blu-ray player feeding my AP.
    Which specific measurements? I still don't understand what is being shown in that red and turquoise graph in your Post #20.

    I have said that jitter has infinite profiles so it is not any one thing anyway. But that if you want to put it in buckets, you have three kinds:
    This is what I keep asking for: any example using any type of jitter having any spectrum. You can pick whichever "bucket" you feel best shows jitter as being audible. Pick the worst case you can find, as long as it's representative of actual jitter levels.

    --Ethan

  5. #25 - amirm (Seattle, WA; joined Apr 2010; 16,044 posts)
    Quote Originally Posted by Ethan Winer
    I've heard 14 bit audio and it sounds fine to me. At least for music recorded at sensible levels. I'd never use less than 16 bits for a live classical music recording, and that's the one place where using 24 bits makes sense. Even 12 bits is pretty good for pop music when the music is already normalized. But that's all beside the point.
    It is not beside the point, Ethan. It is the point!!! Once more, we are not discussing what is good for the general public, but rather what the expectations should be for high-end audiophiles. And we are not talking about normalized pop music, but rather well-recorded music of all types.

    Okay, but that still doesn't mean it's ever audible! People often use the example of reverb tails that decay as an example of why 24-bit audio is "better" than 16 bits. I have tested this several times, and was never able to hear the "fizz" people describe as a reverb tail decays unless I cranked the volume much louder than normal during that decay. So yeah, music and decays can have very soft components. That doesn't mean the soft stuff is ever audible. This is why I keep returning to what is practical rather than theoretical.
    What you just described by turning up the volume was not theoretical. You turned up the volume, and heard the limitations of your system at 16 bits of resolution! That is not theoretical at all. What you say other people describe to you is precisely what I have been trying to explain.

    Your assumption, then, is that we don't listen to soft passages at elevated levels, but we do. I was just in my car driving and listening to my top songs, some 1,000+ tracks on a flash drive. The player just moves from folder to folder as it plays all the songs from each album. I was enjoying some wonderful piano music and the track changed to another album at such an elevated level that I thought my doors were going to fall off! As you can imagine, then, I had the volume turned up for the softer material, which you accept can bring out distortion.

    There should not be any requirement for the user to listen to loud music, or to listen at low levels, for your theory of "jitter can't be heard" to be true. If digital is perfect, then it needs to be perfect all the time and not fall apart if I turn up the volume during soft passages.

    You don't control what music people listen to, or at what level. Why not, then, pay attention to the things that compromise system performance in all scenarios?

  6. #26 - amirm (Seattle, WA; joined Apr 2010; 16,044 posts)
    Quote Originally Posted by Ethan Winer
    Do we know how much jitter is typical in consumer level CD players and sound cards etc? So far all I could find is that spec for SoundBlaster cards that cited 110 picoseconds.
    110 picoseconds for a SoundBlaster? If it were 10 times better I would give them a medal! Data like that is hard to come by in general. Here is the only piece I have from a UK audio magazine:

    "In the Feb 2009 edition of the Hi-fi News magazine Paul Miller measured the following jitter results for a few A/V amplifiers:

    Denon AVR-3803A
    ---------------
    SPDIF: 560psec
    HDMI: 3700psec

    Onkyo TX-NR906
    ---------------
    SPDIF: 470psec
    HDMI: 3860psec

    Pioneer SC-LX81
    ---------------
    SPDIF: 37psec
    HDMI: 50psec

    Yamaha RX-V3900
    ---------------
    SPDIF: 183psec
    HDMI: 7660psec"

    So if you are using HDMI, your jitter is about 10X what it should be in most cases.

    For those of you who are math challenged, 1000 picoseconds (ps) = 1 nanosecond (ns). So the Yamaha above has 7.6 ns of jitter, similar to what was shown here:
    [graph not shown]

    And before you ask again, Ethan, that is a measurement, not a simulation. This means the Yamaha reduces your signal-to-noise ratio from 96 dB for 16-bit audio to about 80 dB, or 13 bits of resolution. You decide if you want to pay to get 16 bits of quality or 13.
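    For readers who want to turn a jitter figure into an SNR or bits-of-resolution estimate themselves, here is a sketch using the usual full-scale-sine formula. It treats the published numbers as rms random jitter acting on a single full-scale tone, which is a simplification, and the answer depends heavily on the signal frequency assumed, which is one reason quoted figures vary.

    Code:
        import math

        def jitter_snr_db(f_hz, tj_rms_s):
            """Jitter-limited SNR of a full-scale sine: -20*log10(2*pi*f*tj)."""
            return -20.0 * math.log10(2.0 * math.pi * f_hz * tj_rms_s)

        def effective_bits(snr_db):
            """Invert the 6.02*N + 1.76 dB rule of thumb."""
            return (snr_db - 1.76) / 6.02

        # 110 ps (SoundBlaster claim), 560 ps (Denon SPDIF), 7660 ps (Yamaha HDMI)
        for tj in (110e-12, 560e-12, 7660e-12):
            for f in (2_000.0, 10_000.0, 20_000.0):
                snr = jitter_snr_db(f, tj)
                print(f"tj {tj * 1e12:6.0f} ps   f {f / 1000:4.1f} kHz   SNR {snr:5.1f} dB   ~{effective_bits(snr):4.1f} bits")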

  7. #27 - Ethan Winer (New Milford, CT; joined Jul 2010; 1,232 posts)

    Quote Originally Posted by amirm
    What you just described by turning up the volume was not theoretical. You turned up the volume, and heard the limitations of your system at 16 bits of resolution! That is not theoretical at all. What you say other people describe to you is precisely what I have been trying to explain.
    Okay, maybe it's possible to hear jitter and other artifacts if you turn the volume way up during a reverb tail or song fade-out. But the volume has to be raised a lot - much more than the difference between a soft passage and a loud track on your thumb drive. I'm talking about 40 dB or more gain, which would blow out your speakers when the normal parts of the music play. And at that point jitter is the least of one's worries, after hiss and maybe hum.

    From my perspective, if you have to raise the volume 40 to 60 dB beyond normal to hear an artifact, then it's a curiosity but not a real problem. And certainly not a justification for analog fans to diss digital. Again, I don't disagree that designers (and consumers) should aim for the highest performance possible. My interest is only what's practical, and consumers paying $2,000 more for a device that promises lower jitter is never practical IMO.

    --Ethan

  8. #28 - Ethan Winer (New Milford, CT; joined Jul 2010; 1,232 posts)

    Quote Originally Posted by amirm
    Here is the only piece I have from a UK audio magazine:
    Wow, I'm sure glad I use HDMI only for video and not for audio!

    So if you are using HDMI, your jitter is about 10X what it should be in most cases.
    This is not an inherent problem with digital audio per se, but it sure is an eye-opener. Is this insurmountable and due to a limitation of HDMI? Or is it just sloppy engineering?

    I'd still like to see a blind test with a dozen skilled listeners to know if anyone can ever actually hear that.

    --Ethan

  9. #29 - amirm (Seattle, WA; joined Apr 2010; 16,044 posts)
    Quote Originally Posted by Ethan Winer
    Is this insurmountable and due to a limitation of HDMI? Or is it just sloppy engineering?
    As you can see from the data presented, HDMI can be done right, but it is more challenging than an audio-only interface. HDMI slaves the audio clock to video. Now you have a high-frequency clock that you need to lock to, in addition to having a ton more circuits running, creating their own jitter components due to crosstalk and other factors like it.

    Quote Originally Posted by Ethan Winer
    I'd still like to see a blind test with a dozen skilled listeners to know if anyone can ever actually hear that.

    --Ethan
    Until then, you could avoid the concern altogether and simply buy equipment with less than 250 ps worth of jitter. For 16-bit samples, that is....
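    As a rough check on where a figure like 250 ps comes from: one common criterion is that the timing error should keep the sample error of a full-scale sine at the top of the audio band under one LSB. The 20 kHz band edge and the one-LSB criterion below are assumptions; stricter criteria give proportionally smaller numbers.

    Code:
        import math

        def max_jitter_s(bits, f_max_hz, error_lsb=1.0):
            """Timing error that keeps the worst-case sample error of a full-scale sine at
            f_max below error_lsb LSB: the worst case is the zero crossing, where the slew
            rate is 2*pi*f*A, and one LSB is 2*A / 2**bits."""
            return error_lsb * 2.0 / (2.0 * math.pi * f_max_hz * 2 ** bits)

        print(f"16-bit, 20 kHz, 1 LSB   : {max_jitter_s(16, 20_000) * 1e12:6.0f} ps")   # ~243 ps
        print(f"16-bit, 20 kHz, 1/2 LSB : {max_jitter_s(16, 20_000, 0.5) * 1e12:6.0f} ps")
        print(f"24-bit, 20 kHz, 1 LSB   : {max_jitter_s(24, 20_000) * 1e12:6.2f} ps")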

  10. #30 - amirm (Seattle, WA; joined Apr 2010; 16,044 posts)
    Quote Originally Posted by Ethan Winer
    Okay, maybe it's possible to hear jitter and other artifacts if you turn the volume way up during a reverb tail or song fade-out. But the volume has to be raised a lot - much more than the difference between a soft passage and a loud track on your thumb drive. I'm talking about 40 dB or more gain, which would blow out your speakers when the normal parts of the music play.
    I gave an example where there was no "normal" part of music. The whole album was softly recorded. It was the next album which was recorded closer to 0db.

    Let's also agree that while you may have needed 40db to hear those artifacts, others may need much less.

    And at that point jitter is the least of one's worries, after hiss and maybe hum.
    I hear such distortions well before I hear hiss or hum. We are talking about well recorded music and high performance audio systems here.

    From my perspective, if you have to raise the volume 40 to 60 dB beyond normal to hear an artifact, then it's a curiosity but not a real problem. And certainly not a justification for analog fans to diss digital. Again, I don't disagree that designers (and consumers) should aim for the highest performance possible. My interest is only what's practical, and consumers paying $2,000 more for a device that promises lower jitter is never practical IMO.
    $2K is very practical for our readership. If that is what it takes to be assured of the best digital audio reproduction, it is money well spent. If you said $200K, you would have a point, but $2K is not much in the context of a system that would deploy such a good DAC. But really, we can't be making economic arguments. I don't own a Ferrari, but I can't say it is a bad car because it is expensive. People don't need you and me to give them that kind of lesson.

