Audibility of Small Distortions

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
3,956
318
1,670
Monument, CO
I do not understand how jitter can increase relative to full-scale as the signal decreases, must be missing something. I'll have to think on it. Clock jitter should be independent of signal level; signal-related jitter I would think would either stay constant or decrease with decreasing signal depending upon the architecture. The former would cause the jitter relative to the signal to increase but should not increase the absolute jitter.

I am not sure we need better (higher) resolution as the signal level decreases; aren't we less sensitive to quieter sounds? Again I am not sure, just curious what is going on and have not thought about it much.
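A quick numerical sketch of Don's point (assuming a simple ideal-sampler model, not any particular DAC architecture): with a fixed amount of random clock jitter, the error produced tracks the signal's slew rate, so the absolute jitter-induced error falls with signal level while the jitter-to-signal ratio stays constant.

```python
import numpy as np

def jitter_error_rms(amplitude, freq, sigma_t, fs=48000, n=1 << 16, seed=0):
    """RMS error from sampling a sine of given amplitude with Gaussian
    clock jitter of sigma_t seconds RMS (simple ideal-sampler model)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    ideal = amplitude * np.sin(2 * np.pi * freq * t)
    jittered = amplitude * np.sin(2 * np.pi * freq * (t + sigma_t * rng.standard_normal(n)))
    return np.sqrt(np.mean((jittered - ideal) ** 2))

# Fixed 1 ns RMS clock jitter, 10 kHz tone at full scale and at -60 dBFS
e_full = jitter_error_rms(1.0, 10e3, 1e-9)
e_low = jitter_error_rms(0.001, 10e3, 1e-9)

# The absolute jitter-induced error shrinks with the signal (it follows the
# slew rate ~ 2*pi*f*A), so the error-to-signal ratio is constant with level:
print(e_full / 1.0, e_low / 0.001)
```

The error magnitude is about 2πfAσ_t/√2, independent of level once normalized to the signal, which is consistent with clock jitter being signal-independent; signal-correlated (deterministic) jitter would behave differently.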

The old-school rationale for low-level issues with digital was the digital quantization noise floor sounds much "harsher" to our ears than a random thermal noise floor. Dither was added to help with that (and to reduce tones in early DS converters). I am also curious about the tie to modulation noise as opus111 is describing.
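The linearizing effect of dither can be shown with a toy quantizer (a sketch, not any particular converter): a constant input 0.3 LSB above zero is erased entirely by plain rounding, but survives on average once TPDF dither is added before the quantizer.

```python
import numpy as np

def quantize(x, dither=None):
    """Round to integer LSB steps; `dither` is an optional noise array in LSB."""
    if dither is not None:
        x = x + dither
    return np.round(x)

rng = np.random.default_rng(0)
n = 100_000
x = np.full(n, 0.3)                 # constant input, 0.3 LSB above zero
# TPDF dither: sum of two uniform +-0.5 LSB sources, +-1 LSB triangular PDF
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)

plain = quantize(x)                 # every sample rounds to 0: detail lost
dithered = quantize(x, tpdf)        # individual samples are noisy but unbiased

print(plain.mean())                 # 0.0
print(dithered.mean())              # ~0.3: the sub-LSB detail is recoverable
```

The same mechanism turns signal-correlated quantization distortion (the "harsh" tones) into benign, signal-independent noise, which is the classic rationale for dither.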

Random thoughts... - Don
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Don, I believe he is repeating Hawksford's experiment in this paper http://www.essex.ac.uk/csee/research/audio_lab/malcolmspubdocs/C41 SPDIF interface flawed.pdf

where Figs. 17-18 explicitly show in-band, signal-correlated jitter products.
More precisely, looking at the simulated/measured jitter phase-noise spectra induced by a 1 kHz audio signal played back at different levels, he finds the original signal as the fundamental plus a series of harmonics, starting with the third; the second harmonic is always missing. He also found that as the audio signal level decreases, the deterministic jitter (DJ) introduced increases.
hawks_simjitter.jpg

From that paper: "Meitner and Gendron [6] have also found that the jitter spectrum in a decoded interface signal has a strong dependency upon audio level but account for this behaviour in terms of power supply artifacts or 'logic induced modulation'. In truth, power supply related jitter in an interface decoder will show similar characteristics to jitter due to band-limitation, though the results presented below suggest that the band-limitation model compares well to jitter measured in practical circuits."
 

ehoove

New Member
Dec 30, 2012
8
0
1
Thanks for the interesting thread, a bit over my head but very, very interesting!
I think I'm going to enjoy this forum,
Jim
 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
3,956
318
1,670
Monument, CO
Thanks John,

DJ makes a lot more sense, and for the reasons cited in the last paragraph (among others). The plots in your post above seem to have about the same spectral peaks. I'll download the paper and (hopefully) read through it -- it sounds familiar, probably buried in my ancient files someplace in the black hole masquerading as a basement storage room...
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Thanks for the interesting thread, a bit over my head but very, very interesting!
I think I'm going to enjoy this forum,
Jim
Welcome to the forum, Jim, but don't worry, I'm working my way through these things too & trying to figure out these issues.
It's a forum for learning &, unlike a lot of other forums, not a battle of egos! Refreshing, really - you should enjoy :)
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
I re-read that DiYHiFi thread, which remains very interesting. Just as an aside, he does a jitter test with a multitone HF signal input (to simulate violins in typical classical music, -20 dB down), which shows deterministic jitter (DJ) in the output spectra, i.e. the two humps in the plot
Mutitone test jitter.jpg

But when he ran a J-test signal he found no DJ (no double hump): "I had made also a test with the famous 'Jtest' signal. At a quarter wavelength of the sampling frequency, 11025 Hz, -6db amplitude, + LSB switched at 229.6875Hz."
JTest output.jpg

As he says, "That is, clean! Which is very strange: next topic to investigate..", but I don't believe he ever got back to that question.
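For reference, the stimulus he describes is straightforward to construct (a sketch assuming 16-bit samples at 44.1 kHz: the fs/4 tone at -6 dBFS lands exactly on sample values 0 and ±16384, and the LSB square wave at 44100/192 = 229.6875 Hz toggles every 96 samples).

```python
import numpy as np

def jtest(n=1 << 16):
    """J-test at 44.1 kHz: fs/4 tone at -6 dBFS, LSB toggled at fs/192."""
    k = np.arange(n)
    # fs/4 sine sampled at 44.1 kHz hits only the values 0, +1, 0, -1, ...
    tone = 16384 * np.tile([0, 1, 0, -1], n // 4)   # -6 dBFS of 32768
    lsb = (k // 96) % 2                             # toggles every 96 samples
    return (tone + lsb).astype(np.int16)

sig = jtest()
```

Because the tone sits exactly on fs/4 and the LSB toggle is an exact sub-multiple of fs, the stimulus is bit-deterministic, which is part of why it is such a sensitive probe for data-correlated jitter sidebands.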
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Hopefully he's not chasing a simple windowing artifact... The IEEE ADC Standard (1241) defines how to select test tones to eliminate windowing and thus one big headache from testing.
I'm not sure about the windowing artifacts, Don, but I noted early in the thread that he had some trouble validating his test setup:
Today I was losing some (a lot) time trying to calibrate the setup.
Problem is, I do not have a modulation analyzer, so have to get around somehow.

I do have a Fluke6060B/AK. In ext FM mode it is supposed to do 99.9kHz peak FM deviation
when fed by a 1V peak (707mV rms) signal. The carrier is 10MHz, the modulation frequency is set at 15kHz, so as to be out of the LF grunge of the generator.
At these parameters, I'm expecting 500psec peak excursion, 1nsec p-p.
IF the generator would be in cal. Which is not, unfortunately, so I can only hope..
This was not the result he got when he ran a test, though.

But then he does some work on it
So, I had a look at the generator, and yes it was not working well. With a bit of help though it got back in shape.
So let's start again. I am searching to produce a known amount of peak jitter. According to its manual,
at 1Volt peak input modulation my generator gives 100KHz peak (not peak-peak, as I supposed earlier)
FM modulation, at 10kHz modulating frequency.
With a carrier 10MHz, that is a 1% peak modulation, that is, I should see 1nsec peak displacement, 2nsec p-p.

And gets:
As you can see, it is indeed 2 nsec p-p, and in the spectrum it gives a 1000 psec line.
Calib_10KHz.jpg
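His arithmetic checks out if the jitter is read as the displacement of the instantaneous carrier period (a sketch of the quoted numbers only; the 1 V / 100 kHz deviation figure is taken from the post, not verified against the Fluke 6060B manual).

```python
# A 100 kHz peak FM deviation on a 10 MHz carrier shifts the
# instantaneous period by ~1% of its 100 ns nominal value.
f_carrier = 10e6           # Hz
df_peak = 100e3            # Hz, peak FM deviation at 1 V peak input (quoted)

t_nominal = 1 / f_carrier              # 100 ns carrier period
t_fast = 1 / (f_carrier + df_peak)     # shortest instantaneous period
t_slow = 1 / (f_carrier - df_peak)     # longest instantaneous period

peak_jitter = t_slow - t_nominal       # ~1 ns peak
pp_jitter = t_slow - t_fast            # ~2 ns peak-to-peak
print(peak_jitter * 1e12, pp_jitter * 1e12)   # in picoseconds
```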
 

KBK

New Member
Jan 3, 2013
111
1
0
The human ear works like a diode: it works with the leading edge of a transient and ignores the vast majority of the falling waveform.

It hears the level of the leading edge of the transient, then the next transient peak, and reconstructs the rest within the ear-brain interface.

It also hears harmonic structure as the secondary point, via the many 'hairs', or cilia, in the ear. Any form of forward leading edge or transient function is part of the ear's signal-recognition aspect.

Thus: harmonics, transients, and the level and timing differentials between their given occurrences.

To the ear, 100% of the signal is the ~10% that makes up the transients and micro-transient harmonics.

The ear does not hear the other 90% that linearly weighted measurement systems utilize.

What this means is that the ~10% of the signal that gets damaged by the gear via distortion is the only part the ear hears.

And the linear measurements ascribe only 10% of the whole value to it when calculating distortion numbers.

Essentially, the measurements are neatly connected to engineering measurement principles; in no way, shape or form are they even remotely related to how the ear hears and what the ear hears.

If a measurement protocol were utilized that actually follows human hearing, the measurements would finally be fully connected to what we hear.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Excellent, KBK, thank you.
I've always thought that the secret to unlocking this measurement Vs hearing dilemma was a better understanding of how the ear works & the attempt to model measurements accordingly. Some will say that existing psychoacoustics brings us a long way there but I believe it's only a gross level of understanding.

Your conjecture on hearing function is very illuminating & thought-provoking. I like your statement elsewhere that the ear/brain system is almost fully analogous to a 3D waveform-analysing, temporal- and level-based, transient multiband system operating in real time, making FFT analysing systems look prehistoric by comparison. I have tried to say the same myself elsewhere but not nearly as eloquently. Indeed you will find some threads here which challenge the ubiquitous FFT analysis method by pointing out some factors considered widespread in audio reproduction which normally remain invisible to FFT analysis - namely Opus111's noise modulation conjecture.

I've collected many papers on psychoacoustics but none giving the depth or synthesis of your analysis. Do you have any seminal papers you could point me to?
 

KBK

New Member
Jan 3, 2013
111
1
0
I've collected many papers on psychoacoustics but none giving the depth or synthesis of your analysis. Do you have any seminal papers you could point me to?

None that I can think of offhand. It took decades of research, with bits gathered from here and there, to finally reach that understanding. It is based on facts, not much in the way of conjecture. Some bits are not stated to scientific perfection, but it's all there. For the larger part, the way my mind works is that I retain the useful bits and tend to recognize the relevant sentence as I read it, no matter the source or the intent of that source. A friend taught me that this is how the better researchers do it: a paper or book can often be distilled down to a few critical sentences.

I hesitate to use the word fact, as any good scientific mind knows there is no such thing as a fact, merely a theory we use until the next useful theory comes along. My other big resource is my friend and business partner, who is an expert in acoustics; to the point that at least one of the people who invented some of the parameters/standards we use in acoustics thinks very highly of his skills, and that his practical capacities, and most specifically his results on jobsites, exceed their theoretical musings.

I went to your website and noted the digital situation. Here's an interesting one.

Since we hear with both ears and triangulate position, this means that separate clocks should be used for each channel, and that the sampling rate needs to be doubled again.

The worst-case golden-eared audiophile can position a stereoscopic ping in a stereo setup down to a shift of one inch left or right. Meaning, if we put a pair of speakers 8 feet away from an audiophile and 8 feet apart, we can hear differentials of image shift (left or right) in the area of one inch of position change, with repeatable accuracy.

The difference in sonic signal arrival times at the ears, as a pair, gets to under 1/100,000th of a second in order to achieve this 'trick' of stereoscopic placement.

Yet our ears are MUCH more complex than that, regarding capacity to decode.

What this means is that a minimum HQ standard is not met until we get to 20-bit at 225k sampling, and it reaches that minimum only when it has ZERO JITTER at the same time.

Thus the minimum quality spec for digital audio would be, let's say, 384 kHz sampling with 24-bit word depth and no jitter, or maybe separate clocks for each channel.

To equal the capacity for timing errors and resolution, as a set, in a system of analysis that takes into account what the ear needs to hear, digital audio could not beat the LP until it is at 7M samples a second with 20-bit word depth.
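KBK's one-inch figure can be turned into a rough interaural-delay number with simple geometry (a sketch: the 0.175 m ear spacing and the simple path-difference model are assumptions, ignoring head diffraction).

```python
import math

# How small an interaural time difference (ITD) corresponds to a
# 1-inch image shift heard from 8 feet away?
c = 343.0                      # speed of sound, m/s
ear_spacing = 0.175            # m, approximate (assumed)
distance = 8 * 0.3048          # 8 ft in metres
shift = 0.0254                 # 1 inch in metres

theta = math.atan2(shift, distance)        # angular image shift, radians
itd = ear_spacing * math.sin(theta) / c    # seconds

print(f"{itd * 1e6:.1f} microseconds")     # on the order of ~5 us
```

A single-digit-microsecond ITD is indeed "under 1/100,000th of a second", and is broadly consistent with published just-noticeable ITD figures of roughly 10 µs.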
 

microstrip

VIP/Donor
May 30, 2010
20,807
4,702
2,790
Portugal
(...) If a measurement protocol was utilized that is related to and exacted in a way that actually follows human hearing, then the measurements would finally be fully connected to what we hear.
Once again we arrive at a key question: what are we considering in human hearing? Just the physiological, or also the perceptual?
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Once again we arrive at a key question: what are we considering in human hearing? Just the physiological, or also the perceptual?
You may be missing the point that the ear/brain acts as a sophisticated piece of equipment with dynamic adjustment capabilities. Trying to differentiate what happens in the brain as perceptual seems to be missing the point - the brain is part of the equipment/device - it dynamically adjusts the devices within the ears & also dynamically interprets the results. If you look at the following paper you will see that this interpretation is fundamental to the operation of hearing & allows it to beat FFTs in certain regards.

Here's a paper, I posted before, which might help - where the ear/brain is considered as a whole & beats FFT in this one regard of time-freq acuity

Tom, this one is also for you - "Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle" http://arxiv.org/pdf/1208.4611.pdf

Here's the abstract
The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple "linear filter" models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
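The 1/(4π) bound in the abstract is easy to verify numerically: for a Gaussian pulse (the waveform that achieves the limit) the product of the RMS time width of |x(t)|² and the RMS frequency width of |X(f)|² comes out at 1/(4π) ≈ 0.0796. A minimal check:

```python
import numpy as np

def tf_product(x, dt):
    """Duration-bandwidth product: RMS widths of |x|^2 and |X|^2."""
    t = (np.arange(len(x)) - len(x) / 2) * dt
    p = np.abs(x) ** 2
    p /= p.sum()
    dt_rms = np.sqrt(np.sum(p * (t - np.sum(p * t)) ** 2))

    X = np.fft.fftshift(np.fft.fft(x))
    f = np.fft.fftshift(np.fft.fftfreq(len(x), dt))
    q = np.abs(X) ** 2
    q /= q.sum()
    df_rms = np.sqrt(np.sum(q * (f - np.sum(q * f)) ** 2))
    return dt_rms * df_rms

dt = 1e-4
t = (np.arange(1 << 14) - (1 << 13)) * dt
gauss = np.exp(-t ** 2 / (2 * 0.01 ** 2))        # Gaussian, sigma = 10 ms
print(tf_product(gauss, dt), 1 / (4 * np.pi))    # both ~0.0796
```

The paper's point is that listeners judging tone onset time and frequency together beat this product, ruling out a purely linear-filter front end.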
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
But really the point is not where/how can the ears "beat" measurements - the whole point is that we have this amazing sense called hearing which is fundamental to our hobby. Understanding how this sense works is surely the first step in then developing measurements that are relevant. Using convenient existing measurements & then claiming that the ear can't do as well as these measuring devices seems to be back to front. As KBK said, hearing doesn't give a whit about these measurements - they are obviously of no great importance to how the hearing "actually" works - it is focused elsewhere & continues to work at a level which is fascinating.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Maybe the limitations of the two channel system are that we are not using it optimally with measurements that correlate with hearing. Perhaps it's not a limitation after all? Who knows if, given the correct measurement system applied to reproduction devices, that 2 channel audio wouldn't bring us a long way to representing audible realistic soundscapes? Without the necessary focus being applied on how hearing works we are just left in an unknown area where all we have are traditional measurements which tell us little. The real research in all this is coming from other fields other than audio reproduction so we are sometimes left with interpretation & extrapolation of this research into our audio reproduction hobby.

The problem with your dichotomy of "facts vs. ideas" is that in any field of research, the whole point of the research is to investigate theories, ideas, conjectures. Science never becomes "facts"; a theory is always just the best current concept that fits observable phenomena. This remains the case until a "better" concept arises which encompasses more observable phenomena, or until new phenomena are observed. Often, "new" observations are ones which have been ignored in the past, so in essence they are not new, just put aside as inconvenient observations unexplained by the current theory.

I agree with you John. But actually, the two-channel recording process is the built-in weakness, and we are pretty much stuck with it. There are apparently several "hearing" theories. That it is not a strictly "linear" system (though it exhibits characteristics of a linear system even so) was acknowledged a long time ago. Yet, yes, we still have not pinned it all down.

One point I am making is that even if we knew everything about our hearing, that would not fix the limitations of the two-channel stereo we have now. We would then need to change the recording or replication process, and guys on this forum like Bruce (who has a recording studio) are not going to do that; someone has to come along with the new physical gear and techniques to use it, so that guys like Bruce can implement them and finally we can "be there".

Yes, by all means, if what I read (my interpretation of Ken's post above) is true, that 90% of the music is in the 10% leading transient, then let's find that scientific work and perhaps focus on how we can make sure we are getting full fidelity to what our current recording technology does with that transient in our recorded material: tape, LP or digital.

There is room for thought and experiment, but I like to know that I am dealing with facts vs. "ideas". We are (as consumers or audio playback designers) "locked in" to what we are given as the recording medium, sadly.
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
What has been expressed here by Micro is the often-expressed but mistaken belief that once the nerve signals leave the ear mechanism we are really considering "just perception" & all is in the realm of psychology. This ignores the much secondary processing that occurs after the ear's handling of the signal, & also ignores the feedback mechanisms from structures in the brain which dynamically adjust the cochlear amplifier itself.

One paper which is an example of this: "Harmonic sounds, such as voiced speech sounds and many animal communication signals, are characterized by a pitch related to the periodicity of their envelopes. While frequency information is extracted by mechanical filtering of the cochlea, periodicity information is analyzed by temporal filter mechanisms in the brainstem" http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2784045/pdf/fnint-03-027.pdf
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Glad for this thread & these recent posts as it spurred me to search further & educate myself a bit more. One interesting paper I came across was this one: https://engineering.purdue.edu/~malcolm/apple/visualspeech/ImportanceOfTime.pdf

"ON THE IMPORTANCE OF TIME—A TEMPORAL REPRESENTATION OF SOUND"

I haven't read the complete paper yet but this jumped out at me & resonated with what has been said here by KBK:
"One characteristic of an auditory signal that is undisturbed by most nonlinear transformations is the periodicity information in the signal. Even if the bandwidth, amplitude, and phase characteristics of a signal are changing, the repetitive characteristics do not. In addition, it is very unlikely that a periodic signal could come from more than one source. Thus the auditory system can safely assume that sound fragments with a consistent periodicity can be combined and assigned to a single source. Consider, for example, a sound formed by opening and closing the glottis four times and filtering the resulting puffs of air with the vocal resonances. After nonlinear processing the lower auditory nervous system will still detect four similar events which will be heard and integrated as coming from a voice.

The duplex theory of pitch perception, proposed by Licklider in 1951 [11] as a unifying model of pitch perception, is even more useful as a model for the extraction and representation of temporal structure for both periodic and non-periodic signals. This theory produces a movie-like image of sound which is called a correlogram. We believe that the correlogram, like other representations that summarize the temporal information in a signal, is an important tool for understanding the auditory system."

It has been a hobby horse of mine for a long time now that the temporal aspects of sound are much overlooked & possibly a major factor in our auditory process.
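Licklider's periodicity analysis is easy to sketch: the autocorrelation at the heart of the correlogram recovers a 200 Hz periodicity from a harmonic complex even when the 200 Hz fundamental itself is absent (a toy single-channel example, not the full multi-band correlogram).

```python
import numpy as np

fs = 16000
t = np.arange(fs // 4) / fs                  # 0.25 s of signal
f0 = 200.0                                   # periodicity to recover
# Harmonics 3-5 only: no energy at 200 Hz itself ("missing fundamental")
sig = sum(np.sin(2 * np.pi * k * f0 * t) for k in (3, 4, 5))

ac = np.correlate(sig, sig, mode='full')[len(sig) - 1:]   # lags >= 0
lo, hi = int(fs / 500), int(fs / 80)         # search pitches between 80-500 Hz
lag = lo + np.argmax(ac[lo:hi])              # lag of strongest periodicity
print(fs / lag)                              # ~200 Hz, no 200 Hz component needed
```

This is the sense in which temporal (periodicity) analysis is robust to the nonlinear transformations the quoted passage describes: the repetition interval survives even when the spectrum is mangled.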
 

microstrip

VIP/Donor
May 30, 2010
20,807
4,702
2,790
Portugal
You may be missing the point that the ear/brain acts as a sophisticated piece of equipment with dynamic adjustment capabilities. Trying to differentiate what happens in the brain as perceptual seems to be missing the point - the brain is part of the equipment/device - it dynamically adjusts the devices within the ears & also dynamically interprets the results. If you look at the following paper you will see that this interpretation is fundamental to the operation of hearing & allows it to beat FFTs in certain regards.

Good point. I should have formulated the question more clearly to avoid confusion: just the physiological, or also the physiological/perceptual?

BTW, is this brain-commanded adjustment capacity considered congenital, or can it benefit from the experience of the individual (learning capability)?
 

jkeny

Industry Expert, Member Sponsor
Feb 9, 2012
3,374
42
383
Ireland
Good point. I should have formulated the question more clearly to avoid confusion: just the physiological, or also the physiological/perceptual?
I don't know where this transition actually occurs - where physiological becomes perceptual?

BTW, is this brain-commanded adjustment capacity considered congenital, or can it benefit from the experience of the individual (learning capability)?
I'm no expert so can't really answer this. There seem to be a number of processing stations along the auditory path, one of which is the auditory brainstem (part of the "reptilian brain", showing the evolutionary origins of our hearing). As there is good correlation across mammals regarding the functionality of hearing, I would imagine there is a genetic & evolutionary aspect to it. However, we also seem to have a unique ability to learn or improve the higher-level functionality of hearing. Look at the earlier links, which are to a site studying ways of using audition to provide some level of visual ability to blind people; there is definitely a learning task required here, which shows the possible malleability of hearing.

Have a look also at my last posted link, in which a model of hearing is presented that involves some area of the brain interpreting the signal stream presented to it, i.e. like the frames of a movie. This interpretation/analysis would seem important in the appreciation of music, i.e. focussing on & following a musical pattern running through a song, or a particular instrument, voice, etc. The "cocktail party effect" is another example of this ability to isolate & follow a conversation against a noisy background. So again, is this perception or a function of hearing? I would say it's a function.
 
