Are all asynchronous USB chips/implementations created equal?
Or not? In theory, asynchronous USB should eliminate ALL interface jitter; on paper, it should be the be-all, end-all cure for it. The master clock in the DAC controls the flow of data from the computer: data is downloaded into a buffer, which absorbs any variations in flow that occurred in the computer or over the USB cable, and the master clock then takes the data directly from the buffer in perfect time. The only remaining issue, it seems, would be flow control and a potential overrun or underrun when the computer's clock doesn't want to play nicely. A larger buffer would seemingly be all that's needed to ride out all but the very worst deviations.
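To make that flow-control argument concrete, here's a toy simulation of the idea (entirely my own sketch, not any vendor's code; the buffer depth, 10 ms framing, and one-sample feedback nudge are all invented for illustration, and real UAC2 devices report a rate value over a feedback endpoint rather than a packet size):

```python
# Toy simulation of asynchronous USB flow control: the DAC's fixed clock
# drains a FIFO, and feedback to the host steers the buffer toward half full.
from collections import deque

BUFFER_CAPACITY = 4096            # hypothetical FIFO depth, in samples
TARGET_FILL = BUFFER_CAPACITY // 2
DAC_RATE = 44100                  # samples/s consumed by the fixed master clock
NOMINAL_PACKET = 441              # nominal samples per 10 ms host packet

fifo = deque([0] * TARGET_FILL)   # prefill to the midpoint before playback
requested = NOMINAL_PACKET        # packet size the DAC asks the host for

underruns = overruns = 0
for frame in range(1000):         # simulate 10 s in 10 ms frames
    # Host delivers what the DAC last asked for, plus some slop standing in
    # for the host clock's drift (deterministic here, random in real life).
    delivered = requested + (1 if frame % 3 == 0 else -1)
    for _ in range(delivered):
        if len(fifo) < BUFFER_CAPACITY:
            fifo.append(0)
        else:
            overruns += 1
    # The DAC's fixed clock drains the buffer at a constant rate.
    for _ in range(DAC_RATE // 100):
        if fifo:
            fifo.popleft()
        else:
            underruns += 1
    # Feedback: nudge the host's packet size toward the target fill level.
    error = len(fifo) - TARGET_FILL
    requested = NOMINAL_PACKET - (1 if error > 0 else -1 if error < 0 else 0)

print(underruns, overruns)        # a working feedback loop keeps both at zero
```

The point of the sketch is exactly the argument above: as long as the feedback loop keeps the buffer away from empty and full, the DAC-side clock never has to bend to the computer at all.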
But in practice this doesn't seem to be the case. WHY? I don't know, and hence the question. And furthermore, why do DAC manufacturers INSIST on adding further jitter-reduction technology, such as a second buffer or an ASRC? It should not be necessary with an asynchronous setup as I just described.
I am brought to this question by the little iFi iDSD Nano I just purchased. Its technology and programming code are trickled down from Abbingdon Music Research (AMR). It has an XMOS asynchronous USB controller, presumably running custom code written by AMR. But then they add what would seem to be redundant: a second jitter-elimination system, a buffer that essentially acts as a digital PLL, clocking the data out at the average rate of the input and making adjustments as necessary. WHY? There should be no interface jitter after the asynchronous USB receiver! To me it seems this could only make the jitter worse, since the master clock right before the DAC has just been made a variable clock. Even if it varies very little (or not at all), the intrinsic jitter of such an adjustable clock would be higher than that of a truly fixed-oscillator master clock.
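Here's my reading of what such a "buffer as digital PLL" stage does, as a toy sketch (this is how I understand the general technique, not AMR's actual implementation; the smoothing factors and the alternating input wobble are made-up numbers):

```python
# Toy "digital PLL" reclocker: estimate the average input rate with a
# low-pass filter, then slowly slew the output clock toward that estimate.
NOMINAL_RATE = 44100.0
ALPHA = 0.05        # smoothing factor for the rate estimate (assumed)
SLEW = 0.5          # how fast the output clock chases the estimate (assumed)

est_rate = NOMINAL_RATE   # running average of the measured input rate
out_rate = NOMINAL_RATE   # the (now variable!) clock feeding the DAC

# Input rate wobbles +/-50 Hz around nominal; the loop should settle
# near the mean rather than following each swing.
for i in range(200):
    measured = NOMINAL_RATE + (50.0 if i % 2 == 0 else -50.0)
    est_rate += ALPHA * (measured - est_rate)   # low-pass the input rate
    out_rate += SLEW * (est_rate - out_rate)    # slew the output clock

print(out_rate)     # ends close to 44100, not at either extreme
```

Which illustrates my objection: the averaging does reject the incoming wobble, but the price is that the clock right before the DAC is now a steered, adjustable one rather than a fixed crystal.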
Interestingly enough, the iFi iDSD's big brother, the AMR DP-777, actually measures higher jitter over asynchronous USB than over S/PDIF with the "Zero Jitter" mode I just described. In fact, the overall jitter numbers on every input of that device are relatively poor compared to the current state of the art (per Stereophile's measurements).
Furthermore, other manufacturers seem content to add further jitter reduction after their asynchronous USB interface; Emotiva immediately comes to mind. On the other hand, Benchmark has bypassed its trademark ASRC jitter reduction in its latest DAC when using asynchronous USB, acknowledging that there is no need for further jitter reduction.
So, experts, can you tell me... why do asynchronous USB interfaces, which should have little to no interface jitter because of the buffer, need further jitter reduction? Am I missing something? Or are all asynchronous USB interfaces not created equal?
With great curiosity...
Andrew