Barrows, by your description of your configuration, you have just shown that there are probably very few systems offering the isolation you are talking about. This runs contrary to what you stated initially: "I am a little bewildered by jkenny's apparent lack of knowledge on how to properly isolate a USB interface though? I thought he was much more knowledgable about this?". You then went on to describe an XMOS-based system which only partially isolates the DAC, and now you have described your own system, the lengths you have gone to, and the fact that you are still striving for a better solution!
I'm confused by this shifting of opinion: is it easy to isolate, as you claimed initially? Or, at least, was my statement that it was difficult erroneous?
Hahaha! Nothing is as "easy" as it seems at first glance. But I entirely disagree with you, though perhaps this is a question of semantics. The USB interface does provide isolation from the computer. There is absolutely no need for the XMOS chip itself to be isolated; what is important is that the oscillators are isolated, and that the I2S signal is re-clocked by a clean clock signal on the clean side of the interface. As long as this is done, you have low jitter (at virtually the intrinsic level of the masterclock) and all the samples are there (if you were dropping any samples you would hear ticks, pops, or clicks). The facts have not changed, and neither has my "opinion" of them.
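To illustrate why only the clean-side clock matters once the I2S signal is re-clocked, here is a toy simulation (my own sketch, not anything from Barrows' hardware): a D flip-flop model where each incoming data edge is simply re-launched on the next clean-clock edge. The incoming jitter disappears from the output; what remains is only the local oscillator's jitter, as long as the incoming jitter stays well under half a clock period.

```python
import random
import statistics

def reclock(edge_times, clk_period, clk_jitter_sd, seed=0):
    """D flip-flop model: each data edge is re-launched on the next
    clean-clock edge after it.  Output timing depends only on the
    clean clock, not on the incoming edge jitter."""
    rng = random.Random(seed)
    out = []
    for t in edge_times:
        n = int(t // clk_period) + 1          # index of next clock edge
        out.append(n * clk_period + rng.gauss(0, clk_jitter_sd))
    return out

# Dirty-side data edges: nominally mid-period, +/- ~30 ns of jitter
rng = random.Random(1)
data = [k * 1000 + 500 + rng.gauss(0, 30) for k in range(200)]

# Clean-side clock: same 1000 ns period, only 0.01 ns of its own jitter
clean = reclock(data, clk_period=1000, clk_jitter_sd=0.01)

in_jitter = statistics.pstdev(t - (k * 1000 + 500) for k, t in enumerate(data))
out_jitter = statistics.pstdev(t - (k + 1) * 1000 for k, t in enumerate(clean))
```

Running this, `in_jitter` comes out around 30 ns while `out_jitter` is around 0.01 ns: the re-clocked edges carry the local oscillator's jitter, not the USB side's.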
Now, reducing the noise put back on the AC mains is a separate issue from USB isolation; it is a system problem which exists in any audio system, even an all-analog one. Audiophiles already know about these problems with AC line noise; it is nothing new.
My approach is to reduce the sources of noise as much as reasonably possible. There is no evidence to suggest that jplay has any influence on computer noise reduction; in fact, a good theory has been put forth in the preceding posts which suggests jplay is likely to increase noise. Since these noise levels would be relatively trivial to measure for someone with a financial motive, I would like to see measurements like this from the developers of jplay…
In these systems, only three things which reach the DAC conversion stage matter: the samples, the jitter, and the noise. As long as we address these three, the DAC converter can do its job very well. There seems to be a weird belief among some that the samples themselves are somehow being "damaged", resulting in subtle audio degradation. This just sounds like nonsense. If it were true, or even possible, why can no one who believes it present a single plausible mechanism for how this "degradation" might happen (much less measure its effects)? I can accept that noise coupling into the clean side of things could affect clock jitter, and I can also accept that noise coupling into the output stage of the DAC could result in sonic degradation; other than that…
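To put rough numbers on the jitter part of those three things, here is a back-of-envelope calculation (my own, not from the post): the worst-case sample error from a timing error Δt on a full-scale sine of frequency f is bounded by the signal's maximum slew rate, 2πf, times Δt.

```python
import math

def jitter_error_db(f_signal_hz, jitter_s):
    """Worst-case sample error, in dB relative to full scale, from
    sampling a full-scale sine of frequency f_signal_hz with a timing
    error of jitter_s seconds.  The slew rate of sin(2*pi*f*t) peaks
    at 2*pi*f, so the amplitude error is at most 2*pi*f * jitter_s."""
    return 20 * math.log10(2 * math.pi * f_signal_hz * jitter_s)

# 1 ns of timing error on a 20 kHz full-scale tone:
worst = jitter_error_db(20e3, 1e-9)   # roughly -78 dBFS
```

This is only a bound on a single sample's error, not a spectrum, but it shows why pushing jitter down toward the masterclock's intrinsic level (tens of picoseconds or less) buries these errors far below audibility.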
I have heard different things from different software approaches; I associate these differences with different noise spectra. Additionally, the differences I hear are usually not of the type which could prove one software approach "better", just that there is (maybe) a difference; the kind of difference where one track might sound better and another might sound worse. The more dialed-in my system got, the less software seemed to matter, at which point I stopped playing with software, as my time was better spent choosing different resistors in my DAC (for example), which made much more prominent improvements.