Computer Audio: Is isolation as good as optimization?

Jul 1, 2010
#21
You might use Wi-Fi or Ethernet (also galvanically isolated), but you need a processor to do all the networking.
It might be a small one (SqueezeTouch), but it is a kind of mini PC.
So the question arises: how do you isolate the DAC from this computer?
Of course. I didn't think of that. It could, however, be a very simple, dedicated computer with minimal processes, no HD, no fan...pretty much like the dedicated, minimized/optimized computer servers people go for, no?

Tim
 

fas42

Addicted To Best
Jan 8, 2011
NSW Australia
#22
How about wireless? An obvious advantage is isolation. What are the drawbacks?

Tim
Depending on the setup, you could have some radio-frequency (RF) interference degrading the sound. As an example, I had to do a lot of work on reducing the impact of this: even now there are still "problems", but it's manageable.

Frank
 

Vincent Kars

WBF Technical Expert: Computer Audio
Jul 1, 2010
#23
It could, however, be a very simple, dedicated computer with minimal processes, no HD, no fan...pretty much like the dedicated, minimized/optimized computer servers people go for, no?
Yes, like the Squeeze.
500 MHz ARM, no fan, no HD, no moving parts at all
But hardcore hackers like John Swenson hacked it to do async USB.

BTW: it took a while before I too understood that the protocol alone is not the whole story.
Clever guys like the ones at Devialet now advertise their network connection as Asynchronous!
 
Jul 1, 2010
#24
Yes, like the Squeeze.
500 MHz ARM, no fan, no HD, no moving parts at all
But hardcore hackers like John Swenson hacked it to do async USB.

BTW: it took a while before I too understood that the protocol alone is not the whole story.
Clever guys like the ones at Devialet now advertise their network connection as Asynchronous!
Ok, now give me the long version of that, and talk real slow.

Tim
 

Vincent Kars

WBF Technical Expert: Computer Audio
Jul 1, 2010
#25
Take SPDIF.
The sender sends the data; the receiver uses the send rate to reconstruct the sample rate.
Obviously the DAC is slaved to the sender.
Adaptive USB: the DAC constructs the sample rate using the amount of data generated by the sender.
Obviously the DAC is slaved to the sender.
Asynchronous USB: the DAC is told to run at a certain speed, e.g. 96 kHz, and regulates the amount of data sent.
Obviously the DAC is the master; it can run at a fixed speed, with zero input jitter by design.
Conclusion: an asynchronous protocol is the way to go.

Wi-Fi and Ethernet are asynchronous by design.
Conclusion: these protocols are asynchronous, so they should do just as well.

A little experiment.
Play a song on a PC (sorry, no OSX here) from an HD.
Play the same song from an external HD.
Play the same song from a NAS.
Play the same song from another PC on the network using DLNA (sorry, no Airplay here).
All of the time the protocols are asynchronous, but you are using the same audio device, e.g. the onboard audio.

In these examples the protocol is not relevant. The configuration is totally different, but in the end you are simply reading data and storing it in a buffer in memory.
We are not feeding a DAC using these protocols but reading data to process.

If we talk about isolating a DAC at the protocol level, it is not about the method (protocol) used to read the data but about the protocol we use to feed the DAC directly.
It is not about how to get our audio files into a CPU but about how to get the audio out of a CPU with minimal jitter.
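The adaptive-vs-asynchronous distinction above can be shown with a toy simulation. This is a sketch under stated assumptions, not any real USB stack: the function names and the jitter figures are invented for illustration. The point is only that in adaptive mode the DAC's recovered clock tracks the (jittery) delivery rate, while in asynchronous mode the DAC runs a fixed local clock and delivery jitter never reaches it.

```python
import random

random.seed(0)

def sender_intervals(n, nominal_us, jitter_us):
    """Packet intervals from a computer: nominal spacing plus timing jitter."""
    return [nominal_us + random.uniform(-jitter_us, jitter_us) for _ in range(n)]

def adaptive_dac_clock(arrivals):
    """Adaptive mode: the DAC derives its sample clock from the arrival rate,
    so sender jitter shows up directly as clock-period variation."""
    return arrivals  # recovered clock tracks the jittery delivery

def async_dac_clock(arrivals, fixed_us):
    """Asynchronous mode: the DAC runs its own fixed clock and merely tells
    the sender to speed up or slow down; delivery jitter never reaches it."""
    return [fixed_us for _ in arrivals]

def peak_jitter(periods, nominal_us):
    """Worst-case deviation of the clock period from nominal."""
    return max(abs(p - nominal_us) for p in periods)

# ~96 kHz frame spacing with up to 0.5 us of computer-side timing slop
arrivals = sender_intervals(1000, nominal_us=10.4, jitter_us=0.5)
print(peak_jitter(adaptive_dac_clock(arrivals), 10.4))        # nonzero: jitter passes through
print(peak_jitter(async_dac_clock(arrivals, 10.4), 10.4))     # 0.0: fixed clock by design
```

The second print is zero by construction, which is exactly the "zero input jitter by design" claim: the DAC's clock period simply does not depend on when the data arrived.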
 

fas42

Addicted To Best
Jan 8, 2011
NSW Australia
#26
Wi-Fi and Ethernet are asynchronous by design.
Conclusion: these protocols are asynchronous, so they should do just as well.
I think it's getting a bit messy with the word "asynchronous" being thrown around in these different situations. Another way of explaining things would be as follows:

A DAC needs a good, "clean" clock to deliver good results, which means that it beats with an absolutely steady rhythm; it has to have a perfect "heartbeat". Now every DAC has a clock sitting right next to it; whether it's a crystal, a phase-locked loop (PLL), or something else, it's still a clock. Some people may dispute this, but as far as the electronics are concerned it is a clock.

Now, some of these clocks are set up to march to their own time: they decide how fast they're going, always. Others have a heartbeat that's bendable, or flexible; they can be told to slightly speed up or slightly slow down, and this is done many times a second. This is how most of digital works when sending the musical data from something to the DAC circuitry: the component sending the data out is doing so at a rate, or clock, that suits itself, not the DAC. And the DAC clock has to follow this, otherwise the audio can "glitch". Now, if the heartbeat is constantly being told to speed up and slow down at a ridiculous rate, then it is pretty intuitively obvious this is not a good situation: "jitter" is in the air. So the aim is always to keep the heartbeat as steady as possible, only nudging it one way or the other when you absolutely have to, or never at all if possible. Which gets back to the first version of the clock, the "ideal".

The way to reduce the stress, or need, of constantly fiddling with the DAC clock's speed, its heartbeat, is to have a buffer, a reservoir of data that the DAC can suck out of at a pace that suits it. The best thing is to have a very, very large dump of this data, so that no matter how out of kilter the rates of filling and emptying the buffer are, there is never an awkward moment when it's full, or empty. And this technique has been used successfully to get good audio: it's equivalent to having a DAC clock which always decides its own heartbeat, not something else.

What's left is deciding what controls the rate of filling this buffer: the emptying is never a problem; that's always decided by the DAC clock, which is just trying to run as steadily as possible, with minimum jitter. The best situation is having the DAC clock effectively deciding this, which means the buffer can be very small, and the heartbeat at the DAC can be made as stable as possible. This is done when you have the DAC running a feedback channel back to the transport, as in asynchronous USB, and in a sense on a wider network. Networks work as a cooperative thing; no one is pushing anyone else around: one device asks for something, or is told something is coming. The rest of the network can choose to ignore this, or respond. The key thing is that there are no guarantees, except in special circumstances. So a DAC, which has to have a buffer that is never empty or full, otherwise the audio glitches, needs yet again for that buffer to be big; this is a more complicated dance between whatever is sending the musical data and whatever is receiving it. I don't know what all the "protocols" are, which in part decide who is holding the business end of the whip, setting the clock rate, but good engineering is always needed to make sure that, firstly, the buffer is handled correctly, and, secondly, that the DAC clock, the crucial heartbeat, is able to run at as steady a pace as possible ...
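Frank's buffer argument can be made concrete with a minimal occupancy model. This is a sketch with made-up numbers, not any device's firmware: deliveries arrive in network-style bursts, the DAC drains at its own steady rate, and only the buffer size decides whether playback survives without an underrun (buffer empty, glitch) or an overrun (buffer full, dropped data).

```python
def simulate(buffer_frames, deliveries, drain_per_tick, start_level):
    """Track buffer occupancy per tick.

    deliveries:     frames arriving each tick (bursty, network-style)
    drain_per_tick: frames the steady DAC clock consumes each tick
    Returns 'overrun' if the buffer overflows, 'underrun' if it empties.
    """
    level = start_level
    for arrived in deliveries:
        level += arrived
        if level > buffer_frames:
            return "overrun"   # sender couldn't be throttled; data lost
        level -= drain_per_tick
        if level < 0:
            return "underrun"  # DAC starved; the audio glitches
    return "ok"

# Bursty delivery: 10 frames every 5th tick, nothing in between (avg 2/tick).
bursty = [10, 0, 0, 0, 0] * 40

print(simulate(buffer_frames=8,  deliveries=bursty, drain_per_tick=2, start_level=4))   # overrun
print(simulate(buffer_frames=64, deliveries=bursty, drain_per_tick=2, start_level=32))  # ok
```

The average rates match in both runs; only the larger buffer rides out the bursts, which is why a network-fed DAC without a tight feedback path needs the big reservoir Frank describes.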

Frank
 

exa065

New Member
Jan 25, 2012
Toronto
www.exaSound.com
#28
Is isolation as good as optimization?

In my experience, improving computer power sources and filtering without isolation is not enough. The common ground between the computer and the audio gear will always transmit significant high-frequency noise. Galvanic isolation eliminates ground loops. A proper implementation can achieve barrier capacitance as low as 20 pF.
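Why does a low barrier capacitance matter? The residual capacitance across an isolation barrier is the remaining path for high-frequency ground noise, and its impedance is |Z| = 1 / (2πfC). A quick worked example for the 20 pF figure quoted above (the frequencies chosen are just illustrative):

```python
import math

def capacitive_impedance_ohms(freq_hz, cap_farads):
    """Magnitude of a capacitor's impedance: |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * freq_hz * cap_farads)

barrier = 20e-12  # the 20 pF isolation-barrier figure quoted above
for f in (100e3, 1e6, 10e6):
    print(f"{f/1e6:g} MHz: {capacitive_impedance_ohms(f, barrier):,.0f} ohms")
    # roughly 80 k, 8 k and 0.8 k ohms respectively
```

So even at 10 MHz the 20 pF barrier still presents hundreds of ohms to noise currents, whereas a shared ground wire presents essentially none, which is the case for isolation in a nutshell.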

Achieving ground isolation dramatically reduces noise, but this is not the only problem when computers are used as music players.

Computers are general purpose devices and it is not a trivial task to turn them into audiophile grade gear. There are a number of hardware and software elements in the sound-streaming chain and all of them must be fully transparent. These include the media files, the software player, the OS sound subsystem, the sound driver and the audio interface to the external DAC. All must be bit-perfect and the sum of them must be jitter-free.

I achieve this by using simple bit-perfect players, proprietary bit-perfect sound drivers, proprietary asynchronous USB interface with error correction, GMRs for galvanic isolation and re-clocking.

A truly asynchronous interface compensates for the unstable computer timing. Therefore there is no need to try to eliminate background computer processes like disk operations and networking.

The benefits of this approach are fully realised when high-bit-depth / high-sampling rate media files are used. In my experience the highest degree of realism is achieved when source files are played at their native sample rate and resolution.

George
 
Apr 3, 2010
Seattle, WA
#29
Great post George :). It is the right formula.

We need at some point an objective way to prove it. Folks still spend a lot of energy and money on improving sources even with all the things you (and I) mention.
 
May 30, 2010
Portugal
#30
(...) Computers are general purpose devices and it is not a trivial task to turn them into audiophile grade gear. There are a number of hardware and software elements in the sound-streaming chain and all of them must be fully transparent. These include the media files, the software player, the OS sound subsystem, the sound driver and the audio interface to the external DAC. All must be bit-perfect and the sum of them must be jitter-free.

I achieve this by using simple bit-perfect players, proprietary bit-perfect sound drivers, proprietary asynchronous USB interface with error correction, GMRs for galvanic isolation and re-clocking.

A truly asynchronous interface compensates for the unstable computer timing. Therefore there is no need to try to eliminate background computer processes like disk operations and networking. (...)

George
George,

Here we go again. Can you conclude from your words that a server implemented using, for example, Windows 7, JRiver and direct ASIO drivers, playing native FLAC files at their native sample rate, can be "non-bit-perfect"?

What are you calling "fully transparent"?
 

exa065

New Member
Jan 25, 2012
Toronto
www.exaSound.com
#31
J River is one of the best examples of stable bit-perfect operation:

• It works with ASIO drivers, and most ASIO drivers are bit-perfect by design.
• ASIO can completely bypass the Windows sound system; the Windows volume control and mixer are out of the picture.
• ASIO allows for easy automatic sample-rate switching, so another limitation of the operating system is eliminated.
• ASIO is very lightweight; my driver works with J River at up to 8 channels, 32-bit resolution and 384 kHz sample rates.
• The only area for configuration is the J River player. To have bit-perfect operation, the user must keep the setup simple and basic: no DSP processing, no up-sampling, no channel mixing, no bass processing, no software volume control.

"Fully transparent" means bitperfect. The bits from the media file delivered to the input pins of the DAC chip without any changes. It also implies that the data delivery is jitter free. Of course the DAC will introduce some jitter, but the interface, the delivery medium can and must be jitter free. This is one of the major advantages of Asynchronous USB over legacy audio interfaces like SPDIF.

George
 

exa065

New Member
Jan 25, 2012
7
0
0
Toronto
www.exaSound.com
#32
Great post George :). It is the right formula.

We need at some point an objective way to prove it. Folks still spend a lot of energy and money on improving sources even with all the things you (and I) mention.
amirm,

It is great to be here and to explore new ideas! I find that people are investing lots of time and energy to perfect outdated technologies. CDs are really constrained by resolution limitations. Why invest thousands of dollars trying to achieve the impossible: to bring to life material that was downgraded by the recording process? Many of the great performances of the past are being re-digitised, and studio-grade master files are offered at specialty online stores. I think that this is the future.

George
 
May 30, 2010
Portugal
#33
(...) I find that people are investing lots of time and energy to perfect outdated technologies. CDs are really constrained by resolution limitations. Why invest thousands of dollars trying to achieve the impossible: to bring to life material that was downgraded by the recording process? Many of the great performances of the past are being re-digitised, and studio-grade master files are offered at specialty online stores. I think that this is the future.

George
George,

It would be great if it were true, but unfortunately for most of us it is not. I look at my recording collection and less than 1% of it is available in HiRez formats. HiRez is still a niche and I foresee that I will go on listening to 44.1/16 for a long time. Thirty years cannot be erased in a short time, and sometimes the future is not as near as we would like.
As you say, I have also found that even thousands of dollars cannot achieve the impossible, but they can make the existing material very listenable and even enjoyable.
 

exa065

New Member
Jan 25, 2012
7
0
0
Toronto
www.exaSound.com
#34
George,

It would be great if it were true, but unfortunately for most of us it is not. I look at my recording collection and less than 1% of it is available in HiRez formats. HiRez is still a niche and I foresee that I will go on listening to 44.1/16 for a long time. Thirty years cannot be erased in a short time, and sometimes the future is not as near as we would like.
As you say, I have also found that even thousands of dollars cannot achieve the impossible, but they can make the existing material very listenable and even enjoyable.
Sure, I also have a vast collection of CDs. But I am no longer interested in the minor returns I get when I try to beautify CD sound.

For me it is time for something more exciting. With high-sampling rate files the sound is really detailed and dynamic. The improvements are not subtle at all. Detail is much clearer, sound is much less harsh, and the bass is so much stronger and tighter.
 

fas42

Addicted To Best
Jan 8, 2011
NSW Australia
#35
Sure, I also have a vast collection of CDs. But I am no longer interested in the minor returns I get when I try to beautify CD sound.

For me it is time for something more exciting. With high-sampling rate files the sound is really detailed and dynamic. The improvements are not subtle at all. Detail is much clearer, sound is much less harsh, and the bass is so much stronger and tighter.
Welcome to the forum, George. But I will beg to differ regarding CD sound: in spite of what is said by many, and your experiences to date, absolutely remarkable, indeed magical, sound can be retrieved from Redbook. The problem is that the general audio industry hasn't investigated the situation and engineered good solutions, and in many cases refuses to; so it is almost impossible to simply purchase a straightforward solution, irrespective of how much money is thrown at it. Virtually all people who achieve higher levels of sound in digital either have to do major tweaking themselves, which is what I do, or have their equipment extensively modified by people who have that extra understanding.

It is not necessary to go hi-res: the fact that it sounds better for most people is because it circumvents or solves some of the technical issues, the glitches, that plague digital playback, which the industry can't seem to get a decent grip on ...

Frank
 
Jan 18, 2012
Drobak Norway
#36
Redbook and the future

I think that just because I've seen the light, or glimpses of it, I cannot ignore 6 TB of WAV files in basically Redbook format.
Besides, sometimes the light at the other end of the tunnel turns out to be the train... :eek:
Vinyl is still my ultimate source for hi-q sound, but even Redbook through the right gear is getting closer, much thanks to guys like the EXA and TPA teams.
Besides, look at the cost of a SOTA vinyl rig compared to a ditto PC audio setup: 50:1?
And soundwise: 10:7?
Best,
Leif
 
