Wi-Fi and Ethernet are asynchronous by design. Conclusion: these protocols are asynchronous, so everything running over them is as well.
I think it's getting a bit messy with the word "asynchronous" being thrown around in these different situations. Another way of explaining things would be as follows:
A DAC needs a good, "clean" clock to deliver good results, meaning one that beats with an absolutely steady rhythm, a perfect "heartbeat". Now, every DAC has a clock sitting right next to it; whether it's a crystal, a phase-locked loop (PLL), or something else, it's still a clock. Some people may dispute this, but as far as the electronics are concerned it is a clock.
Now, some of these clocks are set up to march to their own time: they always decide how fast they're going. Others have a heartbeat that's bendable, or flexible; they can be told to slightly speed up or slow down, and this is done many times a second. This is how most digital audio works when sending the musical data from something to the DAC circuitry: the component sending the data out does so at a rate, or clock, that suits itself, not the DAC, and the DAC clock has to follow this, otherwise the audio can "glitch". Now, if the heartbeat is constantly being told to speed up and slow down at a ridiculous rate, it's intuitively obvious this is not a good situation: "jitter" is in the air. So the aim is always to keep the heartbeat as steady as possible, nudging it one way or the other only when you absolutely have to, or never at all if possible. Which gets back to the first kind of clock, the "ideal".
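A rough sketch of that "bendable clock" idea, with entirely hypothetical numbers: the DAC clock keeps getting nudged toward the sender's drifting rate, and every nudge is a moment the heartbeat is disturbed.

```python
# A toy picture of a "bendable" DAC clock (all numbers hypothetical):
# the sender's rate wanders, and the DAC clock is repeatedly nudged
# toward it. Frequent nudges are what "jitter" means in this picture.

sender_rate = 48000.0      # Hz, the sender's idea of the sample rate
dac_rate = 48000.0         # Hz, the DAC clock, which must follow

nudges = []
for tick in range(10):
    # the sender drifts a little each tick (simulated wobble)
    sender_rate += (-1) ** tick * 2.0
    # the DAC clock is told to bend part-way toward the sender's rate
    correction = 0.5 * (sender_rate - dac_rate)
    dac_rate += correction
    nudges.append(abs(correction))

# every nonzero entry is a moment the heartbeat was disturbed
print("average nudge per tick:", sum(nudges) / len(nudges), "Hz")
```

The ideal clock is the degenerate case where every correction is zero.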
The way to reduce the need to constantly fiddle with the DAC clock's speed, its heartbeat, is to have a buffer: a reservoir of data that the DAC can draw from at a pace that suits it. The best thing is to have a very, very large store of this data, so no matter how out of kilter the rates of filling and emptying the buffer are, there is never an awkward moment when it's completely full or completely empty. This technique has been used successfully to get good audio: it's equivalent to having a DAC clock that always decides its own heartbeat, not something else.
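That full-or-empty failure mode is easy to model. A minimal sketch, with made-up rates and sizes: the source fills at its own pace, the DAC drains at its own steady pace, and only a big enough buffer survives the mismatch.

```python
# A toy model of the buffer between a source and a DAC (names and
# numbers are illustrative). The source fills at its own rate; the
# DAC drains at its own fixed rate. A big buffer absorbs the
# mismatch; a small one underruns or overruns, i.e. audio glitches.

def simulate(buffer_size, fill_rate, drain_rate, steps):
    """Return 'ok', 'underrun', or 'overrun' after `steps` ticks."""
    level = buffer_size // 2          # start half full
    for _ in range(steps):
        level += fill_rate            # source pushes samples in
        if level > buffer_size:
            return "overrun"          # buffer full: data lost
        level -= drain_rate           # DAC pulls samples out, steadily
        if level < 0:
            return "underrun"         # buffer empty: DAC starves
    return "ok"

# The rates disagree by 1 sample per tick: the small buffer soon
# fails, the large one rides out the same mismatch.
print(simulate(buffer_size=100,    fill_rate=48, drain_rate=49, steps=1000))  # underrun
print(simulate(buffer_size=100000, fill_rate=48, drain_rate=49, steps=1000))  # ok
```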
What's left is deciding what controls the rate of filling this buffer; the emptying is never a problem, because that's always decided by the DAC clock, which is just trying to run as steadily as possible, with minimum jitter. The best situation is having the DAC clock effectively decide the filling too, which means the buffer can be very small and the heartbeat at the DAC can be made as stable as possible. This is what happens when the DAC runs a second channel back to the transport, as in asynchronous USB, and in a sense on a wider network. Networks work as a cooperative thing; no one is pushing anyone else around: one device asks for something, or is told something is coming, and the rest of the network can choose to ignore this or respond. The key thing is that there are no guarantees, except in special circumstances. So a DAC whose buffer must never be empty or full, otherwise the audio glitches, once again needs that buffer to be big; this is a more complicated dance between whatever is sending the musical data and whatever is receiving it. I don't know what all the "protocols" are, which in part decide who is holding the business end of the whip, setting the clock rate, but good engineering is always needed to make sure that, firstly, the buffer is handled correctly, and secondly, that the DAC clock, the crucial heartbeat, is able to run at as steady a pace as possible ...
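The asynchronous-USB-style arrangement can be sketched too. This is only a cartoon with hypothetical numbers, not the real USB feedback mechanism: the DAC drains a small buffer at its own fixed rate and, over the return path, tells the sender how much to put in the next delivery, so the DAC clock never has to bend.

```python
# Cartoon of the "DAC in charge" arrangement (hypothetical numbers):
# the DAC consumes at its own steady clock and uses a feedback path
# to ask the sender for slightly more or less data, keeping a small
# buffer near a target level without ever bending its own clock.

DAC_RATE = 48          # samples the DAC consumes per interval
TARGET = 200           # buffer level the DAC would like to sit at

def feedback(level):
    """DAC -> sender: ask for more when low, less when high."""
    return DAC_RATE + (1 if level < TARGET else -1 if level > TARGET else 0)

level = 150            # start below target, e.g. just after start-up
request = DAC_RATE
for interval in range(1000):
    level += request   # sender delivers exactly what was asked for
    level -= DAC_RATE  # DAC drains at its own steady rate
    request = feedback(level)
    assert 0 < level < 2 * TARGET, "glitch"

print("buffer settled at:", level)  # climbs back to 200 and stays there
```

Here the sender holds no whip at all: the clock rate is set entirely at the DAC end, which is the whole point of the arrangement.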
Frank