Assuming one is running Amarra in bit-perfect mode (volume set to max, no resampling, no dithering), the only explanation for improved fidelity is an esoteric one.
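To make "bit perfect" concrete: if the player is truly transparent, a digital loopback capture of its output should match the source file sample for sample. A rough sketch of that check (the function name and the two-array interface are my invention, and it assumes the capture is longer than the source because the playback chain adds latency):

```python
import numpy as np

def is_bit_perfect(source: np.ndarray, captured: np.ndarray) -> bool:
    """True if the captured stream matches the source sample for sample.

    Both arrays are integer PCM samples. Any volume scaling, resampling,
    or dithering in the player would break exact equality.
    """
    # The playback chain delays the capture, so locate the source within
    # it by cross-correlation before comparing.
    corr = np.correlate(captured.astype(np.float64),
                        source.astype(np.float64), mode="valid")
    offset = int(np.argmax(corr))
    aligned = captured[offset:offset + len(source)]
    return len(aligned) == len(source) and np.array_equal(aligned, source)
```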
The idea is that by limiting or controlling activity in the PC, the timing and quality of the signals over ground connections to your DAC can be improved. When you look at the spectrum of jitter, you quickly see a 60 Hz component coming from the power supply (in some devices). Likewise, you see other components from, say, other clocks in the device. With the PC having an incredible amount of activity, it will likely pollute its clock with lots of jitter components. And if your audio equipment is not electrically isolated from the PC, then as soon as you connect the two together, you pollute your DAC just the same.
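To show why a 60 Hz component is so easy to spot, here is a small simulation (all values are made up for illustration, not measurements of any real device): a test tone whose sampling instants are displaced by 2 ns of 60 Hz jitter grows sidebands at ±60 Hz around the tone, which is exactly what you look for in a jitter spectrum:

```python
import numpy as np

fs = 48_000        # sample rate
f_tone = 12_000    # test tone frequency
f_jit = 60.0       # mains-related jitter frequency
jit_amp = 2e-9     # 2 ns peak jitter (illustrative value)

t = np.arange(fs * 4) / fs
# Jitter displaces the sampling instants, phase-modulating the tone.
x = np.sin(2 * np.pi * f_tone * (t + jit_amp * np.sin(2 * np.pi * f_jit * t)))

spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
db = 20 * np.log10(spec / spec.max() + 1e-20)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
for f in (f_tone - f_jit, f_tone, f_tone + f_jit):
    i = int(np.argmin(np.abs(freqs - f)))
    print(f"{freqs[i]:8.1f} Hz: {db[i]:7.1f} dB")  # sidebands land near -83 dBc
```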
The above is a theory. As far as I know, they have never furnished measurements to prove it. That is a shame, since we can easily measure the above effects. And while folks argue over the validity of measurements in general, that argument does not apply here because the hardware is the same in either case. We should be able to switch media players and see a difference in the measurements. If there is no measurable difference, there is no way there will be an audible one. At least I can't think of how, in this configuration.
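The simplest form of that measurement is a null test: capture the DAC's analog output with the same ADC while each player plays the same file, time-align the two captures, and subtract. A sketch of the comparison step (assuming float captures scaled to ±1.0 that are already sample-aligned; the alignment itself is the fiddly part):

```python
import numpy as np

def residual_db(capture_a: np.ndarray, capture_b: np.ndarray) -> float:
    """RMS level of the difference between two aligned captures, in dB
    relative to full scale. A residual down at the ADC's own noise floor
    means the two players produced the same measurable output."""
    n = min(len(capture_a), len(capture_b))
    diff = capture_a[:n].astype(np.float64) - capture_b[:n].astype(np.float64)
    rms = np.sqrt(np.mean(diff ** 2))
    return 20 * np.log10(rms + 1e-20)
```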
I plan to run a test like this one of these days. I'd like to see how bad the PC is as-is, and then with such changes.
My experience on the PC has shown that the placebo effect is far stronger than any real difference. For example, I tested Foobar2000 against WMP. At first, I thought Foobar sounded so much better. But upon more careful testing, I found the difference vanished.
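The cure for placebo is blind testing, and an ABX run is easy to score: with a 50/50 chance of guessing each trial right, the probability of doing at least as well by luck is a one-sided binomial tail. A quick sketch:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` of `trials` right
    by guessing alone (one-sided binomial, p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(13, 16))  # ~0.011: hard to explain by luck
print(abx_p_value(9, 16))   # ~0.40: indistinguishable from guessing
```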
The situation with Lee is even more puzzling. He is using an asynchronous USB DAC. The whole reason to use one is to eliminate the clock from the PC and have it come from a fresh one on the other side of the bus. To hear a difference means that either the electrical isolation is not there, or their implementation is still subject to jitter from the PC.
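For reference, here is the mechanism that is supposed to protect Lee: in asynchronous USB audio the DAC's local oscillator paces the conversion, and a feedback value tells the host how many samples to send per interval. A toy model of that loop (real devices report a 10.14 fixed-point rate over a feedback endpoint; the 30 ppm offset and the proportional correction here are made up for illustration):

```python
nominal = 48.0                     # samples per 1 ms USB frame at 48 kHz
dac_consumes = 48.0 * (1 + 30e-6)  # DAC crystal runs 30 ppm fast (illustrative)

buffer_level = 0.0   # samples queued in the DAC, relative to half-full
feedback = nominal
for frame in range(8):
    buffer_level += feedback - dac_consumes  # host sends what the DAC asked for
    feedback = nominal - 0.1 * buffer_level  # DAC nudges its request to re-center
    print(f"frame {frame}: buffer {buffer_level:+.5f}, next request {feedback:.5f}")
```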