Thanks for the link. It is a strange and misleading response. It even starts wrong:
"'Simpler is better' is an old rule frequently quoted by designers of audio-equipment. However, some say we should completely forget this rule when it comes to computer audio: They say, computers are so 'fast' and audio reproduction such a relatively 'easy' job for a computer, that any computer, regardless of hardware or software used, will sound absolutely identical provided the data is 'bit-perfect' (as in: digital audio bits are not modified by equalization, digital signal processing, etc). And they add that once a computer outputs 'bit-perfect' data then all those who claim to hear a difference between software players or operating systems or computer hardware are ‘delusional’ at best and, at worst are 'scammers' and 'hoaxsters'!"
No one is saying that simpler is not better. We are saying that they can't make, and aren't making, things simpler. They may think they are, but they aren't. If the path to your destination takes 1,000 turns and you change one of them, you have not made the journey less complex. It is that complexity that they cannot tame. While it is the position of JRiver that bit-perfect equals perfection, it is not mine. I acknowledge that timing is involved. The problem is, as they later say, the PC is not a "real-time" device no matter what you do in the app. Ultimately the operating system has a higher priority than any process in the system, and it is the entity that decides what happens next.
"Unfortunately, for digital audio, timing is an essential requirement: the official standard for CD playback says 32 bits must be played precisely every 22 microseconds: if this timing is 'off', even by a very, very small amount, the output, by definition, is no longer in line with the technical specification for CD playback. In other words: digital playback must not only be 'bit-perfect' but also 'timing-perfect'. That is why many modern DACs often showcase 'jitter' measurements (denoting a DAC’s timing precision) at the _pico_second level (1 picosecond is only 0.000001 microseconds!).
And that is what JPLAY is all about: improving timing."
What confusing logic. First of all, if the timing is not maintained, you get a glitch or data loss. If the target device is fed data late, it has to make up the missing samples on its own, causing an audible distortion that cannot be missed. If data arrives faster than the device can take it, samples are dumped on the floor and the effect, again, will be anything but subtle.
Stepping back, the way a non-real-time PC is made to behave like a real-time one is to create a buffer, a pool of memory, where audio samples are prefetched and stored. The hardware also has a buffer, usually a much smaller one, that it uses to make sure it can output samples on time. When its buffer starts to empty, it forces the computer to stop with what is called an "interrupt." All activity in the computer stops at that point, and a piece of software called a "driver" for that device (e.g., the USB driver for a USB DAC) starts to run; it takes data from computer memory and sends it to the hardware to push out. The job of an application is to keep handing data to the driver so that when that critical moment comes, the driver has plenty of data to send out. Once you achieve glitch-free operation here, doing better and faster means nothing. This is why buying a computer that is 100 times faster than you need will not result in audio playing any better. This deals with the first sentence.
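The buffer-and-interrupt flow above can be sketched as a toy producer/consumer model. This is a simplified illustration, not how any real driver is written; all names and numbers here are made up for the sketch:

```python
import queue
import threading
import time

# Toy model of buffered playback: the "driver" drains chunks at a fixed
# rate (the hardware clock); the "player" refills the buffer ahead of
# time. As long as the buffer never runs empty, playback is glitch-free;
# a faster player cannot make it "more glitch-free".
BUFFER_CHUNKS = 8      # driver-side buffer depth (made-up number)
TOTAL_CHUNKS = 50

buffer = queue.Queue(maxsize=BUFFER_CHUNKS)
underruns = 0

def player():
    # Application side: prefetch chunks into the buffer.
    for i in range(TOTAL_CHUNKS):
        buffer.put(i)              # blocks when buffer is full; that's fine

def driver_interrupt():
    # Hardware side: must receive a chunk every "tick" or a glitch occurs.
    global underruns
    for _ in range(TOTAL_CHUNKS):
        try:
            buffer.get(timeout=0.01)   # hardware deadline
        except queue.Empty:
            underruns += 1             # audible glitch (buffer underrun)
        time.sleep(0.001)              # fixed hardware output rate

t = threading.Thread(target=player)
t.start()
driver_interrupt()
t.join()
print("underruns:", underruns)  # 0 as long as the player keeps up
```

The point of the sketch: once the player keeps the buffer fed, any extra speed in the player is simply absorbed by the blocking `put`; it changes nothing at the output.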
The rest of what they say has nothing to do with the above factor. What is being described is classic timing jitter. If you are using a low-quality digital output directly from your computer, or its built-in DAC, then it likely has a fair amount of jitter. Sources of jitter are varied, and many of them may have nothing to do with what the app is or is not doing. Some of it may be due to CPU activity, and changing that might impact it. JPlay has no data at all showing that they are able to impact, or have impacted, this worst-case situation.
Here is the other problem: the audience for this player is not the guy who is running the motherboard S/PDIF output or a DAC inside the computer. Instead, it is someone who is likely using a high-quality interface that is not dependent on the quality of the clock coming out of the PC. Assuming so, the situation gets hard, really hard, for JPLAY. That is because a proper async interface does not use any clock out of the PC. It has its own system, similar to what I described above, where the DAC calls the shots as far as when it needs the next chunk of data. In that sense, it has isolated itself from the PC clock altogether.
"While some audiophiles will manually optimize the Windows OS on their servers, JPLAY adds to that process by increasing the computer's timer resolution accuracy to the maximum possible."
I don't know what timer they think they have changed. But there is no facility whatsoever to change the S/PDIF clock accuracy under program control, nor that of USB, etc. All of those run at hardware-governed rates, and no software is able to change them.
Maybe they have mucked with the operating system timer. Windows allows the default 15 msec timer to be changed to smaller values. But such changes will not at all control the hardware clocks, per the above. The change will also increase the CPU load and could have an unpredictable impact on timing jitter should the audio interface be on board. A 15 msec timer, should it create jitter, will do so at about 70 Hz (1/0.015), which is perceptually masked. Increase the resolution to 1 msec and the jitter will now be at 1000 Hz, and you have just made it more audible by pushing it out of the shadow of the music signal.
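The arithmetic behind those two numbers is just the reciprocal of the timer period. A trivial sketch (the function name is mine):

```python
# Frequency at which a periodic OS timer could inject jitter:
# the reciprocal of its period.
def jitter_frequency_hz(timer_period_ms: float) -> float:
    return 1.0 / (timer_period_ms / 1000.0)

print(jitter_frequency_hz(15))  # ~66.7 Hz: default Windows timer
print(jitter_frequency_hz(1))   # 1000 Hz: "high resolution" timer
```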
And what happened to making things simpler? More timer interrupts increase the load, they do not reduce it.
"JPLAY uses special ultra low-latency RAM to store music samples and massively pre-queues them so the sound driver can access them faster."
As I explained, once you keep up with the hardware, it does not matter how much faster you make it. The hardware, in this case the audio driver and audio clock, drives the speed requirements. And by definition, all players put their audio samples in high-speed system memory, which is thousands of times faster than what the audio DAC needs.
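For scale, here is the back-of-the-envelope comparison. The RAM bandwidth figure is an assumed ballpark for illustration, not a measurement:

```python
# CD audio data rate vs. a rough system-memory bandwidth figure.
cd_rate_bytes = 44_100 * 2 * 2          # 44.1 kHz, stereo, 16-bit samples
ram_rate_bytes = 10 * 1024**3           # ~10 GB/s, assumed ballpark for DDR RAM

print(cd_rate_bytes)                    # 176400 bytes/s
print(ram_rate_bytes / cd_rate_bytes)   # memory is tens of thousands of times faster
```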
"It’s important to note that the corporation accusing JPLAY of being a 'hoax' does not, in fact, deny JPLAY is performing this massive "audiophile re-programming" of Windows. No—Instead, this corporation denies that, despite JPLAY’s actions, JPLAY has any impact on sound quality whatsoever. Their "proof" is that JPLAY does not have any 'technical measurements' to demonstrate an improvement in sound quality.
Sure, we don’t have all the 'technical measurements' we would like: The simple fact is, while there are plenty of DAC measurements regarding jitter, when it comes to using a computer as a digital transport, there simply aren’t any! Nobody has quite figured out how to measure ‘computer jitter’ (or 'computer noise'), which others propose is the "real" cause of the sonic differences in software and/or hardware."
So they did all of these optimizations yet had no way of measuring if any of them did any good? How do they know they did not make things worse?
And such measurements readily exist. I published my article in WSR magazine a couple of months ago, and there are a ton of measurements around online, including the specific ones that showed JPlay not changing noise or jitter. Those tests were trivial to put together and run: just a PC with a sound card and free software to analyze what the other PC is outputting. We are readily able to measure whether the PC changes impact what comes out of the DAC, which is ultimately what we need to see.
If he is talking about measuring jitter as it comes out of the PC (e.g. on USB) that can also be easily done with a digital scope. There is no mystery there. As tests go, it could not be any easier to set up and run.
"While we’re certain technical measurements will come in time, computer audio is still a new field—and while we're certainly looking forward to working with anyone advancing the state of art, we do believe we have the best measurement equipment on the planet: the ears of thousands of passionate and discerning audiophiles who have tested dozens of JPLAY versions by ear alone…"
So just say this. Don't try to claim technical points that are not founded or backed by any objective data.
Amir, it seems the deck is stacked against JPLAY. With JRiver AND Foobar joining forces against it, as well as some pretty well-known computer audio gurus and personalities all publishing null findings, including a big article on computeraudiophile.com.
But how do you account for the group that insists it improves Jriver, feverishly defending it?
Also Steve Plaskin on Audiostream definitively said it improved Jriver.
http://www.audiostream.com/content/road-jplay
I'm stumped!
P.S., I also thought their response and explanation of what the program does was lightweight at best, but
to be fair there could be a language issue as well.
Cars are another realm where such tweaks abound... Did an independent lab test the cars before and after Shakti? Allow me to hold on to my doubts...
You must also admit that their open letter does not inspire confidence ...
"Peter, in a blind test, with someone sitting by your computer out of view, could you reliably tell..."

When I close processes I hear a difference; when I turn off the screen, I hear differences x100. I believe there may be subtle improvements with process shutdowns, but I believe more in LCD power consumption/LCD vibration issues. Try shutting your screen down, or your amp's LCD, and then post your results.
Last year I was reviewing a couple of DACs and tried JRiver with and without JPlay... I couldn't tell a difference.
The DACs I tried it on were the PB MPS-5, MSB, and the one from JKenny.
My reply, which you have fleshed out in great detail:

"Jplay makes absolutely no difference with respect to *data* being output. If the bits physically change when Jplay sends them out, then it is flat-out broken software. PCM bits better get out the same either way. The only thing that Jplay can impact to improve/change fidelity is the data timing or noise from the PC. As I explained, it is outside of their means to impact those factors in a multi-tasking OS. Here is a simple test to prove that. Install Jplay and then run Process Monitor. Tell it to show you processes from all users. You will see dozens of background processes. Look at your hard disk light. Does it blink away seemingly randomly even if you are not doing anything? That is because of background activity."
When a request is made of the system the CPU requires instructions for executing that request. The CPU works many times faster than system RAM, so to cut down on delays, L1 cache has bits of data at the ready that it anticipates will be needed. L1 cache is very small, which allows it to be very fast. If the instructions aren’t present in L1 cache, the CPU checks L2, a slightly larger pool of cache, with a little longer latency. With each cache miss it looks to the next level of cache. L3 cache can be far larger than L1 and L2, and even though it’s also slower, it’s still a lot faster than fetching from RAM.
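That lookup order can be modeled as a toy function. The cycle counts below are illustrative, not real hardware latencies, and the names are mine:

```python
# Toy model of the cache-lookup order described above: check L1, then L2,
# then L3, then fall back to RAM. Each miss adds that level's lookup cost.
LATENCY = {"L1": 4, "L2": 12, "L3": 40, "RAM": 200}  # made-up cycle counts

def fetch(addr, l1, l2, l3):
    """Return (where the data was found, cumulative lookup cost)."""
    cost = 0
    for name, cache in (("L1", l1), ("L2", l2), ("L3", l3)):
        cost += LATENCY[name]
        if addr in cache:
            return name, cost
    return "RAM", cost + LATENCY["RAM"]

print(fetch(0x10, {0x10}, set(), set()))  # ('L1', 4): hit in L1, cheapest
print(fetch(0x99, set(), set(), set()))   # ('RAM', 256): miss everywhere
```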
Nothing is beneficial to sound unless you can show cause and effect. You can hope, wish or pray that it is. You have to connect the dots from the pipeline all the way to the DAC clock, and be able to show that objectively using measurements. If it is some other benefit you can't quantify, then any reasoning you give is neither here nor there. It is all hope and wish.

Brilliant exposition of exactly the same "points of contention" I had with Amir previously, when he wrote:
My reply which you have fleshed out in great detail:
Of course I know about multi-tasking OSes, but you over-simplify. As you know, multi-tasking is about splitting the CPU time across the currently "active" tasks according to scheduler algorithms. How this is done, and whether tasks are privileged so that they can't be interrupted, can significantly affect their performance. Pre-emptive multi-tasking OSes can interrupt non-privileged tasks without their co-operation, for resumption of that task at a later time. So if you are telling me that the timing of time-sensitive processes, such as audio playback, can't be brought under better control by app developers who know deep-level OS processes and can privilege the pipeline, then I have to disagree with you. Could a better timed/controlled pipeline for audio signal delivery to the devices be beneficial to the sound? I expect the answer is yes.
Just one question: When you are talking about low-latency RAM you are talking about the CPU's cache RAM, right - the one right on the CPU die itself?
"Nothing is beneficial to sound unless you can show cause and effect."

Or you listen!!
"You can hope, wish or pray that it is. You have to connect the dots from the pipeline all the way to the DAC clock. And be able to show that objectively using measurements. If it is some other benefit you can't quantify, then any reasoning you give is neither here, nor there. It is all hope and wish."

But I don't need measurements to tell me what I hear.
"Or you listen!! But I don't need measurements to tell me what I hear."

Then dispense with the technical talk of what the software does and try to reason why it improves things. Because if that logic is correct, it is measurable. We are discussing a specific technical claim for which there are measurements: noise and jitter. If your claim is that noise and jitter are not changed yet you think it still sounds better, that's cool. Confirm that is what you are saying and we are done discussing technical points.