Question about PLL and digital filters

aljordan

Well-Known Member
Jan 13, 2012
20
1
908
Southern Maine, USA
www.alanjordan.org
Hello,

One of the DACs I own includes a DSP processor implemented in a manner that allows you to set or bypass various functions. The specifics on the DSP module can be seen here: http://www.audio-gd.com/Pro/dac/DSP1/DSP1ENspecs.htm

I would like to gain some insight into the pros and cons of the features.

First, one can bypass the PLL. It is my understanding that a PLL allows a clock to track the source stream so that it won't lose lock due to two separate clocks being slightly different. I have been feeding the DAC via a Halide Bridge asynch USB converter, or the SPDIF output of a Prism Sound Orpheus. When I bypass the PLL, I think the DAC sounds a little bit more natural. I haven't had any lock problems bypassing the PLL. My question is, if I don't lose lock, are there any other important performance functions of a PLL? Also, are there disadvantages to a PLL that might account for the slight audible change I am hearing when I bypass it?

Second, the DSP unit allows filter stop band attenuation settings of 130, 90, or 50 dB. My understanding is that a steep filter keeps alias images out of the audible frequency band, but some people think gentle filters sound better. I can understand why gentle analog filters would sound better, but I am not sure why gentle digital filters might sound better. I have been playing with upsampling on the PC side and sending a higher sample rate to the DAC while choosing a filter with more gentle stop band attenuation. It does sound different. Is there any need to implement a steep filter if the sample rate is at 88.2 kHz or above?

Any insight into these technologies will be appreciated.

Thanks,
Alan
 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
3,952
312
1,670
Monument, CO
1. I cannot tell how they are using the PLL. A PLL is typically used as part of a clock and data recovery (CDR) block that recovers the clock signal embedded in the incoming serial stream, using it to drive the rest of the logic operating on the data. If that is the case here, then bypassing it would require a separate clock from someplace that stays in synch with the data. I am not sure what they are doing when they "disable" the PLL. Without knowing more I would not care to speculate on the various pros and cons of their PLLs. PLL design involves many trade-offs in design and implementation.

2. Steeper filters, analog or digital, tend to have a different time-domain response than more gentle filters, often with more ringing. Look at some Stereophile reviews of DACs or CD/BD players for examples. The output image amplitude is a function of the bandwidth of your system, so there is not a simple answer, but I would guess any of those attenuation settings would be fine at an 88 kHz clock, assuming there is roll-off elsewhere and the rest of the electronics are stable with ultrasonic signals.
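
To put some numbers on that trade-off, here is a small Python sketch (my own illustration using generic Kaiser-window low-pass filters, not the DSP-1's actual coefficients) that designs filters at roughly the three stop-band attenuations the module offers and measures how long each impulse response rings:

```python
import numpy as np
from scipy.signal import kaiserord, firwin

fs = 88200.0                    # sample rate (Hz)
cutoff = 20000.0                # pass-band edge (Hz)
width = 4100.0                  # transition width (Hz) - an arbitrary choice

for atten_db in (50, 90, 130):  # the DSP-1's selectable stop-band attenuations
    numtaps, beta = kaiserord(atten_db, width / (0.5 * fs))
    taps = firwin(numtaps, cutoff, window=('kaiser', beta), fs=fs)
    # "Ringing" here = how long the impulse response stays within 60 dB of its peak
    env_db = 20 * np.log10(np.abs(taps) / np.abs(taps).max() + 1e-12)
    above = np.flatnonzero(env_db > -60.0)
    print(f"{atten_db:3d} dB: {numtaps:4d} taps, "
          f"rings ~{(above[-1] - above[0]) / fs * 1e3:.2f} ms")
```

More stop-band attenuation costs more taps and a longer-ringing impulse response, which is exactly the time-domain difference described above.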
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
I will speculate :). The PLL is a feedback loop to keep the DAC clock in sync with the incoming data stream. Assuming you somehow determine the incoming sampling rate, you can indeed "freewheel" without the PLL. What that means, however, is that you may either fall behind or get too far ahead of the input stream. As an example, a 44.1 kHz stream may really be running at 44.09 or 44.11 kHz. DACs have internal buffers that hold the incoming data temporarily. If you read enough of the data before you start, then you may be able to get through the song without falling behind. Likewise, if the buffer is big enough, you can keep playing without the faster source overflowing it. There are DACs that actually work this way (Naim and Mark Levinson are some examples). The trick is resetting the buffer somehow between tracks, as otherwise there is no good solution to long-term drift. You could leave your DAC on for a couple of days and have it be way behind or ahead.

Turning off the PLL will also kill any chance of staying in sync with video, should that be your application.

The positive side is what you may have observed: by disabling the PLL, you have decoupled yourself from the incoming clock and hence its jitter. The only jitter then is from the local clock, which may be much better than the incoming one.
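
A quick back-of-the-envelope calculation (my numbers, purely to illustrate the freewheeling scenario described above) shows how big that drift gets:

```python
# Freewheeling DAC: hypothetical source really running at 44.11 kHz,
# local clock at exactly 44.10 kHz.
source_rate = 44110.0            # samples/s arriving
local_rate = 44100.0             # samples/s consumed by the local clock

song_s = 5 * 60                  # a five-minute track
surplus = (source_rate - local_rate) * song_s
print(f"Excess samples after one song: {surplus:.0f}")  # ~3000 (~68 ms of audio)

# Left playing for two days with no buffer reset:
drift_s = (source_rate - local_rate) / local_rate * (2 * 24 * 3600)
print(f"Drift after two days: {drift_s:.0f} s")         # ~39 s ahead or behind
```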
 

opus111

Banned
Feb 10, 2012
1,286
3
0
Hangzhou, China
I'll speculate too about the PLL. With the PLL disabled they might be implementing 'asynchronous reclocking' - relying on an internal crystal and dropping or adding samples when required. It's unclear if the PLL being used is one resident on the FPGA - if so, it's bound to suck at audio duties.

On filters - the plots shown look to be half-band types because this makes better use of the multipliers on the FPGA. None of them will keep all aliasing at bay because of this lack of serious attenuation at Nyquist (0.5X sample rate). The data lacks the impulse response plot, but I'd guess because they're half-band they're also linear phase. Linear phase has pre-ringing and post-ringing around the corner frequency. The gentler filters usually have reduced ringing.
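
A generic half-band low-pass (a scipy sketch, not the DSP-1's actual coefficients) makes the point: the response is only ~6 dB down at the original Nyquist frequency, and half the taps are zero, which is what saves the FPGA multipliers:

```python
import numpy as np
from scipy.signal import firwin, freqz

# 2x-oversampling half-band filter: cutoff at 1/4 of the new sample rate,
# i.e. exactly at the ORIGINAL Nyquist frequency.
taps = firwin(99, 0.5)                   # 0.5 = half of the (new) Nyquist
w, h = freqz(taps, worN=4096)
k = np.argmin(np.abs(w - np.pi / 2))     # original Nyquist = 1/4 of new rate
print(f"Response at original Nyquist: {20 * np.log10(abs(h[k])):.1f} dB")  # ~ -6 dB

# Every other tap (except the center one) is zero - the multiplier saving:
side = np.delete(taps[1::2], 24)         # drop the center tap (index 49)
print(f"Largest 'zero' tap magnitude: {np.abs(side).max():.1e}")
```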

If the sample rate of your source material is 88k2 I'd say implement no digital filter at all - run in NOS mode (which is selectable). That's how I listen to my Redbook material. Probably best to use NOS mode anyway and then you can choose the appropriate filter on your PC - the options available in the hardware are rather limited.
 

DACMan

New Member
Sep 30, 2012
48
0
0
near Nashville, Tennessee
Alan,

All digital audio data streams include a clock "embedded" in the data stream. The amount of jitter (timing variation) present in those clocks varies widely (non-asynch USB is usually awful while S/PDIF ranges from pretty good to bad). When you use a good asynch/USB converter (like your Halide Bridge), it re-clocks the USB signal and sends it back out as an S/PDIF signal, presumably with a pretty good quality clock. No clock is perfect, however, and the S/PDIF transmitter, receiver, and cable will introduce SOME jitter along the way. When you use your DAC with the PLL turned off, it is simply using the incoming clock already present with the data.

The purpose of the PLL is to filter out jitter (you can think of it as a noise filter - but for the clock - and in the time domain). The PLL filters the incoming clock and makes a "new" one, which is based on it. (The whole "lock" thing refers to the fact that the new internal clock must follow the incoming clock overall because each "clock tick" must, in the end, have a word of data to go with it. If the clocks didn't track, you could get "gaps" or "overlaps" in the data - which would be very bad.) DSP designs can do something similar in other ways since they can buffer data. You shouldn't EVER have "lock problems" without the PLL because the DAC is simply using the incoming clock that came with the data - good, bad, or whatever rate it runs at (within reason).
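
The "time-domain noise filter" idea can be caricatured in a few lines of Python (a toy first-order loop of my own, not any particular DAC's PLL): the loop tracks the incoming clock phase, and its small gain acts as a low-pass that scrubs off most of the incoming jitter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ideal = np.arange(n, dtype=float)              # perfect clock phase, in unit intervals (UI)
phase_in = ideal + rng.normal(0.0, 0.01, n)    # incoming clock with white phase jitter

# First-order PLL: output the local phase, then advance it one UI per cycle,
# nudged toward the input. Small gain = narrow loop bandwidth = strong jitter
# rejection, at the cost of slower locking.
gain = 0.02
acc, phase_out = 0.0, np.empty(n)
for i in range(n):
    phase_out[i] = acc
    acc += 1.0 + gain * (phase_in[i] - acc)

print(f"input jitter : {np.std(phase_in - ideal):.4f} UI rms")
print(f"output jitter: {np.std((phase_out - ideal)[n//2:]):.4f} UI rms")
```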

So, in short, with the PLL turned off, your Halide Bridge is re-clocking the USB data, converting it to S/PDIF, and sending it on to the DAC - presumably with a pretty clean clock. The S/PDIF circuitry and the cable will, however, add some small amount of jitter. The DAC is simply using that clock directly to clock the D/A conversion. If you turn the PLL ON instead, it will do its best to filter out any jitter and make a new (and equivalent) clock without the jitter, and the DAC will do the conversion using the new clock generated by the PLL. Unfortunately, nothing is perfect; the PLL itself will generate some small amount of jitter. (Think of the anti-shake on a camera, which you are told to turn off when you use a tripod.) Depending on the actual amount of jitter arriving at the input of the DAC circuitry, and how good the PLL is, the new clock generated by the PLL may actually be worse than the one that came in (remember that the Halide Bridge is supposed to be sending a pretty good clock to begin with).

Since the Halide Bridge is already filtering out any jitter from your computer, when you turn the PLL on and off you are really choosing between getting the jitter of the Halide Bridge plus the wiring and circuitry in between, or trading that for the jitter that the PLL generates itself. (If you didn't have the Halide Bridge, the jitter from a non-asynch USB would be pretty bad, so the PLL would almost surely be an improvement). The Halide Bridge, however, is a much better solution to eliminating THAT jitter than the PLL. Just to complicate matters a bit more, jitter comes in different "flavors" (rates and spectra), so even equal amounts of jitter MIGHT sound slightly different.

The short answer to your other question is that not even digital filters are perfect. A steeper filter will do a better job of filtering out-of-band components that would otherwise cause imaging products that might be audible, BUT a steeper filter is more likely to cause other slight "anomalies" like phase shift or frequency response ripples in the audio band. Thus the tradeoff. In theory, none of the aliasing products should be audible, and what you're hearing is probably just the slight variations in frequency response caused by the filters (gradual filters usually roll off by 1 or 2 dB at 20 kHz with a 44k sample rate).

Some of the filters also offer various other "neat tricks". One of the new ones is what they call "an apodizing filter" - which is really a general term but, in this context, refers to a neat digital trick. If you look at the image of what comes out of a DAC when you put in an impulse (a pulse) you will see ringing, both after and BEFORE the pulse (time really IS an illusion :)). An "apodizing filter" uses some digital trickery to shift all the ringing AFTER the pulse. Apparently, due to the masking effect, this is less audible. Such filters tend to roll off the high end a bit, but many people say they sound "more natural". (You get that option on most Audio-GD DACs lately.)
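
The pre-ringing-to-post-ringing shift can be sketched with scipy (my stand-in demonstration using a minimum-phase conversion, which is one way such "digital trickery" can work - not Audio-GD's actual filter):

```python
import numpy as np
from scipy.signal import firwin, minimum_phase

lin = firwin(255, 0.45)                        # linear-phase low-pass: symmetric ringing
minp = minimum_phase(lin, method='hilbert')    # ~same magnitude response, ringing pushed after the peak

def pre_post(h):
    # Energy of the impulse response before vs after its main tap
    peak = int(np.argmax(np.abs(h)))
    return np.sum(h[:peak] ** 2), np.sum(h[peak + 1:] ** 2)

for name, h in (("linear phase ", lin), ("minimum phase", minp)):
    pre, post = pre_post(h)
    print(f"{name}: pre-ring energy {pre:.5f}, post-ring energy {post:.5f}")
```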

In practice, the various filters absolutely do sound different, so pick the one you like best.

Incidentally, even a "non-steep" filter will probably do well enough to not actually hurt anything.

A word to the wise, however: NEVER play an 88k file that came from a ripped SACD on a DAC - like some DIY ones - with no filtering whatsoever. The SACD recording process shifts the quantization noise of the digitizing process into the ultrasonic range, so SACDs have a massive amount of noise at very high frequencies (we're talking destructive amounts of HF noise at up to 80 kHz). Playing the results of a badly ripped SACD with no filtering could well damage tweeters or even an amplifier. (Of course, making a DAC without any filtering is foolish, since the filter is necessary to accurately reproduce the original audio signal :)

DACMan
 

DACMan

New Member
Sep 30, 2012
48
0
0
near Nashville, Tennessee
ALL properly designed DACs have an analog "reconstruction" filter at the output (otherwise the output would look like a staircase on a 'scope). The reconstruction filter is required (it is, in a sense, "symmetrical" with the filter used when the original A-D conversion occurred). Without it, the output of the DAC will bear little resemblance to the original signal, and may quite possibly burn tweeters (and even some amps) as well.

Oversampling, by "pushing the sample frequency up", lets you get similar results with a gentler analog filter.... Oversampling in the DAC (in the digital filter) is rather different from UPSAMPLING before the DAC.
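
Rough numbers (mine, purely for illustration) show why the higher rate relaxes the analog filter, using the standard asymptotic estimate that a Butterworth filter falls about 20n dB per decade past its corner:

```python
import math

def butterworth_order(atten_db, f_pass, f_stop):
    # Minimum order so the asymptotic roll-off (20*n dB/decade past f_pass)
    # reaches atten_db at f_stop. A coarse textbook estimate.
    return math.ceil(atten_db / (20 * math.log10(f_stop / f_pass)))

# Straight 44.1k conversion: ~90 dB needed between 20 kHz and the 24.1 kHz images
print(butterworth_order(90, 20e3, 24.1e3))    # ~56th order: hopeless in analog
# 8x oversampling (352.8k): first images start near 332.8 kHz
print(butterworth_order(90, 20e3, 332.8e3))   # ~4th order: easy
```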

Several of Audio-GD's products do include their DSP "functionality", which does upsampling before the DAC. They're doing an ASRC (asynchronous sample rate converter) in a DSP.... At least in principle, an ASRC should do a "clean" move from one sample rate to another without altering the response characteristics of the audio. What it does is generate a new audio stream, at a different sample rate, which is intended to be identical in "content" to the original.
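
For a feel of what "identical in content at a new rate" means, here is a fixed-ratio sketch using scipy's polyphase resampler (a real hardware ASRC tracks the rate ratio continuously; this is just the numerical core of the idea):

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44100, 96000             # 44.1k -> 96k is exactly 320:147
x = np.sin(2 * np.pi * 1000 * np.arange(fs_in) / fs_in)   # 1 kHz tone, 1 second

y = resample_poly(x, up=320, down=147)   # new stream at 96 kHz
ref = np.sin(2 * np.pi * 1000 * np.arange(len(y)) / fs_out)

# Compare away from the edges (the internal filter has start-up transients)
err = np.abs(y[4800:-4800] - ref[4800:-4800]).max()
print(f"{len(y)} output samples, peak error vs ideal tone: {err:.2e}")
```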

It also MAY filter out jitter as part of the conversion process (that depends on the design). (How well Audio-GD's implementation does this is unknown to me. The AD1896 ASRC used by Benchmark and others does a very good job of filtering jitter.)

If Audio-GD's implementation works well at filtering jitter, then it would be expected to obviate, or at least minimize, the benefits of having a low-jitter input to begin with.... which should minimize the benefit of something like a Halide Bridge when you have the DSP upsampling turned on. Since I haven't tried one, I can't say for sure.

At the company where I work (Emotiva Audio), we've been developing two new DACs with ASRCs... What we've found is that, although ASRCs do remove jitter really well, and so clearly make content with high jitter sound better, they sometimes have a slightly detrimental effect on sources that are low-jitter to begin with.

That would suggest that enabling UPsampling is indeed a matter of personal preference, and of your particular source components and material.

DACMan


^ I think they say zero group delay distortion so linear phase almost certainly.

Is there an analog output filter to suppress the image?
 

DACMan

New Member
Sep 30, 2012
48
0
0
near Nashville, Tennessee
Your interpretation is actually slightly off-base. You're right in what the PLL does, but wrong in that most simple DACs don't bother to generate any clock whatsoever except the one that comes with the data. A buffer is not at all necessary for a DAC to operate.

The incoming data (whether USB or S/PDIF) has a clock embedded in it. Most "simple" DACs will just run using that embedded clock (although some might refuse to operate at all if it's a really odd rate). The problem is that, if the clock rate drifts, then the audio itself may drift (like a turntable with a bad motor). The PLL is just a simple, "old school" way to even out short-term speed fluctuations (jitter and fast drift). It lets you create your own "internal" low-jitter clock and have it track the original. It's really "a time-domain filtering of the original clock". It involves no conversions, no resampling, no calculations, and is, in fact, entirely "analog" in nature. It's simple, and it works reasonably well, although it has only moderately good jitter performance. It's the way pretty much ALL DACs operated 20 years ago.

Many newer DACs have fancier ways of controlling jitter.... The Sabres have their own digital re-clocking, which works more or less like an ASRC - only they don't change the sample rate. Others have DSP-based systems that include buffers, and many have buffered (asynch) USB inputs. Of course, we still have plain old ASRCs (which work very well). They may use a PLL of some sort to "grab the signal" at the input (although I doubt it), but it would be incidental to the actual workings of their clock management.

What we're talking about here is using a "simple PLL" to create a "new" clock locked onto the original without all of the complications and conversions involved in the other methods.

I'm afraid, however, that you're entirely wrong about synching with video. Using buffers or ASRCs WILL delay the audio (maybe enough to bump it off sync). Neither using a PLL, nor NOT using one, however, will delay it at all (as long as nothing else is going on). (Audio-Gd's DSP jitter reduction may or may not produce an appreciable delay - I would ask them.)


 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
3,952
312
1,670
Monument, CO
A few of the NOS DACs claim they have no analog image filter at the output. There is certainly one there from parasitics if nothing else, but they claim "none". I personally would be nervous without one that has a reasonable cut-off frequency.
 

opus111

Banned
Feb 10, 2012
1,286
3
0
Hangzhou, China
Quite a few of the NOS DACs don't have the anti-imaging filter, it's true - they'd not be considered 'properly designed' by DACMan, but people still love them for their sound (myself included).

The issue with NOS is there's really no space in the frequency spectrum for implementing an analog reconstruction filter - unless one wants to delve into very complex elliptic LC or op-amp based filters. These are very hard to get sounding good, so the designers don't bother. I myself wouldn't want to put out such an incontinent design, so I've another way to fix up this problem - implement a transversal filter at the output. This has the response of a digital filter but needs no DSP, as it uses an array of DACs as multiply elements and sums their currents together to create the 'accumulate'. In listening vs vanilla NOS, the top end is indeed more delicate, spacious and airy, which I put down to lower intermod products generated within the tweeter (and perhaps the tweeter amp).
 

amirm

Banned
Apr 2, 2010
15,813
38
0
Seattle, WA
Welcome to WBF, DACMan. We very much appreciate industry participation. Please put your affiliation in your signature so that members know where you work. I would also like to encourage you to use your real name if you can.
Your interpretation is actually slightly off-base. You're right in what the PLL does, but wrong in that most simple DACs don't bother to generate any clock whatsoever except the one that comes with the data. A buffer is not at all necessary for a DAC to operate.
I didn't say it was. I was explaining a specific scenario where a local master clock can be used asynchronously from the input. If done that way, then a buffer is needed to handle overrun and underrun conditions: data is prefetched in advance of playing to guard against underrun, and the buffer's remaining headroom absorbs the excess if overrun occurs instead. This is restarted on song breaks. It is certainly an unusual solution to the problem, but I put it out there as a possible way this device could work.

The incoming data (whether USB or S/PDIF) has a clock embedded in it. Most "simple" DACs will just run using that embedded clock (although some might refuse to operate at all if it's a really odd rate). The problem is that, if the clock rate drifts, then the audio itself may drift (like a turntable with a bad motor). The PLL is just a simple, "old school" way to even out short-term speed fluctuations (jitter and fast drift). It lets you create your own "internal" low-jitter clock and have it track the original. It's really "a time-domain filtering of the original clock". It involves no conversions, no resampling, no calculations, and is, in fact, entirely "analog" in nature. It's simple, and it works reasonably well, although it has only moderately good jitter performance. It's the way pretty much ALL DACs operated 20 years ago.
You can get excellent performance out of cascading two PLLs. That way one of them can have a narrower bandwidth than the other for jitter suppression, while the other enables fast locking to the input clock. It is a more complicated design, so it is not done as often.
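
Reusing the toy first-order loop from earlier in the thread, the cascade idea looks like this (my caricature; real designs differ):

```python
import numpy as np

def pll(phase_in, gain):
    # Toy first-order loop: output the local phase, then advance it one unit
    # interval per cycle, steered toward the input phase.
    acc, out = 0.0, np.empty(len(phase_in))
    for i, p in enumerate(phase_in):
        out[i] = acc
        acc += 1.0 + gain * (p - acc)
    return out

rng = np.random.default_rng(1)
n = 50_000
ideal = np.arange(n, dtype=float)
noisy = ideal + rng.normal(0.0, 0.01, n)

wide = pll(noisy, gain=0.2)       # wide loop: locks fast, passes more jitter
narrow = pll(wide, gain=0.005)    # narrow loop: scrubs the residual jitter
for name, sig in (("input  ", noisy), ("wide   ", wide), ("cascade", narrow)):
    print(f"{name}: {np.std((sig - ideal)[n // 2:]):.5f} UI rms")
```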

I'm afraid, however, that you're entirely wrong about synching with video. Using buffers or ASRCs WILL delay the audio (maybe enough to bump it off sync). Neither using a PLL, nor NOT using one, however, will delay it at all (as long as nothing else is going on). (Audio-Gd's DSP jitter reduction may or may not produce an appreciable delay - I would ask them.)
It is not an issue of delay but drift. If you make the sink (DAC) the clock master, then you can get out of sync if the video clock remains in the source. The two will drift away from each other and no amount of buffering will fix it. Consumer A/V reproduction calls for the source to be in charge of the clock for both audio and video. This assumption is used upstream when content is authored. To wit, the mux tool is free to slow down or speed up the number of audio samples/sec to better match it to the video clock (which is always the master) as it brings the two streams together. As an example, the 48 kHz clock may indeed run at 48,001 samples/sec. If you convert audio to an async interface and run using your own clock, you will drift away from the video clock. For audio-only applications we don't care, of course, so async is the way to go. But for video, we had better clean up that clock and use it.
 

DACMan

New Member
Sep 30, 2012
48
0
0
near Nashville, Tennessee
I didn't say it was. I was explaining a specific scenario where a local master clock can be used asynchronously from the input. If done that way, then a buffer is needed to handle overrun and underrun conditions: data is prefetched in advance of playing to guard against underrun, and the buffer's remaining headroom absorbs the excess if overrun occurs instead. This is restarted on song breaks. It is certainly an unusual solution to the problem, but I put it out there as a possible way this device could work.

I sort of lost the original context of what I said there.... I agree 100% with what you said, and it seems like an obvious (and not so unusual) solution to me. I've heard two arguments against doing it that way, neither of which seems compelling to me in an audiophile environment, although they might make sense in others. The first is that any buffer large enough to help would also be large enough to introduce a significant delay while it fills (remember that interfaces like S/PDIF are "metered", so you can't prefetch; you'd have to wait several seconds for some data to accumulate whenever the track started playing). I can see this being an issue when you're synching with video, or trying to do processor loops on a studio console, but not for audio applications. The other argument is that, again due to being unable to prefetch, your buffer would *eventually* overrun or underrun, assuming there were no breaks during which you could re-synch it. Honestly, I don't think either of these is a compelling argument in an audiophile setting, as I don't see a problem with delaying the start of each song a fraction of a second, nor with having to pause every several hours (if there happened to be no song breaks in all that time) to reset the buffer.

You can get excellent performance out of cascading two PLLs. That way one of them can have a narrower bandwidth than the other for jitter suppression, while the other enables fast locking to the input clock. It is a more complicated design, so it is not done as often.

As for complication: PLL chips cost a few bucks, so I don't see that as a compelling excuse; however, I still prefer an actual buffer - it's easier.

ASRCs have an even more sophisticated, but similar, mechanism. They use a DSP-based "digital PLL" which can actually adjust its locking mechanism... so it can be set to seek quickly until it gets close, then seek more slowly when it's within a certain range. In theory, you could even, for example, change the performance based on what the pattern of drift looks like afterwards. The downside is that ALL PLLs introduce "noise" of their own (according to the design folks, it is unavoidable, with "fast lockers" making more noise - which is why the digital ones that adjust their rate are cool - they actually make less noise when their rate is slower - but the actual design stuff there is over my head). If we had control over the data entirely, the obvious answer would be to load the entire song into a buffer, then play it using a local clock. There have been a VERY few players that worked this way (one that's been around for a while is called "the Elephant Player"; it's very expensive, sounds cool, and I don't know if they actually sell any :))
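
The gear-shifting behavior is easy to mimic in the same toy model used earlier (my caricature of the idea, not any real ASRC's algorithm): run a high loop gain while far from lock, then drop to a low, quiet gain once the error is small:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
ideal = 500.0 + np.arange(n)                    # start 500 unit intervals away from lock
phase_in = ideal + rng.normal(0.0, 0.01, n)

acc, out = 0.0, np.empty(n)
for i in range(n):
    out[i] = acc
    err = phase_in[i] - acc
    gain = 0.3 if abs(err) > 1.0 else 0.005     # fast acquisition / quiet tracking
    acc += 1.0 + gain * err

locked = int(np.argmax(np.abs(phase_in - out) < 1.0))
print(f"within 1 UI after ~{locked} cycles; "
      f"tracking jitter {np.std((out - ideal)[n // 2:]):.5f} UI rms")
```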

It is not an issue of delay but drift. If you make the sink (DAC) the clock master, then you can get out of sync if the video clock remains in the source. The two will drift away from each other and no amount of buffering will fix it. Consumer A/V reproduction calls for the source to be in charge of the clock for both audio and video. This assumption is used upstream when content is authored. To wit, the mux tool is free to slow down or speed up the number of audio samples/sec to better match it to the video clock (which is always the master) as it brings the two streams together. As an example, the 48 kHz clock may indeed run at 48,001 samples/sec. If you convert audio to an async interface and run using your own clock, you will drift away from the video clock. For audio-only applications we don't care, of course, so async is the way to go. But for video, we had better clean up that clock and use it.

My problem with all of that is simple: if the standard is silly, then change it. How about we load the video into a processor, demux the audio and video, buffer and process each separately, and then re-mux them and play them (or, well, obviously, if we're the player we don't need to reassemble the pieces). If you haven't noticed, however, home pre/pros are moving AWAY from video processing, with most new models not even doing resampling.... so it doesn't look like this level of processing, or simply buffering the video as data, is likely to become a reality in a pre/pro.
 

DACMan

New Member
Sep 30, 2012
48
0
0
near Nashville, Tennessee
Hey, I'm not the mathematician.... but the math boys seem quite sure that, unless you include a "correct" filter, what comes out will absolutely NOT be the same as what went in. Now, to me, that's the whole goal, so eliminating that filter is throwing the game. As someone else observed, however, there is always a reconstruction filter.... unless your speakers can play 44.1 kHz square waves; all you've done by "eliminating" it is to trade a properly calculated and designed filter for an impromptu one consisting of your cables and the rest of your signal chain. The problems there are numerous. For one, the high-frequency noise that results may make your amp misbehave and sound lousy, or it may cook your tweeters. For another, it is completely uncontrollable and unpredictable, so it may sound entirely different depending on what you connect it to. Your idea sounds more elegant, but still rather complicated to implement.

As far as I can tell, your summary of the problems with filters for NOS DACs sounds accurate... and is a perfect summary of the reasons why most people have moved to oversampling or delta-sigma DACs: BECAUSE they avoid all the nasty annoyances and limitations of NOS designs. You oversample, which is easy to do these days and doesn't change the actual content of the data at all, and the filter design becomes much easier. The general consensus among designers is pretty much "why spend all that effort trying to make a good NOS DAC and filter when it's easy to make an even better one by using oversampling".

There is no technical reason why an oversampling DAC shouldn't sound equal to a good NOS design - minus the filter issues. Even accepting that no DAC today is perfect, oversampling seems like a better starting point to me.... since it gets rid of at least SOME of the known problems and limitations. Since there is no inherent downside to oversampling, I see no compelling reason to take on even more obstacles by avoiding it.


 

DACMan

New Member
Sep 30, 2012
48
0
0
near Nashville, Tennessee
With "standard adaptive" USB mode, the computer controls the timing, with "occasional advice" from the receiving device. Basically the DAC gets to say "faster" or "slower" once a millisecond or so, which is slightly better than nothing. There is a better USB mode "asynch mode", which some DACs use for their USB inputs. Asynch mode is also used by many USB-to-SPDIF converter devices. In asynch mode the receiver does get to throttle the data, so it can implement a buffer and keep it under control. This is the preferred mode for USB.

Unfortunately, S/PDIF (coax and Toslink) is a "dumb" sender, so the sending device controls the data rate entirely. You would have to sort out the rate, make your own local clock that matches it, and then wait a short time for your buffer to fill up half-way before you could start running. And you would still risk overflows or underflows if the sending data rate changed or your matching clock wasn't perfect.
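
Some hypothetical numbers for that half-filled buffer (the 8192-sample FIFO and ±100 ppm clock mismatch are my assumptions, not any particular product's):

```python
fs = 44100.0                 # S/PDIF stream rate, samples/s
fifo = 8192                  # hypothetical FIFO depth, samples
headroom = fifo / 2          # slack in each direction after half-filling
ppm = 100e-6                 # worst-case local-vs-source clock mismatch

drift_per_s = fs * ppm       # ~4.4 samples/s of fill-level creep
print(f"startup delay: {headroom / fs * 1e3:.0f} ms to half-fill")
print(f"over/underflow in ~{headroom / drift_per_s / 60:.0f} min "
      f"(a tighter clock match stretches this to hours)")
```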

There is another cute thing that many modern DACs are doing lately. The impulse response of all filters shows some pre and post ringing (which has to be there actually). Some DACs are now offering what they call "an apodizing filter" - which is really a general term. What they're doing is, by way of some fancy math, shifting the pre ringing to post (so you get less pre ringing and more post ringing from an impulse - although the total remains the same). Many people claim that the result sounds more natural because the post ringing is better masked. A side effect of this sort of filter tends to be a slight droop at higher audio frequencies, and there are limitations in how and when it can be implemented.

Also, while you can change filters for the oversampling in your PC, if your DAC is converting at an 88k sample rate, there will still be residual energy coming out at that frequency. (If you look at the output on a scope, it will be a curve with steps every 1/88,000 of a second. If you don't include an analog filter at the output, then that energy WILL be sent out... along with the stuff (images) that comprises it. There is nothing you can do before the DAC that will make the steps go away, or the energy they contain. Even if you could get rid of all the image energy, the 88k carrier frequency will still be there.)
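
An FFT makes that residual image energy visible (a numpy sketch of my own; the 32x `hold` factor just emulates the analog staircase at finer time resolution):

```python
import numpy as np

fs, f0, hold = 88200, 1000, 32                 # 1 kHz tone at 88.2k, 32x time resolution
n = fs // 100                                  # 10 ms of samples
x = np.sin(2 * np.pi * f0 * np.arange(n) / fs)
stair = np.repeat(x, hold)                     # the unfiltered "staircase" output

spec = np.abs(np.fft.rfft(stair * np.hanning(len(stair))))
freqs = np.fft.rfftfreq(len(stair), d=1.0 / (fs * hold))
for target in (f0, fs - f0, fs + f0):          # the tone and its first image pair
    k = np.argmin(np.abs(freqs - target))
    print(f"{target / 1e3:6.1f} kHz: {20 * np.log10(spec[k] / spec.max()):6.1f} dBc")
```

The images around 88.2 kHz sit only a few tens of dB below the tone, which is exactly the ultrasonic energy an analog output filter is there to remove.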

Check out the Audio-gd website... they're a DAC vendor, but they have nice charts of the filter responses of their various DACs (including all the built-in filters in the Wolfson DAC).

 

opus111

Banned
Feb 10, 2012
1,286
3
0
Hangzhou, China
Hey, I'm not the mathematician.... but the math boys seem quite sure that, unless you include a "correct" filter, what comes out will absolutely NOT be the same as what went in. Now, to me, that's the whole goal, so eliminating that filter is throwing the game. As someone else observed, however, there is always a reconstruction filter.... unless your speakers can play 44.1 kHz square waves; all you've done by "eliminating" it is to trade a properly calculated and designed filter for an impromptu one consisting of your cables and the rest of your signal chain. The problems there are numerous. For one, the high-frequency noise that results may make your amp misbehave and sound lousy, or it may cook your tweeters. For another, it is completely uncontrollable and unpredictable, so it may sound entirely different depending on what you connect it to. Your idea sounds more elegant, but still rather complicated to implement.

Agreed with pretty much all of this except for the bit about cooking the tweeters - that's way overstated in practice. Sure, the tweeter energy will go up, but that's a long way from cooking them; they're inductive anyway, so it's hard to get much energy into them from normal music image frequencies. Tweeters generally cook from power-amp instability - ultrasonic oscillation at high level.

As far as I can tell, your summary of the problems with filters for NOS DACs sounds accurate... and is a perfect summary of the reasons why most people have moved to oversampling or delta-sigma DACs: BECAUSE they avoid all the nasty annoyances and limitations of NOS designs. You oversample, which is easy to do these days and doesn't change the actual content of the data at all, and the filter design becomes much easier. The general consensus among designers is pretty much "why spend all that effort trying to make a good NOS DAC and filter when it's easy to make an even better one by using oversampling".

Of course, the reason for spending all the time is the highly rewarding sound - that's what drives me to design with NOS DACs. I get no kicks from achieving vanishingly small THDs or huge SNRs. The only way that non-NOS DACs are better is in the measurements, which to me isn't quite the point of audio design.

There is no technical reason why an oversampling DAC shouldn't sound equal to a good NOS design - minus the filter issues.

I know of at least two.

Even accepting that no DAC today is perfect, oversampling seems like a better starting point to me.... since it gets rid of at least SOME of the known problems and limitations. Since there is no inherent downside to oversampling, I see no compelling reason to take on even more obstacles by avoiding it.

You probably should have written 'since I know of no inherent downside to oversampling' (assuming you're interested in writing truthfully), as my ears tell me it sounds not so compelling. Others over on SNA have found similarly. It's very rewarding, in terms of the SQ achieved, to design so as to avoid the issues introduced by oversampling.
 

opus111

Banned
Feb 10, 2012
1,286
3
0
Hangzhou, China
There is another cute thing that many modern DACs are doing lately. The impulse response of all filters shows some pre and post ringing (which has to be there actually).

I'm not clear here whether you're attributing the ringing to the filter. Not all filters need to have pre-ringing, but they do all have post-ringing. And even with no filter you'll still see the ringing of the input AAF, before the ADC - or, in the case of sigma-delta type converters, after the 1-bit (or low-bit) modulator. So yeah, ringing is unavoidable at the output due to the inherent bandlimiting of digital sampling. I know some NOS vendors advertise a lack of ringing by putting an invalid sequence of samples into their DAC; if you notice this trick being played, be certain to pay no attention, as it's not a real-world test.

Some DACs are now offering what they call "an apodizing filter" - which is really a general term. What they're doing is, by way of some fancy math, shifting the pre ringing to post (so you get less pre ringing and more post ringing from an impulse - although the total remains the same).

That's not my understanding of what an apodizing filter is doing. ISTM what they do is rather bandlimit to just under the normal 20 kHz bandwidth and hence cut out any ringing at frequencies above 20 kHz (which should be inaudible anyway to most of us). They need a relatively gentle slope, though, so as not to introduce their own ringing at the new, lower frequency, which is potentially more audible as it's lower.

Many people claim that the result sounds more natural because the post ringing is better masked.

I can't see why the post-ringing would be better masked. It's normally well masked anyway, as it comes after the signal.

A side effect of this sort of filter tends to be a slight droop at higher audio frequencies, and there are limitations in how and when it can be implemented.

The droop is a direct consequence of not wishing to introduce more ringing.

I'll just add that one of the advantages of using an apodizing filter is to chop out any aliasing products introduced by the half-band digital filters which are almost ubiquitous. These aliases often lie between 20 and 22 kHz, and one listener I've read says chopping them off reduced the sibilance he perceived on voices.
 
