DAC Interpolation Filters

jkeny

The Chord Hugo DAC is causing quite a stir on the forums & Rob Watts, the designer, attributes its apparently great sound to the use of 26K taps in the interpolation filter, implemented in an FPGA. A presentation of the technical details of the DAC is to be found here

One of his interesting slides states:
- The interpolation filter (an FIR filter with a line of taps whose coefficients multiply delayed data) recovers the original amplitude and timing information of the recording
- This filter re-creates the missing bits between samples
- If you look at the original Whittaker-Shannon sampling theory, then for a bandwidth limited signal, if you use an infinite tap length FIR filter then the “missing bits” will be perfectly reconstructed
- The FIR filter has a sin(x)/x response – if you use taps that have 16-bit coefficient accuracy, you need about 1,000,000 taps for an 8 times filter!
- Practical filters have limited tap length – a few hundred maximum
- These conventional filters do not properly reconstruct the original timing of transients

It has been stated here & elsewhere that the subsample timing accuracy of a 16/44 reconstructed waveform is some tens of picoseconds (or was it hundreds?).
This is for a bandlimited signal & of course band-limiting a signal (at the ADC or before) smears its timing accuracy. So we have picosecond subsample timing for a signal that has already had its timing limited.
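
For what it's worth, a rough back-of-envelope check on that number (my own assumptions: a full-scale 20 kHz tone, 16-bit quantisation, timing resolved at the steepest point of the waveform):

import math

# One LSB of amplitude (A / 2**15) divided by the maximum slope of a
# full-scale sine (2*pi*f*A) gives the smallest distinguishable time shift.
f = 20000.0                            # assumed worst-case tone, Hz
dt = (1 / 2**15) / (2 * math.pi * f)
print(f"{dt * 1e12:.0f} ps")           # ~243 ps

That lands in the hundreds-of-picoseconds range for the worst case; lower frequencies give proportionally coarser figures, so treat it as an order-of-magnitude estimate only.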

But aside from that limitation, which we can do nothing about (it's already baked into the 16/44 file), the highlighted text above states that perfect reconstruction of the waveform at playback requires an infinite number of taps.
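
As a toy numpy sketch of that tap-count argument (my own signal and tap-count choices, not Rob Watts's implementation): reconstruct a bandlimited sine midway between its samples with a truncated sinc sum and watch the error shrink as taps are added.

import numpy as np

fs, f0 = 44100.0, 1000.0              # assumed sample rate and test tone
n = np.arange(8192)
x = np.sin(2 * np.pi * f0 * n / fs)

def sinc_reconstruct(x, frac, num_taps):
    """Estimate x at fractional offset `frac` (in samples) using a
    truncated, unwindowed sinc kernel of num_taps terms."""
    half = num_taps // 2
    k = np.arange(-half, half)
    h = np.sinc(frac - k)             # np.sinc(u) = sin(pi*u)/(pi*u)
    # y[i] = sum_j x[i+j] * h[j], i.e. the value at time i + half + frac
    return np.convolve(x, h[::-1], mode="valid")

for taps in (16, 64, 256, 1024):
    half = taps // 2
    y = sinc_reconstruct(x, 0.5, taps)
    idx = np.arange(len(y)) + half
    exact = np.sin(2 * np.pi * f0 * (idx + 0.5) / fs)
    print(taps, float(np.max(np.abs(y - exact))))

The error falls only slowly with a plain truncated sinc, which is why practical filters use windowed or otherwise optimised coefficients rather than brute-force truncation - and why the slide's figure of about 1,000,000 taps for full 16-bit accuracy is so startling.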

Some information on FIR filter design & characteristics is given here, particularly from slide 77 on.

Any comments from those more closely involved with this DSP area?
 

jkeny

This is probably a better explanation of interpolation filters & the issues involved.
The interpolation accuracy is affected by a number of factors, the most important of which are as follows:

(a) The predictability, or correlation structure, of the signal: as the correlation of successive samples increases, the predictability of a sample from the neighbouring samples increases. In general, interpolation improves with the increasing correlation structure, or equivalently the decreasing bandwidth, of a signal.
(b) The sampling rate: as the sampling rate increases, adjacent samples become more correlated, the redundant information increases, and interpolation improves (see the sketch after this list).
(c) Non-stationary characteristics of the signal: for time-varying signals the available samples some distance in time away from the missing samples may not be relevant because the signal characteristics may have completely changed. This is particularly important in interpolation of a large sequence of samples.
(d) The length of the missing samples: in general, interpolation quality decreases with increasing length of the missing samples.
(e) Finally, interpolation depends on the optimal use of the data and the efficiency of the interpolator.
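
Factor (b) is easy to see numerically. A minimal sketch (the 1 kHz test tone and the three sample rates are my own picks): naive linear interpolation at the midpoint between samples gets more accurate as the sample rate, and hence the sample-to-sample correlation, rises.

import numpy as np

f0 = 1000.0                                      # assumed test tone, Hz
for fs in (8000.0, 44100.0, 176400.0):
    n = np.arange(1024)
    x = np.sin(2 * np.pi * f0 * n / fs)
    mid = 0.5 * (x[:-1] + x[1:])                 # linear interpolation
    exact = np.sin(2 * np.pi * f0 * (n[:-1] + 0.5) / fs)
    print(int(fs), float(np.max(np.abs(mid - exact))))

The midpoint error drops roughly with (f0/fs)^2, i.e. quadratically as the signal becomes more oversampled.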

Also, just to reinforce the concept that there isn't just one way to interpolate: there are a number of different approaches to interpolation, which will give slightly different output waveforms.
The classical approach to interpolation is to construct a polynomial interpolator function that passes through the known samples. We continue this chapter with a study of the general form of polynomial interpolation, and consider Lagrange, Newton, Hermite and cubic spline interpolators. Polynomial interpolators are not optimal or well suited to make efficient use of a relatively large number of known samples, or to interpolate a relatively large segment of missing samples.
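
For concreteness, here are two of the classical interpolators named in that passage applied to the same five made-up samples (the scipy calls and values are mine, not from the quoted text):

import numpy as np
from scipy.interpolate import CubicSpline, lagrange

t = np.arange(5.0)                        # known sample times
x = np.array([0.0, 0.8, 0.9, 0.1, -0.7])  # made-up sample values

poly = lagrange(t, x)          # one degree-4 polynomial through all points
spline = CubicSpline(t, x)     # piecewise cubics with smooth joins

print(poly(1.5), float(spline(1.5)))      # two estimates of a "missing" sample

The two estimates differ slightly, which is exactly the point above: different interpolation approaches give slightly different output waveforms.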

In Section 10.3, we study several statistical digital signal processing methods for interpolation of a sequence of missing samples. These include model-based methods, which are well suited for interpolation of small to medium sized gaps of missing samples. We also consider frequency–time interpolation methods, and interpolation through waveform substitution, which have the ability to replace relatively large gaps of missing samples.
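
As a crude sketch of the model-based idea (an AR predictor fitted by plain least squares - a simplistic stand-in for the methods the book actually develops, with all names and values my own):

import numpy as np

def ar_fill(x, gap_start, gap_len, order=8):
    """Fill x[gap_start : gap_start + gap_len] by fitting an AR(order)
    model to the samples before the gap and predicting forward."""
    past = x[:gap_start]
    # Least squares fit: past[n] ~ sum_k a[k] * past[n - 1 - k]
    A = np.array([past[n - order:n][::-1] for n in range(order, len(past))])
    a, *_ = np.linalg.lstsq(A, past[order:], rcond=None)
    y = x.copy()
    for n in range(gap_start, gap_start + gap_len):
        y[n] = a @ y[n - order:n][::-1]   # recursive one-step prediction
    return y

# Toy usage: pretend samples 300..319 of a 440 Hz tone (8 kHz rate) are
# missing, fill them, and compare against the known true values.
fs = 8000.0
sig = np.sin(2 * np.pi * 440.0 * np.arange(512) / fs)
filled = ar_fill(sig, 300, 20)
print(float(np.max(np.abs(filled[300:320] - sig[300:320]))))

For a quasi-stationary tone like this the fill is near-exact; for signals whose statistics change across the gap it degrades, which is factor (c) from the earlier list.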
 

DonH50

Been a while since I last did a digital filter design, and much longer since my grad DSP courses, but I think one of the reasons for long-duration filters is to reduce sensitivity to long-term time constants and time-varying (non-stationary, i.e. not LTI) signals. That seems to match the comments above about filling in the missing information between samples.

NOT my field of expertise, however! - Don
 
