Rock around the clock

Vincent Kars

WBF Technical Expert: Computer Audio
Jul 1, 2010
There are three clocks:
the master clock
the bit clock
the word clock

Playing Red Book audio you have:
Master clock: 11.2896 MHz
Bit clock: 2.8224 MHz
Word clock: 44.1 kHz

Don (why isn't there a DonH50 expert forum?) can you tell me how these clocks are related?

Another question:
In the case of S/PDIF the "clock" is derived from the bit stream.
In the case of I2S the "clock" is sent separately.
Does this make I2S a better protocol? (the jitter, you know...)
 

microstrip

VIP/Donor
May 30, 2010
Portugal
Although I am not Don, I can tell you that some Red Book players have master clocks different from 11.2896 MHz. The lower-frequency clocks are usually obtained by division by powers of 2, but any integer division can be used.
 

RBFC

WBF Founding Member
Apr 20, 2010
Albuquerque, NM
www.fightingconcepts.com
Vincent,

What types of connections (SPDIF, USB, etc.) use what type of clock schemes, and how do they differ? What are the strengths and weaknesses of each type of connection with regard to clock stability and what factors (cable shielding, connector integrity, etc.) can adversely affect performance for each connection type?

That ought to be a start!

Lee
 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
Monument, CO
Sorry, missed this; busy with work and rehearsals for Easter and an upcoming orchestra concert, so my evenings have not been free.

Disclaimer: I am no expert on CD standards. I did a little background search on Red Book to address the data rate and clock questions. As for different formats and such, I (or somebody) will have to tackle that a little later.

For now, let's start at the bottom. Your CD uses 16-bit samples, in stereo, at 44.1 kS/s. I remember Sony and Philips arguing over that a bit; I think Sony wanted fewer bits, and Philips wanted a slightly slower rate to be compatible with video frame rates (but I could be wrong). The word clock relates to the actual sampling rate: 44.1 kS/s is the rate at which (stereo) samples are finally delivered from your CD to the DAC that converts them into analog audio.

Now it gets a little complicated to move up the clock chain... You might think the bit rate is simply 44.1 kHz * 16 bits * 2 for stereo or 1.4112 Mb/s, and thus the bit stream clock is twice that or 2.8224 MHz. In fact, it does work out that way at the DAC itself, but the actual path is a little more convoluted. Those not wanting a bunch of explanation and some (simple) math should stop here.

The problem with CDs is that they can have defects. Alas, nothing is perfect. There is also the need to keep the data stream "busy", that is, no long strings of 1's or 0's, to keep the clock synchronized. The actual clock circuit, a phase-locked loop, needs edges to detect the phase and stay in sync. No edges, and it can drift away, eventually losing lock (it gets lost and out of sync with the data stream). These two considerations mean extra bits get added for error correction and to ensure the bitstream is always "busy". The standard also defines a minimum data package, called a frame, to simplify the buffering and provide space for all the extra overhead.

The frame in a CD comprises six stereo samples, or 24 bytes (192 bits: 6 samples * 16 bits/sample * 2 channels). To that is added 8 bytes of error-correction parity (CIRC), and another byte for control and display functions (the subcode), so there are 33 data bytes in each frame. Next, to make sure bits toggle all the time, each 8-bit byte is encoded into 14 bits (EFM, eight-to-fourteen modulation), with an extra 3 "merge" bits after each encoded byte. Now we are up to 33 bytes * (14 + 3 bits/byte) = 561 bits/frame. At this point a 27-bit synchronization pattern is added to keep all the data bits lined up in the right order, so at the end we have 588 bits in every frame. Whew!
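That frame bit budget is easy to check with a few lines (just the arithmetic from the post, not a model of any real decoder):

```python
# Rough check of the Red Book frame bit budget described above.
AUDIO_BYTES = 6 * 2 * 2          # 6 stereo samples * 2 bytes/sample * 2 channels = 24 bytes
PARITY_BYTES = 8                 # CIRC error-correction parity
SUBCODE_BYTES = 1                # control/display (subcode) byte
DATA_BYTES = AUDIO_BYTES + PARITY_BYTES + SUBCODE_BYTES   # 33 bytes per frame

EFM_BITS = 14                    # each byte becomes 14 channel bits (EFM)
MERGE_BITS = 3                   # merging bits after every 14-bit symbol
SYNC_BITS = 27                   # 24-bit sync pattern plus 3 merging bits

frame_bits = DATA_BYTES * (EFM_BITS + MERGE_BITS) + SYNC_BITS
print(frame_bits)                # 33 * 17 + 27 = 588 channel bits per frame
```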

The next mind-numbing discussion relates to decisions made about physical implementation on a CD. Like other disc drives, they divided the data regions on the CD into sectors, with 98 frames per sector. (I am sure there is a reason for all these strange numbers but I did not dig that deeply.) The CD transport is designed to deliver data at a constant rate of 75 sectors per second, which works out to 57,624 bits/sector and 4.3218 Mb/s net data rate. But, this doesn't look anything like that 2.8224 MHz number, does it? Hmmm...

Ah, remember that there are only six stereo samples in each frame, so only about 1/3 of each frame is what we listen to. Six samples is 192 bits (6 samples * 16 bits/sample * 2 for stereo). Now take that total data rate and convert to "useful" bits at the end: 192 audio data bits / 588 bits/frame * 4.3218 Mb/s = 1.4112 Mb/s. The clock runs at twice that rate, and voilà! We are back at 2.8224 MHz for the bit clock. The extra bits are used and then stripped off before the data stream goes toward the DAC.
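The whole chain from sector rate back to the bit clock can be verified numerically (again, only the post's arithmetic):

```python
# From the disc's channel rate down to the audio bit clock, per the post.
FRAME_BITS = 588
FRAMES_PER_SECTOR = 98
SECTORS_PER_SEC = 75

sector_bits = FRAME_BITS * FRAMES_PER_SECTOR        # 57,624 bits/sector
channel_rate = sector_bits * SECTORS_PER_SEC        # 4,321,800 b/s = 4.3218 Mb/s off the disc

audio_bits_per_frame = 6 * 16 * 2                   # 192 "useful" audio bits per frame
audio_rate = channel_rate * audio_bits_per_frame // FRAME_BITS  # 1,411,200 b/s
bit_clock = 2 * audio_rate                          # 2,822,400 Hz = 2.8224 MHz
print(channel_rate, audio_rate, bit_clock)
```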

The master clock is simply four times the bit rate, 4*2.8224 = 11.2896 MHz, and is needed to provide margin for various timing issues that might happen in the real world, as well as extra bandwidth for all that overhead in the bitstream. As has been mentioned, designs may use some other master clock if they choose. Why they chose this instead of some multiple of the actual data stream, I do not know, but I am sure it can be found someplace in the standard (I do not have a copy at hand). However, having it be a multiple of the actual audio data stream is a Very Good Thing for those designing clock circuits for DACs.
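Tying the three clocks from the opening post together, the relationships are simple integer multiples:

```python
# The clock hierarchy for Red Book playback, as described above.
FS = 44_100                   # word clock: one stereo sample per tick
bit_clock = 64 * FS           # 2,822,400 Hz = 2.8224 MHz
master_clock = 4 * bit_clock  # 11,289,600 Hz = 11.2896 MHz, i.e. 256 x Fs
print(FS, bit_clock, master_clock)
```

Having the master clock be an exact multiple of Fs (here 256x) is the property Don calls a Very Good Thing for DAC clock design.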

There, probably more than anyone really wanted to know about CD bit streams!

HTH - Don

References: http://en.wikipedia.org/wiki/Compact_Disc and a couple of audio handbooks I have at home but forgot to write down (just have the numbers with me -- this was my lunch hour today!)

p.s. I would have to do more on clock schemes to answer the questions about USB vs. SPDIF vs. whatever. I will note that bitstreams used for data tend to have lots of error correction schemes, including retries and such, may use spread-spectrum clocks (that move around a little) to reduce spurious output (EMI/RFI), and have fairly high digital noise (including jitter) that is fairly easily suppressed in a digital data system, but that plays havoc when trying to recover a clean low-jitter clock for an audio DAC from the digital data stream. USB was not designed with low jitter in mind. Audio SPDIF is much cleaner, probably because it is used in so many professional recording systems, but then we went and shot ourselves in the foot by letting the video guys saddle us with HDMI. Since we are generally less sensitive to video jitter (the TV and our eyes integrate it out), HDMI is a fairly noisy signal. Some components use sophisticated reclocking schemes to reduce the jitter to acceptable levels, but many others do not.
 

garylkoh

WBF Technical Expert (Speakers & Audio Equipment)
Sep 6, 2010
Seattle, WA
www.genesisloudspeakers.com
Many thanks, Don. Does this imply that the S/PDIF interconnect needs to be rated for 2.8 MHz for 16/44.1 and much higher for hi-rez? No wonder I'm having so many problems trying to interface an ADC to a data collector for recording!!
 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
Monument, CO
Is this because it is biphase encoded?

No, the data changes once per clock period, so the clock frequency is twice the data rate. The data is only sampled on one edge, unlike say a DDR memory chip that uses both edges of the clock to double the data rate.
 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
Monument, CO
Many thanks, Don. Does this imply that the S/PDIF interconnect needs to be rated for 2.8 MHz for 16/44.1 and much higher for hi-rez? No wonder I'm having so many problems trying to interface an ADC to a data collector for recording!!

Yes, more or less -- the rule of thumb is you need at least 1.7x the data rate to have adequate bandwidth for a decent eye diagram. More bandwidth means faster edges, but also more noise due to the wider bandwidth (more Hz, more noise). Slower, and the receiver could have more noise due to slower slew rates (more time in the noisy linear region of the amplifiers), and more ISI (inter-symbol interference). All in life is a compromise. :) The good news is that most decent (and I do not mean expensive) interconnects are based upon RF cables that have plenty of bandwidth. Bad connectors, or connections, are the ruin of many a cheap cable, however. That's why I tend to pay a little more for the better cables at Monoprice or Blue Jeans and not the absolute cheapest Monoprice or Wal-Mart specials.
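As a rough worked example of that 1.7x rule for 16/44.1 S/PDIF (the 5.6448 MHz figure assumes biphase-mark coding doubles the worst-case transition rate of the 2.8224 Mb/s subframe stream; treat the numbers as ballpark):

```python
# Ballpark minimum cable bandwidth for 16/44.1 S/PDIF using the ~1.7x rule.
subframe_rate = 2.8224e6          # b/s on the S/PDIF link at 44.1 kHz
line_rate = 2 * subframe_rate     # BMC: up to two transitions per data bit
min_bw_hz = 1.7 * line_rate       # rule-of-thumb analog bandwidth
print(f"~{min_bw_hz / 1e6:.1f} MHz")   # roughly 9.6 MHz
```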

Hi-rez is usually more bits at a higher sampling rate so yep, ya' need more bandwidth... The limitation is usually the transceivers, not the cable, except for very cheap cables or very long runs (e.g. over 10m, though the number varies with the standard and bit rate, natch).

ADC interface is harder than it looks, not only because of the timing at the output, but also because of the isolation needed to/from the input. If your output data or output clock modulates the input signal or sampling clock, all heck breaks loose. 40 dB is not too hard, and 60 dB takes a little work; beyond that you have to work very hard.
 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
Monument, CO
To expand a bit on the encoding, for a number of reasons the bit stream must be kept "busy". That is, there must be lots of transitions, changes from 0 to 1 and back, not a lot of long strings of bits just sitting there. However, music covers a very wide (relative) frequency range: 20 Hz to 20 kHz is three decades, over ten octaves. This is a much wider range than most systems (including most RF systems) can handle. The problem for digital bit streams is that, with such a large signal frequency range, much of the time the music is moving very slowly (1/10th to 1/100th) compared to the clock, which means long strings of bits that change only rarely. This causes all sorts of problems at the receiver and clock recovery circuit in your DAC. So, what to do about it?

The answer is to add more bits and use an algorithm to decide how to choose them. To illustrate, I'm going to pick something most of us have but rarely think about -- the disk drive in our computer. You have seen terms like SAS (serial attached SCSI) and SATA (serial ATA) but might not realize the same problem -- data and clock recovery -- exists in all these systems. They need busy bits, too! They use 8b:10b encoding to turn 8 bits into 10. How does this help? Well, with the two extra bits, there are now four times as many values any given 8-bit number can take on (two bits, two states each, 2^2 = 4). Since 8 bits can represent 256 decimal numbers, we now have 256 * 4 = 1024 different numbers to choose from. The encoder picks codewords to provide extra 1's or 0's as needed in the bit stream, and the transmitter and receiver follow an algorithm (a set of rules) to determine the choice so everybody stays together. A different value can be chosen each time, so even if there is a long string of the same 8-bit value, it rotates through different 10-bit words and the data path never sees a bunch of 1's or 0's in a row. Problem solved!
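Real 8b/10b uses lookup tables (a 5b/6b and a 3b/4b stage) and is more involved than fits here, but the core idea -- choose between alternative codewords so the running balance of 1's and 0's (the "disparity") stays bounded -- can be shown with a toy encoder (illustrative only, NOT the actual 8b/10b algorithm):

```python
# Toy DC-balancing encoder: pad a byte to 10 bits, then send either the
# codeword or its bitwise complement, whichever pulls the running
# disparity (1s minus 0s sent so far) back toward zero.
def encode(byte, disparity):
    word = (byte << 2) | 0b01                 # pad to 10 bits with a guaranteed toggle
    word_disp = 2 * bin(word).count("1") - 10 # positive if more 1s than 0s
    if disparity * word_disp > 0:             # same sign would worsen the imbalance
        word ^= 0x3FF                         # so send the complement instead
        word_disp = -word_disp
    return word, disparity + word_disp

disp = 0
for b in [0x00] * 4:                          # a "boring" run of identical bytes
    code, disp = encode(b, disp)
    print(f"{code:010b}  running disparity {disp:+d}")
```

Even a run of identical input bytes alternates between two different line codewords, so the wire stays busy and DC-balanced.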

Well, for data, which tends to be fairly random in most applications anyway (yes, there are exceptions, but let's leave that alone for now as it is not relevant here). But, four values may not be enough when the signal bandwidth is very broad, as in music. Slow signals may flip a bit only every now and then, at a very low rate relative to the bit stream (which must go fast enough to handle the very highest frequencies). Too few extra bits and we risk making coherent patterns that can cause other problems (trust me on this). The CD standard has six extra bits, providing 64 (2^6) times as many choices for each byte, or 256 * 64 = 16,384 different values. Since the audio band is so wide, at least relatively speaking, having more choices helps keep the bit stream busy even with relatively slow (low-frequency) material, and reduces the chance of patterns creeping into the data stream and causing other problems (like signal-dependent "noise" coupled to the DAC's clock and analog output, creating distortion, among other things).

HTH - Don
 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
Monument, CO
Had to look up S/PDIF to make sure I 'membered rightly... It is derived from the professional audio link AES3, a joint standard of the AES (Audio Engineering Society, I used to belong) and the EBU (European Broadcasting Union). For more info on AES3, look here: http://en.wikipedia.org/wiki/AES3. S/PDIF was developed by Sony and Philips (the S/P; DIF is for Digital Interconnect Format) and is pretty similar; it is sort of AES3-lite, a consumer version. Neither defines a fixed link rate, leaving that up to the designers/manufacturers, and bit length is also up to them (however, AES3 defines up to 24 data bits, and S/PDIF 20, with 24 bits optional). The protocols are similar, but differ in physical implementation. AES3 uses XLR or BNC connectors, while S/PDIF uses RCA or optical (TOSLINK) connectors, though some companies have used other connectors (e.g. TRS or even pairs of BNCs for balanced AES3, and BNCs for S/PDIF). S/PDIF uses much lower signal levels (~0.5 Vpp vs. 2 to 7 Vpp for AES3) and is limited to 10 m runs vs. AES3's 100 - 1000 m limits. Both use BMC (biphase mark code) modulation, which probably doesn't mean a lot to most of us (but Vincent, I'd guess that's what you were thinking about when you made your earlier post). BMC is just another scheme to keep the signal busy by ensuring frequent edges no matter what the data is actually doing.

Rather than run through all the numbers for S/PDIF, I'll just point you to comments Vincent made in another thread, and the Wiki article at http://en.wikipedia.org/wiki/S/PDIF. As for lower noise, it benefits from its close association with a professional audio standard and a lot of focus on keeping it clean. I don't think it is intrinsically any cleaner than numerous other formats, just implemented better due to its pro roots, and in fact one of the streams it will handle is the standard CD bit stream described earlier. So, for that you have the numbers! It will still suffer if the clock is noisy, but a lot of effort on pro audio gear over the years has given S/PDIF a big boost compared to something like HDMI.

To just mention HDMI, it uses 8b/10b video coding, 2b/10b for control, and 4b/10b for audio and auxiliary data. A glance at most any HDMI article (such as http://en.wikipedia.org/wiki/HDMI) will quickly show it is primarily aimed at video, and a glance at the specs of most AVRs will show very high jitter (if they spec it at all) relative to S/PDIF links. Not all; my Pioneer and a lot of the higher-end components retime/buffer/reclock to reduce the jitter. These techniques will undoubtedly find their way into most components as the cost of implementing them falls with technology and under market pressure ("my jitter's lower than your jitter, nyah nyah nyah"). The ability (bandwidth) to carry lossless audio should win in the end. I just wish they used a better connector, with more positive capture and less sensitivity to contact loss! Ah well, cheap rules...

Finally, I know almost nothing about I2S. Separating the clock could help, but of course it depends on how well the clock is implemented. I would guess anybody going to the trouble of running extra lines and connectors will probably do a good job since it's likely to be found on higher-end gear. The clock recovered (de-embedded, if you want to think of it that way) from a data stream can be extremely good, however. Like anything else, it comes down to how good the recovery circuits are.
 

DonH50

Member Sponsor & WBF Technical Expert
Jun 22, 2010
Monument, CO
One last post, then to bed! I want to thank you all for your kind words, and assure you that Steve et al. did indeed ask if I would like to have my own forum. For personal reasons, among them knowing I am not any sort of acknowledged expert in the audio arena, a lack of time, and knowing that I refuse to start something I cannot support 110%, I declined. I am quite happy contributing here, and in other places around WBF when I feel I have something to contribute. Or not. :)

All the best - Don

p.s. Snowing like crazy outside. Springtime in the Rockies!
 

ar-t

Well-Known Member
Jun 3, 2011
Texas
ar-t.co
About master clock rates.............

Not sure anyone here still uses a CD player, but the master clock is almost always 256 x Fs in Philips-based players. Most of the Japanese units used 384 x Fs. There may be some I2S signals that work at a slower rate, usually 1/2 of normal.

When it comes to DACs and S/PDIF TX/RX units, you may run across 128 x Fs. With the migration towards higher sample rates, it is not uncommon to see a clock running at 256 x Fs for the higher rates, which becomes 512 x Fs for the lower rates.
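A quick table of the Fs multiples mentioned above (illustrative only; which multiple a given player or DAC actually uses depends on the chipset):

```python
# Common master-clock multiples of the sample rate Fs.
RATES = (44_100, 48_000, 96_000)        # Hz
MULTIPLES = (128, 256, 384, 512)

for fs in RATES:
    row = ", ".join(f"{m}x = {m * fs / 1e6:.4f} MHz" for m in MULTIPLES)
    print(f"Fs = {fs / 1000:g} kHz: {row}")
```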

As for SPDIF BW requirements: a whole subject, by itself. Let's just say I don't subscribe to the "All you need is 15 MHz" school of thought.
 
