OK boys, put your feet up for this one: (angela)
If so, then why do the digital issues of the original recording pale alongside the original vinyl release?
Your question of " . . . why then do the digital issues of the original recording pale alongside the original vinyl release?" is a good one. The answer is much more complex and surprising than you might think, and it requires a lot of background information before I can answer it. So sorry, but this will be a lengthy discussion.
I agree with your observation that many digital re-issues of the original recording are far inferior to the original vinyl release. Like you, I found this to be very perplexing and troubling. So I embarked on a quest to find the reasons. I think you will find the following story to be most interesting.
In the mid-1960s I had become a serious audiophile and had figured out that better source material was going to be essential if I wanted to get serious improvement in my audio listening experiences. I became so frustrated with the quality of the source material of the day (vinyl LPs and 2-track open-reel tape) that I vowed to start doing my own recordings.
Due to all the restraints on recording imposed by the musicians' unions, I found it was really tough to find musicians to record. But after considerable effort, I finally managed to start using the recording booth at the concert hall of the University I was attending in 1968.
As a poor college student, this was a godsend, as I was able to use the finest equipment of the day (Ampex 354 studio recorders, Altec mixers, Neumann and Telefunken condenser mics, etc.) without having to buy this equipment myself, which I could not have done. I also received excellent training by the staff. But probably the most important advantage was the fact that concerts were always recorded by the University, so I was able to record some truly great performers without union interference.
As part of my recording training, I was also taught to align, maintain, and repair all sorts of recording and electronic equipment -- particularly the studio tape decks, which were always in need of attention. I had access to the University's test laboratory facilities for this purpose.
I soon became one of the top technicians at the University and was able to make all manner of measurements and tests to evaluate and compare equipment to the sound of the recordings I was making. This lucky turn of events put me in the unusual position of being able to do serious, rigorous, scientific testing to figure out the actual cause/effect relationships of what I was hearing.
What made this so unusual is that I was a concert musician, audiophile, and technician all at the same time. Therefore, I didn't have to just accept someone's opinion about audiophile topics, nor did I have to take the word of engineers who had no knowledge of music or audiophile issues. I could actually do both listening and measurement tests myself to find out the true causes for what I was hearing.
When I graduated from the University, I was able to use my experience there to get permission to do recordings for our local public radio station. I was then able to get past the musicians' union problem and record the region's symphony orchestra, opera, and pipe organ.
Because I was doing that, I was also able to get local musical groups to allow me to record them as well. I often could then get them air time on the radio station, for which they were very grateful.
By that time, I was also able to buy my own equipment and start a recording studio as well as do live, on-location recording. So I was very lucky, did a lot of live recordings, and have a wonderful library of music that I have recorded over the last 40 years, much of which is superior to what you can find on the commercial market.
I quickly learned that natural settings in good-sounding concert halls were far superior to the artificially processed recordings that were done in recording studios. Of great importance is that most on-location work was done with a simple 2-microphone setup in true stereo, using either the Blumlein, crossed-cardioid, or widely spaced omni mic setups. These techniques produced true stereo recordings with natural hall ambience, while studio recordings were done with single microphones that produced inherently dry, monaural sound.
To get the mono sound from a studio to work in a stereo system required panning, artificial reverb, compression, and equalization. Furthermore, studio recording meant that each instrument and performer had to have their own individual microphone and be recorded in isolation on a single track. The result was multi-channel recordings that then had to be "mixed" to produce the final sound.
This meant that the sound of the final 2-channel mix depended on the judgment of a "recording engineer" to make it sound good. There was no natural or real sound to compare to, so the quality of the recording was totally dependent on the engineer. This was (and remains) a critical flaw in the recording stream.
The close-miked sound from the recording studio was not very realistic, particularly when it was heavily processed and altered to sound "good." I much prefer a simple 2-mic stereo setup in a concert hall.
I continued to do all my own alignment and maintenance work on my analog recording equipment. This was always frustrating, because good performance simply was not available. While high-speed, 2-track tape had much better performance than LPs, it was impossible to make recordings good enough that you couldn't tell the difference between the source (usually a live microphone feed) and the recording.
Specifically, even the best recorders were noisy, and it was impossible to record a symphony orchestra without hearing tape hiss on quiet passages. The quietest studio recorders had a S/N (signal-to-noise ratio) of around 72 dB. By comparison, a symphony orchestra has a dynamic range of about 80 dB. So it physically was not possible to make recordings with a silent background.
The development of noise reduction systems by Dolby and DBX was a help, but they introduced other problems that degraded the sound in exchange for lower noise. The biggest of these was "pumping" or "breathing" noises as their compander circuits opened and closed in response to the music levels.
Linear frequency response in analog recorders was impossible to achieve. It was considered outstanding to get plus/minus 2 dB tolerance from 30 Hz to 15 kHz, which is really quite poor performance.
Distortion was almost a joke. While you could get distortion slightly under 1% at midrange frequencies, the frequency extremes were far worse. In any case, it was normal practice to heavily saturate the tape on loud passages in order to have a quieter background for quiet musical passages.
While magnetic tape saturates "softly" (like tubes), the distortion at saturation often shoots up to well over 50% on loud passages. Due to the soft nature of the overload, this was sonically tolerable. But the music lacked full dynamic range and had a muddy and confused quality on loud sections.
But the parameter that most annoyed me was the instability of both the frequency and amplitude of the signal. Frequency variations (measured as wow and flutter) were rarely as low as 1%. The slightest mechanical flaw (dirty or worn heads, capstan shafts, or tape guides) would dramatically worsen this. Wow and flutter could easily be heard on critical material like sustained piano tones as a warbling of the sound that was very unnatural.
Amplitude instability made the flutter even worse. You could play back a steady test tone as you recorded it and its amplitude would vary plus/minus 2 dB!
This was highly dependent on the quality of the magnetic coating on the tape. Later advancements in tape technology, particularly the polishing of the tape surface and the use of smaller magnetic particles, improved this. But even so, the best reading I ever saw was plus/minus 1 dB at the midrange frequencies (high frequencies were much worse). This was always audible to a critical listener.
Then there was the problem of bias drift. Analog tape is totally dependent on an ultrasonic bias signal (typically 100 kHz) for achieving low noise, low distortion, and linear frequency response. As the bias oscillator heated up during a recording, the bias current would change.
This would cause a tape deck that I had spent hours "tweaking" to the highest performance possible to change its performance during the recording session. This was truly frustrating, as I could never get the best performance from the equipment, even though I was using the finest equipment of the day.
LPs were much worse than tape. I had many pressings of LPs made for customers. The sound was degraded at every step of the process, from using a lathe to produce the lacquer master, through the metal casting, and on to the subsequent pressing with vinyl that was always contaminated with foreign particles that caused the clicks and pops of surface noise.
Note in particular that most of the rumble heard on LPs is actually recorded into the master disk during the lathe cutting process. In most cases, the rumble from the customer's turntable bearing contributes an insignificant amount compared to the large, noisy bearings in a lathe. Rumble that is cut into the disk cannot be removed by using a super quiet turntable on playback.
So the resulting LP inherently had a large degree of variation from the original master tape. But even if the pressing were perfect and had no errors (impossible), when playing an LP you have the problem of phono cartridges.
Cartridges are like loudspeakers in that they are transducers, and therefore there are large differences in the sound of cartridges. Between the errors introduced during the production of an LP and the variances of phono cartridges, the sound from an LP is quite obviously different from the sound on the original master tape -- which itself sounds significantly different from the live microphone feed.
Although a high-speed master tape was the best storage medium we had, it still corrupted the sound quite obviously. Everyone could easily hear the difference between the recording and the live microphone feed. In fact, every preamp of the day had a tape monitor loop so you could compare the source to the recording in real time (if you had a 3-head tape deck), and there were always differences you could hear between the two.
In short, analog recording is seriously flawed and never sounds like the source. Something better was badly needed.
By the 1980s, digital recording had been developed that had the potential to solve the technical problems that were insurmountable with analog equipment. Of course, with an entirely new process, there were some teething problems. Initially there were serious problems with insufficient data storage, low-level accuracy of DACs, and a weird problem with odd-order harmonic distortion on low-level signals that was eventually eliminated with the introduction of dither.
But the quality of digital recording quickly progressed, and by the mid '80s it was possible to make digital recordings that sounded identical to the live microphone feed. They had perfectly linear frequency response (DC to 20 kHz +/- 0.1 dB), lower distortion than most instruments could measure (less than 0.002% THD), unmeasurable wow and flutter, unmeasurable amplitude instability, and a totally silent background (S/N of better than 92 dB).
Let me take a momentary detour here and comment that most audiophiles today still believe that linear PCM (Pulse Code Modulation), as used on CDs and requiring a DAC (Digital-to-Analog Converter) for playback, produces digital steps in the waveform. This is simply untrue.
You need look no further than to observe the recorded waveform on an oscilloscope to see that modern DACs work so well that the waveform is absolutely smooth and cannot be distinguished from the source waveform. Even a 20 kHz tone from a CD, which is captured by only just over two samples per cycle, will be perfectly formed, utterly smooth, and will have distortion of only around a thousandth of a percent.
The purpose of a DAC is to produce smooth waveforms, and they do so brilliantly. There are simply no steps in the waveform of a PCM recording.
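If you'd like to see this for yourself without an oscilloscope, here is a small Python sketch of the idea (an idealized illustration, not a model of any particular DAC chip): sinc interpolation, the mathematical operation a DAC's reconstruction filter approximates, recovers the smooth sine wave between the stored samples.

```python
import math

FS = 44100.0   # CD sampling rate, Hz
F = 1000.0     # test tone, Hz
N = 4001       # number of samples (odd, so there is a center sample)

# Sample the sine at the discrete instants n/FS.
samples = [math.sin(2 * math.pi * F * n / FS) for n in range(N)]

def reconstruct(t):
    """Idealized reconstruction: a sum of sinc pulses centered on each
    sample. A real DAC's analog output filter approximates this sum."""
    total = 0.0
    for n, s in enumerate(samples):
        x = FS * t - n
        total += s * (1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x))
    return total

# Evaluate BETWEEN two samples, near the middle of the record (away from
# the edges, where the truncated sum is least accurate).
t_mid = (N // 2 + 0.5) / FS
true_value = math.sin(2 * math.pi * F * t_mid)
error = abs(reconstruct(t_mid) - true_value)
print(error)   # small: the curve between samples is smooth, with no steps
```

The small residual error comes only from truncating the sinc sum to a finite record; an ideal (infinite) reconstruction would be exact.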
These audiophiles then further believe that there is some mysterious measurement called "resolution" and that higher sampling rates improve the "resolution" of the sound. This is also nonsense. There is no such measurement as resolution, and there are no steps in the waveform of a PCM recording.
So if the sampling rate does not improve "resolution", what does it do? The sampling rate defines the highest frequency that the digital recording system can record and store. The sampling rate in a PCM system must be at least twice the highest frequency to be recorded.
The CD "Red Book" that specifies the performance of a CD therefore requires a sampling rate of at least 40 kHz. So a CD can record sounds up to 20 kHz -- the limit of human hearing.
"But wait," you'll say, "CDs are sampled at 44.1 kHz, not 40 kHz." True. The additional 4.1 kHz above 40 kHz provide the transition band for the anti-aliasing filter. This filter is required to remove any frequencies above 20 kHz, which would confuse the digital converters and cause errors and flaws in the recording.
A sampling rate of 96 kHz will record up to 40 kHz (which requires 80 kHz sampling). The extra 16 kHz are used for the anti-aliasing filter. The 192 kHz sampling rate will use the first 160 kHz to record up to 80 kHz, with the remaining 32 kHz being used for the filter.
So the sampling rate only defines the high frequency limit of the recording. It has nothing to do with "resolution" in PCM recordings.
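To see why the anti-aliasing filter is mandatory, here is a quick numeric check in Python (the tone frequencies are chosen purely for illustration): a 25 kHz tone sampled at 44.1 kHz produces exactly the same sample values (up to a phase inversion) as a 19.1 kHz tone, so without the filter an ultrasonic tone would be recorded as a spurious audible one.

```python
import math

FS = 44100.0            # CD sampling rate, Hz
F_HIGH = 25000.0        # ultrasonic tone, above FS/2
F_ALIAS = FS - F_HIGH   # 19100 Hz: the audible frequency it "folds" to

# Sample both tones at the same discrete instants n/FS.
high = [math.sin(2 * math.pi * F_HIGH * n / FS) for n in range(100)]
alias = [math.sin(2 * math.pi * F_ALIAS * n / FS) for n in range(100)]

# The two sample streams are identical except for a sign flip, because
# sin(2*pi*(FS - f)*n/FS) = sin(2*pi*n - 2*pi*f*n/FS) = -sin(2*pi*f*n/FS).
max_diff = max(abs(h + a) for h, a in zip(high, alias))
print(max_diff)   # ~0: the converter cannot tell the two tones apart
```

Once the samples are stored, no downstream processing can tell which tone was actually present, which is why the offending frequencies must be filtered out before conversion.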
I have been careful to state repeatedly that I have been talking about PCM recordings. This is because there are other digital recording and playback schemes that DO have steps in them and require different techniques to correct.
For example, SACD does not use a DAC. It detects the difference between samples only (delta-sigma processing). Therefore, there are discrete steps in the waveform.
To get adequate smoothing, extremely high sampling rates and storage of massive amounts of information are required. So SACD samples in the MHz region. It then must eliminate the tiny steps that remain (which are noise) by using noise shaping to move the noise into the ultrasonic region, up around 50 kHz.
This system works, but it clearly is inherently inferior to PCM recording. The only advantage of SACD is that no DAC is required. But this is a moot point since SACD has now been abandoned by the industry.
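For the curious, the delta-sigma idea described above can be sketched in a few lines of Python. This is a hypothetical first-order modulator for illustration only, far simpler than the actual SACD (DSD) scheme: the output is a stream of 1-bit values whose running average tracks the input, and the leftover quantization error (the "steps") is what noise shaping pushes up and out of the audio band.

```python
def delta_sigma(input_signal):
    """First-order delta-sigma modulator (illustrative sketch): integrate
    the error between the input and the 1-bit feedback, then quantize the
    integrator's sign to produce the next output bit."""
    integrator = 0.0
    out = []
    for x in input_signal:
        integrator += x - (out[-1] if out else 0.0)  # input minus feedback
        out.append(1.0 if integrator >= 0 else -1.0)  # 1-bit quantizer
    return out

# Feed a constant level: the density of +1s in the bit stream encodes it,
# so the long-run average of the 1-bit output tracks the input value.
level = 0.3
bits = delta_sigma([level] * 20000)
average = sum(bits) / len(bits)
print(average)   # close to 0.3
```

Each output value really is a discrete step (+1 or -1); only averaging (smoothing) over many fast samples recovers the signal, which is why such systems need MHz-region sampling rates.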
I might also add that to enjoy any true benefits (if any exist) from the SACD medium, the musicians had to be recorded in SACD from the start, and all processing had to be done in the SACD domain. But most SACD releases simply copied PCM masters onto SACD for distribution to customers. So even if SACD were perfect, it could do nothing more than present a PCM recording to its listeners.
In this regard, the industry was deceiving its customers. If you pay for an SACD recording, it must be SACD at every step of the recording chain.
Digital steps are also a problem for Class D amplifiers. They too sample at very high frequencies to minimize the size of the digital steps. Their waveforms must be smoothed using a Zobel network.
For the Zobel network to work well, it must be precisely tailored to the load (the speaker) that the amplifier "sees." If the two are not perfectly matched, the frequency response of the amp/speaker combination will not be linear.
This is a huge problem for manufacturers of Class D amplifiers because usually they cannot know the load to which the amplifier will be attached. So they produce "universal" Zobels. These may or may not work well with a specific speaker system -- you just don't know until you try it and measure it.
Because Class D amplifiers will not produce linear frequency response with most speakers, I don't consider them to be high fidelity devices. But they do work very well in selected applications.
For example, powered subwoofers are ideal for Class D amps because the load is known and high frequency response is not required, while high power and cool operation are. So Class D amps are an excellent choice for manufacturers to include in their subwoofers.
Now turning back to digital recording, the "word length" of a digital PCM sample is the number of bits in it. So what do the bits do?
They define the dynamic range and S/N of the recording. In general, you can consider a bit to be 6 dB of S/N.
The CD Red Book specifies 16 bits. Therefore, the S/N of a CD can be as high as 96 dB.
There are some subtle technicalities that I won't get into that alter this slightly. For example, the addition of dither (very quiet white noise) will reduce the S/N slightly. I measure an actual S/N on most CD equipment of around 92 dB for these reasons. But using 6 dB per bit is a good rule of thumb.
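The 6 dB per bit rule of thumb is easy to verify numerically. This Python sketch (an illustration, not a measurement of any real converter) quantizes a full-scale sine to 16 bits and measures the resulting S/N; the textbook formula for a full-scale sine is 6.02 x bits + 1.76 dB, about 98 dB, while the simpler 6 dB/bit rule gives the familiar 96 dB figure.

```python
import math

BITS = 16
FS = 48000       # sampling rate, Hz
F = 997.0        # test tone, chosen so it does not divide FS evenly
N = FS           # one second of samples

step = 2.0 / (2 ** BITS)   # quantizer step for a -1..+1 full-scale range

signal_power = 0.0
noise_power = 0.0
for n in range(N):
    x = math.sin(2 * math.pi * F * n / FS)   # ideal (unquantized) sample
    q = round(x / step) * step               # 16-bit quantized sample
    signal_power += x * x
    noise_power += (q - x) ** 2              # quantization error

snr_measured = 10 * math.log10(signal_power / noise_power)
snr_theory = 6.02 * BITS + 1.76              # ~98.1 dB for a full-scale sine
print(round(snr_measured, 1), round(snr_theory, 1))
```

The odd test frequency (997 Hz rather than 1 kHz) keeps the quantization error from repeating in lockstep with the tone, which would skew the measurement.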
Today's "hi resolution" [sic] recordings usually are made using the 24/96 (24-bit, 96 kHz sampling) linear PCM specification. This means that the highs will extend to 40 kHz and the theoretical S/N will be 144 dB.
We can't hear above 20 kHz, so doubling the frequency response to 40 kHz serves no useful purpose. And while the digital S/N may be as high as 144 dB, no analog electronics are anywhere near that quiet.
The quietest analog electronics have a S/N at best of 120 dB, and Brownian motion of the air molecules around microphone membranes limits them to about 92 dB. So there is nothing to be gained by using 24 bits during playback.
The industry recognizes these facts, and that is why the CD remains the highest quality music storage medium available. No human can hear any difference between a properly made Red Book recording and the source.
Many audiophiles doubt this. But I have a standing bet of $10,000 (or any amount of money you are willing to bet) that nobody can hear the difference on a properly controlled test. I've never lost this bet. Contact me anytime for details and arrangements to place your bet and do the test.
This finally brings us to answer your question. If digital recording is so good, why do many old LPs sound much more enjoyable and realistic than their CD counterparts?
To answer this, let me tell you a story about the best recording of Respighi's "Pines of Rome" that I have ever heard. It was recorded in 1959 by Fritz Reiner and the Chicago Symphony on RCA "Red Seal."
The LP had superb dynamics, essentially full frequency range (30 Hz to 15 kHz), great "hall sound", and a high degree of realism. It was pure joy and very exciting to listen to, despite all the obvious faults that were epidemic in LPs of that era (surface noise, distortion, wow and flutter, poor S/N, and general instability).
When CDs became available years later, I couldn't wait for RCA to re-release that recording on CD so that I could eliminate all the faults heard on the LP. RCA finally did so in the late '80s, and I couldn't wait to bring the CD home and play it.
Boy, was I disappointed. The CD had essentially no dynamic range, no bass, many of the instruments could barely be heard, and in general it sounded like I was listening through a telephone!
I was furious. I knew that digital recordings could (and should) be superb, since I was making them myself and knew this to be true. So I was determined to find out what was going on at RCA to ruin this recording.
After enduring considerable hassles finding my way through the telephone maze at RCA, I finally got to those responsible for releasing the recording. After hearing my complaint, they explained what had happened this way:
The original master tape recording was NOT made in 2-channel stereo. It was made using a 16-track recorder and multiple microphones -- in stereo -- on each orchestral section (violins had 2 mics, trumpets had 2 mics, etc.). They also placed mics out in the concert hall to record the hall sound.
They then mixed down the 16-track tape to get a 2-channel stereo recording that could be pressed to produce LPs. The recording engineer who did this work obviously really knew his stuff and did a great job of getting the right balance between the various orchestra sections, blending in the hall sound, and maintaining nearly full dynamic range and frequency response (particularly in the bass).
Although this was a mixdown, he kept it reasonably simple and did not use compression, equalization, or artificial reverb. The performance was superb and his mix showed it off extremely well.
Twenty-five years later, when RCA wanted to re-release the performance on CD, they did not have the mixdown used for the LP. So they had a different engineer do another mix of the original 16-track tape for the CD. He totally butchered the job.
No matter how good the recording medium, if you put garbage in, you get garbage out. So the awful sound on the CD version of this recording was due to a horrible mix done by an incompetent sound engineer who had probably never been to a live symphony orchestra concert.
The answer to your question should now be clear. It is usually the later re-processing of an old master tape that is responsible for the poor quality of sound you hear from a CD compared to the LP. It is like comparing apples and bicycles: the recordings simply are not at all the same.
Obviously this problem is not the fault of the digital recording medium, which is actually far better than any analog recording process. You should not assume that the digital medium is the cause of the problem, as it clearly is not.
In short, LP recordings are often far better than their CD counterparts because they were mixed in a more natural and realistic manner than what happens in a modern recording or mix. So the LP is much more enjoyable than the CD in spite of all the serious flaws and inaccuracies inherent in the LP medium.
Of course, not all CD recordings are inferior to LPs. A good example is Willie Nelson's album "Stardust." It is available on both formats and was apparently recorded using the same mixdown tapes. The LP is a modern pressing with excellent quality vinyl. As a result, the CD sounds a bit better than the LP because it has none of the technical flaws that are obvious in the LP. But it is obvious that the two are more similar than different in that the actual recording is identical on both. Try them and see for yourself.
This experience should drive home the fact that audiophiles need to be very cautious when making cause/effect judgments about what is heard. Audiophiles far too often make assumptions and assign fault to components or design features without actually knowing that these are the cause of what they hear.
This brings up another major topic -- subjective listening techniques. But this opus is already far too long, so that discussion will have to wait for another day.
-Roger