Objectivists - what might be wrong with this label/viewpoint!!

arny said:
Dynamic music-like signals with realistic crest factors don't generally stress equipment in ways which steady state periodic signals don't. In fact the opposite is more generally true. Steady state signals generally produce more stress since they require handling more energy, on the average, to reproduce properly.

Have you got any measurements that back up this claim?

Since it is essentially a law of physics, every valid measurement backs it up. I have done the work many times. It just takes time and work.

I guess there may have been a serious omission in published tests. People haven't made a big point of publishing the difference in power drawn from the power line, and heat dissipated by the heat sinks, for an amplifier amplifying music versus a similar-amplitude sine wave. It is huge! I hesitate to do such a test and post the results here because none of my own audio gear is Golden Ear/Placebophile approved. I'll do the work, post the results and have my work $#!t on in public.

What bit about "DYNAMIC non-periodic signals" do you not understand?

Certainly there is quite a bit that I do not understand because that is the nature of Science. However, I have this little matter of advanced formal education, decades of real world experience, and all that jazz. Tell me about yours!

A claim that any objectivist should be able to back up with measurements, not just truisms.

I can quite easily, but when dealing with people who work at the purely emotional level... ;-)
 
They are the people I am addressing Tom. They know who they are.

I'm sorry Jack, but you're not addressing those people, you're addressing the group. I understand if you don't want to call out individuals publicly. Send them private messages and quote their offensive language back to them so they can clearly see where they've crossed the line. If you have also done this, my apologies.

I'll be watching my mailbox. :)

Tim
 
Ummmm. Thanks for the suggestions? To each his own buddy. Clearly those who have been civil need not be asked to be. So don't expect any messages in your inbox from me. I'm notoriously lax in my standards for civility. ;)
 
One of the classic ways to show how perceptual coders trash audio signals is to measure their performance with multitones. Multitones are a closer simulation of music than small numbers of pure tones. They are a test signal that people have been trying to popularize in modern audio measurements for decades:

http://www.ap.com/kb/show/60
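For anyone who wants to experiment along these lines, a multitone is easy to synthesize. Here is a minimal Python sketch; the tone count, spacing and levels are my own illustrative choices, not the exact signal in the attachments below.

# Minimal sketch: synthesize a multitone test file (illustrative
# parameters, not the exact signal in the attachments below).
import numpy as np
from scipy.io import wavfile

fs = 44100                       # sample rate, Hz
dur = 5.0                        # seconds
t = np.arange(int(fs * dur)) / fs

# Log-spaced tones across the audio band, equal amplitude.
freqs = np.geomspace(50, 15000, 30)
# Snap each tone to an exact bin of a 65536-point FFT so a later
# spectrum measurement is leakage-free.
n_fft = 65536
freqs = np.round(freqs * n_fft / fs) * fs / n_fft

# Random phases keep the summed signal's crest factor reasonable.
x = sum(np.sin(2 * np.pi * f * t + np.random.uniform(0, 2 * np.pi))
        for f in freqs)
x *= 0.5 / np.max(np.abs(x))     # leave headroom so nothing clips

wavfile.write("multitone_4416.wav", fs, (x * 32767).astype(np.int16))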

Here is a 44/16 (44.1 kHz, 16-bit) file of a multitone:

[Attachment 18273: spectrum of the 44/16 multitone]

Here is the same audio file after being turned into a 128 kbps MP3:

[Attachment 18274: spectrum of the multitone after 128 kbps MP3 encoding]

There have been a number of dramatic technical degradations of the file when encoded via MP3, the most visible of which are:

(1) The noise floor between the tones has increased tremendously - something like 40 dB. Generally this would suggest nonlinear distortion, but with MP3 encoders it is usually something more complex - band-limited random noise.

(2) A brick-wall filter was inserted above about 15.5 kHz.
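For anyone who wants to reproduce degradation (1), here is a rough sketch of the whole loop, assuming the LAME command-line encoder is installed and the multitone file from the earlier sketch exists; the file names and FFT size are illustrative.

# Rough sketch: encode at 128 kbps with LAME, decode, and compare
# spectra. Codec delay is ignored - the signal is steady-state, so a
# few samples of offset barely change the spectrum.
import subprocess
import numpy as np
from scipy.io import wavfile

subprocess.run(["lame", "-b", "128", "multitone_4416.wav", "mt.mp3"], check=True)
subprocess.run(["lame", "--decode", "mt.mp3", "mt_decoded.wav"], check=True)

def spectrum_db(path, n_fft=65536):
    rate, x = wavfile.read(path)
    x = x[:n_fft].astype(float) / 32768.0
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x)))) / len(x)
    return 20 * np.log10(mag + 1e-12)

ref = spectrum_db("multitone_4416.wav")
enc = spectrum_db("mt_decoded.wav")
# The tones occupy only a few dozen bins, so the median level of the
# spectrum tracks the noise floor between them.
print("noise floor rise: about %.1f dB" % (np.median(enc) - np.median(ref)))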

I posted similar measurements on my www.pcavtech.com web site starting in the mid-1990s.
Sorry to hear that. These are exactly the wrong kind of measurements to use to characterize a system like lossy compression. Nothing in what you measured is representative of what I encode in my library of music. Some will be remarkably close to the source, some will not. There is nothing in your measurements that would indicate either outcome.

Lossy music compression algorithms divide the audio into frames and perform compression on each one of them. How far they reduce the effective resolution depends on how full their buffer is. If the previous frame took a lot of space, then they will crunch the bits even more. More advanced codecs like AAC and WMA will use different frame sizes so as to optimize time- versus frequency-domain response. Transient detection is used to select a small or large window, and again, how much space is left in the interim buffer is used to decide how much to compress the signal.
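As a toy illustration of that bit-reservoir behaviour (a sketch of the control flow only, not any real codec's algorithm, with made-up numbers):

# Toy sketch of frame-based bit allocation with a bit reservoir.
def allocate_bits(frames, target_per_frame, reservoir_max):
    reservoir = 0
    allocations = []
    for demand in frames:         # bits the psychoacoustic model asks for
        budget = target_per_frame + reservoir
        spend = min(demand, budget)
        reservoir = min(budget - spend, reservoir_max)  # leftovers carry over
        allocations.append(spend) # fewer bits => coarser quantization
    return allocations

# A demanding (transient) frame drains the reservoir and may still fall
# short of what the model wants - the data-dependent behaviour at issue.
print(allocate_bits([300, 300, 900, 300, 300],
                    target_per_frame=400, reservoir_max=400))
# -> [300, 300, 600, 300, 300]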

The above is clearly data-dependent, dynamic system behaviour. Such a system generates, from literally moment to moment, sharply varying distortion: from perceptually inaudible, to audible to some people but not others, to audible to many people. Nothing about your static two-tone test signal characterizes its distortions.

So no, you must not use such measurements as they generate meaningless data. Listening tests are expensive and time consuming. If such simple measurements could tell the story, we would save ourselves a ton of effort and expense in developing lossy compression. But because the measurements are psychoacoustically blind, and static to boot, we can't use them.

I'm just an amateur, guys! ;-)
What can I say Arny. Show me a single post from JJ who uses your type of measurement to talk about lossy compression. Show me AES papers where people talk about how well a new compression system works with dual tone testing. You won't find any. What you find is bloggers, etc. who resort to measurements they know how to make, but don't know the nature of the system well enough to know their applicability.
 
Do Atkinson or Amir post the results of multitone tests done with their fancy AP gear? According to the AP document above they are loaded for bear. If such tests are so esoteric, why is this sad poor stupid amateur posting multitone results with such ease?
As I just explained, you won't find me applying static audio measurements to dynamic systems like lossy compression. I would be the laughing stock of the perceptual coding world if I did that. An Audio Precision analyzer is fine when we want to demonstrate static distortion, but it has nothing remotely useful, let alone sufficient, to characterize systems that have dynamic distortion.

If anything, it can confuse the heck out of you, as it did to me when I was performing my AVR jitter measurements. I was testing the Anthem and I would run the jitter test and get one outcome. Then I would try it again and get wildly different results. I thought for sure my test gear was broken or something. Then I tried it on all the other devices I was testing and saw consistent run-to-run results. Clearly the Anthem was at fault here:

[Attachment: Anthem AVR jitter spectrum, showing run-to-run variation]


Finding this was a fluke. Had I not had other gear to use as a reference, I would not have known what was going on. This test equipment is simply not the right tool for finding dynamic, time-varying distortion.

So no, just because you can perform a measurement, it does not mean you should. If I cut my hand, the doctor does not look in my ear with his instrument. You must know what tool is useful where. And dual tone tests, or any static measurement, are woefully inadequate and quite misleading in characterizing distortion in dynamic systems.
 
I mean exactly what I said. It falls out of the very definition of crest factor!

http://en.wikipedia.org/wiki/Crest_factor

"Crest factor is a measure of a waveform, such as alternating current or sound, showing the ratio of peak values to the average value. In other words, crest factor indicates how extreme the peaks are in a waveform. Crest factor 1 indicates no peaks, such as direct current. Higher crest factors indicate peaks, for example sound waves tend to have high crest factors."

Objectivists are commonly faulted for saying things like "If a reasonably good amplifier can handle a signal without clipping it will be free of audible distortion." However, looking at the formal definition of crest factor, we see that the statement presumes that the peaks are handled without distortion. A ton of placebophile hand wringing about the differences between sine waves and music goes right out the window!

I am trying to communicate the truth that a power amp that can handle a low crest factor signal will generally handle a high crest factor signal with the same peak amplitude, as long as it does not clip the peaks. Along the way I observe that the high crest factor signal by definition contains less average energy, which makes fewer demands on output transistors, power supplies and heat sinks. This falls out of the very definition of crest factor!

This is readily observable when tests done with sine waves are compared to tests using actual musical waveforms. The music tests draw less current from the power line, heat up the amp far less, and are generally easier to complete no matter how long they run.
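The arithmetic is easy to check numerically. A minimal sketch, using peak-normalized Gaussian noise as a crude stand-in for a high-crest-factor musical signal:

# Minimal sketch: average power vs. crest factor at the same peak level.
import numpy as np

def crest_db(x):
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

n = 441000                           # ten seconds at 44.1 kHz
t = np.arange(n) / 44100.0
sine = np.sin(2 * np.pi * 1000 * t)  # crest factor = sqrt(2), about 3 dB
music = np.random.randn(n)           # noise as a music-like stand-in
music /= np.max(np.abs(music))       # normalize to the same peak of 1.0

for name, x in [("sine", sine), ("music-like", music)]:
    print(f"{name}: crest {crest_db(x):.1f} dB, mean power {np.mean(x**2):.3f}")
# Same peak, far less average power in the high-crest signal - hence
# less heat in the output devices, power supplies and heat sinks.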

Sounds like I was correct:

Originally Posted by arny
Dynamic music-like signals with realistic crest factors don't generally stress equipment in ways which steady state periodic signals don't. In fact the opposite is more generally true.

I would have stated this with the word 'do' instead of don't... So- was I right, the word 'do' should have been used or did you really mean 'don't'??
 
I have access to a lot of the Bell Labs papers. I'll take a look over the break.

Regarding the MacPherson/Middlebrooks paper, they left the money shot till last: "These simulation results suggest that, in agreement with the hypothesis of Middlebrooks and Green (1990), high-frequency Gaussian noise signal envelopes cannot convey robust ITD information."

Well, that fits with my attempts to develop such signals; over headphones I sometimes got very weak, minor imaging off center. It was never enough of an effect to seem important, at least over headphones. But I was just goofing around after reading about it anyway.

As for the work by JJ, if you find or can share any info, I would be interested.
 
Sounds like I was correct:

Sorry, I didn't see the double negative until it was clearly pointed out to me.

The desired statement is:

Dynamic music-like signals with realistic crest factors don't generally stress equipment in ways which steady state periodic signals do. In fact the opposite is more generally true.

I would have stated this with the word 'do' instead of don't... So- was I right, the word 'do' should have been used or did you really mean 'don't'??

So would have I, were I not too busy making typos. ;-)

IOW: Steady state signals with a given peak level cause more stress than music with the same peak level.
 
Sorry to hear that. These are exactly the wrong kind of measurements to use to characterize a system like lossy compression.

Proof by means of self-serving unsupported assertion is always easy to fault on general grounds.

Nothing in what you measured is representative of what I encode in my library of music.


That's interesting. So I can put you down as absolutely denying the idea that music is just a kind of multitone?

Some will be remarkably close to the source, some will not. There is nothing in your measurements that would indicate either outcome.


Actually there is, but in the face of such long-winded, self-serving denial, it's not worth my time to try to explain it to you.

In an earlier post you seemed to claim that this noise between the tones is IM, didn't you? That's an error that I would explain if I had an audience that was interested in such facts.

Lossy music compression algorithms divide the audio into frames and perform compression on each one of them. How far they reduce the effective resolution depends on how full their buffer is. If the previous frame took a lot of space, then they will crunch the bits even more. More advanced codecs like AAC and WMA will use different frame sizes so as to optimize time- versus frequency-domain response. Transient detection is used to select a small or large window, and again, how much space is left in the interim buffer is used to decide how much to compress the signal.

Of course. Never said otherwise.

The above is clearly data-dependent, dynamic system behaviour.

Of course it is, but that doesn't diminish the value of certain lessons that can be learned about the performance of perceptual coders by looking at what they do with multitones.

What I see above is just a rehash of the usual placebophile golden-ear subjectivist bias against testing with steady-state tones. It takes an open mind to take the good from many different sources and form a complete picture.

Such a system generates, from literally moment to moment, sharply varying distortion: from perceptually inaudible, to audible to some people but not others, to audible to many people. Nothing about your static two-tone test signal characterizes its distortions.

In a way you are right, because no matter what, many of the problems of perceptual coding aren't about distortion, they are about noise.

So no, you must not use such measurements as they generate meaningless data.

Except they don't, but I'm not going to convince anyone who is agenda-driven and totally self-serving. So I won't waste forum bandwidth trying.

Listening tests are expensive and time consuming. If such simple measurements could tell the story, we would save ourselves a ton of effort and expense in developing lossy compression. But because the measurements are psychoacoustically blind, and static to boot, we can't use them.

Except they aren't totally useless - as always, it's about using the right tool for the job at hand, and there are many different jobs.

It is ludicrous to ignore the fact that I invented ABX exactly for the problem that became manifest when perceptual coders happened on the scene - audible problems that traditional measurements don't fully capture or explain.
 
So no, you must not use such measurements as they generate meaningless data. Listening tests are expensive and time consuming. If such simple measurements could tell the story, we would save ourselves a ton of effort and expense in developing lossy compression. But because the measurements are psychoacoustically blind, and static to boot, we can't use them.

What can I say Arny. Show me a single post from JJ who uses your type of measurement to talk about lossy compression. Show me AES papers where people talk about how well a new compression system works with dual tone testing.

At no point did I mention dual tones. At no point did I picture just dual tones.

Anybody who can count to at least three can see that my tests contained far more than just dual tones. Just to remind you, here is one of my graphs:

[Attachment: spectrum of the 44/16 multitone]

How many tones are there, Amir?
 
Proof by means of self-serving unsupported assertion is always easy to fault on general grounds.
Plot is lost Arny. We are discussing your statement that measurements can be used to ascertain the performance of lossy codecs. You have so far shown us a couple of blogs, yours and one other person's, with said measurements. I have explained in detail why your approach is completely misguided: no dynamic system can be characterized with static test tones. And even when you do show distortion, it cannot be representative, because lossy codec distortion is completely signal-dependent and no one listens to 3 minutes of two static tones.

You have provided no references to back your position, making your statement clearly apply to your own posts here. There are arguments that need to be shut down because the person is totally out of their domain of knowledge, and this is one of those clear situations.

It is ludicrous to ignore the fact that I invented ABX exactly for the problem that became manifest when perceptual coders happened on the scene - audible problems that traditional measurements don't fully capture or explain.
You did not invent ABX testing Arny. Please don't keep saying that. And ABX testing is not used for lossy codec evaluation anyway.
 
Plot is lost Arny.

That's right Amir. You had your chance to criticize my statement and the best you could do was come up with a bunch of unsupported, self-serving assertions. You've now had a second chance, and still no joy.

We are discussing your statement that measurements can be used to ascertain the performance of lossy codecs.

I notice that even though it is stupidly easy to do so, you have not quoted me and instead substituted your own self-serving paraphrase of what you claim I said, which is of course easy to demonstrate to be false and stupid because of its authorship, which was not me.

So I will do things right:

http://www.whatsbestforum.com/showthread.php?16388-Objectivists-what-might-be-wrong-with-this-label-viewpoint!!&p=298283&viewfull=1#post298283

Here is the actual interchange: (no paraphrases, cut-and-paste accurate)

arnyk said:
Since audio signals are two dimensional (time and amplitude) there is only a very short list (N=4) of things that can go wrong. They are: Linear Distortion (FR and phase), Nonlinear Distortion (IM, THD, jitter), random Noise (usually due to thermal effects) and Interfering Signals such as hum. The means for measuring all of these problems have been known for decades and in modern times are easy enough to actually measure for yourself with an investment that is by high end audio standards chump change. Most good modern audio gear reduces all of these potentially destructive influences to orders of magnitude below audibility. Worrying about these things is for chumps.

amirm said:
Compression artifacts in an MP3 encoder can be well above threshold of hearing yet you never see measurements of such Arny. Dynamic distortion that comes and goes based on what is played or what the equipment is doing at that moment is a glaring problem with our measurements today.

arnyk said:
One of the classic ways to show how perceptual coders trash audio signals is to measure their performance with multitones. Multitones are a closer simulation of music than small numbers of pure tones. They are a test tone that people have been trying to popularize in modern audio measurements for decades:

So, I didn't say what you now falsely claim I said, Amir.

I didn't say: "Measurements can be used to ascertain the performance of lossy codecs."

I showed that measurements can and have been used to show the lack of performance of lossy codecs. And I illustrated it with a real world example involving software and parameters that people are likely to use, showing significant measured degradation of a relatively simple test signal, which you falsely claimed was a "duotone" when in fact, as anybody who knows how to read an FFT can easily see, it has a lot more than just two tones in it.

Now let's look at some more facts:

https://www.linkedin.com/pub/amir-majidimehr/5/4a7/1

"Corporate Vice President Microsoft
September 1997 – January 2008 (10 years 5 months)
Ran the digital media division (~1000 employees) which included software development, marketing and business development for the entire suite of audio, video and digital imaging for Microsoft and consumer electronic devices. Group created such well-known technologies such as WMA audio compression, WMV/VC-1 video compression (mandatory in Blu-ray format), Windows Media DRM, and Windows Media Player. The streaming technology developed in this group won an Emmy award in 2007.
"

It becomes interesting to know if your words have any meaning, Amir.

This is another simple test that I performed on a product of Microsoft that I downloaded from them around the year 2000:

This is the simple test signal:

[Attachment: WME 4.11 impulse test, reference waveform]

It is just a stream of simple impulses.
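Such a signal is trivial to generate; a minimal sketch, with spacing and amplitude that are my guesses rather than the exact file pictured:

# Minimal sketch: a stream of isolated impulses as a codec stress signal.
import numpy as np
from scipy.io import wavfile

fs = 44100
x = np.zeros(fs * 2)               # two seconds of silence
x[::fs // 5] = 0.9                 # one impulse every 200 ms
wavfile.write("impulses.wav", fs, (x * 32767).astype(np.int16))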

and this is that wave processed at 128 kbps by a WMA encoder I downloaded from the Microsoft web site about three years after you apparently became responsible for the WMA product:

[Attachment: the impulse stream after 128 kbps WMA encoding]

Note that the fourth and fifth impulses are severely trashed as demonstrated by a simple waveform presentation. They sounded bad, too!

I believe that the two trashed and obliterated impulses again make the point that measurements can and have been used to show the lack of performance of lossy codecs. I again illustrated it with a real world example involving software and parameters that people are likely to use, showing significant measured degradation of a relatively simple test signal. The product appears to be one that you must have approved for distribution. How did your listening tests serve you then, Amir? ;-)

Here's the important point. Reliable listening tests are always the final arbiter of sound quality. They actually are the gold standard that we use to determine the relevance of any measurements we might use for our own convenience. But as we all seem to agree, doing listening tests right can be time consuming and expensive. In many cases we can demonstrate clearly audible faults with relevant technical tests, and when we can do so, we save everybody a lot of time, trouble and even some money.
 
You did not invent ABX testing Arny. Please don't keep saying that.

I guess we can call the above "The Gospel According To Amir" ;-)

Here's what the rest of the world says:

http://en.wikipedia.org/wiki/ABX_test

wikipedia said:
History[edit]
In 1977 Arnold B. Krueger and Bern Muller, both members of the Southeastern Michigan Woofer and Tweeter Marching Society (SMWTMS), invented the ABX Double Blind Comparator System in order to settle a debate if differences between well constructed and level matched amplifiers are audible, Muller being pro and Krueger being against the possibility.[1] On May 7, 1977 SMWTMS organized the first three audio double blind listening tests using Krueger's and Muller's ABX Comparator. Consequent to the meeting a company that will manufacture and sell the ABX Comparators was formed under the name ABX Corporation. Later David Clark, member of the Audio Engineering Society (AES), will continue to refine, promote and market the ABX Comparator.[2][3]

On September 22, 1999 Krueger launched a website fully dedicated to educating about ABX testing with included software applications for download.[4]



And ABX testing is not used for lossy codec evaluation anyway.

http://en.wikipedia.org/wiki/Codec_listening_test

Wikipedia said:
Testing methods[edit]
ABX test[edit]
Main article: ABX test
In an ABX test, the listener has to identify an unknown sample X as being A or B, with A (usually the original) and B (usually the encoded version) available for reference. The outcome of a test must be statistically significant. This setup ensures that the listener is not biased by his/her expectations, and that the outcome is not likely to be the result of chance. If sample X cannot be determined reliably with a low p-value in a predetermined number of trials, then the null hypothesis cannot be rejected and it cannot be proved that there is a perceptible difference between samples A and B. This usually indicates that the encoded version will actually be transparent to the listener.

ABC/HR test[edit]
In an ABC/HR test, C is the original which is always available for reference. A and B are the original and the encoded version in randomized order. The listener must first distinguish the encoded version from the original (which is the Hidden Reference that the "HR" in ABC/HR stands for), prior to assigning a score as a subjective judgment of the quality. Different encoded versions can be compared against each other using these scores.

MUSHRA[edit]
Main article: MUSHRA
In MUSHRA (MUltiple Stimuli with Hidden Reference and Anchor), the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. The purpose of the anchor(s) is to make the scale be closer to an "absolute scale", making sure that minor artifacts are not rated as having very bad quality
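The "statistically significant" requirement in the ABX description above boils down to a one-sided binomial test: the chance of getting at least that many trials right by pure guessing. A minimal sketch:

# Minimal sketch: one-sided binomial p-value for an ABX run.
from math import comb

def abx_p_value(correct, trials):
    # P(at least `correct` right out of `trials` when guessing at 50%)
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # 12 of 16 correct -> p ~ 0.038, under 0.05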


As they say Amir: "Make My Day" ;-)
 
I showed that measurements can and have been used to show the lack of performance of lossy codecs.
And I have said repeatedly that doing so indicates a complete lack of understanding of how these systems work, and of how we characterize their performance.

You can't continue being your own expert witness when you have no expertise in this field, Arny. Show me references from AES papers. Then we can talk.

Now let's look at some more facts:

https://www.linkedin.com/pub/amir-majidimehr/5/4a7/1

"Corporate Vice President Microsoft
September 1997 – January 2008 (10 years 5 months)
Ran the digital media division (~1000 employees) which included software development, marketing and business development for the entire suite of audio, video and digital imaging for Microsoft and consumer electronic devices. Group created such well-known technologies such as WMA audio compression, WMV/VC-1 video compression (mandatory in Blu-ray format), Windows Media DRM, and Windows Media Player. The streaming technology developed in this group won an Emmy award in 2007.
"

That's right. You are arguing with someone who did this work for a living. This is my domain of expertise Arny. You can't just throw stuff at the wall and hope that it sticks.

Come back with a reference from someone who knows this field.

This is another simple test that I performed on a product of Microsoft that I downloaded from them around the year 2000:

This is the simple test signal:

[Attachment 18288: WME 4.11 impulse test, reference waveform]

It is just a stream of simple impulses.

and this is that wave processed at 128 kbps by a WMA encoder I downloaded from the Microsoft web site about three years after you apparently became responsible for the WMA product:

[Attachment 18289: the impulse stream after 128 kbps WMA encoding]
The very first group I managed at Microsoft was the audio/video compression group. This was back in 1998 or so. WMA development had started a bit before that. I worked intensively with the team to substantially improve its performance prior to release in 1999. I worked hand in hand with my PhD researchers and developers to build this technology. This is what I know Arny. This is what I did.

And your measurement means absolutely nothing. When it comes to lossy audio compression, performance on synthetic signals like the ones you used is not of any importance. Show me where MPEG uses a signal like that. You won't find it. The way we get away with throwing away more than 90% of the audio signal, while making the remaining 10% of the bits sound close to the original, is that we optimize the compression for music, not test signals.

For example, every lossy codec has an entropy (lossless) compressor on the back-end (post quantization). The code book for that entropy coder comes from analysis of a large number of music files, not test signals.
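As a toy illustration of that training step - the symbol counts below are invented, and real codecs use more elaborate schemes than plain Huffman coding - a code book can be derived from corpus statistics like this:

# Toy sketch: build an entropy-coder code book from corpus statistics.
import heapq
from itertools import count

def huffman_code(freq_by_symbol):
    tick = count()   # tie-breaker so the heap never compares dicts
    heap = [(f, next(tick), {sym: ""}) for sym, f in freq_by_symbol.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

# Invented histogram of quantized coefficients; in a real codec this
# would be gathered over a large music corpus, not test signals.
counts = {0: 9120, 1: 4310, -1: 4290, 2: 1050, -2: 990, 3: 240}
for sym, code in sorted(huffman_code(counts).items(), key=lambda kv: len(kv[1])):
    print(sym, code)   # the common small values get the short codes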

I can go on, but again, the more you post, the more you show that you will jump into technical topics in which you have no professional or educational experience. You post stuff that makes sense in your lay/hobbyist mind, which is fine in some situations. But not here. This is a complex technical area that you cannot learn about by making assumptions of what sounds right to you, or from tidbits you have read online. I learned what I know from directly interacting for years with people who understand this technology and have extreme education and professional experience to show for it.

I believe that the two trashed and obliterated impulses again make the point that measurements can and have been used to show the lack of performance of lossy codecs.
No, you have not come within a mile of doing so. You continue to look in the trunk of the car for the engine.

Yes, you can use frequency response measurements to detect pre-filtering that the encoder performs to reduce signal entropy, as high frequencies are much harder to encode. Beyond that, and in the case of encoders which (at sufficient bit rates) don't pre-filter, even that measurement is useless.

Analysis of the performance of a lossy codec happens in two domains:

1. Their design. The components of the codec can provide clues to experts in the field as to how well the codec could perform. If you don't have more than one window size, I can tell you right away that you are going to lose efficiency as bit rates go lower. If you do have more than one window, then accurate transient detection becomes key, which is a hard problem to solve in itself (a minimal sketch of one simple detector appears after item 2 below). The MPEG reference codec test clip, Castanets, is designed to tease out this problem. The rest of the MPEG reference music clips - and they are all music, not synthetic signals - are likewise designed with full knowledge of the codec design to find audible problems.

The above shows the major failing of listening tests in other domains, where people with no knowledge of how the system generates distortion randomly select test signals. You are shooting in the dark that way. But we digress.

2. Listening tests. This always starts with the MPEG reference clips and then expands to a set of music files that are likewise revealing. MPEG clips tend to be optimized for MPEG technologies (such as AAC), as the codecs there were highly optimized for those clips. So you always want to augment them with other selections. I used to have about 100 tracks, out of thousands in my library, that I had found to be suitable here. Again, none were synthetic clips. Always music.

That is it. No measurements. Using one as you are doing would get you booted and dismissed as knowing absolutely nothing about these systems.
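To make the window-switching point from item 1 concrete, here is a minimal sketch of one simple energy-ratio transient detector; the block size and threshold are illustrative, not taken from any shipping codec.

# Minimal sketch: energy-based transient detection for window switching.
import numpy as np

def choose_windows(x, block=576, ratio=4.0):
    # 'short' when a block's energy jumps sharply vs. the previous one
    # (transient -> short windows limit pre-echo); otherwise 'long'
    # (better frequency resolution for steady material).
    n_blocks = len(x) // block
    energy = [np.sum(x[i*block:(i+1)*block] ** 2) + 1e-12
              for i in range(n_blocks)]
    decisions = ["long"]
    for prev, cur in zip(energy, energy[1:]):
        decisions.append("short" if cur / prev > ratio else "long")
    return decisions

# A castanet-like click in otherwise quiet material trips the detector.
fs = 44100
x = 0.01 * np.random.randn(fs // 5)
x[len(x) // 2] = 0.9
print(choose_windows(x))    # mostly 'long', with 'short' at the click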

Net, net, you can't use static measurements to determine the performance of dynamic, data-dependent systems. You just can't. You can chase fool's gold with your measurements, but such a shortcut does not exist - or else the industry and researchers would use it.
 
I guess we can call the above "The Gospel According To Amir" ;-)

Here's what the rest of the world says:

http://en.wikipedia.org/wiki/ABX_test
Sorry, no. A big NO! :D

The world said no such thing. The wiki is an open encyclopedia where anyone can wake up in the morning and edit any entry. That edit in no way, shape or form indicates what the "world says."

The good news is that the Wiki keeps track of all such edits, so we can go and look to see who edited that page and put your name in there. Here it is:

[Attachment: Wikipedia edit history for the ABX test article]


We see on the left that the April *2014* version of that wiki page - i.e., this year's - has nothing with your name in it. Someone with the alias "Pandovski" decided to put that entry in there in May of this year! The reference he used for that tidbit was an audio dictionary:

White, Glenn; Louie, Gary J. (2005). The Audio Dictionary (3rd ed.). University of Washington Press. p. 6. ISBN 0295984988.

If we look in there, we just see the cut and paste into the Wiki:

[Attachment: the corresponding entry in The Audio Dictionary]


No references are provided as to where those authors got their information. In other words, they too are relying on Internet folklore, and at any rate they don't say you are the sole inventor.

Fortunately there is authoritative documentation of who invented ABX testing. This is the original "invention" of ABX, dating back to 1950, by two Bell Labs luminaries, as I pointed out to you in the HA forum:

[Attachment: excerpt from the 1950 JASA paper describing the ABX procedure]


That kind of reference we can bank on, since it is documented in the Journal of the Acoustical Society of America - not some audio dictionary book which was quoted in the Wiki by one person.

You have to be careful in referencing the Wiki. You need to know the topic better than the reference, or you fall in the ditch, as is the case here.

As they say Amir: "Make My Day" ;-)
I don't know about making your day but I hope we are done once and for all with you saying you "invented ABX." It is highly unethical to claim credit for other people's work. It is not cute. It is a serious matter.

It is time to set the record straight Arny and not propagate this fish story that you invented ABX. You built a hardware ABX comparator. You did that. That is not an invention, let alone the invention of the general technique itself, nor of anything that is used to evaluate lossy audio codecs.
 
Looks like wikipedia edit time.
 
Since audio signals are two dimensional (time and amplitude) there is only a very short list (N=4) of things that can go wrong. They are: Linear Distortion (FR and phase), Nonlinear Distortion (IM, THD, jitter), random Noise (usually due to thermal effects) and Interfering Signals such as hum. The means for measuring all of these problems have been known for decades and in modern times are easy enough to actually measure for yourself with an investment that is by high end audio standards chump change. Most good modern audio gear reduces all of these potentially destructive influences to orders of magnitude below audibility. Worrying about these things is for chumps.

The real problem is the typical subjectivist audiophile's lack of scientific knowledge and corresponding distrust of science. Most of these people, even most leading audiophile subjectivist journalists, lack the credentials and knowledge required to understand why the previous paragraph is true and what it means.

Subjectivists generally loathe proper listening tests (they are taught to do so by self-serving sales hacks disguised as technicians). Among them there are a few posers (at least one who posts here frequently) who give proper technology and listening tests abundant lip service, but still as a rule don't actually benefit from them. Since the alternative is sighted evaluations with a proven huge propensity for false positives, these people have locked themselves up in a logic-tight box. Let them waste their time and their money with products like these: https://sites.google.com/site/jkciunas/ciunas-dac

I take exception to your simplistic enumeration of things that can go wrong. It looks like the kind of oversimplification that a technician would come up with, not an engineer or scientist. Without knowing more about your knowledge base and level of understanding, I'm not sure of the best way to clarify my differences with your first paragraph. So I am going to put it aside for a while.

Instead, I'm going to focus on a simple subject, not one that I ordinarily place much emphasis on, namely "credentials". I raise this issue since in your post you seem to use credentials as a way of dismissing people.

Arny, what are your credentials?
 
Tony, forget about Arny - he's tripping himself up on HA instead of here.

I started another thread on Very Low Frequencies which leads me further along the logical path I've outlined in this thread.
 
