The Upgrade Company

It should be noted that Harman's investigations have shown that trained listeners do not yield different results vs. untrained, but do yield results more quickly i.e. with fewer trials. Given the small # of trials you're (perforce) conducting, Jeff, it can't hurt.
 
It should be noted that Harman's investigations have shown that trained listeners do not yield different results vs. untrained, but do yield results more quickly i.e. with fewer trials. Given the small # of trials you're (perforce) conducting, Jeff, it can't hurt.
As much as I am a fan of Harman's work, I think this interpretation/statement by them is not quite right. In the case of *non-linear* distortion, expert listeners have the ability to find and hear distortion that others simply cannot. I have asked Dr. Toole about it and he agrees. Their statement above applies to linear distortions such as frequency response changes, and there, expert listener preferences match those of the general public, which is the point they really like to make.

I have a ton of experience with the above in the area of compressed music, and expert listeners easily outperform more than 90% of the public in hearing artifacts that the masses simply cannot hear. I know, I used to be such an expert listener :) I can't tell you how many times I would get blank looks when I would ask, "can't you hear that?" What was obvious to me was not at all obvious to others. I was just like them prior to the training (which lasted about 6 months).
 
An internal control is, say in biochemistry, adding a known concentration of the agent/chemical [being tested] to the system and seeing if the reading/results obtained represent that known quantity. If not, there's something wrong with the assay system. For audio, it might be adding a known distortion, non-linearity, FR or phase response deviation, etc. to the DUT/system that should be audible and seeing if, in fact, the participants can correctly ID that problem. If they can't, then there's something wrong with the system. I'd also suggest that that "distortion, non-linearity, FR response deviation, etc." be at the borderline of detectability, not at maximum levels.
In principle, I agree. The problem with its application here is that one has to determine which parameter of sound is being tested. If one presumed it was FR or noise or distortion, an internal standard with a predetermined offset could be employed. However, standard listening tests like this are multi-parametric, as is normal listening itself. Perhaps if someone could objectively demonstrate a measurable difference between the units, ABX listening could be used to determine whether that parameter contributed to a perceived difference.
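As an illustration of what an "internal standard with a predetermined offset" might look like in practice, here is a minimal Python sketch that mixes a known, small amount of second-harmonic distortion into a reference signal. The 1 kHz tone and the 1% level are purely illustrative assumptions, not values anyone in this thread proposed.

import numpy as np

fs = 48_000
t = np.arange(fs) / fs                          # 1 second of samples
reference = 0.5 * np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone

k = 0.01                                        # known, predetermined distortion amount
control = reference + k * reference**2          # adds a small 2nd-harmonic (plus tiny DC) term

# If listeners cannot tell `control` from `reference` blind, the setup is
# not sensitive enough for the question being asked.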
 
In principle, I agree. The problem with its application here is that one has to determine which parameter of sound is being tested. If one presumed it was FR or noise or distortion, an internal standard with a predetermined offset could be employed. However, standard listening tests like this are multi-parametric, as is normal listening itself. Perhaps if someone could objectively demonstrate a measurable difference between the units, ABX listening could be used to determine whether that parameter contributed to a perceived difference.
That is a good point, Kal. In general, controls are put in there to catch gross issues and get rid of "deaf" people. For example, in the test of compressed music, one clip is put in there that has everything above 12 kHz filtered out. If a tester can't hear that, he is excluded from the statistics even though losing high frequencies that way is not something that codecs are being tested for.
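For readers unfamiliar with such anchor clips, here is a minimal sketch of how a "nothing above 12 kHz" control clip could be generated, assuming SciPy and a source file named reference_clip.wav; the file name, filter order, and sample rate are illustrative, not taken from any actual codec test.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("reference_clip.wav")      # hypothetical source clip (44.1 or 48 kHz)
audio = audio.astype(np.float64)

# 8th-order Butterworth low-pass at 12 kHz, applied zero-phase
sos = butter(8, 12_000, btype="low", fs=rate, output="sos")
filtered = sosfiltfilt(sos, audio, axis=0)

# Listeners who cannot distinguish this clip from the original are screened out
wavfile.write("control_clip_12k.wav", rate, filtered.astype(np.int16))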
 
I know exactly what you mean, Amir. I get those looks often and it can be rather uncomfortable at times. Every once in a while, when I describe over and over what I'm hearing in fine detail, someone will get what I'm hearing. At least they say they do. So, with that said, I can agree with Dr. Toole.
 
That is a good point, Kal. In general, controls are put in there to catch gross issues and get rid of "deaf" people. For example, in the test of compressed music, one clip is put in there that has everything above 12 kHz filtered out. If a tester can't hear that, he is excluded from the statistics even though losing high frequencies that way is not something that codecs are being tested for.

Yes, there are two things you are doing, Amir, and you have mentioned them before. 1) You used specific pieces of music to test; 2) You did an internal control. ;)
 
It should be noted that Harman's investigations have shown that trained listeners do not yield different results vs. untrained, but do yield results more quickly i.e. with fewer trials. Given the small # of trials you're (perforce) conducting, Jeff, it can't hurt.

What I also meant in part, and something that Kevin Voecks talked about in his RMAF demo, was the ability of more seasoned listeners to "hear" and ID where peaks and dips occurred in a speaker's FR. Single dips or peaks could mostly be heard by both trained listeners and novices. With each additional peak/dip, it became harder for the untrained listeners to ID what was happening.

While I have some reservations about the actual speaker testing procedure, I think that listening to different types of aberrations can only make one a more perceptive listener :)
 
That is a good point, Kal. In general, controls are put in there to catch gross issues and get rid of "deaf" people. For example, in the test of compressed music, one clip is put in there that has everything above 12 kHz filtered out. If a tester can't hear that, he is excluded from the statistics even though losing high frequencies that way is not something that codecs are being tested for.
Well, in that sense, the ABX test is a control for the preference test. OTOH, while I do not object to the inclusion of additional controls, I am actually not doing these tests or participating in them and getting enough people and enough trials seems to be a problem. In science, it would be incumbent on us to proceed until we had statistically valid data but I doubt there are sufficient resources here.
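For context on what "statistically valid" means for an ABX run, here is a quick sketch of the usual binomial arithmetic under the common null hypothesis that the listener is guessing; the trial counts below are only examples, not anything proposed in this thread.

from scipy.stats import binom

def abx_p_value(correct: int, trials: int) -> float:
    # One-sided probability of scoring at least `correct` by coin-flip guessing
    return binom.sf(correct - 1, trials, 0.5)

print(abx_p_value(12, 16))   # ~0.038 -> conventionally counted as significant
print(abx_p_value(7, 10))    # ~0.172 -> too few trials to conclude anything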
 
Don't play dumb.

TBH I was not playing dumb. If, however, I got your concept of internal control wrong, so be it.

So let's take it that I was (actually) dumb, i.e. I was not playing anything. How about, in any case, addressing the still relevant points?

See, the main thing I took away from your internal control is that it is a means of testing the sensitivity of the system to detect the changes under investigation. You would not (sorry) use bathroom scales to mix chemicals for a reaction. Additionally, the internal control must at least have some bearing on the investigation; there is no point in testing the sensitivity to weight when we are after the chemical makeup.

So, when we are after the ability of listeners to detect mods made to an identical unit sighted vs. blinded (let's not forget that point... it's not as if they are two completely different units, which would at least give a reasonable first basis for them to sound different), then surely finding out if they sound different sighted IS a good control. There is no point doing the second part of the test if this first step is not accomplished.

That HAS tested the sensitivity of the system to detect changes under the conditions ALL these reviews are conducted under... sighted and with full knowledge of the identity of the unit involved. In that regard, my 'internal control' has far more relevance than 'can the guy tell us what the introduced peak in the signal is?'

You are perfectly able, if you wish, to provide us with a specific example of what you consider an appropriate internal control instead of ridiculing my suggestion, and you can go further and explain why your internal control (not yet provided except as a diversion from the results) tests the particular sensitivity in question better than mine does.

The problem is also that the ABXers assume that 100% of the people will hear the difference. Say that's not so and only 30% of the listeners can hear a difference between DUT#1 and DUT#2.

Really? TBH I would have thought it somewhat reversed. Let me ask this way: can you link to some examples of reviews where the reviewer makes some sort of point along these lines? I dunno, maybe he says 'These differences are subtle, possibly only thirty percent or so of audiophiles will hear them'. What proportion of reviews, in your experience, would you say fall along these lines?

In any case, you did not address the last line of my post, I asked you 'Quickly, what if anything would you need to accept a negative result?'

Would you have brought up the question of an internal control if during the blinded part they did successfully identify the unit? (that, btw, is an additional question, the 'what if anything would you need to accept a negative result?' is standalone)
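As a side note on the "only 30% of listeners can hear it" scenario quoted above, here is a rough sketch of why a pooled panel drifts toward chance and therefore needs many trials to show anything; every number here is hypothetical.

from scipy.stats import binom

p_detectors = 0.30                                      # assumed fraction who truly hear it
p_pooled = p_detectors * 1.0 + (1 - p_detectors) * 0.5  # = 0.65 correct on average

trials, criterion = 100, 60          # binom.sf(59, 100, 0.5) ~ 0.028 under pure guessing
power = binom.sf(criterion - 1, trials, p_pooled)
print(p_pooled, power)               # ~0.65, ~0.87 -> detectable, but only with many pooled trials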
 
It should be noted that Harman's investigations have shown that trained listeners do not yield different results vs. untrained, but do yield results more quickly i.e. with fewer trials. Given the small # of trials you're (perforce) conducting, Jeff, it can't hurt.
Ken, I think it depends on what you call trained listeners. Potential examiners are subjected to obvious differences to ascertain whether they have the ability to notice differences at all.

Another drawback of ABX is the total lack of a control group.
 
While I have some reservations about the actual speaker testing procedure, I think that listening to different types of aberrations can only make one a more perceptive listener :)

A parallel in the wine world is the tasting courses that begin with samples of basic flavors accompanied by descriptions. The courses become more and more complex until you can blind taste three wines and identify the grape, the terroir and the vintage ... which makes you a master sommelier. :)
 
In any case, you did not address the last line of my post, I asked you 'Quickly, what if anything would you need to accept a negative result?'

Would you have brought up the question of an internal control if during the blinded part they did successfully identify the unit? (that, btw, is an additional question, the 'what if anything would you need to accept a negative result?' is standalone)
Tests like this are only questioned if the results do not match the outcome the questioner knows to be the correct one.
 
Somebody is not telling the truth

WOW!!

Even if I hadn't already read the many, many, many negative comments about this company (yes, TUC, go do a Google search for complaints about TUC instead of calling me a liar) before deciding not to contact them, after reading his posts here I would certainly never do business with them.

While I can certainly understand a company wanting to defend their reputation, his approach here is certainly not one that will endear his company to future prospects.

WOW!
I recently had the pleasure of visiting David to have some of my stuff upgraded. I have seen with my own eyes what is done and the great care that goes into it. Believe me, it is not what some naysayers have said. But the real proof is in the results, which were fantastic. Take, for instance, the upgraded Marantz 7007. Its sound is the best I have ever heard, and I have been into audio since '66. Another piece which was done is an old Phase Linear amp, which when completed sounded better than a $5,500 modern amp. An upgraded Marantz AV7701 surpassed the original's video and audio by a mile. As to his approach, I think if you had put your heart and soul into a company for 33 years, you would be a little touchy when someone tried to degrade your work. Heck, I'm offended because I know it works great.
 
Sooooo..... you're saying that someone with a different opinion to your own should be run off the forum?

Nick
 
I have 19 posts in this thread, 28 posts in your AVS thread of 2011, and owned 6 TUC players and processors over 5 years, so I think I'm qualified to comment.

I'm also very well aware of what you did in 2012, and I don't think it changes much; it tells us more about ABX testing than it does about TUC products.

Nick
 
