Discussion of ABX results of Winer's loopback files

Well, one of the guys over on Pink Fish who identified the 20-pass file versus the original with 100% accuracy (as did I), and the one-pass file versus the original with 98.9% accuracy (I think), may have been encouraged to try hard by reports that others had already done so with 100% success. However, the same guy recently hosted a properly controlled double-blind ABX test of DACs of hugely varying price, and a follow-up sighted but level-matched test of DACs.

At the beginning of the first test the participants listened to the various DACs sighted first, and several said they heard differences, but blind, none could reliably differentiate them. No differences were found during the follow-up sighted but level matched test either, which I believe many of the same people took part in.

Based on all the reviews, manufacturers' claims and user reports of night-and-day differences between DACs, and the differences observed sighted before the blind testing, one might expect that the participants were trying just as hard, and indeed expecting, to spot any differences during the ABX testing of these DACs, and trying even harder in the follow-up test (otherwise, why bother?). Indeed, by successfully differentiating the one-pass file from the original (98.9% success) on the same system (his own), the host showed that he has no hearing issues, nor is there any doubt that his system is revealing enough.

So whilst there is no doubt that audible differences exist (and are explainable) between Ethan's loopback files, should there be doubt whether differences between the DACs tested previously may have been present but weren't detected? And if so, why?
If I asked you to tell the difference in temperature between a hot oven and your dining room table with your thumb alone, you could do that. If I asked you to do the same between different parts of the table, you could not, right? But if I gave you an infrared thermograph, you could tell the difference.

File-based ABX gives us far better capability to identify differences than real-time playback, where we switch streams but never re-hear what just passed. We need to hear the same part of the track and compare it quickly to the other version before our memory fades. We can't carry the difference forward to different segments as we switch DACs or whatever.

Now, does this indicate that the differences must be small? Likely, but it also indicates that we get a ton of our negative outcomes in DBTs from using the thumb test. We don't test the sensitivity of such test fixtures or consider their limitations.

It is important in my view that we find differences no matter how small if they are audible. We can then work to remove them.
 
Another thing ABX tests reveal is that younger people can hear differences that those over, say, thirty-five can't. This is probably because by that age you've lost about 10 dB of hearing sensitivity at 3 kHz and above, which is the equivalent of half the volume over the part of the spectrum that gives clarity and intelligibility. It may also explain why thirty-five is the age at which many people used to take up hi fi; no longer, though, because headphones have more or less killed it.
Yet, despite being much older than 35, I am able to detect differences that many younger people cannot. The reason is definitely not that I hear better. I absolutely do not -- a fact that my son and wife constantly point out as they come to me and ask, "what is that loud high-pitched sound?" I ask, what high-pitched sound? They say, "can't you hear that?" I say no. I then look at my computer monitor and notice that I have been playing a 12 kHz tone all along that was totally inaudible to me, yet they heard it from somewhere else in the house!

So it is not 'hearing' that is important but rather 'listening' ability. Take an X-ray. There is no way I can analyze it as well as a radiologist who is trained to know what to look for. We both have eyes, and maybe my eyes are better than his. But his ability to diagnose using his visual system is far superior to mine. What looks like a random dark spot to me may indicate a serious issue to him.
 
Many people can hear the degradation after 20 passes, and Amir showed he could hear it after only five passes.
Correction - he heard it after 1 pass (after training himself using the 5 pass file)
This does not surprise me. But I used multiple passes only to make the degradation more obvious. Nobody listens or uses converters this way! If the goal is to determine how much money one needs to spend on a DAC - and I think it should be! - then testing a single generation makes far more sense. That's what really matters.
Exactly so - 1 pass is what Amir & others could hear differences in.
More to the point, that test was done using an old SoundBlaster sound card that cost $25. Few serious audiophiles are using or considering buying stuff like that. I'm much more interested in learning how many people can pick out the original versus one generation of the more typical "high end" converter I used in this follow-up test:
Sure, you may NOW be interested in doing another test with a better soundcard than this SoundBlaster Live card, but some would like to know your analysis of the results of this test first. Can you tell us why, based on your measurements of this card, you thought it wise to set up this test? Based on your stated transparency criteria, after how many passes did/do you expect it to no longer be transparent - none, one or five? And can you tell us why you would say this in 2010 about this SoundBlaster Live soundcard, when asked the question:
Q: From a PURELY SONIC standpoint (setting aside professional design traits (like balanced connectors) and market perception issues), do you think the Soundblaster card is good enough for professional studios?
A: Sure, why not? There might be better converters, but there are always "better" everythings in pro audio. Do you use the very best DI ever made? Are your monitor speakers the very best that have ever been made regardless of price?
Or this:
When considering only fidelity and accuracy, even my $25 SB card beats the finest analog recorder in every way one could possibly assess fidelity. This is easily proven by measuring the four parameters that define everything affecting audio reproduction.

I know you come here already armed with your answers but I'm sure people would like to read them, anyway.
 
If I asked you to tell the difference in temperature between a hot oven and your dining room table with your thumb alone, you could do that. If I asked you to do the same between different parts of the table, you could not, right? But if I gave you an infrared thermograph, you could tell the difference.

File-based ABX gives us far better capability to identify differences than real-time playback, where we switch streams but never re-hear what just passed. We need to hear the same part of the track and compare it quickly to the other version before our memory fades. We can't carry the difference forward to different segments as we switch DACs or whatever.

Agreed, hence why I'm very sceptical of the validity of long-term comparative listening as any kind of meaningful differentiator.

Now, does this indicate that the differences must be small? Likely, but it also indicates that we get a ton of our negative outcomes in DBTs from using the thumb test. We don't test the sensitivity of such test fixtures or consider their limitations.
Small differences, yes, but when differences aren't found in a DBT I'm more inclined to think they are non-existent or irrelevant rather than small, as opposed to assuming the fault is with the test procedure and that differences were missed. Having said that, I do personally think the file-based model is best, and thus any way one can get closer to that experience when testing different components will be beneficial.
 
As I said, Jk, ABX or AB is absolute and you hear which is best. However, the truth is that it's been some time since the difference between competent DACs was audible or worth bothering to compare.
So when is the last time you ABXed some decent DACs?

We bought three AV processors a couple of years ago: a Denon professional one, an expensive Rotel AV pre-only unit and a Yamaha RVX 667. There wasn't any audible difference between the digital sections of any of them; the Rotel had the worst OS and the Yamaha's power amps weren't marvellous. This year even their power amps are superb.
Did you ABX them, or compare them sighted? You say nothing about how you connected them digitally (HDMI, SPDIF optical or coaxial) & to what DAC?

Now I doubt anyone on this forum would hear any difference between an iPhone, a Weiss or a Benchmark in a properly conducted test,
Indeed, you might doubt it but do you have any evidence on which this doubt is based?
but there'd be a country mile between B & W, Focal, MA and Spendor and everyone would hear that easily.

And that's audiophiles for you. ;)
Yes, the big stuff (speakers, rooms) is important - nobody denies that but it doesn't mean that everything else should therefore be ignored. Some believe that there is much to be gained by getting the source as absolutely pristine as possible as it can't be improved later in the chain.

Ashley, you mentioned on the other thread, the blind tests that you run. Maybe it would be beneficial to all for you to outline the factors needed for a properly run blind test?
 
I don't remember the year JK, but early this century we bought DAC evaluation PCBs from leading DAC manufacturers. These are a technically perfect implementation of each company's chipset intended to help companies like us design our own version and test it against theirs to ensure we've got it right.

They're also for us to evaluate their sound quality.

Using microprocessor-controlled reed-relay switches and unity-gain buffers for switching between them, and very careful (electronic) level matching, we were able, with an IR handset, to switch instantaneously between three at a time without knowing which we were hearing. We set the test up because there was no other way we could hear any difference. We still couldn't. This was done for our benefit alone, to decide which we should use, so there was no axe to grind; we weren't comparing our designs but the clever people's, the ones who'd designed the chipsets.
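For anyone wanting to try something similar in software, here is a minimal sketch of the level-matching arithmetic alone, assuming the outputs have been captured as sample buffers. The 0.1 dB tolerance and all the names are illustrative assumptions on my part, not details of the rig described above, which did this electronically in hardware.

import numpy as np

def rms_dbfs(x):
    # RMS level of a float sample buffer, in dB relative to full scale.
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-20)

def level_trim_db(reference, device_under_test, tolerance_db=0.1):
    # Gain (dB) to apply to the DUT so its RMS level matches the reference.
    trim = rms_dbfs(reference) - rms_dbfs(device_under_test)
    return trim, abs(trim) <= tolerance_db

# Synthetic example: the DUT capture is 0.5 dB quieter than the reference.
t = np.arange(48000) / 48000
ref = np.sin(2 * np.pi * 1000 * t)
dut = ref * 10 ** (-0.5 / 20)
trim, matched = level_trim_db(ref, dut)
print(f"apply {trim:+.2f} dB to the DUT (already matched: {matched})")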

Because we could not hear any difference, I set the system up so that anyone who visited could make the same comparison, without even knowing whether they were hearing anything different, just by pressing one of three buttons on a handset from the listening position.

We have professional sound engineers, classical sound engineers and ex-BBC sound people, as well as the odd dealer and possible customers, whom we can ask to participate, and none of them, in this instance, was able to hear any difference; to be frank, there was no reason why they should. The DAC boards all measured better than 16-bit, better than the amps and the speakers, let alone the CDs.

Since then we've worried rather less about DACs because they're better still now. You can imagine how perplexed I am every time I see arguments suggesting differences and people believing they hear them. There are areas in hi fi where massive improvement can be made, but I'd bet a million pounds to a penny you wouldn't be able to hear a difference between an iPhone and any expensive audiophile DAC, assuming the latter was correctly implemented.

In 1990 I had a £2000 NEC CD player that used twin Burr-Brown dual co-linear 18-bit DACs and everything else that was best at the time, and I doubt you'd hear any difference between that and the best of today either. You might just hear one with the Sony CDP-555ESD that preceded it; it had a Crown S1 TDA1541 and Sony's rather better digital filter.
 
Many people can hear the degradation after 20 passes, and Amir showed he could hear it after only five passes.
I did both, Ethan, as I reported in my original post. Here is the one-pass result again:

====

Above I am showing my search for the critical section. So when I tested the single generational loss (i.e. the "most difficult" case), I knew what to listen for:

foo_abx 1.3.4 report
foobar2000 v1.3.2
2014/07/18 06:40:07

File A: C:\Users\Amir\Music\Ethan Soundblaster\sb20x_original.wav
File B: C:\Users\Amir\Music\Ethan Soundblaster\sb20x_pass1.wav


06:40:07 : Test started.
06:41:03 : 01/01 50.0%
06:41:16 : 02/02 25.0%
06:41:24 : 03/03 12.5%
06:41:33 : 04/04 6.3%
06:41:53 : 05/05 3.1%
06:42:02 : 06/06 1.6%
06:42:22 : 07/07 0.8%
06:42:34 : 08/08 0.4%
06:42:43 : 09/09 0.2%
06:42:56 : 10/10 0.1%
06:43:08 : 11/11 0.0%
06:43:16 : Test finished.

----------
Total: 11/11 (0.0%)

======
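For anyone wondering where those percentages come from: each figure is, as I read the ABX comparator's output, the one-sided binomial probability of scoring at least that many correct by guessing alone. A minimal sketch of the calculation, with illustrative names of my own (not part of any foobar2000 API):

from math import comb

def guessing_probability(correct, trials):
    # Probability of scoring >= correct out of trials by coin-flipping alone.
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f" 4/4  -> {guessing_probability(4, 4):.2%}")    # 6.25%  (log shows 6.3%)
print(f"10/10 -> {guessing_probability(10, 10):.2%}")  # 0.10%  (log shows 0.1%)
print(f"11/11 -> {guessing_probability(11, 11):.2%}")  # 0.05%  (log shows 0.0%)

The same calculation applies to any k-of-n run reported by the comparator.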

This does not surprise me. But I used multiple passes only to make the degradation more obvious. Nobody listens or uses converters this way! If the goal is to determine how much money one needs to spend on a DAC - and I think it should be! - then testing a single generation makes far more sense. That's what really matters. More to the point, that test was done using an old SoundBlaster sound card that cost $25. Few serious audiophiles are using or considering buying stuff like that. I'm much more interested in learning how many people can pick out the original versus one generation of the more typical "high end" converter I used in this follow-up test:

Converter Loop-Back Tests
Ah, didn't realize there was a second set of files on that page. Thanks for upping the fidelity and providing the variety. I gave one set of files a try. Here is how I did:

foo_abx 1.3.4 report
foobar2000 v1.3.2
2014/07/29 14:05:31

File A: C:\Users\Amir\Music\Ethan's new generational loss files\focusrite_3a.wav
File B: C:\Users\Amir\Music\Ethan's new generational loss files\focusrite_3c.wav

14:05:31 : Test started.
14:05:50 : 00/01 100.0%
14:06:18 : 01/02 75.0%
14:06:24 : 02/03 50.0%
14:06:31 : 02/04 68.8% -<<< Difference found
14:06:42 : 03/05 50.0%
14:06:53 : 04/06 34.4%
14:07:02 : 05/07 22.7%
14:07:19 : 06/08 14.5%
14:07:35 : 07/09 9.0%
14:08:01 : 08/10 5.5%
14:08:12 : 09/11 3.3%
14:08:31 : 10/12 1.9%
14:08:54 : 11/13 1.1%
14:09:32 : 11/14 2.9% <--- Dog barked. :D
14:09:52 : 12/15 1.8%
14:10:03 : 13/16 1.1%
14:10:19 : 14/17 0.6%
14:10:53 : 15/18 0.4%
14:11:33 : 16/19 0.2%
14:12:47 : 17/20 0.1%
14:13:18 : 18/21 0.1%
14:13:39 : 19/22 0.0%
14:13:41 : Test finished.

----------
Total: 19/22 (0.0%)
 
BTW Ethan, I believe there is an easy way to tell which file is the original. Please PM me and I will explain. I did not (and could not) take advantage of that in my results above as I was not trying to identify which is which.
 
I don't remember the year JK, but early this century we bought DAC evaluation PCBs from leading DAC manufacturers. These are a technically perfect implementation of each company's chipset intended to help companies like us design our own version and test it against theirs to ensure we've got it right.

They're also for us to evaluate their sound quality.

Using microprocessor-controlled reed-relay switches and unity-gain buffers for switching between them, and very careful (electronic) level matching, we were able, with an IR handset, to switch instantaneously between three at a time without knowing which we were hearing. We set the test up because there was no other way we could hear any difference. We still couldn't. This was done for our benefit alone, to decide which we should use, so there was no axe to grind; we weren't comparing our designs but the clever people's, the ones who'd designed the chipsets.

Because we could not hear any difference, I set the system up so that anyone who visited could make the same comparison, without even knowing whether they were hearing anything different, just by pressing one of three buttons on a handset from the listening position.
Sounds like you carefully considered the technical end of it. Well done! Although you don't tell us anything about the listening system, room, etc. Did you consider implementing any controls that might test the equipment's & listeners' acuity in hearing other known subtle differences? Did you give consideration to the types of music or test signals used that might best reveal differences?

We have professional sound engineers, classical sound engineers and ex-BBC sound people, as well as the odd dealer and possible customers, whom we can ask to participate, and none of them, in this instance, was able to hear any difference; to be frank, there was no reason why they should. The DAC boards all measured better than 16-bit, better than the amps and the speakers, let alone the CDs.

Since then we've worried rather less about DACs because they're better still now. You can imagine how perplexed I am every time I see arguments suggesting differences and people believing they hear them. There are areas in hi fi where massive improvement can be made, but I'd bet a million pounds to a penny you wouldn't be able to hear a difference between an iPhone and any expensive audiophile DAC, assuming the latter was correctly implemented.
I might take your challenge if you are serious, & I'm sure Amir would also be interested :)
 
Agreed, hence why I'm very sceptical of the validity of long-term comparative listening as any kind of meaningful differentiator.
What Amir is telling you is that Vital's (& other such blind tests) are flawed because of the way they are done - there is no instant switching to hear exactly the same extract of music that was listened to a second ago on another DAC. Typically they involve switching between DACs, & even if it is an instant switch, the music heard is a continuation of the track. So you are asking someone to compare an ongoing playback of a track with a different part of the track that they heard a number of seconds ago (if not more).

He's not talking about long term testing

Small differences, yes, but I'm more inclined to think non-existent or irrelevant rather than small when not found in DBT
See Amir's comments above. Unless you can deal with the obvious flaws in the test, you cannot use it as a measure of anything
, as opposed to assuming the fault is with the test procedure and differences were missed.
Are these flaws of no consequence? Not to mention listener training, use of embedded controls, test sample, statistical analysis, test tracks, order of presentation - the list goes on
Having said that I do personally think that the file based model is best and thus any way that one can get closer to that experience when testing different components will be beneficial.
Ok, so if you think file based ABX is best then the other testing methods must not be as precise, right? How do you suggest making DBTs more accurate?
 
Oops, Ethan, Amir has just passed your new 1 pass loopback test as he did your old 1 pass loopback test!!
I guess you need to try another sound card & test setup until you get it right?
Maybe, before doing so you could give your analysis of both the SoundBlaster Live 1 pass positive ABX results & the Focusrite 1 pass positive ABX results - both posted by Amir?
 
What Amir is telling you is that Vital's (& other such blind tests) are flawed because of the way they are done - there is no instant switching to hear exactly the same extract of music that was listened to a second ago on another DAC. Typically they involve switching between DACs, & even if it is an instant switch, the music heard is a continuation of the track. So you are asking someone to compare an ongoing playback of a track with a different part of the track that they heard a number of seconds ago (if not more).

I'm sure Amir will clarify himself what he may or may not have been implying.

He's not talking about long term testing

He has correctly pointed out how quickly audio memory fades, so, by implication, longer-term comparative listening can never come close in terms of accurately differentiating small differences, should they be audible.

See Amir's comments above. Unless you can deal with the obvious flaws in the test, you cannot use it as a measure of anything. Are these flaws of no consequence? Not to mention listener training, use of embedded controls, test sample, statistical analysis, test tracks, order of presentation - the list goes on

Flaws in your eyes, perhaps not everybody's.

Vital and I had no training yet could discern differences. The other things you mention can be catered for.

Ok, so if you think file based ABX is best then the other testing methods must not be as precise, right? How do you suggest making DBTs more accurate?

I see no reason why similarly short clips can't be replayed via one DAC, say, and then via another directly afterwards, having switched. So no, I don't think anything other than file-based tests is imprecise; rather, they would be just as good, IMO, if implemented similarly, as I've just detailed.

Of course it's also worth bearing in mind how, again by implication, sighted testing must be very unreliable, given the necessity for the stringent framework that ensures precise ABX testing, as outlined by yourself.
 
Flaws in your eyes, perhaps not everybody's.
Yes, weaknesses is a better term

Vital and I had no training yet could discern differences.
Again, you're confusing ABX with the usual forum blind A/B tests
The other things you mention can be catered for.
Yes, exactly - sure, they can, with difficulty, but they seldom are catered for, & this is why the tests are flawed

I see no reason why similarly short clips can't be replayed via one DAC, say, and then via another directly afterwards, having switched. So no, I don't think anything other than file-based tests is imprecise; rather, they would be just as good, IMO, if implemented similarly, as I've just detailed.
Sure, that's one item on the checklist covered, perhaps?

Of course it's also worth bearing in mind how, again by implication, sighted testing must be very unreliable, given the necessity for the stringent framework that ensures precise ABX testing, as outlined by yourself.
Sighted testing is considered to be just anecdotes. So too are the blind tests without the necessary controls/factors catered for - just anecdotes. Trying to elevate them to better than sighted tests is just a religious belief & does a disservice to well organised blind testing.
 
But, from my reading & research, there are aspects of sound that aren't amenable to quick A/B testing - some aspects of the soundfield that we need time to accommodate to, which means that long-term listening (blind or otherwise) is the only real way to check for differences.

Things like solidity of the soundstage, ambience of the venue (assuming a recording of a live event) are all things that take time to evaluate & accommodate to. The area of study known as Auditory Scene Analysis is concerned with these important aspects of our perception of hearing - not just the mechanics of our ears (which are tested by rapid A/B switching) but what happens within our processing system to make sense of the sound waves impinging on the ears.

So, yes, ABX testing has its strengths, just like measurements have their strengths & long-term listening has its strengths - there is no ONE way that provides all the answers, I believe.

I'm sure I could differentiate a solid sound stage from an even slightly smeared one very quickly on my own system. I probably couldn't do it through rapid switching of one or two second segments, but in a few repetitions of say, 30 second to one-minute passages, no problem. Same goes for the perception of ambient space. I think I could differentiate a recording with deep, natural-sounding ambience from a flat one in minutes of listening. But that's not at all what I thought you and others here were referring to when talking about long-term listening. Have I been misunderstanding you?

Tim
 
I'm sure I could differentiate a solid sound stage from an even slightly smeared one very quickly on my own system. I probably couldn't do it through rapid switching of one or two second segments, but in a few repetitions of say, 30 second to one-minute passages, no problem.
This is considered long-term when talking about auditory memory, which lasts for a couple of seconds :)
Same goes for the perception of ambient space. I think I could differentiate a recording with deep, natural-sounding ambience from a flat one in minutes of listening.
Sure & I get your point - my examples were simplistic.
But that's not at all what I thought you and others here were referring to when talking about long-term listening. Have I been misunderstanding you?

Tim
Well, what is mainly being talked about in long-term testing - living with & becoming familiar with two pieces of equipment - is how well one piece compares to the other with regard to things like how the illusion works for us, how we connect with the performance.
 
This is considered long-term when talking about auditory memory, which lasts for a couple of seconds :) Sure & I get your point - my examples were simplistic. Well, what is mainly being talked about in long-term testing - living with & becoming familiar with two pieces of equipment - is how well one piece compares to the other with regard to things like how the illusion works for us, how we connect with the performance.

But that wasn't my point. I don't think your examples were simplistic at all. I think that imaging in the stereo field and a sense of ambient depth are two of the most difficult things for a recording to capture and a system to reproduce. I think they are two of the hardest things to get right, and I wonder: if they can be heard, and if the good can be differentiated from the mediocre in a few moments of critical comparative listening, what sonic attributes require days or weeks of long-term listening to differentiate?

Tim
 
But that wasn't my point. I don't think your examples were simplistic at all. I think that imaging in the stereo field and a sense of ambient depth are two of the most difficult things for a recording to capture and a system to reproduce. I think they are two of the hardest things to get right, and I wonder: if they can be heard, and if the good can be differentiated from the mediocre in a few moments of critical comparative listening, what sonic attributes require days or weeks of long-term listening to differentiate?

Tim
I already said, Tim.
It may be that long-term listening is more suited to analysing & comparing the many possible factors in the soundfield between two devices in a relaxed & natural manner? We can only focus on a limited number of aspects of the music at a time with focussed listening. This sort of listening is tiring & not possible to do over long periods. In order to cover all the aspects by which two presentations can differ, we need time.
 
Jk

You're beating this one to death and appear to be trying desperately to discredit a well-known, well-tried, approved and tested method of comparison. There is no better way, and no amount of obfuscation will change that. As I explained, we were doing it for our own benefit, and we did it often, with different discriminators in FM tuners, various amplifier circuits, op-amps, all sorts of things, to make sure we did the best possible job.

However, things have moved on and all the money for development has gone into phones and their DACs, or into AV. The headphone business is about ten times the size of hi fi and headphones are more revealing. This hasn't stopped the relentless march of progress in AV though, and now a £500 Yamaha processor is not only as good a DAC as you'll get, but, despite everything being crammed in together, you won't hear better power amps either.

Modern mass market electronics are largely as transparent as they need be, so all that's left is speakers and mostly they're pretty grim and losing out to the better soundbars and inexpensive AV efforts from the big Japanese companies. They're almost as good, a fraction of the price and they work seamlessly with the TV that's at the centre of all modern media systems.

Alan Sircom understands all this better than I do, and on PF he's explained that legacy hi fi is a dead duck and how hard it is to write for such a small group who can't agree on anything, or to write reviews that satisfy both advertisers and readers.

Anyway my advice to you is to stop beating a dead and decomposing horse, get yourself an iPhone, buy some "mastered for iTunes" tracks and start with the standard ear buds. If you still think an ABX is necessary, you can tell us all and I'll have a quiet chuckle.
 
I think that imaging in the stereo field and a sense of ambient depth are two of the most difficult things for a recording to capture and a system to reproduce. I think they are two of the hardest things to get right, and I wonder: if they can be heard, and if the good can be differentiated from the mediocre in a few moments of critical comparative listening, what sonic attributes require days or weeks of long-term listening to differentiate?

Tim

We need this stuff in recordings to evaluate speakers. We design two-ways, which means that the bass driver inevitably has to operate in part of its break-up region, and break-up means unpredictable phase behaviour and the impact that has on stereo image, ambience/depth and, sometimes with a recording from a cathedral, height.

I have an ex-BBC chum who records various music festivals and for BBC R3. He's been at it for over forty years and is the best I've heard. Not only that, but he's using a variation on the original Blumlein patents: a single-point stereo microphone comprising two figure-of-eight Sennheisers matrixed as a mid/side pair by the mic preamps in the latest Nagra digital recorder. You don't get better, and because he is extremely able, he can balance this even in a massive cathedral using Sennheiser HD25s. There's nowhere to use speakers.
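For anyone unfamiliar with the matrixing mentioned above, here is a minimal sketch of the usual mid/side sum-and-difference decode. The scaling choice and all the names are my own illustration, not the actual processing in the recorder:

import numpy as np

def ms_to_lr(mid, side, normalise=True):
    # Decode mid/side sample buffers into a (left, right) pair:
    # L = M + S, R = M - S, with optional 1/sqrt(2) scaling to keep levels comparable.
    scale = 1 / np.sqrt(2) if normalise else 1.0
    return (mid + side) * scale, (mid - side) * scale

# Toy check: a centred source has no side component, so left equals right.
mid = np.array([0.1, 0.2, 0.3])
side = np.zeros_like(mid)
left, right = ms_to_lr(mid, side)
print(np.allclose(left, right))  # True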

I've described the method and the equipment because these, with a highly competent sound engineer, will give a three-dimensionality to die for, and most commercial recordings seem closed-in, phasey and just average in comparison.

These recordings are very useful even if some of the music isn't to my taste. The Fauré Requiem from Gloucester Cathedral was pretty magnificent, though. But the real value of something like this is that, just as in an ABX, you always recognise what's best, so I now have as good a benchmark as I can get for what my speakers might be able to do.
 
I already said, Tim.
It may be that long-term listening is more suited to analysing & comparing the many possible factors in the soundfield between two devices in a relaxed & natural manner? We can only focus on a limited number of aspects of the music at a time with focussed listening. This sort of listening is tiring & not possible to do over long periods. In order to cover all the aspects by which two presentations can differ, we need time.

As you've said. You just haven't said what, other than the imaging and sound stage which we've discussed and dismissed, these "aspects" might be, what might constitute "long term," or why it's not possible to use ABX switching in this undefined long term. You seem to be saying that ABX testing won't work for some very important aspects of audio that you can't specify, for reasons that you can't specify, and therefore, must be replaced by long-term listening of undefined length and methodology before we can reach any specific conclusions about all these non-specific "aspects."

Just keeping up has been quite stressful. :) I can no longer tell your A from your B. Never mind X.

Tim
 
