Conclusive "Proof" that higher resolution audio sounds different

Phelonious Ponk

New Member
I already proposed a mutually acceptable way to reach agreement on this so we could move on, and I thought we had come to that agreement, Tim - you seemed to accept it then, but now you seem not to want to leave it at that. Can we now leave this, please?

Different subject, John, though it did come from the same dodgy, evasive post. I did agree to that, of course. What I didn't agree to was this...

...if you just want to say that removing some biases appears to be more useful than removing no biases but it gets us nowhere as far as trusting the result, then yes, I can agree with you. But it is the reliability of the result that we are ultimately interested in, no?

...because while it started out right - removing some biases is better than removing no biases - it circled back on itself before it even got to the end of the sentence -- "it gets us nowhere as far as trusting the result,"

So I ask again, John -- is it all controls, whatever that means, or nothing? Does an incomplete list of controls, by BS1116 standards or whatever standard you care to choose, mean that the test is no better than sighted listening, or not? It's a really simple, straightforward question. Give me a straight answer and stand by it and yes, we'll be done here.

Tim
 

jkeny

Industry Expert, Member Sponsor
No, Tim, it's not all or nothing - it's not a binary choice - it's not black & white.
You consider one or two controls sufficient - I don't.
Done!
 

esldude

New Member
No, Tim, it's not all or nothing - it's not a binary choice - it's not black & white.
You consider one or two controls sufficient - I don't.
Done!

Is there any situation where the simplest of blind testing is inferior to sighted long term listening? I'll give you my answer right now. No there isn't. Can you describe one where long term sighted listening is better, less biased, more discerning?
 

jkeny

Industry Expert, Member Sponsor
Is there any situation where the simplest of blind testing is inferior to sighted long term listening? I'll give you my answer right now. No there isn't. Can you describe one where long term sighted listening is better, less biased, more discerning?

Yes, for me, long term sighted listening gives me a better handle on the characteristics or personality of the sound from an audio device - a piece of information I believe is important if I want to consider the device as a possible part of my audio system. It takes living with the device over an extended period, listening to different types of music in different moods (yes, different biases). I trust this type of listening more than quick A/B listening, blind or sighted - it allows me to give the attention I need in short, non-fatiguing sessions, when I'm in the mood, and it allows me many different listens across many different scenarios & not just a one-shot listen. Now don't reply that I could do all this blind over an extended period - that's just an attempt to re-invent blind testing into whatever you decide it to be (the only thing that you will not do is re-frame it as sighted testing).

We all know that the normal blind testing done & reported on audio forums is not long term - it's usually a get-together of a couple of guys who get all serious about level matching & hiding the identity of the device. They then declare that all devices they tested sounded the same & feed this into the pool of such reports, which are then referred to as proof that all X devices sound the same.

So given typical blind listening vs typical sighted, long term listening, I will take the results from long term impressions more seriously than I will the blind listening - at least I know what the biases at play in sighted listening are.
 

Phelonious Ponk

New Member
No, Tim, it's not all or nothing - it's not a binary choice - it's not black & white.
You consider one or two controls sufficient - I don't.
Done!

I actually don't think one or two controls are sufficient, but thank you for answering the question instead of accusing me of circling when I was re-phrasing in hopes of getting a straight answer. So now we know that we both believe a reliable study needs some controls. We agree! Until we get to the detail of which controls, when and why.

I'm reminded of the story of the older gentleman who, upon being approached by a high end call girl offering him an evening of pleasure for $2000, asked if he could get 15 minutes for 25 bucks.

"Just what do you think I am?!" The lady protested.

"Oh we've established that," replied the old gent. "Now were negotiating price."

Tim
 

jkeny

Industry Expert, Member Sponsor
I actually don't think one or two controls are sufficient, but thank you for answering the question instead of accusing me of circling when I was re-phrasing in hopes of getting a straight answer. So now we know that we both believe a reliable study needs some controls. We agree! Until we get to the detail of which controls, when and why.
Well, Tim, the first step is knowing what the accepted standard set of controls is for reliable tests - something that was not demonstrated on this thread & is not typically known among the audiophiles who run blind tests. So their ignorance often results in an arrogance about the veracity of their results.

Indeed, it appeared to me that you yourself were not aware of BS1116 - so answer me honestly, Tim: were you aware of it & the standards within it?

I'm reminded of the story of the older gentleman who, upon being approached by a high end call girl offering him an evening of pleasure for $2000, asked if he could get 15 minutes for 25 bucks.

"Just what do you think I am?!" The lady protested.

"Oh we've established that," replied the old gent. "Now were negotiating price."

Tim
Sure Tim, the argument can continue over the weighting of each of the biases (& hence controls) but that was the point I was making before - how do you know what influence the remaining biases have on the result? Without knowing this you are left with an unreliable result. It seemed to me that you & others are blinded by the bias of knowingness & consider it (& maybe a few others) as the primary controls for a reliable result. I beg to differ & ask what your evidence/reasons are for stopping at this small number of bias controls. The phrase I hear most often about blind tests is that blind tests are about bias removal - a patently untrue statement, as it stops with pretty much a few biases & doesn't consider what new factors the test itself introduces that can bias the result.

I suggested a simple inclusion of positive & negative controls in blind tests. This has been suggested, not just by me, for a long time. Why do we never see such internal controls in any blind tests? They're relatively easy to implement.
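To make the suggestion above concrete, here is a minimal sketch (Python; the trial counts, labels such as device_X, and the -1 dB positive control are hypothetical, chosen only for illustration) of how a forum-run blind test could interleave positive controls (a pair known to be audibly different) and negative controls (the identical file presented as both A and B) among the real trials, so the session itself shows whether the listeners and setup can detect anything at all:

```python
import random

def build_trial_schedule(n_real=10, n_positive=4, n_negative=4, seed=None):
    """Build a randomized blind-test schedule mixing real comparison trials
    with positive controls (a known-audible difference) and negative
    controls (the identical file presented as A and B).
    Labels are placeholders; the listener only ever sees 'A'/'B'."""
    trials = (
        [{"kind": "real", "a": "device_X", "b": "device_Y"} for _ in range(n_real)] +
        [{"kind": "positive", "a": "reference", "b": "reference_minus_1dB"} for _ in range(n_positive)] +
        [{"kind": "negative", "a": "reference", "b": "reference"} for _ in range(n_negative)]
    )
    rng = random.Random(seed)
    rng.shuffle(trials)
    # Randomize which source is presented first in each trial.
    for t in trials:
        if rng.random() < 0.5:
            t["a"], t["b"] = t["b"], t["a"]
    return trials

def summarize(trials, responses):
    """responses[i] is True if the listener reported hearing a difference on
    trial i. A sound test should show mostly True on positive controls and
    mostly False on negative controls; otherwise the real trials cannot be
    trusted either way."""
    for kind in ("positive", "negative", "real"):
        idx = [i for i, t in enumerate(trials) if t["kind"] == kind]
        heard = sum(responses[i] for i in idx)
        print(f"{kind:8s}: difference reported on {heard}/{len(idx)} trials")

if __name__ == "__main__":
    schedule = build_trial_schedule(seed=42)
    # Fake coin-flip responses, purely to show the reporting format.
    fake = [random.random() < 0.5 for _ in schedule]
    summarize(schedule, fake)
```

If listeners miss the positive controls, or "hear" differences on the negative controls, the results of the real trials say little in either direction.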
 

esldude

New Member
Yes, for me, long term sighted listening gives me a better handle on the characteristics or personality of the sound from an audio device - a piece of information I believe is important if I want to consider the device as a possible part of my audio system. It takes living with the device over an extended period, listening to different types of music in different moods (yes, different biases). I trust this type of listening more than quick A/B listening, blind or sighted - it allows me to give the attention I need in short, non-fatiguing sessions, when I'm in the mood, and it allows me many different listens across many different scenarios & not just a one-shot listen. Now don't reply that I could do all this blind over an extended period - that's just an attempt to re-invent blind testing into whatever you decide it to be (the only thing that you will not do is re-frame it as sighted testing).

We all know that the normal blind testing done & reported on audio forums is not long term - it's usually a get-together of a couple of guys who get all serious about level matching & hiding the identity of the device. They then declare that all devices they tested sounded the same & feed this into the pool of such reports, which are then referred to as proof that all X devices sound the same.

So given typical blind listening vs typical sighted, long term listening, I will take the results from long term impressions more seriously than I will the blind listening - at least I know what the biases at play in sighted listening are.

A load of BS. Match levels and remove sight and you are way beyond long term sighted listening. You claim to know what sighted biases are at play - quite simply, it is maximal bias. Yet you claim that is preferable to the simplest blind comparisons, which remove the most important biases involved. So we are back to the same old circular argument from you - one with no evidence, no logic, no sense, one based on nothing but faith. Get over it, John; you would benefit from doing so.
 

jkeny

Industry Expert, Member Sponsor
A load of BS. Match levels and remove sight and you are way beyond long term sighted listening. You claim to know what sighted biases are at play - quite simply, it is maximal bias. Yet you claim that is preferable to the simplest blind comparisons, which remove the most important biases involved. So we are back to the same old circular argument from you - one with no evidence, no logic, no sense, one based on nothing but faith. Get over it, John; you would benefit from doing so.

You asked, I answered. My answer seems to annoy you?
If you have any points to make about my answer, please make them. Hurling insults, getting personal & repeating your mantra will not make it true.
 

jkeny

Industry Expert, Member Sponsor
Further evidence that those who wield blind tests as a cudgel are SOLELY focussed on a null result is the other phrase we hear: "Take away their ability to tell which X is which and their preferences change."

Again, no self-inspection of the test, no attempt to verify its reliability, no search for truth, no care or understanding of what biases are at play.

I ask again, why no simple inclusion of positive & negative controls in the blind tests reported on forums? Would this not answer a great number of the questions being posed about controls & demonstrate that what is being tested for is actually capable of being sensed by the testers/equipment/setup?
 

audioarcher

Well-Known Member
Is there any situation where the simplest of blind testing is inferior to sighted long term listening? I'll give you my answer right now. No there isn't. Can you describe one where long term sighted listening is better, less biased, more discerning?

Yes, every situation. Music is too complex to completely digest in a short time frame. The mind can only concentrate on so many things at once. Short term blind testing is flawed because of this.

It would be interesting if someone did a study on long term blind testing. Would be very tedious to pull off though.
 

Phelonious Ponk

New Member
Well, Tim, the first step is knowing what the accepted standard set of controls is for reliable tests - something that was not demonstrated on this thread & is not typically known among the audiophiles who run blind tests. So their ignorance often results in an arrogance about the veracity of their results.

Indeed, it appeared to me that you yourself were not aware of BS1116 - so answer me honestly, Tim: were you aware of it & the standards within it?

Never heard of them before. But I've never run a blind listening test for the purpose of proving anything to anyone but myself, so I've never needed to look into telecommunications ABX testing methodologies.

Sure Tim, the argument can continue over the weighting of each of the biases (& hence controls)

No need. I think we're about done here.

but that was the point I was making before - how do you know what influence the remaining biases have on the result? Without knowing this you are left with an unreliable result. It seemed to me that you & others are blinded by the bias of knowingness & consider it (& maybe a few others) as the primary controls for a reliable result.

Or maybe not. You still seem to be trying to invalidate anything that doesn't include everything, even though there is no agreement on what constitutes everything. You're having a lot of trouble letting this one go, John.

I beg to differ & ask what your evidence/reasons are for stopping at this small number of bias controls. The phrase I hear most often about blind tests is that blind tests are about bias removal - a patently untrue statement, as it stops with pretty much a few biases & doesn't consider what new factors the test itself introduces that can bias the result.

I've never defined a stopping point, John, or identified a small number of controls. Nor have I said blind tests are all about bias removal. I don't think anyone has said that, actually. The blind part is, of course, about avoiding bias, but it indeed takes more than lack of knowledge to make a test.

I suggested a simple inclusion of positive & negative controls in blind tests.

Perhaps you did, but that's not what you and I have been debating. What we've been debating is so simple, and I've stated it so many times at this point, that it's amazing you're still arguing sidebars. You stated, unapologetically, unambiguously...somewhere in the mid hundreds by post count here :)...that without all the controls - and I believe at that time you were referring to JJ's summary of BS1116 - an unsighted test was no better than no controls. You seem to have backed off of that hard line of unreasoning, thankfully...or maybe not.

This has been suggested, not just by me, for a long time. Why do we never see such internal controls in any blind tests? They're relatively easy to implement.

Product development, marketing, pharma, and many other fields make extensive use of controls in blind studies. I'm sure Olive and Toole used controls in their studies at Canada's National Research Council and at Harman. Are you talking about hobbyists self-testing and reporting back on Internet forums? We can agree that those "tests," including the ones that started this thread, are interesting but anecdotal.

Tim
 

Mike Lavigne

Member Sponsor & WBF Founding Member
Yes, every situation. Music is too complex to completely digest in a short time frame. The mind can only concentrate on so many things at once. Short term blind testing is flawed because of this.

It would be interesting if someone did a study on long term blind testing. Would be very tedious to pull off though.

+1

and what I would love to see would be to listen to a system where someone actually selected all their gear (even software and formats) based on only AES certified (BS1116) ABX testing. and only those folks who believed in that sort of testing could be involved, using their own money and time to do it. obviously it would not matter whether they had experience listening to music since that is not supposed to matter.

then compare it to a few of the better systems of subjectivists on this forum who choose their gear, formats and software with long term sighted listening.

and see how the systems compared. measurements would be part of system setup for both camps.

(1) would the scientists actually go thru the effort to do that? (2) are they that interested in actual listening to invest in high performance gear that could be competitive? (3) any bets on the results?

it would offer the scientists an opportunity to demonstrate the actual audible benefit of their dogma.

what can blind testing bring to the audiophile that will justify the effort? will the experienced subjective audiophiles have advantages over the blind testers?

how does blind testing do in the real world of the audiophile? no 'sponsors'. real people with their own personal commitment.

here is your chance to convince me.
 

amirm

Banned
+1

and what I would love to see would be to listen to a system where someone actually selected all their gear (even software and formats) based on only AES certified (BS1116) ABX testing. and only those folks who believed in that sort of testing could be involved, using their own money and time to do it. obviously it would not matter whether they had experience listening to music since that is not supposed to matter.

then compare it to a few of the better systems of subjectivists on this forum who choose their gear, formats and software with long term sighted listening.

and see how the systems compared. measurements would be part of system setup for both camps.
How would we equalize the budgets? What if we said the max was $2,000? Will you sign up for that challenge and see if, sighted, you can do better?

(1) would the scientists actually go thru the effort to do that? (2) are they that interested in actual listening to invest in high performance gear that could be competitive? (3) any bets on the results?
I am not able to quite understand your challenge. What will be the test? Double blind ABX? Sighted? Something else?

it would offer the scientists an opportunity to demonstrate the actual audible benefit of their dogma.
They don't talk about their benefits. They talk about lack of benefit in sighted tests.

what can blind testing bring to the audiophile that will justify the effort? will the experienced subjective audiophiles have advantages over the blind testers?
A realization that some or maybe all of your assumptions about a product being better than the other may be false. Completely false. That knowledge can be quite powerful in saving you money, steering you to other solutions that do have material difference, etc.

how does blind testing do in the real world of the audiophile? no 'sponsors'. real people with their own personal commitment.
I do that with my personal equipment and purchases. Are you suggesting that no one is? Again, I am having a hard time understanding your post.

here is your chance to convince me.
Here is a simple way. Take a music server and copy one of your music files. Now listen to them and think if one sounds better than the other. I almost guarantee you will hear a difference. If so, I guarantee that if I got another audiophile in the same class as you, they may come back with the opposite outcome. And further, I can make you think one or the other is far better.

This is not theory. I can and have done this test many times. What's deadly is that the fidelity differences are the things we cherish: "more air, better resolution, more analog like," you name it.

You talk about spending my own money. How about spending a million dollars of your company's money with your name associated with it? When I was at Microsoft we acquired Pacific Microsonics, the makers of HDCD. We acquired them not for that, but for their speaker correction technology; HDCD was the icing on the cake. After the acquisition went through and we had their staff on board, one of them, to my surprise, said HDCD was a bunch of voodoo and brought no fidelity difference. I told him that he was wrong and that the difference was very clear to my ears. He asked where I had heard the comparison and I said the demo content they had produced. He gave me the headphones, asked me to turn around so I could not see what he was playing, and proceeded to play the same demo tracks. I was relieved to hear the same difference I had heard before. The HDCD version definitely sounded more analog like, had more air, etc. He played a few clips and I kept telling him the one file was so much better than the other. He then hit me with a bat, saying that all along he had been playing the same version of the files!

I did not believe him. So I had him run the test again the same way, i.e. the same file playing. I could easily make myself hear the same difference as before, or hear them sound identical!

Experiences like this shake you to the core. The challenge certainly convinced me that we could easily read a lot of fidelity into something where none exists.

When nobody is looking, I hope you run the test I mentioned above Mike. You don't have to tell anyone about it. But I hope you do take the opportunity to convince yourself that there is merit to removing your bias.

Does it mean you should go and shop at the dollar store for your audio gear? No. Quality matters, and we must have a nice degree of margin beyond what we think is transparent.
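As a purely illustrative aside on the copied-file comparison described above: before listening, it is worth confirming that the two copies really are bit-identical, so that any difference heard afterwards has to come from the listener rather than from the files. A minimal sketch in Python (the file names are placeholders):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder paths: the original track and the copy made of it.
original = "track.flac"
copy = "track_copy.flac"

if sha256_of(original) == sha256_of(copy):
    print("Files are bit-identical: any audible difference must be listener bias.")
else:
    print("Files differ at the bit level: this is not a pure negative control.")
```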
 

Mike Lavigne

Member Sponsor & WBF Founding Member
How would we equalize the budgets? What if we said the max was $2,000? Will you sign up for that challenge and see if, sighted, you can do better?

no budgets since we are talking about comparing the ABX system to existing systems.

even better would be someone somewhere who has used this approach already where we can go listen. where more modest gear really kicks ass due to ABX. the issue here is not proof, it's value.


I am not able to quite understand your challenge. What will be the test? Double blind ABX? Sighted? Something else?


They don't talk about their benefits. They talk about lack of benefit in sighted tests.

there is no test. people listen to the different systems and decide whether the ABX approach has merit for them.

either ABX testing brings people closer to the music, or it does not, or maybe it does and maybe it does not. the question is whether the product of that process has value to the audiophile. can it take a more modest system and project its performance to a higher level? or really, is it a purely theoretical process with little practical benefit for improving music reproduction?


A realization that some or maybe all of your assumptions about a product being better than the other may be false. Completely false. That knowledge can be quite powerful in saving you money, steering you to other solutions that do have material difference, etc.


I do that with my personal equipment and purchases. Are you suggesting that no one is? Again, I am having a hard time understanding your post.

ok, let me be clear.

yes; absolutely I am suggesting that no one uses full tilt boogie ABX testing (BS1116) for personal high end audio gear selection!!!!!! never happened. not once.

and maybe you are the one person who did it. if so, congrats.....I guess.:D


Here is a simple way. Take a music server and copy one of your music files. Now listen to them and think if one sounds better than the other. I almost guarantee you will hear a difference. If so, I guarantee that if I got another audiophile in the same class as you, they may come back with the opposite outcome. And further, I can make you think one or the other is far better.

This is not theory. I can and have done this test many times. What's deadly is that the fidelity differences are the things we cherish: "more air, better resolution, more analog like," you name it.

it's one thing to do that with music files. I've done some similar things myself. it's a whole different thing to try it with gear.....which I've tried and got nowhere and never went back to it.

You talk about spending my own money. How about spending a million dollars of your company's money with your name associated with it? When I was at Microsoft we acquired Pacific Microsonics, the makers of HDCD. We acquired them not for that, but for their speaker correction technology; HDCD was the icing on the cake. After the acquisition went through and we had their staff on board, one of them, to my surprise, said HDCD was a bunch of voodoo and brought no fidelity difference. I told him that he was wrong and that the difference was very clear to my ears. He asked where I had heard the comparison and I said the demo content they had produced. He gave me the headphones, asked me to turn around so I could not see what he was playing, and proceeded to play the same demo tracks. I was relieved to hear the same difference I had heard before. The HDCD version definitely sounded more analog like, had more air, etc. He played a few clips and I kept telling him the one file was so much better than the other. He then hit me with a bat, saying that all along he had been playing the same version of the files!

I did not believe him. So I had him run the test again the same way, i.e. the same file playing. I could easily make myself hear the same difference as before, or hear them sound identical!

Experiences like this shake you to the core. The challenge certainly convinced me that we could easily read a lot of fidelity into something where none exists.

When nobody is looking, I hope you run the test I mentioned above Mike. You don't have to tell anyone about it. But I hope you do take the opportunity to convince yourself that there is merit to removing your bias.

I am speaking about real world personal system building. not commercial situations. and again; I'm speaking about across the board gear selection, not just dacs or digital formats. what use is ABX testing to the audiophile? is it better than his experience as a listener?

Does it mean you should go and shop at the dollar store for your audio gear? No. Quality matters, and we must have a nice degree of margin beyond what we think is transparent.

I have a very hard time connecting ABX testing and the real world of gear selection.

remember; this is just one guy's viewpoint on how ABX might relate to his world. I don't speak for anyone else.
 

arnyk

New Member
That you did Arny. Here is the spectrum of the file and test tones together:

[spectrum plot not reproduced]

If your key jingling has more ultrasonic content than any real world music, where does that leave the test tones?

I said this before on AVS Forum. If you insist on this being a useful test, its repercussions will go way past the borders of this discussion...


On the AVS forum I gave a relevant technical answer that explained the failings of the graph above, and here I see it again being waved in my face like I never said anything. In the face of such disrespect, I feel no need to explain the failings of the analysis once again.

For the benefit of people who are unfamiliar with FFTs, the problem involves comparing the levels of a broadband incoherent signal (music) with those of test tones. There is a right way to do it, and Amir does not appear to know how, but he refuses to allow more knowledgeable people to instruct him and instead just repeats the same errors again and again...
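For readers who want to see the kind of FFT issue being alluded to here, the sketch below (NumPy; signal levels and FFT lengths are arbitrary, and this is one common pitfall rather than necessarily the exact point being argued) shows that the per-bin level of a broadband signal depends on the FFT length (bin width), while a coherent test tone's bin level does not, so raw FFT plots of music cannot be compared against tone levels without normalizing to a per-Hz (power spectral density) scale:

```python
import numpy as np

fs = 48_000  # sample rate in Hz (arbitrary for the illustration)

def tone_and_noise_bin_levels(n_fft, tone_freq=10_000.0, tone_amp=0.1, noise_rms=0.1, seed=0):
    """Return (tone bin level, median noise-floor bin level), both in dB
    relative to full scale, for an amplitude-normalized FFT of a sine tone
    plus white noise."""
    rng = np.random.default_rng(seed)
    tone_bin = int(round(tone_freq * n_fft / fs))
    f_actual = tone_bin * fs / n_fft              # snap the tone to an exact bin (no leakage)
    t = np.arange(n_fft) / fs
    x = tone_amp * np.sin(2 * np.pi * f_actual * t) + rng.normal(0.0, noise_rms, n_fft)
    spec = np.abs(np.fft.rfft(x)) * 2.0 / n_fft   # amplitude-normalized spectrum
    spec_db = 20 * np.log10(spec + 1e-12)
    return spec_db[tone_bin], np.median(spec_db)

for n in (4096, 65536):
    tone_db, floor_db = tone_and_noise_bin_levels(n)
    print(f"N = {n:6d}: tone bin about {tone_db:6.1f} dB, noise floor about {floor_db:6.1f} dB")

# The tone's bin level stays roughly constant as N grows, while the per-bin
# noise floor drops about 3 dB for every doubling of N (narrower bins collect
# less broadband power). Raw FFT levels of music therefore cannot be compared
# against test-tone levels unless they are normalized to a per-Hz (PSD) scale.
```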
 

Ron Party

WBF Founding Member
Hi Ron. Good to see you posting.
Reluctantly, I'm afraid. There is way too much intellectual dishonesty in this discussion and, as such, I just don't know in what way I'd personally benefit by my participation.

Indeed it is disheartening to read this thread and others like it, where some people are incorrectly citing or demanding blind tests in order to prove something, while others engage in transparent, desperate attempts at false equivalency when comparing sighted versus less than perfect blind tests (as if there ever was a perfect test).

In the title to this very thread you put the word *proof* in quotes. I'm guessing you might be, at least a tad, second guessing yourself in using the adjective *conclusive*, because as you know nothing is conclusive or has been proven. You may recall that my first involvement here was to ask you a question, namely, why do you believe you were able to pass? I asked if you thought it was the ADC, the DAC, the algorithm or something else. IIRC, at that time you posted you did not know. I state this because as you well know, the scientific method is not about proof. Math is about proof; the scientific method is about inquiry.
 

arnyk

New Member
+1
and what I would love to see would be to listen to a system where someone actually selected all their gear (even software and formats) based on only AES certified (BS1116) ABX testing.

I guess many people don't get BS 1116, because it contains guidelines for selecting associated equipment and setting up test environments and it does not commit the obvious logical error of depending on itself.

The relevant question is what does a BS 1116-compliant system sound like and the answer is "Very Good", even brilliant ( not meant in the tonal way).

In fact the golden ear method to evaluate the systems in question would no doubt involve sighted evaluations by them, which does in fact make the logical error of using an obviously invalid highly biased listening test to judge the results of the use of bias-controlled listening tests. High end audio as practiced by many is all about personal biases and little else.

Another way to look at this would be to ask "What do systems assembled by leading DBT advocates sound like", and again the answer is "Very good". Of course if one believes that for example Cable Elevators and demagnetized CDs are required for good sound, then the sound will be disappointing.

BTW I am aware of many people's audio biases and have zero expectation that they will ever budge from them. They have placed themselves in a logic-tight box. Science just isn't that important to many.
 

maxflinn

New Member
John, esldude and Tim are right. You're digging yourself deeper and deeper here. The fact is, as you've been told many, many times now by me and others, that no form of sighted listening can ever be as good as any type of blind testing if identifying potential audible differences is the goal.

Also, removing knowledge and level matching, as you've also been told many times now by me and others, removes the most important biases, and just comparing sighted observations to those made with knowledge removed is often enough to demonstrate that differences reported sighted were imagined.

I suggest to you, John, that you consider giving up desperately arguing to the contrary while you still have some credibility.

I hope this post is taken in the spirit in which it was intended.
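On the level-matching step mentioned above, here is a minimal, illustrative sketch (NumPy; the clips are stand-ins, and the roughly 0.1 dB tolerance is the figure often quoted for blind comparisons) of checking whether two sources are matched closely enough before comparing them blind:

```python
import numpy as np

def rms_db(x):
    """RMS level of a signal in dB relative to full scale (samples in -1.0..1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

def level_match_report(a, b, tolerance_db=0.1):
    """Compare the RMS levels of two clips and report the gain (in dB)
    that would have to be applied to b to match a."""
    diff = rms_db(a) - rms_db(b)
    matched = abs(diff) <= tolerance_db
    print(f"Level difference: {diff:+.3f} dB "
          f"({'within' if matched else 'outside'} +/-{tolerance_db} dB tolerance)")
    return diff

if __name__ == "__main__":
    # Placeholder signals: a 1 kHz tone and the same tone 0.5 dB quieter.
    fs = 48_000
    t = np.arange(fs) / fs
    clip_a = 0.5 * np.sin(2 * np.pi * 1000 * t)
    clip_b = clip_a * 10 ** (-0.5 / 20)
    level_match_report(clip_a, clip_b)
```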
 

Mike Lavigne

Member Sponsor & WBF Founding Member
I guess many people don't get BS 1116, because it contains guidelines for selecting associated equipment and setting up test environments and it does not commit the obvious logical error of depending on itself.

The relevant question is what does a BS 1116-compliant system sound like and the answer is "Very Good", even brilliant ( not meant in the tonal way).

In fact the golden ear method to evaluate the systems in question would no doubt involve sighted evaluations by them, which does in fact make the logical error of using an obviously invalid highly biased listening test to judge the results of the use of bias-controlled listening tests. High end audio as practiced by many is all about personal biases and little else.

Another way to look at this would be to ask "What do systems assembled by leading DBT advocates sound like", and again the answer is "Very good". Of course if one believes that for example Cable Elevators and demagnetized CDs are required for good sound, then the sound will be disappointing.

BTW I am aware of many people's audio biases and have zero expectation that they will ever budge from them. They have placed themselves in a logic-tight box. Science just isn't that important to many.

I'm open to any result my ears hear. if a BS-1116 compliant system demonstrates superior performance to my ears then that would be good. please point one of those critters out to us here to listen to and ponder.

and it's not so much that science is not important, it's that the point of putting together a high performance system is to listen and enjoy, not to prove anything. if science can assist, then fine. science serves the process, it is not the evidence of success. the level of enjoyment provided or musical truth uncovered is the payoff.

music is art, and science cannot define it. it takes a person and their brain and senses over time to judge the merits of music. how did that music make me feel? will I want to hear it again? am I happier than I was? did I connect with the musical intentions of the artist? am I full of wonder? what memories were brought back by that piece? more of this same album, or should I switch to something else?

most important is the flow and journey of the listening session.....and the feeling of satisfaction that the musical itch got scratched.

sound is another matter. sometimes it's a critical listening session and some judging of the sound must be combined with the emotional part. but the emotions are always part of it and should be IMHO.

of course; if proof of something is what is important to some, then that is their business and who am I to question it. I care nothing for proof.

I do care about preference.
 
