Double blind testing and stress

mep

Member Sponsor & WBF Founding Member
Apr 20, 2010
9,481
17
0
Have there been any studies measuring the stress of people subjected to blind testing, including audio, and how that stress possibly causes a high percentage of incorrect answers? On another thread, Frantz discussed how many chefs, when blindfolded, can't tell dog shit from a NY strip steak (OK, slight exaggeration, but they make some really bone-headed mistakes on what should be easily recognized ingredients).

I can't help but think that subjecting people to blind testing induces a certain amount of stress and pressure to make what they think will be the "right" choice. I wonder if people could tell the difference between an iPod and a $50K CD rig under a double-blind test? I bet it would be no better than a 50/50 spread, which is the same as the flip of a coin. If so, I don't know what the moral of the story would be. Could it be that iPods sound just as good as $50K CD players, or that people subjected to DB testing have had their senses dumbed down and their brains aren't functioning normally as a result?

Mark
 

amirm

Banned
Apr 2, 2010
15,813
37
0
Seattle, WA
I don't know of any formal studies. But I can convey my own feelings, having participated in countless blind tests (formal and informal).

The stress factor is extremely real when the tests are run by others. It is human nature to want to be right. Look at this forum and the arguments within :). It is also human nature to second-guess yourself. These two combined make for inaccurate voting at times. I gave examples before of people voting one of two identical videos, shown in split screen, as inferior, and of rating a video degraded from the original as better than the original itself!

I personally hate being tested this way. It reminds me of having a college final exam every 30 seconds for half an hour! :)

The smaller the difference, by the way, the greater the stress.

Part of the solution is to pick revealing material and setup. That can widen the gap. Sadly, too often people just throw any old material at the person being tested. In your example, I can create a ton of tests where the iPod would sound the same as higher-end gear. But there would be other selections on which careful listeners (not the general public) could show a preference.

The other point you raise is the "50-50" thing. That is also often used incorrectly. If I have two skilled testers who score 100% and two people who score zero, the average for the test becomes 50% accurate. People then rush to judgment and call the data conclusive proof that there is no difference between the two scenarios tested. You just can't do that. Individual scores and people's abilities matter. You don't want to dilute the results that way.
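A minimal numerical sketch of this dilution effect (the scores, trial counts, and the Python helper below are all hypothetical, not from any actual test):

```python
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more correct answers by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical ABX results: two skilled listeners score 10/10, two others score 0/10.
scores = [10, 10, 0, 0]
trials = 10

pooled = sum(scores) / (len(scores) * trials)
print(f"pooled accuracy: {pooled:.0%}")  # 50% -- looks like pure chance

for k in scores:
    print(f"{k}/{trials} correct: P(>= {k} by guessing) = {binom_p_at_least(k, trials):.4f}")
# The 10/10 listeners are individually significant (p ~ 0.001),
# yet the pooled 50% average hides them completely.
```

The point being: the pooled average alone cannot distinguish "everyone is guessing" from "some listeners reliably hear a difference and others do not."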
 

RUR

WBF Founding Member
Apr 20, 2010
647
3
0
SoCal
Part of the solution is to pick revealing material and setup. That can widen the gap. Sadly, too often people just throw any old material at the person being tested. In your example, I can create a ton of tests where the iPod would sound the same as higher-end gear. But there would be other selections on which careful listeners (not the general public) could show a preference.
This is crucial, in my experience, and is consistent with ITU Recommendation BS.1116-1, "Methods for the Subjective Assessment of Small Impairments in Audio Systems Including Multi-Channel Sound Systems". Last year, when I was selecting an outboard DAC, I lived with the candidates for some weeks. Over time, I was able to select musical material which, I felt, best showed the differences between them. Ultimately, I conducted a series of DBTs using that material and was easily able to distinguish between three of the four DACs.
 

The Smokester

Well-Known Member
Jun 7, 2010
347
1
925
N. California
The other point you raise is the "50-50" thing. That is also often used incorrectly. If I have two skilled testers who score 100% and two people who score zero, the average for the test becomes 50% accurate. People then rush to judgment and call the data conclusive proof that there is no difference between the two scenarios tested. You just can't do that. Individual scores and people's abilities matter. You don't want to dilute the results that way.

This is the first time I have heard mention of non-Gaussian distributions being used (or at least recognized as possible) in these tests. In fact, even the use of Gaussian statistics can only be inferred when people use standard deviations in a formulaic approach to give probabilities. My takeaway has been that most testers/experimenters don't know modern statistics and wouldn't know Rev. Bayes if he stood up and said boo.

Am I being fair?
 

Steve Williams

Site Founder, Site Owner, Administrator
This is the first time I have heard mention of non-Gaussian distributions being used (or at least recognized as possible) in these tests. In fact, even the use of Gaussian statistics can only be inferred when people use standard deviations in a formulaic approach to give probabilities. My takeaway has been that most testers/experimenters don't know modern statistics and wouldn't know Rev. Bayes if he stood up and said boo.

Am I being fair?

John

For us mere mortals, could you ask that question again in a way we can understand? I guess I don't understand modern statistics :)
 

Ethan Winer

Banned
Jul 8, 2010
1,231
3
0
75
New Milford, CT
I can't help but think that when people are subjected to blind testing that it induces a certain amount of stress and pressure to make what they think will be the "right" choice.

This is one of the great things about ABX software - you can test yourself over as long a period as you'd like, in the comfort of your own home. But in the grand scheme of things, if a difference isn't immediately discernible when switching back and forth, then how important is it really? Anyone can tell the poor quality of music over AM radio, even when the reception is clear and strong. Same for FM radio even though FM gets much closer to "high fidelity" than AM. And the same again for low bit-rate MP3 files. Nobody can miss a switch from a CD to a 48 kbps lossy MP3 file.

So then we're down to "subtle" differences. If they're so subtle that you can't hear a difference - "stressed" or not - then it's probably not worth paying $10,000 more for.

I can relate a funny story about blind testing MP3 bit rates. A fellow I met in another audio forum visited me here a few years ago. He had never seen professional recording software, so we spent a lot of time playing with my various toys and plug-ins. Then he mentioned in passing that he can always identify lossy MP3 compression, even when the bit rate is very high. So I tested him blind. I extracted a Wave file from a solo piano CD he had brought and was familiar with, and made MP3 copies at 128, 192, and 256 kbps. He stood behind me in the speaker sweet spot, unable to see my face or which file I was playing. I cycled through the four files in a random order, and son of a gun, he got them all correct. He beamed with pride. Then we did the test again and he got them all backwards, picking the CD Wave file as worst and the 128 kbps as best! He did not mention feeling stressed, and in fact he was feeling pretty good during the second test.

--Ethan
 

audioguy

WBF Founding Member
Apr 20, 2010
2,794
73
1,635
Near Atlanta, GA but not too near!
So then we're down to "subtle" differences. If they're so subtle that you can't hear a difference - "stressed" or not - then it's probably not worth paying $10,000 more for.

--Ethan

Amen! Amen and Amen!!!

We used to call that the "squint factor": if you have to close (or squint) your eyes to "try" to hear the difference, how important can it be?

There will be a lot of folks who visit this forum who will argue the point, but in my world the above may be the most accurate, descriptive and telling (if perhaps not the most popular) statement I have read in a really long time as it relates to this hobby.
 

amirm

Banned
Apr 2, 2010
15,813
37
0
Seattle, WA
But in the grand scheme of things, if a difference isn't immediately discernible when switching back and forth, then how important is it really? Anyone can tell the poor quality of music over AM radio, even when the reception is clear and strong. Same for FM radio even though FM gets much closer to "high fidelity" than AM. And the same again for low bit-rate MP3 files. Nobody can miss a switch from a CD to a 48 kbps lossy MP3 file.

So then we're down to "subtle" differences. If they're so subtle that you can't hear a difference - "stressed" or not - then it's probably not worth paying $10,000 more for.

I think that characterization is unfair :). We all mistakenly attempt to measure audio fidelity in the digital domain as we do in the analog domain, as you did above. For example, the reason we can tell AM from FM so easily is that there is a difference in frequency response. When it comes to digital, though, those measures do not work. Worse yet, and this is key, the degradation is highly content-sensitive. Here is a quick example using MPEG reference clips:


[Chart omitted: per-clip subjective fidelity scores for the MPEG reference clips.]

The horizontal axis shows the clip name and the vertical axis the range of fidelity scores. As you can see, the quality varies a lot. Pick the "wrong" clip and the difference can be so small as to fool even experienced listeners. Pick the "right" clip and many people could tell the difference. When I find some time, I will create some test samples for people to hear. But for now, I think it is important not to generalize that all differences are subtle just because that is how it seemed in one test.

I can relate a funny story about blind testing MP3 bit rates...So I tested him blind. I extracted a Wave file from a solo piano CD he had brought and was familiar with, and made MP3 copies at 128, 192, and 256 kbps. He stood behind me in the speaker sweet spot, unable to see my face or which file I was playing. I cycled through the four files in a random order, and son of a gun, he got them all correct. He beamed with pride. Then we did the test again and he got them all backwards, picking the CD Wave file as worst and the 128 kbps as best! He did not mention feeling stressed, and in fact he was feeling pretty good during the second test.

--Ethan
Ethan, I hope you always played the original alongside the sample under test. A test that plays every clip and asks which one is the original is a very odd test, even though many people attempt it. One needs to play the original and then ask whether the person can find the impaired and original clips.
 

amirm

Banned
Apr 2, 2010
15,813
37
0
Seattle, WA
This statement may be too strong, but I always feel that if a test does not reveal a difference, there is more fault with the test than with the tester! When I was doing codec tests, I would buy 20 to 30 CDs a week, rip and encode them all, and listen to find that one clip, or even part of a clip, which would make differences in audio compression stand out. That was the only way I could get our researchers working on the codec, who were not golden-eared audiophiles, to hear the real problems and fix them.
 

The Smokester

Well-Known Member
Jun 7, 2010
347
1
925
N. California
Steve, Gaussian statistics refers to the familiar bell-shaped curve and describes a purely random variable. Repeated measurements of such a variable produce a frequency distribution characterized by two parameters: the mean and the standard deviation (the width of the bell curve).

If a variable is repeatedly measured and these measurements deviate from a bell curve, that indicates the variable is not purely random and there is further information to be understood in the form of correlations. Extracting it usually requires developing new and different measurements to track down the responsible "hidden variables". Non-Gaussian behavior can be inferred from Sean Olive's writeup entitled "Students Prefer Music in Lossless CD vs MP3 Formats", where he discovers that some students are more discriminating than others. In this case the full data would be skewed, because the ability to discern is not random but is correlated with particular students.

Rev. Thomas Bayes (1702 to 1761) is credited (controversially) with a general method (i.e., not particular to Gaussian distributions) for comparing observed probabilities with hypothetical probabilities (i.e., evidence with predictions). So, for example, one could take the ratio of a measured distribution to a random distribution to quantify whether the observations are truly random. One can then add conditional probabilities (based on hypotheses) to explain significant deviations. Over the years a large body of work in applied statistics has built up around his fundamental theorem and its interpretation.

An introduction to Bayes' theorem, which includes an example of diagnosing breast cancer that you might find interesting, can be found on Wikipedia:

http://en.wikipedia.org/wiki/Bayes'_theorem
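For the curious, the Wikipedia-style diagnostic example boils down to one line of arithmetic. A small Python sketch follows; the 1% prevalence, 90% sensitivity and 9% false-positive rate are illustrative assumptions, not figures from the article:

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    # Total probability of a positive result: true positives plus false positives.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Illustrative numbers: 1% prevalence, 90% sensitivity, 9% false-positive rate.
posterior = bayes_posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.09)
print(f"P(condition | positive) = {posterior:.1%}")  # ~9.2%, far below the 90% many expect
```

The counterintuitive result (a "90% accurate" test yielding under 10% certainty) is exactly the kind of thing naive averaging of blind-test scores can hide.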
 

Gregadd

WBF Founding Member
Apr 20, 2010
10,517
1,774
1,850
Metro DC
This is one of the great things about ABX software - you can test yourself over as long a period as you'd like, in the comfort of your own home. But in the grand scheme of things, if a difference isn't immediately discernible when switching back and forth, then how important is it really? Anyone can tell the poor quality of music over AM radio, even when the reception is clear and strong. Same for FM radio even though FM gets much closer to "high fidelity" than AM. And the same again for low bit-rate MP3 files. Nobody can miss a switch from a CD to a 48 kbps lossy MP3 file.

So then we're down to "subtle" differences. If they're so subtle that you can't hear a difference - "stressed" or not - then it's probably not worth paying $10,000 more for.

I can relate a funny story about blind testing MP3 bit rates. A fellow I met in another audio forum visited me here a few years ago. He had never seen professional recording software, so we spent a lot of time playing with my various toys and plug-ins. Then he mentioned in passing that he can always identify lossy MP3 compression, even when the bit rate is very high. So I tested him blind. I extracted a Wave file from a solo piano CD he had brought and was familiar with, and made MP3 copies at 128, 192, and 256 kbps. He stood behind me in the speaker sweet spot, unable to see my face or which file I was playing. I cycled through the four files in a random order, and son of a gun, he got them all correct. He beamed with pride. Then we did the test again and he got them all backwards, picking the CD Wave file as worst and the 128 kbps as best! He did not mention feeling stressed, and in fact he was feeling pretty good during the second test.

--Ethan

Ultimately, then, anyone evaluating differences in performance should have no stake in the outcome. They should not care if they pass or fail; they should only perform the test criteria, and probably never even be told whether they were "right" or "wrong". To use an example: the reason no one told the king he was naked was that the king had a habit of killing the bearer of bad news. The only one willing to tell was a child, immune to the consequences. It is a common occurrence for NFL teams to audition field-goal kickers who kick perfectly in practice and are awful in the game. Another example is the lie-detector test. Of course it has no ability to detect lies; it is a test that measures the stress of those nervous about being caught lying and the consequences thereof. So when F. Lee Bailey subjected members of the general public to a polygraph with nothing at stake, their lies went undetected. When he offered them $10,000 to beat the lie detector, they failed. (Lest we set off a debate: not a scientific test.)
The question is: what is the significance of the test subject getting the results in exact reverse order? I think it is indisputable that he heard a difference. I would guess the odds of guessing them in exact reverse order are the same as guessing them in the exact correct order. This argument was made a while ago in Stereophile when JGH was asked to distinguish the SP9 from the SP11. In four trials he went 0-4, which is not entirely consistent with guessing. You could flip a coin and get four heads or four tails in a row, but it is not likely. Stereophile, scrambling to explain why their leader did not perform well, suggested he must have heard a difference: it is unlikely that he guessed exactly wrong each time.

Order   JGH    JA
SP9     SP11   SP11
SP11    SP9    SP9
SP11    SP9    SP11
SP9     SP11   SP9
http://www.stereophile.com/historical/739/index3.html

It can be seen that JGH scored 0/4, and JA 2/4, the latter the result that would be obtained by chance alone. It would seem that Stereophile cannot support its opinions under blind conditions and that Audio Research's faith in their product is vindicated.

Wait a minute, however. Consider JGH's scoring: 0% identification is as statistically significant as 100%; both are extremely unlikely to happen by chance. Examined on a more rigorous basis, JGH's results indicate that he did hear a difference between the two preamplifiers, as he correctly identified every time there was a change; under the blind listening conditions, however, his value judgments were turned upside-down, with the SP11 being identified as the '9 and vice versa, something that is not unusual in blind tests. JGH stated at the time of the test that he was getting mixed cues from the two preamps: the thinness in the '9's bass compared with the '11 in his own room and system had metamorphosed in the unfamiliar listening test conditions into a more natural balance for the '9 and an excessive and somewhat "drummy" bass from the '11.

I think this could be a case where getting them in exact reverse order is a matter of confused memory rather than an inability to discern the difference.

What conclusions can be drawn from this blind test? Does it support or refute the findings of JGH's formal review?

Certainly, it is incontrovertible that a difference was heard, but whether it was one of character, as RF strongly felt, or of quality, as JA and JGH felt, is not proved either way. In fact, proving anything at all from blind testing is extremely hard—which is why Stereophile does not test equipment in this manner. Again, with respect to the SP9's sound, or lack of it, we urge you to audition it for yourself.—John Atkinson & J. Gordon Holt
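The quoted claim that 0% identification is as statistically significant as 100% is easy to check directly. A quick Python sketch (the helper name is mine):

```python
from math import comb

def prob_exactly(k, n, p=0.5):
    """P(exactly k correct out of n trials) under pure guessing."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 4
print(prob_exactly(0, n))  # 1/16 = 0.0625: all four wrong
print(prob_exactly(4, n))  # 1/16 = 0.0625: all four right, equally unlikely
# Two-tailed view: the chance of an all-or-nothing run is 2/16 = 1/8,
# so a 0/4 score alone is suggestive but hardly conclusive.
```

In other words, consistently wrong answers carry the same statistical weight as consistently right ones; only the value judgment is inverted.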
 

Ethan Winer

Banned
Jul 8, 2010
1,231
3
0
75
New Milford, CT
the reason we can tell AM from FM so well is that there is a difference in frequency response.

Yes, and by the occasional dropouts, and the hiss and other noise artifacts too.

When it comes to digital though, those measures do not work ... Here is a quick example using MPEG reference clips:

I should have been clear that my usual assessment of fidelity (not discussed yet in this thread) includes everything except lossy compression. The problem with "measuring" lossy compression is that the frequency response constantly changes!

I think it is important to not generalize that all differences are subtle just because in one test that is how it seemed.

I agree. How much damage is done to a given piece of music at a given bit rate depends entirely on the music.

Ethan, I hope you always played the original and the sample under test. A test that plays every clip and asks which one is the original is a very odd test even though many people attempt it. One needs to play the original and then ask if the person can find the impaired and original clips.

I understand, and we did not do that. This fellow insisted he could hear the "signature" of MP3 compression every time, so our informal test should have been enough to satisfy that claim.

--Ethan
 

Gregadd

WBF Founding Member
Apr 20, 2010
10,517
1,774
1,850
Metro DC
Ten times? I think you may have already strained the relationship. :D
 

Vincent Kars

WBF Technical Expert: Computer Audio
Jul 1, 2010
860
1
0
What does blind testing do?
In essence, it removes expectation bias from the experiment.
It tests the null hypothesis: there is no difference unless our score proves to be beyond chance level.
That is all it does.
I have the feeling critics of the blind-testing methodology often mix up the experimental design (the method) and the experimental setup (the conditions).
However, whatever the limitations might be, would our judgment improve the moment our expectation bias comes into play?
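That null-hypothesis logic can be sketched in a few lines of Python (the 14-of-20 ABX score below is invented for illustration):

```python
from math import comb

def p_value(correct, trials, chance=0.5):
    """One-tailed binomial test: the probability of scoring `correct` or better
    if the listener is purely guessing (the null hypothesis)."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# Illustrative ABX run: 14 correct out of 20 trials.
p = p_value(14, 20)
print(f"p = {p:.3f}")  # ~0.058: suggestive, but not below the conventional 0.05 bar
```

Only when that p-value falls below a pre-agreed threshold does the score count as "beyond chance level"; the test design says nothing else about why a difference was or was not heard.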
 

terryj

New Member
Jul 4, 2010
512
0
0
bathurst NSW
four heads in a row is considered unlikely??

by whom?

Re stress and familiarity: most audiophiles who do a test need to do it on their own system, with music they feel will show the difference. Of course; why not maximize the chance of detection?

I would like to note most of the objections to DBTs come from dyed in the wool audiophiles, to use a phrase.

usually, the very same audiophiles that make claims like 'Cables make a huge and undeniable difference, in fact you can make a $40 000 pair of speakers sound crap when you use the wrong ones'.

As Ethan pointed out, and I agree with (well, it is actually how I audition products/upgrades): 'if it does not make an immediate and unmistakable difference, then I don't care how cheap it is, I'm not interested.'

So what do we make of such an unequivocal statement like the cable one above???

Well, if the audiophile making the claim wants it to be accepted as fact (and nothing in the wording argues against that, I might add!), then he should be able to show that he CAN indeed tell these cables apart as easily as it sounds like he can. (Heck, I'd just be happy if he could differentiate them, but that's another story.)

And, I can pretty well guarantee you that IF he did the test, at the least he would admit that the difference was not as large as he'd always falsely claimed.

I mean, if the person was some sort of opinion leader, and committed to the truth of things, you'd think they would do the test for honesty's sake?

So, they are done IN the home, on HIS system, WITH the gear he already claims he can tell apart.

Why would I feel stress when testing my claim that I can tell my daughters apart?? I'd feel rather confident actually.

IF I felt stress about something, it surely has to be related to the magnitude of my claim and the confidence I felt about that claim??

If that is true, and you feel stressed about taking the test, then STOP making claims like 'cables are the make-or-break point of a system; they can make even the best speakers sound like crap, and good cables can make a crap pair of speakers sound magnificent'.

Or, if you make that claim, have the balls to back it up???

[I might add just for completeness....the person does NOT usually feel stressed going into the test. Why?? Because he knows he can easily tell them apart. Ahh, the 'excuses' only need to start coming after the first time he flicks from one to the other blinded. THEN he starts to realise.....but until then it's all night and day ain't it??

What I don't get is why the person can't just say 'well, ha, how about that, all this stuff about our knowledge influencing things has something to it after all' and go away having learned something about himself, his system and the universe??

But they CAN'T can they? No, they have to continue arguing black and blue that, no matter what they heard blinded, that these differences are still night and day, that somehow not knowing which was which just totally fucked everything up completely, it somehow managed to make them think the sun was the moon, the moon the sun and other equally silly things,..... they have to completely deny their own senses....the VERY same senses they proclaim elsewhere that are so sensitive and unaffected by anything other than audible stimuli!!!!-------- I can't be the only one to find this the ultimate irony can I?

Why not just 'hahaha, boy, they are a bit more similar than I thought??']

It just makes me pretty angry when this type of stuff gets spouted by the magazines and online press. THEN they cover their arse by saying stuff like 'DBTs don't work' or somesuch.

Of course, in most cases there can never be a challenge to what they write, can there? They publish and say what THEY want.

THEN these same guys get on online forums and spout the same guff... THEN whinge and moan when they (after all this time) FINALLY get challenged!! "Ooh, we just want to talk on a forum, and all you guys come up with stuff about DBTs." "We just want to socialise."

(all leading to a push against any discussion of dbts being made...as if the net is not already completely biased toward any audiophile making any claim he wishes in the confidence he won't get challenged...and when he does get challenged he cries, moans and wrings his hands about being persecuted and hounded.)

Here is an example of the extreme rubbish a publication can make, yet those that disagree have no rights of redress or correction (ie, where is OUR article in return??)

http://www.6moons.com/audioreviews/fragilesouls/fragilesouls.html

Absolute rubbish.
 

Gregadd

WBF Founding Member
Apr 20, 2010
10,517
1,774
1,850
Metro DC
four heads in a row is considered unlikely??

by whom?


There's a city in the desert waiting for you. In fact, I'll take that bet. Actually, I was being generous. The odds of heads are 1/2, but he was actually given 4 choices, so 1/4 per pick. For a coin it would be (1/2)^4; for the trial it would be (1/4)^4, assuming he was guessing.
If order is not important, see http://en.wikipedia.org/wiki/Binomial_probability
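For what it's worth, the two natural ways of modeling a four-file identification test give different odds. A short Python sketch of both (each model is an idealization of a test whose exact rules we don't know):

```python
from math import factorial

# Model 1: each of the 4 clips is independently labeled with one of 4 names
# (repeats allowed), as the (1/4)^4 figure assumes.
with_replacement = (1 / 4) ** 4          # 1/256, about 0.0039

# Model 2: the 4 labels form a permutation of the 4 clips (no repeats),
# the usual shape of a "rank these four files" test.
permutation = 1 / factorial(4)           # 1/24, about 0.042

print(with_replacement, permutation)
# Under either model, the exact reverse order is precisely as unlikely
# as the exact correct order -- the point made earlier about JGH's 0/4.
```

Which model applies depends on whether the listener could assign the same label twice; the thread doesn't say, which is exactly why "I would need to know more about the test" is the right instinct.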
 

Ron Party

WBF Founding Member
Apr 30, 2010
2,457
13
0
Oakland, CA
Terry, I find that article equally appalling, to say nothing of the fact that it mischaracterizes rationalism and laughably equates it with disinterest in the music.

Apparently (according to that article), you went to great lengths to build the best speakers you could not to listen to music but only to test tones. I don't know how many albums I have - I know it is 4 figures. But after reading that article I suppose I should just throw away all of my albums. Come to think of it, I have no idea what I was doing at the 400 or so concerts I've attended. I don't know why I've played in bands my whole life. I don't know why Steve asked me to serve as moderator of the music forums here. I'm only interested in reading scopes and listening to test tones.

In that article the stereotyping of *subjectivists* also is disturbing. There certainly are plenty of people who don't necessarily care to learn about the science of it all, who just want to enjoy the tunes, but who, if asked, would readily acknowledge that things like expectation or confirmation bias are real.

Indeed finding common ground between subjectivists and rationalists is hard enough - ok, maybe impossible - but mischaracterizing the positions of each is unproductive at best, offensive and inexcusable at worst. If one cannot even accurately describe the positions taken by each side in a debate, any debate, then one has no business attempting the description at all.
 

terryj

New Member
Jul 4, 2010
512
0
0
bathurst NSW
four heads in a row is considered unlikely??

by whom?


There's a city in the desert waiting for you. In fact, I'll take that bet. Actually, I was being generous. The odds of heads are 1/2, but he was actually given 4 choices, so 1/4 per pick. For a coin it would be (1/2)^4; for the trial it would be (1/4)^4, assuming he was guessing.
If order is not important, see http://en.wikipedia.org/wiki/Binomial_probability

And so, did you work the odds out? Are they 'astronomical', or something we would expect to see occur every day in this city in the desert?

four heads in a row, that might be half the problem?? To you (and other audiophiles???) that run would constitute sufficient proof that cable differences exist???

Note how amir has been trying to make sure things are understood and put into their proper context, that you cannot extrapolate too much without knowledge of the circumstances in which a result was obtained.

Four heads in a row? pfft, nothing. truly.



Ron, can I say I cannot agree with you more?? Even down to the pictures used to illustrate his article. It is obvious there is NO desire to get to the bottom of it, nothing that would indicate a willingness to find common ground, to see if we can 'work this stuff out'.

It has been a while since I read it, but I remember the basics: 'I have not, nor will I ever, participate in a DBT.' Brother, then where do we go?? Echoes of Galileo: 'There are moons around Jupiter; just look into the scope, sir.'

"I have no need to look into the scope, you are convicted of heresy''.

Can we not at least try to get into each other's shoes?? I am like you, I do NOT listen to test tones. And his derision when it was suggested that the biggest hold-up IS the recordings!!

Nope, he is an audiophile, therefore it is all about the equipment (yet accuses *us* of listening to test tones when to him it is the equipment).

AND, not the slightest eyebrow raised about whether or not vinyl can even be demagnetised!! the entire starting point of his article! Someone said so, the claim was made..SUFFICIENT. PASS. IN YA GO. ENTER.

And let's not forget the main point...all of that is coming from a magazine that 'serves' audiophiles. We begin to wonder, WHY do they try so hard to prevent honest examination of claims in audio??? Why the constant 'trust your ears' (except when it comes from a dbt)???

THEN look at how many of the real vocal people here are from the very same industry....

Where is it reasonable to draw the line?? At what point can we ALL agree it is ok to question something?? Can someone explain why it is fine to make any old statement anywhere no matter how much on the face of it it flies in the face of received knowledge??

We ALL have our boundaries, what we would let go without a second glance, others we would question. We all have these boundaries, just at very different points.

IS there such a thing as a point where we would ALL go 'no way'?? I doubt it somehow.

What about THIS tweak? Why can it not be questioned? Why, on the face of it, should it be given any credence and have energy and time expended on it to see if it has merit?

HOW would we test it, if not with a DBT? Anecdotal evidence like 'yep, worked for me'? Would that be sufficient to change anyone's opinion??

http://www.theadvancedaudiophile.com/the-5-pinhole-paper-device/index.html

[ahh, maybe I should not have posted that, someone with no doubt a search engine in place will note it, and arrive here. He and ethan don't get along...heck, I'm sure he has a search in place for ethan (stalking him) so he'll be here anyway!]
 

Gregadd

WBF Founding Member
Apr 20, 2010
10,517
1,774
1,850
Metro DC
If he played all 4 tones each time and gave you a chance to pick, it would be (1/4)(1/4)(1/4)(1/4) = 1/256 ≈ 0.0039 if order matters. I would need to know more about the test. Roulette is 1/38 on an American wheel. When order does not matter the chances are higher. The house would win 255 times to your one. Pick-three lottery is 1/1000: pretty bad odds also.

Four heads in a row is a little more complicated: (1/2)^4 = 1/16, because if you are trying to get four heads in a row, once a tail turns up you're finished. Not astronomical.
 
