It's actually way more audible than it looks; FR is only a small part of the harsh sound you get for the first few hours of using this cable, and it's what motivated me to buy the cable burner.
This cable is not typical, though: the litz wire I have uses 44 AWG UPOCC copper, so over 1,000 individually insulated strands make up a single 14 AWG litz wire. That's a lot of conductor surface area in contact with its insulation, so burn-in is very noticeable. The cable is literally drawing extra energy from the amp as it burns in. Considering that 3 dB is a doubling of power, the differences in the measurements are very significant, and this excess energy is going somewhere. I'm not saying this proves anything; it's just one more data point...
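For what it's worth, the "3 dB doubles the power" rule follows directly from the decibel definition; a quick back-of-envelope sketch (plain Python, not tied to any of the measurements in this thread):

```python
import math

def db_to_power_ratio(db):
    """Power ratio implied by a level difference in dB: 10^(dB/10)."""
    return 10 ** (db / 10)

def db_to_amplitude_ratio(db):
    """Voltage/amplitude ratio implied by a level difference in dB: 10^(dB/20)."""
    return 10 ** (db / 20)

# +3 dB is very close to 2x power, and about 1.41x voltage.
print(db_to_power_ratio(3.0))      # ~1.995
print(db_to_amplitude_ratio(3.0))  # ~1.413
```

So a measured 3 dB shift really would correspond to roughly double the power at that frequency, which is why it's treated as a large difference.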
What would be even nicer... is if you did a crossover study and remeasured the new cable after break-in to see if its frequency response moves toward that of your broken-in cable.
That would demonstrate a break-in phenomenon and rule out batch variation.
Regardless of the reason why, I've found that no matter how my listening sessions go (whether I burn it in while absent or enjoy the time in front of the system), any new component, including cables, tends to improve markedly and become more of what it was intended to be somewhere between 150 and 200 hours. Some components continue to change up to 400-600 hours, but I find these changes much more minor compared to those experienced in the first 200 hours.
The consistency of this time frame leads me to believe it has less to do with psychology and more to do with components 'settling in'.
there are a few things that are just not worth arguing about. when challenged about these things I just shrug my shoulders, grin, and keep my mouth shut. thousands of posts have been written about these things and no one has ever changed their viewpoint.
1-cable elevators
2-cable dressing/positioning
3-cable break-in
4-cable directionality
5-lots of crazy kinds of tweaks.
some things you do in case it works, not because you are sure. you hope that this 'housekeeping' collectively amounts to real gains. but you don't get your shorts in a twist over it.
Trouble is, no one measures cables; end users don't, and manufacturers never will anyway. Unless you accept the premise that any high-end system can attain absolute clarity, and put in the necessary work to achieve that goal, most will never make real headway. That is, if that is your intention. I personally think cables by themselves will never get a person's system to such a high level.
How can we be certain that what's going on is cable break-in and not ear break-in (i.e., our ears' acclimation to, and increasing comfort with, a slightly different sound)?
This is the first time I've seen a cable skeptic asking whether a 0.5 dB difference in the midrange is audible - they usually claim that the small differences or preferences subjectivists perceive are due to matching errors of around 0.1 dB!
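To put the sizes being argued about side by side (my own arithmetic, not from any post above): a dB difference maps to a percentage amplitude change via 10^(dB/20).

```python
def db_to_percent_level_change(db):
    """Percent change in amplitude (voltage / sound pressure) implied by a dB difference."""
    return (10 ** (db / 20) - 1) * 100

# 0.5 dB is roughly a 5.9% amplitude change; 0.1 dB is roughly 1.2%.
for db in (0.5, 0.1):
    print(f"{db} dB -> {db_to_percent_level_change(db):.1f}% amplitude change")
```

In other words, the 0.5 dB figure under discussion is about five times the 0.1 dB matching tolerance skeptics usually insist on.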
1. Big, undeniable improvement
2. Perceived improvement found to be flawed over time
3. No obvious improvement; a battle to get the sound you think it's capable of
4. Different sound, not better
5. Annoying traits despite improvement in some areas
6. No improvement, or worse sound
That audio alcoholic thread is funnier than this thread because they don't have any argument except one from ignorance. All cables carry an audio signal, and the largest impediment to the faithful transmission of that audio signal is current EMI. The trick is to separate the audio signal from the EMI hash. That can be done in two ways:
1. By engineering the cable to separate the signal from EMI
2. By creating a parallel circuit for the extraction of EMI
Btw, this can be measured, but very few do.
Now, if the parallel circuit is sufficient, then an exotic cable is less of a need. A good example is the pro audio recording industry, even going back to the late 1950s.
Cable break-in is real because most cables use a designed passive network to separate the audio signal from current EMI, and, as DaveC pointed out, this can affect the current draw of the amplifier, which in turn affects the sound dispersion pattern of the speakers. The overall clarity of the system depends on the level of signal integrity. The reproduction of sound is limited only by the quality of the capturing microphone, and distortion is a non-issue once current EMI is eliminated. The overall clarity of most recordings proves this.
Whether you spend 10k or 500k, there are principles at work here that will help move you closer to the music. Once you accept that the actual recording event is capable of a realism that is worth exploring, then you are two-thirds of the way there.