Sure, YMMV is always a safe response, but imo it is not a directed answer.
There is quite a bit of audio furniture designed primarily to support equipment, with perhaps a passing nod to some form of vibration mitigation. Household furniture as well. If that is the position one finds oneself in, then footers may add some functionality. Of the footers I've tried, ranging from cones, to rolling balls, to viscoelastics, to different constrained-layer designs, each will do something to alter the sound, yet over time each reveals some inadequacy, or some other footer becomes appealing because it is different. I believe it takes time, at least a few months, to evaluate the sonic results of any vibration mitigation solution. Quick A/B comparisons can be misleading.
The OP asked about footers vs racks and platforms. I believe there is a consistency of experience as drawn across time, based on my own experience and reports and reviews from others. Racks and platforms designed to address vibration mitigation are more consistently better long term performers than footers.
Then again, ymmv.
Tim, I agree with your original contention that this can simply be about preference. I'm also thorough: I trial gear over time rather than doing short A/B comparisons, and I like to live with gear in day-to-day usage to better understand all the implications of the outcomes.
I come from an industry where you are required to formally validate your assessment strategy, and I've always thought the relatively poorly structured, sometimes downright random, approach displayed in the reviewing of audio gear to be a great weakness. When assessing subjective data it takes an even more rigorous approach to validate findings (unlike simple objective assessment, where it tends to be much easier to verify whether something performs better or worse), and subjective evaluation usually needs to be developed around a clear rubric of assessment outcomes, gradings, and weightings.
Tim, the finding below wouldn't likely survive the first step of even a fairly basic assessment audit; it is simply too generalised and based on subjective preferences that aren't really sufficiently evidenced here.
“I believe there is a consistency of experience as drawn across time, based on my own experience and reports and reviews from others. Racks and platforms designed to address vibration mitigation are more consistently better long term performers than footers.”
I'd love to challenge the audio review industry to be more rigorous: to develop (and regularly review) its assessment tools for assessing and validating the review process and the making of subjective determinations, and perhaps even to work towards a set of industry standards. I know it's not a regulated industry that you are in, but it would be good to see more clarity and structure in the audio review process in general. I believe we'd all be winners in that.
Below is an example of simple assessment principles and rules of evidence that the industry might do well to consider.
Assessment strategies and assessment principles should be designed to ensure reliability, flexibility, validity, and fairness.
Reliability refers to the degree of consistency and accuracy in the assessment outcome: the evidence presented is consistently interpreted and the results are comparable regardless of the assessor conducting the assessment.
Validity and reliability share a number of characteristics: both are based on the four dimensions of competency, and both use a process that integrates demonstrated knowledge and skills with their practical application.
Flexibility - the assessment is tailored to the specific assessment needs at hand.
Fairness - the assessment is conducted through careful consideration of the needs and characteristics of what is being assessed, and with the capacity to make reasonable judgments where required.
Rules of evidence:
To reach an appropriate balance when delivering an assessment outcome, the evidence collected should meet the rules of evidence.
Valid – this refers to the extent to which the assessment outcome is supported by evidence. Evidence is considered valid when the assessed performance matches the performance required in a competency standard. Assessors should be able to demonstrate the skills, knowledge, and attributes described in the outlined criteria of competency and any associated assessment requirements.
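For what it's worth, the rubric idea mentioned earlier (assessment outcomes with gradings and weightings) can be sketched in a few lines of code. Everything here is hypothetical: the criteria, weights, and grades are invented purely for illustration, not drawn from any real review standard.

```python
# Hypothetical weighted rubric for a subjective listening assessment.
# Criteria, weights, and grades are invented for illustration only.

CRITERIA = {
    # criterion: weight (weights sum to 1.0)
    "tonal_balance": 0.30,
    "soundstage": 0.25,
    "dynamics": 0.25,
    "long_term_fatigue": 0.20,
}

def rubric_score(grades: dict) -> float:
    """Combine per-criterion grades (1-5 scale) into one weighted score."""
    if set(grades) != set(CRITERIA):
        raise ValueError("grades must cover exactly the rubric criteria")
    return sum(CRITERIA[c] * g for c, g in grades.items())

# Two assessors grading the same platform. Comparable scores across
# assessors is the 'reliability' principle in practice.
assessor_a = {"tonal_balance": 4, "soundstage": 3,
              "dynamics": 4, "long_term_fatigue": 5}
assessor_b = {"tonal_balance": 4, "soundstage": 4,
              "dynamics": 3, "long_term_fatigue": 5}

print(round(rubric_score(assessor_a), 2))  # 3.95
print(round(rubric_score(assessor_b), 2))  # 3.95
```

The point of writing the weights down up front is that two assessors who disagree slightly on individual criteria can still land on comparable overall scores, and anyone auditing the review can see exactly how the grades were combined.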
We all tend to make a range of subjective evaluations regularly here and showing an understanding of the reasonable limits in the scope of our findings is actually what makes a determination more valid. But as always very much YMMV.