The Insanity of Marketing Disguised as Loudspeaker Science
Before reading this article, I must warn you that it is an opinion piece. You won’t find measurements and analytics from me backing up the statements I make herein. I just felt like venting my views about the abuses of loudspeaker marketing disguised as science, and I apologize in advance to any objectivists who will surely take issue with what is written in the text below. This article does NOT target any particular manufacturer, but raises issues I have with what I feel to be misconceptions in objective loudspeaker testing and the mindset some manufacturers and consumers tend to have regarding measurements and blind testing.
The exclusive interview below includes a great discussion about the Loudspeaker Myths covered in this article but also goes into more details including personal experiences that should be both informative and entertaining.
Must Watch Speaker Myths Interview with Gene DellaSala (left) and Hugo Rivera (right)
How many times have you seen this in the forums?
Thread Title: My New Speakers ROCK!!!111
SpeakerLVR: I just got new speakers made by XXX and they sound SO MUCH better than my old speakers made by YYY. I can't believe it, I'm so happy.
BrndXlvr: Oh, you are so lucky; I've had my eyes on those for a long time! JEALOUS!!!11
RNonParade: How do you know they sound better?
SpeakerLVR: Um…because they do?
RNonParade: But how do you know?
BrndXlvr: Everyone knows XXX speakers are better than YYY. Go away troll!
RNonParade: Do they? Well, look at this graph of speaker XXX and this graph of speaker YYY. Seems to me fairly clear that speaker YYY is better. Look at how flat the graph is. The other one has much more variation.
SpeakerLVR: But they sound better to my ears. And that's all that counts, right?
RNonParade: How do you know? Did you do a blind listening test?
SpeakerLVR: Well, I had my wife switch the cables for me.
RNonParade: And you level matched to 0.1dB? You didn't watch her switch the cables did you?
BrndXlvr: What? What are you talking about?
RNonParade: If you watched her switching the cables, it wasn't even a blind test! Heck, the only way it would be a decent test if she didn't know which speaker was playing either! You can't say you like those speakers better until you do a proper DBT.
SpeakerLVR: Okaaaaayyyy. I'm going to go watch a movie now. You have fun blindfolding your speakers.
RNonParade: HOW DO YOU KNOW YOU'RE WATCHING A MOVIE UNLESS YOU DBT IT? YOU COULD BE WATCHING YOUTUBE FOR ALL YOU KNOW!!!!
One Graph Rules them All!
There isn’t a day that goes by without a thread popping up on our forums where one of our members posts a heavily smoothed on-axis frequency response graph of a speaker and declares it must be the holy grail of performance compared to another speaker whose measurement doesn’t look as good. What the reader may not realize is that the measurement technique and/or the way the reviewer chose to report the measurement may be totally different for the two products being compared. There are many ways to manipulate measurements to look good. The marketing departments of virtually all successful loudspeaker companies learned this only too well many years ago. Most consumers are uneducated about loudspeakers, so manufacturers often love to publish the smoothest and flattest graphs possible.
For more information on this topic see: Audio Measurements – the Useful vs. the Bogus
A single on-axis measurement response simply cannot give you enough information as to how a speaker will play in a real room. This is why a collection of off-axis measurements at various power levels helps to complete the picture, but they still won’t get you all the way there. Most frequency response and distortion measurements are done at very low power. They don’t tell you how a speaker will hold up when driven at loud sustained output in a large room. Most distortion measurements are over-simplistic, low-level sweeps vs. frequency. They won’t reveal bad design choices such as incorrectly chosen crossover points causing a tweeter to play lower in frequency than it should, which increases audible strain and distortion or even destroys the driver. These measurements typically won’t show the increased intermodulation distortion that results from running a midrange driver over a wider than optimal bandwidth. These sweeps are pretty, though, and they give comfort to the objectivist, who falls in love with smooth flat curves instead of the actual sound quality produced in their listening environment.
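To illustrate how presentation alone can change how "good" a response graph looks, here is a minimal sketch (using entirely synthetic data, not any real speaker's measurement) of fractional-octave smoothing, one common way a published curve ends up looking far flatter than the raw measurement:

```python
import numpy as np

# Synthetic example only: a log-spaced frequency axis from 20 Hz to 20 kHz
freqs = np.logspace(np.log10(20), np.log10(20000), 500)

# Hypothetical raw on-axis response: nominally flat with +/-3 dB ripple,
# one ripple cycle per octave
raw_db = 3.0 * np.sin(2 * np.pi * np.log2(freqs))

def octave_smooth(f, db, width_octaves=1.0):
    """Average each point over a +/- width/2 octave window around it."""
    out = np.empty_like(db)
    for i, fc in enumerate(f):
        lo = fc * 2 ** (-width_octaves / 2)
        hi = fc * 2 ** (width_octaves / 2)
        mask = (f >= lo) & (f <= hi)
        out[i] = db[mask].mean()
    return out

smoothed_db = octave_smooth(freqs, raw_db, width_octaves=1.0)
print(f"raw ripple:      {raw_db.max() - raw_db.min():.1f} dB peak-to-peak")
print(f"smoothed ripple: {smoothed_db.max() - smoothed_db.min():.1f} dB peak-to-peak")
```

The ±3 dB ripple largely averages away under one-octave smoothing, yet nothing about the speaker changed. This is why a spec sheet's smooth curve tells you little without knowing the smoothing applied.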
It is important to mention that you simply cannot measure each speaker in the same fashion. A simple two-way monopole that acoustically sums at 1 meter behaves much differently than an array of drivers that may not acoustically converge until 2-3 meters away from the front baffle. Thus, different measurement techniques must be used to get an accurate representation of how the speaker performs.
It is equally important to mention that we are not downplaying the importance of proper loudspeaker measurements and procedures. We simply use measurements to determine if there are any obvious flaws in the design. When appropriate, we attempt to correlate our subjective impressions with our objective measurements. If, for example, we find a speaker sounds too bright in our listening tests and then measure a +10dB rise in frequency response above 10kHz, there is usually a good correlation between the listening impressions and the measurements.
For more information on this topic see: Why we Measure Audio Equipment Performance
Anechoic Chamber – the place where sound goes to die
An anechoic chamber is basically a room without echoes. It is designed to allow the most accurate measurements of a loudspeaker by keeping the room acoustics from interfering with the measurements. Chambers are only accurate down to the frequencies where the wedges stop being effective absorbers, which in most chambers is around 100-150Hz.
Some manufacturers argue that a loudspeaker cannot be designed properly without the use of an anechoic chamber. This is yet another fallacy. Any loudspeaker engineer worth their salt knows how to separate the room from the speaker through various measurement techniques. Modern electronic measurement systems allow for “gating” the measurements: gating removes the early reflections, leaving essentially just the anechoic information. Because of this computer measurement capability, and the ability to measure outdoors, large and expensive physical anechoic chambers are far less necessary than they were a generation ago. Propping up the need for anechoic chambers does, however, sell a good story: a company that has made such an investment must automatically make better speakers than one that hasn’t. One has to ponder: is it cheaper to make the initial investment of, say, $50k on an anechoic chamber to give the perception that better designed loudspeakers would result, or to justify the long-run cost savings of using inferior components because their “science” backs up the claim that “higher quality parts don’t make a difference”?
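The gating idea is simple enough to sketch in a few lines. Below is a toy example (with a synthetic impulse response, not real measurement data) of windowing out everything after the first room reflection so the FFT yields a quasi-anechoic frequency response:

```python
import numpy as np

fs = 48000                        # sample rate in Hz (assumed for this sketch)
n = 4096
impulse = np.zeros(n)

# Hypothetical measurement: direct sound at t=0, floor bounce 6 ms later
impulse[0] = 1.0                  # direct arrival from the speaker
impulse[int(0.006 * fs)] = 0.4    # first room reflection at 6 ms

gate_ms = 5.0                     # gate closes before the reflection arrives
gate_samples = int(gate_ms / 1000 * fs)
gated = impulse.copy()
gated[gate_samples:] = 0.0        # simple rectangular gate discards the room

# FFT of the gated impulse gives the quasi-anechoic magnitude response
response = np.abs(np.fft.rfft(gated, n))
```

The trade-off is the one Mr. Bamberg alludes to below: a 5 ms gate limits frequency resolution to roughly 1/0.005 s = 200 Hz, so bass behavior still has to be characterized another way, such as close-miking or measuring outdoors.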
In defense of anechoic chambers, they do provide a means of accurately and consistently producing a set of measurements for a loudspeaker under test while also significantly reducing testing time. They are a great tool to use, but they aren't necessarily a requirement for accurately designing and evaluating loudspeaker performance.
Editorial Note on Anechoic Chambers by Philip Bamberg
My experience has been that an anechoic chamber must be quite large to truly be flat to 100Hz. If I can’t get valid data down that low, then I can’t properly characterize the baffle rise, and therefore I have much less chance to correctly compensate for it. For that reason I still prefer measuring outdoors.
For more information on this topic see: Interview with Atlantic Technology on Loudspeaker Design Philosophy
The Double Blind Test – the single most abused term in audio
We have discussed this topic in detail many times before, but we will cover it again because the insane logic still prevails on the forums. The objectivists argue that the only way to determine if Speaker XXX is better than Speaker YYY is by double blind testing, because if the listener sees the speakers, their judgment will be unfairly influenced by their visual or brand preferences. This has actually become a religion for some people, and their minds are quite settled on the notion that you simply cannot determine anything meaningful from a sighted test, no matter how well the bias variables are controlled.
The Double Blind Test (DBT) is a testing protocol typically used in the medical field as the means of determining the outcome of an experimental drug or procedure by eliminating biases. In a true DBT, neither the researcher nor the participants know which medication is being given, in order to avoid preferentiality from either side. Examples of these biases include, to mention a few: type of patient, type of condition or ailment, type of drug, and type of packaging. The “products” under test must not only look the same to the naked eye, but even “taste” the same. The goal is to isolate the effect under study by removing all conscious and subconscious biases.
Let’s examine this logic a bit further by assuming for the moment that it’s unanimously agreed Speaker XXX is prettier and more prestigious than Speaker YYY. The objectivists will decry that you have to conduct a controlled DBT to avoid biases caused by visual preferences. What if we disconnected the tweeter of Speaker XXX so that all of the top end was removed and only bass and vocals were playing? Are we expected to believe the listeners would still prefer Speaker XXX in a sighted test because it’s prettier?
If you put two top-quality steaks in front of a taster who hates Tabasco sauce, one infused with Tabasco sauce and the other lightly seasoned, do we really need a Double Blind Test (DBT) to determine their preference?
To those who consider DBTs a necessity for determining subjective consumer preferences, even for devices whose primary purpose is to produce sound, I encourage them to try one when comparing performance sports cars on a test track :) Don't trust your eyes; rely on the Force to guide you around the track!
The reality is DBTs are critical for medical tests where one needs to study the effects of different drugs in a controlled environment. In my opinion, the importance of applying a DBT protocol in a controlled listening test of loudspeakers is overstated, especially when the differences between stimuli are far greater than the perceived biases of appearance or brand prestige. When testing components with subtle sonic differences such as cables, or changing a component in a loudspeaker to determine if the change is sonically detectable, a blind testing protocol makes sense. In these scenarios, the blind test is useful in reducing perceptual bias to more accurately determine minute detectable differences, but not necessarily to determine whether those differences are preferred.
In most cases, nobody adheres to strict DBT protocol (where both the listeners and testers are unaware of what they are testing). Instead, they often at best set up an SBT (Single Blind Test), which has its own biases, often not disclosed by the testing party. Despite my opinion on this topic, it's noteworthy to mention we still often engage in sighted and blind test shootouts of loudspeakers to study how the results differ, as can be seen in our 2009 Floorstanding Loudspeaker Shootout.
The Claim: “Our Speakers are ‘Similarly Good’ to the Very Best on the Market Regardless of Price”
Here is where I take the most issue with the abuses of the blind test, especially when a manufacturer claims they conducted a true DBT to come to these conclusions. It is convenient for a loudspeaker company to declare that their speakers always win a DBT or, at worst, are “similarly good” to their competitors. This puts them into a no-lose scenario regardless of product price.
What they often fail to mention is that they came to this conclusion while running their own blind test instead of having a neutral 3rd party run it for them. They typically only win their own blind tests, which are often only SBTs and NOT DBTs as they claim. They almost always use their own staff, who are intimately familiar with the sonic signature of their products. This introduces familiarity bias that conveniently produces their preferred conclusions. Such a test doesn't even qualify as an SBT; it's basically just a blind test heavily biased toward the listener familiar with the product under test. I have personally found familiarity bias in a blind test among trained listeners to be a far greater source of bias than the perceptual bias of a sighted test. The extent of perceptual bias depends on how familiar the listener is with the brands and prices, and how they subjectively value the aesthetics of the products under test. Familiarity bias depends on how familiar the listener is with the model or brand of speaker in the comparison. It is very easy to train a listener to recognize the sonic signature of their own product.
I have to say, I firmly believe the DBT is the single most abused term in the audio industry. To date, we haven’t seen any company disclose which products they’ve allegedly compared to and beaten. Are we really to assume the company claiming product superiority has gone out and purchased the very best and most elaborate competitor designs to compare their products to? No company has that kind of budget, nor would they be willing to invest the financial resources and time to test every competing model and make such a determination with any statistical confidence. If their speakers are that good, why not disclose, in a true DBT protocol format (peer reviewed by a neutral party), which products and models they’ve beaten? Simply publishing papers declaring that “sighted testing” is flawed is, in my opinion, at best a red herring and at worst intellectually dishonest. Either fully disclose all test data, indicating all sources of test bias, or simply stop making declarations in white papers and online marketing literature.
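To put "statistical confidence" in concrete terms, here is a back-of-the-envelope sketch of the math behind a blind preference or ABX-style test: the probability of scoring a given number of correct answers by pure guessing (a one-sided binomial calculation with a 50/50 chance per trial):

```python
from math import comb

def p_value(correct, trials):
    """One-sided probability of guessing at least `correct` of `trials`
    right by chance alone (binomial with p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 8 of 10 sounds convincing, but pure guessing gets there about 5.5%
# of the time; a marketing claim built on a handful of trials with a
# handful of listeners proves very little.
print(f"8/10 correct:  p = {p_value(8, 10):.3f}")
print(f"13/16 correct: p = {p_value(13, 16):.3f}")
```

Note this only addresses trial count for a single listener on a single pair of products; extending a "similarly good to anything at any price" claim across hundreds of competing models multiplies the required testing effort accordingly, which is exactly why such blanket claims deserve skepticism.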
As an interesting side note, I once offered one of the industry’s largest, most widely known and respected speaker companies the chance to conduct a DBT at their facility using competitive products I would send them, with me on the panel of listeners. I had many manufacturers interested in participating and shipping product to that company to be part of the testing. I also suggested each participating manufacturer be allowed one listener on the listening panel. The company initially seemed to go for it, but at the last minute declined, stating they didn’t want to deal with potential legal issues from fallout over the testing results that were to be published on Audioholics. They also didn’t want competing manufacturers on their premises observing their labs and protocols. So much for the science.
In our experience, many companies that cherish the DBT mantra often shy away from participating in 3rd party shootouts (sighted or blind). This is true even if we give them an advance copy of our testing protocol and results prior to publishing! I pose this request to ALL loudspeaker manufacturers that claim their speakers are “similarly good” or better than their competition at any price: publish your results, please, or at least allow a neutral 3rd party to conduct the testing at your facilities!
Editorial Note about Blind Tests by Philip Bamberg
I don’t have much to say about blind tests. But I believe that with speakers there are usually enough differences in sound among them to not demand blind testing. Besides, there are differences which do not always fall into the two categories of better and worse. Finally, there is the benefit of medium and long term evaluations. The former occurs after a couple of hours or so of continued listening. This is when one may become fatigued (as with too much distortion), or bored (as with too much veiling). Long term listening over a week or more is useful because one may begin to better draw conclusions on overall strengths and weaknesses of the product.
For more information on this, see: Revealing Flaws in the Loudspeaker Demo & DBT and Overview of Audio Testing Methodologies
Competitive Marketspace Leads to Aggressive Marketing
It’s important to realize that the audio market could best be described as a 90/10 split: 10% are audiophiles/hobbyists (like us) and the other 90% are just casual listeners. The 90% would prefer ‘good’ sound over ‘mediocre’ sound if given the choice, assuming it even entered their minds in the first place. It usually doesn’t.
There are over 400 brands of speakers on the U.S. market beating each other’s brains out fighting for the ever-diminishing 10% segment, while a brilliantly marketed company like Bose actively courts the 90% segment, virtually unopposed by the high-end brands. That’s smart marketing, and it’s to their credit, not their detriment, that they do this. The remaining brands struggle to justify their own existence by differentiating themselves from their competition. It’s not unreasonable, then, that they feel the need to dress up marketing as science while sometimes claiming product superiority based on their own controlled and, ironically, unpublished blind listening results.
Conclusion
I hope this article sheds some light on why you simply cannot declare Speaker XXX is better than Speaker YYY based on a few measurement graphs, or based on claims from a manufacturer that their speakers are inherently the best because they use anechoic chambers and DBT protocols during their design and testing phases.
There are some very large, well-recognized name brands that do very substantial audio research, yet they continue to manufacture and deliver speakers that put a higher premium on “style” and appearance than on actual audio performance. These products have material costs, and subsequent performance, far below many of their competitors at similar price points. We also used to have a writer who was a design engineer at another well-known speaker company; he claimed he would design a speaker to measure very linearly, only to have marketing dictate a deliberate bass bump so it would demo well in a noisy show floor environment.
These are just two of many examples I have observed over the years in this industry. Don’t blindly trust the science purported by the manufacturer, because more often than not, marketing dictates final product decisions, which can adversely affect product performance, either to obtain more profit or to provide more boom and sizzle to dazzle during a brief showroom demo. We live in an era where “the science” is often used and abused as a marketing tool.
Take your time to demo speakers with familiar listening material and preferably in your own listening space to ensure you can actually live with the product. Always level match when doing direct comparisons between products. Feel free to have a friend switch between the products you are comparing and close your eyes if you feel perceptual bias may get the best of you. Personally, I find I can better concentrate on the sound if I close my eyes during critical listening - so there's a valid pro argument for doing blind testing.
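Since level matching matters so much in direct comparisons (louder almost always sounds "better"), here is a trivial illustrative helper showing the arithmetic: the voltage gain change needed to close a measured SPL difference between two speakers. The numbers are made up for illustration:

```python
def gain_ratio(spl_a_db, spl_b_db):
    """Voltage multiplier to apply to speaker B's drive level so its
    output matches speaker A's (20*log10 relationship for voltage)."""
    return 10 ** ((spl_a_db - spl_b_db) / 20)

# Hypothetical readings at the listening position with the same source level:
# speaker A measures 84.0 dB SPL, speaker B measures 82.5 dB SPL.
scale = gain_ratio(84.0, 82.5)
print(f"scale B's drive voltage by {scale:.3f}x to match A")
```

Even a 1.5 dB mismatch requires nearly a 19% voltage change to correct, yet it is small enough that many casual A/B comparisons never notice it, which is how the louder speaker quietly "wins."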
Editorial Note about Listening by Philip Bamberg:
Again, I advise the listener to be cautious about concluding that a distorting speaker is “detailed”, or that a veiling speaker is “smooth”. Listen long enough so that the “detailed” speaker is also not fatiguing, or that the “smooth” speaker is not boring.
Next time you see someone pop up in the forums declaring a loudspeaker’s superiority because of a frequency graph or a manufacturer’s scientific claims, send them a link to this article and hope it at least gets things churning in their heads so they don’t formulate a final opinion on a product they’ve never heard firsthand. How a loudspeaker plays into a room, and how we ultimately perceive that experience, is a far more complex topic than we can fully understand and neatly frame with a few measurements and listening tests (sighted or blind).
Article Addendum (added 2/10/12)
I thought it might help to recap the main points in this article for the forum folks who may have misunderstood my intentions with this opinion piece.
- We are NOT anti-measurement or anti-DBT / anti-blind testing.
- We engage in our own blind test protocols frequently when doing product comparisons.
- Recognize that most companies that claim to use DBT protocol don't. At best they are doing an SBT or, if using their own staff, a highly biased blind test.
- Recognize that companies making "similarly good" or "you can't do better beyond a certain price" claims are highly dubious.
- Recognize that blind tests can be as biased and flawed as sighted tests.
- Recognize sighted tests can still be valid if properly controlled.
- Put more weight on reviews and reviewers that take the time to actually measure the products, consult with manufacturers to peer review their results, and give a detailed look inside the box and discuss design theory.
- Gather user feedback in the forums on the particular product you are interested in assuming said users have actually heard the products in question.
- Recognize that all measurement techniques and reports aren't created equal.
- Not everything we can measure relates to audibility and not everything that is audible can be directly measured in loudspeakers.
- Realize that distortion measurements are usually severely lacking in accuracy or in correlation to how the ear perceives distortion. Music is much different than sweep tones.
- Trust your instincts: if something seems too good to be true, it probably is, regardless of how much "science" is wrapped around it.
- Lastly and most importantly, demo the products for yourself, preferably with familiar listening material and in your own listening environment.