Cable Distortion and Dielectric Biasing Debunked
Recently I've done a collection of measurements and tests on interconnect cables to see what I could find that would explain the sonic differences that many people, including myself, have grown accustomed to hearing. The test equipment was an Audio Precision System 2 Cascade. Test objects were a handful of cables of varying construction and claims to audiophile performance.
- Distortion: Not only sine-wave testing, but also extremely complex full-spectrum multitone testing (including signal sequences derived from actual music). There were NO differences between the cables tested.
- Phase noise: While this would have shown up anyway in the above tests, it was separately checked at frequencies well above the audio band. Nothing showed up.
- "Micro phase shifts": The AP2's resolution is so good you can read the length of a cable to within a few inches by measuring the phase difference between input and output. Apart from this, nothing turned up.
- In-Out difference: Two different cables of equal length were fed the above distortion test signals in opposite phase, and the two outputs were summed through a trimmable network to null the output. The output nulled completely, better than 120dB across the audio band (a short numeric sketch of what that figure means follows below).
In short, apart from a constant time delay of a few nanoseconds (depending on length), an interconnect will have the same voltage at its output as at its input.
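To put that null figure in voltage terms, here is a minimal sketch with assumed numbers (not data from the test): it simply expresses a post-nulling residual in dB relative to the signal, the way the "better than 120dB" result is stated.

```python
# Minimal sketch (assumed numbers): expressing a null-test residual in dB.
# If a 2 V rms signal drives the two cables in opposite phase and 2 uV rms
# remains after the trimmable summing network, the null is 120 dB deep.
import math

v_signal   = 2.0     # V rms, level fed to both cables (assumed)
v_residual = 2e-6    # V rms, what is left after trimming the null (assumed)

null_depth_db = 20 * math.log10(v_residual / v_signal)
print(f"null depth: {null_depth_db:.0f} dB")   # -> -120 dB
```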
But What If...?
Or will it? There's one well-known (and usually ignored) effect in unbalanced connections: the same conductor that makes the chassis connection also serves as the reference for the signal. In a normal cable these are 100% coupled, which means that the part of the chassis error voltage that drops across the inductive part of the cable impedance (the end-to-end impedance of the shield) couples into the inner conductor and is compensated 100% (yes, unbalanced connections do have a form of CMRR). At lower frequencies, however, more of the voltage drops across the resistive component of the shield, and this part is not compensated; it appears as an error voltage at the receiving end. Take a coaxial cable, take the jacket (sheath) off and dress it in a number of extra layers of shield salvaged from other cables, thereby lowering the shield's resistance, and hear the sound improve... This addresses the same problem as "mains conditioners", but it does so much more effectively.
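To put rough numbers on this, here is a minimal sketch; the shield resistance, shield inductance and chassis error voltage are assumed illustrative values, not measurements from the article. Following the description above, the inductive share of the chassis error voltage is taken as fully cancelled by mutual coupling, while the resistive share reaches the receiver. Extra shield layers lower the shield resistance, which reduces both the low-frequency error and the corner frequency R/(2πL).

```python
# Minimal sketch (assumed values): chassis error voltage reaching an unbalanced input.
# Model from the text: the part of the error dropping across the shield's inductance
# couples into the centre conductor (mutual inductance ~ shield inductance) and
# cancels; the part dropping across the shield's resistance does not.
import math

R_shield  = 0.020   # ohm - end-to-end shield resistance (assumed, roughly 1 m of coax)
L_shield  = 1.0e-6  # H   - shield inductance ~= mutual inductance to the core (assumed)
V_chassis = 10e-3   # V   - error voltage between the two chassis (assumed)

def error_at_receiver(f_hz):
    """Resistive share of the chassis error: |R / (R + j*w*L)| * V_chassis."""
    w = 2 * math.pi * f_hz
    return V_chassis * R_shield / math.hypot(R_shield, w * L_shield)

f_corner = R_shield / (2 * math.pi * L_shield)  # above this, inductive coupling takes over
for f in (50, 1_000, 20_000):
    print(f"{f:>6d} Hz: {error_at_receiver(f) * 1e6:8.1f} uV of error")
print(f"shield corner frequency ~ {f_corner / 1e3:.1f} kHz")
```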
The intelligent solution, however, is to use balanced connections. In a balanced connection, two conductors are used in addition to the shield. Sometimes they are driven with opposite-phase signals; sometimes one is simply tied to ground at the source end through a series resistance equal to that of the source. Both options are fine. The crux of the matter is that the receiver looks only at the voltage difference between the signal conductor and the reference (or inverted-signal) conductor. This removes the effect of voltages across the shield completely, because the signal-reference and chassis-connection functions are properly separated.
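A toy sketch of that "look only at the difference" idea (the numbers are made up): whatever voltage the shield contributes appears on both conductors equally and drops out of the subtraction.

```python
# Minimal sketch (illustrative numbers): a differential receiver ignores shield voltage.
signal       = 0.5    # V, wanted signal on the hot conductor
shield_error = 0.01   # V, error the shield adds to BOTH conductors equally

hot  = signal + shield_error   # signal conductor, contaminated by the common error
cold = 0.0    + shield_error   # reference conductor (grounded at the source), same error

print(f"{hot - cold:.3f} V")   # -> 0.500 V: only the wanted signal survives
```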
Microphonics
There may not be a difference between what goes into a cable and what comes out, but this does not mean that the presence of the cable can't modify the signal. I'm talking about microphonics, of course. This effect has two causes: triboelectric charging and the condenser mic effect.
Triboelectric charging is the same effect that causes you to accumulate electric charge when walking across a thick carpet in winter. The charge is siphoned off to the terminating resistances (mostly the output impedance of the source) and creates a voltage there as long as the cable is moving.
Condenser microphones work by varying the capacitance of a capacitor that holds a more or less constant charge. This is done by connecting it across a voltage source with a very high internal resistance. As sound waves change the capacitance, the voltage across the capacitor changes too. In a cable, the transmitted signal takes on the function of the bias voltage, and motion of the conductors changes the capacitance.
Applying a "bias voltage", as certain cable companies do in a bid to linearise the dielectric (the purported nonlinearity of which consistently fails to show up in any test), is extremely counterproductive in this respect! The higher the voltage on the cable, the greater the condenser microphone effect.
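A minimal sketch of why, using assumed capacitance figures: behind a high impedance the charge on the cable capacitance is roughly constant over a short mechanical disturbance, so a capacitance wobble ΔC produces a voltage error of roughly V·ΔC/C, proportional to whatever voltage (signal or added bias) sits on the cable.

```python
# Minimal sketch (assumed numbers): the condenser-microphone mechanism in a cable.
# With charge Q = C*V held roughly constant behind a high impedance, a change in
# capacitance dC shows up as a voltage change of about V * dC / C.
C  = 100e-12   # F, cable capacitance (assumed, ~1 m at 100 pF/m)
dC = 1e-15     # F, tiny capacitance change from flexing the cable (assumed)

for v_on_cable in (0.1, 2.0, 30.0):     # quiet signal, full line level, heavy DC bias
    dV = v_on_cable * dC / C            # magnitude of the microphonic error
    print(f"{v_on_cable:5.1f} V on the cable -> {dV * 1e6:6.1f} uV of microphonic error")
```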
Reducing triboelectric charging is done by using a dielectric/conductor duo that produces little contact charge. Aluminium and paper are one such combination, cotton and steel another. Unfortunately, paper and especially cotton are quite soft, making the cable particularly susceptible to the condenser mic effect. A method to reduce triboelectric noise in normal insulators consists of lubricating the shield/insulator interface with graphite.
Reducing the condenser mic effect requires a tough (hard to deform) dielectric. Teflon is a famous example. Unfortunately, Teflon is incredibly triboelectric against practically any other substance. In addition, the stiffness of Teflon, and of silver, makes the cable nearly lossless, mechanically speaking: measured microphonic impulse responses show tremendous ringing in the upper audio band. This could explain the "brightness" often attributed to silver/Teflon cables.
To make matters worse, Teflon and silver are about the worst conceivable combination in terms of triboelectricity.
On the other hand, there is the "sound engineering" solution: use a signal source with the lowest possible output impedance. Charges generated and transferred by either effect are absorbed at the source, and the receiving end never gets to see them. I have been surprised, though, at how low this drive impedance needs to be before cable microphonics disappear below the noise floor of good audio gear.
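As a rough sketch (the injected current is an assumed figure, not a measurement), treat the triboelectric disturbance as a small current dumped into the cable: the voltage error it produces is roughly that current times the impedance it sees, which for a high-impedance receiver is essentially the driver's output impedance.

```python
# Minimal sketch (assumed numbers): why drive impedance dominates cable microphonics.
i_tribo = 1e-9   # A, assumed charge-transfer current from flexing the cable

for z_out in (600.0, 50.0, 1.0, 0.1):   # ohm, candidate driver output impedances
    v_error = i_tribo * z_out           # error at the receiver when Zin >> Zout
    print(f"Zout = {z_out:6.1f} ohm -> {v_error * 1e9:7.1f} nV of microphonic error")
```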
Summary
To recap: to make cables disappear from the sonic equation, all that is needed is balanced transmission combined with sub-1ohm output impedance line drivers. I would like to propose this as a standard for audiophile equipment makers.
This shows that people who claim that cables do not make a difference are plainly deluding themselves. On the other hand, those who say that cables should not make a difference are dead right.
Article Epilogue
We are greatly appreciative of Bruno's efforts in this article. However, I felt it important to mention that much of this article's focus on microphonics and triboelectric effects relates to interfaces where low-impedance sources drive high-gain, high-impedance circuit terminations, as is the case with microphones and phono preamps. In reality, the triboelectric effect rarely becomes a real-world problem in consumer audio. Microphone applications must take these effects into account, especially since the cables attached to the microphones are often in motion, caused by the singer and/or local mechanical vibration, which can induce noise into the system. For such instances there are specifically designed cables that use damping materials (usually cotton) to act like shock absorbers and reduce contact area, as well as a different shield construction that is less prone to triboelectric noise.
These THD measurements actually have a story behind them. After making my results known, I was informed that some people claimed to have measured plain harmonic distortion on audio cables, specifically at lowish signal levels. THD levels well above -120dB were reported, with great differences between different makes of cable. These distortions were subsequently attributed to "micro diode effects" in the cable.
I was invited to make a set of measurements to duplicate the test. As a first pass, I just grabbed four different cables from my bench and subjected them to distortion measurements.
One was a cheap A/V cable: a triple coax with serve shield, all bare copper.
One was a Japanese-manufactured cable with audiophile pretensions from a brand named Hisago. It was the only cable in the test field I've actually listened to, and I remember feeling the sound was exceptionally appalling (that's appalling, not appealing). Conductors are bare OFC, the insulator is foamed PE.
The third cable was a 50 ohm RF coax with solid PE insulator and bare copper conductor. The shield is tinned copper.
The fourth is a coax with a Teflon jacket (sheath), Teflon insulator and silvered conductors/shield.
Test Set-Up
Source setting was 1kHz, 30mV, 20 Ohms impedance. The plots show 256-times power-averaged FFTs of the residual; that is, the 30mV fundamental was notched out so that the distortion/noise performance of the ADC does not affect the result. All dBs are relative to the fundamental. Power averaging smooths the noise floor, making it easier to pick harmonic components out of the noise.
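For illustration, a minimal sketch of power averaging on synthetic noise (this is not the AP2's internal algorithm): averaging the magnitude-squared spectra of many records shrinks the spread of the noise floor without lowering its level.

```python
# Minimal sketch (synthetic data): power averaging smooths the noise floor,
# it does not lower it. Each record's magnitude-squared spectrum is averaged.
import numpy as np

n, records = 4_096, 256
rng = np.random.default_rng(0)

def power_spectrum(x):
    return np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2

single = power_spectrum(50e-6 * rng.standard_normal(n))
averaged = np.mean(
    [power_spectrum(50e-6 * rng.standard_normal(n)) for _ in range(records)], axis=0
)

for name, spec in (("single record", single), ("256x power avg", averaged)):
    floor = spec[10:-10]   # skip the DC and Nyquist edges
    print(f"{name:>15}: mean level {floor.mean():.2e}, spread (std/mean) {floor.std() / floor.mean():.2f}")
```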
The graphs are squeaky clean. The generator's third harmonic, at around -130dB, just peeks out. No other distortion products are visible.
These plots are clearly not very helpful - once distortion products are 10dB or so below the noise floor, there's no way of recovering them. Although it is already clear that no distortion products at -120dB levels are present, the inquisitive eye wants to see more.
The Workaround
Luckily, there's a trick called synchronous averaging. It adds up signal records grabbed synchronously with the stimulus and performs the FFT on the averaged measurement afterwards. All "correlated" components (harmonics etc.) are increased by a factor of n, while noise increases only with the square root of n. The net effect is that the relative contribution of the noise decreases by the square root of n. Averaging 256 records will thus improve the SNR of the measurement by 24dB. Of course, the noise floor is no longer smooth as it is with power averaging, which makes reading harmonic levels near the (reduced) noise floor somewhat difficult. Otherwise, generator and analyser settings are the same.
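A minimal sketch of the same idea on synthetic data (tone and noise levels are assumed, and this is not the AP2's internal code): averaging the records in the time domain leaves the coherent part untouched while the uncorrelated noise shrinks by √256 = 16, i.e. 24dB.

```python
# Minimal sketch (synthetic data): synchronous (time-domain) averaging.
import numpy as np

fs, n, records = 48_000, 4_800, 256            # 4_800 samples = 100 whole cycles of 1 kHz
rng = np.random.default_rng(1)
t = np.arange(n) / fs
tone = 10e-6 * np.sin(2 * np.pi * 1_000 * t)   # coherent "harmonic", 10 uV (assumed)
noise_rms = 50e-6                              # uncorrelated noise, 50 uV rms (assumed)

single = tone + noise_rms * rng.standard_normal(n)
averaged = np.mean([tone + noise_rms * rng.standard_normal(n) for _ in range(records)], axis=0)

for name, x in (("single record", single), ("256x sync avg", averaged)):
    coherent = 2 * np.dot(x, np.sin(2 * np.pi * 1_000 * t)) / n   # amplitude of the 1 kHz part
    residual = np.std(x - tone)                                   # what remains is just noise
    print(f"{name:>14}: tone ~{coherent * 1e6:5.1f} uV, noise rms {residual * 1e6:5.1f} uV")
```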
The second and third harmonics of the generator are now clearly visible (had they come from the cables, they would be different on each plot). A faint hint of a 5th is visible, but its reading is unreliable due to its proximity to the noise floor.
No Distortion Products Even Remotely Attributable to the Cable
OK, let's now suppose that the low generator output impedance is just reducing dielectric distortion in the same way that it does microphonics, and that the high input impedance of the analyser is preventing the claimed "micro diodes" from becoming sufficiently biased to actually produce distortion. Generator output and analyser input are now set to 600 ohms impedance. This causes the signal level to drop by half, to 15mV. The tests are re-run on all cables.
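For completeness, a one-line sketch of where the 15mV comes from, assuming the 30mV setting refers to the generator's unloaded output: with 600 ohms at each end, the level divides in half.

```python
# Minimal sketch: matched 600/600 ohm termination halves the (assumed open-circuit) level.
v_open, z_out, z_in = 30e-3, 600.0, 600.0
print(f"{v_open * z_in / (z_out + z_in) * 1e3:.0f} mV")   # -> 15 mV
```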
No News. No Distortion.
A month later, two envelopes leave Steve Eddy's office (Owner of Q-Audio.com ). One is addressed to me, the other to the person who had originally claimed to be measuring distortion.
The envelopes each contain an identical set of four different types of cable.
- An unmarked cable like the ones packaged along with CD players etc.
- An "old style" Radio Shack Gold cable. This cable was laid out as a zip cord.
- A spanking new (totally unused - no "burn-in" or whatever it's called) Radio Shack Gold cable. This new version is sold separately.
- An RG174 cable. This cable is interesting in that the conductor is made of copper-plated steel ("Copperweld"). Steel, at least, is known to have nonlinear magnetic properties, so who knows what might happen at sufficiently large signal levels.
This time, no time is lost on making power-averaged plots. All plots are synchronously averaged. Four settings are tried on each cable:
- Amplitude=30mV, Zout=20 Ohms, Zin=high (100k)
- Amplitude=30mV, Zout=600 Ohms, Zin=600 Ohms
- Amplitude=13V, Zout=20 Ohms, Zin=600 Ohms
- Amplitude=13V, Zout=600 Ohms, Zin=600 Ohms
All cables yield frighteningly identical results, including the RG174 cable.
I emailed Steve Eddy to complain that the tests were boring. He replied that this was a sacrifice one makes for science. (I felt much better after that.)
Cable Distortion Plots and Commentary
Special thanks to Bruno Putzeys, Chief Engineer Class D Audio at Philips Digital Systems Labs (Philips DSL)