DO in Wonderland by Dick Olsher
If it's Tuesday, it must be Belgium. Here I was, in Jean-Claude Van Damme City, at the tail end of a European adventure that began at the Milano Hi-Fi Show (September 1996) the week before, courtesy of my hosts - Audio Note's Mike Trei and Herb Reichert. After lunch in Brussels, it was off to Calais and a rendezvous with the car ferry to Dover, England.
The seed of the basic idea was planted somewhere across the English Channel. With a cold Heineken in hand, the discussion turned to audio. Not really surprising considering the company present, but as it veered off onto a road less traveled, new insights were crystallizing in the subconscious mind. The crux of the problem was this: if you believe that music reproduction in the home should be an event that communicates the emotional intensity of the original performance, then where in the audio signal are emotions and feelings coded?
Most of us are probably familiar with the idea that our genetic code shapes who and what we are. From the size of our nose to the volume of our cranium, and even a disposition to allergies, we are slaves to the tiny genes in our cells. The musical equivalents of biological genes are the micromodulations in volume, frequency, and time that make music human, that embed a range of emotions and moods within the melodic flow of notes. To paraphrase Mies van der Rohe, emotions are in the details. Consider the human voice. More than any other instrument, it can inspire, make you laugh or cry. It can coax love, hate, despair, and infinite shades of blue from a black-and-white musical score. The stream of syllables from a singer's mouth is modulated in several dimensions. Time variations are pretty easy to discern. Subtle tremolo effects are much less obvious, as are frequency modulations smaller than 5 to 7 Hz - the normal range of vibrato. However, all of these clues are crucial to the auditory system in analyzing the stream of information that impinges on our ears. It is precisely this sort of complexity that has made it impossible for machines to faithfully mimic the human voice. Voice synthesizer chips sound mechanical because they lack the richness of "detail" that characterizes the human voice.
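To make "micromodulation" a little more concrete, indulge me in a small sketch of my own - the numbers are illustrative assumptions, not measurements of any real voice. A steady tone is given a gentle 6 Hz vibrato and a slow tremolo, and that alone is enough to make a sterile sine wave begin to breathe:

```python
import numpy as np

fs = 44100                       # sample rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of signal

f0 = 440.0            # nominal pitch of the "voice"
vibrato_rate = 6.0    # Hz, within the 5-7 Hz vibrato range mentioned above
vibrato_depth = 4.0   # peak frequency deviation in Hz (an assumed, gentle wobble)
tremolo_rate = 5.0    # Hz, a subtle loudness flutter
tremolo_depth = 0.1   # +/-10% amplitude swing

# The instantaneous frequency wobbles around f0; integrating it gives the phase.
inst_freq = f0 + vibrato_depth * np.sin(2 * np.pi * vibrato_rate * t)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs

# The amplitude envelope carries the tremolo.
envelope = 1.0 + tremolo_depth * np.sin(2 * np.pi * tremolo_rate * t)

signal = envelope * np.sin(phase)  # a tone whose pitch and level both "breathe"
```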
I used to think that audio gear was the lie that made us hear the musical truth. Despite its "poorer" test-bench measurements, vacuum-tube gear has consistently pushed my buttons more effectively than a host of "technically perfect" solid-state amps. Lesley, my dear spouse, is an even more fanatical tubephile than I am. She's not into hardware and status symbols per se. She'll listen, crinkle her nose, and then turn to me with "there's no warmth" or "colors are wrong," and finally, in an accusing tone: "these aren't tubes, are they?" And, of course, she's always right (at least in these matters).
My experience over the past several years with single-ended triode amplification has convinced me that rather than being a "lie," simple triode-based circuits do tell the truth more cogently than the competition in at least one crucial area. Recall that the music's emotional content is in the form of micromodulations - low-level signals. These signals come to life in the first few milliwatts of amplification. I've said that the first watt sets the stage and lets the emotions bubble to the surface, and if it's wrong, it matters little to me that there are another 199 watts like it in reserve. Emotional impact is fully fleshed out only when such critical low-level detail is allowed to blossom. Quality over quantity is what it's all about. Simple, low-power, triode circuits appear to perform better at low signal levels than do high-powered and more complex designs. And so, we landed at the white cliffs of Dover, with the notion "simpler is better" clearly etched in my mind.
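(For a rough feel of what "the first few milliwatts" means in practice, here is a back-of-envelope sketch of my own; the 90 dB/W/m speaker sensitivity is an assumed figure, and room and listening-distance effects are ignored. At ordinary levels the amplifier is loafing along on milliwatts, and only the loudest peaks trouble the first handful of watts.)

```python
def watts_for_spl(target_spl_db, sensitivity_db_1w_1m=90.0):
    """Electrical power (in watts) needed to reach a target SPL at 1 meter,
    for a speaker rated at sensitivity_db_1w_1m dB SPL with 1 watt at 1 meter."""
    return 10 ** ((target_spl_db - sensitivity_db_1w_1m) / 10.0)

for spl in (60, 70, 80, 90, 100):
    print(f"{spl} dB SPL -> {watts_for_spl(spl) * 1000:.1f} mW")

# With the assumed 90 dB/W/m speaker: 60 dB needs about 1 mW, 80 dB about 100 mW,
# and only 100 dB peaks call for roughly 10 W.
```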
The Peter Q. Test
The next day, after much needed sleep, Peter Qvortrup, Audio Note UK's numero uno, ushered me into his Brighton home, directly into the inner sanctum, his private listening room. The all-silver Audio Note system was situated up against a large bay window. A pair of Gaku-Ons squatted obediently on the carpet front and center. The speakers (model 3/SE, with silver wire, caps, inductors, and voice coils) were positioned against the wall. The front end consisted of a Voyd turntable, Io cartridge, and M7 preamp. Nothing but silver interconnects and speaker cable. No dogs, no cats, and the kids were at school... perfect. Visually, software rather than hardware dominated the room. Almost all of the walls and part of the floor were covered with vinyl. Amen. Over 23,000 albums, Peter tells me, and proud of it. In my experience, only our own Jerry Gladstein can lay claim to an even larger collection. For the record, there wasn't a single CD in sight!
Several albums into the listening session, it became obvious to me that a window of pristine clarity had been opened onto the soundstage. The hardware and other technicalities fade into the background; they're no longer important. And even at the low volumes Peter likes to listen at, I'm into the music. I sigh in contentment and lean back on the sofa. Right about then, Peter turns to me and says: "You know, I like to say that your system is only as good as your worst record."
OK, so in hindsight maybe it wasn't the conceptual thunderbolt I thought it to be at the time. But at that moment it was as if the skies over Brighton parted and a great shaft of light descended from the heavens and set my wheels into motion. What Peter meant was that if you put on your worst album, perhaps a recording with a close-to-zero signal-to-noise ratio, then how well your system manages to extract musical meaning from such a recording defines its figure of merit. This, then, is the Peter Q. test. We listened to some pretty atrocious historical recordings made at the turn of the century. One that stands out in particular was of Edvard Grieg playing one of his own compositions on the piano. Ninety percent grunge, I'd say, but listening closely, it was actually possible to discern Grieg's individual keyboard style.
The visual analogy is that of a fuzzy photograph and the ability to discern patterns in such a picture. An even more compelling example is given by Bregman (Auditory Scene Analysis, MIT Press, 1990). He shows a figure consisting of fragments that are really parts of familiar objects (actually parts of the letter B). The fragments were obtained by putting an irregularly shaped mask over the figure and eliminating those parts hidden by the mask. Without any information on how to connect the fragments, the eye is unable to complete them. The basic problem, according to Bregman, is that the visual system does not know where the evidence is incomplete. Once the picture is shown with the mask present, the visual system quickly joins the fragments without the observer having to think about it.
The auditory system needs low-level information (micromodulations) in order to properly analyze the auditory scene. Pulling harmonics together from a jumbled auditory stream to form a coherent harmonic envelope depends critically on the unique micromodulation signature of a particular voice or instrument. Harmonics with similar volume and frequency modulations are perceived as belonging together and are fused into a perceptual whole. The more microdetail a system can reveal, the easier it becomes to make sense of a bad recording. Which brings me to the grand unifying idea of chaos as an active agent in the audio reproduction chain.
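But first, a loose sketch of my own of that fusion-by-common-modulation idea - a toy example, not Bregman's procedure. Partials that share the same tremolo are grouped into one "voice," while partials wobbling at a different rate stay apart:

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)

def partial(freq_hz, tremolo_hz):
    """A harmonic partial whose level wobbles at tremolo_hz; returns (signal, envelope)."""
    envelope = 1.0 + 0.3 * np.sin(2 * np.pi * tremolo_hz * t)
    return envelope * np.sin(2 * np.pi * freq_hz * t), envelope

# "Voice A" partials share a 5 Hz tremolo; "voice B" partials share a 7 Hz one.
partials = {
    "A1": partial(200, 5.0), "A2": partial(400, 5.0), "A3": partial(600, 5.0),
    "B1": partial(310, 7.0), "B2": partial(620, 7.0),
}

# Group together partials whose amplitude envelopes are strongly correlated.
groups = []
for name, (_, env) in partials.items():
    for group in groups:
        ref_env = partials[group[0]][1]
        if np.corrcoef(env, ref_env)[0, 1] > 0.9:
            group.append(name)
            break
    else:
        groups.append([name])

print(groups)  # expected: [['A1', 'A2', 'A3'], ['B1', 'B2']]
```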
Lunch at Brighton's Grand Hotel is indeed a grand experience. Besides a clear view of the English Channel, the food as well as the waitresses are undeniably delightful. It was here that the final piece of the puzzle fell into place. Any dynamical system may operate in three distinct states: static, periodic, and chaotic. The first two states are well known and quite amenable to conventional engineering analysis. Chaotic behavior, however, only allows for limited predictability. Chaos is everywhere, even in the solar system, which was once held to be unconditionally stable. It is now realized that internal chaos makes long-term predictions for the future of our solar system impossible. But before we panic about the possibility of our planet Earth spiraling into the sun, let's consider chaos in audio amplifiers.
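The three states are easiest to see in a textbook toy rather than an amplifier; here is a sketch of my own using the classic logistic map, where one setting settles down, another locks into a repeating cycle, and a third never repeats at all:

```python
def logistic_orbit(r, x0=0.2, settle=500, keep=8):
    """Iterate x -> r*x*(1-x), discard the transient, return the last few values."""
    x = x0
    for _ in range(settle):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print("static   (r=2.8):", logistic_orbit(2.8))  # settles to a single value
print("periodic (r=3.2):", logistic_orbit(3.2))  # bounces between two values
print("chaotic  (r=3.9):", logistic_orbit(3.9))  # never repeats; tiny changes explode
```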
Retrieval of low-level detail near the noise floor of the amplifier represents a chaotic process. The problem is that the baseline of an amplifier drifts unpredictably in amplitude so as to mask low-level detail. The drift can be caused by stored energy in circuit dielectrics, fluctuations in component values due to temperature, or power supply instabilities. It is then reasonable to assume that simpler designs that minimize the potential causes for baseline drift would be more effective in communicating the music's emotional content.
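As a crude illustration of the masking problem (again my own sketch, with assumed magnitudes): take a micromodulation sitting 60 dB below full scale and let it ride on a slow, random baseline drift. Once the drift's RMS approaches that of the detail, the detail is effectively buried.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 10.0, 1.0 / fs)

# The "emotion-carrying" detail: a 6 Hz micromodulation 60 dB below full scale (1.0).
detail = 1e-3 * np.sin(2 * np.pi * 6 * t)

# The amplifier baseline: modeled here as a slow random walk (thermal or supply wander).
drift = np.cumsum(rng.normal(0.0, 2e-5, t.size))

observed = detail + drift
print("detail   RMS:", np.sqrt(np.mean(detail ** 2)))
print("drift    RMS:", np.sqrt(np.mean(drift ** 2)))
print("observed RMS:", np.sqrt(np.mean(observed ** 2)))
# Once the drift RMS approaches or exceeds the detail RMS, the micromodulation is
# effectively swamped unless the circuit keeps its baseline quiet.
```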
Well, it turns out that I'm not alone in invoking chaos in audio systems. Gerard Perrot of Hephaistos Laboratories in France defines a new circuit characteristic, circuit memory, in a recent paper presented at the 100th Convention of the Audio Engineering Society in Copenhagen (AES Preprint 4282). He points out that classical or steady-state sine-wave measurements of distortion products are only reliable for stable systems and that circuit non-linearities are ignored by such measurements. Circuit memory, he says, is caused by a variety of circuit instabilities, many of which are a function of the signal itself and possess a time constant. In components, memory is said to be caused by thermal feedback in transistors, resistor self-heating, dielectric absorption in capacitors, and skin effect in cable. Memory also occurs in circuits and results primarily from non-linearities in power supplies and feedback loops.
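One of the mechanisms Perrot lists, dielectric absorption, is easy to picture with the textbook model of a capacitor hiding a small, slow RC branch inside it. The sketch below is my own illustration with assumed component values, not Perrot's measurement procedure: discharge the cap briefly and the terminal voltage creeps back up afterwards - the part remembers where it has been.

```python
# Textbook dielectric-absorption model: the main capacitance in parallel with a
# small, slow RC branch that "soaks up" charge. All component values are assumptions.
C_main, C_da, R_da = 1e-6, 0.02e-6, 10e6   # 1 uF cap with a 2% absorption branch
dt = 1e-3                                  # simulation step, seconds

v_main, v_da = 10.0, 10.0   # charged to 10 V long enough for both branches to soak

# Short the terminals for 50 ms: the main capacitance dumps, the slow branch barely moves.
for _ in range(50):
    v_main = 0.0
    i = (v_da - v_main) / R_da
    v_da -= i * dt / C_da

# Open-circuit recovery: charge seeps back out of the absorption branch.
for step in range(2001):
    i = (v_da - v_main) / R_da
    v_da -= i * dt / C_da
    v_main += i * dt / C_main
    if step % 500 == 0:
        print(f"t = {step * dt:.1f} s, terminal voltage = {v_main:.3f} V")
# The terminal voltage climbs back from zero to roughly 0.15 V: the capacitor
# "remembers" the signal it held before the discharge.
```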
Mr. Perrot describes a test procedure he devised for measurement of circuit-memory artifacts and gives test results for three different amplifiers. Several listening tests were also conducted to determine how perceived sound quality correlated with the measurements. The finding was that memory artifacts correlated better with sound quality than did traditional THD measurements. A triode tube amplifier, with a much poorer THD figure than a commercially available high-quality transistor amp, was judged to be much more natural sounding. The third amp in the listening tests, a new low-memory, low-THD transistor design, was apparently preferred to the tube amp even by "tube fanatics." This result, according to Perrot, seems to invalidate the traditional explanation for the preference for tube circuits, namely the presence of euphonic second-order harmonic distortion.
Chaos lives, though for the time being, I'm content to evaluate its impact on audio quality the old-fashioned way: by putting my ear into the loop.