The Music of Sound

Guest editorial in The Audio Amateur, issue 5, 1982.
Reprinted in Hi-Fi News & Record Review (England), Sept. 1985.
Copyright © 1982, 1985, 1997 James Boyk

Over the last few years, I have listened to a lot of reproduced sound with musicians, audiophiles, recording engineers, and students in "Projects in Music & Science," my course at California Institute of Technology. I find myself thinking more and more that there are two modes of responding to reproduced sound. I'll refer to them as though they were people (which they're not) and call them the Musician and the Technician. The Musician
knows and loves the sound of live music, and judges reproduced sound by how well it preserves the
beauty and emotional impact of the original. The Technician does not know live musical sound, and
judges according to some mental checklist of technical categories.
The two types were neatly separated for me recently by a remark made by Caltech professor F. Brock Fuller, a knowledgeable audiophile (meaning lover of sound, not lover of audio). In connection with our digital listening test, we were comparing the sound at a recorder's output with its input, a microphone feed of live music. Dr. Fuller said that if you ran down a recording engineer's diagnostic list of technical maladies while listening
to the output, you would find nothing amiss, and would conclude that the machine was perfect or
nearly so. A Technician, that is, would give it a clean bill of health. However, he continued, the
fact that the doctor finds nothing wrong doesn't mean that the patient is necessarily all right;
and he felt that this patient was indeed ill. For when we switched from output to input, we could
hear that the unit had taken away much of the beauty of the sound, leaving it uninteresting and
without musical impact.
I am not saying that the Musician listens to the music, and the Technician to the sound. I'm trying to describe two different ways of listening to the sound. The Technician listens in categories defined by the technology. The trouble is, there are not enough categories and never could be, because what's important in musical sound changes
with the meaning of the music. What does not change, and this is what the Musician realizes, is that the sound itself is an organic part of the meaning.
For example, consider Chopin's Étude in E
Major, Op. 10, No. 3 for piano; you would recognize it in a moment if I could play it for you.
One phrase of this piece ends on E just above middle C. The next phrase begins on the E an octave
higher. You know that every musical tone is actually a series of harmonics, and that the upper E
is the second harmonic of the lower E and is therefore implicit in the end of the first phrase
before it explicitly begins the second. Now on a Steinway (the instrument I play), the
various harmonics of a single note develop over time. Which ones are prominent may change from
moment to moment. When I perform this étude, I want a particular emotional relation between the
two phrases, one which requires the second to appear in the most gentle possible way out of the
first. I listen carefully to the development of the upper E implicit in the end of the first
phrase, and I join the actual played note to it at the moment when they will meld most smoothly.
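For readers who want the arithmetic behind that octave relationship, here is a minimal sketch; it assumes standard concert pitch (A4 = 440 Hz) and equal temperament, conventions the text itself does not specify. It simply shows that the E just above middle C lies near 330 Hz, and that its second harmonic, at twice that frequency, coincides with the E an octave higher.

    # Minimal sketch, assuming equal temperament and A4 = 440 Hz (conventions
    # not stated in the essay): the E an octave above is the second harmonic
    # of the E just above middle C.
    A4 = 440.0                       # reference pitch in Hz (assumed)
    E4 = A4 * 2 ** (-5 / 12)         # E just above middle C, 5 semitones below A4
    E5 = A4 * 2 ** (7 / 12)          # E an octave higher, 7 semitones above A4
    second_harmonic_of_E4 = 2 * E4   # the nth harmonic sits at n times the fundamental

    print(f"E4 fundamental:     {E4:.2f} Hz")                     # ~329.63 Hz
    print(f"2nd harmonic of E4: {second_harmonic_of_E4:.2f} Hz")  # ~659.26 Hz
    print(f"E5 fundamental:     {E5:.2f} Hz")                     # ~659.26 Hz, the same pitch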
Few recordings convey the harmonic development of piano tone; most would damage this tiny detail of interpretation. Hearing such a recording, the Musician might be frustrated by an inability to follow the meaning of the join between the two phrases; but the
Technician would not notice this lack, as the problem does not come under standard 'diagnostic'
categories.
For another example, consider the clever and beautiful thing Brahms does in one of his trios when he leads a violin melody downward and has the 'cello take over at just the point where the sounds of the two instruments allow the most seamless
join. (We tried crossing the melody at other points in my "Alive with
Music!" class at Caltech, and verified that Brahms really did get it right!) If the reproducing
chain treats the formants of the two instruments differently (formants are resonances of constant frequency characteristic of an instrument), the join will not be smooth, and the way
Brahms makes you hear pure music instead of two instruments will fail. The Musician may be puzzled here, or may
doubt the performers' sensitivity. The Technician will not notice anything amiss.
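To make that parenthetical definition of a formant concrete, here is a minimal sketch; the 1000 Hz resonance and the two note frequencies are illustrative assumptions, not measurements of any real violin or cello. The point is only that the formant stays at a fixed frequency while a note's harmonics move with its fundamental, so different notes reach the listener through different harmonics.

    # Minimal sketch of the definition above: a formant is a resonance at a
    # roughly fixed frequency, while the harmonics of a note move with its
    # fundamental.  The 1000 Hz "formant" and the two fundamentals below are
    # illustrative assumptions, not measurements of real instruments.
    FORMANT_HZ = 1000.0

    def harmonics(fundamental_hz, count=12):
        """First `count` harmonics of a note: n times the fundamental."""
        return [n * fundamental_hz for n in range(1, count + 1)]

    for note_hz in (196.0, 294.0):   # two different played notes (made-up values)
        boosted = min(harmonics(note_hz), key=lambda f: abs(f - FORMANT_HZ))
        n = round(boosted / note_hz)
        print(f"note at {note_hz:.0f} Hz: harmonic {n} ({boosted:.0f} Hz) "
              f"falls nearest the fixed formant at {FORMANT_HZ:.0f} Hz")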
Such intimate relations between sound and meaning are the rule, not the exception, in music; and they show that listening categories must be flexible and music-oriented. Beyond this, the various aspects of the sound must be recorded so as to keep
their organic relationships intact, because music is an organic expression of human communication, not a
product with 'features' which can be added or subtracted at will.
What do I mean by this? Consider the performance situation. The tempo chosen by the artist depends not only on mood but also on the acoustics of the room. One plays slower for clarity in a reverberant hall; but in a 'dry' room such as the
typical recording studio, a faster tempo helps to keep the music alive. If reverberation is added
to a studio recording, the disagreement between tempo and acoustics makes nonsense; and since
tempo is crucial to emotion, the feeling may become nonsense, too. The Technician-producer's error here is to separate the elements of musical sound.
The performance situation is a feedback loop of artist, instrument, room, sound, and audience. The artist is trained to maximize communication within that loop. Changing one element outside the loop (by adding reverberation, equalization, or anything else) upsets the organic sense and throws away the artist's training and work.
Sometimes we must listen as Technicians, I suppose; but we should regard what we learn this way as no more necessarily relevant to musical fidelity than what we learn at the test bench. We do it simply to check things out. When
testing our work against the goal of high fidelity, however, we must welcome into our perception
all aspects of musical meaning.
We may never succeed in reproducing live music's combination of power, delicacy and beauty, nor its ability to involve us emotionally; but in the attempt to do so, we will learn much, not only about audio but about our perceptions and ourselves.