Speakers are the weakest link in almost all stereo systems. This has been my opinion for many years. In fact, it was the subject of the very first of this series of posts over a year ago (What’s the Weakest Link in Your System?). But at the other end of the chain – the recording end – the microphones and the techniques used with them may be a bigger problem, especially in terms of imaging.
Images courtesy of ProSound Web
There has been much research into how we “locate” and perceive sound sources. Our two ears and brain form a very powerful, real-time audio analyzer. The cues our ear/brain processor uses to determine the location of a sound source vary with frequency. In the treble range (about 2,000 Hz and above), our head casts an acoustic “shadow.” This shadow produces a different sound balance and level at each ear when the source is not directly in front of or directly behind us, and our brain uses those differences to locate the source. In the mid-bass (about 80-800 Hz), the time delay between the sound reaching each ear is the primary cue for locating the source. Through the midrange (about 500-3,000 Hz – and, yes, there is some overlap with the bass and treble here), both cues are used. Most voices fall in or near these midrange frequencies, which makes voices among the easiest sounds to locate and gives them the sharpest placement. (An interesting note: front-center and rear-center locations are often confused, since neither produces a shadow or a time delay.) Read more in It’s All About Imaging!
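The time-delay cue described above can be illustrated with a small sketch. The snippet below estimates the interaural time difference using the classic Woodworth spherical-head approximation; the head radius and speed of sound are round-number assumptions of mine, not figures from this post.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound_m_s=343.0):
    """Interaural time difference for a source in the front horizontal
    plane, per the Woodworth spherical-head approximation:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_m_s) * (theta + math.sin(theta))

# A source 90 degrees to one side arrives roughly 0.66 ms earlier at the
# near ear; a front-center source (0 degrees) produces no delay at all,
# which is one reason front-center and rear-center are easily confused.
side_delay = itd_seconds(90)
center_delay = itd_seconds(0)
```

With the assumed head radius, the fully-off-to-one-side delay works out to about two-thirds of a millisecond, and it shrinks smoothly to zero as the source moves toward dead center.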
The first stereo microphone system was demonstrated at the great Electrical Exhibition in Paris in 1881. A French designer by the name of Clement Ader spaced two microphones at the Paris Opera and transmitted the signal over telephone lines to people listening on stereo headphones. They reported that they felt like they were in the hall. The reason for two microphones was to get more equal sound levels from all the performers. The “stereo” effect as we know it today was an accident.
Fifty years later, in the early 1930s, research into the best microphone techniques to recreate a firm stereo image was pursued with vigor on both sides of the Atlantic.
Alan Blumlein, working for the British company EMI, demonstrated that if an identical mono signal was sent to two speakers, the perceived source location would be in between the speakers. He also showed that by changing only the relative volume levels, the perceived source could be moved from one speaker to the other.
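Blumlein’s level-only steering is essentially what a modern pan pot does. Below is a minimal constant-power panning sketch – a standard mixing-console pan law, not Blumlein’s actual circuit – where the function name and the -1..+1 position convention are my own illustration.

```python
import math

def pan_gains(position):
    """Constant-power pan. position runs from -1.0 (full left)
    to +1.0 (full right). Only the relative channel levels change;
    no time delay is introduced."""
    angle = (position + 1.0) * math.pi / 4.0   # maps to 0 .. pi/2
    return math.cos(angle), math.sin(angle)    # (left gain, right gain)

# Equal levels place the phantom image dead center between the speakers:
left_gain, right_gain = pan_gains(0.0)   # both about 0.707
```

Because the two gains always satisfy L² + R² = 1, the perceived loudness stays roughly constant as the image sweeps from one speaker to the other – exactly the level-difference effect Blumlein demonstrated.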
This led him to champion co-incident microphone techniques. In this style of recording, two microphones are located as close as possible in the horizontal plane. Usually the microphone transducers (elements) are placed one on top of the other. There should be no (or very little) time delay for the sound coming to each element. Each channel relies on the directivity pattern of each element to pick up much of shadowing effects.
At the same time, researchers at Bell Laboratories in New Jersey, under the direction of Dr. Harvey Fletcher, were working on systems using spaced microphones. In this technique, the two microphones are spaced apart, which introduces a time and phase delay between the signals: when the performer is off-center, the sound reaches the closer microphone earlier because of the shorter distance. Omni-directional microphones are often used in this technique.
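The arrival-time difference a spaced pair captures is simple geometry. This sketch is my own illustration (the 60 cm spacing is an arbitrary assumption, not a Bell Labs figure): it computes the delay between two omni microphones for a source at a given position.

```python
import math

def spaced_pair_delay(source_x, source_y, mic_spacing=0.6, c=343.0):
    """Arrival-time difference (seconds) between two omni mics placed
    mic_spacing meters apart on the x-axis, centered at the origin,
    for a source at (source_x, source_y) meters in front of them.
    Positive means the right mic hears the sound first."""
    d_left = math.hypot(source_x + mic_spacing / 2.0, source_y)
    d_right = math.hypot(source_x - mic_spacing / 2.0, source_y)
    return (d_left - d_right) / c

# A performer 2 m in front and 1 m to the right, mics 60 cm apart:
off_center = spaced_pair_delay(1.0, 2.0)   # just under a millisecond
dead_center = spaced_pair_delay(0.0, 2.0)  # no delay at all
```

A center performer produces identical arrival times, while one a meter off-axis produces a delay on the order of the ear/brain’s own mid-bass localization cue – which is why spaced-pair recordings carry a strong sense of position.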
The debate between the two schools continues to this day. A consistent problem is monitoring the recordings. Engineers often use headphones, since they separate the right-channel recording totally from the left and eliminate the effects of the room, speaker placement, and listener placement. But most serious listeners use speakers, and speakers add all kinds of complications.
From Recording Studio to Your Listening Room
Current Ohm Walsh speakers (not the Ohm A, F or G) use the speaker equivalent of the Blumlein system. As listeners move toward the right, they get more directly in front of the primary axis of the left speaker and farther from the primary axis of the right speaker (while also getting closer to the right speaker). The phase of each is correct through most of the listening area, and your brain perceives a single location for each source.
Spaced-pair recordings can sound spectacular on any good-imaging speaker when heard from its sweet spot. With Walsh speakers there are a number of sweet spots in addition to the usual one (equidistant from the speakers). The Walshes can also create “sour spots” between the sweet spots, when the time delays of the recording fight the time delays of the speaker placement; this can cause total cancellations. When the microphones are spaced about the width of a head apart, the problem can be extremely confusing to listeners.
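When two equal-level copies of the same sound arrive offset by a fixed delay, they cancel at every frequency where the delay equals an odd number of half-periods. The sketch below is an illustration of that arithmetic only (not a model of any particular speaker or recording); the 0.6 ms example delay is my own round number for a roughly head-width path difference.

```python
def cancellation_freqs(delay_s, max_hz=20000.0):
    """Frequencies (Hz) where two equal-level copies of a signal,
    offset by delay_s seconds, cancel completely: the delay equals
    an odd number of half-periods, f = (2k + 1) / (2 * delay)."""
    freqs = []
    k = 0
    while True:
        f = (2 * k + 1) / (2.0 * delay_s)
        if f > max_hz:
            return freqs
        freqs.append(f)
        k += 1

# A ~0.6 ms delay (about 21 cm of extra path) puts the first null
# near 833 Hz -- right in the midrange, where voices live:
nulls = cancellation_freqs(0.0006)
```

The nulls then repeat up the audio band at regular intervals, which is why a delay mismatch between recording and playback can carve audible holes in the image rather than just shifting it.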
More in the future…
See ya September 1
‘Til Then, Enjoy & Good Listening!
John Strohbeen, Author
John Strohbeen is the president of Ohm Speakers.