If you look at the comments section of a posting I wrote about ABL, you’ll see a short conversation there between me and a happy Beomaster 8000 customer who said that I had made an error in making sweeping generalisations about the function of a “loudness” filter in older gear. I said that, in older gear, a loudness filter boosted the bass (and maybe the treble) with a fixed gain, regardless of listening level (also known as “the position of the volume knob”). Henning said that this was incorrect, and that, in his Beomaster 8000, the amount of boost applied by the loudness filter did, indeed, vary with volume.
So, I dusted off one of our Beomaster 8000s (made in the early 1980s) to find out if he was correct.
I sent an MLS signal to the Tape 1 input (left channel) of the Beomaster 8000, and connected a differential probe to the speaker output. (The reason for the probe was to bring the signal back down to something like a line level to keep my sound card happy…)
I set the volume to 0.1, switched the loudness filter off, and measured the magnitude response.
Then I turned the loudness filter on, and measured again.
I repeated this for volume steps 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, and 5.5. I didn’t do volume step 6.0 because this overloaded the input of my sound card and created the weird artefacts that occur when you clip an MLS signal. No matter…
Then I plotted the results, which are shown below.
Remember that these are NOT the absolute magnitude response curves of the Beomaster 8000. These are the DIFFERENCE between the Loudness ON and Loudness OFF at different volume settings.
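Incidentally, a difference curve like these is just the bin-by-bin ratio of the two measured magnitude responses, expressed in dB. A minimal sketch in Python, using made-up two-bin magnitudes instead of my actual measurement data:

```python
import math

def response_difference_db(mag_on, mag_off):
    """Bin-by-bin difference, in dB, between two linear magnitude
    responses: 20*log10(on/off). 0 dB means "no change"."""
    return [20.0 * math.log10(on / off) for on, off in zip(mag_on, mag_off)]

# Loudness ON doubles the magnitude in the first bin and leaves the
# second bin untouched:
diff = response_difference_db([2.0, 1.0], [1.0, 1.0])
# diff[0] is about +6 dB; diff[1] is 0 dB
```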
At the top, you see a green line which is very, very flat. This means that, at the highest volume setting I tested (vol = 5.5) there was no difference between loudness on and off.
As you start coming down, you can see that the bass is boosted more and more, starting even at volume step 5.0 (the purple line, second from the top). At the bottom volume step (0.1), there is a nearly 35 dB boost at 20 Hz when the loudness filter is on.
You may also notice two other things in these plots. The first is the ripple in the lower curves. The second is the apparent treble boost at the bottom setting. Neither of these is actually in the signal; they are artefacts of the measurements that I did. So, you should ignore them, since they’re not there in “real life”.
So, Henning, I was wrong and you are correct – the Beomaster 8000 does indeed have a loudness filter that varies with volume. I stand corrected. Thanks for the info – and a fun afternoon!
So, you’ve just installed a pair of loudspeakers, or a multichannel surround system. If you’re a normal person then you have not set up your system following the recommendations stated in the International Telecommunications Union’s document “Rec. ITU-R BS.775-1: MULTICHANNEL STEREOPHONIC SOUND SYSTEM WITH AND WITHOUT ACCOMPANYING PICTURE”. That document states that, in a best case, you should use a loudspeaker placement as is shown below in Figure 1.
In a typical configuration, the loudspeakers are NOT the same distance from the listening position – and this is a BIG problem if you’re worried about the accuracy of phantom image placement. Why is this? Well, let’s back up a little…
Localisation in the Real World
Let’s say that you and I were standing out in the middle of a snow-covered frozen pond on a quiet winter day. I stand some distance away from you and we have a conversation. When I’m doing the talking, the sound of my voice leaves my mouth and moves towards you.
If I’m directly in front of you, then the sound (in theory) arrives at both of your ears simultaneously (resulting in an Interaural Time Difference or ITD of 0 ms) and at exactly the same level (resulting in an Interaural Amplitude Difference or IAD of 0 dB). Your brain detects that the ITD is 0 ms and the IAD is 0 dB, and decides that I must be directly in front of you (or directly behind you, or above you – at least I must be somewhere on your sagittal plane…)
If I move slightly to your left, then two things happen, generally speaking. Firstly, the sound of my voice arrives at your left ear before your right ear because it’s closer to me. Secondly, the sound of my voice is generally louder in your left ear than in your right ear, not only because it’s closer, but (mostly) because your head shadows your right ear from the sound of my voice. So, your brain detects that my voice is earlier and louder in your left ear, so I must be somewhere on your left.
Of course, there are many other, smaller cues that tell you where the sound is coming from exactly – but we don’t need to get into those details today.
There are two important things to note here. The first is that these two principal cues – the ITD and the IAD – are not equally important. If they got in a fight, the ITD would win. If a sound arrived at your left ear earlier, but was louder in your right ear, it would have to be a LOT louder in the right ear to convince you that you should ignore the ITD information…
The second thing is that the time differences we’re talking about are very, very small. If I were directly to one side of you, looking directly at your left ear, say… then the sound would arrive at your right ear only approximately 700 µs – that’s 700 millionths of a second, or 0.0007 seconds – later than at your left ear.
So, the moral of this story so far is that we are very sensitive to differences in the time of arrival of a sound at our two ears.
Localisation in a reproduced world
Now go back to the same snow-covered frozen lake with a pair of loudspeakers instead of bringing me along, and set them up in a standard stereo configuration, where the listening position and the two loudspeakers form an equilateral triangle. This means that when you sit and listen to the signals coming out of the loudspeakers
- the two loudspeakers are the same distance from the listening position, and
- the left loudspeaker is 30º to the left of front-centre, and the right loudspeaker is 30º to the right of front-centre.
Have a seat and we’ll play some sound. To start, we’ll play the same sound in both loudspeakers at exactly the same time, and at exactly the same level. Initially, the sound from the left loudspeaker reaches your left ear, and the sound from the right loudspeaker reaches your right ear. A very short time later, the sound from the left loudspeaker reaches your right ear and the sound from the right loudspeaker reaches your left ear (this effect is called Interaural Crosstalk – but that’s not important). After this, nothing happens, because you are sitting in the middle of a frozen lake covered in snow – so there are no reflections from anything.
Since the sounds in the two loudspeakers are identical, then the sounds in your ears are also identical to each other. And, just as is the case in real-life, if the sounds in your two ears are identical, you’ll localise the sound source as coming from somewhere on your sagittal plane. Due to some other details in the localisation cues that we’re not talking about here, chances are that you’ll hear the sound as originating from a position directly in front of you – between the two loudspeakers.
Because the apparent location of that sound is a position where there is no loudspeaker, it’s like a ghost – so it’s called a “phantom centre” image.
That’s the centre image, but how do we move the image slightly to one side or the other? It’s actually really easy – we just need to remember the effects of ITD and IAD, and do something similar.
So, if I play a sound out of both loudspeakers at exactly the same time, but I make one loudspeaker slightly louder than the other, then the phantom image will appear to come from a position that is closer to the louder loudspeaker. So, if the right channel is louder than the left channel, then the image appears to come from somewhere on the right. Eventually, if the right loudspeaker is enough louder (about 15 dB, give or take), then the image will appear to be in that loudspeaker.
Similarly, if I were to keep the levels of the two loudspeakers identical, but I were to play the sound out of the right loudspeaker a little earlier instead, then the phantom image will also move towards the earlier loudspeaker.
There have been many studies done to find out exactly what apparent phantom image position results from exactly what level or delay difference between the two loudspeakers (or a combination of the two). One of the first ones was done by Gert Simonsen in 1983, in which he found the following results.
| Image Position | Amplitude difference | Time difference |
|---|---|---|
| 0º | 0.0 dB | 0.0 ms |
| 10º | 2.5 dB | 0.2 ms |
| 20º | 5.5 dB | 0.44 ms |
| 30º | 15.0 dB | 1.12 ms |
Note that this test was done with loudspeakers at ±30º – so the bottom line of the table means “in one of the loudspeakers”. Also, I have to be clear that the values in this table are NOT to be used concurrently. So, this shows the values that are needed to produce the desired phantom image location using EITHER amplitude differences OR time differences.
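If you wanted to estimate an image position from a given inter-channel difference, you could interpolate between Simonsen’s rows. A sketch of that idea (admittedly crude, since perception isn’t actually linear between the measured points, and the function and names are mine):

```python
# Simonsen (1983): image angle vs. EITHER amplitude OR time difference
ANGLES_DEG = [0, 10, 20, 30]
LEVEL_DB   = [0.0, 2.5, 5.5, 15.0]
TIME_MS    = [0.0, 0.2, 0.44, 1.12]

def image_angle(value, scale):
    """Estimate the phantom image angle (degrees) from an inter-channel
    difference by linear interpolation between the table rows.
    'scale' is LEVEL_DB or TIME_MS; values past the last row clip to 30."""
    if value >= scale[-1]:
        return 30.0  # the image is "in" the loudspeaker
    for i in range(len(scale) - 1):
        if scale[i] <= value <= scale[i + 1]:
            frac = (value - scale[i]) / (scale[i + 1] - scale[i])
            return ANGLES_DEG[i] + frac * (ANGLES_DEG[i + 1] - ANGLES_DEG[i])

angle = image_angle(2.5, LEVEL_DB)  # 2.5 dB louder -> about 10 degrees
```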
Again, the same two important points apply.
Firstly, the time differences are a more “powerful” cue than the amplitude differences. In other words, if the left loudspeaker is earlier, but the right loudspeaker is louder, you’ll hear the phantom image location towards the left, unless the right loudspeaker is a LOT louder.
Secondly, you are VERY sensitive to time differences. The left loudspeaker only needs to be 1.12 ms earlier than the right loudspeaker in order for the phantom image to move all the way into that loudspeaker. That’s equivalent to the left loudspeaker being about 38.5 cm closer than the right loudspeaker (because the speed of sound is about 344 m/s (depending on the temperature) and 0.00112 * 344 = 0.385 m).
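That last bit of arithmetic is easy to wrap up for reuse. A small sketch (remember that the 344 m/s is only approximate and temperature-dependent):

```python
SPEED_OF_SOUND = 344.0  # m/s, roughly, at room temperature

def delay_to_distance(delay_s):
    """Distance (m) that sound travels in a given time (s)."""
    return delay_s * SPEED_OF_SOUND

def distance_to_delay(distance_m):
    """Time (s) sound needs to travel a given distance (m)."""
    return distance_m / SPEED_OF_SOUND

# The 1.12 ms from the table corresponds to roughly 0.385 m
d = delay_to_distance(0.00112)
```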
Those last two paragraphs were the “punch line” – if the distances to the loudspeakers are NOT the same, then, unless you do something about it, you’ll wind up hearing your phantom images pulling towards the closer loudspeaker. And it doesn’t take much of an error in distance to produce a big effect.
Whaddya gonna do about it?
Almost every surround processor and Audio Video Receiver in the world gives you the option of entering the Speaker Distances in a menu somewhere. There are two possible reasons for this.
The first is not so important – it’s to align the sound at the listening position with the video. If you’re sitting 3 m from the loudspeakers and the TV, then the sound arrives 8.7 ms after you see the picture (the same is true if you are listening to a person speaking 3 m away from you). To eliminate this delay, the loudspeakers could produce the sound 8.7 ms too early, and the sound would reach you at the same time as you see the video. As I said, however, this is not a problem to lose much sleep over, unless you sit VERY far away from your television.
The second reason is very important, as we’ve already seen. If, as we established at the start of this posting, you’re a normal person, then your loudspeakers are not all the same distance from the listening position. This means that you should apply a delay to the closer loudspeaker(s) to get them to “wait” for the sound as it travels towards you from the further loudspeakers. That way, if you have the same sound in all channels at the same time, then the loudspeakers do NOT produce it at the same time, but it arrives at the listening position simultaneously, as it should.
Problem solved! Right?
Corrections that need correcting
Let’s make a configuration of a pair of loudspeakers and a listening position that is obviously wrong.
Figure 2 shows the example of a very bad loudspeaker configuration for stereo listening. (I’m keeping things restricted to two channels to keep things simple – but multichannel is the same…) The right loudspeaker is much closer than the left loudspeaker, so all phantom images will appear to “bunch together” into the right loudspeaker.
So, to do the correction, you measure the distances to the two loudspeakers from the listening position and enter those two values into the surround processor. It then subtracts the smaller distance from the larger distance, converts that to a delay time, and delays the closer loudspeaker by that amount to compensate for the difference.
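The processor’s job here boils down to a few lines. A sketch of the idea (the function name and structure are mine, not any particular processor’s firmware):

```python
SPEED_OF_SOUND = 344.0  # m/s, approximately

def compensation_delays(distances_m):
    """Given loudspeaker distances from the listening position (m),
    return the delay (s) to apply to each loudspeaker so that all
    arrivals line up with the most distant one, which gets 0 s."""
    furthest = max(distances_m)
    return [(furthest - d) / SPEED_OF_SOUND for d in distances_m]

# Left speaker at 3.0 m, right at 2.0 m: only the closer (right)
# loudspeaker gets delayed, by the 1 m path difference
delays = compensation_delays([3.0, 2.0])
```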
So, after the delay is applied to the closer loudspeaker, in theory, you have a stereo pair of loudspeakers that are equidistant from the listening position. This means that, instead of hearing (for example) the phantom centre image in the closer loudspeaker, you’ll hear it as being positioned at the centre point between the distant loudspeaker (the left one, in this example) and the “virtual” one (the right one in this example). This is shown below.
As you can see in Figure 6, the resulting phantom image is at the centre point between the two resulting loudspeakers. But, if you look not-too-carefully-at-all, then you can see that the angle from the listening position to that centre point is not the same angle as the centre point between the two REAL loudspeakers (the black dot).
So, this means that, if you use distances ONLY to time-align two (or more) loudspeakers, then your correction will not be perfect. And, the more incorrect your actual loudspeaker configuration is, the more incorrect the correction will be.
How do I fix it?
Notice that, after “correction”, the phantom image is still pulling towards the closer loudspeaker.
As we saw above, in order to push a phantom centre image towards a loudspeaker, you have to make the sound in that loudspeaker earlier.
So, what we need to do, after the distance-based time alignment is done, is to force the more distant loudspeaker to be a little earlier than the closer one. That will pull the phantom image towards it.
In order to use a distance compensation to make a loudspeaker produce the sound earlier, we have to tell the processor that it’s further away than it actually is. This makes the processor “think” that it needs to send the sound out early to compensate for the extra propagation delay caused by the distance.
So, to make the further loudspeaker a little early relative to the other loudspeaker, we either have to tell the processor that it’s further away from the listening position than it really is, or we reduce the reported distance to the closer loudspeaker to delay it a little more.
This means that, in the example shown in Figure 7, above, we should add a little to the distance to the left loudspeaker before entering the value in the menus, or subtract a little from the distance to the right loudspeaker instead.
How much is enough?
You might, at this point, be asking yourself “Why can’t this be done automatically? It’s just a little trigonometry, after all…”
If things were as simple as I’ve described here, then you’d be right – the math that is converting distance compensation to audio delays could include this offset, and everything would be fine.
The problem is that I’ve over-simplified a little on the way through. For example, not everyone hears exactly a 10º shift in phantom image with a 2.5 dB inter-channel amplitude difference. Those numbers are the average of a listening test with a number of subjects. Also, when other researchers have done the same test, they get slightly different results. (see this page for information). If things were exactly predictable, the math that converts distance compensation to audio delays could include this offset, and everything would be fine.
Also, the directivity of the loudspeaker will have an influence (that is likely going to be frequency-dependent). So, if you’ve “toed in” your loudspeakers, then (in the example above) the further one will be “aimed” at you better than the closer one, which will have an influence on the perceived location of the phantom centre.
So, the only way to really do the final “tweaking” or “fine tuning” of the distance-compensation delays is to do it by listening.
Normally, I start by entering the distances correctly. Then, while sitting in the listening position, I use a monophonic track (Suzanne Vega singing “Tom’s Diner” works well) and I increase the distance in the surround processor’s menu of the loudspeaker that I want to pull the image towards. In other words, if the phantom centre appears to be located too far to the left, I “lie” to the surround processor and tell it that the right loudspeaker is further by 10 cm. I keep adding distance until the image is moved to the correct location.
It’s obviously fake – but it’s a demo nonetheless…
Bang & Olufsen recently released its latest television, called BeoVision Eclipse. If you look around the web for comments and reviews, one of the things you’ll come across is that many people are calling it a “soundbar”, which is only partly true – which is why B&O calls it a SoundCenter instead.
In order to explain the difference, let’s start by looking at what basic components you would need to buy in order to have the equivalent capabilities of the Eclipse.
- 4K HDR OLED screen
- Multichannel audio
- Surround processor + Three-channel amplifier with 150 watts per channel OR
- Audio-Video Receiver (AVR) with 150 watts per channel
- 19 discrete audio output channels
- 1- to 16.5-channel up/down mixing, dynamic with the signal
- User-configurable dynamic output routing
- Intelligent Bass Management
- Three full-range loudspeakers
- DLNA, Streaming, and multiroom compatible
This is shown in the block diagram in Figure 1 – and it’s important to note that this is just an overview of the capabilities – not a thorough list.
I’m from the acoustics department, so I’m not going to talk about the video portion of the Eclipse – it’s best to stick with what I know…
From the outside, the Eclipse obviously has 3 woofers, each driven by its own 100 W amplifier, as well as 2 full-range drivers and a tweeter, each of which is individually powered by its own 50 W amplifier. Each of those 6 amplifiers is fed by its own Digital-to-Analogue Converter (or DAC).
The total result of this is a discrete 3-channel loudspeaker array (which some might label a “soundbar”) that is fully-active, and with all processing (such as crossovers, filtering, and ABL, as described in this posting) performed in the Digital Signal Processing (or DSP).
When it leaves the factory, those three channels are preset to act as the Left Front (Lf), Centre Front (Cf), and Right Front (Rf) audio channels, however, these can be changed by the user, as I’ll describe below.
The BeoVision Eclipse, like all other current BeoVision televisions includes both wired and wireless outputs for connection to external loudspeakers for customers who either want to have a larger multichannel system, or wish to have the option to upgrade to one in the future.
The Eclipse has 8 wired outputs (on 4 Power Link connections – each of which has 2 discrete audio channels) and 8 wireless outputs (using Wireless Power Link).
This means that, in total, you can have up to 19 loudspeakers delivering signals in a large multichannel surround system (8 wired + 8 wireless + 3 internal). However, even if you have all of those loudspeakers connected, you don’t have to use all of them all of the time…
Audio signal processing
There are many Surround Processors and Audio-Video Receivers (or AVRs) in the world. These have the primary job of receiving a signal (say, from an HDMI input) and decoding it, splitting it up into the video and audio outputs. The audio channels in the signal are then sent to the appropriate outputs. However, with almost all Surround Processors and AVRs, the output channel routing is fixed. In other words, the left surround output of the AVR always goes to the same loudspeaker, in the left surround position.
In a Bang & Olufsen television like the BeoVision Eclipse, this routing is not fixed. So, for example, if you connect two extra external loudspeakers, you might choose to use them as the Left Surround (Ls) and Right Surround (Rs) outputs, with the three internal loudspeakers providing the Lf, Cf, and Rf channels. This is shown in Figure 2.
This configuration would be saved as a “Speaker Preset” and labelled as you wish (for example, “surround sound”) and even set as a default configuration for the inputs that you wish (the Blu-ray player, for example).
However, you aren’t stuck with this setup. Let’s say, for example, that, when you have dinner, you would like to use the external loudspeakers ONLY as a stereo pair, as is shown below in Figure 3.
Now, the external loudspeakers have changed their Speaker Roles. They were Left Surround and Right Surround in Figure 2 – but now they’re Right Front and Left Front. This configuration can be saved as another Speaker Group, and labelled something like “Dinner Music” for example.
You could also do something completely non-intuitive – for example, a configuration for watching the evening news, where you only need to hear the dialogue, but everyone else in the house is either asleep, or not interested in current affairs. Then you can route the Centre Front channel to the closest loudspeaker only, as shown below in Figure 4.
This can be saved as another Speaker Group, called “Speech – Night Listening” for example.
It should also be noted that there are no rules applied to the distribution of Speaker Roles in a Speaker Group. So, for example, if you wanted to have 19 loudspeakers, all playing the Left Surround channel, the TV will let you do this. I’m not suggesting that this is a good idea – I’m merely saying that the TV will not stop you from doing this…
Of course, when you create a Speaker Group, you not only define the various roles of the loudspeakers, you also set their Speaker Levels and Speaker Distances to ensure that the levels and time-of-arrivals are all aligned as you require for your configuration.
Update: I just made a new Speaker Group on a system with a BeoVision Eclipse and a pair of BeoLab 90’s that I thought might make an interesting addition to this section. The Eclipse Speaker Group was created such that all connected loudspeakers (internal and external) were set to have a Speaker Role of NONE. This basically means that the TV uses no loudspeakers. You may wonder why this is a useful Speaker Group. The reason is that I was using the Eclipse as an external monitor for a computer, but I wanted to listen to music from the BeoLab 90’s from another device (which is connected to their S/P-DIF Coaxial input). So, the Eclipse turns off the BeoLab 90’s, which “frees them up” to automatically switch to the S/P-DIF input.
Internally, the Eclipse, like the BeoVision 11, Avant, Horizon, and 14, can create up to a 16-channel upmix of all signals that come into it, using the True Image algorithm. However, if your input channel mapping matches your output, then the upmixer does nothing. This decision (whether to upmix, downmix, or do nothing) is continually made on-the-fly. So, for example, let’s say that you have a 5.1-channel loudspeaker configuration with 5 main loudspeakers and one subwoofer. You start by playing 2-channel stereo music from a USB stick, and the True Image algorithm will upmix the 2 input channels to your 5 output channels, and also bass manage the low-frequency content to the subwoofer. You then switch to watch a DVD with a 5.1-channel signal, and True Image will connect the 6 input channels to the 6 loudspeakers directly without doing any interim spatial processing. Then, you change to a Blu-ray disc with 7.1-channel audio content, and True Image will downmix the 8 incoming channels to your 6 loudspeakers.
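The decision itself (not the True Image spatial processing, which is proprietary and not sketched here) reduces to a comparison of channel counts. Something like:

```python
def mix_decision(n_in, n_out):
    """Sketch of the on-the-fly routing decision only: compare input
    and output channel counts. The actual upmix/downmix processing
    (True Image) is not represented here."""
    if n_in < n_out:
        return "upmix"
    if n_in > n_out:
        return "downmix"
    return "passthrough"

# 2-ch music, a 5.1 DVD, and a 7.1 Blu-ray into a 6-speaker (5.1) group:
decisions = [mix_decision(2, 6), mix_decision(6, 6), mix_decision(8, 6)]
```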
All of this happens automatically, and is also true if you switch Speaker Groups. So, if you start watching the 5.1-channel DVD with a 5.1-channel Speaker Group, then True Image will pass the signals through. If you then switch to the 2-channel Speaker Group, True Image will automatically start downmixing for you (rather than just not playing the “missing” output channels).
Of course, if you’re a purist, then the True Image algorithm can be disabled, and the incoming audio channels can be just routed to their respective outputs directly. However, this means that if your input format does not match your output format, then either you’ll not hear some audio channels (if you have more input channels than output channels) OR some loudspeakers will not play audio (if you have fewer input channels than output channels).
Intelligent bass management
If all of the external loudspeakers that you’ve connected to the BeoVision Eclipse are Bang & Olufsen products, then you simply tell the television which loudspeaker models you have (if they’re connected wirelessly, then this happens automatically) and the TV will automatically decide whether each loudspeaker should be bass-managed or not. This is because the TV is programmed with the bass capabilities of all Bang & Olufsen loudspeakers in the current portfolio – and many legacy products. This means that the TV “knows” which speakers can play the loudest bass – so it will automatically configure itself for each Speaker Group, ensuring that your bass is re-routed to the most capable loudspeakers.
Of course, this can be overridden in the user menus. So, if you wish to disable Bass Management, you can do so. However, you can also create extreme cases where you send the bass-managed signal to all loudspeakers. This is not necessarily a good idea – nor will it necessarily give you the most bass (due to possible phase differences between the loudspeakers, for example) – however, you can do it if you wish.
If the external loudspeakers are not Bang & Olufsen products, then you simply choose “Other” as your Speaker Connection (or speaker type) in the menus, and the TV will know that it cannot make automatic decisions about the bass management – so you’ll have to configure this yourself.
Automatic Latency Management
Different Bang & Olufsen loudspeakers have different “latencies”. (The latency of a loudspeaker is the time it takes for the signal to go through it – from the electrical input to the acoustical output.) For some older products (the BeoLab 3, for example), the latency is 0 ms, because it is an analogue loudspeaker. For some others, it is between 2.5 and 5 ms (depending on the particular loudspeaker). The BeoLab 50 and BeoLab 90 each have two latency modes: either 25 ms or 100 ms, depending on how they are configured.
In order to ensure that all of these different loudspeakers can “live together” in a single surround system (and also in a multiroom configuration with other products in your house), the TV must also “know” the latencies of the various loudspeakers that are connected to it.
In addition, the BeoVision Eclipse can “tell” the BeoLab 50 and 90 to change latency settings on-the-fly to optimise the configuration to ensure lip sync. (Note that, in order for this to happen, the BeoLab 50 and 90 must be set to “Auto” latency mode, allowing them to be switched by the TV.)
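Ignoring the distance compensation discussed earlier, the latency alignment itself reduces to padding every loudspeaker up to the slowest one. A simplified sketch (my own illustration, not B&O’s actual implementation):

```python
def alignment_delays(latencies_ms):
    """Extra delay (ms) to add per loudspeaker so that all acoustic
    outputs are simultaneous: pad everyone up to the slowest one.
    (Simplified: a real system also folds in distance compensation.)"""
    slowest = max(latencies_ms)
    return [slowest - lat for lat in latencies_ms]

# An analogue speaker (0 ms), a 5 ms DSP speaker, and a BeoLab 90
# in its 25 ms latency mode:
pads = alignment_delays([0.0, 5.0, 25.0])
```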
As I said at the top, I’m concentrating on the audio and acoustic features of the BeoVision Eclipse. There are many aspects of the LG screen that I won’t discuss here. In addition, there are a multitude of video and audio input options and built-in sources (like Netflix, Amazon, Google Chromecast, Apple AirPlay, and so on…) which I also won’t go through.
Finally, of course, it goes without saying that in order to control all of this you only need to have one remote control sitting on your coffee table…
For more information
Let’s start by inventing a loudspeaker. It has a perfectly flat on-axis response in a free field. This means that if you send a signal into it, then it doesn’t cause any particular frequency to sound louder or quieter than the others when you measure it in an infinite space that is free of reflections.
We’ll also say that it has a perfectly omnidirectional directivity. This means that the loudspeaker has the same behaviour in all directions – there is no “front” or “back” – sound goes everywhere identically.
Let’s then put that loudspeaker in a strange room that has only two walls – the left wall and the front wall – and these extend to infinity. We’ll put the loudspeaker, say 1 m from the left wall and 70 cm from the front wall. These are completely arbitrary values, but they’re not weird… Finally, we’ll sit 3 m away from the loudspeaker, as if we were set up to listen to it as the left front loudspeaker in a stereo pair.
A floorplan of that setup is shown below in Figure 1.
If the two walls were completely absorptive, then there would be no energy reflected from them. If we were to replace the loudspeaker with a light bulb, then the equivalent would be to paint the walls flat black so no light would be reflected. In this theoretically perfect case, then the impulse response and the magnitude response of the loudspeaker at the listening position would be the same as in a free field, since there are no reflections. These would look like the plots in Figure 2.
Through the looking glass
Imagine that you’re standing outdoors on a moonless night, and the only things you have with you are a lightbulb (that is magically lit) and a mirror. You’ll be able to see two light bulbs – the real one, and the one that is reflected by the mirror. If there is really no other light and no other objects, then you won’t even know that it’s a mirror, and you’ll just see two light bulbs (unless, of course, you can see yourself as well…)
In 1929, an acoustical physicist working at Bell Laboratories named Carl F. Eyring presented a new idea to the Acoustical Society of America. He was trying to calculate the reverberation time in “dead” rooms by considering that the walls were perfect mirrors, and that instead of thinking of sound sources and reflections, you could just pretend that the walls didn’t exist, and that the reflections were actually just images of other sound sources on the other side of the wall (just like that second light bulb in the example above…)
This method of simulating and predicting acoustical behaviour in rooms, now called the “image model”, has been used by many people over the decades. Eyring published a paper describing it in 1930, and it has since become a standard method, both for prediction and for acoustical simulation (the computer-simulation version was proposed by Allen and Berkley in 1979).
The effects of one sidewall reflection
Let’s use the image model to do a very basic prediction of what will happen to our impulse and magnitude responses if we have a single reflection from the left-hand wall.
As can be seen in Figure 5, the resulting magnitude response of an omnidirectional loudspeaker with a single, perfect reflection certainly has some noticeable artefacts. If the listening position were closer to the loudspeaker, the artefacts would be smaller, since the reflected signal would be quieter than the direct sound. The further away you get, the more similar the two path lengths become, and therefore the bigger the effect on the summed signal.
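If you’d like to try this kind of prediction yourself, the image-model arithmetic for a single wall is only a few lines. This sketch assumes a perfect reflection and an illustrative geometry of my own choosing (not necessarily the exact one in Figure 1):

```python
import math

SPEED_OF_SOUND = 344.0  # m/s, approximately (temperature-dependent)

def left_wall_reflection(src, listener, wall_x=0.0):
    """Image model for a single reflection off a wall at x = wall_x:
    mirror the source across the wall, then compare path lengths.
    Positions are (x, y) in metres. Returns the arrival-time
    difference (s) and the frequency (Hz) of the first comb dip."""
    image = (2.0 * wall_x - src[0], src[1])  # mirrored "image" source
    direct = math.dist(src, listener)        # direct path length
    reflected = math.dist(image, listener)   # path via the wall
    dt = (reflected - direct) / SPEED_OF_SOUND
    first_dip = 1.0 / (2.0 * dt)  # a perfect reflection first cancels here
    return dt, first_dip

# Illustrative: source 1 m from the left wall, listener 3 m away,
# straight ahead of the source
dt, dip = left_wall_reflection(src=(1.0, 0.0), listener=(1.0, 3.0))
```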
Of course, this is an unrealistic simulation, since everything is “perfect” – perfect reflection, perfectly omnidirectional loudspeaker with a perfectly flat magnitude response, and so on… However, for the purposes of this posting, that’s good enough.
Let’s now change the directivity of the loudspeaker to alter the balance of level between the direct and the reflected sounds. We’ll make the loudspeaker’s beam width narrower, giving it the same behaviour as a cardioid microphone (which is called a cardioid because a polar plot of its directivity pattern looks like a heart – cardioid and cardiovascular have the same root).
If you look at Figure 7, you’ll see that the times of arrival of the two signals have not changed, but that the effect of the artefact in the frequency domain is reduced (the peaks and dips are smaller). The frequencies of the peaks and dips are the same as in Figure 5 because those are determined by the delay difference between the two spikes in the impulse response. The peaks and dips are smaller because the reflected sound is quieter (since the image loudspeaker – the source of the reflected signal – is beaming in a different direction).
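To put numbers on “the peaks and dips are smaller”: an ideal cardioid has a linear gain of (1 + cos θ)/2 at an angle θ off-axis, and the depth of the comb-filter ripple follows directly from the reflection’s level relative to the direct sound. A sketch, with an off-axis angle I picked arbitrarily for illustration:

```python
import math

def cardioid_gain(theta_rad):
    """Linear gain of an ideal cardioid at angle theta off-axis:
    1 on-axis, 0.5 at 90 degrees, 0 directly behind."""
    return (1.0 + math.cos(theta_rad)) / 2.0

def ripple_db(reflection_gain):
    """Peak-to-dip ripple (dB) when a reflection with the given linear
    gain (relative to the direct sound) sums with it."""
    peak = 20.0 * math.log10(1.0 + reflection_gain)
    dip = 20.0 * math.log10(1.0 - reflection_gain)
    return peak - dip

full = ripple_db(0.99)  # near-perfect omni reflection: enormous ripple
card = ripple_db(0.99 * cardioid_gain(math.radians(120)))  # much smaller
```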
Let’s try a different directivity pattern – a dipole, which has a polar pattern that looks like a figure “8”.
Notice now that, because the listening position is almost perfectly in line with the “null” – the “dead zone” – of the image loudspeaker, there is almost nothing to reflect. Consequently, there is very little effect on the on-axis magnitude response of the loudspeaker, as can be seen in Figure 9.
So, the moral of the story so far is that, without moving the loudspeaker or the listening position, or changing the wall’s characteristics, the time response and magnitude response of the loudspeaker at the listening position are heavily dependent on the loudspeaker’s directivity.
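That moral can be sketched numerically, too. This is a toy comparison under assumptions of my own: an assumed wall-at-x = 0 geometry, deliberately chosen so that the listening position falls on the image dipole’s null (as in the figures), idealised directivity gains of 1 for the omni, 0.5 × (1 + cos θ) for the cardioid, and cos θ for the dipole, and a loudspeaker aimed straight at the listener.

```python
import numpy as np

fs = 48000   # sample rate (Hz)
c = 343.0    # speed of sound (m/s)

# Assumed geometry (metres): wall at x = 0; the listening position is chosen
# so that it falls almost exactly on the null of the image dipole.
src = np.array([1.0, 3.0])
lis = np.array([3.0, 3.0 - np.sqrt(8.0)])
img = src * np.array([-1.0, 1.0])     # image of the loudspeaker in the wall

d1 = np.linalg.norm(lis - src)
d2 = np.linalg.norm(lis - img)

aim = (lis - src) / d1                 # loudspeaker aimed straight at the listener
aim_img = aim * np.array([-1.0, 1.0])  # the image's aim is mirrored by the wall too
cos_t = aim_img @ ((lis - img) / d2)   # cosine of the angle off the image's axis

def ripple_db(gain):
    """Peak-to-dip depth (dB) of the comb filter for a given image-source gain."""
    h = np.zeros(4800)
    h[int(round(d1 / c * fs))] += 1.0 / d1
    h[int(round(d2 / c * fs))] += gain / d2
    mag = np.abs(np.fft.rfft(h))
    return 20 * np.log10(mag.max() / (mag.min() + 1e-12))

r_omni = ripple_db(1.0)                  # omnidirectional: full-level reflection
r_card = ripple_db(0.5 * (1 + cos_t))    # cardioid: quieter reflection
r_dip = ripple_db(cos_t)                 # dipole: listener in the image's null
```

With this geometry, the ripple shrinks from the omni to the cardioid, and all but disappears for the dipole – the same ranking as Figures 5, 7, and 9.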
Let’s continue the experiment, making the front wall reflective as well.
In Figure 15, one additional effect can be seen. Since the reflection off the front wall is negative (in other words, it “pulls” when the direct sound “pushes”) due to the behaviour of the dipole, there is a cancellation in the low frequencies, causing a drop in level in the low end. If we were to push the loudspeaker closer to the front wall, this effect would become more and more obvious.
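The low-frequency cancellation can also be sketched with two spikes. The reflection level of 0.8 and the extra path lengths below are assumed values for illustration only; the point is that the rear lobe of the dipole is polarity-inverted, so at low frequencies (where the delay is a small fraction of a period) the reflection subtracts from the direct sound, and the closer the loudspeaker is to the front wall, the deeper the loss.

```python
import numpy as np

fs = 48000   # sample rate (Hz)
c = 343.0    # speed of sound (m/s)

def lf_level_db(extra_path_m, reflection_gain=0.8):
    """Direct spike plus a polarity-INVERTED front-wall reflection
    (the dipole's rear lobe 'pulls' while the front 'pushes').
    Returns the summed level (dB) at the lowest analysis bin (10 Hz).
    The reflection level of 0.8 is an assumed value."""
    h = np.zeros(4800)
    h[100] += 1.0
    h[100 + int(round(extra_path_m / c * fs))] -= reflection_gain
    mag = np.abs(np.fft.rfft(h))
    return 20 * np.log10(mag[1] + 1e-12)

far = lf_level_db(2.0)    # loudspeaker well away from the front wall
near = lf_level_db(0.5)   # loudspeaker pushed closer to the wall
```

Both cases lose level in the low end, and the “near” case loses more – which is the effect described above.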
The moral of the story is…
Of course, this is all very theoretical. However, it should give you an idea of three things.
The first is a simple method of thinking about reflections. You can use the Image Model method to imagine that your walls are mirrors, and you can “see” the other loudspeakers on the other sides of those mirrors. Those images are where your reflections are coming from.
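For the record, the “mirror” idea is just a reflection of the source position across the wall’s plane. Here’s a small sketch of that calculation (the function name and the example positions are mine, not from any particular library):

```python
import numpy as np

def image_position(source, wall_point, wall_normal):
    """Mirror a source position across a wall plane:
    p' = p - 2 * ((p - w) . n) * n,
    where w is any point on the wall and n is the wall's unit normal."""
    p = np.asarray(source, dtype=float)
    w = np.asarray(wall_point, dtype=float)
    n = np.asarray(wall_normal, dtype=float)
    n = n / np.linalg.norm(n)              # make sure the normal is unit length
    return p - 2.0 * np.dot(p - w, n) * n

# e.g. a loudspeaker 1 m to the right of a wall lying along the y-axis:
mirror = image_position([1.0, 3.0], [0.0, 0.0], [1.0, 0.0])  # -> [-1.0, 3.0]
```

That mirrored position is where the reflection appears to come from, which is all the image model needs.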
The second is the obvious point – that the summed magnitude response of a loudspeaker’s direct sound and its reflections is dependent on many things, the directivity being one of them.
The third is possibly the most important. All three of the loudspeaker models I’ve used here have razor-flat on-axis responses in a free field. So, if you were trying to decide which of these three loudspeakers to buy, you’d look at their “frequency response” plots or data, see that all of them are flat to within 0.0001 dB from 1 Hz to infinity Hz, and think that they’d all sound “the same” under the same conditions. However, nothing could be further from the truth. These three loudspeakers with identical on-axis responses will sound completely different. This does not mean that an on-axis magnitude response is useless. It only means that it’s useless in the absence of other information such as the loudspeaker’s power response or its frequency-dependent directivity.
To keep things simple, I have not included frequency-dependent directivity effects. I may do that some day – but beware that it is not enough to say “the loudspeaker beams at higher frequencies so I don’t have to worry about it up there” because that’s not necessarily true – it’s different from loudspeaker to loudspeaker.
This also means that none of the plots I’ve shown here can be used to conclude anything about the real world. All it’s good for is to get a conceptual, intuitive idea of what’s going on when you put a loudspeaker near a wall.
One final comment: the microphone that I’m simulating here has an omnidirectional characteristic. This means that it is as sensitive to the reflected sound as it is to the direct sound, since the angle of incidence of the sound is irrelevant. The way we humans perceive sound is different. We do not perceive the comb filter that the microphone sees when the reflection is coming in from our side, since this is information that is recognised by the brain as being reflected – and it’s used to determine the distance to the sound source. However, if you plug one ear, you may notice that things sound more like you see in the plots, since you lose part of your ability to localise the direction of the signals in the horizontal plane.
For more reading…
Allen, J.B., & Berkley, D.A. (1979) “Image Method for Efficiently Simulating Small-Room Acoustics,” Journal of the Acoustical Society of America, 65(4): 943-950, April.
Eyring, C.F. (1930) “Reverberation Time in ‘Dead’ Rooms,” Journal of the Acoustical Society of America, 1: 217-241.
Gibbs, B.M., & Jones, D.K., (1972) “A Simple Image Method for Calculating the Distribution of Sound Pressure Levels Within an Enclosure,” Acustica, 26: 24-32.