For many people, the first introduction to the Equalizer is the so-called “tone” controls on a home hi-fi system or a car radio. Typically, you have a Bass knob and a Treble knob. People don’t know what to do with them, so they turn the Bass up to 3 o’clock or beyond, and they set the Treble knob at 3 o’clock or beyond. If they have a very nice system, they may also have a Mid-range knob; they don’t know what to do with that either, so they turn it up to 3 o’clock or beyond, too. If there is a Loudness Contour button, they punch that in as well. Why? … because it sounds louder, and louder is better, right? Well, yes, louder always sounds better – whether it actually is better is the question. Not sure about any of what they have done, they simply turn everything up. If this article is effective at all, you will no longer be one of these people.
As a musician you have (or should have) an advantage over the average consumer, or non-musician, when it comes to intelligently using the equalizer. That is the purpose of this article: to make you aware of the natural advantage you have and how to put it to use. We will define several of the fundamental terms (pun intended) – all music-related – that should help you learn to use the equalizer in your projects.
We mentioned that on your home hi-fi the knobs are referred to as TONE controls. You may be thinking, “I thought on a synthesizer the Filter was responsible for the instrument’s tone.” And you would be correct. An Equalizer is basically built from a series of filters and amplifiers. Filters are indeed responsible for what we as musicians refer to as tone or timbre. But what does this mean in terms of what we hear and how we hear? The Equalizer is responsible for the harmonic balance of the music program. You can make a particular frequency band (range) louder (boost) or softer (cut) by raising or lowering the Gain control for that segment. In the Motif XS/XF/MOXF there are many different equalizers: at the Element level, at the Voice level, at the Performance PART level, at the Mixing PART level and, finally, at the overall Master output level. There are several different types of EQ: 2-band, 3-band, 5-band, single-band parametric, and straight level boost. In this article we will explain the device in general and take a look at the VCM EQ 501 in particular.
The VCM EQ 501 is an Insertion Effect and is a type of equalizer called a parametric EQ. This means that you have control over three parameters per band: the Bandwidth (or Q), the Frequency, and the Gain (boost/cut). The other type of EQ you may be familiar with is a Graphic EQ – where, in contrast, the frequencies and bandwidths are fixed at predetermined positions. Graphic EQs are often used to ‘fix’ a room. The parametric and semi-parametric EQs (as found in your synthesizer) allow you to select the frequency and the bandwidth of each band. Musically speaking, the ability to select the frequency, the range of frequencies affected, and the amount of boost or cut is viewed as more practical inside the workings of your synthesizer.
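For readers who like to see the knobs as math: a digital parametric band is commonly implemented as a “peaking” biquad filter. The sketch below uses the well-known Robert Bristow-Johnson Audio EQ Cookbook formulas – a generic illustration of how the Frequency, Q, and Gain controls interact, not Yamaha’s actual VCM EQ 501 algorithm (which models vintage analog circuitry).

```python
import cmath
import math

def peaking_eq(fs, f0, q, gain_db):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook).

    fs      -- sample rate in Hz
    f0      -- center frequency in Hz (the Frequency knob)
    q       -- bandwidth control (the Q knob; higher = narrower bell)
    gain_db -- boost (+) or cut (-) in dB (the Gain knob)
    """
    a_lin = 10.0 ** (gain_db / 40.0)       # square root of the linear peak gain
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    # Normalize so the first denominator coefficient is 1.
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(b, a, fs, freq):
    """Magnitude response of the biquad at a given frequency."""
    z = cmath.exp(2j * math.pi * freq / fs)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return abs(num / den)

# A +6 dB boost at 1 kHz really does measure about +6 dB at 1 kHz:
b, a = peaking_eq(44100, 1000, 1.0, 6.0)
print(round(20 * math.log10(gain_at(b, a, 44100, 1000)), 2))
```

Notice that all three knobs feed into the same handful of coefficients – change any one of them and the shape of the bell curve changes.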
Let’s open our discussion with a quick look at the nature of sound. There is ‘noise’ and there is ‘music’. Noise is chaos and music is order. What makes music pleasing is that there is structure and consistency in the nature of the vibration. All sound is vibration. When this vibration is random, we call it noise; when it is structured, we call it music. Picture in your mind’s eye a guitar string or a piano string in motion. When a string, attached at both ends, is plucked or struck, it goes into vibration and it will maintain a consistent number of vibrations per second. We call this consistent rate of vibration its frequency – measured in cycles per second (or Hertz). The number, or frequency, of the vibrations is what musicians call pitch. We have all heard that the “A” above middle “C” is identified as “A440”. This means that any object that vibrates consistently 440 times per second will give off the pitch we call an “A”.
Not all vibrations are audible (what we call ‘sound’). I must state this because, after all, light is a type of vibration at an extremely high frequency, and the Moon travels around the Earth at a frequency of roughly once every 28 days. The frequency response of the human ear to “sound” is approximately 20 cycles per second up to about 20,000 cycles per second. Your mileage will vary – the range can actually be slightly lower and/or higher depending on the individual. But this is the measurement for the best pound-for-pound aural athlete on the planet: a seven-year-old boy (go figure!). As you get older your frequency response naturally rolls off in the high end – and the more often you listen at extremely loud volumes, the quicker your frequency response shrinks. This works out great, because the older you get the less you want to hear screeching music. We have all heard that dogs can hear higher frequencies than we human beings, and that many animals apparently hear lower than we can, since they react to earthquakes well before we become aware of them. But vibrations that occur between 20 and 20,000 cycles per second fall into the human audio range. (It’s our general “frequency response”, and it is not flat!)
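To put that 20–20,000 figure in musical terms: since each octave is a doubling of frequency, a couple of lines of Python (my own back-of-the-envelope calculation, not a figure from the article) show that the audible range spans almost ten octaves – the piano’s 88 keys cover only a bit more than seven of them.

```python
import math

low, high = 20.0, 20000.0        # approximate limits of human hearing, in Hz
octaves = math.log2(high / low)  # each octave is a doubling of frequency
print(round(octaves, 2))         # nearly ten octaves
```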
The fact that the frequency doubles every octave as you go up the scale explains why chords sound better and clearer in the middle and upper range of the keyboard, and denser and thicker (and less intelligible) when played low on the keyboard. The octave between A440 and A880 is much larger in frequency span than the octave between A55 and A110 – it is easier to fill in harmonies when you have room to establish them. An A minor 7 chord in the octave starting with A440 will be more intelligible than that same chord built up from A55. You know this from having played the keyboard; now the numbers give you a reason why: the low octave spans only 55 Hz, while the octave starting at A440 spans a full 440 Hz.
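You can verify the doubling rule, and those widening octave spans, with a few lines of Python (the A0/A1/A4 names are scientific pitch notation for the piano’s A’s; the article simply calls them A27.5, A55, A440):

```python
# Walk up the piano's A's by doubling, and measure each octave's width in Hz.
freq = 27.5                          # A0, the lowest note on the piano
for octave in range(6):              # A0 through A5
    width = freq                     # the span from f to 2f is exactly f Hz
    print(f"A{octave} = {freq} Hz; the octave above it spans {width} Hz")
    freq *= 2.0                      # up one octave = double the frequency
```

The octave’s width in Hertz always equals its starting frequency – which is exactly why the low octaves are so cramped.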
We said the equalizer affects the harmonic balance of the audible program. Harmonics is a term we need to understand as musicians. Technically, harmonics are the whole-integer multiples of the fundamental frequency. Let’s take “A440” as an example. This is the fundamental pitch – to find the next higher harmonic you multiply 440 x 2 = 880. To find the next harmonic you multiply 440 by the next whole integer: 440 x 3 = 1320; and so on. So every whole number multiplied by the fundamental gives us a higher harmonic. Each harmonic has a different amplitude (volume), giving each sound a unique identity. Harmonics are like the fingerprints of the sound: every sound has a unique harmonic signature, much like every human has a distinctly individual fingerprint. Our ear-brain can recognize sounds because it is uncannily able to recall a remarkable number of these sound IDs.
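The arithmetic is simple enough to put in one line of Python – the whole-integer rule above, applied to A440:

```python
fundamental = 440.0  # "A440", the fundamental pitch
# Whole-integer multiples give the harmonic series
# (by this counting, the 1st harmonic is the fundamental itself).
harmonics = [fundamental * n for n in range(1, 9)]
print(harmonics)
# [440.0, 880.0, 1320.0, 1760.0, 2200.0, 2640.0, 3080.0, 3520.0]
```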
Here is an example. Say you are in a totally dark room and behind you, to your left, someone drops a coin. You immediately turn to your left and say, “That was a quarter, approximately 10 feet away.” Not only are you able to pinpoint where the sound occurred, you are able to identify the amount of money, and from the sound you can now tell a lot about the room you are in and what material the floor is made of. A quarter has distinctly different harmonic content from a nickel or a dime or a penny. And it does not sound like a half-dollar or a silver dollar. It sounds different bouncing off of wood than off metal or concrete. In fact, without a thought, you realize that what I am saying to you is true. These are sounds you have heard (of course, if you live in Europe or Asia you must substitute coins of your own currency). But in your own mind you know you could tell them apart.
The sound hits your left ear an instant before it reaches the right (sound travels at roughly 1,100 feet per second; estimate the time it takes to travel the few inches between your ears), and you immediately turn to the left. The reverberation and the phasing of the sound bouncing off the walls and ceiling, if any, immediately detail the environment for your brain – you instantly know whether you are indoors or out, and how large an area you are in at the moment. And because of the keen awareness of your consciousness, you can even tell whether the coin was dropped (and from what height) or thrown at you – the threat mechanism analyzes the situation. As a human you do all this in an instant: no thought, just reaction! Cool!
You can recognize human voices by the harmonic content of the person speaking. Each of us has a unique harmonic structure to our voice. You can even tell a friend’s voice over a device with sound quality as poor as a cell telephone’s. Telephones have a frequency response of about 300-3,000 cycles per second (so don’t waste your time playing music to someone over the phone). But even in this severely narrow frequency region you are able to easily recognize your friend’s voice from that of someone selling a new long distance service.
Say a trumpet plays an “A440” and then a trombone plays an “A440” – do you think you could tell them apart? Of course you could, no problem. Why? Because the harmonic content of the two instruments is different. Even though they are both brass instruments, their shapes dictate that they have different tone (timbre), i.e., different harmonic structures.
How does this happen? And what makes the harmonic content different? We said harmonics are whole-integer multiples of the fundamental, and in most musical tones there are actually several tones happening simultaneously. We will now talk about the harmonic series – and as musicians this should be somewhat familiar.
Picture a piano string in motion giving a pitch of “A440”. If you had a high-speed camera and took a series of photographs, you would find that the string is not vibrating in just one simple shape. It vibrates along its whole length, producing the 440 Hz fundamental; at the same time it vibrates in two halves at 880 cycles per second, in three segments at 1320, in four at 1760, and so on. When a string is sounding the fundamental “A440” it is also simultaneously, at softer levels, giving off the rest of the “harmonic series”. Just how loud each of the higher harmonics is, and when it occurs in time, is what gives each sound its unique signature. The ear-brain can detect minute variations in the volume of these harmonics, and it is this that we identify when we identify a sound.
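A string sounding many harmonics at once can be imitated digitally by summing sine waves. Here is a tiny additive-synthesis sketch – the amplitudes are invented for illustration; a real piano’s harmonic amplitudes would be measured, not guessed:

```python
import math

SAMPLE_RATE = 44100                      # samples per second
FUNDAMENTAL = 440.0                      # "A440"
# Made-up relative volumes for the first five harmonics.
AMPLITUDES = [1.0, 0.5, 0.33, 0.25, 0.2]

def sample(t):
    """One sample of the composite tone at time t (in seconds): the sum of
    all the partials sounding simultaneously, each at its own volume."""
    return sum(amp * math.sin(2 * math.pi * FUNDAMENTAL * (n + 1) * t)
               for n, amp in enumerate(AMPLITUDES))

# Render 10 milliseconds of the tone.
tone = [sample(i / SAMPLE_RATE) for i in range(SAMPLE_RATE // 100)]
print(len(tone))  # 441 samples
```

Change the numbers in AMPLITUDES and the pitch stays the same, but the “signature” – the timbre – changes. That balance of harmonic volumes is exactly what an equalizer lets you adjust after the fact.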
The relative volumes of the harmonics can even change over time – this, too, is part of what the ear-brain memorizes and anticipates as it looks through its memory banks to identify the source.
It is very ironic that nowadays it is common practice to visually show the waveform in software and firmware samplers and recorders (diagram above). For us ‘old-school’ recording engineers nothing could be stranger, because looking at the waveform really tells you very little about how it sounds, while hearing it tells you everything. Could you tell a trumpet from a trombone by looking at the waveform? Doubtful… but our eyes are such needy things. The ear is phenomenal and makes the best identifier when it comes to sound, be it random or structured. It can handle sounds a trillion times louder than the softest sound it can detect. A trillion! 1,000,000,000,000
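Audio engineers usually state that trillion-to-one range in decibels. Treating it as a power ratio (my assumption for this quick calculation), it works out to the familiar figure of about 120 dB for the dynamic range of human hearing:

```python
import math

power_ratio = 1_000_000_000_000          # a trillion-to-one range
decibels = 10 * math.log10(power_ratio)  # decibels for a power ratio
print(decibels)  # 120.0 dB
```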
Okay, let’s take a closer look at frequency. As musicians you know that each time you double or halve the frequency you move up or down a musical octave. The piano is a great instrument to do this on because very few instruments that go as high go as low, and very few instruments that go as low go as high. We said the “A” above middle “C” was “A440”. The “A” below middle C is “A220”, and the “A” below that is “A110”. And below this we have “A55” and the lowest note on the piano is “A27.5”. Going up the scale we have “A880”, “A1760” and “A3520”.
Just because I’m sure you are curious: the highest note on the piano is “C4186.009”. So the full range of a concert grand piano is from “A27.5” ~ “C4186.009”. We said the human ear can hear from 20-20,000. That leaves a lot of room between the highest “C” and the highest audible frequency. What is in that area? Is it important? Harmonics – and are they important!?! …You bet. They are what give sound its intelligibility. Without the upper harmonics, sound is dull and uninteresting to the ear-brain. The way it was explained to me when I was a young recording engineer is:
If the eye is drawn to things that shine and sparkle, then it is the upper harmonics that sparkle for the ear.
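Back to the piano numbers for a moment: every pitch quoted above comes from one formula. Equal temperament splits each octave into 12 equal frequency ratios, anchored at A440 (key 49 of the 88). A tiny Python sketch reproduces the figures exactly:

```python
def piano_key_freq(key):
    """Frequency in Hz of piano key `key` (1 = lowest A, 49 = A440, 88 = top C)
    in 12-tone equal temperament anchored at A440."""
    return 440.0 * 2.0 ** ((key - 49) / 12)

print(piano_key_freq(1))             # 27.5     ("A27.5", the lowest note)
print(piano_key_freq(49))            # 440.0    ("A440")
print(round(piano_key_freq(88), 3))  # 4186.009 ("C4186.009", the top note)
```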
Okay, what does all this have to do with the equalizer? A knowledge of the pitch (frequency) that is sounding will help you when it comes to using an equalizer – relating that to the musical instrument or keyboard will definitely help you target the proper area. A working knowledge of the harmonic series and the importance of upper harmonics will help you when you are manipulating the harmonic balance of the sound. This is what you do when you EQ something: you are changing the balance of the harmonics by making certain frequencies louder or softer. And the danger is that you run the risk of making a sound unrecognizable.
The natural instinct is to listen to a sound and look for what you are not getting enough of… this is why most civilians (a kind term for the “technically unwashed”, or average consumer) only boost with their equalizer controls. Once you learn to listen closely, you start to hear something and say to yourself, ‘What am I getting too much of?’ By removing some problem areas you can actually make what you want stand out. If you only boost all the time, you sometimes only compound the problem. We’ll give you a good working example of this with a Kick drum later in this series on Equalizers. Until next time… Bad Mister