Equalizers Part II

Equalization in the Real World

I became a recording engineer the old-fashioned way… I ran for coffee, set up microphones, cleaned tape machines (yes, it was the stone ages). I did an apprenticeship at a small jazz studio in New York City. The studio was affiliated with a record label and it was a very comfortable atmosphere both to learn and to record music. I got there because I needed to know how to get the sound I wanted when I recorded my band. I realized in my first recording experience as a musician that I could not express myself to the engineer. I knew what I wanted but could not speak ‘the language’. I was young, and there is nothing wrong with being young as long as you realize that you are and that you have time to learn. So that is what I did. I said to myself: “Self, you need to learn about this recording stuff if you are going to be a musician.” After each session I would ask the engineer: when this happened, why did you do this? And when that happened, why did you do that? …It was a process. Finally came the day I was “flying solo” – doing my first recording session.

It was the guitar player’s date. He showed up hours early. I knew he was serious because he had his own foot-rest and his hands were meticulously manicured. I got him a comfortable chair and set up a pair of U87 microphones – one over the sound hole and one above the nut and fretboard. I spent about 5 minutes listening to him play, right there in the room, and then retired to the control room. I then went back in and adjusted my microphone positioning until I found a good distance. Notice I did not immediately go to the console’s EQ (and assume my first mic positioning was golden) – moving a microphone is always a better first solution than using an EQ. Adjusting the distance from the source gives a better harmonic balance than a knob on any EQ, period. I was pleased. I was getting this big, warm, rich tone from that acoustic guitar. So big you felt you could walk between the strings. Oh, yeah, this was good. In walks the bass player. I set up a U47 (a favorite back in the day for acoustic bass). And I was getting a big, warm, rich tone from that acoustic bass.

Now the guitar and bass started playing together. Hmmm! The big-warm-rich guitar was getting in the way of the bass. I made some adjustments to the EQ so that the guitar and bass worked together. In walked the piano player. We had a very fine piano in the studio and I had my own ‘thing’ for recording piano – it is my instrument after all. I did my ‘thing’ on the piano sound. Now the three of them started to play together, and I realized, once again, I needed to adjust the bass and the guitar to accommodate the piano. Okay. After some more minor adjustments, I was happy with the sound of the three of them.

In walked the drummer, and 40 minutes later I had a decent drum sound… Drums… need I say more? Rattles, squeaking, ringing, etc., etc. Then the flute player… I put him in the vocal booth. This allowed me to get clean isolation without placing the microphone so close to the flute that it would give an unnatural sound. The isolation booth allowed the microphone to be far enough away to give the flute some airiness. Aim the mic at the nose or the chin of the player (or singer) – this avoids all the plosive wind puffs that can sound like a storm. You have to tell them to ignore the mic position, because they are used to being right on the mic – “live” it is a battle to be heard, so good mic technique goes out the window. Most people have no concept of how to get the best sound from a mic – they never get beyond fighting to be heard.

Okay, a good hour and a half of setup and we are ready for a run through. This is when the big learning moment occurred – I call it the “AHA! Moment” (some say “Eureka”, same thing). They were running through the first number and it was sounding excellent – really excellent. Everything well balanced and clear – I could hear everything just like I wanted it. The learning moment came when I pressed the solo button, in turn, on each instrument. That is when it hit me. I soloed that guitar – and there was this much thinner guitar sound, nothing like the big warm rich sound I had initially. I undid the solo button and the guitar sat right in place with the other instruments. I soloed the bass – and there was this much thinner bass sound, nothing like the big warm rich sound I had initially. I undid the solo button and the bass sat right in place with the other instruments.

Wow! No way would I EQ these instruments like this as individuals – but they were perfect in the context of the ensemble. They supported each other well. And the bottom line is: that is what you are going for, the ensemble sound. Because it is a band, not a bunch of individual sounds to be heard alone. It is all about how they work together. AHA! Eureka… hurray! I had learned the first and most important lesson in recording – make the final product work. These were not “solo” instruments; this was an ensemble, and the sounds should support each other in the context of the music they are making, now!

VCM Technology

The geniuses at Yamaha’s K’s Lab in Hamamatsu have recreated the response and behavior of the classic equalizers of the past. What VCM (Virtual Circuitry Modeling) does is not just mimic the device’s results (as is the case with many of the modeling devices out there)… what they have done is mathematically model the actual electronic components used in those classic equalizers. The resistors and capacitors used to build these devices, their tolerances, their response to heat variation, and how they were used in the actual circuitry have all been modeled. This means that the input-output signal flow will behave the same as the original device under similar conditions. By mathematically modeling the components and the circuitry, they can virtually build almost any device. The unique charm of these classic devices is what has been captured with the VCM technology.

VCM EQ 501 Parameters

A close look at the front panel of the VCM EQ 501 will reveal why it is called this: it is a 5-band EQ.
[Image: VCM EQ 501 front panel]
The three brown knobs on the top row are labeled “Q”. The second row of five knobs is the “Frequency” setting. The five knobs of the bottom row are the “Gain” boost or cut. In the upper right corner is the overall OUTPUT.

Q – is a term that denotes how wide a frequency umbrella each band will cover (and you thought it was a group of omnipotent beings that live in the Q-continuum). Another word for “Q” is Bandwidth. When you boost a frequency it will make a peak (it looks like a mountain peak), and when you attenuate a frequency it will look like a valley.

[Image: wide (low Q) versus narrow (high Q) response curves]
A wide setting (a low Q number) will cover several octaves, while a narrow setting (a high Q number) will affect a very small range of frequencies.
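
To relate a Q number to actual frequencies: by the usual textbook convention, the bandwidth between a peaking band’s edge points is the center frequency divided by Q, with the edges sitting geometrically symmetric around the center. Here is a minimal sketch of that standard relationship (nothing specific to the EQ 501’s detents is assumed):

```python
import math

def band_edges(center_hz, q):
    # Bandwidth between the edge frequencies is center/Q; the edges
    # sit geometrically symmetric around the center frequency.
    half = 1.0 / (2.0 * q)
    factor = math.sqrt(1.0 + half ** 2)
    return center_hz * (factor - half), center_hz * (factor + half)

print(band_edges(1000, 0.5))   # widest: about 414 Hz to 2414 Hz
print(band_edges(1000, 16.0))  # narrowest: about 969 Hz to 1032 Hz
```

At Q = 0.5 a band centered at 1kHz is more than two and a half octaves wide; at Q = 16 it is a 62.5Hz sliver around the center.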

[Image: shelving response curves]
Notice that the lowest band (1) and the highest band (5) do not have a “Q” control. This is because these two bands are of a type referred to as “shelving”. Rather than create a peak (mountain) when you increase the gain, a shelf creates a plateau (it flattens out), and all frequencies below the low band’s corner are affected; in the case of the high band, all frequencies above it are affected. When cutting, they behave much like an HPF at the low end and an LPF at the high end. An HPF is a High Pass Filter, which you can take literally: it allows higher frequencies to pass and blocks the lows. Conversely, an LPF, or Low Pass Filter, allows the low frequencies to pass and blocks the highs.
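
For a concrete (if deliberately simple) picture of what “pass” and “block” mean, here is a minimal one-pole filter sketch in Python – a generic textbook recursion, not the VCM EQ 501’s actual circuitry:

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    # Smoothing coefficient for a one-pole filter at the given cutoff.
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def low_pass(samples, cutoff_hz, sample_rate=44100):
    # Lets the lows through; the highs are smoothed away.
    a = one_pole_coeff(cutoff_hz, sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def high_pass(samples, cutoff_hz, sample_rate=44100):
    # The complement: the original signal minus its low-passed copy.
    lp = low_pass(samples, cutoff_hz, sample_rate)
    return [x - l for x, l in zip(samples, lp)]
```

The high-pass here is literally the input minus its low-passed version – exactly the “blocks the lows, passes the highs” behavior described above.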

FREQ – is Frequency. It is simply, as we have explained, the center pitch of the peak or valley. In the case of the shelving bands it is the point at which all frequencies below or above start to be boosted or cut. Frequency is measured in cycles per second, or Hertz (Hz), a unit named after Heinrich Hertz, who did groundbreaking work in this area.

GAIN – is the output for that band of frequencies. If the Gain is set at 0, none of the other settings matter, as no boost or attenuation can take place. Gain is measured in dB, or decibels; the “bel” honors Alexander Graham Bell, who did groundbreaking work on microphones, speakers, the telephone, etc. dB is a ratio – a comparison of one level to another. In general, a change of 1dB is hardly audible, a 2dB change will be recognized by a majority of listeners (if they are paying attention), while a change of 3dB is clearly audible to everyone.
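
Because dB is a ratio, each boost or cut amounts to multiplying the signal. A minimal sketch (assuming amplitude dB, hence the factor of 20):

```python
import math

def db_to_ratio(db):
    # +6 dB roughly doubles the amplitude; -6 dB roughly halves it.
    return 10 ** (db / 20.0)

def ratio_to_db(ratio):
    return 20.0 * math.log10(ratio)

for db in (1, 2, 3, 6, 12):
    print(f"{db:+d} dB -> x{db_to_ratio(db):.2f} amplitude")
```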

OUTPUT – In the upper right corner you see the OUTPUT knob. This is the overall output post-EQ. When signal first enters an EQ it passes through a filter, which drops the overall level. The OUTPUT is there to ensure that you can return the signal to what is considered UNITY gain – so there is no overall loss of level. An equalizer is a combination of filters and amplifiers. As we will learn, you do not always boost (at least not if you are wise); the OUTPUT can be important to give you back unity level after the signal has gone through the equalizer. Believe it or not, cutting (attenuating) the gain at certain frequencies can be beneficial to a sound.

VCM EQ 501 Parameter Ranges

The Q setting (also referred to as Bandwidth) is available on the center three bands, ranging from the widest setting, 0.50, through to the narrowest setting, 16.0.

The Frequency ranges are as follows:
Low (band 1) 31.5Hz ~ 2.0kHz
Low-mid (band 2) 50.0Hz ~ 20kHz
Mid (band 3) 50.0Hz ~ 20kHz
Hi-mid (band 4) 50.0Hz ~ 20kHz
High (band 5) 500Hz ~ 20kHz

The incremental steps between frequency settings are at 1/12 of an octave (as a musician you would understand this as approximately every half-step throughout the musical range). Without going into a discussion of Equal Temperament scaling – suffice it to say that this tuning is based on an octave being divided into twelve equal parts.
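
Here is a sketch of how those steps lay out, assuming each step is exactly a factor of 2^(1/12) (an equal-tempered half-step); the actual detented values on the unit may be rounded for display:

```python
def twelfth_octave_steps(start_hz=31.5, stop_hz=2000.0):
    # Twelve steps of 2**(1/12) multiply out to exactly 2: one octave.
    step = 2 ** (1 / 12)
    freqs, f = [], start_hz
    while f <= stop_hz:
        freqs.append(round(f, 1))
        f *= step
    return freqs

print(twelfth_octave_steps()[:6])  # [31.5, 33.4, 35.4, 37.5, 39.7, 42.0]
```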

The Gain range is plus/minus 12dB for the two outer (shelving) filters and plus/minus 18dB for the three peaking bands. You should resist the urge to use the Gain controls at the extreme settings – it is not that it is never necessary, it is just that plus or minus 12dB is a huge amount and plus or minus 18dB is a tremendous amount. Do not turn a knob like a guitar player… sorry for that one, but as you know, guitar players’ amps are built so you turn everything all the way up (to 11). Don’t – this is professional audio gear. It is the very rare occasion that you need that type of severe equalization. If you think you need that much, you might want to consider that you have selected the wrong sound.

The OUTPUT can be increased or decreased by +/-12dB.

In the upper right corner of the VCM EQ 501 you will see that you can select from several PRESETs:

Flat – a condition where all settings are neutralized (no gain or attenuation at any frequency). There are only two other presets: “Radio speaker” and “Dance BD & SN”.

Typically, you cannot pre-set equalization, as it will always depend on the type of signal coming in, and this can vary per session. The presets given here are special, general ‘effect-type’ equalization settings that, when applied to program material, will give the general overall feel of an old radio program or a typical dance bass drum/snare combination. The reason most tips and tricks about EQ’ing have to be taken with a grain of salt is that EQ is very much based on the particular signal. Take EQ’ing kick drums, for example: each drum could have a different resonant tone, so saying that the center frequency should be “xyz” would not always be true – your mileage is going to vary. However, some general tips, based on the fact that the kick drum is a low-frequency instrument, can be made. Be prepared to use your own ears when EQ’ing!
Bad Mister

Equalizers Part I

The first introduction for many to the equalizer is the so-called “tone” controls on your home hi-fi system or on the radio in your car. Typically, you have a Bass knob and a Treble knob. People don’t know what to do with them, so they turn the Bass up to 3 o’clock or better and set the Treble knob at 3 o’clock or better. If they have a very nice system, they may also have a Mid-range knob; they do not know what to do with it either, so they turn that up to 3 o’clock or better, too. If there is a Loudness Contour button, they punch that in, too. Why? …Because it sounds louder, and louder is better, right? Well, yes, louder always sounds better – whether it actually is better or not is the question. They are not sure about any of what they have done, so they just turn everything up. If this article is effective at all, you will no longer be one of these people.

As a musician you have (or should have) an advantage over the average consumer, or non-musician, when it comes to intelligently using the equalizer. That is the purpose of this article: to make you aware of the natural advantage you have and how to put it to use. We will define several of the fundamental terms (pun intended), which are all music related, that should help you learn to use the equalizer in your projects.

We mentioned that on your home hi-fi the knobs are referred to as TONE controls. You may be thinking, “I thought on a synthesizer the Filter was responsible for the instrument’s tone”. And you would be correct. An equalizer is basically a series of filters and amplifiers. Indeed, filters are responsible for what we as musicians refer to as tone or timbre; however, what does this mean in terms of what we hear and how we hear? The equalizer is responsible for the harmonic balance of the music program. You can make a particular frequency band (range) louder (boost) or softer (cut) by raising or lowering the Gain control for that segment. In the Motif XS/XF/MOXF there are many different equalizers: at the Element level, at the Voice level, at the Performance PART level, at the Mixing PART level and finally at the overall Master output level. There are several different types of EQ: 2-band, 3-band, 5-band, single-band parametric, straight level boost. In this article we will explain the device in general and take a look at the special VCM EQ 501 in particular.

The VCM EQ 501 is an Insertion Effect and is a type of equalizer called a parametric EQ. This means that you have control over 3 parameters: the Bandwidth (or Q), the Frequency, and the boost/cut Gain. The other type of EQ you may be familiar with is a Graphic EQ – where, in contrast, the frequencies and bandwidths are fixed at particular predetermined positions. Graphic EQs are often used to ‘fix’ a room. The parametric and semi-parametric EQs (as found in your synthesizer) allow you to select the frequency and the bandwidth of each band. Musically speaking, the ability to select the frequency, the range of frequencies affected, and the boost or cut is viewed as more practical inside the workings of your synthesizer.
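
To make “three parameters per band” concrete, here is a minimal sketch of one peaking band using the widely published “Audio EQ Cookbook” biquad formulas – a generic textbook filter, not Yamaha’s VCM component model:

```python
import math

def peaking_biquad(center_hz, q, gain_db, sample_rate=44100):
    # Standard "Audio EQ Cookbook" peaking-band coefficients.
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    # Normalize so the leading feedback coefficient is 1.
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def process(samples, b, a):
    # Direct Form I: two feed-forward and two feedback taps.
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

b, a = peaking_biquad(center_hz=1000, q=2.0, gain_db=6.0)
```

Feed it a center frequency, a Q, and a gain in dB – the same three controls the EQ 501 puts under your fingers for each of its middle bands.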

Musical Background

Let’s open our discussion with a quick look at the nature of sound. There is ‘noise’ and there is ‘music’. Noise is chaos and music is order. What makes music pleasing is that there is structure and some consistency in the nature of the vibration. All sound is vibration. When this vibration is random, we call it noise, and when it is structured, we call it music. Picture in your mind’s eye a guitar string or a piano string in motion. When a string, attached at both ends, is plucked/struck, it goes into vibration and will maintain a consistent number of vibrations per second. We call this consistent vibration its frequency – measured in cycles per second (or Hertz). The number, or frequency, of the vibrations is what musicians call its pitch. We have all heard that the “A” above middle “C” is identified as “A440”. This means that any object that vibrates consistently 440 times in a second will give off the pitch we call an “A”.

Not all vibrations are audible (what we call ‘sound’). I must state this because, after all, light is a type of vibration at an extremely high frequency, and the Moon travels around the Earth at a frequency of once every 28 days. The frequency response of the human ear to “sound” is approximately 20 cycles per second up to about 20,000 cycles per second. Your mileage will vary. This range can actually be slightly lower and/or higher depending on the individual, but this is the measurement for the average best pound-for-pound aural athlete on the planet, a seven-year-old boy (go figure!). As you get older your frequency response naturally rolls off in the high end – and the more often you listen at extremely loud volumes, the quicker your frequency response shrinks. This works out great, because the older you get the less you want to hear screeching music. We have all heard that dogs can hear higher frequencies than we human beings, and many animals apparently hear lower than we can, as they react to earthquakes well before we become aware. But vibrations that occur between 20-20,000 cycles per second fall into the human audio range. (It’s our general “frequency response”, and it is not flat!)

The fact that the frequency doubles every octave as you go up the scale explains why chords sound better and clearer in the middle and upper range of the keyboard, and denser and thicker (and less intelligible) when played low on the keyboard. The octave between A440 and A880 is much larger, in Hertz terms, than the octave between A55 and A110 – it is easier to fill in harmonies when you have room to establish them. An A minor 7 chord in the octave starting with A440 will be more intelligible than playing that same chord built up from A55. You know this from having played the keyboard; now the numbers give you a reason why: the low octave spans only 55 Hz, while the octave starting at A440 spans 440 Hz.
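
A quick sketch of the arithmetic (assuming equal-tempered half-steps) shows how crowded the low octaves are:

```python
def half_step_width_hz(octave_start_hz):
    # Width in Hz of the first half-step in the octave starting here.
    return octave_start_hz * (2 ** (1 / 12) - 1)

for a in (55.0, 110.0, 220.0, 440.0):
    print(f"Octave starting at A{a:g}: {a:g} Hz wide; "
          f"first half-step is only {half_step_width_hz(a):.1f} Hz")
```

Down at A55, neighboring chord tones are only a few Hertz apart, which is exactly why close voicings turn to mud down there.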

We said the equalizer affects the harmonic balance of the audible program. “Harmonics” is a term we need to understand as musicians. Technically, harmonics are the whole-integer multiples of the fundamental note. Let’s take “A440” as an example. This is the fundamental pitch – to find the next higher harmonic you would multiply 440 x 2 = 880. To find the next harmonic you would multiply 440 by the next whole integer: 440 x 3 = 1320; and so on. So every whole number multiplied by the fundamental gives us a higher harmonic. Each harmonic has a different amplitude (volume), giving each sound a unique identity. Harmonics are like the fingerprints of the sound. Every sound has a unique harmonic signature, much like every human has a distinctly individual fingerprint. Our ear-brain can recognize sounds because it is uncannily able to recall a remarkable number of sound IDs.
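
That multiplication, as a trivial sketch:

```python
def harmonic_series(fundamental_hz, count=8):
    # Whole-integer multiples of the fundamental.
    return [fundamental_hz * n for n in range(1, count + 1)]

print(harmonic_series(440))
# [440, 880, 1320, 1760, 2200, 2640, 3080, 3520]
```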

Here is an example. Say you are in a totally dark room and, behind you and to your left, someone drops a coin. You immediately turn to your left and say, “That was a quarter, approximately 10 feet away”. Not only are you able to pinpoint where the sound occurred, you are able to identify the amount of money, and from the sound you can now tell a lot about the room you are in and what material the floor is made of. A quarter has distinctly different harmonic content from a nickel or a dime or a penny. And it does not sound like a half-dollar or a silver dollar. It sounds different bouncing off of wood than off metal or concrete. In fact, without a thought you realize that what I am saying to you is true. These are sounds you have heard (of course, if you live in Europe or Asia you must substitute coins of your own currency). But in your own mind you know you could tell them apart.

The sound hits your left ear an instant before it reaches the right (sound travels at about 1100 feet per second; estimate the time it takes to travel the few inches between your ears) and you immediately turn to the left. The reverberation and the phasing of the sound bouncing off the walls and ceiling, if any, immediately detail the environment for your brain – you instantly know if you are indoors or out, and how large an area you are in at the moment. And because of the keen awareness of your consciousness, you can even tell if the coin was dropped (and from what height) or was thrown at you – the threat mechanism analyzes the situation. As a human you do this all in an instant, no thought, just reaction! Cool!
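
To put a number on that head start, a sketch using the article’s 1100 ft/s figure and an assumed ear spacing of about 7 inches:

```python
SPEED_OF_SOUND_FT_PER_S = 1100.0  # the round figure used above
EAR_SPACING_FT = 7.0 / 12.0       # assumed: roughly 7 inches around the head

delay_ms = EAR_SPACING_FT / SPEED_OF_SOUND_FT_PER_S * 1000.0
print(f"Maximum left/right arrival difference: about {delay_ms:.2f} ms")
# about half a millisecond, plenty for the ear-brain to localize with
```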

You can recognize human voices by the harmonic content of the person speaking. Each of us has a unique harmonic structure to our voice. You can even tell a friend’s voice over a device with sound quality as poor as a cell phone’s. Telephones have a frequency response of about 300-3,000 cycles per second (so don’t waste your time playing music to someone over the phone). But even in this severely narrow frequency region you are able to easily recognize your friend’s voice from that of someone selling a new long distance service.

Say a trumpet plays an “A440” and then a trombone plays an “A440” – do you think you could tell them apart? Of course you could, no problem. Why? Because the harmonic content of the instruments is different – even though they are both brass instruments, their shapes dictate that they have different tone (timbre), i.e., different harmonic structure.

How does this happen? And what makes the harmonic content different? We said harmonics are whole-integer multiples of the fundamental, and in most musical tones there are actually several tones happening simultaneously. We will now talk about the harmonic series – and as musicians this should be somewhat familiar.

[Image: sine wave]
Picture a piano string in motion giving a pitch of “A440”. If you had a high-speed camera and took a series of photographs, you would see the string vibrating as a whole at exactly 440 cycles per second between the bridge and the nut. Another picture would reveal a contortion of the string vibrating at exactly 880, another at 1320, and yet another at 1760, and so on. When a string is sounding the fundamental “A440” it is also simultaneously, at softer levels, giving off the rest of the “harmonic series”. Just how loud each of the higher harmonics is, and when it occurs in time, is what gives each sound its unique signature. The ear-brain can detect minute variations in the volume of these harmonics, and it is this that we identify when we identify a sound.

The relative volumes of the harmonics can even change over time – this also is part of what the ear-brain memorizes and anticipates as it looks through its memory banks to identify the source.
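
As a sketch of that idea, here is a tiny additive-synthesis example; the harmonic amplitudes and decay rates are invented purely for illustration (real instruments each have their own recipe):

```python
import math

SAMPLE_RATE = 44100

def additive_tone(fundamental_hz=440.0, seconds=1.0, num_harmonics=6):
    # Sum sine waves at whole-integer multiples of the fundamental.
    # Assumption for illustration: higher harmonics start quieter
    # (1/n) and die away faster (decay rate scales with n).
    samples = []
    for i in range(int(seconds * SAMPLE_RATE)):
        t = i / SAMPLE_RATE
        value = sum(
            (1.0 / n) * math.exp(-3.0 * n * t)
            * math.sin(2 * math.pi * fundamental_hz * n * t)
            for n in range(1, num_harmonics + 1)
        )
        samples.append(value)
    return samples

tone = additive_tone()  # change the recipe and the "instrument" changes
```
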
[Image: complex waveform]
It is very ironic that nowadays it is common practice to visually show the waveform in software and hardware samplers and recorders (diagram above). For us ‘old-school’ recording engineers nothing could be stranger – because looking at the waveform really tells you very little about how it sounds, while hearing it tells you everything. Could you tell a trumpet from a trombone by looking at the waveform? Doubtful… But our eyes are such needy things. The ear is phenomenal and makes the best identifier when it comes to sound – be it random or structured. It can hear sounds a trillion times louder than the softest sound it hears. A trillion! 1,000,000,000,000 – in dB terms, a power ratio of 10^12 works out to 10 x log10(10^12) = 120dB of dynamic range.

Okay, let’s take a closer look at frequency. As musicians you know that each time you double or halve the frequency you move up or down a musical octave. The piano is a great instrument to do this on because very few instruments that go as high go as low, and very few instruments that go as low go as high. We said the “A” above middle “C” was “A440”. The “A” below middle C is “A220”, and the “A” below that is “A110”. And below this we have “A55” and the lowest note on the piano is “A27.5”. Going up the scale we have “A880”, “A1760” and “A3520”.
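
The doubling and halving, as a sketch:

```python
def octaves_of_a(lowest=27.5, highest=3520.0):
    # Each doubling of frequency is one octave up.
    freqs, f = [], lowest
    while f <= highest:
        freqs.append(f)
        f *= 2.0
    return freqs

print(octaves_of_a())
# [27.5, 55.0, 110.0, 220.0, 440.0, 880.0, 1760.0, 3520.0]
```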

Just because I’m sure you are curious: the highest note on the piano is “C4186.009”. So the full range of a concert grand piano is from “A27.5” ~ “C4186.009”. We said the human ear can hear from 20-20,000Hz. That leaves a lot of room between the highest “C” and the highest audible frequency. What is in that area? Is it important? Harmonics; and are they important!?! …You bet. They are what give sound its intelligibility. Without the upper harmonics the sound is dull and uninteresting to the ear-brain. The way it was explained to me when I was a young recording engineer is:

If the eye is drawn to things that shine and sparkle, then it is the upper harmonics that sparkle for the ear.

Okay, what does all this have to do with the equalizer? Knowledge of the pitch (frequency) that is sounding will help you when it comes to using an equalizer – relating that to the musical instrument or keyboard will definitely help you target the proper area. A working knowledge of the harmonic series and the importance of upper harmonics will help you when you are manipulating the harmonic balance of the sound. This is what you do when you EQ something – you are changing the balance of the harmonics by making certain frequencies louder or softer. And the danger is that you run the risk of making a sound unrecognizable.

The natural instinct is to listen to a sound and look for what you are not getting enough of… this is why most civilians (a kind term for the “technically unwashed” or average consumer) only boost with their equalizer controls. Once you learn to listen closely, you start to hear something and say to yourself – ‘what am I getting too much of…’ By removing some problem areas you can actually make what you want stand out. If you only boost all the time you sometimes only compound the problem. We’ll give you a good working example of this with a Kick drum later in this series on Equalizers. Until next time… Bad Mister
