Manny’s Modulation Manifesto: Intro to FM Synthesis

Introduction

What is this seemingly complex mystery that FM synthesis is all about? It has a reputation for being difficult to understand and program, partially because it introduced new terminology like Algorithms, Carriers and Modulators in place of the usual oscillators and filters. However, I believe the main reason for the bad rap is that the DX7 was just so different from the analog subtractive synthesizers with which everyone had been familiar. Rather than starting with a complex waveform that already contains harmonic overtones, FM is a constructive system: you start with a simple sine wave and build complex wave harmonics from the ground up. Plus you had to build, shape and control all of that from a single input slider instead of 20 to 30 knobs! It was very awkward to just play around and see what happened to your sound, in contrast to what you could readily do on an analog synth by selecting a sawtooth or square/pulse wave and turning all the knobs. The user interface on analog subtractive synths was more immediate and rewarded experimentation without requiring a deeper understanding of the instrument. For example, why does a pulse wave sound different as you change the pulse width? What are the technical differences between a low pass and a band pass filter, and what is resonance? Essentially you just twiddled the knobs until it sounded cool and you liked the result.

Over the years of programming FM synthesis, I’ve found the most constructive way to learn is to focus on and become familiar with “how” an FM sound changes when tweaking the core parameters of Operator Frequency Ratios and output Levels instead of “why.” Rather than getting into the math and theory of why FM synthesis does what it does, these articles take a more practical approach: how to program various ‘families’ or categories of sounds so you gain experience with how FM responds. Yamaha has made this much easier and more straightforward with the front panel of the reface DX, where you have immediate access to Algorithm, Ratio, Level and Feedback, as well as with the online Soundmondo.com editor.
Through this series of articles and example tutorials, I hope you’ll build a mental library of “when I do ‘this’, then ‘that’ is what happens to the sound,” just like everyone does twiddling knobs with traditional subtractive synthesis. Each article will include links to content on Soundmondo.com that demonstrates the concepts and topics in action, so you can see how it all comes together.

I like to approach synthesis and sound design by breaking down sounds into three foundational components: Pitch, Timbre (i.e. harmonic content or structure), and Amplitude (volume). All these components are universal to any synthesis system. How these three components change over time I’ll refer to as the “behavior” of the sound. Behavior typically is a blend of the automatic changes built into the sound and changes from real time input. An example of a common automatic change would be what happens in response to an Envelope generator. Common real time changes would be what happens in response to key velocity and controllers such as pitch bend, mod wheels, aftertouch, assignable sliders and the like. So when I approach sound design, I start with the context of how I’m going to use a sound, and then consider what type of timbre and behavior it would need to fit that context. So, let’s dive in with sound design and building sounds from scratch so that you can experience how versatile FM synthesis can be.

Basic Waveform Creation

Please reference the Reface DX Voice “Wave Example” on Soundmondo.

With Yamaha’s traditional FM synthesis you start with simple sine waves – referred to as Operators — and have to build your waveform to create harmonics. This is done by choosing an Algorithm, which defines which Operators are Carriers and which are Modulators and how they interact, or how they are “patched” together. A Carrier Operator is one that you can hear directly and a Modulator Operator is one that you can only hear by the harmonics it creates. The harmonics you hear are determined by the Frequency Ratio between the Modulator Operator and the Carrier Operator. The loudness or brightness of those harmonics is determined by the output Level of the Modulator.
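If you like to see concepts in code, the Carrier/Modulator relationship described above can be sketched in a few lines of Python with NumPy. This is a minimal illustration, not Yamaha’s actual engine; the function name, sample rate and the use of a modulation index to stand in for the Modulator’s output Level are my own assumptions:

```python
import numpy as np

SR = 48_000  # sample rate in Hz (an assumption for this sketch)

def two_op_fm(carrier_hz, ratio, level, dur=1.0, sr=SR):
    """A minimal two-Operator FM pair: one sine Modulator patched into
    one sine Carrier (Yamaha-style phase modulation).

    ratio -- Modulator frequency / Carrier frequency (the Frequency Ratio)
    level -- modulation index standing in for the Modulator's output Level;
             0 leaves just the Carrier's plain sine wave
    """
    t = np.arange(int(dur * sr)) / sr
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    # The Modulator is never heard directly; it bends the Carrier's phase,
    # and the sidebands that creates are the harmonics you hear.
    return np.sin(2 * np.pi * carrier_hz * t + level * modulator)

pure_sine = two_op_fm(220.0, ratio=1.0, level=0.0)  # Carrier alone
bright    = two_op_fm(220.0, ratio=1.0, level=2.0)  # harmonics added
```

With `level` at 0 you get the bare Carrier sine; raising it adds and brightens the harmonics, just as raising a Modulator’s output Level does on the instrument.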

We’ll begin with the Reface DX Voice “Wave Example” on Soundmondo.com. You will see there a link to a YouTube video that expands on what I will be outlining here. This Voice uses Algorithm 5 that has Operator 1 as a single Carrier, and Operators 2, 3 & 4 as Modulators each patched directly into Operator 1. This will illustrate how we create traditional analog style waveforms, and a distinctly digital one. Play and hold a note. The sound starts as a sawtooth wave, changes to a square wave, then ends with a metallic bell wave. Here’s what it looks like (click picture to go to linked video “Wave Demo 1”):

{mp4}wave{/mp4}

Let’s look at the various parts of this sound. First, turn off both Operators 3 & 4 so we just hear the sound from Operators 1 & 2. These two Operators create a basic sawtooth wave because of the Frequency Ratios of 1.00 for both Operators 1 & 2. As you hold a key, the Envelope controlling the Level for Operator 2 decays to zero and the harmonics created by Operator 2 disappear, leaving just the sine wave of Operator 1. This is similar to the effect of closing a low pass filter on a sawtooth wave using subtractive (analog) synthesis. Next, turn off Operator 2 and turn on Operator 3, so you are just hearing Operators 1 & 3. These Operators create a basic square wave because of the Frequency Ratios of 1.00 for Operator 1 and 2.00 for Operator 3. Play and hold a note. As the Envelope for Operator 3 first increases then decreases the output of Operator 3, you will hear the sound start as a sine wave, change into a square wave, then back to a sine wave, again similar to opening and closing a low pass filter on a square wave.

Finally, turn off Operator 3 and turn on Operator 4, so you are just hearing Operators 1 & 4. These operators create a basic metallic bell tone because of the Frequency Ratios of 1.00 for Operator 1 and 4.77 for Operator 4. Play and hold a key, and notice that as the Envelope for Operator 4 increases the output of Operator 4 you will hear the sound start as a sine wave then change into a metallic bell tone with inharmonic overtones. This is a classic digital waveform done easily with FM. The reason for this behavior is the Frequency Ratio of Operator 4 is not a whole number. This is similar to what you would hear from ring modulation or “cross modulation” in an analog/subtractive system.
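A quick way to see why these particular Ratios produce saw, square and bell tones is to list where the FM sidebands land. In the simplest view (ignoring the Bessel-function amplitudes, which govern how loud each sideband is), a Modulator at frequency fm adds partials at the Carrier frequency plus and minus whole multiples of fm, with negative frequencies folding back to positive ones. This little sketch, with a hypothetical helper of my own naming, shows the pattern:

```python
def sideband_freqs(carrier_hz, ratio, orders=4):
    """Frequencies (Hz) of the first few FM sidebands: fc +/- k*fm.
    Negative frequencies fold back (reflect) to positive ones.
    Sideband loudness (Bessel amplitudes) is ignored in this sketch."""
    fm = carrier_hz * ratio
    freqs = {abs(carrier_hz + k * fm) for k in range(-orders, orders + 1)}
    return sorted(freqs)

# Ratio 1.00 -> every harmonic, like a sawtooth wave
print(sideband_freqs(100, 1.0))   # 0, 100, 200, 300, 400, 500 Hz
# Ratio 2.00 -> odd harmonics only, like a square wave
print(sideband_freqs(100, 2.0))   # 100, 300, 500, 700, 900 Hz
# A non-whole-number Ratio such as 4.77 lands the sidebands on
# inharmonic frequencies, giving the metallic bell character.
print(sideband_freqs(100, 4.77))
```

With Ratio 1.00 the sidebands stack on every harmonic (sawtooth-like); with Ratio 2.00 they skip the even harmonics (square-like); with 4.77 they land between harmonics, which is exactly the inharmonic bell behavior described above.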

Harmonics

I’ve shown the waveform shapes in the first video clip; now let’s look directly at the harmonics, or overtones, as they change in the sound over time under the Envelopes controlling the modulation amount (meaning the Levels) from Operators 2, 3 and 4 (click picture to go to linked video “Spectra Demo 1”):

{mp4}spectra{/mp4}

You see the initial sawtooth wave has all the harmonic overtones (even and odd) present. As the modulation from Operator 2 fades out and the modulation from Operator 3 fades in, you’ll see every other harmonic disappear so it’s just the odd numbered harmonics of a square wave sounding. Finally, as the modulation from Operator 3 fades out and the modulation from Operator 4 begins to fade in, you’ll see the inharmonic overtones gain intensity.
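You can verify the same even-versus-odd behavior numerically with an FFT. The sketch below (my own code, not anything from the instrument) renders the Ratio 1.00 and Ratio 2.00 cases and measures the level of each harmonic; the even harmonics of the Ratio 2.00 tone come out essentially empty, just as the spectra video shows:

```python
import numpy as np

SR = 48_000
t = np.arange(SR) / SR  # exactly one second, so FFT bins fall 1 Hz apart

def fm(fc, ratio, index):
    # One sine Modulator phase-modulating one sine Carrier
    return np.sin(2 * np.pi * fc * t
                  + index * np.sin(2 * np.pi * fc * ratio * t))

def harmonic_levels(sig, f0, n=6):
    """Magnitude of harmonics 1..n of f0 (a sine of amplitude 1.0
    reads 0.5 at its own bin with this normalization)."""
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    return [float(spec[k * f0]) for k in range(1, n + 1)]

saw_like    = fm(100, ratio=1, index=2)  # the "Operator 2" sawtooth case
square_like = fm(100, ratio=2, index=2)  # the "Operator 3" square case
```

Printing `harmonic_levels(square_like, 100)` shows energy at 100, 300 and 500 Hz but effectively zero at 200 and 400 Hz, while the `saw_like` tone fills in every harmonic.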

Envelopes

You have heard and seen the basics of how Operator Frequency Ratio and Modulator Operator Level build complex waveforms, and how a Modulator Operator Envelope controls its output Level to change the amount and brightness of those harmonics over time as you hold a note. Additionally, you heard how the overall loudness or volume of the sound changes over time while holding a note, which is controlled by the Envelope settings of the Carrier Operator.

In this example, we started with Ratios to recreate some familiar waveforms. Now let’s spend some time listening to what different Ratio settings sound like. Go back to the “Wave Example” Voice and make sure Operator 2 is turned on. Turn off Operators 3 & 4 so we’re back to the initial sawtooth wave sound. Start changing the Frequency Ratio of Operator 2 and listen to the sound as you set it to 2.00, then 3.00, then 4.00 etc. up to about 12.00 then down to 0.500. Remember to play new notes with each setting so you can hear how the Envelope shapes the sound. Next, reset and leave Operator 2 Frequency Ratio at 1.00 and start changing the Frequency Ratio of Operator 1 from 0.500, to 1.00, then 2.00 etc. and again up to about 12.00. Listen to how the sound changes. Go back and try some random non-whole number Ratios like 1.27, 2.70, 3.37 etc. for both Operators. Try random whole number settings like 4.00 for Operator 1, 3.00 for Operator 2; or 2.00 for Op 1 and 5.00 for Op 2, or 6.00 for Op 1 and 7.00 for Op 2 etc., and listen to what they sound like. Start building a library of what various Ratio combinations sound like so that you will know where to begin when creating your own Voices.

Levels and Feedback

So far, we’ve focused on the Frequency Ratios of the Operators and how they change the harmonic overtone structure of the sound. Let’s move on to Level and Feedback. In simple terms, Level and Feedback control the intensity or volume of the harmonic overtones in the final waveform created. However, they do so in slightly different ways. Increasing or decreasing the Level of a Modulator Operator increases or decreases the amount and brightness of the harmonic overtones created in the Operator that it modulates. Feedback changes the amount and brightness of harmonic overtones within that same Operator. The reface DX has a new implementation of Feedback that can create either the full even-and-odd harmonic overtone series of a sawtooth wave (positive Feedback values) or just the odd harmonic overtone series of a square wave (negative Feedback values).
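The idea of an Operator modulating itself can be sketched in code as well. This is a rough, generic one-sample feedback loop of my own devising; Yamaha’s actual reface DX Feedback implementation (including its negative-value square-wave mode) is not public, so treat this only as an illustration of how Feedback piles harmonics onto a single Operator:

```python
import numpy as np

def feedback_op(freq, feedback, dur=1.0, sr=48_000):
    """One sine Operator modulating itself one sample at a time:
    y[n] = sin(phase[n] + feedback * y[n-1]).

    feedback = 0 leaves a pure sine; raising it adds harmonics
    within the same Operator, no second Operator required.
    (A sketch only -- not Yamaha's actual Feedback algorithm.)"""
    n = int(dur * sr)
    phase = 2 * np.pi * freq * np.arange(n) / sr
    y = np.empty(n)
    prev = 0.0
    for i in range(n):
        prev = np.sin(phase[i] + feedback * prev)
        y[i] = prev
    return y

gentle = feedback_op(110.0, 0.3)   # a few extra harmonics
edgy   = feedback_op(110.0, 1.2)   # noticeably brighter, saw-leaning
```

Comparing the spectra of `gentle` and `edgy` shows the higher feedback value putting far more energy into the upper harmonics, which matches the behavior described in the listening experiments below.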

This poses the following question: “Do I change the output Level or the Feedback in order to increase or decrease the amount of harmonic overtones in the sound I’m hearing?” The answer is “whichever one makes the sound change in a way that sounds good to you.” This is not meant to be a snarky answer, but this is where the technically correct answer gets into the arcane mathematics of FM behavior. At the end of the day, this is less important than exploring what happens when you change Level and Feedback values. The one practical thing to remember is that Level can be controlled by an Envelope and Velocity (as well as Key Scaling and the LFO, which we’ll cover in later articles) whereas Feedback is fixed and has no controller sources.

Let’s explore the variations of modulator Level and Feedback. Reference the Reface DX Voice “Level vs Feedback” on Soundmondo.com. This Voice again uses Algorithm 5 where Operator 1 is the Carrier and Operators 2, 3 and 4 are Modulators each directly patched into Operator 1. All Operator Frequency Ratios are set to 1.00. This Voice has Operators 3 & 4 turned off by default, so when you first play the Voice, the sound you hear is just from Operators 1 & 2, which is our basic sawtooth wave. You will see that Operator 2 has both its Level and Feedback set to 80. Play a few notes and get familiar with the sound. Next, turn off Operator 2 and turn on Operator 3. Operator 3 has its Level set to 65 and the Feedback set to 95 – so we’ve decreased the Level by 15 and raised the Feedback by 15.

Play some notes and notice how it sounds similar but has distinctly reduced midrange harmonic overtones and stays pretty uniform when playing upper notes. Now, turn off Operator 3 and turn on Operator 4. Operator 4 has the Level set to 95 and the Feedback set to 65, the inverse change where the Level has increased by 15 and the Feedback decreased by 15 compared to Operator 2. Play some notes and notice that it sounds distinctly different with significantly accentuated midrange overtones and gets especially harsh in the upper notes.

Next, go back and turn on Operator 2, turn off both Operators 3 & 4, and look at what happens if you change the Operator Level while leaving Feedback the same, and vice versa. Increase and decrease the Level for Operator 2 as you play and hold notes and listen to how the sound changes. Now reset the Level back to 80, then increase and decrease the Feedback as you play and hold notes. In general, you will notice that increasing the Feedback brings up the higher-range harmonic overtones faster, while increasing the Level tends to bring up the mid-range harmonic overtones more noticeably. You might also have noticed that when Level and/or Feedback are set higher than 95-100, a lot of very pronounced and drastic changes occur in the Voice.

To put it another way, Level and Feedback act as a type of “waveshaping” that accentuates different ranges of the harmonic overtones depending on which one is set higher or lower. Changes in the sound are fairly predictable and similar to the response of a low pass filter when Level and Feedback are both under a value of 90, but the sound and harmonic overtones change very drastically and unpredictably when the values go above 90, creating that aggressive, digital tone that is well recognized as FM synthesis.

So until next time, practice and tweak your Frequency Ratios, Levels and Feedback values and listen to the various changes they create in the final sound.

Want to discuss or comment on this lesson? Join the conversation on the Forum here.

And check out Lesson 2 – Solo Brass Voices now.

A little bit about the author:

Manny Fernandez has been involved with sound programming and synthesizer development for over 30 years. Initially self-taught on an ARP Odyssey and Sequential Pro-One, he also studied academically on Buchla modular systems in the early ’80s. With a solid background in analog synthesis, he then dove into digital systems with the release of the original DX7. Along with his aftermarket programming for Sound Source Unlimited, Manny is well known for his factory FM programming work on Yamaha’s DX7II, SY77, SY99, FS1R and DX200 as well as the VL1 and VL70 physical modeling synthesizers.

Mastering MONTAGE: Audio Rec on DAW, Part II

The video below is a great accompaniment to this article. Check it out:

In the last article (Cubase Setup Guide Workflow: Audio REC on DAW) we introduced you to the basic Cubase configuration and we learned to follow the routing from the MONTAGE PART (OUTPUT) to the Cubase VST CONNECTIONS > INPUT, and finally assigned that INPUT to an Audio Track. This time we will begin with more details on the MONTAGE OUTPUT assignment side of things and discuss MULTI PART PERFORMANCES.

In our first installment we saw that the QUICK SETUP #3: “Audio REC on DAW” template mapped out PARTS 1-16 so that all thirty-two Outputs are in use. You may never actually use them all at once, but they are mapped this way in the default template. In a situation where you are building a multiple part composition in your DAW, where you are using the MONTAGE as your principal multi-timbral tone engine, you may have different instruments in each PART of your PERFORMANCE. Let’s recall such a PERFORMANCE and take a look at the impact that the template has when applied.

Please recall the 8 PART PERFORMANCE: “Kreuzberg Funk”:

Kreuzberg

This 8 PART Performance uses 7 Arpeggios, which you control with the left side of a split keyboard. The Lead Synth is mapped C#3 ~ G8 for right hand play. Notes C3 and below will control the arpeggio backing… The Super Knob and MW are in play. The 8 [SCENE] buttons switch both Motion Sequences and Arpeggio Patterns, making for different musical Sections. If you don’t know what to do in terms of “playing” this, simply press the [AUDITION] button to observe what happens with the SCENES, the Super Knob, et al. You can still use the [MUTE] and [SOLO] functions to hear what each PART is contributing.

PART 1 – Electronic Line _ Main L&R
PART 2 – Bass _ USB 1&2
PART 3 – Musical FX _ USB 3&4 (only present in Scene #5)
PART 4 – Drums _ USB 5&6
PART 5 – Percussion _ USB 7&8
PART 6 – pitched Musical FX _ USB 9&10 (only present in Scene #4)
PART 7 – Synth Lead _ USB 11&12
PART 8 – noise FX _ USB 13&14 (only present in Scene #4)

When the Quick Setup #3 “Audio REC on DAW” template is applied with this PERFORMANCE, each instrument is initially bused to its own discrete Output. This may or may not be what you want to do, but we just want you to see how these PARTS are routed. Within each PART you can see the assignment on the “Part Settings” > “General” screen:

From HOME
Press [EDIT]
Press [PART SELECT 1]
Press the lower [COMMON] button or touch “Common” in the lower left of the screen
Touch “Part Settings” > “General”

PART1mLR

The fact that PART 1 is the only one going through the MAIN L&R Output means it is the only one that is taking advantage of the REVERB and VARIATION Effects, in this default setup. You may wish to change this; again, this is a production decision that needs to be considered.  

The OUTPUT assignment for any of the PARTs can also be seen when you navigate to “Effect” > “Routing”.

The Synth Comp sound in PART 2 is functioning as the Bass line sound for this Performance – you can see that PART 2 is assigned to USB 1&2 (as per the template).

PART2usb12

PART 3 happens to be an FM-X sound effect. It, too, will be routed to a discrete Output on its “Part Settings” > “General” screen:

PRT3FMusb34

And so on go the OUTPUT assignments. You can clearly understand that you might want to record each to its own discrete stereo track. Why stereo? That is a good question; it will depend on what you are doing within each PART. For example, does the sound in the PART take advantage of the stereo panorama within what it is tasked to do? Remember, things like Motion Sequences can move items around in the stereo field. You can SOLO each PART and listen as you explore. Is the instrument a stereo sampled instrument? Does it use an Insertion Effect that requires stereo to reproduce the effect? There are many reasons why you might want to assign a single PART to a stereo bus > route it to a stereo Input > record it on a stereo Track. These are the production decisions you can (and need to) make on a per project basis.

To receive this in Cubase it will be necessary to create the appropriate number of INPUT buses.
Navigate to DEVICES > VST CONNECTIONS > select the INPUT tab > click on “ADD BUS” and create the appropriate number of INPUT Buses:

8BusInputs

In Cubase, on the main Track view screen, create the appropriate number of AUDIO TRACKS and assign, each-by-each, to the designated Input. Track 1 (AUDIO 1) will use the “Stereo In”, Track 2 (AUDIO 2) will use the “Stereo In 2”, and so on through to Track 8 (AUDIO 8) which will use “Stereo In 8”

Bus2Track

If you always think in terms of signal flow, from SOURCE to DESTINATION, you cannot go wrong. The SOURCE is the MONTAGE PART, which you set to a specific OUTPUT assignment. You then receive that OUTPUT at a VST CONNECTIONS INPUT in the DAW. Then you connect it to the DESTINATION, the AUDIO Track which is set to the designated Input. Once you can follow this signal flow you can see that it is logical and actually very easy to use. It’s no more difficult than plugging analog cables between devices in the real world.

You can name each INPUT (if you so desire) but as you can see, it is not mandatory. The default names for the buses are just fine, as long as you stay organized. You can, of course, name the Tracks. Track Names are more important than Bus Names. The reason: once the signal has been transferred from Source to Destination, the Bus is open to transport other passengers, and can do so to different Track destinations. So the Bus Name is less important than the Name you place on the Track, because the Track represents a permanent home for the recorded audio. (The Bus was just how it got there!)

You can, of course, customize the OUTPUT assignments coming from the MONTAGE to suit your needs, and that is just what we will do in the next example. Please recall the PERFORMANCE “CFX + FM EP”.

CFX FMEP

This is a 5 PART PERFORMANCE where four of the PARTS make up the acoustic piano sound and the fifth PART is the FM Electric Piano. Let’s analyze and decide just what would be the best way to record this PERFORMANCE. Playing it, you can hear that it is a stereo piano and that the FM electric piano is using a stereo Chorus effect. If you have been ‘getting’ how the routing flow of the default “AUDIO REC on DAW” template works, you quickly realize that it will not work for this PERFORMANCE. Four of the PARTS belong together, so if we think of these four PARTs’ audio signals as the “passengers,” we can put them all on the same bus to a single stereo Destination. And if we wish, we can record the Electric Piano on its own stereo pair, or it can also be placed on the same bus with the four acoustic piano PARTS. But we certainly do not want to record each of the five PARTs to its own stereo audio track; that would make no sense (in this case). When you have a multiple PART PERFORMANCE that is using the multiple PARTs as building blocks, you must adjust your thinking about your routing decisions. As Producer, you must make these production decisions based on the bigger picture that is your Project.

Experiments to try with “CFX + FM EP”
Set up and record as a basic stereo program – all PARTs going through the System Effects and recorded as a single Stereo Track. (Arm a single audio track; record the AUDITION data and play it back.) Navigate to each PART and ensure the PART OUTPUT assignments are as follows:
PART 1 _ Main L&R
PART 2 _ Main L&R
PART 3 _ Main L&R
PART 4 _ Main L&R
PART 5 _ Main L&R

“PART OUTPUT” is always found:
Press [EDIT]
Press the PART SELECT button corresponding to the PART you wish to view
Press the lower [COMMON] button to select PART COMMON parameters or touch the blue “Common” box in the lower left of the screen
Touch “Part Settings” > “General” > find the PART OUTPUT

In Cubase, using a single Stereo Bus, record a Piano performance to a single Stereo Track.

Set up and record with the acoustic piano (PARTs 1-4) bused via the “Main L&R” while the FM piano is routed to a discrete audio pair. (Arm two audio tracks.)
PART 1 _ Main L&R
PART 2 _ Main L&R
PART 3 _ Main L&R
PART 4 _ Main L&R
PART 5 _ USB 1&2

In Cubase, using two Stereo Buses, record a Piano/E. Piano performance recording to two tracks.

SYSTEM and MASTER EFFECT, to be or not to be?
If you wish to record without the SYSTEM EFFECTS and MASTER EFFECT, you can simply use the FX Bypass feature, found on the top line of the screen. Touch “FX” and select to turn OFF the System Effects and the Master Effect. (Shown below)

FXbypass

The only difference between the “USB 1-30” Assignable Output pairs and the “Main L&R” stereo pair is that the SYSTEM and MASTER EFFECTS are applied to the “Main L&R” Outputs. By turning the SYSTEM EFFECTS and MASTER EFFECT OFF, no System Effects are recorded to the DAW. You would use this scenario when you are opting to add a plugin Reverb during mixdown*. (Arm two audio tracks.)
*It is often preferred to leave adding Reverb Send amounts until you have assembled all the musical components. It is another “production” decision.
You can also setup the Montage as an external effect, routing audio signals from the DAW through the MONTAGE effects.

Again, you can use the [AUDITION] function during these routing experiments to provide music content so that you can concentrate on the signal flow and assignments, rather than splitting your concentration between ‘what to play’ and how to route it. Later, after you have become comfortable with routing and assigning, you can play/record something meaningful.

Not to repeat myself, but just to stress the fundamental point: Any Template, and specifically this “AUDIO REC on DAW” template, is just a starting point. You will, more often than not, have to make changes in the routing to match what you have selected to record.

Why is the default configured as it is, with all thirty-two Output ports assigned as stereo pairs for the 16 PARTS? Because if you ever need full separation, this is the setup that would take you the longest to create by hand, and when you do need separate assignments, this one covers the most bases.

Please take the time and learn to use the OUTPUT assignment function on the MONTAGE. It’s as easy as 1-2-3: 

1) MONTAGE PART OUT >>> 2) DAW INPUT configuration >>> 3) connect the Input to an AUDIO TRACK: RECORD

Have questions or comments about this article? Join the conversation on the Forum here.

Need to catch up on the first part of this lesson? Check it out here.

And stay tuned: our own “Bad Mister” – Phil Clendeninn – has more in the works for all you MONTAGE fans!

© 2024 Yamaha Corporation of America and Yamaha Corporation. All rights reserved.