MONTAGifying MOTIF: “Rule the Earth”

This is your quintessential rock shuffle – “Everybody Wants to Rule the World” (Tears for Fears, 1985) is the fairly obvious musical reference here. We often get questions about copyright infringement – no, it’s not infringement unless you use this to recreate the same chords and melody line and then try to claim them as your own! You cannot steal a song “by accident.”

So let’s get started:

MainHOME
PART 1: Rock Stereo Kit 1
PART 2: Uni Punch
PART 3: PWM Percussion
PART 4: Slow PWM Brass

Let’s continue with the discussion of Assignable Outputs. The MONTAGE signal routing scheme mimics what you would find in any professional mixing console. The mixer here is very much a digital mixer, with Inserts and Sends, Mutes and Solo, Pan control and Faders. There are eighteen inputs to the console: 16 Synth Parts, an A/D Input Part and a Digital Input Part. It features methods of automation that integrate with the synthesizer engine, and it can be controlled with tempo-related commands. You can even create, store and recall on demand your own MIDI and audio setup templates – which can include Part output assignments, monitor preferences and, in general, settings for your most common tasks.

Yamaha pioneered digital mixing consoles back in 1987 when the DMP7 was introduced as the world’s first MIDI-automated digital console, designed to work with the TX816 – back then synthesizers didn’t have EQs or Effect processing; those things were added at the mixing console. Old school. Today, all the sophistication found in standalone digital mixers is built into the MONTAGE. It is important, from both a historical and a conceptual standpoint, to understand exactly how the synth engine and the digital console integrate in your MONTAGE. As we’ll see, there are settings that can be made and stored as part of the synth sound, and others that are stored as part of the mixer. Imagine a physical mixer where a channel’s Fader is set to 101 and the channel currently carries a mic on an acoustic guitar; later you might reuse that channel for an electric guitar. You can change the instrument on the channel and the Fader can still be set at 101. There are channel settings that can be made and maintained independent of the instrument on the channel. A Part can “inherit” certain mixer channel settings.

The factory QUICK SETUPs should be used to gain a basic understanding of what can be configured. In working with literally hundreds of first-time users, I have discovered that there are three common “newbie” misconceptions: 1) SETUP with a synth and a computer is difficult (it is not); 2) it must be done every time you want to work with the synth and computer (it does not – once you configure some basic things, your computer “remembers” the configuration); and 3) there is one configuration that works for everything (there is not – you need to learn several basic things to allow you to record, to overdub and to mix down).

Once you’ve gained a feel for working with MONTAGE and your DAW, you can tweak these factory Quick Setups and overwrite them with your own custom configurations. There is no “one way” to work – a common newbie misconception is that everyone works the same exact way. You will develop your own workflow, and MONTAGE will remember your preferences.

For example, the factory AUDIO REC ON DAW Quick Setup routes each of the 16 MONTAGE PARTS to its own stereo pair of Outputs. And you can understand why this is the factory default – it’s the one setup that would take you the longest to configure manually. If you want to create complete audio stems of each MONTAGE PART, this factory Quick Setup is ideal.

In many situations as a keyboard player/composer doing music for hire, you are asked to deliver your parts as individual audio stems for the convenience of mixing elsewhere. This is when having the option to route Parts, isolated to their own Audio Outputs and Tracks, without “shared” effects (Reverb), is on point. MONTAGE is flexible enough to route individual drums to USB Outputs from within your Drum Kit Part.

However, if you only need to route a few Parts in isolation, you might opt for a Quick Setup for Audio that resets all Parts to “Main L&R” – then you can make the individual settings on a case by case basis, as necessary. Point is, you can customize this to your most often used workflow.

Just when do you route an instrument (via Assignable Outputs) to its own isolated Track?
Since you can – using MIDI commands – fully automate the mixdown of your synth sounds, it may seem like an extra layer of work to then separate everything as audio in the DAW. There is little reason to isolate instruments to separate audio tracks unless you plan to further process the sound in your DAW, prior to or during mixdown. Because you can link the MONTAGE and your DAW via MIDI Clock, you can ensure that tempo remains locked between the two devices. This allows you the freedom to come back and overdub new tracks, keeping everything in synchronization.

If you’ve been thinking exclusively MIDI when mixing, we must remind you that with MONTAGE there are results that occur only through the interaction of the Parts as audio. If you are side-chaining or Vocoding signals, you will want to capture this interaction as audio. Your “music production” decision process must include the best way to capture this for your composition. Sometimes capturing the interaction is best done by isolating or combining certain of the Parts on certain audio Output buses. For example, when using an external audio source as a modifier in the Motion Control Engine, you may opt to use the direct and encoded signals together, or just the encoded signal. But our point is, some sounds you will be creating present a challenge as to how you should record and route them. Discover and master the basics first, then branch out by experimenting.

“Rule The Earth”
Rock Stereo Kit – this Kit gives the sound of a drum kit in a room. A very open sound, not studio-slick, but plenty of ambient room sound (done without effects). There is even a ‘ring’ in the snare (an acquired-taste thing). This is done purposefully and is a “production decision”. When you are choosing a Drum Kit, please recognize that Kits are fairly flexible when it comes to ‘producing’ a specific sound, but there is also a basic “character” fundamental to each Kit. In general, the two octaves from C1-B2 represent the drummer’s “trap kit”: Kick, snare, sidestick, rimshot/clap, Hihats, cymbals, toms. It is these Keys that give the Kit its name.

Kits have “a personality” but you can freely substitute individual drums or entire Kits. Let’s take a close look at some things to know when substituting Drum Kits in a Performance that already has Arps phrases assigned and mixer settings that we would want to keep.

Using the “View” HOME screen, press [PART SELECT 1] to view the PART 1 Drum Kit Elements:
ViewHOME
Let’s try substituting a different Drum Kit and understand the logistics.

We can audition different Kits. Instead of using the PERFORMANCE CATEGORY SEARCH, we can use the PART CATEGORY SEARCH by pressing [SHIFT] + [CATEGORY SEARCH]. This ensures that we are searching to replace just a single PART and not the entire PERFORMANCE:

 – With PART 1 ‘selected’ (highlighted).
 – Press [SHIFT] + [CATEGORY SEARCH].
 – The screen will read “Part1 – Category Search”.

 – Set the “Bank/Favorite” = Preset.
 – Touch “Drum/Perc” > “Drums”.

Along the bottom we will want to deactivate the “Param. with Part” options for MIXING, ARP/MS, SCENE, and ZONE – deselect these prior to making a KIT selection:
PartCatSearch drum
This means that instead of recalling the Drum Kit with its previous MIXING, its previously assigned ARPs and Motion Sequences, its previous SCENE settings and ZONE items, it will simply come in and inherit the settings we already have: the current Mixer settings, the Arpeggio phrases already selected in our PERFORMANCE, the SCENE snapshots already taken for our PERFORMANCE, and so on. If you fail to deactivate these, the Drum Kit will bring in its own default Arp Phrases and settings along with the Part.

Above I have selected “Real Drums Kit 2”. Changing KITs means changing 73 instruments – you will need to retrigger a Key to start the new KIT playing the ARP Phrase. The ARP PHRASE remains assigned to the PART, and the newly selected KIT will inherit the current settings. Simply trigger a Key on the beat.

In this manner you can substitute KITs and yet keep the current Performance settings. PARAMETER WITH PART is used to bring a sound’s previous settings into the current PERFORMANCE; in this case we want the new PART to inherit our current settings, so you must OPT OUT (deactivate each option).

Some Kits are close miked, others have more room ambience. Find a Kit that suits your taste or one that has something interesting to work with. In future articles we will come back to this and take a look at how you can assign any drum to your Kit and how you can recreate drums that mimic any era of recorded sound.

DRUM ROUTE

Let’s say we want to take separate audio outputs for the Bass Drum, Snare Drum and Hihats.

Drum Kit PARTS are different from Normal PARTs – and it is not just that old joke about drummers being different from musicians. In a Drum Kit PART each KEY is autonomous: each Key is a different instrument, while in the typical Normal PART all Keys make up one instrument. A Drum Kit is a conglomeration of individual drum and percussion sounds herded together into a single entity. Mostly each instrument occupies a single Key, although there are exceptions: the Hihat occupies three keys, the triangle occupies two keys… for different articulations. Each Drum Key has its own Waveform, its own Volume control, its own Pan position, its own Filters (HPF and LPF), its own AEG, its own EQ; it can even be tuned individually, and can either receive or ignore NOTE-OFF. Whereas in a Normal PART the Keys usually share these things as a group, in a Drum PART each KEY can stand alone. Each Key can even be routed to its own audio bus Output.

How can this be accomplished? That’s what we’ll tackle next:

Let’s say we want to take BD, SD and the three HHs and route them on their own buses to the DAW.

Drum Kit PARTS, like Normal PARTs, initially default to being assigned to the “Main L&R” Output. We can, on a DRUM KIT PART, select PART OUTPUT = “Drum”. This allows us to then route individual KEYs to an assignable Output:
PtOut Drum
What we need to understand about the DRUM PART is how it works as a whole. A Drum Kit, like all PARTs, can have two INSERTION EFFECTS. And just as we saw in the Normal PART, an individual Element can be routed to Ins A or Ins B as we desire. Once we have set the Kit’s PART OUTPUT = “Drum”, we can send individual Drum Keys to Assignable Outputs. Set the Connect = Thru – this bypasses the Insertion blocks. In the flow chart shown on the screen you can see exactly how each Key’s signal is routed:
DrumRouting
This is important to understand about Drum Kit routing. The screen above shows the Drum PART (PART 1) Common Effect Routing. As you touch a KEY (“Keyboard Select” is active, green), you can see its routing connection through INSERT A, INSERT B or THRU. The THRU option allows you to isolate that drum and send it to its own DRUM KEY OUT. Shown below is the Bass Drum (C1) Key’s “Osc/Tune” screen. “Connect” = Thru and the “Drum Key Out” can be set as desired:
DrumKeyOut
It is set by touching the DRUM KEY OUT box to view the option pop-up menu:
Kick USB1
 – Press the snare Key D1 to recall its data.
 – Set the “Drum Key Out” = USB Mono > USB2.

 – Touch each of the three Hihat Keys (F#1, G#1, A#1); assign each in turn to “USB Mono > USB3”.

Such an assignment can only take place when the overall PART OUTPUT for the entire Kit has been set to “Drum”, then the individual Drum Key has been set to THRU (bypassing the Insertion A and B blocks). Within a Drum Kit Part, a Drum Key can “share” the Insertion Effect with the other Keys, so in the Drum universe the Insertion Effects are the “shared” effects.

Options
Let’s take a look at the Drum Key options. For the sake of learning, let’s recall the factory Kit “Dry Standard Kit”. Using the Part Category Search, select the “Dry Standard Kit”. It has a more neutral sound; the character is different.

As you press various Keys in this Kit, while viewing the Part Common Effect Routing screen, notice that some Keys are routed to the Insertion blocks while others are routed THRU:
D1 Thru
If you wanted to route the SD on D1 to an assignable Output, you would have to set the Connect parameter = “Thru”. Doing so ensures its isolation from the other Drum Keys, which may be using the Insertion Effects. When Thru is selected, and the PART OUTPUT = Drum, you will see the “Drum Key Out” option appear (lower right corner).

If you CONNECT the Drum Key to an INSERTION EFFECT block – for example, shown below, D1 is ‘connected’ to the INS B block (SPX Room reverb):
D1 InsB
The “Drum Key Out” parameter will be unavailable (shown in black background which is the equivalent of being grayed out) for selection. Once you opt to use an INSERTION EFFECT, the signal will be returned to the “shared” system output with the rest of the drum Keys in this Kit. 

Take your time with the routing options. In an entire Drum Kit, each of the 73 Keys has the option to ‘connect’ via the ROUTING scenario. If you want to route the entire Kit as a stereo instrument, simply set the PART containing the KIT to “PART OUT” = “MAIN L&R”; but if you want to route some of the drums individually, you must set the “PART OUT” = “Drum”. This is what allows each KEY to be assigned to the MONTAGE Output matrix and have that assignment respected.

In a Drum PART, you may find certain drums routed to INSERTION Blocks in groups. For example, in the “Dry Standard Kit” it is the Tom-Toms on D2, C2, B1, A1, and G1 that are utilizing the VCM EQ 501 as a group. Anything you route to INSERT BLOCK A would share the EQ setting designed for the Toms. 

If you are not going to use the Toms in your particular composition, you can certainly reallocate the INSERT EFFECT A block to do something else. Again, there is no one way to use the resources, but knowing how to explore them can help you when you need to accomplish something. I have often talked to long-time Motif users, who never realized that Insertion Effects could be applied usefully within a Drum Kit because they did not understand the routing and therefore could not see the potential. The INSERT CONNECTION parameter (located between the two Insert blocks) is set to “Parallel” meaning that these two effects can be used independently. Drums are not like a guitar where you are going to put multiple effects in a row – you might want to dedicate one of the Effect blocks to a group of drums (like the Toms) and the other to create an ambience for the Percussion. 

In most MONTAGE Kits you will notice that Latin Percussion is covered in the two octaves between middle “C” (C3) and C5. And it is these Keys that use the SPX ROOM to create a “percussion environment”, giving you the ability to treat the reverb on the percussion separately from that on the principal drum kit sounds. This bit of subtle difference can give your “Drums” more presence in your mix: instead of just a drummer, you can create the illusion of a drummer and several percussionists. If you listen to your favorite recordings, notice how percussion instrument sounds “fit” when there is also a drummer. Good use of placement (both Volume and Pan position) is critical.

DAW Inputs
When you connect a Part or a Drum to a MONTAGE Assignable Output, you must create a corresponding input in the DAW to receive the audio. If you are using a stereo pair you’ll want to create a stereo input pair. If you are routing signals on mono buses you will make a mono bus input for each one. This is simple enough but an important thing to understand. Stereo Outs connect to Stereo Ins on the computer DAW, Mono Outs connect to Mono Ins. The importance of this rather simple rule will be discussed next.

And then, finally, you create a corresponding stereo or mono audio track that uses that Input: a Stereo Track set to receive audio from a Stereo bus; a Mono Track set to receive audio from each Mono bus. Many people, when first encountering this type of configuration, opt to create Inputs for every possibility – then all you need to do is use the ones you need. Others opt to create the setups as they go. I believe it is best to create the Inputs as you need them. Software companies would likely recommend this as a strategy, because creating an Input that you are not using does require some CPU muscle. Besides, once you understand the signal flow it really only takes a second to assign a Part in the MONTAGE; then, with a couple of clicks in your DAW, you’ve created a corresponding Input and Audio Track. The more familiar you are with this routine, the better you get at doing it quickly.

And you definitely want to get to the point where it is as second nature as anything you do often. The thing about Signal Flow is you can follow the audio from source to final destination and this makes it extremely easy to troubleshoot when issues arise.

Why when using USB Assignable Outputs are the sounds hard left or hard right?
When you assign a Bass Drum to USB1, as a mono bus Out, you may notice that the audio sounds only in the left speaker, and when you assign the Snare Drum to USB2 it sounds only from the right. If this is the case, then you are still monitoring the MONTAGE Direct. In any stereo pairing, the odd number is the left channel and the even number is the right channel. If you were thinking, “It’s mono – hey, shouldn’t it be in the middle, equal in each speaker?” – you would not be completely wrong, and this brings us to our next important concept: “Monitoring” Direct or through the DAW.

When using the Assignable Output, we’ve said that the Part is removed from the system outputs. The only way you are able to hear it is because for monitoring purposes only, the Assignable buses are routed to the Main outs. Odd numbered buses left, even numbered buses right.
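As a minimal sketch of the direct-monitoring convention just described (the function name is mine, not a MONTAGE API), the rule can be written in a couple of lines:

```python
def direct_monitor_side(usb_bus: int) -> str:
    """Direct-monitoring convention described above: odd-numbered
    Assignable buses are heard left, even-numbered buses right."""
    return "left" if usb_bus % 2 == 1 else "right"

# Bass Drum on USB1 is heard hard left, Snare on USB2 hard right
print(direct_monitor_side(1), direct_monitor_side(2))  # → left right
```

This is only what you hear while Direct Monitor is on; as described next, once the mono bus arrives in the DAW you pan it wherever you like.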

If, however, you turn DIRECT MONITOR = OFF, then complete the routing circuit through your DAW’s corresponding Audio Input and Audio Track, you will be able to monitor the signal after it arrives at the target destination. When you monitor the mono bus information after it arrives in the computer, the Bass Drum and Snare Drum, arriving on USB1 and USB2 respectively, will each default to “Center”, as you were thinking. This is because of *where* we have selected to monitor. Once the “mono bus” arrives in the DAW, you can pan it as you desire.

If you are worried that monitoring through the computer means you are listening with latency – yes, you would be. Typically your latency is only a few milliseconds. If you think that you can hear that, you probably cannot – though I don’t argue with people who say they do. If you simultaneously monitor two signals that are 6ms apart, of course you can hear the flanging; that is easily detectable by the human ear, which can register timing differences smaller than 0.01ms between simultaneously sounding signals. But if you are listening to just one signal, you probably cannot pick out which one is immediate and which one is 6ms behind.

Perspective: when a guitar player strums a chord, there can be as much as 6ms between the E string and the A string. When transferring items from MONTAGE to audio tracks the issue of latency is often moot because the transfer can take place when no one is playing. Rendering audio can be done at non-critical times. So it is a complete non-issue here.
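To put those millisecond figures in perspective with some back-of-the-envelope arithmetic (the buffer size and sample rate here are illustrative values for a typical DAW setup, not MONTAGE specifics):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Latency contributed by one audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

# A 256-sample buffer at 44.1 kHz adds roughly 5.8 ms -
# about the spread between strings in a strummed guitar chord.
print(round(buffer_latency_ms(256, 44100), 1))  # → 5.8
```

Halving the buffer size halves this figure, which is why small buffers are preferred while tracking and larger ones while mixing.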

Often, when wearing your technician (recording engineer) hat, you may switch back and forth between monitoring direct and monitoring post the DAW – once you realize that changing how you are monitoring does not interrupt or change the recording at all. Whether you choose to monitor direct or post the recorder, the signal you are monitoring is analog and totally separate from the actual digital signal that is recorded to your computer. How loud it is in your speakers does not matter; whether or not the level is turned up at all, recording can still take place… monitoring is just listening. This is why, when dealing with audio, your meters are very important. Record Level is significantly different from Monitoring Volume.

Until next time, enjoy.

Questions or comments about this lesson? Join the conversation on the Forum here.

And stay tuned for more from Bad Mister!

Download here: Rule_the_Earth.X7B

MONTAGifying MOTIF: “Creepn Worm”


Sound Designing Opportunities

Once MOXF or Motif XS/XF four Part Performance data is loaded (converted) into MONTAGE, create a file in native MONTAGE format for much quicker load times. At maximum, a converted Performance will have four Parts; each Part will have a maximum of two Assign Knobs active and six Control Set (Source/Destination) assignments.

In the MONTAGE, you can add four more realtime-controllable KBD CTRL Parts. Each Part has its own eight Assign Knobs and sixteen Control Set (Source/Destination) assignments. You can begin to expand on the programming, integrating it into the Motion Control Synthesis Engine. You can combine them with FM-X Parts to create entirely new musical creations:

 – Every Motif XS/XF and MOXF VOICE is already available in the MONTAGE as a Single (green) Part Performance. In fact, all of the Factory Presets are already available as ”Single Part Performances” in the MONTAGE factory set.
 – Each Motif XS/XF and MOXF Arpeggio phrase is already available in the MONTAGE.

New with firmware version 2.00, you can opt to convert User VOICES to MONTAGE “Single Part Performances”, or you can opt to convert the entire bank of four-Part PERFORMANCES. Motif XS (.X0A), Motif XF (.X3A) and MOXF (.X6A) files can now be converted, making your XS/XF or MOXF data available in MONTAGE.

For example, opting to Load “PERF” from a Motif XF .X3A All Data File will result in MONTAGE converting all 512 User Performances – including all the Voices, Arps and Waveforms that make them work. Each Motif XF Voice will occupy a Part 1-4 in the appropriate Performance filling 512 of the 640 Performance locations in the User Bank.

If you opt to load as “VOICE”, each XF User VOICE (512 Normal, 32 DrumKit) will be loaded as a “Single Part” Performance, filling 544 of the 640 Performances in the User Bank.

Perspective 
After studying the individual Parts as originally programmed, you can better decide what each one’s role will be in the new combination of sounds. As a sound designer, each layer you choose should have a purpose – you want to avoid just stacking sounds willy-nilly; anyone can do that. Each PART you *merge* should have a role that you design for it. In previous articles we saw how you can morph between layers, and how you can use XA CONTROL to have some PARTS strategically change the size of the ensemble during your play (as we did with the String Orchestra Parts). You also gain perspective on what to change to accomplish the sonic results you require.

A Normal Motif XS/XF/MOXF VOICE (now a Single AWM2 Part) has eight Waveforms, one per Element, and alone is capable of being a fully playable sound – complete with its own set of Filters, Envelope Generators, EQs and Effects. It is a waste of resources to just layer 8 KBD CTRL PARTS and play all the layers at all times. While some folks will do that, hopefully, you will find that it is not a contest of how many you can play at once – it is the access to how you can make the sound change, evolve, behave and move! This week we’ll delve a bit more into those fantastic Filters.

“Creepn Worm” is made from the following three PARTS:
PART 1: Long HiPa
PART 2: High Wire
PART 3: 5th Atmosphere
Creepn Worm home page on MONTAGE screen
This is a multi-layered synth pad sound. Each of the three PARTS is layered across all keys and responds at all velocities. One would describe this sound as very synthy. It is a Pad sound, but not particularly string-like, nor particularly brass-like – we describe it as very synth pad-like. It is clearly not trying to emulate any acoustic instrument so much as it is a synth sound for its own sake. The filter movement – if you had to describe it – is atypical, and we will see why. Let’s break it down. We’ll do so by playing the original Single Part as a separate entity:
Creepn Worm home page on MONTAGE with single part indicated
Let’s begin our tour this week by recalling the Single PART PERFORMANCE called “Long HiPa” used in PART 1 of “Creepn Worm”. We want to hear and study the original programming – the sound was a VOICE in the XS/XF/MOXF and is one of the Single Part Performances (green) in the MONTAGE basic factory ROM.

“Long HiPa”

 – Press [CATEGORY SEARCH]
 – In the Search area type in “Long HiPa” to find the Single PART PERFORMANCE:
Performance page on MONTAGE screen
If you are listening in stereo, (and we sure hope you are) the first thing you should notice is that the filter movement is such that the high frequencies are heard first and as time goes on we hear more low frequencies being allowed through. We are hearing the result of the Amplitude Envelope Generator (loudness) and Filter Envelope Generator (timbre change). And there seem to be several distinct areas moving within the stereo field. You can feel it come from the center and split down both sides of the stereo field.

The “HiPa” spelling is a hint that this sound uses “High Pass” Filtering. A High Pass Filter is one that allows high frequencies to pass and blocks low frequencies. The reason this sounds “synthy” and “atypical” to us is that, in the emulative world of sound designing, it is the Low Pass Filter that does the lion’s share of the work when attempting to mimic instrument behavior. This is because, in general, the harder you hammer, strike, pluck, blow or bow a musical instrument, the more high frequencies it produces. Therefore, an LPF made sensitive to Key-On velocity is most often used to create this effect: velocity is used to raise the Cutoff Frequency of the LPF, which means the timbre gets brighter – the harder you play, the brighter the sound. Here, however, we are using the Envelope Generator to create movement of the Cutoff Frequency, and the movement is in the opposite direction. An “envelope” describes movement over time.
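To make the velocity-to-Cutoff idea concrete, here is a minimal sketch; the frequency range and the linear scaling are arbitrary illustration values, not the actual MONTAGE parameter mapping:

```python
def velocity_to_cutoff_hz(velocity: int, floor_hz: float = 800.0,
                          ceiling_hz: float = 8000.0) -> float:
    """Hypothetical velocity-sensitive LPF: harder playing (higher MIDI
    velocity, 0-127) raises the Cutoff Frequency, so the timbre brightens."""
    v = max(0, min(127, velocity)) / 127.0   # normalize velocity to 0.0-1.0
    return floor_hz + v * (ceiling_hz - floor_hz)

# A soft touch leaves the filter mostly closed; a hard strike opens it fully
print(velocity_to_cutoff_hz(0), velocity_to_cutoff_hz(127))  # → 800.0 8000.0
```

In the “Long HiPa” sound, by contrast, it is the Filter Envelope Generator rather than velocity that moves the Cutoff, and on an HPF rather than an LPF.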

The HPF24D name translates to High Pass Filter, 24dB per octave, Digital. For every octave you go below the Cutoff Frequency, the signal is reduced by 24dB. This means that if the Cutoff is at A440 and a signal there reads 0dB on the meter, the “A” one octave below, at A220, is -24dB relative to the A440. Frequencies below the Cutoff Frequency are reduced by 24dB for each octave you go down. This is considered a steep curve (also called a 4-pole Filter).
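That 24dB-per-octave arithmetic can be checked with a few lines of Python (an idealized slope calculation, ignoring the filter’s real response shape near the cutoff):

```python
import math

def hpf_attenuation_db(freq_hz: float, cutoff_hz: float,
                       slope_db_per_octave: float = 24.0) -> float:
    """Idealized high-pass attenuation below the Cutoff Frequency:
    slope_db_per_octave dB of reduction per octave below the cutoff."""
    if freq_hz >= cutoff_hz:
        return 0.0                               # at or above cutoff: passes
    octaves_below = math.log2(cutoff_hz / freq_hz)
    return slope_db_per_octave * octaves_below

# Cutoff at A440: A220 (one octave down) is 24 dB down,
# A110 (two octaves down) is 48 dB down.
print(hpf_attenuation_db(220, 440), hpf_attenuation_db(110, 440))  # → 24.0 48.0
```

A 12dB-per-octave (2-pole) filter would simply halve those figures, which is why the 24dB slope is described as steep.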

Later we will take a closer look into the FILTER TYPE.

MONTAGE screen showing PART 1 - Element 1 = HPF24D.
The High Pass Filter TYPE, shown above, is the one that can be very top-down revealing (pun intended). If you hold a chord you can hear the filter frequency move. That “rip the sky” type sound that is so familiar is accomplished by revealing High Frequency content first and allowing the sound to fill-in harmonically over time. Elements 1, 2 and 3 all use the same HPF24D Type (24dB per Octave High Pass Filter). The typical LPF (as used in Element 4) works in the opposite direction (shown below):
MONTAGE screen showing the typical Low Pass Filter in Edit mode for Part 1 - Element 4.

This Single PART is made up of four Elements. Elements 1, 2 and 3 use identical settings for High Pass Filtering and fill the hard left, hard right and center positions in the stereo field, respectively. Element 4 (center) comes in slowly but has the more traditional LPF – let’s take a look/listen:

 – From the ”Long HiPa” HOME screen. 
 – Press [EDIT].
 – Press [PART SELECT 1].

Extra Credit: Begin to see the lights:
Observe your right front panel lighting:

 – The [PERFORMANCE CONTROL] button is lit brightly.
 – > the [PART SELECT 1] button in row 1 is lit brightly, indicating we have selected PART 1 for editing.
 – > the upper [MUTE] button is lit brightly – this means a lit PART button in the PART MUTE row (row 2) is sounding. [PART MUTE 1] is lit brightly to indicate it is currently sounding.
 – > and on the bottom half of buttons, you can see the lower [COMMON] button is lit – indicating we are looking at parameters related to all Elements together.
 – Row 3 is Element Select 1-8; they all glow. Row 4 reveals, however, that there are only four Elements in this PART – shown by the brightly lit ELEMENT/OPERATOR buttons along the bottom row (four of the possible eight Elements are active).
 – Finally, over on the extreme right side, the lower [MUTE] is lit to indicate that bright lights in row 4 mean active Elements.

More to Explore:
Using the row 4 MUTE 1-4 buttons you can listen to different combinations of the four Elements. If you isolate each Element and play a bit, you will discover that Element 1 and Element 2 are ‘bookends’ – both use the same Waveform type and Filter Type; one is panned left and the other right, slightly detuned from each other to create balance in the mix. They both happen to be made from a Waveform named “Fat Saw Mn” (a fat analog sawtooth waveform, mono). Element 3 is made from a Waveform called “OB Mod Saw” (an Oberheim-type modulated sawtooth waveform) – tuned an octave down and panned dead center. In stereo, you should feel that spacing as the movement takes place in three distinct areas of the panorama. You can hear/feel that there is a Cross Delay happening as well: the repeats pan from side to side as they fade away.

Element 4 stands apart from the others – a Waveform called “Dark Light” is used, but what is different about Element 4 is that its Amplitude (AEG) has a slow attack. As you play, the other three Elements have a rather immediate attack in Amplitude but a slow reveal of frequencies due to filter movement; the three HPF Elements come on immediately with just a thin, high-frequency sound (the lows filtered out), while Element 4 is very full as it comes in but rises in amplitude slowly – so it is no surprise that it is using a Low Pass Filter.

Navigation: Some Things to Explore
Compare the FEG and AEG of Elements 1, 2 and 3 versus that of Element 4. The Amplitude Envelope Generator determines how loudness changes over time. The Filter Envelope Generator determines how the timbre/tone (harmonic content) changes over time:

 – Touch “Amplitude” > “Amp EG”.
 – Select each Element in turn, to compare the graphic representation describing loudness:
MONTAGE screen in edit mode for Part 1 - Element 1 with buttons highlighted for attack and other levels needed for Amp EG.
 – Touch “Filter” > “Filter EG”.
 – Select each Element in turn, compare the graphic:
FilterEG
Understanding the roles of Amplitude and Filter is fundamental to how sounds behave. The Envelope literally shapes the outcome. If the Amplitude EG is not there to support sound while the Filter is being swept, you will not hear the sweep. Recognize that each Element has an individually programmable Filter and Amp. There will be times you want to control them all together and times you want to control each individually – MONTAGE is powerful enough to offer you that control.
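As a conceptual sketch of what an Envelope Generator describes – this is a generic piecewise-linear ADSR for illustration, not the MONTAGE’s actual multi-segment EG:

```python
def adsr_level(t: float, attack: float, decay: float, sustain: float,
               note_off: float, release: float) -> float:
    """Envelope level (0.0-1.0) at time t seconds: rise over `attack`,
    fall to `sustain` over `decay`, hold until `note_off`, then release."""
    if t < attack:
        return t / attack                                 # attack: ramp up to 1.0
    if t < attack + decay:
        return 1.0 - ((t - attack) / decay) * (1.0 - sustain)  # decay to sustain
    if t < note_off:
        return sustain                                    # hold while key is down
    return max(0.0, sustain * (1.0 - (t - note_off) / release))  # release to 0

# Halfway through a 0.1 s attack the level is at 0.5
print(adsr_level(0.05, attack=0.1, decay=0.2, sustain=0.6,
                 note_off=1.0, release=0.5))  # → 0.5
```

The same shape drives either domain: applied to Amplitude it shapes loudness over time, applied to Filter Cutoff it shapes the timbre – a “slow attack” Element like Element 4 is simply one with a large attack time on its AEG.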

When you want to begin studying the CONTROLLER ASSIGNMENTS, we have learned that a good place to start is the OVERVIEW screen:
 – Shortcut: [SHIFT] + [HOME] = INFO (Overview).

OVERVIEW
MW – moving the Mod Wheel will immediately allow you to observe (hear) that it is controlling Filter Cutoff. As we look deeper, we will see it is also controlling Element Level for Element 4 and the Dry/Wet balance of the Tempo Cross Delay. We have seen that there are three HPF24D and one LPF24D active in this PERFORMANCE. As you move the MW up and finally reach maximum, you will notice the different segments disappear; the delayed echoes are the last to go. Rapid movements of the MW while playing will reveal these delayed repeats and the complete ‘cutoff’ of all frequencies as the MW reaches the top. (MW also controls Element Level for Element 4 – the higher the MW, the lower Element 4’s level. Try it: SOLO Element 4 and use the MW to fade it out.)
MWP1
Moving the MW will highlight its assignment. You can see (above) that it is assigned to PART 1. We can view what it is assigned to by setting the PART from “COMMON” to “PART 1”, then touching the “Edit Part 1 Control Settings” box:
MWassignHiPa
Make sure “Auto Select” is active (green) allowing you to move a Control and view its assignments.

Move the cursor to highlight the different DESTINATION boxes; this will allow you to view the individual settings for each assignment. You would read the above screen as showing: the MW (Source) is assigned to three Destinations – 1, 2 and 4 – in this PART. “Destination 1” is written in blue, indicating you are currently viewing its Curve, Polarity and Ratio settings. It is changing CUTOFF for the Elements selected (Element SW); highlight “Destination 2” to discover it is changing ELEM LEVEL for Element 4. Highlight “Destination 4” – it is changing the Dry/Wet balance of the Tempo Cross Delay. The single gesture of moving the MW can be scaled so it addresses each parameter as much or as little as you might require. This aspect is important in customizing sound behavior; you can move the MW a certain distance and have that gesture change each of the assigned parameters by the exact amount you desire. Scaling the response of controllers to your own playing/performing style should be part of your personal goal.
Hint: use the RATIO parameter to set the maximum depth… move the MW the distance you find comfortable for maximum, then set the Ratio to match your expectation for that maximum. Use PARAM 1 to shift the weight of how the depth is applied. For example, with the “Standard” CURVE currently shown, the higher the value for “Param 1”, the later in the movement of the MW the change is applied.
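If you think of it as arithmetic, the idea is simple. Here is a conceptual sketch in Python – not the MONTAGE’s actual firmware math. The `scaled_offset` function, the exponent treatment of Param 1, and the Ratio values are all hypothetical, chosen only to illustrate how one gesture can be scaled differently per Destination:

```python
# Conceptual sketch only -- NOT the MONTAGE firmware's actual math.
# One Mod Wheel gesture is scaled independently per Destination via a
# Ratio (depth) and a curve; modeling "Param 1" as an exponent that
# pushes the change later in the wheel's travel is an assumption here.

def scaled_offset(mw, ratio, param1=1.0):
    """mw: 0-127 wheel position; ratio: signed depth; param1 >= 1
    delays the response toward the top of the wheel's travel."""
    normalized = (mw / 127.0) ** param1
    return round(normalized * ratio)

mw = 96  # one wheel position drives three hypothetical Destinations
print(scaled_offset(mw, ratio=-48))            # Cutoff offset: -36
print(scaled_offset(mw, ratio=-32))            # Element 4 Level: -24
print(scaled_offset(mw, ratio=20, param1=3))   # Delay Dry/Wet, applied late: 9
```

Notice how the third Destination, with its higher “Param 1”, has barely moved even though the wheel is three-quarters of the way up – that is the “later weighting” in action.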
 
RIBBON – the Ribbon is responsible for increasing (to the right) and decreasing (to the left) Filter Resonance. The Resonance occurs at the Cutoff frequency of the filter, and it is that frequency that is louder (more pronounced) than all others. You see the Resonance as a “peak” or mountain in the filter graphic at the Cutoff Frequency; it indicates that this frequency is emphasized (louder) than all others at this moment. You understand resonance from basic elementary school science – every object has a resonant frequency. Place a glass that resonates at a particular frequency in front of a speaker, and when you match that frequency at the appropriate amplitude, you can break the glass. Every room (unfortunately) has a frequency that seems to resonate (is naturally louder than all others) – as a musician you try to avoid rooms with too much resonance (like gymnasiums). We often describe too much resonance as a “howl” – “…that one frequency was so much louder than the others, it absolutely was howling!” Microphone feedback is a runaway resonant peak. Always be careful with resonances – they can be startling, and just like microphone feedback, they can get away from you and run wild… “a little dab will do ya…”
Ribbon
ASSIGN KNOB 1 – E.LFO PMD (Element LFO Pitch Modulation Depth) – when using AUTO SELECT to view the Assign Knobs remember the [ASSIGN] button must be lit to allow the Rotary Encoders to represent the ASSIGN KNOB function.

ASSIGN KNOB 2 – FEG Decay 1 (Filter Envelope Generator Decay 1) – here you are controlling the envelope, or shape, of the movement of the Filters. This parameter is assigned to all four Elements (Element SW is ON) – so this means that all the Filters (the 3 HPF and 1 LPF) are all affected by this setting.

Let’s take a look at one of the Element Filter EGs:
FEG decay1
 – Navigate to Element 3 and let’s SOLO it so we can hear exactly just what is happening.
 – To SOLO an Element while in EDIT use the lower SOLO button (extreme right front panel) and press an ELEMENT Select 1-8 button (third row of buttons). Or alternatively, you can simply MUTE Elements 1, 2, and 4.

 – Touch “Filter” > “Filter EG”. 
 – “FEG Decay 1” is set to 71 (shown above). As you move the highlight to this parameter, notice the white square in the graphic moves to indicate exactly where the “Filter Envelope Decay 1” value currently sits. If you increase this toward 127 using the DATA DIAL, the filter movement will take a longer time to release the low frequencies. As you decrease it from 71, you will notice that the movement is much quicker. The programmer decided that 71 was the speed at which they wanted the filter movement to initially take place. Reset the value to 71.

”Long HiPa” – the name now fully revealed: a long Filter Envelope moving a High Pass Filter.

This is also the case for the HPF24D in Elements 1 and 2, because the Assign Knob is set to control all Elements and their Filter Envelopes together. This would be true even if the individual Elements’ Filters were different Types – the Assign Knob works to OFFSET them all in a proportional manner.

When this parameter is assigned to a Controller (as it is to ASSIGN KNOB 2) moving Assign Knob 2, when the PART is “selected”, means you will be offsetting the “Filter Envelope Generator DECAY 1” parameter for each active Filter:
FEGassign
Above you can see that when viewing the ASSIGN KNOB 2 Control Set, you see each of the Element Switches is ON – meaning its Filter Envelope will respond to this change: Clockwise to increase time and counterclockwise to decrease the time.

To understand the use of POLARITY = BI – this bipolarity allows us to work both above and below the initial stored value. We know that “FEG DECAY 1” was set to a value of 71 (done by ear, the programmer’s choice for the starting point); so Assign Knob 2 at the 12 o’clock position will represent the stored value (71). Turning it clockwise will increase the time – exactly like raising each Filter’s Decay 1 value toward 127 – and turning it counterclockwise from the 12 o’clock position will be exactly like reducing the Decay time from 71 for each of the Element Filters.
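As arithmetic, the bipolar behavior is easy to model. This is just a sketch of the behavior described above (the `bipolar_offset` function is hypothetical, not Yamaha’s code):

```python
# Sketch of POLARITY = BI (a model of the behavior described, not Yamaha's
# code): knob center (64 on a 0-127 scale, i.e. 12 o'clock) leaves the
# stored value untouched; clockwise adds, counterclockwise subtracts.

STORED_FEG_DECAY1 = 71  # the programmer's "by ear" starting value

def bipolar_offset(knob, stored=STORED_FEG_DECAY1):
    offset = knob - 64                        # signed: negative below center
    return max(0, min(127, stored + offset))  # clamp to the 0-127 range

print(bipolar_offset(64))   # 71  -> 12 o'clock reproduces the stored value
print(bipolar_offset(127))  # 127 -> fully clockwise, clamped at maximum
print(bipolar_offset(0))    # 7   -> fully counterclockwise
```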

Note: don’t place too much emphasis on the value as a number – it is just a number on a non-linear scale from 0-127. A Time value of 0 is immediate, and a Time value of 127 is a significantly longer time. It is not a linear scale and is to be set “by ear”. 

Non-linear means the distance between any two integers is not always equal. The Time between 20 and 21 is not at all equal to the Time between value settings of 89 and 90. Music is best performed by ear and so is sound designing!
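Yamaha does not publish the exact time curve, but an exponential mapping – a common choice for envelope times – shows why equal value steps are not equal time steps (the constants here are assumptions, for illustration only):

```python
import math

# Illustration only -- the exact MONTAGE time curve is not published.
# An exponential mapping is one common way a 0-127 value maps to time,
# and it shows why equal value steps do not cover equal amounts of time.
def value_to_ms(v, t_min=1.0, t_max=20000.0):
    return t_min * (t_max / t_min) ** (v / 127.0)

step_low = value_to_ms(21) - value_to_ms(20)    # one step near the bottom
step_high = value_to_ms(90) - value_to_ms(89)   # one step near the top
print(f"20->21 adds {step_low:.2f} ms; 89->90 adds {step_high:.1f} ms")
```

The same one-value move covers vastly more time near the top of the scale – which is exactly why these settings are made “by ear”, not by the numbers.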

The Assign Knob is like a ‘super knob’ within the PART – because it is controlling multiple parameters across multiple Elements with differing values.

High Wire

“High Wire” is the name of the sound in PART 2 of the “Creepn Worm” Performance and is made with three Elements.

Using [CATEGORY SEARCH] let’s recall the original program as a Single Part Performance. Remember each Part is available in your MONTAGE as a separate Single Part Performance. By recalling it as a Single you can learn what the original programmer was interested in getting from this sound:
HighWire
 – Element 1 = Fat Saw St (AsSW1 Off).
 – Element 2 = StrongDetuned Pad St (AsSW1 On).
 – Element 3 = P5 SawDown 0 dg (AsSW2 On).

Each is assigned to only sound under specific conditions (according to the Element’s XA CONTROL setting). Element 1 will only sound when both Assign Switches are OFF. Element 2 will only sound when Assign Switch 1 is ON, and Element 3 will only sound when Assign Switch 2 is ON. This means you can have Element 2 and 3 both sounding when both Assign Switches are lit brightly:
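The allocation logic just described can be modeled in a few lines (a sketch of the behavior, not Yamaha’s implementation):

```python
# A model of the XA CONTROL allocation described above (not Yamaha's code):
# each Element only sounds when its Assign Switch condition is met.

def sounding_elements(as_sw1, as_sw2):
    elements = []
    if not as_sw1 and not as_sw2:
        elements.append("Elem 1: Fat Saw St")            # both switches OFF
    if as_sw1:
        elements.append("Elem 2: StrongDetuned Pad St")  # [ASSIGN SW 1] ON
    if as_sw2:
        elements.append("Elem 3: P5 SawDown 0 dg")       # [ASSIGN SW 2] ON
    return elements

print(sounding_elements(False, False))  # only Element 1
print(sounding_elements(True, True))    # Elements 2 and 3 together
```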
XACtrl AllElem
 – From the HOME screen: Press [EDIT] > Press [PART SELECT 1] > touch “All” > touch “Osc”.

 – Play while viewing the flashing Element activity indicators (Elem1, Elem2, and Elem3) along the bottom of the screen until you are clear about how XA CONTROL allocates Elements. Rather than just layering these three Elements, the original programmer’s idea was to provide tonal variation – a choice. An Element only uses polyphony when it is requested to sound: when its parameter requirements are met (XA Control, Velocity Limit and Note Limit).

The other important thing to know about XA CONTROL is that it is always “sonically invisible”: when an Element is activated or deactivated you will not hear any interruption or glitch in the sound. Although XA CONTROL is responsible for whether a particular Element makes sound, it is a thoroughly musical control – unlike Element On/Off or even the [MUTE] buttons, which are NOT “sonically invisible”. An ON/OFF switch is immediate and therefore cuts off the sound. A MUTE, similarly, disconnects audio. They interrupt musical flow. XA CONTROL does not!

Instead of using the [MUTE/SOLO] buttons to isolate Elements, use the ASSIGN SWITCHES. The Switches will determine which Element is sounding. If you use the SOLO button, you still must meet the requirement of the Assign Switches to activate an Element.

Let’s navigate to the PART 1 Element screens and look at the “Filter” > “Type” screen:

Element 1 – Dual Band Pass Filter: (AsSW Off)
When you initially recall this Single Part Performance, you are hearing just Element 1 alone. The Waveform is called “Fat Saw St”, well… your ears should be telling you that this sounds anything but “fat” – it sounds very filtered. Thin, in fact. You will not be surprised that Element 1 features some filtering. It has a Dual BPF (or “Dual Band Pass Filter”). A Band Pass Filter is one that filters both high frequencies and low frequencies and allows only a band of frequencies in the middle to pass. The “Dual” means you have two independent Cutoff Frequencies, each with its own “resonant peak”. The “Distance” parameter is used to separate the two peaks:
DualBPF
 – Move the cursor to highlight the CUTOFF parameter (currently set to 150 on a scale of 0~255).
 – Using the DATA DIAL raise and lower the CUTOFF Frequency while holding a chord. Observe the graphic.
 – Change the DISTANCE between the Resonance peaks, then, again, move the CUTOFF up and down. You can hear the two distinct sweeps.
 – Play-Listen-Learn: Play with it, Listen to the results, Learn by doing.
 – If you press the [EDIT] button while in EDIT, you get the COMPARE function, which will show you the original value for each item (in case you want to ‘put it back’ after experimenting). You cannot change any values while in COMPARE mode (the button will flash to indicate it is read-only) – you view the stored setting, disengage the EDIT/COMPARE feature, and then you can return the value to its previous state.

Element 2 – Dual Band Eliminate Filter: (AsSW1 On)
When [ASSIGN SW 1] is ON, Element 2 sounds. The Waveform is called “StrongDetuned Pad St” and features yet another Filter Type, the DUAL BEF (or “Dual Band Eliminate Filter”). A Band Eliminate Filter reduces the output of a specific band of frequencies – like a notch that removes frequencies from one area of the spectrum. The “Dual” means you have two independent notched Frequencies:
DualBEF
The Dual Band Eliminate Filter has two valleys and is in contrast to the Dual Band Pass Filter. The parameters are the same but the results are to remove two frequency bands. As the CUTOFF is varied you should hear the two valleys of frequencies eliminated. A lot of the filter movement in this sound is due to the Filter Envelope Generator (FEG) and there is a small amount of control also applied by the MW.
 
Element 3 – Low Pass Filter: (AsSW2 On)
When [ASSIGN SW 2] is ON, Element 3 sounds. The Waveform is called “P5 SawDown 0 dg” (a Prophet-5 sawtooth wave; the ‘down’ is the direction of the teeth, the zero degrees is the phase of the wave). Your ears should immediately recognize that the fullness of the initial sound probably means some type of LPF is being applied to this Element. And sure enough, it is an LPF24D. As we know, a Low Pass Filter allows lower frequencies to pass and initially blocks the higher harmonics:
LPF highWire
The more familiar LPF on Element 3. Apply your same experiments here – move and observe CUTOFF. Raise and lower “Resonance” to understand what is meant by “resonant peak”. With a prominent peak showing, move the CUTOFF up and down again with the DATA DIAL… listen.

The CUTOFF/KEY parameter is an offset centered around middle C that can attenuate the filter’s application across the keyboard. When Cutoff Key Scaling is at 0%, if the Resonance peak is boosting the frequency “A440”, then by the time you play up to the “A” at 880Hz, the signal will have died down by a cool -24dB (and will be hardly audible). If Cutoff/Key is set to 100%, then as you go up the scale, the filter’s reduction of higher harmonics is offset.
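The “-24dB” figure comes from the filter’s slope: the “24” in LPF24D (and HPF24D) indicates roughly 24dB of attenuation per octave past the cutoff, and 880Hz is exactly one octave above 440Hz. A quick check of the arithmetic:

```python
import math

# Back-of-envelope check of the -24 dB figure: a 24 dB/octave filter
# attenuates roughly 24 dB for each octave past the cutoff, and 880 Hz
# sits exactly one octave above 440 Hz.

def rolloff_db(freq, cutoff, slope=24):
    octaves_past = max(0.0, math.log2(freq / cutoff))
    return -slope * octaves_past

print(rolloff_db(880, 440))    # -24.0 dB, one octave past the cutoff
print(rolloff_db(1760, 440))   # -48.0 dB, two octaves past
```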

When extreme high frequencies are “out of control”, you can shape the response with the SCALING parameters – CUTOFF/KEY addresses the effectiveness of the filter across the keyboard scale.

Experiment with CUTOFF/VEL and RES/VEL. The higher the value, the more Velocity will influence the result. This means that as you raise this value, it will take MORE velocity to get the filter to open. You are moving the target farther away, so more effort is necessary; when no velocity is needed, it is easy to get the filter to respond. A positive value makes it harder (requiring more velocity) to achieve the result. (To some this seems backwards, but it is not – it is how it works!)

Experiment and Learn
Select an Element situation using the Assign Switches, which, via the Elements’ XA CONTROL, determine which Element is sounding. On the Element’s “Filter Type” screen, play with the Cutoff, Resonance, Velocity and Scaling parameters to get a feel for what each of these three Filter Types can offer. The “TYPE” screen graphic represents the Frequency spectrum on the x-axis (left-to-right); the shaded areas are the Frequencies allowed to pass through. Amplitude – how loud each frequency is – is represented on the y-axis (up/down). The Resonance occurs at the Cutoff Frequency and is represented by a peak at that point when increased. As you raise and lower the “Cutoff” value you are changing where along the frequency spectrum the filter starts to be active in its role of removing frequencies. Play with shapes, experiment with Cutoff and Velocity Sensitivity.

Play across the keyboard – get a sense of how Frequency (left-to-right in the graphic) relates to keyboard octaves (left-to-right on the keys). An acoustic piano can play fundamentals starting at the lowest “A” (27.5 Hertz) through to the highest “C” (4186.01 Hertz). The graphic changes on your screen can help you visualize what is happening to the sound. Play across the keyboard – make sure your eyes see what your ears are telling you. For example, if you create two frequency peaks or two valleys, do you hear that as you play a scale up and down the keys? Start to relate a Frequency to the notes you are playing. Knowing what to listen for will help you when making your own decisions in sound designing. Having different Filter Types active within your sound means you can create “Motion” by finding ways to move one sound into the other. Controllers and controller assignments should be seen as your personal interface for “performing” the sound.

Just FYI: the normal frequency response for humans is 20 Hertz to 20,000 Hertz… meaning you can hear a bit over two octaves above the highest note on the piano (subtract high frequencies due to age, and how much loudness abuse you have subjected your ears to…)
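A quick sanity check of that octave count (the exact figure depends on where you place the 20kHz ceiling):

```python
import math

# How many octaves lie between the piano's top C (~4186 Hz) and a
# nominal 20 kHz hearing ceiling? Each octave doubles the frequency,
# so the count is a base-2 logarithm of the frequency ratio.
octaves = math.log2(20000 / 4186.01)
print(round(octaves, 2))  # a bit over two octaves
```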

Extra Credit: Fun stuff 
Even before MONTAGE introduced us to the Motion Sequence in this engine, the XS/XF and MOXF had a special type of Arpeggio called the “Ctrl” Type, which, instead of featuring Note data, includes only Controller data. In order to have an Arpeggio work that has only Controller parameters, you must apply it to a sound. With normal Note Arps, your direct key press is not heard; it is used to “instruct” the phrase. A “Ctrl” Arp PART should be set so that KEY MODE = “Direct”: this allows you to hear what you trigger with Key-On while the controller data is applied to the sound.

Among the 10,231 Arpeggios are “Ctrl/Hbrd” Types (Controller/Hybrid) – fun stuff!

Let’s return to the “High Wire” [PERFORMANCE (HOME)] screen:
 – Press [PART SELECT 1].
 – Make sure the [ASSIGN] button just to the left of the SUPER KNOB is lit.
 – Your screen should show the “Part 1 Assign” Knob icons: 
AsKn2arp
As you play notice that Assign 2 is in motion. It is following the “control” arp. It is applied only when you are triggering notes on the keyboard.

See below – these are the Arp Types assigned to this “High Wire” Part. You can use the [ARP SELECT] buttons to preview the movement types.
ArpTypes
The “Category” is Ct/Hb or Controller/Hybrid type Arpeggios. These do not necessarily contain the normal Note data, and some are velocity-zoned so that only specific Elements will be triggered by the data.

“AS2TriF4” and “AS2TriF1” – these move the Assign Knob from 0 through 127 via a Triangle Wave: as soon as it reaches maximum, it starts back down from 127 to 0. The F4 and F1 are fairly obvious: the speed control, where F4 takes four times the amount of time as F1.
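The triangle movement is easy to picture in code. This sketch (illustrative only – not the Arp engine itself) ramps a knob value from 0 to 127 and back, with “F4” simply using a period four times longer than “F1”:

```python
# Sketch of the triangle sweep described above: the knob value ramps
# 0 -> 127 -> 0 over one cycle; an "F4" phrase uses a period four times
# longer than an "F1" phrase. The step counts here are arbitrary.

def triangle_value(step, period):
    """Knob position (0-127) at a given step within a cycle of `period` steps."""
    half = period // 2
    pos = step % period
    rising = pos <= half
    return round(127 * (pos / half if rising else (period - pos) / half))

f1_period = 8
f4_period = 4 * f1_period  # F4 takes four times as long per cycle
print([triangle_value(s, f1_period) for s in range(f1_period + 1)])
print(f"F1 cycle: {f1_period} steps, F4 cycle: {f4_period} steps")
```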

“AS2Env1-3” are different shapes of movement, thus the Env (envelope). These are different shapes to create shimmering and rhythmic effects.

What you apply them to is really a creative decision. You can choose to change almost any assignable parameter. The AS2 Knob defaults to sending cc18 – this is used to animate the movement of the knob, so the movement is recordable to an external DAW.
 
On the “Arpeggio” > “Common” screen (shown below), the KEY MODE = Direct.

This allows the keys that you press to trigger the MONTAGE Tone Generator directly; at the same time the ARP is applying CONTROL messages to move Assign Knob 2 in various ways:
KeyModeDirect
This is necessary for this type of ARP. Since there are no note-on events in the arpeggio data, you must provide the note-ons; the Controller data is then applied to the notes that you play. You can, hopefully, now begin to see the similarity between ARPS of this “Control” Type and what we now know in MONTAGE as Motion Sequences. Motion Sequences are Control messages that can automate movement of parameters – but with much more control over the details of that application.

In this Single Part Performance, the Arp is assigned to such a set of Control Type arpeggios. Ok, let’s have some fun:

 – From the HOME screen.
 – Press [PART SELECT 1].
 – Make sure the [ASSIGN] button is lit (this means the 8 Rotary Encoders will function as the 8 Assign Knobs).

You should be able to see the “Part 1 Assign Knobs” in the screen just below the Performance Name “High Wire”.

If the master [ARP ON/OFF] and the Part Arp Switch are active, Assign Knob 2 should be in motion. It should do a smooth sweep from minimum (as soon as you ‘key’ a note) toward maximum, 0-127. As long as you hold down a key the motion will continue. You are hearing the keys you play “direct”, and the Assign Knob is applying a change to a parameter Destination.

Currently, the Assign Knob 2 is assigned to the VAR SEND (”Ensemble Detune”); the Arp phrase is increasing and decreasing the ‘send amount’ to this effect. This gives the sound a slowly varying pitch change.

Press [EDIT] > lower [COMMON] > touch “Mod/Control” > “Control Assign”.

While viewing the Control Assign page let’s learn how you can go about changing an assignment:

 – With “Auto Select” active, touch AssignKnob 2 to recall its assignment: VAR SEND.
 – By touching VAR SEND in the Destination box, you recall a pop-in menu allowing you to see potential Destinations. These include parameters from the currently assigned Insertion Effects, Part parameters, and Element parameters:
menuDestinations  
 – Select a Destination – some will make more sense than others. For example, “Pitch” will, as you can imagine, sweep the tuning of the held notes up and down.
 – Play around with the “Ratio” and “Param 1”
 – Then apply a different “Curve” to see just how far you can take this signal control.

Explore. You do not have to come up with anything in particular in this kind of exploration. Often you are just looking to see what happens. The “Ratio” is going to affect how far a pitch variance you get. Sometimes “less is more”, especially with Pitch – intense movement in a small region can be just as interesting, if not more so, than pushing everything to the maximum at all times. But it depends on whether you are going for subtle madness or absolute maximum insanity… it’s a synth, go for it. Be sure to try both. There are no rules except those you set for yourself.

The “Curve” can be one of those doorways that, once you start down the hallway, you may be there for quite a while. I highly recommend you thoroughly explore the Preset Curves before you start making your own. Reason: get an idea of just how much you can bend and shape the movement of this Controller using the ready-made Curves. Each Curve can be shaped by the “Ratio” + “Param 1” settings. “Param 1” weights the movement earlier or later; experiment with its influence on the Controller movement. If you come up with something interesting – please let us hear it!

Try “Part Param” > “Volume”.
Try “Element” > “Cutoff”, “Element Pan”, etc.

While experimenting, don’t forget to move the MW – it is also controlling those CUTOFF Frequency settings of each Filter.
Each [SCENE] button will recall a different ARPEGGIO and therefore a different AssignKnob 2 movement.

Extra Credit: for those who are more advanced with synthesis – you can use this automated Arpeggio to change the Speed of an LFO that you have assigned to perform a task. Say you’ve assigned an LFO to modulate Amplitude at a fixed rate; you can use this Control Arp Type to speed up and slow down that Modulation, or create unorthodox movement in the speed change. Because the XS/XF/MOXF had just two physical Assign Knobs, you will only find Arp Types assigned to control AsgnKnob 1 and 2. When you’re ready, you can create your own Control Arps.

“5th Atmosphere”

Part 3 of the “Creepn Worm” is a sound we have worked with before: “5th Atmosphere” – it was the “Pad” sound in the conversion “Second Breakfast”. This program, with its Elements tuned in musical fifths, was chosen as Part 3 of the “Creepn Worm”:
5thAtmosphere
Rather than post the download .X7B MONTAGE CONNECT file this week, try *merging* these three Single Part Performances yourself – using your own imagination about what things you wish to bring out when they are merged in a Performance.

Methodology: Build your own Multi Part Performance
Recall “Long HiPa”, then touch “+” ADD, merge the “High Wire” PART to PART 2. Work on making these two work together… get creative. Remember what we’ve learned in previous articles in this series: use the ”Mod/Control” > “Receive Switch” to deactivate controllers you would rather not have working, if any. Blend these two sounds into something – see what you can come up with.

Later, if inspiration hits, go to PART 3 and “+” ADD the “5th Atmosphere”. Have some fun while learning synthesis – that is what it’s all about!

Be sure to download the new Version 2.00 on February 7th. To avoid any confusion and incompatibility, we will not include a download with this article. Use the Single PART Performances as found in the Factory Presets. Build your own PERFORMANCE – then after 2.00 comes out we’ll compare what you’ve come up with to the original! Enjoy! — Bad Mister.

Stay tuned for more from Bad Mister. And in the meantime, join the conversation about this lesson on the Forum here.

© 2024 Yamaha Corporation of America and Yamaha Corporation. All rights reserved.