Yamaha is fortunate to have exceptional sound designers creating amazing synthesizer content. Scott Plunkett has a long history with Yamaha going back to the DX7II. More recently, Scott created the Performances in the Chick’s Mark V for MONTAGE Sample and Performance Library. He was also involved in creating Performances for the Bösendorfer Sample and Performance Library, as well as programming many of the onboard MONTAGE Performances. Scott is not only an amazing sound designer, he is also a serious professional keyboardist who has worked extensively with artists like Boz Scaggs, Don Henley, Stevie Nicks and Chris Isaak. Scott is a very busy guy! We were lucky enough to catch up with him for this fascinating interview about his background.
Yamahasynth (YS): What is your musical background?
Scott Plunkett (SP): I don’t come from a musical family. My dad played the organ a little by ear and my mom apparently played the piano, but I never got to hear her play. Fortunately for me, the era I grew up in was the heyday of lounge organ and my dad was a big fan, so he got a Hammond L-100 spinet organ. I loved playing it and trying to imitate the music I heard in movies and TV shows. Eventually, some of the other kids in the neighborhood started taking music lessons, and I wanted to join them. It was a bit of a rocky start – only because my teachers tried to get me to really dig in to music at a time when I was young and still interested in sports and hanging out with my friends. I stopped playing for a few years, but then I heard the first Doors album, with that Vox Continental that sounded like some kind of carnival gone wrong! After that, playing in a band was all I wanted to do. I got serious and found some great teachers. I was doing pretty well working with bands and wasn’t sure what else I wanted to do with music, so I never had much formal training. I took music theory and other music classes in college, but music wasn’t my major. What’s interesting to me now about that time was that the music department insisted that music majors take a science class. Since I was still unsure about my major, I decided to keep my options open and took a physics class called “Acoustics”. I wasn’t even a little bit enthusiastic when I signed up and was worried that it would be hard for someone with very little science background. It turned out to be one of the most influential classes in my life. The things I learned in that class were the foundation of everything I’ve ever done in sound design. I remember that we spent one class just listening to pieces of very early electronic music. I’d never heard anything like that! I think the Minimoog had just been released and new sounds were showing up in all genres of music. It was an exciting time and I was completely fascinated.
YS: When did you start doing sound design?
SP: As I mentioned before, the Minimoog came out when I was in college in the early ’70s. A friend of mine got one and we would spend hours playing with it. I eventually bought one and started learning about subtractive synthesis in general. At the time, I was mainly interested in making the sounds I was hearing on the radio and making orchestral type sounds to use in the bands I was playing with. Since early synths didn’t have any kind of preset ability, I was forced to learn exactly what every knob and switch did so I could change sounds between songs. At the time, the lack of presets seemed like a massive liability, but it turned out to be a blessing because it gave me the incentive to learn exactly what was going on. Since pretty much all subtractive synthesizers share the same fundamental structure, everything I learned on my Minimoog helped me work with the other analog synths I eventually owned. I think keyboard players from my generation are pretty lucky. We were around when all of these technologies were first introduced. If I had to start sound design right now and was faced with all of the sound making possibilities we have at our disposal, I might be too intimidated to get started. I saw sampling develop from the Fairlight to the early Emulators, learned FM programming with the DX7 and messed around with early additive and FM with a friend’s Synclavier. Even a lot of the digital technologies we take for granted now, like granulators and spectral effects, were just esoteric little software “projects” that were complicated and didn’t work in real-time. However, those projects gave us our first glimpse at what was around the corner so we were ready for them when CPUs were fast enough to make useful effects out of them. So, my education in sound design techniques was a gradual process where I got to learn each new thing as it came along.
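The shared structure Scott is describing – the reason everything he learned on the Minimoog transferred to other analog synths – is the classic oscillator, filter, and envelope signal path. Here is a minimal, purely illustrative Python sketch of that path (none of these names come from any particular instrument): a sawtooth oscillator feeds a one-pole low-pass filter, and a simple decay envelope shapes the level.

```python
import math

def saw(freq, t):
    """Naive sawtooth oscillator: a ramp from -1 to 1 each cycle."""
    phase = (freq * t) % 1.0
    return 2.0 * phase - 1.0

def subtractive_voice(freq, cutoff, sr=44100, seconds=0.5):
    """Oscillator -> low-pass filter -> amplitude envelope:
    the basic subtractive signal path."""
    # One-pole low-pass coefficient derived from the cutoff frequency.
    a = math.exp(-2.0 * math.pi * cutoff / sr)
    out, y = [], 0.0
    for i in range(int(sr * seconds)):
        t = i / sr
        x = saw(freq, t)            # harmonically rich source
        y = (1.0 - a) * x + a * y   # filter removes upper harmonics
        env = math.exp(-3.0 * t)    # simple exponential decay envelope
        out.append(y * env)
    return out

samples = subtractive_voice(110.0, cutoff=800.0)
```

Lowering `cutoff` darkens the tone without changing the oscillator at all, which is the "subtractive" idea in a nutshell: start bright, carve away harmonics.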
YS: What were the first Yamaha synthesizers you worked on?
SP: When I started touring in 1980, I worked with Boz Scaggs, who did a lot of work with the guys in Toto. Since those guys used the CS80 quite a bit, he had one on the tour. I was totally excited to get to work with it, since I couldn’t afford one of my own. I still remember being blown away at how huge it sounded! For me, like just about everyone else who was around at that time, the DX7 was the first “have to have it” Yamaha synth. It’s probably hard for most people to understand now, but for most of us, this was the first affordable digital synthesizer that any of us had seen. Other digital synths were more than $10,000, and analog polysynths with no more than 8 notes of polyphony and no velocity sensitivity or aftertouch were $4,000+, so it was unbelievable that we could buy a digital synth with 16 notes of polyphony, velocity sensitivity and aftertouch for $2,000! They were sold out in the U.S. almost immediately, so when I was touring in Japan, I went to a store and bought two of them. I stayed up all night tweaking sounds, and I think I had it up and running in the show a few nights later. FM synthesis was pretty different from analog synthesis, but it sounded so unique, I felt like mastering it was well worth the time it took. I eventually learned enough about FM that I started trading DX7 Voices with other musicians, and that’s when I got connected to people at Yamaha. My first programming job was for the DX7II.
I remember going to Japan and sitting in a room with David Bristow and Gary Leuenberger, the guys who programmed the original DX7, and feeling like I was way out of my league – which turned out to be true! But, those guys couldn’t have been nicer, and I learned a lot from just watching them and listening to them talk about FM with one another. I think I only got a few patches in the final factory set and was pretty sure that would be the end of my Yamaha programming experience, but everyone there was very encouraging and kept me involved as other projects developed.
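The FM synthesis Scott learned on the DX7 boils down to one idea: one sine wave (the modulator) varies the phase of another (the carrier), and the modulation index controls how many sidebands – how much brightness – appear. This is a generic two-operator sketch in Python, not Yamaha's actual implementation; the parameter names are illustrative.

```python
import math

def fm_sample(t, carrier=440.0, ratio=2.0, index=1.5):
    """Two-operator FM: the modulator's output is added to the carrier's
    phase. `ratio` sets the modulator frequency relative to the carrier,
    and `index` sets the modulation depth. With index=0 the output
    reduces to a plain sine wave."""
    mod = math.sin(2.0 * math.pi * carrier * ratio * t)
    return math.sin(2.0 * math.pi * carrier * t + index * mod)

def render(seconds=0.01, sr=44100, **kw):
    return [fm_sample(i / sr, **kw) for i in range(int(sr * seconds))]

bright = render(index=4.0)  # deeper modulation, more sidebands, brighter
pure = render(index=0.0)    # no modulation: an unmodulated sine wave
```

Sweeping `index` over time (with an envelope, say) is what gives classic FM sounds their dynamic, evolving brightness – something a static filter sweep only approximates.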
YS: Tell us a bit about your approach to sound design.
SP: For me, there are basically two different kinds of sound design: directed and undirected. Directed sound design is where I have a specific sound in mind or am working on making a sample set sound and feel like the original instrument. I think of it as directed because there are specific goals and the result can be judged as right or wrong depending on how close it is to meeting those goals. Undirected sound design is what happens when I get tired of the discipline of directed sound design and just start exploring to see where ideas take me. I may want to explore a new feature on a synth, which will make a sound that I like, but hey, what would happen if I modulated that with this LFO over here? Hours can go by when this stuff gets started and it doesn’t always end with a useful sound, but it’s always fun, plus I find that I learn things that help me program other sounds in the future. I take different approaches when programming sounds that I’ll use myself for a show or recording project and when I’m programming sounds for a factory set. When I’m programming for a specific song, I only care about making the sound fit into the track. If that means I have to remove all of the bottom end or add more detuning than I’d usually like in order to make the sound work, that’s what I’ll do. I also don’t worry much about assigning lots of controllers or finishing all the details. I do just enough to make the sound function in the song and then I move on to another sound. On the other hand, when it’s something for my own use, I have no problem with making a sound, recording it and then mangling the audio to make something new or something that complements the original sound. I’m fine with spending quite a bit of time on a part as long as I’m getting interesting results. With the tools we have these days, it isn’t hard to think of things to try. Sometimes, the hardest part of sound design is knowing when to stop!
When I’m working on sounds for a factory set, I have no idea specifically how they will be used, so I try to make them sound good on their own and then include useful controller assignments so the user can make adjustments for their specific situation. I know that all of us who programmed MONTAGE spent quite a bit of our programming time considering what assignments would be most useful for the eight assignable knobs in each part and then how these would be used to make changes with the Super Knob. It really took a lot of planning ahead, but we wanted each program to have plenty of useful variations. These are the kinds of details that I might skip for personal sounds, but are always necessary for factory sounds.
YS: What are some of the interesting avenues you have found working with MONTAGE?
SP: I think programming MONTAGE was a little intimidating compared to the MOTIF series because it had so many new features and so much more power. Even the basic architecture of a Performance on MONTAGE is more powerful than 8 Voices in a MOTIF. Add in all the variations you can do with the Part knobs and Super Knob, and you can spend days on just one Performance. One Performance can do as much or more than almost an entire bank of MOTIF sounds! Of course, the FM engine was a big addition. One of my favorite MONTAGE tricks is assigning individual Operator volume and pitch to various controllers to give the user the ability to change the sound dramatically without forcing them to learn FM. I’m also a big fan of layering, so it’s nice to have something like FM that’s very malleable to add to sample-based sounds, which sometimes can be a little too static. But, I think the new feature I wound up using the most might be the Motion Sequences. I got particularly fond of using them to modulate effects. For instance, the Control Delay changes pitch like a tape delay when the delay time is changed. If you pick an unusual Motion Sequence shape to modulate it, maybe only at certain velocities, you can make a subtle or dramatic pitch shift. If you run that into another long delay or reverb, you can get an eerie sound with some bizarre pitch shifting. Or, you could duplicate the part, turn off the Motion Sequence in the copied part and get a nice pseudo random detuning effect.
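The tape-delay pitch effect Scott describes has simple math behind it: when a delay line's delay time D(t) changes while audio plays, the output y(t) = x(t − D(t)) is time-scaled by a factor of 1 − D′(t), so pitch scales by the same factor. A hedged Python sketch of just that relationship (the function name is illustrative, not a MONTAGE parameter):

```python
import math

def pitch_shift_semitones(delay_rate):
    """Pitch shift produced by a delay whose delay time changes at
    `delay_rate` seconds per second. Since y(t) = x(t - D(t)) is
    time-scaled by (1 - D'(t)), pitch scales by that same ratio."""
    ratio = 1.0 - delay_rate
    if ratio <= 0.0:
        raise ValueError("delay time growing faster than real time")
    return 12.0 * math.log2(ratio)

# A static delay time leaves the pitch unchanged:
print(pitch_shift_semitones(0.0))   # 0.0
# Delay time growing at 0.5 s per second halves the playback speed:
print(pitch_shift_semitones(0.5))   # -12.0 semitones (down an octave)
# A shrinking delay time raises the pitch:
print(pitch_shift_semitones(-0.5))  # ~ +7.02 semitones
```

This is why modulating delay time with an oddly shaped Motion Sequence, as Scott suggests, yields those bending, tape-like pitch artifacts: each change of slope in the sequence is a different momentary transposition.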
YS: We just released Chick’s Mark V for MONTAGE and I love what you have done. Can you describe your approach to this project? (Editor’s note: All of Scott’s editing notes are available when you download Chick’s Mark V for MONTAGE.)
SP: When I’m working on a sample library, there’s always Performance #1 that’s meant to be the most realistic version of the original instrument. I spend a lot of time trying to get that right, because it’s the foundation for most of the Performances that follow. That usually means listening closely to examples of the instrument type or, if I’m lucky, recordings of the particular instrument that produced the sample set. In this case, I was pretty far along with the first Performance and decided I would listen to Chick’s Mark V on recordings to see if it was usually through an amp or typically had effects I should know about. I pulled up part of a live show in iTunes and was amazed at how close the sample set sounded to the real EP when, all of a sudden, Chick started doing a bunch of pitch bending on his solo, and I realized I wasn’t hearing his Mark V at all – it was the same sample set on the MOTIF! The good news was that it wasn’t too hard to hit that target sound. My goal with that set was to make things I thought Chick might use and to cast a wider net to include sounds that would work for the rest of us who aren’t quite able to play the kind of music he plays, but might want to use his Rhodes sound for other types of music. I read interviews with him and he talked about running through rental amps and experimenting with ring modulators in his early days with the Rhodes. I made sure those kinds of things were included. He also did a lot of soloing on the Minimoog, so it was obvious that a split performance with EP and solo synth would work. I usually include some Performances that layer EPs with pads, other pianos and strings because these combinations are pop music staples. I also include Performances with pedal effects like chorus, phasers, wah wah and rotary effects because these were the typical effects used on Rhodes pianos in their heyday. 
About the time I finish with the obvious things that I think people will want, I try to come up with things that are more creative. I like ambient things, so I might make the Rhodes into more of a pad with lots of filtering, delay and reverb or make the EP sound older and clunkier to give it a different personality. These are probably the least used of all the Performances I make in this kind of set, but a lot of the time they’re my favorites because they’re unusual or take advantage of a feature that I like but don’t normally get to use.
YS: I know that helping younger musicians and sound designers is important to you. Can you offer some advice from your experiences that you’d like to pass along?
SP: With the tools available to beginners these days, I think it’s possible to get impressive sounding musical results without actually having much formal music training. This isn’t new. I’ve met plenty of pop musicians who don’t know how to read music at all and I still consider them to be great musicians. Good musicians make music naturally and if you’re a talented musician, it’s tempting to ignore the musical training part and just start making music. But sooner or later, the lack of any kind of training can make life difficult, especially when you need to communicate with other musicians. You don’t have to knock yourself out, but time spent on learning to read music and understanding basic music theory will pay off. The day you wind up in a studio with the string section you’ve been dying to put on your song, you’ll be glad when you can look at the score and tell the players that you want that F in the second beat of the third measure to be changed to a D. Also, remember that pretty much anything you hear is notated somewhere. You can look at pieces that you like and see how everything is put together or just look through music to get ideas. The same thing goes for sound designers. It’s easy to just go through presets and tweak knobs that have been pre-programmed to do something useful and feel like you’ve mastered a synthesizer. But, what do you do if someone puts up an initialized voice and tells you to make a big, thick synth pad? The great thing about synthesizers is that you can usually get into an edit mode and actually see the exact values that make up the sound. This is a great way to learn. But also, take the time to learn about the fundamentals of sound. Learn what the harmonic series is all about, then you’ll understand why you can make your synth filter lock on to some weird, but cool, interval way above the note you’re playing and you’ll understand what all those strange looking horizontal lines are in a spectral editor. 
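The harmonic series Scott mentions is simply the integer multiples of a fundamental frequency, which is why the "strange looking horizontal lines" in a spectral editor are evenly spaced. A quick illustrative sketch:

```python
import math

def harmonics(fundamental, count=8):
    """Frequencies of the first `count` partials of the harmonic
    series: integer multiples of the fundamental."""
    return [n * fundamental for n in range(1, count + 1)]

def semitones_above_fundamental(n):
    """Musical interval of harmonic n above the fundamental."""
    return 12.0 * math.log2(n)

print(harmonics(110.0, 4))             # [110.0, 220.0, 330.0, 440.0]
print(semitones_above_fundamental(2))  # 12.0 -> one octave up
print(semitones_above_fundamental(3))  # ~19.02 -> octave plus a fifth
```

The third harmonic lands about 19 semitones up (an octave plus a just-intonation fifth, about 2 cents sharp of equal temperament), which is exactly the kind of "weird, but cool, interval" a resonant filter can lock onto above the note you're playing.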
The more you know, the more creative you become. My favorite moments are when I learn something new and my mind immediately has a “what if” moment. You might not see much of me for the rest of that day…
Want to know more about how Scott uses Steinberg HALion in sound design? Check out that interview here.
Join us for the discussion about this session of “Behind the Synth” on the Forum here – and stay tuned for more coming soon!