Synth Forum


Help with understanding the 32 outputs.

Walter
Posts: 0
Active Member
Topic starter
 

I’ve been studying the programming basics series and whatever other tutorials there are out there on the Montage since I got it last Friday and I’m feeling comfortable moving around the menus. Still have a lot to learn, but enjoying it.
When I tried the unit at Sweetwater GearFest and discussed it with Yamaha technical people, I felt that the Montage was mainly geared towards live performances. After doing 150 performances a year with my band for 40 years, I'm done with live playing and I'm having a blast recording at home. I loved the sound of the Montage and have been playing and studying it for the past week.
The issue I'm trying to get my head around is the 32 audio outputs and the fact that you need to use a special Steinberg driver that turns the Montage into your audio interface. I use an RME UCX audio interface along with Sonar Platinum.
I guess I could just try it, but I'm not sure how this all works. Let's say I have already laid down some tracks in Sonar and want to add some parts with the Montage. If the Montage is now the audio interface, do I still hear the recorded tracks through the Montage interface, or do I hear just the Montage?
I recorded a MIDI track with the Montage using the standard MIDI In/Out and audio into my interface, and it works just fine. Sonar even records the Super Knob positions via SysEx and will play back the sounds with the Super Knob effect changes. (IT WOULD REALLY BE NICE IF THE SUPER KNOB COULD BE PROGRAMMED WITH CONTROL CHANGES!) I understand that this was not the original intent for the Montage. To get the full effect you must play the song (i.e., playing a string section and then a cello section by moving the Super Knob to bring up the cello). But what is the advantage of doing this for a recording (not a live performance), since you could achieve the same effect with "one track, one sound"?
If I wanted to play a piece and have the sound morph between an MCX and an Electric Piano, I could do the same with a MIDI track for the MCX and a duplicate MIDI track for the Electric Piano, then just crossfade the two sounds to get the same effect. Of course, that's only if you could control the Super Knob with Sonar's control changes…:-)
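For anyone trying that duplicate-track approach, here is a minimal sketch of the equal-power gain curves you would draw as volume automation on the two tracks (plain Python, purely illustrative; the track roles are just examples):

```python
import math

def equal_power_gains(position):
    """Equal-power crossfade gains for a morph position in [0.0, 1.0].

    position = 0.0 -> only track A (e.g. the MCX part) is heard,
    position = 1.0 -> only track B (e.g. the Electric Piano) is heard.
    cos/sin keep the combined power constant, avoiding the volume dip
    you would get at the midpoint with straight linear fades.
    """
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)

# Example: gains at the halfway point of the morph
a_gain, b_gain = equal_power_gains(0.5)
print(f"A: {a_gain:.3f}, B: {b_gain:.3f}")  # both ~0.707 (about -3 dB)
```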
Getting back to the 32 audio outputs: what is the advantage of having them? I realize that I probably have better plug-ins for EQ, dynamics, delays, etc., but the stereo out of the Montage sounds pretty good out of the box. For drums, I use Superior Drummer 2.0 or Addictive Drums 2.
Sorry for such a long post, but hope I can get someone to enlighten me. (Bad Mister?)
Thanks
Walt

 
Posted : 01/07/2016 5:59 pm
Bad Mister
Posts: 12304
 

First, let's talk about audio and computers. I'm not good at sugar coating things, so I won't: computers use a driver to help them manage audio and MIDI data in a timely fashion, and the built-in drivers are not sufficient for what we as musicians require. Long story short, each of the low-latency devices we use as audio interfaces has its own driver and its own setup. The audio recorded on one can easily be played back by another, so the fact that you use a different audio interface and driver to record a part of your project should not be a concern. The concern is when you are actually recording; that is when the driver does the heavy lifting. Playback is easier; even the built-in drivers are playback capable.

You simply want to keep track of the sample rate and bit resolution you use when you record, so you don't wind up converting things unnecessarily. Playing back items recorded with other interfaces is not a concern in either direction.
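If you want a quick way to verify that before importing a take, a few lines of Python against the standard-library wave module will report a WAV file's rate and resolution (the file name here is just a placeholder):

```python
import wave

# Hypothetical path; point this at one of your exported takes.
with wave.open("montage_take_01.wav", "rb") as wav:
    rate = wav.getframerate()          # e.g. 44100 or 96000 Hz
    bits = wav.getsampwidth() * 8      # sample width in bytes -> bits
    channels = wav.getnchannels()
    print(f"{rate} Hz, {bits}-bit, {channels} channel(s)")
```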

The only way to take advantage of the Montage's 32 audio bus outputs for recording is to use the special Yamaha Steinberg USB Driver. It is this driver's job to prepare the computer to address the Montage simultaneously as an audio and MIDI interface, with its exact channel configuration.

When you are operating at 44.1kHz/24-bit, the computer sees 32 inputs and returns 6 outputs via USB audio, plus three Ports of MIDI. If you opt for higher sample rates, up to 192kHz, the number of simultaneous Montage outputs drops to four stereo pairs.
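Some rough back-of-the-envelope arithmetic shows why the channel count has to drop at higher rates (my numbers, not an official spec; real USB framing adds overhead on top of this raw payload):

```python
def usb_audio_rate(channels, sample_rate_hz, bits):
    """Raw audio payload in megabytes per second (protocol overhead ignored)."""
    return channels * sample_rate_hz * (bits // 8) / 1e6

# 32 channels at 44.1 kHz / 24-bit vs. 8 channels (4 stereo pairs) at 192 kHz
print(f"{usb_audio_rate(32, 44_100, 24):.2f} MB/s")   # ~4.23 MB/s
print(f"{usb_audio_rate(8, 192_000, 24):.2f} MB/s")   # ~4.61 MB/s

# Keeping all 32 channels at 192 kHz would need ~18.4 MB/s of payload,
# which is why the driver trims the output count as the rate goes up.
print(f"{usb_audio_rate(32, 192_000, 24):.2f} MB/s")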

So when you opt to use a different audio interface (and this is done all the time), you simply have the hassle of reconfiguring your setup. Typically the change of drivers is just a few clicks on the screen; Cubase even allows you to "hot swap" drivers mid-session, no reboot required. But the biggest hassle is usually swapping the audio cables, since whichever device is acting as your audio interface is, by definition, the one connected directly to your sound system.

But that hassle aside, the flexibility of MIDI recording and smooth audio rendering is what the Montage and its audio routing scheme are about. Each Part can be routed either to the Main L&R outputs (which have a USB component) or discretely to any of 30 assignable outputs, configurable as odd/even stereo pairs or as individual mono outs, as required. Drum Kit Parts can even route individual Drum Keys to USB assignable outputs.

The assignable outputs are discrete and remove the assigned instrument from the Main (system) outputs. Each Part has a 3-band EQ PRE the dual Insertion blocks and a 2-band parametric EQ POST the Insert blocks, and then it gets routed out via USB. So you have tons of boutique processing in the Montage's full-featured digital mixer. Like any sophisticated busing system, multiple Parts can be routed to any bus as necessary. The effects on board the Montage will hold their own among your plug-ins.
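To picture that per-Part signal chain and bus assignment in one place, here is a purely hypothetical sketch in Python. None of these names are a real Montage API; it just mirrors the routing described above:

```python
from dataclasses import dataclass

@dataclass
class PartRouting:
    """Toy model of one Part's output path (illustration only)."""
    name: str
    eq_pre: str = "3-band EQ"          # before the dual Insertion blocks
    inserts: tuple = ("Insert A", "Insert B")
    eq_post: str = "2-band parametric EQ"
    bus: str = "Main L&R"              # or an assignable out: a stereo
                                       # pair like "USB 1/2", or a mono "USB 3"

performance = [
    PartRouting("Piano",     bus="Main L&R"),  # stays in the stereo mix
    PartRouting("Slap Bass", bus="USB 1/2"),   # discrete stereo pair
    PartRouting("Kick",      bus="USB 3"),     # individual mono assignable out
]

for part in performance:
    print(f"{part.name}: {part.eq_pre} -> {part.inserts} -> "
          f"{part.eq_post} -> {part.bus}")
```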

Now to the subjective portion of your question. I got into audio engineering as an extension of my quest to live full time as a musician. I am very sensitive, therefore, to the differences between what is the musician's realm and what is the engineer's realm. For example, a keyboard player performs on a synth where velocity response is integral to the result; it could be a barking Rhodes or a slap bass. When it comes to recording/editing that performance, making it louder by increasing the audio level in the mix is one thing, but making it louder by increasing the velocity values is quite another, if you get my meaning.

A change in velocity radically alters the musician's original performance, and that is clearly in the musician's realm; turning something up or down in the mix is more clearly the engineer's realm. Just extend that example to playing and recording in general. I'm surprised that you don't see this when it comes to Motion Control: animating the sound as I perform it is, musically, expression. That's why I play; that gets my juices flowing. I'm all about that expression, that control over nuance, that realtime thing. I'm not turning it over to the engineer, who I know can recreate something very close. I'm a live (living) performer; it's my thing to make the musical parts breathe and animate, and I'm not turning all the fun over to the mix engineer. I've worked the magic from behind the console, too, so I've tasted that and know it well.
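A toy illustration of why those two moves are not interchangeable (my own simplified model, not how any particular synth engine is coded): raising the gain only scales the waveform, while raising the velocity can push a note across a sample-layer threshold and change the timbre itself.

```python
def pick_layer(velocity):
    """Simplified velocity-switched instrument: which sample layer fires."""
    if velocity < 64:
        return "soft layer (mellow attack)"
    elif velocity < 110:
        return "medium layer"
    return "hard layer (barking attack)"

note_velocity = 60
print("As played:    ", pick_layer(note_velocity))

# Engineer's move: +6 dB of gain in the mix; same layer, same timbre, louder.
gain_db = 6.0
print(f"Gain change:   {pick_layer(note_velocity)} at +{gain_db} dB")

# Musician's-realm move: scale the velocity itself; the timbre can change.
edited_velocity = min(127, int(note_velocity * 1.5))   # 60 -> 90
print("Velocity edit:", pick_layer(edited_velocity))   # now the medium layer
```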

Sure, we've gotten really good at piecing together recordings a bit at a time. Sometimes you become truly unaware that this was the process, because done well, it is not detectable. But I'm old school, and find myself redoing takes rather than fixing things with the technology. I always feel I've got a better take in me, so let's do it again. I feel guilty if I fix it in the mix.

But each musician is likely to have a different workflow, a different balance on the musician-versus-engineer thing. What you say is true: why perform it when you can just as easily work the same 'magic' by recording separate takes and mixing them to make it happen? Well, that's really the definition of a subjective question, and I don't have an answer; I don't think there needs to be one. Every project is different, and different approaches cause their own magic to happen. The reality is that you use a combination of methods; that's what winds up happening most times anyway.

Enjoy!

 
Posted : 01/07/2016 7:10 pm
Walter
Posts: 0
Active Member
Topic starter
 

Thanks. As usual, you don't answer questions with a "yup" or "nope" but go into great detail that I (and others, I'm sure) appreciate.
I think basically you're telling me that the Montage is set up with great flexibility, so you can record and optimize (edit), or play "live," and if you mess up (it happens to me!) you can re-record a live performance until you're happy with your sound. I do both, but for different reasons. I have a Roland Jupiter-80, Roland Integra-7, and a Roland FR-8x accordion, plus a gazillion softsynths. Some synth sounds, e.g. a lap steel guitar, you can't play totally live: you can play the basic tune and record the MIDI, but you need to program it to get the correct articulations (correct chord bends, etc.). I don't think I could play a song live and get the Super Knob in the right position well enough for something that's going to be recorded. I'm not that good, or I'm too fussy. Hence the request for the ability to control the Super Knob with a MIDI controller in a DAW. If you can control it from a pedal, you should be able to control it from a DAW…
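To make that wish concrete: a foot controller is ultimately just a stream of control-change messages, so in principle a DAW lane sending the same CC would do the job. Here is a sketch that writes a CC ramp into a MIDI file using the mido library; the CC number is a placeholder, since (as discussed above) the Super Knob isn't exposed as a plain CC here:

```python
import mido

# Hypothetical CC number; substitute whatever CC the pedal/knob responds to.
MORPH_CC = 11
CHANNEL = 0

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

# Ramp the controller 0 -> 127 over four beats: a drawn-in "Super Knob" sweep.
steps = 32
for i in range(steps + 1):
    track.append(mido.Message(
        "control_change",
        channel=CHANNEL,
        control=MORPH_CC,
        value=round(i * 127 / steps),
        time=0 if i == 0 else (4 * 480) // steps,  # delta time in ticks
    ))

mid.save("superknob_sweep.mid")  # import this clip into a DAW track
```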

I'm in a good position: I'm a musician, but I'm also very technical. My day job was as an Integrated Circuit Design Engineer with 40+ years of design experience, including power-electronics ICs for disk drives and other applications: audio amplifiers, automotive, power-supply controllers, TV, GFCI. I've also taken classes in Audio Engineering and Recording Studio Mixing/Mastering.
Note: this is my first Yamaha, and because of the excellent support (read: Bad Mister et al.) it won't be my last. Roland doesn't provide much support.

Thanks again,

Walt

PS: The Programming Basics series is very well done! The examples are very helpful. One suggestion for extra credit: at the end of each lesson, give the reader an assignment to create something based on what was shown, e.g. "put together a destination that does this function with these parameters." Then in the next installment, show how it should be done. Just my 2 cents.

 
Posted : 01/07/2016 9:40 pm