I'm working in a Cubase 8 Pro project. I have a track where I want to fade the volume down to zero over four bars.
I figured out that I can do it by adding Volume automation to that track. I can also do it by recording Volume CC MIDI events to the track, which I did by setting the track to Record and moving a slider on my XF. (I know I could also use a Job on the XF, but I'm concentrating on Cubase for the moment.)
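(For concreteness: what I recorded amounts to a ramp of CC7 values spread across the four bars. Here's a minimal sketch of equivalent data using the Python mido library; the step count and starting value are just my guesses at what the slider move produced, and Cubase itself doesn't need any of this.)

```python
import mido

TICKS_PER_BEAT = 480                      # 480 PPQ, Cubase's default MIDI display resolution
FADE_TICKS = 4 * 4 * TICKS_PER_BEAT       # four bars of 4/4
START_VALUE = 100                         # assumed starting fader position
STEPS = 48                                # number of CC events in the ramp

mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)

step_ticks = FADE_TICKS // STEPS
for i in range(STEPS + 1):
    value = round(START_VALUE * (1 - i / STEPS))      # linear ramp down to 0
    # CC 7 (Main Volume) on channel 0; delta time of 0 for the first event
    track.append(mido.Message('control_change', channel=0, control=7,
                              value=value,
                              time=0 if i == 0 else step_ticks))

mid.save('cc7_fade.mid')   # import into Cubase to compare with drawn Volume automation
```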
My question is: what's the trade-off? Is one way definitely better than the other? Does it depend on the situation? If so, which situations would those be?
I found this, which says that you can apparently switch back and forth between MIDI and automation relatively easily.
Since you are using Cubase Pro 8, we recommend learning automation from something more recent than a nine-year-old article written for Cubase SX... much has changed (though some things are still the same). Either way, try this link for a more current overview of Automation:
If you have a specific question, ask here.
I believe the advantage comes if and when you need or want to edit your automatable parameters. Certainly, having them separated from the Note-On data makes them much easier to manage.
When it comes to controlling volume, you have to decide which method will work best for your goal. Obviously cc011 (Expression) and cc007 (Main channel Volume) can be used to adjust individual tracks. But automation of volume (in particular) can be handled at mixdown just as easily... It is not always necessary to deal with volume while the data is still in MIDI-coded messages; in particular, you can automate a fade-out at the final mixdown.
But then again, it depends... if your final goal is a MIDI file rather than an audio mixdown, the fade needs to live in the MIDI data itself, so approach the overall fade-out in whichever way best serves your musical goal.
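As a rough illustration of why those two controllers can coexist: on most GM-compatible tone generators, cc011 (Expression) scales within the level set by cc007, so the musician's phrasing and the engineer's fader stay independent of each other. Here is a sketch in plain Python; the linear scaling is my simplification, as real tone generators apply their own (often logarithmic) curves.

```python
def effective_level(cc7: int, cc11: int) -> float:
    """Rough linear model: Expression (CC11) scales within Main Volume (CC7)."""
    return (cc7 / 127) * (cc11 / 127)

print(effective_level(100, 127))   # phrase at full expression, under the mixed fader level
print(effective_level(100, 64))    # phrase dips to roughly half; the mix fader is untouched
```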
Wow. I didn't know that these videos existed. (Smacks forehead with heel of hand.) I'm in the process of watching all of them.
I had no idea Cubase could do all of these things. And here I thought I was getting towards the top of this mountain of a learning curve. (That's OK; it's fun.)
But finding out about all of these Cubase features is making me wonder how things will go once I've started mastering them. For instance, I've learned how to use Patterns on the XF, but it might be easier to use the Arranger Track feature of Cubase instead. I have lots of Voices to use on the XF, and now I find out about all the instruments that come with Cubase. Same thing for effects, and probably a number of other features.
So I'm curious what an expert user of the XF/Cubase combo actually does. Which features of which product actually get used? Does an expert use Arranger Tracks sometimes and XF Patterns other times?
I'm starting to see the XF more as a collection of components than as a unitary device. I suppose that's progress…
Cubase is also a very mature product, which means, like the Motif XF, it has many features for you to discover. If you learn to use half of them you will be well on your way to unlimited workflows. Whether you prefer the XF's Pattern mode or the Cubase Arranger feature is really up to you and your own preference for workflow. I use both but at different times in my production workflow.
If you are using the Motif XF VST you can use multiple instances of it to break the limit of 16 Parts. The FREEZE function allows you to create temporary audio tracks from the 1-Motif XF VST while you launch the 2-Motif XF VST, and so on. "Freeze" locks your MIDI Tracks (so you keep that level of "Undo") while you work on the second set of sixteen Parts.
Cubase allows you to commit to these temporary audio files (Freeze), which reduces CPU load yet expands your available use of the hardware XF. Obviously, you can open as many Motif XF VSTs as you require.
I think what you will ultimately discover is that certain features help inspire you during the creative process, while others will let you easily change your mind once you have some basic things down. For example, working in the XF you have probably gotten fairly used to working quickly with Patterns and MIDI data; however, manipulating audio parts is more easily done in the computer. So say you wish to work out a section of music that includes an audio clip: knowing how to quickly set up the Arranger function in Cubase is going to serve you well... You can define any region to play in a cycling loop.
When composing, using the Chord Track feature to generate certain musical comping parts can be useful and interactive, and it can help break old habits and generate fresh new musical parts. The more tools (merit badges) you have under your belt, the more you will discover about when they can be used. It's like anything else... the more you explore, the more perspective you'll have on just how a feature might help you compose or help you discover something new in the creative process.
I've spent time with arpeggio phrases and Real Time Loop Remix (in the XF) and discovered a method of creating unique musical Phrases: rather than just using the typical repeating-arp type of function, I use them to assemble a linear phrase which actually goes somewhere... Each person can find something unique. Sometimes just teaching yourself a feature causes you to create something you otherwise would not have... That's the fun of it.
And just some thoughts on MIDI Controller created Events and MIDI Automation... I guess my background, first in music and then in audio engineering, helps me clearly differentiate between the two. I think we all do this on some level: things like Pitch Bend, Modulation Wheel, and similar controllers are musical performance gestures that are clearly the realm of the musician. But often things like pan position and certain volume changes are the realm of the audio engineer.
While you can "draw" in Midi control events with the Cubase pencil... I just differentiate (if only in my approach) those actions that are musical in nature and those that fall under the realm of engineering.
Changing Velocity is a musician thing 100% of the time; changing the mix balance (Volume) between this part and the rhythm section is an engineer's thing. Velocity can affect which articulation is selected to play... changing the Velocity of a note-on Event can cause a bass to "slap" or play a regular tone, so that's clearly a musical decision. Changing the playback volume of that same note is an engineering decision, if you see my meaning.
So I think of "automation" as being for those things I'd want to change when wearing my engineer's hat! At least that is my initial approach... If I don't like the MW or PB performance (knowing me) I'll probably just do it over rather than edit it - I realize I could edit it with automation, but knowing me (as the musician)... And that's the advantage here, it's your choice.