Hi,
I am looking for a way to control the 'velocity' expression of voices, from a controller - and somewhat independent of the keybed action. Here, I am referring to velocity as the actual dynamic timbre change and not volume.
For example, as I play a string or brass instrument, I would love to control the velocity in a linear way, before or after the sound starts playing, preferably using a pedal. If I am understanding correctly, the keyboard is already doing this by triggering different Elements or groups set up in a voice depending on the force applied to the key. Finally, my questions are:
- is there a way to transition between these Elements, after the key is pressed, using a controller?
- if so, can I set up the controller (external or otherwise) to control this expression with any XA or compatible voice, without editing?
Ideally, both the pedal and keyboard velocity would work together. For example, I would program it this way: at 0-30 pedal value, use elements 1 to 3 - depending on keyboard velocity; at 30-80, use elements 3-5; if over 80, use elements 5-8.
I hope at least a bit of this makes sense 🙂 . Thanks!
Some of it does make sense but here is something that you are possibly not understanding...
Velocity is a single value sent at Key-On. It is a value from 1-127. For example, the MIDI message for a single NOTE-ON type event might look like this: "90 3C 64"
Translated, this is:
90 = Note-On, Channel 1
3C = Note 60 (middle "C")
64 = velocity (100)
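Purely as an illustration (nothing here runs on the synth itself), a few lines of Python can decode those three bytes - the function name is just something made up for this sketch:

```python
# A minimal sketch that decodes the Note-On example above: "90 3C 64".
def decode_note_on(status: int, data1: int, data2: int) -> str:
    if status & 0xF0 != 0x90:
        raise ValueError("not a Note-On status byte")
    channel = (status & 0x0F) + 1   # MIDI channels are counted 1-16
    note = data1                    # 0x3C = 60 = middle "C"
    velocity = data2                # 0x64 = 100
    return f"Note-On, Channel {channel}, Note {note}, Velocity {velocity}"

print(decode_note_on(0x90, 0x3C, 0x64))
# -> Note-On, Channel 1, Note 60, Velocity 100
```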
Striking the key would cause the Voice assigned to the current Program to trigger the Elements that meet the velocity requirement of 100. Let's use a slap bass sound as a basic example of a typical velocity swap: if the "slap" Element is set so that its VELOCITY RANGE = 101-127, then at velocity 100 only the non-slap Element will sound. It is the *initial* velocity value that determines which Element will sound. Once that message is sent, no other change to Velocity can be made. Velocity sends one value per key strike, and it sends it at the moment of Note-On. You cannot change the Velocity while the note-on it was sent with is active... The velocity is a part of the definition of Note-On.
The "slap" sound cannot be triggered unless you send another note-on that exceeds 101 (in our example)
Velocity is not affected by movement of a pedal. Velocity is only affected by movement of a key, at note-on.
Yes, velocity influences, or can influence, how loud.
Yes, a pedal influences, or can influence, how loud.
No, a pedal cannot influence velocity... Only the downward action on a key.
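If it helps to picture the velocity swap, here is a rough Python sketch of that selection logic, using the made-up slap bass ranges from the example above (the synth does this internally at the instant of Key-On; this is only an illustration):

```python
# Hypothetical velocity ranges from the slap bass example above.
ELEMENT_VELOCITY_RANGES = {
    "fingered bass": (1, 100),   # non-slap Element
    "slap bass": (101, 127),     # slap Element
}

def elements_for_velocity(velocity: int) -> list[str]:
    """Return the Elements whose velocity range contains this Note-On velocity."""
    return [name for name, (low, high) in ELEMENT_VELOCITY_RANGES.items()
            if low <= velocity <= high]

print(elements_for_velocity(100))  # ['fingered bass'] - the slap never sounds
print(elements_for_velocity(110))  # ['slap bass']     - requires a new, harder key strike
```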
is there a way to transition between these Elements, after the key is pressed, using a controller?
Yes, you can transition between Elements with a Controller. Whether or not the methods available will work for you will depend largely on the sound itself. For example, a Controller can be used to fade in an Element. It can fade in one Element and even simultaneously fade out another. But in our basic velocity swap example, this is not workable because, if the note-on event defines 100 as the velocity and I move the Controller to fade in the Slap Element, well, it never got the message to sound in the first place.
An alternative:
So you could have both the main bass sound Element and the slap Element set to the full Velocity Range of 1-127. Now our note-on will cause both the main body sound and the slap to occur. Now, using the Controller, we can seamlessly fade in one or the other of the Elements. Unfortunately, in this example, fading in the "slap" sound would not have any musical value... mainly because the percussive slap is very "time sensitive" (this is why I said it depends on the Voice you've selected).
But if you are transitioning between two Elements that are both sustaining sounds, certainly you can transition between them with a Controller.
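As a rough sketch of that idea (purely illustrative, with made-up names): both Elements are set to the full velocity range so both start at Note-On, and a single controller value shifts their relative levels.

```python
def crossfade_levels(cc_value: int) -> tuple[float, float]:
    """Map a controller value (0-127) to levels for Element A and Element B."""
    position = cc_value / 127.0   # 0.0 = all Element A, 1.0 = all Element B
    return 1.0 - position, position

for cc in (0, 64, 127):
    level_a, level_b = crossfade_levels(cc)
    print(f"CC {cc:3d}: Element A {level_a:.2f}, Element B {level_b:.2f}")
```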
Here's another example... The Medium Large Section normally plays bowed (arco) when no AF button is pressed, but you can transition to pizzicato (plucked) by pressing [AF1]... New Note-Ons would be necessary to actually trigger the plucked sound, though - the KEYs send note-ons; the Controller only makes the other Element ready to sound at the next note-on.
There are other possibilities...
- if so, can I set up the controller (external or otherwise) to control this expression with any XA or compatible voice, without editing?
Without editing? I don't understand the question. You cannot change anything without editing.
Ideally, both the pedal and keyboard velocity would work together. For example, I would program it this way: at 0-30 pedal value, use elements 1 to 3 - depending on keyboard velocity; at 30-80, use elements 3-5; if over 80, use elements 5-8.
Again, only the speed of the KEY being pressed can determine Velocity (by definition). And you've listed Element 3 and Element 5 in two separate groups... They cannot both sound and not sound simultaneously - maybe that's a typo (?)
That said, the [AF1]/[AF2] buttons can be used to transition Elements... They can be operated by a Foot Pedal, but they only respond to ON (lit) or OFF (not lit). They have nothing to do with velocity, however. But as mentioned, depending on the nature of the Element being controlled you can switch Elements at any time using the buttons in conjunction with the XA CONTROL parameter.
It is not really clear musically what you are transitioning between so I can only give you examples "in theory" that would be possible.
For example, Elements can be activated and deactivated by controllers. You could assign Elements 1-3 to only play under a certain condition of a selected controller, Elements 4-5 to only play under certain conditions of another controller, and Elements 6-8 under a different condition. Here's an example:
Elements 1, 2, and 3 set to XA CONTROL = All AF off
Elements 4 and 5 set to XA CONTROL = AF 1 on
Elements 6, 7, and 8 set to XA CONTROL = AF 2 on
Now you have a way to determine which Elements will be sounding at any time... independent of velocity. They can all be played, when activated, at any velocity value you wish. The ASSIGNABLE FUNCTION buttons [AF1]/[AF2] can be set to "latch" or "momentary", and you can freely switch and seamlessly transition between the three conditions. With the buttons set to "momentary" you can manipulate what is sounding by touching one, the other, or neither of the buttons.
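To make the logic of those three conditions concrete, here is a small Python sketch of the same mapping (the Element numbers and conditions mirror the example settings above; on the instrument this is simply the per-Element XA CONTROL parameter, and the sketch assumes only one AF button is lit at a time):

```python
# Per-Element XA CONTROL settings from the example above.
XA_CONTROL = {
    1: "all AF off", 2: "all AF off", 3: "all AF off",
    4: "AF1 on",     5: "AF1 on",
    6: "AF2 on",     7: "AF2 on",     8: "AF2 on",
}

def active_elements(af1_lit: bool, af2_lit: bool) -> list[int]:
    """Return which Elements are allowed to sound for the current button state."""
    if af1_lit:
        condition = "AF1 on"
    elif af2_lit:
        condition = "AF2 on"
    else:
        condition = "all AF off"
    return [element for element, setting in XA_CONTROL.items() if setting == condition]

print(active_elements(False, False))  # [1, 2, 3]
print(active_elements(True, False))   # [4, 5]
print(active_elements(False, True))   # [6, 7, 8]
```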
Hope that helps in your experiments. The XA CONTROL is assigned per Element on the [F1] OSCILLATOR / [SF1] WAVE screen.
Thank you for the elaborate answer Bad Mister, very much appreciated. I am sorry I wasn't clear enough: when I was referring to 'velocity' I was not referring to the MIDI variable itself, but rather the sound quality change as you play an actual instrument with different force (in a vocal performance, whisper vs. screaming) - such as provided with Expanded Articulation. Of course, I am referring here to sustained instruments only.
It is great to hear that I can control transitions between elements after the key is pressed; having that control is, I think, all I need. I do feel that such a standardized function is much needed when recreating non-percussive instrument performances - such as a bow that continues to shape the sound long after the note 'starts'. Keeping that sound shape constant between notes is natural with a bow, but difficult when pressing new keys - hence the idea of a pedal that controls the sound in a more 'linear' fashion, independent of the Key Velocity.
My second question was - in technical terms - can I control which of the 8 Voice Elements are playing by sending MIDI messages from an external controller, without editing the voice to make it ready for such control?
Thank you!
Thank you for the elaborate answer Bad Mister, very much appreciated. I am sorry I wasn't clear enough: when I was referring to 'velocity' I was not referring to the MIDI variable itself, but rather the sound quality change as you play an actual instrument with different force (in a vocal performance, whisper vs. screaming) - such as provided with Expanded Articulation. Of course, I am referring here to sustained instruments only.
Velocity is the traditional method for "the sound quality change when you play an actual instrument with different force..." If you have a whispered phrase layered in a Voice with a screaming phrase, you trigger the whisper with velocities 1-100 and you trigger the scream with velocities 101-127. This is no different from the slap bass example.
As I mentioned, you could assign a controller to determine which Element is available to sound... Set them both to the full velocity range so they both trigger at note-on, but have the volume of one respond positively to controller movement and the other negatively, so you can fade in/fade out. Or have a controller button assigned so that, when pressed, the volume of one is completely reduced in level (negative Element Level) while the other is at maximum (positive Element Level). But this may not be satisfactory because it will likely sound like a new note-on.
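Here is the button variant as a tiny sketch, with imaginary whisper/scream Element names - one Element Level responds negatively to the switch and the other positively:

```python
def element_levels(button_pressed: bool) -> dict[str, int]:
    """Return Element Levels (0-127) for the two layered Elements."""
    if button_pressed:
        return {"whisper": 0, "scream": 127}   # scream layer takes over
    return {"whisper": 127, "scream": 0}       # whisper layer is heard

print(element_levels(False))  # {'whisper': 127, 'scream': 0}
print(element_levels(True))   # {'whisper': 0, 'scream': 127}
```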
It is great to hear that I can control transitions between elements after the key is pressed; having that control is, I think, all I need. I do feel that such a standardized function is much needed when recreating non-percussive instrument performances - such as a bow that continues to shape the sound long after the note 'starts'. Keeping that sound shape constant between notes is natural with a bow, but difficult when pressing new keys - hence the idea of a pedal that controls the sound in a more 'linear' fashion, independent of the Key Velocity.
This is exactly how the engineers at Yamaha felt when they went to work on XA CONTROL (Expanded Articulation Control)! A way to dynamically (in real time) change the Elements that are sounding during the performance of a musical phrase. Traditionally you had Velocity and velocity range. As we explained, Velocity requires a new NOTE-ON event to have any further influence on what you are playing.
The different XA CONTROL assignments each have a different use case.
There are the ones that control Elements via the condition of the AF1/2 buttons: "All AF off", "AF 1 on", "AF 2 on"
There are the alternating features: "wave cycle", "wave random"
There are the key performance gestures: "legato", "key-off sound"
The AF buttons can be used to turn a group of Elements on/off at any time during your performance.
The alternating (cycle) function is great for some specific or random changes.
The performing gesture of playing a mono Voice legato - that is, playing a new note before releasing the previous one - avoids retriggering the Element that represents the attack portion of the note. When XA CONTROL = "legato", the instrument can be switched to this "legato" Element instead... The sound is "knitted" seamlessly to the 'body' Element (one that does not feature an attack portion, but is the same instrument sound starting from the portion of the sound after the attack). This allows one note-on stroke, for example, to be articulated, followed by an entire phrase of single notes played without a new attack of the sound. Not only do you have to set the XA CONTROL to *legato*, you have to choose a waveform that is stored without the attack portion of the audio. If you set a Waveform that has a recorded attack, the legato function would still switch to that waveform, but your results will not be legato in nature.
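A rough sketch of the legato gesture itself, assuming a mono Voice (this only illustrates the behaviour described above, not how the engine is actually coded): if a new Note-On arrives while another note is still held, the attack Element is skipped and the legato Element is used instead.

```python
held_notes: set[int] = set()

def on_note_on(note: int) -> str:
    """Return which Element this Note-On would use."""
    element = "legato Element (no attack)" if held_notes else "attack Element"
    held_notes.add(note)
    return element

def on_note_off(note: int) -> None:
    held_notes.discard(note)

print(on_note_on(60))   # attack Element             - first note of the phrase
print(on_note_on(62))   # legato Element (no attack) - played before releasing note 60
on_note_off(60); on_note_off(62)
print(on_note_on(64))   # attack Element             - phrase restarted detached
```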
With strings you have prominent (spiccato) bow strokes that can be brought in either with Velocity (the traditional method) or when you activate the ASSIGNABLE FUNCTION button assigned to the XA CONTROL. For example, the Solo strings feature a spiccato bow stroke that can be articulated when AF1 is pressed... Importantly, this allows the flexibility of not making the spiccato stroke dependent on a 'hard strike', as is the limitation with Velocity.
It allows for the same thing on plucked strings - what guitarists call "hammer-ons" - a single pluck of the string while further notes are sounded by the fretting hand, without plucking the string again.
These articulations are the real innovation of the MOTIF Series when it transitioned to the 8 Element architecture... The additional Elements are not used to layer so much as they are applied as alternative ways to evoke sound from the instrument being emulated/performed. Usually when you see "AF1" or "AF2" in the name of the Voice, the programmer is trying to alert you to explore how the Assignable Function buttons are assigned within the Voice.
Again, this whole alternate method of performance control was developed in specific response to the inherent limitation of VELOCITY as "the sound quality change when you play an actual instrument with different force..." XA CONTROL allows you to introduce new Elements at the given velocity of the original key strike, yet provides a seamless transition to them.
My second question was - in technical terms - can I control which of the 8 Voice Elements are playing by sending MIDI messages from an external controller, without editing the voice to make it ready for such control?
No, of course not. There is no thinking on the part of the synth. Take the legato gesture, for example... On a Blues Guitar or a Legato Flute, the Element with the articulated attack must be present, and the Element that is the guitar sans the pluck, or the flute without the breath attack, must also be in the Voice and set to XA CONTROL = legato.
Then, when the XF detects you playing notes in a legato fashion, instead of re-triggering the attack Element (like all other sample-based synths), the XF goes and gets the legato Element and "knits" it onto the current sound. The word "knit" here is used to signify that the sound transitions smoothly and undetectably to the legato Element.
You will find examples of triple-strike velocity instruments with triple legato Elements... so that at any velocity the XF can select an appropriate volume for the legato transition... The fact that a musician can play it and be totally unaware of all the hamsters running on treadmills behind the scenes is a testament to how smoothly this is all executed. But you do have to be playing a Voice that has been pre-programmed to behave this way.
Try the following Electric Guitar Voice: "Bluesy Clean Legato"
Press [EDIT]
Use the lit numbered buttons [9]-[16] to temporarily Mute/unMute Elements 1-8
Element 3 is the legato Element. If you turn Element 3 OFF (press [11] Off) you will hear the guitar sound "plucked" with each note-on even if you play legato. However, when Element 3 is active and you play legato, the note is heard without the *attack*, because Element 3 is a guitar waveform sans the plucked attack portion of the wave.
Yes indeed, it is amazing how natural these instruments already sound, and how much programming went into each one of them. On spur-of-the-moment performances, it just feels great to hear the unexpected but organic expressivity as you play along. I am merely out to gain finer control of that for prepared performances, and it is great to hear the mechanism is there - can't wait to try it out.
Again, thank you for your detailed insights.
Just wanted to thank Yamaha, even if the answer comes in the form of a new product - in this case Montage: