Pages 314/315 of the ops doc specify the Coarse spec for Elements as:
Coarse (Coarse Tune)
Settings: −48–+48
On the Osc/Tune screen you can manually set the parameter to that full range of values.
But if you make a controller assignment from AsgnKnob1 to Element->Coarse you can only use a 2 octave range.
Using INIT defaults for AWM2 the default position for knob 1 is 512 and gives a +12 (1 octave) value. The max position of 1023 gives a +24 value and the 0 position gives a 0 (no adjustment) value.
I've heard that AWM2 waveforms can only be stretched 2 octaves but you can manually change the coarse setting up or down by 4 octaves.
It's not an issue - just reporting what I've found.
I'm using Coarse tuning assignments in the process of analyzing the various curve types to determine the XY mapping for the different shapes.
For the standard curve with default settings, every 42 knob values change the output by 1 semitone.
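If anyone wants to check that arithmetic, here's a quick sketch (Python). It simply assumes the mapping is linear through the three points I observed (knob 0, 512, 1023) - my assumption, not anything documented:

```python
# Quick check of the "about 42 knob values per semitone" observation.
# Assumes a linear mapping through the observed points:
# knob 0 -> 0 st, 512 -> +12 st, 1023 -> +24 st (an assumption, not a spec).

def coarse_offset_semitones(knob: int) -> float:
    return knob * 24 / 1023

for knob in (0, 42, 43, 512, 1023):
    print(f"knob {knob:4d} -> {coarse_offset_semitones(knob):+6.2f} semitones")

print(f"knob values per semitone ~= {1023 / 24:.1f}")   # about 42.6
```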
I'm still looking for a better parm to use so if you know of one let me know.
I think the problem here is your assumption about what exactly is going on. It's not unreasonable that when you press the [CONTROL ASSIGN] button while a parameter is highlighted (like element coarse tuning), you would expect that parameter to be what's getting offset. Most of the time you'd be correct. Now, I have to say I don't know exactly how Montage M works -- but Montage classic would do the "same thing" as the pitch bend wheel when you make pitch-based control assignments.
I talk about this here (circa 2016): https://yamahasynth.com/community/montage-series-synthesizers/control-assign-pitch-sound-shaping-ideas/
I wish there were a way to also target note-shift-style pitch alterations that would work like element coarse tuning or note shift. However, at least in Montage classic, we never got this. The biggest reason this would be useful is for programming transpositions inside a Performance, to quickly "cheat" if a singer changed keys on you and you weren't proficient enough to transpose on the fly. Or sometimes, when I'm reading music for a transposing instrument, it might be easier just to have the keyboard handle the transposition without having to "wreck" it once you change to another Performance.
At any rate - the limits of what this modulation can do are based on the limits of pitch stretching -- something entirely different from note shifting.
The best way to hear this is to load Init Normal AWM2, set the pitch bend range to +24, and assign whatever pitch destination you want to an assignable knob (or any controller you want). Then max out the source controller and listen to the result. Then put the controller back to "0" offset (wherever that is) and play the note 2 octaves up. Different sound. Now set element coarse pitch to +24 and play the original note. Different sound than the control assignment's max-offset sound, but the same pitch. Now push the pitch bend wheel all the way "up" and listen to the note. It sounds the same as the control-assignment offset note (and not like the coarse pitch +24 note). Conclusion: you're not offsetting element coarse pitch; you're doing the same thing pitch bend does.
Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R
I think the problem here is your assumption about what exactly is going on.
I'm not sure what assumption you mean. The only assumption I am making is that the controller destination 'Element -> Coarse' is, in fact, the same 'Coarse' parameter shown on the 'Osc / Tune' element screen.
Is that assumption correct?
But if you make a controller assignment from AsgnKnob1 to Element->Coarse you can only use a 2 octave range.
. . .
but you can manually change the coarse setting up or down by 4 octaves.
So what I'm reporting is that using the 'Coarse' parameter on the touchscreen you can adjust from -48 to +48 or 4 octaves.
But using a part knob you can only adjust from -24 to +24.
The question is why are those different? If those two reference the same parameter then it has NOTHING to do with how the transformation actually occurs.
It's not unreasonable that when you press the [CONTROL ASSIGN] button while a parameter is highlighted (like element coarse tuning), you would expect that parameter to be what's getting offset.
Again - I don't know what you mean here.
On the 'Mod Control - Control Assign' screen I select 'AsgnKnob 1' as the source and 'Element -> Coarse' as the destination.
I don't see how that has anything to do with what parameter might be highlighted on the screen.
My mention of the pitch limitations being -24 to +24 was only to suggest that maybe the Coarse setting was intended to have that same range, but for some reason it uses -48 to +48 while the controller assignment uses the correct range.
So the question is: why does a destination of 'Element -> Coarse' only allow a range of -24 to +24 while a manual setting on the 'Osc / Tune' screen allows a range of -48 to +48?
Seems to me the range of the parameter should be the same no matter which way you set the value. If so then one of the two ranges is incorrect.
I'm just saying that the interface implies one thing and sets you up to make the wrong assumption.
What the control assignment is actually doing is a type of simulated pitch bend. However you arrived at a different conclusion, it's a wrong assumption based on misdirection from the interface/GUI/etc.
Nowhere in the documentation do the control assignment tables or GUI destination names tell you exactly what "register" is being offset. These are sometimes misleading marketing names. You can do your own experiments -- outlined here already -- to get at the "truth" of what they actually do.
And ... If you happen to do the shortcut control assignment where you scroll over the element coarse pitch parameter, and the [CONTROL ASSIGN] lights up, then you press this button and turn an Assignable knob to assign what you would think is this parameter being offset --- you'll find that this parameter isn't being offset at all. The system would act differently if it were. I'm not saying you did or did not do this, but it's an example and supporting evidence of how what's going on behind the curtains is not what would be reasonable to assume.
And, again, this is all based on Montage classic which may or may not apply to the M. You'll have to sort that out on your own. It just "smelled" familiar so I give my 2016 observation as a theory to explain what you're seeing.
Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R
... If you turn your control assignment knob and, instead of a smooth "glide", you hear the chromatic scale, then the M is different. The coarse tuning only does chromatic steps and doesn't stretch pitches. It's a note shift at the element level. If it does anything finer than chromatic steps -- you're not offsetting the coarse tuning.
Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R
Nowhere in the documentation do the control assignment tables or GUI destination names tell you exactly what "register" is being offset.
And that isn't even the WORST part! The worst part is that there is no documentation as to the range or limits of any of those destinations if you can't 'assume' (as I did) that the limits are the same as the actual parameter.
Which means if you assign ANY parameter to a knob, there is no way of knowing the range of offset values the knob can provide. Not only is it undocumented, there is also no good way to test what it is.
I was lucky in the test I did that I could tell the limit for Coarse was 2 octaves by listening to it.
If you happen to do the shortcut control assignment where you scroll over the element coarse pitch parameter, and the [CONTROL ASSIGN] lights up, then you press this button and turn an Assignable knob to assign what you would think is this parameter being offset --- you'll find that this parameter isn't being offset at all.
It doesn't light up for any of the element level parms.
... If you turn your control assignment knob and, instead of a smooth "glide", you hear the chromatic scale, then the M is different.
Nope - you get a smooth glide with a limit of 2 octaves. I totally missed that clue.
Not expecting anything but I emailed support to see if they can provide a list of min/max range values for the parameters in the assignment destination list.
Well, pitch bend range is +24/-48. Back in 2016 I thought the negative side might go further, but wasn't sure. It's not that hard to work out, though. For coarse, the max is +24 semitones (which represents the rightmost output of a standard curve with a +32 ratio) and the min is -24 semitones (the rightmost output of a standard curve with a -32 ratio).
For standard curves, the maximum and minimum values they can output are reached at +32 and -32 ratios. This is less a function of the offset's maximum and more a result of how the ratios work for this curve type.
Using unipolar curves, it was decided that the maximum output of any curve type would represent +2 octaves and the minimum (or largest negative) offset would represent -2 octaves. One octave is reached at a ratio of +/-16. Exact intermediate note values are not possible because 16 steps do not map evenly onto 12 semitones.
Using a bipolar curve you get double the resolution, since each ratio step (try this with the hold-type curve) goes half as far.
You can see this visually by noticing when the curve "runs into" the top or bottom of the graph. For unipolar it does this at 32; for bipolar it does this at 64. So with bipolar you get 64 steps on each side (except the positive side, which only goes to +63). -32 is one octave down and -64 is two. Now there are 32 steps per octave, which still doesn't map onto 12, but the error is smaller (ignoring temperament).
If you stack two curves on top of each other you'll notice you can still only offset by two octaves, so the limits are still +/-24 semitones, however you want to think about it.
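If you want the arithmetic, here's a rough sketch (Python) of that ratio-to-semitone relationship as I understand it from classic -- a hypothesis based on listening, not a documented formula:

```python
# Ratio-to-pitch relationship as described above (Montage classic observation,
# treated as a hypothesis rather than a documented spec).
# Unipolar standard curve: ratio +/-16 ~ one octave, +/-32 ~ two octaves (the cap).
# Bipolar curve: each step moves half as far, so +/-32 ~ one octave and +/-64
# would be two (the positive side only reaches +63).

def unipolar_semitones(ratio: int) -> float:
    return max(-24.0, min(24.0, ratio * 12 / 16))

def bipolar_semitones(ratio: int) -> float:
    return max(-24.0, min(24.0, ratio * 12 / 32))

for r in (16, 32, 63):
    print(f"ratio +{r:2d}: unipolar {unipolar_semitones(r):+6.2f} st, "
          f"bipolar {bipolar_semitones(r):+6.2f} st")
```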
Pitch is the easiest way to hear all of the limits.
So it seems like they matched up (again, for classic) the limits of the curve to the limits of element coarse destination.
Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R
Pitch is the easiest way to hear all of the limits.
LOL! That is why I started with 'Element -> Coarse'. I was hoping to verify that the knobs would use the same range, and set, of values specified for the UI parameters.
Thanks for that detailed info - it will be useful for the set of parameters where those curves make sense.
But in the big picture the issue isn't that one parameter but the whole set of possible destinations. I was hoping to find some way to extrapolate what I find for one parm to the others.
Consider the effect destinations. For example SPX Hall as documented on page 159 in the Data List. The screen lists 11 parms and the doc lists 10 - doesn't include Dry/Wet.
The doc lists the actual range for each parm but also a value range. Reverb Time is 0.3s - 30.0s with a value range of 0-69.
So one unknown is how the value range 0-69 maps to the time range 0.3s - 30.0s.
Just as important is how the value range then maps to the knob range.
That is the one I was trying to figure out: how a value range (0-69 in this example) maps to a knob range. If you can assume (there's that word again) a simple strategy, it could just be that range value 0 = knob 0 and range value 69 = knob 1023, with linear interpolation in between.
So 1024/70 = 14.6, rounded to 15, would mean that every change of 15 on the knob changes the range value by 1. That is how some SuperKnob movements I've noticed seem to work.
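Here's a minimal sketch (Python) of that simple linear strategy; the endpoints (value 0 at knob 0, value 69 at knob 1023) are my assumption, not anything from the docs:

```python
# The "simple linear mapping" hypothesis for a 70-step destination
# (e.g. SPX Hall Reverb Time, data values 0-69) against the M's 0-1023 knob.

N_VALUES = 70   # parameter data values 0..69

def knob_to_value(knob: int) -> int:
    """Hypothetical: map a 0-1023 knob position linearly onto 0..69."""
    return round(knob * (N_VALUES - 1) / 1023)

print(1024 / N_VALUES)                          # ~14.63 knob counts per data step
print(knob_to_value(0), knob_to_value(15), knob_to_value(1023))   # 0, 1, 69
```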
But which effect (or other) parameters does that method apply to? Because that is the method I thought I would find 'Element -> Coarse' using, which it isn't.
The thing is I don't know of any way to test the type of mapping that is being done because you generally have no visibility into the result. How would you possibly measure the change in 'Reverb Time' from .3s to something else?
I've asked support to weigh in on whether there is a simple range (value to knob) mapping and, if so, which parameters it applies to, but who knows if there will be any response. And I won't consider 'use your ears' a useful response!
Thanks for your input. It's always appreciated. I'm sure at times it may seem like I'm arguing with you. That isn't my intention - I'm just trying to nail down what I can as efficiently as I can.
The doc lists the actual range for each parm but also a value range. Reverb Time is 0.3s - 30.0s with a value range of 0-69.
So one unknown is how the value range 0-69 maps to the time range 0.3s - 30.0s.
... as in table 4? Reverb time points you to table 4 to relate values to seconds. See attached for a sample of the (not complete) table taken from the M data list.
That gets you from 0-69 to seconds. Then you'd need to decide if it's important enough to determine whether the rightmost point of a standard curve at +32 ratio is a 69 offset (full-bore reverb time) or if the mapping of input to output is something different. I don't really know, but I'd generally think the rightmost point at ratio +32 would give you the max value for the destination (some/most documented, some not) if the parameter started at (is programmed as) 0.
Some of this stuff can be tested. Some things are easier than others. And some at Yamaha (not necessarily me) would assert it's not important to know the details because the influence of "direction" and "more juju" or "less juju" is enough to know.
Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R
That gets you from 0-69 to seconds. Then you'd need to decide if it's important enough to determine whether the rightmost point of a standard curve at +32 ratio is a 69 offset (full-bore reverb time) or if the mapping of input to output is something different. I don't really know, but I'd generally think the rightmost point at ratio +32 would give you the max value for the destination (some/most documented, some not) if the parameter started at (is programmed as) 0.
Those curves, I believe, appear to be based on 0-127 for input and 0-127 for output on both the M and earlier models.
Knobs on earlier models match that 0-127 input, but the M's knobs are 0-1023.
So with 70 possible values, 128/70 = 1.82857 and 1024/70 = 14.62857, meaning each knob movement of 2 (rounded) represents a value change of 1 on older models, while on the M the movement is 15 (rounded).
I'm hypothesizing that the order of flow is something like:
1. The input from either the old or new knob gets mapped to the curve input range of 0-127, either before or after rounding.
2. that curve input gets mapped to a curve output in the range 0-127 (could be 0-1023 for M's?)
3. that curve output gets mapped to one of the 70 values (for this example)
Which of the 70 values you end up with could depend on when, or even if, the 1024 range is mapped to the 128 range, and when any rounding is done.
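A little test harness (Python) for that hypothesized flow, comparing a "collapse to 0-127 first" order against a "keep 1024 steps throughout" order. The curve is just a linear placeholder since the real shapes and rounding rules are unknown; the point is only that the two orders produce detectably different results:

```python
# Compare two possible processing orders for a 70-value destination:
#   A) knob (0-1023) collapsed to 0-127 first, then curve, then mapped to 0-69
#   B) full 0-1023 resolution carried through and mapped directly to 0-69
# The "curve" here is a linear stand-in; real curve shapes/rounding are unknown.

N_VALUES = 70

def path_a(knob: int) -> int:
    curve_in = round(knob * 127 / 1023)              # step 1: collapse to 0-127
    curve_out = curve_in                             # step 2: linear curve stand-in
    return round(curve_out * (N_VALUES - 1) / 127)   # step 3: map to a data value

def path_b(knob: int) -> int:
    return round(knob * (N_VALUES - 1) / 1023)       # keep 1024 steps throughout

diffs = [k for k in range(1024) if path_a(k) != path_b(k)]
print(f"{len(diffs)} of 1024 knob positions give a different data value")
print("first few:", diffs[:8])
```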
Some of this stuff can be tested.
That is why, given what I said above, I'm trying to find a parameter that:
1. has a small number of possible values (but not evenly divisible by 8)
2. changes in the value can be readily heard or otherwise detected
A lot of the possible difference goes away if the M's value gets immediately converted and rounded to the 128 range, since Yamaha could just divide by 8 and then round to get an equivalent 'old' knob value.
On the other hand, the M's code could apply the 1024 range all through the process.
Given the difference in precision between rounded 128 values and rounded 1024 values, there is a good chance for rounding to have a measurable effect in cases where you have a non-integral number of data values. For example, 70 data points don't divide evenly into either 128 or 1024, so the order of conversion might be detectable.
My testing goal is to try to construct a table showing what knob value you need in order to get each of those 70 possible data points. I remember seeing one Morph example that actually morphed between filter types (which doesn't really make a lot of sense to me).
There are only a small number of filter types, so when you turned the Super Knob the filter type would stay the same for some number of values, and then moving the knob just +1 would cause the filter type to jump to a new value, where it would again stay for some number of values.
That is the sort of 'ranging' I am trying to reverse engineer for the assignable parameters.
Why, you (and our local troll) ask? So I can create test code that sends SysEx to effect the appropriate knob changes. For the example of 70 values I want to be able to create SysEx commands that will set a knob to the value that represents each of those 70 possible parameter values.
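For what it's worth, this is the shape of the table I'm after (Python sketch, assuming the simple linear mapping above actually holds). I've deliberately left the SysEx out - the exact parameter-change message is device and firmware specific, and not something I want to guess at here:

```python
# For each of the 70 data values, find the lowest 0-1023 knob setting that
# should select it, under the assumed linear mapping.

N_VALUES = 70

def knob_to_value(knob: int) -> int:
    return round(knob * (N_VALUES - 1) / 1023)

knob_for_value: dict[int, int] = {}
for knob in range(1024):
    knob_for_value.setdefault(knob_to_value(knob), knob)

for value in (0, 1, 34, 35, 68, 69):
    print(f"data value {value:2d} -> knob {knob_for_value[value]:4d}")
```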
I had many of the same goals and decided that things I could clearly hear, like pitch, were the most important to get exact, and that for things that are more difficult to hear (and can stand some error), ballpark is good enough. And ballpark is achievable.
I've asked for this kind of stuff before but repeated inquiries are fine.
Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R
Why, you (and our local troll) ask? So I can create test code that sends SysEx to effect the appropriate knob changes. For the example of 70 values I want to be able to create SysEx commands that will set a knob to the value that represents each of those 70 possible parameter values.
I asked you this question in another of your posts, and I ask the same question here because I thought about it while reading this post. You can program macros with system exclusive embedded in a wave file (I have no idea how to do that and I haven't learned programming, but of course this is what gives you more possibilities to reach your goals).
So could you, for example, program keys to trigger multiple articulated waveforms, like we used to do with Kontakt articulated choir or string libraries?
Montage 7 classic
So could you, for example, program keys to trigger multiple articulated waveforms, like we used to do with Kontakt articulated choir or string libraries?
I have no experience with 'Kontakt articulated choirs or strings libraries' and most of the answer was covered in previous replies to similar questions from you.
On Yamaha the 'trigger' for a sound is a Note On event. Those events specify the midi note number and a velocity value.
For an AWM2 part that uses Elements each element specifies a Yamaha 'waveform'. That waveform uses the midi note number and velocity to locate the appropriate keybank contained in the waveform.
Each keybank contains ONE mono or stereo 'sample' where a sample is basically a short, looped, WAV file equivalent.
The waveform and keybank organization is shown in the Operations doc - just select 'waveform edit' from the list
https://manual.yamaha.com/mi/synth/montage_m/en/om02screenparameters0090.html
1. one note on event can cause any number of elements to be active
2. each active element can contribute ONE sample for its waveform
3. the combination of the samples from ALL elements will be heard
4. so you can configure the elements to delay, or otherwise modify, their sample to get various effects.
In the 'Chordz 2 Chill 2' performance, each single key press plays several different tones, each with a short delay, which causes the key press to sound like an arpeggio.
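Here's a toy model (Python) of points 1-3 above; the class and field names are mine, just to illustrate how note and velocity ranges determine which elements a single Note On activates:

```python
# Which elements a Note On activates, based on note range and velocity range.
# Field names and example ranges are made up for illustration; not Yamaha's.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    note_lo: int
    note_hi: int
    vel_lo: int
    vel_hi: int

def active_elements(elements, note: int, velocity: int):
    """Elements whose note and velocity ranges cover this Note On."""
    return [e.name for e in elements
            if e.note_lo <= note <= e.note_hi and e.vel_lo <= velocity <= e.vel_hi]

elements = [
    Element("Elem1 soft layer", 0, 127, 1, 90),
    Element("Elem2 hard layer", 0, 127, 91, 127),
    Element("Elem3 low range",  0, 59,  1, 127),
]
print(active_elements(elements, note=48, velocity=100))   # Elem2 + Elem3
print(active_elements(elements, note=72, velocity=40))    # Elem1 only
```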
Toby, maybe you misunderstood my question? It is all about the system exclusive you can embed in a waveform, as you say, and then you can program a key to trigger whatever you like. And this made me think of the keys you have in Kontakt libraries that trigger different waveforms to get a realistic performance of an orchestra, for example, with different behaviors for the instruments. Usually you have 5 to 12 keys on the keyboard that don't play notes but change the waveform seamlessly.
Montage 7 classic
Toby, maybe you misunderstood my question?
That's always possible.
It is all about the system exclusive you can embed in a waveform, as you say, and then you can program a key to trigger whatever you like.
You can embed most any valid SysEx command in a midi file as I mentioned before - though I obviously haven't tested all possible combinations.
But on Montage a key is triggered by the user pressing it or by receiving a Note On event, as I described earlier.
And this made me think of the keys you have in Kontakt libraries that trigger different waveforms
On Montage an AWM2 element specifies a single waveform. So to activate a different waveform you need to activate a different element. Elements are activated when they receive a Note On event within their note range and velocity range.
to get a realistic performance of an orchestra, for example, with different behaviors for the instruments.
Many of the presets do that. A performance uses different parts with each typically 'playing' a different instrument.
Usually you have 5 to 12 keys on the keyboard that don’t play notes but change the waveform seamlessly.
I think we have arrived back at what you said earlier:
Toby, maybe you misunderstood my question?
I don't think I have any more to add given that I'm not sure where our disconnect is. I'm sure others will weigh in when they have time.