Synth Forum


Smart Morph Questions

6 Posts
4 Users
0 Reactions
1,745 Views
Posts: 0
Eminent Member
Topic starter
 

1. On the Smart Morph Analyze screen, you can choose three parameters (one each for red/green/blue). If I change these parameters for an existing Smart Morph sound and run Learn again, will it morph those three new parameters rather than the ones that were originally chosen?

For example, if Red is Random Pan and I change Red to Filter Cutoff and run Learn, will Red now morph Filter Cutoff?

2. I'm not sure what manual coloring does on the Analyze screen. Are just the colors on the screen changed, or does this change the parameters that the colors represent?

3. Each time we run Learn, does it randomly change the sound even if we don't change anything?

4. Does Smart Morph only morph the three parameters that are chosen on the Analyze screen, or does it also morph other parameters of the sound that users don't see?

5. Should we keep the sounds in parts 9-16 if we think we would want to run Learn again on a sound?

Thanks!

 
Posted : 03/06/2020 1:14 pm
Bad Mister
Posts: 12303
 

Hi Cindy,
Thanks for the questions.

1. On the Smart Morph Analyze screen, you can choose three parameters (one each for red/green/blue). If I change these parameters for an existing Smart Morph sound and run Learn again, will it morph those three new parameters rather than the ones that were originally chosen?

For example, if Red is Random Pan and I change Red to Filter Cutoff and run Learn, will Red now morph Filter Cutoff?

2. I'm not sure what manual coloring does on the Analyze screen. Are just the colors on the screen changed, or does this change the parameters that the colors represent?

We’ll take these two questions together, as they are related. If you leave the coloring choice set to “Auto”, the machine learning routine will automatically set the three parameters each time you run the “Learn” function.
If you set the color option to “Manual”, you direct the parameter morphing function yourself by selecting the specific parameters for Red, Green, and Blue.

Therefore, if you want to choose the parameters for the three (color) options yourself, select “Manual”. If you leave it set to “Auto”, then executing “Learn” will set those three options according to the AI analysis.
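If it helps to visualize what the coloring amounts to, here is a rough Python sketch. It is not Yamaha's code; the parameter names, map values, and data layout are invented purely for illustration:

```python
# Illustrative sketch only -- not Yamaha's code. Parameter names and map
# values are invented; real pixels hold full FM-X parameter sets.
import random

GRID = 32
chosen = {"red": "Filter Cutoff", "green": "Feedback", "blue": "Random Pan"}  # hypothetical picks

# Fake map: one dict of (normalized 0.0-1.0) parameter values per pixel.
pixels = [[{name: random.random() for name in chosen.values()}
           for _ in range(GRID)] for _ in range(GRID)]

def pixel_color(x, y):
    """Render one pixel's color (0-255 per channel) from the three chosen parameters."""
    p = pixels[y][x]
    return tuple(int(p[chosen[c]] * 255) for c in ("red", "green", "blue"))

print(pixel_color(0, 0))  # the color shows where each chosen parameter sits high or low
```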

3. Each time we run Learn, does it randomly change the sound even if we don't change anything?

Yes, if the setting is “Auto”. If the setting is “Manual”, you get to make the selections yourself.

4. Does Smart Morph only morph the three parameters that are chosen on the Analyze screen, or does it also morph other parameters of the sound that users don't see?

If you have selected four ‘parent’ Parts, 9-12, to work with, they will be shown on the 32x32 grid as four randomly placed white pixels. Touch one of those pixels and you will hear the ‘parent’ sound (the same as if you had selected that Part, 9-12, directly). On the “SmartMorph” > “Edit” screen you can see a white line pointing from the Part to the white pixel that represents the FM-X settings that would create that sound.

If you have four parent sources, they appear on the map as four white pixels. When you execute “Learn”, these parent sounds may move to different locations on the grid... this is where the neural-network learning algorithm builds the transitions. Move one pixel off of a white box and you are ‘morphing’ toward a different sonic point, somewhere ‘in between’... the sonic unknown. The farther you move away from a white pixel, the less influence its ‘gravity’ has, and the less recognizable the sound becomes as that source.
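As a loose mental model only (this is not the actual learning algorithm; the positions and parameter values below are invented), you can picture each pixel's sound as a blend of the parents, weighted by how close it sits to each white pixel:

```python
# Conceptual sketch of distance-weighted "gravity" -- not Yamaha's algorithm.
import math

# Hypothetical parent Parts: each has a position on the 32x32 map and a
# tiny, made-up set of normalized FM-X parameter values.
parents = [
    {"pos": (4, 27),  "params": {"cutoff": 0.9, "feedback": 0.1}},
    {"pos": (25, 3),  "params": {"cutoff": 0.2, "feedback": 0.8}},
    {"pos": (14, 14), "params": {"cutoff": 0.5, "feedback": 0.5}},
]

def morph_at(x, y):
    """Blend parent parameters; influence falls off with distance from each white pixel."""
    weights = []
    for parent in parents:
        px, py = parent["pos"]
        dist = math.hypot(x - px, y - py)
        weights.append(1.0 / (dist + 1e-6))      # closer parent = stronger "gravity"
    total = sum(weights)
    blended = {}
    for name in parents[0]["params"]:
        blended[name] = sum(w * p["params"][name]
                            for w, p in zip(weights, parents)) / total
    return blended

print(morph_at(4, 27))   # on a white pixel: essentially the first parent's sound
print(morph_at(10, 20))  # in between: an interpolated, "unknown" timbre
```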

The number of parameters that change between one of those white (parent) pixel locations and another is a result of this analysis. I don’t think it ever takes the same path twice, and it changes whatever it has to in order to get to the next known quantity.

These ‘Maps’ are truly like snowflakes or crystals that keep evolving each time you execute the process: even with exactly the same settings and the same parent source sounds, the result will be different, because that is built into the algorithm.

5. Should we keep the sounds in parts 9-16 if we think we would want to run Learn again on a sound?

That is exactly the reason to keep the ‘parent’ sounds in 9-16... but this is totally up to you. If you think you may want to continue to explore, then definitely keep them. If not, you can delete them.

You can see that many of the 32 example “SmartMorph” programs have been supported by additional KBD CTRL Parts, Insertion Effects, Super Knob automation, etc. But the SmartMorph sound is being calculated in Part 1. You can continue to work on a SmartMorph; you can create spin-off sounds, STORE them separately from the original, and branch off, building on previous work.

I would capture the ‘parent’ sounds in MONTAGE CONNECT (so I could revisit them). You do have an option to Delete the parent (9-16) Parts when you finish constructing the SmartMorph in Part 1.

The SmartMorph FM-X sound is always going to be generated in Part 1.
The FM-X programs you add to Parts 9-16 are each mapped to a white pixel on the map of Part 1... when you ask it to “Learn”, it creates a sonic landscape that fills in the map (called a “Self-Organizing Map”, or SOM). Each of the 1,024 pixels can potentially be a new timbre, built from parameters that morph the sound. Many parameters are in play.
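For anyone curious what a Self-Organizing Map does in general, here is a bare-bones, textbook SOM training loop in Python. It is only a generic sketch, not Yamaha's implementation, and the "parameter vectors" are random stand-ins:

```python
# Generic, textbook Self-Organizing Map sketch -- not Yamaha's implementation.
import random

GRID, DIMS, ITERATIONS = 32, 8, 500   # 32x32 map, 8 stand-in "FM-X parameters"

# Parents: four made-up parameter vectors (think: the Parts placed in 9-16).
parents = [[random.random() for _ in range(DIMS)] for _ in range(4)]

# Every pixel starts with a random parameter vector; a random start is one
# reason two passes over the same parents need not produce the same map.
som = [[[random.random() for _ in range(DIMS)] for _ in range(GRID)] for _ in range(GRID)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

for step in range(ITERATIONS):
    rate = 0.5 * (1 - step / ITERATIONS)                # learning rate shrinks over time
    radius = 1 + (GRID // 2) * (1 - step / ITERATIONS)  # neighbourhood shrinks too
    sample = random.choice(parents)
    # Find the pixel whose vector currently best matches this parent...
    bx, by = min(((x, y) for x in range(GRID) for y in range(GRID)),
                 key=lambda p: dist2(som[p[1]][p[0]], sample))
    # ...then pull that pixel and its neighbours a little toward the parent.
    for y in range(GRID):
        for x in range(GRID):
            if (x - bx) ** 2 + (y - by) ** 2 <= radius ** 2:
                som[y][x] = [w + rate * (s - w) for w, s in zip(som[y][x], sample)]

# After training, each of the 1,024 pixels holds a parameter vector that
# transitions between the parents: the "sonic landscape".
```

The real routine is far more elaborate, but this gives a feel for how a map full of in-between settings can emerge from a handful of parents.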

Notice that the most recent “Learn” is placed in an edit buffer, so if you push things too far you can get back to the previous result (Undo/Redo).
You’ll find this feature has truly limitless possibilities.

 
Posted : 03/06/2020 2:57 pm
Posts: 0
Eminent Member
Topic starter
 

Thanks for your explanation - that helps a lot! 😀

 
Posted : 03/06/2020 5:55 pm
Posts: 0
Eminent Member
 

Great explanations as always, thanks.

Some questions I have as another newbie to smart morphing. 🙂

1. When selecting Auto, it chooses parameters automatically for the R/G/B colors on the map, and Manual lets you choose them manually. Does this change the parameters that are actually "morphed", or does it only change how the colors on the map are rendered? In other words, if I create a morph, then change one of the color's parameters manually, it only changes what parameter renders that color on the map, not what parameter actually changes?

2. When on the Edit page, part 1 is shown along with 9-16. When doing the initial "Learn", are any parameters in part 1 included in the analysis, or is part 1 only the destination for the resulting morphed part? In other words, it doesn't matter what is in part 1 when you start out, right? All part 1 parameters are overwritten?

3. Once a morphed part is in part 1 and I use the X-Y map to "morph" the sound, are the parameter changes stored in the edit buffer, so if I store the performance the changes are stored as part of the performance? And if I were to add the stored, morphed part to another performance, would it retain whatever settings I had last "morphed" to?

4. When selecting FM-X parts to include in parts 9-16 from the morph edit page, it appears only FM-X performances are available on the list (not AWM2+FM-X performances), and only part 1 of a multi-part FM-X performance can be brought in? Maybe a future update could enhance this to allow other parts of multi-part performances to be brought in? (one could bring in/copy parts manually into parts 9-16 before going into smart morph).

5. Are there any plans in the future to add AWM2 smart morphing, or to make use of the X-Y map for other things like blending of different parts, like on a vector synth's joystick? Or adding X-Y map as an available control in the Control Assign/mod matrix, would be really cool.

 
Posted : 03/06/2020 9:24 pm
Bad Mister
Posts: 12303
 

1. When selecting Auto, it chooses parameters automatically for the R/G/B colors on the map, and Manual lets you choose them manually. Does this change the parameters that are actually "morphed", or does it only change how the colors on the map are rendered? In other words, if I create a morph, then change one of the color's parameters manually, it only changes what parameter renders that color on the map, not what parameter actually changes?

Not sure I understand that exactly, but first perhaps we should be clear about the sound in Part 1: the generated FM-X SmartMorph Part. This Part includes a 32x32 map where each pixel is, in theory, a different arrangement of the FM-X parameters.

The R/G/B parameters are your input vectors to the map creation. I don’t want to over-simplify what is a very sophisticated, complex “machine learning” algorithm. I wouldn’t say they are the only parameters that change; it is better to say you are focusing the analysis and changing the amount of weight those parameters carry in the result.
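Purely as an illustration of "weight" (this is only one way to read it, not a description of the real algorithm, and the names and numbers are invented), the chosen parameters could simply count for more when comparing two sounds:

```python
# Illustrative only: letting the chosen R/G/B parameters count for more
# in a similarity measure. Parameter names and weights are invented.
weights = {"Filter Cutoff": 3.0, "Feedback": 3.0, "Random Pan": 3.0}  # the three chosen ones
DEFAULT_WEIGHT = 1.0

def weighted_dist2(sound_a, sound_b):
    """Squared distance between two parameter dicts; chosen parameters weigh more."""
    return sum(weights.get(name, DEFAULT_WEIGHT) * (sound_a[name] - sound_b[name]) ** 2
               for name in sound_a)

a = {"Filter Cutoff": 0.8, "Feedback": 0.2, "Random Pan": 0.5, "Level": 0.7}
b = {"Filter Cutoff": 0.3, "Feedback": 0.6, "Random Pan": 0.5, "Level": 0.1}
print(weighted_dist2(a, b))   # differences in the weighted parameters dominate
```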

2. When on the Edit page, part 1 is shown along with 9-16. When doing the initial "Learn", are any parameters in part 1 included in the analysis, or is part 1 only the destination for the resulting morphed part? In other words, it doesn't matter what is in part 1 when you start out, right? All part 1 parameters are overwritten?

Part 1 is basically overwritten by the morph map generated from the analysis of the ‘parent’ FM-X Parts you place in Parts 9-16. You will inherit the supporting parameters of the FM-X sound in that Part (Controllers, Effects, etc.), but as soon as you interact physically with the “map”, a new FM-X sound is generated.

Each of the ‘parent’ FM-X Parts is represented initially as a (white) pixel location on that map. All the other pixels are involved in the morph.

3. Once a morphed part is in part 1 and I use the X-Y map to "morph" the sound, are the parameter changes stored in the edit buffer, so if I store the performance the changes are stored as part of the performance? And if I were to add the stored, morphed part to another performance, would it retain whatever settings I had last "morphed" to?

Again, not sure I understand exactly. But you can store any particular variation... you can select any two pixels and have the Super Knob move between them. You could store Performances where all you changed were the Start and End points of the Super Knob Morph. You could store a new Performance at any point, making many spin-offs...
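A tiny sketch of that Super Knob idea (the pixel coordinates and the 0-127 knob range are just example numbers, not anything read from the instrument):

```python
# Sketch: a knob value (0-127) sweeping along a line between two map pixels.
START, END = (3, 28), (29, 5)      # two chosen pixels on the 32x32 map (made-up)

def knob_to_pixel(knob_value):
    """Map a knob position (0-127) to a pixel between the start and end points."""
    t = knob_value / 127.0
    x = round(START[0] + t * (END[0] - START[0]))
    y = round(START[1] + t * (END[1] - START[1]))
    return x, y

for knob in (0, 64, 127):
    print(knob, "->", knob_to_pixel(knob))   # 0 -> start pixel, 127 -> end pixel
```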

4. When selecting FM-X parts to include in parts 9-16 from the morph edit page, it appears only FM-X performances are available on the list (not AWM2+FM-X performances), and only part 1 of a multi-part FM-X performance can be brought in? Maybe a future update could enhance this to allow other parts of multi-part performances to be brought in? (one could bring in/copy parts manually into parts 9-16 before going into smart morph).

When working from an “Init Normal FM-X” (you can only select FM-X Performances) you inherit nothing. Each ‘parent’ is going to be a separate entity and occupy a single pixel on the map (and influence those nearby).

The best way to learn (pun very much intended) how this works is to use the “Learn” function and experiment. Then you’ll be able to answer most of your own questions. You can construct your Performance in any number of ways, using the EXCHANGE function to move Parts wherever you need within your Performance.

5. Are there any plans in the future to add AWM2 smart morphing, or to make use of the X-Y map for other things like blending of different parts, like on a vector synth's joystick? Or adding X-Y map as an available control in the Control Assign/mod matrix, would be really cool.

We can never talk about future plans, sorry.

The FM-X SmartMorphing is limited to FM-X sounds (sampled audio does not behave the same). But the “map” already does “blending of different parts, like on a vector synth’s joystick” right now! Have you tried it? You can manually move your finger on the map:
Call up one of the 32 FM-X Smart Morph Performances
Touch “Smart Morph” > “Play”
If a Motion Sequence is automating movement, press [MOTION SEQ HOLD] to pause the Sequence... then you can manually use the map as a vector pad.

 
Posted : 04/06/2020 2:39 am
Jason
Posts: 8259
Illustrious Member
 

One interesting paradigm that Smart Morph sets up, one that wasn't necessarily there before, is a feature that cannot move from PART 1. Currently, the map will only control PART 1, and we cannot move the morphed creation to a different PART for control by the map.

Previously, all features had the ability to swap to a different position and still "work". But now we have something that won't work unless it's in PART 1.

That's the background, and the point is that this puts more stress on the MIDI receive channel limitation. Previously, in multi-channel mode, we could rearrange the PART order so that our sounds matched our external gear's limits on MIDI channels: which "sound" we put in which PART determined which MIDI channel it responded to. This feature, however, nails the morph down to MIDI channel 1 (PART 1) and adds a little more pressure to the fixed MIDI-receive-channel-to-PART relationship.

Now Nate said fairly clearly that the fixed MIDI receive channels aren't something that's going to change. Perhaps in the future we could move PART 1's morph creation to a different PART? Set up the map to target one of the first 8 PARTs to allow this movement: still only one PART controlled by the map, but the morphed creation could land in any PART 1-8.

I really am not taking this as an opportunity to ask for assignable MIDI receive channels.

Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R

 
Posted : 04/06/2020 7:50 am
