Synth Forum


Envelope Follower Parameters - how do they relate/correlate to the target part amplitude envelope?

28 Posts
3 Users
0 Reactions
745 Views
Posts: 0
New Member Guest
Topic starter
 

Page 6 of the reference doc

Envelope Follower:
Envelope Follower is a function for detecting the volume envelope of the input signal waveform and
modifying sounds dynamically.
NOTE Envelope Follower can be controlled not only by the audio signal from an external device, but also by the output
of all Parts.

For the rest refer to page 21 of the doc

For example, you can modify the sound of Part 2 by using the Envelope Follower for Part 1
(EnvFollower 1) as the “Source.”

That seems to say that the amplitude envelope of part 1 is used to modify/modulate the amplitude envelope of part 2.

Is that the correct interpretation? Assuming it is, let's reverse the part numbers and use the envelope follower for part 2 to modify/modulate the amplitude envelope of part 1. That now matches the diagram at the bottom of page 21 on which my questions are based.

1. part 2 is the source but the amplitude envelope for an INIT NORMAL (AWM2) (e.g.) element does NOT include times that are specified in MS - all times (Attack, Decay1, Decay2, Release) are specified as a number from 0 - 127.

2. part 1 is the destination whose amplitude envelope is to be controlled. Like part 2 its time values are specified as numbers from 0 - 127.

3. The diagram at the bottom of page 21 shows Attack and Release parameters with values specified in MS. Attack -> 1 - 40 ms, Release -> 10 - 680 ms. Any idea why these are specified in milliseconds when all other envelope parameters are specified as integer offsets?

4. If it is the volume of the input that controls things, what happens when an A/D source is used that doesn't necessarily have a KEY ON/KEY OFF series of events?

How do the attack/release values in MS interact with (correspond to) the attack, decay1, decay2, release parameters of the part 1 amplitude envelope?

That is, how do you relate times in MS to times with a value of 0-127? How are the Env parameters of part 1 combined with the MS time values to construct an amplitude envelope for part 1?

Are those envelope follower parameters used to OFFSET the part values? Or do they replace the part values? If 75 ms (shown on diagram) is less than the actual part 1 release value does the release happen at 75 ms?

How does a KEY OFF event affect the resulting envelope?

I'm having trouble just figuring out how to even test some of the above questions.

Any links to other sources of info about the actual workings (not the philosophy) of the envelope follower would be appreciated.

 
Posted : 28/09/2023 12:24 am
Jason
Posts: 8259
Illustrious Member
 

A controller is something that outputs some value and can control something else. The assignable knobs, mod wheel, pitch bend, etc. are all controllers and it's easy to grasp this. Envelope followers are controllers too. More on that later.

Any single envelope follower is like an extra assignable knob for the Part that uses it. Theoretically, the envelope follower can be any value from 0-127. In practice, I've had difficulty achieving 127 -- but it can be done (and the audio that is being followed to create such a high level probably isn't something you're going to use -- meaning you probably won't output this sound to your Main L/R outputs).

As a convention: I'm going to say "Envelope #" (as in Envelope 1) and this shows up as "EnvFollower 1" in the documentation Andrew quoted.

To me, it's always useful to imagine things like the envelope follower and MS Lanes as knobs. Without imagining these abstract controllers this way, it's difficult to relate to them. Normal knobs change when you grab the knob and turn it back and forth. These virtual knobs are spun by something else - not fingers. The envelope follower virtual knobs are "spun" by how loud the Part is that is labeled by the envelope follower (you see Envelope 1 - and that's Part 1's loudness spinning that knob). The louder the Part is, the higher the number gets.

For each envelope you can tweak the gain and hysteresis. Hysteresis is a way of smoothing out rapid changes: if something gets quiet really quickly, then louder quickly, and there's a lot of hysteresis, then you may not see any change in the value. But with this same high hysteresis, if the Part slowly fades out from loud to soft, then the value would go from a high value to a low value.
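The "virtual knob spun by loudness, smoothed by attack/release" idea can be sketched in code. This is just an illustrative model (a plain one-pole follower), not the instrument's actual DSP; the function name and scaling here are made up for the sketch:

```python
import math

def envelope_follower(samples, sample_rate, attack_ms, release_ms, gain=1.0):
    """Toy one-pole envelope follower: returns a 0-127 'virtual knob'
    value per sample. Rising input is smoothed by attack_ms, falling
    input by release_ms (both must be > 0)."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    out, env = [], 0.0
    for s in samples:
        level = min(1.0, abs(s) * gain)       # followed loudness, 0.0-1.0
        coeff = atk if level > env else rel   # rising vs. falling input
        env = coeff * env + (1.0 - coeff) * level
        out.append(round(env * 127))          # scale to controller range
    return out
```

With long attack and release times, a quick dip-and-recover in the input barely moves the output, which is the smoothing behavior described above.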

Unlike ANY other controller - Envelope followers have the power to reach from Part to Part. Not even Part assignable knobs can do this. They can be connected to superknob or the common assignable knobs - but this is different from "connecting" to another Part.

Envelope 1 is Part 1's loudness. If Part 2 uses "Envelope 1" as a source controller then Part 1 is able to "communicate" to Part 2 through its (Part 1's) loudness (or sonic energy or however you want to think of this).

If Part 1 (the Part being followed) has its output set to OFF then the envelope follower will still work the same. So the Part is silent out your speakers but can be used just as a signal for the other Parts (including itself). This is usually how I use the envelope follower - to follow another Part that's had its output turned off. The output of a Part is the final stage before mixing with other Parts and the envelope follower taps into a Part before this final stage.

If Part 2 uses Part 1's envelope (Envelope 1 or EnvFollower 1) then this follower (Envelope 1) doesn't necessarily modulate the volume of Part 2. It's just a knob like all the rest. The knob is a source. This knob can be assigned to any of the parameters any other knob can be assigned to. So modulating volume is an option (Part 2's volume) but you can also assign the destination to something other than volume like controlling an insertion effect parameter or pitch or ...

For example, you can modify the sound of Part 2 by using the Envelope Follower for Part 1
(EnvFollower 1) as the “Source.”

To beat this dead horse - "modify the sound of Part 2" does NOT (necessarily) mean the amplitude of Part 2. Modifying the sound includes pitch, LFO speed, and the laundry list of other parameters that are destinations in the control matrix. They all modify the "sound" in one way or another.

1. part 2 is the source but the amplitude envelope for an INIT NORMAL (AWM2) (e.g.) element does NOT include times that are specified in MS - all times (Attack, Decay1, Decay2, Release) are specified as a number from 0 - 127.

I don't follow. I know there's a good question in there, I just can't piece it together. If this has something to do with how do you take a controller that goes from 0-127 and control a parameter that only goes from 0-64 (or whatever) then the answer is the same way as if this were the modwheel or assignable knob. A parameter with a limited range will hit a ceiling and not go any higher. I'm not sure I know what "MS" means in the above. If it means Motion Sequence then I'm not sure how motion sequence relates. If Part 2 is the source and Part 2's MS contributes to Part 2 getting louder and softer then MS is a way to control the loudness of Part 2 which will directly impact Part 2's envelope output (Envelope 2).

I'm circling back to this #1 because I think I now understand a little more about the question. The "Attack" and "Release" controls on the envelope follower's output behave differently than the attack/release of an EG (envelope generator). The reason why is that an EG is a different beast. You define exactly how tall it gets and how fast it gets there (slope of the line). An EG is an absolute thing when you define attack, release, and decays. The envelope follower is creating an "envelope" (really, it's just turning a virtual knob) that can be influenced by parameters called "attack" and "release". This is not an absolute thing, because the audio energy of the source Part is a dynamic thing (or can be), so it could be moving around, and these parameters just influence how the output of the envelope follower reacts to this change in sound pressure / sonic energy / loudness.

2. part 1 is the destination whose amplitude envelope is to be controlled. Like part 2 its time values are specified as numbers from 0 - 127.

This use of "destination" is different from the control matrix's "destination". So there's either missing information (like the parameter that's Part 1's destination) or there's overloading of the term "destination". Which is fine, if this usage of "destination" just means the Part that's using Part 2's (source) Envelope Follower output in some way, without specifying what the final destination is.

3. The diagram at the bottom of page 21 shows Attack and Release parameters with values specified in MS. Attack -> 1 - 40 ms, Release -> 10 - 680 ms. Any idea why these are specified in milliseconds when all other envelope parameters are specified as integer offsets?

I don't understand "MS" above as in "specified in MS". No matter, let me roll forward to your questions about attack and release. The envelope follower can be tweaked - as I mentioned before - so it responds faster or slower to the change in sound pressure (loudness, energy, etc) of the Part that's being followed. In this picture Part 2 is the source as you say and you can change the attack so it's slower to track the change of getting louder or the release where you can make the envelope follower more slowly track the change of the input level getting less loud. These have little to do with any other parameter that's called "attack" and "release". They're just here to shape how the virtual knob of Part 2's envelope follower output works (referred to as "Envelope 2" or docs would say "EnvFollower 2" ).

4. If it is the volume of the input that controls things, what happens when an A/D source is used that doesn't necessarily have a KEY ON/KEY OFF series of events?

Key on and key off don't (necessarily) have anything to do with the sonic output of a Part. This concept isn't tied to pressing keys or not pressing keys. Using effects that make noise all the time whether or not you press a key, you can create a Part that's "loud" without ever pressing a key, and its envelope follower output will be a number near the 127 end of the spectrum. An envelope follower's output is just about "how loud" something is (any Part, or the master output meaning "all" Parts together, or the A/D input's loudness) ... with an additional "filter" of this attack/release/gain stuff to tweak the response and bias the level. So if A/D is the source then EnvFollower AD will output however loud the A/D inputs are. If you have a constant bullhorn sample that goes through a volume pedal, and the output of this volume pedal is hooked up to the A/D inputs, then the A/D Envelope Follower will output something close to 0 when you have the volume pedal in the heel position ("all the way back" -- lowest volume) and something close to 127 when you have the volume pedal in the full toe position ("all the way forward" -- highest volume). Nothing to do with key events.

Are those envelope follower parameters used to OFFSET the part values? Or do they replace the part values? If 75 ms (shown on diagram) is less than the actual part 1 release value does the release happen at 75 ms?

There's no equivalency here. "Release" is different and independent for an envelope follower vs. anything in the Part itself (AEG release or PEG release or FEG release). They're all different and unrelated.

An envelope follower as a virtual knob CAN offset part values (I mean, it's a knob, and the destination of using this knob as a source can be anything in the control matrix) - but I think there's a disconnect in how you visualize this system, mixing up Part parameters with the envelope follower's tweaking options.

How does a KEY OFF event affect the resulting envelope?

There's no answer for that. It depends on what a key off event does: how it impacts the loudness of the Part (or Master) you're asking about. And obviously a key off event doesn't have any chance to impact the A/D follower unless you were to take the output of the keyboard and route it back into A/D (directly or indirectly).

A key off event could make a Part louder. It could make it softer. It could have a long tail and take forever to get quieter. It could do nothing (take an FM part with the release time set infinite and a high level).

At the end of the day, the Part or Master or A/D is just about how loud each of those things is. So whatever you do to change the loudness of these Parts and A/D - it will be reflected in the envelope follower that "follows" this loudness.

I think there's a fair amount of purging you need to do on your understanding, to get back to basics.

---

An envelope follower is a reflection of the loudness of the thing it's following (Part 1, Part 2, etc).

You can make it react to the loudness faster or slower in different ways using the envelope follower tweaking parameters "gain", "attack", and "release". And these words "attack" and "release" have nothing to do with envelope generators. They're a different beast.

Envelope followers are just virtual knobs that can be tied to any destination parameter.


Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R

 
Posted : 28/09/2023 5:01 am
Jason
Posts: 8259
Illustrious Member
 

I usually use the envelope follower in the following way:

Say I have Part 1 that is the "source" (is the thing being envelope followed) and Part 2 is where I use "Envelope 1".

1) I set up Part 1's output so it's OFF. It's still active for the envelope follower when off. I do this because I'm just devising a way to "signal" another Part to do something and I don't want to hear this signal.

2) A simple signal would be to use Part 1 as an FM-X Part. I'll set it as monophonic so more fingers down do not increase the volume. I'll set it up with infinite release so once I press a key there will be sonic energy ("loudness") and it won't change. I'll also probably go overboard and set up the pitch to absolute so every key is the same pitch. And I'll make sure the amplitude envelope generator (AEG) for Part 1 has a zero attack time so there's sound right away that stays constant. I'll turn off all effects (no insertion, no sends to var or reverb) so effects do not change the loudness of Part 1.

3) I may limit the note (keyboard key) range. Let's say from C3-C3 (just one note).

When I recall this performance, Part 1 will not have any sound energy, because nothing is making noise yet. Not Part 1 or Part 2. When I press a key that's not C3, Part 1 will still be flat-lined. The envelope follower following Part 1 will be flat-lined too. It's just a reflection of the loudness, after all. The first time I press MIDI note C3, Part 1 now starts making noise and gets as loud as I've programmed it to. It will stay at this same volume whether I press C3 again or not. Usually I have a wider range of notes to trigger, which is where it makes more sense to use monophonic.

4) I'll setup Part 2 so Envelope 1 is the source controller. The destination parameter could be volume. And I can say use a large negative ratio so that when I press note C3 and Envelope 1 gets "big" then volume is subtracted from by a big number.

Part 2 is "normal" and has its output set to Main L/R so I can hear it (unlike Part 1). Say it's a piano.

So the utility here is that I can play any note that's not C3 and Part 2 will be a normal-volume piano. I can hear it because maybe the level is set to 100 and I hear it fine. Then when I press C3, it squelches Part 2, because the envelope follower output for Part 1 gets big, which turns the virtual knob to, say, 115. When the control matrix curve has an input of 115, the offset could be -130, and 100 minus 130 is 0 (you can't get lower than the floor of 0).
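The arithmetic in that squelch example can be written out. This is a hypothetical linear sketch (the actual control-assign curves on the instrument are more elaborate than a straight ratio; the function and ratio value here are made up for illustration):

```python
def apply_offset(part_volume, follower_value, ratio):
    """Scale the follower's 0-127 output by the assignment ratio,
    add it to the Part volume as an offset, and clamp the result to
    the 0-127 parameter range (the 'floor of 0' mentioned above)."""
    offset = round(ratio * follower_value / 127)
    return max(0, min(127, part_volume + offset))

apply_offset(100, 0, -144)     # follower idle: piano stays at volume 100
apply_offset(100, 115, -144)   # offset of about -130: volume squelched to 0
```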

I thought it may help to have an actual application that matches how to practically use envelope follower.


Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R

 
Posted : 28/09/2023 5:19 am
Posts: 0
New Member Guest
Topic starter
 

I don't understand "MS" above as in "specified in MS".

It means milliseconds.

1. part 2 is the source but the amplitude envelope for an INIT NORMAL (AWM2) (e.g.) element does NOT include times that are specified in MS - all times (Attack, Decay1, Decay2, Release) are specified as a number from 0 - 127.

I don't follow. I know there's a good question in there, I just can't piece it together.

The envelope follower params are millisecond values, but the AEG params are values from 0-127 and can't be correlated to actual time or millisecond values.

Bad Mister's Envelope Follower article is using one part to control the volume of another part.
https://www.yamahasynth.com/learn/modx/mastering-modx-envelope-follower

The AEG of the destination part creates an envelope for the part's volume. The diagram shows the source parameters for attack and release being specified in milliseconds (MS). But the AEG parameters for attack, decay, etc are specified as 0-127.

So are the source attack and release millisecond values being used to alter the envelope that the AEG creates? If not what do the source attack and release modify?

The diagram you posted shows attack of 26ms and release of 75 milliseconds. Ok - but 26 milliseconds from what starting point in time? Key On? The start of the envelope created by the AEG? What is the reference point for those two millisecond values?

There's no equivalency here. "Release" is different and independent for an envelope follower vs. anything in the Part itself (AEG release or PEG release or FEG release). They're all different and unrelated.
The attack of 26 ms and release of 75 ms have to relate to SOMETHING - what do they relate to?

What do attack and release mean if you say they are NOT related to the attack/release in an AEG envelope?

You seem to be saying that a KEY OFF event that takes an AEG envelope directly to its release point is completely unrelated to that 75 ms release time of the envelope follower.

So I'm not clear at all on the chain of events 1) when the attack of 25ms takes effect, 2) when the release of 75 ms takes effect versus when the AEG envelope values take effect.

 
Posted : 28/09/2023 5:57 am
Posts: 0
New Member Guest
Topic starter
 

An envelope follower is a reflection of the loudness of the thing it's following (Part 1, Part 2, etc).

You can make it react to the loudness faster or slower in different ways using the envelope follower tweaking parameters "gain", "attack", and "release".

Ok - envelope follower action comes first. So, as a controller it takes a loudness as input and spits out a value (0-127?).

Let's assume a steady state input loudness currently producing a steady state output value.

1. the input loudness now increases measurably and stays at that new level.

Does an attack of 25ms mean the output value of the follower won't begin to change for 25ms? Or does it mean it will take 25ms to change from the steady state value to the new value?

How does the 75ms release value come into play? Is the attack only on an increase and the release only on a decrease?

How do these two values work together to modify the output of the envelope follower?

 
Posted : 28/09/2023 6:12 am
Posts: 0
New Member Guest
Topic starter
 

Envelope followers are just virtual knobs that can be tied to any destination parameter.

Ok - but in Bad Mister's example they are tied to a part volume to alter that volume.

There is also an AEG envelope that is used to alter that volume. Those two alterations must happen either serially or in parallel.

There is no AEG envelope for the follower value to alter until the AEG creates one. And for each KEY ON event a new envelope will be generated, but the output of the envelope follower doesn't necessarily correlate to KEY ON events since it comes from a different source.

I'm trying to understand the actions that happen and the timeline they happen in for Bad Mister's example.

And trying to figure out how to create meaningful tests that will help show the order, and timing, of events.

 
Posted : 28/09/2023 6:20 am
Posts: 0
New Member Guest
Topic starter
 

I thought it may help to have an actual application that matches how to practically use envelope follower.

Tomorrow I'll try to replicate your example.

 
Posted : 28/09/2023 6:24 am
Bad Mister
Posts: 12303
 

If you think of the Envelope Follower as detecting audio shapes, and creating a loudness contour of the actual audio energy generated by the 'Source', you can better understand how it works, and why its Attack and Release settings are measured in time increments (milliseconds; always abbreviated "ms", lower case) and not in the 0-127 values of the MONTAGE/MODX/MODX+ Envelope Generators.

Say the Source is the AD IN: your drummer's snare drum or kick drum mic… every time it causes an audio spike, the Envelope Follower will react, provided the ATTACK parameter in the Envelope Follower is set quick enough to respond.

A kick or snare drum hit, being *percussive*, is very fast, very abrupt. If you set the Envelope Follower "Attack" so that it is too slow (a number too large), it is possible that the entire trigger event might occur before the Envelope Follower can respond.

If you think about a snare drum as an audio event… it could not be clearer when it begins. Your ears tell you exactly when it begins. If your Envelope Follower “Attack” is set to too large a value, the entire snare drum event will have come and gone before it can respond at all. It will simply ignore the event…

Reference for Perspective: Because musicians don't deal in milliseconds, you may not realize that at a tempo of 120 bpm, your basic snare drum backbeats on "the 2" and "the 4" are exactly one full second apart (1000 ms).
The snare drum's entire audio footprint comes and goes in well less than an eighth of a second (125 ms). Setting the Attack at a value higher than the duration of the trigger event will miss the entire barn! Make sense? A setting higher than 125 ms means it is being told to react after the chickens have already escaped. You apply no influence.
_ (It is similar to setting the Attack on a Compressor/Limiter) - set the Attack according to the audio source's profile.
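The tempo arithmetic above is easy to verify:

```python
def backbeat_spacing_ms(bpm):
    """In 4/4, backbeats on 'the 2' and 'the 4' are two beats apart."""
    ms_per_beat = 60_000 / bpm
    return 2 * ms_per_beat

backbeat_spacing_ms(120)   # → 1000.0 ms between snare hits, as stated above
```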

If you want to use that snare drum's audio shape as a triggering/controlling event, you would lower the Envelope Follower Attack to an appropriate value to be influenced by that audio shape. Setting it as short as possible will ensure that it can react as soon as the event occurs; if you set the Attack too long, the snare drum event may pass completely before the Envelope Follower can react.
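To put numbers on the "miss the barn" point: if you model the follower's Attack as a simple one-pole time constant (an assumption for illustration, not the documented implementation), the fraction of a short burst the follower catches falls off quickly once the Attack exceeds the burst length:

```python
import math

def peak_fraction(burst_ms, attack_ms):
    """Fraction of a constant burst's level a one-pole follower reaches
    by the end of the burst, treating attack_ms as its time constant
    (an assumed model, not the instrument's documented behavior)."""
    return 1.0 - math.exp(-burst_ms / attack_ms)

# A ~125 ms snare hit tracked with various Attack settings:
for atk_ms in (1, 40, 125, 500):
    print(atk_ms, round(peak_fraction(125, atk_ms), 2))
# an Attack of 1 ms catches essentially all of the hit;
# 500 ms catches only about a fifth of it
```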

As to the Release, the envelope generated can be held open for the amount of time indicated by this parameter (up to 680 ms). Once the unit is triggered, you set the amount of time (in ms) for the Follower to allow signal to pass. Observe the movement of the meter: the higher the Release value, the longer it takes for the result meter to return to silence; the shorter the Release value, the quicker the control will return the result to zero.
_ (Again, this is similar to how Release works on a Compressor/Limiter) — set the Release according to the response you want.

Say you wanted your drummer’s drum input audio event to trigger a big, phat bass sound effect representing the footsteps of Godzilla, the mythical monster from the movies, by setting the Attack very quick (lowest ms value) and the Release very long (longest ms value), the rather short trigger event can have a relatively longer release time… giving it a bigger profile… by assigning this to the appropriate synth Part you could turn each audio hit into the giant’s rather ominous footsteps.

If you set the Attack very quick, and the Release also rather quick, instead of big plodding steps, you might want to turn the quick snare hits into quick target hits, as if the monster was simply walking across the stereo field in tap shoes.

Remember the Envelope Follower’s output can be targeting any internal sound…
the Envelope Follower’s input can be any internal Part — you can just as easily feed in a Drum Kit that’s been assigned an Arpeggio as the Source.

The target sound will begin according to the ATTACK setting. Try it using an abrupt sound like a percussive drum hit to define the attack. Start with the smallest Attack value; as you increase it beyond a certain point, you will get no response. Once you hear/observe this happen it will become crystal clear (Aha! moment)… because, again, the sound itself is quick, therefore a quick Attack setting must be used to capture it. By the time you increase past a certain value, the trigger sound is over; you'll have missed it entirely.

Imagination:
You can use a Drum Arpeggio (feed an internal Part to the Envelope Follower) - instead of sending its audio to the normal Main L&R Outputs, you might just send its audio profile to the Envelope Follower and route it to the Operator Levels of an FM-X Pad sound's Modulator Output… each different drum hit from the Arp Phrase, although not directly audible, will trigger a harmonic burst in the FM-X Pad, creating a unique and rhythmically interesting result. The more dynamic the Arp Phrase, the more interesting the harmonic changes in the targets to which you route it.

Extra Credit:
Volume control as a target is fairly easy to hear (thus it makes a good example to experiment with when learning) — but please don't limit your thinking to just the "usual suspects" — use your imagination, any available assignable Control Destination can be targeted. And almost anything can be used as a Source. The Source can be routed (audibly) to the Main L&R Output and the Envelope Follower simultaneously… or, (as I've found useful) by defeating the connection to the Main Out, the Source can be 'hidden' from direct detection and act as a silent, stealthy 'ghost' control.

I can have a ‘silent’ rhythmic thing influencing my Pad sounds, changing not only volume, panning, send amount, or filter cutoff (the ‘usual suspects’), but also things like dynamic changes in Reverb Time, or activating a big “Gated Reverb” moment, on specific hits in a phrase…

Learning Curve: getting a handle on what makes a good Envelope Follower Source is the skill in this whole thing. You quickly learn what makes a good Source and what does not.
Say you set up a silent Drum Arp (that is, a drum groove disconnected from the Main L&R, only going through the Envelope Follower)… the difference between one drum groove and another is now going to have to be judged entirely differently.

The individual drum instruments are indistinguishable; you are only listening to the shape of the audio's loudness, the outline. It's very much like only being able to see a person's two-dimensional shadow on the wall instead of their 3-D image. So more space, more silence between events, may make for more energetic and interesting source material.

You may find using fewer drums from an ARP Phrase useful… “less is more” can be understood clearly, especially when you are listening to audio in terms of just its dynamic peaks and dips. You find yourself making your own ghost source material — it’s a great way to add unique movement to your music. But don’t only think rhythmically…

 
Posted : 28/09/2023 10:44 am
Jason
Posts: 8259
Illustrious Member
 

Oh, I didn't recognize MS - but that's correct. Usually listed as mS or ms. And "MS" was (for me) too overloaded with motion sequence for me to get past that. No problem. I get it now. It wasn't so important for me to follow that.

The best tool for seeing (literally seeing, not figuratively) what the envelope follower tweaking values of "attack" and "release" do would be to look at the same screen where you are setting these values. There are two meters which will both be bouncing from min to max in a similar manner. The top meter (bar graph) is the raw input signal without "attack" and "release" (or gain) applied and the bottom meter will show you how the final output of the envelope follower will behave with attack and release applied. For me, the usual process here is to move around one value or the other and watch what happens.

Gain will scale the result and you can see the impact of gain as well looking at the bottom meter.

BTW: I usually set attack and release to minimum values to limit the hysteresis as much as possible. I usually want the reaction time of the envelope follower to be as quick as possible to the raw input loudness. If I could set attack and release to 0 (and gain to unity) so the input and output were the exact same then I would do that. Maybe fiddling with gain to make up for an input signal that doesn't reach 127 if that mattered to me (although there are consequences to using gain - which you can see for yourself looking at the bottom meter). Rather than using gain in this way, I usually just scale my control assignment curve that uses an envelope follower as a source to compensate rather than trying to make the virtual knob of an envelope follower spin full range.
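A tiny sketch of the gain consequence mentioned above (assuming the output simply clamps at 127; the instrument's exact gain staging isn't documented here, so this is illustrative only):

```python
def follower_out(level, gain):
    """Followed level (0-127) scaled by gain, then clamped to 0-127.
    High gain reaches full scale sooner but flattens the top of the
    response, losing dynamics there."""
    return min(127, round(level * gain))

[follower_out(lvl, 2.0) for lvl in (10, 40, 60, 90, 120)]
# → [20, 80, 120, 127, 127]: inputs above ~64 all flatten to 127
```

This flattening is one reason to scale the control-assignment curve instead of pushing gain, as described above.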

Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R

 
Posted : 28/09/2023 4:25 pm
Posts: 0
New Member Guest
Topic starter
 

The target sound will begin according to the ATTACK setting.

That is not clear to me.

Doesn't the sound of a typical part (e.g. piano) begin (i.e. its envelopes begin) at KEY ON?

So I'm trying to understand the correlation between these two event streams:

A. KEY ON of the target part - which begins the sound's journey through the envelope trajectories (FEG, PEG, AEG)

B. Output level change of the envelope follower source.

Aren't these two event triggers asynchronous? Part 1 is trucking along and the envelope follower change might NEVER happen or might happen at any time at all relative to where part 1 activity is in relation to part 1's envelope (ADSR).

1. Is the output of the envelope follower a value from 0 to 127? from -64 to +63?

2. Is that output value then used as an OFFSET to the target parameter (VOLUME in the article example) similar to how other offsets (e.g. quick edit offsets) are applied?

Based on comments so far it seems like the envelope follower is producing it's own envelope based on

1. the change of input value
2. the attack and release settings

Almost as if the snap of a snare drum changing the input value is acting as a 'key on/key off' pair.

1. There is no input at all from the envelope follower (EF) source.
2. The EF produces a steady state output value - e.g. 0, with a possible range of -64 to +63.
3. A snare drum hits - causing an input spike of 40 units of some sort.
4. The follower creates a mini envelope using a delta of 40 and the attack/release settings.
5. That envelope is used as an offset to the target parameter. For 'volume' the destination might be at any point at all in its journey along its envelopes.

It is that ASYNCHRONOUS connection between the source and target parts that I'm having trouble getting my head around.

Is there an implicit assumption that source and target will both be coordinated/synced with a common timer such as tempo or beat?

Otherwise the follower's output changes will 'fall on deaf ears' so to speak and not overlap with the target activity at all.

If that is the case I can understand why a follower might be used solely, or most often, with an arpeggio in order to effect synchronization.

I think the 'fog' may be lifting some, but it will probably take me until the release of the new product for it to sink in more.

And then I might have to start all over again if that 'new product' does things differently.

Thanks for the detailed info and examples.

 
Posted : 28/09/2023 5:03 pm
Bad Mister
Posts: 12303
 

In the example that I mentioned:
The Envelope Follower Source is a snare drum hit fed to the AD IN via a microphone
A snare drum has a very fast (immediate) Attack.
The trigger this creates in the Envelope Follower can be used (not to trigger a Note-on)… what the Envelope Follower creates (using the snare hit as the Source) will be a rapid rise in the target Control Destination parameter. Targets include assignable controls and do not include Notes. A Note that is engaged (being held) and has its volume “biased” completely to the Envelope Follower will pulse (sound) following the shape of the Source (for example, a staccato burst on a bass note).

If the target is the Operator Output Level of an FM-X Modulator, you will get a harmonic burst. Very effective when applied to a Pad sound… if the Pad sound is droning… playing a held chord for eight measures, the energy pulses generated by the Envelope Follower will cause a harmonic burst that mimics the snare drum hits.

This rapid rise in the assigned parameter could be what I described in my example (Operator Output of a Modulator), but it could be a rapid change in pan position, or a spike in the reverb send, a change in EQ setting, the type of event used determines what it might be good to use it for.

A quick spike like a snare drum hit has the result shaped like a percussive hit.
A more gradual attack for the source will result in a more gradual change in the target parameter value.

Recommendation
Look in the Data List booklet (pdf) for the “Control List” for an idea of potential Destinations. These are the targets for the Envelope Follower (notice “Note-on” is not a potential destination, at all). Destinations are all assignable control destination parameters.

 
Posted : 28/09/2023 6:52 pm
Posts: 0
New Member Guest
Topic starter
 

The trigger this creates can be used, (not to trigger a Note-on which you are trying to apply (?) Not sure why)… what the Envelope Follower creates is a rapid rise in the target Control Destination parameter.

I'm not trying to trigger a note-on. But something in the destination part has to create a sound for that 'rapid rise' to affect.

what the Envelope Follower creates is a rapid rise in the target Control Destination parameter.

Thanks - I understand that. But as far as I know it does NOT actually trigger the part to make a sound for that parameter to then affect.

Seems to me that you have to make sure that the 'rapid rise' occurs at a useful point in the target envelopes being used. This is the key issue I mentioned above:

It is that ASYNCHRONOUS connection between the source and target parts that I'm having trouble getting my head around.

Is there an implicit assumption that source and target will both be coordinated/synced with a common timer such as tempo or beat?

The follower doesn't trigger the destination to create a sound - it creates that 'rapid rise' to modify a sound that the destination is already making. If it triggers a rise when the destination isn't making a sound, or isn't in a useful place in its envelopes, the rise has no audible effect.

The left hand is the destination. The right hand is the follower source/trigger.

Doesn't the right hand need to know what the left hand is doing so it can trigger it at a useful time?

Don't the left and right hands (destination sound, follower source/trigger) need to be synchronized or coordinated to make sure they overlap each other properly?

How do you make sure that synchronization occurs? Motion sequences can be synced: Tempo, Beat, Arp.

I don't see how you sync the follower source to the destination.

 
Posted : 28/09/2023 7:19 pm
Bad Mister
Posts: 12303
 

Yes, this is music… so there must be either manual or mechanical synchronization… if that is your goal.

So the drummer hitting that snare drum Source, in my example, is playing in reference to the same tempo I am. So I hit and hold a chord on my Pad sound. Each of the drummer’s snare drum hits causes a harmonic reaction in my Pad sound. We are both playing the same song and are referencing each other.

I also gave an example with Godzilla — strictly sound effects, tempo is not necessarily needed. The Key must be engaged, the Envelope Follower is a Modifier…

 
Posted : 28/09/2023 7:29 pm
Jason
Posts: 8259
Illustrious Member
 

Doesn't the sound of a typical part (e.g. piano) begin (i.e. its envelopes begin) at KEY ON?

Yes, typically a Part that you hear doesn't make any noise until you hit a key. That's the typical/usual way this works. I could leave it there - but there are unusual cases where a Part would start making noise before you press any key. You can stack up different effects that would do this. The scratchy record simulator runs all the time and you can send this through distortion or other effects that would amplify and modulate this without having to have any keys involved. For a target this isn't very "useful" - but for envelope follower sources - it could be something you utilize.

So I'm trying to understand the correlation between these two event streams:

A. KEY ON of the target part - which begins the sound's journey through the envelope trajectories (FEG, PEG, AEG)

Adding: a key on event that triggers an element that can filter based on key range and velocity. So assuming those are met, the FEG/PEG/AEG starts for the target.

B. Output level change of the envelope follower source.

Assuming the envelope follower source is a different Part than the target Part - then there is no correlation at all. Note-on events that make the target Part start to make noise may not necessarily change the source Part at all. Or there may be other things devised to delay the source Part's noise making, like a key-on LFO, motion sequence, AEG, key delay, etc. What is happening at the output of a target Part (a Part that applies an Envelope Follower from some other/different source Part) is not "seen" by or factored into what the source is doing. They're trains on separate tracks doing their own things according to their own rules.

Aren't these two event triggers asynchronous? Part 1 is trucking along and the envelope follower change might NEVER happen or might happen at any time at all relative to where part 1 activity is in relation to part 1's envelope (ADSR).

I know you're likely referencing something you've already defined somewhere else - but, still, it would be helpful to reiterate what the source is that is being envelope followed and which target Part is using this virtual knob.

1. Is the output of the envelope follower a value from 0 to 127? from -64 to +63?

It's just (analogous to) a knob. Knobs go 0-127. All controllers go 0-127. Aftertouch, MS Lane, Mod Wheel, breath controller, all of them.

2. Is that output value then used as an OFFSET to the target parameter (VOLUME in the article example) similar to how other offsets (e.g. quick edit offsets) are applied?

Once you start talking about target parameters - it works the same way as any non-envelope-follower source controller works. When the source controller is 0 (the Envelope Follower output of the source Part is 0), you are at the left side of the control assignment curve. Input values run left and right (x-axis of the graph) in these curves and the outputs are the values up/down (along the y-axis). This is how everything works. Nothing is different when using an Envelope Follower.

And also, whatever the output of the curve is (which is different than the output of the envelope follower - we're at the next "stage" here) will be an offset to the target parameter, as with all control assignments, no matter what source controller you use (envelope follower, mod wheel, aftertouch, foot controller, etc.). This is just fundamental to the control assignment system, and you should have encountered it before with assignable knobs and other things.
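That two-stage path (controller value → curve → offset) can be sketched roughly. The function names and the simple linear "ratio" curve are assumptions for illustration; the real instrument offers several curve shapes and polarities:

```python
# Rough sketch of the curve stage described above. The function names
# and the simple linear "ratio" curve are assumptions for illustration;
# the real instrument offers several curve shapes and polarities.

def curve_output(controller_value, ratio=63, polarity=1):
    """Map a 0-127 source controller value through a linear curve.

    controller_value: the source value (e.g. envelope follower output)
    ratio: curve depth, i.e. the maximum offset at full controller value
    polarity: +1 for an upward offset, -1 for a downward one (simplified)
    """
    return polarity * (controller_value / 127.0) * ratio

def apply_offset(stored_value, offset, lo=0, hi=127):
    """Offset the stored (edited) parameter value, clamped to its range."""
    return max(lo, min(hi, round(stored_value + offset)))

stored_volume = 100
follower_value = 80                       # current envelope follower output
modulated = apply_offset(stored_volume, curve_output(follower_value, ratio=40))
```

The key point the sketch illustrates: the envelope follower only supplies `follower_value`; everything after that is the same offset machinery any assignable controller uses.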

Based on comments so far it seems like the envelope follower is producing its own envelope based on

Correction: it's producing a number from 0-127, which is different than an "envelope". The number follows the loudness of the thing it's monitoring (a Part, A/D, etc.). It's more-or-less instantaneous unless you apply the attack/release settings, which add hysteresis.
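The "just a number tracking loudness" point can be sketched as a simple RMS meter. The window size and scaling here are assumptions for illustration, not the instrument's actual level detector:

```python
# Sketch of "the follower is just a number tracking loudness": measure
# the short-window RMS level of an input signal and scale it to 0-127.
# The window size and scaling are assumptions for illustration, not the
# instrument's actual level detector.
import math

def loudness_0_127(window):
    """RMS level of one window of samples (-1.0..1.0), scaled to 0-127."""
    rms = math.sqrt(sum(s * s for s in window) / len(window))
    return min(127, round(rms * 127))

silence = [0.0] * 64
snare_burst = [0.9, -0.8, 0.7, -0.6] * 16   # a loud, noisy hit

quiet_level = loudness_0_127(silence)       # the "knob" sits at zero
loud_level = loudness_0_127(snare_burst)    # a big number while the hit rings
```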

Almost as if the change of input value - the snap of a snare drum - is acting as a 'key on/key off' pair.

1. There is no input at all from the envelope follower (EF) source.
2. The EF produces a steady-state output value - e.g. 0, with a possible range of -64 to +63.
3. A snare drum hits, causing an input spike of 40 units of some sort.
4. The envelope follower creates a mini envelope using a delta of 40 and the attack/release settings.
5. That mini envelope is used as an offset to the target parameter. For 'volume' the destination might be at any point at all in its journey along its own envelopes.

It's really no more complicated than this: if a Part is making a louder noise, then the envelope follower that is following that Part will output a bigger number. Hysteresis comes into play - but that's "next level" that you can look at on your own. And the final result is just a number 0-127 representing how loud - right now ("instantaneous") - the followed Part is.

It is that ASYNCHRONOUS connection between the source and target parts that I'm having trouble getting my head around.

There's hardly any relationship at all between Parts. The envelope follower result is not a triggered event. It isn't latched. It's constantly being "spun" (think as a knob) by the loudness of the Part that is being followed. That doesn't necessarily have anything at all - at all - none - to do with the target Part and what it's doing or not doing.

Is there an implicit assumption that source and target will both be coordinated/synced with a common timer such as tempo or beat?

No. Why would they be? You're automating a knob using some other Part's loudness as the modulator for this virtual knob. It's just a knob. If you could hook up magic earphones to the Part that's being followed and could hear how loud that Part was (even if its output was set to OFF - you'd still hear just the loudness of this Part in your magic earphones) AND you could turn an assignable knob way up when the sound gets loud and way down when the sound is silent and everything between - then you'd be simulating exactly what the envelope followers are doing.

Current Yamaha Synthesizers: Montage Classic 7, Motif XF6, S90XS, MO6, EX5R

 
Posted : 28/09/2023 8:11 pm
Posts: 0
New Member Guest
Topic starter
 

No. Why would they be? You're automating a knob using some other Part's loudness as the modulator for this virtual knob.

They would be synced because you have to twirl the source follower knob while the destination part is actually doing something.

Using the left hand, right hand example. The left hand is a destination part volume. The right hand is the enveloper follower part.

If I press a key with the left hand it makes a sound whose volume changes based on the AEG envelope created.

If I press a key with the right hand it changes the output of the envelope follower.

If those two things aren't synced, or don't overlap, the follower output won't do anything at all.

If those two things overlap on the attack phase of the left hand part AEG envelope it will have a radically different result than if they overlap on the release phase of the left hand AEG.

Assume that the left hand does:
1. one key NOTE ON
2. hold for 1 second
3. NOTE OFF

Then the above will traverse some portion of the AEG envelope time. To be effective, the output changes of the envelope follower need to occur during that envelope transition, don't they?

And don't you want to control whether the follower changes occur during the envelope's Attack, or Decay1, or Decay2, or Release times? If the changes don't overlap, nothing will happen. If the changes overlap at different points, the effect will be different.

Don't you need to control that so that the follower output changes ALWAYS occur during 'decays' (or another specific envelope phase)?

 
Posted : 28/09/2023 8:31 pm

© 2024 Yamaha Corporation of America and Yamaha Corporation. All rights reserved.