Synth Forum


Can Sustain Control Messages Be "Throttled?"

6 Posts
2 Users
0 Reactions
1,647 Views
Michael Trigoboff
Posts: 0
Honorable Member
Topic starter
 

Since I updated my PC to Windows 10 (possibly not the best decision I ever made), I've been getting strange audio pops and stutters when I play my XF7 into Cubase 8.5 Pro. The problem is intermittent and difficult to reproduce, but I have the impression that it may be related to my use of the sustain pedal.

I have a Yamaha FC3 sustain pedal, and I use it with "half damper" enabled on the XF7. I've noticed that when I'm "riding the pedal," it generates a ton of MIDI control messages. I'm wondering if somehow the amount of control data is overloading the FW channel to/from Cubase.

In particular, I'm wondering if there's a way to tell the XF7 to "throttle" the control data and send (for example) no more than 10 sustain control messages per second down the FW connection to Cubase.
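
To make the idea concrete, here is a rough sketch of the kind of rate-limiting filter I have in mind, running on the PC side rather than on the XF7 itself; it assumes the Python mido library, and the port names are just placeholders:

import time
import mido   # assumes the mido + python-rtmidi packages are installed

MIN_INTERVAL = 0.1       # pass at most ~10 sustain (CC64) messages per second
last_sent = 0.0

# Port names are placeholders; mido.get_input_names() lists the real ones.
with mido.open_input('XF7 In') as inport, \
        mido.open_output('To Cubase') as outport:
    for msg in inport:                       # blocks until a message arrives
        if msg.type == 'control_change' and msg.control == 64:
            now = time.monotonic()
            if now - last_sent < MIN_INTERVAL:
                continue                     # drop intermediate half-damper values
            last_sent = now
        outport.send(msg)                    # notes and other controllers pass through

(A smarter version would always let the final pedal value through so a release is never swallowed; this is only to illustrate the throttling idea.)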

 
Posted : 09/01/2016 2:12 am
Bad Mister
Posts: 12303
 

No.

And it is unlikely that the amount of Control Change messages is causing any audible pops or clicks.

If you are really worried, set the Sustain pedal parameter to FC3 Half Off (Half Dampering Off), particularly on Voices that are not set to respond to sustain values of 0~127. There is no need to generate all that controller data on Voices that are not built to respond to it.

If you are recording all that data you can "thin" it out in your sequencer ("thin" is the term used for removing dense controller data).
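
For anyone curious what thinning amounts to, here is a rough sketch of the same idea applied outside the sequencer, assuming the Python mido library; the file name and threshold are only placeholders, and your sequencer's own thin-out function is the proper tool for the job:

import mido

THRESHOLD = 8                     # keep a CC64 message only if it moved at least this far

mid = mido.MidiFile('take.mid')   # hypothetical file name
for track in mid.tracks:
    kept, last_value, carried = [], None, 0
    for msg in track:
        is_sustain = msg.type == 'control_change' and msg.control == 64
        if (is_sustain and last_value is not None
                and abs(msg.value - last_value) < THRESHOLD
                and msg.value not in (0, 127)):     # always keep full off/on
            carried += msg.time                     # drop it, but remember its delta time
            continue
        if is_sustain:
            last_value = msg.value
        msg.time += carried                         # fold in time from dropped messages
        carried = 0
        kept.append(msg)
    track[:] = kept
mid.save('take_thinned.mid')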

 
Posted : 09/01/2016 6:20 am
Michael Trigoboff
Posts: 0
Honorable Member
Topic starter
 

I may have fixed my FireWire audio problem.

Over at motifator.com, 5pinDIN told me about ysfwutility.exe, a small utility program that comes packaged with the Yamaha Steinberg FW Driver (in a "Utility" subfolder). My IEEE1394 Buffer Size had been set to Small, which is the default. I set it to Large instead, and the problem seems to have gone away.

Ironically, the other day I ordered a FW plugin board for my PC because the FW on the motherboard uses the VIA FW chipset, and I found out that there are problems in Windows 10 with that chipset. (You can see that discussion in the motifator.com link in the paragraph above.) Yesterday was when I found out about the utility. And today the new FW board arrived from Amazon.

I think I'll hold onto the board for about a week to make sure the intermittent problem I've been experiencing has actually gone away.

I'm posting this here for general information, and also to ask this question:

What's the trade-off with regard to this IEEE1394 buffer size setting? I've got 16 GB of RAM in my PC, so presumably memory usage is not a problem. Are there any disadvantages to having a large buffer?

 
Posted : 13/01/2016 1:28 am
Bad Mister
Posts: 12303
 

No, there do not have to be. Computers were not originally designed to do audio... It's best explained as a bucket brigade, you know, how communities used to put out fires before fire trucks that could pump water were common... audio is the water, and you have to fill a bucket and pass it along to the next person, and to the next, until finally it gets thrown on the fire.

The BUFFER SIZE is the size of the bucket. If your buckets are too small and water spills out, you hear that as 'clicks' and 'pops'. So increase the buffer size so you can carry the water without any spillage. If the bucket is very big, the downside is that it takes longer to fill, and there may be more latency. The key is to get the bucket size correct so you can efficiently get the water to the fire without spilling any and without making the bucket so big that you slow everything down filling it.
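
To put rough numbers on the bucket: one buffer's worth of delay is simply its length in samples divided by the sample rate. The sizes below are only for illustration; they do not necessarily correspond to the actual values behind the utility's Small/Medium/Large settings.

SAMPLE_RATE = 44100   # Hz

# Illustrative buffer sizes in samples, not the driver's real settings.
for frames in (64, 128, 256, 512, 1024, 2048):
    latency_ms = frames / SAMPLE_RATE * 1000
    print(f"{frames:5d}-sample bucket -> {latency_ms:4.1f} ms to fill it")

So a small bucket costs only a millisecond or two per pass, while a very large one adds tens of milliseconds before the audio moves on.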

Each computer will vary. The utility you speak of is in your Driver folder; it is used to whip stubborn computers into place (it sets the basic buffering size for the computer).

The thing is, if you understand audio latency in computer systems, you can avoid being downwind of the latency in most cases. Use your computer resources wisely. Avoid having it process more than is necessary. (Learn to use the FREEZE function, which creates temporary AUDIO files so the computer does not have to use valuable resources to process everything all the time.) Monitor what you play "direct" whenever possible! Only when you monitor through the computer are you downwind of the latency: the computer must receive the incoming data, process it, and then pass it back out. In Cubase, if you are using the XF as a VST, Mute the AUDIO LANE and set up to DIRECT MONITOR your own playing.

If you monitor both DIRECT and the AUDIO LANE, you will notice a slight flanging or chorusing... this is the AUDIO LANE which is slightly late... slightly behind your direct signal. Everyone can hear this doubling of signals. You can choose to MUTE the latent signal and monitor the direct signal.

Say you have several tracks already recorded and you are overdubbing your lead melody as audio: if you are monitoring your XF direct, you will not experience any latency. If you do not DIRECT MONITOR, that means you are listening to your own audio after it travels to the computer, arrives on the track, and is then sent back through the FW16E (interface) to your monitors. That takes time... Cubase will tell you exactly how long that time is.

Direct Monitoring eliminates the frustration of latency. Why then doesn't everyone just monitor direct? Well, if you have a plug-in Effect that you want to use while you record, it would probably help to hear that effect as you are playing (it might cause you to play/perform differently), but that plug-in cannot process the signal without some time lag. (This is why it is best, in my opinion, to use plug-in Effects post recording.) Fortunately, your XF is full of boutique-quality effects which you can use and monitor direct (zero latency)! Your hardware effects are upwind of the latency and will travel from the keyboard to your monitor speakers directly, with no computer latency... On playback, Cubase will have placed your newly recorded audio precisely where it belongs (Cubase features advanced delay compensation) to make sure your audio sits where you heard it while monitoring direct.

The key is to use that UTILITY and use the setting that gives you the best results. The bucket should not be too big, nor too small. When it is too small you hear drips (clicks and pops). When it is too large you cannot play, because your notes are noticeably delayed. How much is too much? That will vary, but certainly by the time you reach 20-30ms it is impossible to play (if you are not monitoring direct)!

Playback latency (the time it takes the computer to get it together and send you the audio) is not at all significant; this is why even the generic drivers that come with a computer are fine for playback... It is when you need to play back and overdub that the generic drivers become impossible to use.

 
Posted : 13/01/2016 5:38 am
Michael Trigoboff
Posts: 0
Honorable Member
Topic starter
 

Thanks! I understood all of that except for this:

Playback latency (the time it takes the computer to get it together and send you the audio) is not at all significant; this is why even the generic drivers that come with a computer are fine for playback... It is when you need to play back and overdub that the generic drivers become impossible to use.

I think that in my current setup (Motif XF + FW16E + PC) I never use any "generic drivers." Am I correct in this? Which "generic drivers" did you mean?

Also, I'm a bit confused about an aspect of latency. Let's say I'm recording a new MIDI track to a song containing tracks I've previously recorded into Cubase. In the Cubase project, I have both MIDI and audio tracks.

As I listen to what's already been recorded, I play along. I'm listening to what I play via "DIRECT MONITOR." Given the existence of latency, how does my new MIDI track end up properly synchronized with what was already there? Or does it?

 
Posted : 23/01/2016 11:09 pm
Bad Mister
Posts: 12303
 

I think that in my current setup (Motif XF + FW16E + PC) I never use any "generic drivers." Am I correct in this? Which "generic drivers" did you mean?

The drivers that come with the computer to drive its own speaker system. If you want to just play back an audio Project from Cubase without hooking up your synthesizer, one playback option would be to use the built-in audio (the computer's own sound card). The fact is, any built-in audio driver can play back your audio. There is no noticeable latency (beyond the 100 or so milliseconds between pressing Play and hearing sound). You will not experience any delay because you are NOT playing along... you are just playing back.

When using the low latency driver and monitoring "direct," all timing of data is handled by Cubase. There is a very sophisticated DELAY COMPENSATION algorithm that automatically places your recorded audio precisely where it belongs. All incoming audio is "time stamped," then printed where it will precisely coincide with your previously recorded tracks. Notice how input and output latency is measured to multiple decimal places by Cubase. It uses this to evaluate and precisely place the audio in your Project. (This, to me, is what Cubase does better than most DAW software... because how important is timing?)
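
Here is a toy model of that placement step (not Cubase's actual algorithm, and the latency figure is made up): the newly recorded audio reached the computer late by the driver-reported input latency, so it gets written that many samples earlier than the position at which it was captured.

SAMPLE_RATE = 44100

def compensated_position(captured_at_sample, input_latency_s):
    # Toy model only: shift the new audio earlier by the driver-reported
    # input latency so it lines up with the previously recorded tracks.
    return captured_at_sample - round(input_latency_s * SAMPLE_RATE)

# Made-up figure; Cubase shows the real input/output latency for your driver.
print(compensated_position(captured_at_sample=88200, input_latency_s=0.0058))
# 87944 -- the take lands 256 samples earlier than where it arrived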

 
Posted : 24/01/2016 11:09 am
