Why something like 'ASIO Guard'/'Anticipative FX processing' is needed for those playing MIDI keyboards

Post your ideas and suggestions here

Return to “To Do”

Sat Jan 11, 2020 8:34 pm



I know this has been debated before (I searched), but I don't think a case has yet been made for why such a feature is needed, and so the developers may be unaware of what it is actually used for (in an earlier thread I saw, rendering/freezing was suggested instead).

So, although freezing/rendering is a workaround of sorts, it is a workaround that involves quite a few steps, and with something like 'ASIO Guard'/'Anticipative FX processing' it would in many cases not be needed.

So, scenario:
You are a good keyboard player, and for the track you are making you are going to play lots of MIDI input into different VSTi instruments from a MIDI keyboard (think orchestral instruments here; there are lots of them).
To do that well, you want not-too-high latency, so you get the feel of the keyboard and the connection to the sounds.
(Yes, I know some grand pianos have quite a long latency, but that is actually a little different: on a grand piano everything is physically connected, so when you press a key down you can feel and hear the motion and the string being struck, which makes it easy to stay in the vibe. A MIDI keyboard gives no feedback other than the feeling that you have pressed the key to the bottom, followed by the wait for the sound to appear.)

So, you want low latency, but some of the VSTi instruments being played are very CPU hungry, and after just a handful of them the audio starts to crackle (on a 7920X with GameBoost).
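To put rough numbers on that trade-off: the latency contributed by an audio buffer grows linearly with its size. A minimal illustration (my own sketch, not from FL Studio; the 44.1 kHz sample rate is an assumption):

```python
# Illustration (assumed numbers): one-way latency of a single audio buffer.
# Small buffers give good playing feel; large buffers give CPU headroom.
SAMPLE_RATE = 44100  # Hz, assumed

def buffer_latency_ms(buffer_samples: int, sample_rate: int = SAMPLE_RATE) -> float:
    """Latency in milliseconds contributed by one buffer of audio."""
    return buffer_samples / sample_rate * 1000.0

for size in (64, 128, 256, 1024, 4096):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):6.1f} ms")
```

At 64 samples the buffer adds under 2 ms, which feels immediate under the fingers; at 4096 samples it adds nearly 100 ms, which is unplayable live but gives the CPU far more room per callback.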

In FL Studio you can then do one of two things. The first is to render some of the tracks to free up resources: consolidate the track, note what the channel output was, make sure the render settings are correct, open the new track, route it to the same channel (no processing on the channel, just some routing), and make sure the original is muted.

Or, start by raising the latency, and then do the above later anyway.

Oh, and there is a third way to solve this, if it weren't for the fact that playing the MIDI keyboard is much faster: enter all the notes manually in the piano roll. Then the latency wouldn't matter and you could run a very high one (though you would still have to consolidate tracks after a few more tracks to keep the sound from crackling).
(And an extra computer running VEP (Vienna Ensemble Pro) does not really work well with FL Studio, so that is not a solution here either.)

So, how does the competition handle this, and why hasn't FL Studio fixed it yet?
First, many FL Studio users enter their notes in the piano roll, and although that works fine for a few things, it is a very slow way to work if you are a good keyboard player with lots of parts to play.
But since so much of the user base uses the piano roll, I suspect this issue hasn't really come up that often; many users will simply never notice the problem.

The competition, however, has a larger share of keyboard-playing users, and for them a solution to this is vital.
And they have one, in the form of 'ASIO Guard'/'Anticipative FX processing' and similar names for basically the same technology and fix.

And that fix is: when a VSTi instrument is being played live, say from a MIDI keyboard into a synth or sampler, the host creates two audio paths.
On the first path, everything on the timeline is pre-rendered a second or two before the play cursor reaches it, so the computer has already created all that audio ahead of the current position; the rendered sound is simply delayed so that it stays in sync with the sound the MIDI keyboard is triggering.
On the second path, you can run a very low sound-card latency, and that becomes the latency of the live MIDI keyboard playing. At the same time, the sequenced tracks get a very long processing buffer, so the CPU can pre-render everything ahead of where the music is, effectively playing back as if the audio card had an extremely long buffer, but without any of the downsides of such a buffer, because the live keyboard path still uses the low ASIO buffer.

Make sense?

So, this is why the other companies have such techniques and features. If FL Studio had them too, it would probably attract a new group of users in addition to everyone using it now, and for those who currently have to resort to workarounds such as consolidating/rendering tracks, it would ease the workflow and make it faster.

