Automation (and PPQ) is broken, but I've figured out why!

h3h4
Mon Mar 20, 2023 4:04 pm


Automation (and PPQ) is broken, but I've figured out why!

EDIT 1: I have edited this post significantly to make it more concise and improve the quality of the solution that I have suggested.
EDIT 2: I have now attached an example project that reproduces this issue for realtime playback at 96 PPQ, even with 200ms of smoothing on each automation clip. Attached is also an Edison spectrogram showing the transient glitches as vertical spikes.
EDIT 3: Improved the suggested "algorithm" again.
EDIT 4: Improved and simplified suggestions about PPQ.
EDIT 5: I figured out why FL's PPQ throttles performance so much! (It's probably obvious to the devs given they see the code, but it's good to know...)

PART A. THE PROBLEM

I've been trying out an Ableton free trial recently, and I've been getting automation glitches in FL Studio on sudden step changes that do not occur in Ableton. The automation clips in FL Studio have their controller smoothing set to 100ms, but there are STILL artefacts on the step changes. Even if I don't use smoothing, and instead manually draw slightly more gradual curves, there are STILL artefacts. I have only been able to eliminate these artefacts by cranking up the project PPQ to almost the maximum, which almost doubles the CPU load. This CPU load is higher *even when all automation is flat* (with the active points set to "Hold"). With the default PPQ, I have to set the smoothing unacceptably high to avoid glitches, to the point where the transition time is extremely audible.

There are lots of other strange things about PPQ: you lose information just because you temporarily change your project to a lower temporal resolution for performance reasons! This doesn't appear necessary or desirable. If you decrease your project sample rate, you wouldn't expect FL Studio to overwrite all the existing assets on disk with down-sampled versions; FL would just down-sample them when loading them into the project. Likewise, if you decrease the temporal resolution for performance reasons (if this is even necessary), then the saved-to-disk assets, including parts of your project, should still retain their old resolution. This can all be avoided by ditching the variable PPQ system entirely: time positions should be represented with a fixed high-resolution data type, and any computations with performance implications should be configurable independently of the representation resolution. I'll go into more detail about that below.

PART B. TESTING AUTOMATION IN MULTIPLE DAWS

I believe I've identified what is essentially a bug in how FL Studio delivers automation values to plugins. I tested this by creating a test VST3 instrument plugin with a single parameter that simply prints all of the automation changes and other details received from the host to a network socket that I can read in a separate window. I tested this in FL Studio 21 and Ableton Live 11. Here's what I found:
  • FL Studio
    • Automation values are never delivered at any time other than the very start of an audio block.
    • The block sizes actually requested by FL Studio also appear to line up with the PPQ grid. With a PPQ of 960 at a tempo of 130 BPM, FL Studio requests block sizes of 21 samples, even if the audio buffer size is configured way higher!
    • If the VST wrapper has the "fixed block size" setting enabled, then FL Studio instead requests 88-sample blocks for some reason, and at higher PPQ settings, this fails to deliver one automation point per pulse.
    • If the PPQ is turned down so that the samples per pulse is greater than the audio buffer length, then automation values start getting delivered at the start of every second block, every third block, and so on.
  • Ableton Live
    • Ableton will deliver automation values at positions throughout the block, sometimes delivering multiple values in a single block.
    • The amount of values delivered appears to be related to the rate at which the values are changing.
    • Unlike FL Studio, Ableton always requested the same block size from the plugin.
According to the VST3 specification, a host that only delivers automation values at the start of (some) blocks, as FL Studio appears to be doing, will cause the plugin to interpret the automation curve as a staircase rather than the series of linear segments that would typically be desired. Refer to the following link about the VST3 API behavior for more information:

https://steinbergmedia.github.io/vst3_d ... n-playback
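
For reference, here's the back-of-the-envelope arithmetic that appears to explain those ~21-sample blocks. This is just a sketch; the 44.1 kHz sample rate is my assumption (the host doesn't report it in my logs), but it's consistent with the 21-sample observation.

    # Arithmetic behind the observed ~21-sample blocks.
    SAMPLE_RATE = 44100.0   # Hz -- assumed, not reported by the host
    BPM = 130.0
    PPQ = 960

    seconds_per_pulse = 60.0 / BPM / PPQ                 # ~0.000481 s per pulse
    samples_per_pulse = seconds_per_pulse * SAMPLE_RATE
    print(round(samples_per_pulse, 1))                   # ~21.2 -> FL requests ~21-sample blocks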

PART C. FIXING THE AUTOMATION PROBLEM

Unless the automation value at the end of the current block is unchanged from the previous value that was delivered, a value always needs to be delivered at the end of the block to get the correct behavior from the plugin. The simplest way I can think of to fix this is as follows (this applies to continuous parameters only!):
  • Provide a project-wide configuration option that determines the automation rate. This could be measured in units of Hertz, and would be internally converted to a number of audio samples that forms the period at which automation values are sampled and delivered (the sampling is also necessary to apply smoothing filters). This automation rate should be completely independent of the project timebase! There's no reason they have to be related.
  • Every time a block is being prepared for a plugin, check whether any of the periodic automation sample points fall within the block, and for each of those points, sample the automation curve, and add it to the list. Note that these points may end up landing anywhere within the block and will not just occur at the beginning!
  • After adding the samples to the list, look ahead one more automation sample and sample the next point on the automation curve. Use a linear interpolation between this sample and the sample before it to determine the value that would occur at the final sample of the current block. If that value differs from the previous automation value, then add it to the list. (*)
  • The rendering settings should also include a parameter that determines the automation rate for a render. For offline rendering, there's no reason this couldn't be set to almost any value you want. This control might be useful since having the automation rate too high for rendering might cause artefacts or step changes to surface that did not occur in realtime playback. This means that just by setting the automation rate a bit lower, you get some automatic smoothing with no smoothing filters needed.
Regarding the point marked (*) above, it's important that the final value (if it is needed) is calculated by a linear interpolation of the neighboring sample points of the automation curve, and not by sampling the automation curve at that point. We don't want the behavior to depend on the block size, and sampling the final point directly could cause clicks and pops that shouldn't happen at a lower automation rate if the additional point occurred just after a step change. If the automation source is realtime, and lookahead is not possible, you can try a causal interpolation using the previous sample points instead, or delay the automation by one automation sample :-(.
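
To make the above concrete, here's a minimal Python sketch of how a host could build the per-block automation list for a continuous parameter, including the interpolated end-of-block value. All of the names here (curve_value, the event representation, and so on) are hypothetical and just for illustration; this is not FL's actual code.

    def automation_events_for_block(block_start, block_len, sample_rate,
                                    automation_rate_hz, curve_value, prev_value):
        """Return a list of (offset_in_block, value) pairs for one audio block.

        block_start        absolute sample index of the block's first sample
        automation_rate_hz project-wide automation rate, independent of the PPQ
        curve_value(t)     hypothetical helper that samples the automation curve
                           at absolute sample position t
        prev_value         last value delivered to the plugin (None if none yet)
        """
        period = max(1, round(sample_rate / automation_rate_hz))  # samples between automation points
        block_end = block_start + block_len
        events = []

        # 1) Periodic automation points that land anywhere inside this block,
        #    not just at its first sample.
        first = ((block_start + period - 1) // period) * period   # first multiple of period >= block_start
        for t in range(first, block_end, period):
            events.append((t - block_start, curve_value(t)))

        # 2) Look one automation period ahead and linearly interpolate the value at the
        #    block's final sample, so the plugin ramps towards the right value instead of
        #    holding a stale one (the "staircase" problem). See the point marked (*) above.
        if events:
            last_point = first + (len(events) - 1) * period       # last periodic point inside the block
        else:
            last_point = first - period                           # most recent periodic point before the block
        next_point = last_point + period
        v0, v1 = curve_value(last_point), curve_value(next_point)
        end_value = v0 + (v1 - v0) * (block_end - 1 - last_point) / period

        last_delivered = events[-1][1] if events else prev_value
        if last_delivered is None or end_value != last_delivered:
            events.append((block_len - 1, end_value))
        return events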

Discrete parameters should be handled differently. FL Studio should aim to reproduce step changes in discrete parameters exactly, with 0 samples of additional latency (no filtering or interpolation delays like might be used for continuous parameters). Timing accuracy of discrete parameters is likely more important, since the plugin might need to receive the new value alongside a simultaneous MIDI event (this also means the rounding procedure to convert PPQ pulses to sample offsets needs to be exactly the same for timing of MIDI events and automation points). The user shouldn't have to turn the automation rate up to single-sample precision to achieve this. For this reason, perhaps FL Studio could do the following:
  • Add a configuration option for each automation clip, or in the "Remote control settings", that enables "Exact step changes". This option would be greyed out if any smoothing was enabled, since it would be pointless. This option would always be enabled for discrete parameters. (The smoothing options should be disabled for discrete parameters.)
  • For any automation curve with this option enabled, FL Studio would insert automation values at consecutive pairs of samples to exactly reproduce step changes (this is easily done by looping through all the curve segments that overlap the current block and checking for vertical lines or steps, and then inserting automation values at the pair of consecutive samples where the step occurs, taking care to handle the boundary case where this pair occurs at the block boundary).
  • If the parameter is continuous, these sample points would be in addition to the uniform sampling at the automation rates. If the parameter is discrete, then these would be the only sample points.
  • The first sample in each pair of automation values is required to stop the plugin from interpreting the change as a gradual linear ramp, which may otherwise happen depending on how the points line up with the block boundaries.
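Here's a minimal sketch of that idea, again purely illustrative: the "steps" list and its (step_sample, value_before, value_after) representation are inventions for this example, not anything FL exposes.

    def exact_step_events_for_block(block_start, block_len, steps):
        """Emit pairs of automation points that reproduce step changes sample-exactly.

        steps is a pre-computed list of (step_sample, value_before, value_after)
        tuples giving the absolute sample position of each jump in the curve.
        """
        block_end = block_start + block_len
        events = []
        for step_sample, before, after in steps:
            guard = step_sample - 1   # the sample just before the jump
            # Point that pins the old value right up to the jump. If the jump lands on
            # the first sample of this block, the guard point belonged to the previous
            # block (the boundary case mentioned above) and was emitted there.
            if block_start <= guard < block_end:
                events.append((guard - block_start, before))
            # Point that applies the new value exactly at the jump.
            if block_start <= step_sample < block_end:
                events.append((step_sample - block_start, after))
        return events
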
There are other improvements that could be made to the smoothing options under "Remote control settings":
  • The "Time" control in the "Remote control settings" should really be two controls: one to set the smoothing type (e.g., 1 pole, 3 poles, whatever), and one to set the smoothing time.
  • The "Time" control should support a "Type in value..." context menu option too.
  • The range of filters available could be expanded: for non-realtime controller sources, supporting causal, acausal and anti-causal smoothing would be nice (i.e., the smoothing filter depends either on past samples only, past and future samples, or future samples only). This would achieve different shapes of the final curve, e.g., it would enable smoothing only prior to a step change, or smoothing on either side of it (see the toy example after this list). You'd probably want to use FIR filters to avoid infinite lookahead in the acausal and anti-causal cases.
  • There should be a way to control the project-wide default behavior for this automation smoothing and interpolation that affects all new automation clips.
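To illustrate the causal / acausal / anti-causal distinction, here's a toy example using a short moving-average FIR. This is obviously not FL's actual smoothing filter; it's only meant to show how the three orientations shape a step change differently.

    def smooth(curve, length, mode):
        """Moving-average smoothing of a list of samples.
        mode: 'causal' (past only), 'acausal' (centred), 'anti-causal' (future only)."""
        out = []
        for i in range(len(curve)):
            if mode == "causal":
                window = curve[max(0, i - length + 1):i + 1]
            elif mode == "anti-causal":
                window = curve[i:i + length]
            else:  # 'acausal': roughly centred around i
                half = length // 2
                window = curve[max(0, i - half):i + half + 1]
            out.append(sum(window) / len(window))
        return out

    step = [0.0] * 8 + [1.0] * 8
    print(smooth(step, 4, "causal"))       # ramp begins at the step and settles after it
    print(smooth(step, 4, "anti-causal"))  # ramp happens entirely before the step
    print(smooth(step, 4, "acausal"))      # ramp is centred on the step
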
PART D. FIXING THE PPQ PROBLEM

As mentioned earlier, I noticed that blocks requested by FL line up with the PPQ grid. If I set the PPQ to 960, then at 130 BPM, that would correspond to a pulse every ~21 samples. And voilà, FL Studio requests blocks of 21 samples, even with a buffer size configured to be 1024! This is probably one of the main reasons why performance is totally throttled at higher PPQ. What FL Studio is doing is "entangling" multiple things that could be independent or, in software terms, "orthogonal". The following things are entangled by FL Studio:
  • The automation/control rate, i.e., the rate at which automation curves are sampled for filtering and to deliver to plugins.
  • The resolution of the memory representation of temporal positions, which determines the precision to which items can be placed, the timing accuracy with which the playback of clips and patterns is triggered, and the timing accuracy with which MIDI events are received by plugins.
  • The block size at which the audio engine processes sound.
These things do not need to be entangled! Right now, FL Studio has all three of the above dependent on one another. For that reason, the only way to achieve faster automation is by turning up the timebase, which causes the block size used by the audio engine to drop, which causes performance to be throttled. However, all three items in the list above should be almost completely independent. The timebase should not have to change to get more or less accurate automation, and the block size should not depend on the timebase or the automation rate. The mixer "hot loop" should be operating on one block at a time, not one pulse at a time. In each iteration, the audio engine needs to:
  • Continuous parameters only: For each automation curve intersecting the current block, identify the curve segments that intersect the current block, and sample the curve segments at all the automation sample points that fall within the current block. These samples can be reused if the automation curve has multiple destinations.
  • Continuous parameters only: Apply any necessary smoothing filters and mapping formulas to the lists of sample points obtained above.
  • Discrete parameters or parameters with "Exact step changes" enabled: Identify step changes that occur within the current block and add pairs of automation samples encoding them to the list (taking care to handle the boundary case where the pair lies across a block boundary).
  • Continuous parameters only: Add an extra automation value at the final sample of the block if necessary (see Part C of this post).
  • When sending each block to a plugin, pass the automation values (as computed above) along with it.
Accuracy of the timebase should have nothing to do with any of the above. You just need to check for automation segments that overlap the block, and sample them. There are lots of optimizations you can do to avoid expensive searches.
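
To make the structure concrete, here's a rough sketch of such a block-based loop. It reuses the hypothetical helpers sketched in Part C (automation_events_for_block and exact_step_events_for_block); the plugin and parameter objects are placeholders, not FL's real internals, and smoothing/mapping is omitted for brevity.

    def process_song(total_samples, block_len, sample_rate, automation_rate_hz, plugins):
        pos = 0
        while pos < total_samples:
            n = min(block_len, total_samples - pos)   # block size chosen by the engine, not by the PPQ
            for plugin in plugins:
                queues = {}                           # per-parameter (offset, value) lists, as VST3 expects
                for param in plugin.continuous_params:
                    # (smoothing filters and mapping formulas would be applied to the
                    # sampled values before they are queued)
                    queues[param.id] = automation_events_for_block(
                        pos, n, sample_rate, automation_rate_hz,
                        param.curve_value, param.last_value)
                for param in plugin.discrete_params:
                    queues[param.id] = exact_step_events_for_block(pos, n, param.steps)
                plugin.process(n, queues)             # audio block and automation delivered together
            pos += n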

If the above factors are disentangled, then FL Studio will be able to adopt a fixed, high-resolution PPQ. For a fixed PPQ, I suggest choosing 80640. This has the following benefits:
  • 80640 has a good prime factorization of 2^8 * 3^2 * 5 * 7, enabling exact representation of 1024th notes, triplets, nested triplets, quintuplets, and septuplets.
  • 80640 is the least common multiple of all existing PPQ values (24, 48, 72, 96, 120, 144, 168, 192, 384, 768, 960). This means that all old projects can be losslessly converted to the new PPQ value.
  • At 200 BPM, the pulse duration of an 80640 PPQ timebase is 60 / 200 / 80640 = 3.72 microseconds! This means that even rhythmic intervals that cannot be exactly represented can still be placed to within half a pulse, i.e. under 2 microseconds (versus ~22.7 microseconds per sample at 44.1 kHz), an error that is imperceptible to humans.
  • At 200 BPM, an IEEE 754 double-precision floating-point number is capable of exactly representing pulse counts covering durations of up to (2^53 - 1) / 80640 / 200 / (60*24*365) ≈ 1063 years, which is longer than anyone will need (IEEE doubles have a 53-bit significand). Even if pulse positions are intended to be integer-only, they will need to fit inside floating-point numbers for calculation purposes, such as converting to sample offsets and evaluating automation curves at sample points that might not line up with the timebase units.
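These numbers are easy to verify; here's a quick check using nothing but the Python standard library (math.lcm needs Python 3.9+):

    from math import lcm

    old_ppqs = [24, 48, 72, 96, 120, 144, 168, 192, 384, 768, 960]
    print(lcm(*old_ppqs))                  # 80640 -> every old project converts losslessly

    ppq, bpm = 80640, 200
    print(60 / bpm / ppq * 1e6)            # ~3.72 microseconds per pulse

    pulses_per_year = ppq * bpm * 60 * 24 * 365
    print((2**53 - 1) / pulses_per_year)   # ~1062.6 years representable exactly in a double
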
Some miscellaneous PPQ-related suggestions that might be nice, especially given a high, fixed PPQ:
  • It would be nice if the user had the option of entering a rational number as the duration of a note/clip/etc. If I want to enter a note duration of "1/5", I should be able to, especially if the timebase supports it! To do this, just adapt the UI component for editable time values so that you can right-click and choose "Set as fraction...", bringing up a dialog that allows you to set an integer part, a numerator and a denominator, determining what fraction of a beat the time is. The dialog would have a label indicating whether the fraction can be exactly represented, and what the approximation error would be (see the sketch after this list). Math rockers and crazy drummers would love this feature.
  • The "Chop" tool in the piano roll should support dividing the note into any number of notes, not just powers of 2. A label could indicate the approximation error (if any) for the proposed count. Regardless of whether the timebase can exactly represent the division or not, the original start and end points should be preserved.
  • With a higher PPQ, it's more important that the grid snapping behavior is robust and flexible. See viewtopic.php?p=1830569
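Here's a tiny sketch of the exactness check such a "Set as fraction..." dialog could perform, assuming the fixed 80640 PPQ suggested above (the dialog itself is hypothetical, the arithmetic is not):

    from fractions import Fraction

    PPQ = 80640   # the fixed timebase suggested above

    def fraction_to_pulses(whole, num, den):
        """Convert (whole + num/den) beats to pulses, reporting the rounding error in pulses."""
        exact = (Fraction(whole) + Fraction(num, den)) * PPQ
        rounded = round(exact)
        return rounded, float(exact - rounded)

    print(fraction_to_pulses(0, 1, 5))    # (16128, 0.0)        -> 1/5 of a beat is exact
    print(fraction_to_pulses(0, 1, 11))   # (7331, -0.0909...)  -> 1/11 of a beat rounds by ~0.09 pulse
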
Also, on an unrelated note, the documentation about PPQ appears to be wrong. It says that increasing the PPQ won't affect note placement, but this doesn't appear possible in general. E.g., if I increase the PPQ from 96 to 120, that's surely going to affect note positions, since 120 is not divisible by 96: one 96-PPQ pulse corresponds to 120/96 = 1.25 pulses at 120 PPQ, so anything landing on an odd 96-PPQ pulse has to be rounded. Durations of 1/96 of a beat are not in general representable in a 120 PPQ timebase.

PART E. SUMMARY

The most important things to take away from this post are:
  • A project-wide automation rate option should be added that is independent of the project timebase.
  • For continuous parameters, automation values should be delivered at exactly this automation rate to each plugin, with timing rounded to the nearest sample.
  • For continuous parameters, additional automation points at the end of each block will be needed to prevent plugins interpreting the curve as a staircase, and these should be calculated by interpolating the neighboring sample points of the automation curve, and not by sampling the automation curve at that point.
  • For discrete parameters, automation points should just be delivered "manually" where each step change occurs, so that they get to the plugin ASAP, arriving at the same sample offset as any MIDI events at the same position in the project.
  • The automation rate, timebase pulse duration, and audio engine block size should all be independent! The delivery of automation points does not have to line up with pulses of the timebase, nor with audio engine blocks.
  • The accuracy of the memory representation of the project timebase should not affect how much CPU is used.
  • A fixed, high-resolution timebase is preferable. 80640 PPQ is a good candidate (reasons listed above).
None of my above suggestions actually require FL to adopt a fixed block size, but it would enable that as a possibility. If you are going to re-engineer the mixer, then you may want to consider other areas for improvement too, e.g., viewtopic.php?t=301531

nucleon
Wed Apr 12, 2023 2:02 pm


Re: Automation (and PPQ) is broken, but I've figured out why!

Hi. Thank you for the detailed analysis. Indeed...