Need to scratch that musical Itch?
Then say hello to Trackspatcher
The audio back scratching bitch!
It EQ matches one signal and auto-carves EQ space in real time from the other, allowing your chosen signal to breathe & shine through your mix. No complicated automation needed.
If this sounds up your street I will be adding the download here when ready, hopefully October. I will add a post and update the title when it's ready.
IN DEPTH EXTENDED INFO (Will also be included in the download if you don't want to read it at the moment)
As you may tell from the name and the GUI it's inspired by Trackspacer but it does things a little bit differently.
For those of you that do not know about Trackspacer, it's essentially a mix of a multiband compressor and an auto EQ that, instead of being triggered by a volume level alone, is triggered by several frequencies via a band-split sidechain input.
An example of its use would be keeping your vocals on top of your mix, or in a radio broadcast as a more intuitive alternative to a standard talk-over compressor.
If you are a little lost right now consider reading the section titled 'Masking Frequencies' much further on down this post. It does go a bit in-depth but it may help explain away any questions you may have at this stage.
For those of you already familiar with Trackspacer and its concepts I'll explain how mine differs and then the controls.
SPACER VS SPATCHER
Trackspacer uses 32 bands of detection; mine contains 110, and they will be customizable. I'll be honest, I don't know what frequencies Trackspacer's bands are set to, but personally I wanted to approach things a little differently.
I originally created a Trackspacer preset within a single instance of Patcher with only 7 bands of detection. This beta version takes the concept further, combining 12 into one super patch, so my preset is inherently 12 Trackspacers in one, represented via 12 Patchers within one main Patcher, Inception style. Each one is tuned to different note frequencies over a span of 9 octaves.
The reason I wanted to add the twist of detecting bands based on note pitch was to fundamentally add a little more musical context to the end result.
Don't like this idea?
Then go down the rabbit hole into the workings of Patcher and tweak the positioning or active state of each band to your liking.
CPU consumption can be reduced by turning off the high quality oversampling mode in all the EQs. I have done this already for your convenience and will be including 2 presets in the download: one with high quality enabled, the other disabled. The non-HQ version will still take a fair bit of CPU though.
I have included control table macro surfaces as an easy way to activate/deactivate the detection of not only each separate band but also groups of those bands based on pitch class and/or octave number.
It includes a band shift control that enables you to offset the pitch of all detection bands whilst maintaining the relative intervals between them.
Please note: This is different to the bandwidth control.
The little dashes above controls indicate their default positions.
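The band shift idea above can be sketched numerically. Shifting every band by the same number of semitones means multiplying every band frequency by the same factor, which is exactly what preserves the relative intervals. The function and band frequencies below are hypothetical, purely to show the maths; the preset's actual internals live inside Patcher.

```python
# Sketch of how a global band-shift can move every detection band by the
# same number of semitones while keeping the intervals between bands intact.
# (Illustrative only; not the preset's actual implementation.)

def shift_bands(freqs_hz, semitones):
    """Shift each band frequency by the given number of semitones."""
    factor = 2 ** (semitones / 12)
    return [f * factor for f in freqs_hz]

bands = [110.0, 220.0, 440.0]        # three A's, an octave apart
shifted = shift_bands(bands, 7)      # shift everything up a perfect fifth

# The ratios (intervals) between bands are unchanged:
print([round(f, 2) for f in shifted])
print(shifted[1] / shifted[0])       # still exactly 2.0 (one octave)
```

Because every frequency is scaled by the same factor, octaves stay octaves and fifths stay fifths, no matter how far the bands are shifted.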
B a n d w i d t h
The bandwidth control already present is a global control that opens up the amount of bandwidth for every band. It affects both the sidechain detection EQs and the correction EQs, as this allows the ducking amount to more dynamically reflect what's being detected.
Please note: The more detection bands there are, the more accurate a picture the correction EQs will end up with.
But! For this to work as intended, bandwidth must be kept low and tight. A general rule of thumb to bear in mind is… more active detection bands need less bandwidth and, conversely, fewer bands need more.
Having a large bandwidth at the same time as many active detection bands will completely defeat the point of having more bands in the first place. Wider Q settings cause multiple bands to cross over one another, effectively doubling up the amount of reduction at the cross points and resulting in a less accurate, harsher output compared with the amounts originally detected in the input.
Of course, CPU consumption increases with every band of detection, so we can't always run as many active bands as we may like. The bandwidth is essentially a trade-off control, to be used in greater amounts when there's less detection.
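The overlap problem above can be shown with a couple of lines of arithmetic. When two cuts overlap, their gain reductions in dB add together, so the crossover region gets roughly double the intended reduction. The Gaussian bell shape below is an assumption for illustration only, not FL's actual filter curve.

```python
# Rough illustration of why wide bands defeat dense detection: where two
# bell-shaped cuts overlap, their dB reductions sum, so the crossover
# region is cut far deeper than either band intended.
import math

def bell_cut_db(f, center, width, depth_db):
    """Depth of a hypothetical bell-shaped cut (negative dB) at frequency f."""
    return depth_db * math.exp(-((f - center) / width) ** 2)

def total_cut_db(f, bands):
    return sum(bell_cut_db(f, c, w, d) for c, w, d in bands)

narrow = [(200, 20, -6), (300, 20, -6)]   # tight bands barely interact
wide   = [(200, 80, -6), (300, 80, -6)]   # wide bands overlap heavily

midpoint = 250
print(round(total_cut_db(midpoint, narrow), 2))  # close to 0 dB between bands
print(round(total_cut_db(midpoint, wide), 2))    # much deeper combined cut
```

With the narrow bands the midpoint between them is almost untouched; with the wide bands the two -6 dB cuts pile up to over -8 dB of reduction at the crossover, which is the "doubling up" described above.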
Complementing the global bandwidth are the slope/roll-off settings for bands. FL's EQ has a choice of 9 slope positions, from steep to gentle, but be aware I have not added this global control yet; this too is for the update. It will be overlaid on top of the current bandwidth control, similar to the dual sidechain control.
S i d e c h a i n & B a n d p a s s
Notice the 2 bright blue buttons when first loading the preset. These inform you that both the sidechain listening and bandpass features are engaged (simply click to disable).
They are on by default on purpose so you can monitor the sidechain and dial in a lovely bandpass, focusing in on what matters only.
But please note: To hear anything at all you must first connect your sidechain signal(s) to the internal workings via the 'Map' tab.
It's a piece of cake but I've also added a handy setup instructions note pad within the preset.
If you struggle to see it, enlarge Patcher's workspace and make use of the visual filtering buttons at the top right.
Should you need to further increase or decrease detection sensitivity in certain areas of your side signal, you can do this via the Map tab by opening up the bandpass EQ and editing bands 2-6.
Unlike Trackspacer, there's a handy level control for when listening to the sidechain input (outside ring of the sidechain listen button).
I added it in case your sidechain input is too low or too high in volume and you want to adjust it whilst setting the bandpass. This is purely a listening control and will not affect the original incoming level sent to the detection section; if you want to change how much the output is affected, that's what the amount control is for.
It's very important not to alter the sidechain volume at source (pre-Patcher) just for the purposes of listening to the side signal and/or setting the bandpass, as doing so will directly affect the output result: changes in gain at source are reflected in the level sent to the detection section.
However, it's only natural that your sidechain level pre-Patcher will change during the mixing of your project. Although this will be reflected in the output of Trackspatcher, that is completely fine in this circumstance, as the change in output will reflect your overall mix. What I'm saying is: don't knowingly alter it pre-Patcher purely for the purpose of listening to it within Trackspatcher itself; use the level control instead, as that's what it's there for.
A m o u n t
The big central knob controls the overall intensity applied to the output signal. One of the differences compared with Trackspacer is the ability to also boost frequencies instead of only ducking them, hence my GUI showing the terms duck and boost instead of 0 to 100.
Naturally the primary purpose of this preset is for ducking frequencies but you could boost them to impart the characteristics of one sound onto another, a little like a vocoder for example.
S p e e d
The speed control acts like a combination of Trackspacer's attack and release. It controls how quickly a band reacts after detecting a change, and thus how quickly it will either cease affecting gain and return to zero, or move the gain to another value. Turning the control to the right increases the detection speed, acting like the attack; more to the left decreases the speed, acting like the release. Fully right will give the most accurate detection match, but not necessarily your most preferred sound. Personally I tend to keep it more to the right, but decide for yourself.
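The combined attack/release behaviour described above can be sketched as simple exponential smoothing: the applied gain chases the detected target, and one "speed" number sets how much of the remaining distance is covered per step, in both directions. This is a generic illustration of the concept, not the preset's actual internal maths.

```python
# Minimal sketch of a single "speed" control: a one-pole smoother chasing
# the detected gain target. Higher speed = faster reaction both toward a
# new cut (attack-like) and back to zero (release-like).

def smooth_gain(targets, speed):
    """speed in (0, 1]: fraction of the remaining distance covered per step."""
    current = 0.0
    out = []
    for target in targets:
        current += speed * (target - current)
        out.append(current)
    return out

# Detection asks for a -6 dB cut for a while, then releases back to 0 dB.
detected = [-6.0] * 5 + [0.0] * 5

print([round(g, 2) for g in smooth_gain(detected, 0.9)])  # fast: hugs the target
print([round(g, 2) for g in smooth_gain(detected, 0.2)])  # slow: lags behind
```

At high speed the applied gain tracks the detection almost exactly (the "most accurate detection match"); at low speed it never quite reaches the full cut and lingers after the detection releases.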
T e n s i o n
Tension will add to or subtract from the linear detection value. Its purpose is as a rounding-out control for fine tuning; it acts like a mix between the speed and amount controls in one. With that in mind, it's best used sparingly and not to extremes; however, the option is still there to push it, should you wish.
The default position is central, which does nothing at all. Turning towards the left gives an increasingly concave rounding amount; more to the right, increasingly convex.
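One common way such a concave/convex rounding is implemented is a power curve applied to the normalised detection value. The mapping below is hypothetical (the preset's exact curve may differ); it just shows the general shape the control describes.

```python
# Sketch of a concave/convex "tension" curve applied to a normalised
# detection value (0..1). Tension of 0 leaves the value untouched;
# negative values bend the curve concave, positive values convex.

def apply_tension(x, tension):
    """x in [0, 1]; tension in [-1, 1], 0 = linear (no change)."""
    exponent = 2.0 ** (2.0 * tension)   # 0.25 .. 4.0, exactly 1.0 at centre
    return x ** exponent

x = 0.5
print(apply_tension(x, 0.0))    # 0.5: linear, the default centre position
print(apply_tension(x, -1.0))   # concave: pushes mid values upward
print(apply_tension(x, 1.0))    # convex: pushes mid values downward
```

The endpoints 0 and 1 are unaffected at any tension; only the in-between values are rounded, which is why it behaves like a fine-tuning blend of speed and amount rather than a separate big knob.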
If you read all this, well done, you've earned yourself a cup of tea with a jammie Wagon Wheel.
Many thanks for your support on this one guys, I look forward to the feedback, and remember...
Need to scratch that musical Itch?
Then say hello to Trackspatcher
The audio back scratching bitch!
Just thought I'd post this info on frequency masking I wrote for someone I was teaching, in case it helps others too.
He was asking for advice about EQing masking frequencies… i.e. same frequencies present in two or more elements playing at the same time… Strings and vocals for example.
He had two main questions…
1) Why would I want to EQ the same 'common' frequency content of these so-called masking elements? Wouldn't it be more harmonious playing them both as they were?
2) Does the EQing need to have a lot of bands with sharp narrow cuts or can I just use a band or two with broader cuts?
I can totally understand why he would think having the same frequencies play at the same time was a good thing. In his mind, just as we choose sounds and notes that harmonically fit with each other, why would we need to reduce the gain of some of those shared frequencies when surely they would complement one another?
The answer is they do complement each other, but that's not the problem; the problem is to do with phase. Imagine the following…
You're watching 2 lead dancers on stage. If these two dancers were in perfect sync you could make sense of their performance; if they were only slightly out of sync you may or may not notice an impact on the integrity of the overall performance. If they were very out of sync it would ruin the whole performance: your eyes wouldn't know who to concentrate on, you would lose focus, and they couldn't both stand out. One performance would take away from the other unless one stepped down into the background for the other to shine and take the limelight. The same is true for audio.
Naturally, the 'common or shared' frequency content of the different elements will not be perfectly in phase at all times. Because of this, frequencies of one element will sometimes sync with the other element(s) and other times will not. When they sync you have an addition of amplitude; when they don't, a reduction. The degree of reduction depends on how many degrees out of phase they are to each other and on the amplitude of each to begin with. For example, a 180-degree phase shift of one frequency against another instance of the same frequency would result in complete silence if the amplitudes of both were also the same. If they were unequal, you'd be left with the difference between their linear amplitudes (note that dB figures don't simply subtract: a 9 dB signal cancelling against a 6 dB one leaves about -1.7 dB, not 3 dB).
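The cancellation arithmetic is easy to check with a few lines. Two equal-level signals 180 degrees out of phase cancel completely; unequal levels leave the difference of the *linear* amplitudes, which is not the same as subtracting the dB figures.

```python
# Worked numbers for the 180-degree cancellation example.
import math

def db_to_amp(db):
    """Convert a dB level to linear amplitude (0 dB = amplitude 1)."""
    return 10 ** (db / 20)

def amp_to_db(amp):
    return 20 * math.log10(amp)

# 180 degrees out of phase: the quieter signal subtracts from the louder one.
residual = db_to_amp(9) - db_to_amp(6)
print(round(amp_to_db(residual), 2))   # about -1.69 dB, not 3 dB

equal = db_to_amp(9) - db_to_amp(9)
print(equal)                           # 0.0 -> complete silence
```

This is why a 3 dB level difference between two cancelling copies still wipes out most of the energy: the residual is quieter than either original signal.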
In reality, however, each element will be evolving in pitch, timbre and amplitude, and thus they will interact with each other continually in a dynamic push and pull, a tug of war for the limelight so to speak; varying amounts of phasing throughout the performance will naturally occur.
Please understand I am not saying your overall mix will be 'out of phase' by sharing frequencies. Your overall phase may be in great shape, but your vectorscope will never be static; it will be continually dancing around despite remaining 'in phase' overall. This is simply the interaction of your mix as discussed above, and you will always need to carve a little space here and there for some elements. This is a normal part of mixing.
So what can he do?
Well, EQ is of course one of the best ways, but first, I said, there are a few other options to ask yourself...
A) Do you need to address the masking?
You need to be aware of masking issues, yes, but you don't necessarily need to treat them all the time, because 'if it ain't broke, don't fix it' as the saying goes. If you listen back and everything sounds good to you and you can hear all the elements in the way you intended, then don't think 'oh, but I haven't treated masking frequencies yet'. No, you don't need to in this case.
B) Out of all the competing elements, which one needs the limelight? Chances are, in my example of strings vs vocals, the vocals will take precedence nine times out of ten. Decide a pecking order. Reduce the gain of the element you need least instead of boosting the gain of the thing you want to hear more.
C) Do you actually need the element at all? Do you really need that extra percussion sound in the main chorus that's competing sonically but not musically? By all means keep it in the intro, outro, breakdown, etc., but go through the pecking order and be bold enough to get rid of it at that particular point in the song if you simply don't miss it. Just because it was serving a purpose in the verse doesn't mean you need it in the chorus, and vice versa.
D) For those competing elements that do need to be there, can you sequence them so they do not play as much, or at all, at the same time? An example of this is the off-beat bass line: yes, it serves a purpose dynamically, adding drive and energy through syncopated rhythm, but it also serves to reduce masking content too.
E) Can you transpose the element to a different octave to reduce the masking? Each element likes to stand out in its own space in the mix, so experiment with which octave range it sits in. It may need to stay in the same range as the other element, but it may work much better an octave up or down.
F) Panning. Yes, I highly recommend panning as an extra way to separate masking elements occupying a similar frequency range. BUT! Don't forget your mix still needs to translate well in mono, so think of this as the cherry on the cake; in my opinion you should look at the other options before this one for the purpose of treating masking frequencies, and then this can add that finishing touch.
Now, if you still need to EQ the masking frequencies, this is a corrective technique as opposed to a creative one, so treat the masking frequencies early on in your plugin chain.
Firstly, as mentioned above, every element in your mix should be sitting in its own region of the human hearing spectrum. There is no instrument/element that needs to occupy the full 20 Hz to 20 kHz band; some will require broader spectrums than others (like a riser sweep for instance), but never the full spectrum.
With this in mind a humble band pass using both low and high pass filters is your friend.
Generally speaking, you should be high- and/or low-passing most elements of your mix regardless of masking frequencies. I mention this because it will help get rid of a lot of potential masking content which you won't be needing anyway.
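The "humble band pass" idea can be sketched with the simplest possible filters: a one-pole low-pass and a one-pole high-pass in series. Real mixing EQs use steeper, better-behaved filters; this is only a minimal sketch of the principle of trimming both ends of the spectrum.

```python
# Minimal band-pass sketch: one-pole low-pass followed by one-pole high-pass.
# Purely illustrative of the concept, not a production-quality filter.
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    """Smoothing coefficient for a one-pole filter at the given cutoff."""
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def band_pass(samples, low_cut_hz, high_cut_hz, sample_rate=44100):
    a_lp = one_pole_coeff(high_cut_hz, sample_rate)  # low-pass sets the top
    a_hp = one_pole_coeff(low_cut_hz, sample_rate)   # high-pass sets the bottom
    lp = hp_state = 0.0
    out = []
    for x in samples:
        lp += a_lp * (x - lp)            # low-pass stage
        hp_state += a_hp * (lp - hp_state)
        out.append(lp - hp_state)        # high-pass = signal minus its low-pass
    return out

# A constant (0 Hz) input should be almost fully rejected by the high-pass side.
dc = [1.0] * 44100
print(round(band_pass(dc, low_cut_hz=100, high_cut_hz=5000)[-1], 3))
```

Content below the low cut and above the high cut is attenuated, which is exactly the potential masking material you usually won't be needing anyway.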
So, assuming this has been done, the second thing to mention is that your treatment of masking frequencies should not necessarily be permanent. In our example of strings vs vocals, the strings may sound terrible if the vocals were no longer playing but you left the EQ cuts in, so it's much better to do this dynamically rather than permanently.
However, I appreciate the dynamic aspect would be a lot of work if automated yourself. Fortunately there are several plugins that address this, the most well known of which is Trackspacer by Wavesfactory; if that's not something you want to purchase, there may be similar free plugins out there too, as well as my preset.
On to his second question in reference to masking frequencies which was… Does the EQing need to have a lot of bands with sharp narrow cuts or can I just use a band or two with broader cuts?
To clarify his question: he is asking, if there are several competing frequencies in close proximity, would he need to notch out as close to the individual frequencies as possible using many EQ bands, each with narrow Q settings, or could he just use one or two bands with broader Q settings to get rid of them all? Would it really make much difference?
The answer is both yes and no, or more precisely sometimes yes and sometimes no.
You see, if you have more bands at narrow Q width, you can technically duck only the masking frequencies more accurately, without interfering as much with other non-masking content. There are two things to point out, however.
Firstly, most EQs (well, basically all I know of) have a limited minimum Q setting, and thus, depending whereabouts in the spectrum you are cutting, you will not be able to cut only one single frequency. In general, even the narrowest cuts possible will affect more than one single frequency.
Secondly, even though the masking frequencies will generally be concentrated around a specific area of bandwidth, there is a high probability that there are still going to be A LOT of different masking frequencies. Trying to isolate all of them would be extremely difficult for even the most talented engineers out there, very time consuming, and very fatiguing on your ears. So, practically speaking, using fewer but wider EQ cuts is best. This is why plugins such as Trackspacer are extremely useful.
BUT! In certain situations you will need to put the time in to use more bands with narrower cuts. I will explain now why that may be the case…
If we use 2 sets of vocals that were masking each other as an example, there's a good reason why you would want a more precise operation and not a broad approach with limited bands.
To understand why, you need to know a little about harmonics and evolution.
You see, lower-range instruments will arguably be affected more severely by EQ than higher ones when using the exact same settings, because of the way octaves and harmonics work.
Using the standard equal-tempered Western scale, let's start at pitch A0 at 27.5 Hz. The next octave is always a doubling, so A1 is 55 Hz. A0-A1 therefore only has a range of 27.5-55 Hz for this octave to occupy, a difference of 27.5 Hz.
The next octave up is A2 at 110 Hz. A1-A2 spans 55-110 Hz, a difference of 55 Hz. Then 110-220 Hz, with a difference of 110 Hz.
So, as you can see, with every octave you go up you have twice as much room for timbral complexity, because each jump doubles the frequency range available to fill that octave.
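The octave arithmetic above fits in a tiny loop: each octave of A doubles in frequency, so the Hz range an octave occupies also doubles each time you go up one.

```python
# Each octave doubles in frequency, so the Hz range it occupies doubles too.
a0 = 27.5  # frequency of A0 in Hz

for octave in range(4):
    low = a0 * 2 ** octave
    high = low * 2
    print(f"A{octave}-A{octave + 1}: {low:g}-{high:g} Hz, range {high - low:g} Hz")
```

Running this prints the 27.5, 55, 110, 220 Hz ranges from the text, making the doubling pattern explicit.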
There is much more audio detail available in between the frequencies we perceive as particular notes. We perceive pitch based on a single fundamental frequency (most humans can perceive pitch from under 10 ms of constant duration of the fundamental).
What we perceive as timbre is the combined sound of all the other frequencies present and their individual amplitudes (usually less than the fundamental's). These other frequencies are classed either as harmonics (frequencies with whole-number mathematical relationships to the fundamental) or as partials, which are basically like harmonics only inherently more discordant or inharmonic, because their ratios to the fundamental are not related by integers (whole numbers).
Examples of this type of timbre are sounds like bells, cymbals, metallic sounds and sounds created using FM synthesis.
If you've ever wondered why a guitar and a piano can be tuned to the same pitch yet sound different, the answer is timbre. Essentially, it is their fundamental pitch that is tuned to the same frequency, but each will have its own unique set of harmonics and partials, which combine to form what we perceive as timbre and the innate character of that instrument. Without timbre we would have no music worth listening to, as it would be nothing but sine waves.
This is why cutting with one particular bandwidth (in Hz) high up will not cut as many notes of our scale as the same bandwidth would lower down the spectrum.
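This can be quantified. Note that the effect concerns absolute bandwidth in Hz (a fixed Q *ratio* spans the same musical interval everywhere); the sketch below, under that assumption, counts how many semitones a fixed-Hz-wide cut spans at different centre frequencies.

```python
# How many semitones a cut of fixed width (in Hz) spans at different centre
# frequencies. The same 50 Hz swallows a big chunk of the scale down low,
# but only a sliver up high.
import math

def semitone_span(center_hz, width_hz):
    """Semitones covered by a band from center - width/2 to center + width/2."""
    low = center_hz - width_hz / 2
    high = center_hz + width_hz / 2
    return 12 * math.log2(high / low)

print(round(semitone_span(110, 50), 1))    # low register: many semitones
print(round(semitone_span(5000, 50), 2))   # high register: under a semitone
```

At 110 Hz a 50 Hz-wide cut spans roughly eight semitones of the scale, while the same 50 Hz at 5 kHz spans well under one, which is the asymmetry the paragraph above describes.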
So you may think, 'OK David, so you're saying I can get away with fewer bands at wider settings the higher up the audio spectrum I go?'... Well, not quite.
You could certainly argue this for really high frequencies contributing to the 'air' of sounds or the high 'sizzle' of cymbals, but within the spectrum of human speech we have evolved to discern a huge array of emotion from tonal differences in these areas.
So even though it's true that wider cuts will not remove as much of the musical scale as the same setting would much lower down, it would still impact our listening experience a lot more in the particular case of, say, vocals, because we have evolved to be highly reactive to slight timbral changes here, arguably directly because of the increased scope for timbral diversity over the lower octaves. Hence, in the example of vocals, more precise EQing could be called for, because with human speech and singing we naturally, subconsciously scrutinise these elements much more.
So, while a broader cut in the sub range would technically wipe out a huge part of the scale of our track in that area, we are not adapted to hear this area anywhere near as finely as human speech; thus, counter-intuitively, broader cuts can work.
Hope this helps some people. Day