One of the problems with modern music production is that you don't always have access to expensive microphones and a recording booth, either because you're on the move or because you can't afford them. So what I would love to see is a neural-network-based plugin that lets you emulate pristine recording conditions.
The neural network would be trained on paired source and target material. This material would be captured in a similar manner to an impulse response, but it would be much longer than a simple noise burst, so there is enough data to train the network on.
Firstly, there would be different modes based on recording type, i.e. singing, speech/rap, instrumental and Foley models.
To create the singing model, a very transparent, high-quality recording of male and female singers, covering alto, soprano, tenor, bass, whistle, falsetto and vocal fry, is captured and used as the training target. That target material is then played back through near-field monitor speakers into various laptop and smartphone mics to produce the training source. The neural network is then trained to process the cheap, low-quality recording so that it sounds like the high-quality studio recording.
This could be taken a step further by letting users capture the space they are recording in. The target recording could be played back and captured with both high-quality and low-quality mics in different spaces, allowing the user to either retain the room reverb or remove it from their recording.