I feel like Cortex Control offers a potential next level of control rather than simply mirroring the QC. Similar to a (simple) sound-design experiment of creating your own synth in Logic by dropping in a WAV file, what if that functionality could be applied to the QC? Like turning a guitar into a synth, but in this case starting from a WAV file.
I'll admit I don't understand the “magic” behind machine learning or neural-network training, but it seems like the potential is there.
Imagine putting a contact microphone on a piece of tinfoil, crinkling it a bit, then dragging the WAV into CC to “create FX”… it does its emulation thing (sanity check: like a capture) and you end up with a block. You could even offer optional add-ons (ADSR) in the process.
Put a wet/dry mix on it (which should be on everything, IMO) and now my bass sounds like Les Claypool laser-beaming through ice.
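To make the idea a little more concrete (a very rough sketch, and I'm not claiming this is anything like how the QC's capture actually works): even without the machine-learning part, you could picture the WAV being used as a convolution impulse, shaped by an ADSR, then blended with a wet/dry knob. File names here are just placeholders.

```python
# Super rough sketch (placeholder file names, assuming mono WAVs):
# use the crinkled-tinfoil recording as a convolution impulse,
# shape it with an ADSR, convolve the bass through it, then wet/dry mix.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def adsr(n, sr, attack=0.01, decay=0.1, sustain=0.5, release=0.2):
    """Simple linear ADSR envelope, n samples long at sample rate sr."""
    a, d, r = int(attack * sr), int(decay * sr), int(release * sr)
    s = max(n - a - d - r, 0)
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),     # attack
        np.linspace(1.0, sustain, d, endpoint=False), # decay
        np.full(s, sustain),                          # sustain
        np.linspace(sustain, 0.0, r),                 # release
    ])
    return env[:n]

impulse, sr = sf.read("tinfoil_crinkle.wav")  # hypothetical capture WAV
dry, _ = sf.read("bass_di.wav")               # hypothetical bass DI track

impulse = impulse * adsr(len(impulse), sr)    # shape the texture

wet = fftconvolve(dry, impulse)[: len(dry)]   # bass "through" the tinfoil
wet /= np.max(np.abs(wet)) + 1e-9             # keep levels sane

mix = 0.5                                     # the wet/dry knob
out = (1.0 - mix) * dry + mix * wet

sf.write("bass_through_tinfoil.wav", out, sr)
```

Obviously the real thing would be a trained model rather than a dumb convolution; the point is just the workflow: WAV in, shaped block out, wet/dry knob on the end.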
And of course it would be sharable on the cloud!!!