Can we record and model the sound of an acoustic instrument (like the cello) instead of capturing the sound of its amp?

Can we record and model the sound of an acoustic instrument (like the cello) instead of capturing the sound of an amp? More precisely, can we record the sound of our acoustic instrument with a good microphone and have it modeled, so that when we feed it the signal from the instrument's piezo pickup, it corrects the difference to get as close as possible to the mic sound? (A bit like the Tone Dexter does…) Thank you for your help.

No. The capture process measures the changes an amp/pedal makes to a source sound; it doesn't generate its own sound the way a cello does.

I think he’s asking (as a Feature Request) to be able to do that.

There has been talk of adding the ability to shoot IRs through the QC; if that were possible it would be pretty much the same thing. There's probably a request for that already, though.

It would be cool; acoustic instrument IRs are the best way to make a piezo sound better, IMO.


That’s pretty much how the Tone Dexter works coupled with a preamp and tone stack. It sounds fantastic.

Hello, thank you for your answers. Let me clarify my question.
The Tone Dexter calculates the difference between the two sources at the time of learning (because training is done with both sources simultaneously, piezo and mic).
The QC only captures one source, the one with the microphone on the amp, which becomes the model (if I'm not mistaken).
So I wanted to know: 1) whether this makes a big difference, and
2) whether, despite the fact that the QC is made to capture the sound of AN AMP to create a model, it would work to record the sound of an acoustic instrument instead of the amp, and whether the result would be convincing. So instead of putting the microphone in front of an amp, we would put it directly in front of the cello, so that the model can later be driven by a different source, like my piezo. Thank you.

Hi @Barquieu ,

The main problem we are facing is the 21 ms IR length (the process generates an IR that you then use with a microphone input).

Until it becomes technically possible on the QC (soon, probably in 5 years), you can try making an IR of your instrument and testing it on the QC (which I have already done for my acoustic guitars).
You will find that the low end (the "body" of the instrument) is truncated by those 21 ms… It sounds like a really, really cheap Tone Dexter clone from China (my 10-year-old Fishman Aura sounds better than this).
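To put a rough number on that truncation (assuming a 48 kHz sample rate, which is my assumption and not a QC spec): a 21 ms window is only about a thousand samples, so the IR can't even hold one full cycle of anything below roughly 48 Hz, and body resonances that ring longer than 21 ms get cut off.

```python
# Back-of-the-envelope check of what a 21 ms IR can resolve.
# The 48 kHz sample rate is an assumption for illustration only.
sr = 48_000
n = int(sr * 0.021)                    # ~1008 samples in 21 ms
print(f"{n} samples, ~{sr / n:.1f} Hz per FFT bin")
print(f"lowest frequency with a full cycle inside the IR: ~{1 / 0.021:.0f} Hz")
```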

There is a link to a lot of info and tools for making your own acoustic instrument IR (Jon Fields' open-source algorithm, "Acoustic IR").
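For anyone curious what such an IR creator does under the hood, here is a minimal Python sketch of the basic idea: frequency-domain deconvolution of a mic track against a simultaneously recorded piezo track. This is not Jon Fields' actual code nor the Tone Dexter/QC algorithm; the file names, regularization, and IR length are placeholders.

```python
# Sketch: estimate a piezo-to-mic correction IR by spectral division.
# Assumes mono WAV files recorded simultaneously (hypothetical names).
import numpy as np
import soundfile as sf

piezo, sr = sf.read("cello_piezo.wav")
mic, _ = sf.read("cello_mic.wav")

n = min(len(piezo), len(mic))
P = np.fft.rfft(piezo[:n])
M = np.fft.rfft(mic[:n])

# Regularized division avoids blowing up where the piezo spectrum is weak.
eps = 1e-3 * np.max(np.abs(P))
H = M * np.conj(P) / (np.abs(P) ** 2 + eps ** 2)

ir = np.fft.irfft(H)[: sr // 2]        # keep ~500 ms of the response
ir /= np.max(np.abs(ir)) + 1e-12       # peak-normalize

sf.write("cello_correction_ir.wav", ir.astype(np.float32), sr)
```

Real tools refine this with windowing, averaging over many notes, and phase handling, but the "difference between the two sources" described above is essentially this transfer function.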

I would say: for now, the best tool for the job isn't the QC but the… Tone Dexter.

Thank you very much for your complete and detailed answer!

There are also captures on the cloud of the TC Electronic BodyRez and Tone Dexter. I haven’t tried them yet, but maybe this could also work for you.

I made the same feature request earlier. I suggested creating a "Neural Capture" for acoustic/piezo instruments, i.e. a different version of the neural learning process that can use a played instrument as the source, like the Tone Dexter does. In the same request I added that, if this is not something they want to tackle, then just a Tone Dexter-style IR creator would be good too, although surely less accurate than what Neural Capture technology is capable of.

You can also easily make your own IRs of your acoustic instruments online and then export the IRs to the QC:
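If you do that, a small script can tidy the downloaded IR before you import it (fold to mono, trim leading silence, peak-normalize). File names here are made up, and the exact format the QC expects (sample rate, length) should be checked against Neural DSP's documentation.

```python
# Sketch: clean up an acoustic IR WAV before loading it into the QC.
# Hypothetical file names; format requirements not verified here.
import numpy as np
import soundfile as sf

ir, sr = sf.read("cello_ir_from_online_tool.wav")
if ir.ndim > 1:
    ir = ir.mean(axis=1)               # fold stereo to mono

ir = np.trim_zeros(ir, trim="f")       # drop leading digital silence
ir /= np.max(np.abs(ir)) + 1e-12       # peak-normalize

sf.write("cello_ir_for_qc.wav", ir.astype(np.float32), sr, subtype="FLOAT")
```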
