Is this true? Are QC amp models "deep" captures?


I’m curious if there’s any truth to this: Worship Tutorials seemed to indicate that NDSP models amps based on extensive capturing. If so, that would give me a whole new level of respect for the QC amp models. Oddly, after hearing that, suddenly the QC amp models sound much better to me…lol.

Does anyone have any info, or can anyone corroborate the claim? It’s right around 37:40.


If the models are actually some kind of extended captures where different settings of the amp were captured and merged into one “model”, it would explain the limited parameters available on the QC amp models compared to other modelers. Technically it would be a “morphable capture” rather than an actual bottom-up model.

It’s questionable whether it should be more respected than modeling the behaviour of the actual amp circuits on a component level.


Well, only a Neural DSP insider could write an accurate answer to this question, but we can make some educated guesses:

  • ‘Deep Capture’ doesn’t equate to ‘a bunch of captures’, otherwise all the QC ‘modeled’ amps would consume more or less the same amount of CPU, which is NOT currently the case (the modeled amps added in 1.4 consume more CPU than the others, but are really great!)
  • A bunch of static captures packaged in a good-looking UI with knobs would be similar to the ‘Rig Player’ in the Overloud TH-U VST, that is: a LOT of static captures that cover a LOT of knob permutations (and only Overloud has the SDK to create them; for third parties to create TH-U Rigs, they probably need a licensed version of the SDK…)
  • What ‘Deep Capture’ refers to (according to several ‘hints’ dropped by Doug on Discord) is a much more complicated process, where an AI is used to recreate a modeled version of an amp.

Basically, there are two ways to try to mimic an amp with software modeling:

  • Component modeling: each physical component of an amp is recreated in software. This is a ‘bottom-up’ approach, where you must first emulate the elemental parts of an amp and then assemble all those parts… (Fractal uses this approach, as do several other modelers.)
  • AI modeling: you ‘feed’ an AI a lot of samples of the amp you want to model, and the AI generates a kind of ‘mathematical formula’ that produces the same output for the same input (this is a very simplified explanation). This is an ‘Up-Bottom’ approach, where you begin with the desired result and then recreate the elements that allow you to reach it. I think ‘Deep Capture’ refers to this kind of AI-assisted modeling on the QC…

So, for the QC, we can say that a ‘Capture’ is only a ‘snapshot’ of an amp, whereas a ‘Deep Capture’ is a bunch of snapshots feeding an AI that recreates a dynamic, software-based emulation of the amp. As QC users, we can do a ‘snapshot’ capture, but we can’t do a ‘Deep Capture’ (because it requires much more time, and manual coding is probably required anyway…)
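The result-first idea can be sketched in a few lines of plain Python. This is a toy illustration of black-box fitting, not Neural’s actual method: the ‘amp’ here is just a hidden tanh waveshaper standing in for real hardware, and the ‘model’ has a single parameter instead of a neural network’s thousands.

```python
import math
import random

# Pretend hardware: we can only observe its output, not its circuit.
def unknown_amp(x):
    return math.tanh(3.0 * x)

# Probe the "amp" with test signals to collect labeled data.
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(200)]
ys = [unknown_amp(x) for x in xs]

# Candidate model: y_hat = tanh(g * x). Fit the single parameter g by
# gradient descent on the squared error, knowing nothing about what
# is inside the box -- only its input/output behavior.
g, lr = 1.0, 0.5
for _ in range(2000):
    grad = 0.0
    for x, y in zip(xs, ys):
        y_hat = math.tanh(g * x)
        # d/dg of (y_hat - y)^2 = 2 * (y_hat - y) * (1 - y_hat^2) * x
        grad += 2.0 * (y_hat - y) * (1.0 - y_hat * y_hat) * x
    g -= lr * grad / len(xs)

# g has now converged close to the true (hidden) gain of 3.0.
```

Real neural amp modeling swaps the one-parameter waveshaper for a network with many thousands of parameters, but the ‘begin with the desired result’ logic is the same.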

Again, just my $0.02; an official explanation would be better, but we can understand that Neural DSP doesn’t want the competitors to know more about their ‘secret sauce’ :shushing_face:


If you look at some of the QC Captured Models, they have more parameters than a standard Capture does. This seems to support the idea that some models are deep Captures, or several Captures put together to form one deep Capture. I can see NDSP eventually changing the Capture process to support combining several Captures into one model. Right now we can only do a snapshot of one setting. Imagine being able to Capture 5 or 6 settings and combine them into one preset. I can see that in a future update; I think it’s just the maturing of the Capture process.

Hey everyone!

I wanted to chime in that the QC amp models are not several captures combined into one, it’s a different process. :wink:


Amazing work: [1811.00334] Deep Learning for Tube Amplifier Emulation


In the following interview Doug talks about the automated ML approach with dedicated machines turning the amp knobs and so on: S2 E4 Inference: Douglas Castro, NeuralDSP: AI transforming an industry - YouTube

He notes that they didn’t have the resources to do traditional DSP amp modeling from the ground up. Instead, it seems the signal response of real-world amps is measured in an automated process where the main amp parameters are varied, and the labeled audio data is later used to train ML models.

While the results are obviously impressive, the process seems to be limited to relatively easily accessible amp parameters. Things like varying the value of a bright cap, changing tube bias, or even swapping power-amp tubes are obviously not so practical to automate, and thus not modeled / captured.
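The knob-varying data collection described above can be sketched as a toy (my own illustration, not NDSP’s pipeline): the recorded knob position becomes an extra input to the model, so a single fit covers the whole control range instead of one static capture per setting.

```python
import math
import random

# Pretend hardware whose drive depends on a gain knob (0..1).
def amp_with_knob(knob, x):
    return math.tanh((1.0 + 2.0 * knob) * x)

# Automated rig: sweep the knob, probe with signals, record everything.
random.seed(1)
data = [(random.random(), random.uniform(-1.0, 1.0)) for _ in range(300)]
labels = [amp_with_knob(k, x) for k, x in data]

# Conditioned model: y_hat = tanh((a + b * knob) * x).
# Fitting a and b jointly learns how the knob reshapes the transfer
# curve, rather than storing a separate snapshot per knob position.
a, b, lr = 0.5, 0.5, 0.5
for _ in range(3000):
    ga = gb = 0.0
    for (k, x), y in zip(data, labels):
        y_hat = math.tanh((a + b * k) * x)
        common = 2.0 * (y_hat - y) * (1.0 - y_hat * y_hat) * x
        ga += common        # d(drive)/da = 1
        gb += common * k    # d(drive)/db = knob
    a -= lr * ga / len(data)
    b -= lr * gb / len(data)

# a and b land near the true hidden values (1.0 and 2.0), so the
# fitted model now responds continuously to any knob position.
```

Parameters a bright cap or tube bias would need to be made sweepable by a robot before they could join `knob` in the training data, which is exactly the limitation noted above.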

Sorry for digging into such a discussion. I received my QC a few days ago, and being a researcher by trade, I can’t help myself but try to understand at least some general principles of the tech.


Truly interesting interview! Thanks for the link. I was surprised this YT video had so few views. It provides substantial insight into how amp model capture/creation works on the QC.

I agree with your observation that this “top-down” approach makes it more difficult to emulate variables like changing tube bias. However, it is also capable of capturing at least the sonic results of extremely subtle interactions and behaviors both within, and between, hardware components: details that could easily be overlooked in bottom-up component modeling.

For other modeler companies creating amps by modeling their components, nothing but the most painstaking, detailed, complex, and accurate of “bottom-up” approaches, one that in essence mirrors the exact design of an amp circuit by circuit and capacitor by capacitor, with all the physics of its component parts and the interactions between them, would be capable of what Neural claims to be doing with AI modeling.

As the video points out, Neural doesn’t need to know the physics and all the potential variables of what is happening within an amp. They short-circuit that laborious, labor-intensive process of creation by instead capturing the end result: analyzing the amp’s sound instead of its hardware.

In the end, it would seem to come down to who is accurately capturing the most data points of a physical amp under multiple configurations. Amps’ sounds change as their parameter knobs are turned, as hardware ages, as more or less current is applied, as wires and their proximity to each other create fields, and as materials expand and contract, gain a charge, or change in chemical constitution, along with a multitude of other variables that encompass some fairly heady physics.

I can see where, at least in the short run, a more comprehensive capture of the end result (the amp’s sound), using AI, might ultimately be a superior and more economical approach than attempting to accurately model every bit of hardware and its interactive behavior within an amp.

Maybe ultimately the two worlds, top-down using AI and bottom-up using human circuit analysis, will collide, and they will let an AI and some robotics loose on bottom-up modeling of every variation of a hardware component’s behavior in an amp, then contrast and combine that with AI top-down modeling of the amp’s output (sound) and merge the two. There is probably some measure of this contrasting going on in the bottom-up component-modeling approach already, even if it doesn’t use AI; I would think the developers check their component-modeling results against the actual sound of the amplifier.

Hybrid approaches might be similar to animation and CGI motion capture, where you might model the bones and muscles in a human knee joint’s operation but also attach sensors to a skinsuit to capture the knee’s apparent motion in real time, then combine all that data for a convincing illusion of natural motion.


You may find this one interesting as well: Neural Networks as Guitar Amps (with Neural DSP interview) - YouTube


There was a great video by the data scientist who did his PhD thesis on using neural nets to capture a Fender amp tone. Neural hired him, and that was the birth of this (I believe).

And on the subjective side, I just played a gig in Vegas to a crowd of 1,000+. The other guitarist used a real amp/board, I used my QC. You’d never know the difference once it comes out of the front of house. I think the entire industry will have to follow suit.


If I understand what Neural does, it’s basically just applying modern ML techniques to guitar audio chains. If that’s true, the limitation is just based on what you choose to black box and what you choose not to. In a sense, it’s not a real limitation, because in this context what’s in the black box doesn’t matter. So for example, to model the Granophyre, they took the tubes out of the “black box” and used it as an input parameter to the modeling algo.
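That ‘take the tubes out of the black box’ idea is easy to picture: the discrete component choice just becomes another conditioning input next to the knob values. A minimal sketch, where the tube names and feature layout are my own assumptions, purely illustrative:

```python
# Hypothetical list of power-amp tube options exposed by the model.
TUBE_TYPES = ["EL34", "6L6", "KT88"]

def features(knob_gain, knob_tone, tube):
    """Input vector for one training example: continuous knob values
    plus a one-hot encoding of the discrete tube choice."""
    one_hot = [1.0 if tube == t else 0.0 for t in TUBE_TYPES]
    return [knob_gain, knob_tone] + one_hot

# Each training row pairs this vector (plus the dry audio) with the
# amp output recorded for that exact hardware configuration.
row = features(0.7, 0.4, "6L6")   # -> [0.7, 0.4, 0.0, 1.0, 0.0]
```

Whatever you pull out of the black box this way has to be physically switchable during data collection; everything left inside is learned only as part of the overall input/output behavior.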

The whole approach is pretty clever and it’s really just a matter of being quicker on the draw to adopt modern technologies than other gear companies. They’re using machine learning techniques that companies like Google, Amazon, and Facebook use, just applying it to amps and such. Sure, you can’t open up the code and see “oh, here’s where the preamp tubes are”, but it doesn’t matter much, because it accurately gives the right output (tone, feel, etc) given some input (your guitar playing).

I think what Neural DSP has done differently is to include amp knob positions, controlled by ‘robots’, in the machine-learning training sets, along with the resulting audio inputs and outputs. This allows the machine learning to also include the amp’s controls and their interactions in its generated model. The results seem quite good, and I’m looking forward to where Quad Cortex goes from here.


I have used the Fractal FM3 for over a year and, while I’m extremely satisfied with the models, I would like a unit with more DSP power to try dual amps. The obvious alternatives are the FM9 (I recently joined the waiting list in Europe) and the QC (I don’t mention the Helix, as I am not a big fan).
The QC has many interesting features, such as WiFi and the wonderful UI. It’s even cheaper than the FM9 in Europe.
What gives me pause is exactly the modelling technique. I recently read a post suggesting issues with the dynamics of some amps (as if a compressor were on). I wonder if this might have to do with the up-bottom approach. The dynamics on the FM3 are indistinguishable from a physical tube amp. My edge-of-breakup presets behave exactly as they should. I can even set the amp using the same technique I would use on a physical amp (turning the knob until I find the spot where the change in tone is greatest). I fear that I would miss that on the QC.

I have not noticed any issues with edge-of-breakup dynamics on the QC; that’s where I live most of the time. Results will depend a lot on what is used to reproduce the sound. Headphones, studio monitors, and PA cabinets aren’t likely to produce the dynamics and feel you would expect from a guitar amp. I’ve obtained good results from a Powercab, and from a cabinet I made with two Eminence Beta-10CX coaxial speakers using ASD:1001 tweeters and a really good stereo power amp.

Hi Valgua,

Welcome to the QC forum. I have an FM3 also. I love it! The ability to build a tone from an existing preset or from scratch is just awesome! Fractal has done an amazing job of recreating the circuit of a tube amp in software. That, along with the effects, makes it a force to be reckoned with. I think what people tend to forget is how mature the Fractal platform is. Ten years of updates have produced a very mature technology that gives guitarists a great recording and performing tool.

The Quad Cortex is new technology, first introduced in January 2020. For it to even be compared to technologies that have been around for over 10 years is a feat in and of itself. The UI is amazingly simple to use. But for me, there is just something about how the QC reacts and feels when using it. I find myself creating tones on the FM3 and capturing them. I’ve gotten some amazing results! Personally, I do not find the QC lacking in dynamics. I can build an AC30 preset on the Fractal, Capture it, and it will have close to the same dynamics as the FM3.

But it is important that the snapshot you capture from the FM3 has enough saturation without being overly saturated. I find that the QC reacts dynamically as long as the input gain structure is set properly. I think as the QC platform matures we’ll get better dynamics along with better tones and effects. We are two years into this new technology. In interviews, Douglas Castro (I recommend everyone listen to them) explains the concepts behind the QC. Since it is a purely software-driven device, their ability to improve and develop existing code, along with new amp and effect algorithms, means the future is bright for the QC.

Kemper and Fractal technologies have been around for over 10 years. IMVHO, the QC just sounds and feels better, but tone is a very subjective thing; I suggest everyone make that determination themselves. For me, the QC checks all the boxes, and once the technology matures it’s only going to get better. The fact that a device only two years in can compete with other modelers that have been around for 10-plus years is a testament to just how good the QC is.

I was about to ask whether you could share your FM3 captures, but I already found you and your presets on the Neural cloud :slightly_smiling_face: Thanks and keep it going :+1:

Apart from that, referring to other posts, I’d like to mention that I have never heard the term “up-bottom”. The opposite of “bottom-up” would rather be “top-down”. Sorry for being picky here. I’m a little stigmatized by these terms.

Thank you! I know that QC is the new kid on the block and that is obviously a challenge when competing with seasoned players such as Line 6 and Fractal. As you say, it’s an incredible achievement that the QC is considered a credible competitor after only two years. I admire what Neural is doing. The QC has some amazing features and I genuinely love that it has wifi. It is an elegant product. I think that it will be fantastic after a few updates. My questions are only aimed at making an informed choice in the present.