Truly interesting interview! Thanks for the link. I was surprised this YT video had so few views. It provides substantial insight into how amp model capture/creation works on the QC.
I agree with your observation that this “top-down” approach makes it more difficult to emulate variables like changing tube bias. However, it is also capable of capturing, at least sonically, the results of extremely subtle interactions and behaviors both within and between hardware components, details that could easily be overlooked in bottom-up component modeling.
For other modeler companies that create amps by modeling their components, only the most painstaking, detailed, complex, and accurate of “bottom-up” approaches, one that in essence mirrors the exact design of an amp circuit by circuit and capacitor by capacitor, along with all the physics of its component parts and the interactions between them, would be capable of what Neural claims to be doing with AI modeling.
As the video points out, Neural doesn’t need to know the physics and all the potential variables of what is happening within an amp. They short-circuit that laborious, labor-intensive process of creation by capturing the end result instead, analyzing the amp’s sound rather than its hardware.
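As a toy illustration only (not Neural’s actual method, whose architecture isn’t public), here is a minimal sketch of what “capturing the end result” means in practice: probe a black box with a known input, record its output, and fit a model to the input/output pairs without ever looking inside. The `amp` function below is a made-up stand-in for real tube distortion, and the polynomial fit stands in for whatever model the capture process actually trains.

```python
import numpy as np

# Hypothetical "amp": we only observe its input/output, never its circuit.
def amp(x):
    return np.tanh(2.5 * x)  # stand-in for real tube clipping behavior

# Probe the amp with a test signal and record the result.
x = np.linspace(-1.0, 1.0, 2000)   # sweep of input levels
y = amp(x)                         # "measured" output

# "Top-down" capture: fit a black-box model (here, a degree-9 polynomial)
# to the observed input/output pairs, ignoring the internals entirely.
coeffs = np.polyfit(x, y, deg=9)
model = np.poly1d(coeffs)

err = np.max(np.abs(model(x) - y))  # worst-case mismatch across the sweep
print(err)
```

The point of the sketch is that the fit only knows the amp through its sound; swap in a different circuit behind `amp` and the same procedure still works, which is exactly why the approach scales without circuit analysis.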
In the end it would seem to come down to who is accurately capturing the most data points of a physical amp under multiple configurations. An amp’s sound changes as its parameter knobs are turned, as hardware ages, as more or less current is applied, as wires and their proximity to each other create fields, as materials expand and contract or gain a charge or change in chemical constitution, and with a multitude of other variables that encompass some fairly heady physics.
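To make the “multiple configurations” point concrete, a quick back-of-the-envelope count (knob names and step counts are invented for illustration) shows how fast the capture space grows even before any of the physical variables above enter the picture:

```python
from itertools import product

# Hypothetical front panel: five knobs, each sampled at just five positions.
knobs = {"gain": 5, "bass": 5, "mid": 5, "treble": 5, "master": 5}

# Number of distinct configurations a full sweep would have to capture.
n_configs = 1
for positions in knobs.values():
    n_configs *= positions

print(n_configs)  # 5**5 = 3125 captures for even a coarse 5-step sweep
```

Finer knob resolution or extra variables (bias, supply voltage, temperature) multiply that count further, which is presumably why interpolating between a manageable number of captured points matters so much.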
I can see how, at least in the short run, a more comprehensive AI capture of the end result (the amp’s sound) might prove a superior and more economical approach than attempting to accurately model every bit of hardware and its interactive behavior within an amp.
Maybe ultimately the two worlds, top-down using AI and bottom-up using human circuit analysis, will converge: let an AI and some robotics loose on bottom-up modeling of every variation of a hardware component’s behavior in an amp, then contrast and combine that with top-down AI modeling of the amp’s output (sound) and merge the two. There is probably some measure of this cross-checking going on in the bottom-up component modeling approach already, even without AI; I would think the developer checks his component modeling results against the actual sound of the amplifier.
Hybrid approaches might be similar to animation and CGI motion capture, where you model the bones and muscles of a human knee joint’s operation but also attach sensors to a skinsuit to capture the knee’s apparent motion in real time, then combine all that data into a convincing illusion of natural motion.