The future of modelers?

I just thought I’d try and spark a discussion now that Cap2 is out.

Moving capture processing to the cloud means that physical hardware specs won’t need to be as cutting-edge to keep up with the needs of players. Capture V2 and the existing processing mean that a four-year-old QC doesn’t need to be replaced with ‘bigger and better’, because the caps are going to be improved significantly and remain compatible with the same tools (nice job, Neural!).

So, if we start offloading the big processing to the cloud… what happens now? How do you compel people to make big purchases on hardware when there are no significant limitations with existing tools?

Personally, I’d like to see a general trend toward a focus on durability. What we have is great, but if switches could be made to survive a million clicks instead of a half million, or devices specifically designed to withstand drinks being dumped on them, that’d be great for everyone.

Another one would be dynamic, real-time, near-zero-latency audio-to-MIDI conversion without piezos, but I think we need the hardware to get a little further along first.

Thoughts?

This sounds like the history of Digidesign to me. Older people will know what I mean :slight_smile: When Pro Tools came out, there was no way the CPUs of that time could handle the load of digital processing. Then CPUs got powerful, and Pro Tools turned into a giant, useless dongle (copyright protection). Then Apple bought Logic and CPUs got really powerful, and slowly Pro Tools lost customers to Logic. However, Pro Tools always led with the advantage of being first. As far as innovation goes, Logic is leading: it has session players, mastering-engineer plug-ins, etc. However, a good studio is expected to have a nice acoustic room as well, and most of them are heavily invested in Pro Tools.
In the world of modelers, it is a different story. It was SansAmp for years. Then came Line 6, and they kept the lead for a long time. They might still be the leader, since they sold a lot of HX Stomps, but now there are serious contenders. Neural is one of them.
However, I do not know how much better modelers can get without solving one great issue.
What is that?
Amp in the room sound. If someone can do that in some way, it will be the end of an era.
It seems physically impossible to produce a 12-inch guitar speaker sound from a 6-inch studio monitor, but hey, you never know what the future holds. Until someone achieves that goal of an amp-in-the-room sound, the next development area will be better effects and similar things.
Neural can already hold its own, thanks to their incredible User Interface on hardware and software.
We will see what happens.

The logical flaw in speaker modelling is that the sound is always, but always, coming out of the speakers you have, in the room you’re in. It’s just physics: no matter how refined your psycho-acoustic compensation, a 6” speaker cannot push air like a 12” one.

Yes, exactly the point I was making when I said “It seems physically impossible to produce a 12 inch guitar speaker sound from a 6 inch studio monitor”. However, perception is just electrical signals interpreted by our brain. So, if this happens in the near or far future, it would have to be a completely different ‘outside the box’ solution. How? Well, we don’t know yet.

Interesting!

So, do you think that will have an impact on live sound? Simulated room acoustics versus the ones in the physical room? Or would this be more for studio work only?

I don’t really know. Nothing has been invented for this yet.
However, back in the ’90s, when we mostly used the SansAmp and then the POD, I did not know we would ever simulate miked amps so well that we could not tell the difference. Look how far we’ve come in 35 years, and the pace of innovation is accelerating. We cannot know for sure, but we have reached a point where we cannot tell the difference between a real miked amp and a captured one. So it will either be small increments of added resolution and detail, or a real next-step innovation where people cannot tell the difference between a modeler and an amp in the room. The innovation could just be an FRFR speaker of some kind, but it will most likely be something we never thought of, or something invented by accident.


Once that happens, Leslie speaker simulations are going to blow our minds. lol


I would imagine it will have to be on the cab end of things, as in the speaker itself, to produce an authentic reproduction of any single speaker… given that they use different materials to make speakers (polypropylene, hemp, paper cones, etc.), and that the manufacturing and design of the chassis and other parts of the speaker play a part in its sound…

They would need a speaker that can replicate all these factors in one, and from what little I know about making speakers, that seems very unlikely to ever happen. Is it even possible to build a speaker with no colour or sound of its own :person_shrugging:…? If it is, maybe Google can tell me, haha. And you read about studio monitors claiming flat response, yet every studio monitor sounds different.

But if it is possible, then they would just need EQ curves applied to that speaker in the digital realm for every speaker type… and in the process, the microphone could be removed from the chain, needed only for recording purposes, which is already produced inside the modeller anyway… That’s the way I see it, and I’m probably not making any sense, really…

That’s both the best and worst part about the Internet - you don’t have to be smart or logical to get your opinion out there! lol

(kidding - no, you make perfect sense. Sometimes good ideas are borne out of wild theories too, so I try not to judge anyway :slight_smile: )


I tried to imagine the scenario myself. I was wondering if it would be possible to have, say, nine 4-inch speakers or even twelve 1-inch speakers, arranged in a grid, activated depending on the model of speaker they are simulating. If a 10-inch is simulated, the outer row and column of speakers would be deactivated. So you would somehow try to match the amount of moving air: a combination of speakers to simulate a bigger one.
Then I gave up on the idea and thought that it would probably be solved on the perception side instead, like sunglass-style augmented reality.
The perception side is easier, since you would only need a very good pair of in-ears.
If you have ever played a big stage where your amp is too far away to hear its actual voice, you know you have to rely on the miked amp’s sound coming from PA monitors or in-ear monitors.
So I think it might be easier to simulate amp-in-the-room sound with specially designed in-ears. But I do not know for sure, of course; it would take real research and development for that kind of innovation.
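For what it’s worth, the grid-of-small-speakers idea above roughly checks out on cone area alone. A quick back-of-the-envelope sketch (treating each driver as an ideal flat piston at its nominal diameter, which real cones with surrounds and dust caps are not):

```python
import math

def piston_area(diameter_in):
    """Area of an idealized flat circular piston, in square inches."""
    return math.pi * (diameter_in / 2) ** 2

# How many small drivers does it take to move as much air as one 12"?
big = piston_area(12)
for small_d in (1, 4, 6):
    n = big / piston_area(small_d)
    print(f'{small_d}" drivers needed to match one 12": {n:.0f}')
# Area scales with the square of diameter, so a 3 x 3 grid of
# 4" drivers (nine of them) matches the piston area of one 12".
```

Of course, equal piston area only matches volume displacement; a grid of small drivers still beams very differently at high frequencies than one big cone, so this is necessary but nowhere near sufficient for an “amp in the room” match.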

It’s currently hard for us to analyze how a speaker sounds without putting a mic in front of it; then we’re analyzing the mic and the room too. White-box speaker modeling, based on understanding the physics of how the magnets, voice coil, cone, etc. contribute to the tone, might be quite complex and difficult. But I suppose it might be doable someday.
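To illustrate the measurement problem: in linear-systems terms, what the mic records is the speaker’s response convolved with the room’s and the mic’s. A toy pure-Python sketch (the three-tap impulse responses are made-up numbers, purely illustrative, not real measurements):

```python
def convolve(x, h):
    """Discrete convolution: y[n] = sum over k of x[k] * h[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Toy impulse responses (illustrative numbers only).
speaker_ir = [1.0, 0.5, 0.25]   # what we actually want to isolate
room_ir    = [1.0, 0.0, 0.3]    # an early reflection the mic also hears
mic_ir     = [0.9, 0.1]         # mic coloration

# The capture process measures the cascade of all three, not the speaker alone:
measured = convolve(convolve(speaker_ir, room_ir), mic_ir)
```

The point: `measured` is not `speaker_ir`, which is exactly why anechoic or near-field-scanner measurements exist, to strip the room and mic terms out of the cascade.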


Well, not quite there yet but very close.


I think amp-in-the-room sound means I have to be in the room playing through that amp. Virtual reality and the like, or perceived in-the-room sound through specifically designed in-ears, takes the in-the-room sound back to having to put something in or on your ears, and I can already get a very close approximation of that with headphones and the right ambient reverb. I’d like to hear the FRFR, or whatever speaker enclosure you’re using, be able to produce the real sound, because you’d be playing through a speaker in a room just like a real amp. As it stands today, a 6” cannot sound like a 12”; even if the 6” can go down to 35 Hz just like the 12” can, you will feel and hear the 12” in a completely different way. With bone conduction there’s a possibility of making that 6” feel more like a 12” or 18”… To me it would still need to be a speaker that can cover every curve of every guitar speaker, so it would have to be the ultimate clean, uncoloured, flattest speaker…

You still need processing power on the device for all the effects and amp models. Cloud-based capture doesn’t really change that, especially since it took a while to make captures anyway, and not everyone uses them.

Amp-in-the-room only matters for medium-sized gigs or bedroom players.
For a big gig you’ll be going FOH. For a small gig these days you’ll probably be forced to go IEM and, again, FOH.
Neither case will involve amp-in-the-room sound for the audience, and that sound will probably feature your guitar being tweaked for the band mix by the sound engineer.

Jim - it is possible to isolate the behavior of a speaker cabinet from the microphone used to capture it - or the room it’s placed in. (See spinorama measurements performed either in an anechoic chamber or with a Klippel NFS.)

That allows us to characterize the impulse response at any angle, and thus tonality (FR) and directivity. Reproducing it is another matter: there are limited tools available for dynamically adjusting the directivity of a (reasonably sized) speaker, and each of them has consequences.

Low end control is relatively easy - you could get pretty close using a simple gradient array. (Think one speaker on the front, one speaker on the rear, similar to a cardioid subwoofer.)
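The gradient idea can be sketched numerically, under the idealization of two free-field point sources: the rear element sits a distance D behind the front one, is delayed by D/c, and is polarity-flipped, so the pair cancels behind the box and sums in front (the classic cardioid-subwoofer trick):

```python
import cmath, math

C = 343.0   # speed of sound (m/s)
D = 0.3     # front-to-rear source spacing (m)

def gradient_pair_mag(theta_deg, freq):
    """Far-field pressure magnitude of an idealized two-point gradient array.
    Front source at +D/2, rear source at -D/2; the rear one is delayed by
    D/C and polarity-inverted. theta = 0 is straight ahead."""
    k = 2 * math.pi * freq / C
    th = math.radians(theta_deg)
    front = cmath.exp(1j * k * (D / 2) * math.cos(th))
    delay = cmath.exp(-1j * k * D)   # phase of the rear element's D/C delay
    rear = -delay * cmath.exp(-1j * k * (D / 2) * math.cos(th))
    return abs(front + rear)

print(gradient_pair_mag(0, 100))     # front: the two sources reinforce
print(gradient_pair_mag(180, 100))   # rear: deep null (essentially zero)
```

As with a cardioid sub, the cancellation only holds over the bandwidth where the spacing stays acoustically small, which is why this trick is mainly a low-end tool.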

Controlling HF is trickier - but as @yavuz pointed out, beamsteering via a tightly packed array of HF drivers is one solution. This has been deployed to…varying…degrees of success in commercial products - see EAW Anya, Martin MLA, Holoplot. However, the associated mechanical and time-domain (tightness) penalties aren’t ideal for a guitar cab IMO.

  • Of note: a guitar cab’s limited bandwidth and predictable characteristics make this problem somewhat easier than a general-purpose beamsteering PA.

If I were to try this, I’d instead do an active-cardioid speaker: one front-firing coaxial 12” and one rear-firing 12”. With this sort of setup, you can somewhat control HF behavior by playing with the coaxial driver’s crossover: how much do you want the beaminess of the cone vs. the controlled dispersion of the compression driver?

  • It’s not a perfectly universal reproduction device. But it would probably get you in the ballpark for a litany of common single-driver guitar cabs.

Returning to the original topic, however - my primary concern with V2 captures is longevity. The cloud is merely somebody else’s computer, not magic: NDSP’s capture training service will not be around forever. Right now, I think it makes sense for them to keep that cloud-native: not everyone has a particularly powerful computer, and it reduces the risk of reverse-engineering. As the QC ages, hopefully they will allow that training to be performed on a user’s local PC instead.


I’m struggling with this cloud-service thing myself, because I wrote a jazz-teaching mobile app with video lessons that I recorded for students. I keep wondering how long I should keep that service going if I ever decide not to continue with it. Then again, everything changes so fast now; who knows what is going to happen 10 or 20 years from now. I also remember that the big tech companies do this: they shut down services with a few months’ notice.
Then again, from a consumer’s perspective: I bought myself a Nano Cortex and plan to get a Quad, so it is a big investment, over €2000 combined with the Nano. I could have gotten myself a guitar that would serve me a lifetime. However, tube amps are not the same. If I had only one tube amp and never carried it, it would be fine; if I had three or four and kept maintaining and servicing them, that is another story. So if my Quad and Nano serve me 10 to 15 years, I should be fine. I have a friend who still uses an original SansAmp on stage. He says he likes it. So that is another perspective.
The best deal for this kind of simulation purchase is Strymon pedals, I guess, until the effects world goes the 3D-delay route or something similar. You buy the pedal, it never upgrades; it is your software/hardware in a box and that is it. The consumer knows it, the production company knows it. The Strymon TimeLine is still considered one of the most advanced delays, so everyone is happy.
With modeling, it is different. Consumers expect constant updates, and fast: every day if possible, if not every week, every month, or at least once every few months. That is a bit too much load on the coders, I think, given the direction it is going.
Back to multi-speaker FRFR. I have one more idea about this.
Imagine having a modeler that could model both the room and your FRFR or PA speaker.
Just like having many models of tube amps: it would first model your tube amp, then model your FRFR (with its array of smaller speakers) inside the room, then tweak the model so it gets closer to the amp in the room.
So instead of one, it would take two models to get closer to the amp.
What do you guys think?
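In the simplest linear, frequency-domain view, this two-model idea amounts to measuring the playback chain and pre-applying its inverse, so that amp model × correction × playback chain lands back on the amp-in-the-room target. A toy per-band sketch (the three band gains are made-up numbers; real responses are complex-valued, position-dependent, and only partly invertible):

```python
# Per-band magnitude gains (toy numbers for three bands: low / mid / high).
amp_in_room  = [1.2, 1.0, 0.7]   # target: the amp heard in the room
amp_model    = [1.2, 1.0, 0.7]   # assume the amp model already nails this
frfr_in_room = [0.8, 1.0, 1.3]   # measured playback speaker + room response

# Second model stage: a pre-EQ that cancels the playback chain's coloration.
correction = [t / (m * f)
              for t, m, f in zip(amp_in_room, amp_model, frfr_in_room)]

# What actually reaches the listener: model, then correction, then speaker+room.
heard = [m * c * f for m, c, f in zip(amp_model, correction, frfr_in_room)]
# 'heard' now matches 'amp_in_room' band for band.
```

The big caveat: a single magnitude EQ like this only corrects the on-axis response at one listening position. Directivity and the room’s spatial behavior, arguably the “feel” part of amp-in-the-room, don’t invert this way, which is why the thread keeps circling back to the physics of the speaker itself.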

Sorry, I’m still confused about who the sound in the room is for.
Anyone who isn’t in the room, standing where the guitarist is, hears a different sound. I think sound in the room is more of a feel thing, usually down to volume: I turn my FR-12 all the way up and it sounds much better.

For everyone… If somebody somehow managed to get a sound from a modeler or an FRFR that was indistinguishable from the original amp in the same room, everyone would be talking about it and buying it.

So when you have your amp-in-the-room sound, how will you share that sound with an audience/others?
I mean, we can’t do it by recording a real amp in a room.