I know this is far-fetched. But I think it’d be awesome if the quad cortex one day could listen to a guitar solo from any source and automatically generate a patch recreating that guitar tone using AI technology.
I think AI has a ways to go to pull that off. I’d be happy to have more MIDI capability and faster preset changes…don’t distract them!
With a detailed prompt, Chat GPT will give directions on how to build a preset accordingly.
Or maybe we could read the manual and figure things out by ourselves.
Thanks for your kind words.
Didn’t want to be rude in my comment, which was about the Chat GPT prompt comment btw.
AI will ruin us all and make us become idiots. That’s just my opinion.
Should I say “Yeah Chat GPT rules, why bother learning how to read and write”,
NOT to be insulted in comments by someone I never met in person and who doesn’t know me?
I could have been less sarcastic, I must admit.
This will definitely ruin us, from the free Chat GPT site:
Prompt: Build a preset for the Neural Quad Cortex that will emulate the David Gilmour lead sound for the guitar solo on Pink Floyd’s “Comfortably Numb”
Result:
To recreate David Gilmour’s lead tone for the guitar solo in “Comfortably Numb” on the Neural DSP Quad Cortex, we’ll aim for a smooth, sustaining, slightly compressed lead sound with delay and reverb, modeled after his Hiwatt + Big Muff + echo/delay rig.
Quad Cortex Preset – “Comfortably Numb Lead”
Block Chain Layout
Input → Compressor → Overdrive/Fuzz → Amp → Cab → Delay → Reverb → Output
1. Input
- Noise Gate: Light setting
- Threshold: -50 dB
- Release: 100 ms
2. Compressor (Keeley Compressor or Dyna Comp style)
- Type: Compressor
- Threshold: -40 dB
- Ratio: 4:1
- Attack: 10 ms
- Release: 100 ms
- Mix: 80%
- Level: Match volume
This helps smooth Gilmour’s dynamic sustain.
3. Overdrive / Fuzz (Big Muff style)
- Type: Pi Fuzz (select a Big Muff Pi model or similar fuzz)
- Gain: 5.5–6
- Tone: 3.5–4
- Volume: To match level post-drive
Tip: If the fuzz gets too fizzy, roll off the tone slightly or stack it with a Tube Screamer-style block before it to tighten low end.
4. Amp Block (Hiwatt DR103)
- Model: Hiwatt DR103
- Gain: 5.5–6
- Bass: 4
- Mid: 6.5
- Treble: 6
- Presence: 5.5
- Master: ~6
- Sag: Medium
Gilmour used a Hiwatt with the power amp pushed and low preamp gain, getting his dirt from pedals.
5. Cab Block
- Impulse Response (IR): Celestion-style Fane speakers or Hiwatt 4x12
- Mic 1: SM57 off-axis (distance: 1")
- Mic 2: Ribbon R121 (distance: 3"–6")
- Pan: Blend 50/50
- High Cut: 6.5 kHz
- Low Cut: 90 Hz
6. Delay (Binson Echorec-style or analog delay)
- Type: Tape Echo / Analog Delay
- Time: 450–480 ms
- Feedback: 35%
- Mix: 35–40%
- Modulation: slight (1.5–2 depth)
- Ducking: Optional (light)
7. Reverb (Plate or Hall)
- Type: Plate Reverb
- Decay: 4.5–5.5 s
- Pre-delay: 80 ms
- Mix: 25–30%
- High Cut: 5–6 kHz
- Low Cut: 150 Hz
Optional Enhancements
- EQ Block after the amp:
- Slight boost at 800 Hz
- High shelf roll-off above 7 kHz if fizz appears
- Volume Pedal before Delay:
- For swelling into notes
- Scene Switching:
- Scene A: Rhythm (amp only, no fuzz)
- Scene B: Solo (fuzz+delay+verb)
Guitar Suggestions
- Stratocaster with neck or middle pickup
- Tone knob rolled back slightly (around 7–8)
Would you like me to export this as a Quad Cortex preset file or give you a diagram image of the block chain layout?
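The block chain above is really just structured data, which is why a "generate a preset from a prompt" feature is plausible. As a minimal sketch, here is the same chain encoded as plain Python data; the block and parameter names are illustrative only, not actual Quad Cortex identifiers or file-format fields:

```python
# Hypothetical encoding of the "Comfortably Numb Lead" chain described above.
# Names and fields are illustrative; the real Quad Cortex preset format differs.
signal_chain = [
    {"block": "Gate",       "threshold_db": -50, "release_ms": 100},
    {"block": "Compressor", "ratio": "4:1", "attack_ms": 10, "mix_pct": 80},
    {"block": "Fuzz",       "model": "Big Muff Pi style", "gain": 5.75},
    {"block": "Amp",        "model": "Hiwatt DR103", "gain": 5.75, "master": 6},
    {"block": "Cab",        "ir": "Hiwatt 4x12", "high_cut_hz": 6500, "low_cut_hz": 90},
    {"block": "Delay",      "time_ms": 465, "feedback_pct": 35, "mix_pct": 37},
    {"block": "Reverb",     "type": "Plate", "decay_s": 5.0, "mix_pct": 27},
]

# Print the chain in the same arrow notation used in the layout above.
print(" → ".join(b["block"] for b in signal_chain))
```

An AI service would only need to emit a structure like this and hand it to an existing preset exporter, which is why a cloud-side feature (rather than new firmware) seems feasible.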
I’ve also tried ChatGPT for tone analysing and it’s totally crazy. And it’s getting better and better in no time. I’m pretty sure in maybe two years we’ll have amp sims we can only dream of today.
The Spark 2 from Positive Grid does something similar. You type in a description of the sound you want and it puts together a chain of pedals and amps that it thinks fits. It would be cool if this was a feature on Cortex Cloud and you could just download the preset. They wouldn’t even need to make it a new feature in the firmware.
Hi thanks for your thoughts…
AI scares the sh## outta me because it does so many things better than me.
I’ve had more fun learning from mistakes that just happen to fit perfectly in a mix.
If everyone sounds perfect…like, say, Mozart, we would never have had “Louie Louie”…I rock out to both!!!
…I have to agree with an earlier comment: “RTFM”…
I think AI generated backing tracks would be a better proposition
I really believe tone is in your hands… And the endless search for someone else’s tone has always baffled me… A tone for the style is more where I see it… Build your own tones, be you, and you will find your best sounding and playing self there… Never mind ChatGPT AI, be your own hero…
Great! Can AI make my hands sound like Robben Ford’s or Matteo Mancuso’s?
I disagree. While it would be “cool”, is it really ethical? The guitarists on the recordings that we like CRAFTED their tones by what they liked, or just what was available to them, and that’s part of the artistry. If they had AI to just capture and recreate the tones of other guitarists they liked, would they ever have become unique in their own way of creating music and crafting their sound? I really think people need to step back from AI and think about how it is hindering creativity. Take the discovery and learning process away, and it’s just a bunch of copy and paste… Just my 2 cents.