Neural on mono vs stereo tracks

Hello everyone, I feel like such an idiot for asking this but I’m really struggling to figure this out.

I am using Neural DSP plugins (I have tried this with each plugin; the results are the same) in Studio One. The gain on my interface is set appropriately, my DI is clean and properly leveled, and I am monitoring through good mixing headphones.

Firstly, if I am recording a tone that I expect to be in stereo (i.e. a tone with reverb, a ping-pong delay, etc.), I put Neural on a stereo track, set the plugin to mono mode, and the output is indeed stereo. Sounds fantastic. After printing the recording, I get a stereo track with two different sides, since the reverb turns the mono signal into a stereo one.

My problem arises when recording mono rhythm tones that I later hard-pan to either side. I have found two approaches to doing this.

Approach 1)
This is what I understand to be the correct way. I have a mono track with Neural on it. Neural is set to mono mode, on a rhythm tone (let's say, straight down the middle using Gojira, every setting default). Not happy with the tone: it sounds extremely unclear and unfocused, and also very noisy; any slight noise I make with my fingers is amplified much more than I would normally expect it to be. After printing the recording, I of course get a mono track.

Approach 2)
This is what I understand to be the incorrect way. I put Neural on a STEREO track. Neural is in mono mode. Gojira, default settings, just like before. Sounds fantastic. The tone is much more focused, clearer, and MUCH less noisy, just like the YouTube demos I've watched. The output shows as stereo, but the two sides are identical, so it sounds like mono. After printing the recording, I get a stereo track with two identical sides. (It's worth noting that the standalone app sounds like approach 2.)

It almost sounds like approach 1 takes the two identical sides of approach 2 and sums them together or something, or at least that’s my working theory.

Is there a way to get results that sound like approach 2 while recording on a mono track? Why do the two methods sound so different?
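If the summing theory is right, the level difference is easy to sketch. This is a hypothetical illustration of a straight L+R fold-down, not Studio One's actual mono-summing code (some DAWs pad the sum by -3 or -6 dB):

```python
import math

# A mono-input plugin instance on a stereo track outputs two
# identical sides (as observed in Approach 2).
left = [0.5, -0.25, 0.1]   # made-up sample values
right = list(left)          # identical copy of the left side

# If the DAW folds that output down to mono by simply adding
# the two sides, every sample doubles in amplitude.
summed = [l + r for l, r in zip(left, right)]

# Doubling a fully correlated signal is a +6 dB boost, which
# would make the tone (and any finger noise) noticeably hotter.
gain_db = 20 * math.log10(abs(summed[0]) / abs(left[0]))
print(round(gain_db, 2))  # → 6.02
```

That extra gain alone could account for the "much noisier" result on the mono track, since everything hitting the amp sim afterward is driven harder.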


Hey there, yeah, I know exactly what you mean.
I've got a bunch of their plugins and I'm trying to record mono, but something feels really off.
I don't even know if we're supposed to record on mono tracks or stereo tracks.
It feels to me like on a mono track the quality of the sound just really isn't the same.
It would be nice to have more guidance on how to record with these plugins for noobs like me.
Did you ever find a fix for your problems?
If so, let me know!

Another cool thing to do, if you have your guitar coming in as a stereo signal from a pedal board: I send both L/R channels from the pedal board through two of my interface's input channels (say 7 and 8 coming in), but I create two mono tracks, set the input of one to channel 7 and the input of the other to channel 8. Now I have full control over each input: I leave the pans at "C", apply different FX to each, and it creates the illusion of space. For example, one channel goes through one delay and the other through a different delay, but the two MUST be set to different delay times.

One thing I found out in 25 years is that prime numbers have no factors other than 1 and themselves (7 is divisible only by 7 and 1, ergo prime). If you take two delays and always set them to different primes, they will distinctly make the L/R outputs sound like two different recorded guitars. My example is to use something like 19 ms on one channel and 23 ms on the other, adjusting to taste. Usable primes are 7, 11, 13, 17, 19, 23, 29, 31, 37, 41 — any combination of unique primes, one per channel. Two different primes won't line up until their product: using, say, 13 ms against 29 ms, the first time the two delays coincide is 13 × 29 = 377 ms, then again at 754 ms, then at 1131 ms. If you were to use non-prime delay times, for example 12 ms and 20 ms, those repeats line up every 60 ms (their least common multiple), and every time they meet they create artifacts. Bottom line: always use PRIME numbers of milliseconds on your delays, and there is much less chance of one delay syncing with another. Any time the repeats of one delay line stack against the other, those coincidences become audible as extra coloration in your sound.
With non-primes, the delays sync up far more often, creating more of those unwanted interactions in your mix.
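The sync-time arithmetic above is just a least common multiple. A quick sketch (plain Python, nothing DAW-specific; the function name is my own):

```python
from math import gcd

def first_sync_ms(a: int, b: int) -> int:
    """First time (ms) at which the repeats of two delay lines coincide."""
    return a * b // gcd(a, b)  # least common multiple

# Coprime pair from the post: the repeats only meet every 377 ms.
print(first_sync_ms(13, 29))  # → 377

# Non-prime pair: 12 ms and 20 ms share factors, so the repeats
# line up much sooner and much more often.
print(first_sync_ms(12, 20))  # → 60
```

Since two distinct primes are always coprime, their first coincidence is simply their product, which is why prime delay times keep the two sides sounding independent for so long.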

In summary, when you use delays, stick to a prime number of milliseconds and never set two delays to the same 'ms' — identical times phase-align and comb-filter against each other, creating unwanted frequency interactions with your other channels in the mix.

It's also worth reading about the Nyquist frequency — a golden rule every mixing and mastering engineer should know.