Low Latency Mode (and power mode?) for QC

Try to limit the signal chain to rows 1-2 only.
I ran a test and found that once I enable rows 3-4, latency increases by at least 2 ms.
Neural's official explanation is that rows 3-4 activate the CPU's second core.
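If that explanation is right, the extra ~2 ms would be consistent with roughly one extra block of buffering when audio is handed off between cores. A rough illustration (the 96-sample hand-off block is just an assumption, not an official spec):

```python
# Hypothetical: latency added by handing audio between DSP cores,
# assuming the hand-off is buffered by one internal block.
SAMPLE_RATE = 48_000          # the QC runs at 48 kHz
TRANSFER_BLOCK = 96           # assumed inter-core block size (not official)

extra_ms = TRANSFER_BLOCK / SAMPLE_RATE * 1000
print(f"extra latency per core hop: {extra_ms:.1f} ms")  # -> 2.0 ms
```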

1 Like

You can use REAPER and ReaInsert to measure it if you're really curious.
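If you'd rather not set up ReaInsert, here's a bare-bones loopback measurement sketch in Python (sounddevice + numpy), assuming your interface's output is cabled through the QC and back into an input; ReaInsert's ping works on the same idea:

```python
# Minimal round-trip latency measurement: play a click through the loop,
# record it back, and find where it lands in the recording.
import numpy as np
import sounddevice as sd

FS = 48_000
click = np.zeros(FS, dtype=np.float32)   # 1 second of silence...
click[0] = 1.0                           # ...with a single-sample impulse at the start

recording = sd.playrec(click, samplerate=FS, channels=1)
sd.wait()                                # block until playback/recording finish

peak = int(np.argmax(np.abs(recording))) # sample index where the click came back
print(f"round-trip latency: {peak / FS * 1000:.2f} ms")
```

Note that this includes the interface's own AD/DA latency, so measure once with a straight cable and subtract that to isolate the QC.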

Any thoughts on this analysis? This is an interesting conversation.

I did not see more than 13 ms of latency, so I'm a bit surprised about the 23 ms in the video.
I also think I would feel 23 ms if any of my presets had that kind of latency.

1 Like

Also, the 24 ms was with 5 captures… I'm not sure I would ever use 5 captures simultaneously.

Me too. Assuming the results are correct, 23 ms is absolutely in the realm of creating a softer, less responsive, more spongy feel. Neural needs to work on reducing this in future updates if they haven't already. This video is about 6 months old, so the latency numbers might conceivably be better since the latest update.

2 Likes

Ok, so I finally got to do some testing.
The preset in question uses 40% CPU, which is probably about as much as I'll ever need. I realize that CPU usage isn't the only factor affecting latency, but I'm still using it as an indicator.

The result:
Everything bypassed (including amp and cab): 7.2 ms

Scene 1 (5 blocks engaged):
7.8 ms

So at 40% we're already at a point where the latency is nearly 8 ms, which means that the remaining 60% of the QC's capacity is as good as unusable for people like me who are a bit sensitive to latency.

Unless the QC functions totally unlike a computer (which I doubt), there should be a way to reduce the buffer size, so that more than half of that processing power can be put to better use… Someone more techy than me is welcome to tell me why I am wrong :grin:
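For what it's worth, here's the back-of-the-envelope math for how buffer size alone maps to latency at 48 kHz (illustrative only; these are not Neural's actual internals):

```python
# One buffer of audio must be collected before processing can start,
# so each buffer in the path adds buffer_size / sample_rate of delay.
FS = 48_000
for buffer_size in (32, 64, 128, 256, 512):
    print(f"{buffer_size:4d} samples -> {buffer_size / FS * 1000:5.2f} ms per buffer")
# e.g. 128 samples ~ 2.67 ms; a separate input and output buffer would double that.
```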

So can I reduce the latency of that preset?
Now, by removing three blocks (which in itself takes it down to 6.58 ms) and moving everything away from rows 3/4, I was able to get down to 4.145 ms, which is much more acceptable. So yes, one could compromise and use workarounds, but the selling point of the unit is its power and flexibility. One really shouldn't have to…

And that concludes today's prayer :pray:

3 Likes

I like that the OP is thinking out of the box while recognizing that "making the QC more efficient" is the way to go. While @MortenRL does propose an innovative workaround, grabbing half the processing power to solve the latency issue may not be a good way to approach this. Offering this proposed "efficiency" mode might have the unintended consequence of encouraging the writing, or persistence, of inefficient code that has not been properly optimized, and of slowing the delivery of better code. To me this is like saying, instead of developing a more efficient mode of transportation, let's just keep everything parked.

Frankly, the last thing I want is to give up a significant portion of processing power, which might mean way fewer blocks, or having to exclude more complex and resource-intensive amps or effects from the signal chain. IMO this is a terrible way to go about things. The code needs to be optimized going forward so that latency is lowered while the ability to have long and/or complex signal chains stays intact! All the other major modeling companies have managed to pull it off. If, after optimization is delivered, they want to provide this option for users who value super-low latency over flexibility in their signal chain, I think that would be a grand idea.

1 Like

I agree that optimizing is the way to go!

My point is that until Neural gets there, this would be a nice temporary solution (emphasis on temporary). And as soon as they are there, it would take on new life as just a nifty feature.

Considering the "most powerful" slogan, I think being able to achieve super low numbers would be great for marketing as well…

2 Likes

Ok, so after some back and forth with support, this is their reply after sending my results:

"When you're using the QC as a modeler, the round trip latency is usually between 2 - 5 ms, which is unnoticeable by the human ear. Even 7.8 ms is unnoticeable.
The latency inherent to the QC is the value mentioned above. Any additional latency will come from external sources."

To which I cannot even…
For reference, this is 7.8 ms of delay. It's not an "echo", of course, but there is no way that's "unnoticeable". If you're trying to be right on the beat but you're that much behind or ahead, it makes a difference. Even at 5 ms the transients are perceivable as separate; only down around 4 ms do they start to become indistinguishable.

Also, blaming external sources is in such poor taste. I've made it clear that there are no external sources (guitar → cable → QC → passive in-ears).

I'm not expecting anyone to fix the issue this instant, but for support to just flat-out deny that it's a potential issue… well, it's disappointing. Hoping to see some unexpected improvement in 2.0…

3 Likes

@MortenRL it’s very subjective, so I guess it’s difficult to find a common opinion.

7 ms equals around 2 m of distance from a speaker. I often have this when playing live, and I have no problem playing like that.
When I use headphones, I find the same latency irritating, for whatever psychological reason.
I imagine that people more sensitive than me will have even more problems.
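As a sanity check on that comparison: sound travels at roughly 343 m/s, so 0.007 s × 343 m/s ≈ 2.4 m, which is in the same ballpark as the figure above.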

I don't agree with Neural's statement that the maximum is 5 ms; I have measured much more and, like you, I can safely exclude external influences.

So, for me it’s not an issue, but I understand if it is one for you and I believe it could be optimized.

2 Likes

Just for the record, latency and DSP load are not directly proportional. You can use a lot of DSP and still have lower latency, just like you can get higher latency with less DSP. It's more about specific items and how you've routed the grid than it is about total power. I noticed a custom IR that was introducing close to 20 ms, or even more, all by itself, for example.

The average I see is around 5-12 ms for most presets I've built (many with a lot of row splitting). Each FX loop adds 2-3 ms by itself, even when disabled, so that's another thing to keep in mind.
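A ~20 ms hit from a single IR would make sense if it's convolved in large FFT blocks, since block convolution can't output anything until a whole block has been collected. Rough numbers (the block sizes are assumptions, nothing documented):

```python
# FFT (overlap-add) convolution must collect a full block before it can
# output anything, so the block length sets a latency floor.
FS = 48_000
for fft_block in (256, 512, 1024):
    print(f"{fft_block:4d}-sample block -> {fft_block / FS * 1000:5.1f} ms minimum")
# 1024 samples ~ 21.3 ms, i.e. in the ballpark of the ~20 ms observed.
```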

1 Like

Yup, I understand that it’s not directly proportional.

For example, when testing I discovered that removing one of three different delays did very little to the latency. Removing two helped a little. But when you remove all three, it suddenly drops considerably. This seemed to be the case with other "groups" of FX as well.

One thing I've noticed: you probably need to set the mix level on those to zero or something to measure that, or bypass them if that doesn't affect latency. Otherwise you're probably confusing the measurement instrumentation with the actual delay effect? Not sure if true, just a thought.
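If you do want to measure with the delay still in the chain, the correlation trick is to take the earliest peak; the wet signal just shows up as a later one. A quick offline illustration in numpy (made-up numbers):

```python
import numpy as np

FS = 48_000
dry_latency = int(0.008 * FS)    # assume 8 ms of true processing latency
echo_time = int(0.300 * FS)      # a 300 ms delay effect mixed in

impulse = np.zeros(FS); impulse[0] = 1.0
output = np.zeros(FS)
output[dry_latency] = 1.0                      # dry path
output[dry_latency + echo_time] = 0.5          # first echo repeat

xcorr = np.correlate(output, impulse, mode="full")[len(impulse) - 1:]
peaks = np.flatnonzero(xcorr > 0.25)
print([p / FS * 1000 for p in peaks])          # ~[8.0, 308.0] ms: take the first
```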

Does an FX Send block (that is never receiving anything) add latency?

I measured 2.54 ms of latency on an FX Loop with a patch cable between Send & Return.

It's the DA conversion on the Send and the AD conversion on the Return that add the latency.
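For what it's worth, ~2.5 ms is plausible for one extra DA + AD pass plus a bit of buffering. A purely hypothetical breakdown (none of these figures are measured or published):

```python
# Purely hypothetical split of the ~2.54 ms measured on the FX loop.
contributions_ms = {
    "DA conversion (Send)": 0.6,      # assumed typical converter group delay
    "AD conversion (Return)": 0.6,    # assumed
    "extra internal buffering": 1.3,  # assumed remainder
}
print(sum(contributions_ms.values()), "ms total, vs ~2.54 ms measured")
```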

No, it only adds latency if you use the Return.

When I first got my QC I did several latency tests against other devices. Now, this was on one of the earliest operating systems, so I can't say for sure; maybe things have changed since then.

I noticed that the more captures I had in line, the more latency I measured. So I started measuring the latency of different blocks and found some to add slightly more than others. I decided to make sure my patches were as tidy as they could be to combat any latency I might be feeling: get rid of any unused, muted blocks that might be hanging around. Even if you're not using them, they add up.

I am curious now, I’ll have to go back and do another test with the new OS to see if anything has changed.

I measured the QC empty at 4.6 ms (according to Leo Gibson on YouTube).
Add a wireless instrument and wireless monitoring, and… you start to miss the Boss GT-1000.
But I was aware of that…
My real problem is the ASIO drivers… 2.0 is a little better, but latency is still unacceptable to me when recording processed audio from a MIDI keyboard at the minimum buffer size required to avoid artifacts.
I should keep my interface just for that, and that's a shame!
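For reference, the round trip when monitoring through the computer is roughly input buffer + output buffer + driver/converter overhead. Illustrative numbers only (the overhead figure is a guess):

```python
# Round-trip monitoring latency through a DAW, very roughly:
# input buffer + output buffer + driver/converter overhead.
FS = 48_000
OVERHEAD_MS = 1.5                       # assumed driver + converter overhead
for buf in (64, 128, 256, 512):
    rt = (2 * buf) / FS * 1000 + OVERHEAD_MS
    print(f"{buf:3d}-sample ASIO buffer -> ~{rt:.1f} ms round trip")
```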

I’ve done some experiments trying to identify the latency of specific blocks and found it quite challenging because it appears as if QC bumps up the latency in certain increments that don’t correspond directly to what is required for an effect, but rather for a set of effects if that makes sense.

For instance, sometimes I would add a block and observe zero increase in latency. However, if I first removed some other block (which would cause a decrease in latency) and then added a block that previously appeared to be "latency-free", the overall latency would increase. In one particularly weird case it looked like I could get slightly different latencies for the exact same preset depending on the order in which I constructed it.
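One hypothesis that would fit this behaviour: the unit schedules processing in fixed-size internal blocks, so the reported latency only grows when the total work spills over into the next block. A sketch of that idea (block size and per-effect figures are invented):

```python
import math

FS = 48_000
BLOCK = 48          # assumed internal scheduling block, in samples (1 ms at 48 kHz)

def preset_latency_ms(per_effect_samples):
    """Latency rounds up to the next whole block, so adding a small effect
    often changes nothing until the total crosses a block boundary."""
    total = sum(per_effect_samples)
    return math.ceil(total / BLOCK) * BLOCK / FS * 1000

print(preset_latency_ms([30, 10]))       # 1.0 ms
print(preset_latency_ms([30, 10, 5]))    # still 1.0 ms: the extra effect looks "free"
print(preset_latency_ms([30, 10, 12]))   # 2.0 ms: crossed a block boundary
```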

I was doing these tests on a single line in the grid to make sure the results weren't affected by routing through other DSP cores, etc. I also measured multiple times to make sure the results were stable.

From a technical standpoint, it kind of doesn't make sense to me how the latency could depend on DSP load at all. At the end of the day, the device has to process each buffer within a fixed time budget no longer than the buffer itself, otherwise uninterrupted sound wouldn't be possible. Unless the QC actually increases the internal buffer size for more complex presets…
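To illustrate that last point: the hard real-time constraint is that each buffer must be processed within its own duration, so one way a device could keep heavier presets glitch-free is indeed to switch to a larger internal buffer, trading latency for headroom. Purely illustrative sketch, not a claim about what the QC actually does:

```python
# Each audio buffer must be fully processed before the next one is due,
# so the buffer length is both the DSP time budget and the added latency.
FS = 48_000
for buf in (32, 64, 128, 256):
    ms = buf / FS * 1000
    print(f"{buf:3d}-sample buffer: {ms:.2f} ms of DSP budget, {ms:.2f} ms added latency")
```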