I think I know what’s going on - or I have a guess:
Processing of blocks in rows 1 and 2 is done by one core, and processing of blocks in rows 3 and 4 is done by another. When data is passed from one core to the other it has to be buffered, and that buffering causes latency.
When there are no blocks for the core assigned to rows 3 and 4 to process, the code is optimised to simply skip the transfer of data to that core. But in the code a split is apparently also just a block, so adding a split to row 3 causes the data to be sent to and processed by the 2nd core, with the added latency as a result.
But I can’t understand why utilising a second core adds 2.7ms of latency when the total round-trip latency is only 1.8ms with a single core in use.
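To make the hypothesis concrete, here's a quick sketch of the model I'm assuming: a fixed base round-trip latency, plus a fixed buffering penalty every time audio has to hop to the other core. This is pure speculation on my part, not anything confirmed about the actual firmware; the numbers are just my measurements from this thread.

```python
# Toy model of my hypothesis: total latency = base I/O latency
# plus one fixed buffering penalty per inter-core transfer.
# The per-hop penalty is speculative; the constants are my measurements.

BASE_MS = 1.8      # measured round trip with only one core in use
CORE_HOP_MS = 2.7  # extra latency observed once the 2nd core (rows 3/4) engages

def predicted_latency(core_hops: int) -> float:
    """Predicted round-trip latency under the per-hop buffering model."""
    return BASE_MS + core_hops * CORE_HOP_MS

print(predicted_latency(0))  # single core: 1.8 ms
print(predicted_latency(1))  # rows 1 and 3 in use: 4.5 ms
```

The part that bothers me is that the per-hop cost (2.7ms) is larger than the entire single-core round trip (1.8ms), which a simple "one extra buffer per transfer" model wouldn't predict.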
And the plot thickens …:
I just did a measurement with a gain block on row 1, the output of row 1 set to Send 1, the input of row 3 set to Ret 1, a gain block on row 3, and the output of row 3 set to Out 1/2. I expected the result to be the 4.5ms we’ve seen with rows 1 and 3 in use plus the 1.8ms I measured earlier that an effects loop adds to the total latency, but my measured latency is 9ms! That makes absolutely no sense to me!
I did another measurement with the same setup as above but using an effects loop block on row 1, and that gives me the expected 4.5ms + 1.8ms = 6.3ms of latency.
I then moved the effects loop block to row 3 and that gives a total latency of 9ms! That can’t be right?!
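For anyone following along, here's the tally so far as a quick sanity-check script. The "expected" values simply add up the individually measured contributions (1.8ms base, +2.7ms for the 2nd core, +1.8ms per effects loop); the measured values are my actual readings from the tests above.

```python
# Each entry: (description, expected_ms, measured_ms).
# Expected = 1.8 base + 2.7 if the 2nd core is used + 1.8 per effects loop.
tests = [
    ("gain on row 1 only (single core)",   1.8, 1.8),
    ("gain blocks on rows 1 and 3",        4.5, 4.5),
    ("send/return between rows 1 and 3",   6.3, 9.0),
    ("effects loop block on row 1",        6.3, 6.3),
    ("effects loop block on row 3",        6.3, 9.0),
]

for desc, expected, measured in tests:
    gap = measured - expected
    flag = "" if abs(gap) < 0.05 else f"  <-- {gap:+.1f} ms unexplained"
    print(f"{desc:36s} expected {expected:.1f} ms, measured {measured:.1f} ms{flag}")
```

What jumps out is that both anomalous cases are exactly 2.7ms over the expected value, i.e. one extra "core hop" worth of latency, and both involve routing audio off or onto row 3.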
@thomasotto could you please replicate these tests and let me know if you’re getting the same results?