Why does GLONASS consume more GPU processing power?

When running a self test on the GPU (Test GPU Speed), it is easy to see that many more satellites could be simulated without GLONASS. Once GLONASS is enabled, the self-test result drops drastically. Where can I find more information on this?

When you simulate a signal like GPS L1 C/A alone, Skydel uses a sampling rate of 12.5 MSps. If you simulate GLONASS G1 alone, Skydel uses a similar sampling rate and the same GPU resources. But if you simulate both signals on the same RF output, Skydel uses a sampling rate of 50 MSps, because the two signals have different center frequencies and only a higher sampling rate allows both to fit in the same RF band. In that case, each signal must be simulated at a higher sampling rate than when it is simulated alone, hence the increase in GPU resources required to simulate both at the same time. With a modern GPU, you can simulate all constellations on all frequencies for all satellites in view, so this constraint is rarely an issue.
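To make the bandwidth argument concrete, here is a small illustrative calculation (not Skydel's actual algorithm; the candidate rates and approximate signal bandwidths are assumptions for the sketch). GPS L1 is centered at 1575.42 MHz, while GLONASS G1 is an FDMA band spread around 1602 MHz, so covering both on one output requires a band roughly 30 MHz wide:

```python
# Illustrative arithmetic: why combining GPS L1 C/A and GLONASS G1 on one
# RF output forces a higher sampling rate. Bandwidth figures are approximate
# and the candidate-rate list is a hypothetical example, not Skydel's code.

GPS_L1 = 1575.42e6           # GPS L1 center frequency (Hz)
GPS_L1_BW = 2.046e6          # approx. main-lobe bandwidth of the C/A code

# GLONASS G1 is FDMA: f = 1602 MHz + k * 562.5 kHz, with k = -7 .. +6
GLO_G1_LOW = 1602e6 - 7 * 562.5e3
GLO_G1_HIGH = 1602e6 + 6 * 562.5e3
GLO_CH_BW = 1.022e6          # approx. per-channel bandwidth

def required_rate(edges, rates=(12.5e6, 25e6, 50e6)):
    """Smallest candidate sampling rate whose band covers all signal edges
    (complex I/Q sampling at R Sps covers roughly R Hz of RF bandwidth)."""
    span = max(edges) - min(edges)
    for rate in rates:
        if rate >= span:
            return rate
    raise ValueError("no candidate rate is wide enough")

gps_edges = (GPS_L1 - GPS_L1_BW / 2, GPS_L1 + GPS_L1_BW / 2)
glo_edges = (GLO_G1_LOW - GLO_CH_BW / 2, GLO_G1_HIGH + GLO_CH_BW / 2)

print(required_rate(gps_edges) / 1e6)               # GPS alone -> 12.5
print(required_rate(glo_edges) / 1e6)               # GLONASS alone -> 12.5
print(required_rate(gps_edges + glo_edges) / 1e6)   # combined -> 50.0
```

Each band alone is only a few MHz wide and fits within 12.5 MSps, but the combined span from the lower edge of GPS L1 to the upper edge of GLONASS G1 is about 31 MHz, which pushes the required rate to 50 MSps, roughly a fourfold increase in samples the GPU must generate per second.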