Let's say I have a Linux machine with 10 CPUs (1000%) and a Docker container whose process, at its peak, consumes 500% of the CPU and takes 10 hours to complete (assume it runs at peak consumption most of the time). In other words, the process runs with no limitations.
Now I've set up a resource restriction on that container with `--cpus=2.5`, so it will be throttled at some point. I was expecting the process to take about 20 hours under the limit, but that's clearly not what happens (it takes far longer).
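For concreteness, this is the back-of-the-envelope arithmetic behind my 20-hour expectation (just my naive assumption that total CPU-time is conserved, not a claim about how the scheduler actually behaves):

```python
# Naive linear-scaling model: the job needs a fixed amount of CPU-time,
# so halving the available CPU share should double the wall-clock time.
peak_cpus = 5.0         # 500% observed peak usage
unlimited_hours = 10.0  # wall-clock time with no limit
cpu_hours = peak_cpus * unlimited_hours  # ~50 CPU-hours of total work

limit_cpus = 2.5        # --cpus=2.5
expected_hours = cpu_hours / limit_cpus
print(expected_hours)   # → 20.0
```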
The question is: why isn't the throttling of the process linear? Are there inefficiencies related to rescheduling the process, given that at startup Docker sees a system with 10 CPUs? Could this be related to CPU registers in any way?
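For reference, my understanding from the Docker documentation is that `--cpus` is implemented via the kernel's CFS bandwidth controller: Docker sets `cpu.cfs_quota_us` to the `--cpus` value multiplied by `cpu.cfs_period_us` (100 ms by default), and once the container's threads have consumed that much CPU time within a period, all of them are throttled until the period refills. A sketch of that mapping:

```python
# How --cpus maps to the cgroup CFS bandwidth settings
# (default period is 100 ms; Docker derives the quota from it).
cfs_period_us = 100_000  # default CFS period: 100 ms
cpus_flag = 2.5          # docker run --cpus=2.5
cfs_quota_us = int(cpus_flag * cfs_period_us)
print(cfs_quota_us)      # → 250000
# The container may use up to 250 ms of CPU time (summed over all
# cores) per 100 ms period; after that, every thread is throttled
# until the next period begins.
```

That hard stop-and-wait behavior within each period, rather than a smooth 2.5-CPU slowdown, is what I suspect is behind the non-linear runtime, but I'd like confirmation.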