On Mon, 22 Sep 2003, E23 wrote:
I've been following the discussions on the
problems with the web
servers. It seems that there are too many concurrent processes for the
limited hardware we are currently using. I believe I read numbers
somewhere of a steady load of 20+ processes in the run queue on one of
the servers. Of course there will be a lot of context switching going
on, with some overhead. But worse, it will guarantee that all processes
get the worst possible total running time.
Example: n processes arrive at the same time. If the necessary CPU time
for every process is t seconds, all processes will finish after t*n
seconds in a concurrent (time-shared) system, since each process only
gets a 1/n share of the CPU.
If, on the other hand, the processes run completely serialized, one
after another, the first process will finish after t seconds, the second
one after 2*t seconds, and so on up to n*t seconds for the n-th process.
This gives an average running time of t*(n+1)/2 seconds (roughly t*n/2),
with a worst-case time of t*n.
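
For concreteness, a minimal Python sketch of the two schedules, assuming
ideal time-sharing with zero switching overhead and strict
first-come-first-served serialization:

  def round_robin_finish_times(n, t):
      # Ideal time-sharing: every process gets an equal 1/n share of the
      # CPU, so all n processes finish at the same moment, n*t seconds
      # after arrival.
      return [n * t] * n

  def serialized_finish_times(n, t):
      # Strict first-come-first-served: the k-th process finishes at k*t.
      return [k * t for k in range(1, n + 1)]

  n, t = 20, 1.0
  shared = round_robin_finish_times(n, t)
  serial = serialized_finish_times(n, t)
  print("time-shared: average %.1f s, worst %.1f s" % (sum(shared) / n, max(shared)))
  print("serialized:  average %.1f s, worst %.1f s" % (sum(serial) / n, max(serial)))
  # time-shared: average 20.0 s, worst 20.0 s
  # serialized:  average 10.5 s, worst 20.0 s  (t*(n+1)/2 on average)
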
This goes wrong on 2 counts:
1. If there is any amount of true parallelism going on, the time for n
concurrent connections will be less than n*t seconds. If, during
execution, time is spent waiting for an answer from another machine, the
amount of parallelism may well be large.
2. Processes may differ in necessary execution time. Running them
sequentially increases the amount of time that 'fast' processes have to
wait for 'slow' processes. For example, running a 1 millisecond and a
10 millisecond process in parallel will give 2 milliseconds for the
fast, and 11 milliseconds for the slow process. If run in sequence, this
gives (assuming either process goes first 50% of the time) on average
6 milliseconds for the fast, and 10.5 milliseconds for the slow process,
as the sketch below illustrates.
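
A minimal Python sketch of that second point, comparing ideal
round-robin sharing (processor sharing) against the two possible
sequential orders:

  def time_shared(jobs):
      # Ideal processor sharing over a list of CPU demands (in ms): the
      # remaining jobs split the CPU equally, so the shortest remaining
      # job always finishes first. Returns finish times, shortest first.
      remaining = sorted(jobs)
      finish, clock = [], 0.0
      while remaining:
          shortest = remaining[0]
          clock += shortest * len(remaining)  # wall time until it is done
          finish.append(clock)
          remaining = [r - shortest for r in remaining[1:]]
      return finish

  def sequential(jobs):
      # Run the jobs strictly one after another, in the given order.
      finish, clock = [], 0.0
      for demand in jobs:
          clock += demand
          finish.append(clock)
      return finish

  print(time_shared([1, 10]))   # [2.0, 11.0]  -> fast 2 ms, slow 11 ms
  print(sequential([1, 10]))    # [1.0, 11.0]  -> fast process runs first
  print(sequential([10, 1]))    # [10.0, 11.0] -> slow process runs first
  # Averaging the two orders: fast (1+11)/2 = 6 ms, slow (10+11)/2 = 10.5 ms.
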
Andre Engels