Kate,
The problem is timing: the purge has to happen before the edited page is returned, otherwise the user might not see the new version.
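For illustration, here is a rough sketch of how such a pre-return purge could be issued (host names and URL are made up, and this is a sketch rather than the actual wiki code):

```php
<?php
// Rough sketch, not the actual wiki code: open a non-blocking socket to
// each squid and send a PURGE request before the edited page is returned.
$squids  = array( 'squid1.example.org', 'squid2.example.org' );
$sockets = array();
foreach ( $squids as $host ) {
    $sock = @fsockopen( $host, 80, $errno, $errstr, 2 ); // 2 s connect timeout
    if ( !$sock ) {
        continue; // no socket (network problem, say): skip this squid entirely
    }
    stream_set_blocking( $sock, false ); // reading later must not stall the edit
    fwrite( $sock, "PURGE /wiki/Some_page HTTP/1.0\r\nHost: wiki.example.org\r\n\r\n" );
    $sockets[] = $sock;
}
// replies are collected afterwards in a separate, time-bounded read loop
```

The point of the non-blocking sockets is that the write phase costs roughly the same no matter how many squids are in the list; only the hosts that accepted a connection are kept for the read phase.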
The purge code writes to a socket and won't read from it until the next round, so with more squids in the list each squid gets a little more time to process the request. If no socket can be established in the first place (a network problem, for example), that squid is dropped from the list for the rest of the function. The most important thing is that more squids increase the time needed for the purge only marginally. The code that reads from the socket:
    while ( strlen( $res ) < 100 && $esc < 200 ) {
        $res .= @fread( $sockets[$s], 512 );
        $esc++;
        usleep( 20 );
    }
This means that it won't try to read from the socket forever, and the sleeping adds up to at most 0.004 seconds (200 cycles × 20 µs). Not sure about fread's timeout; iirc it's fairly low, if it waits at all. Come to think of it, usleep should probably wait a bit longer in fewer cycles; not sure why I picked it that low. I think usleep also burns some CPU time in PHP. If the server replies slower than one round of writing to the sockets, we usually pick up its reply in the next round (we read up to 512 bytes, while the normal header returned is about 150 bytes). The network buffer would allow us to do about 80 purges without any reading, so there's enough room for the usual two purges per edit (the page and its history).
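If tuning ever becomes necessary, the tweak suggested above (longer sleeps in fewer cycles) is a small change; a hypothetical variant, reusing the $sockets/$s variables from the existing loop:

```php
<?php
// Hypothetical variant of the read loop: 20 cycles of 200 µs instead of
// 200 cycles of 20 µs -- the same 4 ms worst-case sleep overall, but ten
// times fewer usleep()/fread() calls, so less PHP overhead per purge.
$res = '';
$esc = 0;
while ( strlen( $res ) < 100 && $esc < 20 ) {
    $res .= @fread( $sockets[$s], 512 );
    $esc++;
    usleep( 200 );
}
```

The trade-off is a slightly coarser reaction time (up to 200 µs after the reply arrives instead of 20 µs), which shouldn't matter at these scales.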
Additional pages (depending on the one that changed) are purged after the page has been returned to the user (a deferred update).
A while ago we had the German squid (a really cheap hoster, it was down more than up...) on the purge list without it causing any problems.
Overall I believe more squids aren't much of a problem, and if slow return times become a problem we can always decrease the number of times the loop tries to read from the socket (worth a try anyway).