[Labs-l] Simultaneous job limits?

Ryan Lane rlane32 at gmail.com
Mon Mar 30 16:42:22 UTC 2015


Take a look at gevent. For most workloads you just need to call
monkey.patch_all() and it'll magically run your network requests
concurrently.
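A minimal sketch of what that looks like in practice (the fetch function
and URL list here are illustrative placeholders, not from this thread):

```python
# Sketch of the gevent monkey-patching approach described above.
# monkey.patch_all() swaps blocking stdlib I/O (sockets, sleep, etc.)
# for cooperative greenlet-based versions, so otherwise-synchronous
# network code runs concurrently without being rewritten.
from gevent import monkey
monkey.patch_all()  # must run before importing modules that use sockets

import gevent

def fetch(url):
    # Placeholder for a real network request; after patch_all(), a
    # urllib/requests call here would yield to other greenlets while
    # waiting on the network.
    gevent.sleep(0.1)  # stands in for network latency
    return url

urls = ["https://example.org/%d" % i for i in range(10)]
jobs = [gevent.spawn(fetch, u) for u in urls]  # one greenlet per request
gevent.joinall(jobs)                           # wait for all to finish
results = [job.value for job in jobs]          # values in spawn order
```

Because the greenlets all sleep concurrently, the whole batch finishes in
roughly the time of one request rather than ten.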

On Sun, Mar 29, 2015 at 9:37 PM, Anthony Di Franco <di.franco at gmail.com>
wrote:

> The intent of having lots of jobs is to run lots of Redis queue workers
> making network requests in parallel, so it would probably take significant
> work with Python's threading or Twisted or the like to consolidate multiple
> requests into a single OS process. I'll start looking into it, though, and
> the resource increase seems to have us covered in the meantime. Thanks!
>
> On Sun, Mar 29, 2015 at 7:49 PM, Yuvi Panda <yuvipanda at gmail.com> wrote:
>
>> Part of the reason was that they asked for Trusty and, apparently, we
>> only had one working Trusty exec host. I have just added 5 more, which
>> should help.
>>
>> However, +1 to what Coren said. Can you simplify your code to combine
>> the tasks together?
>>
>> _______________________________________________
>> Labs-l mailing list
>> Labs-l at lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/labs-l
>>
>
>