Hello toolserver users,
as you may know, there have been some major problems with sun grid engine since November 2011. I asked DaB. to make me an SGE manager so that I could help solve these problems.
During the last months I silently started reconfiguring SGE in small steps, so that it was always possible to use it as before and no downtime was needed. This took some time because I am only a
volunteer and had to change nearly everything. Additionally, Nosy and DaB. changed some Solaris configurations that I proposed.
All scripts that used grid engine before can continue to run without changes. But you may be able to increase your script's performance by providing additional information.
In the past you were asked to choose a suitable queue (all.q or longrun) for your job. Many people chose a queue that did not fit their task best, so I changed this procedure.
Now you have to specify all resources that your job needs during runtime at job submission. SGE will then choose the queue and host that best fit your requirements, so you don't have to care about
the different queues anymore (you may have noticed that there are many more queues than before).
All jobs must at least include information about maximum runtime (h_rt) and peak memory usage (virtual_free). This information may become mandatory in the future; currently only a warning message is shown.
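For example, a job that is expected to run for at most one hour with a peak memory usage of 256MB could be submitted like this (the job name and script path are just illustrations):
"qsub -N myjob -l h_rt=1:00:00 -l virtual_free=256M $HOME/myjob.sh"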
You also have to request other resources, such as SQL connections or free temp space, if your job needs them. Please read the documentation on the toolserver wiki, which I updated today:
https://wiki.toolserver.org/view/Job_scheduling
It currently contains the main information you need to know, but I may add more examples later.
I have also added a new script called "qcronsub". It replaces the "cronsub" script most of you used before. Unlike cronsub, it accepts the same arguments as grid engine's original "qsub" command,
so it is now possible to specify all resource values on the command line.
Please note that you should always use cronie on submit.toolserver.org for submitting jobs to SGE via cron. These cron tasks will be executed even if one host (e.g. clematis or willow) is down.
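For example, a crontab entry on submit.toolserver.org might look like this (schedule, job name and script path are just illustrations):
"0 3 * * * qcronsub -N mybot -l h_rt=2:00:00 -l virtual_free=200M $HOME/mybot/run.sh"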
This has been the suggested usage for about 17 months. Many people have migrated their cron jobs from nightshade to willow during the last weeks, but they will have the same problem again if willow
must be shut down for a longer time (which hopefully never happens).
--
Example:
This morning Dr. Trigon complained that his job "mainbot" did not run immediately and was queued for a long time. I would guess he submitted his job from cron using "cronsub mainbot -l
/home/drtrigon/pywikipedia/mainbot.py".
This indicates that the job runs forever (longrun) with unknown memory usage, so grid engine was only able to start this job on willow.
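As a side note, you can ask grid engine why a job is still waiting by looking at its details, e.g. "qstat -j 12345" (the job id is just an illustration). This lists the resources the job requested and, if the scheduler is configured to record it, the reason why no host currently matches.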
It is not possible to run infinite jobs on the webservers (only shorter jobs are allowed there, so that most jobs have finished before high webserver load is expected in the evening). Nor was it
possible to run it on the server running the mail transfer agent, which has less than 500MB of memory free but much CPU power (the expected memory usage was unknown). Other servers like nightshade
and yarrow are currently not available.
According to its last run, this job takes about 2 hours and 30 minutes and had a peak usage of 370 MB of memory. I got these values by asking grid engine for the usage statistics of the last
ten days: "qacct -j mainbot -d 10".
To be sure that the job always gets enough resources, I would suggest raising the values to 4 hours and 500MB of memory. It is not a problem if you request more resources than really needed, but a
job needing more resources than requested may be killed. So the new submit command would be:
"qcronsub -N mainbot -l h_rt=4:00:00 -l virtual_free=500M /home/drtrigon/pywikipedia/mainbot.py"
This job could run on both webservers during low load as well as on willow. Grid engine also knows that it cannot run on the mailservers because of its high memory usage.
The job "ircbot" by drtrigon was started on a mailserver last night. This job really needs an infinite runtime (-l h_rt=INFINITY), but only uses little memory (40M).
Jobs that have a limited runtime should not be submitted with an infinite runtime value - even if the expected runtime is some days or weeks. E.g. pywikipedia scripts should be updated regularly
from svn, so they must end after some days and be restarted. For example, "qcronsub -l h_rt=120:0:0 scriptname" submits a job with a maximum runtime of five days.
--
If you have any questions about grid engine usage, feel free to ask me or the toolserver admins on IRC or the mailing list.
The toolserver grid currently uses four servers and still has much CPU power and memory available; only willow is currently very busy. Please do not run processes on servers other than the login
servers (willow and nightshade) without SGE resource control (except cronie for submitting jobs to grid engine on the host submit).
Sincerely,
Merlissimo