Thanks Markus.
runJobs.php has been running for over 24 hours now, with just a couple of restarts following 'non-MediaWiki run-time exceptions' and with editing disabled. I wonder: is the 10,000-job limit applied against the job table record count? The script has obviously processed vastly more than 10,000 records in the 24 hours it has been running.
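For context, the invocation I have been using is along these lines (the path and the job cap reflect my own setup, so treat them as placeholders):

    # run from the wiki root; --maxjobs caps how many jobs one invocation will attempt
    php maintenance/runJobs.php --maxjobs=10000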
The total number of job records remaining is around 69,000 - i.e. a reduction of just 5,000 or so since starting. During that time the highest record ID has increased by over 30,000, so it is clear to me that some jobs actually add more jobs to the queue. At this rate it will take close to a week of continuous script running to clear the queue, and it is equally clear that there is an underlying problem which I have to find and fix.
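In case it helps, I have been watching the per-type counts with a query along these lines (the database name and user are placeholders, and you may need to allow for a table prefix):

    # count queued jobs grouped by job type
    mysql -u wikiuser -p wikidb -e "SELECT job_cmd, COUNT(*) AS cnt FROM job GROUP BY job_cmd ORDER BY cnt DESC;"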
I have a spare server with the latest stable versions of everything installed. I intend to debug this issue on it, so damage risk is secondary. However, I would prefer not to screw things up to the extent of requiring a reinstall, so I need to know how to specify the various job types on the command line. I also need to know whether there is an index table associated with jobs which may also need to be emptied if the job table is emptied.
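From what I can tell, runJobs.php accepts a --type option, so something like the following ought to let me work through one job type at a time - refreshLinks here is just an example, not a recommendation:

    # process only refreshLinks jobs, up to 1,000 of them
    php maintenance/runJobs.php --type=refreshLinks --maxjobs=1000
    # then see what is left in the queue, per type (if your MediaWiki version has showJobs.php --group)
    php maintenance/showJobs.php --group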
I'm not a world champ techie btw - just a dogged amateur prepared to learn.
On 26/04/2015 21:21, Markus Krötzsch wrote:
Hi Peter,
As for SMW-related jobs, it is safe to delete them from the table and (optionally) to use the SMW refresh maintenance script to update, or even completely regenerate, the data of all pages (if you think it is needed).
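(On a default install that would be something along the lines of the following; the delay and verbose flags are only examples, and the script path may differ between SMW versions.)

    # regenerate SMW data for all pages; -d waits between batches to reduce load, -v is verbose
    php extensions/SemanticMediaWiki/maintenance/SMW_refreshData.php -d 5 -v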
I think that something similar should apply to all jobs MW generates (i.e., it should be ok to drop them and to run appropriate maintenance scripts instead if something appears to be out of date).
Some other extensions may also create jobs (e.g. ReplaceText); if those are deleted, the work they represent will simply never happen (and there is no maintenance script to redo it either). If you want to be careful, you can delete jobs from the job table type by type instead of dropping the whole contents at once.
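(Roughly, that means one DELETE per job type, along these lines, where the job type, database name and user are only examples:)

    # remove a single job type from the queue; adjust for any table prefix you use
    mysql -u wikiuser -p wikidb -e "DELETE FROM job WHERE job_cmd = 'refreshLinks';"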
Cheers,
Markus
On 26.04.2015 21:56, Peter Presland wrote:
We have a serious problem with the job queue. I believe there may be a circular reference in one or more nested templates. Tracking down the problem is complicated by having run the wiki for an extended period during which a considerable amount of MW and SMW template development was undertaken, the upshot being that jobs were placed in the queue far faster than the normal mechanism could clear them. The highest ID number in the job table is around 3 million (accumulated over about 3 years) and the table currently contains some 80,000 records.
I have run runJobs.php for over 12 hours and the reduction in the number of records is painfully slow. Observing the table contents during script execution reveals that certain jobs actually add more jobs to the table, so the total is falling at a rate of maybe 10 records per minute on average at most, despite the terminal window showing jobs being disposed of at perhaps 10 times that rate. I clearly have to get to the bottom of this issue, so I would like to empty the job table manually, then edit each of the potentially offending templates in turn to see how many jobs each produces and how those jobs are cleared by runJobs.php.
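What I have in mind, assuming it really is safe, is simply this (the database name and user are again placeholders), with editing disabled first:

    # empty the job queue completely
    mysql -u wikiuser -p wikidb -e "DELETE FROM job;"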
My questions are:
1. Is there an index table, or any other tables, which need to be emptied to effectively clear the queue without damaging anything?
2. Is there any possibility of data damage in proceeding in this manner, apart from fat fingers?
Once sorted I intend to confine template development to a non-production server and to run runJobs.php from a scheduled cron job at least once per day.
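Something like this crontab entry is what I have in mind (the paths are placeholders for wherever the wiki actually lives):

    # clear up to 1,000 jobs every night at 03:00, logging the output
    0 3 * * * php /var/www/wiki/maintenance/runJobs.php --maxjobs=1000 >> /var/log/runJobs.log 2>&1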
The wiki can be viewed here: https://wikispooks.com/wiki/Main_Page, and any help or suggestions would be very much appreciated.