On Fri, Apr 26, 2019 at 10:19 AM Jim Hu <jim.hu.biobio(a)gmail.com> wrote:
> The code is throwing the mass commit error only if something interrupts
> processing before execution is complete - that includes my testing with
> die() statements or other errors that I've been tracking down. So it seems
> like doing execution via the Special page is caching all database
> interactions into some unknown number of transactions that never get
> committed until page execution makes it all the way to the end. If it
> doesn't, the mass commit error happens.
MediaWiki generally wraps database requests in an automatic transaction (at
least when you are in a web request - jobs and maintenance scripts have
somewhat different rules). There are ways to push callbacks into a followup
transaction too, with things like DeferredUpdate or
Database::onTransactionCommitOrIdle. There shouldn't really be a way for a
special page to cause errors with that, other than manually calling
commit/rollback methods in the wrong way. die() would prevent the
transaction from being committed but wouldn't generate an error, I think.
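If it helps, here is a rough, untested sketch of those two deferral
mechanisms as they might be called from a special page's execute() - the
doFollowupWork() helper is just a placeholder, not anything in your
extension:

  $dbw = wfGetDB( DB_MASTER );

  // Run a callback once the current transaction round has committed
  // (or right away if no transaction is pending).
  $dbw->onTransactionCommitOrIdle( function () {
      doFollowupWork();
  }, __METHOD__ );

  // Or queue a deferred update; MediaWiki runs these after the response
  // has been sent, outside the request's automatic transaction.
  DeferredUpdates::addCallableUpdate( function () {
      doFollowupWork();
  } );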
> It looks to me like in earlier versions of MW, creating a page is
> committed during execution, so one can pull a revision for that page and do
> stuff to it in the same pass through the special page execution. Now it's
> acting like there is no revision to pull after doing
> WikiPage::doEditContent(). When I comment out the right blocks of code, I
> can see that I create the desired page from my template, but my attempt to
> pull the revision text back out and do something to it blanks the page and
> causes errors from code that expects there to be text in the revision.
That sounds like a consistent read [1] issue - under certain circumstances,
MySQL's consistent-read snapshot means a SELECT inside an open transaction
won't see newly inserted rows. The usual workaround for that is to do a
locking read, e.g. pass the 'LOCK IN SHARE MODE' option to Database::select.
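Something like this (untested; the table, fields and $pageId variable are
just for illustration):

  $dbw = wfGetDB( DB_MASTER );
  $row = $dbw->selectRow(
      'revision',
      [ 'rev_id', 'rev_timestamp' ],
      [ 'rev_page' => $pageId ],
      __METHOD__,
      // Locking read: reads the latest committed row versions instead of
      // the transaction's consistent-read snapshot.
      [ 'LOCK IN SHARE MODE' ]
  );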
[1]
https://dev.mysql.com/doc/refman/8.0/en/innodb-consistent-read.html