*This change impacts people running bots and semi-automated tools to edit Wikidata.*
Hello all,
Following the previous discussions about the limitation we set up to address the significant dispatch lag on client wikis, we have come up with a new solution to try.
The database behind Wikidata is replicated to several other database servers. With each edit, the changes are copied to these other servers. There is always a short replication lag, usually less than a second. If this lag gets too high, the other databases can’t stay in sync, which can cause problems for reading and editing Wikidata, and for reusing its data on other projects.
If the lag is too high on too many servers, the master database stops accepting new edits. When the lag is close to the limit, the system prioritizes edits from humans and rejects edits from bots, sending back an error instead. This limit is controlled by the maxlag parameter of the API.
People writing bots can set a maxlag value for their bot; the default value is 5. This number is currently checked against two things: the replication lag between the master database and the replicas, and the size of the job queue.
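For illustration, here is a minimal sketch of passing maxlag with an edit request through the standard MediaWiki action API, in Python with the requests library. The item ID, label data and token value are placeholders for this example, not part of this announcement:

```python
# Sketch of a Wikidata edit that passes maxlag, using the action API.
import requests

API_URL = "https://www.wikidata.org/w/api.php"

params = {
    "action": "wbeditentity",
    "id": "Q4115189",  # the Wikidata sandbox item, used here as a placeholder
    "data": '{"labels": {"en": {"language": "en", "value": "Example"}}}',
    "token": "CSRF_TOKEN_HERE",  # placeholder: fetch a real CSRF token first
    "maxlag": 5,                 # refuse the edit if the lag exceeds 5
    "format": "json",
}

response = requests.post(API_URL, data=params)
result = response.json()

# When the lag is too high, the API answers with an error of code "maxlag"
# instead of saving the edit.
if "error" in result and result["error"].get("code") == "maxlag":
    print("Servers are lagged, retry later:", result["error"].get("info"))
```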
*On Tuesday, June 3rd, maxlag will also take into account the dispatch lag between Wikidata and its clients (e.g. the Wikipedias).*
The dispatch lag is the latency between an edit on Wikidata and the moment when it’s shown on clients. Its median value is around 2 minutes.
*If you’re running a bot with the standard configuration (maxlag=5), then whenever the median dispatch lag is higher than 300 seconds, your bot’s edits won’t be saved and the API will return an error instead.*
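If your bot hits this error, it can simply wait and retry. Below is a minimal, hypothetical sketch of such a retry loop in Python with the requests library; the edit_params dictionary is assumed to hold a full set of edit parameters (including maxlag) as in the earlier example, and the 5-second fallback wait is an arbitrary choice:

```python
# Sketch of reacting to the maxlag error: wait as suggested by the server,
# then retry the same edit a limited number of times.
import time
import requests

API_URL = "https://www.wikidata.org/w/api.php"

def edit_with_retries(edit_params, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        response = requests.post(API_URL, data=edit_params)
        result = response.json()
        error = result.get("error", {})
        if error.get("code") != "maxlag":
            return result  # saved, or a different error to handle elsewhere
        # The API usually sends a Retry-After header with a suggested wait;
        # fall back to 5 seconds if it is absent.
        wait = int(response.headers.get("Retry-After", 5))
        print(f"Lagged (attempt {attempt}), waiting {wait}s: {error.get('info')}")
        time.sleep(wait)
    raise RuntimeError("Servers stayed lagged for too long, giving up")
```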
If this change impacts your work too much, please let us know by leaving a comment on this ticket: https://phabricator.wikimedia.org/T194950. This is also where you can ask any questions. You can also change your bot’s configuration to increase its maxlag value.
More information: Wikidata dispatch Grafana board https://grafana.wikimedia.org/dashboard/db/wikidata-dispatch?refresh=1m&orgId=1
Thanks for your constructive feedback,