Hi,
I very often get the error "Wikimedia Toolforge Error: The tool you are
trying to access is currently receiving more traffic than it can
handle." when opening the tool with a link like
https://geohack.toolforge.org/geohack.php?params=54.140279_N_10.817215_E_gl…
Because the error occurs so frequently, wouldn't a solution be to raise
the traffic limit for this tool?
What do you think?
Kind regards
Doc Taxon ...
Our Scholia webapp on Toolforge is struggling with the request load at https://scholia.toolforge.org/
According to toolviews (also displayed at https://people.compute.dtu.dk/faan/scholia-page-view-statistics.html), yesterday we had almost 700,000 hits. Monitoring the log (tail -f uwsgi.log), I see quite a lot of various "troubling" hits, e.g., to the static files. I do not know what it is, but I imagine it is some kind of GenAI wannabe with a Playwright script crawling Scholia.
In the current Scholia webapp code, we have two embarrassments:
1) Serving static files in the Flask webapp.
2) Blocking requests to Wikidata API or the SPARQL endpoint here and there in the code (most Scholia requests are client-side SPARQL requests though).
I have been moving some static file requests to tools-static.wmflabs.org, but I still need to move some more that are embedded in Bootstrap.
I am thinking about moving from Flask to an async framework. I am gaining some experience with FastAPI for web services.
Am I correct that async on Toolforge will buy us a bit of extra performance?
Are there other Toolforge users with struggling webapps, and if so, what do you do? If it continues, we could require login, I suppose.
In the Scholia repo we have this issue: https://github.com/WDscholia/scholia/issues/2727
best regards
Finn Årup Nielsen
https://people.compute.dtu.dk/faan/
Hello, all!
If you use a cloud-vps project (other than toolforge), please update the
entry about your project on this page:
https://wikitech.wikimedia.org/wiki/News/2025_Cloud_VPS_Purge
There are detailed instructions on that page about how to annotate your
project. If your project is unclaimed after a few months, it will be
subject to suspension and, ultimately, deletion. Perhaps more
importantly, you will receive a huge number of ever-grumpier emails from
cloud administrators asking you to respond.
In previous years we've only asked that you mark projects as 'in use'.
This year we're also trying to gather summary information about the
actual purpose of each project as part of an initiative to clarify
use-cases of cloud-vps and toolforge; please include as much information
as you are able.
If you see an unclaimed project on that list which you use but are not
an admin of, feel free to make a note anyway, or reach out to your
admins and encourage them to do so.
Thank you!
-Andrew
_______________________________________________
Cloud-announce mailing list -- cloud-announce(a)lists.wikimedia.org
List information: https://lists.wikimedia.org/postorius/lists/cloud-announce.lists.wikimedia.…
Hello all,
the recent Toolforge NFS server OS upgrade has helped reduce the problem
of NFS workers getting stuck[0]. We will be performing a shorter
follow-up maintenance tomorrow, Wed 15th, 8-9 UTC.
Thank you for your understanding and patience.
best,
Filippo
[0]: https://phabricator.wikimedia.org/T404584
--
*Filippo Giunchedi*
Staff Site Reliability Engineer
Wikimedia Foundation <https://wikimediafoundation.org/>
Hello all,
On Monday 13th from 8 to 10 UTC there will be a maintenance window for
Toolforge NFS. We will strive to minimize the user-facing outage of NFS
and its related services. This maintenance window affects all tools with
NFS access enabled, the bastion servers, and files hosted on tools-static.
Tools using build service images without NFS mounts [0] will not be
affected.
The maintenance window will be used to upgrade the Toolforge NFS server to
a Debian Trixie VM, bringing more than two years of Linux development. The
upgrade, while part of the regular OS lifecycle, is also aimed at narrowing
down the NFS-related problems within Toolforge. Please see [1] for more
details.
best,
Filippo
[0]:
https://wikitech.wikimedia.org/wiki/Help:Toolforge/Building_container_image…
[1]: https://phabricator.wikimedia.org/T404584
--
*Filippo Giunchedi*
Staff Site Reliability Engineer
Wikimedia Foundation <https://wikimediafoundation.org/>
(If you don't work with SHA1 values of revisions, you can ignore this
message)
Hello,
As part of performance improvements to the revision table, we are
reviewing the purpose and usage of the `rev_sha1` field.
Currently, this field is mainly used to detect identical revisions, for
example in manual revert detection. The `rev_sha1` value is calculated
from the SHA1 values of all slots in the revision, which are stored in
the content table:
- The SHA1 of a slot is generated from its content, in base36.
- For revisions with only one slot (the case for all wikis except
  Commons), `rev_sha1` matches the SHA1 of that slot.
- On Commons, most revisions have two slots ("main" and "mediainfo").
  In that case, the SHA1 of the revision is computed by concatenating
  the SHA1 values of both slots, then hashing that concatenated value
  again with SHA1.
We have decided to drop the `rev_sha1` field and compute the SHA1 value
of a revision on the fly from the `content_sha1` values in its slots.
The same change applies to the archive table: the `ar_sha1` field (for
deleted revisions) will also be removed.
If you currently use the `rev_sha1` or `ar_sha1` fields, please switch
to using `content_sha1` instead. These fields will be removed from the
wikireplicas in three weeks.
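The on-the-fly computation described above can be sketched in Python. The base36 encoding and the concatenate-then-rehash rule for multi-slot revisions follow the description in this message; the ordering of slots when concatenating (here, sorted by role name) is an assumption and should be verified against MediaWiki's implementation:

```python
import hashlib


def base36_sha1(data: bytes) -> str:
    """SHA-1 of `data`, encoded in base36 (the format of content_sha1)."""
    n = int.from_bytes(hashlib.sha1(data).digest(), "big")
    if n == 0:
        return "0"
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = []
    while n:
        n, r = divmod(n, 36)
        out.append(digits[r])
    return "".join(reversed(out))


def revision_sha1(slot_sha1s: dict[str, str]) -> str:
    """Compute a revision's SHA-1 from its slots' content_sha1 values.

    Single slot: the revision SHA-1 equals that slot's SHA-1.
    Multiple slots (e.g. "main" and "mediainfo" on Commons): concatenate
    the slot SHA-1s and hash the result again with SHA-1.
    Slot ordering by role name is an assumption, not from the announcement.
    """
    if len(slot_sha1s) == 1:
        return next(iter(slot_sha1s.values()))
    combined = "".join(slot_sha1s[role] for role in sorted(slot_sha1s))
    return base36_sha1(combined.encode("utf-8"))
```

For the common single-slot case this is just a lookup of `content_sha1`, so most queries only need the extra join against the content table.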
You can follow progress here: https://phabricator.wikimedia.org/T389026
Thank you,
Alexander Vorwerk — IRC: Zabe