Hi,
I hope this is the right channel for this issue. Varnish was deployed a
year ago with excellent response times and has been working fine. But after
MobileFrontend was installed several months later, the following issue
appeared. (MediaWiki 1.38.2, Varnish 7.1)
Purges run after page updates and automatically purge the desktop entries,
but when the same page is accessed from a mobile device, the old pre-update
version is shown: the mobile entry has not been purged. MobileFrontend is
installed and configured as advised in
mw:Extension:MobileFrontend/Configuring_browser_auto-detection, option 2,
same domain.
The only difference between requests for the desktop and mobile versions is
the header x-subdomain: no on the mobile version (visible in varnishlog
while debugging). But even a curl PURGE request sending this header did not
work.
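Because vcl_hash mixes req.http.x-subdomain into the lookup key (see the VCL
below), each page exists in the cache as two variants, and a PURGE only
removes the variant whose hash it matches. If MediaWiki's outgoing PURGE
requests carry no X-Subdomain header, only the desktop variant would ever be
invalidated, which would explain the stale mobile copy. A minimal sketch for
purging both variants by hand, assuming Varnish listens on 127.0.0.1:8080,
the PURGE branch in vcl_recv runs before the header is unset, and the path
and header value "no" match exactly what vcl_recv/vcl_hash see
("/wiki/Some_page" is a placeholder):

```shell
# Purge the desktop variant (x-subdomain absent, so hash_data() sees null):
curl -X PURGE "http://127.0.0.1:8080/wiki/Some_page"

# Purge the mobile variant (must carry the exact value vcl_recv would set):
curl -X PURGE -H "X-Subdomain: no" "http://127.0.0.1:8080/wiki/Some_page"
```

If the second command still misses, running varnishlog -g request on the
PURGE can show whether the header actually survives into vcl_hash.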
Configuration:
Mediawiki:
wfLoadExtension( 'MobileFrontend' );
$wgMFAutodetectMobileView = true;
$wgMinervaEnableSiteNotice = true;
$wgMFDefaultSkinClass = 'SkinMinerva';
...
$wgInternalServer = "http://127.0.0.1:8080";
$wgUseCdn = true;
$wgCdnServers = array();
$wgCdnServers[] = "127.0.0.1:8080";
Varnish:
According to https://www.mediawiki.org/wiki/Manual:Varnish_caching (the
page is a little out of date, so it had to be customized), and to:
https://www.mediawiki.org/wiki/Extension:MobileFrontend/Configuration and
https://www.mediawiki.org/wiki/Extension:MobileFrontend/Configuring_browser…
sub vcl_recv {
....
if (req.method == "PURGE") {
if ((req.http.X-Forwarded-For == "ip address") ||
(req.http.X-Forwarded-For == "ip address") ||
(req.http.X-Forwarded-For == "ip address")) {
set req.http.host = "host";
return (hash);
} else {
return (synth(405, "Not allowed"));
}
}
...
unset req.http.x-subdomain; # Requester shouldn't be allowed to supply an arbitrary X-Subdomain header
if (req.http.User-Agent ~ "(?i)^(lg-|sie-|nec-|lge-|sgh-|pg-)|(mobi|240x240|240x320|320x320|alcatel|android|audiovox|bada|benq|blackberry|cdm-|compal-|docomo|ericsson|hiptop|htc[-_]|huawei|ipod|kddi-|kindle|meego|midp|mitsu|mmp\/|mot-|motor|ngm_|nintendo|opera.m|palm|panasonic|philips|phone|playstation|portalmmm|sagem-|samsung|sanyo|sec-|sendo|sharp|softbank|symbian|teleca|up.browser|webos)") {
    set req.http.x-subdomain = "no";
}
...
return (hash);
...
sub vcl_hash {
    # Cache the mobile version of pages separately.
    #
    # NOTE: the x-subdomain header should only ever have one value (if it
    # exists), therefore vcl_recv should remove any user-supplied
    # X-Subdomain header.
    hash_data(req.http.x-subdomain);
}
...
import purge;
sub my_purge {
set req.http.purged = purge.soft(0s,30s);
if (req.http.purged == "0") {
return (synth(404));
}
else {
return (synth(200));
}
}
After the purge, even if nobody requests the page (so it is not pulled back
into the cache on demand), the 30-second grace of the soft purge should
cause the page to be refreshed automatically.
The consequence of the issue is that mobile users do not see up-to-date
versions of the pages; the only workaround is restarting Varnish.
Thanks for any advice pointing to the solution.
Pavel Spacek
Hi,
isn't it better to avoid invisible characters in page titles when creating
pages?
Please look here: there have been problems with invisible characters when
parsing or linking page titles that contain invisible Unicode characters:
https://de.wikipedia.org/wiki/Benutzer_Diskussion:Wurgl#Liste_der_Biografie…
There would never be a problem if invisible characters in the page title
were deleted when the page is created.
What do you think about this, and what technical approaches already exist?
How are LTR and RTL marks dealt with when creating pages that contain them?
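For illustration: the invisible characters in question (ZWSP, LRM, RLM, and
similar) all fall into the Unicode "Format" category (Cf), so a
title-sanitizing step could be a simple category filter. This is only a
sketch of the idea, not an existing MediaWiki feature:

```python
import unicodedata

def strip_invisible(title: str) -> str:
    """Drop Unicode format characters (category Cf): ZWSP U+200B,
    LRM U+200E, RLM U+200F, BOM/ZWNBSP U+FEFF, and so on."""
    return "".join(ch for ch in title if unicodedata.category(ch) != "Cf")

print(strip_invisible("Foo\u200e\u200bbar"))  # -> Foobar
```

One caveat relevant to the LTR/RTL question: a blanket Cf filter would also
remove ZWJ/ZWNJ, which some scripts legitimately need inside words, so a
real implementation would have to whitelist those.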
Thank you very much and kind regards
Martin
aka user:Doc_Taxon
Hello. Could you tell me, please, is there a way to check whether a window
exists in a manager by its symbolic name? hasWindow() does not accept such
a name. getWindow() returns an empty window if it did not exist before.
getWindow()...isInitialized() is always true. All I need is to check
whether the window "win1" exists and, if it does not, create it. Thank you.
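One possible workaround is a small helper that assumes the manager keeps its
instantiated windows in an object keyed by symbolic name
(`manager.windows`). That is an internal, undocumented property of
OO.ui.WindowManager, so this is a sketch that may break across OOUI
versions, not a supported API:

```javascript
// Return the window registered under `name`, creating and registering it
// via createFn() if it is not present yet. Avoids getWindow()'s
// auto-creation behaviour by inspecting the internal map directly.
function ensureWindow( manager, name, createFn ) {
	if ( Object.prototype.hasOwnProperty.call( manager.windows, name ) ) {
		return manager.windows[ name ];
	}
	var win = createFn();
	var map = {};
	map[ name ] = win;
	// addWindows() accepts an object mapping symbolic names to windows.
	manager.addWindows( map );
	return win;
}
```

Calling ensureWindow( manager, 'win1', function () { return new MyDialog(); } )
twice would then create the dialog only once and return the same instance
afterwards.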
Igal (User:IKhitron)
Hello cloud-vps users!
It's time for our annual cleanup of unused projects and resources. Every
year or so the Cloud Services team tries to identify and clean up unused
projects and VMs. We do this via an opt-in process: anyone can mark a
project as 'in use,' and that project will be preserved for another year.
I've created a wiki page that lists all existing projects, here:
https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2022_Purge
If you are a VPS user, please visit that page and mark any projects that
you use as {{Used}}. Note that it's not necessary for you to be a
project admin to mark something -- if you know that you're currently
using a resource and want to keep using it, go ahead and mark it
accordingly. If you /are/ a project admin, please take a moment to mark
which VMs are or aren't used in your projects.
When February arrives, I will shut down and begin the process of
reclaiming resources from unused projects.
If you think you use a VPS project but aren't sure which, I encourage
you to poke around on https://tools.wmflabs.org/openstack-browser/ to
see what looks familiar. Worst case, just email
cloud@lists.wikimedia.org with a description of your use case and we'll
sort it out there.
Toolforge-only users are free to ignore this email.
Thank you!
-Andrew and the WMCS team
On MW 1.39.0 and .1 with PHP 8.1.2-1ubuntu2.9,
I am trying to revise a parser-hook extension for MediaWiki that uses
wfParseUrl().
https://doc.wikimedia.org/mediawiki-core/master/php/GlobalFunctions_8php.ht…
says it is deprecated and that I should use UrlUtils::parse().
The former looks like a function and the latter looks like a class method,
perhaps on a subclass of Utils. My first question is what use statement, if
any, I need. The extension already has use Html, but use UrlUtils gives an
error because the class can't be found.
Do I need to instantiate Utils or UrlUtils and invoke the URL parser as
$urlUtils->parse()? When I invoke UrlUtils::parse statically, I get a
complaint about calling a non-static method statically.
The old code was
$url_parts = wfParseUrl( $graph_url );
and the new
$url_parts = UrlUtils::parse( $graph_url );
Any help would be appreciated.
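For what it's worth, in MediaWiki 1.39 UrlUtils is a plain service class in
the MediaWiki\Utils namespace, and parse() is an instance method, which
matches the "calling a non-static method statically" error. A sketch of the
usual wiring, assuming access to the service container (check the class and
service names against your MediaWiki version's docs):

```php
<?php
// Assumes MediaWiki 1.39+: UrlUtils is obtained from the service
// container rather than instantiated or called statically.
use MediaWiki\MediaWikiServices;

$urlUtils = MediaWikiServices::getInstance()->getUrlUtils();
// parse() is an instance method; it returns an associative array of URL
// parts like wfParseUrl() did, or null if the URL cannot be parsed.
$url_parts = $urlUtils->parse( $graph_url );
```

With that, no `use UrlUtils;` statement is needed in the extension unless
you type-hint the class, in which case it would be
`use MediaWiki\Utils\UrlUtils;`.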
Tim
Hello all!
The Search Platform Team usually holds an open meeting on the first
Wednesday of each month. Come talk to us about anything related to
Wikimedia search, Wikidata Query Service (WDQS), Wikimedia Commons Query
Service (WCQS), etc.!
Feel free to add your items to the Etherpad Agenda for the next meeting.
Details for our next meeting:
Date: Wednesday, January 11, 2023
Time: 16:00-17:00 UTC / 08:00 PST / 11:00 EST / 17:00 CET
Etherpad: https://etherpad.wikimedia.org/p/Search_Platform_Office_Hours
Google Meet link: https://meet.google.com/vgj-bbeb-uyi
Join by phone: https://tel.meet/vgj-bbeb-uyi?pin=8118110806927
Have fun and see you soon!
--
*Guillaume Lederrey* (he/him)
Engineering Manager
Wikimedia Foundation <https://wikimediafoundation.org/>