Read your message from Nganguem Victor before it gets deleted!
To read your message, simply follow this link:
http://eu1.badoo.com/chatnoirxx/in/p4hxwt052ok/?lang_id=6
Other people are here too:
Oded (Tel Aviv, Israel)
Priya (Udaipur, India)
RajaYogi BK (Udaipur, India)
Karuna (Udaipur, India)
Chika Reginald Onyia (Phnom Penh, Cambodia)
...Who else?
http://eu1.badoo.com/chatnoirxx/in/p4hxwt052ok/?lang_id=6
Links not working in this message? Copy and paste them into your browser's address bar.
You received this email because Nganguem Victor sent you a message through our system. If this is a mistake, simply ignore this email; the request will then be deleted from the system.
Have fun!
The Badoo team
This automated email was sent by Badoo because a message was addressed to you on Badoo. Replies are neither stored nor processed. If you no longer want to receive messages from Badoo, let us know:
http://eu1.badoo.com/impersonation.phtml?lang_id=6&mail_code=65&email=media…
Hi!
I am building a wiki to document and collect information on the Palme case (https://en.wikipedia.org/wiki/Assassination_of_Olof_Palme). The case was closed two years ago, and since then a lot of documents have been released. The police investigation is one of the three largest in world history. The complete material is around 60 000 documents comprising around 1 000 000 pages, of which we have roughly 5%.
My wiki, https://wpu.nu, collects these documents, OCRs them with Google Cloud Vision, and publishes them using the Proofread Page extension. This is done by a Python script running on the server, accessing the wiki via the API. Some users also write "regular pages" and help me sort through the material and proofread it. This part works very well for the most part.
The wiki is running on a bare-metal server with an AMD Ryzen 5 3600 6-core processor (12 logical cores) and 64 GB of RAM. MariaDB (10.3.34) is used for the database. I have used Elasticsearch for SMW data and full-text search, but I have been switching back and forth in my debugging efforts.
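(For reference, the store setting I keep switching is roughly the sketch below; it is set after enableSemantics(), and the endpoint value here is an assumption rather than my exact configuration.)

  $GLOBALS['smwgDefaultStore'] = 'SMWElasticStore';                // Elastic-backed SMW store
  $GLOBALS['smwgElasticsearchEndpoints'] = [ 'localhost:9200' ];   // assumed endpoint
  // ...or back to the plain SQL store while debugging:
  // $GLOBALS['smwgDefaultStore'] = 'SMWSQLStore3';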
From the investigation we have almost 60 000 sections (namespace Uppslag), 22 000 chapters (namespace Avsnitt) and 6 000 documents (namespace Index). The documents and the Index namespace are handled by the Proofread Page extension. I have changed the PRP templates to suit the annotation and UI needs of wpu. For instance, each Index page has a semantic attribute pointing out which section it is attached to. Between all these pages there are semantic links that represent relations between the sections, for instance the relation between a person and a specific gun, an organisation, or a place.
Each namespace is rendered with its corresponding template, which in turn includes several other templates. The templates render the UI but also contain a lot of business logic that adds pages to categories, sets semantic data, and so on.
I will use an example to try to explain it better. This is an example of a section page: https://wpu.nu/wiki/Uppslag:E13-00, whose header shows information such as the date, document number and relations to other pages. Below that comes the meta-information from the semantic data in the corresponding Index page, followed by the pages of that document.
The metadata of the page is entered using Page Forms and rendered using the Uppslag_visning template. I use the Uppslag template to set a few variables that are used a lot in the Uppslag_visning template. Uppslag_visning also sets the page's semantic data and categories. A semantic query is used to find out whether there is a corresponding Index page; if so, a template is used to render its metadata. Another semantic query is used to get the pages of the Index and render them using template calls in the query.
Oh, and the skin is custom. It is based on the Pivot skin but extensively modified.
I have run into a few problems which have led me to question whether MediaWiki + SMW are the right tools for the job, to question my sanity, and to question the principle of cause and effect :) It is not one specific problem or bug as such.
Naturally, I often make changes to templates used by the 60 000 section pages. This queues a lot of refreshLinks jobs in the job queue, initially taking a few hours to clear. I run the jobs as a service and have experimented with the options to runJobs.php to get good usage of the resources. I also optimized the templates to reduce the resources needed for each job (e.g. using proxy templates to instantiate "variables" to reduce the number of identical function calls, saving calculated data in semantic properties, etc.). This helped a little.
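(Roughly, the job-running setup boils down to the sketch below; the runJobs.php parameters are placeholders here, the actual values I have tried are described further down.)

  $wgJobRunRate = 0;   // assumption: jobs are never run from web requests, only by the dedicated service
  // the service essentially loops over:
  //   php maintenance/runJobs.php --procs <N> --maxjobs <M> --wait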
I noticed that a large portion of the refreshLinks jobs failed with issues locking the localisation cache table. At that point I had the runJobs.php --maxjobs parameter set quite high, around 100-500, and --procs around 16 or 32. I lowered --maxjobs to around 5 and the problem seemed solved; CPU utilization went down, as did iowait. The job queue still took a very long time to clear, though. Looking at the MySQL process list, I found that a lot of time was spent by jobs trying to delete the localisation cache.
I switched the localisation cache store to 'array' (and tested 'files' as well). This sped up the queue processing a bit but caused various errors. Sometimes the localisation cache data was read as an int(1), and sometimes the data seemed truncated. Looking at the source, I found that the cache file writes and reads were not protected by any mutex or lock, so one job could read the LC file while another was writing it, ending up reading a truncated file. I implemented locks and exception handling in the LC code so that jobs could recover should they read corrupted data. I also mounted the $IP/cache directory on a ram disk. The jobs now went through without LC errors, and a bit faster, but...
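(The locking I added is essentially the standard flock() pattern, roughly the sketch below; the function name and parameters are made up for illustration and this is not the actual LCStore code.)

  // Illustrative only: take an exclusive lock around the cache file write so a concurrent
  // reader can never see a half-written file; $cacheFile/$data stand in for whatever the store uses.
  function writeCacheFileLocked( string $cacheFile, string $data ): bool {
      $fp = fopen( $cacheFile, 'c' );
      if ( $fp === false ) {
          return false;
      }
      if ( !flock( $fp, LOCK_EX ) ) {
          fclose( $fp );
          return false;
      }
      ftruncate( $fp, 0 );
      fwrite( $fp, $data );
      fflush( $fp );
      flock( $fp, LOCK_UN );
      fclose( $fp );
      return true;
  }
  // Readers take LOCK_SH before reading and fall back to a rebuild if the payload fails to unserialize.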
MediaWiki must be one of the most battle-tested software systems written by man. How come files that are used in a concurrent setting are not locked? I must be doing something wrong here, right?
Should the jobs be run serially? Why is the LC cleared for each job when the cache should already be clean? Maybe there is some kind of development and deployment workflow that circumvents this problem?
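(For context, the localisation cache settings involved are roughly the sketch below, not my exact LocalSettings.php; whether manualRecache is the intended "deployment path" is precisely the part I am unsure about.)

  $wgCacheDirectory = "$IP/cache";                     // the directory I have on a ram disk
  $wgLocalisationCacheConf['store'] = 'files';         // CDB files instead of the default DB table; 'array' is the other store I tried
  $wgLocalisationCacheConf['storeDirectory'] = $wgCacheDirectory;
  $wgLocalisationCacheConf['manualRecache'] = true;    // only maintenance/rebuildLocalisationCache.php would rewrite the cache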
I could have lived with the wiki lagging an hour or so, but I also experience data inconsistency. For instance, sometimes the query for the Index finds an index, but the query for Pages finds nothing, so the metadata is filled in on the page but no document is shown. Sometimes when I purge a page the document is shown, and if I purge again, it is gone. Data from other sections in the same chapter is sometimes shown incorrectly, and various sequences of refresh and purge may fix it. The entire wiki is plagued with this kind of inconsistency, making it a very unreliable source of information for its users.
Any help, tips and pointers would be greatly appreciated.
A first step should probably be to get to a consistent state.
Best regards,
Simon
PS: I am probably running a refreshLinks or rebuildData when you visit the wiki, so the info found there might vary.
__________
Versions:
Ubuntu 20.04 64bit / Linux 5.4.0-107-generic
MediaWiki 1.35.4
Semantic MediaWiki 3.2.3
PHP 7.4.3 (fpm-fcgi)
MariaDB 10.3.34-MariaDB-0ubuntu0.20.04.1-log
ICU 66.1
Lua 5.1.5
Elasticsearch 6.8.23
Since the beginning of the year, the Wikimedia Language team has enabled
translation backports for MediaWiki core, extensions and skins hosted on
Gerrit. On a weekly schedule, compatible translations from the master branch
are backported to all supported release branches. The currently supported
branches are 1.35–1.38.
Translation backports partially replace the LocalisationUpdate
extension. Wikimedia sites no longer use the extension, and to our
knowledge only a few other users of the extension exist, since it
requires manual setup.
We, the Language team, think that maintaining the LocalisationUpdate
extension is no longer a good use of our time. We are asking for your
feedback about the future of this extension.
We are planning to:
* Remove LocalisationUpdate from the MediaWiki Language Extension Bundle
starting from version 2022.07
* Remove us as maintainers of the extension
Additionally, based on the feedback, we are planning to either mark the
extension as unmaintained, transfer maintenance to a new maintainer, or,
if there is no indication that anyone uses this extension, request that it
be archived and removed from the list of extensions bundled with
MediaWiki core.
We request your feedback and welcome discussion on
https://phabricator.wikimedia.org/T300498. Please let us know if you are
using this extension and whether you would be interested in maintaining it.
*Anticipated questions*
Q: What about Wikimedia sites: does this mean they will not get the frequent
translation updates they used to have?
A: We still think this is important, but we do not think the previous
solution can be restored. We would like to collaborate on new solutions.
One solution could be more frequent deployments.
-Niklas
Hi all,
I'm writing to the list in the hope of receiving some feedback,
questions, maybe answers to help us solve a conundrum with sessions we
have on our Wikibase/Mediawiki installation.
We're running two different Docker-based Wikibase installations (one
staging, one production) on two separate virtual machines. Both use the
SimpleSAML extension to connect to a SAML implementation, but we have
found that the random session deletions happen both with and without the
SAML extension in use.
The symptom we've seen is that, out of the blue, users are
disconnected from MediaWiki. Since we use SAML, it's enough to click the
Login link to be reconnected.
The duration or number of requests until a session is deleted seems
random. It does appear, though, that the more (or the more frequent) requests
are made (and thus the more background jobs run), the quicker a session is deleted.
We have even tried setting $wgObjectCacheSessionExpiry = 7200; in order
to exclude any time-zone-related issues, which was our first suspicion.
However, changing this from the default one-hour session expiry does not
change the behavior we're seeing.
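(For completeness, the session-related settings in play amount to roughly the
sketch below; CACHE_DB is an assumption based on SqlBagOStuff showing up in
the logs, and everything else is default apart from the expiry change.)

  $wgSessionCacheType = CACHE_DB;        // assumption: sessions live in the objectcache table (SqlBagOStuff)
  $wgObjectCacheSessionExpiry = 7200;    // raised from the default 3600, as described above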
Sessions are deleted regardless of their nature. For example, sessions
established for OAuth connections are deleted in the same way and at the
same time as other sessions. Using wikidataintegrator, we're able to
run a number of requests until the CSRF token expires (the number
of requests that succeed before this happens is random):
[…]
Created Item Q379 from Class http://www.cidoc-crm.org/cidoc-crm/E34_Inscription.
Created Item Q380 from Class http://www.cidoc-crm.org/cidoc-crm/E53_Place.
Error while writing to Wikidata
ERROR creating class reference for http://www.cidoc-crm.org/cidoc-crm/E35_Title: {'error': {'code': 'badtoken', 'info': 'Invalid CSRF token.', '*': 'See https://saf-dev.bnl.lu/w/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes.'}}
Error while writing to Wikidata
ERROR creating object property reference for http://www.cidoc-crm.org/cidoc-crm/P16_used_specific_object: {'error': {'code': 'badtoken', 'info': 'Invalid CSRF token.', '*': 'See https://saf-dev.bnl.lu/w/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes.'}}
When sessions are not randomly deleted, session renewal seems to work as
intended, with sessions being renewed after half their expiry time:
[DBQuery] SqlBagOStuff::fetchBlobMulti [0s] db.svc:3306: SELECT keyname,value,exptime FROM `objectcache` WHERE keyname = 'mediawiki:MWSession:n9krnsfa303qk9c2osp6c81tp1k2bp5b'
[session] SessionBackend "require/require_once/MediaWiki\Session\Session->renew/MediaWiki\Session\SessionBackend->renew" metadata dirty for renew(): require/require_once/MediaWiki\Session\Session->renew/MediaWiki\Session\SessionBackend->renew
[session] SessionBackend "n9krnsfa303qk9c2osp6c81tp1k2bp5b" force-persist for renew(): require/require_once/MediaWiki\Session\Session->renew/MediaWiki\Session\SessionBackend->renew
[session] SessionBackend "n9krnsfa303qk9c2osp6c81tp1k2bp5b" save: dataDirty=0 metaDirty=1 forcePersist=1
[cookie] setcookie: "mediawiki_session", "n9krnsfa303qk9c2osp6c81tp1k2bp5b", "0", "/", "", "1", "1", ""
[cookie] setcookie: "mediawikiUserID", "13", "1648233666", "/", "", "1", "1", ""
[cookie] setcookie: "mediawikiUserName", "Ibl676", "1648233666", "/", "", "1", "1", ""
[cookie] already deleted setcookie: "mediawikiToken", "", "1614105666", "/", "", "1", "1", ""
[cookie] already deleted setcookie: "forceHTTPS", "", "1614105666", "/", "", "", "1", ""
[session] SessionBackend "n9krnsfa303qk9c2osp6c81tp1k2bp5b" Taking over PHP session
[session] SessionBackend "n9krnsfa303qk9c2osp6c81tp1k2bp5b" save: dataDirty=0 metaDirty=1 forcePersist=1
[cookie] already set setcookie: "mediawiki_session", "n9krnsfa303qk9c2osp6c81tp1k2bp5b", "0", "/", "", "1", "1", ""
[cookie] already set setcookie: "mediawikiUserID", "13", "1648233666", "/", "", "1", "1", ""
[cookie] already set setcookie: "mediawikiUserName", "Ibl676", "1648233666", "/", "", "1", "1", ""
[cookie] already deleted setcookie: "mediawikiToken", "", "1614105666", "/", "", "1", "1", ""
[cookie] already deleted setcookie: "forceHTTPS", "", "1614105666", "/", "", "", "1", ""
[DBQuery] SqlBagOStuff::updateTable [0.002s] db.svc:3306: REPLACE INTO `objectcache` (keyname,value,exptime) VALUES ('mediawiki:MWSession:n9krnsfa303qk9c2osp6c81tp1k2bp5b','...\0','20220223194106')
Here is one example where the session got deleted by whatever feels
responsible to do so:
MariaDB [mediawiki]> select keyname, exptime from objectcache where keyname like 'mediawiki:MWSession%';
+------------------------------------------------------+---------------------+
| keyname                                              | exptime             |
+------------------------------------------------------+---------------------+
| mediawiki:MWSession:t7d5lms2nrma6k627c6usrgc6289en3a | 2022-03-24 17:34:34 |
+------------------------------------------------------+---------------------+
1 row in set (0.001 sec)

MariaDB [mediawiki]> select keyname, exptime from objectcache where keyname like 'mediawiki:MWSession%';
Empty set (0.001 sec)

MariaDB [mediawiki]> select now();
+---------------------+
| now()               |
+---------------------+
| 2022-03-24 17:37:15 |
+---------------------+
1 row in set (0.024 sec)
The only thing I have found that is even remotely related to a session
being randomly deleted is these log entries:
2022-03-24 17:03:29 51290acae35a mediawiki: SessionManager using store SqlBagOStuff
2022-03-24 17:03:29 51290acae35a mediawiki: Saving all sessions on shutdown
2022-03-24 17:03:29 51290acae35a mediawiki: SessionManager using store SqlBagOStuff
2022-03-24 17:03:29 51290acae35a mediawiki: Session "[30]MediaWiki\Session\CookieSessionProvider<-:13:Ibl676>8favvi92d08njut82j66un60r601titn": Unverified user provided and no metadata to auth it
2022-03-24 17:03:29 51290acae35a mediawiki: setcookie: "mediawiki_session", "", "1616605409", "/", "", "1", "1", ""
2022-03-24 17:03:29 51290acae35a mediawiki: setcookie: "mediawikiUserID", "", "1616605409", "/", "", "1", "1", ""
2022-03-24 17:03:29 51290acae35a mediawiki: already deleted setcookie: "mediawikiToken", "", "1616605409", "/", "", "1", "1", ""
2022-03-24 17:03:29 51290acae35a mediawiki: already deleted setcookie: "forceHTTPS", "", "1616605409", "/", "", "", "1", ""
2022-03-24 17:03:29 51290acae35a mediawiki: SessionBackend "ujf6rcheseb51lpaojrqggb4c23s1flr" is unsaved, marking dirty in constructor
2022-03-24 17:03:29 51290acae35a mediawiki: SessionBackend "ujf6rcheseb51lpaojrqggb4c23s1flr" save: dataDirty=1 metaDirty=1 forcePersist=0
2022-03-24 17:03:29 51290acae35a mediawiki: already deleted setcookie: "mediawiki_session", "", "1616605409", "/", "", "1", "1", ""
2022-03-24 17:03:29 51290acae35a mediawiki: already deleted setcookie: "mediawikiUserID", "", "1616605409", "/", "", "1", "1", ""
2022-03-24 17:03:29 51290acae35a mediawiki: already deleted setcookie: "mediawikiToken", "", "1616605409", "/", "", "1", "1", ""
2022-03-24 17:03:29 51290acae35a mediawiki: already deleted setcookie: "forceHTTPS", "", "1616605409", "/", "", "", "1", ""
2022-03-24 17:03:29 51290acae35a mediawiki: Saving all sessions on shutdown
IBL676 is my SAML ID. I'm not sure why the user is unverified, or what
metadata would be used/required to verify it.
Looking forward to any advice on how to further investigate what is
going on here.
Regards,
David
--
*TenTwentyFour S.à r.l.*
www.tentwentyfour.lu <https://www.tentwentyfour.lu>
*T*: +352 20 211 1024
*F*: +352 20 211 1023
1 place de l'Hôtel de Ville
4138 Esch-sur-Alzette
Hello all,
It's coming close to the time for annual appointments of community members
to serve on the Code of Conduct committee (CoCC). The Code of Conduct
Committee is a team of five trusted individuals (plus five auxiliary
members) with diverse affiliations responsible for general enforcement of
the Code of conduct for Wikimedia technical spaces. Committee members are
in charge of processing complaints, discussing with the parties affected,
agreeing on resolutions, and following up on their enforcement. For more on
their duties and roles, see
https://www.mediawiki.org/wiki/Code_of_Conduct/Committee.
This is a call for community members interested in volunteering for
appointment to this committee. Volunteers serving in this role should be
experienced Wikimedians or have had experience serving in a similar
position before.
The current committee is doing the selection and will research and discuss
candidates. Six weeks before the beginning of the next Committee term,
meaning 07 May 2022, they will publish their candidate slate (a list of
candidates) on-wiki. The community can provide feedback on these
candidates, via private email to the group choosing the next Committee. The
feedback period will be two weeks. The current Committee will then either
finalize the slate, or update the candidate slate in response to concerns
raised. If the candidate slate changes, there will be another two week
feedback period covering the newly proposed members. After the selections
are finalized, there will be a training period, after which the new
Committee is appointed. The current Committee continues to serve until the
feedback, selection, and training process is complete.
If you are interested in serving on this committee or would like to nominate a
candidate, please write an email to techconductcandidates AT wikimedia.org
with details of your experience on the projects, your thoughts on the code
of conduct and the committee, what you hope to bring to the role, and
whether you would prefer to be an auxiliary or a main member of the
committee. The committee consists of five main members plus five auxiliary
members, who will serve for one year; all applications are appreciated
and will be carefully considered. The deadline for applications is *the end
of day on 30 April 2022*.
Please feel free to pass this invitation along to any users who you think
may be qualified and interested.
Best,
Martin Urbanec, on behalf of the Code of Conduct Committee
Hello,
I'm trying to upgrade/migrate my MediaWiki from my old Ubuntu 14.04 VM
to a new Ubuntu 20.04 VM. The details of the components are below. I
managed to set up a new MediaWiki 1.37.2 with MariaDB on the new VM,
copied the images directory tree and converted LocalSettings.php to make it
workable, but when I restored the MySQL DB into MariaDB with "mysql -u root
-p toshiwiki (my DB name) < toshiwikibackup.sql", I got a "Database error"
on the main page with the detail below:
[a50bfc1261fc29c06a343aec] 2022-04-08 05:32:47: Fatal exception of type
"Wikimedia\Rdbms\DBQueryError"
Can someone tell me where to start debugging or fixing my database so
that MariaDB understands the imported DB properly? I'm willing/wanting
to learn more about MySQL/MariaDB, the SQL commands and the DB
structure, so that I can fix problems like this myself. I'm guessing I need to
modify the mysqldump file before restoring it, but I just don't know where to
start.
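(In case it helps, I assume a first step would be to make MediaWiki show the
failing query instead of just the exception ID, roughly like the sketch below
in LocalSettings.php on the new VM, and then to run the schema upgrade script;
this is just my understanding from the manuals, not something I have tried yet.)

  $wgShowExceptionDetails = true;   // show the underlying SQL error instead of only the exception ID
  // then upgrade the imported 1.26 schema to match 1.37:
  //   php maintenance/update.php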
Thanks,
Toshi
***
Old environment:
Ubuntu 14.04
Mediawiki 1.26.0
PHP 5.5.9
MySQL 5.5.672
New environment:
Ubuntu 20.04
Mediawiki 1.37.2
PHP 7.4.3
MariaDB 10.3.34
ICU 66.1
--
Toshi Esumi
web: http://www.toshiesumi.com
blog: http://toshiesumi.blogspot.com