Hi All,
After installing the MobileFrontend extension, I am unable to run it due to
the following error:
*Fatal error: Uncaught Error: Call to undefined method
MediaWiki\MediaWikiServices::getContentHandlerFactory() in
/Library/WebServer/Documents/myweb/otherprojects/mw/core/extensions/MobileFrontend/includes/MobileFrontendEditorHooks.php*
I am not sure what is missing or how I can make this run.
Thanks
Hi all,
We're still working on a project with the MediaWiki API, and we've run into
a different issue regarding page moves/redirects.
We're trying to pull revision and redirect data from the "Killing/Death of
Luo Changqing" page and its talk page. Unfortunately, the page wasn't found
when we pulled it through the MediaWiki API filtered by our 2009-2019 date
range. Either "Death" or "Killing" worked prior to the page move, but now
we find that we can no longer access the revisions from the old time frame.
For pages that have been moved/redirected, what would you recommend we do
to pull this data that was previously available?
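For reference, here is a minimal sketch of the kind of query we've been
running (the exact title and date bounds are illustrative; with the API's
default ordering, rvstart is the newer bound and rvend the older one):

import requests

API = "https://en.wikipedia.org/w/api.php"

# Minimal sketch of our revisions query; title and date range are
# illustrative. With the API's default rvdir=older, rvstart is the
# newer bound and rvend the older one.
params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Death of Luo Changqing",  # we also tried "Killing of ..."
    "rvprop": "ids|timestamp|user|comment",
    "rvstart": "2019-12-31T23:59:59Z",
    "rvend": "2009-01-01T00:00:00Z",
    "rvlimit": "max",
    "format": "json",
}

data = requests.get(API, params=params).json()
for page in data["query"]["pages"].values():
    for rev in page.get("revisions", []):
        print(rev["timestamp"], rev["revid"])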
Thanks,
Jackie, James, Junyi, Kirby
Hello all,
For almost a year, the Wikidata development team has been working on the
task of redesigning and migrating the wb_terms table, which had become too
big and unsustainable over the years.
You can read the tale of our journey in this blog post: Coming to Terms
with Changes
<https://phabricator.wikimedia.org/phame/post/view/195/coming_to_terms_with_…>
If you’re a tool maintainer and your tool queries the Labs database
replicas directly, you can read more details
<https://lists.wikimedia.org/pipermail/wikidata/2020-March/013901.html>
about the next steps and how to update your code.
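As a rough illustration only (a sketch, not official migration guidance),
a label lookup that used to go through wb_terms might now look something
like this against the new normalized tables; please verify the table and
column names and the replica host against the documentation linked above:

import os.path
import pymysql

# Rough sketch: fetching the English label of an item from the new
# normalized term store on the database replicas, replacing a former
# wb_terms lookup. Host, table, and column names should be checked
# against the documentation linked above.
conn = pymysql.connect(
    host="wikidatawiki.analytics.db.svc.eqiad.wmflabs",  # placeholder host
    db="wikidatawiki_p",
    read_default_file=os.path.expanduser("~/replica.my.cnf"),
)
item_id = 42  # numeric part of Q42

sql = """
SELECT wbx_text
FROM wbt_item_terms
JOIN wbt_term_in_lang ON wbit_term_in_lang_id = wbtl_id
JOIN wbt_type         ON wbtl_type_id = wby_id
JOIN wbt_text_in_lang ON wbtl_text_in_lang_id = wbxl_id
JOIN wbt_text         ON wbxl_text_id = wbx_id
WHERE wbit_item_id = %s AND wby_name = 'label' AND wbxl_language = 'en'
"""
with conn.cursor() as cur:
    cur.execute(sql, (item_id,))
    print(cur.fetchone())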
Congratulations to all the developers involved in this big project!
Cheers,
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under the number 23855 Nz. Recognized as charitable
by the Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
Yo! I've written a short post on improving JSDocs with typing. It's
called "the best documentation automation can buy." If you write
JavaScript, you may find it informative. If you're already using this
new tooling in your projects, I'd be interested to hear your thoughts as
well.
In Grant's words, "Write the docs!"
https://phabricator.wikimedia.org/phame/post/view/194/the_best_documentatio…
Stephen
Hi Emufarmers!
Thanks for your attention! :) Yes, the tool might detect sentences that
are already referenced, because Citation Detective actually feeds *every
single sentence* in an article to the Citation Need model and extracts the
sentences with high scores.
Highlighting a sourced sentence doesn't mean the source used is unreliable,
as Citation Detective has no idea whether a sentence has a reference or
what the source contains. I would say it's more like a double confirmation:
the sentence needs a citation, and yes, there is a citation already.
You might wonder why the tool doesn't exclude sentences that already have a
reference. The reason is that a reference doesn't necessarily apply just to
the sentence right before it; it could apply to more than one sentence or a
whole paragraph, and there's no way to determine that from the wikitext.
That's why the tool was designed (at least for the initial version) to feed
every sentence to the model and compute its citation need score.
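In rough Python terms, the design amounts to this (the names are
illustrative, not the actual Citation Detective code):

import re

def split_into_sentences(text):
    # Naive splitter for illustration; the real pipeline uses proper
    # sentence segmentation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def flag_sentences(article_text, score_fn, threshold=0.5):
    # Score *every* sentence, with no attempt to skip sentences that
    # already carry a reference, and keep the high scorers.
    flagged = []
    for sentence in split_into_sentences(article_text):
        score = score_fn(sentence)  # Citation Need model score in [0, 1]
        if score >= threshold:
            flagged.append((sentence, score))
    return flagged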
The same is true for {{citation needed}} tags, so you will also find
sentences with a {{cn}} tag highlighted in the prototype. That means both
human and machine think the statement needs a citation to a reliable
source.
I hope this clarifies things for you. :)
Aiko
Dear all,
I hope this mail finds you well.
First of all, I would like to introduce myself: I'm Mahmoud Ahmed, studying
computer science at Cairo University.
I am currently trying to join GSoC 2020 with the project "Add 'Reverted'
filter to RC Filters", but I have some questions. Could you give me more
details about this project? I am very excited about it.
Thank you very much in advance.
Regards,
Mahmoud Ahmed.
--
Hi all,
I’m happy to announce the outcome of an Outreachy internship
<https://phabricator.wikimedia.org/T233707> that I’m finishing up. It is a
new tool and public dataset named Citation Detective which tool developers
and researchers can now use for their projects.
Citation Detective <https://meta.wikimedia.org/wiki/Citation_Detective>
contains sentences that have been identified as needing a citation using a
machine learning-based classifier published early last year
<https://arxiv.org/pdf/1902.11116.pdf> by WMF researchers and
collaborators. As part of Outreachy, I developed a tool
<https://github.com/AikoChou/citationdetective> (hosted on Toolforge
<https://tools.wmflabs.org>) to run through Wikipedia and extract
high-scoring sentences along with contextual information.
As an example use case for this data, I also created a proof of concept for
integrating Citation Detective and Citation Hunt
<https://tools.wmflabs.org/citationhunt>. Check out my prototype Citation
Hunt <https://tools.wmflabs.org/aiko-citationhunt>, which uses Citation
Detective to import sentences that would not normally be featured in
Citation Hunt. The repository for that is here
<https://github.com/AikoChou/citationhunt>.
This dataset currently includes sentences from ~120,000 randomly selected
articles from the English Wikipedia. In future work, we hope to expand this
to Wikipedia projects in more languages and a greater number of articles.
It is also possible to expand the database to contain more fields in a
future version, based on feedback from tool developers and researchers. More
use cases for this type of data were identified in a design research project
<https://meta.wikimedia.org/wiki/Research:Identification_of_Unsourced_Statem…>
conducted last year by Jonathan Morgan.
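If you'd like to explore the data from Toolforge, a first query could look
roughly like the following; the database name below is a placeholder and
the table and column names are assumptions, so please check the project
page for the actual schema:

import os.path
import pymysql

# Rough sketch: reading high-scoring sentences from the Citation Detective
# database on Toolforge. The database name is a placeholder and the table/
# column names are assumptions; see the project page for the real schema.
conn = pymysql.connect(
    host="tools.db.svc.eqiad.wmflabs",
    db="citationdetective_p",  # placeholder database name
    read_default_file=os.path.expanduser("~/replica.my.cnf"),
)
with conn.cursor() as cur:
    cur.execute(
        "SELECT sentence, score FROM statements ORDER BY score DESC LIMIT 10"
    )
    for sentence, score in cur.fetchall():
        print(round(score, 2), sentence[:80])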
You can find more information in our Wiki Workshop submission
<https://commons.wikimedia.org/wiki/File:Citation_Detective_WikiWorkshop2020…>
and on my blog <https://rollingmist.home.blog/>, which documents the whole
journey.
Thank you very much!
Kind regards,
Aiko
Hi!
I'm a student currently pursuing an MSc in Data Science, and I've been
thinking of applying to GSoC with Wikimedia this year. For over a year now
I've been a system admin of a medium-sized wiki; I've written a couple of
extensions (you can find them here:
https://www.mediawiki.org/wiki/User:Ostrzyciel) and some patches to core.
As the sysadmin of a wiki, I watch its performance closely, and over time
I've discovered that the single thing slowing the wiki down the most was
InstantCommons. It turns out the ForeignAPIRepo code is fine for a few
pages with a few images, but once your wiki starts using Commons imagery a
lot, things get ugly, quickly. Like parsing-a-page-takes-2-minutes ugly. Or
the whole wiki can collapse if Commons isn't responding for some reason.
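To make the failure mode concrete, here's a rough sketch of my own (not
MediaWiki code) contrasting one Commons API round trip per image, which is
roughly what a cold cache costs, with a single batched imageinfo request:

import time
import requests

API = "https://commons.wikimedia.org/w/api.php"
# Hypothetical file titles; real pages would show the effect even more.
files = ["File:Example%d.jpg" % i for i in range(20)]

def image_info(titles):
    # One imageinfo query for the given titles (batched with "|").
    return requests.get(API, params={
        "action": "query",
        "prop": "imageinfo",
        "iiprop": "url|size",
        "titles": "|".join(titles),
        "format": "json",
    }, timeout=10).json()

start = time.time()
for title in files:
    image_info([title])  # one round trip per image, like a cold cache
print("sequential: %.1fs" % (time.time() - start))

start = time.time()
image_info(files)        # one round trip for all twenty
print("batched:    %.1fs" % (time.time() - start))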
I think improving this would kind of correlate with Wikimedia's mission of
hosting the most accessible free media repository in the world :) I really
wish more people could use Commons extensively, and that would certainly
help it.
I did some research into the topic and came up with a few solutions, but I
am by no means an expert in MW architecture, so I would be grateful for
some help from people familiar with Parsoid and the action API.
You can find a more detailed explanation here:
https://phabricator.wikimedia.org/T247406
I am also looking for mentors for this project :)
Thank you!
Ostrzyciel
Hello,
I'm forwarding an invitation to the Wikimedia Café meetup for this month.
Pine
( https://meta.wikimedia.org/wiki/User:Pine )
---------- Forwarded message ---------
From: Lane Rasberry <lane(a)bluerasberry.com>
Date: Sun, Mar 15, 2020 at 10:43 PM
Subject: [Wikimedia-l] Wikimedia Café - Sat 28 March 2020...
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Hello,
I am writing to invite everyone to join the next online meeting of
Wikimedia Café on Saturday 28 March 2020 at 4:30 PM UTC. Details for
joining are at https://meta.wikimedia.org/wiki/Wikimedia_Café
(the video room will be open at that time: https://virginia.zoom.us/my/wikilgbt).
The agenda for this month includes discussing COVID-19 and Wikipedia and
how Wikimedia community members feel about WMF/community relations.
Wikimedia Café is a modest, one-hour, monthly online meeting which for the
past few months has had fewer than 10 attendees. At these meetings anyone
can propose to discuss any topic of broad Wikimedia community interest, as
if we all were able to meet in person over coffee. The meetings themselves
are an experiment in small group Wikimedia community conversation with
video chat, phone access options, and online shared notetaking. Please see
WikiProject Remote Event Participation for more information about this
general style of online event.
https://meta.wikimedia.org/wiki/WikiProject_remote_event_participation
- Anyone interested in joining may do so.
- Anyone interested in reading notes of past meetings can find them on
the meta page.
- If you want to get your ideas published in the wiki world, consider
looking at how this Café works, because voice chat with notetaking could
be a way to organize your own wiki community.
Thanks to Pine for hosting this, and thanks to anyone who submits topics
for discussion or is able to join.
--
Lane Rasberry
user:bluerasberry on Wikipedia
206.801.0814
lane(a)bluerasberry.com
Hello,
This is a brief and very unofficial note to say thanks for what
appears to be good reliability and performance of WMF's technical
infrastructure during the current global public health situation and
the swarms of contributing and reading activity on ENWP and elsewhere.
My guess is that there are many dashboards that show how site
performance and reliability are doing. At a convenient time, perhaps
for the upcoming issue of The Signpost, I'd be interested in seeing a
report regarding how the infrastructure is handling the situation.
Ever onward,
Pine
( https://meta.wikimedia.org/wiki/User:Pine )