Hi guys,
As part of my Computer Science degree I have to do a project, and I’ve decided to do it on MediaWiki spam.
I’m looking for any articles/papers/research about spam on MediaWiki. Likewise, anything about intelligent systems running on MediaWiki would also be helpful. If anyone knows of any articles, or knows of a website/journal which contains MediaWiki-related articles, it would be much appreciated!
Best regards,
Richard Cook
Hello. I am making a Tatar script converter. I have now written tests, and
working has become easier. I am talking about the tests in
\mediawiki\tests\phpunit\languages\classes\ . I am working on Windows.
I open a command-line window in \mediawiki\tests\phpunit\ , type
"chcp 65001" and "C:\xampp\php\php-cgi.exe phpunit.php languages", and
the tests run, but there are some disadvantages:
1) I count to 30 before the run completes, and my test is only one little
dot among the nearly 1000 dots there.
2) Some letters are not shown (only squares, i.e. the Lucida Console
font has no glyphs for them).
I would appreciate it if somebody could write a script to run my test from
a browser window. I think I can write such a test.php, but I cannot reuse
the already written class for the Tatar language; I can only copy tests
between the class and my script manually.
3) I will also need diff output like the one the test produces.
I think I can do that after spending some time learning the test scripts,
but maybe somebody can do it easily.
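In the meantime, the phpunit.php wrapper seems to pass standard PHPUnit
options through, so maybe the run can be limited to a single class with
--filter; the class name below is only my guess for what the Tatar test
would be called:

chcp 65001
C:\xampp\php\php-cgi.exe phpunit.php --filter LanguageTtTest languages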
( https://phabricator.wikimedia.org/T114878 )
This probably isn't the right place to report this, but if anyone
knows anyone who maintains GeoHack, today it's emitting KML
placemark names containing raw ampersands, which Google Earth
doesn't like. Example: [[Baltimore & Ohio Railroad Bridge,
Antietam Creek]]. Manually editing the ampersand to &amp; solves the
problem.
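Presumably the fix on the GeoHack side is just to XML-escape the title
before putting it into the <name> element; something like this, purely as
an illustration (not GeoHack's actual code):

// Illustrative only: escape the place name before emitting it in KML, so
// "Baltimore & Ohio Railroad Bridge, Antietam Creek" comes out as
// "Baltimore &amp; Ohio Railroad Bridge, Antietam Creek".
$name = htmlspecialchars( $title, ENT_QUOTES | ENT_XML1 );
echo "<Placemark><name>$name</name></Placemark>";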
Thank you for your message. I am currently out of the office, with no email access. I will be returning on October 12, 2015.
————————————————————
Thank you for your message. I am currently not in the office and am not reading e-mail. I will be back on 12 October 2015 and will deal with my e-mails then.
If you have questions regarding the study on “Forschungspraxis in den Geisteswissenschaften” (“Research Practice in the Humanities”), please contact Ms Alexa Schlegel (alexa.schlegel(a)inf.fu-berlin.de).
Thank you very much.
Kind regards,
Claudia Müller-Birn
On Oct 1, 2015, at 2:00 AM, communitymetrics(a)wikimedia.org wrote:
>
> Hi Community Metrics team,
>
> this is your automatic monthly Phabricator statistics mail.
>
> Number of accounts created in (2015-09): 363
> Number of active users (any activity) in (2015-09): 863
> Number of task authors in (2015-09): 477
> Number of users who have closed tasks in (2015-09): 268
>
> Number of projects which had at least one task moved from one column
> to another on their workboard in (2015-09): 209
>
> Number of tasks created in (2015-09): 3351
> Number of tasks closed in (2015-09): 2981
>
> Number of open and stalled tasks in total: 25811
>
> Median age in days of open tasks by priority:
> Unbreak now: 11
> Needs Triage: 125
> High: 164
> Normal: 301
> Low: 660
> Lowest: 528
> (How long tasks have been open, not how long they have had that priority)
>
> TODO: Numbers which refer to closed tasks might not be correct, as described in T1003.
>
> Yours sincerely,
> Fab Rick Aytor
>
> (via community_metrics.sh on iridium at Thu Oct 1 00:00:08 UTC 2015)
>
Hello everyone,
With this e-mail I want to inform you that, with change I20e46165fb76 [1],
we removed a long-lived hook in MobileFrontend called EnableMobileModules.
It was added to allow extensions to add modules to OutputPage only if the
page was requested for a "mobile" friendly output (a mobile device, or the
"Mobile/Desktop" switch at the bottom of every page). The hook was
deprecated with a notice a long time ago and has now been removed
completely.
Although it seems that the hook wasn't used in any extension hosted in
Wikimedia's git, I want to give some hints on how to achieve what this
hook was used for, for extensions that may not be hosted in Wikimedia's git.
== Register mobile (only) modules ==
If you used the EnableMobileModules hook to register a module only when the
page is a "mobile" page, you should migrate to ResourceLoader. It provides
an easy way to specify the target of a module (as you might know already),
so you can define the module with the mobile target only
('targets' => array( 'mobile' )) and add the module through the
BeforePageDisplay hook, as in the sketch below. If you need to invest some
time to migrate to this approach, please allow me to mention that it could
also be a good moment to think about the size of the modules you want to
(or already) load for the mobile page, too :) (Think of people's data plans
and network speeds :P)
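A minimal sketch of what that could look like; the module name, file paths
and extension name are placeholders, not anything shipped by MobileFrontend:

// Placeholder module name and paths; the important part is the 'targets' line.
$wgResourceModules['ext.myExtension.mobile'] = array(
	'scripts' => array( 'resources/ext.myExtension.mobile.js' ),
	'styles' => array( 'resources/ext.myExtension.mobile.css' ),
	'targets' => array( 'mobile' ),
	'localBasePath' => __DIR__,
	'remoteExtPath' => 'MyExtension',
);

// Add the module unconditionally; ResourceLoader only delivers it when the
// page is served for the mobile target.
$wgHooks['BeforePageDisplay'][] = function ( OutputPage &$out, Skin &$skin ) {
	$out->addModules( 'ext.myExtension.mobile' );
	return true;
};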
== Run code in mobile mode only ==
If you need to run some code only in the "mobile page" context (only when a
page is requested in mobile mode), you may have used the EnableMobileModules
hook for that (even though it wasn't intended for this use case). Going
forward, you can use the BeforePageDisplayMobile hook instead, which has
already been running alongside EnableMobileModules for a while. It is nearly
a copy of the BeforePageDisplay hook (with the same signature and
behaviour), but it is only executed in mobile mode; see the sketch below.
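Again just a sketch, re-using the placeholder module name from above:

// Runs only when the page is rendered in mobile mode; the signature mirrors
// BeforePageDisplay, as described above.
$wgHooks['BeforePageDisplayMobile'][] = function ( OutputPage &$out, Skin &$skin ) {
	$out->addModules( 'ext.myExtension.mobile' );
	return true;
};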
If you have any questions about how to migrate some code, or about other
mobile (code) related things, feel free to e-mail mobile-l or ask in
#wikimedia-mobile!
Have a nice weekend!
Best,
Florian
[1] https://gerrit.wikimedia.org/r/#/c/243226/6
Hi all,
[Sorry I'm new to the list so I'm not sure if a similar discussion has
happened before or if the questions appear naive.]
I am working with a master's student and another colleague on Wikimedia
image data. The idea is to combine the meta-data and some descriptors
computed from the content of the images in Wikimedia with the structured
data of DBpedia/Wikidata to (hopefully) create a semantic search service
over these images.
The goal would ultimately be to enable queries such as "give me images
of cathedrals in Europe" or "give me images where an Iraqi politician
met an American politician" or "give me pairs of similar images where
the first image is a Spanish national football player and the second
image is of somebody else". These queries are executed based on the
combination of structured data from DBpedia/Wikidata, and standard image
descriptors (used, e.g., for searching for similar images).
The goal is ambitious but from our side, nothing looks infeasible. If
you are interested, a sketch of some of the more technical details of
our idea is given in this short workshop paper:
http://aidanhogan.com/docs/imgpedia_amw2015.pdf
In any case, for this project, we would need to get the meta-data and
the image content itself for as many of the Wikimedia images linked from
Wikipedia as possible. So our questions would be:
* How many images are we talking about in Wikimedia (considering most
recent version, for example)?
* How many are linked from Wikipedia (e.g., English, any language)?
* What overall on-disk size would those images be?
* What would be the best way to access/download those images in bulk?
* How could we get the meta-data as well?
Any answers or hints on where to look would be great.
From our own searches, it seems the number of Wikimedia images is
around 23 million and those used on Wikipedia (all languages) is around
6 million, so we're talking about a ball-park of maybe 10 terabytes of
raw image content? We know we can extract a list of relevant Wikidata
images from the Wikipedia dump. In terms of getting image content and
meta-data in bulk, crawling is not a great option for obvious reasons
... the possible options we found mentioned on the Web were:
1. The following mirror for rsynching image data:
http://ftpmirror.your.org/pub/wikimedia/images/
2. The All Images API to get some meta-data for images (but not the
content). https://www.mediawiki.org/wiki/API:Allimages
So the idea we are looking at right now is to get images from 1. and
then try to match them with the meta-data from 2. Would this make the most
sense? Also, the only documentation for 1. we could find was:
https://meta.wikimedia.org/wiki/Mirroring_Wikimedia_project_XML_dumps#Media
Is there more of a description on how the folder structure is organised
and how, e.g., to figure out the URL of each image?
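For reference, our current working assumption for both points (the hashed
upload layout behind 1. and the Allimages API in 2.) is roughly the PHP
sketch below; we have not verified it against the mirror's layout, so
corrections are very welcome:

<?php
// Working assumption only: Commons files appear to live under a hashed path
// derived from the MD5 of the underscored file name, i.e.
// https://upload.wikimedia.org/wikipedia/commons/<x>/<xy>/<File_name>
function commonsImageUrl( $title ) {
	$name = str_replace( ' ', '_', $title );
	$hash = md5( $name );
	return 'https://upload.wikimedia.org/wikipedia/commons/'
		. $hash[0] . '/' . substr( $hash, 0, 2 ) . '/' . rawurlencode( $name );
}

// Page through list=allimages on Commons for the meta-data (point 2. above).
$params = array(
	'action'   => 'query',
	'list'     => 'allimages',
	'aiprop'   => 'url|size|mime|sha1',
	'ailimit'  => '500',
	'format'   => 'json',
	'continue' => '',
);
do {
	$url = 'https://commons.wikimedia.org/w/api.php?' . http_build_query( $params );
	$data = json_decode( file_get_contents( $url ), true );
	foreach ( $data['query']['allimages'] as $img ) {
		// The API already returns a URL, so we can compare it with the derived one.
		echo $img['name'], "\t", $img['url'], "\t", commonsImageUrl( $img['name'] ), "\n";
	}
	if ( isset( $data['continue'] ) ) {
		$params = array_merge( $params, $data['continue'] );
	}
} while ( isset( $data['continue'] ) );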
Any hints or feedback would be great.
Best/thanks,
Aidan
All,
I want to share an update about the WMF’s product leadership. As of today,
Wes Moran has stepped into the role of Vice President of Product. In this
role, he’ll oversee the Foundation’s product across all audience verticals,
reporting to me. Please join me in welcoming Wes into this new role.
Wes has more than 17 years of experience in product design and development,
with expertise in open source communities, user-centered design,
interaction design, user experience, user research and testing, product
management, and more. Since joining the Foundation, he has led the
newly-created Discovery team to focus on user needs, transparency, and
accountability, and worked closely with our colleagues at Wikimedia
Deutschland on key projects such as Wikidata. He has been an invaluable
resource as we have continued to recruit for a Chief Technology Officer,
working with Terry and me to cover critical leadership responsibilities.
With this change, Tomasz Finc, Katie Horn, Toby Negrin, and Trevor Parscal
will report to Wes. Tomasz will now lead the Discovery team. Tomasz has
been Director of Discovery and helped build and lead development of this
new team. He has served in many leadership capacities over his 7 years here
at the Foundation, and joined us from Amazon, where he worked with the A9
search team. Please join me in congratulating Wes and Tomasz.
~~~~Lila
In a recent meeting of the MediaWiki Architecture Committee, it was
agreed that Timo Tijhof (Krinkle) would be invited to join the
committee. Timo accepted this invitation.
Timo is a talented software engineer with experience in many areas,
especially the MediaWiki core and JavaScript frontend components such
as ResourceLoader and VisualEditor. He currently works for WMF in the
performance team. I look forward to working with him on the
Architecture Committee.
-- Tim Starling