On 23-Apr-14 21:29, wikitech-l-request(a)lists.wikimedia.org wrote:
> Re: API attribute ID for querying wikipedia pages
@Matma Rex: This is way too general; I think it would be much better
if it were more detailed. For example, when I want to fetch the
table of all currencies on
https://en.wikipedia.org/wiki/List_of_circulating_currencies, I would
make an API call like this:
https://en.wikipedia.org/w/api.php?action=parse&page=List%20of%20circu….
This returns 5 sections with "numbers" which I can use as reference
points, but I would rather have a "number" for each table within a
section, since a section can have multiple tables.
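To illustrate, the sections list that action=parse returns looks
roughly like this (abridged; the heading text here is just a
placeholder):

  { "parse": { "sections": [
    { "toclevel": 1, "level": "2", "line": "Some section heading",
      "number": "1", "index": "1", "anchor": "Some_section_heading" },
    ...
  ] } }

Each section has an "index", but the tables inside it have nothing
comparable.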
Querying specific (structured) data from Wikipedia is still very
difficult, in my opinion. My suggestion is that every paragraph, image,
link and table should get a unique, stable identifier. That way
Wikipedia becomes more machine-readable.
Just a reminder: this session is about to start.
---------- Forwarded message ----------
From: Manuel Schneider <manuel.schneider(a)wikimedia.ch>
Date: Thu, Apr 24, 2014 at 9:00 AM
Subject: [Wikimedia-l] Wikimedia Hackathon: Info session in one hour!
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Dear all,
in one hour we will have an info session via Google Hangout about the
upcoming Wikimedia Hackathon in Zürich. If you want to join, just check
out this event:
https://plus.google.com/events/cj4okkse0n8ealb7mntrc4458a8
The video will also be shared on YouTube.
Below is the mail that was sent to everyone who has already registered
for the Hackathon.
By the way, you can still register; we even have a few beds left in
reserve:
https://docs.google.com/forms/d/1nlrQ7cox36xaNK1u9iKP-thogY5TVrilOGJR79DqQ9…
> the Wikimedia Foundation has arranged a Google Hangout in order to
> give a quick introduction to the upcoming Wikimedia Hackathon in
> Zürich. We have decided to open this meeting to all participants and
> will discuss how to get to the Youth Hostel, give an outlook on the
> program, and answer your questions.
>
> If you are interested and have time, you can participate in this
> Hangout today, April 24th, at 17:00 UTC, here:
> * https://plus.google.com/events/cj4okkse0n8ealb7mntrc4458a8
>
> The session will be recorded on YouTube, so everyone else can watch
> it afterwards. If you cannot attend but have questions, please send
> them to me.
Regards,
Manuel
--
Wikimedia CH - Verein zur Förderung Freien Wissens
Lausanne, +41 (21) 34066-22 - www.wikimedia.ch
_______________________________________________
Wikimedia-l mailing list
Wikimedia-l(a)lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
<mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
--
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
Several people had a meeting last week to start answering the question:
> What do our SOA plans look like for the next fiscal year? How much
> of this work will be spent refactoring MW Core, vs. augmenting with
> new, non-MediaWiki framework(s)?
Here's my summary of what we agreed on last week and what we still need
to work out. (Participants included WMF engineers, Wikia engineers, and
Markus Glaser.)
Agreed:
* Modularity is a good thing
* WMF will start inviting a Wikia engineer to the weekly "scrum of
scrums" meeting
* Wikia will have an all-team meeting in May in SF, and will then
compare notes with WMF about which toolset they're choosing for
WikiFactory & the new mobile article page prototype
(Please also see lines 240-252 of the full notes for SOA risks and
deliverables we'll need from service providers:
http://etherpad.wikimedia.org/p/soa-kickoff )
Undecided:
* what parts of MediaWiki will be refactored to follow in Parsoid's
footsteps, who is going to do that, and when
* what the working group's concrete targets would be, and who would be in it
* what languages to write services in
* whether Rashomon will only have a REST API or a PHP one as well
* how to communicate between WMF and Wikia (wikitech-l? a different
list? lightweight IRC meetings?)
* who Wikia's negotiating partner at WMF is, i.e. who can
authoritatively decide which languages/tools MediaWiki will use
Given this, I'll continue working on architectural guidelines that
prescribe modularity in general but don't specifically call for SOA in
all new features.
Thanks.
--
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation
I'm trying to write a script that removes duplicate links at the click of
a toolbar button, but when I set the function as the callback for the
button, it runs on page load instead. This is what I've got:
function removeDuplicateLinks() {
    var box = $( '[id^=wpTextbox]' );
    var text = box.val();
    var seen = {};
    // Keep the first occurrence of each [[target]] or [[target|label]];
    // unlink every later occurrence, leaving just its plain text.
    text = text.replace( /\[\[([^\]|]+)(?:\|([^\]]+))?\]\]/g,
        function ( match, target, label ) {
            if ( seen[ target ] ) {
                return label || target;
            }
            seen[ target ] = true;
            return match;
        } );
    box.val( text );
}

if ( mw.config.get( 'wgAction' ) === 'edit' ) {
    mw.toolbar.addButton( {
        imageFile: 'http://localhost/wikidev/images/2/20/Button_cite_template.png',
        speedTip: 'Remove duplicate links',
        // Pass the function itself, not removeDuplicateLinks(): the
        // parentheses would call it immediately, on page load.
        callback: removeDuplicateLinks
    } );
}
I've tried setting the callback to the string 'removeDuplicateLinks', to
the bare reference removeDuplicateLinks, and I've even tried turning it
into an anonymous function bound to a variable, which I then passed as
the callback. Am I misusing the syntax here?
I'd appreciate any help.
--
----
Justin Folvarcik
*"When the power of love overcomes the love of power, the world will
finally know peace."*-Jimi Hendrix
I have just pushed a new version of the TitleValue patch to Gerrit:
<https://gerrit.wikimedia.org/r/106517>.
I have also updated the RfC to reflect the latest changes:
<https://www.mediawiki.org/wiki/Requests_for_comment/TitleValue>.
Please have a look. I have tried to address several issues with the previous
version and to reduce the proposal's complexity. I have also tried to adjust
the service interfaces to make migration easier.
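In case it helps reviewers: the core idea is to separate a cheap, immutable
value object from the services that operate on it, roughly like this
(simplified sketch; the actual interfaces are in the patch):

  // A TitleValue is a dumb, immutable value object:
  $value = new TitleValue( NS_USER, 'Daniel' );

  // Parsing, formatting and link rendering live in services:
  $text = $formatter->getFullText( $value );   // "User:Daniel"
  $html = $renderer->renderHtmlLink( $value ); // an <a> tag for the page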
Any feedback would be very welcome!
-- daniel
Hi all,
I have been accepted into the Google Summer of Code 2014 and will be
working on the "Tools for mass migration of legacy translated wiki content"
project [1].
As per the project timeline, I have drafted the requirements [2] for this
project after discussions with my mentors, Nikerabbit and Nemo_bis. It
would be great if translation admins and translators could have a look at
the requirements and let us know about any further requirements from their
side.
Please mention them on the corresponding discussion page [3]. I would also
like some feedback on the importance of the requirements marked as
optional on that page.
I need to get this done within the next couple of days, so I would
appreciate it if you could give it a quick look. :)
Thank you!
[1] https://www.mediawiki.org/wiki/Extension:Translate/Mass_migration_tools
[2]
https://www.mediawiki.org/wiki/Extension:Translate/Mass_migration_tools/Req…
[3]
https://www.mediawiki.org/wiki/Extension_Talk:Translate/Mass_migration_tool…
--
Warm Regards,
*Pratik Lahoti*
User:BPositive <http://www.mediawiki.org/wiki/User:BPositive>
'WikiTrack' started out from an idea I had about tracking edits on
Wikipedia from mobile. I have always felt there is more to tracking edits
and their quality on Wikipedia: for one, we could zero in on the best
contributors instead of just going by edit counts. And doing that on
mobile makes for a tool that can be used on the go.
WikiTrack's initial version was released last year on Android for the
Wikipedia projects of a few Indian languages, namely Tamil, Malayalam,
Kannada and Sanskrit. The initial version lets users track the Recent
Changes, the diffs between changes, their Watchlist and any user's
contributions. Each of these apps has seen thousands of downloads (as of
the day of sending this email, on Google's Play Store: WikiTrack Tamil [1]
- 17345 downloads with an average rating of 4.2 out of 5; WikiTrack
Kannada [2] - 14480 downloads with an average rating of 4.038 out of 5;
WikiTrack Malayalam [3] - 7756 downloads with an average rating of 3.8
out of 5), with thousands of active users tracking their favorite
Wikimedia project through the application. The project has so far been
self-funded, and the app's source code has been released under the GPL.
I've put in an IEG proposal [4] with the goal of consolidating these
different apps into one, supporting all Indian languages and beyond
(covering as many languages as possible), rewriting the code to make it
better, and extending the app further to improve its utility, including a
release for iOS.
The idea is to achieve the above goal and get the app into good shape
before planning further ways of letting editors track Wikimedia projects
from mobile and keeping them engaged.
I realize this is a very last-minute request, but I would love to hear
feedback about it.
[1]
https://play.google.com/store/apps/details?id=com.saaranga.wikitracktamil
[2] https://play.google.com/store/apps/details?id=com.saaranga.wikikannada
[3]
https://play.google.com/store/apps/details?id=com.saaranga.wikitrackmalayal…
[4] https://meta.wikimedia.org/wiki/Grants:IEG/WikiTrack
In case this is the wrong list to post this to, kindly accept my apologies
and help by pointing me to the right one.
Thanks!
--
Hari Prasad Nadig
http://hpnadig.net
http://twitter.com/hpnadig
http://flickr.com/hpnadig
What is the proper way to make an empty edit to an existing article programmatically, in a maintenance script? I tried this:
$title = Title::newFromText( "My existing article name" ); // succeeds
$wikiPage = new WikiPage( $title ); // succeeds
// Re-save the page's current content with an empty summary (a null edit):
$wikiPage->doEditContent( $wikiPage->getContent( Revision::RAW ), '' );
but doEditContent just sits there until I get "limit.sh: timed out" in my shell. I can do an empty edit through the browser (click Edit, then click Save) just fine.
Thanks,
DanB
Currently we are experiencing problems when we try to query Wikipedia.
Fetching content via the Wikipedia API could be a lot easier, in our
opinion. The problem is that content has to be fetched via the
"rvsection" parameter, which accepts a number identifying a section by
its position, counting from the top section to the bottom one. This is a
very "dangerous" way of fetching content: when another section is
inserted at the top of the page, all section numbers below it shift up
by one.
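For example, the following query (using the documented revisions-module
parameters) fetches whatever happens to be section 1 at that moment:

https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&rvsection=1&titles=List_of_circulating_currencies

If someone adds a new section above it, the exact same call silently
returns different content.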
A better way of fetching content via an API would be to assign a unique,
stable ID to every section, paragraph, table, image and so on. That way
we could simply fetch a specific part of a Wikipedia page by its ID.
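To illustrate the idea, such a call might hypothetically look like this
(the "rvelementid" parameter does not exist; it is only a sketch):

https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&rvelementid=t42&titles=List_of_circulating_currencies

Here "t42" would be a stable ID attached to one specific table,
unaffected by edits elsewhere on the page.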
I would like to know if my problem is shared by other developers and by
the Wikipedia API team.
Kind regards,
Daan Kuijsten