Is validation the only purpose of action=titleblacklist, or can it be used to actually override the blacklist? If not, then I have a proposal: could the action return an override token that could then be used in the task to be completed, if the user has that right?
Sent from Maximilian's iPhone.
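For context, the validation call the question refers to looks roughly like the sketch below. The tbtitle/tbaction parameter names come from the TitleBlacklist extension's API module; the proposed override token does not exist, so this only shows the current check-style request.

```python
def build_titleblacklist_check(title, action="create"):
    """Return api.php parameters asking whether `title` is blacklisted.

    This is an illustrative sketch of the validation request only; it
    does not (and currently cannot) request any override token.
    """
    return {
        "action": "titleblacklist",
        "tbtitle": title,      # the title to test against the blacklist
        "tbaction": action,    # the operation being attempted
        "format": "json",
    }

params = build_titleblacklist_check("Some/New Page")
```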
At the Semantic MediaWiki conference (SMWCon) a few days ago, Yuri
mentioned that we're considering making our web API JSON-only. In
response, Steve Newcomb emailed me the message below, and gave me
permission to forward it to mediawiki-api for your consideration. Thank
you, Steve Newcomb.
--
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation
Dear Ms. Harihareswara,
The remarks that appear below, after my signature, are informed by
participation in years of earnest presentations and discussions about
XML vs. JSON at the Balisage conferences (see balisage.org).
The remarks below are extracted from the documentation of a tool we use
in our consulting practice, which includes data management/publishing
services for U.S. government customers. The extract is from a
discussion of how the tool can optionally format XML for
human-readability *without* polluting the data with spurious new
whitespace. Then it digresses to more general considerations in a NOTE
which is directly relevant to the JSON vs. XML question.
It all boils down to a simple question: will the data ever be used
outside its current known applications and/or software? If the answer
is "No", then JSON is probably the right choice. If the answer is
"Yes", then XML is certainly a better choice, but then the questions
arise: "Whose perspective on the data should be baked into it?", and
"Who will pay the cost of baking it in?"
All best wishes for you and for humanity's ongoing invention of
civilization, which depends on the longevity of knowledge,
Steve Newcomb
srn(a)coolheads.com
----------------------------------
In consideration of the haphazard way in which XML data are sometimes
processed in the real world, one may with some justification worry
about how a given XML document may someday be understood,
especially when whitespace is significant. [This tool's] use of markup
characters for all readability-whitespace moots the criticism of
XML that JSON is easier than XML to read and use for data
interchange on account of the fact that, in JSON, all whitespace
is intrinsically explicit and not subject to subsequent diddling
when parsed, even when JSON data are elegantly formatted for
readability.
Note: Needless to say, both syntaxes, XML and JSON, have
advantages and disadvantages. In the context of this
discussion, it may be worthwhile to highlight the essential
difference between JSON and XML, which is that XML provides
(demands, really) an explicit distinction between data and
data-about-data (metadata), while JSON does not.
In other words, XML requires specific classes of things to
be endowed with names, while JSON imposes no such
constraint. XML offers a standard way of unambiguously
distinguishing the names of classes of data, and the names
of attributes of those classes, from the data themselves.
These names must be chosen somehow. Normally, the chosen
names are meaningful. The choice of a specific name by a
human being is the making of a semantic commitment. Thus,
in XML, data are expressed in a way that almost inevitably
reflects how someone (perhaps even the author!) thought the
data should, or at least could, be understood. JSON, by
contrast, does not demand that such a perspective be
explicitly embedded in the data. If such a perspective is
embedded in JSON data, JSON does not provide a standard way
of abstracting that perspective from the data.
But neither syntax prohibits the processing of data in terms
of a data/metadata perspective other than the one(s) that
were embedded in them. Whatever information XML can convey,
JSON can also convey, and vice versa. However, if a
data/metadata distinction needs to be baked into the data,
such as when the data may need to be understood by a human
being apart from any specific software application, XML is
simpler to use, and the baked-in data/metadata distinction
will be universally understandable as such, not only because
of the World Wide Web Consortium XML Recommendation, but
also because of ISO International Standard 8879-1986, as
amended.
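The point that either syntax can convey the same information, while XML bakes named metadata into the data itself, can be illustrated with a tiny sketch (the record and names here are invented for illustration):

```python
import json
import xml.etree.ElementTree as ET

# The same record in both syntaxes. In the XML form, "book", "title",
# and "year" are explicit metadata names baked into the data; the JSON
# form carries the same information, but only because someone chose to
# embed the same perspective as object keys.
xml_text = '<book year="1986"><title>SGML Handbook</title></book>'
json_text = '{"book": {"year": "1986", "title": "SGML Handbook"}}'

root = ET.fromstring(xml_text)
from_xml = {"book": {"year": root.get("year"),
                     "title": root.findtext("title")}}
from_json = json.loads(json_text)

# Both parses recover identical information.
assert from_xml == from_json
```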
If a baked-in data/metadata distinction is not desired, JSON
is pretty clearly the better choice, but then at least two
questions arise:
(1) Are you certain that an embedded data/metadata
distinction will be undesirable for all future
applications of these data, including applications
that do not yet exist?
(2) Are you certain that you wish to forego your
opportunity to influence how these data will be
understood, including by persons as yet unborn?
Hi, we would like to place Wikipedia locations on an interactive map, with
each article associated with its location. Hopefully this will create a
map-centric display of Wikipedia historical locations (rather than a
text/page-centric one). Don't worry, full credit and links will be given.
So: is there a reliable way, using the API, to gather the coordinates of
locations (that have them)? Ideally, we would like a pair of queries that:
* Return the articles of a category that ALSO have a coordinate specified
* Return the article, or some kind of data block with that coordinate,
based on the article's title
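Assuming the wiki has the GeoData extension installed, the first query above can be expressed by combining generator=categorymembers with prop=coordinates. The sketch below builds the parameters and extracts coordinates from a typical response shape (parameter names from GeoData; treat this as a hedged sketch, and the sample values as invented):

```python
def category_coordinates_params(category):
    """api.php parameters: members of `category`, with their coordinates."""
    return {
        "action": "query",
        "generator": "categorymembers",
        "gcmtitle": category,      # e.g. "Category:Castles in England"
        "prop": "coordinates",     # provided by the GeoData extension
        "format": "json",
    }

def extract_coords(response):
    """Map page title -> (lat, lon) for pages that have coordinates."""
    pages = response.get("query", {}).get("pages", {})
    result = {}
    for page in pages.values():
        for coord in page.get("coordinates", []):
            result[page["title"]] = (coord["lat"], coord["lon"])
    return result

# Invented example of the response shape; pages without a "coordinates"
# key are exactly the "patchy" articles with no geolocation data.
sample = {"query": {"pages": {
    "1": {"title": "Hastings", "coordinates": [{"lat": 50.85, "lon": 0.57}]},
    "2": {"title": "No coords here"},
}}}
```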
I have watched the video tutorial and dabbled with various queries and
page parses. I can get a list of articles that match a given category,
and I can compile a list of categories. However, I am finding the
geolocation data to be somewhat patchy, but that's probably down to me
not knowing the magic formula.
So, does anyone know how to address this issue? Any help most appreciated.
Cheers
Chris Thomas
I should have re-read this more carefully. Something (which wasn't
germane anyway) is wrong:
"...subsequent diddling when parsed, even when JSON data are elegantly
formatted for readability."
is wrong.
"...subsequent diddling, even when JSON data are elegantly formatted for
readability."
is correct.
I.e., "when parsed" doesn't belong there.
Steve Newcomb
On 03/24/2013 02:54 PM, Sumana Harihareswara wrote:
> At the Semantic MediaWiki conference (SMWCon) a few days ago, Yuri
> mentioned that we're considering making our web API JSON-only. In
> response, Steve Newcomb emailed me the message below, and gave me
> permission to forward it to mediawiki-api for your consideration. Thank
> you, Steve Newcomb.
>
Remember that clients should not depend on the specific query string
data returned inside the query-continue node, with either old-style or
new-style continuation.
As a partial fix for bug 24782[1], Gerrit change 22742[2] will cause
action=query&list=recentchanges to start returning rccontinue for
continuations rather than rcstart as it has done in the past. Changes of
this type may come to other modules as well, as further fixes for bug
24782 are implemented.
[1]: https://bugzilla.wikimedia.org/show_bug.cgi?id=24782
[2]: https://gerrit.wikimedia.org/r/#/c/22742/
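The safe pattern is to treat everything inside the query-continue node as opaque and copy it verbatim into the next request, rather than hard-coding parameter names like rcstart or rccontinue. A sketch of that pattern, assuming the standard JSON response shape:

```python
def apply_continue(params, response):
    """Return the parameters for the next request, or None when done.

    Every key/value pair under query-continue is copied blindly into the
    next request, so the client keeps working whether the server sends
    rcstart, rccontinue, or something else entirely.
    """
    cont = response.get("query-continue")
    if cont is None:
        return None
    next_params = dict(params)
    for module in cont.values():   # one sub-node per continuing module
        next_params.update(module)
    return next_params

params = {"action": "query", "list": "recentchanges", "format": "json"}
# Invented example of a continuation node:
page1 = {"query-continue": {"recentchanges": {"rccontinue": "20130324|42"}}}
```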
--
Brad Jorsch
Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
Hi all.
When I protect a certain page (action=protect&protections=edit=G8), all
users of the wiki are blocked from editing it (as expected), but so are
the members of G8 (the group that should be able to edit this page)...
Does anyone know about this issue?
I initially created the following two lines in LocalSettings.php:
$wgGroupPermissions['G8']['edit'] = false;
$wgGroupPermissions['G8']['read'] = false;
Afterwards, I changed these permissions to 'true':
$wgGroupPermissions['G8']['edit'] = true;
$wgGroupPermissions['G8']['read'] = true;
I have also modified the following line in the DefaultSettings.php file:
$wgRestrictionLevels = array( '', 'autoconfirmed',
'sysop','G1','G2','G3','G4','G5','G6','G7','G8');
But nothing changed...
Does anyone know if there is something else I need to configure?
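One thing worth checking: a custom entry in $wgRestrictionLevels is generally matched against a user *right* of the same name, and local changes belong in LocalSettings.php rather than DefaultSettings.php (which is overwritten on upgrade). A hedged sketch, assuming the restriction level is named 'G8':

```php
// LocalSettings.php — avoid editing DefaultSettings.php directly.
// A restriction level corresponds to a user right of the same name,
// so the G8 group needs the 'G8' right, not just 'edit':
$wgGroupPermissions['G8']['G8'] = true;

// Append the custom level instead of redefining the whole array:
$wgRestrictionLevels[] = 'G8';
```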
Thanks for your help.
péricles
--
«L’homme qui veut s’instruire doit lire d’abord, et puis voyager pour
rectifier ce qu’il a appris.»
“The man who seeks to educate himself must first read and then travel in
order to correct what he has learned.”
CASANOVA, Giovanni Giacomo
Yuri, whom you've seen working on the MediaWiki API, now works for the
Wikimedia Foundation. :-)
best,
Sumana Harihareswara
-------- Original Message --------
Subject: [Wmfall] Yuri Astrakhan & Adam Baso join Mobile department
partner team
Date: Mon, 18 Mar 2013 10:29:42 -0700
From: Tomasz Finc <tfinc(a)wikimedia.org>
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>, "Staff
(All)" <wmfall(a)lists.wikimedia.org>
Greetings all,
I'm pleased to announce that the mobile department has two new staff
members. Yuri Astrakhan & Adam Baso join as sr. software developers on
the mobile partner team. In this role Yuri and Adam will support
projects like Wikipedia Zero, SMS/USSD, and J2ME to further the reach
of our projects in geographic areas that have both financial and
technical impediments to accessing Wikipedia. They will be working
closely with Kul and Dan from the global development group.
Yuri was heavily involved in Wikipedia-related projects from 2005 to
2007, developing the API framework and querying subsystem, contributing
to pywikibot code, and making millions of changes as yurikbot, while at
the same time working as a software consultant for several large banks.
In 2008 Yuri joined a small hedge fund to lead the development of an
automated trading platform. While there, Yuri continued various open
source projects such as a time-series database (timeseriesdb). After
over five years, Yuri has rejoined the MediaWiki community and will be
working for us from New York.
Adam spent the past seven years working in the field of information
security, specializing in application security, identity management,
and encryption in the retail, government, and banking sectors. Adam
led the OWASP Minneapolis-Saint Paul chapter for a couple of years,
and proudly organized the OWASP AppSec USA 2011 conference. Adam and
his wife are relocating to San Francisco from Minneapolis-Saint Paul,
and they look forward to the opportunity to live in such a thriving
software-friendly community.
The mobile group is excited and proud to welcome both Yuri & Adam as
sr. engineers to the partner team.
This completes the team and allows them to work aggressively to reach
our 4 billion page target through outreach projects like Wikipedia
Zero.
Please join me in welcoming Yuri and Adam to the Wikimedia Foundation!
--tomasz
_______________________________________________
Wmfall mailing list
Wmfall(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wmfall
As some folks may recall, an action=createaccount was added to the API a
few weeks ago. Unfortunately the first pass didn't include CAPTCHA support,
so we haven't been able to use it for the live sites yet (which use
ConfirmEdit's "FancyCaptcha" mode). We expect to start using this in the
next couple weeks for the mobile Commons apps, so it's time to make it
captcha-friendly...
I've made a first stab at adding support, based on the existing captcha
interfaces for login and editing:
MediaWiki core: https://gerrit.wikimedia.org/r/53793
ConfirmEdit ext: https://gerrit.wikimedia.org/r/53794
So far I've tested it with the default 'math captcha' mode, with this test
rig: https://github.com/brion/mw-createaccount-test
If a captcha needs to be run, action=createaccount will return a result
containing a 'captcha' field with several subfields, as in this
example:
https://www.mediawiki.org/wiki/API_talk:Account_creation#Captcha_additions
Since account creation requires a first request to get a token anyway, this
shouldn't significantly add to the complexity of using the API.
Text captchas will have a 'question' subfield to be presented; image
captchas will have a 'url' field which should be loaded as the image.
'type' and 'mime' will vary, and probably shouldn't be relied on too closely.
Pass back the captcha id field in 'captchaid' and the response word in
'captchaword' parameters along with the final request, and it should pass
the captcha. If the captcha doesn't pass, you get back new captcha info and
can continue.
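From a client's point of view, the retry loop described above might look like the following sketch. The captchaid/captchaword parameters and the 'captcha' subfields come from the description above; the helper name and the sample response values are invented:

```python
def build_retry(request, response, answer):
    """If a createaccount response demands a captcha, return a retry
    request carrying the user's `answer`; otherwise return None.

    For a text captcha, `answer` would come from showing the 'question'
    subfield to the user; for an image captcha, from displaying 'url'.
    """
    captcha = response.get("createaccount", {}).get("captcha")
    if captcha is None:
        return None
    retry = dict(request)
    retry["captchaid"] = captcha["id"]   # echo the captcha id back
    retry["captchaword"] = answer        # the user's solution
    return retry

# Invented example of the response shape for an image captcha:
resp = {"createaccount": {"result": "NeedCaptcha",
                          "captcha": {"type": "image", "id": "12345",
                                      "url": "/w/captcha/image.png"}}}
```

If the captcha doesn't pass, the next response carries fresh captcha info, so the same helper can simply be called again.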
Questions:
* Does this seem to make sense? :)
* Will other API users kill me for this or will it be easy to use?
* Any surprises that may turn up?
* Any bugs/style problems in my code so far?
-- brion