Hello all,
I am happy to announce the beta release of the Wikimedia REST Content API
at
https://rest.wikimedia.org/
Each domain has its own API documentation, which is auto-generated from
Swagger API specs. For example, here is the link for the English Wikipedia:
https://rest.wikimedia.org/en.wikipedia.org/v1/?doc
At present, this API provides convenient and low-latency access to article
HTML, page metadata and content conversions between HTML and wikitext.
After extensive testing we are confident that these endpoints are ready for
production use, but have marked them as 'unstable' until we have also
validated this with production users. You can start writing applications
that depend on it now, if you aren't afraid of possible minor changes
before the endpoints transition to 'stable' status. For the definitions of
the terms 'stable' and 'unstable', see
https://www.mediawiki.org/wiki/API_versioning .
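If you would like to try this from code, here is a minimal PHP sketch
that fetches the rendered HTML of an article (the page/html/{title}
route is listed in the per-domain docs; 'Foobar' is just a placeholder
title):

  <?php
  // Fetch the rendered HTML of an article from the REST content API.
  $title = rawurlencode( 'Foobar' );
  $url = "https://rest.wikimedia.org/en.wikipedia.org/v1/page/html/$title";

  $ch = curl_init( $url );
  curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
  // Please send a descriptive User-Agent so that we can contact API users.
  curl_setopt( $ch, CURLOPT_USERAGENT, 'rest-content-api-example/0.1' );
  $html = curl_exec( $ch );
  if ( $html === false ) {
      die( 'Request failed: ' . curl_error( $ch ) . "\n" );
  }
  curl_close( $ch );
  echo $html;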
While the API is general and not specific to VisualEditor, the selection
of endpoints reflects this release's focus on speeding up VisualEditor.
By storing
private Parsoid round-trip information separately, we were able to reduce
the HTML size by about 40%. This in turn reduces network transfer and
processing times, which will make loading and saving with VisualEditor
faster. We are also switching from a cache to actual storage, which will
eliminate slow VisualEditor loads caused by cache misses. Other users of
Parsoid HTML, such as Flow, HTML dumps, the OCG PDF renderer and Content
Translation, will benefit similarly.
But we are not done yet. In the medium term, we plan to further reduce the
HTML size by separating out all read-write metadata. This should allow us
to use Parsoid HTML with its semantic markup
<https://www.mediawiki.org/wiki/Parsoid/MediaWiki_DOM_spec> directly for
both views and editing without increasing the HTML size over the current
output. Combined with performance work in VisualEditor, this has the
potential to make switching to visual editing instantaneous and free of any
scrolling.
We are also investigating a sub-page-level edit API for micro-contributions
and very fast VisualEditor saves. HTML saves don't necessarily have to wait
for the page to re-render from wikitext, which means that we can
potentially make them faster than wikitext saves. For this to work we'll
need to minimize network transfer and processing time on both client and
server.
More generally, this API is intended to be the beginning of a multi-purpose
content API. Its implementation (RESTBase
<http://www.mediawiki.org/wiki/RESTBase>) is driven by a declarative
Swagger API specification, which makes it straightforward to extend the
API with new entry points. The same API spec is also used to
auto-generate the aforementioned per-domain documentation, complete with handy
"try it" buttons. So, please give it a try and let us know what you think!
This API is currently unmetered; we ask that users not exceed 200
requests per second, and we may introduce rate limits if necessary.
I also want to use this opportunity to thank all contributors who made this
possible:
- Marko Obrovac, Eric Evans, James Douglas and Hardik Juneja on the
Services team worked hard to build RESTBase, and to make it as extensible
and clean as it is now.
- Filippo Giunchedi, Alex Kosiaris, Andrew Otto, Faidon Liambotis, Rob
Halsell and Mark Bergsma helped to procure and set up the Cassandra storage
cluster backing this API.
- The Parsoid team with Subbu Sastry, Arlo Breault, C. Scott Ananian and
Marc Ordinas i Llopis is solving the extremely difficult task of converting
between wikitext and HTML, and built a new API that lets us retrieve and
pass in metadata separately.
- On the MediaWiki core team, Brad Jorsch quickly created a minimal
authorization API that will let us support private wikis, and Aaron Schulz,
Alex Monk and Ori Livneh built and extended the VirtualRestService that
lets VisualEditor and MediaWiki in general easily access external services.
We welcome your feedback here: https://www.mediawiki.org/wiki/Talk:RESTBase
and in Phabricator
<https://phabricator.wikimedia.org/maniphest/task/create/?projects=RESTBase&…>.
Sincerely --
Gabriel Wicke
Principal Software Engineer, Wikimedia Foundation
Hi wikimedia dev,
I noticed that rel=canonical links on ruwiki are using "https" (while
enwiki is still using "http").
I'd like to understand whether this is just a bug or the start of some
larger change across all language sites (and if so, what's the timeline)?
--
Jiang BIAN
This email may be confidential or privileged. If you received this
communication by mistake, please don't forward it to anyone else, please
erase all copies and attachments, and please let me know that it went to
the wrong person. Thanks.
Hi,
The Romanian Wikipedia has a code repository used mainly for bots of
interest to the local community. So far, it has been hosted on Google
Code. Since that site is closing, we are considering replacements, and
one of the possibilities is the Wikimedia git server.
We would like to know what steps we should follow in order to have a
project created.
Thanks,
Strainu
Hello and welcome to the latest edition of the WMF Engineering Roadmap
and Deployment update.
The full log of planned deployments next week can be found at:
<https://wikitech.wikimedia.org/wiki/Deployments#Week_of_March_16th>
For a longer term view, see the new Roadmap project in Phabricator:
<https://phabricator.wikimedia.org/tag/roadmap/>
A quick list of notable items for next week...
== All Week ==
* Progressive rollout of RESTBase to all wikis
** <https://phabricator.wikimedia.org/T89066>
== Tuesday ==
* MediaWiki deploy
** group1 to 1.25wmf21: All non-Wikipedia sites (Wiktionary, Wikisource,
Wikinews, Wikibooks, Wikiquote, Wikiversity, and a few other sites)
** <https://www.mediawiki.org/wiki/MediaWiki_1.25/wmf21>
== Wednesday ==
* MediaWiki deploy
** group2 to 1.25wmf21 (all Wikipedias)
** group0 to 1.25wmf22 (test/test2/testwikidata/mediawiki)
Thanks and as always, questions and comments welcome,
Greg
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
Excellent, thank you Keegan and all who have worked on this.
Pine
On Mar 13, 2015 1:07 PM, "Keegan Peterzell" <kpeterzell(a)wikimedia.org>
wrote:
> Hi all,
>
> Single-user login[1] finalization will be taking place next month.[2] I
> know this has been said before over the past two years, but it is actually
> going to take place after nearly a decade of waiting :)
>
> I just posted an important announcement about the renaming process itself
> on Meta.[3] If you're interested, please take some time to read it over and
> help translate if possible. A one sentence message about this has been sent
> to village pumps as well, where available.
>
> All accounts that will be affected by this will be contacted on their talk
> page within the next couple of days.[4] All local wikis also have a
> publicly listed database of users who will be renamed, available at
> Special:UsersWhoWillBeRenamed.[5]
>
> Thanks for your time, please help spread the word to your other mailing
> lists and/or communities.
>
> 1. https://meta.wikimedia.org/wiki/Help:Unified_login
> 2.
> https://meta.wikimedia.org/wiki/Single_User_Login_finalisation_announcement
> 3.
>
> https://meta.wikimedia.org/wiki/Single_User_Login_finalisation_announcement…
> 4.
>
> https://meta.wikimedia.org/wiki/Single_User_Login_finalisation_announcement…
> 5. for example,
> https://vo.wikipedia.org/wiki/Patikos:UsersWhoWillBeRenamed
>
> --
> Keegan Peterzell
> Community Liaison, Product
> Wikimedia Foundation
> _______________________________________________
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> Wikimedia-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
Hi folks,
I'd like to make this extension a bit more robust:
http://www.mediawiki.org/wiki/Extension:TemplateTable
In short, the extension adds a tag which displays a table of all the
invocations of a given template.
Currently this is implemented by fetching the text of each page the
template is referenced on and searching for braces (e.g. {{ }}).
I was thinking that running the text through the parser and hooking
all the template expansions to grab the params would be more robust.
However, there don't appear to be any hooks that would provide access
to this data.
So, 2 questions:
1. Any opposition to adding a hook in the parser called every time a
template is expanded (with the template name and parameters as args)?
2. Any hints as to where in the parser would be a good place to add that hook?
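To illustrate question 1, here is roughly what I have in mind (the
hook name and argument list are hypothetical):

  // In Parser.php, somewhere in braceSubstitution() once the template
  // title and its arguments have been resolved:
  Hooks::run( 'ParserTemplateExpanded', array( $this, $title, $args ) );

  // TemplateTable could then register a handler to collect the params
  // instead of re-scanning page text for braces:
  $wgHooks['ParserTemplateExpanded'][] = function ( $parser, $title, $args ) {
      // record $title and $args for later display in the table
      return true;
  };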
Thanks,
Rusty
Greetings!
We’d love to hear what you think of the Wikimedia Blog, so we can improve it to better serve you and our movement.
Could you please take this short survey, if you haven’t already?
https://www.surveymonkey.com/s/wikimedia-blog?c=email-wikitech
The survey ends tomorrow. Your feedback will help improve the blog — and inform our editorial strategy for WMF communications.
We would also be grateful if you would invite others in your community to take the survey, so we get more responses from across our movement.
We will post survey results at the end of March, both on the blog and on Meta.
Thanks for sharing your feedback!
Fabrice
P.S.: If you are not familiar with the blog, we invite you to visit it before you take the survey:
https://blog.wikimedia.org/
_______________________________
Fabrice Florin
Movement Communications Manager
Wikimedia Foundation
https://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)
Dear All,
On the occasion of "Marathi Bhasha Divas", we organised a Photothon on
Marathi Wikipedia from February 27 to March 6, and we received more than
7,000 photographs through the web and an Android app.
We also hosted an edit-a-thon during the celebration of Women's Day.
Around 17 women members of Marathi Wikipedia took part in this event,
wherein they were given a presentation on the categorization of the
images collected in the Photothon and a short introduction on how to
write articles on Wikipedia. Around 6,000 photographs were categorized
by these members; the event was conducted by Selva Rani.
Thank you
1)
https://mr.wikipedia.org/wiki/%E0%A4%9A%E0%A4%BF%E0%A4%A4%E0%A5%8D%E0%A4%B0…
2)
https://mr.wikipedia.org/wiki/%E0%A4%9A%E0%A4%BF%E0%A4%A4%E0%A5%8D%E0%A4%B0…
3)
https://mr.wikipedia.org/wiki/%E0%A4%9A%E0%A4%BF%E0%A4%A4%E0%A5%8D%E0%A4%B0…
---
Santosh M. Shingare,
Research Assistant,
Indian Institute of Technology Bombay,
Powai, Mumbai - 400076
Maharashtra, India
Mob: +91 9890984632
Email: santosh.shingare(a)gmail.com
It should be possible to create an HTML-only wiki, with Visual Editor
as the primary editing mechanism and no wikitext parsing for typical
views and edits. Advanced users could install Parsoid to round-trip
from the HTML DOM to wikitext for source editing, translating from
wikitext back to the HTML DOM for database storage and display.
(Eventually we may similarly allow round-trip "source" editing in
other formats, such as Markdown or a new and refreshed "wikitext 2.0"
-- but let's limit discussion to VE/HTML for now.)
So what are the architectural improvements needed?
* ContentHandler[1] laid the groundwork for non-wikitext page content.
Building on it, an HTML-format "Mediawiki DOM" ContentHandler must be
written, using DOM methods to separate sections and extract redirects;
a rough sketch follows this list. The "Mediawiki DOM" Content
implementation must extract secondary data (links, categories, etc)
directly from the DOM. (Alternatively, page metadata could be stored
in a separate JSON "page metadata" attachment, with custom editors
provided.)
* An HTML-based DifferenceEngine[2] must be implemented to allow
visualizing changes without resorting to wikitext.
* VisualEditor must be tweaked to fetch Mediawiki DOM directly,
bypassing Parsoid; ditto on save. (I believe RESTBase is working
toward this already.)
* System messages must be associated with a content model, to allow
HTML-formatted system messages. Localization workflows need to
accommodate non-wikitext messages. Most messages do not need, and
should not have, formatting, and should probably shift to a
"plaintext" content model.
* The Sanitizer[3] will need improvement so that it is appropriate to
run directly on Mediawiki DOM.
* Compatibility thunks are also desirable. These would use Parsoid
(in-process?) to dynamically generate wikitext from the Mediawiki DOM
to allow some legacy extensions and APIs to function.
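As a very rough sketch of the ContentHandler item above (class and
model names are made up for illustration, building on the existing
TextContentHandler):

  class DomContentHandler extends TextContentHandler {
      public function __construct( $modelId = 'mediawiki-dom' ) {
          parent::__construct( $modelId, array( 'text/html' ) );
      }

      protected function getContentClass() {
          return 'DomContent';
      }
  }

  class DomContent extends TextContent {
      public function __construct( $html ) {
          parent::__construct( $html, 'mediawiki-dom' );
      }

      // Secondary data (links, categories, ...) would be extracted
      // here with DOM methods rather than by re-parsing wikitext.
  }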
What other areas would present roadblocks to an HTML-only wiki (or,
more generally, to a non-wikitext MediaWiki)? I'm hoping the MediaWiki
hackers on this list can educate me on other areas that might be
problematic (or other places where useful groundwork has already been
laid).
--scott
ps. I submitted a proposal very similar to this to wikimania 2015:
https://wikimania2015.wikimedia.org/wiki/Submissions/Mediawiki_without_wiki…
Help me improve my (proposed future) talk!
[1] https://www.mediawiki.org/wiki/Manual:ContentHandler
[2] https://git.wikimedia.org/blob/mediawiki%2Fcore.git/master/includes%2Fdiff%…
[3] https://git.wikimedia.org/blob/mediawiki%2Fcore.git/master/includes%2FSanit…
--
(http://cscott.net)