Just curious -- what's the state of forcing HTTPS for all user sessions?
It's simple common sense at this point to protect all our users from
session hijacking on local networks or MITM attacks.
I see some Gerrit activity on adding "preferences" or special groups for
HTTPS, which seems a horrid practice when we could just protect everyone...
I would like to announce the release of the MediaWiki Language Extension Bundle (MLEB) 2013.04.
* sha256sum: bd6aca60101308f429d90d421e35093328e7a05ea74d35c05a98474ab648dec4
* Installation instructions are at https://www.mediawiki.org/wiki/MLEB
* Announcements of new releases will be posted to a mailing list:
* Report bugs to https://bugzilla.wikimedia.org
* Talk with us at #mediawiki-i18n @ freenode
Release notes for each extension are below.
Amir E. Aharoni
== Babel ==
Only localisation updates.
== cldr ==
* Updated CLDR data to version 23 and rewrote the parser.
== CleanChanges ==
== LocalisationUpdate ==
== Translate ==
=== Highlights ===
Work continued on stabilizing the new Translate UX (TUX) interface.
Specifically, numerous fixes were made to make TUX work correctly in
Microsoft Internet Explorer.
The "message tools menu" was added to give easy access from the
message translation interface to the history of the message and to
translations in all languages (Special:Translations).
Basic file format support was added for XLIFF.
Initial work was done to add support for sandboxing - allowing users
to make several test translations before getting full translation
permissions. This is an incomplete experimental feature and it is not
active by default.
=== Noteworthy changes ===
* Removed the hide tab that appeared even when there were no more warnings.
* Messages saying that there is nothing to proofread were clarified.
* Show "Cancel" instead of "Skip" at the last message in the curent view.
* Clicking a suggestion copies the suggestion to the translation field
(previously, a user had to press a link, which was harder).
* Simple paging for translation search results.
* <tvar|> is no longer visible on translation pages in the source language (bug 46925)
* The bottom toolbar is always shown, even without a scroll.
* When a user modifies a translation, it is no longer proofreadable (bug 46687)
* Numerous minor styling changes.
* RTL fixes for Special:Magic, to ensure cleaner display of magic words
in right-to-left translations.
== UniversalLanguageSelector ==
=== Highlights ===
Starting with this release, MLEB is no longer compatible with
MediaWiki 1.19. To use MLEB 2013.04 or later, you must use it
with MediaWiki 1.20.4 or a later version.
Work began on making the ULS appear in the sidebar rather than at the
top of the page near the personal links menu. This is an incomplete
and experimental feature. It can be tested by setting the variable
$wgULSPosition to 'interlanguage'.
=== Noteworthy changes ===
* Wikimedia Foundation's GeoIP service is now used by default for
detecting the user's location and guessing the suggested language.
* A web font for the Divehi language was added.
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
“We're living in pieces,
I want to live in peace.” – T. Moore
I have my own idea for a GSoC project that I'd like to share with you.
It's not a perfect one, so please forgive any mistakes.
The project is related to the existing GSoC project "Incremental Data
dumps", but is in no way a replacement for it.
For a long time, a lot of offline solutions for Wikipedia have sprung up on
the internet. All of these have been unofficial solutions, and have
limitations. A major problem is the *increasing size of the data dumps*,
and the problem of *updating the local content*.
Consider the situation in a place where internet access is costly or
unavailable. (For the purpose of discussion, let's consider a school in a
third-world country.) Internet speeds are extremely slow, and accessing
Wikipedia directly from the web is out of the question.
Such a school would greatly benefit from an instance of Wikipedia on a
local server. Now, up to here, the school can use any of the freely
available offline Wikipedia solutions to make a local instance. The problem
arises when the database in the local instance becomes obsolete. The client
is then required to download an entire new dump (approx. 10 GB in size) and
load it into the database.
Another problem is that most third-party programs *do not allow
network access*, and a new instance of the database (approx. 40 GB) is
required on each installation. For instance, in a school with around 50
desktops, each desktop would require a 40 GB database. Plus, *updating*
them becomes even more difficult.
So here's my *idea*:
Modify the existing MediaWiki software to add a few PHP/Python scripts
which will automatically update the database and will run in the
background. (Details on how the update is done are described later.)
Initially, the modified MediaWiki will take an XML dump/SQL dump (SQL
dump preferred) as input and will create the local instance of Wikipedia.
Later on, the updates will be added to the database automatically by the
background scripts.
The installation process is extremely easy: it just requires a server
package like XAMPP and the MediaWiki bundle.
Process of updating:
There will be two methods of updating the server. Both will be implemented
in the MediaWiki bundle. Method 2 requires the functionality of
incremental data dumps, so it can be completed only after that
functionality is available. Perhaps I can collaborate with the student
selected for incremental data dumps.
Method 1 (online update): A list of all pages is made and published by
Wikipedia. This can be in an XML format. The only information in the XML
file will be the page IDs and the last-touched date. This file will be
downloaded by the MediaWiki bundle, and the page IDs will be compared with
the pages of the existing local database.
Case 1: A new page ID appears in the XML file: denotes a new page added.
Case 2: A page which is present in the local database is not among the page
IDs: denotes a deleted page.
Case 3: A page in the local database has a different 'last touched' date
compared to the one in the XML file: denotes an edited page.
In each case, the change is made in the local database, and if the new page
data is required, it is obtained using the MediaWiki API. (A rough sketch
of this comparison is shown below.)
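To make the comparison step concrete, here is a minimal sketch. The
proposal calls for PHP/Python scripts; TypeScript is used here purely for
illustration, and the names (diffPageLists, fetchPageContent) and the
example wiki URL are hypothetical. The API query shown is a standard
MediaWiki API revisions query.

```typescript
// Hypothetical sketch: the published page list is assumed to be already
// parsed into a map of page ID -> last-touched timestamp.
interface PageIndex {
  [pageId: number]: string;
}

// Classify every page as new, deleted, or edited by comparing the
// published list against the local database's index (cases 1-3 above).
function diffPageLists(remote: PageIndex, local: PageIndex) {
  const newPages: number[] = [];
  const deletedPages: number[] = [];
  const editedPages: number[] = [];
  for (const id of Object.keys(remote).map(Number)) {
    if (!(id in local)) {
      newPages.push(id); // case 1: new page
    } else if (remote[id] !== local[id]) {
      editedPages.push(id); // case 3: edited page
    }
  }
  for (const id of Object.keys(local).map(Number)) {
    if (!(id in remote)) {
      deletedPages.push(id); // case 2: deleted page
    }
  }
  return { newPages, deletedPages, editedPages };
}

// Fetch the current wikitext of new/edited pages through the MediaWiki API.
async function fetchPageContent(pageIds: number[]): Promise<unknown> {
  const url =
    'https://en.wikipedia.org/w/api.php?action=query&prop=revisions' +
    '&rvprop=content&format=json&pageids=' + pageIds.join('|');
  const response = await fetch(url);
  return response.json();
}
```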
These offline instances of Wikipedia will only be used in cases where
internet speeds are very low, so they *won't cause much load on the
servers*.
Method 2 (offline update; requires the functionality of the existing
project "Incremental data dumps"):
In this case, the incremental data dumps are downloaded by the
user (admin) and fed to the MediaWiki installation the same way the original
dump is fed (as a normal file), and the corresponding changes are made by
the bundle. Since I'm not aware of the XML format used in incremental
updates, I cannot describe it now.
Advantages: An offline solution can be provided for regions where internet
access is a scarce resource. This would greatly benefit developing nations,
and would help in making the world's information more freely and openly
available to everyone.
All comments are welcome !
PS: About me: I'm a second-year undergraduate student at the Indian
Institute of Technology, Patna. I code for fun.
Hobbies: CUDA programming, robotics, etc.
Kiran Mathew Koshy
While editing Wikipedia articles, I have often faced a situation where I
accidentally pressed "Back" in the browser, then spontaneously pressed
random buttons, returned to the edit form and... got a blank editor
or the last submitted version of the article. I've been deeply
frustrated every time I ruined a new article and had to start
it over again.
What can we do to fix this? I propose keeping the current textarea
state in localStorage, at least one version per article (we can store
versions by the relevant article ID, also adding the current timestamp
and other info to the structure). We could also let the user disable
keypress-triggered storage updates and instead save backups every N
seconds, or only save them by pressing a button. (A rough sketch
follows the note below.)
[Timestamp and possibly other info would be stored to collect garbage.
As we know, localStorage has a small quota.]
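Here is a minimal sketch of the client-side part, under stated
assumptions: the names (saveDraft, collectGarbage, editBackup-) and the
one-week retention are hypothetical choices, and #wpTextbox1 is the ID
MediaWiki gives the edit textarea.

```typescript
// Hypothetical gadget sketch: drafts are keyed by article ID and stored
// with a timestamp so old entries can be garbage-collected when the
// small localStorage quota fills up.
interface Draft {
  text: string;
  savedAt: number; // Unix ms timestamp, used for garbage collection
}

const KEY_PREFIX = 'editBackup-';
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // discard drafts older than a week

function saveDraft(articleId: number, text: string): void {
  const draft: Draft = { text, savedAt: Date.now() };
  try {
    localStorage.setItem(KEY_PREFIX + articleId, JSON.stringify(draft));
  } catch {
    // Quota exceeded: collect garbage, then retry once.
    collectGarbage();
    localStorage.setItem(KEY_PREFIX + articleId, JSON.stringify(draft));
  }
}

function loadDraft(articleId: number): Draft | null {
  const raw = localStorage.getItem(KEY_PREFIX + articleId);
  return raw ? (JSON.parse(raw) as Draft) : null;
}

// Remove drafts older than MAX_AGE_MS; iterate backwards because
// removing a key shifts localStorage indices.
function collectGarbage(): void {
  for (let i = localStorage.length - 1; i >= 0; i--) {
    const key = localStorage.key(i);
    if (key && key.startsWith(KEY_PREFIX)) {
      const draft = JSON.parse(localStorage.getItem(key)!) as Draft;
      if (Date.now() - draft.savedAt > MAX_AGE_MS) {
        localStorage.removeItem(key);
      }
    }
  }
}

// Save every N seconds instead of on every keypress, as suggested above.
const textarea = document.querySelector<HTMLTextAreaElement>('#wpTextbox1');
if (textarea) {
  setInterval(() => saveDraft(123, textarea.value), 30_000); // hypothetical article ID
}
```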
There is another approach: storing the backups on the server side.
Some people ( :] ) suggest that it's quite cheap and may be
reasonable. Anyhow, we cannot allow per-keypress backup updates due to
request latency. And it would be a disaster to process a huge bunch
of largely useless requests simultaneously, unless dedicated servers
are run to serve them swiftly and obediently.
All this stuff can be developed as a gadget. You could make it a
WikiEditor plugin. And it would be re-e-aally fantastic to make this
work with VisualEditor. Would anyone like to make this thing look good?
Or should I go about this differently? :)
Павел Селіцкас/Pavel Selitskas
Wizardist @ Wikimedia projects
Last November, I started to clean up the Glossary page on meta, in
an attempt to revive it and expand it to include many technical terms,
notably ones related to Wikimedia Engineering (see e-mail below).
There were (and are) already many glossaries spread around the wikis:
* one for MediaWiki: https://www.mediawiki.org/wiki/Manual:Glossary
* one for Wikidata: https://www.wikidata.org/wiki/Wikidata:Glossary
* one for Labs: https://wikitech.wikimedia.org/wiki/Help:Terminology
* two for the English Wikipedia:
My thinking at the time was that it would be better to include tech
terms in meta's glossary, because fragmentation isn't a good thing for
glossaries: The user probably doesn't want to search a term through a
dozen glossaries (that they know of), and it would be easier if they
could just search in one place.
The fact is, though, that we're not going to merge all the existing
glossaries into one anytime soon, so overlap and duplication will
remain anyway. Also, it feels weird to have tech content on meta, and
the glossary is getting very long (and possibly more difficult to
maintain). Therefore, I'm now reconsidering the decision of mixing
tech terms and general movement terms on meta.
Below are the current solutions I'm seeing to move forward; I'd love
to get some feedback as to what people think would be the best way to
proceed:
* Status quo: We keep the current glossaries as they are, even if they
overlap and duplicate work. We'll manage.
* Wikidata: If Wikidata could be used to host terms and definitions
(in various languages), and wikis could pull this data using
templates/Lua, it would be a sane way to reduce duplication, while
still allowing local wikis to complement it with their own terms. For
example, "administrator" is a generic term across Wikimedia sites
(even MediaWiki sites), so it would go into the general glossary
repository on Wikidata; but "DYK" could be local to the English
Wikipedia. With proper templates, the integration between remote and
local terms could be seamless. It seems to me, however, that this
would require significant development work.
* Google custom search: Waldir recently used Google Custom Search to
create a search tool to find technical information across many pages
and sites where information is currently fragmented:
We could set up a similar tool (or a FLOSS alternative) that would
include all glossaries. By advertising the tool prominently on
existing glossary pages (so that users know it exists), this could
allow us to curate more specific glossaries, while keeping them all
searchable with one tool.
Right now, I'm inclined to go with the "custom search" solution,
because it looks like the easiest and fastest to implement, while
reducing maintenance costs and remaining flexible. That said, I'd love
to hear feedback and opinions about this before implementing anything.
On Tue, Nov 20, 2012 at 7:55 PM, Guillaume Paumier wrote:
> The use of jargon, acronyms and other abbreviations throughout the
> Wikimedia movement is a major source of communication issues, and
> barriers to comprehension and involvement.
> The recent thread on this list about "What is Product?" is an example
> of this, as are initialisms that have long been known to be a barrier
> for Wikipedia newcomers.
> A way to bridge people and communities with different vocabularies is
> to write and maintain a glossary that explains jargon in plain English
> terms. We've been lacking a good and up-to-date glossary for Wikimedia
> "stuff" (Foundation, chapter, movement, technology, etc.).
> Therefore, I've started to clean up and expand the outdated Glossary
> on meta, but it's a lot of work, and I don't have all the answers
> myself either. I'll continue to work on it, but I'd love to get some
> help on this and to make it a collaborative effort.
> If you have a few minutes to spare, please consider helping your
> (current and future) fellow Wikimedians by writing a few definitions
> if there are terms that you can explain in plain English. Additions of
> new terms are very welcome as well:
> Some caveats:
> * As part of my work, I'm mostly interested in a glossary from a
> technical perspective, so the list currently has a technical bias. I'm
> hoping that by sending this message to a wider audience, people from
> the whole movement will contribute to the glossary and balance it out.
> * Also, I've started to clean up the glossary, but it still contains
> dated terms and definitions from a few years ago (like the FundCom),
> so boldly edit/remove obsolete content.
Technical Communications Manager — Wikimedia Foundation
We Japanese Wikipedians are currently discussing what we should do for
the next election of the Diet, which will be the first one since the
Election Law was changed to allow online election campaigning.
There is an idea to install the FlaggedRevs or PendingChanges extension
to hide unreviewed edits, in order to prevent defamation, vandalism and so on.
So I have a question: can we apply such functions to specific pages
(e.g. with a template or a magic word)? Or are there any other suitable
functions/extensions to handle such a problem?
I am a third-year undergraduate student in computer science and
engineering at a public engineering university in Bangladesh.
This year I developed a project on the constitution of Bangladesh,
where I used XML parsing techniques.
I want to develop an application for Android devices which can give users an
environment to read, share, and modify articles directly from an Android
device. Would this be a useful contribution?
This is my second attempt at a proposal, but I think this is a project
that is *much* better than my previous one, and is in much higher demand.
I'd love to work on this as a GSoC project!
Before I submit this as an official proposal, I'd like to ask for your
thoughts about this. The proposal concentrates on adding RTL support to
VisualEditor, especially based on this requirements/spec page:
Hebrew is my native language, and I'm familiar with a lot of the problems
that arise when using RTL, especially when working with a mix of
LTR and RTL languages.
A first draft of this proposal is available here:
However, this is my first time applying for GSoC and it's my first
time contributing to such a big project as MediaWiki and VisualEditor :)
I'd love to hear your thoughts, ideas and feedback!
Thank you again,
No trees were harmed in the creation of this post.
But billions of electrons, photons, and electromagnetic waves were terribly
inconvenienced during its transmission!