I have checked it with Bengali images; it works fine with 100% accuracy. Anyhow,
how can it be implemented in the Proofread extension?
---------- Forwarded message ----------
From: Subhashish Panigrahi <subhashish(a)cis-india.org>
Date: Sat, Aug 29, 2015 at 3:22 PM
Subject: [Wikimediaindia-l] Google's Optical Character Recognition software
now works with all South Asian languages
Google's OCR, which is apparently the most accurate OCR
we have seen so far, works really well for all the major South Asian languages.
Here are test cases for many Indian scripts: https://goo.gl/3X75iR.
Except for Gurmukhi, most scripts are working really well.
This could be really useful for Indian-language Wikimedians and will
come in handy for the digitization of printed and scanned text. Here is an
animated tutorial on how Wikimedians can use this tool.
Please write to me if anyone wants to localize this tutorial into your
language.
Programme Officer, Access To Knowledge
Centre for Internet and Society
@subhapa / https://cis-india.org
I'm deeply convinced that splitting Wikisource projects into various
languages has been a mistake.
Is anyone bold enough to imagine that it is possible to revert that mistake?
Or are we forced to travel along the *diabolicum* trail?
Can you tell me how many communities are running, right now, the contest for
the birthday of Wikisource?
it.source is doing that, but I don't know about the Catalans, for example.
Also, I remember there was code for counting people here:
Is it updated?
---------- Forwarded message ----------
From: Gnangarra <gnangarra(a)gmail.com>
Date: Friday 6 November 2015
Subject: [Wikimedia-l] TPP - copyright
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>, "
affiliates(a)lists.wikimedia.org" <affiliates(a)lists.wikimedia.org>, Wikimedia
Commons Discussion List <commons-l(a)lists.wikimedia.org>
We have a new problem to face in the coming months, assuming countries
ratify the Trans-Pacific Partnership.
The text of the agreement was released in the last 24 hours, and early
commentary indicates that copyright changes will occur, restoring
copyright to some works that are currently PD.
According to reports, this will affect media sourced in Canada, where copyright
will be extended from 50 to 70 years, meaning that images in this period may
need to be deleted both on Commons and on en:wp; Australian-sourced images
face a similar issue, as will those from other countries.
Rather than a piecemeal Commons copyright battle, with a duplicate one on
en:wp led by unqualified wikilawyers resulting in project
discrepancies, I'm calling on the community to take a more holistic approach
and request that the WMF ask its legal eagles to give an edict we can
take to our communities to explain what will happen in each jurisdiction as the
TPP is ratified.
This will also give us guidance as to how Affiliates can approach and
support activities locally to ensure material that is already freely
available remains so.
President, Wikimedia Australia
Your code modifications for http://wikitolearn.org/ are interesting. I'm
pretty sure that KDE policies don't force you to fork MediaWiki
extensions locally, so your patches are definitely welcome upstream.
I'm not sure what you mean by your point about <dmath> being rejected
by the community; perhaps you refer to some performance decision made by
the WMF. If your modifications to Math are incompatible with some decision
of the maintainers, you can ask for a different repository on Gerrit or
another branch of the same repository, so that non-WMF users can use them.
As for your comments on chapters and drafts, I don't see anything
incompatible with how Wikibooks and Wikiversity work. If you have a
solution for what we call "book management" i.e.
https://phabricator.wikimedia.org/T17071 (worked on by Raylton and
), that's especially interesting.
To reach the Wikibooks and Wikiversity community, the best way is to use
a medium that can involve their active editors, such as their mailing
lists (cc'ed here) or wikis.
As is typical for smaller communities, connecting all the pages to Wikidata
items can be a major undertaking. Luckily many users have already used
Template:Wikipedia (https://www.wikidata.org/wiki/Q15632185) to connect
Wikisource pages to Wikipedia pages. Because Wikipedia pages are already
quite well connected to Wikidata, it is easy to find the common item.
I ran a bot on ar-wikisource and found ~400 inclusions of the template. The
bot set ~170 sitelinks. The rest of the pages either already had a sitelink
or the corresponding Wikipedia page did not have a Wikidata item yet.
Having also checked around 30 items by hand, I couldn't find any mistakes,
which shows that the template is well curated. If anyone would like me to
run the bot on another language of Wikisource, or would like me to share the
code, just send me a message.
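The matching step the bot performs can be sketched roughly as follows. The actual bot code was not shared in this message, so the template parameter convention (first positional parameter = the linked Wikipedia article) and the pywikibot calls shown in the comments are assumptions, not the author's implementation:

```python
import re

def extract_wikipedia_titles(wikitext):
    """Find {{Wikipedia|Title}} transclusions in a Wikisource page's wikitext.

    Assumption: the Wikipedia article title is the first positional
    parameter of the template; the real template may be used differently.
    """
    pattern = re.compile(r"\{\{\s*[Ww]ikipedia\s*\|([^}|]+)")
    return [title.strip() for title in pattern.findall(wikitext)]

sample = "Intro text.\n{{Wikipedia|Arabic literature}}\nMore text."
print(extract_wikipedia_titles(sample))  # → ['Arabic literature']

# From each extracted title, the sitelink could then be set via pywikibot
# (a sketch, not the bot's verified code):
#   wp_page = pywikibot.Page(pywikibot.Site("ar", "wikipedia"), title)
#   item = pywikibot.ItemPage.fromPage(wp_page)  # Wikidata item of the WP page
#   item.setSitelink(wikisource_page)            # add the Wikisource sitelink
```

Pages whose item already has a Wikisource sitelink, or whose Wikipedia page has no item, would simply be skipped, matching the counts reported above.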
I just wanted to thank everybody who participated in the event, and
especially the team that managed it. Also, special credit to WMFr,
which granted scholarships to me and several other people; I was very pleased.
I learned a lot during this week and was really happy with the outcome.
Being able to have face to face discussions with some of you was really
great. Your enthusiasm and
kindness are a major source of motivation and inspiration. I hope that
each attendee will have a trouble-free return trip, and that those who
stayed some extra days thoroughly enjoy their time in Vienna
and its surroundings. I was personally totally impressed by the architecture!
Also it was such a pleasure to be able to dance with Asaf. :P
(unfortunately, there is only the paper version).
Karen Coyle is a great librarian who also collaborates with the Internet
Archive and OpenLibrary. She's also an expert in FRBR and bibliographic
models, so this book should really be of interest to all of us who are
trying to wrap our minds around books, Wikisource and Wikidata :-)