Hi all,
I have tested it with Bengali images, and it works with 100% accuracy. Anyhow, how can it be implemented in the Proofread extension?
Regards,
Jayanta
---------- Forwarded message ----------
From: Subhashish Panigrahi <subhashish(a)cis-india.org>
Date: Sat, Aug 29, 2015 at 3:22 PM
Subject: [Wikimediaindia-l] Google's Optical Character Recognition software
now works with all South Asian languages
To: wikimediaindia-l(a)lists.wikimedia.org
Google's OCR, which is apparently the most accurate OCR we have seen so far, works really well for all the major South Asian scripts:
http://globalvoicesonline.org/2015/08/29/googles-optical-character-recognition-software-now-works-with-all-south-asian-languages
Here are test cases for many Indian scripts: https://goo.gl/3X75iR.
Except for Gurmukhi, most scripts are working really well.
This could be really useful for Indian-language Wikimedians and will come in handy for the digitization of printed and scanned text. Here is an animated tutorial for Wikimedians on using this tool for Wikisource/Wikipedia:
https://commons.wikimedia.org/wiki/File:Tutorial_to_use_Google_Optical_Character_Recognition.gif
Please write to me if anyone wants to localize this tutorial into their language.
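For anyone wondering what the tool does under the hood: the flow shown in the tutorial boils down to uploading a scan to Google Drive and asking Drive to convert it to a Google Doc, which triggers the OCR, then exporting the resulting text. Here is a minimal Python sketch of that flow (not the tutorial's own code) using google-api-python-client; the temporary file name and the 'bn' Bengali language hint are purely illustrative:

    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    def ocr_scan(creds, image_path, lang='bn'):
        """Upload a scan, convert it to a Google Doc (which runs OCR),
        export the recognized text, then delete the temporary doc."""
        drive = build('drive', 'v3', credentials=creds)
        doc = drive.files().create(
            body={'name': 'ocr-temp',
                  'mimeType': 'application/vnd.google-apps.document'},
            media_body=MediaFileUpload(image_path, mimetype='image/png'),
            ocrLanguage=lang,  # language hint for the OCR engine
            fields='id',
        ).execute()
        text = drive.files().export(
            fileId=doc['id'], mimeType='text/plain').execute()
        drive.files().delete(fileId=doc['id']).execute()
        return text.decode('utf-8')

An integration with ProofreadPage would presumably call something like this server-side and paste the returned text into the page's edit box.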
--
Best!
Subhashish Panigrahi
Programme Officer, Access To Knowledge
Centre for Internet and Society
@subhapa / https://cis-india.org
I'm deeply convinced that splitting Wikisource projects into various languages has been a mistake.
Is anyone bold enough to imagine that it is possible to revert that mistake?
Or are we forced to travel along the *diabolicum* trail?
Alex
Hi everyone,
can you tell me how many communities are currently running the contest for Wikisource's birthday?
it.source is doing it, but I don't know about the Catalans, for example.
Also, I remember there was code for counting participants here:
http://pastebin.com/2bifzpjn
Is it up to date?
Aubrey
---------- Forwarded message ----------
From: Gnangarra <gnangarra(a)gmail.com>
Date: Friday 6 November 2015
Subject: [Wikimedia-l] TPP - copyright
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>, "
affiliates(a)lists.wikimedia.org" <affiliates(a)lists.wikimedia.org>, Wikimedia
Commons Discussion List <commons-l(a)lists.wikimedia.org>
We have a new problem to face in the coming months, assuming countries ratify the Trans-Pacific Partnership:
https://en.wikipedia.org/wiki/Trans-Pacific_Partnership
The text of the agreement was released in the last 24 hours, and early commentary indicates that copyright changes will occur, restoring copyright to some works that are currently PD.
http://boingboing.net/2015/11/06/how-tpp-will-clobber-canadas.html
According to reports, this will affect media sourced in Canada, where copyright will be extended from 50 to 70 years, meaning that images in this period may need to be deleted both on Commons and on en:wp. Australian-sourced images face a similar issue, as will those from other countries.
Rather than a piecemeal Commons copyright battle, with a duplicate one on en:wp led by unqualified wikilawyers and resulting in discrepancies between projects, I'm calling on the community to take a more holistic approach and to request that the WMF ask its legal eagles for an edict we can take to our communities, explaining what will happen in each jurisdiction as the TPP is ratified.
This will also give us guidance as to how affiliates can approach and support activities locally, to ensure that material which is already freely available remains so.
--
Gideon
President, Wikimedia Australia
WMAU: http://www.wikimedia.org.au/wiki/User:Gnangarra
Your code modifications for http://wikitolearn.org/ are interesting. I'm
pretty sure that KDE policies don't force you to fork MediaWiki
extensions locally, so your patches are definitely welcome upstream.
I'm not sure what you mean by your point about <dmath> being rejected by the community; perhaps you are referring to some performance decision made by the WMF. If your modifications to Math are incompatible with some decision of the maintainers, you can ask for a different repository on Gerrit, or for another branch in the same repository, so that non-WMF users can use your code.
As for your comments on chapters and drafts, I don't see anything incompatible with how Wikibooks and Wikiversity work. If you have a solution for what we call "book management", i.e. https://phabricator.wikimedia.org/T17071 (worked on by Raylton and others; see https://meta.wikimedia.org/wiki/Category:GSoC_Mediawiki_Book_Experience), that's especially interesting.
To reach the Wikibooks and Wikiversity community, the best way is to use
a medium that can involve their active editors, such as their mailing
lists (cc'ed here) or wikis.
Nemo
Hi all!
As is typical for smaller communities, connecting all the pages to Wikidata items can be a major undertaking. Luckily, many users have already used Template:Wikipedia (https://www.wikidata.org/wiki/Q15632185) to connect Wikisource pages to Wikipedia pages. Because Wikipedia pages are already well connected to Wikidata, it is easy to find the common item.
I ran a bot on ar-wikisource and found ~400 inclusions of the template. The bot set ~170 sitelinks; the rest of the pages either already had a sitelink or the corresponding Wikipedia page did not have a Wikidata item yet. Having also checked around 30 items by hand, I couldn't find any mistakes, which shows that the template is well curated. If anyone would like me to run the bot on another language of Wikisource, or would like me to share the code, just send me a message.
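For anyone curious before asking, a bot along these lines can be sketched in a few lines of Pywikibot. This is an illustrative reconstruction, not Tobias's actual code; it assumes the template's first positional parameter holds the Wikipedia page title:

    import pywikibot
    from pywikibot import textlib

    def link_via_wikipedia_template(code='ar'):
        ws = pywikibot.Site(code, 'wikisource')
        wp = pywikibot.Site(code, 'wikipedia')
        template = pywikibot.Page(ws, 'Template:Wikipedia')
        # Walk every page that transcludes the template.
        for page in template.getReferences(only_template_inclusion=True):
            try:
                pywikibot.ItemPage.fromPage(page)
                continue  # page is already connected to a Wikidata item
            except pywikibot.exceptions.NoPageError:
                pass
            # Read the Wikipedia title from the template's first parameter.
            for name, params in textlib.extract_templates_and_params(page.text):
                if name == 'Wikipedia' and '1' in params:
                    try:
                        item = pywikibot.ItemPage.fromPage(
                            pywikibot.Page(wp, params['1']))
                    except pywikibot.exceptions.NoPageError:
                        break  # the Wikipedia page has no item yet
                    item.setSitelink(page, summary='Add Wikisource sitelink')
                    break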
-Tobias
Hello,
I just wanted to thank everybody who participated in the event, and especially the team that managed it. Also, special credit to WMfr, which granted scholarships to me and several other people I was pleased to meet.
I learned a lot during this week and was really happy with the outcome. Being able to have face-to-face discussions with some of you was really great. Your enthusiasm and kindness are a major source of motivation and inspiration. I hope that each attendee has a trouble-free return trip, and that those who stayed some extra days thoroughly enjoy their time in Vienna and its surroundings. I was personally totally impressed by the architecture!
Also it was such a pleasure to be able to dance with Asaf. :P
Kind regards,
mathieu
Here:
https://books.google.com.br/books?hl=en&lr=lang_en&id=AYX3CgAAQBAJ&oi=fnd&p…
(unfortunately, there is only the paper version).
Karen Coyle is a great librarian who also collaborates with the Internet Archive and OpenLibrary. She's also an expert in FRBR and bibliographic models, so this book should be of real interest to all of us who are trying to wrap our minds around books, Wikisource, and Wikidata :-)
Aubrey