Scripto is an alternative to the ProofreadPage extension used
by Wikisource. It is based on MediaWiki but also on OpenLayers,
the software used to zoom and pan in OpenStreetMap.
The only website I have seen that uses Scripto is the U.S.
War Department papers, and in many ways it is clumsier
than ProofreadPage. But there might be a few ideas worth
picking up. Take a look.
The software is described at http://scripto.org/
As for reference installations, they mention
http://wardepartmentpapers.org/transcribe.php
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
In a recent discussion at the en.wikisource Scriptorium, it was said that
nsPage can be viewed merely as a proofreading tool, the ns0
transclusion/text being the real core of source content.
I have a different opinion, since I see the nsPage code as the real core
of source content, ns0 being merely derived content that could be obtained
with complete automation from a set of structural data wrapped into a
Lua/Scribunto module (holding any data needed for the header template and
for the pages tag), so that any ns0 page/subpage could be obtained with a
template {{Derive|index base page name}}.
Giving nsPage such a core content role would make it much simpler to wrap
TEI data into it, and any POV issues related to different styles of
chapter/section structure and naming could be avoided; the HTML rendering
would be unchanged, so IMHO the ePub conversion would be preserved.
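Just to sketch what I mean (the module name, data layout and template
parameters below are all hypothetical, only to show the shape of the
idea), a Scribunto module could hold the structural data and emit the
ns0 wikitext:

    local p = {}

    -- Hypothetical structural data for one index: header fields plus
    -- the scan page range to transclude for each chapter.
    local work = {
        title    = "Example Title",
        author   = "Example Author",
        index    = "Example.djvu",
        chapters = {
            { name = "Chapter 1", from = 5,  to = 18 },
            { name = "Chapter 2", from = 19, to = 40 },
        },
    }

    -- {{#invoke:Derive|chapter|1}} would return the content of one ns0
    -- subpage: the expanded header template plus a <pages> tag.
    function p.chapter(frame)
        local ch = work.chapters[tonumber(frame.args[1])]
        local header = frame:expandTemplate{ title = "header", args = {
            title = work.title, author = work.author, section = ch.name } }
        local pages = frame:preprocess(string.format(
            '<pages index="%s" from=%d to=%d />',
            work.index, ch.from, ch.to))
        return header .. "\n" .. pages
    end

    return p

The {{Derive}} template would then be little more than a wrapper around
that #invoke.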
What do you think?
Alex brollo
If the problem is to automate the import of bibliographic data, one
solution is what you propose: import everything. Another is to have an
import tool that automatically fetches the data for the item that needs
it. In WP they do that: there is a tool to import book/journal info by
ISBN/DOI. The same can be done in WD.
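For instance, a tool of that kind boils down to something like this (a
minimal sketch only: it assumes the luasec and dkjson Lua libraries, and
uses the Open Library books API as one possible metadata source):

    local https = require("ssl.https")   -- from luasec
    local json  = require("dkjson")

    -- Fetch basic book metadata for one ISBN from Open Library.
    local function fetch_by_isbn(isbn)
        local key  = "ISBN:" .. isbn
        local body = https.request(
            "https://openlibrary.org/api/books?bibkeys=" .. key ..
            "&format=json&jscmd=data")
        if not body then return nil end
        local data = json.decode(body)
        return data and data[key]        -- title, authors, publishers, ...
    end

    local book = fetch_by_isbn("9780140328721")
    if book then
        print(book.title)
        for _, a in ipairs(book.authors or {}) do print(a.name) end
    end

The remaining work, mapping those fields onto Wikidata properties and
creating the statements, is exactly the boring part worth automating.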
Micru
On Mon, Aug 26, 2013 at 9:23 AM, Thomas Douillard <
thomas.douillard(a)gmail.com> wrote:
> If Wikidata has the ambition to be a really reliable database, we should do
> everything we can to make it easy for users to use any source they want. In
> this perspective, if we get data with guaranteed high quality, it becomes
> easy for Wikidatians to find and use these references. Entering a
> reference in the database seems to me a highly tedious, boring, and
> easily automated task.
>
> With that in mind, any reference that the user will not have to enter by
> hand is a good thing, and importing high-quality source data should pass
> every Wikidata community barrier easily. If there is no problem for the
> software to handle that much information, I say we really have no reason
> not to do the imports.
>
> Tom
>
>
--
Etiamsi omnes, ego non
Hi,
I am from Malayalam Wikisource (http://ml.wikisource.org/) and I have a
question: is it possible to use Semantic Forms together with the
ProofreadPage extension? It is for something like a dictionary: the
scanned image should appear on the right, and on the left there should be
a form with the fields Word, Meaning1, Meaning2, etc.
Any suggestions?
Regards,
Balasankar C
http://balasankarc.in
I have prepared a book with several editions in Wikidata so you can get a
feeling for what the structure looks like:
http://www.wikidata.org/wiki/Q6911
It is based on the set of properties described on the Books Task Force
page, which are supposed to be interoperable with the Wikipedia infobox
(each language Wikipedia will be able to feature a different edition as
soon as arbitrary item access is available), Wikisource and Commons.
http://www.wikidata.org/wiki/Wikidata:Books_task_force
The "…
[View More]edition items" in Wikidata could in turn be connected to the new
BookManagerv2 extension that Molly is preparing for her GsoC:
http://www.mollywhite.net/blog/?p=87http://tools.wmflabs.org/bookmanagerv2/wiki/Book:The_Interpretation_of_Drea…
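Roughly, the split looks like this (a sketch only: the field names below
are illustrative, not the actual Wikidata property IDs):

    -- One "work" item carries the language-independent data; each
    -- published edition gets its own "edition" item linked to the work.
    local work = {
        label  = "Example Work",
        author = "Q-author",           -- item for the author
        editions = {
            {
                label     = "Example Work (1920 edition)",
                publisher = "Example Press",
                year      = 1920,
                language  = "en",
                wikisource = "en",     -- which Wikisource hosts the scan
            },
            {
                label     = "Example Work (1935 edition)",
                publisher = "Another House",
                year      = 1935,
                language  = "de",
                wikisource = "de",
            },
        },
    }

An infobox or BookManagerv2 would then pick whichever edition suits its
wiki.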
What is your opinion of these advances? Do you think they are heading in
the right direction?
If you could pass the information on to your local Wikisource, it would be
great to have some feedback about potential problems, and to know how each
project perceives it.
Thanks!
Micru
Hi all,
hope some of you are still on vacation :-)
Meanwhile, between a cocktail on the beach and a swim, you can check this
out:
https://meta.wikimedia.org/wiki/Massively-Multiplayer_Online_Bibliography
Asaf Bartov is the head of the Wikimedia Grants Program, and a long-time
contributor to Hebrew Wikisource.
I think this project could be feasible; moreover, I believe that
Wikisource should be a part/partner of it.
We can discuss here, if you prefer.
Aubrey
---------- Forwarded message ----------
From: Asaf Bartov <abartov(a)wikimedia.org>
Date: Sun, Aug 18, 2013 at 7:38 AM
Subject: [libraries] MMOB
To: Wikimedia & Libraries <libraries(a)lists.wikimedia.org>
Hello.
Some of you have heard me rant about this for a couple of years now. So, I
finally wrote something up:
https://meta.wikimedia.org/wiki/Massively-Multiplayer_Online_Bibliography
Much, much to be added, but I'd love for this to be a group conversation,
so by all means, dig in! :)
Asaf Bartov
Wikimedia Foundation <http://www.wikimediafoundation.org>
Nice progress at Archive-It, a service that Wikimedia project users have
long dreamt of using. Maybe someone on this list is interested in applying.
Nemo
-------- Original message --------
Subject: Job Posting: Web Application/Software Developer for Archive-It
Date: Tue, 06 Aug 2013 22:12:10 +0000
From: <internetarchive>
Job Posting: Web Application/Software Developer for Archive-It
The Internet Archive is looking for a smart, collaborative and
resourceful engineer to lead and do the development of the next
generation of the Archive-It service, a web-based application used by
libraries and archives around the world. The Internet Archive is a
digital public library founded in 1996. Archive-It is a self-sustaining,
revenue-generating subscription service first launched in 2006.
Primary responsibilities would be to extend the success of Archive-It,
which librarians and archivists use to create collections of digital
content, and then make them accessible to researchers, scholars and the
general public. Widely considered to be the market leader since its
inception, Archive-It has a partner base that has archived over five
billion webpages and over 260 terabytes of data. http://archive-it.org
Working for the Archive-It program's director, this position has technical
responsibility for evolving this service while keeping it straightforward
enough to be operated by 300+ partner organizations and their users with
minimal technical skills. Our current system is primarily Java-based and
we are looking to build the next generation of Archive-It using the
latest web technologies. The ideal candidate will possess a desire to
work collaboratively with a small internal team and a large, vocal and
active user community, demonstrating independence, creativity,
initiative and technological savvy, in addition to being a great
programmer/architect.
*The ideal candidate will have:*
* 5+ years work experience in Java and Python web application development
* Experience with Hadoop, specifically HBase and Pig
* Experience developing web application/database back-ends (SQL or
NoSQL)
* Good understanding of the latest web framework technologies, both JVM
and non-JVM based, and the trade-offs between them
* Strong familiarity with all aspects of web technology and protocols,
including: HTTP, HTML, and JavaScript
* Experience with a variety of web applications, machine clusters,
distributed systems, and high-volume data services.
* Flexibility and a sense of humor
* BS Computer Science, or equivalent work experience
*Bonus points for:*
* Experience with web crawlers and/or applications designed to display
[archived] web content (especially server-side apps)
* Open source practices experience
* Experience and/or interest in user interface design and information
architecture
* Familiarity with Apache Solr or similar facet-based search technologies
* Experience with the building/architecture of social media sites
* Experience building out a mobile platform
*To apply:*
Please send your resume and cover letter to kristine at archive dot org
with the subject line “Web App Developer Archive-It”.
The Archive thanks all applicants for their interest, but advises that
only those selected for an interview will be contacted. No phone calls
please!
We are an equal opportunity employer.