[Foundation-l] Creating contents for "small" Wikipedias
Sabine Cretella
sabine_cretella at yahoo.it
Sun Jan 15 19:29:57 UTC 2006
How do you create content for small Wikipedias?
The fastest way is "translation".
Premises:
*Most people need a starting point to "create", so we must give one to
all these potential editors.
*People have different interests, so the content must vary as well.
*We have loads of, for example, African languages that need "help".
*Many people do not have stable Internet access.
*People in, for example, African countries could work offline on Linux
systems, and the results could be uploaded later.
*Many would like translations to be proofread and want "reference
material" for looking up sentence structures and terminology immediately.
Well ... actually, translations are mostly done by copying the contents
from one Wikipedia to the other and then starting to translate. People
use paper dictionaries or workarounds, searching for the same
terminology over and over again.
Now there is a neat piece of software for Computer Assisted
Translation: OmegaT (http://www.omegat.org/omegat/omegat.html)
An example:
*You save the contents of a Wikipedia article in OpenOffice.org (.odt)
format.
*You have a terminology list from English to Bambara.
*You have .tmx files (translation memories) from previous translations
from English to Bambara.
*You create a new OmegaT project and select the files to be translated -
then you should close the project again (this will be improved in a
future version).
*Copy your glossary and old translation memories into place.
*Open the project and start translating.
*While translating, the translation memory match window shows similar
sentences that were already translated before (the so-called partial
matches - these can also be inserted automatically into the target
sentence, so that you only need to modify what is different).
*While translating, the glossary match window shows the words of the
current sentence that are present in the glossary; this helps with your
translation, since you will not need to look them up in a dictionary.
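For those who have never seen the two file types mentioned above: OmegaT reads glossaries as plain tab-separated text (source term, target term, optional comment) and translation memories as TMX, a small XML format. A minimal sketch that writes one of each with the Python standard library - the Bambara target strings are placeholders, not real translations:

```python
import xml.etree.ElementTree as ET

# Glossary: one entry per line, tab-separated
# (source term, target term, optional comment).
# Target terms here are placeholders, not real Bambara.
glossary_entries = [
    ("river", "TARGET-TERM-1", "geography"),
    ("mountain", "TARGET-TERM-2", ""),
]
with open("glossary.txt", "w", encoding="utf-8") as f:
    for src, tgt, note in glossary_entries:
        f.write(f"{src}\t{tgt}\t{note}\n")

# Minimal TMX translation memory: each <tu> (translation unit)
# holds one <tuv> (variant) per language, with the text in <seg>.
tmx = ET.Element("tmx", version="1.4")
ET.SubElement(tmx, "header", {"o-tmf": "none"}, srclang="en",
              datatype="plaintext", segtype="sentence", adminlang="en",
              creationtool="example", creationtoolversion="1")
body = ET.SubElement(tmx, "body")
tu = ET.SubElement(body, "tu")
for lang, text in (("en", "The river is long."),
                   ("bm", "TARGET-SENTENCE")):  # placeholder
    tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
    ET.SubElement(tuv, "seg").text = text
ET.ElementTree(tmx).write("memory.tmx", encoding="utf-8",
                          xml_declaration=True)
```

Because both formats are this simple, terminology lists can be prepared in any text editor or spreadsheet and shared between translators.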
In the future, the glossary function will be connected to Ultimate
Wiktionary, or WiktionaryZ, through a reference implementation - and
this means that all terminology in WiktionaryZ will also be at the
disposal of translators.
There are always thoughts about machine translation around - well, this
is the way translators actually work: we are "helped" by the computer to
do our jobs. Now, if we take these techniques and show them to many
people who can also work offline, and if these people exchange
translation memories and then share their glossaries with WiktionaryZ,
then "small" Wikipedias can acquire a lot of general content quite fast.
The next step is then: any translation can be proofread before it is
uploaded to the Wikipedia - so an article can quite easily be worked on
offline.
The upload can be done with the help of a bot (pywikipediabot) - and
this can be done by anyone, even by people who do not know the language.
In this way, online and offline communities - for example for African
languages, where people cannot always access the Internet - can work
together really well.
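With today's pywikibot (the successor of pywikipediabot), that upload step could look roughly like the sketch below. The folder layout (one .txt file of wikitext per article, file name = page title) is an assumption about how one might organise the offline work; the upload itself needs a configured bot account, and "bm" is the site code of the Bambara Wikipedia:

```python
from pathlib import Path

def load_translated_pages(folder):
    """Collect finished translations: each .txt file in the folder
    becomes one (page title, wikitext) pair, with the file name
    (minus the extension) used as the page title."""
    pages = []
    for path in sorted(Path(folder).glob("*.txt")):
        pages.append((path.stem, path.read_text(encoding="utf-8")))
    return pages

def upload(pages, lang_code="bm"):
    """Upload each page; requires pywikibot to be installed and a
    bot account to be configured for the target wiki."""
    import pywikibot
    site = pywikibot.Site(lang_code, "wikipedia")
    for title, text in pages:
        page = pywikibot.Page(site, title)
        page.text = text
        page.save(summary="Uploading offline translation")
```

The point is that the person running `upload()` only moves files around - they do not need to understand Bambara, which is exactly why anyone can do this step for the community.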
Of course an offline-readable Wikipedia would then make a lot of sense
in these languages, as would some particular way of organising the
project (pages need to be marked as "being translated by ..." etc.)
that can also be handled by a non-native speaker.
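That "being translated by" marking could be as simple as a status template at the top of the page's wikitext - something a coordinating non-native speaker can add and check mechanically. A minimal sketch; the template name "InTranslation" is an assumption, not an existing template:

```python
def mark_in_translation(wikitext, translator, template="InTranslation"):
    """Prepend a status template (hypothetical name) recording who is
    translating the page, so that nobody duplicates the work.
    Does nothing if the page is already marked."""
    if wikitext.startswith("{{%s" % template):
        return wikitext  # already marked, keep the first claim
    banner = "{{%s|translator=%s}}\n" % (template, translator)
    return banner + wikitext
```

A bot or a person could then list all pages carrying the template to see at a glance which articles are taken and which are still free.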
Well, so who would like to try out OmegaT for the translation of
articles? (By the way, I am doing that to translate from Italian and
other languages into Neapolitan - and of course for my "ordinary work"
into German :-)
Best, Sabine