Hoi,
We have reached another nice number at WiktionaryZ: we have our 500th
user. As you may know, we are quite happy to increase not only our
number of editors (please let us know when you have your Babel templates
and have read [[DefinedMeaning]]); we have also slowly but surely
expanded the number of languages that we support, Thai, Ido and Serbian
being among the ones added lately.
In the background there have been many interesting discussions about all
kinds of things. Many of the suggestions made are feasible, but they are
reliant on the state of the software and the availability of people
willing to program for us. The first volunteer contribution is about to
go live in a few days' time; this is software that will change the
behaviour of the screen. The way you like to have your screen configured
will get some stickiness. Thanks Rod :) The first stage of versioning is
likely to go live next week.
We have had a setback with the development of the "Multilingual
MediaWiki": the person who started to work on it fell ill, and the money
promised for it has become uncertain for now due to legal problems on
the donor's side. We now have the funding to make this happen thanks to
Wikia and the University of Bamberg. We even have some money to create
some new functionality to make the life of our editors and users a bit
sweeter; things like lists of the words in a language and/or a
collection, or the words where there is content in language A but no
content in language B...
From a content point of view, I do not have a clue how many "articles"
WiktionaryZ has. I do know that it grows rapidly, certainly when you
consider that the growth is based on what is done manually. Our Alexa
ranking has some amusement value;
http://www.alexa.com/data/details/traffic_details?q=&url=wiktionaryz.org
and we have our own stats as well; http://wiktionaryz.org/stats/
If you have not visited us yet, this is as good a time as any, if
only to boost the "Alexa" ratings :)
Thanks,
GerardM
Well, I am crossposting this blog text as well as copying it to some
people in blind copy since it involves various projects - as for
Wikipedia: it is about content creation for small Wikipedias (that is
why I changed the original title above).
The original post is at:
http://sabinecretella.blogspot.com/2006/08/omegat-wiktionaryz-betawiki-some…
Your comments and thoughts are very much appreciated.
Best, Sabine
*****
OmegaT, WiktionaryZ, Betawiki ... some questions that need an
answer ...
In the Wiktionary IRC the following questions were asked by Connel: "...
considers omegat.org. Is the intent for it to just auto-upload stuff to
WZ? to/from ZW? Or betawiki, or both betawiki and WZ? Or is betawiki
just for WikiMedia total localization?"
That is a lot ... so let me go step by step.
The intent of OmegaT <http://en.wikipedia.org/wiki/OmegaT> is not to
auto-upload stuff to WiktionaryZ <http://wiktionaryz.org> or download it
from there. Nor is it there only for Betawiki
<http://nike.users.idler.fi/betawiki/Etusivu> and WiktionaryZ, even if
it will probably be used for both sooner or later. OmegaT is a CAT
<http://en.wikipedia.org/wiki/Computer-assisted_translation> tool that
helps translators do their work.
What does this mean? Imagine you use a tool for all of your translations
that creates a translation memory: a file containing the translations
you did, segmented into sentences, with each source sentence paired with
its target sentence. Then you do further translations and let the CAT
tool access these already translated files. Now if your translation is
on a subject you have already translated, chances are high that most of
the terminology needed is already in there, and you can even see in
which context it was used. So with OmegaT you do a search on your
project and the available translation memories to see if and how a term
was already translated. This can help a lot.
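To make the idea concrete, here is a minimal sketch of such a
concordance search in Python. The tab-separated file and its name are
assumptions made for the sketch, not OmegaT's actual storage format
(OmegaT's memories are TMX files, sketched further below):

def load_tm(path):
    """Load (source, target) sentence pairs from a tab-separated file:
    one 'source<TAB>target' pair per line (illustrative format)."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if "\t" in line:
                source, target = line.rstrip("\n").split("\t", 1)
                pairs.append((source, target))
    return pairs

def concordance(pairs, term):
    """Return every pair whose source sentence contains the term,
    so a translator can see how it was rendered in context."""
    term = term.lower()
    return [(s, t) for s, t in pairs if term in s.lower()]

# Usage: how was "manual" translated before? (file name is made up)
for source, target in concordance(load_tm("project_tm.txt"), "manual"):
    print(source, "->", target)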
Now consider a manual - of a machine, a computer, whatever. These
manuals need updates once a new version of that machine or computer is
produced. Normally companies then just update the description, and
parts of it remain the same as before (simply because the functionality
of those parts is still the same). When you then translate, you will
find these unchanged parts in your translation memory and, depending on
how you set your options, OmegaT proposes the 100% match or overwrites
the translation part of your project with the already existing
translations. In this way you can save loads of time.
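A rough sketch of what such matching looks like, using Python's standard
difflib for the similarity score; real CAT tools use more refined
segment-matching algorithms, so treat this purely as an illustration:

import difflib

def best_match(tm_pairs, sentence, threshold=0.75):
    """Find the TM pair whose source is most similar to the new
    sentence. A ratio of 1.0 is the '100% match' described above;
    the 0.75 threshold is an arbitrary cut-off for this sketch."""
    best, best_ratio = None, 0.0
    for source, target in tm_pairs:
        ratio = difflib.SequenceMatcher(None, source, sentence).ratio()
        if ratio > best_ratio:
            best, best_ratio = (source, target), ratio
    if best_ratio >= threshold:
        return best, best_ratio
    return None, best_ratio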
With the right parser, the MediaWiki <http://mediawiki.org> UI could
also be translated in this way. Of course, there will always be people
who translate things manually online and who will not use a CAT tool.
This means that OmegaT should be able to access the individual pages
containing the messages on Betawiki; you translate them on your computer
and store them back to the page in the correct language version. This is
feasible.
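For instance, fetching the raw wikitext of one message page could look
like this; it assumes a stock MediaWiki install (which serves pages via
index.php?action=raw), and the page title is made up for the example.
Storing the translation back would additionally need a login and an edit
submission, which is left out here:

from urllib.parse import urlencode
from urllib.request import urlopen

def fetch_wikitext(base_url, title):
    """Download the raw wikitext of a single wiki page via
    MediaWiki's index.php?action=raw interface."""
    query = urlencode({"title": title, "action": "raw"})
    with urlopen(base_url + "/index.php?" + query) as resp:
        return resp.read().decode("utf-8")

# Usage (the page title is a hypothetical interface message):
text = fetch_wikitext("http://nike.users.idler.fi/betawiki", "MediaWiki:Search")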
Another use will be the creation of content for small Wikipedias. Once
we get our wiki-read/wiki-write option within OmegaT, it will be
possible to start a translation of an article, let's say from the
English Wikipedia <http://en.wikipedia.org>, into any language, let's
say for the Neapolitan Wikipedia <http://nap.wikipedia.org>. This means
you tell OmegaT which page to get on en.wikipedia
<http://en.wikipedia.org> and which page to write on nap.wikipedia
<http://nap.wikipedia.org>. The same goes for any African language. The
advantage of this is that if there is no online connection, people can
still work on translations offline. The translation memories produced by
these translations should be stored somewhere (WiktionaryZ is already
enabled to upload translation memories) in order to allow others to
access and use them, making their own translations faster and of higher
quality. Another aspect of doing things this way is that proofreading a
translation is easier, since you see the source text above the
translation for each sentence. This makes the job much easier, and the
quality of the translated article rises.
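Those shared memories would most naturally travel as TMX, the XML
interchange format OmegaT itself reads and writes. Here is a minimal
sketch of storing sentence pairs as TMX 1.4; the header attributes are
kept to the bare minimum and the language codes are just examples:

import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"  # serializes as xml:lang

def write_tmx(pairs, path, src_lang="en", tgt_lang="nap"):
    """Write (source, target) sentence pairs as a minimal TMX 1.4 file."""
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", creationtool="sketch",
                  creationtoolversion="0.1", segtype="sentence",
                  adminlang="en", srclang=src_lang, datatype="plaintext",
                  **{"o-tmf": "none"})
    body = ET.SubElement(tmx, "body")
    for source, target in pairs:
        tu = ET.SubElement(body, "tu")  # one translation unit per pair
        for lang, text in ((src_lang, source), (tgt_lang, target)):
            tuv = ET.SubElement(tu, "tuv", {XML_LANG: lang})
            ET.SubElement(tuv, "seg").text = text
    ET.ElementTree(tmx).write(path, encoding="utf-8", xml_declaration=True)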
Now to WiktionaryZ and OmegaT: OmegaT for now has quite a simple
glossary function - you create a tab-separated text file and put it into
your glossary directory. While you translate, OmegaT shows you the
translation proposals for the words that are present in the current
sentence and in the glossary. Now imagine what it would mean to connect
the glossary function to WiktionaryZ: the whole repository of data at
your fingertips. Of course, considering the mass of data that is online
in WiktionaryZ, it becomes very important to attribute domains to
terminology. Often a word can be translated into another language in 20
ways or even more ... well, it does not make sense, if you are doing a
translation about medical equipment, to get proposals from another
domain, let's say machinery. The possibilities from other domains should
only be proposed (showing that other domain) when there is no entry for
medical equipment.
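OmegaT's real glossary files are plain tab-separated lines of source
term, target term and comment; the third "domain" column in this sketch
is an assumption, added to illustrate the preference rule just
described:

import csv

def load_glossary(path):
    """Read glossary rows of: source term, target term, domain."""
    with open(path, encoding="utf-8", newline="") as f:
        return [row[:3] for row in csv.reader(f, delimiter="\t")
                if len(row) >= 3]

def propose(entries, sentence, domain):
    """Propose glossary hits for the words of a sentence, preferring the
    translation's own domain; entries from other domains are returned
    (with their domain shown) only when the domain itself has no hit."""
    words = set(sentence.lower().split())
    hits = [(src, tgt, dom) for src, tgt, dom in entries
            if src.lower() in words]
    in_domain = [(src, tgt, None) for src, tgt, dom in hits
                 if dom == domain]
    return in_domain or hits

# Usage (file name is made up):
# entries = load_glossary("glossary.txt")
# propose(entries, "Connect the probe to the monitor", "medical equipment")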
At this stage we do not have this domain structure for terminology on
WiktionaryZ, and therefore the data, once we have loads of it online,
cannot be used - it would just create a huge mess and would be very time
consuming. So one of the things we really need asap is a domain
structure to which we can connect the single terms - the sooner we have
it the better .... otherwise we will have loads of double and triple
work, or WiktionaryZ could become completely useless for use within
OmegaT, and as such it would not be of any advantage for translators.
Not even for scientists, really ... imagine a biologist searching for
terminology and getting whatever results come up ... including those
from machinery or whatever other domain.
Back to the use within OmegaT:
The next step is then: what if the searched term is not in WiktionaryZ?
I already noticed that during my last translation - for now it is too
time consuming to add terms to WiktionaryZ (and also to Wiktionary)
while you are translating - but it would make so much sense. So what is
planned in the reference implementation for a translation glossary
<http://meta.wikimedia.org/wiki/Reference_implementation_for_a_translation_g…>
is that when working with OmegaT you get the possibility to add such a
term directly from there. You simply tell OmegaT to add it to
WiktionaryZ with your user ID, and you can attribute all the necessary
domains etc. without problems, as well as tag the term as "definition
needs to be added". What happens that way is that WiktionaryZ will
get quite a bunch of very specific terminology over time.
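Since this is a planned feature, any sketch of it is necessarily
hypothetical: the endpoint, the field names, and the function below are
all invented to illustrate the workflow, not a real WiktionaryZ API:

from urllib.parse import urlencode
from urllib.request import Request, urlopen

def submit_term(term, language, domains, user_id):
    """Hypothetical submission of a new term to WiktionaryZ, tagged as
    still needing a definition. URL and field names are invented."""
    data = urlencode({
        "expression": term,
        "language": language,
        "domains": ",".join(domains),
        "user": user_id,
        "flag": "definition needs to be added",
    }).encode("utf-8")
    req = Request("http://wiktionaryz.org/api/terms", data=data)  # invented URL
    with urlopen(req) as resp:
        return resp.status == 200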
Another use is OmegaT for language lessons. Connel from en.wiktionary
<http://en.wiktionary.org> thought about it, and he is right: OmegaT
could be used for language learning as well ... what if we had a huge
sentence repository and people started to translate texts to study a
language? They would not need a paper dictionary - OmegaT would help
them see the use of a word in various sentences, and they would get
terminology proposals just like translators do. Back at school or
university (or maybe online with a language teacher) they can go over
their errors and update WiktionaryZ and the online sentence repository.
For exams, teachers would have a mass of proposals and could determine
which glossary group should be included in the exams ... that still
needs to be thought about ... it has not been considered up to now, even
though there are already thoughts on how to use WiktionaryZ for language
learning.
Did I miss something? Hmmm ... not sure. Well if you have questions:
just ask :-)
Hi,
our problem is that we have two articles in our MediaWiki: one contains
"intranet" in the title, and the other one contains "intranets".
If I enter "intranet" into the search, I only receive the first article
as a result but not the second one, although "intranet" is a part of
"intranets".
I would like to receive both articles as results even if I just type
"intra". Please tell me how that is possible with MediaWiki version
1.7.1. Do I have to configure something or do I need a plugin?
Please Help!
Thanx
Tim
Re-sending my mail from yesterday ... for some reason it did not go
through.
-------- Original Message --------
Subject: Re: [Wiktionary-l] WZ Timeline.
Date: Sat, 05 Aug 2006 13:29:12 +0200
From: Sabine Cretella <sabine_cretella(a)yahoo.it>
To: The Wiktionary (http://www.wiktionary.org) mailing list
<wiktionary-l(a)Wikipedia.org>
References: <9c00d9790608042051k6ec4195dn2ddd9eea56ff936a(a)mail.gmail.com>
Hi Michael,
that timeline you saw is quite an old one.
As for now we have the following dates:
- versioning should work by the end of August/beginning of September
- by October there should be a publicly editable version
As you probably know, sometimes unforeseen things happen - so the dates
I am giving here are purely indicative and could change a bit.
You can already edit on WiktionaryZ right now, but it is up to us to
give you edit rights for the Wikidata part: since versioning is not yet
working properly, we can only give edit rights to people we know and
trust. I suppose that is understandable. Of course the ordinary wiki
pages can be edited by anyone (well, it's a wiki :-)
Wishing you a great weekend!
Sabine
Michael Monaghan wrote:
> Hi,
>
> Could someone estimate when a stable alpha/beta WZ might be available?
> [give or take a few months].
>
> Is http://meta.wikimedia.org/wiki/Wikidata/Timetable the best page
> to keep an eye on for this info?
>
> Thanks,
>
> ~mm
Hi Michael,
Could you give me the web address for the Wikipedia download from just
prior to March 2002, or the last update before March 2002?
Best,
David
--- On Fri 08/04, Michael Monaghan < mickmon(a)gmail.com > wrote:
From: Michael Monaghan [mailto: mickmon(a)gmail.com]
To: wiktionary-l(a)wikipedia.org
Date: Fri, 4 Aug 2006 23:51:19 -0400
Subject: [Wiktionary-l] WZ Timeline.
Hi,
Could someone estimate when a stable alpha/beta WZ might be available?
[give or take a few months].
Is http://meta.wikimedia.org/wiki/Wikidata/Timetable the best page
to keep an eye on for this info?
Thanks,
~mm
Hi,
Could someone estimate when a stable alpha/beta WZ might be available?
[give or take a few months].
Is http://meta.wikimedia.org/wiki/Wikidata/Timetable the best page
to keep an eye on for this info?
Thanks,
~mm