On 08/07/2010 02:23 AM, Andreas Kolbe wrote:
Word-processing the Google output to arrive at a readable, written text creates more work than it saves.
This is where our experiences differ. I work faster with the Google Translator Toolkit than without it.
If Google want to build up their translation memory, I suggest they pay publishers for permission to analyse existing, published translations, and read those into their memory. This will give them a database of translations that the market judged good enough to publish, written by people who (presumably) understood the subject matter they were working in.
If we forget Google for a while, this is actually something we could do on our own. There are enough texts in Wikisource (out-of-copyright books) that are available in more than one language. In some cases we will run into old spelling and old usage, but it will be better than nothing. The result could be good input to Wiktionary.
Here is the Norwegian original of Nansen's Eskimoliv: http://no.wikisource.org/wiki/Indeks:Nansen-Eskimoliv.djvu
And here is the Swedish translation (both are from 1891): http://sv.wikisource.org/wiki/Index:Eskim%C3%A5lif.djvu
Norwegian: Grønland er paa en eiendommelig vis knyttet til vort land og folk. ("Greenland is in a peculiar way tied to our country and people.")
Swedish: Grönland är på ett egendomligt sätt knutet till vårt land och vårt folk. ("Greenland is in a peculiar way tied to our country and our people.")
As you can see, there is one difference already in this first sentence: The original ends "to our country and people", while the translation ends "to our country and our people".
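As a first step in that direction, the two chapters could be saved as plain text and split into sentences; only then does the actual alignment start. Here is a rough Python sketch (the file names are my own placeholders, not anything that exists on Wikisource, and the sentence splitter is deliberately naive):

    # A minimal sketch, assuming the two chapters have already been saved
    # locally as plain text files (the file names below are hypothetical).
    import re

    def split_sentences(text):
        # Naive splitter: break on ., ! or ? followed by whitespace.
        # Good enough for a first experiment, though abbreviations
        # will be split in the wrong places.
        text = re.sub(r"\s+", " ", text.strip())
        return [s for s in re.split(r"(?<=[.!?])\s+", text) if s]

    with open("eskimoliv_no.txt", encoding="utf-8") as f:
        no_sentences = split_sentences(f.read())
    with open("eskimalif_sv.txt", encoding="utf-8") as f:
        sv_sentences = split_sentences(f.read())

    print(no_sentences[0])  # Grønland er paa en eiendommelig vis knyttet ...
    print(sv_sentences[0])  # Grönland är på ett egendomligt sätt knutet ...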
Is there any good free software for aligning parallel texts and extracting translations? Looking around, I found NAtools, TagAligner, and Bitextor, but they require texts to be marked up already. Are these the best and most modern tools available?
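I don't know whether they are, but the core idea behind length-based sentence alignment (Gale & Church, 1993) is simple enough that we could experiment with it ourselves. Below is a toy Python sketch; the cost function and skip penalty are invented for illustration, and a real aligner would use a proper statistical length model plus 2-1 and 1-2 merges:

    # Toy length-based sentence alignment: pair sentences by comparing
    # character lengths with dynamic programming. Only 1-1, 1-0 and 0-1
    # alignments are handled; real tools do considerably more.

    def align(src, tgt):
        SKIP = 10.0  # penalty for leaving a sentence unpaired (made up)

        def cost(a, b):
            # Cheap stand-in for the Gale-Church length model:
            # penalise pairs whose character lengths differ a lot.
            la, lb = len(a), len(b)
            return 5.0 * abs(la - lb) / max(la, lb, 1)

        n, m = len(src), len(tgt)
        INF = float("inf")
        # dp[i][j] = cheapest cost of aligning src[:i] with tgt[:j]
        dp = [[INF] * (m + 1) for _ in range(n + 1)]
        back = [[None] * (m + 1) for _ in range(n + 1)]
        dp[0][0] = 0.0
        for i in range(n + 1):
            for j in range(m + 1):
                if dp[i][j] == INF:
                    continue
                if i < n and j < m:  # pair src[i] with tgt[j]
                    c = dp[i][j] + cost(src[i], tgt[j])
                    if c < dp[i + 1][j + 1]:
                        dp[i + 1][j + 1] = c
                        back[i + 1][j + 1] = (i, j, "pair")
                if i < n:  # leave src[i] unpaired
                    c = dp[i][j] + SKIP
                    if c < dp[i + 1][j]:
                        dp[i + 1][j] = c
                        back[i + 1][j] = (i, j, "skip_src")
                if j < m:  # leave tgt[j] unpaired
                    c = dp[i][j] + SKIP
                    if c < dp[i][j + 1]:
                        dp[i][j + 1] = c
                        back[i][j + 1] = (i, j, "skip_tgt")
        # Trace back the cheapest path and collect the 1-1 pairs.
        pairs, i, j = [], n, m
        while (i, j) != (0, 0):
            pi, pj, op = back[i][j]
            if op == "pair":
                pairs.append((pi, pj))
            i, j = pi, pj
        return list(reversed(pairs))

    print(align(
        ["Grønland er paa en eiendommelig vis knyttet til vort land og folk."],
        ["Grönland är på ett egendomligt sätt knutet till vårt land och vårt folk."]))
    # -> [(0, 0)]

Applied to the sentence lists from the two chapters, this would give candidate pairs like the Grønland/Grönland sentence above; the pairs would still need human checking before going into a translation memory or Wiktionary.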