On 26/08/05, Sabine Cretella <sabine_cretella@yahoo.it> wrote:
> The easiest thing is to use a CAT tool to translate the UI and then make the translation memory (TMX) available - anyone who does an update will then have 100% matches for the old UI and only the rest needs to be translated. This way the translator can also correct any misspellings, and changes in terminology are possible, in order to have an even better product.
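To make the quoted idea concrete, here is a minimal PHP sketch (the arrays are hypothetical stand-ins for the real message files, not actual MediaWiki data) of reusing an old localisation as a translation memory: unchanged source strings get their old translation back as 100% matches, and only new or changed strings are left for the translator.

<?php
// Hypothetical sample data - the real message arrays live in the
// Language*.php files and are much larger.
$oldEn = array( 'search' => 'Search', 'go' => 'Go', 'history' => 'Page history' );
$oldDe = array( 'search' => 'Suchen', 'go' => 'Los', 'history' => 'Versionen' );
$newEn = array( 'search' => 'Search', 'go' => 'Go', 'history' => 'Revision history' );

$matches = array();      // 100% matches reusable from the old UI
$toTranslate = array();  // new or changed source strings
foreach ( $newEn as $key => $text ) {
    if ( isset( $oldEn[$key], $oldDe[$key] ) && $oldEn[$key] === $text ) {
        $matches[$key] = $oldDe[$key];   // source unchanged: keep the old translation
    } else {
        $toTranslate[$key] = $text;      // new or changed: send to the translator
    }
}
print_r( $matches );      // 'search' and 'go' come back automatically
print_r( $toTranslate );  // 'history' changed in English, needs a new translation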
One of the big problems with the current setup is that the MediaWiki: namespace (the "live" messages in the database of a particular project) is used both for localisation and customisation - so whenever you export the messages from, say, Wikipedia, you have to work out which changes are due to changes in the software, which are cosmetic but appropriate for application to other projects, and which are specific to the particular project. This is probably the biggest challenge which any new l10n system (or even a new approach to i18n) needs to address.
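For what it's worth, the "which change is which" question can at least be narrowed down mechanically: comparing an export of the live MediaWiki: namespace against the shipped defaults flags everything a human still has to classify. A rough PHP sketch, again with hypothetical data:

<?php
// Hypothetical data: $shippedDe stands in for LanguageDe.php, $liveDe for an
// export of a particular wiki's MediaWiki: namespace.
$shippedDe = array( 'search' => 'Suchen', 'mainpage' => 'Hauptseite' );
$liveDe = array(
    'search'     => 'Suchen',                // untouched
    'mainpage'   => 'Wikipedia:Hauptseite',  // project-specific customisation
    'sitenotice' => 'Bitte um Spenden',      // message the shipped file lacks
);

foreach ( $liveDe as $key => $text ) {
    if ( !isset( $shippedDe[$key] ) ) {
        echo "$key: not in the shipped file - probably project-specific\n";
    } elseif ( $shippedDe[$key] !== $text ) {
        echo "$key: differs from the shipped default - localisation fix or customisation?\n";
    }
    // identical messages can be ignored; the flagged ones still need human judgement
}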
> Can you pass me the PHP file of the current version, plus for example EN+DE or EN+IT from an older localised version? This way I can also try to create an alignment file as a basis to work on.
The PHP files are, naturally, in the source of the software - see http://www.mediawiki.org/wiki/Download
The easiest way is to get them out of the web-based CVS interface:
* http://cvs.sourceforge.net/viewcvs.py/wikipedia/phase3/languages/
* the language codes are the same ones Wikimedia domains use; the English interface, which is also the default for missing messages in other languages, is in "Language.php"
* so, for German: http://cvs.sourceforge.net/viewcvs.py/*checkout*/wikipedia/phase3/languages/...
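Simplified, the message arrays in those files look something like the following - a sketch only, since the exact variable names and the fallback mechanism vary between MediaWiki versions, and getMessage() here is just an illustrative stand-in:

<?php
// Language.php: the English messages, which double as the fallback.
$wgAllMessagesEn = array(
    'search'   => 'Search',
    'mainpage' => 'Main Page',
);
// LanguageDe.php: only the keys that have actually been translated.
$wgAllMessagesDe = array(
    'search' => 'Suchen',
);

// Illustrative lookup: a key missing from the localised array falls back
// to the English text.
function getMessage( $key, $localised, $english ) {
    return isset( $localised[$key] ) ? $localised[$key] : $english[$key];
}

echo getMessage( 'mainpage', $wgAllMessagesDe, $wgAllMessagesEn ), "\n"; // prints "Main Page"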
Also, if you haven't already, have a look at the documentation on meta:
* http://meta.wikimedia.org/wiki/Category:Localisation
* http://meta.wikimedia.org/wiki/MediaWiki_localisation
* http://meta.wikimedia.org/wiki/Help:MediaWiki_namespace
etc.