On Sat, 02-02-2008 at 22:10 +0000, Thomas Dalton wrote:
On 02/02/2008, Francis Tyers <spectre@ivixor.net> wrote:
On Sat, 02-02-2008 at 22:03 +0000, Thomas Dalton wrote:
This shows a misunderstanding of what is meant by data, as outlined in my first post. Indeed, you seem to be trying to separate the "machine parts" from the "human parts", something I stated is not possible. I would welcome you to try to show otherwise [patches welcome].
If you put together a wordlist based on interwiki links, that will be separable from the software that uses the list (the easiest way to do it would involve a separate file containing the list). You can release the software under GPL and the wordlist (if it's even copyrightable) under GFDL.
That is not possible because of the way the software works. As I mentioned in my first email, it is not possible to decouple the "wordlist" part from the non-wordlist part and distribute them as separate packages.
Believe me, I had thought of that.
I don't believe you. Unless you are going through the list manually, writing appropriate code for each word (which would take years), you must have some form of automated process which goes through the wordlist. There is nothing stopping you running that automated code on an external file.
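A rough sketch of what I mean, with a hypothetical tab-separated wordlist file and a made-up entry format rather than Apertium's actual ones: the wordlist lives in its own file (which could carry its own licence), and the generator just reads it.

    # Sketch: keep the wordlist in an external file and have the build
    # step read it, so data and code can be licensed and shipped apart.
    # "wordlist.tsv" and the entry format below are illustrative only.

    def load_wordlist(path):
        """Read two-column (source<TAB>target) pairs from an external file."""
        pairs = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # skip blanks and comments
                src, dst = line.split("\t")
                pairs.append((src, dst))
        return pairs

    def generate_entries(pairs):
        """Turn each pair into a (made-up) dictionary entry for the compiler."""
        return ["<e><p><l>%s</l><r>%s</r></p></e>" % (src, dst) for src, dst in pairs]

    if __name__ == "__main__":
        for entry in generate_entries(load_wordlist("wordlist.tsv")):
            print(entry)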
You're right; thinking about it more, it could be done with a diff that each user applies individually.
Of course, this would make distributing binary packages difficult (the language data is compiled into a binary representation before it is used), although a binary diff could be done (sketched below).
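For what it's worth, what "each user individually patches" could look like at the text level, using Python's standard difflib (file names are hypothetical; patching the compiled binary data would instead need a binary diff tool such as xdelta or bsdiff, not shown here):

    # Sketch: produce a patch for users to apply to their own copy of the
    # wordlist, instead of distributing the combined file directly.

    import difflib

    def make_wordlist_patch(original_path, modified_path, patch_path):
        with open(original_path, encoding="utf-8") as f:
            original = f.readlines()
        with open(modified_path, encoding="utf-8") as f:
            modified = f.readlines()
        diff = difflib.unified_diff(original, modified,
                                    fromfile=original_path, tofile=modified_path)
        with open(patch_path, "w", encoding="utf-8") as f:
            f.writelines(diff)

    # Each user would then apply the patch to their local copy, e.g. with
    # the standard `patch` tool, before (re)compiling the language data.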
But then, when was the last time an end-user had to apply a binary diff to their free software? Personally, I don't consider this reasonable or maintainable for a large number of language pairs, which was my original point.
Fran