Karl Wick wrote:
>The main problem I see with the GNU FDL as
>it stands is that it demands that any work that
>uses any of its content must be released under
>the same GNU FDL license.
Replace "any work" with "any derivative work" and you are right. And the viral
nature of the license is the whole point - otherwise somebody could make a
proprietary fork. The GFDL ensures that the content is forever free.
>However there are other open content licences
>out there that people will be using. So, if some day
>down the road anyone wants to mix content from a
>Creative Commons license or any other license at
>all, the work must be released under the GNU
And the same problem applies to the Creative Commons Share Alike license; text
under that license can't be incorporated into GNU FDL works. Same for every
other copyleft viral license. So what is your point? I've already mentioned
that our long term goal should be to encourage the major open content license
writers to make their licenses compatible wherever possible. See
>So any work I do on a textbook will be limited
>to only GNU versions.
No - you can re-license any work you create any way you want. But the version
on Wikipedia and all subsequent modifications by others will forever be under
the GNU FDL.
>It would be as if the work were forever condemned to
>be in its own, proprietary format, 100% incompatible
>with all other sources and licenses, including all other
>open content licenses that I am familiar with.
Proprietary? Do you have any idea what that means?
>Remember that the GNU FDL was created for software,
>not open content.
Uh, no. It was created for documentation and any other non-fiction works.
>And remember that even RMS says that it may
>not be the ideal license for open content.
He said no such thing. All that is written on that particular subject is that
"We recommend making all sorts of educational and reference works free also,
using free documentation licenses such as the GNU Free Documentation License
(GNU FDL)." and "For other kinds of works, we recommend you consider the
licenses proposed by Creative Commons." Where in there does it say that the
GNU FDL isn't ideal for open content?
>One solution I see would be to create a special
>version of the GNU FDL just for open content, or
>just for Wikipedia.
For God's sake man! The GNU FDL /is/ already for open content.
>That way we could decide for ourselves without
>needing the rest of the GNU world to go along with it.
Where were you two and a half years ago when such an idea actually had a
chance to see daylight? Due to the viral nature of the GNU FDL, it cannot
be revoked unless every single person who has ever contributed unique
copyrightable content to Wikipedia agreed to the change in license terms.
And to ignore Wikipedia as a text resource by having the textbook project
under an incompatible license or license combination would defeat the whole
purpose of Wikipedia. There is already a great deal of text in Wikipedia that
can be ported to textbook form and organization with relative ease.
>Or, adapting another license like one of the Creative
? Sorry, but they have the same problems. The only real advantage they have
over the GNU FDL is that they are easier to understand and are not written
specifically for documentation.
>That's the only way I see that will prevent eternal,
>unmixable forks of content.
And where are these mythical content forks that you speak of? There is no
magic bullet here and the only way we can ensure the freedom of our content
is to choose one copyleft viral license and go with it. Wikipedia is by far
the largest open content resource in the world -- let's follow their lead and
try to encourage license compatibility with the people who write the major
open content licenses.
-- Daniel Mayer (aka mav)
Once again today, someone is complaining about page load time on the pump. I am beginning to suspect that this is a bandwidth problem. What has been done to investigate the reason behind this problem, and what were the results of said tests? I haven't really heard a straight answer from anyone saying what the problem is, just speculation.
It strikes me as increasingly obvious that some concerted effort to be
as NPOV as possible on the Israeli-Palestinian issue is necessary, as
it's starting to be one of the more frequent edit wars, and distributed
throughout the wiki, even in places you might not expect.
Two issues in particular that have come up lately, one from each side:
1. [[User:BL]] is mass-adding the contents of palestineremembered.com --
massive lists with hundreds of subpages comprising every village
(defined as 10 or more people) destroyed in the 1948 war, every
"massacre" (defined as 10 or more people) committed or purportedly
committed during that war (little effort is made to distinguish), and a
whole host of other information that's difficult if not impossible to verify.
Even if it weren't for the difficulty in verifying this information, it
strikes me as somewhat odd that we'd have 300 pages dedicated to Arabs
killed in 1948, and only a single page dedicated to the Armenian
genocide, or the Pontian Genocide, or the Hutu-Tutsi genocide, and so
on. I don't think it'd be a good idea to add 10,000 pages or so, one
for each village ("village" defined as 10 people or more) destroyed in
each of those conflicts. And if we're going to have a separate page for
every instance of civilian deaths during a war, WW2 alone would be
another 10,000 pages or so.
2. [[User:RK]] is, as is probably obvious, somewhat of a pro-Israeli
activist, and is becoming difficult to clean up. The latest thing I've
noticed is him adding 2-paragraph-long attacks on Arab anti-Semitism to
articles such as [[George Washington]] and [[Benjamin Franklin]], in the
guise of "defending" their "tarnished" reputations against charges of
anti-Semitism stemming from little-known fabricated quotes.
Not to single out these two users in particular; they're the two that
come to mind at first. And these two issues in particular are also
being dealt with on talk pages. But it's becoming clear that it will be
very difficult to catch all of these, so perhaps some more concerted
effort is needed. I'm not sure exactly what to propose, but it seems as
a minimum we need a group of several people who are not particularly
partial to either side -- but who are knowledgeable about the issues --
to essentially police (hopefully in as unconfrontational a way as
possible) this sort of stuff. The problem is that those most
knowledgeable and interested in spending a great deal of time writing
articles on these topics are often those who are most partisan to one
side or the other.
> > The Daniel C. Boyer bandwagon is getting a bit out of control. Daniel is a
> > user on wiki who has an article about himself on wiki. While that seems fine,
> > as he is a real if rather minor celebrity and artist, his articles relating
> > to himself are breeding like rabbits.
The Cunctator wrote:
>I fail to see the problem. Wikipedia is meant to be a universal
As long as Daniel doesn't interpret "universal" as "universally about him"! :-)
>Everyone tries to invent their own phonetic system when they're in their
>teens. I did.
>But reinventing the wheel is generally a bad idea.
>if every dictionary & encyclopedia has its own system, then it is not
>transferable, and the reader has to relearn for each book they open.
>Let's stick to something that is universal: IPA / SAMPA.
I've seen IPA/SAMPA used in Wikipedia articles and didn't like it. The
former uses too many non-ASCII characters to be easy to use, and the latter
uses weird ASCII signs that make a word like @n"rid@bl [unreadable] look
l33t-like to those who don't know the system.
The system used in practically every school dictionary I've seen
(except for Merriam-Webster's) is simple: prime mark after accented
syllable (doubled for two accents in one word), flipped e for
schwa, breve (curve) for short vowels, macron (line) for long vowels,
diaresis or circumflex for vowels of far and fur, line across th for
voiced and no line for unvoiced. This last one is the only thing not
representable in Unicode; everything else is sĭm'pəl ēnəf' for us to use.
(If those words came out garbled, either your mail client or mine doesn't
support Unicode. That was 73 12D 6D 27 70 259 6C 20 113 6E 259 66 27 hex.)
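For readers whose mail client garbled that string, the hex codepoints above can be decoded mechanically. Here is a minimal Python sketch (nothing Wikipedia-specific, just standard Unicode handling):

```python
# Decode the space-separated hexadecimal Unicode codepoints from the
# message above into a readable string.
codepoints = "73 12D 6D 27 70 259 6C 20 113 6E 259 66 27"
decoded = "".join(chr(int(cp, 16)) for cp in codepoints.split())
print(decoded)  # sĭm'pəl ēnəf'
```

Any language that can map an integer codepoint to a character will do the same job.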
Even better is the limited letters-only system used in textbooks that aren't
dictionaries: complex words are repeated in identical-sounding imitations
(I-mi-TAY-shuns). If this latter system is formalized (FAR-muh-liyzd) it
may be simpler than SAMPA for those who don't have the time to learn it.
Geoffrey Thomas / jěf'rē tä'məs / JEF-ree TAH-muhs
I cannot get on to wiki. On the rare times I could today (less than 20% of
all attempts), I could not get into many pages. On those rare occasions when
I did actually get into something, I could not move between article and talk
page. If I actually could get onto a talk page I rarely could save anything,
and I could not get to my watchlist at all.
Instead I'd constantly get the damned message -
Host 'larousse.wikipedia.org' is blocked because of many connection errors.
Unblock with 'mysqladmin flush-hosts'
If this error persists after reloading and clearing your browser cache,
please notify the Wikipedia developers.
I reloaded and cleared the browser cache to no effect. And the 'please
notify the Wikipedia developers' message would draw a blank, going nowhere.
What is going on?
I would like to suggest that we simply keep HTML mail turned *OFF* and
not make a big fuss or flamewar about it. Both sides of the discussion
have good points, so let's just go for backwards-compatibility so
everybody can participate without problems, complaints, or annoyances.