[Foundation-l] Attribution by URL reasoning?

Lars Aronsson lars at aronsson.se
Wed Mar 11 21:18:41 UTC 2009


Anthony wrote:

> In 1.0 and 2.0 I assume the appropriate section is 4(d).  The 
> change from 1.0 to 2.0 adds a requirement to specify a URL.

Copyright (and also the European author's rights / Urheberrecht) 
used to be all about making copies, presumably physical copies. In 
trials such as the one against The Pirate Bay, an essential 
question is: who made the copies?  Was it the uploader, the 
Internet provider, the link index, or the downloader?  The four 
defendants stand accused of assisting in the unauthorized creation 
of copies (medhjälp till exemplarframställning). This question is 
completely irrelevant to computer science, because bits are copied 
all the time, from one transistor to the next.  But it is relevant 
to copyright law, because it is full of references to copy-making.

Record stores and libraries are collections of copies.  The 
copies are the substance, and the space between the copies 
(sleeves, shelves, brick walls) is just an empty shell.  (There 
are exceptions: The Library of Congress would probably not be 
quite the same if it moved to some other building.)

The Internet, on the other hand, consists entirely of the space 
between the copies: servers, storage systems, connections, 
routers, networks, HTML markup, links, domain names, websites, 
search engines, page ranks, browsers, user communities, brand 
recognition, foundations, companies.  There's so much jar and so 
little jam that we have given the jam a special name: content.

When I started a Gopher server in 1992, it was a fun technical 
experiment, but I needed content and had none.  I started to write 
my own texts, but didn't get far.  So I started to scan 
out-of-copyright Scandinavian books.  I looked at Project 
Gutenberg and called my offspring Project Runeberg.  But I was 
surprised that they only cared for the books, the e-texts, and 
didn't seem to care for the space between the copies.  Theirs was 
just a pile of e-texts, which could be copied around at will, 
residing on some random FTP server that could change at any time. 
My first wish had been to build the server, the structure around 
the e-texts. They thought a mirror server was a great help in 
spreading e-texts. I thought a mirror server was a sign of my 
failure to build the one-stop-shop. Since the e-texts are out of 
copyright, I couldn't stop people from mirroring them, but I could 
build a server that was good enough and pretty enough that there 
was no immediate need for mirrors.

When Wikipedia started in 2001, I was fascinated by the concept, 
but surprised to find the same old "content only" attitude there 
again. Mirrors were allowed and encouraged. Is this really the 
way? So many details in the technology and project guidelines 
could be questioned that I started my own wiki in Swedish 
(susning.nu), which did a few things differently. Among them, I 
didn't require 
GFDL licensing from my users, so the content was not "free" and 
could not be mirrored. This is not because I'm an evil person, but 
because I didn't see why that content needed to be free and 
mirrored if I provided a good enough server. Skipping the free 
licensing created a lock-in, which is something that most 
businesses would envy. Again, susning.nu was primarily a really 
fun technical experiment. It took until December 2003 for the 
German Wikipedia to catch up, at 50,000 articles. Soon after that, 
vandalism got out of hand, and I closed susning.nu in April 2004.
The user community moved over to the Swedish Wikipedia, which has 
been quite successful.

That was 2001-2004. Since closing susning.nu, I have tried to 
help Wikipedia. I now have a better understanding of why free 
licensing is useful. What I don't fully understand is this: if 
we're really trying to fulfill the mission ("Imagine a 
world..."), why are 
we spending so much effort (fundraising, staffing, operations) to 
operate the world's 7th most visited webserver? Is that necessary 
or just a burden? If free content is our focus, why spend so much 
on the space between the copies? The 7th most visited website is a 
Manhattan bank with marble-clad walls. Do we need that?

If the content is free, people don't need to drink from our 
water tap. It's the water that's important, not the tap. We could 
have a minimal webserver to receive new edits. Serving replication 
feeds to a handful of media corporations (who might pay for it!) 
should be far cheaper than receiving all this web traffic.  Some 
universities might serve up ad-free mirrors. We could be the 
Associated Press instead of the New York Times, the producer 
instead of the retailer.

Or is the fact that we spend so much to maintain the 7th most 
visited website an admission that the space between the copies 
actually has great value to us? A value that will be 
strengthened by cementing its URL and/or the name Wikipedia 
(attributing the project) into the new license?

I'm not against that. I will go with whatever. I'm very flexible 
and I still think this is a very fun technical experiment. But I 
think the change is worth some consideration.


-- 
  Lars Aronsson (lars at aronsson.se)
  Aronsson Datateknik - http://aronsson.se

  Wikimedia Sverige - support free knowledge - http://wikimedia.se/



