"Hunter Peress" <hfastjava(a)yahoo.com> schrieb:
> can someone act quickly before an enterprising punk reads this ;-)
Sorry, too late. According to the whois information, wp.org is owned by
"Name-4-Sale, Div. of QBX INC".
So, I'd like to add a little block of attribution data to each page
(optional, per-installation; I'm guessing Wikipedia wouldn't use
this). Something along the lines of:
This article last edited on April 21, 2004 by Evan Prodromou.
Based on work by Alice Notaperson, Bob Alsonotaperson, users
Crankshaft, Deckchair and Eggplant, and anonymous editors.
For each (distinct) person who's listed in the old table, it'd show
their real name if it's set, or their user name if not. All anonymous
edits would be lumped under "anonymous editors". Contributors would be
listed with real-named folks first, then pseudo'd folks, then
anonymous. There's no particular reason for that; it could be any
other way (although I don't see a big point making it configurable).
The goal here is to make it easy for redistributors to comply with
license provisions that require author attribution (such as some
Creative Commons licenses), without having to dig through a whole
bunch of history pages.
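To make the display rule concrete, here's a small sketch (mine, not actual
MediaWiki code) of rendering that block from a contributor list; it assumes
the array shape proposed below, i.e. entries of (user ID, account name, real
name or empty), plus a flag for anonymous edits:

<?php
# Hypothetical rendering helper; the function name and array shape are
# assumptions based on this proposal, not existing MediaWiki code.
function wfFormatAttribution( $contributors, $hasAnonymous ) {
    $names = array();
    foreach ( $contributors as $c ) {
        # Real name if it's set, user name otherwise
        $names[] = ( $c[2] != '' ) ? $c[2] : 'user ' . $c[1];
    }
    if ( $hasAnonymous ) {
        $names[] = 'anonymous editors';  # all anon edits lumped together
    }
    if ( count( $names ) == 0 ) {
        return '';
    }
    if ( count( $names ) == 1 ) {
        return 'Based on work by ' . $names[0] . '.';
    }
    $last = array_pop( $names );
    return 'Based on work by ' . implode( ', ', $names ) .
           ' and ' . $last . '.';
}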
Anyhoo, the Metadata.php code already does most of this logic, albeit
for output in RDF format. I'd like to take that stuff and put it in
the Article class, in a method like "getContributors". The method
could then be used both from the attribution code and from the RDF code.
getContributors would return an array of arrays, each of which would contain:
0. User ID
1. User account name
2. User real name, if set
Another option would be to create User objects for each entry in the
returned array, but a) I don't think that most of the User object
fields (email, preferences) are needed, and b) I'd be worried about
slingin' around incomplete User objects. So, I think the arrays are
the best bet.
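For illustration, a minimal sketch of such a method follows. It assumes the
2004-era schema (an old table keyed by old_namespace/old_title, and a user
table carrying user_real_name), and plain mysql_* calls stand in for the
wiki's DB layer, so treat the names as assumptions rather than a definitive
implementation:

<?php
# Sketch of the proposed Article::getContributors(); schema and column names
# (old_user, old_user_text, user_real_name) are assumptions.
function wfCompareContributors( $a, $b ) {
    # Real-named folks sort before pseudonymous ones; ties break on name.
    $aReal = ( $a[2] != '' );
    $bReal = ( $b[2] != '' );
    if ( $aReal != $bReal ) {
        return $aReal ? -1 : 1;
    }
    return strcmp( $aReal ? $a[2] : $a[1], $bReal ? $b[2] : $b[1] );
}

function getContributors( $db, $namespace, $title ) {
    $sql = "SELECT DISTINCT old_user, old_user_text, user_real_name " .
           "FROM old LEFT JOIN user ON user_id = old_user " .
           "WHERE old_namespace = " . intval( $namespace ) .
           " AND old_title = '" .
           mysql_real_escape_string( $title, $db ) . "'";
    $res = mysql_query( $sql, $db );
    $contributors = array();
    while ( $row = mysql_fetch_object( $res ) ) {
        if ( !$row->old_user ) {
            continue;  # anonymous edits get lumped into one label elsewhere
        }
        # Each entry: 0 => user ID, 1 => account name, 2 => real name or ''
        $contributors[] = array( $row->old_user, $row->old_user_text,
                                 (string)$row->user_real_name );
    }
    usort( $contributors, 'wfCompareContributors' );
    return $contributors;
}

The sort just implements the real-named/pseudonymous ordering described
above; anonymous rows (old_user = 0) are skipped and can be reported as a
single flag for the "anonymous editors" line.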
Does returning an array of arrays seem insane? Would it be wrong to
add this method to Article? If so, where else would it go?
Evan Prodromou <evan(a)wikitravel.org>
Wikitravel - http://www.wikitravel.org/
The free, complete, up-to-date and reliable world-wide travel guide
I have gotten a complaint that 188.8.131.52 is or was attempting to
use a proxy at 184.108.40.206. How can that be?
It is important that we not probe third parties for open proxies under
most circumstances. It might be appropriate if we are experiencing
direct abuse ourselves and are trying to track down why, but as a
matter of routine it generates complaints that will get us in big
trouble with our ISP.
"Magnus Manske" <magnus.manske(a)web.de> schrieb:
> If the GFDL really requires that list *on the same document* (can't be
> really the same page, think printed version again), can't we declare the
> whole wikipedia as one giant document in itself? [Translation to
> legalese would be required]
The GFDL really requires that; furthermore, it specifies _where_ in the
document it must be:
"List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement."
Where "Title Page" is defined as:
"The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text."
Thus, the GNU/FDL not only requires us to put the five main authors on the page; it requires us to
put the last author plus the five main authors of the previous version directly under the title of the
page. If we call the whole thing a single document, that means we have to do so on the Main Page.
the domain "wikipedia.at" redirects to pliny.wikipedia.org
(220.127.116.11) which is currently down. Can somebody please
correct this so that the domain points to the German Wikipedia
There's been some progress with the monobook skin recently; the new
additions are basic RTL support and user styles.
You can tweak styles in the monobook skin by adding a page called
'monobook.css' as a subpage of your user page. My test css is at
http://test.wikipedia.org/wiki/User:Gwicke/monobook.css for example.
The same works for JS; the page is called monobook.js in that case.
Other skins don't have the links in the header currently, but those are
easy to add.
The CSS and JS pages are editable only by the user in question and by
developers; they appear protected to anybody else.
The wiki source is retrieved with a new method that returns the raw wiki
text; application/x-zope-edit is one of the supported content types. All of
them return the plain wiki source; only the Content-Type header differs.
A charset option can also be given.
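For example, fetching a user's stylesheet through this interface might look
like the following; the action=raw and ctype parameter names are my guesses
at how the method is exposed, since the message doesn't spell them out:

<?php
# Hypothetical client for the raw-text method described above; "action=raw"
# and "ctype" are assumed parameter names, not confirmed by this message.
$url = 'http://test.wikipedia.org/wiki/User:Gwicke/monobook.css'
     . '?action=raw&ctype=text/css';

# file_get_contents() needs allow_url_fopen; any HTTP client works too.
$src = file_get_contents( $url );
if ( $src === false ) {
    die( "Could not fetch raw wiki source\n" );
}
echo $src;  # same wiki source either way; only the Content-Type header varies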
Brion and I have added an RTL stylesheet to monobook; it seems to work
fine in Opera 7.23, Mozilla/Firefox, IE 5.5, and mostly in IE6. Screenshots:
It's time to realize that we don't have enough servers in our cluster. One goes down, and we have problems... and Wikipedia is slow!
It would be a really good idea to get 2-3 more servers to be used as Apache or Squid boxes.
Another thing: Zwinger, our NFS server, needs fast read access, because each time a user requests a page, it hits at least one file on Zwinger...
An NFS server with a RAID 1 IDE array would be fine, I think, and it would improve reliability (big disks, of course; we're starting to run short of space on Zwinger too: 4 GB left with only 2 backups).
The current Squid server could then be used as an Apache server.
And of course, a DB server!
We have compressed the old table on Suda to save some space, but we're still really short of it. And we can't migrate other Wikipedias such as de, nl, ... to UTF-8 before getting more space and speed. There are only 3 GB left on Suda, and we need space for logs, so we often copy logs off to another server...
If we run out of space, we'll have to use Curly as the DB server, with only one IDE drive, which would take us back to December, when Wikipedia was so slow (yeah, I know it's slow now, but it was worse in December).
Wikimedia must get new servers to improve speed, at least for those who have given money!
Since the machine is only accessible physically, please run memtest86 to determine if there is
bad memory. If not, it's possible that, like Coronelli, the machine will resume normal
operation after removal of some RAM.
Please try to save all the logs you can during the troubleshooting process so that, in case a fix
is not found, we can bring lkml to our aid.