I thought we might have another week, but according to the last wikistats
update we had 495K articles on the 20th. That means we should distribute the
press release on Monday.
http://meta.wikipedia.org/wiki/Wikimedia's_first_press_release
My question is this: Should the non-English versions of the press release be
distributed first (followed by the English one that Wednesday), or can the
English one be distributed at the same time? Can the servers handle it?
I assume it doesn't really matter now, but I still would like to have
confirmation on that point.
-- mav
__________________________________
Do you Yahoo!?
Yahoo! Mail SpamGuard - Read only the mail you want.
http://antispam.yahoo.com/tools
The following brief conversation took place on
WikiEN-l. Seems like someone here would know what's
going on, if anyone does.
-Rich Holton (aka Anthropos)
--- John Knouse <jaknouse(a)frognet.net> wrote:
> Date: Sat, 21 Feb 2004 16:42:07 -0800
> From: John Knouse <jaknouse(a)frognet.net>
> To: wikien-l(a)Wikipedia.org
> Subject: [WikiEN-l] wikipedia and hard drive
>
> I'm using Netscape 4.7 and the dreaded Windows 98 first edition. I'm
> going to try redownloading and reinstalling Netscape to see if that
> makes a difference.
>
> John Knouse
> jaknouse(a)frognet.net
>
>
> >
> > Message: 9
> > Date: Sat, 21 Feb 2004 10:56:25 -0800 (PST)
> > From: Rich Holton <rich_holton(a)yahoo.com>
> > Subject: Re: [WikiEN-l] Wikipedia and hard drive
> > To: English Wikipedia <wikien-l(a)Wikipedia.org>
> > Message-ID:
> <20040221185625.82021.qmail(a)web60304.mail.yahoo.com>
> > Content-Type: text/plain; charset=us-ascii
> >
> > Wow, John. Sounds really weird. What OS / browser are you using?
> >
> > -Rich Holton (aka Anthropos)
> >
> > --- John Knouse <jaknouse(a)frognet.net> wrote:
> > > There's something wrong with Wikipedia or my connection with it.
> > > All of a sudden, starting Friday, Feb. 20, whenever I log onto
> > > Wikipedia it fails to respond, and all the free space on my hard
> > > drive suddenly disappears. This does not happen with anything else
> > > on-line. Anybody know what's going on?
> > >
> > > --
> > > John Knouse
> > > jaknouse(a)frognet.net
>
> _______________________________________________
> WikiEN-l mailing list
> WikiEN-l(a)Wikipedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikien-l
Brion,
Camille Constans is taking care of running the wikistats on the server now,
which is very kind of him, and one less burden for you.
Two questions:
A
Could you cast the magic spell on http://www.wikipedia.org/wikistats/csv/
again?
"Access is forbidden."
B
The Webalizer pages have not been generated since January 23.
Is this because all Wikipedias are now accessed through several web servers,
and the access logs each tell an incomplete story?
If so, is it an option to run a Webalizer job on each server?
I might then merge the page request/visits per day figures for each web
server in my script.
If so, what URLs/folders do I use to access the data?
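The per-server merge could be as simple as summing per-day counts. A minimal sketch, assuming the figures arrive as one date-to-count mapping per server (the data layout is an assumption, not the actual Webalizer output):

```python
# Merge per-day page-request counts collected from several web servers.
# Each input is a mapping of date -> count, one per server; the merged
# result sums the counts for each day across all servers.

def merge_daily_counts(per_server_counts):
    """Sum per-day request counts across servers."""
    merged = {}
    for counts in per_server_counts:
        for day, requests in counts.items():
            merged[day] = merged.get(day, 0) + requests
    return merged

# Example with two hypothetical servers' figures:
server_a = {"2004-01-20": 120000, "2004-01-21": 130000}
server_b = {"2004-01-20": 80000, "2004-01-22": 90000}
print(merge_daily_counts([server_a, server_b]))
```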
Erik Zachte
Trying to keep [[en:Wikipedia:Protected pages]] up-to-date, I want to run
the SQL query
SELECT cur_namespace,cur_title FROM cur WHERE cur_restrictions="sysop"
However, this gets hit by the 30-second limit, even with a LIMIT as low as 30.
Which I find strange, as it seems to be a rather simple search, no more
complicated than, say:
SELECT cur_namespace,cur_title FROM cur WHERE cur_namespace=9
for which the complete results (ca. 50 items) appear in a second or so.
Is cur_restrictions for some reason so expensive to query? And if so, why,
and what can be done about it?
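If the slowness is simply that cur_restrictions has no index while cur_namespace does, the difference is a full table scan versus an index lookup. A toy sqlite3 demo (not the live MySQL setup; the table contents are invented) shows how adding an index changes the query plan:

```python
import sqlite3

# Toy stand-in for the cur table; column names follow the query above,
# but the real MediaWiki schema and data volumes differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cur (cur_namespace INT, cur_title TEXT,"
             " cur_restrictions TEXT)")
conn.executemany(
    "INSERT INTO cur VALUES (?, ?, ?)",
    [(0, "Page_%d" % i, "sysop" if i % 1000 == 0 else "")
     for i in range(5000)])

def plan(sql):
    """Return sqlite's query-plan description for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT cur_namespace, cur_title FROM cur WHERE cur_restrictions = 'sysop'"
print(plan(query))  # without an index: a full scan of cur

conn.execute("CREATE INDEX idx_restrictions ON cur (cur_restrictions)")
print(plan(query))  # now an index search instead of a scan
```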
Andre Engels
Purging of images is now in CVS.
For scaled images, this depends on visiting the page containing the scaled
image once without caching, which means it is automatic for users with a
session.
The image is purged whenever a new scaled image is generated, which is
done whenever a link to this image is rendered. In some cases anonymous
users might still have to reload to get the new scaled image, but this
should be rare.
Unscaled images are purged immediately on edit, deletion or revert.
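The rules above can be sketched roughly like this (the function and event names are invented for illustration; this is not the actual MediaWiki code):

```python
# Illustrative sketch of the purge rules described above.

def should_purge(image, event):
    """Decide when a cached copy of an image must be purged.

    Unscaled images are purged immediately on edit, deletion or revert;
    scaled images are purged when a new scaled version is generated,
    i.e. when a link to the image is next rendered without caching.
    """
    if not image["scaled"]:
        return event in ("edit", "delete", "revert")
    return event == "rescale"  # new scaled image generated while rendering

print(should_purge({"scaled": False}, "edit"))     # True
print(should_purge({"scaled": True}, "edit"))      # False until re-rendered
print(should_purge({"scaled": True}, "rescale"))   # True
```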
--
Gabriel Wicke
I am having a problem with a new wikipedia hobby: cleaning out dead end
pages. The problem is that a number of the entries in the list of dead
end pages are "#redirect" pages which are not at all dead ends. As an
example, at the moment of this writing, in
http://en.wikipedia.org/wiki/Special:Deadendpages
entry number 20 is [[AIBO]], which is a redirect to [[Aibo]]. The latter
is not a dead end page, and even if it were, [[AIBO]] should
probably not be listed either....
Clearly not all redirects are listed, but the ones that are listed make
cleaning up a lot more difficult.
Rob
--
Rob W.W. Hooft || rob(a)hooft.net || http://www.hooft.net/people/rob/
> Currently only the image page is purged,
> not the image itself.
> I'll try to take a look at it tonight.
> --
> Gabriel Wicke
Is this really true?
I'm asking because we have a possible image copyright infringement on the
Danish Wikipedia which may require us to delete a large number of
pictures in a couple of days. We would need those pictures removed
completely, not just the image page.
You can see the dispute and the current list of pages/pictures that might
need to be deleted here:
http://da.wikipedia.org/wiki/Wikipedia_diskussion:Kilder
Regards
Christian
I wonder if anyone might be excited to go through a few days of
access_logs and determine the top 1000 most popular images on
wikipedia? I think this would be useful in our discussions on
wikipedia-l as to "fair use" and so on.
My own position, which is not (yet) shared by many, is that we should
take a hardline stance against using anything that isn't pretty
clearly GNU-free. That means (a) works licensed under some free license
such as the FDL or one of the Creative Commons free licenses, (b) public
domain works, and (c) possibly some fair use, under the most restrictive
law of the language in question.
It would be informative to know what images our users are actually
seeing, so that we can guess at how difficult it will be to eliminate
them or replace them with free equivalents.
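As a rough sketch of the log crunching, assuming Apache-style access logs and that image hits can be spotted by file extension (both are assumptions about the actual 2004 setup, as is the /upload/ path):

```python
import re
from collections import Counter

# Count image requests in Apache-style access logs and report the
# most popular ones.

REQUEST_RE = re.compile(r'"GET (\S+\.(?:jpg|jpeg|png|gif)) HTTP')

def top_images(log_lines, n=1000):
    """Return the n most-requested image paths with their hit counts."""
    counts = Counter()
    for line in log_lines:
        m = REQUEST_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common(n)

# Tiny invented sample of log lines:
sample = [
    '1.2.3.4 - - [21/Feb/2004] "GET /upload/a/ab/Foo.jpg HTTP/1.1" 200 1234',
    '1.2.3.4 - - [21/Feb/2004] "GET /wiki/Main_Page HTTP/1.1" 200 5678',
    '5.6.7.8 - - [21/Feb/2004] "GET /upload/a/ab/Foo.jpg HTTP/1.0" 200 1234',
]
print(top_images(sample, n=10))  # [('/upload/a/ab/Foo.jpg', 2)]
```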
--Jimbo
"RameezDon" <rameezdon(a)gawab.com> wrote:
> I'm facing a problem with the structure of the MySQL database.
>
> I'm not able to understand how the articles are categorized in the "cur" table.
>
> Of course "cur_id" acts as the primary key, and with the help of "cur_title"
> we get the title of the article... but how do we relate a particular article
> to a specific field, say "Physics" or "Engineering", or any other specific
> branch? Is the "cur_namespace" field somehow related? But it consists of
> integers and nothing else... so could you please help me with this?
No, the subject area is not encoded at all. Only the text of the article
gives this kind of information; as things currently stand, it is not
explicitly encoded in the software.
If it is the "(Area)" behind the title you are looking for, this is simply
part of the title.
The cur_namespace part shows whether the page is a normal encyclopedia
page (0), talk page (1), user page (2), Wikipedia page (4), image description
page (6), etcetera.
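The numbering can be captured in a small lookup table. Only the values listed above are taken from this message; the helper function itself is hypothetical:

```python
# Lookup table for the cur_namespace values mentioned above.
NAMESPACES = {
    0: "Article",
    1: "Talk",
    2: "User",
    4: "Wikipedia",
    6: "Image description",
}

def describe(cur_namespace, cur_title):
    """Label a cur row with its namespace kind (hypothetical helper)."""
    kind = NAMESPACES.get(cur_namespace, "Other")
    return "%s (%s page)" % (cur_title, kind)

print(describe(0, "Physics"))   # Physics (Article page)
print(describe(6, "Foo.jpg"))   # Foo.jpg (Image description page)
```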
Andre Engels