Hi,
someone suggested that a 'remove article from watchlist' button on the
SpecialWatchlist page would be helpful.
I have also activated the option to put everything I edit on my
watchlist, and I agree that such a button would be useful.
A quick look at the code showed me that SpecialWatchlist uses the
functions beginRecentChangesList, recentChangesLine and
endRecentChangesList.
The question is: would it be better to include the button in these
functions (with an additional parameter etc.), or would it be better to
write separate functions for the watchlist?
I hope some of the developers can give me a hint.
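To make the design question concrete, here is a minimal sketch of the
"additional parameter" approach (in Python, with hypothetical names; this
is not actual MediaWiki code): the shared line formatter takes an optional
flag, and only the watchlist code path passes it.

```python
# Hypothetical sketch of the "extra parameter" approach: the shared
# formatter grows an optional flag instead of being duplicated, so the
# watchlist can request an extra "stop watching" link per entry.
# All function and URL names here are illustrative only.

def recent_changes_line(title, show_unwatch=False):
    """Format one recent-changes/watchlist entry as an HTML list item."""
    line = f'<li><a href="/wiki/{title}">{title}</a>'
    if show_unwatch:
        # Only the watchlist page would pass show_unwatch=True.
        line += f' (<a href="/wiki/Special:Unwatch/{title}">stop watching</a>)'
    return line + "</li>"
```

The alternative (separate watchlist functions) avoids the flag but
duplicates the formatting logic, so every later markup change has to be
made twice.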
--
Smurf
smurf(a)AdamAnt.mud.de
------------------------- Anthill inside! ---------------------------
This discussion is making my head spin... Let me see if I can get the
basics in order for myself (please, let me know if I don't have it
right):
* Everyone basically agrees that the text (not including quotes, which
I don't intend to discuss here) of Wikipedia is okay, even if the
text refers to an image that is fair use.
* Everyone agrees that the English Wikipedia has some images that we
  are legally allowed to use (on the website) under fair use, assuming
  that we don't claim they are GFDL.
* Most everyone agrees that there is little chance that we can get the
copyright holders of every image to switch to a GFDL license. This
said, some people say that we can't distribute the images with the
GFDL products, as it would violate the license.
* Most everyone has decided what they think, and will argue his/her
opinion until the cows come home (and then some).
If I understand correctly, the problem is most evident when we
consider printed formats (like grandma's encyclopedia). I don't think
that anyone has argued that we can distribute fair use images if we go
to a printed (combined) work. So, fair use images should not be in a
printed version.
This said, it seems reasonable to say that articles that *NEED* an
image should have GFDL images only. Articles that benefit from images
should use GFDL images or shouldn't talk about the images (don't say
"image below" or the like) as fair use images will not appear in a
printed version.
As I understand it, there are people who would argue that this is
unacceptable, and that some articles *NEED* an image where no GFDL
image is available. I'd love to hear of one or two examples where an
article is unacceptable without an image, and where a GFDL picture or
drawing would not suffice.
--
"Jason C. Richey" <jasonr(a)bomis.com>
After some delays and bug-hunting, my script for the static HTML
versions is in acceptable shape.
Here you can see an example, built from a SQL file of some weeks ago:
(Don't try the Search box!!! I explain why below.)
http://www.arcetri.astro.it/~puglisi/wiki/dump/ma/main_page.html
Please don't DOS the connection, it's not a very fast line.
Interested parties can find the script here:
http://www.arcetri.astro.it/~puglisi/wiki/wiki2static.txt
(renamed to .txt due to some server misconfig)
Use a wide terminal for this one. Everything (HTML code included) is in
one single file. The whitespace may appear weird because I use 4-space
tabs. There's no need to tell me you don't like the coding style, I
already know :-)))
Some issues:
- the topbar links do not work (known bug :-). The Edit link goes to the
online wikipedia site.
- interlanguage links are ignored
- some wiki markup is not recognized yet.
- no images are present (of course!)
- filenames should be OK for most filesystems that are not limited to
  "8.3" names (max 63 chars; only a-z, 0-9 and underscore)
- despite the two-letter subdirectories, some of them have over 4,000
files in them!
- Time: the script takes more than 2 hours on my 1.3 GHz Athlon...
- Size: this dump is about 800MB. (tar.gz is just 110MB). I think
that I can bring it down to 600-650MB with a bit of trimming and
eliminating unnecessary redirects. BUT, without some form of compression,
the English wikipedia will soon overflow a single CD. Maybe we should
target DVDs? :-)
- Images: no images are present here. AFAIK, each of them has an SQL
  record (which my script skips), but the actual image data is not
  included. How many megabytes of images do we have? I think it will be
  impossible to store the full images on a CD. Certainly it's possible
  on a DVD. Maybe a low-res version could be included on a CD.
- Search: I tried a JavaScript search that worked well for small
  databases: it's basically a big array of strings (article titles and
  filenames) with some lines of code that do a regexp match against
  them. For a full-sized database like this one, the search page becomes
  an 8-megabyte monster that takes forever to process (IE grabs 100 MB
  of memory and stops there; Opera is even worse). I'll see if I can
  find a different solution.
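The filename scheme described in the list above can be sketched like
this (a Python reconstruction of the described behaviour, not the actual
wiki2static code, which is linked above):

```python
import re

MAX_LEN = 63  # fits filesystems that are not "8.3"-limited

def article_filename(title):
    """Map an article title to subdir/name.html: lowercase, only
    a-z, 0-9 and underscore, truncated to 63 chars, bucketed into a
    two-letter subdirectory (e.g. "ma" for "Main Page")."""
    name = re.sub(r'[^a-z0-9]+', '_', title.lower()).strip('_')[:MAX_LEN]
    subdir = name[:2] or '__'  # fallback bucket for empty names
    return f"{subdir}/{name}.html"
```

For example, "Main Page" would come out as "ma/main_page.html", which
matches the dump URL above; the two-letter bucket is what leads to some
subdirectories holding thousands of files.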
Enough for now. While I carry on development, any input is welcome.
Ciao,
Alfio
Sorry to drag this up again.
But please could someone look into this:
http://sourceforge.net/tracker/index.php?func=detail&atid=411195&aid=746048…
I've just had to interlink the following pages:
[[de:Erdalkalimetalle]] [[eo:Teralkala metalo]]
[[ja:アルカリ土類金属]]
[[zh:%E7%A2%B1%E5%9C%9F%E9%87%91%E5%B1%9E]] [[fr:Métaux
alcalino-terreux]] [[en:Alkaline earth metal]]
and frankly, I've proceeded as if the above feature was already working
because culling the self-link from each page would have taken me
forever. So all those pages link to themselves.
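The culling step requested here, dropping the interlanguage link that
points back at the page's own wiki, could look roughly like this (a
hypothetical Python sketch, not the actual MediaWiki code):

```python
import re

def cull_self_link(wikitext, own_lang):
    """Remove the interlanguage link pointing at the page's own
    language, e.g. drop [[de:Erdalkalimetalle]] when rendering on the
    de wiki, so pages stop linking to themselves."""
    pattern = r'\[\[' + re.escape(own_lang) + r':[^\]]*\]\]\s*'
    return re.sub(pattern, '', wikitext)
```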
I have a problem with a new account of a user on the Dutch Wikipedia.
He cannot log in. A new password received via the "forgot password"
function does not work.
I have requested a new password for that user and asked him to forward
it to me. That does not work either.
Can a developer fix it, please?
User "Roel"
http://nl.wikipedia.org/wiki/Gebruiker:Roel
"Someone (probably you) at IP address 62.166.105.192 has requested that
a new password for Wikipedia NL be sent to you. The new password for
user "Roel" is "FNp1tsN". Advice: log in now and change your
password."
--
Contact: walter AT wikipedia.be
Want to write an article too? WikipediaNL, the free GNU/FDL encyclopedia
http://www.wikipedia.be
Why are we so stuck on the GFDL? Are we (as a community) opposed to
operating under another (though equally free) license that is more
specific about quotes and such?
--
"Jason C. Richey" <jasonr(a)bomis.com>
I haven't had any edit conflicts in preview mode in the last few months;
I only get them after hitting the save button. But I'm quite sure the
software used to check for edit conflicts when hitting "preview" as
well. Was this a necessary change, or is it just a bug?
Kurt
Hi,
there are still some duplicate entries in the German database, which can
be found with:
select cur_id, cur_title, count(cur_title) as num from cur where
cur_namespace=0 group by cur_title order by num desc limit 5
What is the best way to get rid of those duplicate entries?
- Delete them via SQL using the cur_id?
- Delete both via ''delete article'' and insert the one of interest
  again (if that works)?
- Something completely different?
Sorry for asking again after 3 weeks.
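To illustrate the SQL route without touching the live database, here is
a small self-contained demo in Python using an in-memory SQLite table
with the same column names (the live database is MySQL, and which copy
of a duplicated article to keep is a separate judgment call): it runs
the diagnostic query above, then deletes every duplicate except the
highest cur_id per title.

```python
import sqlite3

# Toy "cur" table with one duplicated title.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cur (cur_id INTEGER, cur_namespace INTEGER, cur_title TEXT)")
con.executemany("INSERT INTO cur VALUES (?, 0, ?)",
                [(1, "Berlin"), (2, "Berlin"), (3, "Hamburg")])

# The diagnostic query from the mail: titles ordered by occurrence count.
dupes = con.execute(
    "SELECT cur_title, COUNT(cur_title) AS num FROM cur "
    "WHERE cur_namespace=0 GROUP BY cur_title ORDER BY num DESC LIMIT 5"
).fetchall()

# Delete every row except the highest cur_id per title. Whether the
# highest cur_id is the right copy to keep depends on the page histories.
con.execute(
    "DELETE FROM cur WHERE cur_namespace=0 AND cur_id NOT IN "
    "(SELECT MAX(cur_id) FROM cur WHERE cur_namespace=0 GROUP BY cur_title)"
)
```

After the delete, only one "Berlin" row (cur_id 2) and the "Hamburg" row
remain.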
--
Smurf
smurf(a)AdamAnt.mud.de
------------------------- Anthill inside! ---------------------------
--- Anthere <anthere6(a)yahoo.com> wrote:
>
> --- Erik Moeller <erik_moeller(a)gmx.de> wrote:
> > We're now up to the latest Wikipedia software version on the English
> > Wikipedia. Some changes I and others made in the last weeks:
> >
> > 1) There is a new feature called "oldest articles" (colloq. "ancient
> > pages"). Currently this inevitably lists first all the imported
> > articles from the phase I software that have not been edited since.
> > Hopefully, as these become updated, this feature will allow us to
> > systematically go through our old material and make sure it is in
> > good shape and up to date.
> >
> > 2) Sysops will note that the page deletion feature now auto-pastes
> > the content of pages that are smaller than 500 bytes into the
> > deletion comment (only the first 150 characters). It also does so if
> > the current revision is blanked and the previous revision contains
> > text that can be pasted.
> >
> > 3) The deletion feature now indicates if you are about to delete a
> > page that has a history.
> >
> > 4) The "New pages" list now shows the byte size of each page, making
> > it easier to pick nice, long articles for the Main Page.
> >
> > 5) Some minor layout stuff, some by me, some by others. Notably, the
> > ugly "It was last modified" sentence at the bottom of each page has
> > been fixed.
> >
> > Regards,
> >
> > Erik
>
> Is it planned to make these available to other wikis as well? Numbers
> 2 and 3 in particular.
>
> Ant
I think my question was missed, probably because I addressed it to the
wrong list.
International people are wondering when the latest English features will
be applied to the other phase III Wikipedias. Is there anything we
should provide to help there?
Ant
Hello all,
a new static HTML dump can be found at:
http://www.arcetri.astro.it/~puglisi/wiki/wikipedia/ma/main_page.html
The script used for this is not downloadable yet, but it will be soon.
Many errors with wikipedia markup, redirects, etc. are fixed. Search sorta
works with IE and Mozilla.
What to do now:
- images are not included, but linked to the online site (each image is
represented as a link). This is a stopgap solution, but I don't have many
ideas about this :-)
- search is done with word indexes (only for page titles), and is
acceptably fast on IE and a bit slower on Mozilla. Still does not work on
Opera, and still limited to only one word to search for, but already
usable :-)
- according to Nero, this dump fits onto a 660 MB CD-ROM, so it's quite
burnable :-)
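The word-index search mentioned in the list above can be sketched like
this (a Python reconstruction of the idea, not the actual JavaScript in
the dump): map each title word to the pages containing it, so the client
looks up whole words instead of regexp-scanning one huge string array.

```python
from collections import defaultdict

def build_title_index(titles):
    """Build a word -> set-of-titles index over page titles only."""
    index = defaultdict(set)
    for title in titles:
        for word in title.lower().split():
            index[word].add(title)
    return index

def search(index, word):
    """Single-word lookup, mirroring the one-word limitation above."""
    return sorted(index.get(word.lower(), set()))
```

Lookup cost is then independent of the total number of titles, which is
why this stays fast where the 8 MB regexp page did not.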
So now that the technicalities are being resolved, it's time for
politics to enter the issue: namely, what will we do with this version?
CD distribution? HTML mirrors? What else?
New version planned to fix some of the problems listed above. Stay tuned
:-)
Ciao,
Alfio