It strikes me as violating the principle of least astonishment that
"Delete this page" and the little "(del)" link next to the current image
revision do different things on an image description page, namely:
* "Delete this page" deletes only the image description page, leaving
the image file and its revisions intact, and the image remains in the
images list
* "(del)" deletes the image file, any old revisions, the entry from the
images list, *and* the description page.
User expectation seems to be that "delete this page" should perform the
second function.
-- brion vibber (brion @ pobox.com)
Thanks for rebooting the server, Brion. One of these days, I'll have to learn how to do it myself.
Ed Poor
-----Original Message-----
From: Brion Vibber [mailto:vibber@aludra.usc.edu]
Sent: Monday, November 25, 2002 5:46 PM
To: wikipedia-l(a)wikipedia.org
Subject: Re: [Wikipedia-l] Help accessing Wikipedia?
On Mon, 25 Nov 2002, Zoe wrote:
> The Wikipedia seems to be inaccessible right now. Anybody know what's
> wrong and if it's something that's going to be fixed any time soon?
Up again.
I tried upping the priority on the simple queries used for loading up and
displaying pages to see if it would improve interactive performance, but I
don't think it helped much.
-- brion vibber (brion @ pobox.com)
Nick,
Your idea assumes that the "lag" problem is due to overloading a single machine, which plays double roles: database backend and web server. So, if we divide the work among two or more machines, you expect faster throughput. Right?
(I'm just repeating the obvious to make sure that what's obvious to me, is what you really meant!)
I guess if we all pitch in $50 each we can buy another machine. Where should I send my money?
Ed Poor
Please restore the old meaning of <pre>, which suppressed interpretation
of wiki markup! (For the Polish Wikipedia at least; I don't care much
whether others have broken markup.)
It broke virtually all code examples on the Polish Wikipedia!
And please don't make any more changes to the markup in the future
without telling people about it.
Hello all,
Could someone please create a page similar to
[[Wikipedia:German Wikipedia language links]], now that the Polish Wikipedia
has been moved to the Phase III software and fully supports interwiki
links?
Thank you.
Regards,
Kpjas.
I want to add support for a TeX mode to the Wikipedia script.
I wrote a program which takes TeX code on stdin,
validates and standardizes it (so that "x+y" and "x + y" don't have to be
generated twice), and even has some extensions ("%" -> "\%");
it checks if the given image already exists, and if it doesn't, passes it to
latex, dvips, and convert to get a nice antialiased PNG file.
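The standardization idea above can be sketched like this (a minimal Python illustration; the real program parses full TeX, while this stand-in just collapses whitespace, and all names here are hypothetical):

```python
import hashlib
import re

def canonicalize(tex):
    # Stand-in for the real parse/regenerate step: collapse all
    # whitespace so "x+y" and "x + y" map to one canonical form.
    return re.sub(r"\s+", "", tex)

def image_name(tex):
    # Hash the canonical form, not the raw input, so equivalent
    # inputs share a single rendered PNG file.
    return hashlib.md5(canonicalize(tex).encode()).hexdigest() + ".png"
```

With this, image_name("x+y") and image_name("x + y") name the same file, so the second spelling never triggers a second latex run.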
How should it be integrated with the Wikipedia script?
When rendering, the Wikipedia script should find all <math>.*</math> in the
markup, call this program (which is very fast if the images are already
rendered), and use the information it provides to render the page. I'm not
even sure what that program should return.
There are 3 possibilities:
* illegal markup - <tt>Illegal markup: ^$%^*^%$%(^$%(^$^$%^$</tt>
* markup validated ok, latex runs correctly - <img src="/path/345456986858674.png">
* markup validated ok, latex failed - ???
The last case shouldn't happen too often, but if we always wait for latex
to finish, that will unnecessarily increase latency.
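The integration step could look roughly like this (a Python sketch, not PHP; the render_one callback and its three status values mirror the three possibilities above and are assumptions, not the actual interface):

```python
import re

def render_math(markup, render_one):
    # render_one(tex) is assumed to return a (status, payload) pair:
    #   ("bad",  message) - illegal markup
    #   ("ok",   path)    - validated, PNG exists or latex succeeded
    #   ("fail", None)    - validated, but latex itself failed
    def replace(match):
        status, payload = render_one(match.group(1))
        if status == "bad":
            return "<tt>Illegal markup: " + payload + "</tt>"
        if status == "ok":
            return '<img src="' + payload + '">'
        return "<tt>Failed to render formula</tt>"
    return re.sub(r"<math>(.*?)</math>", replace, markup, flags=re.DOTALL)
```

Each <math> span is replaced independently, so one bad formula doesn't block the rest of the page.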
I'm also thinking that some similar solution should be added for
chemical reactions.
Any opinions?
The Special: namespace doesn't really count, because it refers to pages
that are dynamically generated, not to anything that is article-like.
The Wikipedia: namespace: is it a namespace only because changes to
articles in it don't show up in Recent Changes?
Then obviously the User:, User_talk:, Talk:, Wikipedia_talk:, and
Special_talk: namespaces, although I'm not sure those things really
need to be separate namespaces.
Jonathan
--
Geek House Productions, Ltd.
Providing Unix & Internet Contracting and Consulting,
QA Testing, Technical Documentation, Systems Design & Implementation,
General Programming, E-commerce, Web & Mail Services since 1998
Phone: 604-435-1205
Email: djw(a)reactor-core.org
Webpage: http://reactor-core.org
Address: 2459 E 41st Ave, Vancouver, BC V5R2W2
Tomasz Wegrzanowski wrote:
> On Wed, Nov 27, 2002 at 10:38:56AM -0800, Brion Vibber wrote:
>> 2 - 'User' (user pages; names coincide with user names -- but there
>> are also crudely made slash-subpages in user namespace)
>
> User's subpages are very useful. Can we add some
> support for them? For example by automatically
> adding links from User:X to all User:X/*
> and a link from every User:X/* to User:X?
Please do. This used to be available to all pages but
was killed in Phase III. Killing subpage functionality
for the article namespace makes perfect sense but this
feature would still be very useful for user pages and
all talk pages. It is currently a chore to make talk
archive subpages and user subpages.
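One way the automatic back-link could work is to derive the parent title from the slash in the page name. A rough Python sketch (a hypothetical helper, not part of the actual software):

```python
def parent_page(title):
    # For a subpage like "User:X/Archive1", the parent is everything
    # before the first slash; top-level pages have no parent.
    base, sep, _ = title.partition("/")
    return base if sep else None
```

The reverse direction (User:X listing all User:X/* subpages) would be a prefix query against the page table rather than string manipulation.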
-- Daniel Mayer (aka mav)
I have a program which does something like this:
    string text_in, text_out;
    tex_tree tree;
    string hash;

    text_in = read_whole (stdin);
    tree = parse (text_in);
    if (tree == null)
    {
        print ("failure " + text_in);   // report the input that failed to parse
        return;
    }
    text_out = regenerate (tree);       // canonical form: "x+y" and "x + y" come out identical
    hash = md5 (text_out);
    if (!file_exists (filesystem_path + hash + ".png"))
        if (fork () == 0)
        {
            // validation ensures that latex won't fail too often
            call_latex_to_generate_that_file (text_out, filesystem_path + hash + ".png");
            _exit ();
        }
    print ("<img alt=\"" + text_in + "\" src=\"" + http_path + hash + ".png\">");
    return;
Now, the problem is: it can be an external program, but one fork/exec is
required per TeX equation. Parsing and checking whether an image is already
generated is very fast, but all these fork/execs are unnecessary overhead.
I see a few solutions:
* Pass all equations at once; just one fork/exec per article would be
necessary.
* Make a table of already generated TeX - "input TeX, hash(output TeX)" - and
let the PHP code check whether the image already exists without having to
parse the TeX. The problem is, the database is quite slow, and these
computations are rather simple.
* Make a table of "hash(input TeX), hash(output TeX)". I don't know if that
would be significantly faster.
* Provide symlinks from (path + hash(input TeX) + ".png") to
(path + hash(output TeX) + ".png"); this way PHP can check whether the image
is already generated. If the image is generated but from a different input TeX
(for example, "x+y" is already generated and we ask for "x + y"), the program
will just add a symlink.
* Change the program into a shared library. That was my initial idea, but it
seems to be quite hard with PHP.
* Rewrite the program in PHP (it will be slow, unless PHP features a LALR
parser generator).