Ryan Lane:
>
> Is there any way to make a template automatically appear on newly created
> pages? If so, is there any way to do it by specific namespaces? I have a
> template that tells users how they should use talk pages, but it would be
> much more useful if it was added by default. It would also be nice to be
> able to do something like this when categories are used (like formatting
> rules for topics under certain categories).
>
> If this is not currently possible, is this something that the wikimedia
> community would be interested in? If so, what are some ideas on a good way
> to implement this, and which objects would need to be modified to add this
> functionality?
A long time ago, empty pages had a standard text inside.
It led to the creation of many pages with exactly that text.
Thus, the practice was discontinued.
Maybe a "template on request" button, as part of the edit menu?
Schewek
Is there any way to make a template automatically appear on newly created
pages? If so, is there any way to do it by specific namespaces? I have a
template that tells users how they should use talk pages, but it would be
much more useful if it was added by default. It would also be nice to be
able to do something like this when categories are used (like formatting
rules for topics under certain categories).
If this is not currently possible, is this something that the wikimedia
community would be interested in? If so, what are some ideas on a good way
to implement this, and which objects would need to be modified to add this
functionality?
Thanks,
Ryan Lane
NAVOCEANO Code N622
System Technology and Applications Branch
PH: (228) 688-4616
Some of the apache boxen have been intermittently running into an
out-of-memory condition, which is Very Bad.
I've configured PHP with the memory limit option, set to 20MB. If that
proves to be too low, we can raise it in php.ini.
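For the record, a quick sketch of checking or overriding that limit from
PHP at runtime (php.ini stays the global default; nothing here is
MediaWiki-specific):

  <?php
  // Read the current per-request memory cap, e.g. "20M".
  echo ini_get( 'memory_limit' ), "\n";
  // Raise it for this request only; php.ini still sets the default.
  ini_set( 'memory_limit', '32M' );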
-- brion vibber (brion @ pobox.com)
I finally got around to working on the XML parser that has been promised
for so long. Good news first: it already works well enough to render real
wiki pages (in CVS HEAD).
Bad news: Still incredibly un-debugged and incomplete.
For those of you who think "tell me when it works perfectly", you can
stop reading now ;-)
So, what do you need to test it? You'll need the "flexbisonparse" module
from CVS, which contains Timwi's flex/bison stuff. I didn't actually work
on the bison files so far. I couldn't get it to "make" on my Linux box
(some Zend libs not found or something), so I wrote a "Makefile.cli",
which you should rename to "Makefile". It will "make" a command-line
parser that can convert Wikipedia markup to XML. You pipe the wiki
markup in and get the XML out.
Then, follow the three-line instructions at the top of ParserXML.php. It
should work now.
The output will look strange, as it produces three copies of the article
text (the rendered XHTML, the "dumped" xml, and a structured xml tree)
as debug information. You can turn the debug information off by editing
the very end of the ParserXML.php file (you'll see where).
As I had some trouble passing the wiki markup to the command-line
version of the wiki2xml parser, I currently create a temporary file,
pipe that into the parser, capture the output, and remove the temporary
file again. I am aware that this is incredibly ugly, and that Timwi's
default makefile, creating a shared PHP object and passing the data
through there, is a lot cooler (and faster) than mine. However:
1. As I said, it didn't compile on my box, so I guess I'll not be the
only one; compiling the cli version should work everywhere
2. The shared object thingy limits the use to MediaWiki
My thoughts to #2: I hope that the wiki2xml parser will be beneficial to
many projects. One thing I thought of is a C/C++ program that can
directly access an SQL dump, read the articles, have them parsed to XML,
and then written as whatever-you-like: XHTML (static versions), PDF
(WikiReader), DigiBib format (another XML format for the German CD).
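To make the temporary-file hack above concrete, it's roughly this (the
"wiki2xml" binary name is a stand-in for whatever the cli Makefile
actually produces):

  <?php
  // Sketch: write the markup to a temp file, pipe it through the
  // command-line parser, capture the XML, and clean up.
  function wikiMarkupToXml( $markup ) {
      $tmp = tempnam( sys_get_temp_dir(), 'wiki' );
      file_put_contents( $tmp, $markup );
      $xml = shell_exec( 'cat ' . escapeshellarg( $tmp ) . ' | ./wiki2xml' );
      unlink( $tmp );
      return $xml;
  }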
As for the XML2XHTML parser itself: basic wiki markup, tables, links,
images, basic html, nowiki are working. *Not* working is template
inclusion, and things I didn't think of ;-)
I probably won't get much more done this week. But, I'll be at the
Berlin conference, so we can talk about this, or even have a hacking
session...
Magnus
> One thing I think I will add is a text byte size field on the revision
> table; with individual-revision compression we no longer can easily get
> the size short of decompressing the text to see what it looks like.
> Generally this size will not change, either, since a given revision's
> source text is immutable.
I'll need to parse the full article text anyway, for several stats.
Number of int/ext links, image links, word count, ...
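Roughly what I mean by those counts, as a sketch (not the actual
wikistats code; the patterns are deliberately crude):

  <?php
  // Sketch: per-article counts from raw wikitext — internal links,
  // external links, image links, and a rough word count.
  function articleStats( $text ) {
      return array(
          'internal_links' => preg_match_all( '/\[\[[^\]]+\]\]/', $text, $m ),
          'external_links' => preg_match_all( '#\bhttps?://\S+#', $text, $m ),
          'image_links'    => preg_match_all( '/\[\[Image:/i', $text, $m ),
          'words'          => str_word_count( $text ),
      );
  }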
If I run directly on the database the job will run for days.
Fine with me, but are heavy queries still a problem every now and then,
or not anymore?
I'd better make the counts job incremental then.
It is a bit less flexible and more error-prone on script updates,
but it can be done. Any idea when the new scheme will be implemented?
Erik Zachte
I know I'm a bit late with this, but it is pretty hard to keep in touch with
everything going on.
I'm trying to grasp what will be the consequences of the db schema change
for the wikistats scripts, which use raw database dumps.
I need to know which user edits in Revision were for namespace 0.
Also which entries in Text belong to the same article, were in namespace 0
and at what time they were saved.
The new setup seems to imply I need to build huge tables which exceed
physical memory, hence sharp performance penalties (the job already runs
+/- 24 hrs), or I need to sort and merge these huge files several times
before the real work starts.
If I understand correctly I would have to sort Page and Revision on
page_id=rev_page and merge into a new file, say PageRev.
Then sort PageRev file on rev_id and merge with Text.
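In memory, the first merge would be something like this sketch (a hash
join for brevity; the real dumps exceed physical memory, so both files
would actually have to be sorted externally first — field names are my
assumption about the new schema):

  <?php
  // Sketch: join Page and Revision rows on page_id = rev_page into
  // PageRev rows. Illustration only; real dumps don't fit in memory.
  function mergePageRev( array $pages, array $revisions ) {
      $byId = array();
      foreach ( $pages as $p ) {
          $byId[ $p['page_id'] ] = $p;  // index Page rows by page_id
      }
      $pageRev = array();
      foreach ( $revisions as $r ) {
          if ( isset( $byId[ $r['rev_page'] ] ) ) {
              $pageRev[] = $r + $byId[ $r['rev_page'] ];  // one PageRev row
          }
      }
      return $pageRev;  // then sort on rev_id and merge with Text likewise
  }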
All of this would not be necessary if a few small fields were replicated
across tables.
Impact on db size would be trivial, on page save time zero.
Namespace -> Revision.
Namespace, Rev_Page, Timestamp -> Text.
---------
Unrelated: will there be a periodic (costly) query to produce something
similar to the cur dump, which is used by quite a few scripts?
Downloading all complete db's is not workable.
Erik Zachte
Dear all,
I am happy to tell you that ENotif+EAuthent 2.00 has been merged into
the development branch of MediaWiki. This is just a short note to keep
you updated on the status.
You can direct your questions to me.
Tom
I've committed an experimental 'Live Preview' feature to CVS HEAD. If
enabled, it will load rendered preview text from JavaScript and insert
it into the page without having to submit the form and wait for the
entire page to re-load.
There are a couple of potential advantages to this: first, it doesn't
trash the open edit page. The edit box's undo history should remain
intact, and if the load fails (e.g. you hit a rough patch with the
servers) your page is still there. (Some browsers treat the 'back'
button badly with respect to no-caching modes, and if you go back from
an error page to your last edit you lose everything.)
Second, it's less burdensome on the server. By skipping the skin
output, server time to handle a preview is cut about in half for
shorter pages (70ms to 34ms in my test, Athlon XP 2400+ w/ Turck
installed).
It's still incomplete and likely has a bunch of fun problems. For
starters, the category and interlanguage links aren't transferred, and
error conditions probably aren't all handled right. But, it could be
worthwhile to pursue.
I've tested it (lightly) in Safari 1.2.4, Firefox 1.0, and MSIE 6.0 (on
XP SP2). On browsers that don't support the XMLHttpRequest interface or
have scripting disabled, it should transparently fall back to form
submission and full-page loading. Set $wgLivePreview = true; to turn it
on.
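That is, a one-line addition to LocalSettings.php:

  <?php
  // LocalSettings.php — turn on the experimental Live Preview.
  $wgLivePreview = true;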
-- brion vibber (brion @ pobox.com)
It just occurred to me that there could be a very nifty way to *help* all
good men... to help categorize Wikipedia articles.
Please, whoever it may concern, consider this too.
Try considering a new feature: a dropdown menu for inserting/deleting
categories into/from articles is definitely what the Wikipedia world needs!
I am not suggesting offering *all* categories under the sun, but only
universally agreed-on top-level categories, in immutable form.
I guess Wikipedia could really benefit from categorization work already
done by
* dmoz.org
* OpenCyc
* the Universal Decimal Classification (UDC)
* etc. ...
think.
8)
Please take a look at
http://cs.wikipedia.org/wiki/N%C3%A1pov%C4%9Bda:Jak_se_p%C5%99ihl%C3%A1sit#…
The link [[Wikipedie:Uživatelské jméno]] is displayed in red (and
leads to the edit page), although the referenced page exists (you can
check that). I tried editing both pages to force some kind of link-cache
refresh, but the link is still red, although it displays normally in
preview.
Does someone know what the problem is here and how to fix it? (Maybe
some direct database fix would be needed?)
Thanks,
[[ cs:User:Mormegil | Petr Kadlec ]]