Dear Wikimedia Technicians,
Because the Wikimedia site for logo requests does not seem to be viewed
much, I am posting my request here:
The Nauruan Wikipedia needs a logo. The text should be "Wikipedia -
Encyclopedia Emenengame". Thank you for creating and installing it!
Any comments can be posted here:
http://en.wikipedia.org/wiki/User:CdaMVvWgS
The link to the Nauruan Wikipedia is:
http://na.wikipedia.org/
Greetings and thank you again for the logo,
CdaMVvWgS
This is a MediaWiki code design discussion; people uninterested in
MediaWiki's guts can safely ignore it.
So, I think one of the big questions third-party admins have is how to
skin MediaWiki. Right now, it requires writing some PHP code. I'd like
to float some suggestions for how we can make our skin architecture a
bit easier for everyone.
In MediaWiki 1.5, I think we should re-focus our Skin classes
specifically on *HTML generation*. I think we should keep the name
"Skin", just because it's hard not to, but I think we should steer away
from using server-side PHP code to do layout design.
Instead of having skin classes for them, I think we should use alternate
CSS stylesheets to provide the "look-and-feel" differences -- Monobook,
Nostalgia, Standard, and Cologne Blue, as well as future look-and-feels.
Hopefully, if we can make this transition, we can open up the process of
skinning MediaWiki to people with more CSS/(X)HTML experience than PHP
chops, and get more/better-looking/easier-to-use skins. Admins who want
to have "branded" styles for their sites could do so with CSS.
To support browsers that can't handle CSS styling (I think lynx is the
only one in wide usage that is in this category, but there may be
others), we should provide another HTML generation class on the server
to make very simple, style-free HTML.
Extensions -- hopefully third-party extensions -- could provide HTML
generation from PHP template systems like PHPTal, Smarty, etc. *I*
wouldn't use them, but for folks who are really into these templating
systems, there'd be an option. There may also be call for a skin subclass
to generate table-based layout for HTML 3.0-era browsers.
Lastly, I think we should push most of the business logic of skins up to
the root class. By this I mean stuff like: figuring out whether or not
to show admin function links, or links that should only be shown for
articles, or links that should only be shown for logged-in users, or
whatever.
Historically this has been replicated in each skin class, but in recent
versions there's been some consolidation. It would be great to finish
this process and get all this code into one spot in the root skin class.
I think the PHPTal/SkinTemplate mechanism -- one function per logical
"menu" (nav urls, personal urls, etc.) returning an array of arrays,
each one representing a link -- is the best system here.
Attached is a rough class diagram to illustrate these ideas. To
summarize:
* Abstract class "Skin" provides a framework and supports business
logic for determining which links are shown, etc.
* SkinBasic gives rock-bottom, dead-simple HTML.
* SkinTemplate is the most-used, default HTML generator. It allows
an alternation between multiple CSS stylesheets, and generates
the kind of high-quality HTML (or XHTML if the browser is
capable) to make CSS-styling simple and useful.
* SkinPHPTal and SkinSmarty are third-party HTML-generation
classes that use those templating systems. There could be other
such classes; they're only on the diagram to illustrate that we
want skins to be able to handle this kind of subclassing.
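As a rough sketch of how the classes in this summary could divide the
work (the class names come from the diagram; the method names and link
logic below are hypothetical):

```php
<?php
// Root class: business logic only -- decides *which* links to show,
// never *how* they look. ("Abstract" by convention in PHP 4.)
class Skin {
    function buildNavUrls($isAdmin, $isLoggedIn) {
        $urls = array(array('text' => 'Main Page', 'href' => '/wiki/Main_Page'));
        if ($isLoggedIn) {
            $urls[] = array('text' => 'Watchlist', 'href' => '/wiki/Special:Watchlist');
        }
        if ($isAdmin) {
            $urls[] = array('text' => 'Block user', 'href' => '/wiki/Special:Blockip');
        }
        return $urls;
    }
}

// Rock-bottom, style-free HTML for non-CSS browsers like lynx.
class SkinBasic extends Skin {
    function renderNav($urls) {
        $out = "<ul>\n";
        foreach ($urls as $u) {
            $out .= '<li><a href="' . $u['href'] . '">' . $u['text'] . "</a></li>\n";
        }
        return $out . "</ul>\n";
    }
}
```

SkinTemplate would subclass Skin the same way, emitting richer
(X)HTML hooked up to the alternate CSS stylesheets.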
Now, we're awful close to something like this already; I think the
SkinTemplate class in 1.4, with some changes for handling multiple
alternate CSS stylesheets, is a really good "default" class. Although
it's not a requirement, the other HTML generators could use the same
stylesheets as long as they generate similar HTML.
The main drawbacks with this kind of system are:
1. We won't have pixel-identical versions of Nostalgia, Standard,
and Cologne Blue. I don't think this is a huge problem.
2. Folk wisdom that "Nostalgia makes HTML that can be read in lynx"
would become untrue.
3. We have a couple of classes that vary by the user's preferences
(SkinBasic versus SkinTemplate), and a couple that vary by the
sysadmin's preferences (SkinTemplate vs. PHPTal, Smarty, ...).
4. It's not clear how to allow different stylesheets. There's a
good way here: http://www.alistapart.com/articles/alternate/
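On point 4: the A List Apart technique boils down to emitting one
preferred stylesheet plus named alternates that the browser (or a small
style switcher) can toggle between. A sketch -- the sheet names and the
helper function are hypothetical:

```php
<?php
// Emit one preferred stylesheet plus named alternates, per the
// "alternate stylesheet" technique. $default marks the preferred one.
function styleLinks($skins, $default) {
    $out = '';
    foreach ($skins as $skin) {
        $rel = ($skin == $default) ? 'stylesheet' : 'alternate stylesheet';
        $out .= '<link rel="' . $rel . '" type="text/css"' .
                ' href="/skins/' . $skin . '.css" title="' . $skin . '" />' . "\n";
    }
    return $out;
}

echo styleLinks(array('monobook', 'nostalgia', 'cologneblue'), 'monobook');
```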
I realize we're in a release cycle and 1.5-timeframe discussions are
somewhat distracting, but I've been thinking about this and figured it
was worth discussing while I still had it in my brain.
~ESP
--
Evan Prodromou <evan(a)wikitravel.org>
Hi,
Sorry to trouble you here...
I can't seem to get into bugzilla.wikipedia.org. It doesn't like my
password and I'm not getting E-Mail from it. So, I'm posting my real
problems here after syncing to the phase3 head branch. Could I trouble
someone to post it for me or perhaps fix the problem(s)?
---------------
I am getting PHP errors when I try to open the Special:Newimages page.
They are:
[Mon Dec 6 04:31:31 2004] [error] PHP Notice: Undefined variable:
wgHashedUploadDirectory in /Library/MediaWiki/includes/Image.php on line
561
> function getFullPath( $fromSharedRepository = false )
> {
> global $wgUploadDirectory, $wgSharedUploadDirectory;
>
> $dir = $fromSharedRepository ? $wgSharedUploadDirectory :
> $wgUploadDirectory;
> $ishashed = $fromSharedRepository ? $wgHashedSharedUploadDirectory :
> $wgHashedUploadDirectory;
> $name = $this->name;
> $fullpath = $dir . wfGetHashPath($name) . $name;
> return $fullpath;
> }
'$wgHashedUploadDirectory' is not defined with a "global" statement.
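If so, the fix would presumably just be adding the two hashed-directory
globals to the "global" statement. A self-contained sketch (the stub
values and the wfGetHashPath stub are mine, only there to make the
snippet runnable outside MediaWiki):

```php
<?php
// Hypothetical stubs standing in for LocalSettings / GlobalFunctions.
$wgUploadDirectory = '/upload';
$wgSharedUploadDirectory = '/shared';
$wgHashedUploadDirectory = false;
$wgHashedSharedUploadDirectory = false;
function wfGetHashPath($name) { return '/'; } // real version hashes $name

class Image {
    var $name;

    function getFullPath($fromSharedRepository = false) {
        // The fix: declare the hashed-directory globals as well.
        global $wgUploadDirectory, $wgSharedUploadDirectory,
               $wgHashedUploadDirectory, $wgHashedSharedUploadDirectory;
        $dir = $fromSharedRepository ? $wgSharedUploadDirectory
                                     : $wgUploadDirectory;
        $ishashed = $fromSharedRepository ? $wgHashedSharedUploadDirectory
                                          : $wgHashedUploadDirectory;
        $name = $this->name;
        $fullpath = $dir . wfGetHashPath($name) . $name;
        return $fullpath;
    }
}
```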
[Mon Dec 6 04:31:31 2004] [error] PHP Fatal error: Call to a member
function on a non-object in /Library/MediaWiki/includes/Image.php on line
115
function newFromTitle( $nt )
{
    $img = new Image( $nt->getDBKey() ); # line 115.
    $img->title = $nt;
    return $img;
}
implies that $nt is not an object.
Another problem on the 1.5 branch is that it doesn't seem to be
recognizing sysops. My administrator IDs are not getting any special
privileges, and I am not seeing any administrators or bureaucrats when I
try to list them with the User List special page.
Upgrading from a v1.3.8 db seems to have gone off without an error.
MediaWiki: 1.5alpha
PHP: 4.3.4 (apache)
MySQL: 4.0.13
What is the proper protocol for reporting bleeding-edge source problems
like these?
Nick Pisarro
I am sending you this info to tell you that, despite all efforts, I
cannot get the Enotif and Eauthent code ready for 1.4. It is likely
that I will back-port it later to the (forthcoming) 1.4 release, from
which I branched today.
The decision gives me a little time for more tests and optimisation, and
for code review by you all, without being in too much of a hurry. "So
next station: 1.5"
Tom
P.S. At the end of December I will meet many of you developers in Berlin,
I guess. Perhaps I can give a presentation about all the changes the
Enotif patch introduces (if you like such a talk).
I've seen a few threads go by about not being able to use "funny"
characters in page names... Everything from non-US characters, to the
plus sign ( + ) and apostrophe ( ' ).
Is there a reason that the page titles aren't just stored in the
database in urlencode()'d form? When a search, etc. is
performed, we could just convert the string to its urlencode( )
equivalent, and do the same for [[ ]] type links.
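The conversion itself would be trivial -- something like this sketch
(titleToKey is a hypothetical helper, not existing MediaWiki code):

```php
<?php
// Sketch: normalize a page title to its urlencoded form before storing
// or looking it up, so characters like "+" and "'" survive intact.
function titleToKey($title) {
    $title = str_replace(' ', '_', $title); // MediaWiki's usual convention
    return urlencode($title);
}

echo titleToKey("C++") . "\n";      // prints "C%2B%2B"
echo titleToKey("O'Reilly") . "\n"; // prints "O%27Reilly"
```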
I'll code this up on the 1.4 code base if no one objects, but I'm
guessing there's some good reason it hasn't been done before.
(It's pretty annoying to me that you can't name a page "C++") :-)
thanks,
-Nick
This is a MediaWiki programming issue; if you're only interested in
admin stuff, you can safely delete.
So, I'd like to set up a system of hooks and filters for MediaWiki. We
have right now four ways to change how MediaWiki works:
1. Tag extensions. You can add new tags that the parser can
generate code for.
2. Special pages. You can add special pages that do different kinds
of queries.
3. Edit filtering. You can put a pre-filter on editing.
4. Skin changes. You can create new skins.
However, we don't have an easy way to change the behaviour of mainline
functionality. I think it would be a good idea to add some simple hook
processing to MediaWiki, so that extensions can add hooks and run custom
code when something happens.
Consider, for example, an extension to log key actions in the wiki to a
specialized logging system. The extension setup function could look
something like this:
function setup_logging() {
    global $wgLogFile;
    wfAddHook('after_article_save', 'log_article_save', $wgLogFile);
    wfAddHook('after_delete', 'log_deletion', $wgLogFile);
    wfAddHook('after_user_ban', 'log_user_ban', $wgLogFile);
}
By calling the wfAddHook() function, the extension asks that a function
get called if/when an event happens. wfAddHook() takes three arguments:
the name of the event (say, after an article is deleted), a function to
call, and an optional data block that can be used by the function. This
way, we can use the same function for different hooks or different
actions, like:
wfAddHook('after_article_save', 'irc_notify', 'brion');
wfAddHook('after_article_save', 'irc_notify', 'TimStarling');
A hook function would look something like this:
function log_deletion(&$params, $data) {
    $title = $params['title'];
    $dbkey = $title->getDBkey();
    $filename = $data;
    foo_log_message("Article '$dbkey' deleted.", $filename);
    return true;
}
The $params argument here is an associative array of the event-specific
parameters. We use this instead of named arguments so that the same
function could be used for different events. The $data is just the data
item that was used when the hook was added.
Hooks can return four possible values:
* true -- the hook has operated successfully and subsequent hooks
should be run
* false -- the hook has operated successfully but no subsequent
hooks should be run
* "some string" -- an error occurred; processing should stop and
the error should be shown to the user
* NULL -- the hook has successfully done the work necessary, and
the calling function should skip its normal processing
The last result would be for cases where the hook function replaces the
main functionality. For example, if you wanted to authenticate users to
a custom system (LDAP, another PHP program, whatever), you could do:
wfAddHook('before_user_login', 'ldap_login');
# ...
function ldap_login(&$params, $data) {
    $user_id = $params['user_id'];
    $password = $params['password'];
    # log the user into LDAP here
    return NULL;
}
Note that a reference to the parameters is passed to the hook function:
the hook could theoretically change its parameters, like so:
wfAddHook('before_article_save', 'reverse_title');
# ...
function reverse_title(&$params, $data) {
    $old_title = $params['title'];
    $params['title'] = Title::makeTitle($old_title->getNamespace(),
        strrev($old_title->getDBkey()));
    return true;
}
A calling function or method would use the wfRunHooks() function to run
the hooks related to a particular event, like so:
class Article {
    # ...
    function submit(...) {
        $params['title'] = ...;
        $params['user'] = ...;
        $params['section'] = ...;
        $params['is_new'] = ...;
        if (wfRunHooks('before_article_save', $params)) {
            # save the article
            wfRunHooks('after_article_save', $params);
        }
    }
}
wfRunHooks() returns true if the calling function should continue
processing (the hooks ran OK), or false if it shouldn't (an error
occurred, or one of the hooks handled the action already). Checking the
return value matters more for "before_*" hooks than for "after_*" hooks.
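For concreteness, here's a minimal self-contained sketch of what
wfAddHook() and wfRunHooks() themselves might look like under these
rules (the implementation details are my guess, not settled code):

```php
<?php
// Global registry: event name => list of (callback, data) pairs.
$wgHooks = array();

function wfAddHook($event, $function, $data = null) {
    global $wgHooks;
    $wgHooks[$event][] = array($function, $data);
}

// Returns true if the caller should continue, false otherwise.
function wfRunHooks($event, &$params) {
    global $wgHooks;
    if (!isset($wgHooks[$event])) {
        return true; // no hooks registered: carry on
    }
    foreach ($wgHooks[$event] as $hook) {
        list($function, $data) = $hook;
        // Variable-function call so a declared &$params is honored.
        $result = $function($params, $data);
        if ($result === true) {
            continue;        // run any remaining hooks
        } elseif ($result === false) {
            return true;     // stop the hooks, but the caller continues
        } elseif (is_null($result)) {
            return false;    // the hook replaced the main action
        } else {
            // a string: an error to show the user; abort the action
            echo "Hook error: $result\n";
            return false;
        }
    }
    return true;
}
```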
The big advantage to this event-handling approach is that we can start
isolating site-specific features into extension files. The "mainline"
code only concerns itself with "mainline" features, and "extension" code
can handle more exotic ones. It simplifies our mainline code,
centralizes extension features into one easy-to-comprehend package, and
hopefully improves our quality and reliability. (The simpler our
mainline code is, the fewer bugs it will have; the easier it is to read,
the more people will want to help maintain it.)
It should also help us evaluate/implement experimental new features, and
obviate the need to patch the mainline code to do so. Lastly, it could
let us move little-used features out to extension files to simplify the
main code.
There are disadvantages, of course, too. If MediaWiki is only supposed
to work for a single wiki installation, or multiple installations that
should behave exactly the same, then the hooks code is unnecessary -- we
should just code it all in the mainline code. There's also some
additional complexity in the mainline code in setting up the parameters
and calling the hook functions; this can be balanced against the
complexity of adding each extension's code into the mainline functions.
Finally, there is a lot of documentation that would need to be done to
say 1) where hooks are defined and 2) what parameters they provide.
Anyways, I'm going to post this to meta, and implement it for 1.4. But I
figured this would be a good forum for discussion -- hopefully making
the feature better.
~ESP
--
Evan Prodromou <evan(a)wikitravel.org>
Those of you on enwiki may have noticed my recent attempts to
democratise page deletion. I feel the current deletion system on
enwiki is not ideal.
After having a think about how best to solve this problem, it seemed
best to me to have deletion work the same as editing. The fewer
different cases, the fewer problems. The simple solution would be to
have an empty page be the same as a non-existent page (and
blank/non-existent pages could have edit history). That way page
deletion/undeletion would be under the same influences as the adding
and deletion of bits of pages.
This seems like an obvious way of implementing things; however, when
MediaWiki was designed it wasn't done like that, which leads me to
ask: are there technical reasons why not? Deleted pages are
already kept in the database, so it wouldn't cost more space.
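Under that scheme, page existence would just be a property of the
current text, and deletion an ordinary blanking edit. A toy sketch of
the idea (these helpers are hypothetical, not MediaWiki code):

```php
<?php
// Hypothetical sketch of deletion-as-blanking: "does this page exist"
// reduces to "is its current text non-empty", and "delete" is just an
// edit that blanks the page, keeping the full history intact.
function pageExists($currentText) {
    return trim($currentText) != '';
}

function deletePage(&$revisions) {
    $revisions[] = ''; // the blanking edit; old revisions stay in history
}
```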
(If there are any misconceptions about how MediaWiki works in this
mail, please let me know; I'm not a PHP programmer (though I might be
forced to become one should I be able to get support for this kind of
scheme on enwiki).)
--
Frank v Waveren Fingerprint: BDD7 D61E
fvw(a)[var.cx|stack.nl] ICQ#10074100 5D39 CF05 4BFC F57A
Public key: hkp://wwwkeys.pgp.net/468D62C8 FA00 7D51 468D 62C8
Hi ...
I am very interested in the new security model (developed by hashar, I
think), the one described at
http://meta.wikimedia.org/wiki/Help:User_levels
Is this implemented in REL1.4? (I don't think so.)
Are you planning to implement it in the final 1.4?
Where is more documentation about this module? Which pages are being
used?
I would like to cooperate on this subject.
This enhancement is critical to my wiki.
thanks
--
Problems with Windows - Reboot.
Problems with Linux - Be root.
Vic (aKa CoffMan)
vic(a)wickle.com
wickle(a)gmail.com
For collecting (and giving back later on!) information from Wikipedia (or other
MediaWikis) we are interested in structured data from a
http://meta.wikimedia.org/wiki/Machine-friendly_wiki_interface.
Requests like <url>Foo?action=raw and <url>Special:Export (for XML) do a
good job. The problem is that for *categories* this "action=raw" returns
the wiki tags. But what we would need is their *evaluated content*, i.e.
the list of articles, not the command which generates them.
How about a parameter 'evaluated', like say
http://de.wikipedia.org/wiki/Kategorie:Ort_in_der_Schweiz?action=raw&evalua…
which would not return the wiki tags ("[[Kategorie:Ort in Europa|Schweiz]]...")
but the list as it is presented in the default HTML page.
The same would apply to the Special:Export page, where obviously this
would be an option.
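A toy sketch of what such an "evaluated" raw mode could return for a
category page (everything here is hypothetical -- the member array
stands in for the real database query):

```php
<?php
// Hypothetical sketch: with ?action=raw&evaluated=true, a category page
// would return its member list, one title per line, instead of the
// stored wikitext.
function rawCategory($wikitext, $evaluated, $members) {
    if (!$evaluated) {
        return $wikitext; // current behaviour: the raw wiki tags
    }
    sort($members);
    return implode("\n", $members) . "\n";
}

echo rawCategory("[[Kategorie:Ort in Europa|Schweiz]]", true,
                 array('Bern', 'Genf', 'Basel'));
// prints "Basel", "Bern", "Genf", one per line
```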