Some weeks ago I asked whether it was, in general, acceptable to use
HTML tags in Wikipedia. There was no single answer, but my impression
was that those constructs which do not have a Wikipedia equivalent,
such as blockquote, were acceptable.
Recently it was pointed out to me, on a German discussion page, that
the use of HTML tags was, well, basically forbidden.
- One reason was that today's converters could not deal with such
constructs and therefore it would be impossible to convert
discussion pages into, say, Wikibooks (I honestly fail to see the
value of that, but anyhow).
- The other reason was that the use of such tags annoys other users.
Could anybody comment on that, especially on the question of whether an
administrator has the right to impose the use of certain formatting?
Schema change pending in HEAD, take care when updating. It's not live yet.
In case anyone's interested, it's an "externallinks" table, to support a fast retroactive spam
filter which hasn't been written yet. In the meantime, it might be useful for finding and analysing spam.
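Once the table is populated, something along these lines should be enough for a first
look at where the links point; the column name used here (el_to for the target URL) is
only indicative until the schema is final:

<?php
# crude one-off report: the most frequently linked external targets
# (connection details are placeholders)
$db = mysql_connect( 'localhost', 'wikiuser', 'secret' );
mysql_select_db( 'wikidb', $db );
$res = mysql_query( "SELECT el_to, COUNT(*) AS uses
                       FROM externallinks
                      GROUP BY el_to
                      ORDER BY uses DESC
                      LIMIT 50", $db );
while ( $row = mysql_fetch_assoc( $res ) ) {
    print $row['uses'] . "\t" . $row['el_to'] . "\n";
}
?>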
I'm trying in vain to modify the navigation menu of my MediaWiki, but
nothing works. I do go to my /wiki/index.php?title=MediaWiki:Sidebar, I
unprotect it, and I make my edit, but whatever the change, nothing
happens! (even though I clear the cache ...)
What else do I need to do?
Thanks in advance :)
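PS: as far as I understand it, the sidebar page is supposed to contain a plain
bulleted list of message-key|label pairs, roughly like the default below, and
that is the format I have been editing:

* navigation
** mainpage|mainpage
** portal-url|portal
** currentevents-url|currentevents
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help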
First of all, let me explain the problem.
I have installed and configured MediaWiki and it works so far. The only problem is uploading files.
The server used here is shared by a lot of users. It is therefore configured in a way that requires every PHP script that does file operations (such as uploading images) to be renamed to .cgi and to have a line added at the beginning that calls the correct interpreter for the CGI script. Only this way can safe-mode restrictions be handled on this machine.
A disadvantage of this is that CGI scripts run more slowly than PHP scripts on this server. Therefore it would be best to use index.cgi for uploading images and index.php for all other purposes.
I tried to use index.cgi only: I renamed index.php to index.cgi and added the interpreter line to it. Furthermore, I replaced 'index.php' calls in all MediaWiki script files with 'index.cgi' calls.
But it didn't work: when I opened index.cgi in a browser (I tried several browsers), it took some time and then the following error was shown:
Redirection limit for this URL exceeded.
Is there a simple way to change MediaWiki so that it is started as a CGI script rather than as a PHP script?
Does anyone have experience with this?
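For completeness, the interpreter line I add at the top of the renamed script looks
like this (the exact path to the PHP binary of course depends on the server):

#!/usr/local/bin/php
<?php
# ... the original contents of index.php continue unchanged below ...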
I'm looking to parse the text contents ("old_text" field) of articles in the
"text" table, loaded into MySQL from a pages_articles.xml dump, without using MediaWiki.
How exactly would I do this? I tried extracting functions from parser.php but
this proved fairly complicated given the dependencies involved (with includes
and global variables); and I'm not sure how the functions work anyway.
What I need is something that can ignore stuff like Image and WikiQuote code,
replace the headers with h1/h2/h3/etc. tags, and maybe parse external and
internal links.
In other words, I'm looking for something that takes the text field and parses
it into basic readable HTML output, without any bells and whistles.
Is there any function like this available anywhere? Or some guide on the specs
of such a function?
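To give an idea of the level I'm after: if nothing ready-made exists, even something
as crude as the sketch below (a standalone function with a name I made up, not
MediaWiki code) would probably do for headings, external links and dropping image markup:

<?php
function wikitext_to_basic_html( $text ) {
    # drop [[Image:...]] and [[Wikiquote:...]] style links entirely
    $text = preg_replace( '/\[\[(Image|Wikiquote):[^\]]*\]\]/i', '', $text );

    # == headings ==, deepest level first so ====x==== is not eaten by ==
    for ( $level = 6; $level >= 1; $level-- ) {
        $eq   = str_repeat( '=', $level );
        $text = preg_replace( "/^$eq *(.+?) *$eq *$/m",
                              "<h$level>\$1</h$level>", $text );
    }

    # external links of the form [http://example.com some label]
    $text = preg_replace( '!\[(https?://[^][\s]+) +([^]]+)\]!',
                          '<a href="$1">$2</a>', $text );

    # bold and italics
    $text = preg_replace( "/'''(.+?)'''/", '<b>$1</b>', $text );
    $text = preg_replace( "/''(.+?)''/",   '<i>$1</i>', $text );

    # blank lines separate paragraphs
    $text = preg_replace( "/\n[ \t]*\n/", "</p>\n<p>", trim( $text ) );
    return "<p>$text</p>";
}

# e.g. for each row fetched from the "text" table:
# print wikitext_to_basic_html( $row['old_text'] );
?>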
I have the following in my LocalSettings.php file, and the important part
here is that I am not allowing anyone to edit the pages unless they are a
user. I am also not allowing guests to create user accounts, and therefore
I am the only one who can edit the wiki or create accounts. This makes
sense with the following settings, right?
My question, though, is that I seem to have found some kind of bug. I was
messing around today, and I logged out of my account on my wiki. I then
clicked on the link that says "log in or create account" (even though no one
but me can do that, lol). On the page that is shown, I get a box for username
and password, the remember me box, and two buttons... a login button and a
mail me a new password button. It also shows the text for the creation of a
new account, but the new account text-entry boxes are not shown, and neither
is the create account button. This is fine, but I wanted to clean it up,
and I have no idea how to remove this portion of the Special:Userlogin page.
Here's where it gets tricky, though...
I decided to try something out, just for the heck of it, to see if it would
work. I doubted it, but who knows, right? I edited the URL in my browser's
address bar and typed in:
When I hit Enter to see if I could edit that page (yeah, right), surprise
surprise!!! You can't edit it.
INSTEAD, it loads up the other text boxes to create an account and even the
submit button for the create account form!
This is not a good thing at all :(
MediaWiki 1.5.6, Debian Sarge 3.1, Kernel 2.6.8-2-686, Apache
$wgGroupPermissions['*' ]['createaccount'] = false;
$wgGroupPermissions['*' ]['read'] = true;
$wgGroupPermissions['*' ]['edit'] = false;
$wgGroupPermissions['user' ]['move'] = true;
$wgGroupPermissions['user' ]['read'] = true;
$wgGroupPermissions['user' ]['edit'] = true;
$wgGroupPermissions['user' ]['upload'] = true;
$wgGroupPermissions['bot' ]['bot'] = true;
$wgGroupPermissions['sysop']['block'] = true;
$wgGroupPermissions['sysop']['createaccount'] = true;
$wgGroupPermissions['sysop']['delete'] = true;
$wgGroupPermissions['sysop']['editinterface'] = true;
$wgGroupPermissions['sysop']['import'] = true;
$wgGroupPermissions['sysop']['importupload'] = true;
$wgGroupPermissions['sysop']['move'] = true;
$wgGroupPermissions['sysop']['patrol'] = true;
$wgGroupPermissions['sysop']['protect'] = true;
$wgGroupPermissions['sysop']['rollback'] = true;
$wgGroupPermissions['sysop']['upload'] = true;
$wgGroupPermissions['bureaucrat']['userrights'] = true;
Hi, I suppose this is a mail that Brian can answer (nooo ... don't kill
me ... I know you are overloaded with work).
On the Neapolitan Wikipedia we have one particularity: it is a
language without a standardised written form (up to now), and it has
local varieties that sometimes differ really a lot. Besides that, there
are regions attributed to the Neapolitan language group whose varieties
are really "far away" from Neapolitan - this means that there are
languages (that are not considered as such) that are not understandable
to us when we hear those people talk.
Now, as far as I understand, the namespace manager could help us with that.
We could create namespaces for:
*phonetic Neapolitan (this would have the majority of articles at the
moment)
*language A (attributed to the NAP language group)
*language B (attributed to the NAP language group)
The main page would then become a page that leads to the several
namespaces, where NAMESPACE:Main_page would actually hold the main page
of that specific namespace and could differ according to the language.
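As far as I understand, a setup of that kind can already be declared by
hand in LocalSettings.php, roughly like the sketch below (namespace numbers
from 100 upward are free for local use, even numbers for content and odd
numbers for the matching talk pages; the names are only invented for
illustration):

$wgExtraNamespaces = array(
    100 => "Fonetico",               # phonetic Neapolitan
    101 => "Fonetico_discussione",
    102 => "Lingua_A",               # language A of the NAP group
    103 => "Lingua_A_discussione",
    104 => "Lingua_B",               # language B of the NAP group
    105 => "Lingua_B_discussione"
);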
So, is the namespace manager made for such an approach? If yes, this
really helps us a lot, since we do not need a separate Wikipedia for
each of these languages, and we do not need more than one or two people
contributing to a language; this way those few people can start to
create content and others will follow over time. It would not
disappoint people who join the projects just to be able to work in
their language and then see that possibility denied, since very often
it is easier to decide on a local level whether something is "so
different" that it should be considered a separate language, or just a
local variety that should go into the phonetic part (for example this
would be the case for "Maiorese", the Neapolitan spoken in Maiori - it
is different from the Neapolitan of Naples, but it is easily understood
by Neapolitans ... it is just a different way of pronouncing words; not
having a standardised way of writing, you can imagine that people from
Maiori write a different Neapolitan than people from Naples).
Another advantage of having groups of languages on one wiki is that
organisation and administration become more effective and less
time-consuming - this does not mean that we should merge big wikis
(that would be problematic - there are already too many edits to really
be able to check everything) - but for wikis of a certain "language
group region", or maybe for languages with different writing standards
(like nds, for example), it would make sense.
Well, I need to answer an e-mail from the Neapolitan discussion group,
so it would be helpful to know whether this is possible or not.
I would like to know how MediaWiki stores data and page links.
I also have another question.
Does any indexing of the DB tables take place when a new article is added or a link is created?
I use MySQL, and my purpose is storing data offline, creating new instances of my categories and adding links to the new pages in the old ones.
Thanks in advance,
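PS: here is how I currently understand the 1.5-style schema to fit together
(page -> revision -> text for the article body, plus one pagelinks row per
internal link); please correct me if this sketch is wrong:

<?php
# connection details are placeholders
$db = mysql_connect( 'localhost', 'wikiuser', 'secret' );
mysql_select_db( 'wikidb', $db );

# current wikitext of a page: page.page_latest -> revision.rev_id,
# revision.rev_text_id -> text.old_id
$res = mysql_query(
    "SELECT old_text
       FROM page, revision, text
      WHERE page_namespace = 0
        AND page_title     = 'Main_Page'
        AND rev_id         = page_latest
        AND old_id         = rev_text_id", $db );
$row = mysql_fetch_assoc( $res );
print $row['old_text'] . "\n";

# internal links stored for the same page: one pagelinks row per [[link]]
$res = mysql_query(
    "SELECT pl_namespace, pl_title
       FROM page, pagelinks
      WHERE page_namespace = 0
        AND page_title     = 'Main_Page'
        AND pl_from        = page_id", $db );
while ( $row = mysql_fetch_assoc( $res ) ) {
    print $row['pl_namespace'] . ':' . $row['pl_title'] . "\n";
}
?>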
Someone suggested I (commons and en user:pfctdayelise) post this here.
If I am mistaken and Google does in fact index the Commons, I would
really appreciate someone explaining how to go about it. Comments
welcome on either of my user talk pages, COM:VP or by email.
cheers, Brianna (pfctdayelise)
Originally at [[commons:COM:VP#Keywords]]:
A related problem is that Google does not index the commons. Well,
more or less. It is basically impossible to find any commons content
in there - as you normally can by going ' keyword
site:commons.wikimedia.org '. Personally I find this rather
unbelievable and appalling on Google's part. Does anyone else think
we should, um, make them aware of this? pfctdayelise 13:26, 16 January
2006 (UTC)
This has been discussed before. Part of the problem is that
descriptive info is in "foo.jpg", but Google has no way of knowing
that we have text pages that just happen to be named identically to
the nontextual image files that everybody in the world uses. Another
problem is that Google ranks by references to the page, and most
commons pages look like orphans or near-orphans, image references from
WP articles being made "secretly" via the MediaWiki local/commons
two-step lookup. On keywords, feel free to add them anywhere; they
will help our own search algorithm. I'm not so inclined to be
concerned about that, considering how many thousands of images still
have not a single link or category. Stan Shebs 00:22, 17 January 2006
Google does have a way to know that those pages are HTML,
not binary files: it's in the HTTP header field called "Content-Type".
Google just ignores it (or does not even try to load such a page).
[snip] Duesentrieb(?!) 02:01, 17 January 2006 (UTC)
"Mathematicians do it with Nobel's wife."