Hi all,
I am working on access rights control with MediaWiki:
- MediaWiki <http://www.mediawiki.org/>: 1.5.7
- PHP <http://www.php.net/>: 5.0.4 (apache2handler)
- MySQL <http://www.mysql.com/>: 4.1.13-nt
While working on this I found a useful class, "Group", in Group.php. It seems
that this class is never used anywhere.
In order to use this class I have to create a corresponding database table,
but I don't know its schema. According to the Group class the following
columns are necessary:
name
id
description
dataLoaded
rights
Are these columns complete?
Which data types are used for the corresponding columns?
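To make the question concrete, here is the sort of thing I would guess at.
The column names follow the Group class above; the gr_ prefix and the types
are pure guesses modeled on MediaWiki's other tables, and dataLoaded looks to
me like an in-memory "already fetched" flag rather than a real column:

CREATE TABLE /*$wgDBprefix*/groups (
  -- numeric primary key ("id" in the class)
  gr_id int(5) unsigned NOT NULL auto_increment,
  -- short internal group name ("name")
  gr_name varchar(50) NOT NULL default '',
  -- human-readable description ("description")
  gr_description varchar(255) NOT NULL default '',
  -- serialized or comma-separated list of rights ("rights")
  gr_rights tinyblob,
  PRIMARY KEY (gr_id)
);

Is something like this correct?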
Any answers are appreciated.
Thanks a lot
Ting Wang
An automated run of parserTests.php showed the following failures:
Running test Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test Magic Word: {{NUMBEROFFILES}}... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Passed 300 of 303 tests (99.01%) FAILED!
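(To reproduce locally, the suite can be re-run from a MediaWiki checkout with:

php maintenance/parserTests.php

which prints the same pass/fail summary.)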
Wikipedia's Jimmy Wales will be speaking in SF this FRIDAY, Apr. 14
-r
----- Forwarded message from Stewart Brand <sb(a)gbn.org> -----
Date: Mon, 10 Apr 2006 12:12:58 -0700
To: salt(a)list.longnow.org
From: Stewart Brand <sb(a)gbn.org>
Subject: [SALT] Wikipedia's Jimmy Wales this FRIDAY, Apr. 14 (for forwarding)
Vision is one of the strongest forms of long-term thinking.
Like any project, the online encyclopedia Wikipedia deploys month-to-month
tactics in the service of year-to-year strategy. Both must serve an unusually
ambitious decade-to-decade vision, stated by founder JIMMY WALES: "Wikipedia
is first and foremost an effort to create and distribute a free encyclopedia
of the highest possible quality to every single person on the planet in their
own language."
The English language edition of Wikipedia, which started in January 2001,
reached its millionth article last month. A year ago it was half that. There
are 123 other language editions active, with German (350,000 articles),
French, Polish, and Japanese leading the way. The editing process is radically
"open source"--- anyone can edit any article. Article authors and amenders get
no pay and no public credit. Against reason, the process works spectacularly.
Wikipedia has become the primary online research source. It currently costs
$320,000 a quarter to produce.
For the talk on Friday, Wales is expanding on his usual Wikipedia-only
presentation to address the larger and longer picture that Wikipedia's
success hints at:
"Vision: Wikipedia and the Future of Free Culture," Jimmy Wales, Cowell
Theater, Fort Mason, San Francisco, 7pm, Friday, April 14. The lecture
starts
promptly at 7:30pm. Admission is free ($10 donation welcome as always, not
required).
NOTE ON SECURING A SEAT: This talk may be very popular, with the possibility
of an overflow audience. You can ensure yourself a seat by making a
reservation, which costs $5 a person. Reserve through Long Now's home page
(http://www.longnow.org) or phone 415-561-6582. Reservations will stop being
taken at 4:30 pm April 14. Apart from reserved seats, which must be occupied
by 7:20 pm at the event or be released, seating is first-come-first-served,
and admission is free.
(This reservation service is offered in an attempt to avoid previous problems
with exceptionally popular talks. Jared Diamond and Brian Eno, for example,
had far more people at the door than could fit inside, and many went away
frustrated. Anticipating that problem at a joint talk by Freeman, Esther, and
George Dyson, Long Now offered the ability to make reservations by phone.
People then filled the house with what turned out to be illusory
reservations. My email about that discouraged other people from showing up,
and we wound up with half a house. So this time there's a financial incentive
to make only sincere reservations. The Cowell Theater seats 400, with room
for another 40 in the lobby watching on live TV. My email to this list on
Friday will reflect your prospects of getting in by just showing up. Reserved
seats not filled will be released at 7:20pm, ten minutes before showtime.)
This is one of a monthly series of Seminars About Long-term Thinking
organized by The Long Now Foundation, usually on second Fridays, usually at
Fort Mason. If you would like to be notified by email of forthcoming talks,
please contact Simone Davalos--- simone(a)longnow.org, 415-561-6582.
You are welcome to forward this note to anyone you think might be interested.
--Stewart Brand
PS. Much of Kevin Kelly's March talk, "The Next 100 Years of Science:
Long-term Trends in the Scientific Method," is now available in text form,
with his great slides, at Edge.org:
http://www.edge.org/documents/archive/edge179.html .
--
Stewart Brand -- sb(a)gbn.org
The Long Now Foundation - http://www.longnow.org
Seminars: http://www.longnow.org/projects/seminars/calendar.php
Seminar downloads: http://www.longnow.org/shop/free-downloads/seminars/
----- End forwarded message -----
--
http://www.cfcl.com/rdm Rich Morin
http://www.cfcl.com/rdm/resume rdm(a)cfcl.com
http://www.cfcl.com/rdm/weblog +1 650-873-7841
Technical editing and writing, programming, and web development
Recovery consisted of four phases:
1) "What the hell is going on?"
Outage mentioned immediately by users in IRC. On investigation, the whole of
PowerMedium appeared to be offline. Mark indicated that they had a major network
problem. I phoned their support line; they confirmed a big network problem and
said they were bringing in Charles to work on it. (This was about 3:30pm Sunday
afternoon Florida time).
At this point there was nothing further we could do, we had to wait for them to
fix things on their end. I also called Kyle so we'd have a pair of hands in the
office when things started to come back online.
2) "Why does nothing work?"
After an hour or so they apparently had their general issues under control.
PowerMedium's own web site came back up, we could get at our own switch over the
network, and Charles (bw) was available online.
Between us remote folks and bw & Kyle on-site, we did some banging on rocks. We
found that in addition to the network outage, there had been a power problem
(presumably this killed their routers too), which had rebooted everything.
In this stage we were confronted with the fragility of the internal DNS and LDAP
we had set up to make everything work. While we've expended some effort to
minimize the dependencies on NFS, we hadn't yet put similar effort into these
services. Until these services were restored, booting was a vveeerrryyyy slow
proposition (with lots of timeout steps), and it took another hour or so to get
key infrastructure back in place to where we could seriously get working.
3) "Where's my data?"
MediaWiki is highly reliant on its database backend. With machines on, we were
able to start the MySQL databases loading up and running InnoDB transaction
recovery. This took much longer than expected, apparently because we have a
*huge* log size set on the master: about 1 GB. (James Day recommends reducing
this significantly.)
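For reference, the knob and the check look like this (values here are
illustrative, not our actual config):

-- on the master, see what the logs are currently set to:
SHOW VARIABLES LIKE 'innodb_log_file%';
-- shrinking means setting e.g. innodb_log_file_size = 128M in my.cnf,
-- doing a clean mysqld shutdown, moving the old ib_logfile* aside,
-- and restarting so InnoDB recreates the logs at the new size.
-- (smaller logs = faster crash recovery, more checkpoint I/O)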
While this step was running, mail was brought back online and the additional
MySQL servers for text storage were brought online. Two of the slaves were found
to be slightly behind, and the master log file appears to be smaller than their
recorded master log offsets. This might indicate corruption of the master log
file, or it might simply indicate that the position was corrupted on the slaves.
In either case, this is very much non-fatal for text storage as it's very
redundant and automatically falls back to the master on missing loads. (But it
should be looked into. We may have write-back caching or other problems on those
boxen.)
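Checking this is straightforward with stock MySQL commands; compare each
slave's recorded offset against what the master actually has on disk:

-- on a suspect slave:
SHOW SLAVE STATUS;   -- note Master_Log_File and Read_Master_Log_Pos
-- on the master:
SHOW MASTER STATUS;  -- current binlog file and position
SHOW BINARY LOGS;    -- on-disk size of each binlog file

A recorded slave offset past the end of the master's file means one side or
the other got corrupted.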
4) "Where's my site?"
Once the primary database was done, it was time to flip the switch and watch the
site! Except that the Squid+LVS+Apache infrastructure is a little fragile, and
in particular LVS was not set up to start automatically.
At this point it was late in Europe and our volunteer admins who do much of the
squid and LVS work were asleep. I was able to find the information I needed on
our internal admin documentation wiki, and got these back online after a short
while.
Additionally I had to restart the IRC feeds for recent changes data, which
involved recovering a tool which had gotten moved around between home directories.
Things appear to be pretty much working at this point. In the short term, we
need to examine the broken MySQL slave servers, and make sure we're at full
capacity.
In the medium term, we need to make sure that all services either will start
automatically or can be very easily and plainly started manually.
We also *must* examine our DNS & LDAP infrastructure: if we can't make it boot
fast and reliably we need to consider replacing this with something more
primitive but reliable. (ewww, scp'ing hosts files...)
We also need to make sure that:
* Squid error messages are easily configurable and can be updated with necessary
information.
* DNS can be easily updated when the Florida cluster is offline, e.g. so that we
could redirect hits from Florida to another cluster for an error page or
read-only mirror.
-- brion vibber (brion @ pobox.com)
Hello,
I want to warn the MediaWiki community about a possible virus attack on MediaWiki websites.
I published a website based on the MediaWiki platform a few weeks ago.
Today I noticed that Google has indexed it, but this morning my SMTP server received a virus attack from a website that also uses MediaWiki.
The attack caused a DoS (denial of service) on the SMTP port.
Any help is welcome.
Thanks in advance.
Eugenia Tenneriello
"L'unica organizzazione capace di crescita illimitata e di apprendimento
spontaneo e' la rete. Qualsiasi altra topologia pone dei limiti allo
sviluppo futuro" - Kewin Kelly
Moin,
as an extension developer I'd like to have two new features. I looked at
the docs, the code, the FAQ, etc., but I couldn't find out whether it is
already possible to achieve this:
* addCSS();
From your extension you can call things like addMeta(), addHeader(), etc.,
including addStyleSheet(), which adds a link to a stylesheet to the page.
However, for various reasons it would be useful for extensions to add CSS
code directly into the head, so that:
addCSS('YourCodeHere')
would result in:
<style type="text/css">
<!--
YourCodeHere
--></style>
Is this already possible and did I just overlook it? If not, could it be
implemented? (Adding a link to a stylesheet requires external files, which
is messy and leaks files; see my other post "File leakage in
extensions".)
Likewise, I would like something along the lines of:
__NOCATEGORIES__ (or __NOCAT__)
__NOMENU__
__NOTOOLBOX__
__NOSEARCH__
__NOFOOTER__
that suppress the various elements of the interface. (This is for a slide
extension that allows one to give presentations from a wiki directly.)
Now, with the addCSS() from above this could be hacked (if you know which
skin is in use, and add an extension call to each slide article), but the
special variables would be much cleaner, and they would also allow these
things to be used on other pages without an extension.
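To sketch the idea (the hook and variable names here are made up for
illustration, not existing API):

// Spot a __NOTOOLBOX__ marker in the wikitext, strip it, and set a
// flag the skin can check before rendering the toolbox.
$wgHooks['ParserBeforeStrip'][] = 'wfSlideCheckMarkers';

function wfSlideCheckMarkers( &$parser, &$text ) {
    global $wgSlideHideToolbox;
    if ( strpos( $text, '__NOTOOLBOX__' ) !== false ) {
        $text = str_replace( '__NOTOOLBOX__', '', $text );
        $wgSlideHideToolbox = true;
    }
    return true;
}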
For Wikipedia they should probably be disabled, but that can be achieved
via a technical solution (a config var, spam blocking, etc.).
What do I need to get these things into the current code?
Best wishes,
Tels
--
Signed on Fri Apr 7 10:47:09 2006 with key 0x93B84C15.
Visit my photo gallery at http://bloodgate.com/photos/
PGP key on http://bloodgate.com/tels.asc or per email.
"...pornographic images stay in the brain forever." -- Mary Anne Layden;
"That's a feature, not a bug." -- God
Seems like the spammers have found the web equivalent of an SMTP open relay.
For example:
[http://wiki.cs.uiuc.edu/VisualWorks/DOWNLOAD/sb/index.htm sitz bath]
[http://www.buddy4u.com/view/?u=monophonic+ringtone monophonic ringtone]
[http://www.buddyprofile.com/viewprofile.php?username=nextelringtone nextel ringtone]
These are links to legitimate sites that perform poor input
validation... The spammers have managed to convert the pages into HTTP
redirects.
Because of how the various search engines work, a link to a redirect
page is just as good as a link to the redirect target.
Since the spammers can make an infinite number of unique URLs at these
sites, blocking the exact URL is pointless. So right now our only
choices are to block legitimate sites because their poor hygiene
allows them to be used as a spam-bouncer, or allow ourselves to be
spammed with these sites and contribute to the declining usefulness of
the internet.
Things like this make nofollow more attractive all the time. Has
there ever been any discussion of perhaps allowing a whitelist of
non-spam sites that we won't nofollow? This would be useful for
wikis that don't want to kill all their external links with nofollow.
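A sketch of what such a whitelist might look like (the variable and function
names are hypothetical, not existing MediaWiki configuration):

// Domains whose outbound links would keep their search-engine weight.
$wgNoFollowExceptions = array( 'wikipedia.org', 'gnu.org' );

// Decide whether a given external URL should get rel="nofollow".
function wfShouldNoFollow( $url ) {
    global $wgNoFollowExceptions;
    $bits = parse_url( $url );
    $host = isset( $bits['host'] ) ? $bits['host'] : '';
    foreach ( $wgNoFollowExceptions as $domain ) {
        // match the domain itself or any subdomain of it
        if ( $host === $domain ||
            substr( $host, -strlen( ".$domain" ) ) === ".$domain" ) {
            return false;
        }
    }
    return true; // everything else gets nofollow
}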