PowerMedium had some sort of network and/or power outage for about an hour from
circa 19:20 UTC. We've been working on getting things back online since power
and network became available again.
We'll post more about the cause when we know more.
We've got a guy at the colo helping out and Kyle's in there also. Unless there
are other major problems we should be back online soon.
-- brion vibber (brion @ pobox.com)
2006/4/5, Ilmari Karonen <nospam(a)vyznev.net>:
>
> Minh Nguyen wrote:
> > Jan Vanoverpelt wrote:
> > >
> > > I think this problem is due to the fact that i have created some kind of
> > > "hard link" behind the image, so that the user is always forced to go
> > > directly to the edit-box with the preloaded text in it. Is there a way to
> > > fix this, so that the user (after he/she visited the non-existing page for
> > > the first time, added something and saved it) is directly taken to the
> > > actual page the second time (like is done in case of "normal" links)??
> >
> > Yes, I think there is: the English Wikipedia has an "exists" template
> > <http://en.wikipedia.org/wiki/Template:Exists> to check if a particular
> > page has been created yet.
>
> Eww! That's clever, but also incredibly ugly. I can see why a template
> (or a parser function) for this might be useful, but Jan's problem would
> be much more cleanly solved by a small patch to MediaWiki.
>
> A minimal patch would be to add a new action ("action=editnew"?) that
> would act like "action=edit", but only if the page does not already exist.
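> For example (purely hypothetical, since no such action exists yet), a
> hotspot image could then simply point at a URL of the form
>
>   http://your.wiki/index.php?title=Name_of_the_page&action=editnew&preload=Template:Templatename
>
> (host and names here are just placeholders), and once the page exists it
> would presumably just be shown normally.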
Thanks for the reply, but unfortunately I don't have much knowledge about
patching MediaWiki. Where would I have to define such a new action, and how
would I define it? I would be glad if it just worked (even if it is a little
bit ugly ;-) so maybe it is better for me to keep trying with some kind of
template?
To recap what I am trying to do:
Scenario:
1) The user clicks on an image, which refers to a non-existing page.
2) The user is linked to the non-existing page's edit-box in which the
preloaded text is written.
3) The user adds some additional text and saves the page.
4) The user leaves the page, surfs around the wiki, and comes back to the
image through which he previously created the new page. This is where the
practical problem comes in: the user clicks on the image, BUT he should now
be taken to the actual page (which exists by this point), rather than being
sent straight to the edit box of the page again.
I tried to modify the "exists"-template so that, after it has checked whether
a page exists or not, it returns another template: a template with a preload
parameter for a new page, or a template without a preload parameter for an
existing page:
=> The original qif-code of the {{exists}}-template is:
{{qif
|test={{booleq
|1=[[{{ucfirst:{{{1|defaultFalse}}}}}]]
|2={{:{{{1|defaultFalse}}}}}
}}
|then={{{else|false}}}
|else={{{then|true}}}
}}
=> I tried to adapt this into:
{{qif
|test={{booleq
|1=[[{{ucfirst:{{{1|defaultFalse}}}}}]]
|2={{:{{{1|defaultFalse}}}}}
}}
|then={{{else|{{hotspot || Image = image-name.gif | Link = name-of-the-page | Preload=Template:templatename | Heigth = 81px}}}}}
|else={{{then|{{hotspot-no-preload || Image = image-name.gif | Link = name-of-the-page | Heigth = 81px}}}}}
}}
Unfortunately, I want to apply this to several different images, each of
which links to a different page, so I would have to put this qif-code into
several different {{exists}}-templates, each containing its own
"image-name.gif" reference and "name-of-the-page" link. This is not
efficient, because lots of {{exists}}-templates would have to be made.
A little more efficiently, I could put this qif-code on the wiki each time
such an image should be displayed, but I do not understand how the
name-of-the-page parameter can be passed to this qif-code without going
through a global template like {{exists|name-of-the-page}}. What I mean is
that when the {{exists}}-template is used, the parameter is passed as
{{exists|name-of-the-page}}, and the qif-code then knows which page to check.
What I would like to do now is adapt the qif-code so that I can specify,
directly in the code itself, which page should be checked for existence.
In short: how do I adapt the above qif-code so that it knows which page to
check for existence, without passing the name-of-the-page parameter via
{{exists|name-of-the-page}}? Or is there another reasonably
easy-to-implement solution to this problem?
Thanks in advance for any help!
JAN
Hi all,
is there any suitable method to produce the current month (as a number)
without the leading zero? As far as I can tell, the MediaWiki software
currently only has the magic word CURRENTMONTH, which generates the current
month with a leading zero. Is there any other way to generate the current
month without the leading zero, or should I submit a patch to Bugzilla to
resolve this issue?
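(If the new parser functions being discussed for MediaWiki become available,
I guess something like the line below might already do it, by treating the
month as a number and so dropping the leading zero, but I have not been able
to try it:)

{{#expr: {{CURRENTMONTH}} }}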
regards
Shinjiman
Hi.
I'd like to insert a "Wiki contents table", similar to the green ones at:
http://en.wikibooks.org/wiki/Botany
What is the wiki code to generate it?
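My rough guess is that it is ordinary table markup with a background colour,
something like the sketch below (the chapter page names are only
placeholders), but I would like to know how it is really done:

{| align="right" style="background: #e6f2e6; border: 1px solid #88aa88; padding: 0.3em;"
! Contents
|-
| [[Botany/Chapter 1|Chapter 1]]
|-
| [[Botany/Chapter 2|Chapter 2]]
|}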
Thank you very much,
--thomas
An automated run of parserTests.php showed the following failures:
Running test Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test Magic Word: {{NUMBEROFFILES}}... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Passed 300 of 303 tests (99.01%) FAILED!
In response to a campaign by users of the English Wikipedia to harass developers by introducing
increasingly ugly and inefficient meta-templates to popular pages, I've caved in and written a few
reasonably efficient parser functions. There are two conditional functions and a mathematical
expression function. The expression function should support uses such as time and date deltas, as
well as floating point applications such as unit conversion. The conditional functions should
replace most uses of {{qif}}, and improve the efficiency of similar templates.
Documentation is at:
http://meta.wikimedia.org/wiki/ParserFunctions
I would like to hear comments about the syntax, before we put them live. Syntax is guided by
consistency with existing functions such as {{localurl:}}, but if it looks too unwieldy then we can
probably change it.
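To give a quick flavour of the sort of thing the documentation describes (the
examples below are only illustrative, and as noted the syntax may still
change):

{{#expr: (25 - 10) * 2 }}
{{#if: {{{param|}}} | text shown if param is set | text shown if it is empty }}
{{#ifeq: {{{1}}} | yes | shown when the values are equal | shown otherwise }}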
Don't blame me. I've always been against turning wikitext into a programming language. I'm just
weaker than the other developers. How can I stand by and watch this sort of thing be inflicted on
our articles:
http://en.wikipedia.org/w/index.php?title=User:Ed_Poor/subtract&action=edit
I had a choice between going on a deletion rampage or answering the persistent calls on the wiki for
this kind of thing to be implemented.
Templates, that's where it all went wrong. Or custom messages, as we called them back then. If only
I understood what a Pandora's box I was opening when I implemented {{MSG:}}.
http://mail.wikipedia.org/pipermail/wikitech-l/2003-September/018536.html
-- Tim Starling
Hi all,
I guess you probably know that Wikipedia was down earlier, due to a
power fault at the colo centre (apparently). I was chatting to brion on
IRC and he recommended that I contact this list.
I think it should be possible to make Wikipedia fully redundant to
outages of individual data centres, and not too expensive. Here's how.
Get a BGP-portable IP address range. Advertise this range from TWO
locations, at separate data centres. Have basically identical read-only
servers at each site, with the same IP addresses. Don't worry about IP
conflicts: since the servers are identical, the shortest route from any
given client will point to just one data centre, and won't move unless
that data centre goes down, at which point traffic will automatically fall
back to the other.
Under normal conditions, your load is shared between both data centres,
so you don't need to actually increase the number of servers. If one
goes down, all requests go to the other, so performance might drop, but
Wikipedia should stay up.
This only works for read-only servers, so the process of editing
Wikipedia would still rely on one of the groups (or some subset of
servers in that group) being masters, and all the other servers being
slaves that sync off those masters.
It's just a suggestion; I'd be interested to hear what you think.
If you are interested, I know a hosting company that has a BGP-portable
range (I used to work for them), and I could talk to them about whether
they can set up redundant IP tunnelling for that range to whatever IP
addresses (VPN endpoints) you want, so you wouldn't even need to have
your own BGP range.
Cheers, Chris.
--
___ __ _
/ __/ / ,__(_)_ | Chris Wilson <0000 at qwirx.com> - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |
Real-time mirrors seem to be a recurring phenomenon. They are a drain on
Wikipedia's resources, and hunting them and shooting them down is a
continuing battle.
The reasoning behind these mirrors appears to be:
1 putting up a Wikipedia mirror with ads will make money...
2 too lazy to set up a proper mirror...
3 instead, set up a script that queries Wikipedia in real time...
4 profit!
However, why not turn this on its head and offer a real-time, or
near-real-time, Wikipedia feed service to paid-up subscribers?
Currently, Wikipedia's running costs are about $1.2M per year, and this
pays for, among other things, serving about 4000 hits per second, that
is to say, about 1.26 x 10^11 hits per year, or about $ 10^-5 per hit.
(Of course, this is average gross cost; marginal cost will be
significantly higher, say $ 10^-4 per hit).
Web advertising rates are generally of the order of $1 CPM: that is, $
10^-3 per hit. If an advertiser manages to get 10,000,000 hits per year,
they will make $10,000 in ad revenue and cost the Wikimedia Foundation
around $1,000 in leeched server load.
What if we were to turn things round, and charge (say) $ 2 x 10^-4 per
hit for an official real-time mirror service? (Of course, this would be
aggregated in lumps, because it's impossible to bill tiny fractions of a
dollar.) Now the mirror operator nets $ 10^-3 - $ 0.2 x 10^-3 = $ 0.8 x 10^-3
per hit: they still make 80% of the money they would have made before, and
don't need to worry about being cut off. The economics for the WF, however,
are now quite different: instead of losing $ 10^-4 per hit, the Foundation
would take in $ 2 x 10^-4 and spend $ 10^-4 per hit, and would thus make
$ 1000 gross profit over the course of the year for those 10,000,000 hits,
which can be ploughed back into achieving the
Foundation's charitable goals (for example, by buying new server kit and
bandwidth, or paying for other real-world activities).
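To spell the example out, per 10,000,000 hits/year served to one mirror
(using the round numbers above):

  mirror ad revenue:    10^7 hits x $ 10^-3 per hit     = $ 10,000
  feed fee paid to WF:  10^7 hits x $ 2 x 10^-4 per hit = $  2,000  (mirror keeps $ 8,000, i.e. 80%)
  WF serving cost:      10^7 hits x $ 10^-4 per hit     = $  1,000
  WF gross margin:      $ 2,000 - $ 1,000               = $  1,000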
Note that the users of the real-time mirrors are _not_ being charged for
use of the GFDL content, which remains freely available as before; they
are being charged for real-time access to WP data, with no need to run a
modified copy of MediaWiki in order to run their service.
Administration of the scheme could be made automatic by allowing the
existing credit-card interface to be used for payment: the subscriber enters
the IP address or addresses to be authorized and an e-mail contact address,
and gets an authorization key mailed back.
As a result:
* Wikipedia remains ad-free
* the WF gets revenue
* the advertisers still get to make (slightly less) money, but this time
without leeching unauthorized resources.
The feed could be provided from the existing software, just with a "null
skin" that emits only the rendered page content. This would slightly reduce
the cost of producing each page (e.g. no check for messages, greater scope
for caching) and, at the same time, make the page content easier to re-use,
since there would be no need to strip the user interface from around it.
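(If I remember correctly, MediaWiki's existing "action=render" URL parameter
already behaves roughly like this, returning just the rendered article HTML
without the surrounding skin, e.g.

  http://en.wikipedia.org/w/index.php?title=Some_article&action=render

where "Some_article" is just a placeholder, so the "null skin" might need
little or no new code.)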
With other changes, for example, not checking for red/blue links,
serving costs could probably be reduced even further, and quite possibly
WF could charge more than $ 2 x 10^-4 per hit. Given the number of
mirrors around, setting up this scheme might pay for itself in a month
or less.
Good idea, or bad idea?
-- Neil