An automated run of parserTests.php showed the following failures:
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test Parser hook: static parser hook inside a comment... FAILED!
Passed 299 of 301 tests (99.34%) FAILED!
Hello sirs. I've been in communication with a user who feels that my block of
his IP address is inappropriate. E-mail correspondence follows.
----
On 3/21/06, Wwwwolf wrote:
Hello,
213.216.199.14 (Tuira-P1.suomi.net) is currently blocked, with the reasoning
being that it's suspected to be an open proxy.
This is indeed a proxy (but not an open one - or at least it's not
*supposed* to be, I think), used by my ISP (Oulun Puhelin Oy, a.k.a. Oulu
Telephone Company, Oulu, Finland). The thing is, it's a transparent proxy
that the ISP uses by default, and I'm not aware of any way for the users to
disable it (aside from using a port other than 80, of course).
The proxy has been blocked a few times before on fi.wikipedia for vandalism.
The host name suggests it's a proxy for the users of Tuira region in Oulu,
and since I'm not that close to Tuira, I take an educated guess it handles a
huge chunk of northern Oulu (remember, I'm not really that familiar
with the ISP's inner workings; these are just guesses). I'd *hate* to drag every luser
in this big neighborhood out of their homes and ask them nicely, with a
baseball bat, if they have been adding crap to WP. There *has* to be a nicer
solution to this than that, right?
There's an unrepentant vandal in the neighborhood. What a scary thought.
Maybe I should move to Siberia. Meanwhile, maybe I need to edit from the
university =(
----
On 3/22/06, freakofnurture wrote:
If what you said was true, I would not have been able to perform this edit
through it:
http://en.wikipedia.org/w/index.php?title=Wikipedia:Sandbox&diff=prev&oldid…
Sorry, your IP is a Tor proxy.
----
On 3/22/06, Wwwwolf wrote:
All right, so my guess is there's some Bloody Idiot in the neighborhood who
runs a Tor node, 80/tcp traffic comes out of that host and gets intercepted
and forwarded by the ISP's proxy (that *every* customer is forced to use, I
remind you again).
In other words, the blocked IP is, in my educated guess, *not* the Tor node.
It just unwittingly hides a Tor node behind it.
So what exactly are we going to do here? Call the ISP (who are a Phone
Company, I remind you again) and ask them to remind people that
running Tor is very, very stupid because it makes the rest of the customers
unable to edit Wikipedia? Or ask them to take fascist measures to stop
people from running Tor exit points from home? Or move to Siberia; goodbye,
cruel world?
I'm not questioning the wisdom of blocking the thing if Tor traffic really
goes through here; all I'm saying is that this causes a lot of collateral damage.
And what am I supposed to do now? Look like a Chinese dissident and use
Privoxy, the most incomprehensible program devised by mankind since the
advent of Sendmail? I would rather not try to specifically avoid IP bans
even if I'm supposedly the innocent party.
----
On 3/22/06, Wwwwolf wrote:
Still on the open proxy issue:
It seems that the proxy does provide a valid X-Forwarded-For header. I made
a small script on my web host that spits out the HTTP remote address and
X-Forwarded-For header, and got 213.216.199.14 (Tuira-P1.suomi.net) and
82.128.217.58 (addr-82-128-217-58.suomi.net) respectively, the latter of which
appears to be my correct IP.
Now please don't tell me MediaWiki can't block based on X-Forwarded-For.
This is the year 2006, after all... =)
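The test Wwwwolf describes can be sketched in a few lines. The script below is a minimal illustration, not MediaWiki's actual XFF code: the trusted-proxy set and the function name are hypothetical, and the two addresses are the ones from the message. It shows why a server must only honour X-Forwarded-For when the TCP peer is a known proxy — otherwise the header is trivially forged:

```python
# Minimal sketch of X-Forwarded-For handling as described above.
# The trusted-proxy set is a hypothetical example, not MediaWiki's
# actual configuration.
TRUSTED_PROXIES = {"213.216.199.14"}  # Tuira-P1.suomi.net

def effective_client_ip(remote_addr, xff_header):
    """Return the address to block: if the TCP peer is a trusted
    proxy, walk the X-Forwarded-For chain right to left past any
    other trusted proxies; the first untrusted hop is the client."""
    if remote_addr not in TRUSTED_PROXIES or not xff_header:
        return remote_addr
    chain = [ip.strip() for ip in xff_header.split(",")]
    for ip in reversed(chain):
        if ip not in TRUSTED_PROXIES:
            return ip
    return remote_addr

# The values from the message:
print(effective_client_ip("213.216.199.14", "82.128.217.58"))
# -> 82.128.217.58 (addr-82-128-217-58.suomi.net)
print(effective_client_ip("10.0.0.1", "82.128.217.58"))
# -> 10.0.0.1 (untrusted peer: the header is ignored, it can be forged)
```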
----
[end of correspondence]
----
So, I'm wondering whether this is a shaggy dog story, or blatant trolling,
or possibly an alibi with plausible technical merit. I was about to post
this to [[WP:BJAODN]], but a fellow administrator referred me to this list.
All I know is... if *I* was able to make a sandbox edit of "Tor proxy ~~~~"
using his IP, anybody could just as easily use the IP for abusive purposes.
Suffice it to say I don't know what to tell the guy.
--freakofnurture
--
View this message in context: http://www.nabble.com/Tor-and-%22X-Forwarded-For%22-t1325932.html#a3538731
Sent from the Wikipedia Developers forum at Nabble.com.
I am working with MediaWiki 1.5.5 or 1.5.6 (I don't remember exactly),
and I cannot find which table stores the content of an article. I mean,
when I edit or create an article, how is it stored in wikidb, and in
which table? I found an outdated database schema that mentions a table
called cur, but that table is not in the version I have. Could anyone
help, please?
--
Ángel Emilio Quezada Rodríguez
angel.quezada(a)gmail.com
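For reference, MediaWiki 1.5 split the old cur/old tables into page, revision, and text: the wikitext lives in text.old_text, reached via page.page_latest and revision.rev_text_id. The sketch below uses SQLite with stripped-down stand-ins for those tables (the real tables live in MySQL and have many more columns) just to show the join:

```python
import sqlite3

# Stripped-down stand-ins for MediaWiki 1.5's page/revision/text
# tables; the real schema has many more columns.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE page (page_id INTEGER PRIMARY KEY, page_title TEXT,
                   page_latest INTEGER);
CREATE TABLE revision (rev_id INTEGER PRIMARY KEY, rev_page INTEGER,
                       rev_text_id INTEGER);
CREATE TABLE "text" (old_id INTEGER PRIMARY KEY, old_text TEXT);

INSERT INTO "text" VALUES (7, 'Article wikitext goes here.');
INSERT INTO revision VALUES (42, 1, 7);
INSERT INTO page VALUES (1, 'Sandbox', 42);
""")

# Current text of a page: follow page_latest -> rev_id -> rev_text_id.
row = db.execute("""
    SELECT t.old_text
    FROM page p
    JOIN revision r ON r.rev_id = p.page_latest
    JOIN "text" t ON t.old_id = r.rev_text_id
    WHERE p.page_title = ?
""", ("Sandbox",)).fetchone()
print(row[0])  # -> Article wikitext goes here.
```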
> Date: Mar 17, 2006 10:24 AM
> Subject: [Wikitech-l] Distributed process pool
> To: wikitech-l(a)wikimedia.org
>
>
> Hello,
>
> Is there a way to analyze the wikipedia logs to figure out what processes
> take the most time? There is no immediate need, but I wanted to shoot
> off an idea to consider. If we were able to capture the processes that
> do take a heavy server load and push them onto a distributed process
> pool, would it help wikipedia or mediawiki in general? I'd imagine there
> would be a difference with process time over the speed of network
> traffic. Let's say we determined that the code that creates a diff for two
> pages is a hog and can be put into the pool. We could use something like
> BOINC, http://boinc.berkeley.edu/, to standardize the pool. We can add
> the diff process to the pool as the server load gets heavy. The use of
> BOINC is more specific to research tasks, and it would need to be
> different for mediawiki. I just used the idea to keep this message short
> to get your feedback.
>
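The idea quoted above — farming the diff step out to a pool of workers — can be sketched with Python's standard multiprocessing module standing in for a real distributed framework like BOINC. The page revisions here are made up, and the local pool is only a stand-in for network-distributed workers:

```python
import difflib
from multiprocessing import Pool

def diff_pages(pair):
    """Worker task: compute a unified diff for two page revisions."""
    old, new = pair
    return "".join(difflib.unified_diff(
        old.splitlines(keepends=True), new.splitlines(keepends=True),
        fromfile="old", tofile="new"))

if __name__ == "__main__":
    # Made-up revisions standing in for pages pulled from the database.
    jobs = [("Hello world\n", "Hello, world!\n"),
            ("one\ntwo\n", "one\nthree\n")]
    with Pool(processes=2) as pool:         # local stand-in for the
        diffs = pool.map(diff_pages, jobs)  # distributed process pool
    for d in diffs:
        print(d)
```

In a real deployment the scheduling decision — push diff jobs to the pool only when server load crosses a threshold — would sit where `pool.map` is called.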
Jonathan,
Jared and I are actually working on a project to research and build a
peer-based distributed hosting framework for large free-content sites like
Wiki{m,p}edia. In a practical sense, we're using Wikipedia as a starting
point for the inquiry.
If we get accurate statistics on what kinds of processes these are and
what quantity of resources they consume, we might be able to integrate
these considerations into the simulation environment we'll be using to
evaluate various architectures for distributed hosting.
I'm finally subscribing to the wikitech list, and I'll be interested in
learning about anything related that doesn't come up there.
-Erik
Since there's renewed interest in fixing the blocking problems (and
bug 550), I thought I'd draw attention to
http://meta.wikimedia.org/wiki/User:Robchurch/Blocking_Mechanism/Notes
which is a proposal for a fresh blocking mechanism.
I suspect this ties in a lot with the other current thread on this on
wikitech-l, and no doubt the principles are identical or near as
damnit. The precise implementation details are going to differ, but I
expect we can bang our heads together and whack out some code soon
enough.
Rob Church
> > At Wikipedia-en there is a policy against it. For a human-readable
> > encyclopaedia it's a bit of a tacky way of organising sub topics.
> >
> > Steve
There's actually a policy against it? Some people are trying to bring
it back. See
http://en.wikipedia.org/wiki/Root_page
Does anyone know whether it is possible to implement a hierarchical
navigation bar in Mediawiki like e.g.:
* CASES
** BUSINESSCASES-URL | BUSINESSCASES
*** BCASE1-URL | CASE1
*** BCASE2-URL | CASE2
** TECHNICALCASES-URL | TECHNICALCASES
*** TCASE1-URL | CASE1
*** TCASE2-URL | CASE2
Does anybody have experience with this?
Thanks in advance for any help!
I would like to know if it is technically possible to add links to the
previous and next image (in alphabetical order) of an image on its page.
Or would it require too many resources to find the next image in the database?
It would be especially useful for scans of books, where the user could
directly go from one page to the next with such links. At present, the
links have to be added manually. A lot of work for books with 1,000+ pages.
Jofi
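On the resource question above: finding the alphabetical neighbour of an image is cheap if the image names are indexed, since it is a single bounded lookup rather than a scan (in SQL terms, something like `SELECT img_name FROM image WHERE img_name > ? ORDER BY img_name LIMIT 1`). A sketch in Python, with a sorted list of made-up file names standing in for the database index:

```python
import bisect

# Made-up file names standing in for an index over image names.
images = sorted(["Book_p001.png", "Book_p002.png", "Book_p003.png"])

def neighbours(name):
    """Return (previous, next) image names in alphabetical order,
    via a binary search on the sorted name list."""
    i = bisect.bisect_left(images, name)
    prev_img = images[i - 1] if i > 0 else None
    next_img = images[i + 1] if i + 1 < len(images) else None
    return prev_img, next_img

print(neighbours("Book_p002.png"))
# -> ('Book_p001.png', 'Book_p003.png')
```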
On 3/20/06, Brion Vibber <brion(a)pobox.com> wrote:
> [on wikipedia-l]
> The real situation is that our current blocking system sucks. A lot. And if we
> just "flipped a switch" it would suck *MUCH WORSE* because it would be virtually
> impossible to actually block anyone -- just create a bunch of accounts and
> you're immune until someone laboriously tracks them all down.
>
> So, it'll take more options and rethinking and generally some better, clearer
> idea of what blocking's supposed to do.
Here's one proposal. Please comment.
In any case, we leave in place the option to block an IP entirely.
Whatever other, softer notion of blocking we introduce is
just another option, to be used as a first choice but with
hard blocking available as always if necessary.
A first cut at "soft blocking" is to block anon edits
but permit logged-in users to edit. As you say,
the trouble is that vandals can make accounts too.
There are two ways to make an account: by hand
or with a bot. So we can say that to edit from a
soft-blocked IP, you must have two "real user" bits set:
- you've confirmed an email address
- you've passed a captcha[1]
The captcha makes it very difficult for a bot
to make accounts that can edit from behind a soft-block.
The email confirmation makes it take a couple of minutes
to make each account by hand.
If necessary we can add a couple more provisos:
- the same email address can't be used to confirm many accounts
(or many accounts that can edit from behind a soft-block)
- if the IP is from AOL and it's soft-blocked, the confirmed email
must be an AOL address. Since these cost money,
this proviso and the last one together limit the number of accounts
any AOL user can edit with when AOL is soft-blocked.
One could imagine generalizing this to other ISPs --
the hard part is cataloguing the IP range <-> email domain mapping --
but even a special case for AOL would be valuable.
None of this need affect people who just want to make an account,
read while logged in, or set their preferences; the only thing
an account that hasn't met the conditions need be barred
from doing through a soft-block is editing.
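The conditions above boil down to a simple predicate. The sketch below uses hypothetical names (MediaWiki's real blocking code is structured quite differently) just to make the proposed decision explicit: hard blocks always win, soft blocks require both "real user" bits:

```python
# Hypothetical sketch of the soft-block rules proposed above; the
# names are illustrative, not MediaWiki's actual schema.
class User:
    def __init__(self, email_confirmed=False, passed_captcha=False):
        self.email_confirmed = email_confirmed
        self.passed_captcha = passed_captcha

HARD_BLOCKED = set()
SOFT_BLOCKED = set()

def may_edit(ip, user=None):
    """Hard-blocked IPs can never edit; soft-blocked IPs require a
    logged-in user with both 'real user' bits set (confirmed email
    and a passed captcha). Reading and registering are unaffected."""
    if ip in HARD_BLOCKED:
        return False
    if ip in SOFT_BLOCKED:
        return (user is not None
                and user.email_confirmed
                and user.passed_captcha)
    return True

SOFT_BLOCKED.add("213.216.199.14")
print(may_edit("213.216.199.14"))                         # anon -> False
print(may_edit("213.216.199.14",
               User(email_confirmed=True,
                    passed_captcha=True)))                # -> True
```

The extra provisos (one email per account, AOL addresses for AOL ranges) would be additional checks at account-confirmation time rather than in this edit-time predicate.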
I see that Christian Siefkes has partially implemented
a similar proposal (comment 50):
http://bugzilla.wikipedia.org/show_bug.cgi?id=550
so the task is to finish it and give it whatever specific behavior
is deemed most useful.
I have some time and could implement this feature
if there's consensus we should have it.
Greg
(User:Gnp)
[1] Of course for the sake of people who can't see the captcha
we allow people to contact a human and ask for this bit to be set.