Hello!
You are receiving this email because your project has been selected to
take part in a new effort by the PHP QA Team to make sure that your
project still works with to-be-released PHP versions. With this we hope
to make sure that you are aware of things that might break, and that we
don't introduce any strange regressions. With this effort we also hope
to build a better relationship between the PHP Team and the major
projects.
If you do not want to receive these heads-up emails, please reply to me
personally and I will remove you from the list; but we hope that you
want to actively help us make PHP a better and more stable tool.
The first release candidate of PHP 4.4.1 can be found at
http://downloads.php.net/derick/ . If everything goes well, we hope to
release PHP 4.4.1 at the start of the week in which October 17th falls,
as it solves certain critical issues.
If you find any issues, please contact the PHP QA team at
"php-qa(a)lists.php.net".
In case you think that other projects should also receive this kind of
email, please let me know privately, and I will add them to the list of
projects to contact.
regards,
Derick
--
Derick Rethans
http://derickrethans.nl | http://ez.no | http://xdebug.org
Hi,
Can someone please tell me where I can find the wfEscapeJsString method? I
have added a button to the edit toolbar and now want to add the sample text
and tooltip for it. I am guessing it's in this method.
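For reference, the toolbar entry I added looks roughly like this (the
variable name, image name and message keys below are placeholders from
memory, not the real ones):

  // One entry in the edit toolbar array: 'sample' is the text inserted
  // between the open/close tags, and 'tip' is the tooltip shown on hover.
  $toolarray[] = array(
      'image'  => 'button_mybutton.png',
      'open'   => '<mytag>',
      'close'  => '</mytag>',
      'sample' => wfMsg( 'mybutton_sample' ),
      'tip'    => wfMsg( 'mybutton_tip' ),
      'key'    => 'M'
  );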
Thanks,
--
Amruta
Based on the tinyweb server, sqlite, PHP, MediaWiki and a self-written
XML-to-sqlite converter, I have created a package [1] for Windows that
can run a wikipedia without any installation required.
This package doesn't include any wikipedia data set, so you'll have to
generate your own from an XML dump. Instructions are dead easy; see the
included README file. Some drawbacks of this alpha version:
* Conversion takes a long time
It took 6 hours to convert a recent XML dump on my 2.8GHz P4. So, start
this in the evening ;-)
* The sqlite database gets huge
For the above-mentioned dump, it is just below 3GB. However, as this is
mostly text, it should compress well, if someone wants to make a CD
image with a simple installer from it.
I might play with the whole setup, maybe keeping the actual text in the
zipped XML dump and just storing titles etc. in the database. This would
speed up the conversion process and decrease the database size
considerably, but would pose other problems and slow the whole thing down.
* Browsing is slow
Most pages render in under 10 sec, but some (like the Main Page, for
some unknown reason) get stuck somewhere and won't stop rendering. I'll
look into this. Meanwhile, the default start page is "Biology" ;-)
* Categories and interwiki links are broken
For categories to work, I'd have to regenerate all the links in the
database, which means rendering all articles, which would take too long.
I'm not sure why the interwiki links are broken. It looks like it forgot
the language codes somehow.
* Links and articles with a single quote (') in them show as broken
This has something to do with sqlite quote escaping. I'm not sure how to
fix that yet, but it basically shouldn't be a problem.
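My guess is that the fix belongs in the string-escaping methods of
DatabaseSqlite.php, something like the untested sketch below (the method
names follow what the MySQL Database class uses, if I remember right):

  # Untested sketch: escape single quotes the way sqlite expects,
  # using the PHP sqlite extension's escaper.
  function strencode( $s ) {
      return sqlite_escape_string( $s );
  }

  function addQuotes( $s ) {
      if ( is_null( $s ) ) {
          return 'NULL';
      }
      return "'" . $this->strencode( $s ) . "'";
  }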
Other than that, you can read all (OK, most) of the pages, and links work.
I didn't dare try searching yet ;-)
It might be noteworthy that no MediaWiki code had to be patched for
this. The package actually includes the CVS info, so you can "cvs
update" MediaWiki. I only had to design a special LocalSettings.php and
write a DatabaseSqlite.php structure, containing some ugly hacks.
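To give an idea of how little glue is needed, the LocalSettings.php part
basically just tells MediaWiki to load the sqlite driver instead of the
MySQL one; something along these lines (only meant to illustrate the idea,
not quoted from the actual file):

  # Illustrative only: load the custom sqlite-backed Database class
  # and select it as the database type.
  require_once( 'includes/DatabaseSqlite.php' );
  $wgDBtype = 'sqlite';
  $wgDBname = 'wikidb';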
Strangely enough, I have to route all GET requests through a DOS batch
file, which is where I lose POST data. Otherwise, one could even edit
the wiki.
Please have a look at it and tell me what you think.
Magnus
[1] http://www.magnusmanske.de/wikipedia/standalonewiki.zip (about 6MB)
Hi all,
I am writing to request the immediate creation of an Illocano
Wikipedia due to the overwhelming support shown on Meta and the
complete lack of opposition.
Please see http://meta.wikimedia.org/wiki/Approved_requests_for_new_languages#Illokano…
The ISO code is ilo; it should thus be located at http://ilo.wikipedia.org/
Thanks
Node ue
--
SI HOC LEGERE SCIS NIMIVM ERVDITIONIS HABES
QVANTVM MATERIAE MATERIETVR MARMOTA MONAX SI MARMOTA MONAX MATERIAM
POSSIT MATERIARI
ESTNE VOLVMEN IN TOGA AN SOLVM TIBI LIBET ME VIDERE
There has been a lot of discussion lately on the or-talk list about
how to let tor and other anonymizing proxy users edit wikipedia without
allowing vandals free rein. Several straightforward approaches have been
proposed, such as holding edits in escrow pending approval by a trusted
user, and requiring anonymizing network users to log in before posting.
The latter idea in particular could easily be abused, since abusers can
create a new account for each edit.
Roger Dingledine, tor's author, suggested creating a pseudonym service
using a cryptographic construction called blind signatures:
http://www.rsasecurity.com/rsalabs/node.asp?id=2339
Basically, Alice can generate a token, mathematically blind it
(obscuring its value), have it signed, then unblind the signature.
Anyone can verify that the signature on the token is valid, but nobody,
including the signer, can link the blinded value Alice had signed with
her unblinded token.
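For anyone who hasn't seen the construction before, here is a toy RSA
example of blinding, signing and unblinding, written in PHP with the GMP
extension (textbook-sized numbers and no padding, purely to show the
algebra, not how a real token server would do it):

  <?php
  // Signer's toy RSA key: n = 61 * 53, with matching public and
  // private exponents e and d.
  $n = gmp_init( 3233 );
  $e = gmp_init( 17 );    // public exponent
  $d = gmp_init( 2753 );  // private exponent (signer only)

  // Alice's token m and a random blinding factor r coprime to n.
  $m = gmp_init( 1234 );
  $r = gmp_init( 71 );

  // Alice blinds: m' = m * r^e mod n, and sends m' to the signer.
  $blinded = gmp_mod( gmp_mul( $m, gmp_powm( $r, $e, $n ) ), $n );

  // The signer signs the blinded value: s' = (m')^d mod n.
  $blindSig = gmp_powm( $blinded, $d, $n );

  // Alice unblinds: s = s' * r^-1 mod n, a valid signature on m.
  $sig = gmp_mod( gmp_mul( $blindSig, gmp_invert( $r, $n ) ), $n );

  // Anyone can check the signature, but the signer never saw m itself.
  var_dump( gmp_cmp( gmp_powm( $sig, $e, $n ), $m ) == 0 );  // bool(true)
  ?>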
I implemented such a scheme which works as follows:
* Alice creates and blinds a token, then submits it to a token server
for signing. Optionally, the token server may have a list of IPs banned
from wikipedia, and refuse to sign Alice's token if her IP is on the list.
* The token server signs the blinded token, then records what IP address
Alice used so that she can't obtain multiple tokens per IP address.
Later, this will allow us to block Alice's IP address if she misbehaves,
just as Wikipedia admins currently do, except that now it'll work even
when she connects via tor. Token rationing could also be done based
on other (more or less) scarce resources, including email addresses,
captchas, CPU-intensive tasks or even money, just as I'm sure has been
proposed for the vanilla wikipedia. The advantage of blind signatures is
that tokens can be recorded and blocked without revealing the potentially
sensitive underlying resource (such as a personal email address or
IP address).
* Alice can now turn on tor and present her token to wp, without revealing
her actual IP address. This token takes the place of the IP address
record currently stored along with article edits, and can be blacklisted
just the same way that IPs are banned.
* However, I implemented an intermediary step which has several
advantages. Instead of presenting her token to wp, Alice generates an
essentially empty client certificate and presents it via the tor network
to a certificate authority (CA) for signing, along with the signed token.
The CA records that the token has been "spent" (preventing her from
receiving multiple certs per token), then signs her cert just as Verisign
would sign a server SSL certificate. Since she connects via tor, the CA
doesn't learn her real IP address.
* Alice installs the client certificate in her browser, then connects
to a special wp server running an SSL server that demands valid client
certificates from our CA. That configuration takes only 4 lines in my
apache-ssl server's httpd.conf. Apache automatically sets environment
variables which identify the client certificate, and which can be used
in place of the REMOTE_ADDR variable currently used to record users'
incoming IP addresses when marking page edits (see the sketch below the
list). Blocking a client cert would then be just as easy as blocking an
IP address.
All of Alice's edits will be marked with that identifier unless she
obtains a new IP address (or other scarce resource) and repeats the
process to obtain another certificate. Later, features can optionally
be added which will allow her to have separate identifiers for each edit
(protecting her in case, say, her repressive government confiscates her
computer in order to find out if she wrote a particular article they
disagree with).
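As an illustration of the wp-side change (the part I haven't written yet),
the recording code could fall back to the usual IP address when no client
certificate is present, roughly like this (the SSL_CLIENT_* names are the
mod_ssl-style environment variables; apache-ssl may spell them slightly
differently):

  // Sketch only: use the client certificate's issuer DN plus serial
  // number as the edit identifier when a cert is present, otherwise
  // fall back to the IP address as today.
  if ( isset( $_SERVER['SSL_CLIENT_M_SERIAL'] ) ) {
      $editIdentifier = $_SERVER['SSL_CLIENT_I_DN'] . '/' .
                        $_SERVER['SSL_CLIENT_M_SERIAL'];
  } else {
      $editIdentifier = $_SERVER['REMOTE_ADDR'];
  }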
I have already released code to implement this system, with the exception
of the wp-specific code. I sent the proposal to both the or-talk list
and the cryptography list at metzdowd.com on Monday. Next I'd like your
comments, before I dive into the mediawiki code (or find someone willing
to help with this part). Once the feature is complete, we can set up a
live test wiki for people to bang on, before we consider implementation
on the live wp servers.
-J
Hi,
I asked about 2 months ago on this list whether the Special:Export utility
had been re-enabled, such that complete XML files, including all revisions,
could be generated and downloaded for all entries. I was told that it
would happen soon. I tried to use it the other day and it still wasn't
functional. Any idea when this will be fixed/enabled?
Thanks a lot.
Sorin Matei
Since the archives are fairly opaque on the subject, does anyone know if
it's possible to spider the database dumps offline through the use of
WikiFilter? Or do you have to be running the full MediaWiki software
locally?
Thanks; I've got a catch-the-miscategorization project in mind.
Regards,
David R
Thank you so much, Ashar and Tim,
for setting up the new subdomains!
With regards,
Jay B.
2005/9/25, wikitech-l-request(a)wikimedia.org <wikitech-l-request(a)wikimedia.org>:
> Message: 5
> Date: Sun, 25 Sep 2005 15:29:50 +0200
> From: Ashar Voultoiz <hashar(a)altern.org>
> Subject: [Wikitech-l] nap, war, lad wikipedias created
>
> Hello,
>
> With the technical assistance of Tim Starling, I created three new
> wikipedia projects which were pending somewhere on meta.
>
> The projects are:
>
> Ladino : http://lad.wikipedia.org/
> Neapolitan : http://nap.wikipedia.org/
> Waray-Waray : http://war.wikipedia.org/
>
> For any trouble with those newly created projects, please reply to
> wikitech-l mailing list only (followup-to set).
>
--
ilooy.gaon(a)gmail.com
Dear Urs Richle,
What I want to do is make a picture of the linkages, in layers you
could turn off or on, to see how they go together in XTM, RDF or OWL.
Somewhere among these will be the best fit/clearest picture.
I could not get through to your email or find you on
http://tecfa.unige.ch/. Please let me know how to reach you to
continue this discussion so we can create one or two concrete examples
to post.
Sincerely,
Deborah MacPherson
On 10/5/05, wikitech-l-request(a)wikimedia.org
<wikitech-l-request(a)wikimedia.org> wrote:
>
> Today's Topics:
>
> 1. Re: Re: [WikiEN-l] Ranking articles using machine-generated
> stats (Evan Martin)
> 2. Re: Single Sign-On (Johannes Ernst)
> 3. Clarification: Single Sign-On (Johannes Ernst)
> 4. Re: Clarification: Single Sign-On (Rob Lanphier)
> 5. Re: Map of Wikipedia (Urs Richle)
> 6. Re: Clarification: Single Sign-On (Johannes Ernst)
> 7. MediaWiki OpenID patch (Dan Libby)
> 8. Re: Re: How to add a button in the existing text editor
> (Ævar Arnfjörð Bjarmason)
> 9. Re: Re: How to add a button in the existing text editor
> (Amruta Lonkar)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 5 Oct 2005 12:17:24 -0700
> From: Evan Martin <evanm(a)google.com>
> Subject: Re: [Wikitech-l] Re: [WikiEN-l] Ranking articles using
> machine-generated stats
> To: Wikimedia developers <wikitech-l(a)wikimedia.org>
> Cc: English Wikipedia <wikien-l(a)wikipedia.org>
> Message-ID:
> <9f43d19d0510051217n10249becpf0378e4326fb859(a)mail.google.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On 10/5/05, Neil Harris <usenet(a)tonal.clara.co.uk> wrote:
> > Neil Harris wrote:
> > > The corpus-based measures are particularly interesting; they mean we
> > > don't need to bug Google for a million search keys.
> >
> > Although if anyone from Google is monitoring this list, and wants to
> > give me a Google Account with 1.25M search keys, I'd be happy to set off
> > the appropriate script... or send it to you to run.
>
> In any case, the number of results reported is a very approximate
> estimate. See, for example,
> http://blog.outer-court.com/archive/2005-02-08.html#n72
>
> I think it'd be much easier to use a standard measure of usefulness:
> look at access logs on wikipedia's end. If article A gets twice the
> number of hits per day as article B, it would seem natural that
> someone would be twice as likely to look it up in a paper-based
> encyclopedia. (There are certainly exceptions like hot news stories
> or controversial topics during a revert war, but I think it'd take you
> a long way...)
>
> I like Neil's list too, but that, as they observed, is a lot more work.
>
> -- Evan, monitoring this list :)
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 5 Oct 2005 12:49:46 -0700
> From: Johannes Ernst <jernst+wikipedia.org(a)netmesh.us>
> Subject: Re: [Wikitech-l] Single Sign-On
> To: Wikimedia developers <wikitech-l(a)wikimedia.org>
> Message-ID: <21696F4B-DC98-4379-9688-0AB28FE3491B(a)netmesh.us>
> Content-Type: text/plain; charset="us-ascii"
>
> >> 2) Is this the right mailing list to discuss this?
> >>
>
> According to http://en.wikipedia.org/wiki/Wikipedia:Mailing_lists,
> this list is "for any WikiMedia development issues, technical
> discussions, ..." which is what I'd like to discuss.
>
> MediaWiki-l seems to be "for people with questions about their own
> installation of MediaWiki" which isn't really what I have in mind.
>
>
> Johannes Ernst
>
>