I'm proposing again my idea of a Java-based client-side search engine:
http://quatramaran.ens.fr/~monniaux/wikipedia_en/search_applet.html
(Need good JVM, NOT the old Microsoft VM.)
The index is downloaded by chunks, as needed by the client. There's some
caching.
[The index was generated six months ago from a database dump. I'll rebuild
it once I get MySQL working again.]
What do you think? If there is interest, I may implement full-text search.
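To illustrate the chunked-index idea, here is a minimal Java sketch under my own assumptions -- the class and the `ChunkFetcher` interface are invented for illustration, not taken from the applet. Index chunks are fetched on demand and recent ones are kept in a small LRU cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of on-demand index chunks with an LRU cache.
// ChunkFetcher stands in for the real HTTP fetch of one index chunk.
class ChunkedIndex {
    interface ChunkFetcher { byte[] fetch(int chunkNo); }

    private static final int CACHE_SIZE = 32;
    private final ChunkFetcher fetcher;
    // Access-ordered LinkedHashMap: evicts the least-recently-used chunk.
    private final Map<Integer, byte[]> cache =
        new LinkedHashMap<Integer, byte[]>(CACHE_SIZE, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Integer, byte[]> e) {
                return size() > CACHE_SIZE;
            }
        };

    ChunkedIndex(ChunkFetcher f) { fetcher = f; }

    // Returns the chunk from cache, fetching it only on a miss.
    byte[] getChunk(int chunkNo) {
        return cache.computeIfAbsent(chunkNo, fetcher::fetch);
    }
}
```

The access-ordered `LinkedHashMap` keeps the cache bounded without any extra bookkeeping, which matches the "some caching" behaviour described above.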
Does the current wikitex work with the 1.4 branch?
Our TeX user group is very interested in trying it out and perhaps
enhancing the system.
Matthias
--
---------------------------------
www.matthiaspospiech.de
From: "Jimmy (Jimbo) Wales" <jwales(a)wikia.com>
>Robin Shannon wrote:
>> >But it also means that wikipedia
>> > itself probably should turn the feature off.
>> >
>> > (This is not a decree or anything, just one voice in the discussion.)
>>
>> maybe in the 5 or so really big wikipedias, but i know for example
>> that en.wikibooks has a problem with one particular chinese
>> wikispammer, who doesnt get reverted for at least a couple of hours,
>> if not half a day sometimes. I presume there is probably similar probs
>> with some of the other smaller projects.
>
>I think that's probably right.
>
>It's about weighing the value of giving search engines good clues
>about pages that don't suck versus the value of discouraging wikispam
>when it is a problem.
>
>--Jimbo
I agree. I ineptly started a page at [[m:nofollow]], for discussion off this list.
Ben
We have a long-standing problem with AOL, which is that they insist on
being a single giant cluster of anonymizing proxies. Should we consider
sending a cookie to AOL browsers which issue edit requests, to give them
some kind of identity? This would, of course, mean some loss of privacy,
but no more than that of any other IP user who is not behind an
anonymizing proxy.
We could simply give them a random number, generated from a high-quality
PRNG, and send it to them as a reasonably long-lived cookie when they
make their first edit request. This could then be used in lieu of an IP
address. So, we would have three types of name:
* IP addresses (with addresses in dotted-quad or IPv6 notation) for
normal anons
* Logged-in users (names starting with a capital letter)
* Anons with cookies (dotless strings starting with a digit, say,
generated from a _hash_ of the cookie we sent)
Note that we only display a hash of the cookie contents. This allows us
to verify that the cookie is a genuine one sent by us, making spoofing
very hard to do. This could be as simple as keeping a table of valid
cookies; alternatively, some digital-signature scheme could be used to
remove the need for a database lookup. This would also prevent
mischievous users from impersonating AOL users by stealing their cookie.
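A minimal sketch of that token-plus-hash idea in Java (class and method names are mine, not MediaWiki's): issue a random token from a strong PRNG as the cookie value, and display only a hash of it as the user's pseudonym. Prefixing the hex digest with "0" is one way to guarantee the dotless, digit-first form suggested above.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

// Illustrative sketch only -- not MediaWiki code.
class AolCookieId {
    private static final SecureRandom RNG = new SecureRandom();

    // Token sent to the browser in a long-lived cookie on first edit.
    static String newToken() {
        byte[] raw = new byte[16];
        RNG.nextBytes(raw);
        return toHex(raw);
    }

    // Pseudonym shown in page histories: only a hash is displayed, so
    // the cookie value itself (which grants the identity) stays secret.
    static String displayName(String token) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        return "0" + toHex(md.digest(token.getBytes("UTF-8")));
    }

    private static String toHex(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) sb.append(String.format("%02x", x));
        return sb.toString();
    }
}
```

Because the displayed name is derived from the secret token rather than being the token, seeing a pseudonym in a page history gives a spoofer nothing to put in a cookie.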
All of this could be done with very little change to the code, if I
understand correctly how it works. This would let us watch and block AOL
users in much the same way as logged-in or IP users. The downside is
that we would probably have to block AOL users without cookies set from
editing to get the full benefit from this policy. We could easily send
them a message "Dear AOL user: you currently have cookies disabled; you
will need to enable cookies to edit this page. See here for more
information...".
Benefits:
* we can track AOL users for vandalism, at last
* they can still browse without needing cookies set
* no need for extra user interaction, if they have cookies set (which
they do by default)
* no other anons need to have cookies set at all
* this scheme can be extended to other totally anonymizing ISPs, if
needed, including schools/colleges with proxy servers
Downside:
* AOL users lose a bit of anonymity (but, hey, that's the upside, too!).
* highly clueful AOL users could still work around this somewhat by
technical means, but: re-read the first clause of this sentence -- and
it will still deal with 99% of the problem
Note that they are still _pseudonymous_, so there's no way of tracing
through to their real identities save through the AOL abuse department,
so we are still protecting their privacy.
So, this provides a nice tier between 'open' and 'blocked' that should
go a long way towards preventing the need for indiscriminate range-blocks.
How about it?
-- N.
We've been running a release candidate of Mailman 2.1.5 for some time; I
finally got around to upgrading to the final release. Hopefully there
should be no difficulties.
- -- brion vibber (brion @ pobox.com)
http://meta.wikipedia.org/Enotif
Hello all developers and mediawikipedians,
I need urgently to know, pushy as I am, whether or not you are in favour
of having User_talk: ***and*** User: pages treated in the same way:
currently, only User_talk pages show the "You have new messages" label
and trigger an ENotif in CVS HEAD versions, if the user has opted-in.
However more and more users have asked me whether I could also supervise
the accompanying ***User*** page, so that changes on this page are
also triggering ENotifs.
I suggest:
to modify the MediaWiki behaviour so that both the
User_talk:X *and* User:X pages trigger the "you have new messages" flag
and also an ENotif (if enabled). Of course, changes made by user X himself
trigger neither the display of the marker nor an ENotif.
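The proposed rule can be sketched in a few lines of Java (names are mine, not MediaWiki's): user X is notified when someone else edits User:X or User_talk:X, but not when X edits those pages himself.

```java
// Illustrative sketch of the proposed ENotif trigger rule.
class ENotifRule {
    // editedTitle: full page title; editor: who made the edit;
    // owner: the user whose pages we are watching.
    static boolean shouldNotify(String editedTitle, String editor, String owner) {
        boolean ownPage = editedTitle.equals("User:" + owner)
                       || editedTitle.equals("User_talk:" + owner);
        return ownPage && !editor.equals(owner);
    }
}
```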
Looking forward to your answers, please reply to
mailto:mail@wikinaut.de?Subject=ENotif%20also%20for%20Userpages
Wikinaut Tom
Where can one obtain a copy of the user table? I have the latest version (18
days old as of today!) of the cur table from download.wikimedia.org and have
imported it into a local MySQL database. I would like to map the user id
column to a name so
I can generate some reports on articles in the image namespace. For example,
listing unverified images by user name.
As my first attempt at doing some development work for MediaWiki, I
have what I believe to be a fix for bug 190
(http://bugzilla.wikipedia.org/show_bug.cgi?id=190). (It's not really a
bug, but a feature request). I've tested this on a local installation
of the software, and worked out the bugs (at least the obvious ones).
So what is the correct next step? Instructions on meta suggest that I
post the patch to this list. Is there a particular diff tool that
patches should be created with? Or do I just post the code with
instructions about where to put it?
There used to be the test wiki to allow for more people to check this
kind of change. How is that done these days?
Thanks for any help.
-Rich Holton
(en.wikipedia:User:Rholton)
Moin,
having written a <graph>-plugin for Mediawiki, I would like to announce it
here.
It takes textual graph descriptions between <graph></graph> tags. For
instance, this input:
<graph>
[ Bonn ] -> [ Berlin ]
[ Berlin ] -> [ Frankfurt ] { border: 1px dotted black; }
[ Frankfurt ] -> [ Dresden ]
[ Berlin ] -> [ Potsdam ]
[ Potsdam ] => [ Cottbus ]
</graph>
Would be rendered in ASCII as (use monospaced font for viewing :o)
+------+ +--------+ ............. +---------+
| Bonn | --> | Berlin | --> : Frankfurt : --> | Dresden |
+------+ +--------+ ............. +---------+
|
|
v
+---------+ +---------+
| Potsdam | ==> | Cottbus |
+---------+ +---------+
HTML looks similar, but prettier (well, maybe :)
All the gory details, the patch, software, testcases, screenshots etc can
be found at:
http://bloodgate.com/perl/graph/
This is a proof-of-concept - i.e. it is likely not to work #:o)=
It is also still very early pre-alpha. Especially the Graph.php is very
rough - I never did read nor write PHP code before - but it looks
suspiciously like Perl and seems to work, so I am not complaining :)
However, before I wander off over the proverbial big cliff, I'd rather get
some corrections. Read: please tell me what you think about it, whether
this is going to be useful/work/bring world-peace etc.
There are quite a few things that are simply not implemented yet - I do
have plans to implement them in the near future, though :) However, most
of the work remains in the external parser/renderer.
My main interest in this area lies in _easily_ documenting network
plans, flow charts, schematics and other things in that area. IMHO having
such a feature in a wiki would be very useful.
Best wishes,
Tels
PS: Special thanx go to Omega for beta testing!
--
Signed on Wed Jan 12 16:43:06 2005 with key 0x93B84C15.
Visit my photo gallery at http://bloodgate.com/photos/
PGP key on http://bloodgate.com/tels.asc or per email.
Marketing lesson #1: The synergy of the result driven leverage can
*never* incentivize a paradigm shift. -- Walterk (124748) on 2004-01-16
at /.
Hi,
I've been coding a wiki parser in JavaScript with the hope it could be
of some use for the project (especially in giving some relief to the
servers).
You can view a demo here: http://gusanos.sourceforge.net/wp/wikitest.htm
So far it supports the following features:
*Headings (all levels)
*Normal paragraphs
*Internal and external links (with hidden namespaces and parentheses even)
*Normal inline formatting (italics and bold)
*Lists and definition lists (can be nested)
*Tables with full nesting (can even nest other tables)
*Horizontal bars
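As a toy illustration of the inline-formatting step such a parser performs (written in Java here rather than JavaScript, and greatly simplified -- the real parser must handle nesting and apostrophe ambiguity): bold must be rewritten before italics, because ''' contains ''.

```java
// Toy sketch of wiki inline formatting: '''bold''' and ''italics''.
// Not the announced parser -- just the core rewrite it performs.
class WikiInline {
    static String inlineFormat(String wikitext) {
        // Reluctant quantifiers keep each match as short as possible;
        // bold first, so ''' is not half-consumed by the italics rule.
        String s = wikitext.replaceAll("'''(.+?)'''", "<b>$1</b>");
        return s.replaceAll("''(.+?)''", "<i>$1</i>");
    }
}
```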
What still needs to be done:
*<nowiki> tags
*Undesired HTML stripping
*Images
*Interproject links
*Interwiki links
*Categories
*Signatures
*TOC
*Hieroglyphs
*Templates
*?
I think this could be useful for quick previews, avoiding extra server hits.
Some notes:
I've tested it in the following browsers (in all of which it more or less
works): Firefox, Opera 7, Konqueror 3.3, Internet Explorer 6.
IE5 just yields an error and does nothing, but I don't feel like
wasting time making it work in it.
What do you think?
-Pedro Fayolle (aka Pilaf)