I would like to have two mailing lists set up; can someone do this for me?
One: wiki-research-l, "A low volume list for people who are actively
engaged in research on the Wikipedia project" -- this list should be
moderated with me as the moderator for now. It should be open for
anyone to subscribe to. I will find a proper moderator soon.
Two: wiki-offline-reader-l "A list for the co-ordination of work on a
free offline MediaWiki reader" -- there are several people who have
contacted me recently about doing a project like this and I feel that a
separate mailing list to focus on this will be helpful.
I am open to suggestions or commands :-) regarding the names, focus,
etc. of these lists, but I'd like to get them started fairly quickly
because I have business cards from lots of people who want me to do this.
--Jimbo
Is there already work in progress for a rating / voting mechanism for articles - perhaps as part of the validation project? I was thinking that some wikis (particularly Wikicities wikis) might very much like such a feature, and if it were developed to be configurable, it could also be part of the article validation process. What I'm thinking of is something that gives one the ability to do Epinions-style ratings, but configurable from wiki to wiki. I poked around Meta for a while, but didn't find anything conclusive.
Thanks,
Aerik
Brion:
Now that this runs 1.5alpha2, it would be nice to have the e-mail
functions re-enabled on that test wiki.
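In case it helps, I believe these are the relevant LocalSettings.php
switches (just a sketch using the standard setting names; the addresses
below are placeholders):

  # LocalSettings.php fragment -- assumed standard 1.5 setting names
  $wgEnableEmail      = true;   # master switch for outgoing e-mail
  $wgEnableUserEmail  = true;   # "E-mail this user" feature
  $wgEmergencyContact = "wiki-admin@example.com";   # placeholder address
  $wgPasswordSender   = "wiki-admin@example.com";   # placeholder address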
Regards,
T.
P.S. I installed the alpha2 without problems ("fresh" database) and all
functions are working fine - as far as I can see.
Amit wrote:
> Does Wikipedia have customised syndication of its content?
Not currently, no. But I'm working together with Evan on an extension
that provides extensive and customizable RDF information for articles
and media files. I hope this will be enabled on the Wikipedias as soon
as we go to MediaWiki version 1.5. See
<http://meta.wikimedia.org/w/index.php?title=RDF>
An XML representation of wiki markup is also being worked on (by me and
others), but this will probably not be available before version 1.6. See
<http://meta.wikimedia.org/wiki/User:Duesentrieb/XML>
It would help to know some more technical detail about what data you
want to query and how you are going to process it.
regards,
Daniel
--
Homepage: http://brightbyte.de
On 7 June, 2005 at 3:00AM Eastern Standard Time (9:00AM Paris/Berlin
time) we will be moving the bulk of the servers to a new facility across
the street.
I assume we will try to set up a sensible "downtime" page on the Paris
squids or something. But the site will absolutely be down for a while.
The colocation facility is providing staff to do the move, and we are
also supplying people: myself, Chad, Terry, possibly Michael Davis, and
possibly a friend of Chad's -- all to make this go as quickly as possible.
--Jimbo
On 04/06/05, QuotationsBook.com Webmaster/Support
<quotationsbook(a)gmail.com> wrote:
> Thanks very much for your note. I had a good read of these links, and
> due to being new to wikipedia data access methods, still couldn't make
> sense of what would be the best course of action. I essentially need
> to pass a search query to wikipedia e.g. "Wilde, Oscar" and for the
> first article returned, display the article text on quotationsbook.com
> for that author.
Well, if you want to use a large amount of such content, but don't
mind it lagging slightly behind the copy on Wikipedia itself, the
"cleanest" solution is to download a copy of the database and extract
the information yourself. However, that might seem a technically
complex solution, and the content you want may actually only be a
fraction of the Wikipedia database.
In which case, you have a further two options. Either ask for the
"wikitext" source of an article, using Special:Export or
en.wikipedia.org/w/index.php?title=<some article>&action=raw, and use
a local copy of MediaWiki (or one of the programs at
http://meta.wikimedia.org/wiki/Alternative_parsers) to turn that into
HTML for you (less load on the Wikipedia servers, but more complex for
you); or just request the rendered article and separate the content
from the navigation stuff (easy enough to automate if you look at the
source, though you may want to play around with some styling to make
things look right).
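For illustration, fetching the raw wikitext can be as simple as the
following rough PHP sketch (no error handling or rate limiting, and the
title is hard-coded):

  <?php
  // Rough sketch: fetch the raw wikitext of one article via action=raw.
  $title = 'Oscar_Wilde';
  $url = 'http://en.wikipedia.org/w/index.php?title=' . urlencode( $title )
       . '&action=raw';
  $wikitext = file_get_contents( $url );
  if ( $wikitext === false ) {
      die( "Request failed\n" );
  }
  echo $wikitext;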
BTW, note that person articles in Wikipedia are generally titled
"Firstname Lastname", not "Lastname, Firstname" - e.g.
http://en.wikipedia.org/wiki/Oscar_Wilde. You could probably have your
software guess the correct name in most cases, but
http://en.wikipedia.org/wiki/Special:Search?search=<some terms> may
also be useful (this will return an article with an exactly matching
name if one exists, and search results otherwise).
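A rough sketch of that guess (the heuristic is only an assumption and
won't cope with every name):

  <?php
  // Sketch: turn "Wilde, Oscar" into a likely article URL.
  function guessWikipediaUrl( $name ) {
      if ( strpos( $name, ',' ) !== false ) {
          list( $last, $first ) = array_map( 'trim', explode( ',', $name, 2 ) );
          $name = "$first $last";   // "Wilde, Oscar" -> "Oscar Wilde"
      }
      return 'http://en.wikipedia.org/wiki/' . urlencode( str_replace( ' ', '_', $name ) );
  }
  echo guessWikipediaUrl( 'Wilde, Oscar' ), "\n";
  // If the guess misses, fall back to Special:Search as above.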
> I don't know whether I can do this dynamically (request by request),
> or store a single cached copy of an article on my site, so that the
> request is only made once.
Well, that's almost entirely up to you - once you've downloaded data,
you can do what you like with it; nothing Wikipedia does could allow
or prevent a particular caching scheme at your end. However, out of
consideration for the frequently overloaded servers maintained by the
non-profit Wikimedia Foundation, some form of caching would probably
be considered far preferable to making a fresh request every time. A
standard HTTP "if-modified-since:" header, like most browsers and
proxies use, would do if you wanted to stay up-to-date; but how you
actually manage the request storage is entirely up to you.
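A minimal sketch of such a conditional fetch, using PHP's cURL bindings
(illustrative only; the cache location and fallback handling are
placeholders you would want to do properly):

  <?php
  // Re-fetch only if the article changed since our cached copy.
  $title     = 'Oscar_Wilde';
  $cacheFile = '/tmp/wp_' . md5( $title ) . '.html';   // placeholder cache path
  $ch = curl_init( 'http://en.wikipedia.org/wiki/' . $title );
  curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
  if ( file_exists( $cacheFile ) ) {
      curl_setopt( $ch, CURLOPT_TIMECONDITION, CURL_TIMECOND_IFMODSINCE );
      curl_setopt( $ch, CURLOPT_TIMEVALUE, filemtime( $cacheFile ) );
  }
  $body = curl_exec( $ch );
  $code = curl_getinfo( $ch, CURLINFO_HTTP_CODE );
  curl_close( $ch );
  if ( $code == 200 && $body !== false ) {
      file_put_contents( $cacheFile, $body );      // fresh copy
  } else {
      $body = file_get_contents( $cacheFile );     // 304 (or failure): reuse cache
  }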
Also note that relying on Wikipedia responding before you return any
of your own content could slow down your site *a lot*, as the servers
often have heavy load or even go down for hours at a time. Not
necessarily a problem, but worth bearing in mind when designing your
caching solution.
--
Rowan Collins BSc
[IMSoP]
Hi, I would like to add some code to allow changing some options on the
gallery. In particular I would like to be able to change the thumb size
and perhaps also the number of columns (and perhaps also add a caption).
I have traced through the relevant bits of the parser code to where we
extract the attributes, but it looks as though the parser doesn't
currently allow for attributes in the "<gallery>" tag.
For example, if I wanted something like <gallery
thumb=150px;cols=1;header="My gallery heading">, I don't think this
extra stuff is parsed out right now?
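Something like this rough sketch is what I'm imagining for the attribute
parsing (hypothetical code, not wired into the real parser yet):

  // Hypothetical: parse the attribute string of the proposed syntax,
  // e.g. thumb=150px;cols=1;header="My gallery heading"
  function parseGalleryAttributes( $attrString ) {
      $attrs = array();
      foreach ( explode( ';', $attrString ) as $pair ) {
          if ( preg_match( '/^\s*(\w+)\s*=\s*"?([^"]*)"?\s*$/', $pair, $m ) ) {
              $attrs[ strtolower( $m[1] ) ] = $m[2];
          }
      }
      return $attrs;   // e.g. thumb => 150px, cols => 1, header => My gallery heading
  }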
Any suggestions on the cleanest way to implement this? Will it be
accepted for inclusion...?
Thanks
Ed W
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
MediaWiki 1.5 alpha 2 includes a lot of bug fixes, feature merges,
and a security update.
THIS IS AN EXPERIMENTAL RELEASE FOR TESTING ONLY. Public or
in-production servers should use the stable MediaWiki 1.4.5 release.
Incorrect handling of page template inclusions made it possible to
inject JavaScript code into HTML attributes, which could lead to
cross-site scripting attacks on a publicly editable wiki.
Vulnerable releases and fix:
* 1.5 prerelease: fixed in 1.5alpha2
* 1.4 stable series: fixed in 1.4.5
* 1.3 legacy series: fixed in 1.3.13
* 1.2 series no longer supported; upgrade to 1.4.5 strongly recommended
For a relatively full list of changes since 1.5alpha1, see the changelog
in the release notes.
Release notes:
http://sourceforge.net/project/shownotes.php?release_id=332229
Download:
http://prdownloads.sf.net/wikipedia/mediawiki-1.5alpha2.tar.gz?download
Before asking for help, try the FAQ:
http://meta.wikimedia.org/wiki/MediaWiki_FAQ
Low-traffic release announcements mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-announce
Wiki admin help mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-l
Bug report system:
http://bugzilla.wikipedia.org/
Play "stump the developers" live on IRC:
#mediawiki on irc.freenode.net
- -- brion vibber (brion @ pobox.com)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (Darwin)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org
iD8DBQFCoHg3wRnhpk1wk44RAgK2AKCUiTvJ7fKmlwfy1ICpShBMFYNGvACgkiGn
oBhbMAqlYR9q0v9Q+vylRsY=
=N4ka
-----END PGP SIGNATURE-----
The new history compression class is finished now. I have
entered it in Bugzilla (ID 2310). The average compression is
about 5 times better than that of the currently used class.
Access to individual revisions is generally faster (by a factor
of 2 or so). Only when all revisions in a history blob are
read (e.g. when a page is exported) is my class slower,
especially when the page has a large number of sections. But this
doesn't yet account for the time needed to load the
history blobs. I've tested the new class with
[[de:Wikipedia:Löschkandidaten/5. Februar 2005]] which is a
typical large discussion page with more than 50 headings.
Consecutive access to all revisions takes about 0.5 seconds
with ConcatenatedGzipHistoryBlobs and 1.4 seconds with
SplitMergeGzipHistoryBlobs. On the other hand, 58
ConcatenatedGzipHistoryBlobs with a total length of 5937 kB are
needed, whereas 21 SplitMergeGzipHistoryBlobs with a total length
of 508 kB can hold the same text. Maybe the time difference for
loading the blobs fully compensates for the slower read access.
What do you think?
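For anyone who hasn't looked at the blob classes, this toy sketch shows
the basic idea behind the concatenated approach (illustration only, not
the actual ConcatenatedGzipHistoryBlob code): all revisions go into one
blob that is compressed as a whole, so redundancy between revisions
compresses very well, but reading a single revision means inflating the
whole blob.

  <?php
  // Toy illustration of a concatenated, gzip-compressed history blob.
  class ToyConcatenatedBlob {
      var $items = array();

      function addItem( $text ) {
          $this->items[] = $text;
          return count( $this->items ) - 1;   // key to fetch it back later
      }

      // What would be written to storage:
      function getCompressed() {
          return gzdeflate( serialize( $this->items ) );
      }

      // Reading one revision means inflating the whole blob first.
      function getItem( $compressed, $key ) {
          $items = unserialize( gzinflate( $compressed ) );
          return $items[$key];
      }
  }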