MediaWiki 1.3.13 is a security maintenance release.
Incorrect handling of page template inclusions made it possible to
inject JavaScript code into HTML attributes, which could lead to
cross-site scripting attacks on a publicly editable wiki.
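To illustrate the class of bug (a generic sketch, not MediaWiki's actual parser code): if template-supplied text reaches an HTML attribute unescaped, a value containing a quote can break out of the attribute and inject script.
<?php
// Generic illustration of attribute injection -- not MediaWiki's code.
$value = '" onmouseover="alert(document.cookie)';
echo '<a title="' . $value . '">link</a>';   // vulnerable: value breaks out
// Escaping quotes as well as angle brackets closes the hole:
echo '<a title="' . htmlspecialchars( $value, ENT_QUOTES ) . '">link</a>';
?>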
Vulnerable releases and fix:
* 1.5 prerelease: fixed in 1.5alpha2
* 1.4 stable series: fixed in 1.4.5
* 1.3 legacy series: fixed in 1.3.13
* 1.2 series: no longer supported; upgrade to 1.4.5 strongly recommended
The 1.3.x series is no longer maintained except for security fixes;
new users and those seeking general bug fixes should install 1.4.5.
Existing 1.3.x installations that are unwilling or unable to upgrade to
the current stable release should update to 1.3.13; only
includes/Parser.php has changed from 1.3.12.
Release notes:
http://sourceforge.net/project/shownotes.php?release_id=332230
Download:
http://prdownloads.sf.net/wikipedia/mediawiki-1.3.13.tar.gz?download
Before asking for help, try the FAQ:
http://meta.wikimedia.org/wiki/MediaWiki_FAQ
Low-traffic release announcements mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-announce
Wiki admin help mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-l
Bug report system:
http://bugzilla.wikipedia.org/
Play "stump the developers" live on IRC:
#mediawiki on irc.freenode.net
-- brion vibber (brion @ pobox.com)
Alfio Puglisi wrote in gmane.science.linguistics.wikipedia.technical:
> On Sun, 29 May 2005, Kate Turner wrote:
>
>>please let me know about any problems with these files, particularly if
>>they don't extract correctly.
>
> Test using
> http://dumps.wikimedia.org/images/wikipedia/fi/20050530_upload.tar
>
> GNU tar on Cygwin extracts all files correctly, except for the last ones
> (outside any subdirectory), where I get lots of "tar: Skipping to next
> header" errors, and a bunch of invalid gif and png files (button_bold.gif,
> button_bold.png and so on).
I'm getting similar errors with the English Wiktionary image dump.
For example:
e/e6/Caster.png
pax: Invalid header, starting valid header search.
e/e7/Centranthus_ruber.jpg
And then, as the other poster mentioned:
button_math.gif
pax: Invalid header, starting valid header search.
button_math.png
pax: Invalid header, starting valid header search.
magnify-clip.png
Until it finally asks for the 2nd volume and finishes with this:
button_italic.png
pax: Invalid header, starting valid header search.
pax: End of archive volume 1 reached
pax: ustar vol 1, 293 files, 15218688 bytes read, 0 bytes written.
ATTENTION! pax archive volume change required.
Ready for archive volume: 2
Input archive name or "." to quit pax.
Archive name > .
Quitting pax!
pax: Premature end of file on archive read
Thanks,
-drmike
Quick question. Is there any modification to MediaWiki that would allow PHP
code to be used on pages? I've tried adding "?php" as a tag, but without
success.
Hope you can help, thanks for the last answer about wantedpages.
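(For context, MediaWiki does not execute PHP embedded in pages, and enabling
that would be a serious security risk on an editable wiki. The supported
route is a tag extension. A minimal sketch of that mechanism follows; the
names wfSampleTagSetup, wfSampleTagRender and <sample> are hypothetical:)
<?php
# Sketch of a MediaWiki tag extension; include this file from
# LocalSettings.php. All names here are hypothetical examples.
$wgExtensionFunctions[] = 'wfSampleTagSetup';

function wfSampleTagSetup() {
    global $wgParser;
    # Register <sample>...</sample>; the tag body is handed to the
    # callback instead of being parsed as wikitext.
    $wgParser->setHook( 'sample', 'wfSampleTagRender' );
}

function wfSampleTagRender( $input, $args ) {
    # Escape the tag body so it cannot inject HTML.
    return '<tt>' . htmlspecialchars( $input ) . '</tt>';
}
?>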
at least, they will be when the dump is complete, which will take a few more
hours yet. once done, image dumps will be found at:
http://dumps.wikimedia.org/images/
please read the readme files. (note: don't download dumps without an
"upload.tar" symlink, because that means the dump is still in progress and
the file will be incomplete!)
please let me know about any problems with these files, particularly if they
don't extract correctly.
kate.
Hoi,
The IEEE LOM is a standard for providing metadata to the educational
system. It is an open standard and it is being implemented in several
countries. Technically, the standard consists of two parts: the
technical labels and their localisations, and localised versus
universal content.
There was a Dutch organisation that asked in OTRS to host the Dutch
Wikipedia so that it would be able to combine the Wikipedia content
with the IEEE LOM data. In principle there is nothing wrong with that.
However, if 50% of the Dutch data is of a universal nature, this 50%
would not need to be entered for the articles in other languages.
Hosting this metadata on the Wikimedia servers makes sense; it allows
for the opening up of Free content in a proprietary world. It would
mean a huge reduction in cost for every second language implementing
the IEEE LOM data.
The questions I put to you are:
* Are we willing to host open-standard metadata for the educational
world?
* Are we willing to cooperate with organisations that are interested
in implementing this data?
* How will we manage such things? Funds can be found to pay people
doing this kind of work - can we consider this?
Thanks,
GerardM
I just finished downloading the en image dump and tried extracting it
with WinRAR, only to get the error 'the archive is corrupt'. Extracting
it with a different extractor doesn't preserve the folders, and there
only seem to be 9200 images in the dump... there could well be more, as
I haven't extracted it all because of this folder problem.
Does anyone know of an extractor that keeps the folder structure intact?
Once that is extracted, how will my wiki know where the images are for
each article, and thus include them within each article?
thanks
I am writing a PHP script whose aim is to retrieve the size of each row
of the history of several articles.
For this, I've spent some time looking at the different PHP files in
wiki/includes to try to understand the overall architecture of
MediaWiki. I am not an expert in PHP, but my skills in C/C++ and Java
(long time ago though :) helped me.
My script reads a list of articles from an input file and writes to an
output file the total length and the relative size of each edit (the
difference between the current length and the previous one). I am using
the function Article::getRevisionText in order to get the full text
version of old_text.
It works perfectly for a limited number of articles, but the script
stops randomly when the list is longer.
It never stops at the same place, though always around the same article
(but never at the same SQL row), which makes me think that there is
probably an error related to memory or buffer issues, or to the SQL
queries - at least something being full that needs to be emptied.
I've spent several days on it, but my limited PHP skills do not allow
me to fix the bug.
I am not asking anyone to debug the PHP file, but could anyone have a
quick look at the code and tell me whether there are problems in
freeing memory, buffers, or SQL results, or anything else that would
explain the behaviour of the script? (A possible mitigation is sketched
after the script below.)
That would be a great help for my master thesis.
Thank you.
Kevin Carillo
<?php
define( 'MEDIAWIKI', true );
require_once( './includes/Defines.php' );
require_once( './LocalSettings.php' );
require_once( 'includes/Setup.php' );
$article_title = "";
$prev_article_size =0;
$contrib_size =0;
$count_edit =0;
$log_file_nb =1;
$count_article_20 =1;
$count_article =1;
#get database handler
$dbw =& wfGetDB( DB_MASTER );
#open log_file
$log_file = fopen("c:/www/contrib/nms0/nms0_archive_$log_file_nb.txt",'w');
# open file which contains titles of all articles
$article_title_file = fopen('c:/www/contrib/article_title_list.txt','r');
# main while loop that reads the file containing the titles of articles
while (!feof($article_title_file)) {
# prepare log_file: open a new file once 20 articles have been logged in the current log_file
if ($count_article_20 >20) {
# close current handler
fclose($log_file);
# prepare name of new log_file
$log_file_nb ++;
# reset nb of edits of the current article
$count_edit = 0;
# open new log file
$log_file = fopen("c:/www/contrib/nms0/nms0_archive_$log_file_nb.txt",'w');
# reset counter
$count_article_20 =1;
}
# get current article title
$article_title_temp = fgets($article_title_file, 300);
# fgets keeps the trailing line terminator ("\r\n" on Windows) -> strip it;
# rtrim also handles the last line, which may lack a terminator
$article_title = rtrim($article_title_temp, "\r\n");
fwrite($log_file,"\n Article: $article_title Nb: $count_article\n");
fwrite($log_file,"------------------------------------------------\n");
# get history of the article
$query="select old_id, old_title, old_text, old_flags, old_timestamp
from old where old_namespace=0 and old_title =
\"$article_title\"
order by old_timestamp;";
$obj = $dbw->doQuery($query);
#$dbw->deadlockLoop();
if ( $dbw->numRows( $obj ) ) {
while ( $row = $dbw->fetchObject( $obj ) ) {
$count_edit++;
$old_id = $row->old_id;
$old_title = $row->old_title;
$old_timestamp = $row->old_timestamp;
$old_text_full = Article::getRevisionText($row);
# calculate length
$length = strlen($old_text_full);
# calculate contribution size
$contrib_size=$length - $prev_article_size;
#keep length of article in $prev_article_size
$prev_article_size = $length;
fwrite($log_file,"old_title: $old_title edit nb:$count_edit ");
fwrite($log_file," length:$length sold_timestamp:
$old_timestamp contrib_size: $contrib_size \n");
} # end while loop SQL results
$dbw->freeResult( $obj );
} # end if
else {
fwrite($log_file,"$sold_title : No History. \n ");
}
# update count of articles to track for log_file
$count_article_20++;
# update total nb of articles processed
$count_article++;
} # end while loop article titles
# close file handlers
fclose($log_file);
fclose($article_title_file);
?>
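A possible mitigation sketch for the random stops, assuming they come from
PHP's own memory or execution-time limits rather than from MySQL (an
assumption, not a confirmed diagnosis):
<?php
# Place before the main loop. Both calls are standard PHP; whether they
# cure this particular script is an assumption.
ini_set( 'memory_limit', '256M' );  # raise the per-script memory cap
set_time_limit( 0 );                # disable max_execution_time
# Inside the revision loop, after $length is computed, releasing the
# full revision text early also helps:
# unset( $old_text_full );
?>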
The Wikimedia Research Team ..
http://meta.wikimedia.org/wiki/Wikimedia_Research_Team
.. is a new working group focused on studying Wikimedia's content and
technology, developing recommendations and specifications, and building
bridges between outside researchers, developers, the Board, and the
community.
The WRT is open for anyone to join. It is a way to channel information
and communications, nothing more.
The first meeting of the Wikimedia Research Team will take place on
Sunday, June 5, at 20:00 UTC. To figure out what this is in your
timezone, go to:
http://worldtimeserver.com/current_time_in_UTC.aspx
The IRC meeting will take place in the channel #wikimedia-research on
irc.freenode.net, where all the Wikimedia IRC channels are. See
http://meta.wikimedia.org/wiki/IRC_instructions
if you need help connecting.
Some organizational topics for the agenda:
* Agreeing on the structure and mission of the Team
* Building a membership roster with a list of interests for each member
* Systematically inviting individuals from all fields of research to
participate
* Distinguishing between high priority issues that affect the whole Team
and issues that should be discussed in breakaway groups
* Deciding which tools to use where (e.g. when to use Bugzilla, when to
use Meta)
* Defining the first breakaway groups
Some specific deliverables I'd personally like to start working on soon:
* Development task framework. How important is a specific task, how
suitable is it for newbie developers, how suitable for outside
development (e.g. extensions), how important is it for Wikimedia? A
general procedure for deciding when a task should move from volunteer
development into a recommendation for targeted (paid) development is
also needed.
* Research projects. I'm sure there are many students who'd like to do a
thesis on Wikimedia. We can develop a list of worthwhile topics to
study. For example: "It would be interesting to compare how, *over a
range of defined topics* (e.g. 'articles that any encyclopedia should
have'), our content has developed over time -- in size, number of
images, links, and so on." Or: "A distributed survey among experts on
the quality of Wikipedia articles vs. articles in other encyclopedias."
* Community meetings. I want to have IRC meetings with each Wikimedia
project community (Wikipedia, Wikinews, Wikibooks, Wikisource,
Wikiquote, Wikispecies, Wiktionary, Wikicommons, Meta-Wiki) to listen to
their individual needs and discuss possible solutions with them.
But, as noted above, we have to agree by consensus on which issues are
high priority and concern the group as a whole.
I would be very glad if you could make it. Please also invite others to
come, and to join the team itself at
http://meta.wikimedia.org/wiki/Wikimedia_Research_Team
The IRC log from the meeting will be made public.
Let me know if you have any questions.
All best,
Erik Möller
Chief Research Officer, Wikimedia Foundation
Hi! I solved my last problem with the image table.
I made this MySQL script:
CREATE TABLE tmp_image (
img_name varchar(255) binary NOT NULL default '',
img_size int(8) unsigned NOT NULL default '0',
img_description tinyblob NOT NULL default '',
img_user int(5) unsigned NOT NULL default '0',
img_user_text varchar(255) binary NOT NULL default '',
img_timestamp char(14) binary NOT NULL default ''
);
INSERT INTO tmp_image(img_name, img_size,
img_description, img_user, img_user_text,
img_timestamp) SELECT DISTINCT img_name, img_size,
img_description, img_user, img_user_text,
img_timestamp FROM image;
DELETE FROM image;
INSERT INTO image(img_name, img_size, img_description,
img_user, img_user_text, img_timestamp) SELECT
DISTINCT img_name, img_size, img_description,
img_user, img_user_text, img_timestamp FROM tmp_image;
DROP TABLE tmp_image;
Afterwards, jeluf (thanks, thanks, thanks a lot :D) told me to reindex
the table with the script patch-image_name_unique.sql, located in
maintenance/archive (hidden documentation, for friends...).
Well, now I'm on 1.4.x :D, but I have a little problem with the meta
namespace: on 1.3.11 I used in LocalSettings.php the variable
$wgMetaNamespace with the value 'enciclopedia'.
After the latest updates (made with the help of brion, tim, dammit, and
many more... if I ever get an official LAMP certification, you should
all be on the signature ;P) it worked fine, but now it doesn't. Why? I
can't find anything about this.
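(For reference, a minimal sketch of the setting in question in
LocalSettings.php; since 1.4 treats the value as a real namespace name, a
capitalized, space-free value is a plausible requirement - an assumption
worth checking:)
<?php
# LocalSettings.php sketch; the value below is the poster's own.
$wgMetaNamespace = 'Enciclopedia';
?>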
I'm waiting for your replies. :D
Greets.