Hi all,
On a new install I see this error:
# Environment checked. You can install MediaWiki.
#
Generating configuration file...
# Database type: MySQL
# Loading class: DatabaseMysql
# Attempting to connect to database server as ttd_wiki...success.
# Connected to 4.1.10-standard
# Database ta_wiki exists
# Creating tables... using MySQL 4 table defs...
Warning: fopen(../maintenance/tables.sql) [function.fopen]: failed to
open stream: No such file or directory in /home/xxx/public_html/wiki/includes/Database.php on line 1959
Could not open "../maintenance/tables.sql".
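A relative path like ../maintenance/tables.sql is resolved against PHP's
current working directory, so a quick check along these lines (just a
sketch, not part of MediaWiki; the absolute path is copied from the error
message above, with the anonymized "xxx" left in place) shows whether the
file is actually present and readable:

<?php
// Rough sketch: check whether maintenance/tables.sql was uploaded and is readable.
// The absolute path is taken from the error message above ("xxx" is the
// anonymized account name), not from MediaWiki itself.
$path = '/home/xxx/public_html/wiki/maintenance/tables.sql';
if ( !file_exists( $path ) ) {
    echo "$path is missing - was the maintenance/ directory uploaded?\n";
} elseif ( !is_readable( $path ) ) {
    echo "$path exists but is not readable - check file permissions.\n";
} else {
    echo "$path looks fine; the installer's relative path or working directory is suspect.\n";
}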
--
Best regards,
Kirill Krasnov
ICQ 82427351
I have completed an analysis of all images downloaded from Wikipedia
for the last dump, enwiki-20070206, and ran across something that may
be of interest. When uploaded images have HTML numeric character
references in their filenames, the names are not converted to UTF-8
after upload; MediaWiki simply stores them under the literal filename.
This in itself is not a problem for downloading or rendering images,
but it may cause invalid references during XML dump import if the
programs reading the XML data try to interpret the names as UTF-8.
The following images from the last dump were identified as containing
HTML character references in the filenames themselves (not a large
number of them, but interesting):
&#20013;&#33775;&#27665;&#22283;&#20840;&#22294;.jpg
Portret_konny_genera&#322;a_Ignacego_Kruszewskiego.jpg
AM_Pozna&#324;.PNG
HüseyinK&#305;vr&#305;ko&#287;lu.jpg
S&#322;upsk-_Bulwar_nad_rzek&#261;_S&#322;upi&#261;.JPG
Braille&#12288;Writer.jpg
Ra&#1513;&#1493;&#1500;&#1497;_&#1504;&#1514;&#1503;&#1502;&#1493;&#1511;&#1496;&#1503;.jpg
&#1505;&#1490;&#1492;_&#1490;&#1497;&#1497;&#1501;_&#1490;&#1497;&#1512;.jpg
Stanis&#322;aw_Kania.jpg
It may make sense to convert the names to UTF-8 during upload instead
of storing the HTML character references.
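As a rough illustration of that conversion (not existing MediaWiki
code; the input is just one of the filenames listed above), PHP's
html_entity_decode() can turn the numeric references into UTF-8:

<?php
// Sketch: decode HTML numeric character references in a filename to UTF-8.
// The input is one of the filenames from the list above.
$name = 'AM_Pozna&#324;.PNG';
$utf8 = html_entity_decode( $name, ENT_QUOTES, 'UTF-8' );
echo $utf8, "\n";   // prints AM_Poznań.PNG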
Jeff
I get the following fatal error:
[Fri Mar 9 10:04:13 2007] [error] PHP Fatal error: Call to a member
function getText() on a non-object in /Volumes/dimer_web/WebServer/
Documents/colimod/colipedia/includes/SpecialSearch.php on line 333
When I search for "MG1655" at
http://ecoliwiki.net/colipedia/index.php
But other searches work just fine. MG1655 does not correspond to an
existing page when I add it to the URL, and searching for nonsense
strings that don't have pages does not cause this crash. MG1655
occurs as a text match on lots of pages (10069 matches in the text
table), but I don't see why line 333 should care. In fact, I don't
see how the code ever reaches line 333, which handles a single-hit result.
Here are some db queries, in case these give any clues:
mysql> select * from page where page_title like "%MG1655%";
Empty set (0.54 sec)
mysql> select count(*) from text where old_text like "%MG1655%";
+----------+
| count(*) |
+----------+
|    10069 |
+----------+
1 row in set (6.92 sec)
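If the single-hit branch is somehow getting a result without a valid
Title object, a guard along these lines (a standalone sketch, not the
actual code around line 333; $t stands in for whatever that branch
extracts from the search result) would at least avoid the fatal:

<?php
// Sketch: avoid calling getText() on a non-object.
// $t is a placeholder for whatever the single-hit branch pulls out of the result.
$t = null;
if ( is_object( $t ) ) {
    $text = $t->getText();
} else {
    // Fall back instead of dying with "Call to a member function getText() on a non-object".
    $text = '';
}
echo $text === '' ? "no valid title for this hit\n" : $text . "\n";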
Any thoughts?
Jim
=====================================
Jim Hu
Associate Professor
Dept. of Biochemistry and Biophysics
2128 TAMU
Texas A&M Univ.
College Station, TX 77843-2128
979-862-4054
The rebuildall.php script in MediaWiki 1.9.3 hangs during the text
search rebuild after importing the enwiki-20070206 XML dump.
The import loaded 4,500,000 pages (1,500,000 articles) and completed.
When rebuildall.php is invoked, the text search rebuild reports
twice the number of pages (9,500,000+), runs until it reaches that
number, then deadlocks. The thread state is NS (stopped).
Attaching gdb to the process yields no state information.
Jeff
An automated run of parserTests.php showed the following failures:
This is MediaWiki version 1.10alpha (r20281).
Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
3 previously failing test(s) now PASSING! :)
* Blank ref followed by ref with content [Fixed between 08-Mar-2007 08:17:23, 1.10alpha (r20223) and 09-Mar-2007 08:15:17, 1.10alpha (r20281)]
* Regression: non-blank ref "0" followed by ref with content [Fixed between 08-Mar-2007 08:17:23, 1.10alpha (r20223) and 09-Mar-2007 08:15:17, 1.10alpha (r20281)]
* Regression sanity check: non-blank ref "1" followed by ref with content [Fixed between 08-Mar-2007 08:17:23, 1.10alpha (r20223) and 09-Mar-2007 08:15:17, 1.10alpha (r20281)]
18 still FAILING test(s) :(
* URL-encoding in URL functions (single parameter) [Has never passed]
* URL-encoding in URL functions (multiple parameters) [Has never passed]
* TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html) [Has never passed]
* TODO: Link containing double-single-quotes '' (bug 4598) [Has never passed]
* TODO: message transform: <noinclude> in transcluded template (bug 4926) [Has never passed]
* TODO: message transform: <onlyinclude> in transcluded template (bug 4926) [Has never passed]
* BUG 1887, part 2: A <math> with a thumbnail- math enabled [Has never passed]
* TODO: HTML bullet list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML ordered list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML nested bullet list, open tags (bug 5497) [Has never passed]
* TODO: HTML nested ordered list, open tags (bug 5497) [Has never passed]
* TODO: Inline HTML vs wiki block nesting [Has never passed]
* TODO: Mixing markup for italics and bold [Has never passed]
* TODO: 5 quotes, code coverage +1 line [Has never passed]
* TODO: dt/dd/dl test [Has never passed]
* TODO: Images with the "|" character in the comment [Has never passed]
* TODO: Parents of subpages, two levels up, without trailing slash or name. [Has never passed]
* TODO: Parents of subpages, two levels up, with lots of extra trailing slashes. [Has never passed]
Passed 493 of 511 tests (96.48%)... 18 tests failed!
We're looking into the possibility of setting up a planet-style feed
aggregator for Wikimedia blogs. It would be nice to get some software
recommendations. Specifically, we would need an aggregator that
supports
- web-based administration
- filtering feeds based on tags assigned to posts
It should scale well to a large number of blogs & readers. It would also
be nice to be able to form "groups" of blogs, but that is optional.
Any tips & reviews would be appreciated.
--
Peace & Love,
Erik
DISCLAIMER: This message does not represent an official position of
the Wikimedia Foundation or its Board of Trustees.
"An old, rigid civilization is reluctantly dying. Something new, open,
free and exciting is waking up." -- Ming the Mechanic
Jim Wilson <wilson.jim.r(a)gmail.com> wrote:
> I agree wholeheartedly with Jared. A rewrite shouldn't invalidate
> everyone's collective time spent learning the current syntax. I mean,
> that's a whole lot of people's time you could potentially be wasting.
I agree with you on this point. People should definitely still be
able to input and edit articles using the same wikitext markup to
which they've grown accustomed. Increasingly, however, people want
something easier. I know people who find wikitext difficult
and are asking for WYSIWYG. Unfortunately, there's no easy way to use
wikitext as a basis for WYSIWYG. The people who are trying to do it
are going through something like the following:
wikitext -> HTML -> a Javascript WYSIWYG editor
When they save their changes, they have to go back through HTML on
the way to wikitext, and since there's no one-to-one correspondence
between wikitext and HTML, the results are inconsistent. For example,
''word'' and <i>word</i> both come out as italics, so the HTML-to-wikitext
step has to guess which form the editor originally typed. It's hard to
imagine a good fix for this problem, because the only people
interested in working on wikitext-to-HTML conversion are a subset of
the relatively small number who actually write code for MediaWiki.
Moreover, it's a moving target. Wikitext syntax changes every time
someone writes a new parser function, and with the proliferation of
MediaWiki-powered websites outside Wikimedia, it's looking more and
more like a language with numerous dialects rather than a single
consistent markup standard.
I realize that it's ambitious to contemplate putting XML under the
hood of MediaWiki -- just as it was ambitious in its day for Apple to
contemplate putting Unix under the hood of its graphical user
interface. The result, however, was a better, more extensible
operating system. If they hadn't done it, they'd probably have gone
the way of Atari or Amiga.
IMHO, MediaWiki is currently the best software in existence for web-
based wikis, and it has the considerable advantage of serving as the
content management system for Wikipedia, which alone will guarantee
its place in the world for the foreseeable future. However, there are
other competitors in the wings that are getting serious about
WYSIWYG, and MediaWiki might start to look dated if other platforms
manage to offer a significantly more user-friendly experience.
By the way, Jim...I like your RSS extension. I'm probably going to
install it on my own wiki.
--------------------------------
| Sheldon Rampton
| Research director, Center for Media & Democracy (www.prwatch.org)
| Author of books including:
| Friends In Deed: The Story of US-Nicaragua Sister Cities
| Toxic Sludge Is Good For You
| Mad Cow USA
| Trust Us, We're Experts
| Weapons of Mass Deception
| Banana Republicans
| The Best War Ever
--------------------------------
| Subscribe to our free weekly list serve by visiting:
| http://www.prwatch.org/cmd/subscribe_sotd.html
|
| Donate now to support independent, public interest reporting:
| https://secure.groundspring.org/dn/index.php?id=1118
--------------------------------
LibraryThing.com is a (commercial) website that allows users to
catalog their own book collections and compare them to what other
users have. Picking up on my idea of data-mining ISBNs out of
Wikipedia articles, LibraryThing now presents, for each book, a
list of which (English) Wikipedia articles reference that book.
The site's owner Tim Spalding explained how this works in his blog
on February 26,
http://www.librarything.com/blog/2007/02/wikipedia-citatons-with-feed.php
The idea is free for the taking. There is a lot more data to be mined
out of Wikipedia (and its sister projects) and reused in other
contexts. One way to do the data mining is my Extraktor script,
http://meta.wikimedia.org/wiki/User:LA2/Extraktor
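For anyone who wants to experiment without the full script, the core
extraction step can be sketched in a few lines (just an illustration,
not Extraktor itself; the regex is deliberately loose and the sample
text is made up):

<?php
// Sketch: pull ISBN-like references out of a chunk of wikitext.
// This is an illustration of the idea, not the Extraktor script.
$wikitext = "See ''The Mythical Man-Month'', ISBN 0-201-83595-9, for details.";
preg_match_all( '/ISBN[ \t]*((?:97[89][- ]?)?(?:\d[- ]?){9}[\dXx])/', $wikitext, $m );
foreach ( $m[1] as $isbn ) {
    // Strip hyphens and spaces so the same book matches across articles.
    echo str_replace( array( '-', ' ' ), '', $isbn ), "\n";
}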
Note that GFDL must still be respected. But since ISBN extraction
and other kinds of cross-referencing don't copy text or images,
these actions are not subject to copyright. And since the whole
purpose is to link back to Wikipedia, I think there is every
reason to encourage this kind of reuse.
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se