I am trying to load cur_table.sql
mysql -u root -p<password> wiki-en < <directory>/20040817_cur_table.sql;
ERROR at line 1035:
I get this error after 1.5 hours. I have tried this three times, with two copies of the SQL file from two different DVDs.
I would like to know whether I can skip the error and execute the rest of the script.
I would also like to know how much of the import has already been done.
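(One possible approach, assuming the standard mysql client options: --force tells mysql to continue executing the script after an SQL error, and counting the rows already loaded into cur gives a rough idea of how far the import got.)
mysql --force -u root -p<password> wiki-en < <directory>/20040817_cur_table.sql
mysql -u root -p<password> wiki-en -e "SELECT COUNT(*) FROM cur"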
If anyone can help please do it.
Thanks,
MM
Dori,
with "different language instances" I mean use of different languages in article texts (e.g. wikipedia) and system messages (languageXY.php).
We want to economize space for language independant parts (for instance images) using a "common area".
--
I look forward to your reply!
Uwe Baumbach
U.Baumbach(a)web.de
> I don't think this is what I'm looking for, in fact I want exactly the opposite.
> I already have the Wikicode in my database, and I want to get rid of the
> Wikicode tags. The raw function returns the code in its original format
> (Wiki tags included) without turning it into HTML, right?
A very, very raw function for doing that is in SearchUpdate,
though its output sucks for presentation. The best I could advise would
be removing the HTML tags from already-parsed wiki text. :)
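For illustration, a crude sketch in that spirit (the regexes are illustrative only, not MediaWiki's actual SearchUpdate code):

<?php
// Very rough wiki-markup stripping, in the spirit of SearchUpdate-style
// preprocessing. Not suitable for presentation either!
function stripWikiMarkup( $text ) {
	// [[Link|label]] -> label, [[Link]] -> Link
	$text = preg_replace( '/\[\[(?:[^|\]]*\|)?([^\]]*)\]\]/', '$1', $text );
	// [http://example.com label] -> label
	$text = preg_replace( '/\[\S+ ([^\]]*)\]/', '$1', $text );
	// '''bold''' and ''italic'' quote runs
	$text = preg_replace( "/'{2,}/", '', $text );
	// == Headings == -> Headings
	$text = preg_replace( '/^=+\s*(.*?)\s*=+\s*$/m', '$1', $text );
	// {{templates}}, then any leftover HTML tags
	$text = preg_replace( '/\{\{[^}]*\}\}/', '', $text );
	return strip_tags( $text );
}
?>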
Cheers,
Domas
For some while we've been using HTML Tidy to do additional correction
and clean-up on output. Normally this is relatively quick compared to
the other database overhead on page edits, though the time can get
quite large for huge pages.
More seriously, forking and spawning an external tidy program can be a
bigger problem when the system's under heavy load.
I've checked into CVS HEAD the ability to use the PECL extension which
exposes an interface to the tidy library in-process. This speeds things
up a bit:
Short page ([[Stuff]], about 2.5k of HTML):
0.010ms no-op
2.891ms internalTidy
8.710ms externalTidy
Long page (a village pump page, 450k+ of HTML):
1.783ms no-op
266.066ms internalTidy
306.098ms externalTidy
Testing on a heavily loaded system the difference can go waaay up!
10 simultaneous tidy test threads:
Short page:
0.010ms no-op
2.736ms internalTidy
213.108ms externalTidy
Long page:
2.343ms no-op
565.822ms internalTidy
5868.871ms externalTidy
Heavy disk seeking (make clean on a GCC build) + 10 simultaneous tidy
test threads:
Short page:
0.010ms no-op
2.637ms internalTidy
928.098ms externalTidy
Long:
2.353ms no-op
4305.380ms internalTidy
6686.658ms externalTidy
This is coded for the PHP 4.3.x version of the extension, and may not
work on PHP5. Once installed ('pear install tidy' and add
'extension=tidy.so' to php.ini) it should automatically be picked up if
you've got $wgUseTidy on.
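For illustration, the in-process path looks roughly like this (a sketch assuming the procedural API of the extension: tidy_parse_string / tidy_clean_repair / tidy_get_output; the exact signatures differ between the PHP 4.3.x and PHP 5 versions, so check your version's docs):

<?php
// Hedged sketch: tidy a page fragment in-process via the PECL tidy
// extension instead of forking an external tidy binary.
tidy_parse_string( $text );
tidy_clean_repair();
$text = tidy_get_output();
?>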
The changes are localized and don't alter the code interface, so I'll
backport this to 1.4 as a performance fix.
-- brion vibber (brion @ pobox.com)
> I am thinking about using CPP program to access data or/and PHP.
> Running Linux since January and still not starting to work with PHP I am
I don't think you need to play too much with non-PHP stuff,
as all the major code is already written. As one of my experiments
I embedded an HTTP server itself into the MediaWiki PHP framework.
It was fast, though it had some issues to solve, like too many
'exit' points. For read-only operations, though, there are not too
many of those. As for the direction of development, I'd go with what
Brion advised: some scripts in maintenance/ that
export data in a form suitable for easy embedding into your software.
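Something along these lines, say (a hypothetical standalone sketch against the cur table, not an existing maintenance/ script; host, database, and credentials are made up):

<?php
// Dump main-namespace titles and text as tab-separated records,
// easy to consume from a C++ program or anything else.
$db = mysql_connect( 'localhost', 'wikiuser', 'secret' );
mysql_select_db( 'wikidb', $db );
$res = mysql_query( 'SELECT cur_title, cur_text FROM cur WHERE cur_namespace = 0', $db );
while ( $row = mysql_fetch_assoc( $res ) ) {
	// Escape tabs, newlines, and backslashes inside the text field.
	echo $row['cur_title'], "\t", addcslashes( $row['cur_text'], "\t\n\\" ), "\n";
}
mysql_close( $db );
?>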
Or even use PHP Gtk :)
Domas
Sure thing. CC'd.
-N
Jimmy Wales wrote:
>Can you send this to wikitech-l?
>
>Noah wrote:
>
>
>>To whom it may concern,
>>
>>I would love to contribute to your Wikipedia project, but I am running a
>>Tor server. However, my Tor server is merely an intermediate node, and
>>does not allow direct egress, nor ingress. I feel that Gadfium has made
>>an error in his parsing of the Tor server lists, as this information (in
>>the form of the rule, "reject *:*") is necessarily published as part of
>>the Tor network.
>>
>>Please forward this information along to the other admins, as there is a
>>programmatic way to block Tor servers that actually allow exit traffic
>>to a particular host and port (such as 80, in the case of Wikipedia).
>>
>>My current Tor server information is available below, and in the
>>automatically published Tor directories.
>>
>>http://serifos.eecs.harvard.edu:8000/cgi-bin/desc.pl?q=spunkybukkake
>>
>>
>>Thanks for your time,
>>
>>-Noah
>>
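(For the curious, the check Noah describes might look roughly like this: a hedged sketch that assumes the accept/reject exit-policy lines published in Tor server descriptors, applies first-match-wins, and ignores address patterns and port ranges for brevity.)

<?php
// Does this descriptor's exit policy allow exits to the given port?
function allowsExitToPort( $descriptor, $port ) {
	foreach ( explode( "\n", $descriptor ) as $line ) {
		if ( preg_match( '/^(accept|reject)\s+\S+:(\S+)/', trim( $line ), $m ) ) {
			if ( $m[2] === '*' || (int)$m[2] == $port ) {
				return $m[1] === 'accept';
			}
		}
	}
	return false; // a pure middleman node ("reject *:*") never allows exits
}
?>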
Hi,
I want to download the English Wikipedia database and process the
data further. The SQL dump is approximately 30G in compressed form. I have
already set up MySQL and imported the small database dumps of other languages. What is the
best and quickest approach to importing the English database into MySQL? My
understanding is that I will have to download all the compressed files from
http://download.wikimedia.org/wikipedia/en/ and cat them, so I can gunzip
them in order to import them into MySQL. Can I import subsets of the
database without downloading all the dumps? If so, where can I find these
files?
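(If the parts are plain gzip files, concatenated gzip streams decompress to the concatenated output, so a pipeline along these lines avoids ever storing the full uncompressed dump on disk; the paths are illustrative:)
cat <directory>/*_table.sql.gz | gunzip | mysql -u root -p<password> wiki-en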
Is there any other utility for converting the SQL dumps to HTML, apart from
the wiki2static.pl script?
I am a new user, and any help or advice will be very welcome.
Thanks,
-Hemali
A request came to me from lb wikipedia.
The "good" user is this one :
http://lb.wikipedia.org/wiki/User%3ACornischong
The "bad" user is this one :
http://lb.wikipedia.org/wiki/User%3ACorni%C2%ADschong
In recent changes, they both appear as Cornischong. But the vandal is in
reality Corni-schong
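A quick way to check a name for it, for instance:

<?php
// Detect the invisible soft hyphen (U+00AD, UTF-8 bytes 0xC2 0xAD).
$name = urldecode( 'Corni%C2%ADschong' );
if ( strpos( $name, "\xC2\xAD" ) !== false ) {
	echo "name contains a soft hyphen\n";
}
?>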
Cornischong is an admin on this wiki; he does not dare block the other
"Cornischong" for fear that he will be blocked as well...
What do you suggest ?
Anthere
Hi,
we plan to upgrade our wiki from 1.3.0Beta1 to 1.4.1.
After that we want to open two other instances with different language support but the "same" content area.
Please help with (links to) comments, suggestions, possible problems, documentation, or anything else relevant to this situation.
I am especially interested in issues concerning configuration, and the upload and use of common data (images!) across the different instances without (?) redundancy.
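As a starting point, one possible setup (a sketch only: it assumes each instance has its own LocalSettings.php and that the standard $wgUploadDirectory / $wgUploadPath settings are pointed at one shared location; the paths are illustrative):

<?php
# In every language instance's LocalSettings.php:
$wgUploadDirectory = '/var/www/wiki-common/images'; # shared filesystem path
$wgUploadPath      = '/wiki-common/images';         # shared URL path
?>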
Being a beginner, I am searching for general information about this.
Thanks for your patience.
Uwe Baumbach
U.Baumbach(a)web.de
I want to process data from the Wikipedia database.
I now have some small MySQL databases to help me understand how I can use them.
I am thinking about using a C++ program and/or PHP to access the data.
I have been running Linux since January and have not yet started working with PHP, so I am looking for some advice.
Thanks,
Mircea