Hi,
I am trying to wipe out the images and other uploaded files from my wiki
system.
I have tried to delete them manually from the web browser and then clearing
out the archive table, but that did not seem to work.
Although this deletes the image, there is still a link available from the
uploaded file log....
I've tried wiping out these tables: images, imageslink, oldimages but that
did not seem to work either..
I have about 500 files and it would be nice if there is a way to clean this
out for privacy purposes..
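In case a sketch helps whoever looks at this: below is a rough idea of the
kind of cleanup that might be involved. The table names (image, oldimage,
filearchive, imagelinks), the logging table's 'upload' log type, the
database credentials and the upload path are all my assumptions, not
verified against this installation, so take a backup first.

  <?php
  // Hypothetical cleanup sketch, not an official MediaWiki script:
  // wipe file metadata rows and the upload-log entries that keep the
  // "uploaded file log" link alive. Credentials and table names are
  // assumptions; back up the database before running anything like this.
  $db = new mysqli( 'localhost', 'wikiuser', 'secret', 'wikidb' );
  foreach ( array( 'image', 'oldimage', 'filearchive', 'imagelinks' ) as $t ) {
      $db->query( "DELETE FROM $t" );
  }
  // Entries shown in the upload log live in the logging table.
  $db->query( "DELETE FROM logging WHERE log_type = 'upload'" );
  // The ~500 files themselves sit under the upload directory (e.g.
  // images/) and still have to be removed from disk separately.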
Thanks for your help,
Liz Kim
Hi,
I am trying to create an extension for MediaWiki
and found that when my extension does not compile, the wiki
just hangs and I get a blank screen in the browser.
Is it possible to somehow enable printing PHP
compilation errors to the browser screen for debugging
purposes?
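In case a sketch helps: the usual trick is simply to turn on PHP's own error
display while developing. These are standard PHP settings; whether they
belong in LocalSettings.php or php.ini depends on your setup.

  <?php
  // Near the top of LocalSettings.php (development only): make PHP show
  // parse and fatal errors in the browser instead of a blank page.
  error_reporting( E_ALL );
  ini_set( 'display_errors', 1 );

If the error occurs before LocalSettings.php is even read, the same two
settings can go in php.ini instead.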
Thank you!
Evgeny.
Just an idea
http://blog.netvibes.com/
A blog to announce new and pending features, as well as to give updates on
big network changes or new hardware.
Audience: the public at large.
Wouldn't that be cool?
Ant
Say, I made this template,
[[Category:{{{1}}}]][{{fullurl:Category:{{{1}}}}} {{{1}}}]
( http://radioscanningtw.jidanni.org/index.php?title=Template:C )
as I want categories clickable right where I mention them on a page.
And my site has a thousand categories that have members but no text,
and should stay that way, hence the fullurl to avoid the action=edit.
Of course the categories are repeated again at the bottom of each
page, but there is no __NO_CATEGORY_LIST__ to turn that off.
I suppose my template is OK. Can't think of a better way.
I suppose what I really want is something like the ISBN detection code
that one can set up to match certain regexps and do certain things so
one wouldn't need to use a template at all. And in the rare case when
one doesn't want a match, one could use <nowiki>.
Yes, a general Perl-style filter that would do things to the wikitext
MediaWiki sees, without changing what the user sees,
and all in one or two LocalSettings.php variables.
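Something roughly like that might already be doable with a parser hook from
LocalSettings.php. This is only a sketch, assuming the ParserBeforeStrip
hook exists in this MediaWiki version, and the CAT: pattern is a made-up
example, not an existing convention:

  <?php
  // In LocalSettings.php: run a regexp over the wikitext the parser
  // sees, without changing what is stored in the page. Hook name and
  // pattern are assumptions, not tested.
  $wgHooks['ParserBeforeStrip'][] = 'wfCategoryLinkFilter';
  function wfCategoryLinkFilter( &$parser, &$text, &$stripState ) {
      // Turn "CAT:Foo" into a categorization plus a plain link to the
      // category page, much like the template above.
      $text = preg_replace(
          '/\bCAT:(\w+)/',
          '[[Category:$1]][{{fullurl:Category:$1}} $1]',
          $text
      );
      return true;
  }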
(Hmmm, probably just adding house-of-cards risk to the system.)
Steve Summit wrote:
> And in case no one's made the observation: after just a couple
> of initial hiccups (affecting, not surprisingly, only the
> biggest wikis), it seems to be working very well, with all
> dumps successfully up-to-date, on a cycle of just a few days.
> <http://download.wikimedia.org/> is a very pretty picture now.
> Well done!
Sorry Steve, you are quite mistaken.
I'm probably the one person looking at the XML download progress report most
often, as I wait impatiently for a good moment to run wikistats after the
month has completed.
Dumps for the largest Wikipedias still fail very frequently.
There has not been a useful English dump for months, and maybe only a handful
in a whole year.
Sometimes the dump job reports all is well when it is not (Brion knows this).
I hate to chase Brion because he has a thousand obligations, but the dump
process is still pretty unstable.
http://download.wikimedia.org/enwiki/20060925/ is running now, but
http://download.wikimedia.org/enwiki/20060920/ reports it is still in
progress
http://download.wikimedia.org/enwiki/20060911/ reports on the 7z file all is
OK, but it is 36 MB
http://download.wikimedia.org/enwiki/20060906/ reports on the 7z file all is
OK, but it is 19 MB
http://download.wikimedia.org/enwiki/20060905/ reports on the 7z file all is
OK, but it is 98 bytes
http://download.wikimedia.org/enwiki/20060816/ reports on the 7z file all is
OK and it is 5.1 GB, but I know it is incomplete: it just stops in the middle
of an article (see the check sketched after this list)
http://download.wikimedia.org/enwiki/20060810/ failed
http://download.wikimedia.org/enwiki/20060803/ in progress
http://download.wikimedia.org/enwiki/20060717/ OK
http://download.wikimedia.org/enwiki/20060702/ OK
http://download.wikimedia.org/enwiki/20060619/ in progress
I could go on: 2 or 3 OK in 10 older runs
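(Aside: a crude check for the "reported OK but truncated" case, sketched in
PHP only because that is what is at hand. The filename is the naming I would
expect for the 20060816 run, an assumption, and it needs the 7za and tail
commands on the PATH; a complete XML stream should end with </mediawiki>.)

  <?php
  // Rough completeness check for a 7z dump. Filename is an assumption;
  // requires the 7za and tail commands.
  $file = 'enwiki-20060816-pages-meta-history.xml.7z';
  system( '7za t ' . escapeshellarg( $file ), $rc );
  echo $rc === 0 ? "archive integrity OK\n" : "archive test FAILED\n";
  $tail = shell_exec( '7za e -so ' . escapeshellarg( $file )
      . ' 2>/dev/null | tail -c 64' );
  echo strpos( $tail, '</mediawiki>' ) !== false
      ? "XML stream ends with </mediawiki>\n"
      : "XML stream looks truncated\n";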
Early this year there was no valid en: archive dump for over 4 months.
I proposed doing the largest dumps in incremental steps (say one job per
letter of the alphabet, concatenated at the end), so that a rerun after an
error would be less costly,
but Brion says there are no disk resources for that.
As other people commented, the current situation helps to prevent forks ;)
So again, I fully appreciate that Brion can't be all things to all people.
But please don't suggest the dump process is reliable enough.
Erik Zachte
An automated run of parserTests.php showed the following failures:
Running test TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test TODO: Link containing double-single-quotes '' (bug 4598)... FAILED!
Running test TODO: Template with thumb image (with link in description)... FAILED!
Running test Template infinite loop... FAILED!
Running test TODO: message transform: <noinclude> in transcluded template (bug 4926)... FAILED!
Running test TODO: message transform: <onlyinclude> in transcluded template (bug 4926)... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test TODO: HTML bullet list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML ordered list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML nested bullet list, open tags (bug 5497)... FAILED!
Running test TODO: HTML nested ordered list, open tags (bug 5497)... FAILED!
Running test TODO: Parsing optional HTML elements (Bug 6171)... FAILED!
Running test TODO: Inline HTML vs wiki block nesting... FAILED!
Running test TODO: Mixing markup for italics and bold... FAILED!
Running test TODO: 5 quotes, code coverage +1 line... FAILED!
Running test TODO: HTML Hex character encoding.... FAILED!
Running test TODO: dt/dd/dl test... FAILED!
Passed 412 of 429 tests (96.04%) FAILED!
hello,
at about 13:00 UTC on Sunday the 30th we will be performing some
maintenance on our network during which the site will be offline.
hopefully this won't last longer than a few minutes, although it will
involve a switch reload.
- river.
(see, not all our maintenance is scheduled 5 minutes in advance...)
>
> Mike wrote:
> > ...look at this snippet of XML from enwiki-latest-pages-articles.xml.bz2
> > taken from late August. I don't see any namespace in the <title> elements,
> > ...
> > <title>AaA</title>
> > ...
> > <title>AlgeriA</title>
>
> Those are main namespace (namespace 0) articles, so they don't
> have a prefix. But all the non-main-namespace pages do.
> (The first is <id>724</id><title>Wikipedia:Adding Wikipedia
> articles to Nupedia</title>.)
Thanks! That explains a lot.
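(For anyone else poking at the dump: a rough way to see those prefixes is to
tally <title> elements by whatever comes before the first colon. Just a
sketch; it assumes bzcat is installed, and a colon inside a main-namespace
title will of course be miscounted.)

  <?php
  // Count dump pages per title prefix; a crude namespace breakdown.
  // bzcat and the filename are assumptions about the local setup.
  $counts = array();
  $in = popen( 'bzcat enwiki-latest-pages-articles.xml.bz2', 'r' );
  while ( ( $line = fgets( $in ) ) !== false ) {
      if ( preg_match( '!<title>([^<]+)</title>!', $line, $m ) ) {
          $pos = strpos( $m[1], ':' );
          $ns = ( $pos === false ) ? '(main)' : substr( $m[1], 0, $pos );
          $counts[$ns] = isset( $counts[$ns] ) ? $counts[$ns] + 1 : 1;
      }
  }
  pclose( $in );
  arsort( $counts );
  print_r( $counts );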
Mike O