I'm currently implementing PHP 5 exceptions in MediaWiki and I've come
across a design question. How often should exceptions be used?
I've set up an exception handler which is quite capable of formatting
and displaying pretty error messages for the user. The HTML to be
displayed is defined by the exception class. But the question is, how
often should I use them? Say I have the choice between:
function deleteUserInterface( $file ) {
    if ( deleteFile( $file ) ) {
        displaySuccess();
    } else {
        displayError();
    }
}

function deleteFile( $file ) {
    return (bool) unlink( $file );
}

function displayError() {
    // pretty error
}
and...
function deleteUserInterface( $file ) {
    deleteFile( $file );
    displaySuccess();
}

function deleteFile( $file ) {
    if ( !unlink( $file ) ) {
        throw new DeleteError();
    }
}

class DeleteError extends MWException {
    function getHTML() {
        // pretty error
    }
}
Which is better? Should the backend dictate the error format by
specifying an exception class with a fixed format? Of course, a caller
wishing to override the formatting could catch the exception, but is
that better than the old method of using a success/failure return value?
Should error handling be completely exception-dominated? Is there any
role for success/failure return values in a language with exception support?
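For concreteness, a caller that wanted its own formatting in the exception-based
version could catch DeleteError and render something else. A rough sketch only;
the function name and the replacement message below are just placeholders:

function deleteUserInterfaceCustom( $file ) {
    try {
        deleteFile( $file );
        displaySuccess();
    } catch ( DeleteError $e ) {
        // Override the exception's default HTML with the caller's own message.
        echo '<div class="error">Could not delete ' .
            htmlspecialchars( $file ) . '</div>';
    }
}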
-- Tim Starling
Many thanks. Just one more question, please: at which URL should these language/message files, LanguageLmo.php and MessagesLmo.php, be placed?
http://lmo.wikipedia.org/MessageLmo.php or
http://lmo.wikipedia.org/wiki/MessageLmo.php ?
Are they compatible with the existing MediaWiki?
Many thanks.
Sincerely Yours,
Claudi
Hello,
I posted this on commons-l a week ago, apparently to deaf ears.
While browsing in my Special:Preferences today, I noticed that we
apparently support no less than five variants of Chinese:
zh
zh-cn (simplified, as used by the Chinese mainland)
zh-hk (Hong Kong, traditional)
zh-sg (Singapore, simplified)
zh-tw (Taiwan, traditional)
Is there a reason these codes have been set up as country codes rather
than, say, zh-hans and zh-hant? (Han Chinese, simplified/traditional -
these are codes that the Chinese translators on Commons have
favoured.)
Is there thought to be any distinction between zh-sg and zh-cn?
Can the 'automatic converter' thing that zh.wp has be turned on for
the Commons, so we can stop this unnecessary 'translation' between
scripts anyway? (And then we just offer "zh".)
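For context, the converter is basically a script-to-script rewrite driven by
mapping tables. A toy sketch of the idea in PHP, purely illustrative and not
MediaWiki's actual implementation:

// Toy illustration: map a few simplified characters to traditional.
// The real converter uses much larger tables and handles context.
$zhHans2Hant = array(
    '汉' => '漢', // hàn
    '语' => '語', // yǔ
    '简' => '簡', // jiǎn
    '体' => '體', // tǐ
);

function toTraditional( $text, $table ) {
    // strtr() with an array replaces longest keys first, which is enough
    // for a character-level demonstration on UTF-8 byte strings.
    return strtr( $text, $table );
}

echo toTraditional( '汉语简体', $zhHans2Hant ); // prints 漢語簡體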
cheers,
Brianna
en.{wp,wb}|commons|meta:user:pfctdayelise
For Min-Nan, Peh-Oe-Ji (POJ) has been widely used in central and southern Taiwan
for elementary education. I have some relatives who teach as POJ teachers in
elementary schools in those areas.
Also, POJ has been used for writing an encyclopedia on zh-min-nan.wp, see
http://zh-min-nan.wikipedia.org
Min-Nan and Hakka are also widely used in southern China; however, I do not
know whether those Chinese languages have any written forms there.
FYI.
Regards,
H.T.
-----Original Message-----
From: wikitech-l-bounces(a)wikimedia.org
[mailto:wikitech-l-bounces@wikimedia.org] On Behalf Of Mark Williamson
Sent: Saturday, June 10, 2006 2:56 PM
To: Wikimedia developers
Subject: Re: [Wikitech-l] Simplified/traditional Chinese on Commons
And although colloquial writing can be found in every major regional variety
of Chinese, as a popular phenomenon it is limited mostly to Cantonese, Wu,
Minnan, and Hakka for a few reasons: Cantonese and Wu are both the languages
of huge (HUGE) urban centers where people take pride in their local
identity; Minnan and Hakka are used on Taiwan, which doesn't currently have
a rabid government movement to eradicate local varieties so people who want
to develop written forms for the native languages have had quite a bit of
success.
Having said that, most people who write Minnan do not write it in Peh-oe-ji.
There is no single agreed-upon orthography, but most people use a mixture of
Chinese characters and Roman letters (and occasionally Japanese characters
as well). This is not currently practical for writing an encyclopaedia
though, because a huge portion of the words in the language have no
consensus as to what character should be used to write them.
Mark
[Tim Starling wrote] :
> For unexpected code branches which appear to be unreachable through
> ordinary user action, the previous code exited with a backtrace; I
> replaced these with an exception throw to produce similar behaviour.
>
> For the benefit of extensions and other unmigrated code, the old
> interfaces such as OutputPage::errorpage() and wfDebugDieBacktrace()
Can I please ask what the new Exceptions look like in the HTML
output? How will I know one if I see one?
I'm familiar with the format of the old wfDebugDieBacktrace output,
which looked like this:
==========================================
Something went wrong.
Backtrace:
* GlobalFunctions.php line 659 calls wfBacktrace()
* DoStuff.php line 8121 calls wfDebugDieBacktrace()
* Linker.php line 141 calls Linker::DoStuff()
* Linker.php line 1594 calls Linker::Bar()
* Parser.php line 140 calls Linker::Bar()
* Parser.php line 1511 calls Parser::Foo()
* Parser.php line 849 calls Parser::replaceInternalLinks()
* Parser.php line 240 calls Parser::internalParse()
* EditPage.php line 1312 calls Parser::parse()
* EditPage.php line 1163 calls EditPage::getPreviewText()
* EditPage.php line 923 calls EditPage::showPreview()
* EditPage.php line 308 calls EditPage::showEditForm()
* EditPage.php line 148 calls EditPage::edit()
* Wiki.php line 380 calls EditPage::submit()
* Wiki.php line 50 calls MediaWiki::performAction()
* index.php line 136 calls MediaWiki::initialize()
* index.php line 3 calls require()
==========================================
Just wondering what the new Exceptions look like, so that I can grep
for them when testing.
Previously I would grep for the string "wfDebugDieBacktrace", and if
that occurred in the output, it was a pretty good bet that something
went wrong. Could I, for example, now grep the output for the string
"Exception"?
All the best,
Nick.
An automated run of parserTests.php showed the following failures:
Running test Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test Link containing double-single-quotes '' (bug 4598)... FAILED!
Running test message transform: <noinclude> in transcluded template (bug 4926)... FAILED!
Running test message transform: <onlyinclude> in transcluded template (bug 4926)... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test Language converter: output gets cut off unexpectedly (bug 5757)... FAILED!
Running test HTML bullet list, unclosed tags (bug 5497)... FAILED!
Running test HTML ordered list, unclosed tags (bug 5497)... FAILED!
Running test HTML nested bullet list, open tags (bug 5497)... FAILED!
Running test HTML nested ordered list, open tags (bug 5497)... FAILED!
Running test Parsing optional HTML elements (Bug 6171)... FAILED!
Running test Inline HTML vs wiki block nesting... FAILED!
Running test Mixing markup for italics and bold... FAILED!
Passed 391 of 404 tests (96.78%) FAILED!
hello,
i have given root access on the machines:
hemlock.knams.wikimedia.org
zedler.knams.wikimedia.org
(the toolserver servers) to Anders Wegge Jakobsen and Rob Church. hopefully
they will assist with resolving problems quickly.
- river.
Hi all,
need help/answers for following questions:
1. Where can I find source and doc for extension CharInsert?
2. Is it suitable for 1.5.8?
3. Any known problems with it?
We will use it for German, English and Dutch localizations.
THX!
--
I look forward to your reply!
Uwe (Baumbach)
U.Baumbach(a)web.de
Folks,
First, I have to apologize if this has been dealt with on the list before.
Just point me in the right direction. =)
I am doing academic research into Wikipedia and am at a standstill.
I need:
- The complete history of three articles as some sort of text or flat file
- A way to access more history for later research
I tried to download the data dump
(pages-meta-history.xml.7z, http://download.wikimedia.org/enwiki/20060518/enwiki-20060518-pages-meta-hi…)
in the following browsers, with the following results:
- Firefox: download stops at 4 GB
- IE: download is seen only as a corrupt 1.6 GB file
- Opera: download is seen only as a corrupt 1.6 GB file
Is there a way to ftp the file?
I can't get either single-article histories or the whole thing, and I am at a
loss. It shouldn't be this hard, so I think I am missing something.
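For what it's worth, one thing I have not tried is fetching the file outside a
browser entirely. A minimal PHP sketch of a chunked HTTP download, assuming
allow_url_fopen is enabled and the PHP build supports files over 4 GB; the dump
URL below is left as a placeholder:

// Stream a large dump to disk in chunks to avoid in-browser download limits.
$url  = 'http://download.wikimedia.org/enwiki/.../pages-meta-history.xml.7z'; // placeholder
$dest = 'pages-meta-history.xml.7z';

$in  = fopen( $url, 'rb' );
$out = fopen( $dest, 'wb' );
if ( !$in || !$out ) {
    die( "Could not open source or destination\n" );
}
while ( !feof( $in ) ) {
    fwrite( $out, fread( $in, 1024 * 1024 ) ); // 1 MB at a time
}
fclose( $in );
fclose( $out );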
Any help would be appreciated. Thanks!!!
Mark
Mark Bell
http://www.storygeek.com
"The future is here...it's just not widely distributed." - Tim O'Reilly
An automated run of parserTests.php showed the following failures:
Running test Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test Link containing double-single-quotes '' (bug 4598)... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test Language converter: output gets cut off unexpectedly (bug 5757)... FAILED!
Running test HTML bullet list, unclosed tags (bug 5497)... FAILED!
Running test HTML ordered list, unclosed tags (bug 5497)... FAILED!
Running test HTML nested bullet list, open tags (bug 5497)... FAILED!
Running test HTML nested ordered list, open tags (bug 5497)... FAILED!
Running test Parsing optional HTML elements (Bug 6171)... FAILED!
Running test Inline HTML vs wiki block nesting... FAILED!
Running test Mixing markup for italics and bold... FAILED!
Passed 385 of 396 tests (97.22%) FAILED!