The other is to keep refactoring the PHP codebase (and
it has changed a great deal since you left it, Lee) and, optionally, rewrite
particular hotspots in another language.
The downside of this is for people like me who run an
external (though not public) version of MediaWiki exclusively for
converting parts of it to other formats (in my case, mobile
and handheld formats). If you change the core language MediaWiki is
driven by, you further burden external contributors and supporters of
MediaWiki as a whole.
...unless you're talking about a rewrite _exclusively_ for use
on the Wikipedia/etc. servers, and not for the main
SF.net project as
distributed to the community. I suspect you're not talking about this
approach because that means maintaining two separate (and gradually
diverging) codebases.
In particular, PHP tends to impose an architecture
where each
request is served by an entirely new script invocation: you have to
build any information up from scratch on each hit, and sharing
things like localization tables between invocations is kind of hard.
(The comments below are paraphrased from a conversation I had
about an hour ago with Rasmus Lerdorf):
Isn't this exactly what ICU[1] was developed to solve? ICU
automatically puts its data in shared memory in order to
optimize itself across different processes on the same server.
The goal should be that scalability is pushed out into the
individual layers and doesn't become a factor at the language
level like with Java or ASP.
To be truly scalable, nothing should prevent subsequent
requests from being handled by different physical web servers. If
you want to impose an application-layer dependence on shared
memory on a single physical server, you'll need to do that
yourself.
And no, it is not hard to use shared memory from PHP.
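For instance, here is a minimal sketch of what caching a localization table in SysV shared memory could look like, so that separate PHP request processes on the same machine reuse it instead of rebuilding it on every hit. It assumes the sysvshm extension is enabled; the segment size, variable slot, and the loadMessagesFromDisk() helper are hypothetical, not part of MediaWiki.

```php
<?php
// Sketch: share a localization table across PHP processes on one server
// using SysV shared memory (requires the sysvshm extension).

$key  = ftok(__FILE__, 'L');           // derive an IPC key from this file
$shm  = shm_attach($key, 1024 * 1024); // attach to (or create) a 1 MB segment
$slot = 1;                             // variable slot within the segment

if (shm_has_var($shm, $slot)) {
    // Another process already populated the table; reuse it directly.
    $messages = shm_get_var($shm, $slot);
} else {
    // First hit on this machine: build the table and publish it.
    $messages = loadMessagesFromDisk(); // hypothetical expensive step
    shm_put_var($shm, $slot, $messages);
}

shm_detach($shm);
```

Note this only shares state between processes on a single box, which is exactly the dependence Rasmus warns about above; across multiple web servers you would still need a separate caching layer.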
[1]
http://www-306.ibm.com/software/globalization/icu/index.jsp
David A. Desrosiers
desrod(a)gnu-designs.com
http://gnu-designs.com