Hi!
After installing MediaWiki and downloading and importing the SQL dump,
I failed to rebuild the link tables with 'rebuildall.php'.
Invoking it from the command line, I get the error message:
> Can't use command-line utils with in-place install yet, sorry.
What might this mean?
Thanks a lot,
Jakob
texvc is able to produce MathML for some input (currently just some
very simple text, not complex equations). When using XHTML output, it's
possible to embed MathML directly in the text such that Mozilla and
maybe Amaya can render it inline:
<math xmlns="http://www.w3.org/1998/Math/MathML">bla bla</math>
Is this a useful thing we should support? Is it worth the trouble?
Good:
* Text can be cut-n-pasted
* Can scale with zoomed font sizes and for printouts
* Inherits user overrides of foreground and background colors
* Transparency with no alpha channel tricks
These would be good for users requiring large font sizes or
high-contrast displays; transparent PNGs that hard-code a black
foreground will be illegible for a user who forces white-on-black text.
All the above points are true of hacky HTML output as well, but hacky
HTML doesn't tend to look that good, particularly for fractions,
radicals, etc., where positioning is hard to do right.
Bad:
* Needs more work on texvc to be useful for non-trivial equations.
* Supported by few browsers. Either a browser detect or an explicit
user option must be used. We'd prefer to only have one canonical
rendering to simplify caching.[1]
* Mozilla complains at the user that they don't have special math fonts
installed, which is rather annoying.
[1] A sick and evil option may be to introduce the MathML at load time
via JavaScript+DOM upon determining that the browser should support it.
As with all such solutions, this won't function with JavaScript disabled
and is thus suboptimal.
-- brion vibber (brion @ pobox.com)
I succeeded in changing the static text for [[MediaWiki:Fromwikipedia]]
and [[MediaWiki:Gnunote]] but if you leave them empty the default text
appears.
Is this expected behavior?
What do I enter so that nothing appears in the respective place on
the served page?
Thanks for the great product, Richard.
There's a lovely little program 'dvipng' which was designed to do
real-time previews of LaTeX in an editor
(http://sf.net/projects/preview-latex/) but is available as a
stand-alone utility as well. This can replace the dvips & convert steps
of our math rendering. In theory it should be faster (I haven't
benchmarked it), but the neat thing is that it can produce output with
a transparent background, which has been asked for for a while.
I've been able to get pretty nice output with this command line:
dvipng -q -D 120 -bg Transparent -T tight -o outputfile.png inputfile.dvi
which produces files about the same dimensions as our current stuff.
They are 8-bit indexed PNG images with solid transparency, which is
compatible with Internet Explorer for Windows without any hacks, and
will still look nice on most light-colored backgrounds. However, they
won't look too great on a dark background, and some browsers may not be
able to print them correctly. There's also a truecolor option that I
haven't tried (which has potential problems of its own).
Attached is a diff to render.ml which uses dvipng instead of
dvips+convert. It would probably be fairly straightforward to make it a
conditional option, but I don't know ocaml. ;)
dvipng requires libgd to be available.
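To make the idea concrete outside of render.ml, here is a rough sketch in Python (not ocaml, which I don't write either) of swapping the dvips+convert steps for the dvipng call quoted above. The function names and the fallback logic are illustrative, not anything that exists in the codebase:

```python
# Sketch: build and run the dvipng command line from the mail,
# falling back when dvipng isn't installed. Names are hypothetical.
import shutil
import subprocess

def dvipng_command(dvi_path, png_path, dpi=120):
    """Build the exact dvipng command line quoted in the mail."""
    return ["dvipng", "-q", "-D", str(dpi),
            "-bg", "Transparent", "-T", "tight",
            "-o", png_path, dvi_path]

def render_math(dvi_path, png_path):
    """Render with dvipng when available; return False otherwise so
    the caller can fall back to the dvips + convert pipeline."""
    if shutil.which("dvipng") is None:
        return False
    return subprocess.run(dvipng_command(dvi_path, png_path)).returncode == 0
```

A conditional option in render.ml would presumably do the equivalent of the `which` check above.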
-- brion vibber (brion @ pobox.com)
WikiHiero is really neat, but only a small number of installations are
likely to enable it, and the large number of small images is rather
burdensome. Aside from the space factor, it *really* slows down CVS
operations to examine them all.
If there's no objection, I'd like to move the 'extensions' directory to
a top-level module, next to 'phase3' instead of inside it, so it can be
checked out separately.
We'd package it for separate download anyway, so this is mainly a
developers' convenience. The extensions folder could then just be
dropped into place and the appropriate switch enabled to turn it on in
an installation.
-- brion vibber (brion @ pobox.com)
The Tokenizer class seems to do a lot of lookahead, which can
potentially fall off the edge of the string. In strict error reporting
mode (E_ALL) this produces a notice-level warning about an out of range
string access. I see that JeLuF has been putting in isset() checks to
suppress the warnings.
Since a read past the end of the string produces a reasonably
well-defined result (i.e., an empty string), I wonder whether it's
better just to suppress the notice with a @ on the expression. Since
the bounds check happens anyway, the extra isset() doesn't _do_ anything.
As a side note: isset() on a past-end-of-string read actually itself
produces a notice-level error message in PHP 4.3.2, although they seem
to have turned it off by 4.3.4.
-- brion vibber (brion @ pobox.com)
Hello all,
This is both news from another wikipedia and something of a
feature request (or the start of a discussion, whatever), hence
sent to both wikipedia-l and wikitech.
---------
NEWS SECTION
Today, the French wikipedia adopted new rules with regard to
sanctions and exclusion (a 93% majority vote).
To put it simply:
Before: the only action toward a problematic user was banning. It
was decided by consensus, with 100% agreement. Two people were
banned this way: Mulot (in August 2002) and Papotages (in November
2003). This naturally became unworkable.
Now: a new policy has been adopted. This is not a final policy, as
several points have to be discussed further, but it outlines the
principles.
This policy is rather different from the english one.
I guess the difference is due to the facts that 1) we are less
numerous and 2) we never had a benevolent dictator :-)
The major differences are these:
* There is no arbitration committee. Decisions are
taken by the full community (with requirements of
number of contributions or length of presence
depending on the decision).
* The policy relies on two clearly identified steps.
The first step is meant to slow down editing by a problematic
user, or to restrict his editing rights to some parts of the
project, for example. It relies on the *agreement* of the user to
respect these rules. The community issues a sort of warning to the
problematic user and asks him to voluntarily respect this
collective warning.
For example, if a user is unable to collaborate on an
article, and starts edit wars on this article all the
time, he may be asked by the community not to edit
this specific article for one month.
The editing restriction is automatically lifted after a month.
If the user does not respect the request issued by the community,
the second step is reached. Similarly, a user who has been issued
repeated warnings and editing restrictions in the first step will
reach the second step.
The second step is restriction of editing by technical means: in
short, temporary or permanent blocking/banning.
# This means that the entire community will be able to express
disagreement to a user, depending on his behavior as an editor.
# Decisions to restrict editing or to ban will not be unilateral
but collective.
# An editing restriction should not necessarily be seen as a
punishment for the user, but more as a warning from the collective
and a request for him to behave differently.
# An editing restriction should be respected by the user himself,
voluntarily. That means the user actively chooses whether or not
to behave within community norms. If he accepts, his full rights
will be reinstated. If he refuses, a vote for banning will be
started.
# The community's answer to problematic behavior is gradual. It
allows room for voluntary behavior improvement and general
forgiveness.
-------
FEATURE REQUEST SECTION
You may note that the second step, restriction of editing by
technical means, is limited. The only dimension on which we may
act is time: banning for one week, one month, forever, etc.
I think it would be nice if technical means allowed blocking
people more selectively, such as blocking on all meta space, or
blocking on one specific article.
In the first case (meta-space blocking), that means we recognise
the right of the user to contribute to the articles themselves,
but we do not welcome them in the community.
In the second case (article blocking), that means we could
selectively prevent a user from editing an article or some set of
articles which are really "hot buttons" for him.
This has been mentioned a couple of times already, as well as edit
throttling, which I think holds interest as well.
I would give 100 wiki-kisses to any developer interested in
working on that :-)
I would also suggest raising funds for this, 'cause I am not sure
I own 100 wiki-kisses. But I promise I am dedicated to granting
and removing sysop and bureaucrat status to give developers more
free time :-)
Anthere
I have written up a short, math-y description of an algorithmic
method for determining whether or not a given revision constitutes
a reversion.
There have been some previous suggestions regarding automatically
determining whether something is a reversion and using that
information in various ways. In addition, it has recently been
suggested to the arbitrators that one possible remedy in the
matter of Wik would be to restrict his ability to perform
reversions.
Please take a look at
http://meta.wikipedia.org/wiki/Determining_reversion
for my proposal for defining and algorithmically determining what
constitutes reversion and how to calculate what I call "likelihood of
reversion". I would suggest that for most purposes any edit with a
likelihood of reversion over 90% could be counted as reversion. Please
leave your comments if you have any suggestions, critiques, etc.
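For those who don't want to click through, the general idea can be sketched like this. Note that the similarity metric below (a difflib ratio against each prior revision) is my own assumption of one plausible definition; the actual formula is the one on the meta page:

```python
# Sketch of "likelihood of reversion": how close is the new revision
# to *some* earlier revision of the page? 1.0 means an exact match
# with an old revision, i.e. a plain revert.
from difflib import SequenceMatcher

def likelihood_of_reversion(new_text, prior_revisions):
    """Return the highest similarity (0.0 to 1.0) between the new
    revision text and any earlier revision of the page."""
    if not prior_revisions:
        return 0.0
    return max(SequenceMatcher(None, old, new_text).ratio()
               for old in prior_revisions)
```

Under the suggested threshold, any edit scoring above 0.9 against some prior revision would be counted as a reversion.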
- David
Hi,
while importing the cur dump today I received this error message:
Error: MySQL server has gone away
This error occurs regardless of whether I pipe the dump through mysql
(the "usual" way), or use a Perl script to import it through DBI (what I
usually use because it gives me a progress meter).
My investigation pinpointed this to one particular line in the cur
dump, which contains an INSERT statement for the page
[[Wikipedia:Upload log]]. The one row of data for this page is
1051409 bytes.
So I'm wondering:
1) Why does that cause the above error message? Am I doing something
wrong?
2) Do we have to have such a huge page? Can we circumvent this by
splitting the page up?
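On question 2, one mechanical workaround would be to split the oversized extended INSERT into several smaller statements before feeding them to the server. A rough sketch (the helper name is invented; the 1 MB default below matches MySQL's historical max_allowed_packet default, but check your server's actual setting):

```python
# Sketch: group VALUES tuples into INSERT statements that each stay
# under a byte budget, so no single statement exceeds the server's
# packet limit.
def split_insert(prefix, rows, max_bytes=1_000_000):
    """prefix is e.g. "INSERT INTO cur VALUES "; rows are the
    individual "(...)" tuples. Returns a list of full statements."""
    statements, batch, size = [], [], len(prefix)
    for row in rows:
        # flush the current batch if adding this row would overflow
        if batch and size + len(row) + 1 > max_bytes:
            statements.append(prefix + ",".join(batch) + ";")
            batch, size = [], len(prefix)
        batch.append(row)
        size += len(row) + 1  # +1 for the comma (or final semicolon)
    if batch:
        statements.append(prefix + ",".join(batch) + ";")
    return statements
```

Of course this only helps if the dump generator (or an import-side filter) applies it; it doesn't explain why the page has to be that big in the first place.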
Thanks,
Timwi