Hello,
I fixed a typo in exporttext (Language.php) and added the export page
phrases to LanguageDe.php.
Greetings Smurf
--
Smurf
smurf(a)AdamAnt.mud.de
------------------------- Anthill inside! ---------------------------
David Friedland wrote:
> Anyhow, it seems that just using the HTML entities for the Unicode IPA
> extensions is not an acceptable solution because it leaves IE users with
> lovely but useless rectangles where there ought to be IPA characters.
> There is a LaTeX extension called TIPA that allows the complete set of
> IPA characters and diacritics. If this were installed into the TeX math
> extensions, then a similar syntax could be used to generate images of
> the IPA from LaTeX input.
> I see the following possible solutions (in the order that I think is
> good):
>
> 1.) Auto-detect the browser and send IPA Unicode to browsers that
> support it and TIPA LaTeX images to those that don't. (Pros: attractive
> display of IPA for all users. Cons: lots of programming)
>
> 2.) Just send TIPA LaTeX images (Pros: attractive display of IPA. Cons:
> Uses images in text when for some users embedded IPA Unicode would look
> better)
>
> 3.) Store the IPA in a special format or in a special tag, auto-detect
> the browser and send IPA Unicode to browsers that support it and SAMPA
> to the rest. (Pros: doesn't require inserting images or using TeX. Cons:
> SAMPA is ugly and hard to read)
>
> 4.) Render IPA into GIFs or PNGs and just insert them as images. (Pros:
> compatible with everything. Cons: time-consuming, and difficult to
> change)
>
> 5.) Devise a Wikipedia-specific pronunciation scheme and just use that
> (blech!) (Pros: no coding required. Cons: YAAHPS (Yet Another Ad Hoc
> Pronunciation Scheme))
>
> 6.) Do nothing and continue to allow people to use ad-hoc pronunciation
> schemes (BLECH!!) (Pros: no action required. Cons: maintains the status
> quo and its harms as described above)
I was just thinking of this problem, and the idea I came up with was to
have an option in user preferences of something like "Display
pronunciations in: o Unicode IPA o SAMPA" and then anything in an
article which begins with "SAMPA " would be detected and displayed
correctly (converting SAMPA to IPA if necessary), similarly to the idea
with the magic ISBNs. I think this is probably the simplest solution to
get working quickly, and it can be easily expanded to include additional
ASCII IPA schemes (there are several) or auto-generated IPA images if
someone implements that. Also, someone using IE who has the correct
fonts installed would be able to see IPA.
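
Something like this is what I have in mind; sampaToIpa() and its little
mapping table below are just stubs I made up for illustration, not
anything that exists in the code:

// A very partial SAMPA-to-IPA mapping, purely for illustration.
function sampaToIpa( $s ) {
    $map = array( '@' => 'ə', 'A' => 'ɑ', 'S' => 'ʃ',
                  'Z' => 'ʒ', 'T' => 'θ', 'D' => 'ð' );
    return strtr( $s, $map );
}

// Rewrite "SAMPA ..." spans according to the user preference.
function renderPronunciations( $text, $userPrefersIpa ) {
    if ( !$userPrefersIpa ) {
        return $text; // leave the SAMPA as the author typed it
    }
    // Crude: grabs to the end of the line; real markup would need a
    // clearer terminator.
    preg_match_all( '/SAMPA ([^<\n]+)/', $text, $m );
    foreach ( $m[1] as $i => $sampa ) {
        $text = str_replace( $m[0][$i], sampaToIpa( $sampa ), $text );
    }
    return $text;
}

The real thing would obviously need a complete mapping and a sane way to
mark where the transcription ends, but the preference check itself is
about that simple.
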
You malign ad hoc pronunciation schemes, but they do have *some*
redeeming value. You can use a single ad-hoc system to represent
different dialects more easily than you can use IPA for the same
purpose, since users will read their own dialect into the pronunciation
guide for the ad-hoc system. Still, I can't imagine making up an ad-hoc
scheme for wikipedia; IPA is probably best for us.
I'm digging around in the code to see how this could be done (and learning
PHP), but in the meantime, any comments?
(Anything more on this should probably go to wikitech-l.)
-- Adam Raizen
Since yesterday, at least the French Wikipedia has been running very slowly.
Now it is unusable for all practical purposes, but two days ago it was, quite
suddenly, running very well.
Is there an explanation for this, or is it just hardware magic?
-- Looxix
Hello,
the same procedure as every time: shortly after getting some
translations into the CVS, new proposals and phrases appear :)
If someone finds some spare time: LanguageDe.php V1.26 awaits the stable
tree.
Greetings, The translator Smurf ;)
--
Smurf
smurf(a)AdamAnt.mud.de
------------------------- Anthill inside! ---------------------------
It fixes some bugs in the last tarball and supports azimuthal projection from the equator and the poles.
One of the problems is that optimal settings for Latin script
and for CJK are very different. I'll probably add a latin/cjk switch.
eu.map is a not-so-bad map of Europe in Polish. Labels for really small countries
(below 100 pixels on the map) are not displayed.
Because both ImageMagick and libart are buggy, this version uses
a hybrid solution where all polygons are painted using libart,
and everything else using ImageMagick. Yuck.
libart is available from Ruby/Gnome2
http://taw.pl.eu.org/~taw/earth.mng (800kB) or
http://taw.pl.eu.org/~taw/earth.gif (2MB) show an animation of a rotating Earth.
Of course equal-area projection is particularly badly suited for that,
but it looks cool anyway ;)
The following things are cached:
* bluemarble
* libart generated polygons
* rivers
Following is a discussion which started on the village pump, but I thought
it would be more appropriate to continue it here, considering the
closely-related discussion which has been going on here.
------------------------------
[Start village pump quote]
Once the full Wikipedia is downloaded, can smaller periodic updates covering
new stuff and changes be obtained and used to synch the local copy? --[[User:Ted
Clayton|Ted Clayton]] 04:26, 13 Sep 2003 (UTC)
:No, you can't. I've been thinking the same thing myself. I think we need
to:
:*Allow incremental updates for all types of download
:*Allow bulk image downloads
:*Package a stripped-down version of the old table in with the cur dumps,
where the revision history (users, times, comments etc.) is included, but
the old text itself is not
:*Develop a method of compressing the old table so that the similarity
between adjacent revisions can be used to full advantage
: -- [[User:Tim Starling|Tim Starling]] 04:38, Sep 13, 2003 (UTC)
Would it be easier to have incremental updates on something like a
subscription basis? The server packages dailies or weeklies and shoots them
out to everyone on the list? During off hours, mass-mail fashion?
Can you suggest sources or search terms for treatments of table manipulation,
as background for the stripping and compressing? --[[User:Ted
Clayton]] 03:14, 14 Sep 2003 (UTC)
[End village pump quote]
-------------------------------
Regarding sending them to a list: do you mean by email? That would depend on
size -- anything more than a couple of megs a week and we'll max out
people's inboxes. I think we'd be better off making a series of patches
available via HTTP, and providing a client tool (probably just a PHP
script) to download the required patches and merge them into the local
database.
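
Just to sketch what I mean by the client tool (the URL, file names and
patch layout here are all invented, nothing like this exists yet):

// Hypothetical patch client: fetch any patches newer than the last one
// applied, pipe each into mysql, and remember how far we got.
$base = 'http://download.wikipedia.org/patches/';   // invented location
$last = file_exists( 'last-patch-applied.txt' )
    ? trim( file_get_contents( 'last-patch-applied.txt' ) )
    : '';
$index = file( $base . 'index.txt',
    FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES );
foreach ( $index as $patch ) {
    // assumes patch names sort chronologically
    if ( $last !== '' && strcmp( $patch, $last ) <= 0 ) {
        continue;
    }
    $sql = gzdecode( file_get_contents( $base . $patch ) );
    $mysql = popen( 'mysql wikidb', 'w' );   // assumes a local wikidb
    fwrite( $mysql, $sql );
    pclose( $mysql );
    file_put_contents( 'last-patch-applied.txt', $patch );
}

Run that from cron and you get the "subscription" behaviour without us
having to mail anything to anyone.
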
Regarding sources: I'm not aware of anyone doing exactly this task before,
although I haven't really looked. I imagine we would just roll our own.
Perhaps dumping the data using the mysql client, then doing some text
processing and finally running it through gzip. Assuming the text processing
can be done fast enough, a PHP script would probably be best for this part
too, for consistency.
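
Something along these lines, maybe, for the dump-and-compress step (the
--where clause and the timestamp handling are just my guess at how we'd
pick out the changed rows):

// Dump only the cur rows touched since the last patch, then gzip them.
// MediaWiki stores timestamps as 14-digit strings (YYYYMMDDHHMMSS).
$since = '20030913000000';
$cmd = 'mysqldump --where=' .
    escapeshellarg( "cur_timestamp >= '$since'" ) .
    ' wikidb cur | gzip > cur-patch.sql.gz';
system( $cmd );
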
We could use the binary log, like Brion suggested. I've been meaning to
reply to that message. We can convert the log to runnable SQL using
"mysqlbinlog". Then we'd have to parse each query to determine what table it
writes to, just like what MySQL slaves do when tables are excluded. It
wouldn't be perfectly efficient, because multiple writes to the same row
would all be included. So a cur dump might have 100 copies of the village
pump in it. But that's better than including 100,000 unchanged articles. If
we get ambitious we can always filter out the redundant writes.
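
Roughly what I have in mind for that filtering (deliberately naive, and
it ignores statements that span multiple lines):

// Convert the binary log to SQL and keep only statements writing to cur.
$log = popen( 'mysqlbinlog binlog.001', 'r' );   // log name is a placeholder
$keep = '';
while ( ( $line = fgets( $log ) ) !== false ) {
    if ( preg_match( '/^(INSERT|UPDATE|DELETE|REPLACE)\b/i', $line )
        && preg_match( '/\bcur\b/i', $line ) ) {
        $keep .= $line;
    }
}
pclose( $log );
file_put_contents( 'cur-patch.sql', $keep );
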
-- Tim Starling <t`starling`physics`unimelb`edu`au>
I've sort of hacked together templates for Tarquin's Paddington and
Montparnasse proposed skins as Smarty templates, and stuffed and prodded
them into the development branch of MediaWiki.
Go on down to test.wikipedia.org, log in, and pick the new skins.
(Warning: very VERY incomplete! Lots of links not showing pretty text
yet.)
Smarty templates get compiled down to PHP scripts, so "in theory" it
shouldn't be slower than writing the interface in raw PHP. The advantage
is in maintainability; I've attached the 4k Paddington template file for
comparison against the current combined 96k of OutputPage and Skin which
contain, amongst many other things, the HTML layout in scattered dribs
and drabs.
To get the Smarty template working, I swapped a chunk of
OutputPage::output() into Skin::outputPage(), so the widest pieces of
page layout could be done by the template.
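
For anyone who hasn't poked at Smarty before, the hand-off is roughly
like this; the variable names are made up, not the ones the Paddington
template actually uses:

// paddington.tpl, schematically:
//   <h1>{$title}</h1>
//   <div id="article">{$bodytext}</div>

require_once 'Smarty.class.php';
$smarty = new Smarty();
// In the real skin these would come from OutputPage; hardcoded here.
$smarty->assign( 'title', 'Main Page' );
$smarty->assign( 'bodytext', '<p>Hello, world.</p>' );
$smarty->display( 'paddington.tpl' );
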
I think it'd be a good thing to separate out the functional groups in
OutputPage and Skin: wikitext->HTML conversion, RecentChanges layout,
history page layout, image upload history layout, HTTP magic, and
full-page layout (at the least!).
Skin discussion:
http://meta.wikipedia.org/wiki/Skins
General info on Smarty:
http://smarty.php.net/
Installation notes: get the Smarty download from the above site (Smarty
is under the LGPL license), untar it, and make sure its "libs" subdirectory
is in your PHP include path. Put the *.tpl files into a "templates"
directory under the wiki script directory (i.e., /w/templates) and also
make a templates_c directory, writable by the web server, where Smarty
stores the PHP scripts compiled from the templates.
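
If you'd rather not mess with the include path, you can also point
Smarty at everything explicitly; the paths below are just examples:

// Explicit Smarty setup; adjust the paths to your install.
require_once '/usr/local/lib/php/Smarty/Smarty.class.php';
$smarty = new Smarty();
$smarty->template_dir = '/var/www/w/templates';    // where the *.tpl files live
$smarty->compile_dir  = '/var/www/w/templates_c';  // must be writable by the web server
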
-- brion vibber (brion @ pobox.com)