Note: this has nothing to do with the previous debate over layout/logo
policies etc., and I am currently the only person giving feedback to
users on these problems.
The following bugs and support issues were reported by users and
verified (most have to do with right-to-left issues; not all are
solvable):
* In Mozilla (only), indentation (:), bullets (*) and definition lists
don't display properly (they are stuck to the right, i.e. not
structured properly) -- probably due to Mozilla's partial RTL support;
apparently not solvable.
* In IE, after visiting a special page (statistics, orphans, etc.) and
pressing "Back" and then "Forward", the browser doesn't return to the
special page. The URL seems to be broken (probably bad encoding) -- I
don't know if this is solvable. (I know, IE sucks, but it works best
with the site because of the bug above.) :-)
* TeX markup appears written backwards when rendered as HTML. --
*Highest priority to fix*. Either <div dir=ltr> should be added around
all formulas, or TeX should always be rendered as PNG (I already tell
users to switch to that option in their preferences, but there is
currently no solution for anonymous users short of changing the
defaults).
* With the sidebar on the left (for Hebrew), Konqueror shows pages much
wider than necessary. -- I don't know about this one; I don't use
Linux, but it may be a browser support issue. The user who reported it
said it is because the site doesn't use valid W3C HTML (I don't know
whether that's true).
* Enhanced recent changes displays left-to-right, and the small icons
for expanding/collapsing don't show (i.e. it is not usable right now)
-- not urgent to fix, but may be easy.
That's about it. The most urgent issue is the TeX problem (formulas
should always be rendered as PNG for both anonymous and registered
users). Fixing the IE "Back"/"Forward" problem would also be nice (if
possible).
Cheers,
Rotem
On May 30, user Hooft moved [[nl:Huidziekten]] to [[nl:Dermatologie]].
However, in the database this move is for some reason dated December
28, a time when Hooft had not yet started working on Wikipedia.
My questions:
* Is this a known bug, and if so, is its cause known?
* Has it been or will it be corrected?
* Could this one case be changed by hand?
Andre Engels
There seems to be consensus to apply that.
Warning to certain people: some Polish Wikipedians want to change the
default skin too, and there is nothing non-Polish Wikipedians can do
to stop it, should consensus emerge among Polish Wikipedians.
> I have written a Perl script that parses the SQL dumps (cur & old)
> and generates an HTML file, containing lots of info about
> Wikipedians, articles and the database for each month
> since the project started:
> Please note that the script produces historical growth figures
> per Wikipedia based on the
** new (link) counting system **
> right from the first month.
>>> Now a report for the English WP is available as well <<<
The script parses 6 GB of data in 33 minutes on my 1.2 GHz PC, which
imho is not too bad.
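For readers unfamiliar with the link-based counting system, the
criterion can be sketched roughly like this (a hypothetical Python
fragment, not Erik's actual Perl code; the namespace and redirect
filters are simplifying assumptions):

```python
import re

def is_countable_article(title, text):
    """Sketch of a link-based counting criterion: a page counts as an
    article only if it is in the main namespace, is not a redirect,
    and contains at least one internal [[link]].  The namespace and
    redirect checks here are simplifying assumptions."""
    if ":" in title:                                # crude namespace filter
        return False
    if text.lstrip().lower().startswith("#redirect"):
        return False
    return re.search(r"\[\[[^\]]+\]\]", text) is not None

print(is_countable_article("Amsterdam", "A city in the [[Netherlands]]."))  # True
print(is_countable_article("Stub", "No links here yet."))                   # False
```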
http://members.chello.nl/epzachte/Wikipedia/Statistics
I propose to run this script weekly on the new SQL dumps for all WPs
and put the resulting HTML files in a public folder.
ToDo:
* Unicode support
* prepare a consolidated report for all Wikipedias
* prepare a CSV file, e.g. for import into Excel (for graphics)
Erik Zachte
To Erik Moeller:
Thanks for splitting the file. I have downloaded and reassembled
everything without problems.
To TimWi:
> You should use a download manager that can resume your download in
> case of an error. Try, for example, http://www.getright.com/ (but
> that's just an example, I don't favour any particular one).
I'm not talking about interrupted downloads, TimWi. As I said, I
downloaded the same 100 MB file 8 times. All downloads ran to
completion, and all 8 files have the same byte count. Yet 4 have the
correct MD5 checksum and 4 do not, each one a different wrong value. I
have experienced problems like this before, hence the test.
I'm not an expert on this, but might the TCP/IP internal checksum be
so small that every once in a while a transmission error by chance
produces a correct checksum for the garbled segment? Such a rare event
would only be noticed on huge transfers. If one in 400,000 small HTML
GETs failed, nobody would notice; but if one file is sent as 400,000
small packets, it will be noticed in this all-or-nothing scenario.
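To put rough numbers on that hunch (a back-of-the-envelope sketch; the
segment size and corruption rate are illustrative assumptions, not
measurements):

```python
# Back-of-the-envelope: the TCP checksum is only 16 bits, so a randomly
# garbled segment still passes the check with probability about 1/65536.
file_size  = 100 * 1024 * 1024          # the 100 MB file in question
segment    = 1460                       # typical TCP payload size (assumed)
p_corrupt  = 1e-5                       # assumed per-segment corruption rate
segments   = file_size // segment
undetected = segments * p_corrupt / 65536
print(segments)    # number of segments per transfer
print(undetected)  # expected undetected-error segments per transfer
```

Even tiny per-transfer probabilities add up over many large downloads,
which would be consistent with corruption showing up only on these
100 MB files and never on ordinary page views.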
Erik Zachte
Summary in English follows below. This is from the German list.
Timwi wrote:
> [[Heilige[s|n] Römische[s|n] Reich[|es] Deutscher Nation]]
> or
> [[Heiliges^n Römisches^n Reich ^es Deutscher Nation]]
> or something like that.
or simply [[Heiligen Römischen Reiches Deutscher Nation]] and a bit of
"fuzzy matching" against the list of existing articles?
Such an algorithm could also be useful to the Swedish and Danish
Wikipedias. Here is a proposal:
1. When a bracket link has no direct match (there is no article
"Heiligen Römischen Reiches..."),
2. and the bracket link consists of three or more words,
3. replace the last two characters of each word in the link with ".*"
(or SQL "%"). ("Heiligen Römischen Reiches" becomes the regexp
"Heilig.* Römisch.* Reich.*" or SQL "Heilig% Römisch% Reich%")
4. If the search pattern matches exactly _one_ article heading,
link to that article automatically.
SUMMARY IN ENGLISH:
The German language has a problem with making wiki links from phrases
where word endings need to change to make the link text fit in a
sentence, something like "calf -> calves", but on a much greater
scale. For example an article heading might be "Heiliges Römisches
Reich Deutscher Nation" (the Holy Roman Empire of German Nationality)
but in a typical phrase, where in English you can simply write
This was a typical property of the [[Holy Roman Empire of ...]]
where the article heading appears unmodified as the link text.
But the German text would have to be:
Das war eine typische Eigenschaft des
[[Heiliges Römisches Reich Deutscher Nation |
Heiligen Römischen Reiches Deutscher Nation]]
(note the three changed word endings)
In German, these different word endings are never (?) longer than the
last two characters of a word, which made me suggest the following
algorithm, from which I think the Swedish and Danish Wikipedias could
also benefit:
1. When a bracket link doesn't have a direct match,
2. and the bracket link consists of three or more words,
3. replace the last two characters of each word in the link text with
".*" (or SQL "%").
4. If this search pattern matches exactly *one* article heading,
make a link directly to that article.
This would make it possible to write [[Heiligen Römischen Reiches
Deutscher Nation]] without the pipe character and the full base form:
the WikiToHtml conversion would not find a direct match (1), but since
the link contains more than two words (2), it would try a search for
"Heilig% Römisch% Reich% Deutsch% Nati%" (3) and thus find the correct
article to link to (4).
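The four steps could be sketched like this in Python (a hypothetical
illustration against an in-memory list of titles, not actual wiki
code; "\S*" stands in for the per-word ".*" so a stem cannot match
across a word boundary):

```python
import re

def fuzzy_link_target(link_text, all_titles):
    """Steps 1-4 from above: fall back to a stemmed match when a
    bracket link has no exact target and has three or more words."""
    if link_text in all_titles:              # 1. a direct match wins
        return link_text
    words = link_text.split()
    if len(words) < 3:                       # 2. only for longer phrases
        return None
    # 3. drop the last two characters of each word; "\S*" plays the
    #    role of ".*" but stays within a single word
    pattern = r"\s".join(re.escape(w[:-2]) + r"\S*" for w in words)
    matches = [t for t in all_titles if re.fullmatch(pattern, t)]
    return matches[0] if len(matches) == 1 else None   # 4. must be unique

titles = ["Heiliges Römisches Reich Deutscher Nation"]
print(fuzzy_link_target("Heiligen Römischen Reiches Deutscher Nation", titles))
# → Heiliges Römisches Reich Deutscher Nation
```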
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se/
Five other people and I are currently planning to write an API for
wiki[pedia] in Python, and I was wondering if I could enlist the help
of anyone on this list in getting the project off the ground.

First of all, I would like to request developer access for myself
personally on the SourceForge project (my username is mdbecker). I
would like to get more familiar with the code, and possibly contribute
to it in the near future. Secondly, I was wondering if one of the
current SourceForge developers could make this a sub-project, or help
me make it a separate project. I would also like to know if anyone
would like to join the project.

We will start by writing some simple scripts to automate some tasks
(such as greeting new users). Once we feel comfortable interacting
with the wiki through scripts, we will outline the API and start work
on it.

Additionally, LDC offered the use of a test server in the past. Could
he, or someone else in the know, tell me how to use that machine?
Since I don't expect all of the developers to have access to a UNIX
system (which will probably be our primary dev environment), we will
also be looking for a place to actually test-run our scripts. I hear
SourceForge has "computing farms". Is it hard to get access to these?
Are they UNIX systems?
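To give a taste of the kind of "simple script" meant above, here is a
hypothetical sketch (the log format and greeting wording are invented
for illustration; real scripts would of course talk to the live wiki):

```python
import re

def greetings_for(new_user_log):
    """Extract usernames from a 'new users' log in wiki markup and
    build a welcome message for each.  The log format and the
    greeting text are invented for this sketch."""
    users = re.findall(r"\[\[User:([^\]|]+)\]\]", new_user_log)
    return [(u, "Welcome to Wikipedia, %s!" % u) for u in users]

log = "* 12:01 [[User:Alice]] (new)\n* 12:05 [[User:Bob]] (new)\n"
for user, message in greetings_for(log):
    print(user, "->", message)
```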
Thanks for your help
--
Michael Becker
Wikipedia is down, returning:
Could not connect to DB on 130.94.122.197
Host 'larousse.wikipedia.org' is blocked because of many connection errors.
Unblock with 'mysqladmin flush-hosts'
Hi, since I can't nag Brion, can some developer please apply this
file: http://meta.wikipedia.org/wiki/LanguageHe.php to this site:
http://he.wikipedia.org/ ? Thanks.
BTW, the Hebrew Wikipedia was started on 9 July 2003 and already has
~70 articles (about 60 of them created today ;-) after it was
mentioned on an open-source-related blog).
-- Rotem
Someone on the German mailing list reported a problem with Cologne Blue
and Opera. I was able to fix the problem by making the following changes:
- in cologneblue.css, change the line
#content { position: absolute; top: 0; margin: 0; padding: 0; }
by removing the "position: absolute;", and
- in the actual page HTML, change the line beginning with
<div id='topbar'><table width='98%' border=0 cellspacing=0
by replacing 98% with 100%.
and the page still appears to display fine in Mozilla. Could someone
else please test these changes on a wider range of browsers?
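For clarity, here is the stylesheet change as a before/after
(reconstructed from the description above; the separate HTML change
just widens the topbar table from 98% to 100%):

```css
/* cologneblue.css, before: */
#content { position: absolute; top: 0; margin: 0; padding: 0; }

/* after (absolute positioning removed): */
#content { top: 0; margin: 0; padding: 0; }
```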
Should I commit these changes to CVS then?
Thanks,
Timwi