Hi @all,
Could someone please tell me how the value of 'bodytext' in the
MonoBook.php template is set?
It seems to reference the variable $mBodytext in SkinTemplate.php
($tpl->setRef( 'bodytext', $out->mBodytext );),
but I can't work out where $mBodytext is filled.
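My best guess so far, from reading includes/OutputPage.php, is that
methods like addHTML() append to it. Roughly (a simplified sketch; I may
be misreading the real code, which takes more parameters and does
caching work):

class OutputPage {
	var $mBodytext = '';

	# Raw HTML accumulates here; the skin later reads it as 'bodytext'.
	function addHTML( $text ) {
		$this->mBodytext .= $text;
	}

	# Wikitext is parsed first, then the resulting HTML is appended.
	function addWikiText( $text ) {
		global $wgParser, $wgTitle;
		$parserOutput = $wgParser->parse( $text, $wgTitle, new ParserOptions() );
		$this->addHTML( $parserOutput->getText() );
	}
}

Is that right, and is OutputPage the only place that writes to it?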
greets & thanks
magggus
>
> Message: 9
> Date: Thu, 28 May 2009 09:23:54 +1000
> From: Steve Bennett <stevagewp(a)gmail.com>
> Subject: Re: [Wikitech-l] new extension for embedded music
> > scores
> To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
> Message-ID:
> <b8ceeef70905271623o5e5cb80ub789ed36a89e48f2(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Tue, May 26, 2009 at 11:49 PM, Birgitte SB <birgitte_sb(a)yahoo.com>
> wrote:
> > That wouldn't be very useful for Wikisource purposes.
> > We need something editable.
>
> I was assuming the user would include the LilyPond source
> along with
> the image. As I did here, for example:
> http://en.wikipedia.org/wiki/File:Chopin_theme_op_28.png
>
> Is there an "official" way to do this? I wasn't sure
> whether Commons
> or Wikisource was the right place? (I ended up at en as I
> wasn't sure
> about the copyright status...in the end I think almost all
> Chopin is
> PD).
>
> Steve
That is not what we are looking to do at Wikisource. We want the music to be editable like any other text. We can already show images of scanned sheet music. I can't quite articulate this well, but here is a sample of the best we can do now:
http://en.wikisource.org/wiki/Oh_How_I_Hate_to_Get_up_in_the_Morning
Birgitte SB
A fresh LocalSettings.php contains
$wgDBTableOptions = "ENGINE=InnoDB, DEFAULT CHARSET=binary";
$wgDBmysql5 = true;
Can I convert the other databases of my wiki family
(http://www.mediawiki.org/wiki/Manual:Wiki_family) so that they can use
those settings too, or must I forever wrap those settings in a
switch( $wgSitename ) {}?
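In other words, would something like the following be safe to run per
table on the old databases (just a guess on my part; 'page' is only an
example table)?

-- untested guess, repeated for each existing table:
ALTER TABLE page ENGINE=InnoDB, CONVERT TO CHARACTER SET binary;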
I found that the structure (mysqldump --no-data) of my old vs. new
databases is vastly different, despite update.php. So different that
the old data (mysqldump --no-create-info) could not be inserted into
the new structure without errors.
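For reference, this is roughly how I compared them (database names are
examples):

$ mysqldump --no-data oldwiki > old-schema.sql
$ mysqldump --no-data newwiki > new-schema.sql
$ diff -u old-schema.sql new-schema.sql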
$ GET http://lists.wikimedia.org/robots.txt |sed 's/^/>/'
># robots.txt for lists.wikimedia.org
>#
># Disabled crawling for several lists 2005-11-26 to
># discourage people from complaining about items they
># post on public mailing lists being the first Google
># search result about them.
Tell them to use http://en.wikipedia.org/wiki/X-No-Archive next time.
Then there would be no need to make all the other people's posts
unsearchable too, and it would even work on all the other sites where
the lists are archived, e.g., gmane.org.
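For reference, that is just a header people add to their own outgoing
posts, e.g.:

X-No-Archive: Yes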
># Disabled for all lists 2006-11-03, now that an internal
># search has been set up using htdig.
I can't find the URL for that.
># Note that list archives remain public.
Public but not searchable?!
>User-agent: *
>Disallow: /pipermail/
--- On Sun, 5/24/09, wikitech-l-request(a)lists.wikimedia.org <wikitech-l-request(a)lists.wikimedia.org> wrote:
>
> Message: 10
> Date: Sun, 24 May 2009 23:29:52 +1000
> From: Steve Bennett <stevagewp(a)gmail.com>
> Subject: Re: [Wikitech-l] new extension for embedded music
> scores
> To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
> Message-ID:
> <b8ceeef70905240629k13792d2ch2a9eef40e3088435(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Sun, Nov 9, 2008 at 5:25 AM, River Tarnell
> <river(a)loreley.flyingparchment.org.uk>
> wrote:
> > http://abc.sourceforge.net/
>
> You know what would be useful? A website that lets you
> input ABC (or
> LilyPond, for that matter) text, and produces an image as
> output.
> Hence avoiding the need to download and install it. Does
> such a thing
> exist?
>
> Steve
>
>
That wouldn't be very useful for Wikisource purposes. We need something editable.
Birgitte SB
I would like to request categorization for the media projects in the bug
tracker. To get a brief idea of the components getting packaged into the
"new-upload" branch check out:
http://www.mediawiki.org/wiki/Media_Projects_Overview
I think the large scope of the code, and the fact that MwEmbed can be
used in self-contained mode, warrants a high-level Product
categorization, something like "MwEmbed: the self-contained jQuery-based
JavaScript library for embedding MediaWiki interfaces".
Then components for the library could (presently) include the following:
* Add Media Wizard
* Firefogg
* Clip Edit
* Embed Video
* Sequence Editor
* Timed Text
* example usage
* js Script-Loader
* Themes and Styles
I also want to report some strangeness with Bugzilla. I sometimes get
the error below when trying to log in (without "restrict to IP" checked),
and I occasionally get time-outs when submitting bugs:
Undef to trick_taint at Bugzilla/Util.pm line 67
Bugzilla::Util::trick_taint('undef') called at
Bugzilla/Auth/Persist/Cookie.pm line 61
Bugzilla::Auth::Persist::Cookie::persist_login('Bugzilla::Auth::Persist::Cookie=ARRAY(0xXXXXXX)',
'Bugzilla::User=HASH(0xXXXXXX)') called at Bugzilla/Auth.pm line 147
Bugzilla::Auth::_handle_login_result('Bugzilla::Auth=ARRAY(0xXXXXXX)',
'HASH(0xXXXXX)', 2) called at Bugzilla/Auth.pm line 92
Bugzilla::Auth::login('Bugzilla::Auth=ARRAY(0xXXXXX)', 2) called at
Bugzilla.pm line 232
Bugzilla::login('Bugzilla', 0) called at
/srv/org/wikimedia/bugzilla/relogin.cgi line 192
peace,
michael
Hello!
You are receiving this email because your project has been selected to
take part in a new effort by the PHP QA Team to make sure that your
project still works with to-be-released PHP versions. With this we hope
to ensure that you are either aware of things that might break, or that
we don't introduce any strange regressions. Through this effort we hope
to build a better relationship between the PHP team and the major
projects.
If you do not want to receive these heads-up emails, please reply to
me personally and I will remove you from the list; but we hope that
you want to actively help us make PHP a better and more stable tool.
The first release candidate of PHP 5.2.10 was just released and can be
downloaded from http://downloads.php.net/ilia/; the win32 binaries are
available at http://windows.php.net/qa/. Please try this release
candidate against your code and let us know should you find any
regressions. The goal is to have 5.2.10 out within two to three weeks'
time, so timely testing would be extremely helpful.
In case you think that other projects should also receive these kinds
of emails, please let me know privately, and I will add them to the
list of projects to contact.
Best Regards,
Ilia Alshanetsky
5.2 Release Master
Hi,
This may be a bit obvious, but I don't have much experience in this
area. The SQL dumps provided at http://download.wikimedia.org do not
specify a "DEFAULT CHARSET" for the respective tables. When installing
MediaWiki, it seems to be recommended to use the binary charset. I would
like to know how to import one of these dumps into a table with the
binary charset.
Right now I import on the command line, e.g.:
mysql wikidb < enwiki-20090306-pagelinks.sql
This results in the corresponding table being dropped and then
recreated. The problem is that the newly created table does not have
"DEFAULT CHARSET" set to binary, because the SQL dumps do not specify
it.
I first attempted to modify my.cnf to set the default charset to binary
for new tables, with the following changes:
[client]
default-character-set=binary
[mysqld]
default-character-set=binary
default-collation=binary
character-set-server=binary
collation-server=binary
init-connect='SET NAMES binary'
I restarted the server, but I found that new tables still get created
in UTF-8, not binary.
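For what it's worth, the settings that actually took effect can be
checked from the MySQL prompt (this is standard MySQL, so it should
apply here):

mysql> SHOW VARIABLES LIKE 'character_set%';
mysql> SHOW VARIABLES LIKE 'collation%';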
I then attempted to edit the SQL file itself, i.e. to replace the line
) TYPE=InnoDB;
with
) TYPE=InnoDB DEFAULT CHARSET=binary;
This works, in the sense that the new table now gets created in binary.
However, I think I am making mistakes in editing the file. These files
are rather large, so I wrote code in Perl, and again in Java, to do the
editing. Both manage the above substitution, but I am not entirely
confident about their UTF-8 handling. The problem appears when I try to
import the modified files: I get a "Duplicate entry" error, e.g. for
enwiki-20090306-pagelinks.sql:
ERROR 1062 (23000) at line 1359: Duplicate entry
'1198132-2-Gangleri/tests/links/�' for key 1
I would like to add that importing this file as UTF-8 results in this
"Duplicate entry" error coming much earlier in the input file.
So, what is the correct way of importing these SQL dumps so that they
end up in binary tables? If my description above is not clear, please
let me know and I will try to explain again.
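For reference, I am now wondering whether a purely byte-oriented
substitution would sidestep the UTF-8 question entirely; with LC_ALL=C,
sed copies all non-matching bytes through untouched (untested on the
full dump):

$ LC_ALL=C sed 's/) TYPE=InnoDB;/) TYPE=InnoDB DEFAULT CHARSET=binary;/' \
    enwiki-20090306-pagelinks.sql | mysql wikidb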
Thanks a lot,
O. O.
P.S. I am running MediaWiki/MySQL under Ubuntu. I hope UTF-8 is handled
correctly by the command-line shell, but I don't know how to check that.
Hi,
I hope this is the right list to post this email - otherwise I would
appreciate being directed to the right one.
In short, I would like to propose a project for opening the search box
to external entities. My main motivation is shared by many researchers
in interactive information retrieval (IIR): in order to run experiments
on new IIR techniques, it is necessary to evaluate them, and hence to
have enough users to try the new approaches. It is possible to simulate
or to do small-scale experiments, but validating such approaches
requires a much larger user base.
My proposal would be to include a third option below the search box:
using an external search engine that communicates with Wikipedia to
provide the search results. The communication protocol would allow
Wikipedia to control what is happening, in order to avoid problems
(from latency to spam). The search box would let a user pick either a
"random" search engine or one set in the preferences.
I would suggest that the randomness not be entirely random, in the
sense that it should favour good search engines over bad ones; hence
the title "Darwinian search". That would improve the quality of the
search box over time, while stimulating research in my area.
I think it would also be beneficial for Wikipedia, since
1) it distributes the search load to other back ends;
2) it would improve search quality (and may change the way people use
Wikipedia), and may be adopted as a default by Wikipedia in the longer
term;
3) it does not cost much: once the API and the main means of ensuring
quality are set up, the system will run by itself.
The title contains "Darwinian" because of the process by which search
engine back ends would be selected, and because we are celebrating the
200th anniversary of Charles Darwin's birth.
I will not elaborate further here, since I first want to know whether
there is any interest.
Best regards,
Benjamin Piwowarski (University of Glasgow, UK)
Hi!
I'm trying to set up a Wikipedia mirror from the dumps; I have already
written to this list about it: most of the images are missing. Marcus
Buck suggested that I also import Wikimedia Commons. I downloaded both
dumps and tried to merge them into one database, but some pages exist
in both wikis, so there are duplicate key errors. How is this supposed
to be done? What is the setup of the official Wikipedias?
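For context, my current guess is that one is supposed to keep Commons
in its own database and point the wiki at it as a foreign file
repository, rather than merging the dumps. Perhaps something like this
in LocalSettings.php (the database name and paths are my assumptions)?

$wgForeignFileRepos[] = array(
	'class'          => 'ForeignDBRepo',
	'name'           => 'commons',
	'dbType'         => $wgDBtype,
	'dbServer'       => $wgDBserver,
	'dbUser'         => $wgDBuser,
	'dbPassword'     => $wgDBpassword,
	'dbFlags'        => DBO_DEFAULT,
	'dbName'         => 'commonswiki',         # assumed name of the imported Commons DB
	'tablePrefix'    => '',
	'hasSharedCache' => false,
	'directory'      => '/srv/images/commons', # assumed path to the Commons image files
);

Is that how it is meant to work, or do the official Wikipedias do
something else?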
Thanks for your help!
Kind regards,
Christian Reitwießner