As already announced after the last maintenance, the next maintenance will
take place on Wednesday, 7 December between 19:00 and 1:00 UTC.
The roots will collect what they will do until Sunday night. If you
have something for us to do (like a software update), please open a bug report
in JIRA and make sure to add the label "maintaince-window" by Sunday noon.
The roots plan to finish the configuration of Apache by Wednesday as well, but
there will be no switch yet, to give you all some time for testing (I will send
an email with more details when the time is right).
Userpage: [[:w:de:User:DaB.]] — PGP: 2B255885
I keep running into a problem with one of the PHP-based projects I'm
working on. According to Eclipse, my code doesn't have any syntax
errors, but whenever I try to run said code, I get a blank screen
instead of a confirmation message.
I feel like I'd be able to figure out what the problem is, except that no
matter which method I try to turn on error reporting, nothing seems to
work. Is this ability disabled on the Toolserver for some reason, or are
the methods described everywhere on the internet not the way to go in
our particular case?
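For reference, the usual way to force error output from within a script looks like the sketch below (whether the Toolserver's php.ini allows `display_errors` to be overridden at runtime is an assumption, not something the thread confirms):

```php
<?php
// Turn on reporting of all error levels and force errors into the output.
// Caveat: if the blank screen comes from a fatal parse error in this same
// file, these lines never run -- put them in a separate file that is
// included first, or set the directives in php.ini / .htaccess instead.
error_reporting(E_ALL);
ini_set('display_errors', '1');

// From here on, notices, warnings and runtime errors should be visible.
echo "error reporting enabled\n";
```

A blank page despite clean syntax is typically exactly this situation: a runtime fatal error that the server is configured to suppress.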
In a message off-list, Platonides wrote:
> I think pretty much everyone using them would want the last dump, so I
> don't see a problem in keeping world readable just the last two dumps or
> so (I chose the number two in the case someone started using one dump
> and wanted to finish with that, and a new one was published in the
This is incorrect. When I got the idea to analyze external link statistics, I
needed all old dumps I could get, and they took a lot of time to download.
There was neither disk space nor bandwidth on the toolserver, so I had
to use a server of my own. I now have all page.sql.gz and externallinks.sql.gz,
but only I can use them, because they are on my server and not on the toolserver.
These files now take 160 GB, which is a fraction of a 2 TB disk that cost
100 euro to purchase. We're talking disk space at the cost of a lunch.
Limiting the toolserver to what most people would use, we could just
restrict it to dumps of the English and German Wikipedia, since that
is what the majority of users would be interested in. That sort of
thinking will lead you wrong every time.
How hard can it be to get enough disk space on the toolserver? I think
many chapters contribute money to its operation. Is it not enough?
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
currently there is quite a big mess in how dump files are handled. They are located in several places without any system to it; some locations are public, some private, so there are obviously duplicates which eat up space, and their naming differs as well.
Hence I've got this proposal:
I would like to set up a system for the overall handling of dumps, covering their storage, naming/linking and updating.
That would reduce the used space, make it easier to move the entire dump storage (i.e. in the future we could have dedicated HDD(s) for dumps only), ease maintenance, etc.
The design of the storage system, naming and linking is nearly complete; the maintenance scripts work, though they might need tweaking in some cases, and some additional scripts might be necessary.
I think this could be a multi-maintainer project for a couple of people (in case one is not around, another can step in), so is anybody active interested in joining?
---- FOR THOSE WHO USE DUMPS ----
Moving to the new system would mean the following for you:
1) you would have to submit a list of the dumps you use
2) you would have to update your tools to use the shared dumps
For some time during the transition we would keep symlinks at the old locations (instead of the files themselves), but the final step is to have the dumps in one place only.
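The transition step (file moved into the shared store, symlink left behind at the old location) can be sketched like this; all paths below are made-up demo values, not the actual Toolserver layout:

```php
<?php
// Demo of the transition: move a dump into the shared store and leave a
// symlink at the old location so existing tools keep working unchanged.
// All paths here are illustrative only.
$old    = '/tmp/dump-demo/old/enwiki-20111201-page.sql.gz';
$shared = '/tmp/dump-demo/shared/enwiki-20111201-page.sql.gz';

@mkdir(dirname($old), 0777, true);
@mkdir(dirname($shared), 0777, true);
file_put_contents($old, "dump data\n");

rename($old, $shared);   // move the file into the shared store
symlink($shared, $old);  // old path keeps resolving to the shared copy

echo readlink($old), "\n";
```

Tools reading the old path see the same bytes as before, while the file itself exists only once.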
Questions, comments, suggestions?
I've got a couple of questions about CouchDB:
* Do we have it installed and available?
* Is there any documentation (I tried searching the wiki for "couch" - no results)?
* Is anybody using it with PHP? Could you please share an example?
* Could it be installed, please?
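For the PHP question: CouchDB speaks plain HTTP+JSON, so no PHP extension is required. A minimal sketch, assuming a CouchDB instance reachable at localhost:5984 with an existing database named "demo" (host, port and database name are all assumptions for the example):

```php
<?php
// Minimal CouchDB access from PHP via its plain HTTP/JSON API.
// Assumes a CouchDB server at localhost:5984 (example value).
$base = 'http://localhost:5984';

// GET / returns the server's welcome document.
$info = json_decode(file_get_contents($base . '/'), true);
echo $info['couchdb'], "\n"; // the server identifies itself here

// Store a document with PUT (the database "demo" must already exist).
$ctx = stream_context_create(['http' => [
    'method'  => 'PUT',
    'header'  => "Content-Type: application/json\r\n",
    'content' => json_encode(['title' => 'hello']),
]]);
file_get_contents($base . '/demo/mydoc', false, $ctx);
```

Any HTTP client (including curl on the command line) works the same way, which makes CouchDB easy to experiment with even without dedicated bindings.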
I'm kinda new to the Toolserver; I'm currently experimenting with the environment before I begin to write some code, and I have a problem with the database connection.
I read https://wiki.toolserver.org/view/Database_access from top to bottom, then tried a few queries from the wiki, only to get the following error:
ERROR 1045 (28000): Access denied for user 'strainu'@'damiana-bge0.esi.toolserver.org' (using password: YES)
I then tried to run the script from https://wiki.toolserver.org/view/Iterating_over_wikis
The output is as follows:
strainu@willow:~$ bash test.bash &> test.out
strainu@willow:~$ cat test.out | wc -l
1492
strainu@willow:~$ cat test.out | grep ERROR | wc -l
726
So it would appear I can access only 1492 - 2*726 = 40 tables out of 746. I haven't touched .my.cnf in any way, so I'm guessing the password is correct.
Does anybody have any ideas/suggestions on what's going on?
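One quick sanity check is to look at what the [client] section of the option file actually contains, since that is where the mysql client takes the password from. A sketch using PHP's parse_ini_file; the path and contents below are demo values, not real credentials:

```php
<?php
// Sanity-check which user the [client] section of a my.cnf-style file
// would log in as. Path and contents here are demo values only.
$path = '/tmp/demo.my.cnf';
file_put_contents($path, "[client]\nuser = strainu\npassword = secret\n");

$cnf = parse_ini_file($path, true);   // true = keep the sections
echo $cnf['client']['user'], "\n";
```

If the user and password look right, the next suspects are per-cluster grants: an "Access denied" from some servers but not others usually points at permissions rather than at the option file.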
Dario Taraborelli, a Wikimedia Foundation research analyst, wants to
make sure that the Wikimedia data sources and infrastructure that we
provide serve your needs, present and future. So please take this short
survey (see below). Thanks.
-Sumana Harihareswara, WMF volunteer development coordinator
> Dear all,
> apologies for cross-posting.
> I'd like to ask you to take a moment and help us understand how to better serve the research / data mining / mashup developer community with Wikimedia data: http://bit.ly/WikimediaData
> If you are interested in working with our data please take this survey and help us circulate the link among your contacts, or just retweet http://twitter.com/#!/ReaderMeter/status/143843238837620736
the Toolserver is currently unavailable. I did not expect an "ifconfig -a status" to hang the machine, and rebooting failed because the remote console (ILOM) was unable to restart the system.
Someone from the data center in Tampa will go and try to reanimate the server.
I will keep you updated.
Say what? (Sorry this is the kinda stuff I don't understand yet).
FYI: cronie was installed on willow and it has run plenty of times
On 06/12/2011 07:27, Cron Daemon wrote:
> error: failed receiving gdi request response for mid=1 (got syncron message receive timeout error).