Hello,
is there ZWS configuration available to view somewhere?
I'm particularly looking for things like the list and order of default documents (directory indexes) and the content types at the moment, but I think it would be useful to have all configuration settings that directly influence behavior documented somewhere, if they aren't already, e.g. at https://wiki.toolserver.org/view/ZWS/Default_config. If such a page already exists, please point me to it. Thanks.
Kind regards
Danny B.
I'm going to run some simple Python + DjVuLibre routines on the Toolserver
to test the possibility of obtaining a "wikicaptcha", built to be useful for
Wikisource activity.
I'm far from sufficiently skilled to write the whole project, in particular
the final user interface; but I'm not far from implementing something like a
"voluntary wikicaptcha", i.e. selecting controversial OCR interpretations of
words in a DjVu file and presenting their images, extracted from the image
layer, in an HTML form, so that a willing user could upload a "human
interpretation" and fix the DjVu text layer.
While asking whether any of you is interested, I also wonder whether
publishing the Python code of such layman attempts on the Toolserver wiki,
in a subpage of my account or elsewhere, would be wrong, would hurt
Toolserver policies, or would raise security issues. Any Toolserver user
could then take a look if curious, or simply take inspiration to develop
the idea as it IMHO deserves.
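The word-selection step could be sketched in a few lines of Python. This is only an illustration under assumptions: the sample text-layer s-expression, the digit-in-word heuristic, and all names here are made up; a real tool would read the output of `djvused file.djvu -e print-txt` and use a smarter measure of "controversial" (e.g. OCR confidence data).

```python
import re

# Hypothetical sample of a DjVu text layer as printed by
# `djvused -e print-txt`; coordinates and words are invented.
SAMPLE = '''
(page 0 0 2550 3300
  (line 100 3000 1200 3050
    (word 100 3000 260 3050 "Lorem")
    (word 280 3000 420 3050 "ipsurn")
    (word 440 3000 560 3050 "d0lor")))
'''

WORD_RE = re.compile(r'\(word (\d+) (\d+) (\d+) (\d+) "([^"]*)"\)')

def suspicious_words(txt):
    """Return (bounding_box, word) pairs whose text looks like a
    questionable OCR reading. The heuristic here (digits mixed into
    letters) is deliberately naive, just to show the shape of the step."""
    hits = []
    for x1, y1, x2, y2, word in WORD_RE.findall(txt):
        if re.search(r'[A-Za-z]', word) and re.search(r'[0-9]', word):
            hits.append(((int(x1), int(y1), int(x2), int(y2)), word))
    return hits

# Each bounding box could then be cropped out of the DjVu image layer
# (e.g. with ddjvu) and shown in an HTML form for a human to retype.
print(suspicious_words(SAMPLE))  # → [((440, 3000, 560, 3050), 'd0lor')]
```

The returned boxes are exactly what the HTML form would need to present the word images to a volunteer, and the corrected strings could later be written back into the text layer with djvused.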
Alex brollo
Hi,
We're about to apply necessary schema changes for Wikimedia's MediaWiki
1.17 upgrade to the Toolserver databases. We will apply the changes to
one server for each cluster at once; while the primary server is being
updated, user databases will be unavailable, but replication won't be
interrupted.
The exception is s2/s5, which currently only has one server. Databases
on these clusters will not be replicated and might be partially
unavailable during the maintenance.
This was not announced in advance since we only learnt about the changes
about 10 minutes ago. Sorry.
- river.
I know this has come up previously, but I don't think it was ever addressed.
What's the process for updating the design of <http://toolserver.org>? Can
the index file be made to load from the Toolserver wiki (similar to how
www.wikipedia.org works at Meta-Wiki)? Does it require a JIRA ticket to
update the design? Is an updated design even allowed?
My issue is that the page currently looks broken. It looks as though only
some of the content loaded and parts are missing.
Any thoughts or pointers would be appreciated.
MZMcBride
I'd like to install the DjVuLibre binaries into my Toolserver account. Unluckily
my knowledge of Unix is very primitive, approaching nothing.
Is any of you willing to take a look at
http://djvu.sourceforge.net/index.html and tell me whether the Solaris 6
(SPARC) binary package djvulibre-3.5.5-solaris6-sparc.tar.gz
<http://downloads.sourceforge.net/djvu/djvulibre-3.5.5-solaris6-sparc.tar.gz>
is the right one? And is it a good idea to install it on the
Toolserver? Finally, if it is: is there a simple "magic recipe"
to install that package on Unix? I have never installed binaries in a Unix
environment...
Thanks
Alex
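For what it's worth, unpacking a prebuilt tarball into one's own home directory usually looks something like the sketch below. The archive's internal directory layout is an assumption; inspect it with `tar tzf` before relying on the paths, and note that Solaris's own tar may lack the `z` flag, in which case `gzcat pkg.tar.gz | tar xf -` works instead.

```shell
# Sketch of a "no root access needed" install into $HOME.
# The package name and the djvulibre-3.5.5/bin layout are assumptions.
PKG="djvulibre-3.5.5-solaris6-sparc.tar.gz"
PREFIX="$HOME/local"
mkdir -p "$PREFIX"
if [ -f "$PKG" ]; then
    tar xzf "$PKG" -C "$PREFIX"   # or: gzcat "$PKG" | (cd "$PREFIX" && tar xf -)
    # Make the bundled binaries findable (directory name is a guess):
    export PATH="$PREFIX/djvulibre-3.5.5/bin:$PATH"
else
    echo "Download $PKG first, then re-run these commands."
fi
```

Putting the `export PATH=...` line in `~/.profile` would make it permanent for future logins.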
Hi everyone,
For Wiki Loves Monuments
(http://commons.wikimedia.org/wiki/Commons:Wiki_Loves_Monuments) I'm
harvesting information about monuments (cultural heritage) into a
database. These monuments can be found in the database p_erfgoed_p at
sql. The tables are in the form monuments_<countrycode> and there is one
table monuments_all in which all country tables are aggregated.
I hacked up something to show these monuments in Google Earth:
http://toolserver.org/~erfgoed/monuments_test/ but my focus is more on
adding more countries and improving the data. So maybe someone else
feels like being creative with this data?
Maarten
Ps. Source at https://fisheye.toolserver.org/browse/erfgoed/erfgoedbot/
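As a hedged illustration of the layout described above (only the monuments_<countrycode> / monuments_all naming comes from the message; the column names are assumptions), the per-country tables and their aggregation can be mimicked with an in-memory SQLite database:

```python
import sqlite3

# Toy reproduction of the p_erfgoed_p layout: one table per country code,
# plus an aggregate table tagging each row with its country of origin.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE monuments_nl (id TEXT, name TEXT, lat REAL, lon REAL)")
db.execute("CREATE TABLE monuments_be (id TEXT, name TEXT, lat REAL, lon REAL)")
db.execute("INSERT INTO monuments_nl VALUES ('nl-1', 'Rijksmuseum', 52.36, 4.885)")
db.execute("INSERT INTO monuments_be VALUES ('be-1', 'Atomium', 50.895, 4.341)")

# monuments_all aggregates every per-country table:
db.execute("CREATE TABLE monuments_all "
           "(country TEXT, id TEXT, name TEXT, lat REAL, lon REAL)")
for code in ("nl", "be"):
    db.execute("INSERT INTO monuments_all "
               "SELECT ?, id, name, lat, lon FROM monuments_%s" % code, (code,))

rows = db.execute("SELECT country, name FROM monuments_all "
                  "ORDER BY country").fetchall()
print(rows)  # → [('be', 'Atomium'), ('nl', 'Rijksmuseum')]
```

A consumer (a map, a KML generator for Google Earth, etc.) then only ever has to query monuments_all, regardless of how many countries the harvester adds.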
Hi all,
I'm finding myself making calls to the live API on the WMF wikis and
thinking: writing the query from scratch every time I need the data (or
copying it from the bits and pieces in the MediaWiki source code) is
nonsense, since it was already done... in MediaWiki core.
MediaWiki uses the API internally as well in some places (more so in
extensions) via a FauxRequest (which calls the API without a real HTTP
request).
From a server hosting a live MediaWiki site this is very easy: in PHP,
include includes/WebStart.php and make a FauxRequest(); one can make
several requests, all without making an HTTP request. However, from the
Toolserver it's a little trickier, since we're not on the same server.
So I was thinking:
* upload a MediaWiki install (the same version that WMF runs, i.e.
1.16wmf4 or 1.17wmf1)
* make it not publicly accessible (we don't want people actually
browsing the wiki)
* configure it in a special way so that one can use the same code for
any wiki (i.e. a $lang and $family variable of some kind)
Then one can include includes/WebStart.php and use the API (i.e. the
huge library of queries already in MediaWiki core:
action=query&list=categorymembers, using generators and getting
properties, you name it) like this:
<source>
$site = 'wikipedia';
$lang = 'en';
// Loads all common code, including LocalSettings.php.
// LocalSettings contains extra code that checks $site and $lang and
// figures out the correct $wgDBname, $wgDBserver etc., a tiny bit like
// WMF's CommonSettings.php.
require( $mw_root . '/includes/WebStart.php' );
$apiRequest = array(
	'action' => 'query',
	'list' => '...',
	/* etc. */
);
// Call the API internally, without an HTTP request:
$api = new ApiMain( new FauxRequest( $apiRequest ) );
$api->execute();
$data = $api->getResultData();
</source>
This should basically be includable by anyone, so that not everybody
has to re-do it.
E.g. it could live in /home/somebody/wmf-api/includes/WebStart.php,
which would be a checkout of the wmf branch in SVN with (maybe) the
same extensions etc.
This will make it a lot easier to interact with the database when you
need certain information, and it will also keep us from hardcoding
names all the time (which I'm sure happens a lot, and is one of the
reasons some tools break over time when small details change).
I believe some of the toolserver users already have parts of mediawiki
in their home (I imagine stuff like GlobalFunctions.php can be very
handy at times).
Basically I'm asking three things:
* Has this been done already ? If so, we should document this better
as I spent time looking for it but came up empty
* Do we want this ? Are there potential problems here, what do we need
to tackle or fix on our side ?
* Who would want to do this ? (If nobody has plans for this already, I
would like to do this)
--
Krinkle
I posted this message without a descriptive subject line; I apologize. I
only hope that posting it again under a better subject is a good idea.
I'd like to install the DjVuLibre binaries into my Toolserver account. Unluckily
my knowledge of Unix is very primitive, approaching nothing.
Is any of you willing to take a look at
http://djvu.sourceforge.net/index.html and tell me whether the Solaris 6
(SPARC) binary package djvulibre-3.5.5-solaris6-sparc.tar.gz
<http://downloads.sourceforge.net/djvu/djvulibre-3.5.5-solaris6-sparc.tar.gz>
is the right one? And is it a good idea to install it on the
Toolserver? Finally, if it is: is there a simple "magic recipe"
to install that package on the Toolserver? I have never installed binaries in a Unix
environment...
Thanks
Alex
FYI I am in the process of gathering images from the Google Art Project.
I've uploaded a small sample to Commons which you can see here:
http://commons.wikimedia.org/wiki/Category:Google_Art_Project
And one really, really, big image to the Internet Archive (because it's
>100MB):
http://www.archive.org/details/VincentVanGogh-StarryNight-GoogleArtProject
I won't be making any more uploads until I have all the images and there are
artwork templates filled out on Commons for all the ones that will be
uploaded.
--
Derrick Coetzee
User:Dcoetzee
On Sat, Feb 5, 2011 at 4:00 AM, <toolserver-l-request(a)lists.wikimedia.org> wrote:
> I see "NPG, reloaded" coming our way...
>
> Note that Commons already has plenty of images from at least some of
> these museums, e.g.:
> http://commons.wikimedia.org/wiki/Van_Gogh_Museum
>
> For the rest, we could just ask them "now that Google has put images
> online, can we too?", then go and take our own pictures.
>
> Too radical? ;-)
>
> Magnus
>
> [...]
>
> You can take your own pictures if you have
> a 7-gigapixel camera ;-)
> This video shows how Google does it:
> http://www.youtube.com/watch?v=D1EOJr11bvo
>
> Also, even if high-res cameras from gigapan.org become cheaply
> available, you still need time and the right light in the museum.
>
> We still have some techniques on Commons for getting high-resolution
> images from websites:
> http://commons.wikimedia.org/wiki/Help:Zoomable_images
> That seems the right place for documentation.
>
> But I'm not sure; perhaps it's better to ask them in a friendly way, or
> to wait until more images from museums are online. I'm also not sure
> the museums would be happy if we take the pictures and present them in
> a format that can easily be printed out...
>
> Greetings
>
> On 04.02.2011 14:12, Magnus Manske wrote:
> > [...]
> >
> > On Fri, Feb 4, 2011 at 7:15 AM, Beao at Toolserver.org
> > <beao(a)toolserver.org> wrote:
> >> Hello everybody! I've been working on a bash script to rip the high
> quality
> >> images from the Google Art Project.
> >> From what I understand, all depictions of the original works go under
> >> PD-Art, even though Google claims otherwise in their FAQ. What I'm
> wondering
> >> is whether anyone is interested in helping me rip all these images.
> There
> >> are a lot of images to rip, and each rip takes at least an hour. All you
> >> need is a GNU/Linux OS and some basic knowledge of using the terminal.
> >>
> >> --
> >> Beao
>
Hi,
I would like to install Nagios for testing; would that be OK? I am working
on patches for Nagios and would like to monitor free software projects'
services for quality.
It could be used to monitor bots and other tools for status and speed.
thanks,
mike
--
James Michael DuPont
Member of Free Libre Open Source Software Kosova and Albania, flossk.org / flossal.org