Hello everybody! I've been working on a bash script to rip the high quality
images from the Google Art Project.
From what I understand, all depictions of the original works go under
PD-Art, even though Google claims otherwise in their FAQ. What I'm wondering
is whether anyone is interested in helping me rip all these images. There
are a lot of images to rip, and each rip takes at least an hour. All you
need is a GNU/Linux OS and some basic knowledge of using the terminal.
--
Beao <http://toolserver.org/~beao/>
Dear All,
We wanted to let you know of three recent APIs we have built for
WikiTrust that may be useful to tool developers. If anyone would like
to build a service on top of these APIs (or has suggestions for other
APIs that we might be able to provide with our data), we would love to
work with you to make it happen.
kind regards,
Luca, Bo, and Ian
VANDALISM API: A request like
http://en.collaborativetrust.com/WikiTrust/RemoteAPI?method=quality&revid=4…
returns the vandalism probability for revision 1234. This is a plain
text floating point number from zero to one, where 1.0 means that the
revision is vandalism.
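A minimal Python sketch of how a client might call this, based only on the description above (the base URL is taken from the example; the exact parameter names are assumed from it). The network call itself is left commented out:

```python
# Sketch of querying the WikiTrust vandalism API as described above.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://en.collaborativetrust.com/WikiTrust/RemoteAPI"

def vandalism_url(revid):
    """Build the request URL for a revision's vandalism probability."""
    return BASE + "?" + urlencode({"method": "quality", "revid": revid})

def parse_probability(body):
    """The response body is a plain-text float from 0.0 to 1.0."""
    p = float(body.strip())
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability out of range: %r" % body)
    return p

# Live request (commented out to keep the sketch self-contained):
# with urlopen(vandalism_url(1234)) as resp:
#     print(parse_probability(resp.read().decode()))
```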
SELECTION API: A request like
http://en.collaborativetrust.com/WikiTrust/RemoteAPI?method=select&pageid=2…
returns a JSON array of objects describing the top-quality recent
revisions of the page. Each entry consists of a revision_id and a few
additional parameters.
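A sketch of consuming that JSON response; only the revision_id field is documented above, so the "score" field in the sample below is a hypothetical stand-in for the additional parameters:

```python
# Sketch of reading the selection API's JSON response described above.
import json
from urllib.parse import urlencode

BASE = "http://en.collaborativetrust.com/WikiTrust/RemoteAPI"

def select_url(pageid):
    """Build the request URL for a page's top-quality recent revisions."""
    return BASE + "?" + urlencode({"method": "select", "pageid": pageid})

def revision_ids(json_body):
    """Extract revision_id from each entry in the JSON array."""
    return [entry["revision_id"] for entry in json.loads(json_body)]

# Hypothetical sample response; "score" is an assumed extra field.
sample = '[{"revision_id": 4512, "score": 0.9}, {"revision_id": 4498, "score": 0.7}]'
print(revision_ids(sample))  # [4512, 4498]
```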
TEXT ORIGIN API: A request like
http://en.collaborativetrust.com/WikiTrust/RemoteAPI?method=wikimarkup&page…
returns a JSON object containing a string, consisting of the WikiTrust
text with some additional markup. The markup consists of tags like:
Since {{#t:10,84893431,Habitual gardner}}its inception in
{{#t:10,86765634,Bassemkhalifa}}1928 the movement
which means that the words “its inception in” were written by
“Habitual gardner”, in revision 86765634 (and have trust 10).
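A small parser sketch for these tags, assuming (from the example above) that each {{#t:trust,revid,author}} tag annotates the text that follows it, up to the next tag:

```python
# Minimal parser for WikiTrust {{#t:trust,revid,author}} markup, under
# the assumption that a tag applies to the text following it.
import re

TAG = re.compile(r"\{\{#t:(\d+),(\d+),([^}]*)\}\}")

def parse_trust(markup):
    """Return a list of (trust, revid, author, text) tuples."""
    out = []
    matches = list(TAG.finditer(markup))
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(markup)
        text = markup[m.end():end]
        out.append((int(m.group(1)), int(m.group(2)), m.group(3), text))
    return out

sample = ("Since {{#t:10,84893431,Habitual gardner}}its inception in "
          "{{#t:10,86765634,Bassemkhalifa}}1928 the movement")
for trust, revid, author, text in parse_trust(sample):
    print(trust, revid, author, repr(text))
```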
---
See http://www.wikitrust.net/vandalism-api for additional information
about these APIs.
Hello all
Has anybody ever used rrdtool on TS through python? Anyone recently?
I am trying to get this working, but I always get:
(process:26600): Pango-CRITICAL **: No modules found:
No builtin or dynamically loaded modules were found.
PangoFc will not work correctly.
This probably means there was an error in the creation of:
'/etc/opt/ts/pango/pango.modules'
You should create this file by running:
pango-querymodules > '/etc/opt/ts/pango/pango.modules'
(process:26600): Pango-WARNING **: failed to choose a font, expect ugly
output. engine-type='PangoRenderFc', script='common'
(process:26600): Pango-WARNING **: failed to choose a font, expect ugly
output. engine-type='PangoRenderFc', script='latin'
Google turns up e.g. TS-706 [1], which deals with similar issues,
but the fix there does not seem to work for me...
The other question is regarding http://toolserver.org/~daniel/stats/
which has not been updated since Sep. 2010; why is this? Was this
'service' shut down?
Any help would be greatly appreciated! Thanks in advance...
Greetings
[1] https://jira.toolserver.org/browse/TS-706
I realized I have to make some character set changes on one of my user
databases, and I want to make a backup before I start. Of course
there are server backups as well, but I view those as an absolute last
resort.
I was planning to use mysqldump for this. Is there a better option
that I'm overlooking?
Is there any reason that the output of 'mysqldump -hSERVER DATABASE'
with no additional options would have problems restoring on the
toolserver databases?
Are there any subtle options that need to be set on mysqldump to make
sure the output is usable in that environment?
- Carl
Hi all!
what's the best way to get around the same-origin policy to fetch data from
the toolserver with an XMLHttpRequest in a Wikipedia gadget? Is there a best
practice, a nice and easy, generally usable method? What would it take to make one?
I know that some gadgets have been doing this, but I also know that it's a bit
tricky. I think a general solution to this, documented somewhere, would make
life a lot easier... does something like this exist? if not, why not?
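One common workaround is JSONP: the toolserver script wraps its JSON output in a caller-supplied callback function, and the gadget loads the URL via a script tag instead of XMLHttpRequest, which the same-origin policy permits. A sketch of the server side in Python; the "callback" parameter name and the name-sanitising rule are assumptions, not an existing toolserver convention:

```python
# Hypothetical JSONP wrapper for a toolserver script: if the request
# carries a ?callback= parameter, the JSON payload is wrapped in that
# function call so an on-wiki <script> tag can consume it.
import json
import re

def jsonp_body(data, callback=None):
    """Serialise data as JSON, optionally wrapped for JSONP."""
    payload = json.dumps(data)
    if callback:
        # Restrict callback names to avoid script injection.
        if not re.match(r"^[A-Za-z_][A-Za-z0-9_.]*$", callback):
            raise ValueError("invalid callback name")
        return "%s(%s);" % (callback, payload)
    return payload

print(jsonp_body({"ok": True}, "myGadget.handle"))
# myGadget.handle({"ok": true});
```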
-- daniel
PS: while i'm at it, is there a wrapper function for XMLHttpRequest in the
standard MediaWiki JS? I don't want to re-invent a sucky wheel :)
Hi,
As I previously posted on wikitech-l, some Wikimedia lists are now
available via NNTP. toolserver-l and toolserver-announce are included
in the initial selection of available lists; if you would like to read
them via NNTP (and, for toolserver-l, post), see
<http://news.tcx.org.uk/wikimedia.html>.
Compared to the GMane interface, this gateway:
* Does not rename lists (all lists are called wikimedia.<list name>),
* Does not rewrite email addresses in posts (which would break PGP
signatures).
- river.