Hi,
I have just joined. I am from Mumbai, India. I would like to get the
articles translated into Marathi, my mother tongue. Looking at the effort
required and the number of volunteers, this will not be usable in any
reasonable amount of time.
That has made me think of an alternative: machine translation. A
state-funded institute has software available, but I don't have access
to it yet.
Please comment on this approach. Has this been tried for any other
language before?
Thanks & regards,
Prasad Gadgil
Dear Wikipedia-Wizards,
We are a group of four researchers building an extension for
Wikipedia, called the "Semantic Wikipedia", which is technically a
MediaWiki extension.
The project is described here:
http://meta.wikimedia.org/wiki/Semantic_MediaWiki
and it can be tried as a demo here:
http://wiki.ontoworld.org
As a short summary, it allows users to type links, which yields
semantic metadata (page name, link type, link target). In a similar
fashion we allow for the annotation of attributes. If this project is
deployed on Wikipedia, a huge amount of machine-processable data could
be generated. We will provide an RDF export per page and a SPARQL query
endpoint for the whole Semantic Wikipedia (SPARQL is like SQL, but
better adapted to the data model of RDF, a building block of the
semantic web).
Currently, we have two problems and would be glad if you could help us:
1.
The tool stack in the semantic web community is mainly built on
Java. For C, there is only one "triple store" (which is needed for
efficient RDF storage & querying). The only candidate we have,
"3store", is not very mature, but many Java stores are. Especially
the open-source system "Sesame" (openrdf.org) would be our choice
for the implementation. But, as far as I understand Wikipedia, Java is
not open source enough, as there is no open source implementation of
Java itself? Is this true or just a rumor?
2.
Syntax. We had to extend the syntax slightly to enable annotations
of links and data values. For relations, we have currently settled on
[[link type::link target|optional alternate label]]
Sample, on page "London": ... is in [[located in::England]] ...
Renders as: ... is in England ... (England linked)
For attributes, we use
[[attribute type:=data value with unit|optional alternate label]]
Sample, on page "London": ... rains on [[rain:=234 days/year]] ...
Renders as: ... rains on 234 days/year (nothing linked)
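To illustrate the mechanics, here is a minimal, hypothetical sketch
(not our extension code; function and variable names are invented) of
how such annotations could be pulled out of wikitext into
(subject, predicate, object) triples, with the page name as subject:

<?php
// Hypothetical illustration only: NOT the Semantic MediaWiki code.
// Extracts [[relation::target|label]] and [[attribute:=value|label]]
// annotations from wikitext and returns them as (subject, predicate,
// object) triples, with the page name as subject.
function extractAnnotations( $pageName, $text ) {
    $triples = array();

    // Typed links (relations): [[located in::England|optional label]]
    if ( preg_match_all( '/\[\[([^:\]|]+)::([^\]|]+)(?:\|[^\]]*)?\]\]/',
            $text, $m, PREG_SET_ORDER ) ) {
        foreach ( $m as $match ) {
            $triples[] = array( $pageName, trim( $match[1] ), trim( $match[2] ) );
        }
    }

    // Attributes: [[rain:=234 days/year|optional label]]
    if ( preg_match_all( '/\[\[([^:\]|]+):=([^\]|]+)(?:\|[^\]]*)?\]\]/',
            $text, $m, PREG_SET_ORDER ) ) {
        foreach ( $m as $match ) {
            $triples[] = array( $pageName, trim( $match[1] ), trim( $match[2] ) );
        }
    }

    return $triples;
}

// Example, on page "London":
$triples = extractAnnotations( 'London',
    '... is in [[located in::England]] ... rains on [[rain:=234 days/year]] ...' );
// yields ("London", "located in", "England") and ("London", "rain", "234 days/year")

The real extension does considerably more (data values with units, the
RDF export), but the triple extraction above is the core idea.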
For a full explanation of why and what we are trying to do,
you can also have a look at a paper which we wrote for a
conference:
http://www.aifb.uni-karlsruhe.de/Publikationen/showPublikation_english?publ…
BTW: I promised Jimmy (in San Diego) that I would explain to him what
the semantic web is. I am still working on that :-)
Thanks a lot in advance,
Kind regards,
Max Völkel
--
Dipl.-Inform. Max Völkel
University of Karlsruhe, AIFB, Knowledge Management Group
mvo(a)aifb.uni-karlsruhe.de +49 721 608-4754 www.xam.de
Hello,
We currently have Ganglia, which gives useful reporting about server
status. What about notifications when something is about to go wrong
(disk usage at 95%, lots of errors in memcached, too many slow queries,
a server suddenly swapping ...)?
Kate wrote servmon, which gave (gives?) useful information about the
servers, but I am personally too lazy to hack on that.
Recently I finally found a job; part of my tasks is to set up a
monitoring tool. My choice? Nagios. It's an open source monitoring tool
that I set up on larousse some months ago. I asked avar and mark for
their thoughts about having a monitoring tool; their answer was: sure!
So let's start with Nagios.
Nagios is still on larousse, although it is not running at the moment. I
could easily upgrade it to the latest version (2.0b4) and tweak the
config files to add the new servers (something like 60+ new friends).
We will have to choose a server to run Nagios on. Larousse seems to be a
good choice as it is mostly idling, serves pages for
http://noc.wikimedia.org/ and was already used for servmon. Larousse
could become THE monitoring device (and we could eventually move Ganglia
from zwinger to larousse).
The next step is to agree on a way to check services on the various
hosts. There are several solutions for that:
1/ run a daemon on each server (nrpe), listening for queries from the
monitoring host and giving back results.
2/ hack something that grabs data from gmetad and add new metric plugins
to ganglia. The good point is that we would then have those data showing
in ganglia.
3/ make checks through ssh using passwordless ssh keys. I personally
don't like that.
4/ deploy snmp everywhere
The nrpe approach needs a daemon set up on each server. The problem is
that most of the data are already available through gmetad. The good
point is that it is easy to set up (rpm -i nrpe, same config files and
plugins for every server).
Reusing the gmetad data is probably a better idea; the data in Nagios
and Ganglia would then be the same. One of the problems is that we would
have to code a Nagios plugin that caches the gmetad data to avoid
multiple queries (we probably don't want to query gmetad for CPU, then
for memory, then for NFS calls, then for each disk's space usage); a
rough sketch of such a plugin follows below.
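To make that concrete, here is a rough, untested sketch of such a check,
assuming gmetad answers on its default XML port (8651) and dumps its
full state on connect; the cache path, metric names and threshold logic
are placeholders:

<?php
// Rough sketch of a Nagios check that reads one metric from gmetad's XML
// dump and caches the dump on disk, so checking CPU, memory, disks, etc.
// for 60+ hosts does not mean 60+ separate queries to gmetad.
// Nagios plugin exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.

$cacheFile = '/tmp/gmetad-dump.xml';
$cacheTtl  = 30;  // seconds to reuse a cached dump
list( , $host, $metric, $warn, $crit ) = $argv;  // e.g. srv1 disk_free 10 2

// Refresh the cache if it is missing or too old.
if ( !file_exists( $cacheFile )
    || time() - filemtime( $cacheFile ) > $cacheTtl ) {
    // gmetad dumps its full XML state to anyone connecting on its XML
    // port (8651 by default) and then closes the connection.
    $fp = @fsockopen( 'localhost', 8651, $errno, $errstr, 5 );
    if ( !$fp ) {
        echo "UNKNOWN - cannot connect to gmetad: $errstr\n";
        exit( 3 );
    }
    file_put_contents( $cacheFile, stream_get_contents( $fp ) );
    fclose( $fp );
}

$xml = @simplexml_load_file( $cacheFile );
if ( $xml === false ) {
    echo "UNKNOWN - cannot parse cached gmetad XML\n";
    exit( 3 );
}

// Find the requested metric for the requested host.
$nodes = $xml->xpath( "//HOST[@NAME='$host']/METRIC[@NAME='$metric']" );
if ( !$nodes ) {
    echo "UNKNOWN - no metric '$metric' for host '$host'\n";
    exit( 3 );
}
$value = (float)$nodes[0]['VAL'];

// Here lower is worse (e.g. free disk space); adapt per metric as needed.
if ( $value <= (float)$crit ) {
    echo "CRITICAL - $metric on $host is $value\n";
    exit( 2 );
} elseif ( $value <= (float)$warn ) {
    echo "WARNING - $metric on $host is $value\n";
    exit( 1 );
}
echo "OK - $metric on $host is $value\n";
exit( 0 );

Every service check for every host could read from the same cached
dump, so gmetad would get queried at most once per cache TTL.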
SNMP is a great tool for grabbing device status. Again, it is probably
redundant with gmetad, but it would let us monitor network equipment
such as the switches, our ISP router and probably the console switch.
cheers,
--
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar(a)jabber.org ICQ: 15325080
Hi there,
I've downloaded Wikinews and installed it on my MediaWiki.
There is a page like
http://en.wikinews.org/w/index.php/Template:Hurricane_2005_Infobox
On the Wikinews site all the text from the template is contained in the
table on the right side, but on my site it is just ordinary rows.
Here is the difference in the generated source code when the
{{InfoboxStart|infotitle=[[w:Hurricane|Hurricanes]] - 2005}} wiki code
is processed.
On my site:
<div style="margin-left: 5px; float: right; width: 200px; background:
#edf7ff; border: solid #666666 1px; padding: 0px; font-size: x-small;">
<div style="text-decoration:none; background: #7ec9fd; padding: 2px;
text-align: center; style; font-size: 140%; border-botom: solid 1px
#666666;"><b><a href="http://en.wikipedia.org/wiki/Avian_Flu" class='extiw'
title="w:Avian Flu">Avian Flu</a></b></div ></div>
On wikinews:
<div style="margin-left: 5px; float: right; width: 200px; background:
#edf7ff; border: solid #666666 1px; padding: 0px; fon-size: x-small;">
<div style="text-decoration:none; background: #7ec9fd; padding: 2px;
text-align: center; style; font-size: 140%; border-bottom: solid 1px
#666666;"><b><a href="http://en.wikipedia.org/wiki/Avian_Flu" class='extiw'
title="w:Avian Flu">Avian Flu</a></b></div>
Template:InfoboxStart is <div style="margin-left: 5px; float: right; width:
200px; background: #edf7ff; border: solid #666666 1px; padding: 0px;
font-size: x-small;">
<div style="text-decoration:none; background: #7ec9fd; padding: 2px;
text-align: center; style; font-size: 140%; border-botom: solid 1px
#666666;"> '''{{{infotitle}}}'''</div>
So there is a redundant </div > on my site. Without this tag the table
is OK! I've checked 1.4, 1.5 and 1.6 (developer version); the situation
is the same in all of them.
Is this a bug in Parser.php, or is it some setting in MediaWiki?
Thank you
Sergey
Gregory Maxwell wrote:
>When a user edits, we request a cookie, "usertoken" or whatever. If
>they do not have one, we generate a long random number and give them
>one. On every edit made by that browser (no matter which user is logged
>in) the cookie is returned. We add an extra column to recent changes
>to store this value.
>A new version of sockcheck is produced that finds users who share
>revisions with the same token, much like we can do with IPs already.
>Voila, cookie-based sockcheck.
>Thoughts?
Can a cookie carry over between different IPs for the same browser? I.e.,
the user hangs up and dials in again, getting a different IP?
- d.
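For illustration, a rough sketch of how the token cookie in the quoted
proposal could be assigned (the cookie name and the rc_usertoken column
below are invented for the example; this is not an actual patch):

<?php
// Illustrative sketch only: give each browser a long-lived random token
// cookie and record it alongside each edit. "wmUserToken" and the
// rc_usertoken column are hypothetical names.
function getOrSetUserToken() {
    if ( isset( $_COOKIE['wmUserToken'] ) ) {
        return $_COOKIE['wmUserToken'];
    }
    // 128 bits of randomness, hex encoded.
    $token = '';
    for ( $i = 0; $i < 8; $i++ ) {
        $token .= sprintf( '%04x', mt_rand( 0, 0xffff ) );
    }
    // Long expiry so the token survives across sessions and IP changes.
    // (Must be called before any output is sent.)
    setcookie( 'wmUserToken', $token, time() + 365 * 24 * 3600, '/' );
    return $token;
}

// At save time the token would be stored next to the recentchanges row,
// e.g. in a new rc_usertoken column. Sockcheck can then group edits
// sharing the same rc_usertoken value, exactly as it already groups
// edits sharing the same IP.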
-----Original Message-----
From: wikitech-l-bounces(a)wikimedia.org [mailto:wikitech-l-bounces@wikimedia.org] On Behalf Of Brion Vibber
Sent: Friday, November 18, 2005 2:26 PM
To: Wikimedia developers
Subject: Re: [Wikitech-l] Parser caching
Sechan, Gabe wrote:
> How so? I'm honestly curious here. My permissions are stored in
> page_restrictions. It's just a simple group read/can't read thing
> (only groups in the field can read a page. Unless the field is blank,
> in which case anyone can). I put the restrictions checking in
> Revision::getText() and Title::getText().
Be very careful; this will prevent all internal functions from loading text properly, and could result in permanent data corruption on internal maintenance processes (compression/uncompression for instance, backup data dumps, perhaps future upgrades).
More generally about the parser cache; if you add per-user changes that affect rendering you need to take this into account in the parser cache option hash. See User.php.
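A purely hypothetical sketch of what taking it into account could look
like, with invented names (see User.php for the real option hash): any
per-user flag that changes rendering becomes part of the hash that keys
the parser cache, so users with different effective options or
permissions never share cached HTML.

<?php
// Hypothetical sketch, not actual MediaWiki code: fold every per-user
// flag that affects rendering into the parser cache key hash.
function pageRenderingHashSketch( $user ) {
    $parts = array(
        $user->getOption( 'math' ),           // existing per-user options ...
        $user->getOption( 'stubthreshold' ),
        $user->canReadRestrictedPages() ? 1 : 0,  // ... plus the new permission bit
    );
    return md5( implode( '!', $parts ) );
}
// canReadRestrictedPages() is an invented helper standing in for
// whatever reads the page_restrictions field.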
-- brion vibber (brion @ pobox.com)
Good catch, I didn't think of backups. I'll add an exception to the checking so that internal functions bypass it. Since these would be done by an admin or via the command line, excepting those two cases ought to work (admins need full visibility anyway).
Gabe
How so? I'm honestly curious here. My permissions are stored in page_restrictions. It's just a simple group read/can't-read thing (only groups listed in the field can read a page; if the field is blank, anyone can). I put the restrictions checking in Revision::getText() and Title::getText(). This is working for edit text, history text, etc. The only exception is the main node itself: there it works only after I edit the page (so if I edit the page, then read it, it works; if I just protect it without editing, it fails). Putting in the fake cache miss seems to fix that bug. What am I missing?
Gabe
-----Original Message-----
From: wikitech-l-bounces(a)wikimedia.org [mailto:wikitech-l-bounces@wikimedia.org] On Behalf Of Brion Vibber
Sent: Friday, November 18, 2005 12:07 PM
To: Wikimedia developers
Subject: Re: [Wikitech-l] Parser caching
Sechan, Gabe wrote:
> Does the parser cache the results of pages after parsing them? If so,
> is there a quick place I could turn it off for certain pages?
> Such as pretending that they cache miss? The parsing is messing up my
> read protections on pages- the permissions are working perfectly on
> special pages like edit or history that try to access the page, but
> don't always protect the node itself unless I edit it after protecting
> it.
If this is causing a problem for you, your permissions scheme is inherently flawed and is probably a huge security hole.
-- brion vibber (brion @ pobox.com)
Does the parser cache the results of pages after parsing them? If so, is there a quick place I could turn it off for certain pages, such as pretending that there is a cache miss? The parsing is messing up my read protections on pages: the permissions are working perfectly on special pages like edit or history that try to access the page, but they don't always protect the node itself unless I edit it after protecting it.
Gabe
Ok, I tried forcing ParserCache::get to pretend there's a cache miss on protected pages, and it seems to be working. If anyone knows a reason this wouldn't work fully, or why it is a bad idea (other than performance, which should be minor as less than 1% of pages are read-restricted), please tell me.
Gabe
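For reference, a minimal sketch of the "fake cache miss" guard described
above, with an invented helper name (not the actual patch):

<?php
// Hypothetical sketch: the guard would sit at the top of
// ParserCache::get(). isReadRestricted() is an invented helper standing
// in for whatever inspects the page_restrictions field.
function parserCacheGetSketch( $article, $user ) {
    $title = $article->getTitle();

    if ( $title->isReadRestricted() ) {
        // Pretend the page was never cached: returning false signals a
        // cache miss, forcing a re-parse that goes back through the
        // permission checks in Revision::getText() / Title::getText().
        return false;
    }

    // ... the normal memcached / database lookup would continue here ...
    return false; // placeholder for the real lookup
}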
-----Original Message-----
From: wikitech-l-bounces(a)wikimedia.org [mailto:wikitech-l-bounces@wikimedia.org] On Behalf Of Sechan, Gabe
Sent: Friday, November 18, 2005 11:35 AM
To: wikitech-l(a)wikimedia.org
Subject: [Wikitech-l] Parser caching
Does the parser cache the results of pages after parsing them? If so, is there a quick place I could turn it off for certain pages, such as pretending that there is a cache miss? The parsing is messing up my read protections on pages: the permissions are working perfectly on special pages like edit or history that try to access the page, but they don't always protect the node itself unless I edit it after protecting it.
Gabe