Andy Mabbett, 08/02/2013 14:57:
> I'd like to ask for your support for the project I started:
>
> <http://pigsonthewing.org.uk/open-licensed-format-recordings-voices-wikipedi…>
>
> asking the subjects of Wikipedia articles to record a 10-second sample
> of their speaking voice, for use on those articles.
>
> An example script is "Hello, my name is [name]. I was born in [place]
> and I have been [job or position] since [year]".
>
> So far, the participants:
>
> <http://commons.wikimedia.org/wiki/Category:Voice_intro_project>
>
> include Sue Black, Cory Doctorow, Bill Thompson and Dave Winer; and
> we've just had our first recording in French - but we need many more.
>
> Do you know anyone who has an article about them? Do you know of tools
> that would simplify the process of making ogg files, open licensing
> them, and uploading them to Commons? How can we include more speakers
> of other languages?
I think we really need such a tool; for instance, it's a shame that
Wiktionary doesn't have pronunciation recordings on most of its entries.
Of course it's better if the speaker is authoritative (like the subject
in person for a biography or a professional for Wiktionary), but tools
would help everyone.
Nemo
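On the tooling question: I know of no dedicated tool, but the conversion step itself is small. A minimal sketch of building an ffmpeg command line for Ogg Vorbis output (assuming ffmpeg with libvorbis is installed; the function and file names are illustrative, and the licensing and upload steps would still be manual):

```python
import subprocess
from pathlib import Path

def to_ogg_command(source, target=None):
    """Build an ffmpeg command that converts a recording to Ogg Vorbis.

    Commons prefers Ogg; ffmpeg's libvorbis encoder accepts most common
    inputs (wav, mp3, m4a) out of the box.
    """
    source = Path(source)
    if target is None:
        target = source.with_suffix(".ogg")
    return [
        "ffmpeg",
        "-i", str(source),        # input recording
        "-vn",                    # drop any video stream, keep audio
        "-codec:a", "libvorbis",  # encode as Ogg Vorbis
        "-qscale:a", "5",         # quality setting, roughly 160 kbit/s
        str(target),
    ]

def convert(source):
    # Requires ffmpeg on the PATH; raises CalledProcessError on failure.
    subprocess.run(to_ogg_command(source), check=True)
```

`to_ogg_command("voice.wav")` yields a command ending in `voice.ogg`; a batch wrapper over such a helper would cover the "simplify making ogg files" part of the request.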
Three days left! This is also about new user groups and thematic
organisations for individual projects, so forwarding.
(Plus, it's been a while since our last all-projects mass-crossposting.
;-) )
Nemo
-------- Original Message --------
Subject: Re: [Wikimedia-l] The Affiliations Committee is looking for
candidate members
Date: Tue, 8 Jan 2013 16:46:05 +0100
From: Lodewijk
To: Wikimedia Mailing List
Hi all,
this is the last reminder that the call will be closing on January 12 -
this is four days from now.
Please share the call for candidates on relevant fora.
Best regards,
Lodewijk Gelauff
2013/1/2 Lodewijk <lodewijk(a)effeietsanders.org>
> Dear all,
>
> I would like to take this opportunity to remind you all that the
> Affiliations Committee is still looking for candidates! Applications can be
> sent in until 12 January as explained below.
>
> Do you understand how Wikimedia works as an organization, and would you
> like to help new organizations to get started? Then please apply! I hope
> for many high quality applications!
>
> Best,
> Lodewijk
>
> 2012/12/12 Bence Damokos <bdamokos(a)gmail.com>
>
>> Dear all,
>>
>> The Affiliations Committee [1], the committee responsible for
>> guiding volunteers in establishing Chapters, User Groups and Thematic
>> Organizations ("affiliates" for short) and approving them when they
>> are ready, is looking for about six new members.
>>
>> The main focus of AffCom is to guide groups of volunteers in forming
>> affiliates. We make sure that the group is large enough to be viable
>> (and advise them on how to get bigger), review bylaws for compliance
>> with the requirements and best practices, and advise the Board of the
>> Wikimedia Foundation on issues connected to Chapters, Thematic
>> Organizations and User Groups.
>>
>> This requires communication with volunteers all over the world,
>> negotiating skills, cultural sensitivity and the ability to
>> understand legal texts. We try to get a healthy mix of different skill
>> sets among our members.
>>
>> Key skills and experience that we are looking for in candidate
>> members are typically:
>>
>> * Excitement about the challenge of helping to empower groups of
>> volunteers worldwide
>> * Willingness to work in a sometimes bureaucratic, sometimes political
>> process
>> * 4 hours per week availability[2]
>> * International orientation
>> * Very good communication skills in English
>> * Ability to work and communicate with other cultures
>> * Strong understanding of the structure and work of affiliates and the WMF
>> * Communication skills in other languages are a major plus
>> * Experience with or in an active affiliate is a major plus
>>
>> With the help of the Affiliations Committee, 2012 has been an exciting
>> year of transformation for the movement with the introduction of new
>> types of affiliation. This means that the workload of the Committee
>> has increased and diversified, and help is wanted! Currently many
>> applications to become a Chapter, Thematic Organization or User Group
>> are in the pipeline and could use your attention and dedication!
>>
>> You can send your applications with your name, contact data (e-mail,
>> wiki username), experience and motivation to join to the AffCom email
>> address, affcom AT lists DOT wikimedia DOT org by January 12, 2013.
>> You will get a confirmation that your application came through.
>>
>> Members are usually selected every twelve months for staggered
>> two-year terms. The applications will be considered by the current
>> and outgoing members and by Committee advisers who are not seeking
>> re-selection.
>>
>> Since I will be a candidate for re-selection myself, this process will
>> be managed by another committee member, Lodewijk Gelauff. I hope for
>> many suitable applications. If you have any questions, please don't
>> hesitate to email me or Lodewijk[3] privately. We are happy to chat or
>> have a phone call with anyone about our work, if this helps them
>> decide to apply.
>>
>> Please distribute this call among your networks, and do apply if you
>> are interested.
>>
>>
>> Best regards,
>> Bence Damokos
>> Chair,
>> Affiliations Committee
>>
>>
>>
>> [1]: https://meta.wikimedia.org/wiki/Affiliations_Committee (please
>> follow the links and familiarize yourself with our work)
>> [2]: Our member standards of participation are at:
>>
>> http://meta.wikimedia.org/wiki/Affiliations_Committee/Resolutions/Standard_…
>> [3]: http://meta.wikimedia.org/wiki/Special:EmailUser/Effeietsanders
>>
>> _______________________________________________
>> Wikimedia-l mailing list
>> Wikimedia-l(a)lists.wikimedia.org
>> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
>>
>
>
Indeed, Wiktionary-l is the list where you might find more help. Look at
the archives; they're mostly discussions of similar problems.
There was also some attempt to merge another similar mailing list and
some effort on DBpedia-like projects, but I don't remember the conclusion.
Nemo
Judit Ács, 23/11/2012 11:18:
> Hi,
>
> I am trying to extract translations from Wiktionaries in different languages.
> Currently I use the "All pages, current versions only" dump. Is there a
> way to find out the language template tags (is that the correct term?)
> for each Wiktionary and each language?
>
> For example:
> This is the Hungarian page 'karcsu' (slim, slender)
> http://hu.wiktionary.org/wiki/karcs%C3%BA (the edit page:
> http://hu.wiktionary.org/w/index.php?title=karcs%C3%BA&action=edit)
> The translation table always (?) starts like this:
> {{-ford-}}
> {{trans-top}}
> *{{en}}: {{t|en|slim}}, {{t|en|slender}}
>
> Where {{-ford-}} comes from the word forditas (translation in Hungarian;
> I skipped the accents). The translations look like the third row and
> (hopefully) contain the other languages wiki codes (en, fr, de).
>
> Also on the page 'slim' in the Hungarian Wiktionary there are some tags
> which nobody would understand unless they are Hungarian and they have
> learned some Hungarian grammar.
> http://hu.wiktionary.org/wiki/slim and
> http://hu.wiktionary.org/w/index.php?title=slim&action=edit
> The first line is:
> {{engmell|comp=slimmer|sup=slimmest|pron=/slɪm/|audio=us}}
>
> Where 'engmell' is derived from 'english melleknev', melleknev meaning
> adjective in Hungarian. The rest is similarly confusing.
>
> It gets even more confusing if I look at other Wiktionaries. It seems
> that there are no standards that all Wiktionaries follow.
>
> Is this meta-information available somewhere?
>
> I hope I managed to explain it clearly and I am asking on the right list.
>
> Thank you in advance,
> Judit Acs
>
>
> _______________________________________________
> Xmldatadumps-l mailing list
> Xmldatadumps-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/xmldatadumps-l
>
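There is indeed no cross-Wiktionary standard for this meta-information, which is why per-language mapping files are needed. For the specific `{{t|...}}` lines Judit quotes, though, a first pass is mechanical. A minimal sketch (the template name `t` and its argument layout are assumptions from the quoted example; individual Wiktionaries use variants such as `t+` that would need extra patterns):

```python
import re

# {{t|en|slim}} -> ("en", "slim"); extra template arguments after the
# translation (gender, transliteration, ...) are tolerated and ignored.
T_TEMPLATE = re.compile(r"\{\{t\|([a-z-]+)\|([^}|]+)(?:\|[^}]*)?\}\}")

def extract_translations(wikitext):
    """Return (language code, translation) pairs found in wikitext."""
    return T_TEMPLATE.findall(wikitext)

line = "*{{en}}: {{t|en|slim}}, {{t|en|slender}}"
extract_translations(line)  # [('en', 'slim'), ('en', 'slender')]
```

Header templates like `{{-ford-}}` or `{{engmell|...}}` are exactly the part that varies per wiki and per language, so those mappings still have to be collected by hand or from each wiki's template documentation.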
Hi all,
First, I would like to congratulate you all on your marvellous work. I
need some help with the Wiktionary extraction framework. I tried to run
it for the Greek language as per your instructions at
http://wiki.dbpedia.org/Wiktionary, updated the Java JDK to 1.7, etc. I
changed the configuration in config.properties to language el, changed
config.xml to language el, and made a file config-el.xml with all the
appropriate mappings, based on config-en.xml. I downloaded the dump from
http://dumps.wikimedia.org/elwiktionary/20121029/elwiktionary-20121029-page…
which is the newest elwiktionary dump. I put the unzipped file in the
folder wiktionaryDump as mentioned in the instructions, and after a
prompt during the compilation of the code, I moved it to
wiktionaryDump/elwiktionary/20121029/ . The program seems to run, but
after five days it had not output any file, even though mvn scala:run
showed the triples on the screen. Where are the triples? Or is there
some specific configuration I got wrong and should change? I also tried
the binary provided on the project homepage, but nothing seems to happen
even for the default configuration and test dump.
Thank you in advance. Looking forward to your reply.
Kind Regards
Karampatakis Sotiris
--
Sotiris Karampatakis
M. Sc. Web Science
Faculty of Mathematics
Aristotle University of Thessaloniki
Greece
mail: s.karampatakis@gmail.com
Some solid CNI work.
---------- Forwarded message ----------
From: "Chris Leonard" <cjlhomeaddress(a)gmail.com>
Date: Oct 19, 2012 7:30 PM
Subject: [support-gang] Help needed - Amazonian Peru
To: Community Support Volunteers -- who help respond to help AT laptop.org
<support-gang(a)lists.laptop.org>
Dear Support Gang,
I need some "mechanical turk" style help from the Support Gang to
create a usable dictionary resource for the children of Amazonian Peru
who speak Asháninka (lang-cni). I have a large (435 page) image
document that I need to process page-by-page into digitized text
files.
All of the files and the necessary steps are laid out on this wiki
page. If you have an Internet connection and a mouse, you can help
out with this important task.
Please go to this page and read the step-by-step directions.
http://wiki.laptop.org/go/User:Cjl/cni-dictionary
It really takes no particular computer skills to help out with this;
it is just repetitive and a bit tedious. The other limiting factor is
that the preferred OCR web-site will only process 15 pages per user
per hour (for free), so having as many users as possible working on
this matters.
If twenty people stepped up, we could have this done tonight and I
could move on to the next steps (copy-editing and formatting),
preparation for distribution in various forms, etc.
Thanks to anyone out there that will help me out with this.
cjl
Sugar Labs Translation Team Coordinator
_______________________________________________
support-gang mailing list
support-gang(a)lists.laptop.org
http://lists.laptop.org/listinfo/support-gang
*Apologies for cross-posting*
On September 23-24-25, the Multilingual Linked Open Data for Enterprises
Workshop (MLODE) will take place in Leipzig, Germany, co-located with SABRE
and the Leipziger Semantic Web Day. Please find all information here:
http://sabre2012.infai.org/mlode
News
* See the people attending the conference in our people viewer (add yourself, if
you are attending) - http://mlode.nlp2rdf.org/people/view.html
* In parallel to the code-a-thon there will be an Apache Stanbol and Linked
Media Framework Tutorial from 9 am to 12:30 pm (please join no later than 10 am)
and a LOD2 Stack Tutorial at 2 pm -
http://wiki.aksw.org/Events/2012/LeipzigerSemanticWebDay/Tutorien
* Twitter tag #mlode
* Program published - http://tinyurl.com/mlode-schedule
* Please apply for lightning talks here: mlode2012 -at-
lists.informatik.uni-leipzig.de
* Don't forget to send your submission for the Monnet Challenge to John McCrae -
win up to 600 Euro - http://sabre2012.infai.org/mlode/monnet-challenge
* Code-a-thon: We will provide support and assistance for developers new to RDF
* If you arrive on Sunday, you can join us for the zero day, where we brainstorm
for the code-a-thon: Leipziger Zoo at 10 am and the bar Kicker IN at 7 pm
* The workshop is accompanied by a data post-proceedings Special Issue in the
Semantic Web Journal -
http://www.semantic-web-journal.net/blog/call-multilingual-linked-open-data…
We would like to thank our sponsors for supporting the workshop:
* The MultilingualWeb-LT Working Group -
http://www.w3.org/International/multilingualweb/lt/
* The Interactive Knowledge Stack (IKS) EU Research Project -
http://www.iks-project.eu/
* The Monnet Project - http://www.monnet-project.eu/
We all hope to see you there,
Sebastian Hellmann and Steven Moran
on behalf of the whole MLODE organisation committee
--
Dipl. Inf. Sebastian Hellmann
Department of Computer Science, University of Leipzig
Events:
* http://sabre2012.infai.org/mlode (Leipzig, Sept. 23-24-25, 2012)
* http://wole2012.eurecom.fr (*Deadline: July 31st 2012*)
Projects: http://nlp2rdf.org , http://dbpedia.org
Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
Research Group: http://aksw.org
Save the date: Leipzig, Germany 23-24-25 September 2012
http://sabre2012.infai.org/mlode
Co-located with the Leipziger Semantic Web Day: http://aksw.org/lswt
====== Multilingual Linked Open Data for Enterprises ======
MLODE will bring together developers, data producers, academia and
enterprises and connect people, communities, data and industrial use
cases. The workshop will be very interactive and you are expected to
help us achieve common goals:
* bootstrap and build a Linguistic Linked Open Data Cloud (LLOD):
http://linguistics.okfn.org/resources/llod/
* establish best practices for multilingual linked open data
* create incentives for businesses and lower the barrier for
participation in LOD for natural language processing and
internationalisation and localisation enterprises.
We are expecting intensive participation by members of the following
communities (these are teasers, see the **detailed descriptions for each
community** further below):
* DBpedia ( http://dbpedia.org ): DBpedia International now
has over 10 language-specific chapters (such as
http://el.dbpedia.org ). At the MLODE workshop there will be a DBpedia
Developers meetup. We will discuss the “Future of DBpedia” and create a
common Road Map. If you want to get more involved in DBpedia, the
workshop will be a good opportunity to meet the team.
* Working Group for Open Data in Linguistics (OWLG,
http://linguistics.okfn.org ): Now is
the time to get your data into the LLOD cloud! We have created a
development team that will convert your data to RDF and help establish
links: http://code.google.com/p/mlode/. Please submit your data sets
soon! (Furthermore we will have a legal session to discuss licensing
issues.)
* Multilingual Web ( http://www.multilingualweb.eu ): Free, open data
and lexica; we will have a session discussing best practices for
multilingual linked open data
(http://mlode.okfnpad.org/best-practices-multilingual-lod) and
compatibility with the RDF world with ITS 2.0.
* Apache Stanbol ( http://incubator.apache.org/stanbol/): Enterprises
will have the chance to present their use cases during lightning talks
and we will have an Apache Stanbol Booth and an install fest to show
hands-on how combined usage of public and closed data can be achieved
and what benefits firms can gain from using these rapidly increasing
data pools.
* Ontolex W3C Community Group ( http://www.w3.org/community/ontolex/):
Monnet Challenge will provide a data bounty for developers who convert
data sets using lemon.
* Also: NLP2RDF (http://nlp2rdf.org) - the NIF
project, DBpedia Spotlight (http://spotlight.dbpedia.org),
Wiktionary2RDF (http://dbpedia.org/Wiktionary)
How you can contribute:
* Contact us if you are an enterprise and want to prepare a small
presentation/lightning talk about your business use cases (using LOD) or
problems you have (please see below for details)
* Contact us if you want to give a short presentation on a relevant topic
* We are looking for a sponsor for a DBpedia Booth
* Submit your data sets for the LLOD: http://code.google.com/p/mlode/
* Become a sponsor of the workshop:
http://sabre2012.infai.org/mlode/funding?&#sponsorship
* Or donate money and help the individual communities:
http://sabre2012.infai.org/mlode/Funding
DBpedia is a good example of a freely available and open data set that
was generated by crowd-sourcing and academia, but it has provided an
immense value to businesses and industry. We want to build on and
continue this success for the areas of natural language processing
enterprises and the internationalisation and localisation industries.
The goal of the workshop is to bootstrap a Multilingual Linked Open Data
cloud by bringing together many different linked open data sets and by
creating synergy among different research and business communities. This
workshop is aimed at researchers and industry and commercial consumers
of data produced by research. We hope for mutual benefits between
(potentially non-commercial) data providers and enterprises: Open-source
and open-licences for software have shown that they can be successful in
a commercial environment. How can we transfer these models to
Multilingual Linked Open Data? And how can the transformation of
currently monolingual Linked Open Data sources into a Multilingual Web
of Open Data spur cross-linguistic research, and commercial applications
in internationalisation and localisation enterprises?
===== Sponsors =====
We would like to thank our sponsors for supporting the workshop:
* The **Working MultilingualWeb-LT Working Group** -
http://www.w3.org/International/multilingualweb/lt/
* The **Interactive Knowledge Stack (IKS) EU Research Project** -
http://www.iks-project.eu/
* The **Monnet Project** - http://www.monnet-project.eu/
===== Monnet Challenge =====
The Monnet Project (http://www.monnet-project.eu/) is offering the
following bounties for the conversion of existing linguistic resources
into linked data, in particular focussing on the
**lemon format** (http://www.monnet-project.eu/lemon). **Bounties are
600, 400, 200, 100, 50 Euros**. The selection of winners will be done
by a committee of Ontolex community members.
Core criteria:
* Number of triples (relative to other submissions). Emphasis is of
course on the number of triples containing a URI from //lemon//.
* Expressiveness and quality of lemon used (How many properties and
classes of //lemon// are you using? Are you using them correctly?)
* Impact (Is the data set you converted important and central to our
cause? We also rate data sets for less-spoken languages higher, because
of the rarity effect.)
Additional criteria:
* Note that you can convert and submit more than one data set. You will
be rated for the combined data you converted (so each person can only
make one submission).
* You will be given extra points if you publish converted data early and
other people build upon your work (e.g. fix errors).
* All submissions will be considered for inclusion in the data
post-proceedings.
Detailed information on how to submit can be found on the Monnet
Challenge page:
http://sabre2012.infai.org/mlode/monnet-challenge
Submission will end 10 days before the workshop. The deadline therefore
is **September 13th, 2012**
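The first core criterion can be estimated mechanically before submitting. A sketch, assuming the converted data is serialised as N-Triples (one triple per line) and that `http://lemon-model.net/lemon#` is the lemon namespace; verify the namespace against what your converter actually emits:

```python
# Count triples that mention the lemon namespace in N-Triples data.
# LEMON_NS is an assumption taken from the lemon model site; adjust it
# if your conversion uses a different vocabulary URI.
LEMON_NS = "http://lemon-model.net/lemon#"

def count_lemon_triples(ntriples_text):
    """Return (total triples, triples containing a lemon URI)."""
    triples = [line for line in ntriples_text.splitlines()
               if line.strip().endswith(".")]
    lemon = sum(1 for line in triples if LEMON_NS in line)
    return len(triples), lemon
```

A submission where the second number is a large share of the first is making real use of lemon rather than merely linking to it.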
===== Planned Sessions =====
Each session will have an etherpad
http://sabre2012.infai.org/mlode/etherpad so that you can already
participate in advance.
==== Submit your data today ====
In preparation, from now until September 23rd, we will:
* Collect data sets relevant to the Linguistic Linked Open Data Cloud
http://code.google.com/p/mlode/
* Provide conversion services and data 'bounties' to convert as much
data as possible to RDF before the workshop
* Help debugging and hosting your Linked Data
We are interested in data that is linguistic in nature, such as corpora
and lexica, as well as data that might be used to improve Natural
Language Processing methods such as large governmental parallel corpora
or entity linking engines.
==== Sun 23rd: Community Get Together ====
Community Get Together - no program, just social activities, e.g.
barbecue, beach volleyball. Time and place will be announced soon.
==== 24th: Code-Sprint-a-Thon ====
Code-Sprint-a-Thon (hands-on workshop) with data providers, visionaries
and developers from all communities. The focus of the Code-Sprint-a-Thon
will be on gathering the requirements and use cases from attendees and
then developers will start to initiate these ideas with the collected
data sets, e.g. interesting cross-data set queries, visualisations, data
mash-ups. The result will be more Multilingual Linked Open Data, more
links, more tools and more applications.
=== DBpedia ===
Many DBpedia developers will be available during this workshop so that
you can ask them questions directly. Bring your laptop and they will
show you how to download and query DBpedia.
=== Apache Stanbol ===
Developers from Apache Stanbol ( http://incubator.apache.org/stanbol/)
will be at the Apache Stanbol booth and they will have an install fest
to show hands-on how combined usage of public and closed data can be
achieved and what benefits firms can gain from using the rapidly
increasing data pools.
==== 25th: Announcements ====
* State of the Linguistic LOD Cloud (
http://linguistics.okfn.org/resources/llod/)
* NIF 2.0 (http://wiki.nlp2rdf.org/)
* Presentation of the results of the Code-Sprint-a-Thon
* Announcement of the Monnet Challenge Winners
====25th: Lightning Talks: Use Cases by Enterprises ====
We are looking for companies to present their use cases and/or products
that are relevant to the topics of the MLODE workshop. Please contact us
if your enterprise would like to present on a topic from this
(non-exhaustive) list:
* Use cases based on Linked Data (either open or closed)
* Solutions that are built with data from the LOD cloud
* Problems that constitute barriers for economic exploitation of LOD
* Ideas of what could be built with Linguistic/Multilingual LOD
We aim to address questions like:
* How can we unlock the data created by research and open communities
for enterprises?
* What is missing?
* How can we build bridges?
Submission ends on September 13th, ten days before the
workshop. Presentations will be around 3-5 minutes.
====25th: Session on Best Practices for Multilingual Linked Open Data ====
Please have a look at the etherpad:
http://mlode.okfnpad.org/best-practices-multilingual-lod
====25th: Session on Legal Issues ====
Erik Ketzan (http://www.linkedin.com/in/erikketzan) will present the
Clarin Legal Helpdesk and talk about current problems regarding
database licences.
Please have a look at the etherpad: http://mlode.okfnpad.org/legal-session
==== 25th: Session on DBpedia Roadmap ====
Please have a look at the etherpad: http://mlode.okfnpad.org/DBpedia-roadmap
==== Data post proceedings ====
This workshop will publish a data post proceedings. As this is a new
concept, the rules for submission are not yet fixed. We will collect
ideas here: http://mlode.okfnpad.org/data-post-proceedings
During the discussion at the conference, we will pin down the details.
===== Participating Communities =====
==== Multilingual Web ====
MLODE Contact: Dominic Jones ( https://www.scss.tcd.ie/dominic.jones/)
Many ideas about best-practice use of Multilingual LOD were generated
at the W3C-sponsored "Multilingual Web – Linked Open Data and
MultilingualWeb-LT Requirements" workshop held in Dublin, Ireland, in June
2012
(http://www.multilingualweb.eu/en/documents/dublin-workshop/dublin-program).
One of the aims of the MLODE workshop is to continue the discussion
around best practices for applying LOD in the Multilingual Web and the
transformation of currently monolingual LOD resources into multiple
languages, for example a multilingual DBpedia. Topics for discussion
and talking points will be carried over from the Dublin workshop and
discussed during the MLODE workshop, but new ideas or suggestions are of
course welcome and requested. We will have a session discussing best
practices for multilingual linked open data and compatibility of the RDF
world with ITS 2.0. You can already participate in the discussion:
http://mlode.okfnpad.org/best-practices-multilingual-lod
==== DBpedia ====
MLODE Contact: Dimitris Kontokostas
DBpedia ( http://dbpedia.org ): DBpedia International now has over 10
language-specific chapters (such as http://el.dbpedia.org ). At the
workshop there will be a DBpedia Developers meetup, where we will
discuss the “Future of DBpedia” and create a
common Road Map. If you want to get more involved in DBpedia, the
workshop will be a good opportunity to meet the team.
==== OWLG ====
MLODE Contact: Richard Littauer
Working Group for Open Data in Linguistics (OWLG,
http://linguistics.okfn.org ): Now is
the time to get your data into the LLOD cloud! We have created a
development team that will convert your data to RDF and help establish
links: http://code.google.com/p/mlode/. Please submit your data sets!
(Furthermore we will have a legal session to discuss licensing issues.)
==== Ontolex ====
MLODE Contact: John McCrae
Ontolex W3C Community Group ( http://www.w3.org/community/ontolex/):
Monnet Challenge will provide a data bounty for developers who convert
data sets using lemon.
==== Apache Stanbol ====
MLODE Contact: John Pereira
Apache Stanbol ( http://incubator.apache.org/stanbol/): Enterprises will
have the chance to present their use cases during lightning talks and we
will have an Apache Stanbol Booth and an install fest to show hands-on
how combined usage of public and closed data can be achieved and what
benefits firms can gain from using the rapidly increasing data pools.
==== NLP2RDF ====
NLP2RDF (http://nlp2rdf.org): the NIF project will
announce the new NIF 2.0 Specification at the conference. Discussion is
currently going on at the wiki (http://wiki.nlp2rdf.org)
and on the mailing list:
http://lists.informatik.uni-leipzig.de/mailman/listinfo/nlp2rdf
==== Other Communities ====
* DBpedia Spotlight (http://spotlight.dbpedia.org)
* Wiktionary2RDF (http://dbpedia.org/Wiktionary)
===== Program =====
Most of the sessions are already quite clear (see above); just the
detailed time plan is still missing.
===== Contact =====
For any inquiries regarding the workshop, you can reach the //whole//
MLODE committee at //mlode2012 [at] lists.informatik.uni-leipzig.de//.
If you are interested in sponsoring the event, please contact the
workshop organizers (Sebastian Hellmann and Steven Moran) through
//mlode2012-sponsor [at] lists.informatik.uni-leipzig.de// .
Some financial aid may be available (travel cost or conference fee),
please contact Steven Moran //mlode2012-sponsor [at]
lists.informatik.uni-leipzig.de// .
==== MLODE Committee ====
Chairs:
* Sebastian Hellmann, University of Leipzig
* Steven Moran, University of Munich
Student Chairs:
* Martin Brümmer, University of Leipzig
* Dimitris Kontokostas, University of Leipzig
Community Committee:
* Richard Littauer, Saarland University
* Dominic Jones, Trinity College
* John McCrae, Bielefeld University
* Jose Emilio Labra Gayo, University of Oviedo
* John Pereira, Apache Stanbol
* Dimitris Kontokostas, University of Leipzig
==== Venue ====
MLODE is part of the SABRE Multiconference, which is located at the
Faculty of Economics and Management Science in the center of Leipzig:
http://www.wifa.uni-leipzig.de/en/kontakt.html#c16824
--
Dipl. Inf. Sebastian Hellmann
Department of Computer Science, University of Leipzig
Events:
* http://sabre2012.infai.org/mlode (Leipzig, Sept. 23-24-25, 2012)
* http://wole2012.eurecom.fr (*Deadline: July 31st 2012*)
Projects: http://nlp2rdf.org , http://dbpedia.org
Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
Research Group: http://aksw.org