Hi Gerard and all,
From: Gerard Meijssen [mailto:gerard.meijssen@gmail.com]
Sent: Wednesday, 25 September 2013 10:26
To: Kingsley Idehen
Cc: Discussion list for the Wikidata project; Chris Bizer; Samuel Klein; Sören Auer; Christian Bizer
Subject: Re: [Wikidata-l] Cooperation between Wikidata and DBpedia and Wikipedia
Hoi,
Congratulations on the new version of DBpedia :) .. This makes it an auspicious occasion to talk about future collaboration.
At this time several Wikimedians are busy harvesting data from Wikipedia and loading it into Wikidata.
Yes, we did see this, and of course all these Wikimedians are more than welcome to reuse parts of our code for building their harvesters. (The DBpedia extraction framework does a lot of little things around data cleaning, such as recognizing different variations of values and normalizing units of measurement, that I guess would also be useful for the other harvesters.)
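As an illustration of the sort of value cleaning mentioned above, here is a minimal toy sketch (my own, not the extraction framework's actual code, which covers far more unit and number formats):

```python
import re

# Toy conversion table; the real framework handles many more units.
UNIT_TO_METRES = {"km": 1000.0, "m": 1.0, "cm": 0.01, "mi": 1609.344}

def normalise_length(raw):
    """Parse an infobox value like '12,5 km' into metres, or None."""
    match = re.match(r"\s*([\d.,]+)\s*(km|m|cm|mi)\b", raw)
    if match is None:
        return None
    number = float(match.group(1).replace(",", "."))  # tolerate decimal commas
    return number * UNIT_TO_METRES[match.group(2)]
```

The point is only that every harvester keeps meeting the same variations ('12,5 km', '12.5km', '7.8 mi'), so sharing this layer saves everyone the same debugging.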
You are harvesting data from Wikipedia and loading it into DBpedia. As we are including data into Wikidata, it goes into DBpedia as well... We might as well work together on this.
One of the best parts (as far as I am concerned) is your knowledge of fields used in infoboxes and of how they are the same as, or related to, fields in other infoboxes. This expertise should be easy to absorb into Wikidata.
Yes, all the information about the infobox-to-ontology mappings is maintained by the DBpedia community in the mappings wiki (http://mappings.dbpedia.org/index.php/Main_Page), and of course Wikidata is more than welcome to use it.
As far as I know the mappings wiki also already contains mappings from DBpedia classes/properties to the corresponding Wikidata classes/properties for about 1/3 of all classes/properties (Sebastian Hellmann cc’ed knows more about this). So it would be very simple for you to use the mappings to fill your repository.
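To make that concrete, a hypothetical sketch of the 'use the mappings to fill your repository' step: the property equivalences below are invented for illustration (the real ones live in the mappings wiki), and the translation simply rewrites DBpedia predicates into Wikidata property ids.

```python
# Hypothetical DBpedia-to-Wikidata property equivalences; the actual
# mapping table would come from the mappings wiki, not be hard-coded.
PROPERTY_MAP = {
    "dbo:birthDate": "P569",
    "dbo:spouse": "P26",
}

def translate_triples(triples, property_map=PROPERTY_MAP):
    """Keep only triples whose predicate has a Wikidata equivalent,
    rewriting them as (item, Wikidata property, value) claims."""
    claims = []
    for subject, predicate, obj in triples:
        wd_property = property_map.get(predicate)
        if wd_property is not None:
            claims.append((subject, wd_property, obj))
    return claims
```

Predicates with no known equivalent are simply dropped, which is roughly the situation for the ~2/3 of classes/properties not yet mapped.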
In Wikidata we use qualifiers; will you adopt qualifiers in DBpedia? At this time, qualifiers are not handled by the harvesting software I am familiar with.
What kind of qualifiers? Do you mean provenance information?
To me these are the two issues that determine how easy it will be to collaborate effectively on great content for Wikidata. To me the most important aspect of Wikidata is that it is actually used: information added becomes available in many places. Consequently, more data for Wikidata (data that fits in well and is closely related to the 100+ Wikipedias you are harvesting) will ensure a rich experience in many places.
Yes.
Cheers,
Chris
Thanks,
GerardM
On 26 August 2013 17:36, Kingsley Idehen <kidehen(a)openlinksw.com> wrote:
On 8/26/13 10:44 AM, Gerard Meijssen wrote:
I do know how much the DBpedia people want to reach out and connect in any positive way with both Wikipedia and Wikidata.
I have no knowledge (on the DBpedia side) of any resistance to collaborate with Wikidata. We've always seen this effort (like other structured data efforts of this kind e.g., Freebase, YAGO etc..) as being mutually beneficial.
--
Regards,
Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Hoi,
I have been blogging a lot over the last two days with DBpedia in mind. My
understanding is that at DBpedia a lot of effort went into making something
of a cohesive model of properties. Now that the "main type GND" is about to
be deleted, it makes sense to adopt much of the work that has been done at
DBpedia.
The benefits are:
- we will get access to academically reviewed data structures
- we do not have to wait and ponder and get in to thebusiness enriching
the data content of DBpedia
- we can easily compare the data in DBpedia and Wikidata
- more importantly, DBpedia has spend effort in connecting to other
resources
Yes, we can import data from DBpedia and we can import data from Wikipedia.
Actually, we can do both. The one thing that needs to be considered is that
we need data before we can curate it. With more data available, it becomes
more relevant to invest time in tools that compare data. We can start doing
this now, and over time this will become more relevant. But for now we need
more properties and the associated data.
What do you think?
Thanks,
GerardM
http://ultimategerard.blogspot.com
Hi all,
sorry to burst in with a simple question, but... is there a tool to
count how many sitelinks for a single project there are in Wikidata?
I mean, if I want to know how many (and which) items in Wikidata have
a sitelink to, say, the Yoruba Wikipedia, what tool do I have to use,
if one exists?
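Not that I know of a ready-made tool, but one could scan a Wikidata JSON entity dump and count items carrying a sitelink to the wiki in question. A rough Python sketch; the one-entity-per-line layout (wrapped in a JSON array) is an assumption about the dump format:

```python
import json

def count_sitelinks(dump_path, wiki="yowiki"):
    """Count items in a Wikidata JSON dump that have a sitelink to `wiki`."""
    count = 0
    with open(dump_path, encoding="utf-8") as dump:
        for line in dump:
            line = line.strip().rstrip(",")
            if not line.startswith("{"):
                continue  # skip the surrounding '[' and ']' lines
            entity = json.loads(line)
            if wiki in entity.get("sitelinks", {}):
                count += 1
    return count
```

The same loop could collect the matching item ids instead of just counting them, to answer the "which items" half of the question.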
Thanks.
--
Luca "Sannita" Martinelli
http://it.wikipedia.org/wiki/Utente:Sannita
Hello All,
It struck me that one interesting way to see if subclasses are useful would be to test this hypothesis.
Let QID_a and QID_b be two Wikidata items.
Conjecture: if QID_b is a subclass of QID_a,
then count_sitelinks(QID_b) <= count_sitelinks(QID_a).
Has anyone investigated this problem, or can think of an efficient way to test it? Or can tell me why it ought not to be true?
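For what it's worth, the conjecture is cheap to test offline once you have the subclass pairs and per-item sitelink counts (both extractable from a dump; the data in the test below is made up). A minimal sketch:

```python
def find_counterexamples(subclass_pairs, sitelink_counts):
    """Yield (sub, sup) pairs where the subclass has MORE sitelinks
    than its superclass, i.e. violations of the conjecture."""
    for sub, sup in subclass_pairs:
        if sitelink_counts.get(sub, 0) > sitelink_counts.get(sup, 0):
            yield sub, sup
```

One would expect counterexamples wherever a specific topic is more popular across wikis than its abstract superclass.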
Maximilian Klein
Wikipedian in Residence, OCLC
+17074787023
Dear all,
First, sorry for sending an email: I want to help, but I don't have the time required to understand how the wiki RfC mechanism works [1]. More precisely, that one really does not seem appropriate for a first dive :-(
In fact, reading it, I'm not even sure I understand the question anymore. To me the original question was about the properties P31 and P279 themselves (Eric's mail still lists them as an option, albeit a popular one), i.e. rather about how to represent a classification (independently of which one is chosen). But now I see plenty of hardcore ontological discussions on the RfC page, which are indeed about getting a unified top-level ontology...
The basic question is, can you really get a unified, perfectly structured and clean classification of things?
I'm slightly surprised that Wikidata would go there. You want users to add classes in the future, no? Or to use the existing Wikipedia categories as a source of classification?
In either case, you'd end up making weird inferences possible, if you apply the formal semantics of P31 and P279 as they're defined for rdf:type and rdfs:subClassOf [4,5]. Actually even if you invest time making a clean top-level, the lower-level parts of the classification will probably very soon diverge from formal ontology "meta-principles" that structure SUMO, DOLCE, BFO, etc.
And that is probably perfectly fine for most of your usage scenarios. Having simple, intuitive classification semantics is possible without the full formal ontology apparatus. Namely, you can use something that looks like rdf:type/rdfs:subClassOf, but with looser semantics.
1. You could use something like the dc:type property from the Dublin Core framework, instead of rdf:type, possibly creating sub-properties of it, using a list like the one at [7] for input.
2. You could use something like skos:broader and skos:narrower [8] for the links between the 'looser classes'.
Of course this does not correspond to a formal ontological framework in the Semantic Web sense. But well, if the 'classification' doesn't fit a super-formal framework, I see no reason to desperately try to shoehorn it into RDFS.
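To make the difference concrete, a toy pure-Python sketch (string triples in a set, not a real RDF library; the Q-ids are only illustrative): the rdfs:subClassOf entailment rule (rdfs9) fires for the strict rdf:type/rdfs:subClassOf pair, while a graph using dc:type and skos:broader licenses no such inference.

```python
RDF_TYPE = "rdf:type"
RDFS_SUBCLASS = "rdfs:subClassOf"
DC_TYPE = "dc:type"            # looser alternative to rdf:type
SKOS_BROADER = "skos:broader"  # looser alternative to rdfs:subClassOf

def entail_types(triples):
    """Apply rule rdfs9: x rdf:type C and C rdfs:subClassOf D imply x rdf:type D."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(triples):
            if p != RDF_TYPE:
                continue
            for c, p2, d in list(triples):
                if p2 == RDFS_SUBCLASS and c == o and (s, RDF_TYPE, d) not in triples:
                    triples.add((s, RDF_TYPE, d))
                    changed = True
    return triples

strict = {("Q42", RDF_TYPE, "Q5"), ("Q5", RDFS_SUBCLASS, "Q729")}
loose = {("Q42", DC_TYPE, "Q5"), ("Q5", SKOS_BROADER, "Q729")}
```

Running entail_types over the strict graph adds ("Q42", "rdf:type", "Q729"), while the loose graph comes back unchanged: no inference is forced on it.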
Note that I would quite disagree with the second part of the sentence from one of the RfC-related pages [9]:
"
There is a consensus on Wikidata against creating other properties which perform this function as it is felt a clean hierarchy of classes is in keeping with W3C recommendations and will make it easier to use the data here.
"
First, getting a clean hierarchy won't make things easier if you end up with too static/formal a view of the world. Second, the feeling about the W3C recommendations is wrong: W3C has actually pushed SKOS precisely to allow 'softer' classifications to be represented without having to undergo the ordeals and dangers of RDFS/OWL...
But I realize all this might be regarded as questioning the decision you made earlier on using P31 and P279 instead of the GND type, so I'm going to stop bothering you ;-)
Best,
Antoine
---
Antoine Isaac
Scientific coordinator, Europeana.eu
[1] https://www.wikidata.org/wiki/Wikidata:Requests_for_comment/Migrating_away_…
[2] http://lists.wikimedia.org/pipermail/wikidata-l/2013-September/002815.html
[3] http://lists.wikimedia.org/pipermail/wikidata-l/2013-September/002816.html
[4] http://www.w3.org/TR/rdf-schema/#ch_type
[5] http://www.w3.org/TR/rdf-schema/#ch_subclassof
[6] http://purl.org/dc/terms/type
[7] https://www.wikidata.org/wiki/Wikidata:Requests_for_comment/Migrating_away_…
[8] http://www.w3.org/TR/skos-primer/#secrel
[9] https://www.wikidata.org/wiki/Help:Modeling#Hierarchy_of_classes
Heya folks :)
We have just enabled interwiki links for Wikimedia Commons via
Wikidata. This means they no longer need to be stored in the
wikitext on Commons but can instead be stored in Wikidata together
with the links for Wikipedia and Wikivoyage.
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Technical Projects
Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.
Subject: [Wikidata-l] 'Person' or 'human', upper ontologies and migrating 4 million claims
Antoine
while there are discussions in the RfC about high-level ontologies,
there is other stuff happening out on the Wikidata item pages.
Editors are constructing low-level ontologies using 'instance of' and
'subclass of', and these are gradually creeping upwards.
'is in administrative unit' and 'located on terrain feature' are being used
to build another hierarchy of places on earth and 'part of' is being used
to build a hierarchy of places off the planet.
'occupation (person)' is becoming more important than 'instance of' in
classifying humans, as is 'child'.
'instance of' is also being used to classify all the items derived from
Wikipedia pages that don't quite fit: category pages, disambiguation
pages, compound items (describing more than one thing, like 'Bonnie and
Clyde'), so tools can find these and exclude them from queries or whatever.
Personally I can't see an awful lot of use for an upper-level ontology -
all the use cases I've seen are for the lower levels. If an upper level is
to be added (and I'm sure it will be - 'encyclopaedic' is close to a synonym
for 'completist'), then why not have all of the upper-level ontologies?
'subclass of' can be used to create a variety of upper-level ontologies on
top of the base levels derived from the items we have. After all, the enwp
categories have three different upper-level ontologies!
Joe
user:filceolaire
FYI
This is the first release of DBpedia that includes cross-references (via
owl:sameAs linkages) to Wikidata URIs.
-------- Original Message --------
Subject: [Dbpedia-announcements] ANN: DBpedia 3.9 released, including
wider infobox coverage, additional type statements, and new YAGO and
Wikidata links
Date: Mon, 23 Sep 2013 12:27:17 +0200
From: Christian Bizer <chris(a)bizer.de>
To: <dbpedia-discussion(a)lists.sourceforge.net>,
<dbpedia-developers(a)lists.sourceforge.net>, <public-lod(a)w3.org>,
<semantic-web(a)w3.org>, <dbpedia-announcements(a)lists.sourceforge.net>
Hi all,
we are happy to announce the release of DBpedia 3.9.
The most important improvements of the new release compared to DBpedia 3.8
are:
1. the new release is based on updated Wikipedia dumps dating from March /
April 2013 (the 3.8 release was based on dumps from June 2012), leading to
an overall increase in the number of concepts in the English edition from
3.7 to 4.0 million things.
2. the DBpedia ontology is enlarged and the number of infobox to ontology
mappings has risen, leading to richer and cleaner concept descriptions.
3. we extended the DBpedia type system to also cover Wikipedia articles that
do not contain an infobox.
4. we provide links pointing from DBpedia concepts to Wikidata concepts and
updated the links pointing at YAGO concepts and classes, making it easier to
integrate knowledge from these sources.
The English version of the DBpedia knowledge base currently describes 4.0
million things, out of which 3.22 million are classified in a consistent
ontology, including 832,000 persons, 639,000 places (including 427,000
populated places), 372,000 creative works (including 116,000 music albums,
78,000 films and 18,500 video games), 209,000 organizations (including
49,000 companies and 45,000 educational institutions), 226,000 species and
5,600 diseases.
We provide localized versions of DBpedia in 119 languages. All these
versions together describe 24.9 million things, out of which 16.8 million
overlap (are interlinked) with the concepts from the English DBpedia. The
full DBpedia data set features labels and abstracts for 12.6 million unique
things in 119 different languages; 24.6 million links to images and 27.6
million links to external web pages; 45.0 million external links into other
RDF datasets, 67.0 million links to Wikipedia categories, and 41.2 million
YAGO categories.
Altogether the DBpedia 3.9 release consists of 2.46 billion pieces of
information (RDF triples) out of which 470 million were extracted from the
English edition of Wikipedia, 1.98 billion were extracted from other
language editions, and about 45 million are links to external data sets.
Detailed statistics about the DBpedia data sets in 24 popular languages are
provided at http://wiki.dbpedia.org/Datasets39/DatasetStatistics
The main changes between DBpedia 3.8 and 3.9 are described below. For
additional, more detailed information please refer to the Change Log
(http://wiki.dbpedia.org/Changelog)
1. Enlarged Ontology
The DBpedia community added new classes and properties to the DBpedia
ontology via the mappings wiki. The DBpedia 3.9 ontology encompasses
529 classes (DBpedia 3.8: 359)
927 object properties (DBpedia 3.8: 800)
1290 datatype properties (DBpedia 3.8: 859)
116 specialized datatype properties (DBpedia 3.8: 116)
46 owl:equivalentClass and 31 owl:equivalentProperty mappings to
http://schema.org
2. Additional Infobox to Ontology Mappings
The editors of the mappings wiki also defined many new mappings from
Wikipedia templates to DBpedia classes. For the DBpedia 3.9 extraction, we
used 3177 mappings (DBpedia 3.8: 2347 mappings), which are distributed as
follows over the languages covered in the release:
English: 431 mappings
Polish: 382 mappings
Dutch: 335 mappings
German: 219 mappings
Greek: 215 mappings
Portuguese: 211 mappings
Slovenian: 170 mappings
French: 165 mappings
Korean: 148 mappings
Spanish: 137 mappings
Hungarian: 111 mappings
Turkish: 91 mappings
Japanese: 72 mappings
Czech: 66 mappings
Italian: 62 mappings
Bulgarian: 61 mappings
Indonesian: 59 mappings
Catalan: 52 mappings
Arabic: 51 mappings
Russian: 48 mappings
Croatian: 36 mappings
Basque: 32 mappings
Irish: 17 mappings
Bengali: 6 mappings
3. Extended Type System to cover Articles without Infobox
Until the DBpedia 3.8 release, a concept was only assigned a type (like
person or place) if the corresponding Wikipedia article contained an infobox
indicating this type. The new 3.9 release also contains type statements
for articles without an infobox, inferred from the link structure
within the DBpedia knowledge base using the algorithm described in
Paulheim/Bizer 2013 [1]. Applying the algorithm allowed us to provide type
information for 440,000 concepts that were formerly not typed. A similar
algorithm was also used to identify and remove potentially wrong links from
the knowledge base.
4. New and updated RDF Links into External Data Sources
We added RDF links to Wikidata and updated the following RDF link sets
pointing at other Linked Data sources: YAGO, Freebase, Geonames, GADM and
EUNIS. For an overview of all the data sets that are interlinked from DBpedia,
please refer to http://wiki.dbpedia.org/Interlinking
5. New Find Related Concepts Service
We offer a new service for finding resources that are related to a given
DBpedia seed resource. More information about the service can be found at
http://wiki.dbpedia.org/FindRelated
Accessing the DBpedia 3.9 Release:
You can download the new DBpedia datasets from
http://wiki.dbpedia.org/Downloads39
As usual, the dataset is also available as Linked Data and via the DBpedia
SPARQL endpoint at http://dbpedia.org/sparql
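A minimal Python sketch of calling the endpoint over HTTP (the helper below and the example query are my own, not an official client; it only assumes the endpoint accepts `query` and `format` GET parameters):

```python
import urllib.parse
import urllib.request

DBPEDIA_SPARQL = "http://dbpedia.org/sparql"

def build_request(query, endpoint=DBPEDIA_SPARQL):
    """Build a GET request asking the endpoint for JSON results."""
    params = urllib.parse.urlencode({
        "query": query,
        "format": "application/sparql-results+json",
    })
    return urllib.request.Request(endpoint + "?" + params)

query = """
SELECT ?same WHERE { <http://dbpedia.org/resource/Berlin> owl:sameAs ?same }
"""
# Actually sending it needs network access:
# import json
# with urllib.request.urlopen(build_request(query)) as resp:
#     rows = json.load(resp)["results"]["bindings"]
```

With the new Wikidata link set loaded, a query like the one above should return the owl:sameAs cross-references for a resource.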
Lots of thanks to:
* Jona Christopher Sahnwaldt (Freelancer funded by the University of
Mannheim, Germany) for improving the DBpedia extraction framework, for
extracting the DBpedia 3.9 data sets for all 119 languages, and for
generating the updated RDF links to external data sets.
* All editors that contributed to the DBpedia ontology mappings via the
Mappings Wiki.
* Heiko Paulheim (University of Mannheim, Germany) for inventing and
implementing the algorithm to generate additional type statements for
formerly untyped resources.
* The whole Internationalization Committee for pushing the DBpedia
internationalization forward.
* Dimitris Kontokostas (University of Leipzig) for improving the DBpedia
extraction framework and loading the new release onto the DBpedia download
server in Leipzig.
* Volha Bryl (University of Mannheim, Germany) for generating the statistics
about the new release.
* Petar Ristoski (University of Mannheim, Germany) for generating the
updated links pointing at the GADM database of Global Administrative Areas.
* Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink
Software) for loading the new data set into the Virtuoso instance that
serves the Linked Data view and SPARQL endpoint.
* OpenLink Software (http://www.openlinksw.com/) altogether for providing
the server infrastructure for DBpedia.
* Julien Cojan, Andrea Di Menna, Ahmed Ktob, Julien Plu, Jim Regan and
others who contributed improvements to the DBpedia extraction framework via
the source code repository on GitHub.
The work on the DBpedia 3.9 release was financially supported by the
European Commission through the project LOD2 - Creating Knowledge out of
Linked Data (http://lod2.eu/).
More information about DBpedia can be found at http://dbpedia.org/About as well
as in the new overview article [2] about the project.
Have fun with the new DBpedia release!
Cheers,
Christian Bizer and Christopher Sahnwaldt
[1] http://www.heikopaulheim.com/docs/iswc2013.pdf
[2] http://svn.aksw.org/papers/2013/SWJ_DBpedia/public.pdf
FYI
Nemo
P.S.: You can check whether the WMF protects the logo of your
project by seeing if it's listed as a "registered trademark" on
<https://wikimediafoundation.org/wiki/Wikimedia_trademarks>.
-------- Original Message --------
Subject: [Wikimedia-l] It's time to reclaim the community logo
Date: Sat, 21 Sep 2013 12:16:16 +0200
From: Tomasz W. Kozlowski
Hello community,
this is to inform you that in response to the trademarking of the
Wikimedia community logo[1], created in 2006 by Artur “WarX”
Fijałkowski, which was discussed on this mailing list[2] as well as on
Meta[3] back in March, a small group of community members—Artur, myself,
Federico Leva (Nemo) and John Vandenberg—have initiated a formal process
of opposition against the registration of the trademark by the
Foundation in order to *reclaim the logo* for unrestricted use by the
community.
We appreciate the Foundation’s protection of the other trademarks they
have registered so far, including the logos of Wikipedia, Wikisource and
some other sister projects. In the case of the community logo, however,
it is our belief that the Foundation’s actions are exactly opposite to
what the community logo stands for and contradict the purpose behind its
very existence.
We would like to make it clear that it is not our intention to damage
anyone; our actions are a challenge against what we perceive as
unilateral declaration of ownership of an asset that has always belonged
to the wider community, and not to one or another organisation that is
part of the movement. By formally opposing the registration of the
trademark we hope to ensure the history of this logo is not disregarded,
and we wish to protect the community against unnecessary bureaucracy
and, to use another quote, let "groups who do not purport to represent
the WMF"[4] continue to be able to freely associate with a logo that
has been part of their identity for so long.
We also want to note that this is in no way a legal action against the
Foundation, but a simple notice of opposition against the registration
of the logo in the European Union. If we assume good faith, we can only
be confident that the WMF, now that it has a formal occasion, will withdraw
its registration of the logo rather than continue using movement
resources to force the community into lengthy, expensive proceedings.
We invite all community members interested in this issue to express
their opinions at:
https://meta.wikimedia.org/wiki/Talk:Community_Logo/Reclaim_the_Logo
If any of you would like to help us in any way (covering the costs of
the opposition, promoting the discussion, etc.), please feel free to
contact us off-list.
Artur Fijalkowski (WarX)
Tomasz Kozlowski (odder)
Federico Leva (Nemo)
John Vandenberg (jayvdb)
== References ==
* [1] https://meta.wikimedia.org/wiki/File:Wikimedia_Community_Logo.svg
* [2]
https://lists.wikimedia.org/pipermail/wikimedia-l/2013-March/124715.html
* [3] https://meta.wikimedia.org/wiki/Talk:Community_Logo
* [4]
http://lists.wikimedia.org/pipermail/wikimedia-l/2013-March/124730.html
Heya folks :)
Here's your fresh serving of weekly news about all things Wikidata.
This time with the first work on paper cuts you reported (keep them
coming), oversight nominations and more.
https://meta.wikimedia.org/wiki/Wikidata/Status_updates/2013_09_20
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Technical Projects
Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de