On Wed, Jul 30, 2014 at 9:11 AM, Luis Villa <lvilla(a)wikimedia.org> wrote:
>
> On Tue, Jul 29, 2014 at 7:32 PM, Gaurav Vaidya <gaurav(a)ggvaidya.com>
> wrote:
>
>> - This includes 363 license templates that indicate licensing for
>> Commons files under public domain, Creative Commons and other open access
>> licenses. These were created by bots and still require verification before
>> use. They are listed at
>> http://mappings.dbpedia.org/index.php/Category:Commons_media_license
>>
>
> Interesting!
>
Good to hear that :)
> Is there documentation somewhere on how you ended up with those particular
> 363 licenses? Failing that, a pointer at the relevant code would be welcome
> :)
>
This involved some manual work to gather the related templates and a bot to
import them into the DBpedia mappings wiki. See the following links for
details:
https://commons.wikimedia.org/wiki/User:Gaurav/DBpedia/dcterms:license
https://github.com/gaurav/extraction-framework/issues/16
https://github.com/gaurav/extraction-framework/issues/18
https://github.com/gaurav/extraction-framework/issues/20
https://github.com/gaurav/extraction-framework/pull/30
The way Gaurav and I designed it, there is no need to write any code to
change an existing licence mapping or add a new one; you just need to
request editor rights on the DBpedia mappings wiki (
http://mappings.dbpedia.org)
Hard-coding this instead could probably give us more fine-grained control,
but it would be much harder to adjust.
Best,
Dimitris
>
> Thanks-
> Luis
>
>
> --
> Luis Villa
> Deputy General Counsel
> Wikimedia Foundation
> 415.839.6885 ext. 6810
>
> *This message may be confidential or legally privileged. If you have
> received it by accident, please delete it and let us know about the
> mistake. As an attorney for the Wikimedia Foundation, for legal/ethical
> reasons I cannot give legal advice to, or serve as a lawyer for, community
> members, volunteers, or staff members in their personal capacity. For more
> on what this means, please see our legal disclaimer
> <https://meta.wikimedia.org/wiki/Wikimedia_Legal_Disclaimer>.*
>
> _______________________________________________
> Wikidata-l mailing list
> Wikidata-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata-l
>
>
--
Kontokostas Dimitris
Hi everybody,
We are happy to announce an experimental RDF dump of Wikimedia Commons. A complete first draft is now available online at http://nl.dbpedia.org/downloads/commonswiki/20140705/ and will eventually be accessible from http://commons.dbpedia.org. A small sample dataset, which may be easier to browse, is available on GitHub at https://github.com/gaurav/commons-extraction/tree/master/commonswiki/201401…
The following datasets showcase some of the improvements that we’ve been working on over the last two months (a small parsing sketch follows the list):
- File information (*-file-information.*) is a completely new dataset that contains information on the files in the Commons, including file and thumbnail URLs, file extensions, file type classes and MIME types.
- DBpedia’s Mappings Extractor (*-mappingbased-properties.*) uses templates stored on the Mapping server (http://mappings.dbpedia.org/) to create RDF for information-rich templates. This system still has some important limitations, such as not being able to process embedded templates (e.g. license templates inside {{Information}}), but top-level templates are completely configurable. The existing mappings are available at http://mappings.dbpedia.org/index.php/Mapping_commons
- This includes 363 license templates that indicate licensing for Commons files under public domain, Creative Commons and other open access licenses. These were created by bots and still require verification before use. They are listed at http://mappings.dbpedia.org/index.php/Category:Commons_media_license
- The DBpedia Geoextractor (*-geo-coordinates.*) now extracts geographical coordinates from Commons files using the {{Location}} template.
- The DBpedia SKOS Extractor (*-skos-categories.*) now identifies relationships between Commons categories, building a SKOS-based description of the entire Commons category tree.
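If you want to poke at the dumps programmatically, here is a rough sketch (not part of our extraction code) that counts MIME types in the file-information dataset. It assumes the dump is plain N-Triples and that the MIME-type predicate IRI ends in "mimeType"; both the predicate name and the file name below are guesses that you should check against the actual files before relying on them.

import * as fs from "fs";
import * as readline from "readline";

// Rough N-Triples pattern: <subject> <predicate> object .
const TRIPLE = /^<([^>]*)>\s+<([^>]*)>\s+(.+)\s+\.$/;

async function countMimeTypes(path: string): Promise<Map<string, number>> {
  const counts = new Map<string, number>();
  const rl = readline.createInterface({ input: fs.createReadStream(path) });
  for await (const line of rl) {
    const m = TRIPLE.exec(line.trim());
    if (!m) continue;
    const [, , predicate, obj] = m;
    // "mimeType" is a guessed predicate suffix, not confirmed by the dump.
    if (predicate.endsWith("mimeType")) {
      counts.set(obj, (counts.get(obj) ?? 0) + 1);
    }
  }
  return counts;
}

// Placeholder file name; substitute the actual file-information dump.
countMimeTypes("commonswiki-file-information.nt").then(c => console.log(c));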
Please have a look and let us know what you think. We’ll be working on a number of open tasks over the next three weeks, listed at https://github.com/gaurav/extraction-framework/issues?state=open -- if you see something wrong with what we’ve done above, or have an issue you’d particularly like us to tackle, please report it there or drop me an e-mail!
This work is sponsored by the Google Summer of Code program
(https://www.google-melange.com/gsoc/project/details/google/gsoc2014/gaurav/…).
Thanks!
cheers,
The DBpedia Commons extraction team:
Gaurav Vaidya
Dimitris Kontokostas
Andrea Di Menna
Jimmy O’Regan
Commons uses the freely-licensed Ogg and WebM formats
<http://commons.wikimedia.org/wiki/Commons:File_types#Sound> for media
files; unfortunately these are not supported by default in Safari and
Internet Explorer, as Apple and Microsoft favor a competing format. Manual
codec installation goes against modern user expectations, and isn't
possible in some environments.
I've spent some research time working on ogv.js
<https://github.com/brion/ogv.js>, which uses Mozilla's emscripten
<http://emscripten.org/> and Adobe's CrossBridge
<http://adobe-flash.github.io/crossbridge/> to cross-compile the Ogg Vorbis
and Theora codecs to JavaScript and Flash. This allows decoding and playing
Ogg media in the browser without additional software installation.
Here's a live demo wiki with ogv.js embedded into the player widget:
http://ogvjs-testing.wmflabs.org/wiki/Demo
On Firefox, Chrome, or Opera you'll continue to get native Ogg or WebM
playback; on Safari 6.1/7 and iOS 7 Mobile Safari you get the JavaScript
Ogg player, and on IE 9/10/11 you get the Flash Ogg player. (Microsoft
lists Web Audio as "in development <http://status.modern.ie/webaudioapi>"
for future IE versions, which will enable use of the pure JS version there
as well.)
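To make the fallback logic concrete, here is a rough TypeScript sketch of how a player widget might decide between native playback, the JavaScript decoder, and the Flash build. This is just my reading of the behaviour described above, not the actual ogv.js integration code.

function pickOggPlayback(): "native" | "js" | "flash" {
  const probe = document.createElement("video");
  // Firefox, Chrome, and Opera can play Theora/Vorbis natively.
  if (probe.canPlayType('video/ogg; codecs="theora, vorbis"') !== "") {
    return "native";
  }
  // Safari 6.1/7 and iOS 7 expose Web Audio, which is enough for the
  // pure JavaScript decoder.
  const w = window as any;
  if (typeof w.AudioContext !== "undefined" ||
      typeof w.webkitAudioContext !== "undefined") {
    return "js";
  }
  // IE 9/10/11: no Web Audio yet, so use the CrossBridge/Flash build.
  return "flash";
}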
This is very much a work in progress, but I'm pretty confident this is
something we can deploy later this year to get basic A/V playback to "just
work" for another chunk of our users.
I'll be presenting some further status updates & related topics at my
Wikimania talk.
Note that the Flash code used for IE is entirely open-source and uses none
of the proprietary multimedia codecs built into Flash. I consider this a
delightfully subversive use of Flash, and it would please me no end to get
it deployed on Wikimedia. :)
-- brion
Hoi.
I am very sorry to have learned that Amir Ladsgroup may not enter the
United Kingdom for Wikimania. As you may know, Amir is crucial to the
development of the Pywikibot software.
Not having him present is awful. Obviously other people can present, but it
is not the same as Amir presenting. It may be possible to have him present
via Skype or Hangout.
Would this be an option?
For your information, the amount of effort that has gone into acquiring a
visa for Amir has been huge. A big thank you to all the people involved. It
is sad that all the effort did not produce the result we all wished for.
Thanks,
GerardM
Thank you very much.
Wishing you luck.
--
Never say never; rather say: thank you, excuse me, sorry.
This message has reached you via the e-mail service that Infomed provides to support the missions of the National Health System. The sender of this message undertakes to use the service for those purposes and to comply with the established regulations.
Infomed: http://www.sld.cu/
As of now, images of structural formulas have to be created using third-party
software and the output converted to SVG or PNG. With MolHandler
we aim for a solution capable of accepting and rendering chemical markup
files and providing a web interface for easily creating, modifying and
re-mixing formula files. This not only makes re-using existing
structures easier and simplifies creating new ones; it also
allows wikis to adopt a unified style for rendering these structures,
makes structures searchable (sub-structure search), and allows pulling,
pushing and verifying data from large databases like ChemSpider and
PubChem. In the future we plan to add support for spectra and more
sophisticated file formats, so that chemistry-related wiki work has at
least some minimum level of support.
I am currently looking for features you would find helpful, as well as
your opinion of what is needed to deploy MolHandler to Wikimedia Commons,
and have therefore created a test wiki[1] on which you can create user
accounts. A non-exhaustive list of features is available there for ranking
by drag & drop. Or just write here what you consider essential, what you
would like to see soon, and what is less important to you.
-- Rillke
[1] http://mol.wmflabs.org/
Yesterday a number of highly voted requests on yahoo.uservoice.com were
closed on the basis that a redesign of the redesign has fixed them. It
seems the update is only available in English, and I can't find a way to
change the interface language (!). Can someone who sees it please give some
insight?
Nemo
----
Dear Flickr members,
We recently launched a new photo page towards the end of June. Because
of the feedback from you, we’re moving the photo page in a direction
that more closely resembles previous iterations of the product, but with
contemporary design and the new framework that delivers photos so much
faster than before.
These are the advantages of the new photo page:
* Moved the photo information back, so now it appears below the photo.
* The comments have been moved back below the photo as well.
* The text is now black on a white background.
If you have any feedback regarding the new photo page, positive or
negative, please feel free to submit it using a new thread.
Please be aware that we are actively listening to feedback from our
users to improve Flickr and your user experience.
12 GB of metadata and 50 TB of files released by Yahoo!. «Plus about 49
million of the photos are geotagged!»
http://yahoolabs.tumblr.com/post/89783581601/one-hundred-million-creative-c…
Can this dataset help link projects' articles and Wikidata items to the
images we are missing? I can imagine
* a FIST expansion querying the data in some way to find Flickr images
"nearby" a geocoded page of ours (a rough filtering sketch follows this list),
* a bot mapping photo IDs from geolocated Wikidata entries and then a
bot importing to Commons those we lack,
* a Wikidata Game to aid automation in some of the above.
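As a very rough illustration of the first idea, the sketch below filters the metadata dump for freely licensed, geotagged photos inside a bounding box around a geocoded article. It assumes the dump is a tab-separated file with per-photo latitude, longitude and license-URL columns; the column indices are placeholders and must be checked against the real schema before any of this is used.

import * as fs from "fs";
import * as readline from "readline";

interface BBox { minLat: number; maxLat: number; minLon: number; maxLon: number; }

// Placeholder column indices; verify against the actual dump schema.
const ID_COL = 0, LAT_COL = 10, LON_COL = 11, LICENSE_COL = 15;
const FREE_LICENSES = [
  "creativecommons.org/licenses/by/",
  "creativecommons.org/licenses/by-sa/",
];

async function freePhotosNear(dumpPath: string, box: BBox): Promise<string[]> {
  const hits: string[] = [];
  const rl = readline.createInterface({ input: fs.createReadStream(dumpPath) });
  for await (const line of rl) {
    const cols = line.split("\t");
    const lat = parseFloat(cols[LAT_COL]);
    const lon = parseFloat(cols[LON_COL]);
    if (Number.isNaN(lat) || Number.isNaN(lon)) continue; // not geotagged
    if (lat < box.minLat || lat > box.maxLat) continue;
    if (lon < box.minLon || lon > box.maxLon) continue;
    if (!FREE_LICENSES.some(l => (cols[LICENSE_COL] ?? "").includes(l))) continue;
    hits.push(cols[ID_COL]); // photo ID, to be matched against pages/items later
  }
  return hits;
}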
Where an image is under a non-free Creative Commons license, we can
flickrmail the author and ask them to relicense it; the success rate is
typically high (we could also do this by bot, if we can automatically write
a message mentioning the specific files and the pages where we'd use them).
Nemo