Dear all,
As you might know, we're busily preparing the schedule for this year's OKCon, which is going to be in Berlin on the 30th June and 1st July.
We have had quite a few people asking us to extend the submission deadline. So here is your chance: submissions will be open until May 9th at midnight.
http://okcon.org/2011/submit/
If you have something interesting but *really* can't make this deadline, please contact us at okcon(a)okfn.org - we might make an exception.
If you haven't already seen it, you can check out the Call for Participation:
http://okcon.org/2011/cfp/
I'm also writing to say that we'd really like you (yes: you!) to come! Registration is open at:
http://okcon2011.eventbrite.com/
All the best,
Daniel
Following is a sneak preview of Speakers and Programme (please note this is not yet all confirmed).
Andreas Blumauer (Semantic Web Company)
Andreas Meiszner (United Nations University)
Chris Bizer (Free University Berlin)
Chris Taggart (Openly Local)
David Bollier (On the Commons)
Georgia Angelaki (Europeana)
Glyn Moody (Technology Writer)
James Boyle (Center for the Study of the Public Domain at Duke Law School)
Jaron Rowan (FCForum)
Jordan Hatcher (Open Data Commons)
Michel Bauwens (P2P Foundation)
Michelle Thorne (Creative Commons)
Nigel Shadbolt (Professor of Artificial Intelligence)
Nina Paley (Artist & Filmmaker)
Pippa Buchanan (School of Webcraft with Mozilla and P2PU)
Richard Stallman (Free Software Foundation)
Rufus Pollock (Open Knowledge Foundation)
Till Kreutzer (iRights)
Tom Lee (Sunlight Foundation)
Wouter Tebbens (Free Knowledge Institute)
Also, the draft programme is starting to take shape and will be published very soon.
--
Daniel Dietrich
The Open Knowledge Foundation
Promoting Open Knowledge in a Digital Age
www.okfn.org - www.opendefinition.org
Mail: daniel.dietrich(a)okfn.org
Mobile: +49 171 780 870 3
Skype: ddie22
Twitter: @ddie
On 4/2/11 5:59 AM, Jodi Schneider wrote:
>
> Yes--keeping the domain name is important. Otherwise, we break all
> links, and alienate existing users -- many of whom do not read these
> lists, and who may check the site infrequently. Since we're a nonprofit,
> we should ask about a discounted price.
OK, that makes sense. I am happy to make the inquiry later in the month,
or someone else could do it sooner.
> Further, we might want to change hosting again sometime in the future
> (for instance if Referata went away or significantly changed).
>
> I see that "Referata offers hosting of semantic wikis" but I hadn't
> heard of it before, though WikiWorks is well-known. What's your
> connection with Referata, and how stable are they? It appears that
> hosting is funded by the fees, with the free hosting just coming along
> for the ride...
No connection. I suggest it for three reasons:
1. It's a concrete proposal for folks to respond to.
2. Yaron is apparently one of the principals of Semantic MediaWiki and
thus invested in the MW/SMW community.
3. The only alternative I found in some brief searching was Wikia, which
seemed less desirable because leaving Wikia seems to be hard (people
report that Wikia leaves your wiki up even if you move somewhere else and
ask for it to be taken down) and because they would put ads on it.
I'm happy to chip in on funding if needed.
> No--the existing skin needs improvement. Is there info about the default
> Referata skin?
I did not check. They do seem to have the Vector skin available (that's
the new one that Wikipedia uses, right?).
> It's great to have your offer of help for the transition. But one
> challenge is ongoing technical leadership. I'd like some clarification
> from Referata about what is included in hosting. Any volunteers for
> technical administration would be welcome, too!
Does this answer your questions?
http://referata.com/wiki/Referata:Features
I've proposed the "Ad-supported" ($50/mo list) and "Enterprise" ($80/mo)
service levels. My reading of "ad-supported" is that we could put our own
ads on it if we want, but they don't put any of theirs; that's certainly
something to clarify.
I'm happy to be on top of technical issues and liaise with the vendor on
getting stuff fixed, on a best effort basis (i.e., no guarantees of
response time, no action if I'm on vacation, etc.). I have no interest
in hacking LocalSettings.php, etc., but it looks like the vendor would
take care of that.
I'm also happy to do the import/export tasks I proposed earlier.
Reid
I have recently been toying with the idea of having (undergraduate) students post reading summaries to a wiki, and the recent discussion on AcaWiki on this list leads me to post it here, though I can appreciate the argument that it is off topic.
So the idea is that students would do their normal reading responses and then, as part of a project, work together with students who happened to write responses on the same paper to create an appropriate summary somewhere.
Before getting to particulars, is there some other venue where this discussion would be more appropriate?
--
Regards,
Joseph Reagle http://reagle.org/joseph/
(Perhaps using speech recognition, sorry for any speakos.)
Dear everybody!
I hope that you are happy and fine!
I am looking for previous studies, data, writing, and suggestions on two issues I am working on at the moment:
* Wikipedia and emotions, from a plurality of perspectives (e.g. data from studies on the emotions that Wikipedians express, analyses of the Wikipedia case from the perspective of the sociology of emotions, etc.)
* Older people (people over 55 years old) participating in Wikimedia projects (e.g. statistics on participation, attempts to reach older people, etc.).
Thank you in advance. Have a nice day! Mayo
«·´`·.(*·.¸(`·.¸ ¸.·´)¸.·*).·´`·»
«·´¨*·¸¸« Mayo Fuster Morell ».¸.·*¨`·»
«·´`·.(¸.·´(¸.·* *·.¸)`·.¸).·´`·»
Research Digital Commons Governance: http://www.onlinecreation.info
Ph.D European University Institute
Postdoctoral Researcher. Institute of Government and Public Policies. Autonomous University of Barcelona.
Visiting scholar. Internet Interdisciplinary Institute. Open University of Catalonia (UOC).
Visiting researcher (2008). School of Information. University of California, Berkeley.
Member Research Committee. Wikimedia Foundation
http://www.onlinecreation.info
E-mail: mayo.fuster(a)eui.eu
Skype: mayoneti
Phone Spanish State: 0034-648877748
Hi everyone,
We are still looking for Wikipedia contributors for a brief interview
for our study on how Wikipedia works. Please consider participating.
Best regards,
Stine Eckert
See more details here:
Why Wikipedia works -- Wikipedia contributors for brief e-mail
interview needed
The Philip Merrill College of Journalism at the University of Maryland
is seeking Wikipedia contributors willing to participate in a brief
e-mail interview. If you have been contributing to Wikipedia and you
are over 18 years old, please consider participating in our study. We
will share the result of the study with you. Your information will be
confidential and your name will not be used. If you are interested
please e-mail
Dr. Linda Steiner at lsteiner(a)jmail.umd.edu or
Stine Eckert at keckert(a)jmail.umd.edu.
What are the recommended ways to recruit Wikipedians for a research study?
My thoughts are:
Specific recruitment (i.e. to particular populations/randomized samples):
- email?
- Talk page messages?
Generic recruitment:
- post to the Village Pump
- post to the appropriate project mailing list(s)
Does that seem right?
Anybody willing to share successful email/Talk page messages (offlist is fine)? I'm particularly concerned about giving sufficient info, striking the right tone, and not being spammy (perhaps a hard balance to hit!).
-Jodi
Dear colleagues,
If a wiki contains information originally published elsewhere, the
question arises of how the updated wiki version of such information
should be properly cited.
The Species ID wiki ( http://www.species-id.net/ ) has recently, in
collaboration with the journals ZooKeys and PhytoKeys as well as the
Plazi repository, imported a number of taxonomic treatments as wiki
pages. The above-mentioned issue was addressed by incorporating the
generic link to the wiki page into new journal publications and by
providing a suggested citation format on-wiki that includes the
original work along with a permalink to the most recent wiki version
and the wiki contributors up to that version.
For some example pages, see
http://species-id.net/wiki/Neobidessodes_darwiniensis or
http://species-id.net/wiki/Sinocallipus_catba .
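As a rough illustration (not the Species-ID wiki's own tooling), such a
versioned citation could be assembled from the standard MediaWiki API:
fetch the revision history, take the latest revision id, collect the
contributors up to that revision, and build a permalink. The api.php and
index.php URLs below are assumptions, only for the sake of the sketch.

import requests

API_URL = "http://species-id.net/w/api.php"      # assumed API endpoint
INDEX_URL = "http://species-id.net/w/index.php"  # assumed permalink base

def versioned_citation(title):
    # Ask for the revision history (newest first) with id, user and timestamp.
    params = {"action": "query", "prop": "revisions", "titles": title,
              "rvprop": "ids|user|timestamp", "rvlimit": "max",
              "format": "json"}
    data = requests.get(API_URL, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    revisions = page["revisions"]          # newest revision is listed first
    latest = revisions[0]
    contributors = sorted({r["user"] for r in revisions})
    permalink = "%s?title=%s&oldid=%s" % (INDEX_URL, title, latest["revid"])
    return "%s. Contributors: %s. Version of %s, permalink: %s" % (
        title, ", ".join(contributors), latest["timestamp"], permalink)

print(versioned_citation("Neobidessodes_darwiniensis"))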
The publisher's news release on the matter is at
http://www.pensoft.net/news.php?n=53 , and I have commented in my blog
at
http://www.science3point0.com/evomri/2011/04/16/citing-versioned-papers-rob…
, touching upon the need for a tailored karma system.
Comments and suggestions very welcome.
With my best wishes,
Daniel
--
http://www.google.com/profiles/daniel.mietchen
Hi all;
We know that websites are fragile and that broken links are common.
Wikimedia (and other communities like Wikia) publish dumps of their wikis,
but that is not common. Most wiki communities don't publish any backups, so
their users can't do anything when a disaster occurs (data loss, an attack)
or if they want to fork. Of course they can use Special:Export, but that
requires a huge manual effort, and the images are not downloaded.
I'm working on WikiTeam,[1] a group inside Archive Team, where we want to
archive wikis, from Wikipedia to the tiniest ones. As I said, Wikipedia
publishes backups, so there is no problem there. But I have developed a
script that downloads all the pages of a wiki (using Special:Export), merges
them into a single XML file (like the pages-history dumps), and downloads
all the images (if you enable that option). That is great if you want to
have a backup of your favorite wiki, clone a defunct wiki (abandoned by its
administrator), move your wiki from a free wiki farm to personal paid
hosting, etc.
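For anyone curious, here is a rough sketch in Python of the Special:Export
approach described above. This is not the WikiTeam script itself: it only
lists every page through the API and grabs the current revision of each via
Special:Export, concatenating the <page> blocks into one XML file, whereas
the real script also fetches full histories and images. The wiki URLs below
are placeholders.

import re
import requests

API = "http://example-wiki.org/w/api.php"      # placeholder API endpoint
INDEX = "http://example-wiki.org/w/index.php"  # placeholder index.php URL

def all_titles():
    # Walk through list=allpages using the API's generic continuation.
    params = {"action": "query", "list": "allpages",
              "aplimit": "500", "format": "json", "continue": ""}
    while True:
        data = requests.get(API, params=params).json()
        for p in data["query"]["allpages"]:
            yield p["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])

def export_page(title):
    # Special:Export/<title> returns the current revision of a page as XML.
    xml = requests.get(INDEX, params={"title": "Special:Export/" + title}).text
    match = re.search(r"<page>.*?</page>", xml, re.DOTALL)
    return match.group(0) if match else ""

# Simplified wrapper; a real dump keeps the full <mediawiki> header/siteinfo.
with open("backup.xml", "w", encoding="utf-8") as out:
    out.write("<mediawiki>\n")
    for title in all_titles():
        out.write(export_page(title) + "\n")
    out.write("</mediawiki>\n")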
Also, of course, you can use this script to retrieve the full history of a
wiki for research purposes, just like a Wikipedia dump.
We are running this script on several wikis and uploading the complete
histories to the download section,[2] building a little wiki library. Don't
be fooled by their sizes: they are 7zip files which usually expand to many
MB.
I hope you enjoy this script, make backups of your favorite wikis and
research them.
Regards,
emijrp
[1] http://code.google.com/p/wikiteam/
[2] http://code.google.com/p/wikiteam/downloads/list?can=1