It looks like a solution to bug 4547 is on the horizon.
https://bugzilla.wikimedia.org/show_bug.cgi?id=4547
See also [Wikitech-l] Reasonably efficient interwiki transclusion
http://www.gossamer-threads.com/lists/wiki/wikitech/197322
This will be very useful for templates which Commons has developed,
especially language-related templates. However, I am concerned that
people are also planning to use Commons as a repo for Wikipedia
infoboxes, and to include the *data* on Commons rather than just the
template code. e.g.
http://www.mediawiki.org/wiki/User:Peter17/GSoc_2010#Interest
This centralisation of data makes sense on many levels; however, using
Commons as the host of this data will result in many edit wars moving
to the Commons project, involving people from many languages. Even
the infobox structure can be the cause of edit wars.
I think it is undesirable to have these Wikipedia problems added to
Commons' existing problems. ;-)
Tying Wikipedia and Commons closer together is also problematic when
we consider the differing audience and scope of each project,
especially in light of the recent media problems. If the core
templates and data used by Wikipedia are hosted/modified on Commons,
it will be more difficult to justify why Commons accepts content which
isn't appropriate on Wikipedia.
A centralised data wiki has been proposed previously, many times:
http://meta.wikimedia.org/wiki/Wikidata/historical
http://meta.wikimedia.org/wiki/Wikidata
http://meta.wikimedia.org/wiki/Wikidata_%282%29
http://meta.wikimedia.org/wiki/WikiDatabank
Non-WMF projects, such as Freebase, DBpedia, etc., have been exploring
this space.
Isn't it time that we started a new project!? ;-)
A wikidata project could use Semantic MediaWiki from the outset, and
be seeded with data from DBpedia.
A lot of existing & proposed projects would benefit from a centralised
wikidata project. E.g. a genealogy wiki could use the relationships
stored on the wikidata project. Wikisource and Commons could use the
central data wiki for their Author and Creator details.
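To make the genealogy example concrete: DBpedia already exposes this kind of relationship data via SPARQL. Here is a sketch of the sort of query involved; dbo:parent comes from the DBpedia ontology, and the query-building helper itself is purely illustrative:

```python
# Illustrative only: build a SPARQL query for the parents of a DBpedia
# resource. A genealogy wiki (or a script seeding a central data wiki)
# could send a query like this to DBpedia's public SPARQL endpoint.
def relationship_query(person):
    return (
        "PREFIX dbo: <http://dbpedia.org/ontology/>\n"
        "PREFIX dbr: <http://dbpedia.org/resource/>\n"
        "SELECT ?parent WHERE { dbr:%s dbo:parent ?parent }" % person
    )

query = relationship_query("Charles_Darwin")
```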
--
John Vandenberg
For the new uploader I'm working on[1], we want it to remember your
previous preferences about what license to use and maybe a few other things.
Here's what I'm thinking about:
- We add a new preference for preferred license.
- If present, this prefills the upload form with a license.
- If absent, no license is prefilled.
- Whatever you pick in this form overwrites the preference. That is,
uploading a file has the side effect of storing a preference for the
next time.
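The behaviour above can be sketched in a few lines. The names here (UploadForm, preferred_license, a dict as the preference store) are illustrative, not the real MediaWiki code:

```python
# Sketch of the proposed behaviour: a stored preference prefills the
# upload form, and whatever the user actually picks is written back as
# the preference for next time.
class UploadForm:
    def __init__(self, prefs):
        self.prefs = prefs  # per-user preference store (a dict here)

    def prefill_license(self):
        # If a preferred license is present, prefill the form with it;
        # if absent, returns None and nothing is prefilled.
        return self.prefs.get("preferred_license")

    def submit(self, chosen_license):
        # Uploading has the side effect of storing the chosen license
        # as the preference for the next upload.
        self.prefs["preferred_license"] = chosen_license
        return chosen_license

prefs = {}
form = UploadForm(prefs)
form.prefill_license()       # None on the first upload
form.submit("CC-BY-SA-3.0")  # later uploads now prefill this license
```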
I realize this doesn't capture every edge case, but the point is to get
behaviour that's simple enough for most people to actually use, and
that experienced users can work with.
Our main difficulty at this point in the upload form is going to be
explaining licenses. I don't want to ask users to understand
something else like "Do you want to keep the same license choice for
next time?" before they've even finished uploading this one. If it turns
out that we have to ask them about this explicitly, I'd rather leave
that to the end of the process.
[1] http://commons.prototype.wikimedia.org/ -- this is JUST a prototype,
we're changing a lot
--
Neil Kandalgaonkar |) <neilk(a)wikimedia.org>
This is a sorta-technical sorta-copyright issue. InstantCommons is
great stuff, it spreads free content with correct attribution in a
marvellous manner.
But Commons contains a certain number of non-free files, specifically
Wikimedia logos and so forth. I just noticed (on a wiki using
InstantCommons) that these are served up through it just the same.
Is there any relatively simple way to stop this happening? (Including
not caring.)
- d.
On 22 July 2010 12:59, R M Harris <rmharris(a)sympatico.ca> wrote:
> I’ve posted a series of questions for discussion on the Meta page that hosts
> the study (http://meta.wikimedia.org/wiki/Talk:2010_Wikimedia_Study_of_Controversial_C….) Please feel free to visit the page and contribute to the
> discussion.
Looking at the contributors so far, I'm not sure that discussion is
recoverable to any form of usefulness.
- d.
Hello. It’s Robert Harris once again. It’s been just over a
month since I began working on the study commissioned by the Wikimedia Board on
Potentially Objectionable Content on WMF projects. During that time, I’ve
spoken to many people inside and outside Wikimedia, but the time has come, I
think, to actively begin a discussion within the communities about some of the
questions which I've encountered, specifically around Commons and
images within Commons. To that end,
I’ve posted a series of questions for discussion on the Meta page that hosts
the study (http://meta.wikimedia.org/wiki/Talk:2010_Wikimedia_Study_of_Controversial_C….) Please feel free to visit the page and contribute to the
discussion. And please post the link, if you might, anywhere within the
projects where you think it might be relevant.
I look forward to the comments of any of you who wish to join the discussion. I am, of course, especially interested in what members of Commons think about these things.
(Note: This announcement is also an update to James Owen's prior email
announcing Sue would be doing office hours this Friday. The date has
been moved up to Thursday. The time remains the same.)
On Thursday, July 22, the Wikimedia Office Hour will be hosted
by Sue Gardner, Executive Director of the Wikimedia Foundation. The
Office Hour is from 2230 to 2330 UTC (3:30 PM to 4:30 PM PDT).
If you do not have an IRC client, there are two ways you can come chat
using a web browser: First is using the Wikizine chat gateway at
<http://chatwikizine.memebot.com/cgi-bin/cgiirc/irc.cgi>. Type a
nickname, select irc.freenode.net from the top menu and
#wikimedia-office from the following menu, then login to join.
Also, you can access Freenode by going to http://webchat.freenode.net/,
typing in the nickname of your choice and choosing wikimedia-office as
the channel. You may be prompted to click through a security warning,
which you can click to accept.
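For anyone scripting a client instead of using the web gateways, the handshake behind them is just a few IRC messages. This sketch only builds the message sequence (per RFC 2812) and leaves out the socket work of actually sending it to irc.freenode.net:

```python
# Minimal sketch of the IRC messages a client sends to join the
# office-hour channel: pick a nickname, register a user, join.
def join_messages(nick, channel="#wikimedia-office"):
    return [
        "NICK %s" % nick,                   # nickname
        "USER %s 0 * :%s" % (nick, nick),   # user, mode, unused, realname
        "JOIN %s" % channel,                # join the channel
    ]

msgs = join_messages("example_nick")
```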
Please feel free to forward (and translate!) this email to any other
relevant email lists you happen to be on.
--
Cary Bass
Volunteer Coordinator, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
Hi,
Thank you for your mail. I'm out of the office till 18th July 2010 with no email access. In case of any urgent matter, please text me on my mobile (see Aria). I'll respond to your mail when I return.
Thank you,
Laszlo
On Fri, Jul 16, 2010 at 9:26 AM, Aubrey <zanni.andrea84(a)gmail.com> wrote:
>...
>
> The issue of metadata is nonetheless serious, because it's one of the most
> important flaws of Wikisource: not applying standards (i.e. Dublin Core) and not
> having proper tools to export/import and harvest metadata still makes us
> amateurs, at least to "real" digital libraries (which focus mainly on the
> metadata stuff, and sometimes provide either texts or images; it is really rare
> to have both).
This is also a problem with Wikimedia Commons.
http://strategy.wikimedia.org/wiki/Proposal:Dublin_Core
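To make the Dublin Core point concrete, here is a minimal sketch of what a per-file metadata export could look like. The element names (dc:title, dc:creator, dc:rights) come from the Dublin Core element set; the field values and the plain record wrapper are invented:

```python
# Sketch: serialise a file's metadata as Dublin Core elements using
# only the standard library.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"

def dublin_core_record(fields):
    ET.register_namespace("dc", DC)
    record = ET.Element("record")
    for name, value in fields.items():
        ET.SubElement(record, "{%s}%s" % (DC, name)).text = value
    return ET.tostring(record, encoding="unicode")

xml = dublin_core_record({
    "title": "Example.jpg",
    "creator": "Example Uploader",
    "rights": "CC-BY-SA-3.0",
})
```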
> The Perseus project is an *amazing* project, but I regard them as far
> ahead of us. The PP is actually a Virtual Research Environment, with tools for
> scholars and researchers to study texts (concordances and similar stuff).
I agree. I would go further; PP will always be far more advanced than
a MediaWiki system.
They store their data in TEI format, which is an extremely rich
standard. Wikisource can incorporate some of the TEI concepts by
using templates, but I doubt we could ever be a leader in this area,
nor do I think we want to.
http://en.wikipedia.org/wiki/Text_Encoding_Initiative
> It happens that I just finished my Master thesis about collaborative digital
> libraries for scholars (in the Italian context), and the outcome is quite clear:
> researchers do want collaborative tools in DLs, but wiki systems are
> too simple and (right now) too naive to really help scholars in their work (and
> there's a lot of other issues I'm not going to explain here).
>
> I would love to have PP people involved in collaboration with Wikisource, just
> don't know if this is possible.
I agree. PP and Wikisource are too different, and have very little to
gain from each other. PP wants to improve/increase collaboration &
community, but not at the expense of losing the quality of their
metadata. Wikisource wants to improve quality and metadata, but not
at the expense of our ability to collaborate and our simple editing
interface.
Again, interoperability is the first step towards useful
'collaboration', i.e. Wikisource needs to export TEI. Then we could
feed our poorly annotated/described sources into PP, where the
academic community would then add the metadata.
TEI export would also be useful for Wiktionary.
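As a sketch of what such an export could emit, here is a minimal TEI header built with the standard library. The element names (teiHeader/fileDesc/titleStmt) follow TEI; the helper and its values are illustrative, not an actual Wikisource exporter:

```python
# Sketch: emit a minimal TEI header for a work, so its basic
# bibliographic metadata could be fed into a TEI-based system like PP.
import xml.etree.ElementTree as ET

def tei_header(title, author):
    header = ET.Element("teiHeader")
    file_desc = ET.SubElement(header, "fileDesc")
    title_stmt = ET.SubElement(file_desc, "titleStmt")
    ET.SubElement(title_stmt, "title").text = title
    ET.SubElement(title_stmt, "author").text = author
    return ET.tostring(header, encoding="unicode")

xml = tei_header("On the Origin of Species", "Charles Darwin")
```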
> Just one more thing: why this awesome thread has not been linked to the
> source-l? Probably that would have been the best place to discuss.
;-)
--
John Vandenberg