I took a closer look at Linux Libertine for possible use as a webfont for
headers. Linux Libertine is a "classic" serif font that would match the
character of the site (i.e. it looks "encylopedic"). It has a wide
character coverage (over 2000 characters) and support for most ligatures.
It even has its own bug tracker (
http://sourceforge.net/p/linuxlibertine/bugs/). It's only shortcoming is
that it has an install base of pretty much no one.
Unfortunately, the WOFF file for the base font (not including bold, italic,
etc.) is 516K which is way too large to use as a webfont. I imagine this is
due to the font's character coverage. One option would be for us to fork
Linux Libertine, reduce the character coverage (for example, it's very rare
to need math and symbol glyphs in headers), and see if we can get it small
enough to try delivering as a webfont. This is probably not something we
could do immediately, but I think it's an idea worth looking at. Another
option would be seeing if we could convince some major Linux distros to
include it as a default font.
As far as it's aesthetic qualities (cover your ears, devs), it has been
positively reviewed by several design sites.[1] Apparently the font
designers put so much work into tweaking the kerning that it would cause
some older word processors to run out of kerning memory! You can see
samples of it here: http://www.linuxlibertine.org/index.php?id=86&L=1.
The
[1] See sidebar at http://www.linuxlibertine.org/index.php?id=2&L=1
Ryan Kaldari
On Mon, Feb 17, 2014 at 11:49 AM, May Tee-Galloway
<mgalloway(a)wikimedia.org>wrote:
> We've been testing out Open Sans on the apps team, it's an open source
> font. The goals with any font choice is high quality (legible, scannable,
> well-kerned, etc), has wide character set, and since every font has its own
> personality, we want the font choice to reflect us and our content, and
> among that is credible, neutral, and high quality.
>
> Not all fonts are created equal. Helvetica is very widely used not only
> because it's such a polished font but it was designed specifically to be
> the font that is neutral and to have no implied meanings like many fonts
> do. Sounds perfect, except for the not free part.
>
> We're actively looking and trying out helvetica neue alternative that's
> open source but it's been challenging. They either don't come with enough
> characters, not well-kerned, or has too much personality that is not us.
>
> I understand the preference for an open source font but we are giving up
> certain areas that are probably just as important as being open source like
> reading experience.
>
> As for Georgia or Helvetica, serif (Georgia) fonts are recommended with
> larger texts because they don't reduce well on screen. Sans serif
> (Helvetica) fonts are recommended with smaller texts because they retain
> their general character shapes better than serif fonts. [1] One might argue
> that our web body text is not that small, hence we can use serif. There are
> three reasons why I wouldn't recommend that. 1. Content looks large and
> fine on the web but when it's displayed on phones and tablets, it's not as
> big anymore to use serif. 2. Why don't we use serif on web and sans serif
> on other platforms? Because that causes inconsistency. Readers should
> experience the same experience regardless of platform. WP content should be
> the one that takes center stage, not "why is my content appearing different
> on my tablet or phone?" We have fallback font options only when we must
> choose an alternative. 3. Helvetica has a neutral font personality. Serif,
> on the other hand, has many implications like traditional, Roman, formal,
> etc. [2,3]
>
> We know the importance for using an open source font and we have been
> looking for an alternative. We also know that we care deeply for our
> reader's experience. Helvetica was chosen to use because it helped reflect
> our content type, it's high quality, has good amount of character set (and
> if it doesn't, it's fairly easy to find a similar-ish font to match). But I
> can't lie it's a beautiful font, I can assure you we didn't judge Helvetica
> by its cover though. ;P Hope this helps!
>
> [1]
> http://www.webdesignerdepot.com/2013/03/serif-vs-sans-the-final-battle/
> [2]
> http://psychology.wichita.edu/surl/usabilitynews/81/PersonalityofFonts.asp
> [3] http://opusdesign.us/to-be-or-not-to-be-the-serif-question/
>
> May
>
> On Feb 15, 2014, at 9:07 PM, Ryan Kaldari <rkaldari(a)wikimedia.org> wrote:
>
> Frankly, I think there has been a large degree of intransigence on both
> sides. The free font advocates have refused to identify the fonts that they
> want to be considered and why they should be considered other than the fact
> that they are free, and the designers have refused to take any initiative
> on considering free fonts. The free fonts that I know have been considered
> are:
> * DejaVu Serif. Conclusion: Widely installed, but horribly ugly and looks
> nothing like the style desired by the designers.
> * Nimbus Roman No9 L. Conclusion: Basically a clone of Times. Most Linux
> systems map Times to Nimbus Roman No9 L, so there is no advantage to
> specifying "Nimbus Roman No9 L" rather than "Times" (which also maps to
> fonts on Windows and Mac).
> * Linux Libertine. Conclusion: A well-designed free font that matches the
> look of the Wikipedia wordmark. Unfortunately, it is not installed by
> default on any systems (as far as anyone knows) but is bundled with
> LibreOffice as an application font. If MediaWiki were using webfonts, this
> would likely be the serif font of choice rather than Georgia, but since we
> are relying on pre-installed fonts, it would be rather pointless to list it.
> * Liberation Sans. Conclusion: Essentially a free substitute for Arial.
> Like Nimbus Roman, there is no advantage to specifying "Liberation Sans"
> instead of "Arial" (which is at the bottom of the sans-serif stack) since
> Linux systems will map to Liberation Sans anyway, while other systems will
> apply Arial.
>
> As to proving the quality of Georgia and Helvetica Neue, I don't think the
> designers have done that, but I also haven't seen any evidence from the
> free font advocates concerning the quality of any free fonts. So in my
> view, both sides of the debate have been delinquent.
>
> Ryan Kaldari
>
>
> On Sat, Feb 15, 2014 at 4:16 PM, Greg Grossmeier <greg(a)wikimedia.org>wrote:
>
>> <quote name="Steven Walling" date="2014-02-15" time="16:08:41 -0800">
>> > On Sat, Feb 15, 2014 at 3:59 PM, Greg Grossmeier <greg(a)wikimedia.org>
>> wrote:
>> >
>> > > <quote name="Federico Leva (Nemo)" date="2014-02-15" time="22:52:31
>> +0100">
>> > > > And surely, before WMF/"MediaWiki" tell the world that no free fonts
>> > > > of good quality exist, there will be some document detailing exactly
>> > > > why and based on what arguments/data/research the numerous free
>> > > > alternatives were all rejected? Free fonts developers are an
>> > > > invaluable resource for serving Wikimedia projects' content in all
>> > > > languages, we shouldn't carelessly slap them in their face.
>> > >
>> > > I just skimmed the entire thread again, and yes, this has been
>> requested
>> > > a few times but no one from the WMF Design team has responded with
>> that
>> > > analysis (or if would respond with an analysis). The first time it was
>> > > requested the person was told to ask the Design list, then the next
>> > > message CC'd the design list, but no response on that point.
>> > >
>> > > I don't see much on https://www.mediawiki.org/wiki/Typography_refresh
>> > > nor it's talk page. Nor
>> > > https://www.mediawiki.org/wiki/Wikimedia_Foundation_Design/Typography
>> > >
>> >
>> > There wasn't an answer because the question is a fundamental
>> > misunderstanding of the way CSS works and options that are within our
>> > reach. The question isn't "are there good free fonts?" the question is
>> "can
>> > we deliver good free fonts to all users?". I'll try to help the UX team
>> > document the answer better.
>>
>> Thanks.
>>
>> I may be part of the misunderstanding-of-how-things-work-in-font-land
>> contingent. Advice/clarity appreciated.
>>
>> Greg
>>
>>
>> --
>> | Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
>> | identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
>>
>> _______________________________________________
>> Design mailing list
>> Design(a)lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/design
>>
>
> _______________________________________________
> Design mailing list
> Design(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/design
>
>
> _______________________________________________
> Design mailing list
> Design(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/design
>
>
I took a closer look at Linux Libertine for possible use as a webfont for
headers. Linux Libertine is a "classic" serif font that would match the
character of the site (i.e. it looks "encylopedic"). It has a wide
character coverage (over 2000 characters) and support for most ligatures.
It even has its own bug tracker (
http://sourceforge.net/p/linuxlibertine/bugs/). It's only shortcoming is
that it has an install base of pretty much no one.
Unfortunately, the WOFF file for the base font (not including bold, italic,
etc.) is 516K which is way too large to use as a webfont. I imagine this is
due to the font's character coverage. One option would be for us to fork
Linux Libertine, reduce the character coverage (for example, it's very rare
to need math and symbol glyphs in headers), and see if we can get it small
enough to try delivering as a webfont. This is probably not something we
could do immediately, but I think it's an idea worth looking at. Another
option would be seeing if we could convince some major Linux distros to
include it as a default font.
As far as it's aesthetic qualities (cover your ears, devs), it has been
positively reviewed by several design sites.[1] Apparently the font
designers put so much work into tweaking the kerning that it would cause
some older word processors to run out of kerning memory! You can see
samples of it here: http://www.linuxlibertine.org/index.php?id=86&L=1.
The
[1] See sidebar at http://www.linuxlibertine.org/index.php?id=2&L=1
Ryan Kaldari
On Mon, Feb 17, 2014 at 11:49 AM, May Tee-Galloway
<mgalloway(a)wikimedia.org>wrote:
> We've been testing out Open Sans on the apps team, it's an open source
> font. The goals with any font choice is high quality (legible, scannable,
> well-kerned, etc), has wide character set, and since every font has its own
> personality, we want the font choice to reflect us and our content, and
> among that is credible, neutral, and high quality.
>
> Not all fonts are created equal. Helvetica is very widely used not only
> because it's such a polished font but it was designed specifically to be
> the font that is neutral and to have no implied meanings like many fonts
> do. Sounds perfect, except for the not free part.
>
> We're actively looking and trying out helvetica neue alternative that's
> open source but it's been challenging. They either don't come with enough
> characters, not well-kerned, or has too much personality that is not us.
>
> I understand the preference for an open source font but we are giving up
> certain areas that are probably just as important as being open source like
> reading experience.
>
> As for Georgia or Helvetica, serif (Georgia) fonts are recommended with
> larger texts because they don't reduce well on screen. Sans serif
> (Helvetica) fonts are recommended with smaller texts because they retain
> their general character shapes better than serif fonts. [1] One might argue
> that our web body text is not that small, hence we can use serif. There are
> three reasons why I wouldn't recommend that. 1. Content looks large and
> fine on the web but when it's displayed on phones and tablets, it's not as
> big anymore to use serif. 2. Why don't we use serif on web and sans serif
> on other platforms? Because that causes inconsistency. Readers should
> experience the same experience regardless of platform. WP content should be
> the one that takes center stage, not "why is my content appearing different
> on my tablet or phone?" We have fallback font options only when we must
> choose an alternative. 3. Helvetica has a neutral font personality. Serif,
> on the other hand, has many implications like traditional, Roman, formal,
> etc. [2,3]
>
> We know the importance for using an open source font and we have been
> looking for an alternative. We also know that we care deeply for our
> reader's experience. Helvetica was chosen to use because it helped reflect
> our content type, it's high quality, has good amount of character set (and
> if it doesn't, it's fairly easy to find a similar-ish font to match). But I
> can't lie it's a beautiful font, I can assure you we didn't judge Helvetica
> by its cover though. ;P Hope this helps!
>
> [1]
> http://www.webdesignerdepot.com/2013/03/serif-vs-sans-the-final-battle/
> [2]
> http://psychology.wichita.edu/surl/usabilitynews/81/PersonalityofFonts.asp
> [3] http://opusdesign.us/to-be-or-not-to-be-the-serif-question/
>
> May
>
> On Feb 15, 2014, at 9:07 PM, Ryan Kaldari <rkaldari(a)wikimedia.org> wrote:
>
> Frankly, I think there has been a large degree of intransigence on both
> sides. The free font advocates have refused to identify the fonts that they
> want to be considered and why they should be considered other than the fact
> that they are free, and the designers have refused to take any initiative
> on considering free fonts. The free fonts that I know have been considered
> are:
> * DejaVu Serif. Conclusion: Widely installed, but horribly ugly and looks
> nothing like the style desired by the designers.
> * Nimbus Roman No9 L. Conclusion: Basically a clone of Times. Most Linux
> systems map Times to Nimbus Roman No9 L, so there is no advantage to
> specifying "Nimbus Roman No9 L" rather than "Times" (which also maps to
> fonts on Windows and Mac).
> * Linux Libertine. Conclusion: A well-designed free font that matches the
> look of the Wikipedia wordmark. Unfortunately, it is not installed by
> default on any systems (as far as anyone knows) but is bundled with
> LibreOffice as an application font. If MediaWiki were using webfonts, this
> would likely be the serif font of choice rather than Georgia, but since we
> are relying on pre-installed fonts, it would be rather pointless to list it.
> * Liberation Sans. Conclusion: Essentially a free substitute for Arial.
> Like Nimbus Roman, there is no advantage to specifying "Liberation Sans"
> instead of "Arial" (which is at the bottom of the sans-serif stack) since
> Linux systems will map to Liberation Sans anyway, while other systems will
> apply Arial.
>
> As to proving the quality of Georgia and Helvetica Neue, I don't think the
> designers have done that, but I also haven't seen any evidence from the
> free font advocates concerning the quality of any free fonts. So in my
> view, both sides of the debate have been delinquent.
>
> Ryan Kaldari
>
>
> On Sat, Feb 15, 2014 at 4:16 PM, Greg Grossmeier <greg(a)wikimedia.org>wrote:
>
>> <quote name="Steven Walling" date="2014-02-15" time="16:08:41 -0800">
>> > On Sat, Feb 15, 2014 at 3:59 PM, Greg Grossmeier <greg(a)wikimedia.org>
>> wrote:
>> >
>> > > <quote name="Federico Leva (Nemo)" date="2014-02-15" time="22:52:31
>> +0100">
>> > > > And surely, before WMF/"MediaWiki" tell the world that no free fonts
>> > > > of good quality exist, there will be some document detailing exactly
>> > > > why and based on what arguments/data/research the numerous free
>> > > > alternatives were all rejected? Free fonts developers are an
>> > > > invaluable resource for serving Wikimedia projects' content in all
>> > > > languages, we shouldn't carelessly slap them in their face.
>> > >
>> > > I just skimmed the entire thread again, and yes, this has been
>> requested
>> > > a few times but no one from the WMF Design team has responded with
>> that
>> > > analysis (or if would respond with an analysis). The first time it was
>> > > requested the person was told to ask the Design list, then the next
>> > > message CC'd the design list, but no response on that point.
>> > >
>> > > I don't see much on https://www.mediawiki.org/wiki/Typography_refresh
>> > > nor it's talk page. Nor
>> > > https://www.mediawiki.org/wiki/Wikimedia_Foundation_Design/Typography
>> > >
>> >
>> > There wasn't an answer because the question is a fundamental
>> > misunderstanding of the way CSS works and options that are within our
>> > reach. The question isn't "are there good free fonts?" the question is
>> "can
>> > we deliver good free fonts to all users?". I'll try to help the UX team
>> > document the answer better.
>>
>> Thanks.
>>
>> I may be part of the misunderstanding-of-how-things-work-in-font-land
>> contingent. Advice/clarity appreciated.
>>
>> Greg
>>
>>
>> --
>> | Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
>> | identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
>>
>> _______________________________________________
>> Design mailing list
>> Design(a)lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/design
>>
>
> _______________________________________________
> Design mailing list
> Design(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/design
>
>
> _______________________________________________
> Design mailing list
> Design(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/design
>
>
Those were literally the first 10 random articles I encountered which
didn't have descriptions.
The tool that generates the descriptions deserves a lot more development.
> Magnus' tool is very much a prototype, and represents a tiny glimpse of
> what's possible. Looking at its current output is a straw man.
It's not a straw man at all - it's a baseline to move the discussion away
from the abstract. We need to start looking at real examples.
One of my main concerns is "a lot more development" is actually an
understatement as many of the optimizations will be language dependent.
On Wed, Aug 19, 2015 at 2:57 AM, Magnus Manske <magnusmanske(a)googlemail.com>
wrote:
> Oh, and as for examples, random-paging just got me this:
>
> https://en.wikipedia.org/wiki/Jules_Malou
>
> Manual description: Belgian politician
>
> Automatic description: Belgian politician and lawyer, Prime Minister of
> Belgium, and member of the Chamber of Representatives of Belgium
> (1810–1886) ♂
>
> I know which one I'd prefer...
>
>
> On Wed, Aug 19, 2015 at 10:50 AM Magnus Manske <
> magnusmanske(a)googlemail.com> wrote:
>
>> Thank you Dmitry! Well phrased and to the point!
>>
>> As for "templating", that might be the worst of both worlds; without the
>> flexibility and over-time improvement of automatic descriptions, but making
>> it harder for people to enter (compared to "free-style" text). We have a
>> Visual Editor on Wikipedia for a reason :-)
>>
>>
>>
>> On Wed, Aug 19, 2015 at 4:07 AM Dmitry Brant <dbrant(a)wikimedia.org>
>> wrote:
>>
>>> My thoughts, as ever(!), are as follows:
>>>
>>> - The tool that generates the descriptions deserves a lot more
>>> development. Magnus' tool is very much a prototype, and represents a tiny
>>> glimpse of what's possible. Looking at its current output is a straw man.
>>> - Auto-generated descriptions work for current articles, and *all
>>> future articles*. They automatically adapt to updated data. They
>>> automatically become more accurate as new data is added.
>>> - When you edit the descriptions yourself, you're not really making a
>>> meaningful contribution to the *data* that underpins the given Wikidata
>>> entry; i.e. you're not contributing any new information. You're simply
>>> paraphrasing the first sentence or two of the Wikipedia article. That can't
>>> possibly be a productive use of contributors' time.
>>>
>>> As for Brian's suggestion:
>>> It would be a step forward; we can even invent a whole template-type
>>> syntax for transcluding bits of actual data into the description. But IMO,
>>> that kind of effort would still be better spent on fully-automatic
>>> descriptions, because that's the ideal that semi-automatic descriptions can
>>> only approach.
>>>
>>>
>>> On Tue, Aug 18, 2015 at 10:36 PM, Brian Gerstle <bgerstle(a)wikimedia.org>
>>> wrote:
>>>
>>>> Could there be a way to have our nicely curated description cake and
>>>> eat it too? For example, interpolating data into the description and/or
>>>> marking data points which are referenced in the description (so as to mark
>>>> it as outdated when they change)?
>>>>
>>>> I appreciate the potential benefits of generated descriptions (and
>>>> other things), but Monte's examples might have swayed me towards human
>>>> curated—when available.
>>>>
>>>> On Tuesday, August 18, 2015, Monte Hurd <mhurd(a)wikimedia.org> wrote:
>>>>
>>>>> Ok, so I just did what I proposed. I went to random enwiki articles
>>>>> and described the first ten I found which didn't already have descriptions:
>>>>>
>>>>>
>>>>> - "Courage Under Fire", *1996 film about a Gulf War friendly-fire
>>>>> incident*
>>>>>
>>>>> - "Pebasiconcha immanis", *largest known species of land snail,
>>>>> extinct*
>>>>>
>>>>> - "List of Kenyan writers", *notable Kenyan authors*
>>>>>
>>>>> - "Solar eclipse of December 14, 1917", *annular eclipse which lasted
>>>>> 77 seconds*
>>>>>
>>>>> - "Natchaug Forest Lumber Shed", *historic Civilian Conservation
>>>>> Corps post-and-beam building*
>>>>>
>>>>> - "Sun of Jamaica (album)", *debut 1980 studio album by Goombay Dance
>>>>> Band*
>>>>>
>>>>> - "E-1027", *modernist villa in France by architect Eileen Gray*
>>>>>
>>>>> - "Daingerfield State Park", *park in Morris County, Texas, USA,
>>>>> bordering Lake Daingerfield*
>>>>>
>>>>> - "Todo Lo Que Soy-En Vivo", *2014 Live album by Mexican pop singer
>>>>> Fey*
>>>>>
>>>>> - "2009 UEFA Regions' Cup", *6th UEFA Regions' Cup, won by Castile
>>>>> and Leon*
>>>>>
>>>>>
>>>>>
>>>>> And here are the respective descriptions from Magnus' (quite
>>>>> excellent) autodesc.js:
>>>>>
>>>>>
>>>>>
>>>>> - "Courage Under Fire", *1996 film by Edward Zwick, produced by John
>>>>> Davis and David T. Friendly from United States of America*
>>>>>
>>>>> - "Pebasiconcha immanis", *species of Mollusca*
>>>>>
>>>>> - "List of Kenyan writers", *Wikimedia list article*
>>>>>
>>>>> - "Solar eclipse of December 14, 1917", *solar eclipse*
>>>>>
>>>>> - "Natchaug Forest Lumber Shed", *Construction in Connecticut, United
>>>>> States of America*
>>>>>
>>>>> - "Sun of Jamaica (album)", *album*
>>>>>
>>>>> - "E-1027", *villa in Roquebrune-Cap-Martin, France*
>>>>>
>>>>> - "Daingerfield State Park", *state park and state park of a state of
>>>>> the United States in Texas, United States of America*
>>>>>
>>>>> - "Todo Lo Que Soy-En Vivo", *live album by Fey*
>>>>>
>>>>> - "2009 UEFA Regions' Cup", *none*
>>>>>
>>>>>
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> Just trying to make my own bold assertions falsifiable :)
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Aug 18, 2015 at 6:32 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>> wrote:
>>>>>
>>>>>> The whole human-vs-extracted descriptions quality question could be
>>>>>> fairly easy to test I think:
>>>>>>
>>>>>> - Pick, some number of articles at random.
>>>>>> - Run them through a description extraction script.
>>>>>> - Have a human describe the same articles with, say, the app
>>>>>> interface I demo'ed.
>>>>>>
>>>>>> If nothing else this exercise could perhaps make what's thus far been
>>>>>> a wildly abstract discussion more concrete.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 18, 2015 at 6:17 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>>> wrote:
>>>>>>
>>>>>>> If having the most elegant description extraction mechanism was the
>>>>>>> goal I would totally agree ;)
>>>>>>>
>>>>>>> On Tue, Aug 18, 2015 at 5:19 PM, Dmitry Brant <dbrant(a)wikimedia.org>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> IMO, allowing the user to edit the description is a missed
>>>>>>>> opportunity to make the user edit the actual *data*, such that the
>>>>>>>> description is generated correctly.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Aug 18, 2015 at 8:02 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> IMO, if the goal is quality, then human curated descriptions are
>>>>>>>>> superior until such time as the auto-generation script passes the Turing
>>>>>>>>> test ;)
>>>>>>>>>
>>>>>>>>> I see these empty descriptions as an amazing opportunity to give
>>>>>>>>> *everyone* an easy new way to edit. I whipped an app editing interface up
>>>>>>>>> at the Lyon hackathon:
>>>>>>>>> bluetooth720 <https://www.youtube.com/watch?v=6VblyGhf_c8>
>>>>>>>>>
>>>>>>>>> I used it to add a couple hundred descriptions in a single day
>>>>>>>>> just by hitting "random" then adding descriptions for articles which didn't
>>>>>>>>> have them.
>>>>>>>>>
>>>>>>>>> I'd love to try a limited test of this in production to get a
>>>>>>>>> sense for how effective human curation can be if the interface is easy to
>>>>>>>>> use...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Aug 18, 2015 at 1:25 PM, Jan Ainali <
>>>>>>>>> jan.ainali(a)wikimedia.se> wrote:
>>>>>>>>>
>>>>>>>>>> Nice one!
>>>>>>>>>>
>>>>>>>>>> Does not appear to work on svwiki though. Does it have something
>>>>>>>>>> to do with that the wiki in question does not display that tagline?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Med vänliga hälsningar,Jan Ainali*
>>>>>>>>>>
>>>>>>>>>> Verksamhetschef, Wikimedia Sverige <http://wikimedia.se>
>>>>>>>>>> 0729 - 67 29 48
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Tänk dig en värld där varje människa har fri tillgång till
>>>>>>>>>> mänsklighetens samlade kunskap. Det är det vi gör.*
>>>>>>>>>> Bli medlem. <http://blimedlem.wikimedia.se>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2015-08-18 17:23 GMT+02:00 Magnus Manske <
>>>>>>>>>> magnusmanske(a)googlemail.com>:
>>>>>>>>>>
>>>>>>>>>>> Show automatic description underneath "From Wikipedia...":
>>>>>>>>>>> https://en.wikipedia.org/wiki/User:Magnus_Manske/autodesc.js
>>>>>>>>>>>
>>>>>>>>>>> To use, add:
>>>>>>>>>>> importScript ( 'User:Magnus_Manske/autodesc.js' ) ;
>>>>>>>>>>> to your common.js
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Aug 18, 2015 at 9:47 AM Jane Darnell <jane023(a)gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> It would be even better if this (short: 3 field max)
>>>>>>>>>>>> pipe-separated list was available as a gadget to wikidatans on Wikipedia
>>>>>>>>>>>> (like me). I can't see if a page I am on has an "instance of" (though it
>>>>>>>>>>>> should) and I can see the description thanks to another gadget (sorry no
>>>>>>>>>>>> idea which one that is). Often I will update empty descriptions, but if I
>>>>>>>>>>>> was served basic fields (so for a painting, the creator field), I would
>>>>>>>>>>>> click through to update that too.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Aug 18, 2015 at 9:58 AM, Federico Leva (Nemo) <
>>>>>>>>>>>> nemowiki(a)gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Jane Darnell, 15/08/2015 08:53:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes but even if the descriptions were just the contents of
>>>>>>>>>>>>>> fields
>>>>>>>>>>>>>> separated by a pipe it would be better than nothing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> +1, item descriptions are mostly useless in my experience.
>>>>>>>>>>>>>
>>>>>>>>>>>>> As for "get into production on Wikipedia" I don't know what it
>>>>>>>>>>>>> means, I certainly don't like 1) mobile-specific features, 2) overriding
>>>>>>>>>>>>> existing manually curated content; but it's good to 3) fill gaps. Mobile
>>>>>>>>>>>>> folks often do (1) and (2), if they *instead* did (3) I'd be very happy. :)
>>>>>>>>>>>>>
>>>>>>>>>>>>> Nemo
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Mobile-l mailing list
>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Dmitry Brant
>>>>>>>> Mobile Apps Team (Android)
>>>>>>>> Wikimedia Foundation
>>>>>>>> https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>> --
>>>> EN Wikipedia user page:
>>>> https://en.wikipedia.org/wiki/User:Brian.gerstle
>>>> IRC: bgerstle
>>>>
>>>>
>>>
>>>
>>> --
>>> Dmitry Brant
>>> Mobile Apps Team (Android)
>>> Wikimedia Foundation
>>> https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
>>>
>>>
>>> On Tue, Aug 18, 2015 at 10:36 PM, Brian Gerstle <bgerstle(a)wikimedia.org>
>>> wrote:
>>>
>>>> Could there be a way to have our nicely curated description cake and
>>>> eat it too? For example, interpolating data into the description and/or
>>>> marking data points which are referenced in the description (so as to mark
>>>> it as outdated when they change)?
>>>>
>>>> I appreciate the potential benefits of generated descriptions (and
>>>> other things), but Monte's examples might have swayed me towards human
>>>> curated—when available.
>>>>
>>>> On Tuesday, August 18, 2015, Monte Hurd <mhurd(a)wikimedia.org> wrote:
>>>>
>>>>> Ok, so I just did what I proposed. I went to random enwiki articles
>>>>> and described the first ten I found which didn't already have descriptions:
>>>>>
>>>>>
>>>>> - "Courage Under Fire", *1996 film about a Gulf War friendly-fire
>>>>> incident*
>>>>>
>>>>> - "Pebasiconcha immanis", *largest known species of land snail,
>>>>> extinct*
>>>>>
>>>>> - "List of Kenyan writers", *notable Kenyan authors*
>>>>>
>>>>> - "Solar eclipse of December 14, 1917", *annular eclipse which lasted
>>>>> 77 seconds*
>>>>>
>>>>> - "Natchaug Forest Lumber Shed", *historic Civilian Conservation
>>>>> Corps post-and-beam building*
>>>>>
>>>>> - "Sun of Jamaica (album)", *debut 1980 studio album by Goombay Dance
>>>>> Band*
>>>>>
>>>>> - "E-1027", *modernist villa in France by architect Eileen Gray*
>>>>>
>>>>> - "Daingerfield State Park", *park in Morris County, Texas, USA,
>>>>> bordering Lake Daingerfield*
>>>>>
>>>>> - "Todo Lo Que Soy-En Vivo", *2014 Live album by Mexican pop singer
>>>>> Fey*
>>>>>
>>>>> - "2009 UEFA Regions' Cup", *6th UEFA Regions' Cup, won by Castile
>>>>> and Leon*
>>>>>
>>>>>
>>>>>
>>>>> And here are the respective descriptions from Magnus' (quite
>>>>> excellent) autodesc.js:
>>>>>
>>>>>
>>>>>
>>>>> - "Courage Under Fire", *1996 film by Edward Zwick, produced by John
>>>>> Davis and David T. Friendly from United States of America*
>>>>>
>>>>> - "Pebasiconcha immanis", *species of Mollusca*
>>>>>
>>>>> - "List of Kenyan writers", *Wikimedia list article*
>>>>>
>>>>> - "Solar eclipse of December 14, 1917", *solar eclipse*
>>>>>
>>>>> - "Natchaug Forest Lumber Shed", *Construction in Connecticut, United
>>>>> States of America*
>>>>>
>>>>> - "Sun of Jamaica (album)", *album*
>>>>>
>>>>> - "E-1027", *villa in Roquebrune-Cap-Martin, France*
>>>>>
>>>>> - "Daingerfield State Park", *state park and state park of a state of
>>>>> the United States in Texas, United States of America*
>>>>>
>>>>> - "Todo Lo Que Soy-En Vivo", *live album by Fey*
>>>>>
>>>>> - "2009 UEFA Regions' Cup", *none*
>>>>>
>>>>>
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> Just trying to make my own bold assertions falsifiable :)
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Aug 18, 2015 at 6:32 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>> wrote:
>>>>>
>>>>>> The whole human-vs-extracted descriptions quality question could be
>>>>>> fairly easy to test I think:
>>>>>>
>>>>>> - Pick, some number of articles at random.
>>>>>> - Run them through a description extraction script.
>>>>>> - Have a human describe the same articles with, say, the app
>>>>>> interface I demo'ed.
>>>>>>
>>>>>> If nothing else this exercise could perhaps make what's thus far been
>>>>>> a wildly abstract discussion more concrete.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 18, 2015 at 6:17 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>>> wrote:
>>>>>>
>>>>>>> If having the most elegant description extraction mechanism was the
>>>>>>> goal I would totally agree ;)
>>>>>>>
>>>>>>> On Tue, Aug 18, 2015 at 5:19 PM, Dmitry Brant <dbrant(a)wikimedia.org>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> IMO, allowing the user to edit the description is a missed
>>>>>>>> opportunity to make the user edit the actual *data*, such that the
>>>>>>>> description is generated correctly.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Aug 18, 2015 at 8:02 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> IMO, if the goal is quality, then human curated descriptions are
>>>>>>>>> superior until such time as the auto-generation script passes the Turing
>>>>>>>>> test ;)
>>>>>>>>>
>>>>>>>>> I see these empty descriptions as an amazing opportunity to give
>>>>>>>>> *everyone* an easy new way to edit. I whipped an app editing interface up
>>>>>>>>> at the Lyon hackathon:
>>>>>>>>> bluetooth720 <https://www.youtube.com/watch?v=6VblyGhf_c8>
>>>>>>>>>
>>>>>>>>> I used it to add a couple hundred descriptions in a single day
>>>>>>>>> just by hitting "random" then adding descriptions for articles which didn't
>>>>>>>>> have them.
>>>>>>>>>
>>>>>>>>> I'd love to try a limited test of this in production to get a
>>>>>>>>> sense for how effective human curation can be if the interface is easy to
>>>>>>>>> use...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Aug 18, 2015 at 1:25 PM, Jan Ainali <
>>>>>>>>> jan.ainali(a)wikimedia.se> wrote:
>>>>>>>>>
>>>>>>>>>> Nice one!
>>>>>>>>>>
>>>>>>>>>> Does not appear to work on svwiki though. Does it have something
>>>>>>>>>> to do with that the wiki in question does not display that tagline?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Med vänliga hälsningar,Jan Ainali*
>>>>>>>>>>
>>>>>>>>>> Verksamhetschef, Wikimedia Sverige <http://wikimedia.se>
>>>>>>>>>> 0729 - 67 29 48
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Tänk dig en värld där varje människa har fri tillgång till
>>>>>>>>>> mänsklighetens samlade kunskap. Det är det vi gör.*
>>>>>>>>>> Bli medlem. <http://blimedlem.wikimedia.se>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2015-08-18 17:23 GMT+02:00 Magnus Manske <
>>>>>>>>>> magnusmanske(a)googlemail.com>:
>>>>>>>>>>
>>>>>>>>>>> Show automatic description underneath "From Wikipedia...":
>>>>>>>>>>> https://en.wikipedia.org/wiki/User:Magnus_Manske/autodesc.js
>>>>>>>>>>>
>>>>>>>>>>> To use, add:
>>>>>>>>>>> importScript ( 'User:Magnus_Manske/autodesc.js' ) ;
>>>>>>>>>>> to your common.js
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Aug 18, 2015 at 9:47 AM Jane Darnell <jane023(a)gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> It would be even better if this (short: 3 field max)
>>>>>>>>>>>> pipe-separated list was available as a gadget to wikidatans on Wikipedia
>>>>>>>>>>>> (like me). I can't see if a page I am on has an "instance of" (though it
>>>>>>>>>>>> should) and I can see the description thanks to another gadget (sorry no
>>>>>>>>>>>> idea which one that is). Often I will update empty descriptions, but if I
>>>>>>>>>>>> was served basic fields (so for a painting, the creator field), I would
>>>>>>>>>>>> click through to update that too.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Aug 18, 2015 at 9:58 AM, Federico Leva (Nemo) <
>>>>>>>>>>>> nemowiki(a)gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Jane Darnell, 15/08/2015 08:53:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes but even if the descriptions were just the contents of
>>>>>>>>>>>>>> fields
>>>>>>>>>>>>>> separated by a pipe it would be better than nothing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> +1, item descriptions are mostly useless in my experience.
>>>>>>>>>>>>>
>>>>>>>>>>>>> As for "get into production on Wikipedia" I don't know what it
>>>>>>>>>>>>> means, I certainly don't like 1) mobile-specific features, 2) overriding
>>>>>>>>>>>>> existing manually curated content; but it's good to 3) fill gaps. Mobile
>>>>>>>>>>>>> folks often do (1) and (2), if they *instead* did (3) I'd be very happy. :)
>>>>>>>>>>>>>
>>>>>>>>>>>>> Nemo
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Mobile-l mailing list
>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Dmitry Brant
>>>>>>>> Mobile Apps Team (Android)
>>>>>>>> Wikimedia Foundation
>>>>>>>> https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>> --
>>>> EN Wikipedia user page:
>>>> https://en.wikipedia.org/wiki/User:Brian.gerstle
>>>> IRC: bgerstle
>>>>
>>>>
>>>
>>>
>>> --
>>> Dmitry Brant
>>> Mobile Apps Team (Android)
>>> Wikimedia Foundation
>>> https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
>>>
>>> _______________________________________________
>>> Mobile-l mailing list
>>> Mobile-l(a)lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>
>>
> _______________________________________________
> Mobile-l mailing list
> Mobile-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>
>
Wonderful work, Miloš.
Some notes on edit count:
1. Some Wikipedias import all the versions of a translated article because
they believe that it's required for attribution (AFAIK it isn't). This, of
course, inflates the edit count in a completely artificial way, and sadly I
don't know how to filter this chaff.
2. Bot edits could probably be filtered out, but there are some very
different types of bots and it should be taken into account when measuring
community success. Some bots just create articles (Waray, Swedish, Dutch).
Some fix interlanguage links (not any longer, but it was huge everywhere
before 2013). Some auto-fix spelling, and it's a sign of a healthy
community (Hebrew, Catalan, and some others). Some are smarter than
AbuseFilter at reverting vandalism, and that's also a good sign.
3. Some sysops delete revisions with vandalism, which could simply be
reverted. I don't know how prevalent it is. More generally, deleted
revisions could probably be counted in a useful way as part of this project.
בתאריך 14 ביוני 2015 14:14, "Milos Rancic" <millosh(a)gmail.com> כתב:
> I started writing a longer email, but then realized that it's better
> to stick with the most important points, as everything is anyway
> enough complex. Thus, just metrics and its applications, not anything
> else.
>
> While I was reloading a year ago my few years old idea to open
> Wikipedia in 3000 more languages, I realized that we have substantial
> problem. The most numerous communities have ~100 active (thus 5+
> edits/month) editors per million of speakers. As my hypothesis was
> that we could have Wikipedias in languages spoken by more than 10,000
> people, that would mean that at the best they would have 1 (one)
> active editor. Thus, something else has to be done... But before that,
> we have to gather data and have the idea what's that "something".
>
> My first idea -- something of a kind between a desperate one and "we
> should try something" -- was to ask people from Wikimedia Estonia,
> Wikimedia Finland and Wikimedia UK to try to reach as many as possible
> new active users on particular projects. The point is that Scottish
> Gaelic, Estonian and Finish are among the top in active users per
> million of speakers.
>
> A year later, Estonians are doing a very good job (others are good, as
> well). They are above 100 active users per million of speakers and in
> a couple of years they could reach even a couple of hundreds.
>
> But, there is an obvious flaw in this kind of reasoning and I was
> aware of it from the beginning: It's about languages spoken i rich
> countries, while we'll be dealing with the communities on the opposite
> end of wealth. However, at least it's possible to increase relative
> number of active users in "ideal" situations, which means that ~100
> active users per million of speakers is not a kind of realistic
> maximum.
>
> Thanks to the project Wiktionary meets Matica srpska, I am getting now
> more precise insights into Ethnologue data (don't ask me what's the
> relation, it was a couple of paragraphs long explanation inside of the
> email I didn't send).
>
> So, a month ago or so I got the first data and the news were very
> good: more than 5000 languages won't die during the next 100 years.
> More than 2500 languages are in very good shape. If we take for
> granted that Ethnologue's data are about languages.
>
> In the meantime, Sylvian mentioned on Languages list that he is
> working on Kichwa Wikipedia. And he noted one important thing: if we
> are going to have Wikipedias in languages like Kichwa is -- and that's
> likely the prototype for the most of the languages which we will meet
> in the future -- we have to adapt to them, not to impose unrealistic
> expectations to them. That's connected to the data, as I want to know
> what we could expect from them. (A note to self: literacy rate is very
> important parameter, as well.)
>
> It is also important to be able to follow numerically the development
> of particular community and give them know-how based on previous
> successful experiences.
>
> As we got more results from Ethnologue data, my ambitions raised. Of
> course I wanted to get number of articles per speaker. I got an
> approximate correlation between Wikipedia editions and Ethnologue
> data. Yes, of course, I knew that there are Wikipedia editions with a
> lot of bot-generated articles. So, I've cut data to languages with 5
> or more on Ethnologue language vitality scale and with the condition
> that the language has to have native speakers and I've got pretty sane
> results. Yes, Dutch and Swedish Wikipedias include a lot of
> bot-generated articles, but the number of articles in those langauges
> are quite fine in comparison with the rest of the projects.
>
> There are few arguments in favor of counting (even bot-generated) articles:
> * First, the most important flaw in analyzing such data is taking
> their synchrony, not the development. But synchrony is the starting
> point. By looking into development, we could monitor the number of new
> articles per month and we could easily conclude what's the normal
> state of the community and what's not.
> * Then it doesn't take a lot of efforts to create legitimate
> information on some of the topics by using bots. If legitimate
> articles, that gives us a clue about the capacity of particular
> community to create articles and thus spread free knowledge.
> * For example, if organized properly, it's not hard to create sane
> articles based on English (or Spanish or whichever) Wikipedia
> templates about actors and movies. That means that English (or Spanish
> or whichever) Wikipedia raises capacity of other Wikipedia editions,
> which is legitimate and quite relevant. It's relevant in the sense
> that we should particularly care about languages with large number of
> L2 speakers and languages used as international or regional lingua
> franca. In reverse note, we could conclude which languages have
> potential to create a lot of articles thanks to the fact that the
> speakers of that language are fluent in one of the big languages.
> That's also quite relevant for "gross capacity" to share knowledge in
> their own language.
> * The number of possible articles will always raise. Even for
> bot-generated articles. (Take as an example newly discovered planets
> outside of our solar system. For monolinguals, it's relevant to have
> that kind of information in their native language.) Thus,
> possibilities will raise and it's important to monitor capacities of
> the communities. Having a programmer raises capacity, obviously.
> Having a dexterous community member, capable to find a programmer
> inside of the movement willing to help creating a bot also counts.
>
> I've seen projects with a lot of edits and disproportionally small
> number of articles. From my perspective, it's better to have more
> articles than to have a lot of rollbacks and a lot of talk. Although
> the community itself is our most important value, our main task is to
> create articles, not to argue. Besides the fact that it could be a
> sign of bad community health.
>
> But there are many other possible indicators, which could work in the
> most of the cases. For example, edit count. From the first five
> projects by the number of articles, we could easily conclude that the
> ranks are: (1) English, (2) German, (3) French, (4) Dutch, (5)
> Swedish, not (1) English, (2) Swedish, (3) Dutch, (4) German, (5)
> French. (By taking a look into the other Wikipedias, we could see that
> even Chinese on 15th place is stronger than the Swedish Wikipedia on
> 2nd one.)
>
> Not counting English as world's primary lingua franca, It's also
> interesting to see that the edits per German and French speaker is
> roughly 1.5, while 0.6 in Russian case. Danish is ~1.7, Polish is
> ~1.05, Serbian is ~1.2, but Japanese is ~0.4 and Swahili ~0.05. (I
> made approximations without a calculator, thus error range is likely
> +-10% :) ) Thus, GDP/PPP per capita doesn't need to be that important
> factor (in the sense "if you reach particular GDP/PPP per capita, it's
> not anymore important factor"), while other things could be.
>
> It's also important to have in mind that various data are likely
> exposing various issues. And every issue has to be analyzed from
> socio-economic perspective (obviously, Japanese Wikipedia is not
> relatively weak because of the same reason as Russian or Swahili
> Wikipedia are).
>
> I will include as many parameters as possible in the future analysis.
> As I have now the number of speakers of particular language per
> country, it is possible now to correlate economic development with
> particular language.
>
> On Jun 13, 2015 09:38, "Federico Leva (Nemo)" <nemowiki(a)gmail.com> wrote:
> >
> > Asaf Bartov, 13/06/2015 02:42:
> >>
> >> The (already existing) metric of active-editors-per-million-speakers is,
> >> it seems to me, a far more robust metric. Erik Z.'s
> stats.wikimedia.org
> >> <http://stats.wikimedia.org> is offering that metric.
> >
> >
> > I personally agree on this in general, but Millosh is trying something
> different in his current quest, i.e. content ingestion and content coverage
> assessment, also for missing language subdomains. (By the way, I created
> the category, please add stuff:
> https://meta.wikimedia.org/wiki/Category:Content_coverage .)
> >
> > Mere article count tells us very little and he acknowledged it. As you
> added analytics: maybe when https://phabricator.wikimedia.org/T44259 is
> fixed we can also do fancy things like join various tables and count
> (countable) articles above a minimum threshold of hits, or something like
> that.
> >
> > Oh, and the total number of internal links in a wiki is also an
> interesting metric in many cases: they're often a good indicator of how
> curated a wiki globally is, while bot-created articles are often orphan.
> (Locally there might be overlinking but that's rarely a wiki-wide issue.) I
> don't remember how reliable the WikiStats numbers are, but they often give
> a good clue already.
> >
> > Nemo
> >
> > _______________________________________________
> > Languages mailing list
> > Languages(a)lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/languages
>
> _______________________________________________
> Languages mailing list
> Languages(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/languages
>
Wonderful work, Miloš.
Some notes on edit count:
1. Some Wikipedias import all the versions of a translated article because
they believe that it's required for attribution (AFAIK it isn't). This, of
course, inflates the edit count in a completely artificial way, and sadly I
don't know how to filter this chaff.
2. Bot edits could probably be filtered out, but there are some very
different types of bots and it should be taken into account when measuring
community success. Some bots just create articles (Waray, Swedish, Dutch).
Some fix interlanguage links (not any longer, but it was huge everywhere
before 2013). Some auto-fix spelling, and it's a sign of a healthy
community (Hebrew, Catalan, and some others). Some are smarter than
AbuseFilter at reverting vandalism, and that's also a good sign.
3. Some sysops delete revisions with vandalism, which could simply be
reverted. I don't know how prevalent it is. More generally, deleted
revisions could probably be counted in a useful way as part of this project.
בתאריך 14 ביוני 2015 14:14, "Milos Rancic" <millosh(a)gmail.com> כתב:
> I started writing a longer email, but then realized that it's better
> to stick with the most important points, as everything is anyway
> enough complex. Thus, just metrics and its applications, not anything
> else.
>
> While I was reloading a year ago my few years old idea to open
> Wikipedia in 3000 more languages, I realized that we have substantial
> problem. The most numerous communities have ~100 active (thus 5+
> edits/month) editors per million of speakers. As my hypothesis was
> that we could have Wikipedias in languages spoken by more than 10,000
> people, that would mean that at the best they would have 1 (one)
> active editor. Thus, something else has to be done... But before that,
> we have to gather data and have the idea what's that "something".
>
> My first idea -- something of a kind between a desperate one and "we
> should try something" -- was to ask people from Wikimedia Estonia,
> Wikimedia Finland and Wikimedia UK to try to reach as many as possible
> new active users on particular projects. The point is that Scottish
> Gaelic, Estonian and Finish are among the top in active users per
> million of speakers.
>
> A year later, Estonians are doing a very good job (others are good, as
> well). They are above 100 active users per million of speakers and in
> a couple of years they could reach even a couple of hundreds.
>
> But, there is an obvious flaw in this kind of reasoning and I was
> aware of it from the beginning: It's about languages spoken i rich
> countries, while we'll be dealing with the communities on the opposite
> end of wealth. However, at least it's possible to increase relative
> number of active users in "ideal" situations, which means that ~100
> active users per million of speakers is not a kind of realistic
> maximum.
>
> Thanks to the project Wiktionary meets Matica srpska, I am getting now
> more precise insights into Ethnologue data (don't ask me what's the
> relation, it was a couple of paragraphs long explanation inside of the
> email I didn't send).
>
> So, a month ago or so I got the first data and the news were very
> good: more than 5000 languages won't die during the next 100 years.
> More than 2500 languages are in very good shape. If we take for
> granted that Ethnologue's data are about languages.
>
> In the meantime, Sylvian mentioned on Languages list that he is
> working on Kichwa Wikipedia. And he noted one important thing: if we
> are going to have Wikipedias in languages like Kichwa is -- and that's
> likely the prototype for the most of the languages which we will meet
> in the future -- we have to adapt to them, not to impose unrealistic
> expectations to them. That's connected to the data, as I want to know
> what we could expect from them. (A note to self: literacy rate is very
> important parameter, as well.)
>
> It is also important to be able to follow numerically the development
> of particular community and give them know-how based on previous
> successful experiences.
>
> As we got more results from Ethnologue data, my ambitions raised. Of
> course I wanted to get number of articles per speaker. I got an
> approximate correlation between Wikipedia editions and Ethnologue
> data. Yes, of course, I knew that there are Wikipedia editions with a
> lot of bot-generated articles. So, I've cut data to languages with 5
> or more on Ethnologue language vitality scale and with the condition
> that the language has to have native speakers and I've got pretty sane
> results. Yes, Dutch and Swedish Wikipedias include a lot of
> bot-generated articles, but the number of articles in those langauges
> are quite fine in comparison with the rest of the projects.
>
> There are few arguments in favor of counting (even bot-generated) articles:
> * First, the most important flaw in analyzing such data is taking
> their synchrony, not the development. But synchrony is the starting
> point. By looking into development, we could monitor the number of new
> articles per month and we could easily conclude what's the normal
> state of the community and what's not.
> * Then it doesn't take a lot of efforts to create legitimate
> information on some of the topics by using bots. If legitimate
> articles, that gives us a clue about the capacity of particular
> community to create articles and thus spread free knowledge.
> * For example, if organized properly, it's not hard to create sane
> articles based on English (or Spanish or whichever) Wikipedia
> templates about actors and movies. That means that English (or Spanish
> or whichever) Wikipedia raises capacity of other Wikipedia editions,
> which is legitimate and quite relevant. It's relevant in the sense
> that we should particularly care about languages with large number of
> L2 speakers and languages used as international or regional lingua
> franca. In reverse note, we could conclude which languages have
> potential to create a lot of articles thanks to the fact that the
> speakers of that language are fluent in one of the big languages.
> That's also quite relevant for "gross capacity" to share knowledge in
> their own language.
> * The number of possible articles will always raise. Even for
> bot-generated articles. (Take as an example newly discovered planets
> outside of our solar system. For monolinguals, it's relevant to have
> that kind of information in their native language.) Thus,
> possibilities will raise and it's important to monitor capacities of
> the communities. Having a programmer raises capacity, obviously.
> Having a dexterous community member, capable to find a programmer
> inside of the movement willing to help creating a bot also counts.
>
> I've seen projects with a lot of edits and disproportionally small
> number of articles. From my perspective, it's better to have more
> articles than to have a lot of rollbacks and a lot of talk. Although
> the community itself is our most important value, our main task is to
> create articles, not to argue. Besides the fact that it could be a
> sign of bad community health.
>
> But there are many other possible indicators, which could work in the
> most of the cases. For example, edit count. From the first five
> projects by the number of articles, we could easily conclude that the
> ranks are: (1) English, (2) German, (3) French, (4) Dutch, (5)
> Swedish, not (1) English, (2) Swedish, (3) Dutch, (4) German, (5)
> French. (By taking a look into the other Wikipedias, we could see that
> even Chinese on 15th place is stronger than the Swedish Wikipedia on
> 2nd one.)
>
> Not counting English as world's primary lingua franca, It's also
> interesting to see that the edits per German and French speaker is
> roughly 1.5, while 0.6 in Russian case. Danish is ~1.7, Polish is
> ~1.05, Serbian is ~1.2, but Japanese is ~0.4 and Swahili ~0.05. (I
> made approximations without a calculator, thus error range is likely
> +-10% :) ) Thus, GDP/PPP per capita doesn't need to be that important
> factor (in the sense "if you reach particular GDP/PPP per capita, it's
> not anymore important factor"), while other things could be.
>
> It's also important to have in mind that various data are likely
> exposing various issues. And every issue has to be analyzed from
> socio-economic perspective (obviously, Japanese Wikipedia is not
> relatively weak because of the same reason as Russian or Swahili
> Wikipedia are).
>
> I will include as many parameters as possible in the future analysis.
> As I have now the number of speakers of particular language per
> country, it is possible now to correlate economic development with
> particular language.
>
> On Jun 13, 2015 09:38, "Federico Leva (Nemo)" <nemowiki(a)gmail.com> wrote:
> >
> > Asaf Bartov, 13/06/2015 02:42:
> >>
> >> The (already existing) metric of active-editors-per-million-speakers is,
> >> it seems to me, a far more robust metric. Erik Z.'s
> stats.wikimedia.org
> >> <http://stats.wikimedia.org> is offering that metric.
> >
> >
> > I personally agree on this in general, but Millosh is trying something
> different in his current quest, i.e. content ingestion and content coverage
> assessment, also for missing language subdomains. (By the way, I created
> the category, please add stuff:
> https://meta.wikimedia.org/wiki/Category:Content_coverage .)
> >
> > Mere article count tells us very little and he acknowledged it. As you
> added analytics: maybe when https://phabricator.wikimedia.org/T44259 is
> fixed we can also do fancy things like join various tables and count
> (countable) articles above a minimum threshold of hits, or something like
> that.
> >
> > Oh, and the total number of internal links in a wiki is also an
> interesting metric in many cases: they're often a good indicator of how
> curated a wiki globally is, while bot-created articles are often orphan.
> (Locally there might be overlinking but that's rarely a wiki-wide issue.) I
> don't remember how reliable the WikiStats numbers are, but they often give
> a good clue already.
> >
> > Nemo
> >
> > _______________________________________________
> > Languages mailing list
> > Languages(a)lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/languages
>
> _______________________________________________
> Languages mailing list
> Languages(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/languages
>
You can't get ISI to index your journal until it has a reputation that they
can verify. This can easily take years.
ISI-index for journals is a disease anyways. Many non-US academic inst.
use it as a mark for good quality journals and for promotion cases require
ISI indexed journal publication counts. When in fact, many high quality
journals are not indexed by ISI. ACM TOCHI wasn't indexed until 2007-ish,
in fact, as an example, and it is the premier journal in HCI.
I personally find no meaning in ISI indexing, and often once a journal gets
indexed by ISI, they attract a ton of submissions that are low quality,
making more work for editorial boards.
--Ed
---------------
Ed H. Chi, Staff Research Scientist, Google
CHI2012 Technical Program co-chair
On Sat, Sep 15, 2012 at 5:00 AM, <
wiki-research-l-request(a)lists.wikimedia.org> wrote:
> Send Wiki-research-l mailing list submissions to
> wiki-research-l(a)lists.wikimedia.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
> or, via email, send a message with subject or body 'help' to
> wiki-research-l-request(a)lists.wikimedia.org
>
> You can reach the person managing the list at
> wiki-research-l-owner(a)lists.wikimedia.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Wiki-research-l digest..."
>
>
> Today's Topics:
>
> 1. Re: Open-Access journals for papers about wikis
> (Dariusz Jemielniak)
> 2. Re: Open-Access journals for papers about wikis (emijrp)
> 3. Re: Open-Access journals for papers about wikis
> (Federico Leva (Nemo))
> 4. Re: Open-Access journals for papers about wikis
> (Dariusz Jemielniak)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 15 Sep 2012 09:45:13 +0200
> From: Dariusz Jemielniak <darekj(a)alk.edu.pl>
> To: Research into Wikimedia content and communities
> <wiki-research-l(a)lists.wikimedia.org>
> Subject: Re: [Wiki-research-l] Open-Access journals for papers about
> wikis
> Message-ID:
> <CADeSpGVdL_8x8OicEZ=+Z2xr9GVDx8R-8HgW=
> g9oVGUNjTDyyA(a)mail.gmail.com>
> Content-Type: text/plain; charset="ISO-8859-1"
>
> hi,
>
> On Fri, Sep 14, 2012 at 4:00 PM, Samuel Klein <sj(a)wikimedia.org> wrote:
> > I've been thinking recently that we should start this journal. There
> isn't an obvious candidate, despite some of the amazing research that's
> been done, and the extreme
> > transparency that allows much deeper work to be done on wiki communities
> in the future.
>
> I'll gladly help and support the idea. I think that just as Mathieu
> pointed out, The Journal of Peer Production is a good candidate, since
> it is already out there and running (even if low on the radar).
> Otherwise, there can be of course a journal dedicated to wiki-related
> work, it is quite easy to set it up (e.g. on Open Journal Systems
> platform). The key is not setting up a journal, since this is an easy
> part, but building a community that would regularly read it and
> contribute. In this sense Wikipedia may be a good common ground.
>
> On Fri, Sep 14, 2012 at 7:41 PM, Piotr Konieczny <piokon(a)post.pl> wrote:
> > So what does it take to get a journal indexed in ISI?
>
> The procedure is quite lengthy and not entirely transparent. In short,
> you request being reviewed and from issue X onwards they check how
> often an average article from the journal is cited in other ISI
> journals. If you go above the threshold, you're in. The problem is
> that Thomson arbitrarily decides whether they want to audit a journal,
> arbitrarily calculates what constitutes an "article" (yes, it is not
> clear - some journals have editorials counted, some don't, in some
> cases Thomson calculates the citations for non-articles, but does not
> include the number of non-articles in the equation. Scientific, right?
> ;) invited articles count... or not, research notes - same, etc.). Oh,
> and also Thomson arbitrarily may or may not punish by banning you from
> ISI for real or imaginary manipulations (such as inbreed citations -
> some editors encourage citing other articles from the same journal,
> since they count like any others from the ISI list). There's actually
> a whole body of literature on journal rankings. Still, this is the
> game we have to play.
>
> One key factor in getting ISI is a community to drive the journal - if
> Wikipedia research community was widely willing to support one new
> journal, received updates etc., it would likely get cited and go off
> the ground (the case of "The Academy of Management Learning and
> Education" - on the ISI 2 years after the first issue, if I remember
> correctly).
>
> Btw, CSCW is on ISI list, but is not open access.
>
> On Fri, Sep 14, 2012 at 6:26 PM, Aaron Halfaker
> <aaron.halfaker(a)gmail.com> wrote:
> > Growing WikiSym into an open conference
>
> unfortunately, this does not help in some fields. For instance, in
> management/organization studies conference papers don't count at all,
> so actually there is a strong incentive not to go to a conference such
> as WikiSym, since it results in wasting a paper you cannot really
> publish in way that would count. European RAEs rely more and more
> heavily on ISI and on ERIH rankings, so also non-ranked journals do
> not count anymore.
>
> best,
>
> dariusz
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Sat, 15 Sep 2012 11:12:11 +0200
> From: emijrp <emijrp(a)gmail.com>
> To: darekj(a)alk.edu.pl, Research into Wikimedia content and communities
> <wiki-research-l(a)lists.wikimedia.org>, Samuel Klein
> <meta.sj(a)gmail.com>
> Subject: Re: [Wiki-research-l] Open-Access journals for papers about
> wikis
> Message-ID:
> <CAPgALA5psCsodkfQsOsB=
> 04MHLJ4TkV9B8QfYBaowx_ML2hNSA(a)mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> The idea of creating a journal just for wikis is highly seductive for
> me.The "pillars" might be:
>
> * peer-reviewed, but publish a list of rejected papers and the reviewers
> comments
> * open-access (CC-BY-SA)
> * ask always for the datasets and offer them to download, the same for the
> developed software used in the research
> * encourage authors to publish early, publish often (as in free software)
> * supported by donations
>
> And... we can open a wiki where those who want can write papers in a
> collaborative and public way. You can start a new paper with colleagues or
> ask for volunteers authors interested in joining to your idea. When authors
> think that paper is finished and stable, they submit it to the journal and
> it is peer-reviewed again and published or discarded and returned to the
> wiki for improvements.
>
> Perhaps we may join efforts with the Wikimedia Research Newsletter? And
> start a page in meta:? ; )
>
> 2012/9/15 Dariusz Jemielniak <darekj(a)alk.edu.pl>
>
> > hi,
> >
> > On Fri, Sep 14, 2012 at 4:00 PM, Samuel Klein <sj(a)wikimedia.org> wrote:
> > > I've been thinking recently that we should start this journal. There
> > isn't an obvious candidate, despite some of the amazing research that's
> > been done, and the extreme
> > > transparency that allows much deeper work to be done on wiki
> communities
> > in the future.
> >
> > I'll gladly help and support the idea. I think that just as Mathieu
> > pointed out, The Journal of Peer Production is a good candidate, since
> > it is already out there and running (even if low on the radar).
> > Otherwise, there can be of course a journal dedicated to wiki-related
> > work, it is quite easy to set it up (e.g. on Open Journal Systems
> > platform). The key is not setting up a journal, since this is an easy
> > part, but building a community that would regularly read it and
> > contribute. In this sense Wikipedia may be a good common ground.
> >
> > On Fri, Sep 14, 2012 at 7:41 PM, Piotr Konieczny <piokon(a)post.pl> wrote:
> > > So what does it take to get a journal indexed in ISI?
> >
> > The procedure is quite lengthy and not entirely transparent. In short,
> > you request being reviewed and from issue X onwards they check how
> > often an average article from the journal is cited in other ISI
> > journals. If you go above the threshold, you're in. The problem is
> > that Thomson arbitrarily decides whether they want to audit a journal,
> > arbitrarily calculates what constitutes an "article" (yes, it is not
> > clear - some journals have editorials counted, some don't, in some
> > cases Thomson calculates the citations for non-articles, but does not
> > include the number of non-articles in the equation. Scientific, right?
> > ;) invited articles count... or not, research notes - same, etc.). Oh,
> > and also Thomson arbitrarily may or may not punish by banning you from
> > ISI for real or imaginary manipulations (such as inbreed citations -
> > some editors encourage citing other articles from the same journal,
> > since they count like any others from the ISI list). There's actually
> > a whole body of literature on journal rankings. Still, this is the
> > game we have to play.
> >
> > One key factor in getting ISI is a community to drive the journal - if
> > Wikipedia research community was widely willing to support one new
> > journal, received updates etc., it would likely get cited and go off
> > the ground (the case of "The Academy of Management Learning and
> > Education" - on the ISI 2 years after the first issue, if I remember
> > correctly).
> >
> > Btw, CSCW is on ISI list, but is not open access.
> >
> > On Fri, Sep 14, 2012 at 6:26 PM, Aaron Halfaker
> > <aaron.halfaker(a)gmail.com> wrote:
> > > Growing WikiSym into an open conference
> >
> > unfortunately, this does not help in some fields. For instance, in
> > management/organization studies conference papers don't count at all,
> > so actually there is a strong incentive not to go to a conference such
> > as WikiSym, since it results in wasting a paper you cannot really
> > publish in way that would count. European RAEs rely more and more
> > heavily on ISI and on ERIH rankings, so also non-ranked journals do
> > not count anymore.
> >
> > best,
> >
> > dariusz
> >
> >
> > _______________________________________________
> > Wiki-research-l mailing list
> > Wiki-research-l(a)lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
> >
>
>
>
> --
> Emilio J. Rodr?guez-Posada. E-mail: emijrp AT gmail DOT com
> Pre-doctoral student at the University of C?diz (Spain)
> Projects: AVBOT <http://code.google.com/p/avbot/> |
> StatMediaWiki<http://statmediawiki.forja.rediris.es>
> | WikiEvidens <http://code.google.com/p/wikievidens/> |
> WikiPapers<http://wikipapers.referata.com>
> | WikiTeam <http://code.google.com/p/wikiteam/>
> Personal website: https://sites.google.com/site/emijrp/
>
My hero Magnus Manske noted
> The situation, for most languages, is this: No manual descriptions, on
basically any item. And that will remain so for the (near) future.
Automatic descriptions can change that, literally over night, with a little
programming and linguistic effort. ... This is a "force multiplier" of
volunteer effort with a factor of 250. And we ignore that ... why, exactly?
The potential of AutoDesc is so enormous to attain "a world in which every
single person on the planet is given free access to the sum of all human
knowledge" that it should be the entire movement's top project. I nearly
wrote a career-limiting e-mail rant to WMF-all on that subject last night.
In this e-mail thread we're talking about it in the limited scope of "Wikidata
descriptions in search on mobile web beta", where the mobile client
presents a useful signpost for *existing* articles, in an emblem on lead
images and in search results. That's important but we're missing the forest
for a single tree when discussing such a transformative technology. If only
WMF had a CTO for such things [1].
Anyway, returning to this specific use case:
* Nobody is saying store the AutoDesc in the Wikidata per-language
description field.
* Nobody is saying show the AutoDesc if there is an existing Wikidata
description.
* Is anybody against showing AutoDesc, after some refinement and
productization [2], in these mobile use cases when there is no Wikidata
description?
* I propose the AutoDesc as a quality bar that any edit to a Wikidata
description needs to improve on (but again that's a topic beyond this mail
thread).
Yours, excitedly,
=S Page
[1] http://grnh.se/30f54b , apply today!
[2] https://bitbucket.org/magnusmanske/autodesc/src/HEAD/www/js/?at=master
and https://github.com/dbrant/wikidata-autodesc . It's already a nodejs
service, can we append "oid" and declare victory ? :-)
On Wed, Aug 19, 2015 at 2:57 AM, Magnus Manske <magnusmanske(a)googlemail.com>
wrote:
> Oh, and as for examples, random-paging just got me this:
>
> https://en.wikipedia.org/wiki/Jules_Malou
>
> Manual description: Belgian politician
>
> Automatic description: Belgian politician and lawyer, Prime Minister of
> Belgium, and member of the Chamber of Representatives of Belgium
> (1810–1886) ♂
>
> I know which one I'd prefer...
>
>
> On Wed, Aug 19, 2015 at 10:50 AM Magnus Manske <
> magnusmanske(a)googlemail.com> wrote:
>
>> Thank you Dmitry! Well phrased and to the point!
>>
>> As for "templating", that might be the worst of both worlds; without the
>> flexibility and over-time improvement of automatic descriptions, but making
>> it harder for people to enter (compared to "free-style" text). We have a
>> Visual Editor on Wikipedia for a reason :-)
>>
>>
>>
>> On Wed, Aug 19, 2015 at 4:07 AM Dmitry Brant <dbrant(a)wikimedia.org>
>> wrote:
>>
>>> My thoughts, as ever(!), are as follows:
>>>
>>> - The tool that generates the descriptions deserves a lot more
>>> development. Magnus' tool is very much a prototype, and represents a tiny
>>> glimpse of what's possible. Looking at its current output is a straw man.
>>> - Auto-generated descriptions work for current articles, and *all
>>> future articles*. They automatically adapt to updated data. They
>>> automatically become more accurate as new data is added.
>>> - When you edit the descriptions yourself, you're not really making a
>>> meaningful contribution to the *data* that underpins the given Wikidata
>>> entry; i.e. you're not contributing any new information. You're simply
>>> paraphrasing the first sentence or two of the Wikipedia article. That can't
>>> possibly be a productive use of contributors' time.
>>>
>>> As for Brian's suggestion:
>>> It would be a step forward; we can even invent a whole template-type
>>> syntax for transcluding bits of actual data into the description. But IMO,
>>> that kind of effort would still be better spent on fully-automatic
>>> descriptions, because that's the ideal that semi-automatic descriptions can
>>> only approach.
>>>
>>>
>>> On Tue, Aug 18, 2015 at 10:36 PM, Brian Gerstle <bgerstle(a)wikimedia.org>
>>> wrote:
>>>
>>>> Could there be a way to have our nicely curated description cake and
>>>> eat it too? For example, interpolating data into the description and/or
>>>> marking data points which are referenced in the description (so as to mark
>>>> it as outdated when they change)?
>>>>
>>>> I appreciate the potential benefits of generated descriptions (and
>>>> other things), but Monte's examples might have swayed me towards human
>>>> curated—when available.
>>>>
>>>> On Tuesday, August 18, 2015, Monte Hurd <mhurd(a)wikimedia.org> wrote:
>>>>
>>>>> Ok, so I just did what I proposed. I went to random enwiki articles
>>>>> and described the first ten I found which didn't already have descriptions:
>>>>>
>>>>>
>>>>> - "Courage Under Fire", *1996 film about a Gulf War friendly-fire
>>>>> incident*
>>>>>
>>>>> - "Pebasiconcha immanis", *largest known species of land snail,
>>>>> extinct*
>>>>>
>>>>> - "List of Kenyan writers", *notable Kenyan authors*
>>>>>
>>>>> - "Solar eclipse of December 14, 1917", *annular eclipse which lasted
>>>>> 77 seconds*
>>>>>
>>>>> - "Natchaug Forest Lumber Shed", *historic Civilian Conservation
>>>>> Corps post-and-beam building*
>>>>>
>>>>> - "Sun of Jamaica (album)", *debut 1980 studio album by Goombay Dance
>>>>> Band*
>>>>>
>>>>> - "E-1027", *modernist villa in France by architect Eileen Gray*
>>>>>
>>>>> - "Daingerfield State Park", *park in Morris County, Texas, USA,
>>>>> bordering Lake Daingerfield*
>>>>>
>>>>> - "Todo Lo Que Soy-En Vivo", *2014 Live album by Mexican pop singer
>>>>> Fey*
>>>>>
>>>>> - "2009 UEFA Regions' Cup", *6th UEFA Regions' Cup, won by Castile
>>>>> and Leon*
>>>>>
>>>>>
>>>>>
>>>>> And here are the respective descriptions from Magnus' (quite
>>>>> excellent) autodesc.js:
>>>>>
>>>>>
>>>>>
>>>>> - "Courage Under Fire", *1996 film by Edward Zwick, produced by John
>>>>> Davis and David T. Friendly from United States of America*
>>>>>
>>>>> - "Pebasiconcha immanis", *species of Mollusca*
>>>>>
>>>>> - "List of Kenyan writers", *Wikimedia list article*
>>>>>
>>>>> - "Solar eclipse of December 14, 1917", *solar eclipse*
>>>>>
>>>>> - "Natchaug Forest Lumber Shed", *Construction in Connecticut, United
>>>>> States of America*
>>>>>
>>>>> - "Sun of Jamaica (album)", *album*
>>>>>
>>>>> - "E-1027", *villa in Roquebrune-Cap-Martin, France*
>>>>>
>>>>> - "Daingerfield State Park", *state park and state park of a state of
>>>>> the United States in Texas, United States of America*
>>>>>
>>>>> - "Todo Lo Que Soy-En Vivo", *live album by Fey*
>>>>>
>>>>> - "2009 UEFA Regions' Cup", *none*
>>>>>
>>>>>
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> Just trying to make my own bold assertions falsifiable :)
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Aug 18, 2015 at 6:32 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>> wrote:
>>>>>
>>>>>> The whole human-vs-extracted descriptions quality question could be
>>>>>> fairly easy to test I think:
>>>>>>
>>>>>> - Pick, some number of articles at random.
>>>>>> - Run them through a description extraction script.
>>>>>> - Have a human describe the same articles with, say, the app
>>>>>> interface I demo'ed.
>>>>>>
>>>>>> If nothing else this exercise could perhaps make what's thus far been
>>>>>> a wildly abstract discussion more concrete.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 18, 2015 at 6:17 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>>> wrote:
>>>>>>
>>>>>>> If having the most elegant description extraction mechanism was the
>>>>>>> goal I would totally agree ;)
>>>>>>>
>>>>>>> On Tue, Aug 18, 2015 at 5:19 PM, Dmitry Brant <dbrant(a)wikimedia.org>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> IMO, allowing the user to edit the description is a missed
>>>>>>>> opportunity to make the user edit the actual *data*, such that the
>>>>>>>> description is generated correctly.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Aug 18, 2015 at 8:02 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> IMO, if the goal is quality, then human curated descriptions are
>>>>>>>>> superior until such time as the auto-generation script passes the Turing
>>>>>>>>> test ;)
>>>>>>>>>
>>>>>>>>> I see these empty descriptions as an amazing opportunity to give
>>>>>>>>> *everyone* an easy new way to edit. I whipped an app editing interface up
>>>>>>>>> at the Lyon hackathon:
>>>>>>>>> bluetooth720 <https://www.youtube.com/watch?v=6VblyGhf_c8>
>>>>>>>>>
>>>>>>>>> I used it to add a couple hundred descriptions in a single day
>>>>>>>>> just by hitting "random" then adding descriptions for articles which didn't
>>>>>>>>> have them.
>>>>>>>>>
>>>>>>>>> I'd love to try a limited test of this in production to get a
>>>>>>>>> sense for how effective human curation can be if the interface is easy to
>>>>>>>>> use...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Aug 18, 2015 at 1:25 PM, Jan Ainali <
>>>>>>>>> jan.ainali(a)wikimedia.se> wrote:
>>>>>>>>>
>>>>>>>>>> Nice one!
>>>>>>>>>>
>>>>>>>>>> Does not appear to work on svwiki though. Does it have something
>>>>>>>>>> to do with that the wiki in question does not display that tagline?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Med vänliga hälsningar,Jan Ainali*
>>>>>>>>>>
>>>>>>>>>> Verksamhetschef, Wikimedia Sverige <http://wikimedia.se>
>>>>>>>>>> 0729 - 67 29 48
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Tänk dig en värld där varje människa har fri tillgång till
>>>>>>>>>> mänsklighetens samlade kunskap. Det är det vi gör.*
>>>>>>>>>> Bli medlem. <http://blimedlem.wikimedia.se>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2015-08-18 17:23 GMT+02:00 Magnus Manske <
>>>>>>>>>> magnusmanske(a)googlemail.com>:
>>>>>>>>>>
>>>>>>>>>>> Show automatic description underneath "From Wikipedia...":
>>>>>>>>>>> https://en.wikipedia.org/wiki/User:Magnus_Manske/autodesc.js
>>>>>>>>>>>
>>>>>>>>>>> To use, add:
>>>>>>>>>>> importScript ( 'User:Magnus_Manske/autodesc.js' ) ;
>>>>>>>>>>> to your common.js
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Aug 18, 2015 at 9:47 AM Jane Darnell <jane023(a)gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> It would be even better if this (short: 3 field max)
>>>>>>>>>>>> pipe-separated list was available as a gadget to wikidatans on Wikipedia
>>>>>>>>>>>> (like me). I can't see if a page I am on has an "instance of" (though it
>>>>>>>>>>>> should) and I can see the description thanks to another gadget (sorry no
>>>>>>>>>>>> idea which one that is). Often I will update empty descriptions, but if I
>>>>>>>>>>>> was served basic fields (so for a painting, the creator field), I would
>>>>>>>>>>>> click through to update that too.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Aug 18, 2015 at 9:58 AM, Federico Leva (Nemo) <
>>>>>>>>>>>> nemowiki(a)gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Jane Darnell, 15/08/2015 08:53:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes but even if the descriptions were just the contents of
>>>>>>>>>>>>>> fields
>>>>>>>>>>>>>> separated by a pipe it would be better than nothing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> +1, item descriptions are mostly useless in my experience.
>>>>>>>>>>>>>
>>>>>>>>>>>>> As for "get into production on Wikipedia" I don't know what it
>>>>>>>>>>>>> means, I certainly don't like 1) mobile-specific features, 2) overriding
>>>>>>>>>>>>> existing manually curated content; but it's good to 3) fill gaps. Mobile
>>>>>>>>>>>>> folks often do (1) and (2), if they *instead* did (3) I'd be very happy. :)
>>>>>>>>>>>>>
>>>>>>>>>>>>> Nemo
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Mobile-l mailing list
>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Dmitry Brant
>>>>>>>> Mobile Apps Team (Android)
>>>>>>>> Wikimedia Foundation
>>>>>>>> https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>> --
>>>> EN Wikipedia user page:
>>>> https://en.wikipedia.org/wiki/User:Brian.gerstle
>>>> IRC: bgerstle
>>>>
>>>>
>>>
>>>
>>> --
>>> Dmitry Brant
>>> Mobile Apps Team (Android)
>>> Wikimedia Foundation
>>> https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
>>>
>>>
>>> On Tue, Aug 18, 2015 at 10:36 PM, Brian Gerstle <bgerstle(a)wikimedia.org>
>>> wrote:
>>>
>>>> Could there be a way to have our nicely curated description cake and
>>>> eat it too? For example, interpolating data into the description and/or
>>>> marking data points which are referenced in the description (so as to mark
>>>> it as outdated when they change)?
>>>>
>>>> I appreciate the potential benefits of generated descriptions (and
>>>> other things), but Monte's examples might have swayed me towards human
>>>> curated—when available.
>>>>
>>>> On Tuesday, August 18, 2015, Monte Hurd <mhurd(a)wikimedia.org> wrote:
>>>>
>>>>> Ok, so I just did what I proposed. I went to random enwiki articles
>>>>> and described the first ten I found which didn't already have descriptions:
>>>>>
>>>>>
>>>>> - "Courage Under Fire", *1996 film about a Gulf War friendly-fire
>>>>> incident*
>>>>>
>>>>> - "Pebasiconcha immanis", *largest known species of land snail,
>>>>> extinct*
>>>>>
>>>>> - "List of Kenyan writers", *notable Kenyan authors*
>>>>>
>>>>> - "Solar eclipse of December 14, 1917", *annular eclipse which lasted
>>>>> 77 seconds*
>>>>>
>>>>> - "Natchaug Forest Lumber Shed", *historic Civilian Conservation
>>>>> Corps post-and-beam building*
>>>>>
>>>>> - "Sun of Jamaica (album)", *debut 1980 studio album by Goombay Dance
>>>>> Band*
>>>>>
>>>>> - "E-1027", *modernist villa in France by architect Eileen Gray*
>>>>>
>>>>> - "Daingerfield State Park", *park in Morris County, Texas, USA,
>>>>> bordering Lake Daingerfield*
>>>>>
>>>>> - "Todo Lo Que Soy-En Vivo", *2014 Live album by Mexican pop singer
>>>>> Fey*
>>>>>
>>>>> - "2009 UEFA Regions' Cup", *6th UEFA Regions' Cup, won by Castile
>>>>> and Leon*
>>>>>
>>>>>
>>>>>
>>>>> And here are the respective descriptions from Magnus' (quite
>>>>> excellent) autodesc.js:
>>>>>
>>>>>
>>>>>
>>>>> - "Courage Under Fire", *1996 film by Edward Zwick, produced by John
>>>>> Davis and David T. Friendly from United States of America*
>>>>>
>>>>> - "Pebasiconcha immanis", *species of Mollusca*
>>>>>
>>>>> - "List of Kenyan writers", *Wikimedia list article*
>>>>>
>>>>> - "Solar eclipse of December 14, 1917", *solar eclipse*
>>>>>
>>>>> - "Natchaug Forest Lumber Shed", *Construction in Connecticut, United
>>>>> States of America*
>>>>>
>>>>> - "Sun of Jamaica (album)", *album*
>>>>>
>>>>> - "E-1027", *villa in Roquebrune-Cap-Martin, France*
>>>>>
>>>>> - "Daingerfield State Park", *state park and state park of a state of
>>>>> the United States in Texas, United States of America*
>>>>>
>>>>> - "Todo Lo Que Soy-En Vivo", *live album by Fey*
>>>>>
>>>>> - "2009 UEFA Regions' Cup", *none*
>>>>>
>>>>>
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> Just trying to make my own bold assertions falsifiable :)
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Aug 18, 2015 at 6:32 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>> wrote:
>>>>>
>>>>>> The whole human-vs-extracted descriptions quality question could be
>>>>>> fairly easy to test I think:
>>>>>>
>>>>>> - Pick, some number of articles at random.
>>>>>> - Run them through a description extraction script.
>>>>>> - Have a human describe the same articles with, say, the app
>>>>>> interface I demo'ed.
>>>>>>
>>>>>> If nothing else this exercise could perhaps make what's thus far been
>>>>>> a wildly abstract discussion more concrete.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 18, 2015 at 6:17 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>>> wrote:
>>>>>>
>>>>>>> If having the most elegant description extraction mechanism was the
>>>>>>> goal I would totally agree ;)
>>>>>>>
>>>>>>> On Tue, Aug 18, 2015 at 5:19 PM, Dmitry Brant <dbrant(a)wikimedia.org>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> IMO, allowing the user to edit the description is a missed
>>>>>>>> opportunity to make the user edit the actual *data*, such that the
>>>>>>>> description is generated correctly.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Aug 18, 2015 at 8:02 PM, Monte Hurd <mhurd(a)wikimedia.org>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> IMO, if the goal is quality, then human curated descriptions are
>>>>>>>>> superior until such time as the auto-generation script passes the Turing
>>>>>>>>> test ;)
>>>>>>>>>
>>>>>>>>> I see these empty descriptions as an amazing opportunity to give
>>>>>>>>> *everyone* an easy new way to edit. I whipped an app editing interface up
>>>>>>>>> at the Lyon hackathon:
>>>>>>>>> bluetooth720 <https://www.youtube.com/watch?v=6VblyGhf_c8>
>>>>>>>>>
>>>>>>>>> I used it to add a couple hundred descriptions in a single day
>>>>>>>>> just by hitting "random" then adding descriptions for articles which didn't
>>>>>>>>> have them.
>>>>>>>>>
>>>>>>>>> I'd love to try a limited test of this in production to get a
>>>>>>>>> sense for how effective human curation can be if the interface is easy to
>>>>>>>>> use...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Aug 18, 2015 at 1:25 PM, Jan Ainali <
>>>>>>>>> jan.ainali(a)wikimedia.se> wrote:
>>>>>>>>>
>>>>>>>>>> Nice one!
>>>>>>>>>>
>>>>>>>>>> Does not appear to work on svwiki though. Does it have something
>>>>>>>>>> to do with that the wiki in question does not display that tagline?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Med vänliga hälsningar,Jan Ainali*
>>>>>>>>>>
>>>>>>>>>> Verksamhetschef, Wikimedia Sverige <http://wikimedia.se>
>>>>>>>>>> 0729 - 67 29 48
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *Tänk dig en värld där varje människa har fri tillgång till
>>>>>>>>>> mänsklighetens samlade kunskap. Det är det vi gör.*
>>>>>>>>>> Bli medlem. <http://blimedlem.wikimedia.se>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2015-08-18 17:23 GMT+02:00 Magnus Manske <
>>>>>>>>>> magnusmanske(a)googlemail.com>:
>>>>>>>>>>
>>>>>>>>>>> Show automatic description underneath "From Wikipedia...":
>>>>>>>>>>> https://en.wikipedia.org/wiki/User:Magnus_Manske/autodesc.js
>>>>>>>>>>>
>>>>>>>>>>> To use, add:
>>>>>>>>>>> importScript ( 'User:Magnus_Manske/autodesc.js' ) ;
>>>>>>>>>>> to your common.js
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Aug 18, 2015 at 9:47 AM Jane Darnell <jane023(a)gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> It would be even better if this (short: 3 field max)
>>>>>>>>>>>> pipe-separated list was available as a gadget to wikidatans on Wikipedia
>>>>>>>>>>>> (like me). I can't see if a page I am on has an "instance of" (though it
>>>>>>>>>>>> should) and I can see the description thanks to another gadget (sorry no
>>>>>>>>>>>> idea which one that is). Often I will update empty descriptions, but if I
>>>>>>>>>>>> was served basic fields (so for a painting, the creator field), I would
>>>>>>>>>>>> click through to update that too.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Aug 18, 2015 at 9:58 AM, Federico Leva (Nemo) <
>>>>>>>>>>>> nemowiki(a)gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Jane Darnell, 15/08/2015 08:53:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes but even if the descriptions were just the contents of
>>>>>>>>>>>>>> fields
>>>>>>>>>>>>>> separated by a pipe it would be better than nothing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> +1, item descriptions are mostly useless in my experience.
>>>>>>>>>>>>>
>>>>>>>>>>>>> As for "get into production on Wikipedia" I don't know what it
>>>>>>>>>>>>> means, I certainly don't like 1) mobile-specific features, 2) overriding
>>>>>>>>>>>>> existing manually curated content; but it's good to 3) fill gaps. Mobile
>>>>>>>>>>>>> folks often do (1) and (2), if they *instead* did (3) I'd be very happy. :)
>>>>>>>>>>>>>
>>>>>>>>>>>>> Nemo
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Mobile-l mailing list
>>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Mobile-l mailing list
>>>>>>>>> Mobile-l(a)lists.wikimedia.org
>>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Dmitry Brant
>>>>>>>> Mobile Apps Team (Android)
>>>>>>>> Wikimedia Foundation
>>>>>>>> https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>> --
>>>> EN Wikipedia user page:
>>>> https://en.wikipedia.org/wiki/User:Brian.gerstle
>>>> IRC: bgerstle
>>>>
>>>>
>>>
>>>
>>> --
>>> Dmitry Brant
>>> Mobile Apps Team (Android)
>>> Wikimedia Foundation
>>> https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
>>>
>>> _______________________________________________
>>> Mobile-l mailing list
>>> Mobile-l(a)lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>
>>
> _______________________________________________
> Mobile-l mailing list
> Mobile-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>
>
--
=S Page WMF Tech writer
Kerry said:
>
> Because of the criticism of “not giving back”, could we perhaps do things
> to try to make the researcher feel part of the community to make “giving
> back” more likely? For example, could we give them a slot every now and
> again to talk about their project in the R&D Showcase? Encourage them to be
> on this mailing list. Are we at a point where it might make sense to
> organise a Wikipedia research conference to help build a research
> community? Just thinking aloud here …
This is a bit different than the main topic, so I wanted to break it out
into another reply.
We just had Nate Matias[0] from the MIT media lab present on his work at
the last showcase[1]. We also just sent out a survey about the showcase
that includes a call for recommended speakers at future showcases[2]. As
for a Wikipedia research conference, see OpenSym[3] (formerly WikiSym) and
Wikimania[4] (not as researchy, but a great venue to maximize wiki research
impact).
0. http://natematias.com/
1.
https://www.mediawiki.org/wiki/Analytics/Research_and_Data/Showcase#July_20…
2.
http://lists.wikimedia.org/pipermail/wiki-research-l/2014-July/003574.html
3. http://www.opensym.org/os2014/
4. https://wikimania2014.wikimedia.org/wiki/Main_Page
On Thu, Jul 17, 2014 at 8:30 AM, Aaron Halfaker <aaron.halfaker(a)gmail.com>
wrote:
> > Aaron, when I read that it is active because I had heard from others in
> your team about a year or two ago that this wasn't going to be the vehicle
> for obtaining permission going forward and that a new, more lightweight
> process was being designed.
>
> 1) If anyone told you that we are no longer active, they were wrong.
> 2) The "lightweight" process you refer to is what I linked to in enwiki
> in my previous response. See again:
> https://en.wikipedia.org/wiki/Wikipedia:Research_recruitment
>
> Generally, there seems to be a misconception that RCom == paid WMF
> activities. While RCom involves a relationship with the Wikimedia
> Foundation, our activities as part of RCom are 100% volunteer and open to
> participation from other Wikipedians (seriously, let me know if you want to
> help out!), and as such, our backlog tends to suffer when our available
> volunteer time does. FWIW, I became involved in this work as a volunteer
> (before I started working with the WMF). With that in mind, it seems like
> we are not discussing RCom itself which is mostly inactive -- so much as we
> are discussing the subject recruitment review process which is still
> active. Let me state this clearly: *If you send an email to me or Dario
> about a research project that you would like reviewed, we will help you
> coordinate a review. *Our job as review coordinators is to make sure
> that the study is adequately documented and that Wikipedians and other
> researchers are pulled in to discuss the material. We don't just welcome
> broad involvement -- we need it! We all suffer from the lack of it.
> Please show up help us!
>
> To give you some context on the current stats and situation, I should
> probably give a bit of history. I've been working to improve subject
> recruitment review -- with the goal of improving interactions between
> researchers and Wikipedians -- for years. Let me first say that *I'm
> game to make this better**.* In my experience, the biggest issue to
> documenting the a review/endorsement/whatever process that I have come
> across is this: there seems to be a lot of people who feel that minimizing *process
> description* provides power and adaptability to intended processes[1].
> It's these people that I've regularly battled in my frequent efforts to
> increase the formalization around the subject recruitment proposal vetting
> process (e.g. SRAG had a structured appeals process and stated timelines).
> The result of these battles is the severely under-documented process
> "described" in meta:R:FAQ <https://meta.wikimedia.org/wiki/Research:FAQ>.
>
> Here's some links to my previous work on subject recruitment process that
> will show these old discussions about process creep
> <https://en.wikipedia.org/wiki/Wikipedia:Avoid_instruction_creep>.
>
> -
> https://en.wikipedia.org/wiki/Wikipedia:Subject_Recruitment_Approvals_Group
> -
> https://en.wikipedia.org/wiki/Wikipedia_talk:Subject_Recruitment_Approvals_…
> -
> https://en.wikipedia.org/w/index.php?title=Wikipedia:Research&oldid=3546001…
> - https://en.wikipedia.org/wiki/Wikipedia_talk:Research/Archive_1
> - https://en.wikipedia.org/wiki/Wikipedia_talk:Research/Archive_2 --
> Note that this was actually an *enwiki policy* for about 5 hours
> before the RfC was overturned due to too few editors being involved in the
> straw poll.
>
> For new work, see my current (but stalled for about 1.5 years) push for a
> structured process on English Wikipedia.
> https://en.wikipedia.org/wiki/Wikipedia:Research_recruitment See also
> the checklist I have been working on with Lane.
> https://en.wikipedia.org/wiki/Wikipedia:Research_recruitment/Wikipedian_che…
>
> When you review these docs and the corresponding conversations, please
> keep in mind that I was a new Wikipedian for the development of WP:SRAG and
> WP:Research, so I made some really critical mistakes -- like taking
> hyperbolic criticism of the proposals personally. :\
>
> So what now? Well, in the meantime, if you let me know about some subject
> recruitment you want to do, I'll help you find someone to coordinate a
> review that fits within the process described in the RCom docs. In the
> short term, are any of you folks interested in going through some
> iterations of the new WP:Research_recruitment policy doc?
>
> -Aaron
>
>
> On Thu, Jul 17, 2014 at 2:38 AM, Heather Ford <hfordsa(a)gmail.com> wrote:
>
>> Agree with Kerry that we really need to have a more flexible process that
>> speaks to the main problem that (I think) RCOM was started to solve i.e.
>> that Wikipedians were getting tired of being continually contacted by
>> researchers to fill out *surveys*. I'm not sure where feelings are about
>> that right now (I certainly haven't seen a huge amount of surveys myself)
>> but I guess the big question right now is whether RCOM is actually active
>> or not. I must say that I was surprised, Aaron, when I read that it is
>> active because I had heard from others in your team about a year or two ago
>> that this wasn't going to be the vehicle for obtaining permission going
>> forward and that a new, more lightweight process was being designed. As
>> Nathan discusses on the Wikimedia-l list, there aren't many indications
>> that RCOM is still active. Perhaps there has been a recent decision to
>> resuscitate it? If that's the case, let us know about it :) And then we can
>> discuss what needs to happen to build a good process.
>>
>> One immediate requirement that I've been talking to others about is
>> finding ways of making the case to the WMF as a group of researchers for
>> the anonymization of country level data, for example. I've spoken to a few
>> researchers (and I myself made a request about a year ago that hasn't been
>> responded to) and it seems like some work is required by the foundation to
>> do this anonymisation but that there are a few of us who would be really
>> keen to use this data to produce research very valuable to Wikipedia -
>> especially from smaller language versions/developing countries. Having an
>> official process that assesses how worthwhile this investment of time would
>> be to the Foundation would be a great idea, I think, but right now there
>> seems to be a general focus on the research that the Foundation does itself
>> rather than enabling researchers outside. I know how busy Aaron and Dario
>> (and others in the team) are so perhaps this requires a new position to
>> coordinate between researchers and Foundation resources?
>>
>> Anyway, I think the big question right now is whether there are any plans
>> for RCOM that have been made by the research team and the only people who
>> can answer that are folks in the research team :)
>>
>> Best,
>> Heather.
>>
>> Heather Ford
>> Oxford Internet Institute <http://www.oii.ox.ac.uk> Doctoral Programme
>> EthnographyMatters <http://ethnographymatters.net> | Oxford Digital
>> Ethnography Group <http://www.oii.ox.ac.uk/research/projects/?id=115>
>> http://hblog.org | @hfordsa <http://www.twitter.com/hfordsa>
>>
>>
>>
>>
>> On 17 July 2014 08:49, Kerry Raymond <kerry.raymond(a)gmail.com> wrote:
>>
>>> Yes, I meant the community/communities of WMF. But the authority of
>>> the community derives from WMF, which chooses to delegate such matters. I
>>> think that “advise” is a good word to use.
>>>
>>>
>>>
>>> Kerry
>>>
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> *From:* Amir E. Aharoni [mailto:amir.aharoni@mail.huji.ac.il]
>>> *Sent:* Thursday, 17 July 2014 5:37 PM
>>> *To:* kerry.raymond(a)gmail.com; Research into Wikimedia content and
>>> communities
>>>
>>> *Subject:* Re: [Wiki-research-l] discussion about wikipedia surveys
>>>
>>>
>>>
>>> > WMF does not "own" me as a contributor; it does not decide who can
>>> and cannot recruit me for whatever purposes.
>>>
>>> I don't think that it really should be about WMF. The WMF shouldn't
>>> enforce anything. The community can formulate good practices for
>>> researchers and _advise_ community members not to cooperate with
>>> researchers who don't follow these practices. Not much more is needed.
>>>
>>>
>>>
>>> --
>>> Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
>>> http://aharoni.wordpress.com
>>> “We're living in pieces,
>>> I want to live in peace.” – T. Moore
>>>
>>>
>>>
>>> 2014-07-17 8:24 GMT+03:00 Kerry Raymond <kerry.raymond(a)gmail.com>:
>>>
>>> Just saying here what I already put on the Talk page:
>>>
>>>
>>>
>>> I am a little bothered by the opening sentence "This page documents the
>>> process that researchers must follow before asking Wikipedia contributors
>>> to participate in research studies such as surveys, interviews and
>>> experiments."
>>>
>>> WMF does not "own" me as a contributor; it does not decide who can and
>>> cannot recruit me for whatever purposes. What WMF does own is its
>>> communication channels to me as a contributor and WMF has a right to
>>> control what occurs on those channels. Also I think WMF probably should be
>>> concerned about both its readers and its contributors being recruited
>>> through its channels (as either might be being recruited). I think this
>>> distinction should be made, e.g.
>>>
>>> "This page documents the process that researchers must follow if they
>>> wish to use Wikipedia's (WMF's?) communication channels to recruit people
>>> to participate in research studies such as surveys, interviews and
>>> experiments. Communication channels include its mailing lists, its Project
>>> pages, Talk pages, and User Talk pages [and whatever else I've forgotten]."
>>>
>>>
>>>
>>> If researchers want to recruit WPians via non-WMF means, I don’t think
>>> it’s any business of WMF’s. An example might be a researcher who wanted to
>>> contact WPians via chapters or thorgs; I would leave it for the
>>> chapter/thorg to decide if they wanted to assist the researcher via their
>>> communication channels.
>>>
>>>
>>>
>>> Of course, the practical reality of it is that some researchers
>>> (oblivious of WMF’s concerns in relation to recruitment of WPians to
>>> research projects) will simply use WMF’s channels without asking nicely
>>> first. Obviously we can remove such requests on-wiki and follow up any
>>> email requests with the commentary that this was not an approved request.
>>> In my category of [whatever else I’ve forgotten], I guess there are things
>>> like Facebook groups and any other social media presence.
>>>
>>>
>>>
>>> Also to be practical, if WMF is to have a process to vet research
>>> surveys, I think it has to be sufficiently fast and not be overly demanding
>>> to avoid the possibility of the researcher giving up (“too hard to deal
>>> with these people”) and simply spamming email, project pages, social media
>>> in the hope of recruiting some participants regardless. That is, if we make
>>> it too slow/hard to do the right thing, we effectively encourage doing the
>>> wrong thing. Also, what value-add can we give them to reward those who do
>>> the right thing? It’s nice to have a carrot as well as a stick when it
>>> comes to onerous processes J
>>>
>>>
>>>
>>> Because of the criticism of “not giving back”, could we perhaps do
>>> things to try to make the researcher feel part of the community to make
>>> “giving back” more likely? For example, could we give them a slot every now
>>> and again to talk about their project in the R&D Showcase? Encourage them
>>> to be on this mailing list. Are we at a point where it might make sense to
>>> organise a Wikipedia research conference to help build a research
>>> community? Just thinking aloud here …
>>>
>>>
>>>
>>> Kerry
>>>
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> *From:* wiki-research-l-bounces(a)lists.wikimedia.org [mailto:
>>> wiki-research-l-bounces(a)lists.wikimedia.org] *On Behalf Of *Aaron
>>> Halfaker
>>> *Sent:* Thursday, 17 July 2014 6:59 AM
>>> *To:* Research into Wikimedia content and communities
>>> *Subject:* Re: [Wiki-research-l] discussion about wikipedia surveys
>>>
>>>
>>>
>>> RCOM review is still alive and looking for new reviewers (really,
>>> coordinators). Researchers can be directed to me or Dario (
>>> dtaraborelli(a)wikimedia.org) to be assigned a reviewer. There is also a
>>> proposed policy on enwiki that could use some eyeballs:
>>> https://en.wikipedia.org/wiki/Wikipedia:Research_recruitment
>>>
>>>
>>>
>>> On Wed, Jul 16, 2014 at 11:16 AM, Federico Leva (Nemo) <
>>> nemowiki(a)gmail.com> wrote:
>>>
>>> phoebe ayers, 16/07/2014 19:21:
>>>
>>> > (Personally, I think the answer should be to resuscitate RCOM, but
>>> > that's easy to say and harder to do!)
>>>
>>> IMHO in the meanwhile the most useful thing folks can do is subscribing
>>> to the feed of new research pages:
>>> <
>>> https://meta.wikimedia.org/w/index.php?title=Special:NewPages&feed=atom&hid…
>>> >
>>> It's easier to build a functioning RCOM out of an active community of
>>> "reviewers", than the other way round.
>>>
>>> Nemo
>>>
>>> _______________________________________________
>>> Wiki-research-l mailing list
>>> Wiki-research-l(a)lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Wiki-research-l mailing list
>>> Wiki-research-l(a)lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>>>
>>>
>>>
>>> _______________________________________________
>>> Wiki-research-l mailing list
>>> Wiki-research-l(a)lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>>>
>>>
>>
>> _______________________________________________
>> Wiki-research-l mailing list
>> Wiki-research-l(a)lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>>
>>
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>
>
Stop
Too ----------
Sent from my Nokia phone
------Original message------
From: <wikimedia-l-request(a)lists.wikimedia.org>
To: <wikimedia-l(a)lists.wikimedia.org>
Date: Wednesday, January 15, 2014 9:52:03 AM GMT+0000
Subject: Wikimedia-l Digest, Vol 118, Issue 44
Send Wikimedia-l mailing list submissions to
wikimedia-l(a)lists.wikimedia.org
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
or, via email, send a message with subject or body 'help' to
wikimedia-l-request(a)lists.wikimedia.org
You can reach the person managing the list at
wikimedia-l-owner(a)lists.wikimedia.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Wikimedia-l digest..."
Today's Topics:
1. Re: Thanking anonymous users (Oliver Keyes)
2. Re: Thanking anonymous users (Marc A. Pelletier)
3. Visually impaired (rupert THURNER)
4. Re: Visually impaired (Stevie Benton)
5. Re: Visually impaired (Gerard Meijssen)
6. Re: Visually impaired (Jon Davies)
7. Re: Visually impaired (Amir E. Aharoni)
8. Re: Visually impaired (Jon Davies)
----------------------------------------------------------------------
Message: 1
Date: Tue, 14 Jan 2014 14:02:49 -0800
From: Oliver Keyes <okeyes(a)wikimedia.org>
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Subject: Re: [Wikimedia-l] Thanking anonymous users
Message-ID:
<CAAUQgdCMx2P7cmMK67kJiTf6yLnguwbHet8TfdOgs0RRqd7+rQ(a)mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Aha. I totally agree with that, then, but I don't think it's the motivation
for this feature.
On 14 January 2014 13:28, David Gerard <dgerard(a)gmail.com> wrote:
> On 14 January 2014 21:20, Oliver Keyes <okeyes(a)wikimedia.org> wrote:
> > On 14 January 2014 12:29, Federico Leva (Nemo) <nemowiki(a)gmail.com>
> wrote:
>
> >> I'd rather call is a systemic bias which makes us favor standardised
> >> technological whizbangs just because we can measure them rather than
> for an
> >> actual effectiveness.
>
> > So you'd rather measure effectiveness through...the feeling in your
> water?
>
>
> No, he means doing things because they're susceptible to measurement,
> rather than because they're a good thing to do.
>
> The sort of thinking that leads to lightboxes over pages. "Just look
> at our response metrics!" Just look at your page.
>
>
> - d.
>
> _______________________________________________
> Wikimedia-l mailing list
> Wikimedia-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
>
--
Oliver Keyes
Product Analyst
Wikimedia Foundation
------------------------------
Message: 2
Date: Tue, 14 Jan 2014 18:02:51 -0500
From: "Marc A. Pelletier" <marc(a)uberbox.org>
To: wikimedia-l(a)lists.wikimedia.org
Subject: Re: [Wikimedia-l] Thanking anonymous users
Message-ID: <52D5C21B.2090304(a)uberbox.org>
Content-Type: text/plain; charset=UTF-8
On 01/14/2014 04:07 PM, Tilman Bayer wrote:
> but I wouldn't rule out
> the possibility that they still achieved a good approximation.
I'd wager that what they have gotten might be a poor sample; there is
certainly a correlation between being a "power/advanced" user and more
intricate talk page archiving -- so the class of users most likely to
get some kinds of barnstars would end up being the most underrepresented
in the dataset.
I haven't read their paper though - they may well have accounted for
that in some manner.
-- Marc
------------------------------
Message: 3
Date: Wed, 15 Jan 2014 09:26:10 +0100
From: rupert THURNER <rupert.thurner(a)gmail.com>
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Subject: [Wikimedia-l] Visually impaired
Message-ID:
<CAJs9aZ9p1E+ziOsDRTf8cukEUmZjPgVWyjsF=pdNAgOuoEpgKg(a)mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Hi, would anybody of you have some starting points concerning wikipedia for
visually impaired persons, both computer and mobile devices?
Rupert
------------------------------
Message: 4
Date: Wed, 15 Jan 2014 08:32:35 +0000
From: Stevie Benton <stevie.benton(a)wikimedia.org.uk>
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Subject: Re: [Wikimedia-l] Visually impaired
Message-ID:
<CACti2rLjS21BDutRx-P95Heju+VSmycCcHr5sd1p+nWN5NxEjA(a)mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi Rupert,
Wikimedia UK is currently looking at this in relation to our own wiki.
There's some thoughts and notes at
https://wikimedia.org.uk/wiki/Accessibility_of_the_Wikimedia_UK_website which
draw extensively from information provided by the Royal National Institute
of Blind People - http://www.rnib.org.uk/Pages/Home.aspx
I hope this is useful.
Stevie
On 15 January 2014 08:26, rupert THURNER <rupert.thurner(a)gmail.com> wrote:
> Hi, would anybody of you have some starting points concerning wikipedia for
> visually impaired persons, both computer and mobile devices?
>
> Rupert
> _______________________________________________
> Wikimedia-l mailing list
> Wikimedia-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
--
Stevie Benton
Head of External Relations
Wikimedia UK
+44 (0) 20 7065 0993 / +44 (0) 7803 505 173
@StevieBenton
Wikimedia UK is a Company Limited by Guarantee registered in England
and Wales, Registered No. 6741827. Registered Charity No.1144513.
Registered Office 4th Floor, Development House, 56-64 Leonard Street,
London EC2A 4LT. United Kingdom. Wikimedia UK is the UK chapter of a
global Wikimedia movement. The Wikimedia projects are run by the
Wikimedia Foundation (who operate Wikipedia, amongst other projects).
*Wikimedia UK is an independent non-profit charity with no legal
control over Wikipedia nor responsibility for its contents.*
------------------------------
Message: 5
Date: Wed, 15 Jan 2014 09:50:43 +0100
From: Gerard Meijssen <gerard.meijssen(a)gmail.com>
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Subject: Re: [Wikimedia-l] Visually impaired
Message-ID:
<CAO53wxW5QFzqNqGuqyH-6cY-gQ6G0wjjwJ+qxbVbBjKp8LQ5Pg(a)mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Hoi,
One obvious point to start is the functionality of the ULS. It already
serves one function for people who have a handicap with their perception.
It has the OpenDyslexic font for people with dyslexia. There are multiple
ways functionality can be provided who have a visual handicap. The size of
the characters can be increased, the colour scheme can be changed (some
people only see yellow on white..)
If there is one thing wrong with the ULS, it is not in the functionality
but by the utter lack of visibility. ULS is a major component of MediaWiki
and it is not given prominence, Truly how are people going to find
OpenDyslexic... (we are talking about 7 to 10% of a population)...
Work is done to get more support for webfonts on mobile phones.. It is
being developed.
Thanks,
GerardM
On 15 January 2014 09:26, rupert THURNER <rupert.thurner(a)gmail.com> wrote:
> Hi, would anybody of you have some starting points concerning wikipedia for
> visually impaired persons, both computer and mobile devices?
>
> Rupert
> _______________________________________________
> Wikimedia-l mailing list
> Wikimedia-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
------------------------------
Message: 6
Date: Wed, 15 Jan 2014 09:34:57 +0000
From: Jon Davies <jon.davies(a)wikimedia.org.uk>
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Subject: Re: [Wikimedia-l] Visually impaired
Message-ID:
<CAM7S2qruN3yOpt=eZRDO90fFTMK49q3xSGCETPrybdFCt2aZGg(a)mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
How about starting with what a 'ULS' is? That would help :)
On 15 January 2014 08:50, Gerard Meijssen <gerard.meijssen(a)gmail.com> wrote:
> Hoi,
> One obvious point to start is the functionality of the ULS. It already
> serves one function for people who have a handicap with their perception.
> It has the OpenDyslexic font for people with dyslexia. There are multiple
> ways functionality can be provided who have a visual handicap. The size of
> the characters can be increased, the colour scheme can be changed (some
> people only see yellow on white..)
>
> If there is one thing wrong with the ULS, it is not in the functionality
> but by the utter lack of visibility. ULS is a major component of MediaWiki
> and it is not given prominence, Truly how are people going to find
> OpenDyslexic... (we are talking about 7 to 10% of a population)...
>
> Work is done to get more support for webfonts on mobile phones.. It is
> being developed.
> Thanks,
> GerardM
>
>
> On 15 January 2014 09:26, rupert THURNER <rupert.thurner(a)gmail.com> wrote:
>
> > Hi, would anybody of you have some starting points concerning wikipedia
> for
> > visually impaired persons, both computer and mobile devices?
> >
> > Rupert
> > _______________________________________________
> > Wikimedia-l mailing list
> > Wikimedia-l(a)lists.wikimedia.org
> > Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> > <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
> _______________________________________________
> Wikimedia-l mailing list
> Wikimedia-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
>
--
*Jon Davies - Chief Executive Wikimedia UK*. Mobile (0044) 7803 505 169
tweet @jonatreesdavies
Wikimedia UK is a Company Limited by Guarantee registered in England and
Wales, Registered No. 6741827. Registered Charity No.1144513. Registered
Office 4th Floor, Development House, 56-64 Leonard Street, London EC2A 4LT.
United Kingdom. Wikimedia UK is the UK chapter of a global Wikimedia
movement. The Wikimedia projects are run by the Wikimedia Foundation (who
operate Wikipedia, amongst other projects).
Telephone (0044) 207 065 0990.
Visit http://www.wikimedia.org.uk/ and @wikimediauk
------------------------------
Message: 7
Date: Wed, 15 Jan 2014 11:47:32 +0200
From: "Amir E. Aharoni" <amir.aharoni(a)mail.huji.ac.il>
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Subject: Re: [Wikimedia-l] Visually impaired
Message-ID:
<CACtNa8sGO1dS9cGA0yLKU6Ud+-DzQcoJwwgKBAQTNyLgnbPsOw(a)mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
ULS is short for the Universal Language Selector extension. This tool
provides language selection, webfonts and keyboard layouts for different
languages. It appears if you click the gear icon near the interlanguage
links. Among other things, it provides the OpenDyslexic font for some
languages written in the Latin alphabet, and it is supposed to be more
comfortable to read for dyslexic people. So it can be considered and
accessibility tool, but I don't think that it is relevant for
visually-impaired people.
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
“We're living in pieces,
I want to live in peace.” – T. Moore
2014/1/15 Jon Davies <jon.davies(a)wikimedia.org.uk>
> How about starting with what a 'ULS' is? That would help :)
>
>
> On 15 January 2014 08:50, Gerard Meijssen <gerard.meijssen(a)gmail.com>
> wrote:
>
> > Hoi,
> > One obvious point to start is the functionality of the ULS. It already
> > serves one function for people who have a handicap with their perception.
> > It has the OpenDyslexic font for people with dyslexia. There are multiple
> > ways functionality can be provided who have a visual handicap. The size
> of
> > the characters can be increased, the colour scheme can be changed (some
> > people only see yellow on white..)
> >
> > If there is one thing wrong with the ULS, it is not in the functionality
> > but by the utter lack of visibility. ULS is a major component of
> MediaWiki
> > and it is not given prominence, Truly how are people going to find
> > OpenDyslexic... (we are talking about 7 to 10% of a population)...
> >
> > Work is done to get more support for webfonts on mobile phones.. It is
> > being developed.
> > Thanks,
> > GerardM
> >
> >
> > On 15 January 2014 09:26, rupert THURNER <rupert.thurner(a)gmail.com>
> wrote:
> >
> > > Hi, would anybody of you have some starting points concerning wikipedia
> > for
> > > visually impaired persons, both computer and mobile devices?
> > >
> > > Rupert
> > > _______________________________________________
> > > Wikimedia-l mailing list
> > > Wikimedia-l(a)lists.wikimedia.org
> > > Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> > > <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
> > _______________________________________________
> > Wikimedia-l mailing list
> > Wikimedia-l(a)lists.wikimedia.org
> > Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> > <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
> >
>
>
>
> --
> *Jon Davies - Chief Executive Wikimedia UK*. Mobile (0044) 7803 505 169
> tweet @jonatreesdavies
>
> Wikimedia UK is a Company Limited by Guarantee registered in England and
> Wales, Registered No. 6741827. Registered Charity No.1144513. Registered
> Office 4th Floor, Development House, 56-64 Leonard Street, London EC2A 4LT.
> United Kingdom. Wikimedia UK is the UK chapter of a global Wikimedia
> movement. The Wikimedia projects are run by the Wikimedia Foundation (who
> operate Wikipedia, amongst other projects).
> Telephone (0044) 207 065 0990.
>
> Visit http://www.wikimedia.org.uk/ and @wikimediauk
> _______________________________________________
> Wikimedia-l mailing list
> Wikimedia-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
>
------------------------------
Message: 8
Date: Wed, 15 Jan 2014 09:51:30 +0000
From: Jon Davies <jon.davies(a)wikimedia.org.uk>
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Subject: Re: [Wikimedia-l] Visually impaired
Message-ID:
<CAM7S2qpA4-r-7ZKobBv_NfXujkWC73VU-yb1wdws8484JA0EKw(a)mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
WOW on toast! Thanks Amir.
On 15 January 2014 09:47, Amir E. Aharoni <amir.aharoni(a)mail.huji.ac.il>wrote:
> ULS is short for the Universal Language Selector extension. This tool
> provides language selection, webfonts and keyboard layouts for different
> languages. It appears if you click the gear icon near the interlanguage
> links. Among other things, it provides the OpenDyslexic font for some
> languages written in the Latin alphabet, and it is supposed to be more
> comfortable to read for dyslexic people. So it can be considered and
> accessibility tool, but I don't think that it is relevant for
> visually-impaired people.
>
>
>
>
> --
> Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
> http://aharoni.wordpress.com
> “We're living in pieces,
> I want to live in peace.” – T. Moore
>
>
> 2014/1/15 Jon Davies <jon.davies(a)wikimedia.org.uk>
>
> > How about starting with what a 'ULS' is? That would help :)
> >
> >
> > On 15 January 2014 08:50, Gerard Meijssen <gerard.meijssen(a)gmail.com>
> > wrote:
> >
> > > Hoi,
> > > One obvious point to start is the functionality of the ULS. It already
> > > serves one function for people who have a handicap with their
> perception.
> > > It has the OpenDyslexic font for people with dyslexia. There are
> multiple
> > > ways functionality can be provided who have a visual handicap. The size
> > of
> > > the characters can be increased, the colour scheme can be changed (some
> > > people only see yellow on white..)
> > >
> > > If there is one thing wrong with the ULS, it is not in the
> functionality
> > > but by the utter lack of visibility. ULS is a major component of
> > MediaWiki
> > > and it is not given prominence, Truly how are people going to find
> > > OpenDyslexic... (we are talking about 7 to 10% of a population)...
> > >
> > > Work is done to get more support for webfonts on mobile phones.. It is
> > > being developed.
> > > Thanks,
> > > GerardM
> > >
> > >
> > > On 15 January 2014 09:26, rupert THURNER <rupert.thurner(a)gmail.com>
> > wrote:
> > >
> > > > Hi, would anybody of you have some starting points concerning
> wikipedia
> > > for
> > > > visually impaired persons, both computer and mobile devices?
> > > >
> > > > Rupert
> > > > _______________________________________________
> > > > Wikimedia-l mailing list
> > > > Wikimedia-l(a)lists.wikimedia.org
> > > > Unsubscribe:
> https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> > > > <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
> > > _______________________________________________
> > > Wikimedia-l mailing list
> > > Wikimedia-l(a)lists.wikimedia.org
> > > Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> > > <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
> > >
> >
> >
> >
> > --
> > *Jon Davies - Chief Executive Wikimedia UK*. Mobile (0044) 7803 505 169
> > tweet @jonatreesdavies
> >
> > Wikimedia UK is a Company Limited by Guarantee registered in England and
> > Wales, Registered No. 6741827. Registered Charity No.1144513. Registered
> > Office 4th Floor, Development House, 56-64 Leonard Street, London EC2A
> 4LT.
> > United Kingdom. Wikimedia UK is the UK chapter of a global Wikimedia
> > movement. The Wikimedia projects are run by the Wikimedia Foundation (who
> > operate Wikipedia, amongst other projects).
> > Telephone (0044) 207 065 0990.
> >
> > Visit http://www.wikimedia.org.uk/ and @wikimediauk
> > _______________________________________________
> > Wikimedia-l mailing list
> > Wikimedia-l(a)lists.wikimedia.org
> > Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> > <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
> >
> _______________________________________________
> Wikimedia-l mailing list
> Wikimedia-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
>
--
*Jon Davies - Chief Executive Wikimedia UK*. Mobile (0044) 7803 505 169
tweet @jonatreesdavies
Wikimedia UK is a Company Limited by Guarantee registered in England and
Wales, Registered No. 6741827. Registered Charity No.1144513. Registered
Office 4th Floor, Development House, 56-64 Leonard Street, London EC2A 4LT.
United Kingdom. Wikimedia UK is the UK chapter of a global Wikimedia
movement. The Wikimedia projects are run by the Wikimedia Foundation (who
operate Wikipedia, amongst other projects).
Telephone (0044) 207 065 0990.
Visit http://www.wikimedia.org.uk/ and @wikimediauk
------------------------------
_______________________________________________
Wikimedia-l mailing list
Wikimedia-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
End of Wikimedia-l Digest, Vol 118, Issue 44
********************************************
If RCOM needs more volunteer Wikimedians, the alive and well IEG Committee
includes a Research Working Group that reviews grant proposals for WMF
funding through the IEG program, so RCOM could reach out to IEGCom. I'm on
IEGCom and the RWG but I can't speak for RCOM. (:
Pine
On Thu, Jul 17, 2014 at 3:10 PM, Kerry Raymond <kerry.raymond(a)gmail.com>
wrote:
> I guess I was not so much thinking of an general invitation to the R&D
> Showcase but a specific “expectation” (albeit couched as an invitation) on
> those given permission to recruit via WMF channels to give a few short (or
> long as appropriate to the stage of their research) talks on their project.
> Ditto research projects supported through IEG or similar.
>
>
>
> I agree that OpenSym is available as a research conference but it is not
> run by our community and therefore doesn’t help to create a sense of
> community with the researchers in question. Wikimania is run by our
> community but isn’t a research conference (would not count as a publication
> for academic purposes). But I don’t know if it’s realistic to try to
> establish another conference in terms of the volunteer effort to run it.
>
>
>
> Kerry
>
>
> ------------------------------
>
> *From:* wiki-research-l-bounces(a)lists.wikimedia.org [mailto:
> wiki-research-l-bounces(a)lists.wikimedia.org] *On Behalf Of *Aaron Halfaker
> *Sent:* Friday, 18 July 2014 1:45 AM
>
> *To:* Research into Wikimedia content and communities
> *Subject:* Re: [Wiki-research-l] discussion about wikipedia surveys
>
>
>
>
>
>
>
> Kerry said:
>
> Because of the criticism of “not giving back”, could we perhaps do things
> to try to make the researcher feel part of the community to make “giving
> back” more likely? For example, could we give them a slot every now and
> again to talk about their project in the R&D Showcase? Encourage them to be
> on this mailing list. Are we at a point where it might make sense to
> organise a Wikipedia research conference to help build a research
> community? Just thinking aloud here …
>
>
>
> This is a bit different than the main topic, so I wanted to break it out
> into another reply.
>
>
>
> We just had Nate Matias[0] from the MIT media lab present on his work at
> the last showcase[1]. We also just sent out a survey about the showcase
> that includes a call for recommended speakers at future showcases[2]. As
> for a Wikipedia research conference, see OpenSym[3] (formerly WikiSym) and
> Wikimania[4] (not as researchy, but a great venue to maximize wiki research
> impact).
>
>
>
> 0. http://natematias.com/
>
> 1.
> https://www.mediawiki.org/wiki/Analytics/Research_and_Data/Showcase#July_20…
>
> 2.
> http://lists.wikimedia.org/pipermail/wiki-research-l/2014-July/003574.html
>
> 3. http://www.opensym.org/os2014/
>
> 4. https://wikimania2014.wikimedia.org/wiki/Main_Page
>
>
>
> On Thu, Jul 17, 2014 at 8:30 AM, Aaron Halfaker <aaron.halfaker(a)gmail.com>
> wrote:
>
> > Aaron, when I read that it is active because I had heard from others in
> your team about a year or two ago that this wasn't going to be the vehicle
> for obtaining permission going forward and that a new, more lightweight
> process was being designed.
>
>
>
> 1) If anyone told you that we are no longer active, they were wrong.
>
> 2) The "lightweight" process you refer to is what I linked to in enwiki in
> my previous response. See again:
> https://en.wikipedia.org/wiki/Wikipedia:Research_recruitment
>
>
>
> Generally, there seems to be a misconception that RCom == paid WMF
> activities. While RCom involves a relationship with the Wikimedia
> Foundation, our activities as part of RCom are 100% volunteer and open to
> participation from other Wikipedians (seriously, let me know if you want to
> help out!), and as such, our backlog tends to suffer when our available
> volunteer time does. FWIW, I became involved in this work as a volunteer
> (before I started working with the WMF). With that in mind, it seems like
> we are not discussing RCom itself which is mostly inactive -- so much as we
> are discussing the subject recruitment review process which is still
> active. Let me state this clearly: *If you send an email to me or Dario
> about a research project that you would like reviewed, we will help you
> coordinate a review. *Our job as review coordinators is to make sure
> that the study is adequately documented and that Wikipedians and other
> researchers are pulled in to discuss the material. We don't just welcome
> broad involvement -- we need it! We all suffer from the lack of it.
> Please show up help us!
>
>
>
> To give you some context on the current stats and situation, I should
> probably give a bit of history. I've been working to improve subject
> recruitment review -- with the goal of improving interactions between
> researchers and Wikipedians -- for years. Let me first say that *I'm
> game to make this better.* In my experience, the biggest issue to
> documenting the a review/endorsement/whatever process that I have come
> across is this: there seems to be a lot of people who feel that minimizing *process
> description* provides power and adaptability to intended processes[1].
> It's these people that I've regularly battled in my frequent efforts to
> increase the formalization around the subject recruitment proposal vetting
> process (e.g. SRAG had a structured appeals process and stated timelines).
> The result of these battles is the severely under-documented process
> "described" in meta:R:FAQ <https://meta.wikimedia.org/wiki/Research:FAQ>.
>
>
>
> Here's some links to my previous work on subject recruitment process that
> will show these old discussions about process creep
> <https://en.wikipedia.org/wiki/Wikipedia:Avoid_instruction_creep>.
>
> ·
> https://en.wikipedia.org/wiki/Wikipedia:Subject_Recruitment_Approvals_Group
>
> o
> https://en.wikipedia.org/wiki/Wikipedia_talk:Subject_Recruitment_Approvals_…
>
> ·
> https://en.wikipedia.org/w/index.php?title=Wikipedia:Research&oldid=3546001…
>
> o https://en.wikipedia.org/wiki/Wikipedia_talk:Research/Archive_1
>
> o https://en.wikipedia.org/wiki/Wikipedia_talk:Research/Archive_2 -- Note
> that this was actually an *enwiki policy* for about 5 hours before the
> RfC was overturned due to too few editors being involved in the straw poll.
>
> For new work, see my current (but stalled for about 1.5 years) push for a
> structured process on English Wikipedia.
> https://en.wikipedia.org/wiki/Wikipedia:Research_recruitment See also
> the checklist I have been working on with Lane.
> https://en.wikipedia.org/wiki/Wikipedia:Research_recruitment/Wikipedian_che…
>
>
>
> When you review these docs and the corresponding conversations, please
> keep in mind that I was a new Wikipedian for the development of WP:SRAG and
> WP:Research, so I made some really critical mistakes -- like taking
> hyperbolic criticism of the proposals personally. :\
>
>
>
> So what now? Well, in the meantime, if you let me know about some subject
> recruitment you want to do, I'll help you find someone to coordinate a
> review that fits within the process described in the RCom docs. In the
> short term, are any of you folks interested in going through some
> iterations of the new WP:Research_recruitment policy doc?
>
>
>
> -Aaron
>
>
>
> On Thu, Jul 17, 2014 at 2:38 AM, Heather Ford <hfordsa(a)gmail.com> wrote:
>
> Agree with Kerry that we really need to have a more flexible process that
> speaks to the main problem that (I think) RCOM was started to solve i.e.
> that Wikipedians were getting tired of being continually contacted by
> researchers to fill out *surveys*. I'm not sure where feelings are about
> that right now (I certainly haven't seen a huge amount of surveys myself)
> but I guess the big question right now is whether RCOM is actually active
> or not. I must say that I was surprised, Aaron, when I read that it is
> active because I had heard from others in your team about a year or two ago
> that this wasn't going to be the vehicle for obtaining permission going
> forward and that a new, more lightweight process was being designed. As
> Nathan discusses on the Wikimedia-l list, there aren't many indications
> that RCOM is still active. Perhaps there has been a recent decision to
> resuscitate it? If that's the case, let us know about it :) And then we can
> discuss what needs to happen to build a good process.
>
>
>
> One immediate requirement that I've been talking to others about is
> finding ways of making the case to the WMF as a group of researchers for
> the anonymization of country level data, for example. I've spoken to a few
> researchers (and I myself made a request about a year ago that hasn't been
> responded to) and it seems like some work is required by the foundation to
> do this anonymisation but that there are a few of us who would be really
> keen to use this data to produce research very valuable to Wikipedia -
> especially from smaller language versions/developing countries. Having an
> official process that assesses how worthwhile this investment of time would
> be to the Foundation would be a great idea, I think, but right now there
> seems to be a general focus on the research that the Foundation does itself
> rather than enabling researchers outside. I know how busy Aaron and Dario
> (and others in the team) are so perhaps this requires a new position to
> coordinate between researchers and Foundation resources?
>
>
>
> Anyway, I think the big question right now is whether there are any plans
> for RCOM that have been made by the research team and the only people who
> can answer that are folks in the research team :)
>
>
>
> Best,
>
> Heather.
>
>
> Heather Ford
> Oxford Internet Institute <http://www.oii.ox.ac.uk> Doctoral Programme
> EthnographyMatters <http://ethnographymatters.net> | Oxford Digital
> Ethnography Group <http://www.oii.ox.ac.uk/research/projects/?id=115>
> http://hblog.org | @hfordsa <http://www.twitter.com/hfordsa>
>
>
>
>
>
> On 17 July 2014 08:49, Kerry Raymond <kerry.raymond(a)gmail.com> wrote:
>
> Yes, I meant the community/communities of WMF. But the authority of the
> community derives from WMF, which chooses to delegate such matters. I think
> that “advise” is a good word to use.
>
>
>
> Kerry
>
>
>
>
> ------------------------------
>
> *From:* Amir E. Aharoni [mailto:amir.aharoni@mail.huji.ac.il]
> *Sent:* Thursday, 17 July 2014 5:37 PM
> *To:* kerry.raymond(a)gmail.com; Research into Wikimedia content and
> communities
>
>
> *Subject:* Re: [Wiki-research-l] discussion about wikipedia surveys
>
>
>
> > WMF does not "own" me as a contributor; it does not decide who can and
> cannot recruit me for whatever purposes.
>
> I don't think that it really should be about WMF. The WMF shouldn't
> enforce anything. The community can formulate good practices for
> researchers and _advise_ community members not to cooperate with
> researchers who don't follow these practices. Not much more is needed.
>
>
>
> --
> Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
> http://aharoni.wordpress.com
> “We're living in pieces,
> I want to live in peace.” – T. Moore
>
>
>
> 2014-07-17 8:24 GMT+03:00 Kerry Raymond <kerry.raymond(a)gmail.com>:
>
> Just saying here what I already put on the Talk page:
>
>
>
> I am a little bothered by the opening sentence "This page documents the
> process that researchers must follow before asking Wikipedia contributors
> to participate in research studies such as surveys, interviews and
> experiments."
>
> WMF does not "own" me as a contributor; it does not decide who can and
> cannot recruit me for whatever purposes. What WMF does own is its
> communication channels to me as a contributor and WMF has a right to
> control what occurs on those channels. Also I think WMF probably should be
> concerned about both its readers and its contributors being recruited
> through its channels (as either might be being recruited). I think this
> distinction should be made, e.g.
>
> "This page documents the process that researchers must follow if they wish
> to use Wikipedia's (WMF's?) communication channels to recruit people to
> participate in research studies such as surveys, interviews and
> experiments. Communication channels include its mailing lists, its Project
> pages, Talk pages, and User Talk pages [and whatever else I've forgotten]."
>
>
>
> If researchers want to recruit WPians via non-WMF means, I don’t think
> it’s any business of WMF’s. An example might be a researcher who wanted to
> contact WPians via chapters or thorgs; I would leave it for the
> chapter/thorg to decide if they wanted to assist the researcher via their
> communication channels.
>
>
>
> Of course, the practical reality of it is that some researchers (oblivious
> of WMF’s concerns in relation to recruitment of WPians to research
> projects) will simply use WMF’s channels without asking nicely first.
> Obviously we can remove such requests on-wiki and follow up any email
> requests with the commentary that this was not an approved request. In my
> category of [whatever else I’ve forgotten], I guess there are things like
> Facebook groups and any other social media presence.
>
>
>
> Also to be practical, if WMF is to have a process to vet research surveys,
> I think it has to be sufficiently fast and not be overly demanding to avoid
> the possibility of the researcher giving up (“too hard to deal with these
> people”) and simply spamming email, project pages, social media in the hope
> of recruiting some participants regardless. That is, if we make it too
> slow/hard to do the right thing, we effectively encourage doing the wrong
> thing. Also, what value-add can we give them to reward those who do the
> right thing? It’s nice to have a carrot as well as a stick when it comes to
> onerous processes J
>
>
>
> Because of the criticism of “not giving back”, could we perhaps do things
> to try to make the researcher feel part of the community to make “giving
> back” more likely? For example, could we give them a slot every now and
> again to talk about their project in the R&D Showcase? Encourage them to be
> on this mailing list. Are we at a point where it might make sense to
> organise a Wikipedia research conference to help build a research
> community? Just thinking aloud here …
>
>
>
> Kerry
>
>
>
>
> ------------------------------
>
> *From:* wiki-research-l-bounces(a)lists.wikimedia.org [mailto:
> wiki-research-l-bounces(a)lists.wikimedia.org] *On Behalf Of *Aaron Halfaker
> *Sent:* Thursday, 17 July 2014 6:59 AM
> *To:* Research into Wikimedia content and communities
> *Subject:* Re: [Wiki-research-l] discussion about wikipedia surveys
>
>
>
> RCOM review is still alive and looking for new reviewers (really,
> coordinators). Researchers can be directed to me or Dario (
> dtaraborelli(a)wikimedia.org) to be assigned a reviewer. There is also a
> proposed policy on enwiki that could use some eyeballs:
> https://en.wikipedia.org/wiki/Wikipedia:Research_recruitment
>
>
>
> On Wed, Jul 16, 2014 at 11:16 AM, Federico Leva (Nemo) <nemowiki(a)gmail.com>
> wrote:
>
> phoebe ayers, 16/07/2014 19:21:
>
> > (Personally, I think the answer should be to resuscitate RCOM, but
> > that's easy to say and harder to do!)
>
> IMHO in the meanwhile the most useful thing folks can do is subscribing
> to the feed of new research pages:
> <
> https://meta.wikimedia.org/w/index.php?title=Special:NewPages&feed=atom&hid…
> >
> It's easier to build a functioning RCOM out of an active community of
> "reviewers", than the other way round.
>
> Nemo
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>
>
>
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>
>
>
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>
>
>
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>
>
>
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>
>
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>
>