I was describing to someone how Wikipedia works:
"anyone can edit" etc.
He answered with this argument:
"Wikipedia is the triumph of the average person!
of the man in the street!)"
(average meaning: not good, not bad, just OK)
I asked "why?"
His explanation:
"Great, brilliant works are built by individuals.
Groups of people can only create average works.
If someone writes something good in the wiki,
other average persons will intervene in his/her
work and turn it into an average work. If someone
writes something bad in the wiki, the others will
again turn it into something of average value.
With your system (meaning: Wikipedia's system)
you can be sure that you will never create
something too bad, but also never something too
good. You can create only average articles."
The idea behind his argument was that Wikipedia
will be a good resource as long as it attracts
good contributors, but that it will soon become an
average site/encyclopaedia because it allows
anyone to join the project and edit, and most
people are just average persons, not brilliant
writers.
Do you think it's true? And how can we answer
this argument?
--Optim
On Sunday 28 July 2002 03:00 am, The Cunctator wrote:
> What are the articles this person has been changing?
For 66.108.155.126:
20:08 Jul 27, 2002 Computer
20:07 Jul 27, 2002 Exploit
20:07 Jul 27, 2002 AOL
20:05 Jul 27, 2002 Hacker
20:05 Jul 27, 2002 Leet
20:03 Jul 27, 2002 Root
20:02 Jul 27, 2002 Hacker
19:59 Jul 27, 2002 Hacker
19:58 Jul 27, 2002 Hacker
19:54 Jul 27, 2002 Principle of least astonishment
19:54 Jul 27, 2002 Hacker
19:52 Jul 27, 2002 Trance music
19:51 Jul 27, 2002 Trance music
For 208.24.115.6:
20:20 Jul 27, 2002 Hacker
For 141.157.232.26:
20:19 Jul 27, 2002 Hacker
Most of these were complete replacements with incoherent statements,
such as "TAP IS THE ABSOLUTE DEFINITION OF THE NOUN HACKER" for Hacker.
For the specifics, follow http://www.wikipedia.com/wiki/Special:Ipblocklist
and look at the contribs.
--mav
I am forwarding an email from Research-l that may be of interest
to people who subscribe to Wikipedia mailing lists.
Pine
( https://meta.wikimedia.org/wiki/User:Pine )
---------- Forwarded message ---------
From: Leila Zia <leila(a)wikimedia.org>
Date: Fri, Jan 18, 2019, 4:15 PM
Subject: [Wiki-research-l] Why the world reads Wikipedia: beyond English
To: Research into Wikimedia content and communities <
wiki-research-l(a)lists.wikimedia.org>
Hi all,
As some of you know, we started a line of research back in 2016 to
understand Wikipedia readers better. We published the first taxonomy
of Wikipedia readers and we studied and characterized the reader types
in English Wikipedia [1]. Over the past year and more, we focused on
learning about potential differences among Wikipedia readers across
languages, based on the taxonomy built in [1]. We've learned a lot, and
today we're sharing it with you.
Some pointers:
* Publication: https://arxiv.org/abs/1812.00474
* Data:
https://figshare.com/articles/Why_the_World_Reads_Wikipedia/7579937/1
* Research page on Meta (under continuous improvement):
https://meta.wikimedia.org/wiki/Research:Characterizing_Wikipedia_Reader_Be…
* Research showcase presentation:
https://www.mediawiki.org/wiki/Wikimedia_Research/Showcase#December_2018
* A series of presentations to WMF teams and the community: look for tasks
under https://phabricator.wikimedia.org/T201699 with the title "Present
the results of WtWRW" for links to slides and more info when available.
* We will send out a blog post about it hopefully soon. A blog post
about the intermediate results is at
https://wikimediafoundation.org/2018/03/15/why-the-world-reads-wikipedia/
In a nutshell:
* We ran the Wikipedia reader taxonomy survey in 14 languages,
measured the prevalence of Wikipedia use cases, and characterized
Wikipedia readers in these languages.
* While we observe similarities in the prevalence of the use cases and
in the way we can characterize readers, the Wikipedia languages show
different distributions of readership and reader characteristics. In
many cases, one-size-fits-all solutions may simply not work for readers.
* Intrinsic learning remains the number one motivation for people
to come to Wikipedia in the majority of the languages, followed by
media.
* In-depth reading and the reading of science-oriented topics are
strongly and negatively correlated with the socio-economic status and
Human Development Index of the countries the readers in these languages
come from. Long articles that may seem just too long for the
bulk of our audience in the US, Japan, and the Netherlands are in high
demand in India, Bolivia, Argentina, Panamá, México, … (A toy sketch of
this kind of correlation follows this list.)
* ...
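To make the correlation bullet concrete, here is a toy sketch of the kind
of rank correlation described above. The numbers are invented placeholders,
not the study's data (the real data is in the figshare link above):

from scipy.stats import spearmanr

# Hypothetical per-language rate of in-depth reading sessions, paired with
# the Human Development Index of the countries most of those readers come
# from. Made-up illustration values, NOT the study's data.
in_depth_rate = [0.18, 0.22, 0.31, 0.35, 0.41]
hdi = [0.93, 0.89, 0.77, 0.69, 0.60]

rho, p_value = spearmanr(in_depth_rate, hdi)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A strongly negative rho mirrors the finding: readerships from lower-HDI
# countries show more in-depth reading.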
This research would not have been possible without the extensive
contributions of our formal collaborators: Florian Lemmerich (RWTH
Aachen University) and Bob West (EPFL). On the WMF end, I was fortunate
to work with Diego Saez on this project and, more recently, Isaac
Johnson, as well as everyone in the Reading Web and Legal teams who
supported us throughout the process. I also want to underline the
amazing work of the volunteers for the languages in the study, who
supported us heavily in learning more about their languages: they not
only helped with communications within their communities but also took
on the translation task, which was not an easy one, as they were asked
to offer their time not only to translate but also to meet with us in
person so we could make sure the intent of each question was translated
the same way across the languages. Usernames Strainu, Tgr, Amire80,
Awossink, Antanana, Lyzzy, Shangkuanlc, Whym, Kaganer, عباد_ديرانية,
Satdeep_Gill, Racso, Hasive: Thank you!
Next, we are going to extend this study to include demographic
information. More details are coming in the next few weeks. (And I will
send a separate email to wikimedia-l about this topic and future
research over the weekend; I need some time to finalize the message to
make it most useful for that audience. :)
Best,
Leila
[1] https://arxiv.org/abs/1702.05379
--
Leila Zia
Senior Research Scientist, Lead
Wikimedia Foundation
_______________________________________________
Wiki-research-l mailing list
Wiki-research-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
Cool, thanks! I read this a while ago; rereading it now.
On Tue, Jan 15, 2019 at 3:28 AM Sebastian Hellmann <
hellmann(a)informatik.uni-leipzig.de> wrote:
> Hi all,
>
> let me send you a paper from 2013, which might either help directly or at
> least give you some ideas...
>
> A lemon lexicon for DBpedia, Christina Unger, John McCrae, Sebastian
> Walter, Sara Winter, Philipp Cimiano, 2013, Proceedings of 1st
> International Workshop on NLP and DBpedia, co-located with the 12th
> International Semantic Web Conference (ISWC 2013), October 21-25, Sydney,
> Australia
>
> https://github.com/ag-sc/lemon.dbpedia
>
> https://pdfs.semanticscholar.org/638e/b4959db792c94411339439013eef536fb052.…
>
> Since the mappings from DBpedia to Wikidata properties are here:
> http://mappings.dbpedia.org/index.php?title=Special:AllPages&namespace=202
> e.g. http://mappings.dbpedia.org/index.php/OntologyProperty:BirthDate
>
> You could directly use the DBpedia-lemon lexicalisation for Wikidata.
>
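A minimal sketch of what that reuse could look like; the mapping and
pattern below are simplified placeholders, not real entries from the
lemon.dbpedia lexicon:

# Hypothetical sketch: reusing a DBpedia-lemon verbalisation pattern for a
# Wikidata property via the DBpedia-to-Wikidata property mapping.
# All entries are simplified placeholders.

# DBpedia ontology property -> Wikidata property, as on mappings.dbpedia.org
dbpedia_to_wikidata = {"birthDate": "P569"}

# lemon-style verbalisation pattern for the DBpedia property
patterns = {"birthDate": "{subject} was born on {object}"}

def verbalise(wikidata_pid, subject, obj):
    # Find the DBpedia property mapped to this Wikidata P-id and apply
    # its verbalisation pattern.
    for dbp_property, pid in dbpedia_to_wikidata.items():
        if pid == wikidata_pid:
            return patterns[dbp_property].format(subject=subject, object=obj)
    raise KeyError(f"no pattern mapped to {wikidata_pid}")

print(verbalise("P569", "Douglas Adams", "11 March 1952"))
# -> Douglas Adams was born on 11 March 1952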
> The mappings can be downloaded with
>
> git clone https://github.com/dbpedia/extraction-framework
> cd extraction-framework/core
> ../run download-mappings
>
>
> All the best,
>
> Sebastian
>
>
>
>
> On 14.01.19 18:34, Denny Vrandečić wrote:
>
> Felipe,
>
> thanks for the kind words.
>
> There are a few research projects that use Wikidata to generate parts of
> Wikipedia articles - see for example https://arxiv.org/abs/1702.06235, which
> is almost as good as human-written results and beats templates by far, but only for
> the first sentence of biographies.
>
> Lucie Kaffee also has quite a body of research on that topic, and has
> worked very successfully and closely with some Wikipedia communities on
> these questions. Here's her bibliography:
> https://scholar.google.com/citations?user=xiuGTq0AAAAJ&hl=de
>
> Another project of hers is currently under review for a grant:
> https://meta.wikimedia.org/wiki/Grants:Project/Scribe:_Supporting_Under-res…
> - I would suggest taking a look and, if you are so inclined, expressing
> support. It is totally worth it!
>
> My opinion is that these projects are great for starters, and should be
> done (low-hanging fruit and all that), but won't get much further, at least
> for a while, mostly because Wikidata rarely offers more than a skeleton of
> content. A decent Wikipedia article will include much, much more content
> than what is represented in Wikidata. And if you only use that for input,
> you're limiting yourself too much.
>
> Here's a different approach based on summarization over input sources:
> https://www.wired.com/story/using-artificial-intelligence-to-fix-wikipedias… -
> this one is more promising in the short to mid term.
>
> I still maintain that the Abstract Wikipedia approach has certain
> advantages over both learned approaches, and is most aligned with Lucie's
> work. The machine-learned approaches always fall short on the dimension of
> editability, due to the black-box nature of their solutions.
>
> Also, I agree with Jeblad.
>
> That leaves the question: why is there not more discussion? Maybe because
> there is nothing substantial to discuss yet :) The two white papers are
> rather high level and the idea is not concrete enough yet, so I
> wouldn't expect much discussion on-wiki at this point. It was similar
> with Wikidata: the number of people who discussed Wikidata at this level
> of maturity was tiny; it increased considerably once an actual design plan
> was suggested, but still remained small - and then exploded once the system was
> deployed. I would be surprised and delighted if we managed to avoid this
> pattern this time, but I can't do more than publicly present the idea,
> announce plans once they are there, and hope for a timely discussion :)
>
> Cheers,
> Denny
>
>
> On Mon, Jan 14, 2019 at 2:54 AM John Erling Blad <jeblad(a)gmail.com> wrote:
>
>> An additional note: what Wikipedia urgently needs is a way to create
>> and reuse canned text (aka "templates"), and a way to adapt that text
>> to data from Wikidata. That is mostly just inflection rules, but in
>> some cases it involves grammar rules. To create larger pieces of text
>> is much harder, especially if the text is supposed to be readable.
>> Jumbling sentences together as is commonly done by various botscripts
>> does not work very well, or rather, it does not work at all.
>>
>> On Mon, Jan 14, 2019 at 11:44 AM John Erling Blad <jeblad(a)gmail.com>
>> wrote:
>> >
>> > Using an abstract language as a basis for translation has been
>> > tried before, and is almost as hard as translating between two common
>> > languages.
>> >
>> > There are two really hard problems: the implied references and
>> > the cultural context. An artificial language can get rid of the
>> > implied references, but it tends to create very weird and unnatural
>> > expressions. If the cultural context is removed, then it can be
>> > extremely hard to put it back in, and without any cultural context it
>> > can be hard to explain anything.
>> >
>> > Yes, you can make an abstract language, but it won't give you any
>> > high-quality prose.
>> >
>> > On Mon, Jan 14, 2019 at 8:09 AM Felipe Schenone <schenonef(a)gmail.com>
>> wrote:
>> > >
>> > > This is quite an awesome idea. But thinking about it, wouldn't it be
>> possible to use structured data in Wikidata to generate articles? Can't we
>> skip the need to learn an abstract language by using Wikidata?
>> > >
>> > > Also, is there discussion about this idea anywhere in the Wikimedia
>> wikis? I haven't found any...
>> > >
>> > > On Sat, Sep 29, 2018 at 3:44 PM Pine W <wiki.pine(a)gmail.com> wrote:
>> > >>
>> > >> Forwarding because this (ambitious!) proposal may be of interest to
>> people
>> > >> on other lists. I'm not endorsing the proposal at this time, but I'm
>> > >> curious about it.
>> > >>
>> > >> Pine
>> > >> ( https://meta.wikimedia.org/wiki/User:Pine )
>> > >>
>> > >>
>> > >> ---------- Forwarded message ---------
>> > >> From: Denny Vrandečić <vrandecic(a)gmail.com>
>> > >> Date: Sat, Sep 29, 2018 at 6:32 PM
>> > >> Subject: [Wikimedia-l] Wikipedia in an abstract language
>> > >> To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
>> > >>
>> > >>
>> > >> Semantic Web languages make it possible to express ontologies and knowledge
>> bases in a
>> > >> way meant to be particularly amenable to the Web. Ontologies
>> formalize the
>> > >> shared understanding of a domain. But the most expressive and
>> widespread
>> > >> languages that we know of are human natural languages, and the
>> largest
>> > >> knowledge base we have is the wealth of text written in human
>> languages.
>> > >>
>> > >> We look for a path to bridge the gap between knowledge
>> representation
>> > >> languages such as OWL and human natural languages such as English. We
>> > >> propose a project that simultaneously exposes that gap, allows
>> collaboration
>> > >> on closing it, makes progress widely visible, and is highly
>> attractive and
>> > >> valuable in its own right: a Wikipedia written in an abstract
>> language to
>> > >> be rendered into any natural language on request. This would make
>> current
>> > >> Wikipedia editors about 100x more productive, and increase the
>> content of
>> > >> Wikipedia by 10x. For billions of users this will unlock knowledge
>> they
>> > >> currently do not have access to.
>> > >>
>> > >> My first talk on this topic will be on October 10, 2018,
>> 16:45-17:00, at
>> > >> the Asilomar in Monterey, CA during the Blue Sky track of ISWC. My
>> second,
>> > >> longer talk on the topic will be at the DL workshop in Tempe, AZ,
>> October
>> > >> 27-29. Comments are very welcome as I prepare the slides and the
>> talk.
>> > >>
>> > >> Link to the paper: http://simia.net/download/abstractwikipedia.pdf
>> > >>
>> > >> Cheers,
>> > >> Denny
>> > >> _______________________________________________
>> > >> Wikimedia-l mailing list, guidelines at:
>> > >> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
>> > >> https://meta.wikimedia.org/wiki/Wikimedia-l
>> > >> New messages to: Wikimedia-l(a)lists.wikimedia.org
>> > >> Unsubscribe:
>> https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
>> > >> <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
>> > >> _______________________________________________
>> > >> Wikipedia-l mailing list
>> > >> Wikipedia-l(a)lists.wikimedia.org
>> > >> https://lists.wikimedia.org/mailman/listinfo/wikipedia-l
>> > >
>> > > _______________________________________________
>> > > Wikidata mailing list
>> > > Wikidata(a)lists.wikimedia.org
>> > > https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>> _______________________________________________
>> Wikidata mailing list
>> Wikidata(a)lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>
> _______________________________________________
> Wikidata mailing list
> Wikidata(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
> --
> All the best,
> Sebastian Hellmann
>
> Director of Knowledge Integration and Linked Data Technologies (KILT)
> Competence Center
> at the Institute for Applied Informatics (InfAI) at Leipzig University
> Executive Director of the DBpedia Association
> Projects: http://dbpedia.org, http://nlp2rdf.org,
> http://linguistics.okfn.org, https://www.w3.org/community/ld4lt
> Homepage: http://aksw.org/SebastianHellmann
> Research Group: http://aksw.org
>
Forwarding good news to the Wikipedia-l and WikiEN-l lists. I am guessing
that this tool may be of interest to Wikipedians who contribute in diverse
languages.
Thanks for your work, Johanna and the WMDE Technical Wishes Team.
Pine
( https://meta.wikimedia.org/wiki/User:Pine )
---------- Forwarded message ---------
From: Johanna Strodt via Commons-l <commons-l(a)lists.wikimedia.org>
Date: Mon, Jan 14, 2019, 4:04 AM
Subject: [Commons-l] Coming soon to all wikis: beta feature FileExporter
To: <commons-l(a)lists.wikimedia.org>
// sorry for cross-posting
The FileExporter will soon be released as a beta feature to all wikis. The
planned deployment date is January 16.
Some background: Files from local wikis should be transferred to Wikimedia
Commons if their license allows it. This way, they can be used by all
wikis. But if you wanted to import a local file to Commons in the past, you
couldn’t properly transfer its file and page history. Finding a solution to
this problem was a wish from a survey of the German-speaking communities.
Now, the FileExporter makes it possible to import a file from a local wiki
to Wikimedia Commons while keeping its history intact. A first version has
been available as a beta feature on a few initial wikis since June 2018. [1]
Since then, bugs have been fixed and features added.
If you’re interested in importing local files, please give the feature a
try. Even though it’s a beta feature, you can use it for real file
imports. Please
note that in order to get started, you need to
1. activate the beta feature “FileExporter” on your wiki [2], and
2. make sure your wiki has a proper configuration file. Configuration files
are maintained by each wiki's community. They define, among other
things, whether a file can be exported. Exports from wikis without a
configuration file are blocked. Find more information in the
documentation. [3]
We’re looking forward to your feedback on the central feedback page! [4] A
big thanks to everyone who has given feedback so far.
If you wish to learn more about the project, have a look at the page of the
wish. [5]
For the Technical Wishes team,
Johanna
[1] deployment plan:
https://meta.wikimedia.org/wiki/WMDE_Technical_Wishes/Move_files_to_Commons…
[2] go to Preferences > Beta features, e.g.
https://meta.wikimedia.org/wiki/Special:Preferences#mw-prefsection-betafeat…
[3] documentation on configuration files:
https://meta.wikimedia.org/wiki/WMDE_Technical_Wishes/Move_files_to_Commons…
[4] central feedback page:
https://www.mediawiki.org/wiki/Help_talk:Extension:FileImporter
[5] project page:
https://meta.wikimedia.org/wiki/WMDE_Technical_Wishes/Move_files_to_Commons
Johanna Strodt
Project Manager Community Communications Technical Wishlist
Wikimedia Deutschland
_______________________________________________
Commons-l mailing list
Commons-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/commons-l
Felipe,
thanks for the kind words.
There are a few research projects that use Wikidata to generate parts of
Wikipedia articles - see for example https://arxiv.org/abs/1702.06235, which
is almost as good as human-written results and beats templates by far, but
only for the first sentence of biographies.
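For context, here is a minimal sketch of the kind of first-sentence template
baseline such neural systems are compared against. The slot values and the
article helper are invented for illustration; a real generator would read
them from Wikidata statements:

def article(word):
    # Crude indefinite-article rule; one of many details templates must
    # handle explicitly.
    return "an" if word[:1].lower() in ("a", "e", "i", "o", "u") else "a"

def first_sentence(item):
    return (f"{item['label']} ({item['birth_year']}-{item['death_year']}) was "
            f"{article(item['nationality'])} {item['nationality']} "
            f"{item['occupation']}.")

# Invented record; the comments name the Wikidata properties a real system
# would read instead.
ada = {
    "label": "Ada Lovelace",        # item label
    "birth_year": 1815,             # from P569 (date of birth)
    "death_year": 1852,             # from P570 (date of death)
    "nationality": "English",       # via P27 (country of citizenship)
    "occupation": "mathematician",  # from P106 (occupation)
}
print(first_sentence(ada))
# -> Ada Lovelace (1815-1852) was an English mathematician.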
Lucie Kaffee also has quite a body of research on that topic, and has
worked very successfully and closely with some Wikipedia communities on
these questions. Here's her bibliography:
https://scholar.google.com/citations?user=xiuGTq0AAAAJ&hl=de
Another project of hers is currently under review for a grant:
https://meta.wikimedia.org/wiki/Grants:Project/Scribe:_Supporting_Under-res…
- I would suggest taking a look and, if you are so inclined, expressing
support. It is totally worth it!
My opinion is that these projects are great for starters, and should be
done (low-hanging fruit and all that), but won't get much further, at least
for a while, mostly because Wikidata rarely offers more than a skeleton of
content. A decent Wikipedia article will include much, much more content
than what is represented in Wikidata. And if you only use that for input,
you're limiting yourself too much.
Here's a different approach based on summarization over input sources:
https://www.wired.com/story/using-artificial-intelligence-to-fix-wikipedias…
-
this one is more promising in the short to mid term.
I still maintain that the Abstract Wikipedia approach has certain
advantages over both learned approaches, and is most aligned with Lucie's
work. The machine-learned approaches always fall short on the dimension of
editability, due to the black-box nature of their solutions.
Also, I agree with Jeblad.
That leaves the question: why is there not more discussion? Maybe because
there is nothing substantial to discuss yet :) The two white papers are
rather high level and the idea is not concrete enough yet, so I wouldn't
expect much discussion on-wiki at this point. It was similar with
Wikidata: the number of people who discussed Wikidata at this level of
maturity was tiny; it increased considerably once an actual design plan was
suggested, but still remained small - and then exploded once the system was deployed.
I would be surprised and delighted if we managed to avoid this pattern this
time, but I can't do more than publicly present the idea, announce plans
once they are there, and hope for a timely discussion :)
Cheers,
Denny
On Mon, Jan 14, 2019 at 2:54 AM John Erling Blad <jeblad(a)gmail.com> wrote:
> An additional note: what Wikipedia urgently needs is a way to create
> and reuse canned text (aka "templates"), and a way to adapt that text
> to data from Wikidata. That is mostly just inflection rules, but in
> some cases it involves grammar rules. To create larger pieces of text
> is much harder, especially if the text is supposed to be readable.
> Jumbling sentences together as is commonly done by various botscripts
> does not work very well, or rather, it does not work at all.
>
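As a toy illustration of the "canned text plus inflection rules" point above
(my sketch, not an existing wiki module; the data and the deliberately naive
plural rule are invented):

def pluralize(noun, count):
    # Deliberately naive English rule: add -s. Irregular forms
    # (child -> children) and other languages need real per-language
    # inflection tables, which is exactly the hard part described above.
    return noun if count == 1 else noun + "s"

def award_sentence(person, n):
    # Canned sentence pattern filled with Wikidata-style values.
    return f"{person} received {n} {pluralize('award', n)}."

print(award_sentence("Marie Curie", 2))   # ... received 2 awards.
print(award_sentence("Alan Turing", 1))   # ... received 1 award.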
> On Mon, Jan 14, 2019 at 11:44 AM John Erling Blad <jeblad(a)gmail.com>
> wrote:
> >
> > Using an abstract language as a basis for translation has been
> > tried before, and is almost as hard as translating between two common
> > languages.
> >
> > There are two really hard problems: the implied references and
> > the cultural context. An artificial language can get rid of the
> > implied references, but it tends to create very weird and unnatural
> > expressions. If the cultural context is removed, then it can be
> > extremely hard to put it back in, and without any cultural context it
> > can be hard to explain anything.
> >
> > Yes, you can make an abstract language, but it won't give you any
> > high-quality prose.
> >
> > On Mon, Jan 14, 2019 at 8:09 AM Felipe Schenone <schenonef(a)gmail.com>
> wrote:
> > >
> > > This is quite an awesome idea. But thinking about it, wouldn't it be
> possible to use structured data in Wikidata to generate articles? Can't we
> skip the need to learn an abstract language by using Wikidata?
> > >
> > > Also, is there discussion about this idea anywhere in the Wikimedia
> wikis? I haven't found any...
> > >
> > > On Sat, Sep 29, 2018 at 3:44 PM Pine W <wiki.pine(a)gmail.com> wrote:
> > >>
> > >> Forwarding because this (ambitious!) proposal may be of interest to
> people
> > >> on other lists. I'm not endorsing the proposal at this time, but I'm
> > >> curious about it.
> > >>
> > >> Pine
> > >> ( https://meta.wikimedia.org/wiki/User:Pine )
> > >>
> > >>
> > >> ---------- Forwarded message ---------
> > >> From: Denny Vrandečić <vrandecic(a)gmail.com>
> > >> Date: Sat, Sep 29, 2018 at 6:32 PM
> > >> Subject: [Wikimedia-l] Wikipedia in an abstract language
> > >> To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
> > >>
> > >>
> > >> Semantic Web languages make it possible to express ontologies and knowledge
> bases in a
> > >> way meant to be particularly amenable to the Web. Ontologies
> formalize the
> > >> shared understanding of a domain. But the most expressive and
> widespread
> > >> languages that we know of are human natural languages, and the largest
> > >> knowledge base we have is the wealth of text written in human
> languages.
> > >>
> > >> We look for a path to bridge the gap between knowledge representation
> > >> languages such as OWL and human natural languages such as English. We
> > >> propose a project that simultaneously exposes that gap, allows
> collaboration
> > >> on closing it, makes progress widely visible, and is highly attractive
> and
> > >> valuable in its own right: a Wikipedia written in an abstract
> language to
> > >> be rendered into any natural language on request. This would make
> current
> > >> Wikipedia editors about 100x more productive, and increase the
> content of
> > >> Wikipedia by 10x. For billions of users this will unlock knowledge
> they
> > >> currently do not have access to.
> > >>
> > >> My first talk on this topic will be on October 10, 2018, 16:45-17:00,
> at
> > >> the Asilomar in Monterey, CA during the Blue Sky track of ISWC. My
> second,
> > >> longer talk on the topic will be at the DL workshop in Tempe, AZ,
> October
> > >> 27-29. Comments are very welcome as I prepare the slides and the talk.
> > >>
> > >> Link to the paper: http://simia.net/download/abstractwikipedia.pdf
> > >>
> > >> Cheers,
> > >> Denny
> > >> _______________________________________________
> > >> Wikimedia-l mailing list, guidelines at:
> > >> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
> > >> https://meta.wikimedia.org/wiki/Wikimedia-l
> > >> New messages to: Wikimedia-l(a)lists.wikimedia.org
> > >> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
> ,
> > >> <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
> > >> _______________________________________________
> > >> Wikipedia-l mailing list
> > >> Wikipedia-l(a)lists.wikimedia.org
> > >> https://lists.wikimedia.org/mailman/listinfo/wikipedia-l
> > >
> > > _______________________________________________
> > > Wikidata mailing list
> > > Wikidata(a)lists.wikimedia.org
> > > https://lists.wikimedia.org/mailman/listinfo/wikidata
>
> _______________________________________________
> Wikidata mailing list
> Wikidata(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
Forwarding because this (ambitious!) proposal may be of interest to people
on other lists. I'm not endorsing the proposal at this time, but I'm
curious about it.
Pine
( https://meta.wikimedia.org/wiki/User:Pine )
---------- Forwarded message ---------
From: Denny Vrandečić <vrandecic(a)gmail.com>
Date: Sat, Sep 29, 2018 at 6:32 PM
Subject: [Wikimedia-l] Wikipedia in an abstract language
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Semantic Web languages make it possible to express ontologies and knowledge bases in a
way meant to be particularly amenable to the Web. Ontologies formalize the
shared understanding of a domain. But the most expressive and widespread
languages that we know of are human natural languages, and the largest
knowledge base we have is the wealth of text written in human languages.
We look for a path to bridge the gap between knowledge representation
languages such as OWL and human natural languages such as English. We
propose a project that simultaneously exposes that gap, allows collaboration
on closing it, makes progress widely visible, and is highly attractive and
valuable in its own right: a Wikipedia written in an abstract language to
be rendered into any natural language on request. This would make current
Wikipedia editors about 100x more productive, and increase the content of
Wikipedia by 10x. For billions of users this will unlock knowledge they
currently do not have access to.
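As a purely speculative illustration of "an abstract language rendered into
any natural language on request" (the paper prescribes no concrete syntax;
the record format and renderers below are invented):

# One language-independent content record, rendered into two languages on
# request. Everything here is invented for illustration.
labels = {
    "en": {"Q1339": "Johann Sebastian Bach", "composer": "composer"},
    "de": {"Q1339": "Johann Sebastian Bach", "composer": "Komponist"},
}

content = {"constructor": "InstanceOf",
           "instance": "Q1339",   # Wikidata item for J. S. Bach
           "class": "composer"}

def render(c, lang):
    words = labels[lang]
    if lang == "en":
        return f"{words[c['instance']]} was a {words[c['class']]}."
    if lang == "de":
        return f"{words[c['instance']]} war ein {words[c['class']]}."
    raise ValueError(f"no renderer for {lang}")

print(render(content, "en"))  # Johann Sebastian Bach was a composer.
print(render(content, "de"))  # Johann Sebastian Bach war ein Komponist.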
My first talk on this topic will be on October 10, 2018, 16:45-17:00, at
the Asilomar in Monterey, CA during the Blue Sky track of ISWC. My second,
longer talk on the topic will be at the DL workshop in Tempe, AZ, October
27-29. Comments are very welcome as I prepare the slides and the talk.
Link to the paper: http://simia.net/download/abstractwikipedia.pdf
Cheers,
Denny
_______________________________________________
Wikimedia-l mailing list, guidelines at:
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
https://meta.wikimedia.org/wiki/Wikimedia-l
New messages to: Wikimedia-l(a)lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
<mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
Hi Janna,
I am forwarding this announcement to the Wikipedia-l and WikiEN-l mailing lists.
The presentation looks interesting to me.
Do you know which Wikipedia language edition(s) the author studied for this
research?
Pine
( https://meta.wikimedia.org/wiki/User:Pine )
On Thu, Jan 10, 2019, 10:49 AM Janna Layton <> wrote:
> Hello, everyone,
>
> The next Research Showcase, *Understanding participation in Wikipedia*,
> will be live-streamed next Wednesday, January 16, at 11:30 AM PST/19:30
> UTC. This presentation is about new editors.
>
> YouTube stream: https://www.youtube.com/watch?v=Fc51jE_KNTc
>
> As usual, you can join the conversation on IRC at #wikimedia-research. You
> can also watch our past research showcases here:
> https://www.mediawiki.org/wiki/Wikimedia_Research/Showcase
>
> This month's presentation:
>
> *Understanding participation in Wikipedia: Studies on the relationship
> between new editors’ motivations and activity*
>
> By Martina Balestra, New York University
>
> Peer production communities like Wikipedia often struggle to retain
> contributors beyond their initial engagement. Theory suggests this may be
> related to their levels of motivation, though prior studies either center
> on contributors’ activity or use cross-sectional survey methods, and
> overlook accompanied changes in motivation. In this talk, I will present a
> series of studies aimed at filling this gap. We begin by looking at how
> Wikipedia editors’ early motivations influence the activities that they
> come to engage in, and how these motivations change over the first three
> months of participation in Wikipedia. We then look at the relationship
> between editing activity and intrinsic motivation specifically over time.
> We find that new editors’ early motivations are predictive of their future
> activity, but that these motivations tend to change with time. Moreover,
> newcomers’ intrinsic motivation is reinforced by the amount of activity
> they engage in over time: editors who had a high level of intrinsic
> motivation entered a virtuous cycle where the more they edited the more
> motivated they became, whereas those who initially had low intrinsic
> motivation entered a vicious cycle. Our findings shed new light on the
> importance of early experiences and reveal that the relationship between
> motivation and activity is more complex than previously understood.
>
> --
> Janna Layton
> Administrative Assistant - Audiences & Technology
> Wikimedia Foundation <https://wikimediafoundation.org/>
> _______________________________________________
> Analytics mailing list
> Analytics(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/analytics
>