Hi Folks,
I am in need of 6 more Wikipedians to complete my interview study on what AI means for Wikipedia. No personally identifiable information will be required of you.
Please email me if you or a Wikipedian you know may be interested and available or have questions. I am hoping to complete interviews within the next 2 weeks.
Details below:
Interview Request - University of Cambridge Study
My name is Aarshin Karande and I am a student at the University of Cambridge enrolled in the MSt, AI Ethics & Society program administered by the Leverhulme Centre for the Future of Intelligence<http://lcfi.ac.uk/>. The summative coursework for this program is an original research project submitted through a dissertation.
My dissertation will examine the uses of AI on Wikipedia and how Wikipedians are implicated by them. I am inviting Wikipedians to participate in 45- to 65-minute interviews. In these interviews, we will discuss:
* Your background as a Wikipedian
* The work you do on the platform
* Your observations about how Wikipedia has changed over time
* Your comments about AI
* Your ideas about what AI means for Wikipedia
* Anything else you may find relevant and important to this topic
This project is looking for 10 participants. Interviews will be conducted throughout August 2024 remotely via Zoom. Participants' identities will be anonymized to remove any personally identifying information.
If you would like to participate in this study, please message me at ak2471(a)cam.ac.uk. For further information, please refer to the participant information sheet<https://drive.google.com/file/d/1UHk4eexdkx6NSi_BWB0nAo_ZqnOAP-RN/view?usp=…>. If you have any questions or concerns about this project, please message me at ak2471(a)cam.ac.uk.
Cheers,
-Aarshin
Aarshin Karande
MSt Candidate, AI Ethics & Society
Hughes Hall, University of Cambridge
email<mailto:ak2471@cam.ac.uk> • phone<tel:14257498056> • linkedin<http://www.linkedin.com/in/aarshinkarande> • website<http://www.aarshinkarande.com/>
Hello all,
Sorry for cross-posting.
The Technical Decision-Making Forum Retrospective
<https://www.mediawiki.org/wiki/Technical_decision_making> team invites you
to join one of our “listening sessions” about Wikimedia's technical
decision-making processes.
We are running the listening sessions to provide a venue for people to tell
us about their experience, thoughts, and needs regarding the process of
making technical decisions across the Wikimedia technical spaces. This
complements the survey
<https://wikimediafoundation.limesurvey.net/885471?lang=en>, which closed
on August 7.
Who should participate in the listening sessions?
People who do technical work that relies on software maintained by the
Wikimedia Foundation (WMF) or affiliates. If you contribute code to
MediaWiki or extensions used by Wikimedia, or you maintain gadgets or tools
that rely on WMF infrastructure, and you want to tell us more than could be
expressed through the survey, the listening sessions are for you.
How can I take part in a listening session?
There will be four sessions on two days, to accommodate all time zones. The
first two sessions are scheduled:
- Wednesday, September 13, 14:00 – 14:50 UTC
<https://zonestamp.toolforge.org/1694613630>
- Wednesday, September 13, 20:00 – 20:50 UTC
<https://zonestamp.toolforge.org/1694635220>
The sessions will be held on the Zoom platform.
If you want to participate, please sign up for the one you want to attend: <
https://www.mediawiki.org/wiki/Technical_decision_making/Listening_Sessions>.
If none of the times work for you, please leave a message on the talk page.
It will help us schedule the last two sessions.
The sessions will be held in English. If you want to participate but you
are not comfortable speaking English, please say so when signing up so that
we can provide interpretation services.
The sessions will be recorded and transcribed so we can later go back and
extract all relevant information. The recordings and transcripts will not
be made public, except for anonymized summaries of the outcomes.
What will the Retrospective Team do with the information?
The retrospective team will collect the input provided through the survey,
the listening sessions and other means, and will publish an anonymized
summary that will help leadership make decisions about the future of the
process.
In the listening sessions, we particularly hope to gather information on
the general needs and perceptions about decision-making in our technical
spaces. This will help us understand what kind of decisions happen in the
spaces, who is involved, who is impacted, and how to adjust our processes
accordingly.
Are the listening sessions the best way to participate?
The primary way for us to gather information about people’s needs and wants
with respect to technical decision-making was the survey
<https://wikimediafoundation.limesurvey.net/885471?lang=en>. The listening
sessions are an important addition: they provide a venue for free-form
conversations, so we can learn about aspects that do not fit well into the
structure of the survey.
In addition to the listening sessions and the survey, there are two more
ways to share your thoughts about technical decision making: You can post
on the talk page
<https://www.mediawiki.org/wiki/Talk:Technical_decision_making/Technical_Dec…>,
or you can send an email to <tdf-retro-2023(a)lists.wikimedia.org>.
Where can I find more information?
There are several places where you can find more information about the
Technical Decision-Making Process Retrospective:
- The original announcement about the retrospective from Tajh Taylor:
https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists.wikimedia.org/…
- The Technical Decision-Making Process general information page:
https://www.mediawiki.org/wiki/Technical_decision_making
- The Technical Decision-Making Process Retrospective page:
https://www.mediawiki.org/wiki/Technical_decision_making/Technical_Decision…
- The Phabricator ticket: https://phabricator.wikimedia.org/T333235
Who is running the technical decision-making retrospective?
The retrospective was initiated by Tajh Taylor. The core group running the
process consists of Moriel Schottlender (chair), Daniel Kinzler, Chris
Danis, Kosta Harlan, and Temilola Adeleye. You can contact us at <
tdf-retro-2023(a)lists.wikimedia.org>.
Thank you for participating!
--
Benoît Evellin - Trizek (he/him)
Community Relations Specialist
Wikimedia Foundation <https://wikimediafoundation.org/>
Hi,
My name is Anna Yuan and I am an undergraduate student working under the
supervision of Dr. Haiyi Zhu <https://haiyizhu.com/> in the HCI Department
at Carnegie Mellon University. Our team is currently conducting a research
study on Wikipedia's ORES system. Our research focuses on exploring
opportunities to better communicate the affordances of the ORES system and
thus help people effectively design and use ORES-based applications.
If you have developed or used any ORES-based application, we would love to
invite you to participate in this research. The study will be an
interview that takes approximately 45 minutes. During the interview, I will
ask you about your background, your current experience of using ORES and
ORES-based applications, and your suggestions on how to improve the
ecosystem.
All participants will be offered $20 Amazon gift cards. If you are
interested in taking part in this research or would like more information,
please reply to this email and let me know.
I am looking forward to your response.
Best,
Anna
Wiki Account: https://en.wikipedia.org/wiki/User:Bobo.03
FYI
-------- Forwarded message --------
Subject: [Xmldatadumps-l] Your comments needed (long term dumps rewrite?)
Date: Thu, 19 Feb 2015 12:30:01 +0200
From: Ariel Glenn WMF <ariel(a)wikimedia.org>
To: Xmldatadumps-l(a)lists.wikimedia.org
The MediaWiki Core team has opened a discussion about getting more
involved in and maybe redoing the dumps infrastructure. A good starting
point is to understand how folks use the dumps already or want to use
them but can't, and some questions about that are listed here:
https://www.mediawiki.org/wiki/Wikimedia_MediaWiki_Core_Team/Backlog/Improv…
I've added some notes, but please go weigh in. Don't be shy about what
you do or what you need; this is the time to get it all on the table.
Ariel
As brought to my attention by Max, through a post on his talk page:
https://meta.wikimedia.org/wiki/User_talk:MZMcBride#HTTPS_switch
== The problem ==
Basically, some user-defined JS out there has "http://" hard-coded, and
when you request insecure (non-HTTPS) resources during a secure session,
you get errors in many modern browsers.
This is a heads-up that this user-created code might need some special
attention before and after the switchover.
== How to fix ==
The best option is to use "protocol-relative" URLs: these are URLs that
start with "//someurl" instead of "http://", "https://", "gopher://", etc.
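As a rough illustration only (a sketch of mine, not part of the official
instructions): the fix amounts to dropping the scheme, and a small Python
script can do the rewrite mechanically. The user-script line below is made up.

import re

# Made-up example of the kind of line found in user-defined JS.
snippet = 'importScriptURI("http://en.wikipedia.org/w/index.php?title=User:Example/foo.js");'

# "http://host/..." and "https://host/..." both become "//host/...".
fixed = re.sub(r'https?:(//)', r'\1', snippet)

print(fixed)
# importScriptURI("//en.wikipedia.org/w/index.php?title=User:Example/foo.js");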
Here's a blog post about when we started using protocol-relative URLs in
MediaWiki and at WMF:
https://blog.wikimedia.org/2011/07/19/protocol-relative-urls-enabled-on-tes…
I've put up basic instructions on the HTTPS page on metawiki:
https://meta.wikimedia.org/wiki/HTTPS#Help.21_My_code_is_broken.21
Thanks for getting this out to the appropriate channels!
Greg
cc'ing Wikibots-l(a)lists.wikimedia.org because there might be some
overlap of affected people on that list.
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
This has implications for bot owners (all users, including bot users,
will be forced to HTTPS instead of HTTP when logged in).
Please review your code to make sure it won't break on Wednesday :)
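As one rough example of what such a review might involve (my own sketch, not
from the announcement below): if your bot has an http:// API endpoint
hard-coded, you can swap the scheme and confirm the HTTPS endpoint answers.
The URL here is just a placeholder; substitute the wiki your bot talks to.

import requests

old_url = "http://en.wikipedia.org/w/api.php"  # hypothetical hard-coded endpoint
new_url = old_url.replace("http://", "https://", 1)

# Ask for basic site info over HTTPS and fail loudly if the request doesn't work.
resp = requests.get(
    new_url,
    params={"action": "query", "meta": "siteinfo", "format": "json"},
    timeout=30,
)
resp.raise_for_status()
print("HTTPS endpoint responds:", resp.json()["query"]["general"]["sitename"])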
Greg
----- Forwarded message from Greg Grossmeier <greg(a)wikimedia.org> -----
> Date: Mon, 19 Aug 2013 17:00:09 -0700
> From: Greg Grossmeier <greg(a)wikimedia.org>
> To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>, Wikitech Ambassadors
> <wikitech-ambassadors(a)lists.wikimedia.org>
> Subject: HTTPS for logged in users on Wednesday August 21st
>
> Hello all,
>
> As we outlined in our blog post on the future of HTTPS at the Wikimedia
> Foundation[0], the plan is to enable HTTPS by default for logged in
> users on August 21st, this Wednesday.
>
> We are still on target for that rollout date.
>
> As this can have severe consequences for users where HTTPS is blocked by
> governments/network operators *in addition to* users who connect to
> Wikimedia sites via high latency connections, we've set up a page on
> MetaWiki[1] describing what is going on and what it means for users and
> what they can do to report problems.
>
> Please help watch out for any unintended consequences on August 21st and
> report any negative issues to us as soon as you can. Bugzilla[2], IRC
> (#wikimedia-operations), or the (forthcoming) OTRS email are all fine.
> Also, feel free to email myself or ping me directly on IRC.
>
> Best,
>
> Greg
>
> [0] https://blog.wikimedia.org/2013/08/01/future-https-wikimedia-projects/
> [1] https://meta.wikimedia.org/wiki/HTTPS
> [2] https://bugzilla.wikimedia.org
>
> --
> | Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
> | identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
----- End forwarded message -----
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
User:Sadads and I are currently working on an academic article related
to the coverage of historical information on Wikipedia (see the
outline of our methods below). We were thinking that some of the
existing bots or perhaps people in the bot-writing community might be
able to help us write some scripts that will pull the information we
need off wiki. We are not script writers, but we think what we want to
do is pretty easy. Please let us know if you can help us out! Thanks!
Adrianne (User:Wadewitz) and Alex (User:Sadads)
______________________________________________________________________
In this study, we analyze the ways in which Wikipedia articles approach
historiography. We approached our analysis in two ways: quantitatively and
qualitatively. In our quantitative approach, we followed these procedures:
First, we look at the number of different sources the article cites.
We determined this by running a script over the article that counted
the number of discrete citations in the footnotes and works cited.
Because many articles have a large number of sources but rely on a
small number of them for much of their information, we also look at
how often each source is used and whether any one source is used
disproportionately. While a heavily used source may itself be reliable, we
have found that disproportionate reliance on a single source is a marker of
an article that presents only one historiographical viewpoint.
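A very rough sketch of what such a script might look like (only a guess at an
approach, counting footnote definitions and named-reference reuse in an
article's wikitext; works-cited sections would need separate handling):

import re
from collections import Counter

import requests

def fetch_wikitext(title):
    """Fetch the current wikitext of an English Wikipedia article."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "parse", "page": title, "prop": "wikitext", "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["parse"]["wikitext"]["*"]

def citation_counts(wikitext):
    """Return (number of footnote definitions, Counter of named-reference reuse)."""
    definitions = re.findall(r"<ref[^>/]*>.*?</ref>", wikitext,
                             flags=re.DOTALL | re.IGNORECASE)
    reuses = re.findall(r'<ref\s+name\s*=\s*"?([^">/]+?)"?\s*/>', wikitext,
                        flags=re.IGNORECASE)
    return len(definitions), Counter(reuses)

# Example article chosen arbitrarily:
total, reuse = citation_counts(fetch_wikitext("Charles Dickens"))
print("footnote definitions:", total)
print("most-reused named refs:", reuse.most_common(5))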
Second, we are also interested in the types of sources used. So, using
a script to check the publication information and template information
of the sources, we analyzed the ratio of journal to book to newspaper
to web sources. Moreover, because articles that have a wide span of
publication dates tend to have a good representation of
historiography, we also analyzed the publication dates of the sources.
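Again only as a rough guess at an approach, a script for this step might tally
citation templates by type and pull publication years out of their parameters
(the template names are the common English Wikipedia ones; the function takes
wikitext such as that fetched in the previous sketch):

import re
from collections import Counter

TYPE_TEMPLATES = {
    "journal": "cite journal",
    "book": "cite book",
    "newspaper": "cite news",
    "web": "cite web",
}

def source_profile(wikitext):
    """Return (Counter of source types, (earliest year, latest year))."""
    types = Counter()
    for kind, template in TYPE_TEMPLATES.items():
        types[kind] = len(re.findall(r"\{\{\s*" + re.escape(template),
                                     wikitext, flags=re.IGNORECASE))
    # Very rough: take any four-digit year from |year= or |date= parameters.
    years = [int(y) for y in
             re.findall(r"\|\s*(?:year|date)\s*=\s*[^|}]*?(\d{4})", wikitext)]
    year_range = (min(years), max(years)) if years else (None, None)
    return types, year_range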
Third, we searched the articles for the following words, based on a
preliminary survey of 25 articles that we used as an initial sample. These words
indicated that the articles approached history and historiography from
an ambiguous or debatable position: “probably”, “possibly”, “on the
other hand”, “one view”, “bias”, “perspectives”. We also searched for
sections such as “Historiography”, “Modern view”, “Legacy”, and
“Assessment”.
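A matching sketch for the keyword and section search (again only an
illustration; the matching is deliberately naive and the lists are the ones
given above):

import re
from collections import Counter

HEDGE_PHRASES = ["probably", "possibly", "on the other hand",
                 "one view", "bias", "perspectives"]
SECTION_HEADINGS = ["Historiography", "Modern view", "Legacy", "Assessment"]

def hedging_profile(wikitext):
    """Count hedging phrases and list which of the named sections appear."""
    lowered = wikitext.lower()
    phrase_counts = Counter({p: lowered.count(p) for p in HEDGE_PHRASES})
    sections_found = [h for h in SECTION_HEADINGS
                      if re.search(r"^=+\s*" + re.escape(h) + r"\s*=+\s*$",
                                   wikitext, flags=re.IGNORECASE | re.MULTILINE)]
    return phrase_counts, sections_found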
We chose to analyze 19th-century FA, GA, and B articles. The GA and FA
articles have undergone a review process on Wikipedia and thus should
be better. We excluded any B article that had been through a peer
review on the site, as we wanted to contrast articles that had and had not
been through Wikipedia's content review process. We wanted to know what the
“best” articles Wikipedia had to offer before and after comment by the
community. We also chose this field as both of us have some
familiarity with the time period but neither of us had worked
extensively on the articles, so there was no conflict of interest. We
also excluded any military history articles because of the significant
difference in historiographic focus of the military history community.
Additionally, the Wikipedia community has significantly more coverage of
military history, both in the number of articles and in the level of
commitment to that subtopic within the community; WikiProject Military
History is one of the most active projects and has a different standard of
topic coverage.
--
Dr. Adrianne Wadewitz
Mellon Digital Scholarship Fellow
Center for Digital Learning + Research
Occidental College
http://www.oxy.edu/center-digital-learning-research/about
https://sites.google.com/site/wadewitz/
Congratulations to everyone involved. I'd love to have a look at it and propose it for Tamil Wikipedia. Were the Swedish common names obtained by manual translation?
Regards,
Sundar
"That language is an instrument of human reason, and not merely a medium for the expression of thought, is a truth generally admitted."
- George Boole, quoted in Iverson's Turing Award Lecture
>________________________________
> From: "wikibots-l-request(a)lists.wikimedia.org" <wikibots-l-request(a)lists.wikimedia.org>
>To: wikibots-l(a)lists.wikimedia.org
>Sent: Thursday, October 18, 2012 5:30 PM
>Subject: Wikibots-l Digest, Vol 40, Issue 2
>
>Send Wikibots-l mailing list submissions to
> wikibots-l(a)lists.wikimedia.org
>
>To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.wikimedia.org/mailman/listinfo/wikibots-l
>or, via email, send a message with subject or body 'help' to
> wikibots-l-request(a)lists.wikimedia.org
>
>You can reach the person managing the list at
> wikibots-l-owner(a)lists.wikimedia.org
>
>When replying, please edit your Subject line so it is more specific
>than "Re: Contents of Wikibots-l digest..."
>
>
>Today's Topics:
>
> 1. A bot to create articles about species (Lars Aronsson)
> 2. Re: [Wikitech-l] A bot to create articles about species
> (Nikola Smolenski)
>
>
>----------------------------------------------------------------------
>
>Message: 1
>Date: Thu, 18 Oct 2012 03:26:20 +0200
>From: Lars Aronsson <lars(a)aronsson.se>
>To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>, Wikimedia
> bot editors discussion <wikibots-l(a)lists.wikimedia.org>
>Subject: [Wikibots-l] A bot to create articles about species
>Message-ID: <507F5ABC.5080400(a)aronsson.se>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>User:Lsj has written 4000 lines of C# source code on top
>of the DotNetWikiBot framework, to create 10,000 articles
>in Swedish about bird species in the spring of 2012 and
>recently even more articles in Swedish about fungi species.
>
>Some information about his Lsjbot is found here,
>http://sv.wikipedia.org/wiki/Wikipedia:Projekt_DotNetWikiBot_Framework/Lsjb…
>
>The otherwise very reluctant/skeptical/picky Swedish Wikipedia
>community has gladly accepted these well-written articles.
>
>I think it would be interesting if a community of wikipedians
>in some other language would try to translate this bot.
>Some languages might have notability or relevance requirements
>that these species don't fulfill, others might think 1700
>bytes is a too short article. But I think the citation of
>sources and correctness of fact would be generally accepted.
>
>Here is a blog post in Swedish about the bird articles,
>http://wikimediasverige.wordpress.com/2012/03/06/10-000-fagelarter-pa-svens…
>
>Some 3,600 birds are found in this category for articles
>that were bot-created and have not yet been inspected,
>http://sv.wikipedia.org/wiki/Kategori:Robotskapade_f%C3%A5gelartiklar
>Some 54,000 fungi species are found here,
>http://sv.wikipedia.org/wiki/Kategori:Robotskapade_svampartiklar
>The birds more often have common names, which are preferred
>as article names instead of the Latin/scientific names,
>e.g. the blue-and-white swallow,
>http://sv.wikipedia.org/wiki/Bl%C3%A5vit_svala
>where the Latin name is a bot-created redirect to the
>bot-created article,
>http://sv.wikipedia.org/wiki/Pygochelidon_cyanoleuca
>
>At the Swedish Wikipedia village pump there is now a
>discussion of whether to continue with species of animals,
>plants, bacteria, etc.
>http://sv.wikipedia.org/wiki/Wikipedia:Bybrunnen#Botskapande_av_artiklar_f.…
>
>
>--
> Lars Aronsson (lars(a)aronsson.se)
> Aronsson Datateknik - http://aronsson.se
>
>
>
>
>
>------------------------------
>
>Message: 2
>Date: Thu, 18 Oct 2012 08:46:22 +0200
>From: Nikola Smolenski <smolensk(a)eunet.rs>
>To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
>Cc: Wikimedia bot editors discussion <wikibots-l(a)lists.wikimedia.org>
>Subject: Re: [Wikibots-l] [Wikitech-l] A bot to create articles about
> species
>Message-ID: <507FA5BE.5060308(a)eunet.rs>
>Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>On 18/10/12 03:26, Lars Aronsson wrote:
>> User:Lsj has written 4000 lines of C# source code on top
>> of the DotNetWikiBot framework, to create 10,000 articles
>> in Swedish about bird species in the spring of 2012 and
>> recently even more articles in Swedish about fungi species.
>>
>> Some information about his Lsjbot is found here,
>> http://sv.wikipedia.org/wiki/Wikipedia:Projekt_DotNetWikiBot_Framework/Lsjb…
>>
>> The otherwise very reluctant/skeptic/picky Swedish Wikipedia
>> community has gladly accepted these well-written articles.
>>
>> I think it would be interesting if a community of wikipedians
>> in some other language would try to translate this bot.
>> Some languages might have notability or relevance requirements
>> that these species don't fulfill, others might think 1700
>> bytes is a too short article. But I think the citation of
>> sources and correctness of fact would be generally accepted.
>
>The need for such bots should cease after Wikidata is fully deployed. I
>suggest that interested programmers direct their efforts there.
>
>
>------------------------------
>
>_______________________________________________
>Wikibots-l mailing list
>Wikibots-l(a)lists.wikimedia.org
>https://lists.wikimedia.org/mailman/listinfo/wikibots-l
>
>
>End of Wikibots-l Digest, Vol 40, Issue 2
>*****************************************
>
>
>
Hi folks :)
As you might have heard, we've been working on Wikidata
(http://meta.wikimedia.org/wiki/Wikidata) for a few months now. The
first phase of Wikidata is about centralizing language links in
Wikidata. We are getting close to a first deployment of this on the
Hungarian Wikipedia and then probably the Hebrew or Italian Wikipedia.
I wanted to give you a heads-up about this.
A few people have already started working on bots to migrate language
links to Wikidata where desired. (The current system will continue to
work.) This includes the PyWikidata library. You can find the current
state here: http://meta.wikimedia.org/wiki/Wikidata/Bots.
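For orientation, here is a minimal sketch of the read half of such a bot,
using only the standard MediaWiki API (the page title is an arbitrary example,
and writing the links into Wikidata is left out, since the Wikidata API is
still expected to change; see below):

import requests

def get_langlinks(title, lang="hu"):
    """Return {language code: linked title} for a page on the given Wikipedia."""
    resp = requests.get(
        "https://%s.wikipedia.org/w/api.php" % lang,
        params={"action": "query", "prop": "langlinks", "titles": title,
                "lllimit": "max", "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    page = next(iter(resp.json()["query"]["pages"].values()))
    return {ll["lang"]: ll["*"] for ll in page.get("langlinks", [])}

print(get_langlinks("Budapest"))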
I also wanted to let you know that we'll unfortunately have to break
the API after deployment. I'll keep you updated on this. It might be
wise to wait for that to happen before transitioning existing bots.
Please let me know about any questions you have.
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Wikidata
Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as a charitable
organization by the Finanzamt für Körperschaften I Berlin, tax number
27/681/51985.