Hi all!
Sorry for the delay - I had a super jam-packed weekend, and the upcoming week is just as busy.
A few points. One, thank you for the feedback! In general, I love feedback
and criticism, and I definitely got it :) Two, I didn't realize this was a
*wiki only* related research channel, so I'll try to bear that in mind in
the future when sharing things I am writing or have written.
But thirdly and lastly, this is not an academic article. This is an article
published in a design magazine about research related to ethics within
product design, specifically products utilizing machine learning and
artificial intelligence. Though, I would love to write an academic paper on
the ethics of design utilizing machine learning *in* product design. If
that sounds interesting to any of you, please get at me. I love to
collaborate.
So, the tone of voice is *quite* snarky, but I stand by it, again because
this was written for Fast Company. I have much more academic writing, if
you are interested in reading that, but it is on online harassment and
automation. This article is designed to be a primer of information for
product designers who may have heard Elon Musk focusing on the dangers of
AI.
There are plenty of things to worry about in the future of AI, like the
integration of artificial intelligence into the military or drones, for
example. But publicly, there are no cases of that. There are, publicly, a
variety of investigations done by ProPublica, which I link to in my
article, about predictive policing and its racial bias. The article itself
is designed to be *approachable* for all readers, *especially non-technical
readers*. And this piece, in its tone, which I stand by, was designed to
jokingly respond to Musk's hyperbolic freak-out.
This is, instead, an article designed for lay people, and everyday
designers, to think about the current issues with AI, examples of implicit
bias in machine learning products right now, and other articles to read and
videos to watch. What this is, really, is a class syllabus wrapped in a
layer of a very genial tone, so everyday designers have something to chew
on and some real information to grasp.
There aren't a lot of resources for everyday designers out there. There are
not a lot of resources for startups, product managers, designers, front-end
developers, etc. on what exists in this new and emerging field of
artificial intelligence and how it shows up in products already out in the
world. Truth be told, this is an article I wrote for my old coworkers at
IBM Watson Design, on why we need a real conversation about how to design
ethically, how to build products using machine learning ethically, and what
questions you should ask about what you are building and why. I saw and had
*very few* of those conversations. I am writing for *those plumbers* who
are out there making things right now, who have bad leadership and bad
guidance, but are generally excited about product design and the future of
AI, and who also have to ship their products now. Because I am, also, a
plumber. What I am doing *right now* at the Wikimedia Foundation is the
fantastically weird but unsexy job of designing tools and UI to mitigate
online harassment while studying on-wiki harassment. It's not just research
but a design schedule of rolling out tools quickly for the community to
mitigate the onslaught of a lot of very real problems that are happening as
we speak. I love it, and I love the research that I'm doing, because it's
about the present and the future.
Plumbing is important; it's how we all avoid cholera. Future city planning
is important; it's how larger society functions together. Both are
important.
I think we're really lucky to work where we all work and to be a part of
this community. We get to question, openly and transparently, we get to
solicit feedback, and we get to work on very meaningful software. Not every
technologist or researcher is as lucky as we are. And those are the
technologists I am most keen to talk to: what does it mean to fold in a
technology that you don't understand very well? How do you design and
utilize design thinking to make *something right now*, and how do you do
that without recreating a surveillance tool? It's really hard if you don't
understand how to think about the threat model of your product, of what you
intend to make and how it can be used to harm. There are so few primers for
designers on thinking about products from an ethical standpoint, and from a
standpoint of implicit bias. All of these are such important things to talk
about when you are building products that use algorithms and data, because
the algorithm plus the data really will determine what your product does,
more so than the design intends.
But you all know this already; it's lots of other people who don't :)
Best,
Caroline
P.S. The briefest, tiniest of FYIs: in online harassment and security,
plumbers have a *hyper-specific* connotation to them
<https://hypatia.ca/2016/06/21/no-more-rock-stars/>.
On Mon, Aug 28, 2017 at 2:17 PM, Aaron Halfaker <aaron.halfaker(a)gmail.com>
wrote:
OK ok. There's some hyperbole in this article, and we are the type of
people bent on citations and support. This isn't a research publication,
and Caroline admits in the beginning that she's going to get into a bit of
a lecturing tone.
But honestly I liked the article. It makes a good point and pushes a
sentiment that I share. Hearing about killer robots turning on humanity is
sort of like hearing someone tell you that they are worried about global
warming on Mars for future civilizations there when we ought to be more
alarmed and focused on the coastal cities on Earth right now. We have so
many pressing issues with AIs that are affecting people right now that the
future-focused alarm is, well, a bit alarmist! Honestly, I think that's
the side of AI that lay people understand, while the nuanced issues present
in the AIs alive today are poorly understood and desperately in need of
regulation.
I don't think that the people who ought to worry about AI's current
problems are "plumbers". They are you. They are me. They are Elon Musk.
Identifying and dealing with the structural inequalities that AIs create
today is state-of-the-art work. If we knew how to do it, we'd be done
already. If you disagree, please show me where I can go get a trade school
degree that will tell me what to do and negate the need for my research
agenda.
-Aaron
On Mon, Aug 28, 2017 at 1:58 AM, Robert West <west(a)cs.stanford.edu> wrote:
Hi Caroline,
The premise of this article seems to be that everyone needs to solve either
the immediate or the distant problems. No one (and certainly not Elon Musk)
would argue that there are no immediate problems with AI, but why should
that keep us from thinking ahead?
In a company, too, you have plumbers who fix the bathrooms today and
strategists who plan business 20 years ahead. We need both. If the plumbers
didn't worry about the immediate problems, the strategists couldn't do
their jobs. If the strategists didn't worry about the distant problems, the
plumbers might not have jobs down the road.
Also, your argument stands on sandy ground from paragraph one, where you
claim that AI will never threaten humanity, without giving the inkling of
an argument.
Bob
On Fri, Aug 25, 2017 at 6:50 PM, Caroline Sinders <csinders(a)wikimedia.org>
wrote:
hi all,
i just started a column with fast co and wrote an article about elon
musk's AI panic.
https://www.fastcodesign.com/90137818/dear-elon-forget-killer-robots-heres-what-you-should-really-worry-about
would love some feedback :)
best,
caroline
_______________________________________________
Wiki-research-l mailing list
Wiki-research-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l