When I read this article, I didn't get the impression that the author was saying she thought technology "owed" her particular results.
I think the point she's making is that so much of our life is now mediated by algorithms that make choices we may not understand, and that this shapes how we see the world in ways we can't easily anticipate or account for (supporting quotes below). The problem is subtler and more pervasive than the issues of "filter bubbles" and "fake news" that are currently garnering the biggest headlines.
This is part of a broader conversation that's happening right now around algorithmic transparency and "ethical AI". Lots and lots of big names are weighing in on the topic[1][2][3][4][5][6].
I haven't seen a whole lot of specific design guidance around how to support transparency in the context of search yet, but I'd be interested in hearing from others who have. Detailed, readable documentation (accessible directly from the search interface) sounds like a pretty good start :)
- Jonathan
"I am still not accustomed to the drastic ways search algorithms can direct people’s lives. We’re so used to Google’s suggested spellings and the autocorrect of texting apps that we’ve stopped thinking too hard about how we search or how we spell. If I tap out Chrissy but should have typed Krissy, I implicitly believe that of course the opaque algorithms of Facebook will intuit my intent. But we have no way of probing the limits of the algorithms that govern our lives."
"When we talk about the algorithms that drive sites like Google and Facebook, we marvel at their cleverness in serving us information, or we worry about the ways in which they exacerbate bias—profiling people based on gross data trends, for example, to decide who gets a loan and who doesn’t. But there is a complex web of algorithmic life-shaping at work that we barely register. It’s not that I wish Facebook treated its Cs and Ks alike. It’s that by not knowing the rules, we give up some agency to mathematical calculations."