Unboxing Algorithms


I’m on a team with Ariel Duncan and Paul Roberts, and for our graduate “thesis” project, we chose to focus on algorithms and their impact on culture. We landed on algorithms – and the technologies that use them – because of the increasing role they play in shaping daily human life.

With algorithms playing a distinct role in shaping the 2016 election, we felt it was timely to understand their often opaque internal mechanisms and hidden biases. Part of our mission is to raise awareness around the cultural, social, and political impact of algorithms and to empower people to explore how they shape and are shaped by the technologies that use them.

Introduction

The 2016 election revealed to many people that their perspective of the world has been influenced by the echo chamber effects of the “news” presented to them on social media. This echo chamber has, in part, been reinforced by Facebook’s and other social media platforms’ decisions to prioritize information that spreads affirmation. (1) Although Facebook is not the only platform contributing to echo-chamber effects, its reach, size, and disproportionate power to shape our lives make it notable.

Behind the veil of simple interfaces and delightful interactions, algorithms perform complex calculations and quietly feed us information. They often hide unnecessary complexities from us. Algorithms take data we give them, apply rules and procedures, and provide results we want to see, almost instantaneously. The personalized results often affirm our beliefs and support our needs – any friction is considered undesirable and subsequently eradicated.

Algorithms are embedded in the patchwork of our lives, and they automate and structure the choices we encounter. Yet we often are not exposed to their inner workings unless something goes awry. (2) We’ve created black boxes to hide their complexities, which makes it more difficult to trace the work that algorithms do – to help or hurt us. Algorithms are largely invisible – perhaps too abstract for us to notice – and it’s hard to hold something accountable when we don’t readily see it. (3) To be more vigilant, we need to understand how algorithms work.

What are algorithms?

Algorithms are a set of guidelines on how to perform a task. Some algorithms take the form of basic mathematical functions; others build on those functions. Essentially, they’re a set of instructions. Some algorithms evolve through machine learning, which is a computational system that learns to get things right by getting them wrong. (4) In general, an algorithm takes some data as input, performs computational operations on that data, and then outputs a result that can either be shown to people or passed to another algorithm as input.
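
To make this concrete, here is a minimal sketch in Python of an algorithm in that input–operations–output sense. The session data and the 30-minute threshold are invented for illustration:

```python
# A minimal algorithm: take data in, apply a rule, output a result.
# The session data and the 30-minute threshold are hypothetical.

def flag_long_sessions(session_minutes, threshold=30):
    """Return only the sessions that exceed the threshold."""
    return [m for m in session_minutes if m > threshold]

sessions = [5, 42, 17, 65, 29]          # input data
flagged = flag_long_sessions(sessions)  # computational operation
print(flagged)                          # output: [42, 65] -- shown to a
                                        # person, or fed to another algorithm
```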

In this way, algorithms do work for us. Algorithmic decisions can be based on rules about what should happen next in a process, given what’s already happened, or on calculations using massive amounts of data. (5) What kinds of decisions do algorithms make? According to Nicholas Diakopoulos, algorithms perform a few distinct functions, often in conjunction with each other. (6)

Prioritization: Ranking or ordering that emphasizes certain information over other information. Every algorithm designed to perform this function has a set of criteria or metrics it uses to sort items by importance. In effect, these criteria embed a set of values that determine what gets pushed to the top of the ranking.
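
As a sketch of what that looks like in practice, here is a toy ranking in Python. The posts, metrics, and weights are all invented; the point is that the weights are a value judgment about what “importance” means:

```python
# Toy prioritization: rank posts by a weighted score.
# Posts, metrics, and weights are all hypothetical.

posts = [
    {"title": "A", "likes": 120, "recency": 0.2},
    {"title": "B", "likes": 15,  "recency": 0.9},
    {"title": "C", "likes": 60,  "recency": 0.5},
]

def score(post, w_likes=0.01, w_recency=1.0):
    # The weights encode a value judgment: how much popularity
    # matters relative to freshness.
    return w_likes * post["likes"] + w_recency * post["recency"]

ranked = sorted(posts, key=score, reverse=True)
print([p["title"] for p in ranked])  # ['A', 'C', 'B']

# Doubling the recency weight reorders the feed entirely:
ranked2 = sorted(posts, key=lambda p: score(p, w_recency=2.0), reverse=True)
print([p["title"] for p in ranked2])  # ['B', 'A', 'C']
```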

Classification: Categorizing something based on its attributes or features. Here, the algorithm sets the dividing line for how something falls into different classes. How an algorithm classifies something depends on the definitions of the criteria that humans set. Generally, misclassifications come in two flavors: false positives and false negatives. Bias can enter when the algorithm is being tuned: as false positives are reduced, false negatives will often increase, and vice versa. Tuning one way or the other can privilege different stakeholders in a decision and imprint the value judgment of the person tuning the algorithm.
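
A toy threshold classifier in Python (with invented scores and labels) shows the tuning trade-off: moving the dividing line trades false positives for false negatives, and someone has to decide which error is more acceptable:

```python
# Toy classification: flag messages as spam by a score threshold.
# The (score, is_spam) pairs are invented for illustration.
examples = [(0.9, True), (0.7, False), (0.6, True),
            (0.4, False), (0.3, True), (0.1, False)]

def evaluate(threshold):
    false_pos = sum(1 for s, spam in examples if s >= threshold and not spam)
    false_neg = sum(1 for s, spam in examples if s < threshold and spam)
    return false_pos, false_neg

for t in (0.2, 0.5, 0.8):
    fp, fn = evaluate(t)
    # Lowering the threshold catches more spam but flags more
    # legitimate messages; raising it does the reverse.
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```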

Association: Forming relationships between two entities. Association decisions also include aggregating entities en masse to form a cluster. These clusters can themselves be prioritized, leading to a decision of relevance. Association decisions rest on criteria that determine how similar two entities are, and they also suffer from false positives and false negatives. The value (and danger) of association decisions comes from the meanings that can be inferred from the association.
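
Here is a sketch of one common approach – comparing sets of attributes with a similarity measure and a cutoff – in Python, with invented users and interests:

```python
# Toy association: relate users whose interests overlap enough.
# Users, interests, and the cutoff are all hypothetical.
users = {
    "alice": {"jazz", "cooking", "hiking"},
    "bob":   {"jazz", "cooking", "chess"},
    "carol": {"chess", "gaming"},
}

def jaccard(a, b):
    """Similarity = size of overlap / size of union."""
    return len(a & b) / len(a | b)

# Where we set the cutoff is a human judgment; set it too low and
# we get false positives, too high and we get false negatives.
CUTOFF = 0.4
pairs = [(u, v) for u in users for v in users
         if u < v and jaccard(users[u], users[v]) >= CUTOFF]
print(pairs)  # [('alice', 'bob')]
```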

Filtering: Including or excluding information based on rules and criteria. Inputs for filtering decisions are often the outputs of prioritization, classification, or association decisions. Filtering emphasizes or censors certain information, which can amplify biases.
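
A toy filter in Python, taking the kind of scored, categorized items the earlier sketches might produce (all values invented):

```python
# Toy filtering: include or exclude items based on rules applied to
# upstream scores and categories. All values are hypothetical.
items = [
    {"title": "A", "score": 1.4, "category": "news"},
    {"title": "B", "score": 1.1, "category": "satire"},
    {"title": "C", "score": 0.9, "category": "news"},
]

BLOCKED = {"satire"}  # the exclusion rule encodes an editorial value

visible = [i for i in items
           if i["category"] not in BLOCKED and i["score"] > 1.0]
print([i["title"] for i in visible])  # ['A'] -- B and C never appear
```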

The crux of algorithmic power rests on algorithms automatically making these decisions for us. (7) The criteria – and the data that shape those criteria – come from people. Because of this, algorithms are not a purely technical phenomenon. They decide what’s good (for you) and tell you what’s important and worthy of attention.

As Pamela Pavliscak explains:

Algorithms – like Web sites or apps or organizations or people – have a point of view. Personalization algorithms exist not just to create a better experience, but because organizations have business goals. Humans create algorithms, so their point of view gets embedded in the system. (8)

As complex algorithmic operations perform these sorts of decisions through many cycles, the biases of the humans working on algorithms are perpetuated and amplified. Human biases get inscribed into the systems they design. We have to recognize that a computational answer to the question of “what’s important” or “what’s desirable” is laden with power.

How do algorithms affect our lives?

As we interact with algorithm-powered products, we shape the algorithms and are simultaneously shaped by them. Facebook’s algorithms are among the better known, but there are others that undergird our everyday life. They are everywhere, and they increasingly structure our lives. Here are some obvious ones.

  • Twitter Trends
  • Netflix recommendations
  • Uber wait times

And some not-so-obvious ones.

  • Predictive policing systems that forecast where crime will occur (9)
  • Words quietly excluded from Google’s and Bing’s autocomplete suggestions (10)
  • Risk assessment scores used in criminal sentencing (11)
  • Targeted advertising that can discriminate by profile (12)

Looking at the examples above, it would be remiss for us to dismiss algorithms as a purely technical phenomenon. As Helen Nissenbaum describes, “Technologies in the form of algorithms convey, reproduce, and reinforce beliefs and values.” (13)

To fully grasp the social implications of algorithms, we look to the cultural work that they do: the prioritizing, classifying, associating, and filtering that shape our collective behaviors. Ted Striphas, the author of the forthcoming book Algorithmic Culture, recounts how

“over the last 30 years or so, human beings have been delegating the work of culture – the sorting, classifying and hierarchizing of people, places, objects and ideas – increasingly to computational processes.” (14)

Through our interactions with these people, places, objects, and ideas, we “produce new habits of thought, conduct, and expression that likely wouldn’t exist in [the algorithm’s] absence.” (15)

This is what we mean when we say algorithms impact culture and vice versa.

Likewise, we increasingly rely on computational processes to tell us about cultural goods that we can’t fully evaluate ourselves. In a sense, algorithms surface what’s important and what’s worth our attention, which English literary critic Matthew Arnold said was the purpose of culture: to determine “the best which has been thought and said.” (16)

What’s interesting here is the scale at which algorithms automate cultural work, which contributes to reality construction through the selection or omission of information. (17) By affording and impeding certain practices, behaviors, and activities, algorithms perpetuate values and impose order on society. (18) Daniel Neyland and Norma Möllers note that algorithms gain their power through algorithmic association, in which assemblages of people, things, resources, and other entities conspire to materialize their effects. (19)

The extent to which they create rules for everyday life makes algorithmic technologies align more closely with institutions than with technical infrastructures. (20) Therefore, algorithms can be seen as governance mechanisms – as “instruments used to exert power” and as “increasingly autonomous actors with power to further political and economic interests on the individual but also on the public/collective level.” (21)

Algorithms have a point of view. But what if you disagree? Because algorithms carry particular values and assumptions about how the world works – and how it should work – we have a hard time challenging their results or having a say in what data they use to make their decisions when we think the outcome is unfair. And these biases may not be intentional. The data that feeds algorithms can inherit the biases of previous decision-makers and lead to unconscious discrimination. Complex manipulations of data can cause data to interact in unexpected ways – and to support incorrect inferences from unintentional associations or correlations. (22)

As earlier examples suggest, this is related to the way humans are represented in the algorithmic system. In codifying human complexities in technical systems, we strip away the residual qualities that may be crucial for decision-making, sometimes with detrimental consequences. To quote Mike Ananny in Toward an Ethics of Algorithms:

“Who is inside and outside, who may speak, who may not, and who has authority and may be believed depend on communication technologies that see some people as like or unlike others, despite variations the technologies cannot capture.” (23)

What are some issues in designing for algorithms?

The design world’s preoccupation with seamless, simple, and frictionless user experience normalizes the invisible in everyday products. But the dominant narrative in the UX industry today masks an important negotiation taking place beneath the rhetoric: the question of “what is ‘good’?” Ensconced in everyone’s definition of “good” are the values that people hold important in design.

Matt Ratto defines seamlessness as “the deliberate ‘making invisible’ of the variety of technical systems, artifacts, individuals and organizations that make up an information infrastructure.” (24) But should this disappearance of algorithmic technology into the digital infrastructure be a desirable quality? If the election was any indication of the extent of its influence, we should sit up and examine our own practices and decisions around what’s desirable. This is not merely a technology problem; it’s a design problem, too, says Fast.Co editor Cliff Kuang:

That entire time I’ve always assumed that “making things frictionless” was an unalloyed good, right up there with science, efficient markets and trustworthy courts. But [Max] Read’s essay [about fake news on Facebook] made my stomach heave, because it made me ask: Is a fully user-friendly world actually the best world we can create?…The modern user experience is a black box that has made it easier and easier to consume things and harder and harder to remake them. (25)

Algorithms have the power to structure individual realities and behaviors as well as influence the society at large. However, several issues pose difficulties for investigating their effects.

1) Opacity of algorithmic black boxes

According to Jenna Burrell, the following sources contribute to algorithmic opacity (26):

  • Trade secret: Many algorithms are hidden behind trade secret protection to gain competitive advantage and protect proprietary and customer information. This means companies do not have to disclose or reveal how their algorithms work.
  • Comprehensibility: Understanding algorithms is a highly specialized skill that requires education and training. Even for technically trained people, the complexity can render them unable to understand all the data transformations that take place to produce an output. (27)
  • “Bottom-up” pattern-seeking: While humans can designate what algorithms should look for, algorithms find patterns in the data from the “bottom up.” This is problematic because a person cannot point to a reason why an algorithm has produced a particular output. It’s therefore not possible to inquire into what an algorithm is “thinking,” because it seeks and reacts to the “inexpressible commonalities in millions of pieces of training data.” (28) The toy sketch below illustrates this.
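
To see why such decisions resist explanation, consider a toy nearest-neighbor classifier in Python, with invented training data: the only “reason” for its output is resemblance to training examples, not a rule anyone wrote down:

```python
# Toy "bottom-up" decision: 1-nearest-neighbor classification.
# The training data is invented; real systems use millions of examples.
train = [((1.0, 0.2), "approve"), ((0.9, 0.4), "approve"),
         ((0.2, 0.8), "deny"),    ((0.3, 0.9), "deny")]

def classify(x):
    def sq_dist(example):
        point, _ = example
        return sum((a - b) ** 2 for a, b in zip(x, point))
    return min(train, key=sq_dist)[1]

# The output is "approve" -- but the only justification is proximity
# to other approvals in the data, not an articulable rule.
print(classify((0.8, 0.3)))
```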

2) Perceived objectivity of computational logic

Although algorithmic operations project an aura of objectivity, in reality, they encode human biases in the form of criteria, categories, rules, and training data. According to Nicholas Diakopoulos, these kinds of systematic biases are the most insidious because they often go unnoticed and unquestioned. (29)

The perceived objectivity of algorithms raises ethical concerns because they signal certainty, discourage alternative explorations, and create coherence among disparate objects. The funneling of people’s behaviors toward a narrowed set of acceptable options – while discouraging or eliminating others – is what Mike Ananny describes as “using categories to discipline action.” (30) Categories, therefore, have consequences. Since fitting people, their past actions, and objects into categories involves unseen computational judgments, it’s easy for people to accept the results as a given rather than challenge them.

3) Lack of awareness of underlying values and assumptions

Both the opacity of algorithms and their perceived objectivity leave little room for conversations about the values and assumptions embedded in algorithmic systems. We posit that these debates around values and assumptions are essentially debates around what’s “acceptable,” “good,” or “desirable.” However, it is difficult to hold these conversations when people are either unaware of or unconcerned with how algorithmic systems affect them.

Personalization algorithms construct individual realities that tend to confirm an individual’s opinions and preferences. This and other “seamless” experiences muddle our ability to see, inspect, question, or understand how the criteria we use in algorithms relate to their output. (31) This benefits computational systems: efficient algorithms need stable categories of people whose behaviors are predictable. But herein lies the ethical power of algorithms: to “create a disciplined network of humans and machines that resembles and recreates probabilities, making the set of possible outcomes the model anticipates likely and reasonable.” (32)

As we apply algorithmic systems to new technologies, we need to put into practice “ethical critiques that keep flexible and contestable their fundamental forms, power, and meanings.” This is because when such systems become less and less ambiguous, we lose the ability to reinterpret them and lose opportunities to intervene and influence their ethics. (33)

Designers already tacitly critique the definition of “acceptable,” “good,” and “desirable” during the design process. What’s different here is the call to make this negotiation more explicit. Some people are rendered invisible, quite literally, by algorithms. Others are marginalized or discriminated against by their outcomes. Although not all algorithmic technologies are inherently negative, algorithms don’t have to be intentionally discriminatory against race, gender, etc. to have discriminatory consequences.

Design is political because it has consequences. As designers, we can design things to have different consequences. We can only do this if we consciously reflect on our choices and examine the wider implications of our decisions.

What role does design play?

In Herbert Simon’s words, “Design is devising courses of action aimed at turning existing situations into preferred ones.” (34) This means that designers have agency to steer their efforts towards what ought to be. In doing so, they decide what matters. Dubberly and Pangaro describe this process as

“a conversation about what to conserve and what to change, a conversation about what we value.” (35)

But they also posit that design isn’t only about what to change but how we frame the problem. This framing of the problem is political because we have to choose between different subjective interpretations of the same problem.

Every time we design, we are advancing certain values and viewpoints. We argue for why something should exist or why we are improving a situation. We engage in debates about what actions to enable and/or constrain, and for whom. This is especially important for designing algorithms, not only because of reasons explained in the previous section but also because algorithms have agency.

Here, we define agency as “the capacity for action or transformative capacity.” (36) Anne Balsamo explains that humans are not the only ones privileged with agency; in the process of designing, we confer agency on what we design. (37) To actively examine, reflect on, and critique the values, assumptions, and visions that are perpetuated through algorithms, we envision a different role for design beyond merely producing solutions for today’s realities.

We aim to use design as a way to question rather than generate solutions.

We take on the role of design to “critique the clean, orderly, and homogenous future that is at the heart of…modernist visions of ubiquity and use these critiques to better understand the ethical dimensions of our increasingly socio-technical world.” (38) In the tradition of critical and speculative design, our goal is to challenge others, as well as ourselves, to see algorithms in new ways. A. Baki Kocaballi frames agency in design less as “paying attention to the values we inscribe” and more as challenging the “unquestioned, taken for granted values embedded in our thinking and practices.” (39)

Making algorithms more transparent isn’t enough. Although there is power in cutting through the invisibility of algorithmic black boxes, we consider it a more passive approach to raising awareness of their impact at large. And here lies the crux of the designer’s agency:

as we decide what to preserve and what to change outside of today’s constraints, we can more readily include the heterogeneity of people and their situations rather than simplifying it away.

We are asking people to trust us with their data and to allow algorithms to make more and more decisions for them in every facet of their lives. The privilege we have to tell people how to act and think calls on us to challenge our own practices and question the dominant paradigms in our industry.


References

  1. Cliff Kuang, “Trump Exposes a Fatal Flaw in User-Friendly Design,” Fast.Co Design, November 11, 2016. https://www.fastcodesign.com/3065565/what-responsibility-does-design-bear-for-the-trump-era, accessed on 1/25/17.
  2. Khovanskaya, Vera, Maria Bezaitis, and Phoebe Sengers. “The Case of the Strangerationist: Re-interpreting Critical Technical Practice.” In Proceedings of the 2016 ACM Conference on Designing Interactive Systems, pp. 134-145. ACM, 2016.
  3. Lockton, Dan. “As we may understand: A constructionist approach to ‘behaviour change and the Internet of Things.” Medium, Nov. 1, 2014, https://medium.com/@danlockton/as-we-may-understand-2002d6bf0f0d#.d2gob2lb9, accessed 1/5/17.
  4. Pamela Pavliscak, “Algorithms as the New Material of Design” in UX Matters blog, June 14, 2016. http://www.uxmatters.com/mt/archives/2016/06/algorithms-as-the-new-material-of-design.php accessed on 1/20/17.
  5. Diakopoulos, Nicholas. “Algorithmic accountability reporting: On the investigation of black boxes.” Tow Center for Digital Journalism, Columbia University (2014).
  6. Ibid.
  7. Ibid.
  8. Pavliscak, “Algorithms as the New Material of Design.”
  9. https://www.themarshallproject.org/2016/02/03/policing-the-future#.HRhV66iAp
  10. http://www.slate.com/articles/technology/future_tense/2013/08/words_banned_from_bing_and_google_s_autocomplete_algorithms.html
  11. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  12. https://www.degruyter.com/view/j/popets.2015.1.issue-1/popets-2015-0007/popets-2015-0007.xml
  13. Nissenbaum, Helen. “From preemption to circumvention: If technology regulates, why do we need regulation (and vice versa)?.” Berkeley Technology Law Journal 26, no. 3 (2011): 1367-1386.
  14. Striphas, Ted. “Algorithmic Culture.” European Journal of Cultural Studies 18, no. 4-5 (2015), 396.
  15. Ibid.
  16. Giuseppe Granieri, “Algorithmic Culture: culture now has two audiences: people and machines” interview with Ted Striphas published on Medium, April 30, 2014. https://medium.com/futurists-views/algorithmic-culture-culture-now-has-two-audiences-people-and-machines-2bdaa404f643#.3vtq8x3jf, accessed 1/28/17.
  17. Just, Natascha, and Michael Latzer. “Governance by algorithms: reality construction by algorithmic selection on the Internet.” Media, Culture & Society (2016): 10.
  18. Nissenbaum, “Preemption to Circumvention.”
  19. Neyland, Daniel and Möllers, Norma. “Algorithmic IF … THEN rules and the conditions and consequences of power.” Information, Communication & Society, (2016): 1-18.
  20. Napoli, Philip M. “Automated media: An institutional theory perspective on algorithmic media production and consumption.” Communication Theory 24, no. 3 (2014): 340-360.
  21. Just, “Governance by algorithms,” 8.
  22. Hardt, Moritz. “How big data is unfair: Understanding sources of unfairness in data driven decision making.” Medium. https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de, accessed Jan. 20, 2017.
  23. Ananny, Mike. “Toward an ethics of algorithms: Convening, observation, probability, and timeliness.” Science, Technology, & Human Values 41, no. 1 (2016): 93.
  24. Ratto, Matt. “Ethics of seamless infrastructures: Resources and future directions.” International Review of Information Ethics 8, no. 8 (2007): 12.
  25. Cliff Kuang, “Trump Exposes a Fatal Flaw in User-Friendly Design,” Fast.Co Design, November 11, 2016. https://www.fastcodesign.com/3065565/what-responsibility-does-design-bear-for-the-trump-era, accessed on 1/25/17.
  26. Dourish, Paul. “Algorithms and their others: Algorithmic culture in context.” Big Data & Society 3, no. 2 (2016).
  27. Khovanskaya, et al. “The Case of the Strangerationist.”
  28. Dourish, “Algorithms and their others: Algorithmic culture in context” 7.
  29. Diakopoulos, Nick. “Understanding bias in computational news media.” Nieman Journalism Lab 10 (2012).
  30. Ananny, “Toward an ethics of algorithms: Convening, observation, probability, and timeliness.”
  31. Lockton, Dan. “As we may understand.”
  32. Mackenzie, A. 2015. ‘‘The Production of Prediction: What Does Machine Learning Want?’’ European Journal of Cultural Studies 18 (4-5): 429-45.
  33. Bijker, Wiebe E. “Conclusion: The Politics of Sociotechnical Change.” In Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change. Cambridge, MA: The MIT Press, 1995.
  34. Simon, Herbert. The Sciences of the Artificial. Cambridge, MA: The MIT Press, 1969.
  35. Dubberly, Hugh, and Paul Pangaro. “Cybernetics and Design: Conversations for Action.” Cybernetics & Human Knowing 22, no. 2-3 (2015)
  36. Kocaballi, “Embracing Relational Agency,” 99.
  37. Balsamo, Anne. Designing culture: The technological imagination at work. Duke University Press, 2011.
  38. Ratto, Matt. “Ethics of seamless infrastructures: Resources and future directions.” International Review of Information Ethics 8, no. 8 (2007): 21-25.
  39. Kocaballi, “Embracing Relational Agency,” 99. [emphasis ours].

The Author

Clem Auyeung is a content designer based in Washington, D.C.
