Data Violence

 
 

As more of human life is controlled or guided by computer algorithms, there is growing concern about various biases that these algorithms encode, and the real-world implications of such biases. For example, in 2015 a black developer realized Google’s photo recognition software tagged pictures of him and his friends as gorillas [1]. Similarly, it was found that facial recognition software struggled to read black faces [2]. Other problems have arisen, including Facebook automatically suspending the accounts of Native Americans for having seemingly “fake” names [3], Google Translate replacing gender-neutral pronouns with gendered pronouns in English according to sexist stereotypes [4], and airport body scanners flagging transgender bodies as threats [5]. Some have labelled this phenomenon “data violence”, noting that coding choices can “implicitly and explicitly lead to harmful or even fatal outcomes” [6].

Some software developers and commentators have claimed that complaints about data violence are overhyped. For instance, they argue that these problematic results are simply unfortunate side effects of data analyses and statistical models that are, in other respects, highly accurate and useful. On this view, software developers are not necessarily doing anything wrong when they create algorithms that, for the most part, work very well, even if that software has unintended biases. It may be conceded that in some cases an algorithm ends up reflecting some broader social injustice, producing biased results, as when racial disparities in arrest rates affect the output of software used to predict criminal behavior [7]. But even then, developers sometimes argue, the problem lies not with the software itself but with the broader injustices for which the developers themselves are not responsible. Relatedly, some argue that there is a division of labor in software development: the responsibility for attending to a project’s broader social implications lies with its architect, not necessarily with the individual engineers who work on it. Or perhaps companies need an “in-house philosopher” to consider these messy ethical concerns for them [8].

However, others see this response as little more than an attempt by developers to avoid responsibility for the way their own actions help to reinforce and reproduce biases and injustices. Many instances of racist and sexist errors are due to developers’ biases, stereotypes, and interests. Software engineers carry with them assumptions about what should be considered “normal” and about the range of cases they must account for; these assumptions shape how software is programmed and the kinds of testing it undergoes. Furthermore, engineers may overlook important “edge cases” [9], or the problematic implications of doing so, in part because the tech industry is overwhelmingly white and male [10]. Given that these software problems disproportionately harm members of historically marginalized groups, there is a further concern that treating these failures as mere instances of poor engineering, while leaving developer diversity unaddressed, will not fix the underlying problem.

DISCUSSION QUESTIONS

  1. Who is responsible for the kinds of “data violence” described in the case? Are individual engineers morally responsible, or does the responsibility lie with software architects or software companies?

  2. What does it mean for something to be sexist or racist? Can we consider software sexist or racist, even though it doesn’t itself have intentions or attitudes?

  3. What, if anything, should software companies do to address data violence?

References

[1] USA TODAY, “Google Photos labeled black people ‘gorillas’”

[2] Huffington Post, “Here's Why Facial Recognition Tech Can't Figure Out Black People”

[3] The Guardian, “Facebook still suspending Native Americans over 'real name' policy”

[4] Quartz, “Google Translate’s gender bias pairs ‘he’ with ‘hardworking’ and ‘she’ with lazy, and other examples”

[5] TIME, “Transgender Passenger Protests 'Denigrating' Treatment From TSA”

[6] Medium, “Data Violence and How Bad Engineering Choices Can Damage Society”

[7] ProPublica, “Machine Bias”

[8] VentureBeat, “Google’s in-house philosopher: Technologists need a ‘moral operating system’”

[9] Wikipedia, “Edge case”

[10] WIRED, “Women and Minorities in Tech, By the Numbers”

 
 
 
