Recent technological developments in artificial intelligence have enabled new techniques for manipulating images, audio, and video. Of particular concern is the ability to create and deploy AI-generated media, or “deepfakes.” Innovations in machine learning have greatly increased the availability and sophistication of fake audio and video clips, making it possible to realistically depict people saying or doing things they never actually said or did.

Deepfaking video footage for entertainment purposes may bring some interesting benefits. Recent examples have used deepfake technology to create new fan-made content based upon the film industry’s CGI representations of older or deceased actors in some of its most popular films. A now-viral video builds upon footage from Lucasfilm’s 2016 Rogue One: A Star Wars Story and showcases lifelike and foreboding footage of franchise villain Grand Moff Tarkin (portrayed by Peter Cushing, who died in 1994) [1]. The video also shows a “de-aged” portrayal of a youthful Princess Leia Organa, one of the Star Wars franchise’s most beloved characters (portrayed by Carrie Fisher, who was in her late 50s when Rogue One was produced). Deepfake clips like these have delighted fans across the internet, and the YouTube creator who produced them has since been hired as a special effects artist at Lucasfilm [2].

Given that many citizens’ information environments are already complicated by frequent speculation, misinformation, and motivated reasoning, some researchers worry that deepfakes will accelerate “truth decay” by engaging citizens’ cognitive biases in ways that open both individuals and groups to “novel forms of exploitation, intimidation, and personal sabotage” [3].

Potential concerns about deepfakes range from questions about the accuracy of media portrayals to worries about the ability to convincingly put words into the (digital) mouths of high-profile public figures. In one recent case, documentary filmmaker Morgan Neville admitted to commissioning a software company to create a synthetic voice for the documentary’s subject, the late television star Anthony Bourdain. Neville did not disclose the presence of the AI-generated voice in the film, allowing viewers to believe that the voice was indeed Bourdain’s own [4].

Others worry about more profound social and political implications: consider, for example, a doctored viral video that falsely depicted House Speaker Nancy Pelosi as visibly intoxicated during a press conference [5], or a deepfake that put words that were never actually said into the mouth of former President Barack Obama [6]. Many worry about an environment in which we can no longer trust what we see with our own eyes. As philosopher Regina Rini suggests, “we ought to think of images as more like testimony than perception. In other words, [we] should only trust a recording if [we] would trust the word of the person producing it” [7].

DISCUSSION QUESTIONS

  1. How, if at all, should the use of AI-driven “deepfake” technology be constrained by policymakers?

  2. What skills and dispositions are needed for internet users to engage knowledgeably and deftly in an information world characterized by deepfakes?

  3. In what ways are deepfakes a new and distinctive threat to public discourse and understanding? In what ways are they not so different from other forms of misinformation?

References

[1] CinemaBlend, “Rogue One Deepfake Makes Star Wars’ Leia And Grand Moff Tarkin Look Even More Lifelike”

[2] IndieWire, “Lucasfilm Hired the YouTuber Who Used Deepfakes to Tweak Luke Skywalker ‘Mandalorian’ VFX”

[3] Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107 (2019): 1753.

[4] The New Yorker, “The Ethics of a Deepfake Anthony Bourdain Voice”

[5] The Washington Post, “Another Fake Video of Nancy Pelosi Goes Viral on Facebook”

[6] Ars Electronica, “Obama Deepfake”

[7] The New York Times Opinion, “Deepfakes Are Coming. We Can No Longer Believe What We See.”
