Simulacra@Scale
This ongoing project examines various aspects of audiovisual evidence circulated through contemporary ICTs. The project uses fieldwork and document analysis to interrogate how concepts of audiovisual evidence both determine and are shaped by politics, culture, and socio-technical constructions. De-colonial, feminist STS, and critical theories inform the analysis of audiovisual objects, their technical production, and popular discourse around those objects.
In Deepfakes and Cheap Fakes, Data & Society Affiliates Britt Paris and Joan Donovan trace decades of audiovisual (AV) manipulation to demonstrate how evolving technologies aid consolidations of power in society.
Like many past media technologies, deepfakes and cheap fakes have jolted traditional rules around evidence and truth, and trusted institutions must step in to redefine those boundaries. This process, however, risks a select few experts gaining "juridical, economic, or discursive power," further entrenching social, political, and cultural hierarchies. Those without the power to negotiate truth—including people of color, women, and the LGBTQA+ community—will be left vulnerable to increased harms, the authors argue.
Paris and Donovan argue that we need more than an exclusively technological approach to address the threats of deep and cheap fakes. Any solution must take into account both the history of evidence and the "social processes that produce truth" so that the power of expertise does not lie only in the hands of a few and reinforce structural inequality, but rather is distributed among at-risk communities.
Thanks to social media, both kinds of AV manipulation can now be spread at unprecedented speeds.
Other Op-eds
Paris, B., and Pasquetto, I. (8 December, 2019). Why Do Facebook, Others Refuse to Address the Weaponization of Fake Information? New Jersey Star Ledger.
Paris, B. (20 September, 2019). The Deeper Danger of Deepfakes: Worry Less About Politicians and More About Powerless People. New York Daily News.
Quoted and Interviewed
Mills-Rodrigo, C. (8 January, 2020). Lawmakers Voice Skepticism Over Facebook's Deepfake Ban. The Hill.
Shwayder, M. (8 January, 2020). Why a Deepfake Ban Won't Solve Facebook's Real Problems. Digital Trends.
Chen, A. (20 December, 2019). This startup claims its deepfakes will protect your privacy. MIT Technology Review.
Nafis, T. (14 December, 2019). Politics, Porn and Toxic World of Deepfakes. Al Jazeera, Listening Post.
Amer, P. (13 December, 2019). Deepfakes Are Getting Better. Should We Be Worried? Boston Globe.
Penney, J., Leaver, N., Friedberg, B., and Donovan, J. (3 October, 2019). The Chilling Effects of Disinformation on Political Engagement. Nieman Reports.
Knight, W. (2 October, 2019). Even the A.I. Behind Deepfakes Can't Save Us From Being Duped. Wired.
Chen, A. (2 October, 2019). Three Threats Posed By Deepfakes That Technology Won't Solve. MIT Technology Review.
Benson, T. (26 September, 2019). A.I. Created the Madness of Deepfakes But Who Can Save Us From It? Inverse.
Howell, J. (19 September, 2019). Cheapfakes v. Deepfakes. This Week in Tech Podcast.
Schiffler, Z. (18 September, 2019). AI Can't Protect Us from Deepfakes, Argues New Report. The Verge.
Schwarz, O. (4 July, 2019). Could "Fake Text" Be the Next Global Political Threat? The Guardian.
Smith, T. (17 June, 2019). The Weaponization of AI. Cyberwire.
Cook, J. (13 June, 2019). Deepfake Videos and the Threat of Not Knowing What's Real. Huffington Post.
Herrera, S. (12 June, 2019). Facebook's Mark Zuckerberg Finds Self on Receiving End of Fake Video. Wall Street Journal.
Muscato, L. (8 December, 2018). More Tech Won't Save Us From Disinformation. OpenNews Source.
Martineau, P. (8 November, 2018). How an Infowars Video Became a White House Tweet. WIRED.