
‘Deep Fakes’ in a Deeper Context

In the video Op-Ed above, Claire Wardle responds to growing alarm around “deepfakes” — seemingly realistic videos generated by artificial intelligence. First seen on Reddit, where pornographic videos were doctored to feature the faces of female celebrities, deepfakes gained wide attention in 2018 through a fake public service announcement featuring former President Barack Obama. Words and faces can now be almost seamlessly superimposed. The result: We can no longer trust our eyes.

In June, the House Intelligence Committee convened a hearing on the threat deepfakes pose to national security. And platforms like Facebook, YouTube and Twitter are contemplating whether, and how, to address this new disinformation format. It’s a conversation gaining urgency in the lead-up to the 2020 election.

Yet deepfakes are no scarier than their predecessors, “shallowfakes,” which use far more accessible editing tools to slow down, speed up, selectively omit or otherwise strip footage of its context. The real danger of fakes — deep or shallow — is that their very existence creates a world in which almost everything can be dismissed as false.
