Hello and a big welcome to our new subscribers from Axios, Hearst UK, Ella Lab, ERIM, and many others.
The 1915 American film “The Birth of a Nation” was both a major technological breakthrough and a profoundly racist work. Studies suggest that the movie helped perpetuate racial violence – and it was so persuasive in spreading hate against African Americans in part because it was so innovative from a technical point of view.
A century later, it’s hard to imagine that a single film would have such a significant impact. What this story shows is that with every revolutionary technology there comes a period of adaptation before we can learn how to handle it ethically and navigate its pitfalls.
Generative AI hasn’t contributed to anything nearly as bad as the renewal of the Ku Klux Klan, but there are a lot of concerns around its potential to help spread disinformation at scale.
One of the biggest challenges is navigating hyper-realistic photos and videos. It's becoming harder and harder to distinguish AI-generated images from real photos – and systems like Midjourney are improving every month.
How to ensure you don’t fall for a fake Pope in a puffer jacket or something more sinister? And how to make sure you don’t misinform your readers, even inadvertently?
Alberto Puliafito, an Italian journalist and media manager and The Fix’s contributor who has been leading our generative AI coverage, has a helpful guide for journalists and newsroom leaders on dealing with deepfakes.
Here are four crucial pieces of advice:
In the absence of obvious anchors to reality and at least three independent sources, don’t use an image or a video as a source
Use specialised tools to spot fake images
If you can’t prove in any way that a photo or a video is real, simply don’t publish it
Don’t contribute to polluting the information environment – never use AI-generated hyper-realistic images without a clear statement printed on the image itself
For more context and examples, read the full piece on The Fix’s website.