When AI Meets Fake News

“Fake News” is Collins Dictionary’s word of the year for 2017. From Russian spies, to Balkan troll farms, to Facebook, the creation of purposefully untruthful articles has dominated our discussions this year. One would hope that recognition of the threat would stymie its effectiveness; however, there is every reason to believe the fake news crisis is about to get much worse.

Google, NVIDIA, and other AI leaders have developed deep-learning technology (generative adversarial networks, or GANs) that will make the creation of fake images and videos easier for people who seek to muddy the waters of social discourse. Ian Goodfellow, staff research scientist at Google Brain and a leading voice on the future of artificial intelligence, recently called our trust in video as evidence that something did or did not happen “a little bit of a fluke.”
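To make the term concrete, here is a minimal toy sketch of the adversarial idea behind GANs. The networks, data, and hyperparameters are invented for illustration and come from nowhere in this article; the point is only the dynamic it demonstrates: a generator learns to produce samples that a discriminator can no longer tell apart from real ones, and that same dynamic is what lets larger systems fabricate convincing images and video.

```python
# Toy GAN sketch (hypothetical example): a generator learns to imitate a simple
# "real" data distribution by fooling a discriminator.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data the generator must imitate: samples from a shifted normal distribution.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to label real samples 1 and generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to make the discriminator label its output 1 (i.e., "real").
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real data's mean (about 4).
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

Scale the same two-player game up from one-dimensional numbers to pixels and audio, and the output stops being a toy.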

What should we do? It begins with critical thinking. Even now, in an era when a video might be deceptively edited but not fabricated out of whole cloth, people struggle to evaluate the information placed before them. Whether the task is recognizing an ad as an ad, identifying bias, or searching effectively to meet an information need, people of all ages perform poorly.

If we don’t want to find ourselves in a future where partisans create different realities for people on either side of an issue, we have to invest in developing critical thinking skills at every level of our population. To shirk this responsibility is to consign ourselves to a world of chaos, where any strain of wild speculation can become the truth for anyone with the right computer software.