The Fight Against Fake News Rages On

Weber Shandwick St. Louis
Nov 28, 2018

By Dave Collett

It’s been just over two years since the birth of the modern “fake news” phenomenon. But despite many attempts to stamp it out, both the anxiety about fake news and the reality of it are as prevalent as ever.

Experts say it’s only likely to get worse. New technologies that manipulate and imitate audio and video make it possible to create a hauntingly realistic video of almost anyone doing or saying almost anything, when in fact none of it ever happened. Other software lets users not only edit voice recordings to change the order in which words and phrases were said, but also insert words and phrases that were never said at all.

These technologies are not just the domain of major companies and movie studios. Increasingly, deepfake technologies are becoming mainstream, though mostly for entertainment purposes. Consider how Snapchat and Facebook have made augmented reality commonplace in our daily lives with “face swap” videos and the like; millions of people use AR tech every day.

Other apps are following in the footsteps of the major social platforms, introducing smart technologies in amusing ways. One example is a new AI developed by UC Berkeley students that can transfer the smooth dance moves of a professional dancer from their video onto a video of you.

While it may be unreasonable to expect anyone to believe you’ve suddenly acquired the dance skills of Beyoncé or Bruno Mars, there is research showing people are more likely to share false information than true information. One fake-news study found that “fake news and false rumors reach more people, penetrate deeper into the social network, and spread much faster than accurate stories.”

In response, Facebook and other social media platforms are prepping for battle. To stop fake-news trolls and bots, these networks are improving their detection processes, using AI to disable bad-actor accounts and giving users better tools to judge what they see. Many of these changes are designed to help people gauge the credibility of news organizations and the validity of individual articles. Facebook is even piloting a system to rate the trustworthiness of its users, the unverified, everyday people like you and me, in a further effort to stem the tide of fake news being shared, whether deliberately or unwittingly.

Telling real news from fake can be harder than you might think. To test your own ability, check out this quiz from The New York Times.

And while these platforms raise their game, some are even making a game out of the fight against fake news.

As scary as all this is, the term “fake news” has also evolved beyond its original meaning of “news that isn’t real.” It has come to mean “news that is thought to be biased or unfavorable.” That evolved definition is even more troublesome. In a politically charged environment like the one we’re in now, almost any piece of news can be accused of being fake if it’s perceived as biased.

Could AI analyze an individual piece of coverage and compute a “bias rating” that tells readers whether an article is biased and, if so, in what way and by how much?

Yeah, it’s probably right around the corner. Google is already deploying AI to improve which news stories get served to which people, and there’s already AI that can read, analyze and summarize long text with impressive accuracy.

It’s reasonable to think that AI could be taught to examine news for bias, cross-reference sources and other factual information, and combine its findings into a single score. It would be nice if news articles came with bias or trust “ratings” of this kind, similar to the ratings wine bottles carry on store shelves.
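To make the idea concrete, here is a deliberately simple sketch of what such a scoring pipeline could look like. This is not any platform’s actual system; the word lists, weights and sample text are invented purely for illustration, and a real rating would rely on trained models and far richer signals.

```python
import re

# Illustrative signals only; a real system would learn these from labeled data.
LOADED_WORDS = {"outrageous", "disgraceful", "radical", "radicals", "corrupt", "disaster", "hoax"}
ATTRIBUTION_PHRASES = ("according to", "officials said", "the report states", "data show")

def bias_rating(article_text: str) -> float:
    """Return a toy 0-100 'bias rating'; higher means more warning signs.

    Combines two crude signals: the density of emotionally loaded words,
    and how few explicit source attributions the piece contains.
    """
    text = article_text.lower()
    words = re.findall(r"[a-z']+", text)
    if not words:
        return 0.0

    loaded_density = sum(word in LOADED_WORDS for word in words) / len(words)
    attributions = sum(text.count(phrase) for phrase in ATTRIBUTION_PHRASES)

    # Arbitrary weighting, chosen only to make the example readable.
    score = 100 * min(1.0, loaded_density * 20) / (1 + attributions)
    return round(score, 1)

if __name__ == "__main__":
    sample = ("According to officials, the council approved the budget on Tuesday. "
              "Critics called the plan a disaster pushed by corrupt radicals.")
    print(f"Bias rating: {bias_rating(sample)} / 100")
```

Even a far more sophisticated version of this would only be as trustworthy as the data and judgments behind it, which leads to the bigger question.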

But would people even believe these rating standards, or care? Research on confirmation bias shows that people prefer to believe what they already think.

Therefore, we can’t rely on or wait for AI tech to solve our current problems. We need to put up a fight ourselves. There are at least four things we can do right now, as humans and as communications professionals, to help eliminate fake news:

1. Don’t lie — and don’t work with liars. Yes, as PR professionals and marketers, we advocate a position. But that position must be anchored in truth. If we lie, or let our clients lie, we are literally purveyors of fake news. We must eliminate “spin,” both the word and the practice it refers to, and cast it out of our profession.

2. Identify and call out fake news and lies. Many of us are former journalists, and we routinely work with the media. We have a particular set of skills and expertise that allows us to spot and flag fake news. If you see genuinely fake news, make sure people know about it, but don’t fall into the trap of engaging with it. Every retweet and like gives these fables another ounce of unwarranted credibility.

3. Call out reporting mistakes and inaccuracies (but do it politely and professionally). Reporters are human. I used to be one, and I made plenty of mistakes. Some 20 years after I switched to PR, there are far fewer reporters and editors to catch errors. This gap, and the need to move even faster, means mistakes are going to happen more often. Good advocates and allies will help reporters avoid being accused of “fake news” for making an honest mistake.

4. Be prepared. Don’t wait for fake news to strike — ask yourself now: Are you ready for what happens if someone targets your organization with a fake post? Is such a scenario part of your current crisis communications plan? If not, take action now to plan how you’d react, how you’d set the record straight and how quickly you’d need to move to protect your reputation.

The fight against fake news isn’t going to end anytime soon, and we all need to do our part, with or without AI to help us win the battle.

