Facebook is looking at ways to mitigate the impact of fake news

Fake news has been top of mind for a lot of people as of late. After the presidential election in the U.S. and public outcry over the prevalence of false information on Facebook, the company announced measures to prevent fake news from appearing on the platform. But there are still concerns about the civic consequences of fake news and the effect it has on people once it’s already been spread.

Now, Facebook is trying to figure out what to do once the misinformation is already out and how to curtail its impact, Facebook VP of News Feed Adam Mosseri said last night on a panel at the University of California, Berkeley, titled “Separating fact from fantasy: Is fake news undermining the truth?”

One approach, Mosseri said, could be to let people know retroactively: if someone read or shared a story that later turned out to be fake, Facebook could notify them after the fact.

“You want to make sure as little comes in the system as possible and when it happens, you need to react as quickly as you can,” Mosseri said. “And if you didn’t find it until later then you need to consider letting people know. The question is who and how. I don’t know if we’ll do that but it’s certainly something we’re considering.”
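To give a rough sense of what that retroactive notification could look like, here is a minimal sketch that finds everyone who read or shared a story after it gets flagged. The data structures, function name and “read”/“share” actions are hypothetical illustrations, not Facebook’s actual systems.

```python
# Hypothetical sketch of retroactive notification: once a story is flagged as
# fake, find every user who read or shared it so they can be notified.
from dataclasses import dataclass


@dataclass(frozen=True)
class Interaction:
    user_id: str
    story_id: str
    action: str  # "read" or "share"


def users_to_notify(flagged_story_id: str, interactions: list[Interaction]) -> set[str]:
    """Return the users who read or shared a story that was later flagged."""
    return {
        i.user_id
        for i in interactions
        if i.story_id == flagged_story_id and i.action in ("read", "share")
    }


if __name__ == "__main__":
    log = [
        Interaction("alice", "story-123", "share"),
        Interaction("bob", "story-123", "read"),
        Interaction("carol", "story-456", "read"),
    ]
    # Story 123 is later marked as fake; Alice and Bob would be notified.
    print(users_to_notify("story-123", log))  # {'alice', 'bob'}
```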

The term “fake news” is being used to describe anything from errors to deliberate falsehoods, UC Berkeley Graduate School of Journalism Dean Edward Wasserman said on the panel. Whether it’s intentional manipulation or a journalist who accidentally gets something wrong in a news story, it can fall under the umbrella of fake news.

Last month, Mosseri revealed the company’s plans to battle fake news that gets shared across the platform. Earlier this week, Facebook expanded those measures to Germany.

But Mosseri noted last night that fake news existed on Facebook before the election, and that the company had been working to address it for a couple of years. The main difference has been the public outcry in the wake of the election, Mosseri said.

“In terms of how much we’ve seen, we actually haven’t seen a ton of increase around the election,” Mosseri said. “The amount of fake news on the platform, actually — and I’m not trying to diminish the importance of the issue — is relatively small. It’s a very small percentage of what people see. It should be smaller. It should get as close to zero as possible.”

Something all the panelists agreed on last night is that there will always be fake news, especially as long as there is a financial incentive. Although Facebook doesn’t make money from fake news, Mosseri said, the platform does send economic value to fake news publishers.

“We need to do what we can to reduce the distribution that fake news publishers get as close as we can to zero,” Mosseri said. “That’s kind of what we started to do in December and we have more work to do.”

Ultimately, the aim is to prevent fake news from entering Facebook in the first place, Mosseri said. That’s why “disrupting the economic model is so important,” he said. One area Mosseri is particularly excited about is taking a closer look at landing pages. If 90% of a landing page is ads, Mosseri said, that’s a sign that it’s not a legitimate site.
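As a way to make that heuristic concrete, here is a minimal sketch of an ad-density check. Facebook hasn’t said how it actually measures pages; the region representation, function names and the use of the 90% figure as a hard cutoff are assumptions for illustration only.

```python
# Minimal sketch of the ad-density signal: if ads cover most of a landing
# page's area, treat the page as a red flag. The 90% threshold comes from
# Mosseri's example; the measurement of page regions is an assumption here.

def ad_area_fraction(regions: list[tuple[int, bool]]) -> float:
    """regions: (area_in_pixels, is_ad) pairs for each block on the rendered page."""
    total = sum(area for area, _ in regions)
    ads = sum(area for area, is_ad in regions if is_ad)
    return ads / total if total else 0.0


def looks_illegitimate(regions: list[tuple[int, bool]], threshold: float = 0.9) -> bool:
    """Flag the page if ads cover at least `threshold` of its area."""
    return ad_area_fraction(regions) >= threshold


if __name__ == "__main__":
    page = [(800, True), (150, True), (50, False)]  # mostly ads, little content
    print(ad_area_fraction(page))    # 0.95
    print(looks_illegitimate(page))  # True
```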

“We’ve done some work but I think we have a lot more work to do,” Mosseri said.