Should We Fight Fake News By Banning Gullible People From The Internet? – Forbes
All of the conversation to date around “fake news” has focused on catching false or misleading stories in progress and stopping them from spreading further. But, as I’ve noted this week, professional journalists and academics are themselves frequently poor at verification and vetting, and even marquee news outlets and academic journals run stories that are later shown to be false, misleading or at best unverified. What happens when a story goes viral and is only determined to be false days, weeks or even months later? How do we go back and notify everyone who saw that story that what they read was later found to be false or misleading?
Search Google for the phrase “viral hoax” and you will turn up reams of stories that spread like wildfire through the social and mainstream media spheres, only to be disproven long after they had become entrenched wisdom. Even if the fact checking services proposed by Facebook and others come to fruition and manage to vet all of Facebook in real time, hundreds, thousands or even millions of users will still have read a story before it is debunked, and they will never see the warning subsequently affixed for new readers noting that the content’s veracity has been questioned.
This question of how to notify previous readers of a story that it has been retracted is one that has long confronted the academic community. When an academic journal formally determines that a paper contains false or misleading statements or otherwise must be removed from the scholarly record, it “retracts” the paper by either removing it from its online archive or affixing a special notice to it. Scholars who stumble upon the paper in the future will see the notice that it has been retracted, but what of the myriad scholars who may have read and even cited the paper while it was still considered valid? Retracting a paper reflects poorly on a journal, since it suggests that the journal’s peer review process failed to catch the issues in question. Thus, while journals will affix a warning to a paper, they typically don’t issue press releases or widely publicize retractions except in very special circumstances.
To ensure that such retractions are noticed by the scholarly community, websites like Retraction Watch actively monitor retractions in major journals and publicize them to the broader scholarly community. Yet this still leaves the issue that a retracted paper may have been extensively cited in numerous other papers before it was caught. Papers that cite a retracted paper are typically not themselves retracted unless the retracted work served as the primary evidence for their major claims. Thus, even after a paper is struck from the scholarly record, it lives on in the citations of myriad other papers. In turn, still others may cite the paper without ever reading it, simply by copy-pasting from the references list of one of those papers.
It is not that we lack the technology to provide better warnings. It would be relatively straightforward for the large citation index companies to compile a list of all retractions from the journals they index, flag every paper that cites a retracted one, and publish a machine-readable index file that journals could use to display warning messages when any of that content is accessed. They could even offer a browser plugin that flags any mention of a retracted paper anywhere on the web.
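To make the idea concrete, here is a minimal sketch of such a machine-readable warning index. All of the DOIs and data structures below are invented for illustration; a real citation index company would derive them from its own databases.

```python
import json

# Hypothetical data: every identifier below is invented for illustration.
retracted = {"10.1000/paper-a"}  # DOIs of papers formally retracted by their journals

# A toy citation graph: paper DOI -> list of DOIs it cites
citations = {
    "10.1000/paper-b": ["10.1000/paper-a", "10.1000/paper-c"],
    "10.1000/paper-c": ["10.1000/paper-d"],
    "10.1000/paper-d": [],
}

def build_warning_index(retracted, citations):
    """Compile a machine-readable index mapping each paper to the
    retracted works it cites, so that a journal website or browser
    plugin could display a warning whenever the paper is accessed."""
    index = {}
    for doi, cited in citations.items():
        flagged = sorted(set(cited) & retracted)
        if flagged:
            index[doi] = {"cites_retracted": flagged}
    return index

index = build_warning_index(retracted, citations)
print(json.dumps(index, indent=2))
```

Publishing the result as JSON is one plausible choice for the “machine-readable index file” the paragraph above imagines: any journal platform could fetch it periodically and overlay warnings on its own pages.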
Then there is the issue of what to do about a story that is largely correct but contains a single relatively minor falsehood buried in the middle. Does the entire story need to be retracted, or should the outlet simply edit the article to remove or correct the offending statement?
Let’s turn for a moment to how the New York Times, one of the nation’s most prestigious newspapers, deals with this issue. Then-Public Editor Margaret Sullivan wrote in 2013: “The Times’s policy is clear: When an early version of an article contains a clear factual error, that error is fixed in the article itself and, at the same time, a correction notice is added at the end. That doesn’t always happen in practice, especially in breaking news stories where the facts are in flux. Sometimes a change is made quickly and a correction comes later; sometimes the correction never comes at all.” She went on to chronicle several cases where the Times made major factual changes to articles without appending any editorial note alerting readers to the change; in one case, the initial version of a story got substantial facts wrong, completely shifting its meaning. This past March, the Times’ practice of “stealth editing” itself garnered headlines when the paper rewrote an article about Bernie Sanders, shifting its tone from supportive to critical of his record.
Returning to the story of Santa Claus and the dying child that went viral earlier this week, several of the outlets that covered it left their coverage untouched, without so much as the briefest editorial note alerting readers that the facts may not be as they appear. While CNN and the Washington Post appended editorial notes to their coverage, the BBC, Daily Mail, Today and People all left their original stories as-is, with no indication that the account was subsequently called into question.
Let’s say Facebook had been running its fact checking service earlier this week and its contributors had eventually flagged the Santa Claus story as contested. What should be done about the myriad readers who saw the article, and even shared it, before it was flagged?
Should Facebook use its database of viewed and shared posts to compile a list of everyone who read or shared the article, and show them a warning the next time they log in, letting them know that something they read or shared was later determined to be contested?
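Mechanically, such a lookup is trivial for a platform that already logs engagement. Here is a minimal sketch, assuming a simple log of (user, article, action) events; the names, article IDs and log format are all invented for illustration:

```python
# Hypothetical engagement log: every name and article ID is invented.
engagement_log = [
    ("alice", "santa-story", "shared"),
    ("bob",   "santa-story", "viewed"),
    ("carol", "other-story", "viewed"),
]

def users_to_notify(log, flagged_article):
    """Return the set of users who viewed or shared an article that was
    later flagged as contested, so that a warning could be shown to
    each of them at their next login."""
    return {user for user, article, action in log
            if article == flagged_article and action in ("viewed", "shared")}

notify = users_to_notify(engagement_log, "santa-story")
print(notify)
```

The hard questions, in other words, are not about the engineering of such a query but about whether a platform should run it at all.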
More controversially, once Facebook begins its fact checking service, it will ultimately possess a database of the people who read and share such content more frequently than other users. What will it do with this information? One has only to look at the filtering options it offers advertisers today to foresee a future in which it assigns such users a “gullible” flag that advertisers would likely find of great interest. Or might it restrict their ability to post and share on the platform in order to stem the flow of false news, essentially relegating them to the status of stigmatized second-class citizens? Only time will tell.
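The profiling step the paragraph above warns about would be equally simple to build, which is precisely what makes it worth worrying about. As a purely illustrative sketch (all users, story IDs and the scoring rule are invented; this is not any platform’s actual method), one could score each user by the fraction of their shares that were later flagged:

```python
from collections import Counter

# Hypothetical share log and flagged set; all values invented for illustration.
shares = [("alice", "story-1"), ("alice", "story-2"),
          ("bob", "story-1"), ("bob", "story-3"), ("bob", "story-4")]
flagged = {"story-1", "story-2"}  # stories later marked as contested

def flagged_share_rate(shares, flagged):
    """For each user, compute the fraction of their shares that were
    later flagged as contested -- the kind of crude score a platform
    might be tempted to treat as a 'gullibility' signal."""
    total, hit = Counter(), Counter()
    for user, story in shares:
        total[user] += 1
        if story in flagged:
            hit[user] += 1
    return {user: hit[user] / total[user] for user in total}

rates = flagged_share_rate(shares, flagged)
```

A few lines of counting are all it takes to turn a fact checking database into a user-labeling system, which is why the ethics deserve debate before, not after, such services launch.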
Putting this all together, instead of talking exclusively about how to stop fake news from spreading further, we must also start talking about what to do for all of the people who read and shared those articles before they were flagged as fake. How do we let them know that something they read has been called into question? And what are the ethical considerations of profiling users who commonly read and share such content, labeling them “gullible” or banning them outright from the Internet at large? These are the uncharted waters of the future of the Internet.