The Problem of Free Speech in an Age of Disinformation

Kate Starbird, a professor of human-computer interaction at the University of Washington who tracks disinformation on social media, called the Facebook label “worse than nothing.” Most of the time, adding a weak label to a Trump post has the effect of “getting an awareness boost by creating a second cycle of news about Republican allegations of bias in content moderation,” says Nathaniel Persily, a Stanford law professor and co-director of the university’s Program on Democracy and the Internet.

Facebook has since updated its labels based on testing and feedback, including from civil rights leaders. “The labels that we have now are much more than what we had before,” says Monika Bickert, Facebook’s vice president for content policy. “They have gotten stronger. But I would expect us to keep refining them as we see what works.” Facebook updated the label on Trump’s Sept. 28 post: “Both voting in person and voting by mail have a long history of trustworthiness in the U.S., and the same is predicted this year. Source: Bipartisan Policy Center.” On an Oct. 6 Trump post containing more falsehoods about voting, Facebook added a sentence to this label: “Voter fraud is extremely rare across voting methods.” (Other labels, however, remain bland, and much of the misleading content related to voting goes unlabeled.)

Angelo Carusone, president of Media Matters for America, a nonprofit media watchdog group, finds the changes useful but frustratingly late. “They refused to touch any of this content, rejected an ocean of disinformation about voting and election integrity and any efforts to fix it, let it metastasize, and are now starting to do what they could have been doing all along.” Carusone also points out that independent researchers lack access to the data that would allow them to investigate key questions behind the companies’ claims to be addressing disinformation. How common are disinformation and hate speech on the platforms? Are people who see the information labels on Facebook, Twitter and YouTube less likely to share false and misleading content? What kind of warning has the greatest impact?

Twitter and Facebook are reducing the spread of some bogus posts, but during this election season, bogus content has been shared or retweeted tens of thousands of times or more before the companies make visible efforts to address it. “Right now we are watching disinformation go viral and trying desperately to refute it,” Starbird tweeted in September. “By the time we do that, even in cases where the platforms eventually take action, the false information and narrative has already done its damage.”

Facebook came under intense criticism for the role it played in the last presidential race. During the 2016 campaign, Facebook later reported, Russian operatives spent about $100,000 to buy some 3,000 ads meant to benefit Trump, largely by sowing racial division. On Facebook, that small investment paid off immensely as the site’s users spread the planted ads to their followers. “Facebook’s size means that our risk is concentrated,” says Brendan Nyhan, a political scientist at Dartmouth College. “When they get it wrong, they get it wrong nationally or globally.”

Facebook and YouTube have treated political ads as protected speech, so the ads may contain false and misleading information. Online ads, like direct mail and robocalls, can make it very difficult to correct the record. Online advertisers can use microtargeting to identify the narrow segments of users they want to reach. “Misleading television ads can be countered and fact-checked,” while a misleading message in a microtargeted ad “is hidden from challenge by the other campaign or the media,” wrote Zeynep Tufekci, a sociologist at the University of North Carolina at Chapel Hill and the author of the 2017 book “Twitter and Tear Gas,” in a prescient 2012 op-ed in The New York Times.

Domestic groups are using similar tactics this election season. This summer, FreedomWorks, a pro-Trump group founded with backing from the Koch brothers, ran 150 Facebook ads directing people to a page featuring a picture of LeBron James. The picture was accompanied by a quote in which James denounced poll closings as racist, deployed misleadingly to trick people out of voting by mail. After The Washington Post reported on it, Facebook removed the page for violating its voter-interference policies, but only after the ads had been viewed a hundred thousand times.
