I actually think it's super weird how people are blaming their diminished sense of well-being on the Trump administration. Personal events determine my quality of life, not who's in the White House.
People are just raging, raging against the dying of the light.
Investigators are looking into a possible connection to a white supremacist group in Florida. The leader of the Republic of Florida group confirmed to the Associated Press that Cruz was a member. Police have said that a connection to the group is "not confirmed."
As Zeynep Tufekci has eloquently argued: "The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself."

So we also get subjected to all this intentional padding, applied selectively, to defuse debate and derail clear lines of argument; to encourage confusion and apathy; to shift blame and buy time. Bored people are less likely to call their political representatives to complain.

Truly fake news is the inception layer cake that never stops being baked. Because pouring FUD onto an already polarized debate -- and seeking to shift what are by nature shifty sands (after all, information, misinformation and disinformation can be relative concepts, depending on your personal perspective/prejudices) -- makes it hard for any outsider to nail this gelatinous fakery to the wall.

Why would social media platforms want to participate in this FUDing? Because it's in their business interests not to be identified as the primary conduit for democracy-damaging disinformation. And because they're terrified of being regulated on account of the content they serve. They absolutely do not want to be treated as the digital equivalents of traditional media outlets.

But the stakes are high indeed when democracy and the rule of law are on the line. And by failing to be proactive about the existential threat posed by digitally accelerated disinformation, social media platforms have unwittingly made the case for external regulation of their global information-shaping and distribution platforms louder and more compelling than ever.

Every gun outrage in America is now routinely followed by a flood of Russian-linked Twitter bot activity. Exacerbating social division is the name of this game.
And it's playing out all over social media continually, not just around elections.

In the case of Russian digital meddling connected to the UK's 2016 Brexit referendum -- which we now know for sure existed, though we still don't have all of the data we need to quantify the actual impact -- the chairman of a UK parliamentary committee that's running an inquiry into fake news has accused both Twitter and Facebook of essentially ignoring requests for data and help, and of doing none of the work the committee asked of them.

Facebook has since said it will take a more thorough look through its archives. And Twitter has drip-fed some tidbits of additional information. But more than a year and a half after the vote itself, many, many questions remain.

And just this week another third-party study suggested that the impact of Russian Brexit trolling was far larger than has so far been conceded by the two social media firms.

The PR company that carried out this research included in its report a long list of outstanding questions for Facebook and Twitter. Here they are:

How much did [Russian-backed media outlets] RT, Sputnik and Ruptly spend on advertising on your platforms in the six months before the referendum in 2016?

How much have these media platforms spent to build their social followings?

Sputnik has no active Facebook page, but has a significant number of Facebook shares for anti-EU content. Does Sputnik have an active Facebook advertising account?

Will Facebook and Twitter check the dissemination of content from these sites to check they are not using bots to push their content?

Did either RT, Sputnik or Ruptly use 'dark posts' on either Facebook or Twitter to push their content during the EU referendum, or have they used 'dark posts' to build their extensive social media following?

What processes do Facebook or Twitter have in place when accepting advertising from media outlets or state-owned corporations from autocratic or authoritarian countries?
Noting that Twitter no longer takes advertising from either RT or Sputnik: Did any representatives of Facebook or Twitter proactively engage with RT or Sputnik to sell inventory, products or services on the two platforms in the period before 23 June 2016?

We put these questions to Facebook and Twitter.

In response, a Twitter spokeswoman pointed us to some "key points" from a previous letter it sent to the DCMS committee (emphasis hers):

In response to the Commission's request for information concerning Russian-funded campaign activity conducted during the regulated period for the June 2016 EU Referendum (15 April to 23 June 2016), Twitter reviewed referendum-related advertising on our platform during the relevant time period. Among the accounts that we have previously identified as likely funded from Russian sources, we have thus far identified one account -- @RT_com -- which promoted referendum-related content during the regulated period. $1,031.99 was spent on six referendum-related ads during the regulated period.

With regard to future activity by Russian-funded accounts, on 26 October 2017, Twitter announced that it would no longer accept advertisements from RT and Sputnik and will donate the $1.9 million that RT had spent globally on advertising on Twitter to academic research into elections and civic engagement. That decision was based on a retrospective review that we initiated in the aftermath of the 2016 U.S. Presidential Elections and following the U.S. intelligence community's conclusion that both RT and Sputnik have attempted to interfere with the election on behalf of the Russian government.
Accordingly, @RT_com will not be eligible to use Twitter's promoted products in the future.

The Twitter spokeswoman declined to provide any new on-the-record information in response to the specific questions.

A Facebook representative first asked to see the full study, which we sent, then failed to provide a response to the questions at all.

The PR firm behind the research, 89up, makes this particular study fairly easy for them to ignore. It's a pro-Remain organization. The research was not undertaken by a group of impartial university academics. The study isn't peer-reviewed, and so on.

But, in an illustrative twist, if you Google "89up Brexit", Google News injects fresh Kremlin-backed opinions into the search results it delivers -- see the top and third result here...
To wit, the tweets said that the online advertising campaign led by the shadowy Internet Research Agency was meant to divide the American people, not influence the 2016 election.

Antonio García Martínez (@antoniogm) is an Ideas contributor for WIRED. Before turning to writing, he dropped out of a doctoral program in physics to work on Goldman Sachs' credit trading desk, then joined the Silicon Valley startup world, where he founded his own startup (acquired by Twitter in 2011) and finally joined Facebook's early monetization team, where he headed the company's targeting efforts. His 2016 memoir, Chaos Monkeys, was a New York Times best seller and NPR Best Book of the Year, and his writing has appeared in Vanity Fair, The Guardian, and The Washington Post. He splits his time between a sailboat on the San Francisco Bay and a yurt in Washington's San Juan Islands.

You're probably skeptical of Rob's claim, and I don't blame you. The world looks very different to people outside the belly of Facebook's monetization beast. But when you're on the inside, like Rob is and like I was, and you have access to the revenue dashboards detailing every ring of the cash register, your worldview tends to follow what advertising data can and cannot tell you.

From this worldview, it's still not clear how much influence the IRA had with its Facebook ads (which, as others have pointed out, are just one small part of the huge propaganda campaign that Mueller is currently investigating). But no matter how you look at them, Russia's Facebook ads were almost certainly less consequential than the Trump campaign's mastery of two critical parts of the Facebook advertising infrastructure: the ads auction, and a benign-sounding but actually Orwellian product called Custom Audiences (and its diabolical little brother, Lookalike Audiences).
Both of which sound incredibly dull, until you realize that the fate of our 242-year-old experiment in democracy once depended on them, and surely will again.

Like many things at Facebook, the ads auction is a version of something Google built first. As on Google, Facebook has a piece of ad real estate that it's auctioning off, and potential advertisers submit a piece of ad creative, a targeting spec for their ideal user, and a bid for what they're willing to pay to obtain a desired response (such as a click, a like, or a comment). Rather than simply award that ad position to the highest bidder, though, Facebook uses a complex model that considers both the dollar value of each bid and how good a piece of clickbait (or view-bait, or comment-bait) the corresponding ad is. If Facebook's model thinks your ad is 10 times more likely to engage a user than another company's ad, then your effective bid at auction is considered 10 times higher than that of a company willing to pay the same dollar amount.

A canny marketer with really engaging (or outraging) content can goose their effective purchasing power at the ads auction, piggybacking on Facebook's estimation of their clickbaitiness to win many more auctions (for the same or less money) than an unengaging competitor. That's why, if you've noticed a News Feed ad that's pulling out all the stops (via provocative stock photography or other gimcrackery) to get you to click on it, it's partly because the advertiser is aiming to pump up their engagement levels and increase their exposure, all without paying any more money.

During the run-up to the election, the Trump and Clinton campaigns bid ruthlessly for the same online real estate in front of the same swing-state voters. But because Trump used provocative content to stoke social media buzz, and he was better able to drive likes, comments, and shares than Clinton, his bids received a boost from Facebook's click model, effectively winning him more media for less money.
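The engagement-weighted mechanism described above can be sketched in a few lines. This is only a toy model of the idea, not Facebook's actual auction: the function names, weights, and numbers are invented for illustration.

```python
# Minimal sketch of an engagement-weighted ad auction: the winner is not the
# highest dollar bid, but the highest bid scaled by predicted engagement.
# All names and numbers here are hypothetical.

def effective_bid(dollar_bid, predicted_engagement_rate):
    """Money offered, scaled by how engaging the model thinks the ad is."""
    return dollar_bid * predicted_engagement_rate

bids = {
    "provocative_ad": effective_bid(1.00, 0.10),  # 10% predicted engagement
    "staid_ad":       effective_bid(1.00, 0.01),  # 1% predicted engagement
}

winner = max(bids, key=bids.get)
print(winner)  # provocative_ad wins despite offering the same dollar amount
```

The point of the sketch is the multiplication: identical dollar bids produce a 10x gap in effective purchasing power once predicted engagement enters the formula.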
In essence, Clinton was paying Manhattan prices for the square footage on your smartphone's screen, while Trump was paying Detroit prices. Facebook users in swing states who felt Trump had taken over their news feeds may not have been hallucinating.

(Speaking of Manhattan vs. Detroit prices, there are some very nonmetaphorical differences in media costs across the country that also impacted Trump's ability to reach voters. Broadly, advertising costs in rural, out-of-the-way areas are considerably lower than in hotly contested, dense urban areas. As each campaign tried to mobilize its base, largely rural Trump voters were probably cheaper to reach than Clinton's urban voters. Consider Germantown, Pa. (a Philly suburb Clinton won by a landslide) vs. Belmont County, Ohio (a rural county Trump comfortably won). Actual media costs are closely guarded secrets, but Facebook's own advertiser tools can give us some ballpark estimates. For zip code 43950 (covering the county seat of St. Clairsville, Ohio), Facebook estimates an advertiser can show an ad to about 83 people per dollar. For zip code 19144 in the Philly suburbs, that number sinks to 50 people per dollar of ad spend. Averaged over lots of time and space, the impacts on media budgets can be sizable. Anyway ...)

The above auction analysis is even more true for News Feed, which is based purely on engagement, with every user mired in a self-reinforcing loop of engagement, followed by optimized content, followed by more revealing engagement, then more content, ad infinitum. The candidate who can trigger that feedback loop ultimately wins. The Like button is our new ballot box, and democracy has been transformed into an algorithmic popularity contest.

But how to trigger the loop? For that, we need the machinery of targeting.
(Full disclosure: I was the original product manager for Custom Audiences, and along with a team of other product managers and engineers, I launched the first versions of Facebook precision targeting in the summer of 2012, in those heady and desperate days of the IPO and sudden investor expectation.)

Despite folklore about "selling your data," most Facebook advertisers couldn't care less about your Likes, your drunk college photos, or your gossipy chats with a boyfriend. What advertisers want to do is find the person who left a product unpurchased in an online shopping cart, just used a loyalty card to buy diapers at Safeway, or registered as a Republican voter in Stark County, Ohio (a swing county in a swing state).

Custom Audiences lets them do that. It's the tunnel beneath the data wall that allows the outside world into Facebook's well-protected garden, and it's like that by design.

Browsed for shoes and then saw them on Facebook? You're in a Custom Audience.

Registered for an email newsletter or used your email as a login somewhere? You're in a Custom Audience.

Ordered something to a postal address known to merchants and marketers? You're definitely in a Custom Audience.

Here's how it works in practice: A campaign manager takes a list of emails or other personal data for people they think will be susceptible to a certain type of messaging (e.g. people in Florida who donated money to Trump For America). They upload that spreadsheet to Facebook via the Ads Manager tool, and Facebook scours its user data, looks for users who match the uploaded spreadsheet, and turns the matches into an "Audience," which is really just a set of Facebook users.

Facebook can also populate an audience by reading a user's cookies -- those digital fragments gathered through a user's wanderings around the web. Half the bizarre conspiracy theories around Facebook targeting boil down to you leaving a data trail somewhere inside our consumer economy that was then uploaded via Custom Audiences.
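The upload-and-match step described above can be sketched as a simple hashed join. This is a minimal illustration under one common assumption -- that identifiers are normalized and hashed before matching -- and every name and record in it is hypothetical:

```python
import hashlib

def normalize_and_hash(email):
    """Hash a normalized email so raw addresses never have to be compared directly."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Platform's user table: hashed email -> internal user ID (hypothetical data)
platform_users = {
    normalize_and_hash("alice@example.com"): "user_001",
    normalize_and_hash("bob@example.com"): "user_002",
}

# Advertiser's uploaded list (messy formatting, one non-member)
uploaded = ["Alice@Example.com ", "carol@example.com"]

# The match: hash each uploaded entry and intersect with the platform's records
custom_audience = {
    platform_users[h]
    for h in (normalize_and_hash(e) for e in uploaded)
    if h in platform_users
}
print(custom_audience)  # {'user_001'}
```

Non-members simply fail to match; the advertiser gets back an opaque audience of user IDs rather than confirmation of who is on the platform.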
In the language of database people, there's now a "join" between the Facebook user ID (that's you) and this outside third party that knows what you bought, browsed, or who you voted for (probably). That join is permanent, irrevocable, and will follow you to every screen where you've used Facebook.

The above is pretty rudimentary data plumbing. But only once you've built a Custom Audience can you build Lookalike Audiences -- the most unknown, poorly understood, and yet powerful weapon in the Facebook ads arsenal.

With a mere mouse click from our hypothetical campaign manager, Facebook now searches the friends of everyone in the Custom Audience, trying to find everyone who (wait for it) "looks like" you. Using a witches' brew of mutual engagement -- probably including some mix of shared page Likes, interaction with similar News Feed or Ads content, and a score used to measure your social proximity to friends -- the Custom Audience is expanded to a bigger set of like-minded people. Lookalikes.

(Another way to picture it: Your social network resembles a nutrient-rich petri dish, just sitting out in the open. Custom Audiences helps mercenary marketers find that dish, and lets them plant the bacterium of a Facebook post inside it. From there, your own interaction with the meme, which is echoed in News Feed, spreads it to your immediate vicinity. Lookalike Audiences finishes the job by pushing it to the edges of your social petri dish, to everyone whose tastes and behaviors resemble yours. The net result is a network overrun by an infectious meme, dutifully placed there by an advertiser, and spread by the ads and News Feed machinery.)
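The lookalike expansion can be pictured as similarity scoring over the seed audience's friends. This toy sketch uses Jaccard overlap of page Likes as a stand-in similarity measure; the real signals and model are proprietary, and all names, data, and the threshold here are invented:

```python
# Toy lookalike expansion: score each friend of a seed audience by how much
# their page Likes overlap with the seed's, and keep the closest matches.

page_likes = {
    "seed_1":   {"pageA", "pageB", "pageC"},
    "friend_1": {"pageA", "pageB"},   # strong overlap with the seed
    "friend_2": {"pageX"},            # no overlap
}
friends_of = {"seed_1": ["friend_1", "friend_2"]}
seed_audience = ["seed_1"]

def jaccard(a, b):
    """Fraction of shared interests between two Like-sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

lookalikes = set()
for seed in seed_audience:
    for friend in friends_of[seed]:
        if jaccard(page_likes[seed], page_likes[friend]) >= 0.5:
            lookalikes.add(friend)

print(lookalikes)  # {'friend_1'}
```

Whatever the actual scoring brew, the structure is the same: start from the seed set, walk outward through the social graph, and keep whoever scores "similar enough."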
"Falsehood flies, and the Truth comes limping after it," Jonathan Swift once wrote.It was hyperbole three centuries ago. But it is a factual description of social media, according to an ambitious and first-of-its-kind study published Thursday in Science.The massive new study analyzes every major contested news story in English across the span of Twitter's existence--some 126,000 stories, tweeted by 3 million users, over more than 10 years--and finds that the truth simply cannot compete with hoax and rumor. By every common metric, falsehood consistently dominates the truth on Twitter, the study finds: Fake news and false rumors reach more people, penetrate deeper into the social network, and spread much faster than accurate stories."It seems to be pretty clear [from our study] that false information outperforms true information," said Soroush Vosoughi, a data scientist at MIT who has studied fake news since 2013 and who led this study. "And that is not just because of bots. It might have something to do with human nature."The study has already prompted alarm from social scientists. "We must redesign our information ecosystem in the 21st century," write a group of 16 political scientists and legal scholars in an essay also published Thursday in Science. They call for a new drive of interdisciplinary research "to reduce the spread of fake news and to address the underlying pathologies it has revealed.""How can we create a news ecosystem ... that values and promotes truth?" they ask.The new study suggests that it will not be easy. Though Vosoughi and his colleagues only focus on Twitter--the study was conducted using exclusive data that the company made available to MIT--their work has implications for Facebook, YouTube, and every major social network. 
Any platform that regularly amplifies engaging or provocative content runs the risk of amplifying fake news along with it.

Though the study is written in the clinical language of statistics, it offers a methodical indictment of the accuracy of information that spreads on these platforms. A false story is much more likely to go viral than a real story, the authors find. A false story reaches 1,500 people six times quicker, on average, than a true story does. And while false stories outperform the truth on every subject -- including business, terrorism and war, science and technology, and entertainment -- fake news about politics regularly does best.

Twitter users seem almost to prefer sharing falsehoods. Even when the researchers controlled for every difference between the accounts originating rumors -- like whether that person had more followers or was verified -- falsehoods were still 70 percent more likely to get retweeted than accurate news.

And blame for this problem cannot be laid with our robotic brethren. From 2006 to 2016, Twitter bots amplified true stories as much as they amplified false ones, the study found. Fake news prospers, the authors write, "because humans, not robots, are more likely to spread it."

Political scientists and social-media researchers largely praised the study, saying it gave the broadest and most rigorous look so far into the scale of the fake-news problem on social networks, though some disputed its findings about bots and questioned its definition of news.
What makes this study different? In the past, researchers have looked into the problem of falsehoods spreading online. They've often focused on rumors around singular events, like the speculation that preceded the discovery of the Higgs boson in 2012 or the rumors that followed the Haiti earthquake in 2010.

This new paper works at a far grander scale, looking at nearly the entire lifespan of Twitter: every piece of controversial news that propagated on the service from September 2006 to December 2016. But to do that, Vosoughi and his colleagues had to answer a more preliminary question first: What is truth? And how do we know?

It's a question that can have life-or-death consequences.

"[Fake news] has become a white-hot political and, really, cultural topic, but the trigger for us was personal events that hit Boston five years ago," said Deb Roy, a media scientist at MIT and one of the authors of the new study.

On April 15, 2013, two bombs exploded near the route of the Boston Marathon, killing three people and injuring hundreds more. Almost immediately, wild conspiracy theories about the bombings took over Twitter and other social-media platforms. The mess of information only grew more intense on April 19, when the governor of Massachusetts asked millions of people to remain in their homes as police conducted a huge manhunt.

"I was on lockdown with my wife and kids in our house in Belmont for two days, and Soroush was on lockdown in Cambridge," Roy told me. Stuck inside, the two men turned to Twitter as their lifeline to the outside world. "We heard a lot of things that were not true, and we heard a lot of things that did turn out to be true" using the service, he said.

The ordeal soon ended. But when the two men reunited on campus, they agreed it seemed silly for Vosoughi -- then a Ph.D. student focused on social media -- to research anything but what they had just lived through.
Roy, his adviser, blessed the project.

Vosoughi made a truth machine: an algorithm that could sort through torrents of tweets and pull out the facts most likely to be accurate. It focused on three attributes of a given tweet: the properties of its author (were they verified?), the kind of language it used (was it sophisticated?), and how the tweet propagated through the network.

"The model that Soroush developed was able to predict accuracy with a far-above-chance performance," said Roy. Vosoughi earned his Ph.D. in 2015.

After that, the two men -- and Sinan Aral, a professor of management at MIT -- turned to examining how falsehoods move across Twitter as a whole. But they were back not only at the "what is truth?" question, but at its more pertinent twin: How does the computer know what truth is?

They opted to turn to the ultimate arbiter of fact online: the third-party fact-checking sites. By scraping and analyzing six different fact-checking sites -- including Snopes, PolitiFact, and FactCheck.org -- they generated a list of tens of thousands of online rumors that had spread between 2006 and 2016 on Twitter. Then they searched Twitter for these rumors, using a proprietary search engine owned by the social network, called Gnip.

Ultimately, they found about 126,000 tweets, which, together, had been retweeted more than 4.5 million times. Some linked to "fake" stories hosted on other websites. Some started rumors themselves, either in the text of a tweet or in an attached image. (The team used a special program that could search for words contained within static tweet images.) And some contained true information or linked to it elsewhere.

Then they ran a series of analyses, comparing the popularity of the fake rumors with the popularity of the real news. What they found astounded them.

Speaking from MIT this week, Vosoughi gave me an example: There are lots of ways for a tweet to get 10,000 retweets, he said.
If a celebrity sends Tweet A, and they have a couple million followers, maybe 10,000 people will see Tweet A in their timeline and decide to retweet it. Tweet A was broadcast, creating a big but shallow pattern.

Meanwhile, someone without many followers sends Tweet B. It goes out to their 20 followers -- but one of those people sees it, and retweets it, and then one of their followers sees it and retweets it too, on and on until tens of thousands of people have seen and shared Tweet B.

Tweet A and Tweet B both have the same size audience, but Tweet B has more "depth," to use Vosoughi's term. It chained together retweets, going viral in a way that Tweet A never did. "It could reach 1,000 retweets, but it has a very different shape," he said.

Here's the thing: Fake news dominates according to both metrics. It consistently reaches a larger audience, and it tunnels much deeper into social networks than real news does. The authors found that accurate news wasn't able to chain together more than 10 retweets. Fake news could put together a retweet chain 19 links long -- and do it 10 times as fast as accurate news put together its measly 10 retweets.

These results proved robust even when the stories were checked by humans, not machines. Separate from the main inquiry, a group of undergraduate students fact-checked a random selection of roughly 13,000 English-language tweets from the same period. They found that false information outperformed true information in ways "nearly identical" to the main data set, according to the study.

What does this look like in real life? Take two examples from the last presidential election. In August 2015, a rumor circulated on social media that Donald Trump had let a sick child use his plane to get urgent medical care. Snopes confirmed almost all of the tale as true.
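The depth-versus-breadth distinction behind Vosoughi's Tweet A / Tweet B example can be sketched as a pair of toy retweet trees. The cascades and helper functions below are invented for illustration; they are not the study's data or code:

```python
# Two cascades reaching the same number of users: a shallow celebrity
# broadcast (Tweet A) versus a deep person-to-person chain (Tweet B).

def depth(cascade, node="root"):
    """Length of the longest retweet chain from the original tweet."""
    children = cascade.get(node, [])
    return 0 if not children else 1 + max(depth(cascade, c) for c in children)

def size(cascade):
    """Total number of retweeters in the cascade."""
    return sum(len(children) for children in cascade.values())

# Tweet A: four retweets, all directly from the original tweet.
tweet_a = {"root": ["u1", "u2", "u3", "u4"]}

# Tweet B: four retweets, each from a follower of the previous retweeter.
tweet_b = {"root": ["u1"], "u1": ["u2"], "u2": ["u3"], "u3": ["u4"]}

print(size(tweet_a), depth(tweet_a))  # 4 1
print(size(tweet_b), depth(tweet_b))  # 4 4
```

Same audience size, very different shape: the study's finding is that false stories routinely produce the second, deeper shape, and produce it faster.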
But according to the team's estimates, only about 1,300 people shared or retweeted the story.

In February 2016, a rumor developed that Trump's elderly cousin had recently died and that he had opposed the magnate's presidential bid in his obituary. "As a proud bearer of the Trump name, I implore you all, please don't let that walking mucus bag become president," the obituary reportedly said. But Snopes could not find evidence of the cousin, or his obituary, and rejected the story as false.

Nonetheless, roughly 38,000 Twitter users shared the story. And it put together a retweet chain three times as long as the sick-child story managed.

A false story alleging that the boxer Floyd Mayweather had worn a Muslim headscarf to a Trump rally also reached an audience more than 10 times the size of the sick-child story's.

Why does falsehood do so well? The MIT team settled on two hypotheses.

First, fake news seems to be more "novel" than real news. Falsehoods are often notably different from all the tweets that appeared in a user's timeline in the 60 days prior to their retweeting them, the team found.

Second, fake news evokes much more emotion than the average tweet. The researchers created a database of the words that Twitter users used to reply to the 126,000 contested tweets, then analyzed it with a state-of-the-art sentiment-analysis tool.
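A stripped-down, lexicon-based version of that kind of reply analysis looks roughly like this. The word list is an invented stand-in (real analyses of this type use standard emotion lexicons with thousands of entries), and the replies are made up:

```python
# Toy emotion tagging of reply words: map each word to an emotion category
# and count the categories, in the spirit of the analysis described above.

emotion_lexicon = {
    "shocking": "surprise", "unbelievable": "surprise",
    "gross": "disgust", "vile": "disgust",
    "heartbreaking": "sadness", "tragic": "sadness",
    "reliable": "trust", "confirmed": "trust",
}

def emotion_counts(replies):
    counts = {}
    for reply in replies:
        for word in reply.lower().split():
            emotion = emotion_lexicon.get(word.strip(".,!?"))
            if emotion:
                counts[emotion] = counts.get(emotion, 0) + 1
    return counts

replies_to_fake = ["Unbelievable!", "this is gross", "Shocking news"]
print(emotion_counts(replies_to_fake))  # {'surprise': 2, 'disgust': 1}
```

Aggregated over replies to true versus false tweets, the category counts are what let researchers say which emotions each kind of story tends to evoke.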
Fake tweets tended to elicit words associated with surprise and disgust, while accurate tweets summoned words associated with sadness and trust, they found.

The team wanted to answer one more question: Were Twitter bots helping to spread misinformation?

After running two different bot-detection algorithms on their sample of 3 million Twitter users, they found that the automated bots were spreading false news -- but they were retweeting it at the same rate that they retweeted accurate information.

"The massive differences in how true and false news spreads on Twitter cannot be explained by the presence of bots," Aral told me.

But some political scientists cautioned that this finding should not be used to dismiss the role of Russian bots in seeding disinformation recently. An "army" of Russian-associated bots helped amplify divisive rhetoric after the school shooting in Parkland, Florida, The New York Times has reported.

"It can both be the case that (1) over the whole 10-year data set, bots don't favor false propaganda and (2) in a recent subset of cases, botnets have been strategically deployed to spread the reach of false propaganda claims," said Dave Karpf, a political scientist at George Washington University, in an email.

"My guess is that the paper is going to get picked up as 'scientific proof that bots don't really matter!' And this paper does indeed show that, if we're looking at the full life span of Twitter. But the real bots debate assumes that their usage has recently escalated because strategic actors have poured resources into their use. This paper doesn't refute that assumption," he said.

Vosoughi agrees that his paper does not determine whether the use of botnets changed around the 2016 election. "We did not study the change in the role of bots across time," he told me in an email. "This is an interesting question and one that we will probably look at in future work."

Some political scientists also questioned the study's definition of "news."
By turning to the fact-checking sites, the study blurs together a wide range of false information: outright lies, urban legends, hoaxes, spoofs, falsehoods, and "fake news." It does not just look at fake news by itself -- that is, articles or videos that look like news content, and which appear to have gone through a journalistic process, but which are actually made up.

Therefore, the study may undercount "non-contested news": accurate news that is widely understood to be true. For many years, the most retweeted post in Twitter's history celebrated Obama's re-election as president. But as his victory was not a widely disputed fact, Snopes and other fact-checking sites never confirmed it.

The study also elides the distinction between content in general and news. "All our audience research suggests a vast majority of users see news as clearly distinct from content more broadly," Nielsen, the Oxford professor, said in an email. "Saying that untrue content, including rumors, spread faster than true statements on Twitter is a bit different from saying false news and true news spread at different rates."

But many researchers told me that simply understanding why false rumors travel so far, so fast, is as important as knowing that they do so in the first place.

"The key takeaway is really that content that arouses strong emotions spreads further, faster, more deeply, and more broadly on Twitter," said Tromble, the political scientist, in an email. "This particular finding is consistent with research in a number of different areas, including psychology and communication studies. It's also relatively intuitive."

"False information online is often really novel and frequently negative," said Nyhan, the Dartmouth professor.
"We know those are two features of information generally that grab our attention as human beings and that cause us to want to share that information with others--we're attentive to novel threats and especially attentive to negative threats.""It's all too easy to create both when you're not bound by the limitations of reality. So people can exploit the interaction of human psychology and the design of these networks in powerful ways," he added.He lauded Twitter for making its data available to researchers and called on other major platforms, like Facebook, to do the same. "In terms of research, the platforms are the whole ballgame. We have so much to learn but we're so constrained in what we can study without platform partnership and collaboration," he said.
Yet these do not encompass the most depressing finding of the study. When they began their research, the MIT team expected that users who shared the most fake news would basically be crowd-pleasers. They assumed they would find a group of people who obsessively use Twitter in a partisan or sensationalist way, accumulating more fans and followers than their more fact-based peers.

In fact, the team found that the opposite is true. Users who share accurate information have more followers, and send more tweets, than fake-news sharers. These fact-guided users have also been on Twitter for longer, and they are more likely to be verified. In short, the most trustworthy users can boast every obvious structural advantage that Twitter, either as a company or a community, can bestow on its best users.

The truth has a running start, in other words -- but inaccuracies, somehow, still win the race. "Falsehood diffused further and faster than the truth despite these differences [between accounts], not because of them," write the authors.

This finding should dispirit every user who turns to social media to find or distribute accurate information. It suggests that no matter how adroitly people plan to use Twitter -- no matter how meticulously they curate their feed or follow reliable sources -- they can still get snookered by a falsehood in the heat of the moment.

It suggests -- to me, at least, a Twitter user since 2007, and someone who got his start in journalism because of the social network -- that social-media platforms do not encourage the kind of behavior that anchors a democratic government. On platforms where every user is at once a reader, a writer, and a publisher, falsehoods are too seductive not to succeed: The thrill of novelty is too alluring, the titillation of disgust too difficult to transcend.
After a long and aggravating day, even the most staid user might find themselves lunging for the politically advantageous rumor. Amid an anxious election season, even the most public-minded user might subvert their higher interest to win an argument.

It is unclear which interventions, if any, could reverse this tendency toward falsehood. "We don't know enough to say what works and what doesn't," Aral told me. There is little evidence that people change their opinion because they see a fact-checking site reject one of their beliefs, for instance. Labeling fake news as such, on a social network or search engine, may do little to deter it as well.

In short, social media seems to systematically amplify falsehood at the expense of the truth, and no one -- neither experts nor politicians nor tech companies -- knows how to reverse that trend. It is a dangerous moment for any system of government premised on a common public reality.