Big surprise: The Twitter Files — the reports based on internal Twitter documents and messages that the company’s new owner Elon Musk provided to journalists — have landed as a polarizing salvo in the culture war.
Many inclined to distrust what they see as Big Tech’s liberal leanings have cried vindication. The documents show in detail how Twitter made key content moderation decisions that disadvantaged Trump, conservatives, and people who broke with the public health consensus on Covid-19. They say the evidence proves that, again and again, Twitter intervened to squelch speech that the liberal establishment didn’t like.
Meanwhile, others — including most liberals and many mainstream journalists — are unimpressed. They say Twitter’s policies here were already known and that the specific decisions in question — blocking a story they feared stemmed from a foreign hack, banning the account of President Trump after he incited an insurrection, and deboosting accounts spreading public health misinformation — generally seem at least defensible.
The discourse has quickly become one of us versus them — perfect for Twitter. The journalists to whom Musk gave the documents — most prominently, Substackers Matt Taibbi and Bari Weiss — are outspoken, unsparing critics of what they believe is the “woke” liberal groupthink that pervades mainstream American media institutions, making them now effectively allies of the right in the culture war. Musk’s behavior since buying Twitter has made him a villain to the left, too.
So liberals have been inclined to view anything they say with deep skepticism, an instinct seemingly vindicated soon after Taibbi posted his first report. He spotlighted an email showing that in October 2020 the Biden campaign had sent along requests to delete certain tweets, and noted that an executive responded: “Handled.” Musk reacted to this revelation with outrage: “If this isn’t a violation of the Constitution’s First Amendment, what is?” But internet archive sleuths soon established that the deleted tweets were pornographic or nude images of Hunter Biden that violated Twitter’s ban on nonconsensually posted sexual material, something Taibbi seemingly had not known. “No, you do not have a Constitutional right to post Hunter Biden’s dick pic on Twitter,” the Bulwark’s Tim Miller wrote. Additionally, some fear the documents are being selectively curated to tell a preferred story that could lack context.
Still, it is worth evaluating the documents on their own merits to the extent we can, without a too-hasty dismissal of all Taibbi and Weiss’s arguments or a defense of Twitter’s old management regime. That regime is gone now, but while it was in place, Twitter was a powerful institution with a major impact on politics, and its decisions deserve scrutiny — just as decisions made by Twitter’s new regime, or monarch, deserve scrutiny. Some of the previous management’s decisions, it seems to me, were wrong, and indeed arguably driven by liberal groupthink. Others I’m less certain about, but they’re at least worth discussing. So here are the main decisions being second-guessed.
Was Twitter right to block the New York Post story about Hunter Biden’s laptop?
The first part of the Twitter Files, from Taibbi, focuses on Twitter’s October 2020 decision to outright ban links to the first New York Post story about Hunter Biden’s laptop. The ban lasted a little over one day before Twitter lifted it, but the recriminations have continued ever since.
Twitter’s justification was that the story violated its policy against posting “hacked materials.” However, the Post said the materials came from a laptop abandoned at a computer repair store, not a hack. There was widespread skepticism of this claim at the time, but there was no evidence for the hack supposition, and none has since emerged. So what was Twitter thinking?
One clue is in a message by Trust and Safety chief Yoel Roth, who alludes to “the SEVERE risks here and lessons of 2016.” In 2016, there was an effort by the Russian government to interfere with the general election in a way that would hurt Hillary Clinton and Democrats’ prospects. As later documented in the Mueller report, this effort involved both a “troll farm” of Russian accounts masquerading as Americans to spread false or inflammatory information, and the “hack-and-leak” campaign in which leading Democrats’ emails were stolen and provided to WikiLeaks.
After Trump won, many leading figures in politics, tech, media, and law enforcement concluded that major social media platforms like Twitter and Facebook should have done more to stop this Russian interference effort and the spread of “misinformation” more generally (with some arguing that this was a problem regardless of electoral impact, and others claiming that this helped or even caused Trump’s victory). Law enforcement officials argued the Russian campaign was illegal and indicted about two dozen Russians believed to be involved in it. Social media companies began to take a more aggressive approach to curbing what they saw as misinformation, and as the 2020 election approached, they met regularly with FBI and other government officials to discuss the dangers of potential new foreign interference campaigns.
But several issues are being conflated here. Misinformation is (in theory) false information. Foreign propaganda is not necessarily false, but it is being spread by a foreign government with malicious intent (for example, to inflame America’s divisions). Hacked material, though, is trickier in part because it often isn’t misinformation — its power comes from its accuracy. Now, it is theoretically possible that false information could be mixed in with true information as part of a hacked document dump, so it’s important to authenticate it to the extent possible. And even authentic information can often be ripped out of context to appear more damning than it really is. Still, Twitter was putting itself in the awkward position where it would be resolving to suppress information that could well be accurate, for the greater good of preventing foreign interference in an election.
More broadly, a blanket ban on hacked material doesn’t seem particularly well thought through, since a fair amount of journalism is based on material that is illicitly obtained in some way (such as the Pentagon Papers). Every major media source wrote about the DNC and Podesta email leaks, as well as the leaked State Department cables, while entertainment journalists wrote about the Sony hack. Should all those stories be banned like the Post’s was? A standard that Twitter won’t host any sexual images of someone posted without their consent, or any personal information like someone’s address, is a neutral one. Beyond that, determining what stolen or hacked information is newsworthy is inherently subjective. Should that judgment be left to social media companies?
Then there’s the problem that Twitter jumped to the conclusion that this was a hack in the first place. I can see why: recent high-profile dumps of personal information like this had generally been hacks, so if you had been anticipating a chance to “do over” 2016’s hack scandal, this looked like it. But it was still a leap. And the apparent belief of some employees that proactively censoring the story until more was known about its provenance amounted to “caution” seems dubious — banning all links to a media outlet’s story from the platform was a sweeping measure, not a cautious one.
So to me this seems a pretty clear case of overreach by Twitter. This wasn’t a “rigging” of the election (again, the ban was only in place for a little over a day). But the decision — born out of a blinkered focus on avoiding a repeat of 2016, rather than taking speech or press freedom or the different details of this situation into account — was the wrong call, in my view.
Was Twitter right to ban Trump?
Parts 3, 4, and 5 of the Twitter Files all focus on the company’s decision to ban President Trump’s account in the wake of the January 6, 2021, attack. They show that as pressure to act against Trump rose from both outside voices and the company’s own employees, Twitter leaders applied various standards in determining that Trump’s account shouldn’t yet be banned, before abruptly switching course on January 8, when they decided that two of his tweets that day violated their “glorification of violence” policy and that his account presented a “risk of further incitement of violence.”
Weiss points out that, earlier in the day, Twitter staffers evaluated those new Trump tweets — one saying he wouldn’t attend the inauguration, another that “75,000,000 great American Patriots who voted for me” will “not be disrespected or treated unfairly in any way, shape or form!!!” — and concluded they did not violate policies against incitement of violence. Only later did top executives raise other possible readings and begin discussing whether the tweets amounted to coded “glorification of violence.” Weiss’s implication is that, under immense internal and external pressure, Twitter’s executives searched for a pretext to ban Trump, and found one. (The day before, Facebook had done something similar.)
Weiss also points out that this was the only time a sitting head of state was banned from the platform, and that Twitter previously allowed wide latitude to world leaders’ accounts, even those who posted hateful rhetoric or direct calls to violence (though it’s not a surprise that social media companies would have different standards in different countries with very different political situations, and that they might treat the company’s home country somewhat differently).
Even if you accept that Trump was treated differently, the question is whether that different treatment was justified, given what Trump had done: launched a months-long campaign of constant falsehoods aimed at pressuring Republicans to steal the election from Joe Biden, a campaign that eventually spiraled into real-world violence when a mob stormed the US Capitol. In the view of many, American democracy was at stake here — it was not yet clear whether Trump really would step aside, and many feared further violence — so social media companies had a responsibility to act rather than enable its destruction. (Roth said multiple Twitter employees had quoted Hannah Arendt’s phrase “the banality of evil” to him, suggesting that the company’s blind adherence to process meant enabling something horrifying.)
What this really boils down to is a larger clash of worldviews related to Trump, and to which institutions should or should not be trusted.
One worldview — accepted to varying degrees by liberals, anti-Trump conservatives, and significant portions of the tech and media industries — was that Trump’s presidency was an unprecedented threat to US democracy, that he was enabling a rise of hate toward minority groups that put lives at risk, that his constant lies amounted to an assault on the truth, and that a society-wide effort to resist him was necessary. “Business as usual” in media or tech companies is no longer tenable if you believe your country is sliding into authoritarianism, this argument goes. Journalists and tech workers shouldn’t be neutral toward the prospect of American democracy ending, they should instead take a values-based stand in defense of it — and in defense of truth itself.
The violence of January 6 heightened concerns of further violent turmoil and pushed more people into this camp. “I’ve been part of the ‘he’s the president, we can’t deactivate him’ crowd for 4 years now but even I have to say, I feel complicit allowing this to happen and I would like to see him deactivated immediately,” one Twitter employee wrote in the company’s Slack, according to NBC News.
In contrast, the journalists reporting on the Twitter Files, as well as Musk himself, have a starkly different interpretation of politics. They aren’t Trumpists (Taibbi is historically of the left, Weiss said she voted for Biden, Musk said he supports Ron DeSantis) but they’ve become united by a loathing for what they see as the liberal groupthink that has become hegemonic in much of the media and Silicon Valley, which they argue chills dissent and free speech, and often advances the interests of the Democratic Party. This includes “wokeness” and cancel culture, but goes beyond those topics. For instance, they believe Trump got a raw deal in the Russia investigation — arguing many in the media, the Democratic Party, and the government either believed or willfully perpetrated what amounted to a false conspiracy theory that Trump was in cahoots with Vladimir Putin. Whatever they might believe about Trump’s flaws, their commentary shows that for some time they have been far more animated by what they see as the excesses of Trump’s opponents in the media, tech companies, and the government.
If you’re inclined to think Trump a singular threat that must be resisted — and you can point to the January 6 attacks as proof of your theory — then a major social media company banning him is more justifiable. But if you think the liberals at the social media company are themselves a major threat to speech, then the power they wielded in banning Trump may disquiet you.
Yet it should be noted that controversial Twitter bans made at a top executive’s whim have not disappeared under the Musk regime. Musk has already decided to suspend Kanye West’s account, keep a preexisting ban on Infowars host Alex Jones in place, and ban an account tracking flight information for Musk’s private jet (even though he said last month his “commitment to free speech” was so strong he would allow that account to keep posting).
Did Twitter — or the Biden administration — overreach in efforts to limit Covid-19 misinformation?
The Twitter Files have not yet featured a full installment about Covid-19, but Musk has promised, “It is coming bigtime.” In part two of the series, though, Weiss showed that Stanford School of Medicine professor Jay Bhattacharya had been placed on a Twitter “Trends Blacklist” — preventing his tweets from showing up in trending topics searches.
After this, Bhattacharya tweeted that, during a visit to Twitter headquarters at Musk’s invitation this week, employees told him he was placed on that blacklist on the first day he joined Twitter, in August 2021. He believes it must have been because of this tweet:
Mortality from #COVID19 differs more than a thousand-fold between the old and young. Focused protection is the compassionate approach that balances COVID risks and collateral damage to public health. https://t.co/63I0hcZK1J
— Jay Bhattacharya (@DrJBhattacharya) August 23, 2021
The link was to the Great Barrington Declaration, a controversial October 2020 open letter by Bhattacharya and two other professors arguing that only the people most vulnerable to the virus should continue to lock down and distance, while everyone else should “resume life as normal,” with the expectation that they would get the virus and, hopefully, give the population “herd immunity.” Shortly afterward, 80 other public health experts responded with their own letter calling that herd immunity strategy “a dangerous fallacy unsupported by scientific evidence.”
When the Covid-19 pandemic broke out, Twitter again grappled with the topic of “misinformation.” As with Trump (and with hate speech), Twitter executives likely believed lives could well hinge on their decisions. So by May 2020, the company announced it would remove or label tweets that “directly pose a risk to someone’s health or well-being,” such as tweets encouraging people to disregard social distancing guidelines.
But the company essentially defined “misinformation” as whatever went against the public health establishment’s current conventional wisdom. And as the pandemic wore on, Covid became another issue where conservatives and some journalists came to deeply distrust that establishment, viewing it as making mistakes and giving politically slanted guidance.
The situation took another turn when President Biden took office. By the summer of 2021, his administration was trying to encourage widespread vaccine adoption in the hope that the pandemic could be ended entirely. (The omicron variant, which evaded vaccines well enough to end that hope, was not yet circulating.) Toward that end, administration officials publicly demanded that social media companies do more to fight misinformation, and privately pressured the companies to delete certain specific accounts.
One of those accounts belonged to commentator Alex Berenson, who “has mischaracterized just about every detail regarding the vaccines to make the dubious case that most people would be better off avoiding them,” according to the Atlantic’s Derek Thompson. After Berenson was eventually banned, he sued and obtained records showing the White House had specifically asked Twitter why he hadn’t been kicked off the platform yet. Another lawsuit against the administration, from Republican state attorneys general and other people who believed their speech was suppressed (including Bhattacharya), is also pending.
All of which is to say that there is a thorny question here about whether the government should be trying to get individuals who have violated no laws banned from social media. And from the standpoint of 2022, when the US has adopted a return-to-normal policy without achieving universal vaccination or suppressing the virus, and when there’s increased attention on whether school closures harmed children, some reflection may be called for about what constitutes misinformation and what constitutes legitimate differences of opinion about policy in a free society.