Insurrection and the Internet
James Ting-Edwards, Senior Policy Advisor
On 6 January 2021, a crowd of Trump supporters broke into the US Capitol building. This blog looks at the online response, including the decisions by Facebook and Twitter to remove Trump from their services. What questions do these decisions raise? And what might all this mean for the future of the Internet in New Zealand?
Trump inspired the most shared break-in in history
On 6 January, Twitter and Facebook removed their most famous user from their services. Earlier that day, President Donald Trump had spoken to thousands of people gathered to support his claims of election fraud. Members of this group then marched on the US Capitol building and eventually broke in, disrupting the congressional vote to confirm the election of Joe Biden as US President. Messages and videos shared online showed violence surrounding the break-in, which led to the deaths of six people. With the number of guns involved, and the pipe bombs found at the Democratic and Republican committee headquarters, it could have been much, much worse.
The collage of messages shared online showed different perspectives on the day. Elected representatives and congressional staff hid behind barricaded office doors. Images and videos showed what happened, but not why. Triumphant trespassers posed for selfies with statues. Crowds of people around a gallows chanted “hang Mike Pence.” Did footage of police letting people through a barrier show collusion with rioters, or a tactical move to contain them? Even events everyone agreed had happened led to debate, rumours, and conspiracy theories online.
For months, President Trump had been alleging election fraud and disputing the outcome of the November election he had lost to Joe Biden. His allegations were supported by members of his party, including some who had won office on the same ballot papers counted in the same way. Trump brought dozens of lawsuits challenging election processes, but as judges dismissed each one for lack of evidence, his supporters spread the rallying cry of #StopTheSteal online, in the media, and through huge gatherings in the midst of a growing COVID-19 pandemic.
The online response
It would be tempting to say that it took a week to impeach Trump again, but only a day to remove him from the major online services. But that’s a bit misleading. Online services had been responding to Trump’s election fraud claims for months. In the lead-up to the election, Twitter and other platforms had started labelling Trump posts that questioned the integrity of postal voting or alleged massive voting fraud as “contested claims”, similar to the way they had labelled unverified information about the COVID-19 pandemic. The response to labelling Trump was mixed, with some saying it didn’t go far enough, while his supporters said it showed an unfair bias against him by Silicon Valley social media.
The situation on 6 January changed the way online services treated Trump. Within a day of the break-in, Facebook, Instagram and Twitter had decided to stop him from posting. By the time he was impeached, those bans had been extended and he had also been removed from Twitch, Snapchat, Shopify and YouTube. The wave of removals then expanded to online communities from TikTok to Discord, Reddit and Pinterest. As the removals went further, some people tried to move to other online services that would be more private or more sympathetic to their perspectives on election fraud and calls for action. The messaging service Telegram reported that it was blocking dozens of channels where people were inciting violence. Amazon’s AWS then announced it would stop providing the services that supported the social network Parler, pointing to similar calls for violence hosted there.
What does this mean for the Internet?
I think that the online issues raised here are just as important as the political and law enforcement ones, and deserve just as much reflection. Trump was the most powerful man in the world, and being removed from Twitter seemed to impact his power more than being impeached again.
I certainly don’t have all the answers, but I think there are some useful things to bear in mind as we work through the issues. Below are some of the thoughts that come to mind for me.
Online services have a lot of influence and they act for lots of different reasons
In 2021, the decisions of online services can have a huge impact on what people say and who they get to say it to. The impact of removing Trump shows that much. There’s probably a range of different reasons behind the wave of online removals. In some cases, there was a risk of more violence. In others, it may have been employee pressure. Trump’s waning political influence, and a desire to follow the wider trend, may also have been factors.
Whatever the mix of reasons, it would be helpful to have more transparency on how these decisions are made, given the big impacts they can have. Having a chance to review or challenge such decisions might also be useful.
Preventing violence is not really a difficult free expression issue
There’s been a lot of discussion about the steps online services took to remove Trump, and whether this raises free expression issues. In legal terms, it seems pretty clear that these services can decide to stop serving anyone for any reason under their contractual terms, and there’s no formal free speech interest under the US First Amendment because the decisions here are not government actions. But knowing what US law says doesn’t tell us what the call should be or how it should be made. In New Zealand, the law that upholds free expression is the Bill of Rights Act 1990, and one interesting feature of that law is its recognition that free expression and other important rights cannot be absolute on their own. Instead, policy choices often require balancing the interests protected by free expression rights against other interests and practical concerns, like protecting privacy, upholding the rule of law, or preventing violence that will hurt people.
My personal view is that the immediate removal decisions were justifiable in the moment. Facebook and Twitter saw their services being used to encourage, plan, and record violent acts that disrupted a democratic process and killed six people. I can’t see any reason these companies would have to allow their services to be used to encourage and plan acts of violence. In that situation, removing people’s posting privileges seems fair enough to me. But the removals beyond that involve some harder judgements that I’m less sure of.
There are a lot of questions that will require more thinking
There are lots of questions that come out of this, and few of them have good easy answers. Did the big online services exercise too much power? Or did they let things get out of control by holding off on meaningful action for too long? Either way, perhaps it’s a big problem that these online services have so much power. But would it be even worse if no one could make the call to remove harmful misinformation?
What should we do from here?
There are big questions about the different roles in the stack
Conversations about removing bad stuff and promoting good stuff online tend to focus on the most familiar online services like Facebook and Twitter. But the response to Parler, including moves by Apple and Google to remove its app from their stores and by Amazon to withdraw its underlying cloud services, shows that getting information across the Internet involves lots of players, and raises questions about what their different roles and responsibilities should be.
In general, I think it’s more sensible to focus content-level policy actions on content-level Internet services, because that’s where people will have more of the information needed to make good calls. By contrast, ISPs have much less information, and so filtering at that level is a very blunt instrument.
Get better information before thinking about regulation
This was a big news event, and those sometimes get a quick policy response. But a quick policy response is likely to be an unhelpful one, particularly in places like New Zealand, which have little direct influence on the issues at hand.
We think that the most useful response for now would be to gather good information on the issues and players here, perhaps as an early part of the New Zealand government’s planned media law review.
We will keep seeing misinformation online and offline
The break-in at the Capitol was a dramatic event that resulted from a flow of messages over months and years. We will see more concerning misinformation globally and in New Zealand, and can expect this to be a factor in how people respond to COVID-19 vaccines and other important public health measures.
InternetNZ has supported work by Tohatoha to monitor misinformation and educate people about how information spreads online. We hope to see this work develop with time and funding as people recognise just how important the issue is in New Zealand.
We’re keen to continue the conversation and hear from you as we work towards an Internet for good
InternetNZ supports an ‘Internet for good.’ We think that the events at the US Capitol, and the online response, raise a lot of questions and challenges for that vision. We’ll be doing more thinking about the issues they raise, and the ways that we and New Zealanders can work together to build the Internet we need. If you’re keen to hear more and share your ideas, you can keep an eye on our Twitter or Facebook feeds.
We also have an Internet community space on Slack. This space will be used to connect the wider Internet community and discuss Internet-related issues.