How do we stop misinformation being a superspreader?
Opinion piece by Andrew Cushen – InternetNZ's Interim Chief Executive
This opinion piece was published in Stuff on 15 March 2022.
There is a dark side to having New Zealanders connected to social media platforms with the ability to hear from and speak to thousands. That dark side is the creation and spread of both mis- and disinformation online. Whether it's misinformation, where the person sharing it genuinely believes what they are saying, or disinformation, where the person sharing the message is deliberately aiming to mislead, it's reaching people in Aotearoa and the effects are massive.
Online misinformation is changing people's behaviour, and that matters for all of us. During the continuing COVID-19 health crisis, messages spreading among health influencers and mums' groups have contributed to vaccine reluctance amongst thousands of New Zealanders. And when people are scared to get vaccinated, we see worse illness and more pressure on the health system. Misinformation can lead people's fears and frustrations in dark directions. In January last year, we saw a violent crowd trying to stop the presidential election process at the US Capitol. And last month in New Zealand, we saw members of our community occupying the Parliament lawn, an event that included violent rhetoric from the beginning and ended with fires and violence against Police.
New Zealanders are ever more worried about these trends. Research we've done at InternetNZ shows that 66% of New Zealanders are very concerned about misleading or wrong information being shared online, a figure that has increased markedly since last year. We've seen an even larger increase in the number of us who worry about conspiracy theories spreading online too.
It’s no surprise that concern is growing when we see what misinformation can encourage people to do. In one sense, it's a sign of the times. There’s perhaps a natural reaction at play in that we have unusual, unprecedented things happening — like COVID-19 — that defy normal explanations. So people start looking for things that make sense of the confusing and challenging times we live in. And some, when they can’t find answers that address their concerns, find something to hang on to in the wrong places.
At the heart of all of this is social media, the place where so much of this content is created and shared. The tools of social media that let us connect with friends and family also enable people to find others who share their beliefs, no matter how strange. Algorithms that focus on clicks and views can drive people further down radical rabbit holes and lead to the most questionable content spreading faster and further than more sober and balanced points of view. One big question is whether the companies that make those tools are doing enough to understand and address these problems cropping up in our society and around the world.
Our research shows that the group of products run by Meta, the new name for Facebook, is an overwhelming feature of New Zealand's social media landscape. Using Facebook, Facebook Messenger, Instagram, or WhatsApp is something 79% of New Zealanders do daily. These are the four most-used social apps in New Zealand, and they are all controlled by a single company. This monopoly on so much of our attention gives Meta a hugely influential role in what people in our community think about the world and their role in it. Because so many people get their news and share their views this way, Meta and other online services are at the heart of the misinformation problem.
We can give credit where credit is due. The big online services have recognised these problems to some extent. In response to pressure for change, and to initiatives like the Christchurch Call, they have taken some steps to curb harmful misinformation. We have seen online services work to label concerning content, and to remove accounts that repeatedly spread harmful misinformation, such as claims of election fraud by former President Trump. Most of the large services do invest in people to review harmful content, though this places a huge burden on those workers. Finally, they all generally offer ways for people to report concerning material. But is this enough? In particular, is it the right approach to meet the needs of our people and our communities?
We know that reporting mechanisms are not working well for the people who need them most, and that responding effectively to misinformation can require a lot of local context. For example, right now some very strange claims are being made about the Prime Minister's partner. Some of these posts look like sincere conspiracy theories; others look like jokes based on how absurd those theories are. Even if they are doing their best, can we trust overseas-based companies, without local context, to get it right on these things?
We think it's time for New Zealand to take a hard look at ways to regulate the services that have so much power over how our communities communicate. And we think the right place to start is by listening to those in our community who are already working on these issues, to make sure we get answers that work for New Zealand.