Aotearoa needs solutions to harmful online content
New research released today highlights growing concern about harmful content online.
Today the Classification Office released its report, ‘What we’re watching – New Zealanders’ views about what we see on screen and online.’
The report shows that 83% of New Zealanders are concerned about harmful or inappropriate content on social media, video-sharing sites or other websites. And over half of us (53%) had seen online content that promotes or encourages harmful attitudes or behaviours, such as discrimination, terrorism or suicide.
Interim Chief Executive of InternetNZ, Andrew Cushen, says the report findings add to the growing evidence that this is a critical issue here in Aotearoa.
“Current systems for dealing with harmful, hateful and potentially illegal content online are not working for communities. There are real concerns, real risks, and real hurt occurring right now,” says Cushen.
The report findings show that most New Zealanders are concerned our systems are not working well enough, and that many lack confidence in reporting harmful content. Most New Zealanders (74%) would consider reporting online content that was harmful, dangerous or illegal to an official agency in New Zealand. However, the results showed a high level of uncertainty about how to go about reporting such content, or what the response would be.
“Currently, reporting of harms is spread across different organisations and regulations. That makes it very hard for people to know where to go, and also means there is no shared data gathering about the types of harms occurring.
“Harmful content online is an issue that governments and communities are struggling with globally. But while there aren't any easy or obvious answers, there are approaches we can take in New Zealand to ensure we find solutions that work for the people and communities most affected.
“The Government is currently doing a wide-ranging review of the content regulatory system. This is our chance to update laws and regulations that were put in place for a different world, but also to explore non-regulatory approaches.
“In order to get these systems right, we need to listen to the people most affected by harmful behaviours online and also to the people and groups in our communities who are already working on these issues. Coordination and resourcing for this type of dialogue could start now, and should be part of how the system continues to evolve over time,” says Cushen.
While online harm is widespread, we know that some groups and communities face harm more than others. For example, the report says it is more common for Māori and Pacific participants to see content promoting hatred or discrimination based on race, culture and religion. It is also more common for younger people, aged 16 to 29, to see content promoting violent extremism or terrorism.
The report also shows that people lack confidence in tech companies to keep them safe. Just 33% ‘somewhat’ or ‘strongly’ agree that online platforms provide what people need to keep them safe.
At the same time, Meta’s suite of products is an overwhelming feature of New Zealand’s social media landscape. InternetNZ research shows that 79% of New Zealanders who are online use Facebook, Facebook Messenger, Instagram, or WhatsApp daily.
“Government — together with communities, platforms and experts — needs to find effective ways to ensure that these services are part of the solution rather than part of the problem.
“We think the content regulatory review is the best vehicle for working on these issues. It could propose some effective ways to manage these harms, especially if community voices are supported to participate in designing approaches that meet the needs of people in Aotearoa,” says Cushen.
You can see the 'What we're watching' report on the Classifications Office website here: https://www.classificationoffice.govt.nz/resources/research/what-were-watching/