Platform monitoring and moderation
Identifying, minimising, and removing objectionable content are key concerns addressed in the Safer Online Services and Media Platforms (SOSMP) discussion document. Common themes relating to these tasks include blocking or filtering of objectionable (i.e. illegal) content, algorithms and their role in amplifying harmful content, and transparency of platform processes, particularly via regular reporting.
Discussion of algorithms in the SOSMP
The SOSMP document acknowledges that there are frequent concerns about “the transparency and performance of the algorithms and other systems that social media companies use to direct content to users” (p. 80).
As a possible solution, the SOSMP suggests “algorithmic controls to prevent amplifying very harmful content” (p. 30) and notes that industry codes “could include rules for responsible and transparent design of ranking algorithms like Facebook’s Newsfeed” (p. 25), which the document suggests could help limit the amount of harmful content pushed to users.
Transparency reporting
The SOSMP argues that it would be essential that the proposed Independent Regulator “have more oversight to engage on issues such as algorithmic controls to prevent amplifying very harmful content, transparency reporting and other accountability measures” (p. 30). These accountability measures would include periodic transparency reporting that details how the impact of harm from content or conduct is being reduced, for example by “actively managing algorithms and other content engagement tools” (p. 36).
Filtering, blocking, and takedown of content in the SOSMP
The SOSMP also makes some limited references to other moderation options for objectionable content, i.e. content that would be illegal to distribute or possess in New Zealand. For example, the discussion document notes that the Independent Regulator would monitor how platforms comply with industry codes to identify and remove illegal material as quickly as possible, both through regular moderation practices and through participation in “industry safety measures including voluntary filters” (p. 54).
The document notes the existence of one such voluntary filter in use in New Zealand: the Digital Child Exploitation Filtering System (DCEFS), which blocks access to websites that host images of child sexual abuse. This voluntary system, which has been operated by DIA since 2010, is used by most ISPs in New Zealand, meaning “approximately 92% of New Zealand internet users are protected from child sexual abuse material” (p. 55). The discussion document also notes that, as a result of ongoing review, the number of URLs blocked by the DCEFS can vary between 250 and 700.
If the proposed framework is enacted, the new Independent Regulator would be given a mandate to “continue to work with industry and the public to explore the use of similar voluntary filtering systems” (p. 55, para. 104). However, the document also states that any new voluntary filters would “apply only to material that’s already illegal, or where the material can confidently be deemed illegal” (p. 55). Moreover, the SOSMP states that DIA is “not proposing any changes to the types of material that are currently considered illegal in New Zealand” (p. 52).
The discussion document does, however, ask for feedback on alternative approaches for dealing with harmful content, including compulsory, rather than voluntary, filters, which would “prevent access by customers of New Zealand ISPs to unambiguously objectionable content such as images of children being sexually abused, extreme violence, and bestiality” (p. 57). In addition, this more ‘prescriptive’ approach would allow the Independent Regulator to issue a ‘takedown notice’ for illegal content (p. 54) and to ask a judge to impose a ‘service disruption’ order for platforms that “repeatedly host illegal material and fail to action takedown notices” (p. 57).
Feedback questions concerning platform monitoring and moderation
Question 11: What do you think about the different approaches we could take, including the supportive and prescriptive alternatives? (p. 43)
Question 12: Do you think that the proposed model of enforcing codes of practice would work? (p. 48)
Question 13: Do you think the regulator would have sufficient powers to effectively oversee the framework? Why/why not? (p. 48)
Question 14: Do you agree that the regulator’s enforcement powers should be limited to civil liability actions? (p. 48)
Question 15: How do you think the system should respond to persistent non-compliance? (p. 48)
Question 18: Is the regulator the appropriate body to exercise takedown powers? (p. 56)
Question 19: Should takedown powers be extended to content that is illegal under other New Zealand laws? If so, how wide should this power be? (p. 56)
Question 20: If takedown powers are available for content that is illegal under other New Zealand laws, should an interim takedown be available in advance of a conviction, like an injunction? (p. 56)
Submissions
Decisions about these new regulations should be shaped by listening to the communities most affected by harmful content online. If you’d like to make a submission to the Department of Internal Affairs responding to the SOSMP discussion document, check out our page, ‘Make a submission’.