Tech news the world over has been abuzz and aghast at the Facebook and Cambridge Analytica revelations (scandal? brouhaha?). A number of experts have opined about deleting Facebook, how privacy is dead, or how it's all our own fault for not reading terms and conditions. As a team, we've been debating, discussing and ranting about this case. You could even say we've been internalising a really complicated situation in our heads.
At InternetNZ our mission is to "promote the Internet's benefit and protect its potential". This story touches on both. It's complicated, it's multi-faceted and, most importantly, if we're going to have better societies, a better world and a better Internet, then the "attention economy" that platforms like Facebook exemplify will need to change.
What IS this about again?
The situation that's made this a big story goes roughly like this. Facebook and other platforms let people connect and learn about each other. A company called Cambridge Analytica used this to get lots of people's information and use this for questionable political purposes (as a consultancy that brings data science to the dark arts of political management). For a more detailed explainer of what happened, we recommend these summaries:
- Radio New Zealand: "Explainer: Why you should care about Cambridge Analytica"
- Buzzfeed: "Here's How Facebook Got Into This Mess: A Timeline"
We are more interested in the larger Internet issues that come from this case, and the broader platforms and attention economy.
- The individual: what you can do, informed consent and personal responsibility?
- Platform business models: what happens when you are the product?
- Collective impact and collective action: what should we do next?
What you can do and the frame of personal responsibility
You can, and should, use this scandal as an opportunity to review your privacy settings on Facebook, check what apps have access to your Facebook data, and start to restrict access to your account. But don't stop there - review your privacy settings on all of your social media. Chances are the privacy policies have changed in some way.
But while it is important that we all take responsibility for our online presence, we do think that the framing of this as a personal control and personal data lesson is not quite right.
After taking a few days to get their ducks in a row, Facebook's CEO and majority shareholder Mark Zuckerberg made a Facebook post and did some TV interviews. His main message was that he was very sorry, the company had made some mistakes, many issues had already been solved, and that he would work very hard to fix the rest and keep people's trust. The formulation of Facebook's crisis control is covered very well in a piece on Slate by Nicholas Proferes, which covers all of Zuckerberg's apologies over the years and Facebook's narratives.
The one piece we wanted to pick up on is the overall framing of these privacy and data issues as individual matters. "If everyone just managed their data better, everything would be fine", is a theme (and it's a message that does the rounds heaps in other situations). We think the issues are much bigger than that.
Yes, there are significant personal risks around the release, reuse, breach or control of your information. And yes, Facebook and other platforms need to do better at giving us humans more control over our information. But the risks are not just around someone using your information. They lie in the society-spanning aggregation of data, and in the insights and power to influence the direction of society that can be drawn from the large, context- and content-rich datasets that online platforms have access to.
You can lock down your settings, and maybe even persuade your friends to do the same, but the way other people's data is gathered and used can still affect elections, and can still be used to "identify" LGBTQI people using facial recognition.
That still affects you. It's like vaccination, where a population-wide response is needed: a broad-scale solution to broad-scale issues.
The trouble(s) with platform business models
When the service is free YOU are the product. We've all heard that enough times for it to become almost meaningless.
Online platforms like Facebook and Twitter make most of their revenue through advertising. They draw people in with friends' updates and cat photos, and skim off their attention and analytics for advertising. While this holds true, data about their users will remain the most valuable asset they have.
We argue that this business model is inherently hostile to the needs of individuals and societies. While your information is a platform's most valuable asset, they will continue to value collecting and using data over privacy and informed consent. Here are a few elements of these platforms that we need to be aware of.
Social media platforms are by and large free services that offer value to users through network effects. The value of the social media platform increases when more people are using it - you want to be where your friends are. Network effects draw people to the same platforms. Facebook is monopoly-like in this way.
A platform's success is measured by engagement. Social media platforms are incentivised to prioritise engagement (or 'attention') - that is, getting you to use their platform and interact with it as often and for as long as possible - to amplify the network effects (and provide more opportunities for data collection). This engagement doesn't have to be positive for you to be valuable to the platform. The trolls in the YouTube comments, or those experiencing harassment on Twitter, are still engaging with the platform.
When a platform is free to users, its revenue is likely coming from advertisers, who use the network effects, engagement, and data held about you to better target their ads to audiences. Facebook is dependent on ad revenue - at 97% of income in 2017, it is more dependent on advertising than the rest of the Big Tech Five (Apple, Amazon, Google and Microsoft).
So when you've got an advertising platform with an audience of over 2 billion people, holding enough data about you to know how to influence you, you've got a recipe for disaster waiting to be exploited. The Cambridge Analytica story is the first time we've really had a glimpse of the potential of these platforms when they are misused at such a scale, but shutting down one company's access to Facebook data is not the end of the crisis.
Collective impact and collective action
Different groups can use online platforms' data and advertising channels to suppress voters, influence elections across the world, try to divide political movements, or target LGBTQI or activist communities for hate speech. That isn't a personal risk for all of us (though it is a very real threat to some of us). These are collective problems that affect our societies in profound ways.
They are likely to require collective action. We don't think that a lone CEO or founder can "fix" the platforms for you, and we don't think that if you just take control of your data you'll be fine. The reality is that alone they can't, you can't and you might not be.
We think that there is a case for some form of regulation needed for these large Internet platforms. When we look at what is happening across the platforms we see:
- influence campaigns that seek to move democratic contests
- fascist groups organising and harassing people
- open AI platforms being used to create deepfake porn videos victimising women
- fake news spreading, whether for mischief or profit.
The list goes on, and what we see, as policy analysts, is market failure.
These platforms are not monopolies, but there are elements of market failure for citizens and societies. Regulation is an appropriate and normal way for societies to control commercial interests that are damaging or degrading our wider social and political needs.
So what next?
#deletefacebook is not a solution to this problem. If only it were that simple.
The popular Internet platforms offer huge benefits to billions of people - that's why they are popular. They make it easier for people without advanced technical skills to connect across divides of distance, to find and share friends and ideas, to publish to niche or global audiences.
While we have some control over our privacy settings on these platforms, the responsibility to protect us, the users, lies with the platforms.
It is not good enough that a company worth almost half a trillion dollars has failed to consider the ramifications of the way it manages its users' data. We should expect more from big platforms, and there needs to be more transparency about how our data is collected, used, and shared.
Facebook's April 2017 report, "Information Operations and Facebook", is a great example of the level of analysis and thought that online platforms can bring to bear on the complex social and geopolitical issues that their products and services can expose, create, or be leveraged by. We just want to see the platforms putting in the effort that Weedon, Nuland and Stamos did for Facebook in that report before the crises and scandals happen.
It is not too late to change what we demand from our platform providers. We can coordinate to make those demands as users. We can also coordinate as citizens, demanding regulation where it is needed to support our privacy, protect our democratic interests, and enable our flourishing as people rather than advertising targets.
Would you like to know more (further reading):
- Anil Dash recently wrote a piece on how the nature of the influences on technology affect outcomes: https://medium.com/humane-tech/12-things-everyone-should-understand-about-tech-d158f5a26411
- Zeynep Tufekci's rather excellent, but scary, TED talk 'We're building a dystopia just to make people click on ads': https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads
- How Democracy Can Survive Big Data, an op-ed from the New York Times: https://www.nytimes.com/2018/03/22/opinion/democracy-survive-data.html