The Cambridge Analytica scandal has saturated the news in recent weeks. Just in case you’ve been living as a recluse, Cambridge Analytica is a British consulting and data mining firm that recently made headlines for using the data of 50 million Facebook users to influence elections. Most notably, Cambridge Analytica was hired by the Trump campaign during the 2016 US presidential election and worked for the Leave side during the 2016 UK Brexit referendum. They’re basically the bad guys making sure the worst possible outcome happens in every election; they might also be behind the break-up of the Spice Girls (rumoured but unconfirmed).
So how did Cambridge Analytica become the Cruella De Vil of data?
Well, strictly speaking, they were not breaking any rules; they saw an opportunity to exploit Facebook’s privacy policies for their own benefit and went with it. Under those policies, Cambridge Analytica was able to procure user data from a third-party app. It is reasonable to assume that people who logged into the app through Facebook understood their data might be used by that app for other purposes. After all, even grocery stores now track what customers purchase and suggest specific items that might be desirable or on sale; it has become rather commonplace for our data to be used for purposes other than those we initially intended.
In most cases, we, as consumers, find these practices quite useful, especially when advertisements are specifically targeted at us. So how is Cambridge Analytica’s use of data for targeted advertising any different? A major issue arises when we consider how Cambridge Analytica got hold of 50 million user profiles: while the app they used to procure the data had only 200,000 users, Facebook’s policies allowed the mining of data not only from those 200,000 users but also from their Facebook friends. While those 200,000 users may have fathomed that their data would be used in some other way by the app, the Facebook friends who had their data mined certainly did not consent to it. When a player like Cambridge Analytica is disrupting democratic systems through targeted and manipulative marketing techniques, one has to question whether their data is being used not only without consent, but also to bend their free will.
The solution seems simple: get rid of Facebook, delete the data, and that way no nefarious Cambridge characters will be able to manipulate you. In reality, detaching from a social media platform like Facebook is much harder. Most millennials grew up with Facebook; it is used not only as a networking tool but also as an archiving system of sorts. As such, erasing a digital history cultivated through Facebook is not an appealing thought for many. By hooking users from a young age, Facebook ensures that consumers continue to use its product, no matter how it seeks to use us. Although Facebook is a free platform, by opening user data to advertisers it has turned its users into commodities without their informed consent. While users can read through privacy policies, these are often not entirely accessible; one really has to delve into Facebook and its respective apps to understand how one’s data is being used. Moreover, people who have grown up using Facebook might not think to compare the privacy policy from the time they joined with the current version, which may have been modified thousands of times by now.
Facebook may seem like an altruistic player in the scandal, having exposed Cambridge Analytica after discovering the firm had failed to delete the user data. But can we really trust a company that has stretched the limits of consent and thrived off the commodification of its consumers?