Let's start with a quote from Alan Westin, widely regarded as the father of the modern definition of privacy. In his words:
"Privacy is the claim of individuals, groups or institutions to determine for themselves when, how, and to what extent information about them is communicated to others."
Since Westin's first attempt to define individual privacy, the world we inhabit as individuals has changed beyond recognition. The volume of digital traces produced every day across the world has introduced an unprecedented degree of complexity into how we govern the privacy of our online identities. Each technological iteration has expanded the capacity to collect more, and more varied, forms of personal data. The camera is often cited as a technology that irreversibly changed our ability to consent to intrusions on personal privacy; conversely, blockchain introduces new possibilities for protecting it. As technologies grow more sophisticated, we must go beyond asking whether today's digital governance strategies fit today's data infrastructure. Any approach must instead be adaptable and agile enough for our relationships with the technologies of tomorrow. Currently, 'black-box' systems such as neural nets, support vector machines and matrix factorisation offer little interpretability of which data inputs are used and how outputs are reached.
We may be content to trust an algorithm whose decisions rest on logic obscured by 'hidden layers' when it recommends movies or commercial products, but a lack of explainability has far more serious consequences for algorithmic decision-making in healthcare, in judicial systems, and even in social credit scores here in China. Such applications amass and apply data in complicated ways, integrating personal information on an unprecedented scale and tightening the link between our online and offline selves.
Any discussion of digital governance must consider how privacy can be protected in a world where data is gathered and shared with the ever-increasing speed and ingenuity of artificially intelligent systems. At present, for most of us, lapses in privacy protection cause only manageable annoyances in daily life. Last year, after I searched for lightbulbs on Google, I could not stop thoroughly unwanted lightbulb adverts from littering my social media; likewise, after buying a single product on Taobao or Amazon, we are shown similar products again and again despite only ever wanting one. Yet data misuse and exploitation can have serious consequences, to the detriment of individuals and of society at large. The manipulation of public opinion, the dissemination of fake news and the potential for discrimination through mass surveillance are not mere annoyances. Treating our data as property has an obvious appeal, especially when such scandals damage the trust individuals place in corporations and governments.
Nevertheless, data does not lend itself to ownership, both because of its low monetary value to the individual and its high social value to others. Assigning property rights to data is a fruitless approach, benefiting neither the individual, the corporation nor the government. Data should not be commodified through a 'right to own'; rather, its uses should be explained through a 'right to know' and a 'right to understand'. This is the spirit of the GDPR, under which individuals, as decentralised agents, are empowered to know, correct, and delete personal information about themselves, and companies are legally required to provide such explanations.
In answering the complex question of who should own data, we should instead subscribe to the view that knowledge is power. By applying this logic to data rights, we can best ensure a mutually beneficial digital civilisation for us all.