While digital technologies once promised a new era of emancipatory politics and socio-economic inclusion, things have not turned out quite as planned. Governments and a few powerful tech firms, operating on the false premise that data are a resource just like oil and gold, have instead built an unprecedented new regime of social control.
NEW YORK – The revolution in information technology since the 1980s has transformed modern life, reducing the costs of collecting, storing, and sharing data, and creating an entirely new medium of communication and exchange: the Internet. By setting the stage for new forms of social, political, and economic engagement, the IT revolution, we were told, would strengthen individual autonomy and social inclusion.
Yet 40 years later, the main winners have been not the bulk of the population, but a narrow cohort of powerful entities. States and a handful of tech companies have amassed vast amounts of data and turned it into an instrument of surveillance and control – all for political and economic gain.
IT confirms the old adage that revolutions always eat their children. But the relative fragility of data controls means that it is still possible to correct past mistakes and harness the digital era’s positive potential. To do so, we must correct misconceptions about the nature of data: they are not assets but means of total control, especially when in the hands of concentrated power.
Data are often likened to gold and oil, as if information were just another asset to be owned privately and exploited for economic gain. But data are different from ordinary assets. Raw data are abundant and non-rivalrous. Here is an asset that does not follow the rules of scarcity, and thus has no intrinsic economic value. As such, there is no reason to amass vast amounts of data other than as a means of control over the same individuals who produced them.
Unfortunately, individual data “producers” cannot easily exclude others from “their” data, at least not yet. Until we have effective digital keys that prevent platform, smartphone, and Internet of Things (IoT) companies from continuously tracking every move a person makes, it is impossible to prevent others from capturing our data. We are told to trust a company like Apple because it shares our values and will protect our privacy. But, at the end of the day, there is no way for consumers to prevent Apple from breaking its promises, or even to know whether it has.
The European Union’s 2018 General Data Protection Regulation has been hailed as a model for protecting privacy, and other jurisdictions, including California and Brazil, are now emulating it. The GDPR’s primary objective is to require consent for the collection of “personal” data, meaning any information that can be used to identify an individual. Yet the distinction between personal data that are subject to privacy regulation and industrial data (from the IoT) that are up for grabs is not always clear.
In fact, what may appear to be non-personal information can often be combined with other data to identify the individual who left a digital footprint. A recent New York Times investigation found that cell phones emit “non-personal” location data by the second. These signals can be used to track an individual from her home to her workplace and back. With that information, private or government entities can easily link that person to an address and discover her identity. And with the spread of facial-recognition software, there will no longer even be a pretense of separating behavior from the behaving individual.
China’s emerging “social credit” system illustrates the potential for data to be used as an instrument of total control. In a world where every action leaves a digital footprint that can be used to determine a person’s access to credit, transportation, social services, or health care, the very notion of individual freedom and autonomy vanishes, taking with it the foundation for democratic governance.
The Chinese government’s pursuit of total control is not unique. Even in the Netherlands, a court had to strike down a government “System Risk Indication” program on the grounds that it violated fundamental human rights (a ruling that is, of course, unthinkable in China and many other countries). After hoovering up behavioral data about the poor and other recipients of social benefits, the government had deployed an algorithm to identify individuals most likely to commit benefits fraud in the future.
Recognizing that data are not an asset but an instrument of control implies that governing them, creating data property rights for consumers, or using antitrust law to restore competition cannot address the real issue: how to prevent rule by data. Breaking up existing firms may be a necessary interim measure, but it cannot be the end goal. More broadly, there simply is no good reason for anyone to collect and store information about others beyond what is essential for maintaining safety and security, or for ensuring the proper functioning of platforms that are designed with their users’ interests in mind.
The mere possibility of gathering and storing data is not a justification for doing so. For states, restraint should be the default position, in accordance with constitutional and international protections of fundamental human rights. Much of Big Tech treats data like Roman law treated wild animals: they are not owned by anybody (res nullius), and can therefore be claimed as personal property by whoever captures them. But Roman law also included a competing concept. As Richard Epstein of New York University reminds us, there is such a thing as common property (res communis) – things that shall not be appropriated by anybody.
One might argue that it is too late to undo what has already been done. We already live in the age of big data, Big Tech, and surveillance states. But rule by data depends on a continuous flow of new data. While the past cannot be undone, the future is up for grabs. Protecting individual autonomy requires banning data harvesting and developing technologies that give data producers full control over their data; where collection is genuinely necessary, it should occur within digital infrastructures that are governed in users’ interests. The goal of the digital revolution should be to protect freedom, not strengthen surveillance.