"I Have Nothing to Hide"

Big data

Evgeny Morozov

About the author

Born 1984; Internet researcher, columnist, and author; currently Bosch Fellow in Public Policy at the American Academy in Berlin.

Much of the debate about the future of data protection and big data revolves around a widespread but rarely questioned assumption: that data protection is merely a protective shield against interference by the state, the media, or large companies. Hence privacy violations are often assumed to be one-off events caused by some leak: sensitive personal information is suddenly shared with a much larger group of people; what was once a secret suddenly becomes common knowledge - and cannot be made secret again; and so on.

Strangely enough, in this logic, invasions of privacy are seen as merely fleeting events that ultimately aim at the disclosure of a larger secret - a fully formed, static good that various data protection measures are supposed to secure. Anyone who assumes that no such good lies behind the protective layer of law and convention has no problem with fleeting privacy violations piling up: if there is nothing to be taken from the "vault of our privacy", it does not matter how often the bank is robbed. This is the theoretical framework behind the meaningless yet ubiquitous phrase: "I have nothing to hide."

One problem with this line of argument is that it relates primarily to the past, not the future. It is helpful for protecting "static" goods from one-off attacks, but not for developing an idea of future, dynamic protected goods in line with one's own principles and values. But what if the real goal of data protection is not to shield a neatly assembled database of secrets from constant attack, but to create good conditions for the development of a new, future-oriented identity, independent of the numerous restrictions imposed by the state and by large corporations? In other words, what if data protection is not primarily about ensuring that we can hide what we want to hide, but about allowing us all to become what we could be - even at a time when the spaces for experimentation are shrinking because the demands of the intelligence services keep growing and the business models of corporations keep evolving?

If this is actually the case - if it is not so much about keeping our secrets as about preserving broad open spaces in which we can continue to experiment with different ideas, lifestyles, and identities - then the "I have nothing to hide" argument no longer works, because it fails to capture the actual good that is at stake when privacy is given up. Instead, the slogan would have to be updated to "I have nothing planned" or "I want nothing", which could be a fitting description of the "burnout society" analyzed by Byung-Chul Han: giving up one's own space for experimentation amounts to giving up every ambition to determine one's own life - that is, to the tacit acceptance of the status quo.

Should we succeed in breaking free of the theoretical prejudice that data protection is about protecting a good from the past, and instead see data protection as an opportunity to live an alternative future - one that is not forced upon us by anyone but that we choose for ourselves, at least semi-autonomously - then we would recognize that the current processes that quantify and datafy everything severely limit this possibility. I would like to outline two such processes, one involving companies and one involving the state. In my opinion, they represent the greatest challenge to protecting these private spaces. Admittedly, this dichotomy is somewhat artificial, because - and I hope this becomes clear - the greatest challenge is the powerful amalgamation of the commercial interests of today's companies with the security interests of today's governments.

Silent battle for our options

Companies now know that the most detailed knowledge of their customers yields the greatest profits. So they collect as much data as possible and orient themselves toward those who run the best platforms for observing our behavior - currently Facebook and Google. But what do they do with all the data? They analyze it to make sure we consume even more of their products, by addressing us in a targeted, hard-to-resist way: streaming services try to analyze our music or movie tastes and keep recommending products we will probably like. In this case, the manipulation is minimal and not particularly worrying (unless, of course, you believe that a downward spiral is at work here, one that threatens to make us prisoners of our own taste because we no longer encounter anything that deviates even slightly from it).
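That spiral can be made concrete with a toy model. The sketch below is purely illustrative - invented catalog data and a generic similarity-based recommender, not any real service's algorithm - and shows how always recommending the closest match to a user's history gradually narrows what the user sees:

```python
# Illustrative sketch only: a toy content-based recommender showing how
# always serving the closest match to a user's history narrows taste.
# All data here is randomly generated for demonstration purposes.
import numpy as np

rng = np.random.default_rng(0)

# 200 hypothetical catalog items, each a 5-dimensional "taste" vector
# (e.g. genre weights inferred from listening behavior), unit-normalized.
catalog = rng.normal(size=(200, 5))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

profile = catalog[0].copy()          # start from one consumed item
history = [0]

for step in range(20):
    scores = catalog @ profile       # cosine similarity to the profile
    scores[history] = -np.inf        # never repeat an item
    pick = int(np.argmax(scores))    # always recommend the closest match
    history.append(pick)
    # The profile drifts toward what was just consumed: the feedback loop.
    profile = 0.9 * profile + 0.1 * catalog[pick]
    profile /= np.linalg.norm(profile)

# Later picks sit ever closer to the profile than early ones - the
# "downward spiral" toward one's own taste described above.
sims = [catalog[i] @ profile for i in history]
print("mean similarity, first 5 picks:", np.mean(sims[:5]))
print("mean similarity, last 5 picks: ", np.mean(sims[-5:]))
```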

However, to understand what kind of social engineering these enormous data collections actually make possible, you have to look at other industries. Take the US gambling industry. Casinos there have figured out how to use detailed data analysis and carefully tailored offers to lure customers back to Las Vegas - for example, by promising concert tickets for their favorite band, or free dinners and hotel stays on dates that mean something to the customer. The basic idea is that, by constantly monitoring their customers, companies can set hidden emotional incentives. These incentives are not necessarily easy to trace back to the companies, but they do lead to more of their products being consumed. Of course, this trend is older than digital media, but digital media supply the raw data that makes this emotional engineering possible.
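As a purely hypothetical illustration of how such an incentive might be triggered - the customer records, thresholds, and offers below are all invented - the underlying logic can be as simple as a rule applied to monitored data:

```python
# Illustrative sketch only: a toy "win-back" rule over monitored customer
# data. Records, thresholds, and offers are invented for demonstration.
from datetime import date

customers = [
    {"name": "guest_17", "favorite_band": "The Hypothetical Five",
     "last_visit_days": 210, "meaningful_date": date(2025, 6, 14)},
]

# Hypothetical upcoming concerts the casino could bundle into an offer.
upcoming_shows = {"The Hypothetical Five": date(2025, 6, 14)}

def pick_offer(c: dict) -> str | None:
    """Lapsed customer + emotionally loaded date = tailored incentive."""
    if c["last_visit_days"] < 180:
        return None                      # still active, no offer needed
    show = upcoming_shows.get(c["favorite_band"])
    if show and show == c["meaningful_date"]:
        return f"Free tickets to {c['favorite_band']} on {show}, plus dinner"
    return "Complimentary hotel stay this month"

for c in customers:
    print(c["name"], "->", pick_offer(c))
```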

The nightmare scenario looks like this: by doing a Google search, tweeting, or posting something on Facebook, you indicate that you are up to something. You do not even need to be aware of your intentions: just like our body language, our verbal communication reveals emotions and intentions that are still unknown to us. Technology companies can analyze both verbal and non-verbal utterances and use them to predict our behavior. Once you have signaled an intention to do or buy something - or perhaps to radically change your previous behavior - an invisible auction for your life suddenly begins in the background. Some companies would like you to do X (become a vegetarian, for example), while others would prefer Y (that you continue to eat meat) - and the silent mathematical battle between these companies (and their algorithms) shapes the socio-economic background against which you decide.
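A minimal sketch of that invisible auction, loosely modeled on the second-price logic used in real-time ad bidding - the bidders, messages, and values below are entirely hypothetical - might look like this:

```python
# Illustrative sketch only: a toy second-price auction over an inferred
# intent. Bidders, values, and the intent signal are all hypothetical.
from dataclasses import dataclass

@dataclass
class Bidder:
    name: str        # who wants to influence the decision
    message: str     # the option they will push (the "X" or "Y" above)
    value: float     # what steering this user is worth to them

def run_auction(intent: str, bidders: list[Bidder]) -> None:
    """The highest bidder wins the right to shape what the user sees;
    in a second-price auction they pay the runner-up's bid."""
    ranked = sorted(bidders, key=lambda b: b.value, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    print(f"signal: user seems to intend '{intent}'")
    print(f"winner: {winner.name} pays {runner_up.value:.2f} "
          f"to show: {winner.message}")

# The user's search suggests they are considering becoming vegetarian;
# two hypothetical companies bid in the background to shape that choice.
run_auction("switch to a vegetarian diet", [
    Bidder("VeggieBox", "Try a plant-based meal kit", value=1.40),
    Bidder("SteakWorld", "Premium steak, 30% off today", value=1.10),
])
```

The user never sees the auction itself, only the winning message - which is precisely the point: the battle over the option space happens before any conscious choice is made.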

Of course, one need not follow this postmodern line of thought all the way, and one can point out that the decision is ultimately in our hands; that is certainly true, but it seems indisputable to me that the factors influencing our decisions today are far more complex - and far more market-driven - than, say, thirty years ago. Back then, our attitudes and decisions were shaped much more by other factors: we were far more likely to act according to some religious or spiritual paradigm, to decide the way our parents or closest relatives did, or simply to behave in a way that would not attract attention in a smaller community. One can certainly be glad that modernity has freed us from some of these constraints. But one would have to be quite naive to believe that the vacuum left by the dissolution of traditions has not been filled by today's cybernetic capitalism; it replaces conventional dogmatic thinking with an opaque, covert manipulation of the options available to us. We are led to believe we are "free to choose", as the economists Milton and Rose Friedman famously put it, but the options we have are predetermined.