Enabled by exponential advances in data storage, transmission, and analysis, the drive to “datafy” our lives is creating an ultra-transparent world in which we are never free from surveillance.

Increasing aspects of our lives are now recorded as digital data that are systematically stored, aggregated, analysed, and sold. Despite the promise of big data to improve our lives, all-encompassing data surveillance constitutes a new form of power that poses a risk not only to our privacy, but to our free will.

Data surveillance started out with online behaviour tracking designed to help marketers customise their messages and offerings. Driven by companies aiming to provide personalised product, service, and content recommendations, data were used to generate value for customers.

But data surveillance has become increasingly invasive, and its scope has broadened with the proliferation of the internet of things and embedded computing. The former expands surveillance into our homes, cars, and daily activities by harvesting data from smart and mobile devices. The latter extends surveillance into our bodies, where biometric data can be collected.

Two characteristics of data surveillance enable its expansion.

It’s multifaceted

Data are used to track and circumscribe people’s behaviour across the dimensions of space and time. An example of space-based tracking is geo-marketing. With access to real-time physical location data, marketers can send tailored ads to consumers’ mobile devices to prompt them to visit stores in their vicinity. To maximise their effectiveness, marketers can tailor the content and timing of ads based on consumers’ past and current location behaviour, sometimes without consumers’ consent.

Location data from GPS or street maps can only approximate a person’s position. But with more recent technology, marketers can accurately determine whether a consumer has been inside a store or merely walked past it. This lets them check whether serving ads resulted in a store visit, and refine subsequent ads accordingly.
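To make that distinction concrete, here is a minimal sketch of how a geofence-based system might separate a genuine store visit from a pass-by. Everything in it is assumed for illustration: the radius, the ping format, and the thresholds are invented, not taken from any real ad platform.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def classify_visit(pings, store_lat, store_lon, radius_m=30, min_pings=3):
    """Label a consumer as having visited a store, rather than merely
    passed by, when enough consecutive location pings fall inside the
    geofence. `pings` is a list of (lat, lon) tuples; `radius_m` and
    `min_pings` are illustrative thresholds, not real platform values."""
    inside = [haversine_m(lat, lon, store_lat, store_lon) <= radius_m
              for lat, lon in pings]
    streak = best = 0
    for hit in inside:
        streak = streak + 1 if hit else 0
        best = max(best, streak)
    return "visit" if best >= min_pings else "pass-by"
```

The design point is simply that a handful of consecutive pings inside a small radius is enough to label a person’s physical behaviour.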

Health applications track and structure people’s time. They allow users to plan daily activities, schedule workouts, and monitor their progress. Some applications enable users to plan their caloric intake over time. Others let users track their sleep patterns.

While users can set their initial health goals, many applications use that initial information to structure a progress plan that includes recommended rest times, workout load, caloric intake, and sleep. Applications can send users notifications to ensure compliance with the plan: a reminder that a workout is overdue, a warning that a caloric limit has been reached, or positive reinforcement when a goal has been met. Despite the sensitive nature of these data, it is not uncommon for them to be sold to third parties.
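As a rough illustration of the compliance logic such an app might run, here is a minimal sketch; the snapshot schema, field names, and thresholds are all invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DaySnapshot:
    """Hypothetical daily summary a health app might keep for a user."""
    calories_consumed: int
    calorie_limit: int
    last_workout: datetime
    workout_interval: timedelta
    step_goal: int
    steps: int

def plan_notifications(day: DaySnapshot, now: datetime) -> list[str]:
    """Turn plan-compliance checks into the nudges described above."""
    nudges = []
    if now - day.last_workout > day.workout_interval:
        nudges.append("Reminder: your workout is overdue.")
    if day.calories_consumed >= day.calorie_limit:
        nudges.append("Warning: you have reached today's caloric limit.")
    if day.steps >= day.step_goal:
        nudges.append("Well done: you hit your step goal!")
    return nudges

# Example: an overdue workout, an exceeded calorie limit, and a met step goal
# all trigger nudges at once.
snapshot = DaySnapshot(calories_consumed=2100, calorie_limit=2000,
                       last_workout=datetime(2017, 5, 1, 7, 0),
                       workout_interval=timedelta(days=2),
                       step_goal=10_000, steps=10_450)
print(plan_notifications(snapshot, now=datetime(2017, 5, 4, 9, 0)))
```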

It’s opaque and distributed

Our digital traces are collected by multiple governmental and business entities, which exchange data through markets whose structure is mostly hidden from people.

Data are typically classified into three categories: first-party, which companies gather directly from their customers through their websites, apps, or customer-relationship-management systems; second-party, which is another company’s first-party data, acquired directly from it; and third-party, which is collected, aggregated, and sold by specialised data vendors.
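As a sketch of how that taxonomy is operationalised, a data platform might tag each record it holds with its provenance; the types, fields, and example below are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Provenance(Enum):
    """The three provenance categories from the paragraph above."""
    FIRST_PARTY = auto()   # gathered directly from one's own customers
    SECOND_PARTY = auto()  # another company's first-party data, bought from it
    THIRD_PARTY = auto()   # aggregated and resold by specialised data vendors

@dataclass
class Record:
    user_id: str
    attribute: str
    value: str
    provenance: Provenance

# An attribute about a user that the holder never collected itself:
record = Record("u123", "favourite_store", "MegaMart", Provenance.THIRD_PARTY)
```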

Despite the size of this market, how data are exchanged through it remains unknown to most people (how many of us know who can see our Facebook likes, Google searches, or Uber rides, and what they use these data for?).

Some data surveillance applications go beyond recording to predicting behavioural trends.

Predictive analytics are used in healthcare, public policy, and management to render organisations and people more productive. Growing in popularity, these practices have raised serious ethical concerns around social inequality, social discrimination, and privacy. They have also sparked a debate about what predictive big data can be used for.
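How little it takes to “predict a behavioural trend” is worth seeing. The sketch below fits a straight line to a user’s past weekly step counts and extrapolates one week ahead; the data are made up, and real predictive-analytics pipelines are of course far more elaborate.

```python
def predict_next(values: list[float]) -> float:
    """Ordinary least-squares trend line, evaluated one step ahead."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # forecast for the next period

weekly_steps = [52_000, 48_500, 45_000, 41_200]  # made-up declining trend
print(round(predict_next(weekly_steps)))  # -> 37700: "activity is dropping"
```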

It’s nudging us

A more worrying trend is the use of big data to manipulate human behaviour at scale by incentivising “appropriate” activities and penalising “inappropriate” ones. In recent years, governments in the UK, US, and Australia have been experimenting with attempts to “correct” the behaviour of their citizens through “nudge units”.

With the application of big data, the scope of such efforts can be greatly extended. For instance, based on data acquired (directly or indirectly) from your favourite health app, your insurance company could raise your rates if it determined your lifestyle to be unhealthy. Based on the same data, your bank could classify you as a “high-risk customer” and charge you a higher interest on your loan.

Using data from your smart car, your car insurance company could decrease your premium if it deemed your driving to be safe.
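To see how mechanically such “corrections” can be applied, here is a toy sketch of the kind of rate-setting rule an insurer could run over app data. Every feature, weight, and threshold below is invented for illustration; real insurers’ scoring models are proprietary and far more complex.

```python
def lifestyle_score(weekly_workouts: int, avg_daily_calories: int,
                    avg_sleep_hours: float) -> float:
    """Higher is 'healthier' under this made-up scoring rule."""
    score = 0.0
    score += min(weekly_workouts, 5) * 0.1          # cap workout credit
    score += 0.2 if 1800 <= avg_daily_calories <= 2600 else -0.2
    score += 0.2 if avg_sleep_hours >= 7 else -0.1
    return score

def adjusted_premium(base_premium: float, score: float) -> float:
    """Raise rates for 'unhealthy' profiles, lower them for 'healthy' ones."""
    if score < 0:
        return base_premium * 1.25   # penalise an "inappropriate" lifestyle
    if score > 0.4:
        return base_premium * 0.9    # reward an "appropriate" one
    return base_premium
```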

By signalling which behaviours are “appropriate”, companies and governments aim to shape our behaviour. As the scope of data surveillance increases, more of our behaviours will be evaluated and “corrected”, and this disciplinary drive will become increasingly inescapable.

With this disciplinary drive becoming routine, there is a danger we will start to accept it as the norm, and pattern our own behaviour to comply with external expectations, to the detriment of our free will.

The “datafication” of our lives is an undeniable trend which is impacting all of us. However, its societal consequences are not predetermined. We need to have an open discussion about its nature and implications, and about the kind of society we want to live in.


This article was originally published on The Conversation. Read the original article.
