The M2M/IoT community enjoys the debate around the number of connected devices we will have in the future. Those fabulous numbers bring fabulous opportunities! But the IoT community, or part of it, is also aware that the dream will only come true if the IoT is safe and has a positive impact on people and organisations. Among the conditions for that to happen are the ethical issues around the use of data. To understand those issues and their impact on the development of the IoT, we met Dr. Katerina Hadjimatheou, Senior Research Fellow at the Interdisciplinary Ethics Research Group at the University of Warwick. Dr. Hadjimatheou specialises in the ethics of security and surveillance technologies.

Saverio Romeo (SR): The IoT community holds the view that we will live and work in connected, intelligent spaces, enabled by connected devices of all sorts – from fitness trackers to temperature sensors and beyond. What are the ethical issues to consider in a fully connected society?

Katerina Hadjimatheou (KH): There are three considerations:

a) What kind of information is being processed: some kinds of information are considered inherently private, for example, information about intimate aspects of our lives, such as our close friendships, our families, our sex lives, and our bodies, including our health. Information about our energy consumption or purchasing habits is not inherently private, but this does not mean we want it shared with everybody. Which leads me to the second consideration, namely,

b) Who is viewing that information: most of us don’t mind non-intimate information about us, such as our energy consumption or our purchasing habits, being collected so that we receive better advice from our energy providers on how to save money, or so that we receive advertising that better reflects our preferences. Yet at the same time most people would feel uncomfortable about their purchasing history being provided to their mother-in-law so that she can choose them a better Christmas present. And, even though some information is very intimate, there are usually some people we gladly share it with. For example, we routinely share information about our health, sexual habits, and families with doctors even though such information is ‘inherently’ private. As this suggests, in order to preserve privacy, we need to be able to choose whom to share information with, which leads us conveniently to the third consideration:

c) Whether we have any choice about the kinds of information collected and the things that are done with it: a ‘liberal’ society such as our own is built partly upon the belief that people should, as far as possible, be at liberty (i.e. free) to decide for themselves where to live, what job to do, what kinds of relationships to have, how to treat their bodies, and generally how to live their lives. For example, most people believe that the extent to which one lives a healthy lifestyle should be a matter of personal choice, and not something imposed on people by others. This means that, even if it is better for us to eat healthily, and even if it can be demonstrated that having remote connected devices monitor our food consumption and exercise improves health outcomes, we believe people should have a choice about whether to adopt such devices. This does not mean that everything should be a matter of choice, because some of the decisions we make about our own bodies affect the wellbeing of others. For example, it is nowadays commonly accepted in the EU that one’s personal choice to smoke should be limited to places where it does not adversely affect the health of others. Nevertheless, in a liberal society personal choice is always very important and must be taken into account when designing technologies. Technologies and systems that enable personal choice are better than those that do not, and technologies and systems that enable fine-grained choices at different points in the process of technology adoption are better than those that offer only a one-off, single chance to ‘agree’ or ‘disagree’. More chances to consent to or opt out of aspects of a technology decrease the risk of what is known as ‘function’ or ‘mission’ creep. Mission creep occurs when data collected for one purpose is then used for another. For example, some smart metering systems collect data about the temperature within a house, which can also be used to determine how many people are at home. The fact that people sign up to a system that monitors temperature within a house in order to know when to turn the radiators on does not mean they should be treated as if they also signed up to a system that monitors the number of people in the house.
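
To make the mission-creep risk concrete, here is a minimal sketch in Python (all names and purposes are hypothetical, not taken from any real smart-metering system) of how a backend could bind each reading to the purpose the user actually consented to, so that temperature data collected for heating control cannot be silently reused for occupancy inference:

```python
from dataclasses import dataclass, field

# Hypothetical purposes a user can opt into, each one separately.
HEATING_CONTROL = "heating_control"
OCCUPANCY_ANALYTICS = "occupancy_analytics"

@dataclass
class Reading:
    temperature_c: float
    timestamp: str
    purpose: str = HEATING_CONTROL  # every reading records why it was collected

@dataclass
class ConsentRecord:
    granted: set = field(default_factory=set)  # fine-grained, revocable flags

def use_reading(reading: Reading, consent: ConsentRecord, purpose: str) -> float:
    """Refuse any use of a reading beyond its collection purpose
    unless the user has explicitly opted into that new purpose."""
    if purpose != reading.purpose and purpose not in consent.granted:
        raise PermissionError(f"no consent for secondary use: {purpose}")
    return reading.temperature_c

consent = ConsentRecord(granted={HEATING_CONTROL})
r = Reading(temperature_c=19.5, timestamp="2016-03-01T08:00:00Z")
use_reading(r, consent, HEATING_CONTROL)        # fine: the original purpose
# use_reading(r, consent, OCCUPANCY_ANALYTICS)  # raises PermissionError
```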

SR: Many believe that privacy by design should be a fundamental criterion in designing IoT solutions. But what is privacy by design?

KH: Privacy by design means building privacy concerns into the design of technologies, processes, and systems. One of its basic principles is data minimisation: collecting only the amount of data necessary to fulfil the specific functions of the device or system that people have signed up for. Data minimisation reduces the risk of function or mission creep. Minimisation can happen at the level of the device, but it can also happen later on in the processing chain. For example, most of us feel that a system which processes data automatically, with no human involvement, is less intrusive than one in which people actually access that data.
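
As a rough illustration of data minimisation at the device level (the field names below are hypothetical), a device could whitelist, at the point of collection, only the fields the signed-up function actually needs, and discard everything else before it ever leaves the device:

```python
# Fields the signed-up function ("when should the radiators come on?")
# actually needs; everything else is dropped at source.
REQUIRED_FIELDS = {"room_temperature_c", "timestamp"}

def minimise(raw_sample: dict) -> dict:
    """Keep only the whitelisted fields; never transmit the rest."""
    return {k: v for k, v in raw_sample.items() if k in REQUIRED_FIELDS}

raw = {
    "room_temperature_c": 18.7,
    "timestamp": "2016-03-01T08:00:00Z",
    "motion_detected": True,            # could reveal occupancy: dropped
    "device_mac": "aa:bb:cc:dd:ee:ff",  # identifying: dropped
}
print(minimise(raw))  # {'room_temperature_c': 18.7, 'timestamp': '2016-03-01T08:00:00Z'}
```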

SR: The IoT community believes that data is strategic – in business and service terms. In order to fully exploit that potential, there is a school of thought that believes in data openness. From an ethical perspective, how do you see data openness?

KH: Perhaps predictably, I would say that it depends on what kind of data is being discussed. Obviously, making the code for a very intrusive surveillance software open source is not a sensible idea, given the risks of cybercrime, misuse by authoritarian regimes, not to mention terrorism. Similarly, for those people unlucky enough to live under human-rights-abusing regimes such as Isis, encryption is a vital tool for preserving privacy as well as many other basic liberties. Less dramatically, the combination of lots of data from different sources can result in the re-identification of individuals who believe their devices are monitoring them on an anonymous basis. Data openness seems a good idea as a general principle when the data refers to the actions of a government operating in a democracy, because such governments should be accountable to the people, and for that transparency is necessary. In contrast, customers do not owe transparency to the companies they do business with, and businesses that collect data about their customers have a duty of care to process that data in ways that do not infringe on their privacy, even when customers have, perhaps unthinkingly, clicked that ‘agree’ button.
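
The re-identification risk mentioned above can be illustrated with a toy linkage example (both datasets below are entirely fabricated): joining an ‘anonymised’ activity dataset against a public register on shared quasi-identifiers such as postcode and birth year is often enough to put names back on the records:

```python
# Toy linkage attack; all data is made up for illustration.
anonymised_fitness = [
    {"postcode": "CV4 7AL", "birth_year": 1980, "avg_daily_steps": 11200},
    {"postcode": "CV4 8UW", "birth_year": 1975, "avg_daily_steps": 4300},
]
public_register = [
    {"name": "A. Smith", "postcode": "CV4 7AL", "birth_year": 1980},
    {"name": "B. Jones", "postcode": "CV4 8UW", "birth_year": 1975},
]

# Join on the quasi-identifiers: neither "anonymous" record survives.
for record in anonymised_fitness:
    for person in public_register:
        if (person["postcode"], person["birth_year"]) == (record["postcode"], record["birth_year"]):
            print(f'{person["name"]} -> {record["avg_daily_steps"]} steps/day')
```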
