Privacy Preservation

Privacy Legislation

Today, primarily the principles of notice, consent, access and security are enforced, e.g. in e-commerce and online advertising. Privacy legislation also touches some mature technologies which are part of the IoT evolution: RFID and camera networks have received much attention in the past. Recent legislation efforts have focussed on data protection in cloud computing and adequate protection of web users against tracking [1].

However, already today, the level of privacy protection offered by legislation is insufficient, as day-to-day data spills and unpunished privacy breaches indicate. The Internet of Things will undoubtedly create new grey areas with ample space to circumvent legislative boundaries.

First, most pieces of legislation centre around the fuzzy notion of Personally Identifiable Information (PII). However, efforts towards a concise definition of what constitutes PII (e.g. by enumerating combinations of identifying attributes) quickly become outdated as new IoT technologies unlock and combine new sets of data that can enable identification, making it increasingly challenging to distinguish PII from non-PII.

Second, timeliness of legislation is a constant issue: tracking of web users, e.g., had been practised for many years before the European Commission passed a law against it in early 2011. With the IoT evolving fast, legislation is bound to fall even further behind. An example is Smart Meter readings, which already allow inferring comprehensive information about people’s lifestyles.

Third, already today, many privacy breaches go unnoticed. In the IoT, awareness of privacy breaches among users will be even lower, as data collection moves into everyday things and happens more passively. Legislation, however, is often only a response to public protests and outcries, which require awareness of incidents in the first place.

Finally, the economics of privacy still favour those who disregard privacy legislation. On the one hand, the development of PETs and the enforcement and auditing of privacy-protection policies are expensive and can limit business models. On the other hand, violations of privacy legislation either go unpunished or result only in comparably small fines, while public awareness is still too low to inflict serious damage on a company’s reputation. Thus, disregarding privacy legislation, as, e.g., Google did by deliberately circumventing Safari’s user tracking protection, seems profitable. Over this incident, Google paid a record fine of $22.5 million in a settlement with the Federal Trade Commission (FTC), but it is conceivable that the earnings more than compensated.

It will be a significant challenge to design a unified, enduring legislative framework for privacy protection in the Internet of Things, instead of passing quickly outdated pieces of legislation on singular technologies. Success will undoubtedly require comprehensive knowledge of the technological basis of the IoT and its ongoing evolution. The key, however, will be a deep understanding of existing and emerging threats to privacy in the IoT – these threats are, ultimately, what legislation needs to protect against.

IoT Privacy Preservation Threats

Identification

Identification denotes the threat of associating a (persistent) identifier, e.g. a name and address or a pseudonym of any kind, with an individual and data about him. The threat thus lies in associating an identity with a specific privacy-violating context, and it also enables and aggravates other threats, e.g. profiling and tracking of individuals or the combination of different data sources.

The threat of identification is currently most dominant in the information processing phase at the backend services of our reference model, where vast amounts of information are concentrated in a central place outside of the subject’s control.

First, surveillance camera technology is increasingly integrated and used in non-security contexts, e.g. for analytics and marketing. As facial databases (e.g. from Facebook) become available to non-governmental parties like marketing platforms, automatic identification of individuals from camera images is already a reality.

Second, the increasing (wireless) interconnection and vertical communication of everyday things opens up possibilities for identification of devices through fingerprinting. It was recognised already for RFID technology that the aura of a person’s things can identify the individual.

Third, speech recognition is widely used in mobile applications, and vast databases of speech samples are already being built. Those could potentially be used to recognise and identify individuals, e.g. by governments requesting access to that data. With speech recognition evolving into a powerful way of interacting with IoT systems and the proliferation of cloud computing for processing tasks, this attack surface and the associated privacy risks will only grow.

Identity protection and, complementarily, protection against identification is a predominant topic in RFID privacy but has also gained much attention in the areas of data anonymisation and privacy-enhancing identity management. These approaches are difficult to fit to the IoT: Most data anonymisation techniques can be broken using auxiliary data that is likely to become available at some point during the IoT evolution. Identity management solutions, besides relying heavily on expensive crypto-operations, are mostly designed for very confined environments, like enterprise or home networks, and are thus tricky to fit to the distributed, diverse and heterogeneous environment of the IoT. Due to similarities in resource constraints and the sheer number of things, approaches from RFID privacy are the most promising. However, those approaches do not account for the diverse data sources available in the IoT, such as camera images and speech samples.
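How fragile naive anonymisation is against auxiliary data can be illustrated with a toy example (all names and attribute values below are fabricated for illustration): joining an “anonymised” record set with a public auxiliary data set on quasi-identifiers such as postcode and birth year is often enough to re-identify individuals.

```python
# Toy illustration (fabricated data): an "anonymised" health table is
# re-identified by joining it with a public auxiliary data set on the
# quasi-identifiers postcode and birth year.
anonymised = [
    {"postcode": "81675", "birth_year": 1984, "diagnosis": "diabetes"},
    {"postcode": "10115", "birth_year": 1990, "diagnosis": "asthma"},
]
auxiliary = [  # e.g. scraped from a public register or social network
    {"postcode": "81675", "birth_year": 1984, "name": "Alice"},
]

reidentified = [
    {"name": aux["name"], "diagnosis": rec["diagnosis"]}
    for rec in anonymised
    for aux in auxiliary
    if (rec["postcode"], rec["birth_year"]) == (aux["postcode"], aux["birth_year"])
]
print(reidentified)  # [{'name': 'Alice', 'diagnosis': 'diabetes'}]
```

In the IoT, every additional linked data source adds potential quasi-identifiers, which is why such joins become easier over time.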

Localisation and Tracking

Localisation and tracking is the threat of determining and recording a person’s location through time and space. Tracking requires identification of some kind to bind continuous localisations to one individual. Already today, tracking is possible through different means, e.g. GPS, internet traffic, or cell phone location. Many concrete privacy violations have been identified related to this threat, e.g. GPS stalking, the disclosure of private information such as an illness, or generally the uneasy feeling of being watched. However, localisation and tracking of individuals is also an essential function in many IoT systems [2]. These examples show that users perceive it as a violation when they don’t have control over their location information, are unaware of its disclosure, or when the information is used and combined in an inappropriate context.

In the immediate physical proximity, localisation and tracking usually do not lead to privacy violations, as, e.g., anyone in the immediate surroundings can directly observe the subject’s location. Traditionally, localisation and tracking thus appear as a threat mainly in the phase of information processing, when location traces are built at backends outside the subject’s control. However, the IoT evolution will change and aggravate this threat in three ways.

First, we observe an increasing use of location-based services (LBS). IoT technologies will not only support the development of such LBS and improve their accuracy but also expand those services to indoor environments, e.g. for smart retail.

Second, as data collection becomes more passive, more pervasive and less intrusive, users become less aware of when they are being tracked and of the associated risks. Third, the increasing interaction with smart things and systems leaves data trails that not only put the user at risk of identification but also allow tracking of his location and activity, e.g. when swiping an NFC-enabled smartphone to get a bus ticket or using the city’s smart parking system. With these developments, the threat of localisation and tracking will also appear in the interaction phase, making the subject trackable in situations where he might falsely perceive physical separation, e.g. by walls or shelves, as privacy.

Research on location privacy has proposed many approaches that can be categorised by their architectural perspective into client-server, trusted third party, and distributed/peer-to-peer. However, these approaches have mostly been tailored to outdoor scenarios where the user actively uses an LBS through his smartphone. Thus, these approaches do not fit the changes brought about by the IoT without significant modification. The main challenges we identify are awareness of tracking in the face of passive data collection, control of shared location data in indoor environments, and privacy-preserving protocols for interaction with IoT systems.
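As a minimal sketch of one client-side technique from this literature, spatial cloaking snaps a GPS fix to the centre of a coarse grid cell before the position ever leaves the device, assuming the service can work with reduced accuracy (the cell size below is an arbitrary example value):

```python
import math

def cloak_location(lat, lon, cell_deg=0.01):
    """Snap a GPS fix to the centre of a coarse grid cell (0.01 deg is
    roughly 1.1 km of latitude), so an LBS only ever sees the cloaked
    position instead of the exact one."""
    return (
        (math.floor(lat / cell_deg) + 0.5) * cell_deg,
        (math.floor(lon / cell_deg) + 0.5) * cell_deg,
    )

# Two nearby fixes fall into the same cell and become indistinguishable:
print(cloak_location(48.26245, 11.66891))
print(cloak_location(48.26310, 11.66712))
```

The trade-off between cell size (privacy) and service quality (accuracy) is exactly the kind of parameter that must be rethought for indoor IoT scenarios, where rooms rather than city blocks are the natural cloaking regions.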

Profiling

Profiling denotes the threat of compiling information dossiers about individuals in order to infer interests by correlation with other profiles and data. Profiling methods are mostly used for personalisation in e-commerce (e.g. in recommender systems, newsletters and advertisements) but also for internal optimisation based on customer demographics and interests. Examples where profiling leads to a violation of privacy are price discrimination, unsolicited advertisements, social engineering, or erroneous automatic decisions, e.g. by Facebook’s automatic detection of sexual offenders. Also, collecting and selling profiles about people, as practised by several data marketplaces today, is commonly perceived as a privacy violation. The examples show that the profiling threat appears mainly in the dissemination phase, towards third parties, but also towards the subject itself in the form of erroneous or discriminating decisions.

Existing approaches to preserving privacy include client-side personalisation; data perturbation, obfuscation and anonymisation; distribution; and working on encrypted data. These approaches can be applied to IoT scenarios but must be adapted from the usual model, which assumes a central database, to account for the many distributed data sources expected in the IoT. This will require considerable effort for the recalibration of metrics and the redesign of algorithms, as, e.g., recent work on differential privacy for distributed data sources shows. After all, data collection is one of the central promises of the IoT and the primary driver for its realisation. We thus see the biggest challenge in balancing the interests of businesses in profiling and data analysis with users’ privacy requirements.
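A minimal sketch of the basic (centralised) Laplace mechanism underlying differential privacy, assuming a simple counting query with sensitivity 1: the data holder releases only a noisy count, never the raw profiles (the meter readings are fabricated example data).

```python
import random

def dp_count(values, predicate, epsilon=0.5):
    """Release an epsilon-differentially-private count.

    A counting query has sensitivity 1, so adding Laplace noise with
    scale 1/epsilon satisfies epsilon-DP. Laplace(0, b) noise is sampled
    here as the difference of two exponentials with mean b."""
    true_count = sum(1 for v in values if predicate(v))
    noise = (random.expovariate(1.0) - random.expovariate(1.0)) / epsilon
    return true_count + noise

# How many of these (fabricated) smart-meter readings exceed 5 kWh?
readings = [2.1, 7.4, 6.3, 1.0, 8.8, 0.5, 5.9]
print(dp_count(readings, lambda r: r > 5.0, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; recalibrating such parameters for many distributed, linked data sources is precisely the open problem mentioned above.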

Privacy-Violating Interaction and Presentation

This threat refers to conveying private information through a public medium and, in the process, disclosing it to an unwanted audience. It can be loosely sketched as shoulder-surfing, but in real-world environments.

Many IoT applications, e.g. smart retail, transportation, and healthcare, envision and require substantial interaction with the user. In such systems, it is imaginable that information will be provided to users through smart things in their environment, e.g. through advanced lighting installations, speakers or video screens. Vice versa, users will control systems in new, intuitive ways using the elements surrounding them, e.g. by moving, touching and speaking to smart things. However, many of those interaction and presentation mechanisms are inherently public, i.e. people in the vicinity can observe them. This becomes a threat to privacy when private information is exchanged between the system and its user. In a smart city, e.g., a person might ask for the way to a specific health clinic. Such a query should not be answered by displaying the way on a public display nearby, visible to any passers-by. Another example is recommendations in stores that reflect private interests, such as specific diet food and medicine, or movies and books on precarious topics. Due to its close connection to interaction and presentation mechanisms, the threat of privacy-violating interaction and presentation appears primarily in the interaction and presentation phases of our reference model.

Since such advanced IoT services are still in the future, privacy-violating interactions have not received much attention from researchers. Interaction mechanisms are, however, crucial to usable IoT systems and privacy threats must consequently be addressed. We identify two specific challenges that will have to be solved.

First, we need means for automatic detection of privacy-sensitive content. It is easily imaginable that the provisioning of content and rendering it for the user are handled in two steps by two different systems: E.g. company A generates recommendations for customers of a store, which are then delivered to the customer by company B’s system: either by special lighting and the use of speakers or through a push to his smartphone.
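A first step towards such automatic detection could be a simple content classifier that routes sensitive messages away from public presentation media. The categories, keyword lists and function names below are invented for illustration; a production system would need far more robust, likely machine-learned, models.

```python
# Hypothetical keyword lists per privacy-sensitive category.
SENSITIVE_KEYWORDS = {
    "health": {"clinic", "medicine", "prescription", "therapy", "diet"},
    "finance": {"loan", "debt", "salary", "overdraft"},
}

def sensitive_categories(message):
    """Return the set of privacy-sensitive categories a message touches."""
    words = set(message.lower().replace(",", " ").split())
    return {cat for cat, kws in SENSITIVE_KEYWORDS.items() if words & kws}

def choose_channel(message):
    """Route sensitive content to a private channel (e.g. the user's
    smartphone), everything else to public displays or speakers."""
    return "private" if sensitive_categories(message) else "public"

print(choose_channel("Turn left for the shoe store"))              # public
print(choose_channel("Your prescription is ready at the clinic"))  # private
```

This also illustrates why the detection step matters when provisioning (company A) and rendering (company B) are separate: the rendering system needs a machine-readable sensitivity signal to pick the right channel.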

Second, with the previous point in mind, scoping will be necessary, i.e. how can we scope a public presentation medium to a specific subgroup of recipients or a particular physical area? This approach would prove useful to support users who have no smartphone (or any other device providing a private channel for interactions and presentations). However, it will be challenging to accurately determine the captive audience of a particular presentation medium, separate the intended target group and adjust the scope accordingly. E.g. what if the target user is in the midst of a group of people?

Applications for privacy-preserving pervasive interaction mechanisms are, e.g. smart stores and malls, smart cities and healthcare applications. Here, it would indeed be an achievement to provide similar levels of privacy as people would expect in the contexts of their everyday conversations, i.e. interactions with their peers.

Lifecycle Transitions

Privacy is threatened when smart things disclose private information during changes of control spheres in their lifecycle. The problem has been observed directly with compromising photos and videos that are often found on used cameras or smartphones – in some cases, “disturbing” data has even been spotted on “new” devices. Since privacy violations from lifecycle transitions are mainly due to the collected and stored information, this threat relates to the information collection phase of our reference model.

Two developments in the IoT will likely aggravate issues due to the lifecycle of things. First, smart things will interact with various persons, other things, systems, or services and amass this information in product history logs. In some applications, such data is highly sensitive, e.g. health data collected by medical devices for home care. But even the collection of simple usage data (e.g. location, duration, frequency) could disclose much about the lifestyle of people. Already today, detailed usage logs are maintained for warranty cases in TV sets, notebooks or cars. Second, as exchangeable everyday things such as light bulbs become smart, the sheer numbers of such items entering and leaving the personal sphere will make it increasingly difficult to prevent disclosure of such information.

Despite obvious problems with the lifecycle of today's smartphones, cameras, and other storage devices, this threat has not been adequately addressed. The lifecycle of most consumer products is still modelled as buy-once-own-forever, and solutions have not evolved beyond a complete memory wipe (e.g. before selling a phone) or physical destruction (e.g. before disposal of a hard drive). Smart things could, however, feature a much more dynamic lifecycle, with items being borrowed, exchanged, added and disposed of freely.

We thus identify the requirement for flexible solutions that will undoubtedly pose some challenges: Automatic detection of lifecycle transitions of a smart thing will be required to implement suitable privacy lifecycle management mechanisms. E.g. a smart rubbish bin could automatically cleanse all items in it from private information, such as medical prescriptions on a smart pillbox. It will be difficult, though, to automatically distinguish between different lifecycle transitions as, e.g. lending, selling or disposing of an item and taking the appropriate action. Specific lifecycle transitions, e.g. borrowing a smart thing, will require locking private information temporarily, e.g. the readings of a vital signs monitor. Once the device has returned to its original owner, the private data can be unlocked, and the original owner can continue to use it seamlessly [3].
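The lock-and-restore behaviour for borrowing described above can be sketched as a small state machine. The class and method names are hypothetical, and a real device would of course seal the data cryptographically rather than just moving it between fields:

```python
class SmartThing:
    """Illustrative sketch of privacy-aware lifecycle management:
    lending seals the owner's history away; selling wipes it."""

    def __init__(self, owner):
        self.owner = owner
        self._log = []           # usage/history data of the current user
        self._locked_log = None  # owner's data, sealed while the thing is lent

    def record(self, entry):
        self._log.append(entry)

    def lend(self, borrower):
        # Transition "lend": seal the owner's history; the borrower starts fresh.
        self._locked_log, self._log = self._log, []
        self.owner = borrower

    def return_to(self, owner):
        # Transition "return": drop the borrower's data, restore the owner's.
        self._log, self._locked_log = (self._locked_log or []), None
        self.owner = owner

    def sell(self, buyer):
        # Transition "sell"/"dispose": wipe everything.
        self._log, self._locked_log = [], None
        self.owner = buyer

monitor = SmartThing("Alice")
monitor.record("pulse 72")
monitor.lend("Bob")
print(monitor._log)   # [] - Bob cannot see Alice's readings
monitor.return_to("Alice")
print(monitor._log)   # ['pulse 72'] - restored for Alice
```

The hard open problem remains outside this sketch: detecting automatically which transition (lend, sell, dispose) has actually occurred.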

Inventory Attack

Inventory attacks refer to the unauthorised collection of information about the existence and characteristics of personal things. One evolving feature of the IoT is interconnection. With the realisation of the All-IP and end-to-end vision, smart things become queryable over the Internet. While things can then be queried from anywhere by legitimate entities (e.g. the owner and authorised users of the system), non-legitimate parties can also query things and exploit this to compile an inventory list of things at a specific place, e.g. of a household, office building, or factory. Even if smart things could distinguish legitimate from illegitimate queries, a fingerprint of their communication speeds, reaction times and other unique characteristics could potentially be used to determine their type and model. With the predicted proliferation of wireless communication technology, fingerprinting attacks could also be mounted passively, e.g. by an eavesdropper in the vicinity of the victim’s house.
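How such passive fingerprinting could work can be sketched as matching passively observed traffic characteristics against a database of known device signatures. All feature values and model names below are invented; real fingerprints would use many more traits (frame timing distributions, protocol quirks, etc.):

```python
# Invented signature database: observable traffic traits -> device model.
SIGNATURES = {
    ("zigbee", 127, "fast"): "AcmeLight v2 smart bulb",
    ("wifi", 1500, "slow"): "HomeCam X security camera",
}

def fingerprint_device(protocol, max_frame_len, response_speed):
    """Guess a device model from passively observable communication traits."""
    return SIGNATURES.get((protocol, max_frame_len, response_speed), "unknown")

# An eavesdropper near the house maps overheard traffic to an inventory list:
overheard = [("zigbee", 127, "fast"), ("wifi", 1500, "slow")]
inventory = [fingerprint_device(*traits) for traits in overheard]
print(inventory)  # ['AcmeLight v2 smart bulb', 'HomeCam X security camera']
```

Note that no query is ever sent: authentication alone cannot prevent this variant of the attack, which motivates the fingerprint-robustness challenge discussed below.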

The impact of new technologies on this threat is not yet clear. On the one hand, we expect a diversification of technologies in the IoT as more and more different things become smart. Diversification enlarges the attack surface for fingerprinting, as, e.g., observed with the many diverse configurations of web browsers. On the other hand, at some point in time, we expect the establishment of specific standards for communication and interaction that could reduce such differences.

Numerous concrete privacy violations based on inventory attacks are imaginable or have happened. First, burglars can use inventory information for targeted break-ins at private homes, offices and factories, similar to how they already use social media today to stake out potential victims. Note that a comprehensive inventory attack could then also be used to profile the anti-burglar system down to every last presence sensor. Second, law enforcement and other authorities could use the attack to conduct (unwarranted) searches. Third, private information is disclosed by the possession of specific things, such as personal interests (e.g. books, movies, music) or health (e.g. medicine, medical devices). Fourth, efforts for industrial espionage can be complemented through an inventory attack, as noted by Mattern.

Radomirovic and Van Deursen have recognised the danger of profiling through fingerprinting in the context of RFID. However, with RFID, the problem has a much more local scope, as RFID tags can be read only from a close distance and queries are mostly restricted to reading the tag’s identifier. As analysed above, the problem will be aggravated by the IoT evolution, as the attack surface is significantly enlarged by the increasing proliferation of wireless communications, end-to-end connectivity, and more sophisticated queries. To thwart inventory attacks in the IoT, we identify the following two technical challenges: First, smart things must be able to authenticate queries and answer only those from legitimate parties, to thwart active inventory attacks through querying. Research in lightweight security provides useful approaches for authentication in resource-constrained environments. Second, mechanisms that ensure robustness against fingerprinting will be required to prevent passive inventory attacks based on the communication fingerprint of a smart thing. Inventory attacks will undoubtedly be difficult to counter. The fact that the use of PETs, though meant to protect privacy, can make fingerprinting even easier, currently leaves hiding in the (privacy-ignorant) masses as the most viable but suboptimal solution. However, an IoT system that discloses comprehensive information about its owner’s possessions is not likely to gain acceptance [4].
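The first challenge, query authentication, could for instance be addressed with a challenge-response scheme based on a pre-shared key. The sketch below uses HMAC-SHA256 with single-use nonces; the class and method names are illustrative, and genuinely constrained devices might need a lighter primitive than SHA-256:

```python
import hashlib
import hmac
import os

def sign_query(key, nonce, query):
    """Authenticate a query by MACing it together with a fresh nonce."""
    return hmac.new(key, nonce + query, hashlib.sha256).digest()

class SmartThingEndpoint:
    """Illustrative sketch: a smart thing that answers only authenticated queries."""

    def __init__(self, key, inventory):
        self._key = key            # pre-shared with legitimate users
        self._inventory = inventory
        self._nonce = None

    def challenge(self):
        self._nonce = os.urandom(16)  # fresh nonce per query thwarts replays
        return self._nonce

    def query(self, q, tag):
        if self._nonce is None:
            return None
        expected = sign_query(self._key, self._nonce, q)
        self._nonce = None  # single use
        if hmac.compare_digest(tag, expected):
            return self._inventory
        return None  # silently drop unauthenticated queries

key = os.urandom(32)
thing = SmartThingEndpoint(key, ["pillbox", "vital signs monitor"])
nonce = thing.challenge()
print(thing.query(b"list", sign_query(key, nonce, b"list")))       # the inventory
nonce = thing.challenge()
print(thing.query(b"list", sign_query(b"wrong" * 8, nonce, b"list")))  # None
```

Even this scheme only blocks active probing; the timing and traffic patterns of the exchange itself can still leak a fingerprint, which is the second challenge.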

Linkage

This threat consists in linking different, previously separated systems such that the combination of data sources reveals (truthful or erroneous) information that the subject did not disclose to the previously isolated sources and, most importantly, also did not want to reveal. Users fear poor judgement and loss of context when data gathered from different parties under different contexts and permissions is combined. Privacy violations can also arise from the bypassing of privacy protection mechanisms, as the risks of unauthorised access and leaks of private information increase when systems collaborate to combine data sources. A third example of privacy violations through the linkage of data sources and systems is the increased risk of re-identification of anonymised data. A common approach towards protecting privacy is working on anonymised data only, but the act of combining different sets of anonymous data can often enable re-identification through unforeseen effects. The examples show that the threat of linkage primarily appears in the information dissemination phase.

The IoT evolution will aggravate the threat of linkage for two main reasons. First, horizontal integration will eventually link systems from different companies and manufacturers to form a heterogeneous, distributed system-of-systems delivering new services that no single system could provide on its own. Successful collaboration will above all require an agile exchange of data and controls between the different parties [5]. However, as horizontal integration features more local data flows than vertical integration, it could also provide a way to enhance privacy. Second, the linkage of systems will render data collection in the IoT even less transparent than what is already expected from the predicted passive and unintrusive data collection by smart things.

Threats from linking different systems and information sources are not entirely new. They can already be observed in the domain of online social networks (OSN) and their applications. However, this involves only two parties (i.e. the OSN and the third-party application), while the IoT is expected to feature services that depend on the interaction and collaboration of many coequal systems. Here, we identify three technical challenges for privacy-enhanced systems-of-systems: First, transparency about what information a system-of-systems shares with whom is crucial to gain user acceptance. Second, permission models and access control must be adapted to the plurality of stakeholders collaborating in linked systems. Third, data anonymisation techniques must work on linked systems and be robust against the combination of many different sets of data. It will be interesting to see, e.g., how concepts like differential privacy can be fitted to such multi-stakeholder, multi-system scenarios.
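The second challenge, multi-stakeholder permission models, could build on explicit (origin, consumer, purpose) grants in the spirit of contextual integrity: a linked system may pass data on only when the subject has approved that specific flow. The grant entries and names below are invented for illustration:

```python
# Illustrative grants: the subject allows a data flow only for a specific
# (origin system, consuming system, purpose) combination.
GRANTS = {
    ("smart_meter", "utility_backend", "billing"),
    ("fitness_tracker", "family_doctor", "health_monitoring"),
}

def may_share(origin, consumer, purpose):
    """A system-of-systems may forward data only under an explicit grant."""
    return (origin, consumer, purpose) in GRANTS

print(may_share("smart_meter", "utility_backend", "billing"))  # True
print(may_share("smart_meter", "ad_network", "profiling"))     # False
```

Because the purpose is part of the grant, data collected in one context cannot silently migrate into another, which addresses the loss-of-context fear described above; logging each check would also serve the transparency challenge.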

[1] M. R. Schurgot, D. A. Shinberg, L. G. Greenwald, “Experiments with Security and Privacy in IoT Networks”, Proceedings of the 2015 IEEE 16th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), pp. 1-6, 2015.
[2] F. Skarmeta, J. L. Hernandez-Ramos, M. Moreno, “A Decentralised Approach for Security and Privacy Challenges in the Internet of Things”, Proceedings of the IEEE World Forum on Internet of Things (WF-IoT), March 6-8, 2014.
[3] R. Roman, J. Zhou, J. Lopez, “On the Features and Challenges of Security and Privacy in the Distributed Internet of Things”, Computer Networks, vol. 57, no. 10, pp. 2266-2279, 2013.
[4] N. Li et al., “Privacy Preservation in Wireless Sensor Networks: A State-of-the-Art Survey”, Ad Hoc Networks, vol. 7, no. 8, pp. 1501-1514, 2009.
[5] R.H. Weber, “Internet of Things—New Security and Privacy Challenges”, Computer Law and Security Rev., vol. 26, no. 1, pp. 23-30, 2010.