6 Challenges of the digital public space

Privacy and Security

Chapter 5 focused on ownership and transactions as a fundamental aspect of human interaction and culture, one that is being affected by digital public spaces and related technologies. That discussion concentrated mainly on the potential benefits conveyed by new contexts, although we touched on possible negative effects. This chapter and Chapter 7, in contrast, examine two areas of digital life which are raising particular concern amongst both researchers and the general public. While the tone of this book so far has been generally optimistic with regard to digital public space, we should take care to examine, as cautionary examples, areas which might have a significant negative impact on our interactions unless carefully managed. The specific challenges addressed in this chapter are those connected to privacy and security.

Privacy in digital space

Some thoughts are so private that you only share them with a therapist or 17,000 people on the internet.

(lordoftheinternet, Tumblr, 2013)

At the start of this book, we spent some time discussing what is covered by public space and public interaction. Publicness is usually held up in contrast to privacy, and the nature of what we are able to keep private is also heavily affected by digital technology. We have touched on this several times already: for example, the enforcement of rules within privately owned digital spaces that are occupied by the public, or the ability to keep your identity private while interacting online and the effect this can have on responsibility and behaviour.

Privacy has been defined as ‘the right of an individual to be alone’ (Acquisti et al, 2007). It can also cover rights over management of access, including the access individuals, groups or institutions give others to information about themselves (Westin, 1967), or ‘the selective control of access to the self’ (Altman, 1975). Most people consider that they have a right to some form of privacy, and are sceptical of technology that appears to compromise this right. However, privacy is not a single concept, and has many different aspects which must be considered. Susen (2011) notes three different facets or spectra of privacy: ‘society versus individual (“collective” versus “personal”), visibility versus concealment (“transparent” versus “opaque”), and openness versus closure (“accessible” versus “sealed”)’. Each of these is separate, meaning that something can be private without being closed, as well as public without being visible.

This complex dynamic of individuality, concealment and accessibility is reflected in the nature of online privacy in digital public spaces. Public and private are not always antonyms, and it can be difficult to draw lines between them. Since they are not binary on/off positions, and it is also possible for something to be both, confusion can occur if there is a difference in expectations. While public space is accessible to all and may be used by anyone, there may still be an expectation that activity within this space is personal and private, in that it is not shared with everyone, in the same manner as a private conversation in a public park. In digital public spaces, these boundaries between public and private are often much less visible, and this can lead to situations where misunderstandings arise or advantage is taken of the shareable nature of content that was perceived as private. It is even the case that the nature of what is ‘public’ has changed, as digital publics create long lasting persistent records that can be shared long after the initial interaction.

Under existing notions, privacy is often thought of in a binary way – something is either private or public. According to the general rule, if something occurs in a public place, it is not private. But a more nuanced view of privacy suggests that [a particular instance of public shaming on the internet] involved taking an event that occurred in one context and significantly altering its nature – by making it permanent and widespread.

(Solove, 2007)

Digital information space is constructed through sharing of information, much of which is personal. While we have already discussed that there are many benefits to connectedness, there are risks to allowing others unrestricted access to information about yourself. There must therefore be consideration of who any particular digital content is made available to, what can be done with it, and whether it might be ‘overheard’ by people other than the intended recipient. By comparison with physical public space, some activities which people carry out in public in large conurbations are only acceptable because of a sense of anonymity: that as part of the crowd, you are unidentified and (potentially) untraceable. If this anonymity is lost, the activities may become riskier or less desirable. In Chapter 4 we discussed profiles, and how any interaction in the digital public space requires creating an online identity that might be tied to personal information and could be used to link individuals to their ‘real world’ identities. This, along with the persistence of digital information, means that seemingly fleeting interactions may become concrete and traceable, to the detriment of privacy. Intrinsic to the digital public space is the ease of sharing information, but this raises many questions about what should be shared and what should be kept private.

Security, as well as privacy, is a critical issue for consideration: how information that is deemed to be private is maintained as such. The interconnectedness of digital systems makes it easier to cause large effects across wide groups of people, which opens up opportunities for criminals. Marc Goodman has spoken about how the connected world has created a ‘crime singularity’, citing for example the Sony PlayStation hack in 2011 that compromised the banking details of 100 million people (Goodman, 2012). For the first time in human history, it is possible for one person to perpetrate a crime of theft against millions of other people in one go. If the security of a database fails, it compromises the privacy of many people. Equally, damage can be caused when private information enters the public space and becomes spread and shared: the practice of ‘doxxing’, carried out by groups wishing to attack individuals, involves sharing personal details such as names, addresses and family information, which can lead to those targeted fearing for their personal safety. Perpetrators of such information distribution often appear to hold a perception that if such information can be uncovered by a determined individual, then there is no moral barrier to spreading it to the public space, with little consideration for the potential effects.

Particularly in terms of digital communities and social networks, there is a lot of grey area between things that are posted ‘publicly’ (with the expectation that anybody can see it) and things that are posted ‘privately’ (for a small privileged number of receivers). Information on social networks is often conveyed ‘publicly’ within a specific group but there is an expectation that it will not be viewed beyond these boundaries. This is maintained either explicitly by ‘privacy settings’ that require logins and passwords to access the content, or by what danah boyd calls ‘security through obscurity’ mentioned in Chapter 1: that nobody outside of the intended audience will be interested in the content and therefore will not seek it out (boyd, 2007). There is also a significant social factor of trust involved: that secrets shared with trusted friends are not distributed beyond that circle. There are many moral and ethical dilemmas regarding what we share online about ourselves and about others.

When people think about private digital information, they often consider things like personal details, credit card and account information such as in the above PlayStation example, or emails and documents that are intended to be sent to a specific, small audience only. Security of this information is very important, and many tools and technologies exist to ensure that only the intended recipients can read, say, an email that you send to your family. But because every process and interaction in digital public space can be recorded, privacy can be compromised through the gathering and distribution of information that appears innocuous, or through the use of information which is created by the very act of using the digital public space. The time at which you accessed an individual website may not be considered private, but your privacy may be compromised if this data over time reveals your movements each day. These considerations mean that a different awareness of privacy is necessary when existing in digital public space. When designing and working with digital spaces it is important to consider how privacy might be upheld, what risks to privacy might exist, and whether giving up privacy is an acceptable loss for the benefits entailed by systems that rely on mass data collection and thus cannot function in an entirely private manner.

There is some evidence that this awareness is already beginning to be adopted by those who have grown up living with digital public space. Carrie James, in conducting research with young people, describes how they consider what privacy means to them and how this changes their approach to the internet: ‘In their accounts, privacy is about controlling content about yourself and the audiences for that content. Madeline, age 21, said, “Privacy, to me, means that people I don’t know can’t find out where I am. I don’t want people that I don’t know knowing where I am or what I’m doing.”’ (James, 2014). She found that ‘A little more than half of the tweens (52 percent) and teens and young adults (55 percent) that we interviewed asserted that privacy is diminished online’ and quotes a twelve-year-old who explains that ‘basically anything in the cyberspace is, when you put it on something electronic, it’s gone. Your privacy is pretty much gone’.

Although the digital public space offers new ways to structure and manage our social interactions, as discussed in Chapter 4, we often fall back on actions and behaviours that would have been appropriate in non-digital space but now carry greater risks because of the changed nature of digital interaction; something written in a group email or posted on a forum is not the same as a conversation in a pub, because it persists and can be shared with others. This means that the repercussions of such behaviour can be far more widespread and persistent: ‘Invaded privacy, stolen words, racist speech – offenses such as these have existed in human life for eons. Yet when they are committed in networked publics in a globally interconnected world, the stakes are arguably higher, the harm arguably deeper or at least more lasting’ (James, 2014).

Some of these risks may not be immediately apparent to people using the digital public space as an arena for interaction:

In daily off line life, these boundaries [for controlling privacy and disclosure] tend to be obvious. We are aware of who we are talking to, through vocalization or bodily posture and gestures; who we write to; what we have heard and from whom; who can see us walk down the street; who can see us use the toilet; if cameras are pointed at us (although some closed-circuit television requires actively looking for it); who or what has touched us; and who and what we have touched (whether friendly or unfriendly).

(Houghton & Joinson, 2010)

This attentiveness to where our social boundaries are and how information is distributed is an intrinsic property of our behaviour as described in Chapter 2, and thus the fact that the boundaries can be much less clear in digital space may cause disconnects between our perceptions and the reality.

Digital trails: data created unknowingly

The scattered bits of data in the electronic universe can seem to be ‘nothing more than the odds and ends of our lives – data lint that only the perverse would bother collecting.’ What makes current attacks on privacy so insidious is the fact that few of us have any idea how those bits of lint are being gathered into a lint ball of truly remarkable dimensions.

(Sykes, 1999)

A surprising amount of supposedly ‘private’ knowledge can be gained from information that the majority of people do not even think about giving up, or whose power they are unaware of. By carrying out our lives in the digital public space we may be revealing more about ourselves than we expect, and this can lead to violations of privacy.

Most digital interactions leave a trace, and these can be used in remarkable ways to overcome seeming anonymity within the digital public space. The following examples are simply that: examples. It would be almost impossible to list all the ways in which information about us is recorded as we live in a world overlaid with digital public space. Traffic analysis, for example, uses your movements in digital space to recover information about you. Traffic data (collected regarding online usage) does not contain information on the content of messages that you send, but simply the fact that you sent them, and to where. When sending information over the public internet, whether it be emails, what you enter into web fields or simply connections to a website, this traffic data may be publicly visible and can be collected and analysed. This traffic analysis can be useful for users, for example to inform search engines by examining how people navigate between pages and which links are most popular. It may help identify patterns to prevent criminal activity such as credit card fraud. But it can also be used to infer sensitive information which may be used negatively, for example allowing a company to know whether customers have been looking at the sites of competitors, and to offer a lower price only if that were the case. To avoid such traffic monitoring, it is possible to use services such as Tor which conceal traffic and location information, but while some argue that such encryption services should be built as standard into networks (Danezis & Clayton, 2007), others express concern that their use may conceal illegal activity. Indeed, concerns over crime, and in particular terrorist activity, have led several governments to introduce rules (such as the EU Data Retention Directive 2006, or the Communications Data Bill proposed by UK Home Secretary Theresa May and nicknamed the ‘Snooper’s Charter’) whereby internet service providers must keep customer data for a significant amount of time and turn this over to the police in the case of an enquiry. Some organisations such as the Open Rights Group1 have argued that this sort of surveillance may contravene privacy rights.

In Chapter 5, we discussed the common practice of giving up ownership of personal data collected by ‘free’ services. Many websites use cookies: data files which store individual information in web browsers. This allows the pages to ‘remember’ you, to present you with appropriate content, to prevent you having to log in repeatedly, or to enable you to build a ‘shopping cart’ that retains your selections while you browse other parts of the site. In 2009, an EU directive was introduced which, as implemented in the UK, meant that all websites using cookies must make visitors aware of this fact and give them an opportunity to opt out should they wish. But because cookies work to make the browsing experience smoother and more useful, there is not a great deal of visible incentive to opt out, despite potentially compromised privacy. Many sites found it difficult to comply with this directive and retain their functionality. Enforcement was limited, and the scope of the law was clarified in 2013 to include exceptions.2 Cookies do not record large amounts of personal information; however, if shared between servers, they could contribute to a large body of data from which personal information could be inferred, compromising privacy.
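At a technical level a cookie is nothing more exotic than a small name/value record which the server asks the browser to store and return with later requests. The following sketch, using Python's standard http.cookies module, shows roughly what that exchange looks like; the session identifier and attributes are invented for illustration.

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header asking the browser to remember us.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # illustrative value, not a real session
cookie["session_id"]["max-age"] = 86400  # keep for a day
cookie["session_id"]["httponly"] = True  # not readable by page scripts
print(cookie.output())
# prints something like: Set-Cookie: session_id=abc123; Max-Age=86400; HttpOnly

# Browser side: the stored pair is sent back with every later request,
# which is what lets a site 'remember' a visitor across pages and visits.
returned = SimpleCookie()
returned.load("session_id=abc123")
print(returned["session_id"].value)      # -> abc123
```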

This kind of information, about digital activity rather than the content of the activity itself, is encompassed by what is known as metadata. This can be remarkably powerful, even at an individual level: for a simple demonstration of this, consider what might be inferred about someone who visits a football team’s website, then a ticket sales page, then makes a telephone call to a friend, then visits their workplace’s sick leave policy page, before sending an email to their boss. None of the content of their communication is known, but a pattern of information is revealed. Metadata also means it is extremely difficult to fully anonymise digital information, because these kinds of links function in such a way that unique patterns of behaviour can be identified. This tendency will only increase, as connectivity becomes more ubiquitous and pervasive, and everyone carries with them or is surrounded by technology which generates metadata about their activity. This reduces the possibility that any such data can be made truly anonymous, because of the detail captured and links that go from it to other information.3
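To make the point concrete, the toy sketch below (written in Python, with entirely invented records) shows how little processing is needed to turn a handful of metadata entries of the kind just described into a legible story about someone's week. Nothing in it touches the content of any message; the timestamps, sites and contacts are illustrative assumptions only.

```python
from datetime import datetime

# Invented metadata records: (timestamp, kind, endpoint). No content is stored.
events = [
    (datetime(2016, 5, 14, 18, 2),  "web",   "tickets.football-club.example"),
    (datetime(2016, 5, 14, 18, 9),  "web",   "ticketsales.example.com"),
    (datetime(2016, 5, 16, 8, 40),  "phone", "contact: Dave"),
    (datetime(2016, 5, 16, 8, 50),  "web",   "intranet/sick-leave-policy"),
    (datetime(2016, 5, 16, 8, 55),  "email", "to: boss@workplace.example"),
]

def timeline(events):
    """A crude 'profile' built purely from who/what/when, never from content."""
    return "\n".join(f"{when:%a %H:%M}  {kind:<5}  {endpoint}"
                     for when, kind, endpoint in sorted(events))

print(timeline(events))
# A human (or a simple scoring rule) reading this can guess at a weekend match
# and a questionable Monday sick day, without seeing a single message body.
```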

As mentioned in Chapter 5, many people sign user agreements giving up ownership and thus privacy rights in order to use free services, often with no realisation that this is what they are doing. This may be because they do not fully understand the ramifications of the agreements that they sign, or because they do not read them in the first place. It is also important to remember the distinction between security and privacy. Many people use secure cloud services to store personal files, which allow them to access these files from any computer. But although the ‘cloud’ sounds like an impartial, ephemeral space that you can purchase a part of, what cloud services really are is storage on networked servers. As put by the Free Software Foundation Europe: ‘There is no cloud, just other people’s computers’.4 Although users place their trust in services such as Dropbox and Gmail to store their information in a way that cannot be accessed by those who are unauthorised, the owners of these services are still able to access the content should they wish (though this would perhaps be breaking the trust of the users).

But while people may not be aware of occasions when the privacy of their documents is potentially compromised, at least it is easy to understand what a violation of privacy means in this context: someone unauthorised accessing your files. On the other hand, they may not even realise how much personal data they are producing just by moving in the digital public space and carrying out activity there, or what it might be used for and what implications this might have for their privacy. This data may not appear to have much weight or relevance on its own, but can be extremely powerful in aggregate. Some of this power arises from the sheer weight of information, and the fact that it can be connected to draw conclusions: the dossier effect. When all the information about a person gets cross-referenced, it can reveal significant amounts, and there are many uses for this, both lawful and unlawful.

What other kinds of apparently inconsequential metadata might you be generating in the digital public space? One category of information is location data. There are several ways this might be collected, for example the IP address which locates your PC, or GPS data collected by mobile phones. This GPS information might be attached to files that you create (giving information on where and when, for example, a photograph was taken). Alasdair Allan and Pete Warden found that Apple products including iPhones track and keep location data, even migrating it across devices (Allan & Warden, 2011). This data is not encrypted, meaning that it can be accessed by anyone, and potentially used without your knowledge to track your movements. Garfield (2012) describes how ‘Allan and Warden had no problem translating the recorded coordinates into maps, and one particularly striking screengrab from their presentation showed a train trip from Washington DC to New York City, with Allan’s whereabouts being registered every few seconds’. It is not clear why Apple keeps this data, but it does appear to be a conscious decision, and their Licence Agreement implies that they may use it ‘to provide and improve location-based products and services’ (Garfield, 2012). But extremely detailed pictures can be built in this way, not only of the lives and activities of individuals but also of patterns of behaviour across a wider population.
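Such embedded location data is also trivially easy to read back out. As a hedged illustration, assuming the third-party Pillow imaging library is installed and the photograph actually carries EXIF GPS tags, the short sketch below extracts the coordinates stored inside an ordinary JPEG; 'holiday.jpg' is a placeholder filename, not a real file.

```python
from PIL import Image                 # Pillow (third-party), assumed installed
from PIL.ExifTags import GPSTAGS

def photo_location(path):
    """Return (latitude, longitude) embedded in a JPEG's EXIF data, or None."""
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853)         # 34853 is the standard GPSInfo tag
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

    def to_degrees(dms, ref):
        # Recent Pillow versions return rational values that float() accepts.
        degrees, minutes, seconds = (float(x) for x in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(photo_location("holiday.jpg"))  # placeholder filename
```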

Another, older example of this kind of data harvesting is supermarket loyalty cards. By offering rewards for their use, these cards allow the store to keep track of every purchase that is made there, and can be used to identify trends at both an individual and a wider level. This may allow the store to target particular items to individuals based on what they have bought previously, often in highly analytical ways. For example, algorithms can learn that people who buy pregnancy tests, and then maternity clothes, and then infant formula and nappies, might be prime targets for pureed baby food and teething rings. It is not necessarily that a greater amount of information about individuals exists, but that it can now be collected on a large scale, connected with other information and analysed in great detail through the mechanisms of data mining. This large-scale interconnectedness of vast data sets, and the computing power to analyse them, is part of the previously mentioned ‘big data’ capabilities.
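The pattern spotting involved need not be sophisticated. The minimal sketch below, using invented purchase histories and thresholds, counts how often one purchase is later followed by another and suggests follow-on items for a customer whose basket matches the learned sequence; real retail data mining is far more elaborate, but the principle is the same.

```python
from collections import Counter

# Invented purchase histories, ordered oldest to newest.
histories = {
    "cust_01": ["pregnancy test", "maternity clothes", "infant formula"],
    "cust_02": ["pregnancy test", "maternity clothes"],
    "cust_03": ["dog food", "maternity clothes", "wine"],
    "cust_04": ["pregnancy test", "maternity clothes", "infant formula", "nappies"],
}

# Count how often item A is later followed by item B across all customers.
follows = Counter()
for items in histories.values():
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            follows[(a, b)] += 1

def suggestions(customer, min_support=2):
    """Very crude rule: if A->B is common and the customer has bought A, offer B."""
    bought = set(histories[customer])
    return {b for (a, b), count in follows.items()
            if count >= min_support and a in bought and b not in bought}

print(suggestions("cust_02"))   # -> {'infant formula'}
```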

An example of the power of this big data analysis has been shown by Kosinski et al (2013), who were able to accurately predict a range of personal information about Facebook users, including sexual orientation, ethnicity, and religious and political views, simply from items that they ‘liked’ using the social media service. The fact that these simple individual actions accumulate to form a powerful profile of a person is probably not something that most people are aware of, and they may be quite horrified to realise that this is information that they are providing to Facebook as a company, even if not more widely.
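Kosinski and colleagues reduced a huge person-by-like matrix with singular value decomposition and then fitted regression models for each trait. The sketch below reproduces that general shape of pipeline on random stand-in data using scikit-learn, purely to show how ordinary the machinery is; it is not their code or data, and the 'trait' here is planted artificially.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in data: 1,000 'users' x 500 possible 'likes' (1 = liked), plus a binary
# trait to predict. The real study used millions of users and real traits.
likes = rng.integers(0, 2, size=(1000, 500))
trait = (likes[:, :10].sum(axis=1) > 5).astype(int)   # artificially planted signal

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.25, random_state=0)

# Dimensionality reduction followed by logistic regression: the same broad
# recipe reported by Kosinski et al (2013) for dichotomous traits.
model = make_pipeline(TruncatedSVD(n_components=50, random_state=0),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
```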

As the digital public space extends into more aspects of our lives, there is the potential that even more of our private information could inadvertently be made available. In Chapter 3, the internet of things was discussed, with technology currently in development which will connect objects and items that we purchase and use, so that information can be shared at various stages of their life cycle. This, however, means that metadata will be available from these objects, and this could have privacy implications. If objects have RFID tags embedded in them in order to simplify supply chain monitoring, this identification will not necessarily stop once they are purchased. Tracking of merchandise via embedded RFID, as is already used by organisations such as Wal-Mart and even the US Department of Defence (Hayles, 2009), leads potentially to the tracking of the owners of said merchandise. There are implications for privacy if this information is shared and utilised by organisations with interests counter to our own. ‘Increasingly, we face a world where the things on our person, near to the body or means of transport, will be communicating with the network of embedded chips in the environment, allegedly for our benefit’ (Featherstone, 2009). This connectedness of our possessions is already coming into effect with identification documents such as passports, and even NFC5 enabled payment cards. Because these can be passively read, and the readers (and the tags themselves) may not always be obvious, they carry an inherent risk that they can be ‘overheard’ to reveal information that we may prefer to keep private. This is already leading to countermeasures by some who fear such surveillance, such as wallets lined with aluminium foil to defeat unauthorised reading of RFID encoded cards. As our objects become Bruce Sterling’s ‘spimes’ (see Chapter 3), contributing to an information mesh of the digital public space, we too, by our ownership of them, become part of this information space.


Figure 6.1 RFID embedded object (Oyster card)

Personal information might also be revealed about us through logjects we own temporarily, which record their activity and transactions. Although objects which record their history might be extremely useful for supply chains, and might encourage responsible use and recycling, it may be the case that you do not wish to have particular products associated with you. If their entire history can be read, including location data, owners’ privacy might be compromised: for example, your alcohol consumption could be tracked through the life cycle of the bottles that you purchase, and how quickly they are emptied and disposed of. If objects are tagged as belonging to you, should you reserve the right to have them ‘forget’ you if you do not want to be listed as part of their history? It may be that these logjects move us away from built-in obsolescence as we become more responsible for how we treat the things we own, knowing that they will retain our history of use. But if this information can be read by external organisations, it increases the chance that anonymity will no longer be possible.

This networking of objects becomes even more critical when the objects are ones whose function is seriously and continuously integrated into our lives, and whose disruption could be catastrophic. The most immediate example is health-related devices, which can collect extremely personal data intrinsically linked to wellbeing. Compromised privacy in the data on these devices could mean information being released to those who we do not wish to have it, for example employers. While medical devices such as pacemakers are not new, there are now many more that routinely collect data and transmit it digitally to healthcare providers, or make it accessible to patients in order to monitor their own health. Examples include insulin pumps for diabetics which provide constant monitoring and display blood glucose levels. Connecting these devices in order to allow them to transmit data to a central store may offer benefits to healthcare providers, but adding connectivity does potentially allow for abuse, especially if there is a two-way connection allowing the device to be accessed remotely, as is sometimes already the case. If this connection is accessed by someone unauthorised, it could lead to attacks whereby an individual is harmed by someone stopping their pacemaker or giving them too much insulin.

This assumes criminal intent: someone hacking maliciously into what is private data. However, even well-intentioned action can be harmful. In November 2014, the support organisation Samaritans released an app which tracked tweets and analysed them for potential evidence of those who might be struggling to cope and in need of support. There was widespread criticism of this app (New Scientist, 2014), with concerns both about the collection and sharing of mental health information without the consent of those concerned, and about the potential exposure of vulnerable people to trolls and others who wished to react negatively.

This type of data sharing also applies to wearable devices which may be tracking basic physical data such as heart rate and activity levels. The potential effects of this may be less severe or immediate, but they currently have an impact across a wider range of people. Devices such as the Fitbit are designed to turn individual data into shareable social information, in order to promote competition in achieving health goals. But this information could also be shared without consent, added to the total digital profile of an individual, and potentially used for purposes which may compromise privacy. For example, someone with a wearable device which records sleep patterns may not wish to have that information accessed by their employers if it shows that they are only getting four hours’ sleep per night, leading to questions over whether they are fully able to perform their role adequately. Information on movement, sleep, even what you eat could be used for discriminatory purposes if accessed by insurers or employers.

It may even be the case that private data which relates to your own health and wellbeing, such as blood test results, is available to others but not to you. Although it can be accessed by clinicians, and potentially held on a larger central database which may not be secure, medical test data is not always available to the individuals who want to access it without considerable effort. And even if the data is accessible, it may not be in a format that promotes understanding by the patient. This is a question of data ownership as well as privacy.

Case study 6.1: Blood data visualisation and use of patient data

There are many challenges and stresses for patients who, for whatever reason, have experienced kidney failure and need to be given regular dialysis treatment. But communicating with your doctor should not be one of them. Challenges can also exist for physicians trying to help patients adapt their behaviour to improve their health. They may have huge amounts of data at their fingertips, but translating this into something that makes sense to patients can be a struggle without support.

In an attempt to support these doctors and help patients, the Creative Exchange ‘Kendal Blood Data Visualisation’ project created a digital app which takes the large volumes of (mainly numerical) data and displays it in easy to understand ways. These might show patients how, for example, their improved diet and eating habits over the course of a month have affected the levels of potassium and phosphate in their blood results; levels which are too high or too low can lead to heart and bone disease. Patients can increasingly take the role of ‘partners’ in treatment, responsible in part for their own wellbeing. This can be empowering for some patients.


Figure 6.2 Blood data visualisation project

What is interesting here is that there is no new data being generated by the app; it is simply a tool for easier communication between doctors and patients. The visualisation techniques tease out the most important bits of information for the patient, and help the clinician explain them in a way that is straightforward to grasp for someone who might not be able to quickly interpret a large table of seemingly meaningless numbers. However it is important to remember that the original data might be interpreted in many ways by many different people.
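For readers curious about what such a visualisation involves technically, a minimal sketch follows, using matplotlib and entirely invented readings rather than any project data: it plots a month of potassium and phosphate results against shaded indicative target ranges, which is essentially the translation from a table of numbers into a picture that the project describes.

```python
import matplotlib.pyplot as plt

# Illustrative values only: four weekly blood results for a fictional patient.
weeks     = [1, 2, 3, 4]
potassium = [6.1, 5.6, 5.2, 4.8]   # mmol/L
phosphate = [2.1, 1.9, 1.7, 1.5]   # mmol/L

fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharex=True)
for ax, values, label, target in zip(
        axes, [potassium, phosphate], ["Potassium", "Phosphate"],
        [(3.5, 5.5), (0.8, 1.5)]):          # indicative target ranges
    ax.axhspan(*target, alpha=0.2, label="target range")  # shaded 'safe' band
    ax.plot(weeks, values, marker="o")                     # the patient's results
    ax.set_title(label)
    ax.set_xlabel("Week")
    ax.set_ylabel("mmol/L")
    ax.legend()

fig.tight_layout()
plt.show()
```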

In this project, development of the prototype was possible only because one of the researchers (Jeremy Davenport) is himself a dialysis patient working in partnership with clinicians, and was able to use his own data, and give consent for it to be used in the prototype. Jeremy commented that he was happy for his data to be used in this way given its value in building a credible prototype, although he did reflect on privacy issues; the prototype will be disseminated with anonymised data (J. Davenport, 2016, personal communication, 29 May).

Ethics of data usage are taken very seriously by the National Health Service (NHS), and gaining access to medical blood test data from patients for research or for the development of such tools and processes requires significant justification. A detailed ethical approval application process must be carried out to ensure that patient privacy is protected. In theory, all of this data is anonymised, but with increasing computing power it becomes more difficult to ensure that patients cannot be identified, especially if, for example, they belong to a small subset of people with a particularly rare condition. While the ethics of privacy are taken very seriously, it is conceivable that new innovations in the digital public space might mean that data can be used for purposes which were not considered when its release was approved.

In May 2016, New Scientist magazine obtained access to documents detailing an agreement between the Royal Free NHS Trust (which runs three London hospitals) and Google-owned artificial intelligence company DeepMind (Hodson, 2016a). This followed an announcement in February of the same year that DeepMind was working with the NHS to build an app to help hospital staff monitor patients with kidney disease. But the documents revealed that DeepMind got access not only to current records of kidney health, but also to five years’ worth of back records of full medical data, revealing many aspects of health information including HIV status and details of abortions and drug overdoses. The aim of such projects is that the information from an individual patient can be compared to millions of other cases, potentially identifying whether they are in the early stages of a disease with no symptoms if their course maps onto that of many others. Tests could then be run to confirm the preliminary diagnosis. An alternative use for these large data sets could be predicting outbreaks of infectious disease. However, concerns have been raised (Hodson, 2016b) that Google and the Royal Free did not obtain regulatory approval for the use of this data, which could lead to privacy violations since identifiable patient information is included. In addition, there are concerns over the fact that Google is taking ownership of this data, from which it could potentially profit, as a closed private organisation with no obligations back to the public whose data it is using. Such privacy concerns must be carefully considered, especially when the data concerned has major implications for the wellbeing of large numbers of people.

Networked information from your devices and objects may also include personal details even when they are not on your person – such as the time you set your heating to come on revealing when you arrive home from work – and may not be securely stored. When we add connected things to our homes or networks, we assume that we will be the only ones able to control how these objects act. But breaches in this information security could enable crime to be committed based on information about your personal habits. It is even possible to imagine scenarios where your smart home could be ‘hacked’ into taking actions that you do not want – such as the one mentioned in Chapter 3 where the doors are locked and you cannot leave.

Permanence of information leading to a lack of privacy is also, ironically, potentially an issue with Bitcoin, despite privacy being among its defining aspects because of its cash-like, anonymous nature. As described in Chapter 5, Bitcoins can be exchanged securely, and because all transactions are recorded within the code there is no risk of ‘double spending’, meaning that they can be treated like cash. Consequently, they are extremely useful for transactions that need to be anonymous online, and are popular with whistle-blowers and for the trafficking of secrets. However, since all transactions are embedded within the block chain, there is a permanent record of how each coin has been used.
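This traceability is easy to demonstrate in miniature. The toy sketch below builds a simplified ledger in the spirit of the block chain (the transaction identifiers are invented, and real Bitcoin transactions are far more complex) and walks backwards from one payment through every earlier transaction that fed into it.

```python
# Toy ledger: each transaction lists the earlier transactions whose outputs it
# spends. Identifiers are invented; real Bitcoin transactions are more complex.
ledger = {
    "tx1": {"inputs": [],      "to": "alice"},   # coin created (mined)
    "tx2": {"inputs": ["tx1"], "to": "bob"},     # alice pays bob
    "tx3": {"inputs": ["tx2"], "to": "carol"},   # bob pays carol
    "tx4": {"inputs": ["tx3"], "to": "dave"},    # carol pays dave
}

def history(txid, ledger):
    """Every transaction reachable by walking a payment's inputs backwards."""
    seen, stack = set(), [txid]
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(ledger[current]["inputs"])
    return seen

# Anyone holding a copy of the ledger can reconstruct where dave's coin came from.
print(sorted(history("tx4", ledger)))   # -> ['tx1', 'tx2', 'tx3', 'tx4']
```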

As this selection of examples shows, there are numerous ways in which our actions create information about ourselves that can make its way into the hands of others by legal or illegal means. Craig Mundie lays out another critical issue, that of consent to use:

Today, there is simply so much data being collected, in so many ways, that it is practically impossible to give people a meaningful way to keep track of all the information about them that exists out there, much less to consent to its collection in the first place.

(Mundie, 2014)

Even if it were possible to give an individual the opportunity to consent, or not, to every piece of personal information requested or collected in the digital space, this would take vast amounts of time and energy. In addition, most people are not easily willing to opt out, since they see no immediate impact and want to be able to access services that rely on this consent; these are often free in exchange for targeted advertising. Sensitive information may also be collected from completely public sources, and can be used in conjunction with dossiers collected from digital spaces to build up a detailed personal profile of an individual. This passive collection does not necessarily provide an opportunity for consent. And if passive collection becomes a ubiquitous and intrinsic feature of modern life, there will potentially be large societal and individual benefits that come with being included in it; choosing to opt out could have significant costs, increasing the likelihood that consent will be given, with all the associated negative impacts. A possible solution to this could be to change the point at which consent is necessary: it is not the collection of the data itself that matters, but the use of it. Mundie suggests that one option might be a ‘wrapper’ of metadata, encasing any personal information and requiring individual authorisation for each request to open it and make use of it for specific purposes. This wrapper would describe the origins of the data and the rules governing its usage, allowing blanket authority to be given for beneficial purposes while minimising potential misuse. This would, however, require that all data collected complied with this protocol, which seems optimistic given the myriad ways in which our personal data haze surrounds us and bleeds into every interaction we make in digital public space. This data generation is also not limited to passive collection of metadata but might be linked to our online profiles, personal identity or identities that we create for ourselves as the public-facing aspect of our existence in particular parts of digital public space, as discussed in Chapter 4.
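Mundie's proposal is architectural rather than tied to any particular technology, but its shape can be sketched. The hypothetical structure below, with names and rules invented for illustration, pairs a piece of personal data with metadata describing its origin and permitted purposes, and refuses any use that has not been authorised.

```python
from dataclasses import dataclass, field

@dataclass
class WrappedDatum:
    """A hypothetical 'wrapper' in the spirit of Mundie's proposal: the value
    travels together with its provenance and the rules governing its use."""
    value: object
    origin: str
    permitted_purposes: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def use(self, purpose, requester):
        self.audit_log.append((requester, purpose))   # every request is recorded
        if purpose not in self.permitted_purposes:
            raise PermissionError(
                f"{requester} may not use this data for '{purpose}'")
        return self.value

heart_rate = WrappedDatum(
    value=62,
    origin="wrist-worn monitor, 29 May 2016",
    permitted_purposes={"share with my GP"})

print(heart_rate.use("share with my GP", requester="gp_surgery"))   # allowed: 62
try:
    heart_rate.use("set insurance premium", requester="insurer")
except PermissionError as err:
    print(err)   # -> insurer may not use this data for 'set insurance premium'
```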

Digital social spaces: speaking privately in digital public space

We have spent some time talking about the fact that the digital connected world consists of a variety of digital public spaces, with different properties and qualities, rather than one amorphous digital public space. While these are all to some extent ‘public’ spaces, in many cases they have boundaries which mean that although anyone could in principle access these areas, in practice they are restricted either through the shibboleth of cultural understanding or by arbitrary limits on participation. These are public spaces where the notion of ‘public’ is restricted to those who fulfil certain criteria. With the right properties, these can become ‘safe spaces’ where members of a particular community can expect to maintain a certain level of privacy, and where information shared there will not escape into other online spaces or the ‘real world’.

Jessa Lingel writes about a specific example of this type of behaviour, in members of online community spaces and support groups for those interested in extreme body modifications (EBM). Because this behaviour is stigmatised, there is a tension between providing information in online forums to others to whom it would be beneficial (and who might find this information difficult to acquire elsewhere), and keeping the information exclusively within the community and sharing secrets only with trusted ‘insiders’.

This is the clearest expression of the politics of information: Information is not a stable, passive artifact in accounts from our participants; it is interactive, collective, and performative. Information is thus political in that it serves as the means of deciding who can be trusted and who cannot, who is a member and who is not.

(Lingel, 2013)

However, there is a limit to how private and ‘safe’ such spaces can be, by the very nature of the fact that they need to be accessible to any member of the group, and that there is no fool-proof way of confirming eligibility. There is also a measure of trust given to participants that the information will not be taken outside the confines of the specific space. Because this is fallible, and total trust may not be given, the area may not be considered an entirely safe space; behaviour and speech may still be regulated. The level to which this occurs may be dependent on the platform and type of space. For young people, who have grown up using online spaces to communicate, there now seems to be some acceptance of the idea that even if an attempt is made to restrict information sharing to supposedly exclusive digital public spaces, like a restricted Facebook group, there is a risk that by doing so the information may make its way further than intended and to other spaces. But it is easy to become complacent in a seemingly safe space, especially if others are sharing private information freely.

Some of the wariness of young people might be because the ownership of the space within which information is posted is held by corporations, so content may be shared not only with peers but also with the curators of the information. Jared, age 24, said to Carrie James:

Online you almost give up your privacy. When you send an email, that email is actually the property of Yahoo or Gmail or whoever, or Harvard or the museum, whoever’s site you’re using. I mean, typically they’re not going to look at these things, but, you know, they have that right.

(James, 2014)

It is increasingly the case that digital spaces which were considered safe, anonymous or private are bleeding into other contexts, and this can cause significant issues. One example of this is the phenomenon of employers monitoring or utilising the social media profiles or other online presence of current or potential employees. Several media reports have covered individuals who have been fired because of things posted on their personal social media profiles, and it is now common practice to search these profiles before interviewing candidates or offering them a position. Activities that are neither immoral nor illegal, merely inappropriate for a work context, and which would previously have remained part of a closed-off personal life, can now bleed over and affect other spheres of life: they are no longer entirely private and can easily be found and connected to the individual. It may be that this propensity for searching online material means that the next generation learns to restrict its behaviour or curate its online presence more carefully, or alternatively that there will come a time when acceptance of youthful indiscretions posted online means that such material is no longer considered a barrier to employment. It is not yet clear which of these will take place.

The reason this material can be so damaging may not only be that the content being examined is inherently inappropriate, but that the context of what was considered a private space can be lost. There may be social cues, communicative norms and understanding associated with a particular group (for whom a message was intended) that get lost when the message is taken outside that group. This can particularly be the case with jokes; issues can arise when statements intended to be joking are removed from an ongoing context, and thus from the unspoken cues associated with humour. In some cases, this does not affect the outcome: if the content reveals attitudes which are incompatible with the role, then it could be argued that the context is irrelevant, as in the case of police officers disciplined for racist speech on Facebook (BBC News, 2011). However, a different view may be taken of cases such as that of Paul Chambers, who was charged with ‘sending a public electronic message that was grossly offensive or of an indecent, obscene or menacing character contrary to the Communications Act 2003’ for the following tweet:

Crap! Robin Hood airport is closed. You’ve got a week and a bit to get your shit together, otherwise I’m blowing the airport sky-high!!

(Chambers, 2010)

The tweet, which was posted after cold weather forced the closure of several UK airports, was found during an unrelated search by an off-duty airport manager (Wainright, 2010), and was reported to the police as a threat against the airport. Paul Chambers was fined £385, ordered to pay £600 costs and lost his job, though the conviction was later quashed after three appeals. The judgement in the final, successful appeal concluded that, in regard to the provisions of the Communications Act, ‘a message which does not create fear or apprehension in those to whom it is communicated, or who may reasonably be expected to see it, falls outside this provision, for the very simple reason that the message lacks menace’ (Collingwood & Broadbent, 2015). Two key points to note here are the expectations of audience and, equally, the ‘creation of fear and apprehension’. The judge highlighted the importance of context, and of the means of conveying a message, in ascertaining its intent. The large body of public support for Paul Chambers, which included many celebrities, made the key point that almost everyone reading the tweet would conclude from context that it was a joke rather than a threat; indeed the case became widely known as the ‘Twitter Joke Trial’, referencing the humorous nature of the content. But it is often very hard to pinpoint humour and intent on a contextless medium such as Twitter, when ambient intimacy (as described in Chapter 4) has not been established and a single point of contact is examined. Another example of this was the case of PR chief Justine Sacco, who was at the centre of a Twitter ‘storm’ over a tweet she made on 20 December 2013 before boarding a plane:

Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!

(Sacco, quoted in Ronson, 2015)

While she was on the 11-hour flight, the tweet spread virally and became a top trending topic, with thousands of calls for her resignation. She was indeed fired, and the results of her very public shaming had a major impact on her subsequent career and life. Having met with her, Jon Ronson reflects on the meaning behind the tweet:

Read literally, she said that white people don’t get AIDS, but it seems doubtful many interpreted it that way. More likely it was her apparently gleeful flaunting of her privilege that angered people. But after thinking about her tweet for a few seconds more, I began to suspect that it wasn’t racist but a reflexive critique of white privilege – on our tendency to naïvely imagine ourselves immune from life’s horrors. Sacco… had been yanked violently out of the context of her small social circle.

(Ronson, 2015)

Should it be argued that these tweets were ‘private’ because there was no expectation that anyone outside the immediate social circle of the tweeters’ ‘followers’ might see them? Not really, since they were broadcast in the ‘public’ realm of an unlocked Twitter account. Content that is uploaded to social media platforms, particularly those of the open variety such as Twitter or Tumblr that are designed for sharing and spreading, should not be considered private, and its content should therefore be judged accordingly before posting. This is certainly the view of the US Navy, which claimed in 1998 that Master Chief Petty Officer Timothy R. McVeigh violated its ‘don’t ask, don’t tell’ policy by having an online profile which identified him as gay. ‘The Navy insisted that by posting his biography on-line, McVeigh had waived his own privacy. In effect, they said, he had “told” the world of his sexual orientation. But what the Navy insisted was public was considered private by critics’ (Sykes, 1999). Such issues, particularly in the case of the ‘jokes’, arise both from the ambiguous public or private nature of online space and, because of the novel nature of these spaces, from a lack of societally accepted standards for what is acceptable behaviour in such ‘public’ spaces.

There is another key difference between digital venues for social interaction and the pre-existing physical public spaces where people met and discussed sensitive topics, or made jokes that could be taken as offensive or risqué: the record of activity. A group meeting in a pub to discuss their participation in semi-legal activities will not (leaving aside covert surveillance, which can occur in either context) leave evidence beyond the say-so of the other members of the group, who have been trusted enough to participate in the first place. However, in digital public space, all information transfer is recorded, potentially including metadata such as exact time and location. This could be shared outside the group in full, verbatim, leaving a permanent record. This reduces the safety of conversation, because of the knowledge that it can always be retrieved by someone.

This permanence also affects our public information across time. It might be the case that we no longer consider things we said in the past to represent the people we are now, and thus we would prefer them not to be public or publicly linked to our current selves. Ayalon and Toch (2013) looked at how people’s attitudes to the publication of information on social networks changed over time. They found a significant negative correlation between how long ago status updates were posted on Facebook and how happy people were for them to be shared. There was also a significant effect of major life changes – it seems that the participants did not consider the older updates relevant or representative, and no longer wanted them to be shared, with many saying that they would choose to remove or change them. The permanence of digital content means that it is harder to ‘forget’ things from our past or put them behind us. Retrospective privacy is important, and the new properties of digital public space mean that we must consider the implications of what we put online not only now, but in our future: and this may be something we are not yet equipped to do.

A solution to this may be to maintain secrecy rather than privacy. Increasingly, digital personae present a maintained identity that contains only selected aspects of the self. These are then, if not necessarily designed specifically to be public, more amenable to being exposed with less damage to reputation than a fully representative version of oneself. Individuals might even prepare several different versions of these for different audiences, and keep them separate. However, such personae may be hard to maintain in practice, because it can be difficult to isolate social groups and remember exactly which contacts belong to each. This is especially the case once these groups grow past a certain size – you may, for example, forget that a particular individual has been given access, and post something via that persona that you would rather they did not see. This is a particular hazard of the online space, as opposed to face-to-face interactions where the people present can be immediately seen and recognised.

Another problem arises when groups contain a mixture of different types of acquaintances. This can blur the boundaries of what is acceptable for that group, making it complex to create a suitable, consistent persona. There is also a limit to how many separate accounts can be maintained, leading to inevitable crossover. ‘Even with private accounts that only certain people can read, participants must contend with groups of people they do not normally bring together, such as acquaintances, friends, co-workers, and family’ (Marwick & boyd, 2010). There is also the risk, as outlined above, that things posted through a particular faceted profile may not stay within it and can quickly spread to unintended audiences, which might include those for whom the profile was deemed inappropriate.

In some cases, extremely personal information is willingly shared entirely anonymously in online spaces, disconnected from persistent identities, perhaps to gain support or simply to express sensitive or controversial opinions. As mentioned in Chapter 4, the art project PostSecret consists of a website where contributors submit (physical) postcards which are posted online anonymously. Each postcard reveals an intimate confession, which would generally be considered extremely private if traceable back to its originator. But despite being publicly posted, privacy is maintained by anonymity, and a large audience can still be reached. The ability to hold an anonymous profile and access digital space without linkage to a legal or public identity may be compromised by the increasing insistence on ‘real names’ by the companies who own the infrastructure of many digital spaces and who wish to link all aspects of online life. While this creates a fuller profile which is useful for marketing and personalisation, it means that it is more difficult for fragmented profiles to be established, and therefore for privacy between them to be maintained.

Why does personal information posted in these spaces become spread and distributed, when we might be uncomfortable with our own information being communicated in this manner? For this we must return to the basis of social media in our social communicative history: in gossip. We desire privacy for ourselves, but the propensity for gossip, to share privileged information, is strong. We have a high level of interest in the lives of others, particularly when the information is considered secret or illicit, so the vicarious sharing of other people’s private experiences should be an expected component of any medium that enables this information to be transferred.

Some people give up their right to privacy for notoriety, for the opportunity to be known, for fame. Sykes (1999) talks about the ‘exhibitionist society’, and uses examples such as live webcams, comparing this to television equivalents: ‘tell-all’ reality TV shows such as Jerry Springer. The ease with which content can now be published means that anyone can make themselves into a celebrity, but with this comes the risk of the loss of privacy that puts celebrity lives under public scrutiny. This kind of semi-consensual loss of privacy is not restricted to digital public space; however, the rise of what Jenkins et al (2013) call ‘spreadable media’ means that it is much easier for those who put themselves in public view to find that they are giving up more of their personal life than they may have intended. It may be that a constructed front was put forward as the image to be spread and the ‘personality’ to be in the public eye, but if this cuts too close to personal, private information then privacy may be sacrificed without knowing it. In Stuart Evers’ speculative fiction story ‘Everyone Says’, technology is described by which the public can ‘link’ to the lives of others to experience their senses, thoughts and emotions. The appeal seems to lie both in vicariously enjoying ‘celebrities’ who lead risky and hedonistic lives, and in the simple novelty of experiencing someone else’s life, even if it is apparently unexciting. But the pressure of this scrutiny when too many people become interested in one’s life is palpable, and leads to tragedy (Evers, 2015). While such technology is not likely to be realised any time soon, there are some aspects which ring all too true. By opening up our lives to others we leave ourselves open to scrutiny and the possibility of large-scale public attention, which for people like Paul Chambers and Justine Sacco can have negative consequences.

Part of the reason for this possible creep between the information that we deem public and that which we would prefer to keep private is that it is not just social content that we ourselves upload that can have an impact on our privacy. A consequence of the social aspect of much of the digital public space is that others may, by virtue of digitising their social experience, share on our behalf without fully considering the consequences. ‘None of us are exempt from this social fact – people who elect not to join online social networks are often unconsenting participants on Facebook, YouTube, and the like, since both well-intentioned and mal-intentioned users share photos, videos, and comments featuring these nonusers’ (James, 2014). In Chapter 5, we looked at how there may be many conflicting claims to contested digital objects and their copies. This applies not just to information put into the digital space by individuals about themselves, but also to information put there by others on the basis of non-digital interactions. Privacy may be violated if, for example, photographs placed online contain images of individuals who may or may not have given consent for this usage. People using social media might include personal information about their own activities which can be used to draw inferences about others; for example describing being at a party with a friend who does not have a social media presence. When this third-party content about an individual is uploaded, it can be linked to their personally constructed online dossier and profile, and this can warp the intended digital presence, adding information that the individual might prefer to remain private but which is then irrevocably linked with their online persona.

This extends beyond information recorded or passed on by those we may know outside of the digital space – it can also take place between strangers, if information collected in a physical public space is uploaded. Recording devices are becoming increasingly ubiquitous; camera phones are carried by the majority of people, video recorders may be mounted on cars to record potential traffic accidents and determine fault, and there is an increasing push towards cameras for officials such as law enforcement officers. This phenomenon has come to be known as ‘sousveillance’ (Thompson, 2013). The word, coined by Steve Mann, plays on the notion of surveillance and means the constant monitoring of all by all. Many of these digital recording devices also have the built-in capability to share the captured information instantly in digital space, with videos and photographs uploaded and shared from smartphones almost in real time in some circumstances. This has its own implications for privacy, since there is often no time to fully consider the repercussions that uploading information might have on individuals who might not even be known to the person doing the sharing.

An example of this can be seen in one of the earliest ‘memes’ and viral videos to spread across the internet. Artist and filmmaker Matthias Fritsch shot a video at the 2000 ‘Fuck Parade’ festival in Berlin, which he hosted on his website for several years and uploaded to YouTube in 2006. The video shows a tall, dynamic, Scandinavian-looking man confronting a drunken groper and then apparently leading a troupe of techno dancers. It was picked up by online communities, gained millions of views, was shared widely across the internet, and spawned a huge amount of remixed and reimagined material including figures, t-shirts and other merchandise. In 2009 the man playing the starring role, who came to be known as ‘Technoviking’, sent a cease and desist notice, and later sued Fritsch over commercial use of his image rights without authorisation. The German courts ruled that Fritsch must remove the man’s image from any material displayed in public, and pay him the €8,000 earned from YouTube advertising on the video plus legal fees (Fritsch, 2015). But it is in many ways too late: the meme and its associated imagery have spread far and wide and cannot be contained. This situation is another example of the controversies and limitations of copyright law in a media environment where work is shared, remixed and distributed. But it also highlights privacy concerns, as the video can be seen as an invasion of privacy for ‘Technoviking’, who at the time could have had no knowledge that his actions would be shared with millions or become a worldwide phenomenon. The case has raised many questions over the nature of image rights in a world where such material can be distributed so widely, so easily. While this is partly a matter of timing, the incident having occurred many years before such viral distribution became a well-understood phenomenon, similar viral videos with unknowing stars still regularly occur, suggesting that public behaviour is not limited by the knowledge of sousveillance (though it may have changed for many).

Sometimes such sharing is done not out of lack of thought for the rights of the individual, but in active pursuit of behaviour change, either from them or to set an example to others. This digital ‘public shaming’ (similar to the backlash against the ‘joke’ tweet of Justine Sacco) is becoming more common, and can apply not only to antisocial behaviour committed online, but also to behaviour in physical space, translated and transmitted online through images or video. Solove (2007) has written at length about ‘dog poop girl’, whose failure to clean up the mess left by her pet led to her image being shared widely online. The general opinion of many commentators seemed to be that since she was in public, she should not expect privacy, and that there was no ethical dilemma in the sharing of her image. This is despite the fact that she gained notoriety to the point of strangers recognising her in the street, for an incident that prior to widespread digital sharing would not have spread beyond those who witnessed it directly. There are ethical dilemmas associated with the use of digital public space not just in regard to your own privacy, but also in how you treat the privacy of others by sharing information about them, which you might be transferring from a physical public space to the digital public space.

Privacy implications of digital/physical world: surveillance and data

The examples above of ‘real life’ activity crossing over into the digital space and being distributed there are a critical reminder that when talking about digital public space we are not just talking about the digital information space, but also about how, as discussed in Chapter 4, digital aspects are overlaid and intertwined with the physical world. This includes such technologies as the internet of things, augmented reality, and data collection and delivery captured within and through physical public spaces.

During the trial period of Google Glass, many people were worried about the privacy implications of, in effect, a camera that could record everything its wearer sees as they go about their daily life and travel in public spaces. Public perception was that privacy might be negatively impacted by being recorded by someone wearing Glass, despite the fact that constant recording was not in practice feasible given the battery life of the device. These concerns may in part have led to the shelving of the project, though future versions may be in development. Whether these will address the privacy concerns remains to be seen.

These fears overlook the fact that much of our lives is already recorded. This is due not just to the sousveillance referenced above, which may be transient and associated with people passing through the space, but also to digital infrastructure increasingly built into (especially urban) environments. In particular it is important to note how this data collection can impact on privacy, through surveillance and technological innovation that can collect much more information than people might be aware of.

The first thing to consider is that physical proximity and access can have a surprisingly significant effect on digital privacy. A good example of this is the controversy that arose in 2010 when it emerged that Google Maps cars, travelling the world since 2007 to record photographic imagery to add to their maps, were also gathering electronic data and personal information as they went.

If you were on the Internet as one of Google’s Subarus rolled by, Google logged the precise nature of your communications, be it emails, search activity or banking transactions. As well as taking photographs, the cars had been consciously equipped with a piece of code designed to reap information about local wireless services, purportedly to improve its local search provisions. But it went beyond this, as another program swept up what it called personal ‘payload data’ and led the Federal Communications Commission in the US and other bodies in Europe to investigate allegations of wiretapping.

(Garfield, 2012)

There is no evidence that Google used the information, and the company claims it was collected accidentally because of legacy code embedded in their Street View car equipment. They did however admit that ‘it was a mistake for us to include code in our software that collected payload data’ (quoted in Garfield, 2012). But this situation demonstrates how easy it is to collect private information simply by driving along a street close to wifi routers and picking up signals that are openly accessible.

Wider sweeps of less detailed data from larger ranges of physical space can also lead to invasions of privacy. By collecting data from large areas and using algorithmic computing power, it is possible to undertake complex analysis that would be impossible without ubiquitous digital connectedness. This can reveal information about both individual movements and activities, and co-ordinated groups. The connectedness of physical public spaces might include the surveillance afforded by CCTV cameras and other recording devices (both publicly owned, and owned by individuals who upload the content to the digital public space) which can give wide coverage of activity, and provide a potentially permanent audio-visual record of what happens. The UK is particularly well known for significant amounts of CCTV coverage: Cuff (2003) suggests that by 2001 the average British citizen was captured on camera 300 times each day. The stated aim of this coverage is generally crime prevention, a task at which it appears to succeed, with reports of a 20% to 40% decrease in the crime rate following installation of cameras (Bowyer, 2004). But is this worth the associated reduction in privacy?6

The availability of camera footage in all spaces can lead to circumstances like those of the ‘dog poop girl’ mentioned above, where individual acts become spread and shared and have consequences for the people involved. But sophisticated analysis can also be applied to surveillance footage to gather demographic and other data. For example, face recognition technology has been the focus of much work, especially with heightened alerts in the wake of terrorist activity in many countries. These technologies might track particular groups, or conceivably allow any individual to be identified and tracked across multiple public spaces, and their activity monitored by, for example, governments. Certainly it is already possible to identify individuals from such footage, and not always for purposes of law enforcement: note the case of the comedian Michael McIntyre, whose image was tweeted by the National Police Air Support Unit. While the publication of the photograph was potentially a breach of police guidelines and a legal invasion of privacy, it appears that taking the photograph in itself was not. A statement from a Metropolitan Police spokesperson said that ‘this tweet does not, as far as we know, constitute a breach of data protection legislation’ (BBC News, 2016). Controversy similarly arose around a Russian face recognition service called ‘Findface’, which allowed people to match photographs, perhaps taken on the street, to social media profiles. Concerns were raised that it compromised privacy and could be used for unscrupulous purposes, such as by debt collectors. These fears seemed to prove justified when it was used by members of an online community to track down women who had appeared in pornographic films and spam their friends and family (Rothrock, 2016).

Cory Doctorow’s 2008 speculative fiction novel Little Brother describes a near-future scenario which includes gait recognition as well as facial recognition. The young characters, in order to play truant and skip school, place stones in their shoes to disguise their gait and avoid being identified by the monitoring systems throughout the campus. This is not too far-fetched, given that technology to identify individuals based on their gait already exists (Wang et al, 2003). While attempts have been made to address the issues inherent in this kind of mass surveillance capability, a code of conduct has not yet successfully been developed: in June 2015, talks to address this broke down after privacy advocates left in protest at the lack of engagement. They were disappointed in the conduct of industry representatives who are, for example, implementing systems designed to identify when high-value customers enter a shop (Lynch, 2015; Hodson, 2015).

There have been several responses to this constant threat of identification in public places, including art pieces, research and recommendations (see Case study 6.2). An example of this type of work is the CV Dazzle project, developed by artist Adam Harvey, which uses hairstyles and makeup specifically designed to confuse face recognition algorithms and prevent identification. The name derives from the dazzle camouflage used during World War I by naval vessels, which, to quote the project’s website,7 ‘used cubist-inspired designs to break apart the visual continuity of a battleship and conceal its orientation and size’.
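To see what such camouflage is working against, it helps to look at the kind of detector it targets. The sketch below is purely illustrative (it is not Harvey’s tooling, and the image file name is a placeholder): it runs OpenCV’s stock Haar cascade face detector, which searches for characteristic light/dark relationships around the eyes, nose bridge and cheekbones – precisely the regularities that asymmetric hair and high-contrast makeup are designed to break, so that no face rectangle is returned.

```python
# A minimal sketch of the kind of face detection CV Dazzle aims to defeat,
# using OpenCV's bundled Haar cascade (not the CV Dazzle project's own code).
import cv2

# Load the stock frontal-face cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("street_scene.jpg")          # hypothetical input image
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector works on greyscale

# The detector scans for light/dark patterns (eyes darker than cheeks, nose
# bridge lighter than the eye region); dazzle-style makeup disrupts these
# regularities, so fewer or no face rectangles are returned.
faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
print(f"Faces detected: {len(faces)}")
```

Newer detectors based on machine learning look for different cues, which is why camouflage of this kind has to evolve along with the recognition systems it targets.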

Case study 6.2: Computer vision invisibility

For a long time, recognising faces was one of many tasks that were extremely difficult for computers but easy for humans. These days, however, facial identification software is far more advanced and can often match people’s identities, in some cases even from images which are blurry or indistinct. This means that anywhere covered by camera surveillance should change our expectations of being anonymous and lost in the crowd.

Ben Dalton, as part of his work in the Creative Exchange programme, has experimented with artworks which explore these notions of being identifiable by computer systems. Two particular works contrast different aspects of making oneself ‘invisible’ to ubiquitous surveillance. The first, the ‘Wildermann’, is based upon outfits worn in traditional festivals:

The wild man reoccurs as a motif in festivals throughout Europe, and is echoed in characters and costumes across the world. The wild man often takes on a role that muddies social order, mischief and the wilderness. Traditionally built from wild materials like branches, grass, animal bells and furs, the materials of modern wilderness are not moss and straw but mass-production and military-industrial detritus.

(Dalton, 2015)

The Wildermann camouflages identity and identification by obscuring the features which are recognisable by computers, such as body outline, walking pace or number of limbs, in a similar way to the approaches explored by the CV Dazzle face makeup project. However, while the wearer might look invisible to human-tracking algorithms, they look highly distinctive and extraordinary to humans. There is a social awkwardness cost to wearing a personal invisibility outfit.

Contrasting with this is a second ‘invisibility design experiment’ project which generates images that trick computer surveillance systems not by being invisible, but by being visible when they should not be. Dalton has produced a series of t-shirts which show images such as a cat, a diagram, or a piece of architecture, which contain key features that algorithmically match those in the face of Elvis Presley. As far as the surveillance software is concerned, this is Elvis’ face, but humans do not even notice. The more the t-shirts are worn in public, the more his face is visible, perhaps in many different places at once – sightings of Elvis on the increase! People buying the t-shirts are collaborating in confusing tracking systems, and are therefore shaping another form of invisibility. If this technique were to be used with markers for the face of an individual person travelling in the world, their actual location would be hidden amongst a sea of irrelevant data.

These experiments are extremes, but help to explore individual and group responses to the challenges of privacy in modern public spaces. Our traditional expectations of anonymity in the crowd are subverted by surveillance networks, and yet the algorithmic biases encoded by the makers of these systems suggest new ways of retaining control over our visibility, both in physical and online spaces.

If we want to avoid detection we might increasingly have to make use of such techniques to subvert what is becoming a standard part of life in a digital world – that our privacy is no longer ensured just because no human is watching us at that particular time.

Despite these fears surrounding surveillance and lack of privacy in a digitally connected world, they must be tempered with the fact that current technology is limited in its capabilities. It is generally only possible to match individuals to a ‘gallery’ of target faces, and any system will most likely generate errors, which may be false positives or false negatives depending on the sensitivity of the system (Bowyer, 2004). This, then, is technology to identify specifically targeted individuals rather than track every person who travels in a public space (at least at current levels of technology), and the privacy implications should be judged accordingly.
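To make this trade-off concrete, gallery matching can be thought of as comparing a feature vector extracted from a probe image against the vectors of the enrolled targets, and accepting the best match only if its similarity clears a threshold. The sketch below is illustrative only: the ‘embeddings’ are invented numbers standing in for what a real recognition model would produce.

```python
# Illustrative sketch of gallery matching with a similarity threshold.
# The "embeddings" are invented stand-ins for the feature vectors that a
# real face recognition model would extract from images.
import numpy as np

gallery = {                        # enrolled target identities (hypothetical)
    "person_A": np.array([0.9, 0.1, 0.3]),
    "person_B": np.array([0.2, 0.8, 0.5]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, threshold):
    """Return the best gallery match, or None if it falls below the threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, v)) for n, v in gallery.items()),
        key=lambda pair: pair[1])
    return (name, score) if score >= threshold else (None, score)

probe = np.array([0.85, 0.15, 0.35])   # hypothetical probe feature vector

# A lax threshold risks false positives (strangers matched to the gallery);
# a strict one risks false negatives (genuine targets missed).
print(identify(probe, threshold=0.60))
print(identify(probe, threshold=0.999))
```

Lowering the threshold catches more genuine targets but also mislabels more strangers; raising it does the reverse, which is the sensitivity trade-off Bowyer describes.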

Privacy solutions

Clearly privacy in digital public space is a major concern for many people, and individuals seem right to be concerned over how data can be gathered and used, because the consequences could impact on their right to privacy. But since there are also benefits to digital public space that are bound up with such data collection, the critical question might instead be: how can privacy be maintained while still accessing the benefits of digital public space? Central to this is the understanding that if the default is to be connected (by ambient computing or digital public space), then to preserve private space and time there must be the facility to disconnect, to reserve an isolated state where we can be alone. Additionally, it is desirable to limit the extent to which our digital footprint can be used against us to provide benefits to others (such as marketing) rather than contributing to our own information stream. In effect, if we are using the digital public space as cognitive augmentation, we need to be able to control what enters our ‘minds’ and maintain private areas which are fully controlled by us and not subject to the wishes of others. The other side of this is that we must be able to contain the flow of information outwards from our person, and take control of what becomes available to others about us.

Awareness is an important factor: by alerting people to the fact that their data may be stored, and to the potential ways in which it might be used, we may enable them to make more informed decisions about what they share and what agreements they sign. Some of the concerns above relate to information which is stored on servers belonging to commercial companies, and the trust which must be placed in those organisations to keep your information secure. A movement is developing of people who resent the fact that these large organisations hold an almost complete monopoly on our information simply because they enable connected digital life. Ind.ie is a company which is pushing back against this trend. Aral Balkan, the company founder, explains in his blog: ‘We’ve built a world where our everyday things track our every move, profile us, and exploit those profiles for monetary gain. A world with a wholly privatised public sphere. A world of malls, not parks. A corporatocracy, not a democracy’ (Balkan, 2015). Ind.ie are developing new networks and systems which instead operate on a peer-to-peer basis, so that rather than relying on, say, your email being stored by Google in order to send it to someone else, files and documents are sent directly to the recipient without passing through any third-party intermediaries.
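Ind.ie’s own designs are not reproduced here, but the underlying peer-to-peer principle – content travelling directly between the two parties rather than resting on a provider’s servers – can be sketched with nothing more than a direct socket connection. The addresses, port and file names below are placeholders.

```python
# A minimal sketch of direct peer-to-peer transfer: the file travels straight
# from sender to recipient over a socket, with no intermediary server storing
# a copy. This illustrates the principle only; it is not ind.ie's design.
import socket

PEER_PORT = 9500  # arbitrary port chosen for this example

def receive(save_as="incoming.bin"):
    """Run on the recipient's machine: accept one connection and save the bytes."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("0.0.0.0", PEER_PORT))
        server.listen(1)
        conn, _addr = server.accept()
        with conn, open(save_as, "wb") as out:
            while chunk := conn.recv(4096):
                out.write(chunk)

def send(peer_address, path):
    """Run on the sender's machine: stream the file directly to the peer."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect((peer_address, PEER_PORT))
        with open(path, "rb") as f:
            while chunk := f.read(4096):
                client.sendall(chunk)

# e.g. the recipient runs receive(); the sender runs send("192.0.2.10", "document.pdf")
```

Real systems of this kind layer encryption, authentication and peer discovery on top of such a bare channel, but the privacy point is the same: no intermediary holds a copy of the data.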

Another potential solution to privacy concerns is to build privacy maintenance into the structures of the technology itself. A critical aspect is the anonymisation of data, so that it cannot be traced back to individuals. Vaidya and Atluri (2007) discuss privacy-preserving profiling, which uses algorithmic clustering to conceal the details of any individual and draw out conclusions from the general user population while maintaining encryption and privacy. In this way, marketers can see trends and create profiles which can be used to build better services, but without being able to access individual details of the data which was collected, or even the structure of the profile itself.
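Vaidya and Atluri’s protocols involve cryptographic machinery beyond the scope of a short example, but the aggregation idea behind them – releasing only cluster-level summaries, never individual records – can be sketched as follows. The user data and feature names are invented for illustration.

```python
# Sketch of the aggregation idea behind privacy-preserving profiling:
# only cluster-level summaries leave the system, never individual records.
# (The actual protocols add cryptographic protection on top of this.)
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user feature rows, e.g. [weekly visits, average spend].
users = np.array([
    [2, 15.0], [3, 18.5], [1, 12.0],   # occasional, low-spend users
    [9, 80.0], [8, 95.0], [10, 70.0],  # frequent, high-spend users
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)

# A marketer receives only the profile of each segment (its size and average
# behaviour), not the individual rows that produced it.
for label in range(model.n_clusters):
    members = users[model.labels_ == label]
    print(f"segment {label}: size={len(members)}, profile={members.mean(axis=0)}")
```

Even aggregate profiles can leak information when a segment is very small, which is one reason such schemes typically also enforce minimum group sizes or encrypt the intermediate data.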

Allowing for the removal of personal information from the digital public space also goes some way towards ensuring privacy; one example is the ‘right to be forgotten’, implemented in 2014. The Court of Justice of the European Union ruled that companies including Google, Microsoft and Yahoo must implement this, giving individuals the right to ask these search engines to remove information about them from search results ‘if the information is inaccurate, inadequate, irrelevant or excessive’ (European Commission, n.d.). But this kind of data ‘cleaning’ can itself have implications. It may, by its very existence, reveal private information: namely, the fact that there is something to be ‘forgotten’. Information accidentally made available in Google’s source code revealed in 2015 that over 95% of removal requests were from members of the public seeking to remove ‘private, personal’ information (Tippmann & Powles, 2015).

This kind of manual curation of data collected about ourselves, making changes to the corpus of information that comprises the digital public space, may allow us to protect individual privacy by gaining control over our online selves. But by putting this power of choosing which ‘truths’ to display in the hands of individuals, are we damaging the integrity of the digital public space and introducing bias? This will be explored in Chapter 7.

Generally speaking, privacy in the digital public space must be maintained both by design in the technologies which underpin it (to ensure that privacy is maintained wherever possible), and by social factors allowing the public to be more aware of their privacy rights, and what the implications might be of using connected technology.

Key points

Publicness is often placed in opposition to privacy, but in actuality both exist as a non-binary range of concepts and can co-exist.

Digital information space is constructed from shared information, which may be personal; it is therefore important to consider who it is shared with and who can ‘overhear’.

Activities carried out in digital public space may be less anonymous than those in physical public space due to the nature of digital content, its traceability and persistence. Actions and behaviours which may have been acceptable in non-digital space may carry greater risks and consequences.

Security must be considered in conjunction with privacy: how your private information is protected and kept out of public space.

Expectations of privacy in certain spaces may be based on false assumptions (such as the ‘security of obscurity’) and must be carefully considered. Boundaries between private and public, and who might be watching, may be less clear than in non-digital space.

We may create large amounts of data by our actions online that we are not aware of, which may be used without our knowledge or consent. This can be seemingly trivial information which is powerful in aggregate.

Sometimes relinquishing data is a prerequisite of using digital services, and those services may be difficult to give up.

Connected physical objects in the internet of things may also reveal private information about our behaviour, and even critical information about our health and wellbeing.

It may not be possible to evaluate every piece of data collected about us and give consent for its use; an alternative would be to grant blanket authority for specific purposes only.

Public spaces with boundaries restricting entry may become ‘safe’ spaces with community rules and accepted behaviours, especially if they cater to marginalised groups.

However these spaces may not be as private as they initially appear, and issues can occur when information leaks out.

Spaces may be separated for different audiences, and problems can occur if information moves out of these to inappropriate audiences (such as employers) or without context. It is dangerous to treat content on social media as ‘private’.

Digital information may persist over time and become inappropriate and thus less publicly acceptable if we want to provide a representative version of our ‘current’ self.

Maintaining separate digital personae may help with this, but is difficult, especially given the growing preference of social media networks for ‘real’ names.

Privacy may be sacrificed for notoriety, but this can also occur without consent if information about you is distributed by friends or strangers; this is increasingly likely in the ‘sousveillance’ world of constant recording.

Digital privacy concerns extend into the ‘real world’ through increased digital surveillance and sousveillance, as well as through the privacy implications of data analysis such as face recognition.

Solutions might include radical steps such as obscuring faces from computer vision, controlling more closely what information about us enters or is preserved in the digital space, or improving the infrastructure so that privacy maintenance is built in.

Notes

1https://wiki.openrightsgroup.org

2http://ec.europa.eu/ipg/basics/legal/cookies/

3See Weise, Hardy et al, 2012; Conti et al, 2012.

4Free Software Foundation Europe provides downloadable materials bearing this slogan at https://fsfe.org/contribute/spreadtheword.en.html#nocloud

5Near-Field Communication, which allows devices to communicate when brought into close proximity and allows ‘contactless’ payments.

6It is worth noting that the reduction in crime might not solely be attributable to the surveillance, but also to the perception of its existence. A sense of being watched, especially when images of eyes are used, has been shown to affect crime rates (Nettle et al, 2012).

7http://cvdazzle.com/

References

Acquisti, A., Gritzalis, S., Lambrinoudakis, C. and di Vimercati, S., eds., 2007. Digital privacy: theory, technologies, and practices. CRC Press, p.348.

Allan, A. and Warden, P., 2011. Got an iPhone or 3G iPad? Apple is recording your moves. [online] O’Reilly Radar. Available at: http://radar.oreilly.com/2011/04/apple-location-tracking.html [Accessed 18 October 2016].

Altman, I., 1975. The environment and social behavior: privacy, personal space, territory, and crowding. Brooks/Cole Publishing Co., p.2.

Ayalon, O. and Toch, E., 2013, July. Retrospective privacy: managing longitudinal privacy in online social networks. Proceedings of the Ninth Symposium on Usable Privacy and Security. ACM, p.4.

Balkan, A., 2015. Ethical Design Manifesto. [online] Available at: https://ind.ie/blog/ethical-design-manifesto/ [Accessed 19 October 2016].

BBC News, 2011. 150 officers warned over Facebook posts. [online] Available at: www.bbc.co.uk/news/uk-16363158 [Accessed 18 October 2016].

BBC News, 2016. Did aerial photo of Michael McIntyre break privacy rules? [online] Available at: www.bbc.co.uk/news/magazine-33535578 [Accessed 19 October 2016].

Bowyer, K.W., 2004. Face recognition technology: security versus privacy. IEEE Technology and Society Magazine, 23(1), pp.9–19.

boyd, d., 2007. Social network sites: public, private, or what. Knowledge Tree, 13(1), pp.1–7.

Chambers, P., 2010. My tweet was silly, but the police reaction was absurd. [online] The Guardian: Comment is Free. Available at: www.theguardian.com/commentisfree/libertycentral/2010/may/11/tweet-joke-criminal-record-airport [Accessed 18 October 2016].

Collingwood, L. and Broadbent, G., 2015. Offending and being offended online: vile messages, jokes and the law. Computer Law & Security Review, 31(6), pp.763–772.

Conti, M., Das, S.K., Bisdikian, C., Kumar, M., Ni, L.M., Passarella, A., Roussos, G., Tröster, G., Tsudik, G. and Zambonelli, F., 2012. Looking ahead in pervasive computing: challenges and opportunities in the era of cyber–physical convergence. Pervasive and Mobile Computing, 8(1), pp.2–21.

Cuff, D., 2003. Immanent domain. Journal of Architectural Education, 57(1), pp.43–49.

Dalton, B., 2015. The Barrow Woodwose. Royal College of Art Work in Progress 2015. Available at: http://soc2015.rca.ac.uk/ben-dalton/ [Accessed 19 October 2016].

Danezis, G. and Clayton, R., 2007. Introducing traffic analysis. In: Acquisti, A., Gritzalis, S., Lambrinoudakis, C. and di Vimercati, S., eds., 2007. Digital privacy: theory, technologies, and practices. CRC Press.

Doctorow, C., 2008. Little Brother. Tor Books.

European Commission, n.d. Factsheet on the ‘Right to be Forgotten’ ruling (C-131/12). Available at: http://ec.europa.eu/justice/data-protection/files/factsheets/factsheet_data_protection_en.pdf [Accessed 19 October 2016].

Evers, S., 2015. Everyone says. In: Page, R., Amos, M. and Rasmussen, S., eds., 2015. Beta-Life: stories from an A-life future. Comma Press.

Featherstone, M., 2009. Ubiquitous media: an introduction. Theory, Culture & Society, 26(2-3), pp.1–22.

Fritsch, M., 2015. The Story of Technoviking. Available at: https://vimeo.com/140265561 [Accessed 18 October 2016].

Garfield, S., 2012. On the map: why the world looks the way it does. Profile Books.

Goodman, M., 2012. A vision of crimes in the future. [online] TED Talks 2012. Available at: www.ted.com/talks/marc_goodman_a_vision_of_crimes_in_the_future [Accessed 18 October 2016].

Hayles, N.K., 2009. RFID: human agency and meaning in information-intensive environments. Theory, Culture & Society, 26(2-3), pp.47–72.

Hodson, H., 2015. Face recognition row over right to identify you in the street. [online] New Scientist. Available at: www.newscientist.com/article/dn27754-face-recognition-row-over-right-to-identify-you-in-the-street [Accessed 19 October 2016].

Hodson, H., 2016a. Google knows your ills. New Scientist, 230(3072), pp.22–23.

Hodson, H., 2016b. Did Google’s NHS patient data deal need ethical approval? [online] New Scientist. Available at: www.newscientist.com/article/2088056-did-googles-nhs-patient-data-deal-need-ethical-approval/ [Accessed 18 October 2016].

Houghton, D.J. and Joinson, A.N., 2010. Privacy, social network sites, and social relations. Journal of Technology in Human Services, 28(1-2), pp.74–94.

James, C., 2014. Disconnected: youth, new media, and the ethics gap. MIT Press, pp.2, 38 & 27.

Jenkins, H., Ford, S. and Green, J., 2013. Spreadable media: creating value and meaning in a networked culture. NYU Press.

Kosinski, M., Stillwell, D. and Graepel, T., 2013. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15), pp.5802-5805.

Lingel, J., 2013. ‘Keep it secret, keep it safe’: information poverty, information norms, and stigma. Journal of the American Society for Information Science and Technology, 64(5), pp.981–991.

lordoftheinternet, 2013. Some thoughts are so private that you only share them with a therapist or 17,000 people on the internet. [Tumblr post] Available at: http://lordoftheinternet.tumblr.com/post/44788412914/some-thoughts-are-so-private-that-you-only-share [Accessed 18 October 2016].

Lynch, J., 2015. EFF and eight other privacy organizations back out of NTIA face recognition multi-stakeholder process. [online] Electronic Frontier Foundation. Available at: www.eff.org/deeplinks/2015/06/eff-and-eight-other-privacy-organizations-back-out-ntia-face-recognition-multi [Accessed 19 October 2016].

Marwick, A.E. and boyd, d., 2011. I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), pp.114–133.

Mundie, C., 2014. Privacy pragmatism; focus on data use, not data collection. Foreign Affairs, 93, p.28.

Nettle, D., Nott, K. and Bateson, M., 2012. ‘Cycle thieves, we are watching you’: impact of a simple signage intervention against bicycle theft. PLOS one, 7(12), p.e51738.

New Scientist, 2014. Twitter health. New Scientist, 2994.

Ronson, J., 2015. How one stupid tweet blew up Justine Sacco’s life. New York Times. Available at: www.nytimes.com/2015/02/15/magazine/how-one-stupid-tweet-ruined-justine-saccos-life.html [Accessed 18 October 2016].

Rothrock, K., 2016. Facial recognition service becomes a weapon against Russian porn actresses. [online] Global Voices. Available at: https://globalvoices.org/2016/04/22/facial-recognition-service-becomes-a-weapon-against-russian-porn-actresses/?platform=hootsuite [Accessed 19 October 2016].

Solove, D.J., 2007. The future of reputation: gossip, rumor, and privacy on the internet. Yale University Press, p.7.

Susen, S., 2011. Critical notes on Habermas’s theory of the public sphere. Sociological Analysis, 5(1), pp.37–62.

Sykes, C.J., 1999. The end of privacy: the attack on personal rights at home, at work, on-line, and in court. Farrar, Straus, and Giroux, p.28.

Tippmann, S. and Powles, J., 2015. Google accidentally reveals data on ‘right to be forgotten’ requests. [online] The Guardian. Available at: www.theguardian.com/technology/2015/jul/14/google-accidentally-reveals-right-to-be-forgotten-requests [Accessed 19 October 2016].

Thompson, C., 2013. Smarter than you think: how technology is changing our minds for the better. Penguin.

Wainwright, M., 2010. Wrong kind of tweet leaves air traveller £1,000 out of pocket. [online] The Guardian. Available at: www.theguardian.com/uk/2010/may/10/tweeter-fined-spoof-message [Accessed 18 October 2016].

Wang, L., Tan, T., Ning, H. and Hu, W., 2003. Silhouette analysis-based gait recognition for human identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12), pp.1505–1518.

Weise, S., Hardy, J., Agarwal, P., Coulton, P., Friday, A. and Chiasson, M., 2012, September. Democratizing ubiquitous computing: a right for locality. Proceedings of the 2012 ACM Conference on Ubiquitous Computing. ACM, pp.521–530.

Vaidya, J. and Atluri, V., 2007. Privacy enhancing technologies. In: Acquisti, A., Gritzalis, S., Lambrinoudakis, C. and di Vimercati, S., eds., 2007. Digital privacy: theory, technologies, and practices. CRC Press.

Westin, A., 1967. Privacy and freedom. Atheneum Press.
