Did you recently buy a Samsung smart TV? If you are worried about privacy, you may be wondering how smart that decision was following the manufacturer’s warnings that its voice-activated televisions may record personal information — that is, your conversations — and transmit them to a third party.
The voice-activated televisions monitor spoken conversations, listening for commands, and transmit the recordings to another firm, which then performs the voice analysis. Samsung stated that the TVs may even do so when the voice-activation feature is turned off.
Such privacy snafus seem to be the norm these days: only recently, following a UK Information Commissioner’s Office ruling, Google agreed to rewrite its privacy policy to make it “more accessible”, “to allow users to find its controls more easily” and, most pertinently, to bring it into compliance with the UK Data Protection Act. The Netherlands, too, threatened Google with a £12m fine if it didn’t put its affairs in order.
Facebook had to take similar steps in 2014, yet such changes do not fundamentally improve privacy; they simply make it easier to understand how our data is treated, especially where that data feeds a business model built on targeted advertising. Simply put, when we sign up, we still agree to share our data.
Most of the changes to Google’s privacy policy concern informing users more clearly about how their information will be treated. The default setting will still allow the use of their data unless they specifically opt out. Handing over this data and letting it be passed around is the deal users make in exchange for free, advertising-funded Web services.
But it is increasingly apparent that firms which are evangelical about the need for access to user data are nevertheless vague about how they then use it. Terms and conditions are long and bamboozling. The Information Commissioner described Google’s guidelines on privacy as “baffling”. And Google isn’t acting proactively, but dragging its feet until the regulator demands action.
Facebook’s apparently easier-to-read and more accessible privacy policy now permits data to pass between Facebook, WhatsApp and Instagram, an approach that has drawn the scrutiny of German and Dutch data regulators. Facebook’s reasoning is that we’ll see adverts that are more relevant; the company is only trying to help. Yet consumer concerns remain, largely because this shift to openness by default has happened so quickly. When people find out what happens to their data, many are shocked at what they’ve signed up to.
So, what is really going on here, and what should we be concerned about?
Disruptive technology
The term disruptive technology is often found alongside terms such as 3D printing, robotics or artificial intelligence. According to the Harvard Business Review:
Disruptive technologies introduce a very different package of attributes from those that mainstream customers historically value, and they often perform far worse along one or two dimensions that are particularly important to those customers… At first, then, disruptive technologies tend to be used and valued only in new markets or new applications.
Data harvesting, data mining and analysis have transformed the way we look at our mobile devices and computer screens. Content is now adaptive and responsive to our behaviour. But that does not necessarily mean these are technologies many of us want or need.
Our online communication tools, such as e-mail and social media, are largely free at the point of use and rely on optimising revenue through targeted advertising. For this to be cost-effective, the underlying technology had to be disruptive, both in the way we interact socially and in its capacity to deliver commercial value. We have all noticed how social media has fundamentally disrupted our lives, but until fairly recently little attention has been paid to the underlying systems and software that can unravel who we are and what we are doing, and share this data in order to influence our consumer behaviour.
Not everyone is against this disruption: as Edward Snowden pointed out in 2013, public indifference is one of Google’s biggest allies in the privacy war. So Google’s reluctance should come as no surprise, because its current direction is based on disrupting its users’ privacy.
As is often the case, one disruptive technology gives rise to another: the dark Web is one reaction to attempts to disrupt our freedom to be private, though the jury is out on whether efforts to reframe the current debate will succeed. The privacy-friendly Facebook competitor Ello received considerable publicity but has already been written off by many, while the privacy-focused cloud storage service SpiderOak seeks to challenge the likes of Google Drive and Dropbox.
But these are just tiny eddies in a river of free-to-use online services that treat user privacy as a saleable, tradeable commodity.
So, there seems to be a growing battle between corporations and users. Google, Microsoft, Amazon and other firms have been using their muscle, paying the makers of ad-blocking software to stop blocking their adverts. This war has only just begun, and the tools for managing privacy, whether to strip it away or to protect it, represent a volatile, emerging disruptive force.
The battle for access to our private thoughts and concerns is just getting going. The challenge for those innovating to protect privacy is to come up with viable alternatives that can change the status quo. In the meantime, perhaps you should be careful what you say in your own living room.
- Paul Levy is senior researcher in innovation management at the University of Brighton
- This article was originally published on The Conversation