First computer cookies as data collectors, and now AI, bots and myriad forms of automation – they all bring a level of ease and convenience to our lives as consumers, offering targeted ads, using our locations and preferences for relevance, and saving us valuable time. But they also bring unease and concern . . . followers or stalkers, assistants or snoopers? Just how far can technology – and brands, and even governments – go when it comes to gathering and using data? It’s all about ethics, says Steven Van Belleghem, delving into the great privacy & technology debate.
As marketers, most of us understand (and benefit from) the fact that Facebook can follow people’s behaviour across different websites. By leaving small data files (cookies) in the browsers of the devices people use to surf the web, the social network can monitor your online behaviour even if you are not a Facebook user.
All that is needed is a click on a Facebook image on, say, a news site, and Facebook can follow your every digital move, theoretically allowing it to track the habits of almost every internet user. Opinions vary on whether or not this is a good thing, but in reality Facebook’s use of cookies has only limited consequences for society and the daily lives of consumers. In essence, all it does is determine what kind of advertising consumers get (or don’t get) to see, and it allows Facebook to already know who your family and friends are the first time you log on to the site. Some call this creepy; others call it incredibly user-friendly.
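To make the mechanism concrete, here is a minimal, hypothetical sketch (in TypeScript, using the Express web framework) of how any third party that serves an embedded image or button can combine a cookie with the browser’s Referer header to recognise the same visitor across unrelated sites. The endpoint, cookie name and identifiers are invented for illustration; this is the generic third-party-cookie pattern, not Facebook’s actual implementation.

```typescript
// Hypothetical sketch of third-party cookie tracking (illustrative only).
// A publisher page embeds an image or button served from the tracker's domain;
// every time a browser fetches that asset, the tracker can set or read its own
// cookie and note which page the visitor came from.
import express from "express";
import { randomUUID } from "crypto";

const app = express();

app.get("/button.png", (req, res) => {
  // Reuse the visitor's existing ID cookie if the browser sent one, otherwise mint one.
  const cookies = req.headers.cookie ?? "";
  const existing = cookies.split("; ").find((c) => c.startsWith("visitor_id="));
  const visitorId = existing ? existing.split("=")[1] : randomUUID();

  if (!existing) {
    // A long-lived cookie scoped to the tracker's domain, sent on every future request.
    res.setHeader(
      "Set-Cookie",
      `visitor_id=${visitorId}; Max-Age=31536000; Path=/; SameSite=None; Secure`
    );
  }

  // The Referer header reveals which site embedded the asset, so one visitor ID
  // can be linked to browsing activity across many unrelated publishers.
  console.log(`visitor ${visitorId} seen on ${req.headers.referer ?? "unknown page"}`);

  res.sendStatus(204); // no content; a real widget would return the image itself
});

app.listen(3000);
```

Because the cookie belongs to the tracker’s domain, every publisher page that embeds the same asset effectively reports back to a single profile – which is why a simple ‘Like’ button is enough to follow a browser around the web.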
The privacy debate should be about impact
Arguably, the real impact this tracking has on society is minimal. It almost seems like some kind of sport for the privacy commissions of different countries to find new ways to give companies a rap over the knuckles. This type of privacy discussion really belongs to the second digital phase, when smartphones and social media were changing the world. It is the privacy discussion around the third phase of digital evolution that should really worry and excite people.
While so much of the discussion is about brands following people online with cookies, most of us neglect to think about the sensors in our phones. The debate should be about the power of artificial intelligence when it is armed with the data collected about us, whether through our behaviour online, on our smartphones or via our virtual assistants.
If you have a Google Home or an Amazon Echo in your house, you might talk to it for 15 minutes a day, but it is also listening and recording for the remaining 23 hours and 45 minutes. In my latest book, I use the example of a court case in which Amazon Echo recordings were requested as evidence in a murder trial, but at a more prosaic level, Amazon says it also uses the data it collects to learn more about dialects.
The amount of data held by tech giants like Amazon is staggering. Once unsupervised machine learning becomes possible at scale, this mass of data will become proactively usable, so it is important that the debate covers what society really wants to happen with this vast source of knowledge.
Not many people realise it, but when you agree to the privacy statement on a Samsung smart TV, it includes a warning to this effect: “Be careful you do not say things of a personal or sensitive nature around your television, because all this information will be recorded and sold on to third parties.” Samsung doesn’t do much with this data yet, but once personalised TV advertising becomes feasible, the data will become highly relevant and valuable. Just think: you might ask your partner if they fancy a pizza for dinner, and before you know it a Domino’s advert will appear.
In short, doesn’t it make sense to focus on those elements that could have a greater impact on society? How will we deal with the potential loss of jobs? How are we going to prepare for the greater need for digital skills? How are we going to adjust our education system? What do we want AI to do for us? What is the role of virtual personal assistants in our homes?
How far can technology go?
The most important debate is about the role of artificial intelligence in the world of The Day After Tomorrow. That world will be awash with oceans of data. This data will allow computers to make a major impact on many aspects of our lives. Consider, for example, the role of virtual personal assistants. We know that these machines not only carry out tasks on our behalf, but also listen to every word we say. To what extent should these machines be allowed to influence our lives?
Imagine that Google Home hears that a man is about to hit his wife. What should Google do with that information? In June 2017, the police received a call from a Google Assistant in similar circumstances. A man had attacked his girlfriend and was waving a gun around, threatening to kill her. The young woman had somehow managed to activate the device, so that it could send out a distress call. The police arrived just in time to save her life. In this instance, it was the woman who took the initiative to activate the virtual personal assistant. But what if that had not been the case?
What should we expect of this kind of AI device in the future? Should Google automatically phone the police? Or should it try to talk the man out of doing anything foolish? Just how far can technology go? If artificial intelligence can predict when somebody is going to commit a crime, should that person be punished simply for his/her intention to commit that crime? Are we moving towards the world depicted in the film Minority Report, where crime prevention is proactive rather than retrospective?
Superhumans in tomorrow’s world of privacy & technology
Another important area of debate is the evolution of healthcare. It will soon be possible to change human DNA, which opens up remarkable possibilities for eradicating illness and disease. But it also means we could order ‘made-to-measure’ babies. Do you want a son with brown or blond hair? How smart would you like your daughter to be? If or when this becomes possible, it is open to question whether it will benefit society. If we fail to conduct this and other similar debates proactively, we will wake up one morning to discover that the future has already arrived – and then it will be too late.
Imagine that the Chinese government manages to develop a strain of smarter and stronger people. The United States would immediately see this as a threat to its military and economic position. In addition to an arms race, we would also find ourselves in a race to develop a new breed of superhumans. Science fiction? Perhaps, but the American army is already conducting tests to see whether brain manipulation can help its soldiers learn new skills more quickly. For example, it is using AI to examine the brain-wave data of its best snipers, so that the output can be used to improve the analytical capabilities of other soldiers. The brain race is already under way, before the societal debate about its ethical acceptability has even started.
In the years to come, technology will be able to do many remarkable things. But not all of these things will be positive for society. We need to start talking about the possible implications. And we need to do it now.
Prof Steven Van Belleghem is an expert in customer focus in the digital world. He is an award-winning author and his new book, ‘Customers The Day After Tomorrow’, is out now. Follow him on Twitter @StevenVBe, subscribe to his videos or visit his website.
Have an opinion on this article? Please join in the discussion: the GMA is a community of data-driven marketers and YOUR opinion counts.