According to a study by WPP digital agency Syzygy, 85% of Britons believe that AI in marketing should be governed by a key principle from the Blade Runner film franchise. Does it make sense, and how big, really, is the risk that man will succumb to the machine?
The “Blade Runner rule” says it should be illegal for AI applications such as social media bots, chatbots and virtual assistants to conceal their identity and pose as humans. I would like to ask you – and please answer honestly – two specific questions about that result:
- Do you really think the current level of AI needs the “Blade Runner rule”?
- Can machines powered by AI conceal their identity and pose as humans?
My answer to the first question is YES, I think the “Blade Runner rule” is needed. But don’t get me wrong, I don’t think we need that rule right now to save us from the machines. We are years away from a level of AI that could pose that danger. However, considering how slowly the legal system adapted to phenomena such as the Internet, eCommerce, Google, Amazon and many others… well, if we start now, we will surely be ready by the time the machines mimic the humans.
The second question is a little more complicated and requires a somewhat technical explanation… yes, I know you don’t like overly technical explanations, but let me elaborate by introducing a very interesting concept: Generative Adversarial Networks (GANs). This kind of deep learning architecture works like a coach working with an athlete: train, train and train while I check and correct you. GANs solve problems such as:
- Training an artificial author that can write an article and explain data science concepts to a community in a very simple manner by learning from past articles. We are talking here about Natural Language Generation: the machine produces text that humans are unable to identify as created by a machine.
- Creating an artificial painter that can paint like any famous artist by learning from his or her past collections, at a level so perfect that even the original painter would not be able to tell the original from the ‘fake’. Several smartphone applications use this kind of model to generate, for instance, a new Picasso-style picture from your original photo.
GANs consist of two competing neural networks: the Generator and the Discriminator. As the names suggest, the Generator is responsible for generating data from some input. The Discriminator is then responsible for analysing that data and deciding whether it is real (it came from the original dataset) or fake (it came from the Generator). Two neural networks work independently, one against the other, to produce something for us, the humans.
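In more formal terms (my notation here, not something from the Syzygy study), the two networks play the minimax game introduced by Ian Goodfellow and colleagues in their 2014 GAN paper:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
\; + \; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

The Discriminator D tries to push its output towards 1 on real samples and towards 0 on generated ones, while the Generator G tries to make D(G(z)) as close to 1 as possible.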
Ok, you want a non-technical explanation: we want a Rolex watch but we cannot afford it. So we ask someone – let’s call him John – to build a fake Rolex so perfect that even the Rolex factory will not be able to distinguish the fake from the real. Of course, John needs time and several attempts. But at a certain point – after millions of attempts – the Rolex factory will no longer be able to see the difference. Moreover, John will probably come up with new ideas and shapes of Rolex watches that even the Rolex factory will consider putting into its official collection as originals.
Well, John is our Generator, while the Rolex factory is our Discriminator. In reality, they are two neural networks competing against each other to produce and discriminate data: text, images, sounds… whatever data a computer can produce.
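The John-versus-Rolex-factory game can even be sketched in a few lines of Python. What follows is a deliberately tiny, illustrative GAN, not a production one: I use only NumPy, the “factory” (Discriminator) and “John” (Generator) are each a single linear unit, the “watches” are just numbers drawn from a Gaussian, and the updates are hand-derived gradients of the standard GAN losses. All the names and hyperparameters below are my own choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real Rolex" data distribution

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Generator G(z) = wg*z + bg ("John"); Discriminator D(x) = sigmoid(wd*x + bd) ("the factory")
wg, bg = 0.1, 0.0
wd, bd = 0.1, 0.0
lr = 0.05

for step in range(3000):
    x_real = rng.normal(REAL_MEAN, REAL_STD, 32)   # a batch of genuine watches
    z = rng.normal(0.0, 1.0, 32)                   # random noise John starts from
    x_fake = wg * z + bg                           # John's current fakes

    # Discriminator update: gradient ASCENT on log D(real) + log(1 - D(fake))
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    grad_wd = np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake)
    grad_bd = np.mean(1 - d_real) - np.mean(d_fake)
    wd += lr * grad_wd
    bd += lr * grad_bd

    # Generator update: gradient ASCENT on log D(G(z)) (non-saturating GAN loss)
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    upstream = (1 - d_fake) * wd      # d/dx_fake of log D(x_fake)
    wg += lr * np.mean(upstream * z)
    bg += lr * np.mean(upstream)

# After training, John's fakes should cluster near the real distribution
samples = wg * rng.normal(0.0, 1.0, 1000) + bg
print("mean of generated samples:", samples.mean())
```

In a real GAN both players are deep networks trained with a framework such as PyTorch or TensorFlow, but the loop is the same: the factory gets better at spotting fakes, and that very feedback is what teaches John to make better fakes.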
Yann LeCun, Director of AI Research at Facebook, has described GANs as “…the most interesting idea in the last ten years in machine learning.” And it is true: it is something new and exciting, and it can be built with just 50 lines of code in Python (https://goo.gl/n8gGYb).
The concept of GANs is revolutionary: two deep learning neural networks work against each other to produce a result where humans cannot tell whether it was produced by a man or by a machine. It is similar to what we used to do in machine learning with models that correct the errors of another model (meta-modelling). It is a concept in which two AI entities work together – or, in this case, one against the other – to produce something for the humans, and in a human style.
However, don’t panic. We are far, far away from Roy Batty telling us: “I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion…”
Hopefully, humans will also be smart enough not to plug experimental deep learning programs into something very dangerous, like an army of laser-toting androids or a nuclear reactor. But if someone does and a disaster ensues, it will be the result of human negligence and stupidity, not of the robots having a philosophical revelation about how bad humans are. This is why we need rules. Rules for the humans, not for the machines.
GANs are the first practical example that should trigger the interest of legislators in starting to regulate AI. As I said, we are far from being in danger of succumbing to the machines, but it is time to act and to apply precise rules to the whole AI field.