Why is AI just another buzzword?

‘AI-powered’ is tech’s most meaningless label unless your vendor lays its cards on the table

For me, it all started around the year 2000, when I met Clementine for the first time – and I’m not talking about the hybrid between a mandarin orange and a sweet orange: Clementine was the first data mining software I ever used to build machine learning (ML) models. Clementine was later acquired by SPSS – the famous software for batched and non-batched statistical analysis – and SPSS was in turn acquired by IBM.

Later, working in my garden, I found a Python hidden in the grass and fell in love with Deep Learning (DL). Of course, I’m talking about the Python programming language, something really fantastic to work with… particularly if you are a data scientist and you love Pandas – yes, not the cute animal, of course. (Pandas is one of the key libraries for analysing data in Python. Important: it is not an ML or DL library.)
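
To make that distinction concrete, here is a minimal sketch of what Pandas is for: slicing and summarising data, not learning from it. The CSV file and its columns are hypothetical, purely for illustration.

    import pandas as pd

    # Load a (hypothetical) CSV of customer orders into a DataFrame
    orders = pd.read_csv("orders.csv")  # columns: customer, country, amount

    # Classic data analysis - filter, group, aggregate; no learning involved
    big_orders = orders[orders["amount"] > 100]
    per_country = big_orders.groupby("country")["amount"].agg(["count", "mean"])
    print(per_country.sort_values("mean", ascending=False))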

At this point you probably have your first question: what is the difference between ML and DL? Excellent question – let me try to formulate an answer.

Deep learning is a subset of machine learning, which is a subset of Artificial Intelligence. Very simple.

I can hear your complaints… let me elaborate:

  • AI is the all-encompassing umbrella that covers everything from old-fashioned AI all the way to the complex architectures typically used in Deep Learning, for instance Generative Adversarial Networks.
  • ML is a sub-field of AI that covers anything to do with algorithms that learn by training on data. A whole range of techniques has been developed over the years: Linear Regression, K-means, Decision Trees, Random Forest, PCA, SVM and, finally, Artificial Neural Networks (ANN). Neural networks are the element ML and DL have in common. The difference between the two is – at a very high level – how they learn from data: in classic ML you engineer the features and teach the algorithm how to solve the problem; in DL the network is structured in several layers so that it learns the relevant representations from the data on its own (see the sketch after this list).
  • DL – as I said before – structures algorithms in layers to create an “artificial neural network” that can learn and make decisions on its own.
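
To make the difference tangible, here is a minimal sketch of both approaches on the same toy data: classic ML with a Random Forest via scikit-learn, and DL with a small layered neural network via Keras. The libraries and parameters are my own illustrative choices, not a recommendation.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.models import Sequential

    # Toy dataset: 1,000 samples, 20 features, binary labels
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Classic ML: a Random Forest learns directly from the features as given
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)
    print("Random Forest accuracy:", forest.score(X_test, y_test))

    # DL: a neural network structured in layers that learns its own representations
    model = Sequential([
        Dense(64, activation="relu", input_shape=(20,)),  # first layer: low-level patterns
        Dense(64, activation="relu"),                     # second layer: combinations of them
        Dense(1, activation="sigmoid"),                   # output: probability of class 1
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)
    print("Neural network accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])

On a toy dataset like this the Random Forest will usually do just as well as the network – which is exactly the point of the next paragraphs: DL only pays off when you have large amounts of data.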

Deep learning is a subfield of machine learning, and both fall under the broad category of artificial intelligence. You see the point now: if I say “I will solve your problem with AI”, you should ask me “What exactly do you mean by AI?”. Or, to use a different example, if you want to fly from London to New York and I answer: “I will solve your problem using the airline industry”, you will probably ask me: “What exactly do you mean by the airline industry? Can you please be more precise?”

There is a lot of confusion these days about AI, ML and DL. Lots of vendors are abusing the marketing buzz of ‘AI-powered’ or similar stupid mantras. If someone offers you an AI solution, simply ask them: “What exactly do you mean by AI?”. If they tell you it is a well-kept secret, you should start to have some doubts. Why? Because nowadays the really powerful DL algorithms are all open source, from the usual suspects: Google, Facebook, more recently Amazon, Alibaba and Salesforce, among others.

One of the main reasons those algorithms are open source is that the real gold is not in the algorithms: it is in the quantity of data you can feed to the neural network so it can learn and make decisions on its own. The value is not the algorithms; the value is the quantity of data you have available. Tons of data to make the network “very smart”. And who today has data – huge quantities of data – to properly train a DL neural network?

A small example is Google Translate (https://translate.google.com). The service has been around since 2006 using a statistical ML approach and, if you remember, in the beginning the translations were – let’s put it this way – “very funny”. In November 2016 Google released Neural Machine Translation, a large DL neural network and – most importantly – one trained on millions of examples. Do you know where Google got the data? That is the hilarious part: we – me included – have been providing the data to Google since 2006 by using their ‘free’ service (!). If you test the service today, you will notice the level of accuracy Google Translate has reached thanks to DL and millions of examples… moreover, every time we correct a translation using the right-click option, we help Google improve the DL model… and we do it for free!

That leads to the second question you should ask yourself: “Do I have enough data to train a DL network and make it ‘smart’ enough to solve my business problem?” or “Does the vendor offer me that huge amount of training data?”. I can assure you that Google, Amazon & Co. will not give you free access to their training data sets, and this is a real problem, unfortunately.

To summarize

Next time someone offers you an Artificial Intelligence solution, ask three simple questions:

  • What do you mean by AI? ML or DL, and what specific algorithms is your solution using?
  • Don’t be shy: if the answer is DL, also ask for the specific library behind it. Is it TensorFlow, Theano, Torch, Caffe, Neon, the IBM Machine Learning Stack… or something else?
  • Does your solution come with a pre-trained model, or should we provide the training datasets? Remember, if we are talking about DL, you need millions of examples. Ask yourself: do I have millions of examples to train the neural network? (See the sketch after this list for what “pre-trained” means in practice.)
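
To show what “pre-trained” means in practice, here is a minimal sketch assuming Keras and its bundled ImageNet weights: someone else has already burned through the millions of examples, and you only download the result. The image file name is hypothetical.

    import numpy as np
    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras.applications.resnet50 import decode_predictions, preprocess_input
    from tensorflow.keras.preprocessing import image

    # Download a network already trained on roughly 1.2 million ImageNet photos
    model = ResNet50(weights="imagenet")

    # Classify one (hypothetical) local image - no training of our own required
    img = image.load_img("cat.jpg", target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    print(decode_predictions(preds, top=3)[0])  # e.g. tabby, tiger cat, Egyptian cat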

My last recommendation: focus on the business problem you want AI to solve. In most cases you don’t need DL – ML is enough. Very simple, cost-effective and elegant solutions built on platforms like, for instance, Knime (http://knime.com) are strong enough to solve your business problem. (Important: in its latest release, Knime also lets you build DL models using – for instance – deeplearning4j nodes.)
