Feb 09 2018
To ML or to think beyond ML
By: Aditya Pandey

(ML: Machine learning)

Nature has always been a quintessential source of inspiration for humankind.

Manifested in the artistically perfect and geometrically precise design of the famous pyramids of Giza is the fascinating concept of the golden ratio – a ratio programmed into the fabric of nature. Its ubiquity in nature is why we see it used throughout our history.

Centuries later, not much has changed. The breathtaking proportions in the design of the Aston Martin DB9 derive their elegance and balance from the golden ratio.

The essence of the above premise is that a near-ideal outcome is possible by replicating nature: deconstructing one of its creations down to the cellular level and then reverse engineering it. We think that is creativity.

Except that’s not true.

The idea and inspiration behind machine learning and neural networks stem from this classical act of emulating nature.

In a TED Talk by Fei-Fei Li, Director of Stanford's Artificial Intelligence Lab and Vision Lab (link below), Li gives a simple explanation of how children at the tender age of 3 to 4 years can easily identify and classify different things – for example, telling a cat apart from a dog. Deep neural networks and machine learning algorithms, however, have to be pushed to their limits to do the same.

From the moment a child is born, they are exposed to a plethora of data points and scenarios in which parents and others teach them to differentiate a cat from a dog – cats of different colors and shades, against different backgrounds. Over time this trains the child's brain to a level of accuracy where they can easily identify a cat in even the most complex of scenes.

Machine learning is similar. Consider binary classification: we have to train a model to identify whether a picture contains a cat (1) or not (0). Lots of images are taken as input and tagged with 0 or 1 (the actual output), and all of these data points are fed into the model. Training on them covers most of the otherwise unthinkable scenarios. This is a simple example of "supervised learning".
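To make this concrete, here is a minimal sketch in Python of that cat/no-cat setup, using scikit-learn's logistic regression as the model. The "images" below are made-up feature vectors and the 0/1 tags are random placeholders, not real data – just enough to show the shape of supervised training.

```python
# Minimal sketch of supervised binary classification (cat = 1, no cat = 0).
# The feature vectors and labels are hypothetical stand-ins; real inputs
# would be image pixels (or features extracted from them).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "image" has already been flattened into 64 features.
X = rng.normal(size=(200, 64))       # 200 hypothetical images
y = rng.integers(0, 2, size=200)     # 1 = cat, 0 = no cat (hypothetical tags)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # the model "learns" from the tagged examples

print("accuracy on unseen images:", model.score(X_test, y_test))
```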

Likewise, neural networks capture the essence of the astonishing functionality of our neurons. The mathematical representation of a neuron is as follows: input signals from the dendrites are given individual weights, passed through a transfer (summation) function, and then through an activation function.
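As a rough illustration (plain Python, not any particular library's API), a single artificial neuron can be written in a few lines: multiply each input by its weight, sum them with a bias, and pass the result through an activation function such as the sigmoid. The specific numbers are hypothetical.

```python
import numpy as np

def sigmoid(z):
    """A common activation function: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then activation."""
    z = np.dot(weights, inputs) + bias   # transfer (summation) function
    return sigmoid(z)                    # activation function

# Hypothetical signals arriving on three "dendrites", with their weights.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.7, -0.2])
print(neuron(x, w, bias=0.1))
```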

Seems complicated? Yeah, because it is.

Neural networks have an input layer, an output layer and several hidden layers. An input is fed in. The actual (expected) output is known, and the network produces a predicted output. The difference between the expected and predicted outputs is captured by a loss function, and the goal is to minimize that loss. Here, calculus proves indispensable. With enough data points, a neural network adjusts its weights through a concept called back-propagation to achieve near pixel-perfect accuracy.
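For the curious, here is a minimal sketch of that loop in plain NumPy, with made-up data and a single hidden layer: a forward pass, a squared-error loss, back-propagation of the gradients via the chain rule, and a gradient-descent update of the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 4 examples with 3 features each, binary targets.
X = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# One hidden layer with 5 units, one output unit.
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(1000):
    # Forward pass: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Loss function: difference between predicted and expected output.
    loss = np.mean((pred - y) ** 2)

    # Back-propagation: the chain rule gives the gradient of the loss
    # with respect to every weight, layer by layer.
    d_pred = 2 * (pred - y) / y.size * pred * (1 - pred)
    dW2, db2 = h.T @ d_pred, d_pred.sum(axis=0)
    d_h = d_pred @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient descent: nudge every weight to reduce the loss.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", float(loss))
```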

My argument is this: are machine learning and neural networks just more tools to replicate or deconstruct one of the myriad creations of nature? And can't we, for once, think beyond the limits of nature?

So, whenever in doubt, don’t look at nature for a hint. Look beyond that.

https://www.ted.com/talks/fei_fei_li_how_we_re_teaching_computers_to_understand_pictures