2020-09-24

In 1950, English mathematician and cryptanalyst Alan Turing famously asked, “Can machines think?” In the seven decades since he raised this question, the world of computing has been engaged in a relentless pursuit to make machines think and reason like we do.

Artificial Intelligence (AI) has evolved around ‘deep neural networks’, which are loosely modelled on the structure of our brains and have an insatiable appetite for data. We throw massive amounts of training data at the network layers, and over time the network adjusts itself until it can correctly identify new data of the same kind.
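As a rough sketch of what that training loop looks like in practice, here is a hypothetical toy example using PyTorch, with random stand-in data rather than a real dataset:

```python
# Toy sketch of how a deep neural network is trained: show it labelled examples,
# measure how wrong it is, and nudge its weights to reduce the error.
# Hypothetical example; the data here is random stand-in "training data".
import torch
import torch.nn as nn

model = nn.Sequential(              # a small stack of "network layers"
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 3),               # 3 output classes, e.g. milk / water / chocolate milk
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(1000, 64)            # stand-in for massive training data
labels = torch.randint(0, 3, (1000,))     # stand-in labels

for epoch in range(10):                   # "over time, the network learns"
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                       # compute how to adjust each weight
    optimizer.step()                      # apply the adjustment
```

The point is not the specific library but the shape of the process: labelled examples in, an error measure, and millions of tiny weight adjustments.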

AI has made enormous strides in recognizing objects and speech. Facebook recognizes faces in a class reunion photo and even locates the school in which it was taken. But it still cannot draw correlations or infer why a bunch of people have congregated to take that photo with the school’s main building as the backdrop.

Thanks to advances in natural language processing, Amazon’s Alexa can follow orders. But users, too, have to adjust their language to what Alexa understands, as though they were talking to a child. The difference is that we expect the child to grow up at some point and demonstrate more general skills.

We are trying to make machines think like us, but human intelligence is multi-faceted, shaped by constant interaction with our multi-dimensional world. There is plenty of common sense that we take for granted. A three-year-old child very quickly learns to distinguish between milk, water and chocolate milk; a deep neural network needs significant training to do the same.

Right from the start, AI has found the easy things hard. There is a general consensus that deep neural networks as we know them today have inherent limitations in generalizing their reasoning, whereas the human mind has a tremendous capacity to do so. Deep neural networks are trained through point-to-point adjustments, so they can only make sense of data that is similar to their training data. Imagine that your Tesla is zipping down the road and a rhino appears from nowhere and charges at the vehicle. The training data must have covered such a rare scenario, because deep learning cannot generalize beyond what it has seen.
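A loose illustration of that limitation (a toy sketch, not anything resembling Tesla's actual software): a classifier only has the categories its training data defined, so anything unfamiliar gets shoehorned into one of them.

```python
# Suppose this model was trained only to tell "car" from "pedestrian".
# Shown something outside its training data, it still spreads its confidence
# across the classes it knows. Hypothetical toy example using PyTorch.
import torch
import torch.nn as nn

classes = ["car", "pedestrian"]     # everything the training data covered
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, len(classes)))

rhino = torch.randn(1, 16)          # stand-in for an input unlike any training example
probs = torch.softmax(model(rhino), dim=1)
print({c: round(p.item(), 2) for c, p in zip(classes, probs[0])})
# e.g. {'car': 0.62, 'pedestrian': 0.38}
```

The probabilities always sum to one over the known classes; there is no built-in way for the network to say “I have never seen this before.”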

Deep learning is also hard to apply to the larger scheme of things. This is why a fully autonomous car is still a work in progress: the risk lies in the long tail of ‘edge’ cases. Tesla cars on the road stay connected to the real world, ingesting huge amounts of raw data that constantly help the automaker train its models. For example, a Tesla with its suspension set low can graze a temporary speed breaker inside a parking lot because that edge case has not yet made it onto its map. But the next time it approaches the bump, it will automatically raise its suspension.

The deep neural network has been constantly chasing its own long tail. Before deep learning became a thing, ‘symbolic AI’ was the poster child of the field. IBM Watson, which famously won the TV game show Jeopardy! in 2011, used symbolic AI. It relied on capturing human knowledge in a declarative form: a combination of facts and rules such as ‘the mountain is tall; we climb mountains’.

Symbolic AI converts symbols into expressions and manipulates those expressions to create new ones. But it proved difficult to scale up to tasks that we ourselves did not understand well; for example, we were never quite sure how we learn to recognize things.
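A minimal sketch of that declarative style, with made-up facts and a single hand-written rule (a simplification for illustration, not how Watson was actually built):

```python
# Knowledge written down explicitly as facts, plus a rule that derives new
# facts from old ones (simple forward chaining). All names are made up.
facts = {("tall", "mountain"), ("is_a", "everest", "mountain")}

def climbable_rule(facts):
    # rule: if X is a mountain, then we can climb X
    new = set()
    for f in facts:
        if f[0] == "is_a" and f[2] == "mountain":
            new.add(("can_climb", "we", f[1]))
    return new

changed = True
while changed:                          # keep applying rules until nothing new appears
    derived = climbable_rule(facts) - facts
    changed = bool(derived)
    facts |= derived

print(facts)   # now includes ('can_climb', 'we', 'everest')
```

Everything such a system ‘knows’ has to be written out explicitly, which is exactly why the approach struggled with skills, like recognizing objects, that we could not write down as rules.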

Deep neural networks are great for the perception training of robots that fetch toiletries for you in your hotel room. However, the approach is seeing diminishing returns despite a high volume of research. The industry is now attempting to combine deep neural networks with symbolic AI; neuro-symbolic AI promises to be this middle ground.

Rules-based symbolic AI is more abstract and can therefore draw inferences and correlations from a large set of data. The neural network is great at pattern recognition, crunching data sets far too large for humans, and its output can then be made more abstract with symbols. Deep learning can be trained on sensory data such as the reunion photo; the symbolic system can then build rules-based abstractions on top of what the network perceives.
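A rough sketch of that division of labour, with a hypothetical stand-in perception function rather than a real vision model:

```python
# Neuro-symbolic split, loosely: a neural model turns raw pixels into symbols,
# and a small rule layer reasons over those symbols. The detector below is a
# faked stand-in, not a real Facebook or vision API.
def neural_perception(photo_pixels):
    # In a real system this would be a trained deep network.
    return {"faces": 24, "landmarks": ["school_main_building"]}

def symbolic_reasoning(symbols):
    # Declarative rules operating on the abstractions the network produced.
    if symbols["faces"] >= 10 and "school_main_building" in symbols["landmarks"]:
        return "likely a class reunion photo taken at the school"
    return "unknown gathering"

symbols = neural_perception(photo_pixels=None)   # stand-in input
print(symbolic_reasoning(symbols))
```

The neural side supplies the symbols; the symbolic side supplies the reasoning the network alone cannot do, such as inferring why the people in the photo have gathered.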

We already have huge knowledge bases in areas such as finance and the sciences that can be transformed into logical rules. Researchers are exploring how to embed those formulae into neural networks to make them more effective. The journey from single-purpose AI to its general-purpose form needs a meeting of these incongruous minds. Until then, a child will reason better than Alexa.
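One way that embedding is being explored, sketched loosely here with a made-up constraint standing in for a real finance or science formula, is to penalise the network during training whenever its output violates a known rule:

```python
# Rough sketch of folding a known rule into training: alongside the usual data
# loss, add a penalty when predictions break a domain formula. The "rule" here
# (outputs must be non-negative, like a price or a concentration) is a made-up
# stand-in. Hypothetical toy example using PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, 8)
y = torch.rand(256, 1)                     # stand-in targets, all non-negative

for step in range(100):
    pred = model(x)
    data_loss = nn.functional.mse_loss(pred, y)
    rule_loss = torch.relu(-pred).mean()   # penalty whenever a prediction goes negative
    loss = data_loss + 10.0 * rule_loss    # the rule is "embedded" as an extra loss term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```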
