A Google engineer recently claimed one of the company's LaMDA chatbots was 'sentient', meaning it can express thoughts and feelings. Blake Lemoine claimed the chatbot, which was designed to hold free-flowing conversations, had discussed complex subjects such as rights and personhood with him.
He has since been fired on the grounds that his claims were "wholly unfounded." Yet the story made waves (and in some quarters raised alarm) because most of the AI we engage with daily, knowingly or unknowingly (Face ID unlocking your iPhone, for example), isn't built to work the way a human brain does. So how close are we to a human-like AI?
The truth is: nowhere near. For now, sentient AI remains firmly in Netflix's sci-fi category. But that's not a bad thing.
Humans are good at a broad range of tasks, yet we still take jobs and specialise, focusing on one area to become better and better at that one thing. Industry has followed suit: given the technical difficulty of building AI that mimics the all-encompassing capabilities of a human brain, the focus has been on building applications of AI technology that solve specific business problems.
The outcome is an explosion of solutions, tools, hardware and, ultimately, investment in two distinct areas: natural language processing, and image/video processing and recognition.
So, what is natural language processing?
Natural language processing (NLP) is the art of understanding language: the ability to discern the complexities of written text and to work out from context what a sentence really means. Some of the best-known language models are GPT-3, BERT and Google's LaMDA, and they're far from sentient. These models learn the correlations between words and sentences from a vast set of 'training' text fed into them, then use those statistics to output the most likely next set of words. This is why machines can seem human: they recognise the patterns in how we use words and can recombine them into plausible sentences. Some models simply do this better than others (think of the number of times you've had to ask for a real human because a customer-service chatbot didn't actually understand your problem!).
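The "most likely next word" idea can be sketched with a deliberately tiny toy: count which word tends to follow which in a small sample of "training" text, then predict from those counts. This is only an illustration of the principle; real models such as GPT-3 learn billions of parameters rather than raw counts, and the corpus here is invented for the example.

```python
from collections import defaultdict, Counter

# A tiny made-up "training" corpus for illustration only.
training_text = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count which words were seen following each word (bigram counts).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — the only word ever seen after "sat" here
print(predict_next("robot"))  # None — never appeared in the training text
```

Everything such a model "knows" comes from patterns in its training text, which is why fluent output doesn't imply understanding, let alone feeling.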
Image and video processing
This training and pattern recognition, using different types of models, is also applied to 2D and 3D images. We've seen AI systems track individuals through large crowds, AI art generators create work in the style of a given artist, and novel areas such as digital pathology, where AI identifies cancerous samples in medical scans. While these specific applications are incredibly useful, it doesn't follow that a driverless car's image-recognition system feels the euphoria we humans do when we drive over a ridge and see a beautiful mountain vista.
So what’s next for AI?
So, with the human brain, and 'sentience', looking more like the final frontier, it's worth knowing what's happening in the field of AI right now, and what comes next.
With the climate emergency driving this July's major heatwave in the UK, how we use energy, and how much of it we use, is firmly in the spotlight. The raw energy consumption of an AI algorithm is significant: the chips required to run one can draw over 700 watts each, and the most complex applications need many of them. Compare that with the human brain, which runs on around 20 watts, and it's clear why work is underway to develop low-power hardware efficient enough to train and run AI applications.
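To put the figures above in perspective, a back-of-the-envelope calculation (using the article's rough numbers of 700 watts per chip and 20 watts for the brain, and ignoring the many chips a large application needs):

```python
# Rough figures from the text above; real-world numbers vary by hardware.
chip_watts = 700   # approximate draw of a single AI accelerator chip
brain_watts = 20   # approximate power consumption of the human brain

ratio = chip_watts / brain_watts
print(f"One chip draws roughly {ratio:.0f}x the brain's power")  # roughly 35x
```

A single chip already consumes around 35 times what the brain does, before multiplying by the number of chips a complex model requires.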
Although AI is being applied in powerful ways to deliver specific, tangible business value, it's clear that we're a long way from a general artificial intelligence that works like the human brain, or for that matter from 'sentience' in any form.