In the years ahead, we expect Artificial Intelligence (AI) to increasingly take the baton from the cloud and the smartphone, driving innovation and disruption across multiple industries. AI's growth is demonstrated by another record year for M&A and fundraising, with 231 AI-related M&A deals completed and 2,235 fundraising transactions raising $26.6bn for AI start-ups.

AI continues to advance at an astonishing pace, with a major milestone reached in natural language understanding (NLU) in June 2019, when a model from the Google Brain team achieved ‘superhuman’ performance. Since then, teams from the likes of Baidu, Alibaba, Microsoft and Facebook have done the same. In natural language processing (NLP), Microsoft has been developing a transformer-based generative language model (T-NLG) that aims to let machines generate words to complete unfinished sentences based on context, respond to questions with direct answers and summarise an article as accurately as a human would. To achieve this, its model features 17 billion parameters – essentially learning from all the text published on the internet – compared with around 26 million in a typical image-recognition model.
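The core idea behind such generative language models – predicting the most likely next word from the words that came before – can be illustrated with a deliberately tiny sketch. The toy below uses simple bigram counts rather than a 17-billion-parameter transformer; the corpus, function names and greedy decoding are all illustrative assumptions, not Microsoft's method:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count word-to-next-word transitions in a training corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def complete(model, prompt, max_words=5):
    """Greedily append the most likely next word until no continuation exists."""
    words = prompt.lower().split()
    for _ in range(max_words):
        options = model.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

# toy corpus standing in for 'all the text published on the internet'
corpus = [
    "the model generates text",
    "the model generates text",
    "the model learns fast",
]
model = train_bigram_model(corpus)
print(complete(model, "the model"))  # → "the model generates text"
```

A real transformer replaces the bigram counts with billions of learned parameters attending over the whole context, but the generation loop – predict, append, repeat – has the same shape.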

Microsoft’s model has already surpassed human performance, achieving 98% grammatical correctness and 96% factual correctness. The implications are profound: today, the relative scarcity and cost of collecting the data required to train machines is preventing wider adoption of real-time analytics. Transformer-based models, which can be pre-trained once on vast amounts of public text and then adapted to new tasks with far less labelled data, have the potential to reshape multiple industries.

Significant progress has also been made in using AI to solve complex games, building on DeepMind’s AlphaGo victory over Lee Sedol in 2016. In poker, Pluribus, an AI engine designed by Carnegie Mellon University and Facebook, beat 12 professional players over more than 10,000 hands. By playing millions of hands of poker against copies of itself, Pluribus was able to use a limited look-ahead algorithm – searching only a few moves deep and estimating the value of positions from there – rather than playing every decision tree through to the end of the game. This allowed it to ‘solve’ poker in just eight days using a single 64-core server, and just 28 cores during live play – remarkable when you consider that DeepMind’s AlphaGo victory used 1,920 CPUs and 280 GPUs. In October, Google-owned DeepMind’s StarCraft II AI reached Grandmaster status in the real-time strategy game, besting 99.8% of human players. It was trained first on replays of professional games, then through simulated gameplay against itself over 44 days – the equivalent of 200 human years of play.
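The limited look-ahead idea – searching only a few moves deep and scoring positions with a heuristic instead of playing every branch to the end – can be sketched on a far simpler game. The toy below (a take-1-to-3-stones game with an illustrative heuristic; nothing here reflects Pluribus's actual algorithm) shows the shape of the technique:

```python
def position_value(stones, depth, maximizing):
    """Depth-limited minimax: instead of searching to the end of the game,
    stop at `depth` and score the position with a heuristic."""
    if stones == 0:
        # the player who just moved took the last stone and won
        return -1.0 if maximizing else 1.0
    if depth == 0:
        # heuristic: multiples of 4 are losing for the player to move
        mover_winning = stones % 4 != 0
        score = 0.5 if mover_winning else -0.5
        return score if maximizing else -score
    moves = range(1, min(3, stones) + 1)
    values = (position_value(stones - m, depth - 1, not maximizing) for m in moves)
    return max(values) if maximizing else min(values)

def best_move(stones, depth):
    """Pick how many stones (1-3) to take using limited look-ahead."""
    return max(range(1, min(3, stones) + 1),
               key=lambda m: position_value(stones - m, depth - 1, False))

print(best_move(5, depth=3))  # → 1 (leave 4 stones, a losing position for the opponent)
```

The pay-off is the same as in poker: a shallow search plus a position estimate is vastly cheaper than exhaustively enumerating every way the game could finish.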

More recently, NVIDIA has used a generative adversarial network (GAN) to recreate PAC-MAN without an underlying game engine. Rather than accessing the game’s code, two competing neural networks were trained on 50,000 episodes of gameplay, producing output convincing enough to pass for the original and allowing the AI to learn the rules of the game.
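The adversarial dynamic at the heart of a GAN – a generator adjusting its output to fool a discriminator, while the discriminator adjusts to tell real from fake – can be sketched with a single-parameter toy. The numbers, learning rate and one-dimensional set-up below are illustrative assumptions, a world away from GameGAN's image-generating networks:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def discriminate(a, b, x):
    """Probability the discriminator assigns to x being a 'real' sample."""
    return sigmoid(a * x + b)

def discriminator_step(a, b, x_real, x_fake, lr=0.1):
    """One gradient-ascent step on log D(x_real) + log(1 - D(x_fake)),
    nudging the discriminator to score real samples higher than fakes."""
    s_r = discriminate(a, b, x_real)
    s_f = discriminate(a, b, x_fake)
    grad_a = (1 - s_r) * x_real - s_f * x_fake
    grad_b = (1 - s_r) - s_f
    return a + lr * grad_a, b + lr * grad_b

def generator_step(a, b, w, lr=0.1):
    """One gradient-ascent step for the generator (a single parameter w
    that *is* its output) on log D(w): it shifts its output toward
    whatever the discriminator currently believes is real."""
    s_f = discriminate(a, b, w)
    return w + lr * (1 - s_f) * a

# one adversarial round: a 'real' sample at 2.0, generator output at -1.0
a, b, w = 0.0, 0.0, -1.0
a, b = discriminator_step(a, b, x_real=2.0, x_fake=w)
w = generator_step(a, b, w)  # w moves toward the real sample
```

Repeating this round many times, each network improves by exploiting the other's weaknesses – the same tug-of-war that lets a full-scale GAN learn to produce convincing gameplay footage.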

However, there is still much work to do. In the case of PAC-MAN, the neural networks trained themselves incorrectly, with ghosts closely trailing, rather than making contact with, the titular protagonist. More worryingly, GANs have been exploited to create the ‘deepfakes’ seen on social media, in which faces are transposed or voices transformed to give the impression that someone did or said something they did not. Last year, a manipulated video of Nancy Pelosi, in which her speech appeared slow and slurred, was shown on Facebook, leading Rudy Giuliani to question her mental state. The issue is serious enough that several US states have passed laws banning the use of deepfakes to interfere with elections.

Despite these – and other – challenges, we take comfort from the fact that controversy often accompanies the rapid diffusion of new technologies and that, in time, appropriate governance will be established to support the rapid and healthy development of AI, one of the most powerful technologies in decades.