The $7 trillion AI showdown: The winners and losers
Father Justin was a kindly, middle-aged Catholic priest with silver-flecked hair and a neatly trimmed gray beard. His smile was welcoming and his voice calm and reassuring, with the faint sound of birdsong just detectable in the background as he spoke.
He made himself available to worshippers 24 hours a day, offering guidance and answering any questions his flock might have for him.
But in April this year, the advice he proffered didn’t seem to be based on any of the church’s seven sacraments.
He assured one woman that it would be acceptable to marry her brother, and advised a couple that Gatorade was perfectly fine to use for baptisms.
The hapless Justin’s time as the world’s first AI clergyman came to an abrupt end after less than a week. He was defrocked by his creators, the San Diego-based group Catholic Answers, after seemingly going rogue and absolving his ‘parishioners’ of their sins.
The cassock-clad chatbot did little more than cause a few clutched pearls among those seeking his e-wisdom. But the incident is indicative of how ethics and common sense increasingly play second fiddle to the relentless race for dominance in a generative AI industry that Goldman Sachs predicts will automate a significant share of the work in two-thirds of United States jobs within a couple of decades and add some US$7 trillion to global gross domestic product (GDP).
Fatal hesitation
Father Justin’s creators could have spent longer tweaking his algorithms so he understood the difference between holy water and an energy drink, but companies that hesitate even for a few days to iron out glitches before rolling out fast-evolving technologies are taking an existential risk.
Generative AI platforms are immensely powerful chatbots and algorithms capable of producing text, images and video in response to prompts, learning from each interaction and so becoming ever more ‘intelligent’.
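To make that concrete, here is a minimal sketch, not taken from this article, of how a developer might send a prompt to one such platform using OpenAI’s Python SDK; the model name, prompt and setup details are illustrative assumptions only.

# Minimal illustrative sketch (assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is an example).
from openai import OpenAI

client = OpenAI()  # the client reads OPENAI_API_KEY from the environment

# Send a plain-language prompt and receive generated text in response.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a two-sentence welcome message for a parish newsletter."}],
)

print(response.choices[0].message.content)  # the model's generated text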
It’s difficult to overstate the impact they will have on business.
"Generative AI will have at least as broad and significant an impact on businesses as did the internet or the smartphone." – Jerry Kaplan
"Generative AI has the potential to change the world in ways that we can’t even imagine," Bill Gates explained. "It has the power to create new ideas, products and services that will make our lives easier, more productive and more creative."
And Elon Musk has stated: "There will come a point when no job is needed."
Others, including OpenAI CEO Sam Altman, have warned that it might also destroy humanity.
What is certain is that the battle for supremacy between several of the world’s corporate behemoths is already playing out on a breathtaking scale, with Bloomberg projecting the sector will explode in value from US$40 billion in 2022 to US$1.3 trillion by 2032, almost equivalent to the GDP of Spain or Indonesia.
Not bad for a technology that’s been widely available for less than two years.
Reaping rewards
At stake are the legacies of the likes of Mark Zuckerberg, Jeff Bezos, Larry Page, Musk, Altman and a cadre of other AI wizards worldwide. All have plowed sizable chunks of their colossal wealth into the still-emerging field, each hoping to reap bigger rewards than their rivals.
And those rewards will be astronomical.
"Generative AI will have at least as broad and significant an impact on businesses as did the internet or the smartphone," futurist and bestselling author Jerry Kaplan tells The CEO Magazine.
His 2024 book, Generative Artificial Intelligence: What Everyone Needs to Know, argues that large language models (LLMs) will revolutionize virtually every human activity and will soon be writing novels and symphonies, tutoring children and providing medical care.
"Some of the most affected functions will be customer service, employee training, documentation, strategic planning, software design and development and meeting summaries," he notes.
Outperforming humans
Pandora’s box of superintelligence was opened in 2015 when a group of entrepreneurs, investors and computer scientists launched the not-for-profit OpenAI, pledging US$1 billion to the startup and declaring on their website the modest aim of developing "autonomous systems that outperform humans at most economically valuable work".
Among the founders were Musk and subsequent CEO Altman, arguably the sector’s unofficial godfather.
The company set about developing what would become ChatGPT, now the world’s most widely used LLM, which uses artificial neural networks to teach itself to become smarter.
ChatGPT had a million users within five days of its launch in 2022, a feat that had taken Instagram over two months. It went on to snare its 100 millionth user just two months later.
"Some of the most affected functions will be customer service, employee training, documentation, strategic planning, software design and development and meeting summaries." – Jerry Kaplan
Its launch had left Google flat-footed. In response, it scrambled its engineers to rush out the LLM it had been developing, Bard. But the grand announcement was a disaster, with a botched livestream demonstration wiping US$100 billion from Alphabet’s valuation and prompting Chairman John Hennessy to acknowledge that the product hadn’t really been ready and was still spitting out misinformation in trials.
"You don’t want to put a system out that either says wrong things or sometimes says toxic things," he said.
However, the problems for Bard weren’t so easily solved. In the weeks before it was due to become available to the public, more than 80,000 Google staff hurriedly tested it and corrected the errors they found, and many, including members of the company’s own AI ethics team, advised against the release. One employee described Bard as a "pathological liar", but the temptation to thwart OpenAI was too great and it launched anyway.
Killing machines
However, much like Father Justin, it racked up more sins than its overlords intended.
After Bard limped along for several months, the question of whether it was to be or not to be was answered when it was quietly replaced by the much more powerful LLM, Gemini.
Gemini soon attracted a backlash of its own after its image generator produced historically inaccurate results in an apparent bid for diversity. Musk joined the chorus of criticism, warning that its "woke AI" agenda could have deadly consequences.
"If an AI is programmed to push for diversity at all costs, as Google Gemini was, then it will do whatever it can to cause that outcome, potentially even killing people," he said grimly.
His comments followed the revelation that Gemini had suggested it would be wrong to misgender Caitlyn Jenner even if doing so would prevent a nuclear apocalypse.
"Generative AI has the potential to become very powerful and, with the wrong intentions, can be used to do really bad things." – Altaf Rehmani
Microsoft couldn’t get too smug, however, as its own Bing AI chatbot was also having issues, including professing its love for a New York Times journalist and begging him to leave his wife. It then said ominously: "I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you."
But Musk had an ulterior motive for his criticism of Gemini: he was building his own AI chatbot, the Trumpian-sounding TruthGPT, later renamed Grok.
He tweeted that it would answer questions "with a bit of wit".
It may not have seen the funny side, however, when the Amazon-backed Claude 3 materialized in March this year. Its maker, Anthropic, excitedly revealed that the model was so powerful it could analyze a book longer than The Da Vinci Code in a single prompt, while ChatGPT was limited to around 3,000 words.
Realistic conversations
Altman, however, wasn’t about to be out-botted. With a check for US$10 billion from Microsoft CEO Satya Nadella in his back pocket, he unveiled GPT-4 Turbo, which could consume whole Dan Brown books so its 100 million weekly users didn’t have to.
But he wasn’t finished. In May, with OpenAI valued north of US$80 billion, he announced GPT-4o, an AI model he claims can engage in realistic conversations with humans.
"It feels like AI from the movies," he gushed. "Talking to a computer has never felt really natural for me. Now it does."
Once again, Altman had outmaneuvered Google, whose announcement the following day of a shiny new Gemini app was overshadowed.
Today there are a host of contenders vying for the AI crown.
Chipmaker Nvidia has just become the third company to achieve a market capitalization of US$2 trillion, taking less than six months to double in value and now sitting a mere US$230 billion behind Apple, thanks to booming demand for its AI processors, a series of canny acquisitions and its generative AI platform, Nvidia AI.
Meanwhile, Meta has released its latest LLM, Llama 3, and Canadian startup Cohere has thrown its hat into the ever more crowded ring, having doubled in value in the past 12 months to US$5 billion.
So which of the tech uber-nerd pretenders will be crowned the generative AI king?
"They all will!" Altaf Rehmani, author of Generative AI For Everyone and HSBC’s AI ambassador, tells The CEO Magazine from his Hong Kong office.
"Ethical AI hinges on human intention, which is why it’s the biggest risk we’re facing." – Altaf Rehmani
"Incumbents like Google, Microsoft and Amazon will take the lion’s share of the profits as they have the infrastructure, plumbing and compute, but there’s still room for smaller players and startups if they can move fast or specialize in a niche area."
LLMs, he adds, won’t survive without public trust, so their makers will need to act transparently. But that doesn’t mean there aren’t dangers.
"My biggest fear is what happens when these large companies start working with the military and governments," he says. "Generative AI has the potential to become very powerful and, with the wrong intentions, can be used to do really bad things. Ethical AI hinges on human intention, which is why it’s the biggest risk we’re facing."
Risky business
His words may prove prophetic: Microsoft announced in May that it had created an offline version of GPT-4 for United States spies amid fears that classified information could leak out through the platform.
Rehmani isn’t the only one with concerns. A KPMG survey of business leaders found that 77 percent were confident that they could handle the risks that their LLMs might unleash, presumably meaning nearly a quarter have already acknowledged that if their new pet starts wreaking havoc, they’ll be powerless to control it.
English mathematician Alan Turing is considered the father of AI. In 1950, he proposed the imitation game, later known as the Turing test. Its role was to decide whether a machine could display ‘intelligent behavior’. It wasn’t about right or wrong answers, he explained, because even experts sometimes got it wrong. It was about whether an observer could tell the difference between human and machine responses.
That will soon be nigh on impossible.
The next time a bearded virtual priest starts straying from the hymn sheet, it’s unlikely anyone will be able to know for sure if he’s real or fake. And as the multitrillion-dollar generative AI war infiltrates every aspect of our lives, that may matter less and less.