Artificial Intelligence - Listen to its creators
There are a lot of opinions about the potential for AI to severely disrupt modern civilization by somehow turning against or being turned against us.
One of the creators of AI, Geoffrey Hinton, resigned from Google to warn us. A second, Yann LeCun, says, no, we should be fine.
Let's hear from the third co-creator of AI:
Yoshua Bengio. Together with Geoffrey Hinton and Yann LeCun, he is known as one of the "godfathers of AI". The three were the 2018 recipients of the Turing Award, the computing-science equivalent of the Nobel Prize, for a series of breakthroughs in deep learning credited with paving the way for the current AI boom.
Professor Bengio, from the University of Montreal, has historically been described as an AI optimist and is known as one of the most measured voices in his field.
So, what does he say?
He said last week (mid-July 2023) that, in his opinion, yes, we are travelling too quickly down a risky path.
“We don’t know how much time we have before it gets really dangerous,” Professor Bengio says.
“What I’ve been saying now for a few weeks is ‘Please give me arguments, convince me that we shouldn’t worry, because I’ll be so much happier.’
“And it hasn’t happened yet.”
“I got around, like, 20 per cent probability that it turns out catastrophic.”
Professor Bengio arrived at the figure based on several inputs, including a 50 per cent probability that AI would reach human-level capabilities within a decade, and a greater than 50 per cent likelihood that AI or humans themselves would turn the technology against humanity at scale.
“I think that the chances that we will be able to hold off such attacks is good, but it’s not 100 per cent … maybe 50 per cent,” he says.
As a result, after almost 40 years of working to bring about more sophisticated AI, Yoshua Bengio has decided in recent months to push in the opposite direction, in an attempt to slow it down.
“Even if it was 0.1 per cent [chance of doom], I would be worried enough to say I’m going to devote the rest of my life to trying to prevent that from happening,” he says.
The Rubicon moment he’s thinking of is when AI surpasses human capabilities.
That milestone, depending how you measure it, is referred to as artificial general intelligence (AGI) or more theatrically, the singularity.
Definitions vary, but every expert agrees that a more sophisticated version of AI that surpasses human capabilities in some, if not all, fields is coming, and the timeline is rapidly shrinking.
Like most of the world, Professor Bengio had always assumed we had decades to prepare, but thanks in no small part to his own efforts, that threshold is now much closer.
“I thought, ‘Oh, this is so far in the future that I don’t need to worry. And there will be so many good things in between that it’s worth continuing’,” he says.
“But now I’m much less sure.”
So, what is the take-away, people?
We have done it once, by allowing a weak UN to let veto powers with nukes go to war against whomever they want.
We have done it twice, by watching on as weak global governance has proved unable to tackle the climate problem.
Are we doing it a third time, with a silent global population watching on and simply accepting that there are no international mechanisms in place to hold private actors to account in the AI space?
When will the silent majority move?
When will the quiet majority of the world take the narrative away from the noisy few who will always scaremonger us into thinking globalism is bad, and insist we have strong global processes for regulating globally important things?
I vote we don't leave it until it's too late. I vote now.
Come on, quiet majority.
Vote now.