Whether you think artificial intelligence will save the world or end it, there’s no doubt that we’re living in a moment of great excitement. And AI as we know it might not have existed without Yoshua Bengio.
Nicknamed the “godfather of artificial intelligence,” Bengio, 60, is a Canadian computer scientist who has devoted his research to neural networks and deep learning algorithms. His pioneering work paved the way for the AI models we use today, like OpenAI’s ChatGPT and Anthropic’s Claude.
“Intelligence gives power, and whoever controls that power – whether at the human level or above – will be very, very powerful,” Bengio said in an interview with Yahoo Finance. “Technology in general is used by people who want more power: economic domination, military domination, political domination. So before we create technology that could concentrate power in dangerous ways, we have to be very careful.”
In 2018, Bengio won the Turing Award, often called the Nobel Prize of computing, alongside two colleagues: Geoffrey Hinton, a former Google (GOOG) vice president and winner of the 2024 Nobel Prize in Physics, and Yann LeCun, chief AI scientist at Meta (META). In 2022, Bengio was the most cited computer scientist in the world, and Time magazine named him one of the 100 most influential people in the world.
Although he helped invent the technology, Bengio has now become a voice of caution in the world of AI. That caution comes as investors continue to show great enthusiasm for the space, bidding AI-related stocks to new highs this year.
AI chip darling Nvidia (NVDA), for example, is up 172% year to date, versus the S&P 500’s (^GSPC) 21% gain. The company is now valued at $3.25 trillion, according to Yahoo Finance data, trailing Apple (AAPL) slightly for the title of most valuable company in the world.
I interviewed Bengio about the potential threats of AI and the tech companies leading the race.
The interview has been edited for length and clarity.
Yasmin Khorram: Why should we be concerned about human-level artificial intelligence?
Yoshua Bengio: If it falls into the wrong hands, whatever that means, it could be very dangerous. These tools could soon help terrorists, as well as state actors who want to destroy our democracies. And then there is the problem many scientists have pointed out: with the way we currently train these systems, it is not clear how we could prevent them from becoming autonomous and developing their own self-preservation goals, and we could lose control of them. So we may be on a path to creating monsters that could be more powerful than us.
OpenAI, Meta, Google, Amazon: which major AI player is getting it right?
Morally, I would say the best-behaved company is Anthropic (whose major investors include Amazon (AMZN) and Google (GOOG)). But I think they all have biases because of the economic structure in which their survival depends on being among the leading companies and, ideally, being the first to reach AGI (artificial general intelligence). And that means a race, an arms race between companies, where public safety risks being the loser.
Anthropic is giving many signs that they care a lot about avoiding catastrophic consequences. They were the first to propose a safety policy that includes a commitment to stop the effort if the AI ends up with capabilities that could be dangerous. They are also the only ones, along with Elon Musk, to have supported SB 1047 (the California AI regulation bill). In other words, saying: “Yes, with some improvements, we agree to more transparency on safety procedures and results, and to accountability if we cause major damage.”
What do you think about the huge rise in AI stocks, like Nvidia?
What seems very certain to me is the long-term trajectory. So if you’re in this for the long term, it’s a pretty safe bet. Except that if we fail to protect the public, … (then) the reaction could be such that everything collapses, right? Either because there is a societal backlash against AI in general, or because truly catastrophic things happen and our economic structure collapses.
Either way, it would be bad for investors. So I think investors, if they were smart, would understand that we need to proceed with caution and avoid the kinds of mistakes and disasters that could harm our collective future.
What do you think of the AI chip race?
I think chips are clearly becoming an important piece of the puzzle and, of course, they are a bottleneck. It is very likely that the need for enormous amounts of compute is not going to disappear given the kinds of developments and scientific advances I can imagine in the years to come, so it will be of strategic value to have computing capabilities, meaning high-end AI chips, and every stage of the supply chain will be important. Very few companies are able to do this right now, so I expect to see a lot more investment and, hopefully, some diversification.
What do you think about Salesforce introducing a billion autonomous agents by 2026?
Autonomy is one of the objectives of these companies, and there is a good economic reason for it. Commercially, it would be a major step forward in terms of the number of applications it opens up. Think about all the personal assistant apps: they require much more autonomy than current state-of-the-art systems can offer. So it’s understandable that they would aim for something like this. The fact that Salesforce (CRM) thinks it can get there in two years, to me, is concerning. We need to put safeguards in place, both governmental and technological, before that happens.
Governor Newsom vetoed California’s SB 1047. Was it a mistake?
He didn’t give reasons that seemed logical to me, like wanting to regulate not only the large systems but all the small ones too. … It’s possible things could change quickly; we’re talking about a few years. And even if it’s a small probability, like a 10% (risk of disaster), we need to be ready. We need regulation. We already need to incentivize companies to document what they do in a way that is consistent across the industry.
The other problem is that companies feared lawsuits. I have spoken to many of these companies, but there is already tort law, so lawsuits could be filed at any time if they cause harm. And what the bill did on liability was narrow the scope for lawsuits. … There were 10 conditions, and all of them had to be met for the law to support a lawsuit. So I think it actually helped. But there is ideological resistance to any intervention, to anything that is not the status quo, to any further state involvement in the affairs of these AI labs.
Yasmin Khorram is a senior reporter at Yahoo Finance. Follow Yasmin on Twitter/X @YasminKhorram and on LinkedIn. Send newsworthy tips to Yasmin: yasmin.khorram@yahooinc.com