STORY: As companies race to develop more products powered by artificial intelligence, one top tech executive has issued a stark warning.
Microsoft President Brad Smith – whose company backed the creator of ChatGPT – on Thursday said his biggest concern around AI was deep fakes: realistic-looking but false content intended to defraud people or compromise national security.
In a speech in Washington about how best to regulate AI, Smith called for steps to ensure that people know when a photo or video is real and when it is generated by AI, potentially for nefarious purposes.
Smith on Thursday also said in a blog post that machines should be subject to human oversight, and that those humans should be accountable to (quote) “everyone else.”
In short, said Smith, “we must always ensure that AI remains under human control.”
This is especially critical, he said, when it comes to major infrastructure powered by AI, including the electric grid and water systems – and he called on Washington to create new laws requiring the operators of these systems to “build safety brakes into high-risk AI systems.”
Congress has struggled with what laws to pass to control AI even as companies large and small have raced to bring increasingly versatile AI to market.
Some proposals being considered would focus on AI applications that may put people's lives or livelihoods at risk, such as in medicine and finance.
Last week, Sam Altman, the CEO of OpenAI, the startup behind ChatGPT, told a Senate panel that the use of AI to interfere with election integrity is a "significant area of concern," adding that it needs regulation.
Altman also attended a meeting with President Joe Biden and other tech CEOs, including those of Microsoft and Alphabet, to address AI concerns.
Smith's warning came on the same day that shares of dominant AI chipmaker Nvidia soared more than 25% on the back of surging demand, putting the company on the brink of becoming the first chipmaker with a market value of $1 trillion and sparking a rally in other AI-focused firms.