OpenAI, IBM Urge Senate to Act on AI Regulation After Past Tech Failures
(Bloomberg) -- The creator of ChatGPT and the privacy chief of International Business Machines Corp. both called on US senators during a hearing Tuesday to more heavily regulate artificial intelligence technologies that are raising ethical, legal and national security concerns.
Speaking to a Senate Judiciary subcommittee, OpenAI Chief Executive Officer Sam Altman praised the potential of the new technology, which he said could solve humanity’s biggest problems. But he also warned that artificial intelligence is powerful enough to change society in unpredictable ways, and “regulatory intervention by governments will be critical to mitigate the risks.”
“My worst fear is that we, the technology industry, cause significant harm to the world,” Altman said. “If this technology goes wrong, it can go quite wrong.”
IBM’s Chief Privacy and Trust Officer Christina Montgomery focused on a risk-based approach and called for “precision regulation” on how AI tools are used, rather than how they’re developed.
The senators openly questioned whether Congress is up to the task. Political gridlock and heavy lobbying from big technology firms have complicated efforts in Washington to set basic guardrails for challenges including data security and child protections for social media. And as senators pointed out in their questions, the deliberative process of Congress often lags far behind the pace of technology advancements.
Demonstrating AI’s power to deceive, Senator Richard Blumenthal, the Connecticut Democrat who chairs the panel, played an AI-written and produced recording that sounded exactly like him during his opening statement. While he urged AI innovators to work with regulators on new restrictions, he recognized that Congress hasn’t passed adequate protections for existing technology.
“Congress has a choice now. We had the same choice when we faced social media,” Blumenthal said. “Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”
Several senators advocated for a new regulatory agency with jurisdiction over AI and other emerging technologies. Altman welcomed that suggestion as a way for the US to continue leading on the technology that springs from American companies.
But Gary Marcus, a New York University professor who testified alongside Altman and Montgomery, warned that a new agency created to police AI would risk being captured by the industry it’s supposed to regulate.
Lawmakers questioned the potential for dangerous disinformation and the biases inherent in AI models trained on internet content. They raised the risks that AI-fabricated content poses for the democratic process, while also fretting that global adversaries like China could surpass US capabilities.
Blumenthal asked about “hallucinations” when AI technology gets information wrong. Tennessee Republican Marsha Blackburn asked about protections for singers and songwriters in her home state, drawing a pledge from Altman to work with artists on rights and compensation.
Missouri Senator Josh Hawley, the ranking Republican on the subcommittee, asked whether AI will prove as transformative as the printing press, disseminating knowledge more widely, or as destructive as the atomic bomb.
“To a certain extent, it’s up to us here, and to us as the American people, to write the answer,” Hawley said. “What kind of technology will this be? How will we use it to better our lives?”
Much of the discussion focused on generative AI, which can produce images, audio and text that seem human-crafted. OpenAI has driven many of these developments by introducing products like ChatGPT, which can converse or produce human-like, but not always accurate, blocks of text, as well as DALL-E, which can produce fantastical or eerily realistic images from simple text prompts.
But there are boundless other ways that machine learning is being deployed across the modern economy. Recommendation algorithms on social media rely on AI, as do programs that analyze large data sets or weather patterns.
The Biden administration has put forth several non-binding guidelines for artificial intelligence. The National Institute of Standards and Technology in January released a voluntary risk management framework to manage the most high-stakes applications of AI. The White House earlier this year published an “AI Bill of Rights” to help consumers navigate the new technology.
Federal Trade Commission Chair Lina Khan pledged to use existing law to guard against abuses enabled by AI technology. The Department of Homeland Security last month created a task force to study how AI can be used to secure supply chains and combat drug trafficking.
In Tuesday’s hearing, Altman focused his initial policy recommendations on mandatory registration for AI models above a certain level of sophistication. He said companies should be required to obtain a license to operate and to conduct a series of tests before releasing new AI models.
Montgomery said policymakers should require AI products to be transparent about when users are interacting with a machine. She also touted IBM’s AI ethics board, which provides internal guardrails that Congress has yet to set.
“It’s often said that innovation moves too fast for government to keep up,” Montgomery said. “But while AI may be having its moment, the moment for government to play its proper role has not passed us by.”
©2023 Bloomberg L.P.