Democratic senator proposes new federal agency to regulate AI
Sen. Michael Bennet says Congress must create 'an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest'
Colorado Sen. Michael Bennet on Thursday unveiled legislation that would create a new federal agency to regulate artificial intelligence, an effort that comes just days after OpenAI CEO Sam Altman testified to Congress on the need for government oversight of AI technologies.
Bennet's bill, obtained by Fox Business, would create a Federal Digital Platform Commission with broad powers to make rules governing companies that provide "content primarily generated by algorithmic processes." The bill defines "algorithmic process" to include computer software that makes decisions or generates content, two of the most powerful features of AI programs like OpenAI's ChatGPT.
"There’s no reason that the biggest tech companies on Earth should face less regulation than Colorado’s small businesses – especially as we see technology corrode our democracy and harm our kids' mental health with virtually no oversight," Bennet said in a statement. "Technology is moving quicker than Congress could ever hope to keep up with. We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest."
The senator's proposal would create requirements to ensure that "algorithmic processes" belonging to "systemically important" companies are "fair, transparent, and without harmful, abusive, anticompetitive, or deceptive bias." It would allow the commission to conduct audits and regular public risk assessments, and to implement transparency requirements, including for content moderation policies.
In addition to AI platforms, the bill grants the proposed commission broad authority to regulate social media websites, search engines and other digital platforms. It would create a Code Council of technical experts from the industry and "civil society" to make recommendations for the commission to consider, like transparency standards.
The commission would have five members, with a chair appointed by the president and confirmed by the Senate. No more than three members of the same political party could serve on the commission at once, according to the legislative text.
In testimony before Congress earlier this week, OpenAI CEO Sam Altman invited government regulation of AI platforms to "mitigate" risks.
"As this technology advances, we understand that people are anxious about how it could change the way we live. We are too. But we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind. And this means that U.S. leadership is critical," Altman said Tuesday.
"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman added.
Lawmakers in both parties have shown an increasing appetite to develop regulations governing AI, and Sen. John Kennedy, R-La., even suggested Altman could be appointed as the chief of a new regulatory agency.
"I love my current job," Altman told lawmakers in response to Kennedy's suggestion.
Whether a new federal agency is needed came up several times at the Senate hearing. Both Altman and New York University professor emeritus Gary Marcus said they support creating one.
However, IBM's chief privacy and trust officer, Christina Montgomery, argued against a new agency and said AI risks should be managed through the federal government's existing infrastructure. She also argued that AI should be regulated based on how it is used, with tougher rules imposed on riskier applications.
Marcus expressed alarm at emerging AI technology and at times argued that the companies building AI should not be trusted to decide on their own how it is regulated. He also recommended deploying independent scientists to verify that companies are complying with AI rules.
Fox News' Peter Kasperowicz contributed to this report.