Thursday, December 12, 2024
A.I. regulation is just getting started.

Regulating artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and news conferences and the White House announcing voluntary A.I. safety commitments by seven technology companies on Friday.

But a closer look at the activity raises questions about how meaningful the actions are in setting policies around the rapidly evolving technology.

The answer is that they are not very meaningful yet. The United States is only at the beginning of what is likely to be a long and difficult path toward the creation of A.I. rules, lawmakers and policy experts said. While there have been hearings, meetings with top tech executives at the White House and speeches to introduce A.I. bills, it is too soon to predict even the roughest sketches of regulations to protect consumers and contain the risks the technology poses to jobs, security and the spread of disinformation.

“This is still early days, and no one knows what a law will look like yet,” said Chris Lewis, president of the consumer group Public Knowledge, which has called for the creation of an independent agency to regulate A.I. and other tech companies.

The United States remains far behind Europe, where lawmakers are preparing to enact an A.I. law this year that would put new restrictions on what are seen as the technology’s riskiest uses. In contrast, there remains a lot of disagreement in the United States on the best way to handle a technology that many American lawmakers are still trying to understand.

That suits many of the tech companies, policy experts said. While some of the companies have said they welcome rules around A.I., they have also argued against tough regulations akin to those being created in Europe.

Here’s a rundown on the state of A.I. regulations in the United States.

The Biden administration has been on a fast-track listening tour with A.I. companies, academics and civil society groups. The effort began in May when Vice President Kamala Harris met at the White House with the chief executives of Microsoft, Google, OpenAI and Anthropic and pushed the tech industry to take safety more seriously.

On Friday, representatives of seven tech companies appeared at the White House to announce a set of principles for making their A.I. technologies safer, including third-party security checks and watermarking of A.I.-generated content to help stem the spread of misinformation.

Many of the practices that were announced had already been in place at OpenAI, Google and Microsoft, or were on track to take effect. They don’t represent new regulations. Promises of self-regulation also fell short of what consumer groups had hoped.

“Voluntary commitments are not enough when it comes to Big Tech,” said Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must put meaningful, enforceable guardrails in place to ensure the use of A.I. is fair, transparent and protects individuals’ privacy and civil rights.”

Last fall, the White House introduced a Blueprint for an A.I. Bill of Rights, a set of guidelines on consumer protections involving the technology. The guidelines also are not regulations and are not enforceable. This week, White House officials said they were working on an executive order on A.I. but did not reveal details or timing.

The loudest drumbeat on regulating A.I. has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include the creation of an agency to oversee A.I., liability for A.I. technologies that spread disinformation and the requirement of licensing for new A.I. tools.

Lawmakers have also held hearings about A.I., including a hearing in May with Sam Altman, the chief executive of OpenAI, which makes the ChatGPT chatbot. Some lawmakers have floated ideas for other regulations during the hearings, including nutrition-style labels to notify consumers of A.I. risks.

The bills are in their earliest stages and so far lack the support needed to advance. Last month, the Senate leader, Chuck Schumer, Democrat of New York, announced a monthslong process for the creation of A.I. legislation that included educational sessions for members in the fall.

“In many ways we’re starting from scratch, but I believe Congress is up to the challenge,” he said during a speech at the time at the Center for Strategic and International Studies.

Regulatory agencies, meanwhile, are beginning to take action, policing some of the issues arising from A.I.

Last week, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT and asked for information on how the company secures its systems and how the chatbot could potentially harm consumers through the creation of false information. The F.T.C. chair, Lina Khan, has said she believes the agency has ample power under consumer protection and competition laws to police problematic behavior by A.I. companies.

“Waiting for Congress to act is not ideal given the usual timeline of congressional action,” said Andres Sawicki, a professor of law at the University of Miami.


