Proposed federal legislation would require guidelines for AI

The U.S. Senate Committee on Commerce, Science & Transportation holds a hearing to examine the need to protect Americans' privacy in Washington, D.C., on July 11. (Credit: Renee Bouchard / U.S. Senate photo)


New federal legislation has been introduced to crack down on artificial intelligence, or AI.

The bill, nicknamed the COPIED Act, would create federal guidelines for authenticating original content and detecting content created with AI.

U.S. Sen. Maria Cantwell, a Democrat from Washington state, introduced the bill earlier this month, along with U.S. Sens. Martin Heinrich and Marsha Blackburn. Cantwell said AI is being used to convincingly tailor video and other digital content in order to influence people ahead of the November election. 

“The Department of Justice reported that it dismantled a Russian bot farm intended to sow discord in the United States, using AI,” Cantwell said.

Ryan Calo, a professor at the University of Washington and co-director of the university's Tech Policy Lab, said AI makes it easier for adversaries of all kinds, domestic and foreign, to create plausible-looking and very damaging misinformation campaigns.

“Russian disinformers are not going to comply with our laws, and so I think part of the response has to be political and economic,” Calo said. “And that’s one of the reasons the federal government needs to get involved.”

Placing more responsibility on the platforms to disincentivize misinformation is key, Calo said. 

AI’s influence over politics was one of the topics covered in the July 11 hearing about the COPIED Act, but so was the issue of AI in creative works. 

The legislation would also require new standards to be developed for watermarking original content created by artists, journalists and musicians. 

Multiple news and entertainment organizations across the country have voiced support for the legislation with written statements.

“Deepfakes pose a significant threat to the integrity of broadcasters’ trusted journalism, especially during an election year when accurate information is paramount,” said Curtis LeGeyt, the president and CEO of the National Association of Broadcasters. 

The legislation would also require new cybersecurity measures to be developed to prevent tampering with original content.

“For SAG-AFTRA, protecting the ability of our members to control their images, likenesses, and voices is paramount. The capacity of AI to produce stunningly accurate digital representations of performers poses a real and present threat to the economic and reputational well-being and self-determination of our members,” said Duncan Crabtree-Ireland, the national executive director and chief negotiator for SAG-AFTRA.

The legislation would also allow content owners to sue platforms that use their content without permission.