A bipartisan coalition of twenty-three state attorneys general has called for a risk-based regulatory approach to artificial intelligence (AI) that reflects a nuanced understanding of the technology's challenges. They emphasize the need to prioritize reliability and security in AI systems, striking a delicate balance between innovation and consumer protection.

The U.S. government's regulatory guidance for artificial intelligence should be risk-based to ensure the technology is reliable and secure, a bipartisan coalition of 23 state attorneys general told the Biden administration Tuesday.

In a letter to the National Telecommunications and Information Administration, submitted in response to the agency's request for comment on its AI policies, the group urged federal agencies to promote a framework that prioritizes mitigating the potential risks AI poses to consumers.

The NTIA is seeking public feedback on how best to audit AI systems and ensure they are trustworthy as it develops recommendations for responsible AI innovation. The Commerce Department agency's focus on the topic follows the release of an AI risk management framework by the White House, and comes as the already fast-growing tech sector stands to expand even further.

news.bloomberglaw.com/...
