Hint: It's not the wacky conspiracy theory that's emerged from the nether reaches of the social media cesspool.

The answer: Both Biden and Swift have been the subjects of dangerous and harmful deepfakes – technology that employs "deep learning" artificial intelligence to create fraudulent audio and visual content without the consent of those whose voices and images are portrayed.

Last month, Swift became the latest and perhaps most prominent victim of deepfake-generated pornography when doctored pictures of the singer circulated on X, the social media platform formerly known as Twitter. Those sexually explicit images immediately went ultra-viral – causing X to temporarily block all search results for the Grammy winner.

In the run-up to New Hampshire's Jan. 23 presidential primary election, Granite State voters received unsolicited robocall messages from an AI-generated voice impersonating President Biden that told recipients to stay home and "save your vote for the November election." The New Hampshire attorney general's office is now investigating this apparent "unlawful attempt" at voter suppression. And on Feb. 8, the Federal Communications Commission declared that robocalls using AI-generated cloned voices violate the Telephone Consumer Protection Act, banning them effective immediately.

Outside of politics, the George Carlin estate filed suit over an AI-generated "comedy special" podcast that purports to adapt the late comedian's voice, likeness and comedic stylings to a discussion of modern topics. And a Hong Kong-based finance employee was duped by a digitally manipulated impersonation of his company's CFO, which appeared on a videoconference and induced the worker to fraudulently transfer more than $25 million to multiple bank accounts, according to law enforcement officials.

As this highly partisan election season goes on, incidents such as these are only likely to proliferate, persuading credulous audiences to act on what their eyes and ears receive without pausing to fact-check the hyperrealistic images and sounds that today's AI technology can generate.

Fortunately, a bipartisan consensus in favor of placing necessary constraints on the abuse of AI technology seems to be emerging in Congress. In mid-January, eight members of the House of Representatives (four Republicans and four Democrats) sponsored the No Artificial Intelligence Fake Replicas And Unauthorized Duplication Act, otherwise known as the No AI FRAUD Act. The proposed legislation seeks to protect the "property right" inherent in the likeness and voice of both living and deceased persons by prohibiting the nonconsensual use of technological tools (defined as a "personalized cloning service") whose "primary purpose or function ... is to produce one or more digital voice replicas or digital depictions of particular, identified individuals."

Beyond curbing the use of these cloning tools to create deepfake performances, the bill would preclude the dissemination of "a digital voice replica or digital depiction" with knowledge that it was unauthorized. Those who "materially contribute" to either of these prohibited activities (by, for example, knowingly funding such conduct) could also be held liable. The No AI FRAUD Act contains provisions for significant damages – including compensating the injured party for financial and physical injuries, recovering the unauthorized user's profits, imposing a fine sufficient to deter future misconduct and paying the injured party's attorneys' fees.

Under current law, one of the significant impediments to viable defamation and privacy invasion lawsuits is the plea that the voices and images that AI generates are "fake." The No AI FRAUD Act would remedy this disclaimer loophole by making clear that it is no defense "that the individual rights owner did not participate in the creation, development, distribution, or dissemination of the unauthorized digital depiction, digital voice replica, or personalized cloning service." Even with disclaimers proclaiming a video, photo or audio recording's AI origins, the No AI FRAUD Act would still hold creators liable for "unauthorized simulation of a voice or likeness."

Finally, by characterizing the rights defined in this proposed legislation as "intellectual property," the No AI FRAUD Act would bring claims within Section 230 of the Communications Decency Act's express carve-out for intellectual property law – an exception to the near-blanket immunity that social media platforms hosting user-generated content have enjoyed under that statute for nearly three decades. Section 230 has otherwise allowed social media sites and other internet platforms to effectively avoid responsibility for defamatory, privacy-invasive and other injurious content created by third parties.

The No AI FRAUD Act has already garnered enormous support from a coalition of nearly 200 organizations that are members of the Human Artistry Campaign. These groups include the Recording Industry Association of America, the Screen Actors Guild-American Federation of Television and Radio Artists, the Directors Guilds of America and Canada, the NFL and NHL Players' Associations, the National Association of Voice Actors, The NewsGuild, the Communications Workers of America and the AFL-CIO.

Earlier this month, nearly 300 prominent performers and musicians lent their names to an ad published in USA Today supporting the No AI FRAUD Act. Those artists included Bradley Cooper, Bette Midler, Billy Porter, Cardi B, Chuck D, the Johnny Cash estate, Kristen Bell, Kristin Chenoweth, Nicki Minaj, Reba McEntire, Sean Astin, Smokey Robinson and many more.

A parallel effort is under way on the other side of Capitol Hill. On Oct. 12, 2023, four U.S. senators (two Democrats and two Republicans) released a "discussion draft" of legislation likewise intended to protect actors, singers and others from having AI programs generate their likenesses and voices without their informed written consent. That legislation, known as the Nurture Originals, Foster Art and Keep Entertainment Safe – or NO FAKES – Act, would allow people, companies and platforms to be sued for producing or hosting "digital replicas." A principal difference between the two bills is that the Senate proposal focuses more narrowly on the rights of recording artists and actors, providing a specific mechanism by which these artists can license the rights to their "image, voice, or visual likeness" for use in a "digital replica."

For those who have already been subjected to AI abuse, or who are likely to become the targets of such nonconsensual exploitation in the future, legislative relief cannot come soon enough. And since AI-generated fakery knows no partisan bounds, it is especially important for Republicans and Democrats in the House and Senate to come together to enact meaningful constraints before these troubling tools sow even further uncertainty and division in an already fraught election season.

Originally published by U.S. News.
