On April 14, 2023, an entirely AI-generated simulation of the Joe Rogan Experience podcast was released, built with tools such as ChatGPT. The episode depicted a simulated conversation between Joe Rogan and Sam Altman, complete with their respective commentary. A similar release featured an entirely AI-generated Joe Rogan interview of former President Donald Trump. Realistic manipulation of video content is the Deep Fakes threat the world confronted in 2019, when artists released a Deep Fake of Mark Zuckerberg that put a sinister narrative about Facebook's monopolistic plans for data in his mouth. Another doctored video that year falsely depicted Congresswoman Nancy Pelosi as drunk and slurring her words.

The threat posed by Deep Fakes and artificial intelligence has led to calls for new legislation to combat false impersonations and other forms of disinformation. One bill, H.R. 2395 (the DEEP FAKES Accountability Act), was introduced in Congress in 2021.

Disinformation has emerged as a significant global threat to national security (the broader military term "information operations" treats disinformation as one subcomponent of irregular warfare). Many adversaries of the West engage in disinformation tactics, such as the social media campaigns of the infamous Russian Internet Research Agency. The acceleration of AI technology like ChatGPT therefore points to a far more troubling disinformation threat environment, and counter-efforts against advanced threats like AI seem nonexistent. As usual in cyberspace, the defenders are playing catch-up. Fortunately, in my view, Web3 presents a "leap ahead" way to rebalance things, provided the right approach is charted and the right team is assembled. What follows in this article is a Call to Action for Cyber Law!

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.