A recent experiment by researchers at Stanford University's Polarization and Social Change Lab and the Institute for Human-Centered Artificial Intelligence found that AI-generated messages intended to persuade readers to reconsider their stance on a variety of hot-button policy issues were as persuasive as messages written by humans. Participants became "significantly more supportive" of certain policies – including a smoking ban, gun control laws and a carbon tax – after reading the AI-generated messages. While the AI-generated messaging took a more logical and factual approach to persuasion than the human-written messaging, it is unclear whether this holds generally for text produced by GPT-3 and similar models or whether the specific prompts the researchers supplied account for the AI's approach.

This experiment underscores the ability of AI chatbots built on large language models to influence and persuade the general public. While well-intentioned actors can use such AI for legitimate purposes, malicious actors can just as easily use it to spread misinformation or disinformation at scale, including through social media and other online channels. The Stanford University researchers urge caution and call on lawmakers to promptly consider placing guardrails on the use of AI in political campaigns and activities.
