Regulatory Shift

AI-Generated Fake Comments Threaten Regulatory Integrity, Skewing Public Policy

Fast Company, March 18, 2026

Executives should care that AI is making it dangerously easy to flood regulatory bodies with fake public comments, undermining informed policy-making and the democratic process. This surge in inauthentic input forces agencies to divert resources for verification, potentially delaying crucial decisions and eroding public trust in regulatory outcomes.

Key Intelligence

  • AI is enabling the creation of thousands of unique but inauthentic public comments, making it nearly impossible for agencies to discern genuine feedback.
  • A Southern California air quality agency received 20,000 suspicious comments opposing a heat pump rule, vastly exceeding typical volume and raising authenticity concerns.
  • Officials are receiving personalized, AI-generated emails thanking them for 'opposition' to rules their own teams drafted, highlighting the sophistication of these campaigns.
  • Regulatory bodies are struggling with the significant operational burden of verifying comment authenticity, diverting resources from their core mission.
  • The ease of generating fake feedback threatens the integrity of public comment systems, which are vital for democratic input and policy development.
  • This trend creates a 'noise-to-signal' problem, where legitimate public opinion can be drowned out by AI-amplified campaigns.
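The verification burden described above is partly a consequence of older anti-spam tools breaking down. Traditional mail-merge astroturf campaigns could be caught with simple near-duplicate detection, because thousands of form letters share nearly identical wording. The sketch below (a hypothetical illustration, not any agency's actual tooling) shows that classic approach: comparing word-shingle overlap between comments. It works on templated campaigns, but AI-generated comments with unique phrasing sail past exactly this kind of filter, which is why authenticity checks now demand far more resources.

```python
from itertools import combinations

def shingles(text, n=3):
    """Break a normalized comment into overlapping word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def flag_near_duplicates(comments, threshold=0.6):
    """Return index pairs of comments whose shingle overlap meets the threshold."""
    sets = [shingles(c) for c in comments]
    return [(i, j)
            for i, j in combinations(range(len(comments)), 2)
            if jaccard(sets[i], sets[j]) >= threshold]

# Two lightly edited copies of a form letter are flagged; the independent
# comment is not. AI-rewritten variants with distinct wording would score
# below the threshold and evade this check entirely.
campaign = [
    "Please reject the heat pump rule because it raises costs for homeowners",
    "Please reject the heat pump rule because it raises costs for renters",
    "I support stronger air quality standards in our region",
]
print(flag_near_duplicates(campaign))
```

Running this flags only the pair of templated comments, leaving the genuinely distinct one alone. The technique's blind spot, its reliance on shared wording, is precisely what large language models remove.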