US States Pioneering AI Regulation Amid Federal Inaction

Fast Company

In the absence of comprehensive federal oversight, U.S. state legislatures have emerged as the primary battleground for regulating artificial intelligence. This decentralized approach gained further momentum after Congress decisively rejected a proposed moratorium on state-level AI regulation, clearing the path for states to keep legislating on the technology. By 2025, every U.S. state had introduced some form of AI-related legislation, and several had already enacted laws. Four aspects of AI have particularly captured the attention of state lawmakers: its use in government, healthcare applications, facial recognition technologies, and the rise of generative AI.

The responsible deployment and oversight of AI are especially critical within the public sector. Predictive AI, which leverages statistical analysis to generate forecasts, has already transformed numerous governmental functions, from determining eligibility for social services to informing recommendations for criminal justice sentencing and parole. However, the widespread adoption of algorithmic decision-making carries significant hidden costs, including the potential for algorithmic harms such as racial and gender biases. Recognizing these risks, state legislatures have introduced bills specifically targeting public sector AI use, emphasizing transparency, robust consumer protections, and clear acknowledgment of deployment risks. For instance, Colorado’s Artificial Intelligence Act mandates transparency and disclosure requirements for both developers and deployers of AI systems involved in consequential decisions. Montana’s new “Right to Compute” law compels AI developers to adopt risk management frameworks – structured methods for addressing security and privacy – for systems integrated into critical infrastructure. Furthermore, some states, like New York with its SB 8755 bill, have moved to establish dedicated bodies with oversight and regulatory authority.

The healthcare sector has also seen a flurry of legislative activity, with 34 states introducing over 250 AI-related health bills in the first half of 2025 alone. These bills generally fall into four categories: disclosure requirements for AI system developers and deployers, consumer protection measures designed to prevent unfair discrimination and ensure avenues for contesting AI-driven decisions, regulations governing insurers’ use of AI for healthcare approvals and payments, and rules for clinicians’ use of AI in patient diagnosis and treatment.

Facial recognition and surveillance technologies present distinct privacy challenges and risks of bias, particularly given the long-standing U.S. legal tradition of protecting individual autonomy from government interference. Widely used in predictive policing and national security, facial recognition software has repeatedly exhibited bias against people of color. A seminal study by computer scientists Joy Buolamwini and Timnit Gebru found that such software was significantly less likely to correctly identify darker-skinned faces, posing acute risks for Black people and other historically disadvantaged minorities. This bias often stems from unrepresentative training data and a lack of diversity on the teams developing the algorithms. By the end of 2024, 15 U.S. states had enacted laws to mitigate potential harms from facial recognition, with some requiring vendors to publish bias test reports and detail their data management practices, and mandating human review in the application of these technologies. The real-world consequences are stark, as in the 2023 wrongful arrest of Porcha Woodruff, which followed a flawed facial recognition match.
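The bias test reports that some of these state laws require come down to measuring error rates separately for different demographic groups rather than reporting a single overall accuracy figure. The sketch below is a minimal illustration of that idea in Python; the data, group labels, and function are hypothetical and are not drawn from any particular statute, vendor, or benchmark.

```python
# Illustrative sketch only: one way to compute the kind of disaggregated
# error rates that bias test reports typically summarize. The records
# below are made up; real audits use labeled benchmark datasets.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Misidentification rate per group, not one blended number.
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical results from a face-matching system on a labeled test set.
sample = [
    ("lighter-skinned", "A", "A"), ("lighter-skinned", "B", "B"),
    ("lighter-skinned", "C", "C"), ("lighter-skinned", "D", "E"),
    ("darker-skinned", "F", "G"), ("darker-skinned", "H", "H"),
    ("darker-skinned", "I", "J"), ("darker-skinned", "K", "K"),
]

for group, rate in error_rates_by_group(sample).items():
    print(f"{group}: {rate:.0%} misidentification rate")
```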

The pervasive spread of generative AI has likewise prompted concern among state lawmakers. Utah’s Artificial Intelligence Policy Act originally required generative AI systems to clearly disclose that they were not human whenever a person asked, though the law’s scope was later narrowed to interactions involving advice or the collection of sensitive information. Last year, California passed AB 2013, a generative AI law that compels developers to publicly disclose information on their websites about the data used to train their AI systems, including “foundation models” – AI models trained on vast datasets that can be adapted to many tasks without additional training. Because AI developers have historically been reticent about their training data, this added transparency could empower copyright owners whose content is used to train AI models.

In the absence of a cohesive federal framework, states have stepped in to fill the regulatory gap with their own legislation. While this emerging patchwork of laws may complicate compliance for AI developers, it provides crucial oversight concerning privacy, civil rights, and consumer protections. However, this state-led progress faces a potential hurdle: the Trump administration’s “AI Action Plan,” announced on July 23, 2025. The plan explicitly states that “The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations.” This directive could impede state efforts to regulate AI, forcing states to weigh essential regulations against the risk of losing vital federal funding.