US States Lead AI Regulation Amid Federal Inaction

Ars Technica

In the absence of comprehensive federal legislation, US state legislatures have emerged as the primary arena for establishing regulations around artificial intelligence technologies. This decentralized approach gained further momentum after Congress rejected a proposed moratorium on state-level AI regulation, effectively clearing the path for states to continue developing their own frameworks. Indeed, all 50 states introduced AI-related bills in 2025, and several have already enacted legislation.

Regulatory efforts at the state level primarily coalesce around four critical aspects of AI: its use in government, applications in healthcare, facial recognition technologies, and the burgeoning field of generative AI.

The oversight and responsible deployment of AI are particularly vital within the public sector. Predictive AI, which leverages statistical analysis for forecasting, has transformed numerous governmental functions, from assessing eligibility for social services to informing recommendations on criminal justice sentencing and parole. However, the widespread adoption of algorithmic decision-making carries substantial hidden costs, including the potential for algorithmic harms such as racial and gender biases. Recognizing these risks, state legislatures have introduced bills specifically targeting public sector AI use, emphasizing transparency, consumer protections, and the identification of deployment risks. Some states, like Colorado with its Artificial Intelligence Act, mandate transparency and disclosure requirements for developers and deployers of AI systems involved in consequential decisions. Montana’s new “Right to Compute” law, meanwhile, requires AI developers to adopt robust risk management frameworks—structured methods for addressing security and privacy—for systems integral to critical infrastructure. Other states, such as New York, have established dedicated bodies to provide oversight and regulatory authority.

The healthcare sector has also seen a flurry of legislative activity. In the first half of 2025 alone, 34 states introduced over 250 AI-related health bills, generally falling into four categories. Transparency-focused bills define disclosure requirements for AI system developers and deploying organizations. Consumer protection bills aim to prevent discriminatory practices by AI systems and ensure avenues for users to contest AI-driven decisions. Legislation also addresses insurers’ use of AI for healthcare approvals and payments, while other bills regulate the technology’s application by clinicians in diagnosing and treating patients.

Facial recognition and surveillance technologies present significant privacy challenges and risks of bias, particularly given a long-standing US legal doctrine that protects individual autonomy against government interference. Commonly employed in predictive policing and national security, facial recognition software has demonstrably exhibited biases against people of color, raising civil liberties concerns. Pioneering research by computer scientists Joy Buolamwini and Timnit Gebru showed that such software is significantly less likely to correctly identify darker faces. Bias can also permeate the training data for these algorithms, often exacerbated by a lack of diversity within the development teams themselves. By the end of 2024, 15 US states had enacted laws to mitigate the potential harms from facial recognition, with regulations often requiring vendors to publish bias test reports, detail data management practices, and ensure human review in the application of these technologies.

The widespread adoption of generative AI has also prompted legislative attention across many states. Utah’s Artificial Intelligence Policy Act initially required individuals and organizations to disclose AI use during interactions if questioned, though its scope was later narrowed to interactions involving advice or sensitive information collection. Last year, California passed AB 2013, a generative AI law mandating that developers publish information on their websites regarding the data used to train their AI systems, including foundation models. These foundation models are AI models trained on exceptionally large datasets, adaptable to a wide range of tasks without additional training. Given that AI developers have historically been reluctant to disclose their training data, such legislation could provide much-needed transparency, potentially aiding copyright owners whose content is used in AI training.

In the absence of a comprehensive federal framework, states have stepped in to fill the regulatory void with their own legislative initiatives. While this emerging patchwork of laws may complicate compliance for AI developers, many observers contend that state-level engagement provides crucial oversight for privacy, civil rights, and consumer protections. However, this decentralized progress faces potential headwinds. The Trump administration’s “AI Action Plan,” announced in July 2025, stated that federal AI-related funding should not be directed toward states with “burdensome” AI regulations. This stance could impede state efforts to regulate AI, forcing states to weigh essential regulations against the risk of losing vital federal funding.