US AI Safety Standards: Federal & State Overview

Understanding the evolving patchwork of US AI regulation: executive orders, NIST frameworks, and state-level legislation.

Last updated: March 2026

The US Approach

Unlike the EU's single comprehensive regulation, the United States takes a decentralized, sector-specific approach to AI governance. Regulation comes from multiple sources: presidential executive orders, federal agency guidelines, and an increasingly active landscape of state legislation.

Executive Orders on AI

Executive Order 14110 (October 2023) established the most comprehensive federal AI policy to date before it was rescinded in January 2025. It required:

  • Sharing of safety-test and red-teaming results with the federal government for models trained above specified compute thresholds
  • Watermarking and content authentication for AI-generated material
  • Standards for biological synthesis screening
  • Guidelines for AI use in critical infrastructure
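The compute-threshold trigger in the first requirement can be made concrete with a short sketch. The 10^26-operations figure is the reporting threshold stated in the order; the function names and the FLOP estimate below are illustrative assumptions, not part of the order.

```python
# Illustrative check against the EO 14110 reporting threshold.
# The 1e26 total-operations figure is from the order; the function
# and variable names here are hypothetical.
REPORTING_THRESHOLD_OPS = 1e26

def requires_federal_reporting(total_training_ops: float) -> bool:
    """Return True if a training run crosses the compute threshold."""
    return total_training_ops >= REPORTING_THRESHOLD_OPS

# Example: ~6 * params * tokens is a common rough FLOP estimate for
# dense transformer training (an approximation, not from the order).
params = 70e9     # 70B parameters
tokens = 15e12    # 15T training tokens
est_ops = 6 * params * tokens  # = 6.3e24
print(requires_federal_reporting(est_ops))  # False: below 1e26
```

In practice a lab would compute this from actual accelerator-hours rather than a parameter-count heuristic, but the threshold comparison itself is this simple.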

NIST AI Risk Management Framework

The National Institute of Standards and Technology (NIST) AI RMF provides a voluntary framework organized around four functions:

  • Govern: Establish policies, processes, and organizational structures for AI risk management.
  • Map: Identify and contextualize AI risks across the system lifecycle.
  • Measure: Assess, analyze, and track identified risks using quantitative and qualitative methods.
  • Manage: Prioritize and act on risks based on projected impact and likelihood.
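As a minimal sketch of how the four functions might shape an internal risk register, consider the structure below. The dataclass, field names, and scoring rule are illustrative assumptions, not part of the NIST framework itself; only the four function names come from the RMF.

```python
# Sketch of an AI risk register organized around the NIST AI RMF's
# four functions. All names and the scoring rule are illustrative.
from dataclasses import dataclass

FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class Risk:
    description: str
    function: str      # which RMF function currently owns this risk
    likelihood: float  # 0-1 estimate (qualitative scales also work)
    impact: float      # 0-1 estimate

    def __post_init__(self):
        if self.function not in FUNCTIONS:
            raise ValueError(f"unknown RMF function: {self.function}")

    def priority(self) -> float:
        # Likelihood x impact, echoing the Manage function's emphasis
        # on prioritizing by projected impact and likelihood.
        return self.likelihood * self.impact

register = [
    Risk("Training data lacks provenance records", "Map", 0.6, 0.5),
    Risk("No red-team results for deployed model", "Measure", 0.4, 0.8),
]
register.sort(key=Risk.priority, reverse=True)
print([r.description for r in register])
# Highest-priority risk sorts first (0.32 vs 0.30)
```

The framework is deliberately method-agnostic, so the quantitative score here stands in for whatever assessment approach an organization adopts under Measure.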

State-Level Legislation

States and localities are increasingly active in AI regulation. Key developments include:

  • California SB-1047: Would have required safety testing for large AI models; vetoed but spawned successor bills
  • Colorado AI Act: First state law specifically targeting algorithmic discrimination in high-risk decisions
  • Illinois AI Video Interview Act: Requires consent and disclosure for AI-analyzed video interviews
  • New York City Local Law 144: Auditing requirements for automated employment decision tools