Responses to OMB's Request for Comment on Draft Policy Guidance on Agency Use of AI | Stanford HAI


Date
November 30, 2023
Abstract

Scholars from Stanford RegLab and HAI submitted two responses to the Office of Management and Budget’s (OMB) request for comment on its draft policy guidance “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”


Response from Daniel Ho and Technology Governance Leaders on Behalf of Stanford RegLab

Mariano-Florentino Cuéllar, Daniel E. Ho, Jennifer Pahlka, Amy Perez, Kit Rodolfa, Gerald Ray

This response urges tailored solutions to AI regulation and offers suggestions for distinguishing between AI risk profiles to avoid further widening the gap between public and private sector capabilities. HAI Senior Fellow Daniel Ho and his co-authors also offer suggestions for adjusting the guidance to reduce its burden while maintaining AI safety, through measures such as securing data for equity assessments and building in public consultation mechanisms that do not lengthen rulemaking proceedings.

Download the full response


Response from Daniel Ho, Percy Liang, and Technology Governance Leaders

Daniel E. Ho, Percy Liang, Timothy O'Reilly, Jennifer Pahlka, Todd Park, DJ Patil, Kit Rodolfa

This response highlights the importance of open approaches to innovation for government technology. Writing in their individual capacities, HAI Senior Fellows Daniel Ho and Percy Liang, alongside their co-authors, argue that the OMB Memo should draw an explicit connection to established federal policy around open source and acknowledge the long-recognized benefits of open source approaches and open research environments. They emphasize that the risks of a limited set of generative AI models should not detract from an overall commitment to an open approach to AI innovation.

Download the full response

Authors
  • Mariano-Florentino Cuéllar
  • Daniel E. Ho
  • Jennifer Pahlka
  • Amy Perez
  • Gerald Ray
  • Kit T. Rodolfa
  • Percy Liang
  • Timothy O'Reilly
  • Todd Park
  • DJ Patil

Related Publications

Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act
Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Issue Brief, Jun 30, 2025
Topics: Regulation, Policy, Governance; Privacy, Safety, Security

This brief assesses the benefits of and provides policy recommendations for adverse event reporting systems for AI that report failures and harms post deployment.

Cleaning Up Policy Sludge: An AI Statutory Research System
Faiz Surani, Lindsey A. Gailmard, Allison Casasola, Varun Magesh, Emily J. Robitschek, Christine Tsang, Derek Ouyang, Daniel E. Ho
Policy Brief, Jun 18, 2025
Topics: Government, Public Administration

This brief introduces a novel AI tool that performs statutory surveys to help governments—such as the San Francisco City Attorney Office—identify policy sludge and accelerate legal reform.

Simulating Human Behavior with AI Agents
Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie J. Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, Michael S. Bernstein
Policy Brief, May 20, 2025
Topics: Generative AI

This brief introduces a generative AI agent architecture that can simulate the attitudes of more than 1,000 real people in response to major social science survey questions.

Policy Implications of DeepSeek AI’s Talent Base
Amy Zegart, Emerson Johnston
Policy Brief, May 6, 2025
Topics: International Affairs, International Security, International Development; Foundation Models; Workforce, Labor

This brief presents an analysis of Chinese AI startup DeepSeek’s talent base and calls for U.S. policymakers to reinvest in competing to attract and retain global AI talent.