Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models

Date: September 09, 2024
Topics: Regulation, Policy, Governance; Foundation Models; Privacy, Safety, Security
Read Paper
Abstract

In this response to the U.S. AI Safety Institute’s (US AISI) request for comment on its draft guidelines for managing the misuse risk for dual-use foundation models, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), and the Regulation, Evaluation, and Governance Lab (RegLab) urge the US AISI to strengthen its guidance on reproducible evaluations and third-party evaluations, as well as clarify guidance on post-deployment monitoring. They also encourage the institute to develop similar guidance for other actors in the foundation model supply chain and for non-misuse risks, while ensuring the continued open release of foundation models absent evidence of marginal risk.

Authors
  • Rishi Bommasani
  • Alexander Wan
  • Yifan Mai
  • Percy Liang
  • Daniel E. Ho

Related Publications

Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act
Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Issue Brief | Quick Read | Jun 30, 2025

This brief assesses the benefits of and provides policy recommendations for adverse event reporting systems for AI that report failures and harms post deployment.

Policy Implications of DeepSeek AI’s Talent Base
Amy Zegart, Emerson Johnston
Policy Brief | Quick Read | May 06, 2025

This brief presents an analysis of Chinese AI startup DeepSeek’s talent base and calls for U.S. policymakers to reinvest in competing to attract and retain global AI talent.

Response to OSTP’s Request for Information on the Development of an AI Action Plan
Caroline Meinhardt, Daniel Zhang, Rishi Bommasani, Jennifer King, Russell Wald, Percy Liang, Daniel E. Ho
Response to Request | Mar 17, 2025

In this response to a request for information issued by the National Science Foundation’s Networking and Information Technology Research and Development National Coordination Office (on behalf of the Office of Science and Technology Policy), scholars from Stanford HAI, CRFM, and RegLab urge policymakers to prioritize four areas of policy action in their AI Action Plan: 1) Promote open innovation as a strategic advantage for U.S. competitiveness; 2) Maintain U.S. AI leadership by promoting scientific innovation; 3) Craft evidence-based AI policy that protects Americans without stifling innovation; 4) Empower government leaders with resources and technical expertise to ensure a “whole-of-government” approach to AI governance.


Safeguarding Third-Party AI Research
Kevin Klyman, Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Percy Liang, Peter Henderson
Policy Brief | Quick Read | Feb 13, 2025

This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.
