Daniel E. Ho | Stanford HAI


All Related

Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive
Daniel E. Ho, Matthew Dahl, Varun Magesh, Mirac Suzgun
Jan 11, 2024 | news

A new study finds disturbing and pervasive errors among three popular models on a wide range of legal tasks.

Considerations for Governing Open Foundation Models
Rishi Bommasani, Sayash Kapoor, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Daniel Zhang, Marietje Schaake, Daniel E. Ho, Arvind Narayanan, Percy Liang
Quick Read | Dec 13, 2023 | issue brief
Topics: Foundation Models

This brief highlights the benefits of open foundation models and calls for greater focus on their marginal risks.

Responses to OMB's Request for Comment on Draft Policy Guidance on Agency Use of AI
Mariano-Florentino Cuéllar, Daniel E. Ho, Jennifer Pahlka, Amy Perez, Gerald Ray, Kit T. Rodolfa, Percy Liang, Timothy O'Reilly, Todd Park, DJ Patil
Nov 30, 2023 | response to request

Scholars from Stanford RegLab and HAI submitted two responses to the Office of Management and Budget’s (OMB) request for comment on its draft policy guidance “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”

By the Numbers: Tracking The AI Executive Order
Caroline Meinhardt, Christie M. Lawrence, Lindsey A. Gailmard, Daniel Zhang, Rishi Bommasani, Rohini Kosoglu, Peter Henderson, Russell Wald, Daniel E. Ho
Nov 16, 2023 | news

New Stanford tracker analyzes the 150 requirements of the White House Executive Order on AI and offers new insights into government priorities.

By the Numbers: Tracking The AI Executive Order
Caroline Meinhardt, Christie M. Lawrence, Lindsey A. Gailmard, Daniel Zhang, Rishi Bommasani, Rohini Kosoglu, Peter Henderson, Russell Wald, Daniel E. Ho
Nov 16, 2023 | explainer
Topics: Regulation, Policy, Governance; Government, Public Administration

New Stanford tracker analyzes the 150 requirements of the White House Executive Order on AI and offers new insights into government priorities.

The AI Regulatory Alignment Problem
Neel Guha, Christie M. Lawrence, Lindsey A. Gailmard, Kit T. Rodolfa, Faiz Surani, Rishi Bommasani, Inioluwa Deborah Raji, Mariano-Florentino Cuéllar, Colleen Honigsberg, Percy Liang, Daniel E. Ho
Quick Read | Nov 15, 2023 | policy brief
Topics: Regulation, Policy, Governance

This brief, produced in collaboration with Stanford RegLab, sheds light on the “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes.

Decoding the White House AI Executive Order’s Achievements
Rishi Bommasani, Christie M. Lawrence, Lindsey A. Gailmard, Caroline Meinhardt, Daniel Zhang, Peter Henderson, Russell Wald, Daniel E. Ho
Nov 02, 2023 | news

America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.

Decoding the White House AI Executive Order’s Achievements
Rishi Bommasani, Christie M. Lawrence, Lindsey A. Gailmard, Caroline Meinhardt, Daniel Zhang, Peter Henderson, Russell Wald, Daniel E. Ho
Nov 02, 2023 | explainer
Topics: Government, Public Administration

America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.

The Privacy-Bias Trade-Off
Arushi Gupta, Victor Y. Wu, Helen Webley-Brown, Jennifer King, Daniel E. Ho
Quick Read | Oct 19, 2023 | policy brief
Topics: Privacy, Safety, Security

Algorithmic fairness and privacy issues are increasingly drawing both policymakers’ and the public’s attention amid rapid advances in artificial intelligence (AI). But safeguarding privacy and addressing algorithmic bias can pose a less recognized trade-off. Data minimization, while beneficial for privacy, has simultaneously made it legally, technically, and bureaucratically difficult to acquire demographic information necessary to conduct equity assessments. In this brief, we document this tension by examining the U.S. government’s recent efforts to introduce government-wide equity assessments of federal programs. We propose a range of policy solutions that would enable agencies to navigate the privacy-bias trade-off.

Daniel E. Ho's Testimony Before the Senate Committee on Homeland Security and Governmental Affairs
Daniel E. Ho
May 16, 2023 | testimony

Generative AI: Perspectives from Stanford HAI
Russ Altman, Erik Brynjolfsson, Michele Elam, Surya Ganguli, Daniel E. Ho, James Landay, Curtis Langlotz, Fei-Fei Li, Percy Liang, Christopher Manning, Peter Norvig, Rob Reich, Vanessa Parli
Deep Dive | Mar 01, 2023 | research
Topics: Generative AI

A diversity of perspectives from Stanford leaders in medicine, science, engineering, humanities, and the social sciences on how generative AI might affect their fields and our world.

Implementation Challenges to Three Pillars of America’s AI Strategy
Christie M. Lawrence, Isaac Cui, Daniel E. Ho
Deep Dive | Dec 20, 2022 | whitepaper
Topics: Government, Public Administration; Regulation, Policy, Governance

This white paper, produced in collaboration with Stanford RegLab, assesses the implementation status of three U.S. executive and legal actions related to AI innovation and trustworthy AI, calling for improvements in reporting and tracking key requirements.