A new study finds disturbing and pervasive errors among three popular models on a wide range of legal tasks.
This brief highlights the benefits of open foundation models and calls for greater focus on their marginal risks.
Scholars from Stanford RegLab and HAI submitted two responses to the Office of Management and Budget’s (OMB) request for comment on its draft policy guidance “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”
New Stanford tracker analyzes the 150 requirements of the White House Executive Order on AI and offers new insights into government priorities.
This brief, produced in collaboration with Stanford RegLab, sheds light on the “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes.
America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.
Algorithmic fairness and privacy issues are increasingly drawing both policymakers’ and the public’s attention amid rapid advances in artificial intelligence (AI). But safeguarding privacy and addressing algorithmic bias can pose a less recognized trade-off. Data minimization, while beneficial for privacy, has simultaneously made it legally, technically, and bureaucratically difficult to acquire demographic information necessary to conduct equity assessments. In this brief, we document this tension by examining the U.S. government’s recent efforts to introduce government-wide equity assessments of federal programs. We propose a range of policy solutions that would enable agencies to navigate the privacy-bias trade-off.
Daniel E. Ho's Testimony Before the Senate Committee on Homeland Security and Governmental Affairs
A diversity of perspectives from Stanford leaders in medicine, science, engineering, humanities, and the social sciences on how generative AI might affect their fields and our world.
This white paper, produced in collaboration with Stanford RegLab, assesses the implementation status of three U.S. executive and legal actions related to AI innovation and trustworthy AI, calling for improvements in reporting and tracking key requirements.