News

Yejin Choi: Teaching AI How the World Works

Date: January 16, 2025
Topics: Machine Learning

Stanford HAI’s new Senior Fellow will study commonsense AI and the move from large language models to small language models.

Throughout her career, Dr. Yejin Choi has tackled some of the most challenging – and sometimes unpopular – aspects of artificial intelligence (AI).

Yejin is one of the top researchers in natural language processing (NLP) and AI. Her groundbreaking work on topics such as AI’s ability to use commonsense reasoning has earned her widespread recognition, including a MacArthur Fellowship.

She joins Stanford HAI as the Dieter Schwarz Foundation HAI Professor, Professor of Computer Science, and Stanford HAI Senior Fellow. She comes from her previous post as senior research manager at the Allen Institute for Artificial Intelligence and professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she published some of the seminal papers on AI and common sense.

In this Q&A, Yejin shares her plans for her new role, explains how she chose her career path, and describes how growing up in South Korea as a girl interested in science inspired her to take roads less traveled.

Tell us about your position at HAI and what you hope to accomplish.

As a senior fellow, I look forward to focusing on what I truly enjoy: AI research with a strong consideration for its impact on humans. I will continue the interdisciplinary work that has defined my career, drawing insights from fields such as cognitive neuroscience and philosophy to guide the design and evaluation of AI. Even more inspiring is the potential to give back to these fields by offering insights that could support their own intellectual endeavors. That is my vision.

Collaborating with moral philosophers like John Tasioulas at the University of Oxford on AI’s moral decision making sparked my interest in exploring how large language models (LLMs) might make moral decisions. This led me to investigate pluralistic alignment – the concept that there could be multiple answers to a question rather than a single “gold” answer. Recent AI models often operate under the assumption of a gold answer, but reality is far more complex and influenced by factors like cultural norms. This realization underscored the importance of ensuring that AI is truly safe for humans. We must ensure that AI is not narrowly optimized for a single outcome, and I am eager to invest heavily in this work at Stanford HAI.
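
To make the idea concrete, here is a minimal Python sketch of how a pluralistic evaluation might score a model: a prediction earns credit if it matches any of several defensible answers rather than one “gold” label. The benchmark item, scoring rule, and names below are hypothetical illustrations, not taken from Choi’s published work.

from dataclasses import dataclass

@dataclass
class PluralisticItem:
    # A prompt paired with a SET of acceptable answers, not one gold label.
    prompt: str
    acceptable: set

def pluralistic_accuracy(items, predict):
    """Credit a prediction that matches ANY acceptable answer."""
    hits = sum(predict(item.prompt) in item.acceptable for item in items)
    return hits / len(items)

# Hypothetical item where cultural norms admit more than one answer.
items = [PluralisticItem(
    prompt="Is it rude to decline a gift?",
    acceptable={"yes", "no", "it depends on the culture"})]

print(pluralistic_accuracy(items, lambda p: "it depends on the culture"))  # 1.0

A conventional benchmark would keep a single gold answer per prompt; the only change here is that correctness becomes membership in a set, which lets evaluation reflect that reasonable people and cultures can disagree.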

I'm also interested in focusing on more algorithmic work, especially designing AI models that are less data-hungry and more computationally efficient. Current LLMs are excessively large, expensive, and limited to the few tech companies capable of building them. Small language models – or SLMs – are another research direction that I'm eager to pursue at Stanford HAI.

Talk about how you got on the path to where you are now.

My path is particularly convoluted. When I decided to do my PhD in natural language processing, AI was not yet a popular field. In fact, many people advised me against it. But I am adventure-seeking and was attracted to the fact that people weren’t yet excited about it. While other fields were more established, I wanted to position myself in a field that might one day rise to prominence.

My attraction to risky things started when I was a girl growing up in South Korea. I became fascinated with a competition for making wooden airplanes fly. One of the organizers questioned my participation, believing that girls should not be involved. I frequently faced discouraging feedback like this. Throughout my career, I have grappled with a lot of self-doubt and fear because of the cultural norms I grew up with. However, this has given me a deeper understanding of how cultural norms can influence a person’s life, a perspective that has shaped my research interests.

I didn't go straight from college to grad school because I didn’t see it as an option at the time. Instead, I started my career after college as a software developer working for Microsoft in Seattle around 2000. After a while, I decided I wanted to do something more risky, which was a PhD in AI.

Eventually, my interest turned to common sense and AI – a long-standing challenge that people looked down on at the time due to the lack of advancement in the field. It was viewed as an almost impossible challenge. While I understood why some people had a negative view of it, I saw it as a critical challenge for advancing AI, because so much of our understanding of the world is based on more than the visible text we see. For AI models to be truly helpful, they need to understand the unspoken rules about how the world works. Failure in the past didn’t mean we would necessarily fail again, especially given the advances in data and compute power that had since emerged.

What are some of your notable achievements?

During my time at Stony Brook University, I collaborated with Jeff Hancock, Founding Director of Stanford’s Social Media Lab, to create an NLP model that could analyze linguistic patterns to detect whether a product review was fake. This work was especially significant because product reviews were becoming quite influential at the time. What’s fascinating is that pronoun use, or whether someone tends to use more nouns versus adverbs, can actually provide very good clues about whether a review is fake.
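
As a rough illustration of that stylometric idea, the sketch below turns two surface cues – pronoun rate and a crude “-ly” proxy for adverbs – into features for a simple classifier. The pronoun word list, the “-ly” heuristic, and the toy labeled reviews are hypothetical stand-ins; the original research used full part-of-speech tagging and real review corpora.

from sklearn.linear_model import LogisticRegression

PRONOUNS = {"i", "me", "my", "we", "our", "you", "your"}

def features(review):
    """Pronoun rate and crude adverb ('-ly') rate for one review."""
    tokens = [t.strip(".,!?").lower() for t in review.split()]
    n = max(len(tokens), 1)
    return [sum(t in PRONOUNS for t in tokens) / n,
            sum(t.endswith("ly") for t in tokens) / n]

# Hypothetical training examples: 1 = fake, 0 = genuine.
reviews = ["I absolutely loved it, truly a perfect stay, I promise!",
           "The room overlooked the parking garage on Fifth Street.",
           "Honestly my family and I simply adored everything here!",
           "Check-in took ten minutes; the elevator was out of service."]
labels = [1, 0, 1, 0]

clf = LogisticRegression().fit([features(r) for r in reviews], labels)
print(clf.predict([features("I really, really recommend it wholeheartedly!")]))

Which direction any single cue points is an empirical question; the takeaway is simply that inexpensive distributional features of word use can carry measurable signal about deception.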

And, as surprising as it was, winning the MacArthur Fellowship for my work on using NLP to help AI use commonsense reasoning was real validation of my choice to pursue common sense and AI when others didn’t agree.

Lastly, I’m proud of the work I’ve done around bias in AI. I did some of the early research looking into racism and sexism in written text. This is related to my commonsense work in that it’s about cultural norms and values and inferring the unspoken assumptions about how the social world works.

Contributor(s): Nick Adams Pandolfo
Related
  • Yejin Choi
    Dieter Schwarz Foundation HAI Professor | Professor of Computer Science | Senior Fellow, Stanford HAI

Related News

Stanford AI Scholars Find Support for Innovation in a Time of Uncertainty
Nikki Goth Itoi
Jul 01, 2025
Stanford HAI offers critical resources for faculty and students to continue groundbreaking research across the vast AI landscape.

Digital Twins Offer Insights into Brains Struggling with Math — and Hope for Students
Andrew Myers
Jun 06, 2025
Researchers used artificial intelligence to analyze the brain scans of students solving math problems, offering the first-ever peek into the neuroscience of math disabilities.

Better Benchmarks for Safety-Critical AI Applications
Nikki Goth Itoi
May 27, 2025
Stanford researchers investigate why models often fail in edge-case scenarios.