HAI and Accelerator for Learning Partnership Grant

Status: Closed

Grant Overview

Learning through Creation with Generative AI

The Stanford Accelerator for Learning and the Stanford Institute for Human-Centered Artificial Intelligence invite research proposals advancing learning through creation with generative AI.

Nearly two years after the launch of ChatGPT, many applications of GenAI aim to automate current teaching and learning models and promote efficiency in education. Yet GenAI also offers a far bolder opportunity to transform the very way people learn: through creation. GenAI now presents learners with the exciting possibility of creating their own virtual worlds, simulations, chatbots, and other expressions of their developing knowledge.

The Stanford Accelerator for Learning and HAI invite proposals exploring GenAI’s potential to support learning through creative production, thought, or expression. This includes research on how GenAI influences learning-by-making, imaginative exploration, or the development of creative abilities. Projects may target a wide range of creators, such as students, teachers, adults, or families, across various domains including STEM, the arts, the humanities, and the social sciences, and in diverse settings such as workplaces, museums, classrooms, and homes. Priority is given to proposals emphasizing creation or creativity in service of learning.

Funding covers early-stage work with scaling potential. We accept three types of proposals: (1) empirical research that investigates questions of GenAI and creation; (2) design proposals that produce a working prototype of an AI-based tool or intervention; or (3) a combination of design and empirical research.

Visit the Call for Proposals for criteria and eligibility. Applications were due on October 23, 2024. Please direct any questions to Catherine Chase at cchase@stanford.edu.


Request for proposals: Closed

Related
  • A Large Scale RCT on Effective Error Messages in CS1
    Sierra Wang, John Mitchell, Christopher Piech
    Mar 07
    Research

In this paper, we evaluate the most effective error message types through a large-scale randomized controlled trial conducted in an open-access, online introductory computer science course with 8,762 students from 146 countries. We assess existing error message enhancement strategies, as well as two novel approaches of our own: (1) generating error messages using OpenAI's GPT in real time and (2) constructing error messages that incorporate the course discussion forum. By examining students' direct responses to error messages and their behavior throughout the course, we quantitatively evaluate the immediate and longer-term efficacy of different error message types. We find that students using GPT-generated error messages repeat an error 23.1% less often in the subsequent attempt, and resolve an error in 34.8% fewer additional attempts, compared to students using standard error messages. We also perform an analysis across various demographics to understand any disparities in the impact of different error message types. Our results find no significant difference in the effectiveness of GPT-generated error messages for students from varying socioeconomic and demographic backgrounds. Our findings underscore GPT-generated error messages as the most helpful error message type, especially as a universally effective intervention across demographics.

  • Evaluating Human and Machine Understanding of Data Visualizations
    Arnav Verma, Kushin Mukherjee, Christopher Potts, Elisa Kreiss, Judith Fan
    Jan 01
    Research

    Although data visualizations are a relatively recent invention, most people are expected to know how to read them. How do current machine learning systems compare with people when performing tasks involving data visualizations? Prior work evaluating machine data visualization understanding has relied upon weak benchmarks that do not resemble the tests used to assess these abilities in humans. We evaluated several state-of-the-art algorithms on data visualization literacy assessments designed for humans, and compared their responses to multiple cohorts of human participants with varying levels of experience with high school-level math. We found that these models systematically underperform all human cohorts and are highly sensitive to small changes in how they are prompted. Among the models we tested, GPT-4V most closely approximates human error patterns, but gaps remain between all models and humans. Our findings highlight the need for stronger benchmarks for data visualization understanding to advance artificial systems towards human-like reasoning about data visualizations.
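The real-time GPT error-message enhancement described in the first related paper above can be pictured with a small sketch. Everything here — the function names, the prompt wording, and the stubbed model call — is our own illustrative assumption, not the authors' implementation; a real deployment would replace `fake_llm` with a call to an actual LLM API.

```python
import traceback

def build_explain_prompt(code: str, error: str) -> str:
    """Combine the student's code and the raw error into an LLM prompt.
    The prompt wording is a hypothetical stand-in for the paper's."""
    return (
        "You are a CS1 teaching assistant. A student's program failed.\n"
        f"Student code:\n{code}\n"
        f"Raw error:\n{error}\n"
        "Explain the error in one or two beginner-friendly sentences and "
        "suggest a fix, without giving away the full solution."
    )

def enhanced_error_message(code: str, complete) -> str:
    """Run the student's code; on failure, ask `complete` (any callable
    that maps a prompt string to a response string, e.g. an LLM client)
    to rewrite the error. Returns '' if the code runs cleanly."""
    try:
        exec(code, {})
        return ""
    except Exception:
        raw = traceback.format_exc()
        return complete(build_explain_prompt(code, raw))

# Stub standing in for a real model call:
def fake_llm(prompt: str) -> str:
    return "It looks like you used a variable before assigning it a value."

if __name__ == "__main__":
    buggy = "print(total + 1)"  # raises NameError: total is not defined
    print(enhanced_error_message(buggy, fake_llm))
```

Keeping the model behind a plain callable makes the enhancement easy to A/B test, which is essentially what a randomized trial of error-message types requires: each study arm is just a different `complete` function.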