Expected Parrot Domain-Specific Language (EDSL) is an open-source Python package for conducting research with AI agents and language models. EDSL is designed for researchers who want to run surveys at scale: simulate hundreds of demographically diverse respondents, test question variations, compare responses across language models, label large text datasets, and combine AI results with real human data. It handles the complexity of managing agents, models, and parallel execution so you can focus on the research.

Getting started

1. Install EDSL

pip install edsl
See Installation for details.
2. Create an account

Sign up to access the Expected Parrot server, free storage, and collaboration tools. You can also log in directly from your workspace:
from edsl import login
login()
3. Manage API keys

Your account comes with a key that lets you run surveys with all available models. You can also provide your own keys from service providers. See Managing Keys.
4. Run a survey

Read the Starter Tutorial and browse the how-to guides and notebooks for examples. See tips on using EDSL effectively.
5. Validate with real respondents

Add a human-in-the-loop by launching a web-based survey to share with real respondents. Learn more in the Survey Builder and Humanize sections.
Join our Discord to ask questions and chat with other users! Are you using EDSL for a research project? Email [email protected] and we’ll give you credits to run your project.

Introduction

Overview

Purpose, concepts and goals of the EDSL package.

Starter Tutorial

A step-by-step tutorial for getting started with EDSL.

Whitepaper

A whitepaper about the EDSL package (in progress).

Citation

How to cite the package in your work.

Papers & Citations

Research papers and articles that use or cite EDSL.

Teaching guide

A guide for teaching EDSL in the classroom.

Core Concepts

Questions

Different question types and applications.

Scenarios

Dynamically parameterize questions for tasks like data labeling.

Surveys

Construct surveys with rules and conditional logic.

Agents

Design AI agents with traits to respond to surveys.

Language Models

Select language models to generate results.

Results

Built-in methods for analyzing survey results.

Expected Parrot Platform

Remote Inference

Run surveys on the Expected Parrot server using any available model.

Remote Caching

Automatically store and share results and API calls.

Notebooks

Post and share .ipynb and .py files.

File Store

Store and share data files for use in EDSL projects.

Survey Builder

A no-code interface for creating surveys and collecting responses.

Validating with Humans

Humanize

Generate web-based surveys and collect responses from human respondents.

Prolific studies

Launch surveys as studies on Prolific.

Working with Results

Dataset

Work with tabular data using the Dataset class.

Estimating & Tracking Costs

Estimate and track costs for running surveys.

Exceptions & Debugging

Identify and handle exceptions.

Token usage

Monitor token limits and usage.