
Goals of this tutorial

We begin with technical setup: instructions for installing the EDSL library and storing API keys to access language models. Then we demonstrate some of the basic features of EDSL, with examples for constructing and running surveys with agents and models, analyzing responses as datasets, and validating results with human respondents. By the end of this tutorial, you will be able to use EDSL to do each of the following:
  • Construct various types of questions tailored to your research objectives.
  • Combine questions into surveys and integrate logical rules to control the survey flow.
  • Add context to questions by piping answers, adding memory of prior questions and answers, and using scenarios to add data or content to questions.
  • Design personas for AI agents to simulate responses to your surveys.
  • Choose and deploy large language models to generate responses for AI agents.
  • Analyze results as datasets with built-in analytical tools.
  • Validate LLM answers with human respondents.

Storing & sharing your work

We also introduce Coop: a platform for creating, storing and sharing AI-based research and launching hybrid human/AI surveys. Coop is fully integrated with EDSL and free to use. At the end of the tutorial we show how to use EDSL with Coop by posting content created in this tutorial for anyone to view at the web app and launching a web-based survey to compare LLM and human responses.

Further reading & questions

Please see our documentation page for more details on each of the topics covered in this notebook. If you encounter any issues or have questions, please email us at info@expectedparrot.com or post a question in our Discord channel.

Prerequisites

EDSL is compatible with Python 3.9 - 3.12. Before starting this tutorial, please ensure that you have a Python environment set up on your machine or in a cloud-based environment, such as Google Colab. You can find instructions for installing Python at the Python Software Foundation.
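If you are not sure which version of Python your environment is running, you can check it before installing; a minimal sketch using the standard library:
import sys

# EDSL supports Python 3.9 through 3.12; confirm the interpreter version first.
print(sys.version)
assert (3, 9) <= sys.version_info[:2] <= (3, 12), "Python version not supported by EDSL"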

Recommendations

The code examples in this tutorial are designed to be run in a Jupyter notebook or another Python environment, or in a cloud-based environment such as Google Colab. If you are using Google Colab, please see additional instructions for setting up EDSL in the Colab setup page in the documentation. We also recommend using a virtual environment when installing and using EDSL in order to avoid conflicts with other Python packages. You can find instructions for setting up a virtual environment at the Python Packaging Authority.
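For example, you can create and activate a virtual environment with Python's built-in venv module before installing EDSL (uncomment and run these in a terminal; the environment name .venv is just a convention):
# python -m venv .venv
# source .venv/bin/activate   # on Windows: .venv\Scripts\activate
# pip install edsl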

Installation

To begin using EDSL, you first need to install the library. This can be done either locally on your machine or in a cloud-based environment, such as Google Colab. Once you have decided where to install EDSL, you can choose whether to install it from PyPI or GitHub:

From PyPI

Install EDSL directly from PyPI, which is straightforward and recommended for most users (the command below uses uv's pip interface; plain pip install edsl works as well). We also recommend using a virtual environment to manage your Python packages (see Recommendations above). Uncomment and run the following command to install EDSL from PyPI:
# ! uv pip install edsl -q
If you have already installed EDSL, you can uncomment and run the following code to check that your version is up to date (compare it to the version at PyPI):
# ! pip show edsl
If your version of EDSL is not up to date, uncomment and run the following code to update it:
# ! pip install --upgrade edsl

From GitHub

You can find the source code for EDSL and contribute to the project at GitHub. Installing from GitHub gives you the latest updates to EDSL before they are released in a new version on PyPI; this is recommended if you want the newest features or are contributing to the project. Uncomment and run the following command to install EDSL from GitHub:
# ! pip install git+https://github.com/expectedparrot/edsl.git@main

Create an account

Creating an account allows you to run survey jobs at Expected Parrot using language models of your choice, and automatically cache your results. Your account also allows you to launch human surveys and share your content and workflows with other users. Your account comes with $25 in credits for API calls to LLMs to get you started, and a referral code for earning more credits. Create an account with an email address and password, or uncomment and run the following code to be prompted automatically:
# from edsl import login

# login()

Accessing LLMs

The next step is to decide how you want to access language models for running surveys. EDSL works with many popular language models that you can choose from to generate responses to your surveys. These models are hosted by various service providers, such as Anthropic, Azure, Bedrock, Deep Infra, Google, Groq, Mistral, OpenAI, Replicate, Together and xAI. In order to run a survey, you need to provide API keys for the service providers of the models that you want to use. There are two methods for providing API keys to EDSL:
  • Use your Expected Parrot API key to access all available models
  • Provide your own API keys from service providers

Managing keys

To manage your keys, navigate to your Keys page and use the options to add keys and optionally share access to them with other users. You can specify which keys to use at any time, and check the current priority of your keys. Your Expected Parrot API key is used by default when a private key is not provided for a selected model. Please see instructions for alternative methods of storing your own API keys.
Note: If you try to run a survey without storing a required API key, you will be provided a link to activate remote inference and use your Expected Parrot API key.
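If you prefer to store your own keys, one common approach (a sketch; see the documentation for all supported methods, including a .env file) is to set them as environment variables before importing edsl:
import os

# Placeholder values for illustration; substitute your actual keys.
os.environ["EXPECTED_PARROT_API_KEY"] = "your-expected-parrot-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"  # only needed if using your own OpenAI key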

Credits & tokens

Running surveys with language models requires tokens. If you are using your own API keys, service providers will bill you directly. If you are using your Expected Parrot API key to access models, you will need to purchase credits to cover token costs. Please see the model pricing page for details on available models and their current prices.
Note: Your account comes with 100 free credits. You can purchase more credits at any time at your Credits page.
After installing EDSL and storing API keys, you are ready to run some examples!

Example: Running a simple question

EDSL comes with a variety of question types that we can choose from based on the form of the response that we want to get back from a model. To see a list of all question types:
from edsl import Question

Question.available()
    question_type               question_class                   example_question
0   checkbox                    QuestionCheckBox                 Question('checkbox', question_name = """never_eat""", question_text = """Which of the following foods would you eat if you had to?""", min_selections = 2, max_selections = 5, question_options = ['soggy meatpie', 'rare snails', 'mouldy bread', 'panda milk custard', 'McDonalds'], include_comment = False)
1   compute                     QuestionCompute                  Question('compute', question_name = """computed_greeting""", question_text = """Hello !""")
2   dict                        QuestionDict                     Question('dict', question_name = """example""", question_text = """Please provide a simple recipe for hot chocolate.""", answer_keys = ['title', 'ingredients', 'num_ingredients', 'instructions'], value_types = ['str', 'list[str]', 'int', 'str'], value_descriptions = ['The title of the recipe.', 'A list of ingredients.', 'The number of ingredients.', 'The instructions for making the recipe.'], question_presentation = """Please provide a simple recipe for hot chocolate.""", answering_instructions = """Please respond with a dictionary using the following keys: title, ingredients, num_ingredients, instructions. Here are descriptions of the values to provide: - "title": "The title of the recipe." - "ingredients": "A list of ingredients." - "num_ingredients": "The number of ingredients." - "instructions": "The instructions for making the recipe." The values should be formatted in the following types: - "title": "str" - "ingredients": "list[str]" - "num_ingredients": "int" - "instructions": "str" If you do not have a value for a given key, use "null". After the answer, you can put a comment explaining your response on the next line. """)
3   extract                     QuestionExtract                  Question('extract', question_name = """extract_name""", question_text = """My name is Moby Dick. I have a PhD in astrology, but I'm actually a truck driver""", answer_template = )
4   free_text                   QuestionFreeText                 Question('free_text', question_name = """how_are_you""", question_text = """How are you?""")
5   functional                  QuestionFunctional               Question('functional', question_name = """sum_and_multiply""", question_text = """Calculate the sum of the list and multiply it by the agent trait multiplier.""")
6   likert_five                 QuestionLikertFive               Question('likert_five', question_name = """happy_raining""", question_text = """I'm only happy when it rains.""", question_options = ['Strongly disagree', 'Disagree', 'Neutral', 'Agree', 'Strongly agree'])
7   linear_scale                QuestionLinearScale              Question('linear_scale', question_name = """ice_cream""", question_text = """How much do you like ice cream?""", question_options = [1, 2, 3, 4, 5], option_labels = )
8   list                        QuestionList                     Question('list', question_name = """list_of_foods""", question_text = """What are your favorite foods?""", max_list_items = None, min_list_items = None)
9   matrix                      QuestionMatrix                   Question('matrix', question_name = """child_happiness""", question_text = """How happy would you be with different numbers of children?""", question_items = ['No children', '1 child', '2 children', '3 or more children'], question_options = [1, 2, 3, 4, 5], option_labels = )
10  multiple_choice             QuestionMultipleChoice           Question('multiple_choice', question_name = """how_feeling""", question_text = """How are you?""", question_options = ['Good', 'Great', 'OK', 'Bad'], include_comment = False)
11  multiple_choice_with_other  QuestionMultipleChoiceWithOther  Question('multiple_choice_with_other', question_name = """how_feeling_with_other""", question_text = """How are you?""", question_options = ['Good', 'Great', 'OK', 'Bad'], include_comment = False)
12  numerical                   QuestionNumerical                Question('numerical', question_name = """age""", question_text = """You are a 45 year old man. How old are you in years?""", min_value = 0, max_value = 86.7, include_comment = False)
13  rank                        QuestionRank                     Question('rank', question_name = """rank_foods""", question_text = """Rank your favorite foods.""", question_options = ['Pizza', 'Pasta', 'Salad', 'Soup'], num_selections = 2)
14  top_k                       QuestionTopK                     Question('top_k', question_name = """two_fruits""", question_text = """Which of the following fruits do you prefer?""", min_selections = 2, max_selections = 2, question_options = ['apple', 'banana', 'carrot', 'durian'], use_code = True)
15  yes_no                      QuestionYesNo                    Question('yes_no', question_name = """is_it_equal""", question_text = """Is 5 + 5 equal to 11?""", question_options = ['No', 'Yes'])
We can see the components of a particular question type by importing the question type class and calling the example method on it:
from edsl import (
    # QuestionCheckBox,
    # QuestionExtract,
    # QuestionFreeText,
    # QuestionFunctional,
    # QuestionLikertFive,
    # QuestionLinearScale,
    # QuestionList,
    QuestionMultipleChoice,
    # QuestionNumerical,
    # QuestionRank,
    # QuestionTopK,
    # QuestionYesNo
)

q = QuestionMultipleChoice.example() # substitute any question type class name
q
QuestionMultipleChoice
    key                 value
0   question_name       how_feeling
1   question_text       How are you?
2   question_options:0  Good
3   question_options:1  Great
4   question_options:2  OK
5   question_options:3  Bad
6   include_comment     False
7   question_type       multiple_choice
Here we create a simple multiple choice question of our own:
from edsl import QuestionMultipleChoice

q = QuestionMultipleChoice(
    question_name = "smallest_prime",
    question_text = "Which is the smallest prime number?",
    question_options = [0, 1, 2, 3]
)
We can administer the question to a language model by calling the run method on it. If you have activated remote inference and stored your Expected Parrot API key (see instructions above), the question will be run remotely on the Expected Parrot server. Results are stored on an unlisted Coop page by default; we can also set the visibility to public or private, either when we run the question or by updating the object (demonstrated in later examples). We can also view a progress report for the job:
results = q.run()

Inspecting results

This generates a dataset of Results that we can readily access with built-in methods for analysis. Here we select() the response to inspect it, together with the model that was used and the model’s “comment” about its response, a field that is automatically added to all question types other than free text:
results.select("model", "smallest_prime", "smallest_prime_comment")
    model.model  answer.smallest_prime  comment.smallest_prime_comment
0   gpt-4o       2                      2 is the smallest prime number because it is the only even number that is divisible by only 1 and itself.
The Results also include information about the question, model parameters, prompts, generated tokens and raw responses. To see a list of all the components:
results.columns
0   agent.agent_index
1   agent.agent_instruction
2   agent.agent_name
3   answer.smallest_prime
4   cache_keys.smallest_prime_cache_key
5   cache_used.smallest_prime_cache_used
6   comment.smallest_prime_comment
7   generated_tokens.smallest_prime_generated_tokens
8   iteration.iteration
9   model.frequency_penalty
10  model.inference_service
11  model.logprobs
12  model.max_tokens
13  model.model
14  model.model_index
15  model.presence_penalty
16  model.temperature
17  model.top_logprobs
18  model.top_p
19  prompt.smallest_prime_system_prompt
20  prompt.smallest_prime_user_prompt
21  question_options.smallest_prime_question_options
22  question_text.smallest_prime_question_text
23  question_type.smallest_prime_question_type
24  raw_model_response.smallest_prime_cost
25  raw_model_response.smallest_prime_input_price_per_million_tokens
26  raw_model_response.smallest_prime_input_tokens
27  raw_model_response.smallest_prime_one_usd_buys
28  raw_model_response.smallest_prime_output_price_per_million_tokens
29  raw_model_response.smallest_prime_output_tokens
30  raw_model_response.smallest_prime_raw_model_response
31  reasoning_summary.smallest_prime_reasoning_summary
32  scenario.scenario_index
33  validated.smallest_prime_validated

Example: Conducting a survey with agents and models

In the next example we construct a more complex survey consisting of multiple questions, and design personas for AI agents to answer it. Then we select specific language models to generate the answers. We start by creating questions of different types and passing them to a Survey:
from edsl import QuestionLinearScale, QuestionFreeText

q_enjoy = QuestionLinearScale(
    question_name = "enjoy",
    question_text = "On a scale from 1 to 5, how much do you enjoy reading?",
    question_options = [1, 2, 3, 4, 5],
    option_labels = {1: "Not at all", 5: "Very much"}
)

q_favorite_place = QuestionFreeText(
    question_name = "favorite_place",
    question_text = "Describe your favorite place for reading."
)
We construct a Survey by passing a list of questions:
from edsl import Survey

survey = Survey(questions = [q_enjoy, q_favorite_place])

Agents

An important feature of EDSL is the ability to create AI agents to answer questions. This is done by passing dictionaries of relevant “traits” to Agent objects that are used by language models to generate responses. Learn more about designing agents. Here we construct several simple agent personas to use with our survey:
from edsl import AgentList, Agent

agents = AgentList(
    Agent(traits = {"persona": p}) for p in ["artist", "mechanic", "sailor"]
)

Language models

EDSL works with many popular large language models that we can select to use with a survey. This makes it easy to compare responses among models in the results that are generated. See a current list of available models at our model pricing and performance page. You can also check available service providers:
from edsl import Model

Model.services()
ScenarioList scenarios: 15; keys: ['service'];
    service
0   anthropic
1   azure
2   bedrock
3   deep_infra
4   deepseek
5   google
6   groq
7   mistral
8   ollama
9   open_router
10  openai
11  openai_v2
12  perplexity
13  together
14  xai
To check the default model that will be used if no models are specified for a survey (e.g., as in the first example above):
Model()
LanguageModel
    key                           value
0   model                         gpt-4o
1   parameters:temperature        0.500000
2   parameters:max_tokens         1000
3   parameters:top_p              1
4   parameters:frequency_penalty  0
5   parameters:presence_penalty   0
6   parameters:logprobs           False
7   parameters:top_logprobs       3
8   inference_service             openai
Note: The output may differ if the default model has changed since this page was last updated.
Here we select some models to use with our survey:
from edsl import ModelList, Model

models = ModelList([
    Model("gpt-4o", service_name = "openai"),
    Model("gemini-1.5-flash", service_name = "google"),
    Model("claude-3-7-sonnet-20250219", service_name = "anthropic")
])

Running a survey

We add agents and models to a survey using the by method. Then we administer a survey the same way that we do an individual question, by calling the run method on it:
results = survey.by(agents).by(models).run()
We can pass an expression to filter() the results and list the components to order_by():
(
    results
    .filter("persona != 'artist'")
    .order_by("persona", "model")
    .select("model", "persona", "enjoy", "favorite_place")
)
    model.model | agent.persona | answer.enjoy | answer.favorite_place
0   claude-3-7-sonnet-20250219 | mechanic | 3 | As a mechanic, I’d have to say my favorite place for reading is actually in my workshop during lunch breaks or after hours when things quiet down. There’s this old, worn leather chair I salvaged and fixed up that sits in the corner by a window. The natural light is perfect during the day, and I’ve got a good shop lamp nearby for evening reading. I usually have service manuals, repair guides, or car magazines stacked on a small side table, but I also enjoy a good thriller or history book there too. There’s something calming about being surrounded by my tools and projects while taking a mental break with a book. The faint smell of oil and metal gives it a comfortable familiarity that I find relaxing. Plus, if I read something interesting about a new repair technique, I’m right where I need to be to try it out!
1   gemini-1.5-flash | mechanic | 3 | My favorite place to read? Gotta be my garage, actually. Not the whole thing, mind you. Just that little corner by the workbench, where the light’s just right – not too harsh, you know? I’ve got a comfy old stool there, worn smooth from years of use, and a little side table I’ve cobbled together from scrap parts. Got a good lamp on it, too, one of those adjustable ones so I can get the perfect angle. The air smells of oil and grease, sure, but it’s a familiar smell, comforting even. Plus, there’s always something interesting to look at – a half-finished project, a neat tool I haven’t used in a while… it keeps my mind occupied even when I’m supposed to be concentrating on the book. It’s peaceful in its own way, you know? The quiet hum of the fridge in the back, the occasional drip from a leaky faucet… it’s my sanctuary.
2   gpt-4o | mechanic | 3 | As a mechanic, my favorite place for reading is actually my garage. It’s not your typical cozy reading nook, but there’s something about the smell of oil and the sound of tools clinking that feels comforting. I’ve got a sturdy workbench where I can prop up a manual or a good book about classic cars. The lighting is bright enough to read by, and when I need a break, I can glance up at the projects I’m working on. Plus, it’s quiet, especially when the garage door is down, so I can really focus on whatever I’m reading.
3   claude-3-7-sonnet-20250219 | sailor | 3 | Ah, me favorite spot for readin’? That’d be the fo’c’sle (forecastle) of me ship when we’re anchored in a calm bay. There’s somethin’ special about sittin’ on a sea chest with me back against the hull, lantern swingin’ gently overhead, and the soft sounds of water lappin’ against the sides. The gentle rockin’ of the vessel puts ye in a perfect state of mind for gettin’ lost in a tale. I like to read in the early mornin’ when the air is crisp and most of the crew is still snorin’ away, or at dusk when the day’s work is done. When I’m ashore, I fancy findin’ a quiet spot near the harbor where I can still see the ships and smell the salt air. A good book, the distant cry of gulls, and the promise of the open water - that’s all this old salt needs for a perfect readin’ nook.
4   gemini-1.5-flash | sailor | 3 | Ahoy there! My favorite place for a good read? That’s easy. The crow’s nest, of course! Up there, high above the deck, with the wind whipping through my hair and the spray of the ocean kissing my face… It’s the perfect spot. The rocking of the ship is a bit of a distraction sometimes, but the view… the endless horizon… it’s inspiring. Makes even the dullest sea shanty seem like a thrilling adventure. Plus, nobody bothers you up there! Just me, my book, and the vast, beautiful ocean.
5   gpt-4o | sailor | 3 | Ah, my favorite place for reading has to be the deck of a ship, right under the open sky. There’s something about the gentle sway of the sea and the salty breeze that makes the words come alive. I like to find a quiet spot, maybe near the stern where the sound of the waves is a bit more pronounced. The horizon stretches out endlessly, and with the sun setting, casting golden hues across the water, it feels like I’m part of the stories I’m reading. It’s a place where adventure and tranquility meet, perfect for diving into a good book.

Example: Adding context to questions

EDSL provides a variety of ways to add data or content to survey questions. These methods include:

Piping question answers

Here we demonstrate how to pipe the answer to a question into the text of another question. This is done by using a placeholder {{ <question_name>.answer }} in the text of the follow-on question where the answer to the prior question is to be inserted when the survey is run. This causes the questions to be administered in the required order (survey questions are administered asynchronously by default). Learn more about piping question answers. Here we insert the answer to a numerical question into the text of a follow-on yes/no question:
from edsl import QuestionNumerical, QuestionYesNo, Survey

q1 = QuestionNumerical(
    question_name = "random_number",
    question_text = "Pick a random number between 1 and 1,000."
)

q2 = QuestionYesNo(
    question_name = "prime",
    question_text = "Is this a prime number: {{ random_number.answer }}"
)
survey = Survey([q1, q2])

results = survey.run()
We can check the user_prompt for the prime question to verify that the answer to the random_number question was piped into it:
results.select("random_number", "prime_user_prompt", "prime", "prime_comment")
    answer.random_number | prompt.prime_user_prompt | answer.prime | comment.prime_comment
0   487 | Is this a prime number: 487 No Yes Only 1 option may be selected. Please respond with just your answer. After the answer, you can put a comment explaining your response. | Yes | 487 is a prime number because it has no divisors other than 1 and itself.

Adding “memory” of questions and answers

Here we instead add a “memory” of the first question and answer to the context of the second question. This is done by calling a memory rule and identifying the question(s) to add. Instead of just the answer, information about the full question and answer are presented with the follow-on question text, and no placeholder is used. Learn more about question memory rules. Here we demonstrate the add_targeted_memory method (we could also use set_full_memory_mode or other memory rules):
from edsl import QuestionNumerical, QuestionYesNo, Survey

q1 = QuestionNumerical(
    question_name = "random_number",
    question_text = "Pick a random number between 1 and 1,000."
)
q2 = QuestionYesNo(
    question_name = "prime",
    question_text = "Is the number you picked a prime number?"
)

survey = Survey([q1, q2]).add_targeted_memory(q2, q1)

results = survey.run()
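As noted above, set_full_memory_mode is an alternative rule that gives every question a memory of all prior questions and answers; a minimal sketch:
# survey = Survey([q1, q2]).set_full_memory_mode()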
We can again use the user_prompt to verify the context that was added to the follow-on question. (To view the results in a long table instead, we can chain the table() and long() methods onto the selection to modify the default table view, as sketched after the output below.)
results.select("random_number", "prime_user_prompt", "prime", "prime_comment")
    answer.random_number | prompt.prime_user_prompt | answer.prime | comment.prime_comment
0   487 | Is the number you picked a prime number? No Yes Only 1 option may be selected. Please respond with just your answer. After the answer, you can put a comment explaining your response. Before the question you are now answering, you already answered the following question(s): Question: Pick a random number between 1 and 1,000. Answer: 487 | Yes | 487 is a prime number because it has no divisors other than 1 and itself.
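A sketch of the long-table view mentioned above, chaining table() and long() onto the same selection (the same data in a key-value layout):
results.select("random_number", "prime_user_prompt", "prime", "prime_comment").table().long()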
Related topic: Learn more about exploring and simulating “randomness” with AI agents and LLMs in this notebook.

Scenarios

We can also add external data or content to survey questions. This can be useful when you want to efficiently create and administer multiple versions of questions at once, e.g., for conducting data labeling tasks. This is done by creating Scenario dictionaries for the data or content to be used with a survey, where the keys match {{ placeholder }} names used in question texts (or question options) and the values are the content to be added. Scenarios can also be used to add metadata to survey results, e.g., data sources or other information that you may want to include in the results for reference but not necessarily include in question texts.
In the next example we revise the prior survey questions about reading to take a parameter for other activities that we may want to ask about, and create simple scenarios for some activities. EDSL provides methods for automatically generating scenarios from a variety of data sources, including PDFs, CSVs, docs, images, tables and dicts; here we use the from_list method to convert a list of activities into scenarios. Then we demonstrate how to use scenarios to create multiple versions of our questions, either (i) when constructing a survey or (ii) when running it:
  • When constructing a survey, the loop method is used to create a list of versions of a question with the scenarios already added to it. When the questions are passed to a survey and it is run, the results include columns for each individual question; there is no scenario column, and there is a single row for each agent’s answers to all the questions.
  • When running a survey, the by method is used to add scenarios to a survey of questions with placeholders at the time that it is run (the same way that agents and models are added to a survey). This adds a scenario column to the results, with a row for each answer to each question for each scenario.
Learn more about using scenarios. Here we create scenarios for a simple list of activities:
from edsl import ScenarioList

scenarios = ScenarioList.from_list("activity", ["reading", "running", "relaxing"])
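The from_list method used above is one of several scenario constructors. For example, a sketch of loading the same scenarios from a CSV file with the from_csv method (activities.csv is a hypothetical file with an 'activity' column):
# scenarios = ScenarioList.from_csv("activities.csv")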

Adding scenarios using the by method

Here we add the scenarios to the survey when we run it, together with any desired agents and models:
from edsl import QuestionLinearScale, QuestionFreeText, Survey

q_enjoy = QuestionLinearScale(
    question_name = "enjoy",
    question_text = "On a scale from 1 to 5, how much do you enjoy {{ scenario.activity }}?",
    question_options = [1, 2, 3, 4, 5],
    option_labels = {1: "Not at all", 5: "Very much"}
)

q_favorite_place = QuestionFreeText(
    question_name = "favorite_place",
    question_text = "In a brief sentence, describe your favorite place for {{ scenario.activity }}."
)

survey = Survey([q_enjoy, q_favorite_place])

results = survey.by(scenarios).by(agents).by(models).run()
We can optionally drop the prefixes agent, scenario, answer, etc., when fields are unique:
(
    results
    .filter("model.model == 'gpt-4o'")
    .order_by("activity", "persona")
    .select("activity", "persona", "enjoy", "favorite_place")
)
    scenario.activity | agent.persona | answer.enjoy | answer.favorite_place
0   reading | artist | 4 | My favorite place for reading is a cozy corner of my art studio, surrounded by vibrant canvases and the soft glow of afternoon light filtering through the window.
1   reading | mechanic | 3 | My favorite place for reading is the cozy corner of my garage, surrounded by tools and the smell of motor oil, where I can escape into a good book during breaks.
2   reading | sailor | 3 | My favorite place for reading is the deck of a ship, with the sound of waves lapping against the hull and a gentle sea breeze in the air.
3   relaxing | artist | 4 | My favorite place for relaxing is a quiet, sun-dappled corner of my art studio, surrounded by canvases and the soft hum of creativity.
4   relaxing | mechanic | 4 | My favorite place for relaxing is my garage, surrounded by the familiar scent of motor oil and the satisfying hum of engines.
5   relaxing | sailor | 3 | My favorite place for relaxing is on the deck of a ship, watching the horizon as the sun sets over the endless ocean.
6   running | artist | 2 | My favorite place for running is a serene forest trail where the dappled sunlight dances through the leaves and the air is filled with the earthy scent of nature.
7   running | mechanic | 1 | I don’t run much, but I’d imagine a quiet trail through the woods would be a nice spot for a jog.
8   running | sailor | 2 | As a sailor, my favorite place for running is along the beach at sunrise, with the sound of the waves and the salty sea breeze filling the air.

Adding scenarios using the loop method

Here we add scenarios to questions when constructing a survey, as opposed to when running it. When we run the survey, the results will include columns for each question and no scenario field. Note that we can also optionally use the scenario key in the question names (otherwise the names are incremented automatically, e.g., enjoy_0, enjoy_1):
from edsl import QuestionLinearScale, QuestionFreeText

q_enjoy = QuestionLinearScale(
    question_name = "enjoy_{{ scenario.activity }}", # optional use of scenario key
    question_text = "On a scale from 1 to 5, how much do you enjoy {{ activity }}?",
    question_options = [1, 2, 3, 4, 5],
    option_labels = {1: "Not at all", 5: "Very much"}
)

q_favorite_place = QuestionFreeText(
    question_name = "favorite_place_{{ scenario.activity }}", # optional use of scenario key
    question_text = "In a brief sentence, describe your favorite place for {{ scenario.activity }}."
)
Looping the scenarios to create lists of questions:
enjoy_questions = q_enjoy.loop(scenarios)
enjoy_questions
[Question('linear_scale', question_name = """enjoy_reading""", question_text = """On a scale from 1 to 5, how much do you enjoy reading?""", question_options = [1, 2, 3, 4, 5], option_labels = {1: 'Not at all', 5: 'Very much'}),
 Question('linear_scale', question_name = """enjoy_running""", question_text = """On a scale from 1 to 5, how much do you enjoy running?""", question_options = [1, 2, 3, 4, 5], option_labels = {1: 'Not at all', 5: 'Very much'}),
 Question('linear_scale', question_name = """enjoy_relaxing""", question_text = """On a scale from 1 to 5, how much do you enjoy relaxing?""", question_options = [1, 2, 3, 4, 5], option_labels = {1: 'Not at all', 5: 'Very much'})]
favorite_place_questions = q_favorite_place.loop(scenarios)
favorite_place_questions
[Question('free_text', question_name = """favorite_place_reading""", question_text = """In a brief sentence, describe your favorite place for reading."""),
 Question('free_text', question_name = """favorite_place_running""", question_text = """In a brief sentence, describe your favorite place for running."""),
 Question('free_text', question_name = """favorite_place_relaxing""", question_text = """In a brief sentence, describe your favorite place for relaxing.""")]
Combining the questions in a survey:
survey = Survey(questions = enjoy_questions + favorite_place_questions)

results = survey.by(agents).by(models).run()
We can see that there are additional question fields and no “scenario” field:
results.columns
0    agent.agent_index
1    agent.agent_instruction
2    agent.agent_name
3    agent.persona
4    answer.enjoy_reading
5    answer.enjoy_relaxing
6    answer.enjoy_running
7    answer.favorite_place_reading
8    answer.favorite_place_relaxing
9    answer.favorite_place_running
10   cache_keys.enjoy_reading_cache_key
11   cache_keys.enjoy_relaxing_cache_key
12   cache_keys.enjoy_running_cache_key
13   cache_keys.favorite_place_reading_cache_key
14   cache_keys.favorite_place_relaxing_cache_key
15   cache_keys.favorite_place_running_cache_key
16   cache_used.enjoy_reading_cache_used
17   cache_used.enjoy_relaxing_cache_used
18   cache_used.enjoy_running_cache_used
19   cache_used.favorite_place_reading_cache_used
20   cache_used.favorite_place_relaxing_cache_used
21   cache_used.favorite_place_running_cache_used
22   comment.enjoy_reading_comment
23   comment.enjoy_relaxing_comment
24   comment.enjoy_running_comment
25   comment.favorite_place_reading_comment
26   comment.favorite_place_relaxing_comment
27   comment.favorite_place_running_comment
28   generated_tokens.enjoy_reading_generated_tokens
29   generated_tokens.enjoy_relaxing_generated_tokens
30   generated_tokens.enjoy_running_generated_tokens
31   generated_tokens.favorite_place_reading_generated_tokens
32   generated_tokens.favorite_place_relaxing_generated_tokens
33   generated_tokens.favorite_place_running_generated_tokens
34   iteration.iteration
35   model.frequency_penalty
36   model.inference_service
37   model.logprobs
38   model.maxOutputTokens
39   model.max_tokens
40   model.model
41   model.model_index
42   model.presence_penalty
43   model.stopSequences
44   model.temperature
45   model.topK
46   model.topP
47   model.top_logprobs
48   model.top_p
49   prompt.enjoy_reading_system_prompt
50   prompt.enjoy_reading_user_prompt
51   prompt.enjoy_relaxing_system_prompt
52   prompt.enjoy_relaxing_user_prompt
53   prompt.enjoy_running_system_prompt
54   prompt.enjoy_running_user_prompt
55   prompt.favorite_place_reading_system_prompt
56   prompt.favorite_place_reading_user_prompt
57   prompt.favorite_place_relaxing_system_prompt
58   prompt.favorite_place_relaxing_user_prompt
59   prompt.favorite_place_running_system_prompt
60   prompt.favorite_place_running_user_prompt
61   question_options.enjoy_reading_question_options
62   question_options.enjoy_relaxing_question_options
63   question_options.enjoy_running_question_options
64   question_options.favorite_place_reading_question_options
65   question_options.favorite_place_relaxing_question_options
66   question_options.favorite_place_running_question_options
67   question_text.enjoy_reading_question_text
68   question_text.enjoy_relaxing_question_text
69   question_text.enjoy_running_question_text
70   question_text.favorite_place_reading_question_text
71   question_text.favorite_place_relaxing_question_text
72   question_text.favorite_place_running_question_text
73   question_type.enjoy_reading_question_type
74   question_type.enjoy_relaxing_question_type
75   question_type.enjoy_running_question_type
76   question_type.favorite_place_reading_question_type
77   question_type.favorite_place_relaxing_question_type
78   question_type.favorite_place_running_question_type
79   raw_model_response.enjoy_reading_cost
80   raw_model_response.enjoy_reading_input_price_per_million_tokens
81   raw_model_response.enjoy_reading_input_tokens
82   raw_model_response.enjoy_reading_one_usd_buys
83   raw_model_response.enjoy_reading_output_price_per_million_tokens
84   raw_model_response.enjoy_reading_output_tokens
85   raw_model_response.enjoy_reading_raw_model_response
86   raw_model_response.enjoy_relaxing_cost
87   raw_model_response.enjoy_relaxing_input_price_per_million_tokens
88   raw_model_response.enjoy_relaxing_input_tokens
89   raw_model_response.enjoy_relaxing_one_usd_buys
90   raw_model_response.enjoy_relaxing_output_price_per_million_tokens
91   raw_model_response.enjoy_relaxing_output_tokens
92   raw_model_response.enjoy_relaxing_raw_model_response
93   raw_model_response.enjoy_running_cost
94   raw_model_response.enjoy_running_input_price_per_million_tokens
95   raw_model_response.enjoy_running_input_tokens
96   raw_model_response.enjoy_running_one_usd_buys
97   raw_model_response.enjoy_running_output_price_per_million_tokens
98   raw_model_response.enjoy_running_output_tokens
99   raw_model_response.enjoy_running_raw_model_response
100  raw_model_response.favorite_place_reading_cost
101  raw_model_response.favorite_place_reading_input_price_per_million_tokens
102  raw_model_response.favorite_place_reading_input_tokens
103  raw_model_response.favorite_place_reading_one_usd_buys
104  raw_model_response.favorite_place_reading_output_price_per_million_tokens
105  raw_model_response.favorite_place_reading_output_tokens
106  raw_model_response.favorite_place_reading_raw_model_response
107  raw_model_response.favorite_place_relaxing_cost
108  raw_model_response.favorite_place_relaxing_input_price_per_million_tokens
109  raw_model_response.favorite_place_relaxing_input_tokens
110  raw_model_response.favorite_place_relaxing_one_usd_buys
111  raw_model_response.favorite_place_relaxing_output_price_per_million_tokens
112  raw_model_response.favorite_place_relaxing_output_tokens
113  raw_model_response.favorite_place_relaxing_raw_model_response
114  raw_model_response.favorite_place_running_cost
115  raw_model_response.favorite_place_running_input_price_per_million_tokens
116  raw_model_response.favorite_place_running_input_tokens
117  raw_model_response.favorite_place_running_one_usd_buys
118  raw_model_response.favorite_place_running_output_price_per_million_tokens
119  raw_model_response.favorite_place_running_output_tokens
120  raw_model_response.favorite_place_running_raw_model_response
121  reasoning_summary.enjoy_reading_reasoning_summary
122  reasoning_summary.enjoy_relaxing_reasoning_summary
123  reasoning_summary.enjoy_running_reasoning_summary
124  reasoning_summary.favorite_place_reading_reasoning_summary
125  reasoning_summary.favorite_place_relaxing_reasoning_summary
126  reasoning_summary.favorite_place_running_reasoning_summary
127  scenario.scenario_index
128  validated.enjoy_reading_validated
129  validated.enjoy_relaxing_validated
130  validated.enjoy_running_validated
131  validated.favorite_place_reading_validated
132  validated.favorite_place_relaxing_validated
133  validated.favorite_place_running_validated
(
    results
    .filter("model.model == 'gpt-4o'")
    .order_by("persona")
    .select("persona", "enjoy_reading", "enjoy_running", "enjoy_relaxing", "favorite_place_reading", "favorite_place_running", "favorite_place_relaxing")
)
    agent.persona | answer.enjoy_reading | answer.enjoy_running | answer.enjoy_relaxing | answer.favorite_place_reading | answer.favorite_place_running | answer.favorite_place_relaxing
0   artist | 4 | 2 | 4 | My favorite place for reading is a cozy corner of my art studio, surrounded by vibrant canvases and the soft glow of afternoon light filtering through the window. | My favorite place for running is a serene forest trail where the dappled sunlight dances through the leaves and the air is filled with the earthy scent of nature. | My favorite place for relaxing is a quiet, sun-dappled corner of my art studio, surrounded by canvases and the soft hum of creativity.
1   mechanic | 3 | 1 | 4 | My favorite place for reading is the cozy corner of my garage, surrounded by tools and the smell of motor oil, where I can escape into a good book during breaks. | I don’t run much, but I’d imagine a quiet trail through the woods would be a nice spot for a jog. | My favorite place for relaxing is my garage, surrounded by the familiar scent of motor oil and the satisfying hum of engines.
2   sailor | 3 | 2 | 3 | My favorite place for reading is the deck of a ship, with the sound of waves lapping against the hull and a gentle sea breeze in the air. | As a sailor, my favorite place for running is along the beach at sunrise, with the sound of the waves and the salty sea breeze filling the air. | My favorite place for relaxing is on the deck of a ship, watching the horizon as the sun sets over the endless ocean.

Exploring Results

EDSL comes with built-in methods for analyzing and visualizing survey results. For example, you can call the to_pandas method to convert results into a dataframe:
df = results.to_pandas(remove_prefix=True)
df
    enjoy_reading | enjoy_running | enjoy_relaxing | favorite_place_reading | favorite_place_running | favorite_place_relaxing | scenario_index | agent_instruction | persona | agent_index | enjoy_relaxing_reasoning_summary | enjoy_reading_reasoning_summary | enjoy_running_reasoning_summary | favorite_place_running_reasoning_summary | enjoy_running_validated | enjoy_relaxing_validated | enjoy_reading_validated | favorite_place_running_validated | favorite_place_reading_validated | favorite_place_relaxing_validated
0   4 | 2 | 4 | My favorite place for reading is a cozy corner… | My favorite place for running is a serene fore… | My favorite place for relaxing is a quiet, sun… | 0 | You are answering questions as if you were a h… | artist | 0 | NaN | NaN | NaN | NaN | True | True | True | True | True | True
1   3 | 1 | 5 | Honestly? Anywhere the light’s good and I’ve … | Anywhere with a good view—preferably overlooki… | My favorite place to relax is nestled in my st… | 0 | You are answering questions as if you were a h… | artist | 0 | NaN | NaN | NaN | NaN | True | True | True | True | True | True
2   5 | 3 | 4 | My favorite place for reading is my sun-drench… | As an artist, I find running along the coastal… | My favorite place for relaxing is my sun-drenc… | 0 | You are answering questions as if you were a h… | artist | 0 | NaN | NaN | NaN | NaN | True | True | True | True | True | True
3   3 | 1 | 4 | My favorite place for reading is the cozy corn… | I don’t run much, but I’d imagine a quiet trai… | My favorite place for relaxing is my garage, s… | 0 | You are answering questions as if you were a h… | mechanic | 1 | NaN | NaN | NaN | NaN | True | True | True | True | True | True
4   3 | 1 | 3 | My favorite place to read? Gotta be my garage… | Anywhere with a good, solid, well-maintained r… | My garage, with a cold beer and a good engine … | 0 | You are answering questions as if you were a h… | mechanic | 1 | NaN | NaN | NaN | NaN | True | True | True | True | True | True
5   3 | 3 | 4 | My favorite place for reading is in my small w… | As a mechanic, I’d say my favorite place for r… | My favorite place to relax is my garage worksh… | 0 | You are answering questions as if you were a h… | mechanic | 1 | NaN | NaN | NaN | NaN | True | True | True | True | True | True
6   3 | 2 | 3 | My favorite place for reading is the deck of a… | As a sailor, my favorite place for running is … | My favorite place for relaxing is on the deck … | 0 | You are answering questions as if you were a h… | sailor | 2 | NaN | NaN | NaN | NaN | True | True | True | True | True | True
7   3 | 1 | 4 | The crow’s nest, of course! The wind in my ha… | Anywhere the wind whips off the ocean and the … | A quiet cove, sheltered from the wind, with th… | 0 | You are answering questions as if you were a h… | sailor | 2 | NaN | NaN | NaN | NaN | True | True | True | True | True | True
8   3 | 3 | 3 | I love reading in the ship’s crow’s nest at du… | I’d say the long stretch of beach at sunrise, … | Nothing beats the gentle sway of a hammock on … | 0 | You are answering questions as if you were a h… | sailor | 2 | NaN | NaN | NaN | NaN | True | True | True | True | True | True
9 rows × 134 columns
The Results object also supports SQL-like queries with the sql method:
results.sql("""
select model, persona, enjoy_reading, favorite_place_reading
from self
order by 1,2,3
""")
    model | persona | enjoy_reading | favorite_place_reading
0   claude-3-7-sonnet-20250219 | artist | 5 | My favorite place for reading is my sun-drenched studio corner, where the natural light perfectly illuminates the pages while inspiring my artistic sensibilities.
1   claude-3-7-sonnet-20250219 | mechanic | 3 | My favorite place for reading is in my small workshop corner, where the familiar smell of motor oil and the soft hum of the shop fan create a surprisingly peaceful atmosphere after a long day of working on engines.
2   claude-3-7-sonnet-20250219 | sailor | 3 | I love reading in the ship’s crow’s nest at dusk, with the gentle rocking of the vessel and the vast ocean stretching to the horizon - it’s peaceful above the bustle of the deck below.
3   gemini-1.5-flash | artist | 3 | Honestly? Anywhere the light’s good and I’ve got a strong cup of coffee nearby. A sun-drenched cafe, my messy studio, even a quiet park bench will do.
4   gemini-1.5-flash | mechanic | 3 | My favorite place to read? Gotta be my garage, surrounded by the comforting smell of engine grease and the quiet hum of the air compressor.
5   gemini-1.5-flash | sailor | 3 | The crow’s nest, of course! The wind in my hair, the sea stretching out…perfect.
6   gpt-4o | artist | 4 | My favorite place for reading is a cozy corner of my art studio, surrounded by vibrant canvases and the soft glow of afternoon light filtering through the window.
7   gpt-4o | mechanic | 3 | My favorite place for reading is the cozy corner of my garage, surrounded by tools and the smell of motor oil, where I can escape into a good book during breaks.
8   gpt-4o | sailor | 3 | My favorite place for reading is the deck of a ship, with the sound of waves lapping against the hull and a gentle sea breeze in the air.

Validating results with humans

We can use the humanize method to launch a web-based version of a survey to collect responses from humans. Responses are immediately available in your Coop account, where you can launch surveys with LLMs and human respondents interactively. Here we generate a web-based version of the above survey, answer it, and then inspect the new results in code. Learn more about launching hybrid surveys and collecting responses with participant platform integrations.
web_info = survey.humanize()

web_info
{
    'project_name': 'Project',
    'uuid': '2504145b-329d-4970-82fb-10659e21e62c',
    'admin_url': 'https://www.expectedparrot.com/home/projects/2504145b-329d-4970-82fb-10659e21e62c',
    'respondent_url': 'https://www.expectedparrot.com/respond/projects/2504145b-329d-4970-82fb-10659e21e62c/runs/d6e66568-7921-479e-bb44-5b8880f3d9a2'
}
from edsl import Coop

coop = Coop()

human_results = coop.get_project_human_responses(web_info["uuid"])
human_results

Posting to Coop

Coop is a platform for creating, storing and sharing LLM-based research. It is fully integrated with EDSL and accessible from your workspace or Coop account page. Learn more about creating an account and using Coop. We can post any EDSL object to Coop by calling the push method on it, optionally passing a description for the object, a convenient alias for the URL, and a visibility status (public, private or unlisted, the default). For example, the results above are already posted to Coop because they were generated using remote inference (see the links above). The following code posts them manually:
results.push(
    description = "Starter tutorial sample survey results",
    alias = "starter-tutorial-example-survey-results",
    visibility = "public"
)
We can also post this notebook; here we first create a Notebook object for the tutorial file:
from edsl import Notebook

notebook = Notebook(path = "starter_tutorial.ipynb")

notebook.push(
    description = "Starter Tutorial",
    alias = "starter-tutorial-notebook",
    visibility = "public"
)
To update an object that has already been posted, call the patch method on it:
from edsl import Notebook

notebook = Notebook(path = "starter_tutorial.ipynb") # re-create the object to capture the latest saved version

notebook.patch("https://www.expectedparrot.com/content/RobinHorton/starter-tutorial-notebook", value = notebook)
{'status': 'success',
 'message': None,
 'requires_upload': False,
 'object_uuid': None}
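Anyone with access can also retrieve a posted object in code; a minimal sketch using the pull method (assuming it is passed the object URL or UUID shown above):
# from edsl import Notebook

# notebook = Notebook.pull("https://www.expectedparrot.com/content/RobinHorton/starter-tutorial-notebook")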