Surveys
A Survey is a collection of Questions that can be administered to one or more AI Agents and Language Models at once. Survey questions can be administered asynchronously (the default), or according to rules such as skip and stop logic, and with or without context of other questions in a survey.
Surveys can be used to collect data, generate content or perform other tasks. The results of a survey are stored in a Results object, which can be used to analyze the responses and other components of the survey. Learn more about built-in methods for working with Results objects in the Results section.
Key steps
The key steps to creating and conducting a survey are:
Defining questions of various types
Adding the questions to a Survey
Optionally adding rules, question memory, AI agents, scenarios and language models
Running the survey and analyzing the Results
Sending a survey to a language model generates a dataset of Results that includes the responses and other components of the survey. Results can be analyzed and visualized using built-in methods of the Results object.
Key methods
A survey is administered by calling the run() method on the Survey object, after adding any agents, scenarios and models with the by() method, and any rules or memory with the appropriate methods (see detailed examples of each below):
add_skip_rule() - Skip a question based on a conditional expression (e.g., based on a response to another question).
add_stop_rule() - End the survey based on a conditional expression.
add_rule() - Administer a specified question next based on a conditional expression.
set_full_memory_mode() - Include a memory of all prior questions/answers at each new question in the survey.
set_lagged_memory() - Include a memory of a specified number of prior questions/answers at each new question in the survey.
add_targeted_memory() - Include a memory of a particular question/answer at another question in the survey.
add_memory_collection() - Include memories of a set of prior questions/answers at another question in the survey.
Piping
You can also pipe individual components of questions into other questions, such as inserting the answer to a question in the question text of another question. This is done by using the {{ question_name.answer }} syntax in the text of a question, and is useful for creating dynamic surveys that reference prior answers.
Note that this method is different from memory rules, which automatically include the full context of a specified question at a new question in the survey: “Before the question you are now answering, you already answered the following question(s): Question: <question_text> Answer: <answer>”. See examples below.
Flow
The show_flow() method displays the flow of a survey, showing the order of questions and any rules that have been applied. See example below.
Rules
The show_rules() method displays a table of the conditional rules that have been applied to a survey, and the questions they apply to. See examples below.
Prompts
The show_prompts() method displays the user and system prompts for each question in a survey. This is a companion method to the prompts() method of a Job object, which returns a dataset containing the prompts together with information about each question, scenario, agent, model and estimated cost. (A Job is created by adding a Model to a Survey or Question.) See examples below.
Constructing a survey
In the examples below we construct a simple survey of questions, and then demonstrate how to run it with various rules and memory options, how to add AI agents and language models, and how to analyze the results.
Defining questions
Questions can be defined as various types, including multiple choice, checkbox, free text, linear scale, numerical and other types. The formats are defined in the Questions module. Here we define some questions by importing question types and creating instances of them:
from edsl import QuestionMultipleChoice, QuestionCheckBox, QuestionLinearScale, QuestionNumerical

q1 = QuestionMultipleChoice(
    question_name = "consume_local_news",
    question_text = "How often do you consume local news?",
    question_options = ["Daily", "Weekly", "Monthly", "Never"]
)

q2 = QuestionCheckBox(
    question_name = "sources",
    question_text = "What are your most common sources of local news? (Select all that apply)",
    question_options = ["Television", "Newspaper", "Online news websites", "Social Media", "Radio", "Other"]
)

q3 = QuestionLinearScale(
    question_name = "rate_coverage",
    question_text = "On a scale of 1 to 10, how would you rate the quality of local news coverage in your area?",
    question_options = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    option_labels = {1: "Very poor", 10: "Excellent"}
)

q4 = QuestionNumerical(
    question_name = "minutes_reading",
    question_text = "On average, how many minutes do you spend consuming local news each day?",
    min_value = 0,    # optional
    max_value = 1440  # optional
)
Adding questions to a survey
Questions are passed to a Survey object as a list of Question objects:
from edsl import Survey
survey = Survey(questions = [q1, q2, q3, q4])
Alternatively, questions can be added to a survey one at a time:
from edsl import Survey
survey = Survey().add_question(q1).add_question(q2).add_question(q3).add_question(q4)
Running a survey
Once constructed, a survey can be administered by calling the run() method. If question Scenarios, Agents or Language Models have been specified, they are added to the survey with the by() method when running it. (If no language model is specified, the survey is run with the default model, which can be inspected by running Model().)
For example, here we run the survey with a simple agent persona and specify that GPT-4o should be used. Note that the agent and model can be added in either order, so long as all components of a given type are added together (e.g., if using multiple agents or models, pass them as a list to a single by() call):
from edsl import Agent, Model
agent = Agent(traits = {"persona": "You are a teenager who hates reading."})
model = Model("gpt-4o")
results = survey.by(agent).by(model).run()
If remote inference is turned on, the survey will be run on the Expected Parrot server and information about accessing the results at your Coop account will be displayed. For example:
Job sent to server. (Job uuid=025d9fdc-efd9-4ca7-ac7a-f5ab28755f4d).
Job completed and Results stored on Coop: https://www.expectedparrot.com/content/4cfcf0c6-6aff-4447-90cb-cd9e01111a28.
If remote inference is turned off, the survey will be run locally and results will be added to your local cache only. Learn more about data and Remote Caching.
Optional parameters
There are optional parameters that can be passed to the run() method, including the following (see the combined example after this list):
n - The number of responses to generate for each question (default is 1). Example: run(n=5) will administer the same exact question (and scenario, if any) to an agent and model 5 times.
show_progress_bar - A boolean value to show a progress bar while running the survey (default is False). Example: run(show_progress_bar=True).
cache - A boolean value to cache the results of the survey (default is False). Example: run(cache=False).
disable_remote_inference - A boolean value to indicate whether to run the survey locally while remote inference is activated (default is False). Example: run(disable_remote_inference=True).
remote_inference_results_visibility - A string value to indicate the visibility of the results on the Expected Parrot server, when a survey is being run remotely. Possible values are “public”, “unlisted” or “private” (default is “unlisted”). Visibility can also be modified at the Coop web app. Example: run(remote_inference_results_visibility="public").
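Several of these parameters can be combined in a single run() call. Here is a minimal sketch reusing the survey, agent and model defined above:
results = survey.by(agent).by(model).run(
    n = 2,                            # generate 2 responses for each question
    cache = False,                    # do not reuse cached responses
    disable_remote_inference = True   # run the survey locally
)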
Survey rules & logic
Rules can be applied to a survey with the add_skip_rule(), add_stop_rule() and add_rule() methods, which take a logical expression and the relevant questions.
Skip rules
The add_skip_rule() method skips a question if a condition is met. The two required parameters are the question to skip and the condition to evaluate.
Here we use add_skip_rule() to skip q2 if the response to “consume_local_news” is “Never”. Note that we can refer to the question to be skipped using either the question object (q2) or its question_name (“sources”):
from edsl import Survey
survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_skip_rule(q2, "consume_local_news == 'Never'")
This is equivalent:
from edsl import Survey
survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_skip_rule("sources", "consume_local_news == 'Never'")
We can run the survey and verify that the rule was applied:
results = survey.by(agent).by(model).run() # using the agent and model from the previous example
results.select("consume_local_news", "sources", "rate_coverage", "minutes_reading").print(format="rich")
This will print the answers, showing “None” for the skipped question (your own results may vary):
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ answer ┃ answer ┃ answer ┃ answer ┃
┃ .consume_local_news ┃ .sources ┃ .rate_coverage ┃ .minutes_reading ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Never │ None │ 4 │ 0 │
└─────────────────────┴──────────┴────────────────┴──────────────────┘
Show flow
We can call the show_flow() method to display a graphic of the flow of the survey, and verify how the skip rule was applied:
survey.show_flow()
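We can also call the show_rules() method to display a table of the conditional rules that are currently applied to the survey:
survey.show_rules()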
Stop rules
The add_stop_rule() method stops the survey if a condition is met. The two required parameters are the question to stop at and the condition to evaluate.
Here we use add_stop_rule() to end the survey at q1 if the response is “Never” (note that we recreate the survey to demonstrate the stop rule alone):
survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_stop_rule(q1, "consume_local_news == 'Never'")
This time we see that the survey ended when the response to “consume_local_news” was “Never”:
results = survey.by(agent).run()
results.select("consume_local_news", "sources", "rate_coverage", "minutes_reading").print(format="rich")
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ answer ┃ answer ┃ answer ┃ answer ┃
┃ .consume_local_news ┃ .sources ┃ .rate_coverage ┃ .minutes_reading ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Never │ None │ None │ None │
└─────────────────────┴──────────┴────────────────┴──────────────────┘
Other rules
The generalizable add_rule() method is used to specify the next question to administer based on a condition. The three required parameters are the question to evaluate, the condition to evaluate, and the question to administer next.
Here we use add_rule() to specify that if the response to “consume_local_news” is “Never” then q4 should be administered next:
survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_rule(q1, "consume_local_news == 'Never'", q4)
We can run the survey and verify that the rule was applied:
results = survey.by(agent).run()
results.select("consume_local_news", "sources", "rate_coverage", "minutes_reading").print(format="rich")
We can see that both q2 and q3 were skipped but q4 was administered (and the response makes sense for the agent):
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ answer ┃ answer ┃ answer ┃ answer ┃
┃ .consume_local_news ┃ .sources ┃ .rate_coverage ┃ .minutes_reading ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Never │ None │ None │ 0 │
└─────────────────────┴──────────┴────────────────┴──────────────────┘
Conditional expressions
The rule expressions themselves (“consume_local_news == ‘Never’”) are written in Python. An expression is evaluated to True or False, with the answer substituted into the expression. The placeholder for this answer is the name of the question itself. In the examples, the answer to q1 is substituted into the expression “consume_local_news == ‘Never’”, as the name of q1 is “consume_local_news”.
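Because the expressions are ordinary Python, they can also combine multiple conditions with standard operators. For example, a hypothetical variation on the skip rule above:
survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_skip_rule(
    "sources",
    "consume_local_news == 'Monthly' or consume_local_news == 'Never'"
)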
Piping
Piping is a method of explicitly referencing components of a question in a later question. For example, here we use the answer to q0 in the prompt for q1:
from edsl import QuestionFreeText, QuestionList, Survey, Agent
q0 = QuestionFreeText(
    question_name = "color",
    question_text = "What is your favorite color?",
)
q1 = QuestionList(
    question_name = "examples",
    question_text = "Name some things that are {{ color.answer }}.",
)
survey = Survey([q0, q1])
agent = Agent(traits = {"persona": "You are a botanist."})
results = survey.by(agent).run()
results.select("color", "examples").print(format="rich")
In this example, q0 will be administered before q1 and the response to q0 is piped into q1. Output:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ answer ┃ answer ┃
┃ .color ┃ .examples ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ As a botanist, I find myself drawn to the vibrant greens │ ['Leaves', 'Grass', 'Ferns', 'Moss', 'Green algae'] │
│ of nature. Green is a color that symbolizes growth, life, │ │
│ and the beauty of plants, which are central to my work │ │
│ and passion. │ │
└───────────────────────────────────────────────────────────┴─────────────────────────────────────────────────────┘
If an answer is a list, we can use the list as the question_options in another question, or index items individually. Here we demonstrate examples of both:
from edsl import QuestionList, QuestionFreeText, QuestionMultipleChoice, Survey, Agent
q_colors = QuestionList(
    question_name = "colors",
    question_text = "What are your 3 favorite colors?",
    max_list_items = 3
)
q_examples = QuestionFreeText(
    question_name = "examples",
    question_text = "Name some things that are {{ colors.answer }}",
)
q_favorite = QuestionMultipleChoice(
    question_name = "favorite",
    question_text = "Which is your #1 favorite color?",
    question_options = [
        "{{ colors.answer[0] }}",
        "{{ colors.answer[1] }}",
        "{{ colors.answer[2] }}",
    ]
)
survey = Survey([q_colors, q_examples, q_favorite])
agent = Agent(traits = {"persona": "You are a botanist."})
results = survey.by(agent).run()
results.select("colors", "examples", "favorite").print(format="rich")
Output:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ answer ┃ answer ┃ answer ┃
┃ .colors ┃ .examples ┃ .favorite ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ ['Green', 'Brown', 'Yellow'] │ Certainly! Here are some things that can be green, brown, or yellow: │ Green │
│ │ │ │
│ │ **Green:** │ │
│ │ 1. Leaves - Many plants have green leaves due to chlorophyll, which │ │
│ │ is essential for photosynthesis. │ │
│ │ 2. Grass - Typically green, especially when healthy and │ │
│ │ well-watered. │ │
│ │ 3. Green Apples - Varieties like Granny Smith are known for their │ │
│ │ green color. │ │
│ │ │ │
│ │ **Brown:** │ │
│ │ 1. Tree Bark - The outer layer of trees is often brown, providing │ │
│ │ protection. │ │
│ │ 2. Soil - Many types of soil appear brown, indicating organic │ │
│ │ matter. │ │
│ │ 3. Acorns - These seeds from oak trees are generally brown when │ │
│ │ mature. │ │
│ │ │ │
│ │ **Yellow:** │ │
│ │ 1. Sunflowers - Known for their bright yellow petals. │ │
│ │ 2. Bananas - Yellow when ripe and ready to eat. │ │
│ │ 3. Daffodils - These flowers are often a vibrant yellow, heralding │ │
│ │ spring. │ │
└──────────────────────────────┴──────────────────────────────────────────────────────────────────────┴───────────┘
This can also be done with agent traits. For example:
from edsl import Agent, QuestionFreeText
a = Agent(traits = {'first_name': 'John'})
q = QuestionFreeText(
    question_text = 'What is your last name, {{ agent.first_name }}?',
    question_name = "last_name"
)
job = q.by(a)
job.prompts().select('user_prompt').print(format="rich")
This code will output the text of the prompt for the question:
What is your last name, John?
We can also show both system and user prompts together with information about the question, agent and model by calling the show_prompts() method:
job.show_prompts()
Output:
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ user_prompt ┃ system_prom… ┃ interview_i… ┃ question_na… ┃ scenario_ind… ┃ agent_index ┃ model ┃ estimated_c… ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━┩
│ What is your │ You are │ 0 │ last_name │ 0 │ 0 │ gpt-4o │ 0.0005375 │
│ last name, │ answering │ │ │ │ │ │ │
│ John? │ questions as │ │ │ │ │ │ │
│ │ if you were │ │ │ │ │ │ │
│ │ a human. Do │ │ │ │ │ │ │
│ │ not break │ │ │ │ │ │ │
│ │ character. │ │ │ │ │ │ │
│ │ You are an │ │ │ │ │ │ │
│ │ agent with │ │ │ │ │ │ │
│ │ the │ │ │ │ │ │ │
│ │ following │ │ │ │ │ │ │
│ │ persona: │ │ │ │ │ │ │
│ │ {'first_nam… │ │ │ │ │ │ │
│ │ 'John'} │ │ │ │ │ │ │
└──────────────┴──────────────┴──────────────┴──────────────┴───────────────┴─────────────┴────────┴──────────────┘
Question memory
When an agent is taking a survey, they can be prompted to “remember” answers to previous questions. This can be done in several ways:
Full memory
The method set_full_memory_mode() gives the agent all of the prior questions and answers at each new question in the survey, i.e., the first question and answer are included in the memory when answering the second question, both the first and second questions and answers are included in the memory when answering the third question, and so on. The method is called on the survey object:
survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.set_full_memory_mode()
In the results, we can inspect the _user_prompt for each question to see that the agent was prompted to remember all of the prior questions:
results = survey.by(agent).run()
(
    results
    .select("consume_local_news_user_prompt", "sources_user_prompt", "rate_coverage_user_prompt", "minutes_reading_user_prompt")
    .print(format="rich")
)
This will print the prompt that was used for each question, and we can see that each successive prompt references all prior questions and answers that were given:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ prompt ┃ prompt ┃ prompt ┃ prompt ┃
┃ .consume_local_news_user_… ┃ .sources_user_prompt ┃ .rate_coverage_user_prompt ┃ .minutes_reading_user_pr… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ │ What are your most common │ On a scale of 1 to 10, how │ On average, how many │
│ How often do you consume │ sources of local news? │ would you rate the quality │ minutes do you spend │
│ local news? │ (Select all that apply) │ of local news coverage in │ consuming local news each │
│ │ │ your area? │ day? │
│ │ │ │ │
│ Daily │ 0: Television │ 1 : Very poor │ Minimum answer value: │
│ │ │ │ 0 │
│ Weekly │ 1: Newspaper │ 2 : │ │
│ │ │ │ │
│ Monthly │ 2: Online news websites │ 3 : │ Maximum answer value: │
│ │ │ │ 1440 │
│ Never │ 3: Social Media │ 4 : │ This question requires a │
│ │ │ │ numerical response in the │
│ │ 4: Radio │ 5 : │ form of an integer or │
│ Only 1 option may be │ │ │ decimal (e.g., -12, 0, 1, │
│ selected. │ 5: Other │ 6 : │ 2, 3.45, ...). │
│ │ │ │ Respond with just your │
│ Respond only with a string │ │ 7 : │ number on a single line. │
│ corresponding to one of │ │ │ If your response is │
│ the options. │ │ 8 : │ equivalent to zero, │
│ │ │ │ report '0' │
│ │ │ 9 : │ │
│ After the answer, you can │ │ │ │
│ put a comment explaining │ Please respond only with │ 10 : Excellent │ After the answer, put a │
│ why you chose that option │ a comma-separated list of │ │ comment explaining your │
│ on the next line. │ the code of the options │ Only 1 option may be │ choice on the next line. │
│ │ that apply, with square │ selected. │ Before the │
│ │ brackets. E.g., [0, 1, 3] │ │ question you are now │
│ │ │ Respond only with the code │ answering, you already │
│ │ │ corresponding to one of │ answered the following │
│ │ After the answer, you can │ the options. E.g., "1" or │ question(s): │
│ │ put a comment explaining │ "5" by itself. │ Question: │
│ │ your choice on the next │ │ How often do you consume │
│ │ line. │ After the answer, you can │ local news? │
│ │ Before the │ put a comment explaining │ Answer: Weekly │
│ │ question you are now │ why you chose that option │ │
│ │ answering, you already │ on the next line. │ Prior questions and │
│ │ answered the following │ Before the │ answers: Question: What │
│ │ question(s): │ question you are now │ are your most common │
│ │ Question: │ answering, you already │ sources of local news? │
│ │ How often do you consume │ answered the following │ (Select all that apply) │
│ │ local news? │ question(s): │ Answer: ['Online │
│ │ Answer: Weekly │ Question: │ news websites', 'Social │
│ │ │ How often do you consume │ Media'] │
│ │ │ local news? │ │
│ │ │ Answer: Weekly │ Prior questions and │
│ │ │ │ answers: Question: On a │
│ │ │ Prior questions and │ scale of 1 to 10, how │
│ │ │ answers: Question: What │ would you rate the │
│ │ │ are your most common │ quality of local news │
│ │ │ sources of local news? │ coverage in your area? │
│ │ │ (Select all that apply) │ Answer: 6 │
│ │ │ Answer: ['Online │ │
│ │ │ news websites', 'Social │ │
│ │ │ Media'] │ │
└────────────────────────────┴───────────────────────────┴────────────────────────────┴───────────────────────────┘
Note that this is slow and token-intensive, as the questions must be answered serially and the agent must be given all of the prior answers at each new question. In contrast, if the agent does not need to remember prior answers, execution can proceed in parallel.
Lagged memory
The method set_lagged_memory() gives the agent a specified number of prior questions and answers at each new question in the survey; we pass it the number of prior questions and answers to remember. Here we use it to give the agent just 1 prior question/answer at each question:
survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.set_lagged_memory(1)
We can inspect each _user_prompt again and see that the agent is only prompted to remember the last prior question/answer:
results = survey.by(agent).run()
(
    results
    .select("consume_local_news_user_prompt", "sources_user_prompt", "rate_coverage_user_prompt", "minutes_reading_user_prompt")
    .print(format="rich")
)
This will print the prompts for each question:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ prompt ┃ prompt ┃ prompt ┃ prompt ┃
┃ .consume_local_news_user_… ┃ .sources_user_prompt ┃ .rate_coverage_user_prompt ┃ .minutes_reading_user_pr… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ │ What are your most common │ On a scale of 1 to 10, how │ On average, how many │
│ How often do you consume │ sources of local news? │ would you rate the quality │ minutes do you spend │
│ local news? │ (Select all that apply) │ of local news coverage in │ consuming local news each │
│ │ │ your area? │ day? │
│ │ │ │ │
│ Daily │ 0: Television │ 1 : Very poor │ Minimum answer value: │
│ │ │ │ 0 │
│ Weekly │ 1: Newspaper │ 2 : │ │
│ │ │ │ │
│ Monthly │ 2: Online news websites │ 3 : │ Maximum answer value: │
│ │ │ │ 1440 │
│ Never │ 3: Social Media │ 4 : │ This question requires a │
│ │ │ │ numerical response in the │
│ │ 4: Radio │ 5 : │ form of an integer or │
│ Only 1 option may be │ │ │ decimal (e.g., -12, 0, 1, │
│ selected. │ 5: Other │ 6 : │ 2, 3.45, ...). │
│ │ │ │ Respond with just your │
│ Respond only with a string │ │ 7 : │ number on a single line. │
│ corresponding to one of │ │ │ If your response is │
│ the options. │ │ 8 : │ equivalent to zero, │
│ │ │ │ report '0' │
│ │ │ 9 : │ │
│ After the answer, you can │ │ │ │
│ put a comment explaining │ Please respond only with │ 10 : Excellent │ After the answer, put a │
│ why you chose that option │ a comma-separated list of │ │ comment explaining your │
│ on the next line. │ the code of the options │ Only 1 option may be │ choice on the next line. │
│ │ that apply, with square │ selected. │ Before the │
│ │ brackets. E.g., [0, 1, 3] │ │ question you are now │
│ │ │ Respond only with the code │ answering, you already │
│ │ │ corresponding to one of │ answered the following │
│ │ After the answer, you can │ the options. E.g., "1" or │ question(s): │
│ │ put a comment explaining │ "5" by itself. │ Question: │
│ │ your choice on the next │ │ On a scale of 1 to 10, │
│ │ line. │ After the answer, you can │ how would you rate the │
│ │ Before the │ put a comment explaining │ quality of local news │
│ │ question you are now │ why you chose that option │ coverage in your area? │
│ │ answering, you already │ on the next line. │ Answer: 6 │
│ │ answered the following │ Before the │ │
│ │ question(s): │ question you are now │ │
│ │ Question: │ answering, you already │ │
│ │ How often do you consume │ answered the following │ │
│ │ local news? │ question(s): │ │
│ │ Answer: Weekly │ Question: │ │
│ │ │ What are your most common │ │
│ │ │ sources of local news? │ │
│ │ │ (Select all that apply) │ │
│ │ │ Answer: ['Online │ │
│ │ │ news websites', 'Social │ │
│ │ │ Media'] │ │
└────────────────────────────┴───────────────────────────┴────────────────────────────┴───────────────────────────┘
Targeted memory
The method add_targeted_memory() gives the agent a targeted prior question and answer when answering another specified question. We pass it the question to answer and the prior question/answer to remember when answering it. Here we use it to give the agent the question/answer to q1 when prompting it to answer q4:
survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_targeted_memory(q4, q1)
results = survey.by(agent).run()
(
    results
    .select("consume_local_news_user_prompt", "sources_user_prompt", "rate_coverage_user_prompt", "minutes_reading_user_prompt")
    .print(format="rich")
)
Output:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ prompt ┃ prompt ┃ prompt ┃ prompt ┃
┃ .consume_local_news_user_… ┃ .sources_user_prompt ┃ .rate_coverage_user_prompt ┃ .minutes_reading_user_pr… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ │ What are your most common │ On a scale of 1 to 10, how │ On average, how many │
│ How often do you consume │ sources of local news? │ would you rate the quality │ minutes do you spend │
│ local news? │ (Select all that apply) │ of local news coverage in │ consuming local news each │
│ │ │ your area? │ day? │
│ │ │ │ │
│ Daily │ 0: Television │ 1 : Very poor │ Minimum answer value: │
│ │ │ │ 0 │
│ Weekly │ 1: Newspaper │ 2 : │ │
│ │ │ │ │
│ Monthly │ 2: Online news websites │ 3 : │ Maximum answer value: │
│ │ │ │ 1440 │
│ Never │ 3: Social Media │ 4 : │ This question requires a │
│ │ │ │ numerical response in the │
│ │ 4: Radio │ 5 : │ form of an integer or │
│ Only 1 option may be │ │ │ decimal (e.g., -12, 0, 1, │
│ selected. │ 5: Other │ 6 : │ 2, 3.45, ...). │
│ │ │ │ Respond with just your │
│ Respond only with a string │ │ 7 : │ number on a single line. │
│ corresponding to one of │ │ │ If your response is │
│ the options. │ │ 8 : │ equivalent to zero, │
│ │ │ │ report '0' │
│ │ │ 9 : │ │
│ After the answer, you can │ │ │ │
│ put a comment explaining │ Please respond only with │ 10 : Excellent │ After the answer, put a │
│ why you chose that option │ a comma-separated list of │ │ comment explaining your │
│ on the next line. │ the code of the options │ Only 1 option may be │ choice on the next line. │
│ │ that apply, with square │ selected. │ Before the │
│ │ brackets. E.g., [0, 1, 3] │ │ question you are now │
│ │ │ Respond only with the code │ answering, you already │
│ │ │ corresponding to one of │ answered the following │
│ │ After the answer, you can │ the options. E.g., "1" or │ question(s): │
│ │ put a comment explaining │ "5" by itself. │ Question: │
│ │ your choice on the next │ │ How often do you consume │
│ │ line. │ After the answer, you can │ local news? │
│ │ │ put a comment explaining │ Answer: Weekly │
│ │ │ why you chose that option │ │
│ │ │ on the next line. │ │
└────────────────────────────┴───────────────────────────┴────────────────────────────┴───────────────────────────┘
Memory collection
The add_memory_collection() method is used to add sets of prior questions and answers to a given question. We pass it the question to be answered and the list of questions/answers to be remembered when answering it. For example, we can add the questions/answers for both q1 and q2 when prompting the agent to answer q4:
survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_memory_collection(q4, [q1, q2])
results = survey.by(agent).run()
(
    results
    .select("consume_local_news_user_prompt", "sources_user_prompt", "rate_coverage_user_prompt", "minutes_reading_user_prompt")
    .print(format="rich")
)
Output:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ prompt ┃ prompt ┃ prompt ┃ prompt ┃
┃ .consume_local_news_user_… ┃ .sources_user_prompt ┃ .rate_coverage_user_prompt ┃ .minutes_reading_user_pr… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ │ What are your most common │ On a scale of 1 to 10, how │ On average, how many │
│ How often do you consume │ sources of local news? │ would you rate the quality │ minutes do you spend │
│ local news? │ (Select all that apply) │ of local news coverage in │ consuming local news each │
│ │ │ your area? │ day? │
│ │ │ │ │
│ Daily │ 0: Television │ 1 : Very poor │ Minimum answer value: │
│ │ │ │ 0 │
│ Weekly │ 1: Newspaper │ 2 : │ │
│ │ │ │ │
│ Monthly │ 2: Online news websites │ 3 : │ Maximum answer value: │
│ │ │ │ 1440 │
│ Never │ 3: Social Media │ 4 : │ This question requires a │
│ │ │ │ numerical response in the │
│ │ 4: Radio │ 5 : │ form of an integer or │
│ Only 1 option may be │ │ │ decimal (e.g., -12, 0, 1, │
│ selected. │ 5: Other │ 6 : │ 2, 3.45, ...). │
│ │ │ │ Respond with just your │
│ Respond only with a string │ │ 7 : │ number on a single line. │
│ corresponding to one of │ │ │ If your response is │
│ the options. │ │ 8 : │ equivalent to zero, │
│ │ │ │ report '0' │
│ │ │ 9 : │ │
│ After the answer, you can │ │ │ │
│ put a comment explaining │ Please respond only with │ 10 : Excellent │ After the answer, put a │
│ why you chose that option │ a comma-separated list of │ │ comment explaining your │
│ on the next line. │ the code of the options │ Only 1 option may be │ choice on the next line. │
│ │ that apply, with square │ selected. │ Before the │
│ │ brackets. E.g., [0, 1, 3] │ │ question you are now │
│ │ │ Respond only with the code │ answering, you already │
│ │ │ corresponding to one of │ answered the following │
│ │ After the answer, you can │ the options. E.g., "1" or │ question(s): │
│ │ put a comment explaining │ "5" by itself. │ Question: │
│ │ your choice on the next │ │ How often do you consume │
│ │ line. │ After the answer, you can │ local news? │
│ │ │ put a comment explaining │ Answer: Weekly │
│ │ │ why you chose that option │ │
│ │ │ on the next line. │ Prior questions and │
│ │ │ │ answers: Question: What │
│ │ │ │ are your most common │
│ │ │ │ sources of local news? │
│ │ │ │ (Select all that apply) │
│ │ │ │ Answer: ['Online │
│ │ │ │ news websites', 'Social │
│ │ │ │ Media'] │
└────────────────────────────┴───────────────────────────┴────────────────────────────┴───────────────────────────┘
Costs
Before running a survey, you can estimate the cost of running the survey in USD and the number of credits needed to run it remotely at the Expected Parrot server. After running a survey, you can see details on the actual cost of each response in the results. The costs are calculated based on the estimated and actual number of tokens used in the survey and the model(s) used to generate the prompts.
Estimated costs
Before running a survey, you can estimate the cost in USD of running the survey by calling the estimate_job_cost() method on a Job object (a survey combined with one or more models). This method returns a dictionary with the estimated costs and tokens for each model used with the survey. You can also estimate credits needed to run a survey remotely at the Expected Parrot server by passing the job to the remote_inference_cost() method of a Coop client object.
Example:
from edsl import QuestionFreeText, Survey, Agent, Model
q0 = QuestionFreeText(
    question_name = "favorite_flower",
    question_text = "What is the name of your favorite flower?"
)
q1 = QuestionFreeText(
    question_name = "flower_color",
    question_text = "What color is {{ favorite_flower.answer }}?"
)
survey = Survey(questions = [q0, q1])
a = Agent(traits = {"persona":"You are a botanist on Cape Cod."})
m = Model("gpt-4o")
job = survey.by(a).by(m)
estimated_job_cost = job.estimate_job_cost()
estimated_job_cost
Output:
{'estimated_total_cost': 0.0009175000000000001,
'estimated_total_input_tokens': 91,
'estimated_total_output_tokens': 69,
'model_costs': [{'inference_service': 'openai',
'model': 'gpt-4o',
'estimated_cost': 0.0009175000000000001,
'estimated_input_tokens': 91,
'estimated_output_tokens': 69}]}
To get the estimated cost in credits to run the job remotely:
from edsl import Coop
coop = Coop()
estimated_remote_inference_cost = coop.remote_inference_cost(job) # using the job object from above
estimated_remote_inference_cost
Output:
{'credits': 0.1, 'usd': 0.00092}
Details of the calculations for these methods can be found in the credits section.
Actual costs
The actual costs of running a survey are stored in the survey results. Details about the cost of each response can be accessed in the raw_model_response fields of the results dataset. For each question that was run, the following columns will appear in results:
raw_model_response.<question_name>_cost: The cost in USD for the API call to a language model service provider.
raw_model_response.<question_name>_one_usd_buys: The number of tokens that can be purchased with 1 USD (for reference).
raw_model_response.<question_name>_raw_model_response: A dictionary containing the raw response for the question, which includes the input text and tokens, output text and tokens, and other information about the API call. This dictionary is specific to the language model service provider and may contain additional information about the response.
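For example, these columns can be selected from the results like other fields. A minimal sketch using the minutes_reading question from the survey above:
results.select(
    "raw_model_response.minutes_reading_cost",
    "raw_model_response.minutes_reading_one_usd_buys"
).print(format="rich")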
The cost in credits of a response is calculated as follows (see the worked sketch after this list):
The number of input tokens is multiplied by the input token rate set by the language model service provider.
The number of output tokens is multiplied by the output token rate set by the language model service provider.
The total cost in USD is converted to credits (1 USD = 100 credits).
The total cost in credits is rounded up to the nearest 1/100th of a credit.
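As an illustration, here is a minimal sketch of that arithmetic in Python. The token counts and per-token rates below are hypothetical placeholders, not actual provider prices:
import math

input_tokens, output_tokens = 150, 90                           # hypothetical token counts
input_rate, output_rate = 2.50 / 1_000_000, 10.00 / 1_000_000   # hypothetical USD per token

usd_cost = input_tokens * input_rate + output_tokens * output_rate
credits = math.ceil(usd_cost * 100 * 100) / 100                 # 1 USD = 100 credits, rounded up to 1/100th

print(usd_cost, credits)  # approximately 0.001275 USD -> 0.13 credits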
To learn more about these methods and calculations, please see the credits section.
Survey class
A Survey is a collection of questions that can be administered to an Agent.
- class edsl.surveys.Survey.Survey(questions: list[QuestionBase | Instruction | ChangeInstruction] | None = None, memory_plan: MemoryPlan | None = None, rule_collection: RuleCollection | None = None, question_groups: dict[str, tuple[int, int]] | None = None, name: str | None = None)[source]
Bases: SurveyExportMixin, SurveyFlowVisualizationMixin, Base
A collection of questions that supports skip logic.
- __init__(questions: list[QuestionBase | Instruction | ChangeInstruction] | None = None, memory_plan: MemoryPlan | None = None, rule_collection: RuleCollection | None = None, question_groups: dict[str, tuple[int, int]] | None = None, name: str | None = None)[source]
Create a new survey.
- Parameters:
questions – The questions in the survey.
memory_plan – The memory plan for the survey.
rule_collection – The rule collection for the survey.
question_groups – The groups of questions in the survey.
name – The name of the survey - DEPRECATED.
>>> from edsl import QuestionFreeText
>>> q1 = QuestionFreeText(question_text = "What is your name?", question_name = "name")
>>> q2 = QuestionFreeText(question_text = "What is your favorite color?", question_name = "color")
>>> q3 = QuestionFreeText(question_text = "Is a hot dog a sandwich", question_name = "food")
>>> s = Survey([q1, q2, q3], question_groups = {"demographics": (0, 1), "substantive":(3)})
- add_instruction(instruction: Instruction | ChangeInstruction) Survey [source]
Add an instruction to the survey.
- Parameters:
instruction – The instruction to add to the survey.
>>> from edsl import Instruction
>>> i = Instruction(text="Pay attention to the following questions.", name="intro")
>>> s = Survey().add_instruction(i)
>>> s.instruction_names_to_instructions
{'intro': Instruction(name="intro", text="Pay attention to the following questions.")}
>>> s.pseudo_indices
{'intro': -0.5}
- add_memory_collection(focal_question: QuestionBase | str, prior_questions: List[QuestionBase | str]) Survey [source]
Add prior questions and responses so the agent has them when answering.
This adds instructions to the survey so that, when answering focal_question, the agent also remembers the answers to the questions listed in prior_questions.
- Parameters:
focal_question – The question that the agent is answering.
prior_questions – The questions that the agent should remember when answering the focal question.
Here we have it so that when answering q2, the agent should remember answers to q0 and q1:
>>> s = Survey.example().add_memory_collection("q2", ["q0", "q1"])
>>> s.memory_plan
{'q2': Memory(prior_questions=['q0', 'q1'])}
- add_question(question: QuestionBase, index: int | None = None) Survey [source]
Add a question to survey.
- Parameters:
question – The question to add to the survey.
index – The index at which to insert the question. If not provided, the question is appended at the end of the survey.
The question is appended at the end of the self.questions list. A default rule is created that advances to the next question in order.
>>> from edsl import QuestionMultipleChoice
>>> q = QuestionMultipleChoice(question_text = "Do you like school?", question_options=["yes", "no"], question_name="q0")
>>> s = Survey().add_question(q)
>>> s = Survey().add_question(q).add_question(q)
Traceback (most recent call last):
...
edsl.exceptions.surveys.SurveyCreationError: Question name 'q0' already exists in survey. Existing names are ['q0'].
...
- add_question_group(start_question: QuestionBase | str, end_question: QuestionBase | str, group_name: str) Survey [source]
Add a group of questions to the survey.
- Parameters:
start_question – The first question in the group.
end_question – The last question in the group.
group_name – The name of the group.
Example:
>>> s = Survey.example().add_question_group("q0", "q1", "group1")
>>> s.question_groups
{'group1': (0, 1)}
The name of the group must be a valid identifier:
>>> s = Survey.example().add_question_group("q0", "q2", "1group1")
Traceback (most recent call last):
...
edsl.exceptions.surveys.SurveyCreationError: Group name 1group1 is not a valid identifier.
...
>>> s = Survey.example().add_question_group("q0", "q1", "q0")
Traceback (most recent call last):
...
edsl.exceptions.surveys.SurveyCreationError: ...
...
>>> s = Survey.example().add_question_group("q1", "q0", "group1")
Traceback (most recent call last):
...
edsl.exceptions.surveys.SurveyCreationError: ...
...
- add_rule(question: QuestionBase | str, expression: str, next_question: QuestionBase | int, before_rule: bool = False) Survey [source]
Add a rule to a Question of the Survey.
- Parameters:
question – The question to add the rule to.
expression – The expression to evaluate.
next_question – The next question to go to if the rule is true.
before_rule – Whether the rule is evaluated before the question is answered.
This adds a rule that if the answer to q0 is ‘yes’, the next question is q2 (as opposed to q1)
>>> s = Survey.example().add_rule("q0", "{{ q0 }} == 'yes'", "q2")
>>> s.next_question("q0", {"q0": "yes"}).question_name
'q2'
- add_skip_rule(question: QuestionBase | str, expression: str) Survey [source]
Adds a per-question skip rule to the survey.
- Parameters:
question – The question to add the skip rule to.
expression – The expression to evaluate.
This adds a rule that skips ‘q0’ always, before the question is answered:
>>> from edsl import QuestionFreeText
>>> q0 = QuestionFreeText.example()
>>> q0.question_name = "q0"
>>> q1 = QuestionFreeText.example()
>>> q1.question_name = "q1"
>>> s = Survey([q0, q1]).add_skip_rule("q0", "True")
>>> s.next_question("q0", {}).question_name
'q1'
Note that this is different from a rule that jumps to some other question after the question is answered.
- add_stop_rule(question: QuestionBase | str, expression: str) Survey [source]
Add a rule that stops the survey. The rule is evaluated after the question is answered. If the rule is true, the survey ends.
- Parameters:
question – The question to add the stop rule to.
expression – The expression to evaluate.
If this rule is true, the survey ends.
Here, answering “yes” to q0 ends the survey:
>>> s = Survey.example().add_stop_rule("q0", "q0 == 'yes'")
>>> s.next_question("q0", {"q0": "yes"})
EndOfSurvey
By comparison, answering “no” to q0 does not end the survey:
>>> s.next_question("q0", {"q0": "no"}).question_name
'q1'
>>> s.add_stop_rule("q0", "q1 <> 'yes'")
Traceback (most recent call last):
...
edsl.exceptions.surveys.SurveyCreationError: The expression contains '<>', which is not allowed. You probably mean '!='.
...
- add_targeted_memory(focal_question: QuestionBase | str, prior_question: QuestionBase | str) Survey [source]
Add instructions to a survey so that, when answering focal_question, the agent remembers the answer to prior_question.
- Parameters:
focal_question – The question that the agent is answering.
prior_question – The question that the agent should remember when answering the focal question.
Here we add instructions to a survey so that when answering q2 the agent should remember q0:
>>> s = Survey.example().add_targeted_memory("q2", "q0")
>>> s.memory_plan
{'q2': Memory(prior_questions=['q0'])}
The agent will be reminded of the answer to prior_question when answering focal_question.
- by(*args: 'Agent' | 'Scenario' | 'LanguageModel') Jobs [source]
Add Agents, Scenarios, and LanguageModels to a survey and return a runnable Jobs object.
- Parameters:
args – The Agents, Scenarios, and LanguageModels to add to the survey.
This takes the survey and adds an Agent and a Scenario via ‘by’ which converts to a Jobs object:
>>> s = Survey.example(); from edsl import Agent; from edsl import Scenario
>>> s.by(Agent.example()).by(Scenario.example())
Jobs(...)
- clear_non_default_rules() Survey [source]
Remove all non-default rules from the survey.
>>> Survey.example().show_rules()
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ current_q ┃ expression  ┃ next_q ┃ priority ┃ before_rule ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ 0         │ True        │ 1      │ -1       │ False       │
│ 0         │ q0 == 'yes' │ 2      │ 0        │ False       │
│ 1         │ True        │ 2      │ -1       │ False       │
│ 2         │ True        │ 3      │ -1       │ False       │
└───────────┴─────────────┴────────┴──────────┴─────────────┘
>>> Survey.example().clear_non_default_rules().show_rules()
┏━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ current_q ┃ expression ┃ next_q ┃ priority ┃ before_rule ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ 0         │ True       │ 1      │ -1       │ False       │
│ 1         │ True       │ 2      │ -1       │ False       │
│ 2         │ True       │ 3      │ -1       │ False       │
└───────────┴────────────┴────────┴──────────┴─────────────┘
- codebook() dict[str, str] [source]
Create a codebook for the survey, mapping question names to question text.
>>> s = Survey.example()
>>> s.codebook()
{'q0': 'Do you like school?', 'q1': 'Why not?', 'q2': 'Why?'}
- dag(textify: bool = False) DAG [source]
Return the DAG of the survey, which reflects both skip-logic and memory.
- Parameters:
textify – Whether to return the DAG with question names instead of indices.
>>> s = Survey.example()
>>> d = s.dag()
>>> d
{1: {0}, 2: {0}}
- delete_question(identifier: str | int) Survey [source]
Delete a question from the survey.
- Parameters:
identifier – The name or index of the question to delete.
- Returns:
The updated Survey object.
>>> from edsl import QuestionMultipleChoice, Survey
>>> q1 = QuestionMultipleChoice(question_text="Q1", question_options=["A", "B"], question_name="q1")
>>> q2 = QuestionMultipleChoice(question_text="Q2", question_options=["C", "D"], question_name="q2")
>>> s = Survey().add_question(q1).add_question(q2)
>>> _ = s.delete_question("q1")
>>> len(s.questions)
1
>>> _ = s.delete_question(0)
>>> len(s.questions)
0
- classmethod example(params: bool = False, randomize: bool = False, include_instructions=False, custom_instructions: str | None = None) Survey [source]
Return an example survey.
>>> s = Survey.example()
>>> [q.question_text for q in s.questions]
['Do you like school?', 'Why not?', 'Why?']
- classmethod from_dict(data: dict) Survey [source]
Deserialize the dictionary back to a Survey object.
- Parameters:
data – The dictionary to deserialize.
>>> d = Survey.example().to_dict()
>>> s = Survey.from_dict(d)
>>> s == Survey.example()
True
>>> s = Survey.example(include_instructions = True)
>>> d = s.to_dict()
>>> news = Survey.from_dict(d)
>>> news == s
True
- gen_path_through_survey() Generator[QuestionBase, dict, None] [source]
Generate a coroutine that can be used to conduct an Interview.
The coroutine is a generator that yields a question and receives answers. It starts with the first question in the survey. The coroutine ends when an EndOfSurvey object is returned.
For the example survey, this is the rule table:
>>> s = Survey.example()
>>> s.show_rules()
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ current_q ┃ expression  ┃ next_q ┃ priority ┃ before_rule ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ 0         │ True        │ 1      │ -1       │ False       │
│ 0         │ q0 == 'yes' │ 2      │ 0        │ False       │
│ 1         │ True        │ 2      │ -1       │ False       │
│ 2         │ True        │ 3      │ -1       │ False       │
└───────────┴─────────────┴────────┴──────────┴─────────────┘
Note that q0 has a rule that if the answer is ‘yes’, the next question is q2. If the answer is ‘no’, the next question is q1.
Here is the path through the survey if the answer to q0 is ‘yes’:
>>> i = s.gen_path_through_survey()
>>> next(i)
Question('multiple_choice', question_name = """q0""", question_text = """Do you like school?""", question_options = ['yes', 'no'])
>>> i.send({"q0": "yes"})
Question('multiple_choice', question_name = """q2""", question_text = """Why?""", question_options = ['**lack*** of killer bees in cafeteria', 'other'])
And here is the path through the survey if the answer to q0 is ‘no’:
>>> i2 = s.gen_path_through_survey()
>>> next(i2)
Question('multiple_choice', question_name = """q0""", question_text = """Do you like school?""", question_options = ['yes', 'no'])
>>> i2.send({"q0": "no"})
Question('multiple_choice', question_name = """q1""", question_text = """Why not?""", question_options = ['killer bees in cafeteria', 'other'])
- get_question(question_name: str) QuestionBase [source]
Return the question object given the question name.
- Parameters:
question_name – The name of the question to get.
>>> s = Survey.example()
>>> s.get_question("q0")
Question('multiple_choice', question_name = """q0""", question_text = """Do you like school?""", question_options = ['yes', 'no'])
- property last_item_was_instruction: bool[source]
Return whether the last item added to the survey was an instruction. This is used to determine the pseudo-index of the next item added to the survey.
Example:
>>> s = Survey.example()
>>> s.last_item_was_instruction
False
>>> from edsl.surveys.instructions.Instruction import Instruction
>>> s = s.add_instruction(Instruction(text="Pay attention to the following questions.", name="intro"))
>>> s.last_item_was_instruction
True
- property max_pseudo_index: float[source]
Return the maximum pseudo index in the survey.
>>> Survey.example().max_pseudo_index
2
- move_question(identifier: str | int, new_index: int) Survey [source]
>>> from edsl import QuestionMultipleChoice, Survey
>>> s = Survey.example()
>>> s.question_names
['q0', 'q1', 'q2']
>>> s.move_question("q0", 2).question_names
['q1', 'q2', 'q0']
- next_question(current_question: str | QuestionBase, answers: dict) QuestionBase | EndOfSurveyParent [source]
Return the next question in a survey.
- Parameters:
current_question – The current question in the survey.
answers – The answers for the survey so far
If called with no arguments, it returns the first question in the survey.
If no answers are provided for a question with a rule, the next question is returned. If answers are provided, the next question is determined by the rules and the answers.
If the next question is the last question in the survey, an EndOfSurvey object is returned.
>>> s = Survey.example()
>>> s.next_question("q0", {"q0": "yes"}).question_name
'q2'
>>> s.next_question("q0", {"q0": "no"}).question_name
'q1'
- property parameters[source]
Return a set of parameters in the survey.
>>> s = Survey.example()
>>> s.parameters
set()
- property parameters_by_question[source]
Return a dictionary of parameters by question in the survey.
>>> from edsl import QuestionFreeText
>>> q = QuestionFreeText(question_name = "example", question_text = "What is the capital of {{ country}}?")
>>> s = Survey([q])
>>> s.parameters_by_question
{'example': {'country'}}
- property question_name_to_index: dict[str, int][source]
Return a dictionary mapping question names to question indices.
Example:
>>> s = Survey.example()
>>> s.question_name_to_index
{'q0': 0, 'q1': 1, 'q2': 2}
- property question_names: list[str][source]
Return a list of question names in the survey.
Example:
>>> s = Survey.example()
>>> s.question_names
['q0', 'q1', 'q2']
- question_names_to_questions() dict [source]
Return a dictionary mapping question names to question attributes.
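A minimal usage sketch (assuming the example survey; not part of the library docstring):
>>> s = Survey.example()
>>> sorted(s.question_names_to_questions().keys())
['q0', 'q1', 'q2']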
- questions[source]
A collection of questions that supports skip logic.
Initialization:
- questions: the questions in the survey (optional)
- question_names: the names of the questions (optional)
- name: the name of the survey (optional)
Notes:
- The presumed order of the survey is the order in which questions are added.
- relevant_instructions(question) dict [source]
This should be a dictionary with keys as question names and values as instructions that are relevant to the question.
- Parameters:
question – The question to get the relevant instructions for.
- property relevant_instructions_dict: InstructionCollection[source]
Return a dictionary with keys as question names and values as instructions that are relevant to the question.
>>> s = Survey.example(include_instructions=True)
>>> s.relevant_instructions_dict
{'q0': [Instruction(name="attention", text="Please pay attention!")], 'q1': [Instruction(name="attention", text="Please pay attention!")], 'q2': [Instruction(name="attention", text="Please pay attention!")]}
- run(*args, **kwargs) Results [source]
Turn the survey into a Job and run it.
>>> from edsl import QuestionFreeText
>>> s = Survey([QuestionFreeText.example()])
>>> from edsl.language_models import LanguageModel
>>> m = LanguageModel.example(test_model = True, canned_response = "Great!")
>>> results = s.by(m).run(cache = False, disable_remote_cache = True, disable_remote_inference = True)
>>> results.select('answer.*')
Dataset([{'answer.how_are_you': ['Great!']}])
- async run_async(model: 'Model' | None = None, agent: 'Agent' | None = None, cache: 'Cache' | None = None, disable_remote_inference: bool = False, **kwargs)[source]
Run the survey asynchronously, using the default model if none is specified and passing any scenario parameters as keyword arguments.
>>> import asyncio
>>> from edsl.questions import QuestionFunctional
>>> def f(scenario, agent_traits): return "yes" if scenario["period"] == "morning" else "no"
>>> q = QuestionFunctional(question_name = "q0", func = f)
>>> s = Survey([q])
>>> async def test_run_async(): result = await s.run_async(period="morning", disable_remote_inference = True); print(result.select("answer.q0").first())
>>> asyncio.run(test_run_async())
yes
>>> import asyncio
>>> from edsl.questions import QuestionFunctional
>>> def f(scenario, agent_traits): return "yes" if scenario["period"] == "morning" else "no"
>>> q = QuestionFunctional(question_name = "q0", func = f)
>>> s = Survey([q])
>>> async def test_run_async(): result = await s.run_async(period="evening", disable_remote_inference = True); print(result.select("answer.q0").first())
>>> asyncio.run(test_run_async())
no
- property scenario_attributes: list[str][source]
Return a list of attributes that admissible Scenarios should have.
Here we have a survey with a question that uses a jinja2 style {{ }} template:
>>> from edsl import QuestionFreeText
>>> s = Survey().add_question(QuestionFreeText(question_text="{{ greeting }}. What is your name?", question_name="name"))
>>> s.scenario_attributes
['greeting']
>>> s = Survey().add_question(QuestionFreeText(question_text="{{ greeting }}. What is your {{ attribute }}?", question_name="name"))
>>> s.scenario_attributes
['greeting', 'attribute']
- set_full_memory_mode() Survey [source]
Add instructions to a survey that the agent should remember all of the answers to the questions in the survey.
>>> s = Survey.example().set_full_memory_mode()
- set_lagged_memory(lags: int) Survey [source]
Add instructions to a survey that the agent should remember the answers to the questions in the survey.
The agent should remember the answers to the questions in the survey from the previous lags.
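A minimal usage sketch (assuming the example survey; not part of the library docstring), giving the agent the two most recent question/answer pairs at each new question:
>>> s = Survey.example().set_lagged_memory(2)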
- show_rules() None [source]
Print out the rules in the survey.
>>> s = Survey.example()
>>> s.show_rules()
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ current_q ┃ expression  ┃ next_q ┃ priority ┃ before_rule ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ 0         │ True        │ 1      │ -1       │ False       │
│ 0         │ q0 == 'yes' │ 2      │ 0        │ False       │
│ 1         │ True        │ 2      │ -1       │ False       │
│ 2         │ True        │ 3      │ -1       │ False       │
└───────────┴─────────────┴────────┴──────────┴─────────────┘