Surveys

A Survey is a collection of questions that can be administered asynchronously to one or more agents and language models, or according to specified rules such as skip or stop logic.

The key steps to creating and conducting a survey are:

  • Creating Question objects of various types (multiple choice, checkbox, free text, numerical, linear scale, etc.)

  • Passing questions to a Survey to administer them together

  • Running the survey by sending it to a language Model

When running a survey you can also optionally:

  • Add traits for an AI Agent (or an AgentList of multiple agents) to answer the survey

  • Add data or content to questions using Scenario objects

  • Add conditional rules/logic, context and “memory” of responses to other questions

Running a survey automatically generates a Results object containing the responses and other components of the survey (questions, agents, scenarios, models, prompts, raw responses, etc.). See the Results module for more information on working with Results objects.

Key methods

A survey is administered by calling the run() method on the Survey object, after adding any agents, scenarios and models with the by() method, and any survey rules or memory with the appropriate methods. The methods for adding survey rules and memory include the following, which are each discussed in more detail below:

  • add_skip_rule() - Skip a question based on a conditional expression (e.g., the response to another question).

  • add_stop_rule() - End the survey based on a conditional expression.

  • add_rule() - Administer a specified question next based on a conditional expression.

  • set_full_memory_mode() - Include a memory of all prior questions/answers at each new question in the survey.

  • set_lagged_memory() - Include a memory of a specified number of prior questions/answers at each new question in the survey.

  • add_targeted_memory() - Include a memory of a specific question/answer at another question in the survey.

  • add_memory_collection() - Include memories of a set of prior questions/answers at any other question in the survey.

Piping

You can also pipe components of other questions into a question, for example, to reference the response to a previous question in a later question. (See examples below.)

Flow

A special method show_flow() will display the flow of the survey, showing the order of questions and any rules that have been applied. (See example below.)

Constructing a survey

Defining questions

Questions can be defined as various types, including multiple choice, checkbox, free text, linear scale and numerical. The formats are defined in the questions module. Here we define some questions that we use to create a Survey object and to demonstrate methods for applying survey rules and memory.

We start by creating some questions:

from edsl import QuestionMultipleChoice, QuestionCheckBox, QuestionLinearScale, QuestionNumerical

q1 = QuestionMultipleChoice(
   question_name = "consume_local_news",
   question_text = "How often do you consume local news?",
   question_options = ["Daily", "Weekly", "Monthly", "Never"]
)

q2 = QuestionCheckBox(
   question_name = "sources",
   question_text = "What are your most common sources of local news? (Select all that apply)",
   question_options = ["Television", "Newspaper", "Online news websites", "Social Media", "Radio", "Other"]
)

q3 = QuestionLinearScale(
   question_name = "rate_coverage",
   question_text = "On a scale of 1 to 10, how would you rate the quality of local news coverage in your area?",
   question_options = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
   option_labels = {1: "Very poor", 10: "Excellent"}
)

q4 = QuestionNumerical(
   question_name = "minutes_reading",
   question_text = "On average, how many minutes do you spend consuming local news each day?",
   min_value = 0, # optional
   max_value = 1440 # optional
)

Adding questions to a survey

Questions are passed to a Survey object as a list of Question objects:

from edsl import Survey

survey = Survey(questions = [q1, q2, q3, q4])

Alternatively, questions can be added to a survey one at a time:

survey = Survey().add_question(q1).add_question(q2).add_question(q3).add_question(q4)

Running a survey

Once constructed, a Survey can be run, generating a Results object:

results = survey.run()

If question scenarios, agents or language models have been specified, they are added to the survey with the by() method when running it. (If no model is specified, the survey is run with the default model, which can be inspected by running Model().)

For example, here we run the survey with a simple agent persona and specify that GPT-4o should be used:

from edsl import Agent, Model

agent = Agent(traits = {"persona": "You are a teenager who hates reading."})

model = Model("gpt-4o")

results = survey.by(agent).by(model).run()

Note that these survey components can be chained in any order, so long as each type of component is added in a single by() call (e.g., if adding multiple agents, call by(agents) once, where agents is a list of all the Agent objects).

Learn more about specifying question scenarios, agents and language models and working with results in their respective modules.

Survey rules & logic

Rules can be applied to a survey with the add_skip_rule(), add_stop_rule() and add_rule() methods, which take a logical expression and the relevant questions.

Skip rules

The add_skip_rule() method skips a question if a condition is met. The two required parameters are the question to skip and the condition to evaluate.

Here we use add_skip_rule() to skip q2 if the response to “consume_local_news” is “Never”. Note that we can refer to the question to be skipped using either the question object (q2) or its question_name (“sources”):

survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_skip_rule(q2, "consume_local_news == 'Never'")

This is equivalent:

survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_skip_rule("sources", "consume_local_news == 'Never'")

We can run the survey and verify that the rule was applied:

from edsl import Agent

agent = Agent(traits = {"persona": "You are a teenager who hates reading."})

results = survey.by(agent).run()
results.select("consume_local_news", "sources", "rate_coverage", "minutes_reading").print(format="rich")

This will print the answers, showing “None” for a skipped question:

┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ answer              ┃ answer   ┃ answer         ┃ answer           ┃
┃ .consume_local_news ┃ .sources ┃ .rate_coverage ┃ .minutes_reading ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Never               │ None     │ 5              │ 0                │
└─────────────────────┴──────────┴────────────────┴──────────────────┘

Show flow

We can call the show_flow() method to display a graphic of the flow of the survey, and verify how the skip rule was applied:

survey.show_flow()

Stop rules

The add_stop_rule() method stops the survey if a condition is met. The two required parameters are the question to stop at and the condition to evaluate.

Here we use add_stop_rule() to end the survey at q1 if the response is “Never”:

survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_stop_rule(q1, "consume_local_news == 'Never'")

This time we see that the survey ended when the response to “consume_local_news” was “Never”:

results = survey.by(agent).run()
results.select("consume_local_news", "sources", "rate_coverage", "minutes_reading").print(format="rich")
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ answer              ┃ answer   ┃ answer         ┃ answer           ┃
┃ .consume_local_news ┃ .sources ┃ .rate_coverage ┃ .minutes_reading ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Never               │ None     │ None           │ None             │
└─────────────────────┴──────────┴────────────────┴──────────────────┘

Other rules

The generalizable add_rule() method is used to specify the next question to administer based on a condition. The three required parameters are the question to evaluate, the condition to evaluate, and the question to administer next.

Here we use add_rule() to specify that if the response to “consume_local_news” is “Never” then q4 should be administered next:

survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_rule(q1, "consume_local_news == 'Never'", q4)

We can run the survey and verify that the rule was applied:

results = survey.by(agent).run()
results.select("consume_local_news", "sources", "rate_coverage", "minutes_reading").print(format="rich")

We can see that both q2 and q3 were skipped but q4 was administered:

┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ answer              ┃ answer   ┃ answer         ┃ answer           ┃
┃ .consume_local_news ┃ .sources ┃ .rate_coverage ┃ .minutes_reading ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Never               │ None     │ None           │ 0                │
└─────────────────────┴──────────┴────────────────┴──────────────────┘

Conditional expressions

The rule expressions themselves (“consume_local_news == ‘Never’”) are written in Python. An expression is evaluated to True or False, with the answer substituted into the expression. The placeholder for this answer is the name of the question itself. In the examples, the answer to q1 is substituted into the expression “consume_local_news == ‘Never’”, as the name of q1 is “consume_local_news”.
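The substitution-and-evaluation step can be sketched in plain Python. This is an illustrative stand-in, not EDSL's internal rule engine: the answer to each question is bound under its question_name, and the expression is evaluated as a Python boolean.

```python
# Illustrative sketch of rule-expression evaluation (not EDSL internals).
# Each question's answer is made available under its question_name, then the
# expression is evaluated as a Python boolean.
def rule_applies(expression: str, answers: dict) -> bool:
    return bool(eval(expression, {"__builtins__": {}}, dict(answers)))

answers = {"consume_local_news": "Never"}
print(rule_applies("consume_local_news == 'Never'", answers))  # True
print(rule_applies("consume_local_news == 'Daily'", answers))  # False
```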

Piping

Piping is a method of explicitly referencing components of a question in a later question. For example, here we use the answer to q0 in the prompt for q1:

from edsl import QuestionFreeText, QuestionList, Survey, Agent

q0 = QuestionFreeText(
   question_name = "color",
   question_text = "What is your favorite color?",
)

q1 = QuestionList(
   question_name = "examples",
   question_text = "Name some things that are {{ color.answer }}.",
)

survey = Survey([q0, q1])

agent = Agent(traits = {"persona": "You are a botanist."})

results = survey.by(agent).run()

results.select("color", "examples").print(format="rich")

In this example, q0 will be administered before q1 and the response to q0 is piped into q1. Output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ answer                                                    ┃ answer                                              ┃
┃ .color                                                    ┃ .examples                                           ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ As a botanist, I find myself drawn to the vibrant greens  │ ['Leaves', 'Grass', 'Ferns', 'Moss', 'Green algae'] │
│ of nature. Green is a color that symbolizes growth, life, │                                                     │
│ and the beauty of plants, which are central to my work    │                                                     │
│ and passion.                                              │                                                     │
└───────────────────────────────────────────────────────────┴─────────────────────────────────────────────────────┘
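The {{ color.answer }} placeholder is Jinja-style templating; EDSL renders it with the prior answer before the later question is administered. As a rough stdlib stand-in (for intuition only, not EDSL's actual template rendering):

```python
import re

# Rough stand-in for the piping substitution (EDSL uses Jinja-style
# templates internally; this regex version is only to build intuition).
def pipe(question_text: str, answers: dict) -> str:
    return re.sub(
        r"\{\{\s*(\w+)\.answer\s*\}\}",
        lambda m: str(answers[m.group(1)]),
        question_text,
    )

answers = {"color": "green"}
print(pipe("Name some things that are {{ color.answer }}.", answers))
# -> Name some things that are green.
```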

If an answer is a list, we can index the items to use them as inputs. Here we use an answer in question options:

from edsl import QuestionList, QuestionFreeText, QuestionMultipleChoice, Survey, Agent

q_colors = QuestionList(
   question_name = "colors",
   question_text = "What are your 3 favorite colors?",
   max_list_items = 3
)

q_examples = QuestionFreeText(
   question_name = "examples",
   question_text = "Name some things that are {{ colors.answer }}",
)

q_favorite = QuestionMultipleChoice(
   question_name = "favorite",
   question_text = "Which is your #1 favorite color?",
   question_options = [
      "{{ colors.answer[0] }}",
      "{{ colors.answer[1] }}",
      "{{ colors.answer[2] }}",
   ]
)

survey = Survey([q_colors, q_examples, q_favorite])

agent = Agent(traits = {"persona": "You are a botanist."})

results = survey.by(agent).run()

results.select("colors", "examples", "favorite").print(format="rich")

Output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ answer                       ┃ answer                                                               ┃ answer    ┃
┃ .colors                      ┃ .examples                                                            ┃ .favorite ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ ['Green', 'Brown', 'Yellow'] │ Certainly! Here are some things that can be green, brown, or yellow: │ Green     │
│                              │                                                                      │           │
│                              │ **Green:**                                                           │           │
│                              │ 1. Leaves - Many plants have green leaves due to chlorophyll, which  │           │
│                              │ is essential for photosynthesis.                                     │           │
│                              │ 2. Grass - Typically green, especially when healthy and              │           │
│                              │ well-watered.                                                        │           │
│                              │ 3. Green Apples - Varieties like Granny Smith are known for their    │           │
│                              │ green color.                                                         │           │
│                              │                                                                      │           │
│                              │ **Brown:**                                                           │           │
│                              │ 1. Tree Bark - The outer layer of trees is often brown, providing    │           │
│                              │ protection.                                                          │           │
│                              │ 2. Soil - Many types of soil appear brown, indicating organic        │           │
│                              │ matter.                                                              │           │
│                              │ 3. Acorns - These seeds from oak trees are generally brown when      │           │
│                              │ mature.                                                              │           │
│                              │                                                                      │           │
│                              │ **Yellow:**                                                          │           │
│                              │ 1. Sunflowers - Known for their bright yellow petals.                │           │
│                              │ 2. Bananas - Yellow when ripe and ready to eat.                      │           │
│                              │ 3. Daffodils - These flowers are often a vibrant yellow, heralding   │           │
│                              │ spring.                                                              │           │
└──────────────────────────────┴──────────────────────────────────────────────────────────────────────┴───────────┘
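The {{ colors.answer[0] }} form indexes into a list answer. Since the templates are Jinja-style, indexing is native in EDSL; a hypothetical stdlib sketch of what the substitution does when building the question options:

```python
import re

# Illustrative sketch of indexing into a list answer when piping into
# question options (indexing is native in EDSL's Jinja-style templates;
# this helper is hypothetical).
def pipe_indexed(template: str, answers: dict) -> str:
    pattern = r"\{\{\s*(\w+)\.answer\[(\d+)\]\s*\}\}"
    return re.sub(
        pattern,
        lambda m: str(answers[m.group(1)][int(m.group(2))]),
        template,
    )

colors = {"colors": ["Green", "Brown", "Yellow"]}
options = [pipe_indexed("{{ colors.answer[%d] }}" % i, colors) for i in range(3)]
print(options)  # -> ['Green', 'Brown', 'Yellow']
```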

This can also be done with agent traits. For example:

from edsl import Agent, QuestionFreeText

a = Agent(traits = {'first_name': 'John'})

q = QuestionFreeText(
   question_text = 'What is your last name, {{ agent.first_name }}?',
   question_name = "last_name"
)

job = q.by(a)

job.prompts().select('user_prompt').print(format="rich")

This code will output the text of the prompt for the question:

What is your last name, John?

We can also show both system and user prompts together with information about the question, agent and model by calling the show_prompts() method:

job.show_prompts()
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ user_prompt  ┃ system_prom… ┃ interview_i… ┃ question_na… ┃ scenario_ind… ┃ agent_index ┃ model  ┃ estimated_c… ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━┩
│ What is your │ You are      │ 0            │ last_name    │ 0             │ 0           │ gpt-4o │ 0.0005375    │
│ last name,   │ answering    │              │              │               │             │        │              │
│ John?        │ questions as │              │              │               │             │        │              │
│              │ if you were  │              │              │               │             │        │              │
│              │ a human. Do  │              │              │               │             │        │              │
│              │ not break    │              │              │               │             │        │              │
│              │ character.   │              │              │               │             │        │              │
│              │ You are an   │              │              │               │             │        │              │
│              │ agent with   │              │              │               │             │        │              │
│              │ the          │              │              │               │             │        │              │
│              │ following    │              │              │               │             │        │              │
│              │ persona:     │              │              │               │             │        │              │
│              │ {'first_nam… │              │              │               │             │        │              │
│              │ 'John'}      │              │              │               │             │        │              │
└──────────────┴──────────────┴──────────────┴──────────────┴───────────────┴─────────────┴────────┴──────────────┘

Question memory

When an agent is taking a survey, they can be prompted to “remember” answers to previous questions. This can be done in several ways:

Full memory

The method set_full_memory_mode() gives the agent all of the prior questions and answers at each new question in the survey, i.e., the first question and answer are included in the memory when answering the second question, both the first and second questions and answers are included in the memory when answering the third question, and so on. The method is called on the survey object:

survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.set_full_memory_mode()

In the results, we can inspect the _user_prompt for each question to see that the agent was prompted to remember all of the prior questions:

results = survey.by(agent).run()

(
   results
   .select("consume_local_news_user_prompt", "sources_user_prompt", "rate_coverage_user_prompt", "minutes_reading_user_prompt")
   .print(format="rich")
)

This will print the prompt that was used for each question, and we can see that each successive prompt references all prior questions and answers that were given:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ prompt                     ┃ prompt                    ┃ prompt                     ┃ prompt                    ┃
┃ .consume_local_news_user_… ┃ .sources_user_prompt      ┃ .rate_coverage_user_prompt ┃ .minutes_reading_user_pr… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│                            │ What are your most common │ On a scale of 1 to 10, how │ On average, how many      │
│ How often do you consume   │ sources of local news?    │ would you rate the quality │ minutes do you spend      │
│ local news?                │ (Select all that apply)   │ of local news coverage in  │ consuming local news each │
│                            │                           │ your area?                 │ day?                      │
│                            │                           │                            │                           │
│ Daily                      │ 0: Television             │ 1 : Very poor              │     Minimum answer value: │
│                            │                           │                            │ 0                         │
│ Weekly                     │ 1: Newspaper              │ 2 :                        │                           │
│                            │                           │                            │                           │
│ Monthly                    │ 2: Online news websites   │ 3 :                        │     Maximum answer value: │
│                            │                           │                            │ 1440                      │
│ Never                      │ 3: Social Media           │ 4 :                        │ This question requires a  │
│                            │                           │                            │ numerical response in the │
│                            │ 4: Radio                  │ 5 :                        │ form of an integer or     │
│ Only 1 option may be       │                           │                            │ decimal (e.g., -12, 0, 1, │
│ selected.                  │ 5: Other                  │ 6 :                        │ 2, 3.45, ...).            │
│                            │                           │                            │ Respond with just your    │
│ Respond only with a string │                           │ 7 :                        │ number on a single line.  │
│ corresponding to one of    │                           │                            │ If your response is       │
│ the options.               │                           │ 8 :                        │ equivalent to zero,       │
│                            │                           │                            │ report '0'                │
│                            │                           │ 9 :                        │                           │
│ After the answer, you can  │                           │                            │                           │
│ put a comment explaining   │ Please respond only with  │ 10 : Excellent             │ After the answer, put a   │
│ why you chose that option  │ a comma-separated list of │                            │ comment explaining your   │
│ on the next line.          │ the code of the options   │ Only 1 option may be       │ choice on the next line.  │
│                            │ that apply, with square   │ selected.                  │         Before the        │
│                            │ brackets. E.g., [0, 1, 3] │                            │ question you are now      │
│                            │                           │ Respond only with the code │ answering, you already    │
│                            │                           │ corresponding to one of    │ answered the following    │
│                            │ After the answer, you can │ the options. E.g., "1" or  │ question(s):              │
│                            │ put a comment explaining  │ "5" by itself.             │                 Question: │
│                            │ your choice on the next   │                            │ How often do you consume  │
│                            │ line.                     │ After the answer, you can  │ local news?               │
│                            │         Before the        │ put a comment explaining   │         Answer: Weekly    │
│                            │ question you are now      │ why you chose that option  │                           │
│                            │ answering, you already    │ on the next line.          │  Prior questions and      │
│                            │ answered the following    │         Before the         │ answers:   Question: What │
│                            │ question(s):              │ question you are now       │ are your most common      │
│                            │                 Question: │ answering, you already     │ sources of local news?    │
│                            │ How often do you consume  │ answered the following     │ (Select all that apply)   │
│                            │ local news?               │ question(s):               │         Answer: ['Online  │
│                            │         Answer: Weekly    │                 Question:  │ news websites', 'Social   │
│                            │                           │ How often do you consume   │ Media']                   │
│                            │                           │ local news?                │                           │
│                            │                           │         Answer: Weekly     │  Prior questions and      │
│                            │                           │                            │ answers:   Question: On a │
│                            │                           │  Prior questions and       │ scale of 1 to 10, how     │
│                            │                           │ answers:   Question: What  │ would you rate the        │
│                            │                           │ are your most common       │ quality of local news     │
│                            │                           │ sources of local news?     │ coverage in your area?    │
│                            │                           │ (Select all that apply)    │         Answer: 6         │
│                            │                           │         Answer: ['Online   │                           │
│                            │                           │ news websites', 'Social    │                           │
│                            │                           │ Media']                    │                           │
└────────────────────────────┴───────────────────────────┴────────────────────────────┴───────────────────────────┘

Note that full memory mode is slow and token-intensive: the questions must be answered serially, and each successive prompt grows as it accumulates all of the prior answers. In contrast, when the agent does not need to remember prior answers, execution can proceed in parallel.

Lagged memory

The method set_lagged_memory() gives the agent a specified number of prior questions and answers at each new question in the survey; we pass it the number of prior questions and answers to remember. Here we use it to give the agent just 1 prior question/answer at each question:

survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.set_lagged_memory(1)

We can inspect each _user_prompt again and see that the agent is only prompted to remember the last prior question/answer:

results = survey.by(agent).run()

(
   results
   .select("consume_local_news_user_prompt", "sources_user_prompt", "rate_coverage_user_prompt", "minutes_reading_user_prompt")
   .print(format="rich")
)

This will print the prompts for each question:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ prompt                     ┃ prompt                    ┃ prompt                     ┃ prompt                    ┃
┃ .consume_local_news_user_… ┃ .sources_user_prompt      ┃ .rate_coverage_user_prompt ┃ .minutes_reading_user_pr… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│                            │ What are your most common │ On a scale of 1 to 10, how │ On average, how many      │
│ How often do you consume   │ sources of local news?    │ would you rate the quality │ minutes do you spend      │
│ local news?                │ (Select all that apply)   │ of local news coverage in  │ consuming local news each │
│                            │                           │ your area?                 │ day?                      │
│                            │                           │                            │                           │
│ Daily                      │ 0: Television             │ 1 : Very poor              │     Minimum answer value: │
│                            │                           │                            │ 0                         │
│ Weekly                     │ 1: Newspaper              │ 2 :                        │                           │
│                            │                           │                            │                           │
│ Monthly                    │ 2: Online news websites   │ 3 :                        │     Maximum answer value: │
│                            │                           │                            │ 1440                      │
│ Never                      │ 3: Social Media           │ 4 :                        │ This question requires a  │
│                            │                           │                            │ numerical response in the │
│                            │ 4: Radio                  │ 5 :                        │ form of an integer or     │
│ Only 1 option may be       │                           │                            │ decimal (e.g., -12, 0, 1, │
│ selected.                  │ 5: Other                  │ 6 :                        │ 2, 3.45, ...).            │
│                            │                           │                            │ Respond with just your    │
│ Respond only with a string │                           │ 7 :                        │ number on a single line.  │
│ corresponding to one of    │                           │                            │ If your response is       │
│ the options.               │                           │ 8 :                        │ equivalent to zero,       │
│                            │                           │                            │ report '0'                │
│                            │                           │ 9 :                        │                           │
│ After the answer, you can  │                           │                            │                           │
│ put a comment explaining   │ Please respond only with  │ 10 : Excellent             │ After the answer, put a   │
│ why you chose that option  │ a comma-separated list of │                            │ comment explaining your   │
│ on the next line.          │ the code of the options   │ Only 1 option may be       │ choice on the next line.  │
│                            │ that apply, with square   │ selected.                  │         Before the        │
│                            │ brackets. E.g., [0, 1, 3] │                            │ question you are now      │
│                            │                           │ Respond only with the code │ answering, you already    │
│                            │                           │ corresponding to one of    │ answered the following    │
│                            │ After the answer, you can │ the options. E.g., "1" or  │ question(s):              │
│                            │ put a comment explaining  │ "5" by itself.             │                 Question: │
│                            │ your choice on the next   │                            │ On a scale of 1 to 10,    │
│                            │ line.                     │ After the answer, you can  │ how would you rate the    │
│                            │         Before the        │ put a comment explaining   │ quality of local news     │
│                            │ question you are now      │ why you chose that option  │ coverage in your area?    │
│                            │ answering, you already    │ on the next line.          │         Answer: 6         │
│                            │ answered the following    │         Before the         │                           │
│                            │ question(s):              │ question you are now       │                           │
│                            │                 Question: │ answering, you already     │                           │
│                            │ How often do you consume  │ answered the following     │                           │
│                            │ local news?               │ question(s):               │                           │
│                            │         Answer: Weekly    │                 Question:  │                           │
│                            │                           │ What are your most common  │                           │
│                            │                           │ sources of local news?     │                           │
│                            │                           │ (Select all that apply)    │                           │
│                            │                           │         Answer: ['Online   │                           │
│                            │                           │ news websites', 'Social    │                           │
│                            │                           │ Media']                    │                           │
└────────────────────────────┴───────────────────────────┴────────────────────────────┴───────────────────────────┘

Targeted memory

The method add_targeted_memory() gives the agent a targeted prior question and answer when answering another specified question. We pass it the question to answer and the prior question/answer to remember when answering it. Here we use it to give the agent the question/answer to q1 when prompting it to answer q4:

survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_targeted_memory(q4, q1)

results = survey.by(agent).run()

(
   results
   .select("consume_local_news_user_prompt", "sources_user_prompt", "rate_coverage_user_prompt", "minutes_reading_user_prompt")
   .print(format="rich")
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ prompt                     ┃ prompt                    ┃ prompt                     ┃ prompt                    ┃
┃ .consume_local_news_user_… ┃ .sources_user_prompt      ┃ .rate_coverage_user_prompt ┃ .minutes_reading_user_pr… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│                            │ What are your most common │ On a scale of 1 to 10, how │ On average, how many      │
│ How often do you consume   │ sources of local news?    │ would you rate the quality │ minutes do you spend      │
│ local news?                │ (Select all that apply)   │ of local news coverage in  │ consuming local news each │
│                            │                           │ your area?                 │ day?                      │
│                            │                           │                            │                           │
│ Daily                      │ 0: Television             │ 1 : Very poor              │     Minimum answer value: │
│                            │                           │                            │ 0                         │
│ Weekly                     │ 1: Newspaper              │ 2 :                        │                           │
│                            │                           │                            │                           │
│ Monthly                    │ 2: Online news websites   │ 3 :                        │     Maximum answer value: │
│                            │                           │                            │ 1440                      │
│ Never                      │ 3: Social Media           │ 4 :                        │ This question requires a  │
│                            │                           │                            │ numerical response in the │
│                            │ 4: Radio                  │ 5 :                        │ form of an integer or     │
│ Only 1 option may be       │                           │                            │ decimal (e.g., -12, 0, 1, │
│ selected.                  │ 5: Other                  │ 6 :                        │ 2, 3.45, ...).            │
│                            │                           │                            │ Respond with just your    │
│ Respond only with a string │                           │ 7 :                        │ number on a single line.  │
│ corresponding to one of    │                           │                            │ If your response is       │
│ the options.               │                           │ 8 :                        │ equivalent to zero,       │
│                            │                           │                            │ report '0'                │
│                            │                           │ 9 :                        │                           │
│ After the answer, you can  │                           │                            │                           │
│ put a comment explaining   │ Please respond only with  │ 10 : Excellent             │ After the answer, put a   │
│ why you chose that option  │ a comma-separated list of │                            │ comment explaining your   │
│ on the next line.          │ the code of the options   │ Only 1 option may be       │ choice on the next line.  │
│                            │ that apply, with square   │ selected.                  │         Before the        │
│                            │ brackets. E.g., [0, 1, 3] │                            │ question you are now      │
│                            │                           │ Respond only with the code │ answering, you already    │
│                            │                           │ corresponding to one of    │ answered the following    │
│                            │ After the answer, you can │ the options. E.g., "1" or  │ question(s):              │
│                            │ put a comment explaining  │ "5" by itself.             │                 Question: │
│                            │ your choice on the next   │                            │ How often do you consume  │
│                            │ line.                     │ After the answer, you can  │ local news?               │
│                            │                           │ put a comment explaining   │         Answer: Weekly    │
│                            │                           │ why you chose that option  │                           │
│                            │                           │ on the next line.          │                           │
└────────────────────────────┴───────────────────────────┴────────────────────────────┴───────────────────────────┘
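The prompt shown above illustrates how a targeted memory is rendered: the prior question and its answer are appended to the focal question's user prompt. A minimal plain-Python sketch of that idea (the function name `render_with_memory` is hypothetical; this is not edsl's implementation):

```python
# Illustrative sketch only (not edsl internals): append a remembered
# prior question/answer pair to a focal question's user prompt.
def render_with_memory(question_text: str, prior: list[tuple[str, str]]) -> str:
    prompt = question_text
    if prior:
        prompt += (
            "\nBefore the question you are now answering, "
            "you already answered the following question(s):"
        )
        for question, answer in prior:
            prompt += f"\n        Question: {question}\n        Answer: {answer}"
    return prompt

prompt = render_with_memory(
    "On average, how many minutes do you spend consuming local news each day?",
    [("How often do you consume local news?", "Weekly")],
)
```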

Memory collection

The add_memory_collection() method is used to add sets of prior questions and answers to a given question. We pass it the question to be answered and the list of questions/answers to be remembered when answering it. For example, we can add the questions/answers for both q1 and q2 when prompting the agent to answer q4:

survey = Survey(questions = [q1, q2, q3, q4])
survey = survey.add_memory_collection(q4, [q1, q2])

results = survey.by(agent).run()

(
   results
   .select("consume_local_news_user_prompt", "sources_user_prompt", "rate_coverage_user_prompt", "minutes_reading_user_prompt")
   .print(format="rich")
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ prompt                     ┃ prompt                    ┃ prompt                     ┃ prompt                    ┃
┃ .consume_local_news_user_… ┃ .sources_user_prompt      ┃ .rate_coverage_user_prompt ┃ .minutes_reading_user_pr… ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│                            │ What are your most common │ On a scale of 1 to 10, how │ On average, how many      │
│ How often do you consume   │ sources of local news?    │ would you rate the quality │ minutes do you spend      │
│ local news?                │ (Select all that apply)   │ of local news coverage in  │ consuming local news each │
│                            │                           │ your area?                 │ day?                      │
│                            │                           │                            │                           │
│ Daily                      │ 0: Television             │ 1 : Very poor              │     Minimum answer value: │
│                            │                           │                            │ 0                         │
│ Weekly                     │ 1: Newspaper              │ 2 :                        │                           │
│                            │                           │                            │                           │
│ Monthly                    │ 2: Online news websites   │ 3 :                        │     Maximum answer value: │
│                            │                           │                            │ 1440                      │
│ Never                      │ 3: Social Media           │ 4 :                        │ This question requires a  │
│                            │                           │                            │ numerical response in the │
│                            │ 4: Radio                  │ 5 :                        │ form of an integer or     │
│ Only 1 option may be       │                           │                            │ decimal (e.g., -12, 0, 1, │
│ selected.                  │ 5: Other                  │ 6 :                        │ 2, 3.45, ...).            │
│                            │                           │                            │ Respond with just your    │
│ Respond only with a string │                           │ 7 :                        │ number on a single line.  │
│ corresponding to one of    │                           │                            │ If your response is       │
│ the options.               │                           │ 8 :                        │ equivalent to zero,       │
│                            │                           │                            │ report '0'                │
│                            │                           │ 9 :                        │                           │
│ After the answer, you can  │                           │                            │                           │
│ put a comment explaining   │ Please respond only with  │ 10 : Excellent             │ After the answer, put a   │
│ why you chose that option  │ a comma-separated list of │                            │ comment explaining your   │
│ on the next line.          │ the code of the options   │ Only 1 option may be       │ choice on the next line.  │
│                            │ that apply, with square   │ selected.                  │         Before the        │
│                            │ brackets. E.g., [0, 1, 3] │                            │ question you are now      │
│                            │                           │ Respond only with the code │ answering, you already    │
│                            │                           │ corresponding to one of    │ answered the following    │
│                            │ After the answer, you can │ the options. E.g., "1" or  │ question(s):              │
│                            │ put a comment explaining  │ "5" by itself.             │                 Question: │
│                            │ your choice on the next   │                            │ How often do you consume  │
│                            │ line.                     │ After the answer, you can  │ local news?               │
│                            │                           │ put a comment explaining   │         Answer: Weekly    │
│                            │                           │ why you chose that option  │                           │
│                            │                           │ on the next line.          │  Prior questions and      │
│                            │                           │                            │ answers:   Question: What │
│                            │                           │                            │ are your most common      │
│                            │                           │                            │ sources of local news?    │
│                            │                           │                            │ (Select all that apply)   │
│                            │                           │                            │         Answer: ['Online  │
│                            │                           │                            │ news websites', 'Social   │
│                            │                           │                            │ Media']                   │
└────────────────────────────┴───────────────────────────┴────────────────────────────┴───────────────────────────┘

Survey class

A Survey is a collection of questions that can be administered to an Agent.

class edsl.surveys.Survey.Survey(questions: list[QuestionBase | Instruction | ChangeInstruction] | None = None, memory_plan: MemoryPlan | None = None, rule_collection: RuleCollection | None = None, question_groups: dict[str, tuple[int, int]] | None = None, name: str | None = None)[source]

Bases: SurveyExportMixin, SurveyFlowVisualizationMixin, Base

A collection of questions that supports skip logic.

__init__(questions: list[QuestionBase | Instruction | ChangeInstruction] | None = None, memory_plan: MemoryPlan | None = None, rule_collection: RuleCollection | None = None, question_groups: dict[str, tuple[int, int]] | None = None, name: str | None = None)[source]

Create a new survey.

Parameters:
  • questions – The questions in the survey.

  • memory_plan – The memory plan for the survey.

  • rule_collection – The rule collection for the survey.

  • question_groups – The groups of questions in the survey.

  • name – The name of the survey - DEPRECATED.

>>> from edsl import QuestionFreeText
>>> q1 = QuestionFreeText(question_text = "What is your name?", question_name = "name")
>>> q2 = QuestionFreeText(question_text = "What is your favorite color?", question_name = "color")
>>> q3 = QuestionFreeText(question_text = "Is a hot dog a sandwich?", question_name = "food")
>>> s = Survey([q1, q2, q3], question_groups = {"demographics": (0, 1), "substantive": (2, 2)})
add_instruction(instruction: Instruction | ChangeInstruction) Survey[source]

Add an instruction to the survey.

Parameters:

instruction – The instruction to add to the survey.

>>> from edsl import Instruction
>>> i = Instruction(text="Pay attention to the following questions.", name="intro")
>>> s = Survey().add_instruction(i)
>>> s.instruction_names_to_instructions
{'intro': Instruction(name="intro", text="Pay attention to the following questions.")}
>>> s.pseudo_indices
{'intro': -0.5}
add_memory_collection(focal_question: QuestionBase | str, prior_questions: List[QuestionBase | str]) Survey[source]

Add prior questions and responses so the agent has them when answering.

This adds instructions to the survey so that when answering focal_question, the agent also remembers the answers to the questions listed in prior_questions.

Parameters:
  • focal_question – The question that the agent is answering.

  • prior_questions – The questions that the agent should remember when answering the focal question.

Here we have it so that when answering q2, the agent should remember answers to q0 and q1:

>>> s = Survey.example().add_memory_collection("q2", ["q0", "q1"])
>>> s.memory_plan
{'q2': Memory(prior_questions=['q0', 'q1'])}
add_question(question: QuestionBase, index: int | None = None) Survey[source]

Add a question to survey.

Parameters:
  • question – The question to add to the survey.

  • index – The index at which to insert the question. If not provided, the question is appended at the end.

The question is appended at the end of the self.questions list. A default rule is created that advances to the question at the next index.

>>> from edsl import QuestionMultipleChoice
>>> q = QuestionMultipleChoice(question_text = "Do you like school?", question_options=["yes", "no"], question_name="q0")
>>> s = Survey().add_question(q)
>>> s = Survey().add_question(q).add_question(q)
Traceback (most recent call last):
...
edsl.exceptions.surveys.SurveyCreationError: Question name 'q0' already exists in survey. Existing names are ['q0'].
add_question_group(start_question: QuestionBase | str, end_question: QuestionBase | str, group_name: str) Survey[source]

Add a group of questions to the survey.

Parameters:
  • start_question – The first question in the group.

  • end_question – The last question in the group.

  • group_name – The name of the group.

Example:

>>> s = Survey.example().add_question_group("q0", "q1", "group1")
>>> s.question_groups
{'group1': (0, 1)}

The name of the group must be a valid identifier:

>>> s = Survey.example().add_question_group("q0", "q2", "1group1")
Traceback (most recent call last):
...
ValueError: Group name 1group1 is not a valid identifier.

The name of the group cannot be the same as an existing question name:

>>> s = Survey.example().add_question_group("q0", "q1", "q0")
Traceback (most recent call last):
...
ValueError: Group name q0 already exists as a question name in the survey.

The start index must be less than the end index:

>>> s = Survey.example().add_question_group("q1", "q0", "group1")
Traceback (most recent call last):
...
ValueError: Start index 1 is greater than end index 0.
add_rule(question: QuestionBase | str, expression: str, next_question: QuestionBase | int, before_rule: bool = False) Survey[source]

Add a rule to a Question of the Survey.

Parameters:
  • question – The question to add the rule to.

  • expression – The expression to evaluate.

  • next_question – The next question to go to if the rule is true.

  • before_rule – Whether the rule is evaluated before the question is answered.

This adds a rule that if the answer to q0 is ‘yes’, the next question is q2 (as opposed to q1)

>>> s = Survey.example().add_rule("q0", "{{ q0 }} == 'yes'", "q2")
>>> s.next_question("q0", {"q0": "yes"}).question_name
'q2'
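The rule resolution above can be sketched in plain Python (an illustrative assumption about the mechanics, not edsl's actual implementation): among the rules attached to the current question, the truthy expression with the highest priority wins, falling back to the default "advance" rule.

```python
# Sketch: choose the next question index from a rule table like show_rules().
def resolve_next(rules, current_q, answers, default_next):
    best_priority, next_q = float("-inf"), default_next
    for rule in rules:
        if rule["current_q"] != current_q:
            continue
        # Evaluate the expression (e.g. "q0 == 'yes'") against the answers.
        if eval(rule["expression"], {}, dict(answers)) and rule["priority"] > best_priority:
            best_priority, next_q = rule["priority"], rule["next_q"]
    return next_q

rules = [
    {"current_q": 0, "expression": "True", "next_q": 1, "priority": -1},
    {"current_q": 0, "expression": "q0 == 'yes'", "next_q": 2, "priority": 0},
]
resolve_next(rules, 0, {"q0": "yes"}, default_next=1)  # → 2
resolve_next(rules, 0, {"q0": "no"}, default_next=1)   # → 1
```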
add_skip_rule(question: QuestionBase | str, expression: str) Survey[source]

Add a per-question skip rule to the survey.

Parameters:
  • question – The question to add the skip rule to.

  • expression – The expression to evaluate.

This adds a rule that skips ‘q0’ always, before the question is answered:

>>> from edsl import QuestionFreeText
>>> q0 = QuestionFreeText.example()
>>> q0.question_name = "q0"
>>> q1 = QuestionFreeText.example()
>>> q1.question_name = "q1"
>>> s = Survey([q0, q1]).add_skip_rule("q0", "True")
>>> s.next_question("q0", {}).question_name
'q1'

Note that this is different from a rule that jumps to some other question after the question is answered.

add_stop_rule(question: QuestionBase | str, expression: str) Survey[source]

Add a rule that stops the survey.

Parameters:
  • question – The question to add the stop rule to.

  • expression – The expression to evaluate.

If the rule is true, the survey ends. The rule is evaluated after the question is answered.

Here, answering “yes” to q0 ends the survey:

>>> s = Survey.example().add_stop_rule("q0", "q0 == 'yes'")
>>> s.next_question("q0", {"q0": "yes"})
EndOfSurvey

By comparison, answering “no” to q0 does not end the survey:

>>> s.next_question("q0", {"q0": "no"}).question_name
'q1'
>>> s.add_stop_rule("q0", "q1 <> 'yes'")
Traceback (most recent call last):
...
ValueError: The expression contains '<>', which is not allowed. You probably mean '!='.
add_targeted_memory(focal_question: QuestionBase | str, prior_question: QuestionBase | str) Survey[source]

Add instructions to the survey so that when answering focal_question, the agent also remembers the answer to prior_question.

Parameters:
  • focal_question – The question that the agent is answering.

  • prior_question – The question that the agent should remember when answering the focal question.

Here we add instructions so that when answering q2, the agent remembers q0:

>>> s = Survey.example().add_targeted_memory("q2", "q0")
>>> s.memory_plan
{'q2': Memory(prior_questions=['q0'])}

by(*args: 'Agent' | 'Scenario' | 'LanguageModel') Jobs[source]

Add Agents, Scenarios, and LanguageModels to a survey, returning a runnable Jobs object.

Parameters:

args – The Agents, Scenarios, and LanguageModels to add to the survey.

This takes the survey and adds an Agent and a Scenario via by(), which converts it to a Jobs object:

>>> s = Survey.example(); from edsl import Agent; from edsl import Scenario
>>> s.by(Agent.example()).by(Scenario.example())
Jobs(...)
clear_non_default_rules() Survey[source]

Remove all non-default rules from the survey.

>>> Survey.example().show_rules()
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ current_q ┃ expression  ┃ next_q ┃ priority ┃ before_rule ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ 0         │ True        │ 1      │ -1       │ False       │
│ 0         │ q0 == 'yes' │ 2      │ 0        │ False       │
│ 1         │ True        │ 2      │ -1       │ False       │
│ 2         │ True        │ 3      │ -1       │ False       │
└───────────┴─────────────┴────────┴──────────┴─────────────┘
>>> Survey.example().clear_non_default_rules().show_rules()
┏━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ current_q ┃ expression ┃ next_q ┃ priority ┃ before_rule ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ 0         │ True       │ 1      │ -1       │ False       │
│ 1         │ True       │ 2      │ -1       │ False       │
│ 2         │ True       │ 3      │ -1       │ False       │
└───────────┴────────────┴────────┴──────────┴─────────────┘
codebook() dict[str, str][source]

Create a codebook for the survey, mapping question names to question text.

>>> s = Survey.example()
>>> s.codebook()
{'q0': 'Do you like school?', 'q1': 'Why not?', 'q2': 'Why?'}
create_agent() Agent[source]

Create an agent from the simulated answers.

dag(textify: bool = False) DAG[source]

Return the DAG of the survey, which reflects both skip-logic and memory.

Parameters:

textify – Whether to return the DAG with question names instead of indices.

>>> s = Survey.example()
>>> d = s.dag()
>>> d
{1: {0}, 2: {0}}
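The skip-logic portion of this DAG can be approximated with a short sketch (an assumption about the semantics, not edsl internals): every question after one that carries a non-default rule depends on that question, since its answer decides whether the later questions are reached.

```python
def rule_dag(rules, n_questions):
    """Sketch: map each question index to the set of indices it depends on."""
    dag = {}
    for rule in rules:
        if rule["expression"] == "True":  # ignore default "advance" rules
            continue
        for later in range(rule["current_q"] + 1, n_questions):
            dag.setdefault(later, set()).add(rule["current_q"])
    return dag

# The example survey has one non-default rule on q0 routing to q2.
rule_dag([{"current_q": 0, "expression": "q0 == 'yes'", "next_q": 2}], 3)
# → {1: {0}, 2: {0}}
```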
delete_question(identifier: str | int) Survey[source]

Delete a question from the survey.

Parameters:

identifier – The name or index of the question to delete.

Returns:

The updated Survey object.

>>> from edsl import QuestionMultipleChoice, Survey
>>> q1 = QuestionMultipleChoice(question_text="Q1", question_options=["A", "B"], question_name="q1")
>>> q2 = QuestionMultipleChoice(question_text="Q2", question_options=["C", "D"], question_name="q2")
>>> s = Survey().add_question(q1).add_question(q2)
>>> _ = s.delete_question("q1")
>>> len(s.questions)
1
>>> _ = s.delete_question(0)
>>> len(s.questions)
0
classmethod example(params: bool = False, randomize: bool = False, include_instructions=False, custom_instructions: str | None = None) Survey[source]

Return an example survey.

>>> s = Survey.example()
>>> [q.question_text for q in s.questions]
['Do you like school?', 'Why not?', 'Why?']
classmethod from_dict(data: dict) Survey[source]

Deserialize the dictionary back to a Survey object.

Parameters:

data – The dictionary to deserialize.

>>> d = Survey.example().to_dict()
>>> s = Survey.from_dict(d)
>>> s == Survey.example()
True
>>> s = Survey.example(include_instructions = True)
>>> d = s.to_dict()
>>> news = Survey.from_dict(d)
>>> news == s
True
classmethod from_qsf(qsf_file: str | None = None, url: str | None = None) Survey[source]

Create a Survey object from a Qualtrics QSF file.

gen_path_through_survey() Generator[QuestionBase, dict, None][source]

Generate a coroutine that can be used to conduct an Interview.

The coroutine is a generator that yields a question and receives answers. It starts with the first question in the survey. The coroutine ends when an EndOfSurvey object is returned.

For the example survey, this is the rule table:

>>> s = Survey.example()
>>> s.show_rules()
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ current_q ┃ expression  ┃ next_q ┃ priority ┃ before_rule ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ 0         │ True        │ 1      │ -1       │ False       │
│ 0         │ q0 == 'yes' │ 2      │ 0        │ False       │
│ 1         │ True        │ 2      │ -1       │ False       │
│ 2         │ True        │ 3      │ -1       │ False       │
└───────────┴─────────────┴────────┴──────────┴─────────────┘

Note that q0 has a rule that if the answer is ‘yes’, the next question is q2. If the answer is ‘no’, the next question is q1.

Here is the path through the survey if the answer to q0 is ‘yes’:

>>> i = s.gen_path_through_survey()
>>> next(i)
Question('multiple_choice', question_name = """q0""", question_text = """Do you like school?""", question_options = ['yes', 'no'])
>>> i.send({"q0": "yes"})
Question('multiple_choice', question_name = """q2""", question_text = """Why?""", question_options = ['**lack*** of killer bees in cafeteria', 'other'])

And here is the path through the survey if the answer to q0 is ‘no’:

>>> i2 = s.gen_path_through_survey()
>>> next(i2)
Question('multiple_choice', question_name = """q0""", question_text = """Do you like school?""", question_options = ['yes', 'no'])
>>> i2.send({"q0": "no"})
Question('multiple_choice', question_name = """q1""", question_text = """Why not?""", question_options = ['killer bees in cafeteria', 'other'])
get(question_name: str) QuestionBase[source]

Return the question object given the question name.

Parameters:

question_name – The name of the question to get.

>>> s = Survey.example()
>>> s.get_question("q0")
Question('multiple_choice', question_name = """q0""", question_text = """Do you like school?""", question_options = ['yes', 'no'])
get_job(model=None, agent=None, **kwargs)[source]
get_question(question_name: str) QuestionBase[source]

Return the question object given the question name.

property last_item_was_instruction: bool[source]

Return whether the last item added to the survey was an instruction. This is used to determine the pseudo-index of the next item added to the survey.

Example:

>>> s = Survey.example()
>>> s.last_item_was_instruction
False
>>> from edsl.surveys.instructions.Instruction import Instruction
>>> s = s.add_instruction(Instruction(text="Pay attention to the following questions.", name="intro"))
>>> s.last_item_was_instruction
True
property max_pseudo_index: float[source]

Return the maximum pseudo index in the survey.

Example:

>>> s = Survey.example()
>>> s.max_pseudo_index
2
move_question(identifier: str | int, new_index: int)[source]
next_question(current_question: str | QuestionBase, answers: dict) QuestionBase | EndOfSurveyParent[source]

Return the next question in a survey.

Parameters:
  • current_question – The current question in the survey.

  • answers – The answers for the survey so far.

Notes:
  • If called with no arguments, it returns the first question in the survey.

  • If no answers are provided for a question with a rule, the next question is returned. If answers are provided, the next question is determined by the rules and the answers.

  • If the next question is the last question in the survey, an EndOfSurvey object is returned.

>>> s = Survey.example()
>>> s.next_question("q0", {"q0": "yes"}).question_name
'q2'
>>> s.next_question("q0", {"q0": "no"}).question_name
'q1'
property parameters[source]

Return a set of parameters in the survey.

>>> s = Survey.example()
>>> s.parameters
set()
property parameters_by_question[source]

Return a dictionary of parameters by question in the survey.

>>> from edsl import QuestionFreeText
>>> q = QuestionFreeText(question_name = "example", question_text = "What is the capital of {{ country }}?")
>>> s = Survey([q])
>>> s.parameters_by_question
{'example': {'country'}}

property piping_dag: DAG[source]

Return the DAG of piping dependencies.

>>> from edsl import QuestionFreeText
>>> q0 = QuestionFreeText(question_text="Here is a question", question_name="q0")
>>> q1 = QuestionFreeText(question_text="You previously answered {{ q0 }}---how do you feel now?", question_name="q1")
>>> s = Survey([q0, q1])
>>> s.piping_dag
{1: {0}}
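A plausible way to compute such dependencies, sketched in plain Python (an assumption for illustration, not edsl's code), is to scan each question's text for {{ name }} references to other questions:

```python
import re

def piping_deps(question_names, question_texts):
    """Sketch: map question index -> indices of questions piped into its text."""
    name_to_index = {name: i for i, name in enumerate(question_names)}
    dag = {}
    for i, text in enumerate(question_texts):
        for ref in re.findall(r"\{\{\s*(\w+)\s*\}\}", text):
            if ref in name_to_index:
                dag.setdefault(i, set()).add(name_to_index[ref])
    return dag

piping_deps(
    ["q0", "q1"],
    ["Here is a question",
     "You previously answered {{ q0 }}---how do you feel now?"],
)
# → {1: {0}}
```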
print()[source]

Print the survey in a rich format.

>>> s = Survey.example()
>>> s.print()
{
  "questions": [
  ...
}
property question_name_to_index: dict[str, int][source]

Return a dictionary mapping question names to question indices.

Example:

>>> s = Survey.example()
>>> s.question_name_to_index
{'q0': 0, 'q1': 1, 'q2': 2}
property question_names: list[str][source]

Return a list of question names in the survey.

Example:

>>> s = Survey.example()
>>> s.question_names
['q0', 'q1', 'q2']
question_names_to_questions() dict[source]

Return a dictionary mapping question names to question attributes.

questions[source]

A collection of questions that supports skip logic.

Initialization:
  • questions – the questions in the survey (optional)

  • question_names – the names of the questions (optional)

  • name – the name of the survey (optional)

Notes:
  • The presumed order of the survey is the order in which questions are added.

classmethod random_survey()[source]

Create a random survey.

recombined_questions_and_instructions() list[QuestionBase | Instruction][source]

Return a list of questions and instructions sorted by pseudo index.

relevant_instructions(question) dict[source]

Return a dictionary with keys as question names and values as instructions that are relevant to the question.

Parameters:

question – The question to get the relevant instructions for.

An instruction is relevant if it came before the question and was not modified by a change instruction.

property relevant_instructions_dict: InstructionCollection[source]

Return a dictionary with keys as question names and values as instructions that are relevant to the question.

>>> s = Survey.example(include_instructions=True)
>>> s.relevant_instructions_dict
{'q0': [Instruction(name="attention", text="Please pay attention!")], 'q1': [Instruction(name="attention", text="Please pay attention!")], 'q2': [Instruction(name="attention", text="Please pay attention!")]}
rich_print() Table[source]

Print the survey in a rich format.

>>> t = Survey.example().rich_print()
>>> print(t) 
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Questions                                                                                          ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ ┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┓                                │
│ ┃ Question Name ┃ Question Type   ┃ Question Text       ┃ Options ┃                                │
│ ┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━┩                                │
│ │ q0            │ multiple_choice │ Do you like school? │ yes, no │                                │
│ └───────────────┴─────────────────┴─────────────────────┴─────────┘                                │
│ ┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓              │
│ ┃ Question Name ┃ Question Type   ┃ Question Text ┃ Options                         ┃              │
│ ┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩              │
│ │ q1            │ multiple_choice │ Why not?      │ killer bees in cafeteria, other │              │
│ └───────────────┴─────────────────┴───────────────┴─────────────────────────────────┘              │
│ ┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │
│ ┃ Question Name ┃ Question Type   ┃ Question Text ┃ Options                                      ┃ │
│ ┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │
│ │ q2            │ multiple_choice │ Why?          │ **lack*** of killer bees in cafeteria, other │ │
│ └───────────────┴─────────────────┴───────────────┴──────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────────────────────────────────────────────┘
run(*args, **kwargs) Results[source]

Turn the survey into a Job and run it.

>>> from edsl import QuestionFreeText
>>> s = Survey([QuestionFreeText.example()])
>>> from edsl.language_models import LanguageModel
>>> m = LanguageModel.example(test_model = True, canned_response = "Great!")
>>> results = s.by(m).run(cache = False)
>>> results.select('answer.*')
Dataset([{'answer.how_are_you': ['Great!']}])
async run_async(model=None, agent=None, cache=None, **kwargs)[source]

Run the survey asynchronously, using the default model, agent, and cache unless alternatives are supplied.

>>> from edsl.questions import QuestionFunctional
>>> def f(scenario, agent_traits): return "yes" if scenario["period"] == "morning" else "no"
>>> q = QuestionFunctional(question_name = "q0", func = f)
>>> s = Survey([q])
>>> s(period = "morning").select("answer.q0").first()
'yes'
>>> s(period = "evening").select("answer.q0").first()
'no'
property scenario_attributes: list[str][source]

Return a list of attributes that admissible Scenarios should have.

Here we have a survey with a question that uses a jinja2 style {{ }} template:

>>> from edsl import QuestionFreeText
>>> s = Survey().add_question(QuestionFreeText(question_text="{{ greeting }}. What is your name?", question_name="name"))
>>> s.scenario_attributes
['greeting']
>>> s = Survey().add_question(QuestionFreeText(question_text="{{ greeting }}. What is your {{ attribute }}?", question_name="name"))
>>> s.scenario_attributes
['greeting', 'attribute']
set_full_memory_mode() Survey[source]

Add instructions to the survey so that the agent remembers all prior answers at each question.

>>> s = Survey.example().set_full_memory_mode()
set_lagged_memory(lags: int) Survey[source]

Add instructions to the survey so that, at each question, the agent remembers the answers to the previous lags questions.
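To illustrate what lagged memory means, here is a plain-Python sketch of selecting the most recent answers (an illustration of the concept, not EDSL internals):

```python
def lagged_memory(history: list[tuple[str, str]], lags: int) -> list[tuple[str, str]]:
    """Return the most recent `lags` question/answer pairs from the history."""
    return history[-lags:] if lags > 0 else []

history = [("q0", "yes"), ("q1", "no"), ("q2", "maybe")]
lagged_memory(history, 2)   # the two most recent question/answer pairs
lagged_memory(history, 99)  # a lag larger than the history returns everything
```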

show_prompts()[source]
show_rules() None[source]

Print out the rules in the survey.

>>> s = Survey.example()
>>> s.show_rules()
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ current_q ┃ expression  ┃ next_q ┃ priority ┃ before_rule ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ 0         │ True        │ 1      │ -1       │ False       │
│ 0         │ q0 == 'yes' │ 2      │ 0        │ False       │
│ 1         │ True        │ 2      │ -1       │ False       │
│ 2         │ True        │ 3      │ -1       │ False       │
└───────────┴─────────────┴────────┴──────────┴─────────────┘
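The table above can be read as: among the rules for the current question whose expression evaluates to true, the one with the highest priority determines the next question. A plain-Python sketch of that selection, using the example survey's rules (an illustration only, not the RuleCollection implementation):

```python
rules = [
    {"current_q": 0, "expression": "True",        "next_q": 1, "priority": -1},
    {"current_q": 0, "expression": "q0 == 'yes'", "next_q": 2, "priority": 0},
    {"current_q": 1, "expression": "True",        "next_q": 2, "priority": -1},
    {"current_q": 2, "expression": "True",        "next_q": 3, "priority": -1},
]

def next_question(current_q: int, answers: dict) -> int:
    """Pick the highest-priority rule for the current question whose expression is true."""
    applicable = [
        r for r in rules
        if r["current_q"] == current_q and eval(r["expression"], {}, dict(answers))
    ]
    return max(applicable, key=lambda r: r["priority"])["next_q"]

next_question(0, {"q0": "yes"})  # the q0 == 'yes' rule wins, so the next question is 2
next_question(0, {"q0": "no"})   # only the default rule applies, so the next question is 1
```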
simulate() dict[source]

Simulate the survey and return the answers.

simulate_results() Results[source]

Simulate the survey and return the results.

textify(index_dag: DAG) DAG[source]

Convert the DAG of question indices to a DAG of question names.

Parameters:

index_dag – The DAG of question indices.

Example:

>>> s = Survey.example()
>>> d = s.dag()
>>> d
{1: {0}, 2: {0}}
>>> s.textify(d)
{'q1': {'q0'}, 'q2': {'q0'}}
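The index-to-name conversion shown above can be sketched as a dictionary comprehension over an ordered list of question names (an illustration of the mapping, not the actual implementation):

```python
def textify_dag(index_dag: dict[int, set[int]], names: list[str]) -> dict[str, set[str]]:
    """Replace question indices with question names throughout the DAG."""
    return {names[k]: {names[v] for v in vs} for k, vs in index_dag.items()}

names = ["q0", "q1", "q2"]
textify_dag({1: {0}, 2: {0}}, names)
# {'q1': {'q0'}, 'q2': {'q0'}}
```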
to_csv(filename: str = None)[source]

Export the survey to a CSV file.

Parameters:

filename – The name of the file to save the CSV to.

>>> s = Survey.example()
>>> s.to_csv() 
   index question_name        question_text                                question_options    question_type
0      0            q0  Do you like school?                                       [yes, no]  multiple_choice
1      1            q1             Why not?               [killer bees in cafeteria, other]  multiple_choice
2      2            q2                 Why?  [**lack*** of killer bees in cafeteria, other]  multiple_choice
to_dict() dict[str, Any][source]

Serialize the Survey object to a dictionary.

>>> s = Survey.example()
>>> s.to_dict().keys()
dict_keys(['questions', 'memory_plan', 'rule_collection', 'question_groups', 'edsl_version', 'edsl_class_name'])
to_jobs()[source]

Convert the survey to a Jobs object.

web(platform: Literal['google_forms', 'lime_survey', 'survey_monkey'] = 'google_forms', email=None)[source]
class edsl.surveys.Survey.ValidatedString(content)[source]

Bases: str

edsl.surveys.Survey.main()[source]

Run the example survey.