Questions

EDSL provides templates for many common question types, including multiple choice, checkbox, free text, numerical, linear scale and others. The Question class has subclasses for each of these types: QuestionMultipleChoice, QuestionCheckBox, QuestionFreeText, QuestionNumerical, QuestionLinearScale, etc., which have methods for validating answers and responses from language models.

Question type templates

A question is constructed by creating an instance of a question type class and passing the required fields. Questions are formatted as dictionaries with specific keys based on the question type.

Question types

The following question types are available:

  • QuestionMultipleChoice - multiple choice questions

  • QuestionCheckBox - checkbox questions

  • QuestionFreeText - free text questions

  • QuestionNumerical - numerical questions

  • QuestionLinearScale - linear scale questions

  • QuestionLikertFive - Likert scale questions

  • QuestionRank - ranked list questions

  • QuestionTopK - top-k list questions

  • QuestionYesNo - yes/no questions (multiple choice with fixed options)

  • QuestionList - list questions (the response is formatted as a list of strings)

  • QuestionBudget - budget allocation questions (the response is a dictionary of allocated amounts)

  • QuestionExtract - information extraction questions (the response is formatted according to a specified template)

  • QuestionFunctional - functional questions
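
The question types can also be listed programmatically (a minimal sketch; the Question.available helper is an assumption to be verified against your EDSL version):

from edsl import Question

Question.available() # returns the list of available question type names (assumed helper)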

Required fields

All question types require a question_name and question_text. The question_name is a unique Pythonic identifier for a question (e.g., “favorite_color”). The question_text is the text of the question itself, written as a string (e.g., “What is your favorite color?”). Question types other than free text also require a question_options list of possible answer options. Depending on the question type, question_options can be a list of strings, integers, lists or other data types.

For example, to create a multiple choice question where the response should be a single option selected from a list of colors, we import the QuestionMultipleChoice class and create an instance of it with the required fields:

from edsl import QuestionMultipleChoice

q = QuestionMultipleChoice(
   question_name = "favorite_color",
   question_text = "What is your favorite color?",
   question_options = ["Red", "Orange", "Yellow", "Green", "Blue", "Purple"]
)
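
Because questions are stored as dictionaries, we can inspect a question's underlying representation with the to_dict method (a minimal sketch based on the serialization format shown in the QuestionFunctional reference below; exact keys vary by question type and EDSL version):

q.to_dict(add_edsl_version = False)
# e.g., {'question_name': 'favorite_color', 'question_text': 'What is your favorite color?',
#        'question_options': [...], 'question_type': 'multiple_choice'}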

Special parameters for question types

Some question types have additional parameters that are either optional or required to be added to the question when it is created:

min_selections and max_selections - Optional parameters that can be added to checkbox and rank questions to specify the minimum and maximum number of options that can be selected. For example, in a checkbox question where the response must include at least 2 and at most 3 of the options:

from edsl import QuestionCheckBox

q = QuestionCheckBox(
   question_name = "favorite_days",
   question_text = "What are your 2-3 favorite days of the week?",
   question_options = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"],
   min_selections = 2, # optional
   max_selections = 3 # optional
)

min_value and max_value - Optional parameters that can be added to numerical questions to specify the minimum and maximum values that can be entered. For example, in a numerical question where the respondent must enter a number between 1 and 100:

from edsl import QuestionNumerical

q = QuestionNumerical(
   question_name = "age",
   question_text = "How old are you (in years)?",
   min_value = 1, # optional
   max_value = 100 # optional
)

option_labels - Optional parameter that can be added to linear scale questions to specify labels for the scale options. For example, in a linear scale question where the response must be an integer between 1 and 5 reflecting the respondent’s agreement with a statement:

from edsl import QuestionLinearScale

q = QuestionLinearScale(
   question_name = "agree",
   question_text = "Please indicate whether you agree with the following statement: I am only happy when it rains.",
   question_options = [1, 2, 3, 4, 5],
   option_labels = {1: "Strongly disagree", 5: "Strongly agree"} # optional
)

num_selections - Optional parameter that can be added to rank questions to specify the number of options that must be ranked. For example, in a rank question where the respondent must rank their top 3 favorite foods:

from edsl import QuestionRank

q = QuestionRank(
   question_name = "foods_rank",
   question_text = "Rank your top 3 favorite foods.",
   question_options = ["Pizza", "Pasta", "Salad", "Soup"],
   num_selections = 3 # optional
)

answer_template - Required parameter of extract questions to specify a template for the extracted information. For example, in an extract question where the respondent must extract information from a given text:

from edsl import QuestionExtract

q = QuestionExtract(
   question_name = "course_schedule",
   question_text = "This semester we are offering courses on calligraphy on Friday mornings.",
   answer_template = {"course_topic": "AI", "days": ["Monday", "Wednesday"]} # required
)

func - Required parameter of functional questions to specify a function that generates the answer. For example, in a functional question where the answer is generated by a function:

from edsl import QuestionFunctional, ScenarioList, Scenario
import random

scenarios = ScenarioList(
   [Scenario({"persona": p, "random": random.randint(0, 1000)}) for p in ["Magician", "Economist"]]
)

def my_function(scenario, agent_traits):
   if scenario.get("persona") == "Magician":
      return "Magicians never pick randomly!"
   elif scenario.get("random") > 500:
      return "Top half"
   else:
      return "Bottom half"

q = QuestionFunctional(
   question_name = "evaluate",
   func = my_function
)

results = q.by(scenarios).run()

results.select("persona", "random", "evaluate").print(format="rich")

Example results:

┏━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ scenario  ┃ scenario ┃ answer                         ┃
┃ .persona  ┃ .random  ┃ .evaluate                      ┃
┡━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Magician  │ 301      │ Magicians never pick randomly! │
├───────────┼──────────┼────────────────────────────────┤
│ Economist │ 395      │ Bottom half                    │
└───────────┴──────────┴────────────────────────────────┘

Optional parameters

The following optional parameters can be added to any question type (examples of each are provided at the end of this section):

include_comment = False - A boolean value that can be added to any question type (other than free text) to exclude the default instruction in the user prompt that instructs the model to include a comment about its response to the question, as well as the comment field that is otherwise automatically included in survey results for the question.

By default, a comment field is automatically added to all question types other than free text. It is a free text field that allows a model to provide any commentary on its response to the question, such as why a certain answer was chosen or how the model arrived at its answer. It can be useful for debugging unexpected responses and reducing the likelihood that a model fails to follow formatting instructions for the main response (e.g., just selecting an answer option) because it wants to be more verbose. It can also be helpful in constructing sequences of questions with context about prior responses (e.g., simulating a chain of thought). (See the survey section for more information about adding question memory to a survey.)

question_presentation - A string that can be added to any question type to specify how the question should be presented to the model. It can be used to provide additional context or instructions to the model about how to interpret the question.

answering_instructions - A string that can be added to any question type to specify how the model should answer the question. It can be used to provide additional context or instructions to the model about how to format its response.

permissive = True - A boolean value that can be added to any question type to specify whether the model should be allowed to provide an answer that violates the question constraints (e.g., selecting fewer or more than the allowed number of options in a checkbox question). (By default, permissive is set to False to enforce any question constraints.)
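
For example, a checkbox question with selection constraints can be made permissive so that responses violating the constraints are not rejected (a minimal sketch reusing the checkbox example above):

from edsl import QuestionCheckBox

q = QuestionCheckBox(
   question_name = "favorite_days_permissive",
   question_text = "What are your 2-3 favorite days of the week?",
   question_options = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"],
   min_selections = 2,
   max_selections = 3,
   permissive = True # do not enforce the min/max selection constraints
)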

Creating a survey

We can combine multiple questions into a survey by passing them as a list to a Survey object:

from edsl import QuestionLinearScale, QuestionList, QuestionNumerical, Survey

q1 = QuestionLinearScale(
   question_name = "dc_state",
   question_text = "How likely is Washington, D.C. to become a U.S. state?",
   question_options = [1, 2, 3, 4, 5],
   option_labels = {1: "Not at all likely", 5: "Very likely"}
)

q2 = QuestionList(
   question_name = "largest_us_cities",
   question_text = "What are the largest U.S. cities by population?",
   max_list_items = 3
)

q3 = QuestionNumerical(
   question_name = "us_pop",
   question_text = "What was the U.S. population in 2020?"
)

survey = Survey(questions = [q1, q2, q3])

This allows us to administer multiple questions at once, either asynchronously (by default) or according to specified logic (e.g., skip or stop rules). To learn more about designing surveys with conditional logic, please see the Surveys section.
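
For instance, a stop rule can end a survey early based on a prior answer (a sketch assuming the add_stop_rule method and expression syntax covered in the Surveys section; please check that section for the exact API):

# End the survey if the answer to the first question is 1 ("Not at all likely")
survey = survey.add_stop_rule(q1, "dc_state == 1")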

Simulating a response

We generate a response to a question by delivering it to a language model. This is done by calling the run method for the question:

from edsl import QuestionCheckBox

q = QuestionCheckBox(
   question_name = "primary_colors",
   question_text = "Which of the following colors are primary?",
   question_options = ["Red", "Orange", "Yellow", "Green", "Blue", "Purple"]
)

results = q.run()

This will generate a Results object that contains a single Result representing the response to the question and information about the model used. If the model to be used has not been specified (as in the above example), the run method delivers the question to the default LLM (run Model() to check the current default LLM). We can inspect the response and model used by calling the select and print methods on the components of the results that we want to display. For example, we can print just the answer to the question:

results.select("primary_colors").print(format="rich")

Output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ answer                    ┃
┃ .primary_colors           ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ ['Red', 'Yellow', 'Blue'] │
└───────────────────────────┘

Or to inspect the model:

results.select("model").print(format="rich")

Output:

┏━━━━━━━━┓
┃ model  ┃
┃ .model ┃
┡━━━━━━━━┩
│ gpt-4o │
└────────┘

If questions have been combined in a survey, the run method is called directly on the survey instead:

from edsl import QuestionLinearScale, QuestionList, QuestionNumerical, Survey

q1 = QuestionLinearScale(
   question_name = "dc_state",
   question_text = "How likely is Washington, D.C. to become a U.S. state?",
   question_options = [1, 2, 3, 4, 5],
   option_labels = {1: "Not at all likely", 5: "Very likely"}
)

q2 = QuestionList(
   question_name = "largest_us_cities",
   question_text = "What are the largest U.S. cities by population?",
   max_list_items = 3
)

q3 = QuestionNumerical(
   question_name = "us_pop",
   question_text = "What was the U.S. population in 2020?"
)

survey = Survey(questions = [q1, q2, q3])

results = survey.run()

results.select("answer.*").print(format="rich")

Output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ answer                                 ┃ answer    ┃ answer    ┃
┃ .largest_us_cities                     ┃ .dc_state ┃ .us_pop   ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━┩
│ ['New York', 'Los Angeles', 'Chicago'] │ 2         │ 331449281 │
└────────────────────────────────────────┴───────────┴───────────┘

For a survey, each Result represents a response for the set of survey questions. To learn more about analyzing results, please see the Results section.

Parameterizing a question

A question can also be constructed to take parameters that are replaced with specified values either when the question is constructed or when the question is run. This operation can be done in a number of ways:

  • Use “piping” to pass answers or other components of previous questions to a subsequent question when it is run.

  • Use Scenario objects to pass values to a question when it is run.

  • Use Scenario objects to pass values to a question when it is constructed.

  • Use f-strings to pass values to a question when it is constructed.

Each of these methods allows us to easily create and administer multiple versions of a question at once.

In addition to the examples below, please also see the Scenarios section for more information on constructing and using scenarios to parameterize questions.

Scenarios

A Scenario object is a dictionary of parameter values that can be passed to a question. Details about scenarios can be found in the Scenarios section.

Key steps:

Create a question text that takes a parameter in double braces:

from edsl import QuestionFreeText

q = QuestionFreeText(
   question_name = "favorite_item",
   question_text = "What is your favorite {{ item }}?",
)

Then create a dictionary for each value that will replace the parameter and store them in Scenario objects:

from edsl import ScenarioList, Scenario

scenarios = ScenarioList(
   Scenario({"item": item}) for item in ["color", "food"]
)

We can pass scenarios to a question using the by method when the question is run:

from edsl import QuestionFreeText, ScenarioList, Scenario

q = QuestionFreeText(
   question_name = "favorite_item",
   question_text = "What is your favorite {{ item }}?",
)

scenarios = ScenarioList(
   Scenario({"item": item}) for item in ["color", "food"]
)

results = q.by(scenarios).run()

Each of the Results that are generated will include an individual Result for each version of the question that was answered.
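
We can confirm this by selecting the scenario value alongside each answer (a minimal sketch using the select and print pattern shown above):

results.select("scenario.item", "answer.favorite_item").print(format="rich")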

Alternatively, we can create multiple versions of a question when constructing a survey (i.e., before running it), by passing scenarios to the question loop method:

questions = q.loop(scenarios) # using the scenarios from the example above

We can inspect the questions that have been created:

questions

Output:

[Question('free_text', question_name = """favorite_item_0""", question_text = """What is your favorite color?"""),
Question('free_text', question_name = """favorite_item_1""", question_text = """What is your favorite food?""")]

Note that a unique question_name has been automatically generated based on the parameter values. This is necessary in order to pass the questions to a Survey object.

We can alternatively specify that the parameter values be inserted in the question name to create the unique identifiers (so long as they are Pythonic):

from edsl import QuestionFreeText, ScenarioList, Scenario

q = QuestionFreeText(
   question_name = "favorite_{{ item }}",
   question_text = "What is your favorite {{ item }}?",
)

scenarios = ScenarioList(
   Scenario({"item": item}) for item in ["color", "food"]
)

questions = q.loop(scenarios)

Output:

[Question('free_text', question_name = """favorite_color""", question_text = """What is your favorite color?"""),
Question('free_text', question_name = """favorite_food""", question_text = """What is your favorite food?""")]

To run the questions, we pass them to a Survey in the usual manner:

from edsl import Survey

survey = Survey(questions = questions)

results = survey.run()

Piping

Piping is a method for passing the answer or other components of previous questions into a subsequent question. Question components can be piped into question texts or options by using double braces with the name of the prior question and the key of the answer in the braces (e.g., {{ <prior_question_name>.answer }}). Note that piping can only be used when questions are run together in a survey.

For example:

from edsl import QuestionNumerical, QuestionList, QuestionMultipleChoice, Survey

q_age = QuestionNumerical(
   question_name = "age",
   question_text = "What is your age?"
)

# Piping an answer in a question text
q_prime = QuestionMultipleChoice(
   question_name = "prime",
   question_text = "Is {{ age.answer }} a prime number?",
   question_options = ["Yes", "No", "I don't know"]
)

q_favorite_colors = QuestionList(
   question_name = "favorite_colors",
   question_text = "What are your 3 favorite colors?",
   max_list_items = 3
)

# Using an item from a list response
q_flowers = QuestionList(
   question_name = "flowers",
   question_text = "Name some flowers that are {{ favorite_colors.answer[0] }}."
)

# Using a list response as a complete set of question options
q_house_color = QuestionMultipleChoice(
   question_name = "house_color",
   question_text = "Pretend you are painting a house. Which color would you choose?",
   question_options = "{{ favorite_colors.answer }}"
)

# Itemizing options from a list response in question options
q_car_color = QuestionMultipleChoice(
   question_name = "car_color",
   question_text = "Pretend you are buying a car. Which color would you choose?",
   question_options = [
      "{{ favorite_colors.answer[0] }}",
      "{{ favorite_colors.answer[1] }}",
      "{{ favorite_colors.answer[2] }}",
      "Other"
   ]
)

survey = Survey([
   q_age,
   q_prime,
   q_favorite_colors,
   q_flowers,
   q_house_color,
   q_car_color
])

When the survey is run, the answers will be piped as noted in the comments.
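
To run the survey and inspect the piped answers, we use the same pattern as above (a minimal sketch):

results = survey.run()

results.select("answer.*").print(format="rich")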

For more details and examples of piping, please see the Surveys module section on piping.

F-strings

F-strings can be used to pass values to a question when it is constructed. They function independently of scenarios and piping, but can be used at the same time.

For example:

from edsl import QuestionFreeText, ScenarioList, Scenario, Survey

questions = []
sentiments = ["enjoy", "hate", "love"]

scenarios = ScenarioList(
   Scenario({"activity": activity}) for activity in ["running", "reading"]
)

for sentiment in sentiments:
   q = QuestionFreeText(
      question_name = f"{ sentiment }_activity",
      question_text = f"How much do you { sentiment } {{ activity }}?"
   )
   q_list = q.loop(scenarios)

   questions = questions + q_list

questions

Output:

[Question('free_text', question_name = """enjoy_activity_0""", question_text = """How much do you enjoy running?"""),
Question('free_text', question_name = """enjoy_activity_1""", question_text = """How much do you enjoy reading?"""),
Question('free_text', question_name = """hate_activity_0""", question_text = """How much do you hate running?"""),
Question('free_text', question_name = """hate_activity_1""", question_text = """How much do you hate reading?"""),
Question('free_text', question_name = """love_activity_0""", question_text = """How much do you love running?"""),
Question('free_text', question_name = """love_activity_1""", question_text = """How much do you love reading?""")]

We can see that the question names and texts have been parameterized with the values of sentiments and scenarios, and the question names have been automatically incremented to ensure uniqueness. We can then pass the questions to a survey and run it:

survey = Survey(questions = questions)

results = survey.run()

Designing AI agents

A key feature of EDSL is the ability to design AI agents with personas and other traits for language models to use in responding to questions. The use of agents allows us to simulate survey results for target audiences at scale. This is done by creating Agent objects with dictionaries of desired traits and adding them to questions when they are run. For example, if we want a question answered by an AI agent representing a student we can create an Agent object with a relevant persona and attributes:

from edsl import Agent

agent = Agent(traits = {
   "persona": "You are a student...", # can be an extended text
   "age": 20, # individual trait values can be useful for analysis
   "current_grade": "college sophomore"
   })

To generate a response for the agent, we pass it to the by method when we run the question:

results = q.by(agent).run()

We can also generate responses for multiple agents at once by passing them as a list:

from edsl import AgentList, Agent

agents = AgentList(
   Agent(traits = {"persona":p}) for p in ["Dog catcher", "Magician", "Spy"]
)

results = q.by(scenarios).by(agents).run()
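
We can then select the persona trait alongside the answers to compare responses across agents (a minimal sketch):

results.select("agent.persona", "answer.*").print(format="rich")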

The Results will contain a Result for each agent that answered the question. To learn more about designing agents, please see the Agents section.

Specifying language models

In the above examples we did not specify a language model for the question or survey, so the default model was used (run Model() to check the current default model). Similar to the way that we optionally passed scenarios to a question and added AI agents, we can also use the by method to specify one or more LLMs to use in generating results. This is done by creating Model objects for desired models and optionally specifying model parameters, such as temperature.

To check available models:

from edsl import Model

Model.available()

This will return a list of names of models that we can choose from.

We can also check the models for which we have already added API keys:

Model.check_models()

See instructions on storing API Keys for the models that you want to use, or activating Remote Inference to use the Expected Parrot server to access available models.

To specify models for a survey we first create Model objects:

from edsl import ModelList, Model

models = ModelList(
   Model(m) for m in ['gpt-4o', 'gemini-1.5-pro']
)

Then we add them to a question or survey with the by method when running it:

results = q.by(models).run()

If scenarios and/or agents are also specified, each component is added in its own by call, chained together in any order, with the run method appended last:

results = q.by(scenarios).by(agents).by(models).run()

Note that multiple scenarios, agents and models are always passed as lists in the same by call.
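
Model parameters such as temperature can also be set when a Model object is created (a minimal sketch; other parameter names and defaults vary by model and are not shown):

from edsl import Model

model = Model("gpt-4o", temperature = 0.5) # temperature is optional

results = q.by(model).run()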

Learn more about specifying question scenarios, agents and language models and their parameters in the Scenarios, Agents and Language Models sections.

Question type classes

QuestionMultipleChoice class

A subclass of the Question class for creating multiple choice questions where the response is a single option selected from a list of options. It specially requires a question_options list of strings for the options. Example usage:

from edsl import QuestionMultipleChoice

q = QuestionMultipleChoice(
   question_name = "color",
   question_text = "What is your favorite color?",
   question_options = ["Red", "Blue", "Green", "Yellow"]
)

An example can also be created using the example method:

QuestionMultipleChoice.example()
class edsl.questions.QuestionMultipleChoice.MultipleChoiceResponseValidator(response_model: type[BaseModel], exception_to_throw: Exception | None = None, override_answer: dict | None = None, **kwargs)[source]

Bases: ResponseValidatorABC

fix(response, verbose=False)[source]
invalid_examples = [({'answer': -1}, {'question_options': ['Good', 'Great', 'OK', 'Bad']}, 'Answer code must be a non-negative integer'), ({'answer': None}, {'question_options': ['Good', 'Great', 'OK', 'Bad']}, 'Answer code must not be missing.')][source]
required_params: List[str] = ['question_options', 'use_code'][source]
valid_examples = [({'answer': 1}, {'question_options': ['Good', 'Great', 'OK', 'Bad']})][source]
class edsl.questions.QuestionMultipleChoice.QuestionMultipleChoice(question_name: str, question_text: str, question_options: list[str] | list[list] | list[float] | list[int], include_comment: bool = True, use_code: bool = False, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

This question prompts the agent to select one option from a list of options.

https://docs.expectedparrot.com/en/latest/questions.html#questionmultiplechoice-class

__init__(question_name: str, question_text: str, question_options: list[str] | list[list] | list[float] | list[int], include_comment: bool = True, use_code: bool = False, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Instantiate a new QuestionMultipleChoice.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • question_options – The options the agent should select from.

  • include_comment – Whether to include a comment field.

  • use_code – Whether to use code for the options.

  • answering_instructions – Instructions for the question.

  • question_presentation – The presentation of the question.

  • permissive – Whether to force the answer to be one of the options.

create_response_model(replacement_dict: dict = None)[source]
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]

Return the HTML version of the question.

response_validator_class[source]

alias of MultipleChoiceResponseValidator

edsl.questions.QuestionMultipleChoice.create_response_model(choices: List[str], permissive: bool = False)[source]

Create a ChoiceResponse model class with a predefined list of choices.

Parameters:
  • choices – A list of allowed values for the answer field.

  • permissive – If True, any value will be accepted as an answer.

Returns:

A new Pydantic model class.

QuestionCheckBox class

A subclass of the Question class for creating questions where the response is a list of one or more of the given options. It specially requires a question_options list of strings for the options. The minimum number of options that must be selected and the maximum number that may be selected can be specified when creating the question (parameters min_selections and max_selections). If not specified, the minimum number of options that must be selected is 1 and the maximum allowed is the number of question options provided. Example usage:

from edsl import QuestionCheckBox

q = QuestionCheckBox(
   question_name = "favorite_days",
   question_text = "What are your 2 favorite days of the week?",
   question_options = ["Monday", "Tuesday", "Wednesday",
   "Thursday", "Friday", "Saturday", "Sunday"],
   min_selections = 2, # optional
   max_selections = 2  # optional
)

An example can also be created using the example method:

QuestionCheckBox.example()
class edsl.questions.QuestionCheckBox.CheckBoxResponseValidator(response_model: type[BaseModel], exception_to_throw: Exception | None = None, override_answer: dict | None = None, **kwargs)[source]

Bases: ResponseValidatorABC

custom_validate(response) BaseResponse[source]
fix(response, verbose=False)[source]
invalid_examples = [({'answer': [-1]}, {'question_options': ['Good', 'Great', 'OK', 'Bad']}, 'Answer code must be a non-negative integer'), ({'answer': 1}, {'question_options': ['Good', 'Great', 'OK', 'Bad']}, 'Answer code must be a list'), ({'answer': [1, 2, 3, 4]}, {'max_selections': 2, 'min_selections': 1, 'question_options': ['Good', 'Great', 'OK', 'Bad']}, 'Too many options selected')][source]
required_params: List[str] = ['question_options', 'min_selections', 'max_selections', 'use_code', 'permissive'][source]
valid_examples = [({'answer': [1, 2]}, {'question_options': ['Good', 'Great', 'OK', 'Bad']})][source]
class edsl.questions.QuestionCheckBox.QuestionCheckBox(question_name: str, question_text: str, question_options: list[str], min_selections: int | None = None, max_selections: int | None = None, include_comment: bool = True, use_code: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

This question prompts the agent to select options from a list.

__init__(question_name: str, question_text: str, question_options: list[str], min_selections: int | None = None, max_selections: int | None = None, include_comment: bool = True, use_code: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Instantiate a new QuestionCheckBox.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • question_options – The options the respondent should select from.

  • min_selections – The minimum number of options that must be selected.

  • max_selections – The maximum number of options that must be selected.

create_response_model()[source]
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]
response_validator_class[source]

alias of CheckBoxResponseValidator

edsl.questions.QuestionCheckBox.create_checkbox_response_model(choices: list, min_selections: int | None = None, max_selections: int | None = None, permissive: bool = False)[source]

Dynamically create a CheckboxResponse model with a predefined list of choices.

Parameters:
  • choices – A list of allowed values for the answer field.

  • include_comment – Whether to include a comment field in the model.

Returns:

A new Pydantic model class.

QuestionFreeText class

A subclass of the Question class for creating free response questions. There are no specially required fields (only question_name and question_text). The response is a single string of text. Example usage:

from edsl import QuestionFreeText

q = QuestionFreeText(
   question_name = "food",
   question_text = "What is your favorite food?"
)

An example can also be created using the example method:

QuestionFreeText.example()
class edsl.questions.QuestionFreeText.FreeTextResponse(*, answer: str, generated_tokens: str | None = None)[source]

Bases: BaseModel

Validator for free text response questions.

answer: str[source]
generated_tokens: str | None[source]
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}[source]

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'answer': FieldInfo(annotation=str, required=True), 'generated_tokens': FieldInfo(annotation=Union[str, NoneType], required=False, default=None)}[source]

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

class edsl.questions.QuestionFreeText.FreeTextResponseValidator(response_model: type[BaseModel], exception_to_throw: Exception | None = None, override_answer: dict | None = None, **kwargs)[source]

Bases: ResponseValidatorABC

fix(response, verbose=False)[source]
invalid_examples = [({'answer': None}, {}, 'Answer code must not be missing.')][source]
required_params: List[str] = [][source]
valid_examples = [({'answer': 'This is great'}, {})][source]
class edsl.questions.QuestionFreeText.QuestionFreeText(question_name: str, question_text: str, answering_instructions: str | None = None, question_presentation: str | None = None)[source]

Bases: QuestionBase

This question prompts the agent to respond with free text.

__init__(question_name: str, question_text: str, answering_instructions: str | None = None, question_presentation: str | None = None)[source]

Instantiate a new QuestionFreeText.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]
question_name: str[source]
question_text: str[source]
response_validator_class[source]

alias of FreeTextResponseValidator

QuestionLinearScale class

A subclass of the QuestionMultipleChoice class for creating linear scale questions. It requires a question_options list of integers for the scale. The option_labels parameter can be used to specify labels for the scale options. Example usage:

from edsl import QuestionLinearScale

q = QuestionLinearScale(
   question_name = "studying",
   question_text = """On a scale from 0 to 5, how much do you
   enjoy studying? (0 = not at all, 5 = very much)""",
   question_options = [0, 1, 2, 3, 4, 5], # integers
   option_labels = {0: "Not at all", 5: "Very much"} # optional
)

An example can also be created using the example method:

QuestionLinearScale.example()
class edsl.questions.derived.QuestionLinearScale.QuestionLinearScale(question_name: str, question_text: str, question_options: list[int], option_labels: dict[int, str] | None = None, answering_instructions: str | None = None, question_presentation: str | None = None, include_comment: bool | None = True)[source]

Bases: QuestionMultipleChoice

This question prompts the agent to respond to a statement on a linear scale.

__init__(question_name: str, question_text: str, question_options: list[int], option_labels: dict[int, str] | None = None, answering_instructions: str | None = None, question_presentation: str | None = None, include_comment: bool | None = True)[source]

Instantiate a new QuestionLinearScale.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • question_options – The options the respondent should select from.

  • option_labels – Maps question_options to labels.

  • instructions – Instructions for the question. If not provided, the default instructions are used. To view them, run QuestionLinearScale.default_instructions.

classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]

QuestionNumerical class

A subclass of the Question class for creating questions where the response is a numerical value. The minimum and maximum values of the answer can be specified using the min_value and max_value parameters. Example usage:

from edsl import QuestionNumerical

q = QuestionNumerical(
   question_name = "work_days",
   question_text = "How many days a week do you normally work?",
   min_value = 1, # optional
   max_value = 7  # optional
)

An example can also be created using the example method:

QuestionNumerical.example()
class edsl.questions.QuestionNumerical.NumericalResponseValidator(response_model: type[BaseModel], exception_to_throw: Exception | None = None, override_answer: dict | None = None, **kwargs)[source]

Bases: ResponseValidatorABC

fix(response, verbose=False)[source]
invalid_examples = [({'answer': 10}, {'max_value': 5, 'min_value': 0}, 'Answer is out of range'), ({'answer': 'ten'}, {'max_value': 5, 'min_value': 0}, 'Answer is not a number'), ({}, {'max_value': 5, 'min_value': 0}, 'Answer key is missing')][source]
required_params: List[str] = ['min_value', 'max_value', 'permissive'][source]
valid_examples = [({'answer': 1}, {'max_value': 10, 'min_value': 0}), ({'answer': 1}, {'max_value': None, 'min_value': None})][source]
class edsl.questions.QuestionNumerical.QuestionNumerical(question_name: str, question_text: str, min_value: int | float | None = None, max_value: int | float | None = None, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

This question prompts the agent to answer with a numerical value.

>>> QuestionNumerical.self_check()
__init__(question_name: str, question_text: str, min_value: int | float | None = None, max_value: int | float | None = None, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Initialize the question.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • min_value – The minimum value of the answer.

  • max_value – The maximum value of the answer.

create_response_model()[source]
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]
response_validator_class[source]

alias of NumericalResponseValidator

edsl.questions.QuestionNumerical.create_numeric_response(min_value: float | None = None, max_value: float | None = None, permissive=False)[source]

QuestionLikertFive class

A subclass of the QuestionMultipleChoice class for creating questions where the answer is a response to a given statement on a 5-point Likert scale. (The scale does not need to be added as a parameter.) Example usage:

from edsl import QuestionLikertFive

q = QuestionLikertFive(
   question_name = "happy",
   question_text = "I am only happy when it rains."
)

An example can also be created using the example method:

QuestionLikertFive.example()
class edsl.questions.derived.QuestionLikertFive.QuestionLikertFive(question_name: str, question_text: str, question_options: list[str] | None = ['Strongly disagree', 'Disagree', 'Neutral', 'Agree', 'Strongly agree'], answering_instructions: str | None = None, question_presentation: str | None = None, include_comment: bool = True)[source]

Bases: QuestionMultipleChoice

This question prompts the agent to respond to a statement on a 5-point Likert scale.

__init__(question_name: str, question_text: str, question_options: list[str] | None = ['Strongly disagree', 'Disagree', 'Neutral', 'Agree', 'Strongly agree'], answering_instructions: str | None = None, question_presentation: str | None = None, include_comment: bool = True)[source]

Initialize the question.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • question_options – The options the respondent should select from (list of strings). If not provided, the default Likert options are used ([‘Strongly disagree’, ‘Disagree’, ‘Neutral’, ‘Agree’, ‘Strongly agree’]). To view them, run QuestionLikertFive.likert_options.

classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]

QuestionRank class

A subclass of the Question class for creating questions where the response is a ranked list of options. It specially requires a question_options list of strings for the options. The number of options that must be selected can be optionally specified when creating the question. If not specified, all options are included (ranked) in the response. Example usage:

from edsl import QuestionRank

q = QuestionRank(
   question_name = "foods_rank",
   question_text = "Rank the following foods.",
   question_options = ["Pizza", "Pasta", "Salad", "Soup"],
   num_selections = 2 # optional
)

An example can also be created using the example method:

QuestionRank.example()

Alternatively, QuestionTopK can be used to ask the respondent to select a specific number of options from a list. (See the next section for details.)

class edsl.questions.QuestionRank.QuestionRank(question_name: str, question_text: str, question_options: list[str], num_selections: int | None = None, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False, use_code: bool = True, include_comment: bool = True)[source]

Bases: QuestionBase

This question prompts the agent to rank options from a list.

__init__(question_name: str, question_text: str, question_options: list[str], num_selections: int | None = None, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False, use_code: bool = True, include_comment: bool = True)[source]

Initialize the question.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • question_options – The options the respondent should select from.

  • min_selections – The minimum number of options that must be selected.

  • max_selections – The maximum number of options that must be selected.

create_response_model()[source]
classmethod example(use_code=False, include_comment=True) QuestionRank[source]

Return an example question.

property question_html_content: str[source]
response_validator_class[source]

alias of RankResponseValidator

class edsl.questions.QuestionRank.RankResponseValidator(response_model: type[BaseModel], exception_to_throw: Exception | None = None, override_answer: dict | None = None, **kwargs)[source]

Bases: ResponseValidatorABC

fix(response, verbose=False)[source]
invalid_examples = [][source]
required_params: List[str] = ['num_selections', 'permissive', 'use_code', 'question_options'][source]
valid_examples = [][source]
edsl.questions.QuestionRank.create_response_model(choices: list, num_selections: int | None = None, permissive: bool = False)[source]
Parameters:
  • choices – A list of allowed values for the answer field.

  • include_comment – Whether to include a comment field in the model.

Returns:

A new Pydantic model class.

QuestionTopK class

A subclass of the QuestionCheckBox class for creating questions where the response is a fixed number of options selected from a list. It specially requires a question_options list of strings for the options and the number of options that must be selected, specified by setting min_selections and max_selections to the same value. Example usage:

from edsl import QuestionTopK

q = QuestionTopK(
    question_name = "foods_rank",
    question_text = "Select the best foods.",
    question_options = ["Pizza", "Pasta", "Salad", "Soup"],
    min_selections = 2,
    max_selections = 2
)

An example can also be created using the example method:

QuestionTopK.example()
class edsl.questions.derived.QuestionTopK.QuestionTopK(question_name: str, question_text: str, question_options: list[str], min_selections: int, max_selections: int, question_presentation: str | None = None, answering_instructions: str | None = None, include_comment: bool | None = True, use_code: bool | None = True)[source]

Bases: QuestionCheckBox

This question prompts the agent to select exactly K options from a list.

__init__(question_name: str, question_text: str, question_options: list[str], min_selections: int, max_selections: int, question_presentation: str | None = None, answering_instructions: str | None = None, include_comment: bool | None = True, use_code: bool | None = True)[source]

Initialize the question.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • question_options – The options the respondent should select from.

  • instructions – Instructions for the question. If not provided, the default instructions are used. To view them, run QuestionTopK.default_instructions.

  • num_selections – The number of options that must be selected.

classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
question_options: list[str][source]

QuestionYesNo class

A subclass of the QuestionMultipleChoice class for creating multiple choice questions where the answer options are already specified: [‘Yes’, ‘No’]. Example usage:

from edsl import QuestionYesNo

q = QuestionYesNo(
    question_name = "student",
    question_text = "Are you a student?"
)

An example can also be created using the example method:

QuestionYesNo.example()
class edsl.questions.derived.QuestionYesNo.QuestionYesNo(question_name: str, question_text: str, question_options: list[str] = ['No', 'Yes'], answering_instructions: str | None = None, question_presentation: str | None = None, include_comment: bool | None = True)[source]

Bases: QuestionMultipleChoice

This question prompts the agent to respond with ‘Yes’ or ‘No’.

__init__(question_name: str, question_text: str, question_options: list[str] = ['No', 'Yes'], answering_instructions: str | None = None, question_presentation: str | None = None, include_comment: bool | None = True)[source]

Instantiate a new QuestionYesNo.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • instructions – Instructions for the question. If not provided, the default instructions are used. To view them, run QuestionYesNo.default_instructions.

classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]

QuestionList class

A subclass of the Question class for creating questions where the response is a list of strings. The maximum number of items in the list can be specified using the max_list_items parameter. Example usage:

from edsl import QuestionList

q = QuestionList(
    question_name = "activities",
    question_text = "What activities do you enjoy most?",
    max_list_items = 5 # optional
)

An example can also be created using the example method:

QuestionList.example()
class edsl.questions.QuestionList.ListResponseValidator(response_model: type[BaseModel], exception_to_throw: Exception | None = None, override_answer: dict | None = None, **kwargs)[source]

Bases: ResponseValidatorABC

fix(response, verbose=False)[source]
invalid_examples = [({'answer': ['hello', 'world', 'this', 'is', 'a', 'test']}, {'max_list_items': 5}, 'Too many items.')][source]
required_params: List[str] = ['max_list_items', 'permissive'][source]
valid_examples = [({'answer': ['hello', 'world']}, {'max_list_items': 5})][source]
class edsl.questions.QuestionList.QuestionList(question_name: str, question_text: str, max_list_items: int | None = None, include_comment: bool = True, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

This question prompts the agent to answer by providing a list of items as comma-separated strings.

__init__(question_name: str, question_text: str, max_list_items: int | None = None, include_comment: bool = True, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Instantiate a new QuestionList.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • max_list_items – The maximum number of items that can be in the answer list.

>>> QuestionList.example().self_check()
create_response_model()[source]
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]
response_validator_class[source]

alias of ListResponseValidator

edsl.questions.QuestionList.convert_string(s)[source]

Convert a string to a more appropriate type if possible.

>>> convert_string("3.14")
3.14
>>> convert_string("42")
42
>>> convert_string("hello")
'hello'
>>> convert_string('{"key": "value"}')
{'key': 'value'}
>>> convert_string("{'key': 'value'}")
{'key': 'value'}
edsl.questions.QuestionList.create_model(max_list_items: int, permissive)[source]

QuestionBudget class

A subclass of the Question class for creating questions where the response is an allocation of a sum among a list of options in the form of a dictionary where the keys are the options and the values are the allocated amounts. It specially requires a question_options list of strings for the options and a budget_sum number for the total sum to be allocated. Example usage:

from edsl import QuestionBudget

q = QuestionBudget(
   question_name = "food_budget",
   question_text = "How would you allocate $100?",
   question_options = ["Pizza", "Ice cream", "Burgers", "Salad"],
   budget_sum = 100
)

An example can also be created using the example method:

QuestionBudget.example()
class edsl.questions.QuestionBudget.BudgewResponseValidator(response_model: type[BaseModel], exception_to_throw: Exception | None = None, override_answer: dict | None = None, **kwargs)[source]

Bases: ResponseValidatorABC

fix(response, verbose=False)[source]
invalid_examples = [][source]
valid_examples = [][source]
class edsl.questions.QuestionBudget.QuestionBudget(question_name: str, question_text: str, question_options: list[str], budget_sum: int, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

This question prompts the agent to allocate a budget among options.

__init__(question_name: str, question_text: str, question_options: list[str], budget_sum: int, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Instantiate a new QuestionBudget.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • question_options – The options for allocation of the budget sum.

  • budget_sum – The total amount of the budget to be allocated among the options.

create_response_model()[source]
classmethod example(include_comment: bool = True) QuestionBudget[source]

Return an example of a budget question.

property question_html_content: str[source]
response_validator_class[source]

alias of BudgewResponseValidator

edsl.questions.QuestionBudget.create_budget_model(budget_sum: float, permissive: bool, question_options: List[str])[source]

QuestionExtract class

A subclass of the Question class for creating questions where the response is information extracted (or extrapolated) from a given text and formatted according to a specified template. Example usage:

from edsl import QuestionExtract

q = QuestionExtract(
    question_name = "course_schedule",
    question_text = """This semester we are offering courses on
    calligraphy on Friday mornings.""",
    answer_template = {"course_topic": "AI", "days": ["Monday",
    "Wednesday"]}
)

An example can also be created using the example method:

QuestionExtract.example()
class edsl.questions.QuestionExtract.ExtractResponseValidator(response_model: type[BaseModel], exception_to_throw: Exception | None = None, override_answer: dict | None = None, **kwargs)[source]

Bases: ResponseValidatorABC

custom_validate(response) BaseResponse[source]
fix(response, verbose=False)[source]
invalid_examples = [({'answer': None}, {'answer_template': {'name': 'John Doe', 'profession': 'Carpenter'}}, 'Result cannot be empty')][source]
required_params: List[str] = ['answer_template'][source]
valid_examples = [({'answer': 'This is great'}, {})][source]
class edsl.questions.QuestionExtract.QuestionExtract(question_text: str, answer_template: dict[str, Any], question_name: str, answering_instructions: str = None, question_presentation: str = None)[source]

Bases: QuestionBase

This question prompts the agent to extract information from a string and return it in a given template.

__init__(question_text: str, answer_template: dict[str, Any], question_name: str, answering_instructions: str = None, question_presentation: str = None)[source]

Initialize the question.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • question_options – The options the respondent should select from.

  • answer_template – The template for the answer.

create_response_model()[source]
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]
response_validator_class[source]

alias of ExtractResponseValidator

edsl.questions.QuestionExtract.dict_to_pydantic_model(input_dict: Dict[str, Any]) Any[source]
edsl.questions.QuestionExtract.extract_json(text, expected_keys, verbose=False)[source]

QuestionFunctional class

A subclass of the Question class for creating questions where the response is generated by a function instead of a language model. The question type is not intended to be used directly in a survey, but rather to generate responses for other questions. This can be useful where a model is not needed for part of a survey, for questions that require some kind of initial computation, or for questions that are the result of a multi-step process. The question type lets us define a function func that takes in a scenario and (optional) agent traits and returns an answer.

Example usage:

Say we have some survey results where we asked some agents to pick a random number:

from edsl import QuestionNumerical, Agent

q_random = QuestionNumerical(
   question_name = "random",
   question_text = "Choose a random number between 1 and 1000."
)

agents = [Agent({"persona":p}) for p in ["Dog catcher", "Magician", "Spy"]]

results = q_random.by(agents).run()
results.select("persona", "random").print(format="rich")

The results are:

┏━━━━━━━━━━━━━┳━━━━━━━━━┓
┃ agent       ┃ answer  ┃
┃ .persona    ┃ .random ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━┩
│ Dog catcher │ 472     │
├─────────────┼─────────┤
│ Magician    │ 537     │
├─────────────┼─────────┤
│ Spy         │ 528     │
└─────────────┴─────────┘

We can use QuestionFunctional to evaluate the responses using a function instead of calling the language model to answer another question. The responses are passed to the function as scenarios, and then the function is passed to the QuestionFunctional object:

from edsl import QuestionFunctional

def my_function(scenario, agent_traits):
   if scenario.get("persona") == "Magician":
      return "Magicians never pick randomly!"
   elif scenario.get("random") > 500:
      return "Top half"
   else:
      return "Bottom half"

q_evaluate = QuestionFunctional(
   question_name = "evaluate",
   func = my_function
)

Next we turn the responses into scenarios for the function:

scenarios = results.select("persona", "random").to_scenarios()
scenarios

We can inspect the scenarios:

[Scenario({'persona': 'Dog catcher', 'random': 472}),
Scenario({'persona': 'Magician', 'random': 537}),
Scenario({'persona': 'Spy', 'random': 528})]

Finally, we run the function with the scenarios:

results = q_evaluate.by(scenarios).run()
results.select("persona", "random", "evaluate").print(format="rich")

The results are:

┏━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ scenario    ┃ scenario ┃ answer                         ┃
┃ .persona    ┃ .random  ┃ .evaluate                      ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Dog catcher │ 472      │ Bottom half                    │
├─────────────┼──────────┼────────────────────────────────┤
│ Magician    │ 537      │ Magicians never pick randomly! │
├─────────────┼──────────┼────────────────────────────────┤
│ Spy         │ 528      │ Top half                       │
└─────────────┴──────────┴────────────────────────────────┘

Another example of QuestionFunctional can be seen in the following notebook, where we give agents different instructions for generating random numbers and then use a function to identify whether the responses are identical.

Example notebook: Simulating randomness

class edsl.questions.QuestionFunctional.QuestionFunctional(question_name: str, func: Callable | None = None, question_text: str | None = 'Functional question', requires_loop: bool | None = False, function_source_code: str | None = None, function_name: str | None = None, unsafe: bool | None = False)[source]

Bases: QuestionBase

A special type of question that is not answered by an LLM.

>>> from edsl import Scenario, Agent

# Create an instance of QuestionFunctional with the new function
>>> question = QuestionFunctional.example()

# Activate and test the function
>>> question.activate()
>>> scenario = Scenario({"numbers": [1, 2, 3, 4, 5]})
>>> agent = Agent(traits={"multiplier": 10})
>>> results = question.by(scenario).by(agent).run(disable_remote_cache = True, disable_remote_inference = True)
>>> results.select("answer.*").to_list()[0] == 150
True

# Serialize the question to a dictionary

>>> from edsl.questions.QuestionBase import QuestionBase
>>> new_question = QuestionBase.from_dict(question.to_dict())
>>> results = new_question.by(scenario).by(agent).run(disable_remote_cache = True, disable_remote_inference = True)
>>> results.select("answer.*").to_list()[0] == 150
True
__init__(question_name: str, func: Callable | None = None, question_text: str | None = 'Functional question', requires_loop: bool | None = False, function_source_code: str | None = None, function_name: str | None = None, unsafe: bool | None = False)[source]
activate()[source]
activate_loop()[source]

Activate the function with loop logic using RestrictedPython.

activated = True[source]
answer_question_directly(scenario, agent_traits=None)[source]

Return the answer to the question, ensuring the function is activated.

default_instructions = ''[source]
classmethod example()[source]
function_name = ''[source]
function_source_code = ''[source]
property question_html_content: str[source]
question_name: str[source]
question_text: str[source]
response_validator_class = None[source]
to_dict(add_edsl_version=True)[source]

Convert the question to a dictionary that includes the question type (used in deserialization).

>>> from edsl import QuestionFreeText as Q; Q.example().to_dict(add_edsl_version = False)
{'question_name': 'how_are_you', 'question_text': 'How are you?', 'question_type': 'free_text'}
edsl.questions.QuestionFunctional.calculate_sum_and_multiply(scenario, agent_traits)[source]

Optional question parameters

Examples of optional question parameters:

include_comment - This boolean parameter can be used to exclude the comment field that is added by default to all question types other than free text (default: include_comment = True). Example usage:

from edsl import QuestionNumerical, Survey

q1 = QuestionNumerical(
   question_name = "adding_v1",
   question_text = "What is 1+1?"
)

# The same question with the comment field excluded
q2 = QuestionNumerical(
   question_name = "adding_v2",
   question_text = "What is 1+1?",
   include_comment = False
)

job = Survey([q1, q2]).to_jobs()

job.prompts().select("user_prompt", "question_name").print(format="rich")

We can see that the second version of the question does not include the comment instruction “After the answer, put a comment explaining your choice on the next line.”:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ user_prompt                                                                                     ┃ question_name ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ What is 1+1?                                                                                    │ adding_v1     │
│                                                                                                 │               │
│ This question requires a numerical response in the form of an integer or decimal (e.g., -12, 0, │               │
│ 1, 2, 3.45, ...).                                                                               │               │
│ Respond with just your number on a single line.                                                 │               │
│ If your response is equivalent to zero, report '0'                                              │               │
│                                                                                                 │               │
│                                                                                                 │               │
│ After the answer, put a comment explaining your choice on the next line.                        │               │
├─────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────┤
│ What is 1+1?                                                                                    │ adding_v2     │
│                                                                                                 │               │
│ This question requires a numerical response in the form of an integer or decimal (e.g., -12, 0, │               │
│ 1, 2, 3.45, ...).                                                                               │               │
│ Respond with just your number on a single line.                                                 │               │
│ If your response is equivalent to zero, report '0'                                              │               │
└─────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────┘

When we run the survey, the comment field will be included in the results for the first question but not the second:

results = job.run()
results.select("comment.*").print(format="rich")

Output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┓
┃ comment                     ┃ comment            ┃
┃ .adding_v1_comment          ┃ .adding_v2_comment ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━┩
│ The sum of 1 and 1 is 2.    │ None               │
└─────────────────────────────┴────────────────────┘

See the Prompts section for more information about various methods for inspecting user and system prompts.

question_presentation and answering_instructions - These parameters can be used to add additional context or modify the default instructions of a question.

  • The parameter question_presentation interacts with the question text to specify how the question should be presented to the model (e.g., to modify the default instructions for a question).

  • The parameter answering_instructions is added to the end of the question text without modifying it. It can be used to specify how the model should answer the question and can be useful for questions that require a specific format for the answer.

Example usage:

from edsl import QuestionNumerical, Survey

q = QuestionNumerical(
   question_name = "adding",
   question_text = "What is 1+1?",
   question_presentation = "Please solve the following addition problem: {{ question_text }}",
   answering_instructions = "\n\nRespond with just your number on a single line."
)

job = Survey([q]).to_jobs()

job.prompts().select("user_prompt").print(format="rich")

Output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ user_prompt                                               ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Please solve the following addition problem: What is 1+1? │
│                                                           │
│ Respond with just your number on a single line.           │
└───────────────────────────────────────────────────────────┘

See the Prompts section for more information about various methods for inspecting user and system prompts.

Other classes & methods

Settings for the questions module.

class edsl.questions.settings.Settings[source]

Bases: object

Settings for the questions module.

MAX_ANSWER_LENGTH = 2000[source]
MAX_EXPRESSION_CONSTRAINT_LENGTH = 1000[source]
MAX_NUM_OPTIONS = 200[source]
MAX_OPTION_LENGTH = 10000[source]
MAX_QUESTION_LENGTH = 100000[source]
MIN_NUM_OPTIONS = 2[source]
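
These limits can be inspected directly from the Settings class (a minimal sketch):

from edsl.questions.settings import Settings

Settings.MAX_NUM_OPTIONS # maximum number of options allowed in a question (200)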