Questions

EDSL provides templates for many common question types, including free text, multiple choice, checkbox, linear scale, matrix, numerical, dictionary and others. The Question class has subclasses for each of these types: QuestionFreeText, QuestionMultipleChoice, QuestionCheckBox, QuestionLinearScale, QuestionMatrix, QuestionNumerical, QuestionDict, etc., which have methods for validating answers and responses from language models.

Question type templates

A question is constructed by creating an instance of a question type class and passing the required fields. Questions are formatted as dictionaries with specific keys based on the question type.

Question types

The following question types are available:

  • QuestionFreeText - free text questions

  • QuestionMultipleChoice - multiple choice questions

  • QuestionCheckBox - checkbox questions

  • QuestionLinearScale - linear scale questions

  • QuestionMatrix - matrix questions

  • QuestionDict - dictionary questions

  • QuestionLikertFive - Likert scale questions

  • QuestionRank - ranked list questions

  • QuestionTopK - top-k list questions

  • QuestionYesNo - yes/no questions (multiple choice with fixed options)

  • QuestionNumerical - numerical questions

  • QuestionList - list questions (the response is formatted as a list of strings)

  • QuestionBudget - budget allocation questions (the response is a dictionary of allocated amounts)

  • QuestionExtract - information extraction questions (the response is formatted according to a specified template)

  • QuestionFunctional - functional questions (the response is generated by a function)

Base parameters

All question types require a question_name and question_text. The question_name is a unique Pythonic identifier for a question (e.g., “favorite_color”). The question_text is the text of the question itself, written as a string (e.g., “What is your favorite color?”). Most question types other than free text also require a question_options list of possible answer options. The question_options list can be a list of strings, floats, integers, dictionaries, lists or other data types depending on the question type. (See a demo notebook of allowed question options types at Coop.)

For example, to create a multiple choice question where the response should be a single option selected from a list of colors, we import the QuestionMultipleChoice class and create an instance of it with the required fields:

from edsl import QuestionMultipleChoice

q = QuestionMultipleChoice(
   question_name = "favorite_color",
   question_text = "What is your favorite color?",
   question_options = ["Red", "Orange", "Yellow", "Green", "Blue", "Purple"]
)

Special parameters

Some question types have additional parameters that are either optional or required to be added to the question when it is created:

answer_keys - Required parameter of dict (dictionary) questions to specify the keys that the respondent must provide in their answer. For example, in a dictionary question where the respondent must provide the names of their favorite colors and the number of times they have worn each color in the past week:

from edsl import QuestionDict

q = QuestionDict(
   question_name = "favorite_colors",
   question_text = "What are your favorite colors and how many times have you worn each color in the past week?",
   answer_keys = ["color", "times_worn"]
)

value_types - Optional parameter of dict questions to specify the types of the values that the respondent must provide for each key. Permissible types are str, int, float, list and their string representations (e.g., “str”, “int”, “float”, “list”). For example, in a dictionary question where the respondent must provide the names of their favorite colors and the number of times they have worn each color in the past week, and the values must be a string and an integer, respectively:

from edsl import QuestionDict

q = QuestionDict(
   question_name = "favorite_colors",
   question_text = "What are your favorite colors and how many times have you worn each color in the past week?",
   answer_keys = ["color", "times_worn"],
   value_types = [str, int] # optional
)

Note that the types can be provided as actual types (e.g., str, int, float, list) or as their string representations (e.g., "str", "int", "float", "list").

value_descriptions - Optional parameter of dict questions to specify descriptions of the values that the respondent must provide for each key. For example, in a dictionary question where the respondent is asked to draft a tweet and provide the text and the number of characters in the tweet:

from edsl import QuestionDict

q = QuestionDict(
   question_name = "tweet",
   question_text = "Draft a tweet.",
   answer_keys = ["text", "characters"],
   value_descriptions = ["The text of the tweet", "The number of characters in the tweet"] # optional
)

question_items - Required parameter of matrix questions to specify the row items. For example, in a matrix question where the respondent must rate a list of items on a scale of 1 to 5:

from edsl import QuestionMatrix

q = QuestionMatrix(
   question_name = "rate_items",
   question_text = "Please rate the following items on a scale of 1 to 5.",
   question_items = ["Item 1", "Item 2", "Item 3"],
   question_options = [1, 2, 3, 4, 5],
   option_labels = {1: "Terrible", 5: "Excellent"} # optional
)

option_labels - Optional parameter of linear_scale and matrix questions to specify labels for the scale options. For example, in a linear scale question where the response must be an integer between 1 and 5 reflecting the respondent’s agreement with a statement:

from edsl import QuestionLinearScale

q = QuestionLinearScale(
   question_name = "agree",
   question_text = "Please indicate whether you agree with the following statement: I am only happy when it rains.",
   question_options = [1, 2, 3, 4, 5],
   option_labels = {1: "Strongly disagree", 5: "Strongly agree"} # optional
)

min_selections and max_selections - Optional parameters of checkbox and rank questions to specify the minimum and maximum number of options that can be selected. For example, in a checkbox question where the response must include at least 2 and at most 3 of the options:

from edsl import QuestionCheckBox

q = QuestionCheckBox(
   question_name = "favorite_days",
   question_text = "What are your 2-3 favorite days of the week?",
   question_options = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"],
   min_selections = 2, # optional
   max_selections = 3 # optional
)

min_value and max_value - Optional parameters of numerical questions to specify the minimum and maximum values that can be entered. For example, in a numerical question where the respondent must enter a number between 1 and 100:

from edsl import QuestionNumerical

q = QuestionNumerical(
   question_name = "age",
   question_text = "How old are you (in years)?",
   min_value = 1, # optional
   max_value = 100 # optional
)

num_selections - Optional parameter of rank questions to specify the number of options that must be ranked. For example, in a rank question where the respondent must rank their top 3 favorite foods:

from edsl import QuestionRank

q = QuestionRank(
   question_name = "foods_rank",
   question_text = "Rank your top 3 favorite foods.",
   question_options = ["Pizza", "Pasta", "Salad", "Soup"],
   num_selections = 3 # optional
)

use_code - Optional boolean parameter of multiple_choice questions that adds an instruction to the user_prompt for the model to provide the code number of the question option that it selects as its answer (i.e., 0, 1, 2, etc.) instead of the value of the option. This is useful when the question options are long or complex, or include formatting that a model might reproduce inexactly, which would otherwise cause a validation error that can be avoided by returning the code number instead. The code is then translated back to the option value in the survey results. For example, in a multiple choice question where the agent is instructed to select a programming language, we can add the use_code parameter and then inspect how the user prompt is modified to include “Respond only with the code corresponding to one of the options.” (learn more about constructing agents in the Agents section):

from edsl import QuestionMultipleChoice, Survey, Agent

a = Agent(traits = {"persona":"You are an experienced computer programmer."})

q = QuestionMultipleChoice(
   question_name = "programming_language",
   question_text = "Which programming language do you prefer?",
   question_options = ["Python", "Java", "C++", "JavaScript"],
   use_code = True # optional
)

survey = Survey([q])

survey.by(a).show_prompts()

Output:

user_prompt:

Which programming language do you prefer? 0: Python 1: Java 2: C++ 3: JavaScript

Only 1 option may be selected.

Respond only with the code corresponding to one of the options.

After the answer, you can put a comment explaining why you chose that option on the next line.

system_prompt:

You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are an experienced computer programmer.'}

answer_template - Required parameter of extract questions to specify a template for the extracted information. For example, in an extract question where the respondent must extract information from a given text:

from edsl import QuestionExtract

q = QuestionExtract(
   question_name = "course_schedule",
   question_text = "This semester we are offering courses on calligraphy on Friday mornings.",
   answer_template = {"course_topic": "AI", "days": ["Monday", "Wednesday"]} # required
)

func - Required parameter of functional questions to specify a function that generates the answer. For example, in a functional question where the answer is generated by a function:

from edsl import QuestionFunctional, ScenarioList, Scenario
import random

scenarios = ScenarioList(
   [Scenario({"persona": p, "random": random.randint(0, 1000)}) for p in ["Magician", "Economist"]]
)

def my_function(scenario, agent_traits):
   if scenario.get("persona") == "Magician":
      return "Magicians never pick randomly!"
   elif scenario.get("random") > 500:
      return "Top half"
   else:
      return "Bottom half"

q = QuestionFunctional(
   question_name = "evaluate",
   func = my_function
)

results = q.by(scenarios).run()

results.select("persona", "random", "evaluate")

Example results:

scenario.persona    scenario.random    answer.evaluate
Magician            358                Magicians never pick randomly!
Economist           826                Top half

General optional parameters

The following optional parameters can be added to any question type (examples of each are provided at the end of this section):

include_comment = False - A boolean value that can be added to any question type (other than free text) to exclude the default instruction in the user prompt that instructs the model to include a comment about its response to the question, as well as the comment field that is otherwise automatically included in survey results for the question.

By default, a comment field is automatically added to all question types other than free text. It is a free text field that allows a model to provide any commentary on its response to the question, such as why a certain answer was chosen or how the model arrived at its answer. It can be useful for debugging unexpected responses and reducing the likelihood that a model fails to follow formatting instructions for the main response (e.g., just selecting an answer option) because it wants to be more verbose. It can also be helpful in constructing sequences of questions with context about prior responses (e.g., simulating a chain of thought). (See the survey section for more information about adding question memory to a survey.)
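For example, a minimal sketch of a multiple choice question with the comment field disabled (the question itself is illustrative):

from edsl import QuestionMultipleChoice

q = QuestionMultipleChoice(
   question_name = "season",
   question_text = "What is your favorite season?",
   question_options = ["Spring", "Summer", "Fall", "Winter"],
   include_comment = False # optional; omits the comment instruction and field
)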

question_presentation - A string that can be added to any question type to specify how the question should be presented to the model. It can be used to provide additional context or instructions to the model about how to interpret the question.

answering_instructions - A string that can be added to any question type to specify how the model should answer the question. It can be used to provide additional context or instructions to the model about how to format its response.
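For example, a sketch of a question that overrides the default presentation and answering instructions (the strings shown are illustrative):

from edsl import QuestionFreeText

q = QuestionFreeText(
   question_name = "favorite_movie",
   question_text = "What is your favorite movie?",
   question_presentation = "Please name a movie that you especially enjoy.", # optional
   answering_instructions = "Respond with the movie title only." # optional
)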

permissive = True - A boolean value that can be added to any question type to specify whether the model should be allowed to provide an answer that violates the question constraints (e.g., selecting fewer or more than the allowed number of options in a checkbox question). (By default, permissive is set to False to enforce any question constraints.)
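For example, a sketch of a checkbox question where the selection constraint is not enforced (the question itself is illustrative):

from edsl import QuestionCheckBox

q = QuestionCheckBox(
   question_name = "weekend_days",
   question_text = "Which days of the week are part of the weekend?",
   question_options = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"],
   max_selections = 2,
   permissive = True # optional; answers exceeding max_selections are not rejected
)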

Creating a survey

We can combine multiple questions into a survey by passing them as a list to a Survey object:

from edsl import QuestionLinearScale, QuestionList, QuestionNumerical, Survey

q1 = QuestionLinearScale(
   question_name = "dc_state",
   question_text = "How likely is Washington, D.C. to become a U.S. state?",
   question_options = [1, 2, 3, 4, 5],
   option_labels = {1: "Not at all likely", 5: "Very likely"}
)

q2 = QuestionList(
   question_name = "largest_us_cities",
   question_text = "What are the largest U.S. cities by population?",
   max_list_items = 3
)

q3 = QuestionNumerical(
   question_name = "us_pop",
   question_text = "What was the U.S. population in 2020?"
)

survey = Survey(questions = [q1, q2, q3])

This allows us to administer multiple questions at once, either asynchronously (by default) or according to specified logic (e.g., skip or stop rules). To learn more about designing surveys with conditional logic, please see the Surveys section.

Note: If you want multiple choice question options to be randomized, you can pass an optional parameter questions_to_randomize (a list of the relevant question names) to the Survey object when it is created. See more details about QuestionMultipleChoice below and the Surveys section on randomizing question options.

Simulating a response

We generate a response to a question by delivering it to a language model. This is done by calling the run method for the question:

from edsl import QuestionCheckBox

q = QuestionCheckBox(
   question_name = "primary_colors",
   question_text = "Which of the following colors are primary?",
   question_options = ["Red", "Orange", "Yellow", "Green", "Blue", "Purple"]
)

results = q.run()

This will generate a Results object that contains a single Result representing the response to the question and information about the model used. If the model to be used has not been specified (as in the above example), the run method delivers the question to the default LLM (run Model() to check the current default LLM). We can inspect the response and model used by calling the select method and passing it the names of components of the results that we want to display in a table. For example, we can print just the answer to the question:

results.select("primary_colors")

Output:

answer.primary_colors

['Red', 'Yellow', 'Blue']

Or to inspect the model:

results.select("model")

Output:

model.model

gpt-4o

If questions have been combined in a survey, the run method is called directly on the survey instead:

from edsl import QuestionLinearScale, QuestionList, QuestionNumerical, Survey

q1 = QuestionList(
   question_name = "largest_us_cities",
   question_text = "What are the largest U.S. cities by population?",
   max_list_items = 3
)

q2 = QuestionLinearScale(
   question_name = "dc_state",
   question_text = "How likely is Washington, D.C. to become a U.S. state?",
   question_options = [1, 2, 3, 4, 5],
   option_labels = {1: "Not at all likely", 5: "Very likely"}
)

q3 = QuestionNumerical(
   question_name = "us_pop",
   question_text = "What was the U.S. population in 2020?"
)

survey = Survey(questions = [q1, q2, q3])

results = survey.run()

We can inspect the results in the same way as for a single question, and use "*" to select all components of the results or a group of components (e.g., all of the answers):

results.select("answer.*")

Output:

answer.largest_us_cities                  answer.dc_state    answer.us_pop
['New York', 'Los Angeles', 'Chicago']    2                  331449281

For a survey, each Result represents a response for the set of survey questions. To learn more about analyzing results, please see the Results section.

Parameterizing a question

A question can also be constructed to take parameters that are replaced with specified values either when the question is constructed or when the question is run. This operation can be done in a number of ways:

  • Use “piping” to pass answers or other components of previous questions to a subsequent question when it is run.

  • Use Scenario objects to pass values to a question when it is run.

  • Use Scenario objects to pass values to a question when it is constructed.

  • Use f-strings to pass values to a question when it is constructed.

Each of these methods allows us to easily create and administer multiple versions of a question at once.

In addition to the examples below, please also see the Scenarios section for more information on constructing and using scenarios to parameterize questions.

Scenarios

A Scenario object is a dictionary of parameter values that can be passed to a question. Details about scenarios can be found in the Scenarios section.

Key steps:

Create a question text that takes a parameter in double braces:

from edsl import QuestionFreeText

q = QuestionFreeText(
   question_name = "favorite_item",
   question_text = "What is your favorite {{ item }}?",
)

Then create a dictionary for each value that will replace the parameter and store them in Scenario objects:

from edsl import ScenarioList, Scenario

scenarios = ScenarioList(
   Scenario({"item": item}) for item in ["color", "food"]
)

We can pass scenarios to a question using the by method when the question is run:

from edsl import QuestionFreeText, ScenarioList, Scenario

q = QuestionFreeText(
   question_name = "favorite_item",
   question_text = "What is your favorite {{ item }}?",
)

scenarios = ScenarioList(
   Scenario({"item": item}) for item in ["color", "food"]
)

results = q.by(scenarios).run()

Each of the Results that are generated will include an individual Result for each version of the question that was answered.

Alternatively, we can create multiple versions of a question when constructing a survey (i.e., before running it), by passing scenarios to the question loop method:

questions = q.loop(scenarios) # using the scenarios from the example above

We can inspect the questions that have been created:

questions

Output:

[Question('free_text', question_name = """favorite_item_0""", question_text = """What is your favorite color?"""),
Question('free_text', question_name = """favorite_item_1""", question_text = """What is your favorite food?""")]

Note that a unique question_name has been automatically generated based on the parameter values. This is necessary in order to pass the questions to a Survey object.

We can alternatively specify that the parameter values be inserted in the question name to create the unique identifiers (so long as they are Pythonic):

from edsl import QuestionFreeText, ScenarioList, Scenario

q = QuestionFreeText(
   question_name = "favorite_{{ item }}",
   question_text = "What is your favorite {{ item }}?",
)

scenarios = ScenarioList(
   Scenario({"item": item}) for item in ["color", "food"]
)

questions = q.loop(scenarios)

Output:

[Question('free_text', question_name = """favorite_color""", question_text = """What is your favorite color?"""),
Question('free_text', question_name = """favorite_food""", question_text = """What is your favorite food?""")]

To run the questions, we pass them to a Survey in the usual manner:

from edsl import Survey

survey = Survey(questions = questions)

results = survey.run()

Piping

Piping is a method for passing the answer or other components of previous questions into a subsequent question. Question components can be piped into question texts or options by using double braces with the name of the prior question and the key of the answer in the braces (e.g., {{ <prior_question_name>.answer }}). Note that piping can only be used when questions are run together in a survey.

For example:

from edsl import QuestionNumerical, QuestionList, QuestionMultipleChoice, Survey

q_age = QuestionNumerical(
   question_name = "age",
   question_text = "What is your age?"
)

# Piping an answer in a question text
q_prime = QuestionMultipleChoice(
   question_name = "prime",
   question_text = "Is {{ age.answer }} a prime number?",
   question_options = ["Yes", "No", "I don't know"]
)

q_favorite_colors = QuestionList(
   question_name = "favorite_colors",
   question_text = "What are your 3 favorite colors?",
   max_list_items = 3
)

# Using an item from a list response
q_flowers = QuestionList(
   question_name = "flowers",
   question_text = "Name some flowers that are {{ favorite_colors.answer[0] }}."
)

# Using a list response as a complete set of question options
q_house_color = QuestionMultipleChoice(
   question_name = "house_color",
   question_text = "Pretend you are painting a house. Which color would you choose?",
   question_options = "{{ favorite_colors.answer }}"
)

# Itemizing options from a list response in question options
q_car_color = QuestionMultipleChoice(
   question_name = "car_color",
   question_text = "Pretend you are buying a car. Which color would you choose?",
   question_options = [
      "{{ favorite_colors.answer[0] }}",
      "{{ favorite_colors.answer[1] }}",
      "{{ favorite_colors.answer[2] }}",
      "Other"
   ]
)

survey = Survey([
   q_age,
   q_prime,
   q_favorite_colors,
   q_flowers,
   q_house_color,
   q_car_color
])

When the survey is run, the answers will be piped as noted in the comments.

For more details and examples of piping, please see the Surveys module section on piping.

F-strings

F-strings can be used to pass values to a question when it is constructed. They function independently of scenarios and piping, but can be used at the same time.

For example:

from edsl import QuestionFreeText, ScenarioList, Scenario, Survey

questions = []
sentiments = ["enjoy", "hate", "love"]

scenarios = ScenarioList(
   Scenario({"activity": activity}) for activity in ["running", "reading"]
)

for sentiment in sentiments:
   q = QuestionFreeText(
      question_name = f"{ sentiment }_activity",
      question_text = f"How much do you { sentiment } {{ activity }}?"
   )
   q_list = q.loop(scenarios)

   questions = questions + q_list

questions

Output:

[Question('free_text', question_name = """enjoy_activity_0""", question_text = """How much do you enjoy running?"""),
Question('free_text', question_name = """enjoy_activity_1""", question_text = """How much do you enjoy reading?"""),
Question('free_text', question_name = """hate_activity_0""", question_text = """How much do you hate running?"""),
Question('free_text', question_name = """hate_activity_1""", question_text = """How much do you hate reading?"""),
Question('free_text', question_name = """love_activity_0""", question_text = """How much do you love running?"""),
Question('free_text', question_name = """love_activity_1""", question_text = """How much do you love reading?""")]

We can see that the question names and texts have been parameterized with the values of sentiments and scenarios, and the question names have been automatically incremented to ensure uniqueness. We can then pass the questions to a survey and run it:

survey = Survey(questions = questions)

results = survey.run()

Designing AI agents

A key feature of EDSL is the ability to design AI agents with personas and other traits for language models to use in responding to questions. The use of agents allows us to simulate survey results for target audiences at scale. This is done by creating Agent objects with dictionaries of desired traits and adding them to questions when they are run. For example, if we want a question answered by an AI agent representing a student we can create an Agent object with a relevant persona and attributes:

from edsl import Agent

agent = Agent(traits = {
   "persona": "You are a student...", # can be an extended text
   "age": 20, # individual trait values can be useful for analysis
   "current_grade": "college sophomore"
   })

To generate a response for the agent, we pass it to the by method when we run the question:

results = q.by(agent).run()

We can also generate responses for multiple agents at once by passing them as a list:

from edsl import AgentList, Agent

agents = AgentList(
   Agent(traits = {"persona":p}) for p in ["Dog catcher", "Magician", "Spy"]
)

results = q.by(scenarios).by(agents).run()

The Results will contain a Result for each agent that answered the question. To learn more about designing agents, please see the Agents section.

Specifying language models

In the above examples we did not specify a language model for the question or survey, so the default model was used (run Model() to check the current default model). Similar to the way that we optionally passed scenarios to a question and added AI agents, we can also use the by method to specify one or more LLMs to use in generating results. This is done by creating Model objects for desired models and optionally specifying model parameters, such as temperature.
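For example, a model parameter such as temperature can be passed when a Model object is constructed (a minimal sketch; the parameters that are accepted vary by inference service):

from edsl import Model

model = Model("gpt-4o", temperature = 0.5)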

To check available models:

from edsl import Model

Model.available()

This will return a list of names of models that we can choose from.

We can also check the models for which we have already added API keys:

Model.check_models()

See the instructions on Managing Keys for storing API keys for the models that you want to use, or on activating Remote Inference to use the Expected Parrot server to access available models.

To specify models for a survey we first create Model objects:

from edsl import ModelList, Model

models = ModelList(
   Model(m) for m in ['gpt-4o', 'gemini-1.5-pro']
)

Then we add them to a question or survey with the by method when running it:

results = q.by(models).run()

If scenarios and/or agents are also specified, each component is added in its own by call, chained together in any order, with the run method appended last:

results = q.by(scenarios).by(agents).by(models).run()

Note that multiple scenarios, agents and models are always passed as lists in the same by call.

Learn more about specifying question scenarios, agents and language models and their parameters in the respective sections of the documentation.

QuestionBase class

class edsl.questions.QuestionBase[source]

Bases: PersistenceMixin, RepresentationMixin, SimpleAskMixin, QuestionBasePromptsMixin, QuestionBaseGenMixin, ABC, AnswerValidatorMixin

Abstract base class for all question types in EDSL.

QuestionBase defines the core interface and behavior that all question types must implement. It provides the foundation for asking questions to agents, validating responses, generating prompts, and integrating with the rest of the EDSL framework.

The class inherits from multiple mixins to provide different capabilities:

  • PersistenceMixin: Serialization and deserialization

  • RepresentationMixin: String representation

  • SimpleAskMixin: Basic asking functionality

  • QuestionBasePromptsMixin: Template-based prompt generation

  • QuestionBaseGenMixin: Generate responses with language models

  • AnswerValidatorMixin: Response validation

It also uses the RegisterQuestionsMeta metaclass to enforce constraints on child classes and automatically register them for serialization and runtime use.

Class attributes:

question_name (str): Name of the question, used as an identifier

question_text (str): The actual text of the question to be asked

Required attributes in derived classes:

question_type (str): String identifier for the question type

_response_model (Type): Pydantic model class for validating responses

response_validator_class (Type): Validator class for responses

Key Methods:

by(model): Connect this question to a language model for answering

run(): Execute the question with the connected language model

duplicate(): Create an exact copy of this question

is_valid_question_name(): Verify the question_name is valid

Lifecycle:
  1. Instantiation: A question is created with specific parameters

  2. Connection: The question is connected to a language model via by()

  3. Execution: The question is run to generate a response

  4. Validation: The response is validated based on the question type

  5. Result: The validated response is returned for analysis

Template System:

Questions use Jinja2 templates for generating prompts. Each question type has associated template files:

  • answering_instructions.jinja: Instructions for how the model should answer

  • question_presentation.jinja: Format for how the question is presented

Templates support variable substitution using scenario variables.

Response Validation:

Each question type has a dedicated response validator that:

  • Enforces the expected response structure

  • Ensures the response is valid for the question type

  • Attempts to fix invalid responses when possible

  • Uses Pydantic models for schema validation

Example:

Derived classes must define the required attributes:

```python
class FreeTextQuestion(QuestionBase):

    question_type = "free_text"
    _response_model = FreeTextResponse
    response_validator_class = FreeTextResponseValidator

    def __init__(self, question_name, question_text, **kwargs):
        self.question_name = question_name
        self.question_text = question_text
        # Additional initialization as needed
```

Using a question:

```python
# Create a question
question = FreeTextQuestion(
    question_name="opinion",
    question_text="What do you think about AI?"
)

# Connect to a language model and run
from edsl.language_models import Model
model = Model()
result = question.by(model).run()

# Access the answer
answer = result.select("answer.opinion").to_list()[0]
print(f"The model's opinion: {answer}")
```

Notes:
  • QuestionBase is abstract and cannot be instantiated directly

  • Child classes must implement required methods and attributes

  • The RegisterQuestionsMeta metaclass handles registration of question types

  • Questions can be serialized to and from dictionaries for storage

  • Questions can be used independently or as part of surveys

class ValidatedAnswer[source]

Bases: TypedDict

Type definition for a validated answer to a question.

This TypedDict defines the structure of a validated answer, which includes the actual answer value, an optional comment, and optional generated tokens information for tracking LLM token usage.

Attributes:

answer: The validated answer value, type depends on question type

comment: Optional string comment or explanation for the answer

generated_tokens: Optional string containing raw LLM output for token tracking

answer: Any[source]
comment: str | None[source]
generated_tokens: str | None[source]
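For illustration, a validated answer for a numerical question might have the following shape (a sketch of the structure only, not output from an actual run):

validated_answer = {
   "answer": 42,                             # type depends on the question type
   "comment": "I chose 42 because ...",      # optional explanation from the model
   "generated_tokens": "42\nI chose 42 ..."  # optional raw model output
}
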
add_question(other: QuestionBase) → Survey[source]

Add a question to this question by turning them into a survey with two questions.

>>> from edsl.questions import QuestionFreeText as Q
>>> from edsl.questions import QuestionMultipleChoice as QMC
>>> s = Q.example().add_question(QMC.example())
>>> len(s.questions)
2
by(*args) → Jobs[source]

Turn a single question into a survey and then a Job.
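For example (a minimal sketch; the resulting Jobs object is typically run immediately):

from edsl import QuestionFreeText, Model

q = QuestionFreeText(question_name = "color", question_text = "What is your favorite color?")
job = q.by(Model())   # returns a Jobs object
results = job.run()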

property data: dict[source]

Return a dictionary of question attributes except for question_type.

>>> from edsl.questions import QuestionFreeText as Q
>>> Q.example().data
{'question_name': 'how_are_you', 'question_text': 'How are you?'}
duplicate() → QuestionBase[source]

Create an exact copy of this question instance.

This method creates a new instance of the question with identical attributes by serializing the current instance to a dictionary and then deserializing it back into a new instance.

Returns:

QuestionBase: A new instance of the same question type with identical attributes.

Examples:
>>> from edsl.questions import QuestionFreeText
>>> original = QuestionFreeText(question_name="q1", question_text="Hello?")
>>> copy = original.duplicate()
>>> original.question_name == copy.question_name
True
>>> original is copy
False
classmethod example_model()[source]
classmethod example_results()[source]
property fake_data_factory[source]

Create and return a factory for generating fake response data.

This property lazily creates a factory class based on Pydantic’s ModelFactory that can generate fake data conforming to the question’s response model. The factory is cached after first creation for efficiency.

Returns:

ModelFactory: A factory class that can generate fake data for this question type.

Notes:
  • Uses polyfactory to generate valid fake data instances

  • The response model for the question defines the structure of the generated data

  • Primarily used for testing and simulation purposes

classmethod from_dict(data: dict) → QuestionBase[source]

Create a question instance from a dictionary representation.

This class method deserializes a question from a dictionary representation, typically created by the to_dict method. It looks up the appropriate question class based on the question_type field and constructs an instance of that class.

Args:

data: Dictionary representation of a question, must contain a ‘question_type’ field.

Returns:

QuestionBase: An instance of the appropriate question subclass.

Raises:

QuestionSerializationError: If the data is missing the question_type field or if no question class is registered for the given type.

Examples:
>>> from edsl.questions import QuestionFreeText
>>> original = QuestionFreeText.example()
>>> serialized = original.to_dict()
>>> deserialized = QuestionBase.from_dict(serialized)
>>> original.question_text == deserialized.question_text
True
>>> isinstance(deserialized, QuestionFreeText)
True
Notes:
  • The @remove_edsl_version decorator removes EDSL version information from the dictionary before processing

  • Special handling is implemented for certain question types like linear_scale

  • Model instructions, if present, are handled separately to ensure proper initialization

gold_standard(q_and_a_dict: dict[str, str]) → Result[source]

Run the question with a gold standard agent and return the result.

html(scenario: dict | None = None, agent: dict | None = {}, answers: dict | None = None, include_question_name: bool = False, height: int | None = None, width: int | None = None, iframe=False)[source]
human_readable() → str[source]

Print the question in a human readable format.

>>> from edsl.questions import QuestionFreeText
>>> QuestionFreeText.example().human_readable()
'Question Type: free_text\nQuestion: How are you?'
humanize(project_name: str = 'Project', survey_description: str | None = None, survey_alias: str | None = None, survey_visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted') → dict[source]

Turn a single question into a survey and send the survey to Coop.

Then, create a project on Coop so you can share the survey with human respondents.

inspect()[source]

Create an interactive inspector widget for this question.

This method uses the InspectorWidget registry system to find the appropriate inspector widget class for questions and returns an instance of it.

Returns:

QuestionInspectorWidget instance: Interactive widget for inspecting this question

Raises:

KeyError: If no question inspector widget is available

ImportError: If the widgets module cannot be imported

is_valid_question_name() → bool[source]

Check if the question name is a valid Python identifier.

This method validates that the question_name attribute is a valid Python variable name according to Python’s syntax rules. This is important because question names are often used as identifiers in various parts of the system.

Returns:

bool: True if the question name is a valid Python identifier, False otherwise.

Examples:
>>> from edsl.questions import QuestionFreeText
>>> q = QuestionFreeText(question_name="valid_name", question_text="Text")
>>> q.is_valid_question_name()
True
property name: str[source]

Get the question name.

This property is a simple alias for question_name that provides a consistent interface shared with other EDSL components like Instructions.

Returns:

str: The question name.

question_name: str[source]

Validate that the question_name attribute is a valid variable name.

question_text: str[source]

Validate that the question_text attribute is a string.

>>> class TestQuestion:
...     question_text = QuestionTextDescriptor()
...     def __init__(self, question_text: str):
...         self.question_text = question_text
>>> _ = TestQuestion("What is the capital of France?")
>>> _ = TestQuestion("What is the capital of France? {{variable}}")
property response_validator: ResponseValidatorABC[source]

Get the appropriate validator for this question type.

This property lazily creates and returns a response validator instance specific to this question type. The validator is created using the ResponseValidatorFactory, which selects the appropriate validator class based on the question’s type.

Returns:

ResponseValidatorABC: An instance of the appropriate validator for this question.

Notes:
  • Each question type has its own validator class defined in the class attribute response_validator_class

  • The validator is responsible for ensuring responses conform to the expected format and constraints for this question type

rich_print()[source]

Print the question in a rich format.

run(*args, **kwargs) → Results[source]

Turn a single question into a survey and runs it.

async run_async(just_answer: bool = True, model: 'LanguageModel' | None = None, agent: 'Agent' | None = None, disable_remote_inference: bool = False, **kwargs) → Any | 'Results'[source]

Call the question asynchronously.

>>> import asyncio
>>> from edsl.questions import QuestionFreeText as Q
>>> m = Q._get_test_model(canned_response = "Blue")
>>> q = Q(question_name = "color", question_text = "What is your favorite color?")
>>> async def test_run_async(): result = await q.run_async(model=m, disable_remote_inference = True, disable_remote_cache = True); print(result)
>>> asyncio.run(test_run_async())
Blue
classmethod run_example(show_answer: bool = True, model: 'LanguageModel' | None = None, cache: bool = False, disable_remote_cache: bool = False, disable_remote_inference: bool = False, **kwargs) → Results[source]

Run the example question with a language model and return results.

This class method creates an example instance of the question, asks it using the provided language model, and returns the results. It’s primarily used for demonstrations, documentation, and testing.

Args:

show_answer: If True, returns only the answer portion of the results. If False, returns the full results.

model: Language model to use for answering. If None, creates a default model.

cache: Whether to use local caching for the model call.

disable_remote_cache: Whether to disable remote caching.

disable_remote_inference: Whether to disable remote inference.

**kwargs: Additional keyword arguments to pass to the example method.

Returns:

Results: Either the full results or just the answer portion, depending on show_answer.

Examples:
>>> from edsl.language_models import LanguageModel
>>> from edsl import QuestionFreeText as Q
>>> m = Q._get_test_model(canned_response="Yo, what's up?")
>>> results = Q.run_example(show_answer=True, model=m,
...                       disable_remote_cache=True, disable_remote_inference=True)
>>> "answer" in str(results)
True
Notes:
  • This method is useful for quick demonstrations of question behavior

  • The disable_remote_* parameters are useful for offline testing

  • Additional parameters to customize the example can be passed via kwargs

to_dict(add_edsl_version: bool = True)[source]

Convert the question to a dictionary that includes the question type (used in deserialization).

>>> from edsl.questions import QuestionFreeText as Q; Q.example().to_dict(add_edsl_version = False)
{'question_name': 'how_are_you', 'question_text': 'How are you?', 'question_type': 'free_text'}
to_jobs()[source]
to_survey() → Survey[source]

Turn a single question into a survey.

>>> from edsl import QuestionFreeText as Q
>>> Q.example().to_survey().questions[0].question_name
'how_are_you'

using(*args, **kwargs) → Jobs[source]

Turn a single question into a survey and then a Job.

Question type classes

QuestionFreeText class

A subclass of the Question class for creating free response questions. There are no specially required fields (only question_name and question_text). The response is a single string of text. Example usage:

from edsl import QuestionFreeText

q = QuestionFreeText(
   question_name = "food",
   question_text = "What is your favorite food?"
)

An example can also be created using the example method:

QuestionFreeText.example()

HTML/XML Tags in Question Text

You can use HTML or XML tags within question text to provide additional context or instructions to the language model. These tags will be preserved and passed to the model as part of the prompt. This can be useful for structuring complex prompts or providing inline instructions.

Example with XML tags for language specification:

q = QuestionFreeText(
   question_name = "favorite_color",
   question_text = """Please return your favorite color
                  <language>
                  German
                  </language>
                  """,
)

When this question is run, the model will observe the XML tags and may respond in German:

results = q.run()
results.select('answer.*')
# Example output:
# Dataset([{'answer.favorite_color': ['Als KI-Modell habe ich keine persönlichen Vorlieben oder Lieblingsfarben. Aber ich kann Ihnen sagen, dass in der deutschen Kultur Farben wie Blau, Rot und Grün oft beliebt sind. Wenn Sie weitere Informationen über Farben auf Deutsch benötigen, lassen Sie es mich wissen!']}])

You can also use HTML formatting tags:

q = QuestionFreeText(
   question_name = "formatted_question",
   question_text = """This text includes <b>bold</b> and <i>italic</i> formatting.
                  <br>This appears on a new line."""
)
class edsl.questions.QuestionFreeText(question_name: str, question_text: str, answering_instructions: str | None = None, question_presentation: str | None = None)[source]

Bases: QuestionBase

A question that allows an agent to respond with free-form text.

QuestionFreeText is one of the simplest and most commonly used question types in EDSL. It prompts an agent or language model to provide a textual response without any specific structure or constraints on the format. The response can be of any length and content, making it suitable for open-ended questions, explanations, storytelling, and other scenarios requiring unrestricted text.

Attributes:

question_type (str): Identifier for this question type, set to "free_text".

_response_model: Pydantic model for validating responses.

response_validator_class: Class used to validate and fix responses.

Examples:
>>> q = QuestionFreeText(
...     question_name="opinion",
...     question_text="What do you think about AI?"
... )
>>> q.question_type
'free_text'
>>> from edsl.language_models import Model
>>> model = Model("test", canned_response="I think AI is fascinating.")
>>> result = q.by(model).run(disable_remote_inference=True)
>>> answer = result.select("answer.*").to_list()[0]
>>> "fascinating" in answer
True
__init__(question_name: str, question_text: str, answering_instructions: str | None = None, question_presentation: str | None = None)[source]

Initialize a new free text question.

Args:

question_name: Identifier for the question, used in results and templates. Must be a valid Python variable name.

question_text: The actual text of the question to be asked.

answering_instructions: Optional additional instructions for answering the question, overrides default instructions.

question_presentation: Optional custom presentation template for the question, overrides default presentation.

Examples:
>>> q = QuestionFreeText(
...     question_name="feedback",
...     question_text="Please provide your thoughts on this product."
... )
>>> q.question_name
'feedback'
>>> q = QuestionFreeText(
...     question_name="explanation",
...     question_text="Explain how photosynthesis works.",
...     answering_instructions="Provide a detailed scientific explanation."
... )
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) → T[source]
property question_html_content: str[source]

Generate HTML content for rendering the question in web interfaces.

This property generates HTML markup for the question when it needs to be displayed in web interfaces or HTML contexts. For a free text question, this is typically a textarea element.

Returns:

str: HTML markup for rendering the question.

Notes:
  • Uses Jinja2 templating to generate the HTML

  • Creates a textarea input element with the question_name as the ID and name

  • Can be used for displaying the question in web UIs or HTML exports

question_name: str[source]
question_text: str[source]
response_validator_class[source]

alias of FreeTextResponseValidator

QuestionMultipleChoice class

A subclass of the Question class for creating multiple choice questions where the response is a single option selected from a list of options. It specially requires a question_options list of strings, integers or floats for the options. Example usage:

from edsl import QuestionMultipleChoice

q = QuestionMultipleChoice(
   question_name = "color",
   question_text = "What is your favorite color?",
   question_options = ["Red", "Blue", "Green", "Yellow"]
)

An example can also be created using the example method:

QuestionMultipleChoice.example()

If you want the question options to be randomized, you can pass an optional parameter questions_to_randomize (a list of the relevant question names) to the Survey object when it is created. For example:

from edsl import QuestionMultipleChoice, Survey

q = QuestionMultipleChoice(
   question_name = "color",
   question_text = "What is your favorite color?",
   question_options = ["Red", "Blue", "Green", "Yellow"]
)

survey = Survey([q], questions_to_randomize=["color"])

Note: Question options can be strings of any length, but if they are long or complex it may be useful to add the use_code parameter to the question. This adds an instruction to the user_prompt for the model to provide the code number of the option that it selects (i.e., 0, 1, 2, etc.) instead of the value of the option, which helps when the options include formatting that a model might reproduce inexactly and thereby trigger a validation error. The code is then translated back to the option value in the survey results.

For example, in a multiple choice question where the agent is instructed to select a programming language we can add the use_code parameter and then inspect how the user prompt is modified to include “Respond only with the code corresponding to one of the options.”

from edsl import QuestionMultipleChoice

q = QuestionMultipleChoice(
   question_name = "programming_language",
   question_text = "Which programming language do you prefer?",
   question_options = ["Python", "Java", "C++", "JavaScript"],
   use_code = True # optional
)
class edsl.questions.QuestionMultipleChoice(question_name: str, question_text: str, question_options: list[str] | list[list] | list[float] | list[int], include_comment: bool = True, use_code: bool = False, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

A question that prompts the agent to select one option from a list of choices.

QuestionMultipleChoice presents a set of predefined choices to the agent and asks them to select exactly one option. This question type is ideal for scenarios where the possible answers are known and limited, such as surveys, preference questions, or classification tasks.

Key Features:

  • Presents a fixed set of options to choose from

  • Enforces selection of exactly one option

  • Can use numeric codes for options (use_code=True)

  • Supports custom instructions and presentation

  • Optional comment field for additional explanation

  • Can be configured to be permissive (accept answers outside the options)

Technical Details:

  • Uses Pydantic models for validation with Literal types for strict checking

  • Supports dynamic options from scenario variables

  • HTML rendering for web interfaces

  • Robust validation with repair capabilities

Examples:

Basic usage:

```python
q = QuestionMultipleChoice(
    question_name="preference",
    question_text="Which color do you prefer?",
    question_options=["Red", "Green", "Blue", "Yellow"]
)
```

With numeric codes:

```python
q = QuestionMultipleChoice(
    question_name="rating",
    question_text="Rate this product from 1 to 5",
    question_options=["Very Poor", "Poor", "Average", "Good", "Excellent"],
    use_code=True  # The answer will be 0-4 instead of the text
)
```

Dynamic options from scenario:

```python
q = QuestionMultipleChoice(
    question_name="choice",
    question_text="Select an option",
    question_options=["{{option1}}", "{{option2}}", "{{option3}}"]
)
scenario = Scenario({"option1": "Choice A", "option2": "Choice B", "option3": "Choice C"})
result = q.by(model).with_scenario(scenario).run()
```

See also:

https://docs.expectedparrot.com/en/latest/questions.html#questionmultiplechoice-class

__init__(question_name: str, question_text: str, question_options: list[str] | list[list] | list[float] | list[int], include_comment: bool = True, use_code: bool = False, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Initialize a new multiple choice question.

Parameters

question_name : str

The name of the question, used as an identifier. Must be a valid Python variable name. This name will be used in results, templates, and when referencing the question in surveys.

question_text : str

The actual text of the question to be asked. This is the prompt that will be presented to the language model or agent.

question_options : Union[list[str], list[list], list[float], list[int]]

The list of options the agent can select from. These can be:

  • Strings: ["Option A", "Option B", "Option C"]

  • Lists: Used for nested or complex options

  • Numbers: [1, 2, 3, 4, 5] or [0.1, 0.2, 0.3]

  • Template strings: ["{{var1}}", "{{var2}}"], which will be rendered with scenario variables

include_comment : bool, default=True

Whether to include a comment field in the response, allowing the model to provide additional explanation beyond just selecting an option.

use_code : bool, default=False

If True, the answer will be the index of the selected option (0-based) instead of the option text itself. This is useful for numeric scoring or when option text is long.

answering_instructions : Optional[str], default=None

Custom instructions for how the model should answer the question. If None, default instructions for multiple choice questions will be used.

question_presentation : Optional[str], default=None

Custom template for how the question is presented to the model. If None, the default presentation for multiple choice questions will be used.

permissive : bool, default=False

If True, the validator will accept answers that are not in the provided options list. If False (default), only exact matches to the provided options are allowed.

Examples

>>> q = QuestionMultipleChoice(
...     question_name="color_preference",
...     question_text="What is your favorite color?",
...     question_options=["Red", "Blue", "Green", "Yellow"],
...     include_comment=True
... )
>>> q_numeric = QuestionMultipleChoice(
...     question_name="rating",
...     question_text="How would you rate this product?",
...     question_options=["Very Poor", "Poor", "Average", "Good", "Excellent"],
...     use_code=True,
...     include_comment=True
... )

Notes

  • When use_code=True, the answer will be the index (0-based) of the selected option

  • The permissive parameter is useful when you want to allow free-form responses while still suggesting options

  • Dynamic options can reference variables in a scenario using Jinja2 template syntax

create_response_model(replacement_dict: dict = None)[source]
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) → T[source]
property question_html_content: str[source]

Return the HTML version of the question.

response_validator_class[source]

alias of MultipleChoiceResponseValidator

QuestionCheckBox class

A subclass of the Question class for creating questions where the response is a list of one or more of the given options. It specially requires a question_options list of strings, integers or floats for the options. The minimum number of options that must be selected and the maximum number that may be selected can be specified when creating the question (parameters min_selections and max_selections). If not specified, the minimum number of options that must be selected is 1 and the maximum allowed is the number of question options provided. Example usage:

from edsl import QuestionCheckBox

q = QuestionCheckBox(
   question_name = "favorite_days",
   question_text = "What are your 2 favorite days of the week?",
   question_options = ["Monday", "Tuesday", "Wednesday",
   "Thursday", "Friday", "Saturday", "Sunday"],
   min_selections = 2, # optional
   max_selections = 2  # optional
)

An example can also be created using the example method:

QuestionCheckBox.example()
class edsl.questions.QuestionCheckBox(question_name: str, question_text: str, question_options: list[str], min_selections: int | None = None, max_selections: int | None = None, include_comment: bool = True, use_code: bool = False, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

A question that prompts the agent to select multiple options from a list.

QuestionCheckBox allows agents to select one or more items from a predefined list of options. It’s useful for “select all that apply” scenarios, multi-select preferences, or any question where multiple valid selections are possible.

Attributes:

question_type (str): Identifier for this question type, set to "checkbox".

purpose (str): Brief description of when to use this question type.

question_options: List of available options to select from.

min_selections: Optional minimum number of selections required.

max_selections: Optional maximum number of selections allowed.

_response_model: Initially None, set by create_response_model().

response_validator_class: Class used to validate and fix responses.

Examples:
>>> # Basic creation works
>>> q = QuestionCheckBox.example()
>>> q.question_type
'checkbox'
>>> # Create preferences question with selection constraints
>>> q = QuestionCheckBox(
...     question_name="favorite_fruits",
...     question_text="Which fruits do you like?",
...     question_options=["Apple", "Banana", "Cherry", "Durian", "Elderberry"],
...     min_selections=1,
...     max_selections=3
... )
>>> q.question_options
['Apple', 'Banana', 'Cherry', 'Durian', 'Elderberry']
>>> q.min_selections
1
>>> q.max_selections
3
__init__(question_name: str, question_text: str, question_options: list[str], min_selections: int | None = None, max_selections: int | None = None, include_comment: bool = True, use_code: bool = False, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Initialize a new checkbox question.

Args:

question_name: Identifier for the question, used in results and templates. Must be a valid Python variable name.

question_text: The actual text of the question to be asked.

question_options: List of options the agent can select from.

min_selections: Optional minimum number of options that must be selected.

max_selections: Optional maximum number of options that can be selected.

include_comment: Whether to allow comments with the answer.

use_code: If True, use indices (0, 1, 2, …) instead of option text values.

question_presentation: Optional custom presentation template.

answering_instructions: Optional additional instructions.

permissive: If True, ignore selection count constraints during validation.

Examples:
>>> q = QuestionCheckBox(
...     question_name="symptoms",
...     question_text="Select all symptoms you are experiencing:",
...     question_options=["Fever", "Cough", "Headache", "Fatigue"],
...     min_selections=1
... )
>>> q.question_name
'symptoms'
>>> # Question with both min and max
>>> q = QuestionCheckBox(
...     question_name="pizza_toppings",
...     question_text="Choose 2-4 toppings for your pizza:",
...     question_options=["Cheese", "Pepperoni", "Mushroom", "Onion",
...                       "Sausage", "Bacon", "Pineapple"],
...     min_selections=2,
...     max_selections=4
... )
create_response_model()[source]

Create a response model with the appropriate constraints.

This method creates a Pydantic model customized with the options and selection count constraints specified for this question instance.

Returns:

A Pydantic model class tailored to this question’s constraints.

Examples:
>>> q = QuestionCheckBox.example(use_code=True)
>>> model = q.create_response_model()
>>> model(answer=[0, 2])  # Select first and third options
ConstrainedCheckboxResponse(answer=[0, 2], comment=None, generated_tokens=None)
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]

Generate HTML content for rendering the question in web interfaces.

This property generates HTML markup for the question when it needs to be displayed in web interfaces or HTML contexts. For a checkbox question, this is a set of checkbox input elements, one for each option.

Returns:

str: HTML markup for rendering the question.

response_validator_class[source]

alias of CheckBoxResponseValidator

QuestionMatrix class

A subclass of the Question class for creating matrix questions where the response is a table of answers. It specially requires a question_items list of strings for the row items and a question_options list of strings, integers or floats for the column items; an optional option_labels dictionary can be used to specify labels for the column options.

Example usage:

from edsl import QuestionMatrix

q = QuestionMatrix(
   question_name = "matrix",
   question_text = "Rate the following items on a scale of 1 to 5.",
   question_items = ["Item 1", "Item 2", "Item 3"],
   question_options = [1, 2, 3, 4, 5],
   option_labels = {1: "Terrible", 5: "Excellent"} # optional
)
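
The expected answer is a dictionary mapping each row item to one of the column options. As an illustrative sketch (the ratings below are hypothetical, not output from the library), a candidate answer can be checked against the question's response model:

model = q.create_response_model()
model(answer={"Item 1": 5, "Item 2": 3, "Item 3": 4}) # hypothetical ratings for each row item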

An example can also be created using the example method:

QuestionMatrix.example()
class edsl.questions.QuestionMatrix(question_name: str, question_text: str, question_items: List[str], question_options: List[int | str | float], option_labels: Dict[int | str | float, str] | None = None, include_comment: bool = True, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

A question that presents a matrix/grid where multiple items are rated or selected from the same set of options.

This question type allows respondents to provide an answer for each row in a grid, selecting from the same set of options for each row. It’s often used for Likert scales, ratings grids, or any scenario where multiple items need to be rated using the same scale.

Examples:
>>> # Create a happiness rating matrix
>>> question = QuestionMatrix(
...     question_name="happiness_matrix",
...     question_text="Rate your happiness with each aspect:",
...     question_items=["Work", "Family", "Social life"],
...     question_options=[1, 2, 3, 4, 5],
...     option_labels={1: "Very unhappy", 3: "Neutral", 5: "Very happy"}
... )
>>> # The response is a dict matching each item to a rating
>>> response = {"answer": {"Work": 4, "Family": 5, "Social life": 3}}
__init__(question_name: str, question_text: str, question_items: List[str], question_options: List[int | str | float], option_labels: Dict[int | str | float, str] | None = None, include_comment: bool = True, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Initialize a matrix question.

Args:

  • question_name: The name of the question.

  • question_text: The text of the question.

  • question_items: List of items to be rated or answered (rows).

  • question_options: Possible answer options for each item (columns).

  • option_labels: Optional mapping of options to labels (e.g. {1: "Sad", 5: "Happy"}).

  • include_comment: Whether to include a comment field.

  • answering_instructions: Custom instructions template.

  • question_presentation: Custom presentation template.

  • permissive: Whether to allow any values and extra items instead of strictly checking.

create_response_model() Type[BaseModel][source]

Returns the pydantic model for validating responses to this question.

The model is dynamically created based on the question’s configuration, including allowed items, options, and permissiveness.

classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]

Generate an HTML representation of the matrix question.

Returns:

HTML content string for rendering the question

question_text: str[source]

Validate that the question_text attribute is a string.

>>> class TestQuestion:
...     question_text = QuestionTextDescriptor()
...     def __init__(self, question_text: str):
...         self.question_text = question_text
>>> _ = TestQuestion("What is the capital of France?")
>>> _ = TestQuestion("What is the capital of France? {{variable}}")
response_validator_class[source]

alias of MatrixResponseValidator

QuestionDict

A subclass of the Question class for creating questions where the response is a dictionary of answers. It specially requires an answer_keys list of strings for the keys of the dictionary. The value_types and value_descriptions parameters can be used to specify the types and descriptions of the values that should be entered for each key. Example usage:

from edsl import QuestionDict

q = QuestionDict(
   question_name = "recipe",
   question_text = "Please draft an easy recipe for hot chocolate.",
   answer_keys = ["title", "ingedients", "num_ingredients", "instructions"],
   value_types = [str, list, int, list], # optional
   value_descriptions = ["Title of the recipe", "List of ingredients", "Number of ingredients", "Instructions"] # optional
)

An example can also be created using the example method:

QuestionDict.example()
class edsl.questions.QuestionDict(question_name: str, question_text: str, answer_keys: List[str], value_types: List[str | type] | None = None, value_descriptions: List[str] | None = None, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

A QuestionDict allows you to create questions that expect dictionary responses with specific keys and value types. It dynamically builds a pydantic model so that Pydantic automatically raises ValidationError for missing/invalid fields.

Documentation: https://docs.expectedparrot.com/en/latest/questions.html#questiondict

Parameters

  • question_name (str): Unique identifier for the question

  • question_text (str): The actual question text presented to users

  • answer_keys (List[str]): Keys that must be provided in the answer dictionary

  • value_types (Optional[List[Union[str, type]]]): Expected data types for each answer key

  • value_descriptions (Optional[List[str]]): Human-readable descriptions for each answer key

  • include_comment (bool): Whether to allow additional comments with the answer

  • question_presentation (Optional[str]): Alternative way to present the question

  • answering_instructions (Optional[str]): Additional instructions for answering

  • permissive (bool): If True, allows additional keys not specified in answer_keys

__init__(question_name: str, question_text: str, answer_keys: List[str], value_types: List[str | type] | None = None, value_descriptions: List[str] | None = None, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]
create_response_model() Type[BaseModel][source]

Build and return the Pydantic model that should parse/validate user answers. This is similar to QuestionNumerical.create_response_model, but for dicts.

classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
classmethod from_dict(data: dict) QuestionDict[source]

Recreate from a dictionary.

question_text: str[source]

Validate that the question_text attribute is a string.

>>> class TestQuestion:
...     question_text = QuestionTextDescriptor()
...     def __init__(self, question_text: str):
...         self.question_text = question_text
>>> _ = TestQuestion("What is the capital of France?")
>>> _ = TestQuestion("What is the capital of France? {{variable}}")
response_validator_class[source]

alias of DictResponseValidator

to_dict(add_edsl_version: bool = True) dict[source]

Serialize to JSON-compatible dictionary.

QuestionLinearScale class

A subclass of the QuestionMultipleChoice class for creating linear scale questions. It requires a question_options list of integers for the scale. The option_labels parameter can be used to specify labels for the scale options. Example usage:

from edsl import QuestionLinearScale

q = QuestionLinearScale(
   question_name = "studying",
   question_text = """On a scale from 0 to 5, how much do you
   enjoy studying? (0 = not at all, 5 = very much)""",
   question_options = [0, 1, 2, 3, 4, 5], # integers
   option_labels = {0: "Not at all", 5: "Very much"} # optional
)

An example can also be created using the example method:

QuestionLinearScale.example()

QuestionNumerical class

A subclass of the Question class for creating questions where the response is a numerical value. The minimum and maximum values of the answer can be specified using the min_value and max_value parameters. Example usage:

from edsl import QuestionNumerical

q = QuestionNumerical(
   question_name = "work_days",
   question_text = "How many days a week do you normally work?",
   min_value = 1, # optional
   max_value = 7  # optional
)

An example can also be created using the example method:

QuestionNumerical.example()
class edsl.questions.QuestionNumerical(question_name: str, question_text: str, min_value: int | float | None = None, max_value: int | float | None = None, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

A question that prompts the agent to answer with a numerical value.

QuestionNumerical is designed for responses that must be numbers, with optional range constraints to ensure values fall within acceptable bounds. It’s useful for age questions, ratings, measurements, and any scenario requiring numerical answers.

Attributes:

  • question_type (str): Identifier for this question type, set to "numerical".

  • min_value: Optional lower bound for acceptable answers.

  • max_value: Optional upper bound for acceptable answers.

  • _response_model: Initially None, set by create_response_model().

  • response_validator_class: Class used to validate and fix responses.

Examples:
>>> # Basic self-check passes
>>> QuestionNumerical.self_check()
>>> # Create age question with range constraints
>>> q = QuestionNumerical(
...     question_name="age",
...     question_text="How old are you in years?",
...     min_value=0,
...     max_value=120
... )
>>> q.min_value
0
>>> q.max_value
120
__init__(question_name: str, question_text: str, min_value: int | float | None = None, max_value: int | float | None = None, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Initialize a new numerical question.

Args:

  • question_name: Identifier for the question, used in results and templates. Must be a valid Python variable name.

  • question_text: The actual text of the question to be asked.

  • min_value: Optional minimum value for the answer (inclusive).

  • max_value: Optional maximum value for the answer (inclusive).

  • include_comment: Whether to allow comments with the answer.

  • question_presentation: Optional custom presentation template.

  • answering_instructions: Optional additional instructions.

  • permissive: If True, ignore min/max constraints during validation.

Examples:
>>> q = QuestionNumerical(
...     question_name="temperature",
...     question_text="What is the temperature in Celsius?",
...     min_value=-273.15  # Absolute zero
... )
>>> q.question_name
'temperature'
>>> # Question with both min and max
>>> q = QuestionNumerical(
...     question_name="rating",
...     question_text="Rate from 1 to 10",
...     min_value=1,
...     max_value=10
... )
create_response_model()[source]

Create a response model with the appropriate constraints.

This method creates a Pydantic model customized with the min/max constraints specified for this question instance. If permissive=True, constraints are ignored.

Returns:

A Pydantic model class tailored to this question’s constraints.

Examples:
>>> q = QuestionNumerical.example()
>>> model = q.create_response_model()
>>> model(answer=45).answer
45
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]

Generate HTML content for rendering the question in web interfaces.

This property generates HTML markup for the question when it needs to be displayed in web interfaces or HTML contexts. For a numerical question, this is typically an input element with type=”number”.

Returns:

str: HTML markup for rendering the question.

response_validator_class[source]

alias of NumericalResponseValidator

QuestionLikertFive class

A subclass of the QuestionMultipleChoice class for creating questions where the answer is a response to a given statement on a 5-point Likert scale. (The scale does not need to be added as a parameter.) Example usage:

from edsl import QuestionLikertFive

q = QuestionLikertFive(
   question_name = "happy",
   question_text = "I am only happy when it rains."
)
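
The five-point scale is built into the question type, so no options need to be passed. The options can be inspected on the question object; the exact strings in the comment below are assumed typical defaults and may vary by version:

q.question_options # assumed defaults: ['Strongly disagree', 'Disagree', 'Neutral', 'Agree', 'Strongly agree']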

An example can also be created using the example method:

QuestionLikertFive.example()

QuestionRank class

A subclass of the Question class for creating questions where the response is a ranked list of options. It specially requires a question_options list of strings for the options. The number of options that must be selected can be optionally specified when creating the question. If not specified, all options are included (ranked) in the response. Example usage:

from edsl import QuestionRank

q = QuestionRank(
   question_name = "foods_rank",
   question_text = "Rank the following foods.",
   question_options = ["Pizza", "Pasta", "Salad", "Soup"],
   num_selections = 2 # optional
)

An example can also be created using the example method:

QuestionRank.example()

Alternatively, QuestionTopK can be used to ask the respondent to select a specific number of options from a list. (See the next section for details.)

class edsl.questions.QuestionRank(question_name: str, question_text: str, question_options: list[str], num_selections: int | None = None, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False, use_code: bool = True, include_comment: bool = True)[source]

Bases: QuestionBase

A question that prompts the agent to rank options from a list.

This question type asks respondents to put options in order of preference, importance, or any other ordering criteria. The response is a list of selected options in ranked order.

Examples:
>>> # Create a ranking question for food preferences
>>> question = QuestionRank(
...     question_name="food_ranking",
...     question_text="Rank these foods from most to least favorite.",
...     question_options=["Pizza", "Pasta", "Salad", "Soup"],
...     num_selections=2
... )
>>> # The response should be a ranked list
>>> response = {"answer": ["Pizza", "Pasta"], "comment": "I prefer Italian food."}
__init__(question_name: str, question_text: str, question_options: list[str], num_selections: int | None = None, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False, use_code: bool = True, include_comment: bool = True)[source]

Initialize a rank question.

Args:

  • question_name: The name of the question.

  • question_text: The text of the question.

  • question_options: The options the respondent should rank.

  • num_selections: The number of options to select and rank (defaults to all).

  • question_presentation: Custom presentation template (optional).

  • answering_instructions: Custom instructions template (optional).

  • permissive: Whether to relax validation constraints.

  • use_code: Whether to use numeric indices (0, 1, 2) instead of option text.

  • include_comment: Whether to include a comment field.

create_response_model()[source]

Returns the pydantic model for validating responses to this question.

The model is dynamically created based on the question’s configuration, including allowed choices, number of selections, and permissiveness.

classmethod example(use_code=False, include_comment=True) QuestionRank[source]

Return an example rank question.

Args:

  • use_code: Whether to use numeric indices.

  • include_comment: Whether to include a comment field.

Returns:

An example QuestionRank instance

property question_html_content: str[source]

Generate an HTML representation of the ranking question.

Returns:

HTML content string for rendering the question

response_validator_class[source]

alias of RankResponseValidator

QuestionTopK class

A subclass of the QuestionMultipleChoice class for creating questions where the response is a list of a specified number of items selected from the given options. It specially requires a question_options list of strings for the options, together with the number of options that must be selected, specified with the min_selections and max_selections parameters. Example usage:

from edsl import QuestionTopK

q = QuestionTopK(
   question_name = "foods_rank",
   question_text = "Select the best foods.",
   question_options = ["Pizza", "Pasta", "Salad", "Soup"],
   min_selections = 2,
   max_selections = 2
)

An example can also be created using the example method:

QuestionTopK.example()

QuestionYesNo class

A subclass of the QuestionMultipleChoice class for creating multiple choice questions where the answer options are already specified: [‘Yes’, ‘No’]. Example usage:

from edsl import QuestionYesNo

q = QuestionYesNo(
   question_name = "student",
   question_text = "Are you a student?"
)
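
The fixed answer options can be confirmed on the question object:

q.question_options # ['Yes', 'No']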

An example can also be created using the example method:

QuestionYesNo.example()

QuestionList class

A subclass of the Question class for creating questions where the response is a list of strings. The minimum and maximum numbers of items to be included in the list can be specified using the optional parameters min_list_items and max_list_items. Example usage:

from edsl import QuestionList

q = QuestionList(
   question_name = "activities",
   question_text = "What activities do you enjoy most?",
   min_list_items = 2, # optional
   max_list_items = 5  # optional
)
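
A candidate answer can be checked against the question's response model; a minimal sketch with illustrative activity names:

model = q.create_response_model()
model(answer=["Hiking", "Reading", "Cooking"]) # 3 items satisfies min_list_items = 2 and max_list_items = 5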

An example can also be created using the example method:

QuestionList.example()
class edsl.questions.QuestionList(question_name: str, question_text: str, include_comment: bool = True, max_list_items: int | None = None, min_list_items: int | None = None, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

This question prompts the agent to answer by providing a list of items as comma-separated strings.

__init__(question_name: str, question_text: str, include_comment: bool = True, max_list_items: int | None = None, min_list_items: int | None = None, answering_instructions: str | None = None, question_presentation: str | None = None, permissive: bool = False)[source]

Instantiate a new QuestionList.

Parameters:
  • question_name – The name of the question.

  • question_text – The text of the question.

  • max_list_items – The maximum number of items that can be in the answer list.

  • min_list_items – The minimum number of items that must be in the answer list.

>>> QuestionList.example().self_check()
create_response_model()[source]
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
min_list_items: int[source]

Validate that a value is an integer or None.

property question_html_content: str[source]
response_validator_class[source]

alias of ListResponseValidator

QuestionBudget class

A subclass of the Question class for creating questions where the response is an allocation of a sum among a list of options in the form of a dictionary where the keys are the options and the values are the allocated amounts. It specially requires a question_options list of strings for the options and a budget_sum number for the total sum to be allocated. Example usage:

from edsl import QuestionBudget

q = QuestionBudget(
   question_name = "food_budget",
   question_text = "How would you allocate $100?",
   question_options = ["Pizza", "Ice cream", "Burgers", "Salad"],
   budget_sum = 100
)

An example can also be created using the example method:

QuestionBudget.example()
class edsl.questions.QuestionBudget(question_name: str, question_text: str, question_options: list[str], budget_sum: int, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Bases: QuestionBase

A question that prompts the agent to allocate a budget among options.

QuestionBudget is designed for scenarios where a fixed amount needs to be distributed across multiple categories or options. It’s useful for allocation questions, spending priorities, resource distribution, and similar scenarios.

Attributes:

  • question_type: Identifier for this question type, set to "budget".

  • budget_sum: The total amount to be allocated.

  • question_options: List of options to allocate the budget among.

  • _response_model: Initially None, set by create_response_model().

  • response_validator_class: Class used to validate and fix responses.

Examples:
>>> # Create budget allocation question
>>> q = QuestionBudget(
...     question_name="spending",
...     question_text="How would you allocate $100?",
...     question_options=["Food", "Housing", "Entertainment", "Savings"],
...     budget_sum=100
... )
>>> q.budget_sum
100
>>> len(q.question_options)
4
__init__(question_name: str, question_text: str, question_options: list[str], budget_sum: int, include_comment: bool = True, question_presentation: str | None = None, answering_instructions: str | None = None, permissive: bool = False)[source]

Initialize a new budget allocation question.

Args:

  • question_name: Identifier for the question, used in results and templates.

  • question_text: The actual text of the question to be asked.

  • question_options: The options for allocation of the budget sum.

  • budget_sum: The total amount of the budget to be allocated.

  • include_comment: Whether to allow comments with the answer.

  • question_presentation: Optional custom presentation template.

  • answering_instructions: Optional additional instructions.

  • permissive: If True, allow allocations less than budget_sum.

Examples:
>>> q = QuestionBudget(
...     question_name="investment",
...     question_text="How would you invest $1000?",
...     question_options=["Stocks", "Bonds", "Real Estate", "Cash"],
...     budget_sum=1000
... )
>>> q.question_name
'investment'
create_response_model()[source]

Create a response model with the appropriate constraints.

This method creates a Pydantic model customized with the budget constraints and options specified for this question instance.

Returns:

A Pydantic model class tailored to this question’s constraints

Examples:
>>> q = QuestionBudget.example()
>>> model = q.create_response_model()
>>> model(answer=[25, 25, 25, 25]).answer
[25.0, 25.0, 25.0, 25.0]
classmethod example(include_comment: bool = True) QuestionBudget[source]

Create an example instance of a budget question.

This class method creates a predefined example of a budget question for demonstration, testing, and documentation purposes.

Args:

include_comment: Whether to include a comment field with the answer

Returns:

QuestionBudget: An example budget question

Examples:
>>> q = QuestionBudget.example()
>>> q.question_name
'food_budget'
>>> q.question_text
'How would you allocate $100?'
>>> q.budget_sum
100
>>> q.question_options
['Pizza', 'Ice Cream', 'Burgers', 'Salad']
property question_html_content: str[source]

Generate HTML content for rendering the question in web interfaces.

This property generates HTML markup for the question when it needs to be displayed in web interfaces or HTML contexts, including an interactive budget allocation form with JavaScript for real-time budget tracking.

Returns:

str: HTML markup for rendering the question

response_validator_class[source]

alias of BudgetResponseValidator

QuestionExtract class

A subclass of the Question class for creating questions where the response is information extracted (or extrapolated) from a given text and formatted according to a specified template. Example usage:

from edsl import QuestionExtract

q = QuestionExtract(
   question_name = "course_schedule",
   question_text = "This semester we are offering courses on calligraphy on Friday mornings.",
   answer_template = {"course_topic": "AI", "days": ["Monday", "Wednesday"]}
)
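
As an illustrative check (the extracted values below are an assumption about what a model might return, not actual output), a candidate answer can be validated against the template:

q._validate_answer({"answer": {"course_topic": "Calligraphy", "days": ["Friday"]}})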

An example can also be created using the example method:

QuestionExtract.example()
class edsl.questions.QuestionExtract(question_text: str, answer_template: dict[str, Any], question_name: str, answering_instructions: str = None, question_presentation: str = None)[source]

Bases: QuestionBase

A question that extracts structured information from text according to a template.

This question type prompts the agent to extract specific data points from text and return them in a structured format defined by a template. It’s useful for information extraction tasks like parsing contact details, extracting features, or summarizing structured information.

Attributes:

  • question_type: Identifier for this question type.

  • answer_template: Dictionary defining the structure to extract.

  • response_validator_class: The validator class for responses.

Examples:
>>> # Create a question to extract name and profession
>>> q = QuestionExtract(
...     question_name="person_info",
...     question_text="Extract the person's name and profession from this text: John is a carpenter from Boston.",
...     answer_template={"name": "Example Name", "profession": "Example Profession"}
... )
>>> q.answer_template
{'name': 'Example Name', 'profession': 'Example Profession'}
>>> # Validate a correct answer
>>> response = {"answer": {"name": "John", "profession": "carpenter"}}
>>> q._validate_answer(response)
{'answer': {'name': 'John', 'profession': 'carpenter'}, 'comment': None, 'generated_tokens': None}
__init__(question_text: str, answer_template: dict[str, Any], question_name: str, answering_instructions: str = None, question_presentation: str = None)[source]

Initialize the extraction question.

Args:

  • question_name: The name/identifier for the question.

  • question_text: The text of the question to present.

  • answer_template: Dictionary template defining the structure to extract.

  • answering_instructions: Optional custom instructions for the agent.

  • question_presentation: Optional custom presentation template.

Examples:
>>> q = QuestionExtract(
...     question_name="review_extract",
...     question_text="Extract information from this product review",
...     answer_template={"rating": 5, "pros": "example", "cons": "example"}
... )
>>> q.question_name
'review_extract'
create_response_model()[source]

Create a dynamic Pydantic model based on the answer template.

Returns:

A Pydantic model class configured for the template structure

Examples:
>>> q = QuestionExtract.example()
>>> model = q.create_response_model()
>>> isinstance(model, type)
True
classmethod example(exception_to_throw: Exception | None = None, override_answer: dict | None = None, *args, **kwargs) T[source]
property question_html_content: str[source]

Generate HTML form inputs for the template fields.

Returns:

HTML string with form inputs for each template field

response_validator_class[source]

alias of ExtractResponseValidator

QuestionFunctional class

A subclass of the Question class for creating questions where the response is generated by a function instead of a language model. This question type is not intended to be used directly in a survey, but rather to generate responses for other questions. This can be useful when a model is not needed for part of a survey, for questions that require some kind of initial computation, or for questions that are the result of a multi-step process. The question type lets us define a function func that takes in a scenario and (optional) agent traits and returns an answer.

Example usage:

from edsl import QuestionNumerical, Agent

q_random = QuestionNumerical(
   question_name = "random",
   question_text = "Choose a random number between 1 and 1000."
)

agents = [Agent({"persona":p}) for p in ["Dog catcher", "Magician", "Spy"]]

results = q_random.by(agents).run()

results.select("persona", "random")

Output (not shown): a table of each agent persona and the random number it selected.

We can use QuestionFunctional to evaluate the responses using a function instead of calling the language model to answer another question. The responses are passed to the function as scenarios, and then the function is passed to the QuestionFunctional object:

from edsl import QuestionFunctional

def my_function(scenario, agent_traits):
   if scenario.get("persona") == "Magician":
      return "Magicians never pick randomly!"
   elif scenario.get("random") > 500:
      return "Top half"
   else:
      return "Bottom half"

q_evaluate = QuestionFunctional(
   question_name = "evaluate",
   func = my_function
)

Next we turn the responses into scenarios and inspect them:

scenarios = results.select("persona", "random").to_scenario_list()
scenarios

Output:

[Scenario({'persona': 'Dog catcher', 'random': 472}),
Scenario({'persona': 'Magician', 'random': 537}),
Scenario({'persona': 'Spy', 'random': 528})]

Finally, we run the function with the scenarios and inspect the results:

results = q_evaluate.by(scenarios).run()

results.select("persona", "random", "evaluate")

Output (not shown): a table of the persona, random and evaluate values.

Another example of QuestionFunctional can be seen in the following notebook, where we give agents different instructions for generating random numbers and then use a function to identify whether the responses are identical.

Example notebook: Simulating randomness

class edsl.questions.QuestionFunctional(question_name: str, func: Callable | None = None, question_text: str | None = 'Functional question', requires_loop: bool | None = False, function_source_code: str | None = None, function_name: str | None = None, unsafe: bool | None = False)[source]

Bases: QuestionBase

A special type of question that is not answered by an LLM.

>>> from edsl import Scenario, Agent

# Create an instance of QuestionFunctional with the new function
>>> question = QuestionFunctional.example()

# Activate and test the function
>>> question.activate()
>>> scenario = Scenario({"numbers": [1, 2, 3, 4, 5]})
>>> agent = Agent(traits={"multiplier": 10})
>>> results = question.by(scenario).by(agent).run(disable_remote_cache = True, disable_remote_inference = True)
>>> results.select("answer.*").to_list()[0] == 150
True

# Serialize the question to a dictionary

>>> from .question_base import QuestionBase
>>> new_question = QuestionBase.from_dict(question.to_dict())
>>> results = new_question.by(scenario).by(agent).run(disable_remote_cache = True, disable_remote_inference = True)
>>> results.select("answer.*").to_list()[0] == 150
True
__init__(question_name: str, func: Callable | None = None, question_text: str | None = 'Functional question', requires_loop: bool | None = False, function_source_code: str | None = None, function_name: str | None = None, unsafe: bool | None = False)[source]
activate()[source]
activate_loop()[source]

Activate the function with loop logic using RestrictedPython.

activated = True[source]
answer_question_directly(scenario, agent_traits=None)[source]

Return the answer to the question, ensuring the function is activated.

create_response_model()[source]

Returns the Pydantic model for validating responses to this question.

default_instructions = ''[source]
classmethod example()[source]
function_name = ''[source]
function_source_code = ''[source]
property question_html_content: str[source]
question_name: str[source]
question_text: str[source]
response_validator_class[source]

alias of FunctionalResponseValidator

to_dict(add_edsl_version=True)[source]

Convert the question to a dictionary that includes the question type (used in deserialization).

>>> from edsl.questions import QuestionFreeText as Q; Q.example().to_dict(add_edsl_version = False)
{'question_name': 'how_are_you', 'question_text': 'How are you?', 'question_type': 'free_text'}

Optional question parameters

Examples of optional question parameters:

include_comment - This boolean parameter controls whether the comment field that is added by default to all question types other than free_text is included in a question (default: include_comment = True); set it to False to exclude the comment field. Example usage:

from edsl import QuestionNumerical, Survey

q1 = QuestionNumerical(
   question_name = "adding_v1",
   question_text = "What is 1+1?"
)

# The same question with the comment field excluded
q2 = QuestionNumerical(
   question_name = "adding_v2",
   question_text = "What is 1+1?",
   include_comment = False
)

job = Survey([q1, q2]).to_jobs()

job.prompts().select("user_prompt", "question_name")

We can see that the second version of the question does not include the comment instruction “After the answer, put a comment explaining your choice on the next line.”:

question_name: adding_v1

user_prompt: What is 1+1? This question requires a numerical response in the form of an integer or decimal (e.g., -12, 0, 1, 2, 3.45, …). Respond with just your number on a single line. If your response is equivalent to zero, report ‘0’ After the answer, put a comment explaining your choice on the next line.

question_name: adding_v2

user_prompt: What is 1+1? This question requires a numerical response in the form of an integer or decimal (e.g., -12, 0, 1, 2, 3.45, …). Respond with just your number on a single line. If your response is equivalent to zero, report ‘0’

When we run the survey, the comment field will be included in the results for the first question but not the second:

results = job.run()

results.select("comment.*")

Output:

comment.adding_v1_comment: The sum of 1 and 1 is 2.

comment.adding_v2_comment: (empty; the comment field was excluded for this question)

See the Prompts section for more information about various methods for inspecting user and system prompts.

question_presentation and answering_instructions - These parameters can be used to add additional context or modify the default instructions of a question.

  • The parameter question_presentation specifies how the question should be presented to the model; it can reference the question text with {{ question_text }} (e.g., to modify the default instructions for a question).

  • The parameter answering_instructions is added to the end of the question text without modifying it. It can be used to specify how the model should answer the question and can be useful for questions that require a specific format for the answer.

Example usage:

from edsl import QuestionNumerical, Survey

q = QuestionNumerical(
   question_name = "adding",
   question_text = "What is 1+1?",
   question_presentation = "Please solve the following addition problem: {{ question_text }}",
   answering_instructions = "\n\nRespond with just your number on a single line."
)

job = Survey([q]).to_jobs()

job.prompts().select("user_prompt")

Output:

user_prompt: Please solve the following addition problem: What is 1+1? Respond with just your number on a single line.

See the Prompts section for more information about various methods for inspecting user and system prompts.

Other classes & methods

Settings for the questions module.

class edsl.questions.settings.Settings[source]

Bases: object

Settings for the questions module.

MAX_ANSWER_LENGTH = 2000[source]
MAX_EXPRESSION_CONSTRAINT_LENGTH = 1000[source]
MAX_NUM_OPTIONS = 200[source]
MAX_OPTION_LENGTH = 10000[source]
MAX_QUESTION_LENGTH = 100000[source]
MIN_NUM_OPTIONS = 2[source]
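
These limits can be inspected programmatically; a minimal sketch:

from edsl.questions.settings import Settings

Settings.MAX_NUM_OPTIONS # 200
Settings.MIN_NUM_OPTIONS # 2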