Please see the EDSL documentation page for more details on each of the object types and methods for looping questions and piping questions and answers that are used below.
from edsl import QuestionNumerical, Scenario, ScenarioList, Survey
We start by creating an initial question (with no content piped into it). EDSL comes with many common question types that we can choose from based on the form of the response that we want to get back from a model (e.g., free text, multiple choice, linear scale, etc.). Here we use a numerical question:
q_0 = QuestionNumerical(
    question_name = "q_0",
    question_text = "Please give me a random number.",
    min_value = 1,
    max_value = 100,
    answering_instructions = "The number must be an integer."
)
Next we create a question that we will “loop” (repeat) some number of times. We use double braces to create a {{ placeholder }} for content to be added to the question when we create copies of it. Here we want to simultaneously set the names of the copies of the question and reference those names in the question texts, so that content from one question and answer can be automatically piped into another copy of the question. To do this, we create placeholders for the question name ({{ num }}, which must be unique) and the question text ({{ text }}). Then in the next step we reference the question names in those texts.
Note: The names of the placeholders can be anything other than reserved names, and this example works with other question types as well. We just use a numerical question to keep the responses brief and easy to check!
q = QuestionNumerical(
    question_name = "q_{{ scenario.num }}",
    question_text = "{{ scenario.text }}",
    min_value = 1,
    max_value = 100,
    answering_instructions = "The number must be an integer."
)
Next we create a list of Scenario objects for the question name and question text inputs that we will pass to the loop method that we call on the question in order to create the copies (learn more about using scenarios):
s = ScenarioList(
    [Scenario({
        "num": n,
        "text": f"""
        I asked you for a random number between 1 and 100 and you gave me {{ q_{n-1}.answer }}.
        Please give me a new random number.
        """
    }) for n in range(1,6)]
)
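Because the scenario texts are built with Python f-strings, it is worth pausing on how the braces render: inside an f-string, doubled braces become literal single braces, while an expression like {n-1} is evaluated. A quick standalone check (plain Python, no EDSL required):

```python
# Inside an f-string, "{{" and "}}" render as literal braces,
# while {n-1} is evaluated, so the piped reference comes out
# wrapped in single braces.
n = 1
text = f"you gave me {{ q_{n-1}.answer }}."
print(text)  # prints: you gave me { q_0.answer }.
```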
The loop method creates a list of questions with the scenarios filled in. Note that because the f-strings render the doubled braces around the piped question names as single braces, we will see a warning that scenario placeholders require double braces, in case we used single braces inadvertently. We can ignore this message here, and confirm that our questions have been formatted as intended:
qq = q.loop(s)
qq
[Question('numerical', question_name = """q_1""", question_text = """
         I asked you for a random number between 1 and 100 and you gave me { q_0.answer }.
         Please give me a new random number.
         """, min_value = 1, max_value = 100, answering_instructions = """The number must be an integer."""),
 Question('numerical', question_name = """q_2""", question_text = """
         I asked you for a random number between 1 and 100 and you gave me { q_1.answer }.
         Please give me a new random number.
         """, min_value = 1, max_value = 100, answering_instructions = """The number must be an integer."""),
 Question('numerical', question_name = """q_3""", question_text = """
         I asked you for a random number between 1 and 100 and you gave me { q_2.answer }.
         Please give me a new random number.
         """, min_value = 1, max_value = 100, answering_instructions = """The number must be an integer."""),
 Question('numerical', question_name = """q_4""", question_text = """
         I asked you for a random number between 1 and 100 and you gave me { q_3.answer }.
         Please give me a new random number.
         """, min_value = 1, max_value = 100, answering_instructions = """The number must be an integer."""),
 Question('numerical', question_name = """q_5""", question_text = """
         I asked you for a random number between 1 and 100 and you gave me { q_4.answer }.
         Please give me a new random number.
         """, min_value = 1, max_value = 100, answering_instructions = """The number must be an integer.""")]
We pass the list of questions to a Survey object as usual in order to administer them together. Note that because we are piping answers into questions, the questions will automatically be administered in the order required by the piping. (If no piping or other survey rules are applied, questions are administered asynchronously by default. Learn more about applying survey rules and logic.) We can re-inspect the questions that are now in a survey:
survey = Survey(questions = [q_0] + qq)
survey
Survey # questions: 6; question_name list: ['q_0', 'q_1', 'q_2', 'q_3', 'q_4', 'q_5'];
question_name | question_text | min_value | max_value | answering_instructions | question_type
q_0 | Please give me a random number. | 1 | 100 | The number must be an integer. | numerical
q_1 | I asked you for a random number between 1 and 100 and you gave me { q_0.answer }. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical
q_2 | I asked you for a random number between 1 and 100 and you gave me { q_1.answer }. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical
q_3 | I asked you for a random number between 1 and 100 and you gave me { q_2.answer }. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical
q_4 | I asked you for a random number between 1 and 100 and you gave me { q_3.answer }. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical
q_5 | I asked you for a random number between 1 and 100 and you gave me { q_4.answer }. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical
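The ordering constraint imposed by piping can be made concrete: a question may only be administered after every question whose answer it references. A minimal way to extract those dependencies from the rendered question texts (an illustrative sketch, not EDSL's scheduler):

```python
import re

# Map each question to the earlier questions its text references
# via "{ q_N.answer }" placeholders (single or double braces).
texts = {
    "q_1": "you gave me { q_0.answer }.",
    "q_2": "you gave me { q_1.answer }.",
    "q_3": "you gave me { q_2.answer }.",
}
deps = {
    name: set(re.findall(r"\{+\s*(q_\d+)\.answer\s*\}+", text))
    for name, text in texts.items()
}
print(deps)  # each question depends on the one before it
```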
Next we select some models to generate responses (see our models pricing page for details on available models and documentation on specifying model parameters):
from edsl import Model, ModelList

m = ModelList([
    Model("gemini-1.5-flash", service_name = "google"),
    Model("gpt-4o", service_name = "openai")
    # etc.
])
We run the survey by adding the models and then calling the run() method on it:
results = survey.by(m).run()
We can see a list of the columns of the dataset of Results that has been generated:
results.columns
0  agent.agent_index
1  agent.agent_instruction
2  agent.agent_name
3  answer.q_0
4  answer.q_1
5  answer.q_2
6  answer.q_3
7  answer.q_4
8  answer.q_5
9  cache_keys.q_0_cache_key
10  cache_keys.q_1_cache_key
11  cache_keys.q_2_cache_key
12  cache_keys.q_3_cache_key
13  cache_keys.q_4_cache_key
14  cache_keys.q_5_cache_key
15  cache_used.q_0_cache_used
16  cache_used.q_1_cache_used
17  cache_used.q_2_cache_used
18  cache_used.q_3_cache_used
19  cache_used.q_4_cache_used
20  cache_used.q_5_cache_used
21  comment.q_0_comment
22  comment.q_1_comment
23  comment.q_2_comment
24  comment.q_3_comment
25  comment.q_4_comment
26  comment.q_5_comment
27  generated_tokens.q_0_generated_tokens
28  generated_tokens.q_1_generated_tokens
29  generated_tokens.q_2_generated_tokens
30  generated_tokens.q_3_generated_tokens
31  generated_tokens.q_4_generated_tokens
32  generated_tokens.q_5_generated_tokens
33  iteration.iteration
34  model.frequency_penalty
35  model.inference_service
36  model.logprobs
37  model.maxOutputTokens
38  model.max_tokens
39  model.model
40  model.model_index
41  model.presence_penalty
42  model.stopSequences
43  model.temperature
44  model.topK
45  model.topP
46  model.top_logprobs
47  model.top_p
48  prompt.q_0_system_prompt
49  prompt.q_0_user_prompt
50  prompt.q_1_system_prompt
51  prompt.q_1_user_prompt
52  prompt.q_2_system_prompt
53  prompt.q_2_user_prompt
54  prompt.q_3_system_prompt
55  prompt.q_3_user_prompt
56  prompt.q_4_system_prompt
57  prompt.q_4_user_prompt
58  prompt.q_5_system_prompt
59  prompt.q_5_user_prompt
60  question_options.q_0_question_options
61  question_options.q_1_question_options
62  question_options.q_2_question_options
63  question_options.q_3_question_options
64  question_options.q_4_question_options
65  question_options.q_5_question_options
66  question_text.q_0_question_text
67  question_text.q_1_question_text
68  question_text.q_2_question_text
69  question_text.q_3_question_text
70  question_text.q_4_question_text
71  question_text.q_5_question_text
72  question_type.q_0_question_type
73  question_type.q_1_question_type
74  question_type.q_2_question_type
75  question_type.q_3_question_type
76  question_type.q_4_question_type
77  question_type.q_5_question_type
78  raw_model_response.q_0_cost
79  raw_model_response.q_0_input_price_per_million_tokens
80  raw_model_response.q_0_input_tokens
81  raw_model_response.q_0_one_usd_buys
82  raw_model_response.q_0_output_price_per_million_tokens
83  raw_model_response.q_0_output_tokens
84  raw_model_response.q_0_raw_model_response
85  raw_model_response.q_1_cost
86  raw_model_response.q_1_input_price_per_million_tokens
87  raw_model_response.q_1_input_tokens
88  raw_model_response.q_1_one_usd_buys
89  raw_model_response.q_1_output_price_per_million_tokens
90  raw_model_response.q_1_output_tokens
91  raw_model_response.q_1_raw_model_response
92  raw_model_response.q_2_cost
93  raw_model_response.q_2_input_price_per_million_tokens
94  raw_model_response.q_2_input_tokens
95  raw_model_response.q_2_one_usd_buys
96  raw_model_response.q_2_output_price_per_million_tokens
97  raw_model_response.q_2_output_tokens
98  raw_model_response.q_2_raw_model_response
99  raw_model_response.q_3_cost
100  raw_model_response.q_3_input_price_per_million_tokens
101  raw_model_response.q_3_input_tokens
102  raw_model_response.q_3_one_usd_buys
103  raw_model_response.q_3_output_price_per_million_tokens
104  raw_model_response.q_3_output_tokens
105  raw_model_response.q_3_raw_model_response
106  raw_model_response.q_4_cost
107  raw_model_response.q_4_input_price_per_million_tokens
108  raw_model_response.q_4_input_tokens
109  raw_model_response.q_4_one_usd_buys
110  raw_model_response.q_4_output_price_per_million_tokens
111  raw_model_response.q_4_output_tokens
112  raw_model_response.q_4_raw_model_response
113  raw_model_response.q_5_cost
114  raw_model_response.q_5_input_price_per_million_tokens
115  raw_model_response.q_5_input_tokens
116  raw_model_response.q_5_one_usd_buys
117  raw_model_response.q_5_output_price_per_million_tokens
118  raw_model_response.q_5_output_tokens
119  raw_model_response.q_5_raw_model_response
120  reasoning_summary.q_0_reasoning_summary
121  reasoning_summary.q_1_reasoning_summary
122  reasoning_summary.q_2_reasoning_summary
123  reasoning_summary.q_3_reasoning_summary
124  reasoning_summary.q_4_reasoning_summary
125  reasoning_summary.q_5_reasoning_summary
126  scenario.scenario_index
All of these components can be analyzed with a variety of built-in methods for working with results. Here we create a table of responses together with the question prompts, to verify that the piping worked:
(
    results
    .select(
        "model",
        "prompt.q_0_user_prompt", "q_0",
        "prompt.q_1_user_prompt", "q_1",
        "prompt.q_2_user_prompt", "q_2",
        "prompt.q_3_user_prompt", "q_3",
        "prompt.q_4_user_prompt", "q_4",
        "prompt.q_5_user_prompt", "q_5"
    )
)
(Each user prompt below also ends with: "Minimum answer value: 1 Maximum answer value: 100 The number must be an integer.")

model.model: gemini-1.5-flash
  prompt.q_0_user_prompt: Please give me a random number.  |  answer.q_0: 67
  prompt.q_1_user_prompt: I asked you for a random number between 1 and 100 and you gave me { q_0.answer }. Please give me a new random number.  |  answer.q_1: 42
  prompt.q_2_user_prompt: I asked you for a random number between 1 and 100 and you gave me { q_1.answer }. Please give me a new random number.  |  answer.q_2: 42
  prompt.q_3_user_prompt: I asked you for a random number between 1 and 100 and you gave me { q_2.answer }. Please give me a new random number.  |  answer.q_3: 42
  prompt.q_4_user_prompt: I asked you for a random number between 1 and 100 and you gave me . Please give me a new random number.  |  answer.q_4: 42
  prompt.q_5_user_prompt: I asked you for a random number between 1 and 100 and you gave me . Please give me a new random number.  |  answer.q_5: 97

model.model: gpt-4o
  prompt.q_0_user_prompt: Please give me a random number.  |  answer.q_0: 1
  prompt.q_1_user_prompt: I asked you for a random number between 1 and 100 and you gave me { q_0.answer }. Please give me a new random number.  |  answer.q_1: 1
  prompt.q_2_user_prompt: I asked you for a random number between 1 and 100 and you gave me { q_1.answer }. Please give me a new random number.  |  answer.q_2: 1
  prompt.q_3_user_prompt: I asked you for a random number between 1 and 100 and you gave me { q_2.answer }. Please give me a new random number.  |  answer.q_3: 1
  prompt.q_4_user_prompt: I asked you for a random number between 1 and 100 and you gave me . Please give me a new random number.  |  answer.q_4: 1
  prompt.q_5_user_prompt: I asked you for a random number between 1 and 100 and you gave me . Please give me a new random number.  |  answer.q_5: 1

Adding question memory

Returning to the survey rules mentioned above, here we automatically add a memory of all prior questions to each new question, e.g., to see how this may impact responses:
results_memory = survey.set_full_memory_mode().by(m).run()
results_memory.select("model", "q_0", "q_1", "q_2", "q_3", "q_4", "q_5")
model.model | answer.q_0 | answer.q_1 | answer.q_2 | answer.q_3 | answer.q_4 | answer.q_5
gemini-1.5-flash | 67 | 92 | 42 | 31 | 85 | 17
gpt-4o | 1 | 1 | 1 | 1 | 1 | 1
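To see why the answers can change, it helps to picture what full memory mode adds: each new question's prompt also carries the prior questions and answers. A rough sketch of that prompt assembly (the Q/A format here is an assumption for illustration; EDSL's actual memory prompt wording may differ):

```python
# Hypothetical prompt assembly under full memory: prior Q&A pairs
# are included before the current question. The "Q:"/"A:" format is
# assumed for illustration; EDSL's real memory prompt may differ.
history = [
    ("Please give me a random number.", 67),
    ("You gave me 67. Please give me a new random number.", 92),
]
memory = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
prompt = memory + "\nQ: Please give me a new random number."
print(prompt)
```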

Posting to Coop

Coop is a platform for posting and sharing AI-based research. It is fully integrated with EDSL and free to use. Learn more about how it works or create an account: https://www.expectedparrot.com/login. In the examples above, results generated using remote inference (run at the Expected Parrot server) were automatically posted to Coop (see links to results). Here we show how to manually post any local content to Coop, such as this notebook:
from edsl import Notebook

nb = Notebook(path = "looping_and_piping.ipynb")

nb.push(
    description = "Simultaneous looping and piping",
    alias = "looping-piping",
    visibility = "public"
)
Content posted to Coop can be modified from your workspace or at the web app at any time.