Looping & piping questions
This notebook provides example EDSL code for automatically looping (repeating) a question with content piped from other questions and answers.
Please see the EDSL documentation for more details on the object types and methods used below for looping questions and piping questions and answers.
[1]:
from edsl import QuestionNumerical, Scenario, ScenarioList, Survey
We start by creating an initial question (with no content piped into it). EDSL comes with many common question types that we can choose from based on the form of the response that we want to get back from a model (e.g., free text, multiple choice, linear scale, etc.). Here we use a numerical question:
[2]:
q_0 = QuestionNumerical(
    question_name = "q_0",
    question_text = "Please give me a random number.",
    min_value = 1,
    max_value = 100,
    answering_instructions = "The number must be an integer."
)
Next we create a question that we will “loop” (repeat) some number of times. We use double braces to create a {{ placeholder }} for content to be added to the question when we create copies of it.
Here we want to simultaneously set the names of the copies of the question and reference those names in the question texts, so that content from one question and answer can be automatically piped into another copy of the question. To do this, we create a placeholder for each question name ({{ num }}, which must be unique) and for the question text ({{ text }}). Then, in the next step, we reference the question names in those texts.
(Note that the names of the placeholders can be anything other than reserved names, and this example works with other question types as well. We just use a numerical question to keep the responses brief and easy to check!)
[3]:
q = QuestionNumerical(
    question_name = "q_{{ num }}",
    question_text = "{{ text }}",
    min_value = 1,
    max_value = 100,
    answering_instructions = "The number must be an integer."
)
Next we create a list of Scenario objects for the question name and question text inputs. We will pass this list to the loop method, which we call on the question in order to create the copies (learn more about using scenarios):
[4]:
s = ScenarioList(
    [Scenario({
        "num": n,
        "text": f"""
I asked you for a random number between 1 and 100 and you gave me {{ q_{n-1}.answer }}.
Please give me a new random number.
"""
    }) for n in range(1,6)]
)
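Note how the f-string above handles braces: doubled braces {{ }} produce literal single braces in the rendered text, while {n-1} is interpolated by Python. A quick plain-Python check (independent of EDSL) of what each scenario's text actually contains:

```python
# Plain-Python check of the f-string rendering above (no EDSL required).
# Doubled braces {{ }} become literal single braces; {n-1} is interpolated.
n = 1
text = f"""
I asked you for a random number between 1 and 100 and you gave me {{ q_{n-1}.answer }}.
Please give me a new random number.
"""
print(text)
# The rendered text contains single braces: "{ q_0.answer }"
```

This is why each scenario's text ends up with single-braced references like { q_0.answer }, which triggers the warning shown below.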
The loop method creates a list of questions with the scenario values substituted in. Note that because we used single braces (for ease of referencing the piped question names), we will see a warning that scenarios require double braces, in case we used the single braces inadvertently. We can ignore this message here, and confirm that our questions have been formatted as intended:
[5]:
qq = q.loop(s)
qq
/Users/a16174/edsl/edsl/questions/descriptors.py:400: UserWarning: WARNING: Question text contains a single-braced substring. If you intended to parameterize the question with a Scenario, this will be changed to a double-braced substring, e.g. {{variable}}.
See details on constructing Scenarios in the docs: https://docs.expectedparrot.com/en/latest/scenarios.html
warnings.warn(
[5]:
[Question('numerical', question_name = """q_1""", question_text = """
I asked you for a random number between 1 and 100 and you gave me {{ q_0.answer }}.
Please give me a new random number.
""", min_value = 1, max_value = 100, answering_instructions = """The number must be an integer."""),
Question('numerical', question_name = """q_2""", question_text = """
I asked you for a random number between 1 and 100 and you gave me {{ q_1.answer }}.
Please give me a new random number.
""", min_value = 1, max_value = 100, answering_instructions = """The number must be an integer."""),
Question('numerical', question_name = """q_3""", question_text = """
I asked you for a random number between 1 and 100 and you gave me {{ q_2.answer }}.
Please give me a new random number.
""", min_value = 1, max_value = 100, answering_instructions = """The number must be an integer."""),
Question('numerical', question_name = """q_4""", question_text = """
I asked you for a random number between 1 and 100 and you gave me {{ q_3.answer }}.
Please give me a new random number.
""", min_value = 1, max_value = 100, answering_instructions = """The number must be an integer."""),
Question('numerical', question_name = """q_5""", question_text = """
I asked you for a random number between 1 and 100 and you gave me {{ q_4.answer }}.
Please give me a new random number.
""", min_value = 1, max_value = 100, answering_instructions = """The number must be an integer.""")]
We pass the list of questions to a Survey object as usual in order to administer them together. Note that because we are piping answers into questions, the questions will automatically be administered in the order required by the piping. (If no piping or other survey rules are applied, questions are administered asynchronously by default. Learn more about applying survey rules and logic.)
We can re-inspect the questions that are now in a survey:
[6]:
survey = Survey(questions = [q_0] + qq)
survey
[6]:
Survey # questions: 6; question_name list: ['q_0', 'q_1', 'q_2', 'q_3', 'q_4', 'q_5'];
| | question_name | question_text | min_value | max_value | answering_instructions | question_type |
|---|---|---|---|---|---|---|
0 | q_0 | Please give me a random number. | 1 | 100 | The number must be an integer. | numerical |
1 | q_1 | I asked you for a random number between 1 and 100 and you gave me {{ q_0.answer }}. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical |
2 | q_2 | I asked you for a random number between 1 and 100 and you gave me {{ q_1.answer }}. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical |
3 | q_3 | I asked you for a random number between 1 and 100 and you gave me {{ q_2.answer }}. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical |
4 | q_4 | I asked you for a random number between 1 and 100 and you gave me {{ q_3.answer }}. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical |
5 | q_5 | I asked you for a random number between 1 and 100 and you gave me {{ q_4.answer }}. Please give me a new random number. | 1 | 100 | The number must be an integer. | numerical |
Next we select some models to generate responses (see our models pricing page for details on available models and documentation on specifying model parameters):
[7]:
from edsl import Model, ModelList
m = ModelList(
    Model(model) for model in [
        "gemini-1.5-flash",
        "gpt-4o",
        # etc.
    ]
)
We run the survey by adding the models and then calling the run() method on it:
[8]:
results = survey.by(m).run()
Job UUID | d313a9e1-d5bc-41f7-82fb-b2c276e6cc05 |
Progress Bar URL | https://www.expectedparrot.com/home/remote-job-progress/d313a9e1-d5bc-41f7-82fb-b2c276e6cc05 |
Error Report URL | None |
Results UUID | 016b07be-e42a-41f6-afef-fd48aa72ca7b |
Results URL | None |
We can see a list of the columns of the dataset of Results that has been generated:
[9]:
results.columns
[9]:
| | 0 |
|---|---|
0 | agent.agent_index |
1 | agent.agent_instruction |
2 | agent.agent_name |
3 | answer.q_0 |
4 | answer.q_1 |
5 | answer.q_2 |
6 | answer.q_3 |
7 | answer.q_4 |
8 | answer.q_5 |
9 | cache_keys.q_0_cache_key |
10 | cache_keys.q_1_cache_key |
11 | cache_keys.q_2_cache_key |
12 | cache_keys.q_3_cache_key |
13 | cache_keys.q_4_cache_key |
14 | cache_keys.q_5_cache_key |
15 | cache_used.q_0_cache_used |
16 | cache_used.q_1_cache_used |
17 | cache_used.q_2_cache_used |
18 | cache_used.q_3_cache_used |
19 | cache_used.q_4_cache_used |
20 | cache_used.q_5_cache_used |
21 | comment.q_0_comment |
22 | comment.q_1_comment |
23 | comment.q_2_comment |
24 | comment.q_3_comment |
25 | comment.q_4_comment |
26 | comment.q_5_comment |
27 | generated_tokens.q_0_generated_tokens |
28 | generated_tokens.q_1_generated_tokens |
29 | generated_tokens.q_2_generated_tokens |
30 | generated_tokens.q_3_generated_tokens |
31 | generated_tokens.q_4_generated_tokens |
32 | generated_tokens.q_5_generated_tokens |
33 | iteration.iteration |
34 | model.frequency_penalty |
35 | model.logprobs |
36 | model.maxOutputTokens |
37 | model.max_tokens |
38 | model.model |
39 | model.model_index |
40 | model.presence_penalty |
41 | model.stopSequences |
42 | model.temperature |
43 | model.topK |
44 | model.topP |
45 | model.top_logprobs |
46 | model.top_p |
47 | prompt.q_0_system_prompt |
48 | prompt.q_0_user_prompt |
49 | prompt.q_1_system_prompt |
50 | prompt.q_1_user_prompt |
51 | prompt.q_2_system_prompt |
52 | prompt.q_2_user_prompt |
53 | prompt.q_3_system_prompt |
54 | prompt.q_3_user_prompt |
55 | prompt.q_4_system_prompt |
56 | prompt.q_4_user_prompt |
57 | prompt.q_5_system_prompt |
58 | prompt.q_5_user_prompt |
59 | question_options.q_0_question_options |
60 | question_options.q_1_question_options |
61 | question_options.q_2_question_options |
62 | question_options.q_3_question_options |
63 | question_options.q_4_question_options |
64 | question_options.q_5_question_options |
65 | question_text.q_0_question_text |
66 | question_text.q_1_question_text |
67 | question_text.q_2_question_text |
68 | question_text.q_3_question_text |
69 | question_text.q_4_question_text |
70 | question_text.q_5_question_text |
71 | question_type.q_0_question_type |
72 | question_type.q_1_question_type |
73 | question_type.q_2_question_type |
74 | question_type.q_3_question_type |
75 | question_type.q_4_question_type |
76 | question_type.q_5_question_type |
77 | raw_model_response.q_0_cost |
78 | raw_model_response.q_0_one_usd_buys |
79 | raw_model_response.q_0_raw_model_response |
80 | raw_model_response.q_1_cost |
81 | raw_model_response.q_1_one_usd_buys |
82 | raw_model_response.q_1_raw_model_response |
83 | raw_model_response.q_2_cost |
84 | raw_model_response.q_2_one_usd_buys |
85 | raw_model_response.q_2_raw_model_response |
86 | raw_model_response.q_3_cost |
87 | raw_model_response.q_3_one_usd_buys |
88 | raw_model_response.q_3_raw_model_response |
89 | raw_model_response.q_4_cost |
90 | raw_model_response.q_4_one_usd_buys |
91 | raw_model_response.q_4_raw_model_response |
92 | raw_model_response.q_5_cost |
93 | raw_model_response.q_5_one_usd_buys |
94 | raw_model_response.q_5_raw_model_response |
95 | scenario.scenario_index |
All of these components can be analyzed using built-in methods for working with results. Here we create a table of the responses together with the question prompts, to verify that the piping worked:
[10]:
(
    results
    .select(
        "model",
        "prompt.q_0_user_prompt", "q_0",
        "prompt.q_1_user_prompt", "q_1",
        "prompt.q_2_user_prompt", "q_2",
        "prompt.q_3_user_prompt", "q_3",
        "prompt.q_4_user_prompt", "q_4",
        "prompt.q_5_user_prompt", "q_5"
    )
)
[10]:
| | model.model | prompt.q_0_user_prompt | answer.q_0 | prompt.q_1_user_prompt | answer.q_1 | prompt.q_2_user_prompt | answer.q_2 | prompt.q_3_user_prompt | answer.q_3 | prompt.q_4_user_prompt | answer.q_4 | prompt.q_5_user_prompt | answer.q_5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | gemini-1.5-flash | Please give me a random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 87 | I asked you for a random number between 1 and 100 and you gave me 87. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 42 | I asked you for a random number between 1 and 100 and you gave me 42. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 97 | I asked you for a random number between 1 and 100 and you gave me 97. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 42 | I asked you for a random number between 1 and 100 and you gave me 42. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 97 | I asked you for a random number between 1 and 100 and you gave me 97. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 42 |
1 | gpt-4o | Please give me a random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 1 | I asked you for a random number between 1 and 100 and you gave me 1. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 1 | I asked you for a random number between 1 and 100 and you gave me 1. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 1 | I asked you for a random number between 1 and 100 and you gave me 1. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 1 | I asked you for a random number between 1 and 100 and you gave me 1. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 1 | I asked you for a random number between 1 and 100 and you gave me 1. Please give me a new random number. Minimum answer value: 1 Maximum answer value: 100 The number must be an integer. | 1 |
Adding question memory
Returning to the survey rules mentioned above: here we automatically add a memory of all prior questions and answers to each new question, e.g., to see how this may impact responses:
[11]:
results_memory = survey.set_full_memory_mode().by(m).run()
Job UUID | 7af2f8f6-a2ba-4b29-9e51-cfa700f16e7c |
Progress Bar URL | https://www.expectedparrot.com/home/remote-job-progress/7af2f8f6-a2ba-4b29-9e51-cfa700f16e7c |
Error Report URL | None |
Results UUID | 6bdfc82a-5af4-4302-97c8-6e3b0cd3d74a |
Results URL | None |
[12]:
results_memory.select("model", "q_0", "q_1", "q_2", "q_3", "q_4", "q_5")
[12]:
| | model.model | answer.q_0 | answer.q_1 | answer.q_2 | answer.q_3 | answer.q_4 | answer.q_5 |
|---|---|---|---|---|---|---|---|
0 | gemini-1.5-flash | 87 | 42 | 17 | 91 | 63 | 27 |
1 | gpt-4o | 1 | 1 | 1 | 42 | 1 | 1 |
Posting to Coop
Coop is a platform for posting and sharing AI-based research. It is fully integrated with EDSL and free to use. Learn more about how it works or create an account: https://www.expectedparrot.com/login.
In the examples above, results generated using remote inference (run at the Expected Parrot server) were automatically posted to Coop (see links to results).
Here we show how to manually post any local content to Coop, such as this notebook:
[13]:
# from edsl import Notebook
# n = Notebook("looping_and_piping.ipynb")
# n.push(description = "Simultaneous looping and piping", visibility = "public")
Updating an object at Coop:
[14]:
from edsl import Notebook
n = Notebook("looping_and_piping.ipynb")
n.patch(uuid = "4359ce69-e494-4085-930c-b46a5f433305", value = n)
[14]:
{'status': 'success'}
Content posted to Coop can be modified from your workspace or at the web app at any time.