Prompts
Overview
Prompts are the texts sent to a language model to guide how it generates responses. They can include questions, instructions, or any other textual information to be presented to the model.
Creating & showing prompts
There are two types of prompts:
- A user_prompt contains the instructions for a question.
- A system_prompt contains the instructions for the agent.
Note: Some models do not support system prompts, e.g., OpenAI’s o1 models. When using these models, the system prompt is ignored.
Methods
Methods for displaying prompts are available for both surveys and jobs:
Calling the show_prompts() method on a Survey will display the user prompts and the system prompts (if any agents are used) that will be sent to the model when the survey is run.
Calling the prompts() method on a Job (a survey combined with a model) will return a dataset of the prompts together with information about each question/scenario/agent/model combination and estimated cost.
For example, here we create a survey consisting of a single question and use the show_prompts() method to inspect the prompts without adding an agent:
from edsl import QuestionFreeText, Survey

q = QuestionFreeText(
    question_name = "today",
    question_text = "How do you feel today?"
)

survey = Survey([q])

survey.show_prompts()
Output:
user_prompt | system_prompt
---|---
How do you feel today? | 
In this example, the user_prompt is identical to the question text because there are no default additional instructions for free text questions, and the system_prompt is blank because we did not use an agent.
Here we create an agent, add it to the survey and show the prompts again:
from edsl import QuestionFreeText, Survey, Agent

q = QuestionFreeText(
    question_name = "today",
    question_text = "How do you feel today?"
)

agent = Agent(
    traits = {
        "persona": "You are a high school student.",
        "age": 15
    }
)

survey = Survey([q])

survey.by(agent).show_prompts()
Output:
user_prompt | system_prompt
---|---
How do you feel today? | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15}
This time we can see that the system_prompt includes the default agent instructions (“You are answering questions as if you were a human. Do not break character.”) together with the agent’s traits.
If we want to see more information about the question, we can create a job that combines the survey and a model, and call the prompts() method:
from edsl import Model

model = Model("gpt-4o")

survey.by(agent).by(model).prompts()
Output:
user_prompt | system_prompt | interview_index | question_name | scenario_index | agent_index | model | estimated_cost
---|---|---|---|---|---|---|---
How do you feel today? | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15} | 0 | today | 0 | 0 | gpt-4o | 0.0004125
Modifying agent instructions
An agent can also be constructed with an optional instruction. This text is added to the beginning of the system_prompt, replacing the default instructions “You are answering questions as if you were a human. Do not break character.” Here we create agents with and without an instruction and compare the prompts:
from edsl import AgentList, Agent

agents = AgentList([
    Agent(
        traits = {"persona": "You are a high school student.", "age": 15}
        # no instruction
    ),
    Agent(
        traits = {"persona": "You are a high school student.", "age": 15},
        instruction = "You are tired."
    )
])

survey.by(agents).show_prompts() # using the survey from the previous examples
Output:
user_prompt | system_prompt
---|---
How do you feel today? | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15}
How do you feel today? | You are tired. Your traits: {'persona': 'You are a high school student.', 'age': 15}
If we use the prompts() method to see more details, we find that the agent_index differs for each agent, allowing us to distinguish between them in the survey results, and that the interview_index is incremented for each question/agent/model combination:
survey.by(agents).by(model).prompts() # using the survey, agents and model from examples above
Output:
user_prompt | system_prompt | interview_index | question_name | scenario_index | agent_index | model | estimated_cost
---|---|---|---|---|---|---|---
How do you feel today? | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15} | 0 | today | 0 | 0 | gpt-4o | 0.0004125
How do you feel today? | You are tired. Your traits: {'persona': 'You are a high school student.', 'age': 15} | 1 | today | 0 | 1 | gpt-4o | 0.000265
Agent names
Agents can also be constructed with an optional unique name parameter, which does not appear in the prompts but can be useful for identifying agents in the results. The name is stored in the agent_name column that is automatically added to the results. The default agent name in results is “Agent” followed by the agent’s index in the agent list (e.g., “Agent_0”, “Agent_1”, etc.).
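For example, here is a minimal sketch of naming agents so they are easier to identify in results, reusing the survey from the examples above (the name values shown are arbitrary):

from edsl import Agent, AgentList

agents = AgentList([
    Agent(name = "student_1", traits = {"persona": "You are a high school student.", "age": 15}),
    Agent(name = "student_2", traits = {"persona": "You are a high school student.", "age": 16})
])

results = survey.by(agents).run()

results.select("agent.agent_name") # the names appear in the agent_name column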
Learn more about designing Agents and accessing columns in Results.
Instructions for question types
In the examples above, the user_prompt for the question was identical to the question text. This is because the question type was free text, which does not include additional instructions by default. Question types other than free text include additional instructions in the user_prompt that are specific to the question type.
For example, here we create a multiple choice question and inspect the user prompt:
from edsl import QuestionMultipleChoice, Survey

q = QuestionMultipleChoice(
    question_name = "favorite_subject",
    question_text = "What is your favorite subject?",
    question_options = ["Math", "English", "Social studies", "Science", "Other"]
)

survey = Survey([q])

survey.by(agent).prompts().select("user_prompt") # to display just the user prompt
Output:

user_prompt
---
What is your favorite subject? Math English Social studies Science Other Only 1 option may be selected. Respond only with a string corresponding to one of the options. After the answer, you can put a comment explaining why you chose that option on the next line.

In this case, the user_prompt for the question includes both the question text and the default instructions for multiple choice questions (“Only 1 option may be selected…”). Other question types have their own default instructions that specify how the response should be formatted.
Learn more about the different question types in the Questions section of the documentation.
Prompts for multiple questions
If a survey consists of multiple questions, the show_prompts() and prompts() methods will display all of the prompts for each question/scenario/model/agent combination in the survey.
For example:
from edsl import QuestionMultipleChoice, QuestionYesNo, Survey

q1 = QuestionMultipleChoice(
    question_name = "favorite_subject",
    question_text = "What is your favorite subject?",
    question_options = ["Math", "English", "Social studies", "Science", "Other"]
)

q2 = QuestionYesNo(
    question_name = "college_plan",
    question_text = "Do you plan to go to college?"
)

survey = Survey([q1, q2])

survey.by(agent).by(model).prompts() # using the agent and model from previous examples
Output:
user_prompt | system_prompt | interview_index | question_name | scenario_index | agent_index | model | estimated_cost
---|---|---|---|---|---|---|---
What is your favorite subject? Math English Social studies Science Other Only 1 option may be selected. Respond only with a string corresponding to one of the options. After the answer, you can put a comment explaining why you chose that option on the next line. | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15} | 0 | favorite_subject | 0 | 0 | gpt-4o | 0.001105
Do you plan to go to college? No Yes Only 1 option may be selected. Please respond with just your answer. After the answer, you can put a comment explaining your response. | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15} | 0 | college_plan | 0 | 0 | gpt-4o | 0.00084
Modifying prompts
Templates for default prompts are provided in the edsl.prompts.library module. These prompts can be used as is or customized to suit specific requirements by creating new classes that inherit from the Prompt class.
Typically, prompts are created using the Prompt class, a subclass of the abstract PromptBase class, which defines the basic structure of a prompt. The prompts dataset shown in the examples above has the following fields:
user_prompt: The text of the question (including any question-type instructions) that will be sent to the model.
system_prompt: The text of the agent instructions that will be sent to the model.
interview_index: An integer that specifies the index of the interview.
question_name: A string that specifies the name of the question.
scenario_index: An integer that specifies the index of the scenario.
agent_index: An integer that specifies the index of the agent.
model: A string that specifies the model to be used.
estimated_cost: A float that specifies the estimated cost of the prompt.
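For instance, drawing on the class reference below, a Prompt can be constructed directly and rendered with replacement values. This is a minimal sketch, assuming Prompt is importable from edsl.prompts (the full class path is shown in the reference below):

from edsl.prompts import Prompt

p = Prompt("Hello, {{person}}")
p.render({"person": "John"}) # Prompt(text="""Hello, John""")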
Inspecting prompts after running a survey
After a survey is run, we can inspect the prompts that were used by selecting the prompt.* fields of the results.
For example, here we run the survey from above and inspect the prompts that were used:
results = survey.by(agent).by(model).run()
To select all the prompt columns at once:
results.select("prompt.*")
Output:
prompt.favorite_subject_user_prompt | prompt.college_plan_user_prompt | prompt.favorite_subject_system_prompt | prompt.college_plan_system_prompt
---|---|---|---
What is your favorite subject? Math English Social studies Science Other Only 1 option may be selected. Respond only with a string corresponding to one of the options. After the answer, you can put a comment explaining why you chose that option on the next line. | Do you plan to go to college? No Yes Only 1 option may be selected. Please respond with just your answer. After the answer, you can put a comment explaining your response. | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15} | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15}
Or, to specify the column order in the table, we can name them individually:
(
    results.select(
        "favorite_subject_system_prompt",
        "college_plan_system_prompt",
        "favorite_subject_user_prompt",
        "college_plan_user_prompt"
    )
)
Output:
prompt.favorite_subject_system_prompt | prompt.college_plan_system_prompt | prompt.favorite_subject_user_prompt | prompt.college_plan_user_prompt
---|---|---|---
You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15} | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a high school student.', 'age': 15} | What is your favorite subject? Math English Social studies Science Other Only 1 option may be selected. Respond only with a string corresponding to one of the options. After the answer, you can put a comment explaining why you chose that option on the next line. | Do you plan to go to college? No Yes Only 1 option may be selected. Please respond with just your answer. After the answer, you can put a comment explaining your response.
More about question prompts
See the Questions section for more details on how to create and customize question prompts with question_presentation and answering_instructions parameters in the Question type constructor.
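For example, here is a sketch of overriding these parameters at construction time; the replacement texts shown are illustrative only:

from edsl import QuestionYesNo, Survey

q = QuestionYesNo(
    question_name = "college_plan",
    question_text = "Do you plan to go to college?",
    question_presentation = "{{ question_text }}", # how the question is shown to the model
    answering_instructions = "Respond only with 'Yes' or 'No'." # replaces the default instructions
)

Survey([q]).show_prompts()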
Prompts class
- class edsl.prompts.Prompt.Prompt(text: str | None = None)[source]
Bases: PersistenceMixin, RepresentationMixin
Class for creating a prompt to be used in a survey.
- __init__(text: str | None = None)[source]
Create a Prompt object.
- Parameters:
text – The text of the prompt.
- classmethod from_dict(data) PromptBase [source]
Create a Prompt from a dictionary.
Example:
>>> p = Prompt("Hello, {{person}}")
>>> p2 = Prompt.from_dict(p.to_dict())
>>> p2
Prompt(text="""Hello, {{person}}""")
- classmethod from_template(file_name: str, path_to_folder: str | Path | None = None, **kwargs: Dict[str, Any]) PromptBase [source]
Create a PromptBase from a Jinja template.
- Args:
file_name (str): The name of the Jinja template file.
path_to_folder (Union[str, Path]): The path to the folder containing the template. Can be absolute or relative.
**kwargs: Variables to be passed to the template for rendering.
- Returns:
PromptBase: An instance of PromptBase with the rendered template as text.
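For example, a sketch of rendering a prompt from a template file; the file name greeting.j2, the templates folder, and its contents are hypothetical:

from edsl.prompts import Prompt

# Suppose templates/greeting.j2 contains: Hello, {{ person }}
p = Prompt.from_template("greeting.j2", path_to_folder = "templates", person = "John")
# p would then be Prompt(text="""Hello, John""")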
- classmethod from_txt(filename: str) PromptBase [source]
Create a Prompt from text.
- Parameters:
filename – The name of the file containing the text of the prompt.
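For example, assuming a local file my_prompt.txt containing the prompt text (the file name is hypothetical):

from edsl.prompts import Prompt

p = Prompt.from_txt("my_prompt.txt")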
- property has_variables: bool[source]
Return True if the prompt has variables.
Example:
>>> p = Prompt("Hello, {{person}}")
>>> p.has_variables
True

>>> p = Prompt("Hello, person")
>>> p.has_variables
False
- render(primary_replacement: dict, **additional_replacements) str [source]
Render the prompt with the replacements.
- Parameters:
primary_replacement – The primary replacement dictionary.
additional_replacements – Additional replacement dictionaries.
>>> p = Prompt("Hello, {{person}}")
>>> p.render({"person": "John"})
Prompt(text="""Hello, John""")

>>> p.render({"person": "Mr. {{last_name}}", "last_name": "Horton"})
Prompt(text="""Hello, Mr. Horton""")

>>> p.render({"person": "Mr. {{last_name}}", "last_name": "Ho{{letter}}ton"}, max_nesting = 1)
Prompt(text="""Hello, Mr. Ho{{ letter }}ton""")

>>> p.render({"person": "Mr. {{last_name}}"})
Prompt(text="""Hello, Mr. {{ last_name }}""")
- template_variables() list[str] [source]
Return the variables in the template.
Example:
>>> p = Prompt("Hello, {{person}}")
>>> p.template_variables()
['person']
- to_dict(add_edsl_version=False) dict[str, Any] [source]
Return the Prompt as a dictionary.
Example:
>>> p = Prompt("Hello, {{person}}")
>>> p.to_dict()
{'text': 'Hello, {{person}}', 'class_name': 'Prompt'}
- undefined_template_variables(replacement_dict: dict)[source]
Return the variables in the template that are not in the replacement_dict.
- Parameters:
replacement_dict – A dictionary of replacements to populate the template.
Example:
>>> p = Prompt("Hello, {{person}}")
>>> p.undefined_template_variables({"person": "John"})
[]

>>> p = Prompt("Hello, {{title}} {{person}}")
>>> p.undefined_template_variables({"person": "John"})
['title']
Comments
The user prompt for the multiple choice question above also includes an instruction for the model to provide a comment about its answer: “After the answer, you can put a comment explaining why you chose that option on the next line.” All question types other than free text automatically include a “comment”, which is stored in a separate field in the survey results. (The field is blank for free text questions.) Comments are not required, but they can be useful for understanding a model’s reasoning or for debugging a non-response. They can also be useful when you want to simulate a “chain of thought” by giving an agent the context of prior questions and answers in a survey. Comments can be turned off by passing include_comment = False to the question constructor.
Learn more about using question memory and piping comments or other question components in the Surveys section of the documentation.
For example, here we modify the multiple choice question above to not include a comment and show the resulting user prompt:

from edsl import QuestionMultipleChoice, Survey

q = QuestionMultipleChoice(
    question_name = "favorite_subject",
    question_text = "What is your favorite subject?",
    question_options = ["Math", "English", "Social studies", "Science", "Other"],
    include_comment = False
)

survey = Survey([q])

survey.by(agent).prompts().select("user_prompt")

Output:

user_prompt
---
What is your favorite subject? Math English Social studies Science Other Only 1 option may be selected. Respond only with a string corresponding to one of the options.

There is no longer any instruction about a comment at the end of the user prompt.