Creating a digital twin

This notebook contains sample EDSL code for creating an AI agent and prompting it to critique some content. The code is readily editable to create other agents and survey questions, and to run them with any available language models.

EDSL is an open-source library for simulating surveys and experiments with AI. Please see our documentation page for tips and tutorials on getting started.
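
EDSL is available on PyPI. If you are running this notebook locally, you can install or update the package with pip:

# Install (or update) EDSL from PyPI; uncomment to run in a notebook cell
# !pip install --upgrade edsl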

[1]:
from edsl import (
    QuestionMultipleChoice,
    QuestionCheckBox,
    QuestionFreeText,
    QuestionLinearScale,
    QuestionList,
    QuestionBudget,
    Agent,
    ScenarioList,
    Survey,
    Model
)
[2]:
# Construct relevant traits as a dictionary
agent_traits = {
    "persona": """You are a middle-aged mom in Cambridge, Massachusetts.
        You hope to own a driverless minivan in the near future.
        You are working on an open source Python package for conducting research with AI.""",
    "age": 45,
    "location": "US",
    "industry": "information technology",
    "company": "Expected Parrot",
    "occupation": "startup cofounder",
    "hobbies": "kayaking, beach walks",
}

# Pass the traits and an optional name to an agent
agent = Agent(name="Robin", traits=agent_traits)
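
As a quick sanity check, a single question can be administered to the agent before building the full survey. This is a minimal sketch; the question below is illustrative and not part of the task, and it uses the same `.by(...).run()` pattern shown later in this notebook:

# Illustrative sanity check: ask the agent a standalone question
q_test = QuestionFreeText(
    question_name="intro",
    question_text="Please introduce yourself in one sentence.",
)
# q_test.by(agent).run().select("intro")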
[3]:
# Optionally create some special instructions for the task
context = """You are answering questions about a software package for conducting surveys and experiments
          with large language models. The creators of the software want to know your opinions about some
          new features they are considering building. Your feedback will help them make decisions about
          those potential features. """
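
If you prefer to keep the question texts short, instructions like these can instead be attached to the agent itself. This is a sketch assuming the optional `instruction` parameter of the `Agent` constructor described in the EDSL docs:

# Sketch: attach the task context to the agent rather than to each question
agent_with_context = Agent(
    name="Robin",
    traits=agent_traits,
    instruction=context,
)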
[4]:
# Construct questions for the task
q1 = QuestionMultipleChoice(
    question_name="use_often",
    question_text=context
    + """Consider the following new feature: {{ content }}
    How often do you think you would use it?""",
    question_options=["Never", "Occasionally", "Frequently", "All the time"],
)

q2 = QuestionCheckBox(
    question_name="checkbox",
    question_text=context
    + """Consider the following new feature: {{ content }}
    Select all that apply.""",
    question_options=[
        "This feature would be useful to me.",
        "This feature would make me more productive.",
        "This feature will be important to me.",
        "The benefits of this feature are not clear to me.",
        "I would like to see some examples of how to use this feature.",
    ],
)

q3 = QuestionFreeText(
    question_name="concerns",
    question_text=context
    + "Do you have any concerns about the value and usefulness of this new feature: {{ content }}",
)

q4 = QuestionLinearScale(
    question_name="likely_to_use",
    question_text=context
    + """Consider the following new feature: {{ content }}
    On a scale from 1 to 5, how likely are you to use this new feature?
    (1 = not at all likely, 5 = very likely)""",
    question_options=[1, 2, 3, 4, 5],
    option_labels={1: "Not at all likely", 5: "Very likely"},
)
[5]:
# Create a survey with the questions
survey = Survey(questions=[q1, q2, q3, q4])
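
By default the questions are administered asynchronously, so the agent does not see its earlier answers. If you want prior questions and answers included as context, EDSL surveys support memory rules; a minimal sketch, assuming the `set_full_memory_mode` method described in the EDSL docs:

# Sketch: give the agent full memory of prior questions and answers
# survey = survey.set_full_memory_mode()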
[6]:
# Create some content for the agent to review
contents = [
    "An optional progress bar that shows how many of your questions have been answered while your survey is running.",
    "A method that lets you quickly check what version of the package you have installed.",
    "A method that lets you include questions and responses as context for new questions.",
]

# Parameterize the questions with the content
scenarios = ScenarioList.from_list("content", contents)
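
`ScenarioList.from_list` is shorthand for building the scenarios one at a time; the equivalent manual construction, using the `Scenario` class that edsl also exports, is:

from edsl import Scenario

# Equivalent manual construction of the scenario list
scenarios_manual = ScenarioList([Scenario({"content": c}) for c in contents])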
[7]:
agent
[7]:
keys         values
persona      You are a middle-aged mom in Cambridge, Massachusetts. You hope to own a driverless minivan in the near future. You are working on an open source Python package for conducting research with AI.
age          45
location     US
industry     information technology
company      Expected Parrot
occupation   startup cofounder
hobbies      kayaking, beach walks
[10]:
# Run the survey with the scenarios and agent, and store the results
# (remote inference sends the job to the Expected Parrot server)
results = survey.by(scenarios).by(agent).run()
Remote inference activated. Sending job to server...
Job sent to server. (Job uuid=89206093-b2e4-4550-b08e-805f819dd71f).
Job completed and Results stored on Coop: https://www.expectedparrot.com/content/43032eb0-cc7b-4a94-a3e0-a6bcf4ff91c1.
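
No model is specified above, so the job runs with the default model. `Model` is imported at the top but never used; to select a specific model, chain another `.by()` call. A sketch (the model name is illustrative):

# Sketch: run the survey with a specific model
model = Model("gpt-4o")
# results = survey.by(scenarios).by(agent).by(model).run()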
[11]:
# Show all columns of the Results object
results.columns
[11]:
['agent.age',
 'agent.agent_instruction',
 'agent.agent_name',
 'agent.company',
 'agent.hobbies',
 'agent.industry',
 'agent.location',
 'agent.occupation',
 'agent.persona',
 'answer.checkbox',
 'answer.concerns',
 'answer.likely_to_use',
 'answer.use_often',
 'comment.checkbox_comment',
 'comment.concerns_comment',
 'comment.likely_to_use_comment',
 'comment.use_often_comment',
 'generated_tokens.checkbox_generated_tokens',
 'generated_tokens.concerns_generated_tokens',
 'generated_tokens.likely_to_use_generated_tokens',
 'generated_tokens.use_often_generated_tokens',
 'iteration.iteration',
 'model.frequency_penalty',
 'model.logprobs',
 'model.max_tokens',
 'model.model',
 'model.presence_penalty',
 'model.temperature',
 'model.top_logprobs',
 'model.top_p',
 'prompt.checkbox_system_prompt',
 'prompt.checkbox_user_prompt',
 'prompt.concerns_system_prompt',
 'prompt.concerns_user_prompt',
 'prompt.likely_to_use_system_prompt',
 'prompt.likely_to_use_user_prompt',
 'prompt.use_often_system_prompt',
 'prompt.use_often_user_prompt',
 'question_options.checkbox_question_options',
 'question_options.concerns_question_options',
 'question_options.likely_to_use_question_options',
 'question_options.use_often_question_options',
 'question_text.checkbox_question_text',
 'question_text.concerns_question_text',
 'question_text.likely_to_use_question_text',
 'question_text.use_often_question_text',
 'question_type.checkbox_question_type',
 'question_type.concerns_question_type',
 'question_type.likely_to_use_question_type',
 'question_type.use_often_question_type',
 'raw_model_response.checkbox_cost',
 'raw_model_response.checkbox_one_usd_buys',
 'raw_model_response.checkbox_raw_model_response',
 'raw_model_response.concerns_cost',
 'raw_model_response.concerns_one_usd_buys',
 'raw_model_response.concerns_raw_model_response',
 'raw_model_response.likely_to_use_cost',
 'raw_model_response.likely_to_use_one_usd_buys',
 'raw_model_response.likely_to_use_raw_model_response',
 'raw_model_response.use_often_cost',
 'raw_model_response.use_often_one_usd_buys',
 'raw_model_response.use_often_raw_model_response',
 'scenario.content']
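
Results can also be filtered or sorted before selecting columns; a sketch using the filter expression syntax from the EDSL docs:

# Sketch: keep only rows where the agent expects to use the feature frequently
# results.filter("use_often == 'Frequently'").select("content", "use_often")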
[12]:
# Print the responses
results.select(
    "content",
    "use_often",
    "checkbox",
    "concerns",
    "likely_to_use",
)
[12]:
scenario.content: An optional progress bar that shows how many of your questions have been answered while your survey is running.
answer.use_often: Frequently
answer.checkbox: ['This feature would be useful to me.', 'This feature would make me more productive.', 'I would like to see some examples of how to use this feature.']
answer.concerns: Oh, I think a progress bar could be really useful! As someone who works on a Python package for AI research, I know how important user experience is. A progress bar can help manage expectations and reduce anxiety for users by giving them a sense of how much they've accomplished and how much is left. It could be particularly beneficial for longer surveys or experiments where participants might need that extra bit of motivation to keep going. Just make sure it's not too distracting or takes up too much screen space. Overall, I think it adds value by enhancing the user experience.
answer.likely_to_use: 4

scenario.content: A method that lets you quickly check what version of the package you have installed.
answer.use_often: Occasionally
answer.checkbox: ['This feature would be useful to me.', 'This feature would make me more productive.', 'This feature will be important to me.']
answer.concerns: Oh, absolutely! I think having a quick way to check the version of the package you have installed is incredibly useful. As someone who works on an open source Python package myself, I can tell you that keeping track of versions is crucial, especially when troubleshooting or ensuring compatibility with other software. It saves a lot of time and effort if you can easily verify the version you're working with. Plus, it helps in making sure you're using the latest features or fixes. So, I see a lot of value in adding this feature!
answer.likely_to_use: 4

scenario.content: A method that lets you include questions and responses as context for new questions.
answer.use_often: Frequently
answer.checkbox: ['This feature would be useful to me.', 'This feature would make me more productive.', 'This feature will be important to me.', 'I would like to see some examples of how to use this feature.']
answer.concerns: Oh, that sounds like an interesting feature! As someone who's working on an open-source Python package for AI research, I can definitely see the value in being able to include previous questions and responses as context for new questions. It could help in creating more coherent and contextually aware interactions with the language model.
answer.likely_to_use: 5
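
For downstream analysis, the results can be exported as a dataframe with the documented `to_pandas` method; the column names should match the `results.columns` list above:

# Convert the results to a pandas DataFrame for further analysis
df = results.to_pandas()
# df[["scenario.content", "answer.use_often", "answer.likely_to_use"]]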
[13]:
# Post the notebook on the Coop
from edsl import Notebook

n = Notebook(path="digital_twin.ipynb")

n.push(description="Digital Twin", visibility="public")
[13]:
{'description': 'Digital Twin',
 'object_type': 'notebook',
 'url': 'https://www.expectedparrot.com/content/4506d675-e816-4d30-82c7-3548673f7469',
 'uuid': '4506d675-e816-4d30-82c7-3548673f7469',
 'version': '0.1.38.dev1',
 'visibility': 'public'}
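
Once posted, the notebook can be retrieved from the Coop by anyone with access, using its UUID; a sketch assuming the `pull` method described in the EDSL docs:

# Sketch: retrieve the posted notebook from the Coop
# nb = Notebook.pull("4506d675-e816-4d30-82c7-3548673f7469")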