Question types
This notebook contains code for creating different types of questions in edsl.

- Multiple Choice
- Checkbox
- Linear Scale
- Yes / No
- Budget
- Free Text
- List
- Numerical
- Extract
- Administering questions
[1]:
# ! pip install edsl
Multiple Choice
A multiple choice question prompts the respondent to select a single option from a given set of options.
[2]:
from edsl.questions import QuestionMultipleChoice
q_mc = QuestionMultipleChoice(
question_name="q_mc",
question_text="How often do you shop for clothes?",
question_options=["Rarely or never", "Annually", "Seasonally", "Monthly", "Daily"],
)
Checkbox
A checkbox question prompts the respondent to select one or more of the given options, which are returned as a list.
[3]:
from edsl.questions import QuestionCheckBox
q_cb = QuestionCheckBox(
question_name="q_cb",
question_text="""Which of the following factors are important to you in making decisions about clothes shopping?
Select all that apply.""",
question_options=[
"Price",
"Quality",
"Brand Reputation",
"Style and Design",
"Fit and Comfort",
"Customer Reviews and Recommendations",
"Ethical and Sustainable Practices",
"Return Policy",
"Convenience",
"Other",
],
min_selections=1, # This is optional
max_selections=3, # This is optional
)
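The optional `min_selections` and `max_selections` arguments constrain how many options a valid answer may contain. A plain-Python sketch of that constraint (a hypothetical helper for illustration, not edsl's own validator):

```python
def is_valid_selection(selected, min_selections=1, max_selections=3):
    """Check that the number of selected options falls within the allowed range."""
    return min_selections <= len(selected) <= max_selections

print(is_valid_selection(["Price", "Quality"]))       # within the 1-3 range
print(is_valid_selection([]))                         # too few selections
print(is_valid_selection(["a", "b", "c", "d"]))       # too many selections
```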
Linear Scale
A linear scale question prompts the respondent to choose from a set of numerical options.
[4]:
from edsl.questions import QuestionLinearScale
q_ls = QuestionLinearScale(
question_name="q_ls",
question_text="""On a scale of 0-10, how much do you typically enjoy clothes shopping?
(0 = Not at all, 10 = Very much)""",
question_options=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
)
Yes / No
A yes/no question prompts the respondent to answer “yes” or “no”. The response options are set by default and cannot be modified; to offer other options, use a multiple choice question instead.
[5]:
from edsl.questions import QuestionYesNo
q_yn = QuestionYesNo(
question_name="q_yn",
question_text="Have you ever felt excluded or frustrated by the standard sizes of the fashion industry?",
)
Budget
A budget question prompts the respondent to allocate a specified sum among a set of options.
[6]:
from edsl.questions import QuestionBudget
q_bg = QuestionBudget(
question_name="q_bg",
question_text="""Estimate the percentage of your total time spent shopping for clothes in each of the
following modes.""",
question_options=[
"Online",
"Malls",
"Freestanding stores",
"Mail order catalogs",
"Other",
],
budget_sum=100,
)
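A valid budget response allocates amounts that sum exactly to `budget_sum`. A minimal sketch of that check in plain Python (a hypothetical helper, not edsl's own validator):

```python
def is_valid_budget(allocation, budget_sum=100):
    """Check that the allocated amounts sum exactly to the required total."""
    return sum(allocation.values()) == budget_sum

allocation = {"Online": 50, "Malls": 30, "Freestanding stores": 15,
              "Mail order catalogs": 0, "Other": 5}
print(is_valid_budget(allocation))  # the amounts sum to 100
```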
Free Text
A free text question prompts the respondent to provide a short unstructured response.
[7]:
from edsl.questions import QuestionFreeText
q_ft = QuestionFreeText(
question_name="q_ft",
question_text="What improvements would you like to see in options for clothes shopping?",
)
List
A list question prompts the respondent to provide a response in the form of a list. This can be a convenient way to reformat free text questions.
[8]:
from edsl.questions import QuestionList
q_li = QuestionList(
question_name="q_li",
question_text="What considerations are important to you in shopping for clothes?",
)
Numerical
A numerical question prompts the respondent to provide a response that is a number.
[9]:
from edsl.questions import QuestionNumerical
q_nu = QuestionNumerical(
question_name="q_nu",
question_text="Estimate the amount of money that you spent on clothing in the past year (in $USD).",
)
Extract
An extract question prompts the respondent to provide a response in the form of a dictionary, where the keys and example values are provided in a template.
[10]:
from edsl.questions import QuestionExtract
q_ex = QuestionExtract(
question_name="q_ex",
question_text="""Consider all of the articles of clothing in your closet.
Identify the categories of clothing that are most and least valuable to you.""",
answer_template={"most_valuable": "socks", "least_valuable": "shoes"},
)
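The `answer_template` supplies the expected keys (with example values); a conforming response is a dictionary with exactly those keys. A plain-Python sketch of that shape check (for illustration only, not edsl's own validator):

```python
answer_template = {"most_valuable": "socks", "least_valuable": "shoes"}

def matches_template(response, template):
    """Check that the response has exactly the keys the template defines."""
    return set(response) == set(template)

print(matches_template({"most_valuable": "coats", "least_valuable": "hats"}, answer_template))
print(matches_template({"most_valuable": "coats"}, answer_template))  # missing a key
```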
Administering questions
Here we administer each question to the default LLM by calling the run()
method on the question. (See the Agents and Surveys examples for how to administer questions and surveys to specific agent personas and LLMs.)
[11]:
result_mc = q_mc.run()
result_cb = q_cb.run()
result_ls = q_ls.run()
result_yn = q_yn.run()
result_bg = q_bg.run()
result_ft = q_ft.run()
result_li = q_li.run()
result_nu = q_nu.run()
result_ex = q_ex.run()
We can select the fields to inspect (e.g., just the response):
[12]:
result_mc.select("q_mc").print()
# result_cb.select("q_cb").print()
# result_ls.select("q_ls").print()
# result_yn.select("q_yn").print()
# result_bg.select("q_bg").print()
# result_ft.select("q_ft").print()
# result_li.select("q_li").print()
# result_nu.select("q_nu").print()
# result_ex.select("q_ex").print()
| answer.q_mc |
| --- |
| Seasonally |
We can add some pretty labels to our tables:
[13]:
result_mc.select("q_mc").print(pretty_labels={"answer.q_mc": q_mc.question_text})
result_cb.select("q_cb").print(pretty_labels={"answer.q_cb": q_cb.question_text})
result_ls.select("q_ls").print(pretty_labels={"answer.q_ls": q_ls.question_text})
result_yn.select("q_yn").print(pretty_labels={"answer.q_yn": q_yn.question_text})
result_bg.select("q_bg").print(pretty_labels={"answer.q_bg": q_bg.question_text})
result_ft.select("q_ft").print(pretty_labels={"answer.q_ft": q_ft.question_text})
result_li.select("q_li").print(pretty_labels={"answer.q_li": q_li.question_text})
result_nu.select("q_nu").print(pretty_labels={"answer.q_nu": q_nu.question_text})
result_ex.select("q_ex").print(pretty_labels={"answer.q_ex": q_ex.question_text})
| How often do you shop for clothes? |
| --- |
| Seasonally |

| Which of the following factors are important to you in making decisions about clothes shopping? Select all that apply. |
| --- |
| ['Quality', 'Style and Design', 'Fit and Comfort'] |

| On a scale of 0-10, how much do you typically enjoy clothes shopping? (0 = Not at all, 10 = Very much) |
| --- |
| 7 |

| Have you ever felt excluded or frustrated by the standard sizes of the fashion industry? |
| --- |
| Yes |

| Estimate the percentage of your total time spent shopping for clothes in each of the following modes. |
| --- |
| [{'Online': 50}, {'Malls': 30}, {'Freestanding stores': 15}, {'Mail order catalogs': 0}, {'Other': 5}] |

| What improvements would you like to see in options for clothes shopping? |
| --- |
| I'd like to see improvements in the variety and inclusivity of sizes to cater to all body types, better use of sustainable and eco-friendly materials, enhanced virtual fitting technologies to help with online shopping, and more personalized shopping experiences through AI recommendations. Additionally, streamlined and hassle-free return policies would make the process more convenient for customers. |

| What considerations are important to you in shopping for clothes? |
| --- |
| ['fit', 'comfort', 'style', 'fabric', 'durability', 'price', 'brand', 'occasion', 'seasonality', 'maintenance', 'sustainability', 'color', 'trends', 'return policy'] |

| Estimate the amount of money that you spent on clothing in the past year (in $USD). |
| --- |
| 500 |

| Consider all of the articles of clothing in your closet. Identify the categories of clothing that are most and least valuable to you. |
| --- |
| {'most_valuable': 'null', 'least_valuable': 'null'} |
Constructing a survey
We can also combine our questions into a survey in order to administer them asynchronously:
[14]:
from edsl import Survey
survey = Survey(questions=[q_mc, q_cb, q_ls, q_yn, q_bg, q_ft, q_li, q_nu, q_ex])
results = survey.run()
[15]:
results.select("q_mc", "q_cb", "q_ls", "q_yn", "q_bg").print()
| answer.q_mc | answer.q_cb | answer.q_ls | answer.q_yn | answer.q_bg |
| --- | --- | --- | --- | --- |
| Seasonally | ['Quality', 'Style and Design', 'Fit and Comfort'] | 7 | Yes | [{'Online': 50}, {'Malls': 30}, {'Freestanding stores': 15}, {'Mail order catalogs': 0}, {'Other': 5}] |
[16]:
results.select("q_ft", "q_li", "q_nu", "q_ex").print()
| answer.q_ft | answer.q_li | answer.q_nu | answer.q_ex |
| --- | --- | --- | --- |
| I'd like to see improvements in the variety and inclusivity of sizes to cater to all body types, better use of sustainable and eco-friendly materials, enhanced virtual fitting technologies to help with online shopping, and more personalized shopping experiences through AI recommendations. Additionally, streamlined and hassle-free return policies would make the process more convenient for customers. | ['fit', 'comfort', 'style', 'fabric', 'durability', 'price', 'brand', 'occasion', 'seasonality', 'maintenance', 'sustainability', 'color', 'trends', 'return policy'] | 500 | {'most_valuable': 'null', 'least_valuable': 'null'} |
Parameterizing questions
We can create different versions or scenarios of questions by parameterizing them:
[17]:
from edsl import Scenario
scenarios = [Scenario({"item": i}) for i in ["shoes", "hats", "tshirts"]]
q = QuestionNumerical(
question_name="annual_item_spending",
question_text="How much do you spend shopping for {{ item }} on an annual basis (in $USD)?",
)
results = q.by(scenarios).run()
results.select("scenario.*", "annual_item_spending").print()
| scenario.item | answer.annual_item_spending |
| --- | --- |
| shoes | 0 |
| hats | 0 |
| tshirts | 0 |
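The `{{ item }}` placeholder in the question text is filled in once per scenario. A rough sketch of the substitution in plain Python (edsl uses Jinja-style templates internally; this simplified string replace is only illustrative):

```python
question_text = "How much do you spend shopping for {{ item }} on an annual basis (in $USD)?"
items = ["shoes", "hats", "tshirts"]

# Render one version of the question per scenario value
rendered = [question_text.replace("{{ item }}", item) for item in items]
for text in rendered:
    print(text)
```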
Filtering results
We can filter results by passing a logical expression to the filter()
method. Note that all question types other than free text automatically include a “comment” field for the response:
[18]:
(results.filter("scenario.item == 'shoes'").select("scenario.item", "answer.*").print())
| scenario.item | answer.annual_item_spending_comment | answer.annual_item_spending |
| --- | --- | --- |
| shoes | As an AI, I do not have personal experiences or expenses, so I do not spend any money on shoes or anything else. | 0 |
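The filter expression acts as a boolean test applied row by row. A plain-Python equivalent over a list of result rows (hypothetical data for illustration, not the edsl Results API):

```python
rows = [
    {"scenario.item": "shoes", "answer.annual_item_spending": 0},
    {"scenario.item": "hats", "answer.annual_item_spending": 0},
    {"scenario.item": "tshirts", "answer.annual_item_spending": 0},
]

# Keep only the rows for which the expression is true
filtered = [r for r in rows if r["scenario.item"] == "shoes"]
print(filtered)  # only the 'shoes' row remains
```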
Adding AI agents
We can design an agent with a persona to reference in responding to the survey questions:
[19]:
from edsl import Agent
agent = Agent(
name="Fashion expert", traits={"persona": "You are an expert in fashion design."}
)
results = survey.by(agent).run()
results.select("persona", "answer.q_ft").print()
| agent.persona | answer.q_ft |
| --- | --- |
| You are an expert in fashion design. | I'd like to see advancements in virtual fitting technology to help customers better visualize how clothes would fit their bodies without trying them on in-store. Additionally, increased personalization options where algorithms suggest clothing based on personal style, body type, and past purchases could enhance the shopping experience. More sustainable and transparent clothing options, with detailed information about materials and ethical sourcing, are also important. Finally, a broader range of sizes and adaptive clothing options to cater to all body types and abilities would be a significant improvement. |
Adding question memory
We can include a “memory” of a prior question and answer in the prompt for a subsequent question. Here we include the question and response to q_li in the prompt for q_ft and inspect the prompts:
[20]:
survey.add_targeted_memory(q_li, q_ft)
results = survey.by(agent).run()
[21]:
(
results.select("q_li", "q_ft_user_prompt", "q_ft").print(
{
"answer.q_li": "(List version) " + q_li.question_text,
"prompt.q_ft_user_prompt": "Prompt for q_ft",
"answer.q_ft": "(Free text version) " + q_ft.question_text,
}
)
)
| (List version) What considerations are important to you in shopping for clothes? | (Free text version) What improvements would you like to see in options for clothes shopping? | Prompt for q_ft |
| --- | --- | --- |
| ['Fit', 'Material', 'Style', 'Versatility', 'Durability', 'Brand reputation', 'Ethical sourcing', 'Price', 'Care instructions', 'Seasonality', 'Occasion appropriateness', 'Trendiness', 'Size availability', 'Color'] | I'd like to see advancements in virtual fitting technology to help customers better visualize how clothes would fit their bodies without trying them on in-store. Additionally, increased personalization options where algorithms suggest clothing based on personal style, body type, and past purchases could enhance the shopping experience. More sustainable and transparent clothing options, with detailed information about materials and ethical sourcing, are also important. Finally, a broader range of sizes and adaptive clothing options to cater to all body types and abilities would be a significant improvement. | {'text': 'You are being asked the following question: What improvements would you like to see in options for clothes shopping?\nReturn a valid JSON formatted like this:\n{"answer": " |
Specifying LLMs
We can specify the language models to use in running a survey:
[22]:
from edsl import Model
Model.available()
[22]:
[['01-ai/Yi-34B-Chat', 'deep_infra', 0],
['Austism/chronos-hermes-13b-v2', 'deep_infra', 1],
['Gryphe/MythoMax-L2-13b', 'deep_infra', 2],
['HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1', 'deep_infra', 3],
['Phind/Phind-CodeLlama-34B-v2', 'deep_infra', 4],
['bigcode/starcoder2-15b', 'deep_infra', 5],
['claude-3-haiku-20240307', 'anthropic', 6],
['claude-3-opus-20240229', 'anthropic', 7],
['claude-3-sonnet-20240229', 'anthropic', 8],
['codellama/CodeLlama-34b-Instruct-hf', 'deep_infra', 9],
['codellama/CodeLlama-70b-Instruct-hf', 'deep_infra', 10],
['cognitivecomputations/dolphin-2.6-mixtral-8x7b', 'deep_infra', 11],
['databricks/dbrx-instruct', 'deep_infra', 12],
['deepinfra/airoboros-70b', 'deep_infra', 13],
['gemini-pro', 'google', 14],
['google/gemma-1.1-7b-it', 'deep_infra', 15],
['gpt-3.5-turbo', 'openai', 16],
['gpt-3.5-turbo-0125', 'openai', 17],
['gpt-3.5-turbo-0301', 'openai', 18],
['gpt-3.5-turbo-0613', 'openai', 19],
['gpt-3.5-turbo-1106', 'openai', 20],
['gpt-3.5-turbo-16k', 'openai', 21],
['gpt-3.5-turbo-16k-0613', 'openai', 22],
['gpt-3.5-turbo-instruct', 'openai', 23],
['gpt-3.5-turbo-instruct-0914', 'openai', 24],
['gpt-4', 'openai', 25],
['gpt-4-0125-preview', 'openai', 26],
['gpt-4-0613', 'openai', 27],
['gpt-4-1106-preview', 'openai', 28],
['gpt-4-1106-vision-preview', 'openai', 29],
['gpt-4-turbo', 'openai', 30],
['gpt-4-turbo-2024-04-09', 'openai', 31],
['gpt-4-turbo-preview', 'openai', 32],
['gpt-4-vision-preview', 'openai', 33],
['lizpreciatior/lzlv_70b_fp16_hf', 'deep_infra', 34],
['llava-hf/llava-1.5-7b-hf', 'deep_infra', 35],
['meta-llama/Llama-2-13b-chat-hf', 'deep_infra', 36],
['meta-llama/Llama-2-70b-chat-hf', 'deep_infra', 37],
['meta-llama/Llama-2-7b-chat-hf', 'deep_infra', 38],
['meta-llama/Meta-Llama-3-70B-Instruct', 'deep_infra', 39],
['meta-llama/Meta-Llama-3-8B-Instruct', 'deep_infra', 40],
['microsoft/WizardLM-2-7B', 'deep_infra', 41],
['microsoft/WizardLM-2-8x22B', 'deep_infra', 42],
['mistralai/Mistral-7B-Instruct-v0.1', 'deep_infra', 43],
['mistralai/Mistral-7B-Instruct-v0.2', 'deep_infra', 44],
['mistralai/Mixtral-8x22B-Instruct-v0.1', 'deep_infra', 45],
['mistralai/Mixtral-8x22B-v0.1', 'deep_infra', 46],
['mistralai/Mixtral-8x7B-Instruct-v0.1', 'deep_infra', 47],
['openchat/openchat_3.5', 'deep_infra', 48]]
[23]:
models = [Model(m) for m in ["gpt-3.5-turbo", "gpt-4-1106-preview"]]
results = survey.by(agent).by(models).run()
results.select("model.model", "q_bg", "q_li").print()
| model.model | answer.q_bg | answer.q_li |
| --- | --- | --- |
| gpt-3.5-turbo | [{'Online': 40}, {'Malls': 20}, {'Freestanding stores': 30}, {'Mail order catalogs': 5}, {'Other': 5}] | ['Quality', 'Fit', 'Material', 'Style', 'Brand reputation', 'Price', 'Functionality'] |
| gpt-4-1106-preview | [{'Online': 40}, {'Malls': 30}, {'Freestanding stores': 20}, {'Mail order catalogs': 5}, {'Other': 5}] | ['Fit', 'Material', 'Style', 'Versatility', 'Durability', 'Brand reputation', 'Ethical sourcing', 'Price', 'Care instructions', 'Seasonality', 'Occasion appropriateness', 'Trendiness', 'Size availability', 'Color'] |
Show columns
We can list all of the components of the results with the columns
attribute:
[24]:
results = q_mc.by(scenarios).by(agent).by(models).run()
results.columns
[24]:
['agent.agent_name',
'agent.persona',
'answer.q_mc',
'answer.q_mc_comment',
'iteration.iteration',
'model.frequency_penalty',
'model.logprobs',
'model.max_tokens',
'model.model',
'model.presence_penalty',
'model.temperature',
'model.top_logprobs',
'model.top_p',
'prompt.q_mc_system_prompt',
'prompt.q_mc_user_prompt',
'raw_model_response.q_mc_raw_model_response',
'scenario.item']