
Documentation Index

Fetch the complete documentation index at: https://docs.expectedparrot.com/llms.txt

Use this file to discover all available pages before exploring further.

Humanize Schema

When creating a human survey or preview (e.g. survey.humanize() or survey.preview()), you can pass a humanize schema to control styling and optionality per question. The schema is validated against your survey before use.

Fields

questions (dict)
  Map of question name (string) -> HumanizeQuestionSchema.
survey (dict)
  Optional survey-level options (e.g. custom CSS).

HumanizeQuestionSchema (per question)

Each questions[question_name] value is a HumanizeQuestionSchema object.
optional (boolean, default: false)
  Whether the question is optional. Default is false when omitted.
  Supported question types: free_text, budget, checkbox, interview, likert_five, linear_scale, list, matrix, multiple_choice, multiple_choice_with_other, numerical, rank, top_k, yes_no.
  Note: optional currently has no effect for interview and rank.
format (dict)
  Display format. Varies by question type:
  • MC-style (likert_five, linear_scale, multiple_choice, yes_no): "radio" (list of radio buttons) or "dropdown" (single <select>). Default is "radio". Use {"type": "radio"} or {"type": "dropdown"}.
  • Numerical (numerical): "input" (number input field) or "slider" (range slider). Default is "input". Use {"type": "input"} or {"type": "slider", "min": 0, "max": 100, "step": 1}. For slider, min must be less than max, and step must be positive and not exceed (max - min).
  Other question types do not support format.
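The slider constraints above (min less than max; step positive and at most max - min) can be sketched as a plain-Python check. This is only an illustration of the stated rules; validate_slider_format is a hypothetical helper, not part of edsl, whose actual validation happens server-side when the schema is used.

```python
# Illustrative sketch (not part of edsl): checks the slider constraints
# described above for a numerical question's format dict.
def validate_slider_format(fmt):
    """Return a list of problems with a {"type": "slider", ...} format dict."""
    problems = []
    lo, hi, step = fmt.get("min"), fmt.get("max"), fmt.get("step")
    numeric = lambda v: isinstance(v, (int, float)) and not isinstance(v, bool)
    if not (numeric(lo) and numeric(hi) and lo < hi):
        problems.append("min must be less than max")
    if not (numeric(step) and step > 0):
        problems.append("step must be positive")
    elif numeric(lo) and numeric(hi) and step > hi - lo:
        problems.append("step must not exceed (max - min)")
    return problems

print(validate_slider_format({"type": "slider", "min": 0, "max": 100, "step": 1}))  # []
print(validate_slider_format({"type": "slider", "min": 5, "max": 5, "step": 0}))
```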
custom_validation (dict)
  Multiple choice only (multiple_choice). Optional custom validation rules.
interview_mode (string, default: "text")
  Interview only (interview). Controls the input mode offered to respondents.
  • "text" — text-based interview (default).
  • "voice" — voice-based interview.
  • "both" — respondent can choose between text and voice.
comment (dict)
  Optional comment input shown with the question. Submitted comment text appears in survey results under comment.{question_name}_comment.
  Supported question types: free_text, budget, checkbox, likert_five, linear_scale, list, matrix, multiple_choice, multiple_choice_with_other, numerical, rank, top_k, yes_no.
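The result-column naming rule for comments can be sketched directly from the pattern above. Both names below (comment_column, COMMENT_SUPPORTED) are illustrative helpers, not edsl API; the supported-types set simply restates the list in this field's description.

```python
# Question types that support the comment option, per the list above
# (note: interview is absent). Illustrative only, not part of edsl.
COMMENT_SUPPORTED = {
    "free_text", "budget", "checkbox", "likert_five", "linear_scale", "list",
    "matrix", "multiple_choice", "multiple_choice_with_other", "numerical",
    "rank", "top_k", "yes_no",
}

def comment_column(question_name):
    # Submitted comment text appears under this results column.
    return f"comment.{question_name}_comment"

print(comment_column("rating"))  # comment.rating_comment
```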

Example

from edsl import Survey, QuestionFreeText, QuestionMultipleChoice

survey = Survey([
    QuestionMultipleChoice(
        question_name="rating",
        question_text="Rate your experience.",
        question_options=["Good", "OK", "Bad"],
    ),
    QuestionFreeText(
        question_name="feedback",
        question_text="Any additional feedback you'd like to share?",
    ),
])

humanize_schema = {
    "questions": {
        "rating": {
            "optional": False,
            "format": {"type": "dropdown"},
            "comment": {"label": "Why did you choose this rating?"},
        },
        "feedback": {"optional": True},
    },
    "survey": {"custom_css": None},
}

# Validation runs before creating the human survey
survey.humanize(humanize_schema=humanize_schema)

Validation

If the schema is invalid, Expected Parrot raises HumanizeSchemaValidationError. Common causes:
  • A key in questions is not a question name in the survey, or is an instruction.
  • A question’s type is not supported for humanize schema (e.g. demand, dropdown).
  • A question’s entry has the wrong shape for its type (e.g. wrong field types or extra fields that aren’t allowed).
  • Top-level structure is invalid (e.g. questions not a dict, or an entry not a dict).
You can also pass the schema to survey.preview(humanize_schema=...) to get a preview URL. Ensure your humanize schema matches the parameters above for each question type in your survey.
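The structural causes listed above can be screened for before calling survey.humanize(). The sketch below is a hypothetical pre-flight check, not the validator Expected Parrot runs; it only covers the top-level shape and unknown question names, under the assumption that you can supply the survey's question names as a set.

```python
# Illustrative pre-flight check (not the edsl validator): catches some of
# the common structural problems listed above before calling humanize().
def check_schema_shape(schema, question_names):
    """Return a list of structural problems with a humanize schema."""
    if not isinstance(schema, dict) or not isinstance(schema.get("questions"), dict):
        return ["top-level structure is invalid: 'questions' must be a dict"]
    errors = []
    for name, entry in schema["questions"].items():
        if name not in question_names:
            errors.append(f"'{name}' is not a question name in the survey")
        if not isinstance(entry, dict):
            errors.append(f"entry for '{name}' must be a dict")
    return errors

print(check_schema_shape(
    {"questions": {"rating": {"optional": False}, "typo": {}}},
    {"rating", "feedback"},
))
```

Type-specific rules (e.g. format support, custom_validation being multiple-choice only) are still enforced by Expected Parrot's own validation, which raises HumanizeSchemaValidationError on failure.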