Jobs
The Jobs class is a collection of agents, scenarios, and models paired with one survey. It is used to run a collection of interviews.
Base Question
This module contains the Question class, which is the base class for all questions in EDSL.
- class edsl.questions.QuestionBase.QuestionBase[source]
Bases: PersistenceMixin, RichPrintingMixin, SimpleAskMixin, QuestionBasePromptsMixin, QuestionBaseGenMixin, ABC, AnswerValidatorMixin
ABC for the Question class. All questions inherit from this class. Some of the constraints on child questions are defined in the RegisterQuestionsMeta metaclass.
- add_question(other: QuestionBase) Survey [source]
Add a question to this question by turning them into a survey with two questions.
>>> from edsl.questions import QuestionFreeText as Q
>>> from edsl.questions import QuestionMultipleChoice as QMC
>>> s = Q.example().add_question(QMC.example())
>>> len(s.questions)
2
- property data: dict[source]
Return a dictionary of question attributes except for question_type.
>>> from edsl import QuestionFreeText as Q
>>> Q.example().data
{'question_name': 'how_are_you', 'question_text': 'How are you?'}
- classmethod from_dict(data: dict) Type[QuestionBase] [source]
Construct a question object from a dictionary created by that question’s to_dict method.
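This kind of round trip typically works by storing a type tag in the dictionary and dispatching on it at load time. The sketch below illustrates that pattern in plain Python; the class, registry, and field names are hypothetical stand-ins, not EDSL's actual implementation.

```python
# Illustrative sketch of to_dict/from_dict-style serialization.
# All names here are hypothetical, not EDSL internals.

class FreeText:
    question_type = "free_text"

    def __init__(self, question_name, question_text):
        self.question_name = question_name
        self.question_text = question_text

    def to_dict(self):
        # Include the type tag so the right class can be chosen later.
        return {
            "question_type": self.question_type,
            "question_name": self.question_name,
            "question_text": self.question_text,
        }

# Maps each type tag to the class that can rebuild it.
REGISTRY = {"free_text": FreeText}

def from_dict(data):
    # Look up the class by its type tag, then rebuild it
    # from the remaining fields.
    data = dict(data)
    cls = REGISTRY[data.pop("question_type")]
    return cls(**data)

q = FreeText("how_are_you", "How are you?")
restored = from_dict(q.to_dict())
print(restored.question_text)  # How are you?
```

The type tag is what lets a single classmethod reconstruct any question subclass from a plain dictionary.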
- html(scenario: dict | None = None, agent: dict | None = {}, answers: dict | None = None, include_question_name: bool = False, height: int | None = None, width: int | None = None, iframe=False)[source]
Return the question in HTML format.
- human_readable() str [source]
Return the question in a human-readable format.
>>> from edsl.questions import QuestionFreeText
>>> QuestionFreeText.example().human_readable()
'Question Type: free_text\nQuestion: How are you?'
- property name: str[source]
Helper property so questions and instructions can use the same access method.
- question_text: str[source]
Validate that the question_text attribute is a string.
>>> class TestQuestion:
...     question_text = QuestionTextDescriptor()
...     def __init__(self, question_text: str):
...         self.question_text = question_text

>>> _ = TestQuestion("What is the capital of France?")
>>> _ = TestQuestion("What is the capital of France? {{variable}}")
>>> _ = TestQuestion("What is the capital of France? {{variable name}}")
Traceback (most recent call last):
...
edsl.exceptions.questions.QuestionCreationValidationError: Question text contains an invalid identifier: 'variable name'
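The validation shown above hinges on template variables being valid Python identifiers. A minimal sketch of that check, using only the standard library (the function name and exception type are hypothetical, not EDSL's descriptor):

```python
import re

# Illustrative sketch: reject {{...}} template variables that are not
# valid Python identifiers, e.g. "{{variable name}}" with a space.

def validate_question_text(text):
    for var in re.findall(r"\{\{\s*(.*?)\s*\}\}", text):
        if not var.isidentifier():
            raise ValueError(f"invalid identifier: {var!r}")
    return text

validate_question_text("What is the capital of France? {{variable}}")  # ok
try:
    validate_question_text("What is the capital of France? {{variable name}}")
except ValueError as e:
    print(e)  # invalid identifier: 'variable name'
```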
- async run_async(just_answer: bool = True, model: 'Model' | None = None, agent: 'Agent' | None = None, **kwargs) Any | 'Results' [source]
Call the question asynchronously.
>>> import asyncio
>>> from edsl import QuestionFreeText as Q
>>> m = Q._get_test_model(canned_response = "Blue")
>>> q = Q(question_name = "color", question_text = "What is your favorite color?")
>>> async def test_run_async():
...     result = await q.run_async(model=m)
...     print(result)
>>> asyncio.run(test_run_async())
Blue
- classmethod run_example(show_answer: bool = True, model: 'LanguageModel' | None = None, cache=False, **kwargs)[source]
Run an example of the question.
>>> from edsl.language_models import LanguageModel
>>> from edsl import QuestionFreeText as Q
>>> m = Q._get_test_model(canned_response = "Yo, what's up?")
>>> m.execute_model_call("", "")
{'message': [{'text': "Yo, what's up?"}], 'usage': {'prompt_tokens': 1, 'completion_tokens': 1}}
>>> Q.run_example(show_answer = True, model = m)
┏━━━━━━━━━━━━━━━━┓
┃ answer         ┃
┃ .how_are_you   ┃
┡━━━━━━━━━━━━━━━━┩
│ Yo, what's up? │
└────────────────┘
- to_dict() dict[str, Any] [source]
Convert the question to a dictionary that includes the question type (used in deserialization).
>>> from edsl import QuestionFreeText as Q; Q.example().to_dict()
{'question_name': 'how_are_you', 'question_text': 'How are you?', 'question_type': 'free_text', 'edsl_version': '...'}
Jobs class
- class edsl.jobs.Jobs.Jobs(survey: Survey, agents: list['Agent'] | None = None, models: list['LanguageModel'] | None = None, scenarios: list['Scenario'] | None = None)[source]
Bases: Base
A collection of agents, scenarios, and models with one survey. The actual running of a job is done by a JobsRunner subclass, which is chosen by the user and stored in the jobs_runner_name attribute.
- all_question_parameters()[source]
Return all the fields in the questions in the survey.
>>> from edsl.jobs import Jobs
>>> Jobs.example().all_question_parameters()
{'period'}
- property bucket_collection: BucketCollection[source]
Return the bucket collection. If it does not exist, create it.
- by(*args: 'Agent' | 'Scenario' | 'LanguageModel' | Sequence['Agent' | 'Scenario' | 'LanguageModel']) Jobs [source]
Add Agents, Scenarios and LanguageModels to a job. If no objects of this type exist in the Jobs instance, it stores the new objects as a list in the corresponding attribute. Otherwise, it combines the new objects with existing objects using the object’s __add__ method.
This by method is intended to create a fluent interface.
>>> from edsl import Survey
>>> from edsl import QuestionFreeText
>>> q = QuestionFreeText(question_name="name", question_text="What is your name?")
>>> j = Jobs(survey = Survey(questions=[q]))
>>> j
Jobs(survey=Survey(...), agents=AgentList([]), models=ModelList([]), scenarios=ScenarioList([]))
>>> from edsl import Agent; a = Agent(traits = {"status": "Sad"})
>>> j.by(a).agents
AgentList([Agent(traits = {'status': 'Sad'})])
- Parameters:
args – objects or a sequence (list, tuple, …) of objects of the same type
Notes:
- All objects must implement the get_value, set_value, and __add__ methods.
- Agents: traits of new agents are combined with traits of existing agents. New and existing agents should not have overlapping traits, and combining them does not increase the number of agents in the instance.
- Scenarios: traits of new scenarios are combined with traits of existing scenarios. New scenarios overwrite overlapping traits, and combining them does not increase the number of scenarios in the instance.
- Models: new models overwrite old models.
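The different combination rules for agents and scenarios can be sketched with plain dictionaries standing in for trait collections (hypothetical stand-ins, not real EDSL objects):

```python
# Illustrative sketch of the combination rules described in the notes above.

def combine_agent_traits(existing, new):
    # Agents: traits are merged; overlapping keys are not expected.
    assert not set(existing) & set(new), "agents should not share traits"
    return {**existing, **new}

def combine_scenario_traits(existing, new):
    # Scenarios: new values overwrite overlapping keys.
    return {**existing, **new}

agent = combine_agent_traits({"status": "Sad"}, {"age": 30})
scenario = combine_scenario_traits({"period": "morning"}, {"period": "evening"})
print(agent)     # {'status': 'Sad', 'age': 30}
print(scenario)  # {'period': 'evening'}
```

In both cases the number of objects stays the same; only the trait dictionaries grow or are overwritten.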
- static compute_job_cost(job_results: Results) float [source]
Computes the cost of a completed job in USD.
- create_bucket_collection() BucketCollection [source]
Create a collection of buckets for each model.
These buckets are used to track API calls and token usage.
>>> from edsl.jobs import Jobs
>>> from edsl import Model
>>> j = Jobs.example().by(Model(temperature = 1), Model(temperature = 0.5))
>>> bc = j.create_bucket_collection()
>>> bc
BucketCollection(...)
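Rate-limit buckets of this kind are commonly implemented as token buckets: a counter that refills at a fixed rate and is drawn down by each API call or token consumed. The sketch below is a generic illustration of that idea, not EDSL's BucketCollection internals, which are not shown in this documentation.

```python
import time

# Illustrative token-bucket sketch (hypothetical; not EDSL's implementation).

class TokenBucket:
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity          # maximum tokens the bucket holds
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = capacity
        self.last = time.monotonic()

    def consume(self, amount):
        now = time.monotonic()
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if amount <= self.tokens:
            self.tokens -= amount
            return True
        return False  # caller should wait and retry

bucket = TokenBucket(capacity=100, refill_rate=10)
print(bucket.consume(60))  # True: 60 of 100 tokens consumed
```

One bucket per model lets each model's API-call and token limits be tracked independently.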
- create_remote_inference_job(iterations: int = 1, remote_inference_description: str | None = None)[source]
- estimate_job_cost() dict [source]
Estimates the cost of a job according to the following assumptions:
1 token = 4 characters.
Input tokens = output tokens.
Fetches prices from Coop.
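The two stated assumptions reduce the estimate to simple arithmetic. A minimal sketch, using hypothetical per-token prices rather than the prices actually fetched from Coop:

```python
# Illustrative sketch of the stated assumptions; price figures and the
# function name are hypothetical, not fetched from Coop.

def estimate_cost(system_prompt, user_prompt,
                  input_price_per_token, output_price_per_token):
    # Assumption 1: 1 token ~= 4 characters.
    input_tokens = (len(system_prompt) + len(user_prompt)) / 4
    # Assumption 2: output tokens ~= input tokens.
    output_tokens = input_tokens
    return (input_tokens * input_price_per_token
            + output_tokens * output_price_per_token)

# 80 characters of prompt -> 20 input tokens and 20 assumed output tokens.
cost = estimate_cost("a" * 40, "b" * 40,
                     input_price_per_token=1e-6,
                     output_price_per_token=2e-6)
print(f"${cost:.6f}")  # $0.000060
```

Because output length is assumed equal to input length, the estimate is only a rough upper or lower bound depending on how verbose the model actually is.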
- estimate_job_cost_from_external_prices(price_lookup: dict) dict [source]
Estimates the cost of a job according to the following assumptions:
1 token = 4 characters.
Input tokens = output tokens.
price_lookup is an external pricing dictionary.
- static estimate_prompt_cost(system_prompt: str, user_prompt: str, price_lookup: dict, inference_service: str, model: str) dict [source]
Estimates the cost of a prompt. Takes piping into account.
- classmethod example(throw_exception_probability: float = 0.0, randomize: bool = False, test_model=False) Jobs [source]
Return an example Jobs instance.
- Parameters:
throw_exception_probability – the probability that an exception will be thrown when answering a question. This is useful for testing error handling.
randomize – whether to randomize the job by adding a random string to the period
test_model – whether to use a test model
>>> Jobs.example()
Jobs(...)
- classmethod from_interviews(interview_list)[source]
Return a Jobs instance from a list of interviews.
This is useful when you have, say, a list of failed interviews and you want to create a new job with only those interviews.
- interviews() list[Interview] [source]
Return a list of edsl.jobs.interviews.Interview objects. It returns one Interview for each combination of Agent, Scenario, and LanguageModel. If any of Agents, Scenarios, or LanguageModels are missing, it fills in with defaults.
>>> from edsl.jobs import Jobs
>>> j = Jobs.example()
>>> len(j.interviews())
4
>>> j.interviews()[0]
Interview(agent = Agent(traits = {'status': 'Joyful'}), survey = Survey(...), scenario = Scenario({'period': 'morning'}), model = Model(...))
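The one-interview-per-combination rule is a Cartesian product, which is why the example above yields 4 interviews (2 agents × 2 scenarios × 1 model). A sketch with plain strings standing in for the EDSL objects:

```python
from itertools import product

# Illustrative sketch: one interview per (agent, scenario, model)
# combination. Plain strings stand in for Agent/Scenario/Model objects.
agents = ["Joyful", "Sad"]
scenarios = ["morning", "evening"]
models = ["default-model"]  # hypothetical placeholder name

interviews = list(product(agents, scenarios, models))
print(len(interviews))  # 4
```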
- prompts() Dataset [source]
Return a Dataset of prompts that will be used.
>>> from edsl.jobs import Jobs
>>> Jobs.example().prompts()
Dataset(...)
- run(n: int = 1, progress_bar: bool = False, stop_on_exception: bool = False, cache: Cache | bool = None, check_api_keys: bool = False, sidecar_model: LanguageModel | None = None, verbose: bool = False, print_exceptions=True, remote_cache_description: str | None = None, remote_inference_description: str | None = None, skip_retry: bool = False, raise_validation_errors: bool = False, disable_remote_inference: bool = False) Results [source]
Runs the Job: conducts Interviews and returns their results.
- Parameters:
n – how many times to run each interview
progress_bar – shows a progress bar
stop_on_exception – stops the job if an exception is raised
cache – a cache object to store results
check_api_keys – check if the API keys are valid
batch_mode – run the job in batch mode, i.e., with no expectation of interaction with the user
verbose – prints messages
remote_cache_description – specifies a description for this group of entries in the remote cache
remote_inference_description – specifies a description for the remote inference job