Jobs
The Jobs class combines one survey with collections of agents, scenarios, and models. It is used to run a collection of interviews.
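For orientation, a minimal sketch assembled from the examples documented below; it builds a one-question survey, attaches an agent, and runs the interviews:

from edsl.surveys.Survey import Survey
from edsl.questions.QuestionFreeText import QuestionFreeText
from edsl.agents.Agent import Agent
from edsl.jobs import Jobs

q = QuestionFreeText(question_name="name", question_text="What is your name?")
j = Jobs(survey=Survey(questions=[q])).by(Agent(traits={"status": "Sad"}))
results = j.run()  # conducts one interview per agent/scenario/model combination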
Base Question
This module contains the Question class, which is the base class for all questions in EDSL.
- class edsl.questions.QuestionBase.QuestionBase[source]
Bases: PersistenceMixin, RepresentationMixin, SimpleAskMixin, QuestionBasePromptsMixin, QuestionBaseGenMixin, ABC, AnswerValidatorMixin
ABC for the Question class. All questions inherit from this class. Some of the constraints on child questions are defined in the RegisterQuestionsMeta metaclass.
Every child class will have the class attributes question_type, _response_model and response_validator_class, e.g.:
question_type = "free_text"
_response_model = FreeTextResponse
response_validator_class = FreeTextResponseValidator
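For instance, the free-text question carries exactly the values quoted above, which can be checked directly (a quick sketch):

from edsl.questions import QuestionFreeText

# Values taken from the example above.
assert QuestionFreeText.question_type == "free_text"
print(QuestionFreeText._response_model)           # FreeTextResponse
print(QuestionFreeText.response_validator_class)  # FreeTextResponseValidator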
- add_question(other: QuestionBase) Survey [source]
Add another question to this one, combining them into a Survey with two questions.
>>> from edsl.questions import QuestionFreeText as Q
>>> from edsl.questions import QuestionMultipleChoice as QMC
>>> s = Q.example().add_question(QMC.example())
>>> len(s.questions)
2
- property data: dict[source]
Return a dictionary of question attributes except for question_type.
>>> from edsl.questions import QuestionFreeText as Q
>>> Q.example().data
{'question_name': 'how_are_you', 'question_text': 'How are you?'}
- classmethod from_dict(data: dict) Type[QuestionBase] [source]
Construct a question object from a dictionary created by that question’s to_dict method.
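A sketch of the round trip with to_dict (documented below); this assumes from_dict dispatches on the stored question_type:

from edsl.questions import QuestionFreeText
from edsl.questions.QuestionBase import QuestionBase

q = QuestionFreeText.example()
d = q.to_dict()                 # includes 'question_type' for dispatch
q2 = QuestionBase.from_dict(d)  # reconstructs a QuestionFreeText
assert q2.data == q.data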
- html(scenario: dict | None = None, agent: dict | None = {}, answers: dict | None = None, include_question_name: bool = False, height: int | None = None, width: int | None = None, iframe=False)[source]
Return an HTML representation of the question.
- human_readable() str [source]
Return the question in a human-readable format.
>>> from edsl.questions import QuestionFreeText
>>> QuestionFreeText.example().human_readable()
'Question Type: free_text\nQuestion: How are you?'
- property name: str[source]
Helper property so that questions and instructions can use the same access method.
- question_text: str[source]
Validate that the question_text attribute is a string.
>>> class TestQuestion:
...     question_text = QuestionTextDescriptor()
...     def __init__(self, question_text: str):
...         self.question_text = question_text

>>> _ = TestQuestion("What is the capital of France?")
>>> _ = TestQuestion("What is the capital of France? {{variable}}")
>>> _ = TestQuestion("What is the capital of France? {{variable name}}")
Traceback (most recent call last):
...
edsl.exceptions.questions.QuestionCreationValidationError: Question text contains an invalid identifier: 'variable name'
- async run_async(just_answer: bool = True, model: 'Model' | None = None, agent: 'Agent' | None = None, disable_remote_inference: bool = False, **kwargs) Any | 'Results' [source]
Call the question asynchronously.
>>> import asyncio
>>> from edsl.questions import QuestionFreeText as Q
>>> m = Q._get_test_model(canned_response = "Blue")
>>> q = Q(question_name = "color", question_text = "What is your favorite color?")
>>> async def test_run_async():
...     result = await q.run_async(model=m, disable_remote_inference = True)
...     print(result)
>>> asyncio.run(test_run_async())
Blue
- classmethod run_example(show_answer: bool = True, model: 'LanguageModel' | None = None, cache=False, disable_remote_cache: bool = False, disable_remote_inference: bool = False, **kwargs)[source]
Run an example of the question.
>>> from edsl.language_models import LanguageModel
>>> from edsl import QuestionFreeText as Q
>>> m = Q._get_test_model(canned_response = "Yo, what's up?")
>>> m.execute_model_call("", "")
{'message': [{'text': "Yo, what's up?"}], 'usage': {'prompt_tokens': 1, 'completion_tokens': 1}}
>>> Q.run_example(show_answer = True, model = m, disable_remote_cache = True, disable_remote_inference = True)
Dataset([{'answer.how_are_you': ["Yo, what's up?"]}])
- to_dict(add_edsl_version=True)[source]
Convert the question to a dictionary that includes the question type (used in deserialization).
>>> from edsl.questions import QuestionFreeText as Q
>>> Q.example().to_dict(add_edsl_version = False)
{'question_name': 'how_are_you', 'question_text': 'How are you?', 'question_type': 'free_text'}
Jobs class
- class edsl.jobs.Jobs.Jobs(survey: Survey, agents: list[Agent] | AgentList | None = None, models: ModelList | list[LanguageModel] | None = None, scenarios: ScenarioList | list[Scenario] | None = None)[source]
Bases: Base
A collection of agents, scenarios, and models, together with one survey, that creates 'interviews'.
- all_question_parameters()[source]
Return all the fields in the questions in the survey.
>>> from edsl.jobs import Jobs
>>> Jobs.example().all_question_parameters()
{'period'}
- property bucket_collection: BucketCollection[source]
Return the bucket collection. If it does not exist, create it.
- by(*args: Agent | Scenario | LanguageModel | Sequence['Agent' | 'Scenario' | 'LanguageModel']) Jobs [source]
Add Agents, Scenarios and LanguageModels to a job.
- Parameters:
args – objects or a sequence (list, tuple, …) of objects of the same type
If no objects of this type exist in the Jobs instance, it stores the new objects as a list in the corresponding attribute. Otherwise, it combines the new objects with existing objects using the object’s __add__ method.
This 'by' method is intended to support a fluent interface (see the sketch after the notes below).
>>> from edsl.surveys.Survey import Survey
>>> from edsl.questions.QuestionFreeText import QuestionFreeText
>>> q = QuestionFreeText(question_name="name", question_text="What is your name?")
>>> j = Jobs(survey = Survey(questions=[q]))
>>> j
Jobs(survey=Survey(...), agents=AgentList([]), models=ModelList([]), scenarios=ScenarioList([]))
>>> from edsl.agents.Agent import Agent; a = Agent(traits = {"status": "Sad"})
>>> j.by(a).agents
AgentList([Agent(traits = {'status': 'Sad'})])
Notes:
- All objects must implement the get_value, set_value, and __add__ methods.
- Agents: traits of new agents are combined with traits of existing agents. New and existing agents should not have overlapping traits; adding agents this way does not increase the number of agents in the instance.
- Scenarios: traits of new scenarios are combined with traits of existing scenarios. New scenarios overwrite overlapping traits and do not increase the number of scenarios in the instance.
- Models: new models overwrite old models.
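A sketch of chained by calls, combining the classes used in the example above (import paths follow the patterns shown elsewhere on this page):

from edsl import Model
from edsl.agents.Agent import Agent
from edsl.scenarios.Scenario import Scenario
from edsl.jobs import Jobs

# Each call adds one kind of object; several objects of the same kind
# go in a single call (new models would otherwise overwrite old ones).
j = (Jobs.example()
     .by(Agent(traits={"mood": "curious"}))
     .by(Scenario({"period": "evening"}))
     .by(Model(temperature=1), Model(temperature=0.5)))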
- static compute_job_cost(job_results: Results) float [source]
Computes the cost of a completed job in USD.
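A sketch of the intended use (the job must be run first; the test model avoids real API calls):

from edsl.jobs import Jobs

j = Jobs.example(test_model=True)
results = j.run(disable_remote_cache=True, disable_remote_inference=True)
print(Jobs.compute_job_cost(results))  # total cost of the completed job, in USD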
- create_bucket_collection() BucketCollection [source]
Create a collection of buckets for each model.
These buckets are used to track API calls and token usage.
>>> from edsl.jobs import Jobs
>>> from edsl import Model
>>> j = Jobs.example().by(Model(temperature = 1), Model(temperature = 0.5))
>>> bc = j.create_bucket_collection()
>>> bc
BucketCollection(...)
- estimate_job_cost(iterations: int = 1) dict [source]
Estimate the cost of running the job.
- Parameters:
iterations – the number of iterations to run
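A sketch of estimating before running (the exact keys of the returned dict are not enumerated here):

from edsl import Model
from edsl.jobs import Jobs

j = Jobs.example().by(Model())
estimate = j.estimate_job_cost(iterations=2)  # dict of cost and token estimates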
- static estimate_prompt_cost(system_prompt: str, user_prompt: str, price_lookup: dict, inference_service: str, model: str) dict [source]
Estimate the cost of running the prompts.
- Parameters:
system_prompt – the system prompt
user_prompt – the user prompt
price_lookup – the price lookup
inference_service – the inference service
model – the model name
- classmethod example(throw_exception_probability: float = 0.0, randomize: bool = False, test_model=False) Jobs [source]
Return an example Jobs instance.
- Parameters:
throw_exception_probability – the probability that an exception will be thrown when answering a question. This is useful for testing error handling.
randomize – whether to randomize the job by adding a random string to the period
test_model – whether to use a test model
>>> Jobs.example() Jobs(...)
- classmethod from_interviews(interview_list)[source]
Return a Jobs instance from a list of interviews.
This is useful when you have, say, a list of failed interviews and you want to create a new job with only those interviews.
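A sketch of that retry pattern; should_retry is a hypothetical predicate you supply yourself:

from edsl.jobs import Jobs

j = Jobs.example()
# should_retry is illustrative only; filter however suits your case.
subset = [iv for iv in j.interviews() if should_retry(iv)]
retry_job = Jobs.from_interviews(subset)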
- interviews() list[Interview] [source]
Return a list of edsl.jobs.interviews.Interview objects. It returns one Interview for each combination of Agent, Scenario, and LanguageModel. If any Agents, Scenarios, or LanguageModels are missing, it fills in defaults.
>>> from edsl.jobs import Jobs
>>> j = Jobs.example()
>>> len(j.interviews())
4
>>> j.interviews()[0]
Interview(agent = Agent(traits = {'status': 'Joyful'}), survey = Survey(...), scenario = Scenario({'period': 'morning'}), model = Model(...))
- prompts() Dataset [source]
Return a Dataset of prompts that will be used.
>>> from edsl.jobs import Jobs
>>> Jobs.example().prompts()
Dataset(...)
- run(n: int = 1, progress_bar: bool = False, stop_on_exception: bool = False, cache: 'Cache' | bool = None, check_api_keys: bool = False, sidecar_model: LanguageModel | None = None, verbose: bool = True, print_exceptions=True, remote_cache_description: str | None = None, remote_inference_description: str | None = None, remote_inference_results_visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted', skip_retry: bool = False, raise_validation_errors: bool = False, disable_remote_cache: bool = False, disable_remote_inference: bool = False, bucket_collection: BucketCollection | None = None) Results [source]
Runs the Job: conducts Interviews and returns their results.
- Parameters:
n – How many times to run each interview
progress_bar – Whether to show a progress bar
stop_on_exception – Stops the job if an exception is raised
cache – A Cache object to store results
check_api_keys – Raises an error if API keys are invalid
verbose – Prints extra messages
remote_cache_description – Specifies a description for this group of entries in the remote cache
remote_inference_description – Specifies a description for the remote inference job
remote_inference_results_visibility – The initial visibility of the Results object on Coop. This will only be used for remote jobs!
disable_remote_cache – If True, the job will not use remote cache. This only works for local jobs!
disable_remote_inference – If True, the job will not use remote inference
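A typical local run, as a sketch (the canned test model means no API keys are needed):

from edsl.jobs import Jobs

j = Jobs.example(test_model=True)
results = j.run(
    n=1,
    progress_bar=False,
    disable_remote_cache=True,
    disable_remote_inference=True,  # force local execution
)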
- async run_async(cache=None, n=1, disable_remote_inference: bool = False, remote_inference_description: str | None = None, remote_inference_results_visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted', bucket_collection: BucketCollection | None = None, **kwargs)[source]
Run the job asynchronously, either locally or remotely.
- Parameters:
cache – Cache object or boolean
n – Number of iterations
disable_remote_inference – If True, forces local execution
remote_inference_description – Description for remote jobs
remote_inference_results_visibility – Visibility setting for remote results
kwargs – Additional arguments passed to local execution
- Returns:
Results object
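A sketch of the asynchronous counterpart, mirroring the run example above:

import asyncio
from edsl.jobs import Jobs

async def main():
    j = Jobs.example(test_model=True)
    return await j.run_async(n=1, disable_remote_inference=True)

results = asyncio.run(main())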