Interviews

Interview class

This module contains the Interview class, which is responsible for conducting an interview asynchronously.

class edsl.jobs.interviews.Interview.Interview(agent: Agent, survey: Survey, scenario: Scenario, model: Type['LanguageModel'], iteration: int = 0, indices: dict = None, cache: 'Cache' | None = None, skip_retry: bool = False, raise_validation_errors: bool = True)[source]

Bases: object

An ‘interview’ is one agent answering one survey, with one language model, for a given scenario.

The main method is async_conduct_interview, which conducts the interview asynchronously. Most of the class is dedicated to creating the tasks for each question in the survey, and then running them.
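The one-task-per-question pattern described above can be sketched with plain asyncio. This is an illustration of the technique, not edsl's implementation; the names (`QUESTION_NAMES`, `answer_question`) are hypothetical:

```python
import asyncio

# Hypothetical stand-ins for the survey's questions and the answering
# logic; these names are not part of edsl's API.
QUESTION_NAMES = ["q0", "q1", "q2"]

async def answer_question(name: str) -> str:
    """Pretend to query a language model for one question."""
    await asyncio.sleep(0)  # yield control, as a real LLM call would
    return f"answer to {name}"

async def conduct_interview() -> dict:
    # Create one task per question, then run them concurrently.
    tasks = {name: asyncio.create_task(answer_question(name))
             for name in QUESTION_NAMES}
    await asyncio.gather(*tasks.values())
    return {name: task.result() for name, task in tasks.items()}

answers = asyncio.run(conduct_interview())
print(answers["q0"])  # -> 'answer to q0'
```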

static _extract_valid_results(tasks: List['asyncio.Task'], invigilators: List['InvigilatorBase'], exceptions: InterviewExceptionCollection) Generator['Answers', None, None][source]

Extract the valid results from the list of tasks.

It iterates through the tasks and their invigilators, yielding the result of each completed task. If a task is not done, a ValueError is raised. If a task raised an exception, the exception is recorded on the Interview instance, unless the task was cancelled, which is expected behavior.

>>> i = Interview.example()
>>> result, _ = asyncio.run(i.async_conduct_interview())
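The extraction logic can be illustrated with stdlib asyncio, independent of edsl: completed tasks yield their results, failed tasks have their exceptions recorded rather than raised, and cancelled tasks are skipped. All names below are illustrative, not edsl's:

```python
import asyncio
from typing import Any

def extract_valid_results(tasks: list) -> tuple:
    """Collect results from finished tasks, recording (not raising) exceptions."""
    results: list = []
    exceptions: dict = {}
    for task in tasks:
        if not task.done():
            raise ValueError(f"Task {task.get_name()} is not done.")
        if task.cancelled():
            continue  # cancellation is expected behavior; skip silently
        exc = task.exception()
        if exc is not None:
            exceptions[task.get_name()] = exc  # record, don't raise
        else:
            results.append(task.result())
    return results, exceptions

async def main():
    ok = asyncio.create_task(asyncio.sleep(0, result="yes"), name="q0")
    async def boom():
        raise RuntimeError("model error")
    bad = asyncio.create_task(boom(), name="q1")
    await asyncio.gather(ok, bad, return_exceptions=True)
    return extract_valid_results([ok, bad])

results, exceptions = asyncio.run(main())
```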
async async_conduct_interview(run_config: 'RunConfig' | None = None) tuple['Answers', List[dict[str, Any]]][source]

Conduct an Interview asynchronously. It returns a tuple with the answers and a list of valid results.

Parameters:
  • run_config – an optional RunConfig bundling the run parameters (for example stop_on_exception, which stops the interview when an exception is raised) and the run environment.

Example usage:

>>> i = Interview.example()
>>> result, _ = asyncio.run(i.async_conduct_interview())
>>> result['q0']
'yes'
>>> i = Interview.example(throw_exception = True)
>>> result, _ = asyncio.run(i.async_conduct_interview())
>>> i.exceptions
{'q0': ...
>>> i = Interview.example()
>>> from edsl.jobs.Jobs import RunConfig, RunParameters, RunEnvironment
>>> run_config = RunConfig(parameters = RunParameters(), environment = RunEnvironment())
>>> run_config.parameters.stop_on_exception = True
>>> result, _ = asyncio.run(i.async_conduct_interview(run_config))
Traceback (most recent call last):
...
asyncio.exceptions.CancelledError
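The stop_on_exception behavior shown above, where one failing task leads to the cancellation of its siblings, can be sketched with stdlib asyncio. This is a minimal pattern sketch, not edsl's implementation; the names are hypothetical:

```python
import asyncio

async def answer(name: str, fail: bool = False) -> str:
    """Hypothetical answering coroutine; one variant fails on purpose."""
    if fail:
        raise RuntimeError(f"{name} failed")
    await asyncio.sleep(0.01)
    return "yes"

async def conduct(stop_on_exception: bool):
    tasks = [asyncio.create_task(answer("q0")),
             asyncio.create_task(answer("q1", fail=True))]
    if stop_on_exception:
        try:
            # gather propagates the first exception immediately...
            return await asyncio.gather(*tasks)
        except RuntimeError:
            for t in tasks:
                t.cancel()  # ...and the remaining tasks are cancelled
            raise
    # otherwise collect exceptions alongside results
    return await asyncio.gather(*tasks, return_exceptions=True)

results = asyncio.run(conduct(stop_on_exception=False))
```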
duplicate(iteration: int, cache: Cache, randomize_survey: bool | None = True) Interview[source]

Duplicate the interview, but with a new iteration number and cache.

>>> i = Interview.example()
>>> i2 = i.duplicate(1, None)
>>> i.iteration + 1 == i2.iteration
True
classmethod example(throw_exception: bool = False) Interview[source]

Return an example Interview instance.

classmethod from_dict(d: dict[str, Any]) Interview[source]

Return an Interview instance from a dictionary.

property has_exceptions: bool[source]

Return True if there are exceptions.

property interview_status: InterviewStatusDictionary[source]

Return a dictionary mapping task status codes to counts.

property task_status_logs: InterviewStatusLog[source]

Return the task status logs for the interview.

The keys are the question names; the values are the lists of status log changes for each task.
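A per-task status log of this shape can be sketched as a mapping from task name to a list of timestamped status transitions. The status names below are illustrative, not edsl's actual status codes:

```python
import time

class StatusLog:
    """Record (timestamp, status) transitions for each named task."""

    def __init__(self):
        self.logs: dict = {}

    def record(self, task_name: str, status: str) -> None:
        self.logs.setdefault(task_name, []).append((time.monotonic(), status))

    def statuses(self, task_name: str) -> list:
        return [status for _, status in self.logs[task_name]]

# Hypothetical status names for one question's task.
log = StatusLog()
log.record("q0", "NOT_STARTED")
log.record("q0", "RUNNING")
log.record("q0", "SUCCESS")
```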

to_dict(include_exceptions=True, add_edsl_version=True) dict[str, Any][source]

Return a dictionary representation of the Interview instance, used primarily for hashing.

>>> i = Interview.example()
>>> hash(i)
767745459362662063
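Hashing via a dictionary representation is a common pattern: serialize the dict to a canonical form, then hash that form. A minimal, library-independent sketch (the field names are hypothetical, and edsl's own hashing scheme may differ); SHA-256 is used here because Python's built-in string hash is randomized between processes:

```python
import hashlib
import json

def dict_hash(d: dict) -> int:
    """Hash a dict deterministically via its canonical JSON form."""
    canonical = json.dumps(d, sort_keys=True).encode()  # stable key order
    return int.from_bytes(hashlib.sha256(canonical).digest()[:8], "big")

# Equal representations hash identically, regardless of key order.
a = {"survey": "s1", "agent": "a1", "iteration": 0}
b = {"iteration": 0, "agent": "a1", "survey": "s1"}
assert dict_hash(a) == dict_hash(b)
```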
property token_usage: InterviewTokenUsage[source]

Determine how many tokens were used for the interview.

class edsl.jobs.interviews.Interview.InterviewRunningConfig(cache: "Optional['Cache']" = None, skip_retry: 'bool' = False, raise_validation_errors: 'bool' = True, stop_on_exception: 'bool' = False)[source]

Bases: object

cache: 'Cache' | None = None[source]
raise_validation_errors: bool = True[source]
skip_retry: bool = False[source]
stop_on_exception: bool = False[source]