Remote Inference

Remote inference allows you to use a single API key to run surveys with any available language models at the Expected Parrot server, instead of obtaining your own API keys for each model and running surveys on your own machine.

You can also automatically save survey results and API calls on the Expected Parrot server by activating Remote Caching.

Note: You must have a Coop account and purchase credits in order to use remote inference. Credits will be deducted from your balance based on tokens used and prices set by service providers. By using remote inference you agree to terms of use of service providers, which Expected Parrot may accept on your behalf and enforce in accordance with its own terms of use: https://www.expectedparrot.com/terms.

Activating remote inference

  1. Log into your Coop account.

  2. Navigate to API Settings. Toggle on the slider for Remote inference and copy your API key.

Toggle on remote inference and copy your Expected Parrot API key
  3. Add the following line to your .env file in your edsl working directory (replace your_api_key_here with your actual API key):

EXPECTED_PARROT_API_KEY='your_api_key_here'

You can regenerate your key (and update your .env file) at any time.
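EDSL picks up the key from your .env file automatically. If you want to sanity-check the file yourself before running a survey, a minimal stdlib-only parser might look like this (read_env is a hypothetical helper for illustration, not part of EDSL, and it assumes the simple KEY='value' format shown above):

```python
def read_env(path=".env"):
    """Parse a simple KEY='value' .env file into a dict (illustrative only)."""
    env = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            # Skip blank lines, comments, and anything without an '=' sign
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Strip surrounding quotes, if any
            env[key.strip()] = value.strip().strip("'\"")
    return env

# Usage: read_env() then check that "EXPECTED_PARROT_API_KEY" is present.
```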

Using remote inference

With remote inference activated, calling the run() method sends a survey to the Expected Parrot server, where you can access the results and all information about the job (history, costs, etc.). You can optionally pass a remote_inference_description parameter (a string) to identify the results at the Coop, and a remote_inference_visibility parameter (“private”, “public” or “unlisted”) to specify the visibility of the results. Either of these settings can be edited later, from your workspace or at the Coop web app.

Example:

from edsl import Survey

survey = Survey.example()

results = survey.run(remote_inference_description="Example survey", remote_inference_visibility="public")

Output (details will be unique to your job):

Job completed and Results stored on Coop: https://www.expectedparrot.com/content/642809b1-c887-42a9-b6c8-12ed5c6d856b.

If you also have remote caching activated, the results will be stored automatically on the Expected Parrot server.

Viewing the results

Navigate to the Remote inference section of your Coop account to view the status of your job and the results. Once your job has finished, it will appear with a status of Completed:

Remote inference page on the Coop web app. There is one job shown, and it has a status of "Completed."

You can then select View to access the results of the job. Your results are provided as an EDSL object that you can view, pull and share with others.

Job details and costs

When you run a job, you are charged credits based on the number of tokens used.

Before running a job, you can estimate the cost of the job by calling the estimate_job_cost() method on the Job object (a survey combined with one or more models). This will return information about the estimated total cost, input tokens, output tokens, and per-model costs:

Example:

from edsl import Survey, Model

survey = Survey.example()
model = Model("gpt-4o")
job = survey.by(model)

estimated_job_cost = job.estimate_job_cost()
estimated_job_cost

Output:

{'estimated_total_cost': 0.0018625,
 'estimated_total_input_tokens': 185,
 'estimated_total_output_tokens': 140,
 'model_costs': [{'inference_service': 'openai',
   'model': 'gpt-4o',
   'estimated_cost': 0.0018625,
   'estimated_input_tokens': 185,
   'estimated_output_tokens': 140}]}
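The estimate is simply the token counts multiplied by each model's per-token prices. The sketch below reproduces the figures above, assuming gpt-4o list prices of $2.50 per million input tokens and $10.00 per million output tokens (provider prices change over time, so treat these numbers as illustrative rather than authoritative):

```python
# Assumed per-token prices for gpt-4o (illustrative; check current rates)
INPUT_PRICE_PER_TOKEN = 2.50 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 10.00 / 1_000_000

# Token counts from the estimate above
input_tokens = 185
output_tokens = 140

estimated_cost = (input_tokens * INPUT_PRICE_PER_TOKEN
                  + output_tokens * OUTPUT_PRICE_PER_TOKEN)
# estimated_cost ≈ 0.0018625, matching 'estimated_total_cost' above
```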

You can also estimate the cost in credits to run the job remotely by passing the job to the remote_inference_cost() method of a Coop client object:

from edsl import Coop

coop = Coop()

estimated_remote_inference_cost = coop.remote_inference_cost(job) # using the job object from above
estimated_remote_inference_cost

Output:

{'credits': 0.24, 'usd': 0.00231}

Details on these methods can be found in the credits section.
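The outputs above suggest the USD-to-credits conversion is 1 credit = $0.01, with the total rounded up to the nearest hundredth of a credit. The following sketch is an inference from the examples shown, not an official formula:

```python
import math

def usd_to_credits(usd):
    """Assumed conversion (inferred from the outputs above, not official):
    1 credit = $0.01, rounded up to the nearest 0.01 credit."""
    credits = usd / 0.01                    # dollars -> credits
    return math.ceil(credits * 100) / 100   # round up to 2 decimal places

# usd_to_credits(0.00231) -> 0.24, matching the estimate above
```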

After running a job, you can view the actual cost in your job history or by calling the remote_inference_cost() method and passing it the job UUID (this is distinct from the results UUID, and can be found on your job history page).

You can also check the details of a job by calling the remote_inference_get() method and passing it the job UUID.

Job history

You can click on any job to view its history. When a job fails, the job history logs will describe the error that caused the failure:

A screenshot of job history logs on the Coop web app. The job has failed due to insufficient funds.

Job history can also provide important information about cancellation. When you cancel a job, one of two things must be true:

  1. The job hasn’t started running yet. No credits will be deducted from your balance.

  2. The job has started running. Credits will be deducted.

When a late cancellation has occurred, the credits deduction will be reflected in your job history.

A screenshot of job history logs on the Coop web app. The job has been cancelled late, and 2 credits have been deducted from the user's balance.

Using remote inference with remote caching

When remote caching and remote inference are both turned on, your remote jobs will use your remote cache entries when applicable.

Remote cache and remote inference toggles on the Coop web app

Here we rerun the survey from above:

survey.run(remote_inference_description="Example survey rerun")

The remote cache now has a new entry in the remote cache logs:

Remote cache logs on the Coop web app. There is one log that reads, "Add 1 new cache entry from remote inference job."

If the remote cache has been used for a particular job, the details will also show up in job history:

An entry in the job history log on the Coop web app. It shows that 1 new entry was added to the remote cache during this job.

Remote inference methods

Coop class

class edsl.coop.coop.Coop(api_key: str | None = None, url: str | None = None)[source]

Bases: CoopFunctionsMixin

Client for the Expected Parrot API.

remote_inference_cost(input: Jobs | Survey, iterations: int = 1) → dict[source]

Get the cost of a remote inference job.

Parameters:

input – The EDSL job to send to the server.

>>> job = Jobs.example()
>>> coop.remote_inference_cost(input=job)
{'credits': 0.77, 'usd': 0.0076950000000000005}
remote_inference_create(job: Jobs, description: str | None = None, status: Literal['queued', 'running', 'completed', 'failed'] = 'queued', visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted', initial_results_visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted', iterations: int | None = 1) → RemoteInferenceCreationInfo[source]

Send a remote inference job to the server.

Parameters:
  • job – The EDSL job to send to the server.

  • description (optional) – A description for this remote inference job.

  • status – The status of the job. Should be ‘queued’, unless you are debugging.

  • visibility – The visibility of the remote inference job.

  • iterations – The number of times to run each interview.

>>> job = Jobs.example()
>>> coop.remote_inference_create(job=job, description="My job")
{'uuid': '9f8484ee-b407-40e4-9652-4133a7236c9c', 'description': 'My job', 'status': 'queued', 'iterations': None, 'visibility': 'unlisted', 'version': '0.1.38.dev1'}
remote_inference_get(job_uuid: str | None = None, results_uuid: str | None = None) → RemoteInferenceResponse[source]

Get the details of a remote inference job. You can pass either the job uuid or the results uuid as a parameter. If you pass both, the job uuid will be prioritized.

Parameters:
  • job_uuid – The UUID of the EDSL job.

  • results_uuid – The UUID of the results associated with the EDSL job.

>>> coop.remote_inference_get("9f8484ee-b407-40e4-9652-4133a7236c9c")
{'job_uuid': '9f8484ee-b407-40e4-9652-4133a7236c9c', 'results_uuid': 'dd708234-31bf-4fe1-8747-6e232625e026', 'results_url': 'https://www.expectedparrot.com/content/dd708234-31bf-4fe1-8747-6e232625e026', 'latest_error_report_uuid': None, 'latest_error_report_url': None, 'status': 'completed', 'reason': None, 'credits_consumed': 0.35, 'version': '0.1.38.dev1'}