Activate remote caching
Remote caching is automatically activated with remote inference. To activate remote inference, navigate to the Settings page of your account and toggle on remote inference. Learn more about how remote inference works in the Remote Inference section. When you run a survey remotely at the Expected Parrot server, your results are also cached at the server. You can access them at your Cache page or from your workspace (see examples of methods below).
Universal remote cache
The universal remote cache is a collection of all the unique prompts that have been sent to any language models via the Expected Parrot server, together with the responses that were returned. It is a shared resource that is available to all users for free. When you run a survey at the Expected Parrot server, your survey results draw from the universal remote cache by default. This means that if your survey includes any prompts that have been run before, the stored responses to those prompts are retrieved from the universal remote cache and included in your results, at no cost to you. If a set of prompts has not been run before, a new response is generated, included in your results and added to the universal remote cache. (By "prompts" we mean a unique user prompt for a question together with a unique system prompt for an agent, if one was used with the question.)
Fresh responses
If you want to draw fresh responses, you can pass the parameter fresh=True to the run() method. Your results object will still have a cache automatically attached to it, and the universal remote cache will be updated with any new responses that are generated. (There can be multiple stored responses for a set of prompts if fresh responses are specified for a survey.)
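For instance, a minimal sketch of requesting fresh responses (the survey and question here are illustrative):

```python
from edsl import QuestionFreeText, Survey

# A one-question survey used only for illustration
survey = Survey(questions=[
    QuestionFreeText(question_name="prime", question_text="Is 2 a prime number?")
])

# Bypass stored responses and generate new ones; the new responses
# are still added to the universal remote cache.
fresh_results = survey.run(fresh=True)
```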
Features of the universal remote cache
The universal remote cache offers the following features:
- Free access: It is free to use and available to all users, regardless of whether you are running surveys remotely with your own keys for language models or an Expected Parrot API key.
- Free storage & retrieval: There is no limit on the number of responses that you can add to the universal remote cache or retrieve from it.
- Automatic updates: It is automatically updated whenever a survey is run remotely.
- Multiple responses: If a fresh response is generated for a question that is different from a response already stored in the universal remote cache, the new response is added with an iteration index.
- No deletions: You cannot delete entries from the universal remote cache.
- No manual additions: You cannot manually add entries. The only way to add responses to the universal remote cache is by running a survey remotely at the Expected Parrot server.
- Sharing & reproducibility: A new cache is automatically attached to each results object, which can be posted and shared with other users at the Coop.
- Privacy: It is not queryable, and no user information is available. You must run a survey to retrieve responses from the universal remote cache.
Frequently asked questions
How do I add responses to the universal remote cache?
How do I get a stored response?
How do I know whether I will retrieve a stored response?
Is the universal remote cache queryable? Can I check whether there is a stored response for a question?
Can I see which user generated a stored response in the universal remote cache?
Is my legacy remote cache in the universal remote cache?
Can I still access my legacy remote cache?
Why can't I add my existing caches to the universal remote cache?
What if I want to run a survey remotely but do not want my responses added to the universal remote cache?
Can I access the universal remote cache when I run a survey locally?
Can I delete a response in the universal remote cache?
What happens if I delete my account?
Legacy remote cache
Responses to questions that you ran remotely prior to the launch of the universal remote cache are stored in a legacy remote cache that can be found at the Cache page of your account. You can pull these entries to use them locally at any time.
Using your remote cache
You can view and search all of your remote cache entries and logs at your Cache page. These entries include all of the responses to questions that you have run remotely (whether newly generated or retrieved from the universal remote cache), together with the logs of your remote surveys. For example, here we run a survey with remote caching activated and pass a description to readily identify the job at Coop:
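(A minimal sketch of such a run; the remote_inference_description keyword shown here is assumed to be how the description is attached to the job.)

```python
from edsl import QuestionFreeText, Survey

survey = Survey(questions=[
    QuestionFreeText(question_name="prime", question_text="Is 2 a prime number?")
])

# Run remotely; the description makes the job easy to find at Coop.
# (remote_inference_description is assumed here for the job description.)
results = survey.run(remote_inference_description="Example survey: prime numbers")
```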
Reproducing results
When you share a results object (e.g., post it publicly at Coop or share it privately with other users), the cache attached to it is automatically shared with it. This can be useful if you want to share a specific historic cache for a survey or project (e.g., to allow other users to reproduce your results). You can inspect the cache for a results object by calling the cache property on it. For example, here we inspect the cache for the survey that we ran above:
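(A sketch of that call, continuing the example above; the cache contents are shown in the table below.)

```python
# Inspect the cache attached to the results from the previous remote run
results.cache
```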
model | parameters | system_prompt | user_prompt | output | iteration | timestamp | cache_key
---|---|---|---|---|---|---|---
gemini-1.5-flash | {'temperature': 0.5, 'topP': 1, 'topK': 1, 'maxOutputTokens': 2048, 'stopSequences': []} | nan | Is 2 a prime number? | {"candidates": [{"content": {"parts": [{"text": "Yes, 2 is a prime number. It's the only even prime number.\n"}], "role": "model"}, "finish_reason": 1, "safety_ratings": [{"category": 8, "probability": 1, "blocked": false}, {"category": 10, "probability": 1, "blocked": false}, {"category": 7, "probability": 1, "blocked": false}, {"category": 9, "probability": 1, "blocked": false}], "avg_logprobs": -0.0006228652317076921, "token_count": 0, "grounding_attributions": []}], "usage_metadata": {"prompt_token_count": 7, "candidates_token_count": 20, "total_token_count": 27, "cached_content_token_count": 0}, "model_version": "gemini-1.5-flash"} | 0 | 1738759640 | b939c0cf262061c7aedbbbfedc540689
Remote cache methods
When remote caching is activated, EDSL will automatically send responses to the server when you run a job (i.e., you do not need to execute methods manually). If you want to interact with the remote cache programmatically, you can use the following methods:

Coop class
class edsl.coop.coop.Coop(api_key: str | None = None, url: str | None = None)
Bases: CoopFunctionsMixin
Client for the Expected Parrot API that provides cloud-based functionality for EDSL.
The Coop class is the main interface for interacting with Expected Parrot’s cloud services. It enables:
- Storing and retrieving EDSL objects (surveys, agents, models, results, etc.)
- Running inference jobs remotely for better performance and scalability
- Retrieving and caching interview results
- Managing API keys and authentication
- Accessing model availability and pricing information
__init__(api_key: str | None = None, url: str | None = None) → None
Initialize the Expected Parrot API client. This constructor sets up the connection to Expected Parrot's cloud services. If an API key is not provided explicitly, it will attempt to obtain one from environment variables or from a stored location in the user's config directory.
Parameters:
- api_key (str, optional): API key for authentication with Expected Parrot. If not provided, will attempt to obtain from environment or stored location.
- url (str, optional): Base URL for the Expected Parrot service. If not provided, uses the default from configuration.
Notes:
- The API key is stored in the EXPECTED_PARROT_API_KEY environment variable or in a platform-specific config directory
- The URL is determined based on whether it’s a production, staging, or development environment
- The api_url for actual API endpoints is derived from the base URL
approve_prolific_study_submission(project_uuid: str, study_id: str, submission_id: str) → dict
Approve a Prolific study submission.
check_for_updates(silent: bool = False) → dict | None
Check if there's a newer version of EDSL available.
Parameters:silent: If True, don't print any messages to console
Returns:dict with version info if update is available, None otherwise
create(object: Agent | AgentList | Cache | LanguageModel | ModelList | Notebook | Type[QuestionBase] | Results | Scenario | ScenarioList | Survey, description: str | None = None, alias: str | None = None, visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted') → dict
Store an EDSL object in the Expected Parrot cloud service. This method uploads an EDSL object (like a Survey, Agent, or Results) to the Expected Parrot cloud service for storage, sharing, or further processing.
Parameters:
- object (EDSLObject): The EDSL object to store (Survey, Agent, Results, etc.)
- description (str, optional): A human-readable description of the object
- alias (str, optional): A custom alias for easier reference later
- visibility (VisibilityType, optional): Access level for the object. One of:
- “private”: Only accessible by the owner
- “public”: Accessible by anyone
- “unlisted”: Accessible with the link, but not listed publicly
Returns:dict: Information about the created object including:
- url: The URL to access the object
- alias_url: The URL with the custom alias (if provided)
- uuid: The unique identifier for the object
- visibility: The visibility setting
- version: The EDSL version used to create the object
Raises:CoopServerResponseError: If there’s an error communicating with the server
Example:
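A minimal sketch of storing a survey (the survey and question names are illustrative):

```python
from edsl import Coop, QuestionFreeText, Survey

coop = Coop()  # uses EXPECTED_PARROT_API_KEY from the environment

survey = Survey(questions=[
    QuestionFreeText(question_name="feedback", question_text="What did you think of the product?")
])

# Store the survey at Coop with a description and unlisted visibility
info = coop.create(survey, description="Customer feedback survey", visibility="unlisted")
print(info["url"], info["uuid"])
```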
create_project(survey: Survey, scenario_list: ScenarioList | None = None, scenario_list_method: Literal['randomize', 'loop', 'single_scenario', 'ordered'] | None = None, project_name: str = 'Project', survey_description: str | None = None, survey_alias: str | None = None, survey_visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted', scenario_list_description: str | None = None, scenario_list_alias: str | None = None, scenario_list_visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted')
Create a survey object on Coop, then create a project from the survey.
create_project_run(project_uuid: str, name: str | None = None, scenario_list_uuid: str | UUID | None = None, scenario_list_method: Literal['randomize', 'loop', 'single_scenario', 'ordered'] | None = None) → dict
Create a project run.
create_prolific_study(project_uuid: str, project_run_uuid: str, name: str, description: str, num_participants: int, estimated_completion_time_minutes: int, participant_payment_cents: int, device_compatibility: List[Literal['desktop', 'tablet', 'mobile']] | None = None, peripheral_requirements: List[Literal['audio', 'camera', 'download', 'microphone']] | None = None, filters: List[Dict] | None = None) → dict
Create a Prolific study for a project. Returns a dict with the study details. To add filters to your study, you should first pull the list of supported filters using Coop.list_prolific_filters(). Then, you can use the create_study_filter method of the returned CoopProlificFilters object to create a valid filter dict.
create_widget(short_name: str, display_name: str, esm_code: str, css_code: str | None = None, description: str | None = None) → dict
Create a new widget.
Parameters:
- short_name (str): The short name identifier for the widget. Must start with a lowercase letter and contain only lowercase letters, digits, and underscores
- display_name (str): The display name of the widget
- description (str): A human-readable description of the widget
- esm_code (str): The ESM JavaScript code for the widget
- css_code (str): The CSS code for the widget
Returns:dict: Information about the created widget including:
- short_name: The widget’s short name
- display_name: The widget’s display name
- description: The widget’s description
Raises:CoopServerResponseError: If there’s an error communicating with the server
delete(url_or_uuid: str | UUID) → dict
Delete an object from the server.
Parameters:url_or_uuid – The UUID or URL of the object. URLs can be in the form content/uuid or content/username/alias.
delete_project_run(project_uuid: str, project_run_uuid: str) → dict
Delete a project run.
delete_prolific_study(project_uuid: str, study_id: str) → dict
Delete a Prolific study.
delete_widget(short_name: str) → dict
Delete a widget by short name.
Parameters:short_name (str): The short name of the widget to delete
Returns:dict: Success status
Raises:CoopServerResponseError: If there’s an error communicating with the server
property edsl_settings: dict
Retrieve and return the EDSL settings stored on Coop. If no response is received within 5 seconds, return an empty dict.
execute_firecrawl_request(request_dict: Dict[str, Any]) → Any
Execute a Firecrawl request through the Extension Gateway. This method sends a Firecrawl request dictionary to the Extension Gateway's /firecrawl/execute endpoint, which processes it using FirecrawlScenario and returns EDSL Scenario/ScenarioList objects.
Parameters:
- request_dict (Dict[str, Any]): A dictionary containing the Firecrawl request. Must include:
  - method: The Firecrawl method to execute (scrape, crawl, search, extract, map_urls)
  - api_key: Optional if provided via environment, or this method will add it
  - Other method-specific parameters (url_or_urls, query_or_queries, etc.)
Returns:Any: The result from FirecrawlScenario execution:
- For scrape/extract with single URL: Scenario object
- For scrape/extract with multiple URLs: ScenarioList object
- For crawl/search/map_urls: ScenarioList object
Raises:httpx.HTTPError: If the request to the Extension Gateway fails ValueError: If the request_dict is missing required fields Exception: If the Firecrawl execution fails
Example:
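A sketch of a single-URL scrape request, assuming a Firecrawl API key is available via the environment (the target URL is illustrative):

```python
from edsl import Coop

coop = Coop()

# Build a Firecrawl request dict; "method" is required, and url_or_urls
# follows the method-specific parameters described above.
request = {
    "method": "scrape",
    "url_or_urls": "https://example.com",
}

# For a scrape of a single URL, a Scenario object is returned
scenario = coop.execute_firecrawl_request(request)
```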
fetch_models() → Dict[str, List[str]]
Fetch information about available language models from Expected Parrot. This method retrieves the current list of available language models grouped by service provider (e.g., OpenAI, Anthropic, etc.). This information is useful for programmatically selecting models based on availability and for ensuring that jobs only use supported models.
Returns:ServiceToModelsMapping: A mapping of service providers to their available models.
Example structure:
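The mapping looks roughly like the following (service and model names are illustrative):

```python
{
    "openai": ["gpt-4o", "gpt-4o-mini"],
    "anthropic": ["claude-3-5-sonnet-20240620"],
    "google": ["gemini-1.5-flash"],
}
```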
Raises:CoopServerResponseError: If there’s an error communicating with the server
Notes:
- The availability of models may change over time
- Not all models may be accessible with your current API keys
- Use this method to check for model availability before creating jobs
- Models may have different capabilities (text-only, multimodal, etc.)
Example:
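A sketch of checking model availability before creating a job:

```python
from edsl import Coop

coop = Coop()
models_by_service = coop.fetch_models()

# Assuming dict-like access per the annotated return type Dict[str, List[str]]
if "gemini-1.5-flash" in models_by_service.get("google", []):
    print("Model is available")
```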
fetch_prices() → dict
Fetch the current pricing information for language models. This method retrieves the latest pricing information for all supported language models from the Expected Parrot API. The pricing data is used to estimate costs for jobs and to optimize model selection based on budget constraints.
Returns:
dict: A dictionary mapping (service, model) tuples to pricing information. Each entry contains token pricing for input and output tokens. Example structure:
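Roughly like the following; the keys inside each entry and the dollar amounts are illustrative (USD per million tokens):

```python
{
    ("openai", "gpt-4o"): {"input": 2.50, "output": 10.00},
    ("google", "gemini-1.5-flash"): {"input": 0.075, "output": 0.30},
}
```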
Raises:ValueError: If the EDSL_FETCH_TOKEN_PRICES configuration setting is invalid
Notes:
- Returns an empty dict if EDSL_FETCH_TOKEN_PRICES is set to “False”
- The pricing data is cached to minimize API calls
- Pricing may vary based on the model, provider, and token type (input/output)
- All prices are in USD per million tokens
Example:
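A sketch of looking up a price entry (the service/model pair is illustrative):

```python
from edsl import Coop

coop = Coop()
prices = coop.fetch_prices()

# Keys are (service, model) tuples; values hold per-token pricing (see structure above)
entry = prices.get(("openai", "gpt-4o"))
print(entry)
```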
fetch_rate_limit_config_vars() → dict
Fetch a dict of rate limit config vars from Coop. The dict keys are RPM and TPM variables like EDSL_SERVICE_RPM_OPENAI.
fetch_working_models() → List[dict]
Fetch a list of working models from Coop.
Example output:
get(url_or_uuid: str | UUID, expected_object_type: Literal['agent', 'agent_list', 'cache', 'model', 'model_list', 'notebook', 'question', 'results', 'scenario', 'scenario_list', 'survey'] | None = None) → Agent | AgentList | Cache | LanguageModel | ModelList | Notebook | Type[QuestionBase] | Results | Scenario | ScenarioList | Survey
Retrieve an EDSL object from the Expected Parrot cloud service. This method downloads and deserializes an EDSL object from the cloud service using either its UUID, URL, or username/alias combination.
Parameters:
- url_or_uuid (Union[str, UUID]): Identifier for the object to retrieve. Can be one of:
  - UUID string (e.g., "123e4567-e89b-12d3-a456-426614174000")
  - Full URL (e.g., "https://expectedparrot.com/content/123e4567…")
  - Alias URL (e.g., "https://expectedparrot.com/content/username/my-survey")
- expected_object_type (ObjectType, optional): If provided, validates that the retrieved object is of the expected type (e.g., "survey", "agent")
Returns:EDSLObject: The retrieved object as its original EDSL class instance (e.g., Survey, Agent, Results)
Raises:CoopNoUUIDError: If no UUID or URL is provided CoopInvalidURLError: If the URL format is invalid CoopServerResponseError: If the server returns an error (e.g., not found, unauthorized access) Exception: If the retrieved object doesn’t match the expected type
Notes:
- If the object’s visibility is set to “private”, you must be the owner to access it
- For objects stored with an alias, you can use either the UUID or the alias URL
Example:
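A sketch of retrieving an object by UUID or alias URL (both identifiers are illustrative):

```python
from edsl import Coop

coop = Coop()

# Retrieve by UUID, validating the object type
survey = coop.get("123e4567-e89b-12d3-a456-426614174000", expected_object_type="survey")

# Or retrieve by alias URL
survey = coop.get("https://expectedparrot.com/content/username/my-survey")
```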
get_balance() → dict
Get the current credit balance for the authenticated user. This method retrieves the user's current credit balance information from the Expected Parrot platform.
Returns:dict: Information about the user’s credit balance, including:
- credits: The current number of credits in the user’s account
- usage_history: Recent credit usage if available
Raises:CoopServerResponseError: If there’s an error communicating with the server
Example:
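A minimal sketch:

```python
from edsl import Coop

coop = Coop()
balance = coop.get_balance()
print(balance["credits"])  # current number of credits in the account
```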
get_metadata(url_or_uuid: str | UUID) → dict
Get an object's metadata from the server.
Parameters:url_or_uuid – The UUID or URL of the object. URLs can be in the form content/uuid or content/username/alias.
get_profile() → dict
Get the current user's profile information. This method retrieves the authenticated user's profile information from the Expected Parrot platform using their API key.
Returns:dict: User profile information including:
- username: The user’s username
- email: The user’s email address
Raises:CoopServerResponseError: If there’s an error communicating with the server
Example:
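A minimal sketch:

```python
from edsl import Coop

coop = Coop()
profile = coop.get_profile()
print(profile["username"], profile["email"])
```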
get_progress_bar_url()
get_project(project_uuid: str) → dict
Get a project from Coop.
get_project_human_responses(project_uuid: str, project_run_uuid: str | None = None) → Results | ScenarioList
Return a Results object with the human responses for a project. If generating the Results object fails, a ScenarioList will be returned instead.
get_prolific_study(project_uuid: str, study_id: str) → dict
Get a Prolific study. Returns a dict with the study details.
get_prolific_study_responses(project_uuid: str, study_id: str) → Results | ScenarioList
Return a Results object with the human responses for a project. If generating the Results object fails, a ScenarioList will be returned instead.
get_running_jobs() → List[str]
Get a list of currently running job IDs.
Returns:list[str]: List of running job UUIDs
get_upload_url(object_uuid: str) → dict
Get a signed upload URL for updating the content of an existing object. This method gets a signed URL that allows direct upload to Google Cloud Storage for objects stored in the new format, while preserving the existing UUID.
Parameters:object_uuid (str): The UUID of the object to get an upload URL for
Returns:dict: A response containing:
- signed_url: The signed URL for uploading new content
- object_uuid: The UUID of the object
- message: Success message
Raises:CoopServerResponseError: If there’s an error communicating with the server HTTPException: If the object is not found, not owned by user, or not in new format
Notes:
- Only works with objects stored in the new format (transition table)
- User must be the owner of the object
- The signed URL expires after 60 minutes
Example:
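A sketch of requesting an upload URL (the UUID is illustrative and must belong to an object you own):

```python
from edsl import Coop

coop = Coop()

response = coop.get_upload_url("123e4567-e89b-12d3-a456-426614174000")
signed_url = response["signed_url"]  # upload new content to this URL within 60 minutes
```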
get_uuid_from_hash(hash_value: str) → str
Retrieve the UUID for an object based on its hash. This method calls the remote endpoint to get the UUID associated with an object hash.
Parameters:hash_value (str): The hash value of the object to look up
Returns:str: The UUID of the object if found
Raises:CoopServerResponseError: If the object is not found or there's an error communicating with the server
get_widget(short_name: str) → Dict
Get a specific widget by short name.
Parameters:short_name (str): The short name of the widget
Returns:Dict: Complete widget data including ESM and CSS code
Raises:CoopServerResponseError: If there's an error communicating with the server
get_widget_metadata(short_name: str) → Dict
Get metadata for a specific widget by short name.
Parameters:short_name (str): The short name of the widget
Returns:Dict: Widget metadata including size information
Raises:CoopServerResponseError: If there’s an error communicating with the server
property headers: dict
Return the headers for the request.
list(object_type: Literal['agent', 'agent_list', 'cache', 'model', 'model_list', 'notebook', 'question', 'results', 'scenario', 'scenario_list', 'survey'] | List[Literal['agent', 'agent_list', 'cache', 'model', 'model_list', 'notebook', 'question', 'results', 'scenario', 'scenario_list', 'survey']] | None = None, visibility: Literal['private', 'public', 'unlisted'] | List[Literal['private', 'public', 'unlisted']] | None = None, search_query: str | None = None, page: int = 1, page_size: int = 10, sort_ascending: bool = False, community: bool = False) → CoopRegularObjects
Retrieve objects either owned by the user or shared with them.
Notes:
- search_query only works with the description field.
- If sort_ascending is False, then the most recently created objects are returned first.
- If community is False, then only objects owned by the user or shared with the user are returned.
- If community is True, then only public objects not owned by the user are returned.
list_prolific_filters() → CoopProlificFilters
Get a ScenarioList of supported Prolific filters. This list has several methods that you can use to create valid filter dicts for use with Coop.create_prolific_study(). Call find() to examine a specific filter by ID:
>>> filters = coop.list_prolific_filters()
>>> filters.find("age")
Scenario(
list_widgets(search_query: str | None = None, page: int = 1, page_size: int = 10) → ScenarioList
Get metadata for all widgets.
Parameters:
- page (int): Page number for pagination (default: 1)
- page_size (int): Number of widgets per page (default: 10, max: 100)
Returns:List[Dict]: List of widget metadata
Raises:CoopValueError: If page or page_size parameters are invalid CoopServerResponseError: If there’s an error communicating with the server
login()
Starts the EDSL auth token login flow.
login_gradio(timeout: int = 120, launch: bool = True, **launch_kwargs)
Start the EDSL auth token login flow inside a Gradio application. This helper mirrors the behaviour of Coop.login_streamlit() but renders the login link and status updates inside a Gradio UI. It will poll the Expected Parrot server for the API key associated with a newly generated auth token and, once received, store it via edsl.coop.ep_key_handling.ExpectedParrotKeyHandler as well as in the local .env file so subsequent sessions pick it up automatically.
Parameters:
- timeout (int, default 120): How many seconds to wait for the user to complete the login before giving up.
- launch (bool, default True): If True, the Gradio app is immediately launched with demo.launch(**launch_kwargs). Set this to False if you want to embed the returned gradio.Blocks object into an existing Gradio interface.
- **launch_kwargs: Additional keyword arguments forwarded to gr.Blocks.launch when launch is True.
Returns:
str | gradio.Blocks | None:
- If the API key is retrieved within timeout seconds while the function is executing (e.g., when launch is False and the caller integrates the Blocks into another app), the key is returned.
- If launch is True, the method returns None after the Gradio app has been launched.
- If launch is False, the constructed gr.Blocks is returned so the caller can compose it further.
login_streamlit(timeout: int = 120)
Start the EDSL auth token login flow inside a Streamlit application. This helper is functionally equivalent to Coop.login but renders the login link and status updates directly in the Streamlit UI. The method will automatically poll the Expected Parrot server for the API key associated with the generated auth token and, once received, store it via ExpectedParrotKeyHandler and write it to the local .env file so subsequent sessions pick it up automatically.
Parameters:
- timeout (int, default 120): How many seconds to wait for the user to complete the login before giving up and showing an error in the Streamlit app.
Returns:
str | None: The API key if the user logged in successfully, otherwise None.
new_remote_inference_get(job_uuid: str | None = None, results_uuid: str | None = None, include_json_string: bool | None = False) → RemoteInferenceResponse
Get the status and details of a remote inference job. This method retrieves the current status and information about a remote job, including links to results if the job has completed successfully.
Parameters:
- job_uuid (str, optional): The UUID of the remote job to check
- results_uuid (str, optional): The UUID of the results associated with the job (can be used if you only have the results UUID)
- include_json_string (bool, optional): If True, include the json string for the job in the response
Returns:
RemoteInferenceResponse: Information about the job including:
- job_uuid: The unique identifier for the job
- results_uuid: The UUID of the results
- results_url: URL to access the results
- status: Current status ("queued", "running", "completed", "failed")
- version: EDSL version used for the job
- job_json_string: The json string for the job (if include_json_string is True)
- latest_job_run_details: Metadata about the job status
  - interview_details: Metadata about the job interview status (for jobs that have reached running status)
    - total_interviews: The total number of interviews in the job
    - completed_interviews: The number of completed interviews
    - interviews_with_exceptions: The number of completed interviews that have exceptions
    - exception_counters: A list of exception counts for the job
      - exception_type: The type of exception
      - inference_service: The inference service
      - model: The model
      - question_name: The name of the question
      - exception_count: The number of exceptions
  - failure_reason: The reason the job failed (failed jobs only)
  - failure_description: The description of the failure (failed jobs only)
  - error_report_uuid: The UUID of the error report (partially failed jobs only)
  - cost_credits: The cost of the job run in credits
  - cost_usd: The cost of the job run in USD
  - expenses: The expenses incurred by the job run
    - service: The service
    - model: The model
    - token_type: The type of token (input or output)
    - price_per_million_tokens: The price per million tokens
    - tokens_count: The number of tokens consumed
    - cost_credits: The cost of the service/model/token type combination in credits
    - cost_usd: The cost of the service/model/token type combination in USD
Raises:ValueError: If neither job_uuid nor results_uuid is provided CoopServerResponseError: If there’s an error communicating with the server
Notes:
- Either job_uuid or results_uuid must be provided
- If both are provided, job_uuid takes precedence
- For completed jobs, you can use the results_url to view or download results
- For failed jobs, check the latest_error_report_url for debugging information
Example:
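A sketch of checking a job's status (the UUID is illustrative, and dict-style access to the listed fields is assumed):

```python
from edsl import Coop

coop = Coop()

job_info = coop.new_remote_inference_get(job_uuid="123e4567-e89b-12d3-a456-426614174000")
print(job_info["status"])       # e.g., "queued", "running", "completed", "failed"
print(job_info["results_url"])  # available once the job has completed
```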
old_remote_inference_create(job: Jobs, description: str | None = None, status: Literal['queued', 'running', 'completed', 'failed', 'cancelled', 'cancelling', 'partial_failed'] = 'queued', visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted', initial_results_visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted', iterations: int | None = 1, fresh: bool | None = False) → RemoteInferenceCreationInfo
Create a remote inference job for execution in the Expected Parrot cloud. This method sends a job to be executed in the cloud, which can be more efficient for large jobs or when you want to run jobs in the background. The job execution is handled by Expected Parrot's infrastructure, and you can check the status and retrieve results later.
Parameters:
- job (Jobs): The EDSL job to run in the cloud
- description (str, optional): A human-readable description of the job
- status (RemoteJobStatus): Initial status, should be "queued" for normal use. Possible values: "queued", "running", "completed", "failed"
- visibility (VisibilityType): Access level for the job information. One of:
- “private”: Only accessible by the owner
- “public”: Accessible by anyone
- “unlisted”: Accessible with the link, but not listed publicly
Returns:RemoteInferenceCreationInfo: Information about the created job including:
- uuid: The unique identifier for the job
- description: The job description
- status: Current status of the job
- iterations: Number of iterations for each interview
- visibility: Access level for the job
- version: EDSL version used to create the job
Raises:CoopServerResponseError: If there’s an error communicating with the server
Notes:
- Remote jobs run asynchronously and may take time to complete
- Use remote_inference_get() with the returned UUID to check status
- Credits are consumed based on the complexity of the job
Example:
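A sketch of queuing a job for remote execution; the survey/model combination shown is illustrative, and any EDSL Jobs object works:

```python
from edsl import Coop, Model, QuestionFreeText, Survey

coop = Coop()

survey = Survey(questions=[
    QuestionFreeText(question_name="prime", question_text="Is 2 a prime number?")
])

# Combining a survey with a model yields a Jobs object that can be sent to the cloud
job = survey.by(Model("gemini-1.5-flash"))

job_info = coop.old_remote_inference_create(job, description="Prime number check")
print(job_info["uuid"])  # assuming dict-style access to the fields listed above
```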
patch(url_or_uuid: str | UUID, description: str | None = None, alias: str | None = None, value: Agent | AgentList | Cache | LanguageModel | ModelList | Notebook | Type[QuestionBase] | Results | Scenario | ScenarioList | Survey | None = None, visibility: Literal['private', 'public', 'unlisted'] | None = None) → dict
Change the attributes of an uploaded object.
Parameters:
- url_or_uuid – The UUID or URL of the object. URLs can be in the form content/uuid or content/username/alias.
- description – Optional new description
- alias – Optional new alias
- value – Optional new object value
- visibility – Optional new visibility setting
Parameters:credits_transferred (int): The number of credits to transfer to the recipient recipient_username (str): The username of the recipient service_name (str): The name of the service to pay for
Returns:dict: Information about the transfer transaction, including:
- success: Whether the transaction was successful
- transaction_id: A unique identifier for the transaction
- remaining_credits: The number of credits remaining in the sender’s account
Raises:CoopServerResponseError: If there’s an error communicating with the server or if the transfer criteria aren’t met (e.g., insufficient credits)
Example:
publish_prolific_study(project_uuid: str, study_id: str) → dict
Publish a Prolific study.
pull(url_or_uuid: str | UUID | None = None, expected_object_type: Literal['agent', 'agent_list', 'cache', 'model', 'model_list', 'notebook', 'question', 'results', 'scenario', 'scenario_list', 'survey'] | None = None) → dict
Generate a signed URL for pulling an object directly from Google Cloud Storage. This method gets a signed URL that allows direct download access to the object from Google Cloud Storage, which is more efficient for large files.
Parameters:
- url_or_uuid (Union[str, UUID], optional): Identifier for the object to retrieve. Can be one of:
  - UUID string (e.g., "123e4567-e89b-12d3-a456-426614174000")
  - Full URL (e.g., "https://expectedparrot.com/content/123e4567…")
  - Alias URL (e.g., "https://expectedparrot.com/content/username/my-survey")
- expected_object_type (ObjectType, optional): If provided, validates that the retrieved object is of the expected type (e.g., "survey", "agent")
Returns:dict: A response containing the signed_url for direct download
Raises:CoopNoUUIDError: If no UUID or URL is provided CoopInvalidURLError: If the URL format is invalid CoopServerResponseError: If there’s an error communicating with the server HTTPException: If the object or object files are not found
Example:
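A sketch of getting a signed download URL (the alias URL is illustrative; a UUID also works):

```python
from edsl import Coop

coop = Coop()

response = coop.pull("https://expectedparrot.com/content/username/my-survey",
                     expected_object_type="survey")
signed_url = response["signed_url"]  # download the object content directly from this URL
```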
push(object: Agent | AgentList | Cache | LanguageModel | ModelList | Notebook | Type[QuestionBase] | Results | Scenario | ScenarioList | Survey, description: str | None = None, alias: str | None = None, visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted') → Scenario
Generate a signed URL for pushing an object directly to Google Cloud Storage. This method gets a signed URL that allows direct upload access to Google Cloud Storage, which is more efficient for large files.
Parameters:object_type (ObjectType): The type of object to be uploaded
Returns:dict: A response containing the signed_url for direct upload and optionally a job_id
Raises:CoopServerResponseError: If there's an error communicating with the server
Example:
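A sketch of uploading an object via a signed URL (the survey here is illustrative):

```python
from edsl import Coop, QuestionFreeText, Survey

coop = Coop()

survey = Survey(questions=[
    QuestionFreeText(question_name="feedback", question_text="What did you think of the product?")
])

# Upload the survey via direct Google Cloud Storage upload (efficient for large objects)
response = coop.push(survey, description="Customer feedback survey", visibility="unlisted")
```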
reject_prolific_study_submission(project_uuid: str, study_id: str, submission_id: str, reason: Literal['TOO_QUICKLY', 'TOO_SLOWLY', 'FAILED_INSTRUCTIONS', 'INCOMP_LONGITUDINAL', 'FAILED_CHECK', 'LOW_EFFORT', 'MALINGERING', 'NO_CODE', 'BAD_CODE', 'NO_DATA', 'UNSUPP_DEVICE', 'OTHER'], explanation: str) → dict
Reject a Prolific study submission.
async remote_async_execute_model_call(model_dict: dict, user_prompt: str, system_prompt: str) → dict
remote_cache_get(job_uuid: str | UUID | None = None) → List[CacheEntry]
Get all remote cache entries.
Parameters: job_uuid (optional) – Only return CacheEntry objects generated by this remote job.
remote_cache_get_by_key(select_keys: List[str] | None = None) → List[CacheEntry]
Get all remote cache entries.
Parameters: select_keys (optional) – Only return CacheEntry objects with these keys.
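A sketch of pulling remote cache entries to use locally (the job UUID is illustrative; the cache key is the one shown in the table earlier in this page):

```python
from edsl import Coop

coop = Coop()

# Entries generated by a specific remote job
entries = coop.remote_cache_get(job_uuid="123e4567-e89b-12d3-a456-426614174000")

# Or look up entries by their cache keys
entries = coop.remote_cache_get_by_key(select_keys=["b939c0cf262061c7aedbbbfedc540689"])
```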
remote_inference_cost(input: Jobs | Survey, iterations: int = 1) → int
Get the estimated cost in credits of a remote inference job.
Parameters: input – The EDSL job to send to the server.
remote_inference_create(job: Jobs, description: str | None = None, status: Literal['queued', 'running', 'completed', 'failed', 'cancelled', 'cancelling', 'partial_failed'] = 'queued', visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted', initial_results_visibility: Literal['private', 'public', 'unlisted'] | None = 'unlisted', iterations: int | None = 1, fresh: bool | None = False) → RemoteInferenceCreationInfo
Create a remote inference job for execution in the Expected Parrot cloud. This method sends a job to be executed in the cloud, which can be more efficient for large jobs or when you want to run jobs in the background. The job execution is handled by Expected Parrot's infrastructure, and you can check the status and retrieve results later.
Parameters:
- job (Jobs): The EDSL job to run in the cloud
- description (str, optional): A human-readable description of the job
- status (RemoteJobStatus): Initial status, should be "queued" for normal use. Possible values: "queued", "running", "completed", "failed"
- visibility (VisibilityType): Access level for the job information. One of:
- “private”: Only accessible by the owner
- “public”: Accessible by anyone
- “unlisted”: Accessible with the link, but not listed publicly
Returns:RemoteInferenceCreationInfo: Information about the created job including:
- uuid: The unique identifier for the job
- description: The job description
- status: Current status of the job
- iterations: Number of iterations for each interview
- visibility: Access level for the job
- version: EDSL version used to create the job
Raises:CoopServerResponseError: If there’s an error communicating with the server
Notes:
- Remote jobs run asynchronously and may take time to complete
- Use remote_inference_get() with the returned UUID to check status
- Credits are consumed based on the complexity of the job
remote_inference_get(job_uuid: str | None = None, results_uuid: str | None = None, include_json_string: bool | None = False) → RemoteInferenceResponse
Get the status and details of a remote inference job. This method retrieves the current status and information about a remote job, including links to results if the job has completed successfully.
Parameters:
- job_uuid (str, optional): The UUID of the remote job to check
- results_uuid (str, optional): The UUID of the results associated with the job (can be used if you only have the results UUID)
- include_json_string (bool, optional): If True, include the json string for the job in the response
Returns:
RemoteInferenceResponse: Information about the job including:
- job_uuid: The unique identifier for the job
- results_uuid: The UUID of the results
- results_url: URL to access the results
- status: Current status ("queued", "running", "completed", "failed")
- version: EDSL version used for the job
- job_json_string: The json string for the job (if include_json_string is True)
- latest_job_run_details: Metadata about the job status
  - interview_details: Metadata about the job interview status (for jobs that have reached running status)
    - total_interviews: The total number of interviews in the job
    - completed_interviews: The number of completed interviews
    - interviews_with_exceptions: The number of completed interviews that have exceptions
    - exception_counters: A list of exception counts for the job
      - exception_type: The type of exception
      - inference_service: The inference service
      - model: The model
      - question_name: The name of the question
      - exception_count: The number of exceptions
  - failure_reason: The reason the job failed (failed jobs only)
  - failure_description: The description of the failure (failed jobs only)
  - error_report_uuid: The UUID of the error report (partially failed jobs only)
  - cost_credits: The cost of the job run in credits
  - cost_usd: The cost of the job run in USD
  - expenses: The expenses incurred by the job run
    - service: The service
    - model: The model
    - token_type: The type of token (input or output)
    - price_per_million_tokens: The price per million tokens
    - tokens_count: The number of tokens consumed
    - cost_credits: The cost of the service/model/token type combination in credits
    - cost_usd: The cost of the service/model/token type combination in USD
Raises:ValueError: If neither job_uuid nor results_uuid is provided CoopServerResponseError: If there’s an error communicating with the server
Notes:
- Either job_uuid or results_uuid must be provided
- If both are provided, job_uuid takes precedence
- For completed jobs, you can use the results_url to view or download results
- For failed jobs, check the latest_error_report_url for debugging information
remote_inference_list(status: Literal['queued', 'running', 'completed', 'failed', 'cancelled', 'cancelling', 'partial_failed'] | List[Literal['queued', 'running', 'completed', 'failed', 'cancelled', 'cancelling', 'partial_failed']] | None = None, search_query: str | None = None, page: int = 1, page_size: int = 10, sort_ascending: bool = False) → CoopJobsObjects
Retrieve jobs owned by the user.
Parameters:error (Exception): The exception to report
Example:
reset_scenario_sampling_state(project_uuid: str, project_run_uuid: str) → dict
Reset the scenario sampling state for a project. This is useful if you have scenario_list_method="ordered" and you want to start over with the first scenario in the list.
test_scenario_sampling(project_uuid: str, project_run_uuid: str) → List[int]
Get a sample for a project.
transfer_credits(credits_transferred: int, recipient_username: str, transfer_note: str = None) → dict
Transfer credits to another user. This method transfers a specified number of credits from the authenticated user's account to another user's account on the Expected Parrot platform.
Parameters:
- credits_transferred (int): The number of credits to transfer to the recipient
- recipient_username (str): The username of the recipient
- transfer_note (str, optional): A personal note to include with the transfer
Returns:dict: Information about the transfer transaction, including:
- success: Whether the transaction was successful
- transaction_id: A unique identifier for the transaction
- remaining_credits: The number of credits remaining in the sender’s account
Raises:CoopServerResponseError: If there’s an error communicating with the server or if the transfer criteria aren’t met (e.g., insufficient credits)
Example:
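A sketch of a transfer (the recipient and amount are illustrative):

```python
from edsl import Coop

coop = Coop()

receipt = coop.transfer_credits(
    credits_transferred=100,
    recipient_username="a_friend",
    transfer_note="Thanks for the agent list!",
)
print(receipt["remaining_credits"])  # credits left in the sender's account
```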
update_project_run(project_uuid: str, project_run_uuid: str, name: str | None = None) → dict
Update a project run.
update_prolific_study(project_uuid: str, study_id: str, project_run_uuid: str | None = None, name: str | None = None, description: str | None = None, num_participants: int | None = None, estimated_completion_time_minutes: int | None = None, participant_payment_cents: int | None = None, device_compatibility: List[Literal['desktop', 'tablet', 'mobile']] | None = None, peripheral_requirements: List[Literal['audio', 'camera', 'download', 'microphone']] | None = None, filters: List[Dict] | None = None) → dict
Update a Prolific study. Returns a dict with the study details.
update_widget(existing_short_name: str, short_name: str | None = None, display_name: str | None = None, esm_code: str | None = None, css_code: str | None = None, description: str | None = None) → Dict
Update a widget by short name.
Parameters:
- existing_short_name (str): The current short name of the widget
- short_name (str, optional): New short name for the widget. Must start with a lowercase letter and contain only lowercase letters, digits, and underscores
- display_name (str, optional): New display name for the widget
- description (str, optional): New description for the widget
- esm_code (str, optional): New ESM JavaScript code for the widget
- css_code (str, optional): New CSS code for the widget
Returns:dict: Success status
Raises:CoopServerResponseError: If there’s an error communicating with the server
web(survey: dict, platform: Literal['google_forms', 'lime_survey', 'survey_monkey'] = 'lime_survey', email=None)