Note: Credits are not required to run surveys with your own keys from service providers or to post and share content at Coop. When using your own keys, cost estimates are based on the prices listed in the model pricing page, but your actual charges may vary depending on service providers’ terms.
Free credits
Your Coop account comes with free credits that you can use to run surveys with your Expected Parrot key. Are you using EDSL for a research project? Send an email to info@expectedparrot.com to request additional free credits!
Purchasing credits
To purchase credits, navigate to the Credits page of your account and enter the number of credits that you would like to purchase (1 USD buys 100 credits; the minimum purchase amount is 1 USD):
Using credits
When you run a survey with your Expected Parrot API key, the number of credits consumed (and deducted from your balance) is displayed at the Jobs page of your account. This number is equal to the sum of the cost in credits of each response in the results. The cost in credits of a response is calculated as follows:
- The number of input tokens is multiplied by the input token rate set by the language model service provider.
- The number of output tokens is multiplied by the output token rate set by the language model service provider.
- The total cost in USD is converted to credits (1 USD = 100 credits).
- The total cost in credits is rounded up to the nearest 1/100th of a credit.
Example calculation
- Input tokens: 16
- Output tokens: 45
- Input token rate: USD 2.50 per 1M tokens
- Output token rate: USD 10.00 per 1M tokens
- Total cost: (16 * USD 2.50/1,000,000) + (45 * USD 10.00/1,000,000) = USD 0.00049
- Total credits: 0.05 credits
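The steps above can be sketched in Python; the ceiling rounding to the nearest 1/100th of a credit is what turns USD 0.00049 into 0.05 credits:

```python
import math

def response_cost_in_credits(input_tokens, output_tokens,
                             input_rate_per_1m, output_rate_per_1m):
    """Cost of a single response in credits, following the steps above."""
    usd = (input_tokens * input_rate_per_1m / 1_000_000
           + output_tokens * output_rate_per_1m / 1_000_000)
    credits = usd * 100                    # 1 USD = 100 credits
    return math.ceil(credits * 100) / 100  # round up to nearest 1/100th

# Example from the text: 16 input tokens at USD 2.50 / 1M,
# 45 output tokens at USD 10.00 / 1M
print(response_cost_in_credits(16, 45, 2.50, 10.00))  # 0.05
```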
Response details & token rates
Details about a model’s response are stored in the raw_model_response fields of the results dataset. For each question that was run, the following columns will appear in results:
- raw_model_response.<question_name>_cost: The cost in USD for the API call to a language model service provider. (In the example above, this is USD 0.00049.)
- raw_model_response.<question_name>_one_usd_buys: The number of tokens that can be purchased with 1 USD (for reference).
- raw_model_response.<question_name>_raw_model_response: A dictionary containing the raw response for the question, which includes the input text and tokens, output text and tokens, and other information about the API call. This dictionary is specific to the language model service provider and may contain additional information about the response.
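To illustrate how these fields relate, the cost column can be recomputed from the token counts in a raw response's usage data. The token rates below are assumptions for illustration (USD 2.50 / 1M input tokens, USD 10.00 / 1M output tokens), not values read from the results:

```python
# Recompute the USD cost of the gpt-4o response in the table below
# from its usage data, using assumed token rates.
usage = {"prompt_tokens": 15, "completion_tokens": 40}

input_rate = 2.50 / 1_000_000    # assumed USD per input token
output_rate = 10.00 / 1_000_000  # assumed USD per output token

usd = (usage["prompt_tokens"] * input_rate
       + usage["completion_tokens"] * output_rate)
print(round(usd, 6))  # 0.000438
```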
model.model | raw_model_response.rainbow_cost | raw_model_response.rainbow_raw_model_response | raw_model_response.rainbow_one_usd_buys |
---|---|---|---|
gemini-1.5-flash | 0.000018 | {'candidates': [{'content': {'parts': [{'text': "The colors of a rainbow are typically listed as red, orange, yellow, green, blue, indigo, and violet. However, it's important to note that these colors blend seamlessly into each other, and the number of distinct colors perceived can vary from person to person.\n"}], 'role': 'model'}, 'finish_reason': 1, 'safety_ratings': [{'category': 8, 'probability': 1, 'blocked': False}, {'category': 10, 'probability': 1, 'blocked': False}, {'category': 7, 'probability': 1, 'blocked': False}, {'category': 9, 'probability': 1, 'blocked': False}], 'avg_logprobs': -0.099734950483891, 'token_count': 0, 'grounding_attributions': []}], 'usage_metadata': {'prompt_token_count': 8, 'candidates_token_count': 57, 'total_token_count': 65, 'cached_content_token_count': 0}, 'model_version': 'gemini-1.5-flash'} | 56497.186153 |
gpt-4o | 0.000438 | {'id': 'chatcmpl-B2OaTCPGFdNY7dju27SxmrLfSWXSE', 'choices': [{'finish_reason': 'stop', 'index': 0, 'logprobs': None, 'message': {'content': 'The colors of a rainbow, in order, are red, orange, yellow, green, blue, indigo, and violet. These colors can be remembered using the acronym ROYGBIV.', 'refusal': None, 'role': 'assistant', 'audio': None, 'function_call': None, 'tool_calls': None}}], 'created': 1739910869, 'model': 'gpt-4o-2024-08-06', 'object': 'chat.completion', 'service_tier': 'default', 'system_fingerprint': 'fp_523b9b6e5f', 'usage': {'completion_tokens': 40, 'prompt_tokens': 15, 'total_tokens': 55, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}} | 2285.714286 |


Token rates
Model token rates used to calculate costs can be viewed at the model pricing page. This page is regularly updated to reflect the latest prices published by service providers. If you notice a discrepancy with a listed price, please submit a report using the form at that page.
Estimating job costs
Before running a survey, you can estimate the tokens and costs (in USD and credits) in 2 different ways:
- Call the estimate_job_cost() method on the Job object (a survey combined with one or more models).
- Call the remote_inference_cost() method on a Coop client object and pass it the job.
Example
Here we create a survey and agent, select a model and combine them to create a job. Then we call the above-mentioned methods for estimating costs and show the underlying calculations. The steps below can also be accessed as a notebook at the Coop web app (notebook view).
Formula details
Total job costs are estimated by performing the following calculation for each set of question prompts in the survey and summing the results:
- Estimate the input tokens.
- Compute the number of characters in the user_prompt and system_prompt, with any Agent and Scenario data piped in. (Note: Previous answers cannot be piped in because they are not available until the survey is run; they are left as Jinja-bracketed variables in the prompts for purposes of estimating tokens and costs.)
- Apply a piping multiplier of 2 to the number of characters in the user prompt if it has an answer piped in from a previous question (i.e., if the question has Jinja braces that cannot be filled in before the survey is run). Otherwise, apply a multiplier of 1.
- Convert the number of characters into the number of input tokens using a conversion factor of 4 characters per token, rounding down to the nearest whole number. (This approximation was established by OpenAI.)
- Estimate the output tokens.
- Apply a multiplier of 0.75 to the number of input tokens, rounding up to the nearest whole number.
- Apply the token rates for the model and inference service.
- Find the model and inference service for the question in the model pricing page: Total cost in USD = (input tokens * input token rate) + (output tokens * output token rate)
- If a model and inference service are not found, use the following fallback token rates (you will see a warning message that actual model rates were not found):
- USD 1.00 per 1M input tokens
- USD 1.00 per 1M output tokens
- Convert the total cost in USD to credits.
- Total cost in credits = total cost in USD * 100, rounded up to the nearest 1/100th credit.
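The estimation steps above can be sketched as a standalone function. This is an illustrative reimplementation of the formula, not the library's actual code; the piping check is simplified to looking for Jinja braces in the user prompt, and the default rates are the fallback USD 1.00 / 1M tokens:

```python
import math

def estimate_question_cost_usd(user_prompt, system_prompt,
                               input_rate_per_1m=1.00,
                               output_rate_per_1m=1.00):
    """Estimate the USD cost of one set of question prompts."""
    # Piping multiplier: 2 if the user prompt still contains unfilled
    # Jinja braces (an answer piped from a previous question), else 1.
    user_chars = len(user_prompt) * (2 if "{{" in user_prompt else 1)
    chars = user_chars + len(system_prompt)
    input_tokens = chars // 4                    # ~4 characters per token
    output_tokens = math.ceil(0.75 * input_tokens)
    return (input_tokens * input_rate_per_1m / 1_000_000
            + output_tokens * output_rate_per_1m / 1_000_000)

def usd_to_credits(usd):
    """Convert USD to credits, rounding up to the nearest 1/100th."""
    return math.ceil(usd * 100 * 100) / 100

usd = estimate_question_cost_usd(
    "What is the name of your favorite flower?",
    "You are answering questions as if you were a human.")
print(usd, usd_to_credits(usd))
```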
Calculations
Here we show the calculations for the examples above. We can call the show_prompts() method on the job object to see the prompts for each question in the survey:
user_prompt | system_prompt |
---|---|
What is the name of your favorite flower? | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a botanist on Cape Cod.'} |
What color is ? | You are answering questions as if you were a human. Do not break character. Your traits: {'persona': 'You are a botanist on Cape Cod.'} |
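Applying the formula to the first row of this table (a sketch; the system prompt string is reproduced from the table above, and there are no unfilled Jinja braces, so the piping multiplier is 1):

```python
import math

user_prompt = "What is the name of your favorite flower?"
system_prompt = ("You are answering questions as if you were a human. "
                 "Do not break character. Your traits: "
                 "{'persona': 'You are a botanist on Cape Cod.'}")

# Characters -> input tokens (~4 characters per token, rounded down),
# then output tokens = 0.75 * input tokens, rounded up.
chars = len(user_prompt) + len(system_prompt)
input_tokens = chars // 4
output_tokens = math.ceil(0.75 * input_tokens)
print(input_tokens, output_tokens)
```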