Why caching?
Language model outputs are expensive to create, in terms of both time and money. As such, it is useful to store them in a cache so that they can be shared or reused later. Use cases:

- Avoid rerunning questions when a job fails only partially, by resending only the unanswered questions to a language model.
- Share your cache with others so they can rerun your questions at no cost.
- Use a common remote cache to avoid rerunning questions that others have already run.
- Build up training data to train or fine-tune a smaller model.
- Build up a public repository of questions and responses so others can learn from them.
How it works
A Cache is a dictionary-like object that stores the inputs and outputs of a language model. Specifically, a cache has an attribute, data, that is dictionary-like. The keys of a cache are hashes of the unique inputs to a language model, i.e., the unique combinations of prompts and any parameters used to generate the outputs. The values are CacheEntry objects, which contain the inputs and outputs.

A cache can be stored as either a Python in-memory dictionary or a dictionary connected to a SQLite3 database. The default constructor is an in-memory dictionary. If a SQLite3 database is used, the cache will persist automatically between sessions. You can also specify that a cache be used for a specific session, in which case it will not persist between sessions. After a session, the cache will contain new entries from any jobs that were run during the session. These can be written to a local SQLite3 database, a JSONL file, or a remote server.
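For illustration, a minimal sketch of the dictionary-like interface, using the example() and items() methods documented below; the CacheEntry attribute name is an assumption based on the fetch()/store() parameters:

```python
from edsl import Cache

c = Cache()                   # default constructor: in-memory dictionary
example = Cache.example()     # pre-populated example cache (documented below)
for key, entry in example.items():
    print(key)                # hash of the unique model inputs
    print(entry.user_prompt)  # attribute name is an assumption based on fetch()/store()
```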
Multiple caches

Multiple caches are affected when a survey is run:

- New cache for results: A new cache is automatically created for the results of a survey. This cache is specific to the results object and is attached to it.
- Default cache updated: A default cache is automatically updated with any new entries from the results. If a new or existing cache object was passed to the run() method, that is the cache that is updated. Otherwise, either your local cache or your remote cache is updated, depending on whether the job was run locally or remotely.
Generating a new cache for results
A new cache is automatically generated whenever results are generated for a question or survey. This cache is specific to the Results object and is attached to it. It can be accessed using the cache attribute of the results object. For example:
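A sketch of generating results for a single question; the question name, model, and temperature mirror the example output below, and exact constructor arguments may vary by EDSL version:

```python
from edsl import QuestionNumerical, Model

q = QuestionNumerical(
    question_name="random",
    question_text="Please give me a random number between 1 and 100.",
)
m = Model("gemini-1.5-flash", temperature=0.5)
results = q.by(m).run()
results.cache   # the new cache attached to these results
```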
model | parameters | system_prompt | user_prompt | output | iteration | timestamp | cache_key |
---|---|---|---|---|---|---|---|
gemini-1.5-flash | {'temperature': 0.5, 'topP': 1, 'topK': 1, 'maxOutputTokens': 2048, 'stopSequences': []} | nan | Please give me a random number between 1 and 100. This question requires a numerical response in the form of an integer or decimal (e.g., -12, 0, 1, 2, 3.45, ...). Respond with just your number on a single line. If your response is equivalent to zero, report '0'. After the answer, put a comment explaining your choice on the next line. | {"candidates": [{"content": {"parts": [{"text": "87\n# This is a randomly generated number between 1 and 100.\n"}], "role": "model"}, "finish_reason": 1, "safety_ratings": [{"category": 8, "probability": 1, "blocked": false}, {"category": 10, "probability": 1, "blocked": false}, {"category": 7, "probability": 1, "blocked": false}, {"category": 9, "probability": 1, "blocked": false}], "avg_logprobs": -0.03539780080318451, "token_count": 0, "grounding_attributions": []}], "usage_metadata": {"prompt_token_count": 97, "candidates_token_count": 20, "total_token_count": 117, "cached_content_token_count": 0}} | 0 | 1737491116 | 7f057154c60a1b9ae343b0634fe7a370 |
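Continuing the sketch above, the columns of the results include fields for the cache keys and whether the cache was used:

```python
results.columns
```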
agent.agent_index |
agent.agent_instruction |
agent.agent_name |
answer.random |
cache_keys.random_cache_key |
cache_used.random_cache_used |
comment.random_comment |
generated_tokens.random_generated_tokens |
iteration.iteration |
model.maxOutputTokens |
model.model |
model.model_index |
model.stopSequences |
model.temperature |
model.topK |
model.topP |
prompt.random_system_prompt |
prompt.random_user_prompt |
question_options.random_question_options |
question_text.random_question_text |
question_type.random_question_type |
raw_model_response.random_cost |
raw_model_response.random_one_usd_buys |
raw_model_response.random_raw_model_response |
scenario.scenario_index |
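We can inspect these fields directly (a sketch; on this first run the response was newly generated, so the cache was not used):

```python
results.select("cache_keys.random_cache_key", "cache_used.random_cache_used")
```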
cache_keys.random_cache_key | cache_used.random_cache_used |
---|---|
7f057154c60a1b9ae343b0634fe7a370 | False |
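If the question is run again with the same cache available, the stored response is reused. A sketch, assuming the entry from the first run is in the cache that is used:

```python
results2 = q.by(m).run(cache=results.cache)
results2.select("cache_keys.random_cache_key", "cache_used.random_cache_used")
```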
cache_keys.random_cache_key | cache_used.random_cache_used |
---|---|
7f057154c60a1b9ae343b0634fe7a370 | True |
Working with a cache
A cache can be passed to the run() method of a survey in order to make its entries available when generating the survey results. This can be useful if you want to add new entries to a specific cache other than your default cache (local or remote). For example:
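A sketch that adds a second, hypothetical question and reuses the question, model, and cache from the sketches above, so that the first question's response is fetched from the cache while the new one is generated:

```python
from edsl import QuestionFreeText, Survey

q2 = QuestionFreeText(
    question_name="explain",
    question_text="Explain how you arrived at that number.",
)
survey = Survey([q, q2])
r3 = survey.by(m).run(cache=results.cache)
r3.select(
    "cache_keys.random_cache_key",
    "cache_keys.explain_cache_key",
    "cache_used.random_cache_used",
    "cache_used.explain_cache_used",
)
```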
cache_keys.random_cache_key | cache_keys.explain_cache_key | cache_used.random_cache_used | cache_used.explain_cache_used |
---|---|---|---|
7f057154c60a1b9ae343b0634fe7a370 | 6442cf8e6b9812a89bd50bf059f77885 | True | False |
Note: This is independent of the new cache generated for the results, as described above, which is accessed by calling r3.cache.
Instantiating a new cache
In-memory usage

This code instantiates a new cache object using a dictionary as the data attribute (the default):
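A minimal sketch of both forms; the empty dictionary is illustrative:

```python
from edsl import Cache

c = Cache()          # default: in-memory dictionary
c = Cache(data={})   # equivalent: pass a dictionary explicitly
```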
Local persistence for an in-memory cache
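An in-memory cache disappears when the session ends unless it is written to disk. A sketch using the write and load methods documented below; the filenames are illustrative:

```python
from edsl import Cache

c = Cache()
# ... run jobs with this cache ...
c.write_jsonl("my_cache.jsonl")    # persist as a JSONL file
c.write_sqlite_db("my_cache.db")   # or as a SQLite3 database

# in a later session, reload it:
old_cache = Cache.from_jsonl("my_cache.jsonl")
```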
SQLite3Dict for transactions
Instead of using a dictionary as the data attribute, you can use a special dictionary-like object based on SQLite3. This will persist the cache between sessions. This is the "normal" way that a cache is used for runs where no specific cache is passed.
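A sketch; the import path for SQLiteDict is an assumption, and the documented filename parameter is shown as an equivalent alternative:

```python
from edsl import Cache
from edsl.caching import SQLiteDict  # import path is an assumption

c = Cache(data=SQLiteDict("my_cache.db"))  # persists between sessions

# equivalently, via the documented filename parameter:
c = Cache(filename="my_cache.db")
```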
Default SQLite Cache: .edsl_cache/data.db

By default, the cache will be stored in a SQLite3 database at the path .edsl_cache/data.db. You can interact with this cache directly, e.g.:
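For example, a sketch that loads the default cache file as a Cache object using the documented from_sqlite_db constructor:

```python
from edsl import Cache

c = Cache.from_sqlite_db(".edsl_cache/data.db")
print(len(c.keys()))   # number of cached responses
```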
Setting a session cache

The set_session_cache function is used to set the cache for a session:
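A sketch; the top-level import of set_session_cache is an assumption:

```python
from edsl import Cache, set_session_cache  # set_session_cache import path is an assumption

set_session_cache(Cache())   # use a fresh in-memory cache for this session
```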
Avoiding cache persistence

We can avoid cache persistence by passing cache=False to the run() method:
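A sketch, continuing the survey example above:

```python
results = survey.by(m).run(cache=False)   # responses from this run are not cached
```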
For developers

Delayed cache-writing: useful for remote caching
Separate from remote cache syncing, writes to the cache itself can be delayed. By default, a new entry is written to the cache immediately after it is stored. This can be changed by setting the immediate_write parameter to False.
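A sketch, relying on the documented context-manager behavior of Cache (changes are persisted on exit):

```python
from edsl import Cache

with Cache(immediate_write=False) as c:
    results = survey.by(m).run(cache=c)
    # new entries are held separately (new_entries_to_write_later),
    # not yet written to c.data
# on exiting the context manager, deferred entries are written
```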
Cache class

class edsl.caching.Cache(*, filename: str | None = None, data: SQLiteDict | dict | None = None, immediate_write: bool = True, method=None, verbose=False)

Bases: Base
Cache for storing and retrieving language model responses.
The Cache class manages a collection of CacheEntry objects, providing methods for storing, retrieving, and persisting language model responses. It serves as the core component of EDSL’s caching infrastructure, helping to reduce redundant API calls, save costs, and ensure reproducibility.
Cache can use different storage backends:
- In-memory dictionary (default)
- SQLite database via SQLiteDict
- JSON lines file (.jsonl)
The cache operates by generating deterministic keys based on the model, parameters, prompts, and iteration number. This allows for efficient lookup of cached responses when identical requests are made.
Attributes:
- data (dict or SQLiteDict): The primary storage for cache entries
- new_entries (dict): Entries added in the current session
- fetched_data (dict): Entries retrieved in the current session
- filename (str, optional): Path for persistence if provided
- immediate_write (bool): Whether to update data immediately (True) or defer (False)
Technical Notes:
- Can be used as a context manager to automatically persist changes on exit
- Supports serialization/deserialization via to_dict/from_dict methods
- Implements set operations (addition, subtraction) for combining caches
- Integrates with the broader EDSL caching infrastructure via CacheHandler
__init__(*, filename: str | None = None, data: SQLiteDict | dict | None = None, immediate_write: bool = True, method=None, verbose=False)

Initialize a new Cache instance. Creates a new cache for storing language model responses. The cache can be initialized with existing data or connected to a persistent storage file.
Args:
- filename: Path to a persistent storage file (.jsonl or .db). If provided, the cache will be initialized from this file and changes will be written back to it. Cannot be used together with the data parameter.
- data: Initial cache data as a dictionary or SQLiteDict. Cannot be used together with the filename parameter.
- immediate_write: If True, new entries are immediately added to the main data store. If False, they are kept separate until explicitly written.
- method: Deprecated. Legacy parameter for backward compatibility.
- verbose: If True, prints diagnostic information about cache hits and misses.

Raises: CacheError: If both filename and data are provided, or if the filename has an invalid extension.
Implementation Notes:
- The cache maintains separate dictionaries for tracking:
  - data: the main persistent storage
  - new_entries: entries added in this session
  - fetched_data: entries fetched in this session
  - new_entries_to_write_later: entries to be written if immediate_write=False
- If loading from a file, the appropriate loader method is called based on extension
add_from_dict(new_data: dict[str, CacheEntry], write_now: bool | None = True) → None

Add entries to the cache from a dictionary.

Parameters: write_now – Whether to write to the cache immediately (similar to immediate_write).

add_from_jsonl(filename: str, write_now: bool | None = True) → None

Add entries to the cache from a JSONL file.

Parameters: write_now – Whether to write to the cache immediately (similar to immediate_write).

add_from_sqlite(db_path: str, write_now: bool | None = True)

Add entries to the cache from an SQLite database.

Parameters: write_now – Whether to write to the cache immediately (similar to immediate_write).
close()

Explicitly close and clean up resources. This method properly disposes of any SQLAlchemy engines and connections to prevent memory leaks.

code()

Generate Python code that recreates this object. This method must be implemented by all subclasses to provide a way to generate executable Python code that can recreate the object.

Returns: str: Python code that, when executed, creates an equivalent object
data

The primary storage for cache entries (a dict or SQLiteDict).
classmethod example(randomize: bool = False) → Cache

Create an example Cache instance for testing and demonstration. Creates a Cache object pre-populated with example CacheEntry objects. This method is useful for documentation, testing, and demonstration purposes.
Args:
- randomize: If True, creates CacheEntry objects with randomized content for uniqueness. If False, uses consistent example entries.

Returns: Cache: A new Cache object containing example CacheEntry objects
Technical Notes:
- Uses CacheEntry.example() to create sample entries
- When randomize=True, generates unique keys for each call
- When randomize=False, produces consistent examples for doctests
- Creates an in-memory cache (no persistent file)
Examples:
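A doctest-style sketch using the documented behavior:

```python
>>> c = Cache.example()
>>> isinstance(c, Cache)
True
>>> len(c.keys()) >= 1
True
```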
fetch(*, model: str, parameters: dict, system_prompt: str, user_prompt: str, iteration: int, validated: bool = False, remote_fetch: bool = False)

Retrieve a cached language model response if available. This method attempts to find a cached response matching the exact input parameters. The combination of model, parameters, prompts, and iteration creates a unique key that identifies a specific language model request.
Args:
- model: Language model identifier (e.g., "gpt-3.5-turbo")
- parameters: Model configuration parameters (e.g., temperature, max_tokens)
- system_prompt: The system instructions given to the model
- user_prompt: The user query/prompt given to the model
- iteration: The iteration number for this specific request

Returns: tuple: (response, key) where:
- response: The cached model output as a string, or None if not found
- key: The cache key string generated for this request
Technical Notes:
- Uses CacheEntry.gen_key() to generate a consistent hash-based key
- Updates self.fetched_data when a hit occurs to track cache usage
- Optionally logs cache hit/miss when verbose=True
- The response is returned as a JSON string for consistency
- On local cache miss, attempts to fetch from remote universal cache
Examples:
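A doctest-style sketch of a cache miss on an empty cache; the parameter values are illustrative:

```python
>>> c = Cache()
>>> response, key = c.fetch(
...     model="gpt-3.5-turbo",
...     parameters={"temperature": 0.5},
...     system_prompt="You are a helpful agent.",
...     user_prompt="Hello",
...     iteration=0,
... )
>>> response is None   # nothing cached yet, and remote_fetch defaults to False
True
```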
fetch_input_example() → dict

Create an example input for a 'fetch' operation.

classmethod from_dict(data) → Cache

Construct a Cache from a dictionary.
classmethod from_jsonl(jsonlfile: str, db_path: str | None = None) → Cache

Construct a Cache from a JSONL file.

Parameters:
- jsonlfile – The path to the JSONL file of cache entries.
- db_path – The path to the SQLite database used to store the cache. If db_path is None, the cache will be stored in memory as a dictionary; if db_path is provided, the cache will be stored in an SQLite database.
classmethod from_local_cache() → Cache

Construct a Cache from a local cache file.

classmethod from_sqlite_db(db_path: str) → Cache

Construct a Cache from a SQLite database.

classmethod from_url(db_path=None) → Cache

Construct a Cache object from a remote source.
items()

Return an iterator of (key, value) pairs in the cache, similar to dict.items().

Returns: zip: An iterator of (key, CacheEntry) tuples
keys()

Return a list of all cache keys, the unique identifiers for each cache entry.

Returns: list: A list of string keys in the cache
Examples:
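A doctest-style sketch using the documented example() constructor:

```python
>>> isinstance(Cache.example().keys(), list)
True
```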
new_entries_cache() → Cache

Return a new Cache object with the new entries.

select(*fields)
store(model: str, parameters: str, system_prompt: str, user_prompt: str, response: dict, iteration: int, service: str, validated: bool = False) → str

Store a new language model response in the cache. Creates a new CacheEntry from the provided parameters and response, then adds it to the cache using a deterministic key derived from the input parameters.
Args:
- model: Language model identifier (e.g., "gpt-3.5-turbo")
- parameters: Model configuration parameters (e.g., temperature, max_tokens)
- system_prompt: The system instructions given to the model
- user_prompt: The user query/prompt given to the model
- response: The model's response as a dictionary
- iteration: The iteration number for this specific request
- service: The service provider (e.g., "openai", "anthropic")
- validated: Whether the response has been validated (default: False)

Returns: str: The cache key generated for this entry
Technical Notes:
- Creates a new CacheEntry object to encapsulate the response and metadata
- Adds the entry to self.new_entries to track entries added in this session
- Adds the entry to the main data store if immediate_write=True
- Otherwise, stores in new_entries_to_write_later for deferred writing
- The response is stored as a JSON string for consistency and compatibility
Storage Behavior: The method's behavior depends on the immediate_write setting:
- If True: immediately writes to the main data store (self.data)
- If False: stores in a separate dict for writing later (e.g., at context exit)
Examples:
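A doctest-style sketch; the values are illustrative, and parameters is passed as a dict per the Args description above:

```python
>>> c = Cache()
>>> key = c.store(
...     model="gpt-3.5-turbo",
...     parameters={"temperature": 0.5},
...     system_prompt="You are a helpful agent.",
...     user_prompt="Hello",
...     response={"answer": "Hi"},
...     iteration=0,
...     service="openai",
... )
>>> key in c.data   # written immediately, since immediate_write defaults to True
True
```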
subset(keys: list[str]) → Cache

Return a subset of the Cache with the specified keys.

table(*fields, tablefmt: str | None = None, pretty_labels: dict | None = None) → str
to_dataset()

Convert this object to a Dataset for advanced data operations.

Returns: Dataset: A Dataset object containing this object's data

to_dict(add_edsl_version=True) → dict

Serialize the cache to a dictionary for storage or transmission. Converts the Cache object into a plain dictionary format that can be easily serialized to JSON or other formats. Each CacheEntry is also converted to a dictionary using its to_dict method.
Args:
- add_edsl_version: If True, includes the EDSL version and class name in the serialized output for compatibility tracking
Returns: dict: A dictionary representation of the cache, with one entry per cache key plus optional EDSL version metadata
Technical Notes:
- Used by from_dict for deserialization
- Used by hash for cache comparison
- The version info allows for proper handling of format changes
to_html()

to_scenario_list()

tree(node_list: list[str] | None = None)
values()

Return a list of all CacheEntry objects stored in the cache.

Returns: list: A list of CacheEntry objects
Examples:
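A doctest-style sketch using the documented example() constructor:

```python
>>> isinstance(Cache.example().values(), list)
True
```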
view() → None

View the Cache in a new browser tab.

write(filename: str | None = None) → None

Write the cache to a file at the specified location.

write_jsonl(filename: str) → None

Write the cache to a JSONL file.

write_sqlite_db(db_path: str) → None

Write the cache to an SQLite database.