Language Models

Language models are used to generate responses to survey questions. EDSL works with many models from a variety of popular inference service providers, including Anthropic, Azure, Bedrock, Deep Infra, DeepSeek, Google, Groq, Mistral, Ollama, OpenAI, Perplexity and Together. Current model pricing and performance information can be found at the Coop model pricing page. The same information can also be retrieved in your workspace by running the Model.check_working_models() method (see example code below).

We also recommend checking providers’ websites for the most up-to-date information on models and terms of use; links to providers’ websites can be found at the Coop model pricing page. If you need help checking whether a model is working, or want to report a missing model or price, please send a message to info@expectedparrot.com or post a message on Discord.

This page provides examples of methods for specifying models for surveys using the Model and ModelList classes.

API keys

In order to use a model, you need to have an API key for the relevant service provider. EDSL allows you to choose whether to provide your own keys from service providers or use an Expected Parrot API key to access all available models at once. See the Managing Keys page for instructions on storing and prioritizing keys.
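For example, keys can be provided as environment variables before EDSL is used. A minimal sketch, assuming the variable names EXPECTED_PARROT_API_KEY and OPENAI_API_KEY (see the Managing Keys page for the exact names and recommended storage options, such as a .env file):

import os

# Assumed variable names -- confirm them on the Managing Keys page
os.environ["EXPECTED_PARROT_API_KEY"] = "your-expected-parrot-key"  # access all available models
os.environ["OPENAI_API_KEY"] = "your-openai-key"                    # or provide your own provider key

from edsl import Model

m = Model("gpt-4o")  # the relevant key is used when the model is called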

Available services

The following code will return a table of inference service providers:

from edsl import Model

Model.services()

Output:

Service Name
anthropic
azure
bedrock
deep_infra
deepseek
google
groq
mistral
ollama
openai
perplexity
together

Available models

The following code will return a table of models for all service providers that have been used with EDSL (output omitted here for brevity).

This list should be used together with the model pricing page to check current model performance with test survey questions. We also recommend running your own test questions with any models that you want to use before running a large survey.

from edsl import Model

Model.available()

To see a list of all models for a specific service, pass the service name as an argument:

Model.available(service = "google")

Output (this list will vary based on the models that have been used when the code is run):

Model Name            Service Name
gemini-pro            google
gemini-1.0-pro        google
gemini-1.5-flash      google
gemini-1.5-flash-8b   google
gemini-1.5-pro        google
gemini-2.0-flash      google
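Any model in the table can be instantiated by passing its name to a Model object, using the same pattern shown in the sections below:

from edsl import Model

m = Model("gemini-1.5-flash")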

Check working models

You can check current performance and pricing for models by running the following code:

from edsl import Model

Model.check_working_models()

This will return the same information that is available at the model pricing page: Service, Model, Works with text, Works with images, Price per 1M input tokens (USD), and Price per 1M output tokens (USD). The method can also be used to check the models for a particular service provider (output omitted here for brevity):

from edsl import Model

Model.check_working_models(service = "google")

Specifying a model

To specify a model to use with a survey, create a Model object and pass it the name of the model. You can optionally set other model parameters at the same time (temperature, etc.).

For example, the following code creates a Model object for gpt-4o with default model parameters that we can inspect:

from edsl import Model

m = Model("gpt-4o")
m

Output:

key                            value
model                          gpt-4o
parameters:temperature         0.5
parameters:max_tokens          1000
parameters:top_p               1
parameters:frequency_penalty   0
parameters:presence_penalty    0
parameters:logprobs            False
parameters:top_logprobs        3
inference_service              openai

We can see that the object consists of a model name and a dictionary of the default parameters of the model, together with the name of the inference service (some models are provided by multiple services).
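These pieces can also be accessed programmatically. A minimal sketch, assuming the model name and parameters dictionary are exposed as attributes of the object (attribute names may differ across EDSL versions):

from edsl import Model

m = Model("gpt-4o")

# Assumed attribute names -- inspect the object (e.g., with dir(m)) to confirm
print(m.model)        # 'gpt-4o'
print(m.parameters)   # {'temperature': 0.5, 'max_tokens': 1000, ...}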

Here we also specify the temperature when creating the Model object:

from edsl import Model

m = Model("gpt-4o", temperature = 1.0)
m

Output:

key                            value
model                          gpt-4o
parameters:temperature         1.0
parameters:max_tokens          1000
parameters:top_p               1
parameters:frequency_penalty   0
parameters:presence_penalty    0
parameters:logprobs            False
parameters:top_logprobs        3
inference_service              openai
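Other parameters from the table can be set in the same call. For example (the particular values are illustrative):

from edsl import Model

m = Model("gpt-4o", temperature = 1.0, max_tokens = 2000, top_p = 0.9)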

Creating a list of models

To create a list of models at once, pass a list of Model objects to a ModelList object.

For example, the following code creates a Model for each of gpt-4o and gemini-1.5-flash:

from edsl import Model, ModelList

ml = ModelList([Model("gpt-4o"), Model("gemini-1.5-flash")])

This code is equivalent to the following:

from edsl import Model, ModelList

ml = ModelList(Model(model) for model in ["gpt-4o", "gemini-1.5-flash"])

We can also use the from_names() method to pass a list of model names directly:

from edsl import Model, ModelList

model_names = ['gpt-4o', 'gemini-1.5-flash']

ml = ModelList.from_names(model_names)

ml

Output:

key                  gpt-4o         gemini-1.5-flash
topK                 nan            1.000000
presence_penalty     0.000000       nan
top_logprobs         3.000000       nan
topP                 nan            1.000000
temperature          0.500000       0.500000
stopSequences        nan            []
maxOutputTokens      nan            2048.000000
logprobs             False          nan
max_tokens           1000.000000    nan
frequency_penalty    0.000000       nan
model                gpt-4o         gemini-1.5-flash
top_p                1.000000       nan
inference_service    openai         google
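Models in a ModelList can also each carry their own parameters, combining the patterns shown above (the values are illustrative):

from edsl import Model, ModelList

ml = ModelList([
   Model("gpt-4o", temperature = 0.2),
   Model("gemini-1.5-flash", temperature = 1.0)
])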

Running a survey with models

Similar to how we specify Agents and Scenarios to use with a survey, we specify the models to use by adding them to the survey with the by() method when the survey is run. We can pass a single Model object, a list of Model objects, or a ModelList object to the by() method.

For example, the following code specifies that a survey will be run with each of gpt-4o and gemini-1.5-flash:

from edsl import Model, QuestionFreeText, Survey

m = [Model("gpt-4o"), Model("gemini-1.5-flash")]

q = QuestionFreeText(
   question_name = "example",
   question_text = "What is the capital of France?"
)

survey = Survey(questions = [q])

results = survey.by(m).run()
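The results can then be compared across models, for example by selecting the model name alongside each answer (the answer column is named after the question_name, here "example"; see the Results section for details on selecting columns):

results.select("model.model", "answer.example")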

This code uses ModelList instead of a list of Model objects:

from edsl import Model, ModelList, QuestionFreeText, Survey

ml = ModelList(Model(model) for model in ["gpt-4o", "gemini-1.5-flash"])

q = QuestionFreeText(
   question_name = "example",
   question_text = "What is the capital of France?"
)

survey = Survey(questions = [q])

results = survey.by(ml).run()

This will generate a result for each question in the survey with each model. If agents and/or scenarios are also specified, responses will be generated for each combination of agents, scenarios and models. Each component type is added with its own call to the by() method; the order of the calls does not matter. The following commands are equivalent:

# add code for creating survey, scenarios, agents, models here ...

results = survey.by(scenarios).by(agents).by(models).run()

# this is equivalent:
results = survey.by(models).by(agents).by(scenarios).run()
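For completeness, here is a minimal sketch of what that setup might look like, using the EDSL Agent, Scenario and Survey classes (the traits, scenario values and question text are purely illustrative):

from edsl import Agent, Model, QuestionFreeText, Scenario, Survey

# Illustrative agents, scenarios and models -- replace with your own
agents = [Agent(traits = {"persona": "student"}), Agent(traits = {"persona": "retiree"})]

scenarios = [Scenario({"city": city}) for city in ["Paris", "Rome"]]

models = [Model("gpt-4o"), Model("gemini-1.5-flash")]

q = QuestionFreeText(
   question_name = "visit",
   question_text = "What would you most like to see in {{ scenario.city }}?"  # older EDSL versions may use {{ city }}
)

survey = Survey(questions = [q])

results = survey.by(scenarios).by(agents).by(models).run()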

Default model

If no model is specified, a survey is automatically run with the default model. Run Model() to check the current default model.

For example, the following code runs the above survey with the default model (and no agents or scenarios) without needing to import the Model class:

results = survey.run() # using the survey from above

# this is equivalent
results = survey.by(Model()).run()

We can verify the model that was used:

results.select("model.model") # selecting only the model name

Output:

model

gpt-4o

Inspecting model parameters

We can also inspect the parameters of the models that were used by accessing the models attribute of the Results object.

For example, we can verify the default model when running a survey without specifying a model:

results.models # using the results from above

This will return the same information as running results.select("model.model") in the example above.

To learn more about all the components of a Results object, please see the Results section.

Troubleshooting

Newly released models from service providers are automatically made available for use with your surveys whenever possible (not all service providers facilitate this).

If you do not see a model that you want to work with, or are unable to instantiate it using the standard method, please send a request to info@expectedparrot.com and we will add it.

ModelList class

ModelList is a list-like collection of Model objects (see the examples above). It inherits from the EDSL Base class, whose documentation follows.

Base class for all classes in the EDSL package.

This abstract base class combines several mixins to provide a rich set of functionality to all EDSL objects. It defines the core interface that all EDSL objects must implement, including serialization, deserialization, and code generation.

All EDSL classes should inherit from this class to ensure consistent behavior and capabilities across the framework.

LanguageModel class

Abstract base class for all language model implementations in EDSL.

This class defines the common interface and functionality for interacting with various language model providers (OpenAI, Anthropic, etc.). It handles caching, response parsing, token usage tracking, and cost calculation, providing a consistent interface regardless of the underlying model.

Subclasses must implement the async_execute_model_call method to handle the actual API call to the model provider. Other methods may also be overridden to customize behavior for specific models.

The class uses several mixins to provide serialization, pretty printing, and hashing functionality, and a metaclass to automatically register model implementations.

Attributes:

_model_: The default model identifier (set by subclasses)
key_sequence: Path to extract generated text from model responses
DEFAULT_RPM: Default requests per minute rate limit
DEFAULT_TPM: Default tokens per minute rate limit
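As a rough illustration of how these pieces fit together, a hypothetical subclass might look like the sketch below. The import path, method signature, response shape and key_sequence value are assumptions for illustration only; consult the EDSL source for the actual interface.

from typing import Any
from edsl.language_models import LanguageModel  # import path is an assumption

class MyCustomModel(LanguageModel):
    # Required class attributes (see REQUIRED_CLASS_ATTRIBUTES below)
    _model_ = "my-custom-model"
    _parameters_ = {"temperature": 0.5}
    _inference_service_ = "my_service"

    # Assumed: path used to extract the generated text from the raw response
    key_sequence = ["choices", 0, "message", "content"]

    async def async_execute_model_call(self, user_prompt: str, system_prompt: str = "") -> dict[str, Any]:
        # Call the provider's API here and return its raw response as a dict;
        # the shape below is illustrative only.
        return {"choices": [{"message": {"content": "example response"}}]}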

Other methods

class edsl.language_models.registry.RegisterLanguageModelsMeta(name, bases, namespace, /, **kwargs)

Bases: ABCMeta

Metaclass to register output elements in a registry, i.e., those that have a parent.

REQUIRED_CLASS_ATTRIBUTES = ['_model_', '_parameters_', '_inference_service_']

__init__(name, bases, dct)

Register the class in the registry if it has a _model_ attribute.

static check_required_class_variables(candidate_class: LanguageModel, required_attributes: List[str] = None)

Check if a class has the required attributes.

>>> class M:
...     _model_ = "m"
...     _parameters_ = {}
>>> RegisterLanguageModelsMeta.check_required_class_variables(M, ["_model_", "_parameters_"])

>>> class M2:
...     _model_ = "m"

classmethod get_registered_classes()

Return the registry.

classmethod model_names_to_classes()

Return a dictionary of model names to classes.

static verify_method(candidate_class: LanguageModel, method_name: str, expected_return_type: Any, required_parameters: List[tuple[str, Any]] = None, must_be_async: bool = False)

Verify that a method is defined in a class, has the correct return type, and has the correct parameters.