Data cleaning

This notebook provides sample EDSL code for using a language model to conduct a data cleaning task. In a series of steps, we use EDSL to prompt a language model to generate appropriate sense checks for a dataset, run the sense checks as a survey about the data, and return a new dataset consisting of the values that fail the checks.

EDSL is an open-source library for simulating surveys, experiments and other research with AI agents and large language models. Before running the code below, please see our documentation page for instructions on getting started, as well as tips and tutorials.

Example data

EDSL allows us to generate data or import it from other sources (CSV, PDF, PNG, MP4, DOC, tables, lists, dicts, etc.). Here we construct a dataset for our exercise: a random list of ages between 22 and 85, with some bad values mixed in. Our goal is to identify the bad values:

[1]:
ages = [84, 62, 79, 57, 59, 55, 68, 66, 47, 54, 76, 33, 74, 56, 47, 24, 23, 38, 38, 54, 51, 84, 71,
        46, 38, 26, 50, 56, 62, 39, 31, 52, 69, 84, 69, 48, 48, 23, 65, 54, 78, 51, 69, 77, 75, 76,
        26, 44, 61, 32, 70, 24, 74, 22, 32, 24, 80, 65, 36, 42, 84, 66, 40, 85, 28, 22, 67, 25, 70,
        77, 53, 69, 64, 27, 61, 68, 68, 78, 0.99, 83, 58, 33, 46, 43, 50, 85, 28, 82, 50, 61, 66, 32,
        45, 70, 56, 50, 43, 30, 43, 55, 33, 72, 43, 43, -5, 32, 43, 45, 67, 84, 37, 63, 52, 53, 58,
        79, 79, 80, 62, 75, 57, 60, 39, 79, 49, 60, 60, 37, 45, 36, 1050, 73, 70, 56, 39, 58, 69, 77,
        68, 84, 78, 48, 31, 74, 27, 55, 56, 66, 35, 39, 57, 47, 29, 24, 47, 60, 43, 37, 84, 64, 28,
        22, 37, 71, 77, 76, 84, 63, 76, 58, 41, 72, 22, 63, 78, 49, 82, 69, "old", 37, 27, 29, 54, 83,
        80, 74, 48, 76, 49, 26, 38, 35, 36, 25, 23, 71, 33, 39, 40, 35, 85, 24, 57, 85, 63, 53, 62,
        47, 69, 76, 71, 48, 62, 23, 25, 84, 32, 63, 75, 31, 25, 50, 85, 36, 58, 85, 34, 62, 43, 2,
        50, 83, 44, 73, 81, 44, 43, 82, 84, 30, 24, 63, 63, 59, 46, 30, 62, 25, 52, 23, 100, 1.3, 3]

Quick question

With a small dataset, we may be able to design the entire task as a single question where we prompt a model to review all the data at once and flag bad data:

[2]:
from edsl import QuestionList, Scenario, Model

q = QuestionList(
    question_name = "bad_ages",
    question_text = """
    Review the following list of observations of human ages
    and return a list of all the unrealistic ages: {{ scenario.ages }}
    """
)

s = Scenario({"ages":ages})

m = Model("gemini-1.5-flash", service_name = "google")

results = q.by(s).by(m).run()

results.select("bad_ages", "bad_ages_comment")
Job Status 🦜
Completed (1 completed, 0 failed)
Model Costs ($0.0002 / 0.02 credits total)
Service Model Input Tokens Input Cost Output Tokens Output Cost Total Cost Total Credits
google gemini-1.5-flash 1,097 $0.0001 63 $0.0001 $0.0002 0.02
Totals 1,097 $0.0001 63 $0.0001 $0.0002 0.02

You can obtain the total credit cost by multiplying the total USD cost by 100. A lower credit cost indicates that you saved money by retrieving responses from the universal remote cache.

[2]:
  answer.bad_ages comment.bad_ages_comment
0 [0.99, -5, 1050, 1.3, 'old'] # These values are not realistic human ages because they are either negative, less than 1, or far exceed the maximum human lifespan. "old" is also not a numerical age.

This approach may be feasible for a small dataset that is easily checked. For larger datasets, we may encounter problems with input token limits, a model’s ability to accurately check a large volume of data at once, and responses that are not usefully formatted.
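
If a single-pass review is still desired, one possible mitigation is to batch the data into smaller chunks so that each prompt only sees part of the list. A minimal sketch, assuming a chunk size of 50 and reusing the question and model defined above:

[ ]:
# A minimal sketch: split the ages into chunks of 50 (an assumed size) and
# attach each chunk to its own scenario so no single prompt grows too large.
from edsl import Scenario, ScenarioList

chunk_size = 50
chunked = ScenarioList(
    Scenario({"ages": ages[i : i + chunk_size]})
    for i in range(0, len(ages), chunk_size)
)

# Each chunk is reviewed in a separate prompt; the flagged values can then
# be combined across the results.
chunked_results = q.by(chunked).by(m).run()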

Below we demonstrate some ways of approaching the task in an iterative manner instead.

Constructing a question

We start by creating a question to prompt a model to draft sense check questions for our data. EDSL comes with a variety of question types that we can choose from based on the desired form of the response (multiple choice, free text, etc.). Here we use QuestionList in order to prompt the model to format its response as a list. We use a {{ placeholder }} for content that we will add to the question when we run it (a description of the data and a sample); this allows us to re-use the question with other contexts as desired:

[3]:
from edsl import QuestionList

q = QuestionList(
    question_name = "sense_check_questions",
    question_text = """
    You are being asked to suggest sense checks for a dataset consisting of {{ scenario.data_description }}.
    Here is a small sample of the data (to demonstrate the format): {{ scenario.sample_data }}.
    Return the sense checks as a list of questions to be answered about each item in the dataset individually.
    """,
    max_list_items = 3 # optional
)

Adding context to the question

Next we create Scenario objects representing the content that we want to add to the question when we run it. Here we create a single scenario for our example data:

[4]:
import random

sample_data = random.sample(ages, 10)
[5]:
from edsl import Scenario

s = Scenario({
    "data_description": "a list of realistic human ages (in years)",
    "sample_data": sample_data
})
s
[5]:

Scenario

  key value
0 data_description a list of realistic human ages (in years)
1 sample_data:0 54
2 sample_data:1 48
3 sample_data:2 57
4 sample_data:3 43
5 sample_data:4 80
6 sample_data:5 58
7 sample_data:6 57
8 sample_data:7 57
9 sample_data:8 23
10 sample_data:9 72
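
Because the question text only refers to placeholders, the same question can be reused for an entirely different dataset simply by swapping in another scenario. For example, with a hypothetical data description and sample:

[ ]:
# Hypothetical example of reusing the same question with different data
s_prices = Scenario({
    "data_description": "a list of retail product prices in USD",
    "sample_data": [19.99, 4.50, -3, 250.00, "free"]  # hypothetical sample values
})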

Running the question

We administer the question by adding the scenario and the model with the by method and then calling the run method. This generates a formatted dataset of Results that we can access with built-in methods for analysis. Here we inspect the answer:

[6]:
results = q.by(s).by(m).run()
Job Status 🦜
Completed (1 completed, 0 failed)
Model Costs ($0.0002 / 0.02 credits total)
Service Model Input Tokens Input Cost Output Tokens Output Cost Total Cost Total Credits
google gemini-1.5-flash 174 $0.0001 93 $0.0001 $0.0002 0.02
Totals 174 $0.0001 93 $0.0001 $0.0002 0.02


[7]:
results.select("sense_check_questions")
[7]:
  answer.sense_check_questions
0 ['Is the age a non-negative integer?', 'Is the age within a plausible range for a living human (e.g., 0-125)?', 'Is the age consistent with other ages in the dataset (considering potential context, if available)?']

Conducting the task

Next we want a model to answer each sense check question about each piece of data in the dataset. We can do this by passing the sense check questions, together with the data values, as scenarios of a new question that explains the task. Using QuestionYesNo makes it easy to filter the responses afterwards:

[8]:
from edsl import QuestionYesNo

q2 = QuestionYesNo(
    question_name = "check_data",
    question_text = """
    You are being asked to sense check a dataset consisting of {{ scenario.data_description }}.
    Consider the following item in the dataset: {{ scenario.age }}
    {{ scenario.sense_check_question }}
    """
)

We need to create a new set of scenarios for this question. We use a ScenarioList to create all the combinations of values to add to the question (learn more about constructing scenarios from different data sources):

[9]:
from edsl import ScenarioList

sl = ScenarioList(
    Scenario({
        "data_description": "a list of realistic human ages (in years)",
        "age": age,
        "sense_check_question": sense_check_question
    })
    for age in ages
    for sense_check_question in results.select("sense_check_questions").to_list()[0]
)

We can inspect the scenarios that we created:

[10]:
sl.sample(3)
[10]:

ScenarioList scenarios: 3; keys: ['sense_check_question', 'age', 'data_description'];

  data_description age sense_check_question
0 a list of realistic human ages (in years) 22 Is the age a non-negative integer?
1 a list of realistic human ages (in years) 58 Is the age consistent with other ages in the dataset (considering potential context, if available)?
2 a list of realistic human ages (in years) 38 Is the age a non-negative integer?

As with a single scenario, we add all the scenarios to the question at once when we run it:

[11]:
results = q2.by(sl).by(m).run()
Job Status 🦜
Completed (759 completed, 0 failed)
Model Costs ($0.0062 / 0.42 credits total)
Service Model Input Tokens Input Cost Output Tokens Output Cost Total Cost Total Credits
google gemini-1.5-flash 74,891 $0.0057 1,518 $0.0005 $0.0062 0.42
Totals 74,891 $0.0057 1,518 $0.0005 $0.0062 0.42


We can filter, sort, select and print any components of the results that are generated:

[12]:
(
    results
    .filter("check_data == 'No'")
    .sort_by("sense_check_question")
    .select("sense_check_question", "age")
)
[12]:
  scenario.sense_check_question scenario.age
0 Is the age a non-negative integer? 0.99
1 Is the age a non-negative integer? -5
2 Is the age a non-negative integer? 1050
3 Is the age a non-negative integer? old
4 Is the age a non-negative integer? 1.3
5 Is the age consistent with other ages in the dataset (considering potential context, if available)? 0.99
6 Is the age consistent with other ages in the dataset (considering potential context, if available)? -5
7 Is the age consistent with other ages in the dataset (considering potential context, if available)? 1050
8 Is the age consistent with other ages in the dataset (considering potential context, if available)? old
9 Is the age consistent with other ages in the dataset (considering potential context, if available)? 2
10 Is the age consistent with other ages in the dataset (considering potential context, if available)? 100
11 Is the age consistent with other ages in the dataset (considering potential context, if available)? 1.3
12 Is the age within a plausible range for a living human (e.g., 0-125)? -5
13 Is the age within a plausible range for a living human (e.g., 0-125)? 1050
14 Is the age within a plausible range for a living human (e.g., 0-125)? old
15 Is the age within a plausible range for a living human (e.g., 0-125)? 1.3
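
The flagged values can also be extracted as a plain Python list and used to construct a cleaned copy of the original data. A minimal sketch (depending on which checks you treat as disqualifying, you may want to filter on specific sense check questions as well):

[ ]:
# Collect the distinct flagged values and build a cleaned copy of the data
bad_values = set(
    results
    .filter("check_data == 'No'")
    .select("age")
    .to_list()
)

cleaned_ages = [age for age in ages if age not in bad_values]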

Further exploration

This notebook can be readily edited and expanded for other data cleaning and data labeling purposes, or to add personas for AI agents that answer the questions with relevant background and expertise. Learn more about using AI agents for your EDSL surveys.
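
For example, a sketch of adding an agent persona to the data checking step (the persona text is illustrative):

[ ]:
from edsl import Agent

# Hypothetical persona; adjust the traits to suit the task
a = Agent(traits = {
    "persona": "You are a meticulous data quality analyst with experience reviewing survey data."
})

results = q2.by(sl).by(a).by(m).run()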

Please see our documentation page for examples of other methods and use cases and let us know if you have any questions!

Posting to the Coop

Coop is a platform for creating, storing and sharing LLM-based research. It is fully integrated with EDSL and accessible from your workspace or Coop account page. Learn more about creating an account and using the Coop.

Here we post this notebook:

[13]:
# from edsl import Notebook

# nb = Notebook(path = "data_cleaning.ipynb")

# nb.push(
#     description = "Example code for data cleaning",
#     alias = "data-cleaning-notebook",
#     visibility = "public"
# )

To update an object at Coop:

[ ]:
from edsl import Notebook

nb = Notebook(path = "data_cleaning.ipynb")

nb.patch("https://www.expectedparrot.com/content/RobinHorton/data-cleaning-notebook", value = nb)