Using video scenarios

Note: Before adding video scenarios to your survey, you must install the ffmpeg package.

[1]:
# brew install ffmpeg
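Before building video scenarios, you can also check from Python that ffmpeg is actually on your PATH. A minimal sketch using only the standard library (the ffmpeg_available helper is illustrative, not part of edsl):

```python
import shutil

def ffmpeg_available() -> bool:
    """Return True if the ffmpeg binary can be found on the PATH."""
    return shutil.which("ffmpeg") is not None

if not ffmpeg_available():
    print("ffmpeg not found: install it (e.g. brew install ffmpeg) before adding video scenarios.")
```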
[12]:
from edsl import FileStore, QuestionFreeText, Scenario, Model

fs = FileStore("models_page.mp4")

s = Scenario({"video":fs})

q = QuestionFreeText(
    question_name = "test",
    question_text = "Describe what's happening in this video: {{ scenario.video }}"
)

m = Model("gemini-2.5-flash-preview-04-17", service_name="google")

r = q.by(s).by(m).run()
Job Status 🦜: Completed
Results UUID: 807d45c2...6d40
Job UUID: f85857f4...f980
Last updated: 2025-04-26 13:05:24
[13]:
r.select("model", "test")
[13]:
  model.model answer.test
0 gemini-2.5-flash-preview-04-17 The video shows a screen recording of a webpage displaying a table of AI models. The table has columns for the model number, the service providing the model (like anthropic, bedrock, deep_infra, google, groq, mistral, openai, perplexity, together, xai), the model name, and its support for Text and Image inputs. For Text Support and Image Support, each model is marked with either a green "Works" or a red "Doesn't work". The video consists of the user scrolling down the table from the top to the bottom, revealing a long list of various AI models and their capabilities regarding text and image support. Most models are shown to support Text input, while Image input support varies, with many models indicating "Doesn't work".

Here we post a locally stored video to Coop using the FileStore module, which allows us to retrieve it later and share it with others:

[3]:
fs = FileStore("models_page.mp4")

fs.push(
    description = "EP landing page video showing the Models pricing and performance page",
    alias = "models-page-video",
    visibility = "public"
)
[3]:
{'description': 'EP landing page video showing the Models pricing and performance page',
 'object_type': 'scenario',
 'url': 'https://www.expectedparrot.com/content/bfe0b754-bfad-443a-b47d-291ca8a875e6',
 'alias_url': 'https://www.expectedparrot.com/content/RobinHorton/models-page-video',
 'uuid': 'bfe0b754-bfad-443a-b47d-291ca8a875e6',
 'version': '0.1.56.dev1',
 'visibility': 'public'}

Here we retrieve the video from Coop and create a Scenario for it:

[4]:
fs = FileStore.pull("https://www.expectedparrot.com/content/RobinHorton/models-page-video")
[5]:
s = Scenario({"video":fs})

If we want additional fields for reference in the results, we simply add key/value pairs to the scenario as desired: only the video key needs to appear in the question text, and we can use any Pythonic key to reference the other fields. Learn more about using scenarios for metadata:

[6]:
s = Scenario({
    "video":fs,
    "ref":"EP Models page",
    "link":"https://www.expectedparrot.com/models_page.4041830e.mp4"
})

Next we create a Question that uses the scenario:

[7]:
q = QuestionFreeText(
    question_name = "test",
    question_text = "Describe what's happening in this video: {{ scenario.video }}"
)
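The `{{ scenario.video }}` placeholder is Jinja2-style templating: at run time each scenario's values are substituted into the question text (for file fields the model receives the media itself, not a string). A minimal stand-in sketch of the substitution using only the standard library (the render helper is illustrative, not edsl's implementation):

```python
import re

def render(template: str, scenario: dict) -> str:
    """Replace {{ scenario.key }} placeholders with values from a scenario dict."""
    def sub(match: re.Match) -> str:
        return str(scenario[match.group(1)])
    return re.sub(r"\{\{\s*scenario\.(\w+)\s*\}\}", sub, template)

question_text = "Describe what's happening in this video: {{ scenario.video }}"
scenario = {"video": "<models_page.mp4>", "ref": "EP Models page"}
print(render(question_text, scenario))
# Keys not referenced in the template, like "ref", simply ride along as metadata.
```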

Here we select an appropriate Model:

[8]:
m = Model("gemini-2.5-flash-preview-04-17", service_name="google")

We run the question by adding the scenario and model:

[9]:
r = q.by(s).by(m).run()
Job Status 🦜: Completed
Results UUID: 3dce9d3f...f835
Job UUID: 7fd69db9...6539
Last updated: 2025-04-26 12:02:27

Here we can inspect the response with select, as shown above.

Here we post this notebook to Coop:

[11]:
from edsl import Notebook

nb = Notebook("video_scenario_example.ipynb")
nb.push(
    description = "How to create video scenarios",
    alias = "video-scenarios-notebook",
    visibility = "public"
)
[11]:
{'description': 'How to create video scenarios',
 'object_type': 'notebook',
 'url': 'https://www.expectedparrot.com/content/1e3e4180-6811-41ba-9c70-41567bed879e',
 'alias_url': 'https://www.expectedparrot.com/content/RobinHorton/video-scenarios-notebook',
 'uuid': '1e3e4180-6811-41ba-9c70-41567bed879e',
 'version': '0.1.56.dev1',
 'visibility': 'public'}