Using video scenarios
Note: Before adding video scenarios to your survey, you must install the ffmpeg package.
[1]:
# brew install ffmpeg
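The Homebrew command above is for macOS. On other platforms, ffmpeg is generally available from the system package manager; the commands below are common examples and are not specific to EDSL:

[ ]:
# Debian/Ubuntu
# sudo apt-get install ffmpeg

# Windows (via Chocolatey)
# choco install ffmpeg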
[12]:
from edsl import FileStore, QuestionFreeText, Scenario, Model
fs = FileStore("models_page.mp4")
s = Scenario({"video":fs})
q = QuestionFreeText(
    question_name = "test",
    question_text = "Describe what's happening in this video: {{ scenario.video }}"
)
m = Model("gemini-2.5-flash-preview-04-17", service_name="google")
r = q.by(s).by(m).run()
[13]:
r.select("model", "test")
[13]:
|   | model.model | answer.test |
|---|---|---|
| 0 | gemini-2.5-flash-preview-04-17 | The video shows a screen recording of a webpage displaying a table of AI models. The table has columns for the model number, the service providing the model (like anthropic, bedrock, deep_infra, google, groq, mistral, openai, perplexity, together, xai), the model name, and its support for Text and Image inputs. For Text Support and Image Support, each model is marked with either a green "Works" or a red "Doesn't work". The video consists of the user scrolling down the table from the top to the bottom, revealing a long list of various AI models and their capabilities regarding text and image support. Most models are shown to support Text input, while Image input support varies, with many models indicating "Doesn't work". |
The cell above shows the complete workflow; the steps are broken out below. First, we post a video stored locally to Coop using the FileStore module, which allows us to retrieve it later and share it with others:
[3]:
fs = FileStore("models_page.mp4")
fs.push(
    description = "EP landing page video showing the Models pricing and performance page",
    alias = "models-page-video",
    visibility = "public"
)
[3]:
{'description': 'EP landing page video showing the Models pricing and performance page',
'object_type': 'scenario',
'url': 'https://www.expectedparrot.com/content/bfe0b754-bfad-443a-b47d-291ca8a875e6',
'alias_url': 'https://www.expectedparrot.com/content/RobinHorton/models-page-video',
'uuid': 'bfe0b754-bfad-443a-b47d-291ca8a875e6',
'version': '0.1.56.dev1',
'visibility': 'public'}
Here we retrieve the video from Coop and create a Scenario
for it:
[4]:
fs = FileStore.pull("https://www.expectedparrot.com/content/RobinHorton/models-page-video")
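The pull above uses the alias URL; the UUID reported in the push output should also identify the object. A minimal sketch (left commented out), assuming pull() accepts a UUID as well:

[ ]:
# fs = FileStore.pull("bfe0b754-bfad-443a-b47d-291ca8a875e6")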
[5]:
s = Scenario({"video":fs})
If we want to add fields for reference in the results, we can include additional key/value pairs in the scenario, just as with any other scenario. Only the video key needs to appear in the question text, and we can use any Pythonic key to reference it (learn more about using scenarios for metadata in the documentation):
[6]:
s = Scenario({
    "video": fs,
    "ref": "EP Models page",
    "link": "https://www.expectedparrot.com/models_page.4041830e.mp4"
})
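These extra keys travel with the scenario into the results, so they can be selected alongside the answer after the question is run. A minimal sketch (commented out because the results object r is created further below), assuming EDSL's usual scenario.<key> column naming:

[ ]:
# r.select("scenario.ref", "scenario.link", "answer.test")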
Next we create a Question
that uses the scenario:
[7]:
q = QuestionFreeText(
    question_name = "test",
    question_text = "Describe what's happening in this video: {{ scenario.video }}"
)
Here we select an appropriate Model:
[8]:
m = Model("gemini-2.5-flash-preview-04-17", service_name="google")
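If you want to compare other models or services, EDSL can list the models available to you; a quick sketch (check the current documentation for the exact interface and output format):

[ ]:
# List the available models and the services that provide them
Model.available()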
We run the question by adding the scenario and model:
[9]:
r = q.by(s).by(m).run()
Here we inspect the response, selecting the model and answer columns (the output appears in the preview at the top of this notebook):
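[ ]:
r.select("model", "test")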
Here we post this notebook to Coop:
[11]:
from edsl import Notebook
nb = Notebook("video_scenario_example.ipynb")
nb.push(
    description = "How to create video scenarios",
    alias = "video-scenarios-notebook",
    visibility = "public"
)
[11]:
{'description': 'How to create video scenarios',
'object_type': 'notebook',
'url': 'https://www.expectedparrot.com/content/1e3e4180-6811-41ba-9c70-41567bed879e',
'alias_url': 'https://www.expectedparrot.com/content/RobinHorton/video-scenarios-notebook',
'uuid': '1e3e4180-6811-41ba-9c70-41567bed879e',
'version': '0.1.56.dev1',
'visibility': 'public'}