NPS survey in EDSL
This notebook provides sample EDSL code for simulating a Net Promoter Score (NPS) survey with AI agents and large language models. In the steps below we show how to construct an EDSL survey, create personas for AI agents to answer the questions, and then administer the survey to them. We also demonstrate some built-in methods for inspecting and analyzing the dataset of results that is generated when an EDSL survey is run.
The following questions are used in the sample survey:
On a scale from 0-10, how likely are you to recommend our company to a friend or colleague? (0=Not at all likely, 10=Very likely) Please tell us why you gave a rating.
How satisfied are you with the following experience with our company? (asked for each of: Product quality, Customer support, Purchasing experience)
Is there anything specific that our company can do to improve your experience?
Technical setup
Before running the code below, ensure that you have (1) installed the EDSL library and (2) created a Coop account to activate remote inference, or stored your own API keys for the language models that you want to use with EDSL. Please also see our tutorials and documentation on getting started with the EDSL library.
Constructing questions
We start by selecting appropriate question types for the above questions. EDSL comes with a variety of common question types that we can choose from based on the form of the response that we want to get back from the model. The first question is a linear scale; we import the class and then construct a question in the relevant template:
[1]:
from edsl import QuestionLinearScale
[2]:
q_recommend = QuestionLinearScale(
    question_name = "recommend",
    question_text = "On a scale from 0-10, how likely are you to recommend our company to a friend or colleague?",
    question_options = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    option_labels = {0: "Not at all likely", 10: "Very likely"}
)
Each question type other than free text automatically includes a “comment” field where the model can provide commentary on its response to the main question. When we run the survey, we can check that this field effectively captures the follow-on question from above (“Please tell us why you gave a rating”) and modify or add questions as needed.
For the next question, we use a {{ placeholder }} for an “experience” that we will insert when repeating the base question:
[3]:
from edsl import QuestionMultipleChoice
[4]:
q_satisfied = QuestionMultipleChoice(
    question_name = "satisfied",
    question_text = "How satisfied are you with the following experience with our company: {{ experience }}",
    question_options = [
        "Extremely satisfied",
        "Moderately satisfied",
        "Neither satisfied nor dissatisfied",
        "Moderately dissatisfied",
        "Extremely dissatisfied"
    ]
)
The third question is a simple free text question that we can choose to administer either once or separately for each “experience” question. In the steps that follow we show how to apply survey logic to achieve this effect:
[5]:
from edsl import QuestionFreeText
[6]:
q_improve = QuestionFreeText(
    question_name = "improve",
    question_text = "Is there anything specific that our company can do to improve your experience?"
)
Creating variants of questions with scenarios
Next we want to create a version of the “satisfied” question for each “experience”. This can be done with Scenario objects: dictionaries of key/value pairs representing the content to be added to questions. Scenarios can be automatically generated from a variety of data sources (PDFs, CSVs, images, tables, etc.). Here we create them from a simple list:
[7]:
from edsl import ScenarioList, Scenario
[8]:
experiences = ["Product quality", "Customer support", "Purchasing experience"]
s = ScenarioList(
    Scenario({"experience": e}) for e in experiences
)
We could also use a specific method for creating scenarios from a list:
[9]:
s = ScenarioList.from_list("experience", experiences)
We can check the scenarios that have been created:
[10]:
s
[10]:
ScenarioList scenarios: 3; keys: ['experience']

|   | experience            |
|---|-----------------------|
| 0 | Product quality       |
| 1 | Customer support      |
| 2 | Purchasing experience |
To create the question variants, we pass the scenario list to the question loop() method, which returns a list of new questions. We can see that each question has a new unique name and a question text with the placeholder replaced with an experience:
[11]:
satisfied_questions = q_satisfied.loop(s)
satisfied_questions
[11]:
[Question('multiple_choice', question_name = """satisfied_0""", question_text = """How satisfied are you with the following experience with our company: Product quality""", question_options = ['Extremely satisfied', 'Moderately satisfied', 'Neither satisfied nor dissatisfied', 'Moderately dissatisfied', 'Extremely dissatisfied']),
Question('multiple_choice', question_name = """satisfied_1""", question_text = """How satisfied are you with the following experience with our company: Customer support""", question_options = ['Extremely satisfied', 'Moderately satisfied', 'Neither satisfied nor dissatisfied', 'Moderately dissatisfied', 'Extremely dissatisfied']),
Question('multiple_choice', question_name = """satisfied_2""", question_text = """How satisfied are you with the following experience with our company: Purchasing experience""", question_options = ['Extremely satisfied', 'Moderately satisfied', 'Neither satisfied nor dissatisfied', 'Moderately dissatisfied', 'Extremely dissatisfied'])]
We can also use the loop() method to create copies of the “improve” question in order to present it as a follow-up to each of the “satisfied” questions that have been parameterized with experiences. Here we simply duplicate the base question without a scenario {{ placeholder }} because we will instead add a “memory” of the relevant “satisfied” question when administering each copy:
[12]:
improve_questions = q_improve.loop(s)
improve_questions
[12]:
[Question('free_text', question_name = """improve_0""", question_text = """Is there anything specific that our company can do to improve your experience?"""),
Question('free_text', question_name = """improve_1""", question_text = """Is there anything specific that our company can do to improve your experience?"""),
Question('free_text', question_name = """improve_2""", question_text = """Is there anything specific that our company can do to improve your experience?""")]
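Conceptually, loop() performs template substitution: for each scenario it renders the {{ placeholder }} in the question text and appends an index to the question name. A minimal pure-Python sketch of that behavior (an illustration only, not EDSL's actual implementation):

```python
# Illustrative sketch of loop()-style substitution (not EDSL internals)
def loop_sketch(question_name, question_text, scenarios):
    """Render one question copy per scenario dict, indexing the names."""
    copies = []
    for i, scenario in enumerate(scenarios):
        text = question_text
        for key, value in scenario.items():
            # Replace the Jinja-style placeholder with the scenario value
            text = text.replace("{{ " + key + " }}", value)
        copies.append({"question_name": f"{question_name}_{i}", "question_text": text})
    return copies

copies = loop_sketch(
    "satisfied",
    "How satisfied are you with: {{ experience }}",
    [{"experience": "Product quality"}, {"experience": "Customer support"}],
)
# copies[0] -> {'question_name': 'satisfied_0',
#               'question_text': 'How satisfied are you with: Product quality'}
```

Note that a question without a placeholder (like “improve” above) is simply duplicated, with only the indexed name changing.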
Creating a survey
Next we pass a list of all the questions to a Survey in order to administer them together:
[13]:
questions = [q_recommend] + satisfied_questions + improve_questions
[14]:
from edsl import Survey
[15]:
survey = Survey(questions)
Adding survey logic
In the next step we add logic to the survey specifying that each “improve” question should include a “memory” of a “satisfied” question (the question and answer that was provided):
[16]:
for i in range(len(s)):
    survey = survey.add_targeted_memory(f"improve_{i}", f"satisfied_{i}")
We can inspect the survey details:
[17]:
survey
[17]:
Survey # questions: 7; question_name list: ['recommend', 'satisfied_0', 'satisfied_1', 'satisfied_2', 'improve_0', 'improve_1', 'improve_2']

|   | question_text | question_name | question_type | option_labels | question_options |
|---|---|---|---|---|---|
| 0 | On a scale from 0-10, how likely are you to recommend our company to a friend or colleague? | recommend | linear_scale | {0: 'Not at all likely', 10: 'Very likely'} | [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| 1 | How satisfied are you with the following experience with our company: Product quality | satisfied_0 | multiple_choice | nan | ['Extremely satisfied', 'Moderately satisfied', 'Neither satisfied nor dissatisfied', 'Moderately dissatisfied', 'Extremely dissatisfied'] |
| 2 | How satisfied are you with the following experience with our company: Customer support | satisfied_1 | multiple_choice | nan | ['Extremely satisfied', 'Moderately satisfied', 'Neither satisfied nor dissatisfied', 'Moderately dissatisfied', 'Extremely dissatisfied'] |
| 3 | How satisfied are you with the following experience with our company: Purchasing experience | satisfied_2 | multiple_choice | nan | ['Extremely satisfied', 'Moderately satisfied', 'Neither satisfied nor dissatisfied', 'Moderately dissatisfied', 'Extremely dissatisfied'] |
| 4 | Is there anything specific that our company can do to improve your experience? | improve_0 | free_text | nan | nan |
| 5 | Is there anything specific that our company can do to improve your experience? | improve_1 | free_text | nan | nan |
| 6 | Is there anything specific that our company can do to improve your experience? | improve_2 | free_text | nan | nan |
AI agent personas
EDSL comes with a variety of methods for designing AI agents to answer surveys. An Agent is constructed by passing a dictionary of relevant traits, with optional additional instructions for the language model to reference in generating responses for the agent. Agents can be constructed from a variety of data sources, including existing survey data (e.g., a dataset of responses that were provided to some other questions). We can also use an EDSL question to draft some personas for agents. Here, we ask for a list of them:
[18]:
from edsl import QuestionList
[19]:
q_personas = QuestionList(
    question_name = "personas",
    question_text = "Draft 5 personas for diverse customers of a landscaping business with varying satisfaction levels."
)
We can run this question alone and extract the response list (more on working with results below):
[20]:
personas = q_personas.run().select("personas").to_list()[0]
personas
Job UUID | 371708f5-0b6a-437b-a000-ed11f8d4d23f |
Progress Bar URL | https://www.expectedparrot.com/home/remote-job-progress/371708f5-0b6a-437b-a000-ed11f8d4d23f |
Exceptions Report URL | None |
Results UUID | 176efc8a-34ae-44fa-92b1-51c56431c656 |
Results URL | https://www.expectedparrot.com/content/176efc8a-34ae-44fa-92b1-51c56431c656 |
[20]:
['John, a retired veteran who loves his garden and is highly satisfied with the personalized service',
'Emily, a busy professional who is moderately satisfied but wishes for more eco-friendly options',
'Raj, a young tech entrepreneur who is dissatisfied due to inconsistent appointment scheduling',
'Maria, a single mother who is very satisfied with the affordable pricing and flexible payment plans',
'Grace, an elderly woman who is dissatisfied because of slow response times to her queries']
Next we pass the personas to create a set of agents:
[21]:
from edsl import AgentList, Agent
[22]:
a = AgentList(
    Agent(traits = {"persona": p}) for p in personas
)
Selecting language models
EDSL works with many popular large language models that we can select to use with a survey. To see a list of available models:
[23]:
from edsl import Model
[24]:
# Model.available()
To select a model to use with a survey, we pass a model name to a Model:
[25]:
m = Model("gemini-1.5-flash")
If we want to compare responses for several models, we can use a ModelList instead:
[26]:
from edsl import ModelList
[27]:
m = ModelList(
    Model(model) for model in ["gemini-1.5-flash", "gpt-4o"]
)
Note: If no model is specified when running a survey, the default model GPT-4o is used (as above when we generated personas).
Running a survey
We administer the survey by adding the agents and models with the by() method and then calling the run() method:
[28]:
results = survey.by(a).by(m).run()
Job UUID | d80dabbf-57a9-40ef-88eb-0e284aa9d603 |
Progress Bar URL | https://www.expectedparrot.com/home/remote-job-progress/d80dabbf-57a9-40ef-88eb-0e284aa9d603 |
Exceptions Report URL | https://www.expectedparrot.com/home/remote-inference/error/8e4bc432-3ad7-4bc4-a9a7-a8f0c4800b23 |
Results UUID | 918c9db0-7a06-48f5-9d10-753009585e37 |
Results URL | https://www.expectedparrot.com/content/918c9db0-7a06-48f5-9d10-753009585e37 |
This generates a dataset of Results that includes a response for each agent/model that was used. We can access the results with built-in methods for analysis. To see a list of all the components of the results:
[29]:
results.columns
[29]:
0 | |
---|---|
0 | agent.agent_index |
1 | agent.agent_instruction |
2 | agent.agent_name |
3 | agent.persona |
4 | answer.improve_0 |
5 | answer.improve_1 |
6 | answer.improve_2 |
7 | answer.recommend |
8 | answer.satisfied_0 |
9 | answer.satisfied_1 |
10 | answer.satisfied_2 |
11 | cache_keys.improve_0_cache_key |
12 | cache_keys.improve_1_cache_key |
13 | cache_keys.improve_2_cache_key |
14 | cache_keys.recommend_cache_key |
15 | cache_keys.satisfied_0_cache_key |
16 | cache_keys.satisfied_1_cache_key |
17 | cache_keys.satisfied_2_cache_key |
18 | cache_used.improve_0_cache_used |
19 | cache_used.improve_1_cache_used |
20 | cache_used.improve_2_cache_used |
21 | cache_used.recommend_cache_used |
22 | cache_used.satisfied_0_cache_used |
23 | cache_used.satisfied_1_cache_used |
24 | cache_used.satisfied_2_cache_used |
25 | comment.improve_0_comment |
26 | comment.improve_1_comment |
27 | comment.improve_2_comment |
28 | comment.recommend_comment |
29 | comment.satisfied_0_comment |
30 | comment.satisfied_1_comment |
31 | comment.satisfied_2_comment |
32 | generated_tokens.improve_0_generated_tokens |
33 | generated_tokens.improve_1_generated_tokens |
34 | generated_tokens.improve_2_generated_tokens |
35 | generated_tokens.recommend_generated_tokens |
36 | generated_tokens.satisfied_0_generated_tokens |
37 | generated_tokens.satisfied_1_generated_tokens |
38 | generated_tokens.satisfied_2_generated_tokens |
39 | iteration.iteration |
40 | model.frequency_penalty |
41 | model.inference_service |
42 | model.logprobs |
43 | model.maxOutputTokens |
44 | model.max_tokens |
45 | model.model |
46 | model.model_index |
47 | model.presence_penalty |
48 | model.stopSequences |
49 | model.temperature |
50 | model.topK |
51 | model.topP |
52 | model.top_logprobs |
53 | model.top_p |
54 | prompt.improve_0_system_prompt |
55 | prompt.improve_0_user_prompt |
56 | prompt.improve_1_system_prompt |
57 | prompt.improve_1_user_prompt |
58 | prompt.improve_2_system_prompt |
59 | prompt.improve_2_user_prompt |
60 | prompt.recommend_system_prompt |
61 | prompt.recommend_user_prompt |
62 | prompt.satisfied_0_system_prompt |
63 | prompt.satisfied_0_user_prompt |
64 | prompt.satisfied_1_system_prompt |
65 | prompt.satisfied_1_user_prompt |
66 | prompt.satisfied_2_system_prompt |
67 | prompt.satisfied_2_user_prompt |
68 | question_options.improve_0_question_options |
69 | question_options.improve_1_question_options |
70 | question_options.improve_2_question_options |
71 | question_options.recommend_question_options |
72 | question_options.satisfied_0_question_options |
73 | question_options.satisfied_1_question_options |
74 | question_options.satisfied_2_question_options |
75 | question_text.improve_0_question_text |
76 | question_text.improve_1_question_text |
77 | question_text.improve_2_question_text |
78 | question_text.recommend_question_text |
79 | question_text.satisfied_0_question_text |
80 | question_text.satisfied_1_question_text |
81 | question_text.satisfied_2_question_text |
82 | question_type.improve_0_question_type |
83 | question_type.improve_1_question_type |
84 | question_type.improve_2_question_type |
85 | question_type.recommend_question_type |
86 | question_type.satisfied_0_question_type |
87 | question_type.satisfied_1_question_type |
88 | question_type.satisfied_2_question_type |
89 | raw_model_response.improve_0_cost |
90 | raw_model_response.improve_0_one_usd_buys |
91 | raw_model_response.improve_0_raw_model_response |
92 | raw_model_response.improve_1_cost |
93 | raw_model_response.improve_1_one_usd_buys |
94 | raw_model_response.improve_1_raw_model_response |
95 | raw_model_response.improve_2_cost |
96 | raw_model_response.improve_2_one_usd_buys |
97 | raw_model_response.improve_2_raw_model_response |
98 | raw_model_response.recommend_cost |
99 | raw_model_response.recommend_one_usd_buys |
100 | raw_model_response.recommend_raw_model_response |
101 | raw_model_response.satisfied_0_cost |
102 | raw_model_response.satisfied_0_one_usd_buys |
103 | raw_model_response.satisfied_0_raw_model_response |
104 | raw_model_response.satisfied_1_cost |
105 | raw_model_response.satisfied_1_one_usd_buys |
106 | raw_model_response.satisfied_1_raw_model_response |
107 | raw_model_response.satisfied_2_cost |
108 | raw_model_response.satisfied_2_one_usd_buys |
109 | raw_model_response.satisfied_2_raw_model_response |
110 | scenario.scenario_index |
For example, we can filter, sort, and display columns of results in a table:
[30]:
(
    results
    .filter("model.model == 'gemini-1.5-flash'")
    .sort_by("recommend", reverse=True)
    .select("model", "persona", "recommend", "recommend_comment")
)
[30]:
|   | model.model | agent.persona | answer.recommend | comment.recommend_comment |
|---|---|---|---|---|
| 0 | gemini-1.5-flash | John, a retired veteran who loves his garden and is highly satisfied with the personalized service | 10 | Honestly, I've been so pleased with the personalized attention I've received. It's a breath of fresh air compared to some of the impersonal service I've gotten in the past. I'd recommend you folks to anyone in a heartbeat. |
| 1 | gemini-1.5-flash | Maria, a single mother who is very satisfied with the affordable pricing and flexible payment plans | 10 | Honestly, as a single mom, being able to afford the things I need without breaking the bank is a huge deal. The flexible payment plans have been a lifesaver! I'd recommend your company in a heartbeat. |
| 2 | gemini-1.5-flash | Emily, a busy professional who is moderately satisfied but wishes for more eco-friendly options | 7 | I'd recommend you, but I wish you had more sustainable options. There's definitely room for improvement on the eco-friendliness front, which holds me back from giving a higher score. |
| 3 | gemini-1.5-flash | Raj, a young tech entrepreneur who is dissatisfied due to inconsistent appointment scheduling | 3 | Ugh, honestly, it's a mixed bag. The product itself is okay, but the constant rescheduling of meetings is driving me absolutely bonkers. It's wasted so much of my time. I wouldn't actively discourage a friend, but I wouldn't exactly *recommend* it either. |
| 4 | gemini-1.5-flash | Grace, an elderly woman who is dissatisfied because of slow response times to her queries | 2 | Honestly, I'm finding it terribly frustrating to get even simple questions answered in a timely manner. It's just taking far too long. |
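With the 0-10 ratings in hand, we can compute the Net Promoter Score itself using the standard definition: promoters rate 9-10, detractors rate 0-6, and NPS is the percentage of promoters minus the percentage of detractors. A minimal pure-Python sketch, using the five ratings shown above (in practice the ratings can be pulled from the results with results.select("recommend").to_list()):

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100 to 100 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Ratings from the gemini-1.5-flash results above
ratings = [10, 10, 7, 3, 2]
nps = net_promoter_score(ratings)  # 2 promoters, 1 passive, 2 detractors -> 0.0
```

Note that the 7 counts as a "passive" (7-8) and affects the score only by enlarging the denominator.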
[31]:
(
    results
    .filter("model.model == 'gemini-1.5-flash'")
    .sort_by("satisfied_0")
    .select("satisfied_0", "satisfied_0_comment", "improve_0")
)
[31]:
|   | answer.satisfied_0 | comment.satisfied_0_comment | answer.improve_0 |
|---|---|---|---|
| 0 | Extremely satisfied | My tomatoes have never been so plump and juicy! The quality of the seeds was top-notch, just what I'd expect from personalized service. | Well, honestly, everything's been top-notch. The product quality, as I already said, is excellent. I'm a pretty simple guy, I like things that work and work well, and yours certainly do. If I had to pick something, and this is really nitpicking, perhaps a little more personalized communication about new products or updates that might be relevant to my gardening hobby? I don't need a barrage of emails, mind you – just a thoughtful note every now and then letting me know about something I might find useful. That personal touch is what really sets you apart, and a little extra of that would be fantastic. But honestly, I'm already incredibly happy with everything. |
| 1 | Extremely satisfied | Honestly, for the price, the quality is amazing! I've never had any issues. Being a single mom, I need things to last, and they have. | Honestly? Keeping things affordable and flexible is the biggest thing for me. Being a single mom, every penny counts. So, if you could keep those payment plans as they are, or maybe even explore even more options for people in my situation, that would be amazing. I'm already extremely happy with the quality of your products, so that's not a concern at all. |
| 2 | Moderately dissatisfied | Honestly, the product itself is alright, but it took them *forever* to even get it to me. The wait time was ridiculous. | Oh, honey, product quality is only *part* of it. The *real* problem is waiting. Waiting, waiting, waiting! I've been trying to get a simple question answered about a faulty widget for weeks now. Weeks! I've left messages, sent emails, even tried calling – and it's like talking to a brick wall. |
| 3 | Moderately satisfied | The product quality is okay, it does the job, but I wish there were more sustainable options available. I'm always looking for ways to reduce my environmental impact, and that's a factor in my overall satisfaction. | Honestly? It's tricky because I'm generally happy with the product quality, but I'm always looking for ways to reduce my environmental impact. So, if you could offer more eco-friendly packaging options – maybe using recycled materials or reducing the amount of packaging overall – that would be a huge plus. I know it's a small thing, but those little choices add up, and it would make a big difference to me. |
| 4 | Moderately satisfied | The product itself is okay, I guess. It does what it's supposed to, but the constant rescheduling of meetings to discuss features and bugs is really killing my productivity. If the scheduling was better, I'd be much happier. | Ugh, honestly? The product itself is okay, I'll give you that. But the scheduling of appointments... that's where you guys really fall down. It's a nightmare. One minute I'm getting a confirmation, the next it's been rescheduled, then cancelled, then rescheduled again... It's completely unprofessional and eats into my already crazy schedule. If you could just nail down a reliable, consistent appointment system – maybe a better online booking system with fewer glitches and better communication – that would be a *huge* improvement. Seriously, that's the biggest thing holding me back from being completely happy. |
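A quick way to summarize the multiple-choice answers is to tally them. A minimal sketch with collections.Counter, using the five hypothetical "satisfied_0" responses shown above (in practice, e.g., results.select("satisfied_0").to_list()):

```python
from collections import Counter

# Hypothetical responses, as would be returned by .to_list()
responses = [
    "Extremely satisfied",
    "Extremely satisfied",
    "Moderately dissatisfied",
    "Moderately satisfied",
    "Moderately satisfied",
]

# Counter maps each response option to its frequency
distribution = Counter(responses)
```

The resulting distribution can then be compared across the "experience" variants (satisfied_0, satisfied_1, satisfied_2) to see which touchpoint drives dissatisfaction.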
[32]:
(
    results
    .filter("model.model == 'gemini-1.5-flash'")
    .select("satisfied_1", "satisfied_1_comment", "improve_1")
    .print(pretty_labels = {
        "answer.satisfied_1": "Customer service: satisfaction",
        "comment.satisfied_1_comment": "Customer service: comment",
        "answer.improve_1": "Customer service: improvements"
    })
)
[32]:
|   | Customer service: satisfaction | Customer service: comment | Customer service: improvements |
|---|---|---|---|
| 0 | Extremely satisfied | Now, let me tell you, I've dealt with my share of companies over the years, some good, some...well, not so good. But your customer support? Top-notch. Felt like they really took the time to understand my problem, didn't treat me like just another number. Reminds me of the personalized service I used to get at the old hardware store back home. A real breath of fresh air. | Honestly, y'all have been great. I've dealt with enough impersonal, bureaucratic nonsense in my time, so the personalized attention I got was a real breath of fresh air. It's hard to pinpoint something specific you could *improve*, because the experience was already so good. Maybe just keep doing what you're doing! It's refreshing to find a company that actually cares about its customers. Reminds me of tending my prize-winning roses – a little extra care goes a long way. |
| 1 | Moderately satisfied | Honestly, the customer support was okay. They got the job done, but it wasn't exactly a seamless or particularly pleasant experience. I wish they had more eco-friendly options for contacting them, like a better email system or something, instead of just relying on so many phone calls. It's a small thing, but it adds up for me. | Honestly? It's a bit of a mixed bag. Your customer support was fine – helpful enough when I needed it. But what would *really* improve my experience, and I think a lot of other people's too, is a stronger focus on sustainability. I'm always looking for more eco-friendly options, and it's something I'm increasingly considering when choosing companies to work with. Maybe explore more sustainable packaging, or carbon-neutral shipping options? Little things like that would make a big difference to me. |
| 2 | Moderately dissatisfied | Ugh, it's the scheduling again. I've had to reschedule appointments so many times because of their inconsistencies. It's frustrating and eats into my already packed day. While the actual support *when* I finally get it is okay, the constant rescheduling is a major pain. | Ugh, look, the customer support itself wasn't *terrible*, but the scheduling is driving me nuts. It's like trying to herd cats. One appointment gets cancelled, rescheduled, then *another* gets moved. It's completely thrown off my whole week, multiple times. Honestly, if you could just nail down a reliable scheduling system – maybe something with better reminders, and less room for last-minute changes – that would be a *huge* improvement. I'm all about efficiency, and this back-and-forth is costing me valuable time and frankly, sanity. |
| 3 | Extremely satisfied | Honestly, as a single mom, I'm always juggling so much. Their customer support has been a lifesaver – always quick to respond and so understanding. It makes a huge difference! | Honestly? Keeping things affordable and flexible is the biggest thing for me. I'm a single mom, so every penny counts. Being able to adjust my payments when things get a little tight… that's a lifesaver. So, maybe just keeping those options available and clear to understand? I don't need fancy extras, just reliable service at a price I can manage. That's more valuable than anything else. |
| 4 | Extremely dissatisfied | Honestly, the whole thing was a dreadful experience. I waited ages for someone to even acknowledge my query, and then the solution they offered was completely unhelpful. I'm just terribly disappointed. | Oh, honey, where do I even begin? "Extremely dissatisfied" doesn't even begin to cover it. You want specifics? Fine. First, the waiting. The *waiting*. I've spent more time on hold listening to that chirpy little tune than I have spent actually talking to a real person. I swear that tune is designed to drive a person to distraction! It's maddening. Then, when I *finally* get through, it's often to someone who sounds like they're reading from a script, not actually listening to my problem. I've explained the same thing three, four times to different people! It's like talking to a wall, only a wall that occasionally interrupts you with hold music. And the solutions? Oh, the solutions. They're usually temporary fixes that don't actually address the root of the problem. I'm getting tired of this constant back-and-forth. I need someone to actually *listen* and provide a *lasting* solution, not just a band-aid. |
Posting to Coop
The Coop is a platform for creating, storing and sharing LLM-based research. It is fully integrated with EDSL, allowing you to access objects from your workspace or Coop account interface. Learn more about creating an account and using the Coop.
The surveys and results above were already posted automatically using remote inference. Here we demonstrate methods for posting the same content from your local workspace (if you are working locally):
[33]:
info = survey.push(description = "Example NPS survey", visibility = "public")
info
[33]:
{'description': 'Example NPS survey',
'object_type': 'survey',
'url': 'https://www.expectedparrot.com/content/cce7d1b6-600d-4450-890f-384ee9391da1',
'uuid': 'cce7d1b6-600d-4450-890f-384ee9391da1',
'version': '0.1.45.dev1',
'visibility': 'public'}
We can also post a notebook, such as this one:
[34]:
from edsl import Notebook
[35]:
n = Notebook(path = "nps_survey.ipynb")
[36]:
info = n.push(description = "Notebook for simulating an NPS survey")
info
[36]:
{'description': 'Notebook for simulating an NPS survey',
'object_type': 'notebook',
'url': 'https://www.expectedparrot.com/content/9bef6849-f6b4-4aea-a769-9313650edf58',
'uuid': '9bef6849-f6b4-4aea-a769-9313650edf58',
'version': '0.1.45.dev1',
'visibility': 'unlisted'}
To update an object at the Coop:
[37]:
n = Notebook(path = "nps_survey.ipynb") # resave
[38]:
n.patch(uuid = info["uuid"], visibility = "public", value = n)
[38]:
{'status': 'success'}