WeUnlearn is an NGO that empowers adolescents from low-income families with the knowledge and resources to lead emotionally resilient and gender-equitable lives. They had been running pilots with 20-30 children on Facebook Messenger and their website, but these had not generated the desired engagement. Moreover, they did not have the right tools and methods to measure impact.
Two important considerations:
- A ready tech solution to launch the program
- Reaching a large number of children

Serendipity then connected them to Donald Lobo and Glific. They decided to test two modules using the chatbot solution: one on bullying and one on gender stereotypes. The curriculum was designed to raise awareness of these issues and to inculcate essential life skills, such as negotiation, persuasion, and assertive communication, to overcome them. A resource was scheduled to be sent to the students every day at a fixed time on their WhatsApp numbers. Each knowledge resource was followed by a round of questions to check what the students had learnt and retained. On average, the children engaged with the program for about 10 minutes every day.
At the end of two weeks, the WeUnlearn team saw a shift of about 10-15 percentage points, with most of the results statistically significant, in the children's gender attitudes and in how they perceived gender equality. The students exchanged about 1,50,865 messages with the bot and wanted to keep going even after the pilot had ended.
“When I went to the field & interacted with the children, they loved the platform & wanted to continue engaging with the bot.”
– Pallavi Khare, Co-Founder, WeUnlearn
The chatbot relied on WhatsApp-based automated messaging and content delivery. Reaching children where they already are proved much more effective in delivering the program and continuing their learning and education. It gave the children an opportunity to interact with the bot and learn in a fun, engaging manner.
Here’s a snippet of how the bot worked:
The WeUnlearn team conducted multiple field visits and spoke to more than 200 adolescent children to identify their pain points, then prepared the content accordingly. They designed a robust evaluation (a randomized controlled trial) to measure the effectiveness of the chatbot and deployed the baseline and endline surveys via the chatbot itself. They also tested the program internally with the team and a set of 10-12 contacts to make sure the messages had been set up the way they envisioned, and that the program was ready to scale to an even larger number of students. Here’s what their journey looked like:
Abhishek: How did you create the cohort of students and deploy your program?
Pallavi: We partnered with a Delhi-based organization that has a large community of 20,000 students from low-income families. They already had a structure in place to train students in English and run vocational programs, and they welcomed the idea of running our program with their students. Through the partner, we gained a good network of mentors and facilitators, each of whom managed a group of 50 students. These mentors and facilitators helped deploy our program by getting students to engage with the bot. As the children trusted the facilitators, there was no need for us to connect with them directly.
We also had a good team structure: a great project lead (who would think in advance about all the things that could go wrong), a content team, a tech team, and an implementation team to work with the facilitators.
We created a WhatsApp group with all the facilitators and trained them on how to run the program. The Glific platform helped us monitor which students were engaged and which were not, and the facilitators acted as catalysts by following up with the children.
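As a rough illustration of the kind of monitoring described here, the Python sketch below flags students who have gone quiet so a facilitator can follow up. The contact records, field names, and one-day threshold are all made up for illustration; they do not reflect Glific's actual data model or API.

```python
from datetime import datetime, timedelta

# Hypothetical records of the kind one might export from an engagement
# dashboard: each contact's name, facilitator, and last interaction time.
contacts = [
    {"name": "Student A", "facilitator": "F1",
     "last_interaction": datetime(2021, 8, 10, 18, 5)},
    {"name": "Student B", "facilitator": "F1",
     "last_interaction": datetime(2021, 8, 8, 17, 40)},
    {"name": "Student C", "facilitator": "F2",
     "last_interaction": datetime(2021, 8, 10, 19, 15)},
]

def inactive_students(contacts, now, max_gap=timedelta(days=1)):
    """Return contacts who have not messaged the bot within max_gap."""
    return [c for c in contacts if now - c["last_interaction"] > max_gap]

now = datetime(2021, 8, 11, 9, 0)
for c in inactive_students(contacts, now):
    print(f'{c["facilitator"]}: please nudge {c["name"]}')
```

A list like this, grouped by facilitator, is what the volunteers could hand over for the daily follow-ups.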
We also shared an onboarding video with the children on how to use and interact with the bot. We gave the facilitators the required instructions on what to do – they already had WhatsApp groups with the children, so they sent the students a link to initiate a conversation with the bot.
A: How did you ensure engagement with the students throughout the program?
P: We realised that usage would differ for each student, and that we would need to monitor how students were interacting with the bot. In the first few days, we onboarded about 8 volunteers who would spend a couple of hours a day checking whether any students were getting stuck. They would reach out to the facilitators, who would then nudge the student to continue the program or help them where required. The facilitators also reported back to us how many students completed the interaction for the day – this was managed well by our project lead.
We centered our content on the children, using examples from movies they liked and people they were familiar with, and kept it relatable overall.
Asking a few questions every day and checking how they would apply the knowledge was important for ensuring engagement.
A good number of students continued the program to the end, though there were quite a few drop-offs along the way. Running the program surfaced a lot of insights on how to improve the questions and create a really smooth flow throughout. These insights will differ from NGO to NGO, but the good thing about having a system is that “what did not work” can be quantified, measured, and improved upon.
A: How did you go about creating content for the program?
P: Meghna from our team spent a long time in the field interviewing about 200 students to understand their pain points. Our content was centered on the children and those pain points. We focused on a three-step process for the program: awareness (of the challenges), skilling (such as persuasion and negotiation to overcome the issue), and action (a challenge to practice the change).
We structured the program so that students could spend a few minutes on it every day. Once we had identified the program content and conversational flow, setting it up on Glific was pretty smooth. We kept the intervention simple. The user interface was easy to use and we were set up in no time. Building the entire bot ourselves in so little time would have been a big challenge, but just setting up the content and conversations was easy.
A: How did you measure your program impact?
P: We extracted the data from BigQuery, which gave us the baseline and endline responses from each of the groups. We compared the before-and-after data to analyse the significance of the impact.
We had a strong focus on evaluation because it is one of the most important aspects of demonstrating impact and securing program funding and donations. During my time at the Gates Foundation, I saw many NGOs run amazing programs but lack the data to show for it. A good standard of evaluation is also important for early-stage NGOs.
Being able to deploy the program, with surveys, easily through Glific enabled us to support our program with data.
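The interview does not say which statistical test the team ran on the baseline and endline data, but a before/after comparison of proportions like this is commonly checked with a two-proportion z-test. The sketch below uses only Python's standard library, and the sample sizes and response rates are purely illustrative, not WeUnlearn's actual numbers:

```python
from math import sqrt, erf

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers only: 400 students, with the share answering a
# gender-attitude item equitably rising from 52% at baseline to 64% at
# endline (a 12-percentage-point shift).
z, p = two_proportion_ztest(208, 400, 256, 400)
print(f"shift = {256/400 - 208/400:.2f}, z = {z:.2f}, p = {p:.4f}")
```

With samples of this size, a shift in the 10-15 point range comes out statistically significant (p well below 0.05), which matches the kind of result the team reports.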
A: What was the most important part of running the program?
P: The project lead was the most important part. They played a crucial role in anticipating pitfalls, preparing for them, and keeping the program moving forward.
A: How did Glific help you in all of this?
P: Glific was a blessing, because we could not have run our program in such a short time otherwise. On our own, we would not have had the WhatsApp integration, or designed and built a bot to support our program, or been able to test it and launch it at a large scale so fast. It came to us at the right time, and it seems like the right platform for NGOs. Even the cost seems fair compared to what it would cost us to build and run it all ourselves.
The user interface was really good; we got all our team members onboarded in no time, and even the volunteers got started quickly and were able to use the platform well.