This blog was written by Meenakshi Harikrishna, Key Education Foundation.
Introduction to Key Education Foundation
Key Education Foundation is a Bangalore-based not-for-profit organisation working to improve the quality of Early Childhood Education (ECE) in India. We work on the holistic development of children in the age group of 3-6 years by enriching the classroom environment, equipping teachers and empowering parents.
The CLAP (Children Learning, Assisted by Parent) Project is a Parent Engagement initiative to improve parents' knowledge of best practices in ECE and enhance positive parent-child interactions. Through CLAP, parents receive:
- One worksheet per week through the institution their child is enrolled in (private school, government school or Anganwadi)
- Learning resources, age-appropriate activities and instructions for worksheets on the WhatsApp bot
- Support through on-ground teams and calls with experts
Parents can access audio-visual support on how to do the activities in the worksheet through the WhatsApp bot. On completing each worksheet, parents also need to send a picture of one activity to the bot. This step helps us verify that worksheets accessed on the bot have actually been completed. Finally, parents receive a question related to the theme of the worksheet.
Need for an A/B Test:
Over the past year, we have consistently heard the following feedback from on-ground teams:
- Users return the completed physical worksheets to the institution their child is enrolled in but do not complete the worksheet on the bot
- There is a significant drop between the number of users who start a worksheet on the bot and the number who complete it
- The flow to complete the worksheets is lengthy and cumbersome (too many steps, information-heavy)
The ABCs of the pilot:
We began the A/B test by identifying our sample set, hypothesis and indicators for data analysis.
Sample Set – We wanted to test the flow with at least 1000 users from our high-engagement pathway – users whose children are enrolled in Karnataka Public Schools.
Duration – We ran the pilot for one week, during which users completed one worksheet each
Hypothesis: Shorter flows on the bot for completing the worksheets will result in a higher completion rate
Indicators:
- The difference in completion rates between the test versions and the default/original flow
- The step where there is the highest user drop-off
Setup:
To gather data on the above feedback, we A/B tested three versions of our flow (shown below) using the ‘split randomly’ node. To avoid double counts in our data, we ensured that each user could enter the A/B test flow only once by routing users who had completed the A/B flow to the default flow.
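Conceptually, the ‘split randomly’ node gives each incoming user an equal chance of landing in one of the variants, and the one-time-entry rule amounts to checking whether a user has already been through the test before admitting them. A minimal Python sketch of that routing logic – the variant names and the in-memory set are illustrative, not part of Glific's API:

```python
import random

VARIANTS = ["original_3_step", "short_2_step_a", "short_2_step_b"]

# Illustrative stand-in for Glific's flow state: users who have
# already entered the A/B test flow.
entered_ab_flow: set[str] = set()

def route_user(user_id: str) -> str:
    """Split new users evenly across the test variants; send
    returning users to the default flow to avoid double counts."""
    if user_id in entered_ab_flow:
        return "default_flow"
    entered_ab_flow.add(user_id)
    return random.choice(VARIANTS)  # the 'split randomly' step
```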


We used labels and collections throughout the flow to track user engagement at each step of the different test scenarios (see the image below).
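One way to picture this tracking: each label behaves like a named set of users who reached a particular step, and counting the users under each label is what later yields the per-step engagement numbers. A hypothetical sketch (the label names are invented for illustration):

```python
from collections import defaultdict

# Labels mark a step a user reached, e.g. "short_a:started",
# "short_a:image_uploaded", "short_a:completed".
users_by_label: dict[str, set[str]] = defaultdict(set)

def tag_user(user_id: str, label: str) -> None:
    """Record that a user reached the step identified by `label`."""
    users_by_label[label].add(user_id)

def users_at(label: str) -> int:
    """Number of distinct users who reached this step."""
    return len(users_by_label[label])
```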

Testing the flow:
To ensure our flows ran seamlessly, we first tried them out with an internal set of test users. This helped us identify issues/bugs in our flow, such as webhooks being linked incorrectly and attachments not being sent to users, and verify that the data was being collected and tagged to labels as we intended. We then removed the test users from the collections created for the pilot so that we would have the most accurate data from our sample set when the pilot launched.
Launching the A/B test:
We launched the A/B test for our sample set of users and ran the pilot for one week (users completed one worksheet during this time).

Data Collection and Analysis
Using the date and label filters on the Glific Dashboard, we were able to track data on the relevant labels and collections.


This provided us with rich data to validate the feedback we had been receiving. Some of our insights include:
- The data from the A/B test validated our hypothesis – shorter flows on the bot for completing the worksheets result in a higher completion rate (89% and 84% for the 2-step flows compared to 76% for the 3-step flow)
- The drop-off rate increases when there are multiple steps – 10% at Step 1 and 16% at Step 2 in the original flow
- More users drop off at the image-upload stage (12%) than at the question stage (6%) – see the sketch below for how these rates are computed
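These rates fall out of simple ratios over the label counts: the completion rate is completed users over started users, and a step's drop-off is the share of users who reached that step but not the next. A short Python sketch with illustrative counts (chosen to mirror the pattern above, not our pilot's raw numbers):

```python
def completion_rate(started: int, completed: int) -> float:
    """Share of users who started the flow and also finished it."""
    return completed / started

def step_dropoff(reached_step: int, reached_next: int) -> float:
    """Share of users who reached a step but not the next one."""
    return 1 - reached_next / reached_step

# Illustrative counts, not the pilot's raw numbers.
started, after_step_1, completed = 100, 90, 76

print(f"Step 1 drop-off: {step_dropoff(started, after_step_1):.0%}")    # 10%
print(f"Step 2 drop-off: {step_dropoff(after_step_1, completed):.0%}")  # 16%
print(f"Completion rate: {completion_rate(started, completed):.0%}")    # 76%
```

With 100 users starting, 90 clearing Step 1 and 76 completing, this reproduces the 10% and 16% step drop-offs and the 76% completion rate reported above for the original flow.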

What’s Next?
The experience of running the A/B test has given us a better understanding of user behaviour on the bot. We will use this data to guide the design of next year's flow, keeping ease of use at the centre of our design principles: eliminating cumbersome steps and keeping the flow as short as possible.
This feature has also opened up the possibility of testing variations in our flows that can lead to a better user experience. We plan to run A/B tests at multiple points in our user journey over the next year and make quick fixes wherever possible to continually improve user interactions with the bot.
To learn more about A/B testing, read this blog or this documentation.