Building a computational learning chatbot for students in Odisha

Abhishek Sharma

DECEMBER 20, 2022


About the program

Quest Alliance is running a WhatsApp chatbot-based program in partnership with Amazon Future Engineers to educate students in Odisha on computational thinking and coding. Quest Alliance has designed the program so that students receive content every week and must complete it within the week before the next week's content is sent to them. The instructional content uses rich media like GIFs, images, videos, and crisp text to engage students in an immersive learning experience. The content flow is interspersed with quizzes, activities, and interactive buttons to keep the conversation two-way rather than pushing content in one direction. The program started off with a pilot of about 300 students. They are loving the experience and can't wait for each new week's content to be released.
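To make that weekly gating concrete, here is a minimal sketch of the release rule in Python. The names and data shape are assumptions for illustration only; in Glific this kind of scheduling is handled through flows and triggers, not custom code.

```python
# Illustrative sketch of the weekly gating rule: a student receives week
# N's content only after completing week N-1. Names are hypothetical;
# Glific implements this via flows and triggers, not custom code.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    weeks_completed: int  # highest week fully completed so far

def week_to_release(student: Student, current_week: int) -> int | None:
    """Return the week of content to send now, or None to hold back."""
    if student.weeks_completed >= current_week:
        return None  # already caught up; nothing new to send yet
    if student.weeks_completed == current_week - 1:
        return current_week  # finished last week's content: release the new week
    return None  # still behind on an earlier week; keep nudging instead

print(week_to_release(Student("Asha", weeks_completed=2), current_week=3))  # 3
print(week_to_release(Student("Ravi", weeks_completed=1), current_week=3))  # None
```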

Background

In June 2022, Rishi connected with us regarding a new education program by Quest Alliance (QA) that would require Glific's WhatsApp chatbot. Rishi shared plans for this chatbot to reach students at a really large scale, up to 2 lakh (200,000) students in Odisha. There were some concerns about the cost implications, since WhatsApp conversation costs can really add up at that scale. Quest Alliance had already been running a couple of chatbots with Glific, and I believe it was a good decision not to worry about the costs before building, testing, and seeing the program on the ground. We are still in the middle of the pilot, and the response on the ground from students and other stakeholders has been quite positive; the journey to build the chatbot has been remarkable.

Rishi's terrific presentation in Dehradun on conversational pedagogy and the latest chatbot work, which got everyone curious about the work.

The first time I saw a working prototype of the chatbot was during the Glific sprint in Dehradun, which Rishi had joined. That was also when I started getting more involved in supporting the chatbot from a Glific (and tech) perspective. The chatbot seemed really promising: I was totally engaged in its activities and quizzes and couldn't wait to start working on it. I wasn't aware that, before this, Rishi had been putting together a collaborative effort to design, develop, and implement the program on the ground :). The collaboration comprised the Glific team helping with the tech to build the chatbot, the Ooloi Labs team designing the conversation flows, and the QA team creating the pedagogical program along with its rollout and adoption among students on the ground in Odisha.

Development process

The chatbot work began well before any flows were built on Glific. Ooloi conducted field research with the Quest team with these objectives:

  1. Understanding the students' current knowledge of technology, their access to mobile devices, and their career aspirations.
  2. Understanding the facilitators' comfort with the curriculum and the delivery mechanisms.
  3. Seeing how students responded to the interface and the content, and getting a sense of what works and what doesn't.

On-ground research of content and conversation flows before building the chatbot.

The findings of this research were as follows:

  1. Students were unsure whether the chatbot was a person they were speaking to or some kind of robot.
  2. Students couldn't understand why they were using the chatbot and what purpose learning to code served for them. This made it difficult to connect with the content and see it as something useful.
  3. Whenever students got stuck in a flow, they weren't sure what to do next. We had to provide a way for them to get unstuck on their own.
  4. Students were unfamiliar with interactions like buttons and menus on WhatsApp.
  5. Once students left WhatsApp, they struggled with downloading and opening Scratch, but didn't come back to the chatbot to seek help.

After the conversation flows were designed, the Glific team, along with Sourav (from the QA team), stepped in to review and build the flows on the chatbot. We guided the program on building nudges, designing pathways to help students find their way around the chatbot, building dashboards, fixing technical issues with the flows, getting HSMs (WhatsApp message templates) approved, and experimenting with features like multiple users accessing the chatbot from one number.
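To give a flavour of one of these pieces, here is a minimal sketch of the fallback pattern behind the nudges and pathways: after a couple of unmatched replies, restate the options, and if the student is still stuck, hand them a known way out. Everything here (function names, messages, retry count) is a hypothetical illustration; in Glific this is configured in the visual flow editor, not written as code.

```python
# Illustrative sketch of an "unexpected input" fallback; NOT Glific's API.
# The names, messages, and retry count below are hypothetical.

MAX_RETRIES = 2  # nudges to try before offering a way out

def handle_reply(reply: str, expected: dict[str, str], retries: int) -> tuple[str, int]:
    """Match a student's reply against the step's expected options.

    `expected` maps accepted inputs (e.g. "1", "2") to the next message.
    Returns (message to send, updated retry count).
    """
    choice = reply.strip().lower()
    if choice in expected:
        return expected[choice], 0  # matched: move on and reset retries
    if retries < MAX_RETRIES:
        # Gentle nudge: restate the options instead of going silent.
        options = ", ".join(expected)
        return f"Sorry, I didn't get that. Please reply with one of: {options}.", retries + 1
    # Still stuck after the nudges: point to a known pathway out.
    return "Type MENU to see all topics, or HELP to reach a facilitator.", 0
```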

In most consulting engagements, the design and on-ground implementation are managed by the NGO, with tech support from Glific. At first I was a bit apprehensive that so many different groups were participating in building the chatbot, because in such cases the ownership of the work can get diluted, but it was really nice to see how well this collaboration worked out. Everyone did their part and got out of the way for the others. People stepped in to fill the missing pieces rather than punting them to others, even when a task wasn't particularly assigned to them: the Ooloi team building flows on Glific, for instance, or the Glific team reviewing and commenting on the messaging. And the QA team was not crowding the process, being demanding about timelines, or adding pressure to the mix. It was great to see them managing what they could at their end.

Part of the flow on QA’s chatbot

With any chatbot or program implementation, it's crucial to set up key metrics and measurement indicators. We advocate deciding on these monitoring reports early in the project's development. Though these indicators are among the most important parts of a program, they are not always easy to arrive at. While reviewing the pilot, we looked at the following (a rough sketch of computing them appears after the list):

  1. the completion rate of the flows, because it correlates directly with learning;
  2. the distribution of students across districts and schools, to gauge how effectively the program team onboarded students;
  3. the percentage and list of students who got stuck, dropped off, or faced challenges using the chatbot, because that limited the completion rate.
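As a rough illustration, the sketch below computes these three indicators from a conversation export. The file name and columns (contact_id, district, school, completed_week1, last_node) are assumptions for the example; in practice we built these as Glific dashboards and reports.

```python
# Rough sketch of the three pilot indicators, assuming a hypothetical CSV
# export with columns: contact_id, district, school, completed_week1 (0/1),
# last_node. The real reporting lives in Glific dashboards; this is illustrative.
import pandas as pd

df = pd.read_csv("flow_results.csv")  # hypothetical export file

# 1. Completion rate of the weekly flow
completion_rate = df["completed_week1"].mean() * 100
print(f"Week 1 completion: {completion_rate:.1f}%")

# 2. Distribution of students across districts and schools
print(df.groupby(["district", "school"]).size().sort_values(ascending=False))

# 3. Students who got stuck or dropped off, and where in the flow
stuck = df[df["completed_week1"] == 0]
print(f"Stuck/dropped off: {len(stuck)} of {len(df)} ({len(stuck) / len(df):.0%})")
stuck[["contact_id", "last_node"]].to_csv("stuck_students.csv", index=False)
```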

1,282 contacts out of 2,000 completed the first week's content, a completion rate of about 64%.

Students went into the unexpected flow a total of 1.5K times. We looked into these cases and found better ways to keep students from taking the unexpected route.

In addition to the metrics and reporting dashboard, we went through each of those 300 conversations to understand the patterns where the conversation flows needed fixing for a better outcome. Some of our top learnings from running this pilot were:

  • We had to expand the range of responses accepted from users (see the matching sketch after this list).
    For example, when asking for their district, the expected response was the number allocated to the district, but some students typed out the district name as well, in several different variations.
  • We had to improve the clarity of some messages.
    For example, to prompt the language change option, we reworded `Language setting` to `Change language`.
  • We had to educate students on how to use the chatbot – mainly on the types of responses expected from them, and that they always need to respond to the last message sent to them, not to messages in the middle of the flow.
  • We fixed minor technical issues, like students' names not showing properly.
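To illustrate the first learning, here is a minimal sketch of the more lenient district matching we moved towards: accept the allocated number, the full name, or a close misspelling. The district list is a small sample and the fuzzy-match cutoff is an assumption; the real flow in Glific encodes this differently.

```python
# Illustrative sketch of lenient district matching (not Glific's built-in
# matcher). Accepts the allocated number, the name, or a close misspelling.
import difflib

# Sample mapping of allocated numbers to district names (subset, for illustration).
DISTRICTS = {"1": "khordha", "2": "cuttack", "3": "ganjam", "4": "puri"}

def match_district(reply: str) -> str | None:
    """Return the canonical district name for a student's reply, or None."""
    text = reply.strip().lower()
    if text in DISTRICTS:             # the allocated number, e.g. "2"
        return DISTRICTS[text]
    if text in DISTRICTS.values():    # the name typed out in full
        return text
    # Tolerate variations and misspellings, e.g. "Cuttak" for "cuttack".
    close = difflib.get_close_matches(text, DISTRICTS.values(), n=1, cutoff=0.75)
    return close[0] if close else None

assert match_district("2") == "cuttack"
assert match_district(" Cuttak ") == "cuttack"
assert match_district("somewhere else") is None
```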

Outcomes and results

Based on these in-depth conversation reviews on Glific, we were able to fix many issues and improve the program's effectiveness.

The early outcomes of the program have been:

  1. A completion rate of about 60% across the first few flows, among all students who interacted with the chatbot.
  2. Students eagerly await the new content and are quite excited to learn with the chatbot.

Conclusion

This is just the beginning of the chatbot program; there's a lot of work still to be done, like enabling sustained engagement over weeks and months. It's good to know that QA is thinking of the chatbot as part of their core offerings and building it for the long term, not just as a short, quick program.

In hindsight, this also feels like one of the more successful cases of building a program on a chatbot. Here are the key takeaways:

  1. Start small and take steps forward; don't get stuck on high cost projections early on, because you never know how things will turn out. We see many other organisations get discouraged by the projected costs of running a chatbot at scale.
  2. Put emphasis on the design and messaging of the chatbot; the content, and how the program will benefit or engage the audience, is the core of your offering. No amount of technology can change the outcomes of an unthoughtful program. QA did this quite well in partnership with the Ooloi Labs team.
  3. Plan your key measurement metrics and reporting requirements early on.
  4. Convey in detail to the tech team (Glific in this case) what your program needs, so that you get the right help on technology and feature support.
  5. Review your conversations with a smaller set of users as part of your pilot, to test the program with actual users; then plan updates and the next round of iterative development.
