Reflections from The Apprentice Project team’s experience at a 2-day deep dive into LLMs

tejasglific

OCTOBER 30, 2023


The blog has been put together by Gaurav Lagad from The Apprentice Project (gaurav.l@theapprenticeproject.org)

About the org and use-case

  • Automated Student and Teacher Support – P1 
    • Queries related to the program and content (activities)
    • Support for Multiple languages
    • Support for voice-note-based queries
  • LLM for Churn Analysis – P2
    • User Feedback Analysis
    • User Profiling & Customised learning pathways
    • Early Prediction system
  • LLM for Connecting Opportunities to Students – P3
    • User Sentiment Analysis & Segmentation done by the LLM (to understand the users’ rigor)
    • Opportunities catalog ingested by TAP
  • https://docs.google.com/presentation/d/1rKgvsaaMGX_2faDy5TKg9kX3N2nF5Veh5QXCSxUsNic/edit#slide=id.g290b3fe5042_0_65 

Top Takeaways

  • Here are 3-5 points we are taking away from the conversations/discussions that helped our understanding of LLM technology or gave us ideas on how it might (or might not) be applicable to our use case.
  • LLM for Data Analysis – Edmund’s talk on using ChatGPT-4 for data analysis makes me want to explore the same for TAP’s use cases that require data analysis.
  • LLM for LLM – Using an LLM for meta-tasks such as classification (multi-shot classification) and language detection. This will help us reduce the cost of subsequent calls and make our LLM implementation more robust (a minimal sketch follows this list).
  • Art of Prompt Design – The difference between the prompt design and the knowledge base, and the role each plays in how ChatGPT responds. Good grammar and explicit communication while designing a prompt have a significant impact on the responses ChatGPT produces.
  • OpenAI Playground – We can use the OpenAI Playground to experiment effectively with prompts and the knowledge base, and to test our LLM implementation before testing the whole ChatGPT integration end to end.
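
To make the “LLM for LLM” idea concrete, here is a minimal sketch of using a cheaper model for intent classification and language detection before the main call. This is not the code from the sprint; the model name, intent labels, and prompts are illustrative, and it assumes the pre-v1 `openai` Python package with an API key configured in the environment.

```python
import openai

# Assumes OPENAI_API_KEY is set in the environment; model name and labels are illustrative.
CLASSIFIER_MODEL = "gpt-3.5-turbo"
INTENT_LABELS = ["program_query", "activity_content", "technical_issue", "other"]

def classify_intent(user_message: str) -> str:
    """Ask a cheaper model to label the incoming query before the main LLM call."""
    # A few in-prompt examples keep the classifier consistent; the labels are hypothetical.
    system_prompt = (
        "You are a classifier for a student/teacher support bot. "
        f"Reply with exactly one label from: {', '.join(INTENT_LABELS)}.\n"
        "Example: 'When is the next art activity?' -> activity_content\n"
        "Example: 'My video is not loading' -> technical_issue"
    )
    response = openai.ChatCompletion.create(
        model=CLASSIFIER_MODEL,
        temperature=0,  # deterministic labels
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    label = response["choices"][0]["message"]["content"].strip()
    return label if label in INTENT_LABELS else "other"

def detect_language(user_message: str) -> str:
    """Same pattern for language detection, so later prompts can reply in the user's language."""
    response = openai.ChatCompletion.create(
        model=CLASSIFIER_MODEL,
        temperature=0,
        messages=[
            {"role": "system", "content": "Reply with only the ISO 639-1 code of the message's language."},
            {"role": "user", "content": user_message},
        ],
    )
    return response["choices"][0]["message"]["content"].strip().lower()
```

Routing each message through these cheap calls first means the more expensive model only sees queries that actually need it, which is where the cost saving mentioned above comes from.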

Prototyping Done

  • Edmund demonstrated writing Python code for the LLM-for-LLM piece, helping me explore the idea of calling an LLM for intent classification and for evaluating responses (similar in spirit to the classification sketch above).
  • Aman demonstrated using the OpenAI Playground to test LLM implementations quickly.
  • This screenshot demonstrates how the Playground can be used to test the variable-injection part (a rough sketch of the same idea in code follows this list).
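
The variable-injection experiment from the Playground can also be expressed in code. Below is a rough sketch of the same idea: a system-prompt template with placeholders that get filled in per query. The template wording, model name, and function are hypothetical, again assuming the pre-v1 `openai` Python package.

```python
import openai

# Illustrative template mirroring the Playground experiment: {intent} and {language}
# are injected before the call, so one template serves many kinds of queries.
SYSTEM_TEMPLATE = (
    "You are a support assistant for The Apprentice Project. "
    "The student's query was classified as '{intent}'. "
    "Answer briefly, and reply in the language with ISO code '{language}'."
)

def answer_query(user_message: str, intent: str, language: str) -> str:
    """Fill the template with per-query variables and call the main model."""
    system_prompt = SYSTEM_TEMPLATE.format(intent=intent, language=language)
    response = openai.ChatCompletion.create(
        model="gpt-4",  # model name is illustrative
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response["choices"][0]["message"]["content"]

# Example: a Hindi query that the classifier sketched earlier labelled as an activity query.
# print(answer_query("अगली गतिविधि कब है?", intent="activity_content", language="hi"))
```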

Potential Next steps / Help needed

  • Experimentation using LLMs, building on the learnings from the Glific sprint. No help needed right now as such; we will come back whenever any help is needed 🙂

Overall thoughts on the sprint

  • Great execution of the entire sprint! It was a great idea to bring everyone together. Loved the way Tejas coordinated (great timekeeper 🙂).
  • Something I loved that Tejas mentioned – “We are limited by our own imagination”. I really loved this since I personally agree with it, and this sprint was an eye-opener for me; I didn’t know about many of the use cases of LLMs.
  • One realization I had was how everyone’s problems are overlapping and how this sprint was such a good place to come, discuss, share and learn 🙂
  • Nothing in terms of AODs as of now! 🙂 
