Highlights
- Easy and Accessible Education Everywhere: Our versatile LLM meets the client's mission by providing smart, cost-effective educational services to a specific target audience, enhancing the functionality of the company's main product.
- Artificial Intelligence and Machine Learning Development Expertise at Scale: We leveraged our rich experience in building AI/ML products for businesses across different verticals and delivered a highly efficient, lightweight LLM trained to cater to a particular user group.
Client
Our client is a prominent B2B company that provides services in the Educational Technology industry. We have already developed an innovative video conferencing platform for this client, building a viable alternative to Zoom and Meet, and have become an integral part of the client’s success. The company was founded in 2020 and by 2025 had become an industry-leading player.
Product
The product is part of an AI-powered online education platform, featuring a chatbot specifically designed for children aged 9–12 years old and K-12 institutions. The client discovered that this age group requires a distinct approach to learning, which differs significantly from that of 15-year-old kids, for example, so a tailored solution was needed. The mission was to create a custom Large Language Model (LLM) that would help kids get answers to questions and generate valuable content.
Goals and objectives
- Provide a Powerful Chatbot for Learning Support: Develop an interactive LLM-powered chatbot that delivers age-appropriate guidance, answers questions in real-time, and fosters independent learning by adapting responses to the unique comprehension level and curiosity of children in the target age group.
- Automate Age-Appropriate Content Generation: Implement an intelligent content creation mechanism that generates content tailored to the learning levels of children, curriculum standards, and interests, reducing the manual workload for educators and ensuring consistent educational quality.
- Optimize Platform Costs Through Scalable AI Integration: Use scalable LLM deployment strategies to minimize infrastructure and content development costs, securing accessible educational experiences.
Project challenge
- Collecting High-Quality, Age-Appropriate Training Data: Overcome the difficulty of sourcing and curating diverse, reliable, and suitable educational data to train the LLM, ensuring it aligns with both pedagogical standards and the cognitive abilities of the target audience.
- Tuning the LLM for Child-Centric Comprehension and Interaction: Fine-tune LLMs to consistently understand and generate responses that perfectly match the comprehension, tone, and engagement style appropriate for young learners.
Solution
To complete this project, we assigned a vetted AI/ML developer with years of experience in delivering projects for businesses of all sizes and diverse industries.
The project began with the data collection phase. Our developer collaborated with experts on the client's side to identify and collect high-quality, age-appropriate learning materials, including academic texts, child-friendly FAQs, and interactive exercises. Synthetic data generation based on anonymized real-world queries was also incorporated.
Next, our developer moved to model training and evaluation. They leveraged open-source LLM architectures and fine-tuned the model to understand and respond in a manner aligned with children’s comprehension levels and emotional sensitivity. The evaluation process included both automated metrics and human-in-the-loop testing, where educators reviewed the chatbot’s responses for clarity, engagement, and pedagogical value.
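To illustrate the kind of automated metric that can run alongside human-in-the-loop review, a readability gate can flag chatbot responses that exceed the target reading level before they reach educators. The sketch below is a hypothetical example, not the client's actual pipeline; it uses the standard Flesch-Kincaid grade-level formula with a rough vowel-group syllable heuristic:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels; drop a silent
    # trailing "e"; every word has at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

def is_age_appropriate(text: str, max_grade: float = 6.0) -> bool:
    # Children aged 9-12 roughly correspond to US grades 4-6.
    return fk_grade(text) <= max_grade
```

A response that fails the gate can be routed back for regeneration or flagged for educator review instead of being shown to the child.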
The model was deployed on Amazon Bedrock, and we used the LoRA (Low-Rank Adaptation) technique to efficiently train a custom LLM tailored to the client's domain with minimal resource overhead. This allowed us to scale quickly while keeping infrastructure costs in check. LoRA injects domain-specific behaviors without altering the base model architecture, ensuring a lightweight, maintainable solution.
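The LoRA idea mentioned above can be sketched in a few lines of PyTorch: a frozen weight matrix W is augmented with a trainable low-rank product BA, so only r·(in + out) parameters are updated instead of in·out. This is a minimal illustration of the technique, not the production training code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update:
    y = base(x) + (alpha / r) * x @ A.T @ B.T,
    where A has shape (r, in_features) and B has shape (out_features, r)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.scale = alpha / r
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the wrapped layer initially behaves
        # exactly like the base layer.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

For a 64x64 layer with r=4, only 512 adapter parameters are trainable versus 4,160 in the base layer, which is what keeps fine-tuning cheap at scale.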
The system was built to support ongoing improvements and future iterations, including multi-language support and gamified learning features.
Tech Stack
- AWS Bedrock
- PyTorch
- Hugging Face
Our results
Our developer successfully delivered the full scope of the project on time and on budget, meeting the client’s expectations in terms of quality and performance.
- Outstanding Cost-Effectiveness and Flexibility: Our model is lightweight, with only 3 billion parameters compared to, for example, GPT-3's 175 billion. The model is easy to self-host and is also conveniently available via AWS Bedrock.
- Simplicity and Data Security: We delivered a highly secure and effective solution that can be used out of the box, without the need for prompt engineering or hiring specialized technical teams.
- Ongoing Collaboration: After the success of this model, we continue our collaboration, as the client plans to train more customized LLMs for other projects.