ConvoZen demonstrates enterprise applications of Small Language Models

NoBroker’s co-founder Akhil Gupta and his small engineering team in Bengaluru were grappling with a persistent problem: improving the productivity of customer service agents who handle large volumes of enquiries from potential buyers of houses and flats. High attrition among agents made it difficult to maintain service quality, and ready-made AI solutions on the market were too expensive to adopt at scale.

To address this, NoBroker built its own AI tools over the past five years, developing a suite of small language models (SLMs) and conversational analytics models not only for internal use but also as standalone offerings for other businesses. These include task-specific models and adaptations of open large language models. Together, they form the backbone of NoBroker’s flagship conversational analytics platform, ConvoZen. 

A major strength of these models is their ability to understand multiple Indian languages and code-mixed varieties like Hinglish. They can summarise conversations, highlight mistakes, perform sentiment analysis and provide agents with contextual information to enhance customer interactions. Gupta emphasises that, rather than attempting to compete with large foundational models, the aim was always to solve specific business problems efficiently and cost-effectively.

NoBroker achieved this with a relatively modest setup: a few hundred moderately powerful GPUs and around 25 engineers. Their speech and language models, internally named the Maya Series, include MayaConformer, a 50-million-parameter model trained on both open and in-house datasets across Indian languages. Gupta says that with the right training data, small language models can be highly effective for narrow, targeted use cases, and that even a small amount of curated call recordings can reduce error rates further.
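To make the shape of such a system concrete, here is a minimal, purely illustrative sketch of a conversational-analytics pipeline of the kind described above: transcription, summarisation and sentiment analysis chained per call. All function names, the keyword lists and the stub logic are hypothetical; the article does not describe ConvoZen's or the Maya Series' actual APIs, and real deployments would use trained models (e.g. a conformer ASR model and an SLM summariser) in place of these stand-ins.

```python
# Illustrative sketch only — names and logic are hypothetical, not NoBroker's API.
from dataclasses import dataclass

# Toy sentiment lexicons standing in for a trained sentiment model.
POSITIVE = {"thanks", "great", "helpful", "resolved"}
NEGATIVE = {"delay", "problem", "complaint", "refund"}


@dataclass
class CallAnalysis:
    summary: str
    sentiment: str


def transcribe(audio_chunks):
    # Stand-in for a small ASR model such as a ~50M-parameter conformer;
    # here the "audio" is already text, so we just join the chunks.
    return " ".join(audio_chunks)


def summarise(text, max_words=12):
    # Stand-in for an SLM summariser: keep only the first few words.
    words = text.split()
    head = " ".join(words[:max_words])
    return head + ("..." if len(words) > max_words else "")


def analyse_sentiment(text):
    # Stand-in for a sentiment classifier: count lexicon hits.
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"


def analyse_call(audio_chunks):
    # Chain the stages the way a per-call analytics pipeline would.
    transcript = transcribe(audio_chunks)
    return CallAnalysis(summary=summarise(transcript),
                        sentiment=analyse_sentiment(transcript))


result = analyse_call(["customer asked about flat rent",
                       "agent was helpful and customer said thanks"])
print(result.sentiment)  # → positive
```

The point of the sketch is the architecture, not the stubs: each stage is a small, swappable model behind a plain function boundary, which is what lets narrow task-specific models be trained, evaluated and replaced independently.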

To know more visit:
https://timesofindia.indiatimes.com/business/india-business/small-language-models-too-have-powerful-use-cases/articleshow/109416710.cms