Piyush Arora.

LLM Chatbot with Multi-Agent System

Multi-agent chatbot system enabling recruiters to query accounts and job postings through intelligent routing and specialized agent collaboration for accurate, context-aware responses.

Mastra Framework · AI SDK · OpenAI · Multi-Agent System · MongoDB · React

Project Overview

Built an intelligent LLM chatbot using the Mastra Framework, AI SDK, and OpenAI (Azure GPT-4.1 Mini), designed for recruiters to query their accounts and job postings. The system implements a multi-agent architecture where the root agent acts as an intelligent router: it handles simple conversations such as greetings and general inquiries directly, and analyzes query complexity to route account- and job-related queries to a specialized sub-agent. The sub-agent collaborates with real-time data sources to fetch information about job postings, account verification status, and user profiles, then formats comprehensive responses that flow back through the root agent. This architecture ensures accurate, context-aware responses while maintaining seamless conversation continuity across both agents through MongoDB-based memory storage with shared thread and resource IDs.

Mastra Framework Backend

The backend uses Mastra Framework to orchestrate agents, tools, and workflows with built-in memory management. The framework handles agent registration, MongoDB storage configuration, and API route setup for seamless integration.
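The registration-and-resolution idea can be sketched in plain TypeScript. This is an illustrative stand-in, not the actual Mastra API: `AgentRegistry`, `MemoryContext`, and the handler signature are all hypothetical.

```typescript
// Hypothetical stand-in for the framework's orchestration: agents are registered
// under a name and resolved per request, all sharing the same memory context type.
type MemoryContext = { threadId: string; resourceId: string };
type AgentHandler = (query: string, ctx: MemoryContext) => Promise<string>;

class AgentRegistry {
  private agents = new Map<string, AgentHandler>();

  register(name: string, handler: AgentHandler): void {
    this.agents.set(name, handler);
  }

  async invoke(name: string, query: string, ctx: MemoryContext): Promise<string> {
    const agent = this.agents.get(name);
    if (!agent) throw new Error(`Unknown agent: ${name}`);
    return agent(query, ctx);
  }
}

const registry = new AgentRegistry();
registry.register("rootAgent", async (q) => `root handled: ${q}`);
registry.register("recruiterSubAgent", async (q) => `sub-agent handled: ${q}`);
```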

Root Agent & Sub-Agent Architecture

The system uses a root agent and sub-agent pattern where the root agent handles simple queries directly and routes complex queries to a specialized sub-agent. The root agent analyzes query complexity and uses tools to invoke the sub-agent when needed, ensuring clean separation of concerns.

Example flow: Root agent receives query → Analyzes complexity → Routes simple queries directly or invokes sub-agent via tool → Sub-agent uses specialized tools to fetch data → Response flows back through root agent → Streams to user
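The flow above can be sketched as follows. The keyword check is a deliberately simplified stand-in for the LLM's routing decision, and the function names are illustrative:

```typescript
// Simplified stand-in for the root agent's routing decision: real routing is done
// by the LLM, but the shape of the flow is the same.
type Route = "direct" | "sub-agent";

const ACCOUNT_KEYWORDS = ["job", "posting", "account", "verification", "profile"];

function classify(query: string): Route {
  const q = query.toLowerCase();
  return ACCOUNT_KEYWORDS.some((k) => q.includes(k)) ? "sub-agent" : "direct";
}

function answerDirectly(query: string): string {
  return "Happy to help!";
}

function subAgent(query: string): string {
  // In the real system this agent calls tools against live data sources.
  return `fetched data for "${query}"`;
}

function rootAgent(query: string): string {
  if (classify(query) === "direct") {
    return `root: ${answerDirectly(query)}`;
  }
  // Complex queries are delegated; the sub-agent's answer flows back through the root.
  return `root: ${subAgent(query)}`;
}
```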

Memory & Context Management

Conversation context is maintained across agents using thread and resource IDs stored in MongoDB. The frontend sends memory context with each request, which is extracted by backend middleware and stored in runtime context. Both root and sub-agents use the same thread and resource IDs, ensuring seamless conversation continuity across agent boundaries.

Frontend Integration

The frontend uses AI SDK's useChat hook with custom transport to handle streaming responses and tool invocations. Memory context (thread and resource IDs) is included with each request, and tool invocations are displayed with user-friendly messages and interactive elements for human-in-the-loop interactions.
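Attaching the memory context to each outgoing request can be done with a small helper inside the custom transport. A sketch, where the `memory` field name is an assumption that must match whatever the backend middleware reads:

```typescript
// Sketch: merge the persistent thread/resource IDs into the JSON body of each
// chat request so the backend can resume the correct conversation.
type MemoryIds = { threadId: string; resourceId: string };
type FetchInit = { method?: string; headers?: Record<string, string>; body?: string };

function withMemoryContext(init: FetchInit, ids: MemoryIds): FetchInit {
  const body = init.body ? JSON.parse(init.body) : {};
  return { ...init, body: JSON.stringify({ ...body, memory: ids }) };
}
```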

Key Features

Root Agent & Sub-Agent Pattern

Root agent handles simple conversations directly and routes complex queries to a specialized sub-agent via tools.

Tool-Based Sub-Agent Invocation

The root agent uses tools to invoke sub-agents, creating a clean separation of concerns: the root handles routing while the sub-agent handles specialized queries with its own tools.

Conversation Continuity

Thread and resource IDs ensure all agents share the same conversation context, enabling follow-up questions across agents.

Human-in-the-Loop

Interactive UI elements (job selectors, city selectors) allow users to provide input during tool execution.
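One way such tool invocations can be mapped to UI elements (a sketch; the tool names and the `UiElement` shape are illustrative, not the actual implementation):

```typescript
// Illustrative mapping from a tool invocation (emitted mid-stream by an agent)
// to the interactive element the chat UI should render for it.
type ToolInvocation = { toolName: string; args: Record<string, unknown> };
type UiElement =
  | { kind: "job-selector"; jobs: string[] }
  | { kind: "city-selector"; cities: string[] }
  | { kind: "plain-text"; text: string };

function renderToolInvocation(inv: ToolInvocation): UiElement {
  switch (inv.toolName) {
    case "selectJob":
      return { kind: "job-selector", jobs: (inv.args.jobs as string[]) ?? [] };
    case "selectCity":
      return { kind: "city-selector", cities: (inv.args.cities as string[]) ?? [] };
    default:
      // Tools without a dedicated widget fall back to a friendly status line.
      return { kind: "plain-text", text: `Running ${inv.toolName}` };
  }
}
```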

Streaming Responses

Real-time streaming of agent responses using AI SDK's streaming capabilities for better UX.
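The streaming behaviour can be illustrated with a plain async generator, a stand-in for the AI SDK's actual stream:

```typescript
// Stand-in for a streamed model response: tokens arrive incrementally and the UI
// can render each chunk as soon as it is yielded.
async function* streamTokens(text: string): AsyncGenerator<string> {
  for (const token of text.split(" ")) {
    yield token + " ";
  }
}

async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk; // in the real UI, append each chunk to the message bubble here
  }
  return out.trimEnd();
}
```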

MongoDB Memory

Persistent conversation storage in MongoDB enables context retrieval across sessions and agents.

Architecture Benefits

  • Modularity: Each agent has a specific role, making the system easy to maintain and extend
  • Scalability: New agents and tools can be added without modifying existing code
  • Context Preservation: Shared thread/resource IDs ensure conversation continuity across agent boundaries
  • Flexibility: Tool-based routing allows dynamic agent selection based on query content
  • User Experience: Streaming responses and interactive tool UI provide real-time feedback
  • Observability: Mastra's built-in observability features enable monitoring and debugging

Technical Stack

Mastra Framework · AI SDK · OpenAI (Azure GPT-4.1 Mini) · MongoDB · React · TypeScript · Next.js · Streaming