How to Build a Fullstack AI Agent with LangGraphJS and NestJS (Using Agent Initializr)

By Ali Ibrahim
Last time, we compared LangGraph.js and LlamaIndex.ts to see which framework makes building AI agents in JavaScript easier. Today, we’re going one step further: building a production-ready AI agent backend using NestJS and LangGraph. We’ll break down the architecture, the implementation details, and how to harness LangGraph’s power to build stateful, conversational agents. And to make things faster for you, I’ll also introduce Agent Initializr, a tool I built to help you scaffold AI agent backends in minutes.
Why LangGraphJS?
As mentioned in the previous article, LangGraph is a powerful agent framework, especially when used with LangChain.js. But its real strength is flexibility: it’s not tightly coupled to any one library. In this project, I’m using LangChain.js, but you’re free to pair LangGraph with any AI framework that suits your needs.
Why Build Your Own Backend?
Many agent frameworks let you deploy directly to their cloud, which is perfect for quick prototypes or standalone projects. But if you're building scalable AI agents that need to integrate with your systems or evolve with new features, a custom backend gives you the control and flexibility that hosted solutions can't offer.
Why NestJS?
Coming from a background in large-scale Java systems with Spring Boot, I’ve learned to appreciate the value of a well-structured framework.
NestJS brings that same level of organization and scalability to the TypeScript ecosystem, something often undervalued when choosing a backend framework.
While lighter frameworks like Fastify may be enough for quick prototypes, Agent Initializr is built for serious builders, not just tinkerers. That’s why I chose a framework optimized for long-term maintainability and growth.
Here are four reasons NestJS stands out:
- Modular architecture that scales cleanly as your project grows
- Built-in guards, middleware, and auth for secure, extensible APIs
- Injectable config and logging, perfect for cloud-native apps
- Familiar to TypeScript devs, making team onboarding easier
Scaffolding the Project
Now that we’ve covered the “why,” let’s get into the “how.”
You have two options:
- Manually create a NestJS project using the CLI and integrate LangGraph yourself
- Or, use Agent Initializr to instantly scaffold everything for you
To keep things simple, we’ll go with the second option.
Getting Started with Agent Initializr
Agent Initializr is a tool I built to help developers scaffold production-ready AI agent backends using NestJS and LangGraph.
It generates a fully configured project based on your settings, so you can start building instead of wiring up boilerplate.
How to Use Agent Initializr
- Go to initializr.agentailor.com, fill in your agent configuration, and download the project.
- Or use this prefill link to generate a sample project instantly.
The generated backend includes:
- A pre-configured NestJS setup
- LangGraph.js agent integration
- LLM provider configuration (OpenAI or Google)
- Database setup for conversation history
- Ready-to-use API endpoints
- Docker Compose for local development
This setup eliminates boilerplate and lets you focus on implementing your agent logic.
Project Highlights
- Real-time streaming responses using Server-Sent Events
- Conversation persistence in a relational database
- Support for multiple LLM providers (OpenAI, Google Gemini)
- Redis-based pub/sub for real-time messaging
- A clean and maintainable architecture designed for growth
Architecture Overview
The project is structured around three core modules:
- Agent Module – Implements the core AI logic using LangGraph
- API Module – Exposes HTTP endpoints and handles DTOs
- Messaging Module – Manages real-time communication via Redis
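In a generated project, these map onto standard NestJS modules wired together in the root module. A rough sketch (the module paths here are illustrative):

```typescript
import { Module } from '@nestjs/common'
import { AgentModule } from './agent/agent.module'
import { ApiModule } from './api/api.module'
import { MessagingModule } from './messaging/messaging.module'

// Root module wiring the three core modules together
@Module({
  imports: [AgentModule, ApiModule, MessagingModule],
})
export class AppModule {}
```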
Notes
- While LangGraph supports many agent types, the Initializr currently scaffolds a ReAct-style agent. These agents combine reasoning and action to solve complex tasks step by step.
- LangGraph provides a `createReactAgent` utility to simplify ReAct agent setup, but in this project we use a custom builder. This gives us full control over the state machine and tool integration, making it easier to extend and evolve the agent’s behavior over time.
Agent Module
The heart of the application is the Agent Module, where the LangGraph logic lives. Here is how it is implemented in the generated project:
1. React Agent Builder
The `ReactAgentBuilder` class sets up the LangGraph state machine:
```typescript
import { StateGraph, MessagesAnnotation, START, END } from '@langchain/langgraph'
import { ToolNode } from '@langchain/langgraph/prebuilt'
import { BaseChatModel } from '@langchain/core/language_models/chat_models'

export class ReactAgentBuilder {
  private readonly stateGraph: StateGraph<typeof MessagesAnnotation>
  private readonly toolNode: ToolNode

  constructor(
    private readonly tools: any[],
    private readonly llm: BaseChatModel
  ) {
    this.toolNode = new ToolNode(tools)
    this.stateGraph = new StateGraph(MessagesAnnotation)
    this.initializeGraph()
  }

  private initializeGraph(): void {
    this.stateGraph
      .addNode('agent', this.callModel.bind(this))
      .addNode('tools', this.toolNode)
      .addEdge(START, 'agent')
      .addConditionalEdges('agent', this.shouldContinue.bind(this), ['tools', END])
      .addEdge('tools', 'agent')
  }
}
```
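The two node handlers wired into the graph, plus the `build` method used later by the factory, might look roughly like the sketch below. These are methods inside `ReactAgentBuilder`; the sketch assumes a tool-calling-capable model (`AIMessage` comes from `@langchain/core/messages`), and the generated project's implementation may differ in its details.

```typescript
// 'agent' node: invoke the LLM, with the tools bound, on the message history
private async callModel(state: typeof MessagesAnnotation.State) {
  const response = await this.llm.bindTools!(this.tools).invoke(state.messages)
  return { messages: [response] }
}

// Conditional edge: route to the 'tools' node when the model requested a
// tool call, otherwise end the run
private shouldContinue(state: typeof MessagesAnnotation.State) {
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage
  return lastMessage.tool_calls?.length ? 'tools' : END
}

// Compile the graph, optionally attaching a checkpointer for persistence
public build(checkpointer?: PostgresSaver) {
  return this.stateGraph.compile({ checkpointer })
}
```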
2. State Management with PostgreSQL
We use LangGraph's `PostgresSaver` for persistent state management:
```typescript
import { PostgresSaver } from '@langchain/langgraph-checkpoint-postgres'

// username, password, host, port, and dbName come from the app's
// environment configuration
export function createPostgresMemory(): PostgresSaver {
  const connectionString = `postgresql://${username}:${password}@${host}:${port}/${dbName}`
  return PostgresSaver.fromConnString(connectionString)
}
```
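One detail worth noting: `PostgresSaver` needs its tables created before first use. A minimal wiring sketch, reusing the names from this article:

```typescript
// Create the saver once at startup; setup() creates the checkpoint tables
// if they don't exist yet
const checkpointer = createPostgresMemory()
await checkpointer.setup()

// Hand it to the factory (shown next) so conversation state survives restarts
const agent = AgentFactory.createAgent(ModelProvider.OPENAI, [], checkpointer)
```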
3. Agent Factory Pattern
The `AgentFactory` provides a clean way to create agents with different LLM providers:
```typescript
import { CompiledStateGraph, MessagesAnnotation } from '@langchain/langgraph'
import { PostgresSaver } from '@langchain/langgraph-checkpoint-postgres'
import { ChatOpenAI } from '@langchain/openai'

// ModelProvider is the project's own enum of supported LLM providers
export class AgentFactory {
  public static createAgent(
    modelProvider: ModelProvider,
    tools: any[],
    checkpointer?: PostgresSaver
  ): CompiledStateGraph<typeof MessagesAnnotation, any> {
    switch (modelProvider) {
      case ModelProvider.OPENAI: {
        return new ReactAgentBuilder(
          tools,
          new ChatOpenAI({
            model: process.env.OPENAI_MODEL,
          })
        ).build(checkpointer)
      }
      // Add other providers (e.g. Google Gemini) here
      default:
        throw new Error(`Unsupported model provider: ${modelProvider}`)
    }
  }
}
```
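The compiled graph behaves like any LangGraph runnable; the `thread_id` passed in the config is what ties a call to a persisted conversation. For example:

```typescript
import { HumanMessage } from '@langchain/core/messages'

// thread_id links this invocation to a stored conversation thread
const result = await agent.invoke(
  { messages: [new HumanMessage('Hello, AI!')] },
  { configurable: { thread_id: 'unique-thread-id' } }
)
console.log(result.messages[result.messages.length - 1].content)
```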
With the agent module in place, let's look at how the API and messaging modules expose it.
API Module
The application exposes three main endpoints:
- Chat Endpoint
The chat endpoint allows users to send messages to the agent and receive responses.
```typescript
@Post('chat')
async chat(@Body() messageDto: MessageDto): Promise<MessageResponseDto> {
  return await this.agentService.chat(messageDto);
}
```
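The exact DTO fields come from your generated project; judging from the request payload in the usage example later in this article, `MessageDto` looks roughly like this (a hypothetical sketch, without the validation decorators a real project would add):

```typescript
// Hypothetical shape, inferred from the chat payload used later in this article
export class MessageDto {
  threadId: string
  type: 'human'
  content: Array<{ type: 'text'; text: string }>
}
```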
- Streaming Endpoint
The streaming endpoint uses Server-Sent Events (SSE) to provide real-time responses from the agent.
```typescript
@Sse('stream')
async stream(
  @Query() messageDto: SseMessageDto,
): Promise<Observable<SseMessage>> {
  return await this.agentService.stream(messageDto);
}
```
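Under the hood, the service has to bridge LangGraph's async event stream into the RxJS `Observable` that `@Sse` expects. A minimal sketch of that bridge (not the generated code; it assumes LangChain's `streamEvents` API, and `streamAgentTokens` is a hypothetical helper name):

```typescript
import { Observable } from 'rxjs'

// Bridge LangGraph's streamEvents async iterator into an RxJS Observable
function streamAgentTokens(agent: any, threadId: string, content: string): Observable<{ data: string }> {
  return new Observable((subscriber) => {
    (async () => {
      const events = agent.streamEvents(
        { messages: [{ role: 'user', content }] },
        { version: 'v2', configurable: { thread_id: threadId } }
      )
      for await (const event of events) {
        // Forward each LLM token as an SSE payload
        if (event.event === 'on_chat_model_stream' && event.data.chunk?.content) {
          subscriber.next({ data: JSON.stringify({ content: event.data.chunk.content }) })
        }
      }
      subscriber.complete()
    })().catch((err) => subscriber.error(err))
  })
}
```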
- History Endpoint
The history endpoint retrieves the conversation history for a specific thread.
```typescript
@Get('history/:threadId')
async getHistory(
  @Param('threadId') threadId: string,
): Promise<MessageResponseDto[]> {
  return await this.agentService.getHistory(threadId);
}
```
Messaging Module
The application uses Redis for pub/sub messaging, enabling real-time communication between the agent and clients:
```typescript
import { Injectable, OnModuleInit } from '@nestjs/common'
import { createClient, RedisClientType } from 'redis'

@Injectable()
export class RedisService implements OnModuleInit {
  private readonly client: RedisClientType

  constructor() {
    this.client = createClient({
      url: `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`,
    })
  }

  // Connect once the module is initialized
  async onModuleInit(): Promise<void> {
    await this.client.connect()
  }

  async publish(channel: string, message: any): Promise<void> {
    await this.client.publish(channel, JSON.stringify(message))
  }
}
```
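The publish side is only half of pub/sub. In node-redis, subscriptions need a dedicated connection, so a subscriber method inside `RedisService` could look like this sketch (the method name is illustrative):

```typescript
// node-redis requires a dedicated connection for subscriptions,
// hence the duplicate()
async subscribe(channel: string, onMessage: (data: any) => void): Promise<void> {
  const subscriber = this.client.duplicate()
  await subscriber.connect()
  await subscriber.subscribe(channel, (raw) => onMessage(JSON.parse(raw)))
}
```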
Usage Example
Here's how to interact with the agent:
```typescript
// Chat with the agent
const response = await fetch('http://localhost:3001/api/agent/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    threadId: 'unique-thread-id',
    content: [{ type: 'text', text: 'Hello, AI!' }],
    type: 'human',
  }),
})

// Stream responses
const sse = new EventSource(
  'http://localhost:3001/api/agent/stream?threadId=unique-thread-id&content=Hello'
)
sse.onmessage = (event) => {
  const data = JSON.parse(event.data)
  console.log(data.content)
}
```
Complete Development Workflow with Agent Initializr
Once you've generated your project using Agent Initializr, here's the typical development workflow:
1. Project Setup

```bash
# Clone your generated repository
git clone <your-repo>
cd <your-repo>

# Install dependencies
pnpm install

# Start required services
docker compose up -d

# Start development server
pnpm run start:dev
```
2. Configuration

- Update the `.env` file with your API keys and configuration
- The project comes with pre-configured environment variables based on your Initializr selections
3. Customization

- Add custom tools in `src/agent/tools/` (see the sketch after this list)
- Modify agent behavior in `src/agent/agent.builder.ts`
- Add new endpoints in `src/api/agent/controller/`
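For instance, a custom tool dropped into `src/agent/tools/` can be as small as the following (a hypothetical example using LangChain's `tool` helper and `zod`):

```typescript
import { tool } from '@langchain/core/tools'
import { z } from 'zod'

// Hypothetical tool: returns the current server time; pass it to the agent
// via the tools array handed to AgentFactory.createAgent
export const getCurrentTime = tool(async () => new Date().toISOString(), {
  name: 'get_current_time',
  description: 'Returns the current server time as an ISO-8601 string',
  schema: z.object({}),
})
```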
4. Testing

- Unit tests are pre-configured with Jest
- E2E tests are set up for API endpoints
- Run tests with `pnpm test` and `pnpm test:e2e`
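As a quick illustration, an e2e test against the chat endpoint might look like this (a sketch using supertest, which ships with Nest's default e2e setup; the payload mirrors the usage example above):

```typescript
import * as request from 'supertest'

describe('AgentController (e2e)', () => {
  it('POST /api/agent/chat returns the agent reply', () => {
    return request('http://localhost:3001')
      .post('/api/agent/chat')
      .send({
        threadId: 'e2e-thread',
        type: 'human',
        content: [{ type: 'text', text: 'Hello, AI!' }],
      })
      .expect(201)
  })
})
```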
Testing Your Agent with Agentailor Chat UI
To help you quickly test and interact with your generated agent backend, I've created Agentailor Chat UI, a ready-to-use chat interface that integrates seamlessly with the backend generated by Agent Initializr.
Setting Up the Chat UI
- Clone the UI repository:

```bash
git clone https://github.com/IBJunior/agentailor-chat-ui.git
cd agentailor-chat-ui
```
- Install dependencies and start the development server:

```bash
pnpm install
pnpm dev
```
The chat UI will be available at http://localhost:3000 by default.
Features
The chat interface comes with:
- Real-time streaming responses
- Markdown message rendering
- Thread management
- Message history viewer
Customization Options
You can customize the UI to match your needs; all the details are in the `README.md` file of the chat UI repository.
Conclusion
In this article, we explored how to build a production-ready AI agent backend using NestJS and LangGraph, with the help of Agent Initializr to scaffold the project.
The generated setup provides a clean architecture, real-time communication, LLM integration, and conversation persistence, all designed to be extended as your agents grow more complex.
While the example focused on the core agent logic without tools, the scaffolded project is fully extensible: you can easily add custom tools and inject them via the `AgentFactory`.
This is just the beginning: Agent Initializr is still evolving. I'm planning to support more LLM providers, agent types, and ready-to-use tools in future versions.
If you have suggestions, feedback, or want to see a feature added, I’d love to hear from you.
Feel free to reach out on X or LinkedIn.