Stateful LLM Chatbot Server with Gemini 2.5 Pro using LangGraph

In this tutorial, we upgrade the stateless chatbot server by adding stateful memory support using LangGraph. This enables more human-like, multi-turn conversations where the model remembers previous messages.

Key Features of This Upgrade

  • Powered by Gemini 2.5 Pro via LangChain's integration
  • Uses LangGraph's MemorySaver for session memory
  • Built with Flask and CORS enabled
  • Maintains per-user conversation history using thread_id

Difference from the Stateless Version

The main differences from the stateless version are:

  • State Management: Introduces a State class using TypedDict to track conversation history via messages, merged by the add_messages reducer (see the sketch after this list).
  • LangGraph Integration: Defines a stateful workflow using StateGraph and persists memory using MemorySaver.
  • Session Memory: Associates chat sessions with a unique thread_id (e.g., user_124) using LangGraph's configurable input.
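
For intuition, here is a minimal standalone sketch of the add_messages reducer named above: it appends an update to the existing history instead of overwriting it, which is why a graph node can return only the new message.

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

existing = [HumanMessage("Hi")]
update = [AIMessage("Hello! How can I help?")]
merged = add_messages(existing, update)
# merged now holds both messages; a node that returns {"messages": [response]}
# therefore appends to the history rather than replacing it
print([m.content for m in merged])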

Complete Stateful Chatbot Code (with Explanation)

import os
from dotenv import load_dotenv
from flask import Flask, request, jsonify
from flask_cors import CORS

# Typing and LangChain / LangGraph Imports
from typing import Sequence
from typing_extensions import Annotated, TypedDict
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, BaseMessage
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

# Load environment variables from .env
load_dotenv(".env")
api_key = os.getenv("GOOGLE_API_KEY")

# Initialize Flask server
app = Flask(__name__)
CORS(app)

# Setup Gemini 2.5 Pro LLM
llm_pro = ChatGoogleGenerativeAI(
    model='gemini-2.5-pro-exp-03-25',
    temperature=0.5,
    google_api_key=api_key
)

# Prompt with system role + memory placeholder
prompt = ChatPromptTemplate.from_messages([
    ('system', 'You are an agent who is expert in deep learning.'),
    MessagesPlaceholder(variable_name="messages"),
])

# Define memory-aware state schema
class State(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]

# Chain: prompt → model
runnable = prompt | llm_pro

# Graph node: run the model on the accumulated conversation history
def generator(state: State):
    response = runnable.invoke(state)
    return {"messages": [response]}  # add_messages appends this to the history

# Create LangGraph workflow
workflow = StateGraph(state_schema=State)
workflow.add_node("model", generator)
workflow.add_edge(START, "model")
workflow.add_edge("model", END)

# Use memory saver to store conversations
memory = MemorySaver()
chat_app = workflow.compile(checkpointer=memory)

# Demo session config: one hard-coded thread_id; in practice derive it per user
config = {"configurable": {"thread_id": "user_124"}}

# API endpoint
@app.route("/chat", methods=["POST"])
def chat():
    data = request.get_json()
    user_query = data.get('user_query')

    if not user_query:
        return jsonify({'response': "Please enter your question."})

    # Pass message and user config to LangGraph
    output = chat_app.invoke(
        input={"messages": [HumanMessage(user_query)]},
        config=config
    )

    # Extract model response
    response = output["messages"][-1].content
    return jsonify({'response': response})

if __name__ == '__main__':
    app.run(debug=True, port=8000)
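
With the server running, you can exercise the endpoint from Python (assuming the requests package is installed). Because every call shares the hard-coded user_124 thread, a follow-up question can refer back to an earlier one:

import requests

resp = requests.post(
    "http://localhost:8000/chat",
    json={"user_query": "Explain backpropagation in one sentence."},
)
print(resp.json()["response"])

follow_up = requests.post(
    "http://localhost:8000/chat",
    json={"user_query": "Now give an example of it."},
)
print(follow_up.json()["response"])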

What Makes It Stateful?

In a stateless chatbot, each user query is independent and has no memory. This version solves that by:

  • Maintaining a messages history in the State schema
  • Persisting and retrieving conversation state using MemorySaver
  • Thread-based memory via configurable.thread_id (see the sketch after this list)
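
A minimal sketch of the effect, assuming the chat_app compiled above: two calls that share a thread_id also share history, while a fresh thread_id starts empty.

from langchain_core.messages import HumanMessage

cfg = {"configurable": {"thread_id": "user_124"}}
chat_app.invoke({"messages": [HumanMessage("My name is Ada.")]}, config=cfg)
out = chat_app.invoke({"messages": [HumanMessage("What is my name?")]}, config=cfg)
print(out["messages"][-1].content)  # the model can recall "Ada" from the checkpoint

# A new thread_id has no prior checkpoint, so the same question starts from scratch
fresh = {"configurable": {"thread_id": "user_999"}}
out2 = chat_app.invoke({"messages": [HumanMessage("What is my name?")]}, config=fresh)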

Conclusion

This upgraded chatbot server now supports stateful, context-aware conversation using Gemini 2.5 Pro and LangGraph. One caveat: MemorySaver keeps checkpoints in process memory, so conversation history is lost whenever the server restarts; for production use, swap in a persistent checkpointer.
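
As a minimal sketch of that swap, assuming the langgraph-checkpoint-sqlite package is installed, the in-memory saver can be replaced with a SQLite-backed one:

import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver

# check_same_thread=False lets Flask's worker threads share the connection
conn = sqlite3.connect("checkpoints.db", check_same_thread=False)
memory = SqliteSaver(conn)
chat_app = workflow.compile(checkpointer=memory)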

Ready to build your own? Try deriving the thread_id dynamically from user sessions or cookies so each user gets an isolated history.
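
As a sketch of that idea, inside the /chat handler you could derive the thread from the request body instead of the module-level config (session_id here is a hypothetical field the client would send):

# Inside chat(), after parsing the JSON body:
session_id = data.get("session_id", "anonymous")  # hypothetical client-sent field
config = {"configurable": {"thread_id": f"user_{session_id}"}}
output = chat_app.invoke(
    input={"messages": [HumanMessage(user_query)]},
    config=config,
)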
