What is LangChain and Why Use It?
LangChain is an open-source framework that simplifies working with Large Language Models (LLMs) such as Google's Gemini and OpenAI's GPT models. It adds structure, tools, and memory on top of raw model calls, making it easier to build real-world applications such as chatbots, assistants, agents, and AI-enhanced software.
Why Use LangChain for LLM Projects?
- Chainable Components: Easily build pipelines combining prompts, LLMs, tools, and memory (see the sketch after this list).
- Multi-Model Support: Work with Gemini, OpenAI, Anthropic, Hugging Face, etc.
- Built-in Templates: Manage prompts more effectively.
- Supports Multi-Turn Chat: Manage complex interactions with memory and roles.
- Tool and API Integration: Let the model interact with external APIs or functions.
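To make the chainable-components idea concrete, here is a minimal sketch of a three-stage pipeline. The prompt text, the {topic} variable, and the StrOutputParser stage are illustrative additions, and it assumes a valid GOOGLE_API_KEY is already set in the environment.
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_google_genai import ChatGoogleGenerativeAI
# Compose prompt -> LLM -> output parser with the pipe operator
prompt = PromptTemplate.from_template('Summarize {topic} in one sentence.')  # illustrative prompt
llm = ChatGoogleGenerativeAI(model='gemini-2.0-flash')
chain = prompt | llm | StrOutputParser()
print(chain.invoke({'topic': 'LangChain'}))  # StrOutputParser yields a plain string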
Let's Walk Through the Code: Gemini + LangChain
I will break the code into 4 main parts, each showcasing different features of LangChain and the Gemini API.
Part 1: Basic Gemini API Call Using LangChain
import os
from dotenv import load_dotenv
load_dotenv(".env") # Load environment variables from .env file
api_key = os.getenv("GOOGLE_API_KEY") # Get Google API Key securely
# Import Gemini LLM wrapper from LangChain
from langchain_google_genai import ChatGoogleGenerativeAI
# Initialize the Gemini model with specific configuration
llm = ChatGoogleGenerativeAI(model='gemini-2.0-flash', temperature=0.9)
# Send a prompt to the model and get a response
response = llm.invoke('Explain what the ChatGoogleGenerativeAI class in langchain_google_genai does.')
print(response.content)
Function Explanation:
- load_dotenv(".env"): Loads your .env file, which stores your API key (a sample .env is shown below).
- os.getenv(): Retrieves your Gemini API key without hardcoding it in the script.
- ChatGoogleGenerativeAI(): Initializes the Gemini model with a model name and temperature (creativity level).
- invoke(): Sends a prompt to the LLM and returns a response object.
This is a basic setup that demonstrates how to call Gemini via LangChain and print the result.
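For reference, the .env file loaded above is just a plain text file in your project root. A minimal example (the key value below is a placeholder, not a real key):
# .env (keep this file out of version control)
GOOGLE_API_KEY=your-gemini-api-key-here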
Part 2: Using PromptTemplate and Chain Composition
from langchain_core.prompts import PromptTemplate
llm = ChatGoogleGenerativeAI(model='gemini-2.0-flash', temperature=0.9)
# Define a reusable prompt structure with a placeholder
prompt = PromptTemplate.from_template('You are a Python coding agent. Explain how to use {message}.')
# Chain the prompt template with the model using the pipe operator
chain = prompt | llm
# Provide the actual input for the placeholder
message = 'heapq'
response = chain.invoke({'message': message})
print(response.content)
Function Explanation:
- PromptTemplate.from_template(): Creates a parameterized prompt where {message} is replaced dynamically.
- | (pipe operator): Chains the prompt and the LLM, forming a pipeline: prompt → LLM.
- invoke({'message': 'heapq'}): Passes the actual user input to the chain as a dictionary keyed by the template variable and gets the final output.
This part shows how to modularize your prompts and chain components together, which is a powerful pattern in LangChain.
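Because the chain is a reusable object, the same pipeline can also process several inputs in one call via the standard Runnable batch() method. A small sketch (the extra module name 'bisect' is an illustrative input, not from the original code):
# Run the same chain over multiple inputs at once
responses = chain.batch([{'message': 'heapq'}, {'message': 'bisect'}])
for response in responses:
    print(response.content)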
Part 3: Using System and Human Messages for Role-Based Input
from langchain_core.messages import HumanMessage, SystemMessage
llm_pro = ChatGoogleGenerativeAI(model='gemini-2.5-pro-exp-03-25', temperature=0.9)
# Create structured messages indicating who is speaking (system vs human)
output = llm_pro.invoke([
    SystemMessage(content='Answer within 100 characters in Korean.'),
    HumanMessage(content='Forecast the Amazon stock price for next week.')
])
print(output.content)
Function Explanation:
- SystemMessage(): Gives the model background behavior (e.g., how to answer).
- HumanMessage(): Simulates the user’s actual question or prompt.
- Passing a list of messages: Supports multi-turn conversation or giving context with different roles.
This part introduces the chat-based message structure, which is important for building assistants with clear roles and behavior instructions.
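To see how this supports multi-turn conversation, prior turns can be replayed as AIMessage objects in the same list. A minimal sketch with invented conversation content:
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
# Replay earlier turns so the model answers with context
history = [
    SystemMessage(content='You are a concise financial assistant.'),
    HumanMessage(content='What does Amazon mainly sell?'),
    AIMessage(content='E-commerce goods and AWS cloud services.'),
    HumanMessage(content='Which of those segments is growing faster?'),  # relies on the turn above
]
output = llm_pro.invoke(history)
print(output.content)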
Part 4: Using ChatPromptTemplate with Role Keywords
from langchain_core.prompts import ChatPromptTemplate
llm_pro = ChatGoogleGenerativeAI(model='gemini-2.5-pro-exp-03-25', temperature=0.9)
# Define a chat-style prompt template with role-based content
prompt = ChatPromptTemplate.from_messages([
    ('system', 'Answer within 100 characters in Korean.'),
    ('user', 'Forecast the {message} stock price for next week.')
])
# Create a chain from the prompt to the model
chain = prompt | llm_pro
# Replace the variable in the prompt template
message = 'Amazon'
response = chain.invoke({'message': message})
print(response.content)
Function Explanation:
- ChatPromptTemplate.from_messages(): Provides a flexible way to define role-specific messages using role keywords like 'system' and 'user'.
- Templates with variables: {message} is dynamically filled in at runtime.
- Prompt Chaining: Works just like before; the template flows into the LLM.
This version is cleaner and more scalable than manually defining SystemMessage and HumanMessage.
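The same chain also supports token-by-token streaming through the standard Runnable stream() method, which is handy for chat UIs. A minimal sketch:
# Print the answer chunk by chunk instead of waiting for the full response
for chunk in chain.stream({'message': 'Amazon'}):
    print(chunk.content, end='', flush=True)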