
Getting Started with Google Gemini API

If you're just getting started with Generative AI and want to use Google's Gemini models for free, you're in the right place. In this tutorial, I'll walk you through everything you need to know to build your first Gemini-powered application in Python: how to get a free API key, how to install the necessary libraries, and how to write code that calls Gemini's generate_content() function.

How to Get a Free Gemini API Key

You’ll need a Google account for this.

  1. Go to the official Google AI Studio.
  2. Sign in with your Google account.
  3. In the top-right corner, click on your profile and select "Get API Key".
  4. You’ll be redirected to Google Cloud's API Console.
  5. Create a new API key or use an existing one.
  6. Copy this key and keep it safe! You'll use it in your Python code.

Important: Google provides a free quota for the Gemini API. Make sure you check the quota limits for the free tier.

Setting Up Your Python Environment

Before writing any code, install the required libraries:

pip install google-genai python-dotenv
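
If you're not sure the install worked, a quick sanity check is to import the SDK from the command line:

python -c "from google import genai; print('SDK import OK')"

If this prints without errors, you're ready to go.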

Also, create a .env file in your project folder. It should contain the following line:

GOOGLE_API_KEY=your_api_key_here

This helps keep your secret key private and secure.
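
If you prefer not to use a .env file, you can instead export the variable in your shell before running the script; os.getenv() picks it up either way:

export GOOGLE_API_KEY=your_api_key_here    # macOS/Linux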

Full Example Code + Explanation

Here is the complete code with detailed comments to help you understand how to interact with the Gemini API using Python:

# Import necessary Python modules
import os                              # Helps interact with environment variables
from dotenv import load_dotenv         # Loads variables from .env file
from google import genai               # Google's Generative AI SDK
from google.genai import types         # Contains types for model configuration

# Load environment variables (API key)
load_dotenv(".env")                    # Loads the .env file from current directory
api_key = os.getenv("GOOGLE_API_KEY")  # Retrieves the API key from the environment

# Initialize the GenAI client
gClient = genai.Client(api_key=api_key)  # Authenticates you to use the Gemini models

# ---------------------------------------------
# EXAMPLE 1: Simple usage
# Ask the model a question
response = gClient.models.generate_content(
    model='gemini-2.0-flash',           # Free-tier optimized model
    contents="Explain what generate_content function do in genai.Client.models."
)
print(response.text)

# ---------------------------------------------
# EXAMPLE 2: With generation configuration
# Translate a sentence with custom behavior
response = gClient.models.generate_content(
    model="gemini-2.0-flash",
    contents="Translate 'good morning' into Korean",
    
    # Optional configuration for generation behavior
    config=types.GenerateContentConfig(
        temperature=1,                 # Controls creativity: 0 = more predictable, 1 = more diverse
        top_p=0.99,                    # Limits the token selection to a cumulative probability
        top_k=0,                       # Limits token selection to top k tokens (0 = disabled)
        max_output_tokens=4096         # Max tokens in the output (adjust based on response length)
    ),
)
print(response.text)

Understanding Key Parameters

generate_content()

This function sends a prompt to the selected Gemini model and receives a generated response.
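
The returned object carries more than the text itself. Here is a minimal sketch of inspecting it, assuming the response layout of the google-genai SDK (a candidates list and a finish_reason field):

response = gClient.models.generate_content(
    model="gemini-2.0-flash",
    contents="Say hello."
)
print(response.text)                         # Convenience accessor for the generated text
print(len(response.candidates))              # Number of candidates returned (usually 1)
print(response.candidates[0].finish_reason)  # Why generation stopped, e.g. STOP or MAX_TOKENS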

temperature

Controls randomness:

  • 0: Most deterministic
  • 1: Most creative/random
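
To see the difference yourself, you can send the same prompt at both extremes, reusing the client and imports from the setup above:

for temp in (0, 1):
    response = gClient.models.generate_content(
        model="gemini-2.0-flash",
        contents="Suggest a name for a coffee shop.",
        config=types.GenerateContentConfig(temperature=temp),
    )
    print(f"temperature={temp}: {response.text}")

At temperature 0, repeated runs tend to return the same name; at 1, the suggestions vary from run to run.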

top_p

Uses nucleus sampling: the model samples only from the smallest set of tokens whose cumulative probability reaches top_p. Values closer to 1 allow more variety.

top_k

The model picks only from the top k most likely tokens. Set to 0 to disable.
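
top_p and top_k work together with temperature to shape sampling. The sketch below contrasts a conservative configuration with a permissive one; the exact values are illustrative, not recommendations:

# Conservative: narrow nucleus and few candidate tokens, for focused output
conservative = types.GenerateContentConfig(temperature=0.2, top_p=0.8, top_k=10)

# Permissive: wide nucleus and many candidate tokens, for varied output
permissive = types.GenerateContentConfig(temperature=1.0, top_p=0.99, top_k=100)

response = gClient.models.generate_content(
    model="gemini-2.0-flash",
    contents="Describe the ocean in one sentence.",
    config=permissive,
)
print(response.text)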

max_output_tokens

Limits the response length (like word count, but for tokens). Typical values: 256, 1024, 4096.
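
A quick way to see the cap in action is to set it deliberately low and check whether the model ran out of tokens (the finish_reason check here assumes the google-genai response fields):

response = gClient.models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain photosynthesis.",
    config=types.GenerateContentConfig(max_output_tokens=20),  # Deliberately tiny cap
)
print(response.text)                         # Likely cut off mid-sentence
print(response.candidates[0].finish_reason)  # MAX_TOKENS means the cap was hit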

More Sample Use Cases

1. Summarize a Paragraph

response = gClient.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize this: Artificial Intelligence is transforming industries across the globe..."
)
print(response.text)

2. Translate Multiple Languages

response = gClient.models.generate_content(
    model="gemini-2.0-flash",
    contents="Translate 'thank you' into French, Spanish, and Japanese."
)
print(response.text)

3. Generate a Poem

response = gClient.models.generate_content(
    model="gemini-2.0-flash",
    contents="Write a haiku about spring."
)
print(response.text)

4. Coding Help

response = gClient.models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain what a Python decorator is with an example."
)
print(response.text)

