Gemini API Setup Guide for Python Developers
Google’s Gemini API offers powerful generative AI capabilities including text generation, multimodal understanding, and structured output. This guide walks Python developers through complete setup—from API key authentication to production-ready configurations with Vertex AI service accounts, multimodal inputs, and structured JSON parsing.
Step 1: Install the Google Generative AI SDK
Start by installing the official Python SDK. For direct API key access, use the lightweight package:
pip install google-generativeai
For Vertex AI (enterprise/production), install the full platform SDK:
pip install google-cloud-aiplatform
Verify your installation:
python -c "import google.generativeai as genai; print(genai.__version__)"
Step 2: API Key Authentication (Quick Start)
The fastest way to start is with an API key from **Google AI Studio** (aistudio.google.com). This method is ideal for prototyping and personal projects.
import google.generativeai as genai
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content("Explain quantum computing in three sentences.")
print(response.text)
For better security, use environment variables instead of hardcoding keys:
import os
import google.generativeai as genai
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
Set the variable in your terminal before running:
# Linux/macOS
export GEMINI_API_KEY="YOUR_API_KEY"

# Windows PowerShell
$env:GEMINI_API_KEY="YOUR_API_KEY"
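In application code, it helps to fail fast with a clear message when the variable is missing. Here is a minimal sketch; the `load_api_key` helper name is our own, not part of the SDK:

```python
import os

def load_api_key(var_name: str = "GEMINI_API_KEY") -> str:
    """Read the API key from the environment, failing fast if it is unset."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set. Export it in your shell before running."
        )
    # Strip whitespace that often sneaks in when keys are copy-pasted
    return key.strip()
```

Stripping whitespace guards against the hard-to-debug "API key not valid" errors caused by a trailing space in the variable.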
Step 3: Vertex AI Service Account Configuration (Production)
For production applications, use Vertex AI with a Google Cloud service account. This provides IAM-based access control, audit logging, and enterprise compliance.
3a. Create a Service Account
# Install gcloud CLI, then:
gcloud iam service-accounts create gemini-app \
  --display-name="Gemini Production App"

gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:gemini-app@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"

gcloud iam service-accounts keys create key.json \
  --iam-account=gemini-app@YOUR_PROJECT_ID.iam.gserviceaccount.com
3b. Authenticate and Initialize Vertex AI
import vertexai
from vertexai.generative_models import GenerativeModel
vertexai.init(
    project="YOUR_PROJECT_ID",
    location="us-central1",
    # Automatically uses GOOGLE_APPLICATION_CREDENTIALS env var
)
model = GenerativeModel("gemini-2.0-flash")
response = model.generate_content("Summarize the benefits of cloud computing.")
print(response.text)

Set your credentials path before running:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/key.json"
Step 4: Multimodal Input Handling
Gemini natively processes text, images, audio, video, and PDFs. Here is how to send an image alongside a text prompt:
Image + Text Input
import os
import google.generativeai as genai
from pathlib import Path

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash")
image_data = Path("product_photo.jpg").read_bytes()
response = model.generate_content([
    "Describe this product and suggest a marketing tagline.",
    {"mime_type": "image/jpeg", "data": image_data}
])
print(response.text)
PDF Document Analysis
pdf_data = Path("contract.pdf").read_bytes()
response = model.generate_content([
    "Extract all key dates and obligations from this contract.",
    {"mime_type": "application/pdf", "data": pdf_data}
])
print(response.text)
Using File Upload for Large Files
import time

video_file = genai.upload_file("presentation.mp4", mime_type="video/mp4")

# Wait for server-side processing to finish before referencing the file
while video_file.state.name == "PROCESSING":
    time.sleep(5)
    video_file = genai.get_file(video_file.name)

response = model.generate_content([
    "Summarize the key points discussed in this video.",
    video_file
])
print(response.text)
Step 5: Structured JSON Output Parsing
For production pipelines, you need predictable structured output. Gemini supports enforced JSON schemas via response_mime_type and response_schema.
Basic JSON Mode
model = genai.GenerativeModel(
    "gemini-2.0-flash",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json"
    )
)
response = model.generate_content(“List 3 Python web frameworks with name and description.”)
import json
data = json.loads(response.text)
print(data)
Schema-Enforced JSON Output
import json
import typing_extensions as typing

class ProductReview(typing.TypedDict):
    product_name: str
    rating: int
    pros: list[str]
    cons: list[str]
    summary: str
model = genai.GenerativeModel(
    "gemini-2.0-flash",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
        response_schema=list[ProductReview]
    )
)
response = model.generate_content(
    "Analyze these reviews and extract structured data: "
    "'Great battery life but the screen is dim. 4/5 stars for PhoneX.'"
)
reviews = json.loads(response.text)
for review in reviews:
    print(f"{review['product_name']}: {review['rating']}/5")
Production-Ready Parsing with Error Handling
import json
from google.api_core import exceptions, retry

@retry.Retry(predicate=retry.if_exception_type(exceptions.ResourceExhausted))
def get_structured_response(prompt: str, schema) -> dict:
    model = genai.GenerativeModel(
        "gemini-2.0-flash",
        generation_config=genai.GenerationConfig(
            response_mime_type="application/json",
            response_schema=schema,
            temperature=0.1
        )
    )
    response = model.generate_content(prompt)
    # Surface safety blocks explicitly instead of returning empty text
    if response.prompt_feedback.block_reason:
        raise ValueError(f"Blocked: {response.prompt_feedback.block_reason}")
    return json.loads(response.text)
Pro Tips
- **Use gemini-2.0-flash for most tasks.** It is faster, cheaper, and sufficient for 90% of use cases. Reserve gemini-2.5-pro for complex reasoning tasks.
- **Set temperature=0.0 to 0.2 for structured output.** Lower temperatures produce more consistent, parseable JSON.
- **Batch requests with asyncio.** Use model.generate_content_async() for concurrent requests in production pipelines.
- **Cache system instructions.** Use system_instruction in the model constructor to avoid repeating context in every call, which reduces token usage and cost.
- **Monitor quotas.** Free-tier API keys are rate-limited to 15 RPM. Use Vertex AI or request quota increases for production workloads.
- **Pin model versions.** Use versioned model names like gemini-2.0-flash-001 in production to avoid unexpected behavior changes on auto-updates.
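The asyncio tip above can be sketched as a small gather-based helper. The SDK exposes generate_content_async as the async counterpart to generate_content; the semaphore cap here is our own assumption to stay under rate limits:

```python
import asyncio

async def generate_many(model, prompts, max_concurrency: int = 5):
    """Run several generation requests concurrently, capped by a semaphore."""
    sem = asyncio.Semaphore(max_concurrency)

    async def one(prompt):
        async with sem:
            response = await model.generate_content_async(prompt)
            return response.text

    # gather returns results in the same order as the input prompts
    return await asyncio.gather(*(one(p) for p in prompts))
```

Call it with, for example, `results = asyncio.run(generate_many(model, prompts))`; the semaphore keeps the number of in-flight requests below your quota.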
Troubleshooting
| Error | Cause | Solution |
|---|---|---|
| 400 API key not valid | Invalid or expired API key | Regenerate the key in Google AI Studio. Ensure no trailing whitespace in the env var. |
| 429 Resource exhausted | Rate limit exceeded | Add exponential backoff with google.api_core.retry. Upgrade to a paid tier or Vertex AI. |
| 403 Permission denied | Service account lacks IAM roles | Grant roles/aiplatform.user to the service account. |
| InvalidArgument: response_schema | Schema not supported on model | Use gemini-2.0-flash or newer. Older models lack schema enforcement. |
| BlockedPromptException | Content safety filter triggered | Adjust safety_settings or rephrase the prompt. Check response.prompt_feedback. |
| JSON parse error on response | Model returned malformed JSON | Always use response_mime_type="application/json" with a response_schema for reliable output. |
What is the difference between Google AI Studio API keys and Vertex AI authentication?
Google AI Studio API keys are simple bearer tokens ideal for development and prototyping. They authenticate via a single key string with basic rate limits. Vertex AI uses Google Cloud IAM with service accounts, providing granular access control, audit logging, VPC Service Controls, and enterprise SLAs. For production applications handling sensitive data, Vertex AI is the recommended path as it integrates with your existing cloud security infrastructure.
How do I handle large files like videos with the Gemini API?
For files over 20MB, use the File API with genai.upload_file(). This uploads the file to Google’s servers, processes it asynchronously, and returns a file reference you can include in prompts. Supported formats include MP4, WAV, MP3, and PDF. Always poll the file state with genai.get_file() until processing completes before using it in a generation request. Uploaded files are automatically deleted after 48 hours.
Can I guarantee the Gemini API always returns valid JSON?
Yes. By setting response_mime_type="application/json" alongside a response_schema, Gemini constrains its output to valid JSON matching your specified structure. This uses constrained decoding at the model level, making it far more reliable than prompt engineering alone. Define your schema using Python TypedDict classes or Pydantic models for the best developer experience. Always wrap json.loads() in a try-except block as an additional safety layer in production code.
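That try-except safety layer can look like the following minimal sketch; the `parse_reviews` helper and its expectation of a JSON array are illustrative assumptions, not part of the SDK:

```python
import json

def parse_reviews(raw: str) -> list[dict]:
    """Parse model output into a list of review dicts, with clear errors otherwise."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as err:
        # Wrap the parse failure so callers see one consistent error type
        raise ValueError(f"Model returned malformed JSON: {err}") from err
    if not isinstance(data, list):
        raise ValueError("Expected a JSON array of reviews")
    return data
```

Raising a single ValueError type keeps retry and logging logic simple, whether the failure came from malformed JSON or an unexpected top-level shape.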