Chapter 3 — Setting Up Your AI Agent Development Environment (Plug-and-Play)

AI Agent Development Environment setup is the only goal of Chapter 3: install Python 3.12 with uv, isolate dependencies in a .venv, run Dockerized Postgres + pgvector, configure a clean .env, and verify everything with sanity checks—so Chapter 4 can focus entirely on architecture.

1) What You’ll Set Up (and Why) for Your AI Agent Development Environment


This AI Agent Development Environment prioritizes isolation, reproducibility, and clean services.

  • Python 3.12 — the runtime for your agent environment setup.
  • uv or pip — fast installs and reproducibility (uv is recommended).
  • Virtual environment (.venv) — per-project dependency isolation.
  • Git — version control.
  • Docker Desktop + Compose v2 — clean, disposable local services.
  • Postgres + pgvector — vector search for RAG/memory.
  • .env + .env.example — safe secrets/config pattern.
  • Sanity checks — know it works before writing code.

2) Install Python 3.12 and uv for an AI Agent Development Environment

If you can’t decide: Python 3.12 + uv is a great default.

macOS

brew install python@3.12 uv

or use uv’s official installer:

curl -LsSf https://astral.sh/uv/install.sh | sh


Ubuntu/Debian

sudo apt-get update
sudo apt-get install -y python3.12 python3.12-venv curl
curl -LsSf https://astral.sh/uv/install.sh | sh


Windows (PowerShell)

  • Install Python from python.org / Microsoft Store.
  • Install uv (optional but recommended):
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"


You’ll activate the venv inside the project (next step).


3) Create the project and virtual environment for your AI Agent Development Environment

Your **AI Agent Development Environment** uses a project-local `.venv` to avoid global conflicts and ensure consistent tooling.

mkdir ai-agent-env && cd ai-agent-env
git init

Create & activate the venv

Option A — uv (recommended)

uv venv .venv
source .venv/bin/activate        # Windows (PowerShell): . .\.venv\Scripts\Activate.ps1
# (no pip upgrade needed; uv manages packages itself)

Option B — built-in venv + pip

python -m venv .venv
source .venv/bin/activate        # Windows: . .\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip

Expected result

  • Your shell prompt shows (.venv).
  • python --version prints 3.12.x.
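
Optional: a quick interpreter check (an illustrative snippet; run it inside the activated venv):

python - << 'PY'
import sys
print(sys.prefix)            # should point inside your project's .venv
print(sys.version_info[:2])  # expect (3, 12)
PY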

4) .gitignore + README (why each line)

echo ".venv/"        >> .gitignore   # ✅ never commit the virtual env
echo ".env"          >> .gitignore   # ✅ never commit secrets
echo "__pycache__/"  >> .gitignore   # cache files
echo ".cache/"       >> .gitignore   # cache files
touch README.md

5) Core dev tools (safe pins you can keep)

Inside the activated venv:

uv pip install ruff black pre-commit

Create .pre-commit-config.yaml:

repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0
    hooks: [{ id: black }]
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.4
    hooks: [{ id: ruff, args: ["--fix"] }]

Then:

pre-commit install
pre-commit run --all-files   # one-time check

You can bump the rev: tags later to upgrade.


6) Run Postgres + pgvector in Docker (Core of Your AI Agent Development Environment)

pgvector is the vector engine in our AI Agent Development Environment, enabling fast Postgres vector search for embeddings.

Folder & file

mkdir -p docker

docker/postgres-pgvector.yml

services:
  postgres:
    image: ankane/pgvector               # Postgres with pgvector preinstalled
    container_name: ai-pgvector          # ✏️ you can rename the container
    environment:
      POSTGRES_PASSWORD: postgres        # dev-only; change for teams if needed
      POSTGRES_DB: ai                    # default DB name; change if you like
    ports:
      - "5432:5432"                      # change left side if 5432 is busy (e.g., "5433:5432")
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d ai"]
      interval: 5s
      timeout: 2s
      retries: 20

 

Start it:

docker compose -f docker/postgres-pgvector.yml up -d
docker ps     # wait until STATUS says "(healthy)"

Why this image? It’s a widely used Postgres build with pgvector preinstalled, common in tutorials and CI. (The upstream project also publishes a pgvector/pgvector image if you prefer.)

Create the Vector Table and Index for the AI Agent Development Environment

docker exec -it ai-pgvector psql -U postgres -d ai

Then run:

CREATE EXTENSION IF NOT EXISTS vector;

-- 1536 dims matches OpenAI's text-embedding-3-small
CREATE TABLE IF NOT EXISTS docs (
  id TEXT PRIMARY KEY,
  body TEXT NOT NULL,
  embedding vector(1536)
);

-- IVFFLAT index for cosine similarity; tune lists as data grows
CREATE INDEX IF NOT EXISTS docs_embedding_idx
ON docs USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);
\q

With the IVFFLAT index in place, Postgres vector search via pgvector is production-style even in local dev.

  • 1536 matches text-embedding-3-small output.
  • lists controls the cluster count / recall trade-off for the ANN index; increase it as your data grows.
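
To exercise the table end-to-end from Python, here is a minimal sketch, assuming uv pip install "psycopg[binary]" pgvector numpy (illustrative only, not part of this chapter's required setup):

python - << 'PY'
import os
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

# fall back to the compose defaults if POSTGRES_URL isn't exported
conn = psycopg.connect(os.getenv("POSTGRES_URL", "postgresql://postgres:postgres@localhost:5432/ai"))
register_vector(conn)  # teach psycopg to adapt numpy arrays to the vector type

fake = np.random.rand(1536).astype(np.float32)  # stand-in for a real embedding
conn.execute(
    "INSERT INTO docs (id, body, embedding) VALUES (%s, %s, %s) ON CONFLICT (id) DO NOTHING",
    ("demo-1", "hello pgvector", fake),
)
conn.commit()

# <=> is pgvector's cosine-distance operator, the one vector_cosine_ops indexes
row = conn.execute(
    "SELECT id, body, embedding <=> %s AS dist FROM docs ORDER BY dist LIMIT 1",
    (fake,),
).fetchone()
print(row)
PY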

 


7) The .env files for your AI Agent Development Environment — fully annotated

The `.env.example` documents choices across the AI Agent Development Environment, while your private `.env` keeps real keys out of Git — a core part of .env best practices.

Create both files: .env.example (committed) and .env (private, never commit). Keep .env.example values fake.

.env.example (copy-paste)

# ========= LLM =========
# If you're using OpenAI Cloud:
OPENAI_API_KEY=sk-REPLACE_ME               # example shape: sk-1234abcd... (put real key only in .env)
OPENAI_BASE_URL=                           # leave blank for OpenAI Cloud
OPENAI_MODEL=gpt-4.1                       # example; you can also use o4-mini, etc. Check current models list

# --- OR local LLM via OpenAI-compatible servers (choose ONE) ---
# Ollama (local dev):
# OPENAI_BASE_URL=http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
# OPENAI_MODEL=llama3                         # model pulled via `ollama pull llama3`

# vLLM (self-hosted, GPU):
# OPENAI_BASE_URL=http://localhost:8000/v1    # vLLM OpenAI-compatible server
# OPENAI_MODEL=meta-llama/Llama-3.1-8B-Instruct

# ========= Tracing (optional) =========
# Prefer the current LangSmith vars; older guides sometimes use LANGCHAIN_TRACING_V2
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=ls-REPLACE_ME             # leave blank if not using LangSmith yet
LANGSMITH_PROJECT=ai-agent-env

# ========= Vector DB =========
VECTOR_DB=pgvector
POSTGRES_URL=postgresql://postgres:postgres@localhost:5432/ai
# If you mapped 5433:5432 in compose: postgresql://postgres:postgres@localhost:5433/ai

# (Optional alternatives if you switch vector DBs later)
QDRANT_URL=http://localhost:6333
PINECONE_API_KEY=pc-REPLACE_ME

  • Models list / naming: check the current OpenAI models page (GPT families vs reasoning models like o4-mini).
  • Ollama OpenAI-compat: http://localhost:11434/v1; any placeholder API key is fine.
  • vLLM OpenAI-compat: the default local base is http://localhost:8000/v1.
  • Qdrant’s default REST port is 6333.
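
Because all three paths speak the OpenAI wire protocol, one small client covers cloud, Ollama, and vLLM. A minimal sketch, assuming uv pip install openai python-dotenv (illustrative only):

python - << 'PY'
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("OPENAI_BASE_URL") or None,  # None → default api.openai.com
)
resp = client.chat.completions.create(
    model=os.getenv("OPENAI_MODEL", "gpt-4.1"),
    messages=[{"role": "user", "content": "Reply with: env OK"}],
)
print(resp.choices[0].message.content)
PY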

A beginner-friendly .env (pick one path):

Cloud-first (OpenAI Cloud)

OPENAI_API_KEY=sk-1234abcdyourrealkey
OPENAI_BASE_URL=
OPENAI_MODEL=gpt-4.1

LANGSMITH_TRACING=true
LANGSMITH_API_KEY=
LANGSMITH_PROJECT=ai-agent-env

VECTOR_DB=pgvector
POSTGRES_URL=postgresql://postgres:postgres@localhost:5432/ai
QDRANT_URL=http://localhost:6333
PINECONE_API_KEY=

Local-first (Ollama)

OPENAI_API_KEY=ollama                     # required-but-ignored placeholder
OPENAI_BASE_URL=http://localhost:11434/v1
OPENAI_MODEL=llama3

LANGSMITH_TRACING=true
LANGSMITH_API_KEY=
LANGSMITH_PROJECT=ai-agent-env

VECTOR_DB=pgvector
POSTGRES_URL=postgresql://postgres:postgres@localhost:5432/ai


Self-hosted (vLLM, GPU)

OPENAI_API_KEY=token-abc123               # if your gateway enforces a key
OPENAI_BASE_URL=http://localhost:8000/v1
OPENAI_MODEL=meta-llama/Llama-3.1-8B-Instruct

LANGSMITH_TRACING=true
LANGSMITH_API_KEY=
LANGSMITH_PROJECT=ai-agent-env

VECTOR_DB=pgvector
POSTGRES_URL=postgresql://postgres:postgres@localhost:5432/ai


Notes

  • For OpenAI Cloud, leave OPENAI_BASE_URL blank and just set OPENAI_API_KEY.
  • LangSmith uses LANGSMITH_TRACING, LANGSMITH_API_KEY, and optional LANGSMITH_PROJECT. Older docs/libraries also accept LANGCHAIN_TRACING_V2; prefer the LANGSMITH_* names going forward.
  • Pinecone clients look for PINECONE_API_KEY.

8) Sanity Checks That Prove Your AI Agent Development Environment Works

Sanity checks confirm the AI Agent Development Environment is healthy before coding.

A) Container is healthy

docker ps

You should see ai-pgvector with STATUS ... (healthy).

B) Table exists

docker exec -it ai-pgvector psql -U postgres -d ai -c "\dt"

Good output (example):

         List of relations
 Schema | Name | Type  | Owner
--------+------+-------+--------
 public | docs | table | postgres
(1 row)

C) .env is present and readable

Install python-dotenv first (uv pip install python-dotenv), then:

python - << 'PY'
import os
from pathlib import Path
from dotenv import load_dotenv

print(".env present:", Path(".env").exists())
load_dotenv()  # pull .env values into os.environ so os.getenv can see them
for key in ["OPENAI_BASE_URL", "OPENAI_MODEL", "POSTGRES_URL"]:
    print(key, "=", os.getenv(key))
PY

Cloud-first example:

.env present: True
OPENAI_BASE_URL =
OPENAI_MODEL = gpt-4.1
POSTGRES_URL = postgresql://postgres:postgres@localhost:5432/ai

Ollama example:

.env present: True
OPENAI_BASE_URL = http://localhost:11434/v1
OPENAI_MODEL = llama3
POSTGRES_URL = postgresql://postgres:postgres@localhost:5432/ai
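
D) (Optional) Python can reach Postgres

A short connectivity probe, sketched under the assumption that psycopg and python-dotenv are installed (uv pip install "psycopg[binary]" python-dotenv):

python - << 'PY'
import os
import psycopg
from dotenv import load_dotenv

load_dotenv()
with psycopg.connect(os.environ["POSTGRES_URL"]) as conn:
    version, = conn.execute("SELECT version()").fetchone()
    print("Postgres reachable:", version.split(",")[0])
PY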

9) Optional: LangSmith tracing / Phoenix (observability)

  • LangSmith: set LANGSMITH_TRACING=true, LANGSMITH_API_KEY, and LANGSMITH_PROJECT to see traces when you start building in Chapter 4.
  • Phoenix (Arize): pip install arize-phoenix, then run phoenix serve to launch the local UI.

10) Editor & shell tips (beginner pitfalls solved)

  • VS Code: install the Python extension — it will detect .venv automatically.
  • Command “not found”? Make sure your prompt shows (.venv); if not, re-activate.
  • Windows path issues? Close and reopen PowerShell after new installs so PATH changes take effect.
  • If Windows tooling is painful, use WSL and follow the Ubuntu steps inside it.
  • Docker Compose v2 uses docker compose ... (with a space). If you’re on Linux without Docker Desktop, install the Compose plugin.

11) What you do not need today

No “hello world,” no embeddings, no retrieval code. Chapter 3 is environment only. You’re now ready for Chapter 4.


Day-one checklist (tick as you go)

  • Python 3.12 installed
  • Project folder created and .venv activated
  • uv or pip working inside the venv
  • Git initialized; .gitignore includes .env and .venv
  • Docker Desktop running
  • Postgres + pgvector up and healthy
  • .env.example created and .env filled (cloud / local / self-hosted)
  • psql lists the docs table
  • (Optional) LangSmith / Phoenix keys saved

The best Python starter (pick one)

  • Automate the Boring Stuff with Python (3rd ed.) — free to read online at automatetheboringstuff.com; very beginner-friendly.
  • Prefer video? CS50’s Introduction to Programming with Python (Harvard) — free to audit on edX.

External resources (deep-dive / bookmark)

  • uv installer & docs — fast Python packaging. docs.astral.sh
  • Docker Compose v2 (Desktop & Linux plugin) — the modern CLI (docker compose). Docker Documentation
  • pgvector — project README + IVFFlat tuning guidance. GitHub
  • Ollama OpenAI-compat endpoint — http://localhost:11434/v1. docs.ollama.com
  • vLLM OpenAI-compat server — http://localhost:8000/v1. docs.vllm.ai
  • OpenAI models overview — check current model names/families. OpenAI Platform
  • Qdrant local defaults — REST port 6333. qdrant.tech
  • Phoenix (Arize) quickstart — pip install arize-phoenix, then phoenix serve. Arize AI

Internal resources (from this course)

  • Chapter 1 — Automated AI agents: Overview & Strategy
  • Chapter 2 — Choosing the Right AI Framework & Tools
  • Chapter 4 — Designing the AI Agent’s Architecture (up next)

Up next (Chapter 4): we’ll pick an orchestrator/runtime, define state, tools, memory backends, and tracing. Bring your .env — we’ll wire it to concrete code paths.


Conclusion

Your workstation is now production-friendly: a clean **AI Agent Development Environment** with Python 3.12 + uv, an isolated `.venv`, Dockerized Postgres + pgvector, and .env best practices. Sanity checks are green — you’re ready to build. This lets you focus on architecture and agent behavior next, without battling versions or local databases.

With the AI Agent Development Environment ready, jump to Chapter 4 — Designing the AI Agent’s Architecture. We’ll choose an orchestrator, wire tools and memory, and enable tracing.

Final project tree (so you can compare)

ai-agent-env/
├─ .git/                     # created by `git init`
├─ .gitignore
├─ .pre-commit-config.yaml
├─ README.md
├─ docker/
│  └─ postgres-pgvector.yml
├─ .env                      # your private keys (UNTRACKED)
├─ .env.example              # fake/sample values (TRACKED)
├─ .venv/                    # virtual env (created at runtime, untracked)
├─ __pycache__/              # Python cache (untracked)
└─ .cache/                   # tool caches (untracked)

 

Troubleshooting (FAQ)

  • KeyError: OPENAI_API_KEY
    The .env wasn’t loaded. Ensure the venv is active and python-dotenv is installed. Quick check:

    from dotenv import load_dotenv; load_dotenv()
    import os; print((os.getenv("OPENAI_API_KEY") or "")[:6])
  • “vector dimension mismatch”
    Your table is vector(1536) but your embedding model returns a different dimension. Fix either the model or the schema, then re-ingest.

  • “connection refused” to Postgres
    Docker not started or port 5432 in use. Run docker ps and docker logs <container>. Change the port in your compose file if needed.

  • Inconsistent or off-topic answers
    Lower the temperature; improve the prompt; add response validators; retrieve more context (increase k); add Ragas/Promptfoo tests and iterate.

  • 429 / timeouts
    Use retries and backoff (Tenacity; see the sketch after this list), set timeouts, enable caching where possible, and respect provider rate limits.

  • Windows SSL/cert issues
    python -m pip install --upgrade certifi or run under WSL if needed.
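
For the 429/timeout case, a minimal retry-wrapper sketch with Tenacity (assumes uv pip install tenacity openai; the ask helper and its parameters are illustrative, not a prescribed pattern):

from openai import OpenAI
from tenacity import retry, stop_after_attempt, wait_exponential

client = OpenAI()  # reads OPENAI_API_KEY / OPENAI_BASE_URL from the environment

@retry(stop=stop_after_attempt(5), wait=wait_exponential(multiplier=1, max=30))
def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": prompt}],
        timeout=30,  # per-request timeout (seconds)
    )
    return resp.choices[0].message.content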

When you’re ready, jump to Chapter 4 — Designing the AI Agent’s Architecture and start shaping your agent’s building blocks.

 
