Agno

A lightweight Python library for building agents.

CLAUDE.md

CLAUDE.md — Agno

Instructions for Claude Code when working on this codebase.


Repository Structure

.
├── libs/agno/agno/          # Core framework code
├── cookbook/                # Examples, patterns and test cases (organized by topic)
├── scripts/                 # Development and build scripts
├── specs/                   # Design documents (symlinked, private)
├── docs/                    # Documentation (symlinked, private)
└── .cursorrules             # Coding patterns and conventions

Conductor Notes

When working in Conductor, you can use the .context/ directory for scratch notes or agent-to-agent handoff artifacts. This directory is gitignored.


Setting Up Symlinks

The specs/ and docs/ directories are symlinked from external locations. For a fresh clone or new workspace, create these symlinks:

ln -s ~/code/specs specs
ln -s ~/code/docs docs

These contain private design documents and documentation that are not checked into the repository.


Virtual Environments

This project uses two virtual environments:

Environment     Purpose                                       Setup
.venv/          Development: tests, formatting, validation    ./scripts/dev_setup.sh
.venvs/demo/    Cookbooks: has all demo dependencies          ./scripts/demo_setup.sh

Use .venv for development tasks (pytest, ./scripts/format.sh, ./scripts/validate.sh).

Use .venvs/demo for running cookbook examples.


Testing Cookbooks

Apart from implementing features, your most important task will be to test and maintain the cookbooks in the cookbook/ directory.

See cookbook/08_learning/ for the gold standard.

Quick Reference

Test Environment:

# Virtual environment with all dependencies
.venvs/demo/bin/python

# Setup (if needed)
./scripts/demo_setup.sh

# Database (if needed)
./cookbook/scripts/run_pgvector.sh

Run a cookbook:

.venvs/demo/bin/python cookbook/<folder>/<file>.py

Expected Cookbook Structure

Each cookbook folder should have the following files:

  • README.md — The README for the cookbook.
  • TEST_LOG.md — Test results log.

Testing Workflow

1. Before Testing

  • Ensure the virtual environment exists (run ./scripts/demo_setup.sh if needed)
  • Start any required services (e.g., ./cookbook/scripts/run_pgvector.sh)

2. Running Tests

# Run individual cookbook
.venvs/demo/bin/python cookbook/<folder>/<file>.py

# Tail output for long tests
.venvs/demo/bin/python cookbook/<folder>/<file>.py 2>&1 | tail -100

3. Updating TEST_LOG.md

After each test, update the cookbook's TEST_LOG.md with:

  • Test name and path
  • Status: PASS or FAIL
  • Brief description of what was tested
  • Any notable observations or issues

Format:

### filename.py

**Status:** PASS/FAIL

**Description:** What the test does and what was observed.

**Result:** Summary of success/failure.

---
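
A hypothetical filled-in entry (the file name and details here are illustrative, not from an actual run):

### agent_with_tools.py

**Status:** PASS

**Description:** Runs a single agent with one tool call and streams the response to stdout.

**Result:** Completed without errors; streamed output included the tool result.

---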

Code Locations

What                  Where
Core agent code       libs/agno/agno/agent/
Teams                 libs/agno/agno/team/
Workflows             libs/agno/agno/workflow/
Tools                 libs/agno/agno/tools/
Models                libs/agno/agno/models/
Knowledge/RAG         libs/agno/agno/knowledge/
Memory                libs/agno/agno/memory/
Learning              libs/agno/agno/learn/
Database adapters     libs/agno/agno/db/
Vector databases      libs/agno/agno/vectordb/
Tests                 libs/agno/tests/

Coding Patterns

See .cursorrules for detailed patterns. Key rules:

  • Never create agents in loops — reuse them for performance
  • Use output_schema for structured responses
  • PostgreSQL in production, SQLite for dev only
  • Start with single agent, scale up only when needed
  • Both sync and async — all public methods need both variants
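
The sync-and-async rule can be sketched with a toy class. This is an illustrative pattern only; the class and method names below are not the real Agno API:

```python
import asyncio

# Illustrative only: every public method ships in a sync and an async
# variant, with the async one conventionally prefixed with "a".
class Runner:
    def run(self, prompt: str) -> str:
        # Synchronous variant.
        return f"ran: {prompt}"

    async def arun(self, prompt: str) -> str:
        # Asynchronous variant mirroring run(); a real implementation
        # would await I/O here rather than delegate.
        return self.run(prompt)

runner = Runner()
print(runner.run("hello"))                # ran: hello
print(asyncio.run(runner.arun("hello")))  # ran: hello
```

Keeping the async variant a thin mirror of the sync one makes it easy to check in review that both paths stay in step.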

Running Code

Running cookbooks:

.venvs/demo/bin/python cookbook/<folder>/<file>.py

Running tests:

source .venv/bin/activate
pytest libs/agno/tests/

# Run a specific test file
pytest libs/agno/tests/unit/test_agent.py

When Implementing Features

  1. Check for design doc in specs/ — if it exists, follow it
  2. Look at existing patterns — find similar code and follow conventions
  3. Create a cookbook — every pattern should have an example
  4. Update implementation.md — mark what's done

Before Submitting Code

Always run these scripts before pushing code or creating a PR:

# Activate the virtual environment first
source .venv/bin/activate

# Format all code (ruff format)
./scripts/format.sh

# Validate all code (ruff check, mypy)
./scripts/validate.sh

Both scripts must pass with no errors before code review.

PR Title Format:

PR titles must follow one of these formats:

  • type: description — e.g., feat: add workflow serialization
  • [type] description — e.g., [feat] add workflow serialization
  • type-kebab-case — e.g., feat-workflow-serialization

Valid types: feat, fix, cookbook, test, refactor, chore, style, revert, release

PR Description:

Always follow the PR template in .github/pull_request_template.md. Include:

  • Summary of changes
  • Type of change (bug fix, new feature, etc.)
  • Completed checklist items
  • Any additional context

GitHub Operations

Updating PR descriptions:

The gh pr edit command may fail with GraphQL errors related to classic projects. Use the API directly instead:

# Update PR body
gh api repos/agno-agi/agno/pulls/<PR_NUMBER> -X PATCH -f body="<PR_BODY>"

# Or with a file
gh api repos/agno-agi/agno/pulls/<PR_NUMBER> -X PATCH -f body="$(cat /path/to/body.md)"

Don't

  • Don't implement features without checking for a design doc first
  • Don't use f-strings in print statements that contain no variables
  • Don't use emojis in examples and print lines
  • Don't skip async variants of public methods
  • Don't push code without running ./scripts/format.sh and ./scripts/validate.sh
  • Don't submit a PR without a detailed PR description. Always follow the PR template provided in .github/pull_request_template.md.
  • Don't use OpenAIChat in cookbooks or examples — use OpenAIResponses instead
  • Don't use gpt-4o or gpt-4o-mini in cookbooks or examples — use gpt-5.4 instead

CI: Automated Code Review

Every non-draft PR automatically receives a review from Opus using both the official code-review and pr-review-toolkit plugins (10 specialized agents in total). No manual trigger is needed — the review posts as a sticky comment on the PR.

When running in GitHub Actions (CI), always end your response with a plain-text summary of findings. Never let the final action be a tool call. If there are no issues, say "No high-confidence findings."

Agno-specific checks to always verify:

  • Both sync and async variants exist for all new public methods
  • No agent creation inside loops (agents should be reused)
  • CLAUDE.md coding patterns are followed
  • No f-strings in print statements that contain no variables
README.md

What is Agno

Agno is the runtime for agentic software. Build agents, teams, and workflows. Run them as scalable services. Monitor and manage them in production.

Layer            What it does
Framework        Build agents, teams, and workflows with memory, knowledge, guardrails, and 100+ integrations.
Runtime          Serve your system in production with a stateless, session-scoped FastAPI backend.
Control Plane    Test, monitor, and manage your system using the AgentOS UI.

Quick Start

Build a stateful, tool-using agent and serve it as a production API in ~20 lines.

from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.anthropic import Claude
from agno.os import AgentOS
from agno.tools.mcp import MCPTools

agno_assist = Agent(
    name="Agno Assist",
    model=Claude(id="claude-sonnet-4-6"),
    db=SqliteDb(db_file="agno.db"),
    tools=[MCPTools(url="https://docs.agno.com/mcp")],
    add_history_to_context=True,
    num_history_runs=3,
    markdown=True,
)

agent_os = AgentOS(agents=[agno_assist], tracing=True)
app = agent_os.get_app()

Run it:

export ANTHROPIC_API_KEY="***"

uvx --python 3.12 \
  --with "agno[os]" \
  --with anthropic \
  --with mcp \
  fastapi dev agno_assist.py

In ~20 lines, you get:

  • A stateful agent with streaming responses
  • Per-user, per-session isolation
  • A production API at http://localhost:8000
  • Native tracing

Connect to the AgentOS UI to monitor, manage, and test your agents.

  1. Open os.agno.com and sign in.
  2. Click "Add new OS" in the top navigation.
  3. Select "Local" to connect to a local AgentOS.
  4. Enter your endpoint URL (default: http://localhost:8000).
  5. Name it "Local AgentOS".
  6. Click "Connect".

https://github.com/user-attachments/assets/75258047-2471-4920-8874-30d68c492683

Open Chat, select your agent, and ask:

What is Agno?

The agent retrieves context from the Agno MCP server and responds with grounded answers.

https://github.com/user-attachments/assets/24c28d28-1d17-492c-815d-810e992ea8d2

This same architecture scales to running multi-agent systems in production.

Why Agno?

Agentic software introduces three fundamental shifts.

A new interaction model

Traditional software receives a request and returns a response. Agents stream reasoning, tool calls, and results in real time. They can pause mid-execution, wait for approval, and resume later.

Agno treats streaming and long-running execution as first-class behavior.

A new governance model

Traditional systems execute predefined decision logic written in advance. Agents choose actions dynamically. Some actions are low risk. Some require user approval. Some require administrative authority.

Agno lets you define who decides what as part of the agent definition, with:

  • Approval workflows
  • Human-in-the-loop
  • Audit logs
  • Enforcement at runtime

A new trust model

Traditional systems are designed to be predictable. Every execution path is defined in advance. Agents introduce probabilistic reasoning into the execution path.

Agno builds trust into the engine itself:

  • Guardrails run as part of execution
  • Evaluations integrate into the agent loop
  • Traces and audit logs are first-class

Built for Production

Agno runs in your infrastructure, not ours.

  • Stateless, horizontally scalable runtime.
  • 50+ APIs and background execution.
  • Per-user and per-session isolation.
  • Runtime approval enforcement.
  • Native tracing and full auditability.
  • Sessions, memory, knowledge, and traces stored in your database.

You own the system. You own the data. You define the rules.

What You Can Build

Agno powers real agentic systems built from the same primitives above.

  • Pal → A personal agent that learns your preferences.
  • Dash → A self-learning data agent grounded in six layers of context.
  • Scout → A self-learning context agent that manages enterprise context knowledge.
  • Gcode → A post-IDE coding agent that improves over time.
  • Investment Team → A multi-agent investment committee that debates and allocates capital.

Single agents. Coordinated teams. Structured workflows. All built on one architecture.

Get Started

  1. Read the docs
  2. Build your first agent
  3. Explore the cookbook

IDE Integration

Add Agno docs as a source in your coding tools:

Cursor: Settings → Indexing & Docs → Add https://docs.agno.com/llms-full.txt

Also works with VSCode, Windsurf, and similar tools.

Contributing

See the contributing guide.

Telemetry

Agno logs which model providers are used, to help prioritize updates. Disable with AGNO_TELEMETRY=false.
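
For example, in a POSIX shell:

```shell
# Opt out of telemetry for the current shell session
export AGNO_TELEMETRY=false
```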