2026-02-16
Prompt Install for AI Observability: Set Up ZappyBee with Claude Code, Codex, or Cursor
How to install ZappyBee with one prompt so your coding assistant detects your stack, wires tracing, and keeps manual fallback available.
See every step, error, token, and cost in one trace timeline. Install with one prompt and get your first signal fast.
ZappyBee Beta is free while we collect feedback. Fair-use monthly limits apply to prevent abuse.
No credit card required. Free beta access.
Example Trace Timeline
Root cause is visible in one screen, not scattered logs.
Use your coding assistant to detect stack, install dependencies, and wire tracing automatically.
Choose your coding assistant
Best for CLI-driven refactors in existing repos.
```text
You are Claude Code working directly inside my repository.

Goal: Install ZappyBee tracing as fast as possible with zero product regressions.

Critical constraints:
1) Detect framework(s), language(s), runtime(s), and package manager(s) automatically.
2) Keep existing app behavior intact. No breaking changes.
3) Do NOT hardcode secrets. Use env vars only.
4) Do NOT commit or push anything.
5) At the end, output:
   - changed files
   - exact run command
   - exact smoke test command

Configuration:
- SDK package name: zappybee
- API key env var: ZAPPYBEE_API_KEY
- Base URL env var: ZAPPYBEE_BASE_URL
- Base URL value: https://tokencat-api-riuvb.ondigitalocean.app

Implementation tasks:
1) Detect stack
   - Identify whether this repo uses TypeScript/JavaScript, Python, or both.
   - Detect package manager preference from lockfiles/config: pnpm-lock.yaml > package-lock.json > yarn.lock > bun.lockb
   - For Python, detect toolchain: uv > poetry > pipenv > pip
2) Install dependencies
   - If TS/JS exists, install zappybee with the detected package manager.
   - If Python exists, install zappybee with the detected Python tool.
   - If OpenAI or Anthropic SDKs are already present, keep using them.
   - Do not remove existing dependencies.
3) Configure environment
   - Add/update env examples so they include:
     ZAPPYBEE_API_KEY=
     ZAPPYBEE_BASE_URL=https://tokencat-api-riuvb.ondigitalocean.app
   - If there is an app env loader, wire both vars there.
   - Never place a real API key in source files.
4) Add SDK initialization
   - Add one shared initialization location per runtime (TS and/or Python).
   - Initialize with:
     apiKey from ZAPPYBEE_API_KEY
     baseUrl from ZAPPYBEE_BASE_URL (fallback to https://tokencat-api-riuvb.ondigitalocean.app if needed)
5) Add automatic tracing where possible
   - If Anthropic/OpenAI clients are found, apply ZappyBee.wrap(client).
   - Keep existing client options and behavior unchanged.
6) Add manual tracing for unsupported/custom flows
   - Add one traced execution path with: startTrace -> startStep -> step.end -> trace.end
   - Include model, prompt/completion tokens, status, output/error.
   - Use try/catch/finally (or equivalent) so traces close on failures.
7) Add smoke test path
   - Add a minimal script or command that triggers one traced run.
   - Script should work locally without exposing secrets in code.
   - Add a clear comment on how to run it.
8) Validate
   - Run existing build/typecheck/lint commands where available.
   - Fix only issues introduced by your changes.
9) Final report
   - Print:
     a) what was detected (framework/language/package manager)
     b) exact files changed
     c) install command(s) used
     d) run command
     e) smoke test command
     f) where to check traces in UI

Now apply the changes directly.
```
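The manual path in step 6 can be sketched in TypeScript. The call sequence (startTrace -> startStep -> step.end -> trace.end) and the try/catch/finally requirement come straight from the prompt above; the tracer classes below are a local stub for illustration, not the real zappybee SDK surface.

```typescript
// Stub tracer, NOT the zappybee SDK: it only mirrors the call sequence
// the prompt asks for (startTrace -> startStep -> step.end -> trace.end).
type StepResult = { status: "ok" | "error"; output?: string; error?: string };

class Step {
  ended = false;
  result?: StepResult;
  constructor(public name: string) {}
  end(result: StepResult) {
    this.ended = true;
    this.result = result;
  }
}

class Trace {
  steps: Step[] = [];
  ended = false;
  startStep(name: string): Step {
    const step = new Step(name);
    this.steps.push(step);
    return step;
  }
  end() {
    this.ended = true;
  }
}

function startTrace(_name: string): Trace {
  return new Trace();
}

// One traced execution path: the step records success or failure, and
// the finally block guarantees the trace closes even when the call throws.
async function tracedRun(prompt: string): Promise<string> {
  const trace = startTrace("custom-flow");
  const step = trace.startStep("llm-call");
  try {
    const output = `echo: ${prompt}`; // stand-in for a real model call
    step.end({ status: "ok", output });
    return output;
  } catch (err) {
    step.end({ status: "error", error: String(err) });
    throw err;
  } finally {
    trace.end();
  }
}
```

The point of the shape is the `finally`: whether the model call succeeds or throws, `trace.end()` runs, so no trace is left open after a failure.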
```typescript
import { ZappyBee } from "zappybee";
import Anthropic from "@anthropic-ai/sdk";

ZappyBee.init({
  apiKey: process.env.ZAPPYBEE_API_KEY, // set via env var; never hardcode keys
  baseUrl: process.env.ZAPPYBEE_BASE_URL, // optional
});

const client = new Anthropic();
ZappyBee.wrap(client); // auto-traces all calls

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 512,
  messages: [{ role: "user", content: "Hello!" }],
});
```

From prototype to production. Know exactly what your agents are doing, how much they cost, and when something breaks.
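Since the example above reads its credentials from the environment, a small fail-fast guard keeps a missing key from going unnoticed at startup. This is a sketch, not SDK behavior: the env var names and the default base URL come from the install prompt, while the helper itself is hypothetical.

```typescript
// Hypothetical helper (not part of the zappybee SDK): resolve config from
// the environment and fail fast when ZAPPYBEE_API_KEY is missing.
function resolveZappyBeeConfig(env: Record<string, string | undefined> = process.env) {
  const apiKey = env.ZAPPYBEE_API_KEY;
  if (!apiKey) {
    throw new Error("ZAPPYBEE_API_KEY is not set; refusing to start tracing");
  }
  return {
    apiKey,
    // Fall back to the documented default when ZAPPYBEE_BASE_URL is unset.
    baseUrl: env.ZAPPYBEE_BASE_URL ?? "https://tokencat-api-riuvb.ondigitalocean.app",
  };
}
```

Throwing at startup surfaces a misconfigured environment immediately, instead of silently sending unauthenticated trace requests.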
Copy one install prompt into Claude Code, Codex, Cursor, Lovable, Antigravity, or another coding assistant.
Full trace timeline: LLM calls, tool calls, reasoning steps, inputs/outputs, tokens, latency, and cost.
Claude, GPT, Gemini, Grok, Mistral, Llama, DeepSeek, and more. Auto-wrap Anthropic/OpenAI or use manual tracing for anything.
Model usage, daily spend, error rate, average duration, and top models across your agents and projects.
Version prompts, ship changes safely, and compare performance with real usage and outcomes.
Trigger alerts on error rate, cost, and latency. Get notified via email or Slack webhook.
Send signed events (HMAC) for traces, steps, and alert triggers to your incident pipeline.
Invite teammates, create shareable dashboards, and enforce retention policies for production readiness.
One SDK plus integrations for popular frameworks
Practical guides, troubleshooting playbooks, and real production patterns for teams building AI agents.
2026-02-16
A buyer-focused comparison for teams deciding between an observability platform and custom in-house tracing.
2026-02-15
How to structure your product pages, docs, and evidence so AI assistants cite your tool when users ask about agent debugging, tracing, and cost monitoring.