Deep cost attribution per workflow, step, and customer — plus hard budget enforcement that actually stops agents the moment a limit is hit.
Three steps from zero visibility to full cost attribution across every agent, workflow, and customer.
Install the Caltryx SDK in minutes. Wrap your LLM calls with trackLLM()
and attach tags like user_id, workflow_id, and step_name. That's it.
Every token, every call, every dollar gets attributed. Tag by tenant, workflow, step, model, environment — or any custom dimension. Without tags, nothing works. So we make tagging dead simple.
Not just alerts — hard enforcement. Set daily, monthly, per-workflow, per-tenant, or per-agent budgets. Caltryx auto-stops agents and blocks API calls the moment a limit is hit.
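As a rough sketch of how tag-based attribution and a hard budget cap could fit together (illustrative only: the record fields, integer-cent costs, and cap logic below are assumptions, not the actual Caltryx internals):

```javascript
// Illustrative sketch: aggregate tracked spend by tag, then enforce a hard cap.
// Costs are in integer cents to avoid floating-point drift.
const calls = [
  { tenant_id: "acme_corp", workflow: "support", cost_cents: 42 },
  { tenant_id: "acme_corp", workflow: "billing", cost_cents: 30 },
  { tenant_id: "globex", workflow: "support", cost_cents: 10 },
];

// Attribution: sum spend per value of any tag dimension (tenant, workflow, step...).
function spendBy(calls, tag) {
  const totals = {};
  for (const call of calls) {
    totals[call[tag]] = (totals[call[tag]] || 0) + call.cost_cents;
  }
  return totals;
}

// Hard enforcement: refuse the next call once a tenant's spend reaches its cap.
function allowCall(tenantId, capsCents, totals) {
  const cap = capsCents[tenantId];
  return cap === undefined || (totals[tenantId] || 0) < cap;
}

const totals = spendBy(calls, "tenant_id");
console.log(totals); // { acme_corp: 72, globex: 10 }
console.log(allowCall("acme_corp", { acme_corp: 50 }, totals)); // false: blocked
```

The same `spendBy` call works for any tag dimension, which is why attribution depends entirely on tagging: an untagged call has no dimension to roll up under.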
Most AI builders don't know what they're spending — until the invoice arrives and it's too late.
From agent management to Telegram control — Caltryx is the complete observability stack for production AI.
Start solo. Scale to enterprise. Every plan pays for itself the first time it stops a runaway bill.
All plans include BYOK · No hidden fees · Cancel anytime
Not an analytics dashboard. Not an APM tool. The observability + guardrail layer built for LLM workloads.
Everything before you integrate Caltryx into production.
Caltryx is not one feature — it's the complete observability + guardrail stack. Every system works independently, and all of them work powerfully together.
Create, edit, enable/disable agents. Set model, temperature, max tokens, and API key per agent. Full lifecycle control from a single interface.
Add your own OpenAI or Anthropic keys. Encrypted at rest. Assign per agent. Track usage per key. Multiple keys per workspace supported.
Cost per workflow, per step, per customer. The most powerful dimension in the stack — powered entirely by the tagging layer. Every dollar tracked.
Hard enforcement — not just alerts. Set daily, monthly, per-workflow, per-tenant, per-agent budgets. Kill switch auto-stops agents when limits hit.
DevOps-style tracing with correlation IDs. Full request logs, error logs, model used, response time, and complete token breakdowns per step.
Token waste detection. Large context warnings. Model swap suggestions. Unknown spend detection for untagged traffic. Anomaly detection on usage spikes.
Alerts for budget exceeded, usage spikes, agent stopped. Run /stats, /usage, /stop agent-name, /budget directly from Telegram. Lightweight remote control.
Node/JS first. One function call wraps your LLM. Attach tags, report tokens, enforce budgets, generate correlation IDs. 5-minute integration guaranteed.
Full REST endpoint. Webhook support for real-time events. Usage and CSV export. Team access with per-workspace API keys. Build on top of Caltryx.
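One way to sketch the usage-spike idea from the optimization features above (illustrative only: the trailing-mean approach and the 3x threshold are assumptions, not the actual Caltryx detector):

```javascript
// Illustrative sketch: flag a day whose spend exceeds k x the trailing mean.
function isSpike(dailySpend, k = 3) {
  const history = dailySpend.slice(0, -1);
  const today = dailySpend[dailySpend.length - 1];
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  return today > k * mean;
}

console.log(isSpike([10, 12, 11, 9, 40])); // true: 40 > 3 x 10.5
console.log(isSpike([10, 12, 11, 9, 14])); // false
```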
Three steps. One SDK. Total attribution across every agent, workflow, and customer in your stack.
One npm install. Works with any Node/JS codebase — LangChain, raw OpenAI calls, custom agents, multi-step pipelines. No lock-in.
npm install @Caltryx/sdk
One function call does everything: attaches tags, reports tokens, enforces budgets, and generates a correlation ID for full trace linking.
import { trackLLM } from '@Caltryx/sdk';
import OpenAI from 'openai';

const client = new OpenAI();

await trackLLM({
workflow: "support",
step: "planner",
user_id: "u123",
tenant_id: "acme_corp",
model: "gpt-4o",
fn: () => client.chat.completions.create({ ... })
});
From the dashboard or API: set daily or monthly budgets per agent, workflow, or tenant. Caltryx enforces them in real time — auto-stop, block, or throttle. A Telegram alert fires instantly.
POST /v1/budgets
{ "tenant_id": "acme_corp", "monthly_usd": 500,
  "action": "stop" }
Every plan includes BYOK. Every plan pays for itself the moment it catches one runaway LLM bill.
The Caltryx SDK is Node/JS first. One function wraps your LLM calls and handles everything — tags, tokens, budgets, correlation IDs.
npm install @Caltryx/sdk

import { trackLLM } from '@Caltryx/sdk';
import OpenAI from 'openai';
const client = new OpenAI();
const result = await trackLLM({
workflow: "support-bot",
step: "planner",
user_id: "user_123",
tenant_id: "acme_corp",
model: "gpt-4o",
fn: () => client.chat.completions.create({ ... })
});
// Set in dashboard or via API
POST /v1/budgets
{
"tenant_id": "acme_corp",
"monthly_usd": 500,
"action": "stop" // or "throttle" / "alert"
}
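The per-request enforcement decision implied by the config above could be sketched like this (illustrative: the exact semantics of "stop", "throttle", and "alert", and the 5-second throttle delay, are assumptions, not the actual Caltryx engine):

```javascript
// Illustrative sketch of applying a budget action to the next request.
// Amounts are in integer cents to avoid floating-point drift.
function enforce(budget, spentCents) {
  if (spentCents < budget.monthly_cents) return { allowed: true };
  switch (budget.action) {
    case "stop":     return { allowed: false, reason: "budget exceeded" };
    case "throttle": return { allowed: true, delayMs: 5000 };
    case "alert":    return { allowed: true, notify: true };
    default:         return { allowed: true };
  }
}

const budget = { tenant_id: "acme_corp", monthly_cents: 50000, action: "stop" };
console.log(enforce(budget, 49999)); // under budget: allowed
console.log(enforce(budget, 50000)); // at the cap: call blocked
```

The key design point is that the check runs before the LLM call, so a "stop" budget blocks spend rather than merely reporting it afterward.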
Full reference for the Caltryx SDK, REST API, webhook events, and dashboard configuration.
Full API for trackLLM(), tags, budget hooks, and correlation IDs.
Endpoints for budgets, usage export, agents, and workspace management.
Real-time event payloads for budget hits, agent stops, and usage spikes.
Setup guide for /stats, /stop, /budget commands and alert configuration.
Practical writing for AI builders who care about what they're spending.
Most founders discover their AI costs through the invoice. Here's how to see it coming before it destroys your margins.
An alert tells you after the damage. A kill switch prevents it. The distinction matters more than most people realize.
One key for everything is a security and attribution disaster. Here's the architecture that actually scales.
Correlation IDs are the key to understanding where your money goes inside complex agent workflows.
Every update, improvement, and fix — in reverse chronological order.
Caltryx is officially in pre-launch. Join the waitlist to be first when we open access. Early waitlist members get 3 months of Growth free.
The drop-in trackLLM() SDK for Node and JS environments. BYOK support, full tagging, budget hooks, and correlation IDs built in.
The full Caltryx dashboard — cost per workflow, per step, per customer. Agent management, budget controls, tracing view, and Telegram integration.
Token waste detection, model swap recommendations, and unknown spend flagging powered by internal analysis.
Caltryx handles API keys, usage data, and cost attribution with security-first architecture.
All API keys are encrypted at rest using AES-256. Keys are never stored in plain text, never logged, and never transmitted without encryption.
Caltryx tracks tokens, costs, and metadata — not your prompts or completions. We never store or see the content of your LLM calls.
All data is scoped to your workspace. No cross-tenant data access. Role-based permissions on Scale and Enterprise plans.
Complete audit trail for all budget changes, agent modifications, API key additions, and permission changes. Available on Enterprise plans.
Caltryx was born from a real problem: a multi-agent pipeline quietly running up a $600 bill with zero visibility into which step, workflow, or customer was responsible.
The existing tools weren't built for LLM workloads. APMs track infrastructure. Analytics dashboards show page views. Neither tells you that your summarizer agent is spending $40/day when a cheaper model would do the same job for $4.
Caltryx is infrastructure for the way AI is actually built in 2026 — multi-agent, multi-tenant, multi-model, and increasingly expensive if you're not watching it closely.
We believe every indie hacker, solo founder, and small AI startup deserves the same cost visibility and guardrail tools that enterprise companies build internally. That's what we're building.
Caltryx is currently in pre-launch. Join the waitlist — early members help shape what we prioritize.
We do one thing: AI cost observability and budget enforcement. No scope creep.
5 minutes to full visibility. No rearchitecting. No lock-in.
Built for indie hackers and solo founders — not enterprise IT departments.
The best way to reach me is a DM on X. Questions, enterprise inquiries, partnership ideas, or just your LLM cost horror story — I read everything.
DM me directly on X. I reply to every genuine message. Building Caltryx in public — feedback, feature requests, and ideas all welcome.
Open X.com / DM me →
DM on X and mention "Enterprise" — I'll schedule a call the same week.
Building for builders. Every idea goes into the roadmap. DM or quote-tweet.
Found something wrong? DM with details. Fast fixes, always.
Last updated: February 2026
Caltryx collects: email addresses from waitlist signups, token usage metadata (counts, not content), cost attribution tags you attach via the SDK, and workspace configuration data. We do not collect the content of your LLM prompts or completions.
Usage metadata is used solely to power the Caltryx dashboard — cost attribution, budget tracking, and optimization suggestions. Email addresses are used to send product updates and launch notifications. We never sell data to third parties.
API keys you add via BYOK are encrypted at rest with AES-256. They are decrypted in memory only when making authenticated requests on your behalf. They are never logged, stored in plaintext, or accessible to Caltryx staff.
Usage data is retained for the duration of your subscription plus 90 days. You can request deletion at any time by emailing [email protected].
For privacy inquiries: [email protected]
Last updated: February 2026
Caltryx provides AI cost observability and budget enforcement infrastructure. You use the service to track LLM usage, attribute costs, and enforce budgets across your AI workloads.
You are responsible for the API keys you add, the agents you configure, and the LLM calls made through your workspace. You must not use Caltryx to track or attribute costs from workloads you do not own or have permission to monitor.
Bring Your Own Key (BYOK) means you supply your own OpenAI/Anthropic credentials. Caltryx does not pay for or resell LLM usage. All LLM costs are billed directly by your provider to you.
Subscriptions are billed monthly. You can cancel at any time. No refunds for partial months, but your access continues until the end of the billing period. Enterprise contracts are governed by a separate agreement.
Caltryx is a monitoring tool. We are not responsible for LLM costs incurred before Caltryx was integrated, during outages, or where budget enforcement could not be applied due to external factors.
For terms inquiries: [email protected]