Dobby
Unified AI Access Layer

LLM Gateway

Route every LLM request through a single gateway with authentication, cost tracking, rate limiting, and policy enforcement. Support 13+ providers out of the box.

13+ LLM Providers

Claude, GPT, Gemini, Mistral, Llama, Bedrock, DeepSeek, Grok, Perplexity, and more — all through one endpoint.

3-Tier Key System

User keys (gk_user_*), service keys (gk_svc_*), and temporary keys (gk_tmp_*) with scoped permissions.

Cost Tracking

Real-time metering of every request. Track costs per agent, per task, per model. Set budgets and alerts.
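Per-request metering reduces to token counts times a per-model price table. A sketch under assumed placeholder prices (the model name and rates below are made up, not real provider pricing):

```python
# Illustrative per-request cost metering from token usage.
# Prices are placeholder values, not real provider rates.
PRICES_PER_1K = {                      # (input, output) USD per 1K tokens
    "example-model": (0.003, 0.015),
}

def meter_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Compute the cost of one request from its token counts."""
    in_price, out_price = PRICES_PER_1K[model]
    return prompt_tokens / 1000 * in_price + completion_tokens / 1000 * out_price
```

Summing these per-request costs by agent, task, or model gives the budget and alert figures.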

DLP & Content Filtering

9 PII detection patterns, secret redaction, content policy enforcement on both requests and responses.
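Pattern-based redaction of this kind can be sketched with a few regexes. Dobby ships 9 PII patterns; the two below are hypothetical stand-ins, not the gateway's actual rule set:

```python
import re

# Illustrative subset of pattern-based PII redaction; these two
# patterns are assumptions, not Dobby's real detection rules.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a [REDACTED:<type>] placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text
```

Running the same pass over both the outbound request and the inbound response covers both directions of the policy.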

Key Capabilities

OpenAI-compatible chat/completions endpoint with streaming
MCP server with JSON-RPC 2.0 and OAuth 2.1 authentication
Agent proxy for CrewAI, LangSmith, Jira, and managed services
Rate limiting per-key and per-org with Redis sliding window
Circuit breaker with automatic failover between providers
Anomaly detection and behavior profiling analytics

How It Works

1. Request arrives: An agent or user sends an LLM request via the gateway.
2. Auth & policy check: The gateway validates the key, checks policies, and enforces limits.
3. Route & execute: The request is routed to the configured provider, with failover.
4. Meter & log: The response is metered for cost, logged for audit, and returned.
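The four steps above can be sketched end to end. Every helper here is a hypothetical stand-in for illustration, not Dobby's actual implementation:

```python
# Hypothetical end-to-end sketch of the request lifecycle.
# call_provider and AUDIT_LOG are illustrative stand-ins.
def call_provider(provider: str, payload: dict) -> dict:
    """Stub provider call; the real gateway forwards over HTTP."""
    if provider == "down":
        raise ConnectionError(provider)
    return {"provider": provider, "output": "ok"}

AUDIT_LOG: list[tuple[str, dict]] = []

def handle_request(api_key: str, payload: dict, providers: list[str]) -> dict:
    # 1. Request arrives with a gateway key.
    # 2. Auth & policy check: reject keys outside the three tiers.
    if not api_key.startswith(("gk_user_", "gk_svc_", "gk_tmp_")):
        raise PermissionError("unknown key tier")
    # 3. Route & execute, failing over to the next provider on error.
    for provider in providers:
        try:
            response = call_provider(provider, payload)
            break
        except ConnectionError:
            continue
    else:
        raise RuntimeError("all providers failed")
    # 4. Meter & log for audit, then return.
    AUDIT_LOG.append((api_key, response))
    return response
```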

Ready to get started?

Connect and manage AI agents with the LLM gateway built in.