# Getting Started

Your first request through the Rivaro enforcement proxy in about 5 minutes. Covers SDK configuration, detection keys, and a working example.
Rivaro sits between your AI application and the AI provider (OpenAI, Anthropic, Azure, etc.). It intercepts every request, scans for violations (PII, prompt injection, data exfiltration, tool abuse, etc.), enforces your policies, and forwards the request to the provider. Your app gets back the same response it normally would — enforcement is transparent.
## Step 1: Create an AppContext
An AppContext tells Rivaro which AI provider you're routing to and what models are allowed. Create one in the Rivaro dashboard:
- Go to Settings > Adapters
- Select your provider (e.g. OpenAI, Anthropic, Azure OpenAI)
- Give it a name (e.g. "Production OpenAI")
- Optionally configure:
  - Allowed models — restrict which models can be used through this AppContext
  - Enabled detectors — which risk categories to scan for
  - Rate limits — requests per minute
Save it. You'll get an AppContext ID (e.g. `ac_xxxxxxxxxxxx`).
## Step 2: Create a Detection Key
A detection key authenticates your proxy requests and links them to your AppContext. Create one in the Rivaro dashboard:
- Go to Settings > Detection Keys
- Click Create Key
- Give it a name (e.g. "Production Key")
- Select the AppContext you just created
- Click Create
You'll see a key like `detect_live_aBcDeFg...`. Copy it now — it's only shown once.
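Because the key is shown only once, treat it like any other secret: keep it out of source control and load it from the environment. A minimal sketch — the variable name `RIVARO_DETECTION_KEY` and the helper below are our own convention, not part of any Rivaro SDK:

```python
import os

def require_detection_key() -> str:
    """Read the detection key from the environment and fail fast if it's absent.

    RIVARO_DETECTION_KEY is a hypothetical variable name; use whatever
    convention your deployment already follows.
    """
    key = os.environ.get("RIVARO_DETECTION_KEY", "")
    if not key.startswith("detect_"):
        raise RuntimeError("RIVARO_DETECTION_KEY is missing or malformed")
    return key
```

Calling this once at startup surfaces a missing key immediately, instead of as a 401 on your first proxied request.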
## Step 3: Change your base URL
Point your AI SDK at your Rivaro proxy instance instead of the provider directly. That's the only code change.
### OpenAI (Python)

Before:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-your-openai-key")
```

After:

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://your-org.rivaro.ai/v1",
    default_headers={
        "X-Detection-Key": "detect_live_your_key_here"
    }
)
```
### OpenAI (Node.js)

Before:

```javascript
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'sk-your-openai-key' });
```

After:

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-openai-key',
  baseURL: 'https://your-org.rivaro.ai/v1',
  defaultHeaders: {
    'X-Detection-Key': 'detect_live_your_key_here'
  }
});
```
### Anthropic (Python)

Before:

```python
from anthropic import Anthropic

client = Anthropic(api_key="sk-ant-your-key")
```

After:

```python
from anthropic import Anthropic

client = Anthropic(
    api_key="sk-ant-your-key",
    base_url="https://your-org.rivaro.ai",
    default_headers={
        "X-Detection-Key": "detect_live_your_key_here"
    }
)
```
### curl

```shell
curl https://your-org.rivaro.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-openai-key" \
  -H "X-Detection-Key: detect_live_your_key_here" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello, world!"}]
  }'
```
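If you call the proxy over raw HTTP rather than through an SDK, the same two headers from the curl example apply: the provider's API key for upstream authentication plus the Rivaro detection key. A small sketch — `rivaro_headers` is an illustrative helper of our own, not part of any Rivaro SDK:

```python
def rivaro_headers(provider_key: str, detection_key: str) -> dict:
    """Build the headers for a raw HTTP request through the Rivaro proxy:
    provider auth (forwarded upstream) plus the detection key that links
    the request to your AppContext."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {provider_key}",
        "X-Detection-Key": detection_key,
    }
```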
## Step 4: Make a request
Use your SDK exactly as you normally would. Rivaro handles enforcement transparently:
```python
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
print(response.choices[0].message.content)
```
If enforcement allows the request, you get the provider's response back unchanged. If a policy blocks the request, you'll get an error response from Rivaro (see Error responses below).
## Step 5: See enforcement results
Open the Rivaro dashboard. You'll see your request in the activity feed with:
- Detections — what Rivaro found (PII, prompt injection, etc.)
- Policy action — what happened (allowed, logged, blocked, redacted)
- Risk classification — severity, risk domain, risk category
If no policies are configured, Rivaro runs in observation mode — it detects and logs violations but doesn't block anything. Configure policies in the dashboard to start enforcing.
## Error responses
When Rivaro itself rejects a request (not the AI provider), you'll get:
| HTTP Status | Meaning | Example |
|---|---|---|
| 401 | Detection key missing or invalid | `{"error": "Detection key required for proxy endpoints."}` |
| 403 | Model not in allowed list | `{"error": "The requested model is not permitted."}` |
| 429 | Rate limit exceeded | `{"error": "Rate limit exceeded"}` |
| 451 | Request blocked by policy | `{"error": "Request blocked by enforcement policy"}` |
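These statuses suggest different recovery strategies: 401 and 403 are configuration problems that retrying won't fix, 429 is retryable after a delay, and 451 should usually be surfaced to the user. One way to sketch that mapping — the action names below are illustrative, not a Rivaro API:

```python
# Map Rivaro enforcement statuses (from the table above) to app-level actions.
RIVARO_STATUS_ACTIONS = {
    401: "fix_credentials",     # detection key missing/invalid: don't retry
    403: "fix_model",           # model not in allowed list: don't retry
    429: "retry_with_backoff",  # rate limited: retry after a delay
    451: "handle_block",        # blocked by policy: surface to the user
}

def rivaro_action(status_code: int) -> str:
    """Suggested handling for a Rivaro-originated error status; anything
    else is treated as a passed-through provider error."""
    return RIVARO_STATUS_ACTIONS.get(status_code, "passthrough")
```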
Errors from the AI provider (e.g. invalid provider API key, quota exceeded) are passed through in the provider's own format.
## Streaming

Streaming works out of the box. Use `stream=True` (Python) or `stream: true` (Node.js/curl) as you normally would. Rivaro streams chunks back in real time with enforcement applied.
```python
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```
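Each streamed chunk carries an optional content delta, which can be `None` (the loop above guards against that with `or ""`). If you also need the full text after the stream ends, accumulate the deltas; a minimal sketch, with `join_stream_deltas` being our own helper name:

```python
def join_stream_deltas(deltas) -> str:
    """Concatenate streamed content deltas into the full response text,
    skipping None entries (chunks that carry no content)."""
    return "".join(d for d in deltas if d)
```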
## What's next
- Provider-specific guides — OpenAI · Anthropic · Azure OpenAI · AWS Bedrock · Vertex AI
- Configuration Guide — AppContexts, detection keys, rate limits, and organization setup
- Enforcement & Policies — Configure what happens when violations are detected
- Understanding Detections — What Rivaro scans for and how detections are classified
- Error Handling — Handle Rivaro-specific errors, retry strategies, and debugging
- API Reference — Full endpoint reference for all supported providers