Enterprise AI Governance
Enforce guardrails, track usage, manage projects, and observe all AI operations from a single control plane.
Overview
Enterprise AI deployments need governance: who can access what, how much they're spending, what the AI is actually saying, and whether it's staying within policy. Cognipeer provides this through Console's observability layer and Agent SDK's runtime guardrails.
This guide covers setting up enterprise-grade governance across your AI infrastructure.
Architecture
Console provides the control plane: project management, API key scoping, usage tracking, tracing, and dashboard-level observability.
Agent SDK adds runtime-level governance with input/output guardrails, content filtering, and approval workflows.
1. Project-Scoped API Keys
Console organises resources into projects. Each project gets its own API keys, models, and usage quotas.
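Rate limits such as "100 req/min" are enforced server-side by Console, so your application only sees accepted or rejected requests. Conceptually, a per-project quota behaves like a token bucket; the sketch below is purely illustrative (class and parameter names are ours, not Console's implementation):

```typescript
// Illustrative token bucket, similar in spirit to a per-project
// "N requests per minute" quota. Not Console's actual implementation.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,    // e.g. 100 for "100 req/min"
    private refillPerMs: number, // tokens added back per millisecond
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if rate-limited.
  tryConsume(now: number = Date.now()): boolean {
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// "Customer Support" quota: 100 requests per minute.
const supportLimit = new TokenBucket(100, 100 / 60_000);
```

The refill rate means a project that bursts through its quota recovers gradually rather than waiting for a fixed window to reset.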
// Each project has isolated API keys
// Configure via Console dashboard:
//
// Project: "Customer Support"
// - API Key: cp_support_xxx
// - Allowed models: gpt-4o, claude-3.5-sonnet
// - Rate limit: 100 req/min
//
// Project: "Internal Tools"
// - API Key: cp_internal_xxx
// - Allowed models: gpt-4o-mini
// - Rate limit: 500 req/min
// In your application, use the project-scoped key
const client = new ConsoleClient({
apiKey: "cp_support_xxx", // Scoped to "Customer Support"
baseURL: "https://your-console.example.com",
});
2. Agent Guardrails
Apply input and output guardrails to control what agents can say and do.
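To make the output side concrete before the full configuration below: a `pii_filter` with `mask: true` conceptually rewrites matched spans before the response leaves the agent. This standalone sketch uses deliberately simplified patterns of our own; the SDK's actual detection logic is not shown here:

```typescript
// Illustrative PII masking, mirroring what an output guardrail like
// pii_filter with mask: true conceptually does. The regexes are
// simplified examples, not the SDK's detection logic.
const PII_PATTERNS: Record<string, RegExp> = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/g,
  email: /[\w.+-]+@[\w-]+(\.[\w-]+)+/g,
  phone: /\b\d{3}[-.]\d{3}[-.]\d{4}\b/g,
};

function maskPii(text: string, types: string[] = ["email", "phone", "ssn"]): string {
  let result = text;
  for (const type of types) {
    const pattern = PII_PATTERNS[type];
    if (pattern) result = result.replace(pattern, `[${type} redacted]`);
  }
  return result;
}
```

Masking (rather than rejecting) lets the agent still answer the user while keeping sensitive values out of logs, traces, and downstream systems.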
import { createSmartAgent } from "@cognipeer/agent-sdk";
const governedAgent = createSmartAgent({
name: "GovernedAssistant",
model, // a configured chat model instance
tools: [/* ... */],
guardrails: {
input: [
// Block prompt injection attempts
{ type: "injection_detection", config: { sensitivity: "high" } },
// Filter inappropriate content
{ type: "content_filter", config: { categories: ["hate", "violence"] } },
],
output: [
// Mask personal information
{ type: "pii_filter", config: { mask: true, types: ["email", "phone", "ssn"] } },
// Enforce response length
{ type: "length_limit", config: { maxTokens: 2000 } },
],
},
// Require human approval for sensitive actions
humanInTheLoop: {
requireApproval: ["send_email", "delete_record", "update_account"],
},
tracing: { enabled: true },
});
3. Observability & Tracing
Console traces every request through the system — from API call to provider response. Use the tracing API to ingest agent-level traces too.
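The example below sends a `tool_call` event with an explicit `duration`; in practice you would typically wrap each tool invocation so timing is measured rather than hard-coded. A minimal sketch, where `sendEvent` is a stand-in for `client.tracing.sessions.sendEvent(sessionId, event)` and the event shape follows the example below:

```typescript
// Wrap a tool call so its timing and I/O are captured as a trace event.
// `sendEvent` is a stand-in for the Console SDK's sendEvent call.
type TraceEvent = {
  type: string;
  data: { tool: string; input: unknown; output: unknown; duration: number };
};

async function tracedToolCall<I, O>(
  tool: string,
  input: I,
  run: (input: I) => Promise<O>,
  sendEvent: (event: TraceEvent) => Promise<void>,
): Promise<O> {
  const started = Date.now();
  const output = await run(input);
  await sendEvent({
    type: "tool_call",
    data: { tool, input, output, duration: Date.now() - started },
  });
  return output;
}
```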
// Console SDK tracing integration
const session = await client.tracing.sessions.create({
agentId: "governed-assistant",
metadata: {
userId: "user-123",
project: "customer-support",
},
});
// Send streaming trace events
await client.tracing.sessions.startStream(session.id);
await client.tracing.sessions.sendEvent(session.id, {
type: "tool_call",
data: {
tool: "lookup_order",
input: { orderId: "ORD-456" },
output: { status: "shipped" },
duration: 120,
},
});
await client.tracing.sessions.endStream(session.id);
// View in Console dashboard:
// - Session timeline with all events
// - Token usage and cost per session
// - Tool call success/failure rates
// - Guardrail trigger frequency
Result
You now have enterprise AI governance that:
- Isolates projects with scoped API keys and quotas
- Guards inputs against prompt injection and inappropriate content
- Filters outputs to mask PII and enforce policies
- Requires human approval for sensitive operations
- Traces every request, tool call, and decision
- Surfaces usage, cost, and compliance metrics in dashboards