AI-Powered Applications
Integrate AI capabilities into existing applications with OpenAI-compatible APIs, provider routing, and type-safe SDKs.
Overview
Adding AI capabilities to existing applications shouldn't require a rewrite. Cognipeer Console provides an OpenAI-compatible API, so any existing OpenAI integration works out of the box — while giving you multi-provider routing, fallback, caching, and full observability.
This guide shows how to add AI features to a TypeScript application using Console as a drop-in gateway.
Architecture
Console acts as your AI gateway — routing requests to multiple providers (OpenAI, Anthropic, etc.) with automatic fallback and health-based routing.
Console SDK provides a type-safe TypeScript client with streaming support and full coverage of Console features.
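To illustrate what "fallback and health-based routing" means, here is a minimal client-side sketch of the idea. This is purely illustrative: Console performs this routing server-side, and the `Provider` type and `routeWithFallback` helper below are our own names, not part of any SDK.

```typescript
// Illustrative only: Console does this kind of routing server-side.
// The Provider shape and helper names here are hypothetical.
type Provider = { name: string; healthy: () => boolean };

async function routeWithFallback(
  providers: Provider[],
  call: (p: Provider) => Promise<string>
): Promise<string> {
  for (const p of providers) {
    if (!p.healthy()) continue; // health-based routing: skip unhealthy providers
    try {
      return await call(p); // first healthy provider that succeeds wins
    } catch {
      // automatic fallback: move on to the next provider
    }
  }
  throw new Error("All providers failed");
}
```

The gateway applies the same logic to every request, so application code only ever sees a single model key.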
1. Drop-In OpenAI Replacement
If you already use the OpenAI SDK, you can switch to Cognipeer Console by changing only the base URL and API key. No other code changes are needed.
import OpenAI from "openai";
// Before: Direct OpenAI
// const openai = new OpenAI({ apiKey: "sk-..." });
// After: Through Cognipeer Console (same API!)
const openai = new OpenAI({
apiKey: process.env.COGNIPEER_API_KEY,
baseURL: "https://your-console.example.com/api/client/v1",
});
// Everything works exactly the same
const response = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Hello!" }],
});
2. Use Console SDK for Full Features
For type-safe access to all Console features — chat, embeddings, vectors, files, tracing — use the Console SDK.
import { ConsoleClient } from "@cognipeer/console-sdk";
const client = new ConsoleClient({
apiKey: process.env.COGNIPEER_API_KEY!,
baseURL: "https://your-console.example.com",
});
// Chat with streaming
const stream = await client.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Summarise this document" }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
3. Multi-Provider Routing
Console routes your requests across providers with automatic fallback. Configure primary and fallback models in the Console dashboard, then use a single model key in your code.
// Console handles routing behind the scenes
// If provider A is down, it falls back to provider B
// Your code just uses one model key:
const response = await client.chat.completions.create({
model: "my-primary-model", // Configured in Console dashboard
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: userInput },
],
temperature: 0.7,
});
// Embeddings work the same way
const embeddings = await client.embeddings.create({
model: "my-embedding-model",
input: ["Text to embed"],
});
4. Observability & Tracing
Every request through Console is automatically traced. You can view latency, token usage, and errors in the Console dashboard — no extra instrumentation needed.
// All requests are automatically traced
const response = await client.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Hello" }],
});
// Response includes request_id for correlation
console.log("Request ID:", response.request_id);
// View traces in Console dashboard:
// - Latency per request
// - Token usage and cost
// - Provider routing decisions
// - Error rates and fallback events
Result
You now have AI integrated into your application with:
- Zero migration cost — OpenAI-compatible API works with existing code
- Provider resilience — Automatic fallback across multiple LLM providers
- Full observability — Every request traced with latency, tokens, and cost
- Type safety — Console SDK with full TypeScript support
- Streaming — Real-time response streaming out of the box
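Even with gateway-side fallback, client code often adds a thin retry layer for transient network errors between your app and the gateway. A minimal sketch (the `withRetry` helper is our own, not part of the Console SDK):

```typescript
// A small client-side retry helper for transient failures.
// Not part of the Console SDK; shown as a common companion pattern.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Usage would wrap any SDK call, e.g. `await withRetry(() => client.chat.completions.create({ ... }))`.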