Agent Development With Tracing
Develop custom agents with Agent SDK and feed execution traces into Console so teams can compare runs, inspect tool behavior, and debug workflows faster.
Overview
Teams building custom agents usually hit the same wall: the agent works in a local test, but it is hard to compare runs, inspect tool sequences, and understand failures once multiple prompts, tools, and datasets are involved.
A practical pattern is to keep agent construction in Agent SDK while using Console as the place where traces are collected and reviewed.
Architecture
Agent SDK owns the runtime logic, tools, and control flow. Console SDK sends structured tracing data into Console, where product and platform teams can inspect sessions, thread-level workflows, latency, and token usage.
This creates a clean split: build agents where you need determinism, observe them where you need operational visibility.
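To make the split concrete, it helps to pin down the shape of the data crossing the boundary. The sketch below is inferred from the ingest examples in this article, not from the Console SDK's published types, so treat the field list as an assumption:

```typescript
// Hypothetical shape of a trace ingest payload, inferred from the
// examples in this article -- not the Console SDK's published types.
interface TraceEvent {
  type: 'ai_call' | 'tool_call';
  name?: string;
  durationMs?: number;
}

interface TraceIngestPayload {
  sessionId: string;
  threadId: string;
  source: 'custom';
  status: 'success' | 'error';
  startedAt: string; // ISO 8601 timestamps
  endedAt: string;
  durationMs: number;
  agent: { name: string; version: string; model: string };
  summary: {
    totalInputTokens: number;
    totalOutputTokens: number;
    totalCachedInputTokens: number;
    totalBytesIn: number;
    totalBytesOut: number;
    eventCounts: Record<string, number>;
  };
  events: TraceEvent[];
  errors: unknown[];
}

// A minimal well-formed payload for illustration.
const example: TraceIngestPayload = {
  sessionId: 'sess_demo',
  threadId: 'thread_demo',
  source: 'custom',
  status: 'success',
  startedAt: new Date(0).toISOString(),
  endedAt: new Date(1480).toISOString(),
  durationMs: 1480,
  agent: { name: 'PolicyReviewer', version: '0.3.0', model: 'gpt-4o-mini' },
  summary: {
    totalInputTokens: 1240,
    totalOutputTokens: 420,
    totalCachedInputTokens: 0,
    totalBytesIn: 18000,
    totalBytesOut: 6200,
    eventCounts: { ai_call: 2, tool_call: 2 },
  },
  events: [{ type: 'ai_call' }, { type: 'tool_call' }],
  errors: [],
};
```

Keeping the payload agent-agnostic (plain identifiers, timestamps, and counters) is what lets Console compare runs from different agent implementations side by side.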
1. Build The Agent In Agent SDK
Keep the runtime local to your application or service, but give each run a stable session and thread identity.
import { createSmartAgent, createTool } from '@cognipeer/agent-sdk';

const agent = createSmartAgent({
  name: 'PolicyReviewer',
  model,
  tools: [searchPolicyDocs, summarizeFindings],
  systemPrompt: 'Review uploaded policies and produce structured findings.',
  useTodoList: true,
  tracing: { enabled: true },
});

const sessionId = 'sess_' + Date.now();
const threadId = 'thread_policy-review-2026-03';

2. Ingest Execution Data Into Console
After a run completes, send the session summary and important events into Console for later comparison and debugging.
import { ConsoleClient } from '@cognipeer/console-sdk';

const client = new ConsoleClient({
  apiKey: process.env.COGNIPEER_API_KEY!,
  baseURL: 'https://console.example.com',
});

await client.tracing.ingest({
  sessionId,
  threadId,
  source: 'custom',
  status: 'success',
  startedAt, // ISO timestamp captured before the run started
  endedAt: new Date().toISOString(),
  durationMs: 1480,
  agent: {
    name: 'PolicyReviewer',
    version: '0.3.0',
    model: 'gpt-4o-mini',
  },
  summary: {
    totalInputTokens: 1240,
    totalOutputTokens: 420,
    totalCachedInputTokens: 0,
    totalBytesIn: 18000,
    totalBytesOut: 6200,
    eventCounts: {
      ai_call: 2,
      tool_call: 2,
    },
  },
  events: traceEvents, // structured events collected during the run
  errors: [],
});

3. Compare Agent Iterations By Thread
Thread correlation is useful when you are iterating on the same workflow across prompt versions, model changes, or tool updates.
async function runReview(promptVersion: string, input: string) {
  const startedAt = new Date().toISOString();
  const result = await agent.invoke({
    messages: [{ role: 'user', content: input }],
  });
  const endedAt = new Date().toISOString();

  await client.tracing.ingest({
    sessionId: sessionId + '-' + promptVersion,
    threadId, // shared thread ID ties the iterations together
    source: 'custom',
    status: 'success',
    startedAt,
    endedAt,
    durationMs: Date.parse(endedAt) - Date.parse(startedAt),
    agent: { name: 'PolicyReviewer', version: promptVersion, model: 'gpt-4o-mini' },
    summary, // token and byte totals aggregated from the run
    events: buildTraceEvents(result),
    errors: [],
  });
}

Result
You get a development workflow that:
- Builds custom agents in Agent SDK without giving up observability
- Captures tool calls, latency, and token usage in Console
- Compares multiple agent versions through shared thread IDs
- Speeds up debugging for prompt, model, and tool-chain changes
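The `buildTraceEvents` helper used in step 3 is left undefined above. A minimal sketch follows, assuming the agent result exposes a flat list of steps with a kind, name, and duration; that shape is an assumption for illustration, not the Agent SDK's actual result type:

```typescript
// Assumed result shape -- the real Agent SDK result type may differ.
interface AgentStep {
  kind: 'ai' | 'tool';
  name: string;
  durationMs: number;
}

interface AgentResult {
  steps: AgentStep[];
}

interface TraceEvent {
  type: 'ai_call' | 'tool_call';
  name: string;
  durationMs: number;
}

// Map agent steps to the event records Console ingests, preserving order.
function buildTraceEvents(result: AgentResult): TraceEvent[] {
  return result.steps.map((step): TraceEvent => ({
    type: step.kind === 'ai' ? 'ai_call' : 'tool_call',
    name: step.name,
    durationMs: step.durationMs,
  }));
}

// Derive the eventCounts field of the session summary from the events.
function countEvents(events: TraceEvent[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const e of events) counts[e.type] = (counts[e.type] ?? 0) + 1;
  return counts;
}
```

Deriving `eventCounts` from the same event list that gets ingested keeps the summary and the events consistent, which matters when Console aggregates counts across runs.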