Agent traces
Introduced 3.6
Agent traces provide observability for generative AI applications and large language model (LLM) agents. Unlike general application monitoring, agent traces specialize in tracking LLM calls, token usage, tool invocations, and agent reasoning flows using generative AI semantic conventions.
Prerequisites
To use agent traces, you need the following:
- An OpenSearch cluster – Stores trace data in `otel-v1-apm-span-*` indexes.
- OpenSearch Dashboards – Provides the Agent Traces panel for visualization.
- OpenSearch Data Prepper – Processes and stores trace data in OpenSearch.
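These components fit together as a Data Prepper trace pipeline. The following is a minimal sketch, assuming a local, unsecured OpenSearch cluster at `localhost:9200`; the pipeline names are illustrative, and option names such as `otel_trace_source`, `otel_traces`, and `index_type` should be verified against your Data Prepper version:

```yaml
# Sketch of a Data Prepper trace pipeline (verify options for your version).
entry-pipeline:
  source:
    otel_trace_source:        # Receives OTLP trace data from the collector
      ssl: false
  sink:
    - pipeline:
        name: "raw-trace-pipeline"
raw-trace-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  processor:
    - otel_traces:            # Converts OTLP spans into OpenSearch documents
  sink:
    - opensearch:
        hosts: ["https://localhost:9200"]
        index_type: trace-analytics-raw   # Writes to otel-v1-apm-span-* indexes
```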
The agent traces workflow
The agent traces workflow consists of the following stages:
- Instrument: Add observability to your AI applications using the GenAI SDK, which contains decorators, enrichment functions, and automatic LLM instrumentation for libraries such as OpenAI, Anthropic, Amazon Bedrock, and LangChain.
- Normalize: The OpenTelemetry Collector standardizes spans using generative AI semantic conventions, ensuring consistent attribute names across providers.
- Use local tooling (optional): Use Agent Health for local debugging, scoring, and evaluation during development.
- Process: OpenSearch Data Prepper routes OpenTelemetry Protocol (OTLP) data into OpenSearch, creating service maps, correlating traces, and aggregating metrics.
- View: OpenSearch Dashboards displays agent traces as hierarchical trees, directed acyclic graphs (DAGs), and timelines, along with metrics for production monitoring.
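The Normalize stage is what makes spans from different providers comparable: after normalization, every LLM span carries the same generative AI attribute names regardless of which library produced it. The following sketch shows what such a normalized span might look like; the `gen_ai.*` attribute names follow the OpenTelemetry generative AI semantic conventions, but the span values are illustrative and should be checked against your collector version:

```python
# Sketch of a normalized LLM span, assuming OpenTelemetry generative AI
# semantic conventions. Field values are illustrative only.
llm_span = {
    "name": "chat claude-3-haiku",             # operation + model name
    "trace_id": "0af7651916cd43dd8448eb211c80319c",
    "span_id": "b7ad6b7169203331",
    "parent_span_id": None,                    # root span of the agent run
    "attributes": {
        "gen_ai.operation.name": "chat",
        "gen_ai.system": "anthropic",          # provider identifier
        "gen_ai.request.model": "claude-3-haiku",
        "gen_ai.usage.input_tokens": 412,
        "gen_ai.usage.output_tokens": 128,
    },
}

def total_tokens(span):
    """Sum input and output token usage from a span's attributes."""
    attrs = span["attributes"]
    return attrs["gen_ai.usage.input_tokens"] + attrs["gen_ai.usage.output_tokens"]

print(total_tokens(llm_span))  # 540
```

Because every provider's spans share these attribute names after normalization, downstream aggregations such as token-usage metrics can be computed uniformly across OpenAI, Anthropic, Amazon Bedrock, and other backends.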
Agent traces support the following frameworks and providers:
- Frameworks: Strands Agents, LangGraph, CrewAI, and the OpenAI Agents SDK.
- Providers: OpenAI, Anthropic, Amazon Bedrock, LangChain, LlamaIndex, and other LLM providers.
Getting started
To start using agent traces, explore the following topics:
- Instrument your application – Install the SDK and add tracing to your AI agents.
- Viewing agent traces – Configure and explore agent traces in OpenSearch Dashboards.