### 3. Visualize in Middleware

Once your LLM application is instrumented, you can explore the telemetry data in Middleware:

1. **Navigate to LLM Observability**: Go to your [Middleware Dashboard](https://app.middleware.io/) and click on **LLM Observability** in the sidebar
2. **Explore AI Operations**: View traces from your AI application, including:
   - LLM request traces with detailed timing
   - Token usage and cost information (populated by your instrumentation; see the sketch after this list)
   - Vector database operations
   - Model performance analytics
   - Request/response payloads (if payload capture is enabled in your instrumentation)
3. **Custom Dashboards**: Build dashboards tailored to your LLM metrics, such as token usage per model or cost per request
4. **Alerting**: Set up alerts for LLM performance anomalies and cost thresholds
5. **Performance Analysis**: Analyze latency, throughput, and resource usage patterns

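The token and model details shown in these views come from attributes on your traces. As a minimal sketch, assuming your instrumentation is OpenTelemetry-based, the example below attaches model metadata and token counts to a custom span; the attribute names follow the incubating OpenTelemetry GenAI semantic conventions and may differ from what your instrumentation library actually emits, so treat this as illustrative rather than Middleware-specific.

```python
from opentelemetry import trace

# Hypothetical instrumentation scope name for this example.
tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("chat gpt-4o") as span:
    # Model metadata lets traces be grouped by provider and model.
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4o")

    # ... make your LLM call here ...

    # Token counts (example values) feed the usage and cost views.
    span.set_attribute("gen_ai.usage.input_tokens", 142)
    span.set_attribute("gen_ai.usage.output_tokens", 57)
```

If you use an auto-instrumentation library for your LLM provider, attributes like these are typically recorded for you; manual attributes are only needed on custom spans.
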
For detailed information on LLM Observability features, consult the [Middleware LLM Observability Documentation](https://docs.middleware.io/llm-observability/overview).
