Opik is an [open-source](https://github.com/comet-ml/opik) logging, debugging, and optimization
platform for AI agents and LLM applications. If you're building AI features, you know it's easy to
spin up a working prototype but much harder to log, test, iterate on, and monitor it well enough to
meet production requirements.

Opik gives you all the tools you need to go from LLM observability to action across your AI
application footprint and dev cycle. Ship measurable improvements with gorgeous logs, annotation and
scoring functions, pre-configured LLM-as-a-judge [eval metrics](/evaluation/metrics/overview), and
even [automated agent optimization algorithms](/agent_optimization/overview) to maximize performance. 

## End-to-End AI Engineering

<Frame>
  <img src="/img/home/EndToEnd-Engineering-Diagram.jpg" alt="Diagram of the end-to-end AI engineering workflow in Opik" />
</Frame>

<Tip>
  Opik is open source! You can find the full source code on [GitHub](https://github.com/comet-ml/opik) and the complete
  self-hosting guide [here](/self-host/local_deployment).
</Tip>

## Core Functions

<CardGroup cols={2}>
  <Card title="Quickstart Guide" href="/quickstart" icon="fa-solid fa-rocket" iconPosition="left">
    Opik integrates with your existing AI stack through your model provider or LLM framework.
  </Card>
  <Card title="LLM Observability - Log LLM Traces" href="/tracing/log_traces" icon="fa-solid fa-eye" iconPosition="left">
    Traces give you instant visibility into what's working, what's not, and why, with advanced
    analysis and debugging features built in.
  </Card>
  <Card title="Evaluation - Score Performance" href="/evaluation/overview" icon="fa-solid fa-chart-line" iconPosition="left">
    Use LLM-as-a-judge and heuristic eval metrics to score your app or agent on hallucination,
    context recall, and more. 
  </Card>
  <Card title="Agent Optimization" href="/agent_optimization/overview" icon="fa-solid fa-brain" iconPosition="left">
    Choose from six advanced optimization algorithms to auto-generate and score the best prompts for
    the steps in your agentic system. 
  </Card>
  <Card title="Prompt Engineering" href="/prompt_engineering/prompt_management" icon="fa-solid fa-wand-magic-sparkles" iconPosition="left">
    Store and version system prompts, compare results live in the [Prompt Playground](/prompt_engineering/playground),
    and experiment with different models via our LLM proxy.
  </Card>
  <Card title="Self-hosting Opik" href="/self-host/overview" icon="fa-solid fa-server" iconPosition="left">
    Deploy Opik on your own infrastructure with local or Kubernetes deployment options.
  </Card>
</CardGroup>

## Video Tutorials

Prefer a visual guide? Follow along as we cover everything from basic setup and trace logging to
LLM evaluation metrics, production monitoring, and more.

<Frame>
  <iframe
    width="100%"
    height="500px"
    src="https://www.youtube-nocookie.com/embed/TO9ar6-OJj4?rel=0"
    frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen"
    referrerpolicy="strict-origin-when-cross-origin"
    allowfullscreen
  ></iframe>
</Frame>

<Tip>
  You can find a full set of video tutorials in the [Opik University](/opik-university/overview).
</Tip>

## Open-Source Access Meets Enterprise Performance

All Opik versions ([cloud](https://www.comet.com/signup?from=llm),
[open source](https://github.com/comet-ml/opik), and
[enterprise](https://www.comet.com/site/pricing/)) include the full AI engineering feature set
and run on the Comet platform, with proven performance at scale supporting many of the world's
largest organizations.

Compare Opik to other LLM observability tools and you'll find that traces populate faster,
evaluations run more smoothly, and reliability comes standard, even for complex agentic systems
serving millions of users in production.

## Join Our Bounty Program!

Want to contribute to Opik and get rewarded for your efforts? Check out our
[Bounty Program](/contributing/developer-programs/bounties) to find exciting tasks and help us
grow the platform!
