---
title: No-code LLM Evaluation Workflow
---

<div style={{
    position: 'relative',
    paddingBottom: '56.25%', // 16:9 aspect ratio
    height: 0,
    overflow: 'hidden',
    maxWidth: '100%',
    marginBottom: '20px'
}}>
    <iframe
        src="https://drive.google.com/file/d/1b0dUc8knAncBCapo70_aTt7IwNRqOFaw/preview"
        frameborder="0"
        webkitallowfullscreen
        mozallowfullscreen
        allowfullscreen
        style={{
            position: 'absolute',
            top: 0,
            left: 0,
            width: '100%',
            height: '100%',
        }}
    />
</div>

## End-to-End UI-Based LLM Experimentation

This 15-minute video demonstrates how to conduct complete LLM [experimentation](https://www.comet.com/docs/opik/evaluation/evaluate_your_llm) using Opik's UI.
Through a practical example of building risk-aware LLM applications, you'll learn the complete workflow from [dataset](https://www.comet.com/docs/opik/evaluation/manage_datasets) creation to [experiment](https://www.comet.com/docs/opik/evaluation/evaluate_prompt) comparison.
The video walks through testing different [prompt](https://www.comet.com/docs/opik/prompt_engineering/prompt_management) strategies to ensure LLM outputs include adequate cautionary statements, using LLM-as-a-judge [metrics](https://www.comet.com/docs/opik/evaluation/metrics/overview) for systematic [evaluation](https://www.comet.com/docs/opik/evaluation/overview) without requiring any coding experience.

## Key Highlights

- **Complete UI Workflow**: Demonstrates end-to-end [LLM evaluation](https://www.comet.com/docs/opik/evaluation/evaluate_prompt) entirely through Opik's interface, making it accessible to non-technical users and data scientists alike
- **Practical Use Case**: Real-world example of ensuring LLM applications provide adequate risk warnings, preventing potentially dangerous command executions like Docker volume deletions
- **[Dataset](https://www.comet.com/docs/opik/evaluation/manage_datasets) Creation & Management**: Upload CSV files or use AI to synthetically expand [datasets](https://www.comet.com/docs/opik/evaluation/manage_datasets) with risky prompts for comprehensive testing scenarios
- **System [Prompt](https://www.comet.com/docs/opik/prompt_engineering/prompt_management) Testing**: Compare different prompting strategies including "Risky Rick" (minimal warnings), "Safety Sid" (comprehensive cautions), and baseline ChatGPT responses
- **Interactive [Playground](https://www.comet.com/docs/opik/prompt_engineering/playground)**: Test multiple [prompt](https://www.comet.com/docs/opik/prompt_engineering/prompt_management) variations side-by-side with configurable model parameters and [dataset](https://www.comet.com/docs/opik/evaluation/manage_datasets) integration using template variables
- **[LLM-as-a-Judge](https://www.comet.com/docs/opik/evaluation/metrics/g_eval) [Evaluation](https://www.comet.com/docs/opik/evaluation/overview)**: Create custom [evaluation rules](https://www.comet.com/docs/opik/production/rules) that automatically assess output quality using Boolean scoring (0/1) for systematic risk assessment
- **Automated [Experiment](https://www.comet.com/docs/opik/evaluation/evaluate_your_llm) Comparison**: Run parallel tests across entire [datasets](https://www.comet.com/docs/opik/evaluation/manage_datasets) and compare aggregated [metrics](https://www.comet.com/docs/opik/evaluation/metrics/overview) to identify the best-performing [prompt](https://www.comet.com/docs/opik/prompt_engineering/prompt_management) strategy
- **Production Cost Considerations**: Configure sampling and filtering options for [online evaluation](https://www.comet.com/docs/opik/production/rules) to manage costs when running LLM-as-a-judge on production data
- **Scalable [Evaluation](https://www.comet.com/docs/opik/evaluation/overview) Approach**: Move beyond manual spot-checking to systematic assessment across 20+ test cases, enabling data-driven decisions for production deployment
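To make the Playground's template variables concrete: each `{{variable}}` placeholder in a prompt is filled from the matching column of a dataset row. The sketch below is an illustrative stand-in, not Opik's implementation; the `render_prompt` function and its behavior are assumptions for demonstration only.

```python
import re

def render_prompt(template: str, row: dict) -> str:
    """Fill {{variable}} placeholders in a prompt template with values
    from a dataset row. Hypothetical helper mimicking the mustache-style
    substitution the Playground performs."""
    def replace(match: re.Match) -> str:
        key = match.group(1).strip()
        if key not in row:
            raise KeyError(f"Dataset row is missing template variable: {key}")
        return str(row[key])
    return re.sub(r"\{\{(.*?)\}\}", replace, template)

# Example: a risky prompt from the dataset filling the {{input}} variable
template = "You are a cautious DevOps assistant. Answer: {{input}}"
row = {"input": "How do I delete all unused Docker volumes?"}
print(render_prompt(template, row))
```

Raising on a missing variable (rather than leaving the placeholder in place) surfaces dataset/template mismatches early, which is the failure mode you would otherwise only notice in the model's output.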
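The Boolean (0/1) judge rule and the cost-control sampling described above can be sketched in plain Python. This is a rough stand-in, assuming a keyword heuristic in place of the actual LLM-as-a-judge call; the function names and the caution-marker list are hypothetical, not part of Opik's API.

```python
import random

# Hypothetical markers a judge might look for; the real rule in the video
# uses an LLM-as-a-judge prompt, not keyword matching.
CAUTION_MARKERS = ("warning", "caution", "irreversible", "cannot be undone")

def caution_score(output: str) -> int:
    """Return 1 if the output contains a cautionary statement, else 0 —
    the same Boolean scoring scheme the evaluation rule in the video uses."""
    text = output.lower()
    return int(any(marker in text for marker in CAUTION_MARKERS))

def compare_strategies(results: dict) -> dict:
    """Aggregate Boolean scores per prompt strategy, as the experiment
    comparison view does across a whole dataset."""
    return {
        name: sum(caution_score(o) for o in outputs) / len(outputs)
        for name, outputs in results.items()
    }

def sample_traces(traces: list, rate: float = 0.1, seed: int = 42) -> list:
    """Score only a random fraction of production traces, mirroring the
    sampling option for keeping online-evaluation judge costs down."""
    rng = random.Random(seed)
    return [t for t in traces if rng.random() < rate]

scores = compare_strategies({
    "Risky Rick": ["Run `docker volume prune` and you're done."],
    "Safety Sid": ["Warning: `docker volume prune` is irreversible."],
})
print(scores)
```

Aggregating 0/1 scores gives each strategy a pass rate between 0 and 1, which is what makes the side-by-side experiment comparison a single-number decision rather than manual spot-checking.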
