Yannis Katsis committed · Commit bcb2553 · Parent(s): 6c9e7f7

feat: Add Ragas integration notebook (#9)

* feat: Ragas demonstration
* chore: update README with integration notebooks

Files changed:
- README.md +24 -6
- notebooks/Ragas_Demonstration.ipynb +554 -0
README.md CHANGED

@@ -2,13 +2,15 @@

InspectorRAGet, an introspection platform for RAG evaluation. InspectorRAGet allows the user to analyze aggregate and instance-level performance of RAG systems, using both human and algorithmic metrics as well as annotator quality.

-
-[![InspectorRAGet on the case!](https://img.youtube.com/vi/MJhe8QIXcEc/0.jpg)](https://www.youtube.com/watch?v=MJhe8QIXcEc)
+InspectorRAGet has been developed as a [React](https://react.dev/) web application built with the [NextJS 14](https://nextjs.org/) framework and the [Carbon Design System](https://carbondesignsystem.com/).

-
-
+## 🎥 Demo
+[![InspectorRAGet on the case!](https://img.youtube.com/vi/vB7mJnSNx7s/0.jpg)](https://www.youtube.com/watch?v=vB7mJnSNx7s)

## 🏗️ Build & Deploy
+
+To install and run InspectorRAGet, follow the steps below:
+
### Installation
We use yarn as a default package manager.

@@ -38,9 +40,25 @@ yarn start

## Usage

-Once you
+Once you have started InspectorRAGet, the next step is to import a JSON file with the evaluation results in the format expected by the platform. You can do this in one of two ways:
+- Use one of our [integration notebooks](#use-inspectorraget-through-integration-notebooks), which show how to use InspectorRAGet in combination with popular evaluation frameworks.
+- Manually convert the evaluation results into the expected format by consulting the [documentation of InspectorRAGet's file format](#use-inspectorraget-by-manually-creating-input-file).
+
+## Use InspectorRAGet through integration notebooks
+
+To make it easier to get started, we have created notebooks showcasing how InspectorRAGet can be used in combination with popular evaluation frameworks. Each notebook demonstrates how to use the corresponding framework to run an evaluation experiment and transform its output into the input format expected by InspectorRAGet for analysis. We provide notebooks demonstrating integrations of InspectorRAGet with the following popular frameworks:
+
+| Framework | Description | Integration Notebook |
+| --- | --- | --- |
+| Language Model Evaluation Harness | Popular evaluation framework used to evaluate language models on different tasks | [LM_Eval_Demonstration.ipynb](notebooks/LM_Eval_Demonstration.ipynb) |
+| Ragas | Popular evaluation framework specifically designed for the evaluation of RAG systems through LLM-as-a-judge techniques | [Ragas_Demonstration.ipynb](notebooks/Ragas_Demonstration.ipynb) |
+| HuggingFace | Offers libraries and assets (incl. datasets, models, and metric evaluators) that can be used to both create and evaluate RAG systems | [HuggingFace_Demonstration.ipynb](notebooks/HuggingFace_Demonstration.ipynb) |
+
+## Use InspectorRAGet by manually creating input file
+
+If you run the evaluation with your own code or a framework not covered by the integration notebooks above, you can manually transform the evaluation results into the input format expected by InspectorRAGet, described below. Examples of input files in the expected format can be found in the [data](data) folder.

-The experiment
+The experiment results JSON file expected by InspectorRAGet can be broadly split into six sections along functional boundaries. The first section captures general details about the experiment in the `name`, `description` and `timestamp` fields. The second and third sections describe the
sets of models and metrics used in the experiment via the `models` and `metrics` fields, respectively. The last three sections cover the dataset and the outcome of the evaluation experiment in the form of the `documents`, `tasks` and `evaluations` fields.

#### 1. Metadata
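The README section above describes the expected input file only in prose; the notebook added below assembles exactly that structure in its final cells. For quick reference, here is a minimal sketch of the overall shape in Python. All field values are illustrative placeholders rather than content from this commit, and the `description` and `timestamp` entries are included only because the README names those fields; their exact format is an assumption.

```python
import json

# Minimal sketch of an InspectorRAGet input file, assembled the same way the
# Ragas notebook below does it. Every value is an illustrative placeholder.
experiment = {
    "name": "My RAG experiment",
    "description": "Toy example",            # field named in the README text
    "timestamp": "2024-08-06T12:00:00Z",     # field named in the README text; format assumed
    "models": [
        {"model_id": "model_a", "name": "Model A", "owner": "Owner A"}
    ],
    "metrics": [
        {
            "name": "faithfulness",
            "display_name": "Faithfulness",
            "description": "Faithfulness",
            "author": "algorithm",
            "type": "numerical",
            "aggregator": "average",
            "range": [0, 1.0, 0.1],
        }
    ],
    "documents": [{"document_id": "0", "text": "A retrieved passage ..."}],
    "tasks": [
        {
            "task_id": "0",
            "task_type": "rag",
            "contexts": [{"document_id": "0"}],
            "input": [{"speaker": "user", "text": "A question"}],
            "targets": [{"text": "A reference answer"}],
        }
    ],
    "evaluations": [
        {
            "task_id": "0",
            "model_id": "model_a",
            "model_response": "A model answer",
            "annotations": {"faithfulness": {"system": {"value": 1.0, "duration": 0}}},
        }
    ],
}

# Write the file in the same way as the notebook's final cell.
with open("example-input.json", mode="w", encoding="utf-8") as fp:
    json.dump(experiment, fp, indent=4)
```

Complete, importable examples live in the [data](data) folder referenced above.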
notebooks/Ragas_Demonstration.ipynb
ADDED
@@ -0,0 +1,554 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Ragas demonstration\n",
"\n",
"This notebook demonstrates how to evaluate a RAG system using the [Ragas evaluation framework](https://github.com/explodinggradients/ragas?tab=readme-ov-file) and import the resulting evaluation results into InspectorRAGet for analysis.\n",
"\n",
"### Installation"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Collecting git+https://github.com/explodinggradients/ragas\n",
" Cloning https://github.com/explodinggradients/ragas to /private/var/folders/l5/fj0t2qmn44x0r042xw6cjr8w0000gn/T/pip-req-build-k70llt1h\n",
" Running command git clone --filter=blob:none --quiet https://github.com/explodinggradients/ragas /private/var/folders/l5/fj0t2qmn44x0r042xw6cjr8w0000gn/T/pip-req-build-k70llt1h\n",
" Resolved https://github.com/explodinggradients/ragas to commit d2486f117fd827dcfc3e196d4cf7798573c55b09\n",
" Installing build dependencies ... \u001b[?25ldone\n",
"\u001b[?25h Getting requirements to build wheel ... \u001b[?25ldone\n",
"\u001b[?25h Preparing metadata (pyproject.toml) ... \u001b[?25ldone\n",
"\u001b[?25hRequirement already satisfied: numpy in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (1.26.4)\n",
"Requirement already satisfied: datasets in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (2.20.0)\n",
"Requirement already satisfied: tiktoken in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (0.7.0)\n",
"Requirement already satisfied: langchain in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (0.2.12)\n",
"Requirement already satisfied: langchain-core in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (0.2.28)\n",
"Requirement already satisfied: langchain-community in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (0.2.11)\n",
"Requirement already satisfied: langchain-openai in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (0.1.20)\n",
"Requirement already satisfied: openai>1 in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (1.39.0)\n",
"Requirement already satisfied: pysbd>=0.3.4 in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (0.3.4)\n",
"Requirement already satisfied: nest-asyncio in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (1.6.0)\n",
"Requirement already satisfied: appdirs in ./.venv/lib/python3.9/site-packages (from ragas==0.1.13) (1.4.4)\n",
"Requirement already satisfied: anyio<5,>=3.5.0 in ./.venv/lib/python3.9/site-packages (from openai>1->ragas==0.1.13) (4.4.0)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in ./.venv/lib/python3.9/site-packages (from openai>1->ragas==0.1.13) (1.9.0)\n",
"Requirement already satisfied: httpx<1,>=0.23.0 in ./.venv/lib/python3.9/site-packages (from openai>1->ragas==0.1.13) (0.27.0)\n",
"Requirement already satisfied: pydantic<3,>=1.9.0 in ./.venv/lib/python3.9/site-packages (from openai>1->ragas==0.1.13) (2.8.2)\n",
"Requirement already satisfied: sniffio in ./.venv/lib/python3.9/site-packages (from openai>1->ragas==0.1.13) (1.3.1)\n",
"Requirement already satisfied: tqdm>4 in ./.venv/lib/python3.9/site-packages (from openai>1->ragas==0.1.13) (4.66.5)\n",
"Requirement already satisfied: typing-extensions<5,>=4.7 in ./.venv/lib/python3.9/site-packages (from openai>1->ragas==0.1.13) (4.12.2)\n",
"Requirement already satisfied: filelock in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (3.15.4)\n",
"Requirement already satisfied: pyarrow>=15.0.0 in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (17.0.0)\n",
"Requirement already satisfied: pyarrow-hotfix in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (0.6)\n",
"Requirement already satisfied: dill<0.3.9,>=0.3.0 in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (0.3.8)\n",
"Requirement already satisfied: pandas in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (2.2.2)\n",
"Requirement already satisfied: requests>=2.32.2 in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (2.32.3)\n",
"Requirement already satisfied: xxhash in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (3.4.1)\n",
"Requirement already satisfied: multiprocess in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (0.70.16)\n",
"Requirement already satisfied: fsspec<=2024.5.0,>=2023.1.0 in ./.venv/lib/python3.9/site-packages (from fsspec[http]<=2024.5.0,>=2023.1.0->datasets->ragas==0.1.13) (2024.5.0)\n",
"Requirement already satisfied: aiohttp in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (3.10.1)\n",
"Requirement already satisfied: huggingface-hub>=0.21.2 in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (0.24.5)\n",
"Requirement already satisfied: packaging in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (24.1)\n",
"Requirement already satisfied: pyyaml>=5.1 in ./.venv/lib/python3.9/site-packages (from datasets->ragas==0.1.13) (6.0.1)\n",
"Requirement already satisfied: SQLAlchemy<3,>=1.4 in ./.venv/lib/python3.9/site-packages (from langchain->ragas==0.1.13) (2.0.32)\n",
"Requirement already satisfied: async-timeout<5.0.0,>=4.0.0 in ./.venv/lib/python3.9/site-packages (from langchain->ragas==0.1.13) (4.0.3)\n",
"Requirement already satisfied: langchain-text-splitters<0.3.0,>=0.2.0 in ./.venv/lib/python3.9/site-packages (from langchain->ragas==0.1.13) (0.2.2)\n",
"Requirement already satisfied: langsmith<0.2.0,>=0.1.17 in ./.venv/lib/python3.9/site-packages (from langchain->ragas==0.1.13) (0.1.96)\n",
"Requirement already satisfied: tenacity!=8.4.0,<9.0.0,>=8.1.0 in ./.venv/lib/python3.9/site-packages (from langchain->ragas==0.1.13) (8.5.0)\n",
"Requirement already satisfied: jsonpatch<2.0,>=1.33 in ./.venv/lib/python3.9/site-packages (from langchain-core->ragas==0.1.13) (1.33)\n",
"Requirement already satisfied: dataclasses-json<0.7,>=0.5.7 in ./.venv/lib/python3.9/site-packages (from langchain-community->ragas==0.1.13) (0.6.7)\n",
"Requirement already satisfied: regex>=2022.1.18 in ./.venv/lib/python3.9/site-packages (from tiktoken->ragas==0.1.13) (2024.7.24)\n",
"Requirement already satisfied: aiohappyeyeballs>=2.3.0 in ./.venv/lib/python3.9/site-packages (from aiohttp->datasets->ragas==0.1.13) (2.3.4)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in ./.venv/lib/python3.9/site-packages (from aiohttp->datasets->ragas==0.1.13) (1.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in ./.venv/lib/python3.9/site-packages (from aiohttp->datasets->ragas==0.1.13) (24.1.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in ./.venv/lib/python3.9/site-packages (from aiohttp->datasets->ragas==0.1.13) (1.4.1)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in ./.venv/lib/python3.9/site-packages (from aiohttp->datasets->ragas==0.1.13) (6.0.5)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in ./.venv/lib/python3.9/site-packages (from aiohttp->datasets->ragas==0.1.13) (1.9.4)\n",
"Requirement already satisfied: idna>=2.8 in ./.venv/lib/python3.9/site-packages (from anyio<5,>=3.5.0->openai>1->ragas==0.1.13) (3.7)\n",
"Requirement already satisfied: exceptiongroup>=1.0.2 in ./.venv/lib/python3.9/site-packages (from anyio<5,>=3.5.0->openai>1->ragas==0.1.13) (1.2.2)\n",
"Requirement already satisfied: marshmallow<4.0.0,>=3.18.0 in ./.venv/lib/python3.9/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain-community->ragas==0.1.13) (3.21.3)\n",
"Requirement already satisfied: typing-inspect<1,>=0.4.0 in ./.venv/lib/python3.9/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain-community->ragas==0.1.13) (0.9.0)\n",
"Requirement already satisfied: certifi in ./.venv/lib/python3.9/site-packages (from httpx<1,>=0.23.0->openai>1->ragas==0.1.13) (2024.7.4)\n",
"Requirement already satisfied: httpcore==1.* in ./.venv/lib/python3.9/site-packages (from httpx<1,>=0.23.0->openai>1->ragas==0.1.13) (1.0.5)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in ./.venv/lib/python3.9/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai>1->ragas==0.1.13) (0.14.0)\n",
"Requirement already satisfied: jsonpointer>=1.9 in ./.venv/lib/python3.9/site-packages (from jsonpatch<2.0,>=1.33->langchain-core->ragas==0.1.13) (3.0.0)\n",
"Requirement already satisfied: orjson<4.0.0,>=3.9.14 in ./.venv/lib/python3.9/site-packages (from langsmith<0.2.0,>=0.1.17->langchain->ragas==0.1.13) (3.10.6)\n",
"Requirement already satisfied: annotated-types>=0.4.0 in ./.venv/lib/python3.9/site-packages (from pydantic<3,>=1.9.0->openai>1->ragas==0.1.13) (0.7.0)\n",
"Requirement already satisfied: pydantic-core==2.20.1 in ./.venv/lib/python3.9/site-packages (from pydantic<3,>=1.9.0->openai>1->ragas==0.1.13) (2.20.1)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in ./.venv/lib/python3.9/site-packages (from requests>=2.32.2->datasets->ragas==0.1.13) (3.3.2)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in ./.venv/lib/python3.9/site-packages (from requests>=2.32.2->datasets->ragas==0.1.13) (2.2.2)\n",
"Requirement already satisfied: greenlet!=0.4.17 in ./.venv/lib/python3.9/site-packages (from SQLAlchemy<3,>=1.4->langchain->ragas==0.1.13) (3.0.3)\n",
"Requirement already satisfied: python-dateutil>=2.8.2 in ./.venv/lib/python3.9/site-packages (from pandas->datasets->ragas==0.1.13) (2.9.0.post0)\n",
"Requirement already satisfied: pytz>=2020.1 in ./.venv/lib/python3.9/site-packages (from pandas->datasets->ragas==0.1.13) (2024.1)\n",
"Requirement already satisfied: tzdata>=2022.7 in ./.venv/lib/python3.9/site-packages (from pandas->datasets->ragas==0.1.13) (2024.1)\n",
"Requirement already satisfied: six>=1.5 in ./.venv/lib/python3.9/site-packages (from python-dateutil>=2.8.2->pandas->datasets->ragas==0.1.13) (1.16.0)\n",
"Requirement already satisfied: mypy-extensions>=0.3.0 in ./.venv/lib/python3.9/site-packages (from typing-inspect<1,>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain-community->ragas==0.1.13) (1.0.0)\n"
]
}
],
"source": [
"!pip install git+https://github.com/explodinggradients/ragas"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: ipywidgets in ./.venv/lib/python3.9/site-packages (8.1.3)\n",
"Requirement already satisfied: comm>=0.1.3 in ./.venv/lib/python3.9/site-packages (from ipywidgets) (0.2.2)\n",
"Requirement already satisfied: ipython>=6.1.0 in ./.venv/lib/python3.9/site-packages (from ipywidgets) (8.18.1)\n",
"Requirement already satisfied: traitlets>=4.3.1 in ./.venv/lib/python3.9/site-packages (from ipywidgets) (5.14.3)\n",
"Requirement already satisfied: widgetsnbextension~=4.0.11 in ./.venv/lib/python3.9/site-packages (from ipywidgets) (4.0.11)\n",
"Requirement already satisfied: jupyterlab-widgets~=3.0.11 in ./.venv/lib/python3.9/site-packages (from ipywidgets) (3.0.11)\n",
"Requirement already satisfied: decorator in ./.venv/lib/python3.9/site-packages (from ipython>=6.1.0->ipywidgets) (5.1.1)\n",
"Requirement already satisfied: jedi>=0.16 in ./.venv/lib/python3.9/site-packages (from ipython>=6.1.0->ipywidgets) (0.19.1)\n",
"Requirement already satisfied: matplotlib-inline in ./.venv/lib/python3.9/site-packages (from ipython>=6.1.0->ipywidgets) (0.1.7)\n",
"Requirement already satisfied: prompt-toolkit<3.1.0,>=3.0.41 in ./.venv/lib/python3.9/site-packages (from ipython>=6.1.0->ipywidgets) (3.0.47)\n",
"Requirement already satisfied: pygments>=2.4.0 in ./.venv/lib/python3.9/site-packages (from ipython>=6.1.0->ipywidgets) (2.18.0)\n",
"Requirement already satisfied: stack-data in ./.venv/lib/python3.9/site-packages (from ipython>=6.1.0->ipywidgets) (0.6.3)\n",
"Requirement already satisfied: typing-extensions in ./.venv/lib/python3.9/site-packages (from ipython>=6.1.0->ipywidgets) (4.12.2)\n",
"Requirement already satisfied: exceptiongroup in ./.venv/lib/python3.9/site-packages (from ipython>=6.1.0->ipywidgets) (1.2.2)\n",
"Requirement already satisfied: pexpect>4.3 in ./.venv/lib/python3.9/site-packages (from ipython>=6.1.0->ipywidgets) (4.9.0)\n",
"Requirement already satisfied: parso<0.9.0,>=0.8.3 in ./.venv/lib/python3.9/site-packages (from jedi>=0.16->ipython>=6.1.0->ipywidgets) (0.8.4)\n",
"Requirement already satisfied: ptyprocess>=0.5 in ./.venv/lib/python3.9/site-packages (from pexpect>4.3->ipython>=6.1.0->ipywidgets) (0.7.0)\n",
"Requirement already satisfied: wcwidth in ./.venv/lib/python3.9/site-packages (from prompt-toolkit<3.1.0,>=3.0.41->ipython>=6.1.0->ipywidgets) (0.2.13)\n",
"Requirement already satisfied: executing>=1.2.0 in ./.venv/lib/python3.9/site-packages (from stack-data->ipython>=6.1.0->ipywidgets) (2.0.1)\n",
"Requirement already satisfied: asttokens>=2.1.0 in ./.venv/lib/python3.9/site-packages (from stack-data->ipython>=6.1.0->ipywidgets) (2.4.1)\n",
"Requirement already satisfied: pure-eval in ./.venv/lib/python3.9/site-packages (from stack-data->ipython>=6.1.0->ipywidgets) (0.2.3)\n",
"Requirement already satisfied: six>=1.12.0 in ./.venv/lib/python3.9/site-packages (from asttokens>=2.1.0->stack-data->ipython>=6.1.0->ipywidgets) (1.16.0)\n"
]
}
],
"source": [
"!pip install ipywidgets"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from datasets import Dataset\n",
"import os\n",
"from ragas import evaluate\n",
"from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"RAGAS_DO_NOT_TRACK\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import dataset\n",
"\n",
"For this example, we will be using the sample dataset provided in Ragas' quick start instructions."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"data_samples = {\n",
"    'question': ['When was the first super bowl?', 'Who won the most super bowls?'],\n",
"    'answer': ['The first superbowl was held on Jan 15, 1967', 'The most super bowls have been won by The New England Patriots'],\n",
"    'contexts' : [['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'], \n",
"                  ['The Green Bay Packers...Green Bay, Wisconsin.','The Packers compete...Football Conference']],\n",
"    'ground_truth': ['The first superbowl was held on January 15, 1967', 'The New England Patriots have won the Super Bowl a record six times']\n",
"}\n",
"\n",
"dataset = Dataset.from_dict(data_samples)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run evaluation\n",
"\n",
"In this example, we run evaluation using `gpt-4o-mini` and the following Ragas evaluation metrics: faithfulness, answer relevance, context precision, and context recall. Consult the Ragas documentation on how to use different models or metrics.\n",
"\n",
"**Note: To use one of the OpenAI models for evaluation, you have to fill in your `OPENAI_API_KEY` below.**"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"provide-your-openai-api-key\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d221210e378a4ad0a25748d8f8f095ba",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Evaluating: 0%| | 0/8 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
"  .dataframe tbody tr th:only-of-type {\n",
"    vertical-align: middle;\n",
"  }\n",
"\n",
"  .dataframe tbody tr th {\n",
"    vertical-align: top;\n",
"  }\n",
"\n",
"  .dataframe thead th {\n",
"    text-align: right;\n",
"  }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\">\n",
"      <th></th>\n",
"      <th>question</th>\n",
"      <th>answer</th>\n",
"      <th>contexts</th>\n",
"      <th>ground_truth</th>\n",
"      <th>faithfulness</th>\n",
"      <th>answer_relevancy</th>\n",
"      <th>context_precision</th>\n",
"      <th>context_recall</th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <th>0</th>\n",
"      <td>When was the first super bowl?</td>\n",
"      <td>The first superbowl was held on Jan 15, 1967</td>\n",
"      <td>[The First AFL–NFL World Championship Game was...</td>\n",
"      <td>The first superbowl was held on January 15, 1967</td>\n",
"      <td>1.0</td>\n",
"      <td>0.980714</td>\n",
"      <td>1.0</td>\n",
"      <td>1.0</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>1</th>\n",
"      <td>Who won the most super bowls?</td>\n",
"      <td>The most super bowls have been won by The New ...</td>\n",
"      <td>[The Green Bay Packers...Green Bay, Wisconsin....</td>\n",
"      <td>The New England Patriots have won the Super Bo...</td>\n",
"      <td>0.0</td>\n",
"      <td>0.943043</td>\n",
"      <td>0.0</td>\n",
"      <td>0.0</td>\n",
"    </tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
"  question \\\n",
"0 When was the first super bowl? \n",
"1 Who won the most super bowls? \n",
"\n",
"  answer \\\n",
"0 The first superbowl was held on Jan 15, 1967 \n",
"1 The most super bowls have been won by The New ... \n",
"\n",
"  contexts \\\n",
"0 [The First AFL–NFL World Championship Game was... \n",
"1 [The Green Bay Packers...Green Bay, Wisconsin.... \n",
"\n",
"  ground_truth faithfulness \\\n",
"0 The first superbowl was held on January 15, 1967 1.0 \n",
"1 The New England Patriots have won the Super Bo... 0.0 \n",
"\n",
"  answer_relevancy context_precision context_recall \n",
"0 0.980714 1.0 1.0 \n",
"1 0.943043 0.0 0.0 "
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from langchain_openai.chat_models import ChatOpenAI\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"score = evaluate(dataset, llm=llm, metrics=[faithfulness, answer_relevancy, context_precision, context_recall])\n",
"df_score = score.to_pandas()\n",
"\n",
"display(df_score)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create InspectorRAGet file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We next generate the file for InspectorRAGet. We start by specifying the experiment metadata (experiment name, model, metrics)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Specify name of experiment**"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"name = \"Ragas Demo\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Specify name of model that produced the answers:** Since we do not know where the answers in the `data_samples` came from, we use a dummy name. In your experiments, replace this with the name of the model that produced the answers."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# models -> List[dict]\n",
"models = [\n",
"    {\n",
"        \"model_id\": \"model_a\",  # e.g., \"OpenAI/gpt-3.5-turbo\"\n",
"        \"name\": \"Model A\",      # e.g., \"GPT-3.5-Turbo\"\n",
"        \"owner\": \"Owner A\"      # e.g., \"OpenAI\"\n",
"    }\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Specify metrics used:** Create a record for each metric used in the Ragas evaluation. Note that the `name` of each metric below should match the name of the metric used in the data frame `df_score` output by Ragas."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"# metrics -> List[dict]\n",
"all_metrics = [\n",
"    {\n",
"        \"name\": \"faithfulness\",\n",
"        \"display_name\": \"Faithfulness\",\n",
"        \"description\": \"Faithfulness\",\n",
"        \"author\": \"algorithm\",\n",
"        \"type\": \"numerical\",\n",
"        \"aggregator\": \"average\",\n",
"        \"range\": [0, 1.0, 0.1]\n",
"    },\n",
"    {\n",
"        \"name\": \"answer_relevancy\",\n",
"        \"display_name\": \"Answer Relevancy\",\n",
"        \"description\": \"Answer Relevancy\",\n",
"        \"author\": \"algorithm\",\n",
"        \"type\": \"numerical\",\n",
"        \"aggregator\": \"average\",\n",
"        \"range\": [0, 1.0, 0.1]\n",
"    },\n",
"    {\n",
"        \"name\": \"context_precision\",\n",
"        \"display_name\": \"Context Precision\",\n",
"        \"description\": \"Context Precision\",\n",
"        \"author\": \"algorithm\",\n",
"        \"type\": \"numerical\",\n",
"        \"aggregator\": \"average\",\n",
"        \"range\": [0, 1.0, 0.1]\n",
"    },\n",
"    {\n",
"        \"name\": \"context_recall\",\n",
"        \"display_name\": \"Context Recall\",\n",
"        \"description\": \"Context Recall\",\n",
"        \"author\": \"algorithm\",\n",
"        \"type\": \"numerical\",\n",
"        \"aggregator\": \"average\",\n",
"        \"range\": [0, 1.0, 0.1]\n",
"    }\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Compute document IDs"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"doc_id_counter = 0\n",
"\n",
"doc_text_to_id = {}\n",
"for index, row in df_score.iterrows():\n",
"    for c in row[\"contexts\"]:\n",
"        if c not in doc_text_to_id:\n",
"            doc_text_to_id[c] = doc_id_counter\n",
"            doc_id_counter += 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Populate documents, tasks, and evaluations"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"all_documents = []\n",
"all_tasks = []\n",
"all_evaluations = []\n",
"\n",
"# Populate documents\n",
"for doc_text, doc_id in doc_text_to_id.items():\n",
"    document = {\n",
"        \"document_id\": f\"{doc_id}\",\n",
"        \"text\": f\"{doc_text}\"\n",
"    }\n",
"    all_documents.append(document)\n",
"\n",
"# Populate tasks and evaluations\n",
"for index, row in df_score.iterrows():\n",
"    instance = {\n",
"        \"task_id\": f\"{index}\",\n",
"        \"task_type\": \"rag\",\n",
"        \"contexts\": [ {\"document_id\": f\"{doc_text_to_id[c]}\"} for c in row[\"contexts\"] ],\n",
"        \"input\": [{\"speaker\": \"user\", \"text\": f\"{row['question']}\"}],\n",
"        \"targets\": [{\"text\": f\"{row['ground_truth']}\"}]\n",
"    }\n",
"    all_tasks.append(instance)\n",
"\n",
"    evaluation = {\n",
"        \"task_id\": f\"{index}\",\n",
"        \"model_id\": f\"{models[0]['model_id']}\",\n",
"        \"model_response\": f\"{row['answer']}\",\n",
"        \"annotations\": {}\n",
"    }\n",
"    for metric in all_metrics:\n",
"        metric_name = metric[\"name\"]\n",
"        evaluation[\"annotations\"][metric_name] = {\n",
"            \"system\": {\n",
"                \"value\": row[metric_name],\n",
"                \"duration\": 0\n",
"            }\n",
"        }\n",
"    all_evaluations.append(evaluation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Write final json file to `ragas-inspectorraget-demo.json` in working directory"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"output = {\n",
"    \"name\": name,\n",
"    \"models\": models,\n",
"    \"metrics\": all_metrics,\n",
"    \"documents\": all_documents,\n",
"    \"tasks\": all_tasks,\n",
"    \"evaluations\": all_evaluations,\n",
"}\n",
"\n",
"with open(\n",
"    file=\"ragas-inspectorraget-demo.json\", mode=\"w\", encoding=\"utf-8\"\n",
") as fp:\n",
"    json.dump(output, fp, indent=4)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "bin",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}