Data Annotation with Argilla Spaces

This notebook illustrates the workflow for systematically evaluating LLM outputs and creating LLM training data. You can start by using this notebook to evaluate the zero-shot performance of your favourite LLM on your task without any fine-tuning. If you want to improve performance, you can then easily reuse this workflow to create training data.

Example use-case: code generation. For this tutorial, we demonstrate how to create high-quality test and train data for code generation tasks. The same workflow can, however, be adapted to any other task that’s relevant for your specific use-case.

In this notebook, we:

  1. Download data for the example task.
  2. Prompt two LLMs to respond to these tasks. This results in “synthetic data” to speed up manual data creation.
  3. Create an Argilla annotation interface on HF Spaces to compare and evaluate the outputs from the two LLMs.
  4. Upload the example data and the zero-shot LLM responses into the Argilla annotation interface.
  5. Download the annotated data.

You can adapt this notebook to your needs, e.g. by using a different LLM and API provider for step (2) or by adapting the annotation interface in step (3).

Install required packages and connect to HF Hub

!pip install "argilla[server]~=1.27.0"
!pip install transformers~=4.40.0
!pip install datasets~=2.19.0
!pip install huggingface_hub~=0.23.2
# Login to the HF Hub. We recommend using this login method 
# to avoid the need for explicitly storing your HF token in variables 
import huggingface_hub
!git config --global credential.helper store
huggingface_hub.login(add_to_git_credential=True)

Download example task data

We first download an example dataset that contains code generation tasks for LLMs. We want to evaluate how well two different LLMs perform on these code generation tasks. We use instructions from the bigcode/self-oss-instruct-sc2-exec-filter-50k dataset that was used to train the StarCoder2-Instruct model.

>>> from datasets import load_dataset

>>> # taking a small sample here for faster testing
>>> dataset_codetask = load_dataset("bigcode/self-oss-instruct-sc2-exec-filter-50k", split="train[:3]")
>>> print("Dataset structure:\n", dataset_codetask, "\n")

>>> # We are only interested in the instructions/prompts provided in the dataset
>>> instructions_lst = dataset_codetask["instruction"]
>>> print("Example instructions:\n", instructions_lst[:2])
Dataset structure:
 Dataset({
    features: ['fingerprint', 'sha1', 'seed', 'response', 'concepts', 'prompt', 'instruction', 'id'],
    num_rows: 3
}) 

Example instructions:
 ['Write a Python function named `get_value` that takes a matrix (represented by a list of lists) and a tuple of indices, and returns the value at that index in the matrix. The function should handle index out of range errors by returning None.', 'Write a Python function `check_collision` that takes a list of `rectangles` as input and checks if there are any collisions between any two rectangles. A rectangle is represented as a tuple (x, y, w, h) where (x, y) is the top-left corner of the rectangle, `w` is the width, and `h` is the height.\n\nThe function should return True if any pair of rectangles collide, and False otherwise. Use an iterative approach and check for collisions based on the bounding box collision detection algorithm. If a collision is found, return True immediately without checking for more collisions.']

Prompt two LLMs on the example task

Formatting the instructions with a chat_template

Before sending the instructions to an LLM API, we need to format the instructions with the correct chat_template for each of the models we want to evaluate. This essentially entails wrapping some special tokens around the instructions. See the docs on chat templates for details.

>>> # apply correct chat formatting to instructions from dataset
>>> from transformers import AutoTokenizer

>>> models_to_compare = ["mistralai/Mixtral-8x7B-Instruct-v0.1", "meta-llama/Meta-Llama-3-70B-Instruct"]


>>> def format_prompt(prompt, tokenizer):
...     messages = [{"role": "user", "content": prompt}]
...     # with tokenize=False, apply_chat_template returns the formatted prompt as a string
...     messages_formatted = tokenizer.apply_chat_template(
...         messages, tokenize=False, add_generation_prompt=True
...     )
...     return messages_formatted


>>> prompts_formatted_dic = {}
>>> for model in models_to_compare:
...     tokenizer = AutoTokenizer.from_pretrained(model)

...     prompt_formatted = []
...     for instruction in instructions_lst:
...         prompt_formatted.append(format_prompt(instruction, tokenizer))

...     prompts_formatted_dic.update({model: prompt_formatted})


>>> print(
...     f"\nFirst prompt formatted for {models_to_compare[0]}:\n\n",
...     prompts_formatted_dic[models_to_compare[0]][0],
...     "\n\n",
... )
>>> print(
...     f"First prompt formatted for {models_to_compare[1]}:\n\n",
...     prompts_formatted_dic[models_to_compare[1]][0],
...     "\n\n",
... )
First prompt formatted for mistralai/Mixtral-8x7B-Instruct-v0.1:

 [INST] Write a Python function named `get_value` that takes a matrix (represented by a list of lists) and a tuple of indices, and returns the value at that index in the matrix. The function should handle index out of range errors by returning None. [/INST] 


First prompt formatted for meta-llama/Meta-Llama-3-70B-Instruct:

 <|begin_of_text|><|start_header_id|>user<|end_header_id|>

Write a Python function named `get_value` that takes a matrix (represented by a list of lists) and a tuple of indices, and returns the value at that index in the matrix. The function should handle index out of range errors by returning None.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Sending the instructions to the HF Inference API

Now we can send the instructions to the APIs for both LLMs to get outputs we can evaluate. We first define some parameters for generating the responses correctly. Hugging Face’s LLM APIs are powered by Text Generation Inference (TGI) containers. See the TGI OpenAPI specifications here and the explanations of different parameters in the Transformers Generation Parameters docs.

generation_params = dict(
    # we use low temperature and top_p to reduce creativity and increase the likelihood of highly probable tokens
    temperature=0.2,
    top_p=0.60,
    top_k=None,
    repetition_penalty=1.0,
    do_sample=True,
    max_new_tokens=512 * 2,
    return_full_text=False,
    seed=42,
    # details=True,
    # stop=["<|END_OF_TURN_TOKEN|>"],
    # grammar={"type": "json"}
    max_time=None,
    stream=False,
    use_cache=False,
    wait_for_model=False,
)

Now we can make a standard API request to the Serverless Inference API (docs). Note that the Serverless Inference API is mostly for testing and is rate limited. For testing without rate limits, you can create your own API via the HF Dedicated Endpoints (docs). See also our corresponding tutorials in the Open Source AI Cookbook.

The code below will be updated once the Inference API recipe is finished.
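As an optional alternative to the raw requests used below, you can also use the InferenceClient from the huggingface_hub library. The following minimal sketch queries the first model with the first formatted prompt and roughly mirrors the generation parameters defined above; the rest of the notebook does not depend on it.

# Optional alternative: query the Serverless Inference API via huggingface_hub's InferenceClient
# instead of raw requests. Minimal sketch; adjust the parameters to your needs.
from huggingface_hub import InferenceClient

client = InferenceClient(model=models_to_compare[0], token=huggingface_hub.get_token())
example_output = client.text_generation(
    prompts_formatted_dic[models_to_compare[0]][0],
    max_new_tokens=512 * 2,
    temperature=0.2,
    top_p=0.60,
    seed=42,
)
print(example_output)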

>>> import requests
>>> from tqdm.auto import tqdm


>>> # Hint: use asynchronous API calls (and dedicated endpoints) to increase speed
>>> def query(payload=None, api_url=None):
...     response = requests.post(api_url, headers=headers, json=payload)
...     return response.json()


>>> headers = {"Authorization": f"Bearer {huggingface_hub.get_token()}"}

>>> output_dic = {}
>>> for model in models_to_compare:
...     # create API urls for each model
...     # When using dedicated endpoints, you can reuse the same code and simply replace this URL
...     api_url = "https://api-inference.huggingface.co/models/" + model

...     # send requests to API
...     output_lst = []
...     for prompt in tqdm(prompts_formatted_dic[model]):
...         output = query(payload={"inputs": prompt, "parameters": {**generation_params}}, api_url=api_url)
...         output_lst.append(output[0]["generated_text"])

...     output_dic.update({model: output_lst})

>>> # inspect first generation
>>> print(f"---First generation of {models_to_compare[0]}:\n{output_dic[models_to_compare[0]][0]}\n\n")
>>> print(f"---First generation of {models_to_compare[1]}:\n{output_dic[models_to_compare[1]][0]}")
---First generation of mistralai/Mixtral-8x7B-Instruct-v0.1:
Here's a Python function that meets your requirements:

```python
def get_value(matrix, indices):
    try:
        return matrix[indices[0]][indices[1]]
    except IndexError:
        return None
```

This function takes a matrix (represented by a list of lists) and a tuple of indices as input. It first tries to access the value at the given indices in the matrix. If the indices are out of range, it catches the `IndexError` exception and returns `None`.


---First generation of meta-llama/Meta-Llama-3-70B-Instruct:
Here is a Python function that does what you described:
```
def get_value(matrix, indices):
    try:
        row, col = indices
        return matrix[row][col]
    except IndexError:
        return None
```
Here's an explanation of how the function works:

1. The function takes two arguments: `matrix` (a list of lists) and `indices` (a tuple of two integers, representing the row and column indices).
2. The function tries to access the value at the specified indices using `matrix[row][col]`.
3. If the indices are out of range (i.e., `row` or `col` is greater than the length of the corresponding dimension of the matrix), an `IndexError` exception is raised.
4. The `except` block catches the `IndexError` exception and returns `None` instead of raising an error.

Here's an example usage of the function:
```
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

print(get_value(matrix, (0, 0)))  # prints 1
print(get_value(matrix, (1, 1)))  # prints 5
print(get_value(matrix, (3, 0)))  # prints None (out of range)
print(get_value(matrix, (0, 3)))  # prints None (out of range)
```
I hope this helps! Let me know if you have any questions.

Store the LLM outputs in a dataset

We can now store the LLM outputs in a dataset together with the original instructions.

# create a HF dataset with the instructions and model outputs
from datasets import Dataset

dataset = Dataset.from_dict(
    {
        "instructions": instructions_lst,
        "response_model_1": output_dic[models_to_compare[0]],
        "response_model_2": output_dic[models_to_compare[1]],
    }
)

dataset

Create and configure your Argilla annotation interface

We use Argilla, an open-source data annotation tool. We run Argilla via a HF Space, which you can set up with just a few clicks without any local setup. You can create the HF Argilla Space by following the instructions for Space creation here. For more details on HF Argilla Spaces, see also the detailed docs. If you want, you can also run Argilla locally via Argilla’s docker containers (see Argilla docs).
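If you prefer creating the Space from code instead of clicking through the UI, you can duplicate the Argilla template Space with huggingface_hub. The sketch below is optional and makes assumptions: the template Space id argilla/argilla-template-space and the target Space name are placeholders you should verify and adapt, and persistent storage and the user secrets still need to be configured in the Space settings.

# Optional: create the Argilla Space programmatically by duplicating the template Space.
# The template id and the target repo id below are assumptions; adapt them to your account.
from huggingface_hub import duplicate_space

duplicate_space(
    from_id="argilla/argilla-template-space",  # assumed id of the Argilla template Space
    to_id="your-username/argilla-code-llm",  # hypothetical target Space name
    private=False,
)
# Afterwards, enable persistent storage and set the user secrets in the Space settings.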

Things to consider when creating the HF Argilla Space:

  1. Persistent storage: While creating the Argilla Space, it is important to enable persistent storage for your Space, to make sure that all annotations are saved.

  2. User management: While creating the Argilla Space, three users are created automatically: owner, admin, and annotator. You can manually set their usernames, API keys, and passwords for logging into the Argilla interface while creating the Space (see image below).

[Image: user settings when creating the Argilla Space]

If you leave these fields blank, the following default values will be used.

Item                 Default Value
OWNER_USERNAME       owner
ADMIN_USERNAME       admin
ANNOTATOR_USERNAME   annotator
PASSWORD (all)       12345678
OWNER_API_KEY        owner.apikey
ADMIN_API_KEY        admin.apikey
ARGILLA_WORKSPACE    admin

For more details on user management with Argilla see the docs here.

Once you’ve created the Argilla Space, you should see the following login screen in your browser and you can login with the password and username specified above.

[Image: Argilla login screen]

Programmatically interact with Argilla

Before we can tailor the interface to our specific task and upload data, we need to first set up a few things.

Connecting this notebook to Argilla: We can now connect this notebook to Argilla to programmatically configure the interface and upload/download data.

# After starting the Argilla Space (or local docker container) you can connect to the Space with the code below.
# Here we use Argilla as the "owner" user
import argilla as rg

rg.init(
    # The `api_url` to the space follows the pattern "https://username-spacename.hf.space"
    api_url="https://moritzlaurer-argilla-00.hf.space",  # Ff you run Argilla locally: "http://localhost:6900"
    api_key="owner.apikey",  # "owner.apikey", "admin.apikey"
    # To use a private HF Argilla Space, also pass your HF token
    extra_headers={"Authorization": f"Bearer {huggingface_hub.get_token()}"},
    workspace="admin",
)
user = rg.User.me()
user
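Since we are connected as the owner user, we could also manage workspaces and users programmatically instead of relying on the defaults created by the Space. The sketch below is optional; the workspace name, username, and password are illustrative placeholders, and the exact arguments may vary slightly across Argilla 1.x versions.

# Optional: create an additional workspace and annotator user programmatically
# (requires the "owner" role). Names and password below are illustrative placeholders.
new_workspace = rg.Workspace.create("code-llm-workspace")

new_annotator = rg.User.create(
    username="annotator-4",
    first_name="Annotator",
    password="change-this-password",
    role="annotator",
    workspaces=["code-llm-workspace"],
)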

Write good annotator guidelines

Writing good guidelines for your human annotators is just as important (and difficult) as writing good training code. Good instructions should fulfill the following criteria:

  • Simple and clear: The guidelines should be simple and clear to understand for people who do not know anything about your task yet. Always ask at least one colleague to reread the guidelines to make sure that there are no ambiguities.
  • Reproducible and explicit: All information for doing the annotation task should be contained in the guidelines. A common mistake is to create informal interpretations of the guidelines during conversations with selected annotators. Future annotators will not have this information and might do the task differently than intended if it is not made explicit in the guidelines.
  • Short and comprehensive: The guidelines should be as short as possible while containing all the necessary information. Annotators tend not to read long guidelines carefully, so keep them concise while remaining comprehensive.

Note that creating annotator guidelines is an iterative process. It is good practice to do a few dozen annotations yourself and refine the guidelines based on your learnings from the data before assigning the task to others. Versioning the guidelines can also help as the task evolves over time. See further tips in this blog post.

annotator_guidelines = """\
Your task is to evaluate the responses of two LLMs to code generation tasks. 

First, you need to score each response on a scale from 0 to 7. You add points to your final score based on the following criteria:
- Add up to +2 points, if the code is properly commented, with inline comments and doc strings for functions.
- Add up to +2 points, if the code contains a good example for testing. 
- Add up to +3 points, if the code runs and works correctly. Copy the code into an IDE and test it with at least two different inputs. Attribute one point if the code is overall correct, but has some issues. Attribute three points if the code is fully correct and robust against different scenarios. 
Your resulting final score can be any value between 0 and 7. 

If both responses have a final score of <= 4, select one response and correct it manually in the text field. 
The corrected response must fulfill all criteria from above. 
"""

rating_tooltip = """\
- Add up to +2 points, if the code is properly commented, with inline comments and doc strings for functions.
- Add up to +2 points, if the code contains a good example for testing. 
- Add up to +3 points, if the code runs and works correctly. Copy the code into an IDE and test it with at least two different inputs. Attribute one point if the code works mostly correctly, but has some issues. Attribute three points if the code is fully correct and robust against different scenarios. 
"""

Cumulative ratings vs. Likert scales: Note that the guidelines above ask the annotators to do cumulative ratings by adding points for explicit criteria. An alternative approach is “Likert scales”, where annotators rate responses on a single scale, e.g. from 1 (very bad) to 3 (mediocre) to 5 (very good). We generally recommend cumulative ratings, because they force you and the annotators to make quality criteria explicit, whereas simply rating a response as “4” (good) is ambiguous and will be interpreted differently by different annotators.

Tailor the Argilla interface to your specific task

We can now create our own code-llm task with its own interface tailored to the specific task.

dataset_argilla_name = "code-llm"
reuse_existing_dataset = False  # for easier iterative testing

# Create annotator groups. Used for task assignment via metadata filtering.
# See explanations on annotator task assignment further below
annotators = ["annotator-1", "annotator-2", "annotator-3"]

# Create the interface structure via an Argilla FeedbackDataset
dataset_argilla = rg.FeedbackDataset(
    # The overall annotation guidelines, which human annotators can refer back to inside of the interface
    guidelines=annotator_guidelines,
    # The fields on the left side of the interface
    fields=[
        rg.TextField(name="instruction", title="Instruction:", use_markdown=True, required=True),
        rg.TextField(name="generation_1", title="Response model 1:", use_markdown=True, required=True),
        rg.TextField(name="generation_2", title="Response model 2:", use_markdown=True, required=True),
    ],
    # The different questions on the right side of the interface
    # These are the questions we ask annotators about the fields on the left of the interface
    # The available question types are documented here: https://docs.argilla.io/en/latest/getting_started/cheatsheet.html#configure-datasets
    questions=[
        rg.RatingQuestion(
            name="score_response_1",
            title="Your score for the response of model 1:",
            description=rating_tooltip,  # "1 = very bad\n2 = bad\n3 = mediocre\n4 = good\n5 = very good",
            # Note: Argilla version <= 1.28 does not yet support rating values of 0.
            # This will be possible starting version >= 1.29
            values=[1, 2, 3, 4, 5, 6, 7],
            required=True,
        ),
        rg.RatingQuestion(
            name="score_response_2",
            title="Your score for the response of model 2:",
            description=rating_tooltip,  # "1 = very bad\n2 = bad\n3 = mediocre\n4 = good\n5 = very good",
            values=[1, 2, 3, 4, 5, 6, 7],
            required=True,
        ),
        rg.LabelQuestion(
            name="which_response_corrected",
            title="If both responses score below 4, select a response to correct:",
            description="Select the response you will correct in the text field below.",
            labels=["Response 1", "Response 2", "Combination of both", "Neither"],
            required=False,
        ),
        rg.TextQuestion(
            name="correction",
            title="Paste the selected response below and correct it manually:",
            description="Your corrected response must fulfill all criteria from the annotation guidelines.",
            use_markdown=True,
            required=False,
        ),
        rg.TextQuestion(
            name="comments",
            title="Annotator Comments",
            description="Add any additional comments here. E.g.: edge cases, issues with the interface etc.",
            use_markdown=True,
            required=False,
        ),
    ],
    metadata_properties=[
        rg.TermsMetadataProperty(
            name="annotator-groups",
            title="Annotator groups",
            values=annotators,
        ),
        rg.TermsMetadataProperty(
            name="source-dataset",
            title="Original dataset source",
        ),
    ],
    allow_extra_metadata=False,
)


if reuse_existing_dataset:
    dataset_argilla = rg.FeedbackDataset.from_argilla(dataset_argilla_name, workspace="admin")
else:
    # check if dataset already exists
    dataset_existing = [dataset for dataset in rg.list_datasets() if dataset.name == dataset_argilla_name]
    # if it already exists, delete it
    if len(dataset_existing) > 0:
        rg.FeedbackDataset.from_argilla(name=dataset_argilla_name, workspace="admin").delete()
    # push (updated) dataset to argilla
    dataset_argilla = dataset_argilla.push_to_argilla(dataset_argilla_name, workspace="admin")

After running the code above, you will see the new custom code-llm task in Argilla (and any other tasks you might have created before, see image).

[Image: Argilla dataset overview showing the new code-llm task]
# The final argilla dataset
# print(dataset_argilla)

You can also read the detailed guide on working with LLM data in Argilla for more guidance on creating different interfaces for different tasks.

Upload data to Argilla for our task

At this point, the task is still empty. Let’s upload some data into the task interface with the code below.

import random

# Iterate over the samples in the dataset and create one Argilla record per example
records = []
for example in dataset:
    record = rg.FeedbackRecord(
        fields={
            "instruction": example["instructions"],
            "generation_1": example["response_model_1"],
            "generation_2": example["response_model_2"],
        },
        metadata={
            # we randomly assign a record/task to the annotators
            "annotator-groups": random.choice(annotators),
            "source-dataset": "bigcode/self-oss-instruct-sc2-exec-filter-50k",
        },
    )

    # Optional: add prefilled suggestions
    # you can use this to fill questions with suggestions from an LLM-as-a-judge system
    # to further speed up manual annotation
    # record.suggestions = [
    #    {
    #        "question_name": "score_response_1",
    #        "value": example["llm_judge_rating"],
    #        "agent": "llama-3-70b-instruct"
    #    },
    # ]

    records.append(record)

# Add the records to the FeedbackDataset in Argilla
try:
    dataset_argilla.add_records(records, show_progress=True)
except Exception as e:
    print("Exception:", e)

The final annotation interface will look similar to this:

[Image: the Argilla annotation interface for the code-llm task]

Assign tasks to annotators: Argilla supports assigning tasks to multiple users/annotators. There are different ways of implementing task assignments, documented here. For this tutorial, we use the simplest metadata method, where everyone has access to the same full dataset and all annotations (via the annotators variable created above). To access the annotations assigned to them, an annotator then needs to use the Metadata filter in the interface to filter the data to only see records assigned to them (see image below). For larger teams and to get multiple annotations for the same record, it is better to use other task assignment methods.

[Image: filtering records by annotator group via the Metadata filter]
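In addition to the Metadata filter in the UI shown above, you can apply the same filter programmatically when pulling records, for example to check how many records are assigned to each annotator. The sketch below uses the metadata filters available in Argilla 1.x; names may differ slightly in other versions.

# Optional: apply the metadata filter programmatically to inspect the records
# assigned to a specific annotator group (names are illustrative).
remote_dataset = rg.FeedbackDataset.from_argilla(dataset_argilla_name, workspace="admin")

filtered_dataset = remote_dataset.filter_by(
    metadata_filters=rg.TermsMetadataFilter(
        name="annotator-groups",
        values=["annotator-1"],  # only records assigned to annotator-1
    )
)
print(f"Records assigned to annotator-1: {len(filtered_dataset.records)}")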

Annotate

That’s it, we’ve created our custom data annotation interface with Argilla and we can now start annotating!

Important: If you use Argilla in a HF Space, you need to activate persistent storage so that your data is safely stored and not automatically deleted after a while. For production settings, make sure that persistent storage is activated before making any annotations to avoid data loss.

Download annotated data

After annotating, you can pull the data from Argilla and store and process it locally in any tabular format (see docs here). You can also download a filtered version of the dataset (docs).

remote_dataset = rg.FeedbackDataset.from_argilla(dataset_argilla_name, workspace="admin")

# pull the first N records from the remote dataset
local_dataset = remote_dataset.pull(max_records=100)

# transform Argilla dataset to HF dataset
hf_dataset = local_dataset.format_as("datasets")

# This HF dataset can then be formatted, stored and processed into any tabular data format
# Display the annotated dataset:
hf_dataset.to_pandas()
# Store the dataset locally
hf_dataset.to_csv("argilla-dataset-local.csv")  # Save as CSV
# hf_dataset.to_json("argilla-dataset-local.json")  # Save as JSON
# hf_dataset.save_to_disk("argilla-dataset-local")  # Save as a `datasets.Dataset` in the local filesystem
# hf_dataset.to_parquet()  # Save as Parquet

Next Steps

That’s it! You’ve created synthetic LLM data with the HF Inference API, built a custom annotation interface with Argilla, uploaded the LLM data into Argilla, evaluated and corrected the data, and downloaded the annotated data in a simple tabular format for downstream use.

We have specifically designed the pipeline and the interface for two main use-cases:

  1. Evaluation: You can now simply use the numeric scores in the score_response_1 and score_response_2 columns to calculate which model was better overall (see the sketch after this list for a simple way to aggregate the scores). You can also inspect responses with very low or high ratings for a detailed error analysis. As you test or train different models, you can reuse this pipeline and track improvements of different models over time.
  2. Training: After annotating enough data, you can create a train-test split from the data and fine-tune your own model. You can either use highly rated response texts for supervised fine-tuning with the TRL SFTTrainer, or you can directly use the ratings for preference-tuning techniques like DPO with the TRL DPOTrainer. See the TRL docs for the pros and cons of different LLM fine-tuning techniques.
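As referenced in point 1, here is a minimal sketch of aggregating the annotated scores from the exported dataset. It assumes the format_as("datasets") layout used above, where each question column contains a list of response dictionaries with a "value" key; adapt the extraction if your export looks different.

# Minimal sketch: aggregate the annotated scores to compare the two models.
# Assumes each question column holds a list of response dicts with a "value" key;
# adapt if your export differs.
import pandas as pd

df = hf_dataset.to_pandas()


def first_response_value(responses):
    # take the value of the first response for a record, if any
    try:
        return responses[0].get("value")
    except (TypeError, IndexError, AttributeError):
        return None


for col in ["score_response_1", "score_response_2"]:
    df[f"{col}_value"] = pd.to_numeric(df[col].apply(first_response_value), errors="coerce")

print("Mean score model 1:", df["score_response_1_value"].mean())
print("Mean score model 2:", df["score_response_2_value"].mean())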

Adapt and improve: Many things can be improved to tailor this pipeline to your specific use-cases. For example, you can prompt an LLM to evaluate the outputs of the two LLMs with instructions very similar to the guidelines for human annotators (“LLM-as-a-judge” approach). This can help further speed up your evaluation pipeline. See our LLM-as-a-judge recipe for an example implementation of LLM-as-a-judge and our overall Open-Source AI Cookbook for many other ideas.
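As a rough illustration of the LLM-as-a-judge idea, the sketch below reuses the query helper, the chat formatting, and the annotator guidelines defined earlier to ask one of the models to score a response. It is only a starting point: the judge prompt is an assumption, and parsing the returned score and attaching it as an Argilla suggestion (as hinted in the commented-out record.suggestions example above) are left to you.

# Rough sketch of an LLM-as-a-judge call, reusing the query() helper,
# format_prompt(), generation_params, and annotator_guidelines from above.
# Parsing the score and attaching it as an Argilla suggestion is left as an exercise.
judge_model = models_to_compare[1]
judge_tokenizer = AutoTokenizer.from_pretrained(judge_model)

judge_prompt = f"""{annotator_guidelines}

Instruction:
{instructions_lst[0]}

Response:
{output_dic[models_to_compare[0]][0]}

Return only your final score as an integer from 0 to 7."""

judge_output = query(
    payload={"inputs": format_prompt(judge_prompt, judge_tokenizer), "parameters": {**generation_params}},
    api_url="https://api-inference.huggingface.co/models/" + judge_model,
)
print(judge_output[0]["generated_text"])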
