Interactively explore your Huggingface dataset with one line of code
sps44
October 25, 2023
scalable-data-inspection
open-source-collab, visualization, data inspection
https://huggingface.co/blog/scalable-data-inspection
# Interactively explore your Huggingface dataset with one line of code

The Hugging Face [*datasets* library](https://huggingface.co/docs/datasets/index) not only provides access to more than 70k publicly available datasets, but also offers very convenient data preparation pipelines for custom datasets. [Renumics Spotlight](https://github.com/Renumics/spotlight) allows you to create **interactive visualizations** to **identify critical clusters** in your data. Because Spotlight understands the data semantics within Hugging Face datasets, you can **[get started with just one line of code](https://renumics.com/docs)**:

```python
import datasets
from renumics import spotlight

ds = datasets.load_dataset('speech_commands', 'v0.01', split='validation')
spotlight.show(ds)
```

<p align="center"><a href="https://github.com/Renumics/spotlight"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/scalable-data-inspection/speech_commands_vis_s.gif" width="100%"/></a></p>

Spotlight allows you to **leverage model results** such as predictions and embeddings to gain a deeper understanding of data segments and model failure modes:

```python
ds_results = datasets.load_dataset('renumics/speech_commands-ast-finetuned-results', 'v0.01', split='validation')
ds = datasets.concatenate_datasets([ds, ds_results], axis=1)
spotlight.show(ds, dtype={'embedding': spotlight.Embedding}, layout=spotlight.layouts.debug_classification(embedding='embedding', inspect={'audio': spotlight.dtypes.audio_dtype}))
```

Data inspection is a very important task in almost all ML development stages, but it can also be very time-consuming.

> “Manual inspection of data has probably the highest value-to-prestige ratio of any activity in machine learning.” — Greg Brockman

[Spotlight](https://renumics.com/docs) helps you **make data inspection more scalable** along two dimensions: setting up and maintaining custom data inspection workflows, and finding relevant data samples and clusters to inspect. In the following sections we show some examples based on Hugging Face datasets.

## Spotlight 🤝 Hugging Face datasets

The *datasets* library has several features that make it an ideal tool for working with ML datasets: It stores tabular data (e.g. metadata, labels) along with unstructured data (e.g. images, audio) in a common Arrow table. *Datasets* also describes important data semantics through features (e.g. images, audio) and additional task-specific metadata.

Spotlight works directly on top of the *datasets* library. This means that there is no need to copy or pre-process the dataset for data visualization and inspection. Spotlight loads the tabular data into memory to allow for efficient, client-side data analytics. Memory-intensive unstructured data samples (e.g. audio, images, video) are loaded lazily on demand. In most cases, data types and label mappings are inferred directly from the dataset.
Here, we visualize the CIFAR-100 dataset with one line of code: ```python ds = datasets.load_dataset('cifar100', split='test') spotlight.show(ds) ``` In cases where the data types are ambiguous or not specified, the Spotlight API allows to manually assign them: ```python label_mapping = dict(zip(ds.features['fine_label'].names, range(len(ds.features['fine_label'].names)))) spotlight.show(ds, dtype={'img': spotlight.Image, 'fine_label': spotlight.dtypes.CategoryDType(categories=label_mapping)}) ``` ## **Leveraging model results for data inspection** Exploring raw unstructured datasets often yield little insights. Leveraging model results such as predictions or embeddings can help to uncover critical data samples and clusters. Spotlight has several visualization options (e.g. similarity map, confusion matrix) that specifically make use of model results. We recommend storing your prediction results directly in a Hugging Face dataset. This not only allows you to take advantage of the batch processing capabilities of the datasets library, but also keeps label mappings. We can use the [*transformers* library](https://huggingface.co/docs/transformers) to compute embeddings and predictions on the CIFAR-100 image classification problem. We install the libraries via pip: ```bash pip install renumics-spotlight datasets transformers[torch] ``` Now we can compute the enrichment: ```python import torch import transformers device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model_name = "Ahmed9275/Vit-Cifar100" processor = transformers.ViTImageProcessor.from_pretrained(model_name) cls_model = transformers.ViTForImageClassification.from_pretrained(model_name).to(device) fe_model = transformers.ViTModel.from_pretrained(model_name).to(device) def infer(batch): images = [image.convert("RGB") for image in batch] inputs = processor(images=images, return_tensors="pt").to(device) with torch.no_grad(): outputs = cls_model(**inputs) probs = torch.nn.functional.softmax(outputs.logits, dim=-1).cpu().numpy() embeddings = fe_model(**inputs).last_hidden_state[:, 0].cpu().numpy() preds = probs.argmax(axis=-1) return {"prediction": preds, "embedding": embeddings} features = datasets.Features({**ds.features, "prediction": ds.features["fine_label"], "embedding": datasets.Sequence(feature=datasets.Value("float32"), length=768)}) ds_enriched = ds.map(infer, input_columns="img", batched=True, batch_size=2, features=features) ``` If you don’t want to perform the full inference run, you can alternatively download pre-computed model results for CIFAR-100 to follow this tutorial: ```python ds_results = datasets.load_dataset('renumics/spotlight-cifar100-enrichment', split='test') ds_enriched = datasets.concatenate_datasets([ds, ds_results], axis=1) ``` We can now use the results to interactively explore relevant data samples and clusters in Spotlight: ```python layout = spotlight.layouts.debug_classification(label='fine_label', embedding='embedding', inspect={'img': spotlight.dtypes.image_dtype}) spotlight.show(ds_enriched, dtype={'embedding': spotlight.Embedding}, layout=layout) ``` <figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/scalable-data-inspection/cifar-100-model-debugging.png" alt="CIFAR-100 model debugging layout example."> </figure> ## Customizing data inspection workflows Visualization layouts can be interactively changed, saved and loaded in the GUI: You can select different widget types and configurations. 
The *Inspector* widget allows you to represent multimodal data samples including text, image, audio, video and time series data.

You can also define layouts through the [Python API](https://renumics.com/api/spotlight/). This option is especially useful for building custom data inspection and curation workflows, including EDA, model debugging and model monitoring tasks. In combination with the data issues widget, the Python API offers a great way to integrate the results of existing scripts (e.g. data quality checks or model monitoring) into a scalable data inspection workflow.

## Using Spotlight on the Hugging Face Hub

You can use Spotlight directly on your local NLP, audio, CV or multimodal dataset. If you would like to showcase your dataset or model results on the Hugging Face Hub, you can use Hugging Face Spaces to launch a Spotlight visualization for it.

We have already prepared [example Spaces](https://huggingface.co/renumics) for many popular NLP, audio and CV datasets on the Hub. You can simply duplicate one of these Spaces and specify your dataset in the `HF_DATASET` variable. You can optionally choose a dataset that contains model results and other configuration options such as splits, subsets or dataset revisions.

<figure class="image text-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/scalable-data-inspection/space_duplication.png" alt="Creating a new dataset visualization with Spotlight by duplicating a Hugging Face space."> </figure>

## What’s next?

With Spotlight you can create **interactive visualizations** and leverage data enrichments to **identify critical clusters** in your Hugging Face datasets. In this blog, we have seen both an audio ML and a computer vision example.

You can use Spotlight directly to explore and curate your NLP, audio, CV or multimodal dataset:

- Install Spotlight: *pip install renumics-spotlight*
- Check out the [documentation](https://renumics.com/docs) or open an issue on [GitHub](https://github.com/Renumics/spotlight)
- Join the [Spotlight community](https://discord.gg/VAQdFCU5YD) on Discord
- Follow us on [Twitter](https://twitter.com/renumics) and [LinkedIn](https://www.linkedin.com/company/renumics)
Personal Copilot: Train Your Own Coding Assistant
smangrul
October 27, 2023
personal-copilot
bigcode, llm, nlp, inference, guide
https://huggingface.co/blog/personal-copilot
# Personal Copilot: Train Your Own Coding Assistant

In the ever-evolving landscape of programming and software development, the quest for efficiency and productivity has led to remarkable innovations. One such innovation is the emergence of code generation models such as [Codex](https://openai.com/blog/openai-codex), [StarCoder](https://arxiv.org/abs/2305.06161) and [Code Llama](https://arxiv.org/abs/2308.12950). These models have demonstrated remarkable capabilities in generating human-like code snippets, thereby showing immense potential as coding assistants.

However, while these pre-trained models can perform impressively across a range of tasks, there's an exciting possibility lying just beyond the horizon: the ability to tailor a code generation model to your specific needs. Think of personalized coding assistants which could be leveraged at an enterprise scale.

In this blog post we show how we created HugCoder 🤗, a code LLM fine-tuned on the code contents from the public repositories of the [`huggingface` GitHub organization](https://github.com/huggingface). We will discuss our data collection workflow, our training experiments, and some interesting results. This will enable you to create your own personal copilot based on your proprietary codebase. We will leave you with a couple of further extensions of this project for experimentation.

Let’s begin 🚀

![Using HugCoder in Visual Studio Code to help create a LoRA fine-tune](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/personal-copilot-demo.gif)

## Data Collection Workflow

Our desired dataset is conceptually simple; we structured it like so:

| Repository Name | Filepath in the Repository | File Contents |
|---|---|---|

Scraping code contents from GitHub is straightforward with the [Python GitHub API](https://github.com/PyGithub/PyGithub). However, depending on the number of repositories and the number of code files within a repository, one might easily run into API rate-limiting issues. To prevent such problems, we decided to clone all the public repositories locally and extract the contents from them instead of going through the API. We used the `multiprocessing` module from Python to download all repos in parallel, as shown in [this download script](https://github.com/sayakpaul/hf-codegen/blob/main/data/parallel_clone_repos.py).

A repository can often contain non-code files such as images, presentations and other assets. We’re not interested in scraping them. We created a [list of extensions](https://github.com/sayakpaul/hf-codegen/blob/f659eba76f07e622873211e5b975168b634e6c22/data/prepare_dataset.py#L17C1-L49C68) to filter them out. To parse code files other than Jupyter Notebooks, we simply used the "utf-8" encoding. For notebooks, we only considered the code cells.

We also excluded all file paths that were not directly related to code. These include: `.git`, `__pycache__`, and `xcodeproj`.

To keep the serialization of this content relatively memory-friendly, we used chunking and the [feather format](https://arrow.apache.org/docs/python/feather.html#:~:text=Feather%20is%20a%20portable%20file,Python%20(pandas)%20and%20R.). Refer to [this script](https://github.com/sayakpaul/hf-codegen/blob/main/data/prepare_dataset.py) for the full implementation.
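As a rough illustration of this workflow, here is a minimal sketch of the cloning and filtering steps. The organization name, repository list, extension subset and output paths are assumptions for illustration only; the linked download and preparation scripts are the reference implementation.

```python
import subprocess
from multiprocessing import Pool
from pathlib import Path

import pandas as pd  # feather support requires pyarrow

ORG = "huggingface"                 # assumed GitHub organization
MIRROR_DIR = Path("hf_repos")       # assumed local clone directory
CODE_EXTENSIONS = {".py", ".md", ".rs", ".ipynb"}   # illustrative subset of the real extension list
EXCLUDED = ("git", "__pycache__", "xcodeproj")      # path fragments to skip

def clone_repo(repo_name: str) -> None:
    # Shallow-clone one public repository locally to avoid GitHub API rate limits.
    url = f"https://github.com/{ORG}/{repo_name}.git"
    subprocess.run(["git", "clone", "--depth", "1", url, str(MIRROR_DIR / repo_name)], check=False)

def collect_files(repo_name: str) -> list[dict]:
    # Walk the clone and keep only code files whose path contains no excluded fragment.
    rows = []
    for path in (MIRROR_DIR / repo_name).rglob("*"):
        if not path.is_file() or path.suffix not in CODE_EXTENSIONS:
            continue
        if any(fragment in part for part in path.parts for fragment in EXCLUDED):
            continue
        rows.append({
            "repo_name": repo_name,
            "file_path": str(path.relative_to(MIRROR_DIR / repo_name)),
            "content": path.read_text(encoding="utf-8", errors="ignore"),
        })
    return rows

if __name__ == "__main__":
    repos = ["transformers", "datasets"]  # assumed subset of the repo list
    with Pool(processes=8) as pool:
        pool.map(clone_repo, repos)
        per_repo_rows = pool.map(collect_files, repos)
    # Serialize one chunk per repository in the memory-friendly feather format.
    for repo, rows in zip(repos, per_repo_rows):
        pd.DataFrame(rows).to_feather(MIRROR_DIR / f"{repo}.feather")
```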
The final dataset is [available on the Hub](https://huggingface.co/datasets/sayakpaul/hf-codegen-v2), and it looks like this: ![hf-stack-full](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/hf-stack-full.png) For this blog, we considered the top 10 Hugging Face public repositories, based on stargazers. They are the following: > ['transformers', 'pytorch-image-models', 'datasets', 'diffusers', 'peft', 'tokenizers', 'accelerate', 'text-generation-inference', 'chat-ui', 'deep-rl-class'] [This is the code we used to generate this dataset](https://github.com/pacman100/DHS-LLM-Workshop/tree/main/personal_copilot/dataset_generation), and [this is the dataset in the Hub](https://huggingface.co/datasets/smangrul/hf-stack-v1). Here is a snapshot of what it looks like: ![hf-stack-v1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/hf-stack-v1.png) To reduce the project complexity, we didn’t consider deduplication of the dataset. If you are interested in applying deduplication techniques for a production application, [this blog post](https://huggingface.co/blog/dedup) is an excellent resource about the topic in the context of code LLMs. ## Finetuning your own Personal Co-Pilot In this section, we show how to fine-tune the following models: [`bigcode/starcoder`](https://hf.co/bigcode/starcoder) (15.5B params), [`bigcode/starcoderbase-1b`](https://hf.co/bigcode/starcoderbase-1b) (1B params), [`Deci/DeciCoder-1b`](https://hf.co/Deci/DeciCoder-1b) (1B params). We'll use a single A100 40GB Colab Notebook using 🤗 PEFT (Parameter-Efficient Fine-Tuning) for all the experiments. Additionally, we'll show how to fully finetune the `bigcode/starcoder` (15.5B params) on a machine with 8 A100 80GB GPUs using 🤗 Accelerate's FSDP integration. The training objective is [fill in the middle (FIM)](https://arxiv.org/abs/2207.14255), wherein parts of a training sequence are moved to the end, and the reordered sequence is predicted auto-regressively. Why PEFT? Full fine-tuning is expensive. Let’s have some numbers to put things in perspective: Minimum GPU memory required for full fine-tuning: 1. Weight: 2 bytes (Mixed-Precision training) 2. Weight gradient: 2 bytes 3. Optimizer state when using Adam: 4 bytes for original FP32 weight + 8 bytes for first and second moment estimates 4. Cost per parameter adding all of the above: 16 bytes per parameter 5. **15.5B model -> 248GB of GPU memory without even considering huge memory requirements for storing intermediate activations -> minimum 4X A100 80GB GPUs required** Since the hardware requirements are huge, we'll be using parameter-efficient fine-tuning using [QLoRA](https://arxiv.org/abs/2305.14314). Here are the minimal GPU memory requirements for fine-tuning StarCoder using QLoRA: > trainable params: 110,428,160 || all params: 15,627,884,544 || trainable%: 0.7066097761926236 1. Base model Weight: 0.5 bytes * 15.51B frozen params = 7.755 GB 2. Adapter weight: 2 bytes * 0.11B trainable params = 0.22GB 3. Weight gradient: 2 bytes * 0.11B trainable params = 0.12GB 4. Optimizer state when using Adam: 4 bytes * 0.11B trainable params * 3 = 1.32GB 5. **Adding all of the above -> 9.51 GB ~10GB -> 1 A100 40GB GPU required** 🤯. The reason for A100 40GB GPU is that the intermediate activations for long sequence lengths of 2048 and batch size of 4 for training lead to higher memory requirements. 
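To make the memory arithmetic above easy to reproduce, here is a small sketch that recomputes both estimates. The byte counts per parameter mirror the assumptions listed above (2 bytes for weights and gradients, 12 bytes of Adam state, 0.5 bytes per 4-bit frozen weight); activation memory is deliberately excluded, as in the text.

```python
def full_finetune_gb(params_in_billions: float) -> float:
    # 2 (weights) + 2 (gradients) + 12 (Adam: fp32 copy + two moments) = 16 bytes per parameter;
    # billions of params * bytes per param gives GB directly.
    return params_in_billions * 16

def qlora_gb(frozen_in_billions: float, trainable_in_billions: float) -> float:
    # 0.5 bytes per 4-bit frozen base weight, plus 16 bytes per trainable adapter parameter.
    return frozen_in_billions * 0.5 + trainable_in_billions * 16

print(f"Full fine-tuning, 15.5B params: ~{full_finetune_gb(15.5):.0f} GB")          # ~248 GB
print(f"QLoRA, 15.51B frozen + 0.11B trainable: ~{qlora_gb(15.51, 0.11):.1f} GB")   # ~9.5 GB
```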
As we will see below, the GPU memory required is 26GB, which fits on an A100 40GB GPU. Also, A100 GPUs have better compatibility with Flash Attention 2.

In the above calculations, we didn't consider the memory required for intermediate activation checkpointing, which is considerable. We leverage Flash Attention V2 and Gradient Checkpointing to overcome this issue.

1. For QLoRA along with Flash Attention V2 and gradient checkpointing, the total memory occupied by the model on a single A100 40GB GPU is **26 GB** with a **batch size of 4**.
2. For full fine-tuning using FSDP along with Flash Attention V2 and Gradient Checkpointing, the memory occupied per GPU ranges between **70 GB to 77.6 GB** with a **per_gpu_batch_size of 1**.

Please refer to the [model-memory-usage](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) Space to easily calculate how much VRAM is needed to train and perform big model inference on a model hosted on the 🤗 Hugging Face Hub.

## Full Finetuning

We will look at how to do full fine-tuning of `bigcode/starcoder` (15.5B params) on 8 A100 80GB GPUs using the PyTorch Fully Sharded Data Parallel (FSDP) technique. For more information on FSDP, please refer to [Fine-tuning Llama 2 70B using PyTorch FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp) and [Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel](https://huggingface.co/blog/pytorch-fsdp).

**Resources**

1. Codebase: [link](https://github.com/pacman100/DHS-LLM-Workshop/tree/main/personal_copilot/training). It uses the recently added Flash Attention V2 support in Transformers.
2. FSDP Config: [fsdp_config.yaml](https://github.com/pacman100/DHS-LLM-Workshop/blob/main/personal_copilot/training/configs/fsdp_config.yaml)
3. Model: [bigcode/starcoder](https://huggingface.co/bigcode/starcoder)
4. Dataset: [smangrul/hf-stack-v1](https://huggingface.co/datasets/smangrul/hf-stack-v1)
5. Fine-tuned Model: [smangrul/peft-lora-starcoder15B-v2-personal-copilot-A100-40GB-colab](https://huggingface.co/smangrul/peft-lora-starcoder15B-v2-personal-copilot-A100-40GB-colab)

The command to launch training is given at [run_fsdp.sh](https://github.com/pacman100/DHS-LLM-Workshop/blob/main/personal_copilot/training/run_fsdp.sh).

```
accelerate launch --config_file "configs/fsdp_config.yaml" train.py \
    --model_path "bigcode/starcoder" \
    --dataset_name "smangrul/hf-stack-v1" \
    --subset "data" \
    --data_column "content" \
    --split "train" \
    --seq_length 2048 \
    --max_steps 2000 \
    --batch_size 1 \
    --gradient_accumulation_steps 2 \
    --learning_rate 5e-5 \
    --lr_scheduler_type "cosine" \
    --weight_decay 0.01 \
    --num_warmup_steps 30 \
    --eval_freq 100 \
    --save_freq 500 \
    --log_freq 25 \
    --num_workers 4 \
    --bf16 \
    --no_fp16 \
    --output_dir "starcoder-personal-copilot-A100-40GB-colab" \
    --fim_rate 0.5 \
    --fim_spm_rate 0.5 \
    --use_flash_attn
```

The total training time was **9 Hours**. Taking the cost of $12.00 / hr based on [lambdalabs](https://lambdalabs.com/service/gpu-cloud/pricing) for 8x A100 80GB GPUs, the total cost would be **$108**.

## PEFT

We will look at how to use QLoRA for fine-tuning `bigcode/starcoder` (15.5B params) on a single A100 40GB GPU using 🤗 PEFT. For more information on QLoRA and PEFT methods, please refer to [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes) and [🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware](https://huggingface.co/blog/peft).

**Resources**
1. Codebase: [link](https://github.com/pacman100/DHS-LLM-Workshop/tree/main/personal_copilot/training). It uses the recently added Flash Attention V2 support in Transformers.
2. Colab notebook: [link](https://colab.research.google.com/drive/1Tz9KKgacppA4S6H4eo_sw43qEaC9lFLs?usp=sharing). Make sure to choose an A100 GPU with the High RAM setting.
3. Model: [bigcode/starcoder](https://huggingface.co/bigcode/starcoder)
4. Dataset: [smangrul/hf-stack-v1](https://huggingface.co/datasets/smangrul/hf-stack-v1)
5. QLoRA Fine-tuned Model: [smangrul/peft-lora-starcoder15B-v2-personal-copilot-A100-40GB-colab](https://huggingface.co/smangrul/peft-lora-starcoder15B-v2-personal-copilot-A100-40GB-colab)

The command to launch training is given at [run_peft.sh](https://github.com/pacman100/DHS-LLM-Workshop/blob/main/personal_copilot/training/run_peft.sh). The total training time was **12.5 Hours**. Taking the cost of **$1.10 / hr** based on [lambdalabs](https://lambdalabs.com/service/gpu-cloud/pricing), the total cost would be **$13.75**. That's pretty good 🚀! In terms of cost, it's **7.8X** lower than the cost of full fine-tuning.

## Comparison

The plot below shows the eval loss, train loss and learning rate schedule for QLoRA vs full fine-tuning. We observe that full fine-tuning leads to slightly lower loss and converges a bit faster compared to QLoRA. The learning rate for PEFT fine-tuning is 10X higher than that of full fine-tuning.

![plots](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/full_finetuning_vs_qlora.png)

To make sure that our QLoRA model doesn't lead to catastrophic forgetting, we ran the Python HumanEval benchmark on it. Below are the results we got. `Pass@1` measures the pass rate of completions considering just a single generated code candidate per problem. We can observe that the performance on `humaneval-python` is comparable between the base `bigcode/starcoder` (15.5B params) and the fine-tuned PEFT model `smangrul/peft-lora-starcoder15B-v2-personal-copilot-A100-40GB-colab`.

| Model | Pass@1 |
|---|---|
| bigcode/starcoder | 33.57 |
| smangrul/peft-lora-starcoder15B-v2-personal-copilot-A100-40GB-colab | 33.37 |

Let's now look at some qualitative samples. In our manual analysis, we noticed that QLoRA led to slight overfitting, so we down-weight it by creating a new weighted adapter with weight 0.8 via the `add_weighted_adapter` utility of PEFT.

We will look at 2 code infilling examples wherein the task of the model is to fill the part denoted by the `<FILL_ME>` placeholder. We will consider infilling completions from GitHub Copilot, the QLoRA fine-tuned model and the full fine-tuned model.

![qualitative_comparison_1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/qlora_vs_finetune_1.png)

*Qualitative Example 1*

In the example above, the completion from GitHub Copilot is along the correct lines but doesn't help much. On the other hand, completions from the QLoRA and full fine-tuned models correctly infill the entire function call with the necessary parameters. However, they also add a lot more noise afterwards. This could be controlled with a post-processing step to limit completions to closing brackets or new lines. Note that both the QLoRA and the full fine-tuned models produce results of similar quality.
![qualitative_comparison_2](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/qlora_vs_finetune_2.png) Qualitative Example 2 In the second example above, **GitHub Copilot didn't give any completion**. This can be due to the fact that 🤗 PEFT is a recent library and not yet part of Copilot's training data, which **is exactly the type of problem we are trying to address**. On the other hand, completions from QLoRA and full fine-tuned models are correctly infilling the entire function call with the necessary parameters. Again, note that both the QLoRA and the full fine-tuned models are giving generations of similar quality. Inference Code with various examples for full fine-tuned model and peft model are available at [Full_Finetuned_StarCoder_Inference.ipynb](https://github.com/pacman100/DHS-LLM-Workshop/blob/main/personal_copilot/inference/Full_Finetuned_StarCoder_Inference.ipynb) and [PEFT_StarCoder_Inference.ipynb](https://github.com/pacman100/DHS-LLM-Workshop/blob/main/personal_copilot/inference/PEFT_StarCoder_Inference.ipynb), respectively. Therefore, we can observe that the generations from both the variants are as per expectations. Awesome! 🚀 ## How do I use it in VS Code? You can easily configure a custom code-completion LLM in VS Code using 🤗 [llm-vscode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) VS Code Extension, together with hosting the model via [🤗 Inference EndPoints](https://ui.endpoints.huggingface.co/). We'll go through the required steps below. You can learn more details about deploying an endpoint in the [inference endpoints documentation](https://huggingface.co/docs/inference-endpoints/index). ### Setting an Inference Endpoint Below are the screenshots with the steps we followed to create our custom Inference Endpoint. We used our QLoRA model, exported as a full-sized _merged_ model that can be easily loaded in `transformers`. ![ie_1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/inference_endpoint_1.png) ![ie_2](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/inference_endpoint_2.png) ### Setting up the VS Code Extension Just follow the [installation steps](https://github.com/huggingface/llm-vscode#installation). In the settings, replace the endpoint in the field below, so it points to the HF Inference Endpoint you deployed. ![vs_code_endpoint](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/vs_code_endpoint.png) Usage will look like below: ![code_completion](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/vs_code_completion_usage.png) # Finetuning your own Code Chat Assistant So far, the models we trained were specifically trained as personal co-pilot for code completion tasks. They aren't trained to carry out conversations or for question answering. `Octocoder` and `StarChat` are great examples of such models. This section briefly describes how to achieve that. **Resources** 1. Codebase: [link](https://github.com/pacman100/DHS-LLM-Workshop/tree/main/code_assistant/training). It uses the recently added Flash Attention V2 support in Transformers. 2. Colab notebook: [link](https://colab.research.google.com/drive/1XFyePK-3IoyX81RM94JO73CcIZtAU4i4?usp=sharing). Make sure to choose A100 GPU with High RAM setting. 3. 
Model: [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus)
4. Dataset: [smangrul/code-chat-assistant-v1](https://huggingface.co/datasets/smangrul/code-chat-assistant-v1). Mix of `LIMA+GUANACO` with proper formatting in a ready-to-train format.
5. Trained Model: [smangrul/peft-lora-starcoderplus-chat-asst-A100-40GB-colab](https://huggingface.co/smangrul/peft-lora-starcoderplus-chat-asst-A100-40GB-colab)

# Dance of LoRAs

If you have dabbled with Stable Diffusion models and LoRAs for making your own Dreambooth models, you might be familiar with the concepts of combining different LoRAs with different weights, or of using a LoRA with a base model different from the one on which it was trained. In the text/code domain, this remains unexplored territory. We carry out experiments in this regard and have observed very promising findings. Are you ready? Let's go! 🚀

## Mix-and-Match LoRAs

PEFT currently supports 3 ways of combining LoRA models: `linear`, `svd` and `cat`. For more details, refer to [tuners#peft.LoraModel.add_weighted_adapter](https://huggingface.co/docs/peft/main/en/package_reference/tuners#peft.LoraModel.add_weighted_adapter).

Our notebook [Dance_of_LoRAs.ipynb](https://github.com/pacman100/DHS-LLM-Workshop/blob/main/personal_copilot/inference/Dance_of_LoRAs.ipynb) includes all the inference code and various LoRA loading combinations, like loading the chat assistant on top of `starcoder` instead of `starcoderplus`, which is the base model that we fine-tuned.

Here, we will consider 2 abilities (`chatting/QA` and `code-completion`) on 2 data distributions (`top 10 public hf codebase` and `generic codebase`). That gives us 4 axes on which we'll carry out some qualitative evaluation analyses.

#### First, let us consider the `chatting/QA` task.

If we disable adapters, we observe that the task fails for both datasets, as the base model (`starcoder`) is only meant for code completion and not suitable for `chatting/question-answering`. Enabling the `copilot` adapter performs similarly to the disabled case because this LoRA was also specifically fine-tuned for code-completion.

Now, let's enable the `assistant` adapter.

![assistant_chat_generic](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/generic_qa_short.png)

Question Answering based on generic code

![assistant_chat_hf](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/qa_hf.png)

Question Answering based on HF code

We can observe that the generic question regarding `scrapy` is answered properly. However, it fails for the question related to HF code, which wasn't part of its pretraining data.

#### Let us now consider the `code-completion` task.

On disabling adapters, we observe that the code completion for the generic two-sum works as expected. However, the HF code completion fails with wrong params to `LoraConfig`, because the base model hasn't seen it in its pretraining data. Enabling `assistant` performs similarly to the disabled case, as it was trained on natural language conversations which didn't have any Hugging Face code repos.

Now, let's enable the `copilot` adapter.

![copilot_code_generic](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/infill.png)

We can observe that the `copilot` adapter gets it right in both cases. Therefore, it performs as expected for code-completions when working with the HF-specific codebase as well as generic codebases.
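For readers who want to reproduce this adapter switching, the loading pattern looks roughly like the sketch below. The adapter repos are the ones linked above; quantization, tokenization and generation details are omitted here, and the full code lives in the Dance_of_LoRAs notebook.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the StarCoder base model (quantization/device placement omitted for brevity).
base = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", device_map="auto")

# Attach the code-completion LoRA under the name "copilot", then add the
# chat-assistant LoRA (trained on starcoderplus) under the name "assistant".
model = PeftModel.from_pretrained(
    base,
    "smangrul/peft-lora-starcoder15B-v2-personal-copilot-A100-40GB-colab",
    adapter_name="copilot",
)
model.load_adapter("smangrul/peft-lora-starcoderplus-chat-asst-A100-40GB-colab", adapter_name="assistant")

model.set_adapter("copilot")    # code-completion behaviour
model.set_adapter("assistant")  # chatting/QA behaviour

# Temporarily fall back to the raw base model, e.g. for the "disabled adapters" comparison.
with model.disable_adapter():
    pass  # run generation here
```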
**Now, as a user, I want to combine the ability of `assistant` as well as `copilot`. This will enable me to use it for code completion while coding in an IDE, and also have it as a chatbot to answer my questions regarding APIs, classes, methods, documentation. It should be able to provide answers to questions like `How do I use x`, `Please write a code snippet for Y` on my codebase.** PEFT allows you to do it via `add_weighted_adapter`. Let's create a new adapter `code_buddy` with equal weights to `assistant` and `copilot` adapters. ![combining_loras](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/combine_adapters.png) Combining Multiple Adapters Now, let's see how `code_buddy` performs on the `chatting/question_answering` tasks. ![mix_chat_hf](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/qa_combined_hf.png) We can observe that `code_buddy` is performing much better than the `assistant` or `copilot` adapters alone! It is able to answer the _write a code snippet_ request to show how to use a specific HF repo API. However, it is also hallucinating the wrong links/explanations, which remains an open challenge for LLMs. Below is the performance of `code_buddy` on code completion tasks. ![mix_code_generic](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/infill_combined.png) We can observe that `code_buddy` is performing on par with `copilot`, which was specifically finetuned for this task. ## Transfer LoRAs to different base models We can also transfer the LoRA models to different base models. We will take the hot-off-the-press `Octocoder` model and apply on it the LoRA we trained above with `starcoder` base model. Please go through the following notebook [PEFT_Personal_Code_CoPilot_Adapter_Transfer_Octocoder.ipynb](https://github.com/pacman100/DHS-LLM-Workshop/blob/main/personal_copilot/inference/PEFT_Personal_Code_CoPilot_Adapter_Transfer_Octocoder.ipynb) for the entire code. **Performance on the Code Completion task** ![octocoder_code_hf](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/octocoder_infill.png) We can observe that `octocoder` is performing great. It is able to complete HF specific code snippets. It is also able to complete generic code snippets as seen in the notebook. **Performance on the Chatting/QA task** As Octocoder is trained to answer questions and carry out conversations about coding, let's see if it can use our LoRA adapter to answer HF specific questions. ![octocoder_chat_hf](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/octocoder_qa.png) Yay! It correctly answers in detail how to create `LoraConfig` and related peft model along with correctly using the model name, dataset name as well as param values of LoraConfig. On disabling the adapter, it fails to correctly use the API of `LoraConfig` or to create a PEFT model, suggesting that it isn't part of the training data of Octocoder. # How do I run it locally? I know, after all this, you want to finetune starcoder on your codebase and use it locally on your consumer hardware such as Mac laptops with M1 GPUs, windows with RTX 4090/3090 GPUs ... Don't worry, we have got you covered. We will be using this super cool open source library [mlc-llm](https://github.com/mlc-ai/mlc-llm) 🔥. 
Specifically, we will be using this fork [pacman100/mlc-llm](https://github.com/pacman100/mlc-llm), which has changes to get it working with the Hugging Face Code Completion extension for VS Code. On my Mac laptop with an M1 Metal GPU, the 15B model was painfully slow. Hence, we will go small and train a PEFT LoRA version as well as a full finetuned version of `bigcode/starcoderbase-1b`. The training colab notebooks are linked below:

1. Colab notebook for Full fine-tuning and PEFT LoRA finetuning of `starcoderbase-1b`: [link](https://colab.research.google.com/drive/1tTdvc2buL3Iy1PKwrG_bBIDP06DC9r5m?usp=sharing)

The training loss, evaluation loss as well as learning rate schedules are plotted below:

![loss_plots](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/loss_plots.png)

Now, we will look at detailed steps for locally hosting the merged model [smangrul/starcoder1B-v2-personal-copilot-merged](https://huggingface.co/smangrul/starcoder1B-v2-personal-copilot-merged) and using it with the 🤗 [llm-vscode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) VS Code Extension.

1. Clone the repo
```
git clone --recursive https://github.com/pacman100/mlc-llm.git && cd mlc-llm/
```

2. Install mlc-ai and mlc-chat (in editable mode):
```
pip install --pre --force-reinstall mlc-ai-nightly mlc-chat-nightly -f https://mlc.ai/wheels
cd python
pip uninstall mlc-chat-nightly
pip install -e "."
```

3. Compile the model via:
```
time python3 -m mlc_llm.build --hf-path smangrul/starcoder1B-v2-personal-copilot-merged --target metal --use-cache=0
```

4. Update the config with the following values in `dist/starcoder1B-v2-personal-copilot-merged-q4f16_1/params/mlc-chat-config.json`:
```diff
{
    "model_lib": "starcoder7B-personal-copilot-merged-q4f16_1",
    "local_id": "starcoder7B-personal-copilot-merged-q4f16_1",
    "conv_template": "code_gpt",
-   "temperature": 0.7,
+   "temperature": 0.2,
-   "repetition_penalty": 1.0,
    "top_p": 0.95,
-   "mean_gen_len": 128,
+   "mean_gen_len": 64,
-   "max_gen_len": 512,
+   "max_gen_len": 64,
    "shift_fill_factor": 0.3,
    "tokenizer_files": [
        "tokenizer.json",
        "merges.txt",
        "vocab.json"
    ],
    "model_category": "gpt_bigcode",
    "model_name": "starcoder1B-v2-personal-copilot-merged"
}
```

5. Run the local server:
```
python -m mlc_chat.rest --model dist/starcoder1B-v2-personal-copilot-merged-q4f16_1/params --lib-path dist/starcoder1B-v2-personal-copilot-merged-q4f16_1/starcoder1B-v2-personal-copilot-merged-q4f16_1-metal.so
```

6. Change the endpoint of the HF Code Completion extension in VS Code to point to the local server:

![local_endpoint](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/local_endpoint.png)

7. Open a new file in VS Code, paste the code below and place the cursor in-between the doc quotes, so that the model tries to infill the doc string:

![local_inference](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/personal_copilot/local_inference.png)

Voila! ⭐️ The demo at the start of this post is this 1B model running locally on my Mac laptop.

## Conclusion

In this blog post, we saw how to finetune `starcoder` to create a personal co-pilot that knows about our code. We called it 🤗 HugCoder, as we trained it on Hugging Face code :) After looking at the data collection workflow, we compared training using QLoRA vs full fine-tuning.
We also experimented by combining different LoRAs, which is still an unexplored technique in the text/code domain. For deployment, we examined remote inference using 🤗 Inference Endpoints, and also showed on-device execution of a smaller model with VS Code and MLC. Please, let us know if you use these methods for your own codebase! ## Acknowledgements We would like to thank [Pedro Cuenca](https://github.com/pcuenca), [Leandro von Werra](https://github.com/lvwerra), [Benjamin Bossan](https://github.com/BenjaminBossan), [Sylvain Gugger](https://github.com/sgugger) and [Loubna Ben Allal](https://github.com/loubnabnl) for their help with the writing of this blogpost.
Creating open machine learning datasets? Share them on the Hugging Face Hub!
davanstrien
October 30, 2023
researcher-dataset-sharing
community, research, datasets, guide
https://huggingface.co/blog/researcher-dataset-sharing
# Creating open machine learning datasets? Share them on the Hugging Face Hub! ## Who is this blog post for? Are you a researcher doing data-intensive research or using machine learning as a research tool? As part of this research, you have likely created datasets for training and evaluating machine learning models, and like many researchers, you may be sharing these datasets via Google Drive, OneDrive, or your own personal server. In this post, we’ll outline why you might want to consider sharing these datasets on the Hugging Face Hub instead. This post outlines: - Why researchers should openly share their data (feel free to skip this section if you are already convinced about this!) - What the Hugging Face Hub offers for researchers who want to share their datasets. - Resources for getting started with sharing your datasets on the Hugging Face Hub. ## Why share your data? Machine learning is increasingly utilized across various disciplines, enhancing research efficiency in tackling diverse problems. Data remains crucial for training and evaluating models, especially when developing new machine-learning methods for specific tasks or domains. Large Language Models may not perform well on specialized tasks like bio-medical entity extraction, and computer vision models might struggle with classifying domain specific images. Domain-specific datasets are vital for evaluating and training machine learning models, helping to overcome the limitations of existing models. Creating these datasets, however, is challenging, requiring significant time, resources, and domain expertise, particularly for annotating data. Maximizing the impact of this data is crucial for the benefit of both the researchers involved and their respective fields. The Hugging Face Hub can help achieve this maximum impact. ## What is the Hugging Face Hub? The [Hugging Face Hub](https://huggingface.co/) has become the central hub for sharing open machine learning models, datasets and demos, hosting over 360,000 models and 70,000 datasets. The Hub enables people – including researchers – to access state-of-the-art machine learning models and datasets in a few lines of code. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/researcher-dataset-sharing/hub-datasets.png" alt="Screenshot of datasets in the Hugging Face Hub"><br> <em>Datasets on the Hugging Face Hub.</em> </p> ## What does the Hugging Face Hub offer for data sharing? This blog post won’t cover all of the features and benefits of hosting datasets on the Hugging Face Hub but will instead highlight some that are particularly relevant for researchers. ### Visibility for your work The Hugging Face Hub has become the central Hub for people to collaborate on open machine learning. Making your datasets available via the Hugging Face Hub ensures it is visible to a wide audience of machine learning researchers. The Hub makes it possible to expose links between datasets, models and demos which makes it easier to see how people are using your datasets for training models and creating demos. ### Tools for exploring and working with datasets There are a growing number of tools being created which make it easier to understand datasets hosted on the Hugging Face Hub. ### Tools for loading datasets hosted on the Hugging Face Hub Datasets shared on the Hugging Face Hub can be loaded via a variety of tools. 
The [`datasets`](https://huggingface.co/docs/datasets/) library is a Python library which can directly load datasets from the Hugging Face Hub via the `load_dataset` function. The `datasets` library is optimized for working with large datasets (including datasets which won't fit into memory) and for supporting machine learning workflows.

Alongside this, many of the datasets on the Hub can also be loaded directly into [`Pandas`](https://pandas.pydata.org/), [`Polars`](https://www.pola.rs/), and [`DuckDB`](https://duckdb.org/). This [page](https://huggingface.co/docs/datasets-server/parquet_process) provides a more detailed overview of the different ways you can load datasets from the Hub.

#### Datasets Viewer

The datasets viewer allows people to explore and interact with datasets hosted on the Hub directly in the browser by visiting the dataset repository on the Hugging Face Hub. This makes it much easier for others to view and explore your data without first having to download it. The datasets viewer also allows you to search and filter datasets, which can help potential dataset users understand the nature of a dataset more quickly.

<p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/researcher-dataset-sharing/datasets-viewer.png" alt="Screenshot of a dataset viewer on the Hub showing a named entity recognition dataset"><br> <em>The dataset viewer for the multiconer_v2 Named Entity Recognition dataset.</em> </p>

### Community tools

Alongside the datasets viewer, there are a growing number of community-created tools for exploring datasets on the Hub.

#### Spotlight

[`Spotlight`](https://github.com/Renumics/spotlight) is a tool that allows you to interactively explore datasets on the Hub with one line of code.

<p align="center"><a href="https://github.com/Renumics/spotlight"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/scalable-data-inspection/speech_commands_vis_s.gif" width="100%"/></a></p>

You can learn more about how you can use this tool in this [blog post](https://huggingface.co/blog/scalable-data-inspection).

#### Lilac

[`Lilac`](https://lilacml.com/) is a tool that aims to help you "curate better data for LLMs" and allows you to explore natural language datasets more easily. The tool allows you to semantically search your dataset (search by meaning), cluster data and gain high-level insights into your dataset.

<div style="text-align: center;"> <iframe src="https://lilacai-lilac.hf.space" frameborder="0" width="850" height="450" ></iframe> <em>A Spaces demo of the Lilac tool.</em> </div>

You can explore the `Lilac` tool further in a [demo](https://lilacai-lilac.hf.space/).

This growing number of tools for exploring datasets on the Hub makes it easier for people to explore and understand your datasets and can help promote your datasets to a wider audience.

### Support for large datasets

The Hub can host large datasets; it currently hosts datasets with multiple TBs of data. The `datasets` library, which users can use to download and process datasets from the Hub, supports streaming, making it possible to work with large datasets without downloading the entire dataset upfront. This can be invaluable for allowing researchers with fewer computational resources to work with your datasets, or for selecting small portions of a huge dataset for testing, development or prototyping.
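As a minimal sketch of what streaming looks like in practice (the dataset id below is a placeholder; any Hub dataset works the same way):

```python
import pandas as pd
from datasets import load_dataset

# Stream a large dataset hosted on the Hub without downloading it in full.
ds = load_dataset("username/my-large-dataset", split="train", streaming=True)

# Take a small slice for quick prototyping, then materialize it as a pandas DataFrame.
sample = pd.DataFrame(list(ds.take(1000)))
print(sample.head())
```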
<p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/researcher-dataset-sharing/filesize.png" alt="Screenshot of the file size information for a dataset"><br> <em>The Hugging Face Hub can host the large datasets often created for machine learning research.</em> </p> ## API and client library interaction with the Hub Interacting with the Hugging Face Hub via an [API](https://huggingface.co/docs/hub/api) or the [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/index) Python library is possible. This includes creating new repositories, uploading data programmatically and creating and modifying metadata for datasets. This can be powerful for research workflows where new data or annotations continue to be created. The client library also makes uploading large datasets much more accessible. ## Community The Hugging Face Hub is already home to a large community of researchers, developers, artists, and others interested in using and contributing to an ecosystem of open-source machine learning. Making your datasets accessible to this community increases their visibility, opens them up to new types of users and places your datasets within the context of a larger ecosystem of models, datasets and libraries. The Hub also has features which allow communities to collaborate more easily. This includes a discussion page for each dataset, model and Space hosted on the Hub. This means users of your datasets can quickly ask questions and discuss ideas for working with a dataset. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/researcher-dataset-sharing/discussion.png" alt="Screenshot of a discussion for a dataset on the Hub."><br> <em>The Hub makes it easy to ask questions and discuss datasets.</em> </p> ### Other important features for researchers Some other features of the Hub may be of particular interest to researchers wanting to share their machine learning datasets on the Hub: - [Organizations](https://huggingface.co/organizations) allow you to collaborate with other people and share models, datasets and demos under a single organization. This can be an excellent way of highlighting the work of a particular research project or institute. - [Gated repositories](https://huggingface.co/docs/hub/datasets-gated) allow you to add some access restrictions to accessing your dataset. - Download metrics are available for datasets on the Hub; this can be useful for communicating the impact of your researchers to funders and hiring committees. - [Digital Object Identifiers (DOI)](https://huggingface.co/docs/hub/doi): it’s possible to register a persistent identifier for your dataset. ### How can I share my dataset on the Hugging Face Hub? Here are some resources to help you get started with sharing your datasets on the Hugging Face Hub: - General guidance on [creating](https://huggingface.co/docs/datasets/create_dataset) and [sharing datasets on the Hub](https://huggingface.co/docs/datasets/upload_dataset) - Guides for particular modalities: - Creating an [audio dataset](https://huggingface.co/docs/datasets/audio_dataset) - Creating an [image dataset](https://huggingface.co/docs/datasets/image_dataset) - Guidance on [structuring your repository](https://huggingface.co/docs/datasets/repository_structure) so a dataset can be automatically loaded from the Hub. 
The following pages will be useful if you want to share large datasets: - [Repository limitations and recommendations](https://huggingface.co/docs/hub/repositories-recommendations) provides general guidance on some of the considerations you'll want to make when sharing large datasets. - The [Tips and tricks for large uploads](https://huggingface.co/docs/huggingface_hub/guides/upload#tips-and-tricks-for-large-uploads) page provides some guidance on how to upload large datasets to the Hub. If you want any further help uploading a dataset to the Hub or want to upload a particularly large dataset, please contact datasets@huggingface.co.
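If you prefer to script the process, here is a minimal sketch of an upload using the `huggingface_hub` client library mentioned above; the repository id and local path are placeholders for illustration.

```python
from huggingface_hub import HfApi

# Assumes you are authenticated, e.g. via `huggingface-cli login` or the HF_TOKEN environment variable.
api = HfApi()

# Create the dataset repository if it does not exist yet, then upload a local folder.
api.create_repo("your-org/your-dataset", repo_type="dataset", exist_ok=True)
api.upload_folder(
    folder_path="path/to/local/dataset",
    repo_id="your-org/your-dataset",
    repo_type="dataset",
)
```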
Introducing Storage Regions on the HF Hub
julien-c
November 3, 2023
regions
announcement, enterprise, hub
https://huggingface.co/blog/regions
# Introducing Storage Regions on the Hub As part of our [Enterprise Hub](https://huggingface.co/enterprise) plan, we recently released support for **Storage Regions**. Regions let you decide where your org's models and datasets will be stored. This has two main benefits, which we'll briefly go over in this blog post: - **Regulatory and legal compliance**, and more generally, better digital sovereignty - **Performance** (improved download and upload speeds and latency) Currently we support the following regions: - US 🇺🇸 - EU 🇪🇺 - coming soon: Asia-Pacific 🌏 But first, let's see how to setup this feature in your organization's settings 🔥 ## Org settings If your organization is not an Enterprise Hub org yet, you will see the following screen: ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/storage-regions/no-feature.png) As soon as you subscribe, you will be able to see the Regions settings page: ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/storage-regions/feature-annotated.png) On that page you can see: - an audit of where your orgs' repos are currently located - dropdowns to select where your repos will be created ## Repository Tag Any repo (model or dataset) stored in a non-default location will display its Region directly as a tag. That way your organization's members can see at a glance where repos are located. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/storage-regions/tag-on-repo.png) ## Regulatory and legal compliance In many regulated industries, you may have a requirement to store your data in a specific area. For companies in the EU, that means you can use the Hub to build ML in a GDPR compliant way: with datasets, models and inference endpoints all stored within EU data centers. If you are an Enterprise Hub customer and have further questions about this, please get in touch! ## Performance Storing your models or your datasets closer to your team and infrastructure also means significantly improved performance, for both uploads and downloads. This makes a big difference considering model weights and dataset files are usually very large. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/storage-regions/upload-speed.png) As an example, if you are located in Europe and store your repositories in the EU region, you can expect to see ~4-5x faster upload and download speeds vs. if they were stored in the US.
Comparing the Performance of LLMs: A Deep Dive into Roberta, Llama 2, and Mistral for Disaster Tweets Analysis with Lora
mehdiiraqui
November 7, 2023
Lora-for-sequence-classification-with-Roberta-Llama-Mistral
nlp, guide, llm, peft
https://huggingface.co/blog/Lora-for-sequence-classification-with-Roberta-Llama-Mistral
# Comparing the Performance of LLMs: A Deep Dive into Roberta, Llama 2, and Mistral for Disaster Tweets Analysis with Lora <!-- TOC --> - [Comparing the Performance of LLMs: A Deep Dive into Roberta, Llama 2, and Mistral for Disaster Tweets Analysis with LoRA](#comparing-the-performance-of-llms-a-deep-dive-into-roberta-llama-2-and-mistral-for-disaster-tweets-analysis-with-lora) - [Introduction](#introduction) - [Hardware Used](#hardware-used) - [Goals](#goals) - [Dependencies](#dependencies) - [Pre-trained Models](#pre-trained-models) - [RoBERTa](#roberta) - [Llama 2](#llama-2) - [Mistral 7B](#mistral-7b) - [LoRA](#lora) - [Setup](#setup) - [Data preparation](#data-preparation) - [Data loading](#data-loading) - [Data Processing](#data-processing) - [Models](#models) - [RoBERTa](#roberta) - [Load RoBERTA Checkpoints for the Classification Task](#load-roberta-checkpoints-for-the-classification-task) - [LoRA setup for RoBERTa classifier](#lora-setup-for-roberta-classifier) - [Mistral](#mistral) - [Load checkpoints for the classfication model](#load-checkpoints-for-the-classfication-model) - [LoRA setup for Mistral 7B classifier](#lora-setup-for-mistral-7b-classifier) - [Llama 2](#llama-2) - [Load checkpoints for the classification mode](#load-checkpoints-for-the-classfication-mode) - [LoRA setup for Llama 2 classifier](#lora-setup-for-llama-2-classifier) - [Setup the trainer](#setup-the-trainer) - [Evaluation Metrics](#evaluation-metrics) - [Custom Trainer for Weighted Loss](#custom-trainer-for-weighted-loss) - [Trainer Setup](#trainer-setup) - [RoBERTa](#roberta) - [Mistral-7B](#mistral-7b) - [Llama 2](#llama-2) - [Hyperparameter Tuning](#hyperparameter-tuning) - [Results](#results) - [Conclusion](#conclusion) - [Resources](#resources) <!-- /TOC --> ## Introduction In the fast-moving world of Natural Language Processing (NLP), we often find ourselves comparing different language models to see which one works best for specific tasks. This blog post is all about comparing three models: RoBERTa, Mistral-7b, and Llama-2-7b. We used them to tackle a common problem - classifying tweets about disasters. It is important to note that Mistral and Llama 2 are large models with 7 billion parameters. In contrast, RoBERTa-large (355M parameters) is a relatively smaller model used as a baseline for the comparison study. In this blog, we used PEFT (Parameter-Efficient Fine-Tuning) technique: LoRA (Low-Rank Adaptation of Large Language Models) for fine-tuning the pre-trained model on the sequence classification task. LoRa is designed to significantly reduce the number of trainable parameters while maintaining strong downstream task performance. The main objective of this blog post is to implement LoRA fine-tuning for sequence classification tasks using three pre-trained models from Hugging Face: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), and [roberta-large](https://huggingface.co/roberta-large) ## Hardware Used - Number of nodes: 1 - Number of GPUs per node: 1 - GPU type: A6000 - GPU memory: 48GB ## Goals - Implement fine-tuning of pre-trained LLMs using LoRA PEFT methods. - Learn how to use the HuggingFace APIs ([transformers](https://huggingface.co/docs/transformers/index), [peft](https://huggingface.co/docs/peft/index), and [datasets](https://huggingface.co/docs/datasets/index)). - Setup the hyperparameter tuning and experiment logging using [Weights & Biases](https://wandb.ai). 
## Dependencies

```bash
datasets
evaluate
peft
scikit-learn
torch
transformers
wandb
```

Note: For reproducing the reported results, please check the pinned versions in the [wandb reports](#resources).

## Pre-trained Models

### [RoBERTa](https://arxiv.org/abs/1907.11692)

RoBERTa (Robustly Optimized BERT Approach) is an advanced variant of the BERT model proposed by the Meta AI research team. BERT is a transformer-based language model using self-attention mechanisms for contextual word representations and trained with a masked language model objective. Note that BERT is an encoder-only model used for natural language understanding tasks (such as sequence classification and token classification). RoBERTa is a popular model to fine-tune and appropriate as a baseline for our experiments. For more information, you can check the Hugging Face model [card](https://huggingface.co/docs/transformers/model_doc/roberta).

### [Llama 2](https://arxiv.org/abs/2307.09288)

Llama 2 (Large Language Model Meta AI) models belong to the family of large language models (LLMs) introduced by Meta AI. The Llama 2 models vary in size, with parameter counts ranging from 7 billion to 70 billion. Llama 2 is an auto-regressive language model based on the transformer decoder architecture. To generate text, Llama 2 processes a sequence of words as input and iteratively predicts the next token using a sliding window.

The Llama 2 architecture is slightly different from models like GPT-3. For instance, Llama 2 employs the SwiGLU activation function rather than ReLU and opts for rotary positional embeddings in place of absolute learnable positional embeddings. The recently released Llama 2 introduced architectural refinements to better leverage very long sequences by extending the context length up to 4096 tokens, and using grouped-query attention (GQA) decoding.

### [Mistral 7B](https://arxiv.org/abs/2310.06825)

Mistral 7B v0.1, with 7.3 billion parameters, is the first LLM introduced by Mistral AI. The main novel techniques used in Mistral 7B's architecture are:

- Sliding Window Attention: Replaces full attention (quadratic compute cost) with sliding-window attention, where each token can attend to at most 4,096 tokens from the previous layer (linear compute cost). This mechanism enables Mistral 7B to handle longer sequences, where higher layers can access historical information beyond the window size of 4,096 tokens.
- Grouped-query Attention: used in Llama 2 as well, the technique optimizes the inference process (reducing processing time) by caching the key and value vectors for previously decoded tokens in the sequence.

## [LoRA](https://arxiv.org/abs/2106.09685)

PEFT, Parameter-Efficient Fine-Tuning, is a collection of techniques (p-tuning, prefix-tuning, IA3, Adapters, and LoRA) designed to fine-tune large models using a much smaller set of training parameters while preserving the performance levels typically achieved through full fine-tuning.

LoRA, Low-Rank Adaptation, is a PEFT method that shares similarities with Adapter layers. Its primary objective is to reduce the model's trainable parameters. LoRA's operation involves learning a low-rank update matrix while keeping the pre-trained weights frozen.

![image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/Lora-for-sequence-classification-with-Roberta-Llama-Mistral/lora.png)

## Setup

RoBERTa has a maximum sequence length limit of 512 tokens, so we set `MAX_LEN=512` for all models to ensure a fair comparison.
```python MAX_LEN = 512 roberta_checkpoint = "roberta-large" mistral_checkpoint = "mistralai/Mistral-7B-v0.1" llama_checkpoint = "meta-llama/Llama-2-7b-hf" ``` ## Data preparation ### Data loading We will load the dataset from Hugging Face: ```python from datasets import load_dataset dataset = load_dataset("mehdiiraqui/twitter_disaster") ``` Now, let's split the dataset into training and validation datasets. Then add the test set: ```python from datasets import Dataset # Split the dataset into training and validation datasets data = dataset['train'].train_test_split(train_size=0.8, seed=42) # Rename the default "test" split to "validation" data['val'] = data.pop("test") # Convert the test dataframe to HuggingFace dataset and add it into the first dataset data['test'] = dataset['test'] ``` Here's an overview of the dataset: ```bash DatasetDict({ train: Dataset({ features: ['id', 'keyword', 'location', 'text', 'target'], num_rows: 6090 }) val: Dataset({ features: ['id', 'keyword', 'location', 'text', 'target'], num_rows: 1523 }) test: Dataset({ features: ['id', 'keyword', 'location', 'text', 'target'], num_rows: 3263 }) }) ``` Let's check the data distribution: ```python import pandas as pd data['train'].to_pandas().info() data['test'].to_pandas().info() ``` - Train dataset ```<class 'pandas.core.frame.DataFrame'> RangeIndex: 7613 entries, 0 to 7612 Data columns (total 5 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 7613 non-null int64 1 keyword 7552 non-null object 2 location 5080 non-null object 3 text 7613 non-null object 4 target 7613 non-null int64 dtypes: int64(2), object(3) memory usage: 297.5+ KB ``` - Test dataset ``` <class 'pandas.core.frame.DataFrame'> RangeIndex: 3263 entries, 0 to 3262 Data columns (total 5 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 3263 non-null int64 1 keyword 3237 non-null object 2 location 2158 non-null object 3 text 3263 non-null object 4 target 3263 non-null int64 dtypes: int64(2), object(3) memory usage: 127.6+ KB ``` **Target distribution in the train dataset** ``` target 0 4342 1 3271 Name: count, dtype: int64 ``` As the classes are not balanced, we will compute the positive and negative weights and use them for loss calculation later: ```python pos_weights = len(data['train'].to_pandas()) / (2 * data['train'].to_pandas().target.value_counts()[1]) neg_weights = len(data['train'].to_pandas()) / (2 * data['train'].to_pandas().target.value_counts()[0]) ``` The final weights are: ``` POS_WEIGHT, NEG_WEIGHT = (1.1637114032405993, 0.8766697374481806) ``` Then, we compute the maximum length of the column text: ```python # Number of Characters max_char = data['train'].to_pandas()['text'].str.len().max() # Number of Words max_words = data['train'].to_pandas()['text'].str.split().str.len().max() ``` ``` The maximum number of characters is 152. The maximum number of words is 31. ``` ### Data Processing Let's take a look to one row example of training data: ```python data['train'][0] ``` ``` {'id': 5285, 'keyword': 'fear', 'location': 'Thibodaux, LA', 'text': 'my worst fear. https://t.co/iH8UDz8mq3', 'target': 0} ``` The data comprises a keyword, a location and the text of the tweet. For the sake of simplicity, we select the `text` feature as the only input to the LLM. At this stage, we prepared the train, validation, and test sets in the HuggingFace format expected by the pre-trained LLMs. 
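Before tokenizing, you can quickly double-check the class balance of each split with a couple of pandas calls (a small sketch that is not part of the original pipeline; it assumes the `data` DatasetDict prepared above):

```python
# Quick sanity check of the label balance in each split
for split in ("train", "val", "test"):
    counts = data[split].to_pandas()["target"].value_counts().to_dict()
    print(f"{split}: {counts}")
```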
The next step is to define the tokenized dataset for training using the appropriate tokenizer to transform the `text` feature into two Tensors of sequence of token ids and attention masks. As each model has its specific tokenizer, we will need to define three different datasets. We start by defining the RoBERTa dataloader: - Load the tokenizer: ```python from transformers import AutoTokenizer roberta_tokenizer = AutoTokenizer.from_pretrained(roberta_checkpoint, add_prefix_space=True) ``` **Note:** The RoBERTa tokenizer has been trained to treat spaces as part of the token. As a result, the first word of the sentence is encoded differently if it is not preceded by a white space. To ensure the first word includes a space, we set `add_prefix_space=True`. Also, to maintain consistent pre-processing for all three models, we set the parameter to 'True' for Llama 2 and Mistral 7b. - Define the preprocessing function for converting one row of the dataframe: ```python def roberta_preprocessing_function(examples): return roberta_tokenizer(examples['text'], truncation=True, max_length=MAX_LEN) ``` By applying the preprocessing function to the first example of our training dataset, we have the tokenized inputs (`input_ids`) and the attention mask: ```python roberta_preprocessing_function(data['train'][0]) ``` ``` {'input_ids': [0, 127, 2373, 2490, 4, 1205, 640, 90, 4, 876, 73, 118, 725, 398, 13083, 329, 398, 119, 1343, 246, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` - Now, let's apply the preprocessing function to the entire dataset: ```python col_to_delete = ['id', 'keyword','location', 'text'] # Apply the preprocessing function and remove the undesired columns roberta_tokenized_datasets = data.map(roberta_preprocessing_function, batched=True, remove_columns=col_to_delete) # Rename the target to label as for HugginFace standards roberta_tokenized_datasets = roberta_tokenized_datasets.rename_column("target", "label") # Set to torch format roberta_tokenized_datasets.set_format("torch") ``` **Note:** we deleted the undesired columns from our data: id, keyword, location and text. We have deleted the text because we have already converted it into the inputs ids and the attention mask: We can have a look into our tokenized training dataset: ```python roberta_tokenized_datasets['train'][0] ``` ``` {'label': tensor(0), 'input_ids': tensor([ 0, 127, 2373, 2490, 4, 1205, 640, 90, 4, 876, 73, 118, 725, 398, 13083, 329, 398, 119, 1343, 246, 2]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])} ``` - For generating the training batches, we also need to pad the rows of a given batch to the maximum length found in the batch. For that, we will use the `DataCollatorWithPadding` class: ```python # Data collator for padding a batch of examples to the maximum length seen in the batch from transformers import DataCollatorWithPadding roberta_data_collator = DataCollatorWithPadding(tokenizer=roberta_tokenizer) ``` You can follow the same steps for preparing the data for Mistral 7B and Llama 2 models: **Note** that Llama 2 and Mistral 7B don't have a default `pad_token_id`. So, we use the `eos_token_id` for padding as well. 
- Mistral 7B: ```python # Load Mistral 7B Tokenizer from transformers import AutoTokenizer, DataCollatorWithPadding mistral_tokenizer = AutoTokenizer.from_pretrained(mistral_checkpoint, add_prefix_space=True) mistral_tokenizer.pad_token_id = mistral_tokenizer.eos_token_id mistral_tokenizer.pad_token = mistral_tokenizer.eos_token def mistral_preprocessing_function(examples): return mistral_tokenizer(examples['text'], truncation=True, max_length=MAX_LEN) mistral_tokenized_datasets = data.map(mistral_preprocessing_function, batched=True, remove_columns=col_to_delete) mistral_tokenized_datasets = mistral_tokenized_datasets.rename_column("target", "label") mistral_tokenized_datasets.set_format("torch") # Data collator for padding a batch of examples to the maximum length seen in the batch mistral_data_collator = DataCollatorWithPadding(tokenizer=mistral_tokenizer) ``` - Llama 2: ```python # Load Llama 2 Tokenizer from transformers import AutoTokenizer, DataCollatorWithPadding llama_tokenizer = AutoTokenizer.from_pretrained(llama_checkpoint, add_prefix_space=True) llama_tokenizer.pad_token_id = llama_tokenizer.eos_token_id llama_tokenizer.pad_token = llama_tokenizer.eos_token def llama_preprocessing_function(examples): return llama_tokenizer(examples['text'], truncation=True, max_length=MAX_LEN) llama_tokenized_datasets = data.map(llama_preprocessing_function, batched=True, remove_columns=col_to_delete) llama_tokenized_datasets = llama_tokenized_datasets.rename_column("target", "label") llama_tokenized_datasets.set_format("torch") # Data collator for padding a batch of examples to the maximum length seen in the batch llama_data_collator = DataCollatorWithPadding(tokenizer=llama_tokenizer) ``` Now that we have prepared the tokenized datasets, the next section will showcase how to load the pre-trained LLMs checkpoints and how to set the LoRa weights. ## Models ### RoBERTa #### Load RoBERTa Checkpoints for the Classification Task We load the pre-trained RoBERTa model with a sequence classification head using the Hugging Face `AutoModelForSequenceClassification` class: ```python from transformers import AutoModelForSequenceClassification roberta_model = AutoModelForSequenceClassification.from_pretrained(roberta_checkpoint, num_labels=2) ``` #### LoRA setup for RoBERTa classifier We import LoRa configuration and set some parameters for RoBERTa classifier: - TaskType: Sequence classification - r(rank): Rank for our decomposition matrices - lora_alpha: Alpha parameter to scale the learned weights. LoRA paper advises fixing alpha at 16 - lora_dropout: Dropout probability of the LoRA layers - bias: Whether to add bias term to LoRa layers The code below uses the values recommended by the [Lora paper](https://arxiv.org/abs/2106.09685). [Later in this post](#hyperparameter-tuning) we will perform hyperparameter tuning of these parameters using `wandb`. 
```python from peft import get_peft_model, LoraConfig, TaskType roberta_peft_config = LoraConfig( task_type=TaskType.SEQ_CLS, r=2, lora_alpha=16, lora_dropout=0.1, bias="none", ) roberta_model = get_peft_model(roberta_model, roberta_peft_config) roberta_model.print_trainable_parameters() ``` We can see that the number of trainable parameters represents only 0.64% of the RoBERTa model parameters: ```bash trainable params: 2,299,908 || all params: 356,610,052 || trainable%: 0.6449363911929212 ``` ### Mistral #### Load checkpoints for the classfication model Let's load the pre-trained Mistral-7B model with a sequence classification head: ```python from transformers import AutoModelForSequenceClassification import torch mistral_model = AutoModelForSequenceClassification.from_pretrained( pretrained_model_name_or_path=mistral_checkpoint, num_labels=2, device_map="auto" ) ``` For Mistral 7B, we have to add the padding token id as it is not defined by default. ```python mistral_model.config.pad_token_id = mistral_model.config.eos_token_id ``` #### LoRa setup for Mistral 7B classifier For Mistral 7B model, we need to specify the `target_modules` (the query and value vectors from the attention modules): ```python from peft import get_peft_model, LoraConfig, TaskType mistral_peft_config = LoraConfig( task_type=TaskType.SEQ_CLS, r=2, lora_alpha=16, lora_dropout=0.1, bias="none", target_modules=[ "q_proj", "v_proj", ], ) mistral_model = get_peft_model(mistral_model, mistral_peft_config) mistral_model.print_trainable_parameters() ``` The number of trainable parameters reprents only 0.024% of the Mistral model parameters: ``` trainable params: 1,720,320 || all params: 7,112,380,416 || trainable%: 0.02418768259540745 ``` ### Llama 2 #### Load checkpoints for the classfication mode Let's load pre-trained Llama 2 model with a sequence classification header. ```python from transformers import AutoModelForSequenceClassification import torch llama_model = AutoModelForSequenceClassification.from_pretrained( pretrained_model_name_or_path=llama_checkpoint, num_labels=2, device_map="auto", offload_folder="offload", trust_remote_code=True ) ``` For Llama 2, we have to add the padding token id as it is not defined by default. ```python llama_model.config.pad_token_id = llama_model.config.eos_token_id ``` #### LoRa setup for Llama 2 classifier We define LoRa for Llama 2 with the same parameters as for Mistral: ```python from peft import get_peft_model, LoraConfig, TaskType llama_peft_config = LoraConfig( task_type=TaskType.SEQ_CLS, r=16, lora_alpha=16, lora_dropout=0.05, bias="none", target_modules=[ "q_proj", "v_proj", ], ) llama_model = get_peft_model(llama_model, llama_peft_config) llama_model.print_trainable_parameters() ``` The number of trainable parameters reprents only 0.12% of the Llama 2 model parameters: ``` trainable params: 8,404,992 || all params: 6,615,748,608 || trainable%: 0.1270452143516515 ``` At this point, we defined the tokenized dataset for training as well as the LLMs setup with LoRa layers. The following section will introduce how to launch training using the HuggingFace `Trainer` class. 
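Before moving on, it can help to see why so few parameters end up trainable. Below is a conceptual sketch of a LoRA-adapted linear layer in plain PyTorch; it is only meant to illustrate the low-rank update described earlier, not to reproduce the actual `peft` implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: y = W0 x + (alpha / r) * B(A(x)), with W0 frozen."""
    def __init__(self, base_linear: nn.Linear, r: int = 2, lora_alpha: int = 16):
        super().__init__()
        self.base = base_linear
        # The pre-trained weights stay frozen; only A and B receive gradients
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.lora_A = nn.Linear(base_linear.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base_linear.out_features, bias=False)
        # The update starts at zero, so the adapted layer initially behaves like the base layer
        nn.init.zeros_(self.lora_B.weight)
        self.scaling = lora_alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# A 4096x4096 projection has ~16.8M frozen weights, but its rank-2 update adds
# only 2 * (4096 + 4096) = 16,384 trainable parameters.
layer = LoRALinear(nn.Linear(4096, 4096, bias=False), r=2, lora_alpha=16)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 16384
```

In `peft`, `get_peft_model` applies this kind of low-rank wrapping to the modules selected in the `LoraConfig` (for example, `q_proj` and `v_proj` above), which is where the small trainable-parameter percentages printed earlier come from.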
## Setup the trainer ### Evaluation Metrics First, we define the performance metrics we will use to compare the three models: F1 score, recall, precision and accuracy: ```python import evaluate import numpy as np def compute_metrics(eval_pred): # All metrics are already predefined in the HF `evaluate` package precision_metric = evaluate.load("precision") recall_metric = evaluate.load("recall") f1_metric= evaluate.load("f1") accuracy_metric = evaluate.load("accuracy") logits, labels = eval_pred # eval_pred is the tuple of predictions and labels returned by the model predictions = np.argmax(logits, axis=-1) precision = precision_metric.compute(predictions=predictions, references=labels)["precision"] recall = recall_metric.compute(predictions=predictions, references=labels)["recall"] f1 = f1_metric.compute(predictions=predictions, references=labels)["f1"] accuracy = accuracy_metric.compute(predictions=predictions, references=labels)["accuracy"] # The trainer is expecting a dictionary where the keys are the metrics names and the values are the scores. return {"precision": precision, "recall": recall, "f1-score": f1, 'accuracy': accuracy} ``` ### Custom Trainer for Weighted Loss As mentioned at the beginning of this post, we have an imbalanced distribution between positive and negative classes. We need to train our models with a weighted cross-entropy loss to account for that. The `Trainer` class doesn't support providing a custom loss as it expects to get the loss directly from the model's outputs. So, we need to define our custom `WeightedCELossTrainer` that overrides the `compute_loss` method to calculate the weighted cross-entropy loss based on the model's predictions and the input labels: ```python from transformers import Trainer class WeightedCELossTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): labels = inputs.pop("labels") # Get model's predictions outputs = model(**inputs) logits = outputs.get("logits") # Compute custom loss loss_fct = torch.nn.CrossEntropyLoss(weight=torch.tensor([neg_weights, pos_weights], device=model.device, dtype=logits.dtype)) loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1)) return (loss, outputs) if return_outputs else loss ``` ### Trainer Setup Let's set the training arguments and the trainer for the three models. #### RoBERTa First important step is to move the models to the GPU device for training. 
```python roberta_model = roberta_model.cuda() roberta_model.device() ``` It will print the following: ``` device(type='cuda', index=0) ``` Then, we set the training arguments: ```python from transformers import TrainingArguments lr = 1e-4 batch_size = 8 num_epochs = 5 training_args = TrainingArguments( output_dir="roberta-large-lora-token-classification", learning_rate=lr, lr_scheduler_type= "constant", warmup_ratio= 0.1, max_grad_norm= 0.3, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=num_epochs, weight_decay=0.001, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, report_to="wandb", fp16=False, gradient_checkpointing=True, ) ``` Finally, we define the RoBERTa trainer by providing the model, the training arguments and the tokenized datasets: ```python roberta_trainer = WeightedCELossTrainer( model=roberta_model, args=training_args, train_dataset=roberta_tokenized_datasets['train'], eval_dataset=roberta_tokenized_datasets["val"], data_collator=roberta_data_collator, compute_metrics=compute_metrics ) ``` #### Mistral-7B Similar to RoBERTa, we initialize the `WeightedCELossTrainer` as follows: ```python from transformers import TrainingArguments, Trainer mistral_model = mistral_model.cuda() lr = 1e-4 batch_size = 8 num_epochs = 5 training_args = TrainingArguments( output_dir="mistral-lora-token-classification", learning_rate=lr, lr_scheduler_type= "constant", warmup_ratio= 0.1, max_grad_norm= 0.3, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=num_epochs, weight_decay=0.001, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, report_to="wandb", fp16=True, gradient_checkpointing=True, ) mistral_trainer = WeightedCELossTrainer( model=mistral_model, args=training_args, train_dataset=mistral_tokenized_datasets['train'], eval_dataset=mistral_tokenized_datasets["val"], data_collator=mistral_data_collator, compute_metrics=compute_metrics ) ``` **Note** that we needed to enable half-precision training by setting `fp16` to `True`. The main reason is that Mistral-7B is large, and its weights cannot fit into one GPU memory (48GB) with full float32 precision. #### Llama 2 Similar to Mistral 7B, we define the trainer as follows: ```python from transformers import TrainingArguments, Trainer llama_model = llama_model.cuda() lr = 1e-4 batch_size = 8 num_epochs = 5 training_args = TrainingArguments( output_dir="llama-lora-token-classification", learning_rate=lr, lr_scheduler_type= "constant", warmup_ratio= 0.1, max_grad_norm= 0.3, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=num_epochs, weight_decay=0.001, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, report_to="wandb", fp16=True, gradient_checkpointing=True, ) llama_trainer = WeightedCELossTrainer( model=llama_model, args=training_args, train_dataset=llama_tokenized_datasets['train'], eval_dataset=llama_tokenized_datasets["val"], data_collator=llama_data_collator, compute_metrics=compute_metrics ) ``` ## Hyperparameter Tuning We have used Wandb Sweep API to run hyperparameter tunning with Bayesian search strategy (30 runs). The hyperparameters tuned are the following. 
The search uses the Bayesian (`bayes`) method and maximizes the `eval/f1-score` metric:

| Hyperparameter | Distribution | Values / Range         |
|----------------|--------------|------------------------|
| lora_alpha     | categorical  | 16, 32, 64             |
| lora_bias      | categorical  | None                   |
| lora_dropout   | uniform      | min: 0, max: 0.1       |
| lora_rank      | categorical  | 4, 8, 16, 32           |
| lr             | uniform      | min: 1e-05, max: 2e-04 |
| max_length     | categorical  | 512                    |

For more information, you can check the Wandb experiment report in the [resources section](#resources).

## Results

| Models     | F1 score | Training time | Memory consumption             | Number of trainable parameters |
|------------|----------|---------------|--------------------------------|--------------------------------|
| RoBERTa    | 0.8077   | 538 seconds   | GPU1: 9.1 GB<br>GPU2: 8.3 GB   | 0.64%                          |
| Mistral 7B | 0.7364   | 2030 seconds  | GPU1: 29.6 GB<br>GPU2: 29.5 GB | 0.024%                         |
| Llama 2    | 0.7638   | 2052 seconds  | GPU1: 35 GB<br>GPU2: 33.9 GB   | 0.12%                          |

## Conclusion

In this blog post, we compared the performance of three language models - RoBERTa, Mistral 7B, and Llama 2 - on disaster tweet classification using LoRA. The results show that RoBERTa outperforms Mistral 7B and Llama 2 by a large margin, which raises the question of whether we really need a large and complex LLM for tasks such as short-sequence binary classification. One lesson to draw from this study is that the choice of model should account for the specific project requirements, the available resources, and the performance needs. Also, for relatively *simple* prediction tasks with short sequences, base models such as RoBERTa remain competitive. Finally, we showed that the LoRA method can be applied to both encoder (RoBERTa) and decoder (Llama 2 and Mistral 7B) models.

## Resources

1. You can find the code script in the following [Github project](https://github.com/mehdiir/Roberta-Llama-Mistral/).
2. You can check the hyperparameter search results in the following Weights & Biases reports:
    - [RoBERTa](https://api.wandb.ai/links/mehdi-iraqui/505c22j1)
    - [Mistral 7B](https://api.wandb.ai/links/mehdi-iraqui/24vveyxp)
    - [Llama 2](https://api.wandb.ai/links/mehdi-iraqui/qq8beod0)
Introducing Prodigy-HF: a direct integration with Hugging Face
koaning
November 7, 2023
prodigy-hf
community, nlp, datasets, guide
https://huggingface.co/blog/prodigy-hf
# Introducing Prodigy-HF [Prodigy](https://prodi.gy/) is an annotation tool made by [Explosion](https://explosion.ai/), a company well known as the creators of [spaCy](https://spacy.io/). It's a fully scriptable product with a large community around it. The product has many features, including tight integration with spaCy and active learning capabilities. But the main feature of the product is that it is programmatically customizable with Python. To foster this customisability, Explosion has started releasing [plugins](https://prodi.gy/docs/plugins). These plugins integrate with third-party tools in an open way that encourages users to work on bespoke annotation workflows. However, one customization specifically deserves to be celebrated explicitly. Last week, Explosion introduced [Prodigy-HF](https://github.com/explosion/prodigy-hf), which offers code recipes that directly integrate with the Hugging Face stack. It's been a much-requested feature on the [Prodigy support forum](https://support.prodi.gy/), so we're super excited to have it out there. ## Features The first main feature is that this plugin allows you to train and re-use Hugging Face models on your annotated data. That means if you've been annotating data in our interface for named entity recognition, you can directly fine-tune BERT models against it. <figure> <div style="background-color: #eee; padding-top: 8px; padding-bottom: 8px;"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/prodigy-hf/interface.png" width="100%"> </div> <figcaption style="text-color: gray; margin-left: auto; margin-right: auto; text-align:center; padding-top: 8px;"><small>What the Prodigy NER interface looks like.</small></figcaption> </figure> After installing the plugin you can call the `hf.train.ner` recipe from the command line to train a transformer model directly on your own data. ``` python -m prodigy hf.train.ner fashion-train,eval:fashion-eval path/to/model-out --model "distilbert-base-uncased" ``` This will fine-tune the `distilbert-base-uncased` model for the dataset you've stored in Prodigy and save it to disk. Similarly, this plugin also supports models for text classification via a very similar interface. ``` python -m prodigy hf.train.textcat fashion-train,eval:fashion-eval path/to/model-out --model "distilbert-base-uncased" ``` This offers a lot of flexibility because the tool directly integrates with the `AutoTokenizer` and `AutoModel` classes of Hugging Face transformers. Any transformer model on the hub can be fine-tuned on your own dataset with just a single command. These models will be serialised on disk, which means that you can upload them to the Hugging Face Hub, or re-use them to help you annotate data. This can save a lot of time, especially for NER tasks. To re-use a trained NER model you can use the `hf.correct.ner` recipe. ``` python -m prodigy hf.correct.ner fashion-train path/to/model-out examples.jsonl ``` This will give you a similar interface as before, but now the model predictions will be shown in the interface as well. ### Upload The second feature, which is equally exciting, is that you can now also publish your annotated datasets on the Hugging Face Hub. This is great if you're interested in sharing datasets that others would like to use. ``` python -m prodigy hf.upload <dataset_name> <username>/<repo_name> ``` We're particularly fond of this upload feature because it encourages collaboration. 
People can annotate their own datasets independently of each other, but still benefit when they share the data with the wider community.

## More to come

We hope that this direct integration with the Hugging Face ecosystem enables many users to experiment more. The Hugging Face Hub offers _many_ [models](https://huggingface.co/models) for a wide array of tasks and languages. We really hope that this integration makes it easier to get data annotated, even if you've got a more domain-specific and experimental use-case.

More features for this library are on their way, so feel free to reach out on the [Prodigy forum](https://support.prodi.gy/) if you have more questions.

We'd also like to thank the team over at Hugging Face for their feedback on this plugin, specifically @davanstrien, who suggested adding the upload feature. Thanks!
Make your llama generation time fly with AWS Inferentia2
dacorvo
November 7, 2023
inferentia-llama2
guide, text-generation, llama2, aws
https://huggingface.co/blog/inferentia-llama2
# Make your llama generation time fly with AWS Inferentia2 In a [previous post on the Hugging Face blog](https://huggingface.co/blog/accelerate-transformers-with-inferentia2), we introduced [AWS Inferentia2](https://aws.amazon.com/ec2/instance-types/inf2/), the second-generation AWS Inferentia accelerator, and explained how you could use [optimum-neuron](https://huggingface.co/docs/optimum-neuron/index) to quickly deploy Hugging Face models for standard text and vision tasks on AWS Inferencia 2 instances. In a further step of integration with the [AWS Neuron SDK](https://github.com/aws-neuron/aws-neuron-sdk), it is now possible to use 🤗 [optimum-neuron](https://huggingface.co/docs/optimum-neuron/index) to deploy LLM models for text generation on AWS Inferentia2. And what better model could we choose for that demonstration than [Llama 2](https://huggingface.co/meta-llama/Llama-2-13b-hf), one of the most popular models on the [Hugging Face hub](https://huggingface.co/models). ## Setup 🤗 optimum-neuron on your Inferentia2 instance Our recommendation is to use the [Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) (DLAMI). The DLAMI comes with all required libraries pre-packaged for you, including the Optimum Neuron, Neuron Drivers, Transformers, Datasets, and Accelerate. Alternatively, you can use the [Hugging Face Neuron SDK DLC](https://github.com/aws/deep-learning-containers/releases?q=hf&expanded=true) to deploy on Amazon SageMaker. *Note: stay tuned for an upcoming post dedicated to SageMaker deployment.* Finally, these components can also be installed manually on a fresh Inferentia2 instance following the `optimum-neuron` [installation instructions](https://huggingface.co/docs/optimum-neuron/installation). ## Export the Llama 2 model to Neuron As explained in the [optimum-neuron documentation](https://huggingface.co/docs/optimum-neuron/guides/export_model#why-compile-to-neuron-model), models need to be compiled and exported to a serialized format before running them on Neuron devices. Fortunately, 🤗 `optimum-neuron` offers a [very simple API](https://huggingface.co/docs/optimum-neuron/guides/models#configuring-the-export-of-a-generative-model) to export standard 🤗 [transformers models](https://huggingface.co/docs/transformers/index) to the Neuron format. ``` >>> from optimum.neuron import NeuronModelForCausalLM >>> compiler_args = {"num_cores": 24, "auto_cast_type": 'fp16'} >>> input_shapes = {"batch_size": 1, "sequence_length": 2048} >>> model = NeuronModelForCausalLM.from_pretrained( "meta-llama/Llama-2-7b-hf", export=True, **compiler_args, **input_shapes) ``` This deserves a little explanation: - using `compiler_args`, we specify on how many cores we want the model to be deployed (each neuron device has two cores), and with which precision (here `float16`), - using `input_shape`, we set the static input and output dimensions of the model. All model compilers require static shapes, and neuron makes no exception. Note that the `sequence_length` not only constrains the length of the input context, but also the length of the KV cache, and thus, the output length. Depending on your choice of parameters and inferentia host, this may take from a few minutes to more than an hour. Fortunately, you will need to do this only once because you can save your model and reload it later. ``` >>> model.save_pretrained("a_local_path_for_compiled_neuron_model") ``` Even better, you can push it to the [Hugging Face hub](https://huggingface.co/models). 
``` >>> model.push_to_hub( "a_local_path_for_compiled_neuron_model", repository_id="aws-neuron/Llama-2-7b-hf-neuron-latency") ``` ## Generate Text using Llama 2 on AWS Inferentia2 Once your model has been exported, you can generate text using the transformers library, as it has been described in [detail in this previous post](https://huggingface.co/blog/how-to-generate). ``` >>> from optimum.neuron import NeuronModelForCausalLM >>> from transformers import AutoTokenizer >>> model = NeuronModelForCausalLM.from_pretrained('aws-neuron/Llama-2-7b-hf-neuron-latency') >>> tokenizer = AutoTokenizer.from_pretrained("aws-neuron/Llama-2-7b-hf-neuron-latency") >>> inputs = tokenizer("What is deep-learning ?", return_tensors="pt") >>> outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.9, top_k=50, top_p=0.9) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['What is deep-learning ?\nThe term “deep-learning” refers to a type of machine-learning that aims to model high-level abstractions of the data in the form of a hierarchy of multiple layers of increasingly complex processing nodes.'] ``` *Note: when passing multiple input prompts to a model, the resulting token sequences must be padded to the left with an end-of-stream token. The tokenizers saved with the exported models are configured accordingly.* The following generation strategies are supported: - greedy search, - multinomial sampling with top-k and top-p (with temperature). Most logits pre-processing/filters (such as repetition penalty) are supported. ## All-in-one with optimum-neuron pipelines For those who like to keep it simple, there is an even simpler way to use an LLM model on AWS inferentia 2 using [optimum-neuron pipelines](https://huggingface.co/docs/optimum-neuron/guides/pipelines). Using them is as simple as: ``` >>> from optimum.neuron import pipeline >>> p = pipeline('text-generation', 'aws-neuron/Llama-2-7b-hf-neuron-budget') >>> p("My favorite place on earth is", max_new_tokens=64, do_sample=True, top_k=50) [{'generated_text': 'My favorite place on earth is the ocean. It is where I feel most at peace. I love to travel and see new places. I have a'}] ``` ## Benchmarks But how much efficient is text-generation on Inferentia2? Let's figure out! We have uploaded on the hub pre-compiled versions of the LLama 2 7B and 13B models with different configurations: | Model type | num cores | batch_size | Hugging Face Hub model | |----------------------------|-----------|------------|-------------------------------------------| | Llama2 7B - B (budget) | 2 | 1 |[aws-neuron/Llama-2-7b-hf-neuron-budget](https://huggingface.co/aws-neuron/Llama-2-7b-hf-neuron-budget) | | Llama2 7B - L (latency) | 24 | 1 |[aws-neuron/Llama-2-7b-hf-neuron-latency](https://huggingface.co/aws-neuron/Llama-2-7b-hf-neuron-latency) | | Llama2 7B - T (throughput) | 24 | 4 |[aws-neuron/Llama-2-7b-hf-neuron-throughput](https://huggingface.co/aws-neuron/Llama-2-7b-hf-neuron-throughput) | | Llama2 13B - L (latency) | 24 | 1 |[aws-neuron/Llama-2-13b-hf-neuron-latency](https://huggingface.co/aws-neuron/Llama-2-13b-hf-neuron-latency) | | Llama2 13B - T (throughput)| 24 | 4 |[aws-neuron/Llama-2-13b-hf-neuron-throughput](https://huggingface.co/aws-neuron/Llama-2-13b-hf-neuron-throughput)| *Note: all models are compiled with a maximum sequence length of 2048.* The `llama2 7B` "budget" model is meant to be deployed on `inf2.xlarge` instance that has only one neuron device, and enough `cpu` memory to load the model. 
All other models are compiled to use the full extent of cores available on the `inf2.48xlarge` instance.

*Note: please refer to the [inferentia2 product page](https://aws.amazon.com/ec2/instance-types/inf2/) for details on the available instances.*

We created two "latency"-oriented configurations for the `llama2 7B` and `llama2 13B` models that can serve only one request at a time, but at full speed.

We also created two "throughput"-oriented configurations to serve up to four requests in parallel.

To evaluate the models, we generate tokens up to a total sequence length of 1024, starting from 256 input tokens (i.e. we generate 256, 512 and 768 tokens).

*Note: the "budget" model numbers are reported but not included in the graphs for better readability.*

### Encoding time

The encoding time is the time required to process the input tokens and generate the first output token. It is a very important metric, as it corresponds to the latency directly perceived by the user when streaming generated tokens.

We test the encoding time for increasing context sizes: 256 input tokens correspond roughly to a typical Q/A usage, while 768 is more typical of a Retrieval Augmented Generation (RAG) use-case.

The "budget" model (`Llama2 7B-B`) is deployed on an `inf2.xlarge` instance while other models are deployed on an `inf2.48xlarge` instance.

Encoding time is expressed in **seconds**.

| input tokens | Llama2 7B-L | Llama2 7B-T | Llama2 13B-L | Llama2 13B-T | Llama2 7B-B |
|--------------|-------------|-------------|--------------|--------------|-------------|
| 256          | 0.5         | 0.9         | 0.6          | 1.8          | 0.3         |
| 512          | 0.7         | 1.6         | 1.1          | 3.0          | 0.4         |
| 768          | 1.1         | 3.3         | 1.7          | 5.2          | 0.5         |

![Llama2 inferentia2 encoding-time](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/169_inferentia-llama2/encoding-time.png "Encoding time")

We can see that all deployed models exhibit excellent response times, even for long contexts.

### End-to-end Latency

The end-to-end latency corresponds to the total time to reach a sequence length of 1024 tokens. It therefore includes the encoding and generation time.

The "budget" model (`Llama2 7B-B`) is deployed on an `inf2.xlarge` instance while other models are deployed on an `inf2.48xlarge` instance.

Latency is expressed in **seconds**.

| new tokens | Llama2 7B-L | Llama2 7B-T | Llama2 13B-L | Llama2 13B-T | Llama2 7B-B |
|------------|-------------|-------------|--------------|--------------|-------------|
| 256        | 2.3         | 2.7         | 3.5          | 4.1          | 15.9        |
| 512        | 4.4         | 5.3         | 6.9          | 7.8          | 31.7        |
| 768        | 6.2         | 7.7         | 10.2         | 11.1         | 47.3        |

![Llama2 inferentia2 end-to-end latency](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/169_inferentia-llama2/latency.png "Latency")

All models deployed on the high-end instance exhibit a good latency, even those actually configured to optimize throughput. The "budget" deployed model latency is significantly higher, but still acceptable.

### Throughput

We adopt the same convention as other benchmarks to evaluate the throughput: we divide the total number of tokens in the sequence (input and output) by the end-to-end latency. In other words, the throughput is `batch_size * sequence_length` divided by the end-to-end latency, expressed in tokens per second. For example, the latency-optimized 7B model reaches a total sequence length of 512 tokens (256 input tokens plus 256 new tokens) in about 2.3 seconds, which corresponds to roughly 220 tokens per second, in line with the table below.

The "budget" model (`Llama2 7B-B`) is deployed on an `inf2.xlarge` instance while other models are deployed on an `inf2.48xlarge` instance.

Throughput is expressed in **tokens/second**.
| new tokens | Llama2 7B-L | Llama2 7B-T | Llama2 13B-L | Llama2 13B-T | Llama2 7B-B |
|------------|-------------|-------------|--------------|--------------|-------------|
| 256        | 227         | 750         | 145          | 504          | 32          |
| 512        | 177         | 579         | 111          | 394          | 24          |
| 768        | 164         | 529         | 101          | 370          | 22          |

![Llama2 inferentia2 throughput](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/169_inferentia-llama2/throughput.png "Throughput")

Again, the models deployed on the high-end instance have a very good throughput, even those optimized for latency. The "budget" model has a much lower throughput, but it is still sufficient for a streaming use-case, considering that an average reader reads around 5 words per second.

## Conclusion

We have illustrated how easy it is to deploy `llama2` models from the [Hugging Face hub](https://huggingface.co/models) on [AWS Inferentia2](https://aws.amazon.com/ec2/instance-types/inf2/) using 🤗 [optimum-neuron](https://huggingface.co/docs/optimum-neuron/index).

The deployed models demonstrate very good performance in terms of encoding time, latency and throughput.

Interestingly, the deployed models' latency is not too sensitive to the batch size, which opens the way for their deployment on inference endpoints serving multiple requests in parallel.

There is still plenty of room for improvement though:

- in the current implementation, the only way to increase the throughput is to increase the batch size, but it is currently limited by the device memory. Alternative options, such as pipelining, are currently being integrated,
- the static sequence length limits the model's ability to encode long contexts. It would be interesting to see if attention sinks might be a valid option to address this.
SDXL in 4 steps with Latent Consistency LoRAs
pcuenq
November 9, 2023
lcm_lora
sdxl, lcm, stable diffusion, guide
https://huggingface.co/blog/lcm_lora
# SDXL in 4 steps with Latent Consistency LoRAs [Latent Consistency Models (LCM)](https://huggingface.co/papers/2310.04378) are a way to decrease the number of steps required to generate an image with Stable Diffusion (or SDXL) by _distilling_ the original model into another version that requires fewer steps (4 to 8 instead of the original 25 to 50). Distillation is a type of training procedure that attempts to replicate the outputs from a source model using a new one. The distilled model may be designed to be smaller (that’s the case of [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert) or the recently-released [Distil-Whisper](https://github.com/huggingface/distil-whisper)) or, in this case, require fewer steps to run. It’s usually a lengthy and costly process that requires huge amounts of data, patience, and a few GPUs. Well, that was the status quo before today! We are delighted to announce a new method that can essentially make Stable Diffusion and SDXL faster, as if they had been distilled using the LCM process! How does it sound to run _any_ SDXL model in about 1 second instead of 7 on a 3090, or 10x faster on Mac? Read on for details! ## Contents - [Method Overview](#method-overview) - [Why does this matter](#why-does-this-matter) - [Fast Inference with SDXL LCM LoRAs](#fast-inference-with-sdxl-lcm-loras) - [Quality Comparison](#quality-comparison) - [Guidance Scale and Negative Prompts](#guidance-scale-and-negative-prompts) - [Quality vs base SDXL](#quality-vs-base-sdxl) - [LCM LoRAs with other Models](#lcm-loras-with-other-models) - [Full Diffusers Integration](#full-diffusers-integration) - [Benchmarks](#benchmarks) - [LCM LoRAs and Models Released Today](#lcm-loras-and-models-released-today) - [Bonus: Combine LCM LoRAs with regular SDXL LoRAs](#bonus-combine-lcm-loras-with-regular-sdxl-loras) - [How to train LCM LoRAs](#how-to-train-lcm-loras) - [Resources](#resources) - [Credits](#credits) ## Method Overview So, what’s the trick? For latent consistency distillation, each model needs to be distilled separately. The core idea with LCM LoRA is to train just a small number of adapters, [known as LoRA layers](https://huggingface.co/docs/peft/conceptual_guides/lora), instead of the full model. The resulting LoRAs can then be applied to any fine-tuned version of the model without having to distil them separately. If you are itching to see how this looks in practice, just jump to the [next section](#fast-inference-with-sdxl-lcm-loras) to play with the inference code. If you want to train your own LoRAs, this is the process you’d use: 1. Select an available teacher model from the Hub. For example, you can use [SDXL (base)](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), or any fine-tuned or dreamboothed version you like. 2. [Train a LCM LoRA](#how-to-train-lcm-models-and-loras) on the model. LoRA is a type of performance-efficient fine-tuning, or PEFT, that is much cheaper to accomplish than full model fine-tuning. For additional details on PEFT, please check [this blog post](https://huggingface.co/blog/peft) or [the diffusers LoRA documentation](https://huggingface.co/docs/diffusers/training/lora). 3. Use the LoRA with any SDXL diffusion model and the LCM scheduler; bingo! You get high-quality inference in just a few steps. For more details on the process, please [download our paper](https://huggingface.co/latent-consistency/lcm-lora-sdxl/resolve/main/LCM-LoRA-Technical-Report.pdf). ## Why does this matter? 
Fast inference of Stable Diffusion and SDXL enables new use-cases and workflows. To name a few: - **Accessibility**: generative tools can be used effectively by more people, even if they don’t have access to the latest hardware. - **Faster iteration**: get more images and multiple variants in a fraction of the time! This is great for artists and researchers; whether for personal or commercial use. - Production workloads may be possible on different accelerators, including CPUs. - Cheaper image generation services. To gauge the speed difference we are talking about, generating a single 1024x1024 image on an M1 Mac with SDXL (base) takes about a minute. Using the LCM LoRA, we get great results in just ~6s (4 steps). This is an order of magnitude faster, and not having to wait for results is a game-changer. Using a 4090, we get almost instant response (less than 1s). This unlocks the use of SDXL in applications where real-time events are a requirement. ## Fast Inference with SDXL LCM LoRAs The version of `diffusers` released today makes it very easy to use LCM LoRAs: ```py from diffusers import DiffusionPipeline, LCMScheduler import torch model_id = "stabilityai/stable-diffusion-xl-base-1.0" lcm_lora_id = "latent-consistency/lcm-lora-sdxl" pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16") pipe.load_lora_weights(lcm_lora_id) pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) pipe.to(device="cuda", dtype=torch.float16) prompt = "close-up photography of old man standing in the rain at night, in a street lit by lamps, leica 35mm summilux" images = pipe( prompt=prompt, num_inference_steps=4, guidance_scale=1, ).images[0] ``` Note how the code: - Instantiates a standard diffusion pipeline with the SDXL 1.0 base model. - Applies the LCM LoRA. - Changes the scheduler to the LCMScheduler, which is the one used in latent consistency models. - That’s it! This would result in the following full-resolution image: <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/lcm-lora/lcm-1.jpg?download=true" alt="SDXL in 4 steps with LCM LoRA"><br> <em>Image generated with SDXL in 4 steps using an LCM LoRA.</em> </p> ### Quality Comparison Let’s see how the number of steps impacts generation quality. The following code will generate images with 1 to 8 total inference steps: ```py images = [] for steps in range(8): generator = torch.Generator(device=pipe.device).manual_seed(1337) image = pipe( prompt=prompt, num_inference_steps=steps+1, guidance_scale=1, generator=generator, ).images[0] images.append(image) ``` These are the 8 images displayed in a grid: <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/lcm-lora/lcm-grid.jpg?download=true" alt="LCM LoRA generations with 1 to 8 steps"><br> <em>LCM LoRA generations with 1 to 8 steps.</em> </p> As expected, using just **1** step produces an approximate shape without discernible features and lacking texture. However, results quickly improve, and they are usually very satisfactory in just 4 to 6 steps. Personally, I find the 8-step image in the previous test to be a bit too saturated and “cartoony” for my taste, so I’d probably choose between the ones with 5 and 6 steps in this example. Generation is so fast that you can create a bunch of different variants using just 4 steps, and then select the ones you like and iterate using a couple more steps and refined prompts as necessary. 
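To view those results side by side, as in the grid above, a small Pillow helper is enough (a sketch of our own; it is not part of the snippet above and assumes all images share the same size):

```py
from PIL import Image

def image_grid(imgs, rows, cols):
    """Paste equally sized PIL images into a rows x cols grid."""
    w, h = imgs[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid

# The 8 images generated in the loop above, arranged as 2 rows of 4
image_grid(images, rows=2, cols=4)
```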
### Guidance Scale and Negative Prompts

Note that in the previous examples we used a `guidance_scale` of `1`, which effectively disables it. This works well for most prompts, and it’s fastest, but ignores negative prompts. You can also explore using negative prompts by providing a guidance scale between `1` and `2` – we found that larger values don’t work.

### Quality vs base SDXL

How does this compare against the standard SDXL pipeline, in terms of quality? Let’s see an example! We can quickly revert our pipeline to a standard SDXL pipeline by unloading the LoRA weights and switching to the default scheduler:

```py
from diffusers import EulerDiscreteScheduler

pipe.unload_lora_weights()
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```

Then we can run inference as usual for SDXL. We’ll gather results using a varying number of steps:

```py
images = []
for steps in (1, 4, 8, 15, 20, 25, 30, 50):
    generator = torch.Generator(device=pipe.device).manual_seed(1337)
    image = pipe(
        prompt=prompt,
        num_inference_steps=steps,
        generator=generator,
    ).images[0]
    images.append(image)
```

<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/lcm-lora/lcm-sdxl-grid.jpg?download=true" alt="SDXL results for various inference steps"><br>
    <em>SDXL pipeline results (same prompt and random seed), using 1, 4, 8, 15, 20, 25, 30, and 50 steps.</em>
</p>

As you can see, images in this example are pretty much useless until ~20 steps (second row), and quality still increases noticeably with more steps. The details in the final image are amazing, but it took 50 steps to get there.

### LCM LoRAs with other models

This technique also works for any other fine-tuned SDXL or Stable Diffusion model. To demonstrate, let's see how to run inference on [`collage-diffusion`](https://huggingface.co/wavymulder/collage-diffusion), a model fine-tuned from [Stable Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) using Dreambooth.

The code is similar to the one we saw in the previous examples. We load the fine-tuned model, and then the LCM LoRA suitable for Stable Diffusion v1.5.

```py
from diffusers import DiffusionPipeline, LCMScheduler
import torch

model_id = "wavymulder/collage-diffusion"
lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5"

pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(lcm_lora_id)
pipe.to(device="cuda", dtype=torch.float16)

prompt = "collage style kid sits looking at the night sky, full of stars"

generator = torch.Generator(device=pipe.device).manual_seed(1337)
images = pipe(
    prompt=prompt,
    generator=generator,
    num_inference_steps=4,
    guidance_scale=1,
).images[0]
images
```

<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/lcm-lora/collage.png?download=true" alt="LCM LoRA technique with a Dreambooth Stable Diffusion v1.5 model, allowing 4-step inference."><br>
    <em>LCM LoRA technique with a Dreambooth Stable Diffusion v1.5 model, allowing 4-step inference.</em>
</p>

### Full Diffusers Integration

The integration of LCM in `diffusers` makes it possible to take advantage of many features and workflows that are part of the diffusers toolbox. For example:

- Out of the box `mps` support for Macs with Apple Silicon.
- Memory and performance optimizations like flash attention or `torch.compile()`.
- Additional memory saving strategies for low-RAM environments, including model offload. - Workflows like ControlNet or image-to-image. - Training and fine-tuning scripts. ## Benchmarks This section is not meant to be exhaustive, but illustrative of the generation speed we achieve on various computers. Let us stress again how liberating it is to explore image generation so easily. | Hardware | SDXL LoRA LCM (4 steps) | SDXL standard (25 steps) | |----------------------------------------|-------------------------|--------------------------| | Mac, M1 Max | 6.5s | 64s | | 2080 Ti | 4.7s | 10.2s | | 3090 | 1.4s | 7s | | 4090 | 0.7s | 3.4s | | T4 (Google Colab Free Tier) | 8.4s | 26.5s | | A100 (80 GB) | 1.2s | 3.8s | | Intel i9-10980XE CPU (1/36 cores used) | 29s | 219s | These tests were run with a batch size of 1 in all cases, using [this script](https://huggingface.co/datasets/pcuenq/gists/blob/main/sayak_lcm_benchmark.py) by [Sayak Paul](https://huggingface.co/sayakpaul). For cards with a lot of capacity, such as A100, performance increases significantly when generating multiple images at once, which is usually the case for production workloads. ## LCM LoRAs and Models Released Today - [Latent Consistency Models LoRAs Collection](https://huggingface.co/collections/latent-consistency/latent-consistency-models-loras-654cdd24e111e16f0865fba6) - [`latent-consistency/lcm-lora-sdxl`](https://huggingface.co/latent-consistency/lcm-lora-sdxl). LCM LoRA for [SDXL 1.0 base](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), as seen in the examples above. - [`latent-consistency/lcm-lora-sdv1-5`](https://huggingface.co/latent-consistency/lcm-lora-sdv1-5). LCM LoRA for [Stable Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5). - [`latent-consistency/lcm-lora-ssd-1b`](https://huggingface.co/latent-consistency/lcm-lora-ssd-1b). LCM LoRA for [`segmind/SSD-1B`](https://huggingface.co/segmind/SSD-1B), a distilled SDXL model that's 50% smaller and 60% faster than the original SDXL. - [`latent-consistency/lcm-sdxl`](https://huggingface.co/latent-consistency/lcm-sdxl). Full fine-tuned consistency model derived from [SDXL 1.0 base](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). - [`latent-consistency/lcm-ssd-1b`](https://huggingface.co/latent-consistency/lcm-ssd-1b). Full fine-tuned consistency model derived from [`segmind/SSD-1B`](https://huggingface.co/segmind/SSD-1B). ## Bonus: Combine LCM LoRAs with regular SDXL LoRAs Using the [diffusers + PEFT integration](https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference), you can combine LCM LoRAs with regular SDXL LoRAs, giving them the superpower to run LCM inference in only 4 steps. 
Here we are going to combine `CiroN2022/toy_face` LoRA with the LCM LoRA: ```py from diffusers import DiffusionPipeline, LCMScheduler import torch model_id = "stabilityai/stable-diffusion-xl-base-1.0" lcm_lora_id = "latent-consistency/lcm-lora-sdxl" pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16") pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) pipe.load_lora_weights(lcm_lora_id) pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") pipe.set_adapters(["lora", "toy"], adapter_weights=[1.0, 0.8]) pipe.to(device="cuda", dtype=torch.float16) prompt = "a toy_face man" negative_prompt = "blurry, low quality, render, 3D, oversaturated" images = pipe( prompt=prompt, negative_prompt=negative_prompt, num_inference_steps=4, guidance_scale=0.5, ).images[0] images ``` <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/lcm-lora/lcm-toy.png?download=true" alt="Combining LoRAs for fast inference"><br> <em>Standard and LCM LoRAs combined for fast (4 step) inference.</em> </p> Need ideas to explore some LoRAs? Check out our experimental [LoRA the Explorer (LCM version)](https://huggingface.co/spaces/latent-consistency/lcm-LoraTheExplorer) Space to test amazing creations by the community and get inspired! ## How to Train LCM Models and LoRAs As part of the `diffusers` release today, we are providing training and fine-tuning scripts developed in collaboration with the LCM team authors. They allow users to: - Perform full-model distillation of Stable Diffusion or SDXL models on large datasets such as Laion. - Train LCM LoRAs, which is a much easier process. As we've shown in this post, it also makes it possible to run fast inference with Stable Diffusion, without having to go through distillation training. For more details, please check the instructions for [SDXL](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/README_sdxl.md) or [Stable Diffusion](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/README.md) in the repo. We hope these scripts inspire the community to try their own fine-tunes. Please, do let us know if you use them for your projects! ## Resources - Latent Consistency Models [project page](https://latent-consistency-models.github.io), [paper](https://huggingface.co/papers/2310.04378). - [LCM LoRAs](https://huggingface.co/collections/latent-consistency/latent-consistency-models-loras-654cdd24e111e16f0865fba6) - [For SDXL](https://huggingface.co/latent-consistency/lcm-lora-sdxl). - [For Stable Diffusion v1.5](https://huggingface.co/latent-consistency/lcm-lora-sdv1-5). - [For Segmind's SSD-1B](https://huggingface.co/latent-consistency/lcm-lora-ssd-1b). - [Technical Report](https://huggingface.co/latent-consistency/lcm-lora-sdxl/resolve/main/LCM-LoRA-Technical-Report.pdf). 
- Demos - [SDXL in 4 steps with Latent Consistency LoRAs](https://huggingface.co/spaces/latent-consistency/lcm-lora-for-sdxl) - [Near real-time video stream](https://huggingface.co/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5) - [LoRA the Explorer (experimental LCM version)](https://huggingface.co/spaces/latent-consistency/lcm-LoraTheExplorer) - PEFT: [intro](https://huggingface.co/blog/peft), [repo](https://github.com/huggingface/peft) - Training scripts - [For Stable Diffusion 1.5](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/README.md) - [For SDXL](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/README_sdxl.md) ## Credits The amazing work on Latent Consistency Models was performed by the [LCM Team](https://latent-consistency-models.github.io), please make sure to check out their code, report and paper. This project is a collaboration between the [diffusers team](https://github.com/huggingface/diffusers), the LCM team, and community contributor [Daniel Gu](https://huggingface.co/dg845). We believe it's a testament to the enabling power of open source AI, the cornerstone that allows researchers, practitioners and tinkerers to explore new ideas and collaborate. We'd also like to thank [`@madebyollin`](https://huggingface.co/madebyollin) for their continued contributions to the community, including the `float16` autoencoder we use in our training scripts.
Open LLM Leaderboard: DROP deep dive
clefourrier
December 1, 2023
leaderboard-drop-dive
community, research, nlp, evaluation, leaderboard
https://huggingface.co/blog/leaderboard-drop-dive
# Open LLM Leaderboard: DROP deep dive Recently, [three new benchmarks](https://twitter.com/clefourrier/status/1722555555338956840) were added to the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard): Winogrande, GSM8k and DROP, using the original implementations reproduced in the [EleutherAI Harness](https://github.com/EleutherAI/lm-evaluation-harness/). A cursory look at the scores for DROP revealed something strange was going on, with the overwhelming majority of models scoring less than 10 out of 100 on their f1-score! We did a deep dive to understand what was going on, come with us to see what we found out! ## Initial observations DROP (Discrete Reasoning Over Paragraphs) is an evaluation where models must extract relevant information from English-text paragraphs before executing discrete reasoning steps on them (for example, sorting or counting items to arrive at the correct answer, see the table below for examples). The metrics used are custom f1 and exact match scores. <div align="center"> <figure class="image table text-center m-0 w-full"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-llm-leaderboard/drop/drop_example.png" width="500" /> <figcaption>Examples of reasoning and paragraph from the original article.</figcaption> </figure> </div> We added it to the Open LLM Leaderboard three weeks ago, and observed that the f1-scores of pretrained models followed an unexpected trend: when we plotted DROP scores against the leaderboard original average (of ARC, HellaSwag, TruthfulQA and MMLU), which is a reasonable proxy for overall model performance, we expected DROP scores to be correlated with it (with better models having better performance). However, this was only the case for a small number of models, and all the others had a very low DROP f1-score, below 10. <div align="center"> <figure class="image table text-center m-0 w-full"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-llm-leaderboard/drop/drop_bimodal.png" width="500" /> <figcaption>Two trends can be observed in the DROP scores: some follow the average (in diagonal), others are stuck around 5 (vertical line on the right of the graph).</figcaption> </figure> </div> ## Normalization interrogations During our first deeper dive in these surprising behavior, we observed that the normalization step was possibly not working as intended: in some cases, this normalization ignored the correct numerical answers when they were directly followed by a whitespace character other than a space (a line return, for example). Let's look at an example, with the generation being `10\n\nPassage: The 2011 census recorded a population of 1,001,360`, and the gold answer being `10`. Normalization happens in several steps, both for generation and gold: 1) **Split on separators** `|`, `-`, or ` ` The beginning sequence of the generation `10\n\nPassage:` contain no such separator, and is therefore considered a single entity after this step. 2) **Punctuation removal** The first token then becomes `10\n\nPassage` (`:` is removed) 3) **Homogenization of numbers** Every string that can be cast to float is considered a number and cast to float, then re-converted to string. `10\n\nPassage` stays the same, as it cannot be cast to float, whereas the gold `10` becomes `10.0`. 4) **Other steps** A lot of other normalization steps ensue (removing articles, removing other whitespaces, etc.) 
After all these steps, our original example becomes `10 passage 2011.0 census recorded population of 1001360.0`. However, the overall score is not computed on the string, but on the bag of words (BOW) extracted from the string, here `{'recorded', 'population', 'passage', 'census', '2011.0', '1001360.0', '10'}`, which is compared with the BOW of the gold, also normalized in the above manner, `{10.0}`. As you can see, they don't intersect, even though the model predicted the correct output!

In summary, if a number is followed by any kind of whitespace other than a simple space, it will not pass through the number normalization, and hence will never match the gold if it is also a number! This first issue was likely to mess up the scores quite a bit, but clearly it was not the only factor causing DROP scores to be so low. We decided to investigate a bit more.

## Diving into the results

Extending our investigations, our friends at [Zeno](https://zenoml.com) joined us and [undertook a much more thorough exploration](https://hub.zenoml.com/report/1255/DROP%20Benchmark%20Exploration) of the results, looking at 5 models which were representative of the problems we noticed in DROP scores: falcon-180B and mistral-7B were underperforming compared to what we were expecting, Yi-34B and tigerbot-70B had a very good performance on DROP correlated with their average scores, and facebook/xglm-7.5B fell in the middle. You can try analyzing the results yourself [in the Zeno project here](https://hub.zenoml.com/project/2f5dec90-df5e-4e3e-a4d1-37faf814c5ae/OpenLLM%20Leaderboard%20DROP%20Comparison/explore?params=eyJtb2RlbCI6ImZhY2Vib29rX194Z2xtLTcuNUIiLCJtZXRyaWMiOnsiaWQiOjk1NjUsIm5hbWUiOiJmMSIsInR5cGUiOiJtZWFuIiwiY29sdW1ucyI6WyJmMSJdfSwiY29tcGFyaXNvbk1vZGVsIjoiVGlnZXJSZXNlYXJjaF9fdGlnZXJib3QtNzBiLWNoYXQiLCJjb21wYXJpc29uQ29sdW1uIjp7ImlkIjoiYzJmNTY1Y2EtYjJjZC00MDkwLWIwYzctYTNiNTNkZmViM2RiIiwibmFtZSI6ImVtIiwiY29sdW1uVHlwZSI6IkZFQVRVUkUiLCJkYXRhVHlwZSI6IkNPTlRJTlVPVVMiLCJtb2RlbCI6ImZhY2Vib29rX194Z2xtLTcuNUIifSwiY29tcGFyZVNvcnQiOltudWxsLHRydWVdLCJtZXRyaWNSYW5nZSI6W251bGwsbnVsbF0sInNlbGVjdGlvbnMiOnsic2xpY2VzIjpbXSwibWV0YWRhdGEiOnt9LCJ0YWdzIjpbXX19) if you want to!

The Zeno team found two even more concerning features:

1) Not a single model got a correct result on floating point answers.
2) High quality models which generate long answers actually have a lower f1-score.

At this point, we believed that both failure cases were actually caused by the same root factor: using `.` as a stopword token (to end the generations):

1) Floating point answers are systematically interrupted before their generation is complete.
2) Higher quality models, which try to match the few-shot prompt format, will generate `Answer\n\nPlausible prompt for the next question.`, and only stop during the plausible prompt continuation after the actual answer on the first `.`, therefore generating too many words and getting a bad f1 score.

We hypothesized that both these problems could be fixed by using `\n` instead of `.` as an end of generation stop word.

## Changing the end of generation token

So we gave it a try! We investigated using `\n` as the end of generation token on the available results. We split the generated answer on the first `\n` it contained, if one was present, and recomputed the scores.

*Note that this is only an approximation of the correct result, as it won't fix answers that were cut too early on `.` (for example floating point answers) - but it also won't give unfair advantage to any model, as all of them were affected by this problem.
However, it's the best we could do without rerunning models (as we wanted to keep the community posted as soon as possible).*

The results we got were the following: splitting on `\n` correlates really well with the other scores, and therefore with overall performance.

<div align="center">
<figure class="image table text-center m-0 w-full">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-llm-leaderboard/drop/drop_partial_fix.png" width="500" />
  <figcaption>We can see in orange that the scores computed on the new strings correlate much better with the average performance.</figcaption>
</figure>
</div>

## So what's next?

A quick calculation shows that re-running the full evaluation of all models would be quite costly (the full update took 8 years of GPU time, and a lot of it was taken by DROP), so we estimated how much it would cost to only re-run the failing examples.

In 10% of the cases, the gold answer is a floating point number (for example `12.25`) and model predictions start with the correct beginning (for our example, `12`) but are cut off on a `.` - these predictions likely would have actually been correct if the generation had continued. We would definitely need to re-run them! Our estimation does not count generated sentences that finish with a number which was possibly interrupted (40% of the other generations), nor any prediction messed up by its normalization.

To get correct results, we would thus need to re-run more than 50% of the examples, a huge amount of GPU time! We need to be certain that the implementation we'll run is correct this time.

After discussing it with the fantastic EleutherAI team (both on [GitHub](https://github.com/EleutherAI/lm-evaluation-harness/issues/978) and internally), who guided us through the code and helped our investigations, it became very clear that the LM Eval Harness implementation follows the "official DROP" code very strictly: a new version of this benchmark's evaluation thus needs to be developed! **We have therefore taken the decision to remove DROP from the Open LLM Leaderboard until a new version arises.**

One takeaway of this investigation is the value of having the many eyes of the community collaboratively investigate a benchmark in order to detect errors that were previously missed. Here again, the power of open source, community, and developing in the open shines, in that it allows us to transparently investigate the root cause of an issue on a benchmark which has been out there for a couple of years.

We hope that interested members of the community will join forces with academics working on DROP evaluation to fix both its scoring and its normalization. We'd love for it to become usable again, as the dataset itself is really quite interesting and cool. We encourage you to provide feedback on how we should evaluate DROP [on this issue](https://github.com/EleutherAI/lm-evaluation-harness/issues/1050).

Thanks to the many community members who pointed out issues on DROP scores, and many thanks to the EleutherAI Harness and Zeno teams for their great help on this issue.
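As a small postscript, here is a minimal, self-contained sketch that makes the normalization pitfall discussed above concrete. It is **not** the actual DROP or LM Eval Harness implementation (the real code also removes articles and computes a bag-of-words f1 rather than a raw set intersection); it only illustrates why a number followed by a line return never matches the gold, and why splitting on `\n` first recovers the match:

```python
# Simplified sketch of the whitespace pitfall -- not the Harness code itself.
import re
import string

def normalize(text: str) -> set:
    tokens = re.split(r"[ \-|]", text.lower())       # 1) split on ' ', '-', '|' only: '\n' is NOT a separator
    tokens = ["".join(c for c in t if c not in string.punctuation) for t in tokens]  # 2) punctuation removal
    out = []
    for t in tokens:                                  # 3) homogenization of numbers
        try:
            out.append(str(float(t)))
        except ValueError:
            if t:
                out.append(t)
    return set(out)                                   # bag of words

generation = "10\n\nPassage: The 2011 census recorded a population of 1,001,360"
gold = "10"

print(normalize(gold))                                # {'10.0'}
print(normalize(generation) & normalize(gold))        # set(): '10\n\npassage' was never cast to a number

# Splitting the generation on the first '\n' (the proposed fix) restores the match:
print(normalize(generation.split("\n")[0]) & normalize(gold))  # {'10.0'}
```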
Goodbye cold boot - how we made LoRA inference 300% faster
raphael-gl
December 5, 2023
lora-adapters-dynamic-loading
diffusers, lora, models, inference, stable-diffusion
https://huggingface.co/blog/lora-adapters-dynamic-loading
# Goodbye cold boot - how we made LoRA Inference 300% faster

tl;dr: We swap the Stable Diffusion LoRA adapters per user request, while keeping the base model warm, allowing fast LoRA inference across multiple users. You can experience this by browsing our [LoRA catalogue](https://huggingface.co/models?library=diffusers&other=lora) and playing with the inference widget.

![Inference Widget Example](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/171_load_lora_adapters/inference_widget.png)

In this blog post we will go over in detail how we achieved that.

We've been able to drastically speed up inference in the Hub for public LoRAs based on public Diffusion models. This has allowed us to save compute resources and provide a faster and better user experience.

To perform inference on a given model, there are two steps:

1. Warm up phase - downloading the model and setting up the service (25s).
2. The inference job itself (10s).

With the improvements, we were able to reduce the warm up time from 25s to 3s. We are now able to serve inference for hundreds of distinct LoRAs with less than 5 A10G GPUs, while the response time to user requests decreased from 35s to 13s.

Let's talk more about how we can leverage some recent features developed in the [Diffusers](https://github.com/huggingface/diffusers/) library to serve many distinct LoRAs in a dynamic fashion with one single service.

## LoRA

LoRA is a fine-tuning technique that belongs to the family of "parameter-efficient" (PEFT) methods, which try to reduce the number of trainable parameters affected by the fine-tuning process. It increases fine-tuning speed while reducing the size of fine-tuned checkpoints.

Instead of fine-tuning the model by performing tiny changes to all its weights, we freeze most of the layers and only train a few specific ones in the attention blocks. Furthermore, we avoid touching the parameters of those layers themselves: instead, we add the product of two smaller matrices to the original weights. Those small matrices are the ones whose weights are updated during the fine-tuning process, and then saved to disk. This means that all of the model's original parameters are preserved, and we can load the LoRA weights on top using an adaptation method.

The LoRA name (Low Rank Adaptation) comes from the small matrices we mentioned. For more information about the method, please refer to [this post](https://huggingface.co/blog/lora) or the [original paper](https://arxiv.org/abs/2106.09685).

<div id="diagram"></div>

![LoRA decomposition](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/171_load_lora_adapters/lora_diagram.png)

The diagram above shows two smaller orange matrices that are saved as part of the LoRA adapter. We can later load the LoRA adapter and merge it with the blue base model to obtain the yellow fine-tuned model. Crucially, _unloading_ the adapter is also possible, so we can revert back to the original base model at any point.

In other words, the LoRA adapter is like an add-on to a base model that can be added and removed on demand. And because A and B have small ranks, the adapter is very light in comparison with the model size. Therefore, loading it is much faster than loading the whole base model.

If you look, for example, inside the [Stable Diffusion XL Base 1.0 model repo](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main), which is widely used as a base model for many LoRA adapters, you can see that its size is around **7 GB**.
However, typical LoRA adapters like [this one](https://huggingface.co/minimaxir/sdxl-wrong-lora/) take a mere **24 MB** of space! There are far fewer blue base models than there are yellow ones on the Hub. If we can go quickly from the blue to the yellow one and vice versa, then we have a way to serve many distinct yellow models with only a few distinct blue deployments.

For a more exhaustive presentation on what LoRA is, please refer to the following blog post: [Using LoRA for Efficient Stable Diffusion Fine-Tuning](https://huggingface.co/blog/lora), or refer directly to the [original paper](https://arxiv.org/abs/2106.09685).

## Benefits

We have approximately **2500** distinct public LoRAs on the Hub. The vast majority (**~92%**) of them are LoRAs based on the [Stable Diffusion XL Base 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) model.

Before this mutualization, this would have meant deploying a dedicated service for each of them (e.g. for each of the yellow merged matrices in the diagram above), spinning up and reserving at least one new GPU. The time to spawn the service and have it ready to serve requests for a specific model is approximately **25s**, and then on top of this you have the inference time (**~10s** for a 1024x1024 SDXL diffusion inference with 25 inference steps on an A10G). If an adapter is only occasionally requested, its service gets stopped to free resources preempted by others.

If you were requesting a LoRA that was not so popular, even if it was based on the SDXL model like the vast majority of adapters found on the Hub so far, it would have required **35s** to warm it up and get an answer on the first request (the following ones would have taken the inference time, e.g. **10s**).

Now: request time has decreased from 35s to 13s, since adapters will use only a few distinct "blue" base models (like 2 significant ones for Diffusion). Even if your adapter is not so popular, there is a good chance that its "blue" service is already warmed up. In other words, there is a good chance that you avoid the 25s warm up time, even if you do not request your model that often. The blue model is already downloaded and ready; all we have to do is unload the previous adapter and load the new one, which takes **3s** as we see [below](#loading-figures).

Overall, this requires fewer GPUs to serve all distinct models, even though we already had a way to share GPUs between deployments to maximize their compute usage. In a **2min** time frame, there are approximately **10** distinct LoRA weights that are requested. Instead of spawning 10 deployments, and keeping them warm, we simply serve all of them with 1 to 2 GPUs (or more if there is a request burst).

## Implementation

We implemented LoRA mutualization in the Inference API. When a request is performed on a model available in our platform, we first determine whether this is a LoRA or not. We then identify the base model for the LoRA and route the request to a common backend farm, with the ability to serve requests for that base model. Inference requests get served by keeping the base model warm and loading/unloading LoRAs on the fly. This way, we can ultimately reuse the same compute resources to serve many distinct models at once.

### LoRA structure

In the Hub, LoRAs can be identified with two attributes:

![Hub](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/171_load_lora_adapters/lora_adapter_hub.png)

A LoRA will have a `base_model` attribute.
This is simply the model which the LoRA was built for and should be applied to when performing inference. Because LoRAs are not the only models with such an attribute (any duplicated model will have one), a LoRA will also need a `lora` tag to be properly identified.

### Loading/Offloading LoRA for Diffusers 🧨

<div class="alert">
<p>
Note that there is a more seamless way to perform the same as what is presented in this section using the <a href="https://github.com/huggingface/peft">peft</a> library. Please refer to <a href="https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference">the documentation</a> for more details. The principle remains the same as below (going from/to the blue box to/from the yellow one in the <a href="#diagram">diagram</a> above).
</p>
</div>
<br/>

Four functions are used in the Diffusers library to load and unload distinct LoRA weights:

- `load_lora_weights` and `fuse_lora` for loading and merging weights with the main layers. Note that merging weights with the main model before performing inference can decrease the inference time by 30%.
- `unload_lora_weights` and `unfuse_lora` for unloading.

We provide an example below on how one can leverage the Diffusers library to quickly load several LoRA weights on top of a base model:

```py
import time

import torch
from diffusers import (
    AutoencoderKL,
    DiffusionPipeline,
)

base = "stabilityai/stable-diffusion-xl-base-1.0"

adapter1 = 'nerijs/pixel-art-xl'
weightname1 = 'pixel-art-xl.safetensors'

adapter2 = 'minimaxir/sdxl-wrong-lora'
weightname2 = None

inputs = "elephant"
kwargs = {}

if torch.cuda.is_available():
    kwargs["torch_dtype"] = torch.float16

start = time.time()

# Load VAE compatible with fp16 created by madebyollin
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16,
)
kwargs["vae"] = vae
kwargs["variant"] = "fp16"

model = DiffusionPipeline.from_pretrained(
    base, **kwargs
)

if torch.cuda.is_available():
    model.to("cuda")

elapsed = time.time() - start

print(f"Base model loaded, elapsed {elapsed:.2f} seconds")


def inference(adapter, weightname):
    start = time.time()
    model.load_lora_weights(adapter, weight_name=weightname)
    # Fusing LoRA weights with the main layers improves inference time by 30%!
    model.fuse_lora()
    elapsed = time.time() - start

    print(f"LoRA adapter loaded and fused to main model, elapsed {elapsed:.2f} seconds")

    start = time.time()
    data = model(inputs, num_inference_steps=25).images[0]
    elapsed = time.time() - start
    print(f"Inference time, elapsed {elapsed:.2f} seconds")

    start = time.time()
    model.unfuse_lora()
    model.unload_lora_weights()
    elapsed = time.time() - start
    print(f"LoRA adapter unfused/unloaded from base model, elapsed {elapsed:.2f} seconds")


inference(adapter1, weightname1)
inference(adapter2, weightname2)
```

## Loading figures

All numbers below are in seconds:

<table>
 <tr>
  <th>GPU</th>
  <td>T4</td>
  <td>A10G</td>
 </tr>
 <tr>
  <th>Base model loading - not cached</th>
  <td>20</td>
  <td>20</td>
 </tr>
 <tr>
  <th>Base model loading - cached</th>
  <td>5.95</td>
  <td>4.09</td>
 </tr>
 <tr>
  <th>Adapter 1 loading</th>
  <td>3.07</td>
  <td>3.46</td>
 </tr>
 <tr>
  <th>Adapter 1 unloading</th>
  <td>0.52</td>
  <td>0.28</td>
 </tr>
 <tr>
  <th>Adapter 2 loading</th>
  <td>1.44</td>
  <td>2.71</td>
 </tr>
 <tr>
  <th>Adapter 2 unloading</th>
  <td>0.19</td>
  <td>0.13</td>
 </tr>
 <tr>
  <th>Inference time</th>
  <td>20.7</td>
  <td>8.5</td>
 </tr>
</table>

With 2 to 4 additional seconds per inference, we can serve many distinct LoRAs.
However, on an A10G GPU, the inference time decreases by a lot while the adapter loading time does not change much, so the LoRA loading/unloading is relatively more expensive.

### Serving requests

To serve inference requests, we use [this open source community image](https://github.com/huggingface/api-inference-community/tree/main/docker_images/diffusers). You can find the previously described mechanism used in the [TextToImagePipeline](https://github.com/huggingface/api-inference-community/blob/main/docker_images/diffusers/app/pipelines/text_to_image.py) class.

When a LoRA is requested, we'll look at the one that is loaded and change it only if required, then we perform inference as usual. This way, we are able to serve requests for the base model and many distinct adapters.

Below is an example of how you can test and request this image:

```
$ git clone https://github.com/huggingface/api-inference-community.git
$ cd api-inference-community/docker_images/diffusers
$ docker build -t test:1.0 -f Dockerfile .
$ cat > /tmp/env_file <<'EOF'
MODEL_ID=stabilityai/stable-diffusion-xl-base-1.0
TASK=text-to-image
HF_HUB_ENABLE_HF_TRANSFER=1
EOF
$ docker run --gpus all --rm --name test1 --env-file /tmp/env_file -p 8888:80 -it test:1.0
```

Then, in another terminal, perform requests to the base model and/or miscellaneous LoRA adapters to be found on the HF Hub.

```
# Request the base model
$ curl 0:8888 -d '{"inputs": "elephant", "parameters": {"num_inference_steps": 20}}' > /tmp/base.jpg

# Request one adapter
$ curl -H 'lora: minimaxir/sdxl-wrong-lora' 0:8888 -d '{"inputs": "elephant", "parameters": {"num_inference_steps": 20}}' > /tmp/adapter1.jpg

# Request another one
$ curl -H 'lora: nerijs/pixel-art-xl' 0:8888 -d '{"inputs": "elephant", "parameters": {"num_inference_steps": 20}}' > /tmp/adapter2.jpg
```

### What about batching?

Recently a really interesting [paper](https://arxiv.org/abs/2311.03285) came out, describing how to increase throughput by performing batched inference on LoRA models. In short, all inference requests would be gathered in a batch, the computation related to the common base model would be done all at once, and then the remaining adapter-specific products would be computed. We did not implement such a technique (close to the approach adopted in [text-generation-inference](https://github.com/huggingface/text-generation-inference/) for LLMs). Instead, we stuck to single sequential inference requests. The reason is that we observed that batching was not interesting for diffusers: throughput does not increase significantly with batch size. On the simple image generation benchmark we performed, it only increased by 25% for a batch size of 8, in exchange for a 6x increase in latency! Comparatively, batching is far more interesting for LLMs because you get 8 times the sequential throughput with only a 10% latency increase.

## Conclusion: **Time**!

Using dynamic LoRA loading, we were able to save compute resources and improve the user experience in the Hub Inference API. Despite the extra time added by the process of unloading the previously loaded adapter and loading the one we're interested in, the fact that the serving process is most often already up and running makes the overall inference response time much shorter.

Note that for a LoRA to benefit from this inference optimization on the Hub, it must be public, non-gated, and based on a non-gated public model.
Please do let us know if you apply the same method to your deployment!
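For readers who want a mental model of the swap logic described in the "Serving requests" section, here is a simplified sketch of what per-request adapter swapping can look like with the four Diffusers calls shown earlier. It is an illustration, not the actual `TextToImagePipeline` implementation; the `current_lora` bookkeeping variable and the `generate` helper are ours, and error handling, locking and `weight_name` resolution are omitted:

```py
# Simplified sketch of per-request LoRA swapping on a warm base pipeline.
# Not the production TextToImagePipeline code.
current_lora = None

def generate(pipeline, prompt, lora_id=None, num_inference_steps=25):
    global current_lora
    if lora_id != current_lora:
        # Drop the previously fused adapter, if any, to get back to the base model
        if current_lora is not None:
            pipeline.unfuse_lora()
            pipeline.unload_lora_weights()
        # Load and fuse the requested adapter (skipped for plain base-model requests)
        if lora_id is not None:
            pipeline.load_lora_weights(lora_id)
            pipeline.fuse_lora()
        current_lora = lora_id
    # The base model stayed warm the whole time: only the ~3s swap is paid, if at all
    return pipeline(prompt, num_inference_steps=num_inference_steps).images[0]
```

With the `model` pipeline from the earlier example, consecutive calls such as `generate(model, "elephant", lora_id=adapter1)` followed by `generate(model, "elephant", lora_id=adapter1)` would only pay the adapter swap once.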
Optimum-NVIDIA - Unlock blazingly fast LLM inference in just 1 line of code
laikh-nvidia
December 5, 2023
optimum-nvidia
llm, nvidia, llama, inference, optimum
https://huggingface.co/blog/optimum-nvidia
# Optimum-NVIDIA on Hugging Face enables blazingly fast LLM inference in just 1 line of code

Large Language Models (LLMs) have revolutionized natural language processing and are increasingly deployed to solve complex problems at scale. Achieving optimal performance with these models is notoriously challenging due to their unique and intense computational demands. Optimized performance of LLMs is incredibly valuable for end users looking for a snappy and responsive experience, as well as for scaled deployments where improved throughput translates to dollars saved.

That's where the [Optimum-NVIDIA](https://github.com/huggingface/optimum-nvidia) inference library comes in. Available on Hugging Face, Optimum-NVIDIA dramatically accelerates LLM inference on the NVIDIA platform through an extremely simple API. By changing **just a single line of code**, you can unlock up to **28x faster inference and 1,200 tokens/second** on the NVIDIA platform.

Optimum-NVIDIA is the first Hugging Face inference library to benefit from the new `float8` format supported on the NVIDIA Ada Lovelace and Hopper architectures. FP8, in addition to the advanced compilation capabilities of [NVIDIA TensorRT-LLM](https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/) software, dramatically accelerates LLM inference.

### How to Run

You can start running LLaMA with blazingly fast inference speeds in just 3 lines of code with a pipeline from Optimum-NVIDIA. If you already set up a pipeline from Hugging Face's transformers library to run LLaMA, you just need to modify a single line of code to unlock peak performance!

```diff
- from transformers.pipelines import pipeline
+ from optimum.nvidia.pipelines import pipeline

# everything else is the same as in transformers!
pipe = pipeline('text-generation', 'meta-llama/Llama-2-7b-chat-hf', use_fp8=True)
pipe("Describe a real-world application of AI in sustainable energy.")
```

You can also enable FP8 quantization with a single flag, which allows you to run a bigger model on a single GPU, at faster speeds, and without sacrificing accuracy. The flag shown in this example uses a predefined calibration strategy by default, though you can provide your own calibration dataset and customized tokenization to tailor the quantization to your use case.

The pipeline interface is great for getting up and running quickly, but power users who want fine-grained control over setting sampling parameters can use the Model API.

```diff
- from transformers import AutoModelForCausalLM
+ from optimum.nvidia import AutoModelForCausalLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf", padding_side="left")

model = AutoModelForCausalLM.from_pretrained(
  "meta-llama/Llama-2-13b-chat-hf",
+ use_fp8=True,
)

model_inputs = tokenizer(
    ["How is autonomous vehicle technology transforming the future of transportation and urban planning?"],
    return_tensors="pt"
).to("cuda")

generated_ids, generated_length = model.generate(
    **model_inputs,
    top_k=40,
    top_p=0.7,
    repetition_penalty=10,
)

tokenizer.batch_decode(generated_ids[0], skip_special_tokens=True)
```

For more details, check out our [documentation](https://github.com/huggingface/optimum-nvidia).

### Performance Evaluation

When evaluating the performance of an LLM, we consider two metrics: First Token Latency and Throughput.
First Token Latency (also known as Time to First Token or prefill latency) measures how long you wait from the time you enter your prompt to the time you begin receiving your output, so this metric can tell you how responsive the model will feel. Optimum-NVIDIA delivers up to 3.3x faster First Token Latency compared to stock transformers:

<br>
<figure class="image">
  <img alt="" src="assets/optimum_nvidia/first_token_latency.svg" />
  <figcaption>Figure 1. Time it takes to generate the first token (ms)</figcaption>
</figure>
<br>

Throughput, on the other hand, measures how fast the model can generate tokens and is particularly relevant when you want to batch generations together. While there are a few ways to calculate throughput, we adopted a standard method: dividing the total sequence length, including both input and output tokens summed over all batches, by the end-to-end latency. Optimum-NVIDIA delivers up to 28x better throughput compared to stock transformers:

<br>
<figure class="image">
  <img alt="" src="assets/optimum_nvidia/throughput.svg" />
  <figcaption>Figure 2. Throughput (token / second)</figcaption>
</figure>
<br>

Initial evaluations of the [recently announced NVIDIA H200 Tensor Core GPU](https://www.nvidia.com/en-us/data-center/h200/) show up to an additional 2x boost in throughput for LLaMA models compared to an NVIDIA H100 Tensor Core GPU. As H200 GPUs become more readily available, we will share performance data for Optimum-NVIDIA running on them.

### Next steps

Optimum-NVIDIA currently provides peak performance for the LLaMAForCausalLM architecture + task, so any [LLaMA-based model](https://huggingface.co/models?other=llama,llama2), including fine-tuned versions, should work with Optimum-NVIDIA out of the box today. We are actively expanding support to include other text generation model architectures and tasks, all from within Hugging Face.

We continue to push the boundaries of performance and plan to incorporate cutting-edge optimization techniques like In-Flight Batching to improve throughput when streaming prompts and INT4 quantization to run even bigger models on a single GPU.

Give it a try: we are releasing the [Optimum-NVIDIA repository](https://github.com/huggingface/optimum-nvidia) with instructions on how to get started. Please share your feedback with us! 🤗
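To make the two metrics above concrete, here is a small sketch of how they relate. The numbers are placeholders for illustration only (they are not measured results), and a real benchmark would instrument the generation loop itself rather than use hard-coded values:

```python
# Placeholder measurements for illustration only -- not benchmark results.
# Each entry: (input_tokens, output_tokens, end_to_end_latency_s, first_token_latency_s)
batches = [
    (128, 256, 0.90, 0.050),
    (128, 256, 0.85, 0.045),
]

total_tokens = sum(inp + out for inp, out, _, _ in batches)
total_latency = sum(latency for _, _, latency, _ in batches)

# Throughput: tokens processed and generated across all batches, per second (cf. Figure 2)
throughput = total_tokens / total_latency

# First Token Latency: how long before the first output token appears (cf. Figure 1)
mean_first_token_latency_ms = 1000 * sum(ftl for *_, ftl in batches) / len(batches)

print(f"throughput: {throughput:.0f} tokens/s")
print(f"first token latency: {mean_first_token_latency_ms:.0f} ms")
```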
SetFitABSA: Few-Shot Aspect Based Sentiment Analysis using SetFit
ronenlap
December 6, 2023
setfit-absa
research, nlp
https://huggingface.co/blog/setfit-absa
# SetFitABSA: Few-Shot Aspect Based Sentiment Analysis using SetFit

<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/setfit-absa/method.png" width=500>
</p>
<p align="center">
    <em>SetFitABSA is an efficient technique to detect the sentiment towards specific aspects within the text.</em>
</p>

Aspect-Based Sentiment Analysis (ABSA) is the task of detecting the sentiment towards specific aspects within the text. For example, in the sentence, "This phone has a great screen, but its battery is too small", the _aspect_ terms are "screen" and "battery" and the sentiment polarities towards them are Positive and Negative, respectively.

ABSA is widely used by organizations for extracting valuable insights by analyzing customer feedback towards aspects of products or services in various domains. However, labeling training data for ABSA is a tedious task because of the fine-grained nature (token level) of manually identifying aspects within the training samples.

Intel Labs and Hugging Face are excited to introduce SetFitABSA, a framework for few-shot training of domain-specific ABSA models; SetFitABSA is competitive with and even outperforms generative models such as Llama2 and T5 in few-shot scenarios.

Compared to LLM based methods, SetFitABSA has two unique advantages:

<p>🗣 <strong>No prompts needed:</strong> few-shot in-context learning with LLMs requires handcrafted prompts which make the results brittle, sensitive to phrasing and dependent on user expertise. SetFitABSA dispenses with prompts altogether by generating rich embeddings directly from a small number of labeled text examples.</p>

<p>🏎 <strong>Fast to train:</strong> SetFitABSA requires only a handful of labeled training samples; in addition, it uses a simple training data format, eliminating the need for specialized tagging tools. This makes the data labeling process fast and easy.</p>

In this blog post, we'll explain how SetFitABSA works and how to train your very own models using the [SetFit library](https://github.com/huggingface/setfit). Let's dive in!

## How does it work?

<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/setfit-absa/method.png" width=700>
</p>
<p align="center">
    <em>SetFitABSA's three-stage training process</em>
</p>

SetFitABSA consists of three steps. The first step extracts aspect candidates from the text, the second one yields the aspects by classifying the aspect candidates as aspects or non-aspects, and the final step associates a sentiment polarity with each extracted aspect. Steps two and three are based on SetFit models.

### Training

**1. Aspect candidate extraction**

In this work we assume that aspects, which are usually features of products and services, are mostly nouns or noun compounds (strings of consecutive nouns). We use [spaCy](https://spacy.io/) to tokenize and extract nouns/noun compounds from the sentences in the (few-shot) training set. Since not all extracted nouns/noun compounds are aspects, we refer to them as aspect candidates.

**2. Aspect/Non-aspect classification**

Now that we have aspect candidates, we need to train a model to be able to distinguish between nouns that are aspects and nouns that are non-aspects. For this purpose, we need training samples with aspect/non-aspect labels.
This is done by considering aspects in the training set as `True` aspects, while other non-overlapping candidate aspects are considered non-aspects and therefore labeled as `False`:

* **Training sentence:** "Waiters aren't friendly but the cream pasta is out of this world."
* **Tokenized:** [Waiters, are, n't, friendly, but, the, cream, pasta, is, out, of, this, world, .]
* **Extracted aspect candidates:** [<strong style="color:orange">Waiters</strong>, are, n't, friendly, but, the, <strong style="color:orange">cream</strong>, <strong style="color:orange">pasta</strong>, is, out, of, this, <strong style="color:orange">world</strong>, .]
* **Gold labels from training set, in [BIO format](https://en.wikipedia.org/wiki/Inside–outside–beginning_(tagging)):** [B-ASP, O, O, O, O, O, B-ASP, I-ASP, O, O, O, O, O, O]
* **Generated aspect/non-aspect labels:** [<strong style="color:green">Waiters</strong>, are, n't, friendly, but, the, <strong style="color:green">cream</strong>, <strong style="color:green">pasta</strong>, is, out, of, this, <strong style="color:red">world</strong>, .]

Now that we have all the aspect candidates labeled, how do we use them to train the candidate aspect classification model? In other words, how do we use SetFit, a sentence classification framework, to classify individual tokens? Well, this is the trick: each aspect candidate is concatenated with the entire training sentence to create a training instance using the following template:

```
aspect_candidate:training_sentence
```

Applying the template to the example above will generate 3 training instances – two with `True` labels representing aspect training instances, and one with a `False` label representing a non-aspect training instance:

| Text                                                                           | Label |
|:-------------------------------------------------------------------------------|:------|
| Waiters:Waiters aren't friendly but the cream pasta is out of this world.      | 1     |
| cream pasta:Waiters aren't friendly but the cream pasta is out of this world.  | 1     |
| world:Waiters aren't friendly but the cream pasta is out of this world.        | 0     |
| ...                                                                            | ...   |

After generating the training instances, we are ready to use the power of SetFit to train a few-shot domain-specific binary classifier to extract aspects from an input text review. This will be our first fine-tuned SetFit model.

**3. Sentiment polarity classification**

Once the system extracts the aspects from the text, it needs to associate a sentiment polarity (e.g., positive, negative or neutral) to each aspect. For this purpose, we use a 2nd SetFit model and train it in a similar fashion to the aspect extraction model, as illustrated in the following example:

* **Training sentence:** "Waiters aren't friendly but the cream pasta is out of this world."
* **Tokenized:** [Waiters, are, n't, friendly, but, the, cream, pasta, is, out, of, this, world, .]
* **Gold labels from training set:** [NEG, O, O, O, O, O, POS, POS, O, O, O, O, O, O]

| Text                                                                           | Label |
|:-------------------------------------------------------------------------------|:------|
| Waiters:Waiters aren't friendly but the cream pasta is out of this world.      | NEG   |
| cream pasta:Waiters aren't friendly but the cream pasta is out of this world.  | POS   |
| ...                                                                            | ...   |

Note that as opposed to the aspect extraction model, we don't include non-aspects in this training set because the goal is to classify the sentiment polarity towards real aspects.
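If it helps to see this data preparation as code, here is a minimal sketch of how such `aspect_candidate:training_sentence` instances could be built. It is an illustration, not the SetFit library's internal implementation; in particular, it uses spaCy's `noun_chunks` as a rough stand-in for the noun/noun-compound extraction, and the gold aspect spans are supplied by hand:

```python
# Minimal illustration of the aspect-candidate training-instance construction.
# Not the actual SetFit/AbsaTrainer internals.
# Requires: python -m spacy download en_core_web_lg
import spacy

nlp = spacy.load("en_core_web_lg")

sentence = "Waiters aren't friendly but the cream pasta is out of this world."
gold_aspects = {"Waiters", "cream pasta"}  # labeled spans from the few-shot training set

# noun_chunks is a rough proxy for the noun/noun-compound extraction described above;
# it may include determiners ("the cream pasta"), so we match leniently.
candidates = [chunk.text for chunk in nlp(sentence).noun_chunks]

def is_aspect(candidate: str) -> bool:
    return any(gold in candidate for gold in gold_aspects)

# Each candidate becomes "candidate:sentence", labeled 1 if it covers a gold aspect.
instances = [(f"{cand}:{sentence}", int(is_aspect(cand))) for cand in candidates]

for text, label in instances:
    print(label, text)
```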
## Running inference

At inference time, the test sentence passes through the spaCy aspect candidate extraction phase, resulting in test instances using the template `aspect_candidate:test_sentence`. Next, non-aspects are filtered out by the aspect/non-aspect classifier. Finally, the extracted aspects are fed to the sentiment polarity classifier that predicts the sentiment polarity per aspect.

In practice, this means the model can receive normal text as input, and output aspects and their sentiments:

**Model Input:**

```
"their dinner specials are fantastic."
```

**Model Output:**

```
[{'span': 'dinner specials', 'polarity': 'positive'}]
```

## Benchmarking

SetFitABSA was benchmarked against the recent state-of-the-art work by [AWS AI Labs](https://arxiv.org/pdf/2210.06629.pdf) and [Salesforce AI Research](https://arxiv.org/pdf/2204.05356.pdf) that finetune T5 and GPT2 using prompts. To get a more complete picture, we also compare our model to the Llama-2-chat model using in-context learning.

We use the popular Laptop14 and Restaurant14 ABSA [datasets](https://huggingface.co/datasets/alexcadillon/SemEval2014Task4) from the Semantic Evaluation Challenge 2014 ([SemEval14](https://aclanthology.org/S14-2004.pdf)). SetFitABSA is evaluated both on the intermediate task of aspect term extraction (SB1) and on the full ABSA task of aspect extraction along with their sentiment polarity predictions (SB1+SB2).

### Model size comparison

| Model              | Size (params) |
|:------------------:|:-------------:|
| Llama-2-chat       | 7B            |
| T5-base            | 220M          |
| GPT2-base          | 124M          |
| GPT2-medium        | 355M          |
| **SetFit (MPNet)** | 2x 110M       |

Note that for the SB1 task, SetFitABSA is 110M parameters, for SB2 it is 110M parameters, and for SB1+SB2 SetFitABSA consists of 220M parameters.

### Performance comparison

We see a clear advantage of SetFitABSA when the number of training instances is low, despite being 2x smaller than T5 and 3x smaller than GPT2-medium. Even when compared to Llama 2, which is 64x larger, the performance is on par or better.

**SetFitABSA vs GPT2**

<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/setfit-absa/SetFitABSA_vs_GPT2.png" width=700>
</p>

**SetFitABSA vs T5**

<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/setfit-absa/SetFitABSA_vs_T5.png" width=700>
</p>

Note that for a fair comparison, we evaluated SetFitABSA on exactly the same dataset splits used by the various baselines (GPT2, T5, etc.).

**SetFitABSA vs Llama2**

<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/setfit-absa/SetFitABSA_vs_Llama2.png" width=700>
</p>

We notice that increasing the number of in-context training samples for Llama2 did not result in improved performance. This phenomenon [has been shown for ChatGPT before](https://www.analyticsvidhya.com/blog/2023/09/power-of-llms-zero-shot-and-few-shot-prompting/), and we think it should be further investigated.

## Training your own model

SetFitABSA is part of the SetFit framework. To train an ABSA model, start by installing `setfit` with the `absa` option enabled:

```shell
python -m pip install -U "setfit[absa]"
```

Additionally, we must install the `en_core_web_lg` spaCy model:

```shell
python -m spacy download en_core_web_lg
```

We continue by preparing the training set.
The format of the training set is a `Dataset` with the columns `text`, `span`, `label`, `ordinal`:

* **text**: The full sentence or text containing the aspects.
* **span**: An aspect from the full sentence. Can be multiple words. For example: "food".
* **label**: The (polarity) label corresponding to the aspect span. For example: "positive". The label names can be chosen arbitrarily when tagging the collected training data.
* **ordinal**: If the aspect span occurs multiple times in the text, then this ordinal represents the index of those occurrences. Often this is just 0, as each aspect usually appears only once in the input text.

For example, the training text "Restaurant with wonderful food but worst service I ever seen" contains two aspects, so it will add two rows to the training set table:

| Text                                                          | Span    | Label    | Ordinal |
|:---------------------------------------------------------------|:--------|:---------|:--------|
| Restaurant with wonderful food but worst service I ever seen  | food    | positive | 0       |
| Restaurant with wonderful food but worst service I ever seen  | service | negative | 0       |
| ...                                                           | ...     | ...      | ...     |

Once we have the training dataset ready, we can create an ABSA trainer and execute the training. SetFit models are fairly efficient to train, but as SetFitABSA involves two models trained sequentially, it is recommended to use a GPU for training to keep the training time low. For example, the following training script trains a full SetFitABSA model in about 10 minutes with the free Google Colab T4 GPU.

```python
from datasets import load_dataset
from setfit import AbsaTrainer, AbsaModel

# Create a training dataset as above
# For convenience we will use an already prepared dataset here
train_dataset = load_dataset("tomaarsen/setfit-absa-semeval-restaurants", split="train[:128]")

# Create a model with a chosen sentence transformer from the Hub
model = AbsaModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Create a trainer:
trainer = AbsaTrainer(model, train_dataset=train_dataset)

# Execute training:
trainer.train()
```

That's it! We have trained a domain-specific ABSA model. We can save our trained model to disk or upload it to the Hugging Face hub. Bear in mind that the model contains two submodels, so each is given its own path:

```python
model.save_pretrained(
    "models/setfit-absa-model-aspect",
    "models/setfit-absa-model-polarity"
)
# or
model.push_to_hub(
    "tomaarsen/setfit-absa-paraphrase-mpnet-base-v2-restaurants-aspect",
    "tomaarsen/setfit-absa-paraphrase-mpnet-base-v2-restaurants-polarity"
)
```

Now we can use our trained model for inference. We start by loading the model:

```python
from setfit import AbsaModel

model = AbsaModel.from_pretrained(
    "tomaarsen/setfit-absa-paraphrase-mpnet-base-v2-restaurants-aspect",
    "tomaarsen/setfit-absa-paraphrase-mpnet-base-v2-restaurants-polarity"
)
```

Then, we use the predict API to run inference. The input is a list of strings, each representing a textual review:

```python
preds = model.predict([
    "Best pizza outside of Italy and really tasty.",
    "The food variations are great and the prices are absolutely fair.",
    "Unfortunately, you have to expect some waiting time and get a note with a waiting number if it should be very full."
])
print(preds)
# [
#     [{'span': 'pizza', 'polarity': 'positive'}],
#     [{'span': 'food variations', 'polarity': 'positive'}, {'span': 'prices', 'polarity': 'positive'}],
#     [{'span': 'waiting time', 'polarity': 'neutral'}, {'span': 'waiting number', 'polarity': 'neutral'}]
# ]
```

For more details on training options, saving and loading models, and inference, see the SetFit [docs](https://huggingface.co/docs/setfit/how_to/absa).

## References

* Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar, 2014. "SemEval-2014 Task 4: Aspect Based Sentiment Analysis". In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35.
* Siddharth Varia, Shuai Wang, Kishaloy Halder, Robert Vacareanu, Miguel Ballesteros, Yassine Benajiba, Neha Anna John, Rishita Anubhai, Smaranda Muresan, and Dan Roth, 2023. "Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis". https://arxiv.org/abs/2210.06629
* Ehsan Hosseini-Asl, Wenhao Liu, and Caiming Xiong, 2022. "A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis". https://arxiv.org/abs/2204.05356
* Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, and Oren Pereg, 2022. "Efficient Few-Shot Learning Without Prompts". https://arxiv.org/abs/2209.11055