https://huggingface.co/blog/convert-transformers-to-onnx
Convert Transformers to ONNX with Hugging Face Optimum
Philipp Schmid
June 22, 2022
Hundreds of Transformers experiments and models are uploaded to the Hugging Face Hub every single day. Machine learning engineers and students conducting those experiments use a variety of frameworks like PyTorch, TensorFlow/Keras, or others. These models are already used by thousands of companies and form the foundation of AI-powered products. If you deploy Transformers models in production environments, we recommend exporting them first into a serialized format that can be loaded, optimized, and executed on specialized runtimes and hardware.

In this guide, you'll learn about:

- What is ONNX?
- What is Hugging Face Optimum?
- What Transformers architectures are supported?
- How can I convert a Transformers model (BERT) to ONNX?
- What's next?

Let's get started! 🚀 If you are interested in optimizing your models to run with maximum efficiency, check out the 🤗 Optimum library.

1. What is ONNX?

ONNX, or Open Neural Network eXchange, is an open standard and format to represent machine learning models. ONNX defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow.

(Pseudo ONNX graph, visualized with NETRON)

When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an intermediate representation) which represents the flow of data through the neural network.

Important: ONNX is not a runtime. ONNX is only the representation, which can be used with runtimes like ONNX Runtime. You can find a list of supported accelerators here.

➡️ Learn more about ONNX.

2. What is Hugging Face Optimum?

Hugging Face Optimum is an open-source library and an extension of Hugging Face Transformers that provides a unified API of performance optimization tools to achieve maximum efficiency when training and running models on accelerated hardware, including toolkits for optimized performance on Graphcore IPU and Habana Gaudi. Optimum can be used for conversion, quantization, graph optimization, and accelerated training & inference, with support for Transformers pipelines.

Below you can see a typical developer journey of how you can leverage Optimum with ONNX.

➡️ Learn more about Optimum

3. What Transformers architectures are supported?

A list of all supported Transformers architectures can be found in the ONNX section of the Transformers documentation. Below is an excerpt of the most commonly used architectures which can be converted to ONNX and optimized with Hugging Face Optimum:

ALBERT, BART, BERT, DistilBERT, ELECTRA, GPT Neo, GPT-J, GPT-2, RoBERTa, T5, ViT, XLM, …

➡️ All supported architectures

4. How can I convert a Transformers model (BERT) to ONNX?

There are currently three ways to convert your Hugging Face Transformers models to ONNX. In this section, you will learn how to export distilbert-base-uncased-finetuned-sst-2-english for text-classification using all three methods, going from the low-level torch API to the most user-friendly high-level API of Optimum. Each method produces exactly the same result.

Export with torch.onnx (low-level)

torch.onnx enables you to convert model checkpoints to an ONNX graph via its export method, but you have to provide a lot of values like input_names, dynamic_axes, etc.
You'll first need to install some dependencies:

```bash
pip install transformers torch
```

Exporting our checkpoint with export:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# load model and tokenizer
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
dummy_model_input = tokenizer("This is a sample", return_tensors="pt")

# export
torch.onnx.export(
    model,
    tuple(dummy_model_input.values()),
    f="torch-model.onnx",
    input_names=['input_ids', 'attention_mask'],
    output_names=['logits'],
    dynamic_axes={'input_ids': {0: 'batch_size', 1: 'sequence'},
                  'attention_mask': {0: 'batch_size', 1: 'sequence'},
                  'logits': {0: 'batch_size', 1: 'sequence'}},
    do_constant_folding=True,
    opset_version=13,
)
```

Export with transformers.onnx (mid-level)

transformers.onnx enables you to convert model checkpoints to an ONNX graph by leveraging configuration objects. That way you don't have to provide the complex configuration for dynamic_axes etc.

You'll first need to install some dependencies:

```bash
pip install transformers[onnx] torch
```

Exporting our checkpoint with transformers.onnx:

```python
from pathlib import Path
import transformers
from transformers.onnx import FeaturesManager
from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification

# load model and tokenizer
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
feature = "sequence-classification"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# load config
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=feature)
onnx_config = model_onnx_config(model.config)

# export
onnx_inputs, onnx_outputs = transformers.onnx.export(
    preprocessor=tokenizer,
    model=model,
    config=onnx_config,
    opset=13,
    output=Path("trfs-model.onnx"),
)
```

Export with Optimum (high-level)

Optimum Inference includes methods to convert vanilla Transformers models to ONNX using the ORTModelForXxx classes. To convert your Transformers model to ONNX you simply have to pass from_transformers=True to the from_pretrained() method, and your model will be loaded and converted to ONNX leveraging the transformers.onnx package under the hood.

You'll first need to install some dependencies:

```bash
pip install optimum[onnxruntime]
```

Exporting our checkpoint with ORTModelForSequenceClassification:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification

model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    from_transformers=True,
)
```

The best part about the conversion with Optimum is that you can immediately use the model to run predictions or load it inside a pipeline (a short sketch of this is included at the end of this post).

5. What's next?

Now that you have successfully converted your Transformers model to ONNX, the whole set of optimization and quantization tools is open to use. Potential next steps can be:

- Use the ONNX model for Accelerated Inference with Optimum and Transformers Pipelines
- Apply static quantization to your model for ~3x latency improvements
- Use ONNX Runtime for training
- Convert your ONNX model to TensorRT to improve GPU performance
- …

If you are interested in optimizing your models to run with maximum efficiency, check out the 🤗 Optimum library.

Thanks for reading! If you have any questions, feel free to contact me through GitHub or on the forum. You can also connect with me on Twitter or LinkedIn.
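To illustrate that last point, here is a minimal sketch (not part of the original post) of how the ONNX-backed Optimum model could be plugged into a standard transformers pipeline; the checkpoint is the same one used throughout this guide, and the printed output is only indicative:

```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# Load the checkpoint and convert it to ONNX on the fly, as shown above
model = ORTModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The ONNX model acts as a drop-in replacement inside a regular pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Converting Transformers models to ONNX is straightforward!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```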
https://huggingface.co/blog/arxiv
Hugging Face Machine Learning Demos on arXiv
Abubakar Abid, Omar Sanseviero, Pedro Cuenca
November 17, 2022
https://huggingface.co/blog/safetensors-security-audit
Audit shows that safetensors is safe and ready to become the default
Nicolas Patry, Stella Biderman
May 23, 2023
Hugging Face, in close collaboration with EleutherAI and Stability AI, has ordered an external security audit of the safetensors library, the results of which allow all three organizations to move toward making the library the default format for saved models.

The full results of the security audit, performed by Trail of Bits, can be found here: Report.

The following blog post explains the origins of the library, why these audit results are important, and the next steps.

What is safetensors?

🐶 Safetensors is a library for saving and loading tensors in the most common frameworks (including PyTorch, TensorFlow, JAX, PaddlePaddle, and NumPy).

For a more concrete explanation, we'll use PyTorch.

```python
import torch
from safetensors.torch import load_file, save_file

weights = {"embeddings": torch.zeros((10, 100))}
save_file(weights, "model.safetensors")
weights2 = load_file("model.safetensors")
```

It also has a number of cool features compared to other formats, most notably that loading files is safe, as we'll see later. When you're using transformers, if safetensors is installed, then those files will already be used preferentially in order to prevent issues, which means that

```bash
pip install safetensors
```

is likely to be the only thing needed to run safetensors files safely.

Going forward, and thanks to the validation of the library, safetensors will now be installed in transformers by default. The next step is saving models in safetensors by default.

We are thrilled to see that the safetensors library is already seeing use in the ML ecosystem, including:

- Civitai
- Stable Diffusion Web UI
- dfdx
- LLaMA.cpp

Why create something new?

The creation of this library was driven by the fact that PyTorch uses pickle under the hood, which is inherently unsafe. (Sources: 1, 2, video, 3)

With pickle, it is possible to write a malicious file posing as a model that gives full control of a user's computer to an attacker without the user's knowledge, allowing the attacker to steal all their bitcoins 😓.

While this vulnerability in pickle is widely known in the computer security world (and is acknowledged in the PyTorch docs), it's not common knowledge in the broader ML community. Since the Hugging Face Hub is a platform where anyone can upload and share models, it is important to make efforts to prevent users from getting infected by malware. We are also taking steps to make sure the existing PyTorch files are not malicious, but the best we can do is flag suspicious-looking files.

Of course, there are other file formats out there, but none seemed to meet the full set of ideal requirements our team identified.

In addition to being safe, safetensors allows lazy loading and generally faster loads (around 100x faster on CPU). Lazy loading means loading only part of a tensor in an efficient manner (a small sketch of this is included at the end of this post). This particular feature enables arbitrary sharding with efficient inference libraries, such as text-generation-inference, to load LLMs (such as LLaMA, StarCoder, etc.) on various types of hardware with maximum efficiency.

Because it loads so fast and is framework agnostic, we can even use the format to load models from the same file in PyTorch or TensorFlow.

The security audit

Since safetensors' main asset is providing safety guarantees, we wanted to make sure it actually delivered. That's why Hugging Face, EleutherAI, and Stability AI teamed up to get an external security audit to confirm it.

Important findings:

- No critical security flaw leading to arbitrary code execution was found.
- Some imprecisions in the spec format were detected and fixed.
- Some missing validation allowed polyglot files, which was fixed.
- Lots of improvements to the test suite were proposed and implemented.

In the name of openness and transparency, all companies agreed to make the report fully public.

Full report

One important thing to note is that the library is written in Rust. This adds an extra layer of security coming directly from the language itself. While it is impossible to prove the absence of flaws, this is a major step in giving reassurance that safetensors is indeed safe to use.

Going forward

For Hugging Face, EleutherAI, and Stability AI, the master plan is to shift to using this format by default.

EleutherAI has added support for evaluating models stored as safetensors in their LM Evaluation Harness and is working on supporting the format in their GPT-NeoX distributed training library.

Within the transformers library we are doing the following:

- Create safetensors.
- Verify it works and can deliver on all promises (lazy load for LLMs, single file for all frameworks, faster loads).
- Verify it's safe. (This is today's announcement.)
- Make safetensors a core dependency. (This is already done or soon to come.)
- Make safetensors the default saving format. This will happen in a few months when we have enough feedback to make sure it will cause as little disruption as possible and enough users already have the library to be able to load new models even on relatively old transformers versions.

As for safetensors itself, we're looking into adding more advanced features for LLM training, which has its own set of issues with current formats.

Finally, we plan to release a 1.0 in the near future, with the large user base of transformers providing the final testing step. The format and the lib have had very few modifications since their inception, which is a good sign of stability.

We're glad we can bring ML one step closer to being safe and efficient for all!
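To make the lazy-loading point above concrete, here is a minimal sketch (my own illustration, not text from the original post) that uses the safetensors safe_open API to read a single tensor, or just a slice of it, without loading the whole file. It reuses the file and tensor name from the earlier example:

```python
import torch
from safetensors import safe_open
from safetensors.torch import save_file

# Write a file containing one named tensor (same example as above)
save_file({"embeddings": torch.zeros((10, 100))}, "model.safetensors")

# Open the file lazily: nothing is materialized until a tensor is requested
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    print(f.keys())                        # ['embeddings']
    full = f.get_tensor("embeddings")      # load only this tensor
    part = f.get_slice("embeddings")[:2]   # or only its first two rows
    print(full.shape, part.shape)          # torch.Size([10, 100]) torch.Size([2, 100])
```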
https://huggingface.co/blog/textgen-pipe-gaudi
Text-Generation Pipeline on Intel® Gaudi® 2 AI Accelerator
Siddhant Jagtap
February 29, 2024
With the Generative AI (GenAI) revolution in full swing, text generation with open-source transformer models like Llama 2 has become the talk of the town. AI enthusiasts as well as developers are looking to leverage the generative abilities of such models for their own use cases and applications. This article shows how easy it is to generate text with the Llama 2 family of models (7b, 13b and 70b) using Optimum Habana and a custom pipeline class; you'll be able to run the models with just a few lines of code!

This custom pipeline class has been designed to offer great flexibility and ease of use. Moreover, it provides a high level of abstraction and performs end-to-end text generation, which involves pre-processing and post-processing. There are multiple ways to use the pipeline: you can run the run_pipeline.py script from the Optimum Habana repository, add the pipeline class to your own Python scripts, or initialize LangChain classes with it.

Prerequisites

Since the Llama 2 models are part of a gated repo, you need to request access if you haven't done it already. First, you have to visit the Meta website and accept the terms and conditions. After you are granted access by Meta (it can take a day or two), you have to request access on Hugging Face, using the same email address you provided in the Meta form.

After you are granted access, please log in to your Hugging Face account by running the following command (you will need an access token, which you can get from your user profile page):

```bash
huggingface-cli login
```

You also need to install the latest version of Optimum Habana and clone the repo to access the pipeline script. Here are the commands to do so:

```bash
pip install optimum-habana==1.10.4
git clone -b v1.10-release https://github.com/huggingface/optimum-habana.git
```

In case you are planning to run distributed inference, install DeepSpeed depending on your SynapseAI version. In this case, I am using SynapseAI 1.14.0.

```bash
pip install git+https://github.com/HabanaAI/DeepSpeed.git@1.14.0
```

Now you are all set to perform text generation with the pipeline!

Using the Pipeline

First, go to the following directory in your optimum-habana checkout where the pipeline scripts are located, and follow the instructions in the README to update your PYTHONPATH.

```bash
cd optimum-habana/examples/text-generation
pip install -r requirements.txt
cd text-generation-pipeline
```

If you wish to generate a sequence of text from a prompt of your choice, here is a sample command.

```bash
python run_pipeline.py --model_name_or_path meta-llama/Llama-2-7b-hf --use_hpu_graphs --use_kv_cache --max_new_tokens 100 --do_sample --prompt "Here is my prompt"
```

You can also pass multiple prompts as input and change the temperature and top_p values for generation as follows.

```bash
python run_pipeline.py --model_name_or_path meta-llama/Llama-2-13b-hf --use_hpu_graphs --use_kv_cache --max_new_tokens 100 --do_sample --temperature 0.5 --top_p 0.95 --prompt "Hello world" "How are you?"
```

For generating text with large models such as Llama-2-70b, here is a sample command to launch the pipeline with DeepSpeed.

```bash
python ../../gaudi_spawn.py --use_deepspeed --world_size 8 run_pipeline.py --model_name_or_path meta-llama/Llama-2-70b-hf --max_new_tokens 100 --bf16 --use_hpu_graphs --use_kv_cache --do_sample --temperature 0.5 --top_p 0.95 --prompt "Hello world" "How are you?" "Here is my prompt" "Once upon a time"
```

Usage in Python Scripts

You can use the pipeline class in your own scripts as shown in the example below. Run the following sample script from optimum-habana/examples/text-generation/text-generation-pipeline.

```python
import argparse
import logging

from pipeline import GaudiTextGenerationPipeline
from run_generation import setup_parser

# Define a logger
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    level=logging.INFO,
)
logger = logging.getLogger(__name__)

# Set up an argument parser
parser = argparse.ArgumentParser()
args = setup_parser(parser)

# Define some pipeline arguments. Note that --model_name_or_path is a required argument for this script
args.num_return_sequences = 1
args.model_name_or_path = "meta-llama/Llama-2-7b-hf"
args.max_new_tokens = 100
args.use_hpu_graphs = True
args.use_kv_cache = True
args.do_sample = True

# Initialize the pipeline
pipe = GaudiTextGenerationPipeline(args, logger)

# You can provide input prompts as strings
prompts = ["He is working on", "Once upon a time", "Far far away"]

# Generate text with pipeline
for prompt in prompts:
    print(f"Prompt: {prompt}")
    output = pipe(prompt)
    print(f"Generated Text: {repr(output)}")
```

You will have to run the above script with python <name_of_script>.py --model_name_or_path a_model_name, as --model_name_or_path is a required argument. However, the model name can be changed programmatically, as shown in the Python snippet. This shows us that the pipeline class operates on a string input and performs data pre-processing as well as post-processing for us.

LangChain Compatibility

The text-generation pipeline can be fed as input to LangChain classes via the use_with_langchain constructor argument. You can install LangChain as follows.

```bash
pip install langchain==0.0.191
```

Here is a sample script that shows how the pipeline class can be used with LangChain.

```python
import argparse
import logging

from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from pipeline import GaudiTextGenerationPipeline
from run_generation import setup_parser

# Define a logger
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    level=logging.INFO,
)
logger = logging.getLogger(__name__)

# Set up an argument parser
parser = argparse.ArgumentParser()
args = setup_parser(parser)

# Define some pipeline arguments. Note that --model_name_or_path is a required argument for this script
args.num_return_sequences = 1
args.model_name_or_path = "meta-llama/Llama-2-13b-chat-hf"
args.max_input_tokens = 2048
args.max_new_tokens = 1000
args.use_hpu_graphs = True
args.use_kv_cache = True
args.do_sample = True
args.temperature = 0.2
args.top_p = 0.95

# Initialize the pipeline
pipe = GaudiTextGenerationPipeline(args, logger, use_with_langchain=True)

# Create LangChain object
llm = HuggingFacePipeline(pipeline=pipe)

template = """Use the following pieces of context to answer the question at the end. If you don't know the answer,\
just say that you don't know, don't try to make up an answer.

Context: Large Language Models (LLMs) are the latest models used in NLP.
Their superior performance over smaller models has made them incredibly
useful for developers building NLP enabled applications. These models
can be accessed via Hugging Face's `transformers` library, via OpenAI
using the `openai` library, and via Cohere using the `cohere` library.

Question: {question}

Answer: """

prompt = PromptTemplate(input_variables=["question"], template=template)
llm_chain = LLMChain(prompt=prompt, llm=llm)

# Use LangChain object
question = "Which libraries and model providers offer LLMs?"
response = llm_chain(prompt.format(question=question))
print(f"Question 1: {question}")
print(f"Response 1: {response['text']}")

question = "What is the provided context about?"
response = llm_chain(prompt.format(question=question))
print(f"Question 2: {question}")
print(f"Response 2: {response['text']}")
```

The pipeline class has been validated for LangChain version 0.0.191 and may not work with other versions of the package.

Conclusion

We presented a custom text-generation pipeline on the Intel® Gaudi® 2 AI accelerator that accepts single or multiple prompts as input. This pipeline offers great flexibility in terms of model size as well as parameters affecting text-generation quality. Furthermore, it is also very easy to use and to plug into your scripts, and it is compatible with LangChain.

Use of the pretrained model is subject to compliance with third-party licenses, including the "Llama 2 Community License Agreement" (LLAMAV2). For guidance on the intended use of the LLAMA2 model, what will be considered misuse and out-of-scope uses, who are the intended users, and additional terms, please review and read the instructions in this link: https://ai.meta.com/llama/license/. Users bear sole liability and responsibility to follow and comply with any third-party licenses, and Habana Labs disclaims and will bear no liability with respect to users' use of or compliance with third-party licenses.

To be able to run gated models like this Llama-2-70b-hf, you need the following:

- Have a Hugging Face account
- Agree to the terms of use of the model in its model card on the HF Hub
- Set a read token
- Log in to your account using the HF CLI: run huggingface-cli login before launching your script
https://huggingface.co/blog/pollen-vision
Pollen-Vision: Unified interface for Zero-Shot vision models in robotics
Antoine Pirrone, Simon Le Goff, Rouanet, Simon Revelly
March 25, 2024
This is a guest blog post by the Pollen Robotics team. We are the creators of Reachy, an open-source humanoid robot designed for manipulation in the real world.

In the context of autonomous behaviors, the essence of a robot's usability lies in its ability to understand and interact with its environment. This understanding primarily comes from visual perception, which enables robots to identify objects, recognize people, navigate spaces, and much more.

We're excited to share the initial launch of our open-source pollen-vision library, a first step towards empowering our robots with the autonomy to grasp unknown objects. This library is a carefully curated collection of vision models chosen for their direct applicability to robotics. Pollen-vision is designed for ease of installation and use, and is composed of independent modules that can be combined to create a 3D object detection pipeline, getting the position of the objects in 3D space (x, y, z). We focused on selecting zero-shot models, eliminating the need for any training and making these tools instantly usable right out of the box.

Our initial release is focused on 3D object detection, laying the groundwork for tasks like robotic grasping by providing a reliable estimate of objects' spatial coordinates. Currently limited to positioning within a 3D space (not extending to full 6D pose estimation), this functionality establishes a solid foundation for basic robotic manipulation tasks.

The Core Models of Pollen-Vision

The library encapsulates several key models. We want the models we use to be zero-shot and versatile, allowing a wide range of detectable objects without re-training. The models also have to be "real-time capable", meaning they should run at least at a few fps on a consumer GPU. The first models we chose are:

- OWL-ViT (Open World Localization - Vision Transformer, by Google Research): This model performs text-conditioned zero-shot 2D object localization in RGB images. It outputs bounding boxes (like YOLO).
- MobileSAM: A lightweight version of the Segment Anything Model (SAM) by Meta AI. SAM is a zero-shot image segmentation model. It can be prompted with bounding boxes or points.
- RAM (Recognize Anything Model, by the OPPO Research Institute): Designed for zero-shot image tagging, RAM can determine the presence of an object in an image based on textual descriptions, laying the groundwork for further analysis.

Get started in very few lines of code!

Below is an example of how to use pollen-vision to build a simple object detection and segmentation pipeline, taking only images and text as input.

```python
from pollen_vision.vision_models.object_detection import OwlVitWrapper
from pollen_vision.vision_models.object_segmentation import MobileSamWrapper
from pollen_vision.vision_models.utils import Annotator, get_bboxes

owl = OwlVitWrapper()
sam = MobileSamWrapper()
annotator = Annotator()

im = ...

predictions = owl.infer(im, ["paper cups"])  # zero-shot object detection
bboxes = get_bboxes(predictions)

masks = sam.infer(im, bboxes=bboxes)  # zero-shot object segmentation
annotated_im = annotator.annotate(im, predictions, masks=masks)
```

OWL-ViT's inference time depends on the number of prompts provided (i.e., the number of objects to detect). On a laptop with an RTX 3070 GPU:

- 1 prompt: ~75 ms per frame
- 2 prompts: ~130 ms per frame
- 3 prompts: ~180 ms per frame
- 4 prompts: ~240 ms per frame
- 5 prompts: ~330 ms per frame
- 10 prompts: ~650 ms per frame

So it is interesting, performance-wise, to only prompt OWL-ViT with objects that we know are in the image. That's where RAM is useful, as it is fast and provides exactly this information.

A robotics use case: grasping unknown objects in unconstrained environments

With the object's segmentation mask, we can estimate its (u, v) position in pixel space by computing the centroid of the binary mask. Here, having the segmentation mask is very useful because it allows us to average the depth values inside the mask rather than inside the full bounding box, which also contains background that would skew the average.

One way to do that is by averaging the u and v coordinates of the non-zero pixels in the mask:

```python
import numpy as np


def get_centroid(mask):
    x_center, y_center = np.argwhere(mask == 1).sum(0) / np.count_nonzero(mask)
    return int(y_center), int(x_center)
```

We can now bring in depth information in order to estimate the z coordinate of the object. The depth values are already in meters, but the (u, v) coordinates are expressed in pixels. We can get the (x, y, z) position of the centroid of the object in meters using the camera's intrinsic matrix (K):

```python
def uv_to_xyz(z, u, v, K):
    cx = K[0, 2]
    cy = K[1, 2]
    fx = K[0, 0]
    fy = K[1, 1]

    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    return np.array([x, y, z])
```

(A short sketch combining these two helpers is included at the end of this post.)

We now have an estimation of the 3D position of the object in the camera's reference frame. If we know where the camera is positioned relative to the robot's origin frame, we can perform a simple transformation to get the 3D position of the object in the robot's frame. This means we can move the end effector of our robot to where the object is, and grasp it! 🥳

What's next?

What we presented in this post is a first step towards our goal, which is autonomous grasping of unknown objects in the wild. There are a few issues that still need addressing:

- OWL-ViT does not detect everything every time and can be inconsistent. We are looking for a better option.
- There is no temporal or spatial consistency so far; everything is recomputed every frame. We are currently working on integrating a point tracking solution to enhance the consistency of the detections.
- The grasping technique (only front grasp for now) was not the focus of this work. We will be working on different approaches to enhance the grasping capabilities in terms of perception (6D detection) and grasping pose generation.
- Overall speed could be improved.

Try pollen-vision

Wanna try pollen-vision? Check out our GitHub repository!
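As a follow-up, here is a minimal sketch (my own illustration, not code from the original post) showing how the two helpers above could be combined to go from a segmentation mask plus an aligned depth map to a 3D point in the camera frame. The mask, depth map, and intrinsic matrix below are made-up placeholders, and get_centroid and uv_to_xyz are assumed to be the functions defined above:

```python
import numpy as np

# Placeholder inputs: a binary mask (H, W), a depth map in meters aligned with
# the RGB image (H, W), and the camera intrinsic matrix K (3, 3)
mask = np.zeros((480, 640), dtype=np.uint8)
mask[200:260, 300:360] = 1
depth = np.full((480, 640), 0.8)   # pretend the object sits ~0.8 m away
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# (u, v) centroid of the mask, using get_centroid() from above
u, v = get_centroid(mask)

# Average the depth values inside the mask only, not over the whole bounding box
z = depth[mask == 1].mean()

# Back-project to metric coordinates with the intrinsics, using uv_to_xyz() from above
xyz_camera = uv_to_xyz(z, u, v, K)
print(xyz_camera)  # approximate (x, y, z) of the object's centroid, in meters
```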
https://huggingface.co/blog/segmoe
SegMoE: Segmind Mixture of Diffusion Experts
Yatharth Gupta, Vishnu V Jaddipal, Harish Prabhala
February 3, 2024
SegMoE is an exciting framework for creating Mixture-of-Experts Diffusion models from scratch! SegMoE is comprehensively integrated within the Hugging Face ecosystem and comes supported with diffusers 🔥!

Among the features and integrations being released today:

- Models on the Hub, with their model cards and licenses (Apache 2.0)
- Github Repository to create your own MoE-style models.

Table of Contents

- What is SegMoE
- About the name
- Inference
  - Samples
  - Using 🤗 Diffusers
  - Using a Local Model
- Comparison
- Creating your Own SegMoE
- Disclaimers and ongoing work
- Additional Resources
- Conclusion

What is SegMoE?

SegMoE models follow the same architecture as Stable Diffusion. Like Mixtral 8x7b, a SegMoE model comes with multiple models in one. The way this works is by replacing some Feed-Forward layers with a sparse MoE layer. A MoE layer contains a router network to select which experts process which tokens most efficiently. You can use the segmoe package to create your own MoE models! The process takes just a few minutes. For further information, please visit the Github Repository. We take inspiration from the popular library mergekit to design segmoe. We thank the contributors of mergekit for such a useful library.

For more details on MoEs, see the Hugging Face 🤗 post: hf.co/blog/moe.

SegMoE release TL;DR:

- Release of SegMoE-4x2, SegMoE-2x1 and SegMoE-SD4x2 versions
- Release of custom MoE-making code

About the name

The SegMoE MoEs are called SegMoE-AxB, where A refers to the number of expert models MoE-d together, while the second number refers to the number of experts involved in the generation of each image. Only some layers of the model (the feed-forward blocks, attentions, or all) are replicated depending on the configuration settings; the rest of the parameters are the same as in a Stable Diffusion model. For more details about how MoEs work, please refer to the "Mixture of Experts Explained" post.

Inference

We release 3 merges on the Hub:

- SegMoE 2x1 has two expert models.
- SegMoE 4x2 has four expert models.
- SegMoE SD 4x2 has four Stable Diffusion 1.5 expert models.

Samples

Images generated using SegMoE 4x2
Images generated using SegMoE 2x1
Images generated using SegMoE SD 4x2

Using 🤗 Diffusers

Please run the following command to install the segmoe package. Make sure you have the latest version of diffusers and transformers installed.

```bash
pip install -U segmoe diffusers transformers
```

The following loads up the second model ("SegMoE 4x2") from the list above, and runs generation on it.

```python
from segmoe import SegMoEPipeline

pipeline = SegMoEPipeline("segmind/SegMoE-4x2-v0", device="cuda")

prompt = "cosmic canvas, orange city background, painting of a chubby cat"
negative_prompt = "nsfw, bad quality, worse quality"
img = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=1024,
    width=1024,
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
img.save("image.png")
```

Using a Local Model

Alternatively, a local model can also be loaded up; here segmoe_v0 is the path to the directory containing the local SegMoE model. Check out Creating your Own SegMoE to learn how to build your own!

```python
from segmoe import SegMoEPipeline

pipeline = SegMoEPipeline("segmoe_v0", device="cuda")

prompt = "cosmic canvas, orange city background, painting of a chubby cat"
negative_prompt = "nsfw, bad quality, worse quality"
img = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=1024,
    width=1024,
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
img.save("image.png")
```

Comparison

Prompt understanding seems to improve, as shown in the images below. Each image shows the following models left to right: SegMoE-2x1-v0, SegMoE-4x2-v0, Base Model (RealVisXL_V3.0). The prompts were:

- three green glass bottles
- panda bear with aviator glasses on its head
- the statue of Liberty next to the Washington Monument
- Taj Mahal with its reflection. detailed charcoal sketch.

Creating your Own SegMoE

Simply prepare a config.yaml file, with the following structure:

```yaml
base_model: Base Model Path, Model Card or CivitAI Download Link
num_experts: Number of experts to use
moe_layers: Type of Layers to Mix (can be "ff", "attn" or "all"). Defaults to "attn"
num_experts_per_tok: Number of Experts to use
experts:
  - source_model: Expert 1 Path, Model Card or CivitAI Download Link
    positive_prompt: Positive Prompt for computing gate weights
    negative_prompt: Negative Prompt for computing gate weights
  - source_model: Expert 2 Path, Model Card or CivitAI Download Link
    positive_prompt: Positive Prompt for computing gate weights
    negative_prompt: Negative Prompt for computing gate weights
  - source_model: Expert 3 Path, Model Card or CivitAI Download Link
    positive_prompt: Positive Prompt for computing gate weights
    negative_prompt: Negative Prompt for computing gate weights
  - source_model: Expert 4 Path, Model Card or CivitAI Download Link
    positive_prompt: Positive Prompt for computing gate weights
    negative_prompt: Negative Prompt for computing gate weights
```

Any number of models can be combined. For detailed information on how to create a config file, please refer to the github repository.

Note: Both Hugging Face and CivitAI models are supported. For CivitAI models, paste the download link of the model, for example: "https://civitai.com/api/download/models/239306"

Then run the following command:

```bash
segmoe config.yaml segmoe_v0
```

This will create a folder called segmoe_v0 with the following structure:

```
├── model_index.json
├── scheduler
│   └── scheduler_config.json
├── text_encoder
│   ├── config.json
│   └── model.safetensors
├── text_encoder_2
│   ├── config.json
│   └── model.safetensors
├── tokenizer
│   ├── merges.txt
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── tokenizer_2
│   ├── merges.txt
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── unet
│   ├── config.json
│   └── diffusion_pytorch_model.safetensors
└── vae
    ├── config.json
    └── diffusion_pytorch_model.safetensors
```

Alternatively, you can also use the Python API to create a mixture of experts model:

```python
from segmoe import SegMoEPipeline

pipeline = SegMoEPipeline("config.yaml", device="cuda")
pipeline.save_pretrained("segmoe_v0")
```

Push to Hub

The model can be pushed to the Hub via the huggingface-cli:

```bash
huggingface-cli upload segmind/segmoe_v0 ./segmoe_v0
```

The model can also be pushed to the Hub directly from Python:

```python
from huggingface_hub import create_repo, upload_folder

model_id = "segmind/SegMoE-v0"

repo_id = create_repo(repo_id=model_id, exist_ok=True).repo_id

upload_folder(
    repo_id=repo_id,
    folder_path="segmoe_v0",
    commit_message="Initial Commit",
    ignore_patterns=["step_*", "epoch_*"],
)
```

Detailed usage can be found here.

Disclaimers and ongoing work

- Slower Speed: If the number of experts per token is larger than 1, the MoE performs computation across several expert models. This makes it slower than a single SD 1.5 or SDXL model.
- High VRAM usage: MoEs run inference very quickly but still need a large amount of VRAM (and hence an expensive GPU). This makes it challenging to use them in local setups, but they are great for deployments with multiple GPUs. As a reference point, SegMoE-4x2 requires 24GB of VRAM in half-precision.

Conclusion

We built SegMoE to provide the community a new tool that can potentially create SOTA Diffusion Models with ease, just by combining pretrained models while keeping inference times low. We're excited to see what you can build with it!

Additional Resources

- Mixture of Experts Explained
- Mixture of Experts Models on Hugging Face
https://huggingface.co/blog/setfit-optimum-intel
Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon
Daniel Korat, Tom Aarsen, Oren Pereg, Moshe Wasserblat, Ella Charlaix, Abirami Prabhakaran
April 3, 2024
SetFit is a promising solution for a common modeling problem: how to deal with a lack of labeled data for training. Developed with Hugging Face's research partners at Intel Labs and the UKP Lab, SetFit is an efficient framework for few-shot fine-tuning of Sentence Transformers models.

SetFit achieves high accuracy with little labeled data; for example, SetFit outperforms GPT-3.5 in 3-shot prompting, and with 5 shots it also outperforms 3-shot GPT-4 on the Banking 77 financial intent dataset.

Compared to LLM-based methods, SetFit has two unique advantages:

- 🗣 No prompts or verbalisers: few-shot in-context learning with LLMs requires handcrafted prompts which make the results brittle, sensitive to phrasing and dependent on user expertise. SetFit dispenses with prompts altogether by generating rich embeddings directly from a small number of labeled text examples.
- 🏎 Fast to train: SetFit doesn't rely on LLMs such as GPT-3.5 or Llama2 to achieve high accuracy. As a result, it is typically an order of magnitude (or more) faster to train and run inference with.

For more details on SetFit, check out our paper, blog, code, and data.

SetFit has been widely adopted by the AI developer community, with ~100k downloads per month and ~1500 SetFit models on the Hub, growing by an average of ~4 models per day!

Faster!

In this blog post, we'll explain how you can accelerate inference with SetFit by 7.8x on Intel CPUs by optimizing your SetFit model with 🤗 Optimum Intel. We'll show how you can achieve huge throughput gains by performing a simple post-training quantization step on your model. This can enable production-grade deployment of SetFit solutions using Intel Xeon CPUs.

Optimum Intel is an open-source library that accelerates end-to-end pipelines built with Hugging Face libraries on Intel hardware. Optimum Intel includes several techniques to accelerate models such as low-bit quantization, model weight pruning, distillation, and an accelerated runtime.

The runtime and optimizations included in Optimum Intel take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512), Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs to accelerate models. Specifically, it has built-in BFloat16 (bf16) and int8 GEMM accelerators in every core to accelerate deep learning training and inference workloads. AMX accelerated inference is introduced in PyTorch 2.0 and Intel Extension for PyTorch (IPEX), in addition to other optimizations for various common operators.

Optimizing pre-trained models can be done easily with Optimum Intel; many simple examples can be found here. Our blog is accompanied by a notebook for a step-by-step walkthrough.

Step 1: Quantize the SetFit Model using 🤗 Optimum Intel

In order to optimize our SetFit model, we will apply quantization to the model body, using Intel Neural Compressor (INC), part of Optimum Intel.

Quantization is a very popular deep learning model optimization technique for improving inference speeds. It minimizes the number of bits required to represent the weights and/or activations in a neural network. This is done by converting a set of high-precision numbers into a lower-bit data representation, such as INT8. Moreover, quantization can enable faster computations in lower precision.

Specifically, we'll apply post-training static quantization (PTQ). PTQ can reduce the memory footprint and latency for inference, while still preserving the accuracy of the model, with only a small unlabeled calibration set and without any training. Before you begin, make sure you have all the necessary libraries installed and that your version of Optimum Intel is at least 1.14.0, since the functionality was introduced in that version:

```bash
pip install --upgrade-strategy eager optimum[ipex]
```

Prepare a Calibration Dataset

The calibration dataset should be able to represent the distribution of unseen data. In general, preparing 100 samples is enough for calibration. We'll use the rotten_tomatoes dataset in our case, since it's composed of movie reviews, similar to our target dataset, sst2.

First, we'll load 100 random samples from this dataset. Then, to prepare the dataset for quantization, we'll need to tokenize each example. We won't need the "text" and "label" columns, so let's remove them.

```python
calibration_set = load_dataset("rotten_tomatoes", split="train").shuffle(seed=42).select(range(100))


def tokenize(examples):
    return tokenizer(examples["text"], padding="max_length", max_length=512, truncation=True)


tokenizer = setfit_model.model_body.tokenizer
calibration_set = calibration_set.map(tokenize, remove_columns=["text", "label"])
```

Run Quantization

Before we run quantization, we need to define the desired quantization process, in our case Static Post Training Quantization, and use optimum.intel to run the quantization on our calibration dataset:

```python
from optimum.intel import INCQuantizer
from neural_compressor.config import PostTrainingQuantConfig

setfit_body = setfit_model.model_body[0].auto_model
quantizer = INCQuantizer.from_pretrained(setfit_body)

optimum_model_path = "/tmp/bge-small-en-v1.5_setfit-sst2-english_opt"
quantization_config = PostTrainingQuantConfig(approach="static", backend="ipex", domain="nlp")

quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_set,
    save_directory=optimum_model_path,
    batch_size=1,
)
tokenizer.save_pretrained(optimum_model_path)
```

That's it! We now have a local copy of our quantized SetFit model. Let's test it out.

Step 2: Benchmark Inference

In our notebook, we've set up a PerformanceBenchmark class to compute model latency and throughput, as well as an accuracy measure. Let's use it to benchmark our Optimum Intel model with two other commonly used methods:

- Using PyTorch and the 🤗 Transformers library with fp32.
- Using the Intel Extension for PyTorch (IPEX) runtime with bf16 and tracing the model using TorchScript.

Load our test dataset, sst2, and run the benchmark using PyTorch and the 🤗 Transformers library:

```python
from datasets import load_dataset
from setfit import SetFitModel

test_dataset = load_dataset("SetFit/sst2")["validation"]

model_path = "dkorat/bge-small-en-v1.5_setfit-sst2-english"
setfit_model = SetFitModel.from_pretrained(model_path)
pb = PerformanceBenchmark(
    model=setfit_model,
    dataset=test_dataset,
    optim_type="bge-small (transformers)",
)
perf_metrics = pb.run_benchmark()
```

For the second benchmark, we'll use Intel Extension for PyTorch (IPEX) with bf16 precision and TorchScript tracing. To use IPEX we simply import the IPEX library and apply ipex.optimize() to the target model, which, in our case, is the SetFit (transformer) model body:

```python
dtype = torch.bfloat16
body = ipex.optimize(setfit_model.model_body, dtype=dtype)
```

For TorchScript tracing, we generate a random sequence based on the model's maximum input length, with tokens sampled from the tokenizer's vocabulary:

```python
tokenizer = setfit_model.model_body.tokenizer
d = generate_random_sequences(batch_size=1, length=tokenizer.model_max_length, vocab_size=tokenizer.vocab_size)

body = torch.jit.trace(body, (d,), check_trace=False, strict=False)
setfit_model.model_body = torch.jit.freeze(body)
```

Now let's run the benchmark using our quantized Optimum model. We'll first need to define a wrapper around our SetFit model which plugs in our quantized model body at inference (instead of the original model body). Then, we can run the benchmark using this wrapper.

```python
from optimum.intel import IPEXModel


class OptimumSetFitModel:
    def __init__(self, setfit_model, model_body):
        model_body.tokenizer = setfit_model.model_body.tokenizer
        self.model_body = model_body
        self.model_head = setfit_model.model_head


optimum_model = IPEXModel.from_pretrained(optimum_model_path)
optimum_setfit_model = OptimumSetFitModel(setfit_model, model_body=optimum_model)

pb = PerformanceBenchmark(
    model=optimum_setfit_model,
    dataset=test_dataset,
    optim_type=f"bge-small (optimum-int8)",
    model_path=optimum_model_path,
    autocast_dtype=torch.bfloat16,
)
perf_metrics.update(pb.run_benchmark())
```

Results

Accuracy vs latency at batch size = 1:

|                      | bge-small (transformers) | bge-small (ipex-bfloat16) | bge-small (optimum-int8) |
|----------------------|--------------------------|---------------------------|--------------------------|
| Model Size           | 127.32 MB                | 63.74 MB                  | 44.65 MB                 |
| Accuracy on test set | 88.4%                    | 88.4%                     | 88.1%                    |
| Latency (bs=1)       | 15.69 +/- 0.57 ms        | 5.67 +/- 0.66 ms          | 4.55 +/- 0.25 ms         |

When inspecting the performance at batch size 1, there's a 3.45x reduction in latency with our optimized model. Note that this is achieved with virtually no drop in accuracy! It's also worth mentioning that the model size has shrunk by 2.85x.

We move on to our main focus, which is the reported throughputs with different batch sizes. Here, the optimization has garnered even greater speedups. When comparing the highest achievable throughput (at any batch size), the optimized model is 7.8x faster than the original transformers fp32 model!

Summary

In this blog post, we have shown how to use the quantization capabilities present in 🤗 Optimum Intel to optimize SetFit models. After running a quick and easy post-training quantization procedure, we've observed that the accuracy level was preserved, while inference throughput increased by 7.8x. This optimization method can be readily applied to any existing SetFit deployment running on Intel Xeon.

References

Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, Oren Pereg, 2022. "Efficient Few-Shot Learning Without Prompts". https://arxiv.org/abs/2209.11055
https://huggingface.co/blog/deep-rl-ppo
Proximal Policy Optimization (PPO)
Thomas Simonini
August 5, 2022
Unit 8, of the Deep Reinforcement Learning Class with Hugging Face 🤗⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introductionThis article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus here.⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introductionThis article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus here.In the last Unit, we learned about Advantage Actor Critic (A2C), a hybrid architecture combining value-based and policy-based methods that help to stabilize the training by reducing the variance with:An Actor that controls how our agent behaves (policy-based method).A Critic that measures how good the action taken is (value-based method).Today we'll learn about Proximal Policy Optimization (PPO), an architecture that improves our agent's training stability by avoiding too large policy updates. To do that, we use a ratio that will indicates the difference between our current and old policy and clip this ratio from a specific range [1−ϵ,1+ϵ] [1 - \epsilon, 1 + \epsilon] [1−ϵ,1+ϵ] .Doing this will ensure that our policy update will not be too large and that the training is more stable.And then, after the theory, we'll code a PPO architecture from scratch using PyTorch and bulletproof our implementation with CartPole-v1 and LunarLander-v2.Sounds exciting? Let's get started!The intuition behind PPOIntroducing the Clipped Surrogate ObjectiveRecap: The Policy Objective FunctionThe Ratio FunctionThe unclipped part of the Clipped Surrogate Objective functionThe clipped Part of the Clipped Surrogate Objective functionVisualize the Clipped Surrogate ObjectiveCase 1 and 2: the ratio is between the rangeCase 3 and 4: the ratio is below the rangeCase 5 and 6: the ratio is above the rangeLet's code our PPO AgentThe intuition behind PPOThe idea with Proximal Policy Optimization (PPO) is that we want to improve the training stability of the policy by limiting the change you make to the policy at each training epoch: we want to avoid having too large policy updates.For two reasons:We know empirically that smaller policy updates during training are more likely to converge to an optimal solution.A too big step in a policy update can result in falling “off the cliff” (getting a bad policy) and having a long time or even no possibility to recover.Taking smaller policy updates improve the training stabilityModified version from RL — Proximal Policy Optimization (PPO) Explained by Jonathan Hui: https://jonathan-hui.medium.com/rl-proximal-policy-optimization-ppo-explained-77f014ec3f12So with PPO, we update the policy conservatively. To do so, we need to measure how much the current policy changed compared to the former one using a ratio calculation between the current and former policy. 
And we clip this ratio in a range [1−ϵ,1+ϵ] [1 - \epsilon, 1 + \epsilon] [1−ϵ,1+ϵ], meaning that we remove the incentive for the current policy to go too far from the old one (hence the proximal policy term).Introducing the Clipped Surrogate ObjectiveRecap: The Policy Objective FunctionLet’s remember what is the objective to optimize in Reinforce:The idea was that by taking a gradient ascent step on this function (equivalent to taking gradient descent of the negative of this function), we would push our agent to take actions that lead to higher rewards and avoid harmful actions.However, the problem comes from the step size:Too small, the training process was too slowToo high, there was too much variability in the trainingHere with PPO, the idea is to constrain our policy update with a new objective function called the Clipped surrogate objective function that will constrain the policy change in a small range using a clip.This new function is designed to avoid destructive large weights updates :Let’s study each part to understand how it works.The Ratio FunctionThis ratio is calculated this way:It’s the probability of taking action at a_t at​ at state st s_t st​ in the current policy divided by the previous one.As we can see, rt(θ) r_t(\theta) rt​(θ) denotes the probability ratio between the current and old policy:If rt(θ)>1 r_t(\theta) > 1 rt​(θ)>1, the action at a_t at​ at state st s_t st​ is more likely in the current policy than the old policy.If rt(θ) r_t(\theta) rt​(θ) is between 0 and 1, the action is less likely for the current policy than for the old one.So this probability ratio is an easy way to estimate the divergence between old and current policy.The unclipped part of the Clipped Surrogate Objective functionThis ratio can replace the log probability we use in the policy objective function. This gives us the left part of the new objective function: multiplying the ratio by the advantage.Proximal Policy Optimization AlgorithmsHowever, without a constraint, if the action taken is much more probable in our current policy than in our former, this would lead to a significant policy gradient step and, therefore, an excessive policy update.The clipped Part of the Clipped Surrogate Objective functionConsequently, we need to constrain this objective function by penalizing changes that lead to a ratio away from 1 (in the paper, the ratio can only vary from 0.8 to 1.2).By clipping the ratio, we ensure that we do not have a too large policy update because the current policy can't be too different from the older one.To do that, we have two solutions:TRPO (Trust Region Policy Optimization) uses KL divergence constraints outside the objective function to constrain the policy update. 
But this method is complicated to implement and takes more computation time.PPO clip probability ratio directly in the objective function with its Clipped surrogate objective function.This clipped part is a version where rt(theta) is clipped between [1−ϵ,1+ϵ] [1 - \epsilon, 1 + \epsilon] [1−ϵ,1+ϵ].With the Clipped Surrogate Objective function, we have two probability ratios, one non-clipped and one clipped in a range (between [1−ϵ,1+ϵ] [1 - \epsilon, 1 + \epsilon] [1−ϵ,1+ϵ], epsilon is a hyperparameter that helps us to define this clip range (in the paper ϵ=0.2 \epsilon = 0.2 ϵ=0.2.).Then, we take the minimum of the clipped and non-clipped objective, so the final objective is a lower bound (pessimistic bound) of the unclipped objective.Taking the minimum of the clipped and non-clipped objective means we'll select either the clipped or the non-clipped objective based on the ratio and advantage situation.Visualize the Clipped Surrogate ObjectiveDon't worry. It's normal if this seems complex to handle right now. But we're going to see what this Clipped Surrogate Objective Function looks like, and this will help you to visualize better what's going on.Table from "Towards Delivering a Coherent Self-ContainedExplanation of Proximal Policy Optimization" by Daniel BickWe have six different situations. Remember first that we take the minimum between the clipped and unclipped objectives.Case 1 and 2: the ratio is between the rangeIn situations 1 and 2, the clipping does not apply since the ratio is between the range [1−ϵ,1+ϵ] [1 - \epsilon, 1 + \epsilon] [1−ϵ,1+ϵ]In situation 1, we have a positive advantage: the action is better than the average of all the actions in that state. Therefore, we should encourage our current policy to increase the probability of taking that action in that state.Since the ratio is between intervals, we can increase our policy's probability of taking that action at that state.In situation 2, we have a negative advantage: the action is worse than the average of all actions at that state. Therefore, we should discourage our current policy from taking that action in that state.Since the ratio is between intervals, we can decrease the probability that our policy takes that action at that state. Case 3 and 4: the ratio is below the rangeTable from "Towards Delivering a Coherent Self-ContainedExplanation of Proximal Policy Optimization" by Daniel BickIf the probability ratio is lower than [1−ϵ] [1 - \epsilon] [1−ϵ], the probability of taking that action at that state is much lower than with the old policy.If, like in situation 3, the advantage estimate is positive (A>0), then you want to increase the probability of taking that action at that state.But if, like situation 4, the advantage estimate is negative, we don't want to decrease further the probability of taking that action at that state. Therefore, the gradient is = 0 (since we're on a flat line), so we don't update our weights.Case 5 and 6: the ratio is above the rangeTable from "Towards Delivering a Coherent Self-ContainedExplanation of Proximal Policy Optimization" by Daniel BickIf the probability ratio is higher than [1+ϵ] [1 + \epsilon] [1+ϵ], the probability of taking that action at that state in the current policy is much higher than in the former policy.If, like in situation 5, the advantage is positive, we don't want to get too greedy. We already have a higher probability of taking that action at that state than the former policy. 
Therefore, the gradient is = 0 (since we're on a flat line), so we don't update our weights.If, like in situation 6, the advantage is negative, we want to decrease the probability of taking that action at that state.So if we recap, we only update the policy with the unclipped objective part. When the minimum is the clipped objective part, we don't update our policy weights since the gradient will equal 0. So we update our policy only if:Our ratio is in the range [1−ϵ,1+ϵ] [1 - \epsilon, 1 + \epsilon] [1−ϵ,1+ϵ]Our ratio is outside the range, but the advantage leads to getting closer to the rangeBeing below the ratio but the advantage is > 0Being above the ratio but the advantage is < 0You might wonder why, when the minimum is the clipped ratio, the gradient is 0. When the ratio is clipped, the derivative in this case will not be the derivative of the rt(θ)∗At r_t(\theta) * A_t rt​(θ)∗At​ but the derivative of either (1−ϵ)∗At (1 - \epsilon)* A_t(1−ϵ)∗At​ or the derivative of (1+ϵ)∗At (1 + \epsilon)* A_t(1+ϵ)∗At​ which both = 0.To summarize, thanks to this clipped surrogate objective, we restrict the range that the current policy can vary from the old one. Because we remove the incentive for the probability ratio to move outside of the interval since, the clip have the effect to gradient. If the ratio is > 1+ϵ 1 + \epsilon 1+ϵ or < 1−ϵ 1 - \epsilon 1−ϵ the gradient will be equal to 0.The final Clipped Surrogate Objective Loss for PPO Actor-Critic style looks like this, it's a combination of Clipped Surrogate Objective function, Value Loss Function and Entropy bonus:That was quite complex. Take time to understand these situations by looking at the table and the graph. You must understand why this makes sense. If you want to go deeper, the best resource is the article Towards Delivering a Coherent Self-Contained Explanation of Proximal Policy Optimization" by Daniel Bick, especially part 3.4.Let's code our PPO AgentNow that we studied the theory behind PPO, the best way to understand how it works is to implement it from scratch. Implementing an architecture from scratch is the best way to understand it, and it's a good habit. We have already done it for a value-based method with Q-Learning and a Policy-based method with Reinforce.So, to be able to code it, we're going to use two resources:A tutorial made by Costa Huang. Costa is behind CleanRL, a Deep Reinforcement Learning library that provides high-quality single-file implementation with research-friendly features.In addition to the tutorial, to go deeper, you can read the 13 core implementation details: https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/Then, to test its robustness, we're going to train it in 2 different classical environments:Cartpole-v1LunarLander-v2And finally, we will be push the trained model to the Hub to evaluate and visualize your agent playing.LunarLander-v2 is the first environment you used when you started this course. At that time, you didn't know how it worked, and now, you can code it from scratch and train it. How incredible is that 🤩.via GIPHYStart the tutorial here 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit8/unit8.ipynbCongrats on finishing this chapter! There was a lot of information. And congrats on finishing the tutorial. 🥳, this was one of the hardest of the course.Don't hesitate to train your agent in other environments. The best way to learn is to try things on your own!I want you to think about your progress since the first Unit. 
With these eight units, you've built a strong background in Deep Reinforcement Learning. Congratulations!

Even though the foundations part of the course is finished, this is not the end of the journey. We're working on new elements:
Adding new environments and tutorials.
A section about multi-agents (self-play, collaboration, competition).
Another one about offline RL and Decision Transformers.
Paper-explained articles.
And more to come.

The best way to keep in touch is to sign up for the course so that we can keep you updated 👉 http://eepurl.com/h1pElX

And don't forget to share with your friends who want to learn 🤗!

Finally, with your feedback, we want to improve and update the course iteratively. If you have any, please fill out this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9

See you next time!

Keep learning, stay awesome 🤗,
https://huggingface.co/blog/fast-diffusers-coreml
Faster Stable Diffusion with Core ML on iPhone, iPad, and Mac
Pedro Cuenca
June 15, 2023
WWDC’23 (Apple Worldwide Developers Conference) was held last week. A lot of the news focused on the Vision Pro announcement during the keynote, but there’s much more to it. Like every year, WWDC week is packed with more than 200 technical sessions that dive deep inside the upcoming features across Apple operating systems and frameworks. This year we are particularly excited about changes in Core ML devoted to compression and optimization techniques. These changes make running models such as Stable Diffusion faster and with less memory use! As a taste, consider the following test I ran on my iPhone 13 back in December, compared with the current speed using 6-bit palettization:Stable Diffusion on iPhone, back in December and now with 6-bit palettization Contents New Core ML OptimizationsUsing Quantized and Optimized Stable Diffusion ModelsConverting and Optimizing Custom ModelsUsing Less than 6 bitsConclusion New Core ML Optimizations Core ML is a mature framework that allows machine learning models to run efficiently on-device, taking advantage of all the compute hardware in Apple devices: the CPU, the GPU, and the Neural Engine specialized in ML tasks. On-device execution is going through a period of extraordinary interest triggered by the popularity of models such as Stable Diffusion and Large Language Models with chat interfaces. Many people want to run these models on their hardware for a variety of reasons, including convenience, privacy, and API cost savings. Naturally, many developers are exploring ways to run these models efficiently on-device and creating new apps and use cases. Core ML improvements that contribute to achieving that goal are big news for the community!The Core ML optimization changes encompass two different (but complementary) software packages:The Core ML framework itself. This is the engine that runs ML models on Apple hardware and is part of the operating system. Models have to be exported in a special format supported by the framework, and this format is also referred to as “Core ML”.The coremltools conversion package. This is an open-source Python module whose mission is to convert PyTorch or Tensorflow models to the Core ML format.coremltools now includes a new submodule called coremltools.optimize with all the compression and optimization tools. For full details on this package, please take a look at this WWDC session. In the case of Stable Diffusion, we’ll be using 6-bit palettization, a type of quantization that compresses model weights from a 16-bit floating-point representation to just 6 bits per parameter. The name “palettization” refers to a technique similar to the one used in computer graphics to work with a limited set of colors: the color table (or “palette”) contains a fixed number of colors, and the colors in the image are replaced with the indexes of the closest colors available in the palette. This immediately provides the benefit of drastically reducing storage size, and thus reducing download time and on-device disk use.Illustration of 2-bit palettization. Image credit: Apple WWDC’23 Session Use Core ML Tools for machine learning model compression.The compressed 6-bit weights cannot be used for computation, because they are just indices into a table and no longer represent the magnitude of the original weights. Therefore, Core ML needs to uncompress the palletized weights before use. 
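To give a rough intuition of what palettization does under the hood, here is a toy NumPy/scikit-learn sketch of the lookup-table idea (purely illustrative; this is not how Core ML or coremltools implement it internally): we cluster a weight tensor into 2^6 = 64 centroids and store one small index per weight.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy example: palettize a random "weight matrix" to 6 bits (a 64-entry palette).
weights = np.random.randn(256, 256).astype(np.float32)

kmeans = KMeans(n_clusters=64, n_init=10).fit(weights.reshape(-1, 1))
palette = kmeans.cluster_centers_.flatten()   # 64 float values: the lookup table
indices = kmeans.labels_.astype(np.uint8)     # one small index per weight

# "Decompression" is just a table lookup, which is roughly what happens at inference time.
reconstructed = palette[indices].reshape(weights.shape)
print("mean absolute reconstruction error:", np.abs(weights - reconstructed).mean())
```

Storing 64 float values plus 6 bits per weight, instead of 16 bits per weight, is where the size savings come from.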
In previous versions of Core ML, uncompression took place when the model was first loaded from disk, so the amount of memory used was equal to the uncompressed model size. With the new improvements, weights are kept as 6-bit numbers and converted on the fly as inference progresses from layer to layer. This might seem slow – an inference run requires a lot of uncompressing operations –, but it’s typically more efficient than preparing all the weights in 16-bit mode! The reason is that memory transfers are in the critical path of execution, and transferring less memory is faster than transferring uncompressed data. Using Quantized and Optimized Stable Diffusion Models Last December, Apple introduced ml-stable-diffusion, an open-source repo based on diffusers to easily convert Stable Diffusion models to Core ML. It also applies optimizations to the transformers attention layers that make inference faster on the Neural Engine (on devices where it’s available). ml-stable-diffusion has just been updated after WWDC with the following:Quantization is supported using --quantize-nbits during conversion. You can quantize to 8, 6, 4, or even 2 bits! For best results, we recommend using 6-bit quantization, as the precision loss is small while achieving fast inference and significant memory savings. If you want to go lower than that, please check this section for advanced techniques.Additional optimizations of the attention layers that achieve even better performance on the Neural Engine! The trick is to split the query sequences into chunks of 512 to avoid the creation of large intermediate tensors. This method is called SPLIT_EINSUM_V2 in the code and can improve performance between 10% to 30%.In order to make it easy for everyone to take advantage of these improvements, we have converted the four official Stable Diffusion models and pushed them to the Hub. These are all the variants:ModelUncompressedPalettizedStable Diffusion 1.4Core ML, float16Core ML, 6-bit palettizedStable Diffusion 1.5Core ML, float16Core ML, 6-bit palettizedStable Diffusion 2 baseCore ML, float16Core ML, 6-bit palettizedStable Diffusion 2.1 baseCore ML, float16Core ML, 6-bit palettizedIn order to use 6-bit models, you need the development versions of iOS/iPadOS 17 or macOS 14 (Sonoma) because those are the ones that contain the latest Core ML framework. You can download them from the Apple developer site if you are a registered developer, or you can sign up for the public beta that will be released in a few weeks.Note that each variant is available in Core ML format and also as a zip archive. Zip files are ideal for native apps, such as our open-source demo app and other third party tools. If you just want to run the models on your own hardware, the easiest way is to use our demo app and select the quantized model you want to test. You need to compile the app using Xcode, but an update will be available for download in the App Store soon. 
For more details, check our previous post.Running 6-bit stable-diffusion-2-1-base model in demo appIf you want to download a particular Core ML package to integrate it in your own Xcode project, you can clone the repos or just download the version of interest using code like the following.from huggingface_hub import snapshot_downloadfrom pathlib import Pathrepo_id = "apple/coreml-stable-diffusion-2-1-base-palettized"variant = "original/packages"model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)print(f"Model downloaded at {model_path}") Converting and Optimizing Custom Models If you want to use a personalized Stable Diffusion model (for example, if you have fine-tuned or dreamboothed your own models), you can use Apple’s ml-stable-diffusion repo to do the conversion yourself. This is a brief summary of how you’d go about it, but we recommend you read the documentation details.If you want to apply quantization, you need the latest versions of coremltools, apple/ml-stable-diffusion and Xcode in order to do the conversion.Download coremltools 7.0 beta from the releases page in GitHub.Download Xcode 15.0 beta from Apple developer site.Download apple/ml-stable-diffusion from the repo and follow the installation instructions.Select the model you want to convert. You can train your own or choose one from the Hugging Face Diffusers Models Gallery. For example, let’s convert prompthero/openjourney-v4.Install apple/ml-stable-diffusion and run a first conversion using the ORIGINAL attention implementation like this:python -m python_coreml_stable_diffusion.torch2coreml \ --model-version prompthero/openjourney-v4 \ --convert-unet \ --convert-text-encoder \ --convert-vae-decoder \ --convert-vae-encoder \ --convert-safety-checker \ --quantize-nbits 6 \ --attention-implementation ORIGINAL \ --compute-unit CPU_AND_GPU \ --bundle-resources-for-swift-cli \ --check-output-correctness \ -o models/original/openjourney-6-bitUse --convert-vae-encoder if you want to use image-to-image tasks.Do not use --chunk-unet with --quantized-nbits 6 (or less), as the quantized model is small enough to work fine on both iOS and macOS.Repeat the conversion for the SPLIT_EINSUM_V2 attention implementation:python -m python_coreml_stable_diffusion.torch2coreml \ --model-version prompthero/openjourney-v4 \ --convert-unet \ --convert-text-encoder \ --convert-vae-decoder \ --convert-safety-checker \ --quantize-nbits 6 \ --attention-implementation SPLIT_EINSUM_V2 \ --compute-unit ALL \ --bundle-resources-for-swift-cli \ --check-output-correctness \ -o models/split_einsum_v2/openjourney-6-bitTest the converted models on the desired hardware. As a rule of thumb, the ORIGINAL version usually works better on macOS, whereas SPLIT_EINSUM_V2 is usually faster on iOS. For more details and additional data points, see these tests contributed by the community on the previous version of Stable Diffusion for Core ML.To integrate the desired model in your own app:If you are going to distribute the model inside the app, use the .mlpackage files. Note that this will increase the size of your app binary.Otherwise, you can use the compiled Resources to download them dynamically when your app starts.If you don’t use the --quantize-nbits option, weights will be represented as 16-bit floats. This is compatible with the current version of Core ML so you won’t need to install the betas of iOS, macOS or Xcode. 
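If you prefer to apply palettization yourself to an already converted Core ML model, instead of relying on the --quantize-nbits flag, the new coremltools.optimize.coreml module exposes the same functionality. The snippet below is a sketch based on the coremltools 7.0 beta; the file paths are placeholders and the API may still evolve between betas, so please double-check the coremltools documentation:

```python
import coremltools as ct
from coremltools.optimize.coreml import (
    OpPalettizerConfig,
    OptimizationConfig,
    palettize_weights,
)

# Load a previously converted model package (path is just an example).
model = ct.models.MLModel("models/original/openjourney/Unet.mlpackage")

# 6-bit k-means palettization applied globally to supported ops.
config = OptimizationConfig(global_config=OpPalettizerConfig(mode="kmeans", nbits=6))
compressed = palettize_weights(model, config=config)
compressed.save("models/original/openjourney/Unet-6bit.mlpackage")
```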
Using Less than 6 bits 6-bit quantization is a sweet spot between model quality, model size and convenience – you just need to provide a conversion option in order to be able to quantize any pre-trained model. This is an example of post-training compression.The beta version of coremltools released last week also includes training-time compression methods. The idea here is that you can fine-tune a pre-trained Stable Diffusion model and perform the weights compression while fine-tuning is taking place. This allows you to use 4- or even 2-bit compression while minimizing the loss in quality. The reason this works is because weight clustering is performed using a differentiable algorithm, and therefore we can apply the usual training optimizers to find the quantization table while minimizing model loss.We have plans to evaluate this method soon, and can’t wait to see how 4-bit optimized models work and how fast they run. If you beat us to it, please drop us a note and we’ll be happy to check 🙂 Conclusion Quantization methods can be used to reduce the size of Stable Diffusion models, make them run faster on-device and consume less resources. The latest versions of Core ML and coremltools support techniques like 6-bit palettization that are easy to apply and that have a minimal impact on quality. We have added 6-bit palettized models to the Hub, which are small enough to run on both iOS and macOS. We've also shown how you can convert fine-tuned models yourself, and can't wait to see what you do with these tools and techniques!
https://huggingface.co/blog/paddlepaddle
Welcome PaddlePaddle to the Hugging Face Hub
PaddlePaddle
January 17, 2023
We are happy to share an open source collaboration between Hugging Face and PaddlePaddle on a shared mission to advance and democratize AI through open source!First open sourced by Baidu in 2016, PaddlePaddle enables developers of all skill levels to adopt and implement Deep Learning at scale. As of Q4 2022, PaddlePaddle is being used by more than 5.35 million developers and 200,000 enterprises, ranking first in terms of market share among Deep Learning platforms in China. PaddlePaddle features popular open source repositories such as the Paddle Deep Learning Framework, model libraries across different modalities (e.g. PaddleOCR, PaddleDetection, PaddleNLP, PaddleSpeech), PaddleSlim for model compression, FastDeploy for model deployment and many more.With PaddleNLP leading the way, PaddlePaddle will gradually integrate its libraries with the Hugging Face Hub. You will soon be able to play with the full suite of awesome pre-trained PaddlePaddle models across text, image, audio, video and multi-modalities on the Hub!Find PaddlePaddle ModelsYou can find all PaddlePaddle models on the Model Hub by filtering with the PaddlePaddle library tag. There are already over 75 PaddlePaddle models on the Hub. As an example, you can find our multi-task Information Extraction model series UIE, State-of-the-Art Chinese Language Model ERNIE 3.0 model series, novel document pre-training model Ernie-Layout with layout knowledge enhancement in the whole workflow and so on.You are also welcome to check out the PaddlePaddle org on the HuggingFace Hub. In additional to the above-mentioned models, you can also explore our Spaces, including our text-to-image Ernie-ViLG, cross-modal Information Extraction engine UIE-X and awesome multilingual OCR toolkit PaddleOCR.Inference API and WidgetsPaddlePaddle models are available through the Inference API, which you can access through HTTP with cURL, Python’s requests library, or your preferred method for making network requests.Models that support a task are equipped with an interactive widget that allows you to play with the model directly in the browser.Use Existing ModelsIf you want to see how to load a specific model, you can click Use in paddlenlp (or other PaddlePaddle libraries in the future) and you will be given a working snippet that to load it!Share ModelsDepending on the PaddlePaddle library, you may be able to share your models by pushing to the Hub. For example, you can share PaddleNLP models by using the save_to_hf_hub method.from paddlenlp.transformers import AutoTokenizer, AutoModelForMaskedLMtokenizer = AutoTokenizer.from_pretrained("PaddlePaddle/ernie-3.0-base-zh", from_hf_hub=True)model = AutoModelForMaskedLM.from_pretrained("PaddlePaddle/ernie-3.0-base-zh", from_hf_hub=True)tokenizer.save_to_hf_hub(repo_id="<my_org_name>/<my_repo_name>")model.save_to_hf_hub(repo_id="<my_org_name>/<my_repo_name>")ConclusionPaddlePaddle is an open source Deep Learning platform that originated from industrial practice and has been open-sourcing innovative and industry-grade projects since 2016. We are excited to join the Hub to share our work with the HuggingFace community and you can expect more fun and State-of-the-Art projects from us soon! To stay up to date with the latest news, you can follow us on Twitter at @PaddlePaddle.
https://huggingface.co/blog/lcm_lora
SDXL in 4 steps with Latent Consistency LoRAs
Pedro Cuenca, Suraj Patil, Simian Luo, Daniel Gu, Yiqin Tan, Sayak Paul, Apolinário from multimodal AI art
November 9, 2023
Latent Consistency Models (LCM) are a way to decrease the number of steps required to generate an image with Stable Diffusion (or SDXL) by distilling the original model into another version that requires fewer steps (4 to 8 instead of the original 25 to 50). Distillation is a type of training procedure that attempts to replicate the outputs from a source model using a new one. The distilled model may be designed to be smaller (that’s the case of DistilBERT or the recently-released Distil-Whisper) or, in this case, require fewer steps to run. It’s usually a lengthy and costly process that requires huge amounts of data, patience, and a few GPUs.Well, that was the status quo before today!We are delighted to announce a new method that can essentially make Stable Diffusion and SDXL faster, as if they had been distilled using the LCM process! How does it sound to run any SDXL model in about 1 second instead of 7 on a 3090, or 10x faster on Mac? Read on for details!ContentsMethod OverviewWhy does this matterFast Inference with SDXL LCM LoRAsQuality ComparisonGuidance Scale and Negative PromptsQuality vs base SDXLLCM LoRAs with other ModelsFull Diffusers IntegrationBenchmarksLCM LoRAs and Models Released TodayBonus: Combine LCM LoRAs with regular SDXL LoRAsHow to train LCM LoRAsResourcesCreditsMethod OverviewSo, what’s the trick? For latent consistency distillation, each model needs to be distilled separately. The core idea with LCM LoRA is to train just a small number of adapters, known as LoRA layers, instead of the full model. The resulting LoRAs can then be applied to any fine-tuned version of the model without having to distil them separately. If you are itching to see how this looks in practice, just jump to the next section to play with the inference code. If you want to train your own LoRAs, this is the process you’d use:Select an available teacher model from the Hub. For example, you can use SDXL (base), or any fine-tuned or dreamboothed version you like.Train a LCM LoRA on the model. LoRA is a type of performance-efficient fine-tuning, or PEFT, that is much cheaper to accomplish than full model fine-tuning. For additional details on PEFT, please check this blog post or the diffusers LoRA documentation.Use the LoRA with any SDXL diffusion model and the LCM scheduler; bingo! You get high-quality inference in just a few steps.For more details on the process, please download our paper.Why does this matter?Fast inference of Stable Diffusion and SDXL enables new use-cases and workflows. To name a few:Accessibility: generative tools can be used effectively by more people, even if they don’t have access to the latest hardware.Faster iteration: get more images and multiple variants in a fraction of the time! This is great for artists and researchers; whether for personal or commercial use.Production workloads may be possible on different accelerators, including CPUs.Cheaper image generation services.To gauge the speed difference we are talking about, generating a single 1024x1024 image on an M1 Mac with SDXL (base) takes about a minute. Using the LCM LoRA, we get great results in just ~6s (4 steps). This is an order of magnitude faster, and not having to wait for results is a game-changer. Using a 4090, we get almost instant response (less than 1s). 
This unlocks the use of SDXL in applications where real-time events are a requirement.Fast Inference with SDXL LCM LoRAsThe version of diffusers released today makes it very easy to use LCM LoRAs:from diffusers import DiffusionPipeline, LCMSchedulerimport torchmodel_id = "stabilityai/stable-diffusion-xl-base-1.0"lcm_lora_id = "latent-consistency/lcm-lora-sdxl"pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16")pipe.load_lora_weights(lcm_lora_id)pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)pipe.to(device="cuda", dtype=torch.float16)prompt = "close-up photography of old man standing in the rain at night, in a street lit by lamps, leica 35mm summilux"images = pipe(prompt=prompt,num_inference_steps=4,guidance_scale=1,).images[0]Note how the code:Instantiates a standard diffusion pipeline with the SDXL 1.0 base model.Applies the LCM LoRA.Changes the scheduler to the LCMScheduler, which is the one used in latent consistency models.That’s it!This would result in the following full-resolution image:Image generated with SDXL in 4 steps using an LCM LoRA.Quality ComparisonLet’s see how the number of steps impacts generation quality. The following code will generate images with 1 to 8 total inference steps:images = []for steps in range(8):generator = torch.Generator(device=pipe.device).manual_seed(1337)image = pipe(prompt=prompt,num_inference_steps=steps+1,guidance_scale=1,generator=generator,).images[0]images.append(image)These are the 8 images displayed in a grid:LCM LoRA generations with 1 to 8 steps.As expected, using just 1 step produces an approximate shape without discernible features and lacking texture. However, results quickly improve, and they are usually very satisfactory in just 4 to 6 steps. Personally, I find the 8-step image in the previous test to be a bit too saturated and “cartoony” for my taste, so I’d probably choose between the ones with 5 and 6 steps in this example. Generation is so fast that you can create a bunch of different variants using just 4 steps, and then select the ones you like and iterate using a couple more steps and refined prompts as necessary.Guidance Scale and Negative PromptsNote that in the previous examples we used a guidance_scale of 1, which effectively disables it. This works well for most prompts, and it’s fastest, but ignores negative prompts. You can also explore using negative prompts by providing a guidance scale between 1 and 2 – we found that larger values don’t work.Quality vs base SDXLHow does this compare against the standard SDXL pipeline, in terms of quality? Let’s see an example!We can quickly revert our pipeline to a standard SDXL pipeline by unloading the LoRA weights and switching to the default scheduler:from diffusers import EulerDiscreteSchedulerpipe.unload_lora_weights()pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)Then we can run inference as usual for SDXL. We’ll gather results using varying number of steps:images = []for steps in (1, 4, 8, 15, 20, 25, 30, 50):generator = torch.Generator(device=pipe.device).manual_seed(1337)image = pipe(prompt=prompt,num_inference_steps=steps,generator=generator,).images[0]images.append(image)SDXL pipeline results (same prompt and random seed), using 1, 4, 8, 15, 20, 25, 30, and 50 steps.As you can see, images in this example are pretty much useless until ~20 steps (second row), and quality still increases noticeably with more steps. 
The details in the final image are amazing, but it took 50 steps to get there.LCM LoRAs with other modelsThis technique also works for any other fine-tuned SDXL or Stable Diffusion model. To demonstrate, let's see how to run inference on collage-diffusion, a model fine-tuned from Stable Diffusion v1.5 using Dreambooth.The code is similar to the one we saw in the previous examples. We load the fine-tuned model, and then the LCM LoRA suitable for Stable Diffusion v1.5.from diffusers import DiffusionPipeline, LCMSchedulerimport torchmodel_id = "wavymulder/collage-diffusion"lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5"pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16")pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)pipe.load_lora_weights(lcm_lora_id)pipe.to(device="cuda", dtype=torch.float16)prompt = "collage style kid sits looking at the night sky, full of stars"generator = torch.Generator(device=pipe.device).manual_seed(1337)images = pipe(prompt=prompt,generator=generator,negative_prompt=negative_prompt,num_inference_steps=4,guidance_scale=1,).images[0]imagesLCM LoRA technique with a Dreambooth Stable Diffusion v1.5 model, allowing 4-step inference.Full Diffusers IntegrationThe integration of LCM in diffusers makes it possible to take advantage of many features and workflows that are part of the diffusers toolbox. For example:Out of the box mps support for Macs with Apple Silicon.Memory and performance optimizations like flash attention or torch.compile().Additional memory saving strategies for low-RAM environments, including model offload.Workflows like ControlNet or image-to-image.Training and fine-tuning scripts.BenchmarksThis section is not meant to be exhaustive, but illustrative of the generation speed we achieve on various computers. Let us stress again how liberating it is to explore image generation so easily.HardwareSDXL LoRA LCM (4 steps)SDXL standard (25 steps)Mac, M1 Max6.5s64s2080 Ti4.7s10.2s30901.4s7s40900.7s3.4sT4 (Google Colab Free Tier)8.4s26.5sA100 (80 GB)1.2s3.8sIntel i9-10980XE CPU (1/36 cores used)29s219sThese tests were run with a batch size of 1 in all cases, using this script by Sayak Paul.For cards with a lot of capacity, such as A100, performance increases significantly when generating multiple images at once, which is usually the case for production workloads.LCM LoRAs and Models Released TodayLatent Consistency Models LoRAs Collectionlatent-consistency/lcm-lora-sdxl. LCM LoRA for SDXL 1.0 base, as seen in the examples above.latent-consistency/lcm-lora-sdv1-5. LCM LoRA for Stable Diffusion 1.5.latent-consistency/lcm-lora-ssd-1b. LCM LoRA for segmind/SSD-1B, a distilled SDXL model that's 50% smaller and 60% faster than the original SDXL.latent-consistency/lcm-sdxl. Full fine-tuned consistency model derived from SDXL 1.0 base.latent-consistency/lcm-ssd-1b. 
Full fine-tuned consistency model derived from segmind/SSD-1B.Bonus: Combine LCM LoRAs with regular SDXL LoRAsUsing the diffusers + PEFT integration, you can combine LCM LoRAs with regular SDXL LoRAs, giving them the superpower to run LCM inference in only 4 steps.Here we are going to combine CiroN2022/toy_face LoRA with the LCM LoRA:from diffusers import DiffusionPipeline, LCMSchedulerimport torchmodel_id = "stabilityai/stable-diffusion-xl-base-1.0"lcm_lora_id = "latent-consistency/lcm-lora-sdxl"pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16")pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)pipe.load_lora_weights(lcm_lora_id)pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")pipe.set_adapters(["lora", "toy"], adapter_weights=[1.0, 0.8])pipe.to(device="cuda", dtype=torch.float16)prompt = "a toy_face man"negative_prompt = "blurry, low quality, render, 3D, oversaturated"images = pipe(prompt=prompt,negative_prompt=negative_prompt,num_inference_steps=4,guidance_scale=0.5,).images[0]imagesStandard and LCM LoRAs combined for fast (4 step) inference.Need ideas to explore some LoRAs? Check out our experimental LoRA the Explorer (LCM version) Space to test amazing creations by the community and get inspired!How to Train LCM Models and LoRAsAs part of the diffusers release today, we are providing training and fine-tuning scripts developed in collaboration with the LCM team authors. They allow users to:Perform full-model distillation of Stable Diffusion or SDXL models on large datasets such as Laion.Train LCM LoRAs, which is a much easier process. As we've shown in this post, it also makes it possible to run fast inference with Stable Diffusion, without having to go through distillation training.For more details, please check the instructions for SDXL or Stable Diffusion in the repo.We hope these scripts inspire the community to try their own fine-tunes. Please, do let us know if you use them for your projects!ResourcesLatent Consistency Models project page, paper.LCM LoRAsFor SDXL.For Stable Diffusion v1.5.For Segmind's SSD-1B.Technical Report.DemosSDXL in 4 steps with Latent Consistency LoRAsNear real-time video streamLoRA the Explorer (experimental LCM version)PEFT: intro, repoTraining scriptsFor Stable Diffusion 1.5For SDXLCreditsThe amazing work on Latent Consistency Models was performed by the LCM Team, please make sure to check out their code, report and paper. This project is a collaboration between the diffusers team, the LCM team, and community contributor Daniel Gu. We believe it's a testament to the enabling power of open source AI, the cornerstone that allows researchers, practitioners and tinkerers to explore new ideas and collaborate. We'd also like to thank @madebyollin for their continued contributions to the community, including the float16 autoencoder we use in our training scripts.
https://huggingface.co/blog/train-decision-transformers
Train your first Decision Transformer
Edward Beeching, Thomas Simonini
September 8, 2022
In a previous post, we announced the launch of Decision Transformers in the transformers library. This new technique of using a Transformer as a Decision-making model is getting increasingly popular.So today, you’ll learn to train your first Offline Decision Transformer model from scratch to make a half-cheetah run. We'll train it directly on a Google Colab that you can find here 👉 https://github.com/huggingface/blog/blob/main/notebooks/101_train-decision-transformers.ipynb*An "expert" Decision Transformers model, learned using offline RL in the Gym HalfCheetah environment.*Sounds exciting? Let's get started!What are Decision Transformers?Training Decision TransformersLoading the dataset and building the Custom Data CollatorTraining the Decision Transformer model with a 🤗 transformers TrainerConclusionWhat’s next?ReferencesWhat are Decision Transformers?The Decision Transformer model was introduced by “Decision Transformer: Reinforcement Learning via Sequence Modeling” by Chen L. et al. It abstracts Reinforcement Learning as a conditional-sequence modeling problem.The main idea is that instead of training a policy using RL methods, such as fitting a value function that will tell us what action to take to maximize the return (cumulative reward), we use a sequence modeling algorithm (Transformer) that, given the desired return, past states, and actions, will generate future actions to achieve this desired return. It’s an autoregressive model conditioned on the desired return, past states, and actions to generate future actions that achieve the desired return.This is a complete shift in the Reinforcement Learning paradigm since we use generative trajectory modeling (modeling the joint distribution of the sequence of states, actions, and rewards) to replace conventional RL algorithms. It means that in Decision Transformers, we don’t maximize the return but rather generate a series of future actions that achieve the desired return.The process goes this way:We feed the last K timesteps into the Decision Transformer with three inputs:Return-to-goStateActionThe tokens are embedded either with a linear layer if the state is a vector or a CNN encoder if it’s frames.The inputs are processed by a GPT-2 model, which predicts future actions via autoregressive modeling.Decision Transformer architecture. States, actions, and returns are fed into modality-specific linear embeddings, and a positional episodic timestep encoding is added. Tokens are fed into a GPT architecture which predicts actions autoregressively using a causal self-attention mask. Figure from [1].There are different types of Decision Transformers, but today, we’re going to train an offline Decision Transformer, meaning that we only use data collected from other agents or human demonstrations. The agent does not interact with the environment. If you want to know more about the difference between offline and online reinforcement learning, check this article.Now that we understand the theory behind Offline Decision Transformers, let’s see how we’re going to train one in practice.Training Decision TransformersIn the previous post, we demonstrated how to use a transformers Decision Transformer model and load pretrained weights from the 🤗 hub. In this part we will use 🤗 Trainer and a custom Data Collator to train a Decision Transformer model from scratch, using an Offline RL Dataset hosted on the 🤗 hub. 
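Before we dive into the data pipeline, here is a toy PyTorch sketch of the interleaving described above, i.e., how returns-to-go, states, and actions could each be embedded and arranged into one token per modality per timestep. The dimensions are illustrative, and this is not the transformers implementation (that lives in DecisionTransformerModel); it is only meant to make the input layout concrete:

```python
import torch
import torch.nn as nn

state_dim, act_dim, hidden = 17, 6, 128   # illustrative sizes (HalfCheetah-like)
embed_return = nn.Linear(1, hidden)
embed_state = nn.Linear(state_dim, hidden)
embed_action = nn.Linear(act_dim, hidden)

K = 20  # context length: number of past timesteps fed to the model
returns_to_go = torch.randn(1, K, 1)
states = torch.randn(1, K, state_dim)
actions = torch.randn(1, K, act_dim)

# Interleave as (R_1, s_1, a_1, R_2, s_2, a_2, ...): three tokens per timestep,
# which the GPT-2 backbone then processes with a causal attention mask.
tokens = torch.stack(
    (embed_return(returns_to_go), embed_state(states), embed_action(actions)), dim=2
).reshape(1, 3 * K, hidden)
print(tokens.shape)  # torch.Size([1, 60, 128])
```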
You can find code for this tutorial in this Colab notebook.We will be performing offline RL to learn the following behavior in the mujoco halfcheetah environment.*An "expert" Decision Transformers model, learned using offline RL in the Gym HalfCheetah environment.*Loading the dataset and building the Custom Data CollatorWe host a number of Offline RL Datasets on the hub. Today we will be training with the halfcheetah “expert” dataset, hosted here on hub.First we need to import the load_dataset function from the 🤗 datasets package and download the dataset to our machine.from datasets import load_datasetdataset = load_dataset("edbeeching/decision_transformer_gym_replay", "halfcheetah-expert-v2")While most datasets on the hub are ready to use out of the box, sometimes we wish to perform some additional processing or modification of the dataset. In this case we wish to match the author's implementation, that is we need to:Normalize each feature by subtracting the mean and dividing by the standard deviation.Pre-compute discounted returns for each trajectory.Scale the rewards and returns by a factor of 1000.Augment the dataset sampling distribution so it takes into account the length of the expert agent’s trajectories.In order to perform this dataset preprocessing, we will use a custom 🤗 Data Collator. Now let’s get started on the Custom Data Collator for Offline Reinforcement Learning.@dataclassclass DecisionTransformerGymDataCollator:return_tensors: str = "pt"max_len: int = 20 #subsets of the episode we use for trainingstate_dim: int = 17 # size of state spaceact_dim: int = 6 # size of action spacemax_ep_len: int = 1000 # max episode length in the datasetscale: float = 1000.0 # normalization of rewards/returnsstate_mean: np.array = None # to store state meansstate_std: np.array = None # to store state stdsp_sample: np.array = None # a distribution to take account trajectory lengthsn_traj: int = 0 # to store the number of trajectories in the datasetdef __init__(self, dataset) -> None:self.act_dim = len(dataset[0]["actions"][0])self.state_dim = len(dataset[0]["observations"][0])self.dataset = dataset# calculate dataset stats for normalization of statesstates = []traj_lens = []for obs in dataset["observations"]:states.extend(obs)traj_lens.append(len(obs))self.n_traj = len(traj_lens)states = np.vstack(states)self.state_mean, self.state_std = np.mean(states, axis=0), np.std(states, axis=0) + 1e-6traj_lens = np.array(traj_lens)self.p_sample = traj_lens / sum(traj_lens)def _discount_cumsum(self, x, gamma):discount_cumsum = np.zeros_like(x)discount_cumsum[-1] = x[-1]for t in reversed(range(x.shape[0] - 1)):discount_cumsum[t] = x[t] + gamma * discount_cumsum[t + 1]return discount_cumsumdef __call__(self, features):batch_size = len(features)# this is a bit of a hack to be able to sample of a non-uniform distributionbatch_inds = np.random.choice(np.arange(self.n_traj),size=batch_size,replace=True,p=self.p_sample, # reweights so we sample according to timesteps)# a batch of dataset featuress, a, r, d, rtg, timesteps, mask = [], [], [], [], [], [], []for ind in batch_inds:# for feature in features:feature = self.dataset[int(ind)]si = random.randint(0, len(feature["rewards"]) - 1)# get sequences from datasets.append(np.array(feature["observations"][si : si + self.max_len]).reshape(1, -1, self.state_dim))a.append(np.array(feature["actions"][si : si + self.max_len]).reshape(1, -1, self.act_dim))r.append(np.array(feature["rewards"][si : si + self.max_len]).reshape(1, -1, 
1))d.append(np.array(feature["dones"][si : si + self.max_len]).reshape(1, -1))timesteps.append(np.arange(si, si + s[-1].shape[1]).reshape(1, -1))timesteps[-1][timesteps[-1] >= self.max_ep_len] = self.max_ep_len - 1 # padding cutoffrtg.append(self._discount_cumsum(np.array(feature["rewards"][si:]), gamma=1.0)[: s[-1].shape[1] # TODO check the +1 removed here].reshape(1, -1, 1))if rtg[-1].shape[1] < s[-1].shape[1]:print("if true")rtg[-1] = np.concatenate([rtg[-1], np.zeros((1, 1, 1))], axis=1)# padding and state + reward normalizationtlen = s[-1].shape[1]s[-1] = np.concatenate([np.zeros((1, self.max_len - tlen, self.state_dim)), s[-1]], axis=1)s[-1] = (s[-1] - self.state_mean) / self.state_stda[-1] = np.concatenate([np.ones((1, self.max_len - tlen, self.act_dim)) * -10.0, a[-1]],axis=1,)r[-1] = np.concatenate([np.zeros((1, self.max_len - tlen, 1)), r[-1]], axis=1)d[-1] = np.concatenate([np.ones((1, self.max_len - tlen)) * 2, d[-1]], axis=1)rtg[-1] = np.concatenate([np.zeros((1, self.max_len - tlen, 1)), rtg[-1]], axis=1) / self.scaletimesteps[-1] = np.concatenate([np.zeros((1, self.max_len - tlen)), timesteps[-1]], axis=1)mask.append(np.concatenate([np.zeros((1, self.max_len - tlen)), np.ones((1, tlen))], axis=1))s = torch.from_numpy(np.concatenate(s, axis=0)).float()a = torch.from_numpy(np.concatenate(a, axis=0)).float()r = torch.from_numpy(np.concatenate(r, axis=0)).float()d = torch.from_numpy(np.concatenate(d, axis=0))rtg = torch.from_numpy(np.concatenate(rtg, axis=0)).float()timesteps = torch.from_numpy(np.concatenate(timesteps, axis=0)).long()mask = torch.from_numpy(np.concatenate(mask, axis=0)).float()return {"states": s,"actions": a,"rewards": r,"returns_to_go": rtg,"timesteps": timesteps,"attention_mask": mask,}That was a lot of code, the TLDR is that we defined a class that takes our dataset, performs the required preprocessing and will return us batches of states, actions, rewards, returns, timesteps and masks. These batches can be directly used to train a Decision Transformer model with a 🤗 transformers Trainer.Training the Decision Transformer model with a 🤗 transformers Trainer.In order to train the model with the 🤗 Trainer class, we first need to ensure the dictionary it returns contains a loss, in this case L-2 norm of the models action predictions and the targets. We achieve this by making a TrainableDT class, which inherits from the Decision Transformer model.class TrainableDT(DecisionTransformerModel):def __init__(self, config):super().__init__(config)def forward(self, **kwargs):output = super().forward(**kwargs)# add the DT lossaction_preds = output[1]action_targets = kwargs["actions"]attention_mask = kwargs["attention_mask"]act_dim = action_preds.shape[2]action_preds = action_preds.reshape(-1, act_dim)[attention_mask.reshape(-1) > 0]action_targets = action_targets.reshape(-1, act_dim)[attention_mask.reshape(-1) > 0]loss = torch.mean((action_preds - action_targets) ** 2)return {"loss": loss}def original_forward(self, **kwargs):return super().forward(**kwargs)The transformers Trainer class required a number of arguments, defined in the TrainingArguments class. We use the same hyperparameters are in the authors original implementation, but train for fewer iterations. This takes around 40 minutes to train in a Colab notebook, so grab a coffee or read the 🤗 Annotated Diffusion blog post while you wait. 
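One thing the training snippet below does not show is where model and collator come from. Roughly, they are created from the dataset and the classes defined above, along the lines of the following sketch (the exact values and any extra config arguments are in the Colab notebook):

```python
from transformers import DecisionTransformerConfig

# Build the collator from the training split, then size the model config from it.
collator = DecisionTransformerGymDataCollator(dataset["train"])
config = DecisionTransformerConfig(state_dim=collator.state_dim, act_dim=collator.act_dim)
model = TrainableDT(config)
```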
The authors train for around 3 hours, so the results we get here will not be quite as good as theirs.

training_args = TrainingArguments(
    output_dir="output/",
    remove_unused_columns=False,
    num_train_epochs=120,
    per_device_train_batch_size=64,
    learning_rate=1e-4,
    weight_decay=1e-4,
    warmup_ratio=0.1,
    optim="adamw_torch",
    max_grad_norm=0.25,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    data_collator=collator,
)
trainer.train()

Now that we have explained the theory behind the Decision Transformer and the Trainer, and how to train it, you're ready to train your first offline Decision Transformer model from scratch to make a half-cheetah run 👉 https://github.com/huggingface/blog/blob/main/notebooks/101_train-decision-transformers.ipynb
The Colab includes visualizations of the trained model, as well as how to save your model on the 🤗 hub.

Conclusion
This post has demonstrated how to train the Decision Transformer on an offline RL dataset, hosted on 🤗 datasets. We have used a 🤗 transformers Trainer and a custom data collator.

In addition to Decision Transformers, we want to support more use cases and tools from the Deep Reinforcement Learning community. Therefore, it would be great to hear your feedback on the Decision Transformer model, and more generally anything we can build with you that would be useful for RL. Feel free to reach out to us.

What's next?
In the coming weeks and months, we plan on supporting other tools from the ecosystem:
Expanding our repository of Decision Transformer models with models trained or finetuned in an online setting [2]
Integrating sample-factory version 2.0
The best way to keep in touch is to join our discord server to exchange with us and with the community.

References
[1] Chen, Lili, et al. "Decision Transformer: Reinforcement Learning via Sequence Modeling." Advances in Neural Information Processing Systems 34 (2021).
[2] Zheng, Qinqing, Zhang, Amy, and Grover, Aditya. "Online Decision Transformer." arXiv preprint, 2022.
https://huggingface.co/blog/course-launch-event
Course Launch Community Event
Sylvain Gugger
October 26, 2021
We are excited to share that after a lot of work from the Hugging Face team, part 2 of the Hugging Face Course will be released on November 15th! Part 1 focused on teaching you how to use a pretrained model, fine-tune it on a text classification task then upload the result to the Model Hub. Part 2 will focus on all the other common NLP tasks: token classification, language modeling (causal and masked), translation, summarization and question answering. It will also take a deeper dive in the whole Hugging Face ecosystem, in particular 🤗 Datasets and 🤗 Tokenizers.To go with this release, we are organizing a large community event to which you are invited! The program includes two days of talks, then team projects focused on fine-tuning a model on any NLP task ending with live demos like this one. Those demos will go nicely in your portfolio if you are looking for a new job in Machine Learning. We will also deliver a certificate of completion to all the participants that achieve building one of them.AWS is sponsoring this event by offering free compute to participants via Amazon SageMaker. To register, please fill out this form. You will find below more details on the two days of talks.Day 1 (November 15th): A high-level view of Transformers and how to train themThe first day of talks will focus on a high-level presentation of Transformers models and the tools we can use to train or fine-tune them.Thomas Wolf: Transfer Learning and the birth of the Transformers libraryThomas Wolf is co-founder and Chief Science Officer of HuggingFace. The tools created by Thomas Wolf and the Hugging Face team are used across more than 5,000 research organisations including Facebook Artificial Intelligence Research, Google Research, DeepMind, Amazon Research, Apple, the Allen Institute for Artificial Intelligence as well as most university departments. Thomas Wolf is the initiator and senior chair of the largest research collaboration that has ever existed in Artificial Intelligence: “BigScience”, as well as a set of widely used libraries and tools. Thomas Wolf is also a prolific educator and a thought leader in the field of Artificial Intelligence and Natural Language Processing, a regular invited speaker to conferences all around the world (https://thomwolf.io).Margaret Mitchell: On Values in ML DevelopmentMargaret Mitchell is a researcher working on Ethical AI, currently focused on the ins and outs of ethics-informed AI development in tech. She has published over 50 papers on natural language generation, assistive technology, computer vision, and AI ethics, and holds multiple patents in the areas of conversation generation and sentiment classification. She previously worked at Google AI as a Staff Research Scientist, where she founded and co-led Google's Ethical AI group, focused on foundational AI ethics research and operationalizing AI ethics Google-internally. Before joining Google, she was a researcher at Microsoft Research, focused on computer vision-to-language generation; and was a postdoc at Johns Hopkins, focused on Bayesian modeling and information extraction. She holds a PhD in Computer Science from the University of Aberdeen and a Master's in computational linguistics from the University of Washington. While earning her degrees, she also worked from 2005-2012 on machine learning, neurological disorders, and assistive technology at Oregon Health and Science University. 
She has spearheaded a number of workshops and initiatives at the intersections of diversity, inclusion, computer science, and ethics. Her work has received awards from Secretary of Defense Ash Carter and the American Foundation for the Blind, and has been implemented by multiple technology companies. She likes gardening, dogs, and cats.Jakob Uszkoreit: It Ain't Broke So Don't Fix Let's Break ItJakob Uszkoreit is the co-founder of Inceptive. Inceptive designs RNA molecules for vaccines and therapeutics using large-scale deep learning in a tight loop with high throughput experiments with the goal of making RNA-based medicines more accessible, more effective and more broadly applicable. Previously, Jakob worked at Google for more than a decade, leading research and development teams in Google Brain, Research and Search working on deep learning fundamentals, computer vision, language understanding and machine translation.Jay Alammar: A gentle visual intro to Transformers modelsJay Alammar, Cohere. Through his popular ML blog, Jay has helped millions of researchers and engineers visually understand machine learning tools and concepts from the basic (ending up in numPy, pandas docs) to the cutting-edge (Transformers, BERT, GPT-3).Matthew Watson: NLP workflows with KerasMatthew Watson is a machine learning engineer on the Keras team, with a focus on high-level modeling APIs. He studied Computer Graphics during undergrad and a Masters at Stanford University. An almost English major who turned towards computer science, he is passionate about working across disciplines and making NLP accessible to a wider audience.Chen Qian: NLP workflows with KerasChen Qian is a software engineer from Keras team, with a focus on high-level modeling APIs. Chen got a Master degree of Electrical Engineering from Stanford University, and he is especially interested in simplifying code implementations of ML tasks and large-scale ML.Mark Saroufim: How to Train a Model with PytorchMark Saroufim is a Partner Engineer at Pytorch working on OSS production tools including TorchServe and Pytorch Enterprise. In his past lives, Mark was an Applied Scientist and Product Manager at Graphcore, yuri.ai, Microsoft and NASA's JPL. His primary passion is to make programming more fun.Day 2 (November 16th): The tools you will useDay 2 will be focused on talks by the Hugging Face, Gradio, and AWS teams, showing you the tools you will use.Lewis Tunstall: Simple Training with the 🤗 Transformers TrainerLewis is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. He is also a co-author of an upcoming O’Reilly book on Transformers and you can follow him on Twitter (@_lewtun) for NLP tips and tricks!Matthew Carrigan: New TensorFlow Features for 🤗 Transformers and 🤗 DatasetsMatt is responsible for TensorFlow maintenance at Transformers, and will eventually lead a coup against the incumbent PyTorch faction which will likely be co-ordinated via his Twitter account @carrigmat.Lysandre Debut: The Hugging Face Hub as a means to collaborate on and share Machine Learning projectsLysandre is a Machine Learning Engineer at Hugging Face where he is involved in many open source projects. 
His aim is to make Machine Learning accessible to everyone by developing powerful tools with a very simple API.

Sylvain Gugger: Supercharge your PyTorch training loop with 🤗 Accelerate
Sylvain is a Research Engineer at Hugging Face, one of the core maintainers of 🤗 Transformers, and the developer behind 🤗 Accelerate. He likes making model training more accessible.

Lucile Saulnier: Get your own tokenizer with 🤗 Transformers & 🤗 Tokenizers
Lucile is a machine learning engineer at Hugging Face, developing and supporting the use of open source tools. She is also actively involved in many research projects in the field of Natural Language Processing such as collaborative training and BigScience.

Merve Noyan: Showcase your model demos with 🤗 Spaces
Merve is a developer advocate at Hugging Face, working on developing tools and building content around them to democratize machine learning for everyone.

Abubakar Abid: Building Machine Learning Applications Fast
Abubakar Abid is the CEO of Gradio. He received his Bachelor of Science in Electrical Engineering and Computer Science from MIT in 2015, and his PhD in Applied Machine Learning from Stanford in 2021. In his role as the CEO of Gradio, Abubakar works on making machine learning models easier to demo, debug, and deploy.

Mathieu Desvé: AWS ML Vision: Making Machine Learning Accessible to all Customers
Technology enthusiast and maker in my free time. I like challenges, solving problems for clients and users, and working with talented people to learn every day. Since 2004, I have worked in multiple positions, switching between frontend, backend, infrastructure, operations, and management, and I try to solve common technical and managerial issues in an agile manner.

Philipp Schmid: Managed Training with Amazon SageMaker and 🤗 Transformers
Philipp Schmid is a Machine Learning Engineer and Tech Lead at Hugging Face, where he leads the collaboration with the Amazon SageMaker team. He is passionate about democratizing and productionizing cutting-edge NLP models and improving the ease of use for Deep Learning.
https://huggingface.co/blog/ml-for-games-2
AI for Game Development: Creating a Farming Game in 5 Days. Part 2
Dylan Ebert
January 9, 2023
Welcome to AI for Game Development! In this series, we'll be using AI tools to create a fully functional farming game in just 5 days. By the end of this series, you will have learned how you can incorporate a variety of AI tools into your game development workflow. I will show you how you can use AI tools for:Art StyleGame Design3D Assets2D AssetsStoryWant the quick video version? You can watch it here. Otherwise, if you want the technical details, keep reading!Note: This tutorial is intended for readers who are familiar with Unity development and C#. If you're new to these technologies, check out the Unity for Beginners series before continuing.Day 2: Game DesignIn Part 1 of this tutorial series, we used AI for Art Style. More specifically, we used Stable Diffusion to generate concept art and develop the visual style of our game.In this part, we'll be using AI for Game Design. In The Short Version, I'll talk about how I used ChatGPT as a tool to help develop game ideas. But more importantly, what is actually going on here? Keep reading for background on Language Models and their broader Uses in Game Development.The Short VersionThe short version is straightforward: ask ChatGPT for advice, and follow its advice at your own discretion. In the case of the farming game, I asked ChatGPT:You are a professional game designer, designing a simple farming game. What features are most important to making the farming game fun and engaging?The answer given includes (summarized):Variety of cropsA challenging and rewarding progression systemDynamic and interactive environmentsSocial and multiplayer featuresA strong and immersive story or themeGiven that I only have 5 days, I decided to gray-box the first two points. You can play the result here, and view the source code here.I'm not going to go into detail on how I implemented these mechanics, since the focus of this series is how to use AI tools in your own game development process, not how to implement a farming game. Instead, I'll talk about what ChatGPT is (a language model), how these models actually work, and what this means for game development.Language ModelsChatGPT, despite being a major breakthrough in adoption, is an iteration on tech that has existed for a while: language models.Language models are a type of AI that are trained to predict the likelihood of a sequence of words. For example, if I were to write "The cat chases the ____", a language model would be trained to predict "mouse". This training process can then be applied to a wide variety of tasks. For example, translation: "the French word for cat is ____". This setup, while successful at some natural language tasks, wasn't anywhere near the level of performance seen today. This is, until the introduction of transformers.Transformers, introduced in 2017, are a neural network architecture that use a self-attention mechanism to predict the entire sequence all at once. This is the tech behind modern language models like ChatGPT. Want to learn more about how they work? Check out our Introduction to Transformers course, available free here on Hugging Face.So why is ChatGPT so successful compared to previous language models? It's impossible to answer this in its entirety, since ChatGPT is not open source. However, one of the reasons is Reinforcement Learning from Human Feedback (RLHF), where human feedback is used to improve the language model. 
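(As a quick aside: if you want to poke at the "predict the next word" behavior described earlier yourself, a couple of lines of 🤗 Transformers code are enough. This is just a toy illustration, not part of the game workflow; gpt2 is used here only because it is small and openly available, and the completions you get will vary.)

```python
from transformers import pipeline

# Tiny, open model used purely to illustrate next-word prediction.
generator = pipeline("text-generation", model="gpt2")
print(generator("The cat chases the", max_new_tokens=5)[0]["generated_text"])
```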
Check out this blog post for more information on RLHF: how it works, open-source tools for doing it, and its future.This area of AI is constantly changing, and likely to see an explosion of creativity as it becomes part of the open source community, including in uses for game development. If you're reading this, you're probably ahead of the curve already.Uses in Game DevelopmentIn The Short Version, I talked about how I used ChatGPT to help develop game ideas. There is a lot more you can do with it though, like using it to code an entire game. You can use it for pretty much anything you can think of. Something that might be a bit more helpful is to talk about what it can't do.LimitationsChatGPT often sounds very convincing, while being wrong. Here is an archive of ChatGPT failures. The reason for these is that ChatGPT doesn't know what it's talking about the way a human does. It's a very large Language Model that predicts likely outputs, but doesn't really understand what it's saying. One of my personal favorite examples of these failures (especially relevant to game development) is this explanation of quaternions from Reddit:This explanation, while sounding excellent, is completely wrong. This is a great example of why ChatGPT, while very useful, shouldn't be used as a definitive knowledge base.SuggestionsIf ChatGPT fails a lot, should you use it? I would argue that it's still extremely useful as a tool, rather than as a replacement. In the example of Game Design, I could have followed up on ChatGPT's answer, and asked it to implement all of its suggestions for me. As I mentioned before, others have done this, and it somewhat works. However, I would suggest using ChatGPT more as a tool for brainstorming and acceleration, rather than as a complete replacement for steps in the development process.Click here to read Part 3, where we use AI for 3D Assets.
https://huggingface.co/blog/stable-diffusion-xl-coreml
Stable Diffusion XL on Mac with Advanced Core ML Quantization
Pedro Cuenca, Orhon
July 27, 2023
Stable Diffusion XL was released yesterday and it’s awesome. It can generate large (1024x1024) high quality images; adherence to prompts has been improved with some new tricks; it can effortlessly produce very dark or very bright images thanks to the latest research on noise schedulers; and it’s open source!The downside is that the model is much bigger, and therefore slower and more difficult to run on consumer hardware. Using the latest release of the Hugging Face diffusers library, you can run Stable Diffusion XL on CUDA hardware in 16 GB of GPU RAM, making it possible to use it on Colab’s free tier.The past few months have shown that people are very clearly interested in running ML models locally for a variety of reasons, including privacy, convenience, easier experimentation, or unmetered use. We’ve been working hard at both Apple and Hugging Face to explore this space. We’ve shown how to run Stable Diffusion on Apple Silicon, or how to leverage the latest advancements in Core ML to improve size and performance with 6-bit palettization.For Stable Diffusion XL we’ve done a few things:Ported the base model to Core ML so you can use it in your native Swift apps.Updated Apple’s conversion and inference repo so you can convert the models yourself, including any fine-tunes you’re interested in.Updated Hugging Face’s demo app to show how to use the new Core ML Stable Diffusion XL models downloaded from the Hub.Explored mixed-bit palettization, an advanced compression technique that achieves important size reductions while minimizing and controlling the quality loss you incur. You can apply the same technique to your own models too!Everything is open source and available today, let’s get on with it.ContentsUsing SD XL Models from the Hugging Face HubWhat is Mixed-Bit Palettization?How are Mixed-Bit Recipes Created?Converting Fine-Tuned ModelsPublished ResourcesUsing SD XL Models from the Hugging Face HubAs part of this release, we published two different versions of Stable Diffusion XL in Core ML.apple/coreml-stable-diffusion-xl-base is a complete pipeline, without any quantization.apple/coreml-stable-diffusion-mixed-bit-palettization contains (among other artifacts) a complete pipeline where the UNet has been replaced with a mixed-bit palettization recipe that achieves a compression equivalent to 4.5 bits per parameter. Size went down from 4.8 to 1.4 GB, a 71% reduction, and in our opinion quality is still great.Either model can be tested using Apple’s Swift command-line inference app, or Hugging Face’s demo app. This is an example of the latter using the new Stable Diffusion XL pipeline:As with previous Stable Diffusion releases, we expect the community to come up with novel fine-tuned versions for different domains, and many of them will be converted to Core ML. You can keep an eye on this filter in the Hub to explore!Stable Diffusion XL works on Apple Silicon Macs running the public beta of macOS 14. It currently uses the ORIGINAL attention implementation, which is intended for CPU + GPU compute units. 
Note that the refiner stage has not been ported yet.For reference, these are the performance figures we achieved on different devices:Device--compute-unit--attention-implementationEnd-to-End Latency (s)Diffusion Speed (iter/s)MacBook Pro (M1 Max)CPU_AND_GPUORIGINAL460.46MacBook Pro (M2 Max)CPU_AND_GPUORIGINAL370.57Mac Studio (M1 Ultra)CPU_AND_GPUORIGINAL250.89Mac Studio (M2 Ultra)CPU_AND_GPUORIGINAL201.11What is Mixed-Bit Palettization?Last month we discussed 6-bit palettization, a post-training quantization method that converts 16-bit weights to just 6-bit per parameter. This achieves an important reduction in model size, but going beyond that is tricky because model quality becomes more and more impacted as the number of bits is decreased.One option to decrease model size further is to use training time quantization, which consists of learning the quantization tables while we fine-tune the model. This works great, but you need to run a fine-tuning phase for every model you want to convert.We explored a different alternative instead: mixed-bit palettization. Instead of using 6 bits per parameter, we examine the model and decide how many quantization bits to use per layer. We make the decision based on how much each layer contributes to the overall quality degradation, which we measure by comparing the PSNR between the quantized model and the original model in float16 mode, for a set of a few inputs. We explore several bit depths, per layer: 1 (!), 2, 4 and 8. If a layer degrades significantly when using, say, 2 bits, we move to 4 and so on. Some layers might be kept in 16-bit mode if they are critical to preserving quality.Using this method, we can achieve effective quantizations of, for example, 2.8 bits on average, and we measure the impact on degradation for every combination we try. This allows us to be better informed about the best quantization to use for our target quality and size budgets.To illustrate the method, let’s consider the following quantization “recipes” that we got from one of our analysis runs (we’ll explain later how they were generated):{"model_version": "stabilityai/stable-diffusion-xl-base-1.0","baselines": {"original": 82.2,"linear_8bit": 66.025,"recipe_6.55_bit_mixedpalette": 79.9,"recipe_4.50_bit_mixedpalette": 75.8,"recipe_3.41_bit_mixedpalette": 71.7,},}What this tells us is that the original model quality, as measured by PSNR in float16, is about 82 dB. Performing a naïve 8-bit linear quantization drops it to 66 dB. But then we have a recipe that compresses to 6.55 bits per parameter, on average, while keeping PSNR at 80 dB. The second and third recipes further reduce the model size, while still sustaining a PSNR larger than that of the 8-bit linear quantization.For visual examples, these are the results on prompt a high quality photo of a surfing dog running each one of the three recipes with the same seed:3.41-bit4.50-bit6.55-bit16-bit (original)Some initial conclusions:In our opinion, all the images have good quality in terms of how realistic they look. The 6.55 and 4.50 versions are close to the 16-bit version in this aspect.The same seed produces an equivalent composition, but will not preserve the same details. Dog breeds may be different, for example.Adherence to the prompt may degrade as we increase compression. In this example, the aggressive 3.41 version loses the board. PSNR only compares how much pixels differ overall, but does not care about the subjects in the images. 
You need to examine results and assess them for your use case.This technique is great for Stable Diffusion XL because we can keep about the same UNet size even though the number of parameters tripled with respect to the previous version. But it's not exclusive to it! You can apply the method to any Stable Diffusion Core ML model.How are Mixed-Bit Recipes Created?The following plot shows the signal strength (PSNR in dB) versus model size reduction (% of float16 size) for stabilityai/stable-diffusion-xl-base-1.0. The {1,2,4,6,8}-bit curves are generated by progressively palettizing more layers using a palette with a fixed number of bits. The layers were ordered in ascending order of their isolated impact to end-to-end signal strength, so the cumulative compression's impact is delayed as much as possible. The mixed-bit curve is based on falling back to a higher number of bits as soon as a layer's isolated impact to end-to-end signal integrity drops below a threshold. Note that all curves based on palettization outperform linear 8-bit quantization at the same model size except for 1-bit.Mixed-bit palettization runs in two phases: analysis and application.The goal of the analysis phase is to find points in the mixed-bit curve (the brown one above all the others in the figure) so we can choose our desired quality-vs-size tradeoff. As mentioned in the previous section, we iterate through the layers and select the lowest bit depths that yield results above a given PSNR threshold. We repeat the process for various thresholds to get different quantization strategies. The result of the process is thus a set of quantization recipes, where each recipe is just a JSON dictionary detailing the number of bits to use for each layer in the model. Layers with few parameters are ignored and kept in float16 for simplicity.The application phase simply goes over the recipe and applies palettization with the number of bits specified in the JSON structure.Analysis is a lengthy process and requires a GPU (mps or cuda), as we have to run inference multiple times. Once it’s done, recipe application can be performed in a few minutes.We provide scripts for each one of these phases:mixed_bit_compression_pre_analysis.pymixed_bit_compression_apply.pyConverting Fine-Tuned ModelsIf you’ve previously converted Stable Diffusion models to Core ML, the process for XL using the command line converter is very similar. There’s a new flag to indicate whether the model belongs to the XL family, and you have to use --attention-implementation ORIGINAL if that’s the case.For an introduction to the process, check the instructions in the repo or one of our previous blog posts, and make sure you use the flags above.Running Mixed-Bit PalettizationAfter converting Stable Diffusion or Stable Diffusion XL models to Core ML, you can optionally apply mixed-bit palettization using the scripts mentioned above.Because the analysis process is slow, we have prepared recipes for the most popular models:Recipes for Stable Diffusion 1.5Recipes for Stable Diffusion 2.1Recipes for Stable Diffusion XL 1.0 baseYou can download and apply them locally to experiment.In addition, we also applied the three best recipes from the Stable Diffusion XL analysis to the Core ML version of the UNet, and published them here. Feel free to play with them and see how they work for you!Finally, as mentioned in the introduction, we created a complete Stable Diffusion XL Core ML pipeline that uses a 4.5-bit recipe.Published Resourcesapple/ml-stable-diffusion, by Apple. 
Conversion and inference library for Swift (and Python).huggingface/swift-coreml-diffusers. Hugging Face demo app, built on top of Apple's package.Stable Diffusion XL 1.0 base (Core ML version). Model ready to run using the repos above and other third-party apps.Stable Diffusion XL 1.0 base, with mixed-bit palettization (Core ML). Same model as above, with UNet quantized with an effective palettization of 4.5 bits (on average).Additional UNets with mixed-bit palettization.Mixed-bit palettization recipes, pre-computed for popular models and ready to use.mixed_bit_compression_pre_analysis.py. Script to run mixed-bit analysis and recipe generation.mixed_bit_compression_apply.py. Script to apply recipes computed during the analysis phase.
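To make the idea of palettization more concrete, here is a toy sketch (not the actual coremltools implementation) that builds an n-bit palette for a single weight matrix with k-means and reconstructs the weights from the palette indices. Mixed-bit palettization simply picks a different number of bits per layer, guided by the PSNR analysis described above.

```python
import numpy as np
from sklearn.cluster import KMeans

def palettize(weights: np.ndarray, n_bits: int):
    """Cluster the weights into a 2**n_bits palette; store one index per weight."""
    kmeans = KMeans(n_clusters=2 ** n_bits, n_init=10).fit(weights.reshape(-1, 1))
    palette = kmeans.cluster_centers_.flatten()   # the look-up table (LUT)
    indices = kmeans.labels_.astype(np.uint8)     # n_bits of storage per weight
    return palette, indices

def depalettize(palette: np.ndarray, indices: np.ndarray, shape):
    """Reconstruct an approximation of the original weights from the palette."""
    return palette[indices].reshape(shape)

w = np.random.randn(128, 128).astype(np.float32)
palette, idx = palettize(w, n_bits=4)
w_hat = depalettize(palette, idx, w.shape)
print("mean absolute reconstruction error:", np.abs(w - w_hat).mean())
```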
https://huggingface.co/blog/stable_diffusion_jax
🧨 Stable Diffusion in JAX / Flax !
Pedro Cuenca, Patrick von Platen
October 13, 2022
🤗 Hugging Face Diffusers supports Flax since version 0.5.1! This allows for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform.This post shows how to run inference using JAX / Flax. If you want more details about how Stable Diffusion works or want to run it in GPU, please refer to this Colab notebook.If you want to follow along, click the button above to open this post as a Colab notebook.First, make sure you are using a TPU backend. If you are running this notebook in Colab, select Runtime in the menu above, then select the option "Change runtime type" and then select TPU under the Hardware accelerator setting.Note that JAX is not exclusive to TPUs, but it shines on that hardware because each TPU server has 8 TPU accelerators working in parallel.Setupimport jaxnum_devices = jax.device_count()device_type = jax.devices()[0].device_kindprint(f"Found {num_devices} JAX devices of type {device_type}.")assert "TPU" in device_type, "Available device is not a TPU, please select TPU from Edit > Notebook settings > Hardware accelerator"Output:Found 8 JAX devices of type TPU v2.Make sure diffusers is installed.!pip install diffusers==0.5.1Then we import all the dependencies.import numpy as npimport jaximport jax.numpy as jnpfrom pathlib import Pathfrom jax import pmapfrom flax.jax_utils import replicatefrom flax.training.common_utils import shardfrom PIL import Imagefrom huggingface_hub import notebook_loginfrom diffusers import FlaxStableDiffusionPipelineModel LoadingBefore using the model, you need to accept the model license in order to download and use the weights. The license is designed to mitigate the potential harmful effects of such a powerful machine learning system. We request users to read the license entirely and carefully. Here we offer a summary:You can't use the model to deliberately produce nor share illegal or harmful outputs or content,We claim no rights on the outputs you generate, you are free to use them and are accountable for their use which should not go against the provisions set in the license, andYou may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users.Flax weights are available in Hugging Face Hub as part of the Stable Diffusion repo. The Stable Diffusion model is distributed under the CreateML OpenRail-M license. It's an open license that claims no rights on the outputs you generate and prohibits you from deliberately producing illegal or harmful content. The model card provides more details, so take a moment to read them and consider carefully whether you accept the license. If you do, you need to be a registered user in the Hub and use an access token for the code to work. You have two options to provide your access token:Use the huggingface-cli login command-line tool in your terminal and paste your token when prompted. It will be saved in a file in your computer.Or use notebook_login() in a notebook, which does the same thing.The following cell will present a login interface unless you've already authenticated before in this computer. You'll need to paste your access token.if not (Path.home()/'.huggingface'/'token').exists(): notebook_login()TPU devices support bfloat16, an efficient half-float type. 
We'll use it for our tests, but you can also use float32 to use full precision instead.dtype = jnp.bfloat16Flax is a functional framework, so models are stateless and parameters are stored outside them. Loading the pre-trained Flax pipeline will return both the pipeline itself and the model weights (or parameters). We are using a bf16 version of the weights, which leads to type warnings that you can safely ignore.pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4",revision="bf16",dtype=dtype,)InferenceSince TPUs usually have 8 devices working in parallel, we'll replicate our prompt as many times as devices we have. Then we'll perform inference on the 8 devices at once, each responsible for generating one image. Thus, we'll get 8 images in the same amount of time it takes for one chip to generate a single one.After replicating the prompt, we obtain the tokenized text ids by invoking the prepare_inputs function of the pipeline. The length of the tokenized text is set to 77 tokens, as required by the configuration of the underlying CLIP Text model.prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic"prompt = [prompt] * jax.device_count()prompt_ids = pipeline.prepare_inputs(prompt)prompt_ids.shapeOutput:(8, 77)Replication and parallelizationModel parameters and inputs have to be replicated across the 8 parallel devices we have. The parameters dictionary is replicated using flax.jax_utils.replicate, which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard.p_params = replicate(params)prompt_ids = shard(prompt_ids)prompt_ids.shapeOutput:(8, 1, 77)That shape means that each one of the 8 devices will receive as an input a jnp array with shape (1, 77). 1 is therefore the batch size per device. In TPUs with sufficient memory, it could be larger than 1 if we wanted to generate multiple images (per chip) at once.We are almost ready to generate images! We just need to create a random number generator to pass to the generation function. This is the standard procedure in Flax, which is very serious and opinionated about random numbers – all functions that deal with random numbers are expected to receive a generator. This ensures reproducibility, even when we are training across multiple distributed devices.The helper function below uses a seed to initialize a random number generator. As long as we use the same seed, we'll get the exact same results. Feel free to use different seeds when exploring results later in the notebook.def create_key(seed=0):return jax.random.PRNGKey(seed)We obtain a rng and then "split" it 8 times so each device receives a different generator. Therefore, each device will create a different image, and the full process is reproducible.rng = create_key(0)rng = jax.random.split(rng, jax.device_count())JAX code can be compiled to an efficient representation that runs very fast. However, we need to ensure that all inputs have the same shape in subsequent calls; otherwise, JAX will have to recompile the code, and we wouldn't be able to take advantage of the optimized speed.The Flax pipeline can compile the code for us if we pass jit = True as an argument. 
It will also ensure that the model runs in parallel in the 8 available devices.The first time we run the following cell it will take a long time to compile, but subsequent calls (even with different inputs) will be much faster. For example, it took more than a minute to compile in a TPU v2-8 when I tested, but then it takes about 7s for future inference runs.images = pipeline(prompt_ids, p_params, rng, jit=True)[0]Output:CPU times: user 464 ms, sys: 105 ms, total: 569 msWall time: 7.07 sThe returned array has shape (8, 1, 512, 512, 3). We reshape it to get rid of the second dimension and obtain 8 images of 512 × 512 × 3 and then convert them to PIL.images = images.reshape((images.shape[0],) + images.shape[-3:])images = pipeline.numpy_to_pil(images)VisualizationLet's create a helper function to display images in a grid.def image_grid(imgs, rows, cols):w,h = imgs[0].sizegrid = Image.new('RGB', size=(cols*w, rows*h))for i, img in enumerate(imgs): grid.paste(img, box=(i%cols*w, i//cols*h))return gridimage_grid(images, 2, 4)Using different promptsWe don't have to replicate the same prompt in all the devices. We can do whatever we want: generate 2 prompts 4 times each, or even generate 8 different prompts at once. Let's do that!First, we'll refactor the input preparation code into a handy function:prompts = ["Labrador in the style of Hokusai","Painting of a squirrel skating in New York","HAL-9000 in the style of Van Gogh","Times Square under water, with fish and a dolphin swimming around","Ancient Roman fresco showing a man working on his laptop","Close-up photograph of young black woman against urban background, high quality, bokeh","Armchair in the shape of an avocado","Clown astronaut in space, with Earth in the background",]prompt_ids = pipeline.prepare_inputs(prompts)prompt_ids = shard(prompt_ids)images = pipeline(prompt_ids, p_params, rng, jit=True).imagesimages = images.reshape((images.shape[0], ) + images.shape[-3:])images = pipeline.numpy_to_pil(images)image_grid(images, 2, 4)How does parallelization work?We said before that the diffusers Flax pipeline automatically compiles the model and runs it in parallel on all available devices. We'll now briefly look inside that process to show how it works.JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program, multiple-data (SPMD) parallelization. It means we'll run several copies of the same code, each on different data inputs. More sophisticated approaches are possible, we invite you to go over the JAX documentation and the pjit pages to explore this topic if you are interested!jax.pmap does two things for us:Compiles (or jits) the code, as if we had invoked jax.jit(). This does not happen when we call pmap, but the first time the pmapped function is invoked.Ensures the compiled code runs in parallel in all the available devices.To show how it works we pmap the _generate method of the pipeline, which is the private method that runs generates images. Please, note that this method may be renamed or removed in future releases of diffusers.p_generate = pmap(pipeline._generate)After we use pmap, the prepared function p_generate will conceptually do the following:Invoke a copy of the underlying function pipeline._generate in each device.Send each device a different portion of the input arguments. That's what sharding is used for. In our case, prompt_ids has shape (8, 1, 77, 768). 
This array will be split in 8 and each copy of _generate will receive an input with shape (1, 77, 768).We can code _generate completely ignoring the fact that it will be invoked in parallel. We just care about our batch size (1 in this example) and the dimensions that make sense for our code, and don't have to change anything to make it work in parallel.The same way as when we used the pipeline call, the first time we run the following cell it will take a while, but then it will be much faster.images = p_generate(prompt_ids, p_params, rng)images = images.block_until_ready()images.shapeOutput:CPU times: user 118 ms, sys: 83.9 ms, total: 202 msWall time: 6.82 s(8, 1, 512, 512, 3)We use block_until_ready() to correctly measure inference time, because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don't need to use that in your code; blocking will occur automatically when you want to use the result of a computation that has not yet been materialized.
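To illustrate the asynchronous dispatch point above with something self-contained (independent of the diffusion pipeline), here is a tiny timing sketch: the first timer only measures how long it takes to dispatch the work, while the second one waits for the device to actually finish.

```python
import time
import jax.numpy as jnp

x = jnp.ones((4096, 4096))

start = time.perf_counter()
y = x @ x  # returns almost immediately: the matmul is dispatched asynchronously
print(f"dispatch: {time.perf_counter() - start:.4f} s")

start = time.perf_counter()
y.block_until_ready()  # blocks until the device has finished the computation
print(f"compute:  {time.perf_counter() - start:.4f} s")
```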
https://huggingface.co/blog/deep-rl-a2c
Advantage Actor Critic (A2C)
Thomas Simonini
July 22, 2022
Unit 7, of the Deep Reinforcement Learning Class with Hugging Face 🤗⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introductionThis article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus here.⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introductionThis article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus here.In Unit 5, we learned about our first Policy-Based algorithm called Reinforce. In Policy-Based methods, we aim to optimize the policy directly without using a value function. More precisely, Reinforce is part of a subclass of Policy-Based Methods called Policy-Gradient methods. This subclass optimizes the policy directly by estimating the weights of the optimal policy using Gradient Ascent.We saw that Reinforce worked well. However, because we use Monte-Carlo sampling to estimate return (we use an entire episode to calculate the return), we have significant variance in policy gradient estimation. Remember that the policy gradient estimation is the direction of the steepest increase in return. Aka, how to update our policy weights so that actions that lead to good returns have a higher probability of being taken. The Monte Carlo variance, which we will further study in this unit, leads to slower training since we need a lot of samples to mitigate it.Today we'll study Actor-Critic methods, a hybrid architecture combining a value-based and policy-based methods that help to stabilize the training by reducing the variance:An Actor that controls how our agent behaves (policy-based method)A Critic that measures how good the action taken is (value-based method)We'll study one of these hybrid methods called Advantage Actor Critic (A2C), and train our agent using Stable-Baselines3 in robotic environments. Where we'll train two agents to walk:A bipedal walker 🚶A spider 🕷️Sounds exciting? Let's get started!The Problem of Variance in ReinforceAdvantage Actor Critic (A2C)Reducing variance with Actor-Critic methodsThe Actor-Critic ProcessAdvantage Actor CriticAdvantage Actor Critic (A2C) using Robotics Simulations with PyBullet 🤖 The Problem of Variance in Reinforce In Reinforce, we want to increase the probability of actions in a trajectory proportional to how high the return is.If the return is high, we will push up the probabilities of the (state, action) combinations.Else, if the return is low, it will push down the probabilities of the (state, action) combinations.This return R(τ)R(\tau)R(τ) is calculated using a Monte-Carlo sampling. Indeed, we collect a trajectory and calculate the discounted return, and use this score to increase or decrease the probability of every action taken in that trajectory. If the return is good, all actions will be “reinforced” by increasing their likelihood of being taken. R(τ)=Rt+1+γRt+2+γ2Rt+3+...R(\tau) = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + ...R(τ)=Rt+1​+γRt+2​+γ2Rt+3​+... The advantage of this method is that it’s unbiased. Since we’re not estimating the return, we use only the true return we obtain.But the problem is that the variance is high, since trajectories can lead to different returns due to stochasticity of the environment (random events during episode) and stochasticity of the policy. 
Consequently, the same starting state can lead to very different returns.Because of this, the return starting at the same state can vary significantly across episodes.The solution is to mitigate the variance by using a large number of trajectories, hoping that the variance introduced in any one trajectory will be reduced in aggregate and provide a "true" estimation of the return.However, increasing the batch size significantly reduces sample efficiency. So we need to find additional mechanisms to reduce the variance. If you want to dive deeper into the question of variance and bias tradeoff in Deep Reinforcement Learning, you can check these two articles:- Making Sense of the Bias / Variance Trade-off in (Deep) Reinforcement Learning - Bias-variance Tradeoff in Reinforcement Learning Advantage Actor Critic (A2C) Reducing variance with Actor-Critic methods The solution to reducing the variance of Reinforce algorithm and training our agent faster and better is to use a combination of policy-based and value-based methods: the Actor-Critic method.To understand the Actor-Critic, imagine you play a video game. You can play with a friend that will provide you some feedback. You’re the Actor, and your friend is the Critic.You don’t know how to play at the beginning, so you try some actions randomly. The Critic observes your action and provides feedback.Learning from this feedback, you’ll update your policy and be better at playing that game.On the other hand, your friend (Critic) will also update their way to provide feedback so it can be better next time.This is the idea behind Actor-Critic. We learn two function approximations:A policy that controls how our agent acts: πθ(s,a) \pi_{\theta}(s,a) πθ​(s,a)A value function to assist the policy update by measuring how good the action taken is: q^w(s,a) \hat{q}_{w}(s,a) q^​w​(s,a) The Actor-Critic Process Now that we have seen the Actor Critic's big picture, let's dive deeper to understand how Actor and Critic improve together during the training.As we saw, with Actor-Critic methods there are two function approximations (two neural networks):Actor, a policy function parameterized by theta: πθ(s,a) \pi_{\theta}(s,a) πθ​(s,a)Critic, a value function parameterized by w: q^w(s,a) \hat{q}_{w}(s,a) q^​w​(s,a)Let's see the training process to understand how Actor and Critic are optimized:At each timestep, t, we get the current state St S_tSt​ from the environment and pass it as input through our Actor and Critic.Our Policy takes the state and outputs an action At A_t At​.The Critic takes that action also as input and, using St S_tSt​ and At A_t At​, computes the value of taking that action at that state: the Q-value.The action At A_tAt​ performed in the environment outputs a new state St+1 S_{t+1}St+1​ and a reward Rt+1 R_{t+1} Rt+1​ .The Actor updates its policy parameters using the Q value.Thanks to its updated parameters, the Actor produces the next action to take at At+1 A_{t+1} At+1​ given the new state St+1 S_{t+1} St+1​. The Critic then updates its value parameters. Advantage Actor Critic (A2C) We can stabilize learning further by using the Advantage function as Critic instead of the Action value function.The idea is that the Advantage function calculates how better taking that action at a state is compared to the average value of the state. 
It’s subtracting the mean value of the state from the state action pair:In other words, this function calculates the extra reward we get if we take this action at that state compared to the mean reward we get at that state.The extra reward is what's beyond the expected value of that state. If A(s,a) > 0: our gradient is pushed in that direction.If A(s,a) < 0 (our action does worse than the average value of that state), our gradient is pushed in the opposite direction.The problem with implementing this advantage function is that it requires two value functions — Q(s,a) Q(s,a)Q(s,a) and V(s) V(s)V(s). Fortunately, we can use the TD error as a good estimator of the advantage function. Advantage Actor Critic (A2C) using Robotics Simulations with PyBullet 🤖 Now that you've studied the theory behind Advantage Actor Critic (A2C), you're ready to train your A2C agent using Stable-Baselines3 in robotic environments.Start the tutorial here 👉 https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit7/unit7.ipynbThe leaderboard to compare your results with your classmates 🏆 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard Conclusion Congrats on finishing this chapter! There was a lot of information. And congrats on finishing the tutorial. 🥳.It's normal if you still feel confused with all these elements. This was the same for me and for all people who studied RL.Take time to grasp the material before continuing. Look also at the additional reading materials we provided in this article and the syllabus to go deeper 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit7/README.mdDon't hesitate to train your agent in other environments. The best way to learn is to try things on your own!In the next unit, we will learn to improve Actor-Critic Methods with Proximal Policy Optimization.And don't forget to share with your friends who want to learn 🤗!Finally, with your feedback, we want to improve and update the course iteratively. If you have some, please fill this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9Keep learning, stay awesome 🤗,
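To recap the key formula of this unit: the advantage compares the action value with the state value, and the TD error gives an estimate of it that only requires a single value network.

```latex
% Advantage function and its TD-error estimate (only V is needed, not Q):
A(S_t, A_t) = Q(S_t, A_t) - V(S_t) \approx R_{t+1} + \gamma V(S_{t+1}) - V(S_t)
```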
https://huggingface.co/blog/ray-tune
Hyperparameter Search with Transformers and Ray Tune
Ray Project (Anyscale)
November 2, 2020
With cutting edge research implementations, thousands of trained models easily accessible, the Hugging Face transformers library has become critical to the success and growth of natural language processing today.For any machine learning model to achieve good performance, users often need to implement some form of parameter tuning. Yet, nearly everyone (1, 2) either ends up disregarding hyperparameter tuning or opting to do a simplistic grid search with a small search space.However, simple experiments are able to show the benefit of using an advanced tuning technique. Below is a recent experiment run on a BERT model from Hugging Face transformers on the RTE dataset. Genetic optimization techniques like PBT can provide large performance improvements compared to standard hyperparameter optimization techniques.AlgorithmBest Val Acc.Best Test Acc.Total GPU minTotal $ costGrid Search74%65.4%45 min$2.30Bayesian Optimization +Early Stop77%66.9%104 min$5.30Population-based Training78%70.5%48 min$2.45If you’re leveraging Transformers, you’ll want to have a way to easily access powerful hyperparameter tuning solutions without giving up the customizability of the Transformers framework.In the Transformers 3.1 release, Hugging Face Transformers and Ray Tune teamed up to provide a simple yet powerful integration. Ray Tune is a popular Python library for hyperparameter tuning that provides many state-of-the-art algorithms out of the box, along with integrations with the best-of-class tooling, such as Weights and Biases and tensorboard.To demonstrate this new Hugging Face + Ray Tune integration, we leverage the Hugging Face Datasets library to fine tune BERT on MRPC.To run this example, please first run:pip install "ray[tune]" transformers datasets scipy sklearn torchSimply plug in one of Ray’s standard tuning algorithms by just adding a few lines of code.from datasets import load_dataset, load_metricfrom transformers import (AutoModelForSequenceClassification, AutoTokenizer,Trainer, TrainingArguments)tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')dataset = load_dataset('glue', 'mrpc')metric = load_metric('glue', 'mrpc')def encode(examples):outputs = tokenizer(examples['sentence1'], examples['sentence2'], truncation=True)return outputsencoded_dataset = dataset.map(encode, batched=True)def model_init():return AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased', return_dict=True)def compute_metrics(eval_pred):predictions, labels = eval_predpredictions = predictions.argmax(axis=-1)return metric.compute(predictions=predictions, references=labels)# Evaluate during training and a bit more often# than the default to be able to prune bad trials early.# Disabling tqdm is a matter of preference.training_args = TrainingArguments("test", evaluation_strategy="steps", eval_steps=500, disable_tqdm=True)trainer = Trainer(args=training_args,tokenizer=tokenizer,train_dataset=encoded_dataset["train"],eval_dataset=encoded_dataset["validation"],model_init=model_init,compute_metrics=compute_metrics,)# Default objective is the sum of all metrics# when metrics are provided, so we have to maximize it.trainer.hyperparameter_search(direction="maximize", backend="ray", n_trials=10 # number of trials)By default, each trial will utilize 1 CPU, and optionally 1 GPU if available.You can leverage multiple GPUs for a parallel hyperparameter searchby passing in a resources_per_trial argument.You can also easily swap different parameter tuning algorithms such as HyperBand, Bayesian Optimization, 
Population-Based Training:To run this example, first run: pip install hyperoptfrom ray.tune.suggest.hyperopt import HyperOptSearchfrom ray.tune.schedulers import ASHASchedulertrainer = Trainer(args=training_args,tokenizer=tokenizer,train_dataset=encoded_dataset["train"],eval_dataset=encoded_dataset["validation"],model_init=model_init,compute_metrics=compute_metrics,)best_trial = trainer.hyperparameter_search(direction="maximize",backend="ray",# Choose among many libraries:# https://docs.ray.io/en/latest/tune/api_docs/suggestion.htmlsearch_alg=HyperOptSearch(metric="objective", mode="max"),# Choose among schedulers:# https://docs.ray.io/en/latest/tune/api_docs/schedulers.htmlscheduler=ASHAScheduler(metric="objective", mode="max"))It also works with Weights and Biases out of the box!Try it out today:pip install -U raypip install -U transformers datasetsCheck out the Hugging Face documentation and Discussion threadEnd-to-end example of using Hugging Face hyperparameter search for text classificationIf you liked this blog post, be sure to check out:Transformers + GLUE + Ray Tune exampleOur Weights and Biases report on Hyperparameter Optimization for TransformersThe simplest way to serve your NLP model from scratch
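Building on the trainer defined above, giving each trial its own GPU so that several trials run in parallel on a multi-GPU machine could look roughly like the sketch below; resources_per_trial is forwarded to Ray Tune, and the exact values are just an example.

```python
# Assumes the `trainer` (with model_init and compute_metrics) defined earlier in this post.
best_trial = trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    n_trials=10,
    # Forwarded to Ray Tune: with 1 GPU per trial, a 4-GPU machine
    # runs four trials concurrently.
    resources_per_trial={"cpu": 2, "gpu": 1},
)
print(best_trial)
```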
https://huggingface.co/blog/sagemaker-distributed-training-seq2seq
Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker
Philipp Schmid
April 8, 2021
In case you missed it: on March 25th we announced a collaboration with Amazon SageMaker to make it easier to create State-of-the-Art Machine Learning models, and ship cutting-edge NLP features faster. Together with the SageMaker team, we built 🤗 Transformers optimized Deep Learning Containers to accelerate training of Transformers-based models. Thanks AWS friends!🤗 🚀 With the new HuggingFace estimator in the SageMaker Python SDK, you can start training with a single line of code. The announcement blog post provides all the information you need to know about the integration, including a "Getting Started" example and links to documentation, examples, and features.listed again here:🤗 Transformers Documentation: Amazon SageMakerExample NotebooksAmazon SageMaker documentation for Hugging FacePython SDK SageMaker documentation for Hugging FaceDeep Learning ContainerIf you're not familiar with Amazon SageMaker: "Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models." [REF]TutorialWe will use the new Hugging Face DLCs and Amazon SageMaker extension to train a distributed Seq2Seq-transformer model on the summarization task using the transformers and datasets libraries, and then upload the model to huggingface.co and test it.As distributed training strategy we are going to use SageMaker Data Parallelism, which has been built into the Trainer API. To use data-parallelism we only have to define the distribution parameter in our HuggingFace estimator.# configuration for running training on smdistributed Data Paralleldistribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}In this tutorial, we will use an Amazon SageMaker Notebook Instance for running our training job. You can learn here how to set up a Notebook Instance.What are we going to do:Set up a development environment and install sagemakerChoose 🤗 Transformers examples/ scriptConfigure distributed training and hyperparametersCreate a HuggingFace estimator and start trainingUpload the fine-tuned model to huggingface.coTest inferenceModel and DatasetWe are going to fine-tune facebook/bart-large-cnn on the samsum dataset. "BART is sequence-to-sequence model trained with denoising as pretraining objective." [REF]The samsum dataset contains about 16k messenger-like conversations with summaries. {"id": "13818513","summary": "Amanda baked cookies and will bring Jerry some tomorrow.","dialogue": "Amanda: I baked cookies. 
Do you want some?\rJerry: Sure!\rAmanda: I'll bring you tomorrow :-)"}Set up a development environment and install sagemakerAfter our SageMaker Notebook Instance is running we can select either Jupyer Notebook or JupyterLab and create a new Notebook with the conda_pytorch_p36 kernel.Note: The use of Jupyter is optional: We could also launch SageMaker Training jobs from anywhere we have an SDK installed, connectivity to the cloud and appropriate permissions, such as a Laptop, another IDE or a task scheduler like Airflow or AWS Step Functions.After that we can install the required dependencies!pip install transformers "datasets[s3]" sagemaker --upgradeinstall git-lfs for model upload.!curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | sudo bash!sudo yum install git-lfs -y!git lfs installTo run training on SageMaker we need to create a sagemaker Session and provide an IAM role with the right permission. This IAM role will be later attached to the TrainingJob enabling it to download data, e.g. from Amazon S3.import sagemakersess = sagemaker.Session()role = sagemaker.get_execution_role()print(f"IAM role arn used for running training: {role}")print(f"S3 bucket used for storing artifacts: {sess.default_bucket()}")Choose 🤗 Transformers examples/ scriptThe 🤗 Transformers repository contains several examples/scripts for fine-tuning models on tasks from language-modeling to token-classification. In our case, we are using the run_summarization.py from the seq2seq/ examples. Note: you can use this tutorial as-is to train your model on a different examples script.Since the HuggingFace Estimator has git support built-in, we can specify a training script stored in a GitHub repository as entry_point and source_dir.We are going to use the transformers 4.4.2 DLC which means we need to configure the v4.4.2 as the branch to pull the compatible example scripts.#git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.4.2'} # v4.4.2 is referring to the `transformers_version you use in the estimator.# used due an missing package in v4.4.2 git_config = {'repo': 'https://github.com/philschmid/transformers.git','branch': 'master'} # v4.4.2 is referring to the `transformers_version you use in the estimator.Configure distributed training and hyperparametersNext, we will define our hyperparameters and configure our distributed training strategy. As hyperparameter, we can define any Seq2SeqTrainingArguments and the ones defined in run_summarization.py. # hyperparameters, which are passed into the training jobhyperparameters={'per_device_train_batch_size': 4,'per_device_eval_batch_size': 4,'model_name_or_path':'facebook/bart-large-cnn','dataset_name':'samsum','do_train':True,'do_predict': True,'predict_with_generate': True,'output_dir':'/opt/ml/model','num_train_epochs': 3,'learning_rate': 5e-5,'seed': 7,'fp16': True,}# configuration for running training on smdistributed Data Paralleldistribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}Since, we are using SageMaker Data Parallelism our total_batch_size will be per_device_train_batch_size * n_gpus.Create a HuggingFace estimator and start trainingThe last step before training is creating a HuggingFace estimator. The Estimator handles the end-to-end Amazon SageMaker training. 
We define which fine-tuning script should be used as entry_point, which instance_type should be used, and which hyperparameters are passed in.from sagemaker.huggingface import HuggingFace# create the Estimatorhuggingface_estimator = HuggingFace(entry_point='run_summarization.py', # scriptsource_dir='./examples/seq2seq', # relative path to examplegit_config=git_config,instance_type='ml.p3dn.24xlarge',instance_count=2,transformers_version='4.4.2',pytorch_version='1.6.0',py_version='py36',role=role,hyperparameters = hyperparameters,distribution = distribution)As instance_type we are using ml.p3dn.24xlarge, which contains 8x NVIDIA V100 GPUs, with an instance_count of 2. This means we are going to run training on 16 GPUs and a total_batch_size of 16*4=64. We are going to train a 400 Million Parameter model with a total_batch_size of 64, which is just wow.To start our training we call the .fit() method.# starting the training jobhuggingface_estimator.fit()2021-04-01 13:00:35 Starting - Starting the training job...2021-04-01 13:01:03 Starting - Launching requested ML instancesProfilerReport-1617282031: InProgress2021-04-01 13:02:23 Starting - Preparing the instances for training......2021-04-01 13:03:25 Downloading - Downloading input data...2021-04-01 13:04:04 Training - Downloading the training image...............2021-04-01 13:06:33 Training - Training image download completed. Training in progress........2021-04-01 13:16:47 Uploading - Uploading generated training model2021-04-01 13:27:49 Completed - Training job completedTraining seconds: 2882Billable seconds: 2882The training seconds are 2882 because they are multiplied by the number of instances. If we calculate 2882/2=1441, it is the duration from "Downloading the training image" to "Training job completed". Converted to real money, our training on 16 NVIDIA Tesla V100 GPUs for a State-of-the-Art summarization model comes down to ~$28.
To create a model_card we create a README.md in our local_path # read eval and test results with open(f"{local_path}/eval_results.json") as f:eval_results_raw = json.load(f)eval_results={}eval_results["eval_rouge1"] = eval_results_raw["eval_rouge1"]eval_results["eval_rouge2"] = eval_results_raw["eval_rouge2"]eval_results["eval_rougeL"] = eval_results_raw["eval_rougeL"]eval_results["eval_rougeLsum"] = eval_results_raw["eval_rougeLsum"]with open(f"{local_path}/test_results.json") as f:test_results_raw = json.load(f)test_results={}test_results["test_rouge1"] = test_results_raw["test_rouge1"]test_results["test_rouge2"] = test_results_raw["test_rouge2"]test_results["test_rougeL"] = test_results_raw["test_rougeL"]test_results["test_rougeLsum"] = test_results_raw["test_rougeLsum"]After we extract all the metrics we want to include we are going to create our README.md. Additionally to the automated generation of the results table we add the metrics manually to the metadata of our model card under model-indeximport jsonMODEL_CARD_TEMPLATE = """---language: entags:- sagemaker- bart- summarizationlicense: apache-2.0datasets:- samsummodel-index:- name: {model_name}results:- task: name: Abstractive Text Summarizationtype: abstractive-text-summarizationdataset:name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization" type: samsummetrics:- name: Validation ROGUE-1type: rogue-1value: 42.621- name: Validation ROGUE-2type: rogue-2value: 21.9825- name: Validation ROGUE-Ltype: rogue-lvalue: 33.034- name: Test ROGUE-1type: rogue-1value: 41.3174- name: Test ROGUE-2type: rogue-2value: 20.8716- name: Test ROGUE-Ltype: rogue-lvalue: 32.1337widget:- text: | Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker? Philipp: Sure you can use the new Hugging Face Deep Learning Container. Jeff: ok.Jeff: and how can I get started? Jeff: where can I find documentation? Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face ---## `{model_name}`This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.For more information look at:- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)## Hyperparameters{hyperparameters}## Usagefrom transformers import pipelinesummarizer = pipeline("summarization", model="philschmid/{model_name}")conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker? Philipp: Sure you can use the new Hugging Face Deep Learning Container. Jeff: ok.Jeff: and how can I get started? Jeff: where can I find documentation? Philipp: ok, ok you can find everything here. 
https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face '''summarizer(conversation)## Results| key | value || --- | ----- |{eval_table}{test_table}"""# Generate model card (todo: add more data from Trainer)model_card = MODEL_CARD_TEMPLATE.format(model_name=f"{hyperparameters['model_name_or_path'].split('/')[1]}-{hyperparameters['dataset_name']}",hyperparameters=json.dumps(hyperparameters, indent=4, sort_keys=True),eval_table="".join(f"| {k} | {v} |" for k, v in eval_results.items()),test_table="".join(f"| {k} | {v} |" for k, v in test_results.items()),)with open(f"{local_path}/README.md", "w") as f:f.write(model_card)After we have our unzipped model and model card located in my_bart_model we can use either the huggingface_hub SDK to create a repository and upload it to huggingface.co – or just go to https://huggingface.co/new and create a new repository and upload it.from getpass import getpassfrom huggingface_hub import HfApi, Repositoryhf_username = "philschmid" # your username on huggingface.cohf_email = "philipp@huggingface.co" # email used for commitrepository_name = f"{hyperparameters['model_name_or_path'].split('/')[1]}-{hyperparameters['dataset_name']}" # repository name on huggingface.copassword = getpass("Enter your password:") # creates a prompt for entering password# get hf tokentoken = HfApi().login(username=hf_username, password=password)# create repositoryrepo_url = HfApi().create_repo(token=token, name=repository_name, exist_ok=True)# create a Repository instancemodel_repo = Repository(use_auth_token=token,clone_from=repo_url,local_dir=local_path,git_user=hf_username,git_email=hf_email)# push model to the hubmodel_repo.push_to_hub()Test inferenceAfter we uploaded our model we can access it at https://huggingface.co/{hf_username}/{repository_name} print(f"https://huggingface.co/{hf_username}/{repository_name}")And use the "Hosted Inference API" widget to test it. https://huggingface.co/philschmid/bart-large-cnn-samsum
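Outside of the widget, you can also query the uploaded checkpoint locally with the pipeline API. This is a short sketch using the repository name from this post; the dialogue is the same example used in the model card above.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")

conversation = """Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok. And how can I get started? Where can I find documentation?
Philipp: ok, ok you can find everything here: https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face"""

print(summarizer(conversation)[0]["summary_text"])
```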
https://huggingface.co/blog/fastai
Welcome fastai to the Hugging Face Hub
Omar Espejel
May 6, 2022
Making neural nets uncool again... and sharing themFew have done as much as the fast.ai ecosystem to make Deep Learning accessible. Our mission at Hugging Face is to democratize good Machine Learning. Let's make exclusivity in access to Machine Learning, including pre-trained models, a thing of the past and let's push this amazing field even further.fastai is an open-source Deep Learning library that leverages PyTorch and Python to provide high-level components to train fast and accurate neural networks with state-of-the-art outputs on text, vision, and tabular data. However, fast.ai, the company, is more than just a library; it has grown into a thriving ecosystem of open source contributors and people learning about neural networks. As some examples, check out their book and courses. Join the fast.ai Discord and forums. It is a guarantee that you will learn by being part of their community!Because of all this, and more (the writer of this post started his journey thanks to the fast.ai course), we are proud to announce that fastai practitioners can now share and upload models to Hugging Face Hub with a single line of Python.👉 In this post, we will introduce the integration between fastai and the Hub. Additionally, you can open this tutorial as a Colab notebook.We want to thank the fast.ai community, notably Jeremy Howard, Wayde Gilliam, and Zach Mueller for their feedback 🤗. This blog is heavily inspired by the Hugging Face Hub section in the fastai docs.Why share to the Hub?The Hub is a central platform where anyone can share and explore models, datasets, and ML demos. It has the most extensive collection of Open Source models, datasets, and demos.Sharing on the Hub amplifies the impact of your fastai models by making them available for others to download and explore. You can also use transfer learning with fastai models; load someone else's model as the basis for your task.Anyone can access all the fastai models in the Hub by filtering the hf.co/models webpage by the fastai library, as in the image below.In addition to free model hosting and exposure to the broader community, the Hub has built-in version control based on git (git-lfs, for large files) and model cards for discoverability and reproducibility. For more information on navigating the Hub, see this introduction.Joining Hugging Face and installationTo share models in the Hub, you will need to have a user. Create it on the Hugging Face website.The huggingface_hub library is a lightweight Python client with utility functions to interact with the Hugging Face Hub. To push fastai models to the hub, you need to have some libraries pre-installed (fastai>=2.4, fastcore>=1.3.27 and toml). You can install them automatically by specifying ["fastai"] when installing huggingface_hub, and your environment is good to go:pip install huggingface_hub["fastai"]Creating a fastai LearnerHere we train the first model in the fastbook to identify cats 🐱. We fully recommended reading the entire fastbook.# Training of 6 lines in chapter 1 of the fastbook.from fastai.vision.all import *path = untar_data(URLs.PETS)/'images'def is_cat(x): return x[0].isupper()dls = ImageDataLoaders.from_name_func(path, get_image_files(path), valid_pct=0.2, seed=42,label_func=is_cat, item_tfms=Resize(224))learn = vision_learner(dls, resnet34, metrics=error_rate)learn.fine_tune(1)Sharing a Learner to the HubA Learner is a fastai object that bundles a model, data loaders, and a loss function. 
We will use the words Learner and Model interchangeably throughout this post.First, log in to the Hugging Face Hub. You will need to create a write token in your Account Settings. Then there are three options to log in:Type huggingface-cli login in your terminal and enter your token.If in a python notebook, you can use notebook_login.from huggingface_hub import notebook_loginnotebook_login()Use the token argument of the push_to_hub_fastai function.You can input push_to_hub_fastai with the Learner you want to upload and the repository id for the Hub in the format of "namespace/repo_name". The namespace can be an individual account or an organization you have write access to (for example, 'fastai/stanza-de'). For more details, refer to the Hub Client documentation.from huggingface_hub import push_to_hub_fastai# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"repo_id = "espejelomar/identify-my-cat"push_to_hub_fastai(learner=learn, repo_id=repo_id)The Learner is now in the Hub in the repo named espejelomar/identify-my-cat. An automatic model card is created with some links and next steps. When uploading a fastai Learner (or any other model) to the Hub, it is helpful to edit its model card (image below) so that others better understand your work (refer to the Hugging Face documentation).if you want to learn more about push_to_hub_fastai go to the Hub Client Documentation. There are some cool arguments you might be interested in 👀. Remember, your model is a Git repository with all the advantages that this entails: version control, commits, branches...Loading a Learner from the Hugging Face HubLoading a model from the Hub is even simpler. We will load our Learner, "espejelomar/identify-my-cat", and test it with a cat image (🦮?). This code is adapted fromthe first chapter of the fastbook.First, upload an image of a cat (or possibly a dog?). The Colab notebook with this tutorial uses ipywidgets to interactively upload a cat image (or not?). Here we will use this cute cat 🐅:Now let's load the Learner we just shared in the Hub and test it.from huggingface_hub import from_pretrained_fastai# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"repo_id = "espejelomar/identify-my-cat"learner = from_pretrained_fastai(repo_id)It works 👇!_,_,probs = learner.predict(img)print(f"Probability it's a cat: {100*probs[1].item():.2f}%")Probability it's a cat: 100.00%The Hub Client documentation includes addtional details on from_pretrained_fastai.Blurr to mix fastai and Hugging Face Transformers (and share them)![Blurr is] a library designed for fastai developers who want to train and deploy Hugging Face transformers - Blurr Docs.We will:Train a blurr Learner with the high-level Blurr API. 
It will load the distilbert-base-uncased model from the Hugging Face Hub and prepare a sequence classification model.Share it to the Hub with the namespace fastai/blurr_IMDB_distilbert_classification using push_to_hub_fastai.Load it with from_pretrained_fastai and try it with learner_blurr.predict().Collaboration and open-source are fantastic!First, install blurr and train the Learner.git clone https://github.com/ohmeow/blurr.gitcd blurrpip install -e ".[dev]"import torchimport transformersfrom fastai.text.all import *from blurr.text.data.all import *from blurr.text.modeling.all import *path = untar_data(URLs.IMDB_SAMPLE)model_path = Path("models")imdb_df = pd.read_csv(path / "texts.csv")learn_blurr = BlearnerForSequenceClassification.from_data(imdb_df, "distilbert-base-uncased", dl_kwargs={"bs": 4})learn_blurr.fit_one_cycle(1, lr_max=1e-3)Use push_to_hub_fastai to share with the Hub.from huggingface_hub import push_to_hub_fastai# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"repo_id = "fastai/blurr_IMDB_distilbert_classification"push_to_hub_fastai(learn_blurr, repo_id)Use from_pretrained_fastai to load a blurr model from the Hub.from huggingface_hub import from_pretrained_fastai# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"repo_id = "fastai/blurr_IMDB_distilbert_classification"learner_blurr = from_pretrained_fastai(repo_id)Try it with a couple sentences and review their sentiment (negative or positive) with learner_blurr.predict().sentences = ["This integration is amazing!","I hate this was not available before."]probs = learner_blurr.predict(sentences)print(f"Probability that sentence '{sentences[0]}' is negative is: {100*probs[0]['probs'][0]:.2f}%")print(f"Probability that sentence '{sentences[1]}' is negative is: {100*probs[1]['probs'][0]:.2f}%")Again, it works!Probability that sentence 'This integration is amazing!' is negative is: 29.46%Probability that sentence 'I hate this was not available before.' is negative is: 70.04%What's next?Take the fast.ai course (a new version is coming soon), follow Jeremy Howard and fast.ai on Twitter for updates, and start sharing your fastai models on the Hub 🤗. Or load one of the models that are already in the Hub.📧 Feel free to contact us via the Hugging Face Discord and share if you have an idea for a project. We would love to hear your feedback 💖.Would you like to integrate your library to the Hub?This integration is made possible by the huggingface_hub library. If you want to add your library to the Hub, we have a guide for you! Or simply tag someone from the Hugging Face team.A shout out to the Hugging Face team for all the work on this integration, in particular @osanseviero 🦙.Thank you fastlearners and hugging learners 🤗.
https://huggingface.co/blog/setfit-absa
SetFitABSA: Few-Shot Aspect Based Sentiment Analysis using SetFit
Ronen Laperdon, Tom Aarsen, Lewis Tunstall, Oren Pereg, Moshe Wasserblat
December 6, 2023
Aspect-Based Sentiment Analysis (ABSA) is the task of detecting the sentiment towards specific aspects within the text. For example, in the sentence, "This phone has a great screen, but its battery is too small", the aspect terms are "screen" and "battery" and the sentiment polarities towards them are Positive and Negative, respectively.ABSA is widely used by organizations for extracting valuable insights by analyzing customer feedback towards aspects of products or services in various domains. However, labeling training data for ABSA is a tedious task because of the fine-grained nature (token level) of manually identifying aspects within the training samples.Intel Labs and Hugging Face are excited to introduce SetFitABSA, a framework for few-shot training of domain-specific ABSA models; SetFitABSA is competitive and even outperforms generative models such as Llama2 and T5 in few-shot scenarios.Compared to LLM based methods, SetFitABSA has two unique advantages:🗣 No prompts needed: few-shot in-context learning with LLMs requires handcrafted prompts which make the results brittle, sensitive to phrasing and dependent on user expertise. SetFitABSA dispenses with prompts altogether by generating rich embeddings directly from a small number of labeled text examples.🏎 Fast to train: SetFitABSA requires only a handful of labeled training samples; in addition, it uses a simple training data format, eliminating the need for specialized tagging tools. This makes the data labeling process fast and easy.In this blog post, we'll explain how SetFitABSA works and how to train your very own models using the SetFit library. Let's dive in!How does it work?SetFitABSA's three-stage training processSetFitABSA is comprised of three steps. The first step extracts aspect candidates from the text, the second one yields the aspects by classifying the aspect candidates as aspects or non-aspects, and the final step associates a sentiment polarity to each extracted aspect. Steps two and three are based on SetFit models.Training1. Aspect candidate extractionIn this work we assume that aspects, which are usually features of products and services, are mostly nouns or noun compounds (strings of consecutive nouns). We use spaCy to tokenize and extract nouns/noun compounds from the sentences in the (few-shot) training set. Since not all extracted nouns/noun compounds are aspects, we refer to them as aspect candidates.2. Aspect/Non-aspect classificationNow that we have aspect candidates, we need to train a model to be able to distinguish between nouns that are aspects and nouns that are non-aspects. For this purpose, we need training samples with aspect/no-aspect labels. This is done by considering aspects in the training set as True aspects, while other non-overlapping candidate aspects are considered non-aspects and therefore labeled as False:Training sentence: "Waiters aren't friendly but the cream pasta is out of this world."Tokenized: [Waiters, are, n't, friendly, but, the, cream, pasta, is, out, of, this, world, .]Extracted aspect candidates: [Waiters, are, n't, friendly, but, the, cream, pasta, is, out, of, this, world, .]Gold labels from training set, in BIO format: [B-ASP, O, O, O, O, O, B-ASP, I-ASP, O, O, O, O, O, .]Generated aspect/non-aspect Labels: [Waiters, are, n't, friendly, but, the, cream, pasta, is, out, of, this, world, .]Now that we have all the aspect candidates labeled, how do we use it to train the candidate aspect classification model? 
Now that we have all the aspect candidates labeled, how do we use them to train the candidate aspect classification model? In other words, how do we use SetFit, a sentence classification framework, to classify individual tokens? This is the trick: each aspect candidate is concatenated with the entire training sentence to create a training instance using the following template:

aspect_candidate:training_sentence

Applying the template to the example above generates 3 training instances – two with True labels representing aspect training instances, and one with a False label representing a non-aspect training instance:

| Text | Label |
|------|-------|
| Waiters:Waiters aren't friendly but the cream pasta is out of this world. | 1 |
| cream pasta:Waiters aren't friendly but the cream pasta is out of this world. | 1 |
| world:Waiters aren't friendly but the cream pasta is out of this world. | 0 |
| ... | ... |

After generating the training instances, we are ready to use the power of SetFit to train a few-shot, domain-specific binary classifier that extracts aspects from an input text review. This will be our first fine-tuned SetFit model.

3. Sentiment polarity classification

Once the system extracts the aspects from the text, it needs to associate a sentiment polarity (e.g., positive, negative or neutral) with each aspect. For this purpose, we use a second SetFit model and train it in a similar fashion to the aspect extraction model, as illustrated in the following example:

Training sentence: "Waiters aren't friendly but the cream pasta is out of this world."
Tokenized: [Waiters, are, n't, friendly, but, the, cream, pasta, is, out, of, this, world, .]
Gold labels from training set: [NEG, O, O, O, O, O, POS, POS, O, O, O, O, O, O]

| Text | Label |
|------|-------|
| Waiters:Waiters aren't friendly but the cream pasta is out of this world. | NEG |
| cream pasta:Waiters aren't friendly but the cream pasta is out of this world. | POS |
| ... | ... |

Note that, as opposed to the aspect extraction model, we don't include non-aspects in this training set, because the goal is to classify the sentiment polarity towards real aspects only.
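To illustrate how a single annotated sentence expands into the two training sets just described, here is a simplified, hypothetical sketch. In practice you never build these pairs yourself; the AbsaTrainer shown later works directly from the simple tabular training format described in the "Training your own model" section.

# A simplified sketch (not the SetFit library's internal code) of how the
# template "aspect_candidate:training_sentence" turns one annotated sentence
# into training pairs for the two SetFit models.

sentence = "Waiters aren't friendly but the cream pasta is out of this world."
candidates = ["Waiters", "cream pasta", "world"]          # from the extraction step sketched earlier
gold_polarity = {"Waiters": "NEG", "cream pasta": "POS"}  # gold aspect -> polarity

aspect_pairs = []    # (text, label) pairs for the aspect/non-aspect model
polarity_pairs = []  # (text, label) pairs for the polarity model

for candidate in candidates:
    text = f"{candidate}:{sentence}"                       # apply the template
    if candidate in gold_polarity:
        aspect_pairs.append((text, 1))                           # true aspect
        polarity_pairs.append((text, gold_polarity[candidate]))  # its polarity
    else:
        aspect_pairs.append((text, 0))                     # non-aspect; omitted from the polarity set

print(len(aspect_pairs), len(polarity_pairs))              # 3 2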
Running inference

At inference time, the test sentence passes through the spaCy aspect candidate extraction phase, resulting in test instances that follow the template aspect_candidate:test_sentence. Next, non-aspects are filtered out by the aspect/non-aspect classifier. Finally, the extracted aspects are fed to the sentiment polarity classifier, which predicts the sentiment polarity per aspect.

In practice, this means the model can receive normal text as input and output aspects and their sentiments:

Model Input: "their dinner specials are fantastic."
Model Output: [{'span': 'dinner specials', 'polarity': 'positive'}]

Benchmarking

SetFitABSA was benchmarked against the recent state-of-the-art work by AWS AI Labs and Salesforce AI Research that fine-tunes T5 and GPT2 using prompts. To get a more complete picture, we also compare our model to the Llama-2-chat model using in-context learning. We use the popular Laptop14 and Restaurant14 ABSA datasets from the Semantic Evaluation Challenge 2014 (SemEval14). SetFitABSA is evaluated both on the intermediate task of aspect term extraction (SB1) and on the full ABSA task of aspect extraction along with sentiment polarity prediction (SB1+SB2).

Model size comparison

| Model | Size (params) |
|-------|---------------|
| Llama-2-chat | 7B |
| T5-base | 220M |
| GPT2-base | 124M |
| GPT2-medium | 355M |
| SetFit (MPNet) | 2x 110M |

Note that for the SB1 task SetFitABSA uses 110M parameters, for SB2 another 110M parameters, and for the full SB1+SB2 task it consists of 220M parameters.

Performance comparison

We see a clear advantage of SetFitABSA when the number of training instances is low, despite being 2x smaller than T5 and 3x smaller than GPT2-medium. Even when compared to Llama 2, which is 64x larger, the performance is on par or better.

Figure: SetFitABSA vs GPT2
Figure: SetFitABSA vs T5

Note that for a fair comparison, we evaluated SetFitABSA on exactly the same dataset splits used by the various baselines (GPT2, T5, etc.).

Figure: SetFitABSA vs Llama2

We notice that increasing the number of in-context training samples for Llama2 did not result in improved performance. This phenomenon has been shown for ChatGPT before, and we think it should be further investigated.

Training your own model

SetFitABSA is part of the SetFit framework. To train an ABSA model, start by installing setfit with the absa option enabled:

python -m pip install -U "setfit[absa]"

Additionally, we must install the en_core_web_lg spaCy model:

python -m spacy download en_core_web_lg

We continue by preparing the training set. The training set is a Dataset with the columns text, span, label and ordinal:

- text: The full sentence or text containing the aspects.
- span: An aspect from the full sentence. Can be multiple words, for example: "food".
- label: The (polarity) label corresponding to the aspect span, for example: "positive". The label names can be chosen arbitrarily when tagging the collected training data.
- ordinal: If the aspect span occurs multiple times in the text, this ordinal is the index of the intended occurrence. It is often just 0, as each aspect usually appears only once in the input text.

For example, the training text "Restaurant with wonderful food but worst service I ever seen" contains two aspects, so it adds two rows to the training set table:

| Text | Span | Label | Ordinal |
|------|------|-------|---------|
| Restaurant with wonderful food but worst service I ever seen | food | positive | 0 |
| Restaurant with wonderful food but worst service I ever seen | service | negative | 0 |
| ... | ... | ... | ... |
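If you are labeling your own data, a training set in this format can be assembled directly with 🤗 Datasets. As a minimal illustration, the two rows above could be created like this:

from datasets import Dataset

# A tiny hand-made training set in the text/span/label/ordinal format described above
train_dataset = Dataset.from_dict({
    "text": [
        "Restaurant with wonderful food but worst service I ever seen",
        "Restaurant with wonderful food but worst service I ever seen",
    ],
    "span": ["food", "service"],
    "label": ["positive", "negative"],
    "ordinal": [0, 0],  # each span occurs once in its text, so occurrence index 0 is meant
})
print(train_dataset)
# Dataset with features ['text', 'span', 'label', 'ordinal'] and num_rows: 2

SetFitABSA is designed to work with only a small number of such labeled rows.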
Once we have the training dataset ready, we can create an ABSA trainer and execute the training. SetFit models are fairly efficient to train, but as SetFitABSA involves two models trained sequentially, it is recommended to use a GPU to keep the training time low. For example, the following training script trains a full SetFitABSA model in about 10 minutes on the free Google Colab T4 GPU.

from datasets import load_dataset
from setfit import AbsaTrainer, AbsaModel

# Create a training dataset as above
# For convenience we will use an already prepared dataset here
train_dataset = load_dataset("tomaarsen/setfit-absa-semeval-restaurants", split="train[:128]")

# Create a model with a chosen sentence transformer from the Hub
model = AbsaModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Create a trainer:
trainer = AbsaTrainer(model, train_dataset=train_dataset)
# Execute training:
trainer.train()

That's it! We have trained a domain-specific ABSA model. We can save our trained model to disk or upload it to the Hugging Face Hub. Bear in mind that the model contains two submodels, so each is given its own path:

model.save_pretrained(
    "models/setfit-absa-model-aspect",
    "models/setfit-absa-model-polarity"
)
# or
model.push_to_hub(
    "tomaarsen/setfit-absa-paraphrase-mpnet-base-v2-restaurants-aspect",
    "tomaarsen/setfit-absa-paraphrase-mpnet-base-v2-restaurants-polarity"
)

Now we can use our trained model for inference. We start by loading the model:

from setfit import AbsaModel

model = AbsaModel.from_pretrained(
    "tomaarsen/setfit-absa-paraphrase-mpnet-base-v2-restaurants-aspect",
    "tomaarsen/setfit-absa-paraphrase-mpnet-base-v2-restaurants-polarity"
)

Then, we use the predict API to run inference. The input is a list of strings, each representing a textual review:

preds = model.predict([
    "Best pizza outside of Italy and really tasty.",
    "The food variations are great and the prices are absolutely fair.",
    "Unfortunately, you have to expect some waiting time and get a note with a waiting number if it should be very full."
])
print(preds)
# [
#     [{'span': 'pizza', 'polarity': 'positive'}],
#     [{'span': 'food variations', 'polarity': 'positive'}, {'span': 'prices', 'polarity': 'positive'}],
#     [{'span': 'waiting time', 'polarity': 'neutral'}, {'span': 'waiting number', 'polarity': 'neutral'}]
# ]

For more details on training options, saving and loading models, and inference, see the SetFit docs.

References

Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar, 2014. "SemEval-2014 Task 4: Aspect Based Sentiment Analysis". In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35.
Siddharth Varia, Shuai Wang, Kishaloy Halder, Robert Vacareanu, Miguel Ballesteros, Yassine Benajiba, Neha Anna John, Rishita Anubhai, Smaranda Muresan, and Dan Roth, 2023. "Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis". https://arxiv.org/abs/2210.06629
Ehsan Hosseini-Asl, Wenhao Liu, and Caiming Xiong, 2022. "A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis". https://arxiv.org/abs/2204.05356
Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, and Oren Pereg, 2022. "Efficient Few-Shot Learning Without Prompts". https://arxiv.org/abs/2209.11055