πŸ€— Optimum Habana

πŸ€— Optimum Habana is the interface between the πŸ€— Transformers and πŸ€— Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools that enable easy model loading, training, and inference in single- and multi-HPU settings for various downstream tasks.
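As a sketch of what that interface looks like, the library ships drop-in replacements for the πŸ€— Transformers `Trainer` and `TrainingArguments` classes. The outline below is a minimal, hedged example assuming `optimum-habana` is installed on a Gaudi machine; the model and Gaudi-configuration names are illustrative Hub identifiers:

```python
# Minimal sketch of fine-tuning with Optimum Habana.
# Assumes `pip install optimum[habana]` on a Gaudi machine; the
# "Habana/bert-base-uncased" Gaudi configuration name is illustrative.

def build_trainer(model, train_dataset, eval_dataset, tokenizer):
    """Swap Trainer/TrainingArguments for their Gaudi counterparts;
    the rest of a standard Transformers training script is unchanged."""
    from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

    # Gaudi configuration (e.g. mixed-precision settings) loaded from the Hub
    gaudi_config = GaudiConfig.from_pretrained("Habana/bert-base-uncased")

    training_args = GaudiTrainingArguments(
        output_dir="./output",
        use_habana=True,       # run on HPU
        use_lazy_mode=True,    # lazy-mode graph execution
        per_device_train_batch_size=8,
        num_train_epochs=3,
    )
    return GaudiTrainer(
        model=model,
        gaudi_config=gaudi_config,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        tokenizer=tokenizer,
    )
```

Calling `build_trainer(...).train()` then behaves like the familiar `Trainer.train()`, but on HPUs.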

The table below shows which model architectures, tasks and device distributions are currently supported for πŸ€— Optimum Habana:

| Architecture | Single Card | Multi Card | DeepSpeed | Tasks |
|:-----------------|:-----------:|:----------:|:---------:|:------|
| BERT | βœ… | βœ… | βœ… | text classification, question answering, language modeling |
| RoBERTa | βœ… | βœ… | βœ… | question answering, language modeling |
| ALBERT | βœ… | βœ… | βœ… | question answering, language modeling |
| DistilBERT | βœ… | βœ… | βœ… | question answering, language modeling |
| GPT2 | βœ… | βœ… | βœ… | language modeling |
| T5 | βœ… | βœ… | βœ… | summarization, translation |
| ViT | βœ… | βœ… | βœ… | image classification |
| Swin | βœ… | βœ… | βœ… | image classification |
| Wav2Vec2 | βœ… | βœ… | βœ… | audio classification, speech recognition |
| Stable Diffusion | βœ… | ❌ | ❌ | text-to-image generation |
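For the text-to-image generation task, Optimum Habana provides Gaudi-specific counterparts to the πŸ€— Diffusers pipeline classes. A minimal, hedged sketch, assuming `optimum-habana` is installed on a Gaudi machine (the model and Gaudi-configuration names are illustrative Hub identifiers):

```python
# Sketch of single-HPU Stable Diffusion inference with Optimum Habana.
# Assumes `pip install optimum[habana]` and a Gaudi machine; the model
# name and "Habana/stable-diffusion" Gaudi config are illustrative.

def generate_images(prompt: str):
    from optimum.habana.diffusers import (
        GaudiDDIMScheduler,
        GaudiStableDiffusionPipeline,
    )

    model_name = "runwayml/stable-diffusion-v1-5"
    scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
    pipeline = GaudiStableDiffusionPipeline.from_pretrained(
        model_name,
        scheduler=scheduler,
        use_habana=True,      # run on HPU
        use_hpu_graphs=True,  # capture HPU graphs for faster inference
        gaudi_config="Habana/stable-diffusion",
    )
    # Returns a list of PIL images
    return pipeline(prompt=prompt, num_images_per_prompt=4).images
```

As the table shows, this pipeline currently targets single-card runs only.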
Other models and tasks supported by the πŸ€— Transformers library may also work. You can refer to the Quickstart for examples of how to use them with πŸ€— Optimum Habana. In addition, this page explains how to modify any example from the πŸ€— Transformers library to make it work with πŸ€— Optimum Habana.

    Tutorials

    Learn the basics and become familiar with training transformers on HPUs with πŸ€— Optimum. Start here if you are using πŸ€— Optimum Habana for the first time!

    How-to guides

    Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use πŸ€— Optimum Habana to solve real-world problems.

    Conceptual guides

    High-level explanations to help you build a better understanding of important topics such as HPUs.

    Reference

    Technical descriptions of how the classes and methods of πŸ€— Optimum Habana work.