Optimum Habana is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy and fast model loading, training and inference on single- and multi-HPU settings for different downstream tasks. Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at hf.co/hardware/habana.
This model only contains the GaudiConfig file for running the bert-large-uncased-whole-word-masking model on Habana's Gaudi processors (HPU). It contains no model weights, only a GaudiConfig.
It enables you to specify the following parameters (a loading sketch follows the list):
- `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
- `hmp_opt_level`: optimization level for HMP, see here for a detailed explanation
- `hmp_bf16_ops`: list of operators that should run in bf16
- `hmp_fp32_ops`: list of operators that should run in fp32
- `use_fused_adam`: whether to use Habana's custom AdamW implementation
- `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator
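As a rough sketch of how such a GaudiConfig can be loaded and inspected with Optimum Habana: the repository name `Habana/bert-large-uncased-whole-word-masking` and the attribute names below are assumptions based on the list above and may differ across optimum-habana versions.

```python
# Illustrative sketch only: load this repository's GaudiConfig and inspect
# the fields listed above. Assumes `optimum-habana` is installed and that
# the attribute names match the installed version.
from optimum.habana import GaudiConfig

gaudi_config = GaudiConfig.from_pretrained("Habana/bert-large-uncased-whole-word-masking")

print(gaudi_config.use_habana_mixed_precision)  # whether HMP is enabled
print(gaudi_config.hmp_bf16_ops)                # operators that run in bf16
print(gaudi_config.hmp_fp32_ops)                # operators kept in fp32
print(gaudi_config.use_fused_adam)              # Habana's custom AdamW
print(gaudi_config.use_fused_clip_norm)         # fused gradient norm clipping
```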
The model is instantiated the same way as in the Transformers library. The only difference is that there are a few new training arguments specific to HPUs.
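For illustration, here is a minimal sketch of what that looks like in code, assuming the `GaudiConfig`, `GaudiTrainer`, and `GaudiTrainingArguments` classes from `optimum.habana` and a tokenized SQuAD dataset prepared beforehand as `train_dataset` (hypothetical placeholder): model loading is plain Transformers, and only the training arguments and GaudiConfig are HPU-specific.

```python
# Minimal sketch, not a full training script: the model is loaded exactly as
# with Transformers; the HPU-specific pieces are the GaudiTrainingArguments
# flags and the GaudiConfig passed to GaudiTrainer.
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking")
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
gaudi_config = GaudiConfig.from_pretrained("Habana/bert-large-uncased-whole-word-masking")

training_args = GaudiTrainingArguments(
    output_dir="/tmp/squad/",
    use_habana=True,      # run on HPU
    use_lazy_mode=True,   # lazy execution mode
    per_device_train_batch_size=24,
    learning_rate=3e-5,
    num_train_epochs=2,
)

trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,
    args=training_args,
    train_dataset=train_dataset,  # assumed: a SQuAD dataset already tokenized for QA
    tokenizer=tokenizer,
)
trainer.train()
```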
Here is a question-answering example script to fine-tune a model on SQuAD. You can run it with BERT Large using the following command:
```bash
python run_qa.py \
  --model_name_or_path bert-large-uncased-whole-word-masking \
  --gaudi_config_name gaudi_config_name_or_path \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 24 \
  --per_device_eval_batch_size 8 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/squad/ \
  --use_habana \
  --use_lazy_mode \
  --throughput_warmup_steps 2
```
Check out the documentation for more advanced usage and examples.