This guide focuses on running inference with large models efficiently on CPU.
BetterTransformer for faster inference

We have recently integrated BetterTransformer for faster inference on CPU for text, image and audio models. Check the documentation about this integration here for more details.
For a gentle introduction to TorchScript, see the Introduction to PyTorch TorchScript tutorial.
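As a minimal sketch of what TorchScript does, the snippet below scripts a small tensor function into a serializable, optimizable graph. The function name and values are illustrative, not from the tutorial:

```python
import torch

# Scripting compiles this Python function into a TorchScript graph
# that can be serialized and optimized independently of Python.
@torch.jit.script
def scaled_add(x: torch.Tensor, y: torch.Tensor, alpha: float) -> torch.Tensor:
    return x + alpha * y

out = scaled_add(torch.ones(3), torch.ones(3), 2.0)
# out is a tensor of 3.0s
```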
See the IPEX Graph Optimization documentation for more detailed information.
IPEX releases follow PyTorch releases; check the IPEX installation guide for the available installation approaches.
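A typical installation is a single pip command; pick the IPEX build matching your installed PyTorch version (this is a sketch, consult the IPEX installation page for version pinning and CPU/XPU variants):

```shell
# Install Intel Extension for PyTorch for CPU.
# Version should match your PyTorch release; see the IPEX installation guide.
pip install intel_extension_for_pytorch
```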
For PyTorch >= 1.14.0, JIT-mode can benefit any model for prediction and evaluation, since dict input is supported in jit.trace.
For PyTorch < 1.14.0, JIT-mode can benefit models whose forward parameter order matches the tuple input order in jit.trace, such as question-answering models. When the forward parameter order does not match the tuple input order in jit.trace, as with text-classification models, jit.trace fails; this is caught with an exception so that the model falls back to eager mode, and logging is used to notify users.
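The trace-with-fallback pattern above can be sketched as follows. The toy model here is a hypothetical stand-in for a Transformers model whose forward parameter order matches the traced tuple order:

```python
import torch
import torch.nn as nn

# Hypothetical toy model: its forward takes (input_ids, attention_mask)
# positionally, matching the order of the tuple passed to jit.trace.
class ToyModel(nn.Module):
    def forward(self, input_ids, attention_mask):
        return (input_ids * attention_mask).sum(dim=-1)

model = ToyModel().eval()
input_ids = torch.ones(1, 8, dtype=torch.long)
attention_mask = torch.ones(1, 8, dtype=torch.long)

try:
    # Tuple inputs must follow forward()'s parameter order
    # (required on PyTorch < 1.14.0, where dict input is not supported).
    traced = torch.jit.trace(model, (input_ids, attention_mask), strict=False)
except Exception:
    # If tracing fails (e.g. a parameter-order mismatch), fall back
    # to the original eager-mode model and continue.
    traced = model

output = traced(input_ids, attention_mask)
```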
Take the Transformers question-answering task as an example of these use cases.
Inference using jit mode on CPU:
python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
--jit_mode_eval
Inference with IPEX using jit mode on CPU:
python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
--use_ipex \
--jit_mode_eval