Commit 66e4329 (parent d568bc2) by echarlaix: Update model card
---
language: en
license: apache-2.0
datasets:
- sst2
- glue
metrics:
- accuracy
tags:
- text-classification
- neural-compressor
- int8
---

# Dynamically quantized and pruned DistilBERT base uncased finetuned SST-2

## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details
**Model Description:** This model is a [DistilBERT model fine-tuned on SST-2](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) that was dynamically quantized and pruned with a magnitude pruning strategy, reaching a sparsity of 10%, using [optimum-intel](https://github.com/huggingface/optimum-intel) together with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details on the original model, see the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model card.
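Magnitude pruning removes the weights with the smallest absolute values until a target sparsity is reached. A minimal NumPy sketch of the idea (illustrative only, not the Neural Compressor implementation):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold is the k-th smallest absolute value in the tensor.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([0.5, -0.01, 0.3, 0.02, -0.8, 0.05, 0.9, -0.04, 0.6, 0.07])
pruned = magnitude_prune(w, 0.1)  # 10% sparsity: 1 of 10 weights zeroed
```

In practice the pruning is applied per layer during or after fine-tuning; this snippet only shows the selection criterion.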

## How to Get Started With the Model

To load the quantized model and run inference using the Transformers [pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines), you can do as follows:

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel.neural_compressor import IncQuantizedModelForSequenceClassification

model_id = "echarlaix/distilbert-sst2-inc-dynamic-quantization-magnitude-pruning-0.1"
model = IncQuantizedModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
cls_pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = "He's a dreadful magician."
outputs = cls_pipe(text)
```
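For intuition about the "dynamically quantized" part: dynamic INT8 quantization derives a scale from each tensor's observed value range at inference time, rather than from a calibration dataset. A simplified symmetric per-tensor sketch (illustrative only, not Neural Compressor's actual scheme):

```python
import numpy as np

def quantize_dynamic(x: np.ndarray):
    # Symmetric per-tensor INT8: scale chosen from the runtime max-abs value.
    scale = float(np.abs(x).max()) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype(np.float32)
q, scale = quantize_dynamic(x)
x_hat = dequantize(q, scale)
max_err = float(np.abs(x - x_hat).max())  # rounding error is bounded by about scale / 2
```

Because the scale is recomputed per input, dynamic quantization needs no calibration step, at the cost of a small runtime overhead.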