Here is a detailed view on how the remaining heads are distributed in the network.
|              | Pruned    | Dense     | Diff      |
| ------------ | --------- | --------- | --------- |
| **accuracy** | **91.17** | **92.7**  | **-1.53** |
## Example Usage
Install `nn_pruning`: it contains the optimization script, which simply packs the linear layers into smaller ones by removing empty rows and columns.

`pip install nn_pruning`

Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` once the pipeline has loaded.

```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

cls_pipeline = pipeline(
    "text-classification",
    model="echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid",
    tokenizer="echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid",
)

print(f"Parameters count (includes only head pruning, no feed forward pruning)={int(cls_pipeline.model.num_parameters() / 1E6)}M")
cls_pipeline.model = optimize_model(cls_pipeline.model, "dense")
print(f"Parameters count after optimization={int(cls_pipeline.model.num_parameters() / 1E6)}M")
predictions = cls_pipeline("This restaurant is awesome")
print(predictions)
```
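To build intuition for what the packing step does (this is a minimal NumPy sketch of the idea, not the actual `nn_pruning` internals), consider a linear layer whose weight matrix has rows zeroed out by structured pruning. Dropping those empty rows yields a smaller matrix that reproduces the surviving outputs exactly:

```python
import numpy as np

# A minimal sketch of "packing" a pruned linear layer (illustration only,
# not the nn_pruning implementation): drop weight rows that pruning zeroed out.
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 8))    # weight of a Linear(in=8, out=6) layer
W[[1, 3, 4]] = 0.0                 # structured pruning left these rows empty

kept = np.abs(W).sum(axis=1) != 0  # rows that still carry weights
W_packed = W[kept]                 # smaller, equivalent weight matrix

x = rng.standard_normal((2, 8))
# The packed layer reproduces the surviving outputs of the original layer.
assert np.allclose((x @ W.T)[:, kept], x @ W_packed.T)
print(W.shape, "->", W_packed.shape)
```

Because the dropped rows contribute nothing to the output, the smaller matrix does strictly less work for the same result, which is where the speedup comes from.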