yujiepan committed
Commit 0cbcfe1 (0 parents)

upload model
.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ trainer_state.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,108 @@
+ ---
+ language:
+ - en
+ tags:
+ - generated_from_trainer
+ datasets:
+ - glue
+ metrics:
+ - accuracy
+ model-index:
+ - name: yujiepan/bert-base-uncased-sst2-int8-unstructured80-17epoch
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: GLUE SST2
+       type: glue
+       config: sst2
+       split: validation
+       args: sst2
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.91284
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Joint magnitude pruning, quantization and distillation on BERT-base/SST-2
+
+ This model applies unstructured magnitude pruning, quantization and knowledge distillation jointly while fine-tuning on the GLUE SST-2 dataset.
+ It achieves the following results on the evaluation set:
+ - Torch loss: 0.3858
+ - Torch accuracy: 0.9128
+ - OpenVINO IR accuracy: 0.9128
+ - Sparsity in transformer block linear layers: 0.80
+
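+ ## Usage (OpenVINO inference)
+
+ A minimal sketch of running the exported OpenVINO IR in this repo, assuming the `optimum-intel` OpenVINO integration is installed as in the Setup section below (the `OVModelForSequenceClassification` call and the example sentence are illustrative additions, not part of the original training script):
+
+ ```python
+ from transformers import AutoTokenizer, pipeline
+ from optimum.intel.openvino import OVModelForSequenceClassification
+
+ model_id = "yujiepan/bert-base-uncased-sst2-int8-unstructured80-17epoch"
+ # Loads openvino_model.xml / openvino_model.bin from this repo.
+ model = OVModelForSequenceClassification.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
+ print(classifier("a charming and often affecting journey"))
+ ```
+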
+ ## Setup
+
+ ```
+ conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
+ git clone https://github.com/yujiepan-work/optimum-intel.git
+ cd optimum-intel
+ git checkout -b "magnitude-pruning" 01927af543eaea8678671bf8f4eb78fdb29f8930
+ pip install -e ".[openvino,nncf]"
+
+ cd examples/openvino/text-classification/
+ pip install -r requirements.txt
+ pip install wandb # optional
+ ```
+
+ ## NNCF config
+
+ See `nncf_config.json` in this repo.
+
+
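+ In short, the config combines 8-bit quantization with magnitude sparsity that ramps from 0 to 0.8 by epoch 9 and is frozen after epoch 10. As a rough illustration only, assuming NNCF's polynomial schedule takes the usual form `level = target + (init - target) * (1 - progress) ** power`:
+
+ ```python
+ # Hypothetical sketch of the polynomial sparsity ramp described in nncf_config.json;
+ # the exact per-step curve is computed inside NNCF, not by this snippet.
+ init, target, power, target_epoch = 0.0, 0.8, 3, 9
+
+ def sparsity_at(epoch: float) -> float:
+     progress = min(epoch / target_epoch, 1.0)
+     return target + (init - target) * (1.0 - progress) ** power
+
+ for e in range(11):
+     print(f"epoch {e:2d}: sparsity ~ {sparsity_at(e):.3f}")
+ ```
+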
+ ## Run
+
+ We use a single GPU for training.
+
+ ```
+ NNCFCFG=/path/to/nncf/config
+ python run_glue.py \
+     --lr_scheduler_type cosine_with_restarts \
+     --cosine_cycle_ratios 11,6 \
+     --cosine_cycle_decays 1,1 \
+     --save_best_model_after_epoch -1 \
+     --save_best_model_after_sparsity 0.7999 \
+     --model_name_or_path textattack/bert-base-uncased-SST-2 \
+     --teacher_model_or_path yoshitomo-matsubara/bert-large-uncased-sst2 \
+     --distillation_temperature 2 \
+     --task_name sst2 \
+     --nncf_compression_config $NNCFCFG \
+     --distillation_weight 0.95 \
+     --output_dir /tmp/bert-base-uncased-sst2-int8-unstructured80-17epoch \
+     --run_name bert-base-uncased-sst2-int8-unstructured80-17epoch \
+     --overwrite_output_dir \
+     --do_train \
+     --do_eval \
+     --max_seq_length 128 \
+     --per_device_train_batch_size 32 \
+     --per_device_eval_batch_size 32 \
+     --learning_rate 5e-05 \
+     --optim adamw_torch \
+     --num_train_epochs 17 \
+     --logging_steps 1 \
+     --evaluation_strategy steps \
+     --eval_steps 250 \
+     --save_strategy steps \
+     --save_steps 250 \
+     --save_total_limit 1 \
+     --fp16 \
+     --seed 1
+ ```
+
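+ The `--distillation_weight 0.95` and `--distillation_temperature 2` flags control how strongly the student follows the BERT-large teacher. As a generic illustration only (not necessarily the exact loss implemented in `run_glue.py`), a temperature-scaled distillation loss combined with the hard-label task loss typically looks like:
+
+ ```python
+ import torch.nn.functional as F
+
+ def distillation_loss(student_logits, teacher_logits, labels, weight=0.95, temperature=2.0):
+     """Generic sketch: weighted sum of soft-target KL and hard-label cross-entropy."""
+     soft = F.kl_div(
+         F.log_softmax(student_logits / temperature, dim=-1),
+         F.softmax(teacher_logits / temperature, dim=-1),
+         reduction="batchmean",
+     ) * (temperature ** 2)
+     hard = F.cross_entropy(student_logits, labels)
+     return weight * soft + (1.0 - weight) * hard
+ ```
+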
+ The best model checkpoint is stored in the `best_model` folder. Only that checkpoint folder, together with some config files, is uploaded to this repo.
+
+
+ ### Framework versions
+
+ - Transformers 4.26.0
+ - PyTorch 1.13.1+cu116
+ - Datasets 2.8.0
+ - Tokenizers 0.13.2
+
+ For a full description of the environment, please refer to `pip-requirements.txt` and `conda-requirements.txt`.
compressed_graph.dot ADDED
The diff for this file is too large to render. See raw diff
 
conda-requirements.txt ADDED
@@ -0,0 +1,229 @@
+ # This file may be used to create an environment using:
+ # $ conda create --name <env> --file <this file>
+ # platform: linux-64
+ _libgcc_mutex=0.1=main
+ _openmp_mutex=5.1=1_gnu
+ addict=2.4.0=pypi_0
+ aiofiles=22.1.0=pypi_0
+ aiohttp=3.8.3=pypi_0
+ aiosignal=1.3.1=pypi_0
+ aiosqlite=0.18.0=pypi_0
+ alembic=1.9.2=pypi_0
+ anyio=3.6.2=pypi_0
+ appdirs=1.4.4=pypi_0
+ argon2-cffi=21.3.0=pypi_0
+ argon2-cffi-bindings=21.2.0=pypi_0
+ arrow=1.2.3=pypi_0
+ asttokens=2.2.1=pyhd8ed1ab_0
+ async-timeout=4.0.2=pypi_0
+ attrs=22.1.0=pypi_0
+ audioread=3.0.0=pypi_0
+ autoflake=2.0.0=pypi_0
+ autograd=1.5=pypi_0
+ autopep8=2.0.1=pypi_0
+ babel=2.11.0=pypi_0
+ backcall=0.2.0=pyh9f0ad1d_0
+ backports=1.0=pyhd8ed1ab_3
+ backports.functools_lru_cache=1.6.4=pyhd8ed1ab_0
+ beautifulsoup4=4.11.2=pypi_0
+ black=22.12.0=pypi_0
+ bleach=6.0.0=pypi_0
+ ca-certificates=2022.12.7=ha878542_0
+ certifi=2022.12.7=pypi_0
+ cffi=1.15.1=pypi_0
+ charset-normalizer=2.1.1=pypi_0
+ click=8.1.3=pypi_0
+ cma=2.7.0=pypi_0
+ cmaes=0.9.1=pypi_0
+ coloredlogs=15.0.1=pypi_0
+ colorlog=6.7.0=pypi_0
+ comm=0.1.2=pyhd8ed1ab_0
+ cudatoolkit=11.6.0=habf752d_9
+ cycler=0.11.0=pypi_0
+ datasets=2.8.0=pypi_0
+ debugpy=1.6.4=pypi_0
+ decorator=5.1.1=pyhd8ed1ab_0
+ defusedxml=0.7.1=pypi_0
+ dill=0.3.6=pypi_0
+ docker-pycreds=0.4.0=pypi_0
+ entrypoints=0.4=pyhd8ed1ab_0
+ evaluate=0.4.0=pypi_0
+ executing=1.2.0=pyhd8ed1ab_0
+ fastjsonschema=2.16.2=pypi_0
+ filelock=3.8.2=pypi_0
+ flake8=6.0.0=pypi_0
+ fonttools=4.38.0=pypi_0
+ fqdn=1.5.1=pypi_0
+ frozenlist=1.3.3=pypi_0
+ fsspec=2022.11.0=pypi_0
+ future=0.18.2=pypi_0
+ gitdb=4.0.10=pypi_0
+ gitpython=3.1.29=pypi_0
+ greenlet=2.0.2=pypi_0
+ huggingface-hub=0.11.1=pypi_0
+ humanfriendly=10.0=pypi_0
+ idna=3.4=pypi_0
+ importlib-metadata=5.2.0=pypi_0
+ importlib-resources=5.10.2=pypi_0
+ ipykernel=6.20.2=pyh210e3f2_0
+ ipython=8.9.0=pyh41d4057_0
+ ipython-genutils=0.2.0=pypi_0
+ ipywidgets=8.0.4=pypi_0
+ isoduration=20.11.0=pypi_0
+ isort=5.11.4=pypi_0
+ jedi=0.18.2=pyhd8ed1ab_0
+ jinja2=3.1.2=pypi_0
+ joblib=1.2.0=pypi_0
+ json5=0.9.11=pypi_0
+ jsonpointer=2.3=pypi_0
+ jsonschema=4.17.3=pypi_0
+ jstyleson=0.0.2=pypi_0
+ jupyter-client=8.0.2=pypi_0
+ jupyter-events=0.5.0=pypi_0
+ jupyter-server=2.2.1=pypi_0
+ jupyter-server-fileid=0.6.0=pypi_0
+ jupyter-server-terminals=0.4.4=pypi_0
+ jupyter-server-ydoc=0.6.1=pypi_0
+ jupyter-ydoc=0.2.2=pypi_0
+ jupyter_core=5.1.5=py38h578d9bd_0
+ jupyterlab=3.6.1=pypi_0
+ jupyterlab-pygments=0.2.2=pypi_0
+ jupyterlab-server=2.19.0=pypi_0
+ jupyterlab-widgets=3.0.5=pypi_0
+ kiwisolver=1.4.4=pypi_0
+ ld_impl_linux-64=2.38=h1181459_1
+ libedit=3.1.20221030=h5eee18b_0
+ libffi=3.4.2=h6a678d5_6
+ libgcc-ng=11.2.0=h1234567_1
+ libgomp=11.2.0=h1234567_1
+ librosa=0.9.2=pypi_0
+ libsodium=1.0.18=h36c2ea0_1
+ libstdcxx-ng=11.2.0=h1234567_1
+ llvmlite=0.39.1=pypi_0
+ mako=1.2.4=pypi_0
+ markupsafe=2.1.2=pypi_0
+ matplotlib=3.5.3=pypi_0
+ matplotlib-inline=0.1.6=pyhd8ed1ab_0
+ mistune=2.0.5=pypi_0
+ mpmath=1.2.1=pypi_0
+ multidict=6.0.3=pypi_0
+ multiprocess=0.70.14=pypi_0
+ mypy-extensions=0.4.3=pypi_0
+ natsort=8.2.0=pypi_0
+ nbclassic=0.5.1=pypi_0
+ nbclient=0.7.2=pypi_0
+ nbconvert=7.2.9=pypi_0
+ nbformat=5.7.3=pypi_0
+ ncurses=6.3=h5eee18b_3
+ nest-asyncio=1.5.6=pyhd8ed1ab_0
+ networkx=2.8.8=pypi_0
+ ninja=1.10.2.4=pypi_0
+ nncf=2.4.0.dev0+c02f9f3=pypi_0
+ notebook=6.5.2=pypi_0
+ notebook-shim=0.2.2=pypi_0
+ ntplib=0.4.0=pypi_0
+ numba=0.56.4=pypi_0
+ numpy=1.23.4=pypi_0
+ onnx=1.13.0=pypi_0
+ opencv-python=4.7.0.68=pypi_0
+ openssl=1.1.1s=h7f8727e_0
+ openvino=2023.0.0.dev20230119=pypi_0
+ openvino-dev=2023.0.0.dev20230119=pypi_0
+ openvino-telemetry=2022.3.0=pypi_0
+ optimum=1.6.3=pypi_0
+ optimum-intel=1.7.0.dev0=pypi_0
+ optuna=3.1.0=pypi_0
+ packaging=22.0=pypi_0
+ pandas=1.3.5=pypi_0
+ pandocfilters=1.5.0=pypi_0
+ parameterized=0.8.1=pypi_0
+ parso=0.8.3=pyhd8ed1ab_0
+ pathspec=0.10.3=pypi_0
+ pathtools=0.1.2=pypi_0
+ pexpect=4.8.0=pyh1a96a4e_2
+ pickleshare=0.7.5=py_1003
+ pillow=9.4.0=pypi_0
+ pip=22.3.1=py38h06a4308_0
+ pkgutil-resolve-name=1.3.10=pypi_0
+ platformdirs=2.6.2=pyhd8ed1ab_0
+ pooch=1.6.0=pypi_0
+ prometheus-client=0.16.0=pypi_0
+ promise=2.3=pypi_0
+ prompt-toolkit=3.0.36=pyha770c72_0
+ protobuf=3.20.2=pypi_0
+ psutil=5.9.0=py38h5eee18b_0
+ ptyprocess=0.7.0=pyhd3deb0d_0
+ pure_eval=0.2.2=pyhd8ed1ab_0
+ pyarrow=10.0.1=pypi_0
+ pycodestyle=2.10.0=pypi_0
+ pycparser=2.21=pypi_0
+ pydot=1.4.2=pypi_0
+ pyflakes=3.0.1=pypi_0
+ pygments=2.14.0=pyhd8ed1ab_0
+ pymoo=0.5.0=pypi_0
+ pyparsing=2.4.7=pypi_0
+ pyrsistent=0.19.2=pypi_0
+ python=3.8.15=h7a1cb2a_2
+ python-dateutil=2.8.2=pyhd8ed1ab_0
+ python-json-logger=2.0.4=pypi_0
+ python_abi=3.8=2_cp38
+ pytz=2022.7.1=pypi_0
+ pyyaml=6.0=pypi_0
+ pyzmq=25.0.0=pypi_0
+ readline=8.2=h5eee18b_0
+ regex=2022.10.31=pypi_0
+ requests=2.28.2=pypi_0
+ resampy=0.4.2=pypi_0
+ responses=0.18.0=pypi_0
+ rfc3339-validator=0.1.4=pypi_0
+ rfc3986-validator=0.1.1=pypi_0
+ scikit-learn=1.2.0=pypi_0
+ scipy=1.10.0=pypi_0
+ send2trash=1.8.0=pypi_0
+ sentencepiece=0.1.97=pypi_0
+ sentry-sdk=1.12.1=pypi_0
+ setproctitle=1.3.2=pypi_0
+ setuptools=65.5.0=py38h06a4308_0
+ shortuuid=1.0.11=pypi_0
+ six=1.16.0=pyh6c4a22f_0
+ smmap=5.0.0=pypi_0
+ sniffio=1.3.0=pypi_0
+ soundfile=0.11.0=pypi_0
+ soupsieve=2.3.2.post1=pypi_0
+ sqlalchemy=2.0.1=pypi_0
+ sqlite=3.40.0=h5082296_0
+ stack_data=0.6.2=pyhd8ed1ab_0
+ sympy=1.11.1=pypi_0
+ terminado=0.17.1=pypi_0
+ texttable=1.6.7=pypi_0
+ threadpoolctl=3.1.0=pypi_0
+ tinycss2=1.2.1=pypi_0
+ tk=8.6.12=h1ccaba5_0
+ tokenizers=0.13.2=pypi_0
+ tomli=2.0.1=pypi_0
+ torch=1.13.1+cu116=pypi_0
+ torchaudio=0.13.1+cu116=pypi_0
+ torchvision=0.14.1+cu116=pypi_0
+ tornado=6.2=pypi_0
+ tqdm=4.64.1=pypi_0
+ traitlets=5.8.1=pyhd8ed1ab_0
+ transformers=4.26.0=pypi_0
+ typing-extensions=4.4.0=hd8ed1ab_0
+ typing_extensions=4.4.0=pyha770c72_0
+ uri-template=1.2.0=pypi_0
+ urllib3=1.26.14=pypi_0
+ wandb=0.13.7=pypi_0
+ wcwidth=0.2.6=pyhd8ed1ab_0
+ webcolors=1.12=pypi_0
+ webencodings=0.5.1=pypi_0
+ websocket-client=1.5.1=pypi_0
+ wheel=0.37.1=pyhd3eb1b0_0
+ widgetsnbextension=4.0.5=pypi_0
+ xxhash=3.1.0=pypi_0
+ xz=5.2.8=h5eee18b_0
+ y-py=0.5.5=pypi_0
+ yarl=1.8.2=pypi_0
+ ypy-websocket=0.8.2=pypi_0
+ zeromq=4.3.4=h9c3ff4c_1
+ zipp=3.11.0=pypi_0
+ zlib=1.2.13=h5eee18b_0
config.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "_name_or_path": "textattack/bert-base-uncased-SST-2",
+   "architectures": [
+     "NNCFNetwork"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "finetuning_task": "sst2",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "negative",
+     "1": "positive"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "negative": 0,
+     "positive": 1
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "torch_dtype": "float32",
+   "transformers_version": "4.26.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
eval_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "eval_accuracy": 0.9128440366972477,
+   "eval_loss": 0.38576310873031616,
+   "eval_runtime": 6.8627,
+   "eval_samples_per_second": 127.064,
+   "eval_steps_per_second": 4.08
+ }
+
nncf_config.json ADDED
@@ -0,0 +1,49 @@
+ [
+   {
+     "algorithm": "quantization",
+     "preset": "mixed",
+     "overflow_fix": "disable",
+     "initializer": {
+       "range": {
+         "num_init_samples": 300,
+         "type": "mean_min_max"
+       },
+       "batchnorm_adaptation": {
+         "num_bn_adaptation_samples": 0
+       }
+     },
+     "scope_overrides": {
+       "activations": {
+         "{re}.*matmul_0": {
+           "mode": "symmetric"
+         }
+       }
+     },
+     "ignored_scopes": [
+       "{re}.*Embeddings.*",
+       "{re}.*__add___[0-1]",
+       "{re}.*layer_norm_0",
+       "{re}.*matmul_1",
+       "{re}.*__truediv__*"
+     ]
+   },
+   {
+     "algorithm": "magnitude_sparsity",
+     "ignored_scopes": [
+       "{re}.*NNCFEmbedding.*",
+       "{re}.*LayerNorm.*",
+       "{re}.*pooler.*",
+       "{re}.*classifier.*"
+     ],
+     "sparsity_init": 0.0,
+     "params": {
+       "power": 3,
+       "schedule": "polynomial",
+       "sparsity_freeze_epoch": 10,
+       "sparsity_target": 0.8,
+       "sparsity_target_epoch": 9,
+       "steps_per_epoch": 2105,
+       "update_per_optimizer_step": true
+     }
+   }
+ ]
nncf_output.log ADDED
The diff for this file is too large to render. See raw diff
 
openvino_config.json ADDED
@@ -0,0 +1,82 @@
+ {
+   "compression": [
+     {
+       "algorithm": "quantization",
+       "export_to_onnx_standard_ops": false,
+       "ignored_scopes": [
+         "{re}.*Embeddings.*",
+         "{re}.*__add___[0-1]",
+         "{re}.*layer_norm_0",
+         "{re}.*matmul_1",
+         "{re}.*__truediv__*"
+       ],
+       "initializer": {
+         "batchnorm_adaptation": {
+           "num_bn_adaptation_samples": 0
+         },
+         "range": {
+           "num_init_samples": 300,
+           "type": "mean_min_max"
+         }
+       },
+       "overflow_fix": "disable",
+       "preset": "mixed",
+       "scope_overrides": {
+         "activations": {
+           "{re}.*matmul_0": {
+             "mode": "symmetric"
+           }
+         }
+       }
+     },
+     {
+       "algorithm": "magnitude_sparsity",
+       "ignored_scopes": [
+         "{re}.*NNCFEmbedding.*",
+         "{re}.*LayerNorm.*",
+         "{re}.*pooler.*",
+         "{re}.*classifier.*"
+       ],
+       "params": {
+         "power": 3,
+         "schedule": "polynomial",
+         "sparsity_freeze_epoch": 10,
+         "sparsity_target": 0.8,
+         "sparsity_target_epoch": 9,
+         "steps_per_epoch": 2105,
+         "update_per_optimizer_step": true
+       },
+       "sparsity_init": 0.0
+     }
+   ],
+   "input_info": [
+     {
+       "keyword": "input_ids",
+       "sample_size": [
+         32,
+         128
+       ],
+       "type": "long"
+     },
+     {
+       "keyword": "token_type_ids",
+       "sample_size": [
+         32,
+         128
+       ],
+       "type": "long"
+     },
+     {
+       "keyword": "attention_mask",
+       "sample_size": [
+         32,
+         128
+       ],
+       "type": "long"
+     }
+   ],
+   "log_dir": "/nvme2/yujiepan/workspace/jpqd-test/LOGS/optimum-magnitude/0209_jp0w_QMaP80_LR5e-05_COS11,6_EPO17_PerStp_INIT0.0_END9_BS32_LT0.95TMP2_SEED1_dgx1",
+   "optimum_version": "1.6.3",
+   "save_onnx_model": false,
+   "transformers_version": "4.26.0"
+ }
openvino_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6bc46883c54fe9cf7d9ed6d172ee9622273d1c081c186aae0da2755d30bc0861
+ size 181696972
openvino_model.xml ADDED
The diff for this file is too large to render. See raw diff
 
original_graph.dot ADDED
The diff for this file is too large to render. See raw diff
 
pip-requirements.txt ADDED
@@ -0,0 +1,213 @@
+ addict==2.4.0
+ aiofiles==22.1.0
+ aiohttp==3.8.3
+ aiosignal==1.3.1
+ aiosqlite==0.18.0
+ alembic==1.9.2
+ anyio==3.6.2
+ appdirs==1.4.4
+ argon2-cffi==21.3.0
+ argon2-cffi-bindings==21.2.0
+ arrow==1.2.3
+ astroid==1.6.6
+ asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1670263926556/work
+ async-timeout==4.0.2
+ attrs==22.1.0
+ audioread==3.0.0
+ autoflake==2.0.0
+ autograd==1.5
+ autopep8==2.0.1
+ Babel==2.11.0
+ backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
+ backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1618230623929/work
+ beautifulsoup4==4.11.2
+ black==22.12.0
+ bleach==6.0.0
+ blessings==1.7
+ certifi==2022.12.7
+ cffi==1.15.1
+ charset-normalizer==2.1.1
+ click==8.1.3
+ cma==2.7.0
+ cmaes==0.9.1
+ coloredlogs==15.0.1
+ colorlog==6.7.0
+ comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1670575068857/work
+ cycler==0.11.0
+ datasets==2.8.0
+ debugpy==1.6.4
+ decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
+ defusedxml==0.7.1
+ dill==0.3.6
+ docker-pycreds==0.4.0
+ entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
+ et-xmlfile==1.1.0
+ evaluate==0.4.0
+ executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1667317341051/work
+ fastjsonschema==2.16.2
+ filelock==3.8.2
+ flake8==6.0.0
+ fonttools==4.38.0
+ fqdn==1.5.1
+ frozenlist==1.3.3
+ fsspec==2022.11.0
+ future==0.18.2
+ gitdb==4.0.10
+ GitPython==3.1.29
+ gpustat==0.6.0
+ greenlet==2.0.2
+ huggingface-hub==0.11.1
+ humanfriendly==10.0
+ idna==3.4
+ importlib-metadata==5.2.0
+ importlib-resources==5.10.2
+ iniconfig==1.1.1
+ ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1673894597753/work
+ ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1674911957283/work
+ ipython-genutils==0.2.0
+ ipywidgets==8.0.4
+ isoduration==20.11.0
+ isort==5.10.1
+ jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1669134318875/work
+ Jinja2==3.1.2
+ joblib==1.2.0
+ json5==0.9.11
+ jsonpointer==2.3
+ jsonschema==4.17.3
+ jstyleson==0.0.2
+ jupyter-events==0.5.0
+ jupyter-ydoc==0.2.2
+ jupyter_client==8.0.2
+ jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1674601164834/work
+ jupyter_server==2.2.1
+ jupyter_server_fileid==0.6.0
+ jupyter_server_terminals==0.4.4
+ jupyter_server_ydoc==0.6.1
+ jupyterlab==3.6.1
+ jupyterlab-pygments==0.2.2
+ jupyterlab-widgets==3.0.5
+ jupyterlab_server==2.19.0
+ kiwisolver==1.4.4
+ lazy-object-proxy==1.7.1
+ librosa==0.9.2
+ llvmlite==0.39.1
+ Mako==1.2.4
+ MarkupSafe==2.1.2
+ matplotlib==3.5.3
+ matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1660814786464/work
+ mccabe==0.7.0
+ mistune==2.0.5
+ mpmath==1.2.1
+ multidict==6.0.3
+ multiprocess==0.70.14
+ mypy-extensions==0.4.3
+ natsort==8.2.0
+ nbclassic==0.5.1
+ nbclient==0.7.2
+ nbconvert==7.2.9
+ nbformat==5.7.3
+ nest-asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1664684991461/work
+ networkx==2.8.8
+ ninja==1.10.2.4
+ nncf @ git+https://github.com/openvinotoolkit/nncf@c02f9f3bc0d77677b8cacbeedf102bb8ca93c0ac
+ notebook==6.5.2
+ notebook_shim==0.2.2
+ ntplib==0.4.0
+ numba==0.56.4
+ numpy==1.23.4
+ nvidia-ml-py3==7.352.0
+ onnx==1.13.0
+ opencv-python==4.7.0.68
+ openpyxl==3.0.10
+ openvino==2023.0.0.dev20230119
+ openvino-dev==2023.0.0.dev20230119
+ openvino-telemetry==2022.3.0
+ optimum==1.6.3
+ -e git+https://github.com/yujiepan-work/optimum-intel.git@01927af543eaea8678671bf8f4eb78fdb29f8930#egg=optimum_intel
+ optuna==3.1.0
+ packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1673482170163/work
+ pandas==1.3.5
+ pandocfilters==1.5.0
+ parameterized==0.8.1
+ parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
+ pathspec==0.10.3
+ pathtools==0.1.2
+ pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1667297516076/work
+ pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
+ Pillow==9.4.0
+ pkgutil_resolve_name==1.3.10
+ platformdirs @ file:///home/conda/feedstock_root/build_artifacts/platformdirs_1672264874562/work
+ pooch==1.6.0
+ prometheus-client==0.16.0
+ promise==2.3
+ prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1670414775770/work
+ protobuf==3.20.2
+ psutil==5.9.1
+ ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
+ pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
+ py==1.11.0
+ pyarrow==10.0.1
+ pycodestyle==2.10.0
+ pycparser==2.21
+ pydot==1.4.2
+ pyflakes==3.0.1
+ Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1672682006896/work
+ pylint==1.9.4
+ pymoo==0.5.0
+ pyparsing==2.4.7
+ pyrsistent==0.19.2
+ python-dateutil==2.8.2
+ python-json-logger==2.0.4
+ pytz==2022.7.1
+ PyYAML==6.0
+ pyzmq==25.0.0
+ regex==2022.10.31
+ requests==2.28.2
+ resampy==0.4.2
+ responses==0.18.0
+ rfc3339-validator==0.1.4
+ rfc3986-validator==0.1.1
+ scikit-learn==1.2.0
+ scipy==1.10.0
+ seaborn==0.12.0
+ Send2Trash==1.8.0
+ sentencepiece==0.1.97
+ sentry-sdk==1.12.1
+ setproctitle==1.3.2
+ shortuuid==1.0.11
+ six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
+ smmap==5.0.0
+ sniffio==1.3.0
+ soundfile==0.11.0
+ soupsieve==2.3.2.post1
+ SQLAlchemy==2.0.1
+ stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1669632077133/work
+ sympy==1.11.1
+ terminado==0.17.1
+ texttable==1.6.7
+ threadpoolctl==3.1.0
+ tinycss2==1.2.1
+ tokenizers==0.13.2
+ tomli==2.0.1
+ torch==1.13.1+cu116
+ torchaudio==0.13.1+cu116
+ torchvision==0.14.1+cu116
+ tornado==6.2
+ tqdm==4.64.1
+ traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1673359992537/work
+ transformers==4.26.0
+ typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1665144421445/work
+ uri-template==1.2.0
+ urllib3==1.26.14
+ wandb==0.13.7
+ wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1673864653149/work
+ webcolors==1.12
+ webencodings==0.5.1
+ websocket-client==1.5.1
+ widgetsnbextension==4.0.5
+ wrapt==1.14.1
+ xxhash==3.1.0
+ y-py==0.5.5
+ yarl==1.8.2
+ ypy-websocket==0.8.2
+ zipp==3.11.0
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db74623dddd1b4b137050000513cf9aca5b7a07131693df439c5b3d0c2b3236d
+ size 778415781
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "name_or_path": "textattack/bert-base-uncased-SST-2",
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "special_tokens_map_file": "/home/yujiepan/.cache/huggingface/hub/models--textattack--bert-base-uncased-SST-2/snapshots/95f0f6f859b35c8ff0863ae3cd4e2dbc702c0ae2/special_tokens_map.json",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "epoch": 17.0,
+   "train_loss": 0.06787751242209697,
+   "train_runtime": 20398.7052,
+   "train_samples": 67349,
+   "train_samples_per_second": 56.128,
+   "train_steps_per_second": 1.754
+ }
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7224e8ad3cd262826caf3cd7f3d73b6abdefaa2008e8fc0bc14c4995c6af1961
+ size 17898144
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aec332d02c9c5ad9062f278fc7bcfa773a8024c1e0626e6b2f7f7e7b9077300e
+ size 4091
vocab.txt ADDED
The diff for this file is too large to render. See raw diff