# BERT-tiny model finetuned with M-FAC
This model is finetuned on the MNLI dataset with the state-of-the-art second-order optimizer M-FAC.
See the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For a fair comparison against the default Adam baseline, we finetune the model in the same framework as described at [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer for M-FAC.
Hyperparameters used by the M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
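For orientation, the sketch below shows one way these hyperparameters could be plugged into a Hugging Face `Trainer` by replacing its default optimizer. This is a minimal sketch, not the authors' training script: the `MFAC` class, its import path, and its argument names are assumptions standing in for the optimizer implementation from the M-FAC repository linked below, and the dataset preparation is assumed to follow `run_glue.py`.
```python
# Minimal sketch, assuming a hypothetical `MFAC` optimizer class; the real
# implementation lives in https://github.com/IST-DASLab/M-FAC and its exact
# constructor may differ.
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
# from mfac import MFAC  # hypothetical import of the M-FAC optimizer

model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-tiny", num_labels=3  # MNLI has three labels
)

optimizer = MFAC(
    model.parameters(),
    lr=1e-4,         # learning rate
    num_grads=1024,  # number of gradients kept for the second-order estimate
    damp=1e-6,       # dampening
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out_dir",
        per_device_train_batch_size=32,
        num_train_epochs=5,
    ),
    train_dataset=train_dataset,   # tokenized MNLI train split, prepared as in run_glue.py
    eval_dataset=eval_dataset,     # tokenized MNLI validation_matched split
    optimizers=(optimizer, None),  # None lets Trainer build its default LR scheduler
)
trainer.train()
```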
## Results
We share the best model out of 5 runs, with the following scores on the MNLI validation set:
```bash
matched_accuracy = 69.55
mismatched_accuracy = 70.58
```
Mean and standard deviation over 5 runs on the MNLI validation set:
| Optimizer | Matched Accuracy | Mismatched Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 65.36 ± 0.13 | 66.78 ± 0.15 |
| M-FAC | 68.28 ± 3.29 | 68.98 ± 3.05 |
Results can be reproduced by adding the M-FAC optimizer code to [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
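Note that `--optim MFAC` and `--optim_args` are not flags of the stock `run_glue.py`; they come from the M-FAC integration mentioned above. The snippet below is a rough sketch of how such flags could be parsed and turned into an optimizer inside a modified script; the `MFAC` constructor is again an assumption, and the actual integration is described in the M-FAC tutorials linked below.
```python
# Sketch of one possible way a modified run_glue.py could turn the extra
# command-line flags into an optimizer; not the authors' actual patch.
import json
import torch


def build_optimizer(model, optim_name: str, optim_args_json: str):
    """Build the optimizer selected via --optim / --optim_args."""
    kwargs = json.loads(optim_args_json)  # e.g. {"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}
    if optim_name == "MFAC":
        # Hypothetical M-FAC constructor; see https://github.com/IST-DASLab/M-FAC
        return MFAC(model.parameters(), **kwargs)
    # Fall back to the default AdamW baseline used by the stock script
    return torch.optim.AdamW(model.parameters(), lr=kwargs.get("lr", 1e-4))
```
The resulting optimizer would then be handed to `Trainer` through its `optimizers` argument, as in the earlier sketch.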
We believe these results could be improved with modest tuning of the hyperparameters `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads`, and `damp`. For the sake of a fair comparison and a robust default setup, we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-03356,
author = {Elias Frantar and
Eldar Kurtic and
Dan Alistarh},
title = {Efficient Matrix-Free Approximations of Second-Order Information,
with Applications to Pruning and Optimization},
journal = {CoRR},
volume = {abs/2107.03356},
year = {2021},
url = {https://arxiv.org/abs/2107.03356},
eprinttype = {arXiv},
eprint = {2107.03356},
timestamp = {Tue, 20 Jul 2021 15:08:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-03356.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```