EFTNAS Model Card: eftnas-s2-bert-medium

Super-networks produced by fine-tuning BERT-medium on the GLUE benchmark using EFTNAS.

Model Details

Information

Training and Evaluation

GLUE benchmark

The General Language Understanding Evaluation (GLUE) benchmark is a collection of natural language understanding tasks; the tasks reported below are MNLI, QNLI, QQP, SST-2, CoLA, MRPC, and RTE.

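As a concrete illustration, these GLUE tasks can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch and is not part of the original card; the task keys follow the standard GLUE builder.

```python
# Minimal sketch: loading the GLUE tasks reported in this card
# with the Hugging Face `datasets` library (an assumption; the
# card itself does not prescribe a loading method).
from datasets import load_dataset

# Task keys as used by the `datasets` GLUE builder.
tasks = ["mnli", "qnli", "qqp", "sst2", "cola", "mrpc", "rte"]

for task in tasks:
    ds = load_dataset("glue", task)
    # Print the available splits and their sizes for each task.
    print(task, {split: len(ds[split]) for split in ds})
```
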
Results

Results of the optimal sub-networks discovered from the super-networks, evaluated on the GLUE test sets:

| Model | GFLOPs | GLUE Avg. | MNLI-m | QNLI | QQP | SST-2 | CoLA | MRPC | RTE |
|-------|--------|-----------|--------|------|-----|-------|------|------|-----|
| EFTNAS-S1 | 5.7 | 77.7 | 83.7 | 89.9 | 71.8 | 93.4 | 52.6 | 87.6 | 65.0 |
| EFTNAS-S2 | 2.2 | 75.2 | 82.0 | 87.8 | 70.6 | 91.4 | 44.5 | 86.1 | 64.0 |
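
For reference, a checkpoint published under this card would typically be loaded as a standard BERT classifier. The snippet below is a hedged sketch assuming the checkpoint is compatible with the Hugging Face `transformers` AutoClass API; the card itself does not specify a library, so this is not confirmed.

```python
# Hedged sketch: loading the published checkpoint with Hugging Face
# `transformers`, assuming it follows the standard BERT
# sequence-classification format (an assumption not confirmed by
# the card, which lists no library).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "IntelLabs/eftnas-s2-bert-medium"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Run a single example through the model and inspect the logits.
inputs = tokenizer("EFTNAS finds efficient sub-networks.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```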

Model Sources

Citation

@inproceedings{eftnas2024,
  title={Searching for Efficient Language Models in First-Order Weight-Reordered Super-Networks},
  author={J. Pablo Munoz and Yi Zheng and Nilesh Jain},
  booktitle={The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation},
  year={2024},
  url={}
}

License

Apache-2.0
