Commit dc6bb12 by mechanicalsea (1 parent: d7b3149)

Update README.md
Files changed (1): README.md (+4 -5)
README.md CHANGED
@@ -15,7 +15,7 @@ datasets:
 
 # LightHuBERT
 
-[**LightHuBERT**](?): **Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT**
+[**LightHuBERT**](https://arxiv.org/abs/2203.15610): **Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT**
 
 Authors: Rui Wang, Qibing Bai, Junyi Ao, Long Zhou, Zhixiang Xiong, Zhihua Wei, Yu Zhang, Tom Ko and Haizhou Li
 
@@ -23,7 +23,7 @@ Authors: Rui Wang, Qibing Bai, Junyi Ao, Long Zhou, Zhixiang Xiong, Zhihua Wei,
 
 The authors' PyTorch implementation and pre-trained models of LightHuBERT.
 
-- March 2022: release preprint in [arXiv](?) and checkpoints in [huggingface](https://huggingface.co/mechanicalsea/lighthubert).
+- March 2022: release preprint in [arXiv](https://arxiv.org/abs/2203.15610) and checkpoints in [huggingface](https://huggingface.co/mechanicalsea/lighthubert).
 
 ## Pre-Trained Models
 
@@ -70,11 +70,10 @@ print(f"Representation at bottom hidden states: {torch.allclose(rep, hs[-1])}")
 If you find our work is useful in your research, please cite the following paper:
 
 ```bibtex
-@article{rwang-lighthubert-2022,
+@article{wang2022lighthubert,
 title={{LightHuBERT}: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit {BERT}},
 author={Rui Wang and Qibing Bai and Junyi Ao and Long Zhou and Zhixiang Xiong and Zhihua Wei and Yu Zhang and Tom Ko and Haizhou Li},
-year={2022},
-journal={arXiv preprint arXiv:?.?},
+journal={arXiv preprint arXiv:2203.15610},
 year={2022}
 }
 ```
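
The March 2022 entry in the diff above points to released checkpoints on the Hugging Face Hub (mechanicalsea/lighthubert). Below is a minimal sketch, not taken from this commit, of how one might download and inspect such a checkpoint; the file name `lighthubert_base.pt` is an assumption for illustration, and the authors' own code provides the actual model classes and loading routine.

```python
# Minimal sketch (not from this commit): fetch a LightHuBERT checkpoint from the
# Hugging Face Hub and inspect its contents. The file name below is assumed for
# illustration; see the mechanicalsea/lighthubert model card for the real names.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="mechanicalsea/lighthubert",  # repository referenced in the README
    filename="lighthubert_base.pt",       # assumed checkpoint file name
)

# Load on CPU just to look at what the checkpoint contains.
state = torch.load(ckpt_path, map_location="cpu")
print(sorted(state.keys()) if isinstance(state, dict) else type(state))
```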