The model is fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune five epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on the development set is achieved.
```
python3 finetune/run_ner.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
                            --vocab_path models/google_zh_vocab.txt \
                            --train_path datasets/cluener2020/train.tsv \
                            --dev_path datasets/cluener2020/dev.tsv \
                            --label2id_path datasets/cluener2020/label2id.json \
                            --output_model_path models/cluener2020_ner_model.bin \
                            --learning_rate 3e-5 --epochs_num 5 --batch_size 32 --seq_length 512
```
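The `--label2id_path` file maps each NER tag to an integer id. The actual `datasets/cluener2020/label2id.json` ships with UER-py; as a rough sketch of its shape, here is how such a mapping could be built for a few hypothetical CLUENER entity types (the type names below are illustrative, not the full official label set) using the BIO tagging scheme:

```python
import json

# Hypothetical subset of CLUENER 2020 entity types, for illustration only;
# the real label set comes from datasets/cluener2020/label2id.json in UER-py.
entity_types = ["address", "company", "name"]

# BIO scheme: "O" for non-entity tokens, plus B-/I- tags per entity type.
label2id = {"O": 0}
for etype in entity_types:
    for prefix in ("B-", "I-"):
        label2id[f"{prefix}{etype}"] = len(label2id)

print(json.dumps(label2id, ensure_ascii=False))
```

Each token in `train.tsv` and `dev.tsv` is labeled with one of these tags, and the fine-tuning script learns to predict the corresponding integer ids.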
Finally, we convert the pre-trained model into Huggingface's format: