sumitrsch committed on
Commit
52464af
1 Parent(s): b79a047

Update README.md

Files changed (1)
  1. README.md +2 -14
README.md CHANGED
@@ -3,18 +3,6 @@ For prediction on test data use this link. https://colab.research.google.com/dri
 
 
 If you are using this code, cite paper "silp_nlp at SemEval-2023 Task 2: Cross-lingual Knowledge Transfer for
-Mono-lingual Learning" ,
+Mono-lingual Learning"
 https://aclanthology.org/2023.semeval-1.164
-bib tex: @inproceedings{singh-tiwary-2023-silp,
-    title = "Silp{\_}nlp at {S}em{E}val-2023 Task 2: Cross-lingual Knowledge Transfer for Mono-lingual Learning",
-    author = "Singh, Sumit and Tiwary, Uma",
-    booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
-    month = jul,
-    year = "2023",
-    address = "Toronto, Canada",
-    publisher = "Association for Computational Linguistics",
-    url = "https://aclanthology.org/2023.semeval-1.164",
-    pages = "1183--1189",
-    abstract = "Our team silp{\_}nlp participated in SemEval-2023 Task 2: MultiCoNER II. Our work made systems for 11 mono-lingual tracks. For leveraging the advantage of all track knowledge we chose transformer-based pretrained models, which have strong cross-lingual transferability. Hence our model trained in two stages, the first stage for multi-lingual learning from all tracks and the second for fine-tuning individual tracks. Our work highlights that the knowledge of all tracks can be transferred to an individual track if the baseline language model has cross-lingual features. Our system positioned itself in the top 10 for 4 tracks by scoring 0.7432 macro F1 score for the Hindi track (7th rank) and 0.7322 macro F1 score for the Bangla track (9th rank).",
-}
+