felerminoali committed on
Commit 06ee716
1 parent: a81b1ad

Update README.md

Files changed (1): README.md (+13 -5)
README.md CHANGED

````diff
@@ -15,13 +15,21 @@ The dataset paper was published in EMNLP 2024.
 
 Please cite as:
 ```
-@inproceedings{ali2024data-paper,
-title={Building Resources for Emakhuwa: Machine Translation and News Classification Benchmark},
-author={Felermino D. M. Antonio Ali and Henrique Lopes Cardoso and Rui Sousa-Silva},
+@inproceedings{ali-etal-2024-building,
+    title = "Building Resources for Emakhuwa: Machine Translation and News Classification Benchmarks",
+    author = "Ali, Felermino D. M. A. and
+      Lopes Cardoso, Henrique and
+      Sousa-Silva, Rui",
+    editor = "Al-Onaizan, Yaser and
+      Bansal, Mohit and
+      Chen, Yun-Nung",
 booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
 month = nov,
 year = "2024",
 address = "Miami, Florida, USA",
 publisher = "Association for Computational Linguistics",
-
-}
+    url = "https://aclanthology.org/2024.emnlp-main.824",
+    pages = "14842--14857",
+    abstract = "This paper introduces a comprehensive collection of NLP resources for Emakhuwa, Mozambique{'}s most widely spoken language. The resources include the first manually translated news bitext corpus between Portuguese and Emakhuwa, news topic classification datasets, and monolingual data. We detail the process and challenges of acquiring this data and present benchmark results for machine translation and news topic classification tasks. Our evaluation examines the impact of different data types{---}originally clean text, post-corrected OCR, and back-translated data{---}and the effects of fine-tuning from pre-trained models, including those focused on African languages. Our benchmarks demonstrate good performance in news topic classification and promising results in machine translation. We fine-tuned multilingual encoder-decoder models using real and synthetic data and evaluated them on our test set and the FLORES evaluation sets. The results highlight the importance of incorporating more data and potential for future improvements. All models, code, and datasets are available in the \url{https://huggingface.co/LIACC} repository under the CC BY 4.0 license.",
+}
+```
````