Update README.md

README.md (lines 155-170 changed):

#### Summary

The table above presents the weighted F1 scores for predicting writing intentions across baseline and fine-tuned models. All models fine-tuned on ScholaWrite improve over their baselines: BERT and RoBERTa achieved the largest gains, while LLaMA-8B-Instruct showed a more modest improvement after fine-tuning. These results demonstrate the effectiveness of our ScholaWrite dataset for aligning language models with writers' intentions.
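
Because the data has class imbalance on both the training and testing splits, the scores above are weighted F1. As a minimal sketch (not this repo's evaluation code; the intention labels below are illustrative, not the dataset's exact taxonomy), weighted F1 can be computed with scikit-learn's `f1_score` by passing `average="weighted"`, which averages per-class F1 weighted by each class's support:

```python
# Minimal sketch: weighted F1 computes F1 per class, then averages the
# per-class scores weighted by each class's frequency (support) in y_true.
from sklearn.metrics import f1_score

# Hypothetical gold and predicted writing intentions (illustrative labels).
y_true = ["Revision", "Revision", "Planning", "Implementation", "Revision"]
y_pred = ["Revision", "Planning", "Planning", "Implementation", "Revision"]

score = f1_score(y_true, y_pred, average="weighted")
print(f"weighted F1: {score:.3f}")  # ~0.813 for this toy example
```

Unlike plain accuracy, this still accounts for per-class precision and recall; unlike macro F1, it reflects the actual label distribution, which makes it a reasonable summary metric under class imbalance.
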
## BibTeX

```
@misc{wang2025scholawritedatasetendtoendscholarly,
      title={ScholaWrite: A Dataset of End-to-End Scholarly Writing Process},
      author={Linghe Wang and Minhwa Lee and Ross Volkov and Luan Tuyen Chau and Dongyeop Kang},
      year={2025},
      eprint={2502.02904},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.02904},
}
```