Linghe-Wang committed on
Commit 20ef7a5 · verified · 1 Parent(s): d86c562

Update README.md

Files changed (1): README.md (+12 −10)

README.md CHANGED
@@ -155,14 +155,16 @@ The data has class imbalanced on both training and testing data splits, so we us
  #### Summary
  The table above presents the weighted F1 scores for predicting writing intentions across baselines and fine-tuned models. All models fine-tuned on ScholaWrite show improved performance over their baselines. BERT and RoBERTa achieved the largest gains, while Llama-8B-Instruct showed a modest improvement after fine-tuning. These results demonstrate the effectiveness of the ScholaWrite dataset for aligning language models with writers' intentions.

- ## Citation
+ ## BibTeX

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
+ ```
+ @misc{wang2025scholawritedatasetendtoendscholarly,
+     title={ScholaWrite: A Dataset of End-to-End Scholarly Writing Process},
+     author={Linghe Wang and Minhwa Lee and Ross Volkov and Luan Tuyen Chau and Dongyeop Kang},
+     year={2025},
+     eprint={2502.02904},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL},
+     url={https://arxiv.org/abs/2502.02904},
+ }
+ ```