Model Card for ScholaWrite-Llama3.1-8B-Writing

Model Details

Model Description

This model is referred to as LLAMA-8B-SW-GEN in the paper. It is fine-tuned from the 4-bit quantized Llama-3.1-8B-Instruct released by Unsloth on the Hugging Face Hub, using the train split of the ScholaWrite dataset. The sole purpose of this model is to perform "after-text" generation in the Iterative Self-Writing task.

Model Sources

  • Paper: https://arxiv.org/abs/2502.02904
  • Dataset: https://huggingface.co/datasets/minnesotanlp/scholawrite

Uses

Direct Use

The model is intended to be used for "after-text" generation in the Iterative Self-Writing task.

The Iterative Self-Writing task generates scholarly text iteratively from scratch, mirroring the human writing process. It examines how well a model trained on our dataset can replicate the actual iterative writing and thinking process of scholars, and thus produce better scholarly text than a model not trained on our dataset.

Iterative self-writing involves two subtasks: (1) Next intention prediction: the model takes an input prompt with task instructions and the "before" text, and must generate the next writing intention based on the "before" text. (2) "After-text" generation: the model takes an input prompt with task instructions, a verbalizer derived from the human-annotated labels, and the "before" text, and must generate the "after" text given the verbalizer and the "before" text.
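
For intuition, a single iteration of this loop might be organized as in the sketch below. The prompt wording and the generate() helper are hypothetical placeholders, and the intention predictor is assumed to be the companion classifier model (scholawrite-llama3.1-8b-classifier, mentioned in the Summary section).

# Sketch of one Iterative Self-Writing iteration. The prompt wording and the
# generate() helper are illustrative placeholders, not the exact templates
# used in the paper.
def self_writing_step(intent_model, writer_model, before_text, verbalizers, generate):
    # Subtask 1: next intention prediction from the "before" text.
    intent_prompt = [{
        "role": "user",
        "content": f"Predict the next writing intention.\n\nBefore text:\n{before_text}",
    }]
    intention = generate(intent_model, intent_prompt)

    # Subtask 2: "after-text" generation, conditioned on the verbalizer for the
    # predicted intention and the "before" text.
    writer_prompt = [{
        "role": "user",
        "content": f"{verbalizers[intention]}\n\nBefore text:\n{before_text}",
    }]
    after_text = generate(writer_model, writer_prompt)

    # The "after" text becomes the "before" text of the next iteration.
    return intention, after_text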

Out-of-Scope Use

The model is fine-tuned only for "after-text" generation and has only been run for inference in a closed environment. Its main goal is to examine the usefulness of our dataset. It is suitable for academic use, but not for production, general public use, or consumer-oriented services. In addition, using this model for tasks other than "after-text" generation in LaTeX academic drafts may not work well.

Bias and Limitations

The bias and limitations of this model mainly come from the dataset (ScholaWrite) it was fine-tuned on.

First, the ScholaWrite dataset is currently limited to the computer science domain, as LaTeX is predominantly used in computer science journals and conferences. This domain-specific focus may restrict the model's generalizability to other scientific disciplines. Future work could address this limitation by collecting keystroke data from a broader range of fields with diverse writing conventions and tools, such as the humanities or biological sciences. For example, students in the humanities usually write book-length papers and integrate more sources, which could affect the cognitive complexity of their writing.

Second, all participants were early-career researchers (e.g., PhD students) at an R1 university in the United States, which means the model may not have learned the professional writing behaviors and cognitive processes of experts. Expanding the dataset to include senior researchers, such as post-doctoral fellows and professors, could offer valuable insights into how writing strategies and revision behaviors evolve with research experience and expertise.

Third, the dataset is exclusive to English-language writing, which restricts the model's ability to iteratively write papers in multilingual or non-English contexts. Expanding to multilingual settings could reveal unique cognitive and linguistic insights into writing across languages.

How to Get Started with the Model

The snippet below loads the model with Unsloth and generates the "after text" for a chat-formatted prompt.

import os

from dotenv import load_dotenv
from huggingface_hub import login
from unsloth import FastLanguageModel

# Authenticate with the Hugging Face Hub (expects HUGGINGFACE_TOKEN in a .env file).
load_dotenv()
login(os.getenv("HUGGINGFACE_TOKEN"))

model_name = "minnesotanlp/scholawrite-llama3.1-8b-writing"

# A single user message containing the task instruction, the verbalizer,
# and the "before text".
messages = [
    {"role": "user", "content": "your prompt that contains the instruction, verbalizer, and before text"}
]
before_text = "your before text"

# Load the 4-bit quantized model and its tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_name,
    max_seq_length=4096,
    load_in_4bit=True,
    dtype=None,
)
FastLanguageModel.for_inference(model)

# Apply the Llama chat template and generate the "after text".
input_ids = tokenizer.apply_chat_template(
    messages,
    max_length=4096,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(
    input_ids,
    max_new_tokens=len(before_text) + 100,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)

# Keep only the assistant's reply and strip the end-of-turn token.
response = tokenizer.batch_decode(outputs)
response = response[0].split("<|start_header_id|>assistant<|end_header_id|>")[1].strip()
response = response.replace("<|eot_id|>", "")

Fine-tuning Details

Fine-tuning Data

This model is fine-tuned on the train split of the minnesotanlp/scholawrite dataset, which consists of keystroke logs of an end-to-end scholarly writing process with thorough annotations of the cognitive writing intention behind each keystroke. No additional data pre-processing or filtering was performed on the dataset.
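
For reference, the train split can be loaded directly from the Hugging Face Hub with the datasets library; the exact column names are not listed in this card, so the print below is only for inspection.

from datasets import load_dataset

# Load the ScholaWrite train split used for fine-tuning.
train_split = load_dataset("minnesotanlp/scholawrite", split="train")

# Inspect one entry; each record contains a "before" text, a writing
# intention label, and an "after" text (see Fine-tuning Procedure).
print(train_split[0])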

Fine-tuning Procedure

The dataset contains a before text, an intention label, and an after text. For each entry, we set up a prompt ready for fine-tuning: the intention is first converted to its corresponding verbalizer, and then the before text, verbalizer, and after text are placed into a predefined prompt template. We mask out the system and user messages of the prompt with -100, so that the model is trained on the responses only.
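
The following is a minimal sketch of this conversion; the verbalizer mapping and prompt wording are illustrative placeholders, not the exact template used for fine-tuning.

IGNORE_INDEX = -100  # label value ignored by the cross-entropy loss

# Hypothetical verbalizer mapping; the real mapping covers all 15 intention labels.
VERBALIZERS = {
    "Clarity": "improve the clarity of the text",
}

def build_training_example(tokenizer, intention, before_text, after_text, max_seq_length=5096):
    verbalizer = VERBALIZERS[intention]
    prompt = [{
        "role": "user",
        "content": f"Revise the text to {verbalizer}.\n\nBefore text:\n{before_text}",
    }]
    # Tokenize the prompt alone to know how many leading tokens to mask.
    prompt_ids = tokenizer.apply_chat_template(prompt, tokenize=True, add_generation_prompt=True)
    full_ids = tokenizer.apply_chat_template(
        prompt + [{"role": "assistant", "content": after_text}], tokenize=True
    )[:max_seq_length]
    labels = list(full_ids)
    # Mask the instruction/user portion with -100 so the loss covers the response only.
    n_masked = min(len(prompt_ids), len(labels))
    labels[:n_masked] = [IGNORE_INDEX] * n_masked
    return {"input_ids": full_ids, "labels": labels}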

Fine-tuning Hyperparameters

  • Fine-tuning regime: QLoRA
  • max_seq_length: 5096
  • learning_rate: 3e-4
  • lr_scheduler_type: linear
  • per_device_train_batch_size: 1
  • gradient_accumulation_steps: 4
  • num_train_epochs: 1
  • fp16: False
  • bf16: True
  • logging_steps: 10
  • optim: adamw_8bit
  • weight_decay: 0.01
  • warmup_steps: 10
  • seed: 0
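
These settings map onto a standard Unsloth + TRL SFTTrainer run roughly as follows. This is a sketch under the assumption that the usual QLoRA recipe was used; train_dataset and output_dir are placeholders.

from transformers import TrainingArguments
from trl import SFTTrainer

# `model` and `tokenizer` come from FastLanguageModel.from_pretrained(...), with
# LoRA adapters already attached via FastLanguageModel.get_peft_model (QLoRA regime).
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # prompt-formatted ScholaWrite train split
    max_seq_length=5096,
    args=TrainingArguments(
        learning_rate=3e-4,
        lr_scheduler_type="linear",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        fp16=False,
        bf16=True,
        logging_steps=10,
        optim="adamw_8bit",
        weight_decay=0.01,
        warmup_steps=10,
        seed=0,
        output_dir="outputs",
    ),
)
trainer.train()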

Machine Specs

  • Hardware: NVIDIA L40S GPU
  • Software: Unsloth
  • Hours used: 12 hrs
  • Compute Region: Minnesota

Testing Procedure

Testing Data

Instead of running tests on a held-out dataset, we performed the Iterative Self-Writing task (see the Direct Use section for task details). We picked four seed documents as starting points, derived from four award-winning NLP papers spanning different topics (Zeng et al., 2024; Lu et al., 2024b; Du et al., 2022a; Etxaniz et al., 2024).

Metrics

We use the following three metrics for automatic evaluation (a computation sketch follows the list):

  1. Lexical diversity: the ratio of unique to total tokens in the final iteration
  2. Topic consistency: cosine similarity between the seed document and the final output
  3. Intention coverage: the proportion of the 15 writing-intention labels in our taxonomy that appear at least once across the 100 iterations
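
The sketch below shows one way these metrics can be computed; whitespace tokenization and the sentence-transformers embedding model are assumptions made for illustration, not necessarily the exact choices in the paper.

from sentence_transformers import SentenceTransformer, util

def lexical_diversity(text: str) -> float:
    # Ratio of unique to total tokens (simple whitespace tokenization for illustration).
    tokens = text.split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def topic_consistency(seed_doc: str, final_output: str) -> float:
    # Cosine similarity between embeddings of the seed document and the final output.
    embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed embedding model
    embeddings = embedder.encode([seed_doc, final_output], convert_to_tensor=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item()

def intention_coverage(predicted_intentions: list[str], taxonomy_size: int = 15) -> float:
    # Proportion of the 15 taxonomy labels used at least once across iterations.
    return len(set(predicted_intentions)) / taxonomy_size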

Furthermore, inspired by Chang et al. (2023), we conducted a human evaluation with three native English speakers experienced in Overleaf (see the paper for a more detailed description of the entire evaluation process). They assessed the outputs based on the following metrics:

  1. Accuracy: alignment with the predicted intention
  2. Alignment: how closely the model's process resembled human writing style
  3. Fluency: grammatical correctness of the final writing
  4. Coherence: logical structure
  5. Relevance: connection to the seed paper's contents

Accuracy was evaluated for each iteration, while alignment, fluency, and coherence were assessed through pairwise comparisons on the final iteration.

Results

Auto Evaluation Results for Seed 1

| Metric | Llama-8b-sw | Llama-3b-instruct | Llama-8b-instruct | GPT4o |
|---|---|---|---|---|
| Lexical Diversity | 0.4985 | 0.2197 | 0.2268 | 0.3405 |
| Cosine Similarity | 0.8197 | 0.7839 | 0.4494 | 0.6516 |

Auto Evaluation Results for Seed 2

| Metric | Llama-8b-sw | Llama-3b-instruct | Llama-8b-instruct | GPT4o |
|---|---|---|---|---|
| Lexical Diversity | 0.4262 | 0.164 | 0.23 | 0.3113 |
| Cosine Similarity | 0.8644 | 0.7467 | 0.8319 | 0.6585 |

Auto Evaluation Results for Seed 3

| Metric | Llama-8b-sw | Llama-3b-instruct | Llama-8b-instruct | GPT4o |
|---|---|---|---|---|
| Lexical Diversity | 0.457 | 0.2127 | 0.1784 | 0.3093 |
| Cosine Similarity | 0.7772 | 0.8416 | 0.8367 | 0.4037 |

Auto Evaluation Results for Seed 4

| Metric | Llama-8b-sw | Llama-3b-instruct | Llama-8b-instruct | GPT4o |
|---|---|---|---|---|
| Lexical Diversity | 0.359 | 0.1802 | 0.1824 | 0.3139 |
| Cosine Similarity | 0.2147 | 0.5009 | 0.5353 | 0.6500 |

Human Evaluation Results for Seed 1

| Metric | Model | Evaluator 1 | Evaluator 2 | Evaluator 3 |
|---|---|---|---|---|
| Accuracy | Finetuned | 43 | 3 | 17 |
| Accuracy | Baseline | 47 | 22 | 38 |
| Alignment | Finetuned |  |  |  |
| Alignment | Baseline | X | X | X |
| Fluency | Finetuned |  |  |  |
| Fluency | Baseline | X | X | X |
| Coherence | Finetuned |  |  |  |
| Coherence | Baseline | X | X | X |
| Relevance | Finetuned | Yes | No | No |
| Relevance | Baseline | Yes | Yes | Yes |

Human Evaluation Results for Seed 2

| Metric | Model | Evaluator 1 | Evaluator 2 | Evaluator 3 |
|---|---|---|---|---|
| Accuracy | Finetuned | 26 | 0 | 5 |
| Accuracy | Baseline | 48 | 12 | 29 |
| Alignment | Finetuned |  |  |  |
| Alignment | Baseline | X | X | X |
| Fluency | Finetuned |  |  |  |
| Fluency | Baseline | X | X | X |
| Coherence | Finetuned |  |  |  |
| Coherence | Baseline | X | X | X |
| Relevance | Finetuned | Yes | Yes | Yes |
| Relevance | Baseline | Yes | Yes | Yes |

Human Evaluation Results for Seed 3

| Metric | Model | Evaluator 1 | Evaluator 2 | Evaluator 3 |
|---|---|---|---|---|
| Accuracy | Finetuned | 52 | 0 | 3 |
| Accuracy | Baseline | 70 | 23 | 43 |
| Alignment | Finetuned |  |  |  |
| Alignment | Baseline | X | X | X |
| Fluency | Finetuned |  |  |  |
| Fluency | Baseline | X | X | X |
| Coherence | Finetuned |  |  |  |
| Coherence | Baseline | X | X | X |
| Relevance | Finetuned | Yes | Yes | No |
| Relevance | Baseline | Yes | Yes | Yes |

Human Evaluation Results for Seed 4

| Metric | Model | Evaluator 1 | Evaluator 2 | Evaluator 3 |
|---|---|---|---|---|
| Accuracy | Finetuned | 37 | 3 | 6 |
| Accuracy | Baseline | 60 | 22 | 48 |
| Alignment | Finetuned |  |  |  |
| Alignment | Baseline | X | X | X |
| Fluency | Finetuned |  |  |  |
| Fluency | Baseline | X | X | X |
| Coherence | Finetuned |  |  |  |
| Coherence | Baseline | X | X | X |
| Relevance | Finetuned | Yes | No | No |
| Relevance | Baseline | Yes | Yes | Yes |

Summary

The Auto Evaluation Results tables illustrate the quality of the final writing output produced by each model across all four seed documents. Notably, our models (scholawrite-llama3.1-8b-writing and scholawrite-llama3.1-8b-classifier) consistently produced the most lexically diverse final outputs. They also generated content that was semantically most aligned with the seed document for Seeds 1 and 2, and covered the highest number of writing intentions from our taxonomy for all seeds except Seed 3. These results underscore the effectiveness of ScholaWrite as a valuable resource for enhancing the quality of scholarly writing generated by language models.

Despite their remarkable performance on automatic evaluation metrics, LLMs still exhibit limitations in learning human writing behaviors and scholarly thinking processes. According to our human evaluation, our model generated fewer instances of "after text" that aligned with the predicted intentions from the previous step during the 100 iterations across all four seed documents. Furthermore, all three evaluators unanimously agreed that the baseline model demonstrated more human-like writing behaviors throughout the iterations. Its final outputs were also perceived as more grammatically correct and as containing stronger logical claims compared to our models.

However, the evaluators also noted that the final outputs from our models contained more relevant content for Seeds 2 and 3. This observation aligns with the trend in topic consistency scores shown in the Auto Evaluation Results for Seeds 2 and 3, further highlighting the usefulness of the ScholaWrite dataset in certain contexts.

BibTeX

@misc{wang2025scholawritedatasetendtoendscholarly,
      title={ScholaWrite: A Dataset of End-to-End Scholarly Writing Process},
      author={Linghe Wang and Minhwa Lee and Ross Volkov and Luan Tuyen Chau and Dongyeop Kang},
      year={2025},
      eprint={2502.02904},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.02904},
}