---
license: mit
---

This model is primarily designed for the automatic grading of English essays, particularly those written by second language (L2) learners. It was trained on the English Language Learner Insight, Proficiency, and Skills Evaluation (ELLIPSE) Corpus, a freely available resource comprising approximately 6,500 writing samples from English language learners, each scored for overall holistic language proficiency as well as analytic scores for cohesion, syntax, vocabulary, phraseology, grammar, and conventions. The scores were assigned by professional English teachers following rigorous grading procedures. This training data ensures that the model acquires high practicality and accuracy, closely emulating professional grading standards.

The model's performance on the test dataset of around 980 English essays is summarized by the following metrics: accuracy = 0.87 and F1 score = 0.85.

Given an input essay, the model outputs six scores corresponding to cohesion, syntax, vocabulary, phraseology, grammar, and conventions. Each score ranges from 1 to 5, with higher scores indicating greater proficiency. These dimensions collectively assess the quality of the essay from multiple perspectives. The model is a valuable tool for EFL teachers and researchers, and it is also useful to English L2 learners and parents for self-evaluation of composition skills.

To test the model, run the following code or paste your essay into the API interface:

```python
# Import packages
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model = AutoModelForSequenceClassification.from_pretrained("Kevintu/Engessay_grading_ML")
tokenizer = AutoTokenizer.from_pretrained("Kevintu/Engessay_grading_ML")

# Example essay input
new_text = "The English Language Learner Insight, Proficiency and Skills Evaluation (ELLIPSE) Corpus is a freely available corpus of ~6,500 ELL writing samples that have been scored for overall holistic language proficiency as well as analytic proficiency scores related to cohesion, syntax, vocabulary, phraseology, grammar, and conventions. In addition, the ELLIPSE corpus provides individual and demographic information for the ELL writers in the corpus including economic status, gender, grade level (8-12), and race/ethnicity. The corpus provides language proficiency scores for individual writers and was developed to advance research in corpus and NLP approaches to assess overall and more fine-grained features of proficiency."

# Alternatively, read the essay from a text file:
# file_path = 'path/to/yourfile.txt'
# with open(file_path, 'r', encoding='utf-8') as file:
#     new_text = file.read()

# Encode the text with the same tokenizer used during training
encoded_input = tokenizer(new_text, return_tensors='pt', padding=True, truncation=True, max_length=64)

# Set the model to evaluation mode and run inference without gradient tracking
model.eval()
with torch.no_grad():
    outputs = model(**encoded_input)

# The model is a regression model, so the logits are the raw trait scores
predicted_scores = outputs.logits.squeeze().numpy()
trait_names = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]

# Print the predicted trait scores
for trait, score in zip(trait_names, predicted_scores):
    print(f"{trait}: {score:.4f}")
```

Expected output:

```text
cohesion: 3.5399
syntax: 3.6380
vocabulary: 3.9250
phraseology: 3.8381
grammar: 3.9194
conventions: 3.6819
```
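The raw regression outputs above are continuous values. If you want to report them on a rubric-style scale of 1 to 5 in half-point increments (an assumption about how you may wish to present the scores, not something the model enforces), a minimal post-processing sketch:

```python
def to_rubric_scale(score, lo=1.0, hi=5.0, step=0.5):
    """Clamp a raw score to [lo, hi] and round to the nearest half point."""
    clamped = max(lo, min(hi, score))
    return round(clamped / step) * step

# Raw scores taken from the expected output above
raw_scores = {
    "cohesion": 3.5399, "syntax": 3.6380, "vocabulary": 3.9250,
    "phraseology": 3.8381, "grammar": 3.9194, "conventions": 3.6819,
}

for trait, score in raw_scores.items():
    print(f"{trait}: {to_rubric_scale(score)}")
```

Note that `to_rubric_scale` also guards against out-of-range predictions, which a regression head can occasionally produce.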