
Exam-corrector: A Fine-Tuned Llama 8B Model

Overview

Exam-corrector is a fine-tuned version of the Llama 8B model, adapted to act as a written-question corrector. The model grades student answers by comparing them against model answers according to a set of predefined instructions. Fine-tuning was performed with LoRA (Low-Rank Adaptation).

Model Description

Exam-corrector is designed to provide consistent and fair grading of written exam answers. It takes two inputs, a model answer (the reference answer) and a student answer, and returns a grade along with a brief explanation.

Instructions

The grading process follows these detailed instructions:

  1. The input always consists of two components: the Model Answer and the Student Answer.
  2. The Model Answer is used solely as a reference and does not receive any marks.
  3. Grades are assigned to the Student Answer based on its alignment with the Model Answer.
  4. Full marks are given to Student Answers that convey the complete meaning of the Model Answer, even if different words are used.
  5. Incomplete or irrelevant information results in deducted marks based on the answer's quality and completeness.
  6. A consistent marking technique is used to ensure the same answers always receive the same marks.
  7. Questions with no answer receive zero marks.
  8. Each grade comes with a one-line brief explanation of the mark.

Input Format

Model Answer:

{model_answer}

Student Answer:

{student_answer}

Output Format

Response:

{grade} {explanation}
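The input and output formats above can be assembled into a single prompt string; the sketch below is a minimal illustration in Python (the exact wording of the training prompt is an assumption, only the field layout is taken from the formats above):

```python
# Hypothetical prompt template mirroring the input/output format above;
# the exact wording used during fine-tuning may differ.
PROMPT_TEMPLATE = (
    "Model Answer:\n{model_answer}\n\n"
    "Student Answer:\n{student_answer}\n\n"
    "Response:\n"
)

def build_prompt(model_answer: str, student_answer: str) -> str:
    """Fill the template with the reference and student answers."""
    return PROMPT_TEMPLATE.format(
        model_answer=model_answer,
        student_answer=student_answer,
    )

prompt = build_prompt(
    "Photosynthesis converts light energy into chemical energy.",
    "Photosynthesis is when plants turn light into energy.",
)
print(prompt)
```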

Training Details

This model was fine-tuned using the LoRA (Low-Rank Adaptation) technique. Below is a function to print the number of trainable parameters in the model:

def print_number_of_trainable_model_parameters(model):
    """Return a summary of trainable vs. total parameters in the model."""
    trainable_model_params = 0
    all_model_params = 0
    for _, param in model.named_parameters():
        all_model_params += param.numel()
        if param.requires_grad:
            trainable_model_params += param.numel()
    return (
        f"trainable model parameters: {trainable_model_params}\n"
        f"all model parameters: {all_model_params}\n"
        f"percentage of trainable model parameters: "
        f"{100 * trainable_model_params / all_model_params:.2f}%"
    )

print(print_number_of_trainable_model_parameters(model))

trainable model parameters: 167772160
all model parameters: 4708372480
percentage of trainable model parameters: 3.56%
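To see where those trainable parameters come from, here is a self-contained pure-PyTorch sketch of the LoRA idea (an illustration only, not the actual training code): the pretrained weight is frozen and a small trainable low-rank update is added alongside it.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W x + (B A) x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A is small random, B is zero, so the adapter starts as a no-op.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")
```

Only the two rank-8 matrices (65,536 parameters here) are trainable, while the 4096x4096 base weight stays frozen, which is why the fine-tuned model above trains just 3.56% of its parameters.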

Usage

To use this model for grading student answers, you can load it from Hugging Face and pass the appropriate inputs as shown in the example prompt.

Example

from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("MohamedMotaz/Examination-llama-8b-4bit")
model = LlamaForCausalLM.from_pretrained("MohamedMotaz/Examination-llama-8b-4bit")

# Prompt template following the input/output format described above.
prompt = "Model Answer:\n{}\n\nStudent Answer:\n{}\n\nResponse:\n"

model_answer = "The process of photosynthesis involves converting light energy into chemical energy."
student_answer = "Photosynthesis is when plants turn light into energy."

inputs = prompt.format(model_answer, student_answer)
input_ids = tokenizer(inputs, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_new_tokens=64)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)
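Because a causal LM's generation includes the prompt tokens, the graded portion can be pulled out after the final `Response:` marker. A small helper sketch (the `Response:` delimiter is assumed from the output format above; adjust it if your prompt differs):

```python
def extract_grade(full_text: str) -> str:
    """Return the text after the last 'Response:' marker, stripped."""
    marker = "Response:"
    idx = full_text.rfind(marker)
    if idx == -1:
        return full_text.strip()
    return full_text[idx + len(marker):].strip()

sample = (
    "Model Answer:\n...\n\n"
    "Response:\n8/10 Captures the core idea but omits chemical energy."
)
print(extract_grade(sample))
# → 8/10 Captures the core idea but omits chemical energy.
```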

Conclusion

Exam-corrector is a robust tool for automating the grading of written exam answers, ensuring consistent and fair evaluation based on model answers. Feel free to fine-tune further or adapt the model for other specific grading tasks.

Contact

For any issues, questions, or contributions, please reach out via my LinkedIn.

Model size: 4.65B params (Safetensors)
Tensor types: FP16, F32, U8