
mistral_instruct_generation (Resume ATS score generation based on a job description)

This model is a specialized chatbot that automates resume evaluation by producing an ATS (Applicant Tracking System) score for a resume against a given job description. It is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2, trained on a custom dataset tailored to the nuances of job descriptions and resume content.
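Since this is a PEFT adapter rather than a full checkpoint, it has to be applied on top of the base model. A minimal loading sketch is below; the `adapter_id` default is a placeholder for the actual Hub repository id, and the imports are deferred so the sketch stays importable without the libraries installed.

```python
def load_finetuned_model(adapter_id: str = "mistral_instruct_generation"):
    """Load the base Mistral model and apply this PEFT adapter on top of it.

    `adapter_id` is a placeholder; substitute the actual Hub repo id or a
    local path containing adapter_config.json and the adapter weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "mistralai/Mistral-7B-Instruct-v0.2"
    base = AutoModelForCausalLM.from_pretrained(base_id)
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```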

Model description

The mistral_instruct_generation model employs advanced NLP techniques to understand and compare the content of resumes against job descriptions. It aims to support applicants by offering an automated, preliminary assessment of candidate suitability, streamlining the initial stages of the hiring process.
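To query the model, a job description and resume are combined into a single instruction. The sketch below assumes the standard Mistral-Instruct chat template (`[INST] ... [/INST]`); the exact prompt wording used during fine-tuning is not documented in this card, so the instruction text here is illustrative only.

```python
def build_ats_prompt(job_description: str, resume: str) -> str:
    """Wrap a job description and a resume in a Mistral-Instruct prompt.

    The instruction wording is an assumption; adjust it to match the
    prompt format the adapter was actually trained with.
    """
    instruction = (
        "Evaluate the following resume against the job description "
        "and return an ATS score from 0 to 100.\n\n"
        f"Job description:\n{job_description}\n\n"
        f"Resume:\n{resume}"
    )
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_ats_prompt(
    "Senior Python developer, 5+ years experience.",
    "Backend engineer with 6 years of Python and Django.",
)
```

The resulting string can be tokenized and passed to the model's `generate` method.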

Intended uses & limitations

This model is intended for use in HR technology platforms and recruitment software, providing an automated way to score resumes against job descriptions. It is designed to enhance, not replace, human decision-making processes in recruitment. Limitations include potential biases in training data and the need for regular updates to adapt to evolving job market requirements. Users should be aware of these limitations and use the model's output as one of several tools in a comprehensive recruitment process.

Training and evaluation data

More information needed

Training procedure

The model was trained on a custom dataset of resume/job-description pairs spanning a range of industries, curated to cover a broad spectrum of job roles, experience levels, and skills. Details of the dataset composition would provide further insight into the model's capabilities and potential biases.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_steps: 0.03
  • num_epochs: 4
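For reference, the hyperparameters above can be collected into a plain dictionary; the key names below follow `transformers.TrainingArguments` conventions where applicable, which is an assumption about how the run was configured. Note that the warmup value is fractional, so it likely behaves as a warmup ratio rather than a step count.

```python
# Hyperparameters as reported in the card, keyed by (assumed)
# transformers.TrainingArguments field names.
TRAINING_HYPERPARAMETERS = {
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "optim": "adam",                  # betas=(0.9, 0.999), epsilon=1e-8
    "lr_scheduler_type": "constant",
    "warmup": 0.03,                   # fractional: likely a warmup ratio
    "num_train_epochs": 4,
}
```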

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8804 | 0.17 | 20  | 1.8834 |
| 1.8364 | 0.34 | 40  | 1.8631 |
| 1.8363 | 0.51 | 60  | 1.8547 |
| 1.8312 | 0.68 | 80  | 1.8298 |
| 1.7648 | 0.85 | 100 | 1.8102 |
| 1.6197 | 1.02 | 120 | 1.7888 |
| 1.6869 | 1.19 | 140 | 1.7887 |
| 1.5637 | 1.36 | 160 | 1.7672 |
| 1.6921 | 1.53 | 180 | 1.7476 |
| 1.5883 | 1.69 | 200 | 1.7305 |
| 1.5235 | 1.86 | 220 | 1.7099 |
| 1.6134 | 2.03 | 240 | 1.7045 |
| 1.4006 | 2.2  | 260 | 1.7191 |
| 1.5571 | 2.37 | 280 | 1.6963 |
| 1.3889 | 2.54 | 300 | 1.6869 |
| 1.4278 | 2.71 | 320 | 1.6658 |
| 1.3868 | 2.88 | 340 | 1.6592 |
| 1.1515 | 3.05 | 360 | 1.6576 |
| 1.2761 | 3.22 | 380 | 1.6553 |
| 1.1679 | 3.39 | 400 | 1.6439 |
| 1.3966 | 3.56 | 420 | 1.6301 |
| 1.2536 | 3.73 | 440 | 1.6200 |
| 1.262  | 3.9  | 460 | 1.6300 |

Framework versions

  • PEFT 0.8.2
  • Transformers 4.36.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.16.0
  • Tokenizers 0.15.2