---
license: apache-2.0
base_model: Buseak/spellcorrector_17_02_050_qwerty
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: spellcorrector_18_02_050_qwerty_v6
  results: []
---

# spellcorrector_18_02_050_qwerty_v6

This model is a fine-tuned version of [Buseak/spellcorrector_17_02_050_qwerty](https://huggingface.co/Buseak/spellcorrector_17_02_050_qwerty) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0028
- Precision: 0.9968
- Recall: 0.9941
- F1: 0.9954
- Accuracy: 0.9993

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Trainer`-based sketch of this configuration appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0887        | 1.0   | 967   | 0.0551          | 0.9876    | 0.9801 | 0.9838 | 0.9842   |
| 0.0684        | 2.0   | 1934  | 0.0415          | 0.9930    | 0.9844 | 0.9887 | 0.9881   |
| 0.0581        | 3.0   | 2901  | 0.0343          | 0.9924    | 0.9855 | 0.9890 | 0.9899   |
| 0.0487        | 4.0   | 3868  | 0.0280          | 0.9925    | 0.9882 | 0.9903 | 0.9917   |
| 0.0425        | 5.0   | 4835  | 0.0241          | 0.9930    | 0.9882 | 0.9906 | 0.9930   |
| 0.0382        | 6.0   | 5802  | 0.0209          | 0.9946    | 0.9882 | 0.9914 | 0.9940   |
| 0.0333        | 7.0   | 6769  | 0.0168          | 0.9951    | 0.9909 | 0.9930 | 0.9950   |
| 0.0294        | 8.0   | 7736  | 0.0148          | 0.9941    | 0.9909 | 0.9925 | 0.9957   |
| 0.0265        | 9.0   | 8703  | 0.0121          | 0.9946    | 0.9909 | 0.9927 | 0.9964   |
| 0.0238        | 10.0  | 9670  | 0.0103          | 0.9952    | 0.9919 | 0.9935 | 0.9970   |
| 0.0216        | 11.0  | 10637 | 0.0090          | 0.9978    | 0.9930 | 0.9954 | 0.9974   |
| 0.0193        | 12.0  | 11604 | 0.0076          | 0.9952    | 0.9930 | 0.9941 | 0.9979   |
| 0.0175        | 13.0  | 12571 | 0.0065          | 0.9973    | 0.9936 | 0.9954 | 0.9982   |
| 0.016         | 14.0  | 13538 | 0.0055          | 0.9973    | 0.9936 | 0.9954 | 0.9985   |
| 0.0137        | 15.0  | 14505 | 0.0045          | 0.9968    | 0.9936 | 0.9952 | 0.9988   |
| 0.0127        | 16.0  | 15472 | 0.0039          | 0.9973    | 0.9941 | 0.9957 | 0.9990   |
| 0.0118        | 17.0  | 16439 | 0.0034          | 0.9978    | 0.9941 | 0.9960 | 0.9991   |
| 0.0111        | 18.0  | 17406 | 0.0030          | 0.9968    | 0.9941 | 0.9954 | 0.9992   |
| 0.0104        | 19.0  | 18373 | 0.0029          | 0.9968    | 0.9941 | 0.9954 | 0.9993   |
| 0.0099        | 20.0  | 19340 | 0.0028          | 0.9968    | 0.9941 | 0.9954 | 0.9993   |

### Framework versions

- Transformers 4.35.2
- PyTorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
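
## How to use

The card does not document an inference recipe, so the following is a minimal sketch. It assumes the checkpoint exposes a token-classification head (the precision/recall/F1/accuracy metrics above are consistent with that setup) and that the repo id matches the card title; verify the actual architecture and label set in the repository's `config.json` before relying on it.

```python
# Minimal inference sketch. Assumptions: the model is a token-classification
# checkpoint, and the repo id below matches the card title.
from transformers import pipeline

corrector = pipeline(
    "token-classification",
    model="Buseak/spellcorrector_18_02_050_qwerty_v6",
)

# Each prediction carries the token, its predicted label, and a confidence.
for pred in corrector("exmaple sentence with qwerty typos"):
    print(pred["word"], pred["entity"], round(pred["score"], 4))
```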
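
## Reproducing the training setup

The `generated_from_trainer` tag indicates the model was trained with the Hugging Face `Trainer`. The sketch below maps the hyperparameters listed above onto `TrainingArguments`. Because the training data is undocumented, the dataset pipeline is omitted: `train_ds`, `eval_ds`, and the token-classification head are placeholders and assumptions, not part of this card.

```python
# Reproduction sketch of the hyperparameters listed in "Training
# hyperparameters", using the Trainer API from Transformers 4.35.2.
from transformers import (
    AutoModelForTokenClassification,  # assumption: token-classification head
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "Buseak/spellcorrector_17_02_050_qwerty"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base)

args = TrainingArguments(
    output_dir="spellcorrector_18_02_050_qwerty_v6",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch eval
    adam_beta1=0.9,               # Adam settings as listed on the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# train_ds / eval_ds are hypothetical placeholders: the card does not
# document the dataset, so plug in your own tokenized datasets here.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()
```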