---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: squeezebert-uncased-News_About_Gold
  results: []
language:
- en
pipeline_tag: text-classification
---

# squeezebert-uncased-News_About_Gold

This model is a fine-tuned version of [squeezebert/squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased).

It achieves the following results on the evaluation set:
- Loss: 0.2643
- Accuracy: 0.9167
- F1
  - Weighted: 0.9166
  - Micro: 0.9167
  - Macro: 0.8749
- Recall
  - Weighted: 0.9167
  - Micro: 0.9167
  - Macro: 0.8684
- Precision
  - Weighted: 0.9168
  - Micro: 0.9167
  - Macro: 0.8822

## Model description

For more information on how this model was created, see the project notebook: https://github.com/DunnBC22/NLP_Projects/blob/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/News%20About%20Gold%20-%20Sentiment%20Analysis%20-%20SqueezeBERT%20with%20W%26B.ipynb

This project is part of a comparison of seven transformers. The README for the comparison is here: https://github.com/DunnBC22/NLP_Projects/tree/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)

## Intended uses & limitations

This model is a portfolio project intended to demonstrate my ability to fine-tune a transformer for sentiment classification of commodity news; it has not been tuned or evaluated for production use. A minimal inference sketch appears at the end of this card.

## Training and evaluation data

Dataset source: https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-in-commodity-market-gold

_Input Word Length:_

![Length of Input Text (in Words)](https://github.com/DunnBC22/NLP_Projects/raw/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/Images/Input%20Word%20Length.png)

_Class Distribution:_

![Class Distribution](https://github.com/DunnBC22/NLP_Projects/raw/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/Images/Class%20Distribution.png)

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored as a `TrainingArguments` sketch at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.8756        | 1.0   | 133  | 0.4529          | 0.8699   | 0.8557      | 0.8699   | 0.6560   | 0.8699          | 0.8699       | 0.6727       | 0.8437             | 0.8699          | 0.6414          |
| 0.4097        | 2.0   | 266  | 0.3196          | 0.9026   | 0.8982      | 0.9026   | 0.7826   | 0.9026          | 0.9026       | 0.7635       | 0.9059             | 0.9026          | 0.8743          |
| 0.3147        | 3.0   | 399  | 0.2824          | 0.9115   | 0.9111      | 0.9115   | 0.8470   | 0.9115          | 0.9115       | 0.8319       | 0.9138             | 0.9115          | 0.8751          |
| 0.2685        | 4.0   | 532  | 0.2649          | 0.9186   | 0.9187      | 0.9186   | 0.8681   | 0.9186          | 0.9186       | 0.8602       | 0.9203             | 0.9186          | 0.8797          |
| 0.2479        | 5.0   | 665  | 0.2643          | 0.9167   | 0.9166      | 0.9167   | 0.8749   | 0.9167          | 0.9167       | 0.8684       | 0.9168             | 0.9167          | 0.8822          |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
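
## How to use

Below is a minimal inference sketch. It assumes the model is hosted on the Hugging Face Hub; the repository ID shown is a placeholder, so substitute the actual path of this model. Label names depend on the `id2label` mapping saved with the model.

```python
from transformers import pipeline

# Placeholder Hub ID - replace with the actual repository path for this model.
classifier = pipeline(
    "text-classification",
    model="<hub-user>/squeezebert-uncased-News_About_Gold",
)

# Returns a list of {"label": ..., "score": ...} dicts; labels follow the
# id2label mapping stored in the model's config.
print(classifier("Gold futures edge higher as the dollar weakens."))
```

The hyperparameters listed under "Training procedure" can be expressed as a `TrainingArguments` sketch. This is a reconstruction from the reported values, not the exact configuration from the project notebook; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the reported configuration; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="squeezebert-uncased-News_About_Gold",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: per-epoch eval, matching the results table
)
```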