prithivMLmods committed on
Commit ac00149
1 Parent(s): 11a25a3

Update README.md

Files changed (1)
  1. README.md +12 -0
README.md CHANGED
@@ -14,7 +14,19 @@ library_name: transformers
  This implementation leverages **BERT (Bidirectional Encoder Representations from Transformers)** for binary classification (Spam / Ham) using sequence classification. The model uses the **`prithivMLmods/Spam-Text-Detect-Analysis` dataset** and integrates **Weights & Biases (wandb)** for comprehensive experiment tracking.
 
  ---
+ ### Summary of Uploaded Files:
+
+ | **File Name**                      | **Size**  | **Description**                                     | **Upload Status** |
+ |------------------------------------|-----------|-----------------------------------------------------|-------------------|
+ | `.gitattributes`                   | 1.52 kB   | Tracks files stored with Git LFS.                   | Uploaded          |
+ | `README.md`                        | 8.78 kB   | Comprehensive documentation for the repository.     | Updated           |
+ | `config.json`                      | 727 Bytes | Configuration file related to the model settings.   | Uploaded          |
+ | `model.safetensors`                | 438 MB    | Model weights stored in safetensors format.         | Uploaded (LFS)    |
+ | `special_tokens_map.json`          | 125 Bytes | Mapping of special tokens for tokenizer handling.   | Uploaded          |
+ | `tokenizer_config.json`            | 1.24 kB   | Tokenizer settings for initialization.              | Uploaded          |
+ | `vocab.txt`                        | 232 kB    | Vocabulary file for tokenizer use.                  | Uploaded          |
 
+ ---
  ## **🛠️ Overview**
 
  ### **Core Details:**
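
For reference, a minimal sketch of running the uploaded checkpoint with Hugging Face Transformers for Spam / Ham inference. This is not part of the commit: the repository id, label names, and example text below are assumptions for illustration only.

```python
# Minimal usage sketch (assumptions noted): load the uploaded BERT checkpoint
# and classify a single message as Spam or Ham.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "prithivMLmods/Spam-Bert-Uncased"  # hypothetical repo id; replace with the actual model repository

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

text = "Congratulations! You have won a free prize. Reply now to claim."  # example input, not from the dataset
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# id2label is read from config.json if defined there; otherwise fall back to the raw index
print(model.config.id2label.get(pred, str(pred)))
```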