---
tags:
- ner
---

# NER-finetuning-BERT

This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) for Named Entity Recognition (NER), trained on the CoNLL-2002 dataset. It achieves the following results (a toy illustration of how such entity-level scores are computed follows the list):

- Precision: 0.8265
- Recall: 0.8443
- F1: 0.8353
- Accuracy: 0.9786
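
These figures are entity-level scores of the kind conventionally computed with the `seqeval` library; the card does not state the exact evaluation code, so the snippet below is only a minimal sketch with made-up label sequences, assuming `seqeval` is installed:

```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Hypothetical label sequences, not taken from the CoNLL-2002 evaluation.
# The gold data contains two entities (a PER span and a LOC span); the
# prediction recovers only the PER span.
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0  -> 1 of 1 predicted entities is correct
print(recall_score(y_true, y_pred))     # 0.5  -> 1 of 2 gold entities was found
print(f1_score(y_true, y_pred))         # ~0.667, the harmonic mean of the two
```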

## Model description

This model is a fine-tuned version of the bert-base-cased pre-trained model, tailored specifically to Named Entity Recognition (NER). BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based model that captures the context of each word in a sentence by attending to both its left and right neighbours. The cased variant distinguishes between uppercase and lowercase letters, preserving case information that is a strong signal for NER, since capitalization often marks proper nouns.
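
A minimal inference sketch using the Transformers `pipeline` API; the repository id `jdavit/NER-finetuning-BERT` is an assumption based on this card's title, so substitute the actual model id if it differs:

```python
from transformers import pipeline

# NOTE: the model id below is assumed from the card title; adjust if needed.
ner = pipeline(
    "token-classification",
    model="jdavit/NER-finetuning-BERT",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# CoNLL-2002 covers Spanish and Dutch, hence the Spanish example sentence.
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
# Each detected entity is returned as a dict with entity_group, score,
# word, start, and end fields.
```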

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- evaluation_strategy: epoch
- save_strategy: epoch
- learning_rate: 2e-5
- num_train_epochs: 4
- per_device_train_batch_size: 16
- weight_decay: 0.01
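
These settings map directly onto the Hugging Face `TrainingArguments` API. A minimal sketch, assuming the `Trainer` API was used and with an assumed `output_dir` (the card does not name one):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="NER-finetuning-BERT",  # assumed; not stated in the card
    evaluation_strategy="epoch",       # evaluate at the end of every epoch
    save_strategy="epoch",             # checkpoint at the end of every epoch
    learning_rate=2e-5,
    num_train_epochs=4,
    per_device_train_batch_size=16,
    weight_decay=0.01,
)
```

With both strategies set to `"epoch"`, the model is evaluated and checkpointed once per epoch, which matches the four per-epoch loss rows reported below.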

### Training results

| Epoch | Training Loss | Validation Loss |
|:-----:|:-------------:|:---------------:|
| 1     | 0.005700      | 0.258581        |
| 2     | 0.004600      | 0.248794        |
| 3     | 0.002800      | 0.257513        |
| 4     | 0.002100      | 0.275097        |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1