EngTig committed
Commit 7156cb3
1 Parent(s): e7c03f6

Model save
README.md CHANGED
@@ -4,10 +4,8 @@ base_model: romainlhardy/roberta-large-finetuned-ner
 tags:
 - generated_from_trainer
 model-index:
-- name: roberta-large-finetuned-ner-finetuned-ner
+- name: roberta-large-finetuned-ner-finetuned-ner
   results: []
-datasets:
-- surrey-nlp/PLOD-filtered
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,7 +13,18 @@ should probably proofread and complete it, then remove this comment. -->
 
 # roberta-large-finetuned-ner-finetuned-ner
 
-This model is a fine-tuned version of [romainlhardy/roberta-large-finetuned-ner](https://huggingface.co/romainlhardy/roberta-large-finetuned-ner) on PLOD-filtered dataset.
+This model is a fine-tuned version of [romainlhardy/roberta-large-finetuned-ner](https://huggingface.co/romainlhardy/roberta-large-finetuned-ner) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- eval_loss: 0.1264
+- eval_precision: 0.9593
+- eval_recall: 0.9473
+- eval_f1: 0.9533
+- eval_accuracy: 0.9488
+- eval_runtime: 588.3236
+- eval_samples_per_second: 41.032
+- eval_steps_per_second: 10.258
+- epoch: 0.59
+- step: 16493
 
 ## Model description
 
@@ -40,11 +49,11 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 6
+- num_epochs: 2
 
 ### Framework versions
 
 - Transformers 4.38.2
 - Pytorch 2.2.1+cu121
 - Datasets 2.18.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
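The evaluation numbers added in this commit are internally consistent, and that can be checked with plain arithmetic on the values in the diff (no re-run of evaluation needed). A minimal sketch: `eval_f1` should be the harmonic mean of `eval_precision` and `eval_recall`, and the ratio of `eval_samples_per_second` to `eval_steps_per_second` is consistent with a per-device eval batch size of 4 (an inference; the batch size itself is not shown in this diff):

```python
# Values copied from the updated model card above.
precision = 0.9593
recall = 0.9473
samples_per_second = 41.032
steps_per_second = 10.258

# eval_f1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9533, matching the reported eval_f1

# samples/steps ratio = samples processed per optimizer-free eval step.
print(round(samples_per_second / steps_per_second))  # 4
```

The same arithmetic also explains the partial `epoch: 0.59` entry: the metrics were logged mid-training rather than at the end of an epoch.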
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:87c85b349abfa320cd0d6dd443b6764fb4d12dc17c97f08398c543d115432fab
+oid sha256:f923ae3e2b022a35b6dea6cb6c2df03c641f275cca135f3a17a768106c0d2f04
 size 1417309084
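The three-line entries in this diff are Git LFS pointer files (version, oid, size), not the weights themselves; only the pointer is stored in the git history, while the blob lives in LFS storage. A minimal sketch of reading one, using the new `model.safetensors` pointer from the diff above (the helper function is illustrative, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer for model.safetensors from the diff above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:f923ae3e2b022a35b6dea6cb6c2df03c641f275cca135f3a17a768106c0d2f04\n"
    "size 1417309084\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # 1417309084 bytes (~1.4 GB of weights)
```

Resolving a pointer to the actual bytes is the job of `git lfs pull` (or the Hub's download tooling); this sketch only reads the metadata.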
runs/Apr10_09-39-34_d628b22b3f8d/events.out.tfevents.1712742008.d628b22b3f8d.993.3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f3312d2403fc2986205261a9046c26194a8a69158bff992e3312e6c0942b627
+size 13110
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2f37de543529dfc047d41b3ff9d1c452eef17e1a851d22527d3247dd9ed4fbcd
+oid sha256:e5c8a240f427358eddf460cd3e0592f7b4992994ab93a63e441f759b8aa4242d
 size 4920