jonas-luehrs committed on
Commit dad0a65
1 Parent(s): a80f158

End of training

README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ base_model: jonas-luehrs/chembert_cased-MLM-chemistry
+ tags:
+ - generated_from_trainer
+ metrics:
+ - f1
+ - precision
+ - recall
+ - accuracy
+ model-index:
+ - name: chembert_cased-MLM-chemistry-textCLS-PETROCHEMICAL
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # chembert_cased-MLM-chemistry-textCLS-PETROCHEMICAL
+
+ This model is a fine-tuned version of [jonas-luehrs/chembert_cased-MLM-chemistry](https://huggingface.co/jonas-luehrs/chembert_cased-MLM-chemistry) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5883
+ - F1: 0.7662
+ - Precision: 0.7616
+ - Recall: 0.7838
+ - Accuracy: 0.7838
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | F1     | Precision | Recall | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
+ | 1.1285        | 1.0   | 125  | 0.7318          | 0.7165 | 0.7123    | 0.7297 | 0.7297   |
+ | 0.6049        | 2.0   | 250  | 0.6196          | 0.7524 | 0.7483    | 0.7703 | 0.7703   |
+ | 0.4449        | 3.0   | 375  | 0.5883          | 0.7662 | 0.7616    | 0.7838 | 0.7838   |
+
+
+ ### Framework versions
+
+ - Transformers 4.33.2
+ - Pytorch 2.0.1+cu118
+ - Datasets 2.14.5
+ - Tokenizers 0.13.3
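
For reference, the hyperparameters listed in the card correspond to a standard Hugging Face `Trainer` run. Below is a minimal sketch of that setup, assuming the usual `TrainingArguments` API; the dataset loading and output directory are placeholders, since the card does not name the training data.

```python
# Rough sketch (not from this repository) of how the card's hyperparameters
# map onto the Hugging Face Trainer API. Dataset preparation is omitted
# because the training data is not specified in the card.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "jonas-luehrs/chembert_cased-MLM-chemistry"  # base model from the card
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=7)

training_args = TrainingArguments(
    output_dir="chembert_cased-MLM-chemistry-textCLS-PETROCHEMICAL",  # placeholder
    learning_rate=2e-5,               # learning_rate: 2e-05
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=16,    # eval_batch_size: 16
    num_train_epochs=3,               # num_epochs: 3
    lr_scheduler_type="linear",       # lr_scheduler_type: linear
    seed=42,                          # seed: 42
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # datasets unknown
# trainer.train()
```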
config.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "_name_or_path": "jonas-luehrs/chembert_cased-MLM-chemistry",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "Automobile",
+     "1": "Catalyst",
+     "2": "Construct",
+     "3": "HouseConst",
+     "4": "Household",
+     "5": "IndustConst",
+     "6": "Process"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "Automobile": 0,
+     "Catalyst": 1,
+     "Construct": 2,
+     "HouseConst": 3,
+     "Household": 4,
+     "IndustConst": 5,
+     "Process": 6
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "torch_dtype": "float32",
+   "transformers_version": "4.33.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 28996
+ }
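
The config above sets up a seven-way, single-label classification head (`BertForSequenceClassification` with `problem_type: single_label_classification`), so inference is an argmax over seven logits mapped back to a category name via `id2label`. A minimal inference sketch, assuming the model is published under the Hub id implied by the card's name:

```python
# Minimal inference sketch (assumed usage). The Hub id below is inferred from
# the model name in the card and is not stated explicitly in this commit.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "jonas-luehrs/chembert_cased-MLM-chemistry-textCLS-PETROCHEMICAL"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "Zeolite catalysts are used in fluid catalytic cracking units."  # example input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 7): one logit per category

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])  # e.g. "Catalyst"
```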
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0fb7f197cffa3c16c73abe25f10952b202f373adbb74ebfd22ac939235e558fc
+ size 433330993
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": false,
+   "mask_token": "[MASK]",
+   "max_len": 512,
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
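
The tokenizer is a cased `BertTokenizer` (`do_lower_case: false`) with a 512-token limit, so longer inputs should be truncated. A small usage sketch, again assuming the Hub id inferred above:

```python
# Tokenizer usage sketch (assumed Hub id, as above).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "jonas-luehrs/chembert_cased-MLM-chemistry-textCLS-PETROCHEMICAL"
)

# Casing is preserved; inputs longer than 512 tokens are cut off via truncation.
encoded = tokenizer(
    "Polyethylene is produced from ethylene via Ziegler-Natta catalysis.",
    truncation=True,
    max_length=512,
)
print(encoded["input_ids"][:10])
```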
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6358f913dbda98b2c335a2e1d4a3993aca864e9e2d2d8c308aec182b7858b750
+ size 4091
vocab.txt ADDED
The diff for this file is too large to render. See raw diff