ubermenchh committed
Commit e0d8478
1 Parent(s): 9531be1

End of training
README.md CHANGED
@@ -1,3 +1,78 @@
  ---
  license: apache-2.0
+ base_model: t5-small
+ tags:
+ - generated_from_trainer
+ datasets:
+ - itihasa
+ metrics:
+ - bleu
+ model-index:
+ - name: sanskrit-english-model
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: itihasa
+       type: itihasa
+       config: Itihasa
+       split: test
+       args: Itihasa
+     metrics:
+     - name: Bleu
+       type: bleu
+       value: 0.3733
  ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # sanskrit-english-model
+
+ This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the itihasa dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 3.7022
+ - Bleu: 0.3733
+ - Gen Len: 19.0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
+ | 3.9891        | 1.0   | 4698 | 3.7609          | 0.3838 | 19.0    |
+ | 3.9079        | 2.0   | 9396 | 3.7022          | 0.3733 | 19.0    |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.0
+ - Pytorch 2.0.0
+ - Datasets 2.1.0
+ - Tokenizers 0.14.1
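The optimizer line in the card ("Adam with betas=(0.9,0.999) and epsilon=1e-08") together with the learning rate fully specifies the update rule. As a worked illustration only (not the actual training code, which uses the Trainer internally), one bias-corrected Adam step with these exact hyperparameters can be sketched as:

```python
import math

def adam_step(param, grad, m, v, t, lr=2e-05,
              beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a scalar parameter, using the card's hyperparameters."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# First step on a scalar parameter with gradient 0.5:
p, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
# After bias correction m_hat / sqrt(v_hat) ≈ 1, so the first step
# moves the parameter by roughly one learning rate: p ≈ 0.99998.
```

This is why Adam's first steps are roughly learning-rate-sized regardless of gradient scale.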
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.35.0"
+ }
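The added generation_config.json is plain JSON consumed by `transformers` at generation time. A minimal sketch (the file contents are inlined here for illustration rather than read from disk):

```python
import json

# The generation config added in this commit, inlined verbatim.
config_text = """{
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.35.0"
}"""

config = json.loads(config_text)
# T5 reuses the pad token (id 0) as the decoder start token,
# which is why both fields carry the same id here.
assert config["decoder_start_token_id"] == config["pad_token_id"] == 0
```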
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2445a2a0c25f2ea4ec2fcb638e7de289253b509958290367c93a37d6d21a22aa
+ oid sha256:102977a0f44d1217f31c9633156946ab2f3e1736d8a113979d0e3efe99087bb4
  size 242041896
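The model.safetensors entry is a git-lfs pointer file: three `key value` lines (version URL, `oid` with a hash algorithm prefix, `size` in bytes), and only the pointer lives in the repo history while the 242 MB blob is stored out of band. A small parsing sketch, using the new pointer from this diff:

```python
# Parse a git-lfs pointer file of the form shown in this diff.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:102977a0f44d1217f31c9633156946ab2f3e1736d8a113979d0e3efe99087bb4
size 242041896
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)
assert algo == "sha256" and len(digest) == 64  # sha256 hex digest is 64 chars
size_bytes = int(fields["size"])               # 242041896 bytes ≈ 242 MB
```

Note that the commit changes the `oid` (new weights) but not the `size`, which is expected: retraining the same architecture produces a file of identical byte length.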
runs/Nov15_14-44-27_3486ef2c9151/events.out.tfevents.1700059468.3486ef2c9151.47.2 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5f7b6cf7d6946b8ea5673dcf441e686cbd95efdf6a0da4cf9154b3034e671c6b
- size 8419
+ oid sha256:65665537fef56590a80a78d5fa9312dee6eb330312f14ea5de872fcab6ff3e60
+ size 9143