keethu committed on
Commit
20a5ea4
1 Parent(s): 3f45153

Update README.md

Files changed (1)
  1. README.md +13 -7
README.md CHANGED
@@ -6,26 +6,31 @@ tags:
 model-index:
 - name: results
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# results
 
-This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
 
 ## Model description
 
-More information needed
 
 ## Intended uses & limitations
 
-More information needed
 
 ## Training and evaluation data
 
-More information needed
 
 ## Training procedure
@@ -42,11 +47,12 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-
 
 ### Framework versions
 
 - Transformers 4.41.2
 - Pytorch 2.3.0+cu121
 - Datasets 2.20.0
-- Tokenizers 0.19.1
 
 model-index:
 - name: results
   results: []
+language:
+- en
+metrics:
+- accuracy
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+# Results
 
+This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the Kubernetes dataset, which is maintained in the same hub.
 
 ## Model description
 
+This model can be used to generate text related to Kubernetes.
+It is intended as a first model toward work on IBN.
 
 ## Intended uses & limitations
 
+It can be used for text generation.
 
 ## Training and evaluation data
 
+The model was trained on training data only; no evaluation data was used.
 
 ## Training procedure
 
 ### Training results
 
+Training loss: 3.4602108001708984 (global_step=3, epoch=3.0)
+- train_runtime: 83.5107 s
+- train_samples_per_second: 0.036
+- train_steps_per_second: 0.036
+- total_flos: 1567752192000.0
 
 ### Framework versions
 
 - Transformers 4.41.2
 - Pytorch 2.3.0+cu121
 - Datasets 2.20.0
+- Tokenizers 0.19.1
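The card's stated use (text generation with the fine-tuned gpt2) can be exercised with a short `transformers` sketch. This is a minimal example, not the author's code: the base `gpt2` id is used as a stand-in because the fine-tuned repo id is not given in the diff; swap in the actual checkpoint path, and the prompt is an arbitrary illustration.

```python
# Minimal text-generation sketch with the transformers pipeline API.
# "gpt2" is a placeholder for the fine-tuned checkpoint described in
# this card; replace it with the actual repo id or local path.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Greedy decoding for a reproducible continuation of a Kubernetes prompt.
out = generator("A Kubernetes Deployment is", max_new_tokens=30, do_sample=False)
print(out[0]["generated_text"])
```

The pipeline returns a list of dicts whose `generated_text` field includes the prompt followed by the model's continuation.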