SushantGautam committed on
Commit 4486ebb · 1 Parent(s): 8fdb7ef

update model card README.md

Files changed (1):
  1. README.md +5 -13
README.md CHANGED
@@ -1,8 +1,7 @@
 ---
+license: mit
 tags:
 - generated_from_trainer
-metrics:
-- rouge
 model-index:
 - name: CodeGeneration
   results: []
@@ -13,14 +12,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # CodeGeneration
 
-This model is a fine-tuned version of [SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune](https://huggingface.co/SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 5.2823
-- Rouge1: 0.0
-- Rouge2: 0.0
-- Rougel: 0.0
-- Rougelsum: 0.0
-- Gen Len: 1.0
+This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
 
 ## Model description
 
@@ -40,12 +32,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 64
-- eval_batch_size: 16
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 5.0
+- num_epochs: 3.0
 
 ### Training results
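The updated hyperparameter list corresponds roughly to the following `transformers.TrainingArguments` settings. This is a sketch only: the parameter names (e.g. `per_device_train_batch_size` for the card's `train_batch_size`) are an assumed mapping onto the Trainer API, not part of this commit.

```python
# Card hyperparameters restated with assumed TrainingArguments names.
hparams = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "adam_beta1": 0.9,      # optimizer: Adam with betas=(0.9, 0.999)
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,  # and epsilon=1e-08
    "lr_scheduler_type": "linear",
    "num_train_epochs": 3.0,
}

# Usage (requires `transformers` installed):
#   from transformers import TrainingArguments
#   args = TrainingArguments(output_dir="CodeGeneration", **hparams)
```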