---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-small-Generate_Docstrings_for_Python-Condensed
  results: []
datasets:
- calum/the-stack-smol-python-docstrings
language:
- en
pipeline_tag: text2text-generation
---

# codet5-small-Generate_Docstrings_for_Python-Condensed

This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on the [calum/the-stack-smol-python-docstrings](https://huggingface.co/datasets/calum/the-stack-smol-python-docstrings) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1444
- Rouge1: 0.3828
- Rouge2: 0.2214
- Rougel: 0.3583
- Rougelsum: 0.3661
- Gen Len: 12.6656
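
These ROUGE scores can be reproduced with the `evaluate` library; here is a minimal sketch (the example prediction/reference pair is illustrative, not taken from the evaluation set):

```python
import evaluate

rouge = evaluate.load("rouge")

# Illustrative decoded model outputs and reference docstrings (assumptions)
predictions = ["Return the sum of a and b."]
references = ["Returns the sum of the two arguments."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```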

## Model description

This model is trained to generate a docstring (the output) for a given Python function (the input).
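
Below is a minimal inference sketch using the standard `transformers` API (the checkpoint id is assumed from this card's name and may differ from the actual Hub path; `max_length` is also an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed Hub id; adjust to the actual repository path if it differs
checkpoint = "DunnBC22/codet5-small-Generate_Docstrings_for_Python-Condensed"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Feed the function body in; the model generates a candidate docstring
function_code = "def add(a, b):\n    return a + b"
inputs = tokenizer(function_code, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```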

For more information on how it was created, see the project notebook: https://github.com/DunnBC22/NLP_Projects/blob/main/Generate%20Docstrings/Smol%20Dataset/Code_T5_Project-Small%20Checkpoint.ipynb

For this model, I trimmed some of the longer samples to speed up training on consumer hardware.
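
This kind of trimming can be reproduced with tokenizer-side truncation; here is a sketch under assumed column names and maximum lengths (neither is taken from the training notebook):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")

def preprocess(example, max_source_len=256, max_target_len=128):
    # The "body"/"docstring" column names and both max lengths are assumptions
    model_inputs = tokenizer(
        example["body"], max_length=max_source_len, truncation=True
    )
    labels = tokenizer(
        text_target=example["docstring"], max_length=max_target_len, truncation=True
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```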

## Intended uses & limitations

This model is intended to demonstrate my ability to fine-tune a sequence-to-sequence model for docstring generation; it is a portfolio project rather than a production-ready tool.

## Training and evaluation data

Dataset Source: [calum/the-stack-smol-python-docstrings](https://huggingface.co/datasets/calum/the-stack-smol-python-docstrings) (from the Hugging Face Hub)
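
The dataset can be loaded directly from the Hub; a minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("calum/the-stack-smol-python-docstrings")
print(dataset)  # inspect the available splits and columns before preprocessing
```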

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
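
A configuration sketch that mirrors the list above (`output_dir` and the Seq2Seq-specific flags are assumptions, not taken from the notebook):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="codet5-small-docstrings",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 match the Trainer defaults
    predict_with_generate=True,   # assumed; needed to compute ROUGE during evaluation
    evaluation_strategy="epoch",  # assumed from the per-epoch results below
)
```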

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.9064        | 1.0   | 965  | 2.3096          | 0.3695 | 0.2098 | 0.3464 | 0.3529    | 11.7285 |
| 2.4836        | 2.0   | 1930 | 2.2051          | 0.3800 | 0.2176 | 0.3554 | 0.3635    | 12.9401 |
| 2.3669        | 3.0   | 2895 | 2.1548          | 0.3842 | 0.2219 | 0.3595 | 0.3674    | 13.0029 |
| 2.3254        | 4.0   | 3860 | 2.1444          | 0.3828 | 0.2214 | 0.3583 | 0.3661    | 12.6656 |


### Framework versions

- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.12.1