---
language:
- code
- en
license: apache-2.0
tags:
- commit_message_generation
- code
datasets:
- JetBrains-Research/commit-chronicle
pipeline_tag: text2text-generation
---

# CMG/CMC: RACE (with history)

This is the checkpoint for the [RACE](https://aclanthology.org/2022.emnlp-main.372.pdf) model, fine-tuned for the commit message generation (and/or completion) task as part of the paper "From Commit Message Generation to History-Aware Commit Message Completion", ASE 2023.

## Details

> 🔍 For further details, please refer to:
> * **Paper**: [https://arxiv.org/abs/2308.07655](https://arxiv.org/abs/2308.07655)
> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)


* This model is based on the fine-tuned CodeT5 checkpoint [`JetBrains-Research/cmg-codet5-with-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-with-history) and uses the RACE architecture introduced in 📜 [RACE: Retrieval-Augmented Commit Message Generation](https://aclanthology.org/2022.emnlp-main.372.pdf).
  * Note: this checkpoint requires a custom model class. See [our implementation](https://github.com/JetBrains-Research/commit_message_generation/blob/appendix_cmg/src/model/configurations/utils/race.py) or [the replication package](https://github.com/DeepSoftwareAnalytics/RACE) provided by the RACE authors.
* This model was trained on commit diffs *with* commit message history.
* This model was trained on the CommitChronicle dataset introduced in our study.
* Our hyperparameter setting mostly follows 📜 [RACE: Retrieval-Augmented Commit Message Generation](https://aclanthology.org/2022.emnlp-main.372/). The exact values are provided below:
    
    |       Hyperparameter       |                                                                    Value                                                                    |
    |:--------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------:|
    | Encoder context max length |                                                                     512                                                                     |
    | Decoder context max length |                                                                     512                                                                     |
    | Number of training epochs  |                                                                      1                                                                      |
    |         Batch size         |                                                                     32                                                                      |
    |         Optimizer          |                  [AdamW](https://pytorch.org/docs/1.12/generated/torch.optim.AdamW.html?highlight=adamw#torch.optim.AdamW)                  |
    |           Warmup           | [Linear](https://huggingface.co/docs/transformers/v4.21.3/en/main_classes/optimizer_schedules#transformers.get_linear_schedule_with_warmup) |
    |   Number of warmup steps   |                                                                     100                                                                     |
    |     Peak learning rate     |                                                                   0.00002                                                                   |
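
As an illustrative sketch of how the "diff + message history" input could be assembled within the 512-token encoder context above: the separator token, history ordering, and whitespace tokenization below are assumptions for demonstration, not the exact preprocessing used for this checkpoint (see the linked repository for the real pipeline).

```python
def build_encoder_input(diff: str, history: list[str], max_tokens: int = 512) -> str:
    """Pack commit message history and a diff into one encoder input string.

    Hypothetical sketch: the "[SEP]" separator and plain whitespace split are
    stand-ins for the checkpoint's actual special tokens and subword tokenizer.
    """
    sep = "[SEP]"
    # Earlier commit messages first, then the current diff.
    text = f" {sep} ".join(history + [diff])
    tokens = text.split()  # stand-in for the real subword tokenizer
    # Truncate to respect the encoder context max length (512 tokens).
    return " ".join(tokens[:max_tokens])
```

In the actual pipeline, truncation and separator insertion are handled by the tokenizer configured for the checkpoint; this snippet only illustrates the shape of the input.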


## Available checkpoints

We also released checkpoints for other models fine-tuned as part of our study.

* Models trained *with commit message history*:
  * **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-with-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-with-history)
  * **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-with-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-with-history)
  * **RACE:** 🤗 [`JetBrains-Research/cmg-race-with-history`](https://huggingface.co/JetBrains-Research/cmg-race-with-history) (this model)
* Models trained *without commit message history*:
  * **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-without-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-without-history)
  * **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-without-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-without-history)
  * **RACE:** 🤗 [`JetBrains-Research/cmg-race-without-history`](https://huggingface.co/JetBrains-Research/cmg-race-without-history)

## Citation

```
TODO
```