---
license: apache-2.0
tags:
- codet5
datasets:
- code_x_glue_ct_code_to_text
widget:
- text: 'def pad(tensor, paddings, mode: "CONSTANT", name: nil) _op(:pad, tensor, paddings, mode: mode, name: name) end </s>'
---

# Description

This is a CodeT5-small model fine-tuned on the code summarization subtask of CodeXGLUE (Ruby programming language). Given a function written in Ruby, it generates a docstring describing what the function does.

# Notebook

The notebook that I used to fine-tune CodeT5 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/T5/Fine_tune_CodeT5_for_generating_docstrings_from_Ruby_code.ipynb).

# Usage

Here's how to use this model: 

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

model_name = "nielsr/codet5-small-code-summarization-ruby"
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

code = """
def update_with_file_contents(digest, filename)
      File.open(filename) do |io|
        while (chunk = io.read(1024 * 8))
          digest.update(chunk)
        end
      end
    end
"""

# Tokenize the Ruby function, generate a docstring, and decode it
input_ids = tokenizer(code, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Update the digest with the contents of the given file
```
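
By default, `generate` uses greedy decoding with a fairly short maximum output length, so summaries of longer functions may come out truncated. As a minimal sketch, beam search and a larger output budget can be enabled through the standard generation arguments (the values below are illustrative, not tuned for this checkpoint):

```python
# Optional: beam search with a longer output budget (illustrative settings, not tuned)
outputs = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```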