---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for Self-instruct-starcoder

<!-- Provide a quick summary of what the model is/does. -->

This model is an instruction-tuned version of ⭐️ StarCoder. It was fine-tuned on [Self-instruct-starcoder](https://huggingface.co/datasets/codeparrot/self-instruct-starcoder),
a dataset built by bootstrapping on StarCoder's own generations.

## Uses

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model was fine-tuned with the following prompt template:
```
Question: <instruction>

Answer: <output>
```
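If you are starting from scratch, the model and tokenizer can be loaded with 🤗 Transformers. A minimal sketch, assuming the checkpoint name matches this repository (adjust it if needed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint name is an assumption based on this repository; adjust as needed.
checkpoint = "codeparrot/self-instruct-starcoder"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
```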
Once your model and tokenizer are loaded, you can use the following code to generate an answer to a given instruction:

```python
instruction = "Write a function to compute the GCD between two integers a and b"
# Match the fine-tuning template (note the space after "Question:")
prompt = f"Question: {instruction}\n\nAnswer:"
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
completion = model.generate(input_ids, max_length=200)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.batch_decode(completion[:, input_ids.shape[1]:])[0])
```
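Because the model was trained on concatenated Question/Answer pairs, generation may run past the answer into a new `Question:` block. A minimal post-processing sketch (an assumption, not part of the original training setup) truncates the completion at the first such marker:

```python
def extract_answer(decoded: str) -> str:
    # Keep only the text before any follow-up "Question:" the model generates.
    return decoded.split("Question:")[0].strip()

answer = extract_answer(tokenizer.batch_decode(completion[:, input_ids.shape[1]:])[0])
print(answer)
```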
## More information

For additional information, check:
- [self-instruct-starcoder](https://huggingface.co/codeparrot/self-instruct-starcoder)
- [starcoder](https://huggingface.co/bigcode/starcoder)