mrm8488 committed 88fe318 (parent: 0ea841e): Update README.md
The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the BigCode Project, an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems that enable the synthesis of programs from natural language descriptions as well as from other code snippets.

## Example of usage

```py
import torch
from transformers import BloomTokenizerFast, BloomForCausalLM

device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'mrm8488/bloom-560m-finetuned-the-stack-rust'
revision = '100k'  # latest revision at the moment

# Load the tokenizer and the fine-tuned model at the given revision
tokenizer = BloomTokenizerFast.from_pretrained(ckpt)
model = BloomForCausalLM.from_pretrained(ckpt, revision=revision).to(device)

def complete_code(text):
    inputs = tokenizer(text, return_tensors='pt')
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    # Greedy generation until EOS or the 2048-token limit
    output = model.generate(input_ids, attention_mask=attention_mask, max_length=2048, eos_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0], skip_special_tokens=False)

code_prompt = """
use fastly::{Error, Request, Response};
use serde_json::{json, Value};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    let mut response = req.send("origin_0")?;
"""

complete_code(code_prompt)
```
