senseable committed on
Commit 838ea3d
1 Parent(s): ba8d99f

Update README.md

Files changed (1): README.md +65 -5
README.md CHANGED
---
language:
- "en"
metrics:
- code_eval
library_name: transformers
tags:
- code
- moe
datasets:
- andersonbcdefg/synthetic_retrieval_tasks
- ise-uiuc/Magicoder-Evol-Instruct-110K
license: "apache-2.0"
---

# 33x Coding Model

33x-coder is a Llama-based model available on Hugging Face, designed to assist and augment coding tasks. Trained on a diverse range of programming languages and coding scenarios, it specializes in understanding and generating code, making it a versatile tool for developers looking to streamline their workflow. Whether you're debugging, seeking coding advice, or generating entire scripts, 33x-coder can provide relevant, syntactically correct code snippets and comprehensive programming guidance, helping to reduce development time and improve code quality.

## Importing necessary libraries from transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
```

## Initialize the tokenizer and model
```python
tokenizer = AutoTokenizer.from_pretrained("senseable/33x-coder")
model = AutoModelForCausalLM.from_pretrained("senseable/33x-coder").cuda()
```

## User's request for a prime-checking function in Python
```python
messages = [
    {'role': 'user', 'content': "Write a Python function to check if a number is prime."}
]
```

## Preparing the input for the model by encoding the messages and sending them to the same device as the model
```python
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # Append the assistant prompt so the model starts a reply
    return_tensors="pt",
).to(model.device)
```

## Generating responses from the model with specific parameters for text generation
```python
outputs = model.generate(
    inputs,
    max_new_tokens=512,      # Maximum number of new tokens to generate
    do_sample=False,         # Greedy decoding: always pick the most likely next token
    top_k=50,                # Only applies when do_sample=True; ignored under greedy decoding
    top_p=0.95,              # Only applies when do_sample=True; ignored under greedy decoding
    num_return_sequences=1,  # Number of independently generated sequences per input
    eos_token_id=32021,      # End-of-sequence token id
)
```

## Decoding and printing the generated response
```python
start_index = len(inputs[0])                        # Number of prompt tokens
generated_output_tokens = outputs[0][start_index:]  # Keep only the newly generated tokens
decoded_output = tokenizer.decode(generated_output_tokens, skip_special_tokens=True)
print("Generated Code:\n", decoded_output)
```
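
The slicing above works because `generate` returns the prompt tokens followed by the completion in one sequence. A minimal sketch with made-up token ids (no model download needed; the ids are purely illustrative):

```python
# generate() returns prompt + completion tokens in a single sequence,
# so slicing at the prompt length isolates the newly generated part.
prompt_tokens = [101, 2023, 2003, 102]           # hypothetical prompt ids
full_output = prompt_tokens + [7592, 2088, 999]  # prompt + generated ids
start_index = len(prompt_tokens)
generated = full_output[start_index:]
print(generated)  # [7592, 2088, 999]
```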