Commit d67d129 (parent: dbdac85) by Wit Sense: update readme

Files changed (1): README.md (+50 −0)

---
license: apache-2.0
---

# 33x Coding Model

33x-coder is a powerful Llama-based model, available on Hugging Face, designed to assist with and augment coding tasks. It specializes in understanding and generating code, and is trained on a diverse range of programming languages and coding scenarios, making it a versatile tool for developers looking to streamline their workflow. Whether you're debugging, seeking coding advice, or generating entire scripts, 33x-coder can provide relevant, syntactically correct code snippets and comprehensive programming guidance. Its grasp of programming languages and constructs helps reduce development time and improve code quality.

## Importing the necessary classes from transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
```

## Initializing the tokenizer and model
```python
# trust_remote_code lets the repository's custom model code run locally
tokenizer = AutoTokenizer.from_pretrained("senseable/33x-coder", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("senseable/33x-coder", trust_remote_code=True).cuda()
```
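
A 33B-parameter model generally won't fit on a single GPU in full precision. As a minimal sketch (assuming `accelerate` is installed so that `device_map` is available), you can load the weights in half precision and let Transformers place layers across your devices:
```python
import torch
from transformers import AutoModelForCausalLM

# Sketch: half-precision loading, sharded across available devices (assumes accelerate).
model = AutoModelForCausalLM.from_pretrained(
    "senseable/33x-coder",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # halves memory use relative to float32
    device_map="auto",          # let accelerate place layers on available devices
)
```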

## User's request: a prime-checking function in Python
```python
messages = [
    {'role': 'user', 'content': "Write a Python function to check if a number is prime."}
]
```

## Preparing the input by encoding the messages and moving them to the model's device
```python
# add_generation_prompt=True appends the assistant prefix so the model begins its reply
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
```

## Generating a response from the model
```python
outputs = model.generate(
    inputs,
    max_new_tokens=512,      # maximum number of new tokens to generate
    do_sample=False,         # greedy decoding: always pick the most likely next token
    num_return_sequences=1,  # number of sequences to return for each input in the batch
    eos_token_id=32021,      # end-of-sequence token id for this model
)
```
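
Greedy decoding is deterministic, and sampling parameters such as `top_k` and `top_p` only take effect when `do_sample=True`. If you want more varied completions, enable sampling instead; a minimal sketch (the `temperature` value here is an assumption, not a tuned setting):
```python
# Sketch: sampled generation; top_k / top_p apply only when do_sample=True.
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,       # sample from the token distribution instead of greedy decoding
    temperature=0.7,      # assumed value: <1 sharpens, >1 flattens the distribution
    top_k=50,             # keep only the 50 highest-probability tokens
    top_p=0.95,           # nucleus sampling: keep the top 95% of probability mass
    num_return_sequences=1,
    eos_token_id=32021,
)
```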

## Decoding and printing the generated response
```python
# Skip the prompt tokens so only the newly generated text is decoded
start_index = len(inputs[0])
generated_output_tokens = outputs[0][start_index:]
decoded_output = tokenizer.decode(generated_output_tokens, skip_special_tokens=True)
print("Generated Code:\n", decoded_output)
```
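
For a prompt like the one above, the decoded output should be a small, self-contained function. Something along these lines is typical, though the model's actual completion will vary:
```python
def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Check only odd divisors up to the square root of n
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True
```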