Mr-Vicky-01 committed fa28a0e (1 parent: 59975ef)

Create README.md

Files changed (1):
  1. README.md +72 -0

---
license: mit
datasets:
- flytech/python-codes-25k
language:
- en
pipeline_tag: text2text-generation
tags:
- code
inference:
  parameters:
    max_new_tokens: 100
    do_sample: false
---
# Gemma-2B Fine-Tuned Python Model

## Overview
Gemma-2B Fine-Tuned Python Model is a deep learning model based on the Gemma-2B architecture and fine-tuned specifically for Python programming tasks. It is designed to understand Python code and assist developers by suggesting completions for partial code, offering corrections, and proposing improvements to code quality and efficiency.

## Model Details
- **Model Name**: Gemma-2B Fine-Tuned Python Model
- **Model Type**: Causal language model (decoder-only transformer)
- **Base Model**: Gemma-2B
- **Language**: Python
- **Task**: Python Code Understanding and Assistance

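These details can also be checked programmatically. The snippet below is an illustrative addition (not part of the original card): it loads only the checkpoint's configuration, and the commented values are what Gemma-based checkpoints typically report.

```python
# Inspect the checkpoint's config without downloading the full weights
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Mr-Vicky-01/Gemma-2B-Finetuined-pythonCode")
print(config.model_type)     # e.g. "gemma"
print(config.architectures)  # e.g. ["GemmaForCausalLM"]
```
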
## Example Use Cases
- Code completion: automatically completing code snippets based on partial inputs.
- Syntax correction: identifying and suggesting corrections for syntax errors in Python code.
- Code quality improvement: providing suggestions to enhance code readability, efficiency, and maintainability.
- Debugging assistance: offering insights and suggestions to debug Python code by identifying potential errors or inefficiencies.

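As a minimal sketch of how these use cases map onto the model's prompt format (the template is taken from the Inference section below; `build_prompt` is a hypothetical helper and the instruction strings are only illustrative):

```python
# Hypothetical helper: wraps a task instruction in the same turn-marker
# template that the Inference section below uses.
def build_prompt(instruction: str) -> str:
    return (
        "\n<start_of_turn>user based on given instruction create a solution\n\n"
        f"here are the instruction {instruction}\n"
        "<end_of_turn>\n<start_of_turn>model\n"
    )

# Illustrative instructions, one per use case above
print(build_prompt("Complete this function:\ndef fibonacci(n):"))           # code completion
print(build_prompt("Fix the syntax error in: print('hello'"))               # syntax correction
print(build_prompt("Make this loop more readable: [x for x in range(9)]"))  # quality improvement
print(build_prompt("Debug this: xs = []; print(xs[0])"))                    # debugging
```
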
## How to Use
1. **Install the required packages** (the model is loaded with Hugging Face Transformers):
```bash
pip install -q -U transformers==4.38.0
pip install torch
```

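As a quick sanity check (an added sketch, not part of the original instructions), you can confirm that the pinned version is the one your environment imports:

```python
# Verify the installed transformers version matches the pin above
import transformers
print(transformers.__version__)  # expect "4.38.0"
```
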
## Inference
1. **Load the model and query it in a notebook**:
```python
# Load the tokenizer and model directly from the Hugging Face Hub
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Mr-Vicky-01/Gemma-2B-Finetuined-pythonCode")
model = AutoModelForCausalLM.from_pretrained("Mr-Vicky-01/Gemma-2B-Finetuined-pythonCode")

# Wrap the user's query in the turn-marker template the model was fine-tuned on
query = input('Enter a query: ')
prompt = f"""
<start_of_turn>user based on given instruction create a solution\n\nhere are the instruction {query}
<end_of_turn>\n<start_of_turn>model
"""
encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True).input_ids

# Move the model and inputs to the GPU if one is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = encodeds.to(device)

# Greedy decoding; increase max_new_tokens if answers get cut off
generated_ids = model.generate(inputs, max_new_tokens=1000, do_sample=False, pad_token_id=tokenizer.eos_token_id)

# Keep only the first two segments of the decoded text, split on <end_of_turn>
ans = ''
for i in tokenizer.decode(generated_ids[0], skip_special_tokens=True).split('<end_of_turn>')[:2]:
    ans += i

# Extract only the model's answer: the text after the first "model" marker
model_answer = ans.split("model")[1].strip()
print(model_answer)
```
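
A common alternative to the string-splitting above (a sketch that assumes the `inputs` and `generated_ids` tensors from the snippet) is to decode only the newly generated tokens, which skips the echoed prompt entirely:

```python
# Decode only the tokens produced after the prompt, so no marker
# parsing is needed to strip the echoed input.
new_tokens = generated_ids[0][inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True).strip())
```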