Mr-Vicky-01 committed
Commit ecf8dcc
1 Parent(s): 6ab11d3

Update README.md

Files changed (1):
  1. README.md +84 -0
README.md CHANGED
---
license: mit
datasets:
- ShashiVish/cover-letter-dataset
language:
- en
widget:
- example_title: Cover Letter
  text: >-
    <start_of_turn>user Generate Cover Letter for Role: ML Engineer,
    Preferred Qualifications: strong AI related skills,
    Hiring Company: Google, User Name: Vicky,
    Past Working Experience: Internship in CodeClause,
    Current Working Experience: Fresher,
    Skillsets: Machine Learning, Deep Learning, AI, SQL, NLP,
    Qualifications: Bachelor of commerce with computer application
    <end_of_turn>\n<start_of_turn>model
tags:
- code
inference:
  parameters:
    max_new_tokens: 250
    do_sample: false
pipeline_tag: text-generation
---
# Gemma-2B Fine-Tuned Cover Letter Model

## Overview
Gemma-2B Fine-Tuned Cover Letter Model is a causal language model based on the Gemma-2B architecture and fine-tuned on the ShashiVish/cover-letter-dataset. Given a structured prompt describing the target role, the hiring company, and the candidate's experience, skills, and qualifications, the model generates a tailored cover letter.

## Model Details
- **Model Name**: Gemma-2B Fine-Tuned Cover Letter Model
- **Model Type**: Causal Language Model
- **Base Model**: Gemma-2B
- **Language**: English
- **Task**: Cover Letter Generation

## Example Use Cases
- Generating a complete, tailored cover letter from structured inputs such as the role, hiring company, and candidate profile.
- Drafting first-pass cover letters for entry-level candidates ("freshers") with little or no prior work experience.
- Highlighting how a candidate's skillsets and qualifications match a job's preferred qualifications.
- Producing a starting draft that applicants can then review, edit, and personalize; see the pipeline sketch after this list.
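
As a quick way to try these use cases, the model can also be driven through the `transformers` pipeline API. The following is a minimal sketch under our own assumptions: it mirrors the widget prompt in the frontmatter above and is not an official snippet from this repo.

```python
# Minimal sketch (our assumption, not from the repo): drive the model via the pipeline API
from transformers import pipeline

generator = pipeline("text-generation", model="Mr-Vicky-01/Gemma2B-Finetuned-CoverLetter")

# Same Gemma chat-style prompt format as the widget example above
prompt = (
    "<start_of_turn>user Generate Cover Letter for Role: ML Engineer, "
    "Preferred Qualifications: strong AI related skills, "
    "Hiring Company: Google, User Name: Vicky, "
    "Past Working Experience: Internship in CodeClause, "
    "Current Working Experience: Fresher, "
    "Skillsets: Machine Learning, Deep Learning, AI, SQL, NLP, "
    "Qualifications: Bachelor of commerce with computer application "
    "<end_of_turn>\n<start_of_turn>model"
)

# Greedy decoding, matching the inference parameters in the frontmatter
result = generator(prompt, max_new_tokens=250, do_sample=False, return_full_text=False)
print(result[0]["generated_text"])
```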

## How to Use
1. **Install the required packages**:
```bash
pip install -q -U transformers==4.38.0
pip install torch
```
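
A quick sanity check (our addition, not part of the original instructions) to confirm the installed versions and whether a GPU is available:

```python
# Sanity check (our addition): verify library versions and device availability
import torch
import transformers

print(transformers.__version__)   # expected: 4.38.0
print(torch.cuda.is_available())  # True if a CUDA GPU can be used
```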

## Inference
1. **Using the model in a notebook**:
```python
# Load the tokenizer and model directly from the Hugging Face Hub
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Mr-Vicky-01/Gemma2B-Finetuned-CoverLetter")
model = AutoModelForCausalLM.from_pretrained("Mr-Vicky-01/Gemma2B-Finetuned-CoverLetter")

# Structured fields expected by the fine-tuning prompt format
job_title = "ML Engineer"
preferred_qualification = "strong AI related skills"
hiring_company_name = "Google"
user_name = "Vicky"
past_working_experience = "N/A"
current_working_experience = "Fresher"
skillset = "Machine Learning, Deep Learning, AI, SQL, NLP"
qualification = "Bachelor of commerce with computer application"

# Assemble the Gemma chat-style prompt used during fine-tuning
prompt = f"<start_of_turn>user Generate Cover Letter for Role: {job_title}, \
Preferred Qualifications: {preferred_qualification}, \
Hiring Company: {hiring_company_name}, User Name: {user_name}, \
Past Working Experience: {past_working_experience}, Current Working Experience: {current_working_experience}, \
Skillsets: {skillset}, Qualifications: {qualification} <end_of_turn>\n<start_of_turn>model"

encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True).input_ids

# Move the model and inputs to a GPU if one is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = encodeds.to(device)

# Greedy decoding; increase max_new_tokens if the letter gets cut off
generated_ids = model.generate(inputs, max_new_tokens=250, do_sample=False, pad_token_id=tokenizer.eos_token_id)

# Keep only the text up to the first <end_of_turn> marker
ans = ''
for i in tokenizer.decode(generated_ids[0], skip_special_tokens=True).split('<end_of_turn>')[:2]:
    ans += i

# Extract only the model's answer (the text after the "model" turn marker)
model_answer = ans.split("model")[1].strip()
print(model_answer)
```
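
For repeated use, the same steps can be wrapped in a small helper. This is a minimal sketch under our own assumptions; the `generate_cover_letter` function and its field names are illustrative, not part of this repo:

```python
# Convenience wrapper (our sketch): build the prompt, generate, and extract the letter
def generate_cover_letter(model, tokenizer, fields: dict, max_new_tokens: int = 250) -> str:
    prompt = (
        "<start_of_turn>user Generate Cover Letter for "
        f"Role: {fields['role']}, "
        f"Preferred Qualifications: {fields['preferred_qualifications']}, "
        f"Hiring Company: {fields['company']}, User Name: {fields['name']}, "
        f"Past Working Experience: {fields['past_experience']}, "
        f"Current Working Experience: {fields['current_experience']}, "
        f"Skillsets: {fields['skills']}, "
        f"Qualifications: {fields['qualifications']} <end_of_turn>\n<start_of_turn>model"
    )
    inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    generated = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False,
                               pad_token_id=tokenizer.eos_token_id)
    decoded = tokenizer.decode(generated[0], skip_special_tokens=True)
    # Everything after the first "model" turn marker is the generated letter
    return decoded.split("model", 1)[1].strip()

letter = generate_cover_letter(model, tokenizer, {
    "role": "ML Engineer",
    "preferred_qualifications": "strong AI related skills",
    "company": "Google",
    "name": "Vicky",
    "past_experience": "N/A",
    "current_experience": "Fresher",
    "skills": "Machine Learning, Deep Learning, AI, SQL, NLP",
    "qualifications": "Bachelor of commerce with computer application",
})
print(letter)
```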