akshaybharadwaj96 committed on
Commit a235503 · verified · 1 Parent(s): 4a12398

Update README.md

Files changed (1):
  1. README.md +63 -117

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: Salesforce/codegen-350M-mono
 library_name: peft
- license: apache-2.0
 datasets:
 - google/code_x_glue_ct_code_to_text
 language:
@@ -20,15 +20,20 @@ Generate python code from natural language prompts.
 ### Model Description

 <!-- Provide a longer summary of what this model is. -->

- - **Developed by:** [More Information Needed]
-
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

 <!-- ### Model Sources [optional]

@@ -46,37 +51,62 @@ Generate python code from natural language prompts.

 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

 ### Out-of-Scope Use

 <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

 ## Bias, Risks, and Limitations

 <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

 ## How to Get Started with the Model

 Use the code below to get started with the model.

- [More Information Needed]

 ## Training Details

@@ -84,94 +114,27 @@ Use the code below to get started with the model.

 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

 ## Evaluation

 <!-- This section describes the evaluation protocols and provides the results. -->

- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
 #### Metrics

 <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]

  ## Citation [optional]

@@ -179,29 +142,12 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

 **BibTeX:**

- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
- ### Framework versions
- -->
 - PEFT 0.7.2.dev0

 ---
 base_model: Salesforce/codegen-350M-mono
 library_name: peft
+ license: mit
 datasets:
 - google/code_x_glue_ct_code_to_text
 language:

 ### Model Description

 <!-- Provide a longer summary of what this model is. -->
+ This model is a fine-tuned variant of Salesforce/codegen-350M-mono,
+ specialized for natural language to code generation in Python.
+ It takes natural language instructions (e.g., “check MySQL database connection”)
+ and generates the corresponding Python code snippet.
+ The model was trained on a curated text-to-code dataset containing diverse
+ programming instructions and function-level examples to improve semantic and syntactic accuracy.

+ - **Developed by:** Akshay Bharadwaj
+ - **Model type:** Transformer-based Causal Language Model
+ - **Language(s) (NLP):** English (Prompts) and Python (Code Outputs)
+ - **License:** MIT License
+ - **Finetuned from model [optional]:** Salesforce/codegen-350M-mono

 <!-- ### Model Sources [optional]


 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+ The model can be used for:
+
+ * Translating natural language prompts into functional Python code.
+ * Assisting in code autocompletion or boilerplate generation.
+ * Supporting educational and prototyping environments.

+ ### Downstream Use

 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

+ It can be integrated into:
+
+ * Developer tools (IDE plugins or assistants).
+ * Chatbots for code assistance or educational coding tutors.
+ * LLM pipelines for multi-step reasoning or coding workflows.

 ### Out-of-Scope Use

 <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

+ * Generating production-level code without human review.
+ * Security-critical or real-time applications (e.g., code execution automation).
+ * Generation of malicious or unsafe code.

 ## Bias, Risks, and Limitations

 <!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ * The model may produce incomplete or syntactically incorrect code for ambiguous prompts.
+ * It can misinterpret vague natural language queries (semantic drift).
+ * Potential bias toward common Python idioms and limited handling of rare libraries or APIs.

 ## How to Get Started with the Model

 Use the code below to get started with the model.

+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_id = "akshayb/nl-code-gen-python"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ prompt = "write a python function to check mysql database connection"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=256)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
 
 ## Training Details

 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ The dataset contains paired natural language descriptions
+ and Python function implementations, collected and cleaned
+ from public code repositories and text-to-code benchmarks (e.g., CodeXGLUE).
+ Preprocessing involved deduplication, tokenization, and removal of incomplete code samples.

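The deduplication and incomplete-sample filtering described above can be sketched in plain Python. This is a hedged illustration, not the actual pipeline: the `clean_pairs` helper and the toy samples are hypothetical, and "incomplete" is approximated here as code that fails to parse with the standard library's `ast` module.

```python
import ast

def clean_pairs(pairs):
    """Deduplicate (description, code) pairs and drop samples whose
    code does not parse as valid Python (a rough proxy for 'incomplete')."""
    seen = set()
    cleaned = []
    for desc, code in pairs:
        key = (desc.strip(), code.strip())
        if key in seen:
            continue  # exact-duplicate removal
        seen.add(key)
        try:
            ast.parse(code)  # reject syntactically broken samples
        except SyntaxError:
            continue
        cleaned.append((desc, code))
    return cleaned

pairs = [
    ("add two numbers", "def add(a, b):\n    return a + b"),
    ("add two numbers", "def add(a, b):\n    return a + b"),  # duplicate
    ("broken sample", "def broken(:"),                        # incomplete code
]
print(clean_pairs(pairs))  # only the first pair survives
```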
  ## Evaluation

 <!-- This section describes the evaluation protocols and provides the results. -->

  #### Metrics

 <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ For comparison between the base model and the fine-tuned model, we use the following metrics:
+
+ | Metric | Focus | Strength |
+ | ---------------- | ------------------------------ | ----------------------------------------- |
+ | **BLEU** | Token-level similarity | Measures fluency and lexical accuracy |
+ | **CodeBLEU** | Lexical + syntactic + semantic | Captures holistic code quality |
+ | **Exact Match** | String equality | Strict correctness measure |
+ | **Syntax Match** | AST structure | Validates syntactic and logical integrity |

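For intuition, the two strictest metrics in the table above can be approximated in a few lines with Python's built-in `ast` module. This is a simplified sketch under that assumption, not the CodeXGLUE/CodeBLEU implementation:

```python
import ast

def exact_match(reference: str, candidate: str) -> bool:
    # Strict string equality, trimming only surrounding whitespace.
    return reference.strip() == candidate.strip()

def syntax_match(reference: str, candidate: str) -> bool:
    # Compare abstract syntax trees, so formatting differences are ignored.
    try:
        return ast.dump(ast.parse(reference)) == ast.dump(ast.parse(candidate))
    except SyntaxError:
        return False

ref = "def square(x):\n    return x * x"
cand = "def square(x):\n    return x*x"  # same AST, different spacing
print(exact_match(ref, cand))   # False: the strings differ
print(syntax_match(ref, cand))  # True: the parse trees are identical
```

BLEU and CodeBLEU additionally score n-gram overlap and data-flow similarity, which this sketch does not attempt.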
  ## Citation [optional]

 **BibTeX:**

+ @misc{akshay2025nlcodegen,
+   title={Natural Language to Code Generation (Fine-tuned CodeGen-350M)},
+   author={Akshay Bharadwaj},
+   year={2025},
+   howpublished={\url{https://huggingface.co/akshayb/nl-code-gen-python}}
+ }

 - PEFT 0.7.2.dev0