01GangaPutraBheeshma committed on
Commit 27ff0cb
1 Parent(s): 76e485d

Update README.md

Files changed (1)
  1. README.md +15 -3
README.md CHANGED
@@ -25,12 +25,24 @@ Here's a brief description of my project.
  - [License](#license)
  - [Acknowledgements](#acknowledgements)

- ## Introduction
+ # Introduction

  colab_code_generator_FT_code_gen_UT is an instruction-following large language model trained on Google Colab Pro with a T4 GPU and fine-tuned from 'Salesforce/codegen-350M-mono', which is licensed for commercial use. Code Generator_UT is trained on ~19k instruction/response fine-tuning records from 'iamtarun/python_code_instructions_18k_alpaca'.

- ### Loading the fine-tuned Code Generator
+ # Getting Started
+
+ Loading the fine-tuned Code Generator
  ```
  from peft import AutoPeftModelForCausalLM
  test_model_UT = AutoPeftModelForCausalLM.from_pretrained("01GangaPutraBheeshma/colab_code_generator_FT_code_gen_UT")
- test_tokenizer_UT = AutoTokenizer.from_pretrained("01GangaPutraBheeshma/colab_code_generator_FT_code_gen_UT")```
+ test_tokenizer_UT = AutoTokenizer.from_pretrained("01GangaPutraBheeshma/colab_code_generator_FT_code_gen_UT")
+ ```
+
+ # Documentation
+
+ This model was fine-tuned using LoRA because I wanted the adapted weights to remain lightweight while still being able to solve other types of Python problems (ones that were not included in the training data).
+ Setting lora_alpha to 16 gives the LoRA updates a relatively strong scaling, which acts much like a regularization strength. The specific value of this hyperparameter usually requires experimentation and tuning to find the right balance between preventing overfitting and letting the model capture the important patterns in the data.
+
+ The lora_dropout rate is 0.1, which randomly dropped 10% of the LoRA activations during training. This helps to prevent overfitting by introducing a degree of randomness and redundancy into the network.
+ 'r' in LoRA is the rank of the low-rank update matrices, which determines how many dimensions (features) the adaptation can represent. Keeping this rank small proved advantageous for fine-tuning, where reducing the complexity of the update while preserving information is paramount.
+
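For reference, here is a minimal usage sketch of the loading code added in the diff above, extended with a generation call. It assumes only the standard peft/transformers APIs (AutoPeftModelForCausalLM, AutoTokenizer, generate); the prompt string and generation settings are illustrative and not taken from this commit.

```
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "01GangaPutraBheeshma/colab_code_generator_FT_code_gen_UT"

# Load the fine-tuned adapter together with its base model, plus the tokenizer.
test_model_UT = AutoPeftModelForCausalLM.from_pretrained(repo_id)
test_tokenizer_UT = AutoTokenizer.from_pretrained(repo_id)

# Illustrative prompt; the exact instruction template used during training is not shown here.
prompt = "Write a Python function that returns the factorial of a number."
inputs = test_tokenizer_UT(prompt, return_tensors="pt")
outputs = test_model_UT.generate(**inputs, max_new_tokens=128)
print(test_tokenizer_UT.decode(outputs[0], skip_special_tokens=True))
```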
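The LoRA hyperparameters described in the Documentation section could be expressed roughly as the configuration below. lora_alpha=16 and lora_dropout=0.1 come from the text; the rank r=16 and target_modules=["qkv_proj"] are illustrative assumptions, not values confirmed by this commit.

```
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model named in the README.
base_model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

lora_config = LoraConfig(
    r=16,                         # rank of the low-rank update matrices (illustrative value)
    lora_alpha=16,                # scaling factor mentioned in the description
    lora_dropout=0.1,             # 10% dropout on the LoRA layers, as described
    target_modules=["qkv_proj"],  # assumed attention projection for CodeGen; adjust if needed
    task_type="CAUSAL_LM",
)

# Wrap the base model so that only the LoRA parameters are trainable.
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```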