---
license: apache-2.0
datasets:
  - iamtarun/python_code_instructions_18k_alpaca
language:
  - en
library_name: peft
pipeline_tag: text2text-generation
tags:
  - code
---

This model card gives a brief description of the project.

## Table of Contents

- [Introduction](#introduction)
- [Loading the fine-tuned Code Generator](#loading-the-fine-tuned-code-generator)

## Introduction

colab_code_generator_FT_code_gen_UT is an instruction-following large language model fine-tuned from 'Salesforce/codegen-350M-mono' (a base model licensed for commercial use) and trained on Google Colab Pro with a T4 GPU. Code Generator_UT was fine-tuned on ~19k instruction/response records from 'iamtarun/python_code_instructions_18k_alpaca'.
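
The sketch below shows how such a LoRA fine-tune could be set up with `datasets`, `peft`, and `transformers`. It is a minimal, hypothetical example: the LoRA rank/alpha and other settings are placeholders, not the values used to train this model.

```python
# Minimal sketch of a LoRA fine-tune of the base model on the instruction dataset.
# Hyperparameters are illustrative only, not the ones used for this model.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

dataset = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

base_model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")

# Wrap the base model with a LoRA adapter (rank/alpha values are placeholders).
lora_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```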

## Loading the fine-tuned Code Generator

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the LoRA adapter (merged onto the base model by peft) and its tokenizer.
test_model_UT = AutoPeftModelForCausalLM.from_pretrained("01GangaPutraBheeshma/colab_code_generator_FT_code_gen_UT")
test_tokenizer_UT = AutoTokenizer.from_pretrained("01GangaPutraBheeshma/colab_code_generator_FT_code_gen_UT")
```
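
Once the model and tokenizer are loaded, generation works like any causal LM. The prompt and generation settings below are illustrative assumptions, not the exact template used during fine-tuning.

```python
# Hypothetical usage: generate Python code from a natural-language instruction.
prompt = "Write a Python function that returns the factorial of a number."
inputs = test_tokenizer_UT(prompt, return_tensors="pt")

# max_new_tokens is an illustrative choice; adjust as needed.
output_ids = test_model_UT.generate(**inputs, max_new_tokens=128)
print(test_tokenizer_UT.decode(output_ids[0], skip_special_tokens=True))
```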