This repository contains a fine-tune of an LLM (trained with PEFT); performance tests have not yet been run.

Base model: tiiuae/falcon-7b
Dataset: nampdn-ai/tiny-codes
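
The dataset can be pulled straight from the Hub. A minimal sketch using the `datasets` library (accepting the dataset's terms on the Hub may be required first):

```python
from datasets import load_dataset

# nampdn-ai/tiny-codes pairs natural-language prompts with code responses.
dataset = load_dataset("nampdn-ai/tiny-codes", split="train")
print(dataset[0])
```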

Before training:

Instruction: "Generate a python function to find number of CPU cores"
Response: "def num_cpu_cores(): num_cores = (int)(os.cpu_count() * 2) return"

After training:

Instruction: "Generate a python function to find number of CPU cores"
Response: "def num_cpu_cores(): num_cores = 0 for i in range(0, os.cpu_count()):"

(Outputs shown as captured; both generations are truncated.)
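
The samples above can be reproduced roughly as follows. The exact prompt template used for these captures is not documented in this card, so the format below is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b"  # swap in the fine-tuned checkpoint for the "after" output
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # needed on older transformers releases where Falcon was custom code
)

# Assumed prompt format; the card does not specify the template used.
prompt = 'Instruction: "Generate a python function to find number of CPU cores"\nResponse:'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```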

Training procedure

The following bitsandbytes quantization config was used during training:

  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: bfloat16
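
This maps directly onto transformers' `BitsAndBytesConfig`; a sketch of reconstructing it (the `llm_int8_*` fields above are library defaults and only affect 8-bit loading, which is disabled here):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Reconstruction of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
```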

Framework versions

  • PEFT 0.5.0.dev0
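
Assuming this repository hosts a PEFT adapter rather than full model weights, it can be attached to the base model with `peft`; `your-username/this-repo` below is a placeholder for this repository's id:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
# Placeholder repo id; replace with this adapter repository.
model = PeftModel.from_pretrained(base, "your-username/this-repo")
model.eval()
```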