---
license: apache-2.0
datasets:
  - ajibawa-2023/Code-290k-ShareGPT
  - m-a-p/Code-Feedback
language:
  - en
tags:
  - code
  - Python
  - C++
  - Rust
  - Ruby
  - Sql
  - R
  - Julia
---

# Code-Jamba-v0.1

This model was trained on my Code-290k-ShareGPT dataset together with Code-Feedback, and is finetuned from Jamba-v0.1. It is very good at code generation in a variety of languages such as Python, Java, JavaScript, Go, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell, etc. The model also generates a detailed explanation of the logic behind each piece of code. It uses the ChatML prompt format.

## Training

The entire dataset was trained on 2 x H100 94GB GPUs. Training for 3 epochs took 162 hours. Axolotl along with the DeepSpeed codebase was used for training. The base model, Jamba-v0.1, is by AI21 Labs.

This is a QLoRA model. Links to quantized models will be added very soon.
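For reference, below is a minimal QLoRA setup sketch in plain Transformers/PEFT. The actual run used Axolotl with DeepSpeed, so the rank, alpha, and target-module values here are illustrative assumptions, not the real training config.

```python
# Illustrative QLoRA setup sketch (not the actual Axolotl/DeepSpeed config).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA: quantize the frozen base weights to 4-bit NF4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "ai21labs/Jamba-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Train low-rank adapters on top of the quantized base; values are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```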

## GPTQ, GGUF, AWQ & Exllama

GPTQ: TBA

GGUF: TBA

AWQ: TBA

Exllama v2: TBA

## Example Prompt

This model uses ChatML prompt format.

```
<|im_start|>system
You are a Helpful Assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements.
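Below is a minimal inference sketch with Transformers. It assumes the tokenizer ships a ChatML chat template; if it does not, build the `<|im_start|>`/`<|im_end|>` prompt string manually as shown above. The generation settings are illustrative.

```python
# Minimal inference sketch; assumes a ChatML chat template in the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Code-Jamba-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a Helpful Assistant."},
    {"role": "user", "content": "Write a Python function that checks if a number is prime."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```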

I want to say a special thanks to the open-source community for helping & guiding me to better understand AI and model development.

Thank you for your love & support.

## Example Output

Coming soon!