|
---
library_name: peft
base_model: ai21labs/Jamba-v0.1
license: apache-2.0
datasets:
- mhenrichsen/alpaca_2k_test
tags:
- axolotl
---
|
|
|
# Jambalpaca-v0.1 |
|
|
|
This is a test run of fine-tuning [ai21labs/Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1) on a single A100 80GB GPU using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) and a custom version of [LazyAxolotl](https://colab.research.google.com/drive/1TsDKNo2riwVmU55gjuBgB1AXVtRRfRHW).
|
I used [mhenrichsen/alpaca_2k_test](https://huggingface.co/datasets/mhenrichsen/alpaca_2k_test) as the training dataset.
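
Since the dataset follows the Alpaca instruction format, prompts at inference time should use the same template. Below is a minimal, untested usage sketch; it assumes a recent `transformers` release with Jamba support plus `bitsandbytes`, and the repo id is a placeholder you should replace with this card's actual repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder: replace with the actual repo id of this model card.
model_id = "<your-username>/Jambalpaca-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # keeps the 52B-param model on one 80GB GPU
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Alpaca-style prompt template, matching the fine-tuning data.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what state space models are.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```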
|
I had to quantize the base model to 8-bit precision because of how merging a model with its adapter works. Oddly enough, this wasn't necessary in the notebook version I created.
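
For reference, a merge along these lines looks roughly like the sketch below: the base model is loaded in 8-bit via `bitsandbytes`, the LoRA adapter is attached with `peft`, and the adapter weights are folded into the base. Treat it as an approximation rather than the exact script used; merging into a quantized base also requires a reasonably recent `peft` release, and the adapter repo id is a placeholder.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "ai21labs/Jamba-v0.1"
# Placeholder: replace with the repo id the adapter was pushed to.
adapter_id = "<your-username>/Jambalpaca-v0.1-adapter"

# Quantize the base model to 8-bit so base + adapter fit on a single 80GB GPU.
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the LoRA adapter, then fold its weights into the quantized base.
model = PeftModel.from_pretrained(base, adapter_id)
merged = model.merge_and_unload()

merged.save_pretrained("Jambalpaca-v0.1")
AutoTokenizer.from_pretrained(base_id).save_pretrained("Jambalpaca-v0.1")
```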
|
I also pushed the adapter so that I, or someone else, can do a better merge later.
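
If you want to redo the merge without the 8-bit step, the adapter can be applied to the base model in bf16 instead; note that the full-precision weights of the 52B-parameter base need well over 100 GB of memory, so this is easiest to do in CPU RAM. Again, the adapter repo id below is a placeholder.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Loads on CPU by default; expect >100 GB of RAM for the bf16 weights.
base = AutoModelForCausalLM.from_pretrained(
    "ai21labs/Jamba-v0.1",
    torch_dtype=torch.bfloat16,
)

# Placeholder: replace with the repo id the adapter was pushed to.
model = PeftModel.from_pretrained(base, "<your-username>/Jambalpaca-v0.1-adapter")

# Merge the LoRA weights into the full-precision base and save the result.
merged = model.merge_and_unload()
merged.save_pretrained("Jambalpaca-v0.1-bf16")
```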
|
|
|
Let me know if you're interested; I can give you access to Jamba's version of LazyAxolotl.