afrideva committed
Commit
e930ecd
1 Parent(s): 693309f

Upload README.md with huggingface_hub

Files changed (1): README.md (+93, -0)

---
base_model: habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1
datasets:
- OpenAssistant/oasst_top1_2023-08-25
inference: false
language:
- en
license: apache-2.0
model_creator: habanoz
model_name: TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF

Quantized GGUF model files for [TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1](https://huggingface.co/habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1) from [habanoz](https://huggingface.co/habanoz).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.fp16.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q2_k.gguf) | q2_k | 483.12 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q3_k_m.gguf) | q3_k_m | 550.82 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf) | q4_k_m | 668.79 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q5_k_m.gguf) | q5_k_m | 783.02 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q6_k.gguf) | q6_k | 904.39 MB |
| [tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF/resolve/main/tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q8_0.gguf) | q8_0 | 1.17 GB |

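To try one of these files, download it from this repo and load it with any GGUF-capable runtime. Here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (one runtime option among several; the q4_k_m file, context size, and prompt below are example choices, not recommendations from the original card):

```python
# pip install huggingface-hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files from this repo (q4_k_m as an example).
model_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-GGUF",
    filename="tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.q4_k_m.gguf",
)

# Load the model; the fine-tune used a 1024-token context (--model_max_len 1024 below).
llm = Llama(model_path=model_path, n_ctx=1024)

out = llm("What is GGUF?", max_tokens=128)
print(out["choices"][0]["text"])
```

As a rule of thumb, the lower-bit quants (q2_k, q3_k_m) trade output quality for a smaller memory footprint, while q8_0 and fp16 stay closest to the original weights.
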
## Original Model Card:

TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T fine-tuned on the OpenAssistant/oasst_top1_2023-08-25 dataset.

Trained for 5 epochs using QLoRA. The adapter was then merged into the base model.

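For reference, merging a QLoRA adapter into its base model is typically done with PEFT's `merge_and_unload`. This is only an illustrative sketch (the adapter and output paths are hypothetical), not the code the author used:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then apply the trained LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T"
)
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")  # hypothetical path

# Fold the LoRA weights into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")  # hypothetical path
```
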
SFT code:
https://github.com/habanoz/qlora.git

Command used:
```bash
accelerate launch $BASE_DIR/qlora/train.py \
  --model_name_or_path $BASE_MODEL \
  --working_dir $BASE_DIR/$OUTPUT_NAME-checkpoints \
  --output_dir $BASE_DIR/$OUTPUT_NAME-peft \
  --merged_output_dir $BASE_DIR/$OUTPUT_NAME \
  --final_output_dir $BASE_DIR/$OUTPUT_NAME-final \
  --num_train_epochs 5 \
  --logging_steps 1 \
  --save_strategy steps \
  --save_steps 75 \
  --save_total_limit 2 \
  --data_seed 11422 \
  --evaluation_strategy steps \
  --per_device_eval_batch_size 4 \
  --eval_dataset_size 0.01 \
  --eval_steps 75 \
  --max_new_tokens 1024 \
  --dataloader_num_workers 3 \
  --logging_strategy steps \
  --do_train \
  --do_eval \
  --lora_r 64 \
  --lora_alpha 16 \
  --lora_modules all \
  --bits 4 \
  --double_quant \
  --quant_type nf4 \
  --lr_scheduler_type constant \
  --dataset oasst1-top1 \
  --dataset_format oasst1 \
  --model_max_len 1024 \
  --per_device_train_batch_size 4 \
  --gradient_accumulation_steps 4 \
  --learning_rate 1e-5 \
  --adam_beta2 0.999 \
  --max_grad_norm 0.3 \
  --lora_dropout 0.0 \
  --weight_decay 0.0 \
  --seed 11422 \
  --gradient_checkpointing \
  --use_flash_attention_2 \
  --ddp_find_unused_parameters False
```
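The quantization and LoRA flags above correspond to standard bitsandbytes/PEFT settings. A rough sketch of the equivalent configuration (an illustration under assumptions, not code from the repo: the compute dtype and the exact module list behind `--lora_modules all` are guesses for a Llama-style model):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# --bits 4 --double_quant --quant_type nf4: 4-bit NF4 base weights with double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: compute dtype is not stated in the command
)

# --lora_r 64 --lora_alpha 16 --lora_dropout 0.0; "--lora_modules all" is assumed to
# target every linear projection in the Llama blocks.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```

Note that with `--per_device_train_batch_size 4` and `--gradient_accumulation_steps 4`, the effective batch size works out to 16 examples per device.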