---
license: agpl-3.0
language:
- en
base_model: NewEden/Qwen-1.5B-Claude
pipeline_tag: text-generation
datasets:
- NewEden/CivitAI-SD-Prompts
tags:
- chat
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/SD-Prompter-1.5B-V0.1-GGUF
This is a quantized version of [Delta-Vector/SD-Prompter-1.5B-V0.1](https://huggingface.co/Delta-Vector/SD-Prompter-1.5B-V0.1), created with llama.cpp.

# Original Model Card

This is the first in a line of models dedicated to creating Stable Diffusion prompts when given a character's appearance. It has been fine-tuned on top of
[NewEden/Qwen-1.5B-Claude](https://huggingface.co/NewEden/Qwen-1.5B-Claude).

## Prompting

The model has been tuned with the Alpaca format. A typical input looks like this:
```
### Instruction:
Create a prompt for Stable Diffusion based on the information below.
### Input:
Rae has short dark brown hair and brown eyes. She is commonly seen wearing her Royal Academy uniform, which consists of a red jacket with gold lines, a white ruffled necktie, a red bow tie with an attached blue gem, and a long black skirt with white lines. Along with her uniform, she wears black leggings and brown shoes.
### Response:
```

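The three-section layout above can be assembled programmatically; a minimal sketch (the helper name and the short example description are illustrative, not part of the model card):

```python
# Build an Alpaca-style prompt for SD-Prompter from a character description.
INSTRUCTION = "Create a prompt for Stable Diffusion based on the information below."


def build_alpaca_prompt(appearance: str) -> str:
    """Return the Instruction/Input/Response prompt the model was tuned on."""
    return (
        "### Instruction:\n"
        f"{INSTRUCTION}\n"
        "### Input:\n"
        f"{appearance}\n"
        "### Response:\n"
    )


prompt = build_alpaca_prompt("Rae has short dark brown hair and brown eyes.")
print(prompt)
```

The trailing `### Response:` header is left open so the model completes it with the generated Stable Diffusion prompt.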
## System Prompting

I would highly recommend using the following system prompt for this model:

```
Create a prompt for Stable Diffusion based on the information below.
```

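Since this repo ships GGUF weights, the model can be run locally with `llama-cpp-python`; a minimal sketch (the quant filename is an assumption — point it at whichever `.gguf` file you downloaded):

```python
# Run the GGUF quant locally with llama-cpp-python (pip install llama-cpp-python).
import os

# Assumed filename for illustration; substitute your downloaded quant.
MODEL_PATH = "SD-Prompter-1.5B-V0.1.Q4_K_M.gguf"

# Alpaca-format prompt, using the recommended instruction from above.
PROMPT = (
    "### Instruction:\n"
    "Create a prompt for Stable Diffusion based on the information below.\n"
    "### Input:\n"
    "Rae has short dark brown hair and brown eyes.\n"
    "### Response:\n"
)

if os.path.exists(MODEL_PATH):
    from llama_cpp import Llama

    # sequence_len in the training config was 2048, so match the context size.
    llm = Llama(model_path=MODEL_PATH, n_ctx=2048, verbose=False)
    out = llm(PROMPT, max_tokens=256, stop=["###"])
    print(out["choices"][0]["text"].strip())
```

Stopping on `###` keeps generation from running into a new Alpaca section header.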
## Axolotl Config

<details><summary>See Axolotl Trainer config</summary>

```yaml
base_model: NewEden/Qwen-1.5B-Claude
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: civit-slop-combined.jsonl
    type: alpaca
    conversation: mpt-30b-instruct

chat_template: alpaca

dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/sd-prompter
sequence_len: 2048
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: SDprompt-qwen
wandb_entity:
wandb_watch:
wandb_name: qwen1.5b-2
wandb_log_model:

gradient_accumulation_steps: 64
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.05
evals_per_epoch: 4
saves_per_epoch: 1
debug:
#deepspeed: deepspeed_configs/zero2.json
#deepspeed: /training/axolotl/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.0
#fsdp:
#fsdp_config:
#  fsdp_limit_all_gathers: true
#  fsdp_sync_module_states: true
#  fsdp_offload_params: true
#  fsdp_use_orig_params: false
#  fsdp_cpu_ram_efficient_loading: true
#  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
#  fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
#  fsdp_state_dict_type: FULL_STATE_DICT
special_tokens:
```
</details><br>

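For reference, the effective global batch size implied by this config can be worked out from the accumulation and micro-batch settings (the GPU count comes from the Training section and is an assumption for this calculation):

```python
# Effective batch size implied by the Axolotl config values above.
gradient_accumulation_steps = 64
micro_batch_size = 2
num_gpus = 2  # assumption: the 2 x RTX 6000 setup described under Training

effective_batch = gradient_accumulation_steps * micro_batch_size * num_gpus
print(effective_batch)  # 256 sequences per optimizer step
```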
## Credits

Thank you to [Kubernetes Bad](https://huggingface.co/kubernetes-bad)

## Training
The training was done for 2 epochs. I used two [RTX 6000](https://www.nvidia.com/en-us/design-visualization/rtx-6000/) GPUs, graciously provided by [Kubernetes Bad](https://huggingface.co/kubernetes-bad), for the full-parameter fine-tuning of the model.