---
library_name: transformers
license: cc-by-nc-4.0
tags:
- creative-writing
- creative-writer
- multiplicative-lora
---

An experimental model, fine-tuned using the ["multiplicative-LoRA" method](#the-multiplicative-lora-method) on [c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01).

Other experimental models, based off `creative-writer-v0.1-alfa-35b`, that attempt to encourage more diverse/creative text generation:

- [creative-writer-v0.1-bravo-35b](https://huggingface.co/jukofyork/creative-writer-v0.1-bravo-35b) - Scaled the pre-softmax logits by `1.1` during training (and then reset after training).
- **[CURRENTLY UPLOADING...]** [creative-writer-v0.1-charlie-35b](https://huggingface.co/jukofyork/creative-writer-v0.1-charlie-35b) - Scaled the pre-softmax logits by `0.9` during training (and didn't reset after training).
- **[CURRENTLY TRAINING...]** [creative-writer-v0.1-delta-35b](https://huggingface.co/jukofyork/creative-writer-v0.1-delta-35b) - Trained using [Focal Loss](https://arxiv.org/abs/1708.02002) with `gamma=2` (instead of stock [Cross Entropy Loss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html)).

---

# Usage

- Use the normal `command-r` chat template: `'<|START_OF_TURN_TOKEN|><|USER_TOKEN|>prompt<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>reply...'`.
- I suggest using **no system prompt** with this (and all other `Cohere` models!), as it writes *much* better without one IMO...
- You **must use some small value of min-p** with this (and the original `c4ai-command-r-v01` model!), or the model will output gibberish! A minimal inference example is included at the end of this card.

---

# The "multiplicative-LoRA" method

Uses:

`h = (I + lora_B @ lora_A) @ tensor @ x = tensor @ x + lora_B @ lora_A @ tensor @ x`

or equivalently:

`h = tensor @ x`

`h' = h + lora_B @ lora_A @ h`

instead of the normal "additive-LoRA" method of:

`h = (tensor + lora_B @ lora_A) @ x = tensor @ x + lora_B @ lora_A @ x`

I only apply this to the `down_proj` matrices, and skip the last layer's `down_proj` matrix in the same way as [creative-writing-control-vectors-v3.0](https://huggingface.co/jukofyork/creative-writing-control-vectors-v3.0).
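To make the difference concrete, here is a minimal PyTorch sketch of the forward pass (toy tensor sizes and variable names chosen purely for illustration, not the real training code or model dimensions):

```python
import torch

torch.manual_seed(0)

# Toy sizes for illustration only (not the real model dimensions).
hidden, intermediate, r = 16, 64, 4

W = torch.randn(hidden, intermediate) * 0.02   # frozen down_proj weight (intermediate -> hidden)
lora_A = torch.randn(r, hidden) * 0.02         # note: acts on the *output* space of W
lora_B = torch.randn(hidden, r) * 0.02
x = torch.randn(intermediate)                  # input to down_proj

# Normal "additive-LoRA" would instead use a lora_A of shape (r, intermediate) and compute:
#   h_additive = W @ x + lora_B @ (lora_A_in @ x)

# "multiplicative-LoRA": the adapter acts on the output h = W @ x.
h = W @ x
h_prime = h + lora_B @ (lora_A @ h)

# Equivalent "merged" form: (I + lora_B @ lora_A) @ W, applied to x.
W_merged = (torch.eye(hidden) + lora_B @ lora_A) @ W
assert torch.allclose(h_prime, W_merged @ x, atol=1e-5)
```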
This currently requires hacking [PEFT's layer.py](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py) like so:

```python
# lora_A now acts on the *output* of the base layer, so it takes out_features instead of in_features:
#self.lora_A[adapter_name] = nn.Linear(self.in_features, r, bias=False)
self.lora_A[adapter_name] = nn.Linear(self.out_features, r, bias=False)
self.lora_B[adapter_name] = nn.Linear(r, self.out_features, bias=False)
```

and:

```python
# Feed the base layer's output (result) through the adapter instead of the layer's input (x):
#x = x.to(lora_A.weight.dtype)
temp = result.to(lora_A.weight.dtype)

if not self.use_dora[active_adapter]:
    #result = result + lora_B(lora_A(dropout(x))) * scaling
    result = result + lora_B(lora_A(dropout(temp))) * scaling
```

Then to merge you need to hack [qlora-pipe's merge_lora.py](https://github.com/tdrussell/qlora-pipe/blob/main/merge_lora.py) to use:

```python
# tensor <- (I + scale * lora_B @ lora_A) @ tensor, computed in float32 for accuracy:
old_type = tensor.dtype
tensor = tensor.to(torch.float32)
tensor += scale * lora_B.to(torch.float32) @ lora_A.to(torch.float32) @ tensor
tensor = tensor.to(old_type)
```

---

# The rationale behind the "multiplicative-LoRA" method and the link to control-vectors

There are actually 3 existing "multiplicative-LoRA" methods in [PEFT/tuners](https://github.com/huggingface/peft/tree/main/src/peft/tuners):

- https://github.com/huggingface/peft/tree/main/src/peft/tuners/oft (https://arxiv.org/abs/2306.07280)
- https://github.com/huggingface/peft/tree/main/src/peft/tuners/boft (https://arxiv.org/abs/2311.06243)
- https://github.com/huggingface/peft/tree/main/src/peft/tuners/hra (https://arxiv.org/abs/2405.17484)

but all of these deliberately maintain [orthogonality](https://en.wikipedia.org/wiki/Orthogonal_matrix), and thus are more restrictive in the types of transformations they can perform (i.e., [Rotations](https://en.wikipedia.org/wiki/Rotation) and/or [Improper Rotations](https://en.wikipedia.org/wiki/Improper_rotation) only; with no scaling and/or shear possible...).

For example, these can't perform the orthogonal projection performed by [abliteration](https://www.lesswrong.com/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction):

`h' = h - v @ v^T @ h`

whereas the general (non-orthogonal) "multiplicative-LoRA" method can do this by choosing to set `u = -v` like so:

`h' = h + u @ v^T @ h`

In general, the way to think about these (non-orthogonal) "multiplicative-LoRAs" is as a kind of "conditional control-vector":

- The vectors in `lora_A` look for certain directions and, via the dot product, generate (signed) weighting factors that measure their similarity with the output of the `down_proj` transformation.
- The vectors in `lora_B` then get added to the hidden state / residual stream, scaled by these weighting factors.

So instead of having just a single vector that we add (in essence we add a bias term and create an [Affine transformation](https://en.wikipedia.org/wiki/Affine_transformation)), we now have many different control vectors that can be added (in `lora_B`), based on how well they match another set of "directional detection vectors" (in `lora_A`) - see the short numerical sketch below.

---

# Training

- Took just under 4 days using dual-A6000 GPUs connected via NVLink, using [qlora-pipe](https://github.com/tdrussell/qlora-pipe).
- The dataset consisted of approximately 1000 pre-2012 books converted to Markdown (~180M tokens), using the same `dataset_combination_mode = 'concatenate'` as [Llama-3-70B-Instruct-Storywriter](https://huggingface.co/tdrussell/Llama-3-70B-Instruct-Storywriter).
- I used the same `sequence_len = 8192` and `batch_size_tokens = 8192` as [Llama-3-70B-Instruct-Storywriter](https://huggingface.co/tdrussell/Llama-3-70B-Instruct-Storywriter).
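Returning to the rationale above, here is the small numerical sketch showing that a rank-1 non-orthogonal "multiplicative-LoRA" with `u = -v` reproduces the orthogonal projection used by abliteration (toy dimensions and random values only, not real model weights):

```python
import torch

torch.manual_seed(0)
hidden = 16                                # toy size only

v = torch.randn(hidden)
v = v / v.norm()                           # unit "direction to remove" (column of lora_A^T)
u = -v                                     # choose the matching column of lora_B as u = -v

h = torch.randn(hidden)                    # some hidden state (output of down_proj)

# Rank-1 multiplicative-LoRA update: h' = h + u @ v^T @ h
h_prime = h + u * (v @ h)

# Abliteration-style orthogonal projection: h' = h - v @ v^T @ h
h_proj = h - v * (v @ h)

assert torch.allclose(h_prime, h_proj, atol=1e-5)
assert abs(v @ h_prime) < 1e-5             # no component left along v
```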
## `config_creative_writer.toml`

```toml
# Paths
model = '/mnt/data/c4ai-command-r-v01'
output_dir = '/mnt/data/creative-writer-v0.1-alfa-35b'

# Lora configuration
lora_rank = 64
lora_alpha = 64
lora_dropout = 0.0
target_modules = ['down_proj']
layers_to_transform = '0:38'  # skip last layer

# Optimization configuration
epochs = 1
lr_scheduler = 'constant'
warmup_steps = 100
batch_size_tokens = 8192

# Performance settings
pipeline_stages = 2
logging_steps = 1
eval_steps = 100
save_steps = 100
checkpoint_every_n_minutes = 60
eval_before_first_step = true
model_weight_dtype = 'bfloat16'
lora_weight_dtype = 'bfloat16'
keep_states = 3
group_by_length = true
activation_checkpointing = 'unsloth'

# Resume a prior run
resume_from_checkpoint = false

# Dataset configuration
dataset_combination_mode = 'concatenate'
eval_gradient_accumulation_steps = 1

[optimizer]
type = 'adamw_kahan'
lr = 5e-6
beta1 = 0.9
beta2 = 0.99
weight_decay = 0.01

[[datasets]]
name = 'books'
dataset_type = 'textfile'
dataset_path = '/mnt/data/datasets/ebooks/*.txt'
sequence_len = 8192
eval_size = 0.01
```

## `ds_creative_writer.json`

```json
{
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 16,
    "gradient_clipping": 1.0,
    "steps_per_print": 1
}
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65995c45539c808e84c38bf1/DcGilkmIa7wBQJIhCWbHP.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65995c45539c808e84c38bf1/TnsnTqtAd9S3JE8VacxN6.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65995c45539c808e84c38bf1/Ly3Y4TK1S2TsTCLEslzZ2.png)
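Finally, as referenced in the Usage section above, a minimal inference sketch using `transformers` (the repo id, prompt, and sampling values here are illustrative assumptions; `min_p` sampling needs a reasonably recent `transformers` version):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id and settings, for illustration only.
model_id = "jukofyork/creative-writer-v0.1-alfa-35b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Normal command-r chat template, no system prompt.
messages = [{"role": "user", "content": "Write the opening of a slow-burn mystery set in a fishing village."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    min_p=0.05,  # some small value of min-p is required, or the model will output gibberish
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```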