---
library_name: peft
license: wtfpl
language:
- en
pipeline_tag: text-generation
---

## Model description

`togethercomputer/RedPajama-INCITE-Base-3B-v1` fine-tuned for paraphrasing and for rewriting an input sentence in a given tone (casual, professional, or witty).

Sample training data:

```json
{
  "original": "If you have any further questions, feel free to ask.",
  "casual": "Got more questions? Feel free to ask away. I'm here to help!",
  "professional": "Should you have any additional inquiries, please don't hesitate to ask.",
  "witty": "Curiosity is always in style! If you have more mysteries to solve, I'm all ears!",
  "paraphrase": "Don't hesitate to ask if you have any more questions."
}
```

## Training params

```json
{
  "batch_size": 8,
  "eval_ratio": 0.1,
  "eval_steps": 100,
  "gradient_accumulation_steps": 1,
  "learning_rate": 0.0001,
  "logging_steps": 100,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "lora_r": 16,
  "max_length": 128,
  "model_name": "togethercomputer/RedPajama-INCITE-Base-3B-v1",
  "num_train_epochs": 3,
  "seed": 10,
  "task_type": "paraphrase_tone",
  "use_aim": true
}
```

## Training curve

![train_eval_loss](RedPajama-INCITE-Base-3B-v1-paraphrase-tone.jpeg)

## Training procedure

The following `bitsandbytes` quantization config was used during training (see the sketch under "Reproducing the setup" below):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.4.0.dev0
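## Reproducing the setup

The quantization settings listed under "Training procedure" map onto a `transformers` `BitsAndBytesConfig`. The snippet below is a reconstruction from those listed values, not the original training script:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with nested (double) quantization and bfloat16
# compute, mirroring the values listed under "Training procedure".
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```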
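Similarly, the `lora_*` values under "Training params" suggest a `peft` `LoraConfig` along these lines. Note that `target_modules` and `bias` are assumptions not stated on this card; `query_key_value` is the usual fused attention projection in the GPT-NeoX architecture that the base model uses:

```python
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    r=16,                                # lora_r
    lora_alpha=32,                       # lora_alpha
    lora_dropout=0.05,                   # lora_dropout
    target_modules=["query_key_value"],  # assumption: GPT-NeoX fused QKV projection
    bias="none",                         # assumption: not stated on the card
    task_type=TaskType.CAUSAL_LM,
)
```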
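## How to use

A minimal inference sketch. `ADAPTER_ID` is a placeholder for this adapter's repo id, and the prompt format is an assumption; match whatever serialization was actually used during fine-tuning:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "togethercomputer/RedPajama-INCITE-Base-3B-v1"
ADAPTER_ID = "<this-adapter-repo-id>"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # load the LoRA adapter
model.eval()

# Assumed prompt format: key/value lines mirroring the training data fields.
prompt = "original: If you have any further questions, feel free to ask.\ncasual:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```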