---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
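For reference, the same settings can be expressed with `transformers.BitsAndBytesConfig`. This is a minimal sketch mirroring the list above; note that the `bnb_4bit_*` fields are inert here since `load_in_4bit` is False:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```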
### Framework versions
- PEFT 0.5.0.dev0
## Model details
- Fine-tuned on the Guanaco Unchained dataset: [CheshireAI/guanaco-unchained](https://huggingface.co/datasets/CheshireAI/guanaco-unchained#guanaco-unchained)
- Trained using the QLoRA method (https://github.com/artidoro/qlora)
- The dataset is fairly small but unlocks interesting results on a 7B model.
- Trained on an Nvidia L40 in about 5 hours.
- To try it, load the TheBloke/Llama-2-7B-fp16 base model and apply jlevin/guanaco-unchained-llama-2-7b as a LoRA adapter, as in the sketch below.
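A minimal sketch of that setup with `transformers` and `peft` (the prompt template follows the original Guanaco `### Human:`/`### Assistant:` convention; the example prompt and generation settings are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "TheBloke/Llama-2-7B-fp16"
adapter_id = "jlevin/guanaco-unchained-llama-2-7b"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Apply the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "### Human: Explain LoRA in one sentence.### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```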
"Guanaco Unchained" is a refined and optimized version of the original Guanaco dataset. It is specifically curated to maintain high-quality data while minimizing alignment issues.
The main transformations that were applied to the dataset include:
Language Filtering: To ensure quality control, most of the non-English prompts were removed.
AI Identification Removal: Any references suggesting the model's identity as AI, such as "OpenAssistant", "As an AI language model", and similar prompts, were removed. This adjustment allows for a more human-like interaction.
Content Refining: Responses that indicated refusal, moralizing, or strong subjectivity were either removed or modified to increase accuracy and reduce bias.
Context Trimming: In scenarios where a human response lacked a corresponding model answer, the last human response was removed to maintain consistency in the instruct pair format.
Apologetic Language Reduction: The dataset was also revised to remove or modify apologetic language in the responses, thereby ensuring assertiveness and precision.
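The curation scripts themselves are not published, so the following is only a rough sketch of what the "AI Identification Removal" filter might have looked like; the marker list, the source dataset id, and the `text` field name are assumptions:

```python
from datasets import load_dataset

# Hypothetical marker list; the actual curation likely used a broader set of phrases.
AI_MARKERS = ("OpenAssistant", "As an AI language model")

def keep_example(example: dict) -> bool:
    """Keep only rows whose text does not reveal an AI identity."""
    return not any(marker in example["text"] for marker in AI_MARKERS)

guanaco = load_dataset("timdettmers/openassistant-guanaco", split="train")
filtered = guanaco.filter(keep_example)
```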
### Dataset Information
The primary source of the data is the Guanaco dataset. The processing steps outlined above were then applied to remove unnecessary or ambiguous elements, resulting in the "Guanaco Unchained" dataset. The structure remains consistent with the original Guanaco dataset: pairs of human prompts and assistant responses, as in the snippet below.
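To inspect the pair format directly, one can peek at a record (the split name and field layout are assumptions; verify against the dataset card):

```python
from datasets import load_dataset

ds = load_dataset("CheshireAI/guanaco-unchained", split="train")
print(ds[0])  # typically a single "text" field with "### Human: ...### Assistant: ..." turns
```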
### Known Limitations
The dataset was manually curated and may therefore contain unintentional errors, oversights, or inconsistencies. Despite the concerted effort to remove all instances of AI identification, some undetected instances may remain. The dataset's multilingual capability is also reduced by the removal of non-English prompts.
### Additional Information
The "Guanaco Unchained" dataset is well suited to applications that aim for more human-like interaction with minimal AI identifiers and alignment issues. It is particularly useful in contexts where direct, assertive, and high-quality English responses are desired.