---
license: apache-2.0
tags:
- moe
- mergekit
- Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged
- Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged
---
# llama-3-8b-Instruct-moe-ref-blood-pres_Unsloth_correct_v2
llama-3-8b-Instruct-moe-ref-blood-pres_Unsloth_correct_v2 is a Mixture of Experts (MoE) model built with mergekit from the following experts:
* [Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged](https://huggingface.co/Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged)
  - **Positive Prompts**: Expert on Referral Orders extraction.
  - **Negative Prompts**: NOT good for Bloodwork Orders; NOT good for Prescription Orders.
* [Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged](https://huggingface.co/Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged)
  - **Positive Prompts**: Expert on Prescription Orders extraction.
  - **Negative Prompts**: NOT good for Bloodwork Orders; NOT good for Referral Orders.
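The merge should be reproducible from the configuration below with mergekit's MoE script, e.g. `mergekit-moe config.yaml ./merged-model` (assuming mergekit is installed, e.g. via `pip install mergekit`).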
## 🧩 Configuration
```yaml
base_model: unsloth/llama-3-8b-Instruct
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged
    positive_prompts:
      - Expert on Referral Orders extraction.
    negative_prompts:
      - NOT good for Bloodwork Orders.
      - NOT good for Prescription Orders.
  - source_model: Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged
    positive_prompts:
      - Expert on Prescription Orders extraction.
    negative_prompts:
      - NOT good for Bloodwork Orders.
      - NOT good for Referal Orders.
```
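## 💻 Usage

A minimal inference sketch with 🤗 Transformers. The repository id, chat formatting, and example prompt are illustrative assumptions rather than details from the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the merged model is published under this repo id.
model_id = "Ahmad0067/llama-3-8b-Instruct-moe-ref-blood-pres_Unsloth_correct_v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

# Llama-3-Instruct chat formatting via the tokenizer's chat template;
# the prompt below is a hypothetical example of a referral-extraction query.
messages = [
    {"role": "user", "content": "Extract the referral order from this note: ..."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```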