|
---
license: apache-2.0
---
|
# DrKlaus-7B |
|
|
|
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/E0UeNsU-zKRAwySfeCWf8.webp) |
|
|
|
DrKlaus-7B is an SFT (supervised fine-tuning) model made with [AutoSloth](https://colab.research.google.com/drive/1Zo0sVEb2lqdsUm9dy2PTzGySxdF9CNkc#scrollTo=MmLkhAjzYyJ4) by [macadeliccc](https://huggingface.co/macadeliccc).
|
|
|
## Process |
|
|
|
- Original Model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) |
|
- Dataset: [medalpaca/medical_meadow_wikidoc_patient_information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information)
|
|
|
- Learning Rate: 3e-05 |
|
- Steps: 80 |
|
- Warmup Steps: 8 |
|
- Per Device Train Batch Size: 24 |
|
- Gradient Accumulation Steps: 12
|
- Optimizer: adamw_8bit |
|
- Max Sequence Length: 1024 |
|
- Max Prompt Length: 512 |
|
- Max Length: 1024 |
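
These settings give an effective batch size of 24 × 12 = 288 sequences per optimizer step. Below is a minimal sketch of how a run like this could be reproduced with the Unsloth + TRL stack that AutoSloth builds on; the LoRA settings and the prompt-formatting function are assumptions, not the exact AutoSloth configuration.

```python
# Hypothetical reproduction sketch, assuming the Unsloth + TRL stack.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model at the max sequence length listed above
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Mistral-7B-Instruct-v0.2",
    max_seq_length=1024,
    load_in_4bit=True,  # assumption: 4-bit loading, typical for Unsloth SFT
)

# Assumption: LoRA adapters on the usual attention/MLP projections;
# the rank is not stated on this card.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset(
    "medalpaca/medical_meadow_wikidoc_patient_information", split="train"
)

# Assumption: concatenate the dataset's instruction/input/output columns
# into one training text per example; AutoSloth's exact template may differ.
def formatting_func(examples):
    texts = []
    for instruction, inp, output in zip(
        examples["instruction"], examples["input"], examples["output"]
    ):
        texts.append(f"{instruction}\n{inp}\n{output}")
    return texts

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    formatting_func=formatting_func,
    max_seq_length=1024,
    args=TrainingArguments(
        learning_rate=3e-5,
        max_steps=80,
        warmup_steps=8,
        per_device_train_batch_size=24,
        gradient_accumulation_steps=12,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```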
|
|
|
## 💻 Usage |
|
|
|
```python
!pip install -qU transformers

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "macadeliccc/DrKlaus-7B"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
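
Since the base model is Mistral-7B-Instruct-v0.2, wrapping the prompt in the Mistral chat template may yield better responses; this is an assumption carried over from the base model rather than anything documented for this fine-tune.

```python
# Optional: format the prompt with the Mistral instruct chat template
# (assumed to apply to this fine-tune because of its base model).
messages = [{"role": "user", "content": "What are the symptoms of anemia?"}]
chat_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = generator(chat_prompt, max_new_tokens=256)
print(outputs[0]["generated_text"])
```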
|
|
|
<div align="center"> |
|
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png" height="50" align="center" /> |
|
</div> |