Model Card for Emu
The model has been fine-tuned with extra data in these domains:
- Bitcoin
- Nostr
- Health
- Permaculture
- Phytochemicals
- Alternative medicine
- Herbs
- Nutrition
I am having success with the Llama 3 chat template: <|begin_of_text|><|start_header_id|> ...
You can check the GGUF metadata to see the exact chat template. It was not changed during fine-tuning, so the standard Llama 3 format still applies.
The GGUF includes the eot token (<|eot_id|>) needed to stop generation properly.
Model Details
- Fine-tuned by: someone
- Fine-tuned from model: https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct
Uses
Ask it any question; compared to general-purpose models, it may know more about the topics listed above. You can chat with it using llama.cpp, or from a Python script via the llama-cpp-python package.
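A minimal sketch of chatting with the model through llama-cpp-python. The model path, context size, and generation settings below are assumptions for illustration, not values from this card:

```python
def build_messages(sys_msg, history, question):
    # Assemble the OpenAI-style message list that
    # llama-cpp-python's create_chat_completion() expects.
    msgs = [{"role": "system", "content": sys_msg}]
    msgs.extend(history)  # prior turns: [{"role": ..., "content": ...}, ...]
    msgs.append({"role": "user", "content": question})
    return msgs

msgs = build_messages(
    "You are a helpful assistant.",
    [],
    "What are common uses of chamomile?",
)

# With a local copy of the GGUF (the path here is hypothetical):
# from llama_cpp import Llama
# llm = Llama(model_path="emu.gguf", n_ctx=8192)
# out = llm.create_chat_completion(messages=msgs, max_tokens=256)
# print(out["choices"][0]["message"]["content"])
```

Because the GGUF carries the chat template, create_chat_completion can apply the Llama 3 format for you; the manual prompt construction below is only needed if you call the lower-level completion API.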
This is how you generate the prompt and stop strings:

```python
prompt = f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{sys_msg}<|eot_id|>"
i = 0
while i < len(msgs):  # msgs holds alternating user/assistant turns
    prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{msgs[i]['content']}<|eot_id|>"
    prompt += f"<|start_header_id|>assistant<|end_header_id|>\n\n{msgs[i + 1]['content']}<|eot_id|>"
    i += 2
prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{q}<|eot_id|>"
prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"

stops = ['<|eot_id|>', '<|end_of_text|>', '<|im_end|>', '<|start_header_id|>']
```
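The loop above can be wrapped in a small helper so it is easy to reuse and test; the function name is my own, but the token layout is exactly the card's:

```python
def build_prompt(sys_msg, msgs, q):
    # Build a Llama 3 format prompt: system turn, then alternating
    # user/assistant history pairs, then the new user question,
    # ending with an open assistant header for the model to complete.
    prompt = f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{sys_msg}<|eot_id|>"
    for i in range(0, len(msgs), 2):
        prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{msgs[i]['content']}<|eot_id|>"
        prompt += f"<|start_header_id|>assistant<|end_header_id|>\n\n{msgs[i + 1]['content']}<|eot_id|>"
    prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{q}<|eot_id|>"
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt
```

Note that every closed turn ends with <|eot_id|>, while the final assistant header is left open so generation starts there and halts at the first stop string.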
Warning
Users (both direct and downstream) should be aware of the risks, biases, and limitations of this model. The trainer, developer, and uploader of this model assume no liability. Use it at your own risk.
Training Details
Training Data
Some data I curated from various sources.
Training Procedure
The model was trained with LLaMA-Factory on 2x RTX 3090 GPUs, using the FSDP + QLoRA (fsdp_qlora) technique.
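For orientation, a representative LLaMA-Factory SFT config for a QLoRA run might look like the sketch below. Every value here is an assumption drawn from typical LLaMA-Factory examples, not the author's actual setup; the dataset name and output path are placeholders:

```yaml
# Hypothetical LLaMA-Factory config sketch; not the author's actual settings.
model_name_or_path: meta-llama/Meta-Llama-3-70B-Instruct
stage: sft
do_train: true
finetuning_type: lora
quantization_bit: 4        # QLoRA: 4-bit base weights
template: llama3
dataset: custom_dataset    # placeholder name
cutoff_len: 2048
output_dir: saves/emu      # placeholder path
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```

With FSDP + QLoRA, the sharding itself is typically configured separately through an accelerate FSDP config passed at launch time.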