---
license: apache-2.0
---

# Model Description

Tulpar-7b is a Mistral-7b-based model trained by HyperbeeAI. It was trained on a filtered and preprocessed instruction-finetuning dataset that combines GPT-4-generated data with curated datasets such as Airoboros and Platypus.

# Example Usage

Loading the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HyperbeeAI/Tulpar-7b-v0")
# device_map="auto" requires the accelerate package; it places the model
# on the available GPU(s) and falls back to CPU otherwise.
model = AutoModelForCausalLM.from_pretrained("HyperbeeAI/Tulpar-7b-v0", device_map="auto")
```

You can run inference with either of the following prompt formats:

```python
input_text = "What is deep learning?"
prompt = f"### User: {input_text}\n\n### Assistant:\n"
# Move the inputs to the same device as the model before generating.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```

```python
input_text = "What is deep learning?"
prompt = f"Question: {input_text}\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```

or use the ChatML format (see the appendix at the end of this card).

# Ethical Considerations and Limitations

Tulpar is a technology with potential risks and limitations. The model is finetuned only in English, so scenarios involving other languages are not covered. As HyperbeeAI, we neither guarantee ethical, accurate, unbiased, or objective responses nor endorse the model's outputs. Before deploying this model, you are advised to run safety tests for your own use case.
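
# Appendix: ChatML Prompt Example

The ChatML format mentioned in the usage section is not shown in the snippets above. The sketch below is a minimal illustration that assumes the standard ChatML markers (`<|im_start|>`, `<|im_end|>`) and reuses the `tokenizer` and `model` objects loaded earlier; verify against the model's tokenizer configuration that these markers match its training setup.

```python
# Minimal ChatML-style prompt, assuming the standard ChatML special tokens;
# check tokenizer.special_tokens_map to confirm the model actually uses them.
input_text = "What is deep learning?"
prompt = f"<|im_start|>user\n{input_text}<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```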