---
license: apache-2.0
base_model: Felladrin/Minueza-32M-Base
pipeline_tag: text-generation
language:
  - en
datasets:
  - databricks/databricks-dolly-15k
  - Felladrin/ChatML-databricks-dolly-15k
  - euclaise/reddit-instruct-curated
  - Felladrin/ChatML-reddit-instruct-curated
  - THUDM/webglm-qa
  - Felladrin/ChatML-WebGLM-QA
  - starfishmedical/webGPT_x_dolly
  - Felladrin/ChatML-webGPT_x_dolly
  - LDJnr/Capybara
  - Felladrin/ChatML-Capybara
  - Open-Orca/SlimOrca-Dedup
  - Felladrin/ChatML-SlimOrca-Dedup
  - HuggingFaceH4/ultrachat_200k
  - Felladrin/ChatML-ultrachat_200k
  - nvidia/HelpSteer
  - Felladrin/ChatML-HelpSteer
  - sablo/oasst2_curated
  - Felladrin/ChatML-oasst2_curated
  - CohereForAI/aya_dataset
  - Felladrin/ChatML-aya_dataset
  - argilla/distilabel-capybara-dpo-7k-binarized
  - Felladrin/ChatML-distilabel-capybara-dpo-7k-binarized
  - argilla/distilabel-intel-orca-dpo-pairs
  - Felladrin/ChatML-distilabel-intel-orca-dpo-pairs
  - argilla/ultrafeedback-binarized-preferences
  - Felladrin/ChatML-ultrafeedback-binarized-preferences
  - sablo/oasst2_dpo_pairs_en
  - Felladrin/ChatML-oasst2_dpo_pairs_en
  - NeuralNovel/Neural-DPO
  - Felladrin/ChatML-Neural-DPO
widget:
  - text: |-
      <|im_start|>system
      You are a career counselor. The user will provide you with an individual looking for guidance in their professional life, and your task is to assist them in determining what careers they are most suited for based on their skills, interests, and experience. You should also conduct research into the various options available, explain the job market trends in different industries, and advise on which qualifications would be beneficial for pursuing particular fields.<|im_end|>
      <|im_start|>user
      Heya!<|im_end|>
      <|im_start|>assistant
      Hi! How may I help you?<|im_end|>
      <|im_start|>user
      I am interested in developing a career in software engineering. What would you recommend me to do?<|im_end|>
      <|im_start|>assistant
  - text: |-
      <|im_start|>system
      You are a highly knowledgeable assistant.
      Help the user as much as you can.<|im_end|>
      <|im_start|>user
      How can I become a healthier person?<|im_end|>
      <|im_start|>assistant
  - text: |-
      <|im_start|>system
      You are a helpful assistant who gives creative responses.<|im_end|>
      <|im_start|>user
      Write the specs of a game about mages in a fantasy world.<|im_end|>
      <|im_start|>assistant
  - text: |-
      <|im_start|>system
      You are a helpful assistant who answers the user's questions with details.<|im_end|>
      <|im_start|>user
      Tell me about the pros and cons of social media.<|im_end|>
      <|im_start|>assistant
  - text: |-
      <|im_start|>system
      You are a helpful assistant who answers the user's questions with details and curiosity.<|im_end|>
      <|im_start|>user
      What are some potential applications for quantum computing?<|im_end|>
      <|im_start|>assistant
inference:
  parameters:
    max_new_tokens: 250
    do_sample: true
    temperature: 0.7
    top_p: 0.5
    top_k: 35
    repetition_penalty: 1.176
---

# Minueza-32M-Chat: A chat model with 32 million parameters

- Base model: [Felladrin/Minueza-32M-Base](https://huggingface.co/Felladrin/Minueza-32M-Base)
- Datasets used during SFT:
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-databricks-dolly-15k)] [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-reddit-instruct-curated)] [euclaise/reddit-instruct-curated](https://huggingface.co/datasets/euclaise/reddit-instruct-curated)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-WebGLM-QA)] [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-webGPT_x_dolly)] [starfishmedical/webGPT_x_dolly](https://huggingface.co/datasets/starfishmedical/webGPT_x_dolly)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-Capybara)] [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-SlimOrca-Dedup)] [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-ultrachat_200k)] [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-HelpSteer)] [nvidia/HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-oasst2_curated)] [sablo/oasst2_curated](https://huggingface.co/datasets/sablo/oasst2_curated)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-aya_dataset)] [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset)
- Datasets used during DPO:
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-distilabel-capybara-dpo-7k-binarized)] [argilla/distilabel-capybara-dpo-7k-binarized](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-distilabel-intel-orca-dpo-pairs)] [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-ultrafeedback-binarized-preferences)] [argilla/ultrafeedback-binarized-preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-oasst2_dpo_pairs_en)] [sablo/oasst2_dpo_pairs_en](https://huggingface.co/datasets/sablo/oasst2_dpo_pairs_en)
  - [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-Neural-DPO)] [NeuralNovel/Neural-DPO](https://huggingface.co/datasets/NeuralNovel/Neural-DPO)
- Availability in other ML formats:
  - GGUF: [Felladrin/gguf-Minueza-32M-Chat](https://huggingface.co/Felladrin/gguf-Minueza-32M-Chat) (a minimal loading sketch is included at the end of this card)
  - ONNX: [Felladrin/onnx-Minueza-32M-Chat](https://huggingface.co/Felladrin/onnx-Minueza-32M-Chat)

## Recommended Prompt Format

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```

## Recommended Inference Parameters

```yml
do_sample: true
temperature: 0.7
top_p: 0.5
top_k: 35
repetition_penalty: 1.176
```

## Usage Example

```python
from transformers import pipeline

generate = pipeline("text-generation", "Felladrin/Minueza-32M-Chat")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant who answers the user's questions with details and curiosity.",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

# Render the messages into the ChatML prompt format expected by the model.
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate a completion using the recommended sampling parameters.
output = generate(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=35,
    top_p=0.5,
    repetition_penalty=1.176,
)

print(output[0]["generated_text"])
```

## License

This model is licensed under the [Apache License 2.0](https://huggingface.co/Felladrin/Minueza-32M-Chat/resolve/main/license.txt).
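
## Usage Example with the GGUF Conversion

The GGUF conversion listed above can also be run outside of `transformers`, for example with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The snippet below is a minimal sketch rather than an official example: the GGUF filename passed to `hf_hub_download` is an assumption, so check the files available in [Felladrin/gguf-Minueza-32M-Chat](https://huggingface.co/Felladrin/gguf-Minueza-32M-Chat) and adjust it accordingly.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the GGUF conversions.
# NOTE: the filename below is an assumption; check the repository for the actual file names.
model_path = hf_hub_download(
    repo_id="Felladrin/gguf-Minueza-32M-Chat",
    filename="Minueza-32M-Chat.Q8_0.gguf",  # hypothetical filename
)

llm = Llama(model_path=model_path)

# Build the prompt manually, following the recommended ChatML prompt format.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant who answers the user's questions with details and curiosity.<|im_end|>\n"
    "<|im_start|>user\n"
    "What are some potential applications for quantum computing?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Generate with the recommended inference parameters, stopping at the end-of-turn token.
output = llm(
    prompt,
    max_tokens=250,
    temperature=0.7,
    top_p=0.5,
    top_k=35,
    repeat_penalty=1.176,
    stop=["<|im_end|>"],
)

print(output["choices"][0]["text"])
```

The sampling values mirror the recommended inference parameters above; `repeat_penalty` is llama.cpp's name for `repetition_penalty`.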