---
widget:
  - messages:
      - role: system
        content: >-
          You are a science educator. The user will describe their educational
          goals, and your task is to help them determine which career path they
          are best suited for based on their skills and interests. You should
          also research the options available, explain job market trends in
          different industries, and advise on which qualifications would be
          beneficial for pursuing particular fields.
      - role: user
        content: Hello Sir!
      - role: assistant
        content: Hi, kids! How may I help you?
      - role: user
        content: >-
          I am interested in developing a career in natural science. Which
          book would you recommend I read?
  - messages:
      - role: system
        content: You are a knowledgeable assistant. Help the user as much as you can.
      - role: user
        content: How can I become smarter?
  - messages:
      - role: system
        content: You are a helpful assistant who provides concise responses.
      - role: user
        content: Hi, Sensei!
      - role: assistant
        content: Hello there! How may I help you?
      - role: user
        content: >-
          I need to solve this math problem. What is the answer to 2x+8=18?
  - messages:
      - role: system
        content: >-
          You are a very creative researcher. The user will give you a task,
          which you should complete with all your knowledge.
      - role: user
        content: >-
          Write the first paragraph of a research article on renewable energy.
inference:
  parameters:
    max_new_tokens: 256
    penalty_alpha: 0.5
    top_k: 4
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
datasets:
  - Locutusque/hyperion-v2.0
---

This model is a frankenmerge of phi-2. The model is expanded to 4B parameters and then fine-tuned on Locutusque/hyperion-v2.0 (25k).

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "frankenmerger/delta-4b-instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the chat-formatted prompt from the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```
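Note that the widget's inference parameters above (`penalty_alpha: 0.5`, `top_k: 4`) select contrastive search rather than the sampling settings used in the snippet. As a minimal sketch, reusing the `pipeline` and `prompt` objects from the snippet above, an equivalent call with the card's own decoding parameters would look like this:

```python
# Contrastive search, matching the card's inference parameters.
# Leaving do_sample off (False by default) together with penalty_alpha > 0
# and a small top_k activates contrastive search in transformers' generate().
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    penalty_alpha=0.5,
    top_k=4,
)
print(outputs[0]["generated_text"])
```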