
Prompt Generator by ByteWave:

Welcome to the official repository of Prompt Generator, a tool by ByteWave for quickly generating prompts for Large Language Models (LLMs).

About Prompt Generator:

Prompt Generator is designed to streamline the process of generating text prompts for LLMs. Whether you are a content creator, researcher, or developer, this tool empowers you to create effective prompts quickly and efficiently.

Features:

  • Easy-to-use interface
  • Fast prompt generation
  • Customizable prompts for various LLMs (see the sketch after this list)
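
As a rough illustration of that customization, the sketch below varies the Action field to request prompts for several roles in a single call. The role names are arbitrary examples, and the seed format follows the Usage section below.

from transformers import pipeline

generator = pipeline("text-generation", model="ByteWave/prompt-generator")

# One seed per role; the "Action:" field is the part you customize.
actions = [
    "Action: Lawyer\nPrompt:",
    "Action: Travel Guide\nPrompt:",
]

# Passing a list produces one generation result per seed.
results = generator(actions, do_sample=True, max_new_tokens=256)
for result in results:
    print(result[0]["generated_text"])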

Usage:

from transformers import pipeline

generator = pipeline("text-generation", model="ByteWave/prompt-generator")

# Seed the generator with the role ("Action") you want a prompt for.
act = """Action: Doctor
Prompt:"""

# Sample a prompt; the output echoes the seed followed by the generated prompt.
output = generator(act, do_sample=True, max_new_tokens=256)
print(output[0]["generated_text"])
"""
I want you to act as a doctor and come up with a treatment plan for an elderly patient who has been experiencing severe headaches. Your goal is to use your knowledge of conventional medicine, herbal remedies, and other natural alternatives in order to create a plan that helps the patient achieve optimal health. Remember to use your best judgment and discuss various options with the patient, and if necessary, suggest additional tests or treatments in order to ensure success.
""" 

Training:

This model was fine-tuned from the openlm-research/open_llama_3b_v2 base model. The published checkpoint has 3.43B parameters, stored as FP16 weights in Safetensors format.
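
If you prefer to skip the pipeline wrapper, the checkpoint can also be loaded directly with the standard transformers classes. A minimal sketch, assuming the FP16 weights noted above and that the accelerate package is installed for device_map="auto"; the "Teacher" role is just an example seed:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ByteWave/prompt-generator")
model = AutoModelForCausalLM.from_pretrained(
    "ByteWave/prompt-generator",
    torch_dtype=torch.float16,   # matches the FP16 Safetensors weights
    device_map="auto",           # requires the accelerate package
)

inputs = tokenizer("Action: Teacher\nPrompt:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))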

Loss Graph:

