# NCUEatingAI-0.5B-v1
This repository provides an example of how to use the NCUEatingAI-0.5B-v1 large language model from Hugging Face for chat-based inference. The model can be customized to act like any persona you specify in the system prompt, and it generates conversational responses based on user inputs.
## Model Information
- Model: `ZoneTwelve/NCUEatingAI-0.5B-v1`
- Size: 0.5 billion parameters
- Task: Conversational AI / Chatbot
## Usage

### System Prompt
You can set a system prompt to define how the model should behave during interactions. A simple example format is:

```
You act like $USERNAME
```

where `$USERNAME` can be replaced with the desired persona (e.g., "a helpful assistant", "a curious learner", etc.), as in the sketch below.
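As a minimal sketch of how such a system prompt slots into the chat messages (the persona string here is an illustrative assumption, not one the model prescribes):

```python
# Hypothetical persona string; substitute whatever $USERNAME you want.
messages = [
    {"role": "system", "content": "You act like a helpful assistant"},
    {"role": "user", "content": "What's for lunch?"},
]
```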
### Inference Example

Here’s a simple way to perform inference with the model. You’ll need to load the model and tokenizer, define the user and system prompts, and format the input using the `apply_chat_template` method.
#### Code Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

def chat_with_ncueatingai(
    model_path: str = "ZoneTwelve/NCUEatingAI-0.5B-v1",
    prompt: str = "What's for lunch?",
    system_prompt: str = "You act like @ZoneTwelve.",
    max_tokens: int = 64,
):
    # Load the model and tokenizer
    model = AutoModelForCausalLM.from_pretrained(model_path)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    # Prepare the chat messages
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": prompt},
    ]

    # Apply the chat template to build the prompt format the model expects
    input_text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

    # Tokenize the formatted prompt
    inputs = tokenizer(input_text, return_tensors="pt")

    # Generate the response; max_new_tokens caps only the generated
    # tokens, not the prompt length
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )

    # Decode only the newly generated tokens, skipping the echoed prompt
    response = tokenizer.decode(
        outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
    )
    return response

# Example usage
if __name__ == "__main__":
    response = chat_with_ncueatingai(
        prompt="What's a healthy meal?",
        system_prompt="You act like a nutrition expert.",
    )
    print("Model Response:", response)
```
### Parameters

- `model_path`: The path or Hugging Face Hub identifier of the model; defaults to `"ZoneTwelve/NCUEatingAI-0.5B-v1"`.
- `prompt`: The user’s input prompt, which the model will respond to.
- `system_prompt`: Defines the behavior or persona of the model.
- `max_tokens`: The maximum number of new tokens in the generated response.
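As a usage sketch, any of these defaults can be overridden per call (the prompt, persona, and token budget below are illustrative, not values the model card prescribes):

```python
# Illustrative arguments; any persona/prompt combination works.
response = chat_with_ncueatingai(
    prompt="Suggest a quick breakfast.",
    system_prompt="You act like a busy student.",
    max_tokens=128,  # allow a longer reply than the default 64
)
print(response)
```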
## Requirements

Ensure the following Python packages are installed:

```bash
pip install torch transformers
```
## Model Download

You can download the model directly from Hugging Face using:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("ZoneTwelve/NCUEatingAI-0.5B-v1")
tokenizer = AutoTokenizer.from_pretrained("ZoneTwelve/NCUEatingAI-0.5B-v1")
```
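Alternatively, here is a minimal sketch using the high-level `pipeline` API; this assumes a recent transformers release whose text-generation pipeline accepts chat-style message lists, and the persona below is illustrative:

```python
from transformers import pipeline

# Build a text-generation pipeline around the model.
chat = pipeline("text-generation", model="ZoneTwelve/NCUEatingAI-0.5B-v1")

messages = [
    {"role": "system", "content": "You act like a nutrition expert."},  # illustrative persona
    {"role": "user", "content": "What's a healthy meal?"},
]

# With chat-style input, generated_text holds the conversation,
# with the assistant's reply as the last message.
result = chat(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])
```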
## License
This project is licensed under the terms of the MIT license. See LICENSE for details.
Enjoy using NCUEatingAI-0.5B-v1 to build your personalized conversational AI!