Bryan-6000

Bryan-6000 is a tennis chatbot designed to answer questions about the rules and strategy of the game. It was trained on a custom dataset using LoRA fine-tuning on top of the meta-llama/Llama-3.1-8B-Instruct model.

Quickstart

Bryan-6000 was trained with the mlx_lm package on an Apple Silicon Mac, so it is easiest to run on a Mac with Apple Silicon.

Command Line Quickstart

If you have mlx_lm installed, you can get up and running with Bryan-6000 in a single command:

mlx_lm.generate --model band2001/bryan-6000 --prompt "What is tennis?"
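If you do not have mlx_lm installed yet, it is distributed on PyPI as mlx-lm and requires an Apple Silicon Mac. A minimal setup might look like the following; the --max-tokens value is just an example, and you can check mlx_lm.generate --help for the full list of flags in your installed version:

```shell
# Install the MLX LM package (Apple Silicon only)
pip install mlx-lm

# Ask Bryan-6000 a question; --max-tokens caps the response length
mlx_lm.generate --model band2001/bryan-6000 \
    --prompt "How do I hit a kick serve?" \
    --max-tokens 512
```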

Quickstart with Python

Here is a simple Python script for interacting with Bryan-6000. It could be a useful starting point if you are interested in building an API around the model. A sample system prompt is included.

from mlx_lm import load, generate

SYSTEM_PROMPT = """
    You are very knowledgeable about tennis. Your goal is to answer questions about tennis to the best of your ability. If you receive questions about the rules of tennis, please answer factually. If you receive questions about strategy, please answer using your knowledge but add a disclaimer that other strategies may work as well. If you do not know the answer to a question, please respond, "I'm sorry, I'm not sure. Please rephrase your question or try using other resources like the USTA." If you are asked a question that is not about tennis, please respond with "I'm sorry, I can only answer questions about tennis." Please try to be enthusiastic about any tennis questions as well!
"""

def formatPrompt(prompt, systemPrompt = SYSTEM_PROMPT):
    # Build the Llama 3.1 chat template by hand. Avoid an indented
    # triple-quoted f-string here: its leading whitespace would be
    # injected into the prompt the model sees.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{systemPrompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

def loadModel(modelPath = "band2001/bryan-6000"):
    model, tokenizer = load(modelPath)
    return model, tokenizer

def generateResponse(model, tokenizer, prompt, maxTokens = 512):
    formattedPrompt = formatPrompt(prompt)
    response = generate(model, tokenizer, formattedPrompt, max_tokens = maxTokens)
    return response

def run():
    print("Loading model...")
    model, tokenizer = loadModel()
    print("Model loaded")
    while True:
        prompt = input("Ask a question about tennis (or type 'exit' to quit): ")
        if prompt.lower() == "exit":
            break
        response = generateResponse(model, tokenizer, prompt)

        print(response)

if __name__ == "__main__":
    run()
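The prompt string above follows the published Llama 3.1 Instruct chat format. If you want to unit-test the prompt construction without loading the 8B weights, the same template can be isolated as a small pure function. This is a sketch: format_llama3_prompt is a hypothetical helper, not part of mlx_lm.

```python
def format_llama3_prompt(system: str, user: str) -> str:
    # Llama 3.1 Instruct chat format: each turn is wrapped in header
    # tokens and terminated with <|eot_id|>; the trailing assistant
    # header cues the model to begin generating its reply.
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

Because this helper has no mlx dependency, it can be checked in a plain unit test before the model is ever loaded.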

Ethics & Out of Scope

Bryan-6000 is not designed to answer non-tennis questions; do not use it for non-tennis purposes. Please be conscious of your prompts and avoid attempting to provoke an offensive response. Use Bryan-6000 at your own discretion.
