
LLmRa-1.3B

A conversational fairseq-dense fine-tune.

LLmRa-1.3B is a proof-of-concept fine-tune of KoboldAI/fairseq-dense-1.3B, optimized for dialogue.

Disclaimer: NSFW data was included in the fine-tuning of this model. Although SFW inputs will usually result in SFW outputs, you are advised to chat at your own risk. This model is not suitable for use by minors.

Warning: This model is NOT suitable for use by minors. It will output X-rated content under certain circumstances.


Usage Format

To effectively utilize the model, follow this structured format for engaging text-based conversations:

1. Initialization

<|INST|><[system]>: (YOUR AI PERSONA)
<st_r>
  • Persona: You can define a specific persona or context for the AI, but it's optional. It can be a character, a role, or just a style of interaction.

2. User Input

<|INST|> (User's input message here.) <|/INST|>
  • Users can start the conversation by entering their message within <|INST|> and closing with <|/INST|>.

3. AI Response

The model will respond based on the input provided by the user.


Example Usage:

Here's an example of how to start a conversation with the AI:

<|INST|><[system]>: I'm here to provide information and assistance on a wide range of topics.
<st_r>
Hello! Welcome to our AI-powered assistant. How can I assist you today?
User: Tell me about the history of artificial intelligence. <|/INST|>

Continue the conversation as needed. This structured format helps maintain a smooth and engaging interaction with the AI.

You are not required to include "User"; you can change it to your preferred name or leave it blank. You may also add the AI's name, for example:

<|INST|> YourNameHere: Hello. <|/INST|> CharacterName:

Or leave both names blank:

<|INST|> Hello. <|/INST|>
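
For programmatic use, the template above can be assembled as a plain string. Below is a minimal sketch, assuming the tags (<|INST|>, <[system]>, <st_r>, <|/INST|>) are literal text in the prompt; the build_prompt helper and its argument names are illustrative, not part of the model's API:

def build_prompt(system_persona, greeting, user_message,
                 user_name="", ai_name=""):
    # Optional speaker labels; leave empty to omit them entirely.
    user_prefix = f"{user_name}: " if user_name else ""
    ai_suffix = f" {ai_name}:" if ai_name else ""
    return (
        f"<|INST|><[system]>: {system_persona}\n"
        f"<st_r>\n"
        f"{greeting}\n"
        f"{user_prefix}{user_message} <|/INST|>{ai_suffix}"
    )

# Reproduces the example conversation above:
prompt = build_prompt(
    "I'm here to provide information and assistance on a wide range of topics.",
    "Hello! Welcome to our AI-powered assistant. How can I assist you today?",
    "Tell me about the history of artificial intelligence.",
    user_name="User",
)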

Loading The Model

To use the model and interact with it, use the Python code below:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "L-R/LLmRa-1.3B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def ask_question(persona_key, input_data, model, tokenizer):
    # Two example personas; add your own entries as needed.
    personas = {
        "X1": {
            "name": "SmartAI",
            "greeting": "Hello! How can I assist you today?",
            "description": "I'm here to provide information and assistance on a wide range of topics",
        },
        "X2": {
            "name": "MysteryBot",
            "greeting": "Greetings, curious traveler! What secrets do you seek?",
            "description": "I am the enigmatic MysteryBot, here to uncover and reveal the mysteries of the world.",
        },
    }

    if persona_key not in personas:
        return "Invalid persona option"
    persona = personas[persona_key]
    name = persona["name"]
    greeting = persona["greeting"]
    description = persona["description"]

    # Assemble the prompt in the format described above ("Pete" is the example user name).
    question = f"<|INST|><[system]>: {description}\n<st_r>\n{greeting}\nPete: {input_data} <|/INST|> {name}:"

    print("\n[----------]\n")

    inputs = tokenizer.encode(question, return_tensors="pt")
    outputs = model.generate(
        input_ids=inputs,
        max_length=250 + len(inputs[0]),  # allow up to 250 new tokens
        no_repeat_ngram_size=4,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        top_k=40,
        top_p=0.55,
        num_return_sequences=1,
        temperature=0.5,
        repetition_penalty=1.25,
        use_cache=True,
    )
    # Decode only the newly generated tokens, skipping the prompt.
    response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    print(f"\n\n[Generated Text]: {response}")
    print("\n[----------]\n")
    return response


while True:
    print("\nQuestion For The AI: ")
    input_data = input(">> ")
    persona_key = input("Personality Of The AI (X1, X2): ")
    ask_question(persona_key, input_data, model, tokenizer)
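
If you have a CUDA GPU, you can load the checkpoint in half precision to roughly halve memory use. A minimal sketch, assuming a CUDA device is available (the generation parameters mirror the ones above):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "L-R/LLmRa-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load weights in float16 and move them to the GPU.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer.encode("<|INST|> Hello. <|/INST|>", return_tensors="pt").to("cuda")
outputs = model.generate(inputs, max_new_tokens=250, do_sample=True, top_p=0.55, temperature=0.5)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))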

Known issues

The model exhibits inconsistent responses, occasionally producing nonsensical or unusual answers. Performance appears worse than the 355M version, suggesting the training data did not transfer well to this model. The next version will be trained on a larger dataset with a new architecture.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

Metric Value
Avg. 31.1
ARC (25-shot) 32.68
HellaSwag (10-shot) 58.77
MMLU (5-shot) 23.23
TruthfulQA (0-shot) 36.21
Winogrande (5-shot) 59.04
GSM8K (5-shot) 0.08
DROP (3-shot) 7.72
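
For reference, the Avg. is the unweighted mean of the seven benchmark scores: (32.68 + 58.77 + 23.23 + 36.21 + 59.04 + 0.08 + 7.72) / 7 = 217.73 / 7 ≈ 31.1, consistent with the reported value.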