---
license: other
language:
- en
pipeline_tag: conversational
inference: false
tags:
- AI
- ConversationalAI
---

# LLmRa-1.3B-V2

A conversational Open Pre-trained Transformer Language Model fine-tune.

**LLmRa 1.3B-V2** is a proof-of-concept fine-tune of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) optimized for dialogue.

**Disclaimer:** NSFW data was included in the fine-tuning of this model. Although SFW inputs will usually result in SFW outputs, you are advised to **chat at your own risk.**

**Warning:** This model is **NOT** suitable for use by minors. **It will output X-rated content under certain circumstances.**

**This model was fine-tuned on the LLmRa-100K conversational dataset (small version).**

---

## Usage Format

To use the model effectively, follow this structured format for text-based conversations:

**1. Initialization**

Here is how you can define the personality of the language model:

```
<|system|>[Persona]
```

- **Persona**: You can define a specific persona or context for the AI, but it's optional. It can be a character, a role, or just a style of interaction.

**2. User Input**

```
<|user|>[User input]<|model|>
```

- Users start the conversation by entering their message after `<|user|>` and closing with `<|model|>`.

---

### Example Usage:

Here's an example of how to start a conversation with the AI:

```
<|system|>I'm here to provide information and assistance on a wide range of topics.
<|model|>Hello! Welcome to our AI-powered assistant. How can I assist you today?
<|user|>Tell me about the history of artificial intelligence.
<|model|>
```

Continue the conversation as needed; this structured format helps maintain a smooth and engaging interaction with the AI. You are not required to include a user name: you can change `User` to your preferred name or leave it blank. You may also add the AI's name, for example:

```
<|user|>YourNameHere: Hello.<|model|>CharacterName:
```

You can also use this instruct-style prompt:

```
<|system|>What is one plus one?<|model|>
```

## Loading The Model

To load the model and interact with it, use the Python code below:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the fine-tuned model and its tokenizer.
model = AutoModelForCausalLM.from_pretrained('L-R/LLmRa-1.3B-V2')
tokenizer = AutoTokenizer.from_pretrained('L-R/LLmRa-1.3B-V2')

# Build a text-generation pipeline around them.
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=100)

# Wrap the question in the prompt format described above.
input_question = 'QUESTION HERE'
question_formatted = f'<|system|>{input_question}<|model|>'

# The pipeline returns prompt + completion; print only the completion.
result = pipe(question_formatted)
print(f"[model]: {result[0]['generated_text'][len(question_formatted):]}")
```

## Known issues

The model does not always follow instructions.
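Since the loading example above only covers a single instruct-style prompt, a multi-turn chat loop is a natural extension. The sketch below reuses the `pipe` and `tokenizer` objects from that example; the persona string and sampling values (`temperature=0.7`, `top_p=0.9`, `max_new_tokens=128`) are illustrative assumptions, not tuned recommendations, though lower temperatures may help when the model drifts from instructions.

```python
# Minimal multi-turn chat loop reusing the `pipe` object from the loading
# example above. The persona text and sampling values are illustrative
# assumptions, not tuned recommendations.
persona = "I'm here to provide information and assistance on a wide range of topics."
history = f"<|system|>{persona}"

while True:
    user_input = input("[user]: ")
    if not user_input:  # an empty line ends the chat
        break
    history += f"<|user|>{user_input}<|model|>"
    result = pipe(
        history,
        do_sample=True,
        temperature=0.7,     # lower values stick closer to the prompt
        top_p=0.9,           # nucleus sampling cutoff
        max_new_tokens=128,  # takes precedence over the pipeline's max_length
    )
    # The pipeline returns prompt + completion; keep only the new text,
    # and cut it off if the model starts writing the user's next turn.
    reply = result[0]["generated_text"][len(history):].split("<|user|>")[0]
    print(f"[model]: {reply.strip()}")
    history += reply
```

Appending each reply back onto `history` keeps earlier turns in context, but with a 2048-token context window the oldest turns will eventually need to be trimmed.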