---
language:
- en
pipeline_tag: conversational
inference: false
tags:
- AI
- ConversationalAI
---

<h1 style="text-align: center">LLmRa-355M</h1>
<h2 style="text-align: center">A conversational fairseq-dense fine-tune.</h2>

## Model

**LLmRa-355M** is a proof-of-concept fine-tune of [KoboldAI/fairseq-dense-355M](https://huggingface.co/KoboldAI/fairseq-dense-355M), optimized for dialogue.

**Disclaimer:** NSFW data was included in the fine-tuning of this model. Although SFW inputs will usually produce SFW outputs, you are advised to **chat at your own risk.**

**Warning:** This model is **NOT** suitable for use by minors. **It will output X-rated content under certain circumstances.**

---

## Usage Format

To use the model effectively, follow this structured format for text-based conversations:

**1. Initialization**
```markdown
<[system]>: (Persona)
<start>
(You can begin with a friendly AI greeting if you like)
```
- **Persona**: You can define a specific persona or context for the AI, but it's optional. It can be a character, a role, or just a style of interaction.

**2. User Input**
```markdown
[>] (User's input message here.) [/]
```
- Users start the conversation by enclosing their message between `[>]` and `[/]`.

**3. AI Response**
The model will respond based on the input provided by the user.

---

### Example Usage

Here's an example of how to start a conversation with the AI:

```markdown
<[system]>: I'm here to provide information and assistance on a wide range of topics.
<start>
Hello! Welcome to our AI-powered assistant. How can I assist you today?
[>] User: Tell me about the history of artificial intelligence. [/]
```

Generated text:

```markdown
The history of AI is an extensive and complex topic, covering various aspects from the early days of computer programming to present-day technology and beyond. Here are some key milestones in the field of AI...
```

Continue the conversation as needed. This structured format helps maintain a smooth and engaging interaction with the AI.

You are not required to include `User`; you can change it to your preferred name or leave it blank. You may also add the AI's name, for example:

```markdown
[>] User: Hello. [/] CharacterName:
```

Or leave both blank:

```markdown
[>] Hello. [/]
```
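
The prompt pieces above can also be assembled programmatically. The following is a minimal sketch (the `build_prompt` helper is illustrative, not part of the model's API); it mirrors the `<[system]>`, `<start>`, `[>]`, and `[/]` markers shown in the examples:

```python
def build_prompt(persona, greeting, message, user_name="", ai_name=""):
    """Assemble a prompt in the conversation format the model expects.

    user_name and ai_name are optional, matching the examples above:
    either (or both) may be left blank.
    """
    user = f"{user_name}: " if user_name else ""
    ai = f" {ai_name}:" if ai_name else ""
    return f"<[system]>: {persona}\n<start>\n{greeting}\n[>] {user}{message} [/]{ai}"


prompt = build_prompt(
    "I'm here to provide information and assistance on a wide range of topics.",
    "Hello! How can I assist you today?",
    "Tell me about the history of artificial intelligence.",
    user_name="User",
)
```

Passing `ai_name` appends a trailing `CharacterName:` cue, which nudges the model to answer in that character's voice.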

## Loading The Model

To load the model and interact with it, use the Python code below:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "L-R/LLmRa-355M"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)


def ask_question(model_data, input_data, model, tokenizer):
    # Two example personas to choose from.
    model_data_dict = {
        "X1": {
            "name": "SmartAI",
            "greeting": "Hello! How can I assist you today?",
            "description": "I'm here to provide information and assistance on a wide range of topics"
        },
        "X2": {
            "name": "MysteryBot",
            "greeting": "Greetings, curious traveler! What secrets do you seek?",
            "description": "I am the enigmatic MysteryBot, here to uncover and reveal the mysteries of the world."
        }
    }

    if model_data in model_data_dict:
        data = model_data_dict[model_data]
        name = data["name"]
        greeting = data["greeting"]
        model_data = data["description"]
    else:
        return "Invalid model_data option"

    # Build the prompt in the conversation format the model expects.
    question = f"<[system]>: {model_data}\n<start>\n{greeting}\n[>] Pete: {input_data} [/] {name}:"

    print("\n[----------]\n")

    inputs = tokenizer.encode(question, return_tensors="pt")
    outputs = model.generate(
        input_ids=inputs,
        max_length=250 + len(inputs[0]),
        no_repeat_ngram_size=4,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        top_k=40,
        top_p=0.55,
        num_return_sequences=1,
        temperature=0.5,
        repetition_penalty=1.25,
        use_cache=True
    )
    # Strip the prompt from the decoded output, keeping only the new text.
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)[len(question):]
    print(f"\n\n[Generated Text]: {response}")
    print("\n[----------]\n")
    return response


while True:
    print("\nQuestion For The AI: ")
    input_data = input(">> ")
    model_data = input("Personality Of The AI (X1, X2): ")
    ask_question(model_data, input_data, model, tokenizer)
```
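
Because the model sometimes keeps generating past its own turn (see Known issues below), you may want to truncate the response at the first new turn marker. A small post-processing sketch (the `trim_response` helper is illustrative, not part of the model's API):

```python
def trim_response(response):
    """Cut generated text at the first conversation marker, since the model
    may otherwise start a new turn or invent a fresh persona on its own."""
    for marker in ("[>]", "<[system]>", "<start>"):
        idx = response.find(marker)
        if idx != -1:
            response = response[:idx]
    return response.strip()
```

Call this on the return value of `ask_question` before showing the text to the user.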

## Known issues

- Inconsistent responses, including occasional nonsensical or strange answers.
- The AI may mistakenly identify itself as the user when asked about its identity, attributing the user's name to itself.
- The model was trained on only 10 MB of conversational data, for testing purposes; larger models will include more data.
- The AI sometimes ends the conversation, generates a random personality for itself, and starts a whole new conversation. This will be fixed in larger models.