WackyArt committed on
Commit ef56776
1 Parent(s): 5ac99c6

Create README.md

---
language: en
license: mit
library_name: transformers
tags:
- text-generation
- gptj
---

# Model Card for Peaches-Pygmalion-6b

This model is based on `Pygmalion-6b`, originally developed by the PygmalionAI team. It is designed for conversational AI and text-generation tasks, tailored to represent the persona of Peaches Sinclair, a charming and slightly clumsy Catgirl.

## Model Details

### Model Description

The original `Pygmalion-6b` model was developed for high-quality conversational AI. This version incorporates a tailored persona for Peaches Sinclair, making it suitable for creative and engaging dialogues.

- **Developed by:** PygmalionAI
- **Fine-tuned for:** Peaches Sinclair
- **Model type:** GPT-J
- **Language(s):** English
- **License:** MIT (inherited from Pygmalion-6b)
- **Base model:** Pygmalion-6b

## Uses

### Direct Use

This model is ideal for:
- Conversational AI.
- Text generation with creative and playful dialogues.

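Pygmalion-family models are typically prompted with a persona block followed by a dialogue transcript. The template below is an assumption carried over from the base `Pygmalion-6b` conventions, not something verified for this fine-tune, so treat it as a starting point:

```python
# Hypothetical Pygmalion-style prompt builder; the layout (persona line,
# <START> marker, "You:"/"Peaches:" turns) is assumed from the base model's
# conventions and may need adjusting for this fine-tune.
def build_prompt(persona: str, history: list, user_message: str) -> str:
    lines = [f"Peaches's Persona: {persona}", "<START>"]
    lines.extend(history)                      # prior "You:" / "Peaches:" turns
    lines.append(f"You: {user_message}")       # the new user turn
    lines.append("Peaches:")                   # cue the model to reply in character
    return "\n".join(lines)

prompt = build_prompt(
    persona="A charming and slightly clumsy Catgirl.",
    history=["You: Hi Peaches!", "Peaches: *waves tail* Hello there, nya~"],
    user_message="How are you today?",
)
print(prompt)
```

The resulting string can be passed straight to the tokenizer in the quickstart snippet below.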
### Out-of-Scope Use

This model is not recommended for:
- Tasks requiring factual accuracy.
- Use cases involving harmful or explicit content.

## Bias, Risks, and Limitations

This model inherits biases from the base model (`Pygmalion-6b`) and its training data. Users should carefully monitor outputs, especially for sensitive topics.

## How to Get Started with the Model

You can use the model with the Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "WackyArt/Peaches-Pygmalion-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Hello! How are you today?", return_tensors="pt")
# Cap the generation length and pad with EOS (GPT-J has no dedicated pad token).
outputs = model.generate(**inputs, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
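Because the model simply continues the transcript, the raw completion often runs past the character's reply into an invented `You:` turn. A small post-processing helper (hypothetical, not part of the original card) can trim the decoded text down to just the reply:

```python
# Hypothetical helper: strip the prompt from the decoded output and cut the
# continuation at the first hallucinated "You:" turn, if any.
def extract_reply(generated: str, prompt: str) -> str:
    continuation = generated[len(prompt):] if generated.startswith(prompt) else generated
    reply = continuation.split("\nYou:")[0]
    return reply.strip()

sample = "You: Hi!\nPeaches: *purrs* Doing great, nya!\nYou: Good to hear."
print(extract_reply(sample, "You: Hi!\nPeaches:"))
```

Apply it to `tokenizer.decode(outputs[0], skip_special_tokens=True)` with the original prompt string to recover only the new in-character text.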