---
license: other
tags:
- alpaca
---

-Alpac(ino) stands for Alpaca Integrated Narrative Optimization.

This model is a triple model merge of Alpaca, CoT, and Storytelling LoRAs (Alpaca+(CoT+Storytelling)), resulting in a comprehensive boost to Alpaca's reasoning and story-writing capabilities.
Alpaca was chosen as the backbone of this merge so that Alpaca's instruct format remains dominant.
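
For readers curious how a merge like this can be reproduced, the sketch below folds the three LoRAs credited at the bottom of this card into a LLaMA 30B base with the `peft` library. It is a minimal illustration, not the exact recipe used for this release; the local paths, the adapter order, and the storytelling adapter location are assumptions.

```
# Illustrative sketch only (not the exact recipe used for this release): fold the
# three credited LoRAs into a LLaMA 30B base with the peft library.
# Paths, adapter order, and the storytelling adapter location are assumptions.
from transformers import LlamaForCausalLM
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained("path/to/llama-30b")

adapters = [
    "chansung/alpaca-lora-30b",             # Alpaca instruct LoRA
    "magicgh/llama30b-lora-cot",            # chain-of-thought LoRA
    "path/to/storytelling-llama-30b-lora",  # GamerUntouch storytelling LoRA (local copy)
]

for adapter in adapters:
    wrapped = PeftModel.from_pretrained(base, adapter)
    base = wrapped.merge_and_unload()  # bake the LoRA deltas into the base weights

base.save_pretrained("alpacino-30b-merged")
```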

Hey! New GGML flavor! WOW!

Thanks to camelids for making Alpacino30B accessible to the cool GGML community.
https://huggingface.co/camelids/alpacino-33b-ggml-q5_1

-Legalese:

This model is released under a non-commercial license. This release contains modified weights of LLaMA 30B and is distributed in good faith that those
who download and/or utilize this model have already been granted explicit access to the original LLaMA weights by Meta AI after filling out the following
form:
https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform

-Use Case Example of an Infinite Text-Based Adventure Game With Alpacino30b:

In Text-Generation-WebUI or KoboldAI, enable chat mode, name the user "Player" and the AI "Narrator", then tailor the instructions below as desired and paste them into
the context/memory field:


```
### Instruction:
Make Narrator function as a text based adventure game that responds with verbose, detailed, and creative descriptions of what happens next after Player's response.
Make Player function as the player input for Narrator's text based adventure game, controlling a character named (insert character name here, their short bio, and
whatever quest or other information to keep consistent in the interaction).

### Response:
{an empty new line here}
```

Subjective testing suggests the ideal presets for both TGUI and KAI are "Storywriter" (temperature raised to 1.1) or "Godlike", with context tokens
at 2048 and max generation tokens at ~680 or greater. This model will determine on its own when to stop writing and will rarely use even half that many tokens.
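
Outside of those UIs, the same recommendations can be applied directly with llama-cpp-python against the GGML file linked above. The snippet below is a minimal sketch under assumptions: the model file name, the stop sequence, and the example character are placeholders, and the sampling values simply mirror the suggestions in this card.

```
# Minimal sketch with llama-cpp-python, assuming the q5_1 GGML file linked above.
# The file name, stop sequence, and example character are placeholders; sampling
# values mirror this card's suggestions (2048 context, ~680 new tokens, temp 1.1).
from llama_cpp import Llama

llm = Llama(model_path="alpacino-33b-ggml-q5_1.bin", n_ctx=2048)

prompt = (
    "### Instruction:\n"
    "Make Narrator function as a text based adventure game that responds with verbose, "
    "detailed, and creative descriptions of what happens next after Player's response.\n"
    "Make Player function as the player input for Narrator's text based adventure game, "
    "controlling a character named Aldric, a wandering knight searching for a lost relic.\n"
    "\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=680, temperature=1.1, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```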

-Obligatory:

This model may output offensive text and/or fabricated information; do not use this model for advice
in any domain, especially medical or mental health advice. Meta AI and I are not liable for improper
use or any damages, perceived or otherwise.

-Sourced LoRA Credits:

ChanSung's excellently made Alpaca LoRA

https://huggingface.co/chansung/alpaca-lora-30b

https://huggingface.co/datasets/yahma/alpaca-cleaned

https://github.com/gururise/AlpacaDataCleaned

magicgh's valuable CoT LoRA

https://huggingface.co/magicgh/llama30b-lora-cot

https://huggingface.co/datasets/QingyiSi/Alpaca-CoT

https://github.com/PhoebusSi/alpaca-CoT

GamerUntouch's unique Storytelling LoRA

https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs