Julius ter Pelkwijk committed on
Commit a2363f5
1 Parent(s): 77e19e0
README.md ADDED
@@ -0,0 +1,25 @@
+ ---
+ language: en
+ license: mit
+ ---
+ # Fairseq-dense 13B - Nerys
+ ## Model Description
+ Fairseq-dense 13B-Nerys is a finetune of Fairseq's dense 13B model, the dense baseline released alongside Fairseq's Mixture-of-Experts models.
+ ## Training data
+ The training data contains around 2500 ebooks in various genres (the "Pike" dataset), a CYOA dataset called "CYS", and 50 Asian "light novels" (the "Manga-v1" dataset).
+ Most parts of the dataset have been prepended with the following text: `[Genre: <genre1>, <genre2>]`
+ ### How to use
+ You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it is run:
+ ```py
+ >>> from transformers import pipeline
+ >>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-13B-Nerys')
+ >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
+ [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
+ ```
+ ### Limitations and Biases
+ Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race, and religion).
+ 
+ ### BibTeX entry and citation info
+ ```
+ Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts
+ ```
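Since the training data was prepended with genre tags, prompts can follow the same convention. A minimal sketch (the `genre_prompt` helper is hypothetical, not part of the model's API):

```python
def genre_prompt(genres, opening):
    """Prefix a story opening with the dataset's genre tag convention."""
    # Mirrors the "[Genre: <genre1>, <genre2>]" prefix used during training.
    return f"[Genre: {', '.join(genres)}]\n{opening}"

prompt = genre_prompt(["science fiction", "adventure"],
                      "Welcome Captain Janeway, I apologize for the delay.")
print(prompt)
```

The resulting string can be passed to the pipeline shown above in place of a bare opening sentence.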
config.json ADDED
@@ -0,0 +1,35 @@
+ {
+ "_name_or_path": "KoboldAI/fairseq-dense-13B",
+ "activation_dropout": 0.0,
+ "activation_function": "gelu",
+ "architectures": [
+ "XGLMForCausalLM"
+ ],
+ "attention_dropout": 0.1,
+ "attention_heads": 40,
+ "bos_token_id": 50257,
+ "d_model": 5120,
+ "decoder_start_token_id": 2,
+ "dropout": 0.1,
+ "eos_token_id": 50259,
+ "ffn_dim": 20480,
+ "init_std": 0.02,
+ "layerdrop": 0.0,
+ "max_position_embeddings": 2048,
+ "model_type": "xglm",
+ "newlinemode": "s",
+ "num_layers": 40,
+ "pad_token_id": 1,
+ "scale_embedding": true,
+ "tokenizer_class": "GPT2Tokenizer",
+ "torch_dtype": "float16",
+ "transformers_version": "4.20.0",
+ "use_cache": false,
+ "vocab_size": 50261,
+ "formatoptns": {
+ "frmtrmblln": true,
+ "frmttriminc": true
+ },
+ "welcome": "You are currently running the hybrid novel/adventure-writing model `Nerys, version 3 - PATREON MODEL.`\n\n This model is made by [Mr. Seeker](https://www.patreon.com/mrseeker)\n\n### How to use this model\n\nNerys can both generate adventures and stories. Use the authors note to give it a certain genre to follow, use memory to give an overview of the story and use World Information to give it specific details about the characters. To start off, give the AI an idea of what you are writing about by setting the scene. Give the AI around 10 sentences that make your story really interesting to read. Introduce your character, describe the world, blow something up, or let the AI use its creative mind. Turn Adventure mode on for an AI-Dungeon experience.",
+ "antemplate": "[Genre: <|>]"
+ }
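As a sanity check, the sizes in this config roughly reproduce the "13B" in the model name. A back-of-the-envelope estimate (ignoring biases, layer norms, and position embeddings):

```python
# Values taken from config.json above
d_model, num_layers, ffn_dim, vocab_size = 5120, 40, 20480, 50261

attn_params = 4 * d_model * d_model   # q/k/v/output projections per layer
ffn_params = 2 * d_model * ffn_dim    # up- and down-projection per layer
per_layer = attn_params + ffn_params
total = num_layers * per_layer + vocab_size * d_model  # plus token embeddings

print(f"~{total / 1e9:.1f}B parameters")  # ~12.8B, i.e. the "13B" in the name
```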
merges.txt ADDED
The diff for this file is too large to render. See raw diff
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0ccaab6b42e740c0f4038766eed2ef07f75c20fddf1de25079ef3524f937af8
+ size 25707041085
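pytorch_model.bin is stored as a Git LFS pointer rather than the weights themselves; each pointer line is a `<key> <value>` pair. The `size` field is consistent with roughly 12.8B parameters stored in float16 (2 bytes each). A sketch parsing the pointer text shown above:

```python
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:c0ccaab6b42e740c0f4038766eed2ef07f75c20fddf1de25079ef3524f937af8
size 25707041085"""

# Each Git LFS pointer line is "<key> <value>"
fields = dict(line.split(" ", 1) for line in pointer.splitlines())

size_gb = int(fields["size"]) / 1e9
print(f"{size_gb:.1f} GB")  # ~25.7 GB, about 12.8B params * 2 bytes (float16)
```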
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "<|endoftext|>", "pad_token": "<pad>"}
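The special tokens map lists four tokens beyond the base GPT-2 vocabulary, which is consistent with the config's `vocab_size` of 50261 (50257 base BPE entries plus 4). A small arithmetic check:

```python
import json

special_tokens = json.loads(
    '{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", '
    '"sep_token": "<|endoftext|>", "pad_token": "<pad>"}'
)

base_gpt2_vocab = 50257   # standard GPT-2 BPE vocabulary size
config_vocab = 50261      # vocab_size from config.json above

# <|endoftext|> already exists in the base vocabulary; the rest are added
added = {t for t in special_tokens.values() if t != "<|endoftext|>"}
print(config_vocab - base_gpt2_vocab, sorted(added))
```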
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"errors": "replace", "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "special_tokens_map_file": "/root/.cache/huggingface/transformers/d62d75cc3a7250ada25f0a99e2741555d3712693661d5eef48b3fcbdd151d255.f4b0476f9d35aab16d5dd877dd9e5d547702eff96a3d808497c0d3fc36a32c99", "name_or_path": "KoboldAI/fairseq-dense-13B", "tokenizer_class": "GPT2Tokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff