AzureBlack committed commit 63857b1 (1 parent: a8d2427)

Upload 2 files

Files changed (2):
  1. README.md +70 -0
  2. huggingface-metadata.txt +20 -0

README.md ADDED
---
license: llama2
model_type: llama
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- Storywriter
---

![GOAT-70B-Storytelling](https://assets.adapt.ws/files/20231117_ehznrqludevtapck.png)

# GOAT-70B-Storytelling model

The GOAT-70B-Storytelling model was trained by the GOAT.AI lab as the core model for an autonomous story-writing agent.

# GOAT-Storytelling-Agent

This agent facilitates the generation of high-quality, cohesive, and captivating narratives, including stories and books. It achieves this by utilizing inputs such as plot outlines, character profiles, their interrelationships, and other relevant details. Examples are provided below.

# Model description

- **Base Architecture:** LLaMA 2 70B
- **License:** llama2
- **Context window length:** 4096 tokens

### Training details

Training was performed on a cluster of 64 H100 GPUs, using FSDP ZeRO-3 sharding for efficiency. We instruction-finetune on a dataset of 18K examples for one epoch with a batch size of 336, using the AdamW optimizer with a learning rate of 1e-5.

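The hyperparameters above can be gathered into a single configuration sketch. The values are taken from this paragraph; the class and field names themselves are illustrative, not part of the training code:

```python
from dataclasses import dataclass

# Values come from the training-details paragraph above;
# the class and field names are illustrative.
@dataclass
class TrainConfig:
    gpus: int = 64                 # 64x H100 cluster
    sharding: str = "FSDP ZeRO-3"
    dataset_size: int = 18_000     # instruction-finetuning examples
    epochs: int = 1
    batch_size: int = 336
    optimizer: str = "AdamW"
    lr: float = 1e-5
    context_window: int = 4096     # tokens

cfg = TrainConfig()
steps_per_epoch = cfg.dataset_size // cfg.batch_size  # 53 full batches
```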
### Learn more

- **Blogpost:** [GOAT-Storytelling: Arbitrarily Long Story Writing Agent](https://www.blog.goat.ai/goat-st/)
- **GitHub:** [here](https://github.com/GOAT-AI-lab/GOAT-Storytelling-Agent)
- **Generated examples:** [here](https://huggingface.co/datasets/GOAT-AI/generated-novels/tree/main/generated-books)

## Uses

The main purpose of GOAT-70B-Storytelling is to generate books, novels, movie scripts, etc., serving as the backbone model for our GOAT-Storytelling-Agent. It is specifically designed for storywriters.

## Usage

The model can either be self-hosted via `transformers` or used with Spaces:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "GOAT-AI/GOAT-70B-Storytelling"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16
)
```
Currently, we support LLM endpoint generation: send a POST request to the generation endpoint (we recommend using Text Generation Inference by Hugging Face).
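As a sketch, such a POST request against TGI's `/generate` route might look like the following. The endpoint URL, prompt, and sampling parameters are placeholders, and no request is actually sent while `ENDPOINT` is unset:

```python
import json
from urllib import request

# Placeholder: point this at your TGI server once it is running,
# e.g. "http://localhost:8080/generate". Nothing is sent while it is None.
ENDPOINT = None

# TGI's /generate route accepts a JSON body with "inputs" and "parameters".
payload = {
    "inputs": "Write the opening scene of a treasure-hunt story set in a jungle.",
    "parameters": {"max_new_tokens": 512, "temperature": 0.8, "do_sample": True},
}

if ENDPOINT:
    req = request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        print(json.loads(resp.read())["generated_text"])
```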
First, modify `config.py` and add your generation endpoint.

Then you can use it via the GOAT-Storytelling-Agent:

```python
from goat_storytelling_agent import storytelling_agent as goat

novel_scenes = goat.generate_story('treasure hunt in a jungle', form='novel')
```
## License

The GOAT-70B-Storytelling model is based on [Meta's LLaMA-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) and was trained on our own datasets.

GOAT-70B-Storytelling model weights are available under the LLAMA-2 license.

### Risks and Biases

The GOAT-70B-Storytelling model can produce factually incorrect output and should not be relied on to deliver factually accurate information. It may also generate wrong, biased, or otherwise offensive outputs.
huggingface-metadata.txt ADDED
url: https://huggingface.co/GOAT-AI/GOAT-70B-Storytelling
branch: main
download date: 2023-11-19 11:00:52
sha256sum:
    cff84a14ff786f2c79269a7d46c062cd2006f608f8a4e8a6dd643a6609425d5a pytorch_model-00001-of-00015.bin
    0bff6a7714e539971818b882ca28fc4e69f60cede78739ecad476f4b773c0422 pytorch_model-00002-of-00015.bin
    897b9454e30a7f6cd1f4388781063e046e74c9e908af127fbb0140a604d48382 pytorch_model-00003-of-00015.bin
    b720d8745cf66c2c4b3df39b50162a982ec32d96f6bf29be770c31a40a0fdba6 pytorch_model-00004-of-00015.bin
    232b871c353ec429bc33ca3173b007340233768aa139d318c9ee1b13b3a13a7a pytorch_model-00005-of-00015.bin
    6c924ccaaf46debdd89b97026b9f5d864ed381c7e3f34b37ed4d20c700a7498e pytorch_model-00006-of-00015.bin
    467171f02f26e51bdce98c342b533155b646bde1bad3b17c811c804d260b4ea3 pytorch_model-00007-of-00015.bin
    230d32f9b5464ae919a8b319a25ba22c5580998250cdb6b2df589508d7ca45c3 pytorch_model-00008-of-00015.bin
    3132cf8f97a39c190b5ee3896fb17eebe5b0828293431249377bd0db5ac341e9 pytorch_model-00009-of-00015.bin
    dc5b095277185a6b2159843a5cb2da83daf8abbefb33b8bbb83ea821939ed5a0 pytorch_model-00010-of-00015.bin
    1184c60c93fee2653129565b51aae09dbc898ff3bee96c5755f217d4e57d3601 pytorch_model-00011-of-00015.bin
    98e3be7c2b172aeab89990ed21e6e5e9452f1a0bc8a7f196119aa9cb5847c54b pytorch_model-00012-of-00015.bin
    562e25c93f4cdb86920fe45c096a3f606ef1ce811fad2ad18aa5e0cc3ce81c6c pytorch_model-00013-of-00015.bin
    9c0f9674c7708f8e6eb43578087f5aca3471604fb7944ba3da86ba60b69540dc pytorch_model-00014-of-00015.bin
    b188ed0447c698e53942e85d031909fad27e1496d825ebf05ddeca3587680c80 pytorch_model-00015-of-00015.bin
    9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347 tokenizer.model