pszemraj committed on
Commit
9741fe8
1 Parent(s): c53c970

add einops


add details on usage/model constraints too

Files changed (1): README.md +8 -3
README.md CHANGED
@@ -6,6 +6,10 @@ pipeline_tag: text-generation
 inference: false
 datasets:
 - the_pile_books3
+tags:
+- mosaicML
+- sharded
+- story
 ---
 
 # mpt-7b-storywriter: sharded
@@ -22,10 +26,12 @@ Please refer to the previously linked repo for details on usage/implementation/e
 
 ## Basic Usage
 
+> Note when using: this is **not** an instruction-tuned model, so you need to give it sufficient input text to continue generating something on-topic with your prompt
+>
 Install/upgrade packages:
 
 ```bash
-pip install -U torch transformers accelerate
+pip install -U torch transformers accelerate einops
 ```
 
 Load the model:
@@ -50,5 +56,4 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)
 Then you can use `model.generate()` as you would normally - see the notebook for details.
 
 
----
-
+---