voidful committed on
Commit
2c72486
1 Parent(s): 2dbd746

Create README.md

Files changed (1)
1. README.md +41 -0
README.md ADDED
@@ -0,0 +1,41 @@
---
license: mit
language:
- en
---

# Mamba

<!-- Provide a quick summary of what the model is/does. -->
This repository contains the `transformers`-compatible `mamba-2.8b`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo.

# Usage

You need to install `transformers` from `main` until `transformers` 4.39.0 is released.
```bash
pip install git+https://github.com/huggingface/transformers@main
```
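
If you want to confirm that the development build you just installed actually ships the Mamba integration, a minimal import check (an illustrative sketch, not part of the original card) is enough:

```python
# Sanity check: MambaForCausalLM only exists in recent transformers builds.
from transformers import __version__
from transformers import MambaForCausalLM  # raises ImportError on older releases

print(f"transformers {__version__} provides MambaForCausalLM")
```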

We also recommend that you install both `causal-conv1d` and `mamba-ssm` using:

```bash
pip install "causal-conv1d>=1.2.0"
pip install mamba-ssm
```

(The version specifier is quoted so the shell does not treat `>` as a redirect.)

If either of these two is not installed, the "eager" implementation will be used; otherwise, the more optimised `cuda` kernels will be used.
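
To check which path you will get, you can probe for the two optional packages; this is a sketch using only the standard library (the import names `causal_conv1d` and `mamba_ssm` correspond to the pip packages above):

```python
import importlib.util

# Both optional packages must be importable for the fused cuda kernels;
# otherwise transformers falls back to the slower "eager" implementation.
has_kernels = all(
    importlib.util.find_spec(name) is not None
    for name in ("causal_conv1d", "mamba_ssm")
)
print("cuda kernels" if has_kernels else "eager fallback")
```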

## Generation
You can use the classic `generate` API:
```python
>>> from transformers import MambaForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-790m-hf")
>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-790m-hf")
>>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

>>> out = model.generate(input_ids, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["Hey how are you doing?\n\nI'm good.\n\nHow are"]
```
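
`generate` also accepts the usual decoding options; as a sketch (these are standard `transformers` generation arguments, not settings recommended by this card), sampled decoding looks like:

```python
>>> import torch

>>> torch.manual_seed(0)  # make sampling reproducible
>>> out = model.generate(
...     input_ids,
...     max_new_tokens=20,
...     do_sample=True,   # sample instead of greedy decoding
...     temperature=0.7,  # soften the next-token distribution
...     top_p=0.9,        # nucleus sampling
... )
>>> print(tokenizer.batch_decode(out, skip_special_tokens=True))
```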