---
license: wtfpl
datasets:
- HuggingFaceH4/no_robots
thumbnail: https://huggingface.co/clibrain/mamba-2.8b-chat-no_robots/resolve/main/mamba_no_robos-logo.png
pipeline_tag: text-generation
language:
- en
---

# MAMBA (2.8B) 🐍 fine-tuned on the H4/no_robots dataset for chat / instruction following

This model card is still a work in progress!

<div style="text-align:center;width:250px;height:250px;">
    <img src="https://huggingface.co/clibrain/mamba-2.8b-chat-no_robots/resolve/main/mamba_no_robos-logo.png" alt="mamba-no_robots logo" />
</div>


## Base model info

Mamba is a new state space model architecture showing promising performance on information-dense data such as language modeling, where previous subquadratic models fall short of Transformers.
It is based on the line of progress on [structured state space models](https://github.com/state-spaces/s4),
with an efficient hardware-aware design and implementation in the spirit of [FlashAttention](https://github.com/Dao-AILab/flash-attention).
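
For intuition only, the core of a state space layer can be sketched as a linear recurrence `h_t = A·h_{t-1} + B·x_t`, `y_t = C·h_t` scanned over the sequence. The sketch below is a toy, non-selective version; Mamba makes the SSM parameters input-dependent and runs this scan with fused hardware-aware kernels, so none of the names here come from the actual implementation:

```py
import torch

# Toy, non-selective SSM recurrence for a single scalar input channel.
# Mamba's selective SSM makes B, C, and the step size functions of the input
# and computes this scan with fused CUDA kernels; this loop is illustrative only.
def ssm_scan(x, A, B, C):
    # x: (seq_len,), A: (n, n), B: (n,), C: (n,)
    h = torch.zeros(B.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B * x_t   # update the hidden state
        ys.append(C @ h)      # read out the output
    return torch.stack(ys)

n = 4
y = ssm_scan(torch.randn(16), torch.eye(n) * 0.9, torch.ones(n), torch.ones(n) / n)
print(y.shape)  # torch.Size([16])
```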

## Dataset info

_Look Ma, an instruction dataset that wasn't generated by GPTs!_

### Dataset Description

- **Repository:** https://github.com/huggingface/alignment-handbook
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** Lewis Tunstall

#### Dataset Summary

No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's [InstructGPT paper](https://huggingface.co/papers/2203.02155), and consists mostly of single-turn instructions across the following categories (see the loading snippet after the table):

| Category   |   Count |
|:-----------|--------:|
| Generation |    4560 |
| Open QA    |    1240 |
| Brainstorm |    1120 |
| Chat       |     850 |
| Rewrite    |     660 |
| Summarize  |     420 |
| Coding     |     350 |
| Classify   |     350 |
| Closed QA  |     260 |
| Extract    |     190 |
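
A minimal sketch for loading and inspecting the dataset with the `datasets` library (split names have varied across dataset revisions, e.g. `train_sft`/`test_sft` vs. `train`/`test`, so they are listed rather than hard-coded):

```py
from datasets import load_dataset

# Load No Robots and peek at one example; each row carries a chat-style
# "messages" list suitable for SFT.
ds = load_dataset("HuggingFaceH4/no_robots")
print(ds)  # shows the available splits and columns
split = list(ds.keys())[0]
print(ds[split][0]["messages"])
```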


## Usage

```sh
pip install torch==2.1.0 transformers==4.35.0 causal-conv1d==1.0.0 mamba-ssm==1.0.1
```

```py
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

# Borrow the Zephyr chat template, since this model ships without one.
CHAT_TEMPLATE_ID = "HuggingFaceH4/zephyr-7b-beta"

device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_name = "clibrain/mamba-2.8b-chat-no_robots"

eos_token = "<|endoftext|>"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.eos_token = eos_token
tokenizer.pad_token = tokenizer.eos_token
tokenizer.chat_template = AutoTokenizer.from_pretrained(CHAT_TEMPLATE_ID).chat_template

model = MambaLMHeadModel.from_pretrained(model_name, device=device, dtype=torch.float16)

messages = []
prompt = "Tell me 5 sites to visit in Spain"
messages.append(dict(role="user", content=prompt))

# Render the chat history into token ids, with the generation prompt appended.
input_ids = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(device)

out = model.generate(
    input_ids=input_ids,
    max_length=2000,
    temperature=0.9,
    top_p=0.7,
    eos_token_id=tokenizer.eos_token_id,
)

# Keep only the assistant's turn and strip the EOS token.
decoded = tokenizer.batch_decode(out)
assistant_message = decoded[0].split("<|assistant|>\n")[-1].replace(eos_token, "")

print(assistant_message)
```
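
Since the chat template is borrowed from Zephyr, it can help to inspect the rendered prompt before generating. A minimal sketch; the exact layout comes from the template itself, and the expected shape shown in the comments assumes `eos_token == "<|endoftext|>"`:

```py
# Render the prompt as text instead of token ids, to sanity-check the template.
prompt_text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt_text)
# Expected shape (assuming the Zephyr template and eos_token "<|endoftext|>"):
# <|user|>
# Tell me 5 sites to visit in Spain<|endoftext|>
# <|assistant|>
```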


## Gradio Demo

```sh
git clone https://github.com/mrm8488/mamba-chat.git
cd mamba-chat

pip install -r requirements.txt
pip install -q gradio==4.8.0

python app.py \
  --model clibrain/mamba-2.8b-chat-no_robots \
  --share
```
## Evaluations

Coming soon!


## Acknowledgments

Thanks to [mamba-chat](https://github.com/havenhq/mamba-chat/tree/main) for heavily inspiring our work.