---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- llm-rs
- ggml
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
---
# GGML converted versions of [Mosaic's](https://huggingface.co/mosaicml) MPT Models

MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).

MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. 

## Converted Models:
| Name                                                                                                                          | Based on                                                                          | Type   | Container   | GGML Version   |
|:------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------|:------------|:---------------|
| [mpt-7b-chat-f16.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-chat-f16.bin)                           | [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat)               | F16    | GGML        | V3             |
| [mpt-7b-chat-q4_0.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-chat-q4_0.bin)                         | [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat)               | Q4_0   | GGML        | V3             |
| [mpt-7b-chat-q4_0-ggjt.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-chat-q4_0-ggjt.bin)               | [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat)               | Q4_0   | GGJT        | V3             |
| [mpt-7b-chat-q5_1-ggjt.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-chat-q5_1-ggjt.bin)               | [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat)               | Q5_1   | GGJT        | V3             |
| [mpt-7b-f16.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-f16.bin)                                     | [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)                         | F16    | GGML        | V3             |
| [mpt-7b-instruct-f16.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-instruct-f16.bin)                   | [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct)       | F16    | GGML        | V3             |
| [mpt-7b-instruct-q4_0.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-instruct-q4_0.bin)                 | [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct)       | Q4_0   | GGML        | V3             |
| [mpt-7b-instruct-q4_0-ggjt.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-instruct-q4_0-ggjt.bin)       | [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct)       | Q4_0   | GGJT        | V3             |
| [mpt-7b-instruct-q5_1-ggjt.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-instruct-q5_1-ggjt.bin)       | [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct)       | Q5_1   | GGJT        | V3             |
| [mpt-7b-q4_0.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-q4_0.bin)                                   | [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)                         | Q4_0   | GGML        | V3             |
| [mpt-7b-q4_0-ggjt.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-q4_0-ggjt.bin)                         | [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)                         | Q4_0   | GGJT        | V3             |
| [mpt-7b-q5_1-ggjt.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-q5_1-ggjt.bin)                         | [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)                         | Q5_1   | GGJT        | V3             |
| [mpt-7b-storywriter-f16.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-storywriter-f16.bin)             | [mosaicml/mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | F16    | GGML        | V3             |
| [mpt-7b-storywriter-q4_0.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-storywriter-q4_0.bin)           | [mosaicml/mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | Q4_0   | GGML        | V3             |
| [mpt-7b-storywriter-q4_0-ggjt.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-storywriter-q4_0-ggjt.bin) | [mosaicml/mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | Q4_0   | GGJT        | V3             |
| [mpt-7b-storywriter-q5_1-ggjt.bin](https://huggingface.co/rustformers/mpt-7b-ggml/blob/main/mpt-7b-storywriter-q5_1-ggjt.bin) | [mosaicml/mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | Q5_1   | GGJT        | V3             |

⚠️ Caution ⚠️: mpt-7b-storywriter is still under development!
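
As a rough guide to choosing between the quantization types above: in ggml, a Q4_0 block stores 32 weights in 18 bytes and a Q5_1 block stores 32 weights in 24 bytes, while F16 uses 2 bytes per weight. A minimal size-estimation sketch (the ~6.7B parameter count for MPT-7B is an approximation, and the estimate ignores the file header and any non-quantized tensors):

```python
def estimate_file_size_gib(n_params: float, quant: str) -> float:
    """Rough ggml file-size estimate from bytes-per-weight.

    Block layouts (ggml): Q4_0 = 18 bytes / 32 weights,
    Q5_1 = 24 bytes / 32 weights, F16 = 2 bytes / weight.
    """
    bytes_per_weight = {
        "f16": 2.0,
        "q4_0": 18 / 32,  # 4-bit quants + fp16 scale per block
        "q5_1": 24 / 32,  # 5-bit quants + fp16 scale and min per block
    }[quant]
    return n_params * bytes_per_weight / 1024**3

# MPT-7B has roughly 6.7e9 parameters
for quant in ("f16", "q4_0", "q5_1"):
    print(f"{quant}: ~{estimate_file_size_gib(6.7e9, quant):.1f} GiB")
```

The smaller quantized files trade some generation quality for memory and disk savings; Q5_1 sits between Q4_0 and F16 on both axes.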

## Usage 

### Python via [llm-rs](https://github.com/LLukas22/llm-rs-python):

#### Installation
Via pip: `pip install llm-rs`

#### Run inference
```python
from llm_rs import AutoModel

# Load the model; any file from the table above can be passed as `model_file`
model = AutoModel.from_pretrained("rustformers/mpt-7b-ggml", model_file="mpt-7b-q4_0-ggjt.bin")

# Generate text
print(model.generate("The meaning of life is"))
```
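Sampling can also be tuned. A sketch, assuming the `GenerationConfig` class exposed by the llm-rs Python bindings (field names follow the llm-rs documentation and may change between versions):

```python
from llm_rs import AutoModel, GenerationConfig

# Assumed llm-rs API: GenerationConfig field names may differ
# between versions -- check the llm-rs documentation.
generation_config = GenerationConfig(
    top_p=0.88,
    top_k=42,
    temperature=0.75,
    max_new_tokens=128,
)

model = AutoModel.from_pretrained("rustformers/mpt-7b-ggml", model_file="mpt-7b-q4_0-ggjt.bin")
print(model.generate("The meaning of life is", generation_config=generation_config))
```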
### Rust via [rustformers/llm](https://github.com/rustformers/llm): 

#### Installation
```shell
git clone --recurse-submodules https://github.com/rustformers/llm.git
cd llm
cargo build --release
```

#### Run inference
```shell
cargo run --release -- mpt infer -m path/to/model.bin -p "Tell me how cool the Rust programming language is:"
```

### C via [GGML](https://github.com/ggerganov/ggml)
Note: the upstream `GGML` example program only supports the GGML container type, not GGJT.
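
Which container a file uses can be checked from its first four bytes. A minimal sketch, assuming the magic values used by ggml at the time of writing (`0x67676d6c` for GGML and `0x67676a74` for GGJT, stored as little-endian uint32):

```python
import struct

# Container magic values as defined in ggml (little-endian uint32).
# These are assumptions based on the ggml source and may change.
MAGICS = {
    0x67676D6C: "ggml",  # legacy GGML container
    0x67676A74: "ggjt",  # GGJT container (supports mmap)
}

def container_type(header: bytes) -> str:
    """Identify the container from the first 4 bytes of a model file."""
    (magic,) = struct.unpack("<I", header[:4])
    return MAGICS.get(magic, "unknown")

# Example: the first four bytes of a GGML-container file
print(container_type(struct.pack("<I", 0x67676D6C)))  # prints "ggml"
```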

#### Installation

```shell
git clone https://github.com/ggerganov/ggml
cd ggml
mkdir build && cd build
cmake ..
make -j4 mpt
```

#### Run inference

```shell
./bin/mpt -m path/to/model.bin -p "The meaning of life is"
```