---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- llm
- ggml
---


# GGML converted versions of [Mosaic's](https://huggingface.co/mosaicml) MPT Models


## CAUTION: MPT development is still ongoing and not finished!
- Rust & Python: for the Rustformers implementation, see [Implement MPT Model](https://github.com/rustformers/llm/pull/218)

Once these implementations are complete, I will add instructions on how to run the models and update them if necessary!


## Converted Models:


| Name   | Based on |  Type | Container |
|-|-|-|-|
| [mpt-7b-f16.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-f16.bin) |  [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | fp16 | GGML |
| [mpt-7b-q4_0.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-q4_0.bin) |  [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | int4 | GGML |
| [mpt-7b-q4_0-ggjt.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-q4_0-ggjt.bin) |  [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | int4 | GGJT |
| [mpt-7b-chat-f16.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-chat-f16.bin) |  [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) | fp16 | GGML |
| [mpt-7b-chat-q4_0.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-chat-q4_0.bin) |  [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) | int4 | GGML |
| [mpt-7b-chat-q4_0-ggjt.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-chat-q4_0-ggjt.bin) |  [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) | int4 | GGJT |
| [mpt-7b-instruct-f16.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-instruct-f16.bin) |  [mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) | fp16 | GGML |
| [mpt-7b-instruct-q4_0.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-instruct-q4_0.bin) |  [mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) | int4 | GGML |
| [mpt-7b-instruct-q4_0-ggjt.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-instruct-q4_0-ggjt.bin) |  [mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) | int4 | GGJT |
| [mpt-7b-storywriter-f16.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-storywriter-f16.bin) |  [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | fp16 | GGML |
| [mpt-7b-storywriter-q4_0.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-storywriter-q4_0.bin) |  [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | int4 | GGML |
| [mpt-7b-storywriter-q4_0-ggjt.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-storywriter-q4_0-ggjt.bin) |  [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | int4 | GGJT |
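The files above can also be fetched directly instead of through the file browser. A minimal sketch, assuming the standard Hugging Face `resolve/<revision>/<file>` download pattern (`download_url` is a hypothetical helper, not part of this repository):

```python
# Hypothetical helper: builds a direct download URL for one of the
# converted files listed above, assuming the standard Hugging Face
# "resolve" URL pattern for raw file downloads.
REPO = "LLukas22/mpt-7b-ggml"

def download_url(filename: str, revision: str = "main") -> str:
    """Return a direct download link for a file in this repository."""
    return f"https://huggingface.co/{REPO}/resolve/{revision}/{filename}"

print(download_url("mpt-7b-q4_0.bin"))
```

The resulting URL can be passed to `wget` or `curl -L` to download the model file.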


## Usage

### Rust & Python
#### TBD, see above!

### Via GGML
The `ggml` example binary only supports the GGML container type, not GGJT!
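If you are unsure which container a downloaded file uses, the first four bytes of the file identify the format. A minimal sketch (the magic values are assumptions based on the upstream ggml loaders; `container_type` is a hypothetical helper):

```python
import struct

# Assumed magic values: the file starts with a little-endian uint32
# identifying the container format.
GGML_MAGIC = 0x67676D6C  # spells "ggml"
GGJT_MAGIC = 0x67676A74  # spells "ggjt"

def container_type(path: str) -> str:
    """Report the container format of a converted model file
    by inspecting its 4-byte magic number."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    if magic == GGML_MAGIC:
        return "GGML"
    if magic == GGJT_MAGIC:
        return "GGJT"
    return "unknown"
```

This lets you verify a file before handing it to the example binary below.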

##### Installation

```shell
git clone https://github.com/ggerganov/ggml
cd ggml
mkdir build && cd build
cmake ..
make -j4 mpt
```

##### Run inference

```shell
./bin/mpt -m path/to/model.bin -p "The meaning of life is"
```