---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- llm
- ggml
---
# GGML converted versions of [MosaicML's](https://huggingface.co/mosaicml) MPT Models
## CAUTION: MPT support is still under development and not finished!
- GGML implementation: [Replit + MPT](https://github.com/ggerganov/ggml/pull/145)
- Rustformers implementation: [Implement MPT Model](https://github.com/rustformers/llm/pull/218)

Once these implementations are complete, I will add instructions on how to run the models and update them if necessary!
## Converted Models:
| Name | Based on | Type |
|-|-|-|
| [mpt-7b-f16.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-f16.bin) | [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | fp16 |
| [mpt-7b-q4_0.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-q4_0.bin) | [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | int4 |
| [mpt-7b-chat-q4_0.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-chat-q4_0.bin) | [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) | int4 |
| [mpt-7b-instruct-q4_0.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-instruct-q4_0.bin) | [mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) | int4 |
| [mpt-7b-storywriter-q4_0.bin](https://huggingface.co/LLukas22/mpt-7b-ggml/blob/main/mpt-7b-storywriter-q4_0.bin) | [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | int4 |
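
Until the implementations linked above are finalized, the converted files can already be downloaded locally with the `huggingface_hub` client. The sketch below is a minimal example, assuming the repository id `LLukas22/mpt-7b-ggml` and one of the file names from the table above:

```python
# Minimal sketch: download one of the converted GGML files from this repository.
# Requires `pip install huggingface_hub`; repo id and file name are taken from the table above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="LLukas22/mpt-7b-ggml",
    filename="mpt-7b-q4_0.bin",  # any file name listed in the table works here
)
print(f"Model downloaded to: {local_path}")
```

The returned path can then be passed to whichever GGML-compatible runtime ends up supporting MPT (see the pull requests linked above).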