# GGML converted versions of Together's RedPajama models

## Description

RedPajama-INCITE-Base-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION. The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and INCITE program.

## Converted Models

| Name | Based on | Type | Container | GGML Version |
|------|----------|------|-----------|--------------|
| RedPajama-INCITE-Base-3B-v1-f16.bin | togethercomputer/RedPajama-INCITE-Base-3B-v1 | F16 | GGML | V3 |
| RedPajama-INCITE-Base-3B-v1-q4_0.bin | togethercomputer/RedPajama-INCITE-Base-3B-v1 | Q4_0 | GGML | V3 |
| RedPajama-INCITE-Base-3B-v1-q4_0-ggjt.bin | togethercomputer/RedPajama-INCITE-Base-3B-v1 | Q4_0 | GGJT | V3 |
| RedPajama-INCITE-Base-3B-v1-q5_1.bin | togethercomputer/RedPajama-INCITE-Base-3B-v1 | Q5_1 | GGML | V3 |
| RedPajama-INCITE-Base-3B-v1-q5_1-ggjt.bin | togethercomputer/RedPajama-INCITE-Base-3B-v1 | Q5_1 | GGJT | V3 |
| RedPajama-INCITE-Chat-3B-v1-f16.bin | togethercomputer/RedPajama-INCITE-Chat-3B-v1 | F16 | GGML | V3 |
| RedPajama-INCITE-Chat-3B-v1-q4_0.bin | togethercomputer/RedPajama-INCITE-Chat-3B-v1 | Q4_0 | GGML | V3 |
| RedPajama-INCITE-Chat-3B-v1-q4_0-ggjt.bin | togethercomputer/RedPajama-INCITE-Chat-3B-v1 | Q4_0 | GGJT | V3 |
| RedPajama-INCITE-Chat-3B-v1-q5_1.bin | togethercomputer/RedPajama-INCITE-Chat-3B-v1 | Q5_1 | GGML | V3 |
| RedPajama-INCITE-Chat-3B-v1-q5_1-ggjt.bin | togethercomputer/RedPajama-INCITE-Chat-3B-v1 | Q5_1 | GGJT | V3 |
| RedPajama-INCITE-Instruct-3B-v1-f16.bin | togethercomputer/RedPajama-INCITE-Instruct-3B-v1 | F16 | GGML | V3 |
| RedPajama-INCITE-Instruct-3B-v1-q4_0.bin | togethercomputer/RedPajama-INCITE-Instruct-3B-v1 | Q4_0 | GGML | V3 |
| RedPajama-INCITE-Instruct-3B-v1-q4_0-ggjt.bin | togethercomputer/RedPajama-INCITE-Instruct-3B-v1 | Q4_0 | GGJT | V3 |
| RedPajama-INCITE-Instruct-3B-v1-q5_1.bin | togethercomputer/RedPajama-INCITE-Instruct-3B-v1 | Q5_1 | GGML | V3 |
| RedPajama-INCITE-Instruct-3B-v1-q5_1-ggjt.bin | togethercomputer/RedPajama-INCITE-Instruct-3B-v1 | Q5_1 | GGJT | V3 |
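To help pick a file from the table, here is a back-of-the-envelope size estimate for each type. This is an illustrative sketch, not part of any library: the bits-per-weight figures come from the nominal GGML block layouts (Q4_0 packs 32 weights into 18 bytes, Q5_1 into 24 bytes), the ~2.8 billion parameter count is approximate for the 3B models, and real files add a header and some non-quantized tensors, so treat the results as rough lower bounds.

```python
# Nominal bits per weight for each GGML tensor type (assumed block layouts;
# Q4_0: 18 bytes / 32 weights, Q5_1: 24 bytes / 32 weights).
BITS_PER_WEIGHT = {"f16": 16.0, "q4_0": 18 * 8 / 32, "q5_1": 24 * 8 / 32}

def approx_size_gib(n_params: float, qtype: str) -> float:
    """Approximate on-disk size in GiB for n_params weights of the given type."""
    return n_params * BITS_PER_WEIGHT[qtype] / 8 / 2**30

N_PARAMS = 2.8e9  # roughly 2.8B parameters for the 3B models (approximation)

for qtype in ("f16", "q4_0", "q5_1"):
    print(f"{qtype}: ~{approx_size_gib(N_PARAMS, qtype):.1f} GiB")
```

In short: expect the f16 file to be several times larger than either quantized variant, with Q5_1 trading a little extra size for quality over Q4_0.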

## Usage

### Python via llm-rs

#### Installation

Via pip:

```shell
pip install llm-rs
```

#### Run inference

```python
from llm_rs import AutoModel

# Load the model; use any file from the table above as `model_file`
model = AutoModel.from_pretrained(
    "rustformers/redpajama-3b-ggml",
    model_file="RedPajama-INCITE-Base-3B-v1-q4_0-ggjt.bin",
)

# Generate
print(model.generate("The meaning of life is"))
```
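If a download looks corrupted or you are unsure which container a local `.bin` uses, the first four bytes identify it. The sketch below is a standalone helper (not part of llm-rs), assuming the little-endian magic numbers used by ggml-family loaders: `0x67676d6c` ("ggml"), `0x67676d66` ("ggmf"), and `0x67676a74` ("ggjt", the mmap-able container of the `-ggjt` files above).

```python
import struct

# Little-endian magic numbers at the start of ggml-family model files
# (assumed values from the ggml loaders).
MAGICS = {
    0x67676D6C: "GGML",  # unversioned ggml container
    0x67676D66: "GGMF",  # versioned pre-mmap container
    0x67676A74: "GGJT",  # mmap-able container (the -ggjt files)
}

def container_type(path: str) -> str:
    """Read the first four bytes of a model file and name its container."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return MAGICS.get(magic, "unknown")
```

For example, `container_type("RedPajama-INCITE-Base-3B-v1-q4_0-ggjt.bin")` should report `GGJT` for a healthy download.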

### Using the local.ai GUI

#### Installation

Download the installer at [www.localai.app](https://www.localai.app).

#### Run inference

Download your preferred model and place it in the "models" directory, then start a chat session with your model directly from the interface.

### Rust via [rustformers/llm](https://github.com/rustformers/llm)

#### Installation

```shell
git clone --recurse-submodules https://github.com/rustformers/llm.git
cd llm
cargo build --release
```

#### Run inference

```shell
cargo run --release -- gptneox infer -m path/to/model.bin -p "Tell me how cool the Rust programming language is:"
```