TigerBot

A cutting-edge foundation for your very own LLM.

🌐 TigerBot • 🤗 Hugging Face

This is a 4-bit EXL2 quantization of tigerbot-70b-chat-v2.

It was quantized to 4 bits using exllamav2: https://github.com/turboderp/exllamav2

See the TigerBot GitHub repository for how to download and use this model: https://github.com/TigerResearch/TigerBot

Here are the commands to clone TigerBot and install its dependencies.

conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt

Inference with the command-line interface

Infer with exllamav2:

# install exllamav2
git clone https://github.com/turboderp/exllamav2
cd exllamav2
pip install -r requirements.txt

# infer command
CUDA_VISIBLE_DEVICES=0 python other_infer/exllamav2_hf_infer.py --model_path TigerResearch/tigerbot-70b-chat-v2-4bit-exl2
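The inference script above formats each user query into TigerBot's instruction-style chat template before generation. As a rough illustration, here is a minimal sketch of that prompt construction; the `### Instruction:` / `### Response:` markers follow the templates used in the TigerBot repository's inference code, but the helper function and its signature are hypothetical, so verify against the repo before relying on them.

```python
# Hypothetical sketch of TigerBot's instruction-style prompt format.
# The marker strings follow the TigerBot repo's inference templates;
# the helper itself is illustrative, not the repo's actual code.
TOK_INS = "\n\n### Instruction:\n"
TOK_RES = "\n\n### Response:\n"


def build_prompt(query, history=None):
    """Concatenate prior (instruction, response) turns, then append the new query."""
    prompt = ""
    for q, r in history or []:
        prompt += TOK_INS + q + TOK_RES + r
    # Leave the final response slot open for the model to complete.
    return prompt + TOK_INS + query + TOK_RES


print(repr(build_prompt("Hello")))
```

The model then generates text after the final `### Response:` marker, which the script decodes and returns as the chat reply.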