# LLaVA / requirements.txt
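# PyTorch wheel index for CUDA 11.8 builds; the torch/torchvision pins below resolve from here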
--extra-index-url https://download.pytorch.org/whl/cu118
pip
fastapi
gradio==3.35.2
markdown2[all]
numpy
requests
tokenizers>=0.12.1
torch==2.0.1
torchvision==0.15.2
uvicorn
wandb
shortuuid
httpx==0.24.0
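# Pinned model/training libraries (DeepSpeed, PEFT, Transformers, Accelerate)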
deepspeed==0.9.5
peft==0.4.0
transformers==4.31.0
accelerate==0.21.0
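# bitsandbytes enables 8-bit/4-bit quantized loading (load_in_8bit / load_in_4bit in Transformers) so the 13B model fits on lower-memory GPUs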
bitsandbytes==0.41.0
scikit-learn==1.2.2
sentencepiece==0.1.99
einops==0.6.1
einops-exts==0.0.4
timm==0.6.13
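# gradio_client pinned alongside gradio==3.35.2 above (client library for the web UI)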
gradio_client==0.2.9