Change use_cache to true in config.json to significantly boost inference performance

#1
by TheBloke - opened
No description provided.
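
For context: in the Transformers config format this flag is `use_cache`, which controls whether generation reuses cached key/value attention states instead of recomputing attention over the full prefix for every new token. A minimal sketch of making the same edit programmatically (the repo id below is a placeholder, not the actual model path):

```python
from transformers import AutoConfig

# Hypothetical repo id — substitute the real model path on the Hub.
repo_id = "openaccess-ai-collective/<model-name>"

# Load the model's existing configuration from the Hub.
config = AutoConfig.from_pretrained(repo_id)

# Enable the KV cache so generate() reuses past key/value states
# rather than re-running attention over the whole sequence each step.
config.use_cache = True

# Write the patched config.json locally; the PR makes the same
# one-line change directly in the repository.
config.save_pretrained("./patched-model")
```

With `use_cache` disabled, every decoding step recomputes attention over all previously generated tokens, so enabling it typically speeds up autoregressive inference considerably.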
winglian changed pull request status to merged
Open Access AI Collective org

thanks!
