Text Generation
Transformers
PyTorch
English
llama
Inference Endpoints
text-generation-inference
Change cache = true in config.json to significantly boost inference performance (#1)
1959f62
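The commit above refers to enabling the key–value cache in the model's `config.json`, which lets the model reuse attention states from previous tokens during generation instead of recomputing them. In standard `transformers` Llama configs this field is named `use_cache`; the fragment below is a minimal sketch of the relevant setting, not the full config file:

```json
{
  "model_type": "llama",
  "use_cache": true
}
```

With `use_cache` set to `true`, autoregressive decoding only computes attention for the newest token at each step, which is the main source of the inference speedup the commit describes.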