Text Generation
Transformers
PyTorch
English
llama
text-generation-inference
Inference Endpoints
Change use_cache = true in config.json to significantly boost inference performance
b8eb946
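The commit above refers to the `use_cache` flag in the model's Hugging Face `config.json`; when enabled, past key/value states are reused during autoregressive generation instead of being recomputed at every step. A minimal illustrative fragment (the fields other than `use_cache` are typical for a llama checkpoint, not copied from this repository):

```json
{
  "model_type": "llama",
  "torch_dtype": "float16",
  "use_cache": true
}
```

The same setting can also be toggled at load time via `AutoConfig` or passed as `use_cache=True` to `generate()` in the transformers library, without editing the file on disk.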