Text Generation
Transformers
PyTorch
English
llama
sft
Inference Endpoints
text-generation-inference
andreaskoepf committed
Commit 31f3350
1 Parent(s): 0319e91

update info regarding inference via tgi

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -92,12 +92,13 @@ perform safety testing and tuning tailored to their specific applications of the
 
  Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
 
- ## Note regarding inference with TGI
 
- During evaluation we noticed that this 70B model produced extremely poor outputs when it was loaded in 16-bit precision and sharded in [TGI](https://github.com/huggingface/text-generation-inference).
- In contrast, the model could be evaluated without problems using [vLLM](https://github.com/vllm-project/vllm).
- The model also worked decently well when loaded with TGI on a single GPU, nf4-quantized via [TimDettmers/bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
- We will get in touch with the TGI authors to find out why sharded 16-bit inference doesn't work as expected.
+ ## Inference via TGI
+
+ An early version of this model had an embedding count of 32,007, which was incompatible with sharding in [TGI](https://github.com/huggingface/text-generation-inference).
+ In the current version the embedding and lm_head weights have been padded to a multiple of 128 (by replicating the embedding of the unk-token, id 0).
+ Sharded inference with TGI should now work as expected.
+
 
 
  ## Configuration Details
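
For illustration, the padding described in the new section could be reproduced roughly as follows. This is a minimal sketch assuming a Hugging Face `transformers` causal LM; the checkpoint paths and the use of `resize_token_embeddings` are assumptions for the example, not taken from this commit.

```python
# Illustrative sketch only (not the script behind this commit): pad the token
# embedding and lm_head matrices so the vocab dimension is a multiple of 128,
# filling the new rows with copies of the unk-token (id 0) embedding.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("path/to/checkpoint")  # hypothetical path

old_vocab = model.get_input_embeddings().weight.shape[0]  # e.g. 32007
new_vocab = ((old_vocab + 127) // 128) * 128              # rounded up, e.g. 32128

# Resizes both the input embeddings and the lm_head to the new vocab size.
model.resize_token_embeddings(new_vocab)

with torch.no_grad():
    unk_id = 0
    emb = model.get_input_embeddings().weight
    head = model.get_output_embeddings().weight
    emb[old_vocab:] = emb[unk_id]    # replicate the unk embedding into the padding rows
    head[old_vocab:] = head[unk_id]

model.save_pretrained("path/to/checkpoint-padded")
```

Padding to a multiple of 128 keeps the vocabulary dimension evenly divisible across common tensor-parallel shard counts (2, 4, or 8 GPUs), which is what sharded TGI inference needs.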