Dedicated endpoints issue with v2.5 compared with v2

#1
by trungnx26 - opened

Hi team,

Today, I tried to test version 2.5 with dedicated endpoints. In version 2, I was able to set Max Batch Prefill Tokens to 8000 on an A10G GPU with 24GB of VRAM, and it worked well. However, in version 2.5, even when I lowered it to 4000, it still showed there wasn't enough capacity.

Then I changed it back to the default setting of 2048, even though I was using Nvidia Tesla T4s with 64GB total, but I still couldn't create the endpoint (max_input_length: 1024, max_prefill_tokens: 2048, max_total_tokens: 1512).

Update: I'm not sure what changed, but it looks like HF made an update recently, and now the endpoint is working fine. I tried both A10G and T4, with token settings of 8000, 4000, and 2000 in the Container Configuration.
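For anyone reproducing this, here is a minimal sketch of creating such an endpoint with `huggingface_hub` and setting the same token limits via the Container Configuration env vars. The endpoint name, instance type/size labels, and image tag are assumptions, not the exact setup used here.

```python
# Hypothetical sketch: create a dedicated endpoint for SeaLLM v2.5 with explicit
# TGI token limits. Instance type/size, image tag, and env values are assumptions.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "seallm-v2-5-test",                    # hypothetical endpoint name
    repository="SeaLLMs/SeaLLM-7B-v2.5",
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_type="nvidia-a10g",           # assumed instance identifier
    instance_size="x1",                    # assumed size label
    custom_image={
        "health_route": "/health",
        "url": "ghcr.io/huggingface/text-generation-inference:latest",
        "env": {
            "MODEL_ID": "/repository",
            "MAX_INPUT_LENGTH": "1024",        # same values as the failing config above
            "MAX_BATCH_PREFILL_TOKENS": "2048",
            "MAX_TOTAL_TOKENS": "1512",
        },
    },
)
endpoint.wait()   # blocks until the endpoint is running (or raises on failure)
print(endpoint.url)
```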

{"timestamp":"2024-04-17T05:14:17.054355Z","level":"ERROR","message":"Server error: Expected q_dtype == torch::kFloat16 || ((is_sm8x || is_sm90) && q_dtype == torch::kBFloat16) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)","target":"text_generation_client","filename":"router/client/src/lib.rs","line_number":33,"span":{"name":"warmup"},"spans":[{"max_batch_size":"None","max_input_length":1024,"max_prefill_tokens":2048,"max_total_tokens":1512,"name":"warmup"},{"name":"warmup"}]}
Error: Warmup(Generation("Expected q_dtype == torch::kFloat16 || ((is_sm8x || is_sm90) && q_dtype == torch::kBFloat16) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)"))
{"timestamp":"2024-04-17T05:14:17.186614Z","level":"ERROR","fields":{"message":"Webserver Crashed"},"target":"text_generation_launcher"}
{"timestamp":"2024-04-17T05:14:17.186630Z","level":"INFO","fields":{"message":"Shutting down shards"},"target":"text_generation_launcher"}
{"timestamp":"2024-04-17T05:14:17.463758Z","level":"INFO","fields":{"message":"Shard terminated"},"target":"text_generation_launcher","span":{"rank":3,"name":"shard-manager"},"spans":[{"rank":3,"name":"shard-manager"}]}
{"timestamp":"2024-04-17T05:14:17.626196Z","level":"INFO","fields":{"message":"Shard terminated"},"target":"text_generation_launcher","span":{"rank":0,"name":"shard-manager"},"spans":[{"rank":0,"name":"shard-manager"}]}
{"timestamp":"2024-04-17T05:14:17.628364Z","level":"INFO","fields":{"message":"Shard terminated"},"target":"text_generation_launcher","span":{"rank":2,"name":"shard-manager"},"spans":[{"rank":2,"name":"shard-manager"}]}
{"timestamp":"2024-04-17T05:14:17.777650Z","level":"INFO","fields":{"message":"Shard terminated"},"target":"text_generation_launcher","span":{"rank":1,"name":"shard-manager"},"spans":[{"rank":1,"name":"shard-manager"}]}
Error: WebserverFailed

SeaLLMs - Language Models for Southeast Asian Languages org
edited Apr 17

Hi @trungnx26 , I'm not familiar with endpoints, so I can't tell what's going on from this log. But I can give you some hints:

  1. Make sure the latest transformers is installed (4.40+).
  2. v2.5 has a massive 256,000-token vocabulary, which runs out of memory much more easily than v2, so you may need to lower the context length further (see the rough estimate below). Let me know if that works.
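As a rough back-of-the-envelope illustration of point 2 (the hidden size, the v2 vocabulary size, and the batch/sequence values are assumptions for the arithmetic, not measured numbers):

```python
# Back-of-the-envelope estimate of extra memory from a 256K vocabulary.
# Hidden size, v2 vocab size, batch and sequence length are illustrative assumptions.
BYTES_FP16 = 2

def embedding_and_lm_head_bytes(vocab_size: int, hidden_size: int) -> int:
    # Input embedding + LM head (assuming untied weights), fp16/bf16 parameters.
    return 2 * vocab_size * hidden_size * BYTES_FP16

def logits_bytes(batch: int, seq_len: int, vocab_size: int) -> int:
    # Full-sequence logits materialized during prefill/warmup.
    return batch * seq_len * vocab_size * BYTES_FP16

v25_vocab = 256_000   # SeaLLM v2.5 vocabulary (from the comment above)
v2_vocab = 32_000     # assumed v2 vocabulary size
hidden = 4096         # assumed hidden size for a ~7B model

print("weights, v2.5:", embedding_and_lm_head_bytes(v25_vocab, hidden) / 1e9, "GB")  # ~4.2 GB
print("weights, v2  :", embedding_and_lm_head_bytes(v2_vocab, hidden) / 1e9, "GB")   # ~0.5 GB

# Logits for a single 2048-token prefill:
print("logits, v2.5 :", logits_bytes(1, 2048, v25_vocab) / 1e9, "GB")  # ~1.0 GB
print("logits, v2   :", logits_bytes(1, 2048, v2_vocab) / 1e9, "GB")   # ~0.13 GB
```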

Update:
Error: Warmup(Generation("Expected q_dtype == torch::kFloat16 || ((is_sm8x || is_sm90) && q_dtype == torch::kBFloat16) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)"))

This doesn't seem to be an OOM error.
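The quoted check comes from flash attention: bfloat16 is only accepted on sm8x/sm90 GPUs (compute capability 8.x/9.0, e.g. A10G/A100/H100), while the T4 is 7.5, so bf16 weights would trip it there. A minimal sketch of picking a dtype per GPU before loading a model (the repository name and loading call are illustrative, not what the endpoint image does internally):

```python
# Sketch: choose an attention-friendly dtype based on GPU compute capability.
# The flash-attention check in the error above accepts bf16 only on sm80+ GPUs;
# on a T4 (sm75), fp16 is the safe choice.
import torch
from transformers import AutoModelForCausalLM  # illustrative; not the endpoint's internal loader

major, minor = torch.cuda.get_device_capability()
dtype = torch.bfloat16 if major >= 8 else torch.float16
print(f"compute capability {major}.{minor} -> loading in {dtype}")

model = AutoModelForCausalLM.from_pretrained(
    "SeaLLMs/SeaLLM-7B-v2.5",   # assumed repository name
    torch_dtype=dtype,
    device_map="auto",
)
```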

Hi @nxphi47 , I'm not sure what changed, but it looks like HF made an update recently, and now the endpoint is working fine. I tried both A10G and T4, with token settings of 8000, 4000, and 2000 in the Container Configuration.

The only problem I'm having now is that when I use the endpoint's API, I can only input about 820 tokens. In v2 (with the same settings), I could input more than 8,000 tokens in one go. I'm still happy with v2.
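If it helps narrow this down, a quick check is to count the prompt's tokens with the model tokenizer and call the endpoint directly, to see whether the cut-off matches the configured max_input_length (the endpoint URL below is a placeholder and the repository name is an assumption):

```python
# Sketch: count prompt tokens and query the dedicated endpoint directly.
# The endpoint URL is a placeholder; replace it with your own.
from huggingface_hub import InferenceClient
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5")  # assumed repo name
prompt = "Summarize the following articles: ..."                     # your real input here

n_tokens = len(tokenizer(prompt)["input_ids"])
print(f"prompt length: {n_tokens} tokens")  # compare against the endpoint's max_input_length

client = InferenceClient(model="https://<your-endpoint>.endpoints.huggingface.cloud")
# pass token="hf_..." to InferenceClient if the endpoint is protected
output = client.text_generation(prompt, max_new_tokens=256)
print(output)
```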

Update: I tried it in the demo at https://damo-nlp-sg.github.io/SeaLLMs/; it works well with 8K tokens, and the results/logic are better than v2. My test was summarizing 4 articles from a newspaper. I don't know why the dedicated endpoint is limited.

SeaLLMs - Language Models for Southeast Asian Languages org

@trungnx26 Our demo runs on an A100; you can try an A100 for your endpoints.

Hi @nxphi47 , it works now with an A100 and even on lower-end GPUs like the T4. I'm still confused about what happened, but now everything is fine. Thank you.

trungnx26 changed discussion status to closed
