DBRX-Instruct evaluation failed, likely due to model size (132B params)
Hi! We recently merged the official DbrxModel implementation into transformers>=4.40.0, and the HF Hub models databricks/dbrx-base and databricks/dbrx-instruct have been updated accordingly.
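For anyone who wants to try it, loading now works out of the box; here's a minimal sketch (assuming transformers>=4.40.0, accelerate installed, and enough GPU memory):

```python
# Minimal sketch: loading DBRX via the native transformers implementation.
# Assumes transformers>=4.40.0 and accelerate; memory figures are rough estimates.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    torch_dtype=torch.bfloat16,  # bf16: ~2 bytes/param, so ~264GB for 132B params
    device_map="auto",           # shard the weights across all available GPUs
)
```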
I tried to submit an eval request for databricks/dbrx-instruct, but it failed for some reason. I suspect the eval hardware needs to be bigger to accommodate the large model size (132B params). It should definitely work on an 8x80GB system with bfloat16 precision; that's how I've been running evals locally.
If you could help eval this model manually, we would very much appreciate it! When I ran the eval locally, I got an Open LLM average of around 74.5%, about the same as Command-R+.
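For reference, my local run was along these lines, using lm-evaluation-harness's Python API (a rough sketch; the task names and settings below are my approximation of the leaderboard suite, not its exact harness config):

```python
# Rough sketch of a local Open LLM-style eval with lm-evaluation-harness
# (pip install lm-eval). The task list approximates the leaderboard suite.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=databricks/dbrx-instruct,dtype=bfloat16,parallelize=True",
    tasks=["arc_challenge", "hellaswag", "mmlu",
           "truthfulqa_mc2", "winogrande", "gsm8k"],
    batch_size="auto",
)
print(results["results"])
```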
Hi @alozowski, I think the resubmission for dbrx-instruct failed: https://huggingface.co/datasets/open-llm-leaderboard/requests/blob/main/databricks/dbrx-instruct_eval_request_False_bfloat16_Original.json
```json
{
  "model": "databricks/dbrx-instruct",
  "base_model": "",
  "revision": "main",
  "private": false,
  "precision": "bfloat16",
  "params": 131.597,
  "architectures": "DbrxForCausalLM",
  "weight_type": "Original",
  "status": "FAILED",
  "submitted_time": "2024-04-19T07:36:33Z",
  "model_type": "\ud83d\udcac : chat models (RLHF, DPO, IFT, ...)",
  "job_id": "4054064",
  "job_start_time": "2024-04-29T11:57:23.745963"
}
```
Could you help take a look at the logs?
Hi @abhi-db!
Unfortunately, according to the logs, the model failed due to a job preemption; sometimes our research cluster can be full. I've resubmitted your model, so please check out the request file. If the model fails again, ping me here and I will run a manual evaluation 👍
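If you want to check the request file programmatically, something like this works (a small sketch with huggingface_hub; the repo and filename come from the link above):

```python
# Sketch: download the leaderboard request file and print its current status.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="open-llm-leaderboard/requests",
    repo_type="dataset",
    filename="databricks/dbrx-instruct_eval_request_False_bfloat16_Original.json",
)
print(json.load(open(path))["status"])  # e.g. PENDING / RUNNING / FINISHED / FAILED
```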