Unable to reproduce high quality arena-hard-auto results on GCP A100

#31
by noamgat - opened

Hello,
As part of trying this model out, I am attempting to reproduce its reported results on arena-hard-auto, where the 27b IT version reports a 57% win rate.
So, I set up a deployment on GCP according to the recommended k8s yaml, using the docker image:
us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu121.2-1.ubuntu2204.py310

I then collected responses for the 500 arena-hard prompts and ran the judgment on them. I got the following low-quality results:

tgi-google-gemma-2-27b-it | score: 3.8 | 95% CI: (-0.8, 0.7) | average #tokens: 1554

The 3.8% score is so low that I'm sure it's a problem with the deployment rather than the true strength of the model. However, many other people are reporting generation artifacts and overall low-quality outputs, so it seems there is no cookbook for setting up a functioning inference server that actually delivers this model's true capabilities.

Has anyone been able to create a satisfactory inference server running this model?

For me, when serving with the vLLM OpenAI-compatible API, adding the --enforce-eager flag to force eager execution fixed the issue (see https://github.com/vllm-project/vllm/blob/main/vllm/config.py).
docker run command example:

docker run --rm --runtime nvidia --gpus all  --env VLLM_ATTENTION_BACKEND=FLASHINFER vllm/vllm-openai:latest     --model google/gemma-2-2b-it     --tensor-parallel-size 2 --enforce-eager
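Once the server above is up, it exposes the standard OpenAI-style REST API. A minimal sketch of querying it, assuming the default port 8000 on localhost (the helper name `build_chat_request` is mine, not part of vLLM):

```python
# Sketch: build an OpenAI-style chat-completions request for the vLLM server.
# Assumes the docker command above is serving on http://localhost:8000.
import json

def build_chat_request(prompt, model="google/gemma-2-2b-it", max_tokens=1024):
    """Construct the JSON body for POST /v1/chat/completions."""
    return {
        "model": model,  # must match the --model flag passed to vLLM
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Explain attention in one sentence.")
body = json.dumps(payload)
# POST `body` to http://localhost:8000/v1/chat/completions with the
# header Content-Type: application/json (e.g. via requests.post).
```

If generations look degraded even through this API, double-check that the server really started with --enforce-eager, since the flag is what works around the attention issue discussed in this thread.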

You can try passing the attn_implementation='eager' argument, e.g. model = AutoModelForCausalLM.from_pretrained('google/gemma-2-27b-it', attn_implementation='eager'), if you use the transformers library directly.
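Spelled out as a runnable sketch with the transformers library (note the class is AutoModelForCausalLM; the dtype choice and the soft-capping explanation in the comments are my assumptions, not stated in this thread):

```python
# Hedged sketch of the suggestion above: load Gemma 2 with eager attention.
MODEL_ID = "google/gemma-2-27b-it"

# Gemma 2 applies logit soft-capping in attention, which some optimized
# attention backends did not support at release time; forcing the eager
# path is the workaround discussed in this thread.
EAGER_KWARGS = {"attn_implementation": "eager"}

def load_model():
    # Imported lazily so the sketch is inspectable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, **EAGER_KWARGS)
    return tokenizer, model
```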

I forgot to update this thread, that also worked for me.

noamgat changed discussion status to closed