Loading Model on AWS G5.4xlarge Instance Results in 'killed' Message

#2
by Tejasram - opened

I'm trying to implement a RAG system using this model on an AWS g5.4xlarge instance with the following configuration:
vCPUs: 16
GPU: NVIDIA A10G, 24 GB

This is the code I'm using:

from transformers import AutoTokenizer, pipeline

model_path = "BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf"
tokenizer_path = "mistralai/Mixtral-8x7B-v0.1"

# Load the tokenizer from the base-model repo
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)

pipe = pipeline(
    "text-generation",  # Mixtral is a causal LM, not a text2text model
    model=model_path,
    tokenizer=tokenizer,
    trust_remote_code=True,
    max_new_tokens=1000,
)

However, when I run this, the process gets killed.

IST Austria Distributed Algorithms and Systems Lab org

I haven't worked with AWS, so I'm not sure exactly what "killed" means there, but it's most likely the Linux kernel's OOM killer terminating the process because you don't have enough CPU RAM to load the model.
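On Linux (which EC2 instances run), a bare "Killed" message with no Python traceback usually comes from the kernel's out-of-memory killer, and the kernel log records the event. A quick way to check (the grep pattern is just a heuristic; `dmesg` may require sudo):

```shell
# Look for an OOM-killer entry in the kernel log after the process dies.
# Typical lines look like "Out of memory: Killed process 1234 (python)".
dmesg | grep -iE "out of memory|killed process" || true
```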
One thing you could do is install the latest accelerate in your environment:

pip install git+https://github.com/huggingface/accelerate.git@main

and then load the model with the extra kwarg device_map="cuda". That would cut RAM requirements significantly.
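Concretely, the fix is one extra kwarg in the pipeline call. A minimal sketch (the helper function is just for illustration; the actual model load is left commented out because it downloads the full checkpoint and needs the GPU instance, plus the `aqlm` package for this quantized model):

```python
# Assumes: pip install transformers aqlm, plus accelerate from git (see above).


def build_pipeline_kwargs(model_path: str, tokenizer_path: str) -> dict:
    """Collect the pipeline kwargs. device_map='cuda' lets accelerate
    place weights directly on the GPU instead of staging the whole
    model in CPU RAM first, which is what triggers the OOM kill."""
    return {
        "task": "text-generation",  # Mixtral is a causal LM
        "model": model_path,
        "tokenizer": tokenizer_path,
        "trust_remote_code": True,
        "max_new_tokens": 1000,
        "device_map": "cuda",  # the extra kwarg that cuts host-RAM usage
    }


kwargs = build_pipeline_kwargs(
    "BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf",
    "mistralai/Mixtral-8x7B-v0.1",
)
# from transformers import pipeline
# pipe = pipeline(**kwargs)  # run this on the g5.4xlarge itself
```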
