Update config.json

#5 by aismlv - opened

Fixes the warning below when using multiple GPUs. The parameter is not utilised by Llava (and is set to `false` for Llama).

2023-12-14 19:03:27,491 - accelerate.utils.modeling - WARNING - The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
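
For context, a minimal sketch of how the warning surfaces, assuming the repo is llava-hf/llava-1.5-7b-hf and the config field in question is `tie_word_embeddings` (neither is named explicitly in this thread):

```python
# Hypothetical reproduction sketch, not part of the PR itself.
# Assumes the repo id and the `tie_word_embeddings` field; the PR only says
# "the parameter" in config.json is changed.
from transformers import AutoConfig, LlavaForConditionalGeneration

repo_id = "llava-hf/llava-1.5-7b-hf"  # assumed repo id

config = AutoConfig.from_pretrained(repo_id)
print(config.tie_word_embeddings)  # with this PR merged: False

# With device_map="auto", accelerate builds a device map across the available
# GPUs. If the config claims tied weights but the model has no actually tied
# parameters, accelerate emits the "model weights are not tied" warning above.
model = LlavaForConditionalGeneration.from_pretrained(repo_id, device_map="auto")
```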
ybelkada (Llava Hugging Face org)

Makes sense, thanks!

ybelkada changed pull request status to merged
