Commit cee017a (parent: 4ff42a4) by amitfin

model_type: "llava_llama" => "llava"


```
ValueError: The checkpoint you are trying to load has model type `llava_llama` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```

Files changed (1)
  1. config.json +1 -1

config.json CHANGED

```diff
@@ -46,7 +46,7 @@
   "mm_vision_select_layer": -2,
   "mm_vision_tower": "openai/clip-vit-large-patch14-336",
   "mm_vision_tower_lr": 2e-06,
-  "model_type": "llava_llama",
+  "model_type": "llava",
   "num_attention_heads": 32,
   "num_hidden_layers": 32,
   "num_key_value_heads": 8,
```
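For anyone who has already downloaded the checkpoint, the same one-line change can be applied locally instead of re-downloading. A minimal sketch (the function name and path are illustrative, not part of this repo):

```python
import json
from pathlib import Path

def patch_model_type(config_path: str, new_type: str = "llava") -> None:
    """Rewrite the model_type field of a checkpoint's config.json in place."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    # Replace the unrecognized architecture name with one Transformers knows.
    config["model_type"] = new_type
    path.write_text(json.dumps(config, indent=2))

# Usage (adjust the path to your local checkpoint directory):
# patch_model_type("path/to/checkpoint/config.json")
```

After patching, `AutoConfig`/`AutoModel` should resolve the architecture without raising the `ValueError` above, assuming the installed Transformers version supports `llava`.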