bababababooey committed
Commit
ffd11f3
1 Parent(s): eab6e7f

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -281,6 +281,8 @@ extra_gated_eu_disallowed: true
 
 *test of swapping the language model in 3.2. i used v000000/L3-8B-Stheno-v3.2-abliterated*
 
+*see swapper/hotswap.py if you want to make your own*
+
 The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text \+ images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks.
 
 **Model Developer**: Meta
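
The `swapper/hotswap.py` script itself is not part of this diff, so the following is only a minimal sketch of how such a language-model swap could work, not a reproduction of the author's script. It assumes the transformers 4.45-era Mllama layout (`MllamaForConditionalGeneration` with a `language_model` submodule and a `cross_attention_layers` list in the text config), and it uses `meta-llama/Llama-3.2-11B-Vision-Instruct` as the base plus the donor model named in the README.

```python
# Hypothetical sketch: graft a Llama-3-8B finetune into Llama 3.2-Vision's
# text stack. Not necessarily what swapper/hotswap.py does.
import torch
from transformers import AutoModelForCausalLM, MllamaForConditionalGeneration

vision = MllamaForConditionalGeneration.from_pretrained(
    "meta-llama/Llama-3.2-11B-Vision-Instruct", torch_dtype=torch.bfloat16
)
donor = AutoModelForCausalLM.from_pretrained(
    "v000000/L3-8B-Stheno-v3.2-abliterated", torch_dtype=torch.bfloat16
)

text_cfg = vision.config.text_config
# Mllama interleaves gated cross-attention layers among ordinary
# self-attention decoder layers; the config lists their indices.
cross_idx = set(text_cfg.cross_attention_layers)
self_idx = [i for i in range(text_cfg.num_hidden_layers) if i not in cross_idx]

tgt_layers = vision.language_model.model.layers
src_layers = donor.model.layers
assert len(self_idx) == len(src_layers), "donor layer count must match"

# Copy the donor's decoder layers onto the self-attention slots, leaving
# the vision cross-attention layers (which have no text-only counterpart)
# untouched.
for tgt_i, src_i in zip(self_idx, range(len(src_layers))):
    tgt_layers[tgt_i].load_state_dict(src_layers[src_i].state_dict())

with torch.no_grad():
    # Mllama's embedding table reserves extra rows (e.g. the <|image|>
    # token) beyond the donor's vocabulary, so copy only the shared rows.
    tgt_embed = vision.language_model.model.embed_tokens.weight
    src_embed = donor.model.embed_tokens.weight
    tgt_embed[: src_embed.shape[0]].copy_(src_embed)
    vision.language_model.model.norm.weight.copy_(donor.model.norm.weight)
    vision.language_model.lm_head.weight.copy_(donor.lm_head.weight)

vision.save_pretrained("Llama-3.2-11B-Vision-swapped")  # hypothetical output path
```

The cross-attention layers are skipped because they carry the image conditioning and only exist in the vision model; everything the donor can meaningfully contribute lives in the self-attention layers, embeddings, final norm, and LM head, which share shapes with a standard Llama-3-8B.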