prithivMLmods committed (verified) · Commit 72b07f9 · 1 Parent(s): 341189e

Update README.md

README.md CHANGED
library_name: transformers
tags:
- VLMec:Vision-Language Model for extended reasoning
- text-generation-inference
---

# **Nemesis-VLMer-7B-0818**

> The **Nemesis-VLMer-7B-0818** model is a fine-tuned version of **Qwen2.5-VL-7B-Instruct**, optimized for **Reasoning**, **Content Analysis**, and **Visual Question Answering (VQA)**. Built on the Qwen2.5-VL architecture, it is trained on reasoning-oriented, analysis-rich datasets to strengthen multimodal comprehension, content interpretation, and visual question answering.

## Key Enhancements

* **Context-Aware Multimodal Reasoning and Linking**: Understands multimodal context and establishes connections across text, images, and structured elements.

* **Enhanced Content Analysis**: Efficiently interprets and analyzes complex content, ranging from structured text to multimodal information.

* **Visual Question Answering (VQA)**: Specialized for accurately answering visual and multimodal queries across diverse domains.

* **Advanced Reasoning Capabilities**: Optimized for logical, mathematical, and contextual reasoning tasks involving charts, tables, and diagrams.

* **Strong Performance Across Benchmarks**: Achieves competitive results on reasoning and visual QA benchmarks such as DocVQA, MathVista, RealWorldQA, and MTVQA.

* **Long-Video Understanding (20+ minutes)**: Supports detailed comprehension of long videos for reasoning, summarization, question answering, and multimodal analysis (a video-input sketch follows the quick-start example below).

* **Visually Grounded Device Interaction**: Enables operating mobile or robotic devices from visual inputs and text instructions, using contextual understanding and reasoning-driven decision making.

## Quick Start with Transformers 🤗

```python
# pip install qwen-vl-utils  (helper for preparing image/video inputs)
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the fine-tuned checkpoint and its processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Nemesis-VLMer-7B-0818", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Nemesis-VLMer-7B-0818")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "What reasoning can you infer from this image?"},
        ],
    }
]

# Build the chat prompt and collect the vision inputs referenced in the messages.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generate, then strip the prompt tokens before decoding.
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
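
The key enhancements above include understanding of videos longer than 20 minutes. Below is a minimal video-QA sketch that reuses the `model` and `processor` from the quick-start example; the local file path, `fps`, and `max_pixels` values are illustrative placeholders, not settings recommended by the model card.

```python
# Minimal video-QA sketch (assumes a local MP4; path and sampling settings are placeholders).
video_messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video.mp4",  # placeholder path
                "fps": 1.0,                            # sample roughly one frame per second
                "max_pixels": 360 * 420,               # cap the per-frame pixel budget
            },
            {"type": "text", "text": "Summarize the key events in this video."},
        ],
    }
]

# Same pipeline as above: build the prompt, gather vision inputs, generate, decode.
video_text = processor.apply_chat_template(
    video_messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(video_messages)
video_batch = processor(
    text=[video_text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

video_ids = model.generate(**video_batch, max_new_tokens=256)
video_answer = processor.batch_decode(
    [out[len(inp):] for inp, out in zip(video_batch.input_ids, video_ids)],
    skip_special_tokens=True,
)
print(video_answer)
```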

## Intended Use

This model is intended for:

* Context-aware multimodal reasoning and linking across diverse inputs.
* High-fidelity content analysis and interpretation of structured and unstructured data.
* Visual question answering (VQA) in educational, enterprise, and research applications.
* Reasoning-driven analysis of charts, graphs, tables, and other visual data representations.
* Extraction and LaTeX formatting of mathematical expressions for academic and professional use.
* Retrieval, reasoning, and summarization over long documents, slides, and multimodal sources (see the multi-image sketch after this list).
* Multilingual reasoning and structured content analysis for global use cases.
* Robotic or mobile automation with vision-guided, reasoning-based contextual interaction.

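Several of the intended uses involve reasoning over multiple pages or slides in a single query. Below is a minimal multi-image sketch that reuses the `model` and `processor` from the quick-start example; the slide paths and the prompt are illustrative placeholders, not part of the original card.

```python
# Multi-page / multi-image sketch: pass several slide or document-page images in one turn.
# The image paths below are placeholders.
doc_messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/slide_1.png"},
            {"type": "image", "image": "file:///path/to/slide_2.png"},
            {"type": "text", "text": "Summarize these slides and list the key figures they reference."},
        ],
    }
]

doc_text = processor.apply_chat_template(
    doc_messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(doc_messages)
doc_inputs = processor(
    text=[doc_text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

doc_ids = model.generate(**doc_inputs, max_new_tokens=256)
print(processor.batch_decode(
    [out[len(inp):] for inp, out in zip(doc_inputs.input_ids, doc_ids)],
    skip_special_tokens=True,
))
```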
## Limitations

* May show degraded performance on extremely low-quality or occluded images.
* Not optimized for real-time applications on low-resource or edge devices due to its computational demands.
* Variable accuracy on uncommon or low-resource languages and scripts.
* Long-video processing may require substantial memory and is not optimized for streaming applications.
* Visual token settings affect performance; suboptimal configurations can degrade results (see the resolution-control sketch after this list).
* In rare cases, outputs may contain hallucinated or contextually misaligned reasoning steps.
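
The last limitation, visual token settings, can be managed through the processor's resolution bounds. A minimal sketch follows, assuming the same checkpoint as above; the pixel budgets are the illustrative values from the Qwen2.5-VL usage notes, not values tuned for this fine-tune.

```python
from transformers import AutoProcessor

# Bound the number of visual tokens per image by constraining the pixel budget
# (each visual token corresponds to a 28x28 pixel patch group).
min_pixels = 256 * 28 * 28   # floor on image resolution
max_pixels = 1280 * 28 * 28  # ceiling on image resolution, capping per-image token count and memory
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Nemesis-VLMer-7B-0818",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)
```

Per-image overrides (`min_pixels` / `max_pixels` keys in the message's image dict) are also supported by `qwen_vl_utils`, which can help balance detail against memory on mixed-resolution inputs.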