iFlyBot committed
Commit: aedf1d9 · Parent(s): fe5d588

fix README.md
Files changed (1):
  1. README.md +8 -8
README.md CHANGED

@@ -3,19 +3,19 @@ license: mit
 ---


-# iFlyBotVLM
+# iFlyBot-VLM

 ## 🔥Introduction

-We introduce iFlyBotVLM, a general-purpose Vision-Language-Model (VLM) specifically engineered for the domain of Embodied Intelligence. The primary objective of this model is to bridge the cross-modal semantic gap between high-dimensional environmental perception and low-level robot motion control. It achieves this by abstracting complex scene information into an "Operational Language" that is body-agnostic and transferable, thus enabling seamless perception-to-action closed-loop coordination.
+We introduce iFlyBot-VLM, a general-purpose Vision-Language-Model (VLM) specifically engineered for the domain of Embodied Intelligence. The primary objective of this model is to bridge the cross-modal semantic gap between high-dimensional environmental perception and low-level robot motion control. It achieves this by abstracting complex scene information into an "Operational Language" that is body-agnostic and transferable, thus enabling seamless perception-to-action closed-loop coordination.

-The architecture of iFlyBotVLM is designed to realize four critical functional capabilities in the embodied domain:
+The architecture of iFlyBot-VLM is designed to realize four critical functional capabilities in the embodied domain:
 **🧭Spatial Understanding and Metric**: Provides the model with the capacity to understand spatial relationships and perform relative position estimation among objects in the environment.
 **🎯Interactive Target Grounding**: Supports diverse grounding mechanisms, including 2D/3D object detection in the visual modality, language-based object and spatial referring, and the prediction of critical object affordance regions.
 **🤖Action Abstraction and Control Parameter Generation**: Generates outputs directly relevant to the manipulation domain, providing grasp poses and manipulation trajectories.
 **📋Task Planning**: Leveraging the current scene Understanding, this module performs multi-step prediction to decompose complex tasks into a sequence of atomic skills, fundamentally supporting the robust execution of long-horizon tasks.

-We anticipate that iFlyBotVLM will serve as an efficient and scalable foundation model, driving the advancement of embodied AI from single-task capabilities toward generalist intelligent agents.
+We anticipate that iFlyBot-VLM will serve as an efficient and scalable foundation model, driving the advancement of embodied AI from single-task capabilities toward generalist intelligent agents.


 <div style="display: flex; gap: 1em; max-width: 100%;">

@@ -34,7 +34,7 @@ We anticipate that iFlyBotVLM will serve as an efficient and scalable foundation

 ## 🏗️Model Architecture

-iFlyBotVLM inherits the robust, three-stage "ViT-Projector-LLM" paradigm from established Vision-Language Models. It integrates a dedicated, incrementally pre-trained Visual Encoder with an advanced Language Model via a simple, randomly initialized MLP projector for efficient feature alignment.
+iFlyBot-VLM inherits the robust, three-stage "ViT-Projector-LLM" paradigm from established Vision-Language Models. It integrates a dedicated, incrementally pre-trained Visual Encoder with an advanced Language Model via a simple, randomly initialized MLP projector for efficient feature alignment.

 The core enhancement lies in the ViT's Positional Encoding (PE) layer. Instead of relying solely on the original 448 dimension PE, we employ Bicubic Interpolation to intelligently upsample the learned positional embeddings from 448 to an enriched dimension of 896. This novel approach, termed Dimension-Expanded Position Embedding (DEPE), provides a significantly more nuanced spatial context vector for each visual token. This dimensional enrichment allows the model to capture more complex positional and relative spatial information without increasing the sequence length, thereby enhancing the model's ability to perform fine-grained visual reasoning and detailed localization tasks.

@@ -42,19 +42,19 @@ The core enhancement lies in the ViT's Positional Encoding (PE) layer. Instead o

 ## 📊Model Performance

-iFlyBotVLM demonstrates superior performance across various challenging benchmarks.
+iFlyBot-VLM demonstrates superior performance across various challenging benchmarks.

 ![image/png](https://huggingface.co/datasets/iFlyBot/iFlyBotVLM-Repo/resolve/main/images/benchmark_performance.png)

 ![image/png](https://huggingface.co/datasets/iFlyBot/iFlyBotVLM-Repo/resolve/main/images/table-performances.png)

-iFlyBotVLM-8B achieves state-of-the-art (SOTA) or near-SOTA performance on ten spatial Understanding, spatial perception, and temporal task planning benchmarks: Where2Place, Refspatial-bench, ShareRobot-affordance, ShareRobot-trajectory, BLINK(spatial), EmbSpatial, ERQA, CVBench, SAT, EgoPlan2.
+iFlyBot-VLM-8B achieves state-of-the-art (SOTA) or near-SOTA performance on ten spatial Understanding, spatial perception, and temporal task planning benchmarks: Where2Place, Refspatial-bench, ShareRobot-affordance, ShareRobot-trajectory, BLINK(spatial), EmbSpatial, ERQA, CVBench, SAT, EgoPlan2.

 ## 🚀Quick Start

 ### Using 🤗 Transformers to Chat

-We provide an example code to run `iFlyBotVLM-8B` using `transformers`.
+We provide an example code to run `iFlyBot-VLM-8B` using `transformers`.

 > Please use transformers>=4.37.2 to ensure the model works normally.
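
The DEPE step described in the architecture hunk above (bicubic upsampling of the learned ViT position embeddings) appears only in prose. A minimal PyTorch sketch of the general technique follows; the grid sizes, the 14-pixel patch assumption, the absence of a class token, and the helper name `expand_position_embedding` are illustrative assumptions, not the model's actual implementation.

```python
import torch
import torch.nn.functional as F

def expand_position_embedding(pos_embed: torch.Tensor,
                              old_grid: int, new_grid: int) -> torch.Tensor:
    """Bicubically upsample learned ViT position embeddings.

    pos_embed: (1, old_grid * old_grid, C) learned embeddings (no class token assumed).
    Returns:   (1, new_grid * new_grid, C)
    """
    _, n, c = pos_embed.shape
    assert n == old_grid * old_grid, "expected a square patch grid"
    # (1, N, C) -> (1, C, H, W) so spatial interpolation is applied per channel
    grid = pos_embed.reshape(1, old_grid, old_grid, c).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=(new_grid, new_grid),
                         mode="bicubic", align_corners=False)
    # back to (1, N', C)
    return grid.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, c)

# Illustration only: if 448 and 896 refer to input resolutions and the patch size
# is 14 (both assumptions), the grid doubles from 32x32 to 64x64.
pe = torch.randn(1, 32 * 32, 1024)
pe_expanded = expand_position_embedding(pe, old_grid=32, new_grid=64)
print(pe_expanded.shape)  # torch.Size([1, 4096, 1024])
```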
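
The Quick Start code referenced on the last changed line falls outside this diff, so only the transformers>=4.37.2 note is visible. For orientation, here is a minimal loading sketch in the usual Hugging Face `transformers` style; the repository id `iFlyBot/iFlyBot-VLM-8B`, the dtype choice, and the commented-out chat call are assumptions, and the README's own example remains authoritative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical repository id; check the model card for the actual path.
MODEL_ID = "iFlyBot/iFlyBot-VLM-8B"

# trust_remote_code is typically required for VLMs that ship custom modeling code.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()

# The chat interface below is an assumption modeled on common open-source VLMs;
# refer to the README's Quick Start example for the exact signature.
# response = model.chat(tokenizer, pixel_values, "Describe the scene.",
#                       dict(max_new_tokens=256))
```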