---
library_name: transformers
pipeline_tag: text-to-3d
---

# in2IN: Leveraging individual Information to Generate Human INteractions

<p style="display:flex; gap:5px" align="center">
  <a href="https://pabloruizponce.github.io/in2IN/"><img alt="Project" src="https://img.shields.io/badge/-Project%20Page-lightgrey?logo=Google%20Chrome&color=informational&logoColor=white"></a>
  <a href="https://arxiv.org/abs/2404.09988"><img alt="arXiv" src="https://img.shields.io/badge/arXiv-2404.09988-b31b1b.svg"></a>
  <a href="https://paperswithcode.com/sota/motion-synthesis-on-interhuman?p=in2in-leveraging-individual-information-to-1"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/in2in-leveraging-individual-information-to-1/motion-synthesis-on-interhuman"/></a>
</p>

<div style="text-align: center;">
    <img src="cover.png" align="center" width="100%">
</div>
<br/>

Generating human-human motion interactions conditioned on textual descriptions is useful in many areas, such as robotics, gaming, animation, and the metaverse. Alongside this utility comes the difficulty of modeling the high-dimensional inter-personal dynamics, and properly capturing the intra-personal diversity of interactions remains challenging. Current methods generate interactions with limited diversity of intra-person dynamics due to the limitations of the available datasets and conditioning strategies. To address this, we introduce <b>in2IN</b>, a novel diffusion model for human-human motion generation that is conditioned not only on the textual description of the overall interaction but also on individual descriptions of the actions performed by each person involved in the interaction. To train this model, we use a large language model to extend the InterHuman dataset with individual descriptions. As a result, <b>in2IN</b> achieves state-of-the-art performance on the InterHuman dataset. Furthermore, to increase intra-personal diversity beyond what existing interaction datasets offer, we propose <b>DualMDM</b>, a model composition technique that combines the motions generated with <b>in2IN</b> and the motions generated by a single-person motion prior pre-trained on HumanML3D. As a result, <b>DualMDM</b> generates motions with higher individual diversity and improves control over the intra-person dynamics while maintaining inter-personal coherence.

## Usage

**Input**: the textual description of the overall interaction and the two individual descriptions, one for each interactant.

**Output** (2, T, N, 3): an array containing the 3D coordinates of the N joints of each of the two interactants over a motion of T timesteps.

```python
from transformers import AutoModel

# trust_remote_code is required because in2IN ships custom modeling code on the Hub
model = AutoModel.from_pretrained("pabloruizponce/in2IN", trust_remote_code=True)

# textI: overall interaction description; texti1, texti2: individual descriptions
motion = model(textI, texti1, texti2)
```
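
For instance, a minimal end-to-end sketch; the prompt strings below are illustrative placeholders (not taken from the InterHuman dataset), and the output is assumed to be an array with the (2, T, N, 3) layout described above:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("pabloruizponce/in2IN", trust_remote_code=True)

# Illustrative prompts: one overall description plus one per interactant
textI = "Two people greet each other with a handshake."
texti1 = "One person extends their right hand and initiates the handshake."
texti2 = "The other person reaches out and shakes the offered hand."

motion = model(textI, texti1, texti2)
# Assumed layout per the description above:
# (interactant, timestep, joint, xyz coordinate)
print(motion.shape)  # -> (2, T, N, 3)
```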

## 📚 Citation

If you find our work helpful, please cite:

```bibtex
@InProceedings{Ruiz-Ponce_2024_CVPR,
    author    = {Ruiz-Ponce, Pablo and Barquero, German and Palmero, Cristina and Escalera, Sergio and Garc{\'\i}a-Rodr{\'\i}guez, Jos{\'e}},
    title     = {in2IN: Leveraging Individual Information to Generate Human INteractions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1941-1951}
}
```