mlfu7 committed
Commit: 9835273
Parent: 0ba2642

Upload README.md with huggingface_hub

Files changed (1): README.md (+2 −2)
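The commit message says the README was uploaded with `huggingface_hub`. As a minimal sketch of how such a commit is typically scripted (not a record of the author's actual command; the `repo_id` below is a hypothetical placeholder):

```python
# Sketch: upload a README to a Hugging Face Hub repo via huggingface_hub.
# The repo_id is a placeholder, not confirmed by this commit page.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token saved by `huggingface-cli login`
api.upload_file(
    path_or_fileobj="README.md",      # local file to upload
    path_in_repo="README.md",         # destination path inside the repo
    repo_id="mlfu7/tvl-checkpoints",  # hypothetical placeholder repo ID
    commit_message="Upload README.md with huggingface_hub",
)
```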
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: apache-2.0
 ---
-# Aligning Touch, Vision, and Language for Multimodal Perception
+# A Touch, Vision, and Language Dataset for Multimodal Alignment
 by <a href="https://max-fu.github.io">Max (Letian) Fu</a>, <a href="https://www.linkedin.com/in/gaurav-datta/">Gaurav Datta*</a>, <a href="https://qingh097.github.io/">Huang Huang*</a>, <a href="https://autolab.berkeley.edu/people">William Chung-Ho Panitch*</a>, <a href="https://www.linkedin.com/in/jaimyn-drake/">Jaimyn Drake*</a>, <a href="https://joeaortiz.github.io/">Joseph Ortiz</a>, <a href="https://www.mustafamukadam.com/">Mustafa Mukadam</a>, <a href="https://scholar.google.com/citations?user=p6DCMrQAAAAJ&hl=en">Mike Lambeta</a>, <a href="https://lasr.org/">Roberto Calandra</a>, <a href="https://goldberg.berkeley.edu">Ken Goldberg</a> at UC Berkeley, Meta AI, TU Dresden, and CeTI (*equal contribution).
 
 [[Paper](#todo)] | [[Project Page](https://tactile-vlm.github.io/)] | [[Citation](#citation)]
@@ -11,7 +11,7 @@ by <a href="https://max-fu.github.io">Max (Letian) Fu</a>, <a href="https://www.
 </p>
 
 
-This repo contains the official checkpoints for *A Touch, Vision, and Language Dataset for Multimodal Perception*.
+This repo contains the official checkpoints for *A Touch, Vision, and Language Dataset for Multimodal Alignment*.
 
 The tactile encoders come in three different sizes: ViT-Tiny, ViT-Small, and ViT-Base, all of which are stored in
 ```bash
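The README context above describes checkpoints for three tactile encoder sizes (ViT-Tiny, ViT-Small, ViT-Base), but the diff truncates before showing where they are stored. As a hedged sketch of fetching one such checkpoint from the Hub, assuming a hypothetical repo ID and filename (neither appears in this diff):

```python
# Sketch: download a tactile-encoder checkpoint from the Hugging Face Hub.
# Both repo_id and filename are hypothetical placeholders, not taken from this diff.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="mlfu7/tvl-checkpoints",   # placeholder repo ID
    filename="tvl_enc_vit_small.pth",  # placeholder checkpoint filename
)
print(ckpt_path)  # local cache path of the downloaded checkpoint
```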