mlfu7 committed
Commit 0ba2642
1 Parent(s): 4fb914e

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -2,7 +2,7 @@
 license: apache-2.0
 ---
 # Aligning Touch, Vision, and Language for Multimodal Perception
-by <a href="https://max-fu.github.io">Max (Letian) Fu</a>, <a href="https://www.linkedin.com/in/gaurav-datta/">Gaurav Datta*</a>, <a href="https://qingh097.github.io/">Huang Huang*</a>, <a href="https://autolab.berkeley.edu/people">William Chung-Ho Panitch*</a>, <a href="https://www.linkedin.com/in/jaimyn-drake/">Jaimyn Drake*</a>, <a href="https://joeaortiz.github.io/">Joseph Ortiz</a>, <a href="https://www.mustafamukadam.com/">Mustafa Mukadam</a>, <a href="https://scholar.google.com/citations?user=p6DCMrQAAAAJ&hl=en">Mike Lambeta</a>, <a href="https://lasr.org/">Roberto Calandra</a>, <a href="https://goldberg.berkeley.edu">Ken Goldberg</a> at UC Berkeley, Meta AI, and TU Dresden (*equal contribution).
+by <a href="https://max-fu.github.io">Max (Letian) Fu</a>, <a href="https://www.linkedin.com/in/gaurav-datta/">Gaurav Datta*</a>, <a href="https://qingh097.github.io/">Huang Huang*</a>, <a href="https://autolab.berkeley.edu/people">William Chung-Ho Panitch*</a>, <a href="https://www.linkedin.com/in/jaimyn-drake/">Jaimyn Drake*</a>, <a href="https://joeaortiz.github.io/">Joseph Ortiz</a>, <a href="https://www.mustafamukadam.com/">Mustafa Mukadam</a>, <a href="https://scholar.google.com/citations?user=p6DCMrQAAAAJ&hl=en">Mike Lambeta</a>, <a href="https://lasr.org/">Roberto Calandra</a>, <a href="https://goldberg.berkeley.edu">Ken Goldberg</a> at UC Berkeley, Meta AI, TU Dresden, and CeTI (*equal contribution).
 
 [[Paper](#todo)] | [[Project Page](https://tactile-vlm.github.io/)] | [[Citation](#citation)]
 
@@ -32,4 +32,4 @@ CLIP_PRETRAIN_DATA = "datacomp_xl_s13b_b90k"
 
 For TVL-LLaMA, please request access to the pre-trained LLaMA-2 from this [form](https://llama.meta.com/llama-downloads/). In particular, we use `llama-2-7b` as the base model. The weights here contain the trained [adapter](https://arxiv.org/abs/2309.03905), the tactile encoder, and the vision encoder for ease of loading.
-For the complete info, please take a look at the [GitHub repo](https://tactile-vlm.github.io/) to see instructions on pretraining, fine-tuning, and evaluation with these models.
+For the complete info, please take a look at the [GitHub repo](https://tactile-vlm.github.io/) to see instructions on pretraining, fine-tuning, and evaluation with these models.
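As a rough illustration of how the checkpoint files described in the README might be fetched for loading, here is a minimal sketch using `huggingface_hub`, the library named in the commit message. The repo id below is a placeholder assumption, not taken from this page, and actually building the TVL-LLaMA model requires the code from the project's GitHub repo plus the separately requested LLaMA-2 base weights.

```python
# Minimal sketch: download the checkpoint files from the Hugging Face Hub.
# Assumption: "<namespace>/<tvl-llama-repo>" is a placeholder repo id, not the
# real one; loading the weights into TVL-LLaMA is done with the project's own
# code, which is not shown here.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="<namespace>/<tvl-llama-repo>")
print("Checkpoint files downloaded to:", local_dir)
```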