mlfu7 committed
Commit
080f5a6
1 Parent(s): e193e76

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +32 -0
  3. img/splash_figure_alt.png +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ img/splash_figure_alt.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,35 @@
  ---
  license: apache-2.0
  ---
+ # Aligning Touch, Vision, and Language for Multimodal Perception
+ by <a href="https://max-fu.github.io">Max (Letian) Fu</a>, <a href="https://www.linkedin.com/in/gaurav-datta/">Gaurav Datta*</a>, <a href="https://qingh097.github.io/">Huang Huang*</a>, <a href="https://autolab.berkeley.edu/people">William Chung-Ho Panitch*</a>, <a href="https://www.linkedin.com/in/jaimyn-drake/">Jaimyn Drake*</a>, <a href="https://joeaortiz.github.io/">Joseph Ortiz</a>, <a href="https://www.mustafamukadam.com/">Mustafa Mukadam</a>, <a href="https://scholar.google.com/citations?user=p6DCMrQAAAAJ&hl=en">Mike Lambeta</a>, <a href="https://lasr.org/">Roberto Calandra</a>, <a href="https://goldberg.berkeley.edu">Ken Goldberg</a> at UC Berkeley, Meta AI, and TU Dresden
+
+ [[Paper](#todo)] | [[Project Page](https://tvl.github.io/)] | [[Citation](#citation)]
+
+ <p align="center">
+ <img src="img/splash_figure_alt.png" width="800">
+ </p>
+
+ This repo contains the official checkpoints for *Aligning Touch, Vision, and Language for Multimodal Perception*.
+
+ The tactile encoders come in three sizes: ViT-Tiny, ViT-Small, and ViT-Base, all of which are stored in
+ ```bash
+ ckpt/tvl_enc
+ ```
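As a rough sketch (not part of the original README), these checkpoints can be inspected as ordinary PyTorch state dicts before wiring them into the model definitions from the GitHub repo; the filename below is hypothetical, so list `ckpt/tvl_enc` for the actual file names:

```python
# Minimal sketch: peek inside a tactile-encoder checkpoint.
# "tvl_enc_vit_base.pth" is a hypothetical filename; check ckpt/tvl_enc for the real ones.
import torch

ckpt = torch.load("ckpt/tvl_enc/tvl_enc_vit_base.pth", map_location="cpu")
# Some checkpoints nest the weights under a "model" key; fall back to the dict itself.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```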
+
+ The TVL-LLaMA models, the generative counterparts, are stored in
+ ```bash
+ ckpt/tvl_llama
+ ```
+
+ ## Inference
+ For zero-shot classification, we require [OpenCLIP](https://github.com/mlfoundations/open_clip) with the following configuration:
+ ```bash
+ CLIP_VISION_MODEL = "ViT-L-14"
+ CLIP_PRETRAIN_DATA = "datacomp_xl_s13b_b90k"
+ ```
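For reference, this configuration corresponds to OpenCLIP's standard loading API. A minimal, illustrative sketch follows; the candidate labels are made up, and this is generic OpenCLIP usage rather than the project's own inference script:

```python
# Minimal sketch: load the OpenCLIP backbone named above and encode candidate labels
# for zero-shot classification.  pip install open_clip_torch
import torch
import open_clip

CLIP_VISION_MODEL = "ViT-L-14"
CLIP_PRETRAIN_DATA = "datacomp_xl_s13b_b90k"

model, _, preprocess = open_clip.create_model_and_transforms(
    CLIP_VISION_MODEL, pretrained=CLIP_PRETRAIN_DATA
)
tokenizer = open_clip.get_tokenizer(CLIP_VISION_MODEL)

with torch.no_grad():
    # Illustrative label set; image/tactile features would be compared against these.
    text = tokenizer(["smooth glass", "rough fabric", "sticky tape"])
    text_features = model.encode_text(text)
    text_features /= text_features.norm(dim=-1, keepdim=True)

print(text_features.shape)  # torch.Size([3, 768]) for ViT-L-14
```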
+
+ For TVL-LLaMA, please request access to the pre-trained LLaMA-2 weights via this [form](https://llama.meta.com/llama-downloads/). In particular, we use `llama-2-7b` as the base model. The weights here contain the trained [adapter](https://arxiv.org/abs/2309.03905), the tactile encoder, and the vision encoder for ease of loading.
+
+ For complete information, please see the [GitHub repo](https://tvl.github.io) for instructions on pretraining, fine-tuning, and evaluation with these models.
img/splash_figure_alt.png ADDED

Git LFS Details

  • SHA256: e9748d48f10a2407cb84f08a9a726460ff7c78e1fa3c5a2bf29878cc80eb2146
  • Pointer size: 132 Bytes
  • Size of remote file: 1.82 MB