# Tokenize Anything via Prompting

[Ting Pan](https://github.com/PhyscalX/)<sup>1,2*</sup>,   [Lulu Tang]()<sup>2*</sup>,   [Xinlong Wang](https://www.xloong.wang/)<sup>2¶</sup>,   [Shiguang Shan](https://scholar.google.com/citations?user=Vkzd7MIAAAAJ&hl=en)<sup>1</sup>

<sup>1</sup>[ICT-CAS](http://english.ict.cas.cn/),   <sup>2</sup>[BAAI](https://www.baai.ac.cn/english.html)

<sup>*</sup>Equal Contribution, <sup>¶</sup>Project Lead
We present **T**okenize **A**nything via **P**rompting, a unified and promptable model capable of simultaneously segmenting, recognizing, and captioning objects within arbitrary regions, relying only on visual prompts (point, box, and sketch). The model is trained with exhaustive segmentation masks sourced from SA-1B, coupled with semantic priors from a pre-trained EVA-CLIP with 5 billion parameters.

## Installation

See [Github Page](https://github.com/baaivision/tokenize-anything).

## Models

### Model weights

Two versions of the model are available with different image encoders (a download sketch is given at the end of this page).

| Model | Description | Weights |
| ----- | ----------- | ------- |
| **tap_vit_l** | ViT-L TAP model | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/models/tap_vit_l_03f8ec.pkl) |
| **tap_vit_b** | ViT-B TAP model | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/models/tap_vit_b_b45cbf.pkl) |

### Concept weights

***Note***: You can generate these weights following the [Concept Guide](https://github.com/baaivision/tokenize-anything/blob/main/notebooks/concept.ipynb).

| Concept | Description | Weights |
| ------- | ----------- | ------- |
| **Merged-2560** | Merged concepts | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/concepts/merged_2560.pkl) |
| **LVIS-1203** | LVIS concepts | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/models/lvis_1203.pkl) |
| **COCO-80** | COCO concepts | [🤗 HF link](https://huggingface.co/BAAI/tokenize-anything/blob/main/models/coco_80.pkl) |

## License

[Apache License 2.0](LICENSE)

## Citation

```
@article{pan2023tap,
  title={Tokenize Anything via Prompting},
  author={Pan, Ting and Tang, Lulu and Wang, Xinlong and Shan, Shiguang},
  journal={arXiv preprint arXiv:2312.yyyyy},
  year={2023}
}
```

## Acknowledgement

We thank the repositories: [SAM](https://github.com/facebookresearch/segment-anything), [EVA](https://github.com/baaivision/EVA), [LLaMA](https://github.com/facebookresearch/llama), [FlashAttention](https://github.com/Dao-AILab/flash-attention), [Gradio](https://github.com/gradio-app/gradio), [Detectron2](https://github.com/facebookresearch/detectron2) and [CodeWithGPU](https://github.com/seetacloud/codewithgpu).
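
## Downloading weights (sketch)

The weight files listed in the tables above can also be fetched programmatically. The snippet below is a minimal sketch, assuming the `huggingface_hub` package is available (it is not a dependency stated in this page); the repository ID and file names are taken directly from the table links, while model construction itself is left to the repository's own API and notebooks.

```python
# Hedged sketch: download the TAP checkpoint and a concept-weight file
# from the Hugging Face Hub. Assumes `huggingface_hub` is installed.
from huggingface_hub import hf_hub_download

# File paths mirror the HF links in the tables above.
model_ckpt = hf_hub_download(
    repo_id="BAAI/tokenize-anything",
    filename="models/tap_vit_l_03f8ec.pkl",
)
concept_ckpt = hf_hub_download(
    repo_id="BAAI/tokenize-anything",
    filename="concepts/merged_2560.pkl",
)

print("model checkpoint:", model_ckpt)
print("concept weights:", concept_ckpt)

# Build the "tap_vit_l" model from `model_ckpt` and attach `concept_ckpt`
# following the installation instructions and notebooks in the GitHub repo.
```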