Upload folder using huggingface_hub

- .DS_Store +0 -0
- README.md +36 -0
- SAM2_1TinyImageEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- SAM2_1TinyImageEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- SAM2_1TinyImageEncoderFLOAT16.mlpackage/Manifest.json +18 -0
- SAM2_1TinyMaskDecoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- SAM2_1TinyMaskDecoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- SAM2_1TinyMaskDecoderFLOAT16.mlpackage/Manifest.json +18 -0
- SAM2_1TinyPromptEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- SAM2_1TinyPromptEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- SAM2_1TinyPromptEncoderFLOAT16.mlpackage/Manifest.json +18 -0
.DS_Store
ADDED
Binary file (8.2 kB)
README.md
ADDED
@@ -0,0 +1,36 @@
---
license: apache-2.0
pipeline_tag: mask-generation
library_name: coreml
---

# SAM 2.1 Tiny Core ML

SAM 2 (Segment Anything in Images and Videos) is a collection of foundation models from FAIR that aim to solve promptable visual segmentation in images and videos. See the [SAM 2 paper](https://arxiv.org/abs/2408.00714) for more information.

This is the Core ML version of [SAM 2.1 Tiny](https://huggingface.co/facebook/sam2.1-hiera-tiny), suitable for use with the [SAM2 Studio demo app](https://github.com/huggingface/sam2-studio). It was converted to `float16` precision using [this fork](https://github.com/huggingface/segment-anything-2/tree/coreml-conversion) of the original code repository.

## Download

Install `huggingface-cli`:

```bash
brew install huggingface-cli
```

Then download the model packages:

```bash
huggingface-cli download --local-dir models apple/coreml-sam2.1-tiny
```

## Citation

To cite the paper, model, or software, please use the following:

```
@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
  journal={arXiv preprint arXiv:2408.00714},
  url={https://arxiv.org/abs/2408.00714},
  year={2024}
}
```
SAM2_1TinyImageEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6cbc50301ee3ff4a9366083f9647e1f06762759542d8dd0fac394ebc3682cce7
size 154372
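Each `model.mlmodel` and `weight.bin` in this commit is stored as a Git LFS pointer like the one above: three `key value` lines giving the spec version, a `sha256` object id, and the byte size. A minimal sketch of reading those fields with the Python standard library (the `parse_lfs_pointer` helper is hypothetical, for illustration only):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split the 'key value' lines of a git-lfs pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The image encoder's model.mlmodel pointer, as shown above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6cbc50301ee3ff4a9366083f9647e1f06762759542d8dd0fac394ebc3682cce7
size 154372
"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)
print(algo, int(info["size"]))  # sha256 154372
```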
SAM2_1TinyImageEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eab96eb8ff35720c79eedc0cac2a4ef32d685f9c994c39736027078528c48a97
size 67069504
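Because each pointer records a `sha256` oid and a byte size, a downloaded blob can be checked for corruption locally. A minimal sketch using only `hashlib` (the `verify_blob` helper is hypothetical; real Git LFS clients perform this check themselves, and the demo uses a tiny in-memory blob rather than the real 67 MB weights):

```python
import hashlib

def verify_blob(data: bytes, expected_oid: str, expected_size: int) -> bool:
    """Return True if raw bytes match a git-lfs pointer's sha256 oid and size."""
    digest = "sha256:" + hashlib.sha256(data).hexdigest()
    return digest == expected_oid and len(data) == expected_size

# Demo on a small in-memory blob.
blob = b"example weights"
oid = "sha256:" + hashlib.sha256(blob).hexdigest()
print(verify_blob(blob, oid, len(blob)))  # True
```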
SAM2_1TinyImageEncoderFLOAT16.mlpackage/Manifest.json
ADDED
@@ -0,0 +1,18 @@
{
  "fileFormatVersion": "1.0.0",
  "itemInfoEntries": {
    "02E92A73-442D-4E0A-B919-FAD54731C33F": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Specification",
      "name": "model.mlmodel",
      "path": "com.apple.CoreML/model.mlmodel"
    },
    "4EE95287-6F9E-4C5D-A95A-8B89889CBAE6": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Weights",
      "name": "weights",
      "path": "com.apple.CoreML/weights"
    }
  },
  "rootModelIdentifier": "02E92A73-442D-4E0A-B919-FAD54731C33F"
}
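The `Manifest.json` above is how an `.mlpackage` locates its contents: `itemInfoEntries` maps item UUIDs to paths relative to the package's `Data` directory, and `rootModelIdentifier` names the entry holding the model specification. A minimal sketch of resolving the root model path with the standard `json` module, assuming the manifest layout shown above:

```python
import json

# The image encoder's Manifest.json, inlined for a self-contained demo.
manifest_text = """
{
  "fileFormatVersion": "1.0.0",
  "itemInfoEntries": {
    "02E92A73-442D-4E0A-B919-FAD54731C33F": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Specification",
      "name": "model.mlmodel",
      "path": "com.apple.CoreML/model.mlmodel"
    },
    "4EE95287-6F9E-4C5D-A95A-8B89889CBAE6": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Weights",
      "name": "weights",
      "path": "com.apple.CoreML/weights"
    }
  },
  "rootModelIdentifier": "02E92A73-442D-4E0A-B919-FAD54731C33F"
}
"""

manifest = json.loads(manifest_text)
# Look up the entry that rootModelIdentifier points at.
root = manifest["itemInfoEntries"][manifest["rootModelIdentifier"]]
print(root["path"])  # com.apple.CoreML/model.mlmodel
```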
SAM2_1TinyMaskDecoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4601f302d4c6936e15de3a22089c2afe1fa009ef703f82147ff829b4be677577
size 75167
SAM2_1TinyMaskDecoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5a8635981199fa1199007ed6798c61a326288548b74553b3c2ddb932fcdc8de
size 10222400
SAM2_1TinyMaskDecoderFLOAT16.mlpackage/Manifest.json
ADDED
@@ -0,0 +1,18 @@
{
  "fileFormatVersion": "1.0.0",
  "itemInfoEntries": {
    "9085E4BD-5C48-424E-A062-C26A403EEA69": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Weights",
      "name": "weights",
      "path": "com.apple.CoreML/weights"
    },
    "C27B6C59-C9B2-4D24-BFF8-8FE9AA1D4A75": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Specification",
      "name": "model.mlmodel",
      "path": "com.apple.CoreML/model.mlmodel"
    }
  },
  "rootModelIdentifier": "C27B6C59-C9B2-4D24-BFF8-8FE9AA1D4A75"
}
SAM2_1TinyPromptEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a83c167d8bd63e80f86349a78c2ab0527ce97eca1f848a4ce57fe5351241fa3
size 20618
SAM2_1TinyPromptEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af466cf28ef8838f409c2bfd8cc0049b9efbf9db335d60a57dbfc5160af883f2
size 2101056
SAM2_1TinyPromptEncoderFLOAT16.mlpackage/Manifest.json
ADDED
@@ -0,0 +1,18 @@
{
  "fileFormatVersion": "1.0.0",
  "itemInfoEntries": {
    "26A25183-CFBB-4EA8-AB34-E218CB0F807B": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Weights",
      "name": "weights",
      "path": "com.apple.CoreML/weights"
    },
    "91D18253-4482-4B98-BB15-E4E236E8299C": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Specification",
      "name": "model.mlmodel",
      "path": "com.apple.CoreML/model.mlmodel"
    }
  },
  "rootModelIdentifier": "91D18253-4482-4B98-BB15-E4E236E8299C"
}