Upload folder using huggingface_hub
- .DS_Store +0 -0
- README.md +36 -0
- SAM2_1SmallImageEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- SAM2_1SmallImageEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- SAM2_1SmallImageEncoderFLOAT16.mlpackage/Manifest.json +18 -0
- SAM2_1SmallMaskDecoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- SAM2_1SmallMaskDecoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- SAM2_1SmallMaskDecoderFLOAT16.mlpackage/Manifest.json +18 -0
- SAM2_1SmallPromptEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- SAM2_1SmallPromptEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- SAM2_1SmallPromptEncoderFLOAT16.mlpackage/Manifest.json +18 -0
.DS_Store
ADDED
Binary file (8.2 kB)
README.md
ADDED
@@ -0,0 +1,36 @@
---
license: apache-2.0
pipeline_tag: mask-generation
library_name: coreml
---

# SAM 2.1 Small Core ML

SAM 2 (Segment Anything in Images and Videos) is a collection of foundation models from FAIR that aim to solve promptable visual segmentation in images and videos. See the [SAM 2 paper](https://arxiv.org/abs/2408.00714) for more information.

This is the Core ML version of [SAM 2.1 Small](https://huggingface.co/facebook/sam2.1-hiera-small), and is suitable for use with the [SAM2 Studio demo app](https://github.com/huggingface/sam2-studio). It was converted in `float16` precision using [this fork](https://github.com/huggingface/segment-anything-2/tree/coreml-conversion) of the original code repository.

## Download

Install `huggingface-cli`:

```bash
brew install huggingface-cli
```

Then download the model packages:

```bash
huggingface-cli download --local-dir models apple/coreml-sam2.1-small
```
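Once downloaded, the three `.mlpackage` components (image encoder, prompt encoder, mask decoder) can be loaded individually. The following is a minimal sketch, assuming the packages sit under `./models` as in the command above and that `coremltools` is installed (loading requires macOS); the helper names are illustrative, not part of this repository.

```python
# Sketch: locating and loading the three SAM 2.1 Small Core ML components.
# Assumes the packages were downloaded to ./models as shown above.
from pathlib import Path

# The three pipeline components shipped in this repository.
COMPONENTS = ("ImageEncoder", "PromptEncoder", "MaskDecoder")


def package_path(component: str, root: str = "models") -> Path:
    """Build the .mlpackage path following this repo's naming scheme."""
    return Path(root) / f"SAM2_1Small{component}FLOAT16.mlpackage"


def load_components(root: str = "models") -> dict:
    """Load all three components with coremltools (requires macOS)."""
    import coremltools as ct

    return {c: ct.models.MLModel(str(package_path(c, root))) for c in COMPONENTS}


print(package_path("MaskDecoder"))  # → models/SAM2_1SmallMaskDecoderFLOAT16.mlpackage
```

Keeping the encoder, prompt encoder, and decoder as separate packages lets an app run the heavy image encoder once per image and re-run only the lightweight prompt encoder and mask decoder as the user adds prompts.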

## Citation

To cite the paper, model, or software, please use the following:

```bibtex
@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
  journal={arXiv preprint arXiv:2408.00714},
  url={https://arxiv.org/abs/2408.00714},
  year={2024}
}
```
SAM2_1SmallImageEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:736602a3130875072f843df123ce3ceff41ec2f917a867da56f7c2264d81a567
size 203631
SAM2_1SmallImageEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6128a58f10cbcb7797bd12a7ec1e05ab4e9f8c80bd14ed05bbc43a9c9935a9c3
size 81268288
SAM2_1SmallImageEncoderFLOAT16.mlpackage/Manifest.json
ADDED
@@ -0,0 +1,18 @@
{
  "fileFormatVersion": "1.0.0",
  "itemInfoEntries": {
    "4C20C7AA-F42B-4CCD-84C3-73C031A91D48": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Weights",
      "name": "weights",
      "path": "com.apple.CoreML/weights"
    },
    "DDCB1D63-C7BD-4A13-8EB5-D7151371105B": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Specification",
      "name": "model.mlmodel",
      "path": "com.apple.CoreML/model.mlmodel"
    }
  },
  "rootModelIdentifier": "DDCB1D63-C7BD-4A13-8EB5-D7151371105B"
}
SAM2_1SmallMaskDecoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3536b71674dc706381f42833b55627fd3d1f2c1b68a365b49255284b4f7b55c7
size 75167
SAM2_1SmallMaskDecoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dbc4910c434fee657ed9f001da7abcb467d2a37ac235099c6aa47628b122ec4e
size 10222400
SAM2_1SmallMaskDecoderFLOAT16.mlpackage/Manifest.json
ADDED
@@ -0,0 +1,18 @@
{
  "fileFormatVersion": "1.0.0",
  "itemInfoEntries": {
    "6FA6762D-69A1-4A0B-AB0D-512638FD7ECF": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Specification",
      "name": "model.mlmodel",
      "path": "com.apple.CoreML/model.mlmodel"
    },
    "DB82D069-C4C9-41FB-A178-262063485D28": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Weights",
      "name": "weights",
      "path": "com.apple.CoreML/weights"
    }
  },
  "rootModelIdentifier": "6FA6762D-69A1-4A0B-AB0D-512638FD7ECF"
}
SAM2_1SmallPromptEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a83c167d8bd63e80f86349a78c2ab0527ce97eca1f848a4ce57fe5351241fa3
size 20618
SAM2_1SmallPromptEncoderFLOAT16.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e29c6c65ef7754f8f6ed26155ca39ea703de3a7f576be5a7d3b8e29545059f31
size 2101056
SAM2_1SmallPromptEncoderFLOAT16.mlpackage/Manifest.json
ADDED
@@ -0,0 +1,18 @@
{
  "fileFormatVersion": "1.0.0",
  "itemInfoEntries": {
    "BE0329D0-1E5D-4FF9-8ECE-350FC8DE699D": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Weights",
      "name": "weights",
      "path": "com.apple.CoreML/weights"
    },
    "C1F60EF7-4F31-4243-8BE5-C107CB23EADF": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Specification",
      "name": "model.mlmodel",
      "path": "com.apple.CoreML/model.mlmodel"
    }
  },
  "rootModelIdentifier": "C1F60EF7-4F31-4243-8BE5-C107CB23EADF"
}