domain | framework | functionality | api_name | api_call | api_arguments | python_environment_requirements | example_code | performance | description |
---|---|---|---|---|---|---|---|---|---|
Computer Vision Image Classification | Hugging Face Transformers | Image Classification | convnextv2_huge.fcmae_ft_in1k | timm.create_model('convnextv2_huge.fcmae_ft_in1k') | {'pretrained': 'True'} | ['timm'] | from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('convnextv2_huge.fcmae_ft_in1k', pretrained=True)
model = model.eval()
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0))
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) | {'dataset': 'imagenet-1k', 'accuracy': 86.256} | A ConvNeXt-V2 image classification model. Pretrained with a fully convolutional masked autoencoder framework (FCMAE) and fine-tuned on ImageNet-1k. |
Computer Vision Image Classification | Hugging Face Transformers | Image Classification, Feature Map Extraction, Image Embeddings | convnext_base.fb_in1k | timm.create_model('convnext_base.fb_in1k') | {'pretrained': 'True', 'features_only': 'True', 'num_classes': '0'} | ['timm'] | ['from urllib.request import urlopen', 'from PIL import Image', 'import timm', "img = Image.open(urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))", "model = timm.create_model('convnext_base.fb_in1k', pretrained=True)", 'model = model.eval()', 'data_config = timm.data.resolve_model_data_config(model)', 'transforms = timm.data.create_transform(**data_config, is_training=False)', 'output = model(transforms(img).unsqueeze(0))'] | {'dataset': 'imagenet-1k', 'accuracy': '83.82%'} | A ConvNeXt image classification model pretrained on ImageNet-1k by paper authors. It can be used for image classification, feature map extraction, and image embeddings. |
Computer Vision Image Classification | Hugging Face Transformers | Image Classification | timm/mobilenetv3_large_100.ra_in1k | timm.create_model('mobilenetv3_large_100.ra_in1k') | {'pretrained': 'True'} | {'timm': 'latest'} | from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilenetv3_large_100.ra_in1k', pretrained=True)
model = model.eval()
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) | {'dataset': 'imagenet-1k', 'accuracy': 'Not provided'} | A MobileNet-v3 image classification model. Trained on ImageNet-1k in timm using recipe template described below. Recipe details: RandAugment RA recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as B recipe in ResNet Strikes Back. RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging. Step (exponential decay w/ staircase) LR schedule with warmup. |
Computer Vision Object Detection | Hugging Face Transformers | Transformers | microsoft/table-transformer-detection | TableTransformerForObjectDetection.from_pretrained('microsoft/table-transformer-detection') | image | transformers | from transformers import pipeline; table_detector = pipeline('object-detection', model='microsoft/table-transformer-detection'); results = table_detector(image) | {'dataset': 'PubTables1M', 'accuracy': 'Not provided'} | Table Transformer (DETR) model trained on PubTables1M for detecting tables in documents. Introduced in the paper PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents by Smock et al. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | facebook/detr-resnet-50 | DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50') | {'pretrained_model_name': 'facebook/detr-resnet-50'} | ['transformers', 'torch', 'PIL', 'requests'] | from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = DetrImageProcessor.from_pretrained('facebook/detr-resnet-50')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50')
inputs = processor(images=image, return_tensors='pt')
outputs = model(**inputs) | {'dataset': 'COCO 2017 validation', 'accuracy': '42.0 AP'} | DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | hustvl/yolos-tiny | YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny') | {'images': 'image', 'return_tensors': 'pt'} | ['transformers', 'PIL', 'requests'] | from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-tiny')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny')
inputs = feature_extractor(images=image, return_tensors='pt')
outputs = model(**inputs)
logits = outputs.logits
bboxes = outputs.pred_boxes | {'dataset': 'COCO 2017 validation', 'accuracy': '28.7 AP'} | YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN). The model is trained using a bipartite matching loss: one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a no object as class and no bounding box as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | facebook/detr-resnet-101 | DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101') | ['image'] | ['transformers', 'torch', 'PIL', 'requests'] | from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = DetrImageProcessor.from_pretrained('facebook/detr-resnet-101')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101')
inputs = processor(images=image, return_tensors='pt')
outputs = model(**inputs) | {'dataset': 'COCO 2017', 'accuracy': '43.5 AP'} | DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. |
Computer Vision Object Detection | Hugging Face Transformers | zero-shot-object-detection | google/owlvit-base-patch32 | OwlViTForObjectDetection.from_pretrained('google/owlvit-base-patch32') | {'texts': 'List of text queries', 'images': 'Image to be processed'} | transformers | import requests
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection
processor = OwlViTProcessor.from_pretrained('google/owlvit-base-patch32')
model = OwlViTForObjectDetection.from_pretrained('google/owlvit-base-patch32')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
texts = [['a photo of a cat', 'a photo of a dog']]
inputs = processor(text=texts, images=image, return_tensors='pt')
outputs = model(**inputs)
target_sizes = torch.Tensor([image.size[::-1]])
results = processor.post_process(outputs=outputs, target_sizes=target_sizes) | {'dataset': 'COCO and OpenImages', 'accuracy': 'Not specified'} | OWL-ViT is a zero-shot text-conditioned object detection model that uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. The model can be used to query an image with one or multiple text queries. |
Computer Vision Object Detection | Hugging Face Transformers | Table Extraction | keremberke/yolov8m-table-extraction | YOLO('keremberke/yolov8m-table-extraction') | {'image': 'URL or local path to the image'} | ['ultralyticsplus==0.0.23', 'ultralytics==8.0.21'] | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8m-table-extraction')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'table-extraction', 'accuracy': 0.952} | A YOLOv8 model for table extraction in images, capable of detecting both bordered and borderless tables. Trained using the keremberke/table-extraction dataset. |
Computer Vision Object Detection | Hugging Face Transformers | Detect Bordered and Borderless tables in documents | TahaDouaji/detr-doc-table-detection | DetrForObjectDetection.from_pretrained('TahaDouaji/detr-doc-table-detection') | ['images', 'return_tensors', 'threshold'] | ['transformers', 'torch', 'PIL', 'requests'] | from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests
image = Image.open(IMAGE_PATH)
processor = DetrImageProcessor.from_pretrained('TahaDouaji/detr-doc-table-detection')
model = DetrForObjectDetection.from_pretrained('TahaDouaji/detr-doc-table-detection')
inputs = processor(images=image, return_tensors='pt')
outputs = model(**inputs)
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]
for score, label, box in zip(results['scores'], results['labels'], results['boxes']):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
) | {'dataset': 'ICDAR2019 Table Dataset', 'accuracy': 'Not provided'} | detr-doc-table-detection is a model trained to detect both Bordered and Borderless tables in documents, based on facebook/detr-resnet-50. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | hustvl/yolos-small | YolosForObjectDetection.from_pretrained('hustvl/yolos-small') | {'model_name': 'hustvl/yolos-small'} | {'packages': ['transformers', 'PIL', 'requests']} | {'import': ['from transformers import YolosFeatureExtractor, YolosForObjectDetection', 'from PIL import Image', 'import requests'], 'url': 'http://images.cocodataset.org/val2017/000000039769.jpg', 'image': 'Image.open(requests.get(url, stream=True).raw)', 'feature_extractor': "YolosFeatureExtractor.from_pretrained('hustvl/yolos-small')", 'model': "YolosForObjectDetection.from_pretrained('hustvl/yolos-small')", 'inputs': "feature_extractor(images=image, return_tensors='pt')", 'outputs': 'model(**inputs)', 'logits': 'outputs.logits', 'bboxes': 'outputs.pred_boxes'} | {'dataset': 'COCO 2017 validation', 'accuracy': '36.1 AP'} | YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Fang et al. and first released in this repository. YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN). |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | facebook/detr-resnet-101-dc5 | DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101-dc5') | {'image': 'Image.open(requests.get(url, stream=True).raw)', 'return_tensors': 'pt'} | ['transformers', 'PIL', 'requests'] | from transformers import DetrFeatureExtractor, DetrForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-dc5')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101-dc5')
inputs = feature_extractor(images=image, return_tensors='pt')
outputs = model(**inputs)
logits = outputs.logits
bboxes = outputs.pred_boxes | {'dataset': 'COCO 2017 validation', 'accuracy': 'AP 44.9'} | DETR (End-to-End Object Detection) model with ResNet-101 backbone (dilated C5 stage). The model is trained on COCO 2017 object detection dataset and achieves an average precision (AP) of 44.9 on the COCO 2017 validation set. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | deformable-detr | DeformableDetrForObjectDetection.from_pretrained('SenseTime/deformable-detr') | ['images', 'return_tensors'] | ['transformers', 'torch', 'PIL', 'requests'] | from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('SenseTime/deformable-detr')
model = DeformableDetrForObjectDetection.from_pretrained('SenseTime/deformable-detr')
inputs = processor(images=image, return_tensors='pt')
outputs = model(**inputs) | {'dataset': 'COCO 2017', 'accuracy': 'Not provided'} | Deformable DETR model with ResNet-50 backbone trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper Deformable DETR: Deformable Transformers for End-to-End Object Detection by Zhu et al. and first released in this repository. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8m-hard-hat-detection | YOLO('keremberke/yolov8m-hard-hat-detection') | {'image': 'URL or local path to the image'} | ['ultralyticsplus==0.0.24', 'ultralytics==8.0.23'] | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8m-hard-hat-detection')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'hard-hat-detection', 'accuracy': 0.811} | A YOLOv8 model for detecting hard hats in images. The model can distinguish between 'Hardhat' and 'NO-Hardhat' classes. It can be used to ensure safety compliance in construction sites or other industrial environments where hard hats are required. |
Computer Vision Object Detection | Hugging Face Transformers | License Plate Detection | keremberke/yolov5m-license-plate | yolov5.load('keremberke/yolov5m-license-plate') | {'conf': 0.25, 'iou': 0.45, 'agnostic': False, 'multi_label': False, 'max_det': 1000, 'img': 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg', 'size': 640, 'augment': True} | pip install -U yolov5 | ['import yolov5', "model = yolov5.load('keremberke/yolov5m-license-plate')", 'model.conf = 0.25', 'model.iou = 0.45', 'model.agnostic = False', 'model.multi_label = False', 'model.max_det = 1000', "img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'", 'results = model(img, size=640)', 'results = model(img, augment=True)', 'predictions = results.pred[0]', 'boxes = predictions[:, :4]', 'scores = predictions[:, 4]', 'categories = predictions[:, 5]', 'results.show()', "results.save(save_dir='results/')"] | {'dataset': 'keremberke/license-plate-object-detection', 'accuracy': 0.988} | A YOLOv5 model for license plate detection trained on a custom dataset. The model can detect license plates in images with high accuracy. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8m-valorant-detection | YOLO('keremberke/yolov8m-valorant-detection') | {'conf': 0.25, 'iou': 0.45, 'agnostic_nms': False, 'max_det': 1000} | pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8m-valorant-detection')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'valorant-object-detection', 'accuracy': 0.965} | A YOLOv8 model for object detection in Valorant game, trained on a custom dataset. It detects dropped spike, enemy, planted spike, and teammate objects. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8m-csgo-player-detection | YOLO('keremberke/yolov8m-csgo-player-detection') | {'image': 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'} | ultralyticsplus==0.0.23 ultralytics==8.0.21 | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8m-csgo-player-detection')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'csgo-object-detection', 'accuracy': 0.892} | An object detection model trained to detect Counter-Strike: Global Offensive (CS:GO) players. The model is based on the YOLOv8 architecture and can identify 'ct', 'cthead', 't', and 'thead' labels. |
Computer Vision Object Detection | Hugging Face Transformers | Table Extraction | keremberke/yolov8s-table-extraction | YOLO('keremberke/yolov8s-table-extraction') | {'conf': 0.25, 'iou': 0.45, 'agnostic_nms': False, 'max_det': 1000, 'image': 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'} | pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8s-table-extraction')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'table-extraction', 'accuracy': 0.984} | A YOLOv8 model for table extraction in documents, capable of detecting bordered and borderless tables. Trained on the table-extraction dataset, the model achieves a mAP@0.5 of 0.984 on the validation set. |
Computer Vision Object Detection | Hugging Face Transformers | zero-shot-object-detection | google/owlvit-large-patch14 | OwlViTForObjectDetection.from_pretrained('google/owlvit-large-patch14') | {'model_name': 'google/owlvit-large-patch14'} | ['torch', 'transformers', 'PIL', 'requests'] | ['import requests', 'from PIL import Image', 'import torch', 'from transformers import OwlViTProcessor, OwlViTForObjectDetection', "processor = OwlViTProcessor.from_pretrained('google/owlvit-large-patch14')", "model = OwlViTForObjectDetection.from_pretrained('google/owlvit-large-patch14')", "url = 'http://images.cocodataset.org/val2017/000000039769.jpg'", 'image = Image.open(requests.get(url, stream=True).raw)', "texts = [['a photo of a cat', 'a photo of a dog']]", "inputs = processor(text=texts, images=image, return_tensors='pt')", 'outputs = model(**inputs)', 'target_sizes = torch.Tensor([image.size[::-1]])', 'results = processor.post_process(outputs=outputs, target_sizes=target_sizes)', 'i = 0', 'text = texts[i]', "boxes, scores, labels = results[i]['boxes'], results[i]['scores'], results[i]['labels']", 'score_threshold = 0.1', 'for box, score, label in zip(boxes, scores, labels):', '    box = [round(i, 2) for i in box.tolist()]', '    if score >= score_threshold:', '        print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")'] | {'dataset': 'COCO', 'accuracy': 'Not specified'} | OWL-ViT is a zero-shot text-conditioned object detection model that can be used to query an image with one or multiple text queries. It uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. OWL-ViT is trained on publicly available image-caption data and fine-tuned on publicly available object detection datasets such as COCO and OpenImages. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8m-nlf-head-detection | YOLO('keremberke/yolov8m-nlf-head-detection') | {'conf': 0.25, 'iou': 0.45, 'agnostic_nms': False, 'max_det': 1000, 'image': 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'} | pip install ultralyticsplus==0.0.24 ultralytics==8.0.23 | ['from ultralyticsplus import YOLO, render_result', "model = YOLO('keremberke/yolov8m-nlf-head-detection')", "model.overrides['conf'] = 0.25", "model.overrides['iou'] = 0.45", "model.overrides['agnostic_nms'] = False", "model.overrides['max_det'] = 1000", "image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'", 'results = model.predict(image)', 'print(results[0].boxes)', 'render = render_result(model=model, image=image, result=results[0])', 'render.show()'] | {'dataset': 'nfl-object-detection', 'accuracy': 0.287} | A YOLOv8 model trained for head detection in American football. The model is capable of detecting helmets, blurred helmets, difficult helmets, partial helmets, and sideline helmets. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8m-forklift-detection | YOLO('keremberke/yolov8m-forklift-detection') | {'image': 'URL or local path to the image'} | ['ultralyticsplus==0.0.23', 'ultralytics==8.0.21'] | ['from ultralyticsplus import YOLO, render_result', "model = YOLO('keremberke/yolov8m-forklift-detection')", "model.overrides['conf'] = 0.25", "model.overrides['iou'] = 0.45", "model.overrides['agnostic_nms'] = False", "model.overrides['max_det'] = 1000", "image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'", 'results = model.predict(image)', 'print(results[0].boxes)', 'render = render_result(model=model, image=image, result=results[0])', 'render.show()'] | {'dataset': 'forklift-object-detection', 'accuracy': 0.846} | A YOLOv8 model for detecting forklifts and persons in images. |
Computer Vision Object Detection | Hugging Face Transformers | zero-shot-object-detection | google/owlvit-base-patch16 | OwlViTForObjectDetection.from_pretrained('google/owlvit-base-patch16') | ['texts', 'images'] | ['requests', 'PIL', 'torch', 'transformers'] | import requests
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection
processor = OwlViTProcessor.from_pretrained('google/owlvit-base-patch16')
model = OwlViTForObjectDetection.from_pretrained('google/owlvit-base-patch16')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
texts = [['a photo of a cat', 'a photo of a dog']]
inputs = processor(text=texts, images=image, return_tensors='pt')
outputs = model(**inputs)
target_sizes = torch.Tensor([image.size[::-1]])
results = processor.post_process(outputs=outputs, target_sizes=target_sizes) | {'dataset': 'COCO', 'accuracy': 'Not provided'} | OWL-ViT is a zero-shot text-conditioned object detection model that can be used to query an image with one or multiple text queries. OWL-ViT uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8m-plane-detection | YOLO('keremberke/yolov8m-plane-detection') | {'image': 'URL or local path to the image'} | ['pip install ultralyticsplus==0.0.23 ultralytics==8.0.21'] | ['from ultralyticsplus import YOLO, render_result', "model = YOLO('keremberke/yolov8m-plane-detection')", "model.overrides['conf'] = 0.25", "model.overrides['iou'] = 0.45", "model.overrides['agnostic_nms'] = False", "model.overrides['max_det'] = 1000", "image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'", 'results = model.predict(image)', 'print(results[0].boxes)', 'render = render_result(model=model, image=image, result=results[0])', 'render.show()'] | {'dataset': 'plane-detection', 'accuracy': '0.995'} | A YOLOv8 model for plane detection trained on the keremberke/plane-detection dataset. The model is capable of detecting planes in images with high accuracy. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8s-csgo-player-detection | YOLO('keremberke/yolov8s-csgo-player-detection') | {'image': 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'} | ['ultralyticsplus==0.0.23', 'ultralytics==8.0.21'] | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8s-csgo-player-detection')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'csgo-object-detection', 'accuracy': 0.886} | A YOLOv8 model for detecting Counter-Strike: Global Offensive (CS:GO) players. Supports the labels ['ct', 'cthead', 't', 'thead']. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8m-blood-cell-detection | YOLO('keremberke/yolov8m-blood-cell-detection') | {'conf': 0.25, 'iou': 0.45, 'agnostic_nms': False, 'max_det': 1000} | ['ultralyticsplus==0.0.24', 'ultralytics==8.0.23'] | ['from ultralyticsplus import YOLO, render_result', "model = YOLO('keremberke/yolov8m-blood-cell-detection')", "model.overrides['conf'] = 0.25", "model.overrides['iou'] = 0.45", "model.overrides['agnostic_nms'] = False", "model.overrides['max_det'] = 1000", "image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'", 'results = model.predict(image)', 'print(results[0].boxes)', 'render = render_result(model=model, image=image, result=results[0])', 'render.show()'] | {'dataset': 'blood-cell-object-detection', 'accuracy': 0.927} | A YOLOv8 model for blood cell detection, including Platelets, RBC, and WBC. Trained on the blood-cell-object-detection dataset. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8s-hard-hat-detection | YOLO('keremberke/yolov8s-hard-hat-detection') | {'conf': 0.25, 'iou': 0.45, 'agnostic_nms': False, 'max_det': 1000} | pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8s-hard-hat-detection')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'hard-hat-detection', 'accuracy': 0.834} | An object detection model trained to detect hard hats and no-hard hats in images. The model is based on YOLOv8 architecture and can be used for safety applications. |
Computer Vision Object Detection | Transformers | Object Detection | fcakyon/yolov5s-v7.0 | yolov5.load('fcakyon/yolov5s-v7.0') | {'conf': 0.25, 'iou': 0.45, 'agnostic': False, 'multi_label': False, 'max_det': 1000, 'img': 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg', 'size': 640, 'augment': True} | pip install -U yolov5 | import yolov5
model = yolov5.load('fcakyon/yolov5s-v7.0')
model.conf = 0.25
model.iou = 0.45
model.agnostic = False
model.multi_label = False
model.max_det = 1000
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model(img)
results = model(img, size=640)
results = model(img, augment=True)
predictions = results.pred[0]
boxes = predictions[:, :4]
scores = predictions[:, 4]
categories = predictions[:, 5]
results.show()
results.save(save_dir='results/') | {'dataset': 'detection-datasets/coco', 'accuracy': None} | Yolov5s-v7.0 is an object detection model trained on the COCO dataset. It can detect objects in images and return their bounding boxes, scores, and categories. |
Computer Vision Object Detection | Hugging Face Transformers | Table Extraction | keremberke/yolov8n-table-extraction | YOLO('keremberke/yolov8n-table-extraction') | {'conf': 0.25, 'iou': 0.45, 'agnostic_nms': False, 'max_det': 1000} | ['ultralyticsplus==0.0.23', 'ultralytics==8.0.21'] | ['from ultralyticsplus import YOLO, render_result', "model = YOLO('keremberke/yolov8n-table-extraction')", "model.overrides['conf'] = 0.25", "model.overrides['iou'] = 0.45", "model.overrides['agnostic_nms'] = False", "model.overrides['max_det'] = 1000", "image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'", 'results = model.predict(image)', 'print(results[0].boxes)', 'render = render_result(model=model, image=image, result=results[0])', 'render.show()'] | {'dataset': 'table-extraction', 'accuracy': 0.967} | An object detection model for extracting tables from documents. Supports two label types: 'bordered' and 'borderless'. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | clipseg-rd64-refined | pipeline('image-segmentation', model='CIDAS/clipseg-rd64-refined') | {'model': 'CIDAS/clipseg-rd64-refined'} | transformers |  | {'dataset': '', 'accuracy': ''} | CLIPSeg model with reduce dimension 64, refined (using a more complex convolution). It was introduced in the paper Image Segmentation Using Text and Image Prompts by Lüddecke et al. and first released in this repository. This model is intended for zero-shot and one-shot image segmentation. |
Computer Vision Object Detection | Hugging Face Transformers | Object Detection | keremberke/yolov8n-csgo-player-detection | YOLO('keremberke/yolov8n-csgo-player-detection') | {'image': 'URL or local path to image'} | pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8n-csgo-player-detection')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'csgo-object-detection', 'accuracy': 0.844} | A YOLOv8 model for detecting Counter-Strike: Global Offensive (CS:GO) players with supported labels: ['ct', 'cthead', 't', 'thead']. |
Computer Vision Object Detection | Hugging Face Transformers | License Plate Detection | keremberke/yolov5s-license-plate | yolov5.load('keremberke/yolov5s-license-plate') | {'img': 'image url or path', 'size': 'image resize dimensions', 'augment': 'optional, test time augmentation'} | pip install -U yolov5 | ['import yolov5', "model = yolov5.load('keremberke/yolov5s-license-plate')", 'model.conf = 0.25', 'model.iou = 0.45', 'model.agnostic = False', 'model.multi_label = False', 'model.max_det = 1000', "img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'", 'results = model(img, size=640)', 'results = model(img, augment=True)', 'predictions = results.pred[0]', 'boxes = predictions[:, :4]', 'scores = predictions[:, 4]', 'categories = predictions[:, 5]', 'results.show()', "results.save(save_dir='results/')"] | {'dataset': 'keremberke/license-plate-object-detection', 'accuracy': 0.985} | A YOLOv5 based license plate detection model trained on a custom dataset. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | openmmlab/upernet-convnext-small | UperNetForSemanticSegmentation.from_pretrained('openmmlab/upernet-convnext-small') | N/A | transformers | N/A | {'dataset': 'N/A', 'accuracy': 'N/A'} | UperNet framework for semantic segmentation, leveraging a ConvNeXt backbone. UperNet was introduced in the paper Unified Perceptual Parsing for Scene Understanding by Xiao et al. Combining UperNet with a ConvNeXt backbone was introduced in the paper A ConvNet for the 2020s. |
Computer Vision Object Detection | Hugging Face Transformers | Blood Cell Detection | keremberke/yolov8n-blood-cell-detection | YOLO('keremberke/yolov8n-blood-cell-detection') | {'conf': 0.25, 'iou': 0.45, 'agnostic_nms': False, 'max_det': 1000} | ultralyticsplus==0.0.23 ultralytics==8.0.21 | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8n-blood-cell-detection')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'blood-cell-object-detection', 'accuracy': 0.893} | This model detects blood cells in images, specifically Platelets, RBC, and WBC. It is based on the YOLOv8 architecture and trained on the blood-cell-object-detection dataset. |
Computer Vision Image Segmentation | Hugging Face Transformers | Semantic Segmentation | nvidia/segformer-b0-finetuned-ade-512-512 | SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b0-finetuned-ade-512-512') | {'images': 'Image', 'return_tensors': 'pt'} | {'transformers': 'SegformerImageProcessor, SegformerForSemanticSegmentation', 'PIL': 'Image', 'requests': 'requests'} | from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
processor = SegformerImageProcessor.from_pretrained('nvidia/segformer-b0-finetuned-ade-512-512')
model = SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b0-finetuned-ade-512-512')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors='pt')
outputs = model(**inputs)
logits = outputs.logits | {'dataset': 'ADE20k', 'accuracy': 'Not provided'} | SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Xie et al. and first released in this repository. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | nvidia/segformer-b5-finetuned-ade-640-640 | SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b5-finetuned-ade-640-640') | ['images', 'return_tensors'] | ['transformers', 'PIL', 'requests'] | from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained('nvidia/segformer-b5-finetuned-ade-640-640')
model = SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b5-finetuned-ade-640-640')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors='pt')
outputs = model(**inputs)
logits = outputs.logits | {'dataset': 'ADE20K', 'accuracy': 'Not provided'} | SegFormer model fine-tuned on ADE20k at resolution 640x640. It was introduced in the paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Xie et al. and first released in this repository. |
Computer Vision Image Segmentation | Hugging Face Transformers | Semantic Segmentation | nvidia/segformer-b2-finetuned-cityscapes-1024-1024 | SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b2-finetuned-cityscapes-1024-1024') | {'images': 'image', 'return_tensors': 'pt'} | {'transformers': 'latest', 'PIL': 'latest', 'requests': 'latest'} | from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained('nvidia/segformer-b2-finetuned-cityscapes-1024-1024')
model = SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b2-finetuned-cityscapes-1024-1024')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors='pt')
outputs = model(**inputs)
logits = outputs.logits | {'dataset': 'Cityscapes', 'accuracy': 'Not provided'} | SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Xie et al. and first released in this repository. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | nvidia/segformer-b0-finetuned-cityscapes-1024-1024 | SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b0-finetuned-cityscapes-1024-1024') | {'pretrained_model_name_or_path': 'nvidia/segformer-b0-finetuned-cityscapes-1024-1024'} | ['transformers', 'PIL', 'requests'] | from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained('nvidia/segformer-b0-finetuned-cityscapes-1024-1024')
model = SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b0-finetuned-cityscapes-1024-1024')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors='pt')
outputs = model(**inputs)
logits = outputs.logits | {'dataset': 'CityScapes', 'accuracy': 'Not provided'} | SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Xie et al. and first released in this repository. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | facebook/detr-resnet-50-panoptic | DetrForSegmentation.from_pretrained('facebook/detr-resnet-50-panoptic') | ['image'] | ['torch', 'numpy', 'transformers', 'PIL', 'requests', 'io'] | ['import io', 'import requests', 'from PIL import Image', 'import torch', 'import numpy', 'from transformers import DetrFeatureExtractor, DetrForSegmentation', 'from transformers.models.detr.feature_extraction_detr import rgb_to_id', "url = 'http://images.cocodataset.org/val2017/000000039769.jpg'", 'image = Image.open(requests.get(url, stream=True).raw)', "feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-panoptic')", "model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-50-panoptic')", "inputs = feature_extractor(images=image, return_tensors='pt')", 'outputs = model(**inputs)', "processed_sizes = torch.as_tensor(inputs['pixel_values'].shape[-2:]).unsqueeze(0)", 'result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]', "panoptic_seg = Image.open(io.BytesIO(result['png_string']))", 'panoptic_seg = numpy.array(panoptic_seg, dtype=numpy.uint8)', 'panoptic_seg_id = rgb_to_id(panoptic_seg)'] | {'dataset': 'COCO 2017 validation', 'accuracy': {'box_AP': 38.8, 'segmentation_AP': 31.1, 'PQ': 43.4}} | DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | facebook/maskformer-swin-base-coco | MaskFormerForInstanceSegmentation.from_pretrained('facebook/maskformer-swin-base-coco') | ['image'] | ['transformers', 'PIL', 'requests'] | from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
feature_extractor = MaskFormerFeatureExtractor.from_pretrained('facebook/maskformer-swin-base-coco')
model = MaskFormerForInstanceSegmentation.from_pretrained('facebook/maskformer-swin-base-coco')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors='pt')
outputs = model(**inputs)
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
predicted_panoptic_map = result['segmentation'] | {'dataset': 'COCO', 'accuracy': 'Not provided'} | MaskFormer model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | mattmdjaga/segformer_b2_clothes | SegformerForSemanticSegmentation.from_pretrained('mattmdjaga/segformer_b2_clothes') | ['image'] | ['transformers', 'PIL', 'requests', 'matplotlib', 'torch'] | from transformers import AutoFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch.nn as nn
extractor = AutoFeatureExtractor.from_pretrained('mattmdjaga/segformer_b2_clothes')
model = SegformerForSemanticSegmentation.from_pretrained('mattmdjaga/segformer_b2_clothes')
url = 'https://plus.unsplash.com/premium_photo-1673210886161-bfcc40f54d1f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cGVyc29uJTIwc3RhbmRpbmd8ZW58MHx8MHx8&w=1000&q=80'
image = Image.open(requests.get(url, stream=True).raw)
inputs = extractor(images=image, return_tensors='pt')
outputs = model(**inputs)
logits = outputs.logits.cpu()
upsampled_logits = nn.functional.interpolate(logits, size=image.size[::-1], mode='bilinear', align_corners=False)
pred_seg = upsampled_logits.argmax(dim=1)[0]
plt.imshow(pred_seg) | {'dataset': 'mattmdjaga/human_parsing_dataset', 'accuracy': 'Not provided'} | SegFormer model fine-tuned on ATR dataset for clothes segmentation. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | facebook/mask2former-swin-base-coco-panoptic | Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-base-coco-panoptic') | {'pretrained_model_name_or_path': 'facebook/mask2former-swin-base-coco-panoptic'} | {'packages': ['requests', 'torch', 'PIL', 'transformers']} | import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
processor = AutoImageProcessor.from_pretrained('facebook/mask2former-swin-base-coco-panoptic')
model = Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-base-coco-panoptic')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors='pt')
with torch.no_grad():
outputs = model(**inputs)
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
predicted_panoptic_map = result['segmentation'] | {'dataset': 'COCO panoptic segmentation', 'accuracy': None} | Mask2Former model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper Masked-attention Mask Transformer for Universal Image Segmentation and first released in this repository. Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, MaskFormer, both in terms of performance and efficiency. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | facebook/mask2former-swin-large-cityscapes-semantic | Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-large-cityscapes-semantic') | {'pretrained_model_name_or_path': 'facebook/mask2former-swin-large-cityscapes-semantic'} | ['torch', 'transformers'] | import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
processor = AutoImageProcessor.from_pretrained('facebook/mask2former-swin-large-cityscapes-semantic')
model = Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-large-cityscapes-semantic')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors='pt')
with torch.no_grad():
outputs = model(**inputs)
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] | {'dataset': 'Cityscapes', 'accuracy': 'Not specified'} | Mask2Former model trained on Cityscapes semantic segmentation (large-sized version, Swin backbone). It addresses instance, semantic and panoptic segmentation by predicting a set of masks and corresponding labels. The model outperforms the previous SOTA, MaskFormer, in terms of performance and efficiency. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | shi-labs/oneformer_coco_swin_large | OneFormerForUniversalSegmentation.from_pretrained('shi-labs/oneformer_coco_swin_large') | {'images': 'image', 'task_inputs': ['semantic', 'instance', 'panoptic'], 'return_tensors': 'pt'} | ['transformers', 'PIL', 'requests'] | from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = 'https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/coco.jpeg'
image = Image.open(requests.get(url, stream=True).raw)
processor = OneFormerProcessor.from_pretrained('shi-labs/oneformer_coco_swin_large')
model = OneFormerForUniversalSegmentation.from_pretrained('shi-labs/oneformer_coco_swin_large')
semantic_inputs = processor(images=image, task_inputs=['semantic'], return_tensors='pt')
outputs = model(**semantic_inputs)
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] | {'dataset': 'ydshieh/coco_dataset_script', 'accuracy': 'Not provided'} | OneFormer model trained on the COCO dataset (large-sized version, Swin backbone). It was introduced in the paper OneFormer: One Transformer to Rule Universal Image Segmentation by Jain et al. and first released in this repository. OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | facebook/maskformer-swin-large-ade | MaskFormerForInstanceSegmentation.from_pretrained('facebook/maskformer-swin-large-ade') | {'from_pretrained': 'facebook/maskformer-swin-large-ade'} | {'packages': ['transformers', 'PIL', 'requests']} | from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = 'https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = MaskFormerImageProcessor.from_pretrained('facebook/maskformer-swin-large-ade')
inputs = processor(images=image, return_tensors='pt')
model = MaskFormerForInstanceSegmentation.from_pretrained('facebook/maskformer-swin-large-ade')
outputs = model(**inputs)
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] | {'dataset': 'ADE20k', 'accuracy': 'Not provided'} | MaskFormer model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. This model addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | shi-labs/oneformer_ade20k_swin_large | OneFormerForUniversalSegmentation.from_pretrained('shi-labs/oneformer_ade20k_swin_large') | ['images', 'task_inputs', 'return_tensors'] | ['transformers', 'PIL', 'requests'] | from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = 'https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/ade20k.jpeg'
image = Image.open(requests.get(url, stream=True).raw)
processor = OneFormerProcessor.from_pretrained('shi-labs/oneformer_ade20k_swin_large')
model = OneFormerForUniversalSegmentation.from_pretrained('shi-labs/oneformer_ade20k_swin_large')
semantic_inputs = processor(images=image, task_inputs=['semantic'], return_tensors='pt')
outputs = model(**semantic_inputs)
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] | {'dataset': 'scene_parse_150', 'accuracy': None} | OneFormer model trained on the ADE20k dataset (large-sized version, Swin backbone). It was introduced in the paper OneFormer: One Transformer to Rule Universal Image Segmentation by Jain et al. and first released in this repository. OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | facebook/mask2former-swin-large-coco-panoptic | Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-large-coco-panoptic') | ['image'] | ['requests', 'torch', 'PIL', 'transformers'] | import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
processor = AutoImageProcessor.from_pretrained('facebook/mask2former-swin-large-coco-panoptic')
model = Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-large-coco-panoptic')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors='pt')
with torch.no_grad():
outputs = model(**inputs)
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
predicted_panoptic_map = result['segmentation'] | {'dataset': 'COCO', 'accuracy': 'Not provided'} | Mask2Former model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper Masked-attention Mask Transformer for Universal Image Segmentation and first released in this repository. Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, MaskFormer, both in terms of performance and efficiency. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | facebook/mask2former-swin-small-coco-instance | Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-small-coco-instance') | {'pretrained_model_name_or_path': 'facebook/mask2former-swin-small-coco-instance'} | ['requests', 'torch', 'PIL', 'transformers'] | import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
processor = AutoImageProcessor.from_pretrained('facebook/mask2former-swin-small-coco-instance')
model = Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-small-coco-instance')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors='pt')
with torch.no_grad():
outputs = model(**inputs)
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
predicted_instance_map = result['segmentation'] | {'dataset': 'COCO', 'accuracy': 'Not provided'} | Mask2Former model trained on COCO instance segmentation (small-sized version, Swin backbone). It was introduced in the paper Masked-attention Mask Transformer for Universal Image Segmentation and first released in this repository. Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, MaskFormer, both in terms of performance and efficiency. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | shi-labs/oneformer_ade20k_swin_tiny | OneFormerForUniversalSegmentation.from_pretrained('shi-labs/oneformer_ade20k_swin_tiny') | {'images': 'image', 'task_inputs': ['semantic', 'instance', 'panoptic'], 'return_tensors': 'pt'} | ['transformers', 'PIL', 'requests'] | from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
url = 'https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/ade20k.jpeg'
image = Image.open(requests.get(url, stream=True).raw)
processor = OneFormerProcessor.from_pretrained('shi-labs/oneformer_ade20k_swin_tiny')
model = OneFormerForUniversalSegmentation.from_pretrained('shi-labs/oneformer_ade20k_swin_tiny')
semantic_inputs = processor(images=image, task_inputs=['semantic'], return_tensors='pt')
semantic_outputs = model(**semantic_inputs)
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
instance_inputs = processor(images=image, task_inputs=['instance'], return_tensors='pt')
instance_outputs = model(**instance_inputs)
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]['segmentation']
panoptic_inputs = processor(images=image, task_inputs=['panoptic'], return_tensors='pt')
panoptic_outputs = model(**panoptic_inputs)
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]['segmentation'] | {'dataset': 'ADE20k', 'accuracy': 'Not provided'} | OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference, all with a single model. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | keremberke/yolov8m-building-segmentation | YOLO('keremberke/yolov8m-building-segmentation') | {'image': 'URL or local path to the image'} | pip install ultralyticsplus==0.0.21 | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8m-building-segmentation')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
print(results[0].masks)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'satellite-building-segmentation', 'accuracy': {'mAP@0.5(box)': 0.623, 'mAP@0.5(mask)': 0.613}} | A YOLOv8 model for building segmentation in satellite images. It can detect and segment buildings in the input images. |
Computer Vision Image Segmentation | Hugging Face Transformers | Semantic Segmentation | nvidia/segformer-b5-finetuned-cityscapes-1024-1024 | SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b5-finetuned-cityscapes-1024-1024') | {'images': 'image', 'return_tensors': 'pt'} | {'packages': ['transformers', 'PIL', 'requests']} | from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained('nvidia/segformer-b5-finetuned-cityscapes-1024-1024')
model = SegformerForSemanticSegmentation.from_pretrained('nvidia/segformer-b5-finetuned-cityscapes-1024-1024')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors='pt')
outputs = model(**inputs)
logits = outputs.logits | {'dataset': 'CityScapes', 'accuracy': 'Not provided'} | SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Xie et al. and first released in this repository. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | facebook/mask2former-swin-tiny-coco-instance | Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-tiny-coco-instance') | {'pretrained_model_name_or_path': 'facebook/mask2former-swin-tiny-coco-instance'} | ['torch', 'transformers', 'PIL', 'requests'] | import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
processor = AutoImageProcessor.from_pretrained('facebook/mask2former-swin-tiny-coco-instance')
model = Mask2FormerForUniversalSegmentation.from_pretrained('facebook/mask2former-swin-tiny-coco-instance')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors='pt')
with torch.no_grad():
outputs = model(**inputs)
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
predicted_instance_map = result['segmentation'] | {'dataset': 'COCO', 'accuracy': 'Not specified'} | Mask2Former model trained on COCO instance segmentation (tiny-sized version, Swin backbone). It was introduced in the paper Masked-attention Mask Transformer for Universal Image Segmentation and first released in this repository. This model addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. You can use this particular checkpoint for instance segmentation. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | facebook/maskformer-swin-base-ade | MaskFormerForInstanceSegmentation.from_pretrained('facebook/maskformer-swin-base-ade') | {'from_pretrained': 'facebook/maskformer-swin-base-ade'} | {'transformers': 'latest', 'PIL': 'latest', 'requests': 'latest'} | from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = 'https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained('facebook/maskformer-swin-base-ade')
inputs = feature_extractor(images=image, return_tensors='pt')
model = MaskFormerForInstanceSegmentation.from_pretrained('facebook/maskformer-swin-base-ade')
outputs = model(**inputs)
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] | {'dataset': 'ADE20k', 'accuracy': 'Not provided'} | MaskFormer model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. This model addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | keremberke/yolov8m-pcb-defect-segmentation | YOLO('keremberke/yolov8m-pcb-defect-segmentation') | {'image': 'URL or local path to the image'} | ['ultralyticsplus==0.0.24', 'ultralytics==8.0.23'] | ['from ultralyticsplus import YOLO, render_result', "model = YOLO('keremberke/yolov8m-pcb-defect-segmentation')", "model.overrides['conf'] = 0.25", "model.overrides['iou'] = 0.45", "model.overrides['agnostic_nms'] = False", "model.overrides['max_det'] = 1000", "image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'", 'results = model.predict(image)', 'print(results[0].boxes)', 'print(results[0].masks)', 'render = render_result(model=model, image=image, result=results[0])', 'render.show()'] | {'dataset': 'pcb-defect-segmentation', 'accuracy': {'mAP@0.5(box)': 0.568, 'mAP@0.5(mask)': 0.557}} | A YOLOv8 model for PCB defect segmentation trained on the pcb-defect-segmentation dataset. The model can detect and segment defects in PCB images, such as Dry_joint, Incorrect_installation, PCB_damage, and Short_circuit. |
Computer Vision Image Segmentation | Hugging Face Transformers | Transformers | facebook/maskformer-swin-tiny-coco | MaskFormerForInstanceSegmentation.from_pretrained('facebook/maskformer-swin-tiny-coco') | ['image', 'return_tensors'] | ['transformers', 'PIL', 'requests'] | from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
feature_extractor = MaskFormerFeatureExtractor.from_pretrained('facebook/maskformer-swin-tiny-coco')
model = MaskFormerForInstanceSegmentation.from_pretrained('facebook/maskformer-swin-tiny-coco')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors='pt')
outputs = model(**inputs)
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
predicted_panoptic_map = result['segmentation'] | {'dataset': 'COCO panoptic segmentation', 'accuracy': 'Not provided'} | MaskFormer model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | keremberke/yolov8m-pothole-segmentation | YOLO('keremberke/yolov8m-pothole-segmentation') | {'image': 'URL or local image path'} | ['ultralyticsplus==0.0.23', 'ultralytics==8.0.21'] | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8m-pothole-segmentation')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
print(results[0].masks)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'pothole-segmentation', 'accuracy': {'mAP@0.5(box)': 0.858, 'mAP@0.5(mask)': 0.895}} | A YOLOv8 model for pothole segmentation trained on keremberke/pothole-segmentation dataset. It can detect potholes in images and provide segmentation masks for the detected potholes. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | keremberke/yolov8s-building-segmentation | YOLO('keremberke/yolov8s-building-segmentation') | ['conf', 'iou', 'agnostic_nms', 'max_det', 'image'] | ['ultralyticsplus==0.0.21'] | ['from ultralyticsplus import YOLO, render_result', "model = YOLO('keremberke/yolov8s-building-segmentation')", "model.overrides['conf'] = 0.25", "model.overrides['iou'] = 0.45", "model.overrides['agnostic_nms'] = False", "model.overrides['max_det'] = 1000", "image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'", 'results = model.predict(image)', 'print(results[0].boxes)', 'print(results[0].masks)', 'render = render_result(model=model, image=image, result=results[0])', 'render.show()'] | {'dataset': 'satellite-building-segmentation', 'accuracy': {'mAP@0.5(box)': 0.661, 'mAP@0.5(mask)': 0.651}} | A YOLOv8 model for building segmentation in satellite images. Trained on the satellite-building-segmentation dataset, it can detect and segment buildings with high accuracy. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | keremberke/yolov8s-pothole-segmentation | YOLO('keremberke/yolov8s-pothole-segmentation') | {'image': 'URL or local path to the image'} | {'ultralyticsplus': '0.0.23', 'ultralytics': '8.0.21'} | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8s-pothole-segmentation')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
print(results[0].masks)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'pothole-segmentation', 'accuracy': {'mAP@0.5(box)': 0.928, 'mAP@0.5(mask)': 0.928}} | A YOLOv8 model for pothole segmentation. This model detects potholes in images and outputs bounding boxes and masks for the detected potholes. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | keremberke/yolov8n-pothole-segmentation | YOLO('keremberke/yolov8n-pothole-segmentation') | {'image': 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg', 'conf': 0.25, 'iou': 0.45, 'agnostic_nms': False, 'max_det': 1000} | {'ultralyticsplus': '0.0.23', 'ultralytics': '8.0.21'} | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8n-pothole-segmentation')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
print(results[0].masks)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'pothole-segmentation', 'accuracy': {'mAP@0.5(box)': 0.995, 'mAP@0.5(mask)': 0.995}} | A YOLOv8 model for pothole segmentation in images. The model is trained on the pothole-segmentation dataset and achieves high accuracy in detecting potholes. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | keremberke/yolov8n-pcb-defect-segmentation | YOLO('keremberke/yolov8n-pcb-defect-segmentation') | {'image': 'URL or local path to image'} | ultralyticsplus==0.0.23 ultralytics==8.0.21 | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8n-pcb-defect-segmentation')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
print(results[0].masks)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'pcb-defect-segmentation', 'accuracy': {'mAP@0.5(box)': 0.512, 'mAP@0.5(mask)': 0.517}} | A YOLOv8 model for detecting and segmenting PCB defects such as Dry_joint, Incorrect_installation, PCB_damage, and Short_circuit. |
Computer Vision Image Segmentation | Hugging Face Transformers | Image Segmentation | keremberke/yolov8s-pcb-defect-segmentation | YOLO('keremberke/yolov8s-pcb-defect-segmentation') | {'image': 'URL or local path to image'} | ['ultralyticsplus==0.0.23', 'ultralytics==8.0.21'] | from ultralyticsplus import YOLO, render_result
model = YOLO('keremberke/yolov8s-pcb-defect-segmentation')
model.overrides['conf'] = 0.25
model.overrides['iou'] = 0.45
model.overrides['agnostic_nms'] = False
model.overrides['max_det'] = 1000
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model.predict(image)
print(results[0].boxes)
print(results[0].masks)
render = render_result(model=model, image=image, result=results[0])
render.show() | {'dataset': 'pcb-defect-segmentation', 'accuracy': {'mAP@0.5(box)': 0.515, 'mAP@0.5(mask)': 0.491}} | YOLOv8s model for PCB defect segmentation. The model is trained to detect and segment PCB defects such as Dry_joint, Incorrect_installation, PCB_damage, and Short_circuit. |
Computer Vision Image-to-Image | Hugging Face | Image Variations | lambdalabs/sd-image-variations-diffusers | StableDiffusionImageVariationPipeline.from_pretrained('lambdalabs/sd-image-variations-diffusers', revision='v2.0') | {'revision': 'v2.0'} | Diffusers >=0.8.0 | from diffusers import StableDiffusionImageVariationPipeline
from PIL import Image
from torchvision import transforms
device = 'cuda:0'
sd_pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    'lambdalabs/sd-image-variations-diffusers',
    revision='v2.0',
)
sd_pipe = sd_pipe.to(device)
im = Image.open('path/to/image.jpg')
tform = transforms.Compose([
transforms.ToTensor(),
transforms.Resize(
(224, 224),
interpolation=transforms.InterpolationMode.BICUBIC,
antialias=False,
),
transforms.Normalize(
[0.48145466, 0.4578275, 0.40821073],
[0.26862954, 0.26130258, 0.27577711]),
])
inp = tform(im).to(device).unsqueeze(0)
out = sd_pipe(inp, guidance_scale=3)
out['images'][0].save('result.jpg') | {'dataset': 'ChristophSchuhmann/improved_aesthetics_6plus', 'accuracy': 'N/A'} | This version of Stable Diffusion has been fine-tuned from CompVis/stable-diffusion-v1-4-original to accept CLIP image embeddings rather than text embeddings. This allows the creation of image variations similar to DALLE-2 using Stable Diffusion. |
Computer Vision Image-to-Image | Hugging Face | Image-to-Image | lllyasviel/sd-controlnet-canny | ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-canny') | {'torch_dtype': 'torch.float16'} | {'opencv': 'pip install opencv-contrib-python', 'diffusers': 'pip install diffusers transformers accelerate'} | import cv2
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
import numpy as np
from diffusers.utils import load_image
image = load_image('https://huggingface.co/lllyasviel/sd-controlnet-hed/resolve/main/images/bird.png')
image = np.array(image)
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)
controlnet = ControlNetModel.from_pretrained(
    'lllyasviel/sd-controlnet-canny', torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
image = pipe('bird', image, num_inference_steps=20).images[0]
image.save('images/bird_canny_out.png') | {'dataset': '3M edge-image, caption pairs', 'accuracy': '600 GPU-hours with Nvidia A100 80G'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Canny edges. It can be used in combination with Stable Diffusion. |
Computer Vision Image-to-Image | Hugging Face | Human Pose Estimation | lllyasviel/sd-controlnet-openpose | ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-openpose') | {'text': 'chef in the kitchen', 'image': 'image', 'num_inference_steps': 20} | {'diffusers': 'pip install diffusers', 'transformers': 'pip install transformers', 'accelerate': 'pip install accelerate', 'controlnet_aux': 'pip install controlnet_aux'} | from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
from controlnet_aux import OpenposeDetector
from diffusers.utils import load_image
openpose = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')
image = load_image('https://huggingface.co/lllyasviel/sd-controlnet-openpose/resolve/main/images/pose.png')
image = openpose(image)
controlnet = ControlNetModel.from_pretrained(
    'lllyasviel/sd-controlnet-openpose', torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
image = pipe('chef in the kitchen', image, num_inference_steps=20).images[0]
image.save('images/chef_pose_out.png') | {'dataset': '200k pose-image, caption pairs', 'accuracy': 'Not specified'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Human Pose Estimation. It can be used in combination with Stable Diffusion. |
Computer Vision Image-to-Image | Hugging Face | Image-to-Image | lllyasviel/sd-controlnet-hed | ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-hed') | ['image', 'text'] | ['diffusers', 'transformers', 'accelerate', 'controlnet_aux'] | from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
from controlnet_aux import HEDdetector
from diffusers.utils import load_image
hed = HEDdetector.from_pretrained('lllyasviel/ControlNet')
image = load_image('https://huggingface.co/lllyasviel/sd-controlnet-hed/resolve/main/images/man.png')
image = hed(image)
controlnet = ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-hed', torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
image = pipe('oil painting of handsome old man, masterpiece', image, num_inference_steps=20).images[0]
image.save('images/man_hed_out.png') | {'dataset': '3M edge-image, caption pairs', 'accuracy': 'Not provided'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on HED Boundary. It can be used in combination with Stable Diffusion. |
Computer Vision Image-to-Image | Hugging Face | Image Segmentation | lllyasviel/sd-controlnet-seg | ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-seg') | ['torch_dtype'] | ['diffusers', 'transformers', 'accelerate'] | image = pipe('house', image, num_inference_steps=20).images[0]
image.save('./images/house_seg_out.png') | {'dataset': 'ADE20K', 'accuracy': 'Trained on 164K segmentation-image, caption pairs'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Image Segmentation. It can be used in combination with Stable Diffusion. |
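The two-line snippet above picks up after a ControlNet pipeline and a color-coded segmentation map have already been prepared. As a hedged sketch only (the segmentation-map path is a placeholder, and the surrounding setup simply mirrors the other sd-controlnet entries in this document), the full flow might look like:
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
# Placeholder: an ADE20K-style color-coded segmentation map prepared beforehand.
control_image = load_image('path/to/segmentation_map.png')
controlnet = ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-seg', torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
image = pipe('house', control_image, num_inference_steps=20).images[0]
image.save('./images/house_seg_out.png')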
Computer Vision Image-to-Image | Hugging Face | Depth Estimation | lllyasviel/sd-controlnet-depth | ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-depth') | {'torch_dtype': 'torch.float16'} | ['diffusers', 'transformers', 'accelerate', 'PIL', 'numpy', 'torch'] | {'install_packages': 'pip install diffusers transformers accelerate', 'code': ['from transformers import pipeline', 'from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler', 'from PIL import Image', 'import numpy as np', 'import torch', 'from diffusers.utils import load_image', "depth_estimator = pipeline('depth-estimation')", 'image = load_image(https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png)', "image = depth_estimator(image)['depth']", 'image = np.array(image)', 'image = image[:, :, None]', 'image = np.concatenate([image, image, image], axis=2)', 'image = Image.fromarray(image)', 'controlnet = ControlNetModel.from_pretrained(lllyasviel/sd-controlnet-depth, torch_dtype=torch.float16)', 'pipe = StableDiffusionControlNetPipeline.from_pretrained(runwayml/stable-diffusion-v1-5, controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16)', 'pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)', 'pipe.enable_xformers_memory_efficient_attention()', 'pipe.enable_model_cpu_offload()', "image = pipe(Stormtrooper's lecture, image, num_inference_steps=20).images[0]", "image.save('./images/stormtrooper_depth_out.png')"]} | {'dataset': '3M depth-image, caption pairs', 'accuracy': '500 GPU-hours with Nvidia A100 80G using Stable Diffusion 1.5 as a base model'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Depth estimation. It can be used in combination with Stable Diffusion. |
Computer Vision Image-to-Image | Hugging Face | Text-to-Image Diffusion Models | lllyasviel/sd-controlnet-scribble | ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-scribble') | ['image', 'text'] | ['diffusers', 'transformers', 'accelerate', 'controlnet_aux'] | from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
from controlnet_aux import HEDdetector
from diffusers.utils import load_image
hed = HEDdetector.from_pretrained('lllyasviel/ControlNet')
image = load_image('https://huggingface.co/lllyasviel/sd-controlnet-scribble/resolve/main/images/bag.png')
image = hed(image, scribble=True)
controlnet = ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-scribble', torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
image = pipe('bag', image, num_inference_steps=20).images[0]
image.save('images/bag_scribble_out.png') | {'dataset': '500k scribble-image, caption pairs', 'accuracy': 'Not provided'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Scribble images. It can be used in combination with Stable Diffusion. |
Computer Vision Image-to-Image | Hugging Face | Image-to-Image | lllyasviel/control_v11p_sd15_canny | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_canny') | {'text': 'a blue paradise bird in the jungle', 'num_inference_steps': 20, 'generator': 'torch.manual_seed(33)', 'image': 'control_image'} | ['pip install opencv-contrib-python', 'pip install diffusers transformers accelerate'] | ['import torch', 'import os', 'from huggingface_hub import HfApi', 'from pathlib import Path', 'from diffusers.utils import load_image', 'import numpy as np', 'import cv2', 'from PIL import Image', 'from diffusers import (', ' ControlNetModel,', ' StableDiffusionControlNetPipeline,', ' UniPCMultistepScheduler,', ')', 'checkpoint = lllyasviel/control_v11p_sd15_canny', 'image = load_image(', ' https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/input.png', ')', 'image = np.array(image)', 'low_threshold = 100', 'high_threshold = 200', 'image = cv2.Canny(image, low_threshold, high_threshold)', 'image = image[:, :, None]', 'image = np.concatenate([image, image, image], axis=2)', 'control_image = Image.fromarray(image)', 'control_image.save(./images/control.png)', 'controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)', 'pipe = StableDiffusionControlNetPipeline.from_pretrained(', ' runwayml/stable-diffusion-v1-5, controlnet=controlnet, torch_dtype=torch.float16', ')', 'pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)', 'pipe.enable_model_cpu_offload()', 'generator = torch.manual_seed(33)', 'image = pipe(a blue paradise bird in the jungle, num_inference_steps=20, generator=generator, image=control_image).images[0]', "image.save('images/image_out.png')"] | {'dataset': 'N/A', 'accuracy': 'N/A'} | Controlnet v1.1 is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Canny edges. It can be used in combination with Stable Diffusion, such as runwayml/stable-diffusion-v1-5. |
Computer Vision Image-to-Image | Hugging Face | ControlNet - M-LSD Straight Line Version | lllyasviel/sd-controlnet-mlsd | ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-mlsd') | {'torch_dtype': 'torch.float16'} | {'diffusers': 'pip install diffusers', 'transformers': 'pip install transformers', 'accelerate': 'pip install accelerate', 'controlnet_aux': 'pip install controlnet_aux'} | {'import': ['from PIL import Image', 'from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler', 'import torch', 'from controlnet_aux import MLSDdetector', 'from diffusers.utils import load_image'], 'setup': ["mlsd = MLSDdetector.from_pretrained('lllyasviel/ControlNet')", 'image = load_image(https://huggingface.co/lllyasviel/sd-controlnet-mlsd/resolve/main/images/room.png)', 'image = mlsd(image)', 'controlnet = ControlNetModel.from_pretrained(lllyasviel/sd-controlnet-mlsd, torch_dtype=torch.float16)', 'pipe = StableDiffusionControlNetPipeline.from_pretrained(runwayml/stable-diffusion-v1-5, controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16)', 'pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)'], 'execution': ['pipe.enable_xformers_memory_efficient_attention()', 'pipe.enable_model_cpu_offload()', 'image = pipe(room, image, num_inference_steps=20).images[0]', "image.save('images/room_mlsd_out.png')"]} | {'dataset': '600k edge-image, caption pairs generated from Places2', 'accuracy': 'Not specified'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on M-LSD straight line detection. It can be used in combination with Stable Diffusion. |
Computer Vision Image-to-Image | Hugging Face | ControlNet | lllyasviel/control_v11p_sd15_lineart | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_lineart') | {'checkpoint': 'ControlNet-1-1-preview/control_v11p_sd15_lineart', 'torch_dtype': 'torch.float16'} | pip install diffusers transformers accelerate controlnet_aux==0.3.0 | import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from controlnet_aux import LineartDetector
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = 'ControlNet-1-1-preview/control_v11p_sd15_lineart'
image = load_image(
    'https://huggingface.co/ControlNet-1-1-preview/control_v11p_sd15_lineart/resolve/main/images/input.png'
)
image = image.resize((512, 512))
prompt = 'michael jackson concert'
processor = LineartDetector.from_pretrained('lllyasviel/Annotators')
control_image = processor(image)
control_image.save('./images/control.png')
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png') | {'dataset': 'ControlNet-1-1-preview', 'accuracy': 'Not provided'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on lineart images. |
Computer Vision Image-to-Image | Hugging Face | Normal Map Estimation | lllyasviel/sd-controlnet-normal | ControlNetModel.from_pretrained('lllyasviel/sd-controlnet-normal') | ['image', 'num_inference_steps'] | ['diffusers', 'transformers', 'accelerate'] | from PIL import Image
from transformers import pipeline
import numpy as np
import cv2
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
from diffusers.utils import load_image
image = load_image('https://huggingface.co/lllyasviel/sd-controlnet-normal/resolve/main/images/toy.png').convert('RGB')
depth_estimator = pipeline('depth-estimation', model='Intel/dpt-hybrid-midas')
image = depth_estimator(image)['predicted_depth'][0]
image = image.numpy()
image_depth = image.copy()
image_depth -= np.min(image_depth)
image_depth /= np.max(image_depth)
bg_threshold = 0.4
x = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3)
x[image_depth < bg_threshold] = 0
y = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3)
y[image_depth < bg_threshold] = 0
z = np.ones_like(x) * np.pi * 2.0
image = np.stack([x, y, z], axis=2)
image /= np.sum(image ** 2.0, axis=2, keepdims=True) ** 0.5
image = (image * 127.5 + 127.5).clip(0, 255).astype(np.uint8)
image = Image.fromarray(image)
controlnet = ControlNetModel.from_pretrained(
    'fusing/stable-diffusion-v1-5-controlnet-normal', torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
image = pipe('cute toy', image, num_inference_steps=20).images[0]
image.save('images/toy_normal_out.png') | {'dataset': 'DIODE', 'accuracy': 'Not provided'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Normal Map Estimation. It can be used in combination with Stable Diffusion. |
Computer Vision Image-to-Image | Diffusers | Text-to-Image | lllyasviel/control_v11p_sd15_scribble | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_scribble') | {'checkpoint': 'lllyasviel/control_v11p_sd15_scribble', 'torch_dtype': 'torch.float16'} | ['diffusers', 'transformers', 'accelerate', 'controlnet_aux==0.3.0'] | import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from controlnet_aux import PidiNetDetector, HEDdetector
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = 'lllyasviel/control_v11p_sd15_scribble'
image = load_image(
    'https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/input.png'
)
prompt = 'royal chamber with fancy bed'
processor = HEDdetector.from_pretrained('lllyasviel/Annotators')
control_image = processor(image, scribble=True)
control_image.save('./images/control.png')
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png') | {'dataset': 'Stable Diffusion v1-5', 'accuracy': 'Not specified'} | Controlnet v1.1 is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Scribble images. It can be used in combination with Stable Diffusion, such as runwayml/stable-diffusion-v1-5. |
Computer Vision Image-to-Image | Diffusers | Text-to-Image Diffusion Models | lllyasviel/control_v11p_sd15_openpose | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_openpose') | {'checkpoint': 'lllyasviel/control_v11p_sd15_openpose', 'torch_dtype': 'torch.float16'} | ['diffusers', 'transformers', 'accelerate', 'controlnet_aux==0.3.0'] | {'import_libraries': ['import torch', 'import os', 'from huggingface_hub import HfApi', 'from pathlib import Path', 'from diffusers.utils import load_image', 'from PIL import Image', 'import numpy as np', 'from controlnet_aux import OpenposeDetector', 'from diffusers import (', ' ControlNetModel,', ' StableDiffusionControlNetPipeline,', ' UniPCMultistepScheduler,', ')'], 'load_model': ['checkpoint = lllyasviel/control_v11p_sd15_openpose', 'controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)'], 'example_usage': ['image = load_image(https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/input.png)', 'prompt = chef in the kitchen', "processor = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')", 'control_image = processor(image, hand_and_face=True)', 'control_image.save(./images/control.png)', 'pipe = StableDiffusionControlNetPipeline.from_pretrained(', ' runwayml/stable-diffusion-v1-5, controlnet=controlnet, torch_dtype=torch.float16', ')', 'pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)', 'pipe.enable_model_cpu_offload()', 'generator = torch.manual_seed(0)', 'image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]', "image.save('images/image_out.png')"]} | {'dataset': 'Not specified', 'accuracy': 'Not specified'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on openpose images. |
Computer Vision Image-to-Image | Hugging Face Transformers | Image Super-Resolution | caidas/swin2SR-classical-sr-x2-64 | Swin2SRForImageSuperResolution.from_pretrained('caidas/swin2sr-classical-sr-x2-64') | image, model, feature_extractor | transformers | Refer to the documentation. | {'dataset': 'arxiv: 2209.11345', 'accuracy': 'Not provided'} | Swin2SR model that upscales images x2. It was introduced in the paper Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Conde et al. and first released in this repository. |
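The example_code field above only says 'Refer to the documentation.'; the following is a minimal, hedged sketch of typical usage with the transformers Swin2SR classes (the COCO image URL is just an example input):
import torch
import numpy as np
import requests
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution
processor = AutoImageProcessor.from_pretrained('caidas/swin2SR-classical-sr-x2-64')
model = Swin2SRForImageSuperResolution.from_pretrained('caidas/swin2SR-classical-sr-x2-64')
# Any RGB image works; this is just an example input.
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(image, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
# The reconstruction is a float tensor in [0, 1] at twice the input resolution.
output = outputs.reconstruction.squeeze().clamp(0, 1).numpy()
output = np.moveaxis(output, 0, -1)
upscaled = Image.fromarray((output * 255.0).round().astype(np.uint8))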
Computer Vision Image-to-Image | Hugging Face | Diffusion-based text-to-image generation model | lllyasviel/control_v11e_sd15_ip2p | ControlNetModel.from_pretrained('lllyasviel/control_v11e_sd15_ip2p') | ['checkpoint', 'torch_dtype'] | ['diffusers', 'transformers', 'accelerate'] | import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = 'lllyasviel/control_v11e_sd15_ip2p'
control_image = load_image('https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/input.png').convert('RGB')
prompt = 'make it on fire'
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png') | {'dataset': 'Stable Diffusion v1-5', 'accuracy': 'Not provided'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on instruct pix2pix images. |
Computer Vision Image-to-Image | Hugging Face | Diffusion-based text-to-image generation | lllyasviel/control_v11p_sd15_seg | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_seg') | {'checkpoint': 'lllyasviel/control_v11p_sd15_seg'} | ['diffusers', 'transformers', 'accelerate'] | controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png') | {'dataset': 'COCO', 'accuracy': 'Not specified'} | ControlNet v1.1 is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on seg images. It can be used in combination with Stable Diffusion, such as runwayml/stable-diffusion-v1-5. |
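Note that the snippet above uses `checkpoint`, `prompt`, and `control_image` without defining them; a hedged sketch of the missing setup (the prompt and the segmentation-map path are placeholders, and the control image is expected to be a color-coded semantic segmentation map):
import torch
from diffusers.utils import load_image
checkpoint = 'lllyasviel/control_v11p_sd15_seg'
# Placeholder: a pre-computed, color-coded segmentation map of the target scene.
control_image = load_image('path/to/segmentation_map.png')
prompt = 'a house'  # placeholder prompt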
Computer Vision Image-to-Image | Hugging Face | Diffusion-based text-to-image generation | lllyasviel/control_v11p_sd15_softedge | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_softedge') | {'checkpoint': 'lllyasviel/control_v11p_sd15_softedge', 'torch_dtype': 'torch.float16'} | ['diffusers', 'transformers', 'accelerate', 'controlnet_aux==0.3.0'] | import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from controlnet_aux import PidiNetDetector, HEDdetector
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = 'lllyasviel/control_v11p_sd15_softedge'
image = load_image(
    'https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/input.png'
)
prompt = 'royal chamber with fancy bed'
processor = HEDdetector.from_pretrained('lllyasviel/Annotators')
processor = PidiNetDetector.from_pretrained('lllyasviel/Annotators')
control_image = processor(image, safe=True)
control_image.save('./images/control.png')
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png') | {'dataset': 'ControlNet', 'accuracy': 'Not provided'} | Controlnet v1.1 is a diffusion-based text-to-image generation model that controls pretrained large diffusion models to support additional input conditions. This checkpoint corresponds to the ControlNet conditioned on Soft edges. It can be used in combination with Stable Diffusion, such as runwayml/stable-diffusion-v1-5. |
Computer Vision Image-to-Image | Hugging Face Transformers | Transformers | swin2SR-lightweight-x2-64 | Swin2SRForConditionalGeneration.from_pretrained('condef/Swin2SR-lightweight-x2-64'). | feature_extractor, model | transformers, torch | {'dataset': '', 'accuracy': ''} | Swin2SR model that upscales images x2. It was introduced in the paper Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Conde et al. and first released in this repository. This model is intended for lightweight image super resolution. |
|
Computer Vision Image-to-Image | Hugging Face | Text-to-Image Diffusion Models | lllyasviel/control_v11p_sd15_mlsd | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_mlsd') | ['checkpoint', 'torch_dtype'] | ['diffusers', 'transformers', 'accelerate', 'controlnet_aux'] | import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from controlnet_aux import MLSDdetector
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = 'lllyasviel/control_v11p_sd15_mlsd'
image = load_image(
    'https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/input.png'
)
prompt = 'royal chamber with fancy bed'
processor = MLSDdetector.from_pretrained('lllyasviel/ControlNet')
control_image = processor(image)
control_image.save('./images/control.png')
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png') | {'dataset': 'MLSD', 'accuracy': 'Not provided'} | Controlnet v1.1 is a neural network structure to control diffusion models by adding extra conditions. It can be used in combination with Stable Diffusion, such as runwayml/stable-diffusion-v1-5. This checkpoint corresponds to the ControlNet conditioned on MLSD images. |
Computer Vision Image-to-Image | Hugging Face | Diffusion-based text-to-image generation model | lllyasviel/control_v11p_sd15_normalbae | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_normalbae') | ['checkpoint', 'torch_dtype'] | ['diffusers', 'transformers', 'accelerate', 'controlnet_aux'] | import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from controlnet_aux import NormalBaeDetector
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = 'lllyasviel/control_v11p_sd15_normalbae'
image = load_image(
    'https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/input.png'
)
prompt = 'A head full of roses'
processor = NormalBaeDetector.from_pretrained('lllyasviel/Annotators')
control_image = processor(image)
control_image.save('./images/control.png')
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(33)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png') | {'dataset': 'N/A', 'accuracy': 'N/A'} | ControlNet v1.1 is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on normalbae images. It can be used in combination with Stable Diffusion, such as runwayml/stable-diffusion-v1-5. |
Computer Vision Image-to-Image | Hugging Face Transformers | Transformers | swin2SR-classical-sr-x4-64 | pipeline('image-super-resolution', model='caidas/swin2SR-classical-sr-x4-64') | ['input_image'] | ['transformers'] | {'dataset': '', 'accuracy': ''} | Swin2SR model that upscales images x4. It was introduced in the paper Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Conde et al. and first released in this repository. This model is intended for image super resolution. |
|
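No example code is listed for the swin2SR-classical-sr-x4-64 row; current transformers versions expose Swin2SR through the generic 'image-to-image' pipeline task rather than an 'image-super-resolution' task name, so a hedged sketch (the input URL is just an example) could be:
from transformers import pipeline
# The 'image-to-image' pipeline wraps Swin2SR super-resolution checkpoints.
upscaler = pipeline('image-to-image', model='caidas/swin2SR-classical-sr-x4-64')
upscaled_image = upscaler('http://images.cocodataset.org/val2017/000000039769.jpg')
upscaled_image.save('upscaled_x4.png')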
Computer Vision Image-to-Image | Hugging Face | Image-to-Image | GreeneryScenery/SheepsControlV3 | pipeline('image-to-image', model='GreeneryScenery/SheepsControlV3') | {'image': 'Path to image file', 'text_guidance': 'Optional text guidance for the model'} | {'transformers': 'latest', 'torch': 'latest'} | ['from transformers import pipeline', "model = pipeline('image-to-image', model='GreeneryScenery/SheepsControlV3')", "result = model({'image': 'path/to/image.jpg', 'text_guidance': 'Optional text guidance'})"] | {'dataset': 'GreeneryScenery/SheepsControlV3', 'accuracy': 'Not provided'} | GreeneryScenery/SheepsControlV3 is a model for image-to-image tasks. It can be used to generate images based on the input image and optional text guidance. The model has some limitations, such as the conditioning image not affecting the output image much. Improvements can be made by training for more epochs, using better prompts, and preprocessing the data. |
Computer Vision Image-to-Image | Hugging Face | Image-to-Image | GreeneryScenery/SheepsControlV5 | pipeline('image-to-image', model='GreeneryScenery/SheepsControlV5') | {'input_image': 'path/to/image/file'} | {'huggingface_hub': '>=0.0.17', 'transformers': '>=4.13.0', 'torch': '>=1.10.0'} | {'dataset': 'poloclub/diffusiondb', 'accuracy': 'Not provided'} | SheepsControlV5 is an image-to-image model trained on the poloclub/diffusiondb dataset. It is designed for transforming input images into a different style or representation. |
|
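No example code is given for SheepsControlV5; assuming it is invoked the same way as the SheepsControlV3 entry earlier in this table (an assumption about this community model, not documented usage), a sketch would be:
from transformers import pipeline
# Mirrors the SheepsControlV3 usage shown above; the image path and guidance text are placeholders.
model = pipeline('image-to-image', model='GreeneryScenery/SheepsControlV5')
result = model({'image': 'path/to/image.jpg', 'text_guidance': 'Optional text guidance'})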
Computer Vision Image-to-Image | Keras | Image Deblurring | google/maxim-s3-deblurring-gopro | from_pretrained_keras('google/maxim-s3-deblurring-gopro') | ['image'] | ['huggingface_hub', 'PIL', 'tensorflow', 'numpy', 'requests'] | from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = 'https://github.com/sayakpaul/maxim-tf/raw/main/images/Deblurring/input/1fromGOPR0950.png'
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras('google/maxim-s3-deblurring-gopro')
predictions = model.predict(tf.expand_dims(image, 0)) | {'dataset': 'GoPro', 'accuracy': {'PSNR': 32.86, 'SSIM': 0.961}} | MAXIM model pre-trained for image deblurring. It was introduced in the paper MAXIM: Multi-Axis MLP for Image Processing by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in this repository. |
Computer Vision Image-to-Image | Hugging Face | Text-to-Image Diffusion Models | lllyasviel/control_v11p_sd15s2_lineart_anime | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15s2_lineart_anime') | {'checkpoint': 'lllyasviel/control_v11p_sd15s2_lineart_anime', 'torch_dtype': 'torch.float16'} | ['pip install diffusers transformers accelerate', 'pip install controlnet_aux==0.3.0'] | ['import torch', 'import os', 'from huggingface_hub import HfApi', 'from pathlib import Path', 'from diffusers.utils import load_image', 'from PIL import Image', 'import numpy as np', 'from controlnet_aux import LineartAnimeDetector', 'from transformers import CLIPTextModel', 'from diffusers import (', ' ControlNetModel,', ' StableDiffusionControlNetPipeline,', ' UniPCMultistepScheduler,', ')', 'checkpoint = lllyasviel/control_v11p_sd15s2_lineart_anime', 'image = load_image(', ' https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/input.png', ')', 'image = image.resize((512, 512))', 'prompt = A warrior girl in the jungle', 'processor = LineartAnimeDetector.from_pretrained(lllyasviel/Annotators)', 'control_image = processor(image)', 'control_image.save(./images/control.png)', 'text_encoder = CLIPTextModel.from_pretrained(runwayml/stable-diffusion-v1-5, subfolder=text_encoder, num_hidden_layers=11, torch_dtype=torch.float16)', 'controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)', 'pipe = StableDiffusionControlNetPipeline.from_pretrained(', ' runwayml/stable-diffusion-v1-5, text_encoder=text_encoder, controlnet=controlnet, torch_dtype=torch.float16', ')', 'pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)', 'pipe.enable_model_cpu_offload()', 'generator = torch.manual_seed(0)', 'image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]', "image.save('images/image_out.png')"] | {'dataset': 'Not specified', 'accuracy': 'Not specified'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on lineart_anime images. |
Computer Vision Image-to-Image | Hugging Face | Image Inpainting | lllyasviel/control_v11p_sd15_inpaint | ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_inpaint') | {'checkpoint': 'lllyasviel/control_v11p_sd15_inpaint', 'torch_dtype': 'torch.float16'} | pip install diffusers transformers accelerate | import torch
import os
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = 'lllyasviel/control_v11p_sd15_inpaint'
original_image = load_image(
    'https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/original.png'
)
mask_image = load_image(
    'https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/mask.png'
)
def make_inpaint_condition(image, image_mask):
    image = np.array(image.convert('RGB')).astype(np.float32) / 255.0
    image_mask = np.array(image_mask.convert('L'))
    assert image.shape[0:1] == image_mask.shape[0:1], 'image and image_mask must have the same image size'
    image[image_mask < 128] = -1.0  # set as masked pixel
    image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
    image = torch.from_numpy(image)
    return image
control_image = make_inpaint_condition(original_image, mask_image)
prompt = 'best quality'
negative_prompt = 'lowres, bad anatomy, bad hands, cropped, worst quality'
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(2)
image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=30,
generator=generator, image=control_image).images[0]
image.save('images/output.png') | {'dataset': 'Stable Diffusion v1-5', 'accuracy': 'Not specified'} | ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on inpaint images. |
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Image Synthesis | google/ddpm-cifar10-32 | DDPMPipeline.from_pretrained('google/ddpm-cifar10-32'). | None | diffusers | !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-cifar10-32'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png') | {'dataset': 'CIFAR10', 'accuracy': {'Inception_score': 9.46, 'FID_score': 3.17}} | Denoising Diffusion Probabilistic Models (DDPM) is a class of latent variable models inspired by nonequilibrium thermodynamics. It is used for high-quality image synthesis. The model supports different noise schedulers such as scheduling_ddpm, scheduling_ddim, and scheduling_pndm. |
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Diffusers | google/ddpm-celebahq-256 | DDPMPipeline.from_pretrained('ddpm-celebahq-256') | ['model_id'] | ['diffusers'] | !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-celebahq-256'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png') | {'dataset': 'CIFAR10', 'accuracy': {'Inception_score': 9.46, 'FID_score': 3.17}} | Denoising Diffusion Probabilistic Models (DDPM) for high quality image synthesis. Trained on the unconditional CIFAR10 dataset and 256x256 LSUN, obtaining state-of-the-art FID score of 3.17 and Inception score of 9.46. |
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Unconditional Image Generation | google/ddpm-cat-256 | DDPMPipeline.from_pretrained('google/ddpm-cat-256') | ['model_id'] | ['diffusers'] | !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-cat-256'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png') | {'dataset': 'CIFAR10', 'accuracy': {'Inception_score': 9.46, 'FID_score': 3.17}} | Denoising Diffusion Probabilistic Models (DDPM) is a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. It can generate high-quality images using discrete noise schedulers such as scheduling_ddpm, scheduling_ddim, and scheduling_pndm. The model is trained on the unconditional CIFAR10 dataset and 256x256 LSUN, obtaining an Inception score of 9.46 and a state-of-the-art FID score of 3.17. |
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Denoising Diffusion Probabilistic Models (DDPM) | google/ddpm-ema-celebahq-256 | DDPMPipeline.from_pretrained('google/ddpm-ema-celebahq-256') | {'model_id': 'google/ddpm-ema-celebahq-256'} | diffusers | !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-ema-celebahq-256'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png') | {'dataset': {'CIFAR10': {'Inception_score': 9.46, 'FID_score': 3.17}, 'LSUN': {'sample_quality': 'similar to ProgressiveGAN'}}} | High quality image synthesis using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. |
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Diffusers | google/ddpm-ema-church-256 | DDPMPipeline.from_pretrained('google/ddpm-ema-church-256') | ['model_id'] | ['diffusers'] | !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-ema-church-256'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png') | {'dataset': 'CIFAR10', 'accuracy': {'Inception score': 9.46, 'FID score': 3.17}} | Denoising Diffusion Probabilistic Models (DDPM) is a class of latent variable models inspired by nonequilibrium thermodynamics. It is used for high-quality image synthesis. DDPM models can use discrete noise schedulers such as scheduling_ddpm, scheduling_ddim, and scheduling_pndm for inference. The model can be used with different pipelines for faster inference and a better trade-off between quality and speed. |
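The description mentions a quality/speed trade-off; a minimal sketch using the PNDMPipeline already imported in the example above (the step count is an illustrative assumption).
from diffusers import PNDMPipeline
# hedged sketch: PNDM sampling with far fewer steps than the default DDPM schedule
pndm = PNDMPipeline.from_pretrained('google/ddpm-ema-church-256')
image = pndm(num_inference_steps=50).images[0]
image.save('pndm_generated_image.png')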
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Unconditional Image Generation | CompVis/ldm-celebahq-256 | DiffusionPipeline.from_pretrained('CompVis/ldm-celebahq-256') | ['model_id'] | ['diffusers'] | !pip install diffusers
from diffusers import DiffusionPipeline
model_id = 'CompVis/ldm-celebahq-256'
pipeline = DiffusionPipeline.from_pretrained(model_id)
image = pipeline(num_inference_steps=200).images
image[0].save('ldm_generated_image.png') | {'dataset': 'CelebA-HQ', 'accuracy': 'N/A'} | Latent Diffusion Models (LDMs) achieve state-of-the-art synthesis results on image data and beyond by decomposing the image formation process into a sequential application of denoising autoencoders. LDMs enable high-resolution synthesis, semantic scene synthesis, super-resolution, and image inpainting while significantly reducing computational requirements compared to pixel-based DMs. |
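For reproducible LDM samples, a minimal sketch assuming the pipeline accepts the standard diffusers generator argument; the seed value is arbitrary.
import torch
from diffusers import DiffusionPipeline
# hedged sketch: seed a torch.Generator so repeated runs produce the same sample
pipeline = DiffusionPipeline.from_pretrained('CompVis/ldm-celebahq-256')
generator = torch.Generator().manual_seed(0)
image = pipeline(num_inference_steps=200, generator=generator).images[0]
image.save('ldm_seeded_image.png')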
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Unconditional Image Generation | google/ddpm-church-256 | DDPMPipeline.from_pretrained('google/ddpm-church-256') | ['model_id'] | ['diffusers'] | !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-church-256'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png') | {'dataset': 'CIFAR10', 'accuracy': {'Inception_score': 9.46, 'FID_score': 3.17}} | Denoising Diffusion Probabilistic Models (DDPM) for high-quality image synthesis. Trained on the unconditional CIFAR10 dataset and 256x256 LSUN. Supports different noise schedulers like scheduling_ddpm, scheduling_ddim, and scheduling_pndm for inference. |
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Unconditional Image Generation | google/ncsnpp-celebahq-256 | DiffusionPipeline.from_pretrained('google/ncsnpp-celebahq-256') | {'model_id': 'google/ncsnpp-celebahq-256'} | ['diffusers'] | !pip install diffusers
from diffusers import DiffusionPipeline
model_id = 'google/ncsnpp-celebahq-256'
sde_ve = DiffusionPipeline.from_pretrained(model_id)
image = sde_ve().images
image[0].save('sde_ve_generated_image.png') | {'dataset': 'CIFAR-10', 'accuracy': {'Inception_score': 9.89, 'FID': 2.2, 'likelihood': 2.99}} | Score-Based Generative Modeling through Stochastic Differential Equations (SDE) for unconditional image generation. This model achieves record-breaking performance on CIFAR-10 and demonstrates high-fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. |
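Score-SDE sampling is slow on CPU, so a minimal sketch that moves the pipeline to GPU when available and draws a small batch; the batch size and file names are illustrative assumptions.
import torch
from diffusers import DiffusionPipeline
# hedged sketch: run the score-SDE pipeline on GPU if present and save each sample
sde_ve = DiffusionPipeline.from_pretrained('google/ncsnpp-celebahq-256')
sde_ve = sde_ve.to('cuda' if torch.cuda.is_available() else 'cpu')
images = sde_ve(batch_size=2).images
for i, img in enumerate(images):
    img.save(f'sde_ve_sample_{i}.png')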
Computer Vision Unconditional Image Generation | Transformers | Unconditional Image Generation | ceyda/butterfly_cropped_uniq1K_512 | LightweightGAN.from_pretrained('ceyda/butterfly_cropped_uniq1K_512') | ['pretrained_model_name_or_path'] | ['torch', 'huggan.pytorch.lightweight_gan.lightweight_gan'] | import torch
import numpy as np
from PIL import Image
from huggan.pytorch.lightweight_gan.lightweight_gan import LightweightGAN
gan = LightweightGAN.from_pretrained('ceyda/butterfly_cropped_uniq1K_512')
gan.eval()
batch_size = 1
with torch.no_grad():
    ims = gan.G(torch.randn(batch_size, gan.latent_dim)).clamp_(0., 1.) * 255
    ims = ims.permute(0, 2, 3, 1).detach().cpu().numpy().astype(np.uint8)
# ims has shape [B, W, H, C]; convert to a PIL image with Image.fromarray(ims[0]) | {'dataset': 'huggan/smithsonian_butterflies_subset', 'accuracy': 'FID score on 100 images'} | Butterfly GAN model based on the paper 'Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis'. The model is intended for fun and learning purposes. It was trained on 1000 images from the huggan/smithsonian_butterflies_subset dataset, with a focus on low-data training as mentioned in the paper. The model generates high-quality butterfly images. |
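Following the comment in the entry above, a minimal end-to-end sketch that converts the generated tensor batch to PIL images and saves them; the batch size and file names are illustrative assumptions.
import numpy as np
import torch
from PIL import Image
from huggan.pytorch.lightweight_gan.lightweight_gan import LightweightGAN
# hedged sketch: generate four butterflies and write them to disk
gan = LightweightGAN.from_pretrained('ceyda/butterfly_cropped_uniq1K_512')
gan.eval()
with torch.no_grad():
    ims = gan.G(torch.randn(4, gan.latent_dim)).clamp_(0., 1.) * 255
    ims = ims.permute(0, 2, 3, 1).detach().cpu().numpy().astype(np.uint8)
for i, arr in enumerate(ims):
    Image.fromarray(arr).save(f'butterfly_{i}.png')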
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Denoising Diffusion Probabilistic Models (DDPM) | google/ddpm-bedroom-256 | DDPMPipeline.from_pretrained('google/ddpm-bedroom-256') | None | diffusers | !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-bedroom-256'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png') | {'dataset': 'CIFAR10', 'accuracy': {'Inception score': 9.46, 'FID score': 3.17}} | We present high-quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. |
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Unconditional Image Generation | google/ncsnpp-church-256 | DiffusionPipeline.from_pretrained('google/ncsnpp-church-256') | model_id | diffusers | !pip install diffusers
from diffusers import DiffusionPipeline
model_id = 'google/ncsnpp-church-256'
sde_ve = DiffusionPipeline.from_pretrained(model_id)
image = sde_ve().images
image[0].save('sde_ve_generated_image.png') | {'dataset': 'CIFAR-10', 'accuracy': {'Inception_score': 9.89, 'FID': 2.2, 'likelihood': 2.99}} | Score-Based Generative Modeling through Stochastic Differential Equations (SDE) for unconditional image generation. This model achieves record-breaking performance on CIFAR-10 and can generate high-fidelity images of size 1024 x 1024. |
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Unconditional Image Generation | johnowhitaker/sd-class-wikiart-from-bedrooms | DDPMPipeline.from_pretrained('johnowhitaker/sd-class-wikiart-from-bedrooms') | None | diffusers | from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('johnowhitaker/sd-class-wikiart-from-bedrooms')
image = pipeline().images[0]
image | {'dataset': 'https://huggingface.co/datasets/huggan/wikiart', 'accuracy': 'Not provided'} | This model is a diffusion model initialized from https://huggingface.co/google/ddpm-bedroom-256 and trained for 5000 steps on https://huggingface.co/datasets/huggan/wikiart. |
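To draw several samples from this fine-tuned checkpoint, a minimal sketch assuming the standard DDPMPipeline batch_size argument; the file names are illustrative.
from diffusers import DDPMPipeline
# hedged sketch: sample a small batch and save each image
pipeline = DDPMPipeline.from_pretrained('johnowhitaker/sd-class-wikiart-from-bedrooms')
images = pipeline(batch_size=4).images
for i, img in enumerate(images):
    img.save(f'wikiart_sample_{i}.png')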
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Unconditional Image Generation | ddpm-cifar10-32 | DDPMPipeline.from_pretrained('google/ddpm-cifar10-32') | None | !pip install diffusers | from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-cifar10-32'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png') | {'dataset': 'CIFAR10', 'accuracy': {'Inception score': 9.46, 'FID score': 3.17}} | Denoising Diffusion Probabilistic Models (DDPM) for high-quality image synthesis. Trained on the unconditional CIFAR10 dataset. Supports various discrete noise schedulers such as scheduling_ddpm, scheduling_ddim, and scheduling_pndm. |
Computer Vision Unconditional Image Generation | Hugging Face Transformers | Denoising Diffusion Probabilistic Models (DDPM) | google/ddpm-ema-bedroom-256 | DDPMPipeline.from_pretrained('google/ddpm-ema-bedroom-256') | ['model_id'] | ['diffusers'] | !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-ema-bedroom-256'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png') | {'dataset': 'CIFAR10', 'accuracy': {'Inception_score': 9.46, 'FID_score': 3.17}} | Denoising Diffusion Probabilistic Models (DDPM) is a class of latent variable models inspired by nonequilibrium thermodynamics, capable of producing high-quality image synthesis results. The model can use discrete noise schedulers such as scheduling_ddpm, scheduling_ddim, and scheduling_pndm for inference. It obtains an Inception score of 9.46 and a state-of-the-art FID score of 3.17 on the unconditional CIFAR10 dataset. |