Dataset columns:
domain: string (40 distinct values)
framework: string (20 distinct values)
functionality: string (181 distinct values)
api_name: string (length 4 to 87)
api_call: string (length 15 to 216)
api_arguments: string (length 0 to 495)
python_environment_requirements: string (length 0 to 190)
example_code: string (length 0 to 3.35k)
performance: string (length 22 to 1.36k)
description: string (length 35 to 1.11k)
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Inference
google/ncsnpp-ffhq-1024
DiffusionPipeline.from_pretrained('google/ncsnpp-ffhq-1024')
['model_id']
['diffusers']
!pip install diffusers
from diffusers import DiffusionPipeline
model_id = 'google/ncsnpp-ffhq-1024'
sde_ve = DiffusionPipeline.from_pretrained(model_id)
image = sde_ve()['sample']
image[0].save('sde_ve_generated_image.png')
{'dataset': 'CIFAR-10', 'accuracy': {'Inception_score': 9.89, 'FID': 2.2, 'likelihood': 2.99}}
Score-Based Generative Modeling through Stochastic Differential Equations (SDE) for unconditional image generation. Achieves record-breaking performance on CIFAR-10 and demonstrates high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Unconditional Image Generation
ocariz/universe_1400
DDPMPipeline.from_pretrained('ocariz/universe_1400')
diffusers
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('ocariz/universe_1400') image = pipeline().images[0] image
{'dataset': '', 'accuracy': ''}
This model is a diffusion model for unconditional image generation of the universe trained for 1400 epochs.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Diffusers
WiNE-iNEFF/Minecraft-Skin-Diffusion-V2
DDPMPipeline.from_pretrained('WiNE-iNEFF/Minecraft-Skin-Diffusion-V2')
[]
['diffusers']
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('WiNE-iNEFF/Minecraft-Skin-Diffusion-V2') image = pipeline().images[0].convert('RGBA') image
{'dataset': None, 'accuracy': None}
An unconditional image generation model for generating Minecraft skin images using the diffusion model.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Diffusers
Minecraft-Skin-Diffusion
DDPMPipeline.from_pretrained('WiNE-iNEFF/Minecraft-Skin-Diffusion')
{}
['diffusers']
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('WiNE-iNEFF/Minecraft-Skin-Diffusion') image = pipeline().images[0].convert('RGBA') image
{'dataset': '', 'accuracy': ''}
Unconditional Image Generation model for generating Minecraft skins using diffusion-based methods.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Unconditional Image Generation
sd-class-butterflies-32
DDPMPipeline.from_pretrained('clp/sd-class-butterflies-32')
{'model_id': 'clp/sd-class-butterflies-32'}
['diffusers']
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('clp/sd-class-butterflies-32') image = pipeline().images[0] image
{'dataset': None, 'accuracy': None}
This model is a diffusion model for unconditional image generation of cute butterflies.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Unconditional Image Generation
MFawad/sd-class-butterflies-32
DDPMPipeline.from_pretrained('MFawad/sd-class-butterflies-32')
diffusers
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('MFawad/sd-class-butterflies-32') image = pipeline().images[0] image
{'dataset': '', 'accuracy': ''}
This model is a diffusion model for unconditional image generation of cute 🦋.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Unconditional Image Generation
google/ncsnpp-ffhq-256
DiffusionPipeline.from_pretrained('google/ncsnpp-ffhq-256')
{'model_id': 'google/ncsnpp-ffhq-256'}
['diffusers']
!pip install diffusers
from diffusers import DiffusionPipeline
model_id = 'google/ncsnpp-ffhq-256'
sde_ve = DiffusionPipeline.from_pretrained(model_id)
image = sde_ve()['sample']
image[0].save('sde_ve_generated_image.png')
{'dataset': 'CIFAR-10', 'accuracy': {'Inception score': 9.89, 'FID': 2.2, 'Likelihood': 2.99}}
Score-Based Generative Modeling through Stochastic Differential Equations (SDE) for unconditional image generation. Achieves record-breaking performance on CIFAR-10 and demonstrates high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Denoising Diffusion Probabilistic Models (DDPM)
google/ddpm-ema-cat-256
DDPMPipeline.from_pretrained('google/ddpm-ema-cat-256')
['model_id']
['!pip install diffusers']
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = 'google/ddpm-ema-cat-256'
ddpm = DDPMPipeline.from_pretrained(model_id)
image = ddpm().images[0]
image.save('ddpm_generated_image.png')
{'dataset': 'CIFAR10', 'accuracy': {'Inception_score': 9.46, 'FID_score': 3.17}}
Denoising Diffusion Probabilistic Models (DDPM) is a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. It can generate high-quality images, and supports different noise schedulers such as scheduling_ddpm, scheduling_ddim, and scheduling_pndm. On the unconditional CIFAR10 dataset, it achieves an Inception score of 9.46 and a state-of-the-art FID score of 3.17.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Unconditional Image Generation
ocariz/butterfly_200
DDPMPipeline.from_pretrained('ocariz/butterfly_200')
diffusers
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('ocariz/butterfly_200') image = pipeline().images[0] image
{'dataset': '', 'accuracy': ''}
This model is a diffusion model for unconditional image generation of cute butterflies trained for 200 epochs.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Unconditional Image Generation
ntrant7/sd-class-butterflies-32
DDPMPipeline.from_pretrained('ntrant7/sd-class-butterflies-32')
[]
['diffusers']
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('ntrant7/sd-class-butterflies-32') image = pipeline().images[0] image
{'dataset': 'Not specified', 'accuracy': 'Not specified'}
This model is a diffusion model for unconditional image generation of cute butterflies.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Unconditional Image Generation
Apocalypse-19/shoe-generator
DDPMPipeline.from_pretrained('Apocalypse-19/shoe-generator')
[]
['diffusers']
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Apocalypse-19/shoe-generator') image = pipeline().images[0] image
{'dataset': 'custom dataset', 'accuracy': '128x128 resolution'}
This model is a diffusion model for unconditional image generation of shoes trained on a custom dataset at 128x128 resolution.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Diffusers
pravsels/ddpm-ffhq-vintage-finetuned-vintage-3epochs
DDPMPipeline.from_pretrained('pravsels/ddpm-ffhq-vintage-finetuned-vintage-3epochs')
diffusers
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('pravsels/ddpm-ffhq-vintage-finetuned-vintage-3epochs') image = pipeline().images[0] image
{'dataset': '', 'accuracy': ''}
Example Fine-Tuned Model for Unit 2 of the Diffusion Models Class
Computer Vision Video Classification
Hugging Face Transformers
Transformers
microsoft/xclip-base-patch32
XCLIPModel.from_pretrained('microsoft/xclip-base-patch32')
N/A
transformers
For code examples, we refer to the documentation.
{'dataset': 'Kinetics 400', 'accuracy': {'top-1': 80.4, 'top-5': 95.0}}
X-CLIP is a minimal extension of CLIP for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs. This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval.
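The row defers to the documentation for code; a minimal zero-shot sketch, assuming the standard XCLIPProcessor/XCLIPModel interface, with dummy frames standing in for a real clip:
from transformers import XCLIPProcessor, XCLIPModel
import numpy as np
import torch

video = list(np.random.randn(8, 3, 224, 224))  # 8 dummy frames; replace with frames sampled from a real video
processor = XCLIPProcessor.from_pretrained('microsoft/xclip-base-patch32')
model = XCLIPModel.from_pretrained('microsoft/xclip-base-patch32')
inputs = processor(text=['playing sports', 'eating spaghetti', 'dancing'], videos=video, return_tensors='pt', padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_video.softmax(dim=1)  # video-text similarity as probabilities
print(probs)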
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Diffusers
myunus1/diffmodels_galaxies_scratchbook
DDPMPipeline.from_pretrained('myunus1/diffmodels_galaxies_scratchbook')
{'from_pretrained': 'myunus1/diffmodels_galaxies_scratchbook'}
{'package': 'diffusers', 'import': 'from diffusers import DDPMPipeline'}
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('myunus1/diffmodels_galaxies_scratchbook')
image = pipeline().images[0]
image
{'dataset': 'Not provided', 'accuracy': 'Not provided'}
This model is a diffusion model for unconditional image generation of galaxy images.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Unconditional Image Generation
utyug1/sd-class-butterflies-32
DDPMPipeline.from_pretrained('utyug1/sd-class-butterflies-32')
{'pretrained_model': 'utyug1/sd-class-butterflies-32'}
['diffusers']
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('utyug1/sd-class-butterflies-32') image = pipeline().images[0] image
{'dataset': 'Not specified', 'accuracy': 'Not specified'}
This model is a diffusion model for unconditional image generation of cute butterflies.
Computer Vision Unconditional Image Generation
Hugging Face Transformers
Unconditional Image Generation
sd-class-pandas-32
DDPMPipeline.from_pretrained('schdoel/sd-class-AFHQ-32')
{'pretrained_model': 'schdoel/sd-class-AFHQ-32'}
{'package': 'diffusers', 'import': 'from diffusers import DDPMPipeline'}
from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('schdoel/sd-class-AFHQ-32') image = pipeline().images[0] image
{'dataset': 'AFHQ', 'accuracy': 'Not provided'}
This model is a diffusion model for unconditional image generation, trained on the AFHQ dataset.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
facebook/timesformer-base-finetuned-k400
TimesformerForVideoClassification.from_pretrained('facebook/timesformer-base-finetuned-k400')
video, return_tensors
transformers
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(8, 3, 224, 224))
processor = AutoImageProcessor.from_pretrained('facebook/timesformer-base-finetuned-k400')
model = TimesformerForVideoClassification.from_pretrained('facebook/timesformer-base-finetuned-k400')
inputs = processor(video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-400', 'accuracy': 'Not provided'}
TimeSformer is a video classification model pre-trained on Kinetics-400. It was introduced in the paper TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Bertasius et al. and first released in this repository. The model can be used for video classification into one of the 400 possible Kinetics-400 labels.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
MCG-NJU/videomae-base
VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-base')
['video']
['transformers']
from transformers import VideoMAEImageProcessor, VideoMAEForPreTraining
import numpy as np
import torch
num_frames = 16
video = list(np.random.randn(16, 3, 224, 224))
processor = VideoMAEImageProcessor.from_pretrained('MCG-NJU/videomae-base')
model = VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-base')
pixel_values = processor(video, return_tensors='pt').pixel_values
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss
{'dataset': 'Kinetics-400', 'accuracy': 'To be provided'}
VideoMAE is an extension of Masked Autoencoders (MAE) to video. The architecture of the model is very similar to that of a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values for masked patches.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
facebook/timesformer-base-finetuned-k600
TimesformerForVideoClassification.from_pretrained('facebook/timesformer-base-finetuned-k600')
['images']
['transformers']
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(8, 3, 224, 224))
processor = AutoImageProcessor.from_pretrained('facebook/timesformer-base-finetuned-k600')
model = TimesformerForVideoClassification.from_pretrained('facebook/timesformer-base-finetuned-k600')
inputs = processor(images=video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-600', 'accuracy': None}
TimeSformer model pre-trained on Kinetics-600. It was introduced in the paper TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Bertasius et al. and first released in this repository.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
MCG-NJU/videomae-base-finetuned-kinetics
VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-base-finetuned-kinetics')
['video']
['transformers']
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 224, 224))
processor = VideoMAEImageProcessor.from_pretrained('MCG-NJU/videomae-base-finetuned-kinetics')
model = VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-base-finetuned-kinetics')
inputs = processor(video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-400', 'accuracy': {'top-1': 80.9, 'top-5': 94.7}}
VideoMAE model pre-trained for 1600 epochs in a self-supervised way and fine-tuned in a supervised way on Kinetics-400. It was introduced in the paper VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Tong et al. and first released in this repository.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
facebook/timesformer-hr-finetuned-k600
TimesformerForVideoClassification.from_pretrained('facebook/timesformer-hr-finetuned-k600')
{'images': 'video', 'return_tensors': 'pt'}
['transformers', 'numpy', 'torch']
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 448, 448))
processor = AutoImageProcessor.from_pretrained('facebook/timesformer-hr-finetuned-k600')
model = TimesformerForVideoClassification.from_pretrained('facebook/timesformer-hr-finetuned-k600')
inputs = processor(images=video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-600', 'accuracy': 'Not provided'}
TimeSformer model pre-trained on Kinetics-600. It was introduced in the paper TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Bertasius et al. and first released in this repository. The model can be used for video classification into one of the 600 possible Kinetics-600 labels.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
facebook/timesformer-hr-finetuned-k400
TimesformerForVideoClassification.from_pretrained('facebook/timesformer-hr-finetuned-k400')
['images', 'return_tensors']
['transformers', 'numpy', 'torch']
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 448, 448))
processor = AutoImageProcessor.from_pretrained('facebook/timesformer-hr-finetuned-k400')
model = TimesformerForVideoClassification.from_pretrained('facebook/timesformer-hr-finetuned-k400')
inputs = processor(images=video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-400', 'accuracy': 'Not specified'}
TimeSformer model pre-trained on Kinetics-400 for video classification into one of the 400 possible Kinetics-400 labels. Introduced in the paper TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Bertasius et al.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
facebook/timesformer-base-finetuned-ssv2
TimesformerForVideoClassification.from_pretrained('facebook/timesformer-base-finetuned-ssv2')
['images', 'return_tensors']
['transformers', 'numpy', 'torch']
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(8, 3, 224, 224))
processor = AutoImageProcessor.from_pretrained('facebook/timesformer-base-finetuned-ssv2')
model = TimesformerForVideoClassification.from_pretrained('facebook/timesformer-base-finetuned-ssv2')
inputs = processor(images=video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Something Something v2', 'accuracy': 'Not provided'}
TimeSformer model pre-trained on Something Something v2. It was introduced in the paper TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Bertasius et al. and first released in this repository.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
facebook/timesformer-hr-finetuned-ssv2
TimesformerForVideoClassification.from_pretrained('facebook/timesformer-hr-finetuned-ssv2')
['images', 'return_tensors']
['transformers', 'numpy', 'torch']
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 448, 448))
processor = AutoImageProcessor.from_pretrained('facebook/timesformer-hr-finetuned-ssv2')
model = TimesformerForVideoClassification.from_pretrained('facebook/timesformer-hr-finetuned-ssv2')
inputs = processor(images=video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Something Something v2', 'accuracy': 'Not provided'}
TimeSformer model pre-trained on Something Something v2. It was introduced in the paper TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Bertasius et al. and first released in this repository.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
videomae-large
VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-large')
pixel_values, bool_masked_pos
transformers
from transformers import VideoMAEImageProcessor, VideoMAEForPreTraining
import numpy as np
import torch
num_frames = 16
video = list(np.random.randn(16, 3, 224, 224))
processor = VideoMAEImageProcessor.from_pretrained('MCG-NJU/videomae-large')
model = VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-large')
pixel_values = processor(video, return_tensors='pt').pixel_values
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss
{'dataset': 'Kinetics-400', 'accuracy': 'Not provided'}
VideoMAE is an extension of Masked Autoencoders (MAE) to video. The architecture is very similar to a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values of masked patches. Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for classification tasks, and fixed sine/cosine position embeddings are added before the sequence is fed to the Transformer encoder layers. Through pre-training, the model learns an inner representation of videos that can then be used to extract features useful for downstream tasks.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
MCG-NJU/videomae-base-finetuned-ssv2
VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-base-finetuned-ssv2')
video
transformers
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 224, 224))
processor = VideoMAEImageProcessor.from_pretrained('MCG-NJU/videomae-base-finetuned-ssv2')
model = VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-base-finetuned-ssv2')
inputs = processor(video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Something-Something-v2', 'accuracy': {'top-1': 70.6, 'top-5': 92.6}}
VideoMAE model pre-trained for 2400 epochs in a self-supervised way and fine-tuned in a supervised way on Something-Something-v2. It was introduced in the paper VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Tong et al. and first released in this repository.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
MCG-NJU/videomae-base-short
VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-base-short')
{'pretrained_model_name_or_path': 'MCG-NJU/videomae-base-short'}
{'packages': ['transformers']}
from transformers import VideoMAEImageProcessor, VideoMAEForPreTraining
import numpy as np
import torch
num_frames = 16
video = list(np.random.randn(16, 3, 224, 224))
processor = VideoMAEImageProcessor.from_pretrained('MCG-NJU/videomae-base-short')
model = VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-base-short')
pixel_values = processor(video, return_tensors='pt').pixel_values
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss
{'dataset': 'Kinetics-400', 'accuracy': 'Not provided'}
VideoMAE is an extension of Masked Autoencoders (MAE) to video. The architecture is very similar to a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values of masked patches. Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for classification tasks, and fixed sine/cosine position embeddings are added before the sequence is fed to the Transformer encoder layers. Through pre-training, the model learns an inner representation of videos that can then be used to extract features useful for downstream tasks.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
MCG-NJU/videomae-large-finetuned-kinetics
VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-large-finetuned-kinetics')
['video']
['transformers']
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 224, 224))
processor = VideoMAEImageProcessor.from_pretrained('MCG-NJU/videomae-large-finetuned-kinetics')
model = VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-large-finetuned-kinetics')
inputs = processor(video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-400', 'accuracy': {'top-1': 84.7, 'top-5': 96.5}}
VideoMAE model pre-trained for 1600 epochs in a self-supervised way and fine-tuned in a supervised way on Kinetics-400. It was introduced in the paper VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Tong et al. and first released in this repository.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
MCG-NJU/videomae-base-short-finetuned-kinetics
VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-base-short-finetuned-kinetics')
['video']
['transformers']
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification import numpy as np import torch video = list(np.random.randn(16, 3, 224, 224)) processor = VideoMAEImageProcessor.from_pretrained('MCG-NJU/videomae-base-short-finetuned-kinetics') model = VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-base-short-finetuned-kinetics') inputs = processor(video, return_tensors='pt') with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-400', 'accuracy': {'top-1': 79.4, 'top-5': 94.1}}
VideoMAE model pre-trained for 800 epochs in a self-supervised way and fine-tuned in a supervised way on Kinetics-400. It was introduced in the paper VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Tong et al. and first released in this repository.
Computer Vision Video Classification
Hugging Face Transformers
Transformers
videomae-base-finetuned-RealLifeViolenceSituations-subset
AutoModelForVideoClassification.from_pretrained('dangle124/videomae-base-finetuned-RealLifeViolenceSituations-subset')
{'model_name': 'dangle124/videomae-base-finetuned-RealLifeViolenceSituations-subset'}
{'transformers': '4.27.2', 'pytorch': '1.13.1', 'datasets': '2.10.1', 'tokenizers': '0.13.2'}
{'dataset': 'unknown', 'accuracy': 0.9533}
This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It is trained for video classification task, specifically for RealLifeViolenceSituations.
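The example_code field is empty for this row; a minimal inference sketch, assuming the checkpoint exposes the standard VideoMAE video-classification interface, with dummy frames in place of a real clip:
from transformers import AutoImageProcessor, AutoModelForVideoClassification
import numpy as np
import torch

video = list(np.random.randn(16, 3, 224, 224))  # dummy 16-frame clip
processor = AutoImageProcessor.from_pretrained('dangle124/videomae-base-finetuned-RealLifeViolenceSituations-subset')
model = AutoModelForVideoClassification.from_pretrained('dangle124/videomae-base-finetuned-RealLifeViolenceSituations-subset')
inputs = processor(video, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits
print('Predicted class:', model.config.id2label[logits.argmax(-1).item()])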
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
fcakyon/timesformer-large-finetuned-k400
TimesformerForVideoClassification.from_pretrained('fcakyon/timesformer-large-finetuned-k400')
['video']
['transformers']
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(96, 3, 224, 224))
processor = AutoImageProcessor.from_pretrained('fcakyon/timesformer-large-finetuned-k400')
model = TimesformerForVideoClassification.from_pretrained('fcakyon/timesformer-large-finetuned-k400')
inputs = processor(video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-400', 'accuracy': 'Not provided'}
TimeSformer model pre-trained on Kinetics-400 for video classification into one of the 400 possible Kinetics-400 labels. Introduced in the paper 'TimeSformer: Is Space-Time Attention All You Need for Video Understanding?' by Bertasius et al.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
videomae-base-short-ssv2
VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-base-short-ssv2')
['video', 'return_tensors']
['transformers', 'numpy', 'torch']
from transformers import VideoMAEImageProcessor, VideoMAEForPreTraining
import numpy as np
import torch
num_frames = 16
video = list(np.random.randn(16, 3, 224, 224))
processor = VideoMAEImageProcessor.from_pretrained('MCG-NJU/videomae-base-short-ssv2')
model = VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-base-short-ssv2')
pixel_values = processor(video, return_tensors='pt').pixel_values
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss
{'dataset': 'Something-Something-v2', 'accuracy': 'N/A'}
VideoMAE is an extension of Masked Autoencoders (MAE) to video. The architecture is very similar to a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values of masked patches. Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for classification tasks, and fixed sine/cosine position embeddings are added before the sequence is fed to the Transformer encoder layers. Through pre-training, the model learns an inner representation of videos that can then be used to extract features for downstream tasks: for instance, given a dataset of labeled videos, one can train a standard classifier by placing a linear layer on top of the pre-trained encoder, typically on the [CLS] token, whose final hidden state can be seen as a representation of the entire video.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
lmazzon70/videomae-base-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kb
AutoModelForVideoClassification.from_pretrained('lmazzon70/videomae-base-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kb')
transformers, torch, tokenizers, datasets
{'dataset': 'unknown', 'accuracy': 0.7298}
This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset. It achieves the following results on the evaluation set: Loss: 0.5482, Accuracy: 0.7298.
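No example code is listed for this row; a minimal sketch using the video-classification pipeline (decord is required for video decoding, and the clip path is a placeholder):
from transformers import pipeline

classifier = pipeline('video-classification', model='lmazzon70/videomae-base-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kb')
results = classifier('path/to/video.mp4')  # placeholder clip path
print(results)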
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
videomae-small-finetuned-ssv2
VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-small-finetuned-ssv2')
{'model_name': 'MCG-NJU/videomae-small-finetuned-ssv2'}
{'transformers': 'from transformers import VideoMAEFeatureExtractor, VideoMAEForVideoClassification', 'numpy': 'import numpy as np', 'torch': 'import torch'}
from transformers import VideoMAEFeatureExtractor, VideoMAEForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 224, 224))
feature_extractor = VideoMAEFeatureExtractor.from_pretrained('MCG-NJU/videomae-small-finetuned-ssv2')
model = VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-small-finetuned-ssv2')
inputs = feature_extractor(video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Something-Something V2', 'accuracy': {'top-1': 66.8, 'top-5': 90.3}}
VideoMAE is an extension of Masked Autoencoders (MAE) to video. The architecture is very similar to a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values of masked patches. Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for classification tasks, and fixed sine/cosine position embeddings are added before the sequence is fed to the Transformer encoder layers. Through pre-training, the model learns an inner representation of videos that can then be used to extract features for downstream tasks: for instance, given a dataset of labeled videos, one can train a standard classifier by placing a linear layer on top of the pre-trained encoder, typically on the [CLS] token, whose final hidden state can be seen as a representation of the entire video.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
lmazzon70/videomae-base-finetuned-kinetics-finetuned-rwf2000mp4-epochs8-batch8-kb
AutoModelForVideoClassification.from_pretrained('lmazzon70/videomae-base-finetuned-kinetics-finetuned-rwf2000mp4-epochs8-batch8-kb')
[]
['transformers']
{'dataset': 'unknown', 'accuracy': 0.7453}
This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset.
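As with the sibling checkpoint above, no example code is given; a minimal sketch, assuming the standard video-classification pipeline applies (placeholder clip path, decord required):
from transformers import pipeline

classifier = pipeline('video-classification', model='lmazzon70/videomae-base-finetuned-kinetics-finetuned-rwf2000mp4-epochs8-batch8-kb')
print(classifier('path/to/video.mp4'))  # placeholder clip path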
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset
AutoModelForVideoClassification.from_pretrained('sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset')
{'learning_rate': 5e-05, 'train_batch_size': 8, 'eval_batch_size': 8, 'seed': 42, 'optimizer': 'Adam with betas=(0.9,0.999) and epsilon=1e-08', 'lr_scheduler_type': 'linear', 'lr_scheduler_warmup_ratio': 0.1, 'training_steps': 111}
{'transformers': '4.24.0', 'pytorch': '1.12.1+cu113', 'datasets': '2.6.1', 'tokenizers': '0.13.2'}
{'dataset': 'unknown', 'accuracy': 1.0}
This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset.
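The row lists training hyperparameters but no usage example; a minimal inference sketch, assuming the standard VideoMAE classification API, with dummy frames as input:
from transformers import AutoImageProcessor, AutoModelForVideoClassification
import numpy as np
import torch

video = list(np.random.randn(16, 3, 224, 224))  # dummy 16-frame clip
processor = AutoImageProcessor.from_pretrained('sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset')
model = AutoModelForVideoClassification.from_pretrained('sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset')
with torch.no_grad():
    logits = model(**processor(video, return_tensors='pt')).logits
print(model.config.id2label[logits.argmax(-1).item()])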
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
fcakyon/timesformer-hr-finetuned-k400
TimesformerForVideoClassification.from_pretrained('fcakyon/timesformer-hr-finetuned-k400')
['images', 'return_tensors']
['transformers', 'numpy', 'torch']
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch
video = list(np.random.randn(16, 3, 448, 448))
processor = AutoImageProcessor.from_pretrained('fcakyon/timesformer-hr-finetuned-k400')
model = TimesformerForVideoClassification.from_pretrained('fcakyon/timesformer-hr-finetuned-k400')
inputs = processor(images=video, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-400', 'accuracy': 'Not provided'}
TimeSformer model pre-trained on Kinetics-400 for video classification into one of the 400 possible Kinetics-400 labels. Introduced in the paper 'TimeSformer: Is Space-Time Attention All You Need for Video Understanding?' by Bertasius et al.
Computer Vision Video Classification
Hugging Face Transformers
Transformers
videomae-base-ssv2
VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-base-short-ssv2')
video
transformers
from transformers import VideoMAEFeatureExtractor, VideoMAEForPreTraining
import numpy as np
import torch
num_frames = 16
video = list(np.random.randn(16, 3, 224, 224))
feature_extractor = VideoMAEFeatureExtractor.from_pretrained('MCG-NJU/videomae-base-short-ssv2')
model = VideoMAEForPreTraining.from_pretrained('MCG-NJU/videomae-base-short-ssv2')
pixel_values = feature_extractor(video, return_tensors='pt').pixel_values
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss
{'dataset': 'Something-Something-v2', 'accuracy': ''}
VideoMAE is an extension of Masked Autoencoders (MAE) to video. The architecture is very similar to a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values of masked patches. Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for classification tasks, and fixed sine/cosine position embeddings are added before the sequence is fed to the Transformer encoder layers. Through pre-training, the model learns an inner representation of videos that can then be used to extract features for downstream tasks: for instance, given a dataset of labeled videos, one can train a standard classifier by placing a linear layer on top of the pre-trained encoder, typically on the [CLS] token, whose final hidden state can be seen as a representation of the entire video.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
videomae-small-finetuned-kinetics
VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-small-finetuned-kinetics')
{'video': 'list(np.random.randn(16, 3, 224, 224))'}
['transformers', 'numpy', 'torch']
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification import numpy as np import torch video = list(np.random.randn(16, 3, 224, 224)) processor = VideoMAEImageProcessor.from_pretrained('MCG-NJU/videomae-small-finetuned-kinetics') model = VideoMAEForVideoClassification.from_pretrained('MCG-NJU/videomae-small-finetuned-kinetics') inputs = processor(video, return_tensors='pt') with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print('Predicted class:', model.config.id2label[predicted_class_idx])
{'dataset': 'Kinetics-400', 'accuracy': {'top-1': 79.0, 'top-5': 93.8}}
VideoMAE model pre-trained for 1600 epochs in a self-supervised way and fine-tuned in a supervised way on Kinetics-400. It was introduced in the paper VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Tong et al. and first released in this repository.
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
lmazzon70/videomae-large-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kl-torch2
AutoModelForVideoClassification.from_pretrained('lmazzon70/videomae-large-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kl-torch2')
video_path
transformers==4.27.4, torch==2.0.0+cu117, datasets==2.11.0, tokenizers==0.13.2
{'dataset': 'unknown', 'accuracy': 0.7212}
This model is a fine-tuned version of MCG-NJU/videomae-large-finetuned-kinetics on an unknown dataset.
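Only a video_path argument is listed and no example code is given; a minimal sketch, assuming the video-classification pipeline (decord required, placeholder clip path):
from transformers import pipeline

classifier = pipeline('video-classification', model='lmazzon70/videomae-large-finetuned-kinetics-finetuned-rwf2000-epochs8-batch8-kl-torch2')
print(classifier('path/to/video.mp4'))  # placeholder clip path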
Computer Vision Video Classification
Hugging Face Transformers
Transformers
tiny-random-VideoMAEForVideoClassification
VideoClassificationPipeline(model='hf-tiny-model-private/tiny-random-VideoMAEForVideoClassification')
model
transformers
{'dataset': '', 'accuracy': ''}
A tiny random VideoMAE model for video classification.
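No example code is given; a minimal sketch using the pipeline factory, which builds a VideoClassificationPipeline under the hood (decord is needed to decode the placeholder clip):
from transformers import pipeline

classifier = pipeline('video-classification', model='hf-tiny-model-private/tiny-random-VideoMAEForVideoClassification')
print(classifier('path/to/video.mp4'))  # placeholder path; outputs are meaningless since the model weights are random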
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
videomae-base-finetuned-ucf101-subset
AutoModelForVideoClassification.from_pretrained('zahrav/videomae-base-finetuned-ucf101-subset')
N/A
transformers==4.25.1, torch==1.10.0, datasets==2.7.1, tokenizers==0.12.1
N/A
{'dataset': 'unknown', 'accuracy': 0.8968}
This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It is used for video classification tasks.
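The example_code field is N/A; a minimal inference sketch, assuming the checkpoint follows the standard VideoMAE classification API, with dummy frames as input:
from transformers import AutoImageProcessor, AutoModelForVideoClassification
import numpy as np
import torch

video = list(np.random.randn(16, 3, 224, 224))  # dummy 16-frame clip
processor = AutoImageProcessor.from_pretrained('zahrav/videomae-base-finetuned-ucf101-subset')
model = AutoModelForVideoClassification.from_pretrained('zahrav/videomae-base-finetuned-ucf101-subset')
with torch.no_grad():
    logits = model(**processor(video, return_tensors='pt')).logits
print(model.config.id2label[logits.argmax(-1).item()])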
Computer Vision Video Classification
Hugging Face Transformers
Video Action Recognition
videomae-base-finetuned-ucf101
VideoMAEForVideoClassification.from_pretrained('nateraw/videomae-base-finetuned-ucf101')
{'pretrained_model_name_or_path': 'nateraw/videomae-base-finetuned-ucf101'}
['transformers', 'decord', 'huggingface_hub']
from decord import VideoReader, cpu
import torch
import numpy as np
from transformers import VideoMAEFeatureExtractor, VideoMAEForVideoClassification
from huggingface_hub import hf_hub_download

np.random.seed(0)

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices

file_path = hf_hub_download(repo_id='nateraw/dino-clips', filename='archery.mp4', repo_type='space')
videoreader = VideoReader(file_path, num_threads=1, ctx=cpu(0))
videoreader.seek(0)
indices = sample_frame_indices(clip_len=16, frame_sample_rate=4, seg_len=len(videoreader))
video = videoreader.get_batch(indices).asnumpy()
feature_extractor = VideoMAEFeatureExtractor.from_pretrained('nateraw/videomae-base-finetuned-ucf101')
model = VideoMAEForVideoClassification.from_pretrained('nateraw/videomae-base-finetuned-ucf101')
inputs = feature_extractor(list(video), return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
{'dataset': 'UCF101', 'accuracy': 0.758209764957428}
VideoMAE Base model fine tuned on UCF101 for Video Action Recognition
Computer Vision Video Classification
Hugging Face Transformers
Video Classification
sayakpaul/videomae-base-finetuned-ucf101-subset
AutoModelForVideoClassification.from_pretrained('sayakpaul/videomae-base-finetuned-ucf101-subset')
{'learning_rate': 5e-05, 'train_batch_size': 8, 'eval_batch_size': 8, 'seed': 42, 'optimizer': 'Adam with betas=(0.9,0.999) and epsilon=1e-08', 'lr_scheduler_type': 'linear', 'lr_scheduler_warmup_ratio': 0.1, 'training_steps': 148}
{'Transformers': '4.24.0', 'Pytorch': '1.12.1+cu113', 'Datasets': '2.6.1', 'Tokenizers': '0.13.2'}
from transformers import AutoImageProcessor, AutoModelForVideoClassification
model = AutoModelForVideoClassification.from_pretrained('sayakpaul/videomae-base-finetuned-ucf101-subset')
processor = AutoImageProcessor.from_pretrained('sayakpaul/videomae-base-finetuned-ucf101-subset')
{'dataset': 'unknown', 'accuracy': 0.8645}
This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It achieves the following results on the evaluation set: Loss: 0.3992, Accuracy: 0.8645.
Computer Vision Zero-Shot Image Classification
Transformers
Zero-Shot Image Classification
openai/clip-vit-large-patch14-336
CLIPModel.from_pretrained('openai/clip-vit-large-patch14-336')
image_path, tokenizer, model
Transformers 4.21.3, TensorFlow 2.8.2, Tokenizers 0.12.1
N/A
{'dataset': 'unknown', 'accuracy': 'N/A'}
OpenAI's CLIP ViT-L/14 model operating at 336x336 input resolution. Like other CLIP checkpoints, it can be used for zero-shot image classification and image-text retrieval.
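The example_code field is N/A; a minimal zero-shot sketch following the same CLIPModel/CLIPProcessor pattern used for the other OpenAI CLIP rows (the image URL is the COCO sample used elsewhere in this table):
from PIL import Image
import requests
import torch
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained('openai/clip-vit-large-patch14-336')
processor = CLIPProcessor.from_pretrained('openai/clip-vit-large-patch14-336')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=['a photo of a cat', 'a photo of a dog'], images=image, return_tensors='pt', padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity as probabilities
print(probs)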
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
laion/CLIP-ViT-B-32-laion2B-s34B-b79K
pipeline('zero-shot-image-classification', model='laion/CLIP-ViT-B-32-laion2B-s34B-b79K')
{'image': 'path/to/image', 'class_names': ['class1', 'class2', 'class3']}
{'transformers': '>=4.0.0'}
from transformers import pipeline
classifier = pipeline('zero-shot-image-classification', model='laion/CLIP-ViT-B-32-laion2B-s34B-b79K')
classifier('path/to/image', candidate_labels=['class1', 'class2', 'class3'])
{'dataset': 'ImageNet-1k', 'accuracy': 66.6}
A CLIP ViT-B/32 model trained with the LAION-2B English subset of LAION-5B using OpenCLIP. It enables researchers to better understand and explore zero-shot, arbitrary image classification. The model can be used for zero-shot image classification, image and text retrieval, among others.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
openai/clip-vit-base-patch32
CLIPModel.from_pretrained('openai/clip-vit-base-patch32')
['text', 'images', 'return_tensors', 'padding']
['PIL', 'requests', 'transformers']
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained('openai/clip-vit-base-patch32')
processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=['a photo of a cat', 'a photo of a dog'], images=image, return_tensors='pt', padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
{'dataset': ['Food101', 'CIFAR10', 'CIFAR100', 'Birdsnap', 'SUN397', 'Stanford Cars', 'FGVC Aircraft', 'VOC2007', 'DTD', 'Oxford-IIIT Pet dataset', 'Caltech101', 'Flowers102', 'MNIST', 'SVHN', 'IIIT5K', 'Hateful Memes', 'SST-2', 'UCF101', 'Kinetics700', 'Country211', 'CLEVR Counting', 'KITTI Distance', 'STL-10', 'RareAct', 'Flickr30', 'MSCOCO', 'ImageNet', 'ImageNet-A', 'ImageNet-R', 'ImageNet Sketch', 'ObjectNet (ImageNet Overlap)', 'Youtube-BB', 'ImageNet-Vid'], 'accuracy': 'varies'}
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
openai/clip-vit-large-patch14
CLIPModel.from_pretrained('openai/clip-vit-large-patch14')
{'text': ['a photo of a cat', 'a photo of a dog'], 'images': 'image', 'return_tensors': 'pt', 'padding': 'True'}
{'packages': ['PIL', 'requests', 'transformers']}
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained('openai/clip-vit-large-patch14')
processor = CLIPProcessor.from_pretrained('openai/clip-vit-large-patch14')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=['a photo of a cat', 'a photo of a dog'], images=image, return_tensors='pt', padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
{'dataset': ['Food101', 'CIFAR10', 'CIFAR100', 'Birdsnap', 'SUN397', 'Stanford Cars', 'FGVC Aircraft', 'VOC2007', 'DTD', 'Oxford-IIIT Pet dataset', 'Caltech101', 'Flowers102', 'MNIST', 'SVHN', 'IIIT5K', 'Hateful Memes', 'SST-2', 'UCF101', 'Kinetics700', 'Country211', 'CLEVR Counting', 'KITTI Distance', 'STL-10', 'RareAct', 'Flickr30', 'MSCOCO', 'ImageNet', 'ImageNet-A', 'ImageNet-R', 'ImageNet Sketch', 'ObjectNet (ImageNet Overlap)', 'Youtube-BB', 'ImageNet-Vid'], 'accuracy': 'varies depending on the dataset'}
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
laion/CLIP-ViT-L-14-laion2B-s32B-b82K
CLIPModel.from_pretrained('laion/CLIP-ViT-L-14-laion2B-s32B-b82K')
transformers
{'dataset': 'ImageNet-1k', 'accuracy': 75.3}
A CLIP ViT L/14 model trained with the LAION-2B English subset of LAION-5B using OpenCLIP. Intended for research purposes and exploring zero-shot, arbitrary image classification. Can be used for interdisciplinary studies of the potential impact of such model.
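No arguments or example code are listed for this row; a minimal sketch, assuming the repository ships transformers-compatible CLIP weights as the api_call suggests (placeholder image path and labels):
from PIL import Image
import torch
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained('laion/CLIP-ViT-L-14-laion2B-s32B-b82K')
processor = CLIPProcessor.from_pretrained('laion/CLIP-ViT-L-14-laion2B-s32B-b82K')
image = Image.open('path/to/image.jpg')  # placeholder image path
inputs = processor(text=['a photo of a cat', 'a photo of a dog'], images=image, return_tensors='pt', padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=1)
print(probs)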
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-ViT-g-14-laion2B-s34B-b88K
pipeline('zero-shot-image-classification', model='laion/CLIP-ViT-g-14-laion2B-s34B-b88K')
{'image': 'path/to/image/file', 'class_names': 'list_of_class_names'}
{'huggingface_hub': '0.0.17', 'transformers': '4.11.3', 'torch': '1.9.0', 'torchvision': '0.10.0'}
None
{'dataset': None, 'accuracy': None}
A zero-shot image classification model based on OpenCLIP, which can classify images into various categories without requiring any training data for those categories.
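The example_code field is None; a minimal sketch using the zero-shot-image-classification pipeline named in the api_call, assuming the checkpoint loads through transformers (the image path and labels are placeholders):
from transformers import pipeline

classifier = pipeline('zero-shot-image-classification', model='laion/CLIP-ViT-g-14-laion2B-s34B-b88K')
results = classifier('path/to/image.jpg', candidate_labels=['cat', 'dog', 'bird'])
print(results)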
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
laion/CLIP-ViT-bigG-14-laion2B-39B-b160k
pipeline('zero-shot-image-classification', model='laion/CLIP-ViT-bigG-14-laion2B-39B-b160k')
['image', 'possible_class_names']
['transformers']
from transformers import pipeline
classifier = pipeline('zero-shot-image-classification', model='laion/CLIP-ViT-bigG-14-laion2B-39B-b160k')
classifier('path/to/image.jpg', candidate_labels=['cat', 'dog'])
{'dataset': 'ImageNet-1k', 'accuracy': '80.1'}
A CLIP ViT-bigG/14 model trained with the LAION-2B English subset of LAION-5B using OpenCLIP. The model is intended for research purposes and enables researchers to better understand and explore zero-shot, arbitrary image classification. It can be used for interdisciplinary studies of the potential impact of such models. The model achieves a 80.1 zero-shot top-1 accuracy on ImageNet-1k.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
openai/clip-vit-base-patch16
CLIPModel.from_pretrained('openai/clip-vit-base-patch16')
['text', 'images', 'return_tensors', 'padding']
['PIL', 'requests', 'transformers']
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained('openai/clip-vit-base-patch16')
processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch16')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=['a photo of a cat', 'a photo of a dog'], images=image, return_tensors='pt', padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
{'dataset': ['Food101', 'CIFAR10', 'CIFAR100', 'Birdsnap', 'SUN397', 'Stanford Cars', 'FGVC Aircraft', 'VOC2007', 'DTD', 'Oxford-IIIT Pet dataset', 'Caltech101', 'Flowers102', 'MNIST', 'SVHN', 'IIIT5K', 'Hateful Memes', 'SST-2', 'UCF101', 'Kinetics700', 'Country211', 'CLEVR Counting', 'KITTI Distance', 'STL-10', 'RareAct', 'Flickr30', 'MSCOCO', 'ImageNet', 'ImageNet-A', 'ImageNet-R', 'ImageNet Sketch', 'ObjectNet (ImageNet Overlap)', 'Youtube-BB', 'ImageNet-Vid'], 'accuracy': 'varies depending on the dataset'}
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner.
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-ViT-B-16-laion2B-s34B-b88K
pipeline('zero-shot-image-classification', model='laion/CLIP-ViT-B-16-laion2B-s34B-b88K')
{'image': 'Path to image file or URL', 'class_names': 'List of possible class names (comma-separated)'}
{'transformers': '>=4.11.0'}
from transformers import pipeline
classify = pipeline('zero-shot-image-classification', model='laion/CLIP-ViT-B-16-laion2B-s34B-b88K')
classify('/path/to/image.jpg', candidate_labels=['cat', 'dog'])
{'dataset': 'ImageNet-1k', 'accuracy': '70.2%'}
A CLIP ViT-B/16 model trained with the LAION-2B English subset of LAION-5B using OpenCLIP. This model is intended for research purposes and can be used for zero-shot image classification, image and text retrieval, and other related tasks.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
patrickjohncyh/fashion-clip
CLIPModel.from_pretrained('patrickjohncyh/fashion-clip')
{'image': 'File', 'class_names': 'String (comma-separated)'}
['transformers']
from transformers import CLIPProcessor, CLIPModel; model = CLIPModel.from_pretrained('patrickjohncyh/fashion-clip'); processor = CLIPProcessor.from_pretrained('patrickjohncyh/fashion-clip'); inputs = processor(text='blue shoes', images=image, return_tensors='pt', padding=True); logits_per_image = model(**inputs).logits_per_image; probs = logits_per_image.softmax(dim=-1).tolist()[0]
{'dataset': [{'name': 'FMNIST', 'accuracy': 0.83}, {'name': 'KAGL', 'accuracy': 0.73}, {'name': 'DEEP', 'accuracy': 0.62}]}
FashionCLIP is a CLIP-based model developed to produce general product representations for fashion concepts. Leveraging the pre-trained checkpoint (ViT-B/32) released by OpenAI, it is trained on a large, high-quality novel fashion dataset to study whether domain specific fine-tuning of CLIP-like models is sufficient to produce product representations that are zero-shot transferable to entirely new datasets and tasks.
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup
pipeline('image-classification', model='laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup')
image_path, class_names
transformers
results = model(image_path, class_names='cat, dog, bird')
{'dataset': 'ImageNet-1k', 'accuracy': '76.9'}
A series of CLIP ConvNeXt-Large (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of LAION-5B using OpenCLIP. The models utilize the timm ConvNeXt-Large model (convnext_large) as the image tower, a MLP (fc - gelu - drop - fc) head in vision tower instead of the single projection of other CLIP models, and a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768).
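The snippet above is a fragment, and this ConvNeXt CLIP checkpoint is published as an OpenCLIP model; a minimal zero-shot sketch, assuming the open_clip_torch package and loading the checkpoint via its hf-hub reference (image path and labels are placeholders):
import torch
import open_clip
from PIL import Image

ckpt = 'hf-hub:laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup'
model, _, preprocess = open_clip.create_model_and_transforms(ckpt)
tokenizer = open_clip.get_tokenizer(ckpt)

image = preprocess(Image.open('path/to/image.jpg')).unsqueeze(0)  # placeholder image
text = tokenizer(['a photo of a cat', 'a photo of a dog', 'a photo of a bird'])
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)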
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K
pipeline('image-classification', model='laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K')
{'image': 'path to image file', 'class_names': 'list of possible class names (comma-separated)'}
['transformers']
from transformers import pipeline; model = pipeline('image-classification', model='laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K'); model('path/to/image.jpg', ['cat', 'dog'])
{'dataset': 'ImageNet-1k', 'accuracy': '70.8% to 71.7%'}
A series of CLIP ConvNeXt-Base (w/ wide embed dim) models trained on subsets LAION-5B using OpenCLIP. These models achieve between 70.8 and 71.7 zero-shot top-1 accuracy on ImageNet-1k. They can be used for zero-shot image classification, image and text retrieval, and other tasks.
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup
pipeline('image-classification', model='laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup')
image, class_names
['transformers', 'torch']
results = classifier(image, class_names='cat, dog')
{'dataset': 'ImageNet-1k', 'accuracy': '79.1-79.4'}
A series of CLIP ConvNeXt-XXLarge models trained on LAION-2B (English), a subset of LAION-5B, using OpenCLIP. These models achieve between 79.1 and 79.4 top-1 zero-shot accuracy on ImageNet-1k.
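As above, the snippet is a fragment and the checkpoint is an OpenCLIP model; a shorter sketch under the same open_clip_torch assumption (placeholder image path):
import torch
import open_clip
from PIL import Image

ckpt = 'hf-hub:laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup'
model, _, preprocess = open_clip.create_model_and_transforms(ckpt)
tokenizer = open_clip.get_tokenizer(ckpt)
image = preprocess(Image.open('path/to/image.jpg')).unsqueeze(0)  # placeholder image
text = tokenizer(['cat', 'dog'])
with torch.no_grad():
    sims = model.encode_image(image) @ model.encode_text(text).T  # unnormalized image-text similarity scores
print(sims)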
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-convnext_base_w-laion2B-s13B-b82K
CLIPModel.from_pretrained('laion/CLIP-convnext_base_w-laion2B-s13B-b82K')
{'image_path': 'path to the image file', 'labels': 'list of possible class names'}
['transformers']
from transformers import pipeline; clip = pipeline('image-classification', model='laion/CLIP-convnext_base_w-laion2B-s13B-b82K'); clip('path/to/image.jpg', ['cat', 'dog'])
{'dataset': 'ImageNet-1k', 'accuracy': '70.8 - 71.7%'}
A series of CLIP ConvNeXt-Base (w/ wide embed dim) models trained on subsets LAION-5B using OpenCLIP. The models achieve between 70.8 and 71.7 zero-shot top-1 accuracy on ImageNet-1k. The models can be used for zero-shot image classification, image and text retrieval, and other related tasks.
Multimodal Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224
pipeline('zero-shot-image-classification', model='microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224')
image, possible_class_names
transformers, torch, torchvision
from transformers import pipeline clip = pipeline('zero-shot-image-classification', model='microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224') image = 'path/to/image.png' possible_class_names = ['class1', 'class2', 'class3'] result = clip(image, possible_class_names)
{'dataset': 'PMC-15M', 'accuracy': 'State of the art'}
BiomedCLIP is a biomedical vision-language foundation model pretrained on PMC-15M, a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central, using contrastive learning. It uses PubMedBERT as the text encoder and Vision Transformer as the image encoder, with domain-specific adaptations. It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg
pipeline('image-classification', model='laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg')
{'image_path': 'Path to the image', 'class_names': 'Comma-separated list of possible class names'}
transformers
from transformers import pipeline
image_classification = pipeline('image-classification', model='laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg')
image_path = 'path/to/image.jpg'
class_names = 'dog, cat'
result = image_classification(image_path, class_names)
print(result)
{'dataset': 'ImageNet-1k', 'accuracy': '70.8-71.7%'}
A series of CLIP ConvNeXt-Base (w/ wide embed dim) models trained on subsets of LAION-5B using OpenCLIP. The models use the timm ConvNeXt-Base model (convnext_base) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
flax-community/clip-rsicd-v2
CLIPModel.from_pretrained('flax-community/clip-rsicd-v2')
{'text': ['a photo of a residential area', 'a photo of a playground', 'a photo of a stadium', 'a photo of a forest', 'a photo of an airport'], 'images': 'image', 'return_tensors': 'pt', 'padding': 'True'}
['PIL', 'requests', 'transformers']
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained('flax-community/clip-rsicd-v2')
processor = CLIPProcessor.from_pretrained('flax-community/clip-rsicd-v2')
url = 'https://raw.githubusercontent.com/arampacha/CLIP-rsicd/master/data/stadium_1.jpg'
image = Image.open(requests.get(url, stream=True).raw)
labels = ['residential area', 'playground', 'stadium', 'forest', 'airport']
inputs = processor(text=[f'a photo of a {l}' for l in labels], images=image, return_tensors='pt', padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
for l, p in zip(labels, probs[0]):
    print(f'{l:<16} {p:.4f}')
{'dataset': {'RSICD': {'original CLIP': {'k=1': 0.572, 'k=3': 0.745, 'k=5': 0.837, 'k=10': 0.939}, 'clip-rsicd-v2 (this model)': {'k=1': 0.883, 'k=3': 0.968, 'k=5': 0.982, 'k=10': 0.998}}}}
This model is a fine-tuned version of OpenAI's CLIP. It is designed to improve zero-shot image classification, text-to-image and image-to-image retrieval specifically on remote sensing images.
Natural Language Processing Zero-Shot Classification
Hugging Face Transformers
Zero-Shot Image Classification
kakaobrain/align-base
AlignModel.from_pretrained('kakaobrain/align-base')
['text', 'images', 'return_tensors']
['requests', 'torch', 'PIL', 'transformers']
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel

processor = AlignProcessor.from_pretrained('kakaobrain/align-base')
model = AlignModel.from_pretrained('kakaobrain/align-base')
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ['an image of a cat', 'an image of a dog']
inputs = processor(text=candidate_labels, images=image, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
print(probs)
{'dataset': 'COYO-700M', 'accuracy': "on-par or outperforms Google ALIGN's reported metrics"}
The ALIGN model is a dual-encoder architecture with EfficientNet as its vision encoder and BERT as its text encoder. It learns to align visual and text representations with contrastive learning. This implementation is trained on the open source COYO dataset and can be used for zero-shot image classification and multi-modal embedding retrieval.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Transformers
tiny-random-CLIPSegModel
pipeline('zero-shot-image-classification', model='hf-tiny-model-private/tiny-random-CLIPSegModel')
{'image': 'File or URL', 'class_names': 'List of strings'}
{'transformers': '>=4.13.0'}
{'dataset': '', 'accuracy': ''}
A tiny random CLIPSegModel for zero-shot image classification.
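This entry lists no example code; a minimal sketch, assuming the zero-shot-image-classification pipeline named in its api_call (the image path and candidate labels below are placeholders, and support for CLIPSeg checkpoints in this pipeline is an assumption carried over from the entry itself):
from transformers import pipeline

classifier = pipeline('zero-shot-image-classification', model='hf-tiny-model-private/tiny-random-CLIPSegModel')
result = classifier('path/to/image.jpg', candidate_labels=['cat', 'dog'])  # placeholder image and labels
print(result)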
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k
clip.load('timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k')
image, class_names
huggingface_hub, openai, transformers
N/A
{'dataset': 'N/A', 'accuracy': 'N/A'}
This model is a zero-shot image classification model based on OpenCLIP. It can be used for classifying images into various categories without any additional training.
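The entry above lists clip.load, which is the OpenAI CLIP interface; a hedged sketch of how an OpenCLIP checkpoint like this is more commonly loaded, assuming the open_clip package and its hf-hub: naming convention (the image path and class prompts are placeholders):
import torch
from PIL import Image
import open_clip

# load model, preprocessing transform and tokenizer from the Hugging Face Hub
model, preprocess = open_clip.create_model_from_pretrained('hf-hub:timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k')
tokenizer = open_clip.get_tokenizer('hf-hub:timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k')
image = preprocess(Image.open('path/to/image.jpg')).unsqueeze(0)  # placeholder image path
text = tokenizer(['a cat', 'a dog'])  # placeholder class prompts
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)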
Natural Language Processing Zero-Shot Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-rewind
CLIPModel.from_pretrained('laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-rewind')
image, class_names
transformers
from transformers import pipeline; clip = pipeline('zero-shot-image-classification', model='laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-rewind'); clip(image, candidate_labels=['cat', 'dog', 'fish'])
{'dataset': 'ImageNet-1k', 'accuracy': '79.1 - 79.4'}
A series of CLIP ConvNeXt-XXLarge models trained on LAION-2B (English), a subset of LAION-5B, using OpenCLIP. These models achieve between 79.1% and 79.4% top-1 zero-shot accuracy on ImageNet-1k. The models can be used for zero-shot image classification, image and text retrieval, and other related tasks.
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
pipeline('image-classification', model='laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg')
{'image_path': './path/to/image.jpg', 'class_names': 'class1,class2,class3'}
transformers
from transformers import pipeline
clip = pipeline('image-classification', model='laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg')
clip('./path/to/image.jpg', 'class1,class2,class3')
{'dataset': 'ImageNet-1k', 'accuracy': '75.9%'}
A series of CLIP ConvNeXt-Large (w/ extra text depth, vision MLP head) models trained on LAION-2B (English), a subset of LAION-5B, using OpenCLIP. The models are trained at 256x256 image resolution and achieve 75.9% top-1 zero-shot accuracy on ImageNet-1k.
Computer Vision Zero-Shot Image Classification
Hugging Face
Zero-Shot Image Classification
laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft
pipeline('image-classification', model='laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft')
{'image_path': 'Path to the image file', 'class_names': 'List of comma-separated class names'}
['transformers']
from transformers import pipeline; classifier = pipeline('image-classification', model='laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft'); classifier('path/to/image.jpg', ['class1', 'class2'])
{'dataset': 'ImageNet-1k', 'accuracy': '75.9-76.9%'}
A series of CLIP ConvNeXt-Large models trained on the LAION-2B (English) subset of LAION-5B using OpenCLIP. The models achieve between 75.9% and 76.9% top-1 zero-shot accuracy on ImageNet-1k.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
OFA-Sys/chinese-clip-vit-base-patch16
ChineseCLIPModel.from_pretrained('OFA-Sys/chinese-clip-vit-base-patch16')
{'pretrained_model_name_or_path': 'OFA-Sys/chinese-clip-vit-base-patch16'}
{'transformers': 'ChineseCLIPProcessor, ChineseCLIPModel'}
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel

model = ChineseCLIPModel.from_pretrained('OFA-Sys/chinese-clip-vit-base-patch16')
processor = ChineseCLIPProcessor.from_pretrained('OFA-Sys/chinese-clip-vit-base-patch16')
url = 'https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg'
image = Image.open(requests.get(url, stream=True).raw)
texts = ['杰尼龟', '妙蛙种子', '小火龙', '皮卡丘']  # Squirtle, Bulbasaur, Charmander, Pikachu
# compute image features
inputs = processor(images=image, return_tensors='pt')
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)
# compute text features
inputs = processor(text=texts, padding=True, return_tensors='pt')
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)
# compute image-text similarity scores
inputs = processor(text=texts, images=image, return_tensors='pt', padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
{'dataset': {'MUGE Text-to-Image Retrieval': {'accuracy': {'Zero-shot R@1': 63.0, 'Zero-shot R@5': 84.1, 'Zero-shot R@10': 89.2, 'Finetune R@1': 68.9, 'Finetune R@5': 88.7, 'Finetune R@10': 93.1}}, 'Flickr30K-CN Retrieval': {'accuracy': {'Zero-shot Text-to-Image R@1': 71.2, 'Zero-shot Text-to-Image R@5': 91.4, 'Zero-shot Text-to-Image R@10': 95.5, 'Finetune Text-to-Image R@1': 83.8, 'Finetune Text-to-Image R@5': 96.9, 'Finetune Text-to-Image R@10': 98.6, 'Zero-shot Image-to-Text R@1': 81.6, 'Zero-shot Image-to-Text R@5': 97.5, 'Zero-shot Image-to-Text R@10': 98.8, 'Finetune Image-to-Text R@1': 95.3, 'Finetune Image-to-Text R@5': 99.7, 'Finetune Image-to-Text R@10': 100.0}}, 'COCO-CN Retrieval': {'accuracy': {'Zero-shot Text-to-Image R@1': 69.2, 'Zero-shot Text-to-Image R@5': 89.9, 'Zero-shot Text-to-Image R@10': 96.1, 'Finetune Text-to-Image R@1': 81.5, 'Finetune Text-to-Image R@5': 96.9, 'Finetune Text-to-Image R@10': 99.1, 'Zero-shot Image-to-Text R@1': 63.0, 'Zero-shot Image-to-Text R@5': 86.6, 'Zero-shot Image-to-Text R@10': 92.9, 'Finetune Image-to-Text R@1': 83.5, 'Finetune Image-to-Text R@5': 97.3, 'Finetune Image-to-Text R@10': 99.2}}, 'Zero-shot Image Classification': {'accuracy': {'CIFAR10': 96.0, 'CIFAR100': 79.7, 'DTD': 51.2, 'EuroSAT': 52.0, 'FER': 55.1, 'FGVC': 26.2, 'KITTI': 49.9, 'MNIST': 79.4, 'PC': 63.5, 'VOC': 84.9}}}}
Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. It uses ViT-B/16 as the image encoder and RoBERTa-wwm-base as the text encoder.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
clip-vit-base-patch32-ko
pipeline('zero-shot-image-classification', model='Bingsu/clip-vit-base-patch32-ko')
{'images': 'url', 'candidate_labels': 'Array of strings', 'hypothesis_template': 'String'}
['transformers', 'torch', 'PIL', 'requests']
from transformers import pipeline

repo = 'Bingsu/clip-vit-base-patch32-ko'
pipe = pipeline('zero-shot-image-classification', model=repo)
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
result = pipe(images=url, candidate_labels=[], hypothesis_template='{}')  # supply Korean candidate labels here
result
{'dataset': 'AIHUB', 'accuracy': 'Not provided'}
Korean CLIP model trained following 'Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation'. It is a zero-shot image classification model that can classify images without task-specific training data.
Natural Language Processing Zero-Shot Classification
Hugging Face Transformers
Zero-Shot Image Classification
OFA-Sys/chinese-clip-vit-large-patch14-336px
ChineseCLIPModel.from_pretrained('OFA-Sys/chinese-clip-vit-large-patch14-336px')
{'images': 'image', 'text': 'texts', 'return_tensors': 'pt', 'padding': 'True'}
['PIL', 'requests', 'transformers']
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel

model = ChineseCLIPModel.from_pretrained('OFA-Sys/chinese-clip-vit-large-patch14-336px')
processor = ChineseCLIPProcessor.from_pretrained('OFA-Sys/chinese-clip-vit-large-patch14-336px')
url = 'https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg'
image = Image.open(requests.get(url, stream=True).raw)
texts = []  # Chinese candidate texts go here
# compute image features
inputs = processor(images=image, return_tensors='pt')
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)
# compute text features
inputs = processor(text=texts, padding=True, return_tensors='pt')
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)
# compute image-text similarity scores
inputs = processor(text=texts, images=image, return_tensors='pt', padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
{'dataset': {'CIFAR10': 96.0, 'CIFAR100': 79.75, 'DTD': 51.2, 'EuroSAT': 52.0, 'FER': 55.1, 'FGVC': 26.2, 'KITTI': 49.9, 'MNIST': 79.4, 'PC': 63.5, 'VOC': 84.9}, 'accuracy': 'various'}
Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. It uses ViT-L/14@336px as the image encoder and RoBERTa-wwm-base as the text encoder.
Natural Language Processing Text Classification
Transformers
Text Classification
distilbert-base-uncased-finetuned-sst-2-english
DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')
['inputs']
['torch', 'transformers']
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')
inputs = tokenizer('Hello, my dog is cute', return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
{'dataset': 'glue', 'accuracy': 0.911}
This model is a fine-tuned checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2. It reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7). This model can be used for sentiment analysis of English text.
Natural Language Processing Text Classification
Hugging Face Transformers
Transformers
sentiment_analysis_generic_dataset
pipeline('text-classification', model='Seethal/sentiment_analysis_generic_dataset')
[]
['transformers']
from transformers import pipeline; sentiment_analysis = pipeline('text-classification', model='Seethal/sentiment_analysis_generic_dataset'); sentiment_analysis('I love this product!')
{'dataset': 'generic_dataset', 'accuracy': 'Not specified'}
This is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis; it is not intended for further fine-tuning on other tasks. The model was trained on a classified dataset for text classification.
Natural Language Processing Text Classification
Transformers
Sentiment Analysis
cardiffnlp/twitter-roberta-base-sentiment
AutoModelForSequenceClassification.from_pretrained('cardiffnlp/twitter-roberta-base-sentiment')
['MODEL']
['transformers']
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request

# preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(' '):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return ' '.join(new_text)

task = 'sentiment'
MODEL = f'cardiffnlp/twitter-roberta-base-{task}'
tokenizer = AutoTokenizer.from_pretrained(MODEL)

# download label mapping
labels = []
mapping_link = f'https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt'
with urllib.request.urlopen(mapping_link) as f:
    html = f.read().decode('utf-8').split('\n')
    csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]

model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)

text = 'Good night 😊'
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)

ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = labels[ranking[i]]
    s = scores[ranking[i]]
    print(f'{i+1}) {l} {np.round(float(s), 4)}')
{'dataset': 'tweet_eval', 'accuracy': 'Not provided'}
Twitter-roBERTa-base for Sentiment Analysis. This is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English.
Natural Language Processing Text Classification
Hugging Face
Sentiment Analysis
cardiffnlp/twitter-xlm-roberta-base-sentiment
pipeline('sentiment-analysis', model='cardiffnlp/twitter-xlm-roberta-base-sentiment')
['model_path']
['transformers']
from transformers import pipeline

model_path = 'cardiffnlp/twitter-xlm-roberta-base-sentiment'
sentiment_task = pipeline('sentiment-analysis', model=model_path, tokenizer=model_path)
sentiment_task("T'estimo!")
{'dataset': 'Twitter', 'accuracy': 'Not provided'}
This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and finetuned for sentiment analysis. The sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt) but it can be used for more languages (see paper for details).
Multimodal Zero-Shot Image Classification
Hugging Face Transformers
Image Geolocalization
geolocal/StreetCLIP
CLIPModel.from_pretrained('geolocal/StreetCLIP')
{'pretrained_model_name_or_path': 'geolocal/StreetCLIP'}
['transformers', 'PIL', 'requests']
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained('geolocal/StreetCLIP')
processor = CLIPProcessor.from_pretrained('geolocal/StreetCLIP')
url = 'https://huggingface.co/geolocal/StreetCLIP/resolve/main/sanfrancisco.jpeg'
image = Image.open(requests.get(url, stream=True).raw)
choices = ['San Jose', 'San Diego', 'Los Angeles', 'Las Vegas', 'San Francisco']
inputs = processor(text=choices, images=image, return_tensors='pt', padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
{'dataset': [{'name': 'IM2GPS', 'accuracy': {'25km': 28.3, '200km': 45.1, '750km': 74.7, '2500km': 88.2}}, {'name': 'IM2GPS3K', 'accuracy': {'25km': 22.4, '200km': 37.4, '750km': 61.3, '2500km': 80.4}}]}
StreetCLIP is a robust foundation model for open-domain image geolocalization and other geographic and climate-related tasks. Trained on an original dataset of 1.1 million street-level urban and rural geo-tagged images, it achieves state-of-the-art performance on multiple open-domain image geolocalization benchmarks in zero-shot, outperforming supervised models trained on millions of images.
Computer Vision Zero-Shot Image Classification
Hugging Face Transformers
Zero-Shot Image Classification
chinese-clip-vit-large-patch14
ChineseCLIPModel.from_pretrained('OFA-Sys/chinese-clip-vit-large-patch14')
{'model_name': 'OFA-Sys/chinese-clip-vit-large-patch14'}
{'libraries': ['transformers', 'PIL', 'requests']}
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel

model = ChineseCLIPModel.from_pretrained('OFA-Sys/chinese-clip-vit-large-patch14')
processor = ChineseCLIPProcessor.from_pretrained('OFA-Sys/chinese-clip-vit-large-patch14')
url = 'https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg'
image = Image.open(requests.get(url, stream=True).raw)
texts = ['杰尼龟', '妙蛙种子', '小火龙', '皮卡丘']  # Squirtle, Bulbasaur, Charmander, Pikachu
inputs = processor(images=image, return_tensors='pt')
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)  # normalize
inputs = processor(text=texts, padding=True, return_tensors='pt')
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)  # normalize
inputs = processor(text=texts, images=image, return_tensors='pt', padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # probs: [[0.0066, 0.0211, 0.0031, 0.9692]]
{'dataset': 'MUGE Text-to-Image Retrieval, Flickr30K-CN Retrieval, COCO-CN Retrieval, CIFAR10, CIFAR100, DTD, EuroSAT, FER, FGV, KITTI, MNIST, PASCAL VOC', 'accuracy': 'Varies depending on the dataset'}
Chinese-CLIP-ViT-Large-Patch14 is a large version of the Chinese CLIP model, with ViT-L/14 as the image encoder and RoBERTa-wwm-base as the text encoder. Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. It is designed for zero-shot image classification tasks.
Natural Language Processing Text Classification
Transformers
Language Detection
papluca/xlm-roberta-base-language-detection
pipeline('text-classification', model='papluca/xlm-roberta-base-language-detection')
['text']
['transformers', 'torch']
from transformers import pipeline; language_detection = pipeline('text-classification', model='papluca/xlm-roberta-base-language-detection'); language_detection('Hello, how are you?')
{'dataset': 'Language Identification', 'accuracy': 0.996}
This model is a fine-tuned version of xlm-roberta-base on the Language Identification dataset. It is an XLM-RoBERTa transformer model with a classification head on top, and can be used as a language detector for sequence classification tasks. It supports 20 languages including Arabic, Bulgarian, German, Greek, English, Spanish, French, Hindi, Italian, Japanese, Dutch, Polish, Portuguese, Russian, Swahili, Thai, Turkish, Urdu, Vietnamese, and Chinese.
Natural Language Processing Text Classification
Hugging Face Transformers
financial-sentiment-analysis
yiyanghkust/finbert-tone
BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-tone')
['sentences']
['transformers']
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline

finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-tone', num_labels=3)
tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-tone')
nlp = pipeline('sentiment-analysis', model=finbert, tokenizer=tokenizer)
sentences = ['there is a shortage of capital, and we need extra financing',
             'growth is strong and we have plenty of liquidity',
             'there are doubts about our finances',
             'profits are flat']
results = nlp(sentences)
print(results)
{'dataset': '10,000 manually annotated sentences from analyst reports', 'accuracy': 'superior performance on financial tone analysis task'}
FinBERT is a BERT model pre-trained on financial communication text. It is trained on three financial communication corpora: Corporate Reports 10-K & 10-Q, Earnings Call Transcripts, and Analyst Reports. This released finbert-tone model is the FinBERT model fine-tuned on 10,000 manually annotated (positive, negative, neutral) sentences from analyst reports, and it achieves superior performance on the financial tone analysis task.
Natural Language Processing Text Classification
Hugging Face Transformers
financial-sentiment-analysis
ProsusAI/finbert
AutoModelForSequenceClassification.from_pretrained('ProsusAI/finbert')
text
transformers
from transformers import pipeline; classifier = pipeline('sentiment-analysis', model='ProsusAI/finbert'); classifier('your_text_here')
{'dataset': 'Financial PhraseBank', 'accuracy': 'Not provided'}
FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financial corpus and thereby fine-tuning it for financial sentiment classification. Financial PhraseBank by Malo et al. (2014) is used for fine-tuning.
Natural Language Processing Text Classification
Transformers
Sentiment Analysis
cardiffnlp/twitter-roberta-base-sentiment-latest
pipeline('sentiment-analysis', model=AutoModelForSequenceClassification.from_pretrained('cardiffnlp/twitter-roberta-base-sentiment-latest'), tokenizer=AutoTokenizer.from_pretrained('cardiffnlp/twitter-roberta-base-sentiment-latest'))
{'model': 'model_path', 'tokenizer': 'model_path'}
['transformers', 'numpy', 'scipy']
from transformers import pipeline

model_path = 'cardiffnlp/twitter-roberta-base-sentiment-latest'
sentiment_task = pipeline('sentiment-analysis', model=model_path, tokenizer=model_path)
sentiment_task('Covid cases are increasing fast!')
{'dataset': 'tweet_eval', 'accuracy': 'Not provided'}
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark. The model is suitable for English.
Natural Language Processing Text Classification
Transformers
Emotion Classification
j-hartmann/emotion-english-distilroberta-base
pipeline('text-classification', model='j-hartmann/emotion-english-distilroberta-base', return_all_scores=True)
{'text': 'string'}
{'transformers': 'latest'}
from transformers import pipeline

classifier = pipeline('text-classification', model='j-hartmann/emotion-english-distilroberta-base', return_all_scores=True)
classifier('I love this!')
{'dataset': 'Balanced subset from 6 diverse datasets', 'accuracy': '66%'}
This model classifies emotions in English text data. It predicts Ekman's 6 basic emotions, plus a neutral class: anger, disgust, fear, joy, neutral, sadness, and surprise. The model is a fine-tuned checkpoint of DistilRoBERTa-base.
Natural Language Processing Text Classification
Hugging Face Transformers
Transformers
prithivida/parrot_adequacy_model
pipeline('text-classification', model='prithivida/parrot_adequacy_model')
transformers
{'dataset': '', 'accuracy': ''}
Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate training NLU models. This model is an ancillary model for Parrot paraphraser.
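This entry lists no example code; a minimal sketch, assuming the text-classification pipeline call it names works standalone (the input string is a placeholder; in practice this ancillary model is driven by the Parrot framework rather than called directly):
from transformers import pipeline

adequacy = pipeline('text-classification', model='prithivida/parrot_adequacy_model')
print(adequacy('your input text'))  # placeholder input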
Natural Language Processing Text Classification
Transformers
Detect GPT-2 generated text
roberta-base-openai-detector
pipeline('text-classification', model='roberta-base-openai-detector')
['text']
['transformers']
from transformers import pipeline

pipe = pipeline('text-classification', model='roberta-base-openai-detector')
print(pipe('Hello world! Is this content AI-generated?'))
{'dataset': 'WebText', 'accuracy': '95%'}
RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model.
Natural Language Processing Text Classification
Hugging Face Transformers
Sentiment Analysis
bert-base-multilingual-uncased-sentiment
pipeline('sentiment-analysis', model='nlptown/bert-base-multilingual-uncased-sentiment')
['text']
['transformers']
from transformers import pipeline; sentiment_pipeline = pipeline('sentiment-analysis', model='nlptown/bert-base-multilingual-uncased-sentiment'); result = sentiment_pipeline('I love this product!')
{'dataset': [{'language': 'English', 'accuracy': {'exact': '67%', 'off-by-1': '95%'}}, {'language': 'Dutch', 'accuracy': {'exact': '57%', 'off-by-1': '93%'}}, {'language': 'German', 'accuracy': {'exact': '61%', 'off-by-1': '94%'}}, {'language': 'French', 'accuracy': {'exact': '59%', 'off-by-1': '94%'}}, {'language': 'Italian', 'accuracy': {'exact': '59%', 'off-by-1': '95%'}}, {'language': 'Spanish', 'accuracy': {'exact': '58%', 'off-by-1': '95%'}}]}
This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5).
Natural Language Processing Text Classification
Hugging Face Transformers
Sentiment Inferencing for stock-related comments
zhayunduo/roberta-base-stocktwits-finetuned
RobertaForSequenceClassification.from_pretrained('zhayunduo/roberta-base-stocktwits-finetuned')
{'model': 'RobertaForSequenceClassification', 'tokenizer': 'RobertaTokenizer'}
['transformers']
from transformers import RobertaForSequenceClassification, RobertaTokenizer
from transformers import pipeline
import pandas as pd
import emoji

tokenizer_loaded = RobertaTokenizer.from_pretrained('zhayunduo/roberta-base-stocktwits-finetuned')
model_loaded = RobertaForSequenceClassification.from_pretrained('zhayunduo/roberta-base-stocktwits-finetuned')
nlp = pipeline('text-classification', model=model_loaded, tokenizer=tokenizer_loaded)
sentences = pd.Series(['just buy',
                       'just sell it',
                       'entity rocket to the sky!',
                       'go down',
                       'even though it is going up, I still think it will not keep this trend in the near future'])
sentences = list(sentences)
results = nlp(sentences)
print(results)
{'dataset': 'stocktwits', 'accuracy': 0.9343}
This model is fine-tuned from the roberta-base model on 3,200,000 comments from Stocktwits, using the user-labeled tags 'Bullish' and 'Bearish'.
Natural Language Processing Text Classification
Hugging Face Transformers
emotion
bhadresh-savani/distilbert-base-uncased-emotion
pipeline('text-classification', model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
['text']
['transformers']
from transformers import pipeline; classifier = pipeline('text-classification', model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True); prediction = classifier('I love using transformers. The best part is wide range of support and its easy to use')
{'dataset': 'Twitter-Sentiment-Analysis', 'accuracy': 0.938}
DistilBERT is created with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language understanding. It is smaller and faster than BERT and other BERT-based models. This checkpoint is distilbert-base-uncased fine-tuned on the emotion dataset using the Hugging Face Trainer.
Natural Language Processing Text Classification
Hugging Face Transformers
Information Retrieval
cross-encoder/ms-marco-MiniLM-L-6-v2
AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L-6-v2')
{'model_name': 'cross-encoder/ms-marco-MiniLM-L-6-v2'}
{'transformers': 'latest', 'torch': 'latest'}
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = 'cross-encoder/ms-marco-MiniLM-L-6-v2'
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'],
                     ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
                      'New York City is famous for the Metropolitan Museum of Art.'],
                     padding=True, truncation=True, return_tensors='pt')
model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
{'dataset': 'MS Marco Passage Reranking', 'accuracy': 'MRR@10: 39.01%'}
This model was trained on the MS Marco Passage Ranking task and can be used for information retrieval: given a query, encode the query together with each candidate passage and sort the passages by the resulting score in decreasing order (see the sketch after this entry).
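The sketch referenced above, assuming the model name from this entry; it scores each query-passage pair and sorts the candidate passages by score (the query and passages are placeholders):
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'cross-encoder/ms-marco-MiniLM-L-6-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()
query = 'How many people live in Berlin?'
passages = ['Berlin has a population of 3,520,031 registered inhabitants.',
            'New York City is famous for the Metropolitan Museum of Art.']
features = tokenizer([query] * len(passages), passages, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)  # one relevance score per pair
ranked = sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True)  # highest score first
print(ranked)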
Natural Language Processing Text Classification
Transformers
Sentiment Analysis
finiteautomata/beto-sentiment-analysis
pipeline('sentiment-analysis', model='finiteautomata/beto-sentiment-analysis')
text
Hugging Face Transformers library
{'dataset': 'TASS 2020 corpus', 'accuracy': ''}
Model trained with TASS 2020 corpus (around ~5k tweets) of several dialects of Spanish. Base model is BETO, a BERT model trained in Spanish. Uses POS, NEG, NEU labels.
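This entry lists no example code; a minimal sketch following the pipeline call it names (the Spanish input sentence is a placeholder; the model returns POS, NEG or NEU labels per the description):
from transformers import pipeline

nlp = pipeline('sentiment-analysis', model='finiteautomata/beto-sentiment-analysis')
print(nlp('Me encanta esta película!'))  # placeholder Spanish input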
Natural Language Processing Text Classification
Hugging Face Transformers
Sentiment Analysis
finiteautomata/bertweet-base-sentiment-analysis
pipeline('text-classification', model='finiteautomata/bertweet-base-sentiment-analysis')
text
Transformers
from transformers import pipeline

nlp = pipeline('text-classification', model='finiteautomata/bertweet-base-sentiment-analysis')
result = nlp('I love this movie!')
{'dataset': 'SemEval 2017', 'accuracy': None}
Model trained with SemEval 2017 corpus (around ~40k tweets). Base model is BERTweet, a RoBERTa model trained on English tweets. Uses POS, NEG, NEU labels.
Natural Language Processing Text Classification
Hugging Face Transformers
Text Classification
lvwerra/distilbert-imdb
pipeline('sentiment-analysis', model='lvwerra/distilbert-imdb')
[]
['transformers', 'pytorch']
from transformers import pipeline; classifier = pipeline('sentiment-analysis', model='lvwerra/distilbert-imdb'); classifier('I love this movie!')
{'dataset': 'imdb', 'accuracy': 0.928}
This model is a fine-tuned version of distilbert-base-uncased on the imdb dataset. It is used for sentiment analysis on movie reviews and achieves an accuracy of 0.928 on the evaluation set.
Natural Language Processing Text Classification
Hugging Face Transformers
Paraphrase-based utterance augmentation
prithivida/parrot_fluency_model
pipeline('text-classification', model='prithivida/parrot_fluency_model')
text
['transformers']
from transformers import pipeline; parrot = pipeline('text-classification', model='prithivida/parrot_fluency_model'); parrot('your input text')
{'dataset': 'N/A', 'accuracy': 'N/A'}
Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate training NLU models. A paraphrase framework is more than just a paraphrasing model.
Natural Language Processing Text Classification
Hugging Face Transformers
Information Retrieval
cross-encoder/ms-marco-MiniLM-L-12-v2
AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L-12-v2')
{'padding': 'True', 'truncation': 'True', 'return_tensors': 'pt'}
{'transformers': 'from transformers import AutoTokenizer, AutoModelForSequenceClassification', 'torch': 'import torch'}
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = 'cross-encoder/ms-marco-MiniLM-L-12-v2'
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'],
                     ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
                      'New York City is famous for the Metropolitan Museum of Art.'],
                     padding=True, truncation=True, return_tensors='pt')
model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
{'dataset': {'TREC Deep Learning 2019': {'NDCG@10': 74.31}, 'MS Marco Passage Reranking': {'MRR@10': 39.02, 'accuracy': '960 Docs / Sec'}}}
This model was trained on the MS Marco Passage Ranking task. The model can be used for information retrieval: given a query, encode the query with each candidate passage (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score. See SBERT.net Retrieve & Re-rank for more details. The training code is available at SBERT.net Training MS Marco.
Natural Language Processing Text Classification
Hugging Face Transformers
Transformers
martin-ha/toxic-comment-model
pipeline(model='martin-ha/toxic-comment-model')
{'model_path': 'martin-ha/toxic-comment-model'}
['transformers']
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline

model_path = 'martin-ha/toxic-comment-model'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline('This is a test text.'))
{'dataset': 'held-out test set', 'accuracy': 0.94, 'f1-score': 0.59}
This model is a fine-tuned version of the DistilBERT model to classify toxic comments.
Natural Language Processing Text Classification
Hugging Face Transformers
German Sentiment Classification
oliverguhr/german-sentiment-bert
SentimentModel()
['texts']
pip install germansentiment
from germansentiment import SentimentModel

model = SentimentModel()
texts = ['Mit keinem guten Ergebniss', 'Das ist gar nicht mal so gut',
         'Total awesome!', 'nicht so schlecht wie erwartet',
         'Der Test verlief positiv.', 'Sie fährt ein grünes Auto.']
result = model.predict_sentiment(texts)
print(result)
{'dataset': ['holidaycheck', 'scare', 'filmstarts', 'germeval', 'PotTS', 'emotions', 'sb10k', 'Leipzig Wikipedia Corpus 2016', 'all'], 'accuracy': [0.9568, 0.9418, 0.9021, 0.7536, 0.678, 0.9649, 0.7376, 0.9967, 0.9639]}
This model was trained for sentiment classification of German-language texts. The model uses the Google BERT architecture and was trained on 1.834 million German-language samples. The training data contains texts from various domains such as Twitter, Facebook, and movie, app and hotel reviews.
Natural Language Processing Text Classification
Transformers
Sentiment Analysis
siebert/sentiment-roberta-large-english
pipeline('sentiment-analysis', model='siebert/sentiment-roberta-large-english')
['text']
['transformers']
from transformers import pipeline

sentiment_analysis = pipeline('sentiment-analysis', model='siebert/sentiment-roberta-large-english')
print(sentiment_analysis('I love this!'))
{'dataset': [{'name': 'McAuley and Leskovec (2013) (Reviews)', 'accuracy': 98.0}, {'name': 'McAuley and Leskovec (2013) (Review Titles)', 'accuracy': 87.0}, {'name': 'Yelp Academic Dataset', 'accuracy': 96.5}, {'name': 'Maas et al. (2011)', 'accuracy': 96.0}, {'name': 'Kaggle', 'accuracy': 96.0}, {'name': 'Pang and Lee (2005)', 'accuracy': 91.0}, {'name': 'Nakov et al. (2013)', 'accuracy': 88.5}, {'name': 'Shamma (2009)', 'accuracy': 87.0}, {'name': 'Blitzer et al. (2007) (Books)', 'accuracy': 92.5}, {'name': 'Blitzer et al. (2007) (DVDs)', 'accuracy': 92.5}, {'name': 'Blitzer et al. (2007) (Electronics)', 'accuracy': 95.0}, {'name': 'Blitzer et al. (2007) (Kitchen devices)', 'accuracy': 98.5}, {'name': 'Pang et al. (2002)', 'accuracy': 95.5}, {'name': 'Speriosu et al. (2011)', 'accuracy': 85.5}, {'name': 'Hartmann et al. (2019)', 'accuracy': 98.0}], 'average_accuracy': 93.2}
This model ('SiEBERT', prefix for 'Sentiment in English') is a fine-tuned checkpoint of RoBERTa-large (Liu et al. 2019). It enables reliable binary sentiment analysis for various types of English-language text. For each instance, it predicts either positive (1) or negative (0) sentiment. The model was fine-tuned and evaluated on 15 datasets from diverse text sources to enhance generalization across different types of texts (reviews, tweets, etc.). Consequently, it outperforms models trained on only one type of text (e.g., movie reviews from the popular SST-2 benchmark) when used on new data, as reflected in the evaluation results listed above.
Natural Language Processing Text Classification
Transformers
Text Classification
joeddav/distilbert-base-uncased-go-emotions-student
pipeline('text-classification', model='joeddav/distilbert-base-uncased-go-emotions-student')
text
['transformers', 'torch', 'tensorflow']
from transformers import pipeline

nlp = pipeline('text-classification', model='joeddav/distilbert-base-uncased-go-emotions-student')
result = nlp('I am so happy today!')
{'dataset': 'go_emotions'}
This model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset. It is primarily intended as a demo of how an expensive NLI-based zero-shot model can be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data.
Natural Language Processing Text Classification
Hugging Face Transformers
Text Classification
shahrukhx01/question-vs-statement-classifier
AutoModelForSequenceClassification.from_pretrained('shahrukhx01/question-vs-statement-classifier')
{'tokenizer': "AutoTokenizer.from_pretrained('shahrukhx01/question-vs-statement-classifier')"}
{'transformers': 'from transformers import AutoTokenizer, AutoModelForSequenceClassification'}
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('shahrukhx01/question-vs-statement-classifier')
model = AutoModelForSequenceClassification.from_pretrained('shahrukhx01/question-vs-statement-classifier')
{'dataset': 'Haystack', 'accuracy': 'Not provided'}
Trained to classify queries as question queries vs. statement (keyword) queries, adding query-type classification support in Haystack.
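A short inference sketch complementing the loading code above; the example query is a placeholder, and the meaning of each label index is read from the model's own config rather than assumed:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('shahrukhx01/question-vs-statement-classifier')
model = AutoModelForSequenceClassification.from_pretrained('shahrukhx01/question-vs-statement-classifier')
inputs = tokenizer('what is the capital of france', return_tensors='pt')  # placeholder query
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])  # label names come from the model's config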
Natural Language Processing Text Classification
Hugging Face Transformers
Transformers
results-yelp
AutoTokenizer.from_pretrained('bert-base-uncased')
{'tokenizer': "AutoTokenizer.from_pretrained('bert-base-uncased')", 'config': "AutoConfig.from_pretrained('potatobunny/results-yelp')"}
{'Transformers': '4.18.0', 'Pytorch': '1.10.0+cu111', 'Datasets': '2.0.0', 'Tokenizers': '0.12.1'}
{'dataset': 'Yelp', 'accuracy': 0.9302}
This model is a fine-tuned version of textattack/bert-base-uncased-yelp-polarity on a filtered and manually reviewed Yelp dataset containing restaurant reviews only. It is intended to perform text classification, specifically sentiment analysis, on text data obtained from restaurant reviews to determine if the particular review is positive or negative.
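This entry lists no example code; a minimal sketch under the assumption that the fine-tuned weights load directly into a text-classification pipeline together with the bert-base-uncased tokenizer named above (the review text is a placeholder):
from transformers import pipeline, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
classifier = pipeline('text-classification', model='potatobunny/results-yelp', tokenizer=tokenizer)
print(classifier('The food was great and the service was friendly.'))  # placeholder review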
Natural Language Processing Text Classification
Hugging Face Transformers
Transformers
madhurjindal/autonlp-Gibberish-Detector-492513457
AutoModelForSequenceClassification.from_pretrained('madhurjindal/autonlp-Gibberish-Detector-492513457')
{'inputs': 'I love AutoNLP'}
{'transformers': 'AutoModelForSequenceClassification', 'AutoTokenizer': 'from_pretrained'}
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained('madhurjindal/autonlp-Gibberish-Detector-492513457', use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained('madhurjindal/autonlp-Gibberish-Detector-492513457', use_auth_token=True)
inputs = tokenizer('I love AutoNLP', return_tensors='pt')
outputs = model(**inputs)
{'dataset': 'madhurjindal/autonlp-data-Gibberish-Detector', 'accuracy': 0.9735624586913417}
A multi-class text classification model for detecting gibberish text. Trained using AutoNLP and DistilBERT.
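As a follow-up to the snippet in this entry, a short sketch (assuming the model and outputs objects from that snippet are still in scope) that turns the raw logits into a predicted label via the model's own label mapping:
import torch

probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = probs.argmax(dim=-1).item()
print(model.config.id2label[predicted_id], probs.max().item())  # predicted label and its probability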
Natural Language Processing Text Classification
Transformers
Sentiment Analysis
michellejieli/NSFW_text_classifier
pipeline('sentiment-analysis', model='michellejieli/NSFW_text_classifier')
['text']
['transformers']
from transformers import pipeline; classifier = pipeline('sentiment-analysis', model='michellejieli/NSFW_text_classifier'); classifier("I see you've set aside this special time to humiliate yourself in public.")
{'dataset': 'Reddit posts', 'accuracy': 'Not specified'}
DistilBERT is a transformer model that performs sentiment analysis. I fine-tuned the model on Reddit posts with the purpose of classifying not safe for work (NSFW) content, specifically text that is considered inappropriate and unprofessional. The model predicts 2 classes, which are NSFW or safe for work (SFW). The model is a fine-tuned version of DistilBERT. It was fine-tuned on 14317 Reddit posts pulled from the Reddit API.