Dataset columns:
domain: string, 40 classes
framework: string, 20 classes
functionality: string, 181 classes
api_name: string, length 4-87
api_call: string, length 15-216
api_arguments: string, length 0-495
python_environment_requirements: string, length 0-190
example_code: string, length 0-3.35k
performance: string, length 22-1.36k
description: string, length 35-1.11k
Reinforcement Learning
Unity ML-Agents
Train and play SoccerTwos
Raiden-1001/poca-Soccerv7.1
mlagents-load-from-hf --repo-id='Raiden-1001/poca-Soccerv7.1' --local-dir='./downloads'
['your_configuration_file_path.yaml', 'run_id']
['ml-agents']
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
{'dataset': 'SoccerTwos', 'accuracy': 'Not provided'}
A trained model of a poca agent playing SoccerTwos using the Unity ML-Agents Library.
Reinforcement Learning
Stable-Baselines3
CartPole-v1
sb3/ppo-CartPole-v1
load_from_hub(repo_id='sb3/ppo-CartPole-v1')
['algo', 'env', 'f']
['rl_zoo3', 'stable-baselines3', 'stable-baselines3-contrib']
python -m rl_zoo3.load_from_hub --algo ppo --env CartPole-v1 -orga sb3 -f logs/
{'dataset': 'CartPole-v1', 'accuracy': '500.00 +/- 0.00'}
This is a trained model of a PPO agent playing CartPole-v1 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
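Beyond the RL Zoo CLI above, a minimal Python loading sketch is shown below; the filename 'ppo-CartPole-v1.zip' is an assumption based on the usual sb3 repo layout.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the zipped checkpoint from the Hub (filename assumed from the sb3 repo layout)
checkpoint = load_from_hub(repo_id='sb3/ppo-CartPole-v1', filename='ppo-CartPole-v1.zip')
model = PPO.load(checkpoint)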
Reinforcement Learning
Stable-Baselines3
LunarLander-v2
araffin/dqn-LunarLander-v2
DQN.load('araffin/dqn-LunarLander-v2')
{'checkpoint': 'araffin/dqn-LunarLander-v2', 'kwargs': {'target_update_interval': 30}}
['huggingface_sb3', 'stable_baselines3']
{'load_model': 'from huggingface_sb3 import load_from_hub\nfrom stable_baselines3 import DQN\nfrom stable_baselines3.common.env_util import make_vec_env\nfrom stable_baselines3.common.evaluation import evaluate_policy\n\ncheckpoint = load_from_hub("araffin/dqn-LunarLander-v2", "dqn-LunarLander-v2.zip")\n\nkwargs = dict(target_update_interval=30)\n\nmodel = DQN.load(checkpoint, **kwargs)\nenv = make_vec_env("LunarLander-v2", n_envs=1)', 'evaluate': 'mean_reward, std_reward = evaluate_policy(\n    model,\n    env,\n    n_eval_episodes=20,\n    deterministic=True,\n)\nprint(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")'}
{'dataset': 'LunarLander-v2', 'accuracy': '280.22 +/- 13.03'}
This is a trained model of a DQN agent playing LunarLander-v2 using the stable-baselines3 library.
Reinforcement Learning
Stable-Baselines3
CartPole-v1
dqn-CartPole-v1
load_from_hub(repo_id='sb3/dqn-CartPole-v1')
['algo', 'env', 'logs']
['rl_zoo3', 'stable-baselines3', 'stable-baselines3-contrib']
python train.py --algo dqn --env CartPole-v1 -f logs/
{'dataset': 'CartPole-v1', 'accuracy': '500.00 +/- 0.00'}
This is a trained model of a DQN agent playing CartPole-v1 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
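The example above is the RL Zoo training command; to download and replay the pre-trained sb3/dqn-CartPole-v1 checkpoint instead, the commands used for the other sb3 entries in this collection should carry over (a sketch, not verified for this exact repo):
python -m rl_zoo3.load_from_hub --algo dqn --env CartPole-v1 -orga sb3 -f logs/
python enjoy.py --algo dqn --env CartPole-v1 -f logs/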
Reinforcement Learning
Stable-Baselines3
deep-reinforcement-learning
td3-Ant-v3
load_from_hub(repo_id='sb3/td3-Ant-v3')
['algo', 'env', 'f']
['RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo', 'SB3: https://github.com/DLR-RM/stable-baselines3', 'SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib']
['python -m rl_zoo3.load_from_hub --algo td3 --env Ant-v3 -orga sb3 -f logs/', 'python enjoy.py --algo td3 --env Ant-v3 -f logs/', 'python train.py --algo td3 --env Ant-v3 -f logs/', 'python -m rl_zoo3.push_to_hub --algo td3 --env Ant-v3 -f logs/ -orga sb3']
{'dataset': 'Ant-v3', 'accuracy': '5822.96 +/- 93.33'}
This is a trained model of a TD3 agent playing Ant-v3 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
Reinforcement Learning
Hugging Face Transformers
Transformers
edbeeching/decision-transformer-gym-hopper-expert
AutoModel.from_pretrained('edbeeching/decision-transformer-gym-hopper-expert')
{'mean': [1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754, 2.6071432, 0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673], 'std': [0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333, 0.5899924, 1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027]}
['transformers', 'torch']
See our Blog Post, Colab notebook or Example Script for usage.
{'dataset': 'Gym Hopper environment', 'accuracy': 'Not provided'}
Decision Transformer model trained on expert trajectories sampled from the Gym Hopper environment
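Since the example field only points to external material (blog post, Colab, example script), here is a minimal loading sketch based on the api_call above; preparing states, actions, and returns-to-go for autoregressive control follows the Decision Transformer blog post and is not shown.
from transformers import AutoModel

# Load the expert Hopper checkpoint; the state mean/std listed above are used to normalize observations
model = AutoModel.from_pretrained('edbeeching/decision-transformer-gym-hopper-expert')
model.eval()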
Reinforcement Learning
Hugging Face Transformers
Transformers
edbeeching/decision-transformer-gym-halfcheetah-expert
AutoModel.from_pretrained('edbeeching/decision-transformer-gym-halfcheetah-expert')
{'mean': [-0.04489148, 0.03232588, 0.06034835, -0.17081226, -0.19480659, -0.05751596, 0.09701628, 0.03239211, 11.047426, -0.07997331, -0.32363534, 0.36297753, 0.42322603, 0.40836546, 1.1085187, -0.4874403, -0.0737481], 'std': [0.04002118, 0.4107858, 0.54217845, 0.41522816, 0.23796624, 0.62036866, 0.30100912, 0.21737163, 2.2105937, 0.572586, 1.7255033, 11.844218, 12.06324, 7.0495934, 13.499867, 7.195647, 5.0264325]}
['transformers']
See our Blog Post, Colab notebook or Example Script for usage.
{'dataset': 'Gym HalfCheetah environment', 'accuracy': 'Not specified'}
Decision Transformer model trained on expert trajectories sampled from the Gym HalfCheetah environment
Reinforcement Learning
ML-Agents
SoccerTwos
Raiden-1001/poca-Soccerv7
mlagents-load-from-hf --repo-id='Raiden-1001/poca-Soccerv7' --local-dir='./downloads'
['your_configuration_file_path.yaml', 'run_id']
['unity-ml-agents', 'deep-reinforcement-learning', 'ML-Agents-SoccerTwos']
Step 1: Write your model_id: Raiden-1001/poca-Soccerv7. Step 2: Select your .nn/.onnx file. Step 3: Click on 'Watch the agent play' 👀
{'dataset': 'SoccerTwos', 'accuracy': 'Not provided'}
This is a trained model of a poca agent playing SoccerTwos using the Unity ML-Agents Library.
Reinforcement Learning
Unity ML-Agents Library
Train and play SoccerTwos
poca-SoccerTwosv2
mlagents-load-from-hf --repo-id='Raiden-1001/poca-SoccerTwosv2' --local-dir='./downloads'
['your_configuration_file_path.yaml', 'run_id']
['ml-agents']
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
{'dataset': 'SoccerTwos', 'accuracy': 'Not provided'}
A trained model of a poca agent playing SoccerTwos using the Unity ML-Agents Library.
Reinforcement Learning Robotics
Hugging Face
Inference API
Antheia/Hanna
pipeline('robotics', model='Antheia/Hanna')
model
transformers
{'dataset': 'openai/webgpt_comparisons', 'accuracy': ''}
Antheia/Hanna is a reinforcement learning model for robotics tasks, trained on the openai/webgpt_comparisons dataset.
Reinforcement Learning Robotics
Hugging Face Transformers
EmbodiedAI tasks
VC1_BASE_NAME
model_utils.load_model(model_utils.VC1_BASE_NAME)
img
from vc_models.models.vit import model_utils
from vc_models.models.vit import model_utils

model, embd_size, model_transforms, model_info = model_utils.load_model(model_utils.VC1_BASE_NAME)
img = your_function_here ...
transformed_img = model_transforms(img)
embedding = model(transformed_img)
{'dataset': 'CortexBench', 'accuracy': 'Mean Success: 68.7%'}
The VC-1 model is a vision transformer (ViT) pre-trained on over 4,000 hours of egocentric videos from 7 different sources, together with ImageNet. The model is trained using Masked Auto-Encoding (MAE) and is available in two sizes: ViT-B and ViT-L. The model is intended for use for EmbodiedAI tasks, such as object manipulation and indoor navigation.
Reinforcement Learning Robotics
Hugging Face
6D grasping
camusean/grasp_diffusion
AutoModel.from_pretrained('camusean/grasp_diffusion')
N/A
transformers
N/A
{'dataset': 'N/A', 'accuracy': 'N/A'}
Trained Models for Grasp SE(3) DiffusionFields. Check SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion for additional details.
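No example code is provided for this entry, and it is not confirmed that AutoModel can instantiate this custom SE(3)-DiffusionFields architecture directly; a hedged sketch for simply pulling the checkpoint files locally for use with the project's own codebase:
from huggingface_hub import snapshot_download

# Download every file in the repo; pass the returned path to the SE(3)-DiffusionFields loading utilities
local_dir = snapshot_download(repo_id='camusean/grasp_diffusion')
print(local_dir)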
Reinforcement Learning Robotics
Hugging Face Transformers
EmbodiedAI tasks, such as object manipulation and indoor navigation
facebook/vc1-large
model_utils.load_model(model_utils.VC1_LARGE_NAME)
img
from vc_models.models.vit import model_utils
from vc_models.models.vit import model_utils

model, embd_size, model_transforms, model_info = model_utils.load_model(model_utils.VC1_LARGE_NAME)
img = your_function_here ...
transformed_img = model_transforms(img)
embedding = model(transformed_img)
{'dataset': 'CortexBench', 'accuracy': '68.7 (Mean Success)'}
The VC-1 model is a vision transformer (ViT) pre-trained on over 4,000 hours of egocentric videos from 7 different sources, together with ImageNet. The model is trained using Masked Auto-Encoding (MAE) and is available in two sizes: ViT-B and ViT-L. The model is intended for use for EmbodiedAI tasks, such as object manipulation and indoor navigation.
Reinforcement Learning
Stable-Baselines3
deep-reinforcement-learning
ppo-BreakoutNoFrameskip-v4
load_from_hub(repo_id='sb3/ppo-BreakoutNoFrameskip-v4')
['algo', 'env', 'orga', 'f']
['rl_zoo3', 'stable-baselines3', 'stable-baselines3-contrib']
['python -m rl_zoo3.load_from_hub --algo ppo --env BreakoutNoFrameskip-v4 -orga sb3 -f logs/', 'python enjoy.py --algo ppo --env BreakoutNoFrameskip-v4 -f logs/', 'python train.py --algo ppo --env BreakoutNoFrameskip-v4 -f logs/', 'python -m rl_zoo3.push_to_hub --algo ppo --env BreakoutNoFrameskip-v4 -f logs/ -orga sb3']
{'dataset': 'BreakoutNoFrameskip-v4', 'accuracy': '398.00 +/- 16.30'}
This is a trained model of a PPO agent playing BreakoutNoFrameskip-v4 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
Natural Language Processing Text Classification
Hugging Face Transformers
Sentiment Analysis
bert-base-multilingual-uncased-sentiment
pipeline('sentiment-analysis')
['text']
['transformers']
result = sentiment_pipeline('I love this product!')
{'dataset': [{'language': 'English', 'accuracy': {'exact': '67%', 'off-by-1': '95%'}}, {'language': 'Dutch', 'accuracy': {'exact': '57%', 'off-by-1': '93%'}}, {'language': 'German', 'accuracy': {'exact': '61%', 'off-by-1': '94%'}}, {'language': 'French', 'accuracy': {'exact': '59%', 'off-by-1': '94%'}}, {'language': 'Italian', 'accuracy': {'exact': '59%', 'off-by-1': '95%'}}, {'language': 'Spanish', 'accuracy': {'exact': '58%', 'off-by-1': '95%'}}]}
This is a bert-base-multilingual-uncased model fine-tuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of a review as a number of stars (between 1 and 5).
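A self-contained sketch of the one-liner above; the Hub id is assumed to be nlptown/bert-base-multilingual-uncased-sentiment, since the api_name omits the namespace.
from transformers import pipeline

# 1-5 star sentiment over six languages
sentiment_pipeline = pipeline('sentiment-analysis', model='nlptown/bert-base-multilingual-uncased-sentiment')
result = sentiment_pipeline('I love this product!')
print(result)  # e.g. [{'label': '5 stars', 'score': ...}]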
Natural Language Processing Text Classification
Hugging Face Transformers
Transformers
sentiment_analysis_generic_dataset
pipeline('text-classification')
[]
['transformers']
sentiment_analysis('I love this product!')
{'dataset': 'generic_dataset', 'accuracy': 'Not specified'}
This is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis; it is not intended for further fine-tuning on other tasks. The model was trained on a classified dataset for text classification.
Natural Language Processing Text Generation
Transformers
Text Generation
distilgpt2
pipeline('text-generation')
['model']
['from transformers import pipeline, set_seed']
generator = pipeline('text-generation', model='distilgpt2')
set_seed(42)
generator("Hello, I'm a language model", max_length=20, num_return_sequences=5)
{'dataset': 'WikiText-103', 'accuracy': '21.100'}
DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. With 82 million parameters, it was developed using knowledge distillation and designed to be a faster, lighter version of GPT-2. It can be used for text generation, writing assistance, creative writing, entertainment, and more.
Natural Language Processing Zero-Shot Classification
Hugging Face Transformers
Zero-Shot Image Classification
laion/CLIP-ViT-B-32-laion2B-s34B-b79K
pipeline('zero-shot-image-classification')
{'image': 'path/to/image', 'candidate_labels': ['class1', 'class2', 'class3']}
{'transformers': '>=4.0.0'}
from transformers import pipeline
classifier = pipeline('zero-shot-image-classification', model='laion/CLIP-ViT-B-32-laion2B-s34B-b79K')
classifier('path/to/image', candidate_labels=['class1', 'class2', 'class3'])
{'dataset': 'ImageNet-1k', 'accuracy': 66.6}
A CLIP ViT-B/32 model trained with the LAION-2B English subset of LAION-5B using OpenCLIP. It enables researchers to better understand and explore zero-shot, arbitrary image classification. The model can be used for zero-shot image classification, image and text retrieval, among others.
Natural Language Processing Translation
Transformers
Text-to-Text Generation
optimum/t5-small
pipeline('translation')
['text']
['transformers', 'optimum.onnxruntime']
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('optimum/t5-small')
model = ORTModelForSeq2SeqLM.from_pretrained('optimum/t5-small')
translator = pipeline('translation_en_to_fr', model=model, tokenizer=tokenizer)
results = translator("My name is Eustache and I have a pet raccoon")
print(results)
{'dataset': 'c4', 'accuracy': 'N/A'}
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. It can be used for translation, text-to-text generation, and summarization.
Audio Audio Classification
Hugging Face Transformers
Audio Classification
ast-finetuned-audioset-10-10-0.4593
pipeline('audio-classification')
transformers
{'dataset': 'AudioSet', 'accuracy': ''}
Audio Spectrogram Transformer (AST) model fine-tuned on AudioSet. It was introduced in the paper AST: Audio Spectrogram Transformer by Gong et al. and first released in this repository. The Audio Spectrogram Transformer is equivalent to ViT, but applied to audio: audio is first turned into an image (a spectrogram), after which a Vision Transformer is applied. The model gets state-of-the-art results on several audio classification benchmarks.
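No example code is given for this entry; a minimal sketch, assuming the Hub id is MIT/ast-finetuned-audioset-10-10-0.4593 (the api_name omits the namespace) and that a local audio file is available.
from transformers import pipeline

classifier = pipeline('audio-classification', model='MIT/ast-finetuned-audioset-10-10-0.4593')
predictions = classifier('path/to/audio.wav')  # top AudioSet labels with scores
print(predictions)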
Audio Automatic Speech Recognition
Hugging Face Transformers
Transformers
tiny-wav2vec2-stable-ln
pipeline('automatic-speech-recognition')
None
['transformers']
None
{'dataset': None, 'accuracy': None}
A tiny wav2vec2 model for Automatic Speech Recognition
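A minimal usage sketch; the record omits the Hub namespace for tiny-wav2vec2-stable-ln, so the repo id below is a placeholder to be replaced with the full id.
from transformers import pipeline

asr = pipeline('automatic-speech-recognition', model='<namespace>/tiny-wav2vec2-stable-ln')  # placeholder repo id
print(asr('path/to/audio.wav'))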
Natural Language Processing Text Generation
Hugging Face Transformers
Conversational
pygmalion-350m
pipeline('conversational')
N/A
transformers
N/A
{'dataset': 'The Pile', 'accuracy': 'N/A'}
This is a proof-of-concept fine-tune of Facebook's OPT-350M model optimized for dialogue, to be used as a stepping stone to higher parameter models. Disclaimer: NSFW data was included in the fine-tuning of this model. Although SFW inputs will usually result in SFW outputs, you are advised to chat at your own risk. This model is not suitable for use by minors.
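A hedged sketch of the conversational pipeline named above: the Conversation helper only exists in older transformers releases (the conversational pipeline has since been deprecated), and the Hub id is assumed to be PygmalionAI/pygmalion-350m.
from transformers import pipeline, Conversation  # Conversation is only available in older transformers releases

chatbot = pipeline('conversational', model='PygmalionAI/pygmalion-350m')  # assumed repo id
conversation = Conversation('Hello, how are you today?')
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])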
Computer Vision Depth Estimation
Hugging Face Transformers
Depth Estimation
glpn-nyu-finetuned-diode
pipeline('depth-estimation')
[]
['transformers']
{'dataset': 'diode-subset', 'accuracy': {'Loss': 0.4359, 'Rmse': 0.4276}}
This model is a fine-tuned version of vinvino02/glpn-nyu on the diode-subset dataset.
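No example code is provided for this entry; a minimal sketch, assuming the usual namespaced Hub id for this checkpoint (the api_name omits it).
from PIL import Image
from transformers import pipeline

depth_estimator = pipeline('depth-estimation', model='sayakpaul/glpn-nyu-finetuned-diode')  # assumed repo id
result = depth_estimator(Image.open('path/to/image.jpg'))
result['depth'].save('depth.png')  # the pipeline returns a PIL depth map plus the raw predicted_depth tensor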
Multimodal Document Question Answer
Hugging Face Transformers
vision-encoder-decoder
naver-clova-ix/donut-base-finetuned-docvqa
pipeline('document-question-answering')
{'image': 'path_to_image', 'question': 'your_question'}
Transformers
from transformers import pipeline

# Initialize the pipeline
doc_qa = pipeline('document-question-answering', model='naver-clova-ix/donut-base-finetuned-docvqa')

# Load an image and ask a question
image_path = 'path_to_image'
question = 'your_question'

# Get the answer
answer = doc_qa({'image': image_path, 'question': question})
print(answer)
{'dataset': 'DocVQA', 'accuracy': 'Not provided'}
Donut model fine-tuned on DocVQA. It was introduced in the paper OCR-free Document Understanding Transformer by Geewook et al. and first released in this repository. Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.
Natural Language Processing Fill-Mask
Transformers
Masked Language Modeling, Next Sentence Prediction
bert-base-uncased
pipeline('fill-mask')
['text']
['transformers']
from transformers import pipeline
unmasker = pipeline('fill-mask', model='bert-base-uncased')
unmasker("Hello I'm a [MASK] model.")
{'dataset': 'GLUE', 'accuracy': 79.6}
BERT base model (uncased) is a transformer model pretrained on a large corpus of English data using a masked language modeling (MLM) objective. It can be used for masked language modeling, next sentence prediction, and fine-tuning on downstream tasks such as sequence classification, token classification, or question answering.
Computer Vision Image Classification
Hugging Face Transformers
Image Classification
martinezomg/vit-base-patch16-224-diabetic-retinopathy
pipeline('image-classification')
{'model_name': 'martinezomg/vit-base-patch16-224-diabetic-retinopathy'}
{'transformers': '4.28.1', 'pytorch': '2.0.0+cu118', 'datasets': '2.11.0', 'tokenizers': '0.13.3'}
from transformers import pipeline

image_classifier = pipeline('image-classification', 'martinezomg/vit-base-patch16-224-diabetic-retinopathy')
result = image_classifier('path/to/image.jpg')
{'dataset': 'None', 'accuracy': 0.7744}
This model is a fine-tuned version of google/vit-base-patch16-224 (the training dataset is not named in the model card). It is designed for image classification, specifically diabetic retinopathy detection.
Computer Vision Image Segmentation
Hugging Face Transformers
Transformers
clipseg-rd64-refined
pipeline('image-segmentation')
{'model': 'CIDAS/clipseg-rd64-refined'}
transformers
{'dataset': '', 'accuracy': ''}
CLIPSeg model with reduced dimension 64, refined (using a more complex convolution). It was introduced in the paper Image Segmentation Using Text and Image Prompts by Lüddecke et al. and first released in this repository. This model is intended for zero-shot and one-shot image segmentation.
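No example code is given, and it is not confirmed by the record that the generic image-segmentation pipeline drives this prompt-based model; a sketch using the explicit CLIPSeg classes from the model card instead.
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained('CIDAS/clipseg-rd64-refined')
model = CLIPSegForImageSegmentation.from_pretrained('CIDAS/clipseg-rd64-refined')

image = Image.open('path/to/image.jpg')
prompts = ['a cat', 'a remote control']
inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
masks = torch.sigmoid(outputs.logits)  # one low-resolution mask per text prompt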
Multimodal Image-to-Text
Hugging Face Transformers
Image Captioning
microsoft/git-base
pipeline('image-to-text')
image
transformers
git_base(image)
{'dataset': ['COCO', 'Conceptual Captions (CC3M)', 'SBU', 'Visual Genome (VG)', 'Conceptual Captions (CC12M)', 'ALT200M'], 'accuracy': 'Refer to the paper for evaluation results'}
GIT (short for GenerativeImage2Text) model, base-sized version. It was introduced in the paper GIT: A Generative Image-to-text Transformer for Vision and Language by Wang et al. and first released in this repository. The model is trained using 'teacher forcing' on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, given the image tokens and the previous text tokens. This allows the model to be used for tasks like image and video captioning, visual question answering (VQA) on images and videos, and even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text).
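The example field only shows git_base(image); a minimal runnable sketch via the image-to-text pipeline, which supports GIT checkpoints.
from transformers import pipeline

captioner = pipeline('image-to-text', model='microsoft/git-base')
print(captioner('path/to/image.jpg'))  # e.g. [{'generated_text': '...'}]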
Natural Language Processing Token Classification
Transformers
Named Entity Recognition
dslim/bert-base-NER-uncased
pipeline('ner')
{}
{'transformers': '>=4.0.0'}
nlp('My name is John and I live in New York.')
{'dataset': '', 'accuracy': ''}
A pretrained BERT model for Named Entity Recognition (NER) on uncased text. It can be used to extract entities such as person names, locations, and organizations from text.
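The example above calls nlp without defining it; a minimal sketch that sets it up from the api_call and api_name fields.
from transformers import pipeline

nlp = pipeline('ner', model='dslim/bert-base-NER-uncased')
entities = nlp('My name is John and I live in New York.')
print(entities)  # person and location entities with scores and offsets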
Computer Vision Object Detection
Hugging Face Transformers
Transformers
microsoft/table-transformer-structure-recognition
pipeline('object-detection')
transformers
{'dataset': 'PubTables1M', 'accuracy': ''}
Table Transformer (DETR) model trained on PubTables1M for detecting the structure (like rows, columns) in tables.
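No example code is provided for this entry; a minimal sketch using the object-detection pipeline named in the api_call, assuming a cropped table image as input.
from PIL import Image
from transformers import pipeline

detector = pipeline('object-detection', model='microsoft/table-transformer-structure-recognition')
table_image = Image.open('path/to/table_crop.png')
print(detector(table_image))  # rows, columns and header cells with bounding boxes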
Multimodal Document Question Answer
Hugging Face Transformers
Document Question Answering
impira/layoutlm-document-qa
pipeline('document-question-answering')
['image_url', 'question']
['PIL', 'pytesseract', 'PyTorch', 'transformers']
nlp('https://templates.invoicehome.com/invoice-template-us-neat-750px.png', 'What is the invoice number?')
{'dataset': ['SQuAD2.0', 'DocVQA'], 'accuracy': 'Not provided'}
A fine-tuned version of the multi-modal LayoutLM model for the task of question answering on documents.
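The snippet above calls nlp without defining it; a sketch of the setup implied by the record (PIL and pytesseract must be installed for the OCR step).
from transformers import pipeline

nlp = pipeline('document-question-answering', model='impira/layoutlm-document-qa')
answer = nlp('https://templates.invoicehome.com/invoice-template-us-neat-750px.png',
             'What is the invoice number?')
print(answer)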
Natural Language Processing Summarization
Hugging Face Transformers
Text Summarization
facebook/bart-large-cnn
pipeline('summarization')
['ARTICLE', 'max_length', 'min_length', 'do_sample']
['transformers']
from transformers import pipeline

summarizer = pipeline('summarization', model='facebook/bart-large-cnn')
ARTICLE = ...
print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False))
{'dataset': 'cnn_dailymail', 'accuracy': {'ROUGE-1': 42.949, 'ROUGE-2': 20.815, 'ROUGE-L': 30.619, 'ROUGE-LSUM': 40.038}}
BART (large-sized model), fine-tuned on CNN Daily Mail. BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs.
Natural Language Processing Table Question Answering
Transformers
Table Question Answering
google/tapas-large-finetuned-wtq
pipeline('table-question-answering')
{'model': 'google/tapas-large-finetuned-wtq', 'task': 'table-question-answering'}
transformers
from transformers import pipeline qa_pipeline = pipeline('table-question-answering', model='google/tapas-large-finetuned-wtq') result = qa_pipeline(table=table, query=query)
{'dataset': 'wikitablequestions', 'accuracy': 0.5097}
TAPAS large model fine-tuned on WikiTable Questions (WTQ). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on SQA, WikiSQL and finally WTQ. It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
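The example above references table and query without defining them; a self-contained sketch with a toy table (TAPAS expects all cell values as strings).
from transformers import pipeline

qa_pipeline = pipeline('table-question-answering', model='google/tapas-large-finetuned-wtq')
table = {
    'Repository': ['Transformers', 'Datasets', 'Tokenizers'],
    'Stars': ['36542', '4512', '3934'],
}
query = 'How many stars does the transformers repository have?'
print(qa_pipeline(table=table, query=query))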
Natural Language Processing Text2Text Generation
Transformers
Text Generation
google/t5-v1_1-base
pipeline('text2text-generation')
{'model': 'google/t5-v1_1-base'}
{'transformers': '>=4.0.0'}
from transformers import pipeline t5 = pipeline('text2text-generation', model='google/t5-v1_1-base') t5('translate English to French: Hugging Face is a great company')
{'dataset': 'c4', 'accuracy': 'Not provided'}
Google's T5 Version 1.1 is a text-to-text transformer model for NLP tasks such as summarization, question answering, and text classification. Unlike the original T5, it was pre-trained only on the Colossal Clean Crawled Corpus (C4), with no supervised downstream tasks mixed in, so it needs to be fine-tuned before being used on a downstream task.
Natural Language Processing Token Classification
Hugging Face Transformers
Transformers
kredor/punctuate-all
pipeline('token-classification')
[]
['transformers']
{'dataset': 'multilingual', 'accuracy': 0.98}
A fine-tuned xlm-roberta-base model for punctuation prediction on twelve languages: English, German, French, Spanish, Bulgarian, Italian, Polish, Dutch, Czech, Portuguese, Slovak, Slovenian.
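No example code is given for this entry; a minimal sketch of the token-classification pipeline named above, with aggregation so consecutive sub-tokens share one predicted punctuation label.
from transformers import pipeline

punctuator = pipeline('token-classification', model='kredor/punctuate-all', aggregation_strategy='simple')
print(punctuator('my name is clara and i live in berkeley california'))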
Multimodal Visual Question Answering
Hugging Face Transformers
Transformers
microsoft/git-base-vqav2
pipeline('visual-question-answering')
image, question
['transformers']
vqa(image='path/to/image.jpg', question='What is in the image?')
{'dataset': 'VQAv2', 'accuracy': 'Refer to the paper for evaluation results'}
GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on VQAv2. It was introduced in the paper GIT: A Generative Image-to-text Transformer for Vision and Language by Wang et al. and first released in this repository.
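The example calls vqa without defining it; a minimal sketch built from the api_call and api_name fields.
from transformers import pipeline

vqa = pipeline('visual-question-answering', model='microsoft/git-base-vqav2')
print(vqa(image='path/to/image.jpg', question='What is in the image?'))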