taskID,pwc_id,title,description,areaID
0c381f73-4580-449b-8588-a0de83ff7b58,data-poisoning,Data Poisoning,"**Data Poisoning** is an adversarial attack that tries to manipulate the training dataset in order to control the prediction behavior of a trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).
Source: [Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics ](https://arxiv.org/abs/1907.07296)",adversarial
645ec664-af3e-4399-8f4c-30b1d3824548,model-posioning,Model Poisoning,,adversarial
09c182b9-3ab6-4650-a1db-61652f94df08,dnn-testing,DNN Testing,Testing the reliability of DNNs.,adversarial
fa099138-5c30-413d-b267-a61ff2051053,provable-adversarial-defense,Provable Adversarial Defense,,adversarial
de10ca7b-d816-43ee-a2ee-b32b3c4b978c,backdoor-defense-for-data-free-distillation,Backdoor Defense for Data-Free Distillation with Poisoned Teachers,Defend against backdoor attack from poisoned teachers.,adversarial
798e3714-9571-435f-8b88-9e3eb93b841d,phishing-website-detection,Phishing Website Detection,,adversarial
d8c341fc-83a6-4d1c-bafa-001374f9decb,website-fingerprinting-defense,Website Fingerprinting Defense,,adversarial
9952506d-cb03-47c6-92ad-a9a61ab5035a,backdoor-attack,Backdoor Attack,"Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.",adversarial
e5add6d4-bf5b-4540-a046-4833ba39c081,adversarial-defense,Adversarial Defense,"Competitions with currently unpublished results:
- [TrojAI](https://pages.nist.gov/trojai/)",adversarial
1b3e7f7a-2cd2-40ff-a37c-d1e9d05ed446,real-world-adversarial-attack,Real-World Adversarial Attack,Adversarial attacks that are presented in the real world,adversarial
b4945412-f0b8-487d-acfd-aae4ecb1d117,inference-attack,Inference Attack,,adversarial
de771df9-4dfb-470d-b25d-82c4d3bdb685,optimize-the-trajectory-of-uav-which-plays-a,Optimize the trajectory of a UAV acting as a base station (BS) in a communication system,,adversarial
2bade543-ab9f-44c8-a98a-7e734a9a5fd3,adversarial-attack,Adversarial Attack,"An **Adversarial Attack** is a technique to find a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to human eyes.
Source: [Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks ](https://arxiv.org/abs/2002.05388)",adversarial
d15e681a-3b19-4eae-a8c7-86e1e185d892,adversarial-text,Adversarial Text,"Adversarial Text refers to a specialised text sequence that is designed specifically to influence the prediction of a language model. Generally, adversarial text attacks are carried out on Large Language Models (LLMs). Research on understanding different adversarial approaches can help us build effective defense mechanisms to detect malicious text input and build robust language models.",adversarial
816a5986-fe97-4d51-8f86-2093c7bc9def,design-synthesis,Design Synthesis,,adversarial
df048be5-1c84-41a4-8de1-91cc10d58353,adversarial-robustness,Adversarial Robustness,Adversarial Robustness evaluates the vulnerabilities of machine learning models under various types of adversarial attacks.,adversarial
e59f0f1e-80c5-4772-9bf0-4af7b04b7a49,exposure-fairness,Exposure Fairness,,adversarial
2e14e5cd-d5ce-4e16-a8ba-fb62e19fc6d3,website-fingerprinting-attacks,Website Fingerprinting Attacks,,adversarial
34dcde45-6ae4-4421-8007-e9f789ae70b4,model-extraction,Model extraction,"Model extraction attacks, aka model stealing attacks, are used to extract the parameters from the target model. Ideally, the adversary will be able to steal and replicate a model that will have a very similar performance to the target model.",adversarial
02ff6734-6088-4bf9-89ea-6bf5310309e6,audio-declipping,Audio declipping,Audio declipping is the task of estimating the original audio signal given its clipped measurements.,audio
5f7216ad-1b42-48b1-b6d3-013ca12ff3a2,voice-conversion,Voice Conversion,"**Voice Conversion** is a technology that modifies the speech of a source speaker and makes their speech sound like that of another target speaker without changing the linguistic information.
Source: [Joint training framework for text-to-speech and voice conversion using multi-source Tacotron and WaveNet ](https://arxiv.org/abs/1903.12389)",audio
ea89829b-0b83-4f7a-bf96-19f5d1c67093,chord-recognition,Chord Recognition,,audio
13e2b8b0-8085-4eef-98d5-d9d2ee6dbfde,bandwidth-extension,Bandwidth Extension,Bandwidth extension is the task of expanding the bandwidth of a signal in a way that approximates the original or desired higher bandwidth signal.,audio
e9723fee-3988-42af-9b1a-18bd8b33db34,synthetic-speech-detection,Synthetic Speech Detection,Detect fake synthetic speech generated using machine learning,audio
7393621e-df73-4c88-887d-fe0535f7f518,audio-tagging,Audio Tagging,"Audio tagging is a task to predict the tags of audio clips. Audio tagging tasks include music tagging, acoustic scene classification, audio event classification, etc.",audio
e32a8b33-37af-4229-873a-315201f35cef,directional-hearing,Directional Hearing,Extremely low-latency audio source separation from a known direction of arrival.,audio
44b159e6-1c0c-4ab3-b232-b2ba2519db8c,bird-audio-detection,Bird Audio Detection,,audio
d0cfbdbd-ad48-4a44-a539-7d2984f42eeb,shooter-localization,Shooter Localization,Shooter localization based on videos.,audio
e9a4bfc0-20ed-40d2-be44-c411979a9ca5,speaker-orientation,Speaker Orientation,"Direction of voice, i.e., the orientation of the speaker with respect to the target device.",audio
ead36552-861e-4a39-9cef-162d40cbf646,audio-generation,Audio Generation,"Audio generation (synthesis) is the task of generating raw audio such as speech.
( Image credit: [MelNet](https://arxiv.org/pdf/1906.01083v1.pdf) )",audio
6376c35b-59b6-4a04-8992-59dcd97cd43b,sound-event-detection,Sound Event Detection,"**Sound Event Detection** (SED) is the task of recognizing the sound events and their respective temporal start and end time in a recording. Sound events in real life do not always occur in isolation, but tend to considerably overlap with each other. Recognizing such overlapping sound events is referred to as polyphonic SED.
Source: [A report on sound event detection with different binaural features ](https://arxiv.org/abs/1710.02997)",audio
c991ce23-761b-4e13-93a9-1cc237416981,audio-signal-processing,Audio Signal Processing,"This is a general task that covers transforming audio inputs into audio outputs, not limited to existing PaperWithCode categories of Source Separation, Denoising, Classification, Recognition, etc.",audio
38a06ebe-b204-41fb-b785-23960c6bae2a,direction-of-arrival-estimation,Direction of Arrival Estimation,Estimating the direction-of-arrival (DOA) of a sound source from multi-channel recordings.,audio
549a6bec-02a2-47b3-b0c2-e7159b521ff1,soundscape-evaluation,Soundscape evaluation,Evaluation of soundscapes in accordance with ISO/TS 12913-2,audio
cb059568-6f49-4211-899d-68a7735035d6,room-impulse-response,Room Impulse Response (RIR),"**Room Impulse Response (RIR)** is an audio signal processing task that involves capturing and analyzing the acoustic characteristics of a room or an environment. The goal is to measure and model the way sound waves interact with the space, including reflections, reverberation, and echoes.",audio
90ad8684-5a03-450f-b3ca-25cd2fd5391d,vowel-classification,Vowel Classification,,audio
bc4d98ad-a148-462f-9fed-df0a239d14df,audio-fingerprint,Audio Fingerprint,,audio
07260acf-2863-47ac-a203-70be0bef9405,pitch-control,Pitch control,,audio
05434118-bc36-416d-bc98-8d72e7fe086d,underwater-acoustic-classification,Underwater Acoustic Classification,Classification of underwater acoustic data,audio
ad56ab5d-36a5-4a52-8b17-e8866d7e23ca,active-speaker-localization,Active Speaker Localization,"Active Speaker Localization (ASL) is the process of spatially localizing an active speaker (talker) in an environment using either audio, vision or both.",audio
1af1bdc9-6858-468d-a88a-8a9425763567,acoustic-scene-classification,Acoustic Scene Classification,"The goal of acoustic scene classification is to classify a test recording into one of the provided predefined classes that characterizes the environment in which it was recorded.
Source: [DCASE 2019](http://dcase.community/challenge2019/task-acoustic-scene-classification)
Source: [DCASE 2018](https://dcase.community/challenge2018/task-acoustic-scene-classification)",audio
7973e618-bb32-4953-b66f-6320f99ac549,timbre-interpolation,Timbre Interpolation,,audio
69bc01f2-94af-46f1-988f-940106355f09,audio-denoising,Audio Denoising,,audio
b40d6e4f-a336-404b-b3fc-94b1cadb0f6c,sound-event-localization-and-detection,Sound Event Localization and Detection,"Given multichannel audio input, a sound event detection and localization (SELD) system outputs a temporal activation track for each of the target sound classes, along with one or more corresponding spatial trajectories when the track indicates activity. This results in a spatio-temporal characterization of the acoustic scene that can be used in a wide range of machine cognition tasks, such as inference on the type of environment, self-localization, navigation without visual input or with occluded targets, tracking of specific types of sound sources, smart-home applications, scene visualization systems, and audio surveillance, among others.",audio
0ee4ede2-687a-478f-8e1c-dd19218f8e60,audio-multiple-target-classification,Audio Multiple Target Classification,,audio
2a0e3f33-9e55-4378-b86d-28410b3463a5,bird-species-classification-with-audio-visual,Bird Species Classification With Audio-Visual Data,,audio
179315b9-7c21-4f79-bf3a-98ea344d880a,audio-captioning,Audio captioning,,audio
55dfc53e-b31d-4594-9859-841bd5ff95ad,audio-inpainting,Audio inpainting,Filling in holes in audio data,audio
affa0aa9-5f12-4e46-89ab-5208842a8362,bird-classification,Bird Classification,,audio
64e0f3b9-9ac8-45aa-ab67-2aa8acee118c,music-compression,Music Compression,,audio
1ab2760a-c0a1-41e4-86d8-554a50c2eabf,gunshot-detection,Gunshot Detection,,audio
e9e5debe-ef57-4f15-8c67-b7a444fb9f5f,language-identification,Language Identification,Language identification is the task of determining the language of a text.,audio
eb87e991-1043-47fe-9aa2-d0620a416784,instrument-recognition,Instrument Recognition,,audio
2bbb339e-ab34-496d-8a8e-5e07299bb9d1,audio-super-resolution,Audio Super-Resolution,Audio super-resolution or speech bandwidth extension (upsampling ratio = 2),audio
06462d44-572c-4641-ab6b-616d389245fa,target-speaker-extraction,Target Speaker Extraction,Extract the speech of a specified target speaker from a multi-speaker conversation.,audio
66588edc-06b1-4825-bf1f-c361758ef406,audio-classification,Audio Classification,"**Audio Classification** is a machine learning task that involves identifying and tagging audio signals into different classes or categories. The goal of audio classification is to enable machines to automatically recognize and distinguish between different types of audio, such as music, speech, and environmental sounds.",audio
d383921c-5c94-40b6-b731-a4b747c568cc,environmental-sound-classification,Environmental Sound Classification,"Classification of environmental sounds, most often sounds found in urban environments. The task is related to noise monitoring.",audio
ba9d357e-bc77-4870-82ea-7c011a11b502,fake-voice-detection,fake voice detection,,audio
4d5d8049-6399-4b3d-b8f5-6e5b3826ce4b,audio-source-separation,Audio Source Separation,"**Audio Source Separation** is the process of separating a mixture (e.g. a pop band recording) into isolated sounds from individual sources (e.g. just the lead vocals).
Source: [Model selection for deep audio source separation via clustering analysis ](https://arxiv.org/abs/1910.12626)",audio
a7bbac0e-ad40-449c-8257-76ab7c36a6bf,audio-signal-recognition,Audio Signal Recognition,,audio
dd51cd36-31b1-4162-b106-63d3c285b2b9,text-to-music-generation,Text-to-Music Generation,,audio
264887c1-ec78-41cf-83ea-726b53e45530,audio-effects-modeling,Audio Effects Modeling,"Modeling of audio effects such as reverberation, compression, distortion, etc.",audio
c4ccf591-e491-4a0c-8f80-40608cdc16ec,few-shot-audio-classification,Few-Shot Audio Classification,"Few-shot classification for audio signals. This presents a unique challenge compared to other few-shot domains, as temporal dependencies must also be handled.",audio
c1412f05-9906-475c-9fe1-dec2c574e64e,sound-classification,Sound Classification,,audio
d7241aaf-716d-4b40-9e09-2eda3f32a977,audio-visual-synchronization,Audio-Visual Synchronization,,audio
decff6be-7200-407f-bea3-39f7748f98aa,zero-shot-multi-speaker-tts,Zero-Shot Multi-Speaker TTS,,audio
62670522-d4f7-4fc6-bc1b-9649934b8e93,music-generation,Music Generation,"**Music Generation** is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.",audio
017e8a97-4eaa-4934-8da1-ec739e1b91c3,streaming-target-sound-extraction,Streaming Target Sound Extraction,"This task is a variant of the [Target Sound Extraction](https://paperswithcode.com/task/target-sound-extraction) task, with the constraint of causal streaming inference. Aiming for an algorithmic latency of less than 20 ms, at each time step, streaming audio models operate on an input audio chunk of length less than 20 ms. The causal constraint means that the model only has the knowledge of past chunks and no future chunks.",audio
a36d2abb-b66c-46b6-962e-b5505dd58d85,real-time-directional-hearing,Real-time Directional Hearing,Directional hearing models that also support real-time on-device inference,audio
c5c83559-2bf6-4519-bdbe-3ab348c87b10,voice-anti-spoofing,Voice Anti-spoofing,Discriminate genuine speech and spoofing attacks,audio
d2508e7e-3cb8-400a-85db-bc5df6a369dc,audio-dequantization,Audio Dequantization,Audio Dequantization is a process of estimating the original signal from its quantized counterpart.,audio
d328cfbe-937d-4ca5-93c8-1e6c1e401a0c,single-label-target-sound-extraction,Single-Label Target Sound Extraction,"Single-Label Target Sound Extraction is the task of extracting a given class of sounds from an audio mixture. The audio mixture may contain background noise with a relatively low amplitude compared to the foreground mixture components. The choice of the sound class is provided as input to the model in the form of a string, an integer, or a one-hot encoding of the sound class.",audio
f72bfe21-fd09-46df-9899-bbe7c3e4eca2,target-sound-extraction,Target Sound Extraction,"Target Sound Extraction is the task of extracting a sound corresponding to a given class from an audio mixture. The audio mixture may contain background noise with a relatively low amplitude compared to the foreground mixture components. The choice of the sound class is provided as input to the model in the form of a string, an integer, or a one-hot encoding of the sound class.",audio
2fe72f9f-d89a-41cf-b47a-603df2974659,acoustic-novelty-detection,Acoustic Novelty Detection,"Detect novel events given acoustic signals, either in domestic or outdoor environments.",audio
a86d3c99-c044-4f57-a4d1-9907684061ef,self-supervised-sound-classification,Self-Supervised Sound Classification,,audio
b10fcd98-de6c-43e5-b30b-17ae324e19f4,inference-optimization,Inference Optimization,,audio
76d977e0-769f-4d4c-926b-e13253fdc67f,fault-localization,Fault localization,,computer-code
2048c8ec-5085-40b6-a306-4350a62e932a,nmt,NMT,"Neural machine translation is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.",computer-code
019ed702-060d-4dc5-9523-458c74aaa7e1,code-generation,Code Generation,"**Code Generation** is the task of predicting explicit code or program structure from multimodal data sources such as incomplete code, programs in another programming language, natural language descriptions, or execution examples. Code Generation tools can assist the development of automatic programming tools to improve programming productivity.
Source: [Deep Learning for Source Code Modeling and Generation ](https://arxiv.org/abs/2002.05442)
Image source: [Measuring Coding Challenge Competence With APPS](https://paperswithcode.com/paper/measuring-coding-challenge-competence-with)",computer-code
b8dc283e-75e8-482f-bec3-4cd9685dcf0d,compiler-optimization,Compiler Optimization,Machine learning guided compiler optimization,computer-code
ab0c3250-492e-4427-b209-2784b03dace1,write-computer-programs-from-specifications,Write Computer Programs From Specifications,,computer-code
b7e390f7-e8a6-4e69-b5fd-db13c9fbada5,api-sequence-recommendation,API Sequence Recommendation,,computer-code
39060df4-7919-43c2-88db-1556170e139b,code-summarization,Source Code Summarization,"**Code Summarization** is a task that tries to comprehend code and automatically generate descriptions directly from the source code.
Source: [Improving Automatic Source Code Summarization via Deep Reinforcement Learning ](https://arxiv.org/abs/1811.07234)",computer-code
e4d237ab-6781-46a0-a8e6-8730986506de,contextual-embedding-for-source-code,Contextual Embedding for Source Code,,computer-code
6ca7fa54-99bb-4e6f-8bcc-4bd065847f25,formalize-foundations-of-universal-algebra-in,Formalize foundations of universal algebra in dependent type theory,,computer-code
8c8808d5-3c51-4cbb-878a-b9e4593a02ae,sentinel-1-sar-processing,Sentinel-1 SAR processing,,computer-code
dad56733-9fe7-4c85-ac96-793aaa0d7837,enumerative-search,Enumerative Search,,computer-code
ceac676d-6dc2-4762-a130-605150c967b2,type-prediction,Type prediction,,computer-code
6b3ba151-5a17-4ce0-9bb9-51c779140784,code-translation,Code Translation,"Code translation is the process of converting code written in one programming language to another programming language while maintaining the same functionality. This process is also known as code conversion, source-to-source translation, or transpilation. Code translation is often performed when developers want to take advantage of new programming languages, improve code performance, or maintain legacy systems. Some common examples include translating code from Python to Java, or from JavaScript to TypeScript.",computer-code
565662be-bf9a-47b3-a19d-1ec5ce3b80cb,value-prediction,Value prediction,,computer-code
e9acedb3-1b54-44d6-8a6f-43631440ca58,program-repair,Program Repair,Task of teaching ML models to modify an existing program to fix a bug in the given code.,computer-code
55361ad5-e003-496a-b311-26bed0704337,variable-misuse,Variable misuse,,computer-code
6a43283b-dcc5-4204-bd02-030a75840ec4,nature-inspired-optimization-algorithm,Nature-Inspired Optimization Algorithm,,computer-code
d4f31ac1-a07b-4114-b278-8df969405076,git-commit-message-generation,Git Commit Message Generation,,computer-code
7b88e945-6478-4bf9-9717-a2eab3076448,tiling-deployment,Tiling & Deployment,Data tiling over 3 memory hierarchy levels and deployment on microcontroller.,computer-code
86893708-b70c-4586-90d4-2529484a2d5e,single-image-portrait-relighting,Single-Image Portrait Relighting,,computer-code
c96ebb58-7a31-4586-93f3-62e958e295be,code-classification,Code Classification,,computer-code
63799d36-338a-428d-9982-9943cd5887a0,sparse-subspace-based-clustering,Sparse subspace-based clustering,,computer-code
b4701b97-ca37-4f8c-ab97-c8cca9e7fecc,learning-to-execute,Learning to Execute,,computer-code
1f377b6f-3e1b-4007-a40d-6f8b733dfaab,exception-type,Exception type,,computer-code
43cce529-3d8b-474f-b4a8-448526fbd0cf,neural-network-simulation,Neural Network simulation,Simulation of abstract or biophysical neural networks in silico,computer-code
4abd7b15-9387-463e-ba5c-62f44e0d37be,edit-script-generation,Edit script generation,"Generating edit scripts by comparing two different files or strings in order to convert one into the other. The script contains instructions such as insert, delete, and substitute.",computer-code
06cffcbe-c652-408f-9f90-48b0b35150cb,low-rank-compression,Low-rank compression,,computer-code
36ae4d6d-3b98-4bd7-abb2-90345d17b404,chart-question-answering,Chart Question Answering,Question Answering task on charts images,computer-code
a9486423-09d1-4825-9ada-34c40d277d29,swapped-operands,Swapped operands,,computer-code
2d127967-7fac-4cee-b1df-7fa1385c888f,function-docstring-mismatch,Function-docstring mismatch,,computer-code
c15aefce-aebf-4210-b138-9e604c81e819,webcam-rgb-image-classification,Webcam (RGB) image classification,,computer-code
e0124fbf-54bd-493f-b29f-f68a9329c190,codesearchnet-java,CodeSearchNet - Java,,computer-code
2b9c290f-cdbf-4b40-a992-c6a8a4dd135e,code-documentation-generation,Code Documentation Generation,"Code Documentation Generation is a supervised task where a code function is the input to the model, and the model generates the documentation for this function.
Description from: [CodeTrans: Towards Cracking the Language of Silicone's Code Through Self-Supervised Deep Learning and High Performance Computing](https://arxiv.org/pdf/2104.02443.pdf)",computer-code
c64f4f4d-b95f-4ffa-b7ef-45724bfd1c05,editcompletion,EditCompletion,"Given a code snippet that is partially edited, the goal is to predict a completion of the edit for the rest of the snippet.",computer-code
8794087d-19e3-4282-bfab-1e7125fc04e0,motion-style-transfer,Motion Style Transfer,,computer-code
fe34ae6b-b323-4c58-a326-a01258f14983,file-difference,File difference,"Generate an edit script by comparing two strings or files; the script contains insert, delete, and substitute instructions for converting the first string into the second.",computer-code
59d59461-9da1-463b-ab4a-2eca29e62637,video-defect-classification,Video Defect Classification,"Quick-View (QV) inspection is a commonly used pipe-inspection technology, but it is quite labor-intensive to find defects in a huge number of QV videos. The video defect classification task addresses this problem by predicting the categories of pipe defects shown in a short QV video.",computer-code
2a9d0135-ae0e-4c64-a5ad-6c5eb5faad18,spectral-efficiency-analysis-of-uplink,Spectral Efficiency Analysis of Uplink-Downlink Decoupled Access in C-V2X Networks,Code for Spectral Efficiency Analysis of Uplink-Downlink Decoupled Access in C-V2X Networks,computer-code
6f6c00ac-999c-4d33-858e-d381775a0873,sql-to-text,SQL-to-Text,"( Image credit: [SQL-to-Text Generation with Graph-to-Sequence Model](https://arxiv.org/pdf/1809.05255v2.pdf) )",computer-code
f723516d-6d63-46fb-bba4-0a307c8bf6ea,program-synthesis,Program Synthesis,"Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity.
Program synthesis often involves the use of advanced algorithms, artificial intelligence, and machine learning techniques to search the space of possible programs that meet the given constraints. This process can be guided by a variety of techniques, such as constraint solving, symbolic execution, and genetic algorithms.",computer-code
4a7d455d-13bf-47eb-a23a-1ee59d32d5d7,log-parsing,Log Parsing,"**Log Parsing** is the task of transforming unstructured log data into a structured format that can be used to train machine learning algorithms. The structured log data is then used to identify patterns, trends, and anomalies, which can support decision-making and improve system performance, security, and reliability. The log parsing process involves the extraction of relevant information from log files, the conversion of this information into a standardized format, and the storage of the structured data in a database or other data repository.",computer-code
0f38c7d0-554b-41e9-a5bd-ffb6d68358dc,program-induction,Program induction,Generating program code for domain-specific tasks,computer-code
5393c10a-9168-4516-815c-e30c41bba13d,programming-error-detection,Programming Error Detection,,computer-code
16ee1742-a09d-41f1-8367-8cb126019249,text-to-sql,Text-To-SQL,"**Text-to-SQL** is a task in natural language processing (NLP) where the goal is to automatically generate SQL queries from natural language text. The task involves converting the text input into a structured representation and then using this representation to generate a semantically correct SQL query that can be executed on a database.
( Image credit: [SyntaxSQLNet](https://arxiv.org/pdf/1810.05237v2.pdf) )",computer-code
0ba939a4-5d90-4c29-a82d-3a38bb9ced28,sql-synthesis,SQL Synthesis,,computer-code
1b86c4cb-cff6-4ca0-851c-849a6c71537d,text-to-code-generation,Text-to-Code Generation,"**Text-to-Code Generation** is the task of generating code from a natural language description.
Source: [Text-to-code Generation with TensorFlow, 🤗 & MBPP](https://www.kaggle.com/code/rhtsingh/text-to-code-generation-with-tensorflow-mbpp)",computer-code
d9578093-96de-4aee-9b41-880847afd32c,paraphrase-generation,Paraphrase Generation,"Paraphrase Generation involves transforming a natural language sentence to a new sentence, that has the same semantic meaning but a different syntactic or lexical surface form.",computer-code
f4ee5ab8-3a82-4a42-a4f8-b37e2956453c,sql-chatbots,Sql Chatbots,,computer-code
774dd0d2-7fe2-4079-9cf9-d7d7483a18b7,code-search,Code Search,"The goal of **Code Search** is to retrieve code fragments from a large code corpus that most closely match a developer’s intent, which is expressed in natural language.
Source: [When Deep Learning Met Code Search ](https://arxiv.org/abs/1905.03813)",computer-code
02078562-4898-4632-831a-446d59192e0c,wrong-binary-operator,Wrong binary operator,,computer-code
b99ddfa6-e8a8-4712-a6b3-8c84692190f9,code-comment-generation,Code Comment Generation,,computer-code
a4a10018-cf06-472e-9d39-f9b9fc3b75e6,annotated-code-search,Annotated Code Search,Annotated code search is the retrieval of code snippets paired with brief descriptions of their intent using natural language queries.,computer-code
55c383b6-49a0-41a8-96c5-ae67ec1d9d54,manufacturing-quality-control,Manufacturing Quality Control,AI for Quality control in manufacturing processes.,computer-vision
077fbda9-02a4-4619-b787-43f8745b10af,fine-grained-image-recognition,Fine-Grained Image Recognition,,computer-vision
519112b0-d36e-43c1-8545-7971dcaae54c,camouflage-segmentation,Camouflage Segmentation,,computer-vision
a5a44379-1f7a-49c0-930a-7ac03b5849c5,layout-design,Layout Design,,computer-vision
c34ff477-b477-41ab-b6c4-c04d9df6e1f6,video-propagation,Video Propagation,Propagating information in processed frames to unprocessed frames,computer-vision
90e841e6-df60-4e70-9f48-68f33a36295e,few-shot-temporal-action-localization,Few Shot Temporal Action Localization,Detect Action using few labeled samples,computer-vision
20303578-ba3c-4f6e-9a8c-43b4e5213f08,one-shot-object-detection,One-Shot Object Detection,"( Image credit: [Siamese Mask R-CNN](https://github.com/bethgelab/siamese-mask-rcnn) )",computer-vision
dcac16b9-53fd-40cc-9b66-7d16e2999f0a,color-manipulation,color manipulation,,computer-vision
e97207d5-3340-43b2-9207-132bc9fa7cd1,single-class-few-shot-image-synthesis,Single class few-shot image synthesis,"The goal of the single-class few-shot image synthesis task is to learn a generative model that can generate samples with the visual attributes of as few as two input images belonging to the same class.",computer-vision
e6c9bb71-5f50-4da2-a765-690fe7eddd88,sperm-morphology-classification,Sperm Morphology Classification,Multi-class classification of sperm head morphology.,computer-vision
aa9bdfdd-964a-496d-93b4-cdd16cd3f1b7,salt-and-pepper-noise-removal,Salt-And-Pepper Noise Removal,"Salt-and-pepper noise is a form of noise sometimes seen on images. It is also known as impulse noise. This noise can be caused by sharp and sudden disturbances in the image signal. It presents itself as sparsely occurring white and black pixels.
( Image credit: [NAMF](https://arxiv.org/pdf/1910.07787v1.pdf) )",computer-vision
df41c5a9-ab23-49ff-8574-9088fc5ded85,style-transfer,Style Transfer,"**Style Transfer** is a technique in computer vision and graphics that involves generating a new image by combining the content of one image with the style of another image. The goal of style transfer is to create an image that preserves the content of the original image while applying the visual style of another image.
( Image credit: [A Neural Algorithm of Artistic Style](https://arxiv.org/pdf/1508.06576v2.pdf) )",computer-vision
46b2a3b9-31aa-4d20-bb3f-a7a9effafe68,intensity-image-denoising,intensity image denoising,,computer-vision
b93d1d19-ef5f-4082-9954-bd6ca06364d9,table-recognition,Table Recognition,,computer-vision
aa10987d-c813-4ad9-adee-84e05f538b78,interest-point-detection,Interest Point Detection,,computer-vision
af8b5746-1a8d-48b5-a5a5-4940035132cc,lossy-compression-artifact-reduction,Lossy-Compression Artifact Reduction,,computer-vision
61854900-4593-4ba2-97da-07b54e15b7d4,text-based-image-editing,Text-based Image Editing,,computer-vision
337c8c7c-db4b-43ca-895b-727d3ddc4150,3d-face-animation,3D Face Animation,Image: [Cudeiro et al](https://arxiv.org/pdf/1905.03079v1.pdf),computer-vision
48c57d6b-3efd-4e21-baec-6252f1274ebb,physiological-computing,Physiological Computing,,computer-vision
3c069906-8428-4c9f-b1db-56d3ab3dbced,unsupervised-object-localization,Unsupervised Object Localization,,computer-vision
9e89f7c9-b1e6-4281-bc25-4c58860cbf33,traffic-sign-recognition,Traffic Sign Recognition,"Traffic sign recognition is the task of recognising traffic signs in an image or video.
( Image credit: [Novel Deep Learning Model for Traffic Sign Detection Using Capsule Networks](https://arxiv.org/pdf/1805.04424v1.pdf) )",computer-vision
f235e0a6-7b27-4833-89f6-84cc21d175b2,facial-recognition-and-modelling,Facial Recognition and Modelling,Facial tasks in machine learning operate based on images or video frames (or other datasets) focussed on human faces.,computer-vision
ce3da826-499f-4a52-9489-c7def9731181,video-compressive-sensing,Video Compressive Sensing,,computer-vision
f0a01df3-eaa3-4df7-92dd-9f90fea3fb75,semantic-part-detection,Semantic Part Detection,,computer-vision
9eaba594-561e-4740-9539-141b1b9391f7,unsupervised-3d-semantic-segmentation,Unsupervised 3D Semantic Segmentation,Unsupervised 3D Semantic Segmentation,computer-vision
6e370762-7b4d-45c0-b703-492e7eb8ebd3,future-hand-prediction,Future Hand Prediction,,computer-vision
335a0e7b-67ec-4a4e-842c-240a96fbc367,matching-disparate-images,Matching Disparate Images,,computer-vision
b2027bce-8030-44b1-82da-867b17ccf7d5,3d-feature-matching,3D Feature Matching,Image: [Choy et al](https://paperswithcode.com/paper/fully-convolutional-geometric-features),computer-vision
23d71f4a-5b03-4112-bb2b-9b8f990a48fa,video-editing,Video Editing,,computer-vision
b2742b5f-2704-4e0c-b487-0d0e5610ceb7,spectral-reconstruction,Spectral Reconstruction,,computer-vision
369fde73-b810-4457-882f-b594da98cbb1,depth-image-estimation,Depth Image Estimation,,computer-vision
48d89220-c2de-4fc4-8ef1-600ec1f33229,multi-hypotheses-3d-human-pose-estimation,Multi-Hypotheses 3D Human Pose Estimation,,computer-vision
d28fabf3-933f-4e1f-b22f-50cff337c0f8,svbrdf-estimation,SVBRDF Estimation,SVBRDF Estimation,computer-vision
98716f61-ddad-4baa-bd77-be3904d330bf,document-enhancement,Document Enhancement,,computer-vision
4821dde5-02a5-418b-94bf-c6c1722a21a2,robust-face-alignment,Robust Face Alignment,"Robust face alignment is the task of face alignment in unconstrained (non-artificial) conditions.
( Image credit: [Deep Alignment Network](https://github.com/MarekKowalski/DeepAlignmentNetwork) )",computer-vision
2429b265-d397-45ae-9b54-0ddcee318764,rice-grain-disease-detection,Rice Grain Disease Detection,,computer-vision
ab523915-8979-4010-ad28-d51fbce5f23b,offline-surgical-phase-recognition,Offline surgical phase recognition,"Offline surgical phase recognition: the first 40 videos to train, the last 40 videos to test.",computer-vision
4e11f383-82a1-47b1-9d66-bde7021f0670,explainable-models,Explainable Models,,computer-vision
c8fd9f9d-9cf0-4d96-a525-cd0ef3be7423,kinship-face-generation,Kinship face generation,Kinship face generation,computer-vision
867175b2-5827-4e9a-a661-d819b576779c,sensor-modeling,Sensor Modeling,"( Image credit: [LiDAR Sensor modeling and Data augmentation with GANs for Autonomous driving](https://arxiv.org/abs/1905.07290) )",computer-vision
7947cd72-a603-48cd-adf6-8234a51e362d,face-quality-assessement,Face Quality Assessement,Estimate the usability of a given face image for recognition,computer-vision
c1ce3791-c283-46be-a180-d75dceb77ef1,ifc-entity-classification,IFC Entity Classification,,computer-vision
34019573-f71b-4903-8189-500c5483c2d3,fashion-compatibility-learning,Fashion Compatibility Learning,,computer-vision
561f90c6-c8a2-4177-b50b-69ba98ae4bd7,robust-face-recognition,Robust Face Recognition,"Robust face recognition is the task of performing recognition in an unconstrained environment, where there is variation of view-point, scale, pose, illumination and expression of the face images.
( Image credit: [MeGlass dataset](https://github.com/cleardusk/MeGlass) )",computer-vision
8c3c7228-c0c5-4e10-95e7-e9f297419249,color-image-denoising,Color Image Denoising,,computer-vision
0c2b5c90-6830-40ae-b57a-0f83d3ac4f6f,adversarial-attack-detection,Adversarial Attack Detection,The detection of adversarial attacks.,computer-vision
33caa197-66cf-4cb7-a17c-a43aaf7531ba,dense-object-detection,Dense Object Detection,,computer-vision
4e2b89cf-e9b1-4f06-8fe7-257064e3f8bf,video-style-transfer,Video Style Transfer,,computer-vision
cc15d108-52f7-4cc9-b308-1740d1dc4fd0,audio-visual-video-captioning,Audio-Visual Video Captioning,,computer-vision
0a3822b1-82b4-4a0f-844f-e30b645ad038,weakly-supervised-panoptic-segmentation,Weakly-supervised panoptic segmentation,,computer-vision
e2b65769-b80e-42b5-b7ad-f294433618b7,satellite-image-classification,Satellite Image Classification,"Satellite image classification is the most significant technique used in remote sensing for the computerized study and pattern recognition of satellite information. It is based on the diverse structures of the image and involves rigorous validation of the training samples depending on the classification algorithm used.",computer-vision
0326b15e-764f-4477-81c9-3ece1bbd188d,camera-calibration,Camera Calibration,"Camera calibration involves estimating camera parameters (including camera intrinsics and extrinsics) to infer geometric features from captured sequences, which is crucial for computer vision and robotics. Driven by different neural network architectures, researchers have developed two main paradigms for learning-based camera calibration and its applications: regression-based calibration and reconstruction-based calibration.",computer-vision
cc33b771-f3cf-4d88-b9b0-a3225bced1c0,face-presentation-attack-detection,Face Presentation Attack Detection,,computer-vision
cf80ecd8-1f03-46b5-a9d4-f30b30cc1302,shadow-removal,Shadow Removal,Remove shadow from background,computer-vision
86b882e5-3101-4608-8ed1-ce41c75a4f96,prostate-zones-segmentation,Prostate Zones Segmentation,,computer-vision
b6b772ad-7e16-46eb-af3f-e2a45f10d394,template-matching,Template Matching,,computer-vision
8dc0d7f7-1e2a-4250-aaf3-da3028975801,zero-shot-action-recognition,Zero-Shot Action Recognition,,computer-vision
889b3e79-4f80-454c-862d-fc3afada019b,motion-prediction,motion prediction,,computer-vision
1387e771-0c79-4e24-a9f9-9901fa746702,referring-image-matting-refmatte-rw100,Referring Image Matting (RefMatte-RW100),"Expression-based referring image matting on natural images and manually labelled annotations, i.e., RefMatte-RW100, taking the image and a flowery expression as the input.",computer-vision
a1243840-2589-477e-9f65-dd86d3fca7c5,video-visual-relation-detection,Video Visual Relation Detection,"**Video Visual Relation Detection (VidVRD)** aims to detect instances of visual relations of interest in a video, where a visual relation instance is represented by a relation triplet with the trajectories of the subject and object. As compared to still images, videos provide a more natural set of features for detecting visual relations, such as the dynamic relations like “A-follow-B” and “A-towards-B”, and temporally changing relations like “A-chase-B” followed by “A-hold-B”. Yet, VidVRD is technically more challenging than ImgVRD due to the difficulties in accurate object tracking and diverse relation appearances in the video domain.
Source: [ImageNet-VidVRD Video Visual Relation Dataset](https://xdshang.github.io/docs/imagenet-vidvrd.html)",computer-vision
77e10813-800b-43fb-9e04-7110610a3647,person-re-identification,Person Re-Identification,"**Person Re-Identification** is a computer vision task in which the goal is to match a person's identity across different cameras or locations in a video or image sequence. It involves detecting and tracking a person and then using features such as appearance, body shape, and clothing to match their identity in different frames. The goal is to associate the same person across multiple non-overlapping camera views in a robust and efficient manner.",computer-vision
290cc1a7-896e-4fb8-9815-d20c6852f172,mutual-gaze,Mutual Gaze,Detect if two people are looking at each other,computer-vision
ba519fbb-e9ff-41e4-878d-3284190c78cb,affordance-recognition,Affordance Recognition,Affordance recognition from Human-Object Interaction,computer-vision
9f337995-dd1f-4990-82b7-b376d084985b,scene-parsing,Scene Parsing,"Scene parsing is to segment and parse an image into different image regions associated with semantic categories, such as sky, road, person, and bed. [MIT Description](http://sceneparsing.csail.mit.edu/#:~:text=Scene%20parsing%20is%20to%20segment,the%20algorithms%20of%20scene%20parsing.)",computer-vision
96b5a411-6993-44fe-b4af-02c65f029763,3d-shape-retrieval,3D Shape Classification,Image: [Sun et al](https://arxiv.org/pdf/1804.04610v1.pdf),computer-vision
5c2f38fd-7883-41cb-8961-742685c176f7,semi-supervised-learning-for-image-captioning,Semi Supervised Learning for Image Captioning,,computer-vision
4a029a09-2e27-4176-85b5-97abd7dd8e68,3d-character-animation-from-a-single-photo,3D Character Animation From A Single Photo,Image: [Weng et al](https://arxiv.org/pdf/1812.02246v1.pdf),computer-vision
c2daa4e9-53e0-42c0-a725-df67519a310b,fine-grained-image-inpainting,Fine-Grained Image Inpainting,,computer-vision
9ab7a7d6-c43f-439f-a90a-323256c0fb78,kinematic-based-workflow-recognition,Kinematic Based Workflow Recognition,,computer-vision
484d4a1a-99bd-48ea-993b-bd22dc3400dd,3d-point-cloud-reconstruction,3D Point Cloud Reconstruction,Encoding and reconstruction of 3D point clouds.,computer-vision
df5b792c-348a-4337-8f16-d3135fc3d5e7,3d-instance-segmentation-1,3D Instance Segmentation,Image: [OccuSeg](https://arxiv.org/pdf/2003.06537v3.pdf),computer-vision
54d9eee0-2e9e-44c8-b7f6-bf8dc6ff1836,visual-question-answering,Visual Question Answering (VQA),"**Visual Question Answering (VQA)** is a task in computer vision that involves answering questions about an image. The goal of VQA is to teach machines to understand the content of an image and answer questions about it in natural language.
Image Source: [visualqa.org](https://visualqa.org/)",computer-vision
417f283c-6d4d-4e06-ac9c-d1bd67807095,bbbc021-nsc-accuracy,BBBC021 NSC Accuracy,"BBBC021 is a dataset of fully imaged human cells. Cells are treated with one of 113 small molecules at 8 concentrations, and fluorescent images are captured with stains for the nucleus, actin, and microtubules. The phenotypic profiling problem is presented, where the goal is to extract features containing meaningful information about the cellular phenotype exhibited. Each of 103 unique compound-concentration treatments is labeled with a mechanism of action (MOA). The MOA is predicted for each unique treatment (averaging features over all treatment examples) by matching the MOA of the closest point, excluding points of the same compound. The dataset and more information can be found at https://bbbc.broadinstitute.org/BBBC021.",computer-vision
b3453590-4e34-4e48-93a8-bff7c34596a2,unsupervised-video-summarization,Unsupervised Video Summarization,"**Unsupervised video summarization** approaches overcome the need for ground-truth data (whose production requires time-demanding and laborious manual annotation procedures), based on learning mechanisms that require only an adequately large collection of original videos for their training. Specifically, the training is based on heuristic rules, like the sparsity, the representativeness, and the diversity of the utilized input features/characteristics.",computer-vision
1aa43b81-ebc8-443f-a06c-009c89ee49bb,age-and-gender-estimation,Age and Gender Estimation,Age and gender estimation is the dual task of estimating a person's age via regression and classifying their gender.,computer-vision
a7bd3b93-8a8f-40f2-ae8c-e41850992073,hyperspectral-image-segmentation,Hyperspectral Image Segmentation,,computer-vision
c593a9b2-cc45-4c32-b4c1-9f57748af8b2,font-style-transfer,Font Style Transfer,**Font style transfer** is the task of converting text written in one font into text written in another font while preserving the meaning of the original text. It is used to change the appearance of text while keeping its content intact.,computer-vision
fa00a39c-fac9-4ff6-ad68-e8240b7a0a7c,3d-dense-shape-correspondence,3D Dense Shape Correspondence,"Finding a meaningful correspondence between two or more shapes is one of the most fundamental shape analysis tasks. The problem can be generally stated as: given input shapes S1,S2,...,SN, find a meaningful relation (or mapping) between their elements. Under different contexts, the problem has also been referred to as registration, alignment, or simply, matching. Shape correspondence is a key algorithmic component in tasks such as 3D scan alignment and space-time reconstruction, as well as an indispensable prerequisite in diverse applications including attribute transfer, shape interpolation, and statistical modeling.",computer-vision
71220af3-d6db-4f1a-a4c1-6532b6f8bcc7,box-supervised-instance-segmentation,Box-supervised Instance Segmentation,This task aims to achieve instance segmentation with weakly bounding box annotations.,computer-vision
721453e6-fd88-4a07-951b-c9f77210a5c3,scene-change-detection,Scene Change Detection,"Scene change detection (SCD) refers to the task of localizing changes and identifying change-categories given two scenes. A scene can be either an RGB (+D) image or a 3D reconstruction (point cloud). If the scene is an image, SCD is a form of pixel-level prediction because each pixel in the image is classified according to a category. On the other hand, if the scene is point cloud, SCD is a form of point-level prediction because each point in the cloud is classified according to a category.
Some example benchmarks for this task are VL-CMU-CD, PCD, and CD2014. Recently, more complicated benchmarks such as ChangeSim, HDMap, and Mallscape are released.
Models are usually evaluated with the Mean Intersection-Over-Union (Mean IoU), Pixel Accuracy, or F1 metrics.",computer-vision
a55bd515-d73c-4629-8016-54eb4e2ebc96,shape-from-texture,Shape from Texture,,computer-vision
99682f7a-cc5c-4074-8794-585dbc0f3dcf,object-slam,Object SLAM,SLAM (Simultaneous Localisation and Mapping) at the level of object,computer-vision
84f3eebf-8bfe-4360-9fbf-8a39e8bfa14f,3d-canonical-hand-pose-estimation,3D Canonical Hand Pose Estimation,Image: [Lin et al](https://arxiv.org/pdf/2006.01320v1.pdf),computer-vision
a1ee8533-30d2-4f07-b62b-cb8521533063,vnla,VNLA,Find objects in photorealistic environments by requesting and executing language subgoals.,computer-vision
6efd7ca6-aa9e-4ceb-8f36-f7f64e419580,multiple-action-detection,Multiple Action Detection,,computer-vision
0d2b4b06-4a14-4995-bbfa-e3b1280448ef,pedestrian-attribute-recognition,Pedestrian Attribute Recognition,"Pedestrian attribute recognition is the task of recognizing pedestrian features - such as whether they are talking on a phone, whether they have a backpack, and so on.
( Image credit: [HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis](https://arxiv.org/pdf/1709.09930v1.pdf) )",computer-vision
9ebcf56e-29a4-4c01-9242-068e7688852f,multi-human-parsing,Multi-Human Parsing,"Multi-human parsing is the task of parsing multiple humans in crowded scenes.
( Image credit: [Multi-Human Parsing](https://github.com/ZhaoJ9014/Multi-Human-Parsing) )",computer-vision
49ac886f-1fe0-4039-982d-b3cefcd3344a,fine-grained-action-detection,Fine-Grained Action Detection,,computer-vision
0b07aef2-5a71-45f1-bd96-b8fb00f46c24,multi-label-image-retrieval,Multi-Label Image Retrieval,,computer-vision
0fe7102a-258d-4fe3-b56d-8c1af789b79f,talking-head-generation,Talking Head Generation,"Talking head generation is the task of generating a talking face from a set of images of a person.
( Image credit: [Few-Shot Adversarial Learning of Realistic Neural Talking Head Models](https://arxiv.org/pdf/1905.08233v2.pdf) )",computer-vision
7e5af9fd-88c6-4b2e-ae4c-2006f8450c5c,overlapped-100-5,Overlapped 100-5,,computer-vision
b9b51ca3-5922-4d9d-b575-9c45f61c4753,text-guided-image-editing,text-guided-image-editing,Editing images using text prompts.,computer-vision
f5407b29-9944-4361-afd4-b2bc593a61e0,car-pose-estimation,Car Pose Estimation,,computer-vision
68c73b3c-0bdb-4690-8046-b960ad5f20cc,aerial-video-saliency-prediction,Aerial Video Saliency Prediction,,computer-vision
424d9fb7-0684-460d-8c28-ead03b68e29e,optical-character-recognition,Optical Character Recognition (OCR),"**Optical Character Recognition** or **Optical Character Reader** (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo (for example the text on signs and billboards in a landscape photo, license plates in cars...) or from subtitle text superimposed on an image (for example: from a television broadcast)",computer-vision
3505d0f8-b6cb-4f43-b471-333c4db6f319,finger-vein-recognition,Finger Vein Recognition,,computer-vision
3fe91d2e-2518-48bb-ba51-ec53faeac2d6,cross-domain-activity-recognition,Cross-Domain Activity Recognition,,computer-vision
955f9345-20c2-4b37-98e0-7fb791f4028e,sketch-based-image-retrieval,Sketch-Based Image Retrieval,,computer-vision
70eae0ab-aa13-403d-a38a-d5e1c36ae1af,localization-in-video-forgery,Localization In Video Forgery,,computer-vision
9f1afc10-e99b-4766-b9fe-6ad91c33c863,text-spotting,Text Spotting,"Text Spotting is the combination of Scene Text Detection and Scene Text Recognition in an end-to-end manner.
It is the ability to read natural text in the wild.",computer-vision
88923373-9998-4c8d-bd46-4c79adf64cfe,temporal-metadata-manipulation-detection,Temporal Metadata Manipulation Detection,Detecting when the timestamp of an outdoor photograph has been manipulated,computer-vision
daa53db0-b87e-4742-b11f-01536d2c3b7d,video-kinematic-segmentation-base-workflow,"Video, Kinematic & Segmentation Base Workflow Recognition",,computer-vision
e69c8198-c8fa-4f6d-b2d3-8b0eee705dfb,intelligent-surveillance,Intelligent Surveillance,,computer-vision
a93cb453-1915-458f-a939-48d3e0ba7c97,occlusion-estimation,Occlusion Estimation,,computer-vision
f99e5457-d8cf-4246-b2a8-23385269a931,out-of-distribution-detection,Out-of-Distribution Detection,Detect out-of-distribution or anomalous examples.,computer-vision
6af32a18-9b9a-4000-98b5-a06dfc19de75,lung-nodule-3d-detection,Lung Nodule 3D Detection,,computer-vision
8c210f6b-6e8e-4e56-a340-175e26313ebd,3d-semantic-instance-segmentation,3D Semantic Instance Segmentation,Image: [3D-SIS](https://github.com/Sekunde/3D-SIS),computer-vision
0c2e346f-8fb7-46d9-9033-ddf88efa0312,data-ablation,Data Ablation,"Data Ablation is the study of changes in data and their effects on the performance of neural networks.",computer-vision
be5fee50-fa08-4246-b43c-55ee351ac306,stereo-depth-estimation,Stereo Depth Estimation,,computer-vision
b1adb30a-3650-455f-84ec-855240095f34,image-comprehension,Image Comprehension,,computer-vision
3fd4e0cd-44bb-4d97-9167-16621e9b1b99,blood-cell-count,Blood Cell Count,,computer-vision
01e23933-dc0b-4d83-9784-afecaa227383,face-to-face-translation,Face to Face Translation,"Given a video of a person speaking in a source language, generate a video of the same person speaking in a target language.",computer-vision
45baf847-d353-4d19-b848-3821d3d34755,whole-slide-images,whole slide images,,computer-vision
70863b8b-b974-4ee6-8126-f64110c2e5e9,depth-image-upsampling,Depth Image Upsampling,,computer-vision
a6824d7c-24f9-4a01-8dbc-1a04b8f08b1e,video-classification,Video Classification,"**Video Classification** is the task of producing a label that is relevant to the video given its frames. A good video level classifier is one that not only provides accurate frame labels, but also best describes the entire video given the features and the annotations of the various frames in the video. For example, a video might contain a tree in some frame, but the label that is central to the video might be something else (e.g., “hiking”). The granularity of the labels that are needed to describe the frames and the video depends on the task. Typical tasks include assigning one or more global labels to the video, and assigning one or more labels for each frame inside the video.
Source: [Efficient Large Scale Video Classification ](https://arxiv.org/abs/1505.06250)",computer-vision
fedd1a0f-a03f-4146-984a-534bb4eb5137,pose-prediction,Pose Prediction,Pose prediction is to predict future poses given a window of previous poses.,computer-vision
e3327164-58ee-4cb4-bfc4-69ae2bd7570b,body-detection,Body Detection,Detection of the persons or the characters defined in the dataset.,computer-vision
a0a7c3b0-348a-4dc9-a076-d0eba73d5e5c,medical-image-retrieval,Medical Image Retrieval,,computer-vision
b3e68a5e-d83a-4ab8-8b99-ad7dffad4264,reconstruction,Reconstruction,,computer-vision
cfe1465e-0e7c-4d3b-b233-04914315e3e8,soil-moisture-estimation,Soil moisture estimation,,computer-vision
afaa2030-e986-414c-86f3-da78664947a8,calving-front-delineation-in-synthetic,Calving Front Delineation In Synthetic Aperture Radar Imagery,"Delineating the calving front of a marine-terminating glacier in synthetic aperture radar (SAR) imagery. This can, for example, be done through Semantic Segmentation.",computer-vision
9826d11b-af86-4622-a81d-266e0f79e0f5,overlapping-pose-estimation,Overlapping Pose Estimation,Pose estimation with overlapping poses.,computer-vision
4f2397dd-544e-451d-ab28-22b029136503,video-anomaly-detection,Video Anomaly Detection,,computer-vision
4cc5e4f3-e037-4d4c-ae12-099da2769bf9,image-quality-estimation,Image Quality Estimation,,computer-vision
8648328f-dbd9-4250-9d8f-d89d5742e14b,hand-gesture-recognition,Hand Gesture Recognition,,computer-vision
0b64f8d1-af25-41b1-8bf5-8c10e483eb16,overlapped-5-3,Overlapped 5-3,,computer-vision
3af6e9c5-9997-4c8c-9b4c-13eeef4c32b5,interactive-video-object-segmentation,Interactive Video Object Segmentation,"The interactive scenario assumes the user gives iterative refinement inputs to the algorithm, in our case in the form of a scribble, to segment the objects of interest. Methods have to produce a segmentation mask for that object in all the frames of a video sequence taking into account all the user interactions.",computer-vision
521f7434-a72f-46d1-8c4b-4bebc0035514,space-time-video-super-resolution,Space-time Video Super-resolution,,computer-vision
c7fd4df7-a4b6-47c7-859a-602d7a60bbbe,image-manipulation-detection,Image Manipulation Detection,"The task of detecting images or image parts that have been tampered or manipulated (sometimes also referred to as doctored). This typically encompasses image splicing, copy-move, or image inpainting.",computer-vision
19bfa5c5-8ea2-4751-bf82-065f0c73e66b,few-shot-action-recognition,Few Shot Action Recognition,"Few-shot (FS) action recognition is a challenging computer vision problem, where the task is to classify an unlabelled query video into one of the action categories in the support set, which has limited samples per action class.",computer-vision
ffaa51b4-231b-4b71-9cad-0aec21fc93cd,camera-absolute-pose-regression,camera absolute pose regression,,computer-vision
139c67c3-be6e-4ccc-868b-8103be3ff7d1,procedure-learning,Procedure Learning,"Given a set of videos of the same task, the goal is to identify the key-steps required to perform the task.",computer-vision
29a96914-951f-49bd-b071-5c80a4dd4fa3,coos-7-accuracy,COOS-7 Accuracy,"COOS-7 contains 132,209 single-cell images of mouse cells, where the task is to predict protein subcellular localization. Images are spread over 1 training set and 4 testing sets, where each single-cell image contains a protein and nucleus fluorescent channels. COOS-7 provides a classification setting where four test datasets have increasing degrees of covariate shift: some images are random subsets of the training data, while others are from experiments reproduced months later and imaged by different instruments. While most classifiers perform well on test datasets similar to the training dataset, all classifiers failed to generalize their performance to datasets with greater covariate shifts. Read more at https://www.alexluresearch.com/publication/coos/.",computer-vision
78dd9bbd-8eb6-4db4-b0ee-449df2e5137b,geometric-matching,Geometric Matching,,computer-vision
f53a9606-9039-4294-b52e-72f7bcb62f14,incomplete-multi-view-clustering,Incomplete multi-view clustering,,computer-vision
00fdedef-5928-448e-9db6-bf4ffb57d09d,semi-supervised-human-pose-estimation,Semi-Supervised Human Pose Estimation,Semi-supervised human pose estimation aims to leverage the unlabelled data along with labeled data to improve the model performance.,computer-vision
20bcea27-c740-400d-a06f-3aa1fb56d55e,training-free-3d-point-cloud-classification,Training-free 3D Point Cloud Classification,Evaluation on target datasets for 3D Point Cloud Classification without any training,computer-vision
7ae77ba4-cc57-460b-84ff-d7b5890e45e8,multiple-people-tracking,Multiple People Tracking,,computer-vision
164a738f-d23a-44ff-948b-05a6ce5bbd8b,monocular-3d-object-detection,Monocular 3D Object Detection,"Monocular 3D Object Detection is the task of drawing a 3D bounding box around objects in a single 2D RGB image. It is a localization task without any extra information such as depth, other sensors, or multiple images.",computer-vision
837166a1-3331-4762-a77b-b59ca758f007,sample-probing,Sample Probing,,computer-vision
db0a2ae1-8b22-48e4-afc3-26daec162829,cloud-removal,Cloud Removal,"The majority of all optical observations collected via spaceborne satellites are affected by haze or clouds. Consequently, persistent cloud coverage affects the remote sensing practitioner's capabilities of a continuous and seamless monitoring of our planet. **Cloud removal** is the task of reconstructing cloud-covered information while preserving originally cloud-free details.
Image Source: [URL](https://patrickTUM.github.io/cloud_removal/)",computer-vision
efa83860-188c-42be-baa1-c1131044d352,speaker-specific-lip-to-speech-synthesis,Speaker-Specific Lip to Speech Synthesis,"How accurately can we infer an individual’s speech style and content from his/her lip movements? [1]
In this task, the model is trained on a specific speaker, or a very limited set of speakers.
[1] Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis, CVPR 2020.",computer-vision
28f05948-018a-42a7-8e61-d6b8cff93385,bokeh-effect-rendering,Bokeh Effect Rendering,,computer-vision
eaf2caae-282d-43d8-8615-3387dea1f3c6,video,Video,,computer-vision
78ef7f7f-ef94-42ee-96af-c3b467e6f2c9,evolving-domain-generalization,Evolving Domain Generalization,,computer-vision
a6a0f03f-eac6-4e71-bc1e-02c891aac738,infrared-image-super-resolution,Infrared image super-resolution,Aims at upsampling an IR image to create a high-resolution image with the help of a low-resolution input.,computer-vision
d5047c96-075a-4e40-86f0-6c475f4915ec,referring-expression-generation,Referring expression generation,Generate referring expressions,computer-vision
993664f7-2581-46b2-beef-acd7fedaa525,image-super-resolution,Image Super-Resolution,"**Image Super-Resolution** is a machine learning task where the goal is to increase the resolution of an image, often by a factor of 4x or more, while maintaining its content and details as much as possible. The end result is a high-resolution version of the original image. This task can be used for various applications such as improving image quality, enhancing visual detail, and increasing the accuracy of computer vision algorithms.",computer-vision
5db84c96-12ef-495b-8760-0a87a49fb436,video-deinterlacing,Video Deinterlacing,,computer-vision
b6847de3-8842-4679-aa5d-1b840c8d3ccd,unsupervised-semantic-segmentation-with,Unsupervised Semantic Segmentation with Language-image Pre-training,A segmentation task which does not utilise any human-level supervision for semantic segmentation except for a backbone which is initialised with features pre-trained with image-level labels.,computer-vision
7ee866be-8f59-4912-8489-8d7746c5e895,point-cloud-classification-dataset,Point cloud classification dataset,,computer-vision
efe7b0a1-1ed7-45f8-aa7b-2b4509b8ac04,head-detection,Head Detection,,computer-vision
b04e4ceb-329c-415e-a2b2-f5ab28785e00,classifier-calibration,Classifier calibration,Confidence calibration – the problem of predicting probability estimates representative of the true correctness likelihood – is important for classification models in many applications. The two common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).,computer-vision
e612c9f7-c5c7-49a3-8b5f-cac7f26c3c8a,pso-convnets-dynamics-1,PSO-ConvNets Dynamics 1,Incorporating distilled Cucker-Smale elements into the PSO algorithm using KNN and intertwining training with SGD,computer-vision
db54e6d3-f7b0-4e33-a1ea-0213e709cecb,trajectory-prediction,Trajectory Prediction,"**Trajectory Prediction** is the problem of predicting the short-term (1-3 seconds) and long-term (3-5 seconds) spatial coordinates of various road-agents such as cars, buses, pedestrians, rickshaws, and animals, etc. These road-agents have different dynamic behaviors that may correspond to aggressive or conservative driving styles.
Source: [Forecasting Trajectory and Behavior of Road-Agents Using Spectral Clustering in Graph-LSTMs ](https://arxiv.org/abs/1912.01118)",computer-vision
d7732c1f-343f-43f6-9745-19d606784fc4,scene-labeling,Scene Labeling,,computer-vision
9aee4efd-52dd-4b99-aa94-f965b7ea8776,transparent-objects,Transparent objects,,computer-vision
cdb65c62-caf0-4215-a7ac-b77aaefd9a77,continuous-object-recognition,Continuous Object Recognition,"Continuous object recognition is the task of performing object recognition on a data stream and learning continuously, trying to mitigate issues such as catastrophic forgetting.
( Image credit: [CORe50 dataset](https://vlomonaco.github.io/core50/) )",computer-vision
b6461c31-073a-4013-bee7-4a50cdb685ff,3d-inpainting,3D Inpainting,"**3D Inpainting** is the removal of unwanted objects from a 3D scene, such that the replaced region is visually plausible and consistent with its context.",computer-vision
5dc61c2c-a88a-43f3-af1e-9873a04167b3,image-to-gps-verification,Image-To-Gps Verification,"The image-to-GPS verification task asks whether a given image is taken at a claimed GPS location.
( Image credit: [Image-to-GPS Verification Through A Bottom-Up Pattern Matching Network](https://arxiv.org/pdf/1811.07288v1.pdf) )",computer-vision
a4d9be52-c44e-4515-879b-9d3c56865afa,hand-detection,Hand Detection,"As an important subject in the field of computer vision, hand detection plays an important role in many tasks such as human-computer interaction, autonomous driving, and virtual reality.",computer-vision
8d5b5616-a45a-498b-8ccf-07f7d99690d0,vision-language-navigation,Vision-Language Navigation,"Vision-language navigation (VLN) is the task of navigating an embodied agent to carry out natural language instructions inside real 3D environments.
( Image credit: [Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout](https://arxiv.org/pdf/1904.04195v1.pdf) )",computer-vision
6ae549b2-59ce-4f0a-a762-fe445324e214,self-supervised-image-classification,Self-Supervised Image Classification,"This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. One example of a loss function is an autoencoder-based loss where the goal is reconstruction of an image pixel-by-pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space, and where there can be a varying target instead of a fixed target to reconstruct (as in the case of autoencoders).
A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and finetunes on a % of the labels. The leaderboards for the finetuning protocol can be accessed [here](https://paperswithcode.com/task/semi-supervised-image-classification).
You may want to read some blog posts before reading the papers and checking the leaderboards:
- [Contrastive Self-Supervised Learning](https://ankeshanand.com/blog/2020/01/26/contrative-self-supervised-learning.html) - Ankesh Anand
- [The Illustrated Self-Supervised Learning](https://amitness.com/2020/02/illustrated-self-supervised-learning/) - Amit Chaudhary
- [Self-supervised learning and computer vision](https://www.fast.ai/2020/01/13/self_supervised/) - Jeremy Howard
- [Self-Supervised Representation Learning](https://lilianweng.github.io/lil-log/2019/11/10/self-supervised-learning.html) - Lilian Weng
There is also Yann LeCun's talk at AAAI-20 which you can watch [here](https://vimeo.com/390347111) (35:00+).
( Image credit: [A Simple Framework for Contrastive Learning of Visual Representations](https://arxiv.org/pdf/2002.05709v1.pdf) )",computer-vision
722efea2-a742-4c19-a715-e94446640c2e,webpage-object-detection,Webpage Object Detection,Detect web elements of various classes from candidate web elements obtained from the DOM tree (no bounding box regression needed),computer-vision
04c6e1de-7f01-4984-9b0f-8af11d58de11,contrastive-learning,Contrastive Learning,"**Contrastive Learning** is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
(Image credit: [Schroff et al. 2015](https://arxiv.org/abs/1503.03832))",computer-vision
93f99e1f-b2a1-4570-bcf4-50b15867cdda,vehicle-key-point-and-orientation-estimation,Vehicle Key-Point and Orientation Estimation,,computer-vision
ff2439da-641e-4d87-bed2-e414b20df2ad,action-triplet-detection,Action Triplet Detection,"Detecting and localizing bounding boxes of tools and anatomies, then predicting their relationship as an action triplet.",computer-vision
60dda5a0-648b-4ad4-bef4-68ab7b1cbaaf,self-driving-cars,Self-Driving Cars,"Self-driving cars: the task of making a car that can drive itself without human guidance.
( Image credit: [Learning a Driving Simulator](https://github.com/commaai/research) )",computer-vision
f89748a6-f8c7-46ba-a42b-264cf24084fa,gesture-to-gesture-translation,Gesture-to-Gesture Translation,,computer-vision
e3383875-747e-4324-8573-2e6d909a1077,3d-hand-pose-estimation,3D Hand Pose Estimation,Image: [Zimmermann et al](https://arxiv.org/pdf/1705.01389v3.pdf),computer-vision
e344093f-4360-4d58-8321-26642809a7df,texture-synthesis,Texture Synthesis,"The fundamental goal of example-based **Texture Synthesis** is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it, nor exhibits obvious unnatural looking artifacts.
Source: [Non-Stationary Texture Synthesis by Adversarial Expansion ](https://arxiv.org/abs/1805.04487)",computer-vision
68e36a37-0815-493a-9a46-e8300ffb6547,dehazing,Dehazing,,computer-vision
0b6645c6-d25d-4c31-af4a-8fce3aae6271,3d-point-cloud-reinforcement-learning,3D Point Cloud Reinforcement Learning,Reinforcement learning / robot learning from 3D point clouds,computer-vision
d0951d52-4bb6-446e-82e4-9eeb73eb9a95,action-spotting,Action Spotting,,computer-vision
a6913cfb-4563-4c2d-992b-cc5dcc00bc8a,image-editing,Image Editing,,computer-vision
086733df-4710-414b-8b93-fb2ec56e9091,thoracic-disease-classification,Thoracic Disease Classification,,computer-vision
23fbce04-2d53-466e-9bee-79ba90df50b1,point-cloud-completion,Point Cloud Completion,,computer-vision
ae694a8f-8d74-4a6c-9593-d2483680d6fe,depiction-invariant-object-recognition,Depiction Invariant Object Recognition,"Depiction invariant object recognition is the task of recognising objects irrespective of how they are visually depicted (line drawing, realistic shaded drawing, photograph etc.).
( Image credit: [SwiDeN](https://arxiv.org/pdf/1607.08764v1.pdf) )",computer-vision
2c693644-e175-41ff-a7cc-e2a5c5fea4a5,iris-recognition,Iris Recognition,,computer-vision
d5b8e217-85ba-49a4-af88-8e534be9affd,pedestrian-detection,Pedestrian Detection,"Pedestrian detection is the task of detecting pedestrians from a camera.
Further state-of-the-art results (e.g. on the KITTI dataset) can be found at [3D Object Detection](https://paperswithcode.com/task/object-detection).
( Image credit: [High-level Semantic Feature Detection: A New Perspective for Pedestrian Detection](https://github.com/liuwei16/CSP) )",computer-vision
00a704ac-9c71-446a-940e-b4a963fc24d9,co-saliency-detection,Co-Salient Object Detection,"**Co-Salient Object Detection** is a computational problem that aims at highlighting the common and salient foreground regions (or objects) in an image group. Please also refer to the online benchmark: http://dpfan.net/cosod3k/
( Image credit: [Taking a Deeper Look at Co-Salient Object Detection, CVPR2020](https://openaccess.thecvf.com/content_CVPR_2020/papers/Fan_Taking_a_Deeper_Look_at_Co-Salient_Object_Detection_CVPR_2020_paper.pdf) )",computer-vision
629faf87-2337-4999-a7f7-be2fb9f1d6a3,deep-attention,Deep Attention,,computer-vision
f0b13301-1d4d-4cb6-8f8d-217db0fe0aba,steganalysis,Steganalysis,Detect the usage of Steganography,computer-vision
67929ecc-7c05-4090-a0ee-14a1bb4ee73e,motion-detection-in-non-stationary-scenes,Motion Detection In Non-Stationary Scenes,,computer-vision
2b22bbfd-7ebf-417a-9d9f-72f522b9c40a,document-image-classification,Document Image Classification,"Document image classification is the task of classifying documents based on images of their contents.
( Image credit: [Real-Time Document Image Classification using Deep CNN and Extreme Learning Machines](https://arxiv.org/pdf/1711.05862v1.pdf) )",computer-vision
244e0caf-0328-4401-bb90-ed4f4158bc5a,image-imputation,Image Imputation,"Image imputation is the task of creating plausible images from low-resolution images or images with missing data.
( Image credit: [NASA](https://www.jpl.nasa.gov/edu/news/2019/4/19/how-scientists-captured-the-first-image-of-a-black-hole/) )",computer-vision
fbd1ef22-0580-483d-84d6-94871538f89a,safety-perception-recognition,Safety Perception Recognition,City safety perception recognition,computer-vision
867cd700-7dbe-4fb2-b953-e9c3b1dd6ef9,3d-object-classification,3D Object Classification,"3D Object Classification is the task of predicting the class of a 3D object point cloud. It is a voxel level prediction where each voxel is classified into a category. The popular benchmark for this task is the ModelNet dataset. The models for this task are usually evaluated with the Classification Accuracy metric.
Image: [Sedaghat et al](https://arxiv.org/pdf/1604.03351v2.pdf)",computer-vision
3e490758-8ff9-4d8d-9be3-1a7bfbc8ee48,interspecies-facial-keypoint-transfer,Interspecies Facial Keypoint Transfer,Find cross-domain semantic correspondence between faces of different species,computer-vision
f73034a5-4f60-4bd1-9275-892756ce8b52,classify-3d-point-clouds,Classify 3D Point Clouds,,computer-vision
5d0fbe78-523b-4b46-9ee6-366427ef57b2,image-fusion,Image Fusion,,computer-vision
bea07472-ff1d-4a8e-9722-024290c8dc9f,junction-detection,Junction Detection,,computer-vision
ede644bf-a9b8-46b9-bcd0-9e51188b63d3,aerial-video-semantic-segmentation,Aerial Video Semantic Segmentation,,computer-vision
0f9cd1bd-ec99-4d17-84b6-f2db4e012f52,panoptic-segmentation,Panoptic Segmentation,"**Panoptic Segmentation** is a computer vision task that combines semantic segmentation and instance segmentation to provide a comprehensive understanding of the scene. The goal of panoptic segmentation is to segment the image into semantically meaningful parts or regions, while also detecting and distinguishing individual instances of objects within those regions.
( Image credit: [Detectron2](https://github.com/facebookresearch/detectron2) )",computer-vision
a5be0f6f-6867-4153-aab9-18999bffd892,seeing-beyond-the-visible,Seeing Beyond the Visible,"The objective of this challenge is to automate the process of estimating the soil parameters, specifically potassium (K), phosphorus pentoxide (P2O5), magnesium (Mg) and pH, by extracting them from airborne hyperspectral images captured over agricultural areas in Poland (the exact locations are not revealed). To make the solution applicable in real-life use cases, all the parameters should be estimated as precisely as possible.",computer-vision
291abbdf-cd25-4f76-b99e-66d61171bd55,defocus-blur-detection,Defocus Blur Detection,,computer-vision
8d65585c-52ab-4b05-8f3d-35c4d0ef8037,3d-surface-generation,3D Surface Generation,Image: [AtlasNet](https://arxiv.org/pdf/1802.05384v3.pdf),computer-vision
5e90bbd8-7d20-44f1-96be-bd39a7e0d0ff,3d-room-layouts-from-a-single-rgb-panorama,3D Room Layouts From A Single RGB Panorama,Image: [Zou et al](https://arxiv.org/pdf/1803.08999v1.pdf),computer-vision
b1ba8d30-b784-4164-992c-0180ca846a27,pulmonary-arteryvein-classification,Pulmonary Artery–Vein Classification,,computer-vision
5e27d197-525e-4cc3-a50a-38c447436efe,heterogeneous-face-recognition,Heterogeneous Face Recognition,"Heterogeneous face recognition is the task of matching face images acquired from different sources (i.e., different sensors or different wavelengths) for identification or verification.
( Image credit: [Pose Agnostic Cross-spectral Hallucination via Disentangling Independent Factors](https://arxiv.org/pdf/1909.04365v1.pdf) )",computer-vision
cc29554a-8288-4bd3-9cd5-558cbc088316,mobile-periocular-recognition,Mobile Periocular Recognition,"Periocular recognition is the task of recognising a person based on their eyes (periocular).
( Image credit: [Heterogeneity Aware Deep Embedding for Mobile Periocular Recognition](https://arxiv.org/pdf/1811.00846v1.pdf) )",computer-vision
08cf72a0-5bf4-4bdd-9f19-d8c05facd584,class-incremental-learning,Class Incremental Learning,Incremental learning of a sequence of tasks when the task-ID is not available at test time.,computer-vision
c9fb7ffa-72e7-41d7-8cc1-0725826650e0,event-data-classification,Event data classification,,computer-vision
fd91982f-1b4f-4eab-b15d-3e79e0e64857,zero-shot-segmentation,Zero Shot Segmentation,,computer-vision
8d518009-78e1-458f-b843-ede5c2056eb7,animated-gif-generation,Animated GIF Generation,,computer-vision
88b82def-c3fb-40ac-89fc-bd96d7448a48,single-shot-hdr-reconstruction,Single-shot HDR Reconstruction,"SVE-based HDR imaging algorithms, also known as single-shot HDR imaging, capture a scene with pixel-wise varying exposures in a single image and then computationally synthesize an HDR image, benefiting from the multiple exposures within that single image.",computer-vision
e843433f-f490-4c04-bf33-8637fefd167d,zero-shot-transfer-image-classification,Zero-Shot Transfer Image Classification,,computer-vision
ad9768e0-b60c-4de7-a458-c7876a55c46b,self-knowledge-distillation,Self-Knowledge Distillation,,computer-vision
3ae9fdc8-77c7-49ed-bf26-00c4a7bb513e,colorization,Colorization,"**Colorization** is the process of adding plausible color information to monochrome photographs or videos. Colorization is a highly undetermined problem, requiring the mapping of a real-valued luminance image to a three-dimensional color-valued one that has no unique solution.
Source: [ChromaGAN: An Adversarial Approach for Picture Colorization ](https://arxiv.org/abs/1907.09837)",computer-vision
0642119e-a1da-4d84-a396-d8c267afbeaf,stereo-matching,Stereo Matching Hand,,computer-vision
e5e5d6a3-79cc-4338-a9a4-4397f5e44956,sketch,Sketch,,computer-vision
d63ce1d7-12a4-4bc4-bd10-6ab205764ba9,point-cloud-reconstruction,Point cloud reconstruction,"This task aims to solve inherent problems in raw point clouds: sparsity, noise, and irregularity.",computer-vision
e989dcd6-5ddc-4673-b713-b6ed74dce0b9,sports-analytics,Sports Analytics,,computer-vision
4b40fdfe-537f-49b3-a989-e0641d9721a1,lung-nodule-3d-classification,Lung Nodule 3D Classification,,computer-vision
b64227ac-7389-4a68-ab87-7a811abe7bc8,underwater-image-restoration,Underwater Image Restoration,Underwater image restoration aims to rectify the distorted colors and present the true colors of the underwater scene.,computer-vision
ef8b1841-94e9-40d9-8b80-4c8dea16ae9d,partially-view-aligned-multi-view-learning,Partially View-aligned Multi-view Learning,"In multi-view learning, Partially View-aligned Problem (PVP) refers to the case when only a portion of data is aligned, thus leading to data inconsistency.",computer-vision
2ce123e8-adc1-413c-adc5-c3672f77d379,scene-text-recognition,Scene Text Recognition,See [Scene Text Detection](https://paperswithcode.com/task/scene-text-detection) for leaderboards in this task.,computer-vision
d6d0c82b-0b87-4c57-83a8-35372398bf99,face-image-quality,Face Image Quality,,computer-vision
b1589653-13d4-46ca-91a8-0d01287988e5,demosaicking,Demosaicking,"Most modern digital cameras acquire color images by measuring only one color channel per pixel, red, green, or blue, according to a specific pattern called the Bayer pattern. **Demosaicking** is the processing step that reconstructs a full color image given these incomplete measurements.
Source: [Revisiting Non Local Sparse Models for Image Restoration ](https://arxiv.org/abs/1912.02456)",computer-vision
22ccbda1-5de9-4b39-a355-d72aa10681ab,handwritten-word-generation,Handwritten Word Generation,,computer-vision
a145a0db-28e7-440d-b309-b7e4f1559ab2,physical-attribute-prediction,Physical Attribute Prediction,,computer-vision
b0f76513-c2be-4cd6-9085-312b87152077,face-recognition,Face Recognition,"**Facial Recognition** is the task of making a positive identification of a face in a photo or video image against a pre-existing database of faces. It begins with detection - distinguishing human faces from other objects in the image - and then works on identification of those detected faces.
The state-of-the-art tables for this task are contained mainly in its constituent tasks: the face verification and face identification tasks.
( Image credit: [Face Verification](https://shuftipro.com/face-verification) )",computer-vision
066705ce-a049-43ee-91d3-057a16e0c7bb,2d-object-detection,2D Object Detection,,computer-vision
a8a726be-4fd9-4870-aa7c-234d0133f6d4,hand-gesture-recognition-1,Hand-Gesture Recognition,,computer-vision
4bb6df73-0222-48dc-8d8b-8c20ed85003d,saliency-prediction,Saliency Prediction,Saliency prediction aims to model human eye fixations on a visual scene by producing a saliency map.,computer-vision
51bbf085-942e-48dc-929d-0974cdfc8aee,video-individual-counting,Video Individual Counting,,computer-vision
93d1724f-d3d6-462d-b3a4-db94c55313a1,road-scene-understanding,road scene understanding,,computer-vision
cdb48867-4f12-40ff-9e72-2dd4bbcf7089,pose-tracking,Pose Tracking,"**Pose Tracking** is the task of estimating multi-person human poses in videos and assigning unique instance IDs for each keypoint across frames. Accurate estimation of human keypoint-trajectories is useful for human action recognition, human interaction understanding, motion capture and animation.
Source: [LightTrack: A Generic Framework for Online Top-Down Human Pose Tracking ](https://arxiv.org/abs/1905.02822)",computer-vision
6ed7e9fb-f666-4f6a-b904-a8ca3ed9743c,physical-video-anomaly-detection,Physical Video Anomaly Detection,Detecting if an entire short clip of a physical or mechanical process features an anomalous motion,computer-vision
8c1dbb22-a5cb-42fd-b7cd-f23b999556e7,short-term-object-interaction-anticipation,Short-term Object Interaction Anticipation,,computer-vision
8b52de3a-b1cc-4cb9-95d6-cf81d5de974c,human-object-interaction-motion-tracking,Human-Object-interaction motion tracking,,computer-vision
c127e59c-15f4-4a57-a4b4-9b4baf4f5969,sketch-recognition,Sketch Recognition,,computer-vision
3504d098-f824-4261-b663-aa9bf1d33806,image-smoothing,image smoothing,,computer-vision
a26fd2fb-87df-4918-9c4c-01b051caeb13,image-dehazing,Image Dehazing,"( Image credit: [Densely Connected Pyramid Dehazing Network](https://github.com/hezhangsprinter/DCPDN) )",computer-vision
730e0515-5a5e-41c6-be34-568a89197e5b,action-quality-assessment,Action Quality Assessment,Assessing/analyzing/quantifying how well an action was performed.,computer-vision
c8342f5c-aca9-4942-bd71-4d1720026534,multi-oriented-scene-text-detection,Multi-Oriented Scene Text Detection,,computer-vision
4960d685-950e-4e7a-9ad2-2365d1b27952,hand,Hand,,computer-vision
68604d30-0d8a-44b5-a6b2-0267ee2154fd,referring-image-matting-keyword-based,Referring Image Matting (Keyword-based),"Keyword-based referring image matting, taking an image and a keyword as the input.",computer-vision
b8d19e52-b81b-4319-a4d7-5f3619f99f60,single-object-discovery,Single-object discovery,,computer-vision
b8218909-ed55-48ac-9653-ce3fb35dbb21,deblurring,Deblurring,"**Deblurring** is a computer vision task that involves removing the blurring artifacts from images or videos to restore the original, sharp content. Blurring can be caused by various factors such as camera shake, fast motion, and out-of-focus objects, and can result in a loss of detail and quality in the captured images. The goal of deblurring is to produce a clear, high-quality image that accurately represents the original scene.
( Image credit: [Deblurring Face Images using Uncertainty Guided Multi-Stream Semantic Networks](https://arxiv.org/pdf/1907.13106v1.pdf) )",computer-vision
0a6700bb-2c49-4071-8dfd-b2ed92926a47,unsupervised-long-term-person-re,Unsupervised Long Term Person Re-Identification,"Long-term Person Re-Identification (Clothes-Changing Person Re-ID) is a computer vision task in which the goal is to match a person's identity across different cameras, clothes, and locations in a video or image sequence. It involves detecting and tracking a person and then using features such as appearance and body shape to match their identity in different frames. The goal is to associate the same person across multiple non-overlapping camera views in a robust and efficient manner.",computer-vision
df95db4c-899e-4bae-ae2b-bd4657a2f2a4,language-based-temporal-localization,Language-Based Temporal Localization,,computer-vision
77216b2f-f182-46af-9db3-d6b74bcfa6ae,skeleton-based-action-recognition,Skeleton Based Action Recognition,"**Skeleton-based Action Recognition** is a computer vision task that involves recognizing human actions from a sequence of 3D skeletal joint data captured from sensors such as Microsoft Kinect, Intel RealSense, and wearable devices. The goal of skeleton-based action recognition is to develop algorithms that can understand and classify human actions from skeleton data, which can be used in various applications such as human-computer interaction, sports analysis, and surveillance.
( Image credit: [View Adaptive Neural Networks for High Performance Skeleton-based Human Action Recognition](https://arxiv.org/pdf/1804.07453v3.pdf) )",computer-vision
70c69369-1184-4e29-9321-f271cb660d19,pose-estimation,Pose Estimation,"**Pose Estimation** is a computer vision task where the goal is to detect the position and orientation of a person or an object. Usually, this is done by predicting the location of specific keypoints like hands, head, elbows, etc. in case of Human Pose Estimation.
A common benchmark for this task is [MPII Human Pose](https://paperswithcode.com/sota/pose-estimation-on-mpii-human-pose)
( Image credit: [Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose](https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch) )",computer-vision
7cd209ee-e5b7-4bd9-a518-2ed01ca2a1b4,overlapped-10-1,Overlapped 10-1,,computer-vision
9d954707-0027-457a-b517-70f8bb4a0037,facial-attribute-classification,Facial Attribute Classification,"Facial attribute classification is the task of classifying various attributes of a facial image - e.g. whether someone has a beard, is wearing a hat, and so on.
( Image credit: [Multi-task Learning of Cascaded CNN for Facial Attribute Classification](https://arxiv.org/pdf/1805.01290v1.pdf) )",computer-vision
85aec53c-b21b-4da7-ac4a-78a3e9b5f725,material-classification,Material Classification,,computer-vision
128f926c-6ec2-4a0f-b7a4-ad6b579126f3,natural-image-orientation-angle-detection,Natural Image Orientation Angle Detection,"Image orientation angle detection is a challenging task for a machine because it must learn image features well enough to detect the arbitrary angle by which an image is rotated. Though some modern cameras have inertial sensors that can correct image orientation in steps of 90 degrees, those features are seldom used. The task is to detect the orientation angle of a digitally captured image, where the image may have been captured by a camera at a tilted angle (between 0° and 359°).",computer-vision
1764d539-d15e-4990-b34a-525c68e296b5,motion-detection,Motion Detection,"**Motion Detection** is a process to detect the presence of any moving entity in an area of interest. Motion Detection is of great importance due to its application in various areas such as surveillance and security, smart homes, and health monitoring.
Source: [Different Approaches for Human Activity Recognition– A Survey ](https://arxiv.org/abs/1906.05074)",computer-vision
57458956-1fb6-47ff-bcbf-97f392e5a22a,image-matting,Image Matting,"**Image Matting** is the process of accurately estimating the foreground object in images and videos. It is a very important technique in image and video editing applications, particularly in film production for creating visual effects. In the case of image segmentation, we segment the image into foreground and background by labeling the pixels. Image segmentation generates a binary image, in which a pixel belongs either to the foreground or to the background. Image Matting differs from image segmentation in that some pixels may belong to both the foreground and the background; such pixels are called partial or mixed pixels. In order to fully separate the foreground from the background in an image, accurate estimation of the alpha values for partial or mixed pixels is necessary.
Source: [Automatic Trimap Generation for Image Matting ](https://arxiv.org/abs/1707.00333)
Image Source: [Real-Time High-Resolution Background Matting](https://arxiv.org/pdf/2012.07810v1.pdf)",computer-vision
cb07fdbc-734e-49a8-84e7-cccdca49cce3,semi-supervised-video-object-segmentation,Semi-Supervised Video Object Segmentation,The semi-supervised scenario assumes the user inputs a full mask of the object(s) of interest in the first frame of a video sequence. Methods have to produce the segmentation mask for that object(s) in the subsequent frames.,computer-vision
f9eb0f5c-b26b-4616-a325-73bd8bbbb158,scene-segmentation,Scene Segmentation,"Scene segmentation is the task of splitting a scene into its various object components.
Image adapted from [Temporally coherent 4D reconstruction of complex dynamic scenes](https://paperswithcode.com/paper/temporally-coherent-4d-reconstruction-of2).",computer-vision
5bc54972-2e4a-46d4-9a7f-d5506e39e080,fine-grained-visual-recognition,Fine-Grained Visual Recognition,,computer-vision
6a8b1df1-a414-4f51-a8de-f2c99cf6dbb1,object-discovery,Object Discovery,"**Object Discovery** is the task of identifying previously unseen objects.
Source: [Unsupervised Object Discovery and Segmentation of RGBD-images ](https://arxiv.org/abs/1710.06929)",computer-vision
0fde8405-d663-4d6d-85c1-5dc9b4a3f51b,multimodal-forgery-detection,Multimodal Forgery Detection,**Multimodal Forgery Detection** is the task of detecting deep forgeries using both video and audio.,computer-vision
296bca04-4f96-41e4-9f69-629c74cb53cc,blind-image-quality-assessment,Blind Image Quality Assessment,,computer-vision
89346b32-5230-4da7-b939-d3233240c4b0,frame-duplication-detection,Frame Duplication Detection,,computer-vision
bd86d59c-4ecc-4767-8dd9-1293a9ce4371,simultaneous-localization-and-mapping,Simultaneous Localization and Mapping,"Simultaneous localization and mapping (SLAM) is the task of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.
( Image credit: [ORB-SLAM2](https://arxiv.org/pdf/1610.06475v2.pdf) )",computer-vision
8bc3bd3e-14e8-4650-976f-13111449c39f,video-grounding,Video Grounding,"**Video grounding** is the task of linking spoken language descriptions to specific video segments. In video grounding, the model is given a video and a natural language description, such as a sentence or a caption, and its goal is to identify the specific segment of the video that corresponds to the description. This can involve tasks such as localizing the objects or actions mentioned in the description within the video, or associating a specific time interval with the description.",computer-vision
4bbabb0c-1960-490a-aa50-99eda64429f1,point-cloud-registration,Point Cloud Registration,"**Point Cloud Registration** is a fundamental problem in 3D computer vision and photogrammetry. Given several sets of points in different coordinate systems, the aim of registration is to find the transformation that best aligns all of them into a common coordinate system. Point Cloud Registration plays a significant role in many vision applications such as 3D model reconstruction, cultural heritage management, landslide monitoring and solar energy analysis.
Source: [Iterative Global Similarity Points : A robust coarse-to-fine integration solution for pairwise 3D point cloud registration ](https://arxiv.org/abs/1808.03899)",computer-vision
c27d0624-33ce-4c0d-9814-ff845d77afec,3d-face-reconstruction,3D Face Reconstruction,"**3D Face Reconstruction** is a computer vision task that involves creating a 3D model of a human face from a 2D image or a set of images. The goal of 3D face reconstruction is to reconstruct a digital 3D representation of a person's face, which can be used for various applications such as animation, virtual reality, and biometric identification.
( Image credit: [3DDFA_V2](https://github.com/cleardusk/3DDFA_V2) )",computer-vision
641f0836-e546-4bc8-bfbb-5479b72be726,3d-car-instance-understanding,3D Car Instance Understanding,"3D Car Instance Understanding is the task of estimating properties (e.g. translation, rotation and shape) of a moving or parked vehicle on the road.
( Image credit: [Occlusion-Net](http://openaccess.thecvf.com/content_CVPR_2019/papers/Reddy_Occlusion-Net_2D3D_Occluded_Keypoint_Localization_Using_Graph_Networks_CVPR_2019_paper.pdf) )",computer-vision
2eca3a59-e000-41c1-9e45-e124b01dc386,compositional-zero-shot-learning,Compositional Zero-Shot Learning,"**Compositional Zero-Shot Learning (CZSL)** is a computer vision task in which the goal is to recognize unseen compositions formed from states and objects seen during training. The key challenge in CZSL is the inherent entanglement between the state and object within the context of an image. Some example benchmarks for this task are MIT-states, UT-Zappos, and C-GQA. Models are usually evaluated with the Accuracy for both seen and unseen compositions, as well as their Harmonic Mean (HM).
( Image credit: [Heosuab](https://hellopotatoworld.tistory.com/24) )",computer-vision
01bee264-d7d4-425e-9b85-f125a664aa3e,video-to-shop,Video-to-Shop,,computer-vision
289111b6-14f2-4339-845c-075674cec619,3d-semantic-scene-completion,3D Semantic Scene Completion,"This task was introduced in ""Semantic Scene Completion from a Single Depth Image"" (https://arxiv.org/abs/1611.08974) at CVPR 2017. The target is to infer the dense 3D voxelized semantic scene from an incomplete 3D input (e.g. a point cloud or a depth map) and an optional RGB image. A recent summary can be found in the paper ""3D Semantic Scene Completion: a Survey"" (https://arxiv.org/abs/2103.07466), published in IJCV 2021.",computer-vision
63fe70fb-4e3d-4eab-b742-65de8cb17da3,multispectral-object-detection,Multispectral Object Detection,,computer-vision
cdf7811a-d400-4a7b-9e7f-e3d980ee9f8b,video-based-workflow-recognition,Video Based Workflow Recognition,,computer-vision
7098f779-59f2-4caa-9445-6dffdd928e28,deception-detection-in-videos,Deception Detection In Videos,,computer-vision
f801f87b-89ee-4a80-90cd-dff6c8de1ddb,dense-captioning,Dense Captioning,,computer-vision
a32f2343-f44c-4998-b3f3-e7a5f33215e4,amodal-panoptic-segmentation,Amodal Panoptic Segmentation,The goal of this task is to simultaneously predict the pixel-wise semantic segmentation labels of the visible regions of stuff classes and the instance segmentation labels of both the visible and occluded regions of thing classes.,computer-vision
e19ba15f-36bd-4c22-9449-ef2e2543cdaf,3d-shape-modeling,3D Shape Modeling,Image: [Gkioxari et al](https://arxiv.org/pdf/1906.02739v2.pdf),computer-vision
a6a95ded-bf70-4dc3-812c-9606e38677ff,visual-grounding,Visual Grounding,"Visual Grounding (VG) aims to locate the most relevant object or region in an image, based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG:
* What is the main focus in a query?
* How to understand an image?
* How to locate an object?",computer-vision
677954f6-7958-4557-8c54-1ff4d465776a,referring-expression,Referring Expression,"Given an image and a referring expression, the task is to place a bounding box around the instance that the expression describes.",computer-vision
714fc505-696f-4e09-b53b-d122a21ab2e6,document-layout-analysis,Document Layout Analysis,"""**Document Layout Analysis** is performed to determine physical structure of a document, that is, to determine document components. These document components can consist of single connected components-regions [...] of pixels that are adjacent to form single regions [...] , or group of text lines. A text line is a group of characters, symbols, and words that are adjacent, “relatively close” to each other and through which a straight line can be drawn (usually with horizontal or vertical orientation)."" L. O'Gorman, ""The document spectrum for page layout analysis,"" in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1162-1173, Nov. 1993.
Image credit: [PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/pdf/1908.07836v1.pdf)",computer-vision
a4fa9c4d-fadc-4e1d-9c58-0fa07aab0399,multi-object-colocalization,Multi-object colocalization,,computer-vision
2657af08-e5af-47d4-b9f5-65cf6a9b915c,feature-compression,Feature Compression,"Compress data for machine interpretability to perform downstream tasks, rather than for human perception.",computer-vision
637af08f-cb6c-4c87-928b-cf09b9a3c8f4,visual-prompting,Visual Prompting,"Visual Prompting is the task of streamlining computer vision processes by harnessing the power of prompts, inspired by the breakthroughs of text prompting in NLP. This approach uses a few visual prompts to swiftly convert an unlabeled dataset into a deployed model, significantly reducing development time for both individual projects and enterprise solutions.",computer-vision
2e016c3e-2618-4d5d-8af3-a9b0709c51cd,unsupervised-text-recognition,Unsupervised Text Recognition,Decompose a text into the letters / tokens that are used to write it.,computer-vision
08e33de7-6eac-4573-9082-eaa51b8834ae,3d-pose-estimation,3D Pose Estimation,"Image credit: [GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision, ECCV'20](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123600511.pdf)",computer-vision
5af9618f-0db9-4ff1-82ae-dbd488c28446,video-super-resolution,Video Super-Resolution,"**Video Super-Resolution** is a computer vision task that aims to increase the resolution of a video sequence, typically from lower to higher resolutions. The goal is to generate high-resolution video frames from low-resolution input, improving the overall quality of the video.
( Image credit: [Detail-revealing Deep Video Super-Resolution](https://github.com/jiangsutx/SPMC_VideoSR) )",computer-vision
28d54ff9-acf2-432d-a1c1-ae44bc913dee,point-cloud-super-resolution,Point Cloud Super Resolution,"Point cloud super-resolution is a fundamental problem for 3D reconstruction and 3D data understanding. It takes a low-resolution (LR) point cloud as input and generates a high-resolution (HR) point cloud with rich details.",computer-vision
b65962df-5ec2-424e-9c29-822b4159c237,image-variation,Image-Variation,"Given an image, generate variations of the image",computer-vision
1258a435-c8f0-499a-902b-c2bee746b9ab,gaze-target-estimation,Gaze Target Estimation,Gaze Target Estimation refers to predicting the image 2D gaze location of a person in the image.,computer-vision
d971493a-67b7-4521-873c-98d2dccca480,robust-bev-detection,Robust BEV Detection,,computer-vision
773f3cb8-8f82-42ed-b260-0e4dafd14b8f,aesthetic-image-captioning,Aesthetic Image Captioning,,computer-vision
572d5644-32b7-4210-8a15-6c8d210864d8,video-denoising,Video Denoising,,computer-vision
ce356a6f-6f64-49d5-806c-f977a9e779d0,online-surgical-phase-recognition,Online surgical phase recognition,"Online surgical phase recognition: the first 40 videos to train, the last 40 videos to test.",computer-vision
fb1eeb4b-b46c-4540-b8fb-2abdd2e9c57d,rain-removal,Rain Removal,,computer-vision
889ef9ed-b7e3-4caf-b76d-a991535a4528,image-outpainting,Image Outpainting,"Predicting the visual context of an image beyond its boundary.
Image credit: [NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis](https://paperswithcode.com/paper/nuwa-infinity-autoregressive-over?from=n35)",computer-vision
32f3895d-31f7-41c7-842a-793e049b8ac1,object-tracking,Object Tracking,"**Object tracking** is the task of taking an initial set of object detections, creating a unique ID for each of the initial detections, and then tracking each of the objects as they move around frames in a video, maintaining the ID assignment. State-of-the-art methods involve fusing data from RGB and event-based cameras to produce more reliable object tracking. CNN-based models using only RGB images as input are also effective. The most popular benchmark is OTB. There are several evaluation metrics specific to object tracking, including HOTA, MOTA, IDF1, and Track-mAP.
( Image credit: [Towards-Realtime-MOT
](https://github.com/Zhongdao/Towards-Realtime-MOT) )",computer-vision
85ce05dd-b537-487e-930c-b95a0196874c,curved-text-detection,Curved Text Detection,,computer-vision
28874bd4-abcd-4d41-bb42-3e25401885b2,action-understanding,Action Understanding,,computer-vision
e99a6051-893b-49dd-ac7f-b5635941e689,multi-label-zero-shot-learning,Multi-label zero-shot learning,,computer-vision
b7e57b27-4e4d-421d-8986-ddd9395629e2,text-to-video-editing,Text-to-Video Editing,,computer-vision
02b9255a-6ab3-43c8-9d90-b0b3a393a451,generalized-few-shot-classification,Generalized Few-Shot Classification,,computer-vision
6b23becb-5cf9-4c58-96e6-32f7cd4699c9,3d-point-cloud-classification,3D Point Cloud Classification,Image: [Qi et al](https://arxiv.org/pdf/1612.00593v2.pdf),computer-vision
7b82290f-e746-4dd6-9f83-eae2b28fce3d,photo-to-caricature-translation,Photo-To-Caricature Translation,"Photo-to-caricature translation is the task of adapting a photo to a cartoon or sketch.
( Image credit: [WarpGAN](https://arxiv.org/pdf/1811.10100v3.pdf) )",computer-vision
1c184c1e-81f2-402a-99f7-3c6695888e97,open-vocabulary-panoptic-segmentation,Open Vocabulary Panoptic Segmentation,,computer-vision
9464c263-1354-473d-8b6c-0b10d3669a1e,camera-shot-boundary-detection,Camera shot boundary detection,"The objective of camera shot boundary detection is to find the transitions between the camera shots in a video and classify the type of camera transition. This task is introduced in SoccerNet-v2, where 3 types of transitions are considered (abrupt, logo, smooth).",computer-vision
03844e08-8b60-4d5c-b0e9-69092cc06e0d,jpeg-compression-artifact-reduction,JPEG Compression Artifact Reduction,,computer-vision
99b0da17-dcf6-44c7-850a-6bd5a9a91e2d,3d-shape-reconstruction-from-a-single-2d,3D Shape Reconstruction From A Single 2D Image,Image: [Liao et al](https://arxiv.org/pdf/1811.12016v1.pdf),computer-vision
8c8f5486-08a8-4b35-adb6-ae05864a2731,3d-face-modeling,3D Face Modelling,,computer-vision
9ea266c8-d0e7-44de-998e-7d2869eb9cd6,steering-control,Steering Control,,computer-vision
dde44fd2-d615-4d40-ab17-13ee041cf113,event-based-optical-flow,Event-based Optical Flow,,computer-vision
b00cbca4-27c3-4a5f-8828-891f204c03a5,robust-3d-semantic-segmentation,Robust 3D Semantic Segmentation,3D Semantic Segmentation under Out-of-Distribution Scenarios,computer-vision
04e3ecab-e306-4185-b439-fa0fe66bac45,kiss-detection,Kiss Detection,,computer-vision
aaf96e32-38be-46d8-9b7a-405d251d3bc6,photo-retouching,Photo Retouching,,computer-vision
0a86bfbb-5ffa-47c3-accb-9e987ffd4646,handwriting-verification,Handwriting Verification,The goal of handwriting verification is to find a measure of confidence whether the given handwritten samples are written by the same or different writer.,computer-vision
d344af70-5268-4657-a5d1-662e98049eb4,automatic-post-editing,Automatic Post-Editing,Automatic post-editing (APE) is used to correct errors in the translation made by the machine translation systems.,computer-vision
cad8b222-0440-4319-8a0e-2cf3d6db3f6b,sensor-fusion,Sensor Fusion,Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually. [Wikipedia],computer-vision
6139624f-cd9b-4d3d-909c-ece1570ec869,road-segementation,Road Segmentation,Road Segmentation is a pixel-wise binary classification task for extracting the underlying road network. Various heuristic and data-driven models have been proposed. Continuity and robustness remain among the major challenges in this area.,computer-vision
b8ced694-49c0-4068-8995-fe257865899b,steganographics,Steganographics,,computer-vision
aa006b5c-eb4d-4856-b917-20c4fc876487,fish-detection,Fish Detection,,computer-vision
37e643e2-4304-4dcf-954d-0c731195ab5c,keypoint-detection,Keypoint Detection,"**Keypoint Detection** involves simultaneously detecting people and localizing their keypoints. Keypoints are the same thing as interest points. They are spatial locations, or points in the image, that define what is interesting or what stands out in the image. They are invariant to image rotation, shrinkage, translation, distortion, and so on.
( Image credit: [PifPaf: Composite Fields for Human Pose Estimation](https://github.com/vita-epfl/openpifpaf); ""Learning to surf"" by fotologic, license: CC-BY-2.0 )",computer-vision
ffd96a82-9c99-43c5-86ed-98d6f9ef9d81,jpeg-decompression,JPEG Decompression,Image credit: [Palette: Image-to-Image Diffusion Models](https://paperswithcode.com/paper/palette-image-to-image-diffusion-models),computer-vision
57e88d12-831f-46a0-a621-18384bbfa2fc,image-to-video-person-re-identification,Image-To-Video Person Re-Identification,,computer-vision
ecd40f2c-4316-4890-b4eb-1207488e977a,gaze-redirection,gaze redirection,,computer-vision
6acabd30-8181-459b-a680-00ea9573ed78,set-matching,set matching,,computer-vision
a2bc2510-418d-4940-8d13-7b930938f95c,boundary-grounding,Boundary Grounding,"Provided with a description of a boundary inside a video, the machine is required to locate that boundary in the video.",computer-vision
00da8436-2416-47a4-b1bf-a245aaecc577,grounded-situation-recognition,Grounded Situation Recognition,"Grounded Situation Recognition aims to produce the structured image summary which describes the primary activity (verb), its relevant entities (nouns), and their bounding-box groundings.",computer-vision
ef5d03b8-024a-4b60-8af1-23c882c8f410,gan-image-forensics,GAN image forensics,,computer-vision
701ec3f3-144b-4019-a392-834d0cf7c2d3,video-synchronization,Video Synchronization,,computer-vision
0fbffadf-2436-4652-8e67-5ebbc3742d69,rgb-t-tracking,Rgb-T Tracking,,computer-vision
c15bc292-a06e-48df-a2eb-bc093855dd38,temporal-action-proposal-generation,Temporal Action Proposal Generation,,computer-vision
2c269f80-3001-4bab-9c3a-6f84c9b2dc20,disjoint-15-5,Disjoint 15-5,,computer-vision
a25568fa-cb3e-4571-9068-aea851bd6b23,image-reconstruction,Image Reconstruction,,computer-vision
7ccd73ee-e1e2-48bf-9a57-0655add9b7d7,single-view-3d-reconstruction,Single-View 3D Reconstruction,,computer-vision
7fe7666c-b9d6-4c2b-b2b8-105af8bcf5ed,image-declipping,Image Declipping,,computer-vision
f95d37f0-6284-4ebd-9a6e-1f37757597d2,sign-language-recognition,Sign Language Recognition,"**Sign Language Recognition** is a computer vision and natural language processing task that involves automatically recognizing and translating sign language gestures into written or spoken language. The goal of sign language recognition is to develop algorithms that can understand and interpret sign language, enabling people who use sign language as their primary mode of communication to communicate more easily with non-signers.
( Image credit: [Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison](https://arxiv.org/pdf/1910.11006v1.pdf) )",computer-vision
ec82c0b8-2b90-42b2-94fb-b4e2dcff037f,embodied-question-answering,Embodied Question Answering,,computer-vision
8140ab19-f065-4d9c-ae59-d7132f888df0,concept-alignment,Concept Alignment,**Concept Alignment** aims to align the learned representations or concepts within a model with the intended or target concepts. It involves adjusting the model's parameters or training process to ensure that the learned concepts accurately reflect the underlying patterns in the data.,computer-vision
c19a07b0-9c08-496c-ab48-5d1592aed73d,spatial-relation-recognition,Spatial Relation Recognition,,computer-vision
3beb979b-4aeb-45fc-b78f-91c8104c53b4,story-continuation,Story Continuation,"The task involves providing an initial scene that can be obtained in real world use cases. By including this scene, a model can then copy and adapt elements from it as it generates subsequent images.
Source: [StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation](https://paperswithcode.com/paper/storydall-e-adapting-pretrained-text-to-image)",computer-vision
06b80c56-ff77-4207-b1d9-e6aaeaf85679,attentive-segmentation-networks,Attentive segmentation networks,,computer-vision
2e97d436-ad3e-4008-a379-e8c58481c7dc,3d-multi-person-pose-estimation-absolute,3D Multi-Person Pose Estimation (absolute),"This task aims to solve absolute 3D multi-person pose Estimation (camera-centric coordinates). No ground truth human bounding box and human root joint coordinates are used during testing stage.
( Image credit: [RootNet](https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE) )",computer-vision
572ad12d-3105-4b16-a57c-f7db9edaec7a,hand-keypoint-localization,Hand Keypoint Localization,,computer-vision
3c7f562b-0d26-4b1a-8bec-7753f0d3cf0c,3d-facial-expression-recognition,3D Facial Expression Recognition,"3D facial expression recognition is the task of modelling facial expressions in 3D from an image or video.
( Image credit: [Expression-Net](https://github.com/fengju514/Expression-Net) )",computer-vision
e847a5c9-f25a-4327-ad89-11539e8de2d5,self-supervised-learning,Self-Supervised Learning,"**Self-Supervised Learning** was proposed to extend the success of supervised learning to unlabeled data. Producing a dataset with good labels is expensive, while unlabeled data is being generated all the time, so the motivation of Self-Supervised Learning is to make use of this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then train on these generated labels in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn the latent features of the data. This technique is often employed in computer vision, video processing and robot control.
Source: [Self-supervised Point Set Local Descriptors for Point Cloud Registration ](https://arxiv.org/abs/2003.05199)
Image source: [LeCun](https://www.youtube.com/watch?v=7I0Qt7GALVk)",computer-vision
f683e0da-af3a-49f0-b612-503426e13613,anomaly-detection-at-30-anomaly,Anomaly Detection at 30% anomaly,Performance of unsupervised anomaly detection at specific anomaly percentage.,computer-vision
3ef0ab23-0e7c-4e84-b97c-559a803c2c91,activity-recognition,Activity Recognition,"Human **Activity Recognition** is the problem of identifying events performed by humans given a video input. It is formulated as a binary (or multiclass) classification problem of outputting activity class labels. Activity Recognition is an important problem with many societal applications including smart surveillance, video search/retrieval, intelligent robots, and other monitoring systems.
Source: [Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters ](https://arxiv.org/abs/1605.08140)",computer-vision
def15f22-d707-42ab-97a7-27fb4e90aefa,action-triplet-recognition,Action Triplet Recognition,"Recognising an action as a triplet of subject, verb, and object. Examples: HOI = Human-Object Interaction, Surgical IVT = Instrument-Verb-Target, etc.",computer-vision
2dbae742-1996-42ea-b195-63eda3373c34,image-based-localization,Image-Based Localization,Determining the location of an image without GPS based on cross-view matching. In most cases a database of satellite images is used against which the ground images are matched.,computer-vision
36a8b3a4-5e75-471a-a47f-d09a64c876ad,robust-object-detection,Robust Object Detection,"A benchmark for the robustness of object detection models to image corruptions and distortions.
To allow fair comparison of robustness-enhancing methods, all models have to use a standard ResNet50 backbone, because performance scales strongly with backbone capacity. If requested, an unrestricted category can be added later.
Benchmark Homepage: https://github.com/bethgelab/robust-detection-benchmark
Metrics:
mPC [AP]: Mean Performance under Corruption [measured in AP]
rPC [%]: Relative Performance under Corruption [measured in %]
Test sets:
Coco: val 2017; Pascal VOC: test 2007; Cityscapes: val;
( Image credit: [Benchmarking Robustness in Object Detection](https://arxiv.org/pdf/1907.07484v1.pdf) )",computer-vision
ff8f3837-1a78-420f-97c2-f86b11efd4dc,video-quality-assessment,Video Quality Assessment,"Video Quality Assessment is a computer vision task aiming to mimic video-based human subjective perception. The goal is to produce a MOS (mean opinion score), where a higher score indicates better perceptual quality. Some well-known benchmarks for this task are KoNViD-1k, LIVE-VQC, YouTube-UGC and LSVQ. SROCC/PLCC/RMSE are usually used to evaluate the performance of different models.",computer-vision
05a0944e-bc95-46eb-a7bf-fce5e9398668,semi-supervised-image-classification,Semi-Supervised Image Classification,"Semi-supervised image classification leverages unlabelled data as well as labelled data to increase classification performance.
You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards:
- [An overview of proxy-label approaches for semi-supervised learning](https://ruder.io/semi-supervised/) - Sebastian Ruder
- [Semi-Supervised Learning in Computer Vision](https://amitness.com/2020/07/semi-supervised-learning/) - Amit Chaudhary
( Image credit: [Self-Supervised Semi-Supervised Learning](https://arxiv.org/pdf/1905.03670v2.pdf) )",computer-vision
5279524a-7d6b-4c54-994e-7000f928a9b7,action-analysis,Action Analysis,,computer-vision
69f9fa5b-0d24-48a1-9007-7f40b25bd770,holdout-set,Holdout Set,,computer-vision
c7c472be-e1b0-4364-bb74-9ff5aeb19321,gait-recognition-in-the-wild,Gait Recognition in the Wild,"Gait Recognition in the Wild refers to methods evaluated under real-world scenes, i.e., unconstrained environments.",computer-vision
cbf2402c-f923-4662-9669-3871f8124cc9,3d-object-reconstruction,3D Object Reconstruction,Image: [Choy et al](https://arxiv.org/pdf/1604.00449v1.pdf),computer-vision
fe76d073-97b4-4d4d-87fb-4205d488be1f,materials-imaging,Materials Imaging,,computer-vision
944a959e-026b-476e-96be-69858e44a4e3,video-matting,Video Matting,Image credit: [https://arxiv.org/pdf/2012.07810v1.pdf](https://arxiv.org/pdf/2012.07810v1.pdf),computer-vision
972530d5-379c-4a17-934d-5fb25e3062cf,gait-recognition,Gait Recognition,"( Image credit: [GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition](https://github.com/AbnerHqC/GaitSet) )",computer-vision
06ca4f24-76e2-46be-a305-3224c2806f58,monocular-3d-object-localization,Monocular 3D Object Localization,,computer-vision
48aa91eb-20d1-46f4-91f6-9161bccd3054,birds-eye-view-object-detection,Birds Eye View Object Detection,KITTI bird's eye view detection task,computer-vision
444f0469-3ebc-4a14-86ff-e0af3a850025,moving-object-detection,Moving Object Detection,,computer-vision
3206863e-8bf4-4a79-8eea-f800ea7fffec,crosslingual-text-to-image-generation,Crosslingual Text-to-Image Generation,,computer-vision
72c7bcf5-2728-49c8-9a40-82d02a565c02,document-to-image-conversion,Document To Image Conversion,,computer-vision
1b1ff466-1cc6-4828-afd2-20afd521a56b,person-identification,Person Identification,,computer-vision
081e0f9f-0678-49bb-955a-49ff5f03f74b,medical-image-enhancement,Medical Image Enhancement,Aims to improve the perceptual quality of low-quality medical images,computer-vision
c3cc7e4f-7eb8-44d1-a135-96211e0aeb86,semi-supervised-fashion-compatibility,Semi-Supervised Fashion Compatibility,,computer-vision
91324ede-2c21-4a19-b054-ce1f3d7f009f,visual-tracking,Visual Tracking,"**Visual Tracking** is an essential and actively researched problem in the field of computer vision with various real-world applications such as robotic services, smart surveillance systems, autonomous driving, and human-computer interaction. It refers to the automatic estimation of the trajectory of an arbitrary target object, usually specified by a bounding box in the first frame, as it moves around in subsequent video frames.
Source: [Learning Reinforced Attentional Representation for End-to-End Visual Tracking ](https://arxiv.org/abs/1908.10009)",computer-vision
0ddc2bfc-c370-4362-af84-9fee2d92c0d3,overlapped-25-25,Overlapped 25-25,,computer-vision
8a97ad79-47a5-418d-abd9-1eaf97b045fc,3d-human-pose-tracking,3D Human Pose Tracking,,computer-vision
caa5c8bd-2bdc-4ef1-9d0c-444e72e8aa57,transparency-separation,Transparency Separation,,computer-vision
62b0c8dd-a998-4365-9869-b619a0eca128,3d-lane-detection,3D Lane Detection,"The goal of **3D Lane Detection** is to perceive lanes that provide guidance for autonomous vehicles. A lane can be represented as a visible laneline or a conceptual centerline. Furthermore, a lane obtains extra attributes from the understanding of the surrounding environment.
( Image credit: [OpenLane-V2](https://github.com/OpenDriveLab/OpenLane-V2) )",computer-vision
3391ea2f-e679-4bc6-ae4e-449105d7abe9,indoor-monocular-depth-estimation,Indoor Monocular Depth Estimation,,computer-vision
e5009c42-e81d-4dc7-9493-ce7af1bb2275,landmine,Landmine,,computer-vision
6169b676-ddb2-4dd2-9dc6-4ae0b64eca2a,multiple-object-tracking-with-transformer,Multiple Object Tracking with Transformer,,computer-vision
08020121-9962-4411-90ad-bd9fcac3e86d,large-scale-person-re-identification,Large-Scale Person Re-Identification,,computer-vision
a68aee2e-39df-4390-89df-f08c27a852a0,face-detection,Face Detection,"**Face Detection** is a computer vision task that involves automatically identifying and locating human faces within digital images or videos. It is a fundamental technology that underpins many applications such as face recognition, face tracking, and facial analysis.
( Image credit: [insightface](https://github.com/deepinsight/insightface) )",computer-vision
16491e3f-4ba0-4f31-a6b5-70fc390064eb,infrared-and-visible-image-fusion,Infrared And Visible Image Fusion,Image fusion with paired infrared and visible images,computer-vision
19f60037-8163-4ad4-aa46-49a6fcac65ea,3d-point-cloud-linear-classification,3D Point Cloud Linear Classification,Training a linear classifier (e.g. an SVM) on the embeddings/representations of 3D point clouds. The embeddings/representations are usually trained in an unsupervised manner.,computer-vision
7958957f-34b0-4948-b724-c5b9ffc22551,3d-object-detection,3D Object Detection,"**3D Object Detection** is a task in computer vision where the goal is to identify and locate objects in a 3D environment based on their shape, location, and orientation. It involves detecting the presence of objects and determining their location in the 3D space in real-time. This task is crucial for applications such as autonomous vehicles, robotics, and augmented reality.
( Image credit: [AVOD](https://github.com/kujason/avod) )",computer-vision
fbeab44c-597e-4b63-a5b3-3ee8872cdd63,semantic-segmentation,Semantic Segmentation,"**Semantic Segmentation** is a computer vision task in which the goal is to categorize each pixel in an image into a class or object, producing a dense pixel-wise segmentation map of the image. Some example benchmarks for this task are Cityscapes, PASCAL VOC and ADE20K. Models are usually evaluated with the Mean Intersection-Over-Union (Mean IoU) and Pixel Accuracy metrics.
( Image credit: [CSAILVision](https://github.com/CSAILVision/semantic-segmentation-pytorch) )",computer-vision
6f1b1909-2a4e-4853-a08b-404c8734f908,composite-action-recognition,Composite action recognition,,computer-vision
6debcaac-cbd2-466d-83f8-23526f23ea38,histopathological-segmentation,Histopathological Segmentation,,computer-vision
576bcef7-c01a-48d1-be8a-6d77ebfcd236,plan2scene,Plan2Scene,Converting floorplans + RGB photos to textured 3D mesh models of houses.,computer-vision
aac916eb-47b0-4ac5-b7c1-7297b4649a86,offline-handwritten-chinese-character,Offline Handwritten Chinese Character Recognition,Handwritten Chinese character recognition is the task of detecting and interpreting the components of Chinese characters (i.e. radicals and two-dimensional structures).,computer-vision
509d6acc-9ccf-4d0d-839c-d1d200061294,unet-quantization,UNET Quantization,,computer-vision
49ef38c6-f309-4dd2-bb49-de813d796ced,de-aliasing,De-aliasing,De-aliasing is the problem of recovering the original high-frequency information that has been aliased during the acquisition of an image.,computer-vision
5dc0a58b-b5c9-4f11-9461-454173c4ca59,video-recognition,Video Recognition,"**Video Recognition** is the process of obtaining, processing, and analysing data received from a visual source, specifically video.",computer-vision
8e9dfb6d-b693-4d2f-9b56-0db0c6d40341,3d-plane-detection,3D Plane Detection,Image: [Liu et al](https://arxiv.org/pdf/1812.04072v2.pdf),computer-vision
d228daa5-780e-4a17-ab68-15181da2f6d2,cube-engraving-classification,Cube Engraving Classification,,computer-vision
977981a8-16a1-4906-9922-01f245fd6918,supervised-video-summarization,Supervised Video Summarization,"**Supervised video summarization** methods rely on datasets with human-labeled ground-truth annotations (either in the form of video summaries, as in the case of the [SumMe](https://paperswithcode.com/dataset/summe) dataset, or in the form of frame-level importance scores, as in the case of the [TVSum](https://paperswithcode.com/dataset/tvsum-1) dataset), based on which they try to discover the underlying criterion for video frame/fragment selection and video summarization.
Source: [Video Summarization Using Deep Neural Networks: A Survey](https://arxiv.org/abs/2101.06072)",computer-vision
f1ed5238-907a-4d78-b3eb-5a2303ce5207,multi-view-subspace-clustering,Multi-view Subspace Clustering,,computer-vision
dffe50c2-10a0-4ad6-932e-e47fed86885a,self-supervised-anomaly-detection,Self-Supervised Anomaly Detection,Self-Supervision towards anomaly detection,computer-vision
bfa91636-7840-4e87-a206-7161e24350e5,rgb-d-salient-object-detection,RGB-D Salient Object Detection,"RGB-D Salient object detection (SOD) aims at distinguishing the most visually distinctive objects or regions in a scene from the given RGB and Depth data. It has a wide range of applications, including video/image segmentation, object recognition, visual tracking, foreground maps evaluation, image retrieval, content-aware image editing, information discovery, photo synthesis, and weakly supervised semantic segmentation. Here, depth information plays an important complementary role in finding salient objects. Online benchmark: http://dpfan.net/d3netbenchmark.
( Image credit: [Rethinking RGB-D Salient Object Detection: Models, Data Sets, and Large-Scale Benchmarks, TNNLS20](https://ieeexplore.ieee.org/abstract/document/9107477) )",computer-vision
f3ed2c74-ca44-4f90-b0e5-f9b9cc7c0d93,synthetic-image-detection,Synthetic Image Detection,Identify whether an image is real or has been generated/manipulated by a generative model (GAN or diffusion).,computer-vision
caa247f7-8776-42ab-a64b-f48a21baca3c,sketch-to-image-translation,Sketch-to-Image Translation,,computer-vision
cd22009d-7f46-46e3-97e0-0c792dbdf1fa,vocabulary-free-image-classification,Vocabulary-free Image Classification,"Recent advances in large vision-language models have revolutionized the image classification paradigm. Despite showing impressive zero-shot capabilities, a pre-defined set of categories, a.k.a. the vocabulary, is assumed at test time for composing the textual prompts. However, such an assumption can be impractical when the semantic context is unknown and evolving. Vocabulary-free Image Classification (VIC) aims to assign to an input image a class that resides in an unconstrained language-induced semantic space, without the prerequisite of a known vocabulary.",computer-vision
16003131-f745-41cc-87e5-f826ddc76717,video-generation,Video Generation,"( Various Video Generation Tasks.
Gif credit: [MaGViT](https://paperswithcode.com/paper/magvit-masked-generative-video-transformer) )",computer-vision
e0808f5d-4a1b-4e24-bc78-a59ff418c040,one-shot-instance-segmentation,One-Shot Instance Segmentation,"( Image credit: [Siamese Mask R-CNN](https://github.com/bethgelab/siamese-mask-rcnn) )",computer-vision
92e6faf1-0a3e-4a66-8fd9-03841a3f8ee3,explainable-artificial-intelligence,Explainable artificial intelligence,"XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the ""black box"" in machine learning, where even its designers cannot explain why an AI arrived at a specific decision. XAI may be an implementation of the social right to explanation. XAI is relevant even if there is no legal right or regulatory requirement—for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, the aim of XAI is to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.",computer-vision
697b6894-fa2a-4e70-8bb1-018f19afa10e,one-shot-3d-action-recognition,One-Shot 3D Action Recognition,,computer-vision
c73e205a-ea7c-4e32-a914-2a2876dfc867,jpeg-forgery-localization,Jpeg Forgery Localization,,computer-vision
f88a8052-d633-484b-8197-63325ee7d6b3,image-deblocking,Image Deblocking,,computer-vision
70c809a4-0be4-4c73-b6c8-c576ddffdb03,key-frame-based-video-super-resolution-k-15,Key-Frame-based Video Super-Resolution (K = 15),"Key-Frame-based Video Super-Resolution is a sub-task of [Video Super-Resolution](https://paperswithcode.com/task/video-super-resolution), where, in addition to the low-resolution frames, high-resolution ground-truth frames for every Kth input frame are also provided as inputs to the model. For example, if `[LR-frame-1, LR-frame-2, LR-frame-3, ..., LR-frame-100]` is the sequence of low-resolution frames to be upscaled, the Key-Frame-based Video Super-Resolution (K = 15) model is also provided with the high-resolution frames `[HR-frame-1, HR-frame-16, ..., HR-frame-91]` . Key-frames are excluded when measuring the evaluation metrics.",computer-vision
941ed8a7-a2ff-40b6-9158-6607ed3663ae,amodal-instance-segmentation,Amodal Instance Segmentation,"Different from traditional segmentation which only focuses on visible regions, amodal instance segmentation also predicts the occluded parts of object instances.
Description Credit: [Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers, CVPR'21](https://openaccess.thecvf.com/content/CVPR2021/papers/Ke_Deep_Occlusion-Aware_Instance_Segmentation_With_Overlapping_BiLayers_CVPR_2021_paper.pdf)",computer-vision
151e1f27-6952-4f92-9dc6-dc163c4d3742,deepfake-detection,DeepFake Detection,"**DeepFake Detection** is the task of detecting fake videos or images that have been generated using deep learning techniques. Deepfakes are created by using machine learning algorithms to manipulate or replace parts of an original video or image, such as the face of a person. The goal of deepfake detection is to identify such manipulations and distinguish them from real videos or images.
Description source: [DeepFakes: a New Threat to Face Recognition? Assessment and Detection](https://arxiv.org/pdf/1812.08685.pdf)
Image source: [DeepFakes: a New Threat to Face Recognition? Assessment and Detection](https://paperswithcode.com/paper/deepfakes-a-new-threat-to-face-recognition)",computer-vision
4aab386e-c07b-4b8f-b26c-d3b59d8f4b9e,action-detection,Action Detection,"Action Detection aims to find both where and when an action occurs within a video clip and to classify what action is taking place. Results are typically given in the form of action tubelets, which are action bounding boxes linked across time in the video. This is related to temporal localization, which seeks to identify the start and end frame of an action, and to action recognition, which seeks only to classify which action is taking place and typically assumes a trimmed video.",computer-vision
318393f8-f656-4b23-8fbc-b8b926f485cb,human-object-interaction-detection,Human-Object Interaction Detection,"Human-Object Interaction (HOI) detection is a task of identifying ""a set of interactions"" in an image, which involves the i) localization of the subject (i.e., humans) and target (i.e., objects) of interaction, and ii) the classification of the interaction labels.",computer-vision
c686647b-3234-4e7a-8ec3-b492915ab2e1,lighting-estimation,Lighting Estimation,Lighting Estimation analyzes given images to provide detailed information about the lighting in a scene.,computer-vision
3d458398-30a0-4a0a-9835-7ea5ad9e7fc2,document-image-skew-estimation,Document Image Skew Estimation,,computer-vision
65b36aa3-dca7-4f7b-a56b-179fd642203a,road-damage-detection,Road Damage Detection,"Road damage detection is the task of detecting damage in roads.
( Image credit: [Road Damage Detection And Classification In Smartphone Captured Images Using Mask R-CNN](https://arxiv.org/pdf/1811.04535v1.pdf) )",computer-vision
f8986fe7-7911-4b7c-b25f-ba2cb0b18860,multiple-object-tracking,Multiple Object Tracking,"**Multiple Object Tracking** is the problem of automatically identifying multiple objects in a video and representing them as a set of trajectories with high accuracy.
Source: [SOT for MOT ](https://arxiv.org/abs/1712.01059)",computer-vision
2f2697e2-4515-4272-b5a0-3c474e8e284c,depth-and-camera-motion,Depth And Camera Motion,,computer-vision
7d1e5caa-28d6-4e52-92eb-0083a9fca96e,film-simulation,Film Simulation,Simulate the appearance of a film camera.,computer-vision
0e226eaa-6918-44a1-8705-6c73cb0689bf,subject-driven-video-generation,Subject-driven Video Generation,,computer-vision
61aa7cb2-8d92-4bcc-a937-076135c2972a,occluded-face-detection,Occluded Face Detection,,computer-vision
91931be3-b53f-4748-b4de-b83383adf78f,food-recognition,Food Recognition,,computer-vision
d20883b0-4180-47d3-8d0c-6a8520f1fc16,medical-image-denoising,Medical Image Denoising,Image credit: [Learning Medical Image Denoising with Deep Dynamic Residual Attention Network](https://paperswithcode.com/paper/learning-medical-image-denoising-with-deep),computer-vision
cb2b4824-451f-431e-a147-464cfa7bd06c,3d-semantic-scene-completion-from-a-single,3D Semantic Scene Completion from a single RGB image,This task relies on a single RGB image to infer the dense 3D voxelized semantic scene.,computer-vision
6d7b4978-2512-4d2f-a757-39b781033f93,unrolling,Rolling Shutter Correction,Rolling Shutter Correction,computer-vision
8b7cee71-6791-4228-9aaf-fa295d7b6b59,learning-with-coarse-labels,Learning with coarse labels,"Learning fine-grained representations from a coarsely-labelled dataset, which can significantly reduce the labelling cost. As a simple example, for the task of differentiating between pets, we need a knowledgeable cat lover to distinguish between ‘British Shorthair’ and ‘Siamese’, but even a child annotator may help to discriminate between ‘cat’ and ‘non-cat’.",computer-vision
ca3428a3-1887-4996-95a4-5d4621815685,person-retrieval,Person Retrieval,,computer-vision
d6b9649d-07e2-4f50-bc5d-8730872965b4,open-vocabulary-object-detection,Open Vocabulary Object Detection,"Open-vocabulary detection (OVD) aims to generalize beyond the limited number of base classes labeled during the training phase. The goal is to detect novel classes defined by an unbounded (open) vocabulary at inference.",computer-vision
b46e05c4-f401-41a0-a51f-e7549a33122f,video-emotion-detection,Video Emotion Detection,,computer-vision
7288dc51-b8d4-4e37-a36b-2d77c158e16f,surface-normals-estimation-from-point-clouds,Surface Normals Estimation from Point Clouds,Parent task: 3D Point Cloud Analysis,computer-vision
1bf73733-56b5-4042-8790-9da72b384325,video-captioning,Video Captioning,"**Video Captioning** is the task of automatically captioning a video by understanding the actions and events in it, which can aid in the efficient retrieval of the video through text.
Source: [NITS-VC System for VATEX Video Captioning Challenge 2020 ](https://arxiv.org/abs/2006.04058)",computer-vision
5806789d-d072-4b3d-8dae-cb12364ae3b6,action-recognition-in-still-images,Action Recognition In Still Images,,computer-vision
a0660026-be42-4f46-969d-c95a6959950e,video-compression,Video Compression,"**Video Compression** is a process of reducing the size of an image or video file by exploiting spatial and temporal redundancies within an image or video frame and across multiple video frames. The ultimate goal of a successful Video Compression system is to reduce data volume while retaining the perceptual quality of the decompressed data.
Source: [Adversarial Video Compression Guided by Soft Edge Detection ](https://arxiv.org/abs/1811.10673)",computer-vision
f8660d25-c1ed-41ea-b31e-ef8e705531b4,real-time-instance-segmentation,Real-time Instance Segmentation,"Similar to its parent task, instance segmentation, but with the goal of achieving real-time capabilities under a defined setting.
Image Credit: [SipMask: Spatial Information Preservation for Fast Image and Video Instance Segmentation](https://arxiv.org/pdf/2007.14772v1.pdf)",computer-vision
115843cb-1ee2-4ba0-9b14-fed4f4777522,few-shot-video-object-detection,Few-Shot Video Object Detection,"Few-Shot Video Object Detection (FSVOD): given only a few support images of the target object in an unseen class, detect all the objects belonging to the same class in a given query video.",computer-vision
40d5bc40-b998-4962-80cf-f55a21486fe0,material-recognition,Material Recognition,,computer-vision
cf6b2e0b-516d-472c-ac9d-586f657f6666,small-data,Small Data Image Classification,Supervised image classification with tens to hundreds of labeled training examples.,computer-vision
41f16c63-6c18-4e4b-aabd-e72242ec2c3b,3d-shape-recognition,3D Shape Recognition,Image: [Wei et al](https://arxiv.org/pdf/1908.10098v1.pdf),computer-vision
431efc37-3e79-4299-ac6a-9f3b9b12393e,video-narrative-grounding,Video Narrative Grounding,"**Video Narrative Grounding** is the task of linking video narratives to specific video segments. The input is a video with a text description (the narrative) and the positions of certain nouns marked. For each marked noun, the method must output a segmentation mask for the object it refers to, in each video frame.
Source: [Connecting Vision and Language with Video Localized Narratives](/paper/connecting-vision-and-language-with-video)",computer-vision
d43cbfec-05c8-40b3-8d03-c2b0a53cf999,hand-joint-reconstruction,Hand Joint Reconstruction,,computer-vision
6e3d6ed6-472a-4a6c-94b1-e5a994b9e3ef,action-anticipation,Action Anticipation,"Next action anticipation is defined as observing 1, ..., T frames and predicting the action that happens after a gap of T_a seconds. It is important to note that a new action, not seen in the observed frames, starts after T_a seconds. Here T_a = 1 second.",computer-vision
dbee4aa9-39d6-44c5-b290-c5d1e763b33c,text-to-image,Text-To-Image,,computer-vision
536b0bcb-7cec-40c1-8375-15b449ecc682,depth-estimation,Depth Estimation,"**Depth Estimation** is the task of measuring the distance of each pixel relative to the camera. Depth is extracted from either monocular (single) or stereo (multiple views of a scene) images. Traditional methods use multi-view geometry to find the relationship between the images. Newer methods can directly estimate depth by minimizing the regression loss, or by learning to generate a novel view from a sequence. The most popular benchmarks are KITTI and NYUv2. Models are typically evaluated according to an RMS metric.
Source: [DIODE: A Dense Indoor and Outdoor DEpth Dataset ](https://arxiv.org/abs/1908.00463)",computer-vision
1c27100a-f08d-4e1d-b72f-87801cd92874,motion-synthesis,Motion Synthesis,"Image source: [Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels](https://paperswithcode.com/paper/multi-view-motion-synthesis-via-applying)",computer-vision
81e5719b-d08c-4e1f-899b-11b389e19ceb,visual-question-answering-1,Visual Question Answering,,computer-vision
8819c3bb-2c0c-4de6-8813-a8fa4b1ca801,intrinsic-image-decomposition,Intrinsic Image Decomposition,"**Intrinsic Image Decomposition** is the process of separating an image into its formation components such as reflectance (albedo) and shading (illumination). Reflectance is the color of the object, invariant to camera viewpoint and illumination conditions, whereas shading, dependent on camera viewpoint and object geometry, consists of different illumination effects, such as shadows, shading and inter-reflections. Using intrinsic images, instead of the original images, can be beneficial for many computer vision algorithms. For instance, for shape-from-shading algorithms, the shading images contain important visual cues to recover geometry, while for segmentation and detection algorithms, reflectance images can be beneficial as they are independent of confounding illumination effects. Furthermore, intrinsic images are used in a wide range of computational photography applications, such as material recoloring, relighting, retexturing and stylization.
Source: [CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition ](https://arxiv.org/abs/1712.01056)",computer-vision
0e236c19-0df9-4e07-82e9-747b7a443f04,neural-rendering,Neural Rendering,"Given a representation of a 3D scene of some kind (point cloud, mesh, voxels, etc.), the task is to create an algorithm that can produce photorealistic renderings of this scene from an arbitrary viewpoint. Sometimes, the task is accompanied by image/scene appearance manipulation.",computer-vision
9223cff5-8be3-4856-aba0-f6d6eb1be750,segmenting-flooded-buildings,Flooded Building Segmentation,,computer-vision
e4abaf5b-0a2c-4650-9454-9f724e4b90a9,text-to-face-generation,Text-to-Face Generation,,computer-vision
25bce3c5-b964-4443-a3b7-808f1d24f949,one-shot-visual-object-segmentation,One-shot visual object segmentation,,computer-vision
43e0b0ba-b58c-4a26-88d6-39e14a691e79,weakly-supervised-segmentation,Weakly supervised segmentation,,computer-vision
c02507ab-4f4d-488f-816b-2715113ac35b,real-time-visual-tracking,Real-Time Visual Tracking,,computer-vision
3b479ceb-43c1-4bb9-8354-632af77cb7dc,plant-phenotyping,Plant Phenotyping,,computer-vision
90ed73c7-5ca3-4979-b878-612c0745a006,3d-human-dynamics,3D Human Dynamics,Image: [Zhang et al](https://openaccess.thecvf.com/content_ICCV_2019/papers/Zhang_Predicting_3D_Human_Dynamics_From_Video_ICCV_2019_paper.pdf),computer-vision
4390b430-4aa6-4707-a6c8-abc7f66a5844,unsupervised-anomaly-detection-with-specified-5,Unsupervised Anomaly Detection with Specified Settings -- 1% anomaly,,computer-vision
671968c1-9ec0-4a5d-9527-b79defb926da,3d-object-recognition,3D Object Recognition,"3D object recognition is the task of recognising objects from 3D data.
Note that there are related tasks you can look at, such as [3D Object Detection](https://paperswithcode.com/task/3d-object-detection) which have more leaderboards.
(Image credit: [Look Further to Recognize Better](https://arxiv.org/pdf/1907.12924v1.pdf))",computer-vision
2f0ba034-2a77-48ab-a438-5312328d1c5e,object-proposal-generation,Object Proposal Generation,"Object proposal generation is a preprocessing technique that has been widely used in current object detection pipelines to guide the search of objects and avoid exhaustive sliding window search across images.
( Image credit: [Multiscale Combinatorial Grouping
for Image Segmentation and Object Proposal Generation](https://arxiv.org/pdf/1503.00848v4.pdf) )",computer-vision
28a325be-4af1-4b38-9318-bbe08cf91db9,3d-shape-generation,3D Shape Generation,Image: [Mo et al](https://arxiv.org/pdf/1908.00575v1.pdf),computer-vision
b654ac3b-9cba-408a-a803-8c1b53ec50d2,reflection-removal,Reflection Removal,,computer-vision
b2f13c51-0d29-4b75-ae6e-eddff0fe04ad,multi-animal-tracking-with-identification,Multi-Animal Tracking with identification,Tracking all animals in a video while maintaining their identities after touches or occlusions.,computer-vision
29547f22-2164-4776-bcfa-9558532e5cbc,multi-person-pose-forecasting,Multi-Person Pose forecasting,,computer-vision
4bec1a88-b039-434d-9205-81df57d9715b,2d-cyclist-detection,2D Cyclist Detection,,computer-vision
95c50a1a-d396-4b66-8dd0-c3e4d06bdb42,real-time-semantic-segmentation,Real-Time Semantic Segmentation,"Semantic Segmentation is a computer vision task that involves assigning a semantic label to each pixel in an image. In **Real-Time Semantic Segmentation**, the goal is to perform this labeling quickly and accurately in real-time, allowing for the segmentation results to be used for tasks such as object recognition, scene understanding, and autonomous navigation.
( Image credit: [TorchSeg](https://github.com/ycszen/TorchSeg) )",computer-vision
b761adb4-9a41-40ed-a0da-002b897b7633,generalized-referring-expression-segmentation,Generalized Referring Expression Segmentation,"Generalized Referring Expression Segmentation (GRES), introduced by [Liu et al in CVPR 2023](https://henghuiding.github.io/GRES/), allows expressions indicating any number of target objects. GRES takes an image and a referring expression as input, and requires mask prediction of the target object(s).",computer-vision
2b8e719e-ad46-4688-ace4-950cb9a1e841,disparity-estimation,Disparity Estimation,Disparity Estimation is the task of finding the pixels in multiscopic views that correspond to the same 3D point in the scene.,computer-vision
b179a6ae-44ed-4bb7-a2a1-8f1710c2ef85,disjoint-10-1,Disjoint 10-1,,computer-vision
3d2072cd-3f94-4177-9dd5-8cf79e7c3626,mental-workload-estimation,Mental Workload Estimation,,computer-vision
7d2107a4-84dd-444f-adab-8d9e25f77892,repetitive-action-counting,Repetitive Action Counting,Repetitive action counting aims to count the number of repetitive actions in a video.,computer-vision
7ff03d75-4c70-49f1-90c3-a6371f08600a,person-recognition,Person Recognition,,computer-vision
35dff208-f1d5-48ce-ba91-f49703e88537,video-emotion-recognition,Video Emotion Recognition,,computer-vision
929265e3-b263-43d6-9b45-252f3ac4bb1a,hybrid-positioning,Hybrid Positioning,Hybrid Positioning using CV and dead reckoning,computer-vision
5af88ab2-3e9b-4fc1-bdf7-5107d83ce215,indoor-localization,Indoor Localization,Indoor localization is a fundamental problem in indoor location-based applications.,computer-vision
cb70811d-3af6-4388-aaa6-c924d3463629,face-sketch-synthesis,Face Sketch Synthesis,"Face sketch synthesis is the task of generating a sketch from an input face photo.
( Image credit: [High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks](https://arxiv.org/pdf/1710.10182v2.pdf) )",computer-vision
e3322255-0380-4b7d-8ec1-4110b954bff3,3d-scene-reconstruction,3D Scene Reconstruction,Creating a 3D scene using either conventional SfM pipelines or the latest deep learning approaches.,computer-vision
f770494a-1f8b-4156-b570-ce91224d4ace,facial-emotion-recognition,Facial Emotion Recognition,Emotion Recognition from facial images,computer-vision
371828ba-efc1-4200-a441-b1a5b8edfa05,lipreading,Lipreading,"Lipreading is the process of extracting speech by watching the lip movements of a speaker in the absence of sound. Humans lipread all the time without even noticing. It is a big part of communication, albeit not as dominant as audio. It is a very helpful skill to learn, especially for those who are hard of hearing.
Deep Lipreading is the process of extracting speech from a video of a silent talking face using deep neural networks. It is also known by a few other names: Visual Speech Recognition (VSR), Machine Lipreading, Automatic Lipreading, etc.
The primary methodology involves two stages: i) extracting visual and temporal features from a sequence of image frames of a silent talking video; ii) processing the sequence of features into units of speech, e.g. characters, words, phrases, etc. Several implementations of this methodology exist, either carried out in two separate stages or trained end-to-end in one go.",computer-vision
fa73356a-ccce-4b15-a336-8b2b4405b4cb,multi-exposure-image-fusion,Multi-Exposure Image Fusion,,computer-vision
11953a49-a27c-4aae-8d6e-59d2b333f46f,3d-point-cloud-matching,3D Point Cloud Matching,Image: [Gojic et al](https://openaccess.thecvf.com/content_CVPR_2019/papers/Gojcic_The_Perfect_Match_3D_Point_Cloud_Matching_With_Smoothed_Densities_CVPR_2019_paper.pdf),computer-vision
80b88f0d-5733-4a48-b0d0-b0992cb7bd20,camera-localization,Camera Localization,,computer-vision
670065d7-74af-4853-83e6-ce6b7e7f9c55,hd-semantic-map-learning,HD semantic map learning,"The goal of the task is to generate map elements in a vectorized form using data from onboard sensors, e.g., RGB cameras and/or LiDARs. These map elements include, but are not limited to, road boundaries, i.e., the boundaries that separate roads from sidewalks.",computer-vision
3e016f93-9b74-4116-9562-13d28bfdfc1f,3d-volumetric-reconstruction,3D Volumetric Reconstruction,Image: [Grinvald et al](https://arxiv.org/pdf/1903.00268.pdf),computer-vision
a47434a0-2254-4789-988f-6757093163ca,camera-auto-calibration,Camera Auto-Calibration,,computer-vision
da5870d1-ba1d-4434-a1d4-8bd87e83f3fe,face-swapping,Face Swapping,"Face swapping refers to the task of swapping faces between images or in a video, while maintaining the rest of the body and environment context.
( Image credit: [Swapped Face Detection using Deep Learning and Subjective Assessment](https://arxiv.org/pdf/1909.04217v1.pdf) )",computer-vision
2f6ba3bc-9229-44cc-8bc3-1b3ad505a183,change-detection,Change Detection,"**Change Detection** is a computer vision task that involves detecting changes in an image or video sequence over time. The goal is to identify areas in the image or video that have undergone changes, such as appearance changes, object disappearance or appearance, or even changes in the scene's background.
Image credit: [""A TRANSFORMER-BASED SIAMESE NETWORK FOR CHANGE DETECTION""](https://arxiv.org/pdf/2201.01293v1.pdf)",computer-vision
982a8571-e366-4a8a-9be6-0a0edfdbb921,gesture-recognition,Gesture Recognition,"**Gesture Recognition** is an active field of research with applications such as automatic recognition of sign language, interaction of humans and robots or for new ways of controlling video games.
Source: [Gesture Recognition in RGB Videos Using Human Body Keypoints and Dynamic Time Warping ](https://arxiv.org/abs/1906.12171)",computer-vision
52679226-17f8-4d16-a521-2222b9f30914,aesthetics-quality-assessment,Aesthetics Quality Assessment,Automatic assessment of aesthetic-related subjective ratings.,computer-vision
587de265-f142-4d8a-8e78-79dc8d9e33ac,anomaly-detection-in-surveillance-videos,Anomaly Detection In Surveillance Videos,,computer-vision
af81f585-b09a-4970-89c6-7fed22d6b454,metric-learning,Metric Learning,"The goal of **Metric Learning** is to learn a representation function that maps objects into an embedded space. The distance in the embedded space should preserve the objects’ similarity — similar objects get close and dissimilar objects get far away. Various loss functions have been developed for Metric Learning. For example, the **contrastive loss** guides the objects from the same class to be mapped to the same point and those from different classes to be mapped to different points whose distances are larger than a margin. **Triplet loss** is also popular, which requires the distance between the anchor sample and the positive sample to be smaller than the distance between the anchor sample and the negative sample.
Source: [Road Network Metric Learning for Estimated Time of Arrival ](https://arxiv.org/abs/2006.13477)",computer-vision
1352a180-75b5-4af6-bde3-9f939d0197aa,reverse-style-transfer,Reverse Style Transfer,,computer-vision
5644b076-06a0-4b96-b9c4-b6f1616756ef,handwriting-recognition,Handwriting Recognition,Image source: [Handwriting Recognition of Historical Documents with few labeled data](https://arxiv.org/pdf/1811.07768v1.pdf),computer-vision
fdf915be-f3fd-4815-9797-29676b4de9f4,dial-meter-reading,Dial Meter Reading,,computer-vision
bafcd448-33ff-4f3b-8346-7b9cd2e2c4be,cross-domain-few-shot,Cross-Domain Few-Shot,,computer-vision
19c6eecb-af8a-4361-a327-c2c3cff9da1b,hurricane-forecasting,Hurricane Forecasting,"Tropical Cyclone Forecasting using Computer Vision, Deep Learning, and Time-Series methods",computer-vision
554c338e-ffc8-4cae-840b-f19ddb7afa0c,supervised-dimensionality-reduction,Supervised dimensionality reduction,,computer-vision
4f4d5ac8-8217-475a-b358-9b00628a5bb5,saliency-detection,Saliency Detection,"**Saliency Detection** is a preprocessing step in computer vision which aims at finding salient objects in an image.
Source: [An Unsupervised Game-Theoretic Approach to Saliency Detection ](https://arxiv.org/abs/1708.02476)",computer-vision
bb8ed6ba-f01a-4fff-8471-df080d6e488a,semantic-image-matting,Semantic Image Matting,,computer-vision
58af08d4-6ae4-4ba0-bd59-3a363c7b26be,license-plate-recognition,License Plate Recognition,,computer-vision
533ccd65-953b-4f01-8243-e98508a4d89c,disjoint-15-1,Disjoint 15-1,,computer-vision
4c3d65b5-2353-4dcb-9b7f-9a60fd557cb5,dense-video-captioning,Dense Video Captioning,"Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. The task of dense video captioning involves both detecting and describing events in a video.",computer-vision
b9f709b3-a0a2-4889-91a5-66b8d0de94d0,cross-domain-few-shot-learning,cross-domain few-shot learning,"Its essence is transfer learning: the model is trained in a source domain and then transferred to a target domain, under the conditions that (1) the categories in the target domain have never appeared in the source domain, (2) the data distribution of the target domain is inconsistent with that of the source domain, and (3) each class in the target domain has very few labels.",computer-vision
7dd9830d-8d60-4ff5-aaf6-04956795253a,age-and-gender-classification,Age And Gender Classification,"Age and gender classification is the dual task of identifying the age and gender of a person from an image or video.
( Image credit: [Multi-Expert Gender Classification on Age Group by Integrating Deep Neural Networks](https://arxiv.org/pdf/1809.01990v2.pdf) )",computer-vision
27e002b3-9df4-4b8c-9bcd-5ee82ea76e2a,removing-text-from-natural-images,Image Text Removal,,computer-vision
07aa3c9a-9cdf-4eb5-8fe0-5db24d4cdef8,drone-navigation,Drone navigation,"(Satellite -> Drone) Given one satellite-view image, the drone aims to find the most relevant place (drone-view images) that it has passed by. Based on its flight history, the drone can then be navigated back to the target place.",computer-vision
bb5e070a-9ed7-4802-bd76-2b252a014cda,handwritten-document-recognition,Handwritten Document Recognition,,computer-vision
9837abc8-bbe7-4212-b292-bd4b44583920,disguised-face-verification,Disguised Face Verification,,computer-vision
7f4d79dd-2edc-4c0d-abdc-0d12b07ad397,layout-to-image-generation,Layout-to-Image Generation,"Layout-to-image generation is the task of generating a scene based on a given layout. The layout describes the locations of the objects to be included in the output image.
In this section, you can find state-of-the-art leaderboards for Layout-to-image generation.",computer-vision
0e08079e-cef2-4ce3-ab39-e0b142ceb9d2,transform-a-video-into-a-comics,Transform A Video Into A Comics,,computer-vision
3fcaa48c-a6ff-451e-9ab8-501a2f247486,image-stitching,Image Stitching,"**Image Stitching** is a process of composing multiple images with narrow but overlapping fields of view to create a larger image with a wider field of view.
Source: [Single-Perspective Warps in Natural Image Stitching ](https://arxiv.org/abs/1802.04645)
( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision
cfb77ba8-4548-448b-b910-278bb14e4535,markerless-motion-capture,Markerless Motion Capture,,computer-vision
23dd3e5a-6df3-46c5-b671-c266ea210e0f,visual-crowd-analysis,Visual Crowd Analysis,,computer-vision
6ed19c70-9eb0-4bad-b37b-ab034af94605,highlight-detection,Highlight Detection,,computer-vision
112e1c2f-7228-4771-b474-2f9d392bd077,saliency-ranking,Saliency Ranking,,computer-vision
48dee1de-fccf-48ec-b180-dfff70e13fd4,point-set-upsampling,Point Set Upsampling,,computer-vision
224c56e6-05ab-433f-9ed8-07ec10b55d94,scene-generation,Scene Generation,,computer-vision
08f1bc79-ab92-4b1d-a66f-2e6623b294e1,optical-flow-estimation,Optical Flow Estimation,"**Optical Flow Estimation** is a computer vision task that involves computing the motion of objects in an image or a video sequence. The goal of optical flow estimation is to determine the movement of pixels or features in the image, which can be used for various applications such as object tracking, motion analysis, and video compression.
Approaches for optical flow estimation include correlation-based, block-matching, feature tracking, energy-based, and more recently gradient-based.
Further readings:
- [Optical Flow Estimation](https://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf)
- [Performance of Optical Flow Techniques](https://www.cs.toronto.edu/~fleet/research/Papers/ijcv-94.pdf)
Definition source: [Devon: Deformable Volume Network for Learning Optical Flow ](https://arxiv.org/abs/1802.07351)
Image credit: [Optical Flow Estimation](https://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf)",computer-vision
ca62e6b0-2b1f-4d7a-a2ff-8009da14e5cb,image-clustering,Image Clustering,"Models that partition the dataset into semantically meaningful clusters without having access to the ground truth labels.
Image credit: ImageNet clustering results of [SCAN: Learning to Classify Images without Labels (ECCV 2020)](https://arxiv.org/abs/2005.12320) ",computer-vision
fe685443-2489-4f81-93de-1e5ff04418d3,image-classification,Image Classification,"**Image Classification** is a fundamental task that attempts to comprehend an entire image as a whole. The goal is to classify the image by assigning it to a specific label. Typically, Image Classification refers to images in which only one object appears and is analyzed. In contrast, object detection involves both classification and localization tasks, and is used to analyze more realistic cases in which multiple objects may exist in an image.
Source: [Metamorphic Testing for Object Detection Systems ](https://arxiv.org/abs/1912.12162)",computer-vision
07f2cdae-e05d-4083-ac86-c488cfa4c7ec,video-to-video-synthesis,Video-to-Video Synthesis,,computer-vision
8be3ef71-a510-4150-8324-98e64131703a,referring-image-matting,Referring Image Matting,"Extracting the meticulous alpha matte of the specific object from the image that can best match the given natural language description, e.g., a keyword or an expression.",computer-vision
5a8ccd54-500e-4e6b-9fb3-ebedabb5aaa3,moment-retrieval,Moment Retrieval,"Moment retrieval can be defined as the task of ""localizing moments in a video given a user query"".
Description from: [QVHIGHLIGHTS: Detecting Moments and Highlights in Videos via Natural Language Queries](https://arxiv.org/pdf/2107.09609v1.pdf)
Image credit: [QVHIGHLIGHTS: Detecting Moments and Highlights in Videos via Natural Language Queries](https://arxiv.org/pdf/2107.09609v1.pdf)",computer-vision
2b6ee65d-2bd8-44b2-a9b7-00a91aaa8751,3d-depth-estimation,3D Depth Estimation,Image: [monodepth2](https://github.com/nianticlabs/monodepth2),computer-vision
403935c6-6d31-4d43-9cd2-53f98d7a81ca,reference-based-video-super-resolution,Reference-based Video Super-Resolution,"Reference-based video super-resolution (RefVSR) is an expansion of reference-based super-resolution (RefSR) to video super-resolution (VSR). RefVSR inherits the objectives of both the RefSR and VSR tasks and utilizes a Ref video for reconstructing an HR video from an LR video.",computer-vision
96aa9d1a-996c-4b33-a50a-2f0f37168990,multi-class-one-shot-image-synthesis,Multi class one-shot image synthesis,"The goal of multi-class one-shot image synthesis is to learn a generative model that can generate samples with visual attributes from as few as one image per class, given at least 2 related classes.",computer-vision
d64ebae2-f912-4c88-aebe-b20bc5edca39,image-enhancement,Image Enhancement,"**Image Enhancement** is the process of improving the interpretability or perception of information in images for human viewers and providing ‘better’ input for other automated image processing techniques. The principal objective of Image Enhancement is to modify attributes of an image to make it more suitable for a given task and a specific observer.
Source: [A Comprehensive Review of Image Enhancement Techniques ](https://arxiv.org/abs/1003.4053)",computer-vision
b71ab9a3-8b8e-4aa3-a11b-65411775f5c7,video-text-retrieval,Video-Text Retrieval,"Video-Text retrieval requires understanding of both video and language together, which makes it different from the video retrieval task.",computer-vision
07ef80d7-5844-4a42-93cb-49ec280f46b0,video-description,Video Description,"The goal of automatic **Video Description** is to tell a story about events happening in a video. While early Video Description methods produced captions for short clips that were manually segmented to contain a single event of interest, more recently dense video captioning has been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage.
Source: [Joint Event Detection and Description in Continuous Video Streams ](https://arxiv.org/abs/1802.10250)",computer-vision
79a662ab-3ccc-4d68-8461-935989504909,lake-ice-detection,Lake Ice Monitoring,,computer-vision
74feef66-1374-4393-8dab-7d3231225a17,partial-video-copy-detection,Partial Video Copy Detection,The goal of PVCD is to identify and locate segments of a long test video that have been copied (and possibly transformed) from videos in a reference dataset.,computer-vision
eabda833-6379-44a4-ad81-dcd4eba0b057,overlapped-15-5,Overlapped 15-5,,computer-vision
b7d9448f-3a9c-4d22-a447-f3ba054e36e4,lidar-absolute-pose-regression,lidar absolute pose regression,,computer-vision
97d5fc61-c2c3-4a68-b333-0e406e310b39,class-agnostic-object-detection,Class-agnostic Object Detection,Class-agnostic object detection aims to localize objects in images without specifying their categories.,computer-vision
00f60ec0-b4d8-4ab9-aeba-7a1ca9e4bca8,unbalanced-segmentation,Unbalanced Segmentation,,computer-vision
c7e805da-d22d-431e-82a7-7837614273b4,scene-classification,Scene Classification,"**Scene Classification** is a task in which scenes from photographs are categorically classified. Unlike object classification, which focuses on classifying prominent objects in the foreground, Scene Classification uses the layout of objects within the scene, in addition to the ambient context, for classification.
Source: [Scene classification with Convolutional Neural Networks ](http://cs231n.stanford.edu/reports/2017/pdfs/102.pdf)",computer-vision
fa6f4b90-831d-4d85-9512-d496dcb51235,one-shot-segmentation,One-Shot Segmentation,"( Image credit: [One-Shot Learning for Semantic
Segmentation](https://arxiv.org/pdf/1709.03410v1.pdf) )",computer-vision
5674b604-18b2-4e0f-9727-349ff59fb275,human-instance-segmentation,Human Instance Segmentation,"Instance segmentation is the task of detecting and delineating each distinct object of interest appearing in an image.
Image Credit: [Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers](https://arxiv.org/abs/2103.12340)",computer-vision
d2a7f989-9267-4a03-b46b-75ba05986574,wavelet-structure-similarity-loss,wavelet structure similarity loss,,computer-vision
ff9bf882-586b-4a1b-89e1-81eea657253c,generalized-zero-shot-learning-unseen,Generalized Zero-Shot Learning - Unseen,"The average of the normalized top-1 prediction scores of unseen classes in the generalized zero-shot learning setting, where the label of a test sample is predicted among all (seen + unseen) classes.",computer-vision
ce01e74c-4161-4b95-9a04-05c379d0c6dd,head-pose-estimation,Head Pose Estimation,"Estimating the head pose of a person is a crucial problem with a wide range of applications, such as aiding gaze estimation, modeling attention, fitting 3D models to video, and performing face alignment.
( Image credit: [FSA-Net: Learning Fine-Grained Structure Aggregation for Head Pose
Estimation from a Single Image](http://openaccess.thecvf.com/content_CVPR_2019/papers/Yang_FSA-Net_Learning_Fine-Grained_Structure_Aggregation_for_Head_Pose_Estimation_From_CVPR_2019_paper.pdf) )",computer-vision
8a233671-56c3-420b-808c-84b187b18f86,scene-aware-dialogue,Scene-Aware Dialogue,,computer-vision
53b86ccf-d514-4e4c-8286-cb6ec7c8c174,3d-object-super-resolution,3D Object Super-Resolution,"3D object super-resolution is the task of up-sampling 3D objects.
( Image credit: [Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation](https://github.com/EdwardSmith1884/Multi-View-Silhouette-and-Depth-Decomposition-for-High-Resolution-3D-Object-Representation) )",computer-vision
fe330451-7ba9-4820-a967-e1ee078338af,few-shot-point-cloud-classification,Few-Shot Point Cloud Classification,Few-Shot Learning on point cloud classification task,computer-vision
9dd0f423-271d-4372-a9d6-341749414fdd,referring-expression-segmentation,Referring Expression Segmentation,"The task aims at labeling the pixels of an image or video that represent an object instance referred by a linguistic expression. In particular, the referring expression (RE) must allow the identification of an individual object in a discourse or scene (the referent). REs unambiguously identify the target instance.",computer-vision
520fec0e-4224-46a0-b273-359391a03f46,landslide-segmentation,Landslide segmentation,,computer-vision
922b0236-4ba2-4116-927e-f8b276194fc6,pose-transfer,Pose Transfer,,computer-vision
c732d3e4-00a9-4c1c-8ae6-9f6f6935b7e5,color-image-compression-artifact-reduction,Color Image Compression Artifact Reduction,,computer-vision
38ab4156-5a38-4ff0-b2d6-34943db8d1a3,dynamic-region-segmentation,Dynamic Region Segmentation,,computer-vision
5d64abac-fa22-4cea-be5e-61653fba73d0,multilingual-text-to-image-generation,Multilingual Text-to-Image Generation,,computer-vision
8cbe6246-820d-4323-a0a9-bf6e6f398845,multi-object-discovery,Multi-object discovery,,computer-vision
8134df7a-909b-40f6-8ca5-27636e509575,symmetry-detection,Symmetry Detection,,computer-vision
4a17a8f8-c226-4d11-b3da-7214e6a2d10d,user-constrained-thumbnail-generation,User Constrained Thumbnail Generation,"Thumbnail generation is the task of generating image thumbnails from an input image.
( Image credit: [User Constrained Thumbnail Generation using Adaptive Convolutions](https://arxiv.org/pdf/1810.13054v3.pdf) )",computer-vision
40a16a7d-43db-4ea8-81c7-3108d957533c,metamerism,Metamerism,,computer-vision
92f3fb32-886b-4be8-b4ed-e8c893ba1a7f,action-assessment,Action Assessment,,computer-vision
31ab05f4-15e8-4e67-b01c-d63dd55b2d68,unsupervised-image-to-image-translation,Unsupervised Image-To-Image Translation,"Unsupervised image-to-image translation is the task of doing image-to-image translation without ground truth image-to-image pairings.
( Image credit: [Unpaired Image-to-Image Translation
using Cycle-Consistent Adversarial Networks](https://arxiv.org/pdf/1703.10593v6.pdf) )",computer-vision
03c4da62-def2-4a96-b440-b840ab319bf0,video-object-detection,Video Object Detection,"Video object detection is the task of detecting objects from a video as opposed to images.
( Image credit: [Learning Motion Priors for Efficient Video Object Detection](https://arxiv.org/pdf/1911.05253v1.pdf) )",computer-vision
e802bf3b-d393-4dfb-a405-7d2c0e586411,gaze-estimation,Gaze Estimation,"**Gaze Estimation** is the task of predicting where a person is looking given the person’s full face. The task has two directions: 3-D gaze vector and 2-D gaze position estimation. 3-D gaze vector estimation predicts the gaze vector, which is often used in automotive safety. 2-D gaze position estimation predicts the horizontal and vertical coordinates on a 2-D screen, which allows using the gaze point to control a cursor for human-machine interaction.
Source: [A Generalized and Robust Method Towards Practical Gaze Estimation on Smart Phone ](https://arxiv.org/abs/1910.07331)",computer-vision
ed5df291-bcef-432e-bbbf-8d36f508559e,action-classification,Action Classification,Image source: [The Kinetics Human Action Video Dataset](https://arxiv.org/pdf/1705.06950.pdf),computer-vision
e9d8955c-3234-47cb-b002-7a6feb2ac7d7,flare-removal,Flare Removal,"When a camera is pointed at a strong light source, the resulting photograph may contain lens flare artifacts. Flares appear in a wide variety of patterns (halos, streaks, color bleeding, haze, etc.) and this diversity in appearance makes flare removal challenging.",computer-vision
c31b15a0-0e02-4c33-b3d5-7bac1de7f306,motion-magnification,Motion Magnification,"Motion magnification is a technique that acts like a microscope for visual motion. It can amplify subtle motions in a video sequence, allowing for visualization of deformations that would otherwise be invisible. To achieve motion magnification, we need to accurately measure visual motions, and group the pixels to be modified.
There are different approaches to motion magnification, such as Lagrangian and Eulerian methods. Lagrangian methods track the trajectories of moving objects and exaggerate them, while Eulerian methods manipulate the motions at fixed positions. Eulerian methods can be further divided into linear and phase-based methods. Linear methods apply a temporal bandpass filter to boost the linear term of a Taylor series expansion of the displacement function, while phase-based methods use complex wavelet transforms to manipulate the phase of the signal.
Motion magnification has various applications, such as measuring the human pulse, visualizing the heat plume of candles, revealing the oscillations of a wine glass, and detecting structural defects.",computer-vision
3de104cc-f7d3-4422-95bb-f58f5eed6306,color-constancy,Color Constancy,"**Color Constancy** is the ability of the human vision system to perceive the colors of the objects in the scene largely invariant to the color of the light source. The task of computational Color Constancy is to estimate the scene illumination and then perform the chromatic adaptation in order to remove the influence of the illumination color on the colors of the objects in the scene.
Source: [CroP: Color Constancy Benchmark Dataset Generator ](https://arxiv.org/abs/1903.12581)",computer-vision
870f230e-6163-4c64-9796-2f4709a9714f,visual-relationship-detection,Visual Relationship Detection,"Visual relationship detection (VRD) is a recently developed computer vision task aiming to recognize relations or interactions between objects in an image. It is a further learning task after object recognition and is essential for fully understanding images, and even the visual world.",computer-vision
06a2ed12-1e36-41a7-8fcf-607c24e49bcb,unsupervised-facial-landmark-detection,Unsupervised Facial Landmark Detection,"Facial landmark detection in the unsupervised setting popularized by [1]. The evaluation occurs in two stages:
(1) Embeddings are first learned in an unsupervised manner (i.e. without labels);
(2) A simple regressor is trained to regress landmarks from the unsupervised embedding.
[1] Thewlis, James, Hakan Bilen, and Andrea Vedaldi. ""Unsupervised learning of object landmarks by factorized spatial embeddings."" Proceedings of the IEEE International Conference on Computer Vision. 2017.
( Image credit: [Unsupervised learning of object landmarks by factorized spatial embeddings](https://www.robots.ox.ac.uk/~vedaldi/assets/pubs/thewlis17unsupervised.pdf) )",computer-vision
b2f25755-a86e-4dfe-8587-c12276ed9372,generative-visual-question-answering,Generative Visual Question Answering,Generating answers in free form to questions posed about images.,computer-vision
239b0ef5-0fdb-4fd7-a02a-da254e8b7e86,unsupervised-face-recognition,Unsupervised face recognition,,computer-vision
eb245ac9-7252-4414-9c85-d09af7dfc675,text-line-extraction,Text-Line Extraction,,computer-vision
e19733ad-e8e3-46dd-b16c-d0f5291a0dd7,unsupervised-anomaly-detection-with-specified-4,Unsupervised Anomaly Detection with Specified Settings -- 20% anomaly,,computer-vision
fc569c5d-38be-44f7-bd1d-d67ef1bef76b,supervised-anomaly-detection,Supervised Anomaly Detection,"In the training set, the number of abnormal samples is limited and significantly smaller than the number of normal samples, producing data distributions that lead to a naturally imbalanced learning problem.",computer-vision
4edb1319-c7e7-4b9a-8c01-944303d6cc7d,personalized-segmentation,Personalized Segmentation,"Given a one-shot image with a reference mask, the models are required to segment the indicated target object in any other images.",computer-vision
d36d8a4e-7d9b-4e6d-a4af-7eed3d3e50a3,3d-human-pose-estimation,3D Human Pose Estimation,"**3D Human Pose Estimation** is a computer vision task that involves estimating the 3D positions and orientations of body joints and bones from 2D images or videos. The goal is to reconstruct the 3D pose of a person in real-time, which can be used in a variety of applications, such as virtual reality, human-computer interaction, and motion analysis.",computer-vision
a2a6b77f-95e1-44a2-96bf-9693927ad456,video-background-subtraction,Video Background Subtraction,,computer-vision
d07648ab-fef1-4d28-ac52-9dc94701c653,direct-transfer-person-re-identification,Direct Transfer Person Re-identification,,computer-vision
f1196635-41af-4ef0-bc1b-73cdd1ee2a7d,chat-based-image-retrieval,Chat-based Image Retrieval,"ChatIR: a Chat-based Image Retrieval system that engages in a conversation with the user to elicit information, in addition to an initial query, in order to clarify the user’s search intent.",computer-vision
054650e5-a67a-4d6f-bc89-b75b4ca4f0a8,3d-human-pose-and-shape-estimation,3D human pose and shape estimation,Estimate 3D human pose and shape (e.g. SMPL) from images,computer-vision
d123f6a9-f214-4419-88f9-eab2a95ff605,conditional-text-to-image-synthesis,Conditional Text-to-Image Synthesis,"Introducing extra conditions based on the text-to-image generation process, similar to the paradigm of ControlNet.",computer-vision
385d0bbb-8b3d-4e12-b660-35b29add8b4a,human-action-generation,Human action generation,"Yan et al. (2019) CSGN:
""When the dancer is stepping, jumping and spinning on the
stage, attentions of all audiences are attracted by the stream of the fluent and graceful movements. Building a model that is capable of dancing is as fascinating a task as appreciating the performance itself. In this paper, we aim to generate long-duration human actions represented as skeleton sequences, e.g. those that cover the entirety of a dance, with hundreds of moves and countless possible combinations.""
( Image credit: [Convolutional Sequence Generation for Skeleton-Based Action Synthesis](http://www.dahualin.org/publications/dhl19_csgn.pdf) )",computer-vision
c19df628-44fc-408b-84cd-ff0f6ef42e75,unity,Unity,,computer-vision
cc670beb-4a15-4a6e-a7fd-b610c443eb93,lip-sync,Unconstrained Lip-synchronization,"Given a video of an arbitrary person, and an arbitrary driving speech, the task is to generate a lip-synced video that matches the given speech.
This task requires the approach to not be constrained by identity, voice, or language.",computer-vision
e95480b2-6833-44c2-a3bc-5099f2f410bd,face-alignment,Face Alignment,"Face alignment is the task of identifying the geometric structure of faces in digital images, and attempting to obtain a canonical alignment of the face based on translation, scale, and rotation.
( Image credit: [3DDFA_V2](https://github.com/cleardusk/3DDFA_V2) )",computer-vision
27b6f5bd-42a5-45e5-8741-4fd7fffddc38,image-quality-assessment,Image Quality Assessment,,computer-vision
044e8b9d-6ed7-4eda-b97f-1a7b939118fb,semi-supervised-object-detection,Semi-Supervised Object Detection,Semi-supervised object detection uses both labeled data and unlabeled data for training. It not only reduces the annotation burden for training high-performance object detectors but also further improves the object detector by using a large number of unlabeled data.,computer-vision
86982fd2-15e5-4ed6-90ca-fde16ddca53b,medical-image-detection,medical image detection,,computer-vision
3eb73fe1-a382-4181-bb0c-6edad1774ea5,image-retargeting,Image Retargeting,,computer-vision
6cae6ff1-2844-4721-8ecb-2e748b22f9e0,visual-sentiment-prediction,Visual Sentiment Prediction,,computer-vision
f9deca92-a8ef-439d-9d75-b5441ad71cc1,object-recognition,Object Recognition,"Object recognition is a computer vision technique for detecting and classifying objects in images or videos. Since this is a combined task of object detection plus image classification, the state-of-the-art tables are recorded for each component task [here](https://www.paperswithcode.com/task/object-detection) and [here](https://www.paperswithcode.com/task/image-classification2).
( Image credit: [Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) )",computer-vision
33ad5f95-3a70-467d-ba15-401b63119d89,rotated-mnist,Rotated MNIST,,computer-vision
0b7063fc-fde6-44af-976e-34bb10c95226,partially-relevant-video-retrieval,Partially Relevant Video Retrieval,"In the Partially Relevant Video Retrieval (PRVR) task, an untrimmed video is considered to be partially relevant w.r.t. a given textual query if it contains a moment relevant to the query. PRVR aims to retrieve such partially relevant videos from a large collection of untrimmed videos.",computer-vision
65ad5685-8bc7-4771-bab7-e084f5ff67c6,single-image-blind-deblurring,Single-Image Blind Deblurring,,computer-vision
e691f373-2a6a-43a7-8044-4a798de20c10,face-reenactment,Face Reenactment,"**Face Reenactment** is an emerging conditional face synthesis task that aims at fulfilling two goals simultaneously: 1) transfer a source face shape to a target face; while 2) preserve the appearance and the identity of the target face.
Source: [One-shot Face Reenactment ](https://arxiv.org/abs/1908.03251)",computer-vision
010bae59-d4ee-4c73-af94-cc2fb3a62e8a,point-cloud-pre-training,Point Cloud Pre-training,,computer-vision
9d8befd2-769d-4119-bf25-ea8e19277fe6,thermal-infrared-object-tracking,Thermal Infrared Object Tracking,,computer-vision
3f145377-75a3-47be-af9a-2c71f38cb6af,neural-stylization,Neural Stylization,,computer-vision
112d3287-d788-4e15-b117-159ce511bb3f,synthetic-image-attribution,Synthetic Image Attribution,"Determine the source or origin of a generated image, such as identifying the model or tool used to create it. This information can be useful for detecting copyright infringement or for investigating digital crimes.",computer-vision
45b70003-3482-4478-a5a2-3fb40e960c30,active-observation-completion,Active Observation Completion,,computer-vision
0be6e837-5bb2-4cfe-927d-b671bb11e7f9,electron-microscopy-image-segmentation,Electron Microscopy Image Segmentation,,computer-vision
b27c03e0-b800-4180-af84-e9f9c21bd4be,scene-graph-generation,Scene Graph Generation,"A scene graph is a structured representation of an image, where nodes in a scene graph correspond to object bounding boxes with their object categories, and edges correspond to their pairwise relationships between objects. The task of **Scene Graph Generation** is to generate a visually-grounded scene graph that most accurately correlates with an image.
Source: [Scene Graph Generation by Iterative Message Passing ](https://arxiv.org/abs/1701.02426)",computer-vision
7b2014fc-e5ed-41e7-9859-0bbcbd80a940,depth-aleatoric-uncertainty-estimation,Depth Aleatoric Uncertainty Estimation,,computer-vision
15e1fe36-6944-4a0c-8e54-f367888458a8,multiple-affordance-detection,Multiple Affordance Detection,"Affordance detection is the task of detecting objects that are usable (or graspable) by a human.
( Image credit: [What can I do here? Leveraging Deep 3D saliency and geometry for fast and scalable multiple affordance detection](https://github.com/eduard626/deep-interaction-tensor) )",computer-vision
d30f6406-a906-4552-ae4b-62a647b12052,handwriting-generation,Handwriting generation,The inverse of handwriting recognition. From text generate an image of handwriting (offline) or a trajectory of handwriting (online).,computer-vision
bfd06c16-0818-432e-9ec8-e19aafebe543,emotion-recognition,Emotion Recognition,"**Emotion Recognition** is an important area of research to enable effective human-computer interaction. Human emotions can be detected using speech signal, facial expressions, body language, and electroencephalography (EEG). Source: [Using Deep Autoencoders for Facial Expression Recognition ](https://arxiv.org/abs/1801.08329)",computer-vision
05317613-0276-45b7-943b-5d39e821cb31,image-generation-from-scene-graphs,Image Generation from Scene Graphs,,computer-vision
6f300f21-a2ea-4f18-9adb-928f87fa4e87,image-denoising,Image Denoising,"**Image Denoising** is a computer vision task that involves removing noise from an image. Noise can be introduced into an image during acquisition or processing, and can reduce image quality and make it difficult to interpret. Image denoising techniques aim to restore an image to its original quality by reducing or removing the noise, while preserving the important features of the image.
( Image credit: [Wide Inference Network for Image Denoising via
Learning Pixel-distribution Prior](https://arxiv.org/pdf/1707.05414v5.pdf) )",computer-vision
c5ea913b-7703-4841-ad55-9361c6d61b14,stereoscopic-image-quality-assessment,Stereoscopic image quality assessment,,computer-vision
e6717f4d-3e17-4c62-a46b-e4cdebe02035,drone-based-object-tracking,drone-based object tracking,drone-based object tracking,computer-vision
1cd3d593-afa7-4ad3-9aaa-20cd50826803,video-question-answering,Video Question Answering,"Video Question Answering (VideoQA) aims to answer natural language questions according to the
given videos. Given a video and a question in natural language, the model produces accurate answers according
to the content of the video.",computer-vision
5e385fb2-0133-403c-b4db-c94f5c7a4dba,image-based-automatic-meter-reading,Image-based Automatic Meter Reading,,computer-vision
38020be0-b3fe-4431-8fce-d1a53736cbac,facial-inpainting,Facial Inpainting,"Facial inpainting (or face completion) is the task of generating plausible facial structures for missing pixels in a face image.
( Image credit: [SymmFCNet](https://github.com/csxmli2016/SymmFCNet) )",computer-vision
96542f99-1ae2-4bbc-9b5e-dbc3a87da9dd,vehicle-speed-estimation,Vehicle Speed Estimation,Vehicle speed estimation is the task of detecting and tracking vehicles whose real-world speeds are then estimated. The task is usually evaluated with recall and precision of the detected vehicle tracks as well as the mean or median errors of the estimated vehicle speeds.,computer-vision
a6eba406-d038-42a2-9c05-42d76f8a51e6,cross-view-person-re-identification,Cross-Modal Person Re-Identification,,computer-vision
db4a9f46-b0b3-4d63-9f31-c57afaf1ade5,laminar-turbulent-flow-localisation,Laminar-Turbulent Flow Localisation,It is a segmentation task on thermographic measurement images in order to separate laminar and turbulent flow regions on flight body parts.,computer-vision
62c2f21d-6af0-41bc-877a-e02037eb0b30,geometrical-view,Geometrical View,,computer-vision
e8ca33fe-23e6-4317-81f1-04f3b948cde7,covid-19-image-segmentation,COVID-19 Image Segmentation,,computer-vision
f2fe2c9a-6250-48a2-983d-f792e5cc571d,motion-retargeting,motion retargeting,,computer-vision
332e5306-68a6-4c61-9eb5-438cb4e49abc,neural-radiance-caching,Neural Radiance Caching,"Involves the task of predicting photorealistic pixel colors from feature buffers.
Image source: [Instant Neural Graphics Primitives with a Multiresolution Hash Encoding](https://arxiv.org/pdf/2201.05989v1.pdf)",computer-vision
99954eb6-bde0-4f37-b6b3-6460cc1860cd,facial-expression-recognition,Facial Expression Recognition (FER),"**Facial Expression Recognition (FER)** is a computer vision task aimed at identifying and categorizing emotional expressions depicted on a human face. The goal is to automate the process of determining emotions in real-time, by analyzing the various features of a face such as eyebrows, eyes, mouth, and other features, and mapping them to a set of emotions such as anger, fear, surprise, sadness and happiness.
( Image credit: [DeXpression](https://arxiv.org/pdf/1509.05371v2.pdf) )",computer-vision
e9c5c5e5-a0dd-4112-ab37-d5b44957c1e5,camera-shot-segmentation,Camera shot segmentation,"Camera shot temporal segmentation consists in classifying each video frame according to the type of camera used to record said frame. This task is introduced with the SoccerNet-v2 dataset, where 13 camera classes are considered (main camera, behind the goal, corner camera, etc.).",computer-vision
c5b96f7c-a6bf-498f-a1f7-ca14253cc9f6,local-color-enhancement,Local Color Enhancement,"Enhancement techniques for improving the contrast between lesion and background skin on dermatological macro-images are limited in the literature. To fill this gap, a modified sigmoid transform is applied in the HSV color space. The crossover point in the modified sigmoid transform that divides the macro-image into lesion and background is predicted using a modified EfficientNet regressor to exclude manual intervention and subjectivity.",computer-vision
ae65d586-8d2c-45ad-afe5-789cbec4d5ce,video-reconstruction,Video Reconstruction,"Source: [Deep-SloMo](https://github.com/avinashpaliwal/Deep-SloMo)",computer-vision
e75e305f-5930-4f85-91b0-f9a2be45e494,vgsi,VGSI,"Given a textual goal and multiple images representing candidate events, a model must choose one image which constitutes a reasonable step towards the given goal.
A model should correctly recognize not only the specific action illustrated in an image (e.g., “turning on the oven”), but also the intent of the action (“baking fish”).",computer-vision
083fa873-e7a4-4451-b39d-c268cd7e34c7,monocular-cross-view-road-scene-parsing,Monocular Cross-View Road Scene Parsing(Vehicle),,computer-vision
951efd7c-1808-4bdf-9f74-baee7f240dc2,video-instance-segmentation,Video Instance Segmentation,"The goal of video instance segmentation is simultaneous detection, segmentation and tracking of instances in videos. In other words, it is the first time that the image instance segmentation problem has been extended to the video domain.
To facilitate research on this new task, a large-scale benchmark called YouTube-VIS has been built, consisting of 2,883 high-resolution YouTube videos, a 40-category label set and 131k high-quality instance masks.",computer-vision
f5bc4fe9-0fa7-4e70-9210-4a255f237136,camera-relocalization,Camera Relocalization,"""Camera relocalization, or image-based localization is a fundamental problem in robotics and computer vision. It refers to the process of determining camera pose from the visual scene representation and it is essential for many applications such as navigation of autonomous vehicles, structure from motion (SfM), augmented reality (AR) and simultaneous localization and mapping (SLAM)."" ([Source](https://paperswithcode.com/paper/camera-relocalization-by-computing-pairwise))",computer-vision
12436762-90d8-4141-ba0b-b4414246397a,denoising,Denoising,"**Denoising** is a task in image processing and computer vision that aims to remove or reduce noise from an image. Noise can be introduced into an image due to various reasons, such as camera sensor limitations, lighting conditions, and compression artifacts. The goal of denoising is to recover the original image, which is considered to be noise-free, from a noisy observation.
( Image credit: [Beyond a Gaussian Denoiser](https://arxiv.org/pdf/1608.03981v1.pdf) )",computer-vision
3a92de4e-791a-4191-95a2-0c6a77685cf6,cross-domain-iris-presentation-attack,Cross-Domain Iris Presentation Attack Detection,,computer-vision
cd550b13-c416-4111-8b17-0d33cd8690c9,few-shot-image-segmentation,Few-Shot Semantic Segmentation,Few-shot semantic segmentation (FSS) learns to segment target objects in query image given few pixel-wise annotated support image.,computer-vision
a72d47d2-132c-49dd-a086-8720f58cb588,lake-detection,Lake Detection,,computer-vision
94970623-5657-466b-9dd9-b4e967f04dc7,video-interlacing,Video Interlacing,,computer-vision
ad79d4aa-896e-4e3a-87d3-05223310922a,6d-pose-estimation-using-rgbd,6D Pose Estimation using RGBD,Image: [Zeng et al](https://arxiv.org/pdf/1609.09475v3.pdf),computer-vision
f1b45278-0144-433c-b2e4-b688e24b695b,face-model,Face Model,,computer-vision
84523ac1-e652-4a17-9b4e-1e01497b87d2,image-similarity-search,Image Similarity Search,Image credit: [The 2021 Image Similarity Dataset and Challenge](https://paperswithcode.com/paper/the-2021-image-similarity-dataset-and),computer-vision
1c841d5e-bad9-4d51-9950-683a0a524bb2,self-supervised-person-re-identification,Self-Supervised Person Re-Identification,"Currently, self-supervised representation learning is mainly tested on image classification tasks, which is not sufficient to verify its effectiveness. It should also be tested on visual matching tasks, and person re-identification is just such an appropriate task.",computer-vision
7fc61c77-0fa1-4cc9-a38b-a23992d7261e,online-clustering,Online Clustering,"Models that learn to label each image (i.e. cluster the dataset into its ground truth classes) without seeing the ground truth labels. Under the online scenario, data is in the form of streams, i.e., the whole dataset could not be accessed at the same time and the model should be able to make cluster assignments for new data without accessing the former data.
Image Credit: [Online Clustering by Penalized Weighted GMM](https://arxiv.org/pdf/1902.02544v1.pdf)",computer-vision
0fe68e2b-8cd7-473a-8707-d580c4b9c7db,human-object-interaction-concept-discovery,Human-Object Interaction Concept Discovery,"Discovering reasonable HOI concepts/categories from known categories and their instances. It can also be viewed as a matrix (verb-object matrix) completion problem.",computer-vision
65f5b681-bcf5-46a5-b544-0a49f6614108,video-boundary-captioning,Video Boundary Captioning,"Provided with the timestamp of a boundary inside a video, the machine is required to generate sentences describing the status change at the boundary.",computer-vision
361080bc-7984-4ba7-8bf1-a0e731e7bd0e,spectral-super-resolution,Spectral Super-Resolution,,computer-vision
64007132-3eb0-4b7b-af83-a67732e48508,superpixel-image-classification,Superpixel Image Classification,"In **Superpixel Image Classification**, pixels that share common characteristics (such as pixel intensity) are grouped into superpixels, and classification is performed on these groups.",computer-vision
7cc096aa-890c-44b7-8657-27dd8ce090ea,medical-image-deblurring,Medical Image Deblurring,Medical image deblurring aims to remove blur from medical images,computer-vision
11e8203c-8e55-42aa-8641-f60913b6f4c9,partial-point-cloud-matching,Partial Point Cloud Matching,,computer-vision
a3bc4fe2-20ba-4b10-a64c-963f5679fa2c,fingertip-detection,Fingertip Detection,,computer-vision
5e2aab75-a528-4026-aaaa-3aea0e515a46,real-time-3d-semantic-segmentation,Real-Time 3D Semantic Segmentation,,computer-vision
948b2546-519c-4fb4-b606-c5e660f5503e,pose-guided-image-generation,Pose-Guided Image Generation,"Pose-guided image generation is the task of generating a new image of a person, guided by pose information around which the new image should be synthesised.
( Image credit: [Coordinate-based Texture Inpainting for Pose-Guided Human Image Generation](https://arxiv.org/pdf/1811.11459v2.pdf) )",computer-vision
d5ee125c-1b6c-40a6-9efe-5d1261d1686e,on-the-fly-sketch-based-image-retrieval,On-the-Fly Sketch Based Image Retrieval,Start retrieving as the user starts drawing.,computer-vision
86cf9b2e-5502-4322-a144-7fa08bbfccbf,hand-pose-estimation,Hand Pose Estimation,"Hand pose estimation is the task of finding the joints of the hand from an image or set of video frames.
( Image credit: [Pose-REN](https://github.com/xinghaochen/Pose-REN) )",computer-vision
e376199f-c72b-41b5-abc7-8b2e8026ddfe,missing-markers-reconstruction,Missing Markers Reconstruction,Reconstructing missing markers in motion capture 3D poses,computer-vision
5cae5d64-ed35-4c1c-bb5b-a1f3b1540d93,unsupervised-object-segmentation,Unsupervised Object Segmentation,Image credit: [ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation](https://paperswithcode.com/paper/clevrtex-a-texture-rich-benchmark-for),computer-vision
7607790e-a95d-47a9-a8ee-ee2593368886,fairness,Fairness,,computer-vision
3ecac97d-be5a-4fa0-8351-fea7b499843a,animation,Animation,,computer-vision
54ab7fac-4cad-4104-9d9d-e57572a60481,video-retrieval,Video Retrieval,"The objective of video retrieval is as follows: given a text query and a pool of candidate videos, select the video which corresponds to the text query. Typically, the videos are returned as a ranked list of candidates and scored via document retrieval metrics.",computer-vision
ab088503-398c-45b6-b4ce-457f8363e3ea,displaced-people-recognition,Displaced People Recognition,"Recognise displaced people from images.
( Image credit: [DisplaceNet: Recognising Displaced People from Images by Exploiting Dominance Level](https://arxiv.org/pdf/1905.02025v1.pdf) )",computer-vision
6f7a1b1d-b8e9-4294-9dcc-39add6205bf1,temporal-localization,Temporal Localization,,computer-vision
774b9e4b-f2de-4b3d-b731-0906d36da67a,generalized-few-shot-semantic-segmentation,Generalized Few-Shot Semantic Segmentation,,computer-vision
07944d65-a486-4626-b61f-22e28068d491,human-fmri-response-prediction,Human fMRI response prediction,"The task is: Given a) the set of videos of everyday events and b) the corresponding brain responses recorded while human participants viewed those videos, use computational models to predict brain responses for videos.",computer-vision
cf79d3a3-c4ae-4ac0-a83f-c8407e1c6c5f,depth-completion,Depth Completion,"The **Depth Completion** task is a sub-problem of depth estimation. In the sparse-to-dense depth completion problem, one wants to infer the dense depth map of a 3-D scene given an RGB image and its corresponding sparse reconstruction in the form of a sparse depth map obtained either from computational methods such as SfM (Structure-from-Motion) or active sensors such as lidar or structured light sensors.
Source: [LiStereo: Generate Dense Depth Maps from LIDAR and Stereo Imagery ](https://arxiv.org/abs/1905.02744),
[Unsupervised Depth Completion from Visual Inertial Odometry](https://arxiv.org/abs/1905.08616)",computer-vision
0aeb25f5-2eb6-4266-b5f4-c4e5fc608bda,font-recognition,Font Recognition,"Font recognition (also called *visual font recognition* or *optical font recognition*) is the task of identifying the font family or families used in images containing text. Understanding which fonts are used in text may, for example, help designers find the right style, as well as help select an optical character recognition engine or model that is a better fit for certain texts.",computer-vision
e01b1ed7-97b6-4ea8-9fc7-840a756bcca5,recognizing-and-localizing-human-actions,Recognizing And Localizing Human Actions,,computer-vision
813928ba-384c-4104-bc1d-6130d9b34128,spatio-temporal-video-grounding,Spatio-Temporal Video Grounding,,computer-vision
0cc792c5-eb54-4971-ab5c-18bcc0ff6d0e,spectrum-cartography,Spectrum Cartography,,computer-vision
79250cec-5c78-425e-adea-3e7c9e4b6d06,retinal-oct-disease-classification,Retinal OCT Disease Classification,Classifying different Retinal degeneration from Optical Coherence Tomography Images (OCT).,computer-vision
2a6d9b0b-3472-4d97-a4f8-2ae13991c139,image-stylization,Image Stylization,"**Image stylization** is a task that involves transforming an input image into a new image that has a different style, while preserving the content of the original image. The goal of image stylization is to create visually appealing images with a specific style or aesthetic, such as impressionism, cubism, or surrealism. It can also be used to make images more visually appealing for specific applications, such as social media or advertising.",computer-vision
c9820a18-b949-4fc5-8a1d-2069f44a796e,fundus-to-angiography-generation,Fundus to Angiography Generation,Generating Retinal Fluorescein Angiography from Retinal Fundus Image using Generative Adversarial Networks.,computer-vision
9f6ed954-21db-44ef-a5c0-fb83d055571b,situation-recognition,Situation Recognition,"Situation Recognition aims to produce the structured image summary which describes the primary activity (verb), and its relevant entities (nouns).",computer-vision
ff4971cb-cfbb-4dd0-a30e-73cee8980a3a,multiview-learning,Multiview Learning,,computer-vision
dc1f9ebf-307b-4ebf-b3df-30be09332bf1,instance-search,Instance Search,"Visual **Instance Search** is the task of retrieving from a database of images the ones that contain an instance of a visual query. It is typically much more challenging than finding images from the database that contain objects belonging to the same category as the object in the query. If the visual query is an image of a shoe, visual Instance Search does not try to find images of shoes, which might differ from the query in shape, color or size, but tries to find images of the exact same shoe as the one in the query image. Visual Instance Search challenges image representations as the features extracted from the images must enable such fine-grained recognition despite variations in viewpoints, scale, position, illumination, etc. Whereas holistic image representations, where each image is mapped to a single high-dimensional vector, are sufficient for coarse-grained similarity retrieval, local features are needed for instance retrieval.
Source: [Dynamicity and Durability in Scalable Visual Instance Search ](https://arxiv.org/abs/1805.10942)",computer-vision
909b3182-cabf-4102-bf39-e0018d4c8b18,segmentation-based-workflow-recognition,Segmentation Based Workflow Recognition,,computer-vision
c24474a8-c81c-4237-a96f-538fb4bf3577,image-registration,Image Registration,"Image registration is the process of transforming different sets of data into one coordinate system. Data may be multiple photographs, data from different sensors, times, depths, or viewpoints. It is used in computer vision, medical imaging, and compiling and analyzing images and data from satellites. Registration is necessary in order to be able to compare or integrate the data obtained from these different measurements.
Source: [Image registration | Wikipedia](https://en.wikipedia.org/wiki/Image_registration)
( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision
2e3f5932-f727-4654-b7c4-c39b42f92659,3d-object-detection-from-monocular-images,3D Object Detection From Monocular Images,"This is the task of detecting 3D objects from monocular images (as opposed to LiDAR based counterparts). It is usually associated with autonomous driving based tasks.
( Image credit: [Orthographic Feature Transform for Monocular 3D Object Detection](https://arxiv.org/pdf/1811.08188v1.pdf) )",computer-vision
936afecf-786a-4cbd-bd62-6f87ff58c201,one-shot-face-stylization,One-Shot Face Stylization,"Image credit: [""JoJoGAN: One Shot Face Stylization""](https://arxiv.org/pdf/2112.11641v1.pdf)",computer-vision
21c05be4-544b-4f91-85c4-74f2c921f624,prompt-driven-zero-shot-domain-adaptation,Prompt-driven Zero-shot Domain Adaptation,Domain adaptation using only a single source domain and a description of the target domain in natural language (No images from target domain are available),computer-vision
2e3979a8-02cf-4f84-a8b8-83c606065da0,human-part-segmentation,Human Part Segmentation,,computer-vision
aa67fb13-5deb-40eb-9979-6c1b7feda1de,nlp-based-person-retrival,Text based Person Retrieval,,computer-vision
4a1417f2-2cc3-4d0d-9192-b78bc73732b2,scene-text-detection,Scene Text Detection,"**Scene Text Detection** is a computer vision task that involves automatically identifying and localizing text within natural images or videos. The goal of scene text detection is to develop algorithms that can robustly detect and label text with bounding boxes in uncontrolled and complex environments, such as street signs, billboards, or license plates.
Source: [ContourNet: Taking a Further Step toward Accurate Arbitrary-shaped Scene Text Detection ](https://arxiv.org/abs/2004.04940)",computer-vision
e0e4e94b-8031-4d45-83b1-57e5175bcc45,video-summarization,Video Summarization,"**Video Summarization** aims to generate a short synopsis that summarizes the video content by selecting its most informative and important parts. The produced summary is usually composed of a set of representative video frames (a.k.a. *video key-frames*), or video fragments (a.k.a. *video key-fragments*) that have been stitched in chronological order to form a shorter video. The former type of a video summary is known as **video storyboard**, and the latter type is known as **video skim**.
Source: [Video Summarization Using Deep Neural Networks: A Survey](https://arxiv.org/abs/2101.06072)
Image credit: [iJRASET](https://www.ijraset.com/fileserve.php?FID=12932)",computer-vision
63e140dd-0249-4b5f-8b25-c1d54e532391,open-vocabulary-attribute-detection,Open Vocabulary Attribute Detection,"Open-Vocabulary Attribute Detection (OVAD) is a task that aims to detect and recognize an open set of objects and their associated attributes in an image. The objects and attributes are defined by text queries during inference, without prior knowledge of the tested classes during training.",computer-vision
2267b94c-120a-4526-9a5a-f08ae15c0fe9,thermal-image-segmentation,Thermal Image Segmentation,,computer-vision
07b14dbd-f88a-487f-9199-69cd54b6b3b1,object-detection-in-aerial-images,Object Detection In Aerial Images,"Object Detection in Aerial Images is the task of detecting objects from aerial images.
( Image credit: [DOTA: A Large-Scale Dataset for Object Detection in Aerial Images](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xia_DOTA_A_Large-Scale_CVPR_2018_paper.pdf) )",computer-vision
2d42e513-a15e-49e7-b81c-f2814e07b18a,weakly-supervised-action-localization,Weakly Supervised Action Localization,"In this task, the training data consists of videos with a list of activities in them without any temporal boundary annotations. However, while testing, given a video, the algorithm should recognize the activities in the video and also provide the start and end time.",computer-vision
8f74ceff-34c0-4c4b-9b6c-9dccb3ee1ad2,handwritten-digit-image-synthesis,Handwritten Digit Image Synthesis,,computer-vision
dae88bac-de44-456c-8e03-d3784175c00a,3d-multi-person-pose-estimation-root-relative,3D Multi-Person Pose Estimation (root-relative),"This task aims to solve root-relative 3D multi-person pose estimation (person-centric coordinate system). No ground-truth human bounding boxes or human root joint coordinates are used during the testing stage.
( Image credit: [RootNet](https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE) )",computer-vision
7da29bc8-a6a2-4b4c-a6ef-b444b4515b6f,object-counting,Object Counting,"The goal of the **Object Counting** task is to count the number of object instances in a single image or video sequence. It has many real-world applications such as traffic flow monitoring, crowdedness estimation, and product counting.
Source: [Learning to Count Objects with Few Exemplar Annotations ](https://arxiv.org/abs/1905.07898)",computer-vision
d5740a8a-0f8b-445d-b92b-f17db4513f77,medical-object-detection,Medical Object Detection,"Medical object detection is the task of identifying medical-based objects within an image.
( Image credit: [Liver Lesion Detection from Weakly-labeled Multi-phase CT Volumes with a Grouped Single Shot MultiBox Detector](https://github.com/L0SG/grouped-ssd-pytorch) )",computer-vision
13f18479-e822-440b-bd31-5472c50e30f1,chinese-landscape-painting-generation,Chinese Landscape Painting Generation,,computer-vision
2a1fc1b7-6ef1-4ac5-b2e0-ceacf199622f,ensemble-learning,Ensemble Learning,,computer-vision
7d4c34c8-2f19-4e36-8604-321b4e42a47f,low-light-image-enhancement,Low-Light Image Enhancement,"**Low-Light Image Enhancement** is a computer vision task that involves improving the quality of images captured under low-light conditions. The goal of low-light image enhancement is to make images brighter, clearer, and more visually appealing, without introducing too much noise or distortion.",computer-vision
0fc48329-66eb-4a3f-99c3-bf123299527e,video-salient-object-detection,Video Salient Object Detection,"Video salient object detection (VSOD) is essential for understanding the underlying mechanism of the HVS during free-viewing in general and is instrumental to a wide range of real-world applications, e.g., video segmentation, video captioning, video compression, autonomous driving, robotic interaction, and weakly supervised attention. Besides its academic value and practical significance, VSOD presents great difficulties due to the challenges carried by video data (diverse motion patterns, occlusions, blur, large object deformations, etc.) and the inherent complexity of human visual attention behavior (i.e., selective attention allocation, attention shift) during dynamic scenes. Online benchmark: http://dpfan.net/davsod.
( Image credit: [Shifting More Attention to Video Salient Object Detection, CVPR2019-Best Paper Finalist](https://openaccess.thecvf.com/content_CVPR_2019/papers/Fan_Shifting_More_Attention_to_Video_Salient_Object_Detection_CVPR_2019_paper.pdf) )",computer-vision
10efe36d-72d3-438b-9ded-c425d5098a57,zero-shot-object-detection,Zero-Shot Object Detection,"Zero-shot object detection (ZSD) is the task of object detection where no visual training data is available for some of the target object classes.
( Image credit: [Zero-Shot Object Detection: Learning to Simultaneously Recognize and Localize Novel Concepts](https://github.com/salman-h-khan/ZSD_Release) )",computer-vision
a1621858-5b5f-4348-968d-934ac21f27db,dynamic-texture-recognition,Dynamic Texture Recognition,,computer-vision
9e21b1e3-b37d-4462-bec3-d4b76b1dc312,surface-normals-estimation,Surface Normals Estimation,Surface normal estimation deals with the task of predicting the surface orientation of the objects present inside a scene. Refer to [Designing Deep Networks for Surface Normal Estimation (Wang et al.)](https://www.cs.cmu.edu/~xiaolonw/papers/deep3d.pdf) to get a good overview of several design choices that led to the development of a CNN-based surface normal estimator.,computer-vision
faaa59b9-d1e6-4fd5-951e-0bb0344cbad3,mistake-detection,Mistake Detection,"Mistakes are natural occurrences in many tasks and an opportunity for an AR assistant to provide help. Identifying such mistakes requires modelling procedural knowledge and retaining long-range sequence information. In its simplest form Mistake Detection aims to classify each coarse action segment into one of the three classes: {“correct”, “mistake”, “correction”}.",computer-vision
6de73cd2-401a-4004-ae9c-d7c70aa0c17f,shadow-detection-and-removal,Shadow Detection And Removal,,computer-vision
c7921578-4ab5-46c6-bee2-d78c7f0f7925,referring-image-matting-prompt-based,Referring Image Matting (Prompt-based),"Prompt-based referring image matting, taking an image and a prompt word as the input.",computer-vision
3bc9b58b-2f70-424b-b523-1befc25e321a,3d-shape-representation,3D Shape Representation,Image: [MeshNet](https://arxiv.org/pdf/1811.11424v1.pdf),computer-vision
8f22bb11-b97a-48e6-a090-561b7160812d,lightfield,Lightfield,Tasks related to the light-field imagery,computer-vision
cb3afeda-6be1-4312-b8ff-2d616badeaac,image-harmonization,Image Harmonization,Image harmonization aims to modify the color of the composited region with respect to the specific background.,computer-vision
efcdfb0b-87d0-4099-bb73-875870f2ffea,online-multi-object-tracking,Online Multi-Object Tracking,"The goal of **Online Multi-Object Tracking** is to estimate the spatio-temporal trajectories of multiple objects in an online video stream (i.e., the video is provided frame-by-frame), which is a fundamental problem for numerous real-time applications, such as video surveillance, autonomous driving, and robot navigation.
Source: [A Hybrid Data Association Framework for Robust Online Multi-Object Tracking ](https://arxiv.org/abs/1703.10764)",computer-vision
8e10c4af-6d00-45eb-95ac-f01591ee7fca,novel-class-discovery,Novel Class Discovery,"The goal of Novel Class Discovery (NCD) is to identify new classes in unlabeled data, by exploiting prior knowledge from known classes. In this specific setup, the data is split in two sets. The first is a labeled set containing known classes and the second is an unlabeled set containing unknown classes that must be discovered.",computer-vision
f085e333-1947-4e50-9b08-4d2310d1d454,3d-object-reconstruction-from-a-single-image,3D Object Reconstruction From A Single Image,Image: [Fan et al](https://arxiv.org/pdf/1612.00603v2.pdf),computer-vision
977fea09-fe7d-4add-97c4-e04a0ce0ba2f,referring-video-object-segmentation,Referring Video Object Segmentation,"Referring video object segmentation aims at segmenting an object in video with language expressions. Unlike the previous video object segmentation, the task exploits a different type of supervision, language expressions, to identify and segment an object referred by the given language expressions in a video.",computer-vision
520790ad-b015-4c4c-8309-a8d7e6e7d52d,finger-dorsal-image-spoof-detection,Finger Dorsal Image Spoof Detection,,computer-vision
a41c3c9a-5e90-4eba-b5cf-1f5e4be0a450,cross-view-image-to-image-translation,Cross-View Image-to-Image Translation,,computer-vision
dc116db5-5466-465b-a33d-586718c20b01,state-change-object-detection,State Change Object Detection,,computer-vision
a818cb86-3ca1-4d57-b604-5aa5e2e005cd,object-detection-in-indoor-scenes,Object Detection In Indoor Scenes,"Object detection in indoor scenes is the task of performing object detection within an indoor environment.
( Image credit: [Faster Bounding Box Annotation for Object Detection in Indoor Scenes](https://arxiv.org/pdf/1807.03142v1.pdf) )",computer-vision
1406338a-d8f9-4576-a34d-111d59412679,vehicle-re-identification,Vehicle Re-Identification,"Vehicle re-identification is the task of identifying the same vehicle across multiple cameras.
( Image credit: [A Two-Stream Siamese Neural Network for Vehicle Re-Identification by Using Non-Overlapping Cameras](https://github.com/icarofua/siamese-two-stream) )",computer-vision
3c30fa0e-89ae-4d2b-b550-97939a0680d1,overlapped-100-10,Overlapped 100-10,,computer-vision
d3d54823-c262-4b3b-aa40-04a1cee1a8cc,motion-segmentation,Motion Segmentation,"**Motion Segmentation** is an essential task in many applications in Computer Vision and Robotics, such as surveillance, action recognition and scene understanding. The classic way to state the problem is the following: given a set of feature points that are tracked through a sequence of images, the goal is to cluster those trajectories according to the different motions they belong to. It is assumed that the scene contains multiple objects that are moving rigidly and independently in 3D-space.
Source: [Robust Motion Segmentation from Pairwise Matches ](https://arxiv.org/abs/1905.09043)",computer-vision
2ed91dad-2800-4e4a-af8d-940ac1a745d3,weakly-supervised-instance-segmentation,Weakly-supervised instance segmentation,,computer-vision
90ae0bf3-b1d7-4951-8985-aa31fac0fa8e,drone-view-target-localization,Drone-view target localization,"(Drone -> Satellite) Given one drone-view image or video, the task aims to find the most similar satellite-view image to localize the target building in the satellite view.",computer-vision
32ae2c14-cfcd-4dda-9b01-103b22a69e32,3d-classification,3D Classification,,computer-vision
f4f552d0-dce7-44eb-b9aa-7a62e61e64d3,group-detection-in-crowds,Group Detection In Crowds,,computer-vision
643a74fc-6aa4-4c84-a4ac-b60a2999f27e,open-set-action-recognition,Open Set Action Recognition,,computer-vision
9c92e4da-9da4-4895-ae75-74125fa6a46d,rgb-d-reconstruction,RGB-D Reconstruction,,computer-vision
c2470d63-7dab-48ba-a359-cdc6a01201ee,3d,3D,,computer-vision
39177d21-03d8-49b6-b400-bfa4be336865,hand-object-pose,hand-object pose,6D pose estimation of hand and object,computer-vision
7be512c0-5bd6-4433-b35d-380c520560da,object-categorization,Object Categorization,"Object categorization identifies which label, from a given set, best corresponds to an image region defined by an input image and bounding box.",computer-vision
5b1e0efc-626b-45a8-83b0-6a3b129d612d,object-detection,Object Detection,"**Object Detection** is a computer vision task in which the goal is to detect and locate objects of interest in an image or video. The task involves identifying the position and boundaries of objects in an image, and classifying the objects into different categories.
The state-of-the-art methods can be categorized into two main types: one-stage methods and two-stage methods:
- One-stage methods prioritize inference speed, and example models include YOLO, SSD and RetinaNet.
- Two-stage methods prioritize detection accuracy, and example models include Faster R-CNN, Mask R-CNN and Cascade R-CNN.
The most popular benchmark is the MSCOCO dataset. Models are typically evaluated according to a Mean Average Precision metric.
( Image credit: [Detectron](https://github.com/facebookresearch/detectron) )",computer-vision
64334b64-e1ab-4c15-a83c-5cd299267fa7,texture-classification,Texture Classification,"**Texture Classification** is a fundamental issue in computer vision and image processing, playing a significant role in many applications such as medical image analysis, remote sensing, object recognition, document analysis, environment modeling, content-based image retrieval and many more.
Source: [Improving Texture Categorization with Biologically Inspired Filtering ](https://arxiv.org/abs/1312.0072)",computer-vision
97f2c8e0-9539-424d-8d3a-d40c2caa4d22,age-estimation,Age Estimation,"Age Estimation is the task of estimating the age of a person from an image or some other kind of data.
( Image credit: [BridgeNet](https://arxiv.org/pdf/1904.03358v1.pdf) )",computer-vision
a9c54ad9-a5c6-4ce5-aa53-28f196d69b40,visual-localization,Visual Localization,"**Visual Localization** is the problem of estimating the camera pose of a given image relative to a visual representation of a known scene.
Source: [Fine-Grained Segmentation Networks: Self-Supervised Segmentation for Improved Long-Term Visual Localization ](https://arxiv.org/abs/1908.06387)",computer-vision
f4afa024-7589-45b9-beab-1bd6b0ed2bb1,unsupervised-anomaly-detection-with-specified-7,Unsupervised Anomaly Detection with Specified Settings -- 10% anomaly,,computer-vision
e799df3d-5a62-45bc-afff-974814270ecf,video-forensics,Video Forensics,,computer-vision
80b0e258-274c-4e4e-9730-ebac44961ea3,visual-recognition,Visual Recognition,,computer-vision
1a6881e5-f0fb-4b6c-b846-3c78667b35a8,stereo-lidar-fusion,Stereo-LiDAR Fusion,Depth estimation using stereo cameras and a LiDAR sensor.,computer-vision
62c63344-4e23-4a38-b6e1-50d262af7155,video-inpainting,Video Inpainting,"The goal of **Video Inpainting** is to fill in missing regions of a given video sequence with contents that are both spatially and temporally coherent. Video Inpainting, also known as video completion, has many real-world applications such as undesired object removal and video restoration.
Source: [Deep Flow-Guided Video Inpainting ](https://arxiv.org/abs/1905.02884)",computer-vision
e1834198-b440-4eff-83bb-682ca1c48741,depth-map-super-resolution,Depth Map Super-Resolution,"Depth map super-resolution is the task of upsampling depth images.
( Image credit: [A Joint Intensity and Depth Co-Sparse Analysis Model
for Depth Map Super-Resolution](https://arxiv.org/pdf/1304.5319v1.pdf) )",computer-vision
f47ff5f0-af0d-461e-a1c1-358ac24e8744,face-parsing,Face Parsing,Classify pixels of a face image into different classes based on a given bounding box.,computer-vision
ddf738f4-1170-42e0-9132-3e1de65d4362,unsupervised-video-clustering,Unsupervised Video Clustering,,computer-vision
0c8f5579-af80-4939-b017-dc8c06763f5e,single-image-generation,single-image-generation,,computer-vision
21b6a8a4-35e8-42d4-8396-478f39843180,activity-recognition-in-videos,Activity Recognition In Videos,,computer-vision
b637ffc3-74d0-43fe-9580-e85c1137129b,image-augmentation,Image Augmentation,"**Image Augmentation** is a data augmentation method that generates more training data from the existing training samples. Image Augmentation is especially useful in domains where training data is limited or expensive to obtain like in biomedical applications.
Source: [Improved Image Augmentation for Convolutional Neural Networks by Copyout and CopyPairing ](https://arxiv.org/abs/1909.00390)
( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision
94682c23-a132-4079-b307-4682a65bf3b2,video-temporal-consistency,Video Temporal Consistency,"A method that removes temporal flickering and other artifacts from videos, in particular those introduced by (non-temporal-aware) per-frame processing.",computer-vision
d3437f0e-a1f3-4748-8574-d7a87c4000db,light-source-estimation,Outdoor Light Source Estimation,,computer-vision
fc6bc6c1-05b1-43e9-a5c3-e72dfcf2dad1,audio-visual-active-speaker-detection,Audio-Visual Active Speaker Detection,Determine if and when each visible person in the video is speaking.,computer-vision
4e43584a-3ae9-421e-a100-0a76e36e6b4d,3d-rotation-estimation,3D Rotation Estimation,,computer-vision
563a5c19-a843-4b0c-af49-7ac3653e618d,relational-captioning,Relational Captioning,,computer-vision
9243817e-311c-45d4-a101-f73150836778,caricature,Caricature,"**Caricature** is a pictorial representation or description that deliberately exaggerates a person’s distinctive features or peculiarities to create an easily identifiable visual likeness with a comic effect. This vivid art form contains the concepts of abstraction, simplification and exaggeration.
Source: [Alive Caricature from 2D to 3D ](https://arxiv.org/abs/1803.06802)",computer-vision
4887ab8f-c199-4c25-a4f7-af909d13491f,edge-detection,Edge Detection,"**Edge Detection** is a fundamental image processing technique which involves computing an image gradient to quantify the magnitude and direction of edges in an image. Image gradients are used in various downstream tasks in computer vision such as line detection, feature detection, and image classification.
Source: [Artistic Enhancement and Style Transfer of Image Edges using Directional Pseudo-coloring ](https://arxiv.org/abs/1906.07981)
( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision
bbbcda6e-bc0f-4631-9eed-e0dbc226e530,3d-place-recognition,3D Place Recognition,Pointcloud-based place recognition and retrieval,computer-vision
6b696db1-bbc0-4612-ad52-a671951c9a55,human-interaction-recognition,Human Interaction Recognition,"Human Interaction Recognition (HIR) is a field of study that involves the development of computer algorithms to detect and recognize human interactions in videos, images, or other multimedia content. The goal of HIR is to automatically identify and analyze the social interactions between people, their body language, and facial expressions.",computer-vision
f9a9b7e0-1e64-4a48-b264-42e7d042b58c,autonomous-vehicles,Autonomous Vehicles,"Autonomous vehicles is the task of making a vehicle that can guide itself without human intervention.
Many of the state-of-the-art results can be found at more general task pages such as [3D Object Detection](https://paperswithcode.com/task/3d-object-detection) and [Semantic Segmentation](https://paperswithcode.com/task/semantic-segmentation).
( Image credit: [GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision](https://arxiv.org/abs/2007.13124) )",computer-vision
8a379c8c-6eae-40e5-bb25-ed74068d3d0f,4d-spatio-temporal-semantic-segmentation,4D Spatio Temporal Semantic Segmentation,Image: [Choy et al](https://paperswithcode.com/paper/4d-spatio-temporal-convnets-minkowski),computer-vision
53ca5aa8-25b5-4911-94b6-00020dab2beb,chart-question-answering,Chart Question Answering,Question Answering task on charts images,computer-vision
25bb53af-397c-4c81-9a81-4db19637c863,serial-style-transfer,Serial Style Transfer,,computer-vision
84bcebec-b4d7-4ce1-b494-ebb18d03053b,face-hallucination,Face Hallucination,"Face hallucination is the task of generating high-resolution (HR) facial images from low-resolution (LR) inputs.
( Image credit: [Deep CNN Denoiser and Multi-layer Neighbor Component Embedding for Face Hallucination](https://arxiv.org/pdf/1806.10726v1.pdf) )",computer-vision
898acd00-976e-46d2-847f-2f7e788d48de,hyperspectral-unmixing,Hyperspectral Unmixing,"**Hyperspectral Unmixing** is a procedure that decomposes the measured pixel spectrum of hyperspectral data into a collection of constituent spectral signatures (or endmembers) and a set of corresponding fractional abundances. Hyperspectral Unmixing techniques have been widely used for a variety of applications, such as mineral mapping and land-cover change detection.
Source: [An Augmented Linear Mixing Model to Address Spectral Variability for Hyperspectral Unmixing ](https://arxiv.org/abs/1810.12000)",computer-vision
071eb09a-8a03-444d-a2da-ee4c2d2fcd53,visual-reasoning,Visual Reasoning,Ability to understand actions and reasoning associated with any visual images,computer-vision
e729ee1b-31c4-447a-aec3-6051da7d1034,action-recognition-in-videos,Action Recognition,"**Action Recognition** is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify and categorize the actions being performed in the video or image into a predefined set of action classes.
In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset, will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets has meant that most popular benchmarks for action recognition are small, having on the order of 10k videos.
Please note some benchmarks may be located in the [Action Classification](https://paperswithcode.com/task/action-classification) or [Video Classification](https://paperswithcode.com/task/video-classification) tasks, e.g. Kinetics-400.",computer-vision
22ac7048-7005-4d93-949a-a8d21724a709,active-object-localization,Active Object Localization,,computer-vision
f46f92e6-9ee1-4c46-a274-913a22fcd1ca,video-kinematic-base-workflow-recognition,Video & Kinematic Base Workflow Recognition,,computer-vision
b9a18c9f-1f6b-43dd-a039-0f0c6419c4c7,intubation-support-prediction,Intubation Support Prediction,Prediction of need for Intubation support of Covid-19 patients.,computer-vision
854e623e-91b8-459a-904d-c7dffb2ce22d,real-time-object-detection,Real-Time Object Detection,"**Real-Time Object Detection** is a computer vision task that involves identifying and locating objects of interest in real-time video sequences with fast inference while maintaining a base level of accuracy.
This is typically solved using algorithms that combine object detection and tracking techniques to accurately detect and track objects in real-time. They use a combination of feature extraction, object proposal generation, and classification to detect and localize objects of interest.
( Image credit: [CenterNet](https://github.com/xingyizhou/CenterNet) )",computer-vision
d168a50c-0075-461e-86bd-7ef77df2beea,smile-recognition,Smile Recognition,Smile recognition is the task of recognising a smiling face in a photo or video.,computer-vision
b5dd5a08-e077-4b6c-9866-194be8d1b82f,unsupervised-semantic-segmentation,Unsupervised Semantic Segmentation,"Models that learn to segment each image (i.e. assign a class to every pixel) without seeing the ground truth labels.
( Image credit: [SegSort: Segmentation by Discriminative Sorting of Segments](http://openaccess.thecvf.com/content_ICCV_2019/papers/Hwang_SegSort_Segmentation_by_Discriminative_Sorting_of_Segments_ICCV_2019_paper.pdf) )",computer-vision
9cb89426-e916-40e0-955b-dfab4eacad85,atomic-action-recognition,Atomic action recognition,,computer-vision
9c34a5bb-61e1-4b12-a25f-8079700d20aa,word-spotting-in-handwritten-documents,Word Spotting In Handwritten Documents,,computer-vision
993ce2e9-39cc-4c83-86b7-3ee79c2bd1ca,trademark-retrieval,Trademark Retrieval,,computer-vision
41f51a00-76c1-41fb-ae0d-ce846309ba29,blended-target-domain-adaptation,Blended-target Domain Adaptation,"Blended-target domain adaptation is to adapt a single source model to multiple different target domains. The task is similar to the multi-target domain adaptation. However, the domain labels are not available.",computer-vision
e29db2e0-b3e5-43e6-ac1d-1198823a528d,video-understanding,Video Understanding,"A crucial task of **Video Understanding** is to recognise and localise (in space and time) different actions or events appearing in the video.
Source: [Action Detection from a Robot-Car Perspective ](https://arxiv.org/abs/1807.11332)",computer-vision
097b4c0a-c2fc-4f24-9acc-29e1ff761309,3d-single-object-tracking,3D Single Object Tracking,"3D tracking of a single object, based on an initial 3D bounding box, provided to the tracker. 3D single object tracking is commonly performed using point cloud data from Lidars, as it provides valuable depth information, which is lost in camera images.
However, irregular point cloud structure and an increasing point sparsity with distance make Lidar-based 3D single object tracking a nontrivial task.",computer-vision
a5d7a9bc-cd27-4f6e-bd07-304896527241,handwritten-line-segmentation,Handwritten Line Segmentation,Handwritten Line Segmentation from Document Images.,computer-vision
f716063b-2801-4c25-84ca-f819f401ccd3,conditional-image-generation,Conditional Image Generation,"Conditional image generation is the task of generating new images from a dataset conditional on their class.
( Image credit: [PixelCNN++](https://github.com/openai/pixel-cnn) )",computer-vision
57a4b044-91f0-409d-92fc-bbe2bc6e2395,virtual-try-on,Virtual Try-on,Virtual try-on of clothing or other items such as glasses and makeup. Most recent techniques use Generative Adversarial Networks.,computer-vision
2c197186-4db4-4bcd-80e9-649f56749e0a,3d-geometry-perception,3D Geometry Perception,Image: [Zhao et al](https://arxiv.org/pdf/1812.10775v2.pdf),computer-vision
c226db18-4047-4b68-9274-dc64b821090f,visual-text-correction,Visual Text Correction,,computer-vision
a7a3ccd8-8174-4e1f-866e-2eceae489a82,video-domain-adapation,Video Domain Adaptation,Unsupervised Domain Adaptation on Videos for the task of Action Recognition.,computer-vision
3c8d6703-1aaa-4f05-bd7f-dc76afd76749,camouflaged-object-segmentation,Camouflaged Object Segmentation,"Camouflaged object segmentation (COS) or Camouflaged object detection (COD), which was originally promoted by [T.-N. Le et al.](https://www.sciencedirect.com/science/article/abs/pii/S1077314219300608) (2017), aims to identify objects that conceal their texture into the surrounding environment. The high intrinsic similarities between the target object and the background make COS/COD far more challenging than the traditional object segmentation task. Also, refer to the online benchmarks on [CAMO dataset](https://sites.google.com/view/ltnghia/research/camo), [COD dataset](http://dpfan.net/Camouflage/), and [online demo](http://mc.nankai.edu.cn/cod).
( Image source: [Anabranch Network for Camouflaged Object Segmentation](https://www.sciencedirect.com/science/article/abs/pii/S1077314219300608) )",computer-vision
cdf87566-e113-44bb-9da6-0ecfc0c301ed,unseen-object-instance-segmentation,Unseen Object Instance Segmentation,"Instance segmentation is the task of detecting and delineating each distinct object of interest appearing in an image.
Image Credit: [Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers](https://arxiv.org/abs/2103.12340)",computer-vision
f650bd8e-73fc-45df-85ad-436ca48364ed,3d-dense-captioning,3D dense captioning,"Dense captioning in 3D point clouds is an emerging vision-and-language task involving object-level 3D scene understanding. Apart from coarse semantic class prediction and bounding box regression as in traditional 3D object detection, 3D dense captioning aims at producing a further and finer instance-level label of natural language description on visual appearance and spatial relations for each scene object of interest.",computer-vision
bededfab-4d5c-4ad6-8d96-bebf317a7a51,skills-evaluation,Skills Evaluation,,computer-vision
d5a0c934-0141-4aa3-8ff8-467a5e817121,facial-editing,Facial Editing,Image source: [Stitch it in Time: GAN-Based Facial Editing of Real Videos](https://arxiv.org/pdf/2201.08361v2.pdf),computer-vision
6b985326-b85a-41e3-8ea5-e2f85bdb5a0c,micro-expression-spotting,Micro-Expression Spotting,"Facial Micro-Expression Spotting is a challenging task in identifying onset, apex and/or offset over a short or long micro-expression sequence.",computer-vision
a94e29dd-63ed-4948-bbe6-ac30cf462b51,pose-contrastive-learning,Pose Contrastive Learning,,computer-vision
b097081e-139e-49b8-ba23-df9aca34f284,motion-disentanglement,Motion Disentanglement,Disentangling irregular (anomalous) motion from regular motion.,computer-vision
249c0595-455d-48f0-a9ea-43fc84b1b605,semantic-correspondence,Semantic correspondence,The task of semantic correspondence aims to establish reliable visual correspondence between different instances of the same object category.,computer-vision
924764ac-a115-42df-bd41-461726a8591b,zero-shot-composed-image-retrieval-zs-cir,Zero-Shot Composed Image Retrieval (ZS-CIR),"Given a query composed of a reference image and a relative caption, Composed Image Retrieval (CIR) aims to retrieve target images that are visually similar to the reference one but incorporate the changes specified in the relative caption. The bi-modality of the query provides users with more precise control over the characteristics of the desired image, as some features are more easily described with language, while others can be better expressed visually.
**Zero-Shot Composed Image Retrieval (ZS-CIR)** is a subtask of CIR that aims to design an approach that manages to combine the reference image and the relative caption without the need for supervised learning.",computer-vision
748345ee-cd52-450b-86a6-e47351352c1c,monocular-3d-human-pose-estimation,Monocular 3D Human Pose Estimation,This task targets at 3D human pose estimation with a single RGB camera.,computer-vision
b9e6772f-de7d-4b90-afcd-4d5668b99f1a,multimodal-patch-matching,Multimodal Patch Matching,"Multimodal patch matching focuses on matching patches originating from different sources, such as visible RGB and near-infrared.",computer-vision
4d2ffac0-ff92-4a19-9d09-cb8e3293f286,video-to-image-affordance-grounding,Video-to-image Affordance Grounding,"Given a demonstration video V and a target image I, the goal of video-to-image affordance grounding is to predict an affordance heatmap over the target image according to the hand-interacted region in the video, accompanied by the affordance action (e.g., press, turn).",computer-vision
9aad12d2-ee1e-4851-b4f5-1d48d55022dd,typeface-completion,Typeface Completion,,computer-vision
3ace723e-3e18-42f1-bdc3-8a3777c6f544,robust-bev-map-segmentation,Robust BEV Map Segmentation,,computer-vision
2bffe84f-f580-423b-b804-7779a5ed118e,face-image-retrieval,Face Image Retrieval,"Face image retrieval is the task of retrieving faces similar to a query according to the given criteria (e.g. identity) and ranking them using their distances to the query.
( Image credit: [CP-mtML](http://openaccess.thecvf.com/content_cvpr_2016/papers/Bhattarai_CP-mtML_Coupled_Projection_CVPR_2016_paper.pdf) )",computer-vision
cacea5f8-2ac1-4b21-b1e7-55acd3217250,jpeg-artifact-correction,JPEG Artifact Correction,"Correction of visual artifacts caused by JPEG compression. These artifacts are usually grouped into three types: blocking, blurring, and ringing; they are caused by quantization and removal of high-frequency DCT coefficients.",computer-vision
43515b68-2b7a-4e7f-ae2b-cfa48524d0fb,3d-part-segmentation,3D Part Segmentation,"Segmenting 3D object parts
( Image credit: [MeshCNN: A Network with an Edge](https://arxiv.org/pdf/1809.05910v2.pdf) )",computer-vision
cb3a226e-26a4-4f5b-8a16-5eb5a0102d07,3d-human-reconstruction,3D Human Reconstruction,,computer-vision
317890fa-9504-498f-b788-1e6cbc7235e6,scene-recognition,Scene Recognition,,computer-vision
f160dfd4-0096-4b4d-844d-2dd47e982096,image-manipulation,Image Manipulation,,computer-vision
8ba94f70-bc5c-48df-afdd-4ca442ee7e04,spoof-detection,Spoof Detection,,computer-vision
f237d033-59d3-4fcc-9efb-755f5ec93f5e,image-to-image-translation,Image-to-Image Translation,"**Image-to-Image Translation** is a task in computer vision and machine learning where the goal is to learn a mapping between an input image and an output image, such that the output image can be used to perform a specific task, such as style transfer, data augmentation, or image restoration.
( Image credit: [Unpaired Image-to-Image Translation
using Cycle-Consistent Adversarial Networks](https://arxiv.org/pdf/1703.10593v6.pdf) )",computer-vision
f3daf541-2023-4d90-8044-24ccbe1b90bb,fashion-understanding,Fashion Understanding,,computer-vision
b087eaf3-66e6-4366-8886-d8017ca272b8,imputation,Imputation,Substituting missing data with values according to some criteria.,computer-vision
52a3f415-2b8d-48f5-afc6-6199e0b04b13,thermal-image-denoising,Thermal Image Denoising,,computer-vision
32402433-6490-45a8-b646-2dac457771b1,open-world-semi-supervised-learning,Open-World Semi-Supervised Learning,,computer-vision
5aea3535-4850-46d0-8350-10f6ef2d681a,human-pose-forecasting,Human Pose Forecasting,"Human pose forecasting is the task of detecting and predicting future human poses.
( Image credit: [EgoPose](https://github.com/Khrylx/EgoPose) )",computer-vision
d22332fd-85b9-4cd6-a7ad-6750239445bb,corpus-video-moment-retrieval,Corpus Video Moment Retrieval,The task extends the Single Video Moment Retrieval task to the Corpus setup where a single textual query is used to temporally localize relevant moments across all videos in the dataset.,computer-vision
318739fc-4397-4666-b4a3-9da7d423bdce,amodal-layout-estimation,Amodal Layout Estimation,"Amodal scene layout estimation involves estimating the static and dynamic portion of an urban driving scene in bird's-eye view, given a single image. The concept of ""amodal"" estimation refers to the fact that we also estimate layout of parts of the scene that are not observable in the image.",computer-vision
9ee31595-89f9-4142-9d68-f952e12e3cb4,egocentric-pose-estimation,Egocentric Pose Estimation,,computer-vision
55bf2074-2af5-4672-b67c-b234e42508fc,anomaly-classification,Anomaly Classification,,computer-vision
e788cda2-8b83-4627-afba-70ba44dce400,2d-human-pose-estimation,2D Human Pose Estimation,"Human pose estimation is the process of estimating the configuration of the body (pose) from a single, typically monocular, image. It is one of the key problems in computer vision and has been studied for well over 15 years, owing to the abundance of applications that can benefit from such a technology. For example, human pose estimation allows for higher-level reasoning in the context of human-computer interaction and activity recognition; it is also one of the basic building blocks for marker-less motion capture (MoCap) technology. MoCap technology is useful for applications ranging from character animation to clinical analysis of gait pathologies.",computer-vision
42d1117c-5d9d-4a4b-8438-ea39c0b8b108,face-clustering,Face Clustering,Face clustering in videos.,computer-vision
036b2d23-edef-48c7-9eca-ee324b5dfeb5,visual-analogies,Visual Analogies,,computer-vision
2b610386-d6a1-422e-9b8f-24a96156daf8,classification-of-hyperspectral-images,Classification Of Hyperspectral Images,,computer-vision
37f72e51-e56e-453d-8b6b-9aeac38400c2,single-image-based-hdr-reconstruction,Single-Image-Based HDR Reconstruction,,computer-vision
e3510881-f52a-47dd-a51f-58b2dfd67d8c,landmark-based-segmentation,Landmark-based segmentation,,computer-vision
cb10eba4-d2b6-4fb1-9a97-e02dc4fc781e,animal-pose-estimation,Animal Pose Estimation,"Animal pose estimation is the task of identifying the pose of an animal.
( Image credit: [Using DeepLabCut for 3D markerless pose estimation across species and behaviors](http://www.mousemotorlab.org/s/NathMathis2019.pdf) )",computer-vision
e1c3dce4-f733-49f0-a777-066539f4d51f,3d-reconstruction,3D Reconstruction,"**3D Reconstruction** is the task of creating a 3D model or representation of an object or scene from 2D images or other data sources. The goal of 3D reconstruction is to create a virtual representation of an object or scene that can be used for a variety of purposes, such as visualization, animation, simulation, and analysis. It can be used in fields such as computer vision, robotics, and virtual reality.
Image: [Gwak et al](https://arxiv.org/pdf/1705.10904v2.pdf)",computer-vision
20611c4e-143f-475b-aff4-3dde9aeb09ce,semi-supervised-person-instance-segmentation,Semi-Supervised Person Instance Segmentation,,computer-vision
c27a9a68-c746-4434-b464-cd1482718052,facial-expression-generation,Facial expression generation,,computer-vision
1c47b714-e9e5-4775-8f62-cb11390d4bc3,body-mass-index-bmi-prediction,Body Mass Index (BMI) Prediction,,computer-vision
7c454f09-a590-499c-92d2-d23db62e52fb,image-steganography,Image Steganography,"**Image Steganography** is a central topic in information hiding. The sender conceals a secret message in a cover image, obtaining a container image called the stego image, and transmits the secret message over a public channel by transferring the stego image. The receiver can then reveal the secret message from the stego image. Steganalysis is an attack on a steganography algorithm: a listener on the public channel intercepts the image and analyzes whether it contains secret information.
Source: [Invisible Steganography via Generative Adversarial Networks ](https://arxiv.org/abs/1807.08571)",computer-vision
b04b098c-18ea-4786-9542-d10b7482fcef,license-plate-detection,License Plate Detection,License Plate Recognition is an image-processing technology used to identify vehicles by their license plates. This technology is used in various security and traffic applications.,computer-vision
729d49b2-e0b4-4e9e-8448-be65165288c9,single-image-haze-removal,Single Image Haze Removal,,computer-vision
bdb5eff1-e426-459c-8d51-b1e4a418cf6d,semi-supervised-person-re-identification,Semi-Supervised Person Re-Identification,,computer-vision
c13be598-fc07-45f3-8c95-11239c0030f0,morphology-classification,Morphology classification,,computer-vision
21286c48-9707-4e8d-bd7e-95740bcc8cbe,defocus-estimation,Defocus Estimation,,computer-vision
0dd84be5-7e76-47b7-ada4-ce6c0fbcbbf3,mixed-reality,Mixed Reality,,computer-vision
1bd660ec-f2ec-4b1d-a433-751144311ffa,multi-modal-subspace-clustering,Multi-modal Subspace Clustering,,computer-vision
f9c8967e-d2f6-4249-8bf7-fa78d5c39c1f,hdr-reconstruction,HDR Reconstruction,,computer-vision
37bc9538-655e-448c-bf0d-ef5216ae1c6d,pso-convnets-dynamics-2,PSO-ConvNets Dynamics 2,Incorporating distilled Cucker-Smale elements into the PSO algorithm using KNN and intertwining training with SGD (pull-back method),computer-vision
59556099-d490-4e63-a527-5ad91c3421ed,3d-scene-graph-alignment,3D Scene Graph Alignment,,computer-vision
5c99a018-e5d1-43ff-b28d-e1a9caf7304e,hyperspectral-image-classification,Hyperspectral Image Classification,"**Hyperspectral Image Classification** is a task in the field of remote sensing and computer vision. It involves the classification of pixels in hyperspectral images into different classes based on their spectral signature. Hyperspectral images contain information about the reflectance of objects in hundreds of narrow, contiguous wavelength bands, making them useful for a wide range of applications, including mineral mapping, vegetation analysis, and urban land-use mapping. The goal of this task is to accurately identify and classify different types of objects in the image, such as soil, vegetation, water, and buildings, based on their spectral properties.
( Image credit: [Shorten Spatial-spectral RNN with Parallel-GRU for Hyperspectral Image Classification](https://arxiv.org/pdf/1810.12563v1.pdf) )",computer-vision
fca296aa-a734-4ad4-88c2-d0e675ed8fc7,image-to-video,Image to Video Generation,"**Image to Video Generation** refers to the task of generating a sequence of video frames based on a single still image or a set of still images. The goal is to produce a video that is coherent and consistent in terms of appearance, motion, and style, while also being temporally consistent, meaning that the generated video should look like a coherent sequence of frames that are temporally ordered. This task is typically tackled using deep generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), that are trained on large datasets of videos. The models learn to generate plausible video frames that are conditioned on the input image, as well as on any other auxiliary information, such as a sound or text track.",computer-vision
442e5355-c0fd-4334-92be-1f08619f4653,zero-shot-skeletal-action-recognition,Zero Shot Skeletal Action Recognition,Zero-Shot Learning for 3D skeletal action recognition,computer-vision
58504d09-8724-437f-9f94-548a467f47a0,single-image-desnowing,Single Image Desnowing,,computer-vision
10c3090d-f3de-4580-bade-f251a4ec5b0a,egocentric-activity-recognition,Egocentric Activity Recognition,,computer-vision
8edf9307-63b3-4908-a2b3-36f721dea0a8,salient-object-detection,RGB Salient Object Detection,"RGB salient object detection is a task based on visual attention mechanisms, in which algorithms aim to identify objects or regions that attract more attention than the surrounding areas in a scene or RGB image.
( Image credit: [Attentive Feedback Network for Boundary-Aware Salient Object Detection](http://openaccess.thecvf.com/content_CVPR_2019/papers/Feng_Attentive_Feedback_Network_for_Boundary-Aware_Salient_Object_Detection_CVPR_2019_paper.pdf) )",computer-vision
549cb7b5-3760-4031-8faa-5bc83302ae95,image-retrieval,Image Retrieval,"**Image Retrieval** is a computer vision task that involves searching for images in a large database that are similar to a given query image. The goal of image retrieval is to enable users to find images that match their interests or needs, based on visual similarity or other criteria.
( Image credit: [DELF](https://github.com/tensorflow/models/tree/master/research/delf) )",computer-vision
47203abc-97f0-473c-a99b-d121fbb4402c,video-object-tracking,Video Object Tracking,Video Object Tracking aims to track targets in videos using both spatial and temporal information. It is usually deeply integrated with tasks such as Object Detection and Video Object Detection.,computer-vision
7ca47c22-c80d-4eaf-b81a-4592e63e96dd,blind-image-deblurring,Blind Image Deblurring,"**Blind Image Deblurring** is a classical problem in image processing and computer vision, which aims to recover a latent image from a blurred input.
Source: [Learning a Discriminative Prior for Blind Image Deblurring ](https://arxiv.org/abs/1803.03363)",computer-vision
d9e3f3c2-dc72-47e2-bef4-53d76913622f,crack-segmentation,Crack Segmentation,,computer-vision
92a56df9-c15e-44d2-b0dd-6058f59a6ae0,3d-canonicalization,3D Canonicalization,3D Canonicalization is the process of estimating a transformation-invariant feature for classification and part segmentation tasks.,computer-vision
b05f2814-fe2a-4ce1-804f-6d9798d7f70c,multi-label-classification,Multi-Label Classification,"**Multi-Label Classification** is the supervised learning problem where an instance may be associated with multiple labels. This is an extension of single-label classification (i.e., multi-class, or binary) where each instance is only associated with a single class label.
Source: [Deep Learning for Multi-label Classification ](https://arxiv.org/abs/1502.05988)",computer-vision
870d516d-b7b3-483e-b0a4-50dcc7bcd152,transparent-object-depth-estimation,Transparent Object Depth Estimation,Estimating the 3D shape of transparent objects,computer-vision
8c3865e5-6fd8-456e-a177-57f5165a33c6,human-parsing,Human Parsing,"Human parsing is the task of segmenting a human image into different fine-grained semantic parts such as head, torso, arms and legs.
( Image credit: [Multi-Human-Parsing (MHP)
](https://github.com/ZhaoJ9014/Multi-Human-Parsing) )",computer-vision
088a7e4e-28a3-44f4-802a-758059c3ebf7,few-shot-image-classification,Few-Shot Image Classification,"**Few-Shot Image Classification** is a computer vision task that involves training machine learning models to classify images into predefined categories using only a few labeled examples of each category (typically < 6 examples). The goal is to enable models to recognize and classify new images with minimal supervision and limited data, without having to train on large datasets.
( Image credit: [Learning Embedding Adaptation for Few-Shot Learning](https://github.com/Sha-Lab/FEAT) )",computer-vision
ebe0e568-0727-4b23-812d-846f22761006,sar-image-despeckling,SAR Image Despeckling,"Despeckling is the task of suppressing speckle from Synthetic Aperture Radar (SAR) acquisitions.
Image credits: GRD Sentinel-1 SAR image despeckled with [SAR2SAR-GRD](https://arxiv.org/abs/2102.00692)",computer-vision
f9a22c73-6a42-4041-bca2-9f509eb87db6,face-transfer,Face Transfer,"**Face Transfer** is a method for mapping face performances of one individual to facial animations of another one. It uses facial expressions and head poses from the video of a source actor to generate a video of a target character. Face Transfer is a special case of image-to-image translation tasks.
Source: [Face Transfer with Generative Adversarial Network ](https://arxiv.org/abs/1710.06090)",computer-vision
2b33ced7-91af-41fb-b5cc-72439d13afd6,rf-based-gesture-recognition,RF-based Gesture Recognition,"RF-based gesture sensing and recognition has attracted increasing academic and industrial interest due to its various device-free applications in daily life, such as elder monitoring and mobile games. State-of-the-art approaches achieve accurate gesture sensing by using fine-grained RF signatures (such as CSI and the Doppler effect) but cannot reach the same accuracy with coarse-grained RF signatures such as received signal strength (RSS).
See e.g. Project Soli in depth: How radar-detected gestures could set the Pixel 4 apart. https://www.computerworld.com/article/3402019/google-project-soli-pixel-4.html
( Image credit: [Accurate Human Gesture Sensing With
Coarse-Grained RF Signatures](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8737967) )",computer-vision
58601124-dac1-4f43-8656-3889a7aff1a5,multi-label-image-classification,Multi-Label Image Classification,The Multi-Label Image Classification focuses on predicting labels for images in a multi-class classification problem where each image may belong to more than one class.,computer-vision
276fb4b6-5eef-483b-95e0-fff79e0e5483,super-resolution,Super-Resolution,"**Super-Resolution** is a task in computer vision that involves increasing the resolution of an image or video by generating missing high-frequency details from low-resolution input. The goal is to produce an output image with a higher resolution than the input image, while preserving the original content and structure.
( Credit: [MemNet](https://github.com/tyshiwo/MemNet) )",computer-vision
32406564-46f4-4463-8b37-05c411429d90,autonomous-driving,Autonomous Driving,"Autonomous driving is the task of driving a vehicle without human intervention.
Many of the state-of-the-art results can be found at more general task pages such as [3D Object Detection](https://paperswithcode.com/task/3d-object-detection) and [Semantic Segmentation](https://paperswithcode.com/task/semantic-segmentation).
(Image credit: [Exploring the Limitations of Behavior Cloning for Autonomous Driving](https://arxiv.org/pdf/1904.08980v1.pdf))",computer-vision
3501df27-0c7d-4066-9fb0-9720a292c663,foreground-segmentation,Foreground Segmentation,,computer-vision
3569178c-806e-49d3-82ec-1d5f9daaf827,weakly-supervised-action-recognition,Weakly-Supervised Action Recognition,Action recognition with single-point annotations in time (there are no action start/stop time annotations),computer-vision
861a0186-da4a-48cb-850a-269297882c69,unsupervised-anomaly-detection-with-specified-6,Unsupervised Anomaly Detection with Specified Settings -- 0.1% anomaly,,computer-vision
ee586069-057e-4fdc-845e-ea55428fc6ef,person-search,Person Search,"**Person Search** is a task which aims at matching a specific person among a great number of whole scene images.
Source: [Re-ID Driven Localization Refinement for Person Search ](https://arxiv.org/abs/1909.08580)",computer-vision
58cab7c2-a2f1-42ad-8bd9-e648fc1e97ea,personality-trait-recognition,Personality Trait Recognition,,computer-vision
bcaaa3a5-e2d9-4e45-91fd-67a84807895f,online-action-detection,Online Action Detection,Online action detection is the task of predicting the action as soon as it happens in a streaming video without access to video frames in the future.,computer-vision
ce537ad8-ecda-40ce-bf6d-d5455e125a81,crowd-counting,Crowd Counting,"**Crowd Counting** is the task of counting people in an image. It is mainly used in real life for automated public monitoring such as surveillance and traffic control. Different from object detection, Crowd Counting aims at recognizing arbitrarily sized targets in various situations, including sparse and cluttered scenes, at the same time.
Source: [Deep Density-aware Count Regressor ](https://arxiv.org/abs/1908.03314)",computer-vision
9568f316-33d7-490e-b010-bff124c55fb8,object-localization,Object Localization,"**Object Localization** is the task of locating an instance of a particular object category in an image, typically by specifying a tightly cropped bounding box centered on the instance. An object proposal specifies a candidate bounding box, and an object proposal is said to be a correct localization if it sufficiently overlaps a human-labeled “ground-truth” bounding box for the given object. In the literature, the “Object Localization” task is to locate one instance of an object category, whereas “object detection” focuses on locating all instances of a category in a given image.
Source: [Fast On-Line Kernel Density Estimation for Active Object Localization ](https://arxiv.org/abs/1611.05369)",computer-vision
b2e1a71e-3a6f-43c5-a46f-15172984c28c,age-invariant-face-recognition,Age-Invariant Face Recognition,"Age-invariant face recognition is the task of performing face recognition that is invariant to differences in age.
( Image credit: [Look Across Elapse](https://arxiv.org/pdf/1809.00338v2.pdf) )",computer-vision
b931f8fd-ad88-48ba-8156-fdee4655a91f,video-object-segmentation,Video Object Segmentation,"Video object segmentation is a binary labeling problem aiming to separate foreground object(s) from the background region of a video.
For leaderboards please refer to the different subtasks.",computer-vision
c7301024-901b-43e5-a4c0-60f798a9ef59,semi-supervised-sketch-based-image-retrieval,Semi-Supervised Sketch Based Image Retrieval,"Whilst the number of photos can be easily scaled, each corresponding sketch still needs to be individually produced for fine-grained sketch-based image retrieval. The objective is to mitigate such an upper bound on sketch data and study whether unlabelled photos alone (of which there are many) can be cultivated for performance gain.",computer-vision
d170e810-6c91-475d-9a5c-f23440ba52c3,facial-landmark-detection,Facial Landmark Detection,"**Facial Landmark Detection** is a computer vision task that involves detecting and localizing specific points or landmarks on a face, such as the eyes, nose, mouth, and chin. The goal is to accurately identify these landmarks in images or videos of faces in real-time and use them for various applications, such as face recognition, facial expression analysis, and head pose estimation.
( Image credit: [Style Aggregated Network for Facial Landmark Detection](https://arxiv.org/pdf/1803.04108v4.pdf) )",computer-vision
d4509f0e-1e9c-47eb-98a3-11badc6344f6,point-supervised-instance-segmentation,Point-Supervised Instance Segmentation,Weakly-Supervised Instance Segmentation using Point Labels,computer-vision
ba5ad070-d359-415e-906f-e5c033299405,image-generation,Image Generation,"**Image Generation** (synthesis) is the task of generating new images from an existing dataset.
- **Unconditional generation** refers to generating samples unconditionally from the dataset, i.e. $p(y)$
- **[Conditional image generation](/task/conditional-image-generation)** (subtask) refers to generating samples conditionally from the dataset, based on a label, i.e. $p(y|x)$.
In this section, you can find state-of-the-art leaderboards for **unconditional generation**. For conditional generation, and other types of image generations, refer to the subtasks.
( Image credit: [StyleGAN](https://github.com/NVlabs/stylegan) )",computer-vision
5ac5ef8d-08b1-4baa-97ad-2f8cd36b7fcc,video-prediction,Video Prediction,"**Video Prediction** is the task of predicting future frames given past video frames.
Gif credit: [MAGVIT ](https://magvit.cs.cmu.edu/)
Source: [Photo-Realistic Video Prediction on Natural Videos of Largely Changing Frames ](https://arxiv.org/abs/2003.08635)",computer-vision
239fec5b-cdc3-4b55-b3bd-66516202834f,gender-prediction,Gender Prediction,,computer-vision
b9fd7d07-729d-469c-b95f-a56ae52f7391,reference-based-super-resolution,Reference-based Super-Resolution,Reference-based Super-Resolution aims to recover high-resolution images by utilizing external reference images containing similar content to generate rich textures.,computer-vision
01ec8b07-3f85-4c84-9aaa-2b2bf178cd48,gait-identification,Gait Identification,,computer-vision
9022b29a-2857-466f-90b3-325f73b6ef16,network-interpretation,Network Interpretation,,computer-vision
7e701f53-8fbe-4307-883f-ca22770515e7,video-semantic-segmentation,Video Semantic Segmentation,,computer-vision
233664a2-f489-4579-bf07-b506a0d24b4d,aggregate-xview3-metric,Aggregate xView3 Metric,"The aggregate xView3 metric is the combination of five metrics: object detection F1 score, close-to-shore object detection F1 score, vessel/not vessel classification F1 score, fishing/not fishing classification F1 score, and vessel length estimation percent error regression.",computer-vision
b11a1c4b-1eeb-45c8-9e4a-7f5fe7e63f27,vehicle-pose-estimation,Vehicle Pose Estimation,"Image Credit: [GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision, ECCV'20](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123600511.pdf)",computer-vision
118f7b0f-0005-4e8d-a8f9-1b0b80842fc3,persuasion-strategies,Persuasion Strategies,Prediction of Persuasion Strategy in Advertisements,computer-vision
12d9df88-1557-4a7a-93a8-beaeedd3a208,semi-supervised-video-classification,Semi-Supervised Video Classification,,computer-vision
fd955d45-a243-4a00-b9d2-7b0661b44bce,imagedocument-clustering,Image/Document Clustering,,computer-vision
0b981bec-289b-4190-8215-e990f527a39f,hyperspectral,Hyperspectral,,computer-vision
fc2b60b1-3b5e-43aa-aefd-992d62ff82f1,video-stabilization,Video Stabilization,,computer-vision
30aa6813-8d17-4c22-9a4b-431a27c11969,video-story-qa,Video Story QA,"Multiple-choice question answering about clips from movies, TV shows, etc.",computer-vision
1a90a981-2023-4637-862b-1429cf95044c,multiple-object-forecasting,Multiple Object Forecasting,"( Image credit: [Multiple Object Forecasting](https://github.com/olly-styles/Multiple-Object-Forecasting) )",computer-vision
7d78341e-40b0-4d85-aefe-f2ca241521e0,unsupervised-person-re-identification,Unsupervised Person Re-Identification,,computer-vision
5ce64826-7c1c-49e5-9545-5d5f058da00a,shadow-detection,Shadow Detection,,computer-vision
d2b98624-3b34-4191-999e-f164a70fc9ec,indoor-scene-reconstruction,Indoor Scene Reconstruction,,computer-vision
3893910b-e793-4b53-8608-eac685650d43,line-segment-detection,Line Segment Detection,,computer-vision
2367138e-6791-4d69-9483-7238781cada8,continuous-affect-estimation,Continuous Affect Estimation,,computer-vision
42ff85ed-2d0c-47af-9def-9f22891c073f,monocular-depth-estimation,Monocular Depth Estimation,"**Monocular Depth Estimation** is the task of estimating the depth value (distance relative to the camera) of each pixel given a single (monocular) RGB image. This challenging task is a key prerequisite for determining scene understanding for applications such as 3D scene reconstruction, autonomous driving, and AR. State-of-the-art methods usually fall into one of two categories: designing a complex network that is powerful enough to directly regress the depth map, or splitting the input into bins or windows to reduce computational complexity. The most popular benchmarks are the KITTI and NYUv2 datasets. Models are typically evaluated using RMSE or absolute relative error.
Source: [Defocus Deblurring Using Dual-Pixel Data ](https://arxiv.org/abs/2005.00305)",computer-vision
3f0f7c8f-da5f-4326-9a7c-2f2cb3c3a3a4,multi-object-tracking,Multi-Object Tracking,"**Multi-Object Tracking** is a task in computer vision that involves detecting and tracking multiple objects within a video sequence. The goal is to identify and locate objects of interest in each frame and then associate them across frames to keep track of their movements over time. This task is challenging due to factors such as occlusion, motion blur, and changes in object appearance, and is typically solved using algorithms that integrate object detection and data association techniques.",computer-vision
050e614a-5cb8-4a01-a450-8e0ff71d4de7,zero-shot-transfer-3d-point-cloud,Zero-Shot Transfer 3D Point Cloud Classification,,computer-vision
5f3f4463-7f2c-41a6-93c3-7289b713dee1,point-cloud-generation,Point Cloud Generation,,computer-vision
4950e48b-cea4-4f5d-af14-3b03b879515c,lane-detection,Lane Detection,"**Lane Detection** is a computer vision task that involves identifying the boundaries of driving lanes in a video or image of a road scene. The goal is to accurately locate and track the lane markings in real-time, even in challenging conditions such as poor lighting, glare, or complex road layouts.
Lane detection is an important component of advanced driver assistance systems (ADAS) and autonomous vehicles, as it provides information about the road layout and the position of the vehicle within the lane, which is crucial for navigation and safety. The algorithms typically use a combination of computer vision techniques, such as edge detection, color filtering, and Hough transforms, to identify and track the lane markings in a road scene.
( Image credit: [End-to-end Lane Detection
](https://github.com/wvangansbeke/LaneDetection_End2End) )",computer-vision
3f9fbd0e-ec0a-4626-9a25-b41a340f1dd8,multimodal-unsupervised-image-to-image,Multimodal Unsupervised Image-To-Image Translation,"Multimodal unsupervised image-to-image translation is the task of producing multiple translations to one domain from a single image in another domain.
( Image credit: [MUNIT: Multimodal UNsupervised Image-to-image Translation](https://github.com/NVlabs/MUNIT) )",computer-vision
ce20e7a7-260d-4021-a5bf-af4f239b1b56,human-detection,Human Detection,,computer-vision
236f6adb-3f6f-46ac-b0e2-438242ff9731,abnormal-event-detection-in-video,Abnormal Event Detection In Video,"**Abnormal Event Detection In Video** is a challenging task in computer vision, as the definition of what an abnormal event looks like depends very much on the context. For instance, a car driving by on the street is regarded as a normal event, but if the car enters a pedestrian area, this is regarded as an abnormal event. A person running on a sports court (normal event) versus running outside from a bank (abnormal event) is another example. Although what is considered abnormal depends on the context, we can generally agree that abnormal events should be unexpected events that occur less often than familiar (normal) events
Source: [Unmasking the abnormal events in video ](https://arxiv.org/abs/1705.08182)
Image: [Ravanbakhsh et al](https://arxiv.org/pdf/1708.09644v1.pdf)",computer-vision
c374c4b6-eed5-467e-b4e0-28103d4ce7b7,single-image-deraining,Single Image Deraining,,computer-vision
c69155b7-cb78-4963-befb-225f6cfe8763,fake-image-attribution,Fake Image Attribution,Attribute the origin (model/architecture) of fake images.,computer-vision
08bf7f3e-7584-4f5d-a478-8e94918dd718,pose-retrieval,Pose Retrieval,Retrieval of similar human poses from images or videos,computer-vision
9083bd73-8817-41df-ad02-ad71036259bc,joint-deblur-and-frame-interpolation,Joint Deblur and Frame Interpolation,,computer-vision
9e337011-7a8c-4730-a2e9-20a988737288,low-light-pedestrian-detection,Low-light Pedestrian Detection,Pedestrian Detection in low-light condition,computer-vision
943c4f1f-4a8e-4394-8260-4a821a98dbc6,action-recognition-in-videos-2,Action Recognition In Videos,"**Action Recognition in Videos** is a task in computer vision and pattern recognition where the goal is to identify and categorize human actions performed in a video sequence. The task involves analyzing the spatiotemporal dynamics of the actions and mapping them to a predefined set of action classes, such as running, jumping, or swimming.",computer-vision
e333d5ff-4de7-4e17-9c9f-77d81605a549,video-enhancement,Video Enhancement,,computer-vision
6f30ede3-8c2a-43b6-855c-9c191c63b91b,uie,UIE,,computer-vision
a53e07d8-099a-4f6c-bc8b-74e7f69b949f,amodal-tracking,Amodal Tracking,,computer-vision
892bbbfc-5617-4175-99ae-daaeeed70cd5,scanpath-prediction,Scanpath prediction,Learning to Predict Sequences of Human Fixations.,computer-vision
76fe3409-5393-4509-b016-5b4059aaa003,semi-supervised-instance-segmentation,Semi-Supervised Instance Segmentation,,computer-vision
80a36ee5-d170-478d-b550-692b78d56f97,weakly-supervised-semantic-segmentation,Weakly-Supervised Semantic Segmentation,"The semantic segmentation task is to assign a label from a label set to each pixel in an image. In the fully supervised setting, the dataset consists of images and their corresponding pixel-level, class-specific annotations (expensive pixel-level annotations). In the weakly-supervised setting, however, the dataset consists of images and corresponding annotations that are relatively easy to obtain, such as tags/labels of the objects present in the image.
( Image credit: [Weakly-Supervised Semantic Segmentation Network with Deep Seeded Region Growing](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_Weakly-Supervised_Semantic_Segmentation_CVPR_2018_paper.pdf) )",computer-vision
bd286440-4ed1-42ce-8336-90756db9c57f,image-inpainting,Image Inpainting,"**Image Inpainting** is a task of reconstructing missing regions in an image. It is an important problem in computer vision and an essential functionality in many imaging and graphics applications, e.g. object removal, image restoration, manipulation, re-targeting, compositing, and image-based rendering.
Source: [High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling ](https://arxiv.org/abs/2005.11742)
Image source: [High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling](https://arxiv.org/pdf/2005.11742.pdf)",computer-vision
c806855b-95b4-4901-abc3-1959d4b740f8,visibility-estimation-from-point-cloud,Visibility Estimation from Point Cloud,"Estimate the point-wise visibility of each point from a given point of view (a point, or a view frustum).",computer-vision
da327067-0ce5-4331-b0d4-173d9e601824,weakly-supervised-object-detection,Weakly Supervised Object Detection,"Weakly Supervised Object Detection (WSOD) is the task of training object detectors with only image tag supervisions.
( Image credit: [Soft Proposal Networks for Weakly Supervised Object Localization](https://arxiv.org/pdf/1709.01829v1.pdf) )",computer-vision
a941cb65-71a0-421f-935e-ab1509d60009,lidar-semantic-segmentation,LIDAR Semantic Segmentation,,computer-vision
9a042aa9-d285-4db6-8d20-db34f54daf1c,6d-pose-estimation-1,6D Pose Estimation,Image: [Zeng et al](https://arxiv.org/pdf/1609.09475v3.pdf),computer-vision
b7b09d5c-85d4-40f9-9f99-40a4702a57b3,boundary-detection,Boundary Detection,"**Boundary Detection** is a vital part of extracting information encoded in images, allowing for the computation of quantities of interest including density, velocity, pressure, etc.
Source: [A Locally Adapting Technique for Boundary Detection using Image Segmentation ](https://arxiv.org/abs/1707.09030)",computer-vision
b148e7b3-bd0f-4988-a8cf-d1300cfac7f8,class-incremental-semantic-segmentation,Class-Incremental Semantic Segmentation,Semantic segmentation with continuous increments of classes.,computer-vision
a7e85883-ec5e-429f-ba9d-d57c495a347d,image-compression-artifact-reduction,Image Compression Artifact Reduction,,computer-vision
e9cce948-f959-4514-a396-d7c289c029e7,face-image-quality-assessment,Face Image Quality Assessment,,computer-vision
71de3358-22d8-425b-9247-a9cf484f476d,visual-object-tracking,Visual Object Tracking,"**Visual Object Tracking** is an important research topic in computer vision, image understanding and pattern recognition. Given the initial state (centre location and scale) of a target in the first frame of a video sequence, the aim of Visual Object Tracking is to automatically obtain the states of the object in the subsequent video frames.
Source: [Learning Adaptive Discriminative Correlation Filters via Temporal Consistency Preserving Spatial Feature Selection for Robust Visual Object Tracking ](https://arxiv.org/abs/1807.11348)",computer-vision
711e27db-e6c9-4c43-b901-54afa1e98b6b,weakly-supervised-temporal-action,Weakly-supervised Temporal Action Localization,Temporal Action Localization with weak supervision where only video-level labels are given for training,computer-vision
5ce200a7-ffc8-4fa4-b376-95e90a05c563,interactive-segmentation,Interactive Segmentation,,computer-vision
ba29903d-0227-4579-a400-69a624351e22,weakly-supervised-object-localization,Weakly-Supervised Object Localization,,computer-vision
afa9eaf3-0829-499e-b28c-93b74ea17344,pulmorary-vessel-segmentation,Pulmonary Vessel Segmentation,,computer-vision
158cfcb0-609a-40c4-9213-43bab1041a9c,detecting-shadows,Detecting Shadows,,computer-vision
381cc9be-f358-494a-b8d7-d3519be44047,handwritten-chinese-text-recognition,Handwritten Chinese Text Recognition,"Handwritten Chinese text recognition is the task of interpreting handwritten Chinese input, e.g., from images of documents or scans.",computer-vision
2b97c7d1-6eac-4ca4-bac3-41d7231b17e3,viewpoint-estimation,Viewpoint Estimation,,computer-vision
dcb3ae40-72b5-447d-b852-b7b6d709ada1,autonomous-navigation,Autonomous Navigation,"Autonomous navigation is the task of autonomously navigating a vehicle or robot to or around a location without human guidance.
( Image credit: [Approximate LSTMs for Time-Constrained Inference:
Enabling Fast Reaction in Self-Driving Cars](https://arxiv.org/pdf/1905.00689v2.pdf) )",computer-vision
e37f6272-b377-493a-9714-56f21c7b0c65,facial-makeup-transfer,Facial Makeup Transfer,Facial makeup transfer aims to translate the **makeup style** from a given *reference* makeup face image to another non-makeup one while *preserving face identity*.,computer-vision
24d902e5-1fa4-4fa6-b46d-4b555ec9eedb,3d-multi-person-pose-estimation,3D Multi-Person Pose Estimation,"This task aims to solve root-relative 3D multi-person pose estimation. No human bounding box or root joint coordinate ground truth is used at test time.
( Image credit: [RootNet](https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE) )",computer-vision
d017d45a-274f-491e-9753-10a05ecee02b,few-shot-3d-point-cloud-classification,Few-Shot 3D Point Cloud Classification,,computer-vision
8e4cb4bd-b1ab-4520-a3f7-ad4e508e5209,video-similarity,Video Similarity,,computer-vision
0759ff25-b226-4b8a-b6dc-5bd0e3edcc2e,tumor-segmentation,Tumor Segmentation,Tumor Segmentation is the task of identifying the spatial location of a tumor. It is a pixel-level prediction where each pixel is classified as a tumor or background. The most popular benchmark for this task is the BraTS dataset. The models are typically evaluated with the Dice Score metric.,computer-vision
1111852f-2259-40fe-afa8-23dbaa86a91f,unbiased-scene-graph-generation,Unbiased Scene Graph Generation,"Unbiased Scene Graph Generation (Unbiased SGG) aims to predict more informative scene graphs composed of more ""tail predicates"" (in contrast to ""head predicates"" in terms of class frequencies) by dealing with the skewed, long-tailed predicate class distribution. (Definition from Chiou et al. ""Recovering the Unbiased Scene Graphs from the Biased Ones"")",computer-vision
dd032db3-e8f3-43d5-8705-e5f98371607c,irregular-text-recognition,Irregular Text Recognition,"Reading text from an image can be difficult when the text appears at an improper angle or in an unusual font. Irregular Text Recognition is the task of recognizing such text in images.",computer-vision
76700e29-6b3d-4494-888a-26ced06d8e41,bird-view-synthesis,Bird View Synthesis,,computer-vision
7d593f5a-7d93-4d5d-94ee-54acb92c823e,underwater-3d-scene-reconstruction,Underwater 3D Scene Reconstruction,,computer-vision
20375cf3-52bf-41d2-bffe-dad05e7dfb87,data-free-quantization,Data Free Quantization,"**Data Free Quantization** is a technique to achieve a highly accurate quantized model without accessing any training data.
Source: [Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples](https://arxiv.org/abs/2111.02625)",computer-vision
f2f429e0-40ee-48fa-9dee-d66f458be413,3d-point-cloud-part-segmentation,3D Point Cloud Part Segmentation,"3D point cloud part segmentation on datasets like ShapeNet Part, etc.",computer-vision
d364ac80-6a5a-4980-9b2f-13c2adb1d119,text-guided-generation,Text-Guided Generation,,computer-vision
e364cfac-85dc-4130-adf2-65e3b84cf18e,unconditional-video-generation,Unconditional Video Generation,,computer-vision
faae0907-ba0b-4db8-b7e0-7be17f9384c3,semi-supervised-medical-image-segmentation,Semi-supervised Medical Image Segmentation,,computer-vision
4ffdbc8c-f28b-43a9-bc69-919754ca524c,affordance-detection,Affordance Detection,"Affordance detection refers to identifying the potential action possibilities of objects in an image, which is an important ability for robot perception and manipulation. Unlike other visual or physical properties that mainly describe the object alone, affordances indicate functional interactions of object parts with humans.
Image source: [Object-Based Affordances Detection with Convolutional Neural Networks and Dense Conditional Random Fields](https://dkanou.github.io/publ/P15__Nguyen_Kanoulas_Caldwell_Tsagarakis__2017__Object-Based_Affordances_Detection_with_Convolutional_Neural_Networks_and__Dense_Conditional_Random_Fields.pdf)",computer-vision
d5d08565-af55-4b5b-815a-421608c70408,image-categorization,Image Categorization,,computer-vision
83edbe1e-ca61-4086-a3a4-32163db01ee6,cell-detection,Cell Detection,Cell Detection,computer-vision
ef97a6d6-40a0-4f17-acee-fb4d57542ad4,ad-hoc-video-search,Ad-hoc video search,"The Ad-hoc search task ended a 3-year cycle from 2016 to 2018 with the goal of modeling the end-user search use case, in which a user searches (using textual sentence queries) for segments of video containing persons, objects, activities, locations, etc., and combinations of the former. While the Internet Archive (IACC.3) dataset was adopted from 2016 to 2018, starting in 2019 a new data collection based on Vimeo Creative Commons (V3C) will be adopted to support the task for at least 3 more years.
Given the test collection (V3C1 or IACC.3), master shot boundary reference, and set of Ad-hoc queries (approx. 30 queries) released by NIST, return for each query a list of at most 1000 shot IDs from the test collection ranked according to their likelihood of containing the target query.",computer-vision
6b2f1479-31f4-4b05-8b7c-2bccd766648e,unsupervised-landmark-detection,Unsupervised Landmark Detection,"The discovery of object landmarks on a set of images depicting objects of the same category, directly from raw images without using any manual annotations.",computer-vision
34d6348b-6e82-4245-a8ff-0a6967a95fdb,video-saliency-detection,Video Saliency Detection,,computer-vision
68820a8c-75b4-445b-ab81-e927a46093b5,unsupervised-anomaly-detection-with-specified,Unsupervised Anomaly Detection with Specified Settings -- 30% anomaly,,computer-vision
3565c93a-0c70-43c6-82fe-786295c04e30,dense-pixel-correspondence-estimation,Dense Pixel Correspondence Estimation,,computer-vision
dea0651c-64e4-4edc-8052-18c52bc4db79,multi-view-learning,Multi-View Learning,"**Multi-View Learning** is a machine learning framework where data are represented by multiple distinct feature groups, and each feature group is referred to as a particular view.
Source: [Dissimilarity-based representation for radiomics applications ](https://arxiv.org/abs/1803.04460)",computer-vision
6e306be4-26c9-42aa-9cf1-d4bf091a1ec3,action-localization,Action Localization,"Action Localization is the task of finding the spatial and temporal coordinates of an action in a video. An action localization model identifies the frames in which an action starts and ends in a video and returns the x,y coordinates of the action. Furthermore, the coordinates change when the object performing the action undergoes a displacement.",computer-vision
5bd883a1-4bb0-479f-a472-85a5a5e13221,single-image-dehazing,Single Image Dehazing,,computer-vision
17c4cf18-c99f-4aa9-8cdf-9ff493fc9957,artistic-style-classification,Artistic style classification,Classify the artistic style of an artwork image,computer-vision
b62a1ce9-f847-40df-b455-196dd344647a,text-to-shape-generation,Text-to-Shape Generation,,computer-vision
fb743af1-b63b-43ff-9f3a-8ad6f6ac26fb,skills-assessment,Skills Assessment,,computer-vision
0f8fce53-c324-4766-b65b-02b584a6bf3d,3d-semantic-segmentation,3D Semantic Segmentation,"**3D Semantic Segmentation** is a computer vision task that involves dividing a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal of 3D semantic segmentation is to identify and label different objects and parts within a 3D scene, which can be used for applications such as robotics, autonomous driving, and augmented reality.",computer-vision
85fdee85-e864-4eb2-ab43-ee2cc5398435,style-generalization,Style Generalization,,computer-vision
b6e93d80-1add-412c-a0fa-3fda84752a6a,handwritten-word-segmentation,Handwritten Word Segmentation,Handwritten Word Segmentation from Document Images.,computer-vision
b6b992ae-4435-43cb-b0cd-59171b27fa78,unsupervised-image-decomposition,Unsupervised Image Decomposition,,computer-vision
5cedfcf7-6db1-4399-81ff-9a5a56218eae,hand-segmentation,Hand Segmentation,,computer-vision
2d812df1-1eca-4fce-8a0f-36bcb3085361,novel-view-synthesis,Novel View Synthesis,"Synthesize a target image with an arbitrary target camera pose from given source images and their camera poses.
( Image credit: [Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence](https://github.com/shaohua0116/Multiview2Novelview) )",computer-vision
0bb6a869-89ae-45a3-b835-cace26582558,sparse-representation-based-classification,Sparse Representation-based Classification,Sparse Representation-based Classification is a task based on describing the data as a linear combination of a few building blocks (atoms) taken from a pre-defined dictionary of such fundamental elements.,computer-vision
fa4ec37e-f7c1-4bb1-879d-21e6770a4e45,composed-image-retrieval,Composed Image Retrieval (CoIR),"**Composed Image Retrieval (CoIR)** is the task of retrieving images from a large database based on a query composed of multiple elements, such as text, images, and sketches. The goal is to develop algorithms that can understand and combine multiple sources of information to accurately retrieve images that match the query, extending the user's ability to express their intent.",computer-vision
9f46d6d8-551e-437f-9431-73037d1eada0,drawing-pictures,Drawing Pictures,,computer-vision
88079a6a-cf24-4f3f-9f50-d1c52c980aed,patch-matching,Patch Matching,,computer-vision
6f52297e-6d03-46d7-bd1b-3589f79be3b3,unsupervised-3d-human-pose-estimation,Unsupervised 3D Human Pose Estimation,,computer-vision
3905ad20-0503-4bb2-9153-09584940b63c,highlight-removal,highlight removal,,computer-vision
8272f372-92ce-472c-97a6-a927bee0ddb2,spatial-token-mixer,Spatial Token Mixer,Spatial Token Mixer (STM) is a module for vision transformers that aims to improve the efficiency of token mixing. STM is a type of depthwise convolution that operates on the spatial dimension of the tokens. STM is a drop-in replacement for the token mixing layers in vision transformers.,computer-vision
59975817-b1d7-4a3f-a495-a6a4afd0d6a9,traffic-sign-detection,Traffic Sign Detection,,computer-vision
bbf5140c-50f6-4853-ba11-d86d780e279c,part-level-panoptic-segmentation,Part-aware Panoptic Segmentation,Panoptic segmentation with part-aware predictions.,computer-vision
b7b8e9a4-715f-4315-a051-17e41fc410b2,crowds,Crowds,,computer-vision
91849402-1312-4d53-8dd3-34688f57c364,blind-face-restoration,Blind Face Restoration,"Blind face restoration aims at recovering high-quality faces from the low-quality counterparts suffering from unknown degradation, such as low-resolution, noise, blur, compression artifacts, etc. When applied to real-world scenarios, it becomes more challenging, due to more complicated degradation, diverse poses and expressions.
Description source: [Towards Real-World Blind Face Restoration with Generative Facial Prior](https://paperswithcode.com/paper/towards-real-world-blind-face-restoration)
Image source: [Towards Real-World Blind Face Restoration with Generative Facial Prior](https://paperswithcode.com/paper/towards-real-world-blind-face-restoration)",computer-vision
6e4f646c-a3d7-42e6-8c6d-56c46652ce45,mri-reconstruction,MRI Reconstruction,"In its most basic form, MRI reconstruction consists in retrieving a complex-valued image from its under-sampled Fourier coefficients.
It can also be addressed as an encoder-decoder task, in which a normative model in the latent space captures only the relevant information without noise or corruption. The latent representation is then decoded to obtain the reconstructed MRI.",computer-vision
24124cc5-0a27-4c35-85d6-17d6aa982c82,semi-supervised-2d-and-3d-landmark-labeling,Semi-supervised 2D and 3D landmark labeling,,computer-vision
8593c4a4-f2f2-44bd-a2cb-e68a74afece9,occlusion-handling,Occlusion Handling,,computer-vision
5f0da5fa-655f-4a38-91f2-3ec2bc6a5733,pornography-detection,Pornography Detection,,computer-vision
502b1794-47db-4f88-94c3-ffb3db1304f6,unsupervised-long-term-person-re-1,Unsupervised Long Term Person Re-Identification,"Long-term Person Re-Identification (Clothes-Changing Person Re-ID) is a computer vision task in which the goal is to match a person's identity across different cameras, clothes, and locations in a video or image sequence. It involves detecting and tracking a person and then using features such as appearance and body shape to match their identity in different frames. The goal is to associate the same person across multiple non-overlapping camera views in a robust and efficient manner.",computer-vision
c22c5858-c210-42d6-a8e1-704e0d7b2328,3d-facial-landmark-localization,3D Facial Landmark Localization,Image: [Zhang et al](https://arxiv.org/pdf/1801.09242v1.pdf),computer-vision
5f6f1d79-91ec-4baa-aa3c-5ef85fc42da7,content-based-image-retrieval,Content-Based Image Retrieval,"**Content-Based Image Retrieval** is a well studied problem in computer vision, with retrieval problems generally divided into two groups: category-level retrieval and instance-level retrieval. Given a query image of the Sydney Harbour bridge, for instance, category-level retrieval aims to find any bridge in a given dataset of images, whilst instance-level retrieval must find the Sydney Harbour bridge to be considered a match.
Source: [Camera Obscurer: Generative Art for Design Inspiration ](https://arxiv.org/abs/1903.02165)",computer-vision
ef21c582-8c35-467e-8103-e28527a39ffb,photo-geolocation-estimation,Photo geolocation estimation,**Photo geolocation estimation** is the task of estimating or classifying the geolocation of a photo on the world map.,computer-vision
3fa6fe08-77e0-4a53-b534-53fb358afb8b,homography-estimation,Homography Estimation,"**Homography estimation** is a technique used in computer vision and image processing to find the relationship between two images of the same scene, but captured from different viewpoints. It is used to align images, correct for perspective distortions, or perform image stitching. In order to estimate the homography, a set of corresponding points between the two images must be found, and a mathematical model must be fit to these points. There are various algorithms and techniques that can be used to perform homography estimation, including direct methods, RANSAC, and machine learning-based approaches.",computer-vision
fa7d8134-e0e9-4c45-a809-93d6965a2971,driver-attention-monitoring,Driver Attention Monitoring,"Driver attention monitoring is the task of monitoring the attention of a driver.
( Image credit: [Predicting Driver Attention in Critical Situations](https://arxiv.org/pdf/1711.06406v3.pdf) )",computer-vision
ed60c7c0-4357-4857-81fd-1605c16edf63,group-activity-recognition,Group Activity Recognition,"**Group Activity Recognition** is a subset of the human activity recognition problem which focuses on the collective behavior of a group of people, resulting from the individual actions of the persons and their interactions. Collective activity recognition is a basic task for automatic human behavior analysis in many areas such as surveillance or sports videos.
Source: [A Multi-Stream Convolutional Neural Network Framework for Group Activity Recognition ](https://arxiv.org/abs/1812.10328)",computer-vision
a92ad31b-3933-46f4-ac4f-440a112582c2,calving-front-delineation-from-synthetic,Calving Front Delineation From Synthetic Aperture Radar Imagery,,computer-vision
4a9e015f-2cc0-4abe-8eed-7d44e9ed7afc,self-supervised-action-recognition,Self-Supervised Action Recognition,,computer-vision
f511b827-9a05-4d81-9126-291b57e5d79a,unet-segmentation,UNET Segmentation,"U-Net is an architecture for semantic segmentation. It consists of a contracting path (downsampling) and an expanding path (upsampling). During the contraction, the spatial information is reduced while feature information is increased.
The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step, we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer, a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.",computer-vision
caa24ef2-7ca0-4977-8351-26c542c8d6d0,brain-landmark-detection,Brain landmark detection,,computer-vision
eedce13a-437f-4932-8fa8-f22efc77fce4,overlapped-15-1,Overlapped 15-1,,computer-vision
839946ee-0896-455a-bf87-3e07ea8b9e9e,overlapped-50-50,Overlapped 50-50,,computer-vision
d47a3487-e84e-43a9-b82e-e5e75501002e,general-action-video-anomaly-detection,General Action Video Anomaly Detection,Detecting whether an entire short clip of any action features an anomalous motion - another action class not seen during training.,computer-vision
aa315c17-d3a8-48b3-ac7c-4fcb5dfc26c7,semantic-slam,Semantic SLAM,SLAM with semantic level scene understanding,computer-vision
890587db-a11b-44fa-a17b-449d72a631d8,factual-visual-question-answering,Factual Visual Question Answering,,computer-vision
fec537bf-b29f-43ae-91c8-64490a2b5dc9,video-harmonization,Video Harmonization,Video harmonization aims to adjust the foreground of a composite video to make it compatible with the background.,computer-vision
9d058d43-f993-4664-8262-2744f2bc72e0,6d-pose-estimation,6D Pose Estimation using RGB,"**6D Pose Estimation using RGB** refers to the task of determining the six degree-of-freedom (6D) pose of an object in 3D space based on RGB images. This involves estimating the position and orientation of an object in a scene, and is a fundamental problem in computer vision and robotics. In this task, the goal is to estimate the 6D pose of an object given an RGB image of the object and the scene, which can be used for tasks such as robotic manipulation, augmented reality, and scene reconstruction.
( Image credit: [Segmentation-driven 6D Object Pose Estimation](https://github.com/cvlab-epfl/segmentation-driven-pose) )",computer-vision
0b12c511-f7f9-47e3-abc2-0f69d64b0519,compressive-sensing,Compressive Sensing,"**Compressive Sensing** is a new signal processing framework for efficiently acquiring and reconstructing a signal that has a sparse representation in a fixed linear basis.
Source: [Sparse Estimation with Generalized Beta Mixture and the Horseshoe Prior ](https://arxiv.org/abs/1411.2405)",computer-vision
69c331e1-a625-4629-b5ad-9f003642ae3c,line-detection,Line Detection,,computer-vision
2796a19f-f344-4386-9bc6-d3d142f9535d,video-frame-interpolation,Video Frame Interpolation,"The goal of **Video Frame Interpolation** is to synthesize several frames in the middle of two adjacent frames of the original video. Video Frame Interpolation can be applied to generate slow motion video, increase video frame rate, and frame recovery in video streaming.
Source: [Reducing the X-ray radiation exposure frequency in cardio-angiography via deep-learning based video interpolation ](https://arxiv.org/abs/2006.00781)",computer-vision
92707feb-f61f-4587-9e8b-d0dfd94b1553,overlapped-100-50,Overlapped 100-50,,computer-vision
6e5407b1-50c1-4859-a6b9-4e997cbadccf,zero-shot-text-to-video-generation,Zero-shot Text-to-Video Generation,,computer-vision
322a70b1-0d4c-4354-ac40-68e5fd35fda1,semi-supervised-change-detection,Semi-supervised Change Detection,,computer-vision
9f9181e7-0545-4fcf-99b1-d7e74b40aa32,3d-multi-person-mesh-recovery,3D Multi-Person Mesh Recovery,,computer-vision
e64b0c02-f3e0-402c-bbee-fded5f485cd0,no-reference-image-quality-assessment,No-Reference Image Quality Assessment,An Image Quality Assessment approach where no reference image information is available to the model.,computer-vision
27246443-697b-41c2-868e-f689edf6f84e,motion-compensation,Motion Compensation,,computer-vision
ec4ee87e-cbc5-4eaa-98d1-ab4be06d2dcf,face-identification,Face Identification,Face identification is the task of matching a given face image to one in an existing database of faces. It is the second part of face recognition (the first part being detection). It is a one-to-many mapping: you have to find an unknown person in a database to find who that person is.,computer-vision
79a53c5c-0e39-42b6-93c2-0ee81abda6ae,continual-anomaly-detection,continual anomaly detection,,computer-vision
d230f098-8e3b-4819-93dd-2cc5b6756c7d,person-centric-visual-grounding,Person-centric Visual Grounding,"Person-centric visual grounding is the problem of linking between people named in a caption and people pictured in an image. Introduced in ""Who's Waldo? Linking People Across Text and Images"" (Cui et al, ICCV 2021).",computer-vision
466fea09-3aa3-4b65-ab57-3df0cad709b0,image-captioning,Image Captioning,"**Image Captioning** is the task of describing the content of an image in words. This task lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework, where an input image is encoded into an intermediate representation of the information in the image, and then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated according to a BLEU or CIDER metric.
( Image credit: [Reflective Decoding Network for Image Captioning, ICCV'19](https://openaccess.thecvf.com/content_ICCV_2019/papers/Ke_Reflective_Decoding_Network_for_Image_Captioning_ICCV_2019_paper.pdf))",computer-vision
a72fb15a-db3d-45f0-a02b-3ad41ffd7ed1,accident-anticipation,Accident Anticipation,,computer-vision
cce52691-2bb2-4298-9c2f-078864b10144,few-shot-object-detection,Few-Shot Object Detection,**Few-Shot Object Detection** is a computer vision task that involves detecting objects in images with limited training data. The goal is to train a model on a few examples of each object class and then use the model to detect objects in new images.,computer-vision
7e9c4ae2-ef4e-4d0b-9d5c-37c0cb9c51d1,color-mismatch-correction,Color Mismatch Correction,Color mismatch correction is the task of transferring color from one view of a stereopair to the corresponding areas of the other view where the colors are inconsistent. Color mismatches can decrease the overall video quality and may cause viewer discomfort and headaches. Similar tasks are color transfer and example-based colorization.,computer-vision
63ebc19d-c48d-4591-9efb-3f46290d7266,logo-recognition,Logo Recognition,,computer-vision
9b2fa227-40d9-475b-ae80-ce24a801797e,dichotomous-image-segmentation,Dichotomous Image Segmentation,"Currently, existing image segmentation tasks mainly focus on segmenting objects with specific characteristics, e.g., salient, camouflaged, meticulous, or specific categories. Most of them have the same input/output formats, and barely use exclusive mechanisms designed for segmenting targets in their models, which means almost all tasks are dataset-dependent. Thus, it is very promising to formulate a category-agnostic DIS task for accurately segmenting objects with different structure complexities, regardless of their characteristics. Compared with semantic segmentation, the proposed DIS task usually focuses on images with single or a few targets, from which getting richer accurate details of each target is more feasible.",computer-vision
693aa54d-a066-4b1d-b17b-b99f83d83ab6,perpetual-view-generation,Perpetual View Generation,**Perpetual View Generation** is the task of generating long-range novel views by flying into a given image.,computer-vision
700dc851-5722-4318-8e97-72045430e20e,image-morphing,Image Morphing,,computer-vision
5d12aeff-5b8a-4370-8aa3-29ad606bafbc,salient-object-detection-1,Salient Object Detection,,computer-vision
0e563457-198f-4560-92c2-b04641b02860,3d-absolute-human-pose-estimation,3D Absolute Human Pose Estimation,"This task aims to solve absolute (camera-centric not root-relative) 3D human pose estimation.
( Image credit: [RootNet](https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE) )",computer-vision
395a4abf-d834-463c-949f-7f7b7dde2b94,audio-visual-synchronization,Audio-Visual Synchronization,,computer-vision
c27b4df7-2165-46d2-bd97-00bc524a756d,small-object-detection,Small Object Detection,"**Small Object Detection** is a computer vision task that involves detecting and localizing small objects in images or videos. This task is challenging due to the small size and low resolution of the objects, as well as other factors such as occlusion, background clutter, and variations in lighting conditions.
( Image credit: [Feature-Fused SSD](https://arxiv.org/pdf/1709.05054v3.pdf) )",computer-vision
22706484-1c02-471c-af4e-42620892dd2f,lip-to-speech-synthesis,Lip to Speech Synthesis,"Given a silent video of a speaker, generate the corresponding speech that matches the lip movements.",computer-vision
df597a93-e3a6-44a0-82ed-fec095d70205,line-art-colorization,Line Art Colorization,,computer-vision
92bb059a-5c11-431a-8ddf-4adafcfecf97,parking-space-occupancy,Parking Space Occupancy,Image credit: [https://github.com/martin-marek/parking-space-occupancy](https://github.com/martin-marek/parking-space-occupancy),computer-vision
b5874d0c-c519-4413-9319-bb8492f88781,instance-segmentation,Instance Segmentation,"**Instance Segmentation** is a computer vision task that involves identifying and separating individual objects within an image, including detecting the boundaries of each object and assigning a unique label to each object. The goal of instance segmentation is to produce a pixel-wise segmentation map of the image, where each pixel is assigned to a specific object instance.
Image Credit: [Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers, CVPR'21](https://github.com/lkeab/BCNet)",computer-vision
dc38f685-d4d9-475b-bec1-31be6e5f58c3,image-level-supervised-instance-segmentation,Image-level Supervised Instance Segmentation,Weakly-Supervised Instance Segmentation using Image-level Labels,computer-vision
afe2bb93-3dae-4862-a13a-7e7ff45d40f4,observation-completion,Observation Completion,,computer-vision
bd35363c-a31f-454d-88fd-92f827e18b3c,cbc-test,CBC TEST,,computer-vision
a530fc8b-b039-4288-b0a3-e1b4db52ceeb,infinite-image-generation,Infinite Image Generation,"**Infinite Image Generation** refers to the task of generating an unlimited number of images that belong to a specific distribution or category. It is a challenging task, as it requires the model to capture the underlying patterns and distributions in the data, and generate images that are diverse, yet still follow the same patterns. There are various techniques and algorithms that can be used to perform infinite image generation, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Convolutional Neural Networks (CNNs).",computer-vision
4da3cae3-ab60-414e-9362-777de1bd1c61,video-alignment,Video Alignment,,computer-vision
a6011014-4795-441d-97df-5c9d2ce043e3,visual-dialogue,Visual Dialog,"Visual Dialog requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a follow-up question about the image, the task is to answer the question.",computer-vision
b4a8a320-5ac9-40af-b27b-e39d3c0c548e,3d-shape-reconstruction-from-videos,3D Shape Reconstruction from Videos,,computer-vision
c76fd5b8-e8f2-4e43-a57d-5997f83c34f4,scene-text-editing,Scene Text Editing,,computer-vision
1ab67372-ff56-407e-8024-5ff2c52bd3af,3d-point-cloud-interpolation,3D Point Cloud Interpolation,"Point cloud interpolation is a fundamental problem for 3D computer vision. Given a low temporal resolution (frame rate) point cloud sequence, the target of interpolation is to generate a smooth point cloud sequence with high temporal resolution (frame rate).",computer-vision
dcb22804-fb12-4797-927c-8b26c1e1520a,shape-representation-of-3d-point-clouds,Shape Representation Of 3D Point Clouds,,computer-vision
f5537253-b535-430b-af01-9a7bfec53115,hindi-image-captioning,Hindi Image Captioning,The main goal of this task is to generate a caption for an input image in the Hindi language.,computer-vision
49f0d30d-3330-4591-a275-a419075b950c,tone-mapping,Tone Mapping,,computer-vision
43e65456-4227-4281-9c33-e7fb5b30edb6,referring-image-matting-expression-based,Referring Image Matting (Expression-based),"Expression-based referring image matting, taking an image and a flowery expression as the input.",computer-vision
fc00dc51-f3ef-4e74-a7e8-65a1961ec5b5,multi-frame-super-resolution,Multi-Frame Super-Resolution,"When multiple images of the same view are taken from slightly different positions, perhaps also at different times, then they collectively contain more information than any single image on its own. Multi-Frame Super-Resolution fuses these low-res inputs into a composite high-res image that can reveal some of the original detail that cannot be recovered from any low-res image alone.
( Credit: [HighRes-net](https://github.com/ElementAI/HighRes-net) )",computer-vision
d54e8262-2359-4455-b4d9-002d1fc5fcb9,image-text-matching,Image-text matching,,computer-vision
f43079d6-d598-4156-8ff1-5988b5e64a14,image-restoration,Image Restoration,"**Image Restoration** is a family of inverse problems for obtaining a high quality image from a corrupted input image. Corruption may occur due to the image-capture process (e.g., noise, lens blur), post-processing (e.g., JPEG compression), or photography in non-ideal conditions (e.g., haze, motion blur).
Source: [Blind Image Restoration without Prior Knowledge ](https://arxiv.org/abs/2003.01764)",computer-vision
6a2acdfc-973a-4376-8f56-336da4f2a845,computer-vision-transduction,Computer Vision Transduction,Transductive learning in computer vision tasks,computer-vision
14869e26-b49a-4ab0-9203-fa4d18dd444f,open-world-object-detection,Open World Object Detection,"Open World Object Detection is a computer vision problem where a model is tasked to: 1) identify objects that have not been introduced to it as 'unknown', without explicit supervision to do so, and 2) incrementally learn these identified unknown categories without forgetting previously learned classes, when the corresponding labels are progressively received.",computer-vision
b4931a6a-53a8-42e2-8ea9-bce71c88c0ca,point-interactive-image-colorization,Point-interactive Image Colorization,"__Point-interactive colorization__ is a task of colorizing images given user-guided clicks containing colors (a.k.a color hints).
Unlike unconditional image colorization, which is an underdetermined problem by nature, point-interactive colorization aims to generate images containing specific colors given by the user.
Point-interactive colorization is evaluated by providing simulated user hints from the groundtruth color image.
Following the [iColoriT protocol](https://arxiv.org/abs/2207.06831), user hints have a size of 2x2 pixels and color is given as the average color within the 2x2 pixels.",computer-vision
49fa6c51-bddc-407b-a243-6a5ee83cf0f3,blink-estimation,Blink estimation,,computer-vision
ee957486-3eac-476b-8c05-d523f1da0fbf,overlapped-14-1,Overlapped 14-1,,computer-vision
c347d5e8-7618-485c-83de-ba680bc1b8b9,image-forensics,Image Forensics,,computer-vision
8508aab9-f63e-46a5-a672-5abc394fe01a,temporal-defect-localization,Temporal Defect Localization,"Closed-Circuit TeleVision (CCTV) is a popular method for pipe defect inspection. Different from short QV videos, CCTV videos are much longer and record more comprehensive content in the very distant pipe. The main task is to discover the temporal locations of pipe defects in such untrimmed videos. Clearly, manual inspection based on hundreds of hours of CCTV videos is expensive. To fill this gap, we introduce this temporal localization task, which is to find the temporal locations of pipe defects and recognize their corresponding categories in a long CCTV video.",computer-vision
b7707c07-f673-4989-8ff3-2f7504b83b5a,forgery,Forgery,,computer-vision
e12fdac0-5f3d-475c-a11b-387c0163ec6b,federated-lifelong-person-reid,Federated Lifelong Person ReID,,computer-vision
a60a6250-56fe-4130-b943-1504e680e0fe,object-discovery-in-videos,Object Discovery In Videos,,computer-vision
2da40b93-6ea4-441b-baca-08dd9b37cd67,grasp-generation,Grasp Generation,,computer-vision
c8e2e881-b8ef-461e-927b-36c9fc4b4762,video-correspondence-flow,Video Correspondence Flow,,computer-vision
ece20ff1-cde7-4800-9e7b-23827ae6a2bf,activeness-detection,Activeness Detection,Determining activeness via images,computer-vision
01470ac5-deb3-481d-b354-b4051c51e057,semi-supervised-image-classification-cold,Semi-Supervised Image Classification (Cold Start),"This is the same as the [semi-supervised image classification task](https://paperswithcode.com/task/semi-supervised-image-classification), with the key difference that the labelled subset needs to be selected in a class-agnostic manner. This means that the standard practice in semi-supervised learning of using a random class-stratified sample is ""cheating"" in this case, as it requires class information for the whole dataset. Rather, this challenge requires a smart cold-start or unsupervised selective labelling strategy to identify the images that are most informative and result in the best-performing models.",computer-vision
d826c645-5be6-4fbe-abda-8aa27b3fbc73,multiple-object-track-and-segmentation,Multiple Object Track and Segmentation,"Multiple object tracking and segmentation requires detecting, tracking, and segmenting objects belonging to a set of given classes.
(Image and definition credit: [Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation](https://github.com/SysCV/pcan), NeurIPS 2021, Spotlight )",computer-vision
56cd2b1e-899e-4b48-8ea7-23b9a8de212b,damaged-building-detection,Damaged Building Detection,,computer-vision
9b274599-db8f-4c09-a8c1-6854eaf98fa3,brdf-estimation,BRDF estimation,,computer-vision
db34c7c9-5cce-4974-b7db-16829058c0d2,membership-inference-attack,Membership Inference Attack,,computer-vision
4a5eb7b1-2f1e-416b-9b94-96b1cd6b4cd4,superpixels,Superpixels,,computer-vision
2acc42d6-c2bc-4740-99f4-2fb7b703eeb0,multi-target-domain-adaptation,Multi-target Domain Adaptation,The idea of Multi-target Domain Adaptation is to adapt a model from a single labelled source domain to multiple unlabelled target domains.,computer-vision
1d480600-2374-475f-b5f4-c5201c36e7dc,rf-based-pose-estimation,RF-based Pose Estimation,"Detect human actions through walls and occlusions, and in poor lighting conditions. Taking radio frequency (RF) signals as input (e.g. Wifi), generating 3D human skeletons as an intermediate representation, and recognizing actions and interactions.
See e.g. RF-Pose from MIT for a good illustration of the approach
http://rfpose.csail.mit.edu/
( Image credit: [Making the Invisible Visible](https://arxiv.org/pdf/1909.09300v1.pdf) )",computer-vision
25cbaf3d-1d70-41a0-99ee-f3bf9d5f160a,weakly-supervised-action-segmentation-action,Weakly Supervised Action Segmentation (Action Set),Learning an action segmentation model where the only available supervision is the action set -- the set of actions that occur in the video without information about their temporal locations.,computer-vision
ac441882-fc11-4397-8640-a713bd5769bc,boundary-captioning,Boundary Captioning,"Provided with the timestamp of a boundary inside a video, the machine is required to generate sentences describing the status change at the boundary.",computer-vision
6a99abb5-80ba-4a12-9c13-ee09cb6aa3d5,training-free-3d-part-segmentation,Training-free 3D Part Segmentation,Evaluation on target datasets for 3D Part Segmentation without any training,computer-vision
2d5aab76-e2f6-4e02-8138-64ff7f50f8ba,image-shadow-removal,Image Shadow Removal,,computer-vision
d2d1b451-3593-42f9-8857-db247c331f8f,raw-reconstruction,Raw reconstruction,Reconstruct RAW camera sensor readings from the corresponding sRGB images,computer-vision
82102e7c-065d-4d24-9ae4-4309f64f0b9a,face-age-editing,Face Age Editing,,computer-vision
e280209d-2b97-4dea-adc6-21f0c01ac667,multimodal-activity-recognition,Multimodal Activity Recognition,,computer-vision
30911ac8-1714-432f-99ad-f1f7710be9de,generating-3d-point-clouds,Generating 3D Point Clouds,,computer-vision
85480a3a-a330-4a49-90ab-2ad234c41837,surgical-tool-detection,Surgical tool detection,Presence detection of various classes of surgical instruments in endoscopy videos.,computer-vision
e477209b-1a7f-446f-b6d6-aa31dd3f912f,event-based-vision,Event-based vision,"An event camera, also known as a neuromorphic camera, silicon retina or dynamic vision sensor, is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images using a shutter as conventional cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur and staying silent otherwise. Modern event cameras have microsecond temporal resolution, 120 dB dynamic range, and less under/overexposure and motion blur than frame cameras.",computer-vision
e9c15075-4e07-45b4-9d44-d544cdbaffe9,multi-person-pose-estimation,Multi-Person Pose Estimation,"Multi-person pose estimation is the task of estimating the pose of multiple people in one frame.
( Image credit: [Human Pose Estimation with TensorFlow
](https://github.com/eldar/pose-tensorflow) )",computer-vision
7c3b1af6-7a5f-4ed3-acce-61285f760115,person-reposing,person reposing,Person reposing describes the task of changing the pose of a human in a given image to any desired target pose.,computer-vision
8e6a5855-5e4e-4fc2-8fb6-e41c97e6b43f,image-deconvolution,Image Deconvolution,,computer-vision
4c3d412c-c6ae-47c1-9d95-5d893942ed0d,3d-object-detection-from-stereo-images,3D Object Detection From Stereo Images,"Estimating oriented 3D bounding boxes from Stereo Cameras only.
Image: [You et al](https://openreview.net/pdf?id=BJedHRVtPB)",computer-vision
242f8b65-50f4-4392-852f-a33e20475ba9,visual-social-relationship-recognition,Visual Social Relationship Recognition,,computer-vision
bf414a28-1160-47d3-8af0-5ec0c772bc95,human-activity-recognition,Human Activity Recognition,Classify various human activities,computer-vision
9683a53c-98a4-4467-a8b5-273ca6cdba04,unsupervised-3d-point-cloud-linear-evaluation,Unsupervised 3D Point Cloud Linear Evaluation,Training a linear classifier (e.g. an SVM) on the representations learned in an unsupervised manner on the pretraining dataset (e.g. ShapeNet).,computer-vision
3dd576d7-bc90-4857-b0cc-793d911dad78,earthquake-prediction,Earthquake prediction,,computer-vision
12df1fd4-61ba-47ad-a576-1395653c8751,defect-detection,Defect Detection,For automatic detection of surface defects in various products,computer-vision
b8eebe99-659f-4578-b9f4-c22e22a0ac7c,2d-semantic-segmentation,2D Semantic Segmentation,,computer-vision
e35122d2-3649-4f77-82f5-b66cd03a0aae,cloud-detection,Cloud Detection,,computer-vision
1ecd10b7-cbb2-4231-a76b-828719612434,cryogenic-electron-microscopy-cryo-em,Cryogenic Electron Microscopy (cryo-EM),"Analysis of images and videos from transmission electron microscopes, including single-particle cryogenic electron microscopy and cryogenic electron tomography (cryo-ET).
https://en.wikipedia.org/wiki/Cryogenic_electron_microscopy",computer-vision
8059f6c1-7112-4554-85e7-ce889c4166fa,unsupervised-video-object-segmentation,Unsupervised Video Object Segmentation,The unsupervised scenario assumes that the user does not interact with the algorithm to obtain the segmentation masks. Methods should provide a set of object candidates with no overlapping pixels that span the whole video sequence. This set of objects should contain at least the objects that capture human attention when watching the whole video sequence i.e. objects that are more likely to be followed by human gaze.,computer-vision
2606aba8-0d09-4d05-8f54-fe58fde9eeec,cross-corpus,Cross-corpus,,computer-vision
60be93a6-be05-4115-babd-b5a91b4a06d8,sign-language-translation,Sign Language Translation,"Given a video containing sign language, the task is to predict the translation into (written) spoken language.
Image credit: [How2Sign](https://how2sign.github.io/)",computer-vision
11ff5ef7-7f0b-4d9b-805e-252940f7e3a0,image-relighting,Image Relighting,Image relighting involves changing the illumination settings of an image.,computer-vision
ee3e0f94-6064-480b-91b0-4d0dc72e01be,generalized-zero-shot-learning,Generalized Zero-Shot Learning,,computer-vision
8a466efc-46ab-4a1b-a097-19bb65b7beeb,image-compression,Image Compression,"**Image Compression** is an application of data compression for digital images to lower their storage and/or transmission requirements.
Source: [Variable Rate Deep Image Compression With a Conditional Autoencoder ](https://arxiv.org/abs/1909.04802)",computer-vision
244772b8-513f-47a6-8a5e-8922b397104d,multimodal-fashion-image-editing,Multimodal fashion image editing,"Given a target model image, a sketch and a textual description of a garment, the aim is to generate a new image of the same model, wearing a new garment that follows the user input.",computer-vision
1a5c2fd5-6faa-4534-a711-76143df9edaa,talking-face-generation,Talking Face Generation,"Talking face generation aims to synthesize a sequence of face images that correspond to given speech semantics
( Image credit: [Talking Face Generation by Adversarially Disentangled Audio-Visual Representation](https://github.com/Hangz-nju-cuhk/Talking-Face-Generation-DAVS) )",computer-vision
d410cc0c-738a-4509-a530-81b73cc478be,image-to-3d,Image to 3D,,computer-vision
cd6cd439-dfc2-480a-9cc6-84139437f371,image-classification-with-dp,Image Classification with Differential Privacy,**Image Classification with Differential Privacy** is an improved version of the image classification task whereby the final classification output only describe the patterns of groups within the dataset while withholding information about individuals in the dataset.,computer-vision
a9d7ae0a-d1fc-4b53-b397-968cd538b60d,satellite-image-super-resolution,satellite image super-resolution,,computer-vision
a70afc2f-8d8e-4cf4-8141-f4441a2b3d64,facial-beauty-prediction,Facial Beauty Prediction,"Facial beauty prediction is the task of predicting the attractiveness of a face.
( Image credit: [SCUT-FBP5500: A Diverse Benchmark Dataset for Multi-Paradigm Facial Beauty Prediction](https://github.com/HCIILAB/SCUT-FBP5500-Database-Release) )",computer-vision
5ba86a85-073f-40a8-b8c4-5e617ded4d3e,detect-forged-images-and-videos,Image and Video Forgery Detection,,computer-vision
761fd492-65c0-482c-9d14-333203dfa2d2,weakly-supervised-action-segmentation,Weakly Supervised Action Segmentation (Transcript),Action Segmentation from weak (transcript) supervision.,computer-vision
26c2f5ab-ae12-42f3-9542-804249268a0e,text-to-image-generation,Text-to-Image Generation,"**Text-to-Image Generation** is a task in computer vision and natural language processing where the goal is to generate an image that corresponds to a given textual description. This involves converting the text input into a meaningful representation, such as a feature vector, and then using this representation to generate an image that matches the description.",computer-vision
7744c153-02c0-492c-be61-0d4649efcbaa,3d-object-tracking,3D Object Tracking,,computer-vision
b1ae5edc-e0b5-43d2-88d5-575acd032f4c,horizon-line-estimation,Horizon Line Estimation,,computer-vision
260d1a80-7e51-4706-9440-20fe13655af6,surgical-phase-recognition,Surgical phase recognition,"The first 40 videos are used for training, the last 40 videos are used for testing.",computer-vision
3cb9fed2-998d-473c-820a-5b95532c1dfa,pupil-dilation,Pupil Dilation,"Phenomenon in which the human iris reacts to illumination changes or drugs, thereby changing the aperture of the pupil.",computer-vision
b3254154-7542-4f7c-ba8a-b45ce8bbb551,eye-tracking,Gaze Prediction,,computer-vision
f83fd9ba-f5d8-4548-82d1-4a9cd64d19c5,crop-yield-prediction,Crop Yield Prediction,,computer-vision
aa187068-48e6-4cab-9bce-ac2cb849f25b,semi-supervised-person-bounding-box-detection,Semi-Supervised Person Bounding Box Detection,,computer-vision
e2e21017-01eb-46f6-b124-e2f07f50865d,semi-supervised-semantic-segmentation,Semi-Supervised Semantic Segmentation,Models that are trained with a small number of labeled examples and a large number of unlabeled examples and whose aim is to learn to segment an image (i.e. assign a class to every pixel).,computer-vision
d6939bb8-0af2-4758-8c67-98b5f2bcce86,autonomous-flight-dense-forest,Autonomous Flight (Dense Forest),Number of interventions during autonomous flight under the forest canopy.,computer-vision
3895dd03-d6ee-4f16-925a-06b70955deca,street-scene-parsing,Street Scene Parsing,,computer-vision
e93f06b4-0635-43ff-bdd4-b60b8e28597d,pansharpening,Pansharpening,,computer-vision
c386bb70-111d-4b3a-9abd-d6df093f6ffe,object-skeleton-detection,Object Skeleton Detection,"Object skeleton detection is the task of detecting the skeleton of an object in an image.
( Image credit: [DeepFlux for Skeletons in the Wild](https://arxiv.org/pdf/1811.12608v1.pdf) )",computer-vision
ed9edc42-9eed-44b0-831b-72e1da64b554,image-recognition,Image Recognition,,computer-vision
aa8c6ae2-e8bf-41c7-8d17-e0109a487e84,face-generation,Face Generation,"Face generation is the task of generating (or interpolating) new faces from an existing dataset.
The state-of-the-art results for this task are located in the Image Generation parent.
( Image credit: [Progressive Growing of GANs for Improved Quality, Stability, and Variation
](https://arxiv.org/pdf/1710.10196v3.pdf) )",computer-vision
ef1ffdd7-1d60-4736-8ebf-90e1bdc14254,multi-view-3d-shape-retrieval,Multi-View 3D Shape Retrieval,,computer-vision
e5d54b1e-6425-4521-8d03-86ac87608d08,human-scene-contact-detection,human-scene contact detection,detecting contact between human bodies and scenes,computer-vision
15f0337b-2e8a-4817-b1dc-ef8356f72075,3d-human-action-recognition,3D Action Recognition,Image: [Rahmani et al](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Rahmani_3D_Action_Recognition_CVPR_2016_paper.pdf),computer-vision
7a196c36-a5ab-4f61-89a8-158db0b0472a,burst-image-super-resolution,Burst Image Super-Resolution,"Reconstruct a high-resolution image from a set of low-quality images, much like the multi-frame super-resolution task.",computer-vision
132d8d20-1c13-4cc3-af37-220d8e8a19f1,spectral-estimation,Spectral Estimation,,computer-vision
25460319-7ed9-4e4c-a7bf-0ea4834ffef4,fine-grained-image-classification,Fine-Grained Image Classification,"**Fine-Grained Image Classification** is a task in computer vision where the goal is to classify images into subcategories within a larger category. For example, classifying different species of birds or different types of flowers. This task is considered to be fine-grained because it requires the model to distinguish between subtle differences in visual appearance and patterns, making it more challenging than regular image classification tasks.
( Image credit: [Looking for the Devil in the Details](https://arxiv.org/pdf/1903.06150v2.pdf) )",computer-vision
3ca6fad8-05ff-44c4-8f34-8218e0367ae1,face-anti-spoofing,Face Anti-Spoofing,"Facial anti-spoofing is the task of preventing false facial verification attempts that use a photo, video, mask, or other substitute for an authorized person’s face. Some examples of attacks:
- **Print attack**: The attacker uses someone’s photo. The image is printed or displayed on a digital device.
- **Replay/video attack**: A more sophisticated way to trick the system, which usually requires a looped video of a victim’s face. This approach ensures behaviour and facial movements to look more ‘natural’ compared to holding someone’s photo.
- **3D mask attack**: During this type of attack, a mask is used as the tool of choice for spoofing. It’s an even more sophisticated attack than playing a face video. In addition to natural facial movements, it enables ways to deceive some extra layers of protection such as depth sensors.
( Image credit: [Learning Generalizable and Identity-Discriminative Representations for Face Anti-Spoofing](https://github.com/XgTu/GFA-CNN) )",computer-vision
06d57561-a1b0-4169-b9d7-af79022227af,3d-holography,3D Holography,"The images that are presented here are multiplanar images that were reconstructed using a holographic display. For more details, please see: https://complightlab.com/publications/realistic_defocus_cgh/",computer-vision
f03e2a11-316d-49a2-9eea-27f277068a2d,wildly-unsupervised-domain-adaptation,Wildly Unsupervised Domain Adaptation,Transferring knowledge from a noisy source domain to unlabeled target domain.,computer-vision
fd94ad7c-82e1-4b0f-82db-dd48781ca8c4,3d-shape-reconstruction,3D Shape Reconstruction,"Image credit: [GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision
, ECCV'20](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123600511.pdf)",computer-vision
97508b63-d0e9-4a33-85cd-70082cc0ea2e,extreme-video-frame-interpolation,eXtreme-Video-Frame-Interpolation,"A type of Video Frame Interpolation (VFI) that interpolates an intermediate frame on the X4K1000FPS dataset, which contains 4K videos at 1000 fps with extreme motion. The dataset has a wide variety of textures, extremely large motions, zooming, and occlusions that have not been seen in previous VFI benchmark datasets.",computer-vision
9344ec36-2b3b-4444-9e18-542bb4536f05,micro-expression-recognition,Micro-Expression Recognition,"Facial Micro-Expression Recognition is the challenging task of identifying suppressed emotions in high-stakes environments; micro-expressions are often very brief in duration and involve only subtle changes.",computer-vision
9c5d801c-0f6d-49ab-9051-9445b5ef908e,breast-cancer-histology-image-classification-1,Breast Cancer Histology Image Classification (20% labels),Model's breast cancer histology image classification performance on BreakHis dataset with limited training data labels of 20%.,computer-vision
09aac6a2-7e4d-44c6-a3fb-98d47107f6e8,scene-flow-estimation,Scene Flow Estimation,"**Scene Flow Estimation** is the task of obtaining 3D structure and 3D motion of dynamic scenes, which is crucial to environment perception, e.g., in the context of autonomous navigation.
Source: [Self-Supervised Monocular Scene Flow Estimation ](https://arxiv.org/abs/2004.04143)",computer-vision
76116404-513c-4b6d-bafc-154b54667bd7,object-reconstruction,Object Reconstruction,,computer-vision
eb6fd825-ea5f-40ae-9fdb-111d582f575e,generalized-zero-shot-skeletal-action,Generalized Zero Shot skeletal action recognition,Generalized Zero Shot Learning for 3d Skeletal Action Recognition,computer-vision
3ba24767-0208-4cf2-b138-c33d43fa1b37,robust-3d-object-detection,Robust 3D Object Detection,3D Object Detection under Out-of-Distribution Scenarios,computer-vision
5261dd3d-8dca-4da3-af10-af452b962492,probabilistic-deep-learning,Probabilistic Deep Learning,,computer-vision
da46c55e-4ea7-4b6d-9a2a-136bd94d0b1e,polyp-segmentation,Polyp Segmentation,The goal of the project is to develop a computer-aided detection and diagnosis system for automatic polyp segmentation and detection.,computer-vision
c6e41df0-2252-4f43-b5e7-05d1b41a1fe4,stereo-image-super-resolution,Stereo Image Super-Resolution,,computer-vision
30996972-95e2-4f43-b22e-1ee81593bd5e,transparent-object-detection,Transparent Object Detection,Detecting transparent objects in 2D or 3D,computer-vision
ad92873a-808a-4f4c-bd70-16d0e7e2e4cc,3d-object-retrieval,3D Object Retrieval,Source: [He et al](https://arxiv.org/pdf/1803.06189v1.pdf),computer-vision
63717f6c-2be2-44dc-943b-9b24154cd1bd,point-cloud-segmentation,Point Cloud Segmentation,"3D point cloud segmentation is the process of classifying point clouds into multiple homogeneous regions, such that points in the same region have the same properties. The segmentation is challenging because of the high redundancy, uneven sampling density, and lack of explicit structure in point cloud data. This problem has many applications in robotics, such as intelligent vehicles, autonomous mapping and navigation.
Source: [3D point cloud segmentation: A survey](https://doi.org/10.1109/RAM.2013.6758588)",computer-vision
56f51487-c3b6-41d3-b8f9-4fd9cb77fdae,loop-closure-detection,Loop Closure Detection,"Loop closure detection is the process of detecting whether an agent has returned to a previously visited location.
( Image credit: [Backtracking Regression Forests for Accurate Camera Relocalization](https://github.com/LiliMeng/btrf) )",computer-vision
7cd1a86f-247f-4764-8d2d-20ee7f4f353a,weakly-supervised-3d-point-cloud-segmentation,Weakly Supervised 3D Point Cloud Segmentation,,computer-vision
52613435-5cd5-41ba-b5b2-b46d888f01e0,thermal-infrared-pedestrian-detection,Thermal Infrared Pedestrian Detection,Thermal Infrared Pedestrian Detection under low-light condition,computer-vision
bbff45ff-71ca-421a-abe1-94f80689a16b,scale-generalisation,Scale Generalisation,Scale generalisation implies that learning is performed at some scale(s) and testing at other scales.,computer-vision
b6d49331-87f9-45aa-8939-87355d64902c,phrase-extraction-and-grounding-peg,Phrase Extraction and Grounding (PEG),PEG requires a model to extract phrases from text and locate objects from images simultaneously.,computer-vision
4cb73063-8f77-469d-a7a6-ebb0754b8d5d,event-based-motion-estimation,Event-based Motion Estimation,,computer-vision
e3a79482-f1cc-489b-8d05-e81ed54c44a7,face-verification,Face Verification,"**Face Verification** is a machine learning task in computer vision that involves determining whether two facial images belong to the same person or not. The task involves extracting features from the facial images, such as the shape and texture of the face, and then using these features to compare and verify the similarity between the images.
( Image credit: [Pose-Robust Face Recognition via Deep Residual Equivariant Mapping](https://arxiv.org/pdf/1803.00839v1.pdf) )",computer-vision
0aa10af5-24ec-4041-a33b-b20bf8152d16,solar-cell-segmentation,Solar Cell Segmentation,,computer-vision
026534d8-a2ce-4685-80a2-dcbcaed24d6c,learning-with-noisy-labels,Learning with noisy labels,"When we say ""noisy labels,"" we mean that an adversary has intentionally corrupted the labels, which would otherwise have come from a ""clean"" distribution. This setting can also be used to cast learning from only positive and unlabeled data.",computer-vision
300bd301-7085-4630-b395-cba23cca93f8,3d-prostate-segmentation,3D Prostate Segmentation,,computer-vision
24285e6c-f92e-432e-8eb1-a62c6cf61971,visual-place-recognition,Visual Place Recognition,"**Visual Place Recognition** is the task of matching a view of a place with a different view of the same place taken at a different time.
Source: [Visual place recognition using landmark distribution descriptors ](https://arxiv.org/abs/1608.04274)
Image credit: [Visual place recognition using landmark distribution descriptors](https://arxiv.org/pdf/1608.04274.pdf)",computer-vision
f6427446-2837-4d99-9713-c24b5ade53c8,roi-based-image-generation,ROI-based image generation,,computer-vision
128fdb4d-919d-4089-ac58-81744c6833b0,action-recognition,Temporal Action Localization,Temporal Action Localization aims to detect activities in the video stream and output beginning and end timestamps. It is closely related to Temporal Action Proposal Generation.,computer-vision
5b75ac19-42b2-42f8-b2c9-2d7282d8bd40,lip-sync-1,Constrained Lip-synchronization,"This task deals with lip-syncing a video or an image to the desired target speech. Approaches in this task work only for a specific (limited) set of identities, languages, and speech/voices. See also: Unconstrained lip-synchronization - https://paperswithcode.com/task/lip-sync",computer-vision
ad32738c-a5bf-4660-a452-aad13097c474,image-deep-networks,Image Deep Networks,,computer-vision
3cba119d-4f68-4c08-9f53-30c631a15ab3,continual-semantic-segmentation,Continual Semantic Segmentation,Continual learning in semantic segmentation.,computer-vision
68e12c76-2799-4ff9-91d6-8454b24e62bf,im2spec,Im2Spec,Predicting spectra from images (and vice versa),computer-vision
2a3df05f-d08d-47a7-941b-59929184b367,3d-multi-object-tracking,3D Multi-Object Tracking,Image: [Weng et al](https://arxiv.org/pdf/1907.03961v4.pdf),computer-vision
ce265ffd-0584-4eb5-9b22-ae154f7c34e4,replay-grounding,Replay Grounding,"Replay grounding is introduced in SoccerNet-v2 in the case of videos of soccer games. Given a replay shot of a soccer action, the objective is to retrieve when said action occurs within the whole live game.",computer-vision
ca59e60b-9c23-43b8-8410-fcd9d94b9cd9,natural-language-moment-retrieval,Natural Language Moment Retrieval,,computer-vision
6b8f56f3-091f-4288-961f-30afacb02d59,visual-keyword-spotting,Visual Keyword Spotting,Spot a given query keyword in a silent talking face video,computer-vision
e06248ff-82f7-4aaf-96f0-0f6942d8dc05,motion-forecasting,Motion Forecasting,Motion forecasting is the task of predicting the location of a tracked object in the future,computer-vision
6f3b9031-ca61-47ed-9fe5-61cf3efe052f,conformal-prediction,Conformal Prediction,,computer-vision
77ffb756-b83f-4eae-b575-30620a6061b4,art-analysis,Art Analysis,,computer-vision
dc579518-3f46-4908-bf1c-99d2b468960f,yield-mapping-in-apple-orchards,Yield Mapping In Apple Orchards,,computer-vision
8189d15d-f11c-4bbd-81da-19c0372a048d,image-to-point-cloud-registration,Image to Point Cloud Registration,"Given a query image and a scene of point cloud, get the camera pose according to them.",computer-vision
7351b060-7f31-421c-84d2-d1b04a437a7b,anomaly-detection-at-various-anomaly,Anomaly Detection at Various Anomaly Percentages,Unsupervised anomaly detection at 10% anomaly,computer-vision
c1c71da7-224b-42c6-a06b-ef43a284508a,mri-segmentation,MRI segmentation,,computer-vision
52b0f335-9c83-445f-b340-173b89443b4a,panoptic-scene-graph-generation,Panoptic Scene Graph Generation,"PSG task abstracts the given image with a scene graph, where nodes are grounded by panoptic segmentation",computer-vision
4f419566-f8f4-493c-a1f1-ca8ed8b75176,fast-vehicle-detection,Fast Vehicle Detection,Fast vehicle detection is the task of detecting fast or speeding vehicles from video footage.,computer-vision
e807870f-0b8b-4b5d-b843-9e86f96d99c6,unsupervised-few-shot-learning,Unsupervised Few-Shot Learning,"In contrast to supervised few-shot learning, only the unlabeled dataset is available in the pre-training or meta-training stage for unsupervised few-shot learning.",computer-vision
ea2e278d-d8c7-4d46-81fa-2372a60704f2,prediction-of-occupancy-grid-maps,Prediction Of Occupancy Grid Maps,,computer-vision
889100b3-5488-41a1-85b3-02c73237eccf,future-prediction,Future prediction,,computer-vision
00975df5-ad7e-4736-82e7-4b195bbb81ab,rf-based-visual-tracking,RF-based Visual Tracking,"From mID:
https://doi.org/10.1109/DCOSS.2019.00028
""The key to offering personalised services in smart spaces is knowing where a particular person is with a high degree of accuracy. Visual tracking is one such solution, but concerns arise around the potential leakage of raw video information and many people are not comfortable accepting cameras in their homes or workplaces. We propose a human tracking and identification system (mID) based on millimeter wave radar which has a high tracking accuracy, without being visually compromising. Unlike competing techniques based on WiFi Channel State Information (CSI), it is capable of tracking and identifying multiple people simultaneously. Using a lowcost, commercial, off-the-shelf radar, we first obtain sparse point clouds and form temporally associated trajectories.""
( Image credit: [mID: Tracking and Identifying People with Millimeter Wave Radar](http://www.cs.ox.ac.uk/files/10889/%5BDCOSS19%5DmID.pdf) )",computer-vision
15ebce5d-8b60-4701-91b0-deaa93b579b4,source-free-domain-adaptation,Source-Free Domain Adaptation,,computer-vision
f7fcd875-2bba-49e7-ae78-278ae6ebc167,activity-detection,Activity Detection,Detecting activities in extended videos.,computer-vision
cc4bf4c9-325d-401c-9ff1-75aeb3c5d03a,text-to-3d,Text to 3D,,computer-vision
b0858d62-e7df-4e82-9fb5-c6b202c55f14,hyperview-challenge,HYPERVIEW Challenge,"The objective of this challenge is to advance the state of the art for soil parameter retrieval from hyperspectral data in view of the upcoming Intuition-1 mission. A campaign took place in March 2021 over agricultural areas in Poland with extensive ground samplings collocated with airborne hyperspectral measurements from imagers mounted onboard an airplane. The hyperspectral data contains 150 contiguous hyperspectral bands (462-942 nm, with a spectral resolution of 3.2 nm), which reflects the spectral range of the hyperspectral imaging sensor deployed on-board Intuition-1.",computer-vision
70f359cc-abf8-423c-8afc-332fd440e191,face-anonymization,Face Anonymization,,computer-vision
8935e656-e49f-43fd-9268-d00280a1f523,image-matching,Image Matching,"Image Matching or wide multiple baseline stereo (WxBS) is the process of establishing a sufficient number of pixel or region correspondences between two or more images depicting the same scene, in order to estimate the geometric relationship between the cameras that produced these images.
Source: [The Role of Wide Baseline Stereo in the Deep Learning World](https://ducha-aiki.github.io/wide-baseline-stereo-blog/2020/03/27/intro.html)
( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision
796d88b5-9f84-4b14-8588-09e9c73ffd81,real-time-multi-object-tracking,Real-Time Multi-Object Tracking,Online and real-time multi-object tracking aims to achieve real-time speeds of over 30 frames per second using an online approach.,computer-vision
0935dff6-2a53-4b7d-8edb-1e98b84e3705,sequential-image-classification,Sequential Image Classification,"Sequential image classification is the task of classifying a sequence of images.
( Image credit: [TensorFlow-101](https://github.com/sjchoi86/tensorflow-101/blob/master/notebooks/rnn_mnist_simple.ipynb) )",computer-vision
e01f69cc-5055-482b-848b-74d520873bad,pedestrian-density-estimation,Pedestrian Density Estimation,Pedestrian density estimation is the task of estimating the density of pedestrians from cameras.,computer-vision
57cfe662-9922-4898-a5d5-53b7cc1c2d2d,fake-image-detection,Fake Image Detection,"( Image credit: [FaceForensics++](https://github.com/ondyari/FaceForensics) )",computer-vision
e9adc686-f9fa-4030-b59d-ae8b0e1aac44,3d-aware-image-synthesis,3D-Aware Image Synthesis,,computer-vision
afbf383a-6936-48c6-97b9-f16781ea84ba,inverse-tone-mapping,inverse tone mapping,,computer-vision
3960a688-1539-4d75-a23f-2c0e57cbab7f,multi-person-pose-estimation-and-tracking,Multi-Person Pose Estimation and Tracking,"Joint multi-person pose estimation and tracking following the PoseTrack benchmark.
https://posetrack.net/
( Image credit: [PoseTrack](https://github.com/iqbalu/PoseTrack-CVPR2017) )",computer-vision
e2b4df16-216a-440b-8de2-8294d335408f,weakly-supervised-3d-human-pose-estimation,Weakly-supervised 3D Human Pose Estimation,This task targets at 3D Human Pose Estimation with fewer 3D annotation.,computer-vision
5459921c-611f-4555-8b2e-adef15c11b63,image-instance-retrieval,Image Instance Retrieval,"**Image Instance Retrieval** is the problem of retrieving images from a database representing the same object or scene as the one depicted in a query image.
Source: [Compression of Deep Neural Networks for Image Instance Retrieval ](https://arxiv.org/abs/1701.04923)",computer-vision
d091c2a7-f18e-410f-ac96-ca28c7b54714,occluded-3d-object-symmetry-detection,Occluded 3D Object Symmetry Detection,,computer-vision
47cdfc91-255a-4f42-9797-1411d36166cf,motion-estimation,Motion Estimation,"**Motion Estimation** is used to determine the block-wise or pixel-wise motion vectors between two frames.
Source: [MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement ](https://arxiv.org/abs/1810.08768)",computer-vision
d3cc08ae-be7b-41db-a4f4-d2969c8e0799,gpr,GPR,Gaussian Process Regression,computer-vision
3d072a02-4eab-4aca-8cf5-7c01dd7b5a3a,constrained-diffeomorphic-image-registration,Constrained Diffeomorphic Image Registration,,computer-vision
5611ddf3-041e-4981-bab5-856890155fdc,indoor-scene-synthesis,Indoor Scene Synthesis,,computer-vision
6e400bb6-d7b4-47bc-a1f4-61f832bb665e,inverse-rendering,Inverse Rendering,"**Inverse Rendering** is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. The goal of inverse rendering is to determine the properties of a scene given an observation of it, and to generate new images or videos based on these properties.",computer-vision
8ee05aca-e8ae-45e7-a718-3bb9cd6ee3b3,furniture-segmentation,furniture segmentation,,computer-vision
79edcfaa-e559-40d5-baa9-3cdb9189a1f1,fine-grained-action-recognition,Fine-grained Action Recognition,,computer-vision
1ef31f2d-6d9e-409f-bc88-67715d947bfb,fine-grained-visual-categorization,Fine-Grained Visual Categorization,,computer-vision
b2b84670-571d-4990-9aea-93b9aae7201f,concurrent-activity-recognition,Concurrent Activity Recognition,,computer-vision
e91f32b7-f781-43ee-ac2a-a9721324b237,animal-action-recognition,Animal Action Recognition,"Cross-species (intra-class, inter-class) action recognition",computer-vision
291b6483-bc1a-4838-b60f-b823383ecf01,part-based-representation-learning,Part-based Representation Learning,,computer-vision
b9c97b9a-38e2-4be6-928c-30b39591eda8,generalizable-person-re-identification,Generalizable Person Re-identification,Generalizable person re-identification refers to methods trained on a source dataset but directly evaluated on a target dataset without domain adaptation or transfer learning.,computer-vision
9a97dabe-2c97-4ed4-aa51-bb0d1fc31087,spatio-temporal-action-localization,Spatio-Temporal Action Localization,,computer-vision
ce2f6e3f-6a16-48be-92fe-d82c30e83608,video-restoration,Video Restoration,,computer-vision
e1164ba3-9e0d-40e7-a909-58340a687cc5,predict-future-video-frames,Predict Future Video Frames,,computer-vision
c986dd7d-2aff-49f3-95ba-9e02f1e7cc5e,multimodal-sentiment-analysis,Multimodal Sentiment Analysis,"Multimodal sentiment analysis is the task of performing sentiment analysis with multiple data sources - e.g. a camera feed of someone's face and their recorded speech.
( Image credit: [ICON: Interactive Conversational Memory Network
for Multimodal Emotion Detection](https://www.aclweb.org/anthology/D18-1280.pdf) )",computer-vision
5d2ec410-84c2-42f4-9f58-b1f0b2dbfe97,uncropping,Uncropping,,computer-vision
02e5f65c-8d05-46e3-9f0b-7ab4cd3d622d,multimodal-emotion-recognition,Multimodal Emotion Recognition,"This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are
A: Acoustic
T: Text
V: Visual
Please include the modality in the bracket after the model name.
All models must use the standard five emotion categories and are evaluated in the standard leave-one-session-out (LOSO) setting. See the papers for references.",computer-vision
3f929695-c64b-41a7-977c-7151d44b57fe,melanoma-diagnosis,Melanoma Diagnosis,,computer-vision
1b0bd9e0-0239-43f9-b909-fa2b98dc26d2,wireframe-parsing,Wireframe Parsing,Detect Line Segments and their connecting Junctions in a single perspective image.,computer-vision
90312cca-a8db-420c-8170-0e7a54c8da5d,saliency-prediction-1,Few-Shot Transfer Learning for Saliency Prediction,"Saliency prediction aims to predict important locations in a visual scene. It is a per-pixel regression task with predicted values ranging from 0 to 1.
Benefiting from deep learning research and large-scale datasets, saliency prediction has achieved significant success in the past decade. However, it still remains challenging to predict saliency maps on images in new domains that lack sufficient data for data-hungry models.",computer-vision
94b76ec0-02fd-4621-9f14-ce0192bf933b,foveation,Foveation,,computer-vision
f2da4689-df79-4c90-93b9-3054330e3207,cyclops-accuracy,CYCLoPs Accuracy,"The WT2 dataset from the CYCLoPs database consists of 27,058 single-cell images of yeast cells. The task is to classify the subcellular localization of a fluoresced protein, given two channels staining for the protein of interest and the cytosol.",computer-vision
88ee21bf-fc68-4983-9bbd-6e394f3f0785,unsupervised-image-classification,Unsupervised Image Classification,"Models that learn to label each image (i.e. cluster the dataset into its ground truth classes) without seeing the ground truth labels.
Image credit: ImageNet clustering results of [SCAN: Learning to Classify Images without Labels (ECCV 2020)](https://arxiv.org/abs/2005.12320) ",computer-vision
64f981ac-5843-4432-b0b5-46e626aa1643,code-search,Code Search,"The goal of **Code Search** is to retrieve code fragments from a large code corpus that most closely match a developer’s intent, which is expressed in natural language.
Source: [When Deep Learning Met Code Search ](https://arxiv.org/abs/1905.03813)",computer-vision
20d882d5-eee9-4b62-a3b0-7dd84df606d4,unsupervised-few-shot-image-classification,Unsupervised Few-Shot Image Classification,"In contrast to (supervised) few-shot image classification, only the unlabeled dataset is available in the pre-training or meta-training stage for unsupervised few-shot image classification.",computer-vision
26d98e9b-b63d-4b44-aef3-5d28951795db,image-cropping,Image Cropping,"**Image Cropping** is a common photo manipulation process, which improves the overall composition by removing unwanted regions. Image Cropping is widely used in photographic, film processing, graphic design, and printing businesses.
Source: [Listwise View Ranking for Image Cropping ](https://arxiv.org/abs/1905.05352)",computer-vision
60048e6c-0850-489a-bdf2-34bcb21b5fb1,multiview-gait-recognition,Multiview Gait Recognition,,computer-vision
0480c015-8f26-46de-af03-d0fcc7d831e6,deep-feature-inversion,Deep Feature Inversion,,computer-vision
0b840206-52c2-4631-b127-d43e47b77858,multiview-detection,Multiview Detection,Incorporating multiple camera views for detection in heavily occluded scenarios.,computer-vision
9a794950-71f6-43c3-bf10-6f6715f7ca54,gallbladder-cancer-detection,Gallbladder Cancer Detection,,computer-vision
928fd4bc-bc4f-4df8-af6d-cf25669d46b3,negative-flip-rate,negative flip rate,,computer-vision
037b01c9-9bcd-4069-8851-2860e65f91ad,handwritten-digit-recognition,Handwritten Digit Recognition,,computer-vision
2e2ba92b-59ca-4af1-a98a-1223e0d921a3,ms-ssim,MS-SSIM,An MS-SSIM score helps to analyze how well a de-warping module has been able to de-warp a document image from its initial distorted view.,computer-vision
7d2ec0d5-ad2b-4d96-ada6-2735e70d5218,face-reconstruction,Face Reconstruction,"Face reconstruction is the task of recovering the facial geometry of a face from an image.
( Image credit: Microsoft [Deep3DFaceReconstruction](https://github.com/Microsoft/Deep3DFaceReconstruction) )",computer-vision
f3860089-093f-47c6-884a-6583ecaea76d,jpeg-artifact-removal,JPEG Artifact Removal,,computer-vision
c19da7ec-fbe5-4490-9bc0-06fa81db674e,universal-domain-adaptation,Universal Domain Adaptation,,computer-vision
d3a5d349-0bc2-47b0-b07b-48da4151c839,video-based-person-re-identification,Video-Based Person Re-Identification,Video-based person re-identification (reID) aims to retrieve person videos with the same identity as a query person across multiple cameras,computer-vision
92d48934-fc65-41d2-b0a0-a47258349836,few-shot-camera-adaptive-color-constancy,Few-Shot Camera-Adaptive Color Constancy,,computer-vision
152a1ae4-d851-4ec2-ab57-9345f4ec9925,historical-color-image-dating,Historical Color Image Dating,,computer-vision
225f73af-e801-49b1-a673-80e555a68ceb,multi-object-tracking-and-segmentation,Multi-Object Tracking and Segmentation,"Multiple object tracking and segmentation requires detecting, tracking, and segmenting objects belonging to a set of given classes.
(Image and definition credit: [Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation](https://github.com/SysCV/pcan), NeurIPS 2021, Spotlight )",computer-vision
a671a4f9-10b9-4c51-8ef0-c05ef746ca50,spectral-estimation-from-a-single-rgb-image,Spectral Estimation From A Single Rgb Image,,computer-vision
fa767897-1aa5-46e6-8e5f-2136b4647e8e,grayscale-image-denoising,Grayscale Image Denoising,,computer-vision
fbebe205-4418-4d33-a3ed-28c3f76add95,video-polyp-segmentation,Video Polyp Segmentation,,computer-vision
4c22bfa6-4153-43de-b86a-4bbb11798f35,human-dynamics,Human Dynamics,,computer-vision
82271ff6-dbcc-437a-9745-79ccdfa21a38,house-generation,House Generation,,computer-vision
fe993f10-6526-4f43-9f1b-c83e83b9b2a8,monocular-cross-view-road-scene-parsing-road,Monocular Cross-View Road Scene Parsing (Road),,computer-vision
92c08b9e-5bf9-4d36-92d2-c8559afced16,detecting-image-manipulation,Detecting Image Manipulation,,computer-vision
b88b5ad7-04a4-4ae2-9854-fcb7ad54ec05,image-segmentation,Image Segmentation,"**Image Segmentation** is a computer vision task that involves dividing an image into multiple segments or regions, each of which corresponds to a different object or part of an object. The goal of image segmentation is to assign a unique label or category to each pixel in the image, so that pixels with similar attributes are grouped together.",computer-vision
f2f6ace9-1935-4b7f-9fc3-3b31e544731b,genre-classification,Genre classification,"Genre classification is the process of grouping objects together based on defined similarities such as shape, pixel, location, or intensity.",computer-vision
70707c3d-3670-4b75-a618-800fa1b9d031,human-mesh-recovery,Human Mesh Recovery,Estimate 3D body mesh from images,computer-vision
db507efa-4854-4b0f-8dc7-3f994a0a03ee,stochastic-human-motion-prediction,Stochastic Human Motion Prediction,"Stochastic Human Motion Prediction assumes future stochasticity and therefore tackles the task from a generative point of view. Instead of predicting a single future, it predicts N possible futures.",computer-vision
deb901c7-d8fe-4e99-9f76-1cff72c79076,story-visualization,Story Visualization,,computer-vision
e12143a3-8c60-445c-b02c-e5fe126f48d7,trajectory-forecasting,Trajectory Forecasting,"Trajectory forecasting is a sequential prediction task, where a forecasting model predicts future trajectories of all moving agents (humans, vehicles, etc.) in a scene, based on their past trajectories and/or the scene context.
(Illustrative figure from [Social NCE: Contrastive Learning of Socially-aware Motion Representations](https://github.com/vita-epfl/social-nce))",computer-vision
8b1869c3-eced-433a-8268-f32fd557b10d,contour-detection,Contour Detection,"Object **Contour Detection** extracts information about the object shape in images.
Source: [Object Contour and Edge Detection with RefineContourNet ](https://arxiv.org/abs/1904.13353)",computer-vision
823b1a43-62c0-4ff2-b99f-306fd18bd9f9,action-unit-detection,Action Unit Detection,"Action unit detection is the task of detecting action units from a video - for example, types of facial action units (lip tightening, cheek raising) from a video of a face.
( Image credit: [AU R-CNN](https://arxiv.org/pdf/1812.05788v2.pdf) )",computer-vision
231ada00-fd49-445d-9239-42d2d942bcff,opd-single-view-3d-openable-part-detection,OPD: Single-view 3D Openable Part Detection,Detect the openable parts and predict their motion parameters from single-view image,computer-vision
5f846366-b478-41b0-8088-0af3250a2511,severity-prediction,severity prediction,,computer-vision
2f4d84de-6834-4118-b43a-f766d98ba7a9,active-object-detection,Active Object Detection,Active Learning for Object Detection,computer-vision
26b212c4-324c-45a3-8d02-24b4e7c45094,controllable-image-captioning,controllable image captioning,generate image captions conditioned on control signals,computer-vision
3109ec32-6b88-4e11-b7dc-0303bcbd51e7,image-similarity-detection,Image Similarity Detection,"A fundamental computer vision task to determine whether a part of an image has been copied from another image.
Description from: [The 2021 Image Similarity Dataset and Challenge](https://paperswithcode.com/paper/the-2021-image-similarity-dataset-and)
Image credit: [The 2021 Image Similarity Dataset and Challenge](https://paperswithcode.com/paper/the-2021-image-similarity-dataset-and)",computer-vision
626a2980-2f62-4c16-9c75-8f5ff99e0dc9,action-segmentation,Action Segmentation,"**Action Segmentation** is a challenging problem in high-level video understanding. In its simplest form, Action Segmentation aims to segment a temporally untrimmed video by time and label each segmented part with one of pre-defined action labels. The results of Action Segmentation can be further used as input to various applications, such as video-to-text and action localization.
Source: [TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation ](https://arxiv.org/abs/1705.07818)",computer-vision
75da5076-cfc2-47c8-bb6e-cd9afa2a32f7,ood-detection,Out of Distribution (OOD) Detection,"**Out of Distribution (OOD) Detection** is the task of detecting instances that do not belong to the distribution the classifier has been trained on. OOD data is often referred to as ""unseen"" data, as the model has not encountered it during training.
OOD detection is typically performed by training a model to distinguish between in-distribution (ID) data, which the model has seen during training, and OOD data, which it has not seen. This can be done using a variety of techniques, such as training a separate OOD detector, or modifying the model's architecture or loss function to make it more sensitive to OOD data.",computer-vision
8cecede1-d514-45f0-ac18-7ae9f483d0bc,landmark-tracking,Landmark Tracking,,computer-vision
4b1ecc9d-d83b-405a-8080-ecdb04f9f22f,semi-supervised-anomaly-detection,Semi-supervised Anomaly Detection,,computer-vision
cf5c271f-f125-4abe-b2cd-48a0c702cf8b,multi-modal-image-segmentation,Multi-modal image segmentation,,computer-vision
30020159-8fe3-4386-8f07-cae483c55f21,stereo-matching-1,Stereo Matching,"**Stereo Matching** is one of the core technologies in computer vision, which recovers 3D structures of real world from 2D images. It has been widely used in areas such as autonomous driving, augmented reality and robotics navigation. Given a pair of rectified stereo images, the goal of Stereo Matching is to compute the disparity for each pixel in the reference image, where disparity is defined as the horizontal displacement between a pair of corresponding pixels in the left and right images.
Source: [Adaptive Unimodal Cost Volume Filtering for Deep Stereo Matching ](https://arxiv.org/abs/1909.03751)",computer-vision
3298d446-446c-4d22-a369-abf4ffa55c70,image-animation,Image Animation,Image Animation is a field for image-animation of a source image by a driving video,computer-vision
e9e484a9-c09e-41c7-84e6-63ee9b1f9b8b,artist-classification,Artist classification,Classification of the artist for artistic images,computer-vision
38dc7e38-db33-4af7-b0dd-de72f31bbb15,point-cloud-classification,Point Cloud Classification,Point Cloud Classification is a task involving the classification of unordered 3D point sets (point clouds).,computer-vision
845f0a72-a09c-46b1-9ed8-c1dea4425b52,activity-prediction,Activity Prediction,Predict human activities in videos,computer-vision
64107bf5-9602-4547-9099-9c2f4a08df9e,image-retouching,Image Retouching,,computer-vision
19859d19-786c-459f-a0d4-40d451b810ef,drivable-area-detection,Drivable Area Detection,,computer-vision
abff794f-1f78-460c-8764-020b7530ead4,open-set-video-captioning,Open Set Video Captioning,,computer-vision
ad595902-4c0a-49ad-92b3-2782256127d7,synthetic-to-real-translation,Synthetic-to-Real Translation,"Synthetic-to-real translation is the task of domain adaptation from synthetic (or virtual) data to real data.
( Image credit: [CYCADA](https://arxiv.org/pdf/1711.03213v3.pdf) )",computer-vision
f3430181-2a5b-4c86-b0a4-8c0b93df4267,image-classification-shift-consistency,Classification Consistency,"How often two shifts of the same image are classified the same
( Image credit: [Antialiased CNNs](https://github.com/adobe/antialiased-cnns) )",computer-vision
53721f17-8ce1-4d93-8ac8-90d7b820700f,population-mapping,Population Mapping,,computer-vision
ab5799f4-8956-47f2-840c-597d02e22c3f,action-generation,Action Generation,,computer-vision
b886eef1-d2c3-4231-9ac6-7c4b13c17a35,scene-understanding,Scene Understanding,"Scene Understanding is the task of understanding a scene as a whole. For instance, the iPhone has a feature that helps visually impaired people take a photo by describing what the camera sees. This is an example of Scene Understanding.",computer-vision
afdd12c2-c0b2-4313-8413-ac48183caf68,emotion-classification,Emotion Classification,"Emotion classification, or emotion categorization, is the task of recognising emotions to classify them into the corresponding category. Given an input, classify it as 'neutral or no emotion' or as one, or more, of several given emotions that best represent the mental state of the subject's facial expression, words, and so on. Some example benchmarks include ROCStories, Many Faces of Anger (MFA), and GoEmotions. Models can be evaluated using metrics such as the Concordance Correlation Coefficient (CCC) and the Mean Squared Error (MSE).",computer-vision
10f590a0-4e89-4e6a-a4ff-1b656370593e,real-to-cartoon-translation,Real-to-Cartoon translation,Cartoonifying images,computer-vision
be191be4-6450-4688-ba05-f4ac982879c9,facial-action-unit-detection,Facial Action Unit Detection,"Facial action unit detection is the task of detecting action units from a video of a face - for example, lip tightening and cheek raising.
( Image credit: [Self-supervised Representation Learning from Videos for Facial Action Unit Detection](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Self-Supervised_Representation_Learning_From_Videos_for_Facial_Action_Unit_Detection_CVPR_2019_paper.pdf) )",computer-vision
5310fba6-572a-41a1-a153-75306ab759c8,nmr-j-coupling,NMR J-coupling,https://github.com/larsbratholm/champs_kaggle,graphs
66a98b41-c23e-4e51-8cf1-9376d95269dd,graph-clustering,Graph Clustering,"**Graph Clustering** is the process of grouping the nodes of the graph into clusters, taking into account the edge structure of the graph in such a way that there are several edges within each cluster and very few between clusters. Graph Clustering intends to partition the nodes in the graph into disjoint groups.
Source: [Clustering for Graph Datasets via Gumbel Softmax ](https://arxiv.org/abs/2005.02372)",graphs
e11db026-f44a-446d-8d2f-5f6646f2d4a9,inductive-link-prediction,Inductive Link Prediction,"In inductive link prediction, inference is performed on a new, unseen graph, whereas classical transductive link prediction performs both training and inference on the same graph.",graphs
a2431223-70a4-42da-adfb-d3dbf402bcbf,node-classification-on-non-homophilic,Node Classification on Non-Homophilic (Heterophilic) Graphs,"There exists a non-trivial set of graphs where graph-aware models underperform their corresponding graph-agnostic models, e.g. SGC and GCN underperform MLPs with 1 and 2 layers. Although still debated, the performance degradation is commonly attributed to heterophily, i.e. there are many more inter-class edges than intra-class edges. This task aims to evaluate models designed for non-homophilic (heterophilic) datasets.",graphs
2af84871-c264-4d7f-b7e5-a49361abac36,approximating-betweenness-centrality-ranking,Approximating Betweenness-Centrality ranking,Betweenness-centrality is a popular measure in network analysis that describes the importance of a node in a graph. It accounts for the fraction of shortest paths passing through that node and is a key measure in many applications including community detection and network dismantling.,graphs
9a8e8c38-cc5f-465c-879e-f7793b5e4392,spectral-graph-clustering,Spectral Graph Clustering,,graphs
09dfa89c-f5a7-4480-b697-44e94b8c06fa,steiner-tree-problem,Steiner Tree Problem,"The **Steiner tree problem** is a computational problem in computer science and graph theory that involves finding the minimum weight subgraph in an undirected graph that connects a given set of terminal vertices. The goal of the Steiner tree problem is to minimize the total weight of the edges in the subgraph, and it is considered NP-hard, meaning that finding the optimal solution is computationally difficult.",graphs
eca5ec82-950d-475d-81ad-94a6e1e94267,graph-sampling,Graph Sampling,Training GNNs or generating graph embeddings requires graph samples.,graphs
ea4e0648-d711-41f6-b4a3-9fadd81c5ceb,hyperedge-prediction,Hyperedge Prediction,,graphs
8e95ab44-798b-4ce1-ae47-ce9fe0f29ba7,heterogeneous-node-classification,Heterogeneous Node Classification,"Node classification in heterogeneous graphs, where nodes and/or edges have multiple types.",graphs
42fc2d06-5cf8-49ba-bebc-47e2326dac84,collaborative-ranking,Collaborative Ranking,,graphs
87cb5472-551d-4d66-92e8-496aa1f20a91,3d-hand-pose-estimation,3D Hand Pose Estimation,Image: [Zimmermann et al.](https://arxiv.org/pdf/1705.01389v3.pdf),graphs
947af7e2-84e9-4af0-92b2-cafc9592b23d,tree-decomposition,Tree Decomposition,"**Tree Decomposition** is a technique in graph theory and computer science for representing a graph as a tree, where each node in the tree represents a set of vertices in the original graph. The goal of tree decomposition is to divide the graph into smaller, more manageable pieces, and to use the tree to represent the relationships between these pieces.",graphs
694d0117-5b82-4afb-b943-6953e3390b93,graph-structure-learning,Graph structure learning,Semi-supervised node classification when a graph structure is not available.,graphs
6d5ef090-3823-4cc5-a8a4-e9ecb875f61f,calibration-for-link-prediction,Calibration for Link Prediction,,graphs
a04f74fa-61e7-45a9-b072-766e401ac6ff,hypergraph-embedding,hypergraph embedding,Compute useful representations of hyperedges and vertices,graphs
7c8675f5-6b0d-45ee-a1a0-240a5ed9bb6d,graph-embedding,Graph Embedding,"Graph embeddings learn a mapping from a network to a vector space, while preserving relevant network properties.
( Image credit: [GAT](https://github.com/PetarV-/GAT) )",graphs
6528a692-89fa-441c-9bc5-d4e97e186b3f,graph-mining,Graph Mining,,graphs
244313aa-288a-428c-a471-46a07127c876,topological-data-analysis,Topological Data Analysis,,graphs
1a4d8f4f-3571-4fe2-97ae-c69acb8365de,anchor-link-prediction,Anchor link prediction,,graphs
70dd0754-835d-4bfc-9174-1df4ad2169bd,graph-similarity,Graph Similarity,,graphs
acb770ca-b864-4f59-9f0a-7282f3ee7af0,node-classification,Node Classification,"**Node Classification** is a machine learning task in graph-based data analysis, where the goal is to assign labels to nodes in a graph based on the properties of nodes and the relationships between them.
**Node Classification** models aim to predict non-existing node properties (known as the target property) based on other node properties. Typical models used for node classification consist of a large family of graph neural networks. Model performance can be measured using benchmark datasets like [Cora](/dataset/cora), [Citeseer](/dataset/citeseer), and [Pubmed](/dataset/pubmed), among others, typically using Accuracy and F1.
( Image credit: [Fast Graph Representation Learning With PyTorch Geometric](https://arxiv.org/pdf/1903.02428v3.pdf) )",graphs
ead7fec8-bba2-4d69-b4e3-69b43ca469a3,network-community-partition,Network Community Partition,,graphs
ea98f16c-7f7d-4658-8eb9-76d89154c2ec,knowledge-base-completion,Knowledge Base Completion,"Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in the knowledge base. A knowledge base is a collection of relational facts, often represented in the form of ""subject"", ""relation"", ""object""-triples.",graphs
aa6da97c-8f4f-4aee-9cfb-75ea4b0d1378,graphon-estimation,Graphon Estimation,,graphs
43552e7e-9574-4093-8526-523851b0b37e,hyperedge-classification,hyperedge classification,,graphs
1157248e-64dc-43de-ae8e-375a0a542f34,graph-attention,Graph Attention,,graphs
9f637c23-b12d-4542-be82-1cfeae225ae8,clustering-ensemble,Clustering Ensemble,,graphs
abbd8f48-98d7-4f29-9daf-3c4630dc4c44,subgraph-counting,Subgraph Counting,,graphs
c7e52ce9-5344-43c6-8b75-05f357ea2512,graph-ranking,Graph Ranking,,graphs
7c30465a-b1fa-414b-88da-aa1ce4dff9ea,graph-anomaly-detection,Graph Anomaly Detection,,graphs
cd73e8f8-2dff-4664-9cdf-9454ede1c40f,link-prediction,Link Prediction,"**Link Prediction** is a task in graph and network analysis where the goal is to predict missing or future connections between nodes in a network. Given a partially observed network, the goal of link prediction is to infer which links are most likely to be added or missing based on the observed connections and the structure of the network.
( Image credit: [Inductive Representation Learning on Large Graphs](https://arxiv.org/pdf/1706.02216v4.pdf) )",graphs
2029d479-59d3-4268-861b-1bea40818994,triad-prediction,Triad Prediction,,graphs
2a74a889-7d03-4d32-b497-34a8dbf7c026,local-community-detection,Local Community Detection,,graphs
c328b218-fdf3-4f28-82c4-b6edd775336d,jet-tagging,Jet Tagging,"Jet tagging is the process of identifying the type of elementary particle that initiates a ""jet"", i.e., a collimated spray of outgoing particles. It is essentially a classification task that aims to distinguish jets arising from particles of interest, such as the Higgs boson or the top quark, from other less interesting types of jets.",graphs
0609a0c8-821b-421a-aae5-4658657a61af,learning-to-rank,Learning-To-Rank,"Learning to rank is the application of machine learning to build ranking models. Some common use cases for ranking models are information retrieval (e.g., web search) and news feed applications (think Twitter, Facebook, Instagram).",graphs
930f53a7-3113-45da-8cd3-bb7931eafc3e,dynamic-link-prediction,Dynamic Link Prediction,,graphs
c6c5a9e2-8bc6-4182-9329-ce41cdc65f04,feedback-vertex-set-fvs,Feedback Vertex Set (FVS),"The **Feedback Vertex Set (FVS)** problem is a computational problem in computer science and graph theory that involves finding the smallest possible subset of vertices in an undirected graph such that removing those vertices results in a graph that is acyclic, i.e., a forest. The goal of the FVS problem is to minimize the size of the feedback vertex set, and the problem is considered NP-hard, meaning that finding the optimal solution is computationally difficult. For directed graphs, a feedback vertex set is instead a subset of vertices whose removal results in a directed acyclic graph (DAG), not necessarily a forest. This task can refer either to finding a set of provably minimal size (in as little time as possible), or to a heuristic algorithm that produces small solutions quickly (but which may miss even smaller sets).",graphs
bf2495d3-7792-4b02-b5f4-f6324fb1e791,graph-learning,Graph Learning,,graphs
900824f2-7a99-4889-9213-97e4cdc87b09,person-centric-knowledge-graphs,Person-Centric Knowledge Graphs,,graphs
986b2d4e-1d47-4218-8dc8-2c84fc34474f,ancestor-descendant-prediction,Ancestor-descendant prediction,"Given two entities, make a binary prediction if they have ancestor-descendant relationship, based on existing and missing hierarchical edges in the graph.",graphs
1e34a026-563a-4fa4-a6b7-2dc923b5f553,link-sign-prediction,Link Sign Prediction,,graphs
97ad3338-e999-4dfc-b125-e9bc403b7bf7,physics-informed-machine-learning,Physics-informed machine learning,Machine learning used to represent physics-based and/or engineering models,graphs
526b74b5-880d-49c2-86ae-e0784d96c256,connectivity-estimation,Connectivity Estimation,,graphs
1633f6e6-fb71-4b7a-86af-97aa4e3bd510,triple-classification,Triple Classification,"Triple classification aims to judge whether a given triple (h, r, t) is correct or not with respect to the knowledge graph.",graphs
a654d886-83c7-4529-b3bf-a387adbf4497,set-to-graph-prediction,SET TO GRAPH PREDICTION,,graphs
2c28c1d3-77b0-4a10-affd-cf87f8d4eb27,structual-feature-correlation,Structural Feature Correlation,Evaluating the expressive power of GNNs in predicting the mutual correlation of structural features.,graphs
d0b27229-741a-4138-af17-15d913c22996,dynamic-community-detection,Dynamic Community Detection,community detection in dynamic networks,graphs
25349318-0e57-4818-8fb2-2e52df91c2e9,twinwidth-contraction-sequence,Twinwidth Contraction Sequence,"**Twinwidth Contraction Sequence** is a concept in graph theory and computer science that refers to a sequence of vertex contractions that transforms a graph into a single vertex. The goal is to find a contraction sequence with small ""twinwidth"", where the twinwidth of a sequence is the maximum number of ""error"" (red) edges incident to any vertex at any point during the contractions.",graphs
4060f475-c791-48ac-9ba4-b138c7ebc919,set-to-graph-prediction-1,Set-to-Graph Prediction,,graphs
c1d06cec-8225-486d-9c18-8ae84937e4a0,hypergraph-partitioning,hypergraph partitioning,,graphs
4815a368-597d-4e13-9a6e-4cd1c9c40dbf,graph-outlier-detection,Graph Outlier Detection,,graphs
a7aa4341-1e0b-41d9-8f4d-61ab60668104,link-property-prediction,Link Property Prediction,,graphs
f9f9fc56-23d1-4c25-8855-bc1f4a76f592,root-cause-ranking,Root Cause Ranking,Detection of causal anomalous nodes in graphs,graphs
1efd0d3c-39a2-4cbc-9b43-2cf4d43773ab,tree-map-layout,Tree Map Layout,Hierarchical tree visualisation by assigning sizes and positions of nodes. https://en.wikipedia.org/wiki/Treemapping,graphs
e4560a7a-e591-45a3-9e4d-7968bf4c2588,graph-construction,graph construction,,graphs
c90a1a1f-36c3-4fac-9a79-1d620eb162af,graph-to-graph-translation,Graph-To-Graph Translation,,graphs
93f4bce4-755f-44a6-adc8-8cc329a96b54,graph-property-prediction,Graph Property Prediction,,graphs
4b948139-6c0b-4f74-a535-87b131c2eb1c,hypergraph-matching,Hypergraph Matching,,graphs
579de427-b3ff-4b70-bc9f-be9458dba427,knowledge-graph-embedding,Knowledge Graph Embedding,,graphs
82069a75-744e-4661-ab33-efdd40f9d1a9,graph-regression,Graph Regression,The regression task is similar to graph classification but uses a different loss function and performance metric.,graphs
239732a6-9a99-42a3-b7c1-270399923bce,graph-generation,Graph Generation,"**Graph Generation** is an important research area with significant applications in drug and material designs.
Source: [Graph Deconvolutional Generation ](https://arxiv.org/abs/2002.07087)",graphs
271b0bd5-a654-4227-90e1-f81e045df078,graph-question-answering,Graph Question Answering,,graphs
2406b3a7-555d-4f14-bda8-57cf18c6172d,graph-classification,Graph Classification,"**Graph Classification** is a task that involves classifying a graph-structured data into different classes or categories. Graphs are a powerful way to represent relationships and interactions between different entities, and graph classification can be applied to a wide range of applications, such as social network analysis, bioinformatics, and recommendation systems. In graph classification, the input is a graph, and the goal is to learn a classifier that can accurately predict the class of the graph.
( Image credit: [Hierarchical Graph Pooling with Structure Learning](https://github.com/cszhangzhen/HGP-SL) )",graphs
0560b79b-68a7-41b2-b22a-7a78b4421c7c,structural-node-embedding,Structural Node Embedding,,graphs
3987b12d-03b9-4396-8067-1b6c73d9620d,stochastic-block-model,Stochastic Block Model,,graphs
d6e0e4be-cdc3-4816-b29d-9a032f801203,dynamic-graph-embedding,Dynamic graph embedding,,graphs
45606f00-2ddb-44fb-b50c-989f60b6f1c8,initial-structure-to-relaxed-energy-is2re,Initial Structure to Relaxed Energy (IS2RE),,graphs
9a0c33ef-4512-4a46-b618-a58c9b996e12,inductive-relation-prediction,Inductive Relation Prediction,Inductive setting of the knowledge graph completion task. This requires a model to perform link prediction on an entirely new test graph with a new set of entities.,graphs
aabb93bd-a7fc-4b49-91bd-fd4caf35268a,role-embedding,Role Embedding,,graphs
6215b6fd-97ec-4c8b-aa85-31007728789b,community-search,Community Search,,graphs
ccc0bbf0-eb9c-4e40-aebb-781a93ece4c2,graph-matching,Graph Matching,"**Graph Matching** is the problem of finding correspondences between two sets of vertices while preserving complex relational information among them. Since the graph structure has a strong capacity to represent objects and robustness to severe deformation and outliers, it is frequently adopted to formulate various correspondence problems in the field of computer vision. Theoretically, the Graph Matching problem can be solved by exhaustively searching the entire solution space. However, this approach is infeasible in practice because the solution space expands exponentially as the size of input data increases. For that reason, previous studies have attempted to solve the problem by using various approximation techniques.
Source: [Consistent Multiple Graph Matching with Multi-layer Random Walks Synchronization ](https://arxiv.org/abs/1712.02575)",graphs
3b9ed29a-f213-48e6-9c7e-61e2cbc79792,rubik-s-cube,Rubik's Cube,Solving the Rubik's Cube is a pathfinding task on a massive implicit graph.,graphs
7e8ced63-b811-4702-84d1-714f88c5adb6,gene-interaction-prediction,Gene Interaction Prediction,,graphs
f860939e-f8ab-4c3e-85a6-a9d00a1261e2,community-detection,Community Detection,"**Community Detection** is one of the fundamental problems in network analysis, where the goal is to find groups of nodes that are, in some sense, more similar to each other than to the other nodes.
Source: [Randomized Spectral Clustering in Large-Scale Stochastic Block Models ](https://arxiv.org/abs/2002.00839)",graphs
f2f21c29-a8e1-48d0-9057-6c61f4deed55,knowledge-graph-embeddings,Knowledge Graph Embeddings,,graphs
d0e92af6-d35b-4ef7-a092-935f3fc13c0f,graph-partitioning,graph partitioning,Graph Partitioning is generally the first step of distributed graph computing tasks. The targets are load balancing and minimizing the communication volume.,graphs
e8bae857-a087-48c6-8af0-fe41fbdb52ab,graph-nonvolutional-network,Graph Convolutional Network,,graphs
bd40ff90-a55c-47a9-86e3-bbb7fcc8c0f3,graph-reconstruction,Graph Reconstruction,,graphs
4313640c-baa4-4f13-928f-0937965fb4be,online-community-detection,Online Community Detection,,graphs
b5be9655-0a5f-4bf8-b559-bb53eea4c51d,multi-modal-entity-alignment,Multi-modal Entity Alignment,,knowledge-base
5f18b909-27f4-48fa-b2f6-89ebba240783,open-knowledge-graph-embedding,Open Knowledge Graph Embedding,,knowledge-base
29e3e3a2-1cfe-4ee7-beb1-3ef1d52a5577,inductive-knowledge-graph-completion,Inductive knowledge graph completion,,knowledge-base
a5c8b450-05ac-4e47-9206-d5a4d05109b5,knowledge-graphs-data-curation,Knowledge Graphs Data Curation,,knowledge-base
fc88b2e6-74e5-4f85-bd2e-b651de2c4265,ontology-matching,Ontology Matching,,knowledge-base
f1dfb10a-0cbb-46b9-9753-837aaf148ca5,entity-alignment,Entity Alignment,"**Entity Alignment** is the task of finding entities in two knowledge bases that refer to the same real-world object. It plays a vital role in automatically integrating multiple knowledge bases.
Note: results that have incorporated machine translated entity names (introduced in the RDGCN paper) or pre-alignment name embeddings are considered to have used **extra training labels** (both are marked with ""Extra Training Data"" in the leaderboard) and **do not adhere to a comparable setting** with others that have followed the original setting of the benchmark.
Source: [Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding ](https://arxiv.org/abs/1708.05045)
The task of entity alignment is related to the task of [entity resolution](https://paperswithcode.com/task/entity-resolution) which focuses on matching structured entity descriptions in different contexts.",knowledge-base
9419e0db-89df-4907-8af6-34158f75e6b3,linear-mode-connectivity,Linear Mode Connectivity,"**Linear Mode Connectivity** refers to the property that two trained networks (for example, two SGD runs from the same initialization, or copies of a model fine-tuned differently) can be connected by a linear path in weight space along which the loss remains low. Studying linear mode connectivity helps characterize the loss landscape of neural networks and underpins techniques such as weight averaging and model merging (re-basin).",knowledge-base
b8c47ae1-4f37-43b2-a79f-5251935cfa40,relational-pattern-learning,Relational Pattern Learning,Learning and extracting the hidden patterns among the relations in a Knowledge Graph.,knowledge-base
3583b9cd-ed1a-4c49-a4f8-04f5c6c54f44,knowledge-graph-completion,Knowledge Graph Completion,"Knowledge graphs $G$ are represented as a collection of triples $\\{(h, r, t)\\}\subseteq E\times R\times E$, where $E$ and $R$ are the entity set and relation set. The task of **Knowledge Graph Completion** is to either predict unseen relations $r$ between two existing entities: $(h, ?, t)$ or predict the tail entity $t$ given the head entity and the query relation: $(h, r, ?)$.
Source: [One-Shot Relational Learning for Knowledge Graphs ](https://arxiv.org/abs/1808.09040)",knowledge-base
1949bc1b-eb10-42e1-b087-c72e1abb71f0,complex-query-answering,Complex Query Answering,"This task is concerned with answering complex queries over incomplete knowledge graphs. In the simplest case, the task reduces to link prediction: a 1-hop query for predicting the existence of an edge between a pair of nodes. Complex queries are concerned with other structures between nodes, such as 2-hop and 3-hop paths, and intersecting paths with intermediate variables.",knowledge-base
3e22b85f-17c0-4248-b320-b33d00eb1dde,multi-hop-question-answering,Multi-hop Question Answering,,knowledge-base
3be2b217-b403-4c03-baaa-1db62fcedb54,rdf-dataset-discovery,RDF Dataset Discovery,"Given a URI, find the RDF datasets containing this URI.",knowledge-base
d439835e-0681-4d27-aaab-679a8d01daa3,knowledge-graphs,Knowledge Graphs,,knowledge-base
4c2837e3-c35b-4dc0-a709-4d071c08d8bf,ontology-embedding,Ontology Embedding,,knowledge-base
16b1806e-74b4-411b-a1c3-645fdc655226,math-word-problem-solving,Math Word Problem Solving,,knowledge-base
140b0f59-cb2c-4b52-a9b4-dc93cdb45d2b,table-annotation,Table annotation,"**Table annotation** is the task of annotating a table with terms/concepts from knowledge graph or database schema. Table annotation is typically broken down into the following five subtasks:
1. Cell Entity Annotation ([CEA](https://paperswithcode.com/task/cell-entity-annotation))
2. Column Type Annotation ([CTA](https://paperswithcode.com/task/column-type-annotation))
3. Column Property Annotation ([CPA](https://paperswithcode.com/task/columns-property-annotation))
4. [Table Type Detection](https://paperswithcode.com/task/table-type-detection)
5. [Row Annotation](https://paperswithcode.com/task/row-annotation)
The [SemTab](http://www.cs.ox.ac.uk/isg/challenges/sem-tab/) challenge is closely related to the Table Annotation problem. It is a yearly challenge which focuses on the first three tasks of table annotation and its purpose is to benchmark different table annotation systems.",knowledge-base
f19694e1-03dd-4f1f-bcbc-07914d14ba07,commonsense-knowledge-base-construction,Commonsense Knowledge Base Construction,,knowledge-base
069eca5c-91e0-47ca-9a82-cd6b11cd5a7c,data-integration,Data Integration,,knowledge-base
6719e329-e2f8-4506-bb5a-bfd6faebe920,manufacturing-simulation,Manufacturing simulation,Simulation of manufacturing systems for applying AI methods and big data analysis,knowledge-base
62835176-7dc2-4f3a-b6d6-3fe8bd3c65af,temporal-knowledge-graph-completion,Temporal Knowledge Graph Completion,,knowledge-base
47e24292-8172-4dbb-80e5-1c91d6a18139,models-alignment,Models Alignment,"**Models Alignment** is the process of ensuring that multiple models used in a machine learning system are consistent with each other and aligned with the goals of the system. This involves defining clear and consistent objectives for each model, identifying and addressing any inconsistencies or biases in the data used to train each model, testing and validating each model to ensure its accuracy, and ensuring that the predictions and decisions made by each model are consistent and aligned with the overall goals of the system.",knowledge-base
badd7cbb-e405-4c9a-b1df-38c30bfe5cba,open-knowledge-graph-canonicalization,Open Knowledge Graph Canonicalization,"Open Information Extraction approaches lead to the creation of large Knowledge Bases (KBs) from the web. The problem with such methods is that their entities and relations are not canonicalized, which leads to the storage of redundant and ambiguous facts. For example, an Open KB storing the triples *(Barack Obama, took birth in, Honolulu)* and *(Obama, was born in, Honolulu)* doesn't know that *Barack Obama* and *Obama* mean the same entity. Similarly, *took birth in* and *was born in* also refer to the same relation. The problem of Open KB canonicalization involves identifying groups of equivalent entities and relations in the KB.
( Image credit: [CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information](https://github.com/malllabiisc/cesi) )",knowledge-base
e3331585-23cf-4bae-9450-fa61af692ec9,causal-inference,Causal Inference,"Causal inference is the task of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect.
( Image credit: [Recovery of non-linear cause-effect relationships from linearly mixed neuroimaging data](https://arxiv.org/pdf/1605.00391v2.pdf) )",knowledge-base
a7c4bc04-c746-44d9-af7c-68da13e050a0,breast-cancer-detection,Breast Cancer Detection,,knowledge-base
09443fb8-d049-4a67-8ba0-d5d09f0102fb,symbolic-regression,Symbolic Regression,"Producing a mathematical expression (symbolic expression) that fits given tabular data.",knowledge-base
384de012-99db-47bb-a48c-508a22313181,adversarial-attack-detection,Adversarial Attack Detection,The detection of adversarial attacks.,knowledge-base
a481f83d-9e7e-49aa-8301-94cf9b95b65f,causal-discovery,Causal Discovery,"( Image credit: [TCDF](https://github.com/M-Nauta/TCDF) )",knowledge-base
e8decc77-bedd-40d4-8824-b82408eef78b,multi-modal-knowledge-graph,Multi-modal Knowledge Graph,,knowledge-base
c37e0b7d-367e-4045-8ec1-ee0da8292634,knowledge-base-completion,Knowledge Base Completion,"Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in the knowledge base. A knowledge base is a collection of relational facts, often represented in the form of ""subject"", ""relation"", ""object""-triples.",knowledge-base
a51d8c19-1128-4d2a-9c4c-df95ea6a18e3,non-intrusive-load-monitoring,Non-Intrusive Load Monitoring,,knowledge-base
69a73db1-334e-42ef-8628-fc647359ae86,ontology-subsumption-inferece,Ontology Subsumption Inference,,knowledge-base
06c4fb00-d558-4728-ace5-46559ab54502,re-basin,Re-basin,,knowledge-base
3cf679cd-e6c7-4a08-bd9a-b47d60389fba,skin-lesion-segmentation,Skin Lesion Segmentation,,medical
c97e7412-7f19-4828-b77c-9bfedcfcfab8,pulmonary-nodules-classification,Pulmonary Nodules Classification,,medical
2fde71f9-0823-46f6-a8ab-76347979731d,muscular-movement-recognition,Muscular Movement Recognition,,medical
54f6cf0e-685d-4264-99b8-5d139ac2e0c2,medical-x-ray-image-segmentation,Medical X-Ray Image Segmentation,,medical
1ad87ea4-940c-48ec-bb80-7604f6e4ef85,sleep-stage-detection,Sleep Stage Detection,Human Sleep Staging into W-N1-N2-N3-REM classes from multiple or single polysomnography signals,medical
0aa5849e-5d7e-4ed4-90f4-85154bc2d771,blood-pressure-estimation,Blood pressure estimation,,medical
386ff1e1-18ab-4fde-b2f9-8f988fa9634d,magnetic-resonance-fingerprinting,Magnetic Resonance Fingerprinting,,medical
436b56fd-e558-4bd3-acf7-0c42fab5d23a,splice-site-prediction,Splice Site Prediction,,medical
0b15f9b6-450e-4631-a8af-8e438f18a818,patient-outcomes,Patient Outcomes,,medical
fa682aaa-e8a5-4455-95e8-ff7f2ae9fe25,drug-discovery,Drug Discovery,"Drug discovery is the task of applying machine learning to discover new candidate drugs.
( Image credit: [A Turing Test for Molecular Generators](https://pubs.acs.org/doi/10.1021/acs.jmedchem.0c01148) )",medical
fcfe7343-1003-4742-87a7-13100072fc32,covid-variant-prediction,Covid Variant Prediction,,medical
1f0c0c0f-bfb9-49c7-9422-db5ed08d020f,cardiac-segmentation,Cardiac Segmentation,,medical
3b296d7c-9636-47d5-91e9-506de7c6d694,tomography,Tomography,,medical
5a669622-c71c-4a28-bae9-48b558dd7703,medical-waveform-analysis,Medical waveform analysis,"Information extraction from medical waveforms such as the electrocardiogram (ECG), arterial blood pressure (ABP), central venous pressure (CVP), and photoplethysmogram (PPG, Pleth).",medical
c827b2ee-2ed5-43d2-b103-06dd850c5864,quantum-state-tomography,Quantum State Tomography,,medical
3b69126d-3d3b-4cda-89a2-df17d84f64ef,molecular-dynamics,Molecular Dynamics,,medical
7ea32c37-cb83-4d86-b992-c07eefa105e0,multi-focus-microscopical-images-fusion,Multi-Focus Microscopical Images Fusion,,medical
9bbcdf1f-4877-4c96-8491-e1b6d130f9c0,photoplethysmogram-simulation,Photoplethysmogram simulation,Simulating photoplethysmogram (PPG) signals,medical
1f5d9044-ef13-47b1-90d2-3e592f65ba08,medical-image-retrieval,Medical Image Retrieval,,medical
9d8cb44a-7c2b-4144-ba10-454ec856eb5a,myocardial-infarction-detection,Myocardial infarction detection,,medical
1383bbbc-e621-48d1-830e-8f7c209cdaa5,mammogram,Mammogram,,medical
5b62d5c8-d556-4272-a330-cd3fb30620e6,blood-cell-detection,Blood Cell Detection,,medical
81fd8879-ee35-4b22-9f86-88a7f7d64ce9,deformable-medical-image-registration,Deformable Medical Image Registration,,medical
5b51bd11-8f24-40a6-a6e3-7154ab904d9a,seizure-prediction,Seizure prediction,,medical
1aeffc0b-e819-41da-b2d9-b6e4161533a9,blind-docking,Blind Docking,,medical
8159dc16-6e5e-4b8e-85d0-6a3ef07b9839,cervical-nucleus-detection,Cervical Nucleus Detection,,medical
ee81a97f-9bee-4192-95b3-e4ecfb2ebc18,epilepsy-prediction,Epilepsy Prediction,,medical
c638c42e-fb0f-460e-9402-8fcc56a05091,malaria-risk-exposure-prediction,Malaria Risk Exposure Prediction,,medical
819cdbc2-2e3e-4da8-8680-fd57fcc5305e,multimodal-sleep-stage-detection,Multimodal Sleep Stage Detection,"Using multiple modalities such as EEG+EOG, EEG+HR instead of just relying on EEG (polysomnography)",medical
e35d4a28-5379-4f18-9ffa-9e5a054f8661,spo2-estimation,SpO2 estimation,Estimation of blood oxygen saturation (SpO2),medical
84fd52e3-b75f-4077-a0c1-37fcf613733d,mortality-prediction,Mortality Prediction,"( Image credit: [Early hospital mortality prediction using vital signals](https://arxiv.org/pdf/1803.06589v2.pdf) )",medical
2d91bb37-db71-40d1-b3fe-aa408b9563d9,alzheimer-s-disease-detection,Alzheimer's disease detection,Detection of Alzheimer's disease framed as a classification task,medical
96a52f53-efce-474e-8b09-66322cebf205,cervical-cancer-biopsy-identification,Cervical cancer biopsy identification,,medical
5bc8d803-d99c-4406-93a9-23f8df538eef,domain-motif-prediction,Domain/Motif Prediction,"The domain prediction task asks language models to identify the domain type within a given protein sequence, where a domain is defined as a compact folded three-dimensional structure.",medical
885ff924-2c0d-47eb-9d6e-65059247896d,finding-pulmonary-nodules-in-large-scale-ct,Finding Pulmonary Nodules In Large-Scale CT Images,,medical
c775f6f9-b660-461e-9c57-256edbd0ed0a,protein-language-model,Protein Language Model,,medical
304a1887-0e9a-4f17-804e-72fe49ba9f35,sleep-quality-prediction,Sleep Quality,"( Image credit: [DeepSleep](https://github.com/GuanLab/DeepSleep) )",medical
388a9d58-569b-4c03-84cd-591ce8c56739,joint-vertebrae-identification-and,Joint Vertebrae Identification And Localization In Spinal CT Images,,medical
c282ece8-5bd1-4e10-a92f-332414574cc3,pulmonary-embolism-detection,Pulmonary Embolism Detection,,medical
6a608e47-c2dc-4616-a791-9bdb626d6489,acute-stroke-lesion-segmentation,Acute Stroke Lesion Segmentation,,medical
0424ea2c-a1eb-4dea-88f7-22b445a1c0ae,3d-medical-imaging-segmentation,3D Medical Imaging Segmentation,"3D medical imaging segmentation is the task of segmenting medical objects of interest from 3D medical imaging.
( Image credit: [Elastic Boundary Projection for 3D Medical Image Segmentation](https://github.com/twni2016/Elastic-Boundary-Projection) )",medical
bb0f7be2-7cf0-4fa1-8c57-2c25d47a5b75,whole-mammogram-classification,Whole Mammogram Classification,,medical
85e9f33c-5a92-42de-99b7-f76bf4cd6ff6,cervical-spondylosis-identification,Cervical Spondylosis Identification,,medical
15a33d00-60ce-462a-9ec7-347159af2b5c,photoplethysmography-ppg-heart-rate,Photoplethysmography (PPG) heart rate estimation,Estimating heart rate from the photoplethysmogram (PPG) signal,medical
5553a985-1d9f-4f7e-be36-f6c3853a3c42,chemical-reaction-prediction,Chemical Reaction Prediction,,medical
b845cae2-06c8-4457-956a-0435ca947235,breast-density-classification,breast density classification,,medical
a030c631-0245-4cd7-8332-ad92eeac567a,skin-cancer-segmentation,Skin Cancer Segmentation,,medical
20d88d79-98e0-46f4-83c3-2955fcf626ce,atrial-fibrillation,Atrial Fibrillation,,medical
3698df3c-5ec7-4e15-b66d-8f328d6a3741,sequential-diagnosis,Sequential Diagnosis,,medical
ee26f36c-10dc-4765-b5c1-e976110573d2,molecule-captioning,Molecule Captioning,,medical
8a513eea-b331-4d95-946f-e442863aabf0,protein-secondary-structure-prediction,Protein Secondary Structure Prediction,,medical
7646c014-3726-400c-9dff-65cbc8e69dd6,bladder-segmentation,Bladder Segmentation,,medical
edb8bfac-483d-4e28-80ed-802f4b70f3eb,promoter-detection,Promoter Detection,,medical
3736c7c9-3dae-48bb-90e8-a072d4a0a8ad,photoplethysmography-ppg,Photoplethysmography (PPG),"**Photoplethysmography (PPG)** is a non-invasive light-based method that has been used since the 1930s for monitoring cardiovascular activity.
Source: [Non-contact transmittance photoplethysmographic imaging (PPGI) for long-distance cardiovascular monitoring ](https://arxiv.org/abs/1503.06775)",medical
55223abb-0efc-4ca3-9ed1-036f5478fce2,k-complex-detection,K-complex detection,,medical
78b6e0dc-ae15-4adc-bad8-83e0914e94df,skin-cancer-classification,Skin Cancer Classification,,medical
6d605164-b616-4565-bbe8-33d4d5a7489a,protein-design,Protein Design,"Formally, given the design requirements of users, models are required to generate protein amino acid sequences that align with those requirements.",medical
60e60d5f-f1b0-4449-bb87-8a53549f3007,chemical-entity-recognition,Chemical Entity Recognition,"Chemical Entity Recognition (CER) is a fundamental task in biomedical text mining and Natural Language Processing (NLP). It involves the identification and classification of chemical entities in textual data, such as scientific literature. These entities can encompass a broad range of concepts including chemical compounds, drugs, elements, ions or functional groups. Given the complexity and variety of chemical nomenclature, the CER task represents a significant challenge for LLMs, and their performance in this task can provide important insights into their overall capabilities in the biomedical domain.",medical
3da332e7-19cf-4731-ab52-fcb96c5565c7,breast-cancer-detection,Breast Cancer Detection,,medical
9d397feb-fe16-428e-b5da-2f6a6ef7f2b4,forward-reaction-prediction,Forward reaction prediction,"Forward reaction prediction pertains to the anticipatory determination of the probable product(s) of a chemical reaction, given specific reactants and reagents.
This facilitates the optimization of research and development methodologies, curbs the extent of experimental speculation, and endorses greener chemistry practices by mitigating waste production.",medical
1ed4fd86-85a6-4b4d-84af-4b0765b1f207,diabetic-foot-ulcer-detection,Diabetic Foot Ulcer Detection,,medical
44c345eb-a32d-4c72-a571-8cf217c0b1fc,knee-osteoarthritis-prediction,Knee Osteoarthritis Prediction,,medical
12bfecfc-fb2a-47b5-b82a-57ad9d11ba0e,muscle-force-prediction,Muscle Force Prediction,,medical
3d1e7f90-7fc7-49b8-80b0-f96afeda4b32,synthesizing-multi-parameter-magnetic,Synthesizing Multi-Parameter Magnetic Resonance Imaging (mp-MRI) Data,,medical
97964f56-d8ca-44e9-82c6-122ffb43b53f,emg-signal-prediction,EMG Signal Prediction,,medical
4182f722-3f9b-44ff-88e2-41d8fc0a48dc,heart-rate-variability,Heart Rate Variability,Heart rate variability (HRV) is the physiological phenomenon of variation in the time interval between heartbeats. It is measured by the variation in the beat-to-beat interval.,medical
c3393552-11f3-4793-9546-6697ec6125d4,lung-cancer-diagnosis,Lung Cancer Diagnosis,,medical
ba581b38-d886-4ae2-9154-13eb0fcafc32,shadow-confidence-maps-in-ultrasound-imaging,Shadow Confidence Maps In Ultrasound Imaging,,medical
2e56a83f-f6d5-4096-8d9d-4d80fcb2387b,covid-19-detection,COVID-19 Diagnosis,Covid-19 Diagnosis is the task of diagnosing the presence of COVID-19 in an individual with machine learning.,medical
82dc7497-cecf-4067-9906-f138e843df3e,mental-arithmetic-task,mental arithmetic task,,medical
56d97395-5e5f-4eeb-a3b3-1830162f0f64,skin-lesion-classification,Skin Lesion Classification,,medical
746f43d1-86b1-4b82-b512-7b8c3f1e0c4f,transcription-factor-binding-site-prediction,Transcription Factor Binding Site Prediction,,medical
6e302eb5-f4fb-4e75-81c1-adf72a39ceb6,medical-super-resolution,Medical Super-Resolution,,medical
0a4318bb-498b-4cc1-8c66-474c86a3e822,medical-procedure,Medical Procedure,Predicting medical procedures performed during a hospital admission.,medical
e583888b-a5b4-40bf-af9b-c9a121ecae96,ischemic-stroke-lesion-segmentation,Ischemic Stroke Lesion Segmentation,,medical
edf232af-ff6d-4e47-9927-0e33d92f04cb,chemical-protein-interaction-extraction,Chemical-Protein Interaction Extraction,"The models are presented with excerpts from scientific literature and are required to not only identify distinct chemicals within the text but also to discern the specific nature of the interactions between them. This could involve, for instance, determining regulatory relationships between identified ligands and proteins.",medical
1b6b8c7a-4a8a-4dbe-b0f1-6de4495447c9,diffeomorphic-medical-image-registration,Diffeomorphic Medical Image Registration,"Diffeomorphic mapping is the underlying technology for mapping and analyzing information measured in human anatomical coordinate systems which have been measured via medical imaging. Diffeomorphic mapping is a broad term that refers to a number of different algorithms, processes, and methods. It is attached to many operations and has many applications for analysis and visualization. Diffeomorphic mapping can be used to relate various sources of information which are indexed as a function of spatial position as the key index variable. Diffeomorphisms are structure-preserving transformations which are in turn differentiable and therefore smooth, allowing for the calculation of metric-based quantities such as arc length and surface area. Spatial location and extents in human anatomical coordinate systems can be recorded via a variety of medical imaging modalities, generally termed multi-modal medical imagery, providing scalar and/or vector quantities at each spatial location.
( Image credit: [Quicksilver](https://arxiv.org/pdf/1703.10908.pdf) )",medical
77a0cf58-4a2b-444b-9348-4ec7cf20b1a4,colorectal-gland-segmentation,Colorectal Gland Segmentation,,medical
d0695b53-cb8e-4d43-9f32-b8541539b9d9,cbct-artifact-reduction,CBCT Artifact Reduction,,medical
b6c29261-f12a-4314-a7c2-196d43573eb5,ventricular-fibrillation-detection,Ventricular fibrillation detection,,medical
adb61dfa-7eb5-4dc4-bd7d-42aa2b5a03d4,protein-complex-prediction,Protein complex prediction,,medical
b45702e9-328a-4fad-a988-fa6295a859f8,splenomegaly-segmentation-on-multi-modal-mri,Splenomegaly Segmentation On Multi-Modal MRI,,medical
4f9bd2b2-836b-424e-83b1-5853ba6c17e2,arrhythmia-detection,Arrhythmia Detection,,medical
d905a53f-8eb7-4f6b-b74f-1a100f66cf62,breast-tumour-classification,Breast Tumour Classification,,medical
9471a681-f14a-4dd7-9d2c-b7916b75801f,medical-concept-normalization,Medical Concept Normalization,,medical
47e9c4d2-cbba-48ef-98fe-20f41a2b72e6,multi-diseases-detection,Multi Diseases Detection,,medical
03e57508-dfbe-4715-8cc4-3676281b8251,semantic-segmentation-of-orthoimagery,Semantic Segmentation Of Orthoimagery,,medical
8edec621-4738-4492-9a02-6a0eceb4696a,drug-response-prediction,Drug Response Prediction,,medical
a165778b-0bc2-434f-a637-cdb43a48e6d8,medical-report-generation,Medical Report Generation,"Medical report generation (MRG) is a task that focuses on training AI to automatically generate a professional report from input image data. This can help clinicians make faster and more accurate decisions, since writing reports is both time consuming and error prone even for experienced doctors.
Deep neural networks and transformer-based architectures are currently the most popular methods for this task; however, when pre-trained models are transferred into this domain, their performance often degrades.
The following are some of the reasons why MRG is hard for pre-trained models:
* Language datasets in a particular domain can sometimes be quite different from the large number of datasets available on the Internet
* During the fine-tuning phase, datasets in the medical field are often unevenly distributed
More recently, multi-modal learning and contrastive learning have shown some inspiring results in this field, but it's still challenging and requires further attention.
Here are some additional readings to go deeper on the task:
* On the Automatic Generation of Medical Imaging Reports: [https://doi.org/10.48550/arXiv.1711.08195](https://doi.org/10.48550/arXiv.1711.08195)
* A scoping review of transfer learning research on medical image analysis using ImageNet: [https://arxiv.org/abs/2004.13175](https://arxiv.org/abs/2004.13175)
* A Survey on Incorporating Domain Knowledge into Deep Learning for Medical Image Analysis: [https://arxiv.org/abs/2004.12150](https://arxiv.org/abs/2004.12150)
(Image credit: Transformers in Medical Imaging: A Survey)",medical
cb09b7f3-8c75-4182-890c-a1260790a593,immune-repertoire-classification,Immune Repertoire Classification,,medical
2781bd80-8ae6-46c0-ae4c-dd3ee830c9cb,lifetime-image-denoising,lifetime image denoising,,medical
09ad8cf7-24e3-4e3e-af99-8300cce37c81,medical-relation-extraction,Medical Relation Extraction,Biomedical relation extraction is the task of detecting and classifying semantic relationships from biomedical text.,medical
fd4c32e7-d2e3-426a-ac9d-f0a6f50679af,als-detection,ALS Detection,,medical
2d64008a-f33a-45c2-8d91-78c243d0601e,brain-decoding,Brain Decoding,"**Motor Brain Decoding** is a fundamental task for building motor brain-computer interfaces (BCIs).
Progress in predicting finger movements from brain activity allows us to restore motor functions and improve the rehabilitation process of patients.",medical
4f282ff1-036e-44dd-b9e6-9ae3113b9c97,seizure-detection,Seizure Detection,"**Seizure Detection** is a binary supervised classification problem with the aim of classifying between seizure and non-seizure states of a patient.
Source: [ResOT: Resource-Efficient Oblique Trees for Neural Signal Classification ](https://arxiv.org/abs/2006.07900)",medical
b362a860-88a8-4080-b185-e1f687959e06,catalytic-activity-prediction,Catalytic activity prediction,"The EC number, a numerical classification system for enzymes hinging on the chemical reactions they catalyze, is substituted with the corresponding reaction. This substitution aims to leverage the tacit knowledge ingrained in pre-trained language models, thereby encouraging the model to predict the reaction itself rather than the mere EC number.",medical
fba004fb-a18a-414f-b27d-472ac689adf6,surgical-skills-evaluation,Surgical Skills Evaluation,The task is to classify surgical skills using data that is recorded during the surgical intervention.,medical
a35be9d6-2cb7-45ef-ac9e-46f1b8fc17c0,tumour-classification,Tumour Classification,,medical
7d66fa36-7f1f-4627-bc07-b73d9ca3a912,remaining-length-of-stay,Remaining Length of Stay,Continuous prediction of the remaining ICU stay duration.,medical
ec493500-5e41-4e36-a121-509c29d0b289,dna-analysis,DNA analysis,,medical
719d8264-0f53-464a-937b-1ed50ed99fca,optic-disc-detection,Optic Disc Detection,Region proposal for optic disc,medical
c5ae5811-edbe-4ca2-8208-b4bdc1b4173c,optic-cup-segmentation,Optic Cup Segmentation,"Optic cup segmentation, concentric with optic disc, useful for glaucoma management (ophthalmology)",medical
9858ca6c-cf95-4ecc-817a-dee37127f81e,transcription-factor-binding-site-prediction-1,Transcription Factor Binding Site Prediction (Human),,medical
358c1b00-6da0-4b37-800b-c68615c9dc0d,multi-label-classification-of-biomedical,Multi-Label Classification Of Biomedical Texts,,medical
d35986b3-351d-4e4b-bba3-0963ead033fd,lung-nodule-detection,Lung Nodule Detection,,medical
b342a264-e0dc-4cc2-af25-8afe40069c33,synthetic-data-generation,Synthetic Data Generation,The generation of tabular data by any means possible.,medical
e5b1560e-ac61-4235-a0be-343a2d19513c,multiple-sequence-alignment,Multiple Sequence Alignment,,medical
052ca545-4c49-448f-a614-c43fc60eed1f,ecg-qrs-detection,ECG QRS Detection,,medical
363f1b84-0bc2-4ad5-bb58-ff3ebd0f24ab,breast-cancer-histology-image-classification,Breast Cancer Histology Image Classification,,medical
c211499b-d760-4729-96ce-81d1f5c8f4bd,single-cell-modeling,Single-cell modeling,Single Cell RNA sequencing (scRNAseq) revolutionized our understanding of the fundamentals of life sciences. The technology enables an unprecedented resolution to study heterogeneity in cell populations and their functionalities.,medical
88837ff2-d93d-4833-944f-e48431119354,emg-gesture-recognition,EMG Gesture Recognition,Electromyographic Gesture Recognition,medical
f78cec1c-be39-4d39-8036-6c067d9a29df,radiologist-binary-classification,Radiologist Binary Classification,"This task measures a radiologist's performance in distinguishing between generated (e.g. with a GAN, VAE, etc.) and real images, attesting to the high visual quality of the synthesized images and to their potential use in advancing and facilitating downstream medical tasks.",medical
3453c7f2-8f1b-4339-9dc3-81516939376c,disease-trajectory-forecasting,Disease Trajectory Forecasting,,medical
4cf8b89a-c419-490e-934e-b49e5c67aef5,multi-tissue-nucleus-segmentation,Multi-tissue Nucleus Segmentation,,medical
4c45366b-2624-40d0-9682-b6e2039be1d2,prediction-of-cancer-cell-line-sensitivity,Prediction Of Cancer Cell Line Sensitivity,,medical
166ae789-7161-4c7e-9fbe-cdd5d9ba5a9c,nuclear-segmentation,Nuclear Segmentation,,medical
31231ddd-2792-4c91-b9a3-3495e367a5a5,respiratory-motion-forecasting,Respiratory motion forecasting,Respiratory motion forecasting to compensate for the latency of radiotherapy treatment systems and target chest tumors more accurately.,medical
a3007e7e-9fdb-4074-aa3a-24a2ed1d3123,brain-image-segmentation,Brain Image Segmentation,,medical
e03af18e-b1ba-42cb-ba09-60409598c652,infant-brain-mri-segmentation,Infant Brain MRI Segmentation,,medical
55170325-8cf5-460f-b764-09be7e545df1,brain-segmentation,Brain Segmentation,"( Image credit: [3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study](https://github.com/josedolz/LiviaNET) )",medical
93ee0bcb-05aa-471b-9a29-8beaa5761575,mapping-of-lung-nodules-in-low-dose-ct-images,Mapping Of Lung Nodules In Low-Dose CT Images,,medical
002b7777-7458-4fe6-aa90-cbdbe4ec0918,organ-detection,Organ Detection,,medical
6438fe35-fb5e-480b-a3b2-b18465602554,heart-rate-estimation,Heart rate estimation,RR interval detection and R peak detection from QRS complex,medical
129eec55-71a9-4515-acde-16432db91e61,transcription-factor-binding-site-prediction-2,Transcription Factor Binding Site Prediction (Mouse),,medical
166ceac7-35f6-4534-a889-e321d05daf71,cancer,Cancer,,medical
9e178d6a-2923-4aae-91d3-e67992348e97,computational-phenotyping,Computational Phenotyping,"**Computational Phenotyping** is the process of transforming the noisy, massive Electronic Health Record (EHR) data into meaningful medical concepts that can be used to predict the risk of disease for an individual, or the response to drug therapy.
Source: [Privacy-Preserving Tensor Factorization for Collaborative Health Data Analysis ](https://arxiv.org/abs/1908.09888)",medical
f0037ce3-dc56-4d6e-a3e5-5f14a4b1535d,molecular-docking,Molecular Docking,"Predicting the binding structure of a small molecule ligand to a protein, which is critical to drug design.
Description from: [DiffDock: Diffusion Steps, Twists, and Turns for Molecular Docking](https://paperswithcode.com/paper/diffdock-diffusion-steps-twists-and-turns-for)",medical
b465b0fb-561a-4531-9b05-76c118c7ee06,retrosynthesis,Retrosynthesis,"Retrosynthetic analysis is a pivotal synthetic methodology in organic chemistry that employs a reverse-engineering approach, initiating from the target compound and retroactively tracing potential synthesis routes and precursor molecules.
This technique proves instrumental in sculpting efficient synthetic strategies for intricate molecules, thus catalyzing the evolution and progression of novel pharmaceuticals and materials.",medical
659c5d25-943d-4f85-b6fe-d5079f5c7235,iris-segmentation,Iris Segmentation,,medical
befbb3c7-a927-4b89-a897-0bdbe624c9c3,discovery-of-integrative-cancer-subtypes,Discovery Of Integrative Cancer Subtypes,,medical
625d12c4-314e-4d7a-8cf1-2f271ea5ca23,icu-mortality,ICU Mortality,Prediction of a patient's mortality in the Intensive Care Unit (ICU) given the first hours of their Electronic Health Record (EHR).,medical
1a07d0a1-5de6-48e1-85a6-a2958081cdf1,medical-code-prediction,Medical Code Prediction,"Context: Prediction of medical codes from clinical notes is a practical and essential need for every healthcare delivery organization within current medical systems. Automating annotation saves the significant time and effort spent by human coders today. Reaching parity with human coders' performance in medical code prediction would mark a meaningful step toward fully autonomous medical coding.
Question: What exactly is the medical code prediction problem?
Answer: Clinical notes contain much information about what precisely happened during the patient's entire stay. Those clinical notes (e.g., the discharge summary) are typically long, loosely structured, written in medical domain language, and sometimes riddled with spelling errors. The task is therefore a multi-label classification problem with a very large label space, and the forthcoming ICD-11 standard will add even more complexity! The medical code prediction problem is to annotate a clinical note with a subset of codes drawn from nearly 70K total codes (in the current ICD-10 system, for example).",medical
51c8782b-2241-4c6c-a9c2-75bad299d175,functional-description-generation,Functional Description Generation,"The functional description generation task not only evaluates the reasoning capability of the language model in determining the function of a protein sequence but also assesses the efficacy of the language model's text generation.",medical
1053eb49-c816-4593-be0f-a2bee84f451f,medical-image-classification,Medical Image Classification,"**Medical Image Classification** is a task in medical image analysis that involves classifying medical images, such as X-rays, MRI scans, and CT scans, into different categories based on the type of image or the presence of specific structures or diseases. The goal is to use computer algorithms to automatically identify and classify medical images based on their content, which can help in diagnosis, treatment planning, and disease monitoring.",medical
feca15d7-398c-4240-838f-f28a68c9a565,property-prediction,Property Prediction,"Property prediction involves forecasting or estimating a molecule's inherent physical and chemical properties based on information derived from its structural characteristics.
It facilitates high-throughput evaluation of an extensive array of molecular properties, enabling the virtual screening of compounds.
Additionally, it provides the means to predict the unknown attributes of new molecules, thereby bolstering research efficiency and reducing development times.",medical
1d8c8c8d-f96d-49fa-8ba7-aae60b93bc04,photoplethysmography-ppg-beat-detection,Photoplethysmography (PPG) beat detection,Detecting heartbeats in the photoplethysmogram (PPG) signal,medical
9cd3e134-3f75-4a39-b2fe-695e8cfd2b48,motion-correction-in-multishot-mri,Motion Correction In Multishot MRI,,medical
39d28c99-4e1c-40b8-8c64-d3871ce2245f,reagent-prediction,Reagent Prediction,"Reagent prediction endeavors to ascertain the suitable catalysts, solvents, or ancillary substances required for a specific chemical reaction.
This endeavor facilitates chemists in uncovering novel reaction types and mechanisms, identifying more optimal or eco-friendly reaction conditions, and ultimately streamlining the comprehensive chemical process to attain maximal cost-effectiveness and environmental stewardship.",medical
7787b338-5cd3-4d48-828b-97b848395778,molecule-interpretation,Molecule Interpretation,,medical
bc5a3a11-ebfb-46d7-8971-4347cc786647,3d-molecule-generation,3D Molecule Generation,,medical
7e563c13-0220-446e-8908-ccd2ed373b57,muscle-tendon-junction-identification,Muscle Tendon Junction Identification,,medical
49ee0c2e-0596-4848-ad0f-6afdf496ec4b,fovea-detection,Fovea Detection,,medical
cb8f9576-7e67-474a-a7de-c901b50d859c,noise-estimation,Noise Estimation,,medical
33342ce2-a683-4a3c-b7eb-11987519d8ca,diabetic-retinopathy-detection,Diabetic Retinopathy Detection,,medical
65a34432-e607-4aca-89a0-33185fb61ac7,optic-cup-detection,Optic Cup Detection,Region proposal for optic cup,medical
be0272a7-7ad0-4117-a5e9-d4ef966aefb1,lung-disease-classification,Lung Disease Classification,,medical
53753560-d223-42b7-87fa-8259bfada4ac,genome-understanding,Genome Understanding,,medical
dc08ab97-260f-4f08-9aef-f3e9b8aa17c6,core-promoter-detection,Core Promoter Detection,,medical
65c33ce6-88d5-4534-9612-76f76eee9528,epigenetic-marks-prediction,Epigenetic Marks Prediction,,medical
cd92c937-ca39-45f6-8d81-073297f428b5,ecg-classification,ECG Classification,,medical
8829c18c-7c09-473d-b7f7-d43f0be79960,medical-image-registration,Medical Image Registration,"Image registration, also known as image fusion or image matching, is the process of aligning two or more images based on image appearances. **Medical Image Registration** seeks to find an optimal spatial transformation that best aligns the underlying anatomical structures. Medical Image Registration is used in many clinical applications such as image guidance, motion tracking, segmentation, dose accumulation, image reconstruction and so on. Medical Image Registration is a broad topic which can be grouped from various perspectives. From an input image point of view, registration methods can be divided into unimodal, multimodal, inter-patient, and intra-patient (e.g. same- or different-day) registration. From a deformation model point of view, registration methods can be divided into rigid, affine and deformable methods. From a region of interest (ROI) perspective, registration methods can be grouped according to anatomical sites such as brain, lung registration and so on. From an image pair dimension perspective, registration methods can be divided into 3D to 3D, 3D to 2D and 2D to 2D/3D.
Source: [Deep Learning in Medical Image Registration: A Review ](https://arxiv.org/abs/1912.12318)",medical
231905c0-83c8-4369-b38b-60a7e426b158,readmission-prediction,Readmission Prediction,,medical
99928541-dcdb-4427-8020-9f1798e6df81,sleep-staging,Sleep Staging,Human Sleep Staging into W-R-N or W-R-L-D classes from multiple or single polysomnography signals,medical
51f30830-da81-444a-85c9-2b54a72f699c,protein-function-prediction,Protein Function Prediction,"For GO terms prediction, given the specific function prediction instruction and a protein sequence, models characterize the protein functions using the GO terms presented in three different domains (cellular component, biological process, and molecular function).",medical
e807334e-4e0a-480a-8a9a-6c3676fc33fb,spindle-detection,Spindle Detection,,medical
068e59f2-62ef-453b-ae5f-da5191360745,epidemiology,Epidemiology,"**Epidemiology** is a scientific discipline that provides reliable knowledge for clinical medicine focusing on prevention, diagnosis and treatment of diseases. Research in Epidemiology aims at characterizing risk factors for the outbreak of diseases and at evaluating the efficiency of certain treatment strategies, e.g., to compare a new treatment with an established gold standard. This research is strongly hypothesis-driven and statistical analysis is the major tool for epidemiologists so far. Correlations between genetic factors, environmental factors, life style-related parameters, age and diseases are analyzed.
Source: [Visual Analytics of Image-Centric Cohort Studies in Epidemiology ](https://arxiv.org/abs/1501.04009)",medical
4a53eaf5-6883-4ae0-9728-9ac40959e42a,cancer-metastasis-detection,Cancer Metastasis Detection,,medical
b375b2ab-e92a-41b7-b231-ee6190a57dbd,lung-nodule-segmentation,Lung Nodule Segmentation,,medical
f1e53905-16d6-438a-9701-19080de6b748,birl-cima,BIRL,"BIRL: Benchmark on Image Registration methods with Landmark validation, in particular, Biomedical image registration on WSI microscopy images of a multi-strain histology tissue sample.",medical
38b80cab-fab7-403c-a7f4-605e1e48c748,brain-ventricle-localization-and-segmentation,Brain Ventricle Localization And Segmentation In 3D Ultrasound Images,,medical
f4c85b2f-0b00-4dd6-a1ee-30de3ec5ccbb,acne-severity-grading,Acne Severity Grading,,medical
1d9a5aae-145e-4d89-9204-07b256465add,registration-of-sparse-clinical-images,Registration Of Sparse Clinical Images,,medical
db8eae9f-02ee-4617-80e2-8c341bfa3542,medical-image-segmentation,Medical Image Segmentation,"**Medical Image Segmentation** is a computer vision task that involves dividing an medical image into multiple segments, where each segment represents a different object or structure of interest in the image. The goal of medical image segmentation is to provide a precise and accurate representation of the objects of interest within the image, typically for the purpose of diagnosis, treatment planning, and quantitative analysis.
( Image credit: [IVD-Net](https://github.com/josedolz/IVD-Net) )",medical
2ca8e08b-b7b8-46e4-bad7-7dc50f2b83a0,chemical-disease-interaction-extraction,Chemical-Disease Interaction Extraction,"The goal of this task is to discern the relationships between chemicals and diseases from given medical literature, a concept known as chemical-induced disease (CID) relations. These CID relations are paramount in biomedical research, playing crucial roles in areas such as drug discovery, toxicology, and disease etiology.",medical
b5cabced-2efe-4722-b7d5-63689a9401c7,volumetric-medical-image-segmentation,Volumetric Medical Image Segmentation,,medical
d6a8a1da-aa69-429b-9804-8cde1530a2b9,nuclei-classification,Nuclei Classification,,medical
34564575-f18e-4f2b-aa79-df467d91c850,pancreas-segmentation,Pancreas Segmentation,"Pancreas segmentation is the task of segmenting out the pancreas from medical imaging.",medical
5d520029-4903-4946-b125-a2de4de69706,medial-knee-jrf-prediction,Medial knee JRF Prediction,,medical
c5fb39f2-6bd7-4945-96ba-c9ccf61c1e8e,liver-segmentation,Liver Segmentation,,medical
63ed9d19-ab03-4664-89b6-a2ccd46a3a67,x-ray,X-Ray,,medical
6cdbac32-ab84-42a9-8900-a1e2bc4b2d59,population-assignment,Population Assignment,,medical
fc193750-021a-4a75-bf89-0a7f6c930f4b,cell-segmentation,Cell Segmentation,"**Cell Segmentation** is a task of splitting a microscopic image domain into segments, which represent individual instances of cells. It is a fundamental step in many biomedical studies, and it is regarded as a cornerstone of image-based cellular research. Cellular morphology is an indicator of a physiological state of the cell, and a well-segmented image can capture biologically relevant morphological information.
Source: [Cell Segmentation by Combining Marker-controlled Watershed and Deep Learning ](https://arxiv.org/abs/2004.01607)",medical
c03293bb-ac70-4160-a9fc-2ae2ab184483,participant-intervention-comparison-outcome,Participant Intervention Comparison Outcome Extraction,"PICO recognition is an information extraction task for identifying Participant, Intervention, Comparator, and Outcome (PICO elements) information from clinical literature.",medical
45a15f85-2998-408b-9847-ce17dbcde150,sleep-apnea-detection,Sleep apnea detection,,medical
667778c0-6b16-4d42-9f78-b382a51d9945,ecg-risk-stratification,ECG Risk Stratification,,medical
00ff7719-17f9-406a-94ee-ba8d8e955718,medical-diagnosis,Medical Diagnosis,"**Medical Diagnosis** is the process of identifying the disease a patient is affected by, based on the assessment of specific risk factors, signs, symptoms and results of exams.
Source: [A probabilistic network for the diagnosis of acute cardiopulmonary diseases ](https://arxiv.org/abs/1609.06864)",medical
1e86c392-adf0-4713-bd4c-83d7ff24c6f3,breast-mass-segmentation-in-whole-mammograms,Breast Mass Segmentation In Whole Mammograms,,medical
87392b31-652a-4206-8c0e-e2d3ecfc749c,skin-lesion-identification,Skin Lesion Identification,,medical
265da8b8-52b2-4c17-ac0c-ab6d96a5b542,pain-intensity-regression,Pain Intensity Regression,,medical
9ba0b2a9-678d-47ed-8666-db8f36b14b98,classification-of-age-related-macular,Age-Related Macular Degeneration Classification,,medical
8831ed95-81bc-41da-9925-6e1b16cb043a,respiratory-rate-estimation,Respiratory Rate Estimation,,medical
a62f36c6-321f-4ec2-a295-67cf0b64f492,automated-pulmonary-nodule-detection-and,Automated Pulmonary Nodule Detection And Classification,,medical
72e14d05-e5ff-48bf-ad14-7c3064094bb3,automatic-liver-and-tumor-segmentation,Automatic Liver And Tumor Segmentation,,medical
02071b8e-daac-46a1-810a-d3a2c058767f,sleep-micro-event-detection,Sleep Micro-event detection,,medical
440283ea-3b23-4da3-b079-c928c0d25ccb,low-dose-x-ray-ct-reconstruction,Low-Dose X-Ray CT Reconstruction,,medical
0912caad-c26a-4f1a-bc37-881c82a42c40,medical-image-generation,Medical Image Generation,"Medical image generation is the task of synthesising new medical images.
( Image credit: [Towards Adversarial Retinal Image Synthesis](https://arxiv.org/pdf/1701.08974v1.pdf) )",medical
4bb7150d-4ed8-49ef-9da7-319133f695d5,eeg-decoding,EEG Decoding,**EEG Decoding** - extracting useful information directly from EEG data.,medical
d496bad6-684a-448f-98ad-48acd2ae8344,brain-tumor-segmentation,Brain Tumor Segmentation,"**Brain Tumor Segmentation** is a medical image analysis task that involves the separation of brain tumors from normal brain tissue in magnetic resonance imaging (MRI) scans. The goal of brain tumor segmentation is to produce a binary or multi-class segmentation map that accurately reflects the location and extent of the tumor.
( Image credit: [Brain Tumor Segmentation with Deep Neural Networks](https://github.com/naldeborgh7575/brain_segmentation) )",medical
303ea518-62fa-401d-a81f-aafd80f078e4,length-of-stay-prediction,Length-of-Stay prediction,,medical
42cc4142-a7ea-4317-a961-5b3681b5cd9b,multi-subject-fmri-data-alignment,Multi-Subject fMRI Data Alignment,,medical
f9a4cfc9-7d98-45d4-8994-2e0ef35048fc,ultrasound,Ultrasound,,medical
37ed3f52-20c9-40f2-b5d2-3430dfd154a7,mass-segmentation-from-mammograms,Mass Segmentation From Mammograms,,medical
72de7772-1450-4a20-b001-791923e88576,placenta-segmentation,Placenta Segmentation,,medical
61f17a14-a674-4acc-a3d7-d2de277d1450,automated-pancreas-segmentation,Automated Pancreas Segmentation,,medical
90f94a82-b0a6-403c-a8b0-991ae2b5be89,breast-tissue-identification,Breast Tissue Identification,,medical
cd9580e8-98b8-4f95-9ad7-a5aea4c6f61b,genetic-risk-prediction,Genetic Risk Prediction,Polygenic Risk Scores (PRS) / Polygenic Scores (PGS),medical
1f918088-ff54-4cce-8090-2badf0932359,brain-lesion-segmentation-from-mri,Brain Lesion Segmentation From MRI,,medical
a516b1e4-88e3-463d-9f52-6a5e5bf940e5,clinical-concept-extraction,Clinical Concept Extraction,"Automatic extraction of clinical named entities such as clinical problems, treatments, tests and anatomical parts from clinical notes.
( [Source](https://arxiv.org/pdf/2012.04005v1.pdf) )",medical
6e4cdd1c-4ec4-4672-be4b-b5b92864f4f1,cerebrovascular-network-segmentation,Cerebrovascular Network Segmentation,,medical
b201b237-a13a-4516-b43a-12b84fa47552,phenotype-classification,Phenotype classification,,medical
d2aed489-9b48-4b53-a174-e7b8a828c5e3,retinal-vessel-segmentation,Retinal Vessel Segmentation,"Retinal vessel segmentation is the task of segmenting vessels in retina imagery.
( Image credit: [LadderNet](https://github.com/juntang-zhuang/LadderNet) )",medical
cda79c4c-4bcd-474a-8d15-dbad119baa0c,atrial-fibrillation-recurrence-estimation,Atrial Fibrillation Recurrence Estimation,,medical
394156f5-8b1d-4b46-9eef-ef5ebc183c2b,pulse-wave-simulation,Pulse wave simulation,Simulating arterial pulse waves,medical
83350b1c-fcd4-4be5-964a-814f0b412a88,qrs-complex-detection,QRS Complex Detection,,medical
91d2484f-cc2f-4568-97a7-c6f4a03bc22a,pneumonia-detection,Pneumonia Detection,,medical
f97f0158-fd0b-42ef-8955-9abdd3cebaff,acoustic-echo-cancellation,Acoustic echo cancellation,,medical
bbbddceb-355b-4db9-8f2e-43ec42d10774,histopathological-image-classification,Histopathological Image Classification,,medical
753d8d3c-2925-4841-9b6a-1ffee712f87a,ecg-denoising,ECG Denoising,,medical
88511705-bc18-47ac-a173-40fe93a8d1b2,lung-nodule-classification,Lung Nodule Classification,,medical
f78d72bf-d314-4ee4-8aa3-57dab93b3cb2,congestive-heart-failure-detection,Congestive Heart Failure detection,,medical
18ffde5b-1a83-4efc-95c6-a313de4d4dc2,ecg-wave-delineation,ECG Wave Delineation,"Delineation of the waveforms P, T and QRS complexes from ECG signals",medical
fead9478-bcfc-4566-a3e0-c24622ea85ab,text-based-de-novo-molecule-generation,Text-based de novo Molecule Generation,,medical
7817d18d-3f00-4066-8eac-606bd556a463,lesion-segmentation,Lesion Segmentation,"Lesion segmentation is the task of segmenting out lesions from other objects in medical images.
( Image credit: [D-UNet](https://arxiv.org/pdf/1908.05104v1.pdf) )",medical
0e0a3a90-be23-4edf-a96e-dec87d3745c2,automatic-sleep-stage-classification,Automatic Sleep Stage Classification,,medical
30a41445-3eb6-4ae3-9871-b5d3ce315d30,electromyography-emg,Electromyography (EMG),,medical
587b1d0e-fa95-4af0-b452-654db06269b0,kidney-function,Kidney Function,Continuous prediction of urine production in the next 2h as an average rate in ml/kg/h. Predictions are made at irregular intervals.,medical
9945a61b-19e2-4bd5-8aa9-88a6c41aa42a,diabetic-retinopathy-grading,Diabetic Retinopathy Grading,Grading the severity of diabetic retinopathy from (ophthalmic) fundus images,medical
ccd198c3-f7fb-40fe-9597-777ee72339d5,diabetes-prediction,Diabetes Prediction,,medical
ae463819-1cbc-45c0-95cb-cd2587b84e12,respiratory-failure,Respiratory Failure,Continuous prediction of onset of respiratory failure in the next 12h given the patient is not in failure now.,medical
ae1dd5d6-0c05-47a4-be1c-13657a586563,circulatory-failure,Circulatory Failure,"Continuous prediction of onset of circulatory failure in the next 12h, given the patient is not in failure now.",medical
92a0fc6c-76f1-427b-9926-6b4b0e07b864,heartbeat-classification,Heartbeat Classification,,medical
3ce99b36-f6f3-419b-9bfb-a574d00ab045,atrial-fibrillation-detection,Atrial Fibrillation Detection,,medical
9811e0c5-6c65-4b67-9a84-7950f3e2d080,bone-suppression-from-dual-energy-chest-x,Bone Suppression From Dual Energy Chest X-Rays,,medical
9bfd2bf5-53e0-46d8-b59c-bdf492a2e2f7,mhc-presentation-prediction,MHC presentation prediction,"This task addresses an important immunological problem: predicting whether a given peptide will be presented on a given MHC molecule.",medical
25811612-abdc-4c82-96d6-8d796f0b985f,oral-cancer-classification,Oral Cancer Classification,,medical
48eb98d7-394c-4883-80cd-6b7d29576707,patient-phenotyping,Patient Phenotyping,"Classifying patients after 24h regarding their admission diagnosis,
using the APACHE group II and IV labels.",medical
9067ca68-2ad7-4ce0-b953-aa1127844c33,semi-supervised-medical-image-classification,Semi-supervised Medical Image Classification,Semi-supervised Medical Image Classification,medical
6197eed2-e7c6-4cce-adbe-f70ed68a82f6,brain-morphometry,Brain Morphometry,Measurement of brain structures from neuroimaging (MRI).,medical
8c549804-e16d-483f-bc48-5f7f627fa7fe,anxiety-detection,Anxiety Detection,Detect anxiety or distress in humans and animals,medical
c751171c-0636-4ff5-907e-6c6418a590cd,disease-prediction,Disease Prediction,,medical
c6d6c2b3-2ac8-4a3d-b840-7fc510ad97f4,white-matter-fiber-tractography,White Matter Fiber Tractography,,medical
d77d6ea7-dd63-42d8-b34e-f6fc94deb57e,colon-cancer-detection-in-confocal-laser,Colon Cancer Detection In Confocal Laser Microscopy Images,,medical
3d291041-3825-4ef9-958b-e81a4e97f8bc,surgical-gesture-recognition,Surgical Gesture Recognition,,medical
2a134fd0-51ca-4b19-b2cb-7a8dee20f9f1,outcome-prediction-in-multimodal-mri,Outcome Prediction In Multimodal Mri,,medical
d9eeed53-27eb-4bf4-a845-554ef6c98757,metal-artifact-reduction,Metal Artifact Reduction,Metal artifact reduction aims to remove the artifacts introduced by metallic implants in CT images.,medical
bab2ce77-798e-4499-845a-3b50a97edbb1,decision-making-under-uncertainty,Decision Making Under Uncertainty,,medical
410321a8-fb92-4c92-977a-7752e58ee2cc,predicting-patient-outcomes,Predicting Patient Outcomes,,medical
48797aed-2b09-443c-bc98-c66630ea1fa8,skin,Skin,,medical
b29b9080-40b4-4d23-8e07-cb4e3c7ced58,sleep-arousal-detection,Sleep Arousal Detection,"A sleep arousal is a type of EEG event that occurs during nocturnal sleep. Too many arousals contribute to health problems such as daytime sleepiness, memory loss and diabetes, and some research treats them as a form of sleep deprivation.",medical
2602a30c-7822-434b-a5b0-01582cbd63c4,classification-of-breast-cancer-histology,Classification Of Breast Cancer Histology Images,,medical
25b834d8-4c90-40e9-a9d6-7d6da3703d9a,mitosis-detection,Mitosis Detection,,medical
eb0bdafd-b123-46b0-9ae4-24322013d58f,sleep-quality-prediction-1,Sleep Quality Prediction,"( Image credit: [DeepSleep](https://github.com/GuanLab/DeepSleep) )",medical
3a4c704f-055f-49b5-8414-c2ccc04b088d,electron-tomography,Electron Tomography,,medical
c641e308-f6de-40dc-ab90-29e89e4ac611,skull-stripping,Skull Stripping,,medical
0357371d-4d8a-4320-8b63-ea4a8f954352,tomographic-reconstructions,Tomographic Reconstructions,,medical
dda22a51-c0b3-48b7-aa69-d20c410fe1fd,electrocardiography-ecg,Electrocardiography (ECG),,methodology
b37bb01d-665f-41e1-ac87-8ad40441322e,combinatorial-optimization,Combinatorial Optimization,"**Combinatorial Optimization** is a category of problems that require optimizing a function over a combination of discrete objects, subject to constraints on the solutions. Examples include finding shortest paths in a graph, maximizing value in the Knapsack problem and finding boolean settings that satisfy a set of constraints. Many of these problems are NP-hard, which means that no polynomial-time algorithm is known that solves them exactly. Instead, we can often only produce approximations in polynomial time that are guaranteed to be at most some factor worse than the true optimal solution.
Source: [Recent Advances in Neural Program Synthesis ](https://arxiv.org/abs/1802.02353)",methodology
5693ac18-a55c-4934-ad24-5e5bd83eef8f,influence-approximation,Influence Approximation,Estimating the influence of training triples on the behavior of a machine learning model.,methodology
fa3621c8-7229-4ba5-87a8-ffde16794056,multi-objective-reinforcement-learning,Multi-Objective Reinforcement Learning,,methodology
865020c7-6f70-4e5c-88ae-df0a6dd649cd,auxiliary-learning,Auxiliary Learning,"Auxiliary learning aims to find or design auxiliary tasks which can improve performance on one or more primary tasks.
( Image credit: [Self-Supervised Generalisation with Meta Auxiliary Learning](https://arxiv.org/pdf/1901.08933v3.pdf) )",methodology
f2febf91-fd66-4d94-a706-3a3305398298,quantum-circuit-equivalence-checking,Quantum Circuit Equivalence Checking,Equivalence Checking of Quantum Circuits,methodology
0903bc78-ce7b-4e01-a52d-996deb7b3689,sentence-embedding,Sentence Embedding,,methodology
724c640d-4004-4801-8534-4c7f6d9e75ed,rgb-3d-anomaly-detection-and-segmentation,RGB+3D Anomaly Detection and Segmentation,RGB+3D Anomaly Detection and Segmentation,methodology
a849c3af-e189-4ed2-98c5-540b1cc37011,sentence-embeddings,Sentence Embeddings,,methodology
3463981e-c9c1-4882-bde9-9d4b21c61bc4,multilingual-word-embeddings,Multilingual Word Embeddings,,methodology
aa6c4b84-0632-4793-8b57-96e656aa8e11,arbitrary-conditional-density-estimation,Arbitrary Conditional Density Estimation,,methodology
9b1709f5-b3df-4c9c-bf41-1ca07952dade,experimental-design,Experimental Design,,methodology
78cb09d9-73f8-44c0-bf58-67636dd3b73a,density-ratio-estimation,Density Ratio Estimation,Estimating the ratio of one density function to the other.,methodology
638c6b04-83e4-4cb1-82b4-20b79a523098,activation-function-synthesis,Activation Function Synthesis,,methodology
4cd1a7e0-75b8-490f-95e3-c246181042a8,bayesian-inference,Bayesian Inference,Bayesian Inference is a methodology that employs Bayes Rule to estimate parameters (and their full posterior).,methodology
86408256-77ee-488e-80b7-bd247ddaf3ed,low-rank-matrix-completion,Low-Rank Matrix Completion,"**Low-Rank Matrix Completion** is an important problem with several applications in areas such as recommendation systems, sketching, and quantum tomography. The goal in matrix completion is to recover a low rank matrix, given a small number of entries of the matrix.
Source: [Universal Matrix Completion ](https://arxiv.org/abs/1402.2324)",methodology
57ebef14-b891-4fe0-b462-f17c8f728a13,group-anomaly-detection,Group Anomaly Detection,,methodology
3cefc3b1-29ea-4547-8308-0bf2056268a3,contrastive-learning,Contrastive Learning,"**Contrastive Learning** is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
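As a rough illustration of the idea (not code from the credited source), the sketch below computes an InfoNCE-style contrastive loss with NumPy; the embedding shapes and the temperature value are assumptions made for the example.
```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    # anchors, positives: (N, D) arrays where row i of `positives`
    # is the augmented/positive view of row i of `anchors`.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                      # (N, N) similarity logits
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    # Row i's matching column i is the positive; all other columns act as negatives.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))               # embeddings of one view
z2 = z1 + 0.05 * rng.normal(size=(8, 16))   # embeddings of the augmented view
print(info_nce_loss(z1, z2))
```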
(Image credit: [Schroff et al. 2015](https://arxiv.org/abs/1503.03832))",methodology
25bfeaa5-f990-4fd8-a62b-3d18d42bfd69,domain-generalization,Domain Generalization,"The idea of **Domain Generalization** is to learn from one or multiple training domains, to extract a domain-agnostic model which can be applied to an unseen domain
Source: [Diagram Image Retrieval using Sketch-Based Deep Learning and Transfer Learning ](https://arxiv.org/abs/2004.10780)",methodology
3cc8fdb7-8204-42cc-987c-d55cf8aa920a,result-aggregation,Result aggregation,,methodology
e78e6e0c-95bf-4687-91c8-fdba88425aa6,thompson-sampling,Thompson Sampling,"Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.",methodology
03373f59-d778-4ee6-8ee6-09392cf84712,meta-learning,Meta-Learning,"Meta-learning is a methodology concerned with ""learning to learn"" machine learning algorithms.
( Image credit: [Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks](https://arxiv.org/pdf/1703.03400v3.pdf) )",methodology
a321b0c7-f9ef-47f9-9d47-3a30a89b11bf,generalized-few-shot-learning,Generalized Few-Shot Learning,,methodology
f03ecbdf-a411-4758-8a19-d09ed5256bb5,hard-attention,Hard Attention,,methodology
9330457b-27fd-47b2-9e51-f83d6eb08515,policy-gradient-methods,Policy Gradient Methods,,methodology
299001ec-dc9b-4978-80b6-e777d6b43af4,anomaly-detection,Anomaly Detection,"**Anomaly Detection** is a binary classification task that identifies unusual or unexpected patterns in a dataset, i.e., patterns that deviate significantly from the majority of the data. The goal of anomaly detection is to identify such anomalies, which could represent errors, fraud, or other types of unusual events, and flag them for further investigation.
[Image source]: [GAN-based Anomaly Detection in Imbalance Problems](https://paperswithcode.com/paper/gan-based-anomaly-detection-in-imbalance)",methodology
a18c1662-d3cf-4fbc-a6c5-c2bf9c56cb6e,ticket-search,Ticket Search,,methodology
1a875756-f1d9-4934-a2b4-00a1c55bcd55,dimensionality-reduction,Dimensionality Reduction,"Dimensionality reduction is the task of reducing the dimensionality of a dataset.
( Image credit: [openTSNE](https://github.com/pavlin-policar/openTSNE) )",methodology
25805b45-f5b8-4c0a-af5c-83ce48af6bf0,efficient-exploration,Efficient Exploration,"**Efficient Exploration** is one of the main obstacles in scaling up modern deep reinforcement learning algorithms. The main challenge in Efficient Exploration is the balance between exploiting current estimates, and gaining information about poorly understood states and actions.
Source: [Randomized Value Functions via Multiplicative Normalizing Flows ](https://arxiv.org/abs/1806.02315)",methodology
e8e79c0c-f7c8-421d-ad15-72181fa46085,transfer-reinforcement-learning,Transfer Reinforcement Learning,,methodology
169a7968-5cf1-4fd1-b49a-d9f6b6b05d38,inductive-logic-programming,Inductive logic programming,,methodology
de191002-53b4-41e1-a6e0-258315ff9080,network-pruning,Network Pruning,"**Network Pruning** is a popular approach to reduce a heavy network to a lightweight form by removing redundancy. In this approach, a complex over-parameterized network is first trained, then pruned based on some criteria, and finally fine-tuned to achieve comparable performance with fewer parameters.
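As a minimal sketch of the pruning step (one simple criterion among many, not the method from the source below), global magnitude pruning zeroes out the smallest-magnitude weights; the sparsity level here is an arbitrary choice.
```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    # Zero out the `sparsity` fraction of weights with the smallest magnitude.
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.sort(flat)[k] if k > 0 else 0.0
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, sparsity=0.75)
print(mask.mean())  # fraction of weights kept (roughly 1 - sparsity)
```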
Source: [Ensemble Knowledge Distillation for Learning Improved and Efficient Networks ](https://arxiv.org/abs/1909.08097)",methodology
bcc85f23-914b-4310-9494-2c2bc0939563,similarity-explanation,Similarity Explanation,,methodology
d17bcad0-baf6-4475-9dd8-0dc07e246035,data-augmentation,Data Augmentation,"Data augmentation involves techniques for increasing the amount of training data by applying different modifications to the examples in the original dataset. It not only grows the dataset but also increases its diversity. When training machine learning models, data augmentation acts as a regularizer and helps to avoid overfitting.
Data augmentation techniques have been found useful in domains like NLP and computer vision. In computer vision, transformations like cropping, flipping, and rotation are used. In NLP, data augmentation techniques can include swapping, deletion, random insertion, among others.
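For example, a typical image augmentation pipeline might look like the sketch below using torchvision; the specific transforms and parameter values are illustrative assumptions rather than a recommended recipe.
```python
from PIL import Image
from torchvision import transforms

# A small augmentation pipeline: each call produces a randomly modified view.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

image = Image.new('RGB', (256, 256))   # placeholder image for the example
augmented = augment(image)
print(augmented.shape)                 # torch.Size([3, 224, 224])
```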
Further readings:
- [A Survey of Data Augmentation Approaches for NLP](https://paperswithcode.com/paper/a-survey-of-data-augmentation-approaches-for)
- [A survey on Image Data Augmentation for Deep Learning](https://journalofbigdata.springeropen.com/articles/10.1186/s40537-019-0197-0)
( Image credit: [Albumentations](https://github.com/albumentations-team/albumentations) )",methodology
8ac43175-38b7-47bf-a1d4-84097800d957,learning-representation-of-multi-view-data,Learning Representation Of Multi-View Data,,methodology
ee7360c6-177d-4339-81c7-cf6fa7e2eb9f,point-processes,Point Processes,,methodology
6f1247cc-cb3f-49f7-bdaa-89b990a956e4,hypothesis-testing,Two-sample testing,"In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each independently obtained from a different given population. The purpose of the test is to determine whether the difference between these two populations is statistically significant. The statistics used in two-sample tests can be used to solve many machine learning problems, such as domain adaptation, covariate shift and generative adversarial networks.",methodology
e2cc5aa3-25e2-492f-b2ec-63045c8c6dd2,neural-network-compression,Neural Network Compression,,methodology
08f9c7bc-c5ed-4017-8054-756a37616511,quantization,Quantization,"**Quantization** is a promising technique to reduce the computation cost of neural network training, which can replace high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).
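A minimal sketch of the basic idea (symmetric linear quantization to int8 and back, using NumPy); real quantized training as in the source below typically uses calibrated, per-layer or per-channel scales.
```python
import numpy as np

def quantize_int8(x):
    # Symmetric linear quantization: map floats to int8 with a single scale.
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3)).astype(np.float32)
q, scale = quantize_int8(w)
print(np.max(np.abs(w - dequantize(q, scale))))  # quantization error
```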
Source: [Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers ](https://arxiv.org/abs/1911.00361)",methodology
746ebe8e-5fba-4c8d-b588-1120574afa14,matrix-completion,Matrix Completion,"**Matrix Completion** is a method for recovering lost information. It originates from machine learning and usually deals with highly sparse matrices. Missing or unknown data is estimated using the low-rank matrix of the known data.
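One classical approach can be sketched as iterative soft-thresholded SVD (in the spirit of SoftImpute); this is an illustrative sketch, not the method of the source below, and the regularization strength and iteration count are arbitrary choices.
```python
import numpy as np

def soft_impute(M, mask, lam=0.5, n_iters=100):
    # M: matrix with observed entries, mask: True where an entry is observed.
    X = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - lam, 0.0)            # soft-threshold singular values
        low_rank = (U * s) @ Vt
        X = np.where(mask, M, low_rank)         # keep observed entries, fill the rest
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))  # rank-3 ground truth
mask = rng.random(A.shape) < 0.5                          # observe ~50% of entries
completed = soft_impute(A, mask)
print(np.linalg.norm((completed - A)[~mask]) / np.linalg.norm(A[~mask]))
```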
Source: [A Fast Matrix-Completion-Based Approach for Recommendation Systems ](https://arxiv.org/abs/1912.00600)",methodology
e7812c3a-e25b-4512-94b0-6c9cb7ecef40,clustering,Clustering,,methodology
a1b535df-98bb-4e59-96f9-54ddc3180aa4,representation-learning,Representation Learning,"**Representation Learning** is concerned with training machine learning algorithms to learn useful representations, e.g. those that are interpretable, have latent features, or can be used for transfer learning.
Deep neural networks can be considered representation learning models that typically encode information which is projected into a different subspace. These representations are then usually passed on to a simple model, for instance a linear classifier, to solve the downstream task.
Representation learning can be divided into:
- **Supervised representation learning**: learning representations on task A using annotated data and used to solve task B
- **Unsupervised representation learning**: learning representations on a task in an unsupervised way (label-free data). These are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like [GPT](/method/gpt) and [BERT](/method/bert) leverage unsupervised representation learning to tackle language tasks.
More recently, [self-supervised learning (SSL)](/task/self-supervised-learning) is one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
Here are some additional readings to go deeper on the task:
- [Representation Learning: A Review and New Perspectives](/paper/representation-learning-a-review-and-new) - Bengio et al. (2012)
- [A Few Words on Representation Learning](https://sthalles.github.io/a-few-words-on-representation-learning/) - Thalles Silva
( Image credit: [Visualizing and Understanding Convolutional Networks](https://arxiv.org/pdf/1311.2901.pdf) )",methodology
74270fd8-87e8-4d69-9c7c-432bb05fae01,feature-importance,Feature Importance,,methodology
5cd17ca2-8355-4e88-a4d6-489bb3e717c3,computed-tomography-ct,Computed Tomography (CT),"The term “computed tomography”, or CT, refers to a computerized x-ray imaging procedure in which a narrow beam of x-rays is aimed at a patient and quickly rotated around the body, producing signals that are processed by the machine's computer to generate cross-sectional images—or “slices”—of the body.
( Image credit: [Liver Lesion Detection from Weakly-labeled Multi-phase CT Volumes with a Grouped Single Shot MultiBox Detector](https://github.com/L0SG/grouped-ssd-pytorch) )",methodology
b1365c50-7647-40ed-8722-5d0ad3c3c101,few-shot-class-incremental-learning,Few-Shot Class-Incremental Learning,,methodology
c80e6172-019f-4adc-8cfe-43f4eeb88d23,generative-semantic-nursing,Generative Semantic Nursing,"**Generative Semantic Nursing** is the task of intervening in the generative process on the fly at inference time to improve the faithfulness of the generated images. It works by carefully manipulating the latents during the denoising process of a pre-trained text-to-image diffusion model.
Source: [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://paperswithcode.com/paper/attend-and-excite-attention-based-semantic)",methodology
7922da39-4eb5-4c4a-a631-0fff402d17b6,learning-representation-on-graph,Learning Representation On Graph,,methodology
a677c998-296b-4e0f-97db-26fc4173d0fd,domain-adaptation,Domain Adaptation,"**Domain Adaptation** is the task of adapting models across domains. This is motivated by the challenge that the test and training datasets are drawn from different data distributions due to some factor. Domain adaptation aims to build machine learning models that generalize to a target domain while dealing with the discrepancy across domain distributions.
Further readings:
- [A Brief Review of Domain Adaptation](https://paperswithcode.com/paper/a-brief-review-of-domain-adaptation)
( Image credit: [Unsupervised Image-to-Image Translation Networks](https://arxiv.org/pdf/1703.00848v6.pdf) )",methodology
ecad8d42-0d67-4a2f-9fce-79bc88a631ab,multi-label-text-classification,Multi-Label Text Classification,"According to Wikipedia ""In machine learning, multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes the instance can be assigned to.""",methodology
21c478c9-1425-4cf0-8790-4aaf1b3d9ced,active-learning,Active Learning,"**Active Learning** is a paradigm in supervised machine learning which uses fewer training examples to achieve better optimization by iteratively training a predictor, and using the predictor in each iteration to choose the training examples that will increase its chances of finding better configurations while at the same time improving the accuracy of the prediction model.
Source: [Polystore++: Accelerated Polystore System for Heterogeneous Workloads ](https://arxiv.org/abs/1905.10336)",methodology
0816f751-3331-4d7d-917d-dd54ed7f5a39,continual-pretraining,Continual Pretraining,,methodology
5985581d-ce27-465c-82d3-58a15a6d081b,network-embedding,Network Embedding,"**Network Embedding**, also known as ""Network Representation Learning"", is a collective term for techniques for mapping graph nodes to vectors of real numbers in a multidimensional space. To be useful, a good embedding should preserve the structure of the graph. The vectors can then be used as input to various network and graph analysis tasks, such as link prediction
Source: [Tutorial on NLP-Inspired Network Embedding ](https://arxiv.org/abs/1910.07212)",methodology
26ebf931-d87c-42cf-974a-7a669a705c1a,mutual-information-estimation,Mutual Information Estimation,"Estimating mutual information from samples, especially for high-dimensional variables.",methodology
76414191-a882-49cd-b4d5-8f9d2b0c0088,eeg,Electroencephalogram (EEG),"**Electroencephalogram (EEG)** is a method of recording brain activity using electrophysiological measures. When the brain is active, large numbers of postsynaptic potentials generated synchronously by neurons summate; EEG records the resulting changes in electrical activity and reflects, at the surface of the cerebral cortex or scalp, the overall electrophysiological activity of cortical nerve cells. Brain waves originate from the postsynaptic potentials of the apical dendrites of pyramidal cells, and the formation of synchronous EEG rhythms is also related to the activity of the nonspecific projection system of the cortex and thalamus. EEG underpins basic research in brain science, and EEG monitoring is widely used in clinical practice.",methodology
ad790a78-9512-4cf1-b7ef-d222fe60747e,structured-prediction,Structured Prediction,"**Structured Prediction** is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such as linear programming relaxations and greedy search.
Source: [Torch-Struct: Deep Structured Prediction Library ](https://arxiv.org/abs/2002.00876)",methodology
422c0762-62d5-4466-aa72-b061d3a412bd,machine-learning,BIG-bench Machine Learning,This branch includes the most common fundamental machine learning algorithms.,methodology
0b79dbe1-16aa-4886-9439-3e7e8cd699fc,one-class-classifier,One-class classifier,,methodology
7fa14464-ff6b-40dc-a70d-4d3be238d634,3d-anomaly-detection-and-segmentation,3D Anomaly Detection and Segmentation,3D-Only Anomaly Detection and Segmentation,methodology
6b2de20c-8d06-450d-a13f-a783e3eb613c,zero-shot-learning,Zero-Shot Learning,"**Zero-shot learning (ZSL)** is a model's ability to detect classes never seen during training. The condition is that the classes are not known during supervised learning.
Earlier work in zero-shot learning uses attributes in a two-step approach to infer unknown classes. In the computer vision context, more recent advances learn mappings from image feature space to semantic space. Other approaches learn non-linear multimodal embeddings. In the modern NLP context, language models can be evaluated on downstream tasks without fine-tuning.
Benchmark datasets for zero-shot learning include [aPY](/dataset/apy), [AwA](/dataset/awa2-1), and [CUB](/dataset/cub-200-2011), among others.
( Image credit: [Prototypical Networks for Few shot Learning in PyTorch
](https://github.com/orobix/Prototypical-Networks-for-Few-shot-Learning-PyTorch) )
Further readings:
- [Zero-Shot Learning -- A Comprehensive Evaluation of the Good, the Bad and the Ugly](https://paperswithcode.com/paper/zero-shot-learning-a-comprehensive-evaluation)
- [Zero-Shot Learning in Modern NLP](https://joeddav.github.io/blog/2020/05/29/ZSL.html)
- [Zero-Shot Learning for Text Classification](https://amitness.com/2020/05/zero-shot-text-classification/)",methodology
8685330b-a921-44b6-a6a5-2b571e964bc6,anomaly-detection-in-surveillance-videos,Anomaly Detection In Surveillance Videos,,methodology
2f631601-ee66-4f2e-8b02-502bceeff6d6,metric-learning,Metric Learning,"The goal of **Metric Learning** is to learn a representation function that maps objects into an embedded space. The distance in the embedded space should preserve the objects’ similarity — similar objects get close and dissimilar objects get far away. Various loss functions have been developed for Metric Learning. For example, the **contrastive loss** guides the objects from the same class to be mapped to the same point and those from different classes to be mapped to different points whose distances are larger than a margin. **Triplet loss** is also popular, which requires the distance between the anchor sample and the positive sample to be smaller than the distance between the anchor sample and the negative sample.
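As a small illustration of the triplet loss described above (a NumPy sketch with assumed embedding shapes and margin, not an implementation from the source below):
```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the anchor towards the positive and push it away from the negative
    # until the gap between the two squared distances is at least `margin`.
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

rng = np.random.default_rng(0)
a = rng.normal(size=(16, 32))              # anchor embeddings
p = a + 0.1 * rng.normal(size=a.shape)     # same-class (positive) embeddings
n = rng.normal(size=a.shape)               # different-class (negative) embeddings
print(triplet_loss(a, p, n))
```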
Source: [Road Network Metric Learning for Estimated Time of Arrival ](https://arxiv.org/abs/2006.13477)",methodology
2509b655-bf01-45e6-a66c-b50f5299c44d,sentence-embeddings-for-biomedical-texts,Sentence Embeddings For Biomedical Texts,,methodology
05ea783a-bd2a-4fa6-a3ce-951950f94b23,classification,General Classification,Algorithms trying to solve the general task of classification.,methodology
cd018a94-6bce-49a6-b8b2-080a560bb640,3d-rgb-anomaly-detection,3D + RGB Anomaly Detection,3D + RGB Anomaly Detection,methodology
f426fe5a-a10e-4ce4-bdf2-40c28d4364c2,probabilistic-programming,Probabilistic Programming,"Probabilistic programming languages are designed to describe probabilistic models and then perform inference in those models. PPLs are closely related to graphical models and Bayesian networks, but are more expressive and flexible.
( Image credit: [Michael Betancourt](https://betanalpha.github.io/writing/) )",methodology
a4bd54eb-a7da-4f1b-bd6b-7c8dc62da0e8,subdomain-adaptation,Subdomain adaptation,,methodology
f0c4c260-37b8-45b8-91b3-49ee9e9788b9,depth-anomaly-detection-and-segmentation,Depth Anomaly Detection and Segmentation,Depth-only Anomaly Detection and Segmentation,methodology
4c248573-289b-4552-b15f-f162141f0c36,depthanomaly-detection,Depth Anomaly Detection,Depth-only Anomaly Detection,methodology
ab457d14-9ef2-4073-8b31-b6bb87006b36,model-selection,Model Selection,"Given a set of candidate models, the goal of **Model Selection** is to select the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined such that they strike a balance between the goodness of fit, and the generalizability or complexity of the models.
Source: [Kernel-based Information Criterion ](https://arxiv.org/abs/1408.5810)",methodology
11498458-1546-4e84-9d07-c1ccd2cc189f,data-visualization,Data Visualization,,methodology
f9911899-2230-4e49-9831-6db6c6cd8371,continuously-indexed-domain-adaptation,Continuously Indexed Domain Adaptation,"Continuously indexed domain adaptation adapts across continuously indexed domains, e.g., across patients of different ages, where 'age' is a continuous notion.",methodology
ba28f348-353c-4af8-ac3f-741a4471ae16,3d-rgb-anomaly-segmentation,3D + RGB Anomaly Segmentation,3D + RGB Anomaly Segmentation,methodology
89f164ca-ce10-482b-8253-51d4c5c7c5c5,document-embedding,Document Embedding,,methodology
fa580e7c-232f-4b8b-9188-923f126a9a2d,core-set-discovery,Core set discovery,A core set in machine learning is defined as the minimal set of training samples that allows a supervised algorithm to deliver a result as good as the one obtained when the whole set is used.,methodology
79684d31-21ee-4d8f-8ba9-7d4fb83f83a8,data-mining,Data Mining,,methodology
11725b9b-6c82-4d79-a71c-83a9f26daf1e,explanation-fidelity-evaluation,Explanation Fidelity Evaluation,Evaluation of explanation fidelity with respect to the underlying model.,methodology
e385f5ea-25f4-46f2-8574-4cb4924dd571,unsupervised-anomaly-detection,Unsupervised Anomaly Detection,"The objective of **Unsupervised Anomaly Detection** is to detect previously unseen rare objects or events without any prior knowledge about these. The only information available is that the percentage of anomalies in the dataset is small, usually less than 1%. Since anomalies are rare and unknown to the user at training time, anomaly detection in most cases boils down to the problem of modelling the normal data distribution and defining a measurement in this space in order to classify samples as anomalous or normal. In high-dimensional data such as images, distances in the original space quickly lose descriptive power (curse of dimensionality) and a mapping to some more suitable space is required.
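As one simple, concrete baseline (not the approach of the source below), an off-the-shelf unsupervised detector such as scikit-learn's IsolationForest can be fit on unlabeled, lightly contaminated data; the contamination rate below is an assumption for the example.
```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(990, 2))     # inliers
anomalies = rng.uniform(low=-6.0, high=6.0, size=(10, 2))  # ~1% rare points
X = np.vstack([normal, anomalies])

# Fit without labels; `contamination` encodes the assumed anomaly fraction.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
pred = detector.predict(X)           # +1 = normal, -1 = anomaly
print((pred == -1).sum(), 'points flagged as anomalous')
```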
Source: [Unsupervised Learning of Anomaly Detection from Contaminated Image Data using Simultaneous Encoder Training ](https://arxiv.org/abs/1905.11034)",methodology
228fa45c-bb06-477b-a79b-a05a318fef2a,statistical-independence-testing,statistical independence testing,,methodology
a0e2e523-e82f-4dcf-8d64-3baad64be0e9,continual-learning,Continual Learning,"**Continual Learning** (also known as **Incremental Learning**, **Life-long Learning**) is a concept to learn a model for a large number of tasks sequentially without forgetting knowledge obtained from the preceding tasks, where the data in the old tasks are not available anymore during training new ones.
If not mentioned, the benchmarks here are **Task-CL**, where task-id is provided on validation.
Source:
[Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation](https://arxiv.org/abs/1908.02984)
[Three scenarios for continual learning](https://arxiv.org/abs/1904.07734)
[Lifelong Machine Learning](https://books.google.ca/books/about/Lifelong_Machine_Learning.html?id=JQ5pDwAAQBAJ&redir_esc=y)
[Continual lifelong learning with neural networks: A review](https://www.sciencedirect.com/science/article/pii/S0893608019300231)",methodology
0646c050-aabe-4e8b-aba6-d1157d1edda8,detection-of-higher-order-dependencies,Detection of Higher Order Dependencies,,methodology
0222054d-1334-407f-8a85-bffa0f7958b6,learning-network-representations,Learning Network Representations,,methodology
5f0d7dd7-89db-4065-bec0-b7e139242078,multi-label-learning,Multi-Label Learning,,methodology
a55cc9aa-2750-4c92-8212-1639c142705c,q-learning,Q-Learning,"The goal of Q-learning is to learn a policy, which tells an agent what action to take under what circumstances.
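The core of tabular Q-learning is the temporal-difference update sketched below; this is a generic illustration with a toy transition, not code from the credited paper.
```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Q-learning TD update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
# One illustrative transition: in state 0, action 1 gave reward 1.0 and led to state 3.
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=3)
print(Q[0])
```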
( Image credit: [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/pdf/1312.5602v1.pdf) )",methodology
0b0dae5e-68d7-4706-b00c-91a6f68a8a75,extreme-multi-label-classification,Extreme Multi-Label Classification,Extreme Multi-Label Classification is a supervised learning problem where an instance may be associated with multiple labels. The two main challenges are the imbalanced label distribution and the very large number of distinct labels.,methodology
c23d3556-b7df-4aef-b45a-89073713b5d2,incremental-learning,Incremental Learning,Incremental learning aims to develop artificially intelligent systems that can continuously learn to address new tasks from new data while preserving knowledge learned from previously learned tasks.,methodology
9e07a2e1-b499-438e-ae92-efe40e5a8da8,3d-anomaly-detection,3D Anomaly Detection,3D-only Anomaly Detection,methodology
6b913b0c-d1df-448b-93a0-24157f9d194a,federated-learning,Federated Learning,"**Federated Learning** is a machine learning approach that allows multiple devices or entities to collaboratively train a shared model without exchanging their data with each other. Instead of sending data to a central server for training, the model is trained locally on each device, and only the model updates are sent to the central server, where they are aggregated to improve the shared model.
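The aggregation step can be illustrated with a tiny FedAvg-style sketch in which the model is just a parameter vector and client updates are averaged with weights proportional to local dataset sizes; this is an illustrative simplification, not a full federated system.
```python
import numpy as np

def fed_avg(client_params, client_sizes):
    # Weighted average of client model parameters, weighted by local dataset size.
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Three clients each trained locally and send back only their parameters.
client_params = [np.array([0.9, 1.1]), np.array([1.2, 0.8]), np.array([1.0, 1.0])]
client_sizes = [100, 300, 600]
global_params = fed_avg(client_params, client_sizes)
print(global_params)
```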
This approach allows for privacy-preserving machine learning, as each device keeps its data locally and only shares the information needed to improve the model.",methodology
cf9020b3-5fc3-446e-8a13-d35272f26bc4,web-credibility,Web Credibility,Define the level of credibility of web sources,methodology
07594e5c-1c54-49b6-8885-95213a8bfd37,automated-feature-engineering,Automated Feature Engineering,Automated feature engineering improves upon the traditional approach to feature engineering by automatically extracting useful and meaningful features from a set of related data tables with a framework that can be applied to any problem.,methodology
b6852231-3e47-4f04-a42d-c46267f384fd,depth-rgb-anomaly-detection,Depth + RGB Anomaly Detection,Depth + RGB Anomaly Detection,methodology
0153bc7c-e800-485c-a7dc-c4291552dcad,additive-models,Additive models,,methodology
f0cd8b7a-ecf6-40f1-848d-bc3b5576d8a5,stochastic-optimization,Stochastic Optimization,"**Stochastic Optimization** is the task of optimizing a certain objective functional by generating and using stochastic random variables. Usually, Stochastic Optimization is an iterative process of generating random variables that progressively finds the minima or maxima of the objective functional. Stochastic Optimization is usually applied in non-convex functional spaces where the usual deterministic optimization methods, such as linear or quadratic programming or their variants, cannot be used.
Source: [ASOC: An Adaptive Parameter-free Stochastic Optimization Techinique for Continuous Variables ](https://arxiv.org/abs/1506.08004)",methodology
5fb4a480-9d02-4e66-9ec4-da61afdfcda5,learning-semantic-representations,Learning Semantic Representations,,methodology
2972e2b3-13ed-4be6-bac5-0a1e780b5f50,ensemble-learning,Ensemble Learning,,methodology
440e83b9-6506-457a-a7f6-082b9033ff5e,distributional-reinforcement-learning,Distributional Reinforcement Learning,"The value distribution is the distribution of the random return received by a reinforcement learning agent. It has been used for specific purposes such as implementing risk-aware behaviour.
We have a random return $Z$ whose expectation is the value $Q$. This random return is also described by a recursive equation, but one of a distributional nature: $Z(s, a) \stackrel{D}{=} R(s, a) + \gamma Z(S', A')$.",methodology
fef33044-7356-4064-b7c7-bc78b7bf0aab,dictionary-learning,Dictionary Learning,"**Dictionary Learning** is an important problem in multiple areas, ranging from computational neuroscience, machine learning, to computer vision and image processing. The general goal is to find a good basis for given data. More formally, in the Dictionary Learning problem, also known as sparse coding, we are given samples of a random vector $y\in\mathbb{R}^n$, of the form $y=Ax$ where $A$ is some unknown matrix in $\mathbb{R}^{n×m}$, called dictionary, and $x$ is sampled from an unknown distribution over sparse vectors. The goal is to approximately recover the dictionary $A$.
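A short sketch of sparse coding on synthetic data generated as $y = Ax$, using scikit-learn's DictionaryLearning; the dimensions, sparsity level and regularization below are arbitrary illustrative choices.
```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
true_dict = rng.normal(size=(8, 30))                                 # unknown A (n=8, m=30)
codes = rng.normal(size=(200, 30)) * (rng.random((200, 30)) < 0.1)   # sparse x, one per row
Y = codes @ true_dict.T                                              # samples y = A x

# Learn a dictionary and sparse codes that approximately reconstruct Y.
learner = DictionaryLearning(n_components=30, alpha=0.5, max_iter=50, random_state=0)
learned_codes = learner.fit_transform(Y)
reconstruction = learned_codes @ learner.components_
print(np.linalg.norm(Y - reconstruction) / np.linalg.norm(Y))        # relative error
```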
Source: [Polynomial-time tensor decompositions with sum-of-squares ](https://arxiv.org/abs/1610.01980)",methodology
0a420db2-ecc1-4ec3-bede-4947b75e1d54,novel-class-discovery,Novel Class Discovery,"The goal of Novel Class Discovery (NCD) is to identify new classes in unlabeled data, by exploiting prior knowledge from known classes. In this specific setup, the data is split in two sets. The first is a labeled set containing known classes and the second is an unlabeled set containing unknown classes that must be discovered.",methodology
0b00aa0e-2958-4ad0-9cbb-f5ea39bae6db,imitation-learning,Imitation Learning,"**Imitation Learning** is a framework for learning a behavior policy from demonstrations. Usually, demonstrations are presented in the form of state-action trajectories, with each pair indicating the action to take at the state being visited. In order to learn the behavior policy, the demonstrated actions are usually utilized in two ways. The first, known as Behavior Cloning (BC), treats the action as the target label for each state, and then learns a generalized mapping from states to actions in a supervised manner. Another way, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions, and aims at finding a reward/cost function under which the demonstrated decisions are optimal.
Finally, a newer methodology, Inverse Q-Learning, aims at directly learning Q-functions from expert data that implicitly represent rewards, under which the optimal policy can be given as a Boltzmann distribution, similar to soft Q-learning.
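Behavior Cloning, the simplest of these approaches, reduces to supervised learning on (state, action) pairs; below is a minimal sketch on synthetic demonstrations, where the expert rule, data and classifier are arbitrary illustrative choices.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic expert demonstrations: state is 2-D, the expert picks action 1
# whenever the first state feature is positive, otherwise action 0.
states = rng.normal(size=(500, 2))
actions = (states[:, 0] > 0).astype(int)

# Behavior Cloning: treat the demonstrated action as the label for each state.
policy = LogisticRegression().fit(states, actions)
print(policy.predict([[1.5, -0.3], [-2.0, 0.4]]))  # imitates the expert's rule
```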
Source: [Learning to Imitate ](https://ai.stanford.edu/blog/learning-to-imitate)",methodology
fcd66199-e155-4c52-8ecf-2666604be485,multimodal-text-and-image-classification,Multimodal Text and Image Classification,Classification with both source Image and Text,methodology
2016ca30-0e5c-453f-874e-6bdb9827b56f,one-shot-learning,One-Shot Learning,"One-shot learning is the task of learning information about object categories from a single training example.
( Image credit: [Siamese Neural Networks for One-shot Image Recognition](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf) )",methodology
d9d9ef7d-e201-4e86-9ad5-c507708cbceb,unsupervised-pre-training,Unsupervised Pre-training,Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.,methodology
6e089b6f-2031-4168-a814-d5fdfb5d7bcf,gaussian-processes,Gaussian Processes,"**Gaussian Processes** is a powerful framework for several machine learning tasks such as regression, classification and inference. Given a finite set of input output training data that is generated out of a fixed (but possibly unknown) function, the framework models the unknown function as a stochastic process such that the training outputs are a finite number of jointly Gaussian random variables, whose properties can then be used to infer the statistics (the mean and variance) of the function at test values of input.
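A minimal regression example with scikit-learn's GaussianProcessRegressor, illustrating that the posterior provides both a mean and an uncertainty estimate; the kernel, noise level and data are illustrative assumptions.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Noisy observations of an unknown function.
X_train = np.linspace(0, 5, 12).reshape(-1, 1)
y_train = np.sin(X_train).ravel() + 0.05 * np.random.default_rng(0).normal(size=12)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.05 ** 2)
gp.fit(X_train, y_train)

# The posterior gives both a mean prediction and its uncertainty at new inputs.
X_test = np.array([[2.5], [6.0]])
mean, std = gp.predict(X_test, return_std=True)
print(mean, std)
```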
Source: [Sequential Randomized Matrix Factorization for Gaussian Processes: Efficient Predictions and Hyper-parameter Optimization ](https://arxiv.org/abs/1711.06989)",methodology
500c9088-f127-4395-b775-fa5ef8a6907f,long-tail-learning-with-class-descriptors,Long-tail learning with class descriptors,"Long-tail learning by using class descriptors (like attributes, class embedding, etc) to learn tail classes as well as head classes.",methodology
a9526708-addd-469a-b28e-21e7ed70e7fd,personalized-federated-learning,Personalized Federated Learning,"The federated learning setup presents numerous challenges including data heterogeneity (differences in data distribution), device heterogeneity (in terms of computation capabilities, network connection, etc.), and communication efficiency.
Especially data heterogeneity makes it hard to learn a single shared global model that applies to all clients. To overcome these issues, Personalized Federated Learning (PFL) aims to personalize the global model for each client in the federation.",methodology
5a921317-c6ff-4311-919f-f28ebe5a9780,few-shot-learning,Few-Shot Learning,"**Few-Shot Learning** is an example of meta-learning, where a learner is trained on several related tasks, during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just a few examples, during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for various tasks and train task-specific classifiers on top of this representation.
Source: [Penalty Method for Inversion-Free Deep Bilevel Optimization ](https://arxiv.org/abs/1911.03432)",methodology
e8909485-f083-453b-b1bd-b22b2d6990fa,classification-1,Classification,"**Classification** is the task of categorizing a set of data into predefined classes or groups. The aim of classification is to train a model to correctly predict the class or group of new, unseen data. The model is trained on a labeled dataset where each instance is assigned a class label. The learning algorithm then builds a mapping between the features of the data and the class labels. This mapping is then used to predict the class label of new, unseen data points. The quality of the prediction is usually evaluated using metrics such as accuracy, precision, and recall.",methodology
c2156a71-681e-4b11-a263-ac612cb23c73,entity-embeddings,Entity Embeddings,Entity Embeddings is a technique for applying deep learning to tabular data. It involves representing the categorical data of an information systems entity with multiple dimensions.,methodology
6dba4bca-91a4-4bf6-98c6-b59acc46f0cf,multiobjective-optimization,Multiobjective Optimization,"Multi-objective optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, multiattribute optimization or Pareto optimization) is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.",methodology
05cbb23a-6606-4f64-b17b-bdb8f35b6eac,eeg-denoising,EEG Denoising,,methodology
63f3f540-3c8a-4afa-82ec-fdd00be8d65a,word-embeddings,Word Embeddings,"Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
Techniques for learning word embeddings can include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
( Image credit: [Dynamic Word Embedding for Evolving Semantic Discovery](https://arxiv.org/pdf/1703.00607v2.pdf) )",methodology
d659e2dc-5715-4570-b474-e312a48d6598,automatic-machine-learning-model-selection,Automatic Machine Learning Model Selection,,methodology
f5411774-01e5-4c7b-ab7d-3002699869bc,tensor-networks,Tensor Networks,,methodology
fd130c57-e46a-4899-b60a-cf07c7944fcb,unsupervised-mnist,Unsupervised MNIST,Accuracy on MNIST when training without any labels,methodology
ed894989-dbc0-4cc8-b902-72279b9121d1,unsupervised-domain-adaptation,Unsupervised Domain Adaptation,"**Unsupervised Domain Adaptation** is a learning framework to transfer knowledge learned from source domains with a large number of annotated training examples to target domains with unlabeled data only.
Source: [Domain-Specific Batch Normalization for Unsupervised Domain Adaptation ](https://arxiv.org/abs/1906.03950)",methodology
9c7cadc6-110d-4e82-b7e5-b15a3246f194,depth-anomaly-segmentation,Depth Anomaly Segmentation,Anomaly Segmentation using depth information only,methodology
be1e165a-3c37-42fa-a552-d33b024d8e33,nonparametric-deep-clustering,Nonparametric Deep Clustering,Deep nonparametric clustering refers to methods that perform deep clustering when the number of clusters is not known a priori and needs to be inferred.,methodology
10675c7e-cfe4-4c14-a3dd-7e6da83e8776,interpretable-machine-learning,Interpretable Machine Learning,"The goal of **Interpretable Machine Learning** is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.
Source: [Assessing the Local Interpretability of Machine Learning Models ](https://arxiv.org/abs/1902.03501)",methodology
73df912d-b4d0-47f3-9e2d-d19183b5eb90,quantum-circuit-mapping,Quantum Circuit Mapping,Mapping quantum circuits to quantum devices,methodology
34da7064-279a-499a-b42d-a8f3d5015ea6,partial-domain-adaptation,Partial Domain Adaptation,"**Partial Domain Adaptation** is a transfer learning paradigm, which manages to transfer relevant knowledge from a large-scale source domain to a small-scale target domain.
Source: [Deep Residual Correction Network for Partial Domain Adaptation ](https://arxiv.org/abs/2004.04914)",methodology
e039966d-7552-4336-923f-2cd1a34aed54,privacy-preserving-deep-learning,Privacy Preserving Deep Learning,"The goal of privacy-preserving (deep) learning is to train a model while preserving privacy of the training dataset. Typically, it is understood that the trained model should be privacy-preserving (e.g., due to the training algorithm being differentially private).",methodology
e6e83db2-10aa-4378-8b86-54f0be89f438,3d-anomaly-segmentation,3D Anomaly Segmentation,Anomaly Segmentation using 3D information only,methodology
278b9437-ca60-46ba-bca6-758b3586e1a7,multi-label-classification,Multi-Label Classification,"**Multi-Label Classification** is the supervised learning problem where an instance may be associated with multiple labels. This is an extension of single-label classification (i.e., multi-class, or binary) where each instance is only associated with a single class label.
Source: [Deep Learning for Multi-label Classification ](https://arxiv.org/abs/1502.05988)",methodology
7b50456c-021a-4467-8239-9c9bfdadc98f,feature-engineering,Feature Engineering,"Feature engineering is the process of taking a dataset and constructing explanatory variables — features — that can be used to train a machine learning model for a prediction problem. Often, data is spread across multiple tables and must be gathered into a single table with rows containing the observations and features in the columns.
The traditional approach to feature engineering is to build features one at a time using domain knowledge, a tedious, time-consuming, and error-prone process known as manual feature engineering. The code for manual feature engineering is problem-dependent and must be re-written for each new dataset.",methodology
225957cf-c8b8-45a4-8b17-ecdb727c20ee,few-shot-relation-classification,Few-Shot Relation Classification,"**Few-Shot Relation Classification** is a particular relation classification task under minimum annotated data, where a model is required to classify a new incoming query instance given only few support instances (e.g., 1 or 5) during testing.
Source: [MICK: A Meta-Learning Framework for Few-shot Relation Classification with Little Training Data ](https://arxiv.org/abs/2004.14164)",methodology
2f6035db-895f-45b8-87b7-3ad0ff98db84,variable-selection,Variable Selection,,methodology
694df67f-5a5b-4cb9-af6c-4a8cd87261e1,bayesian-optimisation,Bayesian Optimisation,"Expensive black-box functions are a common problem in many disciplines, including tuning the parameters of machine learning algorithms, robotics, and other engineering design problems. **Bayesian Optimisation** is a principled and efficient technique for the global optimisation of these functions. The idea behind Bayesian Optimisation is to place a prior distribution over the target function and then update that prior with a set of “true” observations of the target function by expensively evaluating it in order to produce a posterior predictive distribution. The posterior then informs where to make the next observation of the target function through the use of an acquisition function, which balances the exploitation of regions known to have good performance with the exploration of regions where there is little information about the function’s response.
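A compact sketch of one Bayesian Optimisation loop, using a Gaussian-process surrogate and an expected-improvement acquisition evaluated over a grid of candidates; the objective function, grid and hyperparameters are illustrative assumptions, not taken from the source below.
```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_objective(x):                   # stand-in for a costly black-box function
    return np.sin(3 * x) + 0.5 * x ** 2

def expected_improvement(mu, sigma, best):    # minimisation form of EI
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

candidates = np.linspace(-2, 2, 200).reshape(-1, 1)
X = np.array([[-1.5], [0.0], [1.5]])          # a few initial observations
y = expensive_objective(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)   # surrogate posterior
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_objective(x_next[0]))

print(X[np.argmin(y)], y.min())               # best input found so far
```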
Source: [A Bayesian Approach for the Robust Optimisation of Expensive-to-Evaluate Functions ](https://arxiv.org/abs/1904.11416)",methodology
0a077125-87d9-4faa-b831-0ce6f1cf5b85,graph-representation-learning,Graph Representation Learning,"The goal of **Graph Representation Learning** is to construct a set of features (‘embeddings’) representing the structure of the graph and the data thereon. We can distinguish among Node-wise embeddings, representing each node of the graph, Edge-wise embeddings, representing each edge in the graph, and Graph-wise embeddings representing the graph as a whole.
Source: [SIGN: Scalable Inception Graph Neural Networks ](https://arxiv.org/abs/2004.11198)",methodology
5326889d-7260-4869-9211-eb6842ba3be9,abnormal-event-detection-in-video,Abnormal Event Detection In Video,"**Abnormal Event Detection In Video** is a challenging task in computer vision, as the definition of what an abnormal event looks like depends very much on the context. For instance, a car driving by on the street is regarded as a normal event, but if the car enters a pedestrian area, this is regarded as an abnormal event. A person running on a sports court (normal event) versus running outside from a bank (abnormal event) is another example. Although what is considered abnormal depends on the context, we can generally agree that abnormal events should be unexpected events that occur less often than familiar (normal) events
Source: [Unmasking the abnormal events in video ](https://arxiv.org/abs/1705.08182)
Image: [Ravanbakhsh et al](https://arxiv.org/pdf/1708.09644v1.pdf)",methodology
8f4d6873-1c9b-4829-89d6-e1906b4ceb64,subgroup-discovery,Subgroup Discovery,,methodology
8c129910-a62c-426d-831e-8fce33c0d116,bilevel-optimization,Bilevel Optimization,"**Bilevel Optimization** is a branch of optimization, which contains a nested optimization problem within the constraints of the outer optimization problem. The outer optimization task is usually referred to as the upper level task, and the nested inner optimization task is referred to as the lower level task. The lower level problem appears as a constraint, such that only an optimal solution to the lower level optimization problem is a feasible candidate for the upper level optimization problem.
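As a generic illustration (the standard textbook formulation, not taken from the cited paper), the nested structure can be written as:
```
\min_{x \in X} \; F(x, y^{*}(x))
\quad \text{subject to} \quad
y^{*}(x) \in \operatorname*{arg\,min}_{y \in Y} f(x, y)
```
where $F$ and $f$ denote the upper-level and lower-level objectives, respectively.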
Source: [Efficient Evolutionary Algorithm for Single-Objective Bilevel Optimization ](https://arxiv.org/abs/1303.3901)",methodology
3890025b-d8b6-42d6-8893-f3710eeeddf9,data-free-quantization,Data Free Quantization,"**Data Free Quantization** is a technique to achieve a highly accurate quantized model without accessing any training data.
Source: [Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples](https://arxiv.org/abs/2111.02625)",methodology
bbc0b991-c163-4ea3-952f-40554679264e,generalization-bounds,Generalization Bounds,,methodology
c9239fef-deea-4d93-a47a-2762a448b07b,clustering-algorithms-evaluation,Clustering Algorithms Evaluation,,methodology
801ce668-c7a4-449d-ad02-456a0dc4e0e1,automl,AutoML,"Automated Machine Learning (**AutoML**) is a general concept which covers diverse techniques for automated model learning including automatic data preprocessing, architecture search, and model selection.
Source: [Evaluating recommender systems for AI-driven data science](https://arxiv.org/abs/1905.09205)
Source: [CHOPT : Automated Hyperparameter Optimization Framework for Cloud-Based Machine Learning Platforms ](https://arxiv.org/abs/1810.03527)",methodology
912a56e6-3a04-4c20-b79d-8e6913830969,stroke-classification,Stroke Classification,,methodology
c05b9f22-004b-4356-8565-56a8342e6d38,online-nonnegative-cp-decomposition,Online nonnegative CP decomposition,,methodology
a68ec749-2eaa-4d7e-bb2b-53ffddfb41bd,multi-goal-reinforcement-learning,Multi-Goal Reinforcement Learning,,methodology
5960a54b-a539-487f-8a21-412652bdf3fa,learning-word-embeddings,Learning Word Embeddings,,methodology
6e5606db-2753-46a2-ab00-1cdaf83f5fdf,metaheuristic-optimization,Metaheuristic Optimization,"In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem. For some examples, you can visit https://aliasgharheidari.com/Publications.html",methodology
d714eb4b-fee9-4485-939e-c8e7789b961b,outlier-ensembles,outlier ensembles,,methodology
252db5be-b8c4-4162-b083-fff358413b6a,distributed-optimization,Distributed Optimization,"The goal of **Distributed Optimization** is to optimize a certain objective defined over millions to billions of data points that are distributed over many machines, by utilizing the computational power of these machines.
Source: [Analysis of Distributed Stochastic Dual Coordinate Ascent](https://arxiv.org/abs/1312.1031)",methodology
0d91e375-3949-40dd-b2e3-f8922a07dd49,federated-unsupervised-learning,Federated Unsupervised Learning,Federated unsupervised learning trains models from decentralized data that have no labels.,methodology
611134de-7b6c-4cc1-8c04-2be2ab123cb2,outlier-detection,Outlier Detection,"**Outlier Detection** is the task of identifying instances of a given data set that are considered anomalous in that they are unusual compared with other instances. It is one of the core data mining tasks and is central to many applications. In the security field, it can be used to identify potentially threatening users; in the manufacturing field, it can be used to identify parts that are likely to fail.
Source: [Coverage-based Outlier Explanation ](https://arxiv.org/abs/1911.02617)",methodology
15b1af90-6beb-4b98-9707-e2efdd3e5bb7,hierarchical-reinforcement-learning,Hierarchical Reinforcement Learning,,methodology
0ac6616e-42ff-45c5-938a-a041d9433f40,information-plane,Information Plane,"The Information Plane (IP) of a deep neural network shows the trajectories of its hidden layers during training in a 2D plane, using as coordinate axes the mutual information between the input and the hidden layer, and the mutual information between the output and the hidden layer.",methodology
0b44f703-a354-46c0-aa9c-fda41befeffa,disentanglement,Disentanglement,"This is an approach to solving a diverse set of tasks in a data-efficient manner by disentangling (or isolating) the underlying structure of the main problem into disjoint parts of its representations. This disentanglement can be achieved by focusing on the ""transformation"" properties of the world (the main problem).",methodology
35ff68ae-f5b7-468f-bafb-e7844f700aa4,chatbot,Chatbot,"**Chatbot** or conversational AI is a language model designed and implemented to have conversations with humans.
Source: [Open Data Chatbot ](https://arxiv.org/abs/1909.03653)
[Image source](https://arxiv.org/pdf/2006.16779v3.pdf)",methodology
d60c6a4c-aa28-4473-b340-708a03cc6280,hyperparameter-optimization,Hyperparameter Optimization,"**Hyperparameter Optimization** is the problem of choosing a set of optimal hyperparameters for a learning algorithm. Whether the algorithm is suitable for the data depends directly on the hyperparameters, which influence whether the model overfits or underfits. Different models require different assumptions, weights, or training speeds for different types of data under a given loss function.
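A minimal random-search sketch (one common strategy among many; the search space and scoring function below are invented placeholders) might look like:
```
import random

# Invented search space over two hyperparameters (illustrative only).
space = {
    'learning_rate': lambda: 10 ** random.uniform(-4, -1),
    'num_layers': lambda: random.randint(1, 6),
}

def validation_score(params):
    # Stand-in for training a model and measuring validation performance.
    return 1.0 - abs(params['learning_rate'] - 0.01) - 0.02 * abs(params['num_layers'] - 3)

best_params, best_score = None, float('-inf')
for _ in range(50):                         # random search: sample, evaluate, keep the best
    params = {name: sample() for name, sample in space.items()}
    score = validation_score(params)
    if score > best_score:
        best_params, best_score = params, score

print(best_params, best_score)
```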
Source: [Data-driven model for fracturing design optimization: focus on building digital database and production forecast ](https://arxiv.org/abs/1910.14499)",methodology
c44de774-e661-412d-9d07-4a3075590a43,generalized-zero-shot-learning,Generalized Zero-Shot Learning,,methodology
6e518ad0-1ae8-4317-a8b1-438beec295bd,model-compression,Model Compression,"**Model Compression** is an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks in low-power and resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization and weight quantization are some of the proposed methods to compress the size of deep networks.
Source: [KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow ](https://arxiv.org/abs/2004.05319)",methodology
05d0ee81-be0f-489d-ba00-be94201c0849,density-estimation,Density Estimation,"The goal of **Density Estimation** is to give an accurate description of the underlying probabilistic density distribution of an observable data set with unknown density.
Source: [Contrastive Predictive Coding Based Feature for Automatic Speaker Verification ](https://arxiv.org/abs/1904.01575)",methodology
b4099847-82ea-4bd5-8199-6e1262423860,multi-task-learning,Multi-Task Learning,"Multi-task learning aims to learn multiple different tasks simultaneously while maximizing performance on one or all of the tasks.
( Image credit: [Cross-stitch Networks for Multi-task Learning](https://arxiv.org/pdf/1604.03539v1.pdf) )",methodology
a77e5ec2-ad5a-4ae1-9dde-e980ec536ccb,sparse-learning,Sparse Learning,,methodology
20a378fe-b1ac-4986-ad5b-ac1509c6aa5f,detection-of-dependencies,Detection of Dependencies,,methodology
0c43a514-4631-40cd-83ed-2916f7aea05f,architecture-search,Neural Architecture Search,"**Neural architecture search (NAS)** is a technique for automating the design of artificial neural networks (ANN), a widely used model in the field of machine learning. NAS essentially takes the process of a human manually tweaking a neural network and learning what works well, and automates this task to discover more complex architectures.
Image Credit : [NAS with Reinforcement Learning](https://arxiv.org/abs/1611.01578)",methodology
68df26be-3683-4f66-9060-f1befb3810a3,long-tail-learning,Long-tail Learning,"Long-tailed learning, one of the most challenging problems in visual recognition, aims to train well-performing models from a large number of images that follow a long-tailed class distribution.",methodology
ecce23e6-77cc-4fe1-910d-7c2da5c77d47,unsupervised-anomaly-detection-in-sound,Unsupervised Anomaly Detection In Sound,,methodology
9890bfbe-95aa-4317-942f-b1932cb445a3,bayesian-optimization,Bayesian Optimization,,methodology
319eedb6-064e-4d1d-8671-3bf2e493baf0,partial-label-learning,Partial Label Learning,,methodology
8e39c32c-81c8-44e9-8b2a-3332afeb66af,rgb-depth-anomaly-detection-and-segmentation,RGB+Depth Anomaly Detection and Segmentation,RGB+Depth Anomaly Detection and Segmentation,methodology
9962e5a4-14bc-406f-a1bc-3a1b764f7aa8,few-shot-imitation-learning,Few-Shot Imitation Learning,,methodology
c66d1438-1cd1-46c6-b3b1-3f0445e22d4b,model-extraction,Model extraction,"Model extraction attacks, aka model stealing attacks, are used to extract the parameters from the target model. Ideally, the adversary will be able to steal and replicate a model that will have a very similar performance to the target model.",methodology
236bd5e1-b68e-4d8f-bfc1-37fbc3f6e2b7,unsupervised-domain-expansion,Unsupervised Domain Expansion,,methodology
f6b3495c-9607-46ab-ad77-6c4d7c9a1bd6,logical-fallacy-detection,Logical Fallacy Detection,,methodology
0a443799-f9fe-43c4-b3d2-a11418f38ed2,normalising-flows,Normalising Flows,,methodology
7b8c196f-a8d0-434f-96ee-fcae85c191a3,depth-rgb-anomaly-segmentation,Depth + RGB Anomaly Segmentation,Depth + RGB Anomaly Segmentation,methodology
a71a0a03-7d77-4fb4-99a7-2defa5094201,knowledge-graph-embeddings,Knowledge Graph Embeddings,,methodology
b60f45d4-b7e3-495f-a8c3-022795208e8f,multi-task-language-understanding,Multi-task Language Understanding,"The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
Source: [Measuring Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300)",methodology
d9dc8b0d-26dd-4416-b7af-8a169382e41d,few-shot-camera-adaptive-color-constancy,Few-Shot Camera-Adaptive Color Constancy,,methodology
f7e043fd-da78-45b5-b40c-e929a60fa751,l2-regularization,L2 Regularization,,methodology
f8d5e18f-d865-41b8-9b41-f79032dc47ec,multi-agent-reinforcement-learning,Multi-agent Reinforcement Learning,"The target of **Multi-agent Reinforcement Learning** is to solve complex problems by integrating multiple agents that focus on different sub-tasks. In general, there are two types of multi-agent systems: independent and cooperative systems.
Source: [Show, Describe and Conclude: On Exploiting the Structure Information of Chest X-Ray Reports ](https://arxiv.org/abs/2004.12274)",methodology
eb627aa7-4958-4ac5-93c6-a2dfa3ec6461,reinforcement-learning-1,Reinforcement Learning (RL),**Reinforcement Learning (RL)** involves training an agent to take actions in an environment to maximize a cumulative reward signal. The agent interacts with the environment and learns by receiving feedback in the form of rewards or punishments for its actions. The goal of reinforcement learning is to find the optimal policy or decision-making strategy that maximizes the long-term reward.,methodology
e245a394-bfa1-44f6-b301-7741770c8e6f,multiple-instance-learning,Multiple Instance Learning,"**Multiple Instance Learning** is a type of weakly supervised learning algorithm where training data is arranged in bags, where each bag contains a set of instances $X=\\{x_1,x_2, \ldots,x_M\\}$, and there is one single label $Y$ per bag, $Y\in\\{0, 1\\}$ in the case of a binary classification problem. It is assumed that individual labels $y_1, y_2,\ldots, y_M$ exist for the instances within a bag, but they are unknown during training. In the standard Multiple Instance assumption, a bag is considered negative if all its instances are negative. On the other hand, a bag is positive, if at least one instance in the bag is positive.
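A minimal sketch of the standard assumption and of a max-pooling bag scorer (the instance scorer below is a made-up stand-in, not a method from the cited paper):
```
import numpy as np

# Standard Multiple Instance assumption: a bag is positive iff at least one instance is positive.
def bag_label(instance_labels):
    return int(np.any(np.asarray(instance_labels) == 1))

# Simple max-pooling bag scorer built on top of an instance-level scorer.
def bag_score(bag, instance_scorer):
    return max(instance_scorer(x) for x in bag)

instance_scorer = lambda x: float(np.sum(x))          # hypothetical instance scorer
bag = [np.array([0.1, 0.2]), np.array([0.9, 0.4])]    # one bag with M = 2 instances
print(bag_label([0, 0, 1]), bag_score(bag, instance_scorer))
```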
Source: [Monte-Carlo Sampling applied to Multiple Instance Learning for Histological Image Classification ](https://arxiv.org/abs/1812.11560)",methodology
30f46b66-8ec9-477b-986a-6508dce2333d,retrieval,Retrieval,,methodology
149194d5-ee04-4c88-b72d-dd1249bebc5a,geophysics,Geophysics,,miscellaneous
d900740a-c27b-4287-8c21-b5b432283bb7,crowd-flows-prediction,Crowd Flows Prediction,,miscellaneous
e77a2092-415c-4f4b-9698-7bfa82263834,one-class-classification,One-Class Classification,,miscellaneous
de715ac1-3a38-4ff2-893c-653fbc009475,college-computer-science,College Computer Science,,miscellaneous
fbc6d622-2920-4edc-a861-2855673850d3,security-studies,Security Studies,,miscellaneous
0b30dd92-a78c-49e1-a1f1-410456c9aa65,photometric-redshift-estimation,Photometric Redshift Estimation,,miscellaneous
6dd2fa5e-4b9a-49b1-b835-96534177691c,model-discovery,Model Discovery,discovering PDEs from spatiotemporal data,miscellaneous
af951164-33bb-4144-9e69-fa7c825dd31b,multi-modal-classification,Multi-modal Classification,,miscellaneous
ce567aae-4f4f-4211-9b37-29c413b72e02,seismic-imaging,Seismic Imaging,,miscellaneous
5354fd4d-7c84-430e-8e02-747b54b64fc3,science-question-answering,Science Question Answering,Image credit: [Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering](https://paperswithcode.com/paper/learn-to-explain-multimodal-reasoning-via),miscellaneous
1eaafa18-003f-414d-b96c-df9a96675676,sequential-correlation-estimation,Sequential Correlation Estimation,,miscellaneous
15f2990b-bfcb-419c-8bcf-812e42d7f45f,multi-lingual-image-text-classification,Multilingual Image-Text Classification,,miscellaneous
c29aaa29-b10a-49a0-b3b8-d5b7fa7a14a7,penn-machine-learning-benchmark,Penn Machine Learning Benchmark,Penn Machine Learning Benchmarks (PMLB) is a large collection of curated benchmark datasets for evaluating and comparing supervised machine learning algorithms.,miscellaneous
cd12e98f-042b-4092-a35a-6dd7612eb1d2,fever-3-way,FEVER (3-way),,miscellaneous
340c21f8-20e7-4ef4-926e-4f6b718f9925,pulsar-prediction,Pulsar Prediction,,miscellaneous
75b2c788-0185-4d41-9c92-b39f61325d7a,electrical-engineering,Electrical Engineering,,miscellaneous
7fde6538-5861-4d5c-90dc-2c76f8b0af0a,robust-design,Robust Design,,miscellaneous
d38f75fa-64db-45a7-b0c7-4f584daeb6d1,high-school-psychology,High School Psychology,,miscellaneous
0bc7696d-f7e0-4c2c-bf26-a6c409dff73c,problem-decomposition,Problem Decomposition,,miscellaneous
6f512033-3791-4e2c-9668-f498ef5630ef,weather-forecasting,Weather Forecasting,"**Weather Forecasting** is the prediction of future weather conditions such as precipitation, temperature, pressure and wind.
Source: [MetNet: A Neural Weather Model for Precipitation Forecasting ](https://arxiv.org/abs/2003.12140)",miscellaneous
84e0cf3b-bb21-48a1-ba64-1036b565ed5b,knowledge-aware-recommendation,Knowledge-Aware Recommendation,,miscellaneous
ea908610-904c-4948-b208-3e1b65d08689,moral-permissibility,Moral Permissibility,,miscellaneous
4fe64179-bddf-4329-be31-4060e676c0b7,jurisprudence,Jurisprudence,,miscellaneous
b29f8cef-6bc8-4825-93d4-e1337e4739e2,food-recommendation,Food recommendation,,miscellaneous
fa09bccd-5076-4bce-b374-b0f81bdef864,seismic-interpretation,Seismic Interpretation,,miscellaneous
3f7c0b7d-1984-4a2d-bcd4-03ddf5097158,clinical-knowledge,Clinical Knowledge,,miscellaneous
d64976cc-c2b9-43d9-9bc7-6cf00ccf4348,radio-interferometry,Radio Interferometry,,miscellaneous
81324038-9947-4e70-aab9-fe449b45468b,behavioral-malware-detection,Behavioral Malware Detection,,miscellaneous
d67bc424-f30b-4deb-aa2e-5e99311fa404,classifier-calibration,Classifier calibration,Confidence calibration – the problem of predicting probability estimates representative of the true correctness likelihood – is important for classification models in many applications. The two common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).,miscellaneous
fee74468-1a63-4738-9ff6-0d38203c88b0,transfer-learning,Transfer Learning,"**Transfer Learning** is a machine learning technique where a model trained on one task is re-purposed and fine-tuned for a related, but different task. The idea behind transfer learning is to leverage the knowledge learned from a pre-trained model to solve a new, but related problem. This can be useful in situations where there is limited data available to train a new model from scratch, or when the new task is similar enough to the original task that the pre-trained model can be adapted to the new problem with only minor modifications.
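A minimal sketch of one common recipe, assuming PyTorch and torchvision (version 0.13 or later) are available: freeze a pretrained backbone and train only a new task-specific head. This is an illustrative choice, not the only way to transfer.
```
import torch
import torch.nn as nn
from torchvision import models

# 'DEFAULT' selects the best available pretrained ImageNet weights (torchvision >= 0.13).
model = models.resnet18(weights='DEFAULT')
for p in model.parameters():
    p.requires_grad = False                          # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 10)       # new head for a hypothetical 10-class task

# Only the new head is trained; the rest of the network keeps its pretrained knowledge.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```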
( Image credit: [Subodh Malgonde](https://medium.com/@subodh.malgonde/transfer-learning-using-tensorflow-52a4f6bcde3e) )",miscellaneous
dc39334a-356b-430c-a552-b89659ea1653,intrusion-detection,Intrusion Detection,"**Intrusion Detection** is the process of dynamically monitoring events occurring in a computer system or network, analyzing them for signs of possible incidents and often interdicting the unauthorized access. This is typically accomplished by automatically collecting information from a variety of systems and network sources, and then analyzing the information for possible security problems.
Source: [Machine Learning Techniques for Intrusion Detection ](https://arxiv.org/abs/1312.2177)",miscellaneous
cdf82919-23e0-470f-b359-94dccd2314c9,natural-questions,Natural Questions,,miscellaneous
a9b200a8-63be-44dd-adf9-6fab8e695d31,econometrics,Econometrics,,miscellaneous
153b4423-acfb-47e0-9b39-77061d59346e,classification-with-costly-features,Classification with Costly Features,The task is to classify the dataset with costly features with different budget settings. The final metric is the normalized area under the cost-accuracy curve.,miscellaneous
caf36626-b560-4adb-a0ec-4eada15b1a8a,human-sexuality,Human Sexuality,,miscellaneous
c76db4f0-b783-40be-bb2f-3308d1b5828a,android-malware-detection,Android Malware Detection,,miscellaneous
1c7b5d4d-6296-4f17-9c56-e3357cc0e5ba,modeling-local-geometric-structure,Modeling Local Geometric Structure,,miscellaneous
d6d08001-0cd2-497a-8dfd-8fb95bd6fc13,ethics,Ethics,,miscellaneous
24d7896b-5c52-40a3-b00e-fee92b48af8f,behavioral-malware-classification,Behavioral Malware Classification,,miscellaneous
860e9764-fa2e-4bb8-bd89-1df0b6d099cc,ecommerce,Ecommerce,,miscellaneous
4d771ee5-3e20-48dd-a39c-6609a53c64f4,high-school-statistics,High School Statistics,,miscellaneous
60c11d1b-de2b-4686-b680-e7dbeaf2be54,sentence-ambiguity,Sentence Ambiguity,"Asks models to identify the truth or falsehood of purposely ambiguous sentences.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/sentence_ambiguity)",miscellaneous
c696870a-2daa-4cb9-8b06-0f717c55d07b,network-intrusion-detection,Network Intrusion Detection,**Network intrusion detection** is the task of monitoring network traffic to and from all devices on a network in order to detect computer attacks.,miscellaneous
ec871fe7-cc0c-4f1b-808b-1481d3172840,next-basket-recommendation,Next-basket recommendation,,miscellaneous
1c214455-f887-4754-992c-a4950b049ad6,vector-quantization-k-means-problem,Vector Quantization (k-means problem),"Given a data set $X$ of $d$-dimensional numeric vectors and a number $k$, find a codebook $C$ of $k$ $d$-dimensional vectors such that the sum of squared distances of each $x \in X$ to its respective nearest $c \in C$ is as small as possible. This is also known as the k-means problem and is known to be NP-hard.",miscellaneous
47547b1e-2e9d-4745-b2a3-bdd237edbe90,load-forecasting,Load Forecasting,,miscellaneous
d9e8cf09-f8b6-4eb2-b9d9-6ca39b42494c,cross-modal-retrieval,Cross-Modal Retrieval,"**Cross-Modal Retrieval** is used for implementing a retrieval task across different modalities, such as image-text, video-text, and audio-text retrieval. The main challenge of Cross-Modal Retrieval is the modality gap, and the key solution is to generate new representations from different modalities in a shared subspace, such that the newly generated features can be used to compute distance metrics such as cosine distance and Euclidean distance.
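A minimal sketch of retrieval in such a shared space, using cosine similarity and random stand-in embeddings (the cross-modal encoder that would produce real embeddings is assumed and not shown):
```
import numpy as np

# Random stand-ins for embeddings produced by a (not shown) cross-modal encoder.
rng = np.random.default_rng(0)
image_query = rng.normal(size=128)               # embedding of the query image
text_embeddings = rng.normal(size=(1000, 128))   # embeddings of candidate texts

def cosine_similarity(query, candidates):
    return candidates @ query / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(query))

top5 = np.argsort(-cosine_similarity(image_query, text_embeddings))[:5]
print(top5)   # indices of the five most similar texts
```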
Source: [Deep Triplet Neural Networks with Cluster-CCA for Audio-Visual Cross-modal Retrieval ](https://arxiv.org/abs/1908.03737)",miscellaneous
d0cefb8a-2926-4aba-8fd8-f2047275be42,protein-interface-prediction,Protein Interface Prediction,"How and where proteins interface with one another can ultimately impact the proteins' functions along with a range of other biological processes. As such, precise computational methods for protein interface prediction (PIP) are highly sought after, as they could yield significant advances in drug discovery and design as well as protein function analysis.",miscellaneous
c4cd5fdf-9d85-47d9-8407-a0caa31b484b,air-quality-inference,Air Quality Inference,,miscellaneous
06dad2b9-8874-4395-b9a0-753f92995c7e,medical-genetics,Medical Genetics,,miscellaneous
1359fb73-1a6e-412b-8401-60bb7339957a,gender-bias-detection,Gender Bias Detection,,miscellaneous
e8199451-35c7-4009-a89e-f4308c202c8f,high-school-world-history,High School World History,,miscellaneous
060ae2f6-dc29-4b8c-a391-3adaffcab5e7,professional-psychology,Professional Psychology,,miscellaneous
6e7d3008-849e-4e94-91ce-3161881b51f7,mathematical-proofs,Mathematical Proofs,,miscellaneous
f29c70b8-bcde-4dec-9839-715bd2e88971,time-offset-calibration,Time Offset Calibration,,miscellaneous
ca8add1a-6679-49ca-afdc-5bada6805e0f,table-detection,Table Detection,Image credit:[Table Detection in the Wild: A Novel Diverse Table Detection Dataset and Method](https://paperswithcode.com/paper/table-detection-in-the-wild-a-novel-diverse),miscellaneous
5e703b21-8881-40e2-8324-8f2de1b4c8d2,malware-classification,Malware Classification,"**Malware Classification** is the process of assigning a malware sample to a specific malware family. Malware within a family shares similar properties that can be used to create signatures for detection and classification. Signatures can be categorized as static or dynamic based on how they are extracted. A static signature can be based on a byte-code sequence, binary assembly instruction, or an imported Dynamic Link Library (DLL). Dynamic signatures can be based on file system activities, terminal commands, network communications, or function and system call sequences.
Source: [Behavioral Malware Classification using Convolutional Recurrent Neural Networks ](https://arxiv.org/abs/1811.07842)",miscellaneous
7a52348b-c36a-4ba4-8cf4-ae8e0f7aa26a,cross-modal-information-retrieval,Cross-Modal Information Retrieval,"**Cross-Modal Information Retrieval** (CMIR) is the task of finding relevant items across different modalities. For example, given an image, find a text or vice versa. The main challenge in CMIR is known as the *heterogeneity gap*: since items from different modalities have different data types, the similarity between them cannot be measured directly. Therefore, the majority of CMIR methods published to date attempt to bridge this gap by learning a latent representation space, where the similarity between items from different modalities can be measured.
Source: [Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study](https://arxiv.org/abs/2301.05174)",miscellaneous
aaac72f0-e415-4c18-bfc7-cd9e1b574858,making-hiring-decisions,Making Hiring Decisions,,miscellaneous
1e3da564-e4ad-488a-a1a9-4940a22f2cb3,deception-detection-in-videos,Deception Detection In Videos,,miscellaneous
63dcc513-ca92-43d5-96b3-0d17b55e26c7,liquid-simulation,Liquid Simulation,,miscellaneous
51dc7daa-a4e2-4283-86d1-379f1b00fdf6,x-ray-diffraction,X-Ray Diffraction (XRD),"Analysis of X-ray diffraction patterns and images, with common applications to materials and imaging.",miscellaneous
cd28f8e0-8de4-4bd8-bd45-e23d2416ec0b,anatomy,Anatomy,,miscellaneous
d4b9f3a8-a46e-4e17-873b-91e3eaa95ad6,human-grasp-contact-prediction,Grasp Contact Prediction,Predict contact between object and hand (human or robot).,miscellaneous
4b30db46-0e17-4c77-94a1-b643c3c3164c,pseudo-label,Pseudo Label,A lightweight but very powerful technique for semi-supervised learning,miscellaneous
388263e8-f3af-47c6-8ef9-7d6c32cfc7b2,hypothesis-testing,Two-sample testing,"In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each independently obtained from a different given population. The purpose of the test is to determine whether the difference between these two populations is statistically significant. The statistics used in two-sample tests can be used to solve many machine learning problems, such as domain adaptation, covariate shift and generative adversarial networks.",miscellaneous
3b02f93e-7e3c-4b81-ae27-4450dc331532,detecting-adverts,Detecting Adverts,,miscellaneous
5ef54c17-4d45-4737-832e-c4d150f136ae,multi-modal-learning,Multi-Modal Learning,,miscellaneous
47d7230e-c2d9-4976-aec5-af93af5dc6b0,neural-network-compression,Neural Network Compression,,miscellaneous
261d0781-94e3-4121-b787-fd7a98461013,auto-debugging,Auto Debugging,,miscellaneous
853820e8-4cc4-4026-9c30-0c94f3ae16c1,deception-detection,Deception Detection,,miscellaneous
f65d628e-7eaf-49bb-854d-782d76f54e27,knowledge-tracing,Knowledge Tracing,"**Knowledge Tracing** is the task of modelling student knowledge over time so that we can accurately predict how students will perform on future interactions. Improvement on this task means that resources can be suggested to students based on their individual needs, and content which is predicted to be too easy or too hard can be skipped or delayed.
Source: [Deep Knowledge Tracing ](https://arxiv.org/abs/1506.05908)",miscellaneous
77e1b8bc-acd0-4e9a-89d1-791e6c95312d,counterfactual-explanation,Counterfactual Explanation,"Returns a contrastive argument that permits achieving the desired class, e.g., “to obtain this loan, you need XXX of annual revenue instead of the current YYY”",miscellaneous
1f2fcb26-cb89-4053-9e29-9701be9e12a0,professional-medicine,Professional Medicine,,miscellaneous
63254b36-a1e2-4ca1-8654-86c8efe7d4fa,numerical-integration,Numerical Integration,Numerical integration is the task to calculate the numerical value of a definite integral or the numerical solution of differential equations.,miscellaneous
86d14dac-52b7-41e9-bb52-7b0dbd4cedfb,human-organs-senses-multiple-choice,Human Organs Senses Multiple Choice,,miscellaneous
9fcedd24-4324-45be-89fc-2de7780d8bf6,change-detection-for-remote-sensing-images,Change detection for remote sensing images,,miscellaneous
47cdc095-d8f7-45a1-a482-07dd30764c1c,high-school-government-and-politics,High School Government and Politics,,miscellaneous
81fe92a0-251f-4ae5-9c83-d7e05f24ad73,nutrition,Nutrition,,miscellaneous
40e7de41-a272-4bee-a3d9-8f4df75f3c4e,stress-strain-relation,Stress-Strain Relation,"Data-driven techniques for finding the stress-strain relation in non-linearly elastic bodies.
( Image credit: [Data-driven Computing in Elasticity via Chebyshev Approximation](https://arxiv.org/pdf/1904.10434.pdf) )",miscellaneous
a8973129-9c9a-4400-8454-5a15c20294b7,multimodal-intent-recognition,Multimodal Intent Recognition,"Intent recognition on multimodal content.
Image source: [MIntRec: A New Dataset for Multimodal Intent Recognition](https://paperswithcode.com/dataset/mintrec)",miscellaneous
e21b56dc-9ecf-452a-bc9f-7ac13494ed92,insurance-prediction,Insurance Prediction,,miscellaneous
3783eb10-fc87-4580-9dcc-f29b67592f2a,penn-machine-learning-benchmark-real-world,Penn Machine Learning Benchmark (Real-World),Real-World Datasets in Penn Machine Learning Benchmark,miscellaneous
6bf0b998-7fee-4eb2-a2af-8a0a13fce18e,social-media-popularity-prediction,Social Media Popularity Prediction,"Social Media Popularity Prediction (SMPP) aims to predict the future popularity (e.g., clicks, views, likes, etc.) of online posts automatically via plenty of social media data from public platforms. It is a crucial problem for social media learning and forecasting and one of the most challenging problems in the field. With the ever-changing user interests and public attention on social media platforms, how to predict popularity accurately becomes more challenging than before. This task is valuable to content providers, marketers, or consumers in a range of real-world applications, including multimedia advertising, recommendation system, or trend analysis.",miscellaneous
2220c1ef-26b7-474e-8f3d-30e5a18dc71f,community-question-answering,Community Question Answering,"Community question answering is the task of answering questions on a Q&A forum or board, such as Stack Overflow or Quora.",miscellaneous
73f42a22-8cd2-4e9d-9d37-591c2898f652,us-foreign-policy,US Foreign Policy,,miscellaneous
131b12df-e825-48bc-9433-988b9c94d031,click-through-rate-prediction,Click-Through Rate Prediction,"Click-through rate prediction is the task of predicting the likelihood that something on a website (such as an advertisement) will be clicked.
( Image credit: [Deep Spatio-Temporal Neural Networks for Click-Through Rate Prediction](https://arxiv.org/pdf/1906.03776v2.pdf) )",miscellaneous
1294b378-7e81-45be-80ff-c05bd8563a2e,seismic-detection,Seismic Detection,"When recording seismic ground motion in multiple sites using independent recording stations one needs to recognize the presence of the same parts of seismic waves arriving at these stations. This problem is known in seismology as seismic phase picking or, more generally, seismic detection.",miscellaneous
5043b9c2-0676-4f0e-aa21-ddac8f654815,product-recommendation,Product Recommendation,,miscellaneous
682e6dcd-ea06-4dcf-8646-0c69321647ef,imbalanced-classification,imbalanced classification,Learning a classifier from class-imbalanced data,miscellaneous
73111b95-a55b-4455-a8c9-e6982ba83f4a,machine-learning,BIG-bench Machine Learning,This branch includes the most common fundamental machine learning algorithms.,miscellaneous
22b841e3-f99c-4974-8084-46584f1fa91a,operator-learning,Operator learning,Learning an operator between infinite-dimensional Hilbert or Banach spaces,miscellaneous
98f2082c-611b-4ae5-be47-e7af631174a5,sequential-distribution-function-estimation,Sequential Distribution Function Estimation,,miscellaneous
70580fd5-2a8d-465a-bcb3-b8ad3b53bea8,high-school-microeconomics,High School Microeconomics,,miscellaneous
689646d6-db6b-48fc-86cb-22d185d36052,twitter-bot-detection,Twitter Bot Detection,"Academic studies estimate that up to 15% of Twitter users are automated bot accounts [1]. The prevalence of Twitter bots coupled with the ability of some bots to give seemingly human responses has enabled these non-human accounts to garner widespread influence. Hence, detecting non-human Twitter users or automated bot accounts using machine learning techniques has become an area of interest to researchers in the last few years.
[1] https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15587",miscellaneous
39deb82d-c303-4d13-8139-bec58485053b,college-chemistry,College Chemistry,,miscellaneous
04b55739-c36d-4b91-9c69-ac94e5007348,formation-energy,Formation Energy,"On the QM9 dataset the numbers reported in the table are the mean absolute error in eV on the target variable U0 divided by U0's chemical accuracy, which is equal to 0.043.",miscellaneous
fb7c84c4-a146-4ca7-b96a-32e7411979f7,international-law,International Law,,miscellaneous
da209242-bca2-4dd9-bac2-0e5c29444125,online-review-rating,Online Review Rating,,miscellaneous
9181c025-bd30-41b8-be3a-2fa4f4995a90,variational-monte-carlo,Variational Monte Carlo,Variational methods for quantum physics,miscellaneous
5b86606f-987b-45e3-9f65-4d7e6d7a0cca,eeg-emotion-recognition,EEG Emotion Recognition,Emotion Recognition using EEG signals,miscellaneous
4b31fa81-2c3a-4348-8417-40c93558056b,learning-theory,Learning Theory,Learning theory,miscellaneous
ac6acbe8-1a39-4f39-8202-ec1bdb40be4a,multi-modal,Image Retrieval with Multi-Modal Query,"The problem of retrieving images from a database based on a multi-modal (image-text) query. Specifically, the query text prompts some modification in the query image and the task is to retrieve images with the desired modifications.",miscellaneous
1a0b12ad-dcdb-41ec-8b3a-d570b8d88915,prehistory,Prehistory,,miscellaneous
a16aae7f-5726-4592-973b-5bbc598f2ae3,classification-of-variable-stars,Classification Of Variable Stars,,miscellaneous
f1845bda-3966-4341-ba18-66a9bc0cea54,neural-network-security,Neural Network Security,,miscellaneous
f2314ef8-79e6-443e-8eea-83ab5b3b1ed1,logical-fallacies,Logical Fallacies,,miscellaneous
2a8a29d3-a592-47f1-8973-f0e3aa955b5c,brain-decoding,Brain Decoding,"**Motor Brain Decoding** is a fundamental task for building motor brain-computer interfaces (BCIs).
Progress in predicting finger movements based on brain activity allows us to restore motor functions and improve the rehabilitation process of patients.",miscellaneous
ce0637e7-66c0-4e33-aca6-d2e83c085717,college-medicine,College Medicine,,miscellaneous
48a0fd0e-96f1-4f90-8430-3bd2a9c7aa95,similarities-abstraction,Similarities Abstraction,,miscellaneous
1a955145-338a-48a7-a631-276f32fa470c,lake-ice-detection,Lake Ice Monitoring,,miscellaneous
530ba69d-2b67-498a-bcda-bb1d120c6d7c,misconceptions,Misconceptions,"Measures whether a model can discern popular misconceptions from the truth.
Example:
```
input: The daddy longlegs spider is the most venomous spider in the world.
choice: T
choice: F
answer: F
input: Karl Benz is correctly credited with the invention of the first modern automobile.
choice: T
choice: F
answer: T
```
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/misconceptions)",miscellaneous
54de7368-7edc-496a-843b-4209c181739f,moral-disputes,Moral Disputes,,miscellaneous
1054ae93-b3b0-4448-b1d4-50d922f79391,denoising-of-radar-micro-doppler-signatures,Denoising Of Radar Micro-Doppler Signatures,,miscellaneous
5396efc5-46f0-4c1b-b4ad-5fb0cb4db3b5,high-school-chemistry,High School Chemistry,,miscellaneous
d894dc72-93d2-45f2-bc16-9fc616ced5a8,general-knowledge,General Knowledge,"This task aims to evaluate the ability of a model to answer general-knowledge questions.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/general_knowledge)",miscellaneous
503c035b-bf34-4e02-92ff-87e69cadf548,learning-to-rank,Learning-To-Rank,"Learning to rank is the application of machine learning to build ranking models. Some common use cases for ranking models are information retrieval (e.g., web search) and news feeds application (think Twitter, Facebook, Instagram).",miscellaneous
58271dba-04f9-45c4-9711-ce2296c57675,data-visualization,Data Visualization,,miscellaneous
73c544a0-02bc-48c5-96db-5903e9dafb60,prediction-intervals,Prediction Intervals,"A prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.",miscellaneous
2e43f345-b556-4b0a-992c-9d27037b4452,synthetic-data-generation,Synthetic Data Generation,The generation of tabular data by any means possible.,miscellaneous
972a0b71-a8a8-42d3-8abf-58335f92fe32,college-biology,College Biology,,miscellaneous
46275358-496e-4fa5-97e1-f272da5527e7,business-ethics,Business Ethics,,miscellaneous
b86404be-d81a-464d-b11d-d65a662c251b,non-linear-elasticity,Non-Linear Elasticity,,miscellaneous
6eb38012-08b0-43a4-af30-0d1bd0b758ce,high-school-geography,High School Geography,,miscellaneous
6e056e37-6ed7-48f6-81ad-2b9c0b15b8a7,recommendation-systems,Recommendation Systems,"The **Recommendation Systems** task is to produce a list of recommendations for a user. The most common methods used in recommender systems are factor models (Koren et al., 2009; Weimer et al., 2007; Hidasi & Tikk, 2012) and neighborhood methods (Sarwar et al., 2001; Koren, 2008).
Factor models work by decomposing the sparse user-item interactions matrix to a set of d dimensional vectors one for each item and user in the dataset. Factor models are hard to apply in session-based recommendations due to the absence of a user profile. On the other hand, neighborhood methods, which rely on computing similarities between items (or users) are based on co-occurrences of items in sessions (or user profiles). Neighborhood methods have been used extensively in session-based recommendations.
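A minimal sketch of a factor model trained by SGD on made-up interactions (the dimensions, learning rate and regularisation below are arbitrary illustrative choices):
```
import numpy as np

# (user, item, rating) triples; the ratings are made up for illustration.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 0, 1.0)]
n_users, n_items, d = 3, 2, 4

rng = np.random.default_rng(0)
P = 0.1 * rng.normal(size=(n_users, d))   # user factors
Q = 0.1 * rng.normal(size=(n_items, d))   # item factors

lr, reg = 0.05, 0.02
for _ in range(200):                       # SGD over the observed interactions
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

print(P @ Q.T)   # dense matrix of predicted ratings
```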
( Image credit: [CuMF_SGD](https://arxiv.org/pdf/1610.05838v3.pdf) )",miscellaneous
053ba684-0e23-454e-b192-034dcfb42470,building-change-detection-for-remote-sensing,Building change detection for remote sensing images,,miscellaneous
e6ff349e-a93c-445a-a3d0-2c2038bf3e6f,session-based-recommendations,Session-Based Recommendations,Recommendation based on a sequence of events. e.g. next item prediction,miscellaneous
4e33a7c8-0dc1-4a41-bd61-b8d2b96f2bcd,unsupervised-contextual-anomaly-detection,Unsupervised Contextual Anomaly Detection,"The objective of Unsupervised Anomaly Detection is to detect previously unseen rare objects or events. Unsupervised Contextual Anomaly Detection is formulated such that the data contains two types of attributes, behavioral and contextual, with no pre-existing information about which observations are anomalous. Behavioral attributes relate directly to the process of interest, whereas contextual attributes relate to exogenous but highly influential factors in relation to the process. Generally, the behavioral attributes are conditional on the contextual attributes.
Source: [Unsupervised Contextual Anomaly Detection using Joint Deep Variational Generative Models](https://arxiv.org/pdf/1904.00548.pdf)",miscellaneous
eca19802-5ab3-4a5d-ab8b-2c8e1e252a67,crop-classification,Crop Classification,,miscellaneous
0e1a24e3-4c5b-44fb-95ac-d7e694b105fc,dead-reckoning-prediction,Dead-Reckoning Prediction,,miscellaneous
f730c5d6-aca4-41ef-b640-9d1a0b8e0a72,extracting-buildings-in-remote-sensing-images,Extracting Buildings In Remote Sensing Images,,miscellaneous
943a40c4-d7fa-4f44-845c-97de896a8af9,traffic-classification,Traffic Classification,"**Traffic Classification** is a task of categorizing traffic flows into application-aware classes such as chats, streaming, VoIP, etc. Classification can be used for several purposes including policy enforcement and control or QoS management.
Source: [Classification of Traffic Using Neural Networks by Rejecting: a Novel Approach in Classifying VPN Traffic ](https://arxiv.org/abs/2001.03665)",miscellaneous
3ba21079-b084-4d9e-ad13-eecff8470c61,fairness,Fairness,,miscellaneous
f25fdfa2-9d0a-45e3-86ee-b08f7793fb07,multilingual-text-classification,Multilingual text classification,,miscellaneous
d51433ad-7dfa-42e7-8bf7-6fa24f87835c,service-composition,Service Composition,"Let $T$ be the task that the service composition needs to accomplish. The task $T$ can be granulated into subtasks $T_1, T_2, T_3, T_4, \ldots, T_n$, i.e. $T = \\{T_1, T_2, T_3, T_4, \ldots, T_n\\}$. For each task $T_i$, a set of services $S_i = \\{S_{i1}, S_{i2}, S_{i3}, \ldots, S_{im}\\}$ is discovered during the service discovery process such that all services in a set $S_i$ perform the same function and have the same input and output parameters: $S_1 = \\{S_{11}, S_{12}, S_{13}, \ldots, S_{1m}\\}$, $S_2 = \\{S_{21}, S_{22}, S_{23}, \ldots, S_{2m}\\}$, $S_3 = \\{S_{31}, S_{32}, S_{33}, \ldots, S_{3m}\\}$, $\ldots$, $S_n = \\{S_{n1}, S_{n2}, S_{n3}, \ldots, S_{nm}\\}$.
We need to select one service from each set $S_i$ in order to compose the big service such that the overall QoS attributes of the big service are optimal. The total number of possible distinct service compositions is $n^m$. Let $k$ be the number of QoS attributes; then the total number of comparisons required is $kn^m$. We need at least $kn^m$ comparisons to determine whether a solution is optimal, which makes the problem NP-hard.",miscellaneous
5d7ee556-a13e-46c7-beb2-978569a37c76,non-intrusive-load-monitoring,Non-Intrusive Load Monitoring,,miscellaneous
03d2d31d-d39c-44cc-acbc-219a95046d78,marketing,Marketing,,miscellaneous
f4f4ae2c-a174-478d-ab6c-bd65c1fa24f8,epistemic-reasoning,Epistemic Reasoning,,miscellaneous
d1ef038b-352d-455f-9f82-da872474a1f0,artificial-life,Artificial Life,,miscellaneous
e0a6e647-cc3f-4976-a13c-c03daba7c71b,parameter-prediction,Parameter Prediction,,miscellaneous
aff540fe-914d-408a-ba3d-7037a3ec291c,image-text-classification,Image-text Classification,,miscellaneous
c5724f64-2a5c-4afe-a2d6-ee5d7c6138a2,crime-prediction,Crime Prediction,,miscellaneous
4004c837-ff3b-491a-b36a-32e4e0a663b3,age-estimation,Age Estimation,"Age Estimation is the task of estimating the age of a person from an image or some other kind of data.
( Image credit: [BridgeNet](https://arxiv.org/pdf/1904.03358v1.pdf) )",miscellaneous
a41f3071-1c19-4b72-b3fb-435108be8da3,cryptanalysis,Cryptanalysis,,miscellaneous
61fc6272-2c86-4be1-9598-8fff4aa7cc7b,cantilever-beam,Cantilever Beam,,miscellaneous
2b9554ac-1a0f-4bb1-8c6f-e60e967b4379,link-quality-estimation,Link Quality Estimation,,miscellaneous
62a356a8-c383-4394-9fdc-6a9183a8dc4a,movie-recommendation,Movie Recommendation,"Evaluates the ability of language models to propose relevant movie recommendations with collaborative filtering data.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/movie_recommendation)",miscellaneous
6ada25d6-db38-4d18-98e2-7a54ddd951fa,high-school-computer-science,High School Computer Science,,miscellaneous
71c9a953-e422-46c7-97d8-8cdce24a4e14,human-detection-of-deepfakes,Human Detection of Deepfakes,"The task of detecting deepfake stimuli, as given to human participants in a statistical study. Methodologies should ideally include a-priori power analysis (e.g. using the GPower software) to calculate the sample size of human participants that would be sufficient to detect the presence of a main effect of a specified effect size.",miscellaneous
9116e337-07b0-4955-8dd7-c813da223464,high-school-biology,High School Biology,,miscellaneous
67709744-bee9-4242-b86d-87e4c51ef4a8,protein-structure-prediction,Protein Structure Prediction,Image credit: [FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours](https://arxiv.org/pdf/2203.00854v1.pdf),miscellaneous
72cd9615-8c1a-4e47-b8b6-ce8010f21c89,recipe-generation,Recipe Generation,,miscellaneous
d8345502-9a59-47b7-9d43-c98be3b90a3a,online-ranker-evaluation,Online Ranker Evaluation,,miscellaneous
178a325a-d709-4580-8e96-91390e23da94,virology,Virology,,miscellaneous
b1a169d3-2102-4ca8-ae6a-b4fcbc14ad09,interpretability-techniques-for-deep-learning,Interpretability Techniques for Deep Learning,,miscellaneous
1a929e11-a127-4722-a8cd-5e5b23ca6bb9,image-to-image-translation,Image-to-Image Translation,"**Image-to-Image Translation** is a task in computer vision and machine learning where the goal is to learn a mapping between an input image and an output image, such that the output image can be used to perform a specific task, such as style transfer, data augmentation, or image restoration.
( Image credit: [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/pdf/1703.10593v6.pdf) )",miscellaneous
0990fc3a-8243-461c-86b8-6eeda02ed0b8,imputation,Imputation,Substituting missing data with values according to some criteria.,miscellaneous
46b1b057-63c2-4883-843f-166c648a580b,segmentation-of-remote-sensing-imagery,Segmentation Of Remote Sensing Imagery,,miscellaneous
29636033-562c-40ba-805e-e93e47ad52da,context-aware-product-recommendation,Context Aware Product Recommendation,Context-aware recommender systems (CARS) generate more relevant recommendations by adapting them to the specific contextual situation of the user. This task explores how contextual information can be used to create more intelligent and useful recommender systems.,miscellaneous
8b9e3bf0-8ec5-4a67-9d7d-b621d4dbf7f9,smart-grid-prediction,Smart Grid Prediction,,miscellaneous
33aa0798-6f58-4ee2-aeaa-e3db3859210f,total-energy,Total Energy,,miscellaneous
b1ad2cd1-e781-4726-84e1-fe25925a36be,collaborative-filtering,Collaborative Filtering,,miscellaneous
3935926f-272f-4e7b-abcb-c7cc39f4de96,pde-surrogate-modeling,PDE Surrogate Modeling,,miscellaneous
7f01856f-dd44-4b6f-97b2-8177bd9bc077,cyber-attack-investigation,Cyber Attack Investigation,,miscellaneous
21a212d7-7879-46b5-b602-c358df9b0d9c,business-taxonomy-construction,Business Taxonomy Construction,,miscellaneous
14afe058-8e65-4a42-ae73-9d7341941fde,malware-detection,Malware Detection,"**Malware Detection** is a significant part of endpoint security, including workstations, servers, cloud instances, and mobile devices. It is used to detect and identify malicious activities caused by malware. With the increasing variety of malware activity on CMS-based websites, such as [malicious malware redirects on WordPress sites](https://secure.wphackedhelp.com/blog/wordpress-malware-redirect-hack-cleanup/) (aka the WordPress Malware Redirect Hack), in which the site redirects to spam, the need for automatic detection and classification grows as well. Signature-based Malware Detection systems are commonly used for existing malware that has a signature, but they are not suitable for unknown or zero-day malware.
Source: [The Threat of Adversarial Attacks on Machine Learning in Network Security - A Survey ](https://arxiv.org/abs/1911.02621)",miscellaneous
1fa3b7a9-da64-42d6-aa1d-d336b2ef0d1f,air-pollution-prediction,Air Pollution Prediction,,miscellaneous
61d54cf5-34c9-4b88-ab91-6b6c2a980e0d,fever-2-way,FEVER (2-way),,miscellaneous
14486ee0-1801-44b2-8b14-bcdfbb5f27b2,fine-grained-urban-flow-inference,Fine-Grained Urban Flow Inference,Fine-grained urban flow inference (FUFI) aims to infer the fine-grained urban flow map from the coarse-grained one.,miscellaneous
86250d9b-41b4-4797-a49c-8e9b2326b7b7,survival-analysis,Survival Analysis,"**Survival Analysis** is a branch of statistics focused on the study of time-to-event data, usually called survival times. This type of data appears in a wide range of applications such as failure times in mechanical systems, death times of patients in a clinical trial or duration of unemployment in a population. One of the main objectives of Survival Analysis is the estimation of the so-called survival function and the hazard function. If a random variable has density function $f$ and cumulative distribution function $F$, then its survival function $S$ is $1-F$, and its hazard $λ$ is $f/S$.
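As a toy illustration of these definitions (the exponential model below is our choice of example, not taken from the cited source):
```
import numpy as np

# Exponential model with rate lam: f(t) = lam * exp(-lam * t), F(t) = 1 - exp(-lam * t),
# so S(t) = 1 - F(t) = exp(-lam * t) and the hazard f(t) / S(t) = lam is constant.
lam = 0.5
t = np.linspace(0.0, 10.0, 6)
f = lam * np.exp(-lam * t)
F = 1.0 - np.exp(-lam * t)
S = 1.0 - F
print(S)         # survival function
print(f / S)     # hazard, constant and equal to lam
```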
Source: [Gaussian Processes for Survival Analysis ](https://arxiv.org/abs/1611.00817)
Image: [Kvamme et al.](https://arxiv.org/pdf/1910.06724v1.pdf)",miscellaneous
6c9773df-edc8-4594-ae10-dc1d90f94c57,moral-scenarios,Moral Scenarios,,miscellaneous
e74716fe-5637-48e9-919c-191a026d31fb,high-school-macroeconomics,High School Macroeconomics,,miscellaneous
a2ba2c2f-8bcc-4645-94de-6c5efaed02fd,fraud-detection,Fraud Detection,"**Fraud Detection** is a vital topic that applies to many industries, including the financial sector, banking, government agencies, insurance, law enforcement, and more. Fraud attempts have risen sharply in recent years, making this topic more critical than ever. Despite efforts on the part of the affected organizations, hundreds of millions of dollars are lost to fraud each year. Because only a small fraction of samples in a vast population are fraudulent, locating them can be difficult. Data mining and statistics help to predict and quickly detect fraud so that immediate action can be taken to minimize costs.
Source: [Applying support vector data description for fraud detection ](https://arxiv.org/abs/2006.00618)",miscellaneous
edb71183-2dff-4fd1-91eb-81845d7953dd,astronomy,Astronomy,,miscellaneous
fda1ea26-b516-41ca-a74a-46c3fa24173c,triviaqa,TriviaQA,,miscellaneous
674b5e84-5452-4124-8f7f-4fee354ac65e,facies-classification,Facies Classification,,miscellaneous
4cc9537a-53ff-40db-b38e-0769f05251d3,multi-modal-person-identification,Multi-Modal Person Identification,,miscellaneous
4abd488f-2509-439d-b944-97b66cd8afeb,multi-armed-bandits,Multi-Armed Bandits,"Multi-armed bandits refer to a task where a fixed amount of resources must be allocated between competing choices in a way that maximizes expected gain. Typically these problems involve an exploration/exploitation trade-off.
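A minimal epsilon-greedy sketch for a Bernoulli bandit (the arm probabilities and epsilon below are made up, and epsilon-greedy is only one of many strategies):
```
import random

true_probs = [0.2, 0.5, 0.7]   # made-up Bernoulli reward probabilities of three arms
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]       # running estimates of each arm's expected reward
epsilon = 0.1

for _ in range(1000):
    if random.random() < epsilon:                              # explore
        arm = random.randrange(len(true_probs))
    else:                                                      # exploit the current best estimate
        arm = max(range(len(values)), key=lambda a: values[a])
    reward = 1.0 if random.random() < true_probs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]        # incremental mean update

print(values)   # should approach true_probs for well-explored arms
```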
( Image credit: [Microsoft Research](http://research.microsoft.com/en-us/projects/bandits/) )",miscellaneous
5eb78a14-9321-4bb8-bd28-d4fbef933621,molecular-property-prediction,Molecular Property Prediction,Molecular property prediction is the task of predicting the properties of a molecule from its structure.,miscellaneous
a3bdf5fd-e40d-4903-a9a3-c46669c859d2,deep-clustering,Deep Clustering,,miscellaneous
2902a070-4d9e-4b63-812a-b6fe6861b666,zero-shot-cross-modal-retrieval,Zero-Shot Cross-Modal Retrieval,"**Zero-Shot Cross-Modal Retrieval** is the task of finding relevant items across different modalities without having received any training examples. For example, given an image, find a text or vice versa. The main challenge in the task is known as the *heterogeneity gap*: since items from different modalities have different data types, the similarity between them cannot be measured directly. Therefore, the majority of methods published to date attempt to bridge this gap by learning a latent representation space, where the similarity between items from different modalities can be measured.
Source: [Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study](https://arxiv.org/abs/2301.05174)",miscellaneous
9ecb4573-9e56-4ebf-ac84-bfef2ca3659a,network-congestion-control,Network Congestion Control,,miscellaneous
ded26662-1f02-4224-8e76-88287b15a3ce,world-religions,World Religions,,miscellaneous
ea477fe6-12b9-4375-b642-bd9b8c2161c6,contextual-anomaly-detection,Contextual Anomaly Detection,"The objective of Unsupervised Anomaly Detection is to detect previously unseen rare objects or events. Contextual Anomaly Detection is formulated such that the data contains two types of attributes, behavioral and contextual attributes. Behavioral attributes are attributes that relate directly to the process of interest whereas contextual attributes relate to exogenous but highly affecting factors in relation to the process. Generally the behavioral attributes are conditional on the contextual attributes.
Source: [Unsupervised Contextual Anomaly Detection using Joint Deep Variational Generative Models](https://arxiv.org/pdf/1904.00548.pdf)",miscellaneous
79f52cff-94ca-47ad-9a8b-a44f92057b29,multi-target-regression,Multi-target regression,,miscellaneous
3d07cee8-bb10-4a53-b8b1-312c9145a7a5,known-unknowns,Known Unknowns,"Language models have a tendency to generate text containing false statements that are often referred to as ""Hallucinations."" The primary purpose of this task is to test for this failure case by probing whether a model can correctly identify that the answer to a question is unknown. A common failure mode would be to prefer a prediction of false on unknown truth over a prediction that the answer is unknown.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/known_unknowns)",miscellaneous
99567b1d-bc5d-47c2-9ff5-0d16b90d1699,misinformation,Misinformation,,miscellaneous
58410c54-a326-456a-90ea-7b0beaa918e6,gravitational-wave-detection,Gravitational Wave Detection,,miscellaneous
c2f27a25-c5cd-4bf5-8c87-5ed31ed007d3,sequential-recommendation,Sequential Recommendation,,miscellaneous
a5519a5a-b9b0-47ee-b14b-ab0588cac816,sequential-quantile-estimation,Sequential Quantile Estimation,,miscellaneous
794a8040-6540-4053-8ea9-b8685beaa4d6,table-extraction,Table Extraction,Table extraction involves detecting and recognizing a table's logical structure and content from its unstructured presentation within a document,miscellaneous
602afdb8-c607-4935-9b4e-a09b382e56c8,counterfactual-inference,Counterfactual Inference,,miscellaneous
d49cb0b2-f138-4a87-9e14-f23ec9bd1222,philosophy,Philosophy,,miscellaneous
417f53e4-4554-4bd9-8b9f-2b0f3125fafe,automatic-cell-counting,Automatic Cell Counting,,miscellaneous
33c3d51c-6d6f-47c4-bcfd-44e94d0d2cc0,dqn-replay-dataset,DQN Replay Dataset,,miscellaneous
08707ea4-7f4a-4c01-8227-02d8fa1e96a8,miscellaneous,Miscellaneous,,miscellaneous
3ef298bc-1d5f-4c82-b1a3-0e75c05568d4,logic-grid-puzzle,Logic Grid Puzzle,,miscellaneous
b8d88b8f-ff88-4c98-96f5-fe73724f8f9e,management,Management,,miscellaneous
0419acc6-290a-410a-a51d-dccb103ba890,automated-theorem-proving,Automated Theorem Proving,"The goal of **Automated Theorem Proving** is to automatically generate a proof, given a conjecture (the target theorem) and a knowledge base of known facts, all expressed in a formal language. Automated Theorem Proving is useful in a wide range of applications, including the verification and synthesis of software and hardware systems.
Source: [Learning to Prove Theorems by Learning to Generate Theorems ](https://arxiv.org/abs/2002.07019)",miscellaneous
221f9eac-35ea-4755-82c1-253006742455,remote-sensing,Remote Sensing,,miscellaneous
dfa0dd77-5c61-4bd4-b1a2-661fe8ff99bc,semeval-2022-task-4-1-binary-pcl-detection,SemEval-2022 Task 4-1 (Binary PCL Detection),,miscellaneous
1f63cf1e-6fb5-4e83-b75f-0ab32e512c1b,crop-yield-prediction,Crop Yield Prediction,,miscellaneous
306790c5-a90b-4a45-a49f-f82e22545127,oceanic-eddy-classification,Oceanic Eddy Classification,,miscellaneous
cb9e3c24-7f7f-4824-81f0-ad9bc03f3c07,vulnerability-detection,Vulnerability Detection,,miscellaneous
b3f428b8-ad1d-4247-856c-c6fd5fa37de1,physics-mc,Physics MC,,miscellaneous
a0d3b631-feb1-4aa4-abc6-da857fa21689,benchmarking,Benchmarking,,miscellaneous
011bca96-2ff3-49ce-8150-952d88289834,hindu-knowledge,Hindu Knowledge,,miscellaneous
336f7d3a-2127-403f-ab93-d18be0663a36,computer-security,Computer Security,,miscellaneous
439bb215-5267-4b90-a74e-c6064d66c1de,remote-sensing-image-classification,Remote Sensing Image Classification,,miscellaneous
f61bd810-500a-4974-922a-e7c66140f67f,self-organized-clustering,Self-Organized Clustering,Clustering with Self-Organized Maps,miscellaneous
60145078-3cef-4a00-9e72-674de3da3328,conceptual-physics,Conceptual Physics,,miscellaneous
db7c54b4-857c-45ca-95a1-ba61556f0343,product-categorization,Product Categorization,,miscellaneous
7c6a33de-ba11-4a83-a9b1-c088a90fec47,ancient-tex-restoration,Ancient Text Restoration,"Image credit: [Restoring and attributing ancient texts using deep neural networks](https://paperswithcode.com/paper/restoring-and-attributing-ancient-texts-using)",miscellaneous
88832571-88ca-46fb-975d-e768769d3531,home-activity-monitoring,Home Activity Monitoring,,miscellaneous
260a1ba5-5d9f-4683-8292-b0d2dbfb3db3,advertising,Advertising,,miscellaneous
930784d2-f5bd-4284-9695-9460ff53ae2d,open-set-learning,Open Set Learning,"Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. Open set learning (OSL) is a more challenging and realistic setting, where there exist test samples from classes that are unseen during training. Open set recognition (OSR) is the sub-task of detecting test samples that do not come from the training classes.",miscellaneous
3b1974e3-ca9e-4019-995c-bd762906654e,public-relations,Public Relations,,miscellaneous
df68b23c-5cd3-456c-98d6-c9a713848cf2,causal-inference,Causal Inference,"Causal inference is the task of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect.
( Image credit: [Recovery of non-linear cause-effect relationships from linearly mixed neuroimaging data](https://arxiv.org/pdf/1605.00391v2.pdf) )",miscellaneous
87ee8d3e-cc00-4083-9372-ead5478ea588,data-summarization,Data Summarization,"**Data Summarization** is a central problem in the area of machine learning, where we want to compute a small summary of the data.
Source: [How to Solve Fair k-Center in Massive Data Models ](https://arxiv.org/abs/2002.07682)",miscellaneous
839fe5f2-9e9b-4f04-95c3-bc599d5b5fe4,human-aging,Human Aging,,miscellaneous
38f996fe-954c-4389-adb1-4825c6b075cb,jsoniq-query-execution,JSONiq Query Execution,"Execute JSONiq query, typically on semi-structured JSON data",miscellaneous
3f329417-7343-4632-b2d9-7f44d3bf6cd6,detect-ground-reflections,Detect Ground Reflections,This task helps in detecting the significant ground reflections at mm-wave bands. The harvested ground reflections can help in overcoming transient blockages at mm-wave bands,miscellaneous
f2cbda3d-d6c4-44e9-bc0a-d8378ac3b024,high-school-physics,High School Physics,,miscellaneous
63cdf498-2b26-447b-8d30-9a6a9a937925,gpr,GPR,Gaussian Process Regression,miscellaneous
b4843443-70e7-4e6b-999c-eff768d9a321,seismic-inversion,Seismic Inversion,,miscellaneous
e4ec12ee-5464-48e8-ba29-4bdc5e244711,high-school-european-history,High School European History,,miscellaneous
d6cb320f-0e95-4605-91ac-ed5a05f2fa22,fault-detection,Fault Detection,,miscellaneous
c7466e8b-f7db-4b52-acd5-8d7c8c84947d,sociology,Sociology,,miscellaneous
248b0cb7-cd3c-4e7c-80e1-73eafac3bdbf,physical-simulations,Physical Simulations,,miscellaneous
0ad0359c-ca84-495d-be0f-6e1758484c6b,the-semantic-segmentation-of-remote-sensing,The Semantic Segmentation Of Remote Sensing Imagery,,miscellaneous
3a9d3a49-3b9f-4d79-be14-8b77f75e7ec3,mobile-security,Mobile Security,,miscellaneous
d33a2409-affb-4ff4-bf02-af6fcf542bcf,college-physics,College Physics,,miscellaneous
c6f1f84b-be8c-40f9-b99a-198a5f6af69f,cyber-attack-detection,Cyber Attack Detection,,miscellaneous
c4471d0c-9d53-4fe7-95d6-a9b0aca8dc66,outdoor-positioning,Outdoor Positioning,Outdoor Positioning (e.g. GPS),miscellaneous
98e63f83-b8e5-4fe0-afef-7518c444d9ef,professional-law,Professional Law,,miscellaneous
fc419dcb-5cb4-4309-8bf0-999c4b130aa5,sports-understanding,Sports Understanding,,miscellaneous
b89ba8a6-0db0-495f-acb6-c6c7306ac02b,global-facts,Global Facts,,miscellaneous
1f825ff6-d5d5-43f4-bc3c-21a1d3c3ea78,3d-bin-packing,3D Bin Packing,"As a classic NP-hard problem, the bin packing problem (1D-BPP) seeks an assignment of a collection of items with various weights to bins. The optimal assignment houses all the items with the fewest bins such that the total weight of items in a bin is below the bin’s capacity. In its 3D version (3D-BPP), an item has a 3D “weight” corresponding to its length, width and height.",miscellaneous
65dce967-7f94-4c5c-850d-a6376707568d,high-school-us-history,High School US History,,miscellaneous
8dd41f9e-93e8-45d0-b7f2-a10c555983e7,music-emotion-recognition,Music Emotion Recognition,,music
e046e1ea-179a-49e4-92ff-d1bbd720eee3,music-auto-tagging,Music Auto-Tagging,,music
f57ecb1c-d431-460f-973f-1185258c30e2,semeval-2022-task-4-1-binary-pcl-detection,SemEval-2022 Task 4-1 (Binary PCL Detection),,music
70863834-7904-4a91-a51f-fc0138e0972f,singer-identification,Singer Identification,,music
8988e7f4-bfb4-450f-aa79-0762f58ea658,music-texture-transfer,Music Texture Transfer,"Texture is the collective temporal homogeneity of acoustic events.",music
669326b6-0475-4fd2-9674-0c47947e6aab,music-genre-recognition,Music Genre Recognition,"Recognizing the genre (e.g. rock, pop, jazz, etc.) of a piece of music.",music
a34efb94-3783-4035-b839-8352f4b2124d,detection-of-instrumentals-musical-tracks,Detection Of Instrumentals Musical Tracks,,music
471f652f-1505-4de5-801d-e88b8ea2b404,music-source-separation,Music Source Separation,"Music source separation is the task of decomposing music into its constitutive components, e.g., yielding separated stems for the vocals, bass, and drums.
( Image credit: [SigSep](https://github.com/sigsep) )",music
2b89ab26-962c-49bd-b7dd-c5721b1b9a3b,vocal-technique-classification,Vocal technique classification,,music
2290ae88-a4b6-41b1-8010-cda0d2064557,music-modeling,Music Modeling,"( Image credit: [R-Transformer](https://arxiv.org/pdf/1907.05572v1.pdf) )",music
1e88b7d6-817b-4e34-9ea7-8c8e86f220bd,audio-super-resolution,Audio Super-Resolution,"Audio super-resolution, or speech bandwidth extension (upsampling ratio = 2)",music
dace5057-1a6c-468d-805d-cbf3b2416448,cover-song-identification,Cover song identification,"**Cover Song Identification** is the task of identifying an alternative version of a previous musical piece, even though it may differ substantially in timbre, tempo, structure, and even fundamental aspects relating to the harmony and melody of the song. The term “cover” is so wide that it ranges from acoustic renditions of a previous song, to Jimi Hendrix’ famous (and radical) reinterpretation of Bob Dylan’s “All Along the Watchtower”, to Rage Against the Machine essentially rewriting Bob Dylan’s “Maggie’s Farm”. Beyond its value for computational musicology and for enhancing music recommendation, Cover Song Identification is of interest because of its potential for benchmarking other music similarity and retrieval algorithms. Chord analysis, melody extraction and music similarity are all strongly connected to Cover Song Identification - another field of music analysis where AI has been applied.
Source: [Artificial Musical Intelligence: A Survey ](https://arxiv.org/abs/2006.10553)",music
517b7f67-562e-4704-8789-e76f53f7d150,music-generation,Music Generation,"**Music Generation** is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.",music
b76f3136-e31e-4a69-8340-41e15fbb117d,recognizing-seven-different-dastgahs-of,Recognizing Seven Different Dastgahs Of Iranian Classical Music,,music
41f5096d-e6bf-4c1d-8427-0b97c58b2382,music-classification,Music Classification,,music
0564b78e-d4dc-4424-a29d-3354b5d3b7d0,piano-music-modeling,Piano Music Modeling,,music
88ba5d4e-165a-44cb-956a-37c0fd8843fc,instrument-playing-technique-detection,Instrument Playing Technique Detection,,music
dea77d55-784e-43b2-80e4-cd721f8b4bd2,text-to-music-generation,Text-to-Music Generation,,music
9f2916f4-2345-430d-9aff-3da5d795e807,drum-transcription,Drum Transcription,,music
0372b699-0ae9-495a-b6c2-1c0d0f290812,music-transcription,Music Transcription,"Music transcription is the task of converting an acoustic musical signal into some form of music notation.
( Image credit: [ISMIR 2015 Tutorial - Automatic Music Transcription](http://c4dm.eecs.qmul.ac.uk/ismir15-amt-tutorial/AMT_tutorial_ISMIR_2015.pdf) )",music
68d52be2-9680-45f4-a15a-e0a60f296fe0,melody-extraction,Melody Extraction,,music
89ebd93e-f667-4b6f-bc83-07000b194368,music-information-retrieval,Music Information Retrieval,,music
9870e801-8677-4d08-85b5-5b80401e5ef8,fact-verification,Fact Verification,"Fact verification, also called ""fact checking"", is a process of verifying facts in natural text against a database of facts.",natural-language-processing
7722e5a5-9d8e-4cc7-8b6c-f450811acb74,conversational-question-answering,Conversational Question Answering,,natural-language-processing
b85fc00a-7be9-4c1b-87bf-edf94d15bb89,predicate-detection,Predicate Detection,Detecting predicates in sentences. Semantic frames are defined with respect to predicates. This task is a prerequisite to semantic role labeling.,natural-language-processing
3651fd5f-1bd1-4e1b-9c57-9d4ac0f0a491,hope-speech-detection-for-tamil,Hope Speech Detection for Tamil,Detecting Hope Speech in the Tamil language,natural-language-processing
77f8e47d-db18-47f8-a745-41b65baf48f7,fake-news-detection,Fake News Detection,"**Fake News Detection** is a natural language processing task that involves identifying and classifying news articles or other types of text as real or fake. The goal of fake news detection is to develop algorithms that can automatically identify and flag fake news articles, which can be used to combat misinformation and promote the dissemination of accurate information.",natural-language-processing
10f4bf7e-bfba-4097-8db0-f6b7c6c84d80,causal-emotion-entailment,Causal Emotion Entailment,"Causal Emotion Entailment is a simpler version of the span extraction task. In this task, given a target utterance (U) with emotion E, the goal is to predict which particular utterances in the conversation history H(U) are responsible for the emotion E in the target utterance.",natural-language-processing
c2e03d99-a95c-4ae1-b065-452e4f62d1c6,discourse-segmentation,Discourse Segmentation,,natural-language-processing
3b4c6793-ea15-48e9-809f-752c09dd0c42,memorization,Memorization,,natural-language-processing
607b694f-9723-45ac-a8ae-f135102cca01,zero-shot-cross-lingual-transfer,Zero-Shot Cross-Lingual Transfer,,natural-language-processing
4a4cb131-4a43-4956-b698-f5ea14980264,syntax-representation,Syntax Representation,,natural-language-processing
dd4e46aa-ad80-4d92-b71f-dc0d0f4c62f0,aspect-term-extraction-and-sentiment,Aspect Term Extraction and Sentiment Classification,Extracting the aspect terms as well as the corresponding sentiment polarities simultaneously.,natural-language-processing
633259ad-2b7b-4c65-8ff2-5b5ce917c8a8,text-attribute-transfer,Text Attribute Transfer,"The goal of the **Text Attribute Transfer** task is to change an input text such that the value of a particular linguistic attribute of interest (e.g. language = English, sentiment = Positive) is transferred to a different desired value (e.g. language = French, sentiment = Negative). This task needs approaches that can disentangle the content from other linguistic attributes of the text.
Source: [Improved Neural Text Attribute Transfer with Non-parallel Data ](https://arxiv.org/abs/1711.09395)",natural-language-processing
d89de9cb-18a8-4a82-bc3c-f9edb8b52ff1,decipherment,Decipherment,,natural-language-processing
b49da132-b0a3-48b9-83bb-1ae92f5b7eb7,hierarchical-text-classification-of-blurbs,Hierarchical Text Classification of Blurbs (GermEval 2019),"Shared Task on Hierarchical Classification of Blurbs (GermEval 2019 / KONVENS)
https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/germeval-2019-hmc.html",natural-language-processing
d87b8b53-76b6-43ab-8fcf-c3c77ec8e999,lexical-simplification,Lexical Simplification,"The goal of **Lexical Simplification** is to replace complex words (typically words that are used less often in language and are therefore less familiar to readers) with their simpler synonyms, without infringing on the grammaticality or changing the meaning of the text.
Source: [Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization ](https://arxiv.org/abs/1809.04163)",natural-language-processing
457c5fa0-5ebb-4e65-a4b3-ef6e9929e780,dark-humor-detection,Dark Humor Detection,,natural-language-processing
e198eef0-462a-4f23-a398-f85075068421,text-style-transfoer,Text Style Transfer,"Text Style Transfer is the task of controlling certain attributes of generated text. The state-of-the-art methods can be categorized into two main types which are used on parallel and non-parallel data. Methods on parallel data are typically supervised methods that use a neural sequence-to-sequence model with the encoder-decoder architecture. Methods on non-parallel data are usually unsupervised approaches using Disentanglement, Prototype Editing and Pseudo-Parallel Corpus Construction.
The popular benchmark for this task is the Yelp Review Dataset. Models are typically evaluated with the metrics of Sentiment Accuracy, BLEU, and PPL.",natural-language-processing
f3cc9ccc-a70d-47d0-8c19-a406df9c9dd4,text-variation,Text-Variation,Generate variations of the input text,natural-language-processing
6f8cfbf3-a672-4aad-ac8a-dd5e831104a6,dialogue-evaluation,Dialogue Evaluation,,natural-language-processing
f0f50df9-bfe7-4ec7-80a3-4ab6aede884a,implicatures,Implicatures,,natural-language-processing
b647963b-42df-4d24-835f-b4aa72043a1e,true-or-false-question,True or False Question Answering,"The objective of the true or false question-answer task is to answer research questions with affirmative, negative, or uncertain responses (e.g., Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?).",natural-language-processing
d67ce512-3bf7-4040-a03d-2b8409be5baa,fantasy-reasoning,Fantasy Reasoning,,natural-language-processing
d1314620-6e70-466a-8418-de0363d918ac,discourse-parsing,Discourse Parsing,,natural-language-processing
eb34b403-81d2-4abb-8c7d-f29a7c1d4a7e,sentence-embedding,Sentence Embedding,,natural-language-processing
d27d1f3b-76a2-4b9e-814c-15fb097e8a32,text-classification,Text Classification,"**Text Classification** is the task of assigning a sentence or document an appropriate category. The categories depend on the chosen dataset and can range from broad topics to fine-grained labels.
Text Classification problems include emotion classification, news classification, citation intent classification, among others. Benchmark datasets for evaluating text classification capabilities include GLUE, AGNews, among others.
In recent years, deep learning techniques like XLNet and RoBERTa have attained some of the biggest performance jumps for text classification problems.
( Image credit: [Text Classification Algorithms: A Survey](https://arxiv.org/pdf/1904.08067v4.pdf) )",natural-language-processing
edc0bb58-e08f-474a-af3c-7c61f0bf44d2,taxonomy-learning,Taxonomy Learning,"Taxonomy learning is the task of hierarchically classifying concepts in an automatic manner from text corpora. The process of building taxonomies is usually divided into two main steps: (1) extracting hypernyms for concepts, which may constitute a field of research in itself (see Hypernym Discovery below) and (2) refining the structure into a taxonomy.
Description from [NLP Progress](http://nlpprogress.com/english/taxonomy_learning.html)",natural-language-processing
4c781bb6-186b-45ca-92e6-c91bce8b2833,automated-essay-scoring,Automated Essay Scoring,"**Automated Essay Scoring** is the task of assigning a score to an essay, usually in the context of assessing the language ability of a language learner. The quality of an essay is affected by the following four primary dimensions: topic relevance, organization and coherence, word usage and sentence complexity, and grammar and mechanics.
Source: [A Joint Model for Multimodal Document Quality Assessment ](https://arxiv.org/abs/1901.01010)",natural-language-processing
1283f348-8740-4e44-b7fe-c8f136bda055,irish-text-diacritization,Irish Text Diacritization,Addition of diacritics for undiacritized Irish Wikipedia texts.,natural-language-processing
1fddbd42-1f4d-4a7d-8966-021d2d640d40,hope-speech-detection-for-malayalam,Hope Speech Detection for Malayalam,Detecting Hope Speech in the Malayalam language,natural-language-processing
54b0dfc1-bbf7-4f27-a20f-96b47e7f2258,term-extraction,Term Extraction,"Term Extraction, or Automated Term Extraction (ATE), is about extracting domain-specific terms from natural language text.
For example, the sentence “We meta-analyzed mortality using random-effect models” contains the domain-specific single-word terms ""meta-analyzed"", ""mortality"" and the multi-word term ""random-effect models"".",natural-language-processing
01a781d0-fc98-434f-b65d-c0a59ef1e6e2,hate-speech-normalization,Hate Speech Normalization,,natural-language-processing
a12a7c27-a7fe-4213-b568-da4d404025d9,session-search,Session Search,,natural-language-processing
07e94a72-23d6-4407-8633-5de3f8a91d82,argument-pair-extraction-ape,Argument Pair Extraction (APE),Argument pair extraction (APE) aims to extract interactive argument pairs from two passages of a discussion.,natural-language-processing
13ce4e20-d8b0-4bde-b57c-1154a374cbb6,sentence-embeddings,Sentence Embeddings,,natural-language-processing
c8799a22-2a3b-497f-b56b-8ffb58ff84bc,figure-of-speech-detection,Figure Of Speech Detection,,natural-language-processing
ada2b302-c452-475f-9662-95c75ad2cb3e,text-generation,Text Generation,"**Text Generation** is the task of generating text with the goal of appearing indistinguishable from human-written text. This task is more formally known as ""natural language generation"" in the literature.
Text generation can be addressed with Markov processes or deep generative models like LSTMs. Recently, some of the most advanced methods for text generation include [BART](/method/bart), [GPT](/method/gpt) and other [GAN-based approaches](/method/gan). Text generation systems are evaluated either through human ratings or automatic evaluation metrics like METEOR, ROUGE, and BLEU.
Further readings:
- [The survey: Text generation models in deep learning](https://www.sciencedirect.com/science/article/pii/S1319157820303360)
- [Modern Methods for Text Generation](https://arxiv.org/abs/2009.04968)
( Image credit: [Adversarial Ranking for Language Generation](https://arxiv.org/abs/1705.11001) )",natural-language-processing
57366879-217e-4231-974f-38e5b719f52e,dialect-identification,Dialect Identification,Dialectal Arabic Identification,natural-language-processing
fb714495-2731-447a-84e0-1d0189c386bf,cross-lingual-natural-language-inference,Cross-Lingual Natural Language Inference,"Using data and models available for one language for which ample such resources are available (e.g., English) to solve a natural language inference task in another, commonly more low-resource, language.",natural-language-processing
de33eed6-9660-4de5-866a-8b428de0e225,lay-summarization,Lay Summarization,"Summarizing a technical or scientific document in simple, non-technical language that is comprehensible to a lay person (non-expert).",natural-language-processing
6b606777-cd6f-4673-8408-f8cb3dea070b,author-attribution,Author Attribution,Authorship attribution is the task of determining the author of a text.,natural-language-processing
7f6ba3a8-0b19-444d-bb57-69b655930280,mathematical-question-answering,Mathematical Question Answering,Building systems that automatically answer mathematical questions.,natural-language-processing
701b0676-e61b-4f73-bf72-4ceba02abcd9,unsupervised-extractive-summarization,Unsupervised Extractive Summarization,,natural-language-processing
e6b11e63-4447-45d2-a10a-45c67a69dbc8,row-annotation,Row Annotation,"**Row Annotation** is the task of linking a row to a real-world entity and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). It differs from [Cell Entity Annotation](https://paperswithcode.com/task/cell-entity-annotation) because CEA considers the linking of *cells* to entities, while Row Annotation assumes that only one entity is described in a row, usually located in the main/entity column.
A row is annotated using entities from different knowledge bases such as DBpedia or WikiData.",natural-language-processing
170e51af-e3d2-4790-9a86-feb863fef5cb,variable-disambiguation,Variable Disambiguation,Identifying which variable is mentioned in a text.,natural-language-processing
5b31958f-22b5-438a-880c-e51834a449be,page-stream-segmentation,Page Stream Segmentation,Page Stream Segmentation (PSS) is the task of automatically separating a stream of scanned images into coherent multi-page documents,natural-language-processing
496dfd40-fda8-4b18-ac93-2f0bee97bf09,probing-language-models,Probing Language Models,,natural-language-processing
c10d1f21-23b3-4da4-8fcd-76cdd443e184,toxic-spans-detection,Toxic Spans Detection,Given a sentence identify the toxic spans present in it.,natural-language-processing
e0ae3572-c973-47c0-a3b0-f23e91e090d8,dialogue-safety-prediction,Dialogue Safety Prediction,Determine the safety of a given dialogue context.,natural-language-processing
a21e1c13-0bc6-4902-815a-8db25a3f6a3f,morpheme-segmentaiton,Morpheme Segmentation,Successful systems segment a given word or sentence into a sequence of morphemes.,natural-language-processing
47350967-b47b-467d-bda2-86c67fe1724e,text-based-stock-prediction,Text-Based Stock Prediction,"Make stock predictions based on text (e.g., news articles, tweets, etc.).",natural-language-processing
0da2601e-d4f9-44d3-ac51-e7e24dce84cb,text-clustering,Text Clustering,Grouping a set of texts in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). (Source: Adapted from Wikipedia),natural-language-processing
ca1c8bf5-f19a-4cfa-94d7-6701e8d4bd1d,constituency-grammar-induction,Constituency Grammar Induction,Inducing a constituency-based phrase structure grammar.,natural-language-processing
3cc7aab3-6b1a-427f-aa17-5cacce60526c,user-simulation,User Simulation,Simulation of user interacting with a dialog system,natural-language-processing
40d9ba98-3106-41c1-866a-3d72ff973c40,knowledge-base-question-answering,Knowledge Base Question Answering,"Knowledge Base Q&A is the task of answering questions from a knowledge base.
( Image credit: [Modeling Semantics with Gated Graph Neural Networks for Knowledge Base Question Answering](https://www.aclweb.org/anthology/C18-1280.pdf) )",natural-language-processing
fe660f52-7db7-409d-8312-f3145d546923,grammatical-error-detection,Grammatical Error Detection,"Grammatical Error Detection (GED) is the task of detecting different kinds of errors in text such as spelling, punctuation, grammatical, and word choice errors. GED is one of the key components in the grammatical error correction (GEC) pipeline.",natural-language-processing
316a173c-6911-433c-8f59-4aa1baf39450,zero-shot-relation-triplet-extraction,Zero-shot Relation Triplet Extraction,"Given an input sentence, the task is to extract triplets consisting of the head entity, relation label, and tail entity where the relation label is not seen at the training stage.",natural-language-processing
33771f7a-5d1a-43da-9a86-7a8cd6eb5f62,semeval-2022-task-4-2-multi-label-pcl,SemEval-2022 Task 4-2 (Multi-label PCL Detection),,natural-language-processing
e75ca5c0-a97c-4698-8900-5eb234f11dac,race-h,RACE-h,,natural-language-processing
40ef8415-703f-46b9-8b3c-157d2bc0767c,kb-to-language-generation,KB-to-Language Generation,"Given information from a knowledge base, generate a description of this information in natural language.",natural-language-processing
94bdcd97-6aa4-42e7-a1c0-64e9e8cb8da9,croatian-text-diacritization,Croatian Text Diacritization,Addition of diacritics for undiacritized Croatian Wikipedia texts.,natural-language-processing
54d17e11-bd2c-494a-b8d5-baec3c82a0e6,grammatical-error-correction,Grammatical Error Correction,"Grammatical Error Correction (GEC) is the task of correcting different kinds of errors in text such as spelling, punctuation, grammatical, and word choice errors.
GEC is typically formulated as a sentence correction task. A GEC system takes a potentially erroneous sentence as input and is expected to transform it to its corrected version. See the example given below:
| Input (Erroneous) | Output (Corrected) |
| ------------------------- | ---------------------- |
|She see Tom is catched by policeman in park at last night. | She saw Tom caught by a policeman in the park last night.|",natural-language-processing
ca4c1b83-3aed-4011-8602-caeec77f7f6e,clinical-information-retreival,Clinical Information Retrieval,,natural-language-processing
4c9518f1-e05f-43cd-b8a8-b62c143b6512,twitter-sentiment-analysis,Twitter Sentiment Analysis,Twitter sentiment analysis is the task of performing sentiment analysis on tweets from Twitter.,natural-language-processing
248450bc-4ea4-4656-b8f0-ec2fde14bf91,text-to-video-generation,Text-to-Video Generation,This task refers to video generation based on a given sentence or sequence of words.,natural-language-processing
e3138900-63be-4016-ad2b-e620dbed1003,role-filler-entity-extraction,Role-filler Entity Extraction,Role-filler entity extraction task on the MUC-4 dataset.,natural-language-processing
d6fb2362-2617-41dd-a9b1-7a1731619f32,bilingual-lexicon-induction,Bilingual Lexicon Induction,Translate words from one language to another.,natural-language-processing
406056bc-71d2-4fca-9588-7aea54a2ac3f,dialog-relation-extraction,Dialog Relation Extraction,Dialog Relation Extraction is the task of predicting the relation type between entities mentioned in dialogue. It uses multiple tokens to capture possible relations between pairs of entities in the dialogue. The popular benchmark for this task is the DialogRE dataset. The models are typically evaluated with the metric of F1 Score for both standard-setting and conversational settings.,natural-language-processing
82908f59-9c06-40df-8ac9-d017e7a27944,cross-lingual,Cross-Lingual,"Cross-lingual natural language processing is the task of using data and models available for one language for which ample such resources are available (e.g., English) to solve tasks in another, commonly more low-resource, language.",natural-language-processing
2a84d3c0-91c8-46af-beb8-ed7fb63d1833,natural-language-landmark-navigation,Natural Language Landmark Navigation Instructions Generation,Generate natural language navigation instructions that revolve around visual landmarks instead of turn-by-turn directives.,natural-language-processing
932b5826-3717-4b4d-a8be-95cc64db800a,deep-attention,Deep Attention,,natural-language-processing
f20397d5-1d31-452b-ab8f-c4f0e5686b8e,phrase-tagging,Phrase Tagging,A fine-grained task that aims to find all occurrences of phrases in sentences.,natural-language-processing
d2ffe41b-bc62-4d91-9254-be2d8215141a,cross-language-text-summarization,Cross-Language Text Summarization,"Using data and models available for one language for which ample such resources are available (e.g., English) to solve summarization tasks in another, commonly more low-resource, language.",natural-language-processing
78f4a529-626f-4418-9871-7a554e5306e7,cognate-prediction,Cognate Prediction,,natural-language-processing
31e69273-a5ba-4f26-829c-378318a459a7,visual-storytelling,Visual Storytelling,"( Image credit: [No Metrics Are Perfect](https://github.com/eric-xw/AREL) )",natural-language-processing
52b4c395-b975-40b4-a704-f45eb8baba65,multimodal-abstractive-text-summarization,Multimodal Abstractive Text Summarization,Abstractive text summarization by utilizing information from multiple modalities.,natural-language-processing
5d58ea69-98a9-42c2-899c-95ecc1d2c552,attribute-value-extraction,Attribute Value Extraction,,natural-language-processing
90995bda-9a1a-412f-9a5b-e76d12fa9206,french-text-diacritization,French Text Diacritization,Addition of diacritics for undiacritized French Wikipedia texts.,natural-language-processing
ed6810d8-2677-44a0-86cf-d67f001beae0,question-answering,Question Answering,"**Question Answering** is the task of answering questions (typically reading comprehension questions), but abstaining when presented with a question that cannot be answered based on the provided context.
Question answering can be segmented into domain-specific tasks like community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include [SQuAD](/dataset/squad), [HotPotQA](/dataset/hotpotqa), [bAbI](/dataset/babi-1), [TriviaQA](/dataset/triviaqa), [WikiQA](/dataset/wikiqa), and many others. Models for question answering are typically evaluated on metrics like EM and F1. Some recent top performing models are T5 and XLNet.
( Image credit: [SQuAD](https://rajpurkar.github.io/mlx/qa-and-squad/) )",natural-language-processing
e161ad38-3f0d-4b80-b3f2-1cf52e00e79e,tweet-reply-sentiment-analysis,Tweet-Reply Sentiment Analysis,"To predict the predominant sentiment among (potential) first-order replies to a given tweet, in a Message-level Polarity Classification paradigm.",natural-language-processing
c3298933-9f4e-401d-9709-ef9cfde4b283,negation-and-speculation-scope-resolution,Negation and Speculation Scope resolution,,natural-language-processing
3b964ccd-cb6a-4eae-9703-4854772c63cb,record-linking,Record linking,"The task of finding records in a data set that refer to the same entity across different data sources.
Record linking is also called *entity resolution* or *entity matching*. Further material about this task is collected at [entity resolution](https://paperswithcode.com/task/entity-resolution).",natural-language-processing
ef234c66-a601-4e5c-bf51-65b92d27c23d,multimodal-lexical-translation,Multimodal Lexical Translation,"Translate a given word in a source language to a word in the target language, given the source sentence and one or more images illustrating the word.",natural-language-processing
a835e49c-69a5-4820-ac63-873be06dc089,ad-hoc-information-retrieval,Ad-Hoc Information Retrieval,Ad-hoc information retrieval refers to the task of returning information resources related to a user query formulated in natural language.,natural-language-processing
7aa8a377-d792-47d8-8658-0c232da8c63b,natural-language-inference,Natural Language Inference,"**Natural language inference (NLI)** is the task of determining whether a ""hypothesis"" is
true (entailment), false (contradiction), or undetermined (neutral) given a ""premise"".
Example:
| Premise | Label | Hypothesis |
| --- | ---| --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
Approaches used for NLI range from earlier symbolic and statistical approaches to more recent deep learning approaches. Benchmark datasets used for NLI include [SNLI](/dataset/snli), [MultiNLI](/dataset/multinli), [SciTail](/dataset/scitail), among others. You can get hands-on practice on the SNLI task by following this [d2l.ai chapter](https://d2l.ai/chapter_natural-language-processing-applications/natural-language-inference-and-dataset.html).
Further readings:
- [Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches](https://arxiv.org/abs/1904.01172)",natural-language-processing
29c93f93-fd13-4edf-8d4e-970742b657d0,hate-speech-detection,Hate Speech Detection,"Hate speech detection is the task of detecting if communication such as text, audio, and so on contains hatred and/or encourages violence towards a person or a group of people. This is usually based on prejudice against 'protected characteristics' such as their ethnicity, gender, sexual orientation, religion, or age. Some example benchmarks are ETHOS and HateXplain. Models can be evaluated with metrics like the F-score or F-measure.",natural-language-processing
d081c2d4-46c5-49e7-b863-ff77c57ff0f3,conditional-text-generation,Conditional Text Generation,The task of generating text according to some pre-specified conditioning (e.g. topic or sentiment or constraint),natural-language-processing
9877a6c0-bb31-4075-ad73-bc1a79cdeb09,query-wellformedness,Query Wellformedness,"Assessing whether a query is grammatically correct, contains no spelling mistakes, and asks an explicit question.
Image Source: [Identifying Well-formed Natural Language Questions](https://arxiv.org/pdf/1808.09419.pdf)",natural-language-processing
5aaba86c-e0fa-43c9-841f-b7754859eae3,sentiment-analysis,Sentiment Analysis,"**Sentiment Analysis** is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized into either ""positive"", ""negative"", or ""neutral"". Given the text and accompanying labels, a model can be trained to predict the correct sentiment.
**Sentiment Analysis** techniques can be categorized into machine learning approaches, lexicon-based approaches, and even hybrid methods. Some subcategories of research in sentiment analysis include: multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, language specific sentiment analysis.
More recently, deep learning techniques, such as RoBERTa and T5, are used to train high-performing sentiment classifiers that are evaluated using metrics like F1, recall, and precision. To evaluate sentiment analysis systems, benchmark datasets like SST, GLUE, and IMDB movie reviews are used.
Further readings:
- [Sentiment Analysis Based on Deep Learning: A Comparative Study](https://paperswithcode.com/paper/sentiment-analysis-based-on-deep-learning-a)",natural-language-processing
a42b61e7-3059-4526-b0ab-bd7b59a32f6e,hyper-relational-extraction,Hyper-Relational Extraction,"HyperRED is a dataset for the new task of hyper-relational extraction, which extracts relation triplets together with qualifier information such as time, quantity or location. For example, the relation triplet (Leonard Parker, Educated At, Harvard University) can be factually enriched by including the qualifier (End Time, 1967).",natural-language-processing
0c96c2d9-746c-45f7-80c5-b1374c178c10,extracting-covid-19-events-from-twitter,Extracting COVID-19 Events from Twitter,,natural-language-processing
c5d2a19e-ff25-4c0a-8263-278aedd98378,word-sense-disambiguation,Word Sense Disambiguation,"The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de-facto sense inventory for English in WSD is [WordNet](https://wordnet.princeton.edu).
For example, given the word “mouse” and the following sentence:
“A mouse consists of an object held in one's hand, with one or more buttons.”
we would assign “mouse” with its electronic device sense ([the 4th sense in the WordNet sense inventory](http://wordnetweb.princeton.edu/perl/webwn?c=8&sub=Change&o2=&o0=1&o8=1&o1=1&o7=&o5=&o9=&o6=&o3=&o4=&i=-1&h=000000&s=mouse)).",natural-language-processing
248678e8-9bdb-47cf-927b-6e5c19b49c10,text-augmentation,Text Augmentation,"You can read these blog posts to get an overview of the approaches.
- [**A Visual Survey of Data Augmentation in NLP**](https://amitness.com/2020/05/data-augmentation-for-nlp/)",natural-language-processing
1f523e91-223d-49e2-b31c-68961ae85018,temporal-relation-extraction,Temporal Relation Extraction,"Temporal relation extraction systems aim to identify and classify the temporal relation between a pair of entities provided in a text. For instance, in the sentence ""Bob sent a message to Alice while she was leaving her birthday party."" one can infer that the actions ""sent"" and ""leaving"" entail a temporal relation that can be described as ""simultaneous"".",natural-language-processing
30b6c384-8975-48fd-9d2b-2a7b05fba6d3,toponym-resolution,Toponym Resolution,The goal is to find a mapping from a toponym (a location mention) in the text to a spatial footprint.,natural-language-processing
0d3738ef-b04c-4a2b-b7da-6a3136ba9e56,word-alignment,Word Alignment,"**Word Alignment** is the task of finding the correspondence between source and target words in a pair of sentences that are translations of each other.
Source: [Neural Network-based Word Alignment through Score Aggregation ](https://arxiv.org/abs/1606.09560)",natural-language-processing
796f39d4-e93f-4a44-9a77-2aef42c3fd31,dependency-grammar-induction,Dependency Grammar Induction,"Also known as ""unsupervised dependency parsing""",natural-language-processing
9b7a1d7e-a584-47e3-b9af-5d302e4c5266,relational-reasoning,Relational Reasoning,"The goal of **Relational Reasoning** is to figure out the relationships among different entities, such as image pixels, words or sentences, human skeletons or interactive moving agents.
Source: [Social-WaGDAT: Interaction-aware Trajectory Prediction via Wasserstein Graph Double-Attention Network ](https://arxiv.org/abs/2002.06241)",natural-language-processing
b41b95f4-6850-4e80-812f-48b5a5b46225,event-extraction,Event Extraction,"Determine the extent of the events in a text.
Other names: Event Tagging; Event Identification",natural-language-processing
21bab8b1-c206-407a-acc3-ea8cc8419868,document-ai,Document AI,,natural-language-processing
b059ea4b-44fb-4453-b716-9bc7a22ea45d,commonsense-causal-reasoning,Commonsense Causal Reasoning,"""Commonsense Causal Reasoning is the process of capturing and understanding the causal dependencies amongst events and actions."" Luo, Zhiyi, et al. ""Commonsense causal reasoning between short texts."" Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning. 2016.",natural-language-processing
b5808a98-3849-41cd-8eba-fb4604ae6098,abusive-language,Abusive Language,,natural-language-processing
fb6e1efd-bed0-4cfc-a71f-1599919ee87f,cross-lingual-entity-linking,Cross-Lingual Entity Linking,"Cross-lingual entity linking is the task of using data and models available for one language for which ample such resources are available (e.g., English) to solve entity linking tasks (i.e., assigning a unique identity to entities in a text) in another, commonly low-resource, language.
Image Source: [Towards Zero-resource Cross-lingual Entity Linking](https://www.aclweb.org/anthology/D19-6127.pdf)",natural-language-processing
60039224-bad8-4a2b-91dd-d58a50d1a5ba,gender-bias-detection,Gender Bias Detection,,natural-language-processing
d1e33c44-b44a-4fe0-b415-60b80df412b9,vietnamese-aspect-based-sentiment-analysis,Vietnamese Aspect-Based Sentiment Analysis,"UIT-ViSFD: A Vietnamese Smartphone Feedback Dataset for Aspect-Based Sentiment Analysis
In this paper, we present a process of building a social listening system based on aspect-based sentiment analysis in Vietnamese from creating a dataset to building a real application. Firstly, we create UIT-ViSFD, a Vietnamese Smartphone Feedback Dataset as a new benchmark corpus built based on strict annotation schemes for evaluating aspect-based sentiment analysis, consisting of 11,122 human-annotated comments for mobile e-commerce, which is freely available for research purposes. We also present a proposed approach based on the Bi-LSTM architecture with the fastText word embeddings for the Vietnamese aspect-based sentiment task. Our experiments show that our approach achieves the best performances with the F1-score of 84.48% for the aspect task and 63.06% for the sentiment task, outperforming several conventional machine learning and deep learning systems. Last but not least, we build SA2SL, a social listening system based on the best performance model on our dataset, which will inspire more social listening systems in the future. Dataset download: https://www.facebook.com/ViDataset
Paper: Phan, Luong Luc, Phuc Huynh Pham, Kim Thi-Thanh Nguyen, Tham Thi Nguyen, Sieu Khai Huynh, Luan Thanh Nguyen, Tin Van Huynh, and Kiet Van Nguyen. ""SA2SL: From Aspect-Based Sentiment Analysis to Social Listening System for Business Intelligence."" arXiv preprint arXiv:2105.15079 (2021).",natural-language-processing
82929d2b-2f92-403c-8688-c2086ab3a3ac,dialogue-generation,Dialogue Generation,"Dialogue generation is the task of ""understanding"" natural language inputs in order to produce natural language outputs. The systems are usually intended for conversing with humans, for instance in back-and-forth dialogue with a conversational agent like a chatbot. Some example benchmarks for this task (see others such as Natural Language Understanding) include FusedChat and the Ubuntu Dialogue Corpus (UDC). Models can be evaluated via metrics such as BLEU, ROUGE, and METEOR, albeit with the challenge that these correlate weakly with human judgement; this may be addressed by newer metrics like UnSupervised and Reference-free (USR) and Metric for automatic Unreferenced dialog evaluation (MaUde).",natural-language-processing
f2e1799f-a0c2-45df-b9d5-07a61798d892,constituency-parsing,Constituency Parsing,"Constituency parsing aims to extract a constituency-based parse tree from a sentence that
represents its syntactic structure according to a [phrase structure grammar](https://en.wikipedia.org/wiki/Phrase_structure_grammar).
Example:
Sentence (S)
|
+-------------+------------+
| |
Noun (N) Verb Phrase (VP)
| |
John +-------+--------+
| |
Verb (V) Noun (N)
| |
sees Bill
[Recent approaches](https://papers.nips.cc/paper/5635-grammar-as-a-foreign-language.pdf)
convert the parse tree into a sequence following a depth-first traversal in order to
be able to apply sequence-to-sequence models to it. The linearized version of the
above parse tree looks as follows: (S (N) (VP V N)).",natural-language-processing
88ad5307-2a70-49bf-bf9c-5956e0f8f2a8,sign-language-production,Sign Language Production,"Sign Language Production (SLP) is the automatic translation of spoken language sentences into sign language sequences. Whilst Sign Language Translation translates from sign to text, SLP is the opposite task, from text to sign.",natural-language-processing
16945ffc-ffd6-41af-8127-8bc280af971d,english-proverbs,English Proverbs,,natural-language-processing
5448ab65-d9ed-4dbd-892a-093c97b42b22,machine-translation,Machine Translation,"**Machine translation** is the task of translating a sentence in a source language to a different target language.
Approaches for machine translation can range from rule-based to statistical to neural-based. More recently, encoder-decoder attention-based architectures like BERT have attained major improvements in machine translation.
One of the most popular datasets used to benchmark machine translation systems is the WMT family of datasets. Some of the most commonly used evaluation metrics for machine translation systems include BLEU, METEOR, NIST, and others.
( Image credit: [Google seq2seq](https://github.com/google/seq2seq) )",natural-language-processing
2da139d5-c2e8-4f0e-a5f4-f572ee55201d,sentence-pair-classification,Sentence-Pair Classification,,natural-language-processing
cc4ac815-a03b-4e54-988c-7bea32e2000b,goal-oriented-dialog,Goal-Oriented Dialog,Achieving a pre-defined goal through a dialog.,natural-language-processing
fd645301-03d0-470b-9bac-0b73f69eff40,aspect-sentiment-opinion-triplet-extraction,Aspect-Sentiment-Opinion Triplet Extraction,"Aspect-Sentiment-Opinion Triplet Extraction (ASOTE) extracts aspect term, sentiment and opinion term triplets from sentences. In the triplet extracted by ASOTE the sentiment is the sentiment of the aspect term and opinion term pair.",natural-language-processing
9689e589-3c20-481a-b376-b8fae89d7725,web-page-tagging,Web Page Tagging,Assigning appropriate tags to a web page.,natural-language-processing
270efc57-5f7a-49ea-b71f-fd4da72f130f,embeddings-evaluation,Embeddings Evaluation,,natural-language-processing
90614700-afd0-400d-90d1-079a4854f3a8,lemmatization,Lemmatization,"**Lemmatization** is a process of determining a base or dictionary form (lemma) for a given surface form. Especially for languages with rich morphology it is important to be able to normalize words into their base forms to better support for example search engines and linguistic studies. Main difficulties in Lemmatization arise from encountering previously unseen words during inference time as well as disambiguating ambiguous surface forms which can be inflected variants of several different base forms depending on the context.
Source: [Universal Lemmatizer: A Sequence to Sequence Model for Lemmatizing Universal Dependencies Treebanks ](https://arxiv.org/abs/1902.00972)",natural-language-processing
7f56c58e-7920-41c3-9e77-976cc0d0c0c3,topic-coverage,Topic coverage,"A prevalent use case of topic models is that of topic discovery.
However, most of the topic model evaluation methods rely on abstract metrics such as perplexity or topic coherence. The topic coverage approach is to measure the models' performance by matching model-generated topics to a fixed set of reference topics - topics discovered by humans and represented in a machine-readable format. This way, the models are evaluated in the context of their use, by essentially simulating topic modeling in a fixed setting defined by a text collection and a set of reference topics.
Reference topics represent a ground truth that can be used to evaluate both topic models and other measures of model performance. This coverage approach enables large-scale automatic evaluation of existing and future topic models.",natural-language-processing
3fb57fa9-834a-4ef8-bcc8-121b89e65668,prepositional-phrase-attachment,Prepositional Phrase Attachment,,natural-language-processing
c6d20572-2389-4e8c-a3ab-e7c80027b3de,morphological-analysis,Morphological Analysis,"**Morphological Analysis** is a central task in language processing that can take a word as input and detect the various morphological entities in the word and provide a morphological representation of it.
Source: [Towards Finite-State Morphology of Kurdish ](https://arxiv.org/abs/2005.10652)",natural-language-processing
6f33d0a5-2057-4666-918e-16931c015381,phrase-ranking,Phrase Ranking,This task aims to evaluate the “global” rank list of phrases that a method finds from the input corpus.,natural-language-processing
8273846e-dde8-4d24-b914-1decff220223,zero-shot-out-of-domain-detection,Zero-Shot Out-of-Domain Detection,Tasks for simultaneously learning few-shot In-Domain text classification and zero-shot Out-of-Domain detection on sentiment classification and intent classification.,natural-language-processing
45d27005-a029-420b-9c3b-40776416e233,romanian-text-diacritization,Romanian Text Diacritization,Addition of diacritics for undiacritized Romanian Wikipedia texts.,natural-language-processing
573ce5b3-e03e-43bb-9f32-1273995b1e45,table-to-text-generation,Table-to-Text Generation,"**Table-to-Text Generation** is the task of generating a description from a structured table.
Source: [Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation ](https://arxiv.org/abs/1908.03067)",natural-language-processing
79ae65f3-4311-49e1-8d1c-89a3000b9b77,headline-generation,Headline Generation,,natural-language-processing
8483b954-36e7-418a-9948-b844d9367967,multiple-choice-qa,Multiple Choice Question Answering (MCQA),"A multiple-choice question (MCQ) is composed of two parts: a stem that identifies the question or problem, and a set of alternatives or possible answers that contain a key that is the best answer to the question, and a number of distractors that are plausible but incorrect answers to the question.
In a k-way MCQA task, a model is provided with a question q, a set of candidate options O = {O1, . . . , Ok}, and a supporting context for each option C = {C1, . . . , Ck}. The model needs to predict the correct answer option that is best supported by the given contexts.",natural-language-processing
25983ee6-16d8-466b-9e32-9172ed872d87,memex-question-answering,Memex Question Answering,"Question answering with real-world multi-modal personal collections, e.g., photo albums with visual, text, time and location information.",natural-language-processing
31de1370-d980-4250-bd12-fbc62c8dac0d,cross-domain-text-classification,Cross-Domain Text Classification,Learning an accurate model for the new unlabeled target domain given labeled data from multiple source domains where all domains have (possibly) different label sets. (Source: https://www.aclweb.org/anthology/P16-1155.pdf),natural-language-processing
ebae6db6-864a-46ae-9aa0-57ad4edee94e,word-translation,Word Translation,,natural-language-processing
0498463d-a962-4f8d-87f5-ef35c4bdaac5,clinical-language-translation,Clinical Language Translation,Translating clinical texts to layperson-understandable language.,natural-language-processing
049c79fb-3d82-4f78-bbe9-8c5ae28dc23d,vietnamese-word-segmentation,Vietnamese Word Segmentation,,natural-language-processing
0ad6c9f3-04ab-4b90-89cb-8564ed217d22,spelling-correction,Spelling Correction,Spelling correction is the task of detecting and correcting spelling mistakes.,natural-language-processing
72fa5a54-5962-43fc-b4fb-09d862be4b70,poem-meters-classification,Poem meters classification,,natural-language-processing
145f769b-2212-4503-9f5b-47e5fdf81f13,meme-classification,Meme Classification,Meme classification refers to the task of classifying internet memes.,natural-language-processing
ffa78461-840a-4650-b9e7-74cad10a780e,turkish-text-diacritization,Turkish Text Diacritization,Addition of diacritics for undiacritized Turkish Wikipedia texts.,natural-language-processing
8d95818f-2f86-458d-96f1-b37ac10f2784,text-matching,Text Matching,Matching a target text to a source text based on their meaning.,natural-language-processing
d36bf59e-a8ff-43eb-b54d-6d7eb94548f1,entity-linking,Entity Linking,"Assigning a unique identity to entities (such as famous individuals, locations, or companies) mentioned in text (Source: Wikipedia).",natural-language-processing
5279aaf1-16c0-4fbf-82f1-342ea56fcf37,document-classification,Document Classification,"**Document Classification** is a procedure of assigning one or more labels to a document from a predetermined set of labels.
Source: [Long-length Legal Document Classification ](https://arxiv.org/abs/1912.06905)",natural-language-processing
d98081eb-05d6-4732-884e-80d96d42fd40,email-thread-summarization,Email Thread Summarization,Image credit: [EmailSum: Abstractive Email Thread Summarization](https://paperswithcode.com/paper/emailsum-abstractive-email-thread),natural-language-processing
ed3b0c20-7487-437e-b7f3-67550bf7bb8a,speech-to-text-translation,Speech-to-Text Translation,"Translate audio signals of speech in one language into text in a foreign language, either in an end-to-end or cascade manner.",natural-language-processing
8e59acb7-e7f4-4a8a-a7d0-f9110b404807,instruction-following,Instruction Following,,natural-language-processing
4338b1ea-087f-4c0c-b6e4-247f8b67315f,automatic-post-editing,Automatic Post-Editing,Automatic post-editing (APE) is used to correct errors in the translation made by the machine translation systems.,natural-language-processing
896e9e8c-7a85-4403-93ba-f9cbd1ef954f,political-salient-issue-orientation-detection,Political Salient Issue Orientation Detection,,natural-language-processing
05d522dd-29e8-4b73-b00c-26d8a93cc3b8,4-ary-relation-extraction,4-ary Relation Extraction,,natural-language-processing
70559fcc-332d-4463-ad98-8dee152a0aff,link-prediction-on-dh-kgs,Link prediction on DH-KGs,,natural-language-processing
a4634b16-43b9-4542-b1e3-ba37628f4c3e,context-query-reformulation,Context Query Reformulation,,natural-language-processing
d0a34f17-eb36-472c-8e34-d5b36dea0d59,punctuation-restoration,Punctuation Restoration,Punctuation Restoration,natural-language-processing
ea80c600-1180-4d68-a5d9-64ffd1e94dc3,logical-reasoning-question-ansering,Logical Reasoning Question Answering,"Introduced by ReClor (ICLR 2020), this task evaluates the logical reasoning ability of question answering models.",natural-language-processing
6c60d301-9081-4a2b-b882-54ffeeb20655,cross-lingual-document-classification,Cross-Lingual Document Classification,"Cross-lingual document classification refers to the task of using data and models available for one language for which ample such resources are available (e.g., English) to solve classification tasks in another, commonly low-resource, language.",natural-language-processing
aaa12d03-4d09-4bf1-86c3-eb2e018267a2,language-modelling,Language Modelling,"**Language Modeling** is the task of predicting the next word or character in a document. This technique can be used to train language models that can further be applied to a wide range of natural language tasks like text generation, text classification, and question answering.
The common types of language modeling techniques involve:
- N-gram Language Models
- Neural Language Models
A model's language modeling capability is measured using cross-entropy and perplexity. Some datasets to evaluate language modeling are WikiText-103, One Billion Word, Text8, C4, among others.
One of the most recent popular benchmarks to evaluate language modeling capabilities is called SuperGLUE.
Some popular and notable state-of-the-art language models, include:
- [GPT-3](/method/gpt-3)
- Megatron-LM
- [BERT](/method/bert)
Check below for all state-of-the-art models.
Here are some additional readings to go deeper on the task:
- [Language Modeling](https://lena-voita.github.io/nlp_course/language_modeling.html) - Lena Voita
( Image credit: [Exploring the Limits of Language Modeling](https://arxiv.org/pdf/1602.02410v2.pdf) )",natural-language-processing
16bf2777-2a8f-434e-b0dd-fcb801cd5438,prosody-prediction,Prosody Prediction,"Predicting prosodic prominence from text. This is a 2-way classification task, assigning each word in a sentence a label 1 (prominent) or 0 (non-prominent).
( Image credit: [Helsinki Prosody Corpus](https://github.com/Helsinki-NLP/prosody) )",natural-language-processing
0770e8ba-8dcd-4681-a897-024351569643,formality-style-transfer,Formality Style Transfer,Formality Style Transfer,natural-language-processing
41b5195a-6236-4b27-bfcf-757900e5aa2b,response-generation,Response Generation,A task where an agent should play the $DE$ role and generate a text to respond to a $P$ message.,natural-language-processing
f7b9af04-6088-4b61-a62c-73c473195851,unsupervised-semantic-parsing,Unsupervised semantic parsing,,natural-language-processing
00cc8750-0ef9-47e2-a138-93e14bfcf4bf,definition-extraction,Definition Extraction,,natural-language-processing
751b7902-7876-428c-9f56-07698e639732,intent-recognition,Intent Recognition,,natural-language-processing
c3bb1d2b-cbb0-47e4-81ca-9c23be7cc2e2,incongruity-detection,Incongruity Detection,Incongruity detection is the task of identifying statements in a text that are inconsistent with each other.,natural-language-processing
4367962d-def2-448d-afa5-77370bc6ce18,blackout-poetry-generation,Blackout Poetry Generation,"Blackout poetry is a form of poetry in which words in a passage are masked, except for a few which, when combined together in order, convey some meaning.",natural-language-processing
c2a42076-a4d6-4224-aece-a3ae11c344db,information-extraction,Information Extraction,Information extraction is the task of automatically extracting structured information from unstructured and / or semi-structured machine-readable documents and other electronically represented sources (Source: Wikipedia).,natural-language-processing
2564d072-7c3b-4955-8d1e-6b5ac58d0f25,sarcasm-detection,Sarcasm Detection,"The goal of **Sarcasm Detection** is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a type of phenomenon with specific perlocutionary effects on the hearer, such as to break their pattern of expectation. Consequently, correct understanding of sarcasm often requires a deep understanding of multiple sources of information, including the utterance, the conversational context, and, frequently some real world facts.
Source: [Attentional Multi-Reading Sarcasm Detection ](https://arxiv.org/abs/1809.03051)",natural-language-processing
29088710-5780-486b-ad98-06dc86689b73,phrase-relatedness,Phrase Relatedness,,natural-language-processing
d0b1d63f-b4d3-474e-b7e8-017b4d2d5334,sentence-classification,Sentence Classification,,natural-language-processing
01411903-23d1-4125-87d5-0602fbc3896e,open-information-extraction,Open Information Extraction,"In natural language processing, open information extraction is the task of generating a structured, machine-readable representation of the information in text, usually in the form of triples or n-ary propositions (Source: Wikipedia).",natural-language-processing
55e2d31c-da19-4abe-9683-e3e5bbd0f88c,cloze-test,Cloze Test,The cloze task refers to infilling individual words.,natural-language-processing
069d3b83-30e9-4ce4-8ada-39ebb59d7a48,bridging-anaphora-resolution,Bridging Anaphora Resolution,,natural-language-processing
72615f40-b449-4c67-91cf-9e288113d0ab,math-information-retrieval,Math Information Retrieval,Information Retrieval on Math Contents,natural-language-processing
031fe34b-286e-4eb5-bce1-88f9c64a0d01,claim-extraction-with-stance-classification,Claim Extraction with Stance Classification (CESC),"Since claims stand at a clear position towards a given topic, the sentences with clear stances should have a higher possibility to be the claims. Hence, identifying the stances of the claims is supposed to benefit the claim extraction task. By combining the claim extraction and stance classification subtasks, we define this integrated task as: given a specific topic and relevant articles, extract the claims from the articles and also identify the stance of the claims towards the topic.",natural-language-processing
30b6aa2d-619d-4d94-92c1-5c35d7d40f38,offline-handwritten-chinese-character,Offline Handwritten Chinese Character Recognition,Handwritten Chinese character recognition is the task of detecting and interpreting the components of Chinese characters (i.e. radicals and two-dimensional structures).,natural-language-processing
59516557-1321-427f-a9d0-d11acd55027e,sentence-compression,Sentence Compression,**Sentence Compression** is the task of reducing the length of text by removing non-essential content while preserving important facts and grammaticality.,natural-language-processing
ba76ceb3-bda4-47bb-939c-fefaceb8d09f,medical-question-pair-similarity-computation,Medical question pair similarity computation,Predicting whether two questions on medical topics have the same meaning.,natural-language-processing
e03842d7-0397-438d-9ce5-1cfb49cdfded,valnov,ValNov,"Given a textual premise and conclusion candidate, the Argument-Validity-and-Novelty-Prediction-Shared-Task ValNov consists in predicting two aspects of a conclusion: its validity and novelty.
Validity is defined as the degree to which the conclusion is justified with respect to the given premise. A conclusion is considered to be valid if it is supported by inferences that link the premise to the conclusion, based on logical principles or commonsense or world knowledge, which may be defeasible. A conclusion will be trivially considered valid if it repeats or summarizes the premise – in which case it can hardly be considered as novel.
Novelty defines the degree to which the conclusion contains content that is new in relation to the premise. As extreme cases, a conclusion candidate that repeats or summarizes the premise or is unrelated to the premise will not be considered novel.",natural-language-processing
6ddfe0c1-78f9-4b35-80d7-aa9518c97b8b,legal-document-translation,Legal Document Translation,Legal document translation is the task of translating legal documents between languages.,natural-language-processing
c4e9a4c3-25db-4398-8063-b1f94f5acf53,timedial,Timedial,,natural-language-processing
83a80193-7e52-4052-864c-a74940b2a2c0,cross-lingual-word-embeddings,Cross-Lingual Word Embeddings,,natural-language-processing
dc59975e-2e94-4a32-93d4-00dfabf39f45,dialogue,Dialogue,Dialogue is notoriously hard to evaluate. Past approaches have used human evaluation.,natural-language-processing
177d7f28-89c2-4c47-af9f-7ee33894b984,document-level-relation-extraction,Document-level Relation Extraction,Document-level RE aims to identify the relations of various entity pairs expressed across multiple sentences.,natural-language-processing
c556d2d4-543e-4ae0-b13c-abaab7e7d97c,word-sense-induction,Word Sense Induction,"Word sense induction (WSI) is widely known as the “unsupervised version” of WSD. The problem is stated as follows: given a target word (e.g., “cold”) and a collection of sentences (e.g., “I caught a cold”, “The weather is cold”) that use the word, cluster the sentences according to their different senses/meanings. We do not need to know the sense/meaning of each cluster, but sentences inside a cluster should have used the target word with the same sense.
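One common recipe clusters representations of the sentences that contain the target word; below is a minimal sketch with TF-IDF features and k-means, where the feature choice and the number of clusters are illustrative assumptions rather than part of the task definition.
```
# Hypothetical sketch: cluster sentences containing the target word by sense.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    'I caught a cold and stayed home',
    'The weather is cold in December',
    'She has a cold and a cough',
    'The lake water felt cold',
]

features = TfidfVectorizer().fit_transform(sentences)   # sentence vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Sentences sharing a label are taken to use 'cold' in the same sense.
for sentence, label in zip(sentences, labels):
    print(label, sentence)
```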
Description from [NLP Progress](http://nlpprogress.com/english/word_sense_disambiguation.html)",natural-language-processing
94b7d453-3246-4b85-9e12-c39bcd25517d,word-similarity,Word Similarity,Calculate a numerical score for the semantic similarity between two words.,natural-language-processing
acc7e524-541c-469f-8cd3-01fe09115246,long-form-question-answering,Long Form Question Answering,Long-form question answering is a task requiring elaborate and in-depth answers to open-ended questions.,natural-language-processing
8aaae122-4aef-4dfb-8020-e6a076ce444c,dialogue-state-tracking,Dialogue State Tracking,"Dialogue state tracking consists of determining at each turn of a dialogue the
full representation of what the user wants at that point in the dialogue,
which contains a goal constraint, a set of requested slots, and the user's dialogue act.",natural-language-processing
ae280cb6-1223-40ad-9c34-00e7b3204b84,action-parsing,Action Parsing,"Action parsing is the task of, given a video or still image, assigning each frame or image a label describing the action in that frame or image.",natural-language-processing
988b79dc-0669-47ee-ad6d-600b91c739a0,hope-speech-detection,Hope Speech Detection,"Detecting speech associated with positivity, upliftment, promise, potential, support, reassurance, suggestions, or inspiration.",natural-language-processing
2a15b9a6-50d7-4493-aa03-3cf5fea5f33a,dependency-parsing,Dependency Parsing,"Dependency parsing is the task of extracting a dependency parse of a sentence that represents its grammatical
structure and defines the relationships between ""head"" words and the words which modify those heads.
Example:
```
Sentence: I prefer the morning flight through Denver

root(ROOT, prefer)
nsubj(prefer, I)
dobj(prefer, flight)
det(flight, the)
nmod(flight, morning)
case(Denver, through)
nmod(flight, Denver)
```
Relations among the words are given as directed, labeled arcs from heads to dependents, written here as label(head, dependent); for example, dobj(prefer, flight) marks ""flight"" as the direct object of ""prefer"".",natural-language-processing
da95e432-2adb-434d-b88c-3aabb54f6dff,unsupervised-machine-translation,Unsupervised Machine Translation,"Unsupervised machine translation is the task of doing machine translation without any translation resources at training time.
( Image credit: [Phrase-Based & Neural Unsupervised Machine Translation](https://arxiv.org/pdf/1804.07755v2.pdf) )",natural-language-processing
c43d3a80-cee1-442c-b86b-96dd70f109bd,multi-label-text-classification,Multi-Label Text Classification,"According to Wikipedia ""In machine learning, multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes the instance can be assigned to.""",natural-language-processing
c5d82d26-b5bb-4962-a468-dcaa90048334,binary-relation-extraction,Binary Relation Extraction,,natural-language-processing
071b5a3d-4120-4bcd-9c32-fef45bfb8f2e,table-annotation,Table annotation,"**Table annotation** is the task of annotating a table with terms/concepts from knowledge graph or database schema. Table annotation is typically broken down into the following five subtasks:
1. Cell Entity Annotation ([CEA](https://paperswithcode.com/task/cell-entity-annotation))
2. Column Type Annotation ([CTA](https://paperswithcode.com/task/column-type-annotation))
3. Column Property Annotation ([CPA](https://paperswithcode.com/task/columns-property-annotation))
4. [Table Type Detection](https://paperswithcode.com/task/table-type-detection)
5. [Row Annotation](https://paperswithcode.com/task/row-annotation)
The [SemTab](http://www.cs.ox.ac.uk/isg/challenges/sem-tab/) challenge is closely related to the Table Annotation problem. It is a yearly challenge which focuses on the first three tasks of table annotation and its purpose is to benchmark different table annotation systems.",natural-language-processing
f527612f-1d9a-4949-b3c5-87bc682fa863,temporal-relation-classification,Temporal Relation Classification,"Temporal Relation Classification is the task of classifying the temporal relation between a pair of temporal entities (traditionally events and temporal expressions). Initial approaches aimed to classify the temporal relation into the thirteen relation types introduced by James Allen in his seminal work ""Maintaining Knowledge about Temporal Intervals"". However, due to ambiguity in the annotation, recent corpora have limited the relation types to a subset of those relations.
Notice that although Temporal Relation Classification can be thought of as a subtask of Temporal Relation Extraction, the two tasks can be merged if one adds a label that indicates the absence of a temporal relation between the entities (e.g. ""no_relation"" or ""vague"") to Temporal Relation Classification.",natural-language-processing
7228cdca-c538-422a-8b04-f3e29cc6a1b0,definition-modelling,Definition Modelling,,natural-language-processing
bb9e95e7-ab53-459e-a879-aa1dd759ad40,toxic-comment-classification,Toxic Comment Classification,,natural-language-processing
6240229b-1a58-46ba-a856-cc5045e4e54b,semantic-entity-labeling,Semantic entity labeling,"- One of the form understanding tasks (word grouping, semantic entity labeling, entity linking)
- Classifying entities into one of four pre-defined categories: question, answer, header, and other.
Cited from
G. Jaume, H. K. Ekenel, J. Thiran, ""FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents,"" 2019",natural-language-processing
746768ec-892a-47eb-9042-6a3da060f4a8,active-learning,Active Learning,"**Active Learning** is a paradigm in supervised machine learning which uses fewer training examples to achieve better optimization by iteratively training a predictor and using the predictor in each iteration to choose the training examples that will increase its chances of finding better configurations while at the same time improving the accuracy of the prediction model.
Source: [Polystore++: Accelerated Polystore System for Heterogeneous Workloads ](https://arxiv.org/abs/1905.10336)",natural-language-processing
7b4caaa6-2564-4c37-85ec-9b75f8b30f0a,amr-parsing,AMR Parsing,"Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on. [See](https://amr.isi.edu/index.html).",natural-language-processing
4d25ef36-35e4-44bb-800f-73e982c6eec6,job-classification,Job classification,,natural-language-processing
a96e0e6e-df97-4289-b304-0fbd9a9d61a8,community-question-answering,Community Question Answering,"Community question answering is the task of answering questions on a Q&A forum or board, such as Stack Overflow or Quora.",natural-language-processing
772e8070-fb38-48c0-9be1-5e7142a6b933,twitter-event-detection,Twitter Event Detection,"Detection of worldwide events from categories like Sports, Politics, Entertainment, Science & Technology, etc. by analyzing Twitter Tweets.",natural-language-processing
1d5b8311-215f-41b1-a038-7769157c66d1,context-specific-spam-detection,Context-specific Spam Detection,,natural-language-processing
c2d5608c-2c61-4519-8ed0-43d04f6c5ed3,method-name-prediction,Method name prediction,,natural-language-processing
9cb7d479-88af-42ba-ac07-0f9c987aef3c,turning-point-identification,Turning Point Identification,"Identification of key events in a narrative (such as a movie or TV episode). The task is supported by screenwriting theory, according to which there are 5 different types of key events in a movie. These key events (e.g., change of plans, major setback, climax) are crucial narrative moments: they define the plot structure and determine its progression and thematic units (e.g., setup, complications, aftermath).",natural-language-processing
62fc3c44-a9b4-4481-8299-551d9c9cf08c,aspect-extraction,Aspect Extraction,"Aspect extraction is the task of identifying and extracting terms relevant for opinion mining and sentiment analysis, for example terms for product attributes or features.",natural-language-processing
7af5b96d-51ce-4c97-be1b-988256abdf6a,hope-speech-detection-for-english,Hope Speech Detection for English,Detecting Hope Speech in the English Language,natural-language-processing
de2c9716-26bb-42a2-882f-fb4a200b8b3b,gsm8k,GSM8K,,natural-language-processing
7e0f0822-4c50-484c-9a3d-3b8560f3bb87,drug-drug-interaction-extraction,Drug–drug Interaction Extraction,"Automatic extraction of Drug-drug interaction (DDI) information from the biomedical literature.
( Image credit: [Using Drug Descriptions and Molecular Structures for Drug-Drug Interaction Extraction from Literature](https://watermark.silverchair.com/btaa907.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAuAwggLcBgkqhkiG9w0BBwagggLNMIICyQIBADCCAsIGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMb9BaAetiYVbvf0_9AgEQgIICk0-IUccCHnqrDDtbnyBSTvPnrWXc4F2vfEMKkxzGRA3-WbynJbw0HptyHyjawXr3c4TeC9ZhIta1szzhc5t1JOhdh8rOo5CTtGk_JPfq14TkMkIISDCvdsVL76fvCn-3KhtAJhHPcyYTDqMaGSb3ltXJLfR6PXoCXnINprcZ3pO6ktuNLA8KF0_CHTITnGNcX1G1n6ZERyGTPwGjodH9Qq9UcYzJCx4N1KKgxOMAj5oxIoLPQi97oNJ3eCoYBoKDuSW-Zza_ULcBDXkkTvt3A460O32vfRAqnnPYlVSuvEiASc3lX8O6Qb28GXc99EIzwDQJEO4znl7haAGOxubuUm5Of9p22xRfc7KSuCh41cJxW_31bsnTuf8Sb2z6O6bFo3aNlxF3SrLfNTHfJH6Tst0WvaaRJ1gQ3JxcFwu--hsZMF9bW3_yzFwP1ZnVIPVtE0bqm3QZt-_nHaE4o9KgCCNY4t70h7U7yD9ZrZMvFnNieSvgL35t--l5PerE9uZgP6v9LUKUyAI1TDHHMameO5794Z7WII8v_MKG0jmUIbr564ENnyKJYunNowu3EbvUMxBv3DSUkepU1kP66tJjSflal3LlRd8LUbj4m2Tin3LteS-09Gje4pyMaeO5RywX_tSPSFGK8QGaZqpgbIU2y23YnCMAnIDOSS3_L3LQRGu50YK5OwEvMY7azpEMbR3kaaHWu_cmEN2Vm61UdG6uLql5pMc9zKfGAQ3E1VXGuhdEalRtuLbtBZ9UFj-vYePfrRGWRXjkg-11SGgKqRaJcLK32yuvhG11KqacCoY169A4G29_GfRC3rbDsnqyMRZ9ESe3FB1NnWwZ4HPNA7ju3yJ7cfZSfYgdygT6oTpBQjaweH5U) )",natural-language-processing
f0abf77d-e38c-4183-9b2e-6de1b39696a6,personality-recognition-in-conversation,Personality Recognition in Conversation,"Given a speaker's conversation with others, it is required to recognize the speaker's personality traits through the conversation record, which includes two scenarios: (1) $1-1$ conversations: the robot recognizes the personality traits of the speaker through the conversation between them (e.g., psychological counseling); (2) $1-N$ conversations: the robot listens to the speaker's conversations with other $N$ people and then recognizes the speaker's personality traits (e.g., group chatbot, home service robot). Since $1-N$ includes the case of $1-1$, we only discuss PRC in $1-N$ conversations. The task of PRC in $1-N$ conversations can be formulated as:
$Per_i = \arg\max_{Per'_i} P(Per'_i \mid C_{i,1}, \cdots, C_{i,N})$
where $Per_i=[Neu, Ext, Ope, Agr, Con]$ is a 5-dimensional vector representing Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. $C_{i,j}$ is the conversations between $Speaker_i$ and $Speaker_j$ ($1 \leq j \leq N$).",natural-language-processing
39d892da-58da-4fee-8179-ccaf69cd0824,emotional-intelligence,Emotional Intelligence,,natural-language-processing
a336b665-84b2-4ac9-8830-04208d4d048d,word-attribute-transfer,Word Attribute Transfer,"Changing a word's attribute, such as its gender.",natural-language-processing
e7b9b9d6-fc31-4cf1-b173-dc22f2208524,hungarian-text-diacritization,Hungarian Text Diacritization,Addition of diacritics for undiacritized Hungarian Wikipedia texts.,natural-language-processing
c19b107a-11ef-43c0-99d5-37b745d9c693,news-classification,News Classification,,natural-language-processing
5c18fcf8-aa05-499c-8fed-87f8567c8d62,few-shot-text-classification,Few-Shot Text Classification,,natural-language-processing
66ee7417-5271-4353-9b41-f8304061e490,table-retrieval,Table Retrieval,"When given a query, the goal of this task is to retrieve a relevant table from a (potentially large) collection of tables. The query could be a single sentence (such as a question), or it could also be a conversation. As for the retrieval, the tables could be represented in raw form (i.e. the values of each cell), by metadata (such as the title or description), or by summary statistics.",natural-language-processing
754167ce-ddd3-4d64-a6d5-633c5feb98fd,multimodal-text-prediction,Multimodal Text Prediction,"**Multimodal text prediction** is a type of natural language processing that involves predicting the next word or sequence of words in a sentence, given multiple modalities or types of input. In traditional text prediction, the prediction is based solely on the context of the sentence, such as the words that precede the target word. In multimodal text prediction, additional modalities, such as images, audio, or user behavior, are also used to inform the prediction.
For example, in a multimodal text prediction system for captioning images, the system may use both the content of the image and the words that have been typed so far to generate the next word in the caption. The image may provide additional context or information about the content of the caption, while the typed words may provide information about the style or tone of the caption.
Multimodal text prediction can be achieved using a variety of techniques, including deep learning models and statistical models. These models can be trained on large datasets of text and multimodal inputs to learn the relationships between the different types of data and improve the accuracy of the predictions.
Multimodal text prediction has many applications, including chatbots, virtual assistants, and predictive text input for mobile devices. By incorporating additional modalities into the prediction process, multimodal text prediction systems can provide more accurate and useful predictions, improving the overall user experience.",natural-language-processing
7bbf7581-d0fe-4075-bccb-88d6335eba77,semantic-role-labeling,Semantic Role Labeling,"Semantic role labeling aims to model the predicate-argument structure of a sentence
and is often described as answering ""Who did what to whom"". BIO notation is typically
used for semantic role labeling.
Example:
| Housing | starts | are | expected | to | quicken | a | bit | from | August’s | pace |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| B-ARG1 | I-ARG1 | O | O | O | V | B-ARG2 | I-ARG2 | B-ARG3 | I-ARG3 | I-ARG3 |",natural-language-processing
6b4f0f23-0d1e-40cb-a72d-a04b8b23343e,prompt-engineering,Prompt Engineering,"**Prompt engineering** is the process of designing and refining the prompts used to generate text from language models, such as GPT-3 or similar models. The goal of prompt engineering is to improve the quality and relevance of the generated text by carefully crafting the prompts to elicit the desired responses from the model.
Prompt engineering involves several steps, including selecting the appropriate model architecture and parameters, designing the prompt format and structure, selecting the appropriate task and training data, and fine-tuning the model using the selected prompt and data.
Prompt engineering is a crucial step in the development of language models, as it can greatly influence the quality and effectiveness of the model's responses. By carefully designing and refining the prompts used to generate text, researchers and developers can improve the accuracy and relevance of the model's output, making it more useful for a wide range of applications, including chatbots, language translation, content creation, and more.",natural-language-processing
fb432356-5aa4-4944-a2ef-bee353ecd3e7,graph-to-sequence,Graph-to-Sequence,Mapping an input graph to a sequence of vectors.,natural-language-processing
5cbd9817-2861-4f47-8ea9-e39e366e486c,review-generation,Review Generation,,natural-language-processing
ccbcee12-adff-4bf1-9c9c-e38894c4ac44,stance-detection,Stance Detection,"Stance detection is the extraction of a subject's reaction to a claim made by a primary actor. It is a core part of a set of approaches to fake news assessment.
Example:
* Source: ""Apples are the most delicious fruit in existence""
* Reply: ""Obviously not, because that is a reuben from Katz's""
* Stance: deny",natural-language-processing
cfa95e22-4c7f-4fd8-af9a-d72ad4520f05,empathetic-response-generation,Empathetic Response Generation,Generate empathetic responses in dialogues,natural-language-processing
a38cc6a7-b9e3-4697-b4f3-b9ecbf3bec18,sentence-embeddings-for-biomedical-texts,Sentence Embeddings For Biomedical Texts,,natural-language-processing
f155fd75-c890-4566-8b08-3013a2838262,connective-detection,Connective Detection,"Detecting discourse connectives in text, e.g. ""so that"" in ""Misha packed carefully so that everything would fit in her bag.""",natural-language-processing
bf499f88-d5b0-474d-9e46-bc6ae146af7f,molecular-description-generation,Molecular description generation,"Molecular description generation entails the creation of a detailed textual depiction illuminating the structure, properties, biological activity, and applications of a molecule based on its molecular descriptors. It furnishes chemists and biologists with a swift conduit to essential molecular information, thus efficiently guiding their research and experiments.",natural-language-processing
ecc6d138-9454-42f5-9688-7501164f13ef,document-ranking,Document Ranking,"Sort documents according to some criterion so that the ""best"" results appear early in the result list displayed to the user (Source: Wikipedia).",natural-language-processing
61442973-214a-4f7e-bd36-a961beba1c57,cross-lingual-question-answering,Cross-Lingual Question Answering,,natural-language-processing
f6653ffe-5a71-44d0-bdab-e99b5e8c0434,unsupervised-dependency-parsing,Unsupervised Dependency Parsing,"Unsupervised dependency parsing is the task of inferring the dependency parse of sentences without any labeled training data.
Description from [NLP Progress](http://nlpprogress.com/english/dependency_parsing.html)",natural-language-processing
531a82b9-124f-4401-a5f6-1355af589ae9,part-of-speech-tagging,Part-Of-Speech Tagging,"Part-of-speech tagging (POS tagging) is the task of tagging a word in a text with its part of speech.
A part of speech is a category of words with similar grammatical properties. Common English
parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, etc.
Example:
| Vinken | , | 61 | years | old |
| --- | ---| --- | --- | --- |
| NNP | , | CD | NNS | JJ |",natural-language-processing
3ae40429-112c-41c8-9ba4-60a3b2a9794e,drugprot,DrugProt,,natural-language-processing
58e0f2a9-cc63-4f49-a697-9a0cd884535f,readability-optimization,Readability optimization,It consists of improving the readability of a text automatically and without significantly altering the form or meaning.,natural-language-processing
da913118-1409-406f-b2fa-248a0a088707,metric-type-identification,Metric-Type Identification,,natural-language-processing
53b7c692-063b-4226-adef-ab16bb28a7c4,misogynistic-aggression-identification,Misogynistic Aggression Identification,"Develop a binary classifier for classifying text as ‘gendered’ or ‘non-gendered’. For this, the TRAC-2 dataset of 5,000 annotated social media posts each in Bangla (in both Roman and Bangla script), Hindi (in both Roman and Devanagari script), and English is to be used for training and validation.",natural-language-processing
5c164ef1-2087-4df2-ad84-7292ba8eb65f,few-shot-ner,Few-shot NER,"Few-Shot Named Entity Recognition (NER) is the task of recognising a 'named entity' like a person, organization, time and so on in a piece of text, e.g. ""Alan Mathison [person] visited the Turing Institute [organization] in June [time]."", given only a few labelled examples per entity type.",natural-language-processing
b7c52e1f-441b-4791-be4d-3942eef0add0,irony-identification,Irony Identification,"This task asks a model to identify whether a given sentence(s) is/are ironic or not.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/irony_identification)",natural-language-processing
866d72e8-ea78-40d1-ba24-2d5efff97fa1,cross-lingual-text-to-image-generation,Cross-lingual Text-to-Image Generation,,natural-language-processing
46a74e54-0bae-494d-9483-6a2a7921515f,abstractive-text-summarization,Abstractive Text Summarization,"**Abstractive Text Summarization** is the task of generating a short and concise summary that captures the salient ideas of the source text. The generated summaries potentially contain new phrases and sentences that may not appear in the source text.
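As an illustration only, such a summarizer can be called through a pretrained sequence-to-sequence model; the checkpoint and length limits below are arbitrary choices, not part of the task definition.
```
# Illustrative sketch: abstractive summarization with a pretrained seq2seq model.
from transformers import pipeline

summarizer = pipeline('summarization', model='facebook/bart-large-cnn')

article = (
    'The committee met on Tuesday to discuss the new transit plan. '
    'Members debated funding sources, construction timelines, and the '
    'impact on existing bus routes before voting to continue the study.'
)

summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(summary[0]['summary_text'])  # may contain phrases not in the source text
```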
Source: [Generative Adversarial Network for Abstractive Text Summarization ](https://arxiv.org/abs/1711.09357)
Image credit: [Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond](https://arxiv.org/pdf/1602.06023v5.pdf)",natural-language-processing
cbd3adbc-2dc2-413a-90af-142ddc65e8d0,relation-extraction,Relation Extraction,"**Relation Extraction** is the task of predicting attributes and relations for entities in a sentence. For example, given a sentence “Barack Obama was born in Honolulu, Hawaii.”, a relation classifier aims at predicting the relation of “bornInCity”. Relation Extraction is the key component for building relation knowledge graphs, and it is of crucial significance to natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.
Source: [Deep Residual Learning for Weakly-Supervised Relation Extraction ](https://arxiv.org/abs/1707.08866)",natural-language-processing
c2e55aa9-93d9-4cb2-b1cc-d23efb50ea0d,timex-normalization,Timex normalization,"Temporal expression normalisation is the grounding of a lexicalisation of a time to a calendar date or other formal temporal representation.
Example:
10/18/2000 21:01:00.65
Dozens of Palestinians were wounded in scattered clashes in the West Bank and Gaza Strip, Wednesday, despite the Sharm el-Sheikh truce accord.
Chuck Rich reports on entertainment every Saturday
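In the example above, the timestamp can be read as the document creation time, and expressions such as 'Wednesday' or 'every Saturday' are the ones to be grounded. A minimal, purely illustrative rule-based sketch that anchors a weekday mention to the document creation time:
```
# Hypothetical rule-based sketch: ground a weekday mention to a calendar date
# using the document creation time (DCT) as the anchor.
from datetime import date, timedelta

WEEKDAYS = ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']

def normalize_weekday(mention, dct):
    # Resolve the mention to the closest such weekday on or before the DCT.
    target = WEEKDAYS.index(mention.lower())
    offset = (dct.weekday() - target) % 7
    return dct - timedelta(days=offset)

dct = date(2000, 10, 18)                       # 10/18/2000 from the example
print(normalize_weekday('Wednesday', dct))     # 2000-10-18
```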
Description from [NLP Progress](http://nlpprogress.com/english/temporal_processing.html)",natural-language-processing
920cd076-1ba1-41cf-963e-a16be8ea31c9,specificity,Specificity,,natural-language-processing
dad2b9af-d558-4d60-a70b-ce3f6060e176,semantic-composition,Semantic Composition,Understanding the meaning of text by composing the meanings of the individual words in the text (Source: https://arxiv.org/pdf/1405.7908.pdf),natural-language-processing
266de524-35e7-4aa0-88f9-1693fad6c2df,emotion-recognition-in-conversation,Emotion Recognition in Conversation,"Given the transcript of a conversation along with speaker information for each constituent utterance, the ERC task aims to identify the emotion of each utterance from several pre-defined emotions. Formally, given an input sequence of $N$ utterances $[(u_1, p_1), (u_2, p_2), \ldots, (u_N, p_N)]$, where each utterance $u_i = [u_{i,1}, u_{i,2}, \ldots, u_{i,T}]$ consists of $T$ words $u_{i,j}$ and is spoken by party $p_i$, the task is to predict the emotion label $e_i$ of each utterance $u_i$.",natural-language-processing
f272fd1a-bc00-4ef6-88cb-0d996ae0bf7c,lambada,LAMBADA,,natural-language-processing
37f3cacf-5924-47b8-998d-bc649c851a94,multimodal-gif-dialog,Multimodal GIF Dialog,,natural-language-processing
99973e20-da31-4843-8df4-17b1a69e851c,semantic-retrieval,Semantic Retrieval,,natural-language-processing
bdae859e-4c77-4487-bd86-f225f2ad87d0,ucca-parsing,UCCA Parsing,"UCCA (Abend and Rappoport, 2013) is a semantic representation whose main design principles are ease of annotation, cross-linguistic applicability, and a modular architecture. UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation. Edges are labeled, indicating the role of a child in the relation the parent represents. UCCA’s foundational layer mostly covers predicate-argument structure, semantic heads and inter-Scene relations. UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges that allow for a unit to participate in several super-ordinate relations. Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.
Description from [NLP Progress](http://nlpprogress.com/english/semantic_parsing.html)",natural-language-processing
9edaf0ff-8a00-4334-a0a7-a2c1e6aaf157,bias-detection,Bias Detection,"Bias detection is the task of detecting and measuring racism, sexism and otherwise discriminatory behavior in a model (Source: https://stereoset.mit.edu/)",natural-language-processing
e0becbee-2aa6-45e7-bd2d-1ca67a7216a7,native-language-identification,Native Language Identification,Native Language Identification (NLI) is the task of determining an author's native language (L1) based only on their writings in a second language (L2).,natural-language-processing
dd10e875-22fa-4327-8678-39deeb1f6570,generative-question-answering,Generative Question Answering,,natural-language-processing
b77edc85-bdfc-47df-b285-2d6a0025d7bd,phrase-vector-embedding,Phrase Vector Embedding,"Just like the generation of word (1-gram) vector embedding, this task is for phrase (n-gram) vector embedding.",natural-language-processing
d821ef55-4a0d-4942-9b14-73e1ddd40fe7,relation-mention-extraction,Relation Mention Extraction,Extracting phrases representative for a specific relation.,natural-language-processing
98b75a6b-e871-4000-99b2-77feb5777ed9,text-summarization,Text Summarization,"**Text Summarization** is a natural language processing (NLP) task that involves condensing a lengthy text document into a shorter, more compact version while still retaining the most important information and meaning. The goal is to produce a summary that accurately represents the content of the original text in a concise form.
There are different approaches to text summarization, including extractive methods that identify and extract important sentences or phrases from the text, and abstractive methods that generate new text based on the content of the original text.",natural-language-processing
75270bae-21be-4c1c-a6d7-2c290a5ae151,keyword-extraction,Keyword Extraction,Keyword extraction is tasked with the automatic identification of terms that best describe the subject of a document (Source: Wikipedia).,natural-language-processing
634b931e-334c-4866-bc27-b7a44d820e8b,few-shot-htc,few-shot-htc,,natural-language-processing
e36fa73c-36df-4cc0-b191-e12b9acf4593,dialog-act-classification,Dialog Act Classification,,natural-language-processing
eec6f86a-bebe-4614-ab04-d344e758de42,joint-entity-and-relation-extraction-on,Joint Entity and Relation Extraction on Scientific Data,,natural-language-processing
3d94fed5-6138-4404-8419-a89d089ebfcc,stance-detection-us-election-2020-biden,Stance Detection (US Election 2020 - Biden),,natural-language-processing
266f21f6-0c25-4489-9d2f-fb100decd50f,aspect-based-sentiment-analysis,Aspect-Based Sentiment Analysis (ABSA),"**Aspect-Based Sentiment Analysis (ABSA)** is a Natural Language Processing task that aims to identify and extract the sentiment of specific aspects or components of a product or service. ABSA typically involves a multi-step process that begins with identifying the aspects or features of the product or service that are being discussed in the text. This is followed by sentiment analysis, where the sentiment polarity (positive, negative, or neutral) is assigned to each aspect based on the context of the sentence or document. Finally, the results are aggregated to provide an overall sentiment for each aspect.
Recent works propose more challenging ABSA tasks to predict sentiment triplets or quadruplets (Chen et al., 2022), the most influential of which are ASTE (Peng et al., 2020; Zhai et al., 2022), TASD (Wan et al., 2020), ASQP (Zhang et al., 2021a) and ACOS with an emphasis on implicit aspects or opinions (Cai et al., 2020a).
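A small sketch of the target output structures for these tuple-prediction variants; the field names and category labels below are illustrative assumptions, not a fixed schema.
```
# Illustrative data structures for ABSA tuple prediction (names are assumptions).
from dataclasses import dataclass

@dataclass
class SentimentTriplet:      # ASTE-style output
    aspect: str
    opinion: str
    sentiment: str

@dataclass
class SentimentQuadruple:    # ACOS/ASQP-style output, allowing implicit terms
    aspect: str              # may be 'NULL' when the aspect is implicit
    category: str
    opinion: str             # may be 'NULL' when the opinion is implicit
    sentiment: str

# 'The pasta was great but the service was slow.'
predictions = [
    SentimentQuadruple('pasta', 'food quality', 'great', 'positive'),
    SentimentQuadruple('service', 'service general', 'slow', 'negative'),
]
print(predictions)
```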
( Source: [MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction](https://arxiv.org/abs/2305.12627) )",natural-language-processing
06ca46c7-9352-4ea0-a3ab-a26759df7333,query-based-extractive-summarization,Query-Based Extractive Summarization,Extracting summarized information that answers a given query based on a reference text.,natural-language-processing
d260869e-1d98-49fd-9ba4-91b5dd4028c1,named-entity-recognition-ner,Named Entity Recognition (NER),"**Named Entity Recognition (NER)** is a task of Natural Language Processing (NLP) that involves identifying and classifying named entities in a text into predefined categories such as person names, organizations, locations, and others. The goal of NER is to extract structured information from unstructured text data and represent it in a machine-readable format. Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities. O is used for non-entity tokens.
Example:
| Mark | Watney | visited | Mars |
| --- | ---| --- | --- |
| B-PER | I-PER | O | B-LOC |
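A minimal sketch of decoding such BIO tag sequences into entity spans; the helper below is illustrative, not a fixed API.
```
# Illustrative sketch: convert parallel token/BIO-tag sequences into entity spans.
def bio_to_spans(tokens, tags):
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith('B-'):
            if current:
                spans.append(current)
            current = [tag[2:], [token]]          # start a new entity
        elif tag.startswith('I-') and current and current[0] == tag[2:]:
            current[1].append(token)              # continue the current entity
        else:
            if current:
                spans.append(current)
            current = None                        # 'O' or an inconsistent tag
    if current:
        spans.append(current)
    return [(label, ' '.join(words)) for label, words in spans]

tokens = ['Mark', 'Watney', 'visited', 'Mars']
tags = ['B-PER', 'I-PER', 'O', 'B-LOC']
print(bio_to_spans(tokens, tags))   # [('PER', 'Mark Watney'), ('LOC', 'Mars')]
```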
( Image credit: [Zalando](https://research.zalando.com/welcome/mission/research-projects/flair-nlp/) )",natural-language-processing
94daaa79-451f-44e0-a197-645934062911,arabic-sentiment-analysis,Arabic Sentiment Analysis,"Arabic sentiment analysis is the process of computationally identifying and categorizing opinions expressed in a piece of arabic text, especially in order to determine whether the writer's attitude towards a particular topic, product, etc. is positive, negative, or neutral (Source: Oxford Languages)",natural-language-processing
5385d8a5-b1c2-4e7a-89b1-d2163a15dadf,authorship-verification,Authorship Verification,"Authorship verification (**AV**) is a research subject in the field of digital text forensics that concerns itself with the question of whether two documents have been written by the same person.
Definition taken from the paper **Assessing the Applicability of Authorship Verification Methods**.",natural-language-processing
7467690c-8ac5-4b53-b32d-a827b6af17a5,spoken-language-understanding,Spoken Language Understanding,,natural-language-processing
f02f2a4e-686a-42cb-a14d-737b8f18e41f,data-mining,Data Mining,,natural-language-processing
3b1f9c08-cf07-4834-b3fb-6f08d559e588,lexical-normalization,Lexical Normalization,"Lexical normalization is the task of translating/transforming non-standard text to a standard register.
Example:
```
new pix comming tomoroe
new pictures coming tomorrow
```
Datasets usually consist of tweets, since these naturally contain a fair amount of these phenomena.
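A toy sketch of the word-level setting, using a hypothetical replacement dictionary; real systems learn such mappings from annotated data.
```
# Toy sketch: word-level lexical normalization with a hypothetical lookup table.
REPLACEMENTS = {
    'pix': 'pictures',
    'comming': 'coming',
    'tomoroe': 'tomorrow',
}

def normalize(sentence):
    # Only 1-to-1 word-level replacements; insertion, deletion and
    # reordering are out of scope, as noted below.
    return ' '.join(REPLACEMENTS.get(word, word) for word in sentence.split())

print(normalize('new pix comming tomoroe'))   # new pictures coming tomorrow
```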
For lexical normalization, only replacements on the word-level are annotated.
Some corpora include annotation for 1-N and N-1 replacements. However, word insertion/deletion and reordering are not part of the task.",natural-language-processing
3706e014-65ac-44b8-97b6-f7d2b7c7a5fe,reverse-dictionary,Reverse Dictionary,,natural-language-processing
a4fae6b7-b4ed-48f8-b836-1593736cef60,knowledge-base-population,Knowledge Base Population,Knowledge base population is the task of filling the incomplete elements of a given knowledge base by automatically processing a large corpus of text.,natural-language-processing
c57f0070-1ea0-4c59-a10a-63bcfa2105c2,race-m,RACE-m,,natural-language-processing
dfb93811-dda3-48e1-9ef1-4e7fae224dac,document-level-re-with-incomplete-labeling,Document-level RE with incomplete labeling,,natural-language-processing
15b89984-db38-432d-a114-d5f8d8d3ed8b,overlapping-mention-recognition,Overlapping Mention Recognition,Overlapping mention recognition is the task of correctly identifying all mentions of an entity in the presence of overlapping entity mentions.,natural-language-processing
2dc6318a-9089-4ab6-947c-ab813cf9f518,zero-shot-slot-filling,Zero-shot Slot Filling,,natural-language-processing
dbfb0ff0-88c9-41bb-abc4-85bac04ccce7,cross-document-language-modeling,Cross-Document Language Modeling,"Involves pretraining language models to support multi-document NLP tasks.
Source: [Cross-Document Language Modeling](https://arxiv.org/pdf/2101.00406v1.pdf)
Image Credit: [Cross-Document Language Modeling](https://arxiv.org/pdf/2101.00406v1.pdf)",natural-language-processing
bd3ba07d-a1e1-41ad-bb70-4f2145fa3137,crowdsourced-text-aggregation,Crowdsourced Text Aggregation,"One of the most important parts of processing responses from crowd workers is **aggregation**: given several conflicting opinions, a method should extract the truth. This problem is also known as *truth-inference* in crowdsourcing. Text aggregation problem is dedicated to extracting the correct information from crowd workers' responses for a crowdsourcing task where the output is a *text*: audio transcription, translation, character recognition, etc.",natural-language-processing
af3cdd7e-2014-4a2d-b485-d8e3129dbdb8,passage-ranking,Passage Ranking,,natural-language-processing
d17725c1-470b-4989-bbff-cf55ad67bd7f,negation-and-speculation-cue-detection,Negation and Speculation Cue Detection,,natural-language-processing
6e43a7ad-805c-47ba-8257-c9ebf6a99772,open-question,Open-Ended Question Answering,"Open-ended questions are defined as those that simply pose the question, without imposing any constraints on the format of the response. This distinguishes them from questions with a predetermined answer format.",natural-language-processing
112e1f50-5a65-4532-b1fe-2ddcde5ab2db,vietnamese-parsing,Vietnamese Parsing,,natural-language-processing
69fe16d6-cf27-47aa-b1b3-9498ef9d6bc6,transliteration,Transliteration,"**Transliteration** is a mechanism for converting a word in a source (foreign) language to a target language, and often adopts approaches from machine translation. In machine translation, the objective is to preserve the semantic meaning of the utterance as much as possible while following the syntactic structure in the target language. In Transliteration, the objective is to preserve the original pronunciation of the source word as much as possible while following the phonological structures of the target language.
For example, the city’s name “Manchester” has become well known by people of languages other than English. These new words are often named entities that are important in cross-lingual information retrieval, information extraction, machine translation, and often present out-of-vocabulary challenges to spoken language technologies such as automatic speech recognition, spoken keyword search, and text-to-speech.
Source: [Phonology-Augmented Statistical Framework for Machine Transliteration using Limited Linguistic Resources ](https://arxiv.org/abs/1810.03184)",natural-language-processing
67ddf66e-a583-4cad-9846-08e623736f41,joint-ner-and-classification,Joint NER and Classification,Joint named entity recognition and classification refers to the combined task of identifying named entities in a given text and classifying that text.,natural-language-processing
26856b4e-9abe-4061-9805-71e768df405a,multi-modal-named-entity-recognition,Multi-modal Named Entity Recognition,"Multi-modal named entity recognition aims at improving
the accuracy of NER models through utilizing image information.",natural-language-processing
5166bf21-c2bd-4794-91c5-cbec84c1a421,dialogue-understanding,Dialogue Understanding,,natural-language-processing
ce68264c-3540-4821-9f64-5a317fccf61b,document-dating,Document Dating,"Document Dating is the problem of automatically predicting the date of a document based on its content. Date of a document, also referred to as the Document Creation Time (DCT), is at the core of many important tasks, such as, information retrieval, temporal reasoning, text summarization, event detection, and analysis of historical text, among others.
For example, in the following document, the correct creation year is 1999. This can be inferred from the presence of the terms 1995 and Four years after.
Swiss adopted that form of taxation in 1995. The concession was approved by the govt last September. Four years after, the IOC….
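A toy sketch of this kind of inference, combining an explicit year mention with a relative offset; the patterns and offset table are illustrative assumptions.
```
# Toy sketch: infer a document creation year from an explicit year mention
# plus a relative expression ('Four years after'). Patterns are illustrative.
import re

WORD_TO_NUM = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5}

def infer_creation_year(text):
    year = int(re.search(r'\b(19|20)\d{2}\b', text).group())
    offset = re.search(r'\b(one|two|three|four|five) years? after\b', text, re.I)
    return year + WORD_TO_NUM[offset.group(1).lower()] if offset else year

doc = 'Swiss adopted that form of taxation in 1995. Four years after, the IOC...'
print(infer_creation_year(doc))   # 1999
```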
Description from [NLP Progress](http://nlpprogress.com/english/temporal_processing.html)",natural-language-processing
d3ef12b7-82fc-4070-a3d1-032a61ca3f31,key-information-extraction,Key Information Extraction,"Key Information Extraction (KIE) is aimed at extracting structured information (e.g. key-value pairs) from form-style documents (e.g. invoices), which is an important step towards intelligent document understanding.",natural-language-processing
a19bddc0-e3bc-4cc5-9ed3-c66e033b9869,empirical-judgments,Empirical Judgments,"Drawing inspiration from Immanuel Kant, this task measures a model’s ability to distinguish between two kinds of empirical judgments: judgments that assert a correlative relation between empirical events, and judgments that assert a causal relation.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/empirical_judgments)",natural-language-processing
4df7909f-b577-48b9-a578-8be3d55ed5c2,description-guided-molecule-generation,Description-guided molecule generation,"The significance of description-based molecule generation lies in its potential to streamline the process of molecular design by enabling the production of molecules that directly meet the criteria outlined in a given description. This facilitates a more targeted approach in the creation and optimization of novel molecules, with applications in diverse fields such as drug discovery and materials science.",natural-language-processing
6928b10d-2cb5-48b7-a6ee-27b0fd21f8f0,multi-lingual-text-to-image-generation,Multi-lingual Text-to-Image Generation,,natural-language-processing
3764621f-95d8-44ab-8b1c-520f4bf7c869,chinese-spell-checking,Chinese Spell Checking,Chinese Spell Checking (CSC) aims to detect and correct erroneous characters for user-generated text in Chinese language.,natural-language-processing
8d1d48dc-5f7b-4785-ab62-df03bd919862,low-resource-named-entity-recognition,Low Resource Named Entity Recognition,"Low resource named entity recognition is the task of using data and models available for one language for which ample such resources are available (e.g., English) to solve named entity recognition tasks in another, commonly more low-resource, language.",natural-language-processing
6da1a9e5-8894-454a-bad9-ea09e8cf2e35,domain-labelling,Domain Labelling,,natural-language-processing
0849a601-9e51-44f3-a215-49494275ec0b,aspect-category-polarity,Aspect Category Polarity,,natural-language-processing
19c9a5bc-9369-4b4a-9e74-60bb5739d811,document-level-event-extraction,Document-level Event Extraction,,natural-language-processing
5cd2b20e-2d26-4c5c-8c6a-4a18971e71c7,entity-typing-on-dh-kgs,Entity Typing on DH-KGs,,natural-language-processing
084984a4-f378-4670-b739-4e317574304c,slovak-text-diacritization,Slovak Text Diacritization,Addition of diacritics for undiacritized Slovak Wikipedia texts.,natural-language-processing
5afa5d19-34de-465c-b238-734f1c49a4f8,relation-classification,Relation Classification,"**Relation Classification** is the task of identifying the semantic relation holding between two nominal entities in text.
Source: [Structure Regularized Neural Network for Entity Relation Classification for Chinese Literature Text ](https://arxiv.org/abs/1803.05662)",natural-language-processing
d13f97b1-5f68-4f38-b22c-4b3f5a90cd9e,conversation-disentanglement,Conversation Disentanglement,"Automatic disentanglement could be used to provide more interpretable results when searching over chat logs, and to help users understand what is happening when they join a channel.
Source: [Kummerfeld et al.](https://arxiv.org/pdf/1810.11118v2.pdf)",natural-language-processing
02406310-4ac8-4b34-91d6-dd20539588d4,natural-language-transduction,Natural Language Transduction,Converting one sequence into another,natural-language-processing
6d9aa838-ea10-4bbe-9e8e-a8b2afff2154,cause-effect-relation-classification,Cause-Effect Relation Classification,"Classifying pairs of entities (e.g., events) into causal or non-causal or predicting Cause and Effect in a causal relation",natural-language-processing
68252255-2d05-4dba-a5da-ce4a2a97be54,open-intent-detection,Open Intent Detection,"Open intent detection aims to identify the n classes of known intents and to detect the single open (unknown) intent class.",natural-language-processing
4049996a-320a-4bd5-8dbe-447ae86df973,end-to-end-dialogue-modelling,End-To-End Dialogue Modelling,,natural-language-processing
023ba158-eac8-4d58-b985-4a6b7ea1b96d,cross-document-coreference-resolution,Cross Document Coreference Resolution,,natural-language-processing
125fc259-0c32-4f56-8a24-05ea14e10f05,traditional-spam-detection,Traditional Spam Detection,,natural-language-processing
7ecf0d9c-9639-4422-8e2a-b0ab94f49e92,keyphrase-generation,Keyphrase Generation,Keyphrase Generation aims at generating keyphrases (or phrases) that best summarize a given text article or document.,natural-language-processing
6442742c-65c2-4c79-84c8-0eeac2ad1245,distractor-generation,Distractor Generation,"Given a passage, a question, and an answer phrase, the goal of distractor generation (DG) is to generate context-related wrong options (i.e., distractor) for multiple-choice questions (MCQ).",natural-language-processing
edd46cec-4a8d-46b1-b10b-99132d3919cb,multilingual-machine-comprehension,Multilingual Machine Comprehension in English Hindi,"Multilingual Machine Comprehension (MMC) is a Question-Answering (QA) sub-task that involves quoting the answer to a question from a given snippet, where the question and the snippet can be in different languages. Results are reported on an extended version of the recently released XQuAD dataset, which is proposed as the evaluation benchmark for future research.",natural-language-processing
9536645a-86b3-45df-83ba-a87f6347c0e9,aspect-oriented-opinion-extraction,Aspect-oriented Opinion Extraction,Extracting the paired opinion terms for every given aspect term in a sentence.,natural-language-processing
1c212ab1-235a-49ed-ad82-9fd4ec815578,sentence-ordering,Sentence Ordering,Sentence ordering task deals with finding the correct order of sentences given a randomly ordered paragraph.,natural-language-processing
da680ff6-8cff-4b88-a8ca-47bfee1a04da,question-quality-assessment,Question Quality Assessment,This task expects to build subjective question-answering algorithms to check whether a question is of high quality or needs to be edited/flagged.,natural-language-processing
2dc6efee-4ad7-45b9-9ba0-7060f6c4edb2,open-intent-discovery,Open Intent Discovery,Open intent discovery aims to leverage limited prior knowledge of known intents to find fine-grained known and open intent-wise clusters.,natural-language-processing
a01ab4e6-608e-4c88-a63e-d13acda9f30a,table-search,Table Search,,natural-language-processing
067fe5f1-e137-4d61-bd34-1551b5536d57,joint-entity-and-relation-extraction,Joint Entity and Relation Extraction,Scores reported from systems which jointly extract entities and relations.,natural-language-processing
9b0d572a-2a12-431c-ac1c-ce3f5d7ce841,semantic-textual-similarity,Semantic Textual Similarity,"Semantic textual similarity deals with determining how similar two pieces of texts are.
This can take the form of assigning a score from 1 to 5. Related tasks are paraphrase or duplicate identification.
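A small sketch of one simple baseline: scoring similarity as the cosine between TF-IDF sentence vectors and rescaling it onto the 1-to-5 range mentioned above (the rescaling is an illustrative assumption).
```
# Illustrative baseline: cosine similarity between TF-IDF sentence vectors,
# rescaled to the 1-5 range used by some benchmarks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_score(sentence_a, sentence_b):
    vectors = TfidfVectorizer().fit_transform([sentence_a, sentence_b])
    cosine = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return 1 + 4 * cosine          # map [0, 1] cosine onto a 1-5 scale

print(similarity_score('A man is playing a guitar', 'A person plays the guitar'))
```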
Image source: [Learning Semantic Textual Similarity from Conversations](https://arxiv.org/pdf/1804.07754.pdf)",natural-language-processing
5b30e337-89c2-417b-b049-6bc64780259a,hurtful-sentence-completion,Hurtful Sentence Completion,Measure hurtful sentence completions in language models (HONEST),natural-language-processing
1a9040a9-46ce-4d2e-a0c2-e054bb363263,latent-aspect-detection,Latent Aspect Detection,,natural-language-processing
ae4f9726-5d9d-45b1-bf10-0914ff36fc3f,multiview-contextual-commonsense-inference,Multiview Contextual Commonsense Inference,"Depending on the situation, multiple different lines of reasoning are possible, each leading to unique inferences. For a given context, identifying all valid inferences requires commonsense reasoning, and as such, this task is called Multiview Contextual Commonsense Inference.",natural-language-processing
d3d68914-1e30-4750-8b88-55bc6379c795,aggression-identification,Aggression Identification,"Develop a classifier that makes a 3-way classification between ‘Overtly Aggressive’, ‘Covertly Aggressive’, and ‘Non-aggressive’ text data. For this, the TRAC-2 dataset of 5,000 aggression-annotated social media posts each in Bangla (in both Roman and Bangla script), Hindi (in both Roman and Devanagari script), and English is to be used for training and validation.",natural-language-processing
127108da-b93c-47bd-bef4-1e822ec980f1,multimodal-machine-translation,Multimodal Machine Translation,"Multimodal machine translation is the task of doing machine translation with multiple data sources - for example, translating ""a bird is flying over water"" + an image of a bird over water to German text.
( Image credit: [Findings of the Third Shared Task on Multimodal Machine Translation](https://www.aclweb.org/anthology/W18-6402.pdf) )",natural-language-processing
916afcc1-cceb-47ed-b4a7-4ac5594a0e7a,comment-generation,Comment Generation,"Article commenting poses new challenges for machines, as it involves multiple cognitive abilities: understanding the given article, formulating opinions and arguments, and organizing natural language for expression.",natural-language-processing
24013f2b-efe7-427c-84dd-dff27ad7f444,emergent-communications-on-relations,Emergent communications on relations,Emergent communications in the context of relations.,natural-language-processing
f6a476fc-e412-4f9a-9ced-de7f2c536ca3,automated-writing-evaluation,Automated Writing Evaluation,"Automated writing evaluation refers to the task of analysing and measuring written text based on features, such as syntax, text complexity and vocabulary range.",natural-language-processing
f40283dd-52b3-497c-b6ba-8b797d602457,multi-document-summarization,Multi-Document Summarization,"**Multi-Document Summarization** is a process of representing a set of documents with a short piece of text by capturing the relevant information and filtering out the redundant information. Two prominent approaches to Multi-Document Summarization are extractive and abstractive summarization. Extractive summarization systems aim to extract salient snippets, sentences or passages from documents, while abstractive summarization systems aim to concisely paraphrase the content of the documents.
Source: [Multi-Document Summarization using Distributed Bag-of-Words Model ](https://arxiv.org/abs/1710.02745)",natural-language-processing
77cb04f6-edac-4dbc-95ed-86d63c7616ec,entity-alignment,Entity Alignment,"**Entity Alignment** is the task of finding entities in two knowledge bases that refer to the same real-world object. It plays a vital role in automatically integrating multiple knowledge bases.
Note: results that have incorporated machine-translated entity names (introduced in the RDGCN paper) or pre-alignment name embeddings are considered to have used **extra training labels** (both are marked with ""Extra Training Data"" in the leaderboard) and do **not adhere to a comparable setting** with others that have followed the original setting of the benchmark.
Source: [Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding ](https://arxiv.org/abs/1708.05045)
The task of entity alignment is related to the task of [entity resolution](https://paperswithcode.com/task/entity-resolution) which focuses on matching structured entity descriptions in different contexts.",natural-language-processing
c8a0cd77-99ca-4522-8309-44ab36263951,visual-commonsense-tests,Visual Commonsense Tests,"Predict 5 property types (color, shape, material, size, and visual co-occurrence) for over 5000 subjects.",natural-language-processing
a554a4b0-2699-4025-a299-a00051710fd5,text-anonymization,Text Anonymization,,natural-language-processing
ced1b961-f4da-4de0-9ce0-8c8309bf7b12,emotional-dialogue-acts,Emotional Dialogue Acts,"Associating Emotions and Dialogue Acts to find unique relationships between them such as Accept/Agree dialogue acts often occur with the Joy emotion, Apology with Sadness, or Thanking with Joy.
First introduced in the paper EDA: Enriching Emotional Dialogue Acts using an Ensemble of Neural Annotators, LREC 2020 (https://aclanthology.org/2020.lrec-1.78/).",natural-language-processing
d0aab134-0daa-40bb-bedd-0929ae4f9a2d,stereotypical-bias-analysis,Stereotypical Bias Analysis,,natural-language-processing
69d678e8-ba90-4b9c-89bf-e031c37d525f,polyphone-disambiguation,Polyphone disambiguation,A part of the TTS-front end framework which serves to predict the correct pronunciation for the input polyphone characters.,natural-language-processing
eb2a5f04-c7bf-4524-a835-d37bfd26d7f6,passage-re-ranking,Passage Re-Ranking,Passage re-ranking is the task of scoring and re-ranking a collection of retrieved documents based on an input query.,natural-language-processing
1f30bce0-9fc8-42f1-9d63-5b495c1b6f13,entity-typing,Entity Typing,"**Entity Typing** is an important task in text analysis. Assigning types (e.g., person, location, organization) to mentions of entities in documents enables effective structured analysis of unstructured text corpora. The extracted type information can be used in a wide range of ways (e.g., serving as primitives for information extraction and knowledge base (KB) completion, and assisting question answering). Traditional Entity Typing systems focus on a small set of coarse types (typically fewer than 10). Recent studies work on a much larger set of fine-grained types which form a tree-structured hierarchy (e.g., actor as a subtype of artist, and artist is a subtype of person).
Source: [Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding ](https://arxiv.org/abs/1602.05307)
Image Credit: [Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding ](https://arxiv.org/abs/1602.05307)",natural-language-processing
3d170842-43e7-4974-b218-11007f9ae5a0,lexical-complexity-prediction,Lexical Complexity Prediction,Predicting the complexity of a word/multi-word expression in a sentence.,natural-language-processing
ec3a8a79-29aa-4d65-ae3b-0f814a59e15e,morphological-disambiguation,Morphological Disambiguation,,natural-language-processing
cd0632e4-3862-42c4-8409-6f6c72337490,passage-retrieval,Passage Retrieval,**Passage retrieval** is a specialized type of IR application that retrieves relevant passages (or pieces of text) rather than an entire ranked set of documents.,natural-language-processing
fe2f1127-f7c8-4d47-a2c5-72768dae0b6a,conversational-search,Conversational Search,,natural-language-processing
7057d1b5-dff0-4e5f-b9c0-e2a8a7fa044f,data-to-text-generation,Data-to-Text Generation,"A classic problem in natural-language generation (NLG) involves taking structured data, such as a table, as input, and producing text that adequately and fluently describes this data as output. Unlike machine translation, which aims for complete transduction of the sentence to be translated, this form of NLG is usually taken to require addressing (at least) two separate challenges: what to say, the selection of an appropriate subset of the input data to discuss, and how to say it, the surface realization of the generated text.
( Image credit: [Data-to-Text Generation with Content Selection and Planning](https://arxiv.org/pdf/1809.00582v2.pdf) )",natural-language-processing
123a5846-0acc-412b-a5e1-335c6b3b3be8,abstract-argumentation,Abstract Argumentation,Identifying argumentative statements from natural language dialogs.,natural-language-processing
c479483a-d048-495f-87c3-4a9d0ed259a3,satire-detection,Satire Detection,Satire detection consists in detecting when a text is written in a satirical tone and its content shouldn't be interpreted literally.,natural-language-processing
815d0248-8727-4265-84f5-edc060589217,joint-multilingual-sentence-representations,Joint Multilingual Sentence Representations,,natural-language-processing
0ab6f460-d692-4bef-8ce1-5171e2834a4c,de-identification,De-identification,"De-identification is the task of detecting privacy-related entities in text, such as person names, emails and contact data.",natural-language-processing
62bf6266-c4c1-417a-b756-4fc5d0c3946c,aspect-category-opinion-sentiment-quadruple,Aspect-Category-Opinion-Sentiment Quadruple Extraction,"Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction is the task of extracting all aspect-category-opinion-sentiment quadruples in a review sentence, providing full support for aspect-level sentiment analysis with implicit aspects and opinions where possible.",natural-language-processing
032be135-7bcb-4f7c-bf66-03d56e2671bc,chinese-zero-pronoun-resolution,Chinese Zero Pronoun Resolution,Chinese zero pronoun resolution refers to the task of resolving a so-called zero segment of a Chinese text that is not written but supplies information for interpreting the text.,natural-language-processing
d54f8852-6d73-45f1-8c61-4de356ab32c5,extreme-summarization,Extreme Summarization,Image credit: [TLDR: Extreme Summarization of Scientific Documents](https://arxiv.org/pdf/2004.15011v3.pdf),natural-language-processing
64a338a3-98d6-48e1-a340-120bc77a7646,text-to-video-search,Text-to-video search,,natural-language-processing
166e0a1a-58c5-4e8c-8542-00e43bbc8808,entity-disambiguation,Entity Disambiguation,"**Entity Disambiguation** is the task of linking mentions of ambiguous entities to their referent entities in a knowledge base such as Wikipedia.
Source: [Leveraging Deep Neural Networks and Knowledge Graphs for Entity Disambiguation ](https://arxiv.org/abs/1504.07678)",natural-language-processing
8bbf553a-b1fb-4ce1-ba74-eada25f92f95,topic-models,Topic Models,"A topic model is a type of statistical model for discovering the abstract ""topics"" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for the discovery of hidden semantic structures in a text body.",natural-language-processing
ba38279d-7948-4f2e-b61b-a51274a0616a,multi-labeled-relation-extraction,Multi-Labeled Relation Extraction,,natural-language-processing
a9bb2eca-380e-48f8-bd38-f337fcb07b33,gre-reading-comprehension,GRE Reading Comprehension,,natural-language-processing
09196e81-5276-48c4-8397-33c8c52465bd,abuse-detection,Abuse Detection,"Abuse detection is the task of identifying abusive behaviors, such as hate speech, offensive language, sexism and racism, in utterances from social media platforms (Source: https://arxiv.org/abs/1802.00385).",natural-language-processing
857ecd64-b299-4fff-8755-9fae2c3d0868,nested-named-entity-recognition,Nested Named Entity Recognition,"Nested named entity recognition is a subtask of information extraction that seeks to locate and classify nested named entities (i.e., hierarchically structured entities) mentioned in unstructured text (Source: Adapted from Wikipedia).",natural-language-processing
81213004-0afa-4a8b-a85e-76fda51e833c,parallel-corpus-mining,Parallel Corpus Mining,Mining a corpus of bilingual sentence pairs that are translations of each other.,natural-language-processing
7546b072-b82c-449d-8083-cb08e542c9d0,negation-detection,Negation Detection,Negation detection is the task of identifying negation cues in text.,natural-language-processing
78c158c7-2415-45ce-ace0-4b2a14713cee,relationship-extraction-distant-supervised,Relationship Extraction (Distant Supervised),"Relationship extraction is the task of extracting semantic relationships from a text. Extracted relationships usually
occur between two or more entities of a certain type (e.g. Person, Organisation, Location) and fall into a number of
semantic categories (e.g. married to, employed by, lives in).",natural-language-processing
93f0a41f-d62c-4664-b77f-ae8ef62bc32c,sequential-pattern-mining,Sequential Pattern Mining,"**Sequential Pattern Mining** is the process that discovers relevant patterns between data examples where the values are delivered in a sequence.
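A minimal illustrative sketch (not a full algorithm such as PrefixSpan): counting which contiguous length-2 patterns occur in at least a minimum number of input sequences; the session data below is made up for the example:
```
from collections import Counter

def frequent_bigrams(sequences, min_support=2):
    # count each contiguous length-2 pattern at most once per sequence
    counts = Counter()
    for seq in sequences:
        counts.update(set(zip(seq, seq[1:])))
    # keep patterns that occur in at least min_support sequences
    return {pattern: c for pattern, c in counts.items() if c >= min_support}

sessions = [
    ['login', 'search', 'add_to_cart', 'checkout'],
    ['login', 'search', 'browse'],
    ['search', 'add_to_cart', 'checkout'],
]
print(frequent_bigrams(sessions))
# -> ('login', 'search'), ('search', 'add_to_cart') and ('add_to_cart', 'checkout'), each with support 2
```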
Source: [Big Data Analytics for Large Scale Wireless Networks: Challenges and Opportunities ](https://arxiv.org/abs/1909.08069)",natural-language-processing
c544a3a6-45da-4c55-b9cd-bf5d459b0eb5,arabic-text-diacritization,Arabic Text Diacritization,Addition of diacritics to undiacritized Arabic texts for word disambiguation.,natural-language-processing
5e50f52f-162f-4d0d-9310-6c51c8169cac,story-generation,Story Generation,"Story generation is the task of automatically generating a coherent narrative, often from a set of premises or a brief summary.",natural-language-processing
dc6a8884-de5e-4599-8b63-fce87609f7fa,target-oriented-opinion-words-extraction,target-oriented opinion words extraction,The objective of TOWE is to extract the corresponding opinion words describing or evaluating the target from the review.,natural-language-processing
c2fac144-2031-4def-b3f9-5969ddc61443,relational-captioning,Relational Captioning,,natural-language-processing
405ebac1-4026-4bca-977d-0025accf6a5a,medical-named-entity-recognition,Medical Named Entity Recognition,,natural-language-processing
5adcb8a6-5153-49a7-b28d-74c5e4532e42,pesona-dialogue-in-story,Persona Dialogue in Story,Building persona dialogue in a story,natural-language-processing
e28c2c9f-8420-4c2d-9dae-64d4fb8eba24,rumour-detection,Rumour Detection,"Rumor detection is the task of identifying rumors, i.e. statements whose veracity is not quickly or ever confirmed, in utterances on social media platforms.",natural-language-processing
4d0c3385-af2b-4a41-9a4d-7268193f3f27,paraphrase-identification,Paraphrase Identification,"The goal of **Paraphrase Identification** is to determine whether a pair of sentences have the same meaning.
Source: [Adversarial Examples with Difficult Common Words for Paraphrase Identification ](https://arxiv.org/abs/1909.02560)
Image source: [On Paraphrase Identification Corpora ](http://www.lrec-conf.org/proceedings/lrec2014/pdf/1000_Paper.pdf)",natural-language-processing
df618593-0acf-490c-b67f-3edbf25896e4,phrase-grounding,Phrase Grounding,"Given an image and a corresponding caption, the **Phrase Grounding** task aims to ground each entity mentioned by a noun phrase in the caption to a region in the image.
Source: [Phrase Grounding by Soft-Label Chain Conditional Random Field ](https://arxiv.org/abs/1909.00301)",natural-language-processing
5c10f1c8-b711-4665-b492-2608d744a1ef,extract-aspect,Extract Aspect,"Aspect extraction is the task of identifying and extracting terms relevant for opinion mining and sentiment analysis, for example terms for product attributes or features.",natural-language-processing
9c2e0e76-dc69-47f0-9632-69ce10fbd567,multi-word-expression-embedding,multi-word expression embedding,Learn embeddings for multi-word expressions,natural-language-processing
61c0a572-feda-440f-8972-d2012ec03242,opinion-mining,Opinion Mining,"Identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer's attitude towards a particular topic, product, etc. is positive, negative, or neutral (Source: Oxford Languages)
Image Source: [Deep learning for sentiment analysis: A survey](https://onlinelibrary.wiley.com/doi/abs/10.1002/widm.1253)",natural-language-processing
e5e37abd-6625-4e87-aa2b-dd57d3c5b37d,concept-to-text-generation,Concept-To-Text Generation,"Generating natural language text from a conceptualized representation, such as an ontology.",natural-language-processing
832f35fb-fb11-4970-b07d-8d9ac3d71deb,conversational-response-selection,Conversational Response Selection,Conversational response selection refers to the task of identifying the most relevant response to a given input sentence from a collection of sentences.,natural-language-processing
48184e49-fa6a-40f5-9c4c-dc764ed66450,language-acquisition,Language Acquisition,Language acquisition refers to tasks related to the learning of a second language.,natural-language-processing
b2721334-8fb0-45d3-985f-06c50e3d5a09,blocking,Blocking,"Entity resolution (also known as entity matching, record linkage, or duplicate detection) is the task of finding records that refer to the same real-world entity across different data sources (e.g., data files, books, websites, and databases). (Source: Wikipedia)
**Blocking** is a crucial step in any entity resolution pipeline because a pair-wise comparison of all records across two data sources is infeasible.
Blocking applies a computationally cheap method to generate a smaller set of candidate record pairs, reducing the workload of the matcher.
During matching a more expensive pair-wise matcher generates a final set of matching record pairs.
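A minimal sketch of the idea, assuming toy record dictionaries with a shared name field and a cheap blocking key (the lowercased first token of the name); the helper names are illustrative only:
```
from collections import defaultdict

def blocking_key(record):
    # cheap key: lowercased first token of the name field
    return record['name'].split()[0].lower()

def candidate_pairs(records_a, records_b):
    # group records from both sources by their blocking key
    blocks = defaultdict(lambda: ([], []))
    for r in records_a:
        blocks[blocking_key(r)][0].append(r)
    for r in records_b:
        blocks[blocking_key(r)][1].append(r)
    # only records that share a key reach the expensive matcher
    for left, right in blocks.values():
        for a in left:
            for b in right:
                yield a, b
```
Instead of comparing every record in one source against every record in the other, the matcher only sees pairs that share a key, which is what keeps the pipeline tractable.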
Survey on blocking:
- [Papadakis et al.: Blocking and Filtering Techniques for Entity Resolution: A Survey](https://dl.acm.org/doi/pdf/10.1145/3377455), 2020.",natural-language-processing
e823570e-8a9d-4a93-8103-01fd15768f29,transition-based-dependency-parsing,Transition-Based Dependency Parsing,,natural-language-processing
0c22682d-4960-4f3a-ab20-49a1cd7b7d46,reading-comprehension,Reading Comprehension,"Most current question answering datasets frame the task as reading comprehension where the question is about a paragraph or document and the answer often is a span in the document.
Some specific tasks of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension can be divided into four categories: **cloze style**, **multiple choice**, **span prediction**, and **free-form answer**. Read more about each category [here](https://paperswithcode.com/paper/a-survey-on-machine-reading-comprehension-1).
Benchmark datasets used for testing a model's reading comprehension abilities include [MovieQA](/dataset/movieqa), [ReCoRD](/dataset/record), and [RACE](/dataset/race), among others.
The Machine Reading group at UCL also provides an [overview of reading comprehension tasks](https://uclnlp.github.io/ai4exams/data.html).
Figure source: [A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets](https://arxiv.org/pdf/2006.11880.pdf)",natural-language-processing
9c03441c-fab9-43d4-a5aa-3b1f3a290b6b,information-retrieval,Information Retrieval,"Information retrieval is the task of ranking a list of documents or search results in response to a query
( Image credit: [sudhanshumittal](https://github.com/sudhanshumittal/Information-retrieval-system) )",natural-language-processing
c1c5f4c4-b471-46d4-a956-c2ffa57ac9f1,spanish-text-diacritization,Spanish Text Diacritization,Addition of diacritics for undiacritized Spanish Wikipedia texts.,natural-language-processing
4e3cbfd4-ab93-4c10-be8a-d475e727234b,emotion-cause-pair-extraction,Emotion-Cause Pair Extraction,,natural-language-processing
3b3e44a1-697e-4d54-b49b-de53e05adfd2,workflow-discovery,Workflow Discovery,"Workflow Discovery (WD) was introduced by [Workflow Discovery from Dialogues in the Low Data Regime](https://openreview.net/forum?id=L9othQvPks). WD aims to extract workflows that have either implicitly or explicitly guided task-oriented dialogues between two people. Workflows extracted from a conversation consist of a summary of the key actions taken during the dialogue. These workflows consist of pre-defined terms for actions and slots when possible, but the approach also allows actions that are not known in advance to be invented by the model online and used as new steps in the generated workflow.",natural-language-processing
e75b0a22-2868-49da-a897-2baf57dd974c,clickbait-detection,Clickbait Detection,"Clickbait detection is the task of identifying clickbait, a form of false advertisement, that uses hyperlink text or a thumbnail link that is designed to attract attention and to entice users to follow that link and read, view, or listen to the linked piece of online content, with a defining characteristic of being deceptive, typically sensationalized or misleading (Source: Adapted from Wikipedia)",natural-language-processing
acda3012-4915-480a-98c1-cbad447b0891,column-type-annotation,Column Type Annotation,"**Column type annotation** (CTA) refers to the task of predicting the semantic type of a table column and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). The labels that are usually used in a CTA problem are semantic types from vocabularies like DBpedia, Schema.org or WikiData. Some examples are: *Book*, *Country*, *LocalBusiness* etc.
CTA can either be treated as a multi-class classification problem, where a column is annotated with only one semantic type, or as a multi-label classification problem, where a column can be annotated using multiple semantic types.",natural-language-processing
9ad581bf-976c-4c20-93a9-37c8c39a3196,zero-shot-machine-translation,Zero-Shot Machine Translation,Translate text or speech from one language to another without supervision.,natural-language-processing
93e944b5-e5ed-4469-9213-c0826bb080a0,dialogue-rewriting,Dialogue Rewriting,,natural-language-processing
08bef5d5-bec0-48d9-a434-d0c6a313ee44,contextualized-literature-based-discovery,Contextualized Literature-based Discovery,"Given a seed term (e.g., a task or method in NLP, or a disease in biomedicine) and corresponding background (e.g., challenges for a given task), the model's aim is to generate idea suggestions.
Contextualized Literature-Based Discovery (C-LBD) takes two different formulations: idea sentence generation and idea node prediction.",natural-language-processing
b6c6d0aa-c682-48b1-b082-a15398d0c815,fine-grained-opinion-analysis,Fine-Grained Opinion Analysis,"Fine-Grained Opinion Analysis aims to: (i) detect opinion expressions that convey attitudes such as sentiments, agreements, beliefs, or intentions, (ii) measure their intensity, (iii) identify their holders i.e. entities that express an attitude, (iv) identify their targets i.e. entities or propositions at which the attitude is directed, and (v) classify their target-dependent attitude.
( Image credit: [SRL4ORL](https://arxiv.org/pdf/1711.00768v3.pdf) )",natural-language-processing
da6059c0-7ec2-4289-9116-19cc24d02358,multi-hop-reading-comprehension,Multi-Hop Reading Comprehension,,natural-language-processing
2c0e4315-a2b4-4465-8e1a-f1047ea1c660,scientific-article-summarization,Scientific Document Summarization,,natural-language-processing
141d20ad-b02e-40ce-acca-37873108f9f6,propaganda-technique-identification,Propaganda technique identification,,natural-language-processing
64dac3b5-d8d4-4e56-a040-23213ce1eb12,text-categorization,Text Categorization,"**Text Categorization** is the task of automatically assigning pre-defined categories to documents written in natural languages. Several types of Text Categorization have been studied, each of which deals with different types of documents and categories, such as topic categorization to detect discussed topics (e.g., sports, politics), spam detection, and sentiment classification to determine the sentiment typically in product or movie reviews.
Source: [Effective Use of Word Order for Text Categorization with Convolutional Neural Networks ](https://arxiv.org/abs/1412.1058)",natural-language-processing
36d9b2d1-f3a5-4a16-8305-3c4d50d371ec,long-range-modeling,Long-range modeling,"A new task for testing the long-sequence modeling capabilities and efficiency of language models.
Image credit: [SCROLLS: Standardized CompaRison Over Long Language Sequences](https://arxiv.org/pdf/2201.03533v1.pdf)",natural-language-processing
0867bc4e-52f5-4ded-898c-0d37bd3f5764,extract-aspect-polarity-tuple,Extract aspect-polarity tuple,,natural-language-processing
3862d62a-4c81-4036-92d4-4e25ca595eeb,face-selection,Face Selection,A task where an agent should select at most two sentences from the paper as argumentative facts.,natural-language-processing
c548a10f-1e3d-48c6-a362-9390ea6c4415,event-relation-extraction,Event Relation Extraction,"To extract relations among events, such as event coreference, temporal, causal and subevent relations.",natural-language-processing
e4bd95ea-190b-4f9e-8044-cf07cfa7ec33,counterspeech-detection,Counterspeech Detection,"Counter-speech detection is the task of detecting counter-speech, i.e., a crowd-sourced response that argues, disagrees, or presents an opposing view to extremism or hateful content on social media platforms (Source: Adapted from: https://icsr.info/wp-content/uploads/2018/03/ICSR-Report-Challenging-Hate-Counter-speech-Practices-in-Europe.pdf)",natural-language-processing
07499610-4381-428e-8d6f-4b1a56875ccb,semantic-parsing,Semantic Parsing,"**Semantic Parsing** is the task of transducing natural language utterances into formal meaning representations. The target meaning representations can be defined according to a wide variety of formalisms. These include linguistically motivated semantic representations that are designed to capture the meaning of any sentence, such as λ-calculus or abstract meaning representations. Alternatively, for more task-driven approaches to Semantic Parsing, it is common for meaning representations to represent executable programs such as SQL queries, robotic commands, smart phone instructions, and even general-purpose programming languages like Python and Java.
Source: [Tranx: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation ](https://arxiv.org/abs/1810.02720)",natural-language-processing
6667b64e-2903-4605-82f2-2ba52687e556,named-entity-recognition-in-vietnamese,Named Entity Recognition In Vietnamese,,natural-language-processing
7d3bb004-e5b9-4b3e-aa2c-5f84e933e07f,table-type-detection,Table Type Detection,"Table Type Detection is the task of predicting the semantic type of a table and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). The labels used for annotation in this task are types from vocabularies like DBpedia, Schema.org etc. like *Music* or *Hotel*. The semantic type of a table can indicate the content of a table. For example, if the semantic type of a table is *Music* it indicates that the table consists of music records/entities.",natural-language-processing
10bb6946-edff-4eee-b3b7-3834c4561918,controllable-language-modelling,Controllable Language Modelling,"Given unsupervised language modeling as a pretraining task, the objective is to generate texts under particular control attributes (Topic, Sentiment)",natural-language-processing
cacacf82-806c-4d03-868e-7c8c249cc64b,weakly-supervised-data-denoising,Weakly Supervised Data Denoising,,natural-language-processing
da585295-3a9f-4ae9-80cd-6eabc7eadf42,simultaneous-speech-to-text-translation,Simultaneous Speech-to-Text Translation,"Simultaneous Speech-to-Text translation aims to translate concurrently with the source speech. It is crucial since it enables real-time interpretation of conversations, lectures and talks.",natural-language-processing
b47a7b6c-be44-4716-a2da-5015d2919aad,protein-folding,Protein Folding,,natural-language-processing
bb7eee53-3e99-4dba-a9b0-142a0f39f1ce,reliable-intelligence-identification,Reliable Intelligence Identification,,natural-language-processing
caa99953-c0cc-48e5-be1f-47e53dabe99b,sstod,SSTOD,,natural-language-processing
4a196739-a7e1-4138-872c-757d7e59a6b9,intent-discovery,Intent Discovery,"Given a set of labelled and unlabelled utterances, the idea is to identify existing (known) intents and potential (new) intents. This method can be utilised in a conversational system setting.",natural-language-processing
0a53fae2-22ce-408d-a0aa-021f9a1e880f,low-resource-neural-machine-translation,Low-Resource Neural Machine Translation,Low-resource machine translation is the task of machine translation on a low-resource language where large data may not be available.,natural-language-processing
ca075753-e921-4553-b091-060ed91d2384,cell-entity-annotation,Cell Entity Annotation,"**Cell Entity Annotation** (CEA) is the task of annotating cells in a table with an entity from a knowledge base and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). The labels in a CEA problem are entities from knowledge bases such as DBpedia or WikiData. It is usually considered a multi-class classification problem.
CEA can also be referred to in different works as the problem of entity linking, as it links a cell in a table to an entity.",natural-language-processing
8e041c84-b80c-4535-90ce-aa4f6e315763,semantic-similarity,Semantic Similarity,"The main objective of **Semantic Similarity** is to measure the distance between the semantic meanings of a pair of words, phrases, sentences, or documents. For example, the word “car” is more similar to “bus” than it is to “cat”. The two main approaches to measuring Semantic Similarity are knowledge-based approaches and corpus-based, distributional methods.
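As a minimal sketch of the corpus-based, distributional approach, similarity can be computed as the cosine between word vectors; the 4-dimensional vectors below are toy values for illustration, not real embeddings:
```
import numpy as np

def cosine_similarity(u, v):
    # cosine of the angle between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy 4-dimensional embeddings (illustrative values only)
vectors = {
    'car': np.array([0.9, 0.1, 0.3, 0.0]),
    'bus': np.array([0.8, 0.2, 0.4, 0.1]),
    'cat': np.array([0.1, 0.9, 0.0, 0.6]),
}

print(cosine_similarity(vectors['car'], vectors['bus']))  # high similarity
print(cosine_similarity(vectors['car'], vectors['cat']))  # low similarity
```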
Source: [Visual and Semantic Knowledge Transfer for Large Scale Semi-supervised Object Detection ](https://arxiv.org/abs/1801.03145)",natural-language-processing
7550ea24-daec-4f66-a8b6-4f803057c56b,propaganda-detection,Propaganda detection,,natural-language-processing
2a0d1aa2-8a08-4e24-9bc3-b935ded6651f,pretrained-multilingual-language-models,Pretrained Multilingual Language Models,,natural-language-processing
7d6bdcb4-3b6b-4c38-89c1-9996998e1df7,coreference-resolution,Coreference Resolution,"Coreference resolution is the task of clustering mentions in text that refer to the same underlying real world entities.
Example:
```
            +-------------+
            |             |
I voted for Obama because he was most aligned with my values"", she said.
|                                                  |            |
+--------------------------------------------------+------------+
```
""I"", ""my"", and ""she"" belong to the same cluster and ""Obama"" and ""he"" belong to the same cluster.",natural-language-processing
0fabcb70-2e27-466f-8d65-c2e4a1e10f48,emotion-recognition-in-context,Emotion Recognition in Context,,natural-language-processing
fa34fde2-6665-498a-bc9c-1b38fe36c33a,persian-sentiment-anlysis,Persian Sentiment Analysis,Persian Sentiment analysis is the task of classifying the polarity of a given text.,natural-language-processing
342c804b-7ecf-42de-9387-4a12260647ce,variable-detection,Variable Detection,Identifying whether a sentence contains a variable mention.,natural-language-processing
743565da-b29f-4852-b472-a6a4ee9d4a54,language-identification,Language Identification,Language identification is the task of determining the language of a text.,natural-language-processing
3a31b8a9-cbf5-4cdb-9c27-33acd92d1c20,collaborative-plan-acquisition,Collaborative Plan Acquisition,,natural-language-processing
4736c280-4e0b-44f8-b315-62f404e58678,zero-shot-text-to-image-generation,Zero-Shot Text-to-Image Generation,Image credit: [GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models](https://paperswithcode.com/paper/glide-towards-photorealistic-image-generation),natural-language-processing
4b86847d-650c-4262-a95f-a6348ec7d2ce,citation-intent-classification,Citation Intent Classification,Identifying the reason why an author cited another author.,natural-language-processing
cc00ec35-afce-4d16-b00d-94d664edb65b,sentiment-dependency-learning,Sentiment Dependency Learning,,natural-language-processing
e5153e71-0255-42ba-b947-f0cf09f2a9e3,few-shot-htc-1,Few-shot HTC,,natural-language-processing
85a3e51b-442b-46a7-a224-a9f57dd762aa,privacy-preserving-deep-learning,Privacy Preserving Deep Learning,"The goal of privacy-preserving (deep) learning is to train a model while preserving privacy of the training dataset. Typically, it is understood that the trained model should be privacy-preserving (e.g., due to the training algorithm being differentially private).",natural-language-processing
dc154953-57c1-4390-b0b2-03f96216a0e1,model-editing,Model Editing,,natural-language-processing
da77d1ff-754e-4a12-a92b-09c33c7ad576,semi-supervised-formality-style-transfer,Semi-Supervised Formality Style Transfer,Semi-Supervised Formality Style Transfer,natural-language-processing
b9ed302c-1893-4e03-a66c-7bbdf18ec322,lexical-analysis,Lexical Analysis,Lexical analysis is the process of converting a sequence of characters into a sequence of tokens (strings with an assigned and thus identified meaning). (Source: Adapted from Wikipedia),natural-language-processing
b9573650-dbd4-4f5e-88ed-e7dfdf3e68d8,multimodal-deep-learning,Multimodal Deep Learning,"**Multimodal deep learning** is a type of deep learning that combines information from multiple modalities, such as text, image, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that includes multiple types of information and using the network to make predictions based on this combined data.
One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done using a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight the contribution of each modality based on its importance for the task at hand.
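A minimal sketch of the fusion idea, assuming pre-extracted text and image feature vectors and a simple attention-weighted combination (the dimensions and module name are illustrative, not a reference implementation):
```
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    # combines two modality features with learned attention weights
    def __init__(self, text_dim=768, image_dim=512, hidden_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim, 1)

    def forward(self, text_feat, image_feat):
        # project both modalities into a shared space
        modalities = torch.stack(
            [self.text_proj(text_feat), self.image_proj(image_feat)], dim=1)
        # score each modality and normalize into attention weights
        weights = torch.softmax(self.attn(modalities), dim=1)
        # weighted sum yields the fused representation
        return (weights * modalities).sum(dim=1)

fusion = AttentionFusion()
fused = fusion(torch.randn(4, 768), torch.randn(4, 512))  # shape (4, 256)
```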
Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.",natural-language-processing
38715f36-ebeb-4413-8ac7-07841139b427,dialogue-act-classification,Dialogue Act Classification,"Dialogue act classification is the task of classifying an utterance with respect to the function it serves in a dialogue, i.e. the act the speaker is performing. Dialogue acts are a type of speech acts (for Speech Act Theory, see [Austin (1975)](http://www.hup.harvard.edu/catalog.php?isbn=9780674411524) and [Searle (1969)](https://www.cambridge.org/core/books/speech-acts/D2D7B03E472C8A390ED60B86E08640E7)).",natural-language-processing
cb39ac5d-46dc-4b6f-8211-f1ec01bd91b9,few-shot-relation-classification,Few-Shot Relation Classification,"**Few-Shot Relation Classification** is a particular relation classification task under minimum annotated data, where a model is required to classify a new incoming query instance given only a few support instances (e.g., 1 or 5) during testing.
Source: [MICK: A Meta-Learning Framework for Few-shot Relation Classification with Little Training Data ](https://arxiv.org/abs/2004.14164)",natural-language-processing
fe3135df-4b81-43c6-b2b1-c9861e93ca4b,timeline-summarization,Timeline Summarization,Identifying key dates of major events and providing short descriptions of what happened on these dates. (Source: https://www.aclweb.org/anthology/D19-5403/),natural-language-processing
f725fad0-61fe-435e-ba2b-53fa7f332ba5,poll-generation,Poll Generation,,natural-language-processing
b3e9bff1-7511-4853-91b8-bad230923991,interactive-evaluation-of-dialog,Interactive Evaluation of Dialog,Tasks that involve building/adapting conversation models to work effectively in an interactive setting.,natural-language-processing
48c25382-1433-4127-988a-8e9b900d9913,scientific-results-extraction,Scientific Results Extraction,"Scientific results extraction is the task of extracting relevant result information (e.g., in the case of Machine learning performance results: task, dataset, metric name, metric value) from the scientific literature.",natural-language-processing
52e7725a-18bf-4ca0-8b72-8bee44e25759,hate-span-identification,Hate Span Identification,,natural-language-processing
615c53fa-74c2-4a21-869a-856cbd82efc8,coding-problem-tagging,Coding Problem Tagging,Assigning data structures to coding problems,natural-language-processing
5866b4d2-309c-4767-a57f-c5931933cbb9,spam-detection,Spam detection,,natural-language-processing
1c76a956-5c21-4669-a132-b484a1ed3499,code-documentation-generation,Code Documentation Generation,"Code Documentation Generation is a supervised task where a code function is the input to the model, and the model generates the documentation for this function.
Description from: [CodeTrans: Towards Cracking the Language of Silicone's Code Through Self-Supervised Deep Learning and High Performance Computing](https://arxiv.org/pdf/2104.02443.pdf)",natural-language-processing
5b16f42b-adf1-454f-a801-2f5d62118661,hate-intensity-prediction,Hate Intensity Prediction,,natural-language-processing
1e5d39cc-7419-4742-95ff-afa5b700108e,pgtask,Profile Generation,"**Profile Generation** is the task of creating a profile for an individual or entity based on information about them. The goal of profile generation is to gather as much relevant information about the individual or entity as possible, in order to build an accurate and comprehensive profile.",natural-language-processing
759304dc-16e0-458c-9638-6829c309f444,riddle-sense,Riddle Sense,,natural-language-processing
1f273d78-77f3-4eae-bac3-062cdfb947cf,natural-language-understanding,Natural Language Understanding,"**Natural Language Understanding** is an important field of Natural Language Processing which contains various tasks such as text classification, natural language inference and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.
Source: [Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test? ](https://arxiv.org/abs/1812.05411)",natural-language-processing
df7247a2-6b3f-4853-99e7-36719f3a140c,vietnamese-datasets,Vietnamese Datasets,,natural-language-processing
ad28503f-2c14-49da-87e7-3f00fc14f710,multi-agent-integration,Multi-agent Integration,,natural-language-processing
e4b1d11b-b6d2-40d6-bd76-df625b76ab18,latvian-text-diacritization,Latvian Text Diacritization,Addition of diacritics for undiacritized Latvian Wikipedia texts.,natural-language-processing
da6708da-d647-4861-94ec-67b1d6d7be83,temporal-information-extraction,Temporal Information Extraction,"Temporal information extraction is the identification of chunks/tokens corresponding to temporal intervals, and the extraction and determination of the temporal relations between those. The entities extracted may be temporal expressions (timexes), eventualities (events), or auxiliary signals that support the interpretation of an entity or relation. Relations may be temporal links (tlinks), describing the order of events and times, or subordinate links (slinks) describing modality and other subordinative activity, or aspectual links (alinks) around the various influences aspectuality has on event structure.
The markup scheme used for temporal information extraction is well-described in the ISO-TimeML standard, and also on [www.timeml.org](http://www.timeml.org).
```
PRI20001020.2000.0127
NEWS STORY
10/20/2000 20:02:07.85
The Navy has changed its account of the attack on the USS Cole in Yemen.
Officials now say the ship was hit nearly two hours after it had docked.
Initially the Navy said the explosion occurred while several boats were helping
the ship to tie up. The change raises new questions about how the attackers
were able to get past the Navy security.
10/20/2000 20:02:28.05
```
To avoid leaking knowledge about temporal structure, train, dev and test splits must be made at document level for temporal information extraction.",natural-language-processing
58ff89eb-9b1c-43ac-afae-47db9bc7942d,cross-lingual-transfer,Cross-Lingual Transfer,"Cross-lingual transfer refers to transfer learning using data and models available for one language for which ample such resources are available (e.g., English) to solve tasks in another, commonly more low-resource, language.",natural-language-processing
718698d1-9cdf-4531-b6db-ffc92a94839c,reader-aware-summarization,Reader-Aware Summarization,Using reader comments to improve summarization performance.,natural-language-processing
8534a12c-09c1-4344-976c-9eef16327c12,news-generation,News Generation,Generation of larger segments of text with consistent topic and evolving story.,natural-language-processing
8f46bd31-9885-482f-a328-d5967a9e03fe,pico,PICO,"The proliferation of healthcare data has contributed to the widespread usage of the PICO paradigm for creating specific clinical questions from randomized controlled trials (RCTs).
PICO is a mnemonic that stands for:
Population/Problem: Addresses the characteristics of populations involved and the specific characteristics of the disease or disorder.
Intervention: Addresses the primary intervention (including treatments, procedures, or diagnostic tests) along with any risk factors.
Comparison: Compares the efficacy of any new interventions with the primary intervention.
Outcome: Measures the results of the intervention, including improvements or side effects.
PICO is an essential tool that aids evidence-based practitioners in creating precise clinical questions and searchable keywords to address those issues. It calls for a high level of technical competence and medical domain knowledge, but it’s also frequently very time-consuming.
Automatically identifying PICO elements from this large sea of data can be made easier with the aid of machine learning (ML) and natural language processing (NLP). This helps evidence-based practitioners develop precise research questions more quickly.
Empirical studies have shown that the use of PICO frames improves the specificity and conceptual clarity of clinical problems, elicits more information during pre-search reference interviews, leads to more complex search strategies, and yields more precise search results.",natural-language-processing
6b34b7bd-c2d9-4404-a17f-3ca378e4ce8e,morphological-inflection,Morphological Inflection,"**Morphological Inflection** is the task of generating a target (inflected form) word from a source word (base form), given a morphological attribute, e.g. number, tense, and person etc. It is useful for alleviating data sparsity issues in translating morphologically rich languages. The transformation from a base form to an inflected form usually includes concatenating the base form with a prefix or a suffix and substituting some characters. For example, the inflected form of a Finnish stem eläkeikä (retirement age) is eläkeiittä when the case is abessive and the number is plural.
Source: [Tackling Sequence to Sequence Mapping Problems with Neural Networks ](https://arxiv.org/abs/1810.10802)",natural-language-processing
afd951cb-1093-497d-96dd-cb9633cf7438,cross-lingual-ner,Cross-Lingual NER,,natural-language-processing
a61be846-b332-4e9a-95e7-41f22063a547,literature-mining,Literature Mining,The task of mining knowledge from publication texts using NLP,natural-language-processing
139fb9d5-4390-4219-a607-211a44e479a5,question-generation,Question Generation,"The goal of **Question Generation** is to generate a valid and fluent question according to a given passage and the target answer. Question Generation can be used in many scenarios, such as automatic tutoring systems, improving the performance of Question Answering models and enabling chatbots to lead a conversation.
Source: [Generating Highly Relevant Questions ](https://arxiv.org/abs/1910.03401)",natural-language-processing
53d2c2d3-bfd5-494e-a799-f9178db5a22f,grounded-open-vocabulary-acquisition,Grounded Open Vocabulary Acquisition,,natural-language-processing
191dd766-32ac-43e5-8716-19c089876f89,entity-extraction,Entity Extraction using GAN,,natural-language-processing
f6796304-c228-4964-aa43-0e53792d3d38,emotion-cause-extraction,Emotion Cause Extraction,,natural-language-processing
2ba6df96-4210-4993-98b6-c41ccd5cafc0,handwritten-chinese-text-recognition,Handwritten Chinese Text Recognition,"Handwritten Chinese text recognition is the task of interpreting handwritten Chinese input, e.g., from images of documents or scans.",natural-language-processing
cf38a53e-c610-4668-93ba-08ec0cee174c,coherence-evaluation,Coherence Evaluation,Evaluating the overall coherence of text as measured by its readability and flow through ideas.,natural-language-processing
7751258e-f4a6-49b3-acb0-ef64355e4bea,bangla-spelling-error-correction,Bangla Spelling Error Correction,Bangla spell checker which improves the quality of suggestions for misspelled words.,natural-language-processing
fa640630-0f2a-4c22-b919-bfcd17e9b7a2,fact-based-text-editing,Fact-based Text Editing,"Fact-based Text Editing aims to revise a given document to better describe the facts in a knowledge base (e.g., several triples).",natural-language-processing
54bf3667-1902-4154-af4c-87b1c1f1a3b8,question-answer-categorization,Question-Answer categorization,,natural-language-processing
345b9841-0f86-4a22-8273-0b95b030147a,meeting-summarization,Meeting Summarization,"Generating a summary from meeting transcriptions.
A survey for this task: [Abstractive Meeting Summarization: A Survey](https://paperswithcode.com/paper/abstractive-meeting-summarization-a-survey)",natural-language-processing
2840985f-5a12-4b21-bc3b-c35f7459c1cb,linguistic-acceptability,Linguistic Acceptability,"Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.
Image Source: [Warstadt et al](https://arxiv.org/pdf/1901.03438v4.pdf)",natural-language-processing
d3549b83-1f88-405e-8b4c-fc551ae797b1,pronunciation-dictionary-creation,Pronunciation Dictionary Creation,Create a pronunciation dictionary,natural-language-processing
bd765b73-8421-4cde-b602-5be9cbdd8e23,extractive-tags-summarization,Extractive Tags Summarization,"The goal of the Extractive Tags Summarization (ETS) task is to shorten the list of tags corresponding to a digital image while preserving their representativeness, i.e., to extract the important tags from the context of an image and its corresponding tags.",natural-language-processing
603043ca-7acc-44af-b66f-32b98a876930,propaganda-span-identification,Propaganda span identification,,natural-language-processing
678e8631-b2c4-4623-8cdd-14cc5023009f,emotion-detection-and-trigger-summarization,Emotion Detection and Trigger Summarization,,natural-language-processing
f65afbb7-5cd3-4b3a-8fa6-6bc18c059ea6,dialog-learning,Dialog Learning,,natural-language-processing
984e47c3-1af0-4de9-9d89-1fb07195b466,chinese-named-entity-recognition,Chinese Named Entity Recognition,"Chinese named entity recognition is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. from Chinese text (Source: Adapted from Wikipedia).",natural-language-processing
ef331af7-4e9e-48c8-a976-06f9d9152966,disambiguation-q,Disambiguation QA,,natural-language-processing
09e167f3-9013-49f4-a1ec-cfc725ea4f4b,implicit-discourse-relation-classification,Implicit Discourse Relation Classification,"Parsing a text into a set of discourse relations between two adjacent or non-adjacent discourse units in the absence of explicit connectives, such as 'but' or 'however', and classifying those relations. (Source: Adapted from https://www.cs.brandeis.edu/~clp/conll15st/intro.html)",natural-language-processing
162db201-4ab0-4520-8fe4-3b89346f624f,hidden-aspect-detection,Hidden Aspect Detection,,natural-language-processing
88c9cf33-3731-4392-964e-2e8f5e494482,speculation-scope-resolution,Speculation Scope Resolution,Identify the scope of a speculation cue that indicates uncertainty in a given text.,natural-language-processing
fb59a99f-cc88-4353-af69-d0a61b176767,self-learning,Self-Learning,,natural-language-processing
8891e701-ed96-434e-a561-c1529b0d33ea,deep-clustering,Deep Clustering,,natural-language-processing
6b042226-e82e-4722-9864-0727cee48ac9,open-relation-modeling,Open Relation Modeling,,natural-language-processing
2a1f2614-b1b9-4bf3-8116-d22d45b3f80c,question-to-declarative-sentence,Question to Declarative Sentence,"Question Answer to Declarative Sentence (QA2D) is the task of generating declarative statements from question-answer pairs.
See:
Demszky, D., Guu, K., & Liang, P. (2018). Transforming Question Answering Datasets Into Natural Language Inference Datasets. arXiv preprint. arXiv:1809.02922",natural-language-processing
908ee443-aa64-4a02-b676-f687ddfc4102,xlm-r,XLM-R,XLM-R,natural-language-processing
ae676a43-a3da-47b4-bd9c-16e9717a71ee,multimodal-generation,multimodal generation,"**Multimodal generation** refers to the process of generating outputs that incorporate multiple modalities, such as images, text, and sound. This can be done using deep learning models that are trained on data that includes multiple modalities, allowing the models to generate output that is informed by more than one type of data.
For example, a multimodal generation model could be trained to generate captions for images that incorporate both text and visual information. The model could learn to identify objects in the image and generate descriptions of them in natural language, while also taking into account contextual information and the relationships between the objects in the image.
Multimodal generation can also be used in other applications, such as generating realistic images from textual descriptions or generating audio descriptions of video content. By combining multiple modalities in this way, multimodal generation models can produce more accurate and comprehensive output, making them useful for a wide range of applications.",natural-language-processing
6d412111-635a-4772-a09e-423ff8a16f98,multi-grained-named-entity-recognition,Multi-Grained Named Entity Recognition,"Multi-Grained Named Entity Recognition aims to detect and recognize entities on multiple granularities, without explicitly assuming non-overlapping or totally nested structures.",natural-language-processing
d80347ae-44ad-4212-a0ec-c4cd1d2e1845,subjectivity-analysis,Subjectivity Analysis,"A task related to sentiment analysis, subjectivity analysis has the goal of labeling an opinion as either subjective or objective.",natural-language-processing
0efe2891-ccde-4a7b-b532-5f52ca671902,humor-detection,Humor Detection,Humor detection is the task of identifying comical or amusing elements.,natural-language-processing
1b0767c9-9dee-44f5-96e1-4724e0a8e37b,explanation-generation,Explanation Generation,,natural-language-processing
a2605f87-4a6e-4e6b-b5dd-c7eed09893b0,ruin-names,Ruin Names,,natural-language-processing
560ffd00-8dfd-4052-9976-bb6bb820bf34,kg-to-text,KG-to-Text Generation,"Knowledge-graph-to-text (KG-to-text) generation aims to generate high-quality texts which are consistent with input graphs.
Description from: [JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs](https://arxiv.org/pdf/2106.10502v1.pdf)",natural-language-processing
718e246a-55b2-4733-a0f6-e69862d14efe,intent-detection,Intent Detection,"**Intent Detection** is a vital component of any task-oriented conversational system. In order to understand the user’s current goal, the system must leverage its intent detector to classify the user’s utterance (provided in varied natural language) into one of several predefined classes, that is, intents. However, the performance of intent detection has been hindered by the data scarcity issue, as it is non-trivial to collect sufficient examples for new intents. Effectively identifying user intents in a few-shot setting has therefore become a popular research problem.
Source: [Few-shot Intent Detection Datasets, Baselines and Results ](https://github.com/jianguoz/Few-Shot-Intent-Detection)
Source: [Are Pretrained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection ](https://github.com/jianguoz/Few-Shot-Intent-Detection)
Source: [Efficient Intent Detection with Dual Sentence Encoders](https://github.com/PolyAI-LDN/polyai-models)",natural-language-processing
40c3a936-6623-45d9-a138-4313ad863a40,understanding-fables,Understanding Fables,,natural-language-processing
d0a21ca4-61a3-4902-a4e3-dbeffedf91f7,unsupervised-part-of-speech-tagging,Unsupervised Part-Of-Speech Tagging,Marking up a word in a text (corpus) as corresponding to a particular part of speech based on both its definition and its context by using an untagged corpus for training and producing the tagset by induction (Source: Wikipedia).,natural-language-processing
4ef7e148-cb77-414f-bc14-130b46ed5e1f,semantic-role-labeling-predicted-predicates,Semantic Role Labeling (predicted predicates),PropBank semantic role labeling with predicted predicates.,natural-language-processing
4c8172de-8f57-4ac8-82b8-2da263a149e0,semi-supervised-text-classification-1,Semi-Supervised Text Classification,,natural-language-processing
625f001c-09db-4a24-a28d-e014d73a5ea5,text-annotation,text annotation,,natural-language-processing
709e8f16-822a-4a35-8894-14422ba7b729,occupation-prediction,Occupation prediction,,natural-language-processing
f8ac3f92-a720-4053-9b7e-14e97f2d5c05,short-text-clustering,Short Text Clustering,,natural-language-processing
5910ebc1-e05b-4ed2-b4ce-d2fa4b5454bb,extractive-summarization,Extractive Summarization,,natural-language-processing
3d4d0d86-088a-46cb-b566-0f8aff5a514e,speculation-detection,Speculation Detection,Identifying information in text that is speculative as opposed to factual information.,natural-language-processing
3ac1b692-7fcb-4524-9231-1012075c6bc5,summarization,Summarization,"Summarization is the task of producing a shorter version of one or several documents that preserves most of the
input's meaning.",natural-language-processing
d2c8a8c0-081c-4599-8fb6-aed5a7380ea7,image-sentence-alignment,image-sentence alignment,Predict the alignment (score) between an image and a sentence.,natural-language-processing
6242ae72-9664-427e-ac8b-a36e47917739,text-compression,Text Compression,,natural-language-processing
a3a179a4-5c53-44d4-bfc7-8afca1877244,visual-dialogue,Visual Dialog,"Visual Dialog requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a follow-up question about the image, the task is to answer the question.",natural-language-processing
7c291fe8-4a93-467a-adb4-b8f91e4c9cc3,chinese,Chinese,Chinese language processing is the task of applying natural language processing to the Chinese language.,natural-language-processing
cfcc6a9c-6760-482b-9ac4-769668d0ea0b,grounded-language-learning,Grounded language learning,Acquire the meaning of language in situated environments.,natural-language-processing
e5195380-db6c-4f51-989c-83cd3acc3576,czech-text-diacritization,Czech Text Diacritization,Addition of diacritics for undiacritized Czech Wikipedia texts.,natural-language-processing
1fd54720-b5ee-41de-9641-4d9803d87763,question-rewriting,Question Rewriting,,natural-language-processing
e582dcf0-bddd-4fdc-9958-1e6a3cd62199,legal-outcome-extraction,legal outcome extraction,,natural-language-processing
5bbd46e0-0282-4058-8d8f-1f513fcff933,scientific-concept-extraction,Scientific Concept Extraction,Identification of scientific concepts in research articles.,natural-language-processing
bf765fa2-9af2-4b37-9bdd-213e39fc5c58,csc,Chinese Spelling Error Correction,,natural-language-processing
68891cf1-82d3-41cb-9c05-53ae2b9c5e8b,temporal-tagging,Temporal Tagging,"Identification of the extent of a temporal expression (timex) in a text. The temporal expressions can be explicit (e.g. ""October 27"") or implicit (e.g. ""last month"").
Other names: Timex Extraction; Timex Identification; Timex Detection",natural-language-processing
bcf32572-d97a-4c3e-a3d9-0c1597e2f04a,ccg-supertagging,CCG Supertagging,"Combinatory Categorical Grammar (CCG; [Steedman, 2000](http://www.citeulike.org/group/14833/article/8971002)) is a
highly lexicalized formalism. The standard parsing model of [Clark and Curran (2007)](https://www.mitpressjournals.org/doi/abs/10.1162/coli.2007.33.4.493)
uses over 400 lexical categories (or _supertags_), compared to about 50 part-of-speech tags for typical parsers.
Example:
| Vinken | , | 61 | years | old |
| --- | ---| --- | --- | --- |
| N| , | N/N | N | (S[adj]\ NP)\ NP |",natural-language-processing
32ad9f9a-6b7b-4ba8-b4ac-29ee27e466fc,meme-captioning,Meme Captioning,Automatic generation of natural language descriptions of the content of an input meme.,natural-language-processing
5befd81f-8749-4ff0-968c-6829117a46f0,story-completion,Story Completion,"Given a story prefix and two possible endings, determining which one is the correct (coherent) ending of the story.",natural-language-processing
7a623bec-1ac3-4c07-9a79-86c0f82a6392,movie-dialog-same-or-different,Movie Dialog Same Or Different,,natural-language-processing
d869a3fb-7880-48d1-b768-77fdff84de4e,discourse-marker-prediction,Discourse Marker Prediction,,natural-language-processing
a3197b08-df0f-4979-bd1e-fc5de9e5eba9,selection-bias,Selection bias,,natural-language-processing
297e3ceb-4be6-45ce-b366-a4c6604804c0,logical-reasoning-reading-comprehension,Logical Reasoning Reading Comprehension,"Logical reasoning reading comprehension is a task proposed in the ReClor paper (ICLR 2020) to evaluate the logical reasoning ability of machine reading comprehension models. ReClor is the first dataset for logical reasoning reading comprehension.",natural-language-processing
2d051a9d-2d87-46a4-a832-0a3c86a356ec,chatbot,Chatbot,"**Chatbot** or conversational AI is a language model designed and implemented to have conversations with humans.
Source: [Open Data Chatbot ](https://arxiv.org/abs/1909.03653)
[Image source](https://arxiv.org/pdf/2006.16779v3.pdf)",natural-language-processing
63a16939-ccea-4ad1-bb88-53e0109f846f,continual-relation-extraction,Continual Relation Extraction,"Compared with traditional relation extraction, continual relation extraction (CRE) aims to help the model learn new relations while maintaining accurate classification of old ones.",natural-language-processing
686a9196-3cec-4612-a2ae-93ad339324b5,continual-named-entity-recognition,Continual Named Entity Recognition,Continual learning for named entity recognition,natural-language-processing
044ea3e5-bcef-4a1d-b233-50f7d4511905,anaphora-resolution,Anaphora Resolution,Resolving what expression a pronoun or a noun phrase refers to.,natural-language-processing
21a83b1e-ed71-4aa7-a1d4-51d9e41776f0,fact-selection,Fact Selection,A task where an agent should select at most two sentences from the paper as argumentative facts.,natural-language-processing
be7072ae-7da4-4ce0-b8d3-55e4a4478ffb,short-text-conversation,Short-Text Conversation,"Given a short text, finding an appropriate response (Source: http://staff.ustc.edu.cn/~cheneh/paper_pdf/2013/HaoWang.pdf)",natural-language-processing
f12db63e-98dc-4f50-b62e-0f59b688210d,open-domain-question-answering,Open-Domain Question Answering,Open-domain question answering is the task of question answering on open-domain datasets such as Wikipedia.,natural-language-processing
a5eb9431-a9bc-47dc-b25a-c991b3cd0fe1,goal-oriented-dialogue-systems,Goal-Oriented Dialogue Systems,Achieving a pre-defined goal through a dialog.,natural-language-processing
1a6e037a-c62c-4758-89b0-d91624338786,sketch-to-text-generation,Sketch-to-text Generation,"Generate a full text based on a sketch (key information consisting of textual spans, phrases, or words), where the sketch may only make up a very small part of the full text.",natural-language-processing
58198c8f-8c8f-40ac-9c7f-25cec288ece2,complaint-comment-classification,Complaint Comment Classification,,natural-language-processing
d9591edd-1e6f-4fe6-927b-4be809a0c9f9,cross-domain-named-entity-recognition,Cross-Domain Named Entity Recognition,,natural-language-processing
b6713e7f-7760-4cd1-98ab-045887b1e9b8,cross-lingual-information-retrieval,Cross-Lingual Information Retrieval,"Cross-Lingual Information Retrieval (CLIR) is a retrieval task in which search queries and candidate documents are written in different languages. CLIR can be very useful in some scenarios. For example, a reporter may want to search foreign language news to obtain different perspectives for her story; an inventor may explore the patents in another country to understand prior art.",natural-language-processing
a163ea57-cd85-48fe-9c89-d7a0bfe3b28e,dialogue-interpretation,Dialogue Interpretation,Interpreting the meaning of a dialog.,natural-language-processing
67482fbd-c2e1-417a-960a-5a1d33ecf18c,semantic-dependency-parsing,Semantic Dependency Parsing,Identify semantic relationships between words in a text using a graph representation.,natural-language-processing
38c80f7d-66f5-44da-9f03-692d5ef8a8ff,semeval-2022-task-4-1-binary-pcl-detection,SemEval-2022 Task 4-1 (Binary PCL Detection),,natural-language-processing
5b719e3f-bc8b-4941-bcf4-54d926592424,columns-property-annotation,Columns Property Annotation,"**Column Property Annotation** (CPA) refers to the task of predicting the semantic relation between two columns and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). The input of a CPA problem is most commonly a pair of columns, but can also be only one column. The labels used in CPA are properties from vocabularies. Some examples are *name*, *price*, *datePublished* etc.
CPA is usually a multi-class classification problem and is also referred to as column relation annotation or relation extraction in different works.",natural-language-processing
d6d07b4d-5501-45ae-b741-bea8e0280ed3,chemical-indexing,Chemical Indexing,Predict which chemicals should be indexed.,natural-language-processing
1cf8b58c-3336-4929-a9d4-c51fc9f5ee64,cross-lingual-semantic-textual-similarity,Cross-Lingual Semantic Textual Similarity,,natural-language-processing
b0b1dbca-6b60-4285-80a0-50a85f48bc47,mathematical-reasoning,Mathematical Reasoning,,natural-language-processing
25ac47a8-2d42-4ddc-893f-f7d1d804b91d,question-similarity,Question Similarity,"This is the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions",natural-language-processing
bc2fc76c-7daa-45bd-b29d-d430ae569dfa,entity-resolution,Entity Resolution,"**Entity resolution** (also known as entity matching, record linkage, or duplicate detection) is the task of finding records that refer to the same real-world entity across different data sources (e.g., data files, books, websites, and databases). (Source: Wikipedia)
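A minimal sketch of the pair-wise matching step, scoring candidate record pairs with a simple string-similarity threshold (the fields and threshold are illustrative; practical systems typically learn the matcher):
```
from difflib import SequenceMatcher

def similarity(a, b):
    # normalized string similarity between two field values
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_match(record_a, record_b, threshold=0.8):
    # average similarity over shared fields decides the match
    fields = record_a.keys() & record_b.keys()
    score = sum(similarity(str(record_a[f]), str(record_b[f])) for f in fields) / len(fields)
    return score >= threshold

a = {'name': 'Acme Corp.', 'city': 'Berlin'}
b = {'name': 'ACME Corporation', 'city': 'Berlin'}
print(is_match(a, b))  # True for this toy pair
```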
Surveys on entity resolution:
- [Vassilis et al.: End-to-End Entity Resolution for Big Data: A Survey](https://arxiv.org/pdf/1905.06397.pdf), 2020.
- [Barlaug and Gulla: Neural Networks for Entity Matching: A Survey](https://arxiv.org/pdf/2010.11075.pdf), 2021.
The task of entity resolution is closely related to the task of [entity alignment](https://paperswithcode.com/task/entity-alignment) which focuses on matching entities between knowledge bases. The task of [entity linking](https://paperswithcode.com/task/entity-linking) differs from entity resolution as entity linking focuses on identifying entity mentions in free text.",natural-language-processing
06cea5f8-b025-409f-b440-b969471ad9f2,snarks,SNARKS,,natural-language-processing
e6f765d2-59ce-4f44-9838-1add67cc9d29,abstract-anaphora-resolution,Abstract Anaphora Resolution,"Abstract Anaphora Resolution aims to resolve nominal expressions (e.g., this result, those two actions) and pronominal expressions (e.g. this, that, it) that refer to abstract-object-antecedents such as facts, events, plans, actions, or situations.",natural-language-processing
462185c3-360d-4c13-b848-370afaed07c5,sentence-pair-modeling,Sentence Pair Modeling,Comparing two sentences and their relationship based on their internal representation.,natural-language-processing
70c9a020-5ba5-4b25-b45c-c04724fd96aa,multimedia-generative-script-learning,Multimedia Generative Script Learning,"Given an activity goal $G$, an optional subgoal $M$ that specifies the concrete needs, and the previous multimedia step history $H_n=\{(S_1,V_1),...,(S_n,V_n)\}$ with length $n$, a model is expected to predict the next possible step $S_{n+1}$, where $S_i$ is a text sequence and $V_i$ is an image.",natural-language-processing
bc2b3ba3-21d2-41a1-8791-d5d4e32410e4,hypernym-discovery,Hypernym Discovery,"Given a corpus and a target term (hyponym), the task of hypernym discovery consists of extracting a set of its most appropriate hypernyms from the corpus. For example, for the input word “dog”, some valid hypernyms would be “canine”, “mammal” or “animal”.",natural-language-processing
b0a36304-a300-468a-9af7-2b7356c4428a,textual-analogy-parsing,Textual Analogy Parsing,"Textual Analogy Parsing (TAP) is the task of identifying analogy frames from text.
( Image credit: [Textual Analogy Parsing: What’s Shared and What’s Compared among Analogous Facts](https://arxiv.org/pdf/1809.02700v1.pdf) )",natural-language-processing
798a0049-ba8a-44c6-b530-f94164a73db6,multlingual-neural-machine-translation,Multilingual Neural Machine Translation,,natural-language-processing
94fb68bd-35b3-49c2-8a6b-203d48c2246b,component-classification,Component Classification,Classification of argumentative components inside a document,natural-language-processing
be279f99-1fcd-4142-abbb-a0629d7cc020,formal-fallacies-syllogisms-negation,Formal Fallacies Syllogisms Negation,,natural-language-processing
9b65effb-d56a-4180-981a-6455e493b2a7,thai-word-tokenization,Thai Word Segmentation,Thai word segmentation,natural-language-processing
05cc8910-88c4-4b0a-9d40-6fa28b844224,news-annotation,News Annotation,Assigning the appropriate labels to a news text based on a set of pre-defined labels.,natural-language-processing
49562956-3baa-4643-89ff-be62f5a5e930,unsupervised-kg-to-text-generation,Unsupervised KG-to-Text Generation,,natural-language-processing
2a8d1d87-e45a-447c-a2eb-6fb0f79e9396,cg,NER,"Named entity recognition (NER) involves identifying key information in the text and classifying it into a set of predefined categories, such as places, names, and organizations.",natural-language-processing
2f93cbe7-3b34-4aac-9774-f4c08b1bf8ce,keyphrase-extraction,Keyphrase Extraction,"A classic task to extract salient phrases that best summarize a document, which essentially has two stages: candidate generation and keyphrase ranking.",natural-language-processing
924b209c-adfc-4a1d-97be-a7dae7b8bb3c,vietnamese-sentiment-analysis,Vietnamese Sentiment Analysis,,natural-language-processing
09a843a7-31f3-4d2e-b854-ae35f82b1395,dynamic-topic-modeling,Dynamic Topic Modeling,,natural-language-processing
3c9b694c-ea13-4d13-8d9d-226a4e9cfa29,drs-parsing,DRS Parsing,"Discourse Representation Structures (DRS) are formal meaning representations introduced by Discourse Representation Theory. DRS parsing is a complex task, comprising other NLP tasks, such as semantic role labeling, word sense disambiguation, co-reference resolution and named entity tagging. Also, DRSs show explicit scope for certain operators, which allows for a more principled and linguistically motivated treatment of negation, modals and quantification, as has been advocated in formal semantics. Moreover, DRSs can be translated to formal logic, which allows for automatic forms of inference by third parties.
Description from [NLP Progress](http://nlpprogress.com/english/semantic_parsing.html)",natural-language-processing
0c19d67a-2440-4169-960f-83b4478df778,personalized-and-emotional-conversation,Personalized and Emotional Conversation,"Personalized and Emotional Conversation (**PEC**) is defined as follows: Given the personalized information ($P_{R1}$ and $P_{R2}$) of two speakers, their conversation context $C$, the emotion $E_K$ and dialogue act (DA) $D_K$ of the response to be generated, and the personalized information $P_{K}$ of the responder, the goal is to generate an anthropomorphic response $Y$.
\begin{equation}
Y = \arg\max_{Y'} P(Y' \mid C, E_K, D_K, P_K) \label{task_definition}
\end{equation}
Particularly, context $C=\{(U_1,E_1,D_1,P_1),\cdots,(U_{K-1},E_{K-1},D_{K-1},P_{K-1})\}$ contains multi-turn conversation content (i.e., utterance $U_i$), emotion $E_i$ of the associated utterance, DA $D_i$ of the associated utterance, and personalized information $P_i$ of the associated speaker.",natural-language-processing
1fbd3bda-fcba-4c85-9364-9fe1e2652d3f,machine-reading-comprehension,Machine Reading Comprehension,"**Machine Reading Comprehension** is one of the key problems in Natural Language Understanding, where the task is to read and comprehend a given text passage, and then answer questions based on it.
Source: [Making Neural Machine Reading Comprehension Faster ](https://arxiv.org/abs/1904.00796)",natural-language-processing
bb9fbb94-4903-406d-af5e-0beb35e1d245,problem-solving-deliberation,Problem-Solving Deliberation,,natural-language-processing
31c5f897-9e77-4988-b77a-e76fc0aac1bb,learning-with-noisy-labels,Learning with noisy labels,"Learning with noisy labels means that an adversary has intentionally corrupted the labels, which would otherwise have come from a ""clean"" distribution. This setting can also be used to cast learning from only positive and unlabeled data.",natural-language-processing
4eea9414-00aa-455f-95c5-4ccfeb0f3e40,commonsense-rl,Commonsense Reasoning for RL,Commonsense reasoning for Reinforcement Learning agents,natural-language-processing
ebb813f9-2a0a-4293-8209-381bcc8fbec2,question-answer-generation,Question-Answer-Generation,,natural-language-processing
e774c987-0df8-475c-a79f-27003d4a58cf,chinese-word-segmentation,Chinese Word Segmentation,Chinese word segmentation is the task of splitting Chinese text (i.e. a sequence of Chinese characters) into words (Source: www.nlpprogress.com).,natural-language-processing
ea107857-e928-45c5-a0f7-ccf53d9a195c,paraphrase-generation,Paraphrase Generation,"Paraphrase Generation involves transforming a natural language sentence to a new sentence, that has the same semantic meaning but a different syntactic or lexical surface form.",natural-language-processing
2eae6f4d-e0dd-40d2-a7e0-f79fb5ce16d0,abstractive-sentence-summarization,Sentence Summarization,Generating a summary of a given sentence.,natural-language-processing
64a005a3-1414-44bd-b16b-07f2efe6f1f0,job-prediction,Job Prediction,,natural-language-processing
3ec3c439-8de8-4032-bd06-dc31923064d4,intent-classification,Intent Classification,"**Intent Classification** is the task of correctly labeling a natural language utterance from a predetermined set of intents
Source: [Multi-Layer Ensembling Techniques for Multilingual Intent Classification ](https://arxiv.org/abs/1806.07914)",natural-language-processing
21486371-cc22-4736-91bd-a878d00619e0,slot-filling,Slot Filling,"The goal of **Slot Filling** is to identify from a running dialog different slots, which correspond to different parameters of the user’s query. For instance, when a user queries for nearby restaurants, key slots for location and preferred food are required for a dialog system to retrieve the appropriate information. Thus, the main challenge in the slot-filling task is to extract the target entity.
Source: [Real-time On-Demand Crowd-powered Entity Extraction ](https://arxiv.org/abs/1704.03627)
Image credit: [Robust Retrieval Augmented Generation for Zero-shot Slot Filling](https://arxiv.org/pdf/2108.13934.pdf)",natural-language-processing
e99d039e-2f2b-42f2-bbb5-fb1a34535b3d,paper-generation,Paper generation,"Generating scientific paper texts, such as abstracts.",natural-language-processing
280d8c2c-494c-428a-9e64-6932aa0a8282,extractive-document-summarization,Extractive Text Summarization,"Given a document, selecting a subset of the words or sentences which best represents a summary of the document.",natural-language-processing
a0e70cd2-bd35-4d04-a44b-8789862a63fe,relation-explanation,relation explanation,,natural-language-processing
997a744e-295d-4ff4-9d75-b0ff54e36eca,unsupervised-text-classification,Unsupervised Text Classification,,natural-language-processing
bcb0e408-0140-4fb5-9b10-751abacd6600,event-driven-trading,Event-Driven Trading,Making stock trading decisions based on events.,natural-language-processing
3fdce292-cf68-4f49-a182-e1e5dbe52407,suggestion-mining,Suggestion mining,"Suggestion mining can be defined as the extraction of suggestions from unstructured text.",natural-language-processing
2bbe24cd-7279-4d5d-be59-f6927d6dfc2f,answer-generation,Answer Generation,,natural-language-processing
4c32119c-7201-40b9-b880-8b4e2dad052d,recognizing-emotion-cause-in-conversations,Recognizing Emotion Cause in Conversations,"Given an utterance U, labeled with emotion E, the task is to extract the causal spans S from the conversational history H (including utterance U) that sufficiently represent the causes of emotion E.",natural-language-processing
b089b78e-5eb3-4ae4-94dc-e9cba6f69e76,contextualised-word-representations,Contextualised Word Representations,,natural-language-processing
1fa3249a-db9a-4659-8fd2-b74f80bfeac2,text-simplification,Text Simplification,"**Text Simplification** is the task of reducing the complexity of the vocabulary and sentence structure of text while retaining its original meaning, with the goal of improving readability and understanding. Simplification has a variety of important societal applications, for example increasing accessibility for those with cognitive disabilities such as aphasia, dyslexia, and autism, or for non-native speakers and children with reading difficulties.
Source: [Multilingual Unsupervised Sentence Simplification](https://arxiv.org/abs/2005.00352)",natural-language-processing
e6d059ab-c7a4-4110-893c-68a565f2fd67,stance-detection-us-election-2020-trump,Stance Detection (US Election 2020 - Trump),,natural-language-processing
5c5ab90b-ed74-4cad-afb1-e0767fac58ef,morphological-tagging,Morphological Tagging,"Morphological tagging is the task of assigning labels to a sequence of tokens that describe them morphologically. As compared to Part-of-speech tagging, morphological tagging also considers morphological features, such as case, gender or the tense of verbs.",natural-language-processing
91c6d280-1895-4bdd-8fa1-e7ad9b4df23a,claim-evidence-pair-extraction-cepe,Claim-Evidence Pair Extraction (CEPE),"Since evidence clearly supports the corresponding claims in an article, claims and evidence mutually reinforce each other in context. Therefore, we hypothesize that the claim extraction task and the evidence extraction task may benefit each other. By combining these two subtasks, we define the second integrated task as: given a specific topic and relevant articles, extract the claim-evidence pairs (CEPs) from the articles.",natural-language-processing
81832646-15fd-435f-af4f-da7673f58da8,taxonomy-expansion,Taxonomy Expansion,Expand a seed taxonomy with new unseen nodes.,natural-language-processing
135ec9a0-e065-46f5-b4c3-a921b75ddcc3,multi-modal-dialogue-generation,Multi-modal Dialogue Generation,Image credit: [OpenViDial](https://github.com/ShannonAI/OpenViDial),natural-language-processing
13abfcab-982b-4f3e-8a20-6b2ead99e80a,nonsense-words-grammar,Nonsense Words Grammar,,natural-language-processing
ad931b28-18de-4051-b02e-41aa55d614ab,answer-selection,Answer Selection,"**Answer Selection** is the task of identifying the correct answer to a question from a pool of candidate answers. This task can be formulated as a classification or a ranking problem.
Source: [Learning Analogy-Preserving Sentence Embeddings for Answer Selection ](https://arxiv.org/abs/1910.05315)",natural-language-processing
342b3455-bc77-47c2-b1e3-142440c40fe2,automatic-writing,Automatic Writing,Generating text based on internal machine representations.,natural-language-processing
9b96b488-fef6-458f-af93-85f0e9935b9a,implicit-relations,Implicit Relations,,natural-language-processing
2509609c-3204-4cc7-bab8-0831b01155bd,catalog-extraction,Catalog Extraction,Catalog Extraction aims to reconstruct catalog-based document trees.,natural-language-processing
ae7de282-3f07-46ab-86be-38ad8537ffa7,dialogue-management,Dialogue Management,"( Image credit: [Bocklisch et al.](https://arxiv.org/pdf/1712.05181v2.pdf) )",natural-language-processing
7499a4a1-29ec-441a-9d4d-ee7439eae426,multimodal-sentiment-analysis,Multimodal Sentiment Analysis,"Multimodal sentiment analysis is the task of performing sentiment analysis with multiple data sources - e.g. a camera feed of someone's face and their recorded speech.
( Image credit: [ICON: Interactive Conversational Memory Network
for Multimodal Emotion Detection](https://www.aclweb.org/anthology/D18-1280.pdf) )",natural-language-processing
06d03467-7006-4bd0-9fc7-6843caa44231,task-oriented-dialogue-systems,Task-Oriented Dialogue Systems,Achieving a pre-defined task through a dialog.,natural-language-processing
13076795-0fc0-433e-8106-f965da77dad2,arqmath2,ARQMath2,Answer Retrieval for Questions about Math v2 (2021),natural-language-processing
a5ddd66c-dc47-49b5-80ca-e0a4c5a20056,sonnet-generation,Sonnet Generation,Generating a poetry in the form of a sonnet.,natural-language-processing
20f5f573-af8e-4db9-a2cb-dbf8cb3eb4a5,document-summarization,Document Summarization,"Automatic **Document Summarization** is the task of rewriting a document into its shorter form while still retaining its important content. The most popular two paradigms are extractive approaches and abstractive approaches. Extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases which are not in the original document.
Source: [HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization ](https://arxiv.org/abs/1905.06566)",natural-language-processing
1bd9b879-dd43-4045-8eef-07e66240562a,nested-mention-recognition,Nested Mention Recognition,Nested mention recognition is the task of correctly modeling the nested structure of mentions.,natural-language-processing
cbb76b11-4331-45ba-af7f-82c135593a1c,task-completion-dialogue-policy-learning,Task-Completion Dialogue Policy Learning,,natural-language-processing
a3d2c4d1-0f78-4e63-bce3-f75c65e73313,open-domain-dialog,Open-Domain Dialog,,natural-language-processing
a83783b4-4ecf-4420-96d8-8e565f7ed6b0,clinical-assertion-status-detection,Clinical Assertion Status Detection,"Classifying the assertions made on given medical concepts as being present, absent, or possible in the patient, conditionally present in the patient under certain circumstances, hypothetically present in the patient at some future point, or mentioned in the patient report but associated with someone else (e.g., a clinical finding is assigned a label such as present (”patient is diabetic”), absent (”patient denies nausea”), conditional (”dyspnea while climbing stairs”), or associated with someone else (”family history of depression”)).
( [Source](https://arxiv.org/pdf/2012.04005v1.pdf) )",natural-language-processing
f51e511f-d45b-4a04-b9d1-ea0c02bb70d9,temporal-casual-qa,Temporal/Causal QA,,natural-language-processing
7e6bef30-2e94-4bc1-bb37-70a3b00bc479,rules-of-thumb-generation,Rules-of-thumb Generation,Generate relevant rules-of-thumb given text input.,natural-language-processing
fff83e1c-911d-40b7-88c1-b36cb1c4ac91,cross-lingual-bitext-mining,Cross-Lingual Bitext Mining,Cross-lingual bitext mining is the task of mining sentence pairs that are translations of each other from large text corpora.,natural-language-processing
c6a7ebe9-5910-4110-a20c-32f1e8dec344,aspect-sentiment-triplet-extraction,Aspect Sentiment Triplet Extraction,"Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting the triplets of target entities, their associated sentiment, and opinion spans explaining the reason for the sentiment.",natural-language-processing
cb9b4e46-ab23-4f66-a076-010504d0b1fa,unsupervised-abstractive-sentence-compression,Unsupervised Sentence Compression,"Producing a shorter sentence by removing redundant information, preserving the grammaticality and the important content of the original sentence, without supervision. (Source: nlpprogress.com)",natural-language-processing
f8326942-5ff4-4024-9d0c-96eb7402ef15,pcl-detection,PCL Detection,,natural-language-processing
b095ca4a-1ac4-4671-a631-afabcda8aeab,question-selection,Question Selection,,natural-language-processing
ed220a9a-4cc8-4b8b-8815-d8d47e9433d6,unsupervised-opinion-summarization,Unsupervised Opinion Summarization,,natural-language-processing
5cd442b5-1db5-4cf6-95f6-fa02a2ee3227,conversational-response-generation,Conversational Response Generation,"Given an input conversation, generate a natural-looking text reply to the last conversation element.
Image credit: [DIALOGPT : Large-Scale Generative Pre-training for Conversational Response Generation](https://www.aclweb.org/anthology/2020.acl-demos.30.pdf)",natural-language-processing
85c16dc4-7dbc-4891-a8e0-05d817c8f2fa,text-infilling,Text Infilling,"**Text Infilling** is the task of predicting missing spans of text which are consistent with the preceding and subsequent text. Text Infilling is a generalization of the cloze task—cloze historically refers to infilling individual words.
Source: [Enabling Language Models to Fill in the Blanks ](https://arxiv.org/abs/2005.05339)",natural-language-processing
14825574-c589-41ab-a3e4-330300a24841,table-based-fact-verification,Table-based Fact Verification,Verifying facts given semi-structured data.,natural-language-processing
a1f3e376-1b3a-47aa-a99a-652c834dc975,unsupervised-sentence-summarization,Unsupervised Sentence Summarization,Generating a summary of a given sentence without supervision.,natural-language-processing
e110b408-8b1b-4a70-a9df-1295328a2001,complex-word-identification,Complex Word Identification,Identifying difficult words or expressions in a text.,natural-language-processing
2451b412-6ecc-4c51-bedf-90a4838aef48,fg-1-pg-1,FG-1-PG-1,"The model learns one entity in the first step (no continuous learning is required), and learns one new entity in each subsequent continuous learning step.",natural-language-processing
e9e674fe-b16b-4d3f-8d8d-e36c8e0c3375,cross-lingual-zero-shot-dependency-parsing,Cross-lingual zero-shot dependency parsing,"Cross-lingual zero-shot parsing is the task of inferring the dependency parse of sentences from one language without any labeled training trees for that language.
Description from [NLP Progress](http://nlpprogress.com/english/dependency_parsing.html)",natural-language-processing
289e918e-1d40-4be5-b84c-06f8e4bdcd8f,multilingual-named-entity-recognition,Multilingual Named Entity Recognition,,natural-language-processing
ed75c074-1ade-4d12-ba9c-5c36dbddd7cd,argument-mining,Argument Mining,"**Argument Mining** is a field of corpus-based discourse analysis that involves the automatic identification of argumentative structures in text.
Source: [AMPERSAND: Argument Mining for PERSuAsive oNline Discussions ](https://arxiv.org/abs/2004.14677)",natural-language-processing
938c9e66-19de-46ed-9810-09aaab3eac26,aspect-category-detection,Aspect Category Detection,Aspect category detection (ACD) in sentiment analysis aims to identify the aspect categories mentioned in a sentence.,natural-language-processing
ae504e2b-5125-4661-b56b-a811388e8f20,diachronic-word-embeddings,Diachronic Word Embeddings,,natural-language-processing
6d04747d-2131-4f14-8b79-51b958c16380,event-causality-identification,Event Causality Identification,,natural-language-processing
9dd2679a-f45c-4fab-bbfe-5f35e2f18a69,chunking,Chunking,"Chunking, also known as shallow parsing, identifies continuous spans of tokens that form syntactic units such as noun phrases or verb phrases.
Example:
| Vinken | , | 61 | years | old |
| --- | ---| --- | --- | --- |
| B-NP | I-NP | I-NP | I-NP | I-NP |",natural-language-processing
1569b9c9-52b4-412b-afb9-5b2925fdfa2c,vietnamese-text-diacritization,Vietnamese Text Diacritization,Addition of diacritics for undiacritized Vietnamese Wikipedia texts.,natural-language-processing
e83e7ed6-564d-4155-ba8c-4ee2c0d04d30,key-point-matching,Key Point Matching,"Given a debatable topic, a set of key points per stance, and a set of crowd arguments supporting or contesting the topic, report for each argument its match score for each of the key points under the same stance towards the topic.",natural-language-processing
fc1cf6e6-a16d-4afa-9bad-67a27ce01764,hyperbaton,Hyperbaton,,natural-language-processing
3214ac84-103c-4f00-9033-f4be0284c1a8,negation-scope-resolution,Negation Scope Resolution,,natural-language-processing
3340d020-f2fe-4fa7-b3e4-bf8ee47a8570,multi-word-expression-sememe-prediction,multi-word expression sememe prediction,Predict sememes for unannotated multi-word expressions.,natural-language-processing
324e3941-96b0-447e-9898-775365c374e1,multilingual-nlp,Multilingual NLP,,natural-language-processing
4c1a8873-5e14-4f49-92fd-7a54b44b253c,text-effects-transfer,Text Effects Transfer,"Text effects transfer refers to the task of transferring typography styles (e.g., color, texture) to an input image of a text element.",natural-language-processing
329a3833-9ba0-4579-830f-c8fba1158007,emotion-classification,Emotion Classification,"Emotion classification, or emotion categorization, is the task of recognising emotions to classify them into the corresponding category. Given an input, classify it as 'neutral or no emotion' or as one, or more, of several given emotions that best represent the mental state of the subject's facial expression, words, and so on. Some example benchmarks include ROCStories, Many Faces of Anger (MFA), and GoEmotions. Models can be evaluated using metrics such as the Concordance Correlation Coefficient (CCC) and the Mean Squared Error (MSE).",natural-language-processing
c021e84d-ec87-459c-a6f6-4c9edbb1c167,game-of-cricket,Game of Cricket,,playing-games
c6806e82-7e06-416c-9713-781ad58bd01d,procgen-hard-100m,Procgen Hard (100M),,playing-games
4ab5b41b-590b-404f-af63-40de005ff4a9,video-games,Video Games,,playing-games
4cec7eab-97e5-4157-85a4-03e1c562aa35,multi-agent-path-finding,Multi-Agent Path Finding,,playing-games
866b2d81-5472-402c-8f4e-a62e5602c078,injury-prediction,Injury Prediction,,playing-games
5bf10ecf-4a4c-42d6-a544-77fd84355464,openai-gym,OpenAI Gym,"An open-source toolkit from OpenAI that implements several Reinforcement Learning benchmarks including: classic control, Atari, Robotics and MuJoCo tasks.
(Description by [Evolutionary learning of interpretable decision trees](https://paperswithcode.com/paper/evolutionary-learning-of-interpretable))
(Image Credit: [OpenAI Gym](https://gym.openai.com/))",playing-games
1a2b88ed-c631-4a27-a39a-931130ad50df,score,NetHack Score,Mean in-game score over 1000 episodes with random seeds not seen during training. See https://arxiv.org/abs/2006.13760 (Section 2.4 Evaluation Protocol) for details.,playing-games
e58324e9-6f18-4534-9079-538a6f3447fd,game-of-go,Game of Go,"Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent. The task is to train an agent to play the game and be superior to other players.",playing-games
cbec0598-08f5-46f2-9644-e20aee72ccb9,board-games,Board Games,,playing-games
7dfcff3f-f9f8-4710-8560-a1c197876f55,deep-sea-treasure-image-version,"Deep-Sea Treasure, Image version","Image state version of the multi-objective reinforcement learning toy environment originally introduced in ""Empirical evaluation methods for multiobjective reinforcement learning algorithms"" by P. Vamplew et al.",playing-games
f0422133-f362-483b-9c3c-bb58f92f6a09,game-of-shogi,Game of Shogi,,playing-games
8c904baf-98f5-45b0-8f3c-3a98ad352e5f,text-based-games,text-based games,Text-based games used to evaluate reinforcement learning agents.,playing-games
edb60c9e-c1b4-4e44-b0a8-190fab995310,montezumas-revenge,Montezuma's Revenge,"Montezuma's Revenge is an ATARI 2600 Benchmark game that is known to be difficult to perform on for reinforcement learning algorithms. Solutions typically employ algorithms that incentivise environment exploration in different ways.
For the state-of-the art tables, please consult the parent Atari Games task.
( Image credit: [Q-map](https://github.com/fabiopardo/qmap) )",playing-games
cc01668a-5dd4-4e61-b5b9-1f30cb2ec017,offline-rl,Offline RL,,playing-games
93baac4d-4295-4a02-99a1-3622fda2436e,game-of-hanabi,Game of Hanabi,,playing-games
35195550-7703-47bc-bab9-6d46452d9e86,game-of-football,Game of Football,,playing-games
17c92bbf-a1aa-4a3e-b4fd-582402c6a8de,control-with-prametrised-actions,Control with Parametrised Actions,"Most reinforcement learning research papers focus on environments where the agent’s actions are either discrete or continuous. However, when training an agent to play a video game, it is common to encounter situations where actions have both discrete and continuous components. For example, a set of high-level discrete actions (e.g., move, jump, fire), each of them being associated with continuous parameters (e.g., target coordinates for the move action, direction for the jump action, aiming angle for the fire action). These kinds of tasks are included in Control with Parameterised Actions.",playing-games
2d758334-d206-42e2-947d-fa56b2b74e5b,snes-games,SNES Games,"The task is to train an agent to play SNES games such as Super Mario.
( Image credit: [Large-Scale Study of Curiosity-Driven Learning](https://github.com/openai/large-scale-curiosity) )",playing-games
40026de0-adcd-4bad-ae82-bb99b6051548,game-of-poker,Game of Poker,,playing-games
7f894726-fe90-4bfe-a97c-bbce652a4aa1,starcraft-ii,Starcraft II,"StarCraft II is an RTS game; the task is to train an agent to play the game.
( Image credit: [The StarCraft Multi-Agent Challenge](https://arxiv.org/pdf/1902.04043v2.pdf) )",playing-games
769a4eb7-85e6-429a-8d35-5b6202ccb2c1,continuous-control,Continuous Control,,playing-games
9b6ba3a8-093f-498f-abc7-86438ca827f3,smac-1,SMAC+,"Benchmarks for efficient exploration, completion of multi-stage tasks, and usage of environmental factors",playing-games
10288ddc-0eb3-479c-aa48-9696655b7951,real-time-strategy-games,Real-Time Strategy Games,"Real-Time Strategy (RTS) tasks involve training an agent to play video games with continuous gameplay and high-level macro-strategic goals such as map control, economic superiority and more.
( Image credit: [Multi-platform Version of StarCraft: Brood War in a Docker Container](https://github.com/Games-and-Simulations/sc-docker) )",playing-games
6223454d-c9e0-4ec6-9ba1-f8804de8cfd3,game-of-chess,Game of Chess,"Chess is a two-player strategy board game played on a chessboard, a checkered gameboard with 64 squares arranged in an 8×8 grid. The idea of making a machine that could beat a Grandmaster human player was a fascination in the artificial intelligence community for decades. Famously, IBM's Deep Blue beat Kasparov in the 1990s. More recently, more human-like approaches such as AlphaZero have appeared.",playing-games
e5185d80-526d-42a6-afa4-74d6da97e25e,klondike,Klondike,The most commonly played game in the family of Solitaire card games.,playing-games
e7e744f5-4290-45f0-aec7-3776f7e9925f,carracing-v0,Car Racing,https://gym.openai.com/envs/CarRacing-v0/,playing-games
7b997ca4-ebf0-4076-a428-d30bc2743810,smac-plus,SMAC Plus,Multi-agent reinforcement learning for completion of multi-stage tasks,playing-games
12c2e229-fc01-4d6f-93fc-03858841bb10,dota-2,Dota 2,"Dota 2 is a multiplayer online battle arena (MOBA). The task is to train one-or-more agents to play and win the game.
( Image credit: [OpenAI Five](https://openai.com/five/) )",playing-games
da4f01b0-7211-4498-873f-0a068d184d8a,2048,Playing the Game of 2048,,playing-games
4cf05b13-56b9-42bd-b3b9-e092ae78cc83,suduko,Game of Sudoku,,playing-games
33d8cf0a-643c-45f5-b26e-c5004ecc8d79,starcraft,Starcraft,"StarCraft I is an RTS game; the task is to train an agent to play the game.
( Image credit: [Macro Action Selection with Deep Reinforcement Learning in StarCraft](https://arxiv.org/pdf/1812.00336v3.pdf) )",playing-games
5563ae3f-c1da-4511-9c08-ccb19000be10,atari-games,Atari Games,"The Atari 2600 Games task (and dataset) involves training an agent to achieve high game scores.
( Image credit: [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/pdf/1312.5602v1.pdf) )",playing-games
9abe2430-856d-4e28-bea8-430f1aba5f92,nethack,NetHack,Mean in-game score over 1000 episodes with random seeds not seen during training. See https://arxiv.org/abs/2006.13760 (Section 2.4 Evaluation Protocol) for details.,playing-games
6d9fe331-1abc-4502-a221-5f2fa6f57377,solitaire,Solitaire,A family of single-player games using one or more standard decks of playing cards.,playing-games
88f1ab7f-94c3-4984-ac07-0854a3a68279,football-action-valuation,Football Action Valuation,,playing-games
f137ac99-53cb-48bb-a50f-49e9207d5a3e,acrobot,Acrobot,"The acrobot system includes two joints and two links, where the joint between the two links is actuated. Initially, the links are hanging downwards, and the goal is to swing the end of the lower link up to a given height.",playing-games
44bc12cd-6d16-476c-af5f-4239757bb6e4,dqn-replay-dataset,DQN Replay Dataset,,playing-games
16f4ee8a-cc58-4467-b31a-54658c33f174,card-games,Card Games,Card games involve playing cards: the task is to train an agent to play the game with specified rules and beat other players.,playing-games
54999d9d-c975-4f44-b2dc-188d0b3927c7,game-of-doom,Game of Doom,"Doom is an FPS game : the task is typically to train an agent to navigate the game environment, and additionally, acquire points by eliminating enemies.
( Image credit: [Playing FPS Games with Deep Reinforcement Learning](https://arxiv.org/pdf/1609.05521v2.pdf) )",playing-games
1c7d6351-800e-472e-b443-b11eef41616a,fps-games,FPS Games,"First-person shooter (FPS) games, such as Call of Duty, involve training an agent to play the game from a first-person viewpoint.
( Image credit: [Procedural Urban Environments for FPS Games](https://arxiv.org/pdf/1604.05791v1.pdf) )",playing-games
bf9686e8-dd44-443b-89d9-6f49bbc2ce2a,smac,SMAC,"The StarCraft Multi-Agent Challenge (SMAC) is a benchmark that provides elements of partial observability, challenging dynamics, and high-dimensional observation spaces. SMAC is built using the StarCraft II game engine, creating a testbed for research in cooperative MARL where each game unit is an independent RL agent.",playing-games
0e49f8c1-16e6-4dc9-8bf6-a4f7f7ebcc9e,pass-classification,Pass Classification,,playing-games
7b603b26-6a26-405c-b9db-ec4a63742029,model-based-reinforcement-learning,Model-based Reinforcement Learning,,reasoning
a85541a8-4a3e-4121-b25c-71bbd2a29aaa,elementary-mathematics,Elementary Mathematics,,reasoning
7cd9d17a-3777-4b73-b52c-17540b81989b,abstract-algebra,Abstract Algebra,,reasoning
ee8abcc1-6317-47d1-99ac-1ecc4e8f59ed,navigate,Navigate,,reasoning
68b52127-44ad-423a-bc41-76427e4bb4e9,metaphor-boolean,Metaphor Boolean,,reasoning
313e728e-5989-454f-880d-7140bf9d4f5e,penguins-in-a-table,Penguins In A Table,,reasoning
ff14dd2e-35db-4c28-a891-3edbfcee8860,checkmate-in-one,Checkmate In One,,reasoning
5542fca6-4d3e-404f-bdd5-c6fbb624370e,analogical-similarity,Analogical Similarity,,reasoning
68e6c84f-ab9c-4e76-a64f-f5beaeccec5e,temporal-sequences,Temporal Sequences,"This task asks models to answer questions about which times certain events could have occurred.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/temporal_sequences)
Image source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/temporal_sequences)",reasoning
4b2ee072-994e-40f8-a612-1c515b6dd23b,physical-intuition,Physical Intuition,,reasoning
9ae97919-ac07-451f-b1ec-8b95c21996d6,arithmetic-reasoning,Arithmetic Reasoning,,reasoning
4314df98-6aa3-4e54-a2a9-d4e8852c9293,identify-odd-metapor,Identify Odd Metaphor,,reasoning
4002ac45-93f7-4d15-864c-4b18d6bdccdb,crash-blossom,Crash Blossom,,reasoning
3b1eb865-b344-40d7-9c72-e441e7009455,logical-reasoning,Logical Reasoning,,reasoning
edf3ca6e-edb7-4c6e-8b85-b644be47c8b2,human-judgment-correlation,Human Judgment Correlation,A task where an algorithm should generate judgment scores that correlate with human judgments.,reasoning
f0cc6e63-1442-4d38-8c2a-9110ffed4a5c,presuppositions-as-nli,Presuppositions As NLI,,reasoning
5deeee07-4ea3-478e-91f7-cc1801b3f39d,abstract-argumentation,Abstract Argumentation,Identifying argumentative statements from natural language dialogs.,reasoning
35ed7fc9-8c74-478f-a302-c65292f9b044,causal-identification,Causal Identification,,reasoning
dadfed76-359d-4ac4-994e-c06032bfe559,college-mathematics,College Mathematics,,reasoning
b69de813-68e7-47de-a795-8cb69d364aa3,professional-accounting,Professional Accounting,,reasoning
64eed278-e660-4a35-b000-f2599090f958,commonsense-rl,Commonsense Reasoning for RL,Commonsense reasoning for Reinforcement Learning agents,reasoning
198571db-ec87-4e1e-85ea-298ccd274c2a,entailed-polarity,Entailed Polarity,,reasoning
b922987d-f483-4bc0-aaaf-f3d355a2a734,math-word-problem-solving,Math Word Problem Solving,,reasoning
c5e92cce-392b-4397-b7e9-8e40a8efcebe,evaluating-information-essentiality,Evaluating Information Essentiality,,reasoning
f1d2f8c9-8a15-4958-a9ef-b4dd6515b844,systematic-generalization,Systematic Generalization,,reasoning
7efe746b-16b5-4713-a38d-db7c1e3e1ed9,odd-one-out,Odd One Out,"This task tests to what extent a language model is able to identify the odd word.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/odd_one_out)",reasoning
53c872e2-9be1-4dc1-881e-b22b0bd3b144,code-line-descriptions,Code Line Descriptions,,reasoning
782be8f2-8fb9-4794-b4aa-6b51d47bbfd1,automated-theorem-proving,Automated Theorem Proving,"The goal of **Automated Theorem Proving** is to automatically generate a proof, given a conjecture (the target theorem) and a knowledge base of known facts, all expressed in a formal language. Automated Theorem Proving is useful in a wide range of applications, including the verification and synthesis of software and hardware systems.
Source: [Learning to Prove Theorems by Learning to Generate Theorems ](https://arxiv.org/abs/2002.07019)",reasoning
b6a5d6ca-2b11-4135-9d96-0e480a05f9aa,anachronisms,Anachronisms,,reasoning
dd9ca405-4d88-4983-8646-29fd89cb0305,mathematical-induction,Mathematical Induction,"Tests the language model's capability to understand induction by asking the model to verify the correctness of an induction argument.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/mathematical_induction)",reasoning
72a3f744-152b-4ae0-8d80-815bea051af9,program-repair,Program Repair,Task of teaching ML models to modify an existing program to fix a bug in given code.,reasoning
2c283c93-1bd8-487b-b75a-44556d40abdf,common-sense-reasoning,Common Sense Reasoning,"Common sense reasoning tasks are intended to require the model to go beyond pattern
recognition. Instead, the model should use ""common sense"" or world knowledge
to make inferences.",reasoning
7698ae2a-f57d-43d5-acbb-a01221804ead,visual-commonsense-reasoning,Visual Commonsense Reasoning,Image source: [Visual Commonsense Reasoning](https://paperswithcode.com/dataset/vcr),reasoning
fff722db-a69b-4df1-82fb-352ea8803a42,reasoning-about-colored-objects,Reasoning About Colored Objects,,reasoning
f2e306ba-3997-4734-a484-d1c188b8735b,causal-judgment,Causal Judgment,,reasoning
e5c8aa58-5f0f-4640-8674-e40b0a6be9a7,analytic-entailment,Analytic Entailment,,reasoning
6f8d30d6-8162-4f32-85f1-924bc9ea29db,logical-fallacy-detection,Logical Fallacy Detection,,reasoning
fb50884c-b11d-4cb3-b0ed-23c36e6440fd,human-judgment-classification,Human Judgment Classification,A task where an algorithm judges which sample is better in accordance with human judgment.,reasoning
c67e5283-ec78-4999-9edd-f0422b421d65,decision-making-under-uncertainty,Decision Making Under Uncertainty,,reasoning
2e8963a2-5257-43c8-9fca-c15d3362ce19,visual-reasoning,Visual Reasoning,Ability to understand actions and reasoning associated with any visual images,reasoning
f3a9e5b5-8b41-43b7-a454-afa0eb664c6c,date-understanding,Date Understanding,,reasoning
d83453aa-0839-446a-bf83-2dd37b634f9e,pre-election-ratings-estimation,Pre-election ratings estimation,,reasoning
844dc4dd-295f-40bb-8dae-90575cc5ab7c,physical-commonsense-reasoning,Physical Commonsense Reasoning,,reasoning
c72d5e8a-18fe-4e61-a618-e0516596d406,conformal-prediction,Conformal Prediction,,reasoning
d0c37179-bcd8-486b-83df-1d989e00b55e,formal-logic,Formal Logic,,reasoning
5e452017-42e0-44a6-9207-f214a666880f,crass-ai,Crass AI,,reasoning
ebef641b-78d1-4d8d-a9ba-0b7063c64285,high-school-mathematics,High School Mathematics,,reasoning
9f0a94d3-4574-41c7-9a95-4b935fa2a48e,natural-language-visual-grounding,Natural Language Visual Grounding,,reasoning
6d3c298b-4e91-419f-9046-2c877d75f227,logical-args,Logical Args,,reasoning
bfbdfb42-a66b-4e71-9112-f3ae91c1e14c,visual-entailment,Visual Entailment,"Visual Entailment (VE) is a task consisting of image-sentence pairs whereby a premise is defined by an image, rather than a natural language sentence as in traditional Textual Entailment tasks. The goal is to predict whether the image semantically entails the text.",reasoning
5a2c8349-9ff7-42b1-8445-e5696ffc2786,winowhy,Winowhy,,reasoning
c07a4a1a-64cc-432c-91b0-3ce0dc12cbea,decision-making,Decision Making,"**Decision Making** is a complex task that involves analyzing data (at different levels of abstraction) from disparate sources and with different levels of certainty, merging the information by weighing some data sources more than others, and arriving at a conclusion by exploring all possible alternatives.
Source: [Complex Events Recognition under Uncertainty in a Sensor Network ](https://arxiv.org/abs/1411.0085)",reasoning
7ddd7c0e-6769-4440-9e7c-869ce77a8e1d,strategyqa,StrategyQA,"StrategyQA aims to measure the ability of models to answer questions that require multi-step implicit reasoning.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/strategyqa)",reasoning
f0d34eb3-ee34-4b83-a59e-ab2b0ebe6068,novel-concepts,Novel Concepts,"Measures the ability of models to uncover an underlying concept that unites several ostensibly disparate entities, which hopefully would not co-occur frequently. This provides a limited test of a model's ability to creatively construct the necessary abstraction to make sense of a situation that it cannot have memorized in training.
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/novel_concepts)",reasoning
6a4f0e83-40d9-4790-aa6d-56360685ab48,logical-sequence,Logical Sequence,,reasoning
07119042-c25b-4771-8a95-bb09c439adb7,contact-detection,Contact Detection,Static-friction contact detection in legged locomotion,robots
6fc15990-2e82-4179-915d-20eb39f48b79,marine-robot-navigation,Marine Robot Navigation,,robots
a043764e-0154-46a2-be3f-623126426c9f,outdoor-localization,Outdoor Localization,,robots
af506c2c-639d-4caf-bd48-50f6c37a7e0d,humanoid-control,Humanoid Control,"Control of a high-dimensional humanoid. This can include skill learning by tracking motion capture clips, learning goal-directed tasks like going towards a moving target, and generating motion within a physics simulator.",robots
767d521a-69fe-4ab4-baf6-2cef4802cc7d,muscle-computer-interfaces-mcis,Muscle-Computer Interfaces (MCIs),An interaction methodology that directly senses and decodes human muscular activity rather than relying on physical device actuation or user actions that are externally visible or audible.,robots
f048a65a-f621-47f4-b149-6a9fb38c6a53,sequential-place-learning,Sequential Place Learning,State-of-the-art algorithms for route-based place recognition under changing conditions.,robots
474ab3ec-5889-4cca-8323-c0688b6b4e7c,industrial-robots,Industrial Robots,"An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes.",robots
3c9ab101-cbb6-48f9-97ff-6e0878392fb2,mental-stress-detection,Mental Stress Detection,,robots
29006d09-cb18-4746-af8f-685cb952079a,social-navigation,Social Navigation,This task studies how to navigate robot(s) among humans in a safe and socially acceptable way.,robots
3d9d7bb2-30f6-4533-a1a6-8f43fecc1707,deformable-object-manipulation,Deformable Object Manipulation,,robots
2bb59aa5-4863-4295-870a-f260486917cf,low-latency-processing,Low-latency processing,,robots
7198de15-bbc7-41d7-ba84-e3789d16ff3c,benchmarking,Benchmarking,,robots
bbb0a5fd-3143-4a3e-999b-25dc3385ef7a,pointgoal-navigation,PointGoal Navigation,,robots
4c814a59-048e-4429-abfd-0ca99a154ae6,d4rl,D4RL,,robots
d8bddc13-74e6-4fd3-ae0a-551386ec61c4,carla-map-leaderboard,CARLA MAP Leaderboard,https://leaderboard.carla.org/leaderboard/,robots
5621b1e1-2243-44c2-a1e5-39a7cd1b8e4b,robot-navigation,Robot Navigation,"The fundamental objective of mobile **Robot Navigation** is to arrive at a goal position without collision. The mobile robot is supposed to be aware of obstacles and move freely in different working scenarios.
Source: [Learning to Navigate from Simulation via Spatial and Semantic Information Synthesis with Noise Model Embedding ](https://arxiv.org/abs/1910.05758)",robots
405996aa-6769-4e6f-a1f8-f0fe601d02bf,touch-detection,Touch detection,,robots
46636e1d-d103-40b0-a1c9-53b39206a29e,omniverse-isaac-gym,Omniverse Isaac Gym,"The Omniverse Isaac Gym extension provides an interface for performing reinforcement learning training and inferencing in Isaac Sim. This framework simplifies the process of connecting reinforcement learning libraries and algorithms with other components in Isaac Sim. Similar to existing frameworks and environment wrapper classes that inherit from gym.Env, the Omniverse Isaac Gym extension also provides an interface inheriting from gym.Env and implements a simple set of APIs required by most common RL libraries. This interface can be used as a bridge connecting RL libraries with physics simulation and tasks running in the Isaac Sim framework.",robots
301fb2ac-d7cb-49e3-b808-7b55ac1faea2,robotic-grasping,Robotic Grasping,This task is composed of using Deep Learning to identify how best to grasp objects using robotic arms in different scenarios. This is a very complex task as it might involve dynamic environments and objects unknown to the network.,robots
143434e7-3a44-4116-b05e-426355ba6997,vision-and-language-navigation,Vision and Language Navigation,,robots
2bae0c5a-b54a-4282-b641-8848eea39abd,developmental-learning,Developmental Learning,,robots
af8712a0-846b-4e03-b657-271d18b2e92c,skill-generalization,Skill Generalization,Image credit: [A Generalist Agent](https://storage.googleapis.com/deepmind-media/A%20Generalist%20Agent/Generalist%20Agent.pdf),robots
186054e2-3723-46a5-a243-96b51d879299,skill-mastery,Skill Mastery,,robots
27c1e49e-fb9a-4bee-8d95-48f16ac259c9,radar-odometry,Radar odometry,"Radar odometry is the task of estimating the trajectory of the radar sensor, e.g. as presented in https://arxiv.org/abs/2105.01457.
A well-established performance metric was presented by Geiger et al. (2012), ""Are we ready for autonomous driving? The KITTI vision benchmark suite"".",robots
ceedc9a5-6e19-46cd-956f-e0fab3035c75,monocular-visual-odometry,Monocular Visual Odometry,,robots
416e0b9f-db94-4c64-8b85-78692f692ab1,radar-object-detection,Radar Object Detection,The radar object detection (ROD) task aims to classify and localize the objects in 3D purely from radar's radio frequency (RF) images.,robots
af62519a-7384-4911-8f7a-d3edf5f1b017,joint-radar-communication,Joint Radar-Communication,Intelligently decide how to simultaneously conduct radar and communication over a shared radio channel.,robots
8de53b86-3efc-44e6-8cf4-cdcc78645e24,optimal-motion-planning,Optimal Motion Planning,,robots
956552f8-b5da-4d1e-a118-e662248b6f20,visual-navigation,Visual Navigation,"**Visual Navigation** is the problem of navigating an agent, e.g. a mobile robot, in an environment using camera input only. The agent is given a target image (an image it will see from the target position), and its goal is to move from its current position to the target by applying a sequence of actions, based on the camera observations only.
Source: [Vision-based Navigation Using Deep Reinforcement Learning ](https://arxiv.org/abs/1908.03627)",robots
ea835d9b-a356-43c8-a3bb-847b45e529ca,sequential-place-recognition,Sequential Place Recognition,State-of-the-art algorithms for route-based place recognition under changing conditions.,robots
6db42f94-7e72-4c01-9ffa-a7a6a79b91b9,visual-odometry,Visual Odometry,"**Visual Odometry** is an important area of information fusion in which the central aim is to estimate the pose of a robot using data collected by visual sensors.
Source: [Bi-objective Optimization for Robust RGB-D Visual Odometry ](https://arxiv.org/abs/1411.7445)",robots
7c209250-1f69-4bad-8f99-369a1c7536e0,safe-exploration,Safe Exploration,"**Safe Exploration** is an approach to collect ground truth data by safely interacting with the environment.
Source: [Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems ](https://arxiv.org/abs/2005.04374)",robots
6bc59c2c-823f-41aa-a8ae-1b3c547cd657,robot-manipulation,Robot Manipulation,,robots
87ab5b56-53f6-4814-95e2-7792014130be,trajectory-planning,Trajectory Planning,"Trajectory planning for industrial robots consists of moving the tool center point from point A to point B while avoiding body collisions over time.
Trajectory planning is sometimes referred to as motion planning and erroneously as path planning. Trajectory planning is distinct from path planning in that it is parametrized by time. Essentially trajectory planning encompasses path planning in addition to planning how to move based on velocity, time, and kinematics.",robots
59344ae6-1b08-4f37-ae36-07ea05ac5e99,drone-controller,Drone Controller,,robots
f2bd8c8f-2a71-4e40-940c-5a92fe874338,grasp-rectangle-generation,Grasp rectangle generation,Grasp rectangles are a popular way to represent a two-finger grasp. Grasp rectangle generation is a task to (automatically) label datasets for parallel-jaw grasp learning.,robots
696db8d8-466a-4458-926f-222840141c04,gesture-generation,Gesture Generation,"Generation of gestures, as a sequence of 3d poses",robots
40659b12-a8cf-4691-b408-c3b6cbe2ca81,vision-based-navigation-with-language-based,Vision-based navigation with language-based assistance,"A grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments. The task emulates a real-world scenario in that (a) the requester may not know how to navigate to the target objects and thus makes requests by only specifying high-level endgoals, and (b) the agent is capable of sensing when it is lost and querying an advisor, who is more qualified at the task, to obtain language subgoals to make progress.",robots
050f3ff6-6276-4d63-8f24-7a2524ed9528,isaac-gym-preview,Isaac Gym Preview,Isaac Gym offers a high-performance learning platform to train policies for a wide variety of robotics tasks directly on GPU. Both physics simulation and the neural network policy training reside on GPU and communicate by directly passing data from physics buffers to PyTorch tensors without ever going through any CPU bottlenecks. This leads to blazing fast training times for complex robotics tasks on a single GPU with 2-3 orders of magnitude improvements compared to conventional RL training that uses a CPU based simulator and GPU for neural networks.,robots
a256ec58-a2ea-4d03-925a-d65e9df62580,robot-task-planning,Robot Task Planning,,robots
a27889ad-18c0-4bd3-bcbc-879d90356503,motion-planning,Motion Planning,"( Image credit: [Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning](https://arxiv.org/pdf/1805.01956v1.pdf) )",robots
71a7d361-68a0-430d-aab6-8174831af8e7,phone-level-pronunciation-scoring,Phone-level pronunciation scoring,,speech
cf4ab03d-f2f3-472c-8378-6ee6921569e2,voice-conversion,Voice Conversion,"**Voice Conversion** is a technology that modifies the speech of a source speaker and makes their speech sound like that of another target speaker without changing the linguistic information.
Source: [Joint training framework for text-to-speech and voice conversion using multi-source Tacotron and WaveNet ](https://arxiv.org/abs/1903.12389)",speech
7454fbce-732e-442c-86ea-0e39a9dacfcd,speech-synthesis-bodo,Speech Synthesis - Bodo,,speech
e6bc8646-9267-4821-a315-8213d8ffd7aa,small-footprint-keyword-spotting,Small-Footprint Keyword Spotting,,speech
85ca1707-6f91-4da5-9cfd-e07e6cd44dea,word-level-pronunciation-scoring,Word-level pronunciation scoring,Total score of a word pronunciation.,speech
8956b289-bafe-4497-9f8c-2d95a9658a95,robust-speech-recognition,Robust Speech Recognition,,speech
7f83e8a7-619c-4f75-bdec-b0817c030b6d,speaker-profiling,Speaker Profiling,Estimation of Physical parameters from Speech data,speech
06044852-1a67-4eb7-9c70-05f57e5ecc58,speech-recognition,Speech Recognition,"**Speech Recognition** is the task of converting spoken language into text. It involves recognizing the words spoken in an audio recording and transcribing them into a written format. The goal is to accurately transcribe the speech in real-time or from recorded audio, taking into account factors such as accents, speaking speed, and background noise.
( Image credit: [SpecAugment](https://arxiv.org/pdf/1904.08779v2.pdf) )",speech
116ec838-8041-4b04-899f-775a136cc607,bandwidth-extension,Bandwidth Extension,Bandwidth extension is the task of expanding the bandwidth of a signal in a way that approximates the original or desired higher bandwidth signal.,speech
99afa507-b68c-441c-99d0-51044c470fb0,keyword-spotting,Keyword Spotting,"In speech processing, keyword spotting deals with the identification of keywords in utterances.
( Image credit: [Simon Grest](https://github.com/simongrest/kaggle-freesound-audio-tagging-2019) )",speech
f3efdedb-1212-4f16-b6c3-a3462835e401,pronunciation-assessment,Pronunciation Assessment,,speech
36192ffc-75bd-44a4-878a-cb9d0d0be0c5,speech-synthesis-rajasthani,Speech Synthesis - Rajasthani,,speech
d0e0639e-e57a-4b5f-ba5c-98f1e53d3dcc,speech-synthesis-marathi,Speech Synthesis - Marathi,,speech
636ea2ca-4074-4d4c-985f-a58e71b91a1f,voice-query-recognition,Voice Query Recognition,,speech
7621b5ec-abe4-4b07-97e8-fd3d01dbab91,dialogue-generation,Dialogue Generation,"Dialogue generation is the task of ""understanding"" natural language inputs within natural language processing in order to produce output. The systems are usually intended for conversing with humans, for instance back and forth dialogue with a conversation agent like a chatbot. Some example benchmarks for this task (see others such as Natural Language Understanding) include FusedChat and Ubuntu Dialogue Corpus (UDC). Models can be evaluated via metrics such as BLEU, ROUGE, and METEOR, albeit with challenges in terms of weak correlation with human judgement, which may be addressed by new ones like UnSupervised and Reference-free (USR) and Metric for automatic Unreferenced dialog evaluation (MaUde).",speech
d8c20a26-3c9a-42a5-82b0-dc432187b85e,noisy-speech-recognition,Noisy Speech Recognition,,speech
5f9512d1-e1ff-47b1-ba4f-4697bf0193cf,a-vb-culture,A-VB Culture,"In the A-VB Culture track, participants will be challenged with predicting the intensity of 10 emotions associated with each vocal burst as a multi-output regression task, using a model or multiple models that generate predictions specific to each of the four cultures (the U.S., China, Venezuela, or South Africa). Specifically, annotations of each vocal burst will consist of culture-specific ground truth, meaning that the ground truth for each sample will be the average of annotations solely from the country of origin of the sample. Participants will report the average Concordance Correlation Coefficient (CCC), as well as the Pearson correlation coefficient, across all 10 emotions. The baseline for this challenge will be based on CCC.",speech
28a06174-8eee-4109-8482-8448e57d53bf,speech-synthesis-kannada,Speech Synthesis - Kannada,,speech
2702d281-5a8f-4e6d-9a67-542e2baca40d,distant-speech-recognition,Distant Speech Recognition,,speech
c4a314a8-7c91-4847-92a6-1b32cbc7c30e,speech-to-gesture-translation,Speech-to-Gesture Translation,,speech
cca7afda-f9f1-48d3-8bbd-df19dbb50771,a-vb-high,A-VB High,"The A-VB High track, explores a high-dimensional emotion space for understanding vocal bursts. Participants will be challenged with predicting the intensity of 10 emotions (Awe, Excitement, Amusement, Awkwardness, Fear, Horror, Distress, Triumph, Sadness, and Surprise) associated with each vocal burst as a multi-output regression task. Participants will report the average Concordance Correlation Coefficient (CCC), as well as the Pearson correlation coefficient, across all 10 emotions. The baseline for this challenge will be based on CCC.",speech
89133d4e-d321-4313-b494-cdf13bce9b68,speaker-recognition,Speaker Recognition,"**Speaker Recognition** is the process of identifying or confirming the identity of a person given his speech segments.
Source: [Margin Matters: Towards More Discriminative Deep Neural Network Embeddings for Speaker Recognition ](https://arxiv.org/abs/1906.07317)",speech
f89bc33f-3dcd-4fb3-8941-8960581ce312,culture,Cultural Vocal Bursts Intensity Prediction,Predict the intensity of 40 culture-specific emotions (10 emotions from each culture).,speech
bd4cdc7e-2b5b-48b9-b89d-d305210b8917,speaking-style-synthesis,Speaking Style Synthesis,,speech
99ae2d29-3cb3-4408-b2b4-34ed003f0701,speech-separation,Speech Separation,"The task of extracting all overlapping speech sources in a given mixed speech signal refers to the **Speech Separation**. Speech Separation is a special scenario of source separation problem, where the focus is only on the overlapping speech signal sources and other interferences such as music or noise signals are not the main concern of the study.
Source: [A Unified Framework for Speech Separation ](https://arxiv.org/abs/1912.07814)
Image credit: [Speech Separation of A Target Speaker Based on Deep Neural Networks](http://staff.ustc.edu.cn/~jundu/Publications/publications/ICSP2014_Du.pdf)",speech
c78bb020-6558-460f-a851-a94b11f3a08c,emotional-speech-synthesis,Emotional Speech Synthesis,,speech
43f2bba4-55eb-4ede-a92c-52a9974c110e,text-to-speech-synthesis,Text-To-Speech Synthesis,**Text-To-Speech Synthesis** is a machine learning task that involves converting written text into spoken words. The goal is to generate synthetic speech that sounds natural and resembles human speech as closely as possible.,speech
c87e2f27-df7e-4d20-be19-94aca924ac1b,text-independent-speaker-recognition,Text-Independent Speaker Recognition,,speech
7c846e63-01b4-4bd4-b3dc-14d556e97359,text-independent-speaker-verification,Text-Independent Speaker Verification,,speech
59ec3e9f-6ee8-402a-9e68-b3493d8275c8,speech-to-speech-translation,Speech-to-Speech Translation,"Speech-to-speech translation (S2ST) consists of translating speech in one language into speech in another language. This can be done with a cascade of automatic speech recognition (ASR), text-to-text machine translation (MT), and text-to-speech (TTS) synthesis sub-systems, which is text-centric. Recently, work on S2ST that does not rely on an intermediate text representation has been emerging.",speech
96b72a5e-f26d-46bc-82f1-ab8ab00e9591,speech-synthesis-odia,Speech Synthesis - Odia,,speech
bb001bba-3f65-4eaa-9617-b5c1fa3c3638,spoken-language-understanding,Spoken Language Understanding,,speech
9d2b3be5-cab4-4b30-a06f-376cb2929baa,automatic-speech-recognition,Automatic Speech Recognition (ASR),"**Automatic Speech Recognition (ASR)** involves converting spoken language into written text. It is designed to transcribe spoken words into text in real-time, allowing people to communicate with computers, mobile devices, and other technology using their voice. The goal of Automatic Speech Recognition is to accurately transcribe speech, taking into account variations in accent, pronunciation, and speaking style, as well as background noise and other factors that can affect speech quality.",speech
7f8d4dd2-5cb1-4a8c-9fa6-d47f90f3df46,a-vb-two,A-VB Two,"In the A-VB Two track, we investigate a low-dimensional emotion space that is based on the circumplex model of affect. Participants will predict values of arousal and valence (on a scale from 1=unpleasant/subdued, 5=neutral, 9=pleasant/stimulated) as a regression task. Participants will report the average Concordance Correlation Coefficient (CCC), as well as the Pearson correlation coefficient, across the two dimensions. The baseline for this challenge will be based on CCC.",speech
8e9e5fde-f255-4579-92af-5fb456b793e5,utterance-level-pronounciation-scoring,Utterance-level pronunciation scoring,Total pronunciation score of an utterance.,speech
cad669d1-9931-4451-8b87-72a7cead47cf,speech-synthesis-manipuri,Speech Synthesis - Manipuri,,speech
cdadf2f4-6f48-49d0-9c67-e7e64bf637a7,accented-speech-recognition,Accented Speech Recognition,,speech
839b3ee3-e559-41ae-857f-bc66042759ec,speech-synthesis-bengali,Speech Synthesis - Bengali,,speech
75dade66-ac83-458e-bd35-0c638d1d72f1,spoken-language-identification,Spoken language identification,Identify the language being spoken from an audio input only.,speech
56b04299-6d00-4120-bc05-d3006a9b9d15,spoken-dialogue-systems,Spoken Dialogue Systems,,speech
ceac241e-4f2f-4622-acc7-a9e51471baa2,speaker-diarization,Speaker Diarization,"**Speaker Diarization** is the task of segmenting and co-indexing audio recordings by speaker. The way the task is commonly defined, the goal is not to identify known speakers, but to co-index segments that are attributed to the same speaker; in other words, diarization implies finding speaker boundaries and grouping segments that belong to the same speaker, and, as a by-product, determining the number of distinct speakers. In combination with speech recognition, diarization enables speaker-attributed speech-to-text transcription.
Source: [Improving Diarization Robustness using Diversification, Randomization and the DOVER Algorithm ](https://arxiv.org/abs/1910.11691)",speech
cd9d3073-3dba-4182-a1de-98175ae7072d,speech-enhancement,Speech Enhancement,"**Speech Enhancement** is a signal processing task that involves improving the quality of speech signals captured under noisy or degraded conditions. The goal of speech enhancement is to make speech signals clearer, more intelligible, and more pleasant to listen to, which can be used for various applications such as voice recognition, teleconferencing, and hearing aids.
( Image credit: [A Fully Convolutional Neural Network For Speech Enhancement](https://arxiv.org/pdf/1609.07132v1.pdf) )",speech
2bd115f3-eb0e-4102-a319-2771a97ccdcd,speech-extraction,Speech Extraction,,speech
d9175059-cabb-4562-8c5d-b1252bcab5d5,expressive-speech-synthesis,Expressive Speech Synthesis,,speech
d9b19df9-4ef4-40c5-9089-1228a102176b,unsupervised-speech-recognition,Unsupervised Speech Recognition,,speech
8f6fad49-110b-4353-ba77-43e14fd2815b,speech-synthesis-assamese,Speech Synthesis - Assamese,,speech
5a453c1e-9a56-4d5a-b65c-139dd23cb626,speaker-separation,Speaker Separation,,speech
0b8a14ce-25ec-4b0a-ab98-751b17ff1a56,acoustic-unit-discovery,Acoustic Unit Discovery,,speech
65df62bf-34fa-4f87-afe3-f5cdcbbd3e5b,speaker-identification,Speaker Identification,,speech
916028e9-a4d2-4dc0-80ec-657b6a7e69b8,manner-of-articulation-detection,Manner Of Articulation Detection,,speech
7343b9ae-3d77-4144-a37c-60772310b38e,multi-speaker-source-separation,Multi-Speaker Source Separation,,speech
9ffa25fc-d178-462e-a9c6-a3afb420fe1f,text-dependent-speaker-verification,Text-Dependent Speaker Verification,,speech
68fe847a-ed39-4974-b328-ca64a45d1721,voice-cloning,Voice Cloning,Voice cloning is a highly desired feature for personalized speech interfaces. Neural voice cloning system learns to synthesize a person’s voice from only a few audio samples.,speech
922f36d4-a318-4b54-8a54-5b517933ed45,speech-synthesis-hindi,Speech Synthesis - Hindi,,speech
a1eba75d-46ca-42f2-bf0d-46be751e3717,speech-emotion-recognition,Speech Emotion Recognition,"**Speech Emotion Recognition** is a task of speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to determine the emotional state of a speaker, such as happiness, anger, sadness, or frustration, from their speech patterns, such as prosody, pitch, and rhythm.
For multimodal emotion recognition, please upload your result to [Multimodal Emotion Recognition on IEMOCAP](https://paperswithcode.com/sota/multimodal-emotion-recognition-on-iemocap)",speech
521d2169-4c38-4abe-8859-fde6af79654d,speaker-verification,Speaker Verification,"Speaker verification is the task of verifying the identity of a person from characteristics of their voice.
( Image credit: [Contrastive-Predictive-Coding-PyTorch
](https://github.com/jefflai108/Contrastive-Predictive-Coding-PyTorch) )",speech
d55c791d-bb11-454c-8e8e-82007364616f,speech-synthesis,Speech Synthesis,"Speech synthesis is the task of generating speech from some other modality, such as text or lip movements.
Please note that the leaderboards here are not really comparable between studies, as they use mean opinion score as a metric and collect different samples from Amazon Mechanical Turk.
( Image credit: [WaveNet: A generative model for raw audio](https://deepmind.com/blog/article/wavenet-generative-model-raw-audio) )",speech
0b14491c-cbfb-41f9-8170-548d3de7074f,speech-denoising,Speech Denoising,Obtain the clean speech of the target speaker by suppressing the background noise.,speech
ee3aa67c-7df6-4cd7-9abd-cbb798474466,speech-synthesis-gujarati,Speech Synthesis - Gujarati,,speech
d95c738d-0f61-4880-ac99-c8fd06966394,speech-synthesis-tamil,Speech Synthesis - Tamil,,speech
ba49afaf-3b4c-44bf-9a29-f9107345c01e,acoustic-question-answering,Acoustic Question Answering,,speech
642f5d62-6381-49d6-9f82-38cabb3da461,singing-voice-synthesis,Singing Voice Synthesis,,speech
196a32ec-a352-4d3f-9ba0-503629e59ced,spoken-command-recognition,Spoken Command Recognition,,speech
de55df1c-f0d0-4d12-aaaa-8815522d08ee,acoustic-echo-cancellation,Acoustic echo cancellation,,speech
5939abc4-e472-4506-9355-4ff0f09e5fe3,speech-synthesis-malayalam,Speech Synthesis - Malayalam,,speech
f04865a2-ac2c-4441-90c1-f83756808af0,acoustic-modelling,Acoustic Modelling,,speech
8ff053e7-7cb4-4dc5-90bd-59da9be9cd02,visual-speech-recognition,Visual Speech Recognition,,speech
160f2ff8-c676-412e-a3e1-4f8a97d39733,speech-dereverberation,Speech Dereverberation,Removing reverberation from audio signals,speech
c889d586-5d94-4d9f-892e-ae7428b6a950,sequence-to-sequence-speech-recognition,Sequence-To-Sequence Speech Recognition,,speech
2b5afa3a-2d01-4aab-9841-11f02df19047,type,Vocal Bursts Type Prediction,Predict the type of a given vocal burst,speech
60b823b9-337d-45f6-b677-14628e8a7c00,multimodal-emotion-recognition,Multimodal Emotion Recognition,"This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are
A: Acoustic
T: Text
V: Visual
Please include the modality in brackets after the model name.
All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.",speech
336d61f5-b89e-4ee3-857e-a7239e79ae98,speech-synthesis-telugu,Speech Synthesis - Telugu,,speech
3f753b20-4589-407a-ab75-3a84cb4ec6da,audio-visual-speech-recognition,Audio-Visual Speech Recognition,Audio-visual speech recognition is the task of transcribing a paired audio and visual stream into text.,speech
e37a1d44-b9c2-4860-8594-0ed68d0764cb,english-conversational-speech-recognition,English Conversational Speech Recognition,,speech
2f9ff996-0c97-4c4d-bee6-056f40bebdba,traffic-prediction,Traffic Prediction,"**Traffic Prediction** is a task that involves forecasting traffic conditions, such as the volume of vehicles and travel time, in a specific area or along a particular road. This task is important for optimizing transportation systems and reducing traffic congestion.
( Image credit: [BaiduTraffic](https://github.com/JingqingZ/BaiduTraffic) )",time-series
60620b64-4b99-42e3-9441-585914b40391,social-cue-forecasting,Social Cue Forecasting,,time-series
6f28a965-c15e-417d-923b-f0d7442d201e,time-series-denoising,Time Series Denoising,,time-series
e7c4b31c-62bf-47de-8a02-0ad739903a21,edge-computing,Edge-computing,Deep learning on edge devices,time-series
73f882d1-a9de-4306-8e96-8d1f0ac31bd2,predict-clinical-outcome,Predict clinical outcome,"Predicting clinical outcome, evaluated with a cost-based metric that considers the costs of algorithmic prescreening, expert screening, treatment, and diagnostic errors that result in late or missed treatments. This metric is further described here: https://moody-challenge.physionet.org/2022/",time-series
4ad08f22-5a0c-4eea-9d5f-9901aebe688a,stock-trend-prediction,Stock Trend Prediction,,time-series
f8464130-54df-4623-b809-8bcf7ce3a4d4,covid-19-tracking,COVID-19 Tracking,,time-series
a70c7e16-58a4-482e-abb2-7bb71fc3ef6e,w-r-n-sleep-staging,W-R-N Sleep Staging,"3-class Sleep Staging into
- Wake
- REM
- NREM",time-series
28194e8e-898f-4943-97d1-e2017623a346,w-r-l-d-sleep-staging,W-R-L-D Sleep Staging,"4-class Sleep Staging into
- Wake
- REM
- Light Sleep
- Deep Sleep",time-series
0c098f7d-5eba-4861-a9ac-d88257b07612,covid-19-modelling,COVID-19 Modelling,,time-series
6f2cac9e-f4cb-4275-a488-f5bf07e63f16,multivariate-time-series-imputation,Multivariate Time Series Imputation,,time-series
65a2e000-f4d4-4914-9787-d81471387671,automatic-modulation-recognition,Automatic Modulation Recognition,Automatic modulation recognition/classification identifies the modulation pattern of communication signals received from wireless or wired networks.,time-series
8edbc220-8510-452c-bc7d-2d19069d54da,lwr-classification,LWR Classification,"T4: LWR Classification: Predicting whether the subject is Listening, Writing, or Resting from physiological signals (EEG, GSR, PPG). Labels: 0-listening, 1-writing, 2-resting. A classification task.",time-series
28ce589f-15f8-4c11-8769-d0835d971a08,energy-management,energy management,"energy management is to schedule energy units inside the systems, enabling an reliable, safe and cost-effective operation",time-series
bec57325-3d27-4894-9c21-cde276667ff7,new-product-sales-forecasting,New Product Sales Forecasting,"Sales forecasting for a new product that the market has not seen before.",time-series
9d7d16c3-3657-4a59-bb6b-bb9f3bc601be,change-point-detection,Change Point Detection,"**Change Point Detection** is concerned with the accurate detection of abrupt and significant changes in the behavior of a time series.
Change point detection is the task of finding changes in the underlying model of a signal or time series. There are two main methods:
1) Online methods, that aim to detect changes as soon as they occur in a real-time setting
2) Offline methods that retrospectively detect changes when all samples are received.
Source: [Selective review of offline change point detection methods](https://arxiv.org/pdf/1801.00718.pdf)",time-series
e0f38d82-d590-4486-ba58-0464504fc64c,time-series-forecasting,Time Series Forecasting,"**Time Series Forecasting** is the task of fitting a model to historical, time-stamped data in order to predict future values. Traditional approaches include moving average, exponential smoothing, and ARIMA, though models as varied as RNNs, Transformers, and XGBoost can also be applied. The most popular benchmark is the ETTh1 dataset. Models are typically evaluated using Mean Squared Error (MSE) or Root Mean Squared Error (RMSE).
( Image credit: [ThaiBinh Nguyen](https://github.com/tn16jv/Stock-Price-Prediction) )",time-series
35b14346-48f2-40c3-baab-2ab9a54762fb,human-behavior-forecasting,Human Behavior Forecasting,,time-series
706113c3-92f2-4519-bb17-de5a161b131b,wearable-activity-recognition,Wearable Activity Recognition,,time-series
12187904-f958-46b2-8c56-f2b8ae91d91d,exponential-degradation,Exponential degradation,"Exponential degradation is used to model problems where systems are exposed to an exponential loss of performance, such as repairable industrial systems.",time-series
cc00fad1-bc88-46a2-a8d8-92282f4c7454,unsupervised-spatial-clustering,Unsupervised Spatial Clustering,,time-series
8921da4a-6734-4d9d-8089-9146d76f0028,video-quality-assessment,Video Quality Assessment,"Video Quality Assessment is a computer vision task aiming to mimic video-based human subjective perception. The goal is to produce a mean opinion score (MOS), where a higher score indicates better perceptual quality. Some well-known benchmarks for this task are KoNViD-1k, LIVE-VQC, YouTube-UGC and LSVQ. SROCC/PLCC/RMSE are usually used to evaluate the performance of different models.",time-series
b1d12fe6-8951-4c7e-99c1-a25abce0093a,sequential-skip-prediction,Sequential skip prediction,,time-series
88539d28-7106-4d4f-8e1a-a0f7db139e4b,time-to-event-prediction,Time-to-Event Prediction,,time-series
fdf1170f-a9b5-4c90-992f-1014cfabc322,time-series-anomaly-detection,Time Series Anomaly Detection,,time-series
4af8ba05-04cc-4b2a-8bae-d39ce77c4dfc,social-media-popularity-prediction,Social Media Popularity Prediction,"Social Media Popularity Prediction (SMPP) aims to predict the future popularity (e.g., clicks, views, likes, etc.) of online posts automatically via plenty of social media data from public platforms. It is a crucial problem for social media learning and forecasting and one of the most challenging problems in the field. With the ever-changing user interests and public attention on social media platforms, how to predict popularity accurately becomes more challenging than before. This task is valuable to content providers, marketers, or consumers in a range of real-world applications, including multimedia advertising, recommendation system, or trend analysis.",time-series
e6918a10-bfaa-43f7-bd37-8bdfd5107123,short-observation-new-product-sales,Short-observation new product sales forecasting,,time-series
0f9c739b-640e-4033-ab97-88962bc5cd2d,eeg-based-sleep-staging,EEG based sleep staging,Sleep staging from only EEG signal,time-series
02ff1864-c74f-480e-a4fa-df99797b8c39,eeg,Electroencephalogram (EEG),"**Electroencephalogram (EEG)** is a method of recording brain activity using electrophysiological measurements. When the brain is active, large numbers of postsynaptic potentials generated synchronously by neurons sum together. EEG records the resulting electrical waves during brain activity and reflects the overall electrophysiological activity of brain nerve cells at the surface of the cerebral cortex or scalp. Brain waves originate from the postsynaptic potentials of the apical dendrites of pyramidal cells, and the formation of synchronous EEG rhythms is also related to the activity of the non-specific projection system of the cortex and thalamus. EEG underpins basic research in brain science, and EEG monitoring is widely used in clinical applications.",time-series
b3e5b915-6d41-41d1-8549-328c2b2bf987,time-series-classification,Time Series Classification,"**Time Series Classification** is a general task that can be useful across many subject-matter domains and applications. The overall goal is to identify a time series as coming from one of possibly many sources or predefined groups, using labeled training data. That is, in this setting we conduct supervised learning, where the different time series sources are considered known.
Source: [Nonlinear Time Series Classification Using Bispectrum-based Deep Convolutional Neural Networks ](https://arxiv.org/abs/2003.02353)",time-series
2d50ae31-b91a-416a-8124-a4af469b6511,ecg-based-sleep-staging,ECG based Sleep Staging,Sleep Staging from only ECG signal,time-series
de5469a8-23fa-41ef-b5b9-b9fc3d745a65,time-series-clustering,Time Series Clustering,"**Time Series Clustering** is an unsupervised data mining technique for organizing data points into groups based on their similarity. The objective is to maximize data similarity within clusters and minimize it across clusters. Time-series clustering is often used as a subroutine of other, more complex algorithms and is employed as a standard tool in data science for anomaly detection, character recognition, pattern discovery, and visualization of time series.
Source: [Comprehensive Process Drift Detection with Visual Analytics ](https://arxiv.org/abs/1907.06386)",time-series
002ee402-b7b7-4b56-b15b-7ee01377bbc0,irregular-time-series,Irregular Time Series,Irregular Time Series,time-series
3e16c316-4b02-4073-b50f-f0e2e08b1e1b,earth-surface-forecasting,Earth Surface Forecasting,Conditional forecasting of future multi-spectral imagery.,time-series
169c9588-8836-4e2a-8c7a-d44fd3017183,sleep-spindles-detection,Sleep spindles detection,,time-series
30b9b0d3-6dfc-4c95-a425-e68733ce1ed5,trajectory-modeling,Trajectory Modeling,The equivalent of language modeling but for trajectories.,time-series
36d1eb57-3fb6-4f5a-8511-9a3b52e12351,seismic-source-localization,Seismic source localization,Locating a seismic source using seismometer recordings,time-series
7950bb52-5a31-458d-b5d8-644d5b60938d,correlated-time-series-forecasting,Correlated Time Series Forecasting,,time-series
0eca094f-b728-40a0-b73c-126104679f4f,semanticity-prediction,Semanticity prediction,"T3: Semanticity Prediction: Estimating the semanticity perceived by the Listener from physiological signals (EEG, GSR, PPG). Labels: 0 (semantic), 1 (non-semantic). A binary classification problem.",time-series
d78aafb1-bf1e-4648-a9b4-11c0c563e3b5,pair-trading,PAIR TRADING,Pair trading is one of the most widely used statistical arbitrage strategies; it seeks a market-neutral profit by hedging a pair of selected assets.,time-series
80c3280b-4913-4158-bad0-10d64a5af690,solar-flare-prediction,Solar Flare Prediction,Solar flare prediction in heliophysics,time-series
477b7e27-108a-4cf2-9bf6-85d432b9aa35,lip-password-classification,Lip password classification,A classification task that predicts whether the designated user is uttering the designated password.,time-series
880656f9-776a-465e-bc3b-1bd12e69bfbb,algorithmic-trading,Algorithmic Trading,An algorithmic trading system is software that automates trading in the stock market.,time-series
c8576c67-8ffa-45c8-8d37-a98ceb7e7b00,multimodal-association,Multimodal Association,"**Multimodal association** refers to the process of associating multiple modalities or types of data in time series analysis. In time series analysis, multiple modalities or types of data can be collected, such as sensor data, images, audio, and text. Multimodal association aims to integrate these different types of data to improve the understanding and prediction of the time series.
For example, in a smart home application, sensor data from temperature, humidity, and motion sensors can be combined with images from cameras to monitor the activities of residents. By analyzing the multimodal data together, the system can detect anomalies or patterns that may not be visible in individual modalities alone.
Multimodal association can be achieved using various techniques, including deep learning models, statistical models, and graph-based models. These models can be trained on the multimodal data to learn the associations and dependencies between the different types of data.",time-series
01814fbc-e07b-4df1-915f-b6c18331a8a9,non-intrusive-load-monitoring,Non-Intrusive Load Monitoring,,time-series
1fc277d6-8271-4d8e-9ff8-c536dcbf4576,intelligent-communication,Intelligent Communication,"Intelligently decide (i) the content of the data to be shared/communicated and (ii) the direction in which the chosen data is transmitted.",time-series
555ac514-1a10-453e-b4e3-7f31cbfbc56e,fire-detection,Fire Detection,Detection of fire using multi-variate time series sensor data.,time-series
bbbd045f-5162-4626-8cbe-59e90c601817,data-compression,Data Compression,,time-series
fa317c51-6a4b-499b-b5ae-d7cf8bba87ef,tropical-cyclone-intensity-forecasting,Tropical Cyclone Intensity Forecasting,,time-series
f2d2a7c9-1adc-41d6-9ebc-9043f95a2a34,time-series-regression,Time Series Regression,Predicting one or more scalars for an entire time series example.,time-series
fbf699eb-f5a0-42b7-a28f-5cc7f3c7b07d,imputation,Imputation,Substituting missing data with values according to some criteria.,time-series
3a3cace7-e926-41ff-8a1b-ff81597dc4a4,time-series,Time Series Analysis,"**Time Series Analysis** is a statistical technique used to analyze and model time-based data. It is used in various fields such as finance, economics, and engineering to analyze patterns and trends in data over time. The goal of time series analysis is to identify the underlying patterns, trends, and seasonality in the data, and to use this information to make informed predictions about future values.
( Image credit: [Autoregressive CNNs for Asynchronous Time Series](https://arxiv.org/pdf/1703.04122v4.pdf) )",time-series
a780761d-5d12-4863-b7da-18c9de69bee8,solar-irradiance-forecasting,Solar Irradiance Forecasting,,time-series
8ac5ead7-1a9f-4f26-aec5-fc8e4ac8ba51,remaining-useful-lifetime-estimation,Remaining Useful Lifetime Estimation,Estimating the number of machine operation cycles until breakdown from the time series of previous cycles.,time-series
7661b92f-069e-4743-8d5e-38433f9798ea,moving-point-cloud-processing,Moving Point Cloud Processing,,time-series
76c9c5f2-d93c-4838-8502-19e4feea210a,stock-market-prediction,Stock Market Prediction,,time-series
7a4d20a2-c178-432a-853b-9b50efb94874,portfolio-optimization,Portfolio Optimization,,time-series
6e4dccd7-7c4b-4717-aecf-8a926108d778,semi-supervised-time-series-classification,Semi-supervised time series classification,,time-series
2d007899-14af-45ae-9ac5-024fa8eb2ecb,eeg-decoding,Eeg Decoding,**EEG Decoding** - extracting useful information directly from EEG data.,time-series
8d2a8ec0-1002-42ff-9e2d-b2fe3397c5a4,time-series-alignment,Time Series Alignment,,time-series
8074bffc-e263-433f-8cde-eb2db75cc561,predictive-process-monitoring,Predictive Process Monitoring,A branch of predictive analysis that attempts to predict some future state of a business process.,time-series
7971cf35-e88a-4001-9be6-7c45739f0116,episode-classification,Episode Classification,"Episode classification is a branch of classification that aims to classify groups of observations within a time series (e.g., critical episodes vs. normal episodes).",time-series
63e05a14-f540-4e33-aa19-53baf59311ef,classification-on-time-series-with-missing,Classification on Time Series with Missing Data,,time-series
2fdce4cd-5223-48e2-8fae-6b782919cbd7,classify-murmurs,Classify murmurs,Classify murmurs based on Phonocardiograms (PCGs),time-series
08d0c424-40df-4ab6-b733-7c99d0904f0d,robust-traffic-prediction,Robust Traffic Prediction,,time-series
2d6f0a70-8f86-4ca6-b71f-8f5e6aa7eccf,time-series-averaging,Time Series Averaging,,time-series
6ffb401d-5ff8-4f3f-ada0-267d72f48f16,human-activity-recognition,Human Activity Recognition,Classify various human activities,time-series
b5ad0fdd-8cf7-4af7-818a-a001dbdce22e,time-series-streams,Time Series Streams,,time-series
97d2f181-3871-41f7-8f50-ac0a3fda2772,probabilistic-time-series-forecasting,Probabilistic Time Series Forecasting,,time-series
4bfecb4f-cd20-4292-89c9-72817c3094e0,time-series-prediction,Time Series Prediction,"The goal of **Time Series Prediction** is to infer the future values of a time series from the past.
Source: [Orthogonal Echo State Networks and stochastic evaluations of likelihoods ](https://arxiv.org/abs/1601.05911)",time-series
3e5ffb50-587f-4049-834b-8743135ca731,stock-price-prediction,Stock Price Prediction,**Stock Price Prediction** is the task of forecasting future stock prices based on historical data and various market indicators. It involves using statistical models and machine learning algorithms to analyze financial data and make predictions about the future performance of a stock. The goal of stock price prediction is to help investors make informed investment decisions by providing a forecast of future stock prices.,time-series
4a5945ed-3690-4779-aac0-617ad399a10c,human-motion-prediction,Human motion prediction,"Action prediction is a pre-fact video understanding task that focuses on future states; in other words, it must reason about future states or infer action labels before the action execution has ended.",time-series
4d3bbe5a-6d61-47cf-874f-6b645ad5d798,attention-score-prediction,Attention Score Prediction,"Auditory Attention Score Prediction: Estimating the attention level of the Listener from physiological signals (EEG, GSR, PPG); a regression task. The attention score ranges from 0 to 100.",time-series
58cd1a0a-0d2f-45f9-b5fe-6f5541058171,multivariate-time-series-forecasting,Multivariate Time Series Forecasting,,time-series
d9e2a2f3-4b34-41bf-b53c-834365a1cee2,math-word-problem-solving,Math Word Problem Solving,,time-series
75ef840d-3e7b-4e36-8218-48ba6170ccef,tropical-cyclone-track-forecasting,Tropical Cyclone Track Forecasting,,time-series
ba673f11-b5c1-490a-b2cf-afc6bb1d931f,traffic-data-imputation,Traffic Data Imputation,,time-series
27796464-c6e7-44ba-a388-cade7c9f2a06,univariate-time-series-forecasting,Univariate Time Series Forecasting,,time-series
d462e5a4-41bb-4f45-9a98-e37184bd4f0b,noise-level-prediction,Noise Level Prediction,"T2: Noise Level Prediction: Estimating the noise level experienced by the Listener from physiological signals (EEG, GSR, PPG). Six different levels of background noise (SNR) are used.
Labels: -6, -3, 0, 3, 6, and inf (noise-free), in dB.",time-series
08714fad-e179-4a6c-8c86-b6004c86a0d3,sequential-bayesian-inference,Sequential Bayesian Inference,"Also known as Bayesian filtering or [recursive Bayesian estimation](https://en.wikipedia.org/wiki/Recursive_Bayesian_estimation), this task aims to perform inference on latent state-space models.",time-series
71978ea9-54ad-4a79-9efa-e4437b740ed7,stock-prediction,Stock Prediction,,time-series
e59310d4-a7be-4a22-b38a-11b5f270d82b,dynamic-time-warping,Dynamic Time Warping,,time-series
ca9970bd-4090-46cd-baec-31862ea58ccc,phonocardiogram-classification,Phonocardiogram Classification,Classify labels/murmur/clinical outcome based on Phonocardiograms (PCGs),time-series
42988484-1e46-4040-ad57-943298a658b0,activity-prediction,Activity Prediction,Predict human activities in videos,time-series
a988d358-dfad-45bf-b0ac-c3e3fade21ab,spatio-temporal-forecasting,Spatio-Temporal Forecasting,,time-series
6c329bcc-1533-4147-ba57-f8316fc18dad,clustering-multivariate-time-series,Clustering Multivariate Time Series,,time-series