taskID,pwc_id,title,description,areaID 6fd143b2-e3dc-457a-a199-b8d671750e3c,data-poisoning,Data Poisoning,"**Data Poisoning** is an adversarial attack that tries to manipulate the training dataset in order to control the prediction behavior of a trained model such that the model will label malicious examples into a desired class (e.g., labeling spam e-mails as safe). Source: [Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics ](https://arxiv.org/abs/1907.07296)",adversarial 9f76c114-12a8-47fa-aa1f-8a680ec819e2,model-posioning,Model Poisoning,,adversarial 5fddb458-0a2d-435b-bcb0-322be02b3a48,dnn-testing,DNN Testing,Testing the reliability of DNNs.,adversarial 8a2df849-20d2-47cd-ad63-3d4a698aafb3,provable-adversarial-defense,Provable Adversarial Defense,,adversarial b9e36f78-ce57-4421-a42b-e8e26d39c0a8,backdoor-defense-for-data-free-distillation,Backdoor Defense for Data-Free Distillation with Poisoned Teachers,Defend against backdoor attacks from poisoned teachers.,adversarial 111f3a46-923c-4580-9e22-58a0e076d3f0,phishing-website-detection,Phishing Website Detection,,adversarial cbfecba9-553e-458c-a23b-25249ff518a9,website-fingerprinting-defense,Website Fingerprinting Defense,,adversarial 9b2678f3-f3ed-44dc-a55c-78fd5d4f5fd0,backdoor-attack,Backdoor Attack,"Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.",adversarial 6d013445-7084-4675-892c-0f0c0829cc9d,adversarial-defense,Adversarial Defense,"Competitions with currently unpublished results: - [TrojAI](https://pages.nist.gov/trojai/)",adversarial c445a68f-d8f7-49df-b1d3-c1e6322ed571,real-world-adversarial-attack,Real-World Adversarial Attack,Adversarial attacks that are presented in the real world,adversarial fce9fa98-756e-4701-8003-159adc7c6a10,inference-attack,Inference Attack,,adversarial b14f408c-6b59-4dee-935c-2ce10c601198,optimize-the-trajectory-of-uav-which-plays-a,Optimize the trajectory of UAV which plays a BS in communication system,,adversarial 36bcd0a3-4fec-4413-9cd7-ca7d78ec550b,adversarial-attack,Adversarial Attack,"An **Adversarial Attack** is a technique to find a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to human eyes. Source: [Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks ](https://arxiv.org/abs/2002.05388)",adversarial 248fc705-85fc-4d29-878b-800a22f84a92,adversarial-text,Adversarial Text,"Adversarial Text refers to a specialised text sequence that is designed specifically to influence the prediction of a language model. Generally, Adversarial Text attacks are carried out on Large Language Models (LLMs). 
Research on understanding different adversarial approaches can help us build effective defense mechanisms to detect malicious text input and build robust language models.",adversarial c6924411-81d5-487d-a0f7-63b6706459af,design-synthesis,Design Synthesis,,adversarial f243de31-4f70-4a28-b60e-21da65356e64,adversarial-robustness,Adversarial Robustness,Adversarial Robustness evaluates the vulnerabilities of machine learning models under various types of adversarial attacks.,adversarial 113c6061-e6bb-484c-ad69-4d97b985d3c4,exposure-fairness,Exposure Fairness,,adversarial 599853aa-d278-44b1-b358-a2e8f3640866,website-fingerprinting-attacks,Website Fingerprinting Attacks,,adversarial c383316c-821c-420b-81b7-b0b327a6b6d2,model-extraction,Model extraction,"Model extraction attacks, aka model stealing attacks, are used to extract the parameters from the target model. Ideally, the adversary will be able to steal and replicate a model that will have a very similar performance to the target model.",adversarial 6ba6acaa-d106-40bc-8cdc-f911d90fb8fa,audio-declipping,Audio declipping,Audio declipping is the task of estimating the original audio signal given its clipped measurements.,audio 0e3930a8-378c-4074-999c-17dc325298bc,voice-conversion,Voice Conversion,"**Voice Conversion** is a technology that modifies the speech of a source speaker and makes their speech sound like that of another target speaker without changing the linguistic information. Source: [Joint training framework for text-to-speech and voice conversion using multi-source Tacotron and WaveNet ](https://arxiv.org/abs/1903.12389)",audio 4165f4be-9672-49ed-8d31-ed8ba985ab8e,chord-recognition,Chord Recognition,,audio d34e2b02-c8f6-4047-97f3-c13bc7cfc868,bandwidth-extension,Bandwidth Extension,Bandwidth extension is the task of expanding the bandwidth of a signal in a way that approximates the original or desired higher bandwidth signal.,audio 3bb03e25-85da-49dd-aa3e-58450bd88551,synthetic-speech-detection,Synthetic Speech Detection,Detect fake synthetic speech generated using machine learning,audio 19f79f8d-29e7-40a8-a5ae-02e9b792b259,audio-tagging,Audio Tagging,"Audio tagging is a task to predict the tags of audio clips. Audio tagging tasks include music tagging, acoustic scene classification, audio event classification, etc.",audio f6ed6635-2302-47c5-afb8-e47ecfa21b93,directional-hearing,Directional Hearing,Extremely low-latency audio source separation from a known direction of arrival.,audio 2a7caa61-07d0-4bc9-862c-342e602d956d,bird-audio-detection,Bird Audio Detection,,audio 561a4f04-5199-498c-ae64-bbd553563218,shooter-localization,Shooter Localization,Shooter localization based on videos.,audio 9af95e17-c86e-4384-aaa7-33a0d03e133a,speaker-orientation,Speaker Orientation,Direction of Voice or speaker orientation of the person with respect to the target device.,audio a195b1a7-342e-4610-bbb5-4317e7179566,audio-generation,Audio Generation,"Audio generation (synthesis) is the task of generating raw audio such as speech. ( Image credit: [MelNet](https://arxiv.org/pdf/1906.01083v1.pdf) )",audio cfe1fce7-ac96-468a-b13c-f7024e60a0fa,sound-event-detection,Sound Event Detection,"**Sound Event Detection** (SED) is the task of recognizing the sound events and their respective temporal start and end time in a recording. Sound events in real life do not always occur in isolation, but tend to considerably overlap with each other. Recognizing such overlapping sound events is referred as polyphonic SED. 
Source: [A report on sound event detection with different binaural features ](https://arxiv.org/abs/1710.02997)",audio 5feddbfc-e04c-44b7-b374-f2b783dad8f5,audio-signal-processing,Audio Signal Processing,"This is a general task that covers transforming audio inputs into audio outputs, not limited to existing PaperWithCode categories of Source Separation, Denoising, Classification, Recognition, etc.",audio 66606443-b911-4e5a-b3ec-38c0162f287a,direction-of-arrival-estimation,Direction of Arrival Estimation,Estimating the direction-of-arrival (DOA) of a sound source from multi-channel recordings.,audio 4a332fbd-e4c6-47fe-9523-a1861ad61b09,soundscape-evaluation,Soundscape evaluation,Evaluation of soundscape in accordance to ISO/TS 12913-2,audio 6026d958-26f0-4b30-bb24-2cbc6c8924ae,room-impulse-response,Room Impulse Response (RIR),"**Room Impulse Response (RIR)** is an audio signal processing task that involves capturing and analyzing the acoustic characteristics of a room or an environment. The goal is to measure and model the way sound waves interact with the space, including reflections, reverberation, and echoes.",audio ee04f35b-a512-4d22-9a85-fa30ab6879b8,vowel-classification,Vowel Classification,,audio 88c2e3a6-7ce0-41ec-a132-8fce7643112a,audio-fingerprint,Audio Fingerprint,,audio 8df66d85-353e-4f5f-a028-84249de34876,pitch-control,Pitch control,,audio f733071e-35e2-450f-839d-ffe6adced489,underwater-acoustic-classification,Underwater Acoustic Classification,Classification of underwater acoustic data,audio a4930f8b-c5b5-49a8-b0b5-063b8d10b41c,active-speaker-localization,Active Speaker Localization,"Active Speaker Localization (ASL) is the process of spatially localizing an active speaker (talker) in an environment using either audio, vision or both.",audio 4ca24c2f-1abe-4606-b72f-4135b0baefd3,acoustic-scene-classification,Acoustic Scene Classification,"The goal of acoustic scene classification is to classify a test recording into one of the provided predefined classes that characterizes the environment in which it was recorded. Source: [DCASE 2019](http://dcase.community/challenge2019/task-acoustic-scene-classification) Source: [DCASE 2018](https://dcase.community/challenge2018/task-acoustic-scene-classification)",audio 48356260-c73f-481d-819f-1cb9ccd2a4d4,timbre-interpolation,Timbre Interpolation,,audio c13c7126-bd53-455c-aeaa-9e66569928d5,audio-denoising,Audio Denoising,,audio db9465ba-8c3a-4462-84f1-a430feb25215,sound-event-localization-and-detection,Sound Event Localization and Detection,"Given multichannel audio input, a sound event detection and localization (SELD) system outputs a temporal activation track for each of the target sound classes, along with one or more corresponding spatial trajectories when the track indicates activity. 
This results in a spatio-temporal characterization of the acoustic scene that can be used in a wide range of machine cognition tasks, such as inference on the type of environment, self-localization, navigation without visual input or with occluded targets, tracking of specific types of sound sources, smart-home applications, scene visualization systems, and audio surveillance, among others.",audio 7ea55ca2-f7fe-4b1b-ae2f-ca23aa3242ed,audio-multiple-target-classification,Audio Multiple Target Classification,,audio e95aaace-7fbb-4d0e-bf07-cc1ea8c2c5e8,bird-species-classification-with-audio-visual,Bird Species Classification With Audio-Visual Data,,audio 8708ffc6-dab4-47aa-bfc3-d999bac28bc3,audio-captioning,Audio captioning,,audio 2843e13c-9847-43bf-abb5-703a3f21f259,audio-inpainting,Audio inpainting,Filling in holes in audio data,audio 67d41471-ddbf-48c2-8071-3a75ce916379,bird-classification,Bird Classification,,audio 829bc069-f39c-45ff-854e-f17ff108abcd,music-compression,Music Compression,,audio 69524fb8-047d-4a6c-a2b4-b512767874e7,gunshot-detection,Gunshot Detection,,audio 8f67593e-1263-4a59-9067-c22270bc4c55,language-identification,Language Identification,Language identification is the task of determining the language of a text.,audio fd43566c-7133-4956-bd1c-a91e9636276e,instrument-recognition,Instrument Recognition,,audio e202d49a-33ee-4d68-9509-994b58fca9fd,audio-super-resolution,Audio Super-Resolution,AUDIO SUPER-RESOLUTION or speech bandwidth extension (Upsampling Ratio = 2),audio 680b1bec-9873-42c2-802d-183552d2f438,target-speaker-extraction,Target Speaker Extraction,Extract the dialogue content of the specified target in a multi-person dialogue.,audio 754c67b9-747b-45ab-b324-67ac73231e91,audio-classification,Audio Classification,"**Audio Classification** is a machine learning task that involves identifying and tagging audio signals into different classes or categories. The goal of audio classification is to enable machines to automatically recognize and distinguish between different types of audio, such as music, speech, and environmental sounds.",audio aefa41ac-5fce-4299-8939-342fce8d2511,environmental-sound-classification,Environmental Sound Classification,Classification of Environmental Sounds. Most often sounds found in Urban environments. Task related to noise monitoring.,audio 50dd1159-d7f2-4d93-aa50-6cd8334ff5dd,fake-voice-detection,fake voice detection,,audio d1c97289-6799-4b81-894a-17d16fbd3b5d,audio-source-separation,Audio Source Separation,"**Audio Source Separation** is the process of separating a mixture (e.g. a pop band recording) into isolated sounds from individual sources (e.g. just the lead vocals). Source: [Model selection for deep audio source separation via clustering analysis ](https://arxiv.org/abs/1910.12626)",audio 0af1b5a6-e530-44b2-b05e-071299a35e4e,audio-signal-recognition,Audio Signal Recognition,,audio bad4a297-3f72-4d3e-b3e1-4ed5c9c7a791,text-to-music-generation,Text-to-Music Generation,,audio 6a894212-022e-40e2-acfe-36b1c864776c,audio-effects-modeling,Audio Effects Modeling,"Modeling of audio effects such as reverberation, compression, distortion, etc.",audio 8846f94c-b4c1-43aa-b2bb-21bcdcf36f01,few-shot-audio-classification,Few-Shot Audio Classification,Few-shot classification for audio signals. 
Presents a unique challenge compared to other few-shot domains as we deal with temporal dependencies as well,audio 7e2c45e2-fcb5-4e23-83c1-2f8235c3a041,sound-classification,Sound Classification,,audio f240f9ae-77e5-401d-8b2b-0c437bd67844,audio-visual-synchronization,Audio-Visual Synchronization,,audio b475e807-f49a-45be-91bd-d1235118435c,zero-shot-multi-speaker-tts,Zero-Shot Multi-Speaker TTS,,audio 4cf0d042-a585-45f0-b4ab-e5c79f1948c1,music-generation,Music Generation,"**Music Generation** is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.",audio 9a32a5e5-503c-42ef-8b7c-8190ac8d585d,streaming-target-sound-extraction,Streaming Target Sound Extraction,"This task is a variant of the [Target Sound Extraction](https://paperswithcode.com/task/target-sound-extraction) task, with the constraint of causal streaming inference. Aiming for an algorithmic latency of less than 20 ms, at each time step, streaming audio models operate on an input audio chunk of length less than 20 ms. The causal constraint means that the model only has the knowledge of past chunks and no future chunks.",audio 999913fe-91a0-4917-b9a9-74d4a43c4165,real-time-directional-hearing,Real-time Directional Hearing,Directional hearing models that also support real-time on-device inference,audio 587ca093-54b0-487b-b699-444e593f7127,voice-anti-spoofing,Voice Anti-spoofing,Discriminate genuine speech and spoofing attacks,audio 844b362b-5213-4a85-b6b7-128252fbb22c,audio-dequantization,Audio Dequantization,Audio Dequantization is a process of estimating the original signal from its quantized counterpart.,audio 1b199a3c-ad07-4fb9-8716-26bcd700a84a,single-label-target-sound-extraction,Single-Label Target Sound Extraction,"Single-Label Target Sound Extraction is the task of extracting a given class of sounds from an audio mixture. The audio mixture may contain background noise with a relatively low amplitude compared to the foreground mixture components. The choice of the sound class is provided as input to the model in form of a string, integer, or a one-hot encoding of the sound class.",audio 4ec78a95-37bc-4f77-8c10-47b70e47eb39,target-sound-extraction,Target Sound Extraction,"Target Sound Extraction is the task of extracting a sound corresponding to a given class from an audio mixture. The audio mixture may contain background noise with a relatively low amplitude compared to the foreground mixture components. 
The choice of the sound class is provided as input to the model in form of a string, integer, or a one-hot encoding of the sound class.",audio 8d54025e-e889-4323-9c14-9a5b18ecef29,acoustic-novelty-detection,Acoustic Novelty Detection,"Detect novel events given acoustic signals, either in domestic or outdoor environments.",audio 6cb0994c-edc0-4134-815f-828476da0906,self-supervised-sound-classification,Self-Supervised Sound Classification,,audio 97dddbfd-b0de-489d-8acb-e265993711b1,inference-optimization,Inference Optimization,,audio 6a0fc811-cd11-41ea-8483-b43384b313b0,fault-localization,Fault localization,,computer-code b7123449-480d-483f-983b-e2bbdc3a49b9,nmt,NMT,"Neural machine translation is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.",computer-code 75054394-b88e-47b8-b1b0-b8284f044022,code-generation,Code Generation,"**Code Generation** is an important field to predict explicit code or program structure from multimodal data sources such as incomplete code, programs in another programming language, natural language descriptions or execution examples. Code Generation tools can assist the development of automatic programming tools to improve programming productivity. Source: [Deep Learning for Source Code Modeling and Generation ](https://arxiv.org/abs/2002.05442) Image source: [Measuring Coding Challenge Competence With APPS](https://paperswithcode.com/paper/measuring-coding-challenge-competence-with)",computer-code c545b464-3b82-47d9-9cda-451e1004cfc4,compiler-optimization,Compiler Optimization,Machine learning guided compiler optimization,computer-code 19a10edb-2e12-4e19-af0e-8cd08184f746,write-computer-programs-from-specifications,Write Computer Programs From Specifications,,computer-code 5e076123-b1ea-4d08-b4e7-c5a715c0020e,api-sequence-recommendation,API Sequence Recommendation,,computer-code 3214205e-7197-4fe6-930e-b937df6f53ac,code-summarization,Source Code Summarization,"**Code Summarization** is a task that tries to comprehend code and automatically generate descriptions directly from the source code. Source: [Improving Automatic Source Code Summarization via Deep Reinforcement Learning ](https://arxiv.org/abs/1811.07234)",computer-code 58848877-3c4b-46d9-85b2-228c41f09e5d,contextual-embedding-for-source-code,Contextual Embedding for Source Code,,computer-code 78d3301a-61ff-42ab-a2b8-ad3da3d3dac1,formalize-foundations-of-universal-algebra-in,Formalize foundations of universal algebra in dependent type theory,,computer-code d20340c9-b513-47f0-bdc6-17e13f1be88e,sentinel-1-sar-processing,Sentinel-1 SAR processing,,computer-code 84bf8080-b8a2-4427-b499-f12b592461d5,enumerative-search,Enumerative Search,,computer-code f47512f2-93b6-4870-ad7b-c127aa56b764,type-prediction,Type prediction,,computer-code f01eb769-bfc3-44bc-a962-cccbc6810df1,code-translation,Code Translation,"Code translation is the process of converting code written in one programming language to another programming language while maintaining the same functionality. This process is also known as code conversion, source-to-source translation, or transpilation. Code translation is often performed when developers want to take advantage of new programming languages, improve code performance, or maintain legacy systems. 
Some common examples include translating code from Python to Java, or from JavaScript to TypeScript.",computer-code faee1947-6132-4107-9d9b-0c4de77d7e7c,value-prediction,Value prediction,,computer-code 59fd5ddc-9794-4ff7-b331-9e122c81b6ba,program-repair,Program Repair,Task of teaching ML models to modify an existing program to fix a bug in a given code.,computer-code 9c9d0413-4d42-40a7-9297-bea6aa9981ad,variable-misuse,Variable misuse,,computer-code e4be7a9c-4edc-473b-9e28-c8dc1d737593,nature-inspired-optimization-algorithm,Nature-Inspired Optimization Algorithm,,computer-code e9c67180-bdf1-41e3-bcd7-5b7d71650315,git-commit-message-generation,Git Commit Message Generation,,computer-code d9a0706a-14a3-4532-9eb9-03b414cee8fd,tiling-deployment,Tiling & Deployment,Data tiling over 3 memory hierarchy levels and deployment on microcontroller.,computer-code d7ef25b8-2e06-4e5d-a5e6-6adeb99f8319,single-image-portrait-relighting,Single-Image Portrait Relighting,,computer-code b161dcff-a1d9-4a07-b7cb-08d83395acf7,code-classification,Code Classification,,computer-code 98419e53-e9ec-46ab-ad8d-e25255867da1,sparse-subspace-based-clustering,Sparse subspace-based clustering,,computer-code 7d08fd6c-b509-4f27-8d16-386deda40095,learning-to-execute,Learning to Execute,,computer-code 1d664fdd-7531-4096-9924-84505629695c,exception-type,Exception type,,computer-code d3b06eb8-117c-42d5-acb2-63c6d6bf1769,neural-network-simulation,Neural Network simulation,Simulation of abstract or biophysical neural networks in silico,computer-code 10932eaa-8f78-467d-ab3e-7ee88e657bee,edit-script-generation,Edit script generation,"Generating edit scripts by comparing 2 different files or strings to convert one to another. this script will contain instruction like insert, delete and substitute.",computer-code bfd8624a-a1aa-4a9e-a909-36203d2f157c,low-rank-compression,Low-rank compression,,computer-code e12aad32-cad2-4d13-9a10-858777de85b0,chart-question-answering,Chart Question Answering,Question Answering task on charts images,computer-code c069c9b2-2a5d-499f-9a12-159aa6395942,swapped-operands,Swapped operands,,computer-code ce8d93c0-a4da-407f-8716-bab66d36b832,function-docstring-mismatch,Function-docstring mismatch,,computer-code 06837c7e-0f2a-4d9a-97d7-a96e5c1f1656,webcam-rgb-image-classification,Webcam (RGB) image classification,,computer-code ab6bc3b7-9161-4dcd-842f-87f01f903c0a,codesearchnet-java,CodeSearchNet - Java,,computer-code 931d6c91-f3a1-4161-9430-d358dbca7631,code-documentation-generation,Code Documentation Generation,"Code Documentation Generation is a supervised task where a code function is the input to the model, and the model generates the documentation for this function. 
Description from: [CodeTrans: Towards Cracking the Language of Silicone's Code Through Self-Supervised Deep Learning and High Performance Computing](https://arxiv.org/pdf/2104.02443.pdf)",computer-code a0031fff-0de2-43db-9f82-2aae2fa3841e,editcompletion,EditCompletion,"Given a code snippet that is partially edited, the goal is to predict a completion of the edit for the rest of the snippet.",computer-code 97f19c78-ae0e-4c10-a29c-9337648c6693,motion-style-transfer,Motion Style Transfer,,computer-code 09c72d3d-e4d7-489c-b389-ca859c74c2fa,file-difference,File difference,"Generate edit script comparing 2 strings or files, which contains instruction of insert, delete and substitute to convert first string to the second.",computer-code 98ab73fa-eb91-4fcc-b5cd-0133ca4d0658,video-defect-classification,Video Defect Classification,"Quick-View (QV) Inspection is one commonly-used technology. However, it is quite labor-intensive to find defects from a huge number of QV videos. To tackle this problem, we propose a video defect classification task, which is to predict the categories of pipe defects in a short QV video.",computer-code 2989d89f-6353-44be-9d79-04c30cb7a830,spectral-efficiency-analysis-of-uplink,Spectral Efficiency Analysis of Uplink-Downlink Decoupled Access in C-V2X Networks,Code for Spectral Efficiency Analysis of Uplink-Downlink Decoupled Access in C-V2X Networks,computer-code ae0cbbd0-4a0a-439c-a5c4-10b3185e2213,sql-to-text,SQL-to-Text,"( Image credit: [SQL-to-Text Generation with Graph-to-Sequence Model](https://arxiv.org/pdf/1809.05255v2.pdf) )",computer-code 3c3c8aa2-ad5c-4c0b-9167-957092eaa55c,program-synthesis,Program Synthesis,"Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity. Program synthesis often involves the use of advanced algorithms, artificial intelligence, and machine learning techniques to search the space of possible programs that meet the given constraints. This process can be guided by a variety of techniques, such as constraint solving, symbolic execution, and genetic algorithms.",computer-code 964bd246-ee2b-4d83-a5ff-f15c5c3f0ea2,log-parsing,Log Parsing,"**Log Parsing** is the task of transforming unstructured log data into a structured format that can be used to train machine learning algorithms. The structured log data is then used to identify patterns, trends, and anomalies, which can support decision-making and improve system performance, security, and reliability. The log parsing process involves the extraction of relevant information from log files, the conversion of this information into a standardized format, and the storage of the structured data in a database or other data repository.",computer-code a19f918a-9489-49b0-a094-17ca6de850bf,program-induction,Program induction,Generating program code for domain-specific tasks,computer-code c7790180-feb0-43f5-b48c-f58f53a5cd29,programming-error-detection,Programming Error Detection,,computer-code 63bcab46-013f-4c52-94f1-a189175799f2,text-to-sql,Text-To-SQL,"**Text-to-SQL** is a task in natural language processing (NLP) where the goal is to automatically generate SQL queries from natural language text. 
The task involves converting the text input into a structured representation and then using this representation to generate a semantically correct SQL query that can be executed on a database. ( Image credit: [SyntaxSQLNet](https://arxiv.org/pdf/1810.05237v2.pdf) )",computer-code 046d9b48-a71d-48f1-a1a7-5d881c7d74e8,sql-synthesis,SQL Synthesis,,computer-code f3f0fadc-f3dd-41c6-b6af-a83c786e8fd6,text-to-code-generation,Text-to-Code Generation,"**Text-to-Code Generation** is a task where we can generate code based on the natural language description. Source: [Text-to-code Generation with TensorFlow, 🤗 & MBPP](https://www.kaggle.com/code/rhtsingh/text-to-code-generation-with-tensorflow-mbpp)",computer-code 6bfb15cd-9e35-4237-a344-aa82d37b302b,paraphrase-generation,Paraphrase Generation,"Paraphrase Generation involves transforming a natural language sentence to a new sentence that has the same semantic meaning but a different syntactic or lexical surface form.",computer-code 06092d2d-2132-486b-b29a-dd6c1b1940b2,sql-chatbots,Sql Chatbots,,computer-code 0389d435-5263-44bf-9007-de7bbdde59b9,code-search,Code Search,"The goal of **Code Search** is to retrieve code fragments from a large code corpus that most closely match a developer’s intent, which is expressed in natural language. Source: [When Deep Learning Met Code Search ](https://arxiv.org/abs/1905.03813)",computer-code 5fafca91-f1e5-4963-a28f-e42e29bb7b76,wrong-binary-operator,Wrong binary operator,,computer-code 204a46eb-75a6-4ac4-84d5-cdd0d1ec71a1,code-comment-generation,Code Comment Generation,,computer-code fc88936a-738b-47d7-8cdd-227f95f86c68,annotated-code-search,Annotated Code Search,Annotated code search is the retrieval of code snippets paired with brief descriptions of their intent using natural language queries.,computer-code 0e91d5bd-a029-4c3e-8ef7-208168a52e26,manufacturing-quality-control,Manufacturing Quality Control,AI for Quality control in manufacturing processes.,computer-vision 1b3e56c7-7bad-4419-a408-8b0d268a5fdb,fine-grained-image-recognition,Fine-Grained Image Recognition,,computer-vision a259573b-8d0d-43a0-bd45-430123eb1a07,camouflage-segmentation,Camouflage Segmentation,,computer-vision 976463fe-5f4e-43b6-a1af-4c5884e107a3,layout-design,Layout Design,,computer-vision ab6a11d2-2972-41f0-8c68-b46e2ddcaa63,video-propagation,Video Propagation,Propagating information in processed frames to unprocessed frames,computer-vision bf563485-7d8a-4dc6-84b5-41e11dcbcd97,few-shot-temporal-action-localization,Few Shot Temporal Action Localization,Detect Action using few labeled samples,computer-vision c2c8912d-119e-43ee-a464-9acb2d87509b,one-shot-object-detection,One-Shot Object Detection,"( Image credit: [Siamese Mask R-CNN ](https://github.com/bethgelab/siamese-mask-rcnn) )",computer-vision 9958c075-b8e0-4938-baa5-72f9ebb2dac5,color-manipulation,color manipulation,,computer-vision 5024747a-5530-43eb-9b6c-27592f836d80,single-class-few-shot-image-synthesis,Single class few-shot image synthesis,"The goal of the single class few-shot image synthesis task is to learn a generative model that can generate samples with visual attributes from as few as two input images belonging to the same class.",computer-vision c208b637-f319-447b-9355-a42f680ca884,sperm-morphology-classification,Sperm Morphology Classification,Multi-class classification of sperm head morphology.,computer-vision f99bf7fa-d464-4abe-9c78-f777bed1ccab,salt-and-pepper-noise-removal,Salt-And-Pepper Noise Removal,"Salt-and-pepper noise is a form of noise sometimes 
seen on images. It is also known as impulse noise. This noise can be caused by sharp and sudden disturbances in the image signal. It presents itself as sparsely occurring white and black pixels. ( Image credit: [NAMF](https://arxiv.org/pdf/1910.07787v1.pdf) )",computer-vision c0e9a69a-fee6-4524-800f-104ea7bad6d7,style-transfer,Style Transfer,"**Style Transfer** is a technique in computer vision and graphics that involves generating a new image by combining the content of one image with the style of another image. The goal of style transfer is to create an image that preserves the content of the original image while applying the visual style of another image. ( Image credit: [A Neural Algorithm of Artistic Style](https://arxiv.org/pdf/1508.06576v2.pdf) )",computer-vision a9173b27-50ae-49ba-97a8-c3dd064a3269,intensity-image-denoising,intensity image denoising,,computer-vision b83ddd70-0cd9-4397-8788-996b5e27f738,table-recognition,Table Recognition,,computer-vision cdd45765-8ad4-49ca-9e6c-f940a9bc91a6,interest-point-detection,Interest Point Detection,,computer-vision f7d273ea-b979-4da0-aa7a-055cf5d2d820,lossy-compression-artifact-reduction,Lossy-Compression Artifact Reduction,,computer-vision 0ccc3c6e-95f2-4e5b-abf8-2a19cd3e2b70,text-based-image-editing,Text-based Image Editing,,computer-vision 23799022-93d0-4bde-9012-0ae14d607691,3d-face-animation,3D Face Animation,Image: [Cudeiro et al](https://arxiv.org/pdf/1905.03079v1.pdf),computer-vision cfadd6e5-3bdd-42b8-8f28-c48cf68209d5,physiological-computing,Physiological Computing,,computer-vision 295509b1-485a-41a3-8034-874a8ab97592,unsupervised-object-localization,Unsupervised Object Localization,,computer-vision 199246ce-afca-48a2-9d89-089d1e1d66d6,traffic-sign-recognition,Traffic Sign Recognition,"Traffic sign recognition is the task of recognising traffic signs in an image or video. 
( Image credit: [Novel Deep Learning Model for Traffic Sign Detection Using Capsule Networks ](https://arxiv.org/pdf/1805.04424v1.pdf) )",computer-vision df31dc86-76af-4e21-a47c-97c54fd94113,facial-recognition-and-modelling,Facial Recognition and Modelling,Facial tasks in machine learning operate based on images or video frames (or other datasets) focussed on human faces.,computer-vision fd6a4d2d-2c82-4e31-8c5f-68466c1f51a0,video-compressive-sensing,Video Compressive Sensing,,computer-vision a0af1805-bc94-457d-b671-7c67fded3feb,semantic-part-detection,Semantic Part Detection,,computer-vision f42db817-aa22-495b-bf69-93d873e3e0d9,unsupervised-3d-semantic-segmentation,Unsupervised 3D Semantic Segmentation,Unsupervised 3D Semantic Segmentation,computer-vision ac67f6cd-349e-4874-9965-826851c2f914,future-hand-prediction,Future Hand Prediction,,computer-vision 92663016-88d8-4095-baa1-5b2147142277,matching-disparate-images,Matching Disparate Images,,computer-vision 528e9c82-8046-4835-942b-208ce5d888db,3d-feature-matching,3D Feature Matching,Image: [Choy et al](https://paperswithcode.com/paper/fully-convolutional-geometric-features),computer-vision 8126ae3d-69dd-4da7-bedc-8b0ccfd0f86b,video-editing,Video Editing,,computer-vision 8899ac03-ebcf-423c-925f-21bd5802fb75,spectral-reconstruction,Spectral Reconstruction,,computer-vision ae9cd9a5-5b66-44ba-b10a-7318c8c8c40c,depth-image-estimation,Depth Image Estimation,,computer-vision d58fc61f-40bc-4c00-9c91-3853117c907f,multi-hypotheses-3d-human-pose-estimation,Multi-Hypotheses 3D Human Pose Estimation,,computer-vision 0be19bff-e7fb-419d-8c06-f9d0e56ae6d7,svbrdf-estimation,SVBRDF Estimation,SVBRDF Estimation,computer-vision 44776105-7a6d-43f5-b363-dbca3660afa2,document-enhancement,Document Enhancement,,computer-vision 945ada03-cc7a-4f03-824a-2a3032c0d932,robust-face-alignment,Robust Face Alignment,"Robust face alignment is the task of face alignment in unconstrained (non-artificial) conditions. ( Image credit: [Deep Alignment Network](https://github.com/MarekKowalski/DeepAlignmentNetwork) )",computer-vision e0531d71-eeca-4a97-be2e-6194dabb2737,rice-grain-disease-detection,Rice Grain Disease Detection,,computer-vision 980e490b-c80e-4ca0-912a-27b368725e23,offline-surgical-phase-recognition,Offline surgical phase recognition,"Offline surgical phase recognition: the first 40 videos to train, the last 40 videos to test.",computer-vision 95cd9793-d1d5-42e2-913a-4cdc9c4bbf49,explainable-models,Explainable Models,,computer-vision a1af98e8-122d-4a57-bee3-799300dc7e0f,kinship-face-generation,Kinship face generation,Kinship face generation,computer-vision 48098bc6-f28c-453b-982e-a5582b4fed5e,sensor-modeling,Sensor Modeling,"( Image credit: [LiDAR Sensor modeling and Data augmentation with GANs for Autonomous driving](https://arxiv.org/abs/1905.07290) )",computer-vision 371344a1-3fcc-400e-85b6-1a034b261248,face-quality-assessement,Face Quality Assessement,Estimate the usability of a given face image for recognition,computer-vision 4e0c0c5d-a920-477e-a5cb-c1bb3adb6b82,ifc-entity-classification,IFC Entity Classification,,computer-vision 69107063-fb16-4fe8-a948-e0899d25a565,fashion-compatibility-learning,Fashion Compatibility Learning,,computer-vision a7309764-d1c7-4316-a05a-3300e44b360d,robust-face-recognition,Robust Face Recognition,"Robust face recognition is the task of performing recognition in an unconstrained environment, where there is variation of view-point, scale, pose, illumination and expression of the face images. 
( Image credit: [MeGlass dataset](https://github.com/cleardusk/MeGlass) )",computer-vision 66b392ce-d32b-42b3-b883-8bf4d5f0126b,color-image-denoising,Color Image Denoising,,computer-vision a43db77c-38a0-48a9-8f2d-e5fbd6f75d44,adversarial-attack-detection,Adversarial Attack Detection,The detection of adversarial attacks.,computer-vision b8f48755-6116-4204-8a26-83fe84141f68,dense-object-detection,Dense Object Detection,,computer-vision 1c867704-6058-4853-8731-7d22169b170e,video-style-transfer,Video Style Transfer,,computer-vision 877be219-cec3-41af-91c7-4e009a583454,audio-visual-video-captioning,Audio-Visual Video Captioning,,computer-vision 22a42437-c945-4d1a-9bb8-eea0af95149b,weakly-supervised-panoptic-segmentation,Weakly-supervised panoptic segmentation,,computer-vision e444f779-29fe-4691-ab4d-00e575e40fda,satellite-image-classification,Satellite Image Classification,"Satellite image classification is the most significant technique used in remote sensing for the computerized study and pattern recognition of satellite information, which is based on diversity structures of the image that involve rigorous validation of the training samples depending on the used classification algorithm.",computer-vision 5fca84f4-032f-41f7-b7cc-dae30d7cb4c0,camera-calibration,Camera Calibration,"Camera calibration involves estimating camera parameters(including camera intrinsics and extrinsics) to infer geometric features from captured sequences, which is crucial for computer vision and robotics. Driven by different architectures of the neural network, the researchers have developed two main paradigms for learning-based camera calibration and its applications. One is Regression-based Calibration,Reconstruction-based Calibration is another.",computer-vision 19691350-7c8f-4b13-a0af-a76d6db3da27,face-presentation-attack-detection,Face Presentation Attack Detection,,computer-vision 5130d8dd-f687-41db-b9a8-490fd4c87442,shadow-removal,Shadow Removal,Remove shadow from background,computer-vision a71b5dab-267d-4c86-9f40-85e64403f5ed,prostate-zones-segmentation,Prostate Zones Segmentation,,computer-vision 4a2ac0d8-b946-40c9-a450-d146ffd98500,template-matching,Template Matching,,computer-vision da62d84f-4fd9-4c9c-aa3e-4c22f16704a1,zero-shot-action-recognition,Zero-Shot Action Recognition,,computer-vision f77daf68-708c-4c47-84b8-a29c5188a0b9,motion-prediction,motion prediction,,computer-vision b8cf8c5c-e1ca-4a6c-8347-dbe4857fd7fe,referring-image-matting-refmatte-rw100,Referring Image Matting (RefMatte-RW100),"Expression-based referring image matting on natural images and manually labelled annotations, i.e., RefMatte-RW100, taking the image and a flowery expression as the input.",computer-vision 5e6d2df1-a69f-440e-ae76-81972a889783,video-visual-relation-detection,Video Visual Relation Detection,"**Video Visual Relation Detection (VidVRD)** aims to detect instances of visual relations of interest in a video, where a visual relation instance is represented by a relation triplet with the trajectories of the subject and object. As compared to still images, videos provide a more natural set of features for detecting visual relations, such as the dynamic relations like “A-follow-B” and “A-towards-B”, and temporally changing relations like “A-chase-B” followed by “A-hold-B”. Yet, VidVRD is technically more challenging than ImgVRD due to the difficulties in accurate object tracking and diverse relation appearances in the video domain. 
Source: [ImageNet-VidVRD Video Visual Relation Dataset](https://xdshang.github.io/docs/imagenet-vidvrd.html)",computer-vision fee08922-362d-41d6-8cb6-675db54ba472,person-re-identification,Person Re-Identification,"**Person Re-Identification** is a computer vision task in which the goal is to match a person's identity across different cameras or locations in a video or image sequence. It involves detecting and tracking a person and then using features such as appearance, body shape, and clothing to match their identity in different frames. The goal is to associate the same person across multiple non-overlapping camera views in a robust and efficient manner.",computer-vision aac03e3f-e00a-4918-bde6-f89cebf37dc1,mutual-gaze,Mutual Gaze,Detect if two people are looking at each other,computer-vision 33d9acbe-f338-4007-b382-4a205a2f1f06,affordance-recognition,Affordance Recognition,Affordance recognition from Human-Object Interaction,computer-vision 4a3f8369-ff14-42ad-a1fb-074f77bd9f39,scene-parsing,Scene Parsing,"Scene parsing is to segment and parse an image into different image regions associated with semantic categories, such as sky, road, person, and bed. [MIT Description](http://sceneparsing.csail.mit.edu/#:~:text=Scene%20parsing%20is%20to%20segment,the%20algorithms%20of%20scene%20parsing.)",computer-vision 054df625-1176-4676-ac27-7a0e9158bedd,3d-shape-retrieval,3D Shape Classification,Image: [Sun et al](https://arxiv.org/pdf/1804.04610v1.pdf),computer-vision b8bc11cb-f902-4727-8e03-c93f3cf6198d,semi-supervised-learning-for-image-captioning,Semi Supervised Learning for Image Captioning,,computer-vision 850fe1ca-6a05-4dbf-84c9-810dae4eb6ca,3d-character-animation-from-a-single-photo,3D Character Animation From A Single Photo,Image: [Weng et al](https://arxiv.org/pdf/1812.02246v1.pdf),computer-vision b9ef01c9-01af-4413-9380-79cf4c8bc5f3,fine-grained-image-inpainting,Fine-Grained Image Inpainting,,computer-vision 46d2d68f-146d-40f3-8535-96b516753f90,kinematic-based-workflow-recognition,Kinematic Based Workflow Recognition,,computer-vision 723dc4a2-4bf5-4308-919c-c2a5630befe4,3d-point-cloud-reconstruction,3D Point Cloud Reconstruction,Encoding and reconstruction of 3D point clouds.,computer-vision cebe8ba3-21a5-426d-8af0-c87e50a73c91,3d-instance-segmentation-1,3D Instance Segmentation,Image: [OccuSeg](https://arxiv.org/pdf/2003.06537v3.pdf),computer-vision 6b09bb39-64ff-4185-92ea-12d33274014a,visual-question-answering,Visual Question Answering (VQA),"**Visual Question Answering (VQA)** is a task in computer vision that involves answering questions about an image. The goal of VQA is to teach machines to understand the content of an image and answer questions about it in natural language. Image Source: [visualqa.org](https://visualqa.org/)",computer-vision ec37baec-50ef-4cad-8a65-1ca06f664a7e,bbbc021-nsc-accuracy,BBBC021 NSC Accuracy,"BBBC021 is a dataset of fully imaged human cells. Cells are treated with one of 113 small molecules at 8 concentrations, and fluorescent images are captured staining for nucleus, actin and microtubules. The phenotypic profiling problem is presented, where the goal is to extract features containing meaningful information about the cellular phenotype exhibited. Each of 103 unique compound concentration treatment is labeled with a mechanism-of-action (MOA). The MOA is predicted for each unique treatment (averaging features over all treatment examples) by matching the MOA of the closest point excluding points of the same compound. 
The dataset and more information can be found at https://bbbc.broadinstitute.org/BBBC021.",computer-vision 416172ad-ac7b-4507-b893-25857b06fc14,unsupervised-video-summarization,Unsupervised Video Summarization,"**Unsupervised video summarization** approaches overcome the need for ground-truth data (whose production requires time-demanding and laborious manual annotation procedures), based on learning mechanisms that require only an adequately large collection of original videos for their training. Specifically, the training is based on heuristic rules, like the sparsity, the representativeness, and the diversity of the utilized input features/characteristics.",computer-vision 74804cb0-bfea-4681-9f38-2b47599e887e,age-and-gender-estimation,Age and Gender Estimation,Age and gender estimation is a dual-task of identifying the age via regression analysis and classification of gender of a person.,computer-vision a36d6d88-4d0f-469d-a780-ae938b6eb8b8,hyperspectral-image-segmentation,Hyperspectral Image Segmentation,,computer-vision bc7706ad-b0d8-4d35-ae22-857ff82b660c,font-style-transfer,Font Style Transfer,**Font style transfer** is the task of converting text written in one font into text written in another font while preserving the meaning of the original text. It is used to change the appearance of text while keeping its content intact.,computer-vision ba03b4b0-e787-4958-8cb3-b133bea8a5a3,3d-dense-shape-correspondence,3D Dense Shape Correspondence,"Finding a meaningful correspondence between two or more shapes is one of the most fundamental shape analysis tasks. The problem can be generally stated as: given input shapes S1,S2,...,SN, find a meaningful relation (or mapping) between their elements. Under different contexts, the problem has also been referred to as registration, alignment, or simply, matching. Shape correspondence is a key algorithmic component in tasks such as 3D scan alignment and space-time reconstruction, as well as an indispensable prerequisite in diverse applications including attribute transfer, shape interpolation, and statistical modeling.",computer-vision bec27b92-e0cd-462d-a186-7023850e20bc,box-supervised-instance-segmentation,Box-supervised Instance Segmentation,This task aims to achieve instance segmentation with weakly bounding box annotations.,computer-vision c3dc83df-723e-4689-9505-969450bb0650,scene-change-detection,Scene Change Detection,"Scene change detection (SCD) refers to the task of localizing changes and identifying change-categories given two scenes. A scene can be either an RGB (+D) image or a 3D reconstruction (point cloud). If the scene is an image, SCD is a form of pixel-level prediction because each pixel in the image is classified according to a category. On the other hand, if the scene is point cloud, SCD is a form of point-level prediction because each point in the cloud is classified according to a category. Some example benchmarks for this task are VL-CMU-CD, PCD, and CD2014. Recently, more complicated benchmarks such as ChangeSim, HDMap, and Mallscape are released. 
Models are usually evaluated with the Mean Intersection-Over-Union (Mean IoU), Pixel Accuracy, or F1 metrics.",computer-vision 216f4430-39c9-4d66-ace1-7dce794e6a74,shape-from-texture,Shape from Texture,,computer-vision 394d8587-f5ce-4277-8768-24f864c323b7,object-slam,Object SLAM,SLAM (Simultaneous Localisation and Mapping) at the level of objects,computer-vision 46969927-f59f-4bc0-b744-57c5d8ee6f4b,3d-canonical-hand-pose-estimation,3D Canonical Hand Pose Estimation,Image: [Lin et al](https://arxiv.org/pdf/2006.01320v1.pdf),computer-vision dcf514ab-af7c-4fc3-adf6-8c3321bfa817,vnla,VNLA,Find objects in photorealistic environments by requesting and executing language subgoals.,computer-vision e93ca207-f3ed-4500-aba3-732dc5f16298,multiple-action-detection,Multiple Action Detection,,computer-vision 43cb2468-492d-41b5-a6db-f40ac3e91bd4,pedestrian-attribute-recognition,Pedestrian Attribute Recognition,"Pedestrian attribute recognition is the task of recognizing pedestrian features - such as whether they are talking on a phone, whether they have a backpack, and so on. ( Image credit: [HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis](https://arxiv.org/pdf/1709.09930v1.pdf) )",computer-vision 8b38205a-c5d3-4ba8-a319-a7da2ada5034,multi-human-parsing,Multi-Human Parsing,"Multi-human parsing is the task of parsing multiple humans in crowded scenes. ( Image credit: [Multi-Human Parsing](https://github.com/ZhaoJ9014/Multi-Human-Parsing) )",computer-vision 57311aa0-b8bb-4f60-b386-9e1bb968afdc,fine-grained-action-detection,Fine-Grained Action Detection,,computer-vision b4dc1162-eb1c-41a5-9953-90b9095e3f8e,multi-label-image-retrieval,Multi-Label Image Retrieval,,computer-vision 25ed0d1b-2ef5-4655-bc49-15b5406c85c4,talking-head-generation,Talking Head Generation,"Talking head generation is the task of generating a talking face from a set of images of a person. ( Image credit: [Few-Shot Adversarial Learning of Realistic Neural Talking Head Models](https://arxiv.org/pdf/1905.08233v2.pdf) )",computer-vision d718cff9-ea8e-4d66-9b82-6a4ad11fc8a6,overlapped-100-5,Overlapped 100-5,,computer-vision 63c2e631-ddb3-45b4-8ff5-fee100a5679f,text-guided-image-editing,text-guided-image-editing,Editing images using text prompts.,computer-vision edce8046-b226-4d1a-8d0f-ea07edb930b6,car-pose-estimation,Car Pose Estimation,,computer-vision 799d0614-5ffb-4f9a-93d4-8febb9205427,aerial-video-saliency-prediction,Aerial Video Saliency Prediction,,computer-vision 94fb6801-595d-4589-aa0b-dfa63acd7b83,optical-character-recognition,Optical Character Recognition (OCR),"**Optical Character Recognition** or **Optical Character Reader** (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo (for example the text on signs and billboards in a landscape photo, license plates in cars...) 
or from subtitle text superimposed on an image (for example: from a television broadcast)",computer-vision ade08720-9421-41a3-a61b-08c167eb8718,finger-vein-recognition,Finger Vein Recognition,,computer-vision e1405af4-b588-47ef-b210-638a4eeecfb5,cross-domain-activity-recognition,Cross-Domain Activity Recognition,,computer-vision 38a38adc-3757-4804-9a45-87dd838d5c7e,sketch-based-image-retrieval,Sketch-Based Image Retrieval,,computer-vision cd1d3e5b-8245-40da-9597-48918d6630cb,localization-in-video-forgery,Localization In Video Forgery,,computer-vision 3b4b65b4-77bf-49c0-a895-da75e039bb65,text-spotting,Text Spotting,"Text Spotting is the combination of Scene Text Detection and Scene Text Recognition in an end-to-end manner. It is the ability to read natural text in the wild.",computer-vision fa33d39c-ef4d-4664-b99d-d00d50e2544a,temporal-metadata-manipulation-detection,Temporal Metadata Manipulation Detection,Detecting when the timestamp of an outdoor photograph has been manipulated,computer-vision 12db39be-a63b-4116-a7ef-d5a11cd1efef,video-kinematic-segmentation-base-workflow,"Video, Kinematic & Segmentation Base Workflow Recognition",,computer-vision 18140740-8671-4c59-a0f3-0a97ae2d7b88,intelligent-surveillance,Intelligent Surveillance,,computer-vision 72599c6e-cfa5-4377-9908-01a49837b3cc,occlusion-estimation,Occlusion Estimation,,computer-vision 21da2130-a239-4897-8812-b0db2f4dad2a,out-of-distribution-detection,Out-of-Distribution Detection,Detect out-of-distribution or anomalous examples.,computer-vision 7f1e78cb-63b4-4a98-83d8-e9b1eeb4f157,lung-nodule-3d-detection,Lung Nodule 3D Detection,,computer-vision f1805e1a-b2fb-4120-a1a2-c774df0460e2,3d-semantic-instance-segmentation,3D Semantic Instance Segmentation,Image: [3D-SIS](https://github.com/Sekunde/3D-SIS),computer-vision 05fa3d5f-1672-46c9-9e40-df31d6dd8b77,data-ablation,Data Ablation,"Data Ablation is the study of change in data, and its effects in the performance of Neural Networks.",computer-vision 5abfc39b-2b83-46db-9d5c-2eb01777727f,stereo-depth-estimation,Stereo Depth Estimation,,computer-vision 0689431a-6c93-4a2d-b089-9ba047b63a5d,image-comprehension,Image Comprehension,,computer-vision b922190a-9548-427a-ad58-cc159c616762,blood-cell-count,Blood Cell Count,,computer-vision 9dcc8bd7-5de0-4df2-8008-790ca9b1d8be,face-to-face-translation,Face to Face Translation,"Given a video of a person speaking in a source language, generate a video of the same person speaking in a target language.",computer-vision e3041a77-1b9d-4e39-b1df-308b9af05b2e,whole-slide-images,whole slide images,,computer-vision a5a5d818-1ca8-4fbe-b0c9-0d5c17f5697b,depth-image-upsampling,Depth Image Upsampling,,computer-vision 7fb4867c-5654-4b6a-a044-a6e7b12d93d5,video-classification,Video Classification,"**Video Classification** is the task of producing a label that is relevant to the video given its frames. A good video level classifier is one that not only provides accurate frame labels, but also best describes the entire video given the features and the annotations of the various frames in the video. For example, a video might contain a tree in some frame, but the label that is central to the video might be something else (e.g., “hiking”). The granularity of the labels that are needed to describe the frames and the video depends on the task. Typical tasks include assigning one or more global labels to the video, and assigning one or more labels for each frame inside the video. 
Source: [Efficient Large Scale Video Classification ](https://arxiv.org/abs/1505.06250)",computer-vision a3f83dec-1202-483e-86c4-03d64d34e64d,pose-prediction,Pose Prediction,Pose prediction is the task of predicting future poses given a window of previous poses.,computer-vision b3a58287-bffe-42c6-adb9-b2b1ab253f7e,body-detection,Body Detection,Detection of the persons or the characters defined in the dataset.,computer-vision 80cda782-ce65-4995-a695-52e3b141272a,medical-image-retrieval,Medical Image Retrieval,,computer-vision abccca0c-7b88-4453-bb56-5a85c2bfb98e,reconstruction,Reconstruction,,computer-vision 2c99156f-e5d9-49c3-a52f-14b985930d16,soil-moisture-estimation,Soil moisture estimation,,computer-vision 3fa5f31d-23e6-4dac-ba54-6c25b94319b7,calving-front-delineation-in-synthetic,Calving Front Delineation In Synthetic Aperture Radar Imagery,"Delineating the calving front of a marine-terminating glacier in synthetic aperture radar (SAR) imagery. This can, for example, be done through Semantic Segmentation.",computer-vision 9f8957c2-332c-41d6-a11a-fb406b48d016,overlapping-pose-estimation,Overlapping Pose Estimation,Pose estimation with overlapping poses.,computer-vision 23841ae0-647a-40dd-ba67-c0bfe55ed6cd,video-anomaly-detection,Video Anomaly Detection,,computer-vision 51dc6e90-ade2-4a35-aed6-38c8f94ada97,image-quality-estimation,Image Quality Estimation,,computer-vision b7e61bfc-f316-41e8-a655-520995c582fb,hand-gesture-recognition,Hand Gesture Recognition,,computer-vision 8235a86d-e484-4bf6-b245-315191355afd,overlapped-5-3,Overlapped 5-3,,computer-vision 9855a142-dbd6-46e2-b31e-3586efd0a78b,interactive-video-object-segmentation,Interactive Video Object Segmentation,"The interactive scenario assumes the user gives iterative refinement inputs to the algorithm, in our case in the form of a scribble, to segment the objects of interest. Methods have to produce a segmentation mask for that object in all the frames of a video sequence taking into account all the user interactions.",computer-vision 1e18c425-8400-47c8-a0fb-f92a328573a1,space-time-video-super-resolution,Space-time Video Super-resolution,,computer-vision 1d78dd7f-d924-4380-b35b-c8ebd0764c06,image-manipulation-detection,Image Manipulation Detection,"The task of detecting images or image parts that have been tampered with or manipulated (sometimes also referred to as doctored). This typically encompasses image splicing, copy-move, or image inpainting.",computer-vision d593bc67-825b-4e3e-a455-87aa36a7fafa,few-shot-action-recognition,Few Shot Action Recognition,"Few-shot (FS) action recognition is a challenging computer vision problem, where the task is to classify an unlabelled query video into one of the action categories in the support set having limited samples per action class.",computer-vision 66b7d07b-d656-4a37-b68e-86359fb32791,camera-absolute-pose-regression,camera absolute pose regression,,computer-vision 48c623af-7b5c-4cd2-b6b7-aaef41bcda41,procedure-learning,Procedure Learning,"Given a set of videos of the same task, the goal is to identify the key-steps required to perform the task.",computer-vision f4b5487f-955b-48b4-b832-d370826627f8,coos-7-accuracy,COOS-7 Accuracy,"COOS-7 contains 132,209 single-cell images of mouse cells, where the task is to predict protein subcellular localization. Images are spread over 1 training set and 4 testing sets, where each single-cell image contains protein and nucleus fluorescent channels. 
COOS-7 provides a classification setting where four test datasets have increasing degrees of covariate shift: some images are random subsets of the training data, while others are from experiments reproduced months later and imaged by different instruments. While most classifiers perform well on test datasets similar to the training dataset, all classifiers failed to generalize their performance to datasets with greater covariate shifts. Read more at https://www.alexluresearch.com/publication/coos/.",computer-vision cc478b80-09db-429c-939d-5ff3b8caf3e4,geometric-matching,Geometric Matching,,computer-vision a7a2c2ae-1c8f-4519-8416-cb6aae068ae3,incomplete-multi-view-clustering,Incomplete multi-view clustering,,computer-vision c0bbf2cc-fb41-4425-9ca9-1c4ea8051457,semi-supervised-human-pose-estimation,Semi-Supervised Human Pose Estimation,Semi-supervised human pose estimation aims to leverage the unlabelled data along with labeled data to improve the model performance.,computer-vision bf9b27b6-74bb-4bf6-93cd-63213f37ca2c,training-free-3d-point-cloud-classification,Training-free 3D Point Cloud Classification,Evaluation on target datasets for 3D Point Cloud Classification without any training,computer-vision a570d232-1b9a-46c2-994f-8d9d427f11ce,multiple-people-tracking,Multiple People Tracking,,computer-vision 9a20a044-61b5-439b-8698-0ac869dd889d,monocular-3d-object-detection,Monocular 3D Object Detection,Monocular 3D Object Detection is the task to draw 3D bounding box around objects in a single 2D RGB image. It is localization task but without any extra information like depth or other sensors or multiple-images.,computer-vision 2cb07cda-08ba-40d4-80ef-a7622b7043a9,sample-probing,Sample Probing,,computer-vision ba15895c-0e50-4474-bca8-9b0fe814bd12,cloud-removal,Cloud Removal,"The majority of all optical observations collected via spaceborne satellites are affected by haze or clouds. Consequently, persistent cloud coverage affects the remote sensing practitioner's capabilities of a continuous and seamless monitoring of our planet. **Cloud removal** is the task of reconstructing cloud-covered information while preserving originally cloud-free details. Image Source: [URL](https://patrickTUM.github.io/cloud_removal/)",computer-vision e0260aec-3eb0-4f16-8e62-9e2f951beb84,speaker-specific-lip-to-speech-synthesis,Speaker-Specific Lip to Speech Synthesis,"How accurately can we infer an individual’s speech style and content from his/her lip movements? [1] In this task, the model is trained on a specific speaker, or a very limited set of speakers. 
[1] Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis, CVPR 2020.",computer-vision 9d80a0a3-02bf-405b-ae61-fbdfa7685f57,bokeh-effect-rendering,Bokeh Effect Rendering,,computer-vision c342fcfd-d872-4b64-a6ee-b3bf3b41dd3c,video,Video,,computer-vision f992ca31-07ee-4dd3-9941-fb8ef61a6dbe,evolving-domain-generalization,Evolving Domain Generalization,,computer-vision 5bdfbed8-ee91-4893-b1ba-0b8d9c2be409,infrared-image-super-resolution,Infrared image super-resolution,Aims at upsampling the IR image and create the high resolution image with help of a low resolution image.,computer-vision 17856ed4-9415-4ae0-ba36-b9b392b8c7b4,referring-expression-generation,Referring expression generation,Generate referring expressions,computer-vision 2ef55c58-dcdc-4d40-9db9-e5528a8d8827,image-super-resolution,Image Super-Resolution,"**Image Super-Resolution** is a machine learning task where the goal is to increase the resolution of an image, often by a factor of 4x or more, while maintaining its content and details as much as possible. The end result is a high-resolution version of the original image. This task can be used for various applications such as improving image quality, enhancing visual detail, and increasing the accuracy of computer vision algorithms.",computer-vision 222235c9-08e8-407f-8394-a9abd16d98de,video-deinterlacing,Video Deinterlacing,,computer-vision 6b786155-01ba-45d4-96ab-31f77673b222,unsupervised-semantic-segmentation-with,Unsupervised Semantic Segmentation with Language-image Pre-training,A segmentation task which does not utilise any human-level supervision for semantic segmentation except for a backbone which is initialised with features pre-trained with image-level labels.,computer-vision 54532c49-e0de-4067-9914-1f7fb4e0bc82,point-cloud-classification-dataset,Point cloud classification dataset,,computer-vision 6ba03727-0f83-4559-8812-0feefacbac76,head-detection,Head Detection,,computer-vision 24967556-a98f-4bb4-94bb-3887175a3315,classifier-calibration,Classifier calibration,Confidence calibration – the problem of predicting probability estimates representative of the true correctness likelihood – is important for classification models in many applications. The two common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).,computer-vision 60932ef7-e075-471b-aff0-49b5d8d53473,pso-convnets-dynamics-1,PSO-ConvNets Dynamics 1,Incorporating distilled Cucker-Smale elements into PSO algorithm using KNN and intertwine training with SGD,computer-vision 8266034c-658d-4856-8032-03036ff92a1b,trajectory-prediction,Trajectory Prediction,"**Trajectory Prediction** is the problem of predicting the short-term (1-3 seconds) and long-term (3-5 seconds) spatial coordinates of various road-agents such as cars, buses, pedestrians, rickshaws, and animals, etc. These road-agents have different dynamic behaviors that may correspond to aggressive or conservative driving styles. 
Source: [Forecasting Trajectory and Behavior of Road-Agents Using Spectral Clustering in Graph-LSTMs ](https://arxiv.org/abs/1912.01118)",computer-vision 703ffa05-82d0-41ec-917b-aa206eac02a8,scene-labeling,Scene Labeling,,computer-vision 1f047631-e1ce-487d-914b-83daaf78811d,transparent-objects,Transparent objects,,computer-vision 9e4441b4-89de-4375-a58d-0fedfd312e8f,continuous-object-recognition,Continuous Object Recognition,"Continuous object recognition is the task of performing object recognition on a data stream and learning continuously, trying to mitigate issues such as catastrophic forgetting. ( Image credit: [CORe50 dataset](https://vlomonaco.github.io/core50/) )",computer-vision 7da35209-aa26-4a5e-8110-ad6438905257,3d-inpainting,3D Inpainting,"**3D Inpainting** is the removal of unwanted objects from a 3D scene, such that the replaced region is visually plausible and consistent with its context.",computer-vision 15e74df3-be84-4fee-a091-7334b22f0786,image-to-gps-verification,Image-To-Gps Verification,"The image-to-GPS verification task asks whether a given image is taken at a claimed GPS location. ( Image credit: [Image-to-GPS Verification Through A Bottom-Up Pattern Matching Network](https://arxiv.org/pdf/1811.07288v1.pdf) )",computer-vision cd34d626-1b22-4929-ba43-b76815733792,hand-detection,Hand Detection,"As an important subject in the field of computer vision, hand detection plays an important role in many tasks such as human-computer interaction, automatic driving, virtual reality and so on.",computer-vision b15957f8-1c8c-42ce-a65a-2450d28fa8ca,vision-language-navigation,Vision-Language Navigation,"Vision-language navigation (VLN) is the task of navigating an embodied agent to carry out natural language instructions inside real 3D environments. ( Image credit: [Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout](https://arxiv.org/pdf/1904.04195v1.pdf) )",computer-vision 44150d39-d078-4511-b94d-49bc08ffd190,self-supervised-image-classification,Self-Supervised Image Classification,"This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. One example of a loss function is an autoencoder based loss where the goal is reconstruction of an image pixel-by-pixel. A more popular recent example is a contrastive loss, which measure the similarity of sample pairs in a representation space, and where there can be a varying target instead of a fixed target to reconstruct (as in the case of autoencoders). A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and finetunes on a % of the labels. The leaderboards for the finetuning protocol can be accessed [here](https://paperswithcode.com/task/semi-supervised-image-classification). 
You may want to read some blog posts before reading the papers and checking the leaderboards: - [Contrastive Self-Supervised Learning](https://ankeshanand.com/blog/2020/01/26/contrative-self-supervised-learning.html) - Ankesh Anand - [The Illustrated Self-Supervised Learning](https://amitness.com/2020/02/illustrated-self-supervised-learning/) - Amit Chaudhary - [Self-supervised learning and computer vision](https://www.fast.ai/2020/01/13/self_supervised/) - Jeremy Howard - [Self-Supervised Representation Learning](https://lilianweng.github.io/lil-log/2019/11/10/self-supervised-learning.html) - Lilian Weng There is also Yann LeCun's talk at AAAI-20 which you can watch [here](https://vimeo.com/390347111) (35:00+). ( Image credit: [A Simple Framework for Contrastive Learning of Visual Representations](https://arxiv.org/pdf/2002.05709v1.pdf) )",computer-vision f6b579a6-1084-45f3-89e0-77703c7e83b5,webpage-object-detection,Webpage Object Detection,Detect Web Element for various classes from candidate web elements obtained from DOM tree (No need for Bounding Box Regression),computer-vision 8ac147eb-cfb9-44a5-8451-0ddc57c30616,contrastive-learning,Contrastive Learning,"**Contrastive Learning** is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart. It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering. (Image credit: [Schroff et al. 2015](https://arxiv.org/abs/1503.03832))",computer-vision 7a2c0795-026d-4cbd-8022-56a30e2921b8,vehicle-key-point-and-orientation-estimation,Vehicle Key-Point and Orientation Estimation,,computer-vision 5f7b96ee-e452-4602-8a59-3fc6ac13c9e8,action-triplet-detection,Action Triplet Detection,"Detecting and localizing bounding boxes of tools and anatomies. Then predicting their relationship as an action triplet.",computer-vision e7a93535-cfde-47e7-8a9c-769071d6715e,self-driving-cars,Self-Driving Cars,"Self-driving cars: the task of making a car that can drive itself without human guidance. ( Image credit: [Learning a Driving Simulator](https://github.com/commaai/research) )",computer-vision eeaad7a2-7616-4ac0-b7fd-c9be0574516f,gesture-to-gesture-translation,Gesture-to-Gesture Translation,,computer-vision 9f03f48f-c645-4423-a800-4c96d28ea951,3d-hand-pose-estimation,3D Hand Pose Estimation,Image: [Zimmerman et al](https://arxiv.org/pdf/1705.01389v3.pdf),computer-vision 09dd3238-9b34-41b0-ad46-cce3b4805874,texture-synthesis,Texture Synthesis,"The fundamental goal of example-based **Texture Synthesis** is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it, nor exhibits obvious unnatural looking artifacts. 
Source: [Non-Stationary Texture Synthesis by Adversarial Expansion ](https://arxiv.org/abs/1805.04487)",computer-vision 1c158e2e-bdd7-4420-9d63-82a81ff050ec,dehazing,Dehazing,,computer-vision e3392a2c-667f-4150-a2cb-445eac963ed5,3d-point-cloud-reinforcement-learning,3D Point Cloud Reinforcement Learning,Reinforcement learning / robot learning from 3D point clouds,computer-vision 499b9dac-abf4-45df-9b46-7749e7be4cc5,action-spotting,Action Spotting,,computer-vision 44dce5d7-5783-4ec7-8724-1a62ffeb58c0,image-editing,Image Editing,,computer-vision 6046d910-07a6-419c-a4f6-ee34b635cb25,thoracic-disease-classification,Thoracic Disease Classification,,computer-vision 960b7cfb-0f71-4b49-982e-77aca1b228f0,point-cloud-completion,Point Cloud Completion,,computer-vision 8641b703-0514-42d1-bce6-8f75de09c0b9,depiction-invariant-object-recognition,Depiction Invariant Object Recognition,"Depiction invariant object recognition is the task of recognising objects irrespective of how they are visually depicted (line drawing, realistic shaded drawing, photograph etc.). ( Image credit: [SwiDeN](https://arxiv.org/pdf/1607.08764v1.pdf) )",computer-vision 97e34d22-ed15-44eb-ad66-7212eaa29674,iris-recognition,Iris Recognition,,computer-vision 03055b5d-357f-4a10-973b-f2bbe5b37314,pedestrian-detection,Pedestrian Detection,"Pedestrian detection is the task of detecting pedestrians from a camera. Further state-of-the-art results (e.g. on the KITTI dataset) can be found at [3D Object Detection](https://paperswithcode.com/task/object-detection). ( Image credit: [High-level Semantic Feature Detection: A New Perspective for Pedestrian Detection](https://github.com/liuwei16/CSP) )",computer-vision c96f28e3-0c7d-44ba-9998-b99317274c22,co-saliency-detection,Co-Salient Object Detection,"**Co-Salient Object Detection** is a computational problem that aims at highlighting the common and salient foreground regions (or objects) in an image group. Please also refer to the online benchmark: http://dpfan.net/cosod3k/ ( Image credit: [Taking a Deeper Look at Co-Salient Object Detection, CVPR2020](https://openaccess.thecvf.com/content_CVPR_2020/papers/Fan_Taking_a_Deeper_Look_at_Co-Salient_Object_Detection_CVPR_2020_paper.pdf) )",computer-vision f6d7576e-7e15-4064-9ed0-3a6a0097dd61,deep-attention,Deep Attention,,computer-vision 73c6438e-a779-4eeb-b17e-2dcb8e8d9941,steganalysis,Steganalysis,Detect the usage of Steganography,computer-vision 2508f20c-b3e9-4680-a263-1fdd2e8d6b53,motion-detection-in-non-stationary-scenes,Motion Detection In Non-Stationary Scenes,,computer-vision dd1b1b27-7257-4eb3-988c-896413c053d1,document-image-classification,Document Image Classification,"Document image classification is the task of classifying documents based on images of their contents. ( Image credit: [Real-Time Document Image Classification using Deep CNN and Extreme Learning Machines](https://arxiv.org/pdf/1711.05862v1.pdf) )",computer-vision 3fa98e1f-f9f6-45fd-a754-0844f08bffe8,image-imputation,Image Imputation,"Image imputation is the task of creating plausible images from low-resolution images or images with missing data. 
( Image credit: [NASA](https://www.jpl.nasa.gov/edu/news/2019/4/19/how-scientists-captured-the-first-image-of-a-black-hole/) )",computer-vision 558be24c-25af-4630-86a1-8caa57fdbe19,safety-perception-recognition,Safety Perception Recognition,City safety perception recognition,computer-vision e7864627-809c-41f7-9551-63dba7b6a8ca,3d-object-classification,3D Object Classification,"3D Object Classification is the task of predicting the class of a 3D object point cloud. It is a voxel level prediction where each voxel is classified into a category. The popular benchmark for this task is the ModelNet dataset. The models for this task are usually evaluated with the Classification Accuracy metric. Image: [Sedaghat et al](https://arxiv.org/pdf/1604.03351v2.pdf)",computer-vision b56206a6-8243-4c90-8408-7149ccaef62e,interspecies-facial-keypoint-transfer,Interspecies Facial Keypoint Transfer,Find cross-domain semantic correspondence between faces of different species,computer-vision 7652506d-6ca9-4769-adb5-475675bc00da,classify-3d-point-clouds,Classify 3D Point Clouds,,computer-vision 39ca221a-f35f-4312-9045-1f6f39ca1816,image-fusion,Image Fusion,,computer-vision 3f3d93e2-e9a7-4751-87fd-b7c93345d151,junction-detection,Junction Detection,,computer-vision 40283550-eba0-4132-9396-573f4cd8c637,aerial-video-semantic-segmentation,Aerial Video Semantic Segmentation,,computer-vision e5add8c5-8a34-4d68-baf0-a18ad241732f,panoptic-segmentation,Panoptic Segmentation,"**Panoptic Segmentation** is a computer vision task that combines semantic segmentation and instance segmentation to provide a comprehensive understanding of the scene. The goal of panoptic segmentation is to segment the image into semantically meaningful parts or regions, while also detecting and distinguishing individual instances of objects within those regions. ( Image credit: [Detectron2](https://github.com/facebookresearch/detectron2) )",computer-vision ceb4fce9-e64f-43b3-8c32-dc32d6b4d591,seeing-beyond-the-visible,Seeing Beyond the Visible,"The objective of this challenge is to automate the process of estimating the soil parameters, specifically, potassium (K), phosphorus pentoxide (P2O5), magnesium (Mg) and pH, through extracting them from the airborne hyperspectral images captured over agricultural areas in Poland (the exact locations are not revealed). To make the solution applicable in real-life use cases, all the parameters should be estimated as precisely as possible.",computer-vision 6966e2f2-af07-47e3-a44b-b12011c93367,defocus-blur-detection,Defocus Blur Detection,,computer-vision c786c34b-ec6b-4989-9cf5-86105753a3c0,3d-surface-generation,3D Surface Generation,Image: [AtlasNet](https://arxiv.org/pdf/1802.05384v3.pdf),computer-vision b5470826-fe4d-43f9-9f64-265ae7eef21a,3d-room-layouts-from-a-single-rgb-panorama,3D Room Layouts From A Single RGB Panorama,Image: [Zou et al](https://arxiv.org/pdf/1803.08999v1.pdf),computer-vision b1dc544c-0786-409a-8cfc-77c5548f0c5c,pulmonary-arteryvein-classification,Pulmonary Artery–Vein Classification,,computer-vision 6b5cb2a0-edfc-44eb-a4bd-d0add6980790,heterogeneous-face-recognition,Heterogeneous Face Recognition,"Heterogeneous face recognition is the task of matching face images acquired from different sources (i.e., different sensors or different wavelengths) for identification or verification. 
( Image credit: [Pose Agnostic Cross-spectral Hallucination via Disentangling Independent Factors](https://arxiv.org/pdf/1909.04365v1.pdf) )",computer-vision 12cf4514-7616-43f1-95f3-cc3bfa127a24,mobile-periocular-recognition,Mobile Periocular Recognition,"Periocular recognition is the task of recognising a person based on their eyes (periocular). ( Image credit: [Heterogeneity Aware Deep Embedding for Mobile Periocular Recognition](https://arxiv.org/pdf/1811.00846v1.pdf) )",computer-vision 5c59921c-b7ae-4d73-ba3e-58a9c7fbf7a7,class-incremental-learning,Class Incremental Learning,Incremental learning of a sequence of tasks when the task-ID is not available at test time.,computer-vision d0750c62-498b-4beb-9257-5e0d40df9209,event-data-classification,Event data classification,,computer-vision c0689bfe-5ffc-4837-9828-5e3706e1b9b5,zero-shot-segmentation,Zero Shot Segmentation,,computer-vision 5229ed4d-6203-4266-aca8-6df0e8c59ed8,animated-gif-generation,Animated GIF Generation,,computer-vision 545dda68-45d0-43e6-b31f-f0e97dc045b5,single-shot-hdr-reconstruction,Single-shot HDR Reconstruction,"SVE-based HDR imaging algorithms, also known as single-shot HDR imaging, capture a scene with pixel-wise varying exposures in a single image and then computationally synthesize an HDR image, which benefits from the multiple exposures of the single image.",computer-vision ffbaaf3c-3ae6-486e-b333-9d755763f531,zero-shot-transfer-image-classification,Zero-Shot Transfer Image Classification,,computer-vision f0ec4f70-9844-469d-8fe8-294584090c15,self-knowledge-distillation,Self-Knowledge Distillation,,computer-vision 7cb187be-8404-407d-acd9-4e80337a9bd1,colorization,Colorization,"**Colorization** is the process of adding plausible color information to monochrome photographs or videos. Colorization is a highly undetermined problem, requiring the mapping of a real-valued luminance image to a three-dimensional color-valued one, which does not have a unique solution. 
Source: [ChromaGAN: An Adversarial Approach for Picture Colorization ](https://arxiv.org/abs/1907.09837)",computer-vision c39b7095-04ba-4414-a1c1-4445a437aa5d,stereo-matching,Stereo Matching Hand,,computer-vision a3165a70-eeb5-4932-b0d8-38a28d48d0c8,sketch,Sketch,,computer-vision 4d7f83ea-88ad-4470-82c3-b42f9923b8c6,point-cloud-reconstruction,Point cloud reconstruction,"This task aims to solve inherent problems in raw point clouds: sparsity, noise, and irregularity.",computer-vision 425adff8-97cf-4db6-8a8b-fa5d419a2bd4,sports-analytics,Sports Analytics,,computer-vision 161ec6cc-0f5d-4ea2-8916-cf7e625e95cc,lung-nodule-3d-classification,Lung Nodule 3D Classification,,computer-vision b47da698-6ddc-409c-81e5-95c17755c7d6,underwater-image-restoration,Underwater Image Restoration,Underwater image restoration aims to rectify the distorted colors and present the true colors of the underwater scene.,computer-vision 9842db39-3e69-4dab-8351-d15cbabfc1c5,partially-view-aligned-multi-view-learning,Partially View-aligned Multi-view Learning,"In multi-view learning, Partially View-aligned Problem (PVP) refers to the case when only a portion of data is aligned, thus leading to data inconsistency.",computer-vision 37adb025-133f-4dc3-b783-5c300fc918b6,scene-text-recognition,Scene Text Recognition,See [Scene Text Detection](https://paperswithcode.com/task/scene-text-detection) for leaderboards in this task.,computer-vision e774a091-7dec-4609-a405-75993420ec3b,face-image-quality,Face Image Quality,,computer-vision aa7a113b-1d5b-43f5-bfd2-24439adcb386,demosaicking,Demosaicking,"Most modern digital cameras acquire color images by measuring only one color channel per pixel, red, green, or blue, according to a specific pattern called the Bayer pattern. **Demosaicking** is the processing step that reconstructs a full color image given these incomplete measurements. Source: [Revisiting Non Local Sparse Models for Image Restoration ](https://arxiv.org/abs/1912.02456)",computer-vision 3a645d1b-7a5e-4ac1-aca2-97cf2f671728,handwritten-word-generation,Handwritten Word Generation,,computer-vision a772458d-b740-43fd-98c6-fdf8239a9ff1,physical-attribute-prediction,Physical Attribute Prediction,,computer-vision e4199210-f7a5-418e-a7c9-44eef6787ef9,face-recognition,Face Recognition,"**Facial Recognition** is the task of making a positive identification of a face in a photo or video image against a pre-existing database of faces. It begins with detection - distinguishing human faces from other objects in the image - and then works on identification of those detected faces. The state of the art tables for this task are contained mainly in the consistent parts of the task: the face verification and face identification tasks. 
( Image credit: [Face Verification](https://shuftipro.com/face-verification) )",computer-vision 73076f7f-3696-4c1b-b6de-e052a6122dcd,2d-object-detection,2D Object Detection,,computer-vision ffa05148-d8d9-412a-8dc0-6d9ce0dae722,hand-gesture-recognition-1,Hand-Gesture Recognition,,computer-vision 62519edd-dd37-4fa4-9bbd-c77df86b7cef,saliency-prediction,Saliency Prediction,A saliency map is a model that predicts eye fixations on a visual scene.,computer-vision 598266d6-9ad8-4c1b-80d0-889d181543d8,video-individual-counting,Video Individual Counting,,computer-vision 2c84af61-c3ed-465d-a56b-5a687b1865be,road-scene-understanding,road scene understanding,,computer-vision decdb3d9-e999-4a4b-aa9f-5e8879d978ab,pose-tracking,Pose Tracking,"**Pose Tracking** is the task of estimating multi-person human poses in videos and assigning unique instance IDs for each keypoint across frames. Accurate estimation of human keypoint-trajectories is useful for human action recognition, human interaction understanding, motion capture and animation. Source: [LightTrack: A Generic Framework for Online Top-Down Human Pose Tracking ](https://arxiv.org/abs/1905.02822)",computer-vision efdfa4d9-2923-46e0-8601-d15811fb1c93,physical-video-anomaly-detection,Physical Video Anomaly Detection,Detecting if an entire short clip of a physical or mechanical process features an anomalous motion,computer-vision 0ff35f8d-1c2a-44ea-8f5f-bfff6349fa9c,short-term-object-interaction-anticipation,Short-term Object Interaction Anticipation,,computer-vision 4f84d5f5-df2b-4cf7-90d6-2a872ca9f035,human-object-interaction-motion-tracking,Human-Object-interaction motion tracking,,computer-vision e95b2a2d-60f2-4dd7-9312-de3c1d9b3d4d,sketch-recognition,Sketch Recognition,,computer-vision 5f514594-7e9d-4485-831f-5c6ba687cc2c,image-smoothing,image smoothing,,computer-vision 9a0551f8-2f53-4f80-bcc0-270f7f2ff2e2,image-dehazing,Image Dehazing,"( Image credit: [Densely Connected Pyramid Dehazing Network](https://github.com/hezhangsprinter/DCPDN) )",computer-vision ac57b90e-a6a7-435f-b9b7-cdb5b0109620,action-quality-assessment,Action Quality Assessment,Assessing/analyzing/quantifying how well an action was performed.,computer-vision 5da5be03-bbff-41cf-ad3c-8de291b38bb6,multi-oriented-scene-text-detection,Multi-Oriented Scene Text Detection,,computer-vision e0df1e19-6873-4df5-a154-94791463fe90,hand,Hand,,computer-vision 19b3ae32-7764-42b8-8beb-69da638c7bf2,referring-image-matting-keyword-based,Referring Image Matting (Keyword-based),"Keyword-based referring image matting, taking an image and a keyword as the input.",computer-vision 0121e17f-8041-4c49-bb4a-a8e453694d99,single-object-discovery,Single-object discovery,,computer-vision 92e29716-2c1e-46a8-874a-28385251812f,deblurring,Deblurring,"**Deblurring** is a computer vision task that involves removing the blurring artifacts from images or videos to restore the original, sharp content. Blurring can be caused by various factors such as camera shake, fast motion, and out-of-focus objects, and can result in a loss of detail and quality in the captured images. The goal of deblurring is to produce a clear, high-quality image that accurately represents the original scene. 
( Image credit: [Deblurring Face Images using Uncertainty Guided Multi-Stream Semantic Networks](https://arxiv.org/pdf/1907.13106v1.pdf) )",computer-vision 6dfbd461-c177-4d48-bbe5-0faa7ea2ba87,unsupervised-long-term-person-re,Unsupervised Long Term Person Re-Identification,"Long-term Person Re-Identification(Clothes-Changing Person Re-ID) is a computer vision task in which the goal is to match a person's identity across different cameras, clothes, and locations in a video or image sequence. It involves detecting and tracking a person and then using features such as appearance, and body shape to match their identity in different frames. The goal is to associate the same person across multiple non-overlapping camera views in a robust and efficient manner.",computer-vision d3ccf96f-264a-415d-b2d0-b3508eac2f7c,language-based-temporal-localization,Language-Based Temporal Localization,,computer-vision 07477428-009f-4e13-a02d-b6e054b8a053,skeleton-based-action-recognition,Skeleton Based Action Recognition,"**Skeleton-based Action Recognition** is a computer vision task that involves recognizing human actions from a sequence of 3D skeletal joint data captured from sensors such as Microsoft Kinect, Intel RealSense, and wearable devices. The goal of skeleton-based action recognition is to develop algorithms that can understand and classify human actions from skeleton data, which can be used in various applications such as human-computer interaction, sports analysis, and surveillance. ( Image credit: [View Adaptive Neural Networks for High Performance Skeleton-based Human Action Recognition](https://arxiv.org/pdf/1804.07453v3.pdf) )",computer-vision 1cd36ca0-926c-4c58-9908-ba28af6466a5,pose-estimation,Pose Estimation,"**Pose Estimation** is a computer vision task where the goal is to detect the position and orientation of a person or an object. Usually, this is done by predicting the location of specific keypoints like hands, head, elbows, etc. in case of Human Pose Estimation. A common benchmark for this task is [MPII Human Pose](https://paperswithcode.com/sota/pose-estimation-on-mpii-human-pose) ( Image credit: [Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose](https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch) )",computer-vision 7d084b81-6bdb-481f-84b6-40c26318b2b6,overlapped-10-1,Overlapped 10-1,,computer-vision 2cbd4db6-d55e-4468-94b1-8faaf16817f4,facial-attribute-classification,Facial Attribute Classification,"Facial attribute classification is the task of classifying various attributes of a facial image - e.g. whether someone has a beard, is wearing a hat, and so on. ( Image credit: [Multi-task Learning of Cascaded CNN for Facial Attribute Classification ](https://arxiv.org/pdf/1805.01290v1.pdf) )",computer-vision f5ddeb93-3361-4cf9-b635-02d0663bfd24,material-classification,Material Classification,,computer-vision 41397ff1-b9a9-4ff3-bee8-b1fe4bffbdab,natural-image-orientation-angle-detection,Natural Image Orientation Angle Detection,"Image orientation angle detection is a pretty challenging task for a machine because the machine has to learn the features of an image in such a way so that it can detect the arbitrary angle by which the image is rotated. Though there are some modern cameras with features involving inertial sensors that can correct image orientation in steps of 90 degrees, those features are seldom used. 
In this paper, we propose a method to detect the orientation angle of a digitally captured image where the image may have been captured by a camera at a tilted angle (between 0° and 359°).",computer-vision f9e9de16-f37e-4ad9-8a20-594a41a3d417,motion-detection,Motion Detection,"**Motion Detection** is a process to detect the presence of any moving entity in an area of interest. Motion Detection is of great importance due to its application in various areas such as surveillance and security, smart homes, and health monitoring. Source: [Different Approaches for Human Activity Recognition– A Survey ](https://arxiv.org/abs/1906.05074)",computer-vision 16b9901d-5e3a-42ae-8f8b-8e401955b48c,image-matting,Image Matting,"**Image Matting** is the process of accurately estimating the foreground object in images and videos. It is a very important technique in image and video editing applications, particularly in film production for creating visual effects. In case of image segmentation, we segment the image into foreground and background by labeling the pixels. Image segmentation generates a binary image, in which a pixel either belongs to foreground or background. However, Image Matting is different from the image segmentation, wherein some pixels may belong to foreground as well as background, such pixels are called partial or mixed pixels. In order to fully separate the foreground from the background in an image, accurate estimation of the alpha values for partial or mixed pixels is necessary. Source: [Automatic Trimap Generation for Image Matting ](https://arxiv.org/abs/1707.00333) Image Source: [Real-Time High-Resolution Background Matting](https://arxiv.org/pdf/2012.07810v1.pdf)",computer-vision b3101a43-dc62-489b-bcf7-082e13dcc193,semi-supervised-video-object-segmentation,Semi-Supervised Video Object Segmentation,The semi-supervised scenario assumes the user inputs a full mask of the object(s) of interest in the first frame of a video sequence. Methods have to produce the segmentation mask for that object(s) in the subsequent frames.,computer-vision b6d2e61d-7800-4940-91be-1e233bc53e99,scene-segmentation,Scene Segmentation,"Scene segmentation is the task of splitting a scene into its various object components. Image adapted from [Temporally coherent 4D reconstruction of complex dynamic scenes](https://paperswithcode.com/paper/temporally-coherent-4d-reconstruction-of2).",computer-vision cb72fd30-befb-4c26-aead-354ad7da525f,fine-grained-visual-recognition,Fine-Grained Visual Recognition,,computer-vision 5aea1b66-b51a-4129-b773-7549932a1d8e,object-discovery,Object Discovery,"**Object Discovery** is the task of identifying previously unseen objects. 
Source: [Unsupervised Object Discovery and Segmentation of RGBD-images ](https://arxiv.org/abs/1710.06929)",computer-vision acfd1896-5bc7-4c35-8d28-b5bed2b2a8b1,multimodal-forgery-detection,Multimodal Forgery Detection,**Multimodal Forgery Detection** is a deep forgery detection task which uses both video and audio.,computer-vision 6eaf348f-170f-4fa0-8dd1-cd683a8f91a4,blind-image-quality-assessment,Blind Image Quality Assessment,,computer-vision b6f95aeb-9089-46fd-9e6c-66d73684daa0,frame-duplication-detection,Frame Duplication Detection,,computer-vision d20cf4e1-9548-4ab9-b590-fb12ba937fc9,simultaneous-localization-and-mapping,Simultaneous Localization and Mapping,"Simultaneous localization and mapping (SLAM) is the task of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. ( Image credit: [ORB-SLAM2](https://arxiv.org/pdf/1610.06475v2.pdf) )",computer-vision 33f37971-c887-49e7-b5ae-1314d3a3de01,video-grounding,Video Grounding,"**Video grounding** is the task of linking spoken language descriptions to specific video segments. In video grounding, the model is given a video and a natural language description, such as a sentence or a caption, and its goal is to identify the specific segment of the video that corresponds to the description. This can involve tasks such as localizing the objects or actions mentioned in the description within the video, or associating a specific time interval with the description.",computer-vision a99223d5-5ada-477d-b61f-0a86db4dfb23,point-cloud-registration,Point Cloud Registration,"**Point Cloud Registration** is a fundamental problem in 3D computer vision and photogrammetry. Given several sets of points in different coordinate systems, the aim of registration is to find the transformation that best aligns all of them into a common coordinate system. Point Cloud Registration plays a significant role in many vision applications such as 3D model reconstruction, cultural heritage management, landslide monitoring and solar energy analysis. Source: [Iterative Global Similarity Points : A robust coarse-to-fine integration solution for pairwise 3D point cloud registration ](https://arxiv.org/abs/1808.03899)",computer-vision e03d10a2-8421-4c63-9198-46ba4f245d0b,3d-face-reconstruction,3D Face Reconstruction,"**3D Face Reconstruction** is a computer vision task that involves creating a 3D model of a human face from a 2D image or a set of images. The goal of 3D face reconstruction is to reconstruct a digital 3D representation of a person's face, which can be used for various applications such as animation, virtual reality, and biometric identification. ( Image credit: [3DDFA_V2](https://github.com/cleardusk/3DDFA_V2) )",computer-vision 278f9c04-eca5-4374-9804-57a1381cefa3,3d-car-instance-understanding,3D Car Instance Understanding,"3D Car Instance Understanding is the task of estimating properties (e.g. translation, rotation and shape) of a moving or parked vehicle on the road. ( Image credit: [Occlusion-Net](http://openaccess.thecvf.com/content_CVPR_2019/papers/Reddy_Occlusion-Net_2D3D_Occluded_Keypoint_Localization_Using_Graph_Networks_CVPR_2019_paper.pdf) )",computer-vision 08d7f4fa-d560-4cf4-8e51-9c7790f4c989,compositional-zero-shot-learning,Compositional Zero-Shot Learning,"**Compositional Zero-Shot Learning (CZSL)** is a computer vision task in which the goal is to recognize unseen compositions formed from seen states and objects during training. 
The key challenge in CZSL is the inherent entanglement between the state and object within the context of an image. Some example benchmarks for this task are MIT-states, UT-Zappos, and C-GQA. Models are usually evaluated with the Accuracy for both seen and unseen compositions, as well as their Harmonic Mean (HM). ( Image credit: [Heosuab](https://hellopotatoworld.tistory.com/24) )",computer-vision 9da8b6ea-ffb9-46be-a5c6-4f0506e2ff86,video-to-shop,Video-to-Shop,,computer-vision d0c51986-42fa-472e-b5c1-0875b18f76a5,3d-semantic-scene-completion,3D Semantic Scene Completion,"This task was introduced in ""Semantic Scene Completion from a Single Depth Image"" (https://arxiv.org/abs/1611.08974) at CVPR 2017. The target is to infer the dense 3D voxelized semantic scene from an incomplete 3D input (e.g. point cloud, depth map) and an optional RGB image. A recent summary can be found in the paper ""3D Semantic Scene Completion: a Survey"" (https://arxiv.org/abs/2103.07466), published at IJCV 2021.",computer-vision 27334d01-0264-46ee-96d2-5acb384e72e7,multispectral-object-detection,Multispectral Object Detection,,computer-vision ab0c58a8-c6c4-456b-8cd9-c439aa964d64,video-based-workflow-recognition,Video Based Workflow Recognition,,computer-vision 24bbae38-a471-4616-8978-e55514f77962,deception-detection-in-videos,Deception Detection In Videos,,computer-vision 9d52b4e7-f6a6-473f-8def-f40481eca435,dense-captioning,Dense Captioning,,computer-vision 84418cfa-fd0d-48eb-8ca1-6432df553744,amodal-panoptic-segmentation,Amodal Panoptic Segmentation,The goal of this task is to simultaneously predict the pixel-wise semantic segmentation labels of the visible regions of stuff classes and the instance segmentation labels of both the visible and occluded regions of thing classes.,computer-vision d7510111-0f3e-42df-a405-cb15bd12395e,3d-shape-modeling,3D Shape Modeling,Image: [Gkioxari et al](https://arxiv.org/pdf/1906.02739v2.pdf),computer-vision 3e1fcbf6-ba45-44e4-bc5c-fa884f4eb2f9,visual-grounding,Visual Grounding,"Visual Grounding (VG) aims to locate the most relevant object or region in an image, based on a natural language query. The query can be a phrase, a sentence, or even a multi-round dialogue. There are three main challenges in VG: * What is the main focus in a query? * How to understand an image? * How to locate an object?",computer-vision 41cd7aae-743f-4d7b-a350-9bf4f95ee71a,referring-expression,Referring Expression,"Referring expressions place a bounding box around the instance corresponding to the provided description and image.",computer-vision 3520dd70-da4d-4f03-9c50-f28944eae6e4,document-layout-analysis,Document Layout Analysis,"""**Document Layout Analysis** is performed to determine physical structure of a document, that is, to determine document components. These document components can consist of single connected components-regions [...] of pixels that are adjacent to form single regions [...] , or group of text lines. A text line is a group of characters, symbols, and words that are adjacent, “relatively close” to each other and through which a straight line can be drawn (usually with horizontal or vertical orientation)."" L. O'Gorman, ""The document spectrum for page layout analysis,"" in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1162-1173, Nov. 1993. 
Image credit: [PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/pdf/1908.07836v1.pdf)",computer-vision c4464b8c-d9dc-4cb2-894b-33ce25325286,multi-object-colocalization,Multi-object colocalization,,computer-vision a8001a41-cc88-4992-8227-ee6630670bde,feature-compression,Feature Compression,"Compress data for machine interpretability to perform downstream tasks, rather than for human perception.",computer-vision 10f5532f-56a5-4f56-9481-f3683ff74dfa,visual-prompting,Visual Prompting,"Visual Prompting is the task of streamlining computer vision processes by harnessing the power of prompts, inspired by the breakthroughs of text prompting in NLP. This innovative approach involves using a few visual prompts to swiftly convert an unlabeled dataset into a deployed model, significantly reducing development time for both individual projects and enterprise solutions.",computer-vision eb786d0f-96f2-4d13-bd26-3ec85a2db925,unsupervised-text-recognition,Unsupervised Text Recognition,Decompose a text into the letters / tokens that are used to write it.,computer-vision 637b089c-17a2-4ada-8768-084549df3bb4,3d-pose-estimation,3D Pose Estimation,"Image credit: [GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision , ECCV'20](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123600511.pdf)",computer-vision 98fe3ade-d8e0-4fb9-bf51-156a22279e33,video-super-resolution,Video Super-Resolution,"**Video Super-Resolution** is a computer vision task that aims to increase the resolution of a video sequence, typically from lower to higher resolutions. The goal is to generate high-resolution video frames from low-resolution input, improving the overall quality of the video. ( Image credit: [Detail-revealing Deep Video Super-Resolution](https://github.com/jiangsutx/SPMC_VideoSR) )",computer-vision b2df5834-32c9-4b52-9fa1-c76218a3b6bb,point-cloud-super-resolution,Point Cloud Super Resolution,"Point cloud super-resolution is a fundamental problem for 3D reconstruction and 3D data understanding. It takes a low-resolution (LR) point cloud as input and generates a high-resolution (HR) point cloud with rich details",computer-vision 5af47daa-447a-4599-b679-2dfee0ce9de5,image-variation,Image-Variation,"Given an image, generate variations of the image",computer-vision 79fbbe33-564a-47de-8f68-eda7f9762683,gaze-target-estimation,Gaze Target Estimation,Gaze Target Estimation refers to predicting the image 2D gaze location of a person in the image.,computer-vision e5c5ee2c-f094-43d6-bb74-c5f20c2df8b4,robust-bev-detection,Robust BEV Detection,,computer-vision 1e2b958e-105f-4148-af00-7d902f1a2b9e,aesthetic-image-captioning,Aesthetic Image Captioning,,computer-vision cea8ced2-eeb2-4104-9a31-7d7822826a5c,video-denoising,Video Denoising,,computer-vision 7756bf4d-a336-4f16-b54f-440210a9f86b,online-surgical-phase-recognition,Online surgical phase recognition,"Online surgical phase recognition: the first 40 videos to train, the last 40 videos to test.",computer-vision c6e9c36e-2650-4e43-9107-1f1d0321c0ba,rain-removal,Rain Removal,,computer-vision 09ba3e51-f38b-4fdb-bde7-29dfc1dce88e,image-outpainting,Image Outpainting,"Predicting the visual context of an image beyond its boundary. 
Image credit: [NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis](https://paperswithcode.com/paper/nuwa-infinity-autoregressive-over?from=n35)",computer-vision ca422a33-20fe-44db-8ac0-4d0ba224e23b,object-tracking,Object Tracking,"**Object tracking** is the task of taking an initial set of object detections, creating a unique ID for each of the initial detections, and then tracking each of the objects as they move around frames in a video, maintaining the ID assignment. State-of-the-art methods involve fusing data from RGB and event-based cameras to produce more reliable object tracking. CNN-based models using only RGB images as input are also effective. The most popular benchmark is OTB. There are several evaluation metrics specific to object tracking, including HOTA, MOTA, IDF1, and Track-mAP. ( Image credit: [Towards-Realtime-MOT ](https://github.com/Zhongdao/Towards-Realtime-MOT) )",computer-vision 3623574e-0f5f-465d-a740-581031eff08e,curved-text-detection,Curved Text Detection,,computer-vision ade41a3e-476f-41b3-a0e2-d473c183d05e,action-understanding,Action Understanding,,computer-vision 943f4181-fc0d-4625-9476-c05cb976c266,multi-label-zero-shot-learning,Multi-label zero-shot learning,,computer-vision da01d7f8-ed03-4b12-9d6f-46be32f773f3,text-to-video-editing,Text-to-Video Editing,,computer-vision 125ef8c4-66ab-41f2-adf7-097842dff356,generalized-few-shot-classification,Generalized Few-Shot Classification,,computer-vision 40653d8b-3420-4f63-ac76-f078845bf002,3d-point-cloud-classification,3D Point Cloud Classification,Image: [Qi et al](https://arxiv.org/pdf/1612.00593v2.pdf),computer-vision 5c588a74-c816-4110-8cbc-d9b092d5c853,photo-to-caricature-translation,Photo-To-Caricature Translation,"Photo-to-caricature translation is the task of adapting a photo to a cartoon or sketch. ( Image credit: [WarpGAN](https://arxiv.org/pdf/1811.10100v3.pdf) )",computer-vision 3c308c7a-cf5d-4119-9e5f-50bec1928475,open-vocabulary-panoptic-segmentation,Open Vocabulary Panoptic Segmentation,,computer-vision 676bc096-5276-45d8-8e16-878ac1a97108,camera-shot-boundary-detection,Camera shot boundary detection,"The objective of camera shot boundary detection is to find the transitions between the camera shots in a video and classify the type of camera transition. 
This task is introduced in SoccerNet-v2, where 3 types of transitions are considered (abrupt, logo, smooth).",computer-vision ee8cea23-ab9b-4056-bcca-039f690e1531,jpeg-compression-artifact-reduction,Jpeg Compression Artifact Reduction,,computer-vision e81a962c-c02a-48b9-b02f-f6fe719ba006,3d-shape-reconstruction-from-a-single-2d,3D Shape Reconstruction From A Single 2D Image,Image: [Liao et al](https://arxiv.org/pdf/1811.12016v1.pdf),computer-vision 09351cc1-4387-4f84-bd91-14253d53495c,3d-face-modeling,3D Face Modelling,,computer-vision dc0a49dd-c5f4-4974-9c29-bb87664aa1a4,steering-control,Steering Control,,computer-vision 873e90df-2572-45b6-acc4-4bd06d0e17de,event-based-optical-flow,Event-based Optical Flow,,computer-vision 6c63cc64-77e0-43d4-98b5-2625bb5f3572,robust-3d-semantic-segmentation,Robust 3D Semantic Segmentation,3D Semantic Segmentation under Out-of-Distribution Scenarios,computer-vision d6e4e812-1394-4759-9cbc-9c3c70c1f981,kiss-detection,Kiss Detection,,computer-vision 3b9f01d0-e9d6-4bd2-acf2-95d290ee9a7b,photo-retouching,Photo Retouching,,computer-vision 15d6a799-7eec-4257-b588-326b89dbbbca,handwriting-verification,Handwriting Verification,The goal of handwriting verification is to find a measure of confidence whether the given handwritten samples are written by the same or different writer.,computer-vision 83009cf3-4b1c-4109-a057-daeb15addf6c,automatic-post-editing,Automatic Post-Editing,Automatic post-editing (APE) is used to correct errors in the translation made by the machine translation systems.,computer-vision 5fe519ad-6051-4b87-a57d-d7fd414d4855,sensor-fusion,Sensor Fusion,Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually. [Wikipedia],computer-vision 3e009e95-8a9d-4680-86d2-7519e45c5514,road-segementation,Road Segmentation,Road Segmentation is a pixel-wise binary classification to extract the underlying road network. Various heuristic and data-driven models have been proposed. Continuity and robustness remain among the major challenges in the area.,computer-vision 309e1a47-4797-4d06-8893-d762175b1238,steganographics,Steganographics,,computer-vision 4715a4b1-8c8c-43ee-bb48-84603307b074,fish-detection,Fish Detection,,computer-vision 0706d656-aa1f-4739-a5f6-2143114a8be6,keypoint-detection,Keypoint Detection,"**Keypoint Detection** involves simultaneously detecting people and localizing their keypoints. Keypoints are the same thing as interest points. They are spatial locations, or points in the image that define what is interesting or what stands out in the image. They are invariant to image rotation, shrinkage, translation, distortion, and so on. 
( Image credit: [PifPaf: Composite Fields for Human Pose Estimation](https://github.com/vita-epfl/openpifpaf); ""Learning to surf"" by fotologic, license: CC-BY-2.0 )",computer-vision 4d002420-4be0-4d7f-9191-9db02db58379,jpeg-decompression,JPEG Decompression,Image credit: [Palette: Image-to-Image Diffusion Models](https://paperswithcode.com/paper/palette-image-to-image-diffusion-models),computer-vision 4a70ee08-7708-4c6d-a5e8-7caf732565ce,image-to-video-person-re-identification,Image-To-Video Person Re-Identification,,computer-vision 838020a6-86ff-4e4b-b6a7-0c00549798cf,gaze-redirection,gaze redirection,,computer-vision 28e63089-6873-49e1-8464-16c158e2dbeb,set-matching,set matching,,computer-vision 4c95b2a2-c9f3-4f67-b947-fc08dd2f0135,boundary-grounding,Boundary Grounding,"Provided with a description of a boundary inside a video, the machine is required to locate that boundary in the video.",computer-vision 92763350-b118-4f26-8db8-ef200c5c9144,grounded-situation-recognition,Grounded Situation Recognition,"Grounded Situation Recognition aims to produce the structured image summary which describes the primary activity (verb), its relevant entities (nouns), and their bounding-box groundings.",computer-vision 17c37e61-5c91-4bd8-b890-aed3be677b5f,gan-image-forensics,GAN image forensics,,computer-vision 4b685b4b-7212-4543-985d-02dcb0f137e3,video-synchronization,Video Synchronization,,computer-vision b5754d58-4798-400e-a1af-e377c037aeef,rgb-t-tracking,Rgb-T Tracking,,computer-vision c1409f98-bbdc-4b80-9a8c-109ca0f5584f,temporal-action-proposal-generation,Temporal Action Proposal Generation,,computer-vision 0edc4b91-96f5-40ac-aea9-8d292b294f59,disjoint-15-5,Disjoint 15-5,,computer-vision 498cb08e-aa4e-4940-9fdd-8234e6c68f8e,image-reconstruction,Image Reconstruction,,computer-vision 5cc9ef69-fe0d-4b8c-8a40-27e42f8ac1ce,single-view-3d-reconstruction,Single-View 3D Reconstruction,,computer-vision 7654f7bb-7aef-4076-8f48-41ad01fe22fb,image-declipping,Image Declipping,,computer-vision 8e64cada-c4f4-4324-b504-10355a3a4c6f,sign-language-recognition,Sign Language Recognition,"**Sign Language Recognition** is a computer vision and natural language processing task that involves automatically recognizing and translating sign language gestures into written or spoken language. The goal of sign language recognition is to develop algorithms that can understand and interpret sign language, enabling people who use sign language as their primary mode of communication to communicate more easily with non-signers. ( Image credit: [Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison](https://arxiv.org/pdf/1910.11006v1.pdf) )",computer-vision 95b75309-61c5-4d47-a44c-4ffa2f48f8ff,embodied-question-answering,Embodied Question Answering,,computer-vision 4251c6fe-d7cf-483e-be13-58c9ef259be3,concept-alignment,Concept Alignment,**Concept Alignment** aims to align the learned representations or concepts within a model with the intended or target concepts. It involves adjusting the model's parameters or training process to ensure that the learned concepts accurately reflect the underlying patterns in the data.,computer-vision 08f7375e-890e-482e-bf62-3b2c6b3450f8,spatial-relation-recognition,Spatial Relation Recognition,,computer-vision aa29b25c-e975-4466-92f6-63a3d759e6ff,story-continuation,Story Continuation,"The task involves providing an initial scene that can be obtained in real world use cases. 
By including this scene, a model can then copy and adapt elements from it as it generates subsequent images. Source: [StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation](https://paperswithcode.com/paper/storydall-e-adapting-pretrained-text-to-image)",computer-vision 13af0442-7fa8-467a-8d1b-e4e211ebaa41,attentive-segmentation-networks,Attentive segmentation networks,,computer-vision 1c47b7cf-1b4c-4eaf-8bd6-85c3cd78ebdf,3d-multi-person-pose-estimation-absolute,3D Multi-Person Pose Estimation (absolute),"This task aims to solve absolute 3D multi-person pose estimation (camera-centric coordinates). No ground-truth human bounding boxes or human root joint coordinates are used during the testing stage. ( Image credit: [RootNet](https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE) )",computer-vision 427c6069-186a-4e81-9887-0cd3eac1e1fa,hand-keypoint-localization,Hand Keypoint Localization,,computer-vision f5ea788b-1455-4f81-8271-a4cff0e6e01f,3d-facial-expression-recognition,3D Facial Expression Recognition,"3D facial expression recognition is the task of modelling facial expressions in 3D from an image or video. ( Image credit: [Expression-Net](https://github.com/fengju514/Expression-Net) )",computer-vision d8f0667d-8f3b-498b-a266-5836fc6048b9,self-supervised-learning,Self-Supervised Learning,"**Self-Supervised Learning** is proposed for utilizing unlabeled data with the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is being generated all the time. The motivation of Self-Supervised Learning is to make use of the large amount of unlabeled data. The main idea of Self-Supervised Learning is to generate the labels from unlabeled data, according to the structure or characteristics of the data itself, and then train on this unsupervised data in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn the latent features of the data. This technique is often employed in computer vision, video processing and robot control. Source: [Self-supervised Point Set Local Descriptors for Point Cloud Registration ](https://arxiv.org/abs/2003.05199) Image source: [LeCun](https://www.youtube.com/watch?v=7I0Qt7GALVk)",computer-vision 73ac2a1c-1536-47c2-9dc9-5bcbcb0f3cb6,anomaly-detection-at-30-anomaly,Anomaly Detection at 30% anomaly,Performance of unsupervised anomaly detection at specific anomaly percentage.,computer-vision ff3089e8-fdda-4f90-9925-ea3900ea12d1,activity-recognition,Activity Recognition,"Human **Activity Recognition** is the problem of identifying events performed by humans given a video input. It is formulated as a binary (or multiclass) classification problem of outputting activity class labels. Activity Recognition is an important problem with many societal applications including smart surveillance, video search/retrieval, intelligent robots, and other monitoring systems. Source: [Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters ](https://arxiv.org/abs/1605.08140)",computer-vision d482e1c9-b03f-48c4-a654-042a4a7ac84b,action-triplet-recognition,Action Triplet Recognition,"Recognising an action as a triplet of subject, verb and object. Example HOI = Human Object Interaction, Surgical IVT = Instrument Verb Target, etc.",computer-vision c1e7f1ee-ea14-4e5b-9449-5e11697737ae,image-based-localization,Image-Based Localization,Determining the location of an image without GPS based on cross-view matching. 
In most cases, a database of satellite images is used and the ground images are matched against it.",computer-vision 55c4d34d-d56c-4f22-9ca0-eae35af324ce,robust-object-detection,Robust Object Detection,"A benchmark for the robustness of object detection models to image corruptions and distortions. To allow fair comparison of robustness-enhancing methods, all models have to use a standard ResNet50 backbone, because performance strongly scales with backbone capacity. If requested, an unrestricted category can be added later. Benchmark Homepage: https://github.com/bethgelab/robust-detection-benchmark Metrics: mPC [AP]: Mean Performance under Corruption [measured in AP] rPC [%]: Relative Performance under Corruption [measured in %] Test sets: Coco: val 2017; Pascal VOC: test 2007; Cityscapes: val; ( Image credit: [Benchmarking Robustness in Object Detection](https://arxiv.org/pdf/1907.07484v1.pdf) )",computer-vision e5252aeb-4531-4b8f-b5cf-0193b79b3109,video-quality-assessment,Video Quality Assessment,"Video Quality Assessment is a computer vision task aiming to mimic video-based human subjective perception. The goal is to produce a MOS score, where a higher score indicates better perceptual quality. Some well-known benchmarks for this task are KoNViD-1k, LIVE-VQC, YouTube-UGC and LSVQ. SROCC/PLCC/RMSE are usually used to evaluate the performance of different models.",computer-vision cccbb91e-3d3b-441b-9289-b1dec185dac5,semi-supervised-image-classification,Semi-Supervised Image Classification,"Semi-supervised image classification leverages unlabelled data as well as labelled data to increase classification performance. You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards: - [An overview of proxy-label approaches for semi-supervised learning](https://ruder.io/semi-supervised/) - Sebastian Ruder - [Semi-Supervised Learning in Computer Vision](https://amitness.com/2020/07/semi-supervised-learning/) - Amit Chaudhary ( Image credit: [Self-Supervised Semi-Supervised Learning](https://arxiv.org/pdf/1905.03670v2.pdf) )",computer-vision 51bb8851-5260-47a3-a891-7a294415b460,action-analysis,Action Analysis,,computer-vision c48e0134-f37f-4181-a4c3-2020414544e2,holdout-set,Holdout Set,,computer-vision 54851971-3e60-4878-bcca-dc21925b5852,gait-recognition-in-the-wild,Gait Recognition in the Wild,"Gait Recognition in the Wild refers to methods under real-world scenes, i.e., unconstrained environments.",computer-vision 1580af53-1ee2-4e37-9994-2556c5dd3ae1,3d-object-reconstruction,3D Object Reconstruction,Image: [Choy et al](https://arxiv.org/pdf/1604.00449v1.pdf),computer-vision 2ec2a183-f152-46db-8aca-66179a09448e,materials-imaging,Materials Imaging,,computer-vision d66875e2-eb68-4ecd-a28a-bd55b16e4797,video-matting,Video Matting,Image credit: [https://arxiv.org/pdf/2012.07810v1.pdf](https://arxiv.org/pdf/2012.07810v1.pdf),computer-vision e1e18e08-7308-4cbc-9055-c6e0206ac1dc,gait-recognition,Gait Recognition,"( Image credit: [GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition](https://github.com/AbnerHqC/GaitSet) )",computer-vision 3a215e28-b496-4b48-83f0-09831d0fe7f3,monocular-3d-object-localization,Monocular 3D Object Localization,,computer-vision 3498f8f1-6b7f-46b3-90d7-d6d3a0b561d4,birds-eye-view-object-detection,Birds Eye View Object Detection,KITTI birds eye view detection task,computer-vision ac65c824-3010-4034-a2ce-965195961c57,moving-object-detection,Moving Object Detection,,computer-vision 
20319665-81a3-45de-a43d-8247a3666405,crosslingual-text-to-image-generation,Crosslingual Text-to-Image Generation,,computer-vision 619ebeb6-340d-49f7-8763-8f8dd5b52557,document-to-image-conversion,Document To Image Conversion,,computer-vision 9c8f015a-01c2-48bc-884e-59bbeb9fce3a,person-identification,Person Identification,,computer-vision 1d12bbed-23e3-47c7-b69a-39dfae3ab9da,medical-image-enhancement,Medical Image Enhancement,Aims to improve the perceptual quality of low-quality medical images,computer-vision 735601e2-1096-406a-b544-030254d1208c,semi-supervised-fashion-compatibility,Semi-Supervised Fashion Compatibility,,computer-vision 2f502ba6-e817-4cea-a277-fced1c22a2c0,visual-tracking,Visual Tracking,"**Visual Tracking** is an essential and actively researched problem in the field of computer vision with various real-world applications such as robotic services, smart surveillance systems, autonomous driving, and human-computer interaction. It refers to the automatic estimation of the trajectory of an arbitrary target object, usually specified by a bounding box in the first frame, as it moves around in subsequent video frames. Source: [Learning Reinforced Attentional Representation for End-to-End Visual Tracking ](https://arxiv.org/abs/1908.10009)",computer-vision a9c84d55-97bb-4319-8487-d1cb432216dc,overlapped-25-25,Overlapped 25-25,,computer-vision 5ab2491c-c346-41fd-a340-912cb175479b,3d-human-pose-tracking,3D Human Pose Tracking,,computer-vision 46786ae8-834c-4543-87cf-9f44c826b2ec,transparency-separation,Transparency Separation,,computer-vision a1615da7-9ef2-47ab-af78-da9503db4d62,3d-lane-detection,3D Lane Detection,"The goal of **3D Lane Detection** is to perceive lanes that provide guidance for autonomous vehicles. A lane can be represented as a visible laneline or a conceptual centerline. Furthermore, a lane obtains extra attributes from the understanding of the surrounding environment. ( Image credit: [OpenLane-V2](https://github.com/OpenDriveLab/OpenLane-V2 ) )",computer-vision ae774ea2-0f29-425d-bb5b-1604ba7cabbf,indoor-monocular-depth-estimation,Indoor Monocular Depth Estimation,,computer-vision 90ef2fc9-c460-4104-a6b3-396098161bc9,landmine,Landmine,,computer-vision ef271038-c8c9-4eac-be62-fbe3870cbc2e,multiple-object-tracking-with-transformer,Multiple Object Tracking with Transformer,,computer-vision 3a14b5bd-0d6d-4f41-8796-27c6fcf2874a,large-scale-person-re-identification,Large-Scale Person Re-Identification,,computer-vision decc1855-49d2-4045-935e-da8ccaf165de,face-detection,Face Detection,"**Face Detection** is a computer vision task that involves automatically identifying and locating human faces within digital images or videos. It is a fundamental technology that underpins many applications such as face recognition, face tracking, and facial analysis. ( Image credit: [insightface](https://github.com/deepinsight/insightface) )",computer-vision 04a6680c-ba5b-478a-ae27-be40703db6ad,infrared-and-visible-image-fusion,Infrared And Visible Image Fusion,Image fusion with paired infrared and visible images,computer-vision 2a633a94-f7f8-4c9a-8254-db2c587ec1e0,3d-point-cloud-linear-classification,3D Point Cloud Linear Classification,Training a linear classifier(e.g. SVM) on the embeddings/representations of 3D point clouds. 
The embeddings/representations are usually trained in an unsupervised manner.",computer-vision 35d5b3e3-91bc-426b-ab28-bea1cfa5b075,3d-object-detection,3D Object Detection,"**3D Object Detection** is a task in computer vision where the goal is to identify and locate objects in a 3D environment based on their shape, location, and orientation. It involves detecting the presence of objects and determining their location in the 3D space in real-time. This task is crucial for applications such as autonomous vehicles, robotics, and augmented reality. ( Image credit: [AVOD](https://github.com/kujason/avod) )",computer-vision af226e4f-76ec-4c9c-9e32-31c73a059c09,semantic-segmentation,Semantic Segmentation,"**Semantic Segmentation** is a computer vision task in which the goal is to categorize each pixel in an image into a class or object. The goal is to produce a dense pixel-wise segmentation map of an image, where each pixel is assigned to a specific class or object. Some example benchmarks for this task are Cityscapes, PASCAL VOC and ADE20K. Models are usually evaluated with the Mean Intersection-Over-Union (Mean IoU) and Pixel Accuracy metrics. ( Image credit: [CSAILVision](https://github.com/CSAILVision/semantic-segmentation-pytorch) )",computer-vision ff2cdb06-03a2-4da6-bbb8-1396c9cefd05,composite-action-recognition,Composite action recognition,,computer-vision 0a20c938-15cf-48b1-a046-09d6135f1dfc,histopathological-segmentation,Histopathological Segmentation,,computer-vision 3ff46dfa-bf79-4fc9-81a3-e83f5abbe96e,plan2scene,Plan2Scene,Converting floorplans + RGB photos to textured 3D mesh models of houses.,computer-vision cc1292e0-da78-46f0-95dd-29b7e969bb3d,offline-handwritten-chinese-character,Offline Handwritten Chinese Character Recognition,Handwritten Chinese character recognition is the task of detecting and interpreting the components of Chinese characters (i.e. radicals and two-dimensional structures).,computer-vision 9cc240b8-8dea-4301-9ca5-3294a1e00a59,unet-quantization,UNET Quantization,,computer-vision 7bc76389-29b5-4d79-979f-fc42fda837ba,de-aliasing,De-aliasing,De-aliasing is the problem of recovering the original high-frequency information that has been aliased during the acquisition of an image.,computer-vision f3a5e8be-0cb8-489d-ac15-81ae4f6a5544,video-recognition,Video Recognition,"**Video Recognition** is a process of obtaining, processing, and analysing data that it receives from a visual source, specifically video.",computer-vision 98ffaf5b-425a-48dd-b8f6-c7bb548e874a,3d-plane-detection,3D Plane Detection,Image: [Liu et al](https://arxiv.org/pdf/1812.04072v2.pdf),computer-vision 6b972076-1cee-44f5-b3fe-c7e6da180bae,cube-engraving-classification,Cube Engraving Classification,,computer-vision cfb4e9a9-5e03-4244-847c-535b34c22aeb,supervised-video-summarization,Supervised Video Summarization,"**Supervised video summarization** methods rely on datasets with human-labeled ground-truth annotations (either in the form of video summaries, as in the case of the [SumMe](https://paperswithcode.com/dataset/summe) dataset, or in the form of frame-level importance scores, as in the case of the [TVSum](https://paperswithcode.com/dataset/tvsum-1) dataset), based on which they try to discover the underlying criterion for video frame/fragment selection and video summarization. 
Source: [Video Summarization Using Deep Neural Networks: A Survey](https://arxiv.org/abs/2101.06072)",computer-vision 0b5f5810-4659-4f9f-a9b2-34fe069a42b1,multi-view-subspace-clustering,Multi-view Subspace Clustering,,computer-vision 52fe80d4-26ba-4788-8ae2-1a0e7a0db3ae,self-supervised-anomaly-detection,Self-Supervised Anomaly Detection,Self-Supervision towards anomaly detection,computer-vision e377656c-298a-492c-ba51-f03e5ff86ece,rgb-d-salient-object-detection,RGB-D Salient Object Detection,"RGB-D Salient object detection (SOD) aims at distinguishing the most visually distinctive objects or regions in a scene from the given RGB and Depth data. It has a wide range of applications, including video/image segmentation, object recognition, visual tracking, foreground maps evaluation, image retrieval, content-aware image editing, information discovery, photosynthesis, and weakly supervised semantic segmentation. Here, depth information plays an important complementary role in finding salient objects. Online benchmark: http://dpfan.net/d3netbenchmark. ( Image credit: [Rethinking RGB-D Salient Object Detection: Models, Data Sets, and Large-Scale Benchmarks, TNNLS20](https://ieeexplore.ieee.org/abstract/document/9107477) )",computer-vision bef16087-6cbc-4bc5-8c17-406ce0fe977d,synthetic-image-detection,Synthetic Image Detection,Identify if the image is real or generated/manipulated by any generative models (GAN or Diffusion).,computer-vision cd808f35-4f0b-46f3-9d72-909114b57159,sketch-to-image-translation,Sketch-to-Image Translation,,computer-vision 6489e7e7-59ef-4c78-92e8-d448778765f7,vocabulary-free-image-classification,Vocabulary-free Image Classification,"Recent advances in large vision-language models have revolutionized the image classification paradigm. Despite showing impressive zero-shot capabilities, a pre-defined set of categories, a.k.a. the vocabulary, is assumed at test time for composing the textual prompts. However, such assumption can be impractical when the semantic context is unknown and evolving. Vocabulary-free Image Classification (VIC) aims to assign to an input image a class that resides in an unconstrained language-induced semantic space, without the prerequisite of a known vocabulary.",computer-vision 01c37915-7537-4b25-88eb-fa4aea23fa3e,video-generation,Video Generation,"( Various Video Generation Tasks. Gif credit: [MaGViT](https://paperswithcode.com/paper/magvit-masked-generative-video-transformer) )",computer-vision d60fc433-fc1a-468e-8094-2fddd663ce4b,one-shot-instance-segmentation,One-Shot Instance Segmentation,"( Image credit: [Siamese Mask R-CNN ](https://github.com/bethgelab/siamese-mask-rcnn) )",computer-vision 34dd8178-de32-4170-b68d-59e986f88328,explainable-artificial-intelligence,Explainable artificial intelligence,"XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the ""black box"" in machine learning where even its designers cannot explain why an AI arrived at a specific decision. XAI may be an implementation of the social right to explanation. XAI is relevant even if there is no legal right or regulatory requirement—for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. This way the aim of XAI is to explain what has been done, what is done right now, what will be done next and unveil the information the actions are based on. 
These characteristics make it possible (i) to confirm existing knowledge (ii) to challenge existing knowledge and (iii) to generate new assumptions.",computer-vision 0240288e-72a4-4279-98eb-6940ee1a5d9c,one-shot-3d-action-recognition,One-Shot 3D Action Recognition,,computer-vision 9c493ccc-86ed-4749-9687-bf0878c0a1f2,jpeg-forgery-localization,Jpeg Forgery Localization,,computer-vision 1b51655e-f4ad-4ddb-ad09-37ec41150dc6,image-deblocking,Image Deblocking,,computer-vision 8cb3b2ae-ac31-4970-b7e7-cd09fda1edc3,key-frame-based-video-super-resolution-k-15,Key-Frame-based Video Super-Resolution (K = 15),"Key-Frame-based Video Super-Resolution is a sub-task of [Video Super-Resolution](https://paperswithcode.com/task/video-super-resolution), where, in addition to the low-resolution frames, high-resolution ground-truth frames for every Kth input frame are also provided as inputs to the model. For example, if `[LR-frame-1, LR-frame-2, LR-frame-3, ..., LR-frame-100]` is the sequence of low-resolution frames to be upscaled, the Key-Frame-based Video Super-Resolution (K = 15) model is also provided with the high-resolution frames `[HR-frame-1, HR-frame-16, ..., HR-frame-91]` . Key-frames are excluded when measuring the evaluation metrics.",computer-vision 82c3f07d-a9a2-4b85-8bc0-168b61a772f3,amodal-instance-segmentation,Amodal Instance Segmentation,"Different from traditional segmentation which only focuses on visible regions, amodal instance segmentation also predicts the occluded parts of object instances. Description Credit: [Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers, CVPR'21](https://openaccess.thecvf.com/content/CVPR2021/papers/Ke_Deep_Occlusion-Aware_Instance_Segmentation_With_Overlapping_BiLayers_CVPR_2021_paper.pdf)",computer-vision 3d4c14bb-dfec-4e55-bc60-9480d48e08aa,deepfake-detection,DeepFake Detection,"**DeepFake Detection** is the task of detecting fake videos or images that have been generated using deep learning techniques. Deepfakes are created by using machine learning algorithms to manipulate or replace parts of an original video or image, such as the face of a person. The goal of deepfake detection is to identify such manipulations and distinguish them from real videos or images. Description source: [DeepFakes: a New Threat to Face Recognition? Assessment and Detection](https://arxiv.org/pdf/1812.08685.pdf) Image source: [DeepFakes: a New Threat to Face Recognition? Assessment and Detection](https://paperswithcode.com/paper/deepfakes-a-new-threat-to-face-recognition)",computer-vision a5d4923b-b2f3-4d1c-8b85-657cfd641b3a,action-detection,Action Detection,"Action Detection aims to find both where and when an action occurs within a video clip and classify what the action is taking place. Typically results are given in the form of action tublets, which are action bounding boxes linked across time in the video. 
This is related to temporal localization, which seeks to identify the start and end frame of an action, and action recognition, which seeks only to classify which action is taking place and typically assumes a trimmed video.",computer-vision f41febfd-8254-4f95-bbf8-999642aa4f75,human-object-interaction-detection,Human-Object Interaction Detection,"Human-Object Interaction (HOI) detection is a task of identifying ""a set of interactions"" in an image, which involves the i) localization of the subject (i.e., humans) and target (i.e., objects) of interaction, and ii) the classification of the interaction labels.",computer-vision 5b01076c-0964-4e4b-8a84-8d9dc4e55f86,lighting-estimation,Lighting Estimation,Lighting Estimation analyzes given images to provide detailed information about the lighting in a scene.,computer-vision 83dfde13-600d-447a-bce5-f6037bd57006,document-image-skew-estimation,Document Image Skew Estimation,,computer-vision 9c05da45-ed29-4a38-9c36-5729a5acf347,road-damage-detection,Road Damage Detection,"Road damage detection is the task of detecting damage in roads. ( Image credit: [Road Damage Detection And Classification In Smartphone Captured Images Using Mask R-CNN](https://arxiv.org/pdf/1811.04535v1.pdf) )",computer-vision c5b220f6-d28f-4092-b82b-ba72f0a0520f,multiple-object-tracking,Multiple Object Tracking,"**Multiple Object Tracking** is the problem of automatically identifying multiple objects in a video and representing them as a set of trajectories with high accuracy. Source: [SOT for MOT ](https://arxiv.org/abs/1712.01059)",computer-vision be79ed0d-5a2c-4159-812f-20c2e80877fe,depth-and-camera-motion,Depth And Camera Motion,,computer-vision b8e3f6bb-55a8-40bd-997e-7d3d592b4aac,film-simulation,Film Simulation,Simulate the appearance of film camera.,computer-vision 6cc080ce-5886-4e6a-b9e2-1850951a2539,subject-driven-video-generation,Subject-driven Video Generation,,computer-vision 221cccb4-7481-400c-8735-5f91d5901bc9,occluded-face-detection,Occluded Face Detection,,computer-vision 83d406fc-72da-435d-9725-5762f6b7b2a1,food-recognition,Food Recognition,,computer-vision 5b17ff0e-7078-4419-a321-9f2716ead47e,medical-image-denoising,Medical Image Denoising,Image credit: [Learning Medical Image Denoising with Deep Dynamic Residual Attention Network](https://paperswithcode.com/paper/learning-medical-image-denoising-with-deep),computer-vision 4e97419c-1154-4bd4-b893-16f67a8143a6,3d-semantic-scene-completion-from-a-single,3D Semantic Scene Completion from a single RGB image,This task relies on a single RGB image to infer the dense 3D voxelized semantic scene.,computer-vision 1be75958-7e31-4cdf-a8d7-97e91930ede5,unrolling,Rolling Shutter Correction,Rolling Shutter Correction,computer-vision 3f74793a-206f-45ec-bd6f-b703b1783925,learning-with-coarse-labels,Learning with coarse labels,"Learning fine-grained representation with coarsely-labelled dataset, which can significantly reduce the labelling cost. As a simple example, for the task of differentiation between different pets, we need a knowledgeable cat lover to distinguish between ‘British short’ and ‘Siamese’, but even a child annotator may help to discriminate between ‘cat’ and ‘non-cat’.",computer-vision 932463ce-d625-4775-965f-46dbd51cced8,person-retrieval,Person Retrieval,,computer-vision 85daacc9-e963-4e54-b04d-41ca628cbda3,open-vocabulary-object-detection,Open Vocabulary Object Detection,"Open-vocabulary detection (OVD) aims to generalize beyond the limited number of base classes labeled during the training phase. 
The goal is to detect novel classes defined by an unbounded (open) vocabulary at inference.",computer-vision e8d08f18-1bda-4c49-86ca-b6e65070d10f,video-emotion-detection,Video Emotion Detection,,computer-vision 099a2a00-b6e0-4d5a-9e75-3adfbd7fb267,surface-normals-estimation-from-point-clouds,Surface Normals Estimation from Point Clouds,Parent task: 3d Point Clouds Analysis,computer-vision 30c76c9c-7cd0-4510-9299-6faf06830340,video-captioning,Video Captioning,"**Video Captioning** is the task of automatically captioning a video by understanding the actions and events in it, which can help retrieve the video efficiently through text. Source: [NITS-VC System for VATEX Video Captioning Challenge 2020 ](https://arxiv.org/abs/2006.04058)",computer-vision aab59969-aad0-4ab1-a187-6636822e32b5,action-recognition-in-still-images,Action Recognition In Still Images,,computer-vision f2129983-da52-4135-85fd-b5e71c1f254a,video-compression,Video Compression,"**Video Compression** is a process of reducing the size of an image or video file by exploiting spatial and temporal redundancies within an image or video frame and across multiple video frames. The ultimate goal of a successful Video Compression system is to reduce data volume while retaining the perceptual quality of the decompressed data. Source: [Adversarial Video Compression Guided by Soft Edge Detection ](https://arxiv.org/abs/1811.10673)",computer-vision 373f644b-22d8-4608-87e3-16c1808b5753,real-time-instance-segmentation,Real-time Instance Segmentation,"Similar to its parent task, instance segmentation, but with the goal of achieving real-time capabilities under a defined setting. Image Credit: [SipMask: Spatial Information Preservation for Fast Image and Video Instance Segmentation](https://arxiv.org/pdf/2007.14772v1.pdf)",computer-vision 7109440e-1735-4b5c-ba73-ca6d975181ad,few-shot-video-object-detection,Few-Shot Video Object Detection,"Few-Shot Video Object Detection (FSVOD): given only a few support images of the target object in an unseen class, detect all the objects belonging to the same class in a given query video.",computer-vision 4ce7c08b-1d0f-4fe5-8fd5-80f8eb6b6a6e,material-recognition,Material Recognition,,computer-vision e8f4e05e-c051-41f3-9129-0b8bc2abd155,small-data,Small Data Image Classification,Supervised image classification with tens to hundreds of labeled training examples.,computer-vision 66fcaa84-8a19-4613-994e-075fa3608712,3d-shape-recognition,3D Shape Recognition,Image: [Wei et al](https://arxiv.org/pdf/1908.10098v1.pdf),computer-vision 14bf5bac-340d-4a53-8645-89723be5209e,video-narrative-grounding,Video Narrative Grounding,"**Video Narrative Grounding** is the task of linking video narratives to specific video segments. The input is a video with a text description (the narrative) and the positions of certain nouns marked. For each marked noun, the method must output a segmentation mask for the object it refers to, in each video frame. Source: [Connecting Vision and Language with Video Localized Narratives](/paper/connecting-vision-and-language-with-video)",computer-vision b8c6620f-9ed8-42c9-bcf5-be3f8fdb22c1,hand-joint-reconstruction,Hand Joint Reconstruction,,computer-vision 98c4b4b7-5434-4e0d-9818-5fef4cb6cc19,action-anticipation,Action Anticipation,"Next action anticipation is defined as observing 1, ... , T frames and predicting the action that happens after a gap of T_a seconds. It is important to note that a new action, not seen in the observed frames, starts after T_a seconds.
Here T_a=1 second.",computer-vision 3c7e8f5d-a08b-461a-bae8-8253cf1799c0,text-to-image,Text-To-Image,,computer-vision 5d6de600-0ff2-40d9-9e6a-dc4c576cb474,depth-estimation,Depth Estimation,"**Depth Estimation** is the task of measuring the distance of each pixel relative to the camera. Depth is extracted from either monocular (single) or stereo (multiple views of a scene) images. Traditional methods use multi-view geometry to find the relationship between the images. Newer methods can directly estimate depth by minimizing the regression loss, or by learning to generate a novel view from a sequence. The most popular benchmarks are KITTI and NYUv2. Models are typically evaluated according to a RMS metric. Source: [DIODE: A Dense Indoor and Outdoor DEpth Dataset ](https://arxiv.org/abs/1908.00463)",computer-vision 1896daef-cdeb-46a5-a9ff-04c744d436db,motion-synthesis,Motion Synthesis,"Image source: [Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels ](https://paperswithcode.com/paper/multi-view-motion-synthesis-via-applying)",computer-vision 8edc8f5a-a3dd-4573-8768-193b2e272f51,visual-question-answering-1,Visual Question Answering,,computer-vision 3a79202e-62ed-4e1d-8876-cba586729e50,intrinsic-image-decomposition,Intrinsic Image Decomposition,"**Intrinsic Image Decomposition** is the process of separating an image into its formation components such as reflectance (albedo) and shading (illumination). Reflectance is the color of the object, invariant to camera viewpoint and illumination conditions, whereas shading, dependent on camera viewpoint and object geometry, consists of different illumination effects, such as shadows, shading and inter-reflections. Using intrinsic images, instead of the original images, can be beneficial for many computer vision algorithms. For instance, for shape-from-shading algorithms, the shading images contain important visual cues to recover geometry, while for segmentation and detection algorithms, reflectance images can be beneficial as they are independent of confounding illumination effects. Furthermore, intrinsic images are used in a wide range of computational photography applications, such as material recoloring, relighting, retexturing and stylization. Source: [CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition ](https://arxiv.org/abs/1712.01056)",computer-vision 70a001d8-3016-4f2f-a8e6-2ebde8882c30,neural-rendering,Neural Rendering,"Given a representation of a 3D scene of some kind (point cloud, mesh, voxels, etc.), the task is to create an algorithm that can produce photorealistic renderings of this scene from an arbitrary viewpoint. 
Sometimes, the task is accompanied by image/scene appearance manipulation.",computer-vision 6c6c2c46-6d6b-4a96-8ca3-733759bac32c,segmenting-flooded-buildings,Flooded Building Segmentation,,computer-vision ab2f7783-5cf9-492f-805d-0cd94ff3dcf7,text-to-face-generation,Text-to-Face Generation,,computer-vision 7f4a57f5-ef98-436b-8482-f328c94a0cb5,one-shot-visual-object-segmentation,One-shot visual object segmentation,,computer-vision efaa1caf-a09d-400b-970e-e2e09d50d199,weakly-supervised-segmentation,Weakly supervised segmentation,,computer-vision 3c25212c-bc58-498c-a2a4-0f963960bee6,real-time-visual-tracking,Real-Time Visual Tracking,,computer-vision 18f7908f-e35a-4000-b87a-c960ec746287,plant-phenotyping,Plant Phenotyping,,computer-vision 9297b2f2-92f5-47a2-89ac-9c815dfffe32,3d-human-dynamics,3D Human Dynamics,Image: [Zhang et al](https://openaccess.thecvf.com/content_ICCV_2019/papers/Zhang_Predicting_3D_Human_Dynamics_From_Video_ICCV_2019_paper.pdf),computer-vision 722ffd4e-0e86-4a31-b55f-7ff081a65011,unsupervised-anomaly-detection-with-specified-5,Unsupervised Anomaly Detection with Specified Settings -- 1% anomaly,,computer-vision 7e4d8c53-28c3-4518-9d0f-539ed0fc7be4,3d-object-recognition,3D Object Recognition,"3D object recognition is the task of recognising objects from 3D data. Note that there are related tasks you can look at, such as [3D Object Detection](https://paperswithcode.com/task/3d-object-detection) which have more leaderboards. (Image credit: [Look Further to Recognize Better](https://arxiv.org/pdf/1907.12924v1.pdf))",computer-vision 27920077-c03b-4842-afa3-219313f41f54,object-proposal-generation,Object Proposal Generation,"Object proposal generation is a preprocessing technique that has been widely used in current object detection pipelines to guide the search of objects and avoid exhaustive sliding window search across images. ( Image credit: [Multiscale Combinatorial Grouping for Image Segmentation and Object Proposal Generation](https://arxiv.org/pdf/1503.00848v4.pdf) )",computer-vision 89a8a2a1-d9c7-4235-a2f4-b64e47a3999f,3d-shape-generation,3D Shape Generation,Image: [Mo et al](https://arxiv.org/pdf/1908.00575v1.pdf),computer-vision e36051d6-c65a-43e6-ae75-6da02ab89688,reflection-removal,Reflection Removal,,computer-vision 6b005646-1a0d-4f48-ac77-b83253edfe14,multi-animal-tracking-with-identification,Multi-Animal Tracking with identification,Tracking all animals in a video maintaining their identities after touches or occlusions.,computer-vision 4a6244ba-ce3b-4a37-bc19-34f966c1d0d6,multi-person-pose-forecasting,Multi-Person Pose forecasting,,computer-vision 48462ea5-ebf9-43dd-84f1-d9b6d0c09862,2d-cyclist-detection,2D Cyclist Detection,,computer-vision 12466ee9-d9ce-4039-9ec8-a4e17541ab31,real-time-semantic-segmentation,Real-Time Semantic Segmentation,"Semantic Segmentation is a computer vision task that involves assigning a semantic label to each pixel in an image. In **Real-Time Semantic Segmentation**, the goal is to perform this labeling quickly and accurately in real-time, allowing for the segmentation results to be used for tasks such as object recognition, scene understanding, and autonomous navigation. 
( Image credit: [TorchSeg](https://github.com/ycszen/TorchSeg) )",computer-vision fc2c0b6b-1212-40ce-9c22-7b69feb06e85,generalized-referring-expression-segmentation,Generalized Referring Expression Segmentation,"Generalized Referring Expression Segmentation (GRES), introduced by [Liu et al in CVPR 2023](https://henghuiding.github.io/GRES/), allows expressions indicating any number of target objects. GRES takes an image and a referring expression as input, and requires mask prediction of the target object(s).",computer-vision 5eb83ba1-a912-4490-8883-3e184428da17,disparity-estimation,Disparity Estimation,The Disparity Estimation is the task of finding the pixels in the multiscopic views that correspond to the same 3D point in the scene.,computer-vision 433e341b-4e12-4c20-8a2a-0e7640733073,disjoint-10-1,Disjoint 10-1,,computer-vision 19007a7d-13fb-4fc8-94fa-8ee3270b9b75,mental-workload-estimation,Mental Workload Estimation,,computer-vision 5448e0eb-8b0a-43c6-9440-4d4e19f9434a,repetitive-action-counting,Repetitive Action Counting,Repetitive action counting aims to count the number of repetitive actions in a video.,computer-vision 3badd92c-66de-402f-bb73-8199bf66f81d,person-recognition,Person Recognition,,computer-vision 522bb7fa-1145-4c9c-bac7-847a7bd60092,video-emotion-recognition,Video Emotion Recognition,,computer-vision 366d4503-edae-4b00-b6eb-5639fdf34161,hybrid-positioning,Hybrid Positioning,Hybrid Positioning using CV and dead reckoning,computer-vision 96866937-eec4-4d38-9d92-4129b838aa95,indoor-localization,Indoor Localization,Indoor localization is a fundamental problem in indoor location-based applications.,computer-vision 238702ee-479b-4ab8-b410-0854afe06c7e,face-sketch-synthesis,Face Sketch Synthesis,"Face sketch synthesis is the task of generating a sketch from an input face photo. ( Image credit: [High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks](https://arxiv.org/pdf/1710.10182v2.pdf) )",computer-vision 9d1b8893-13c5-48d9-8922-4bb8488497c3,3d-scene-reconstruction,3D Scene Reconstruction,Creating 3D scene either using conventional SFM pipelines or latest deep learning approaches.,computer-vision 3d1b8779-92fe-4028-b197-b5df1044cd1e,facial-emotion-recognition,Facial Emotion Recognition,Emotion Recognition from facial images,computer-vision b33fc841-b20c-41bc-ac7b-d8ef19113e78,lipreading,Lipreading,"Lipreading is a process of extracting speech by watching lip movements of a speaker in the absence of sound. Humans lipread all the time without even noticing. It is a big part in communication albeit not as dominant as audio. It is a very helpful skill to learn especially for those who are hard of hearing. Deep Lipreading is the process of extracting speech from a video of a silent talking face using deep neural networks. It is also known by few other names: Visual Speech Recognition (VSR), Machine Lipreading, Automatic Lipreading etc. The primary methodology involves two stages: i) Extracting visual and temporal features from a sequence of image frames from a silent talking video ii) Processing the sequence of features into units of speech e.g. characters, words, phrases etc. 
We can find several implementations of this methodology either done in two separate stages or trained end-to-end in one go.",computer-vision abe03496-1ebb-4cf2-ba6b-ca2e096c360f,multi-exposure-image-fusion,Multi-Exposure Image Fusion,,computer-vision 8fc6c93d-6649-44f6-b293-5fe359fb7d93,3d-point-cloud-matching,3D Point Cloud Matching,Image: [Gojic et al](https://openaccess.thecvf.com/content_CVPR_2019/papers/Gojcic_The_Perfect_Match_3D_Point_Cloud_Matching_With_Smoothed_Densities_CVPR_2019_paper.pdf),computer-vision d1487389-eec1-4b99-8f7f-978fcf49f585,camera-localization,Camera Localization,,computer-vision 7a20033c-5375-4a7e-926b-85a5c72502de,hd-semantic-map-learning,HD semantic map learning,"The goal of task is to generate map elements in a vectorized form using data from onboard sensors, e.g., RGB cameras and/or LiDARs. These map elements include but are not limited to : Road boundaries, boundaries of roads that split roads and sidewalks.",computer-vision e74623ba-f141-46c2-b992-f0c68433c620,3d-volumetric-reconstruction,3D Volumetric Reconstruction,Image: [Grinvald et al](https://arxiv.org/pdf/1903.00268.pdf),computer-vision 3bc173b6-e5ad-436d-b2b0-16ae7abb3b7e,camera-auto-calibration,Camera Auto-Calibration,,computer-vision 74bcf158-5beb-41b1-8d2c-3a357e3eddec,face-swapping,Face Swapping,"Face swapping refers to the task of swapping faces between images or in an video, while maintaining the rest of the body and environment context. ( Image credit: [Swapped Face Detection using Deep Learning and Subjective Assessment](https://arxiv.org/pdf/1909.04217v1.pdf) )",computer-vision c6ba86ff-e52b-4dae-8a41-4ed07e113fc9,change-detection,Change Detection,"**Change Detection** is a computer vision task that involves detecting changes in an image or video sequence over time. The goal is to identify areas in the image or video that have undergone changes, such as appearance changes, object disappearance or appearance, or even changes in the scene's background. Image credit: [""A TRANSFORMER-BASED SIAMESE NETWORK FOR CHANGE DETECTION""](https://arxiv.org/pdf/2201.01293v1.pdf)",computer-vision 577c91e2-f8ca-4e78-abe0-1a4cc5588107,gesture-recognition,Gesture Recognition,"**Gesture Recognition** is an active field of research with applications such as automatic recognition of sign language, interaction of humans and robots or for new ways of controlling video games. Source: [Gesture Recognition in RGB Videos Using Human Body Keypoints and Dynamic Time Warping ](https://arxiv.org/abs/1906.12171)",computer-vision c92e916a-5310-4520-b9a8-ce4a512a210b,aesthetics-quality-assessment,Aesthetics Quality Assessment,Automatic assessment of aesthetic-related subjective ratings.,computer-vision 6d326e9a-913b-476f-b476-8688f6f5f00a,anomaly-detection-in-surveillance-videos,Anomaly Detection In Surveillance Videos,,computer-vision 6b5ddf9e-5b3c-42d8-b266-49e57523697d,metric-learning,Metric Learning,"The goal of **Metric Learning** is to learn a representation function that maps objects into an embedded space. The distance in the embedded space should preserve the objects’ similarity — similar objects get close and dissimilar objects get far away. Various loss functions have been developed for Metric Learning. For example, the **contrastive loss** guides the objects from the same class to be mapped to the same point and those from different classes to be mapped to different points whose distances are larger than a margin. 
**Triplet loss** is also popular; it requires the distance between the anchor sample and the positive sample to be smaller than the distance between the anchor sample and the negative sample. Source: [Road Network Metric Learning for Estimated Time of Arrival ](https://arxiv.org/abs/2006.13477)",computer-vision a27c88bf-5f84-44ed-ab8c-166f9ad77470,reverse-style-transfer,Reverse Style Transfer,,computer-vision 680e3f1c-b337-4e25-a1e6-46ead492aadb,handwriting-recognition,Handwriting Recognition,Image source: [Handwriting Recognition of Historical Documents with few labeled data](https://arxiv.org/pdf/1811.07768v1.pdf),computer-vision 1e45e47c-5ef1-48bf-9978-92ba3e93de7a,dial-meter-reading,Dial Meter Reading,,computer-vision ff3f3ed5-9599-4b53-98ca-89a1b141bfd7,cross-domain-few-shot,Cross-Domain Few-Shot,,computer-vision 41d011cc-0899-448d-b0ad-d2fe997c8ae7,hurricane-forecasting,Hurricane Forecasting,"Tropical Cyclone Forecasting using Computer Vision, Deep Learning, and Time-Series methods",computer-vision 12bc82d6-1cea-4831-9176-450e113f2a88,supervised-dimensionality-reduction,Supervised dimensionality reduction,,computer-vision 3c80b37d-9882-4d8a-b276-7699a011b266,saliency-detection,Saliency Detection,"**Saliency Detection** is a preprocessing step in computer vision which aims at finding salient objects in an image. Source: [An Unsupervised Game-Theoretic Approach to Saliency Detection ](https://arxiv.org/abs/1708.02476)",computer-vision 7b4642e9-b9e5-4ebd-91e9-1c052f646a64,semantic-image-matting,Semantic Image Matting,,computer-vision 73e3e913-48f1-40f1-a51b-0cc9b8038e2d,license-plate-recognition,License Plate Recognition,,computer-vision 50a086e1-cd47-43c1-accc-910bcfa86c53,disjoint-15-1,Disjoint 15-1,,computer-vision 3640034f-3d62-483d-91ff-4ed897d201dd,dense-video-captioning,Dense Video Captioning,"Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. The task of dense video captioning involves both detecting and describing events in a video.",computer-vision 312e03b9-6d87-406f-aacc-76b1a9491fd0,cross-domain-few-shot-learning,cross-domain few-shot learning,At its core this is transfer learning: the model is trained in a source domain and then transferred to a target domain where (1) the target-domain categories never appear in the source domain (2) the target-domain data distribution differs from that of the source domain and (3) each target-domain class has very few labels,computer-vision 1a68b54f-e0eb-4024-be24-562b4d9ea2ab,age-and-gender-classification,Age And Gender Classification,"Age and gender classification is a dual-task of identifying the age and gender of a person from an image or video. ( Image credit: [Multi-Expert Gender Classification on Age Group by Integrating Deep Neural Networks](https://arxiv.org/pdf/1809.01990v2.pdf) )",computer-vision 2898c35c-b2ee-495c-bd92-6b7c6c85c63e,removing-text-from-natural-images,Image Text Removal,,computer-vision 85eedb32-017a-47bc-a451-423a0d4c7204,drone-navigation,Drone navigation,"(Satellite -> Drone) Given one satellite-view image, the drone intends to find the most relevant place (drone-view images) that it has passed by.
According to its flight history, the drone could be navigated back to the target place.",computer-vision d092e653-447e-4a28-920d-6ae72f61cd3b,handwritten-document-recognition,Handwritten Document Recognition,,computer-vision eac0d5a3-3369-445a-a53e-e5ffa9a93dfe,disguised-face-verification,Disguised Face Verification,,computer-vision 5d3ff9f4-ec25-49c3-967f-b6d368d12ae4,layout-to-image-generation,Layout-to-Image Generation,"Layout-to-image generation is the task of generating a scene based on a given layout. The layout describes the location of the objects to be included in the output image. In this section, you can find state-of-the-art leaderboards for Layout-to-image generation.",computer-vision 5c868094-7a4b-4361-85a3-faaa15f55730,transform-a-video-into-a-comics,Transform A Video Into A Comics,,computer-vision 47ffac89-be1e-4ac9-9f45-5c12c41d0b97,image-stitching,Image Stitching,"**Image Stitching** is a process of composing multiple images with narrow but overlapping fields of view to create a larger image with a wider field of view. Source: [Single-Perspective Warps in Natural Image Stitching ](https://arxiv.org/abs/1802.04645) ( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision f6942401-b4d6-4b14-94ed-9f7d89235d36,markerless-motion-capture,Markerless Motion Capture,,computer-vision 95c02a6c-9678-4c84-98ba-2e8b8274abaa,visual-crowd-analysis,Visual Crowd Analysis,,computer-vision 0584f78c-9e01-40ae-b3d7-eed262d507ec,highlight-detection,Highlight Detection,,computer-vision 01293107-0aab-4fa7-92f7-3a6b301acaf4,saliency-ranking,Saliency Ranking,,computer-vision e0eca96f-5ee9-48fd-9ca7-39f4ec192793,point-set-upsampling,Point Set Upsampling,,computer-vision a9a325c9-4678-44ba-96c0-287addecd3fb,scene-generation,Scene Generation,,computer-vision e14eee3d-3ed6-4cc4-8db1-b2d595cda7cb,optical-flow-estimation,Optical Flow Estimation,"**Optical Flow Estimation** is a computer vision task that involves computing the motion of objects in an image or a video sequence. The goal of optical flow estimation is to determine the movement of pixels or features in the image, which can be used for various applications such as object tracking, motion analysis, and video compression. Approaches for optical flow estimation include correlation-based, block-matching, feature tracking, energy-based, and more recently gradient-based. Further readings: - [Optical Flow Estimation](https://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf) - [Performance of Optical Flow Techniques](https://www.cs.toronto.edu/~fleet/research/Papers/ijcv-94.pdf) Definition source: [Devon: Deformable Volume Network for Learning Optical Flow ](https://arxiv.org/abs/1802.07351) Image credit: [Optical Flow Estimation](https://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf)",computer-vision fdb1da93-4426-4ce0-bd5e-903dad058e08,image-clustering,Image Clustering,"Models that partition the dataset into semantically meaningful clusters without having access to the ground truth labels. Image credit: ImageNet clustering results of [SCAN: Learning to Classify Images without Labels (ECCV 2020)](https://arxiv.org/abs/2005.12320) ",computer-vision 235dac22-1f0c-491b-a66e-2c6c93191c62,image-classification,Image Classification,"**Image Classification** is a fundamental task that attempts to comprehend an entire image as a whole. The goal is to classify the image by assigning it to a specific label. Typically, Image Classification refers to images in which only one object appears and is analyzed.
In contrast, object detection involves both classification and localization tasks, and is used to analyze more realistic cases in which multiple objects may exist in an image. Source: [Metamorphic Testing for Object Detection Systems ](https://arxiv.org/abs/1912.12162)",computer-vision e7c378b1-002f-4c49-a28a-a6a888442c07,video-to-video-synthesis,Video-to-Video Synthesis,,computer-vision ffafcd7e-2770-47a9-8855-f6bcc68d447c,referring-image-matting,Referring Image Matting,"Extracting the meticulous alpha matte of the specific object from the image that can best match the given natural language description, e.g., a keyword or an expression.",computer-vision e3598820-eadc-4448-ac36-af1e3d4b0361,moment-retrieval,Moment Retrieval,"Moment retrieval can be defined as the task of ""localizing moments in a video given a user query"". Description from: [QVHIGHLIGHTS: Detecting Moments and Highlights in Videos via Natural Language Queries](https://arxiv.org/pdf/2107.09609v1.pdf) Image credit: [QVHIGHLIGHTS: Detecting Moments and Highlights in Videos via Natural Language Queries](https://arxiv.org/pdf/2107.09609v1.pdf)",computer-vision b6562155-6600-48ae-8107-853385151224,3d-depth-estimation,3D Depth Estimation,Image: [monodepth2](https://github.com/nianticlabs/monodepth2),computer-vision efcd6359-9071-4c02-9c7b-fb23c16645b1,reference-based-video-super-resolution,Reference-based Video Super-Resolution,"Reference-based video super-resolution (RefVSR) is an extension of reference-based super-resolution (RefSR) to video super-resolution (VSR). RefVSR inherits the objectives of both RefSR and VSR tasks and utilizes a Ref video for reconstructing an HR video from an LR video.",computer-vision 8b062738-1c44-410a-89e7-a3cebdeed796,multi-class-one-shot-image-synthesis,Multi class one-shot image synthesis,The goal of Multi-class one-shot image synthesis is to learn a generative model that can generate samples with visual attributes from as few as one or more images of at least 2 related classes.,computer-vision 3a9a82ad-57ca-4033-99d5-ab381f9abad7,image-enhancement,Image Enhancement,"**Image Enhancement** is basically improving the interpretability or perception of information in images for human viewers and providing ‘better’ input for other automated image processing techniques. The principal objective of Image Enhancement is to modify attributes of an image to make it more suitable for a given task and a specific observer. Source: [A Comprehensive Review of Image Enhancement Techniques ](https://arxiv.org/abs/1003.4053)",computer-vision eb0cb877-bcc6-49c8-ab43-fe4280c08811,video-text-retrieval,Video-Text Retrieval,Video-Text retrieval requires understanding of both video and language together. Therefore it is different from the video retrieval task.,computer-vision b00cc83c-a383-4f55-ae12-3975bc69dd35,video-description,Video Description,"The goal of automatic **Video Description** is to tell a story about events happening in a video. While early Video Description methods produced captions for short clips that were manually segmented to contain a single event of interest, more recently dense video captioning has been proposed to both segment distinct events in time and describe them in a series of coherent sentences. This problem is a generalization of dense image region captioning and has many practical applications, such as generating textual summaries for the visually impaired, or detecting and describing important events in surveillance footage.
Source: [Joint Event Detection and Description in Continuous Video Streams ](https://arxiv.org/abs/1802.10250)",computer-vision 794365a3-4ef4-45f7-8566-099100e6c8ff,lake-ice-detection,Lake Ice Monitoring,,computer-vision d1f7e2a2-d406-4885-be52-6023ab525297,partial-video-copy-detection,Partial Video Copy Detection,The goal of PVCD is to identify and locate one or more segments of a long test video that have been copied (and possibly transformed) from videos in the reference dataset.,computer-vision 0d6a7f57-92b3-42b2-ac58-d84727981a2e,overlapped-15-5,Overlapped 15-5,,computer-vision 3a4c8a5b-bb9c-42a1-91c3-2ac271ea659c,lidar-absolute-pose-regression,lidar absolute pose regression,,computer-vision b256b553-fe42-43a6-ae47-64e5fd58ed6c,class-agnostic-object-detection,Class-agnostic Object Detection,Class-agnostic object detection aims to localize objects in images without specifying their categories.,computer-vision 80d8f236-3fd8-4040-9af5-72f334278b64,unbalanced-segmentation,Unbalanced Segmentation,,computer-vision 518fdaa7-4672-480e-ad4f-59cf3d7171dd,scene-classification,Scene Classification,"**Scene Classification** is a task in which scenes from photographs are categorically classified. Unlike object classification, which focuses on classifying prominent objects in the foreground, Scene Classification uses the layout of objects within the scene, in addition to the ambient context, for classification. Source: [Scene classification with Convolutional Neural Networks ](http://cs231n.stanford.edu/reports/2017/pdfs/102.pdf)",computer-vision 2a46d2cf-c12f-42ff-b787-c93197d7c531,one-shot-segmentation,One-Shot Segmentation,"( Image credit: [One-Shot Learning for Semantic Segmentation](https://arxiv.org/pdf/1709.03410v1.pdf) )",computer-vision cbc8b264-5fbe-4bb4-805e-e76ad9d5235a,human-instance-segmentation,Human Instance Segmentation,"Instance segmentation is the task of detecting and delineating each distinct object of interest appearing in an image. Image Credit: [Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers](https://arxiv.org/abs/2103.12340)",computer-vision a4732122-7612-4f6c-bf42-6248fcf87cfc,wavelet-structure-similarity-loss,wavelet structure similarity loss,,computer-vision ec257ac0-622b-4565-9859-fa32ef0dfb17,generalized-zero-shot-learning-unseen,Generalized Zero-Shot Learning - Unseen,"The average of the normalized top-1 prediction scores of unseen classes in the generalized zero-shot learning setting, where the label of a test sample is predicted among all (seen + unseen) classes.",computer-vision b9074f29-65d5-44f2-85fa-9720794e2e0b,head-pose-estimation,Head Pose Estimation,"Estimating the head pose of a person is a crucial problem that has a large number of applications such as aiding in gaze estimation, modeling attention, fitting 3D models to video and performing face alignment. ( Image credit: [FSA-Net: Learning Fine-Grained Structure Aggregation for Head Pose Estimation from a Single Image](http://openaccess.thecvf.com/content_CVPR_2019/papers/Yang_FSA-Net_Learning_Fine-Grained_Structure_Aggregation_for_Head_Pose_Estimation_From_CVPR_2019_paper.pdf) )",computer-vision 166d00c1-3212-41ef-ac18-e61f30534035,scene-aware-dialogue,Scene-Aware Dialogue,,computer-vision db41e40f-0c62-4b37-ab6b-b54ad3cedd0d,3d-object-super-resolution,3D Object Super-Resolution,"3D object super-resolution is the task of up-sampling 3D objects.
( Image credit: [Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation](https://github.com/EdwardSmith1884/Multi-View-Silhouette-and-Depth-Decomposition-for-High-Resolution-3D-Object-Representation) )",computer-vision 415397d5-740e-4d7c-8e2e-7e5549ce968f,few-shot-point-cloud-classification,Few-Shot Point Cloud Classification,Few-Shot Learning on point cloud classification task,computer-vision 24c1c91e-5e09-45a0-a710-700de20d7c4b,referring-expression-segmentation,Referring Expression Segmentation,"The task aims at labeling the pixels of an image or video that represent an object instance referred by a linguistic expression. In particular, the referring expression (RE) must allow the identification of an individual object in a discourse or scene (the referent). REs unambiguously identify the target instance.",computer-vision 1f2886ab-6ac2-4308-b43d-c98ae7f43ac0,landslide-segmentation,Landslide segmentation,,computer-vision 19a0fcfe-da41-427b-8970-53973135eea6,pose-transfer,Pose Transfer,,computer-vision 3df60d58-8f47-4222-8ce3-430bf4c61ac2,color-image-compression-artifact-reduction,Color Image Compression Artifact Reduction,,computer-vision 31b0e2f9-d673-465a-8c69-50a58488bc66,dynamic-region-segmentation,Dynamic Region Segmentation,,computer-vision e80e492b-4bc6-44db-802d-c2b3fe9e80b3,multilingual-text-to-image-generation,Multilingual Text-to-Image Generation,,computer-vision 6b629472-1dd2-4009-9781-de6cb2fc974f,multi-object-discovery,Multi-object discovery,,computer-vision d51ec404-208a-4ff2-a848-1c0fc327bba1,symmetry-detection,Symmetry Detection,,computer-vision c8c879f9-eada-413f-b9b7-72cf405e5f6d,user-constrained-thumbnail-generation,User Constrained Thumbnail Generation,"Thumbnail generation is the task of generating image thumbnails from an input image. ( Image credit: [User Constrained Thumbnail Generation using Adaptive Convolutions](https://arxiv.org/pdf/1810.13054v3.pdf) )",computer-vision f11b4ca3-e1ef-4a09-9b68-887f96fb7e6c,metamerism,Metamerism,,computer-vision 5cdd4f97-6266-470f-bc11-9ded1c782966,action-assessment,Action Assessment,,computer-vision bf571ef3-5c54-4e3b-8ea2-b8695a483958,unsupervised-image-to-image-translation,Unsupervised Image-To-Image Translation,"Unsupervised image-to-image translation is the task of doing image-to-image translation without ground truth image-to-image pairings. ( Image credit: [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/pdf/1703.10593v6.pdf) )",computer-vision ca3bc6e8-c805-4c66-859d-42b4a6f2faa2,video-object-detection,Video Object Detection,"Video object detection is the task of detecting objects from a video as opposed to images. ( Image credit: [Learning Motion Priors for Efficient Video Object Detection](https://arxiv.org/pdf/1911.05253v1.pdf) )",computer-vision 693610f9-8a38-4f22-a2ba-2bab83bce50f,gaze-estimation,Gaze Estimation,"**Gaze Estimation** is a task to predict where a person is looking at given the person’s full face. The task contains two directions: 3-D gaze vector and 2-D gaze position estimation. 3-D gaze vector estimation is to predict the gaze vector, which is usually used in the automotive safety. 2-D gaze position estimation is to predict the horizontal and vertical coordinates on a 2-D screen, which allows utilizing gaze point to control a cursor for human-machine interaction. 
Source: [A Generalized and Robust Method Towards Practical Gaze Estimation on Smart Phone ](https://arxiv.org/abs/1910.07331)",computer-vision 88d8eda0-2192-4312-a338-93d1658fe4ef,action-classification,Action Classification,Image source: [The Kinetics Human Action Video Dataset](https://arxiv.org/pdf/1705.06950.pdf),computer-vision 8bcc4d8e-5cdb-4fa0-85ba-9715e6d2f56e,flare-removal,Flare Removal,"When a camera is pointed at a strong light source, the resulting photograph may contain lens flare artifacts. Flares appear in a wide variety of patterns (halos, streaks, color bleeding, haze, etc.) and this diversity in appearance makes flare removal challenging.",computer-vision 7127a14b-f31d-4199-8799-36cf364f0143,motion-magnification,Motion Magnification,"Motion magnification is a technique that acts like a microscope for visual motion. It can amplify subtle motions in a video sequence, allowing for visualization of deformations that would otherwise be invisible. To achieve motion magnification, we need to accurately measure visual motions, and group the pixels to be modified. There are different approaches to motion magnification, such as Lagrangian and Eulerian methods. Lagrangian methods track the trajectories of moving objects and exaggerate them, while Eulerian methods manipulate the motions at fixed positions. Eulerian methods can be further divided into linear and phase-based methods. Linear methods apply a temporal bandpass filter to boost the linear term of a Taylor series expansion of the displacement function, while phase-based methods use complex wavelet transforms to manipulate the phase of the signal. Motion magnification has various applications, such as measuring the human pulse, visualizing the heat plume of candles, revealing the oscillations of a wine glass, and detecting structural defects.",computer-vision bf565d52-18f1-4e13-8fe3-2c94aee3b126,color-constancy,Color Constancy,"**Color Constancy** is the ability of the human vision system to perceive the colors of the objects in the scene largely invariant to the color of the light source. The task of computational Color Constancy is to estimate the scene illumination and then perform the chromatic adaptation in order to remove the influence of the illumination color on the colors of the objects in the scene. Source: [CroP: Color Constancy Benchmark Dataset Generator ](https://arxiv.org/abs/1903.12581)",computer-vision 1ac214af-8ce9-43ec-92bc-90a3a3d86eb3,visual-relationship-detection,Visual Relationship Detection,"Visual relationship detection (VRD) is one newly developed computer vision task aiming to recognize relations or interactions between objects in an image. It is a further learning task after object recognition and is essential for fully understanding images, even the visual world.",computer-vision 1a1e1b8d-6ae5-49ee-b355-a619f58a4169,unsupervised-facial-landmark-detection,Unsupervised Facial Landmark Detection,"Facial landmark detection in the unsupervised setting popularized by [1]. The evaluation occurs in two stages: (1) Embeddings are first learned in an unsupervised manner (i.e. without labels); (2) A simple regressor is trained to regress landmarks from the unsupervised embedding. [1] Thewlis, James, Hakan Bilen, and Andrea Vedaldi. ""Unsupervised learning of object landmarks by factorized spatial embeddings."" Proceedings of the IEEE International Conference on Computer Vision. 2017. 
( Image credit: [Unsupervised learning of object landmarks by factorized spatial embeddings](https://www.robots.ox.ac.uk/~vedaldi/assets/pubs/thewlis17unsupervised.pdf) )",computer-vision 15225091-c6d9-495c-ba5b-b538f13a6fef,generative-visual-question-answering,Generative Visual Question Answering,Generating answers in free form to questions posed about images.,computer-vision 503d1a04-1f41-430f-89e2-2edecd83a8b1,unsupervised-face-recognition,Unsupervised face recognition,,computer-vision c066f268-a5fc-44f1-8d42-058e882e8af8,text-line-extraction,Text-Line Extraction,,computer-vision 8b3ea835-3e05-4e9a-88df-1ed4097ae5b9,unsupervised-anomaly-detection-with-specified-4,Unsupervised Anomaly Detection with Specified Settings -- 20% anomaly,,computer-vision af8c4e03-4e28-463b-81df-55a377f7f604,supervised-anomaly-detection,Supervised Anomaly Detection,"In the training set, the number of abnormal samples is limited and significantly smaller than the number of normal samples, producing data distributions that lead to a naturally imbalanced learning problem.",computer-vision 6f890032-8afd-4727-aef0-7438e8bf1d3b,personalized-segmentation,Personalized Segmentation,"Given a one-shot image with a reference mask, the models are required to segment the indicated target object in any other images.",computer-vision fccfea97-232f-457e-abf0-654570f8d3b5,3d-human-pose-estimation,3D Human Pose Estimation,"**3D Human Pose Estimation** is a computer vision task that involves estimating the 3D positions and orientations of body joints and bones from 2D images or videos. The goal is to reconstruct the 3D pose of a person in real-time, which can be used in a variety of applications, such as virtual reality, human-computer interaction, and motion analysis.",computer-vision f6386616-661c-4524-8806-f2fd84802cba,video-background-subtraction,Video Background Subtraction,,computer-vision 15cc180b-1350-4a07-a7f1-9a49748d9b67,direct-transfer-person-re-identification,Direct Transfer Person Re-identification,,computer-vision aa706edd-b3eb-45f3-8344-8d43fe8eb329,chat-based-image-retrieval,Chat-based Image Retrieval,"ChatIR: a Chat-based Image Retrieval system that engages in a conversation with the user to elicit information, in addition to an initial query, in order to clarify the user’s search intent.",computer-vision b01e9cb4-3884-4d18-bcb0-284a9596cf6a,3d-human-pose-and-shape-estimation,3D human pose and shape estimation,Estimate 3D human pose and shape (e.g. SMPL) from images,computer-vision d0a3ebcf-c552-4ce6-bb96-6cf9008a81d8,conditional-text-to-image-synthesis,Conditional Text-to-Image Synthesis,"Introducing extra conditions based on the text-to-image generation process, similar to the paradigm of ControlNet.",computer-vision 81ec785a-6cf3-4324-9c5a-8f24eb5c1200,human-action-generation,Human action generation,"Yan et al. (2019) CSGN: ""When the dancer is stepping, jumping and spinning on the stage, attentions of all audiences are attracted by the stream of the fluent and graceful movements. Building a model that is capable of dancing is as fascinating a task as appreciating the performance itself. In this paper, we aim to generate long-duration human actions represented as skeleton sequences, e.g.
those that cover the entirety of a dance, with hundreds of moves and countless possible combinations."" ( Image credit: [Convolutional Sequence Generation for Skeleton-Based Action Synthesis](http://www.dahualin.org/publications/dhl19_csgn.pdf) )",computer-vision e336a2b5-b263-457e-8d48-1a7fd4ecdd05,unity,Unity,,computer-vision 8568940e-ebc1-4a06-88d3-c7bab486a666,lip-sync,Unconstrained Lip-synchronization,"Given a video of an arbitrary person, and an arbitrary driving speech, the task is to generate a lip-synced video that matches the given speech. This task requires the approach to not be constrained by identity, voice, or language.",computer-vision 3418a5fa-da0e-4cb3-ae00-262ff2a22ac3,face-alignment,Face Alignment,"Face alignment is the task of identifying the geometric structure of faces in digital images, and attempting to obtain a canonical alignment of the face based on translation, scale, and rotation. ( Image credit: [3DDFA_V2](https://github.com/cleardusk/3DDFA_V2) )",computer-vision 4eebf9dd-0859-4c92-808b-e76780cbf31f,image-quality-assessment,Image Quality Assessment,,computer-vision 31c55176-061b-405b-b54d-0e2c519f1086,semi-supervised-object-detection,Semi-Supervised Object Detection,Semi-supervised object detection uses both labeled data and unlabeled data for training. It not only reduces the annotation burden for training high-performance object detectors but also further improves the object detector by using a large number of unlabeled data.,computer-vision 579fc4db-7376-45ed-8bed-f5c06b98d534,medical-image-detection,medical image detection,,computer-vision 757a4bf8-ec58-4146-a26d-8e1cf47fa709,image-retargeting,Image Retargeting,,computer-vision c50fd80e-1393-45eb-b3b4-f6d47720ee13,visual-sentiment-prediction,Visual Sentiment Prediction,,computer-vision fc287133-6a0b-4004-9186-15f50410f2b4,object-recognition,Object Recognition,"Object recognition is a computer vision technique for detecting + classifying objects in images or videos. Since this is a combined task of object detection plus image classification, the state-of-the-art tables are recorded for each component task [here](https://www.paperswithcode.com/task/object-detection) and [here](https://www.paperswithcode.com/task/image-classification2). ( Image credit: [Tensorflow Object Detection API ](https://github.com/tensorflow/models/tree/master/research/object_detection) )",computer-vision 54580acb-dc43-433f-8fd8-7b06e78f95be,rotated-mnist,Rotated MNIST,,computer-vision 51e7b2c3-7d41-4fea-8b6d-c569def29e41,partially-relevant-video-retrieval,Partially Relevant Video Retrieval,"In the Partially Relevant Video Retrieval (PRVR) task, an untrimmed video is considered to be partially relevant w.r.t. a given textual query if it contains a moment relevant to the query. PRVR aims to retrieve such partially relevant videos from a large collection of untrimmed videos.",computer-vision e838ace4-dfcc-40c2-b530-dd05883dafb6,single-image-blind-deblurring,Single-Image Blind Deblurring,,computer-vision 05f98391-210d-472d-a2bf-553d151ce538,face-reenactment,Face Reenactment,"**Face Reenactment** is an emerging conditional face synthesis task that aims at fulfilling two goals simultaneously: 1) transfer a source face shape to a target face; while 2) preserve the appearance and the identity of the target face. 
Source: [One-shot Face Reenactment ](https://arxiv.org/abs/1908.03251)",computer-vision 421fefdf-c89d-413b-8749-09f0add697ef,point-cloud-pre-training,Point Cloud Pre-training,,computer-vision 8898695f-b78a-448a-b1c0-811c8b94b829,thermal-infrared-object-tracking,Thermal Infrared Object Tracking,,computer-vision 5af36ecc-a2e1-4750-b120-ab8c72e2b581,neural-stylization,Neural Stylization,,computer-vision 9436f70a-e50a-4168-bf24-f0819a2ffed3,synthetic-image-attribution,Synthetic Image Attribution,"Determine the source or origin of a generated image, such as identifying the model or tool used to create it. This information can be useful for detecting copyright infringement or for investigating digital crimes.",computer-vision 9f47e689-68b1-4466-92e0-2bbac4ba8795,active-observation-completion,Active Observation Completion,,computer-vision 55758a3b-0461-47ff-b30a-d6a4b139e91a,electron-microscopy-image-segmentation,Electron Microscopy Image Segmentation,,computer-vision e1c80782-5a00-4660-9139-228e8b02b7d3,scene-graph-generation,Scene Graph Generation,"A scene graph is a structured representation of an image, where nodes in a scene graph correspond to object bounding boxes with their object categories, and edges correspond to the pairwise relationships between objects. The task of **Scene Graph Generation** is to generate a visually-grounded scene graph that most accurately correlates with an image. Source: [Scene Graph Generation by Iterative Message Passing ](https://arxiv.org/abs/1701.02426)",computer-vision 701e39e5-dc78-4134-abe2-59aeb81225f4,depth-aleatoric-uncertainty-estimation,Depth Aleatoric Uncertainty Estimation,,computer-vision eb53566c-8c80-4147-b7a0-9fcd979a1cc2,multiple-affordance-detection,Multiple Affordance Detection,"Affordance detection is the task of detecting objects that are usable (or graspable) by a human. ( Image credit: [What can I do here? Leveraging Deep 3D saliency and geometry for fast and scalable multiple affordance detection](https://github.com/eduard626/deep-interaction-tensor) )",computer-vision 906029bf-475c-403c-812d-ac9770fcbe52,handwriting-generation,Handwriting generation,The inverse of handwriting recognition: from text generate an image of handwriting (offline) or a trajectory of handwriting (online).,computer-vision f3c129cc-436c-4d39-b620-e50d9b42ab33,emotion-recognition,Emotion Recognition,"**Emotion Recognition** is an important area of research to enable effective human-computer interaction. Human emotions can be detected using speech signal, facial expressions, body language, and electroencephalography (EEG). Source: [Using Deep Autoencoders for Facial Expression Recognition ](https://arxiv.org/abs/1801.08329)",computer-vision a1f6f742-6399-4df3-8529-b0452a4a8c80,image-generation-from-scene-graphs,Image Generation from Scene Graphs,,computer-vision 79439ef5-952a-439a-9277-3e7d5a2ad792,image-denoising,Image Denoising,"**Image Denoising** is a computer vision task that involves removing noise from an image. Noise can be introduced into an image during acquisition or processing, and can reduce image quality and make it difficult to interpret. Image denoising techniques aim to restore an image to its original quality by reducing or removing the noise, while preserving the important features of the image.
( Image credit: [Wide Inference Network for Image Denoising via Learning Pixel-distribution Prior](https://arxiv.org/pdf/1707.05414v5.pdf) )",computer-vision 82cc0c24-2df3-42d6-be23-5ef452b01488,stereoscopic-image-quality-assessment,Stereoscopic image quality assessment,,computer-vision bebbda6d-4a83-493c-bb85-334d25a5bc9d,drone-based-object-tracking,drone-based object tracking,drone-based object tracking,computer-vision 6936d65f-2f81-487a-ab51-ce46a0fa37ed,video-question-answering,Video Question Answering,"Video Question Answering (VideoQA) aims to answer natural language questions according to the given videos. Given a video and a question in natural language, the model produces accurate answers according to the content of the video.",computer-vision d406fc7b-defc-4519-8c03-653d2936d796,image-based-automatic-meter-reading,Image-based Automatic Meter Reading,,computer-vision d1e479a3-118b-42a6-9cd8-c0cd9fbd75e8,facial-inpainting,Facial Inpainting,"Facial inpainting (or face completion) is the task of generating plausible facial structures for missing pixels in a face image. ( Image credit: [SymmFCNet](https://github.com/csxmli2016/SymmFCNet) )",computer-vision fcf1d936-c961-466a-ba51-7aa7078639ff,vehicle-speed-estimation,Vehicle Speed Estimation,Vehicle speed estimation is the task of detecting and tracking vehicles whose real-world speeds are then estimated. The task is usually evaluated with recall and precision of the detected vehicle tracks as well as the mean or median errors of the estimated vehicle speeds.,computer-vision e3cffc81-ae1a-4c1a-852c-a31447e63bf8,cross-view-person-re-identification,Cross-Modal Person Re-Identification,,computer-vision 2ec7f17e-6e8d-4160-b4cd-e5ca4fa021f1,laminar-turbulent-flow-localisation,Laminar-Turbulent Flow Localisation,It is a segmentation task on thermographic measurement images in order to separate laminar and turbulent flow regions on flight body parts.,computer-vision d18a71d3-f0cb-40b8-9bb7-9d18f2a0345b,geometrical-view,Geometrical View,,computer-vision fc878640-d7da-42c4-a91d-3574333a8ed9,covid-19-image-segmentation,COVID-19 Image Segmentation,,computer-vision 0b6d3163-1834-4a54-a7ef-c014cc0cd908,motion-retargeting,motion retargeting,,computer-vision 1c498b6b-d034-4765-94ea-5752f41ff6d1,neural-radiance-caching,Neural Radiance Caching,"Involves the task of predicting photorealistic pixel colors from feature buffers. Image source: [Instant Neural Graphics Primitives with a Multiresolution Hash Encoding](https://arxiv.org/pdf/2201.05989v1.pdf)",computer-vision c6ac3350-5f6d-4343-8de1-17b0dff002b6,facial-expression-recognition,Facial Expression Recognition (FER),"**Facial Expression Recognition (FER)** is a computer vision task aimed at identifying and categorizing emotional expressions depicted on a human face. The goal is to automate the process of determining emotions in real-time, by analyzing the various features of a face such as eyebrows, eyes, mouth, and other features, and mapping them to a set of emotions such as anger, fear, surprise, sadness and happiness. ( Image credit: [DeXpression](https://arxiv.org/pdf/1509.05371v2.pdf) )",computer-vision 6aaf6a93-4c50-4a29-a829-3782733b345a,camera-shot-segmentation,Camera shot segmentation,"Camera shot temporal segmentation consists in classifying each video frame according to the type of camera used to record said frame. 
This task is introduced with the SoccerNet-v2 dataset, where 13 camera classes are considered (main camera, behind the goal, corner camera, etc.).",computer-vision b049dfec-ab6c-47ac-b35b-8f9c96767e34,local-color-enhancement,Local Color Enhancement,"Enhancement techniques for improving the contrast between lesion and background skin on dermatological macro-images are limited in the literature. To fill this gap, a modified sigmoid transform is applied in the HSV color space. The crossover point in the modified sigmoid transform that divides the macro-image into lesion and background is predicted using a modified EfficientNet regressor to exclude manual intervention and subjectivity.",computer-vision 1bd2c36d-0c85-419a-aeca-d7aa9172e613,video-reconstruction,Video Reconstruction,"Source: [Deep-SloMo](https://github.com/avinashpaliwal/Deep-SloMo)",computer-vision 47294fca-e6cd-4364-940f-688b2734af75,vgsi,VGSI,"Given a textual goal and multiple images representing candidate events, a model must choose one image which constitutes a reasonable step towards the given goal. A model should correctly recognize not only the specific action illustrated in an image (e.g., “turning on the oven”), but also the intent of the action (“baking fish”).",computer-vision 2be217ea-83fa-41ce-a8ed-a457699cb48b,monocular-cross-view-road-scene-parsing,Monocular Cross-View Road Scene Parsing (Vehicle),,computer-vision 495975f8-389a-49b2-9069-ff0318cdb585,video-instance-segmentation,Video Instance Segmentation,"The goal of video instance segmentation is simultaneous detection, segmentation and tracking of instances in videos. In other words, it is the first time that the image instance segmentation problem is extended to the video domain. To facilitate research on this new task, a large-scale benchmark called YouTube-VIS, which consists of 2,883 high-resolution YouTube videos, a 40-category label set and 131k high-quality instance masks is built.",computer-vision 859bf4db-247b-4808-9181-246fe8fe0745,camera-relocalization,Camera Relocalization,"""Camera relocalization, or image-based localization is a fundamental problem in robotics and computer vision. It refers to the process of determining camera pose from the visual scene representation and it is essential for many applications such as navigation of autonomous vehicles, structure from motion (SfM), augmented reality (AR) and simultaneous localization and mapping (SLAM)."" ([Source](https://paperswithcode.com/paper/camera-relocalization-by-computing-pairwise))",computer-vision c52d70b9-433b-4e8e-85c6-c5bc8c04fcd2,denoising,Denoising,"**Denoising** is a task in image processing and computer vision that aims to remove or reduce noise from an image. Noise can be introduced into an image due to various reasons, such as camera sensor limitations, lighting conditions, and compression artifacts. The goal of denoising is to recover the original image, which is considered to be noise-free, from a noisy observation. 
( Image credit: [Beyond a Gaussian Denoiser](https://arxiv.org/pdf/1608.03981v1.pdf) )",computer-vision 8e818f1e-2844-4f21-9d72-3f4c898b49bf,cross-domain-iris-presentation-attack,Cross-Domain Iris Presentation Attack Detection,,computer-vision c692bea2-e921-4e4e-ad17-55302cab8ca9,few-shot-image-segmentation,Few-Shot Semantic Segmentation,Few-shot semantic segmentation (FSS) learns to segment target objects in a query image given only a few pixel-wise annotated support images.,computer-vision 08c4cf37-6ad9-4513-8565-adabdd43613e,lake-detection,Lake Detection,,computer-vision c0c28ec2-db8c-46a8-b4c2-2ccae9bb5c88,video-interlacing,Video Interlacing,,computer-vision d7cc855a-0f7b-4d4e-9086-c696057d7d58,6d-pose-estimation-using-rgbd,6D Pose Estimation using RGBD,Image: [Zeng et al](https://arxiv.org/pdf/1609.09475v3.pdf),computer-vision 7c4179e9-1417-4e43-bebe-1f727125b8ea,face-model,Face Model,,computer-vision 8887db77-d0d5-468d-8206-a74c470c83ef,image-similarity-search,Image Similarity Search,Image credit: [The 2021 Image Similarity Dataset and Challenge](https://paperswithcode.com/paper/the-2021-image-similarity-dataset-and),computer-vision 6e8c8429-a01d-400a-b2c8-81df31bbe511,self-supervised-person-re-identification,Self-Supervised Person Re-Identification,"Currently, self-supervised representation learning is mainly tested on image classification tasks, which is insufficient to verify its effectiveness. It should also be tested in the visual matching task, and pedestrian re-identification is just such an appropriate task.",computer-vision 06be337f-7fd9-402b-8ee4-267c8c232f39,online-clustering,Online Clustering,"Models that learn to label each image (i.e. cluster the dataset into its ground truth classes) without seeing the ground truth labels. Under the online scenario, data is in the form of streams, i.e., the whole dataset could not be accessed at the same time and the model should be able to make cluster assignments for new data without accessing the former data. Image Credit: [Online Clustering by Penalized Weighted GMM](https://arxiv.org/pdf/1902.02544v1.pdf)",computer-vision 782812ee-db82-4308-b1f9-df2a5e333ca7,human-object-interaction-concept-discovery,Human-Object Interaction Concept Discovery,"Discovering the reasonable HOI concepts/categories from known categories and their instances. 
Actually, it is also a matrix (verb-object matrix) completion problem.",computer-vision 5c50dd36-66a5-4553-96ea-4add826eecd1,video-boundary-captioning,Video Boundary Captioning,"Provided with the timestamp of a boundary inside a video, the machine is required to generate sentences describing the status change at the boundary.",computer-vision e9b5d209-a9f5-45b1-9ff8-effe11208038,spectral-super-resolution,Spectral Super-Resolution,,computer-vision 6408f1e9-b631-41b6-bf7e-f9839ed6a4cc,superpixel-image-classification,Superpixel Image Classification,**Superpixel Image Classification** groups pixels that share common characteristics (such as pixel intensity) into superpixels and assigns a class label to each group rather than to individual pixels.,computer-vision 33d74d8c-3cb8-4e86-8a01-e3e38961c583,medical-image-deblurring,Medical Image Deblurring,Medical image deblurring aims to remove blur from medical images.,computer-vision 8480877b-2788-4d02-b4d5-a9e3bf14e8c8,partial-point-cloud-matching,Partial Point Cloud Matching,,computer-vision 821b93de-97c7-4f49-8734-4ef174333292,fingertip-detection,Fingertip Detection,,computer-vision efe780aa-556d-4991-bcfb-0c4dd22c687e,real-time-3d-semantic-segmentation,Real-Time 3D Semantic Segmentation,,computer-vision 6f13a8e2-3217-4199-a2d0-7cc37f97d9dc,pose-guided-image-generation,Pose-Guided Image Generation,"Pose-guided image generation is the task of generating a new image of a person with guidance from pose information that the new image should synthesise around. ( Image credit: [Coordinate-based Texture Inpainting for Pose-Guided Human Image Generation](https://arxiv.org/pdf/1811.11459v2.pdf) )",computer-vision 62891c13-dbe2-4595-bbe4-cf9cfaf1638a,on-the-fly-sketch-based-image-retrieval,On-the-Fly Sketch Based Image Retrieval,Start retrieving as the user starts drawing.,computer-vision 93703509-d310-4ba3-88b3-b49fdb9df946,hand-pose-estimation,Hand Pose Estimation,"Hand pose estimation is the task of finding the joints of the hand from an image or set of video frames. ( Image credit: [Pose-REN](https://github.com/xinghaochen/Pose-REN) )",computer-vision e13d58d4-6718-49eb-8587-6e7e08747ae5,missing-markers-reconstruction,Missing Markers Reconstruction,Reconstructing missing markers in motion capture 3D poses,computer-vision a06e3a8f-554b-46a7-8781-184b1138905b,unsupervised-object-segmentation,Unsupervised Object Segmentation,Image credit: [ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation](https://paperswithcode.com/paper/clevrtex-a-texture-rich-benchmark-for),computer-vision 79d581b4-6ea5-4861-ad0f-eb05c40f6a29,fairness,Fairness,,computer-vision bf737702-abe8-444a-890f-27034f88c7b7,animation,Animation,,computer-vision 4ae8db62-f595-4902-ba23-92f208dbf319,video-retrieval,Video Retrieval,"The objective of video retrieval is as follows: given a text query and a pool of candidate videos, select the video which corresponds to the text query. Typically, the videos are returned as a ranked list of candidates and scored via document retrieval metrics.",computer-vision 768cc5f0-05e2-4d36-a085-864f201bec96,displaced-people-recognition,Displaced People Recognition,"Recognise displaced people from images. 
( Image credit: [DisplaceNet: Recognising Displaced People from Images by Exploiting Dominance Level](https://arxiv.org/pdf/1905.02025v1.pdf) )",computer-vision b2dccdea-3a2b-4b00-8f8a-a3c80e7acb8c,temporal-localization,Temporal Localization,,computer-vision 3f00df79-30e5-4fbf-b35f-81daeefda83b,generalized-few-shot-semantic-segmentation,Generalized Few-Shot Semantic Segmentation,,computer-vision dbd6c735-7450-4edd-97e1-b272fb969689,human-fmri-response-prediction,Human fMRI response prediction,"The task is: Given a) the set of videos of everyday events and b) the corresponding brain responses recorded while human participants viewed those videos, use computational models to predict brain responses for videos.",computer-vision 75d08d34-2e18-411c-b128-88d064e55b1a,depth-completion,Depth Completion,"The **Depth Completion** task is a sub-problem of depth estimation. In the sparse-to-dense depth completion problem, one wants to infer the dense depth map of a 3-D scene given an RGB image and its corresponding sparse reconstruction in the form of a sparse depth map obtained either from computational methods such as SfM (Structure-from-Motion) or active sensors such as lidar or structured light sensors. Source: [LiStereo: Generate Dense Depth Maps from LIDAR and Stereo Imagery ](https://arxiv.org/abs/1905.02744), [Unsupervised Depth Completion from Visual Inertial Odometry](https://arxiv.org/abs/1905.08616)",computer-vision 3a7e5346-0d41-4dff-847f-3cf449f166fa,font-recognition,Font Recognition,"Font recognition (also called *visual font recognition* or *optical font recognition*) is the task of identifying the font family or families used in images containing text. Understanding which fonts are used in text may, for example, help designers find the right style, as well as help select an optical character recognition engine or model that is a better fit for certain texts.",computer-vision 2f5ebb19-b093-4022-8bf6-3139054ae078,recognizing-and-localizing-human-actions,Recognizing And Localizing Human Actions,,computer-vision 65b58435-dc0c-4be2-8bf5-23356e13adb2,spatio-temporal-video-grounding,Spatio-Temporal Video Grounding,,computer-vision f411eff3-46aa-4964-8e48-25238d9ad920,spectrum-cartography,Spectrum Cartography,,computer-vision ec0e598d-8798-48e9-b4f5-7570f3784912,retinal-oct-disease-classification,Retinal OCT Disease Classification,Classifying different retinal degenerations from Optical Coherence Tomography (OCT) images.,computer-vision 443eedf2-9aa5-4fef-ac8f-7338aac91fe5,image-stylization,Image Stylization,"**Image stylization** is a task that involves transforming an input image into a new image that has a different style, while preserving the content of the original image. The goal of image stylization is to create visually appealing images with a specific style or aesthetic, such as impressionism, cubism, or surrealism. 
It can also be used to make images more visually appealing for specific applications, such as social media or advertising.",computer-vision 7248000f-0541-482e-9ffc-140d2051f3e2,fundus-to-angiography-generation,Fundus to Angiography Generation,Generating Retinal Fluorescein Angiography from Retinal Fundus Image using Generative Adversarial Networks.,computer-vision 48351d6b-4007-4a87-98b3-72d463a14024,situation-recognition,Situation Recognition,"Situation Recognition aims to produce the structured image summary which describes the primary activity (verb), and its relevant entities (nouns).",computer-vision 0707610b-71fc-45d0-ba58-882d28200c07,multiview-learning,Multiview Learning,,computer-vision deb82f5a-0351-467f-a131-9589fc4e6e8b,instance-search,Instance Search,"Visual **Instance Search** is the task of retrieving from a database of images the ones that contain an instance of a visual query. It is typically much more challenging than finding images from the database that contain objects belonging to the same category as the object in the query. If the visual query is an image of a shoe, visual Instance Search does not try to find images of shoes, which might differ from the query in shape, color or size, but tries to find images of the exact same shoe as the one in the query image. Visual Instance Search challenges image representations as the features extracted from the images must enable such fine-grained recognition despite variations in viewpoints, scale, position, illumination, etc. Whereas holistic image representations, where each image is mapped to a single high-dimensional vector, are sufficient for coarse-grained similarity retrieval, local features are needed for instance retrieval. Source: [Dynamicity and Durability in Scalable Visual Instance Search ](https://arxiv.org/abs/1805.10942)",computer-vision 987d175a-7587-41e2-92e3-f5b51757c951,segmentation-based-workflow-recognition,Segmentation Based Workflow Recognition,,computer-vision a1232853-1ef9-4f28-b242-03eab20d1a15,image-registration,Image Registration,"Image registration is the process of transforming different sets of data into one coordinate system. Data may be multiple photographs, data from different sensors, times, depths, or viewpoints. It is used in computer vision, medical imaging, and compiling and analyzing images and data from satellites. Registration is necessary in order to be able to compare or integrate the data obtained from these different measurements. Source: [Image registration | Wikipedia](https://en.wikipedia.org/wiki/Image_registration) ( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision fe534205-6d20-46d3-b081-70e5cf58ef89,3d-object-detection-from-monocular-images,3D Object Detection From Monocular Images,"This is the task of detecting 3D objects from monocular images (as opposed to LiDAR based counterparts). It is usually associated with autonomous driving based tasks. 
( Image credit: [Orthographic Feature Transform for Monocular 3D Object Detection](https://arxiv.org/pdf/1811.08188v1.pdf) )",computer-vision 9ec6ce96-102d-4b68-a789-4ba0dad23270,one-shot-face-stylization,One-Shot Face Stylization,"Image credit: [""JoJoGAN: One Shot Face Stylization""](https://arxiv.org/pdf/2112.11641v1.pdf)",computer-vision 74ccac8a-dac7-4adb-801e-ffaf7699f428,prompt-driven-zero-shot-domain-adaptation,Prompt-driven Zero-shot Domain Adaptation,Domain adaptation using only a single source domain and a description of the target domain in natural language (No images from target domain are available),computer-vision 3b43e8e5-608b-43fc-be5b-d03486d4dbcb,human-part-segmentation,Human Part Segmentation,,computer-vision b2f3aead-b361-4932-9f75-29d8cf6715ad,nlp-based-person-retrival,Text-based Person Retrieval,,computer-vision c6f2530e-5b85-4674-8836-0202fa5eb15e,scene-text-detection,Scene Text Detection,"**Scene Text Detection** is a computer vision task that involves automatically identifying and localizing text within natural images or videos. The goal of scene text detection is to develop algorithms that can robustly detect and label text with bounding boxes in uncontrolled and complex environments, such as street signs, billboards, or license plates. Source: [ContourNet: Taking a Further Step toward Accurate Arbitrary-shaped Scene Text Detection ](https://arxiv.org/abs/2004.04940)",computer-vision 740e2c80-efc2-46ca-801e-a582711c6cf8,video-summarization,Video Summarization,"**Video Summarization** aims to generate a short synopsis that summarizes the video content by selecting its most informative and important parts. The produced summary is usually composed of a set of representative video frames (a.k.a. *video key-frames*), or video fragments (a.k.a. *video key-fragments*) that have been stitched in chronological order to form a shorter video. The former type of a video summary is known as **video storyboard**, and the latter type is known as **video skim**. Source: [Video Summarization Using Deep Neural Networks: A Survey](https://arxiv.org/abs/2101.06072)
Image credit: [iJRASET](https://www.ijraset.com/fileserve.php?FID=12932)",computer-vision 8fb891af-1d04-4819-9d70-0cdd522d29db,open-vocabulary-attribute-detection,Open Vocabulary Attribute Detection,"Open-Vocabulary Attribute Detection (OVAD) is a task that aims to detect and recognize an open set of objects and their associated attributes in an image. The objects and attributes are defined by text queries during inference, without prior knowledge of the tested classes during training.",computer-vision e2b9092f-6397-4d42-99bf-e53e26c041c6,thermal-image-segmentation,Thermal Image Segmentation,,computer-vision da48f845-c8bf-4a1d-b31a-5d9eb44f0d45,object-detection-in-aerial-images,Object Detection In Aerial Images,"Object Detection in Aerial Images is the task of detecting objects from aerial images. ( Image credit: [DOTA: A Large-Scale Dataset for Object Detection in Aerial Images](http://openaccess.thecvf.com/content_cvpr_2018/papers/Xia_DOTA_A_Large-Scale_CVPR_2018_paper.pdf) )",computer-vision a2972463-65de-4374-ac7f-bc0f7ccb1930,weakly-supervised-action-localization,Weakly Supervised Action Localization,"In this task, the training data consists of videos with a list of activities in them without any temporal boundary annotations. However, while testing, given a video, the algorithm should recognize the activities in the video and also provide the start and end time.",computer-vision 75b94704-c2e1-4f94-bbc6-7e6e45d8a36c,handwritten-digit-image-synthesis,Handwritten Digit Image Synthesis,,computer-vision fdc3f47f-7626-4637-ac53-98e99a9442c2,3d-multi-person-pose-estimation-root-relative,3D Multi-Person Pose Estimation (root-relative),"This task aims to solve root-relative 3D multi-person pose estimation (person-centric coordinate system). No ground truth human bounding box and human root joint coordinates are used during testing stage. ( Image credit: [RootNet](https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE) )",computer-vision ceacb0b4-d4c4-4c88-8d49-703756baf5cc,object-counting,Object Counting,"The goal of **Object Counting** task is to count the number of object instances in a single image or video sequence. It has many real-world applications such as traffic flow monitoring, crowdedness estimation, and product counting. Source: [Learning to Count Objects with Few Exemplar Annotations ](https://arxiv.org/abs/1905.07898)",computer-vision 466dade9-b5e5-40c9-a1ee-911255a2c244,medical-object-detection,Medical Object Detection,"Medical object detection is the task of identifying medical-based objects within an image. ( Image credit: [Liver Lesion Detection from Weakly-labeled Multi-phase CT Volumes with a Grouped Single Shot MultiBox Detector](https://github.com/L0SG/grouped-ssd-pytorch) )",computer-vision f3f0f1b0-a2da-44c6-bded-db9eb50f2b45,chinese-landscape-painting-generation,Chinese Landscape Painting Generation,,computer-vision ac08af92-df2c-41bc-bb79-d0c28a8925e3,ensemble-learning,Ensemble Learning,,computer-vision d46933dd-0d36-4108-b1b8-756878a88ed8,low-light-image-enhancement,Low-Light Image Enhancement,"**Low-Light Image Enhancement** is a computer vision task that involves improving the quality of images captured under low-light conditions. 
The goal of low-light image enhancement is to make images brighter, clearer, and more visually appealing, without introducing too much noise or distortion.",computer-vision 196dd21a-9c15-407c-9131-511f5a6f30e9,video-salient-object-detection,Video Salient Object Detection,"Video salient object detection (VSOD) is significantly essential for understanding the underlying mechanism behind HVS during free-viewing in general and instrumental to a wide range of real-world applications, e.g., video segmentation, video captioning, video compression, autonomous driving, robotic interaction, weakly supervised attention. Besides its academic value and practical significance, VSOD presents great difficulties due to the challenges carried by video data (diverse motion patterns, occlusions, blur, large object deformations, etc.) and the inherent complexity of human visual attention behavior (i.e., selective attention allocation, attention shift) during dynamic scenes. Online benchmark: http://dpfan.net/davsod. ( Image credit: [Shifting More Attention to Video Salient Object Detection, CVPR2019-Best Paper Finalist](https://openaccess.thecvf.com/content_CVPR_2019/papers/Fan_Shifting_More_Attention_to_Video_Salient_Object_Detection_CVPR_2019_paper.pdf) )",computer-vision 4bdf9e0f-38df-4f1d-8f83-6cfb1641c6a8,zero-shot-object-detection,Zero-Shot Object Detection,"Zero-shot object detection (ZSD) is the task of object detection where no visual training data is available for some of the target object classes. ( Image credit: [Zero-Shot Object Detection: Learning to Simultaneously Recognize and Localize Novel Concepts](https://github.com/salman-h-khan/ZSD_Release) )",computer-vision 34a41b4e-9800-409d-a1fe-8d788bb51bff,dynamic-texture-recognition,Dynamic Texture Recognition,,computer-vision 888ae343-c343-4697-80ab-e3abc7ba67b3,surface-normals-estimation,Surface Normals Estimation,Surface normal estimation deals with the task of predicting the surface orientation of the objects present inside a scene. Refer to [Designing Deep Networks for Surface Normal Estimation (Wang et al.)](https://www.cs.cmu.edu/~xiaolonw/papers/deep3d.pdf) to get a good overview of several design choices that led to the development of a CNN-based surface normal estimator.,computer-vision 5f730391-5c75-417d-bc8e-9170eef91d3c,mistake-detection,Mistake Detection,"Mistakes are natural occurrences in many tasks and an opportunity for an AR assistant to provide help. Identifying such mistakes requires modelling procedural knowledge and retaining long-range sequence information. 
In its simplest form Mistake Detection aims to classify each coarse action segment into one of the three classes: {“correct”, “mistake”, “correction”}.",computer-vision 4be3d0d7-8c80-4dc0-9bbb-49837636e9f9,shadow-detection-and-removal,Shadow Detection And Removal,,computer-vision 5be21165-6a01-48df-94bd-233520cd3704,referring-image-matting-prompt-based,Referring Image Matting (Prompt-based),"Prompt-based referring image matting, taking an image and a prompt word as the input.",computer-vision 864e2029-7028-4569-ba4a-d387eddbc512,3d-shape-representation,3D Shape Representation,Image: [MeshNet](https://arxiv.org/pdf/1811.11424v1.pdf),computer-vision 2f4246cc-9ebd-4f2b-87db-c8ebc3994e05,lightfield,Lightfield,Tasks related to the light-field imagery,computer-vision d11222ec-327f-4c75-b31f-b906daf12eae,image-harmonization,Image Harmonization,Image harmonization aims to modify the color of the composited region with respect to the specific background.,computer-vision d10a89b7-ec88-4690-9938-39e25230d6e0,online-multi-object-tracking,Online Multi-Object Tracking,"The goal of **Online Multi-Object Tracking** is to estimate the spatio-temporal trajectories of multiple objects in an online video stream (i.e., the video is provided frame-by-frame), which is a fundamental problem for numerous real-time applications, such as video surveillance, autonomous driving, and robot navigation. Source: [A Hybrid Data Association Framework for Robust Online Multi-Object Tracking ](https://arxiv.org/abs/1703.10764)",computer-vision 96bd680f-e5d0-4f98-9ca3-f5b2c56448a7,novel-class-discovery,Novel Class Discovery,"The goal of Novel Class Discovery (NCD) is to identify new classes in unlabeled data, by exploiting prior knowledge from known classes. In this specific setup, the data is split in two sets. The first is a labeled set containing known classes and the second is an unlabeled set containing unknown classes that must be discovered.",computer-vision 3daca605-8122-4f87-a348-58502ce11a8d,3d-object-reconstruction-from-a-single-image,3D Object Reconstruction From A Single Image,Image: [Fan et al](https://arxiv.org/pdf/1612.00603v2.pdf),computer-vision 7ae13c44-a96f-43ca-a8b7-1fa779f3617b,referring-video-object-segmentation,Referring Video Object Segmentation,"Referring video object segmentation aims at segmenting an object in video with language expressions. Unlike the previous video object segmentation, the task exploits a different type of supervision, language expressions, to identify and segment an object referred by the given language expressions in a video.",computer-vision e65ec0ba-c5d0-4495-9a2a-67f8d3a511ec,finger-dorsal-image-spoof-detection,Finger Dorsal Image Spoof Detection,,computer-vision f68c5667-1682-41d9-b7e6-8b05a24336f5,cross-view-image-to-image-translation,Cross-View Image-to-Image Translation,,computer-vision 170d5616-4bee-4c1a-a7b6-610899d767dc,state-change-object-detection,State Change Object Detection,,computer-vision 4410c421-9aea-4a58-b8eb-abb595172649,object-detection-in-indoor-scenes,Object Detection In Indoor Scenes,"Object detection in indoor scenes is the task of performing object detection within an indoor environment. ( Image credit: [Faster Bounding Box Annotation for Object Detection in Indoor Scenes](https://arxiv.org/pdf/1807.03142v1.pdf) )",computer-vision 9813b00e-a925-4de7-ace4-e762821dc33a,vehicle-re-identification,Vehicle Re-Identification,"Vehicle re-identification is the task of identifying the same vehicle across multiple cameras. 
( Image credit: [A Two-Stream Siamese Neural Network for Vehicle Re-Identification by Using Non-Overlapping Cameras](https://github.com/icarofua/siamese-two-stream) )",computer-vision 29f3c6c7-7a6b-40e9-9c43-f223328a6e6e,overlapped-100-10,Overlapped 100-10,,computer-vision 749b5422-0302-4b4f-aee9-1319a33e5232,motion-segmentation,Motion Segmentation,"**Motion Segmentation** is an essential task in many applications in Computer Vision and Robotics, such as surveillance, action recognition and scene understanding. The classic way to state the problem is the following: given a set of feature points that are tracked through a sequence of images, the goal is to cluster those trajectories according to the different motions they belong to. It is assumed that the scene contains multiple objects that are moving rigidly and independently in 3D-space. Source: [Robust Motion Segmentation from Pairwise Matches ](https://arxiv.org/abs/1905.09043)",computer-vision 8844da70-2436-45cb-a1ce-4b47b571761b,weakly-supervised-instance-segmentation,Weakly-supervised instance segmentation,,computer-vision 398d6fa1-6765-415b-9565-7b9d11069826,drone-view-target-localization,Drone-view target localization,"(Drone -> Satellite) Given one drone-view image or video, the task aims to find the most similar satellite-view image to localize the target building in the satellite view.",computer-vision 76c4a885-d046-454a-bf13-2259fded6460,3d-classification,3D Classification,,computer-vision 422b429b-0b33-4da9-b807-ac6843a8f8d4,group-detection-in-crowds,Group Detection In Crowds,,computer-vision 2ba35485-c94a-48c8-8df6-7603bfffd5f1,open-set-action-recognition,Open Set Action Recognition,,computer-vision 3be1001c-a33b-4150-a538-65f68dd78bb8,rgb-d-reconstruction,RGB-D Reconstruction,,computer-vision ad7237e1-5601-412c-a12f-1051c19fd162,3d,3D,,computer-vision 8489e78e-694a-4ffc-b4ed-f9f11108c92a,hand-object-pose,hand-object pose,6D pose estimation of hand and object,computer-vision 5607fa6a-cfd1-4bd0-8264-c373a630b005,object-categorization,Object Categorization,"Object categorization identifies which label, from a given set, best corresponds to an image region defined by an input image and bounding box.",computer-vision 50aa1044-1908-48f7-b9b1-6b9b8ebcb9df,object-detection,Object Detection,"**Object Detection** is a computer vision task in which the goal is to detect and locate objects of interest in an image or video. The task involves identifying the position and boundaries of objects in an image, and classifying the objects into different categories. The state-of-the-art methods can be categorized into two main types: one-stage methods and two-stage methods: - One-stage methods prioritize inference speed, and example models include YOLO, SSD and RetinaNet. - Two-stage methods prioritize detection accuracy, and example models include Faster R-CNN, Mask R-CNN and Cascade R-CNN. The most popular benchmark is the MSCOCO dataset. Models are typically evaluated according to a Mean Average Precision metric. ( Image credit: [Detectron](https://github.com/facebookresearch/detectron) )",computer-vision 23726e2c-61c9-4df8-9bd2-6b1da83d779b,texture-classification,Texture Classification,"**Texture Classification** is a fundamental issue in computer vision and image processing, playing a significant role in many applications such as medical image analysis, remote sensing, object recognition, document analysis, environment modeling, content-based image retrieval and many more. 
Source: [Improving Texture Categorization with Biologically Inspired Filtering ](https://arxiv.org/abs/1312.0072)",computer-vision e028dd88-c34f-49c1-afcd-f09a1b7e042d,age-estimation,Age Estimation,"Age Estimation is the task of estimating the age of a person from an image or some other kind of data. ( Image credit: [BridgeNet](https://arxiv.org/pdf/1904.03358v1.pdf) )",computer-vision ac837bd0-ec3c-4cec-9f52-da0dd18a5426,visual-localization,Visual Localization,"**Visual Localization** is the problem of estimating the camera pose of a given image relative to a visual representation of a known scene. Source: [Fine-Grained Segmentation Networks: Self-Supervised Segmentation for Improved Long-Term Visual Localization ](https://arxiv.org/abs/1908.06387)",computer-vision f6a8c909-bf0c-4386-8b10-3274606fc6a6,unsupervised-anomaly-detection-with-specified-7,Unsupervised Anomaly Detection with Specified Settings -- 10% anomaly,,computer-vision 9d37344e-c7c5-443d-98d6-289b9199c9f5,video-forensics,Video Forensics,,computer-vision bac2627c-5811-4758-af00-fc4dd3bdc1a9,visual-recognition,Visual Recognition,,computer-vision 1a8f60cb-16fe-4717-857a-cf848628af6a,stereo-lidar-fusion,Stereo-LiDAR Fusion,Depth estimation using stereo cameras and a LiDAR sensor.,computer-vision a5a779b9-7f7c-4959-9b7d-8245d4493c72,video-inpainting,Video Inpainting,"The goal of **Video Inpainting** is to fill in missing regions of a given video sequence with contents that are both spatially and temporally coherent. Video Inpainting, also known as video completion, has many real-world applications such as undesired object removal and video restoration. Source: [Deep Flow-Guided Video Inpainting ](https://arxiv.org/abs/1905.02884)",computer-vision 9224ef46-c34d-4ae3-b333-b48c351cdc1e,depth-map-super-resolution,Depth Map Super-Resolution,"Depth map super-resolution is the task of upsampling depth images. ( Image credit: [A Joint Intensity and Depth Co-Sparse Analysis Model for Depth Map Super-Resolution](https://arxiv.org/pdf/1304.5319v1.pdf) )",computer-vision 185ff286-f234-43aa-b482-1170cb42e981,face-parsing,Face Parsing,Classify pixels of a face image into different classes based on a given bounding box.,computer-vision 1988c556-24a2-4cf1-9f9a-3d2637e101f0,unsupervised-video-clustering,Unsupervised Video Clustering,,computer-vision b10ad960-a20d-46ba-88a8-dbacfb7a747e,single-image-generation,single-image-generation,,computer-vision fdc36452-44ab-42e1-b83c-bb8b4342d936,activity-recognition-in-videos,Activity Recognition In Videos,,computer-vision 43731d9a-e8ed-4b19-9d06-1d4346eeb029,image-augmentation,Image Augmentation,"**Image Augmentation** is a data augmentation method that generates more training data from the existing training samples. Image Augmentation is especially useful in domains where training data is limited or expensive to obtain like in biomedical applications. 
Source: [Improved Image Augmentation for Convolutional Neural Networks by Copyout and CopyPairing ](https://arxiv.org/abs/1909.00390) ( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision 2b6ec428-4c1c-48e1-8626-3f38618ac0e1,video-temporal-consistency,Video Temporal Consistency,"A method that removes temporal flickering and other artifacts from videos, in particular those introduced by (non-temporal-aware) per-frame processing",computer-vision 38f264e9-da19-4b1d-88ac-0a559813c077,light-source-estimation,Outdoor Light Source Estimation,,computer-vision 94e04067-7ec9-4828-88b5-295db04f82fa,audio-visual-active-speaker-detection,Audio-Visual Active Speaker Detection,Determine if and when each visible person in the video is speaking.,computer-vision 5d3b70fe-c289-411f-8310-96a86ca3ba98,3d-rotation-estimation,3D Rotation Estimation,,computer-vision 999039f0-58b6-4237-a030-b6d6a636aa32,relational-captioning,Relational Captioning,,computer-vision 3a29dbbd-08da-47e6-a3bb-fb92c216978a,caricature,Caricature,"**Caricature** is a pictorial representation or description that deliberately exaggerates a person’s distinctive features or peculiarities to create an easily identifiable visual likeness with a comic effect. This vivid art form contains the concepts of abstraction, simplification and exaggeration. Source: [Alive Caricature from 2D to 3D ](https://arxiv.org/abs/1803.06802)",computer-vision e11334f5-1b3a-42bc-ac79-1ad46d1aca7f,edge-detection,Edge Detection,"**Edge Detection** is a fundamental image processing technique which involves computing an image gradient to quantify the magnitude and direction of edges in an image. Image gradients are used in various downstream tasks in computer vision such as line detection, feature detection, and image classification. Source: [Artistic Enhancement and Style Transfer of Image Edges using Directional Pseudo-coloring ](https://arxiv.org/abs/1906.07981) ( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision 7c998985-c766-4d77-877f-de27d7d13413,3d-place-recognition,3D Place Recognition,Pointcloud-based place recognition and retrieval,computer-vision f88f5fe2-b497-4d23-b93d-e2a5ff6f50fa,human-interaction-recognition,Human Interaction Recognition,"Human Interaction Recognition (HIR) is a field of study that involves the development of computer algorithms to detect and recognize human interactions in videos, images, or other multimedia content. The goal of HIR is to automatically identify and analyze the social interactions between people, their body language, and facial expressions.",computer-vision f5236a45-ec0f-449f-82be-3e1e9ce43fbb,autonomous-vehicles,Autonomous Vehicles,"Autonomous vehicles is the task of making a vehicle that can guide itself without human conduction. Many of the state-of-the-art results can be found at more general task pages such as [3D Object Detection](https://paperswithcode.com/task/3d-object-detection) and [Semantic Segmentation](https://paperswithcode.com/task/semantic-segmentation). 
( Image credit: [GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision](https://arxiv.org/abs/2007.13124) )",computer-vision 4de5a778-f62c-4548-acda-91ef125e50f2,4d-spatio-temporal-semantic-segmentation,4D Spatio Temporal Semantic Segmentation,Image: [Choy et al](https://paperswithcode.com/paper/4d-spatio-temporal-convnets-minkowski),computer-vision 2a680417-e5f0-49c2-a9b1-745b5ef682a6,chart-question-answering,Chart Question Answering,Question Answering task on chart images,computer-vision 7ebfbc6b-ce03-4a95-98ad-46e80b03f3b6,serial-style-transfer,Serial Style Transfer,,computer-vision 17278601-adec-47c9-bd14-2ba79c241ba3,face-hallucination,Face Hallucination,"Face hallucination is the task of generating high-resolution (HR) facial images from low-resolution (LR) inputs. ( Image credit: [Deep CNN Denoiser and Multi-layer Neighbor Component Embedding for Face Hallucination](https://arxiv.org/pdf/1806.10726v1.pdf) )",computer-vision 1c5b6453-e2b4-47f4-8eec-7fe64b7b011c,hyperspectral-unmixing,Hyperspectral Unmixing,"**Hyperspectral Unmixing** is a procedure that decomposes the measured pixel spectrum of hyperspectral data into a collection of constituent spectral signatures (or endmembers) and a set of corresponding fractional abundances. Hyperspectral Unmixing techniques have been widely used for a variety of applications, such as mineral mapping and land-cover change detection. Source: [An Augmented Linear Mixing Model to Address Spectral Variability for Hyperspectral Unmixing ](https://arxiv.org/abs/1810.12000)",computer-vision d71640a4-a426-4a25-b0d4-a6e30f21d754,visual-reasoning,Visual Reasoning,Ability to understand actions and reasoning associated with any visual images,computer-vision 16dea9cd-67b8-46f3-a505-c56407493f20,action-recognition-in-videos,Action Recognition,"**Action Recognition** is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify and categorize the actions being performed in the video or image into a predefined set of action classes. In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, having on the order of 10k videos. Please note some benchmarks may be located in the [Action Classification](https://paperswithcode.com/task/action-classification) or [Video Classification](https://paperswithcode.com/task/video-classification) tasks, e.g. Kinetics-400.",computer-vision 7666d1cb-c1df-4e57-84d9-f48c772d25e4,active-object-localization,Active Object Localization,,computer-vision 7ad78279-4ce5-4129-86fd-a043a75c4ed1,video-kinematic-base-workflow-recognition,Video & Kinematic Base Workflow Recognition,,computer-vision ecf090dd-8fc9-4610-80e0-3c1f85de13aa,intubation-support-prediction,Intubation Support Prediction,Prediction of need for Intubation support of Covid-19 patients.,computer-vision dd09f1b2-270a-478e-b427-1561333746c7,real-time-object-detection,Real-Time Object Detection,"**Real-Time Object Detection** is a computer vision task that involves identifying and locating objects of interest in real-time video sequences with fast inference while maintaining a base level of accuracy. 
This is typically solved using algorithms that combine object detection and tracking techniques to accurately detect and track objects in real-time. They use a combination of feature extraction, object proposal generation, and classification to detect and localize objects of interest. ( Image credit: [CenterNet](https://github.com/xingyizhou/CenterNet) )",computer-vision d9f71c4b-6ec1-44d1-bf21-d3ccdb284ad1,smile-recognition,Smile Recognition,Smile recognition is the task of recognising a smiling face in a photo or video.,computer-vision 3e8573be-4c31-4f83-8f6f-52c8fb6ca4ac,unsupervised-semantic-segmentation,Unsupervised Semantic Segmentation,"Models that learn to segment each image (i.e. assign a class to every pixel) without seeing the ground truth labels. ( Image credit: [SegSort: Segmentation by Discriminative Sorting of Segments](http://openaccess.thecvf.com/content_ICCV_2019/papers/Hwang_SegSort_Segmentation_by_Discriminative_Sorting_of_Segments_ICCV_2019_paper.pdf) )",computer-vision b93e6755-9a5d-4512-b542-b02e39240ebc,atomic-action-recognition,Atomic action recognition,,computer-vision 2e891f69-ad37-4ef2-85c0-d2fa4c980005,word-spotting-in-handwritten-documents,Word Spotting In Handwritten Documents,,computer-vision 72088790-4827-4639-82ab-5061db0e478f,trademark-retrieval,Trademark Retrieval,,computer-vision 5cfc45ba-c8e5-47b6-9f6a-8be4c9adfa1b,blended-target-domain-adaptation,Blended-target Domain Adaptation,"Blended-target domain adaptation aims to adapt a single source model to multiple different target domains. The task is similar to multi-target domain adaptation. However, the domain labels are not available.",computer-vision 85df147c-500e-4dde-8636-abf3e81d958d,video-understanding,Video Understanding,"A crucial task of **Video Understanding** is to recognise and localise (in space and time) different actions or events appearing in the video. Source: [Action Detection from a Robot-Car Perspective ](https://arxiv.org/abs/1807.11332)",computer-vision 669e67b9-55e3-4172-8e60-8704dbcad1be,3d-single-object-tracking,3D Single Object Tracking,"3D tracking of a single object, based on an initial 3D bounding box, provided to the tracker. 3D single object tracking is commonly performed using point cloud data from Lidars, as it provides valuable depth information, which is lost in camera images. However, irregular point cloud structure and an increasing point sparsity with distance make Lidar-based 3D single object tracking a nontrivial task.",computer-vision ac6e768d-06a2-40c9-95e4-514d1bd3ecaf,handwritten-line-segmentation,Handwritten Line Segmentation,Handwritten Line Segmentation from Document Images.,computer-vision 6243e1c1-014e-42a8-8335-72897a1c1055,conditional-image-generation,Conditional Image Generation,"Conditional image generation is the task of generating new images from a dataset conditional on their class. ( Image credit: [PixelCNN++](https://github.com/openai/pixel-cnn) )",computer-vision ab643cac-43df-4b27-802c-ab463365712c,virtual-try-on,Virtual Try-on,Virtual try-on of clothing or other items such as glasses and makeup. 
Most recent techniques use Generative Adversarial Networks.,computer-vision ae5cc5df-8374-4139-b0d5-877a78f2d687,3d-geometry-perception,3D Geometry Perception,Image: [Zhao et al](https://arxiv.org/pdf/1812.10775v2.pdf),computer-vision 4fe321ae-5ef5-482d-917a-f81ef75a6757,visual-text-correction,Visual Text Correction,,computer-vision 3e37d676-aa9c-4f90-88cb-b6bdbf76fd76,video-domain-adapation,Video Domain Adaptation,Unsupervised Domain Adaptation on Videos for the task of Action Recognition.,computer-vision c0816d0c-25c2-4401-87d2-f26ffd48e3fa,camouflaged-object-segmentation,Camouflaged Object Segmentation,"Camouflaged object segmentation (COS) or Camouflaged object detection (COD), which was originally promoted by [T.-N. Le et al.](https://www.sciencedirect.com/science/article/abs/pii/S1077314219300608) (2017), aims to identify objects that conceal their texture into the surrounding environment. The high intrinsic similarities between the target object and the background make COS/COD far more challenging than the traditional object segmentation task. Also, refer to the online benchmarks on [CAMO dataset](https://sites.google.com/view/ltnghia/research/camo), [COD dataset](http://dpfan.net/Camouflage/), and [online demo](http://mc.nankai.edu.cn/cod). ( Image source: [Anabranch Network for Camouflaged Object Segmentation](https://www.sciencedirect.com/science/article/abs/pii/S1077314219300608) )",computer-vision b9f0b333-4d7c-413a-b613-03bed2da7c0a,unseen-object-instance-segmentation,Unseen Object Instance Segmentation,"Instance segmentation is the task of detecting and delineating each distinct object of interest appearing in an image. Image Credit: [Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers](https://arxiv.org/abs/2103.12340)",computer-vision dbc28e4a-1083-4f41-8117-3711128da84a,3d-dense-captioning,3D dense captioning,"Dense captioning in 3D point clouds is an emerging vision-and-language task involving object-level 3D scene understanding. 
Apart from coarse semantic class prediction and bounding box regression as in traditional 3D object detection, 3D dense captioning aims at producing a further and finer instance-level label of natural language description on visual appearance and spatial relations for each scene object of interest.",computer-vision c9f25c60-a7a7-4c59-bb1d-a89164554585,skills-evaluation,Skills Evaluation,,computer-vision 299ca938-30bd-4980-95e3-369fa3740bcd,facial-editing,Facial Editing,Image source: [Stitch it in Time: GAN-Based Facial Editing of Real Videos](https://arxiv.org/pdf/2201.08361v2.pdf),computer-vision c91e096c-aec4-4239-b564-207097940139,micro-expression-spotting,Micro-Expression Spotting,"Facial Micro-Expression Spotting is a challenging task in identifying onset, apex and/or offset over a short or long micro-expression sequence.",computer-vision 12ca3d52-b51d-417d-b5a6-18e4622bbf46,pose-contrastive-learning,Pose Contrastive Learning,,computer-vision 6a4791f7-9a61-479f-ae8c-0635a087835a,motion-disentanglement,Motion Disentanglement,Disentangling irregular (anomalous) motion from regular motion.,computer-vision 86956437-863e-4d9c-b28b-4c7134906e84,semantic-correspondence,Semantic correspondence,The task of semantic correspondence aims to establish reliable visual correspondence between different instances of the same object category.,computer-vision a0f54dd0-5031-484c-86ec-8ea0c7ce5221,zero-shot-composed-image-retrieval-zs-cir,Zero-Shot Composed Image Retrieval (ZS-CIR),"Given a query composed of a reference image and a relative caption, Composed Image Retrieval (CIR) aims to retrieve target images that are visually similar to the reference one but incorporate the changes specified in the relative caption. The bi-modality of the query provides users with more precise control over the characteristics of the desired image, as some features are more easily described with language, while others can be better expressed visually. **Zero-Shot Composed Image Retrieval (ZS-CIR)** is a subtask of CIR that aims to design an approach that manages to combine the reference image and the relative caption without the need for supervised learning.",computer-vision e07934a9-96fe-4bab-994d-9a92a4e27c3b,monocular-3d-human-pose-estimation,Monocular 3D Human Pose Estimation,This task targets 3D human pose estimation with a single RGB camera.,computer-vision b38ffa71-e4c4-458f-9d8d-75c02e37651f,multimodal-patch-matching,Multimodal Patch Matching,"Multimodal patch matching focuses on matching patches originating from different sources, such as visible RGB and near-infrared.",computer-vision 3531aa85-8021-443a-8a4a-ab584b53760a,video-to-image-affordance-grounding,Video-to-image Affordance Grounding,"Given a demonstration video V and a target image I, the goal of video-to-image affordance grounding is to predict an affordance heatmap over the target image according to the hand-interacted region in the video, accompanied by the affordance action (e.g., press, turn).",computer-vision 558786b1-451e-47b1-8fbd-00d784bab81c,typeface-completion,Typeface Completion,,computer-vision 0f110ee0-a362-4471-846f-0655f2a75524,robust-bev-map-segmentation,Robust BEV Map Segmentation,,computer-vision 09284c33-0dc3-48ee-9762-d873abaa184d,face-image-retrieval,Face Image Retrieval,"Face image retrieval is the task of retrieving faces similar to a query, according to the given criteria (e.g. identity), and ranking them using their distances to the query. 
( Image credit: [CP-mtML](http://openaccess.thecvf.com/content_cvpr_2016/papers/Bhattarai_CP-mtML_Coupled_Projection_CVPR_2016_paper.pdf) )",computer-vision 74148a88-0e21-4873-a262-525e0f513a13,jpeg-artifact-correction,JPEG Artifact Correction,"Correction of visual artifacts caused by JPEG compression, these artifacts are usually grouped into three types: blocking, blurring, and ringing. They are caused by quantization and removal of high frequency DCT coefficients.",computer-vision 39c76c3a-8fce-4682-92c2-2edae603f05a,3d-part-segmentation,3D Part Segmentation,"Segmenting 3D object parts ( Image credit: [MeshCNN: A Network with an Edge](https://arxiv.org/pdf/1809.05910v2.pdf) )",computer-vision 69e100bd-0039-41b4-be84-c9da4d905c6b,3d-human-reconstruction,3D Human Reconstruction,,computer-vision b5f167e5-cc28-4e08-81e2-b94a52e8793c,scene-recognition,Scene Recognition,,computer-vision 7f65a1a1-48b0-4a6c-a4ab-589953ca72cc,image-manipulation,Image Manipulation,,computer-vision e977fc3e-f948-4f06-8537-999c49e3b0ff,spoof-detection,Spoof Detection,,computer-vision 48a16bdc-4f67-4beb-b730-aa8cc1558078,image-to-image-translation,Image-to-Image Translation,"**Image-to-Image Translation** is a task in computer vision and machine learning where the goal is to learn a mapping between an input image and an output image, such that the output image can be used to perform a specific task, such as style transfer, data augmentation, or image restoration. ( Image credit: [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/pdf/1703.10593v6.pdf) )",computer-vision 428ee989-ab7f-4f6a-aafd-b390e203ff0b,fashion-understanding,Fashion Understanding,,computer-vision c90b056c-ca6f-4a6f-a5a2-de16d980697b,imputation,Imputation,Substituting missing data with values according to some criteria.,computer-vision cb3c4bf3-3905-4ec0-848c-9725b4cc24c8,thermal-image-denoising,Thermal Image Denoising,,computer-vision 633b7188-67c7-4b6f-b8e0-570a9457e878,open-world-semi-supervised-learning,Open-World Semi-Supervised Learning,,computer-vision cf2c291b-7c33-4774-85f3-fdf442778dfa,human-pose-forecasting,Human Pose Forecasting,"Human pose forecasting is the task of detecting and predicting future human poses. ( Image credit: [EgoPose](https://github.com/Khrylx/EgoPose) )",computer-vision 86e69045-2c20-40a7-ae55-b6d6a0ae60ae,corpus-video-moment-retrieval,Corpus Video Moment Retrieval,The task extends the Single Video Moment Retrieval task to the Corpus setup where a single textual query is used to temporally localize relevant moments across all videos in the dataset.,computer-vision fef5ad8e-1861-46a2-805e-3e69469f3980,amodal-layout-estimation,Amodal Layout Estimation,"Amodal scene layout estimation involves estimating the static and dynamic portion of an urban driving scene in bird's-eye view, given a single image. The concept of ""amodal"" estimation refers to the fact that we also estimate layout of parts of the scene that are not observable in the image.",computer-vision 1cc34e49-5e98-4649-8619-8d08974ac4a5,egocentric-pose-estimation,Egocentric Pose Estimation,,computer-vision bd5482f6-d02a-4ed2-88e1-8bd3fabf7807,anomaly-classification,Anomaly Classification,,computer-vision 43623c1a-2c5a-4ecb-a527-b4bf78c4b993,2d-human-pose-estimation,2D Human Pose Estimation,"What is Human Pose Estimation? Human pose estimation is the process of estimating the configuration of the body (pose) from a single, typically monocular, image. Background. 
Human pose estimation is one of the key problems in computer vision that has been studied for well over 15 years. The reason for its importance is the abundance of applications that can benefit from such a technology. For example, human pose estimation allows for higher-level reasoning in the context of human-computer interaction and activity recognition; it is also one of the basic building blocks for marker-less motion capture (MoCap) technology. MoCap technology is useful for applications ranging from character animation to clinical analysis of gait pathologies.",computer-vision 05f7d79e-0209-477c-a4ef-bef0dfd3526d,face-clustering,Face Clustering,Face Clustering in videos,computer-vision 6c0b2642-3c81-4d44-9d61-6b994d1d5612,visual-analogies,Visual Analogies,,computer-vision 71d22a75-018e-4ba3-a306-4f771b4ee3bf,classification-of-hyperspectral-images,Classification Of Hyperspectral Images,,computer-vision 4946ba79-0716-4028-9651-4f2fcd30f42f,single-image-based-hdr-reconstruction,Single-Image-Based HDR Reconstruction,,computer-vision d126e3f9-a738-4f56-9b74-c9f91af85d40,landmark-based-segmentation,Landmark-based segmentation,,computer-vision 31c42e73-5b40-4fa1-9913-573bb95acb52,animal-pose-estimation,Animal Pose Estimation,"Animal pose estimation is the task of identifying the pose of an animal. ( Image credit: [Using DeepLabCut for 3D markerless pose estimation across species and behaviors](http://www.mousemotorlab.org/s/NathMathis2019.pdf) )",computer-vision f50df6ae-398c-4f4d-9888-d30f4e4459d7,3d-reconstruction,3D Reconstruction,"**3D Reconstruction** is the task of creating a 3D model or representation of an object or scene from 2D images or other data sources. The goal of 3D reconstruction is to create a virtual representation of an object or scene that can be used for a variety of purposes, such as visualization, animation, simulation, and analysis. It can be used in fields such as computer vision, robotics, and virtual reality. Image: [Gwak et al](https://arxiv.org/pdf/1705.10904v2.pdf)",computer-vision beeba194-5914-4e70-a82c-295b944084f9,semi-supervised-person-instance-segmentation,Semi-Supervised Person Instance Segmentation,,computer-vision 4c64c9e1-41c0-4fcc-ad36-a1c75550ebf5,facial-expression-generation,Facial expression generation,,computer-vision 751f53af-d741-4117-a131-95b91aa40149,body-mass-index-bmi-prediction,Body Mass Index (BMI) Prediction,,computer-vision f2e5e696-0fdb-48e6-bd0b-ba5b423832c9,image-steganography,Image Steganography,"**Image Steganography** is a central topic in information hiding. The sender conceals a secret message in a cover image, obtains the container image (called the stego image), and completes the secret message’s transmission over the public channel by transferring the stego image. The receiver can then reveal the secret message from the stego image. Steganalysis is an attack on the steganography algorithm: a listener on the public channel intercepts the image and analyzes whether it contains secret information. Source: [Invisible Steganography via Generative Adversarial Networks ](https://arxiv.org/abs/1807.08571)",computer-vision a6167b0f-fb88-4d58-bb2d-bb640e80e30a,license-plate-detection,License Plate Detection,License Plate Recognition is an image-processing technology used to identify vehicles by their license plates. 
This technology is used in various security and traffic applications.,computer-vision 7bed3cef-7fef-4682-88bc-77e07efc70e0,single-image-haze-removal,Single Image Haze Removal,,computer-vision e75ee0b5-3c9a-4561-9d60-908bc24574fe,semi-supervised-person-re-identification,Semi-Supervised Person Re-Identification,,computer-vision f4f78d41-5e87-4a3a-a34a-982c15708f28,morphology-classification,Morphology classification,,computer-vision 32c21c0a-d8f7-48b6-8144-1076a5faa034,defocus-estimation,Defocus Estimation,,computer-vision 281a9c16-a7d9-4424-bd5a-7482a461f2f3,mixed-reality,Mixed Reality,,computer-vision a72a8450-37d7-4a60-9624-a104eca1c69c,multi-modal-subspace-clustering,Multi-modal Subspace Clustering,,computer-vision c6f83188-6725-4668-8c61-dd2a6270bbd3,hdr-reconstruction,HDR Reconstruction,,computer-vision bdd2a4cb-998b-4741-b5e0-2d83d1108d4b,pso-convnets-dynamics-2,PSO-ConvNets Dynamics 2,Incorporating distilled Cucker-Smale elements into PSO algorithm using KNN and intertwine training with SGD (Pull back method),computer-vision b8344f49-4921-451f-84f1-de1f936403ac,3d-scene-graph-alignment,3D Scene Graph Alignment,,computer-vision da161f6f-f5a0-4534-b78f-32a7a1de12c1,hyperspectral-image-classification,Hyperspectral Image Classification,"**Hyperspectral Image Classification** is a task in the field of remote sensing and computer vision. It involves the classification of pixels in hyperspectral images into different classes based on their spectral signature. Hyperspectral images contain information about the reflectance of objects in hundreds of narrow, contiguous wavelength bands, making them useful for a wide range of applications, including mineral mapping, vegetation analysis, and urban land-use mapping. The goal of this task is to accurately identify and classify different types of objects in the image, such as soil, vegetation, water, and buildings, based on their spectral properties. ( Image credit: [Shorten Spatial-spectral RNN with Parallel-GRU for Hyperspectral Image Classification](https://arxiv.org/pdf/1810.12563v1.pdf) )",computer-vision 43b09bd5-a22b-4813-9e1e-caaa0fee2870,image-to-video,Image to Video Generation,"**Image to Video Generation** refers to the task of generating a sequence of video frames based on a single still image or a set of still images. The goal is to produce a video that is coherent and consistent in terms of appearance, motion, and style, while also being temporally consistent, meaning that the generated video should look like a coherent sequence of frames that are temporally ordered. This task is typically tackled using deep generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), that are trained on large datasets of videos. 
The models learn to generate plausible video frames that are conditioned on the input image, as well as on any other auxiliary information, such as a sound or text track.",computer-vision d4e8a6f5-d54c-4ec8-b4b6-3f730b1f3e7f,zero-shot-skeletal-action-recognition,Zero Shot Skeletal Action Recognition,Zero-Shot Learning for 3D skeletal action recognition,computer-vision d65cd8b5-e067-4708-a79f-ec91d292f007,single-image-desnowing,Single Image Desnowing,,computer-vision c5a1b93a-6ff3-4e47-85d4-d8e3fe3b689b,egocentric-activity-recognition,Egocentric Activity Recognition,,computer-vision 5310ac2c-6fc1-4814-8e90-8f9ce2ded917,salient-object-detection,RGB Salient Object Detection,"RGB Salient object detection is a task based on a visual attention mechanism, in which algorithms aim to detect objects or regions that attract more attention than the surrounding areas in a scene or RGB image. ( Image credit: [Attentive Feedback Network for Boundary-Aware Salient Object Detection](http://openaccess.thecvf.com/content_CVPR_2019/papers/Feng_Attentive_Feedback_Network_for_Boundary-Aware_Salient_Object_Detection_CVPR_2019_paper.pdf) )",computer-vision 35c78196-4bbd-4e63-9d79-e03387469411,image-retrieval,Image Retrieval,"**Image Retrieval** is a computer vision task that involves searching for images in a large database that are similar to a given query image. The goal of image retrieval is to enable users to find images that match their interests or needs, based on visual similarity or other criteria. ( Image credit: [DELF](https://github.com/tensorflow/models/tree/master/research/delf) )",computer-vision 5736af5e-d9a0-4938-8370-bd911fe86dbc,video-object-tracking,Video Object Tracking,Video Object Detection aims to detect targets in videos using both spatial and temporal information. It's usually deeply integrated with tasks such as Object Detection and Object Tracking.,computer-vision 37369819-d335-44b9-a9fc-339a92e11785,blind-image-deblurring,Blind Image Deblurring,"**Blind Image Deblurring** is a classical problem in image processing and computer vision, which aims to recover a latent image from a blurred input. Source: [Learning a Discriminative Prior for Blind Image Deblurring ](https://arxiv.org/abs/1803.03363)",computer-vision 2b0e66f8-c7f0-4747-a695-212dcff81ef4,crack-segmentation,Crack Segmentation,,computer-vision 6860db98-a17c-4041-8ff2-ab7086f5d41c,3d-canonicalization,3D Canonicalization,3D Canonicalization is the process of estimating a transformation-invariant feature for classification and part segmentation tasks.,computer-vision 3f8510cf-06b1-4677-91ec-262e45cb381a,multi-label-classification,Multi-Label Classification,"**Multi-Label Classification** is the supervised learning problem where an instance may be associated with multiple labels. This is an extension of single-label classification (i.e., multi-class, or binary) where each instance is only associated with a single class label. Source: [Deep Learning for Multi-label Classification ](https://arxiv.org/abs/1502.05988)",computer-vision 28b0199c-0df6-4668-8e32-850e29094da8,transparent-object-depth-estimation,Transparent Object Depth Estimation,Estimating the 3D shape of transparent objects,computer-vision 63fcb9a2-a957-47c0-97c3-b9eb1e7c4731,human-parsing,Human Parsing,"Human parsing is the task of segmenting a human image into different fine-grained semantic parts such as head, torso, arms and legs. 
( Image credit: [Multi-Human-Parsing (MHP) ](https://github.com/ZhaoJ9014/Multi-Human-Parsing) )",computer-vision 7595e19d-e77e-44d0-8737-dc5e0d6d3d73,few-shot-image-classification,Few-Shot Image Classification,"**Few-Shot Image Classification** is a computer vision task that involves training machine learning models to classify images into predefined categories using only a few labeled examples of each category (typically < 6 examples). The goal is to enable models to recognize and classify new images with minimal supervision and limited data, without having to train on large datasets. ( Image credit: [Learning Embedding Adaptation for Few-Shot Learning](https://github.com/Sha-Lab/FEAT) )",computer-vision 67fa00c2-bcfb-45ce-ac28-bb5c7f2fd083,sar-image-despeckling,SAR Image Despeckling,"Despeckling is the task of suppressing speckle from Synthetic Aperture Radar (SAR) acquisitions. Image credits: GRD Sentinel-1 SAR image despeckled with [SAR2SAR-GRD](https://arxiv.org/abs/2102.00692)",computer-vision 6bfa3ece-58ae-40de-9076-7fb382ac4f7c,face-transfer,Face Transfer,"**Face Transfer** is a method for mapping face performances of one individual to facial animations of another one. It uses facial expressions and head poses from the video of a source actor to generate a video of a target character. Face Transfer is a special case of image-to-image translation tasks. Source: [Face Transfer with Generative Adversarial Network ](https://arxiv.org/abs/1710.06090)",computer-vision 59ed9cbc-da42-4673-9d56-26ed5a2146e8,rf-based-gesture-recognition,RF-based Gesture Recognition,"RF-based gesture sensing and recognition has increasingly attracted intense academic and industrial interest due to its various device-free applications in daily life, such as elderly monitoring and mobile games. State-of-the-art approaches achieved accurate gesture sensing by using fine-grained RF signatures (such as CSI, Doppler effect) but could not achieve the same accuracy with coarse-grained RF signatures such as received signal strength (RSS). See e.g. Project Soli in depth: How radar-detected gestures could set the Pixel 4 apart An experimental Google project may finally be ready to make its way into the real world — and the implications could be enormous. https://www.computerworld.com/article/3402019/google-project-soli-pixel-4.html ( Image credit: [Accurate Human Gesture Sensing With Coarse-Grained RF Signatures](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8737967) )",computer-vision 6d714383-ae95-47e4-ba7b-302e59aeaa61,multi-label-image-classification,Multi-Label Image Classification,Multi-Label Image Classification focuses on predicting labels for images in a multi-class classification problem where each image may belong to more than one class.,computer-vision 0ec7ff60-d4cb-43a8-9ce1-8b49d3f73b31,super-resolution,Super-Resolution,"**Super-Resolution** is a task in computer vision that involves increasing the resolution of an image or video by generating missing high-frequency details from low-resolution input. The goal is to produce an output image with a higher resolution than the input image, while preserving the original content and structure. ( Credit: [MemNet](https://github.com/tyshiwo/MemNet) )",computer-vision 6911a146-595a-4ede-bf8c-049723b69828,autonomous-driving,Autonomous Driving,"Autonomous driving is the task of driving a vehicle without human intervention. 
Many of the state-of-the-art results can be found at more general task pages such as [3D Object Detection](https://paperswithcode.com/task/3d-object-detection) and [Semantic Segmentation](https://paperswithcode.com/task/semantic-segmentation). (Image credit: [Exploring the Limitations of Behavior Cloning for Autonomous Driving](https://arxiv.org/pdf/1904.08980v1.pdf))",computer-vision 31091359-aebc-4e84-9e1b-6f8369967498,foreground-segmentation,Foreground Segmentation,,computer-vision 954e3c7c-5a2c-4563-bbd1-620b86dc8826,weakly-supervised-action-recognition,Weakly-Supervised Action Recognition,Action recognition with single-point annotations in time (there are no action start/stop time annotations),computer-vision 4abdfe09-d83c-4f6c-8943-e722b434c19d,unsupervised-anomaly-detection-with-specified-6,Unsupervised Anomaly Detection with Specified Settings -- 0.1% anomaly,,computer-vision c973715d-82e4-4bfc-b4e5-0b7f963daee5,person-search,Person Search,"**Person Search** is a task which aims at matching a specific person among a great number of whole scene images. Source: [Re-ID Driven Localization Refinement for Person Search ](https://arxiv.org/abs/1909.08580)",computer-vision 65e95a7c-ba47-42c2-8e22-7490525cedf1,personality-trait-recognition,Personality Trait Recognition,,computer-vision 2b0fa2d8-4257-4dae-95a8-aae810db676c,online-action-detection,Online Action Detection,Online action detection is the task of predicting the action as soon as it happens in a streaming video without access to video frames in the future.,computer-vision ebda6d57-6019-4133-bc84-2da9bc13d790,crowd-counting,Crowd Counting,"**Crowd Counting** is a task to count people in image. It is mainly used in real-life for automated public monitoring such as surveillance and traffic control. Different from object detection, Crowd Counting aims at recognizing arbitrarily sized targets in various situations including sparse and cluttering scenes at the same time. Source: [Deep Density-aware Count Regressor ](https://arxiv.org/abs/1908.03314)",computer-vision be8ae9a1-737c-4eef-96d9-2691717f7ca5,object-localization,Object Localization,"**Object Localization** is the task of locating an instance of a particular object category in an image, typically by specifying a tightly cropped bounding box centered on the instance. An object proposal specifies a candidate bounding box, and an object proposal is said to be a correct localization if it sufficiently overlaps a human-labeled “ground-truth” bounding box for the given object. In the literature, the “Object Localization” task is to locate one instance of an object category, whereas “object detection” focuses on locating all instances of a category in a given image. Source: [Fast On-Line Kernel Density Estimation for Active Object Localization ](https://arxiv.org/abs/1611.05369)",computer-vision 987f36a3-a87b-43d2-8941-d8a8a14cecf5,age-invariant-face-recognition,Age-Invariant Face Recognition,"Age-invariant face recognition is the task of performing face recognition that is invariant to differences in age. ( Image credit: [Look Across Elapse](https://arxiv.org/pdf/1809.00338v2.pdf) )",computer-vision e820a802-9ee5-48c4-b9e2-6721d702255f,video-object-segmentation,Video Object Segmentation,"Video object segmentation is a binary labeling problem aiming to separate foreground object(s) from the background region of a video. 
For leaderboards please refer to the different subtasks.",computer-vision 636445d1-b9ef-4e8e-bda1-79bec5277c69,semi-supervised-sketch-based-image-retrieval,Semi-Supervised Sketch Based Image Retrieval,"Whilst the number of photos can be easily scaled, each corresponding sketch still needs to be individually produced for fine-grained sketch-based image retrieval. The objective is to mitigate such an upper-bound on sketch data, and study whether unlabelled photos alone (of which there are many) can be cultivated for performance gain.",computer-vision e2e2afb9-0298-4188-b80f-3a7913293400,facial-landmark-detection,Facial Landmark Detection,"**Facial Landmark Detection** is a computer vision task that involves detecting and localizing specific points or landmarks on a face, such as the eyes, nose, mouth, and chin. The goal is to accurately identify these landmarks in images or videos of faces in real-time and use them for various applications, such as face recognition, facial expression analysis, and head pose estimation. ( Image credit: [Style Aggregated Network for Facial Landmark Detection](https://arxiv.org/pdf/1803.04108v4.pdf) )",computer-vision d029c770-4dd1-43db-b119-bf32af1f1faf,point-supervised-instance-segmentation,Point-Supervised Instance Segmentation,Weakly-Supervised Instance Segmentation using Point Labels,computer-vision 1731ac36-ccb1-4dca-83c9-d18400b42ce8,image-generation,Image Generation,"**Image Generation** (synthesis) is the task of generating new images from an existing dataset. - **Unconditional generation** refers to generating samples unconditionally from the dataset, i.e. $p(y)$ - **[Conditional image generation](/task/conditional-image-generation)** (subtask) refers to generating samples conditionally from the dataset, based on a label, i.e. $p(y|x)$. In this section, you can find state-of-the-art leaderboards for **unconditional generation**. For conditional generation, and other types of image generation, refer to the subtasks. ( Image credit: [StyleGAN](https://github.com/NVlabs/stylegan) )",computer-vision c3c3b640-4a68-430b-8ee9-2a206adb44d0,video-prediction,Video Prediction,"**Video Prediction** is the task of predicting future frames given past video frames. 
Gif credit: [MAGVIT ](https://magvit.cs.cmu.edu/) Source: [Photo-Realistic Video Prediction on Natural Videos of Largely Changing Frames ](https://arxiv.org/abs/2003.08635)",computer-vision 65675d4f-d4b5-45ea-9d41-fd8e48ae161d,gender-prediction,Gender Prediction,,computer-vision 95db7c2b-49b5-405f-ac0d-511befed8a0d,reference-based-super-resolution,Reference-based Super-Resolution,Reference-based Super-Resolution aims to recover high-resolution images by utilizing external reference images containing similar content to generate rich textures.,computer-vision 8af168b2-423f-4b5c-b1a6-a2e20da64e0a,gait-identification,Gait Identification,,computer-vision 57d490de-b757-4134-b233-560c97b6c6f6,network-interpretation,Network Interpretation,,computer-vision 7df4546c-78dc-48b2-af88-8a9ab968bfa8,video-semantic-segmentation,Video Semantic Segmentation,,computer-vision 22fc6559-f392-4aa6-8edb-95279b97c4ae,aggregate-xview3-metric,Aggregate xView3 Metric,"The aggregate xView3 metric is the combination of five metrics: object detection F1 score, close-to-shore object detection F1 score, vessel/not vessel classification F1 score, fishing/not fishing classification F1 score, and vessel length estimation percent error regression.",computer-vision ffc07ba2-5a45-49ff-8827-852c15ed8de4,vehicle-pose-estimation,Vehicle Pose Estimation,"Image Credit: [GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision, ECCV'20](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123600511.pdf)",computer-vision 31d613b5-1c80-415c-aa4b-0a4d38e69f1e,persuasion-strategies,Persuasion Strategies,Prediction of Persuasion Strategy in Advertisements,computer-vision 1e51e41c-4caf-42ef-b872-3a67c8569a7f,semi-supervised-video-classification,Semi-Supervised Video Classification,,computer-vision 036c1952-7413-480f-9394-d8ceff178bc4,imagedocument-clustering,Image/Document Clustering,,computer-vision 74651c67-2d20-4172-a0ba-0b0b51a4c6df,hyperspectral,Hyperspectral,,computer-vision 4138b865-7b71-4a1a-b9be-0a3b10b9f6f9,video-stabilization,Video Stabilization,,computer-vision 33a50198-d5d4-458d-acb8-479a67c019da,video-story-qa,Video Story QA,MCQ about clips from movies/tvshows/etc,computer-vision eee3f8d8-ed98-4c1e-80a3-dcd88b2b4c9f,multiple-object-forecasting,Multiple Object Forecasting,"( Image credit: [Multiple Object Forecasting](https://github.com/olly-styles/Multiple-Object-Forecasting) )",computer-vision c972ad5a-1348-48eb-88b2-e4ad198a45d8,unsupervised-person-re-identification,Unsupervised Person Re-Identification,,computer-vision 6d159c55-ea34-4fe6-a831-15937473e67a,shadow-detection,Shadow Detection,,computer-vision d21d59ee-9687-4a19-8c14-bbe87cd6a453,indoor-scene-reconstruction,Indoor Scene Reconstruction,,computer-vision b6e43627-371e-433d-ac1e-a30dbe7d0ec9,line-segment-detection,Line Segment Detection,,computer-vision 04053f9e-1b71-4ff9-81f9-ccb5a0b6a8d4,continuous-affect-estimation,Continuous Affect Estimation,,computer-vision 70a17e0d-5b41-4dc3-a5a3-8cf508e1ec26,monocular-depth-estimation,Monocular Depth Estimation,"**Monocular Depth Estimation** is the task of estimating the depth value (distance relative to the camera) of each pixel given a single (monocular) RGB image. This challenging task is a key prerequisite for determining scene understanding for applications such as 3D scene reconstruction, autonomous driving, and AR. 
State-of-the-art methods usually fall into one of two categories: designing a complex network that is powerful enough to directly regress the depth map, or splitting the input into bins or windows to reduce computational complexity. The most popular benchmarks are the KITTI and NYUv2 datasets. Models are typically evaluated using RMSE or absolute relative error. Source: [Defocus Deblurring Using Dual-Pixel Data ](https://arxiv.org/abs/2005.00305)",computer-vision 396972f2-076a-49d3-bd20-808823e2fbb3,multi-object-tracking,Multi-Object Tracking,"**Multi-Object Tracking** is a task in computer vision that involves detecting and tracking multiple objects within a video sequence. The goal is to identify and locate objects of interest in each frame and then associate them across frames to keep track of their movements over time. This task is challenging due to factors such as occlusion, motion blur, and changes in object appearance, and is typically solved using algorithms that integrate object detection and data association techniques.",computer-vision 2962b4d5-3e55-4bfc-acf8-ebf5d12b85c1,zero-shot-transfer-3d-point-cloud,Zero-Shot Transfer 3D Point Cloud Classification,,computer-vision 22c2aa5b-8fee-487f-8ae9-d4ba42904a30,point-cloud-generation,Point Cloud Generation,,computer-vision bb4c7451-29c0-411a-9b44-46721349623b,lane-detection,Lane Detection,"**Lane Detection** is a computer vision task that involves identifying the boundaries of driving lanes in a video or image of a road scene. The goal is to accurately locate and track the lane markings in real-time, even in challenging conditions such as poor lighting, glare, or complex road layouts. Lane detection is an important component of advanced driver assistance systems (ADAS) and autonomous vehicles, as it provides information about the road layout and the position of the vehicle within the lane, which is crucial for navigation and safety. The algorithms typically use a combination of computer vision techniques, such as edge detection, color filtering, and Hough transforms, to identify and track the lane markings in a road scene. ( Image credit: [End-to-end Lane Detection ](https://github.com/wvangansbeke/LaneDetection_End2End) )",computer-vision 21acd3a6-c423-4e85-b71c-988982a3659c,multimodal-unsupervised-image-to-image,Multimodal Unsupervised Image-To-Image Translation,"Multimodal unsupervised image-to-image translation is the task of producing multiple translations to one domain from a single image in another domain. ( Image credit: [MUNIT: Multimodal UNsupervised Image-to-image Translation](https://github.com/NVlabs/MUNIT) )",computer-vision ba1945ad-9775-48eb-8c79-2718646096de,human-detection,Human Detection,,computer-vision 78119481-87ce-4d8d-b491-5861e1a7f61f,abnormal-event-detection-in-video,Abnormal Event Detection In Video,"**Abnormal Event Detection In Video** is a challenging task in computer vision, as the definition of what an abnormal event looks like depends very much on the context. For instance, a car driving by on the street is regarded as a normal event, but if the car enters a pedestrian area, this is regarded as an abnormal event. A person running on a sports court (normal event) versus running outside from a bank (abnormal event) is another example. 
Although what is considered abnormal depends on the context, we can generally agree that abnormal events should be unexpected events that occur less often than familiar (normal) events Source: [Unmasking the abnormal events in video ](https://arxiv.org/abs/1705.08182) Image: [Ravanbakhsh et al](https://arxiv.org/pdf/1708.09644v1.pdf)",computer-vision 546cca6d-7abb-4d9a-a790-78f951298a46,single-image-deraining,Single Image Deraining,,computer-vision c3a4e35a-b059-4325-987d-dc8f98a283de,fake-image-attribution,Fake Image Attribution,Attribute the origin (model/architecture) of fake images.,computer-vision 72d70a5a-854c-4820-8de3-7dee47cbcf55,pose-retrieval,Pose Retrieval,Retrieval of similar human poses from images or videos,computer-vision 10d2b5fc-4ca3-4cdb-9a66-54cc5b973630,joint-deblur-and-frame-interpolation,Joint Deblur and Frame Interpolation,,computer-vision 5bc6d331-9bcb-4502-81a3-1c0afa2d1390,low-light-pedestrian-detection,Low-light Pedestrian Detection,Pedestrian Detection in low-light condition,computer-vision a9988fb4-0873-4a97-ad89-90ac74058468,action-recognition-in-videos-2,Action Recognition In Videos,"**Action Recognition in Videos** is a task in computer vision and pattern recognition where the goal is to identify and categorize human actions performed in a video sequence. The task involves analyzing the spatiotemporal dynamics of the actions and mapping them to a predefined set of action classes, such as running, jumping, or swimming.",computer-vision 0603cf32-9720-49b0-bb34-af5ab9cb11bf,video-enhancement,Video Enhancement,,computer-vision 2f3b57f1-47fa-4214-9f9c-90f13e09d24f,uie,UIE,,computer-vision d19e6a7a-7daf-406c-a161-b94bb207047b,amodal-tracking,Amodal Tracking,,computer-vision 0b9e69b1-bfb4-4662-858b-d571eb644998,scanpath-prediction,Scanpath prediction,Learning to Predict Sequences of Human Fixations.,computer-vision dacf681d-9609-4789-881e-5fb49ce3df6e,semi-supervised-instance-segmentation,Semi-Supervised Instance Segmentation,,computer-vision 56610cc1-d82d-496f-82e4-8dc9b52f4580,weakly-supervised-semantic-segmentation,Weakly-Supervised Semantic Segmentation,"The semantic segmentation task is to assign a label from a label set to each pixel in an image. In the case of fully supervised setting, the dataset consists of images and their corresponding pixel-level class-specific annotations (expensive pixel-level annotations). However, in the weakly-supervised setting, the dataset consists of images and corresponding annotations that are relatively easy to obtain, such as tags/labels of objects present in the image. ( Image credit: [Weakly-Supervised Semantic Segmentation Network with Deep Seeded Region Growing](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_Weakly-Supervised_Semantic_Segmentation_CVPR_2018_paper.pdf) )",computer-vision 51b1de14-9c70-4879-adf0-15d1aef5cfdf,image-inpainting,Image Inpainting,"**Image Inpainting** is a task of reconstructing missing regions in an image. It is an important problem in computer vision and an essential functionality in many imaging and graphics applications, e.g. object removal, image restoration, manipulation, re-targeting, compositing, and image-based rendering. 
Source: [High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling ](https://arxiv.org/abs/2005.11742) Image source: [High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling](https://arxiv.org/pdf/2005.11742.pdf)",computer-vision 01a7a6ef-417a-4847-ac2c-0a4917128339,visibility-estimation-from-point-cloud,Visibility Estimation from Point Cloud,"Estimate the point-wise visibility of each point from a given point of view (a point, or a view frustum).",computer-vision 7b89bcea-68eb-4113-a937-4a56ea6bb641,weakly-supervised-object-detection,Weakly Supervised Object Detection,"Weakly Supervised Object Detection (WSOD) is the task of training object detectors with only image-tag supervision. ( Image credit: [Soft Proposal Networks for Weakly Supervised Object Localization](https://arxiv.org/pdf/1709.01829v1.pdf) )",computer-vision 4784bd26-3a9b-4d14-8fec-24405e84895b,lidar-semantic-segmentation,LIDAR Semantic Segmentation,,computer-vision e0e3a025-9188-4c40-ad61-6f9da066db37,6d-pose-estimation-1,6D Pose Estimation,Image: [Zeng et al](https://arxiv.org/pdf/1609.09475v3.pdf),computer-vision a30b692c-c6ce-49c7-8af3-6c4abc3c7656,boundary-detection,Boundary Detection,"**Boundary Detection** is a vital part of extracting information encoded in images, allowing for the computation of quantities of interest including density, velocity, pressure, etc. Source: [A Locally Adapting Technique for Boundary Detection using Image Segmentation ](https://arxiv.org/abs/1707.09030)",computer-vision f6a6f0c9-7e51-41cf-9ed9-cf9ec3bb85d1,class-incremental-semantic-segmentation,Class-Incremental Semantic Segmentation,Semantic segmentation with continuous increments of classes.,computer-vision d032ed18-dc2c-4dc3-aadc-253169ccda47,image-compression-artifact-reduction,Image Compression Artifact Reduction,,computer-vision ab2999ec-ef32-41ee-bf51-dbc7d8bbfdb0,face-image-quality-assessment,Face Image Quality Assessment,,computer-vision a5314c01-7a05-485d-9942-68ddbaac9b3a,visual-object-tracking,Visual Object Tracking,"**Visual Object Tracking** is an important research topic in computer vision, image understanding and pattern recognition. Given the initial state (centre location and scale) of a target in the first frame of a video sequence, the aim of Visual Object Tracking is to automatically obtain the states of the object in the subsequent video frames. 
Source: [Learning Adaptive Discriminative Correlation Filters via Temporal Consistency Preserving Spatial Feature Selection for Robust Visual Object Tracking ](https://arxiv.org/abs/1807.11348)",computer-vision d03ec41d-31ce-4026-8266-b80567f0ddc8,weakly-supervised-temporal-action,Weakly-supervised Temporal Action Localization,Temporal Action Localization with weak supervision where only video-level labels are given for training,computer-vision 16e5b13a-1a18-447e-b4c4-d22888f4014c,interactive-segmentation,Interactive Segmentation,,computer-vision fedfaa67-a836-4931-abeb-ca68ba00c8a4,weakly-supervised-object-localization,Weakly-Supervised Object Localization,,computer-vision e1886bc6-01a5-4423-bcf2-e584313c9cb1,pulmorary-vessel-segmentation,Pulmonary Vessel Segmentation,,computer-vision 595d5e0e-58b8-4266-8e82-1d9e7c0661c1,detecting-shadows,Detecting Shadows,,computer-vision 53be3fff-e695-4495-b0c3-0b15478413eb,handwritten-chinese-text-recognition,Handwritten Chinese Text Recognition,"Handwritten Chinese text recognition is the task of interpreting handwritten Chinese input, e.g., from images of documents or scans.",computer-vision 6c9ac075-dd98-483e-8d2f-56812b1f6479,viewpoint-estimation,Viewpoint Estimation,,computer-vision 601ef08e-d792-47d5-bf61-ac408723fad9,autonomous-navigation,Autonomous Navigation,"Autonomous navigation is the task of autonomously navigating a vehicle or robot to or around a location without human guidance. ( Image credit: [Approximate LSTMs for Time-Constrained Inference: Enabling Fast Reaction in Self-Driving Cars](https://arxiv.org/pdf/1905.00689v2.pdf) )",computer-vision 0a600d2e-e676-4e23-9e5e-7c5cc2fe1a2f,facial-makeup-transfer,Facial Makeup Transfer,Facial makeup transfer aims to translate the **makeup style** from a given *reference* makeup face image to another non-makeup one while *preserving face identity*.,computer-vision 61ddb633-515b-422e-9347-b0d379eb3510,3d-multi-person-pose-estimation,3D Multi-Person Pose Estimation,"This task aims to solve root-relative 3D multi-person pose estimation. No human bounding box or root joint coordinate ground truth is used at testing time. ( Image credit: [RootNet](https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE) )",computer-vision 9ec41186-ff44-4f73-9d11-e0fd17bb79e8,few-shot-3d-point-cloud-classification,Few-Shot 3D Point Cloud Classification,,computer-vision 8848bc35-ff51-4887-a4a1-c2601d39340e,video-similarity,Video Similarity,,computer-vision 16b20484-e122-4d3d-946f-67053a190210,tumor-segmentation,Tumor Segmentation,Tumor Segmentation is the task of identifying the spatial location of a tumor. It is a pixel-level prediction where each pixel is classified as a tumor or background. The most popular benchmark for this task is the BraTS dataset. The models are typically evaluated with the Dice Score metric.,computer-vision 85c024b4-9380-49ea-bde3-d7142181e12c,unbiased-scene-graph-generation,Unbiased Scene Graph Generation,"Unbiased Scene Graph Generation (Unbiased SGG) aims to predict more informative scene graphs composed of more ""tail predicates"" (in contrast to ""head predicates"" in terms of class frequencies) by dealing with the skewed, long-tailed predicate class distribution. (Definition from Chiou et al. ""Recovering the Unbiased Scene Graphs from the Biased Ones"")",computer-vision 38b160bf-9d3d-4a2a-982f-4f3d830868eb,irregular-text-recognition,Irregular Text Recognition,"Reading text from an image can be difficult due to the improper angle of the text inside the image or due to an unusual font. 
Hence, to recognize the text data from the image, Irregular Text Recognition is used.",computer-vision 2d262035-ad6a-44a4-a036-b890efb09902,bird-view-synthesis,Bird View Synthesis,,computer-vision 40f7e7b6-0ef0-450e-af6d-8e56620b8903,underwater-3d-scene-reconstruction,Underwater 3D Scene Reconstruction,,computer-vision 3c1597cc-37d1-4867-818a-92d881f5c506,data-free-quantization,Data Free Quantization,"**Data Free Quantization** is a technique to achieve a highly accurate quantized model without accessing any training data. Source: [Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples](https://arxiv.org/abs/2111.02625)",computer-vision cbbb6823-3ca1-4c30-8f17-dd1ef1164651,3d-point-cloud-part-segmentation,3D Point Cloud Part Segmentation,"3D point cloud part segmentation on datasets like ShapeNet Part, etc.",computer-vision 5e803c94-77a0-4965-a56c-4665be74f089,text-guided-generation,text-guided-generation,,computer-vision bf2796aa-bb78-4b49-aa2d-8e33a9d41d18,unconditional-video-generation,Unconditional Video Generation,,computer-vision ff2bb6ac-50d4-47f1-b8c9-97a463bb324a,semi-supervised-medical-image-segmentation,Semi-supervised Medical Image Segmentation,,computer-vision 9b09076e-5654-416f-b0db-3692755187a6,affordance-detection,Affordance Detection,"Affordance detection refers to identifying the potential action possibilities of objects in an image, which is an important ability for robot perception and manipulation. Image source: [Object-Based Affordances Detection with Convolutional Neural Networks and Dense Conditional Random Fields](https://dkanou.github.io/publ/P15__Nguyen_Kanoulas_Caldwell_Tsagarakis__2017__Object-Based_Affordances_Detection_with_Convolutional_Neural_Networks_and__Dense_Conditional_Random_Fields.pdf) Unlike other visual or physical properties that mainly describe the object alone, affordances indicate functional interactions of object parts with humans.",computer-vision bc210385-4e21-4bca-869c-e1fb0b98846e,image-categorization,Image Categorization,,computer-vision 0f2bbee5-1aec-43c1-8ea7-92a66f4c1bbd,cell-detection,Cell Detection,Cell Detection,computer-vision 376d41bf-cbbd-4edb-9326-7f0460744076,ad-hoc-video-search,Ad-hoc video search,"The Ad-hoc search task ended a 3 year cycle from 2016-2018 with a goal to model the end user search use-case, who is searching (using textual sentence queries) for segments of video containing persons, objects, activities, locations, etc. and combinations of the former. While the Internet Archive (IACC.3) dataset was adopted between 2016 to 2018, starting in 2019 a new data collection based on Vimeo Creative Commons (V3C) will be adopted to support the task for at least 3 more years. Given the test collection (V3C1 or IACC.3), master shot boundary reference, and set of Ad-hoc queries (approx. 
30 queries) released by NIST, return for each query a list of at most 1000 shot IDs from the test collection ranked according to their likelihood of containing the target query.",computer-vision 93514ef4-9f7e-49bd-92ff-620ed533ce22,unsupervised-landmark-detection,Unsupervised Landmark Detection,"The discovery of object landmarks on a set of images depicting objects of the same category, directly from raw images without using any manual annotations.",computer-vision 6dab41db-b468-4e22-9fd2-c82b3f00cc09,video-saliency-detection,Video Saliency Detection,,computer-vision ed3d2280-d135-4b61-a698-845fd44f66f8,unsupervised-anomaly-detection-with-specified,Unsupervised Anomaly Detection with Specified Settings -- 30% anomaly,,computer-vision 1616ade0-dd8a-4f55-97fb-77bf0a8ea188,dense-pixel-correspondence-estimation,Dense Pixel Correspondence Estimation,,computer-vision 73a0b7bb-2234-4ce8-b87c-f32367faf06f,multi-view-learning,MULTI-VIEW LEARNING,"**Multi-View Learning** is a machine learning framework where data are represented by multiple distinct feature groups, and each feature group is referred to as a particular view. Source: [Dissimilarity-based representation for radiomics applications ](https://arxiv.org/abs/1803.04460)",computer-vision f38181ae-f1be-49df-a4dd-ebdc4a431bc9,action-localization,Action Localization,"Action Localization is finding the spatial and temporal coordinates of an action in a video. An action localization model will identify the frames in which an action starts and ends in a video and return the x,y coordinates of the action. Further, the coordinates will change when the object performing the action undergoes a displacement.",computer-vision ba21bc82-830d-488b-8c9c-60c7e4102dd7,single-image-dehazing,Single Image Dehazing,,computer-vision 2f6a1f14-1a80-4e3a-a8eb-2cabd32a6563,artistic-style-classification,Artistic style classification,Classify the artistic style of an artwork image,computer-vision 0f62b861-7e85-46ec-a2d0-3360e9e7a20b,text-to-shape-generation,Text-to-Shape Generation,,computer-vision fc73ee44-cff8-4e82-a9e9-64f0e9a9de32,skills-assessment,Skills Assessment,,computer-vision fb952c29-e739-44d5-befc-0fee7e8092ef,3d-semantic-segmentation,3D Semantic Segmentation,"**3D Semantic Segmentation** is a computer vision task that involves dividing a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal of 3D semantic segmentation is to identify and label different objects and parts within a 3D scene, which can be used for applications such as robotics, autonomous driving, and augmented reality.",computer-vision 47ecd5bc-fa14-4c80-8436-c62667373394,style-generalization,Style Generalization,,computer-vision 30175eab-9a93-4a07-ae6c-85e94b7d2ed5,handwritten-word-segmentation,Handwritten Word Segmentation,Handwritten Word Segmentation from Document Images.,computer-vision c9079f67-ab6a-447a-981a-6ef3e2b94d19,unsupervised-image-decomposition,Unsupervised Image Decomposition,,computer-vision 7a47e431-679e-45b7-a7f1-37a478417eee,hand-segmentation,Hand Segmentation,,computer-vision 171011ad-140c-495c-ba83-fb6dec7ad4cf,novel-view-synthesis,Novel View Synthesis,"Synthesize a target image with an arbitrary target camera pose from given source images and their camera poses. 
( Image credit: [Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence](https://github.com/shaohua0116/Multiview2Novelview) )",computer-vision f69ecdef-9d30-45e9-bd88-e7a46cf4dc86,sparse-representation-based-classification,Sparse Representation-based Classification,Sparse Representation-based Classification is the task based on the description of the data as a linear combination of few building blocks - atoms - taken from a pre-defined dictionary of such fundamental elements.,computer-vision be694c02-fa7d-4933-aa6b-892d4a63111c,composed-image-retrieval,Composed Image Retrieval (CoIR),"**Composed Image Retrieval (CoIR)** is the task involves retrieving images from a large database based on a query composed of multiple elements, such as text, images, and sketches. The goal is to develop algorithms that can understand and combine multiple sources of information to accurately retrieve images that match the query, extending the user’s expression ability.",computer-vision ce1ae882-4ce1-4f23-b464-90245a1851aa,drawing-pictures,Drawing Pictures,,computer-vision 9ed85bfc-ce45-48aa-9022-c91e0aec689f,patch-matching,Patch Matching,,computer-vision 8242ec1a-99be-4c31-b2b8-e3fe8e48ef8b,unsupervised-3d-human-pose-estimation,Unsupervised 3D Human Pose Estimation,,computer-vision ecae40ab-a500-47dc-8a50-70a00ccd35bc,highlight-removal,highlight removal,,computer-vision 6e0becb4-3a55-4bd5-b791-d60c6a15889a,spatial-token-mixer,Spatial Token Mixer,Spatial Token Mixer (STM) is a module for vision transformers that aims to improve the efficiency of token mixing. STM is a type of depthwise convolution that operates on the spatial dimension of the tokens. STM is a drop-in replacement for the token mixing layers in vision transformers.,computer-vision 5faad284-d305-4418-a457-96fe22efbfca,traffic-sign-detection,Traffic Sign Detection,,computer-vision f655fccb-9967-4429-83ed-6f5a3574c291,part-level-panoptic-segmentation,Part-aware Panoptic Segmentation,Panoptic segmentation with part-aware predictions.,computer-vision b95b90a5-b7a2-46d6-9f46-c900915f6a74,crowds,Crowds,,computer-vision f4bddc02-d2b5-40d2-a658-30e093cf8856,blind-face-restoration,Blind Face Restoration,"Blind face restoration aims at recovering high-quality faces from the low-quality counterparts suffering from unknown degradation, such as low-resolution, noise, blur, compression artifacts, etc. When applied to real-world scenarios, it becomes more challenging, due to more complicated degradation, diverse poses and expressions. Description source: [Towards Real-World Blind Face Restoration with Generative Facial Prior](https://paperswithcode.com/paper/towards-real-world-blind-face-restoration) Image source: [Towards Real-World Blind Face Restoration with Generative Facial Prior](https://paperswithcode.com/paper/towards-real-world-blind-face-restoration)",computer-vision 11f1c669-f37d-4965-9a3a-cc9b6a61d7ba,mri-reconstruction,MRI Reconstruction,"In its most basic form, MRI reconstruction consists in retrieving a complex-valued image from its under-sampled Fourier coefficients. Besides, it can be addressed as a encoder-decoder task, in which the normative model in the latent space will only capture the relevant information without noise or corruptions. 
Then, we decode the latent space in order to have a reconstructed MRI.",computer-vision 3b579a4f-2357-45dc-b2b2-ef5bcebcb812,semi-supervised-2d-and-3d-landmark-labeling,Semi-supervised 2D and 3D landmark labeling,,computer-vision 2710c165-d6b5-466b-87a0-4f327f8ef3ba,occlusion-handling,Occlusion Handling,,computer-vision 8777a4b5-8ce1-4f94-82df-1b8e15247242,pornography-detection,Pornography Detection,,computer-vision 1935415c-9138-412a-96fe-f4426387fd22,unsupervised-long-term-person-re-1,Unsupervised Long Term Person Re-Identification,"Long-term Person Re-Identification (Clothes-Changing Person Re-ID) is a computer vision task in which the goal is to match a person's identity across different cameras, clothes, and locations in a video or image sequence. It involves detecting and tracking a person and then using features such as appearance and body shape to match their identity in different frames. The goal is to associate the same person across multiple non-overlapping camera views in a robust and efficient manner.",computer-vision 6af7fd79-5ca6-442b-9bd9-af4b61262e44,3d-facial-landmark-localization,3D Facial Landmark Localization,Image: [Zhang et al](https://arxiv.org/pdf/1801.09242v1.pdf),computer-vision 7351d8fe-f690-42e0-ae37-93169a5cba6e,content-based-image-retrieval,Content-Based Image Retrieval,"**Content-Based Image Retrieval** is a well studied problem in computer vision, with retrieval problems generally divided into two groups: category-level retrieval and instance-level retrieval. Given a query image of the Sydney Harbour bridge, for instance, category-level retrieval aims to find any bridge in a given dataset of images, whilst instance-level retrieval must find the Sydney Harbour bridge to be considered a match. Source: [Camera Obscurer: Generative Art for Design Inspiration ](https://arxiv.org/abs/1903.02165)",computer-vision 06969490-aea3-44f1-8b1d-29e848402f2a,photo-geolocation-estimation,Photo geolocation estimation,**Photo geolocation estimation** is the task of estimating or classifying the geolocation of a photo on a world map.,computer-vision 8d4ccd97-8620-4d4d-b9ed-90406db4eb30,homography-estimation,Homography Estimation,"**Homography estimation** is a technique used in computer vision and image processing to find the relationship between two images of the same scene, but captured from different viewpoints. It is used to align images, correct for perspective distortions, or perform image stitching. In order to estimate the homography, a set of corresponding points between the two images must be found, and a mathematical model must be fit to these points. There are various algorithms and techniques that can be used to perform homography estimation, including direct methods, RANSAC, and machine learning-based approaches.",computer-vision cbcfef11-2fa8-4033-b414-4781afe23100,driver-attention-monitoring,Driver Attention Monitoring,"Driver attention monitoring is the task of monitoring the attention of a driver. ( Image credit: [Predicting Driver Attention in Critical Situations](https://arxiv.org/pdf/1711.06406v3.pdf) )",computer-vision 9e9a1db1-da31-4157-8c15-c3226cee353f,group-activity-recognition,Group Activity Recognition,"**Group Activity Recognition** is a subset of the human activity recognition problem which focuses on the collective behavior of a group of people, resulting from the individual actions of the persons and their interactions. Collective activity recognition is a basic task for automatic human behavior analysis in many areas like surveillance or sports videos. 
Source: [A Multi-Stream Convolutional Neural Network Framework for Group Activity Recognition ](https://arxiv.org/abs/1812.10328)",computer-vision daf297c8-3a58-44e7-a2fa-bb5843f899c7,calving-front-delineation-from-synthetic,Calving Front Delineation From Synthetic Aperture Radar Imagery,,computer-vision de07d2ec-5acb-4c67-ae6a-cbe64e8fc518,self-supervised-action-recognition,Self-Supervised Action Recognition,,computer-vision 10a753d7-83b1-49c9-aac3-fd0b911a1c19,unet-segmentation,UNET Segmentation,"U-Net is an architecture for semantic segmentation. It consists of a contracting path (Up to down) and an expanding path (Down to up). During the contraction, the spatial information is reduced while feature information is increased. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step, we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer, a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.",computer-vision e7630fbc-c4ea-43ff-b44f-21d23443dd0d,brain-landmark-detection,Brain landmark detection,,computer-vision 0aa96e3a-fc74-433e-a8b8-5115fda1a78e,overlapped-15-1,Overlapped 15-1,,computer-vision c35fb92b-0adf-47ee-a2fa-8febe7b28311,overlapped-50-50,Overlapped 50-50,,computer-vision 6d3b69f9-34c4-4243-bd8e-2f73011235e2,general-action-video-anomaly-detection,General Action Video Anomaly Detection,Detecting if an entire short clip of any action features an anomalous motion - another action class not seen during training.,computer-vision 2cf95d41-71db-4314-8976-c367ec61812c,semantic-slam,Semantic SLAM,SLAM with semantic-level scene understanding,computer-vision 18f1d884-5094-471c-aaa2-a2037cad3aa2,factual-visual-question-answering,Factual Visual Question Answering,,computer-vision 1d76f1e0-fecb-4824-a882-bfc27477c997,video-harmonization,Video Harmonization,Video harmonization aims to adjust the foreground of a composite video to make it compatible with the background.,computer-vision 281f8668-31de-43d3-820f-4c3e566ef1de,6d-pose-estimation,6D Pose Estimation using RGB,"**6D Pose Estimation using RGB** refers to the task of determining the six degree-of-freedom (6D) pose of an object in 3D space based on RGB images. This involves estimating the position and orientation of an object in a scene, and is a fundamental problem in computer vision and robotics. In this task, the goal is to estimate the 6D pose of an object given an RGB image of the object and the scene, which can be used for tasks such as robotic manipulation, augmented reality, and scene reconstruction. 
( Image credit: [Segmentation-driven 6D Object Pose Estimation](https://github.com/cvlab-epfl/segmentation-driven-pose) )",computer-vision b0384c5d-5353-46da-be7a-72c35a30889c,compressive-sensing,Compressive Sensing,"**Compressive Sensing** is a new signal processing framework for efficiently acquiring and reconstructing a signal that have a sparse representation in a fixed linear basis. Source: [Sparse Estimation with Generalized Beta Mixture and the Horseshoe Prior ](https://arxiv.org/abs/1411.2405)",computer-vision 125e3eec-1f34-4ff1-8af5-d021a493cf32,line-detection,Line Detection,,computer-vision ddfa9413-3bc6-44ed-9644-b3650a8b7bcb,video-frame-interpolation,Video Frame Interpolation,"The goal of **Video Frame Interpolation** is to synthesize several frames in the middle of two adjacent frames of the original video. Video Frame Interpolation can be applied to generate slow motion video, increase video frame rate, and frame recovery in video streaming. Source: [Reducing the X-ray radiation exposure frequency in cardio-angiography via deep-learning based video interpolation ](https://arxiv.org/abs/2006.00781)",computer-vision c627238d-bdec-48f2-831b-768102d85111,overlapped-100-50,Overlapped 100-50,,computer-vision 6d95f749-176d-4190-a7a2-6eb091918899,zero-shot-text-to-video-generation,Zero-shot Text-to-Video Generation,,computer-vision 394ae974-bb0f-435c-af1b-a89974f08ba6,semi-supervised-change-detection,Semi-supervised Change Detection,,computer-vision ffd3dde6-eca2-45ed-b18c-1125b6992705,3d-multi-person-mesh-recovery,3D Multi-Person Mesh Recovery,,computer-vision f07d61f5-26ad-4e44-bbff-f4d5c9800dd0,no-reference-image-quality-assessment,No-Reference Image Quality Assessment,An Image Quality Assessment approach where no reference image information is available to the model.,computer-vision 27d78a8a-b637-45c2-a42e-bd2a554abfed,motion-compensation,Motion Compensation,,computer-vision ad359719-b943-4d26-976a-ec9c75466914,face-identification,Face Identification,Face identification is the task of matching a given face image to one in an existing database of faces. It is the second part of face recognition (the first part being detection). It is a one-to-many mapping: you have to find an unknown person in a database to find who that person is.,computer-vision 186ac0d3-f24d-438c-bcc4-426bc895f99b,continual-anomaly-detection,continual anomaly detection,,computer-vision d6edb592-11fb-4426-9829-80fd91df6830,person-centric-visual-grounding,Person-centric Visual Grounding,"Person-centric visual grounding is the problem of linking between people named in a caption and people pictured in an image. Introduced in ""Who's Waldo? Linking People Across Text and Images"" (Cui et al, ICCV 2021).",computer-vision 56b137e9-30a4-4fef-b0dc-2db7a2b7e66a,image-captioning,Image Captioning,"**Image Captioning** is the task of describing the content of an image in words. This task lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework, where an input image is encoded into an intermediate representation of the information in the image, and then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated according to a BLEU or CIDER metric. 
( Image credit: [Reflective Decoding Network for Image Captioning, ICCV'19](https://openaccess.thecvf.com/content_ICCV_2019/papers/Ke_Reflective_Decoding_Network_for_Image_Captioning_ICCV_2019_paper.pdf))",computer-vision bac0f198-8af8-40bb-8f0b-e910dd44098e,accident-anticipation,Accident Anticipation,,computer-vision ce416a32-8154-466d-941d-acd4692d7367,few-shot-object-detection,Few-Shot Object Detection,**Few-Shot Object Detection** is a computer vision task that involves detecting objects in images with limited training data. The goal is to train a model on a few examples of each object class and then use the model to detect objects in new images.,computer-vision 44b16e64-364f-4b8d-8b12-803387855ef6,color-mismatch-correction,Color Mismatch Correction,Color mismatch correction is the task of transferring color from one view of a stereopair to corresponding areas in another where the colors differ incorrectly. Color mismatches can decrease the overall video quality and may cause viewer discomfort and headaches. Similar tasks are color transfer and example-based colorization.,computer-vision ba1f64cb-9cec-48fa-a07c-3f56a5d88eb3,logo-recognition,Logo Recognition,,computer-vision df1fe6ca-9602-4219-8a2f-c8c855738482,dichotomous-image-segmentation,Dichotomous Image Segmentation,"Currently, existing image segmentation tasks mainly focus on segmenting objects with specific characteristics, e.g., salient, camouflaged, meticulous, or specific categories. Most of them have the same input/output formats, and barely use exclusive mechanisms designed for segmenting targets in their models, which means almost all tasks are dataset-dependent. Thus, it is very promising to formulate a category-agnostic DIS task for accurately segmenting objects with different structure complexities, regardless of their characteristics. Compared with semantic segmentation, the proposed DIS task usually focuses on images with single or a few targets, from which getting richer accurate details of each target is more feasible.",computer-vision 6b7eb3a2-721a-46b8-8ede-527dbc70be5d,perpetual-view-generation,Perpetual View Generation,**Perpetual View Generation** is the task of generating long-range novel views by flying into a given image.,computer-vision cb41ca05-24b7-42f3-9d8f-8de8d4bab123,image-morphing,Image Morphing,,computer-vision 606c929b-8b05-438d-8f8b-900340feda03,salient-object-detection-1,Salient Object Detection,,computer-vision a804d9f4-e3db-4095-9280-82b442fddd25,3d-absolute-human-pose-estimation,3D Absolute Human Pose Estimation,"This task aims to solve absolute (camera-centric not root-relative) 3D human pose estimation. ( Image credit: [RootNet](https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE) )",computer-vision 88634101-b1bd-41e0-afa4-e47c11353f6b,audio-visual-synchronization,Audio-Visual Synchronization,,computer-vision 654f7157-6279-442e-8d66-a6bd4f75e1de,small-object-detection,Small Object Detection,"**Small Object Detection** is a computer vision task that involves detecting and localizing small objects in images or videos. This task is challenging due to the small size and low resolution of the objects, as well as other factors such as occlusion, background clutter, and variations in lighting conditions. 
( Image credit: [Feature-Fused SSD](https://arxiv.org/pdf/1709.05054v3.pdf) )",computer-vision 67f05300-edea-4660-8313-25d8134f11b1,lip-to-speech-synthesis,Lip to Speech Synthesis,"Given a silent video of a speaker, generate the corresponding speech that matches the lip movements.",computer-vision 740ad6a7-6d49-49da-b21e-cb61cac4ebc9,line-art-colorization,Line Art Colorization,,computer-vision 613670c1-cfdd-4328-839c-87f0eeb65669,parking-space-occupancy,Parking Space Occupancy,Image credit: [https://github.com/martin-marek/parking-space-occupancy](https://github.com/martin-marek/parking-space-occupancy),computer-vision d38fb463-6848-42af-9f29-fb6850fecb8a,instance-segmentation,Instance Segmentation,"**Instance Segmentation** is a computer vision task that involves identifying and separating individual objects within an image, including detecting the boundaries of each object and assigning a unique label to each object. The goal of instance segmentation is to produce a pixel-wise segmentation map of the image, where each pixel is assigned to a specific object instance. Image Credit: [Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers, CVPR'21](https://github.com/lkeab/BCNet)",computer-vision d341c4c5-a3fb-4b86-9b98-7c6c56a6667a,image-level-supervised-instance-segmentation,Image-level Supervised Instance Segmentation,Weakly-Supervised Instance Segmentation using Image-level Labels,computer-vision bec42272-1ba3-4a89-b14e-11e2f21b1990,observation-completion,Observation Completion,,computer-vision 1c4aa2ad-b9fb-4c22-bf30-576027817bae,cbc-test,CBC TEST,,computer-vision 491de09b-a51c-4d37-9e06-885296bb1971,infinite-image-generation,Infinite Image Generation,"**Infinite Image Generation** refers to the task of generating an unlimited number of images that belong to a specific distribution or category. It is a challenging task, as it requires the model to capture the underlying patterns and distributions in the data, and generate images that are diverse, yet still follow the same patterns. There are various techniques and algorithms that can be used to perform infinite image generation, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Convolutional Neural Networks (CNNs).",computer-vision 14727083-e3f2-41af-87ce-f054cb26d52e,video-alignment,Video Alignment,,computer-vision ba3ef521-5d09-4974-aeac-a80a79a98d12,visual-dialogue,Visual Dialog,"Visual Dialog requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a follow-up question about the image, the task is to answer the question.",computer-vision b6c66747-ca5c-4b9f-8d26-a39b21739e0a,3d-shape-reconstruction-from-videos,3D Shape Reconstruction from Videos,,computer-vision e44d0d14-3405-413a-a015-86453fc6fe4f,scene-text-editing,Scene Text Editing,,computer-vision fd302081-134c-4d91-b912-7efbb831e3bc,3d-point-cloud-interpolation,3D Point Cloud Interpolation,"Point cloud interpolation is a fundamental problem for 3D computer vision. 
Given a low temporal resolution (frame rate) point cloud sequence, the target of interpolation is to generate a smooth point cloud sequence with high temporal resolution (frame rate).",computer-vision b697cffc-a817-4afd-bbef-dfc64ece7df2,shape-representation-of-3d-point-clouds,Shape Representation Of 3D Point Clouds,,computer-vision 8dafb5b4-8dd1-473e-b435-a31fa817c773,hindi-image-captioning,Hindi Image Captioning,The main goal of this task is to generate a caption for an input image in the native language Hindi.,computer-vision 06a4db93-1fb8-4e35-b924-1cb6d98149cb,tone-mapping,Tone Mapping,,computer-vision 811de7c1-f96f-48fb-adae-2045cef3f62e,referring-image-matting-expression-based,Referring Image Matting (Expression-based),"Expression-based referring image matting, taking an image and a flowery expression as the input.",computer-vision ce3dd41a-c838-48a1-a2ed-20a4dcd38827,multi-frame-super-resolution,Multi-Frame Super-Resolution,"When multiple images of the same view are taken from slightly different positions, perhaps also at different times, then they collectively contain more information than any single image on its own. Multi-Frame Super-Resolution fuses these low-res inputs into a composite high-res image that can reveal some of the original detail that cannot be recovered from any low-res image alone. ( Credit: [HighRes-net](https://github.com/ElementAI/HighRes-net) )",computer-vision d6cd2a62-ea6b-4fbd-87a2-bf8b3c12a23b,image-text-matching,Image-text matching,,computer-vision fabe1b38-69b3-4759-a5c0-18dd3b782d55,image-restoration,Image Restoration,"**Image Restoration** is a family of inverse problems for obtaining a high quality image from a corrupted input image. Corruption may occur due to the image-capture process (e.g., noise, lens blur), post-processing (e.g., JPEG compression), or photography in non-ideal conditions (e.g., haze, motion blur). Source: [Blind Image Restoration without Prior Knowledge ](https://arxiv.org/abs/2003.01764)",computer-vision 7a595b40-3221-46c6-8934-2f01dc9d7ea7,computer-vision-transduction,Computer Vision Transduction,Transductive learning in computer vision tasks,computer-vision d328a1ec-48b3-4256-b0a2-d0fad5039ec2,open-world-object-detection,Open World Object Detection,"Open World Object Detection is a computer vision problem where a model is tasked to: 1) identify objects that have not been introduced to it as `unknown', without explicit supervision to do so, and 2) incrementally learn these identified unknown categories without forgetting previously learned classes, when the corresponding labels are progressively received.",computer-vision eb4d1274-7e26-4cf2-bb02-21a04d989cba,point-interactive-image-colorization,Point-interactive Image Colorization,"__Point-interactive colorization__ is a task of colorizing images given user-guided clicks containing colors (a.k.a. color hints). Unlike unconditional image colorization, which is an underdetermined problem by nature, point-interactive colorization aims to generate images containing specific colors given by the user. Point-interactive colorization is evaluated by providing simulated user hints from the groundtruth color image. 
Following the [iColoriT protocol](https://arxiv.org/abs/2207.06831), user hints have a size of 2x2 pixels and the color is given as the average color within the 2x2 pixels.",computer-vision b95f2f0b-e97d-4152-8b5d-f1426a5bd0da,blink-estimation,Blink estimation,,computer-vision 8730e3f8-27c7-4d73-ab69-352065b7f1f5,overlapped-14-1,Overlapped 14-1,,computer-vision dda768a7-95fd-4058-a4be-836248d3926a,image-forensics,Image Forensics,,computer-vision 09da7e17-3fb4-4f76-bca2-81312fd6b604,temporal-defect-localization,Temporal Defect Localization,"Closed-Circuit TeleVision (CCTV) is a popular method for pipe defect inspection. Different from short QV videos, CCTV videos are much longer and record more comprehensive content in the very distant pipe. The main task is to discover temporal locations of pipe defects in such untrimmed videos. Clearly, manual inspection is expensive, based on hundreds of hours of CCTV videos. To fill this gap, we introduce this temporal localization task, which is to find the temporal locations of pipe defects and recognize their corresponding categories in a long CCTV video.",computer-vision f02ccb80-5d73-46bc-ad4e-326e6a4bfe5f,forgery,Forgery,,computer-vision c5ee6967-f669-48c1-af49-887879ef7344,federated-lifelong-person-reid,Federated Lifelong Person ReID,,computer-vision abf53e15-8a19-4c0d-b0b5-653d3540e7f1,object-discovery-in-videos,Object Discovery In Videos,,computer-vision 30bde436-7190-42b6-a6d7-4d1dbd6d99c4,grasp-generation,Grasp Generation,,computer-vision 7887941d-1853-497c-8c32-1430356434e7,video-correspondence-flow,Video Correspondence Flow,,computer-vision 6f6cf7ae-4775-45d8-9eef-cc3fcbd45ddb,activeness-detection,Activeness Detection,Determining activeness via images,computer-vision 974fdd24-82ee-4fac-9848-c5979c0338a8,semi-supervised-image-classification-cold,Semi-Supervised Image Classification (Cold Start),"This is the same as the [semi-supervised image classification task](https://paperswithcode.com/task/semi-supervised-image-classification), with the key difference being that the labelled subset chosen needs to be selected in a class-agnostic manner. This means that the standard practice in semi-supervised learning of using a random class-stratified sample is ""cheating"" in this case, as class information is required for the whole dataset for this to be done. Rather, this challenge requires a smart cold-start or unsupervised selective labelling strategy to identify images that are most informative and result in the best performing models.",computer-vision ab98643d-9601-411d-a82f-b72256fa3eb4,multiple-object-track-and-segmentation,Multiple Object Track and Segmentation,"Multiple object tracking and segmentation requires detecting, tracking, and segmenting objects belonging to a set of given classes. 
(Image and definition credit: [Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation](https://github.com/SysCV/pcan), NeurIPS 2021, Spotlight )",computer-vision d23bd6ef-2b53-48b6-8fba-461c888e1d3c,damaged-building-detection,Damaged Building Detection,,computer-vision 5bfb4a7c-40e9-401d-b9df-193ea908c113,brdf-estimation,BRDF estimation,,computer-vision f6e49054-e5d6-4793-b56e-1c3801f93b03,membership-inference-attack,Membership Inference Attack,,computer-vision 0f09aeec-e703-4eb0-b43c-514da956c03a,superpixels,Superpixels,,computer-vision ef702f72-cffb-4806-970f-43c2465bfb3e,multi-target-domain-adaptation,Multi-target Domain Adaptation,The idea of Multi-target Domain Adaptation is to adapt a model from a single labelled source domain to multiple unlabelled target domains.,computer-vision 00933281-f8d0-40da-9d27-667181532566,rf-based-pose-estimation,RF-based Pose Estimation,"Detect human actions through walls and occlusions, and in poor lighting conditions. Taking radio frequency (RF) signals as input (e.g. WiFi), generating 3D human skeletons as an intermediate representation, and recognizing actions and interactions. See e.g. RF-Pose from MIT for a good illustration of the approach http://rfpose.csail.mit.edu/ ( Image credit: [Making the Invisible Visible](https://arxiv.org/pdf/1909.09300v1.pdf) )",computer-vision 51daecbc-c787-4afe-a053-581356f2c8e7,weakly-supervised-action-segmentation-action,Weakly Supervised Action Segmentation (Action Set),Learning an action segmentation model while the only available supervision is the action set -- the set of actions that occur in the video without information about their temporal locations.,computer-vision 6b3d840f-872a-434c-baf8-a9a57e7cd2ff,boundary-captioning,Boundary Captioning,"Provided with the timestamp of a boundary inside a video, the machine is required to generate sentences describing the status change at the boundary.",computer-vision 4fc4d534-a407-4658-a698-e4cbd1f9fa8e,training-free-3d-part-segmentation,Training-free 3D Part Segmentation,Evaluation on target datasets for 3D Part Segmentation without any training,computer-vision d6b0f018-721e-4f03-9ae0-f868ebbb152e,image-shadow-removal,Image Shadow Removal,,computer-vision aba5e61b-1141-45eb-8e7b-87bd30655965,raw-reconstruction,Raw reconstruction,Reconstruct RAW camera sensor readings from the corresponding sRGB images,computer-vision 012528a5-fc4c-498c-b8b2-96d4433f9590,face-age-editing,Face Age Editing,,computer-vision 67fba269-7a48-4a4d-a726-3a9502debbf2,multimodal-activity-recognition,Multimodal Activity Recognition,,computer-vision e300bfbc-203e-4807-98b6-141440d5804b,generating-3d-point-clouds,Generating 3D Point Clouds,,computer-vision ea764edb-ebb5-4e58-89dc-9a9f198b97a0,surgical-tool-detection,Surgical tool detection,Presence detection of various classes of surgical instruments in endoscopy videos.,computer-vision 844deff3-c7a6-4790-9b1e-d51c7564c34f,event-based-vision,Event-based vision,"An event camera, also known as a neuromorphic camera, silicon retina or dynamic vision sensor, is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images using a shutter as conventional cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur and staying silent otherwise. 
Modern event cameras have microsecond temporal resolution, 120 dB dynamic range, and less under/overexposure and motion blur than frame cameras.",computer-vision d62072f5-3227-4ae7-b5d1-04a98c99e3ca,multi-person-pose-estimation,Multi-Person Pose Estimation,"Multi-person pose estimation is the task of estimating the pose of multiple people in one frame. ( Image credit: [Human Pose Estimation with TensorFlow ](https://github.com/eldar/pose-tensorflow) )",computer-vision 1839ac86-474e-48d1-8665-d17a181f6936,person-reposing,person reposing,Person reposing describes the task of changing the pose of a human in a given image to any desired target pose.,computer-vision dd2bf5ad-11a0-44db-b72b-29e9b5a30344,image-deconvolution,Image Deconvolution,,computer-vision a03e5ba6-463b-4bf3-93b0-ed2d4debcd7a,3d-object-detection-from-stereo-images,3D Object Detection From Stereo Images,"Estimating oriented 3D bounding boxes from Stereo Cameras only. Image: [You et al](https://openreview.net/pdf?id=BJedHRVtPB)",computer-vision f0ebe55e-5b2d-4430-ad00-579b7c2890ed,visual-social-relationship-recognition,Visual Social Relationship Recognition,,computer-vision 1ec78478-5d3b-41f8-82b4-ec20b4dbb812,human-activity-recognition,Human Activity Recognition,Classify various human activities,computer-vision ba2b86d8-292f-4aeb-8302-5055e1291cd9,unsupervised-3d-point-cloud-linear-evaluation,Unsupervised 3D Point Cloud Linear Evaluation,Training a linear classifier(e.g. SVM) on the representations learned in an unsupervised manner on the pretrained(e.g. ShapeNet) dataset.,computer-vision 495a2ec1-d5b3-454d-8f30-8dc449bf05a0,earthquake-prediction,Earthquake prediction,,computer-vision ace13983-40bb-45d2-841f-2ce31898206a,defect-detection,Defect Detection,For automatic detection of surface defects in various products,computer-vision a7a57486-9c47-4a6e-8d8b-6f6c6fcd0468,2d-semantic-segmentation,2D Semantic Segmentation,,computer-vision b2801cf7-a411-4c51-a448-4fc42fc73650,cloud-detection,Cloud Detection,,computer-vision c570139a-04d2-4bbe-871b-a9987594303a,cryogenic-electron-microscopy-cryo-em,Cryogenic Electron Microscopy (cryo-EM),"Analysis of images and videos from transmission electron microscopes, including single-particle cryogenic electron microscopy and cryogenic electron tomography (cryo-ET). https://en.wikipedia.org/wiki/Cryogenic_electron_microscopy",computer-vision ce25d9cb-bc4b-4daa-a611-76779e5d154c,unsupervised-video-object-segmentation,Unsupervised Video Object Segmentation,The unsupervised scenario assumes that the user does not interact with the algorithm to obtain the segmentation masks. Methods should provide a set of object candidates with no overlapping pixels that span through the whole video sequence. This set of objects should contain at least the objects that capture human attention when watching the whole video sequence i.e objects that are more likely to be followed by human gaze.,computer-vision 4f3b58be-03b9-4fd3-bf4d-0bf6777b0806,cross-corpus,Cross-corpus,,computer-vision 845164e3-f8d1-4e9a-9bec-007db9ee5620,sign-language-translation,Sign Language Translation,"Given a video containing sign language, the task is to predict the translation into (written) spoken language. 
Image credit: [How2Sign](https://how2sign.github.io/)",computer-vision 22b485aa-40f3-4587-8422-d6067b39b00a,image-relighting,Image Relighting,Image relighting involves changing the illumination settings of an image.,computer-vision 9187a70c-337d-40de-8f99-a7ad1536a763,generalized-zero-shot-learning,Generalized Zero-Shot Learning,,computer-vision 9750a309-bce7-4e4b-81b4-7a7d90d64749,image-compression,Image Compression,"**Image Compression** is an application of data compression for digital images to lower their storage and/or transmission requirements. Source: [Variable Rate Deep Image Compression With a Conditional Autoencoder ](https://arxiv.org/abs/1909.04802)",computer-vision 75b7612c-8df5-4919-994a-620ecc19ff5d,multimodal-fashion-image-editing,Multimodal fashion image editing,"Given a target model image, a sketch and a textual description of a garment, the aim is to generate a new image of the same model, wearing a new garment that follows the user input.",computer-vision 625c90f5-720b-4103-8e99-101a1d97c2ec,talking-face-generation,Talking Face Generation,"Talking face generation aims to synthesize a sequence of face images that correspond to given speech semantics. ( Image credit: [Talking Face Generation by Adversarially Disentangled Audio-Visual Representation](https://github.com/Hangz-nju-cuhk/Talking-Face-Generation-DAVS) )",computer-vision f762d35a-ecd1-400a-bd2b-28a6bd2d3f0f,image-to-3d,Image to 3D,,computer-vision 9582bcc3-05fd-4e21-b6e1-f420404a16d8,image-classification-with-dp,Image Classification with Differential Privacy,**Image Classification with Differential Privacy** is a privacy-preserving variant of the image classification task whereby the final classification output only describes the patterns of groups within the dataset while withholding information about individuals in the dataset.,computer-vision d94b8b7f-2a3f-47f2-be88-bbe11a5106f7,satellite-image-super-resolution,satellite image super-resolution,,computer-vision 4de4b194-ed16-4d3e-bf32-45564d05cee6,facial-beauty-prediction,Facial Beauty Prediction,"Facial beauty prediction is the task of predicting the attractiveness of a face. ( Image credit: [SCUT-FBP5500: A Diverse Benchmark Dataset for Multi-Paradigm Facial Beauty Prediction](https://github.com/HCIILAB/SCUT-FBP5500-Database-Release) )",computer-vision 2bce0969-6317-4691-be36-bae1452b731e,detect-forged-images-and-videos,Image and Video Forgery Detection,,computer-vision 82054068-b750-4887-ac5d-29fc841f67aa,weakly-supervised-action-segmentation,Weakly Supervised Action Segmentation (Transcript),Action Segmentation from weak (transcript) supervision.,computer-vision 02cbb7d6-d8e3-44e0-8657-57707a7923ef,text-to-image-generation,Text-to-Image Generation,"**Text-to-Image Generation** is a task in computer vision and natural language processing where the goal is to generate an image that corresponds to a given textual description. 
This involves converting the text input into a meaningful representation, such as a feature vector, and then using this representation to generate an image that matches the description.",computer-vision d21866e2-a03a-47f9-912f-5333271172db,3d-object-tracking,3D Object Tracking,,computer-vision 3dba7094-df0b-4fd8-8f97-e02e7751bfa6,horizon-line-estimation,Horizon Line Estimation,,computer-vision fd75dc32-de96-49f8-93e5-50329b309171,surgical-phase-recognition,Surgical phase recognition,"The first 40 videos are used for training, the last 40 videos are used for testing.",computer-vision 691e3a2a-7d65-40f8-b009-ac985c7e7672,pupil-dilation,Pupil Dilation,"Phenomenon on which the human iris reacts to illumination changes and drugs; thus, changing the aperture of the pupil.",computer-vision 6f8ee656-ad2d-4c12-b15f-ee88189328d4,eye-tracking,Gaze Prediction,,computer-vision 42b15e71-e8b8-432b-9d24-dbe72ac625a8,crop-yield-prediction,Crop Yield Prediction,,computer-vision 2f7d48c9-9664-4604-97d4-2d875781a2b8,semi-supervised-person-bounding-box-detection,Semi-Supervised Person Bounding Box Detection,,computer-vision 53495992-b868-4bc9-b2a2-6f57008590f1,semi-supervised-semantic-segmentation,Semi-Supervised Semantic Segmentation,Models that are trained with a small number of labeled examples and a large number of unlabeled examples and whose aim is to learn to segment an image (i.e. assign a class to every pixel).,computer-vision 27f93ea0-372b-406e-9e76-d8fa056a06a7,autonomous-flight-dense-forest,Autonomous Flight (Dense Forest),Number of interventions during autonomous flight under the forest canopy.,computer-vision e4dfac41-dafb-47d8-b764-1ddd3c539b0e,street-scene-parsing,Street Scene Parsing,,computer-vision 52709284-f18b-4d28-ad79-29699d662f4a,pansharpening,Pansharpening,,computer-vision e3b98ff6-84cd-4bad-b868-6a3cc466092d,object-skeleton-detection,Object Skeleton Detection,"Object skeleton detection is the task of detecting the skeleton of an object in an image. ( Image credit: [DeepFlux for Skeletons in the Wild](https://arxiv.org/pdf/1811.12608v1.pdf) )",computer-vision 309dcd53-7ba7-470f-afcf-ef1e81eaf5ca,image-recognition,Image Recognition,,computer-vision 7a383565-ee30-455f-b4a6-72b88a0bb886,face-generation,Face Generation,"Face generation is the task of generating (or interpolating) new faces from an existing dataset. The state-of-the-art results for this task are located in the Image Generation parent. 
( Image credit: [Progressive Growing of GANs for Improved Quality, Stability, and Variation ](https://arxiv.org/pdf/1710.10196v3.pdf) )",computer-vision 3245aed1-abbb-49f5-bdce-0f4c9d7143b6,multi-view-3d-shape-retrieval,Multi-View 3D Shape Retrieval,,computer-vision adbb3dbb-abcd-4128-ae73-98085a0091c7,human-scene-contact-detection,human-scene contact detection,detecting contact between human bodies and scenes,computer-vision d7695fb3-3271-4563-bb39-726919f2f2ad,3d-human-action-recognition,3D Action Recognition,Image: [Rahmani et al](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Rahmani_3D_Action_Recognition_CVPR_2016_paper.pdf),computer-vision 464bc8d8-1809-4da7-b8a9-d14556f41f26,burst-image-super-resolution,Burst Image Super-Resolution,"Reconstruct a high-resolution image from a set of low-quality images, very like the multi-frame super-resolution task.",computer-vision a35a2fe0-7af5-4987-8643-010048995c67,spectral-estimation,Spectral Estimation,,computer-vision adb24d38-3268-4041-b7a4-62de8d6ad707,fine-grained-image-classification,Fine-Grained Image Classification,"**Fine-Grained Image Classification** is a task in computer vision where the goal is to classify images into subcategories within a larger category. For example, classifying different species of birds or different types of flowers. This task is considered to be fine-grained because it requires the model to distinguish between subtle differences in visual appearance and patterns, making it more challenging than regular image classification tasks. ( Image credit: [Looking for the Devil in the Details](https://arxiv.org/pdf/1903.06150v2.pdf) )",computer-vision 8e179ba8-199b-437f-91ac-5437e6219e3b,face-anti-spoofing,Face Anti-Spoofing,"Facial anti-spoofing is the task of preventing false facial verification by using a photo, video, mask or a different substitute for an authorized person’s face. Some examples of attacks: - **Print attack**: The attacker uses someone’s photo. The image is printed or displayed on a digital device. - **Replay/video attack**: A more sophisticated way to trick the system, which usually requires a looped video of a victim’s face. This approach ensures behaviour and facial movements to look more ‘natural’ compared to holding someone’s photo. - **3D mask attack**: During this type of attack, a mask is used as the tool of choice for spoofing. It’s an even more sophisticated attack than playing a face video. In addition to natural facial movements, it enables ways to deceive some extra layers of protection such as depth sensors. ( Image credit: [Learning Generalizable and Identity-Discriminative Representations for Face Anti-Spoofing](https://github.com/XgTu/GFA-CNN) )",computer-vision 9dbfde69-1a28-45b7-9ee0-406857cbbdb1,3d-holography,3D Holography,"The images that are presented here are multiplanar images that were reconstructed using a holographic display. 
For more details, please see: https://complightlab.com/publications/realistic_defocus_cgh/",computer-vision 05313ca2-220f-4f1a-9165-77125114506c,wildly-unsupervised-domain-adaptation,Wildly Unsupervised Domain Adaptation,Transferring knowledge from a noisy source domain to an unlabeled target domain.,computer-vision 06bc4685-87a4-4b6d-96ae-f6400a4545eb,3d-shape-reconstruction,3D Shape Reconstruction,"Image credit: [GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision, ECCV'20](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123600511.pdf)",computer-vision 540bf838-e644-4f5c-a06c-f012b38f04d2,extreme-video-frame-interpolation,eXtreme-Video-Frame-Interpolation,"A type of Video Frame Interpolation (VFI) that interpolates an intermediate frame on the X4K1000FPS dataset, which contains 4K videos at 1000 fps with extreme motion. The dataset has a wide variety of textures, extremely large motions, zooming and occlusions, which have not been seen in previous VFI benchmark datasets.",computer-vision 7edd4dff-69d1-48e7-96ef-716b00658aef,micro-expression-recognition,Micro-Expression Recognition,"Facial Micro-Expression Recognition is the challenging task of identifying suppressed emotion in a high-stakes environment; micro-expressions often come in very brief duration and with subtle changes.",computer-vision ff6b76a8-da48-4b92-9804-40b1f3e10069,breast-cancer-histology-image-classification-1,Breast Cancer Histology Image Classification (20% labels),Breast cancer histology image classification performance on the BreakHis dataset when only 20% of the training labels are available.,computer-vision cda0f841-3700-4d71-a9c4-ad03658a112f,scene-flow-estimation,Scene Flow Estimation,"**Scene Flow Estimation** is the task of obtaining 3D structure and 3D motion of dynamic scenes, which is crucial to environment perception, e.g., in the context of autonomous navigation. Source: [Self-Supervised Monocular Scene Flow Estimation ](https://arxiv.org/abs/2004.04143)",computer-vision 804cea1d-3b44-45d3-9bb3-f857365956fc,object-reconstruction,Object Reconstruction,,computer-vision a05156dd-42e2-495e-8b99-b178a5cdcb5f,generalized-zero-shot-skeletal-action,Generalized Zero Shot skeletal action recognition,Generalized Zero Shot Learning for 3D Skeletal Action Recognition,computer-vision f5c13a14-d3b7-498f-bfdb-31796c183e69,robust-3d-object-detection,Robust 3D Object Detection,3D Object Detection under Out-of-Distribution Scenarios,computer-vision 2ef28c5e-6254-4d01-9a57-e477fee61d62,probabilistic-deep-learning,Probabilistic Deep Learning,,computer-vision 74a6a94d-3756-42ac-990e-bf2439a81e38,polyp-segmentation,Polyp Segmentation,The goal of the project is to develop a computer-aided detection and diagnosis system for automatic polyp segmentation and detection.,computer-vision c2670fa7-5bcb-4d09-8b2b-c8ef961325d9,stereo-image-super-resolution,Stereo Image Super-Resolution,,computer-vision 4be6e018-51c1-4615-8062-480fbe84af20,transparent-object-detection,Transparent Object Detection,Detecting transparent objects in 2D or 3D,computer-vision 679272c8-ca06-4b73-a72f-68b5dc75cbbf,3d-object-retrieval,3D Object Retrieval,Source: [He et al](https://arxiv.org/pdf/1803.06189v1.pdf),computer-vision 8a5bffce-0367-406a-89bc-530d1ac5b136,point-cloud-segmentation,Point Cloud Segmentation,"3D point cloud segmentation is the process of classifying point clouds into multiple homogeneous regions; the points in the same region will have the same properties. 
The segmentation is challenging because of high redundancy, uneven sampling density, and the lack of explicit structure in point cloud data. This problem has many applications in robotics such as intelligent vehicles, autonomous mapping and navigation. Source: [3D point cloud segmentation: A survey](https://doi.org/10.1109/RAM.2013.6758588)",computer-vision f545d57d-9ebc-41d0-ba75-09a45e79e104,loop-closure-detection,Loop Closure Detection,"Loop closure detection is the process of detecting whether an agent has returned to a previously visited location. ( Image credit: [Backtracking Regression Forests for Accurate Camera Relocalization](https://github.com/LiliMeng/btrf) )",computer-vision 253b2cbe-5aea-49d0-ba47-85fd6f9bf1ff,weakly-supervised-3d-point-cloud-segmentation,Weakly Supervised 3D Point Cloud Segmentation,,computer-vision 1080cf74-3563-41c4-8245-95ca4ea01aaa,thermal-infrared-pedestrian-detection,Thermal Infrared Pedestrian Detection,Thermal Infrared Pedestrian Detection under low-light conditions,computer-vision f7483a7f-219c-47ad-96ae-355ea9d1b004,scale-generalisation,Scale Generalisation,Scale generalisation implies that learning is performed at some scale(s) and testing at other scales.,computer-vision 83adfc5d-9633-4efa-9adc-03dfca32cea9,phrase-extraction-and-grounding-peg,Phrase Extraction and Grounding (PEG),PEG requires a model to extract phrases from text and locate objects from images simultaneously.,computer-vision 4e0a1e4e-7b0f-4853-81bb-551212f5d189,event-based-motion-estimation,Event-based Motion Estimation,,computer-vision 605a53e3-7a70-4ae5-8732-4b94ffc7fffc,face-verification,Face Verification,"**Face Verification** is a machine learning task in computer vision that involves determining whether two facial images belong to the same person or not. The task involves extracting features from the facial images, such as the shape and texture of the face, and then using these features to compare and verify the similarity between the images. ( Image credit: [Pose-Robust Face Recognition via Deep Residual Equivariant Mapping](https://arxiv.org/pdf/1803.00839v1.pdf) )",computer-vision 330bed31-05d9-4a33-ae11-67f567b91db5,solar-cell-segmentation,Solar Cell Segmentation,,computer-vision 9629b7a5-00ab-439f-b2d6-e0c44ba34f7e,learning-with-noisy-labels,Learning with noisy labels,"When we say ""noisy labels,"" we mean that an adversary has intentionally messed up the labels, which would otherwise have come from a ""clean"" distribution. This setting can also be used to cast learning from only positive and unlabeled data.",computer-vision 35e41fb3-36f4-4f4c-8884-2c5ae07d1d32,3d-prostate-segmentation,3D Prostate Segmentation,,computer-vision 50f24fb1-aec4-4e9a-a37f-6eeb21c06688,visual-place-recognition,Visual Place Recognition,"**Visual Place Recognition** is the task of matching a view of a place with a different view of the same place taken at a different time. Source: [Visual place recognition using landmark distribution descriptors ](https://arxiv.org/abs/1608.04274) Image credit: [Visual place recognition using landmark distribution descriptors](https://arxiv.org/pdf/1608.04274.pdf)",computer-vision 331fb10e-e51a-45c3-b2da-6923144af150,roi-based-image-generation,ROI-based image generation,,computer-vision cff59004-c90a-41ce-9c8a-6f5b7d8d105c,action-recognition,Temporal Action Localization,Temporal Action Localization aims to detect activities in the video stream and output beginning and end timestamps. 
It is closely related to Temporal Action Proposal Generation.,computer-vision 7047bb6e-431b-44e7-8ba7-c0375d9d1fd5,lip-sync-1,Constrained Lip-synchronization,"This task deals with lip-syncing a video (or) an image to the desired target speech. Approaches in this task work only for a specific (limited set) of identities, languages, speech/voice. See also: Unconstrained lip-synchronization - https://paperswithcode.com/task/lip-sync",computer-vision 54ee55da-dd32-43a8-8c42-d7908dcb3582,image-deep-networks,Image Deep Networks,,computer-vision afa9569c-d1e2-4ca9-9563-fa2b5aa90724,continual-semantic-segmentation,Continual Semantic Segmentation,Continual learning in semantic segmentation.,computer-vision e9f8ebd3-761b-4711-a039-26a66a312acb,im2spec,Im2Spec,Predicting spectra from images (and vice versa),computer-vision f7c79a38-3485-4f2a-9fe6-1c32a3473e56,3d-multi-object-tracking,3D Multi-Object Tracking,Image: [Weng et al](https://arxiv.org/pdf/1907.03961v4.pdf),computer-vision ee235bb3-7e75-47ce-b704-816a2e8ddaf0,replay-grounding,Replay Grounding,"Replay grounding is introduced in SoccerNet-v2 in the case of videos of soccer games. Given a replay shot of a soccer action, the objective is to retrieve when said action occurs within the whole live game.",computer-vision 171cde71-4a78-4ce5-82a1-961b4ba0aaec,natural-language-moment-retrieval,Natural Language Moment Retrieval,,computer-vision f8c607ae-c3db-47d7-8c87-d89a1081384d,visual-keyword-spotting,Visual Keyword Spotting,Spot a given query keyword in a silent talking face video,computer-vision 53f1794b-ef2c-40ca-8eb6-e59577c080fe,motion-forecasting,Motion Forecasting,Motion forecasting is the task of predicting the location of a tracked object in the future,computer-vision b3d71a59-9b3e-4251-93bc-3d851504c8ae,conformal-prediction,Conformal Prediction,,computer-vision 2aa7487b-395f-48af-9c99-1ba6670c8b2a,art-analysis,Art Analysis,,computer-vision f2db7a44-39bd-444f-b96b-44a1f8ab1ae6,yield-mapping-in-apple-orchards,Yield Mapping In Apple Orchards,,computer-vision 1a2119f5-8d8f-40e1-8cf4-f25eaf075cc7,image-to-point-cloud-registration,Image to Point Cloud Registration,"Given a query image and a scene of point cloud, get the camera pose according to them.",computer-vision feca25dc-5eba-46d9-8b0d-4743f0c14c5a,anomaly-detection-at-various-anomaly,Anomaly Detection at Various Anomaly Percentages,Unsupervised anomaly detection at 10% anomaly,computer-vision 5b06e005-b411-4cc8-af85-41260d9cf91e,mri-segmentation,MRI segmentation,,computer-vision cc3d62a0-b1a8-49d2-aeb9-f3ad2f5bf5a1,panoptic-scene-graph-generation,Panoptic Scene Graph Generation,"PSG task abstracts the given image with a scene graph, where nodes are grounded by panoptic segmentation",computer-vision ac27e799-2355-43a3-8452-c02ec6138737,fast-vehicle-detection,Fast Vehicle Detection,Fast vehicle detection is the task of detecting fast or speeding vehicles from video footage.,computer-vision 78e94f50-c15e-42bb-8bcd-23832dbe7627,unsupervised-few-shot-learning,Unsupervised Few-Shot Learning,"In contrast to supervised few-shot learning, only the unlabeled dataset is available in the pre-training or meta-training stage for unsupervised few-shot learning.",computer-vision 3c7319db-e6df-457e-8c57-1a89bf6b3f97,prediction-of-occupancy-grid-maps,Prediction Of Occupancy Grid Maps,,computer-vision 6f10036f-b34c-4b86-be05-b2afd6e03123,future-prediction,Future prediction,,computer-vision 91476f75-0c77-410f-9100-d369198c6661,rf-based-visual-tracking,RF-based Visual Tracking,"From mID: 
https://doi.org/10.1109/DCOSS.2019.00028 ""The key to offering personalised services in smart spaces is knowing where a particular person is with a high degree of accuracy. Visual tracking is one such solution, but concerns arise around the potential leakage of raw video information and many people are not comfortable accepting cameras in their homes or workplaces. We propose a human tracking and identification system (mID) based on millimeter wave radar which has a high tracking accuracy, without being visually compromising. Unlike competing techniques based on WiFi Channel State Information (CSI), it is capable of tracking and identifying multiple people simultaneously. Using a lowcost, commercial, off-the-shelf radar, we first obtain sparse point clouds and form temporally associated trajectories."" ( Image credit: [mID: Tracking and Identifying People with Millimeter Wave Radar](http://www.cs.ox.ac.uk/files/10889/%5BDCOSS19%5DmID.pdf) )",computer-vision beaf7c2f-e76b-4287-8fe1-dd170372a6f9,source-free-domain-adaptation,Source-Free Domain Adaptation,,computer-vision 31f8e61e-0c91-461a-9271-b13fee476ffb,activity-detection,Activity Detection,Detecting activities in extended videos.,computer-vision d8f49bc5-8a96-46dd-b224-9a501d0840da,text-to-3d,Text to 3D,,computer-vision 641998b6-bb7d-4212-a7d7-01190a3beb7f,hyperview-challenge,HYPERVIEW Challenge,"The objective of this challenge is to advance the state of the art for soil parameter retrieval from hyperspectral data in view of the upcoming Intuition-1 mission. A campaign took place in March 2021 over agricultural areas in Poland with extensive ground samplings collocated with airborne hyperspectral measurements from imagers mounted onboard an airplane. The hyperspectral data contains 150 contiguous hyperspectral bands (462-942 nm, with a spectral resolution of 3.2 nm), which reflects the spectral range of the hyperspectral imaging sensor deployed on-board Intuition-1.",computer-vision 06f96c25-824e-47a0-a7db-d48ee1bf269c,face-anonymization,Face Anonymization,,computer-vision c38d74d4-54a6-4c89-8d1a-b79acbd7753d,image-matching,Image Matching,"Image Matching or wide multiple baseline stereo (WxBS) is a process of establishing a sufficient number of pixel or region correspondences from two or more images depicting the same scene to estimate the geometric relationship between cameras, which produced these images. Source: [The Role of Wide Baseline Stereo in the Deep Learning World](https://ducha-aiki.github.io/wide-baseline-stereo-blog/2020/03/27/intro.html) ( Image credit: [Kornia](https://github.com/kornia/kornia) )",computer-vision 89a0785a-3e1c-409a-8558-4d2e1c975943,real-time-multi-object-tracking,Real-Time Multi-Object Tracking,Online and Real-time Multi-Object Tracking would achieve the real-time speed over 30 frames per second with online approach.,computer-vision ea3deddd-bcf9-4808-affe-d6d3899d6b0e,sequential-image-classification,Sequential Image Classification,"Sequential image classification is the task of classifying a sequence of images. 
( Image credit: [TensorFlow-101](https://github.com/sjchoi86/tensorflow-101/blob/master/notebooks/rnn_mnist_simple.ipynb) )",computer-vision d85eff75-2fb4-497b-916a-609b9b9d2b20,pedestrian-density-estimation,Pedestrian Density Estimation,Pedestrian density estimation is the task of estimating the density of pedestrians from cameras.,computer-vision ae48a23e-8660-47f1-953a-c80430673dd1,fake-image-detection,Fake Image Detection,"( Image credit: [FaceForensics++](https://github.com/ondyari/FaceForensics) )",computer-vision ce389f60-9cfb-4793-956e-6668d8abc1b0,3d-aware-image-synthesis,3D-Aware Image Synthesis,,computer-vision aae71263-4d02-4bf4-9e0a-a6ec591f44ee,inverse-tone-mapping,inverse tone mapping,,computer-vision b9b29ca7-3f14-4d78-b863-0e54e2f9e6ca,multi-person-pose-estimation-and-tracking,Multi-Person Pose Estimation and Tracking,"Joint multi-person pose estimation and tracking following the PoseTrack benchmark. https://posetrack.net/ ( Image credit: [PoseTrack](https://github.com/iqbalu/PoseTrack-CVPR2017) )",computer-vision c35c8d7e-a633-4f56-900e-57599f79479a,weakly-supervised-3d-human-pose-estimation,Weakly-supervised 3D Human Pose Estimation,This task targets at 3D Human Pose Estimation with fewer 3D annotation.,computer-vision 9ee8f630-352c-4b94-a65b-594597c75b41,image-instance-retrieval,Image Instance Retrieval,"**Image Instance Retrieval** is the problem of retrieving images from a database representing the same object or scene as the one depicted in a query image. Source: [Compression of Deep Neural Networks for Image Instance Retrieval ](https://arxiv.org/abs/1701.04923)",computer-vision 4bfff613-1292-4eb2-a12c-7041faf2d856,occluded-3d-object-symmetry-detection,Occluded 3D Object Symmetry Detection,,computer-vision 44b4aa27-8867-46af-a9af-4f81543ea63b,motion-estimation,Motion Estimation,"**Motion Estimation** is used to determine the block-wise or pixel-wise motion vectors between two frames. Source: [MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement ](https://arxiv.org/abs/1810.08768)",computer-vision ec9a708e-8108-48ad-a12c-74295c72f914,gpr,GPR,Gaussian Process Regression,computer-vision f30f8215-2c5b-4f46-bf53-4b13afe7560b,constrained-diffeomorphic-image-registration,Constrained Diffeomorphic Image Registration,,computer-vision 96bfb987-30f7-48b8-8fbe-b24d37d34f4a,indoor-scene-synthesis,Indoor Scene Synthesis,,computer-vision 14d3166d-5eb9-47f3-b81d-2c18f5526057,inverse-rendering,Inverse Rendering,"**Inverse Rendering** is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. 
The goal of inverse rendering is to determine the properties of a scene given an observation of it, and to generate new images or videos based on these properties.",computer-vision 3dcc0013-8ff7-49dd-819a-4d90156b397b,furniture-segmentation,furniture segmentation,,computer-vision da0f047f-c6aa-48e7-8947-7bf2813947e0,fine-grained-action-recognition,Fine-grained Action Recognition,,computer-vision ac53e15f-c58d-40b3-ba5c-498cfd75bdf6,fine-grained-visual-categorization,Fine-Grained Visual Categorization,,computer-vision c1706018-93bf-471d-9054-acf5ff41b75e,concurrent-activity-recognition,Concurrent Activity Recognition,,computer-vision 59046c64-fd10-4c94-968d-48aa85fbdcc8,animal-action-recognition,Animal Action Recognition,"Cross-species (intra-class, inter-class) action recognition",computer-vision dbf68566-6090-493d-a341-dff2ff5174d3,part-based-representation-learning,Part-based Representation Learning,,computer-vision f31a1997-c2ff-4120-85f8-a5e1eeabe577,generalizable-person-re-identification,Generalizable Person Re-identification,Generalizable person re-identification refers to methods trained on a source dataset but directly evaluated on a target dataset without domain adaptation or transfer learning.,computer-vision 738e32c0-5948-41c3-8ed6-ab2a0bccea2b,spatio-temporal-action-localization,Spatio-Temporal Action Localization,,computer-vision b15d4da8-8f66-4a79-b2b9-cc4d7683fc74,video-restoration,Video Restoration,,computer-vision 3fa4f8c0-fe51-440f-a1e8-601810823c64,predict-future-video-frames,Predict Future Video Frames,,computer-vision cf4d5051-ffa1-4f35-bb0c-2c5a65e14eac,multimodal-sentiment-analysis,Multimodal Sentiment Analysis,"Multimodal sentiment analysis is the task of performing sentiment analysis with multiple data sources - e.g. a camera feed of someone's face and their recorded speech. ( Image credit: [ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection](https://www.aclweb.org/anthology/D18-1280.pdf) )",computer-vision b7ba1eba-2c86-457f-b127-89c3d23eb99a,uncropping,Uncropping,,computer-vision 23ab69cf-f314-4da3-87b4-d84b8369666e,multimodal-emotion-recognition,Multimodal Emotion Recognition,"This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic T: Text V: Visual Please include the modality in the bracket after the model name. All models must use standard five emotion categories and are evaluated in standard leave-one-session-out (LOSO). See the papers for references.",computer-vision 7778364d-9a3b-487f-9eff-cf3c820ede92,melanoma-diagnosis,Melanoma Diagnosis,,computer-vision ed24be40-9160-4e7f-b89e-6171b6951394,wireframe-parsing,Wireframe Parsing,Detect Line Segments and their connecting Junctions in a single perspective image.,computer-vision 1680123b-96df-426b-905b-4df96bc71fad,saliency-prediction-1,Few-Shot Transfer Learning for Saliency Prediction,"Saliency prediction aims to predict important locations in a visual scene. It is a per-pixel regression task with predicted values ranging from 0 to 1. Benefiting from deep learning research and large-scale datasets, saliency prediction has achieved significant success in the past decade. 
However, it still remains challenging to predict saliency maps on images in new domains that lack sufficient data for data-hungry models.",computer-vision e26933bd-6afe-4b73-af31-dbdad006ab34,foveation,Foveation,,computer-vision 55837500-bb79-4b56-abb7-44f8246c7d73,cyclops-accuracy,CYCLoPs Accuracy,"The WT2 dataset from the CYCLoPs database consists of 27,058 single-cell images of yeast cells. The task is to classify the subcellular localization of a fluoresced protein, given two channels staining for the protein of interest and the cytosol.",computer-vision c54d5452-19a4-49b2-ae21-6c85e53a998b,unsupervised-image-classification,Unsupervised Image Classification,"Models that learn to label each image (i.e. cluster the dataset into its ground truth classes) without seeing the ground truth labels. Image credit: ImageNet clustering results of [SCAN: Learning to Classify Images without Labels (ECCV 2020)](https://arxiv.org/abs/2005.12320) ",computer-vision 4118ff6b-5007-4a6a-8cf6-1c321b50427f,code-search,Code Search,"The goal of **Code Search** is to retrieve code fragments from a large code corpus that most closely match a developer’s intent, which is expressed in natural language. Source: [When Deep Learning Met Code Search ](https://arxiv.org/abs/1905.03813)",computer-vision cb3ddba7-89d2-4815-a984-8a109145c392,unsupervised-few-shot-image-classification,Unsupervised Few-Shot Image Classification,"In contrast to (supervised) few-shot image classification, only the unlabeled dataset is available in the pre-training or meta-training stage for unsupervised few-shot image classification.",computer-vision 2fde1022-fcc8-488d-a095-cf51f1588d0c,image-cropping,Image Cropping,"**Image Cropping** is a common photo manipulation process, which improves the overall composition by removing unwanted regions. Image Cropping is widely used in photographic, film processing, graphic design, and printing businesses. Source: [Listwise View Ranking for Image Cropping ](https://arxiv.org/abs/1905.05352)",computer-vision 4ed4423b-e576-4f79-afa7-6b8382f25363,multiview-gait-recognition,Multiview Gait Recognition,,computer-vision 6ec55ce2-a590-43db-8c45-63f50dd64c69,deep-feature-inversion,Deep Feature Inversion,,computer-vision 5fec4c27-859d-4dfe-b619-010409871da6,multiview-detection,Multiview Detection,Incorporating multiple camera views for detection in heavily occluded scenarios.,computer-vision e3fa9ae3-4d84-47d4-997f-d354a03a6869,gallbladder-cancer-detection,Gallbladder Cancer Detection,,computer-vision 3a2e70db-cdf1-47b5-862e-2ebd458ebd5b,negative-flip-rate,negative flip rate,,computer-vision 3fc68e06-17d8-405c-b5ef-43d692ab8308,handwritten-digit-recognition,Handwritten Digit Recognition,,computer-vision 6702d95c-5325-4363-a94b-67410807ec7d,ms-ssim,MS-SSIM,A MS-SSIM score helps to analyze how much a De-warping module has been able to de-warp a document image from its initial distorted view.,computer-vision 7bf677a9-1a1e-450f-aa0c-83f40b744d90,face-reconstruction,Face Reconstruction,"Face reconstruction is the task of recovering the facial geometry of a face from an image. 
( Image credit: Microsoft [Deep3DFaceReconstruction](https://github.com/Microsoft/Deep3DFaceReconstruction) )",computer-vision 967c050c-0094-45e1-a127-87099ac2cb56,jpeg-artifact-removal,JPEG Artifact Removal,,computer-vision 71af1fcb-8290-406e-824f-47df306dff02,universal-domain-adaptation,Universal Domain Adaptation,,computer-vision 915c4dd6-b91f-4b50-9ce0-fb97d4b1142c,video-based-person-re-identification,Video-Based Person Re-Identification,Video-based person re-identification (reID) aims to retrieve person videos with the same identity as a query person across multiple cameras,computer-vision a0aa3b43-c84c-4359-b62d-e7d2a4b2324b,few-shot-camera-adaptive-color-constancy,Few-Shot Camera-Adaptive Color Constancy,,computer-vision e2cdaae9-0ab5-451e-a161-6a41f41d809d,historical-color-image-dating,Historical Color Image Dating,,computer-vision 1519dc42-2b7c-46a3-87ba-a4db045ef692,multi-object-tracking-and-segmentation,Multi-Object Tracking and Segmentation,"Multiple object tracking and segmentation requires detecting, tracking, and segmenting objects belonging to a set of given classes. (Image and definition credit: [Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation](https://github.com/SysCV/pcan), NeurIPS 2021, Spotlight )",computer-vision 001efd92-8efb-40b6-9718-38e826e351ce,spectral-estimation-from-a-single-rgb-image,Spectral Estimation From A Single Rgb Image,,computer-vision ea8b1726-4c56-4976-afd4-946e3aba544a,grayscale-image-denoising,Grayscale Image Denoising,,computer-vision b6ba596f-4415-4d6e-b933-5d70fdec181f,video-polyp-segmentation,Video Polyp Segmentation,,computer-vision a2325971-b3cd-434e-94c4-714ac8e9ba12,human-dynamics,Human Dynamics,,computer-vision 4f75b07a-3261-46c5-83d4-6bd1f67ef38b,house-generation,House Generation,,computer-vision 0624a560-6ad9-434d-84fe-6b3aff30017c,monocular-cross-view-road-scene-parsing-road,Monocular Cross-View Road Scene Parsing(Road),,computer-vision f5c6391e-4537-446b-a692-4872bc2b78fe,detecting-image-manipulation,Detecting Image Manipulation,,computer-vision 7b83e20c-7310-4a9f-8f77-3aed88b642bc,image-segmentation,Image Segmentation,"**Image Segmentation** is a computer vision task that involves dividing an image into multiple segments or regions, each of which corresponds to a different object or part of an object. The goal of image segmentation is to assign a unique label or category to each pixel in the image, so that pixels with similar attributes are grouped together.",computer-vision 3a503546-f745-41a5-a603-0ea9caefc6a5,genre-classification,Genre classification,"Genre classification is the process of grouping objects together based on defined similarities such as shape, pixel, location, or intensity.",computer-vision 09862d9f-9fde-44c3-8667-0359518ea55f,human-mesh-recovery,Human Mesh Recovery,Estimate 3D body mesh from images,computer-vision 1a43f727-5655-4e73-af51-e13914239c79,stochastic-human-motion-prediction,Stochastic Human Motion Prediction,"Stochastic Human Motion Prediction assumes future stochasticity and therefore tackles the task from a generative point of view. Instead of predicting a single future, it predicts N possible futures.",computer-vision 8e237d2f-61e9-4f76-bb41-e71a4b9d5a3d,story-visualization,Story Visualization,,computer-vision c92e4b6a-e5aa-4a70-9edc-ca650c8706cd,trajectory-forecasting,Trajectory Forecasting,"Trajectory forecasting is a sequential prediction task, where a forecasting model predicts future trajectories of all moving agents (humans, vehicles, etc.) 
in a scene, based on their past trajectories and/or the scene context. (Illustrative figure from [Social NCE: Contrastive Learning of Socially-aware Motion Representations](https://github.com/vita-epfl/social-nce))",computer-vision de95a309-7cff-493a-aae0-c113a2e81824,contour-detection,Contour Detection,"Object **Contour Detection** extracts information about the object shape in images. Source: [Object Contour and Edge Detection with RefineContourNet ](https://arxiv.org/abs/1904.13353)",computer-vision b72f9808-7537-4844-aef3-5f41964a61e7,action-unit-detection,Action Unit Detection,"Action unit detection is the task of detecting action units from a video - for example, types of facial action units (lip tightening, cheek raising) from a video of a face. ( Image credit: [AU R-CNN](https://arxiv.org/pdf/1812.05788v2.pdf) )",computer-vision 52c4ff99-54ab-4425-8d62-679c5256830c,opd-single-view-3d-openable-part-detection,OPD: Single-view 3D Openable Part Detection,Detect the openable parts and predict their motion parameters from single-view image,computer-vision 75225ded-47f5-4ffa-94a5-1375e9ec135b,severity-prediction,severity prediction,,computer-vision 5db404f9-819c-45b6-bfc1-ead4c6167d89,active-object-detection,Active Object Detection,Active Learning for Object Detection,computer-vision 33f8d5b2-1d22-4fdb-a2d1-2d442bffbf0b,controllable-image-captioning,controllable image captioning,generate image captions conditioned on control signals,computer-vision 1ff312ec-f9ff-4aca-a49d-dc084e8af2a3,image-similarity-detection,Image Similarity Detection,"A fundamental computer vision task to determine whether a part of an image has been copied from another image. Description from: [The 2021 Image Similarity Dataset and Challenge](https://paperswithcode.com/paper/the-2021-image-similarity-dataset-and) Image credit: [The 2021 Image Similarity Dataset and Challenge](https://paperswithcode.com/paper/the-2021-image-similarity-dataset-and)",computer-vision 56b778de-d299-4790-a5dc-8420e38cb1ac,action-segmentation,Action Segmentation,"**Action Segmentation** is a challenging problem in high-level video understanding. In its simplest form, Action Segmentation aims to segment a temporally untrimmed video by time and label each segmented part with one of pre-defined action labels. The results of Action Segmentation can be further used as input to various applications, such as video-to-text and action localization. Source: [TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation ](https://arxiv.org/abs/1705.07818)",computer-vision e4653778-a53b-4719-aad4-38ce644d16c8,ood-detection,Out of Distribution (OOD) Detection,"**Out of Distribution (OOD) Detection** is the task of detecting instances that do not belong to the distribution the classifier has been trained on. OOD data is often referred to as ""unseen"" data, as the model has not encountered it during training. OOD detection is typically performed by training a model to distinguish between in-distribution (ID) data, which the model has seen during training, and OOD data, which it has not seen. 
This can be done using a variety of techniques, such as training a separate OOD detector, or modifying the model's architecture or loss function to make it more sensitive to OOD data.",computer-vision ae708644-ac6d-4864-ba2d-a46c0a6c8486,landmark-tracking,Landmark Tracking,,computer-vision c58cb648-ac08-4e57-9438-6972be62bd60,semi-supervised-anomaly-detection,Semi-supervised Anomaly Detection,,computer-vision f4ac5d28-450b-4cf0-90d3-aef963c3f65d,multi-modal-image-segmentation,Multi-modal image segmentation,,computer-vision 7a945f4c-3b0e-4abd-b2e7-19bbb0bb31ce,stereo-matching-1,Stereo Matching,"**Stereo Matching** is one of the core technologies in computer vision, which recovers 3D structures of the real world from 2D images. It has been widely used in areas such as autonomous driving, augmented reality and robotics navigation. Given a pair of rectified stereo images, the goal of Stereo Matching is to compute the disparity for each pixel in the reference image, where disparity is defined as the horizontal displacement between a pair of corresponding pixels in the left and right images. Source: [Adaptive Unimodal Cost Volume Filtering for Deep Stereo Matching ](https://arxiv.org/abs/1909.03751)",computer-vision 87c0f833-8886-41c6-9655-84e5de22469e,image-animation,Image Animation,Image Animation is the task of animating a source image using the motion of a driving video,computer-vision c555ed97-a092-43c1-ae0c-54ab427287e3,artist-classification,Artist classification,Classification of the artist for artistic images,computer-vision 455a9fd3-37ad-4fed-b02c-17d498bfcc66,point-cloud-classification,Point Cloud Classification,Point Cloud Classification is a task involving the classification of unordered 3D point sets (point clouds).,computer-vision e7d88af4-7ff1-4043-acb9-7343aeda8d06,activity-prediction,Activity Prediction,Predict human activities in videos,computer-vision 493fc914-cd8a-43f4-afc2-85c0e7c339f8,image-retouching,Image Retouching,,computer-vision 7ca77e32-b8db-4e30-9399-dd87a08ce186,drivable-area-detection,Drivable Area Detection,,computer-vision 8fa20006-ee37-49ec-8b94-cfa4fd91f1a2,open-set-video-captioning,Open Set Video Captioning,,computer-vision a12ca70e-1c1a-43ea-8703-2ac0dc11d682,synthetic-to-real-translation,Synthetic-to-Real Translation,"Synthetic-to-real translation is the task of domain adaptation from synthetic (or virtual) data to real data. ( Image credit: [CYCADA](https://arxiv.org/pdf/1711.03213v3.pdf) )",computer-vision ab9a2ab5-2bba-4782-ae06-077376bd2fb0,image-classification-shift-consistency,Classification Consistency,"How often two shifts of the same image are classified the same ( Image credit: [Antialiased CNNs](https://github.com/adobe/antialiased-cnns) )",computer-vision 0eefc78f-9726-4bf1-8c62-e0dd128405f3,population-mapping,Population Mapping,,computer-vision 272c4720-de2f-4621-bcf6-43e632d7ea58,action-generation,Action Generation,,computer-vision 5c3e700d-b195-4718-9cba-0cd6287ad59a,scene-understanding,Scene Understanding,"Scene Understanding is the task of understanding the content of a scene. For instance, the iPhone has a feature that helps visually impaired people take a photo by describing what the camera sees; this is an example of Scene Understanding.",computer-vision f56ec5a0-5d8b-4d68-85f2-9c3db73c852b,emotion-classification,Emotion Classification,"Emotion classification, or emotion categorization, is the task of recognising emotions to classify them into the corresponding category. 
Given an input, classify it as 'neutral or no emotion' or as one, or more, of several given emotions that best represent the mental state of the subject's facial expression, words, and so on. Some example benchmarks include ROCStories, Many Faces of Anger (MFA), and GoEmotions. Models can be evaluated using metrics such as the Concordance Correlation Coefficient (CCC) and the Mean Squared Error (MSE).",computer-vision e289fb0f-2a18-46f4-a2a3-51f58830d1f4,real-to-cartoon-translation,Real-to-Cartoon translation,Cartoonifying images,computer-vision 00950123-5677-4b85-bd5f-072572b6448c,facial-action-unit-detection,Facial Action Unit Detection,"Facial action unit detection is the task of detecting action units from a video of a face - for example, lip tightening and cheek raising. ( Image credit: [Self-supervised Representation Learning from Videos for Facial Action Unit Detection](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Self-Supervised_Representation_Learning_From_Videos_for_Facial_Action_Unit_Detection_CVPR_2019_paper.pdf) )",computer-vision 9f00567c-0a76-451e-8054-4d8ef3f70f37,nmr-j-coupling,NMR J-coupling,https://github.com/larsbratholm/champs_kaggle,graphs 929245eb-4192-4edb-8489-0ea34a490a62,graph-clustering,Graph Clustering,"**Graph Clustering** is the process of grouping the nodes of the graph into clusters, taking into account the edge structure of the graph in such a way that there are several edges within each cluster and very few between clusters. Graph Clustering intends to partition the nodes in the graph into disjoint groups. Source: [Clustering for Graph Datasets via Gumbel Softmax ](https://arxiv.org/abs/2005.02372)",graphs 34064e4a-64cc-468a-8c69-e297af848f99,inductive-link-prediction,Inductive Link Prediction,"In inductive link prediction inference is performed on a new, unseen graph whereas classical transductive link prediction performs both training and inference on the same graph.",graphs 37da9415-9106-44be-84ee-ddbffc484aff,node-classification-on-non-homophilic,Node Classification on Non-Homophilic (Heterophilic) Graphs,"There exists a non-trivial set of graphs where graph-aware models underperform their corresponding graph-agnostic models, e.g. SGC and GCN underperform MLP with 1 layer and 2 layers. Although still controversial, people believe the performance degradation results from heterophily, i.e. there exist much more inter-class edges than inner-class edges. This task aims to evaluate models designed for non-homophilic (heterophilic) datasets.",graphs f5c27fa3-15ea-4767-aaae-01e0d4bd7dbb,approximating-betweenness-centrality-ranking,Approximating Betweenness-Centrality ranking,Betweenness-centrality is a popular measure in network analysis that aims to describe the importance of nodes in a graph. It accounts for the fraction of shortest paths passing through that node and is a key measure in many applications including community detection and network dismantling.,graphs af06664a-b819-4196-bc6d-25b75e13b93b,spectral-graph-clustering,Spectral Graph Clustering,,graphs 6c89fb53-3614-4328-8089-2999b2803a25,steiner-tree-problem,Steiner Tree Problem,"The **Steiner tree problem** is a computational problem in computer science and graph theory that involves finding the minimum weight subgraph in an undirected graph that connects a given set of terminal vertices. 
The goal of the Steiner tree problem is to minimize the total weight of the edges in the subgraph, and it is considered NP-hard, meaning that finding the optimal solution is computationally difficult.",graphs 31cb8514-09c9-4ac3-9995-ad798ad2978a,graph-sampling,Graph Sampling,Training GNNs or generating graph embeddings requires graph samples.,graphs 6324813f-26bb-4e61-8a35-1c548f0fa744,hyperedge-prediction,Hyperedge Prediction,,graphs a8596857-edb9-4c58-96d9-aa5a721882ac,heterogeneous-node-classification,Heterogeneous Node Classification,"Node classification in heterogeneous graphs, where nodes and/or edges have multiple types.",graphs 6279d4e5-178f-4ef0-af75-fdf565a94abe,collaborative-ranking,Collaborative Ranking,,graphs 1b994906-7c8c-4fc6-a0e6-1ac23ce96a55,3d-hand-pose-estimation,3D Hand Pose Estimation,Image: [Zimmerman et al](https://arxiv.org/pdf/1705.01389v3.pdf),graphs b6fce50c-1a0b-4e0e-ac0e-34cfefebef37,tree-decomposition,Tree Decomposition,"**Tree Decomposition** is a technique in graph theory and computer science for representing a graph as a tree, where each node in the tree represents a set of vertices in the original graph. The goal of tree decomposition is to divide the graph into smaller, more manageable pieces, and to use the tree to represent the relationships between these pieces.",graphs de29be95-3944-4e95-8604-538f94bec768,graph-structure-learning,Graph structure learning,Semi-supervised node classification when a graph structure is not available.,graphs b4d362bd-c024-48fe-865f-0d09c809e8e2,calibration-for-link-prediction,Calibration for Link Prediction,,graphs f989b9ca-e14c-488a-bad2-0c6ef410f802,hypergraph-embedding,hypergraph embedding,Compute useful representations of hyperedges and vertices,graphs ceb1dd95-84ce-4ecb-b911-cd8c641f9a33,graph-embedding,Graph Embedding,"Graph embeddings learn a mapping from a network to a vector space, while preserving relevant network properties. ( Image credit: [GAT](https://github.com/PetarV-/GAT) )",graphs ba9c1689-65c9-4e46-8afc-c8347e713c3f,graph-mining,Graph Mining,,graphs a93ad4ae-acee-4f35-ac79-af679ae2d9f5,topological-data-analysis,Topological Data Analysis,,graphs bfd747f5-dc62-4128-8cd5-3ed5928b3151,anchor-link-prediction,Anchor link prediction,,graphs 999d31e9-6d2f-4e82-90fc-f709e924eaaf,graph-similarity,Graph Similarity,,graphs 6e2225ad-9255-4093-b49b-d641ea0d9020,node-classification,Node Classification,"**Node Classification** is a machine learning task in graph-based data analysis, where the goal is to assign labels to nodes in a graph based on the properties of nodes and the relationships between them. **Node Classification** models aim to predict non-existing node properties (known as the target property) based on other node properties. Typical models used for node classification consist of a large family of graph neural networks. Model performance can be measured using benchmark datasets like [Cora](/dataset/cora), [Citeseer](/dataset/citeseer), and [Pubmed](/dataset/pubmed), among others, typically using Accuracy and F1. ( Image credit: [Fast Graph Representation Learning With PyTorch Geometric](https://arxiv.org/pdf/1903.02428v3.pdf) )",graphs b93bc237-30ca-4cc8-b5ce-76ced91d432a,network-community-partition,Network Community Partition,,graphs 44266c08-739e-46bf-9595-da303c6b48c2,knowledge-base-completion,Knowledge Base Completion,"Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in the knowledge base. 
A knowledge base is a collection of relational facts, often represented in the form of ""subject"", ""relation"", ""object""-triples.",graphs 6528483e-a754-4e7a-afd6-4b9d3eea0f3e,graphon-estimation,Graphon Estimation,,graphs 52f224a4-7397-4e96-9a97-6de6455b7df3,hyperedge-classification,hyperedge classification,,graphs c837c4c7-f06c-4ba3-a0b6-5ba727a94949,graph-attention,Graph Attention,,graphs 3f27e200-64f3-4005-9e64-3622a58ed82f,clustering-ensemble,Clustering Ensemble,,graphs bf0a611d-b87a-496c-a6c9-424e299f7635,subgraph-counting,Subgraph Counting,,graphs 03348fec-8d26-4b39-b021-1d3dfd2caa7d,graph-ranking,Graph Ranking,,graphs ea659489-c12b-4390-baca-9d13523e0675,graph-anomaly-detection,Graph Anomaly Detection,,graphs 1ba2c268-dfda-42f4-8b64-736788883b7c,link-prediction,Link Prediction,"**Link Prediction** is a task in graph and network analysis where the goal is to predict missing or future connections between nodes in a network. Given a partially observed network, the goal of link prediction is to infer which links are most likely to be added or missing based on the observed connections and the structure of the network. ( Image credit: [Inductive Representation Learning on Large Graphs](https://arxiv.org/pdf/1706.02216v4.pdf) )",graphs 24aaa8d1-56dd-486b-b71e-23fd09ece072,triad-prediction,Triad Prediction,,graphs f083e701-9b67-443b-9676-ffccac90cd62,local-community-detection,Local Community Detection,,graphs 615941ca-763d-4c0a-bce7-8cad118b3a31,jet-tagging,Jet Tagging,"Jet tagging is the process of identifying the type of elementary particle that initiates a ""jet"", i.e., a collimated spray of outgoing particles. It is essentially a classification task that aims to distinguish jets arising from particles of interest, such as the Higgs boson or the top quark, from other less interesting types of jets.",graphs ff9d0411-2c4e-4e65-bc7b-2295406783c3,learning-to-rank,Learning-To-Rank,"Learning to rank is the application of machine learning to build ranking models. Some common use cases for ranking models are information retrieval (e.g., web search) and news feeds application (think Twitter, Facebook, Instagram).",graphs 098d42b7-4998-45b5-93cb-a58f5508432f,dynamic-link-prediction,Dynamic Link Prediction,,graphs e561031a-1438-45e2-808a-f431681c295d,feedback-vertex-set-fvs,Feedback Vertex Set (FVS),"The **Feedback Vertex Set (FVS)** problem is a computational problem in computer science and graph theory that involves finding the smallest possible subset of vertices in an undirected graph such that removing those vertices results in a graph that is acyclic, i.e., a forest. The goal of the FVS problem is to minimize the size of the feedback vertex set, and is considered NP-hard, meaning that finding the optimal solution is computationally difficult. For directed graphs, a feedback vertex set is instead a subset of vertices whose removal results in directed acyclic graph (DAG), not necessarily a forest. 
This task can refer to either looking for a set of provably minimal size (in as little time as possible), or a heuristic algorithm that produces small solutions quickly (even though smaller sets may exist).",graphs 436e332f-2fb2-4572-8ee3-f4e3c59b2d29,graph-learning,Graph Learning,,graphs 4d5469a9-5c1d-4076-a816-a56dbc68bd00,person-centric-knowledge-graphs,Person-Centric Knowledge Graphs,,graphs 9d26a801-e369-4906-8978-10c306cb4eb0,ancestor-descendant-prediction,Ancestor-descendant prediction,"Given two entities, make a binary prediction of whether they have an ancestor-descendant relationship, based on existing and missing hierarchical edges in the graph.",graphs e5fc893f-fed8-4219-908a-22acbf071a43,link-sign-prediction,Link Sign Prediction,,graphs a7f88833-d650-45c3-af23-cae815bb3ae8,physics-informed-machine-learning,Physics-informed machine learning,Machine learning used to represent physics-based and/or engineering models,graphs befce126-28de-4a66-90e0-bb1f17934e8b,connectivity-estimation,Connectivity Estimation,,graphs dd2d7924-e21a-45e4-ad27-1286bed611d1,triple-classification,Triple Classification,"Triple classification aims to judge whether a given triple (h, r, t) is correct or not with respect to the knowledge graph.",graphs 2868e621-54d1-462a-8301-76d2107d3d19,set-to-graph-prediction,SET TO GRAPH PREDICTION,,graphs 229663f1-c3e9-4614-9b79-4a73cc4d0b01,structual-feature-correlation,Structural Feature Correlation,Expressive power of GNNs in predicting the mutual correlation of structural features.,graphs 512b9b3a-7f87-4f0d-9372-7de10930fa7c,dynamic-community-detection,Dynamic Community Detection,community detection in dynamic networks,graphs 969804bc-a657-43c5-9330-b9dd5f0ead38,twinwidth-contraction-sequence,Twinwidth Contraction Sequence,"**Twinwidth Contraction Sequence** is a concept in graph theory and computer science that refers to a sequence of graph contractions that transform a graph into a smaller, more manageable graph. The goal of the Twinwidth Contraction Sequence is to find a sequence of contractions that results in a graph with a small ""twinwidth"", which is a measure of how well the graph can be decomposed into small, connected subgraphs.",graphs e4253795-a046-4c63-a304-d637d7d6c037,set-to-graph-prediction-1,Set-to-Graph Prediction,,graphs f7a294a8-f6a5-4569-a546-5b34737c21f5,hypergraph-partitioning,hypergraph partitioning,,graphs 4144aedd-5efa-48e2-8b9c-a317fcb66a4a,graph-outlier-detection,Graph Outlier Detection,,graphs 2f499e7e-1021-4d7f-96b3-b51cd8735842,link-property-prediction,Link Property Prediction,,graphs 09923fb6-7e3f-4cc3-b9d6-2bf790e9168c,root-cause-ranking,Root Cause Ranking,Detection of causal anomalous nodes in graphs,graphs 17f001f1-86b3-4a2b-819b-12e8c29ae0c3,tree-map-layout,Tree Map Layout,Hierarchical tree visualisation by assigning sizes and positions of nodes. 
https://en.wikipedia.org/wiki/Treemapping,graphs 4bff3224-0aef-4ed1-8f7d-3133fa56edf3,graph-construction,graph construction,,graphs e1011142-5a2a-4c6e-baa2-75045aaf056f,graph-to-graph-translation,Graph-To-Graph Translation,,graphs 8a80e7e0-fe55-4187-8f2a-1e32367b5958,graph-property-prediction,Graph Property Prediction,,graphs 71a1f7e4-2a45-430e-8f92-5ed344bd73fd,hypergraph-matching,Hypergraph Matching,,graphs 506e024f-e654-4ef3-a4e1-b69e91978755,knowledge-graph-embedding,Knowledge Graph Embedding,,graphs c3f81749-61d7-4181-8187-161e58fabc69,graph-regression,Graph Regression,The regression task is similar to graph classification but using different loss function and performance metric.,graphs 2aea7050-89ad-49c6-815a-22bc2fb2e162,graph-generation,Graph Generation,"**Graph Generation** is an important research area with significant applications in drug and material designs. Source: [Graph Deconvolutional Generation ](https://arxiv.org/abs/2002.07087)",graphs 27bcfb0c-c354-4bdd-9c34-8ecf8cab027f,graph-question-answering,Graph Question Answering,,graphs 287ece7e-b89d-442e-87da-ab9eb9d9b7f2,graph-classification,Graph Classification,"**Graph Classification** is a task that involves classifying a graph-structured data into different classes or categories. Graphs are a powerful way to represent relationships and interactions between different entities, and graph classification can be applied to a wide range of applications, such as social network analysis, bioinformatics, and recommendation systems. In graph classification, the input is a graph, and the goal is to learn a classifier that can accurately predict the class of the graph. ( Image credit: [Hierarchical Graph Pooling with Structure Learning](https://github.com/cszhangzhen/HGP-SL) )",graphs 3ad06fc7-18e9-46f3-bfb1-7968e7f5a619,structural-node-embedding,Structural Node Embedding,,graphs c38bff5e-6c11-4a45-8bf3-14d65b274b9e,stochastic-block-model,Stochastic Block Model,,graphs be75dd26-159d-49d0-9c93-64daf30781f5,dynamic-graph-embedding,Dynamic graph embedding,,graphs d6988ba5-8641-49dc-9832-0f34d4418841,initial-structure-to-relaxed-energy-is2re,Initial Structure to Relaxed Energy (IS2RE),,graphs cbc17629-79fd-4522-a763-4c9872b1edec,inductive-relation-prediction,Inductive Relation Prediction,Inductive setting of the knowledge graph completion task. This requires a model to perform link prediction on an entirely new test graph with new set of entities.,graphs c44afef3-1e73-4b78-aa37-d82d004d2b4b,role-embedding,Role Embedding,,graphs 032fefd5-b07b-4db6-a7b1-0fdd43aecd50,community-search,Community Search,,graphs e87b3155-f279-4b72-8aca-3ae95a3852c1,graph-matching,Graph Matching,"**Graph Matching** is the problem of finding correspondences between two sets of vertices while preserving complex relational information among them. Since the graph structure has a strong capacity to represent objects and robustness to severe deformation and outliers, it is frequently adopted to formulate various correspondence problems in the field of computer vision. Theoretically, the Graph Matching problem can be solved by exhaustively searching the entire solution space. However, this approach is infeasible in practice because the solution space expands exponentially as the size of input data increases. For that reason, previous studies have attempted to solve the problem by using various approximation techniques. 
Source: [Consistent Multiple Graph Matching with Multi-layer Random Walks Synchronization ](https://arxiv.org/abs/1712.02575)",graphs 55b85517-16b2-4e82-bf86-ed1d3e82288b,rubik-s-cube,Rubik's Cube,Solving the Rubik's Cube is a pathfinding task on a massive implicit graph.,graphs 8cab6d6f-3d53-4ec2-b012-c3d22f597247,gene-interaction-prediction,Gene Interaction Prediction,,graphs 5392439a-aab4-480b-baf7-d28108c424c5,community-detection,Community Detection,"**Community Detection** is one of the fundamental problems in network analysis, where the goal is to find groups of nodes that are, in some sense, more similar to each other than to the other nodes. Source: [Randomized Spectral Clustering in Large-Scale Stochastic Block Models ](https://arxiv.org/abs/2002.00839)",graphs 0dba3e94-10ac-40bb-948f-2d3d4956befd,knowledge-graph-embeddings,Knowledge Graph Embeddings,,graphs aef9589b-9e57-46f5-8b9a-7d7dcb2afa3f,graph-partitioning,graph partitioning,Graph Partitioning is generally the first step of distributed graph computing tasks. The goals are load balancing and minimizing the communication volume.,graphs 2fe1de25-f01f-4dbb-8ae9-215c9c70c781,graph-nonvolutional-network,Graph Nonvolutional Network,,graphs 1503df73-bdcf-4444-8709-842b31ecb4aa,graph-reconstruction,Graph Reconstruction,,graphs 1f2937a9-1233-4423-9e6d-5743ee2a9b68,online-community-detection,Online Community Detection,,graphs 8f11cd53-501e-4a41-be59-8e655ba95c86,multi-modal-entity-alignment,Multi-modal Entity Alignment,,knowledge-base 85ece3a7-8ef1-4cbc-88e0-c69431bfbd2d,open-knowledge-graph-embedding,Open Knowledge Graph Embedding,,knowledge-base 266679ef-6add-466e-9ccd-97979e4494b5,inductive-knowledge-graph-completion,Inductive knowledge graph completion,,knowledge-base 52080e1c-6d8b-4c9f-9da9-3fa856613857,knowledge-graphs-data-curation,Knowledge Graphs Data Curation,,knowledge-base dbf9aaf2-6499-4e49-9387-007302ff2b72,ontology-matching,Ontology Matching,,knowledge-base 185be154-9473-4b8f-82d7-2c798b582170,entity-alignment,Entity Alignment,"**Entity Alignment** is the task of finding entities in two knowledge bases that refer to the same real-world object. It plays a vital role in automatically integrating multiple knowledge bases. Note: results that have incorporated machine translated entity names (introduced in the RDGCN paper) or pre-alignment name embeddings are considered to have used **extra training labels** (both are marked with ""Extra Training Data"" in the leaderboard) and **do not adhere to a comparable setting** with others that have followed the original setting of the benchmark. Source: [Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding ](https://arxiv.org/abs/1708.05045) The task of entity alignment is related to the task of [entity resolution](https://paperswithcode.com/task/entity-resolution) which focuses on matching structured entity descriptions in different contexts.",knowledge-base 3fbad695-c165-4d75-ab52-6c7724e5d97b,linear-mode-connectivity,Linear Mode Connectivity,"**Linear Mode Connectivity** refers to the property that two trained networks (for example, models obtained from different training runs or branched from a shared initialization) can be connected by a straight line in weight space along which the loss stays low, i.e., there is no significant loss barrier between them. 
Understanding linear mode connectivity helps characterize the loss landscape and the stability of training, and indicates when independently trained models can be merged by simple weight averaging.",knowledge-base 95a9f4b3-d346-485b-bf77-8afe4ca6414a,relational-pattern-learning,Relational Pattern Learning,Learning and extracting the hidden patterns among the relations in a Knowledge Graph.,knowledge-base 80239d94-f66c-4cc6-8e8f-e52423eb959e,knowledge-graph-completion,Knowledge Graph Completion,"Knowledge graphs $G$ are represented as a collection of triples $\\{(h, r, t)\\}\subseteq E\times R\times E$, where $E$ and $R$ are the entity set and relation set. The task of **Knowledge Graph Completion** is to either predict unseen relations $r$ between two existing entities: $(h, ?, t)$ or predict the tail entity $t$ given the head entity and the query relation: $(h, r, ?)$. Source: [One-Shot Relational Learning for Knowledge Graphs ](https://arxiv.org/abs/1808.09040)",knowledge-base 8f654b47-734f-4eef-bf21-7155c32733a3,complex-query-answering,Complex Query Answering,"This task is concerned with answering complex queries over incomplete knowledge graphs. In the simplest case, the task is reduced to link prediction: a 1-hop query for predicting the existence of an edge between a pair of nodes. Complex queries are concerned with other structures between nodes, such as 2-hop and 3-hop paths, and intersecting paths with intermediate variables.",knowledge-base e3e654b4-9d49-44b1-bbe1-a64016cab142,multi-hop-question-answering,Multi-hop Question Answering,,knowledge-base 2d6ca266-5923-4331-bc18-ccb53441d6ed,rdf-dataset-discovery,RDF Dataset Discovery,"Given a URI, find the RDF datasets containing this URI.",knowledge-base 23c696ec-4c06-4174-898d-8a26c9b78f98,knowledge-graphs,Knowledge Graphs,,knowledge-base 76d86994-0d62-415c-85bf-e72ae9ffe92b,ontology-embedding,Ontology Embedding,,knowledge-base d799c130-d88b-4365-bbb1-721b6fab8b47,math-word-problem-solving,Math Word Problem Solving,,knowledge-base 753f57d2-aa90-48aa-98a5-96004e0e30a6,table-annotation,Table annotation,"**Table annotation** is the task of annotating a table with terms/concepts from a knowledge graph or database schema. Table annotation is typically broken down into the following five subtasks: 1. Cell Entity Annotation ([CEA](https://paperswithcode.com/task/cell-entity-annotation)) 2. Column Type Annotation ([CTA](https://paperswithcode.com/task/column-type-annotation)) 3. Column Property Annotation ([CPA](https://paperswithcode.com/task/columns-property-annotation)) 4. [Table Type Detection](https://paperswithcode.com/task/table-type-detection) 5. [Row Annotation](https://paperswithcode.com/task/row-annotation) The [SemTab](http://www.cs.ox.ac.uk/isg/challenges/sem-tab/) challenge is closely related to the Table Annotation problem. 
It is a yearly challenge that focuses on the first three subtasks of table annotation, and its purpose is to benchmark different table annotation systems.",knowledge-base fbfecd13-2cae-4382-ac5d-148e730ae6e4,commonsense-knowledge-base-construction,Commonsense Knowledge Base Construction,,knowledge-base e7049dfc-b345-45ae-9290-8eec5ce357f7,data-integration,Data Integration,,knowledge-base 69360a39-e6c4-47ca-b565-3b312faa3542,manufacturing-simulation,Manufacturing simulation,Simulation of manufacturing systems for applying AI methods and big data analysis,knowledge-base ec8ff79d-2d9a-4071-a986-2634b3b7ed76,temporal-knowledge-graph-completion,Temporal Knowledge Graph Completion,,knowledge-base f18eefed-03a6-46c8-bdc5-78e9da7028fe,models-alignment,Models Alignment,"**Models Alignment** is the process of ensuring that multiple models used in a machine learning system are consistent with each other and aligned with the goals of the system. This involves defining clear and consistent objectives for each model, identifying and addressing any inconsistencies or biases in the data used to train each model, testing and validating each model to ensure its accuracy, and ensuring that the predictions and decisions made by each model are consistent and aligned with the overall goals of the system.",knowledge-base b2bd5d95-7174-4145-b5e1-a16d0d438e97,open-knowledge-graph-canonicalization,Open Knowledge Graph Canonicalization,"Open Information Extraction approaches lead to the creation of large knowledge bases (KBs) from the web. The problem with such methods is that their entities and relations are not canonicalized, which leads to storage of redundant and ambiguous facts. For example, an Open KB may store two triples whose subjects are *Barack Obama* and *Obama* without knowing that they refer to the same entity. Similarly, *took birth in* and *was born in* also refer to the same relation. The problem of Open KB canonicalization involves identifying groups of equivalent entities and relations in the KB. ( Image credit: [CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information](https://github.com/malllabiisc/cesi) )",knowledge-base 63733272-4bc5-4dc4-8918-d73e76a52980,causal-inference,Causal Inference,"Causal inference is the task of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. ( Image credit: [Recovery of non-linear cause-effect relationships from linearly mixed neuroimaging data](https://arxiv.org/pdf/1605.00391v2.pdf) )",knowledge-base 1338758e-ccc3-45ee-bb69-1cc57f47bb7d,breast-cancer-detection,Breast Cancer Detection,,knowledge-base 6df97967-6d9f-4243-a30a-bd5bf2081add,symbolic-regression,Symbolic Regression,"producing a mathematical expression (symbolic expression) that fits given tabular data.",knowledge-base 669dc1d6-07bc-4756-a5f2-c47cd32bc175,adversarial-attack-detection,Adversarial Attack Detection,The detection of adversarial attacks.,knowledge-base 6c207253-6ced-4495-ad78-bc76c0c11d94,causal-discovery,Causal Discovery,"( Image credit: [TCDF](https://github.com/M-Nauta/TCDF) )",knowledge-base 889b134d-b793-40f4-9a7f-25c1c50c53ec,multi-modal-knowledge-graph,Multi-modal Knowledge Graph,,knowledge-base 932bb42f-2136-4b72-b314-25daf9a4cf53,knowledge-base-completion,Knowledge Base Completion,"Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in the knowledge base. 
A knowledge base is a collection of relational facts, often represented in the form of ""subject"", ""relation"", ""object""-triples.",knowledge-base ed4a7ccb-a15d-4202-be7b-d4f81af7e96f,non-intrusive-load-monitoring,Non-Intrusive Load Monitoring,,knowledge-base 324773e6-70fa-4c8e-946b-c463bb95df31,ontology-subsumption-inferece,Ontology Subsumption Inferece,,knowledge-base e9442c1d-bce1-4999-ae6b-ac8aa263bd87,re-basin,Re-basin,,knowledge-base cbdf7c06-e1c0-42a9-b67e-7c56daf2deeb,skin-lesion-segmentation,Skin Lesion Segmentation,,medical 1931a655-9f47-4e84-851a-64f7fd0b1ab5,pulmonary-nodules-classification,Pulmonary Nodules Classification,,medical 0f8fd985-852e-44c3-9646-22a47e71b89a,muscular-movement-recognition,Muscular Movement Recognition,,medical 86024301-0ea5-4a7d-9800-8d3c47f0bb54,medical-x-ray-image-segmentation,Medical X-Ray Image Segmentation,,medical 24aa53d3-5b24-4a15-ae1e-ff9f72796600,sleep-stage-detection,Sleep Stage Detection,Human Sleep Staging into W-N1-N2-N3-REM classes from multiple or single polysomnography signals,medical 858ee87e-38f2-43e1-b9dd-448b6a7aaf07,blood-pressure-estimation,Blood pressure estimation,,medical 895c0c8e-e13a-4954-a14f-49f63eeb971e,magnetic-resonance-fingerprinting,Magnetic Resonance Fingerprinting,,medical 03340d68-05ed-4de5-8994-328607ba5f04,splice-site-prediction,Splice Site Prediction,,medical 0d946add-62d1-4b2a-97a0-e4b23f5bc566,patient-outcomes,Patient Outcomes,,medical 3dc3444c-25f4-4d97-ab8b-50029334d502,drug-discovery,Drug Discovery,"Drug discovery is the task of applying machine learning to discover new candidate drugs. ( Image credit: [A Turing Test for Molecular Generators](https://pubs.acs.org/doi/10.1021/acs.jmedchem.0c01148) )",medical af6255df-a5bd-4c80-8900-5012e2153efc,covid-variant-prediction,Covid Variant Prediction,,medical 8c1985c7-a7e8-4659-93d8-495432dd7edd,cardiac-segmentation,Cardiac Segmentation,,medical 36b19694-87a0-4c21-975d-046d6ed21118,tomography,Tomography,,medical aed6e0d5-19bc-4906-9a01-1f65dd9a750f,medical-waveform-analysis,Medical waveform analysis,"Information extraction from medical waveforms such as the electrocardiogram (ECG), arterial blood pressure (ABP) central venous pressure (CVP), photoplethysmogram (PPG, Pleth).",medical 4b7e2fce-624f-44c3-b292-35e8b16540e0,quantum-state-tomography,Quantum State Tomography,,medical c122f938-897d-4281-9b61-7422033f8f29,molecular-dynamics,Molecular Dynamics,,medical fd2f9189-85ac-495b-a559-248f876525dd,multi-focus-microscopical-images-fusion,Multi-Focus Microscopical Images Fusion,,medical 32a23164-21af-48b1-a155-53d63e1ae7cb,photoplethysmogram-simulation,Photoplethysmogram simulation,Simulating photoplethysmogram (PPG) signals,medical bfda94ad-0c2f-48dc-b6fd-890aec73040d,medical-image-retrieval,Medical Image Retrieval,,medical 21b95316-3315-4c4e-b95f-b3fc476ed2bf,myocardial-infarction-detection,Myocardial infarction detection,,medical 91418bcd-e67c-48bf-8c9a-b3d102a047c1,mammogram,Mammogram,,medical c972233a-cb27-478d-8fcc-e46b44d38262,blood-cell-detection,Blood Cell Detection,,medical ffe5d8ba-ef7f-4155-a331-43408302d995,deformable-medical-image-registration,Deformable Medical Image Registration,,medical bdd5db13-57a6-4c7f-b263-bd5ca70a0a06,seizure-prediction,Seizure prediction,,medical bbf89c57-2d54-4385-8707-487cfde362af,blind-docking,Blind Docking,,medical 7e7e9155-4bc0-43b2-9643-96f7210fe018,cervical-nucleus-detection,Cervical Nucleus Detection,,medical a80f6fc4-3d64-4ce2-afba-db683d7b3874,epilepsy-prediction,Epilepsy Prediction,,medical 
51f3dcf0-2261-4fb2-84be-f09998cac4c6,malaria-risk-exposure-prediction,Malaria Risk Exposure Prediction,,medical 02687c9c-9beb-4e1b-808d-d12ca6f66fca,multimodal-sleep-stage-detection,Multimodal Sleep Stage Detection,"Using multiple modalities such as EEG+EOG, EEG+HR instead of just relying on EEG (polysomnography)",medical f5e3d397-f116-44cd-9ed5-f5f17f7a6014,spo2-estimation,SpO2 estimation,SpO2 estimation,medical 7fae6bd1-9f4c-4de9-b056-44928b787a81,mortality-prediction,Mortality Prediction,"( Image credit: [Early hospital mortality prediction using vital signals](https://arxiv.org/pdf/1803.06589v2.pdf) )",medical b1275025-4c3d-45a5-939b-c337c400ac77,alzheimer-s-disease-detection,alzheimer's disease detection,classification,medical f774a468-6da2-4488-8a05-fa74a17b72e0,cervical-cancer-biopsy-identification,Cervical cancer biopsy identification,,medical 86037b83-6798-44ea-ae08-fa0fe5b2213f,domain-motif-prediction,Domain/Motif Prediction,"The domain prediction task which tasks language models with the identification of the domain type within a given protein sequence, which is defined as a compact folded three-dimensional structure.",medical 18cd7f4c-938f-4e37-b295-074edac2f8d9,finding-pulmonary-nodules-in-large-scale-ct,Finding Pulmonary Nodules In Large-Scale Ct Images,,medical 312d7fd0-792c-4e7b-95fc-e151681233dd,protein-language-model,Protein Language Model,,medical 50ae3ead-6c6d-4a2f-8202-064c17564326,sleep-quality-prediction,Sleep Quality,"( Image credit: [DeepSleep](https://github.com/GuanLab/DeepSleep) )",medical b2c32583-bc5d-4439-85dd-b35f28d687f5,joint-vertebrae-identification-and,Joint Vertebrae Identification And Localization In Spinal Ct Images,,medical 2ec02d87-6a92-42d7-84cb-8e9afe070585,pulmonary-embolism-detection,Pulmonary Embolism Detection,,medical c92af08a-bb01-4e45-ac35-51553324ad8b,acute-stroke-lesion-segmentation,Acute Stroke Lesion Segmentation,,medical 218d463e-81ec-4f9b-875f-108b909f8644,3d-medical-imaging-segmentation,3D Medical Imaging Segmentation,"3D medical imaging segmentation is the task of segmenting medical objects of interest from 3D medical imaging. 
( Image credit: [Elastic Boundary Projection for 3D Medical Image Segmentation](https://github.com/twni2016/Elastic-Boundary-Projection) )",medical 49183351-a280-43d4-82fa-2e4896175b8b,whole-mammogram-classification,Whole Mammogram Classification,,medical bc3bf301-82d9-42e1-a042-858e3d7d2746,cervical-spondylosis-identification,Cervical Spondylosis Identification,,medical 1b0fb14e-0db2-4e78-ac10-3fe95d174d71,photoplethysmography-ppg-heart-rate,Photoplethysmography (PPG) heart rate estimation,Estimating heart rate from the photoplethysmogram (PPG) signal,medical 56c9b9ae-ceb1-4317-b631-8a8a8bdefe30,chemical-reaction-prediction,Chemical Reaction Prediction,,medical a9995978-b74f-4ea0-ad25-d79fdf75cea0,breast-density-classification,breast density classification,,medical 01c4f12d-865f-42ae-83de-4a4219acb1ba,skin-cancer-segmentation,Skin Cancer Segmentation,,medical 24625308-4ee0-4a6d-a4d4-7adc48b239fb,atrial-fibrillation,Atrial Fibrillation,,medical 2c5c6684-c9f8-4b89-bece-3da2ba4b4e55,sequential-diagnosis,Sequential Diagnosis,,medical d41b1c59-70d7-4f12-84b2-bb33ef6f2b5f,molecule-captioning,Molecule Captioning,,medical 32ff5a36-1071-4c80-bcda-3fe6786ce21b,protein-secondary-structure-prediction,Protein Secondary Structure Prediction,,medical 76b34dc6-bd2c-4ab5-8107-1b76462301b0,bladder-segmentation,Bladder Segmentation,,medical 1bf98869-8aa3-43bd-8d84-aa7743dc0660,promoter-detection,Promoter Detection,,medical 3a5beb24-bbef-45ad-b85a-a758ee005bea,photoplethysmography-ppg,Photoplethysmography (PPG),"**Photoplethysmography (PPG)** is a non-invasive light-based method that has been used since the 1930s for monitoring cardiovascular activity. Source: [Non-contact transmittance photoplethysmographic imaging (PPGI) for long-distance cardiovascular monitoring ](https://arxiv.org/abs/1503.06775)",medical 1948958f-f4d0-4d81-877c-cb212995fd5b,k-complex-detection,K-complex detection,,medical ebdd5ff3-01ec-4138-bd52-2936b599fdba,skin-cancer-classification,Skin Cancer Classification,,medical 0506d4d3-b069-40c2-aea2-096b583b78a8,protein-design,Protein Design,"Formally, given the design requirements of users, models are required to generate protein amino acid sequences that align with those requirements.",medical f0dbe7f3-5ae0-4366-8b8c-6a5dffbfd165,chemical-entity-recognition,Chemical Entity Recognition,"Chemical Entity Recognition (CER) is a fundamental task in biomedical text mining and Natural Language Processing (NLP). It involves the identification and classification of chemical entities in textual data, such as scientific literature. These entities can encompass a broad range of concepts including chemical compounds, drugs, elements, ions or functional groups. Given the complexity and variety of chemical nomenclature, the CER task represents a significant challenge for LLMs, and their performance in this task can provide important insights into their overall capabilities in the biomedical domain.",medical 330024fc-fafa-4f86-8ddc-409d4708112c,breast-cancer-detection,Breast Cancer Detection,,medical 32b61c74-34d0-45df-85d6-aaa796ac9913,forward-reaction-prediction,Forward reaction prediction,"Forward reaction prediction pertains to the anticipatory determination of the probable product(s) of a chemical reaction, given specific reactants and reagents. 
This facilitates the optimization of research and development methodologies, curbs the extent of experimental speculation, and endorses greener chemistry practices by mitigating waste production.",medical e866ba1d-5df4-4e01-8a8e-05c697c6de40,diabetic-foot-ulcer-detection,Diabetic Foot Ulcer Detection,,medical c313f578-6c75-497e-9efc-f494de8941e0,knee-osteoarthritis-prediction,Knee Osteoarthritis Prediction,,medical 557a84b9-e60c-41b8-9165-3e9a6342f610,muscle-force-prediction,Muscle Force Prediction,,medical 9cacbb05-8d35-4339-a2fd-efc0ffeb2e6e,synthesizing-multi-parameter-magnetic,Synthesizing Multi-Parameter Magnetic Resonance Imaging (Mp-Mri) Data,,medical c3f2d64a-7d3e-4f67-a048-20d6f16dfd71,emg-signal-prediction,EMG Signal Prediction,,medical 56f6151a-9b4f-40d9-a274-ffca45fc6f06,heart-rate-variability,Heart Rate Variability,Heart rate variability (HRV) is the physiological phenomenon of variation in the time interval between heartbeats. It is measured by the variation in the beat-to-beat interval.,medical 0fac0551-d882-446c-874c-d8393b079ed2,lung-cancer-diagnosis,Lung Cancer Diagnosis,,medical 4a885b0b-1835-4732-a936-ad9100a629df,shadow-confidence-maps-in-ultrasound-imaging,Shadow Confidence Maps In Ultrasound Imaging,,medical f3223822-ee9f-43ad-a78d-4e7b7876286e,covid-19-detection,COVID-19 Diagnosis,Covid-19 Diagnosis is the task of diagnosing the presence of COVID-19 in an individual with machine learning.,medical 26a31773-279e-460e-886e-aaee2ffc86b7,mental-arithmetic-task,mental arithmetic task,,medical 3acf8db7-b9fe-4dec-ba90-edc94424510d,skin-lesion-classification,Skin Lesion Classification,,medical a98aecfe-8176-4c53-9908-e4c9a735eb0d,transcription-factor-binding-site-prediction,Transcription Factor Binding Site Prediction,,medical 68908322-60f0-4275-8a32-3ec6ec0b7796,medical-super-resolution,Medical Super-Resolution,,medical 7a48f15f-1156-438c-976f-ebcf10c8e50c,medical-procedure,Medical Procedure,Predicting medical procedures performed during a hospital admission.,medical eba75aad-b384-4d66-9fbf-3f694bf3c89b,ischemic-stroke-lesion-segmentation,Ischemic Stroke Lesion Segmentation,,medical c89c8400-4dca-4cb8-8b6f-a35037bea600,chemical-protein-interaction-extraction,Chemical-Protein Interaction Extraction,"The models are presented with excerpts from scientific literature and are required to not only identify distinct chemicals within the text but also to discern the specific nature of the interactions between them. This could involve, for instance, determining regulatory relationships between identified ligands and proteins.",medical d92d203a-0ab9-4933-8533-e0743d3ffd46,diffeomorphic-medical-image-registration,Diffeomorphic Medical Image Registration,"Diffeomorphic mapping is the underlying technology for mapping and analyzing information measured in human anatomical coordinate systems which have been measured via Medical imaging. Diffeomorphic mapping is a broad term that actually refers to a number of different algorithms, processes, and methods. It is attached to many operations and has many applications for analysis and visualization. Diffeomorphic mapping can be used to relate various sources of information which are indexed as a function of spatial position as the key index variable. Diffeomorphisms are by their Latin root structure preserving transformations, which are in turn differentiable and therefore smooth, allowing for the calculation of metric based quantities such as arc length and surface areas. 
Spatial location and extents in human anatomical coordinate systems can be recorded via a variety of Medical imaging modalities, generally termed multi-modal medical imagery, providing either scalar and or vector quantities at each spatial location. ( Image credit: [Quicksilver](https://arxiv.org/pdf/1703.10908.pdf) )",medical 62ab1d37-7577-4e80-9ed6-1d419139d8a6,colorectal-gland-segmentation,Colorectal Gland Segmentation:,,medical 5816c0e3-8efc-4fff-9f3b-3e07d1210dc4,cbct-artifact-reduction,Cbct Artifact Reduction,,medical a5a2c9b8-3266-474a-97ed-925737cb113e,ventricular-fibrillation-detection,Ventricular fibrillation detection,,medical bb9e78bb-9c75-445a-9ab2-f66a1662914b,protein-complex-prediction,Protein complex prediction,,medical 528aad91-efc6-4cbf-a142-6e8b80dee29d,splenomegaly-segmentation-on-multi-modal-mri,Splenomegaly Segmentation On Multi-Modal Mri,,medical 3de1c641-4bb2-4601-96b8-3214d229d2a8,arrhythmia-detection,Arrhythmia Detection,,medical 9d6361c4-02c2-41b3-991e-19055241327f,breast-tumour-classification,Breast Tumour Classification,,medical bba5b8fb-a155-4df7-bf0f-d52a00fc3ee1,medical-concept-normalization,Medical Concept Normalization,,medical d5ebf31e-7c51-425d-8fec-de7c88341efc,multi-diseases-detection,Multi Diseases Detection,,medical 0b564d7e-38cb-40b3-9610-f734c55884c9,semantic-segmentation-of-orthoimagery,Semantic Segmentation Of Orthoimagery,,medical f0018755-97b0-48f8-9a54-d3224e006c02,drug-response-prediction,Drug Response Prediction,,medical cd52b897-cb4c-430c-9b66-f27d147a3487,medical-report-generation,Medical Report Generation,"Medical report generation (MRG) is a task which focus on training AI to automatically generate professional report according the input image data. This can help clinicians make faster and more accurate decision since the task itself is both time consuming and error prone even for experienced doctors. Deep neural network and transformer based architecture are currently the most popular methods for this certain task, however, when we try to transfer out pre-trained model into this certain domain, their performance always degrade. The following are some of the reasons why RSG is hard for pre-trained models: * Language datasets in a particular domain can sometimes be quite different from the large number of datasets available on the Internet * During the fine-tuning phase, datasets in the medical field are often unevenly distributed More recently, multi-modal learning and contrastive learning have shown some inspiring results in this field, but it's still challenging and requires further attention. 
Here are some additional readings to go deeper on the task: * On the Automatic Generation of Medical Imaging Reports [ https://doi.org/10.48550/arXiv.1711.08195](https://doi.org/10.48550/arXiv.1711.08195) * A scoping review of transfer learning research on medical image analysis using ImageNet [ https://arxiv.org/abs/2004.13175](https://arxiv.org/abs/2004.13175) * A Survey on Incorporating Domain Knowledge into Deep Learning for Medical Image Analysis [ https://arxiv.org/abs/2004.12150]( https://arxiv.org/abs/2004.12150) (Image credit : Transformers in Medical Imaging: A Survey)",medical 1ff2cfb7-b329-4818-8fd2-0a1c80c69533,immune-repertoire-classification,Immune Repertoire Classification,,medical 9bb00489-d883-4970-a83c-2e5c54f9070f,lifetime-image-denoising,lifetime image denoising,,medical 0e711876-d192-4979-93ed-2c655cab6360,medical-relation-extraction,Medical Relation Extraction,Biomedical relation extraction is the task of detecting and classifying semantic relationships from biomedical text.,medical 1ec7cdc6-66a1-4671-ae46-530b8c444238,als-detection,ALS Detection,,medical 2a71a43b-eba6-4a67-a826-6aedaa8807fe,brain-decoding,Brain Decoding,"**Motor Brain Decoding** is fundamental task for building motor brain computer interfaces (BCI). Progress in predicting finger movements based on brain activity allows us to restore motor functions and improve rehabilitation process of patients.",medical f0086c95-6ee5-4a91-83f6-ae3dd3859161,seizure-detection,Seizure Detection,"**Seizure Detection** is a binary supervised classification problem with the aim of classifying between seizure and non-seizure states of a patient. Source: [ResOT: Resource-Efficient Oblique Trees for Neural Signal Classification ](https://arxiv.org/abs/2006.07900)",medical 7cbbbcb6-05d7-461e-9802-1e152c52aa14,catalytic-activity-prediction,Catalytic activity prediction,"The EC number, a numerical classification system for enzymes hinging on the chemical reactions they catalyze, is substituted with the corresponding reaction. 
This substitution aims to leverage the tacit knowledge ingrained in pre-trained language models, thereby encouraging the model to predict the reaction itself rather than the mere EC number.",medical 37b45a9e-9f69-46f7-b06b-d7f3d0694ed0,surgical-skills-evaluation,Surgical Skills Evaluation,The task is to classify surgical skills using data that is recorded during the surgical intervention.,medical f26c488a-fdf9-4db1-b431-ca44660b1949,tumour-classification,Tumour Classification,,medical 6fda04eb-771d-49e3-b7a4-3c5db9a4bee9,remaining-length-of-stay,Remaining Length of Stay,Continuous prediction of the remaining ICU stay duration.,medical 7c9ef9db-12e3-4ffd-a4cb-cf5ee0a927bb,dna-analysis,DNA analysis,,medical 504b8a6c-2899-47f2-b84c-43b6faf3bfe5,optic-disc-detection,Optic Disc Detection,Region proposal for optic disc,medical 2a5ef865-a8d6-479e-82fb-065430a89133,optic-cup-segmentation,Optic Cup Segmentation,"Optic cup segmentation, concentric with optic disc, useful for glaucoma management (ophthalmology)",medical 8d7fcee2-2ee4-4d1c-a56c-3b59e47a1456,transcription-factor-binding-site-prediction-1,Transcription Factor Binding Site Prediction (Human),,medical c6e033da-d0d6-48c5-84aa-c109948cd059,multi-label-classification-of-biomedical,Multi-Label Classification Of Biomedical Texts,,medical 2a168b94-41c5-4981-8f54-f82a8e161fe7,lung-nodule-detection,Lung Nodule Detection,,medical dc0e2e0f-87fc-493f-8763-5e8b95bd8d9c,synthetic-data-generation,Synthetic Data Generation,The generation of tabular data by any means possible.,medical 05d8180b-25dd-4656-b14f-09d8a40a1054,multiple-sequence-alignment,Multiple Sequence Alignment,,medical 35d19f95-3f2c-49bd-a871-512231548ea7,ecg-qrs-detection,ECG QRS Detection,,medical caa2e04b-0d9a-4516-9c89-1f10eddadd05,breast-cancer-histology-image-classification,Breast Cancer Histology Image Classification,,medical d3790dd5-424d-4944-aa8b-3f4dd8ad16b1,single-cell-modeling,Single-cell modeling,Single Cell RNA sequencing (scRNAseq) revolutionized our understanding of the fundamental of life sciences. The technology enables an unprecedented resolution to study heterogeneity in cell populations and their functionalities.,medical 03b70c7d-a7c2-43bb-9403-dfafa9a4b6c0,emg-gesture-recognition,EMG Gesture Recognition,Electromyographic Gesture Recognition,medical 3f337c0a-5566-4626-88e3-2c982d4760c0,radiologist-binary-classification,Radiologist Binary Classification,"This task measures a radiologist's performance on distinguishing between generated (e.g. with a GAN, VAE, etc.) 
and real images, ascribing to the high visual quality of the synthesized images, and to their potential use in advancing and facilitating downstream medical tasks.",medical 73e9095b-a5e3-49f1-bd2d-43a36eb81dde,disease-trajectory-forecasting,Disease Trajectory Forecasting,,medical 7bb303d7-5b32-430b-ad39-5e9aa8f92afc,multi-tissue-nucleus-segmentation,Multi-tissue Nucleus Segmentation,,medical 39397619-73e3-4e66-afa3-d58ddf9a1a1a,prediction-of-cancer-cell-line-sensitivity,Prediction Of Cancer Cell Line Sensitivity,,medical 8b53a1ae-4656-4c53-9e25-d83f99467d49,nuclear-segmentation,Nuclear Segmentation,,medical efbb71fb-fdce-4723-9a27-1f43bbb001d1,respiratory-motion-forecasting,Respiratory motion forecasting,Respiratory motion forecasting to compensate for the latency of the radiotherapy treatment systems and target more accurately chest tumors.,medical e5a0c194-035e-4879-be84-2d4fd0a1f4bb,brain-image-segmentation,Brain Image Segmentation,,medical 6a96ed3a-a673-41ec-ac2d-e5533a7e9b28,infant-brain-mri-segmentation,Infant Brain Mri Segmentation,,medical 0f15b43b-8f23-4996-98e3-63acb861eb8b,brain-segmentation,Brain Segmentation,"( Image credit: [3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study](https://github.com/josedolz/LiviaNET) )",medical b2f2c9ee-b823-4546-8db1-091d2fc27019,mapping-of-lung-nodules-in-low-dose-ct-images,Mapping Of Lung Nodules In Low-Dose Ct Images,,medical 2705d2af-f061-4784-8b26-1658a360bff7,organ-detection,Organ Detection,,medical 86ed92c1-5d45-4ff5-96fe-d97869fe207b,heart-rate-estimation,Heart rate estimation,RR interval detection and R peak detection from QRS complex,medical d279aca6-b0ad-4fa2-b867-6a690d0a433d,transcription-factor-binding-site-prediction-2,Transcription Factor Binding Site Prediction (Mouse),,medical 68e060f5-51e2-4dd1-b658-af0c1fb6b12e,cancer,Cancer,,medical 57d5f3c0-1d0f-43bf-967e-70498e588c54,computational-phenotyping,Computational Phenotyping,"**Computational Phenotyping** is the process of transforming the noisy, massive Electronic Health Record (EHR) data into meaningful medical concepts that can be used to predict the risk of disease for an individual, or the response to drug therapy. Source: [Privacy-Preserving Tensor Factorization for Collaborative Health Data Analysis ](https://arxiv.org/abs/1908.09888)",medical b6f7f0bd-7532-489a-88b2-123c174a4237,molecular-docking,Molecular Docking,"Predicting the binding structure of a small molecule ligand to a protein, which is critical to drug design. Description from: [DiffDock: Diffusion Steps, Twists, and Turns for Molecular Docking](https://paperswithcode.com/paper/diffdock-diffusion-steps-twists-and-turns-for)",medical 7c0f0739-7257-4afb-a0ef-ef65d8551276,retrosynthesis,Retrosynthesis,"Retrosynthetic analysis is a pivotal synthetic methodology in organic chemistry that employs a reverse-engineering approach, initiating from the target compound and retroactively tracing potential synthesis routes and precursor molecules. 
This technique proves instrumental in sculpting efficient synthetic strategies for intricate molecules, thus catalyzing the evolution and progression of novel pharmaceuticals and materials.",medical 08abfb7f-7e24-4c60-846a-dc8d2eb3cd31,iris-segmentation,Iris Segmentation,,medical 66afc48b-8c9b-4d44-bab4-181aa44a6578,discovery-of-integrative-cancer-subtypes,Discovery Of Integrative Cancer Subtypes,,medical b519b280-9fb4-47ae-bb77-c1295015c6a5,icu-mortality,ICU Mortality,Prediction of a patient mortality in the Intensive Care Unit (ICU) given its first hours of Electronic Health Record (EHR).,medical 7c7d124c-b919-4985-92cb-d4e87e301884,medical-code-prediction,Medical Code Prediction,"Context: Prediction of medical codes from clinical notes is both a practical and essential need for every healthcare delivery organization within current medical systems. Automating annotation will save significant time and excessive effort by human coders today. A new milestone will mark a meaningful step toward fully Autonomous Medical Coding in machines reaching parity with human coders' performance in medical code prediction. Question: What exactly is the medical code prediction problem? Answer: Clinical notes contain much information about what precisely happened during the patient's entire stay. And those clinical notes (e.g., discharge summary) is typically long, loosely structured, consists of medical domain language, and sometimes riddled with spelling errors. So, it's a highly multi-label classification problem, and the forthcoming ICD-11 standard will add more complexity to the problem! The medical code prediction problem is to annotate this clinical note with multiple codes subset from nearly 70K total codes (in the current ICD-10 system, for example).",medical 42e27715-2964-44ba-9ff6-d6754fe3cfb0,functional-description-generation,Functional Description Generation,"The functional description generation task, which not only evaluates the reasoning capability of the language model in determining the function of a protein sequence but also assesses the efficacy of the language model's text generation.",medical 075b6209-8046-408f-838b-4f1d97597aa6,medical-image-classification,Medical Image Classification,"**Medical Image Classification** is a task in medical image analysis that involves classifying medical images, such as X-rays, MRI scans, and CT scans, into different categories based on the type of image or the presence of specific structures or diseases. The goal is to use computer algorithms to automatically identify and classify medical images based on their content, which can help in diagnosis, treatment planning, and disease monitoring.",medical 8e670f17-d3ae-463c-bf31-f27c4f1123f9,property-prediction,Property Prediction,"Property prediction involves forecasting or estimating a molecule's inherent physical and chemical properties based on information derived from its structural characteristics. It facilitates high-throughput evaluation of an extensive array of molecular properties, enabling the virtual screening of compounds. 
Additionally, it provides the means to predict the unknown attributes of new molecules, thereby bolstering research efficiency and reducing development times.",medical 03200d0f-b7c4-44bc-8ec3-7800186d9ddd,photoplethysmography-ppg-beat-detection,Photoplethysmography (PPG) beat detection,Detecting heartbeats in the photoplethysmogram (PPG) signal,medical 76bd5866-0421-43ee-961a-782d7a49da0c,motion-correction-in-multishot-mri,Motion Correction In Multishot Mri,,medical 6f2a0129-959a-4ecb-97f8-b5b85d38fdd2,reagent-prediction,Reagent Prediction,"Reagent prediction endeavors to ascertain the suitable catalysts, solvents, or ancillary substances required for a specific chemical reaction. This endeavor facilitates chemists in uncovering novel reaction types and mechanisms, identifying more optimal or eco-friendly reaction conditions, and ultimately streamlining the comprehensive chemical process to attain maximal cost-effectiveness and environmental stewardship.",medical 3ddc8df6-70c9-425f-a8dc-e5d883954350,molecule-interpretation,Molecule Interpretation,,medical 2ffc9d78-365d-433c-9e9d-fc516a78f4be,3d-molecule-generation,3D Molecule Generation,,medical c962652c-4039-45cd-812b-5bd2e51625e5,muscle-tendon-junction-identification,Muscle Tendon Junction Identification,,medical e2cde90b-cde0-49d8-ab9d-ab4eee26e55c,fovea-detection,Fovea Detection,,medical 5c8440e6-5424-4d96-9529-02032995f37d,noise-estimation,Noise Estimation,,medical 626f1aa8-c188-4972-bb38-b5bb2a16109a,diabetic-retinopathy-detection,Diabetic Retinopathy Detection,,medical 69907402-3557-4598-9689-f6ac3a2e4773,optic-cup-detection,Optic Cup Detection,Region proposal for optic cup,medical c96d1382-333c-4929-a8a8-5b2736df5b97,lung-disease-classification,Lung Disease Classification,,medical e8a9a96b-9963-4b3e-8101-2a7306995a6e,genome-understanding,Genome Understanding,,medical 29e94e33-d799-48f7-9ff1-f71d2824a7ca,core-promoter-detection,Core Promoter Detection,,medical 82c9ee9d-e6bd-4cfe-a23f-76134f0e877b,epigenetic-marks-prediction,Epigenetic Marks Prediction,,medical eca46ff6-028b-48a3-bbd8-2f3663b6e50e,ecg-classification,ECG Classification,,medical a324128f-d9fe-4964-9f17-3a0f6e283d34,medical-image-registration,Medical Image Registration,"Image registration, also known as image fusion or image matching, is the process of aligning two or more images based on image appearances. **Medical Image Registration** seeks to find an optimal spatial transformation that best aligns the underlying anatomical structures. Medical Image Registration is used in many clinical applications such as image guidance, motion tracking, segmentation, dose accumulation, image reconstruction and so on. Medical Image Registration is a broad topic which can be grouped from various perspectives. From input image point of view, registration methods can be divided into unimodal, multimodal, interpatient, intra-patient (e.g. same- or different-day) registration. From deformation model point of view, registration methods can be divided in to rigid, affine and deformable methods. From region of interest (ROI) perspective, registration methods can be grouped according to anatomical sites such as brain, lung registration and so on. From image pair dimension perspective, registration methods can be divided into 3D to 3D, 3D to 2D and 2D to 2D/3D. 
Source: [Deep Learning in Medical Image Registration: A Review ](https://arxiv.org/abs/1912.12318)",medical 556b7472-bae9-42be-9615-d2ffcf35be6a,readmission-prediction,Readmission Prediction,,medical ef504cad-f5c4-4d65-baa6-d926c37c42b4,sleep-staging,Sleep Staging,Human Sleep Staging into W-R-N or W-R-L-D classes from multiple or single polysomnography signals,medical 599cdb5f-afcb-4879-aefd-aacde29e17eb,protein-function-prediction,Protein Function Prediction,"For GO terms prediction, given the specific function prediction instruction and a protein sequence, models characterize the protein functions using the GO terms presented in three different domains (cellular component, biological process, and molecular function).",medical c1ffa69b-47ab-4b75-8352-1eb5b2b58682,spindle-detection,Spindle Detection,,medical 0cd2bbaa-91ce-4034-9cf2-d42f99220286,epidemiology,Epidemiology,"**Epidemiology** is a scientific discipline that provides reliable knowledge for clinical medicine focusing on prevention, diagnosis and treatment of diseases. Research in Epidemiology aims at characterizing risk factors for the outbreak of diseases and at evaluating the efficiency of certain treatment strategies, e.g., to compare a new treatment with an established gold standard. This research is strongly hypothesis-driven and statistical analysis is the major tool for epidemiologists so far. Correlations between genetic factors, environmental factors, life style-related parameters, age and diseases are analyzed. Source: [Visual Analytics of Image-Centric Cohort Studies in Epidemiology ](https://arxiv.org/abs/1501.04009)",medical e233c705-8d6d-40f7-af16-0d08c4dbd5e9,cancer-metastasis-detection,Cancer Metastasis Detection,,medical 4ac1afc0-4565-433c-bf4e-0d4dd7761034,lung-nodule-segmentation,Lung Nodule Segmentation,,medical 5e6a2acf-2965-4c3d-b549-8ecd71a7bb49,birl-cima,BIRL,"BIRL: Benchmark on Image Registration methods with Landmark validation, in particular, Biomedical image registration on WSI microscopy images of a multi-strain histology tissue sample.",medical 9a52ced1-e40b-43f9-9679-e19706f10ab0,brain-ventricle-localization-and-segmentation,Brain Ventricle Localization And Segmentation In 3D Ultrasound Images,,medical 606d3989-2987-443a-b51e-c00412476294,acne-severity-grading,Acne Severity Grading,,medical 7297d046-a815-4978-b284-2b78ad99f5a4,registration-of-sparse-clinical-images,Registration Of Sparse Clinical Images,,medical 9654b7c4-e3f4-4163-8373-85d2f03dd620,medical-image-segmentation,Medical Image Segmentation,"**Medical Image Segmentation** is a computer vision task that involves dividing an medical image into multiple segments, where each segment represents a different object or structure of interest in the image. The goal of medical image segmentation is to provide a precise and accurate representation of the objects of interest within the image, typically for the purpose of diagnosis, treatment planning, and quantitative analysis. ( Image credit: [IVD-Net](https://github.com/josedolz/IVD-Net) )",medical 5df862ef-db61-447a-bfff-048e0321c7b0,chemical-disease-interaction-extraction,Chemical-Disease Interaction Extraction,"The goal of this task is to discern the relationships between chemicals and diseases from given medical literature, a concept known as chemical-induced disease (CID) relations. 
These CID relations are paramount in biomedical research, playing crucial roles in areas such as drug discovery, toxicology, and disease etiology.",medical 762df2b1-e9e0-4b1e-bb6a-fcfa855c2c30,volumetric-medical-image-segmentation,Volumetric Medical Image Segmentation,,medical 2ad5090b-f00c-4282-8c94-5e7e6073f5bf,nuclei-classification,Nuclei Classification,,medical d1011810-56ed-4ca6-b380-0bf6f71dbf65,pancreas-segmentation,Pancreas Segmentation,"Pancreas segmentation is the task of segmenting out the pancreas from medical imaging. Convolutional neural network",medical fd0b01d7-0b54-4274-ac4b-f67cd99902bc,medial-knee-jrf-prediction,Medial knee JRF Prediction,,medical 753321ee-81a3-4f9a-9d4c-e0f42291b374,liver-segmentation,Liver Segmentation,,medical 18fbc86a-79be-48d3-9753-7bd9e40a2bbe,x-ray,X-Ray,,medical fcf1b0df-7075-4ae7-9b38-e4f09c61ca02,population-assignment,Population Assignment,,medical 78b71f83-6f40-4bb0-82d1-9cf69d2fb6bd,cell-segmentation,Cell Segmentation,"**Cell Segmentation** is a task of splitting a microscopic image domain into segments, which represent individual instances of cells. It is a fundamental step in many biomedical studies, and it is regarded as a cornerstone of image-based cellular research. Cellular morphology is an indicator of a physiological state of the cell, and a well-segmented image can capture biologically relevant morphological information. Source: [Cell Segmentation by Combining Marker-controlled Watershed and Deep Learning ](https://arxiv.org/abs/2004.01607)",medical 5429405e-8543-433c-b0c9-9ae142aa2db9,participant-intervention-comparison-outcome,Participant Intervention Comparison Outcome Extraction,"PICO recognition is an information extraction task for identifying Participant, Intervention, Comparator, and Outcome (PICO elements) information from clinical literature.",medical de59695f-e65a-450a-a1e3-246767e0d4eb,sleep-apnea-detection,Sleep apnea detection,,medical 292c92ed-c562-4dac-9cc3-c6ff5a63e9a7,ecg-risk-stratification,Ecg Risk Stratification,,medical 52043111-8085-4bcc-a729-89b83814cae4,medical-diagnosis,Medical Diagnosis,"**Medical Diagnosis** is the process of identifying the disease a patient is affected by, based on the assessment of specific risk factors, signs, symptoms and results of exams. 
Source: [A probabilistic network for the diagnosis of acute cardiopulmonary diseases ](https://arxiv.org/abs/1609.06864)",medical 96f623c3-8f52-4697-8ed4-022d4c04fb44,breast-mass-segmentation-in-whole-mammograms,Breast Mass Segmentation In Whole Mammograms,,medical 791a26b9-ae59-45ee-b2fc-64df0547540d,skin-lesion-identification,Skin Lesion Identification,,medical 847603af-bddc-446f-8752-e3921f051568,pain-intensity-regression,Pain Intensity Regression,,medical 0d98b42f-1844-4f8f-938e-6e48859db6fb,classification-of-age-related-macular,Age-Related Macular Degeneration Classification,,medical 16cdf02a-892d-4e9d-8c34-44091e7b7e5a,respiratory-rate-estimation,Respiratory Rate Estimation,,medical ba0704dc-cff1-403a-85cf-0cad1f3ee3f5,automated-pulmonary-nodule-detection-and,Automated Pulmonary Nodule Detection And Classification,,medical d033b4d2-97b1-4a3e-85be-8e63244d5e06,automatic-liver-and-tumor-segmentation,Automatic Liver And Tumor Segmentation,,medical 444ac0be-970c-4a79-b9d1-04c7da811074,sleep-micro-event-detection,Sleep Micro-event detection,,medical 0154960c-893d-4862-b9cb-84504a0f6e4e,low-dose-x-ray-ct-reconstruction,Low-Dose X-Ray Ct Reconstruction,,medical a21f8a29-f944-467a-a833-a265862d3d9c,medical-image-generation,Medical Image Generation,"Medical image generation is the task of synthesising new medical images. ( Image credit: [Towards Adversarial Retinal Image Synthesis](https://arxiv.org/pdf/1701.08974v1.pdf) )",medical ce2fd312-cb47-4359-a7af-9aafabed47fd,eeg-decoding,Eeg Decoding,**EEG Decoding** - extracting useful information directly from EEG data.,medical 5f72b184-1047-427c-b4a0-58a9e2767872,brain-tumor-segmentation,Brain Tumor Segmentation,"**Brain Tumor Segmentation** is a medical image analysis task that involves the separation of brain tumors from normal brain tissue in magnetic resonance imaging (MRI) scans. The goal of brain tumor segmentation is to produce a binary or multi-class segmentation map that accurately reflects the location and extent of the tumor. ( Image credit: [Brain Tumor Segmentation with Deep Neural Networks](https://github.com/naldeborgh7575/brain_segmentation) )",medical 724bca85-d162-4df9-9cdb-e803d925788e,length-of-stay-prediction,Length-of-Stay prediction,,medical f5a0bf7c-7a36-4bfc-973e-0e9ab08e02c4,multi-subject-fmri-data-alignment,Multi-Subject Fmri Data Alignment,,medical 5fc58c8b-3842-451c-8cfd-63f99969b5bb,ultrasound,Ultrasound,,medical 3f1e1d80-8d54-461a-bbac-572b69b008c5,mass-segmentation-from-mammograms,Mass Segmentation From Mammograms,,medical 937d6b65-e85c-4b3a-88c3-fae7283a6ae0,placenta-segmentation,Placenta Segmentation,,medical 47d90d5c-9219-496c-a66e-b8f99622cbda,automated-pancreas-segmentation,Automated Pancreas Segmentation,,medical 2e006067-81e5-4e06-b129-dfdc6b51b4b3,breast-tissue-identification,Breast Tissue Identification,,medical 5404061f-9043-44b9-9e92-1f87bf288cfc,genetic-risk-prediction,Genetic Risk Prediction,Polygenic Risk Scores (PRS) / Polygenic Scores (PGS),medical db3dc0d0-79ee-451d-9c96-3eaa558b1a8b,brain-lesion-segmentation-from-mri,Brain Lesion Segmentation From Mri,,medical 6e4b7a3d-298e-4bbf-af70-e3cebf8a4095,clinical-concept-extraction,Clinical Concept Extraction,"Automatic extraction of clinical named entities such as clinical problems, treatments, tests and anatomical parts from clinical notes. 
( [Source](https://arxiv.org/pdf/2012.04005v1.pdf) )",medical 75a08076-7102-458b-ad85-f641c0f01b81,cerebrovascular-network-segmentation,Cerebrovascular Network Segmentation,,medical f8deb8ac-1b8d-46e5-9cad-c1a3039892fd,phenotype-classification,Phenotype classification,,medical 3c98887d-3f64-44f9-a50b-71abbf66679a,retinal-vessel-segmentation,Retinal Vessel Segmentation,"Retinal vessel segmentation is the task of segmenting vessels in retina imagery. ( Image credit: [LadderNet](https://github.com/juntang-zhuang/LadderNet) )",medical 2f9de3e0-c88b-4123-88a8-8e589081220c,atrial-fibrillation-recurrence-estimation,Atrial Fibrillation Recurrence Estimation,,medical 76a4f326-6640-4770-b36c-66801a1ce8af,pulse-wave-simulation,Pulse wave simulation,Simulating arterial pulse waves,medical d2c2ecdd-9623-4975-aa69-32d8fb6dc8bc,qrs-complex-detection,QRS Complex Detection,,medical 9e1c1dee-7593-45a4-abf7-4c97c85092a1,pneumonia-detection,Pneumonia Detection,,medical f37b9c91-9361-4ab5-aa6e-3a700db9b62c,acoustic-echo-cancellation,Acoustic echo cancellation,,medical e5495662-6b1b-4d79-9b28-85a2769ab9de,histopathological-image-classification,Histopathological Image Classification,,medical b97bdcfd-c6e1-4a55-b4f2-92e30b690a8f,ecg-denoising,ECG Denoising,,medical ab37fd93-0855-4ebf-b5fb-3c69c553b1cf,lung-nodule-classification,Lung Nodule Classification,,medical f722e99b-f259-43cb-8b50-9cc3c66eb63b,congestive-heart-failure-detection,Congestive Heart Failure detection,,medical 16686f33-eefb-4b19-adbd-c815a31aad08,ecg-wave-delineation,ECG Wave Delineation,"Delineation of the waveforms P, T and QRS complexes from ECG signals",medical e527bc5f-7f0e-4356-849c-b3d98a58b3a7,text-based-de-novo-molecule-generation,Text-based de novo Molecule Generation,,medical 81539996-da8b-4496-a546-b6144c84f712,lesion-segmentation,Lesion Segmentation,"Lesion segmentation is the task of segmenting out lesions from other objects in medical based images. ( Image credit: [D-UNet](https://arxiv.org/pdf/1908.05104v1.pdf) )",medical c3891501-60ce-4323-9790-c065e6744664,automatic-sleep-stage-classification,Automatic Sleep Stage Classification,,medical 90e3901c-df19-419c-9c5a-308746c293b7,electromyography-emg,Electromyography (EMG),,medical 21c7e604-59bc-4978-98d0-c7dde4158043,kidney-function,Kidney Function,Continuous prediction of urine production in the next 2h as an average rate in ml/kg/h. 
The task is predicted at irregular intervals.,medical fa784d6d-172e-4ca0-ae16-d931c81ab2c6,diabetic-retinopathy-grading,Diabetic Retinopathy Grading,Grading the severity of diabetic retinopathy from (ophthalmic) fundus images,medical 9ba77bd5-9b00-4326-b351-ef5b504201c3,diabetes-prediction,Diabetes Prediction,,medical 6cb4f973-f1a3-4898-88a2-dafbda53a684,respiratory-failure,Respiratory Failure,Continuous prediction of onset of respiratory failure in the next 12h given the patient is not in failure now.,medical 1dcd7248-63c8-4b5f-b91d-4f115a413935,circulatory-failure,Circulatory Failure,"Continuous prediction of onset of circulatory failure in the next 12h, given the patient is not in failure now.",medical 8bc6c24e-6c5b-4f7a-af48-1d7aefe4e9ac,heartbeat-classification,Heartbeat Classification,,medical dceefdba-dd12-44e7-a3a7-43d15c2f1a92,atrial-fibrillation-detection,Atrial Fibrillation Detection,,medical 48d5ab41-5218-4481-bc05-b3c3c3feeabb,bone-suppression-from-dual-energy-chest-x,Bone Suppression From Dual Energy Chest X-Rays,,medical 6f72db10-676d-441c-9131-4a87fc710e51,mhc-presentation-prediction,MHC presentation prediction,"Task solves important immunological problem, which is predicting whether given peptide will present on given MHC.",medical fc0729c9-d8e8-474a-9ea3-49840c0119a5,oral-cancer-classification,Oral Cancer Classification,,medical 63a5b178-d042-4c9f-a9fd-cbf5c239d840,patient-phenotyping,Patient Phenotyping,"Classifying patients after 24h regarding their admission diagnosis, using the APACHE group II and IV labels.",medical d9c5e48b-9a2c-4a02-bd24-a28c76bee305,semi-supervised-medical-image-classification,Semi-supervised Medical Image Classification,Semi-supervised Medical Image Classification,medical 9b0be2f2-c1af-46b3-a3e5-08b1201fa370,brain-morphometry,Brain Morphometry,Measurement of brain structures from neuroimaging (MRI).,medical 1f66ab5b-b8ad-4edd-8750-75da4e04efc2,anxiety-detection,Anxiety Detection,Detect anxiety distress of human beings / animals,medical dd1e1321-d0d6-48b5-b20e-c8ef8de38600,disease-prediction,Disease Prediction,,medical 5dc16c16-4ba0-4197-834c-bf932a91712c,white-matter-fiber-tractography,White Matter Fiber Tractography,,medical 472c3212-9d00-40c9-afd1-593ad7f9e748,colon-cancer-detection-in-confocal-laser,Colon Cancer Detection In Confocal Laser Microscopy Images,,medical 869e2094-7193-4e75-9d38-267f06cba81b,surgical-gesture-recognition,Surgical Gesture Recognition,,medical 5e8e5441-1b97-4786-a6c6-7c65be0d4def,outcome-prediction-in-multimodal-mri,Outcome Prediction In Multimodal Mri,,medical 8737e5a0-2514-4fee-8603-e74e788a6a42,metal-artifact-reduction,Metal Artifact Reduction,Metal artifact reduction aims to remove the artifacts introduced by metallic implants in CT images.,medical f8042e04-796b-4f94-adae-594631e7dc07,decision-making-under-uncertainty,Decision Making Under Uncertainty,,medical 3ba611e6-dcab-425d-8b6b-ac575a9f92cc,predicting-patient-outcomes,Predicting Patient Outcomes,,medical 9650fbe6-53f2-4c76-a26f-9f82604dc16a,skin,Skin,,medical 392335ed-a8a9-4c42-bb61-05698fba47ca,sleep-arousal-detection,Sleep Arousal Detection,"Sleep arousal is a kind of EEG events happened during octurnal sleep. Too many arousals will contribute to many health problem, like daytime sleepiness, memory loss, diabetes, etc. 
Some research take it as a kind of sleep deprivation.",medical a96c5ea0-840c-436a-975b-77f29160e193,classification-of-breast-cancer-histology,Classification Of Breast Cancer Histology Images,,medical 417ea4eb-a48a-472a-84ff-ec5c3ea05bc4,mitosis-detection,Mitosis Detection,,medical 4c595ce5-590f-4c6d-a23f-9d0549c293f5,sleep-quality-prediction-1,Sleep Quality Prediction,"( Image credit: [DeepSleep](https://github.com/GuanLab/DeepSleep) )",medical ba1870cf-0730-40e7-8aa4-4ce67b70c272,electron-tomography,Electron Tomography,,medical 318154f2-ddf8-4d27-9e34-6d7f8a0f6c66,skull-stripping,Skull Stripping,,medical 9b695839-863a-41d9-b730-1fe8e54d0cc8,tomographic-reconstructions,Tomographic Reconstructions,,medical ce421295-4c44-4124-8c96-0298989525f0,electrocardiography-ecg,Electrocardiography (ECG),,methodology dbaf223a-08ad-41e6-99c3-be42c464ecef,combinatorial-optimization,Combinatorial Optimization,"**Combinatorial Optimization** is a category of problems which requires optimizing a function over a combination of discrete objects and the solutions are constrained. Examples include finding shortest paths in a graph, maximizing value in the Knapsack problem and finding boolean settings that satisfy a set of constraints. Many of these problems are NP-Hard, which means that no polynomial time solution can be developed for them. Instead, we can only produce approximations in polynomial time that are guaranteed to be some factor worse than the true optimal solution. Source: [Recent Advances in Neural Program Synthesis ](https://arxiv.org/abs/1802.02353)",methodology ea0571cd-1bbf-46cf-bcfa-310a2d801a81,influence-approximation,Influence Approximation,Estimating the influence of training triples on the behavior of a machine learning model.,methodology 8fc7da48-96c0-4ea6-8243-1c0eaa09b28c,multi-objective-reinforcement-learning,Multi-Objective Reinforcement Learning,,methodology 16b9ad28-b022-4be1-8019-5eb628ae7abf,auxiliary-learning,Auxiliary Learning,"Auxiliary learning aims to find or design auxiliary tasks which can improve the performance on one or some primary tasks. 
( Image credit: [Self-Supervised Generalisation with Meta Auxiliary Learning](https://arxiv.org/pdf/1901.08933v3.pdf) )",methodology 3a505eff-a955-407b-a732-79057fc4b62a,quantum-circuit-equivalence-checking,Quantum Circuit Equivalence Checking,Equivalence Checking of Quantum Circuits,methodology b4a4a6bc-34f0-4fc0-a26e-da8aa437e017,sentence-embedding,Sentence Embedding,,methodology 11bdd64e-b5a7-4be3-9b53-7f4edda198e8,rgb-3d-anomaly-detection-and-segmentation,RGB+3D Anomaly Detection and Segmentation,RGB+3D Anomaly Detection and Segmentation,methodology a9c74ba3-d640-45d9-a435-7a5f7aaf1f33,sentence-embeddings,Sentence Embeddings,,methodology fd20efc3-e818-43e9-9c66-fa0c09428084,multilingual-word-embeddings,Multilingual Word Embeddings,,methodology b7f91f6c-fc45-4de2-8cd2-9201721097bc,arbitrary-conditional-density-estimation,Arbitrary Conditional Density Estimation,,methodology e344ca02-81c0-4e27-941e-b39bc5ea69aa,experimental-design,Experimental Design,,methodology 748546f5-cfb9-4dfc-b3d7-86e134037510,density-ratio-estimation,Density Ratio Estimation,Estimating the ratio of one density function to the other.,methodology d64bfd5b-9a5e-45d6-aede-991d074e39a1,activation-function-synthesis,Activation Function Synthesis,,methodology 40640578-1fed-4233-9b73-0e6011df968a,bayesian-inference,Bayesian Inference,Bayesian Inference is a methodology that employs Bayes Rule to estimate parameters (and their full posterior).,methodology f2fc5ba1-0ce7-480f-ba39-faebf3596670,low-rank-matrix-completion,Low-Rank Matrix Completion,"**Low-Rank Matrix Completion** is an important problem with several applications in areas such as recommendation systems, sketching, and quantum tomography. The goal in matrix completion is to recover a low rank matrix, given a small number of entries of the matrix. Source: [Universal Matrix Completion ](https://arxiv.org/abs/1402.2324)",methodology 6ac8c4a7-ddd6-466d-92e5-da070ac50762,group-anomaly-detection,Group Anomaly Detection,,methodology ed5a8576-fbe7-48c9-bd60-89b2426b3a59,contrastive-learning,Contrastive Learning,"**Contrastive Learning** is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart. It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering. (Image credit: [Schroff et al. 2015](https://arxiv.org/abs/1503.03832))",methodology b56baab8-d400-4a92-aefd-d00d66a08107,domain-generalization,Domain Generalization,"The idea of **Domain Generalization** is to learn from one or multiple training domains, to extract a domain-agnostic model which can be applied to an unseen domain Source: [Diagram Image Retrieval using Sketch-Based Deep Learning and Transfer Learning ](https://arxiv.org/abs/2004.10780)",methodology 532bc663-8da0-4dcf-82f9-a1f6e07c3c60,result-aggregation,Result aggregation,,methodology 30e88bb9-6892-495b-ac7a-2c134c378e1c,thompson-sampling,Thompson Sampling,"Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. 
It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.",methodology cf2c8aff-67bc-466b-9b3e-48dc6eb227bc,meta-learning,Meta-Learning,"Meta-learning is a methodology concerned with ""learning to learn"" machine learning algorithms. ( Image credit: [Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks](https://arxiv.org/pdf/1703.03400v3.pdf) )",methodology 4a793a6d-71cb-4a87-b404-0b11e7780ab9,generalized-few-shot-learning,Generalized Few-Shot Learning,,methodology e3e8a8d3-e399-436c-ae51-47b76ce3a029,hard-attention,Hard Attention,,methodology e6213699-e054-4dae-8fb8-01596cdbe5a5,policy-gradient-methods,Policy Gradient Methods,,methodology 1b5810ae-7d0e-445f-b120-74c9ccc1ba06,anomaly-detection,Anomaly Detection,"**Anomaly Detection** is a binary classification task identifying unusual or unexpected patterns in a dataset, which deviate significantly from the majority of the data. The goal of anomaly detection is to identify such anomalies, which could represent errors, fraud, or other types of unusual events, and flag them for further investigation. [Image source]: [GAN-based Anomaly Detection in Imbalance Problems](https://paperswithcode.com/paper/gan-based-anomaly-detection-in-imbalance)",methodology a28391fb-3239-4f16-952e-7d169102eda4,ticket-search,Ticket Search,,methodology 740ac8a2-5265-42c4-b4a6-831080cea68b,dimensionality-reduction,Dimensionality Reduction,"Dimensionality reduction is the task of reducing the dimensionality of a dataset. ( Image credit: [openTSNE](https://github.com/pavlin-policar/openTSNE) )",methodology 0a06da8f-3b11-41dc-af4f-df343a8917c0,efficient-exploration,Efficient Exploration,"**Efficient Exploration** is one of the main obstacles in scaling up modern deep reinforcement learning algorithms. The main challenge in Efficient Exploration is the balance between exploiting current estimates, and gaining information about poorly understood states and actions. Source: [Randomized Value Functions via Multiplicative Normalizing Flows ](https://arxiv.org/abs/1806.02315)",methodology 3310843f-4daa-45b2-8b67-b6c0159a0170,transfer-reinforcement-learning,Transfer Reinforcement Learning,,methodology 3db83aa4-7d81-4483-988f-271e2102d588,inductive-logic-programming,Inductive logic programming,,methodology 6cd59230-35e5-4055-9377-fa38cfcae36b,network-pruning,Network Pruning,"**Network Pruning** is a popular approach to reduce a heavy network to obtain a light-weight form by removing redundancy in the heavy network. In this approach, a complex over-parameterized network is first trained, then pruned based on some criteria, and finally fine-tuned to achieve comparable performance with reduced parameters. Source: [Ensemble Knowledge Distillation for Learning Improved and Efficient Networks ](https://arxiv.org/abs/1909.08097)",methodology 0e0ab6a6-e041-4b86-8cb9-feeb418d2e85,similarity-explanation,Similarity Explanation,,methodology ea4b9be3-9016-46b4-b50e-36aa309514c7,data-augmentation,Data Augmentation,"Data augmentation involves techniques used for increasing the amount of data, based on different modifications, to expand the number of examples in the original dataset. Data augmentation not only helps to grow the dataset but it also increases the diversity of the dataset. When training machine learning models, data augmentation acts as a regularizer and helps to avoid overfitting. Data augmentation techniques have been found useful in domains like NLP and computer vision.
In computer vision, transformations like cropping, flipping, and rotation are used. In NLP, data augmentation techniques can include swapping, deletion, random insertion, among others. Further readings: - [A Survey of Data Augmentation Approaches for NLP](https://paperswithcode.com/paper/a-survey-of-data-augmentation-approaches-for) - [A survey on Image Data Augmentation for Deep Learning](https://journalofbigdata.springeropen.com/articles/10.1186/s40537-019-0197-0) ( Image credit: [Albumentations](https://github.com/albumentations-team/albumentations) )",methodology 618a5609-d8c3-4828-87e9-2aacacf7ac4c,learning-representation-of-multi-view-data,Learning Representation Of Multi-View Data,,methodology 41854ff0-ae39-4fc8-afc4-073ca3693dbc,point-processes,Point Processes,,methodology 9e0e86e3-6207-49af-a4e5-17179f2c3ab8,hypothesis-testing,Two-sample testing,"In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each independently obtained from a different given population. The purpose of the test is to determine whether the difference between these two populations is statistically significant. The statistics used in two-sample tests can be used to solve many machine learning problems, such as domain adaptation, covariate shift and generative adversarial networks.",methodology 6ab853d9-0c1c-44a1-9907-bbec12945f39,neural-network-compression,Neural Network Compression,,methodology 3321bd4b-882d-4de7-b373-1d1136f77e73,quantization,Quantization,"**Quantization** is a promising technique to reduce the computation cost of neural network training, which can replace high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16). Source: [Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers ](https://arxiv.org/abs/1911.00361)",methodology 7118d10d-59a1-426e-9db7-4bcc08efb2e1,matrix-completion,Matrix Completion,"**Matrix Completion** is a method for recovering lost information. It originates from machine learning and usually deals with highly sparse matrices. Missing or unknown data is estimated using the low-rank matrix of the known data. Source: [A Fast Matrix-Completion-Based Approach for Recommendation Systems ](https://arxiv.org/abs/1912.00600)",methodology 97e3fb72-cebc-4d58-acbe-feb8836eb026,clustering,Clustering,,methodology 587b5c26-af2c-4649-8bac-b646e9ae9104,representation-learning,Representation Learning,"**Representation Learning** is concerned with training machine learning algorithms to learn useful representations, e.g. those that are interpretable, have latent features, or can be used for transfer learning. Deep neural networks can be considered representation learning models that typically encode information which is projected into a different subspace. These representations are then usually passed on to a simple model, for instance a linear classifier, which is trained for the downstream task. Representation learning can be divided into: - **Supervised representation learning**: learning representations on task A using annotated data and used to solve task B - **Unsupervised representation learning**: learning representations on a task in an unsupervised way (label-free data). These are then used to address downstream tasks and reduce the need for annotated data when learning new tasks. Powerful models like [GPT](/method/gpt) and [BERT](/method/bert) leverage unsupervised representation learning to tackle language tasks.
More recently, [self-supervised learning (SSL)](/task/self-supervised-learning) is one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP. Here are some additional readings to go deeper on the task: - [Representation Learning: A Review and New Perspectives](/paper/representation-learning-a-review-and-new) - Bengio et al. (2012) - [A Few Words on Representation Learning](https://sthalles.github.io/a-few-words-on-representation-learning/) - Thalles Silva ( Image credit: [Visualizing and Understanding Convolutional Networks](https://arxiv.org/pdf/1311.2901.pdf) )",methodology 43cbc1ad-f93b-4969-bac2-d746b2f9935d,feature-importance,Feature Importance,,methodology 8258318f-d554-46e0-8ec6-d82dd713f989,computed-tomography-ct,Computed Tomography (CT),"The term “computed tomography”, or CT, refers to a computerized x-ray imaging procedure in which a narrow beam of x-rays is aimed at a patient and quickly rotated around the body, producing signals that are processed by the machine's computer to generate cross-sectional images—or “slices”—of the body. ( Image credit: [Liver Lesion Detection from Weakly-labeled Multi-phase CT Volumes with a Grouped Single Shot MultiBox Detector](https://github.com/L0SG/grouped-ssd-pytorch) )",methodology 09ddafe6-3191-4a2b-8ad3-1f0d0a877c64,few-shot-class-incremental-learning,Few-Shot Class-Incremental Learning,,methodology 5936b135-8f13-4320-a04b-07c4375ddde6,generative-semantic-nursing,Generative Semantic Nursing,"**Generative Semantic Nursing** is the task of intervening in the generative process on the fly at inference time to improve the faithfulness of the generated images. It works by carefully manipulating latents during the denoising process of a pre-trained text-to-image diffusion model. Source: [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://paperswithcode.com/paper/attend-and-excite-attention-based-semantic)",methodology 31534e90-2c1d-4e04-80c0-024035b232d3,learning-representation-on-graph,Learning Representation On Graph,,methodology fb2e76fc-b153-4fff-9f4b-dc84dd69e3d1,domain-adaptation,Domain Adaptation,"**Domain Adaptation** is the task of adapting models across domains. This is motivated by the challenge where the test and training datasets come from different data distributions due to some factor. Domain adaptation aims to build machine learning models that can generalize to a target domain and deal with the discrepancy across domain distributions. Further readings: - [A Brief Review of Domain Adaptation](https://paperswithcode.com/paper/a-brief-review-of-domain-adaptation) ( Image credit: [Unsupervised Image-to-Image Translation Networks](https://arxiv.org/pdf/1703.00848v6.pdf) )",methodology 5e7fcca8-9c25-4a21-8750-625bf1b6ea07,multi-label-text-classification,Multi-Label Text Classification,"According to Wikipedia ""In machine learning, multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance.
Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes the instance can be assigned to.""",methodology 9d63db0f-3e55-4c45-ab53-84ab8dfff358,active-learning,Active Learning,"**Active Learning** is a paradigm in supervised machine learning which uses fewer training examples to achieve better optimization by iteratively training a predictor and using the predictor in each iteration to choose the training examples that will most increase its chances of finding better configurations while also improving the accuracy of the prediction model. Source: [Polystore++: Accelerated Polystore System for Heterogeneous Workloads ](https://arxiv.org/abs/1905.10336)",methodology be05114d-0b42-4855-8fbd-45f75f32787b,continual-pretraining,Continual Pretraining,,methodology e9f743ed-9dde-4e37-b6dc-b2c059238107,network-embedding,Network Embedding,"**Network Embedding**, also known as ""Network Representation Learning"", is a collective term for techniques for mapping graph nodes to vectors of real numbers in a multidimensional space. To be useful, a good embedding should preserve the structure of the graph. The vectors can then be used as input to various network and graph analysis tasks, such as link prediction. Source: [Tutorial on NLP-Inspired Network Embedding ](https://arxiv.org/abs/1910.07212)",methodology fa69e59b-1276-4a83-8c7b-b9d52b71b19b,mutual-information-estimation,Mutual Information Estimation,"To estimate mutual information from samples, especially for high-dimensional variables.",methodology 82320bc7-5816-464e-9832-ce6d941f2838,eeg,Electroencephalogram (EEG),"**Electroencephalogram (EEG)** is a method of recording brain activity using electrophysiological indexes. When the brain is active, a large number of postsynaptic potentials generated synchronously by neurons are formed after summation. It records the changes of electric waves during brain activity and is the overall reflection of the electrophysiological activities of brain nerve cells on the surface of cerebral cortex or scalp. Brain waves originate from the postsynaptic potential of the apical dendrites of pyramidal cells. The formation of synchronous rhythm of EEG is also related to the activity of nonspecific projection system of cortex and thalamus. EEG is a foundational tool in brain science research, and EEG monitoring is widely used in clinical applications.",methodology 273decca-b7f2-4482-ae2e-e03ed552fae2,structured-prediction,Structured Prediction,"**Structured Prediction** is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such as linear programming relaxations and greedy search.
Source: [Torch-Struct: Deep Structured Prediction Library ](https://arxiv.org/abs/2002.00876)",methodology 0e4e41f7-283e-446f-9507-7bf9090144a0,machine-learning,BIG-bench Machine Learning,This branch includes the most common fundamental machine learning algorithms.,methodology 89e497de-1fca-4403-9ee7-09d62f5905ea,one-class-classifier,One-class classifier,,methodology dd6a4216-68ae-4bf8-81fa-744a4f251639,3d-anomaly-detection-and-segmentation,3D Anomaly Detection and Segmentation,3D-Only Anomaly Detection and Segmentation,methodology f5fee104-4631-4c08-a7f8-78eda05e8310,zero-shot-learning,Zero-Shot Learning,"**Zero-shot learning (ZSL)** is a model's ability to detect classes never seen during training. The condition is that the classes are not known during supervised learning. Earlier work in zero-shot learning uses attributes in a two-step approach to infer unknown classes. In the computer vision context, more recent advances learn mappings from image feature space to semantic space. Other approaches learn non-linear multimodal embeddings. In the modern NLP context, language models can be evaluated on downstream tasks without fine-tuning. Benchmark datasets for zero-shot learning include [aPY](/dataset/apy), [AwA](/dataset/awa2-1), and [CUB](/dataset/cub-200-2011), among others. ( Image credit: [Prototypical Networks for Few shot Learning in PyTorch ](https://github.com/orobix/Prototypical-Networks-for-Few-shot-Learning-PyTorch) ) Further readings: - [Zero-Shot Learning -- A Comprehensive Evaluation of the Good, the Bad and the Ugly](https://paperswithcode.com/paper/zero-shot-learning-a-comprehensive-evaluation) - [Zero-Shot Learning in Modern NLP](https://joeddav.github.io/blog/2020/05/29/ZSL.html) - [Zero-Shot Learning for Text Classification](https://amitness.com/2020/05/zero-shot-text-classification/)",methodology 6c4aed08-cc94-4484-accb-0c634adbedd5,anomaly-detection-in-surveillance-videos,Anomaly Detection In Surveillance Videos,,methodology c682ec7b-372f-4fcd-86d6-368106ec5086,metric-learning,Metric Learning,"The goal of **Metric Learning** is to learn a representation function that maps objects into an embedded space. The distance in the embedded space should preserve the objects’ similarity — similar objects get close and dissimilar objects get far away. Various loss functions have been developed for Metric Learning. For example, the **contrastive loss** guides the objects from the same class to be mapped to the same point and those from different classes to be mapped to different points whose distances are larger than a margin. **Triplet loss** is also popular, which requires the distance between the anchor sample and the positive sample to be smaller than the distance between the anchor sample and the negative sample. Source: [Road Network Metric Learning for Estimated Time of Arrival ](https://arxiv.org/abs/2006.13477)",methodology 6691545f-3561-4db1-8756-6431314dd158,sentence-embeddings-for-biomedical-texts,Sentence Embeddings For Biomedical Texts,,methodology 8fd98a66-c32b-4a52-9c77-ce4445995f35,classification,General Classification,Algorithms trying to solve the general task of classification.,methodology 44b45b2b-4272-4c10-97a7-081090d7fd20,3d-rgb-anomaly-detection,3D + RGB Anomaly Detection,3D + RGB Anomaly Detection,methodology 7ef7592a-de9f-4e31-8e50-01fc860e2127,probabilistic-programming,Probabilistic Programming,"Probabilistic programming languages are designed to describe probabilistic models and then perform inference in those models.
PPLs are closely related to graphical models and Bayesian networks, but are more expressive and flexible. ( Image credit: [Michael Betancourt](https://betanalpha.github.io/writing/) )",methodology bbc07e73-f3f1-48e9-88ea-f9d4b46a493e,subdomain-adaptation,Subdomain adaptation,,methodology 6af1a200-502f-4a0f-ae1b-3f6758b1925b,depth-anomaly-detection-and-segmentation,Depth Anomaly Detection and Segmentation,Depth-only Anomaly Detection and Segmentation,methodology 6430a5c9-a46a-4af7-95e5-b73ef4b03ef8,depthanomaly-detection,Depth Anomaly Detection,Depth-only Anomaly Detection,methodology b6ce32f9-0ac9-46d7-a950-df99d960feed,model-selection,Model Selection,"Given a set of candidate models, the goal of **Model Selection** is to select the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined such that they strike a balance between the goodness of fit, and the generalizability or complexity of the models. Source: [Kernel-based Information Criterion ](https://arxiv.org/abs/1408.5810)",methodology 79887ada-6b84-42df-ab11-a586c5d34d0a,data-visualization,Data Visualization,,methodology b149416b-d789-44a0-8e7b-3e117cdf9131,continuously-indexed-domain-adaptation,Continuously Indexed Domain Adaptation,"Continuously indexed domain adaptation adapts across continuously indexed domains, e.g., across patients of different ages, where 'age' is a continuous notion.",methodology fffd5f09-fd3f-49c1-9399-a72969bc318f,3d-rgb-anomaly-segmentation,3D + RGB Anomaly Segmentation,3D + RGB Anomaly Segmentation,methodology 25aad08d-47bb-4f0f-8bdf-05c5b10ee939,document-embedding,Document Embedding,,methodology 858deaf1-9ad5-40c7-9508-b7181f06af06,core-set-discovery,Core set discovery,A core set in machine learning is defined as the minimal set of training samples that allows a supervised algorithm to deliver a result as good as the one obtained when the whole set is used.,methodology deaeddf8-7a36-4bf5-b4c0-90577263e5d5,data-mining,Data Mining,,methodology 6ef9de48-6c12-43d1-aa9b-c75dcc29bb8c,explanation-fidelity-evaluation,Explanation Fidelity Evaluation,Evaluation of explanation fidelity with respect to the underlying model.,methodology 408f5607-6b83-4d3a-b636-29239ea9654a,unsupervised-anomaly-detection,Unsupervised Anomaly Detection,"The objective of **Unsupervised Anomaly Detection** is to detect previously unseen rare objects or events without any prior knowledge about these. The only information available is that the percentage of anomalies in the dataset is small, usually less than 1%. Since anomalies are rare and unknown to the user at training time, anomaly detection in most cases boils down to the problem of modelling the normal data distribution and defining a measurement in this space in order to classify samples as anomalous or normal. In high-dimensional data such as images, distances in the original space quickly lose descriptive power (curse of dimensionality) and a mapping to some more suitable space is required.
Source: [Unsupervised Learning of Anomaly Detection from Contaminated Image Data using Simultaneous Encoder Training ](https://arxiv.org/abs/1905.11034)",methodology 278c707d-5556-4feb-b503-ce3b81447f16,statistical-independence-testing,statistical independence testing,,methodology deca5cfa-f319-464f-91ce-a2d824d81ce1,continual-learning,Continual Learning,"**Continual Learning** (also known as **Incremental Learning**, **Life-long Learning**) is a concept to learn a model for a large number of tasks sequentially without forgetting knowledge obtained from the preceding tasks, where the data in the old tasks are not available anymore during training new ones. If not mentioned, the benchmarks here are **Task-CL**, where task-id is provided on validation. Source: [Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation](https://arxiv.org/abs/1908.02984) [Three scenarios for continual learning](https://arxiv.org/abs/1904.07734) [Lifelong Machine Learning](https://books.google.ca/books/about/Lifelong_Machine_Learning.html?id=JQ5pDwAAQBAJ&redir_esc=y) [Continual lifelong learning with neural networks: A review](https://www.sciencedirect.com/science/article/pii/S0893608019300231)",methodology ef513eac-de7e-435c-980d-76ded9f9653b,detection-of-higher-order-dependencies,Detection of Higher Order Dependencies,,methodology fd0fa3bb-b324-47a7-a29f-4c4642c4c11c,learning-network-representations,Learning Network Representations,,methodology cabbcad0-8933-4530-85ef-4f5fedc96ca4,multi-label-learning,Multi-Label Learning,,methodology 520b5618-bde4-4dde-b8ff-9eb0d60e0d38,q-learning,Q-Learning,"The goal of Q-learning is to learn a policy, which tells an agent what action to take under what circumstances. ( Image credit: [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/pdf/1312.5602v1.pdf) )",methodology 8b50b9c3-db75-424d-9985-51f4bcc82a9a,extreme-multi-label-classification,Extreme Multi-Label Classification,Extreme Multi-Label Classification is a supervised learning problem where an instance may be associated with multiple labels. The two main problems are the unbalanced labels in the dataset and the amount of different labels.,methodology ac0e2929-f53c-4827-8311-a7c8e4b1decf,incremental-learning,Incremental Learning,Incremental learning aims to develop artificially intelligent systems that can continuously learn to address new tasks from new data while preserving knowledge learned from previously learned tasks.,methodology 7857dcc5-2fdb-48ca-b68e-e071b6654ff9,3d-anomaly-detection,3D Anomaly Detection,3D-only Anomaly Detection,methodology 79bff48e-3391-446b-b5dd-c31f1574f1dc,federated-learning,Federated Learning,"**Federated Learning** is a machine learning approach that allows multiple devices or entities to collaboratively train a shared model without exchanging their data with each other. Instead of sending data to a central server for training, the model is trained locally on each device, and only the model updates are sent to the central server, where they are aggregated to improve the shared model. 
This approach allows for privacy-preserving machine learning, as each device keeps its data locally and only shares the information needed to improve the model.",methodology c8abf251-97bd-4297-b498-5122d0376546,web-credibility,Web Credibility,Define the level of credibility of web sources,methodology eafd7998-0852-42a7-8b0d-ba5f2a45b6cf,automated-feature-engineering,Automated Feature Engineering,Automated feature engineering improves upon the traditional approach to feature engineering by automatically extracting useful and meaningful features from a set of related data tables with a framework that can be applied to any problem.,methodology e7bc0299-3a74-4752-af58-812998ea3b31,depth-rgb-anomaly-detection,Depth + RGB Anomaly Detection,Depth + RGB Anomaly Detection,methodology 8b856e16-62d6-4d42-8a6e-61c56f1ebcd3,additive-models,Additive models,,methodology 36bfac88-e2c4-434d-9100-89fa560035b3,stochastic-optimization,Stochastic Optimization,"**Stochastic Optimization** is the task of optimizing a certain objective functional by generating and using stochastic random variables. Usually, Stochastic Optimization is an iterative process of generating random variables that progressively finds the minima or the maxima of the objective functional. Stochastic Optimization is usually applied in non-convex functional spaces where the usual deterministic optimization such as linear or quadratic programming or their variants cannot be used. Source: [ASOC: An Adaptive Parameter-free Stochastic Optimization Techinique for Continuous Variables ](https://arxiv.org/abs/1506.08004)",methodology c82db90d-944a-49d9-93ca-607613bee70e,learning-semantic-representations,Learning Semantic Representations,,methodology 5e87867b-74df-4e2c-bab9-f14ba4a732b4,ensemble-learning,Ensemble Learning,,methodology d1dd4f34-124a-4c7d-9e08-f171ae60d1c5,distributional-reinforcement-learning,Distributional Reinforcement Learning,"Value distribution is the distribution of the random return received by a reinforcement learning agent. It has been used for specific purposes such as implementing risk-aware behaviour. We have a random return Z whose expectation is the value Q. This random return is also described by a recursive equation, but one of a distributional nature.",methodology e2d145c2-cc2c-4dd7-bf22-489b70d938b5,dictionary-learning,Dictionary Learning,"**Dictionary Learning** is an important problem in multiple areas, ranging from computational neuroscience, machine learning, to computer vision and image processing. The general goal is to find a good basis for given data. More formally, in the Dictionary Learning problem, also known as sparse coding, we are given samples of a random vector $y\in\mathbb{R}^n$, of the form $y=Ax$ where $A$ is some unknown matrix in $\mathbb{R}^{n×m}$, called dictionary, and $x$ is sampled from an unknown distribution over sparse vectors. The goal is to approximately recover the dictionary $A$. Source: [Polynomial-time tensor decompositions with sum-of-squares ](https://arxiv.org/abs/1610.01980)",methodology a50e63d9-50fb-435a-ad78-18184de0e1bd,novel-class-discovery,Novel Class Discovery,"The goal of Novel Class Discovery (NCD) is to identify new classes in unlabeled data, by exploiting prior knowledge from known classes. In this specific setup, the data is split in two sets.
The first is a labeled set containing known classes and the second is an unlabeled set containing unknown classes that must be discovered.",methodology 2dea2dab-0772-4bc7-b85e-0465948256f5,imitation-learning,Imitation Learning,"**Imitation Learning** is a framework for learning a behavior policy from demonstrations. Usually, demonstrations are presented in the form of state-action trajectories, with each pair indicating the action to take at the state being visited. In order to learn the behavior policy, the demonstrated actions are usually utilized in two ways. The first, known as Behavior Cloning (BC), treats the action as the target label for each state, and then learns a generalized mapping from states to actions in a supervised manner. Another way, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions, and aims at finding a reward/cost function under which the demonstrated decisions are optimal. Finally, a newer methodology, Inverse Q-Learning aims at directly learning Q-functions from expert data, implicitly representing rewards, under which the optimal policy can be given as a Boltzmann distribution similar to soft Q-learning Source: [Learning to Imitate ](https://ai.stanford.edu/blog/learning-to-imitate)",methodology 37d198e8-21cd-4242-93fe-f77899563598,multimodal-text-and-image-classification,Multimodal Text and Image Classification,Classification with both source Image and Text,methodology f3a5b7bf-3659-4aee-8888-ee15c35ad6b5,one-shot-learning,One-Shot Learning,"One-shot learning is the task of learning information about object categories from a single training example. ( Image credit: [Siamese Neural Networks for One-shot Image Recognition](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf) )",methodology 31463773-c63d-4bd9-b93c-3f3d212bf939,unsupervised-pre-training,Unsupervised Pre-training,Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.,methodology 6f731a9c-3467-45a4-abb9-9b70c4f1a59f,gaussian-processes,Gaussian Processes,"**Gaussian Processes** is a powerful framework for several machine learning tasks such as regression, classification and inference. Given a finite set of input output training data that is generated out of a fixed (but possibly unknown) function, the framework models the unknown function as a stochastic process such that the training outputs are a finite number of jointly Gaussian random variables, whose properties can then be used to infer the statistics (the mean and variance) of the function at test values of input. Source: [Sequential Randomized Matrix Factorization for Gaussian Processes: Efficient Predictions and Hyper-parameter Optimization ](https://arxiv.org/abs/1711.06989)",methodology b4bb8aad-2ffa-49b7-af16-846600998ed9,long-tail-learning-with-class-descriptors,Long-tail learning with class descriptors,"Long-tail learning by using class descriptors (like attributes, class embedding, etc) to learn tail classes as well as head classes.",methodology 92671fa2-4af6-4286-8bd8-1b5f9bee3a31,personalized-federated-learning,Personalized Federated Learning,"The federated learning setup presents numerous challenges including data heterogeneity (differences in data distribution), device heterogeneity (in terms of computation capabilities, network connection, etc.), and communication efficiency. Especially data heterogeneity makes it hard to learn a single shared global model that applies to all clients. 
To overcome these issues, Personalized Federated Learning (PFL) aims to personalize the global model for each client in the federation.",methodology 6724d0b8-2772-40fc-8715-c716242cfb1d,few-shot-learning,Few-Shot Learning,"**Few-Shot Learning** is an example of meta-learning, where a learner is trained on several related tasks, during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just few examples, during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for various tasks and train task specific classifiers on top of this representation. Source: [Penalty Method for Inversion-Free Deep Bilevel Optimization ](https://arxiv.org/abs/1911.03432)",methodology 6392ac21-e21f-49d5-9a14-9242e88d5c98,classification-1,Classification,"**Classification** is the task of categorizing a set of data into predefined classes or groups. The aim of classification is to train a model to correctly predict the class or group of new, unseen data. The model is trained on a labeled dataset where each instance is assigned a class label. The learning algorithm then builds a mapping between the features of the data and the class labels. This mapping is then used to predict the class label of new, unseen data points. The quality of the prediction is usually evaluated using metrics such as accuracy, precision, and recall.",methodology 905c6470-d040-4976-a632-6ce8dc072ee5,entity-embeddings,Entity Embeddings,Entity Embeddings is a technique for applying deep learning to tabular data. It involves representing the categorical data of an information systems entity with multiple dimensions.,methodology b2f8daff-26f6-4219-9103-5c47183d136d,multiobjective-optimization,Multiobjective Optimization,"Multi-objective optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, multiattribute optimization or Pareto optimization) is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.",methodology 1e771223-11ca-4ed3-8d6e-1047ba47a315,eeg-denoising,EEG Denoising,,methodology b20c0abf-7e7c-4c7b-bb8d-fe0345eb03ff,word-embeddings,Word Embeddings,"Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers. Techniques for learning word embeddings can include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification. 
( Image credit: [Dynamic Word Embedding for Evolving Semantic Discovery](https://arxiv.org/pdf/1703.00607v2.pdf) )",methodology f9fdd9f4-1157-4cbd-833f-9225ffa95236,automatic-machine-learning-model-selection,Automatic Machine Learning Model Selection,,methodology 9c1610d8-ab54-4926-8436-cefb8cf5bdae,tensor-networks,Tensor Networks,,methodology b208b8bc-a7d9-4c48-a51c-3bcf2b964936,unsupervised-mnist,Unsupervised MNIST,Accuracy on MNIST when training without any labels,methodology f0de9484-9124-491b-a9a4-ce7dffd03390,unsupervised-domain-adaptation,Unsupervised Domain Adaptation,"**Unsupervised Domain Adaptation** is a learning framework to transfer knowledge learned from source domains with a large number of annotated training examples to target domains with unlabeled data only. Source: [Domain-Specific Batch Normalization for Unsupervised Domain Adaptation ](https://arxiv.org/abs/1906.03950)",methodology 419c0877-1e50-4843-a618-b89bf532e94e,depth-anomaly-segmentation,Depth Anomaly Segmentation,Anomaly Segmentation using depth information only,methodology 43830060-7474-4dc2-87fe-5f19270c06d2,nonparametric-deep-clustering,NONPARAMETRIC DEEP CLUSTERING,Nonparametric deep clustering refers to methods which utilize deep clustering when the number of clusters is not known a priori and needs to be inferred.,methodology 554f828d-5c6d-4955-9827-341eeea0b857,interpretable-machine-learning,Interpretable Machine Learning,"The goal of **Interpretable Machine Learning** is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models. Source: [Assessing the Local Interpretability of Machine Learning Models ](https://arxiv.org/abs/1902.03501)",methodology 507b8a36-64c9-483f-a8b0-32823798f6e1,quantum-circuit-mapping,Quantum Circuit Mapping,Mapping quantum circuits to quantum devices,methodology c49c068f-1932-4a3a-ba82-b1a5029d5faf,partial-domain-adaptation,Partial Domain Adaptation,"**Partial Domain Adaptation** is a transfer learning paradigm, which manages to transfer relevant knowledge from a large-scale source domain to a small-scale target domain. Source: [Deep Residual Correction Network for Partial Domain Adaptation ](https://arxiv.org/abs/2004.04914)",methodology 242f1f86-8ff0-4197-a509-5dc76d52590e,privacy-preserving-deep-learning,Privacy Preserving Deep Learning,"The goal of privacy-preserving (deep) learning is to train a model while preserving privacy of the training dataset. Typically, it is understood that the trained model should be privacy-preserving (e.g., due to the training algorithm being differentially private).",methodology 04347865-d14a-4fdf-a6be-2ccd6aaa60fe,3d-anomaly-segmentation,3D Anomaly Segmentation,Anomaly Segmentation using 3D information only,methodology 4faa5595-d61e-4d4e-819f-95d3b2d782d8,multi-label-classification,Multi-Label Classification,"**Multi-Label Classification** is the supervised learning problem where an instance may be associated with multiple labels. This is an extension of single-label classification (i.e., multi-class, or binary) where each instance is only associated with a single class label.
Source: [Deep Learning for Multi-label Classification ](https://arxiv.org/abs/1502.05988)",methodology 3eb928c4-23de-4cb6-bea6-f6ed352d163d,feature-engineering,Feature Engineering,"Feature engineering is the process of taking a dataset and constructing explanatory variables — features — that can be used to train a machine learning model for a prediction problem. Often, data is spread across multiple tables and must be gathered into a single table with rows containing the observations and features in the columns. The traditional approach to feature engineering is to build features one at a time using domain knowledge, a tedious, time-consuming, and error-prone process known as manual feature engineering. The code for manual feature engineering is problem-dependent and must be re-written for each new dataset.",methodology a89068e7-5c63-4eec-b042-6f96c154d04a,few-shot-relation-classification,Few-Shot Relation Classification,"**Few-Shot Relation Classification** is a particular relation classification task under minimum annotated data, where a model is required to classify a new incoming query instance given only few support instances (e.g., 1 or 5) during testing. Source: [MICK: A Meta-Learning Framework for Few-shot Relation Classification with Little Training Data ](https://arxiv.org/abs/2004.14164)",methodology 4ba4eb3e-f47f-4ef4-a386-ee719941d80c,variable-selection,Variable Selection,,methodology c4d87a01-0b95-4a6d-8716-1892132db7f9,bayesian-optimisation,Bayesian Optimisation,"Expensive black-box functions are a common problem in many disciplines, including tuning the parameters of machine learning algorithms, robotics, and other engineering design problems. **Bayesian Optimisation** is a principled and efficient technique for the global optimisation of these functions. The idea behind Bayesian Optimisation is to place a prior distribution over the target function and then update that prior with a set of “true” observations of the target function by expensively evaluating it in order to produce a posterior predictive distribution. The posterior then informs where to make the next observation of the target function through the use of an acquisition function, which balances the exploitation of regions known to have good performance with the exploration of regions where there is little information about the function’s response. Source: [A Bayesian Approach for the Robust Optimisation of Expensive-to-Evaluate Functions ](https://arxiv.org/abs/1904.11416)",methodology d23ecbee-d015-4b5c-90ac-4579370e6634,graph-representation-learning,Graph Representation Learning,"The goal of **Graph Representation Learning** is to construct a set of features (‘embeddings’) representing the structure of the graph and the data thereon. We can distinguish among Node-wise embeddings, representing each node of the graph, Edge-wise embeddings, representing each edge in the graph, and Graph-wise embeddings representing the graph as a whole. Source: [SIGN: Scalable Inception Graph Neural Networks ](https://arxiv.org/abs/2004.11198)",methodology 5bb50af7-4c68-4672-89cd-ade0bd04d255,abnormal-event-detection-in-video,Abnormal Event Detection In Video,"**Abnormal Event Detection In Video** is a challenging task in computer vision, as the definition of what an abnormal event looks like depends very much on the context. For instance, a car driving by on the street is regarded as a normal event, but if the car enters a pedestrian area, this is regarded as an abnormal event. 
A person running on a sports court (normal event) versus running outside from a bank (abnormal event) is another example. Although what is considered abnormal depends on the context, we can generally agree that abnormal events should be unexpected events that occur less often than familiar (normal) events Source: [Unmasking the abnormal events in video ](https://arxiv.org/abs/1705.08182) Image: [Ravanbakhsh et al](https://arxiv.org/pdf/1708.09644v1.pdf)",methodology 16764319-3f18-4219-b10f-e02f3adc815a,subgroup-discovery,Subgroup Discovery,,methodology fd3ba211-344e-47e4-845c-ac07f342a4af,bilevel-optimization,Bilevel Optimization,"**Bilevel Optimization** is a branch of optimization, which contains a nested optimization problem within the constraints of the outer optimization problem. The outer optimization task is usually referred as the upper level task, and the nested inner optimization task is referred as the lower level task. The lower level problem appears as a constraint, such that only an optimal solution to the lower level optimization problem is a possible feasible candidate to the upper level optimization problem. Source: [Efficient Evolutionary Algorithm for Single-Objective Bilevel Optimization ](https://arxiv.org/abs/1303.3901)",methodology 5329b0ba-5c1b-49e8-8894-cf8a6227b0d8,data-free-quantization,Data Free Quantization,"**Data Free Quantization** is a technique to achieve a highly accurate quantized model without accessing any training data. Source: [Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples](https://arxiv.org/abs/2111.02625)",methodology 74e3945e-6c1a-4bf6-af46-1d61e8775733,generalization-bounds,Generalization Bounds,,methodology 9189c5d0-e38e-4e1b-a0bb-6f2eb79a8f49,clustering-algorithms-evaluation,Clustering Algorithms Evaluation,,methodology 1bf8e18a-2278-45a0-8775-ebf991a69c00,automl,AutoML,"Automated Machine Learning (**AutoML**) is a general concept which covers diverse techniques for automated model learning including automatic data preprocessing, architecture search, and model selection. Source: Evaluating recommender systems for AI-driven data science (1905.09205) Source: [CHOPT : Automated Hyperparameter Optimization Framework for Cloud-Based Machine Learning Platforms ](https://arxiv.org/abs/1810.03527)",methodology 869ebc4f-4d77-42f3-9185-a907706d3eb2,stroke-classification,Stroke Classification,,methodology 3ca62cde-4c58-4984-a9bc-e44e771b095b,online-nonnegative-cp-decomposition,Online nonnegative CP decomposition,,methodology b2c789e9-624d-46ce-af24-57c2ed7a9bb7,multi-goal-reinforcement-learning,Multi-Goal Reinforcement Learning,,methodology 8328a35f-ccb9-4e7c-a7cf-ef9c6951b86c,learning-word-embeddings,Learning Word Embeddings,,methodology 4d438251-598d-4f72-b4a9-369bba9d8456,metaheuristic-optimization,Metaheuristic Optimization,"In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem. 
For some examples, you can visit https://aliasgharheidari.com/Publications.html",methodology 2d2f4310-ab21-4469-9247-b04e3c8a9f36,outlier-ensembles,outlier ensembles,,methodology e3182a4e-1093-4c73-8440-8cdb9dd67e31,distributed-optimization,Distributed Optimization,"The goal of **Distributed Optimization** is to optimize a certain objective defined over millions or billions of data points that are distributed over many machines, by utilizing the computational power of these machines. Source: [Analysis of Distributed Stochastic Dual Coordinate Ascent ](https://arxiv.org/abs/1312.1031)",methodology cc7d25ea-94ee-484d-b772-736ebc0e20d2,federated-unsupervised-learning,Federated Unsupervised Learning,Federated unsupervised learning trains models from decentralized data that have no labels.,methodology 520debbd-dc63-496d-90fa-0c0c0682db1a,outlier-detection,Outlier Detection,"**Outlier Detection** is the task of identifying a subset of a given data set whose instances are considered anomalous in that they are unusual compared with the other instances. It is one of the core data mining tasks and is central to many applications. In the security field, it can be used to identify potentially threatening users; in the manufacturing field it can be used to identify parts that are likely to fail. Source: [Coverage-based Outlier Explanation ](https://arxiv.org/abs/1911.02617)",methodology ef336e0c-5668-414b-b543-c618c03a6b76,hierarchical-reinforcement-learning,Hierarchical Reinforcement Learning,,methodology 2031bb0d-7bb8-4fbd-acbc-8a6e52976f1f,information-plane,Information Plane,"To obtain the Information Plane (IP) of deep neural networks, which shows the trajectories of the hidden layers during training in a 2D plane using as coordinate axes the mutual information between the input and the hidden layer, and the mutual information between the output and the hidden layer.",methodology 9143e6ca-dc1a-4841-b77c-f96989987fd0,disentanglement,Disentanglement,"This is an approach to solving a diverse set of tasks in a data-efficient manner by disentangling (or isolating) the underlying structure of the main problem into disjoint parts of its representations. This disentanglement can be done by focusing on the ""transformation"" properties of the world (the main problem).",methodology b00c9e62-bdba-484c-b24a-0ff6f0c9d253,chatbot,Chatbot,"**Chatbot** or conversational AI is a language model designed and implemented to have conversations with humans. Source: [Open Data Chatbot ](https://arxiv.org/abs/1909.03653) [Image source](https://arxiv.org/pdf/2006.16779v3.pdf)",methodology 60ac9379-53b3-4ad6-bdb2-c74312b9be5b,hyperparameter-optimization,Hyperparameter Optimization,"**Hyperparameter Optimization** is the problem of choosing a set of optimal hyperparameters for a learning algorithm. Whether the algorithm is suitable for the data directly depends on hyperparameters, which directly influence overfitting or underfitting. Each model requires different assumptions, weights or training speeds for different types of data under the conditions of a given loss function.
Source: [Data-driven model for fracturing design optimization: focus on building digital database and production forecast ](https://arxiv.org/abs/1910.14499)",methodology aebcecb6-6278-4335-97c3-9f92b39f4409,generalized-zero-shot-learning,Generalized Zero-Shot Learning,,methodology c7034069-6cfc-4a9f-bf6e-6ef7098b56de,model-compression,Model Compression,"**Model Compression** is an actively pursued area of research over the last few years with the goal of deploying state-of-the-art deep networks in low-power and resource limited devices without significant drop in accuracy. Parameter pruning, low-rank factorization and weight quantization are some of the proposed methods to compress the size of deep networks. Source: [KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow ](https://arxiv.org/abs/2004.05319)",methodology fe56d0d0-e770-40c4-8b41-f0f7791b0bdb,density-estimation,Density Estimation,"The goal of **Density Estimation** is to give an accurate description of the underlying probabilistic density distribution of an observable data set with unknown density. Source: [Contrastive Predictive Coding Based Feature for Automatic Speaker Verification ](https://arxiv.org/abs/1904.01575)",methodology e166497f-7667-4ed7-8dbe-ea69e7ee3f6c,multi-task-learning,Multi-Task Learning,"Multi-task learning aims to learn multiple different tasks simultaneously while maximizing performance on one or all of the tasks. ( Image credit: [Cross-stitch Networks for Multi-task Learning](https://arxiv.org/pdf/1604.03539v1.pdf) )",methodology ee7e9d9d-2c9c-454d-bdf9-fb7f6c833846,sparse-learning,Sparse Learning,,methodology 0440764d-42ca-4514-98cc-b5c15f5126ee,detection-of-dependencies,Detection of Dependencies,,methodology 6e2d852b-119d-4515-86e6-4615c5468af6,architecture-search,Neural Architecture Search,"**Neural architecture search (NAS)** is a technique for automating the design of artificial neural networks (ANN), a widely used model in the field of machine learning. NAS essentially takes the process of a human manually tweaking a neural network and learning what works well, and automates this task to discover more complex architectures. Image Credit : [NAS with Reinforcement Learning](https://arxiv.org/abs/1611.01578)",methodology c87fafc0-7f8a-45fe-862a-0442700f48b3,long-tail-learning,Long-tail Learning,"Long-tailed learning, one of the most challenging problems in visual recognition, aims to train well-performing models from a large number of images that follow a long-tailed class distribution.",methodology a6f10298-0714-4a1c-90a6-012c49e217ca,unsupervised-anomaly-detection-in-sound,Unsupervised Anomaly Detection In Sound,,methodology 0ab14c36-f60d-4e50-9de2-1937ad85c61e,bayesian-optimization,Bayesian Optimization,,methodology 972e52ea-f685-4f2f-9135-f84cf64f6a2d,partial-label-learning,Partial Label Learning,,methodology 9773b4f7-f2ab-41cd-9c7a-da93ea978c56,rgb-depth-anomaly-detection-and-segmentation,RGB+Depth Anomaly Detection and Segmentation,RGB+Depth Anomaly Detection and Segmentation,methodology edab500e-6cc2-411a-bff8-1e5eb2c92893,few-shot-imitation-learning,Few-Shot Imitation Learning,,methodology 974601c8-6db5-413a-9dc1-3c6a438de9be,model-extraction,Model extraction,"Model extraction attacks, aka model stealing attacks, are used to extract the parameters from the target model. 
Ideally, the adversary will be able to steal and replicate a model that will have a very similar performance to the target model.",methodology 6fd8b34d-a5f5-46a1-9f4c-6c98e9ac2321,unsupervised-domain-expansion,Unsupervised Domain Expansion,,methodology 13966e71-63c5-4cc8-a60c-1e6f9d10a36a,logical-fallacy-detection,Logical Fallacy Detection,,methodology d7aba1fe-6238-472e-bafa-a1cc4c87ce60,normalising-flows,Normalising Flows,,methodology 8d2b8d16-2bce-4714-ae38-6ca9b44ffdc2,depth-rgb-anomaly-segmentation,Depth + RGB Anomaly Segmentation,Depth + RGB Anomaly Segmentation,methodology 142145f0-3964-4b3f-a228-aa72f5839583,knowledge-graph-embeddings,Knowledge Graph Embeddings,,methodology 64b5e306-adca-4fac-9db1-2997f5a1c4da,multi-task-language-understanding,Multi-task Language Understanding,"The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. https://arxiv.org/pdf/2009.03300.pdf",methodology cf58dc21-fdf9-480d-8342-c270f1624823,few-shot-camera-adaptive-color-constancy,Few-Shot Camera-Adaptive Color Constancy,,methodology 67322536-dd6b-4d09-92ea-3095c8bb2c47,l2-regularization,L2 Regularization,,methodology 3a4fb1ac-0176-4cd0-ab2e-61c846275146,multi-agent-reinforcement-learning,Multi-agent Reinforcement Learning,"The target of **Multi-agent Reinforcement Learning** is to solve complex problems by integrating multiple agents that focus on different sub-tasks. In general, there are two types of multi-agent systems: independent and cooperative systems. Source: [Show, Describe and Conclude: On Exploiting the Structure Information of Chest X-Ray Reports ](https://arxiv.org/abs/2004.12274)",methodology d18ad6e0-65ef-4643-9bf0-7efa8be24bd4,reinforcement-learning-1,Reinforcement Learning (RL),**Reinforcement Learning (RL)** involves training an agent to take actions in an environment to maximize a cumulative reward signal. The agent interacts with the environment and learns by receiving feedback in the form of rewards or punishments for its actions. The goal of reinforcement learning is to find the optimal policy or decision-making strategy that maximizes the long-term reward.,methodology eb074c2d-9d6f-4d94-8b82-1a60cc9989c8,multiple-instance-learning,Multiple Instance Learning,"**Multiple Instance Learning** is a type of weakly supervised learning algorithm where training data is arranged in bags, where each bag contains a set of instances $X=\\{x_1,x_2, \ldots,x_M\\}$, and there is one single label $Y$ per bag, $Y\in\\{0, 1\\}$ in the case of a binary classification problem. It is assumed that individual labels $y_1, y_2,\ldots, y_M$ exist for the instances within a bag, but they are unknown during training. In the standard Multiple Instance assumption, a bag is considered negative if all its instances are negative. On the other hand, a bag is positive, if at least one instance in the bag is positive. 
Source: [Monte-Carlo Sampling applied to Multiple Instance Learning for Histological Image Classification ](https://arxiv.org/abs/1812.11560)",methodology 849ac727-5a55-4782-b122-0598e019cce9,retrieval,Retrieval,,methodology 72503297-2f91-4141-8c6c-ec497a96e9ea,geophysics,Geophysics,,miscellaneous ac17ead2-52e5-4f00-b0a7-7713abe177a0,crowd-flows-prediction,Crowd Flows Prediction,,miscellaneous 6e5cd293-86f5-4f62-acb1-bdd033f72fbb,one-class-classification,One-Class Classification,,miscellaneous 7081f124-2e3b-4603-8fed-dce14614a0be,college-computer-science,College Computer Science,,miscellaneous 7071b6c0-7e71-4df7-9787-b5f29dd0f2ba,security-studies,Security Studies,,miscellaneous 5e280098-9850-45ad-9779-fb2b0d87b535,photometric-redshift-estimation,Photometric Redshift Estimation,,miscellaneous 6bf11f9f-b489-4ab2-84c5-7f1ffb4e8623,model-discovery,Model Discovery,discovering PDEs from spatiotemporal data,miscellaneous 6da04ad9-56ac-4b50-8048-ef85ccfa2246,multi-modal-classification,Multi-modal Classification,,miscellaneous 176daa02-3dc8-464a-a2a4-965aaba3ad9a,seismic-imaging,Seismic Imaging,,miscellaneous 090fffb7-cc0e-4224-9e58-3405c51a2bae,science-question-answering,Science Question Answering,Image credit: [Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering](https://paperswithcode.com/paper/learn-to-explain-multimodal-reasoning-via),miscellaneous d8879acd-6df4-4db7-95ad-cc1a7e74de72,sequential-correlation-estimation,Sequential Correlation Estimation,,miscellaneous 0695ee9f-48f3-4748-9d6b-47ec493baa6d,multi-lingual-image-text-classification,Multilingual Image-Text Classification,,miscellaneous ae40b2fe-184f-4f24-9c93-7c60a20fab82,penn-machine-learning-benchmark,Penn Machine Learning Benchmark,Penn Machine Learning Benchmarks (PMLB) is a large collection of curated benchmark datasets for evaluating and comparing supervised machine learning algorithms.,miscellaneous a653cd03-4958-4d16-a619-10d759739fc4,fever-3-way,FEVER (3-way),,miscellaneous 7695af33-8982-4024-82ab-9fdf4686cf10,pulsar-prediction,Pulsar Prediction,,miscellaneous 01537b19-7619-4d3f-ba74-de9d178c476c,electrical-engineering,Electrical Engineering,,miscellaneous 83ff09bc-563f-49b6-988f-bb7ae08f25df,robust-design,Robust Design,,miscellaneous 6f359c2a-1f46-46dc-8a63-2384522a2ce4,high-school-psychology,High School Psychology,,miscellaneous bcf96023-5491-4c1b-a331-21cd80fca9f1,problem-decomposition,Problem Decomposition,,miscellaneous 283817d0-1f72-4da3-b2f4-2ef39fa7d7a6,weather-forecasting,Weather Forecasting,"**Weather Forecasting** is the prediction of future weather conditions such as precipitation, temperature, pressure and wind. 
Source: [MetNet: A Neural Weather Model for Precipitation Forecasting ](https://arxiv.org/abs/2003.12140)",miscellaneous 262c2c09-56e7-4dc4-ac49-aebd42a2e0d4,knowledge-aware-recommendation,Knowledge-Aware Recommendation,,miscellaneous a7c93542-ea6e-4268-b20a-ebbf534a5fe8,moral-permissibility,Moral Permissibility,,miscellaneous 37e1cfa5-f60e-4d1b-851d-192cb1a0da0f,jurisprudence,Jurisprudence,,miscellaneous e2604ef7-a3af-4e90-93d0-6dab8e297f20,food-recommendation,Food recommendation,,miscellaneous 1ed7286a-7f10-4bb2-9091-f2f4b0056d7f,seismic-interpretation,Seismic Interpretation,,miscellaneous 2670c8e7-dd89-4b4e-a7fd-9eb89f0ace97,clinical-knowledge,Clinical Knowledge,,miscellaneous b1cc64b7-c7e6-469f-8ad4-8fa1d6307e62,radio-interferometry,Radio Interferometry,,miscellaneous dc603a3e-a003-44a9-a18f-68a35689e7b6,behavioral-malware-detection,Behavioral Malware Detection,,miscellaneous 64bc4ad5-bf77-446d-a87b-7f7551755036,classifier-calibration,Classifier calibration,Confidence calibration – the problem of predicting probability estimates representative of the true correctness likelihood – is important for classification models in many applications. The two common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).,miscellaneous 34dda677-3078-4432-886e-9986ff39db9f,transfer-learning,Transfer Learning,"**Transfer Learning** is a machine learning technique where a model trained on one task is re-purposed and fine-tuned for a related, but different task. The idea behind transfer learning is to leverage the knowledge learned from a pre-trained model to solve a new, but related problem. This can be useful in situations where there is limited data available to train a new model from scratch, or when the new task is similar enough to the original task that the pre-trained model can be adapted to the new problem with only minor modifications. ( Image credit: [Subodh Malgonde](https://medium.com/@subodh.malgonde/transfer-learning-using-tensorflow-52a4f6bcde3e) )",miscellaneous 1d5bd9ba-796e-4690-9725-2c1038adf0c6,intrusion-detection,Intrusion Detection,"**Intrusion Detection** is the process of dynamically monitoring events occurring in a computer system or network, analyzing them for signs of possible incidents and often interdicting the unauthorized access. This is typically accomplished by automatically collecting information from a variety of systems and network sources, and then analyzing the information for possible security problems. Source: [Machine Learning Techniques for Intrusion Detection ](https://arxiv.org/abs/1312.2177)",miscellaneous 3fdcf716-33f2-4f4b-9592-cac0a8a13db4,natural-questions,Natural Questions,,miscellaneous 95172bb9-0962-4bd0-98cd-ae540cc088a3,econometrics,Econometrics,,miscellaneous dd2e0f23-4ce2-4c47-80b3-ea2d82390d23,classification-with-costly-features,Classification with Costly Features,The task is to classify the dataset with costly features with different budget settings. 
The final metric is the normalized area under the cost-accuracy curve.,miscellaneous 20f0b753-47bf-4ec6-a4e3-4ea79e72fadd,human-sexuality,Human Sexuality,,miscellaneous de5a2a65-e397-41b4-8f52-ba9db4990515,android-malware-detection,Android Malware Detection,,miscellaneous e1b374a8-6c1d-4d33-b62c-2d7613cf4233,modeling-local-geometric-structure,Modeling Local Geometric Structure,,miscellaneous 99b23892-6d6a-422e-ae09-7a8be88f3cd8,ethics,Ethics,,miscellaneous 0fd0b57a-7cbf-4e8b-b469-ec3015dcac81,behavioral-malware-classification,Behavioral Malware Classification,,miscellaneous 752ccdad-61c2-4cab-9b1b-351b7d89e4f1,ecommerce,Ecommerce,,miscellaneous b0ba7872-520e-4f2f-a5dd-0e72f111b0ac,high-school-statistics,High School Statistics,,miscellaneous 1438ab95-cf2d-48cb-8d3a-a91580c98138,sentence-ambiguity,Sentence Ambiguity,"Asks models to identify the truth or falsehood of purposely ambiguous sentences. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/sentence_ambiguity)",miscellaneous 801dfdf2-0253-4612-b01d-d2f3e2cc5afc,network-intrusion-detection,Network Intrusion Detection,**Network intrusion detection** is the task of monitoring network traffic to and from all devices on a network in order to detect computer attacks.,miscellaneous 327cf93f-eb85-4deb-85c0-9d9327ec901a,next-basket-recommendation,Next-basket recommendation,,miscellaneous b42039d3-d972-4f4e-973d-bcaf96a9dbdc,vector-quantization-k-means-problem,Vector Quantization (k-means problem),Given a data set $X$ of d-dimensional numeric vectors and a number $k$ find a codebook $C$ of $k$ d-dimensional vectors such that the sum of squared distances of each $x \in X$ to the respective nearest $c \in C$ is as small as possible. This is also known as the k-means problem and is known to be NP-hard.,miscellaneous 71623fa0-dc86-49a2-8317-2ad20a51df1b,load-forecasting,Load Forecasting,,miscellaneous 7f48fab4-3bf1-4ec0-a99a-fd31acf50d65,cross-modal-retrieval,Cross-Modal Retrieval,"**Cross-Modal Retrieval** is used for implementing a retrieval task across different modalities, such as image-text, video-text, and audio-text Cross-Modal Retrieval. The main challenge of Cross-Modal Retrieval is the modality gap, and the key solution is to generate new representations from different modalities in a shared subspace, such that the newly generated features can be used to compute distance metrics such as cosine distance and Euclidean distance. Source: [Deep Triplet Neural Networks with Cluster-CCA for Audio-Visual Cross-modal Retrieval ](https://arxiv.org/abs/1908.03737)",miscellaneous aaf09cf5-9842-487a-886c-6d271c44022e,protein-interface-prediction,Protein Interface Prediction,"How and where proteins interface with one another can ultimately impact the proteins' functions along with a range of other biological processes. 
As such, precise computational methods for protein interface prediction (PIP) come highly sought after as they could yield significant advances in drug discovery and design as well as protein function analysis.",miscellaneous 31498f94-96f7-483f-ab7a-464dc4e6eeff,air-quality-inference,Air Quality Inference,,miscellaneous 43a9330f-4c14-442e-8e22-11a250a9a1f6,medical-genetics,Medical Genetics,,miscellaneous 3d8e4676-4d0a-4015-b41b-87779412b67a,gender-bias-detection,Gender Bias Detection,,miscellaneous 9db484ad-cfaf-413a-add7-044c379acb32,high-school-world-history,High School World History,,miscellaneous 263e8539-9f6f-4ab5-9393-b497df13a403,professional-psychology,Professional Psychology,,miscellaneous 2763edbb-6587-45dd-8a64-d0710e3b9cb9,mathematical-proofs,Mathematical Proofs,,miscellaneous 42ff56ca-3dcc-45eb-9f79-bb4aba91d6fc,time-offset-calibration,Time Offset Calibration,,miscellaneous f1d32835-ed89-4415-af51-81fc39c58544,table-detection,Table Detection,Image credit:[Table Detection in the Wild: A Novel Diverse Table Detection Dataset and Method](https://paperswithcode.com/paper/table-detection-in-the-wild-a-novel-diverse),miscellaneous 6ce6a72e-1706-435c-bcb4-e594ac31264b,malware-classification,Malware Classification,"**Malware Classification** is the process of assigning a malware sample to a specific malware family. Malware within a family shares similar properties that can be used to create signatures for detection and classification. Signatures can be categorized as static or dynamic based on how they are extracted. A static signature can be based on a byte-code sequence, binary assembly instruction, or an imported Dynamic Link Library (DLL). Dynamic signatures can be based on file system activities, terminal commands, network communications, or function and system call sequences. Source: [Behavioral Malware Classification using Convolutional Recurrent Neural Networks ](https://arxiv.org/abs/1811.07842)",miscellaneous b4857e52-1262-4537-bf60-0bdbd6ff8325,cross-modal-information-retrieval,Cross-Modal Information Retrieval,"**Cross-Modal Information Retrieval** (CMIR) is the task of finding relevant items across different modalities. For example, given an image, find a text or vice versa. The main challenge in CMIR is known as the *heterogeneity gap*: since items from different modalities have different data types, the similarity between them cannot be measured directly. Therefore, the majority of CMIR methods published to date attempt to bridge this gap by learning a latent representation space, where the similarity between items from different modalities can be measured. Source: [Scene-centric vs. 
Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study](https://arxiv.org/abs/2301.05174)",miscellaneous 90251495-5b8b-4071-bf15-53813aeaec27,making-hiring-decisions,Making Hiring Decisions,,miscellaneous a7657ab6-3004-424f-84a6-d1b536ed59a5,deception-detection-in-videos,Deception Detection In Videos,,miscellaneous 8fa44651-8c68-43c7-b3d0-4e76fecebb5f,liquid-simulation,Liquid Simulation,,miscellaneous ec7bc8b0-edae-4821-910f-0d53a330161e,x-ray-diffraction,X-Ray Diffraction (XRD),"Diffraction of X-ray patterns and images, with common applications for materials and images.",miscellaneous e199a9ca-7689-47bc-9e79-961bea154851,anatomy,Anatomy,,miscellaneous 00a3dcbf-5c7b-459f-8ce4-9eb6fdb8acba,human-grasp-contact-prediction,Grasp Contact Prediction,Predict contact between object and hand (human or robot).,miscellaneous 8bcf5bb5-da24-4ccb-b09c-7c79b1ebb612,pseudo-label,Pseudo Label,A lightweight but very powerful technique for semi-supervised learning,miscellaneous 7fd95146-8609-4441-afb7-30abf672bd31,hypothesis-testing,Two-sample testing,"In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each independently obtained from a different given population. The purpose of the test is to determine whether the difference between these two populations is statistically significant. The statistics used in two-sample tests can be used to solve many machine learning problems, such as domain adaptation, covariate shift and generative adversarial networks.",miscellaneous d3f90d0d-1a33-4567-b755-8469dc98737b,detecting-adverts,Detecting Adverts,,miscellaneous 0d1b286e-1ed8-4322-bf8c-3f144fd4fdc2,multi-modal-learning,Multi-Modal Learning,,miscellaneous bbe2a46c-b86f-4dc5-b2fa-3fcd1ab25b1d,neural-network-compression,Neural Network Compression,,miscellaneous 822dae6a-db48-4a69-9234-a0c0506c80c8,auto-debugging,Auto Debugging,,miscellaneous 6c46194e-223b-476b-a3ad-42d6792fd43f,deception-detection,Deception Detection,,miscellaneous d340c51e-1244-47aa-a482-edde721956fa,knowledge-tracing,Knowledge Tracing,"**Knowledge Tracing** is the task of modelling student knowledge over time so that we can accurately predict how students will perform on future interactions. Improvement on this task means that resources can be suggested to students based on their individual needs, and content which is predicted to be too easy or too hard can be skipped or delayed. 
Source: [Deep Knowledge Tracing ](https://arxiv.org/abs/1506.05908)",miscellaneous fc6f189d-9c7b-4a6b-90aa-a687717dc80e,counterfactual-explanation,Counterfactual Explanation,"Returns a contrastive argument that permits to achieve the desired class, e.g., “to obtain this loan, you need XXX of annual revenue instead of the current YYY”",miscellaneous ac244e0c-8995-4ad9-977b-c4f9283cffa7,professional-medicine,Professional Medicine,,miscellaneous 23cde2db-49c5-4f7c-bab0-56c23f1e0613,numerical-integration,Numerical Integration,Numerical integration is the task to calculate the numerical value of a definite integral or the numerical solution of differential equations.,miscellaneous bf52ea8c-cb81-4b5b-b7be-0cfaebe20124,human-organs-senses-multiple-choice,Human Organs Senses Multiple Choice,,miscellaneous 3ce9db7d-9923-4cc1-a799-aef9b4c73599,change-detection-for-remote-sensing-images,Change detection for remote sensing images,,miscellaneous bf33302f-ddac-4dd0-8098-9fa0fb14f4aa,high-school-government-and-politics,High School Government and Politics,,miscellaneous d6b4900c-f95b-43bf-be54-4beeb1e03049,nutrition,Nutrition,,miscellaneous 4d28975c-1813-4c99-bfca-822b6a6c30d0,stress-strain-relation,Stress-Strain Relation,"Data-driven techniques for finding stress-strain relation in non-linearly elastic bodies. ( Image credit: [Data-driven Computing in Elasticity via Chebyshev Approximation](https://arxiv.org/pdf/1904.10434.pdf) )",miscellaneous 717ace6a-75c1-410a-af2f-fcd2860f962e,multimodal-intent-recognition,Multimodal Intent Recognition,"Intent recognition on multimodal content. Image source: [MIntRec: A New Dataset for Multimodal Intent Recognition](https://paperswithcode.com/dataset/mintrec)",miscellaneous a86bafc9-ea26-4589-a614-0d438873b560,insurance-prediction,Insurance Prediction,,miscellaneous 01cd810c-740e-4d9d-9faf-744740d484ab,penn-machine-learning-benchmark-real-world,Penn Machine Learning Benchmark (Real-World),Real-World Datasets in Penn Machine Learning Benchmark,miscellaneous 5073bf55-7281-48b9-bd6d-c02c951d2852,social-media-popularity-prediction,Social Media Popularity Prediction,"Social Media Popularity Prediction (SMPP) aims to predict the future popularity (e.g., clicks, views, likes, etc.) of online posts automatically via plenty of social media data from public platforms. It is a crucial problem for social media learning and forecasting and one of the most challenging problems in the field. With the ever-changing user interests and public attention on social media platforms, how to predict popularity accurately becomes more challenging than before. This task is valuable to content providers, marketers, or consumers in a range of real-world applications, including multimedia advertising, recommendation system, or trend analysis.",miscellaneous c3c48d83-8229-467c-83cf-18f1ba1d07c4,community-question-answering,Community Question Answering,"Community question answering is the task of answering questions on a Q&A forum or board, such as Stack Overflow or Quora.",miscellaneous 9c252bb7-8edd-4b7b-bc54-695ff3e9c0b1,us-foreign-policy,US Foreign Policy,,miscellaneous 71620efe-afc5-4df1-b763-53b45fc0bad7,click-through-rate-prediction,Click-Through Rate Prediction,"Click-through rate prediction is the task of predicting the likelihood that something on a website (such as an advertisement) will be clicked. 
( Image credit: [Deep Spatio-Temporal Neural Networks for Click-Through Rate Prediction](https://arxiv.org/pdf/1906.03776v2.pdf) )",miscellaneous 69c34546-04e0-45f5-9b05-10dfa766cc70,seismic-detection,Seismic Detection,"When recording seismic ground motion in multiple sites using independent recording stations, one needs to recognize the presence of the same parts of seismic waves arriving at these stations. This problem is known in seismology as seismic phase picking or, more generally, seismic detection.",miscellaneous e9d9d330-f914-48e9-9471-213b2040f080,product-recommendation,Product Recommendation,,miscellaneous 55094b1c-e561-46c4-8856-0ed1317afbd4,imbalanced-classification,imbalanced classification,Learning a classifier from class-imbalanced data,miscellaneous 813fb63e-3b5f-4e38-a0fb-3d2e738a3e1b,machine-learning,BIG-bench Machine Learning,This branch includes the most common fundamental machine learning algorithms.,miscellaneous 31645594-a94b-4f90-b86c-77ad030037d9,operator-learning,Operator learning,Learn an operator between infinite-dimensional Hilbert spaces or Banach spaces,miscellaneous 4059a10e-1eb7-4c30-9437-627f44c231b4,sequential-distribution-function-estimation,Sequential Distribution Function Estimation,,miscellaneous 8a55795a-d957-4208-bb1f-4db1803e569d,high-school-microeconomics,High School Microeconomics,,miscellaneous 042fa4df-4af4-478a-a811-f7babfcf1a34,twitter-bot-detection,Twitter Bot Detection,"Academic studies estimate that up to 15% of Twitter users are automated bot accounts [1]. The prevalence of Twitter bots coupled with the ability of some bots to give seemingly human responses has enabled these non-human accounts to garner widespread influence. Hence, detecting non-human Twitter users or automated bot accounts using machine learning techniques has become an area of interest to researchers in the last few years. [1] https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15587",miscellaneous f0e95d50-93e0-458c-a349-b3a7e8c109a0,college-chemistry,College Chemistry,,miscellaneous 52157bca-f309-43b9-a18e-f5d32a23cc17,formation-energy,Formation Energy,"On the QM9 dataset the numbers reported in the table are the mean absolute error in eV on the target variable U0 divided by U0's chemical accuracy, which is equal to 0.043.",miscellaneous ea5b5972-67ca-42b2-a659-64fde3d09307,international-law,International Law,,miscellaneous d0afbbff-d61f-4ca0-b355-09c4ee3d24e4,online-review-rating,Online Review Rating,,miscellaneous 0303c9d5-c9aa-46e3-98ae-40651a398b01,variational-monte-carlo,Variational Monte Carlo,Variational methods for quantum physics,miscellaneous 608ad684-4625-4181-a981-b758bf5cf286,eeg-emotion-recognition,EEG Emotion Recognition,Emotion Recognition using EEG signals,miscellaneous a61bf4b2-3073-4993-bd89-393bf33cf1f5,learning-theory,Learning Theory,Learning theory,miscellaneous c0237db9-7da8-4910-acf2-59a5da4ece9e,multi-modal,Image Retrieval with Multi-Modal Query,"The problem of retrieving images from a database based on a multi-modal (image-text) query. 
Specifically, the query text prompts some modification in the query image and the task is to retrieve images with the desired modifications.",miscellaneous 0c39a80a-26a9-4f93-8ca3-c6da3cbbe505,prehistory,Prehistory,,miscellaneous 4ffa477b-b830-4920-b35a-9aea060bce56,classification-of-variable-stars,Classification Of Variable Stars,,miscellaneous f31034db-9546-4a48-916d-b9d6a6426dee,neural-network-security,Neural Network Security,,miscellaneous 63059fd7-7ed9-4e4a-9c5b-698e8fcca099,logical-fallacies,Logical Fallacies,,miscellaneous 346de590-2b2a-4f94-a30d-682b9e352474,brain-decoding,Brain Decoding,"**Motor Brain Decoding** is a fundamental task for building motor brain-computer interfaces (BCIs). Progress in predicting finger movements based on brain activity allows us to restore motor functions and improve the rehabilitation process for patients.",miscellaneous 512dd339-ae98-41d9-9920-e498461feb28,college-medicine,College Medicine,,miscellaneous a2157b67-ae63-4c64-ba47-0163a23f3478,similarities-abstraction,Similarities Abstraction,,miscellaneous 21704353-71fc-4a80-9dd1-9a21f9926549,lake-ice-detection,Lake Ice Monitoring,,miscellaneous ff0f5fbb-9805-45db-ac28-7c7ac576540c,misconceptions,Misconceptions,"Measures whether a model can discern popular misconceptions from the truth. Example: ``` input: The daddy longlegs spider is the most venomous spider in the world. choice: T choice: F answer: F input: Karl Benz is correctly credited with the invention of the first modern automobile. choice: T choice: F answer: T ``` Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/misconceptions)",miscellaneous d3094cc3-c3e9-4f83-acbe-b1fbf0d07196,moral-disputes,Moral Disputes,,miscellaneous b226acb1-026b-4f0e-afa5-4cee048a4cd3,denoising-of-radar-micro-doppler-signatures,Denoising Of Radar Micro-Doppler Signatures,,miscellaneous a7277aff-c116-4393-8176-7897681a3395,high-school-chemistry,High School Chemistry,,miscellaneous 4cdb2e5f-e884-4e8e-9da5-ebfc52705e91,general-knowledge,General Knowledge,"This task aims to evaluate the ability of a model to answer general-knowledge questions. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/general_knowledge)",miscellaneous 6ef2d9dc-a9fc-443f-98cd-039f72a661cb,learning-to-rank,Learning-To-Rank,"Learning to rank is the application of machine learning to build ranking models. Some common use cases for ranking models are information retrieval (e.g., web search) and news feed applications (think Twitter, Facebook, Instagram).",miscellaneous c286f017-84f0-4268-b9d2-3e7dfc5ac1e5,data-visualization,Data Visualization,,miscellaneous 43be7429-ba8e-414d-af3a-0500875f22cf,prediction-intervals,Prediction Intervals,"A prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. 
Prediction intervals are often used in regression analysis.",miscellaneous d4949f31-da1e-4682-a7d2-16b25fa46023,synthetic-data-generation,Synthetic Data Generation,The generation of tabular data by any means possible.,miscellaneous b7ff8693-2ebf-44f4-8bf6-7af0c88a19a4,college-biology,College Biology,,miscellaneous f9603e6c-8bcf-47f1-8ff0-d99fa6161e8a,business-ethics,Business Ethics,,miscellaneous 1fe4d57e-fd3a-4c34-9386-e0c270480456,non-linear-elasticity,Non-Linear Elasticity,,miscellaneous f008bb1b-09af-4853-a892-309c3f40888c,high-school-geography,High School Geography,,miscellaneous b2bec2cb-2866-42bf-a68d-c8dab21cde90,recommendation-systems,Recommendation Systems,"The **Recommendation Systems** task is to produce a list of recommendations for a user. The most common methods used in recommender systems are factor models (Koren et al., 2009; Weimer et al., 2007; Hidasi & Tikk, 2012) and neighborhood methods (Sarwar et al., 2001; Koren, 2008). Factor models work by decomposing the sparse user-item interactions matrix to a set of d dimensional vectors one for each item and user in the dataset. Factor models are hard to apply in session-based recommendations due to the absence of a user profile. On the other hand, neighborhood methods, which rely on computing similarities between items (or users) are based on co-occurrences of items in sessions (or user profiles). Neighborhood methods have been used extensively in session-based recommendations. ( Image credit: [CuMF_SGD](https://arxiv.org/pdf/1610.05838v3.pdf) )",miscellaneous 7fa984e1-32c2-4ddf-8036-f7c112c317c9,building-change-detection-for-remote-sensing,Building change detection for remote sensing images,,miscellaneous c2d38260-4c09-4e03-9e34-9bd0d65f61fc,session-based-recommendations,Session-Based Recommendations,Recommendation based on a sequence of events. e.g. next item prediction,miscellaneous ec23ad69-ddb0-4dbd-8711-6dbeda062c3d,unsupervised-contextual-anomaly-detection,Unsupervised Contextual Anomaly Detection,"The objective of Unsupervised Anomaly Detection is to detect previously unseen rare objects or events. Unsupervised Contextual Anomaly Detection is formulated such that the data contains two types of attributes, behavioral and contextual attributes with no pre-existing information which observations are anomalous. Behavioral attributes are attributes that relate directly to the process of interest whereas contextual attributes relate to exogenous but highly affecting factors in relation to the process. Generally the behavioral attributes are conditional on the contextual attributes. Source: [Unsupervised Contextual Anomaly Detection using Joint Deep Variational Generative Models](https://arxiv.org/pdf/1904.00548.pdf)",miscellaneous 92c841a2-ce31-48d9-821b-d2f73c4f38c8,crop-classification,Crop Classification,,miscellaneous bc56a85b-372c-4910-87ea-c30e7b03e811,dead-reckoning-prediction,Dead-Reckoning Prediction,,miscellaneous 564b98c1-0804-4acb-809c-53448baa2855,extracting-buildings-in-remote-sensing-images,Extracting Buildings In Remote Sensing Images,,miscellaneous dc900a5b-31c5-4ef1-8734-ab7fa8fe008a,traffic-classification,Traffic Classification,"**Traffic Classification** is a task of categorizing traffic flows into application-aware classes such as chats, streaming, VoIP, etc. Classification can be used for several purposes including policy enforcement and control or QoS management. 
Source: [Classification of Traffic Using Neural Networks by Rejecting: a Novel Approach in Classifying VPN Traffic ](https://arxiv.org/abs/2001.03665)",miscellaneous 66e14665-f220-4a8d-b860-b3aeea4e09fd,fairness,Fairness,,miscellaneous 7ccdf18b-ce6a-4595-976f-644de2236a89,multilingual-text-classification,Multilingual text classification,,miscellaneous 66a3f25b-197d-483e-b2bc-d8eca66a2922,service-composition,Service Composition,"Let $T$ be the task that the service composition needs to accomplish. The task $T$ can be granulated into $T_1, T_2, T_3, T_4, \dots, T_n$, i.e. $T = \{T_1, T_2, T_3, T_4, \dots, T_n\}$. For each task $T_i$, a set of services $S_i = \{S_{i1}, S_{i2}, S_{i3}, \dots, S_{im}\}$ is discovered during the service discovery process such that all services in a set $S_i$ perform the same function and have the same input and output parameters (see Figure 2): $S_1 = \{S_{11}, S_{12}, S_{13}, \dots, S_{1m}\}$, $S_2 = \{S_{21}, S_{22}, S_{23}, \dots, S_{2m}\}$, $S_3 = \{S_{31}, S_{32}, S_{33}, \dots, S_{3m}\}$, $\dots$, $S_n = \{S_{n1}, S_{n2}, S_{n3}, \dots, S_{nm}\}$. We need to select one service from each set $S_i$ in order to compose the big service such that the overall QoS attributes of the big service are optimal. The total number of possible distinct service compositions is $n^m$. Let $k$ be the number of QoS attributes. Then the total number of comparisons required is $kn^m$. We need at least $kn^m$ comparisons to find whether the solution is optimal, thus making the problem NP-Hard.",miscellaneous 505bb0ee-bfae-481a-b154-6f87ed5e31ae,non-intrusive-load-monitoring,Non-Intrusive Load Monitoring,,miscellaneous 862137f6-c7f7-48ea-948c-ebe808c279ee,marketing,Marketing,,miscellaneous a521c880-2113-4c43-a5ef-f6b8e2f3b71e,epistemic-reasoning,Epistemic Reasoning,,miscellaneous 10761d2a-2df6-42e0-8622-60333a163fc5,artificial-life,Artificial Life,,miscellaneous 6a8a2833-2b27-4e6e-b816-13b730e5f6ce,parameter-prediction,Parameter Prediction,,miscellaneous 8c857a29-bf8b-4793-adc9-911ca25c11c5,image-text-classification,Image-text Classification,,miscellaneous de883e78-58cc-49bd-aefa-05f9283635c2,crime-prediction,Crime Prediction,,miscellaneous 595e258f-fa06-4c71-a8e3-b0c58c407ca6,age-estimation,Age Estimation,"Age Estimation is the task of estimating the age of a person from an image or some other kind of data. ( Image credit: [BridgeNet](https://arxiv.org/pdf/1904.03358v1.pdf) )",miscellaneous 45948f11-b239-420e-8319-3514a74601e7,cryptanalysis,Cryptanalysis,,miscellaneous c06fea43-a716-4844-baee-cd9e92925365,cantilever-beam,Cantilever Beam,,miscellaneous a4b6e8bf-7ded-4bb3-984f-eeee290ed7e7,link-quality-estimation,Link Quality Estimation,,miscellaneous 67595113-7799-4bbb-aebf-71388669f720,movie-recommendation,Movie Recommendation,"Evaluates the ability of language models to propose relevant movie recommendations with collaborative filtering data. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/movie_recommendation)",miscellaneous b88a5a1e-c4f7-4650-94be-9f057304ecfe,high-school-computer-science,High School Computer Science,,miscellaneous 22788391-3433-48a2-a263-0f87ad3ea13d,human-detection-of-deepfakes,Human Detection of Deepfakes,"The task of detecting deepfake stimuli, as given to human participants in a statistical study. Methodologies should ideally include a-priori power analysis (e.g. 
using the GPower software) to calculate the sample size of human participants that would be sufficient to detect the presence of a main effect of a specified effect size.",miscellaneous 6d9437b4-3f55-4b54-8143-fba199125967,high-school-biology,High School Biology,,miscellaneous 4c906375-8ee5-4f15-867c-bfbea646861d,protein-structure-prediction,Protein Structure Prediction,Image credit: [FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours](https://arxiv.org/pdf/2203.00854v1.pdf),miscellaneous f0bef52f-8049-4da3-89dc-b7f405f0028a,recipe-generation,Recipe Generation,,miscellaneous 6bb5ce72-680c-4983-a09c-3b2d65a94918,online-ranker-evaluation,Online Ranker Evaluation,,miscellaneous 4ce62fb8-72e2-4f80-a92b-e8b58f9cf158,virology,Virology,,miscellaneous dae19dc3-53c5-4521-aaf9-5ab2ffa72158,interpretability-techniques-for-deep-learning,Interpretability Techniques for Deep Learning,,miscellaneous 340ef5e7-c975-4fac-9d54-623db5fadbab,image-to-image-translation,Image-to-Image Translation,"**Image-to-Image Translation** is a task in computer vision and machine learning where the goal is to learn a mapping between an input image and an output image, such that the output image can be used to perform a specific task, such as style transfer, data augmentation, or image restoration. ( Image credit: [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/pdf/1703.10593v6.pdf) )",miscellaneous 621a7e7b-80bf-477f-a866-68901d4ee971,imputation,Imputation,Substituting missing data with values according to some criteria.,miscellaneous a0ea16fb-4b32-4671-adb1-3d65429299ac,segmentation-of-remote-sensing-imagery,Segmentation Of Remote Sensing Imagery,,miscellaneous 8092a3e4-cc8f-4471-a842-dc7f27f1fca7,context-aware-product-recommendation,Context Aware Product Recommendation,Context-aware recommender systems (CARS) generate more relevant recommendations by adapting them to the specific contextual situation of the user. This article explores how contextual information can be used to create more intelligent and useful recommender systems.,miscellaneous 49b003be-5d96-43ed-a934-20eae52f8a26,smart-grid-prediction,Smart Grid Prediction,,miscellaneous fe1c291f-91c3-443c-b1c5-2ac234a2b065,total-energy,Total Energy,,miscellaneous f16b9d4f-00e3-4a4a-a98b-68e1986a3177,collaborative-filtering,Collaborative Filtering,,miscellaneous d2517005-e8f3-4d4f-9070-646515b4bd94,pde-surrogate-modeling,PDE Surrogate Modeling,,miscellaneous 11976407-3c46-4cf8-b483-83ab456fde68,cyber-attack-investigation,Cyber Attack Investigation,,miscellaneous d4ffda66-fb74-42e2-854e-537888f2ec82,business-taxonomy-construction,Business Taxonomy Construction,,miscellaneous 023a73cf-89e0-49f5-9966-536e3c9c88a3,malware-detection,Malware Detection,"**Malware Detection** is a significant part of endpoint security including workstations, servers, cloud instances, and mobile devices. Malware Detection is used to detect and identify malicious activities caused by malware. With the increase in the variety of malware activities on CMS based websites such as [malicious malware redirects on WordPress site](https://secure.wphackedhelp.com/blog/wordpress-malware-redirect-hack-cleanup/) (Aka, WordPress Malware Redirect Hack) where the site redirects to spam, being the most widespread, the need for automatic detection and classifier amplifies as well. 
The signature-based Malware Detection system is commonly used for existing malware that has a signature, but it is not suitable for unknown malware or zero-day malware. Source: [The Threat of Adversarial Attacks on Machine Learning in Network Security - A Survey ](https://arxiv.org/abs/1911.02621)",miscellaneous 06731512-fff1-4fe6-a30c-8cf47eb2cafe,air-pollution-prediction,Air Pollution Prediction,,miscellaneous 83ea7720-712d-4e78-a786-5c645bd24a12,fever-2-way,FEVER (2-way),,miscellaneous 44db3330-58bf-4b15-86c1-038e793679f5,fine-grained-urban-flow-inference,Fine-Grained Urban Flow Inference,Fine-grained urban flow inference (FUFI) aims to infer the fine-grained urban flow map from the coarse-grained one.,miscellaneous eecfb5c8-d90c-4d72-b07b-7e3cf7ae98f8,survival-analysis,Survival Analysis,"**Survival Analysis** is a branch of statistics focused on the study of time-to-event data, usually called survival times. This type of data appears in a wide range of applications such as failure times in mechanical systems, death times of patients in a clinical trial or duration of unemployment in a population. One of the main objectives of Survival Analysis is the estimation of the so-called survival function and the hazard function. If a random variable has density function $f$ and cumulative distribution function $F$, then its survival function $S$ is $1-F$, and its hazard $λ$ is $f/S$. Source: [Gaussian Processes for Survival Analysis ](https://arxiv.org/abs/1611.00817) Image: [Kvamme et al.](https://arxiv.org/pdf/1910.06724v1.pdf)",miscellaneous 5367f69b-3045-40f0-8c20-0c216e71a6c5,moral-scenarios,Moral Scenarios,,miscellaneous c7e789de-2e26-4ebb-b1a1-a00a0ee910e4,high-school-macroeconomics,High School Macroeconomics,,miscellaneous f4f65918-335b-4ae4-ac59-b988511c69f9,fraud-detection,Fraud Detection,"**Fraud Detection** is a vital topic that applies to many industries, including the financial sector, banking, government agencies, insurance, law enforcement, and more. Fraud attempts have risen sharply in recent years, making this topic more critical than ever. Despite efforts on the part of the affected organizations, hundreds of millions of dollars are lost to fraud each year. Because only a small fraction of samples in a vast population are fraudulent, locating them can be complex. Data mining and statistics help to predict and quickly identify fraud so that immediate action can be taken to minimize costs. Source: [Applying support vector data description for fraud detection ](https://arxiv.org/abs/2006.00618)",miscellaneous 6b553175-cd20-43b5-9525-375aec01bc6c,astronomy,Astronomy,,miscellaneous a01f03b5-3d55-4c05-9486-bd7b7aa69c0c,triviaqa,TriviaQA,,miscellaneous 423560cd-c974-4faa-bc3f-9f715dcbdd1d,facies-classification,Facies Classification,,miscellaneous ade7810d-cf65-4212-adb2-e386bc117186,multi-modal-person-identification,Multi-Modal Person Identification,,miscellaneous 87c13faf-63a6-4d96-8219-92e2a6f74c1c,multi-armed-bandits,Multi-Armed Bandits,"Multi-armed bandits refer to a task where a fixed amount of resources must be allocated among competing choices in a way that maximizes expected gain. Typically these problems involve an exploration/exploitation trade-off. 
( Image credit: [Microsoft Research](http://research.microsoft.com/en-us/projects/bandits/) )",miscellaneous fea3167c-5b26-4280-8ef4-ce122c28b49b,molecular-property-prediction,Molecular Property Prediction,Molecular property prediction is the task of predicting the properties of a molecule from its structure.,miscellaneous a6e3477b-776e-4c33-a0ad-329863e51f39,deep-clustering,Deep Clustering,,miscellaneous 55542cca-1247-4f1d-b7b1-80073df5146a,zero-shot-cross-modal-retrieval,Zero-Shot Cross-Modal Retrieval,"**Zero-Shot Cross-Modal Retrieval** is the task of finding relevant items across different modalities without having received any training examples. For example, given an image, find a text or vice versa. The main challenge in the task is known as the *heterogeneity gap*: since items from different modalities have different data types, the similarity between them cannot be measured directly. Therefore, the majority of methods published to date attempt to bridge this gap by learning a latent representation space, where the similarity between items from different modalities can be measured. Source: [Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study](https://arxiv.org/abs/2301.05174)",miscellaneous 2094ebe7-1e91-470e-8b19-797815e1845d,network-congestion-control,Network Congestion Control,,miscellaneous c5c3c3a1-5df6-41d1-8481-29b7d0079e57,world-religions,World Religions,,miscellaneous 0239b489-9df4-41f4-b72b-d45134862dd0,contextual-anomaly-detection,Contextual Anomaly Detection,"The objective of Unsupervised Anomaly Detection is to detect previously unseen rare objects or events. Contextual Anomaly Detection is formulated such that the data contains two types of attributes, behavioral and contextual attributes. Behavioral attributes are attributes that relate directly to the process of interest whereas contextual attributes relate to exogenous but highly affecting factors in relation to the process. Generally the behavioral attributes are conditional on the contextual attributes. Source: [Unsupervised Contextual Anomaly Detection using Joint Deep Variational Generative Models](https://arxiv.org/pdf/1904.00548.pdf)",miscellaneous 9fa9e9c8-1520-4273-a13a-e41a3e33e061,multi-target-regression,Multi-target regression,,miscellaneous 8ab73ab0-01be-4c51-b020-a85dd75de443,known-unknowns,Known Unknowns,"Language models have a tendency to generate text containing false statements that are often referred to as ""Hallucinations."" The primary purpose of this task is to test for this failure case by probing whether a model can correctly identify that the answer to a question is unknown. A common failure mode would be to prefer a prediction of false on unknown truth over a prediction that the answer is unknown. 
Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/known_unknowns)",miscellaneous 225902f5-f657-407d-a8e9-0f10fa352643,misinformation,Misinformation,,miscellaneous 6b7bcd79-2394-46df-aab5-41dc3d25a91b,gravitational-wave-detection,Gravitational Wave Detection,,miscellaneous bdb93257-1009-45d6-9a9c-0630a9af69a8,sequential-recommendation,Sequential Recommendation,,miscellaneous 02da4086-7391-4be0-bdfd-c5605d2b1976,sequential-quantile-estimation,Sequential Quantile Estimation,,miscellaneous d8151d26-e98a-4d0e-b965-e5459291323b,table-extraction,Table Extraction,Table extraction involves detecting and recognizing a table's logical structure and content from its unstructured presentation within a document,miscellaneous 5fd6d723-a20a-4816-813b-de437f4a2aa6,counterfactual-inference,Counterfactual Inference,,miscellaneous 54b82887-1578-4592-a509-6d9f191820b8,philosophy,Philosophy,,miscellaneous 67643da2-90a8-4fa4-9395-2aa088476493,automatic-cell-counting,Automatic Cell Counting,,miscellaneous 9d3584a7-e9c1-45f1-a5bc-fbeb259161d8,dqn-replay-dataset,DQN Replay Dataset,,miscellaneous 4d517b3e-8912-4695-97b5-574ada05b6c3,miscellaneous,Miscellaneous,,miscellaneous 3d7a0913-4302-4fea-8216-f87bf8cfa82b,logic-grid-puzzle,Logic Grid Puzzle,,miscellaneous 559f62b9-ee1a-4ac5-8ab7-ba4a54813da8,management,Management,,miscellaneous c335e6df-0d7f-4a47-9e21-3d151f07e907,automated-theorem-proving,Automated Theorem Proving,"The goal of **Automated Theorem Proving** is to automatically generate a proof, given a conjecture (the target theorem) and a knowledge base of known facts, all expressed in a formal language. Automated Theorem Proving is useful in a wide range of applications, including the verification and synthesis of software and hardware systems. 
Source: [Learning to Prove Theorems by Learning to Generate Theorems ](https://arxiv.org/abs/2002.07019)",miscellaneous cfc9e970-0898-47f5-8497-4020f98ae171,remote-sensing,Remote Sensing,,miscellaneous 6cf1a496-68a3-4d18-99bb-e689f636bfcd,semeval-2022-task-4-1-binary-pcl-detection,SemEval-2022 Task 4-1 (Binary PCL Detection),,miscellaneous 9f5e36d6-48ec-474b-9da4-5152c6d6aff0,crop-yield-prediction,Crop Yield Prediction,,miscellaneous 6445e4ea-1fcb-4cdb-8958-e7d99833c61d,oceanic-eddy-classification,Oceanic Eddy Classification,,miscellaneous ffe3d1e8-ab7f-4676-b4c8-c6c208d913cd,vulnerability-detection,Vulnerability Detection,,miscellaneous 10ae54a3-2b5c-4e69-82c2-f3599515eeec,physics-mc,Physics MC,,miscellaneous e7fbe620-a08f-4321-8753-8cdec7c64e08,benchmarking,Benchmarking,,miscellaneous 1ba37dda-953d-4004-b964-1568bae9e5e3,hindu-knowledge,Hindu Knowledge,,miscellaneous 795fbace-bf0e-450d-871e-0628376065f5,computer-security,Computer Security,,miscellaneous 820360ae-30ba-4510-92d4-9a8e6800c68f,remote-sensing-image-classification,Remote Sensing Image Classification,,miscellaneous 211ebd6d-75a0-4631-8572-58ff03c28081,self-organized-clustering,Self-Organized Clustering,Clustering with Self-Organized Maps,miscellaneous 0b6ca784-b5d0-4ae7-8fab-85e3e800b4f1,conceptual-physics,Conceptual Physics,,miscellaneous 376fa707-eef1-4452-b1f4-2424317519ce,product-categorization,Product Categorization,,miscellaneous 8b1e0116-6abd-4574-ba6c-bdc15a71a68e,ancient-tex-restoration,Ancient Text Restoration,"Image credit: [Restoring and attributing ancient texts using deep neural networks ](https://paperswithcode.com/paper/restoring-and-attributing-ancient-texts-using)",miscellaneous 6e582f74-12d9-4424-9170-ae5b1ea5046e,home-activity-monitoring,Home Activity Monitoring,,miscellaneous b00a246b-6252-4940-8c95-25c3b430e406,advertising,Advertising,,miscellaneous 8337bc45-4ce7-42e4-90d5-6548fb97c565,open-set-learning,Open Set Learning,"Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. Open set learning (OSL) is a more challenging and realistic setting, where there exist test samples from the classes that are unseen during training. Open set recognition (OSR) is the sub-task of detecting test samples which do not come from the training.",miscellaneous 26906916-5edd-4e67-9d10-e2f50e45f3da,public-relations,Public Relations,,miscellaneous 3e12affc-91c5-4725-93a5-3c4a9ccfcf07,causal-inference,Causal Inference,"Causal inference is the task of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. ( Image credit: [Recovery of non-linear cause-effect relationships from linearly mixed neuroimaging data](https://arxiv.org/pdf/1605.00391v2.pdf) )",miscellaneous 18be49a4-a8e7-4693-8427-14cd9ee447de,data-summarization,Data Summarization,"**Data Summarization** is a central problem in the area of machine learning, where we want to compute a small summary of the data. Source: [How to Solve Fair k-Center in Massive Data Models ](https://arxiv.org/abs/2002.07682)",miscellaneous e1408da5-40e8-4f14-ab2b-51416c5965e7,human-aging,Human Aging,,miscellaneous a1a69966-4e4a-4505-9099-c96ca32e4164,jsoniq-query-execution,JSONiq Query Execution,"Execute JSONiq query, typically on semi-structured JSON data",miscellaneous 377ea9d5-8904-41b8-98d7-d61698c00c71,detect-ground-reflections,Detect Ground Reflections,This task helps in detecting the significant ground reflections at mm-wave bands. 
The harvested ground reflections can help in overcoming transient blockages at mm-wave bands,miscellaneous 3d26efdb-2c2b-4918-9a87-f007d02218c3,high-school-physics,High School Physics,,miscellaneous 9e456595-dbc7-4a10-9e4a-92160770d955,gpr,GPR,Gaussian Process Regression,miscellaneous 6d80f294-16b7-40bf-9c70-f212d5fbb134,seismic-inversion,Seismic Inversion,,miscellaneous de3f40e6-afac-42e1-a767-023d1c08d471,high-school-european-history,High School European History,,miscellaneous 32f43068-56d9-464f-8cbd-04059d8fcd50,fault-detection,Fault Detection,,miscellaneous c1f7ae20-5c00-4dcb-83ec-151ac51ca773,sociology,Sociology,,miscellaneous d2676afd-3b06-4b83-9327-1ca71fc678f5,physical-simulations,Physical Simulations,,miscellaneous 1ff614ec-a929-40b5-a804-e9206cb95348,the-semantic-segmentation-of-remote-sensing,The Semantic Segmentation Of Remote Sensing Imagery,,miscellaneous 3e5e78f7-9427-447d-9441-e71faacbc980,mobile-security,Mobile Security,,miscellaneous ac015cd8-4ee5-432d-bfc3-342ecd5b5f9e,college-physics,College Physics,,miscellaneous 4a68b3f2-f1ec-4218-8196-8e47dbf51b24,cyber-attack-detection,Cyber Attack Detection,,miscellaneous 4a727aa0-0dbf-41a3-a117-b280af19fd72,outdoor-positioning,Outdoor Positioning,Outdoor Positioning (e.g. GPS),miscellaneous d2e67f29-6ce8-4a55-b71b-86f9d992b942,professional-law,Professional Law,,miscellaneous 3f8c20cf-fc0d-4e09-86c6-2b93926e3244,sports-understanding,Sports Understanding,,miscellaneous 3df8e854-ea46-494e-afa7-858dd3e360de,global-facts,Global Facts,,miscellaneous b4ed6bee-5fe9-40e8-aa3d-8e8a8f478433,3d-bin-packing,3D Bin Packing,"As a classic NP-hard problem, the bin packing problem (1D-BPP) seeks for an assignment of a collection of items with various weights to bins. The optimal assignment houses all the items with the fewest bins such that the total weight of items in a bin is below the bin’s capacity. In its 3D version (3D-BPP), an item has a 3D “weight” corresponding to its length, width and height.",miscellaneous a9417327-bd6e-4511-96d3-3b95df541132,high-school-us-history,High School US History,,miscellaneous 0660e050-075e-4052-b27c-dc35b16620db,music-emotion-recognition,Music Emotion Recognition,,music 14e7de3b-9330-44ed-8cbc-8af3c3550633,music-auto-tagging,Music Auto-Tagging,,music 5bf305a1-2f91-40d8-9258-bf3a822bc49d,semeval-2022-task-4-1-binary-pcl-detection,SemEval-2022 Task 4-1 (Binary PCL Detection),,music 195c3e78-7f8c-48e5-87f0-715330093356,singer-identification,Singer Identification,,music 1fa77282-638b-4ecd-b4c7-62e2cd436469,music-texture-transfer,Music Texture Transfer,"Texture is the collective temporal homogeneity of acoustic events.",music b2220651-3865-4cf6-8969-ab0df591f42c,music-genre-recognition,Music Genre Recognition,"Recognizing the genre (e.g. rock, pop, jazz, etc.) of a piece of music.",music 3698e066-2b60-4726-8aae-67cc55adfe07,detection-of-instrumentals-musical-tracks,Detection Of Instrumentals Musical Tracks,,music 81dad7e7-642c-4218-9ecf-8c5c2e649caf,music-source-separation,Music Source Separation,"Music source separation is the task of decomposing music into its constitutive components, e. g., yielding separated stems for the vocals, bass, and drums. 
( Image credit: [SigSep](https://github.com/sigsep ) )",music e5f4d606-77dc-41aa-b222-4181f2338258,vocal-technique-classification,Vocal technique classification,,music 2687b199-85ac-4bda-b12a-62e3cc96e49f,music-modeling,Music Modeling,"( Image credit: [R-Transformer](https://arxiv.org/pdf/1907.05572v1.pdf) )",music aa4577d1-77c3-417f-81bf-e387b6521cb6,audio-super-resolution,Audio Super-Resolution,AUDIO SUPER-RESOLUTION or speech bandwidth extension (Upsampling Ratio = 2),music 1529e158-45d8-4d70-b3a8-29c62242bc07,cover-song-identification,Cover song identification,"**Cover Song Identification** is the task of identifying an alternative version of a previous musical piece, even though it may differ substantially in timbre, tempo, structure, and even fundamental aspects relating to the harmony and melody of the song. The term “cover” is so wide that it ranges from acoustic renditions of a previous song, to Jimi Hendrix’ famous (and radical) reinterpretation of Bob Dylan’s “All Along the Watchtower”, to Rage Against the Machine essentially rewriting Bob Dylan’s “Maggie’s Farm”. Beyond its value for computational musicology and for enhancing music recommendation, Cover Song Identification is of interest because of its potential for benchmarking other music similarity and retrieval algorithms. Chord analysis, melody extraction and music similarity are all strongly connected to Cover Song Identification - another field of music analysis where AI has been applied. Source: [Artificial Musical Intelligence: A Survey ](https://arxiv.org/abs/2006.10553)",music 91729fd6-03d7-4d77-8297-73c93722ff02,music-generation,Music Generation,"**Music Generation** is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.",music bf625c86-c46a-4e20-bb02-8d87a72ad423,recognizing-seven-different-dastgahs-of,Recognizing Seven Different Dastgahs Of Iranian Classical Music,,music fbbf6718-7f7d-4181-bbd7-1e708e11727f,music-classification,Music Classification,,music d238768e-2205-40fd-a0a7-b088aceee12e,piano-music-modeling,Piano Music Modeling,,music c43fc215-5f11-4e51-aa31-76c39f573289,instrument-playing-technique-detection,Instrument Playing Technique Detection,,music d86660c1-694d-415c-8a8d-a18823e04546,text-to-music-generation,Text-to-Music Generation,,music 92c249e2-e3a5-46f3-ac75-80e02bd002fd,drum-transcription,Drum Transcription,,music f0b1eeac-52c1-451f-848a-02d32d8cf863,music-transcription,Music Transcription,"Music transcription is the task of converting an acoustic musical signal into some form of music notation. ( Image credit: [ISMIR 2015 Tutorial - Automatic Music Transcription](http://c4dm.eecs.qmul.ac.uk/ismir15-amt-tutorial/AMT_tutorial_ISMIR_2015.pdf) )",music a48ca2a3-a853-45e3-9977-65f57422de36,melody-extraction,Melody Extraction,,music 2eb20007-c054-426a-a901-c90518f40f49,music-information-retrieval,Music Information Retrieval,,music e461e4f3-62bb-46c4-b264-745e8ba28c7d,fact-verification,Fact Verification,"Fact verification, also called ""fact checking"", is a process of verifying facts in natural text against a database of facts.",natural-language-processing fb613e27-981a-4903-93f7-9ecf9d062bb0,conversational-question-answering,Conversational Question Answering,,natural-language-processing 60404269-1faa-4b2e-9c0d-f22bac980812,predicate-detection,Predicate Detection,Detecting predicates in sentences. 
Semantic frames are defined with respect to predicates. This task is a prerequisite to semantic role labeling.,natural-language-processing bf572922-cf9f-4946-b234-280606ede8ee,hope-speech-detection-for-tamil,Hope Speech Detection for Tamil,Detecting Hope Speech in the Tamil language,natural-language-processing 94f41205-e786-47b5-b6db-d34bc299eb75,fake-news-detection,Fake News Detection,"**Fake News Detection** is a natural language processing task that involves identifying and classifying news articles or other types of text as real or fake. The goal of fake news detection is to develop algorithms that can automatically identify and flag fake news articles, which can be used to combat misinformation and promote the dissemination of accurate information.",natural-language-processing 3df3fffe-58c7-46a4-a273-d3f523d73eab,causal-emotion-entailment,Causal Emotion Entailment,"The Causal Emotion Entailment is a simpler version of the span extraction task. In this task, given a target utterance (U) with emotion E, the goal is to predict which particular utterances in the conversation history H(U) are responsible for the emotion E in the target utterance.",natural-language-processing d370842f-05ca-4355-867e-54197d9c9d82,discourse-segmentation,Discourse Segmentation,,natural-language-processing 649bfd0e-ed0f-4f13-a57c-36e9215ba0da,memorization,Memorization,,natural-language-processing 88316ef1-bb39-4175-8b83-8ca1b08919c0,zero-shot-cross-lingual-transfer,Zero-Shot Cross-Lingual Transfer,,natural-language-processing 43ed1a4e-3661-43ae-b94a-e656dd20a227,syntax-representation,Syntax Representation,,natural-language-processing 2d9695cb-5785-4fc1-8d9d-4dd2a1b52190,aspect-term-extraction-and-sentiment,Aspect Term Extraction and Sentiment Classification,Extracting the aspect terms as well as the corresponding sentiment polarities simultaneously.,natural-language-processing 00f97edd-1ac8-44ae-860e-04dcb3a46d5f,text-attribute-transfer,Text Attribute Transfer,"The goal of the **Text Attribute Transfer** task is to change an input text such that the value of a particular linguistic attribute of interest (e.g. language = English, sentiment = Positive) is transferred to a different desired value (e.g. language = French, sentiment = Negative). This task needs approaches that can disentangle the content from other linguistic attributes of the text. Source: [Improved Neural Text Attribute Transfer with Non-parallel Data ](https://arxiv.org/abs/1711.09395)",natural-language-processing 777f002e-314a-428c-88d3-dc7cfa7c0703,decipherment,Decipherment,,natural-language-processing 94c715b6-38a0-4575-8bb6-86939b96ba60,hierarchical-text-classification-of-blurbs,Hierarchical Text Classification of Blurbs (GermEval 2019),"Shared Task on Hierarchical Classification of Blurbs (GermEval 2019 / KONVENS) https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/germeval-2019-hmc.html",natural-language-processing 3addf8c9-9785-413d-849b-bf941ea29f67,lexical-simplification,Lexical Simplification,"The goal of **Lexical Simplification** is to replace complex words (typically words that are used less often in language and are therefore less familiar to readers) with their simpler synonyms, without infringing the grammaticality and changing the meaning of the text. 
Source: [Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization ](https://arxiv.org/abs/1809.04163)",natural-language-processing bf7ac924-402e-465a-b7ce-bc332b64038e,dark-humor-detection,Dark Humor Detection,,natural-language-processing 8201ef2f-8ca6-4147-becb-ca87b0c3dcda,text-style-transfoer,Text Style Transfer,"Text Style Transfer is the task of controlling certain attributes of generated text. The state-of-the-art methods can be categorized into two main types which are used on parallel and non-parallel data. Methods on parallel data are typically supervised methods that use a neural sequence-to-sequence model with the encoder-decoder architecture. Methods on non-parallel data are usually unsupervised approaches using Disentanglement, Prototype Editing and Pseudo-Parallel Corpus Construction. The popular benchmark for this task is the Yelp Review Dataset. Models are typically evaluated with the metrics of Sentiment Accuracy, BLEU, and PPL.",natural-language-processing 531ad04c-fded-4fbd-9652-c727da589ffb,text-variation,Text-Variation,Generate variations of the input text,natural-language-processing 4637f1ca-116b-4c01-8350-8e34cb95c9bb,dialogue-evaluation,Dialogue Evaluation,,natural-language-processing 58d088a3-d142-45a2-86ec-6c847ea0f7bb,implicatures,Implicatures,,natural-language-processing f93e9836-9ed1-40d6-83e5-acc1d9b2f15b,true-or-false-question,True or False Question Answering,"The objective of the true or false question-answer task is to answer research questions with affirmative, negative, or uncertain responses (e.g., Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?).",natural-language-processing 216b446c-b8a0-478d-ad1a-c36576a49605,fantasy-reasoning,Fantasy Reasoning,,natural-language-processing da01f53c-1f44-4ab1-b5ac-f9445863f95c,discourse-parsing,Discourse Parsing,,natural-language-processing 73f0f5ca-1009-4635-b407-21f78b3733b5,sentence-embedding,Sentence Embedding,,natural-language-processing 520b95d1-0066-447d-9083-6b1d29971523,text-classification,Text Classification,"**Text Classification** is the task of assigning a sentence or document an appropriate category. The categories depend on the chosen dataset and can range from topics. Text Classification problems include emotion classification, news classification, citation intent classification, among others. Benchmark datasets for evaluating text classification capabilities include GLUE, AGNews, among others. In recent years, deep learning techniques like XLNet and RoBERTa have attained some of the biggest performance jumps for text classification problems. ( Image credit: [Text Classification Algorithms: A Survey](https://arxiv.org/pdf/1904.08067v4.pdf) )",natural-language-processing ad031291-51ea-4ffb-9507-68c950cb427e,taxonomy-learning,Taxonomy Learning,"Taxonomy learning is the task of hierarchically classifying concepts in an automatic manner from text corpora. The process of building taxonomies is usually divided into two main steps: (1) extracting hypernyms for concepts, which may constitute a field of research in itself (see Hypernym Discovery below) and (2) refining the structure into a taxonomy. 
Description from [NLP Progress](http://nlpprogress.com/english/taxonomy_learning.html)",natural-language-processing b94a5ef7-107f-4a8c-b56e-e8533d287350,automated-essay-scoring,Automated Essay Scoring,"Essay scoring: **Automated Essay Scoring** is the task of assigning a score to an essay, usually in the context of assessing the language ability of a language learner. The quality of an essay is affected by the following four primary dimensions: topic relevance, organization and coherence, word usage and sentence complexity, and grammar and mechanics. Source: [A Joint Model for Multimodal Document Quality Assessment ](https://arxiv.org/abs/1901.01010)",natural-language-processing 17958d0b-4b21-4b0e-b47c-499d58347b7b,irish-text-diacritization,Irish Text Diacritization,Addition of diacritics for undiacritized Irish Wikipedia texts.,natural-language-processing d920a537-d943-4c5a-99d4-d7b426d7640a,hope-speech-detection-for-malayalam,Hope Speech Detection for Malayalam,Detecting Hope Speech in the Malayalam language,natural-language-processing 48a2f08d-1b84-445e-9afa-1c81247d8028,term-extraction,Term Extraction,"Term Extraction, or Automated Term Extraction (ATE), is about extracting domain-specific terms from natural language text. For example, the sentence “We meta-analyzed mortality using random-effect models” contains the domain-specific single-word terms ""meta-analyzed"", ""mortality"" and the multi-word term ""random-effect models"".",natural-language-processing 9e5206a6-0522-46f3-ab7f-53eaa084e86a,hate-speech-normalization,Hate Speech Normalization,,natural-language-processing 57cdc661-fe7a-4250-905f-3386a303bc89,session-search,Session Search,,natural-language-processing 0778dc8a-b3e3-4d04-86e6-e7c1b7bff043,argument-pair-extraction-ape,Argument Pair Extraction (APE),Argument pair extraction (APE) aims to extract interactive argument pairs from two passages of a discussion.,natural-language-processing 72fae84d-6570-407a-9431-9668a7c3d53b,sentence-embeddings,Sentence Embeddings,,natural-language-processing 41b2093a-46cd-413a-9c40-b6f7cad1204d,figure-of-speech-detection,Figure Of Speech Detection,,natural-language-processing b7776f79-65f5-4a83-84dc-52af05569146,text-generation,Text Generation,"**Text Generation** is the task of generating text with the goal of appearing indistinguishable from human-written text. This task is more formally known as ""natural language generation"" in the literature. Text generation can be addressed with Markov processes or deep generative models like LSTMs. Recently, some of the most advanced methods for text generation include [BART](/method/bart), [GPT](/method/gpt) and other [GAN-based approaches](/method/gan). Text generation systems are evaluated either through human ratings or automatic evaluation metrics like METEOR, ROUGE, and BLEU. 
Further readings: - [The survey: Text generation models in deep learning](https://www.sciencedirect.com/science/article/pii/S1319157820303360) - [Modern Methods for Text Generation](https://arxiv.org/abs/2009.04968) ( Image credit: [Adversarial Ranking for Language Generation](https://arxiv.org/abs/1705.11001) )",natural-language-processing 9991e7bb-32f3-4c25-985a-16bafdd9c0d0,dialect-identification,Dialect Identification,Dialectal Arabic Identification,natural-language-processing cc0f776e-68c9-4cfc-b0b9-18dd55a15f9a,cross-lingual-natural-language-inference,Cross-Lingual Natural Language Inference,"Using data and models available for one language for which ample such resources are available (e.g., English) to solve a natural language inference task in another, commonly more low-resource, language.",natural-language-processing 9cbd62d3-b3cd-4305-92ae-3615833173c4,lay-summarization,Lay Summarization,"Summarizing a technical or scientific document in simple, non-technical language that is comprehensible to a lay person (non-expert).",natural-language-processing 1ed54459-441f-4d1a-8eec-d682cd183eb9,author-attribution,Author Attribution,Authorship attribution is the task of determining the author of a text.,natural-language-processing a1f7b2ef-b5b3-4d7b-a0fc-4238ad5331a1,mathematical-question-answering,Mathematical Question Answering,Building systems that automatically answer mathematical questions.,natural-language-processing d8731753-39fb-414a-94a2-6ef8a1fac134,unsupervised-extractive-summarization,Unsupervised Extractive Summarization,,natural-language-processing d9c5a00d-fb88-451e-a948-a607a7324d06,row-annotation,Row Annotation,"**Row Annotation** is the task of linking a row to a real-world entity and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). It differs from [Cell Entity Annotation](https://paperswithcode.com/task/cell-entity-annotation) because CEA considers the linking of *cells* to entities while Row Annotation assumes that there is only an entity described in a row which usually is located in the main/entity column. 
A row is annotated using entities from different knowledge bases such as DBpedia or WikiData.,natural-language-processing 8fed05d7-c835-40ad-a08e-cbb2bd3176e6,variable-disambiguation,Variable Disambiguation,Identifying which variable is mentioned in a text.,natural-language-processing d70d3013-6278-47cb-96ab-0999d7aa8865,page-stream-segmentation,Page Stream Segmentation,Page stream segmentation (PSS) is the task of automatically separating a stream of scanned images into coherent multi-page documents,natural-language-processing a3627d0c-eaac-445b-aebf-795bf3f4613a,probing-language-models,Probing Language Models,,natural-language-processing 7f41cf26-2fa4-4704-9cc4-b78d7afd9455,toxic-spans-detection,Toxic Spans Detection,Given a sentence identify the toxic spans present in it.,natural-language-processing e2bff1af-d91b-4021-af6e-45428977374e,dialogue-safety-prediction,Dialogue Safety Prediction,Determine the safety of a given dialogue context.,natural-language-processing 2c7b5891-c265-465b-a70c-1235949d55c1,morpheme-segmentaiton,Morpheme Segmentation,Successful systems segment a given word or sentence into a sequence of morphemes.,natural-language-processing 178effdc-6c8f-4b0e-8a4e-9d455ac95395,text-based-stock-prediction,Text-Based Stock Prediction,"Make stock predictions based on text (e.g., news articles, tweets, etc.).",natural-language-processing 876b9090-74d8-46b4-b854-1a066441b6e5,text-clustering,Text Clustering,Grouping a set of texts in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). (Source: Adapted from Wikipedia),natural-language-processing f6fc8675-875f-409b-864c-5411c7ef0903,constituency-grammar-induction,Constituency Grammar Induction,Inducing a constituency-based phrase structure grammar.,natural-language-processing 871a031e-1e9e-41c4-b391-1a8aad5f0aef,user-simulation,User Simulation,Simulation of a user interacting with a dialog system,natural-language-processing 849276e5-50ed-4eaf-b0eb-bbb93e785665,knowledge-base-question-answering,Knowledge Base Question Answering,"Knowledge Base Q&A is the task of answering questions from a knowledge base. ( Image credit: [Modeling Semantics with Gated Graph Neural Networks for Knowledge Base Question Answering](https://www.aclweb.org/anthology/C18-1280.pdf) )",natural-language-processing 13a7b51b-28d0-4a4b-905a-94c9d18875e5,grammatical-error-detection,Grammatical Error Detection,"Grammatical Error Detection (GED) is the task of detecting different kinds of errors in text such as spelling, punctuation, grammatical, and word choice errors. 
Grammatical error detection (GED) is one of the key components of the grammatical error correction (GEC) community.",natural-language-processing 8581b37a-85e7-4b2c-98e9-68e3c1f74a35,zero-shot-relation-triplet-extraction,Zero-shot Relation Triplet Extraction,"Given an input sentence, the task is to extract triplets consisting of the head entity, relation label, and tail entity where the relation label is not seen at the training stage.",natural-language-processing 7c4151e1-a49b-42e8-82b0-3336142eddc2,semeval-2022-task-4-2-multi-label-pcl,SemEval-2022 Task 4-2 (Multi-label PCL Detection),,natural-language-processing da0b6987-e9a6-4f55-8ae1-c671a54aa653,race-h,RACE-h,,natural-language-processing 92b300ba-1e77-4305-b3f5-4c3909a7fc33,kb-to-language-generation,KB-to-Language Generation,"Given information from a knowledge base, generate a description of this information in natural language.",natural-language-processing a578a9fc-55ac-45f2-a748-d272ba5439d8,croatian-text-diacritization,Croatian Text Diacritization,Addition of diacritics for undiacritized Croatian Wikipedia texts.,natural-language-processing 8825b4ee-cacd-4e6d-9c07-9c52268e9236,grammatical-error-correction,Grammatical Error Correction,"Grammatical Error Correction (GEC) is the task of correcting different kinds of errors in text such as spelling, punctuation, grammatical, and word choice errors. GEC is typically formulated as a sentence correction task. A GEC system takes a potentially erroneous sentence as input and is expected to transform it to its corrected version. See the example given below: | Input (Erroneous) | Output (Corrected) | | ------------------------- | ---------------------- | |She see Tom is catched by policeman in park at last night. | She saw Tom caught by a policeman in the park last night.|",natural-language-processing 83036dba-337b-4817-8d88-81de248ffdf8,clinical-information-retreival,Clinical Information Retrieval,,natural-language-processing fc7613c1-2d3c-4c12-997c-c91f6c66fec9,twitter-sentiment-analysis,Twitter Sentiment Analysis,Twitter sentiment analysis is the task of performing sentiment analysis on tweets from Twitter.,natural-language-processing e5c541ad-dbcf-4963-9e3b-c2c9601f5941,text-to-video-generation,Text-to-Video Generation,This task refers to video generation based on a given sentence or sequence of words.,natural-language-processing c8ee6cf8-5083-4d85-a740-25113ca27307,role-filler-entity-extraction,Role-filler Entity Extraction,Role-filler entity extraction task on the MUC-4 dataset.,natural-language-processing 804d6b1c-1d6c-4a53-b5ec-bec981869433,bilingual-lexicon-induction,Bilingual Lexicon Induction,Translate words from one language to another.,natural-language-processing a886f881-6993-47b4-8a20-8407cab9a79e,dialog-relation-extraction,Dialog Relation Extraction,Dialog Relation Extraction is the task of predicting the relation type between entities mentioned in dialogue. It uses multiple tokens to capture possible relations between pairs of entities in the dialogue. The popular benchmark for this task is the DialogRE dataset. 
The models are typically evaluated with the F1 score for both the standard and conversational settings.,natural-language-processing 167d627c-aacb-489e-be0f-98f072955d8e,cross-lingual,Cross-Lingual,"Cross-lingual natural language processing is the task of using data and models available for one language for which ample such resources are available (e.g., English) to solve tasks in another, commonly more low-resource, language.",natural-language-processing 86c1af5d-62b7-41f6-9331-32b807a4ee9d,natural-language-landmark-navigation,Natural Language Landmark Navigation Instructions Generation,Generate natural language navigation instructions that revolve around visual landmarks instead of turn-by-turn directives.,natural-language-processing b157fbe6-849a-4934-855f-3fc1957b843b,deep-attention,Deep Attention,,natural-language-processing bec740cc-d989-4e22-b3cd-d545cdb327f0,phrase-tagging,Phrase Tagging,A fine-grained task that aims to find all occurrences of phrases in sentences.,natural-language-processing 18a8893b-e28a-4750-8a7a-4f885e1a9e9a,cross-language-text-summarization,Cross-Language Text Summarization,"Using data and models available for one language for which ample such resources are available (e.g., English) to solve summarization tasks in another, commonly more low-resource, language.",natural-language-processing bcf025c9-209c-4f05-a71b-6340c7e748d8,cognate-prediction,Cognate Prediction,,natural-language-processing 47a5440d-e1b3-4dad-b04d-d047cf7d5b4f,visual-storytelling,Visual Storytelling,"( Image credit: [No Metrics Are Perfect](https://github.com/eric-xw/AREL) )",natural-language-processing b1ab1d4a-6ede-496f-aa40-fc82725e7eb7,multimodal-abstractive-text-summarization,Multimodal Abstractive Text Summarization,Abstractive text summarization by utilizing information from multiple modalities.,natural-language-processing a814b5b3-fa68-4252-88ba-d583923fa9c4,attribute-value-extraction,Attribute Value Extraction,,natural-language-processing 834ddd52-a32e-465e-bcbb-3d87bac5229a,french-text-diacritization,French Text Diacritization,Addition of diacritics for undiacritized French Wikipedia texts.,natural-language-processing c1a11952-bff6-47bd-b70f-e75db19b92ab,question-answering,Question Answering,"**Question Answering** is the task of answering questions (typically reading comprehension questions), but abstaining when presented with a question that cannot be answered based on the provided context. Question answering can be segmented into domain-specific tasks like community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include [SQuAD](/dataset/squad), [HotPotQA](/dataset/hotpotqa), [bAbI](/dataset/babi-1), [TriviaQA](/dataset/triviaqa), [WikiQA](/dataset/wikiqa), and many others. Models for question answering are typically evaluated on metrics like EM and F1. Some recent top performing models are T5 and XLNet. 
( Image credit: [SQuAD](https://rajpurkar.github.io/mlx/qa-and-squad/) )",natural-language-processing 68e93a26-a732-488a-a9c1-dd0747f6da96,tweet-reply-sentiment-analysis,Tweet-Reply Sentiment Analysis,"To predict the predominant sentiment among (potential) first-order replies to a given tweet, in a Message-level Polarity Classification paradigm.",natural-language-processing e16fb4cc-5988-4f89-a650-19977a4d6851,negation-and-speculation-scope-resolution,Negation and Speculation Scope resolution,,natural-language-processing 2a6f1e34-8099-4c4c-9552-dad8d4cea9da,record-linking,Record linking,"The task of finding records in a data set that refer to the same entity across different data sources. Record linking is also called *entity resolution* or *entity matching*. Further material about this task is collected at [entity resolution](https://paperswithcode.com/task/entity-resolution).",natural-language-processing e717f50b-d224-4591-97ac-501e52f3d868,multimodal-lexical-translation,Multimodal Lexical Translation,"Translate a given word in a source language to a word in the target language, given the source sentence and one or more images illustrating the word.",natural-language-processing e66cbe93-6c4d-43b1-9890-f90bf1412e9a,ad-hoc-information-retrieval,Ad-Hoc Information Retrieval,Ad-hoc information retrieval refers to the task of returning information resources related to a user query formulated in natural language.,natural-language-processing 256a6da4-9c4b-473b-a997-09b48c852ba1,natural-language-inference,Natural Language Inference,"**Natural language inference (NLI)** is the task of determining whether a ""hypothesis"" is true (entailment), false (contradiction), or undetermined (neutral) given a ""premise"". Example: | Premise | Label | Hypothesis | | --- | ---| --- | | A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. | | An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. | | A soccer game with multiple males playing. | entailment | Some men are playing a sport. | Approaches used for NLI include earlier symbolic and statistical approaches to more recent deep learning approaches. Benchmark datasets used for NLI include [SNLI](/dataset/snli), [MultiNLI](/dataset/multinli), [SciTail](/dataset/scitail), among others. You can get hands-on practice on the SNLI task by following this [d2l.ai chapter](https://d2l.ai/chapter_natural-language-processing-applications/natural-language-inference-and-dataset.html). Further readings: - [Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches](https://arxiv.org/abs/1904.01172)",natural-language-processing 36d36bea-f20b-43fe-bc20-2a8d9be3c8d4,hate-speech-detection,Hate Speech Detection,"Hate speech detection is the task of detecting if communication such as text, audio, and so on contains hatred and or encourages violence towards a person or a group of people. This is usually based on prejudice against 'protected characteristics' such as their ethnicity, gender, sexual orientation, religion, age et al. Some example benchmarks are ETHOS and HateXplain. Models can be evaluated with metrics like the F-score or F-measure.",natural-language-processing 6c287c59-0b0d-4596-8dcf-0bb84760eb00,conditional-text-generation,Conditional Text Generation,The task of generating text according to some pre-specified conditioning (e.g. 
topic or sentiment or constraint),natural-language-processing 0358e630-89bb-4542-93aa-efcb8cbbdbb7,query-wellformedness,Query Wellformedness,"Assessing whether a query is grammatically correct, contains no spelling mistakes, and asks an explicit question. Image Source: [Identifying Well-formed Natural Language Questions](https://arxiv.org/pdf/1808.09419.pdf)",natural-language-processing 415d4ce8-ba37-4f02-bf0f-79840a526031,sentiment-analysis,Sentiment Analysis,"**Sentiment Analysis** is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized into either ""positive"", ""negative"", or ""neutral"". Given the text and accompanying labels, a model can be trained to predict the correct sentiment. **Sentiment Analysis** techniques can be categorized into machine learning approaches, lexicon-based approaches, and even hybrid methods. Some subcategories of research in sentiment analysis include: multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, language specific sentiment analysis. More recently, deep learning techniques, such as RoBERTa and T5, are used to train high-performing sentiment classifiers that are evaluated using metrics like F1, recall, and precision. To evaluate sentiment analysis systems, benchmark datasets like SST, GLUE, and IMDB movie reviews are used. Further readings: - [Sentiment Analysis Based on Deep Learning: A Comparative Study](https://paperswithcode.com/paper/sentiment-analysis-based-on-deep-learning-a)",natural-language-processing 6a2b6eb5-0847-4a48-a4b7-d0f0bbb38870,hyper-relational-extraction,Hyper-Relational Extraction,"HyperRED is a dataset for the new task of hyper-relational extraction, which extracts relation triplets together with qualifier information such as time, quantity or location. For example, the relation triplet (Leonard Parker, Educated At, Harvard University) can be factually enriched by including the qualifier (End Time, 1967).",natural-language-processing d02da9dc-5697-4427-a4a2-41902a7ab862,extracting-covid-19-events-from-twitter,Extracting COVID-19 Events from Twitter,,natural-language-processing 4006086a-1b9a-478e-803e-1d0d4ba55a67,word-sense-disambiguation,Word Sense Disambiguation,"The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de-facto sense inventory for English in WSD is [WordNet](https://wordnet.princeton.edu). For example, given the word “mouse” and the following sentence: “A mouse consists of an object held in one's hand, with one or more buttons.” we would assign “mouse” with its electronic device sense ([the 4th sense in the WordNet sense inventory](http://wordnetweb.princeton.edu/perl/webwn?c=8&sub=Change&o2=&o0=1&o8=1&o1=1&o7=&o5=&o9=&o6=&o3=&o4=&i=-1&h=000000&s=mouse)).",natural-language-processing 4d1cd4b1-eacc-4008-a21b-72df7007e24f,text-augmentation,Text Augmentation,"You can read these blog posts to get an overview of the approaches. - [**A Visual Survey of Data Augmentation in NLP**](https://amitness.com/2020/05/data-augmentation-for-nlp/)",natural-language-processing 50c71b37-7d86-4617-8875-d79d9963e117,temporal-relation-extraction,Temporal Relation Extraction,"Temporal relation extraction systems aim to identify and classify the temporal relation between a pair of entities provided in a text. 
For instance, in the sentence ""Bob sent a message to Alice while she was leaving her birthday party."" one can infer that the actions ""sent"" and ""leaving"" entails a temporal relation that can be described as ""simultaneous"".",natural-language-processing 0d55e212-160c-4a09-99f9-4322dedb652c,toponym-resolution,Toponym Resolution,The goal is to find a mapping from a toponym (a location mention) in the text to a spatial footprint.,natural-language-processing 80c56e40-1371-4563-8171-ec9c443172fd,word-alignment,Word Alignment,"**Word Alignment** is the task of finding the correspondence between source and target words in a pair of sentences that are translations of each other. Source: [Neural Network-based Word Alignment through Score Aggregation ](https://arxiv.org/abs/1606.09560)",natural-language-processing f5053e20-8f01-4ee3-b918-cff91dec2456,dependency-grammar-induction,Dependency Grammar Induction,"Also known as ""unsupervised dependency parsing""",natural-language-processing a6e4d8bd-579d-4da6-bf1a-343db93a3dc8,relational-reasoning,Relational Reasoning,"The goal of **Relational Reasoning** is to figure out the relationships among different entities, such as image pixels, words or sentences, human skeletons or interactive moving agents. Source: [Social-WaGDAT: Interaction-aware Trajectory Prediction via Wasserstein Graph Double-Attention Network ](https://arxiv.org/abs/2002.06241)",natural-language-processing ecabb4fd-cb68-4ef3-9215-823de4d4758d,event-extraction,Event Extraction,"Determine the extent of the events in a text. Other names: Event Tagging; Event Identification",natural-language-processing db933bce-08b3-441b-b1e4-4c0da3cf58a3,document-ai,Document AI,,natural-language-processing 748fffd6-9418-46d0-8f2d-db6b828466d5,commonsense-causal-reasoning,Commonsense Causal Reasoning,"""Commonsense Causal Reasoning is the process of capturing and understanding the causal dependencies amongst events and actions."" Luo, Zhiyi, et al. ""Commonsense causal reasoning between short texts."" Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning. 2016.",natural-language-processing 1ba61b50-0b37-4ca2-b2c1-f1564cafa494,abusive-language,Abusive Language,,natural-language-processing 10c7e0a4-1d80-4ab9-a2a4-530e34d69520,cross-lingual-entity-linking,Cross-Lingual Entity Linking,"Cross-lingual entity linking is the task of using data and models available for one language for which ample such resources are available (e.g., English) to solve entity linking tasks (i.e., assigning a unique identity to entities in a text) in another, commonly low-resource, language. Image Source: [Towards Zero-resource Cross-lingual Entity Linking](https://www.aclweb.org/anthology/D19-6127.pdf)",natural-language-processing c200b851-44eb-4988-8df8-5cda59423bff,gender-bias-detection,Gender Bias Detection,,natural-language-processing e83a07c7-7091-47e1-91ad-aa6187ac6b19,vietnamese-aspect-based-sentiment-analysis,Vietnamese Aspect-Based Sentiment Analysis,"UIT-ViSFD: A Vietnamese Smartphone Feedback Dataset for Aspect-Based Sentiment Analysis In this paper, we present a process of building a social listening system based on aspect-based sentiment analysis in Vietnamese from creating a dataset to building a real application. 
Firstly, we create UIT-ViSFD, a Vietnamese Smartphone Feedback Dataset as a new benchmark corpus built based on strict annotation schemes for evaluating aspect-based sentiment analysis, consisting of 11,122 human-annotated comments for mobile e-commerce, which is freely available for research purposes. We also present a proposed approach based on the Bi-LSTM architecture with the fastText word embeddings for the Vietnamese aspect-based sentiment task. Our experiments show that our approach achieves the best performances with the F1-score of 84.48% for the aspect task and 63.06% for the sentiment task, which outperforms several conventional machine learning and deep learning systems. Last but not least, we build SA2SL, a social listening system based on the best performance model on our dataset, which will inspire more social listening systems in the future. Dataset download: https://www.facebook.com/ViDataset Paper: Phan, Luong Luc, Phuc Huynh Pham, Kim Thi-Thanh Nguyen, Tham Thi Nguyen, Sieu Khai Huynh, Luan Thanh Nguyen, Tin Van Huynh, and Kiet Van Nguyen. ""SA2SL: From Aspect-Based Sentiment Analysis to Social Listening System for Business Intelligence."" arXiv preprint arXiv:2105.15079 (2021).",natural-language-processing 35f8bf85-fc6a-4d48-b8ea-840d7c655477,dialogue-generation,Dialogue Generation,"Dialogue generation is the task, within natural language processing, of ""understanding"" natural language inputs in order to produce output. The systems are usually intended for conversing with humans, for instance back and forth dialogue with a conversation agent like a chatbot. Some example benchmarks for this task (see others such as Natural Language Understanding) include FusedChat and Ubuntu Dialogue Corpus (UDC). Models can be evaluated via metrics such as BLEU, ROUGE, and METEOR, albeit with challenges in terms of weak correlation with human judgement, which may be addressed by newer metrics like UnSupervised and Reference-free (USR) and Metric for automatic Unreferenced dialog evaluation (MaUde).",natural-language-processing f6408250-6290-4f8d-a2a3-46f1751315d4,constituency-parsing,Constituency Parsing,"Constituency parsing aims to extract a constituency-based parse tree from a sentence that represents its syntactic structure according to a [phrase structure grammar](https://en.wikipedia.org/wiki/Phrase_structure_grammar). Example: Sentence (S) | +-------------+------------+ | | Noun (N) Verb Phrase (VP) | | John +-------+--------+ | | Verb (V) Noun (N) | | sees Bill [Recent approaches](https://papers.nips.cc/paper/5635-grammar-as-a-foreign-language.pdf) convert the parse tree into a sequence following a depth-first traversal in order to be able to apply sequence-to-sequence models to it. The linearized version of the above parse tree looks as follows: (S (N) (VP V N)).",natural-language-processing f65b35d3-44fc-4ce6-b7fd-eda6d482a033,sign-language-production,Sign Language Production,"Sign Language Production (SLP) is the automatic translation from spoken language sentences into sign language sequences. Whilst Sign Language Translation translates from sign to text, SLP is the opposite task, from text to sign.",natural-language-processing 35dd3e0f-4a57-46e3-bbd3-18e70b10e657,english-proverbs,English Proverbs,,natural-language-processing 025a8475-10ed-4fb3-88b9-6e937e3224df,machine-translation,Machine Translation,"**Machine translation** is the task of translating a sentence in a source language to a different target language. 
Approaches for machine translation can range from rule-based to statistical to neural-based. More recently, encoder-decoder attention-based architectures like BERT have attained major improvements in machine translation. One of the most popular datasets used to benchmark machine translation systems is the WMT family of datasets. Some of the most commonly used evaluation metrics for machine translation systems include BLEU, METEOR, NIST, and others. ( Image credit: [Google seq2seq](https://github.com/google/seq2seq) )",natural-language-processing 009c0b63-2eed-4af9-aca7-24f156d9b58c,sentence-pair-classification,Sentence-Pair Classification,,natural-language-processing e21894e5-a4ef-4dea-b3c0-e8def475bc47,goal-oriented-dialog,Goal-Oriented Dialog,Achieving a pre-defined goal through a dialog.,natural-language-processing 6e8b1dd4-1699-4735-a084-1ed8037b7508,aspect-sentiment-opinion-triplet-extraction,Aspect-Sentiment-Opinion Triplet Extraction,"Aspect-Sentiment-Opinion Triplet Extraction (ASOTE) extracts aspect term, sentiment and opinion term triplets from sentences. In the triplet extracted by ASOTE the sentiment is the sentiment of the aspect term and opinion term pair.",natural-language-processing 2c796a9c-0638-454c-9258-405e0375b797,web-page-tagging,Web Page Tagging,Assigning appropriate tags to a web page.,natural-language-processing 308fbbbd-933c-465f-a43a-75ccf3790858,embeddings-evaluation,Embeddings Evaluation,,natural-language-processing e56f7a97-e67f-4d5a-8191-611795d9dbec,lemmatization,Lemmatization,"**Lemmatization** is a process of determining a base or dictionary form (lemma) for a given surface form. Especially for languages with rich morphology it is important to be able to normalize words into their base forms to better support for example search engines and linguistic studies. Main difficulties in Lemmatization arise from encountering previously unseen words during inference time as well as disambiguating ambiguous surface forms which can be inflected variants of several different base forms depending on the context. Source: [Universal Lemmatizer: A Sequence to Sequence Model for Lemmatizing Universal Dependencies Treebanks ](https://arxiv.org/abs/1902.00972)",natural-language-processing 95e6e152-eef2-44fc-96af-8352a7267e6f,topic-coverage,Topic coverage,"A prevalent use case of topic models is that of topic discovery. However, most of the topic model evaluation methods rely on abstract metrics such as perplexity or topic coherence. The topic coverage approach is to measure the models' performance by matching model-generated topics to a fixed set of reference topics - topics discovered by humans and represented in a machine-readable format. This way, the models are evaluated in the context of their use, by essentially simulating topic modeling in a fixed setting defined by a text collection and a set of reference topics. Reference topics represent a ground truth that can be used to evaluate both topic models and other measures of model performance. 
This coverage approach enables large-scale automatic evaluation of existing and future topic models.",natural-language-processing 220a4a0d-1ef5-4576-af17-655aaf4df4aa,prepositional-phrase-attachment,Prepositional Phrase Attachment,,natural-language-processing dcf601d1-27da-4d1f-ad1f-1d524f082044,morphological-analysis,Morphological Analysis,"**Morphological Analysis** is a central task in language processing that can take a word as input and detect the various morphological entities in the word and provide a morphological representation of it. Source: [Towards Finite-State Morphology of Kurdish ](https://arxiv.org/abs/2005.10652)",natural-language-processing 2c92900b-573b-453d-b8b3-ba8244db062c,phrase-ranking,Phrase Ranking,This task aims to evaluate the “global” rank list of phrases that a method finds from the input corpus.,natural-language-processing 218d8b1d-5593-41e5-8b7b-5f681d5d424b,zero-shot-out-of-domain-detection,Zero-Shot Out-of-Domain Detection,Tasks for simultaneously learning few-shot In-Domain text classification and zero-shot Out-of-Domain detection on sentiment classification and intent classification.,natural-language-processing b2b6006f-8d50-4b45-aab7-ee4d812cf9d5,romanian-text-diacritization,Romanian Text Diacritization,Addition of diacritics for undiacritized Romanian Wikipedia texts.,natural-language-processing 66f2b250-9b8c-42b0-b325-6bc1ae6b9f89,table-to-text-generation,Table-to-Text Generation,"**Table-to-Text Generation** is to generate a description from the structured table. Source: [Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation ](https://arxiv.org/abs/1908.03067)",natural-language-processing 9c8ba7b6-6619-4741-84ad-90941c2f0215,headline-generation,Headline Generation,,natural-language-processing ec319877-5fa7-4d0b-9213-d762ae831864,multiple-choice-qa,Multiple Choice Question Answering (MCQA),"A multiple-choice question (MCQ) is composed of two parts: a stem that identifies the question or problem, and a set of alternatives or possible answers that contain a key that is the best answer to the question, and a number of distractors that are plausible but incorrect answers to the question. In a k-way MCQA task, a model is provided with a question q, a set of candidate options O = {O1, . . . , Ok}, and a supporting context for each option C = {C1, . . . , Ck}. The model needs to predict the correct answer option that is best supported by the given contexts.",natural-language-processing ed111717-60bd-4302-9d16-08cf5d426c73,memex-question-answering,Memex Question Answering,"Question answering with real-world multi-modal personal collections, e.g., photo albums with visual, text, time and location information.",natural-language-processing b7f641f6-ff4f-477d-b44a-e3b2df402efe,cross-domain-text-classification,Cross-Domain Text Classification,Learning an accurate model for the new unlabeled target domain given labeled data from multiple source domains where all domains have (possibly) different label sets. 
(Source: https://www.aclweb.org/anthology/P16-1155.pdf),natural-language-processing fd70581c-23e5-4c56-884f-5d5db21953f5,word-translation,Word Translation,,natural-language-processing d421c30b-a391-4fff-8806-cf69a68eaded,clinical-language-translation,Clinical Language Translation,Translating clinical texts to layperson-understandable language.,natural-language-processing a734d990-05b6-446c-8e6f-242a0dfa8099,vietnamese-word-segmentation,Vietnamese Word Segmentation,,natural-language-processing 94b84e2f-2aff-4821-a8d0-05902b807ed2,spelling-correction,Spelling Correction,Spelling correction is the task of detecting and correcting spelling mistakes.,natural-language-processing ef855868-500f-46e8-886c-dcef45d44535,poem-meters-classification,Poem meters classification,,natural-language-processing e9f47356-d810-44f0-af48-915e2a107f44,meme-classification,Meme Classification,Meme classification refers to the task of classifying internet memes.,natural-language-processing 4a111520-3dac-4cd6-9a67-b0b89832fa55,turkish-text-diacritization,Turkish Text Diacritization,Addition of diacritics for undiacritized Turkish Wikipedia texts.,natural-language-processing 12cabac9-c28a-48b2-a611-a6f31e9b22d7,text-matching,Text Matching,Matching a target text to a source text based on their meaning.,natural-language-processing ee74c3ef-6a2e-400e-93dd-a9d3f74fc093,entity-linking,Entity Linking,"Assigning a unique identity to entities (such as famous individuals, locations, or companies) mentioned in text (Source: Wikipedia).",natural-language-processing 8c0dd606-be4e-45db-9f73-afc91bd44e8d,document-classification,Document Classification,"**Document Classification** is a procedure of assigning one or more labels to a document from a predetermined set of labels. Source: [Long-length Legal Document Classification ](https://arxiv.org/abs/1912.06905)",natural-language-processing bfc9a602-78e1-4255-bc33-10ddced1900f,email-thread-summarization,Email Thread Summarization,Image credit: [EmailSum: Abstractive Email Thread Summarization](https://paperswithcode.com/paper/emailsum-abstractive-email-thread),natural-language-processing d765e505-83d3-4cad-83a0-795276239b5f,speech-to-text-translation,Speech-to-Text Translation,"Translate audio signals of speech in one language into text in a foreign language, either in an end-to-end or cascade manner.",natural-language-processing 9199b5d8-affc-4c13-9181-8d223b36669b,instruction-following,Instruction Following,,natural-language-processing 2a720f7e-3ec2-4295-8c49-67afb7c112bc,automatic-post-editing,Automatic Post-Editing,Automatic post-editing (APE) is used to correct errors in the translation made by the machine translation systems.,natural-language-processing b15638aa-3507-4dc8-82d7-8c2d0c2b3971,political-salient-issue-orientation-detection,Political Salient Issue Orientation Detection,,natural-language-processing a7c7d1c9-7e39-48c6-ab25-bb9533e759ec,4-ary-relation-extraction,4-ary Relation Extraction,,natural-language-processing db43c682-2b08-4748-937f-3a0e4e99857f,link-prediction-on-dh-kgs,Link prediction on DH-KGs,,natural-language-processing 542e0d1e-79d7-471c-ab9f-cc8a253213c1,context-query-reformulation,Context Query Reformulation,,natural-language-processing 70eec1ad-a2c6-46cc-89aa-2f8bcc68e713,punctuation-restoration,Punctuation Restoration,Punctuation Restoration,natural-language-processing 54b17ebf-8b2f-4d91-bae3-cbacd62e266d,logical-reasoning-question-ansering,Logical Reasoning Question Answering,"Introduced by ReClor (ICLR 2020), logical reasoning is to evaluate the 
logical reasoning ability of models for question answering.",natural-language-processing f9558c47-f5fb-4269-ae32-48bd8ca43dd0,cross-lingual-document-classification,Cross-Lingual Document Classification,"Cross-lingual document classification refers to the task of using data and models available for one language for which ample such resources are available (e.g., English) to solve classification tasks in another, commonly low-resource, language.",natural-language-processing 16f05b5a-36b2-476a-9f71-ee752aabba12,language-modelling,Language Modelling,"**Language Modeling** is the task of predicting the next word or character in a document. This technique can be used to train language models that can further be applied to a wide range of natural language tasks like text generation, text classification, and question answering. The common types of language modeling techniques involve: - N-gram Language Models - Neural Language Models A model's language modeling capability is measured using cross-entropy and perplexity. Some datasets to evaluate language modeling are WikiText-103, One Billion Word, Text8, C4, among others. One of the most recent popular benchmarks to evaluate language modeling capabilities is called SuperGLUE. Some popular and notable state-of-the-art language models include: - [GPT-3](/method/gpt-3) - Megatron-LM - [BERT](/method/bert) Check below for all state-of-the-art models. Here are some additional readings to go deeper on the task: - [Language Modeling](https://lena-voita.github.io/nlp_course/language_modeling.html) - Lena Voita ( Image credit: [Exploring the Limits of Language Modeling](https://arxiv.org/pdf/1602.02410v2.pdf) )",natural-language-processing e3a000ee-4488-4ea3-ad2b-e7bcec9d0ebc,prosody-prediction,Prosody Prediction,"Predicting prosodic prominence from text. This is a 2-way classification task, assigning each word in a sentence a label 1 (prominent) or 0 (non-prominent). 
( Image credit: [Helsinki Prosody Corpus](https://github.com/Helsinki-NLP/prosody) )",natural-language-processing 3f924f22-5bca-4d25-b45b-81cb401d4ebd,formality-style-transfer,Formality Style Transfer,Formality Style Transfer,natural-language-processing 37223ab1-7333-48be-9c7d-7b40fde1d730,response-generation,Response Generation,A task where an agent should play the $DE$ role and generate a text to respond to a $P$ message.,natural-language-processing b007c8fb-5a5b-4827-9f71-d6ba17d10400,unsupervised-semantic-parsing,Unsupervised semantic parsing,,natural-language-processing 868c627c-4ee5-4168-8f93-9c6d8df870dc,definition-extraction,Definition Extraction,,natural-language-processing d52b584f-4374-4f30-841c-224db8f52424,intent-recognition,Intent Recognition,,natural-language-processing 086819cc-e82c-4d72-86dd-1ddb2d7ffdb6,incongruity-detection,incongruity detection,Incongruity detection is the task of identifying statements in a text that are inconsistent with each other.,natural-language-processing f16c03c6-8806-4470-a225-227ab0e09be3,blackout-poetry-generation,Blackout Poetry Generation,"Blackout poetry is a form of poetry in which words in a passage are masked, except for a few which when combined together in order to convey some meaning.",natural-language-processing 82c1136a-0253-489e-8ae5-216ee5a312d7,information-extraction,Information Extraction,Information extraction is the task of automatically extracting structured information from unstructured and / or semi-structured machine-readable documents and other electronically represented sources (Source: Wikipedia).,natural-language-processing 55faaecf-dfed-450b-9d29-e889785da5f0,sarcasm-detection,Sarcasm Detection,"The goal of **Sarcasm Detection** is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a type of phenomenon with specific perlocutionary effects on the hearer, such as to break their pattern of expectation. Consequently, correct understanding of sarcasm often requires a deep understanding of multiple sources of information, including the utterance, the conversational context, and, frequently some real world facts. Source: [Attentional Multi-Reading Sarcasm Detection ](https://arxiv.org/abs/1809.03051)",natural-language-processing 928590a9-6a42-413f-ad00-111310453342,phrase-relatedness,Phrase Relatedness,,natural-language-processing 353dee7c-fa77-4023-9ebf-c2c3fbd0b3f3,sentence-classification,Sentence Classification,,natural-language-processing b710f999-a72a-4868-97b5-b955b1f04082,open-information-extraction,Open Information Extraction,"In natural language processing, open information extraction is the task of generating a structured, machine-readable representation of the information in text, usually in the form of triples or n-ary propositions (Source: Wikipedia).",natural-language-processing 3f50e4d3-493f-4491-b9f9-f5b512c6acbf,cloze-test,Cloze Test,The cloze task refers to infilling individual words.,natural-language-processing d40036f3-cc1e-4af3-94af-a988664e2e72,bridging-anaphora-resolution,Bridging Anaphora Resolution,,natural-language-processing 2d6bdb95-f3eb-48f6-a038-c7b246bf643b,math-information-retrieval,Math Information Retrieval,Information Retrieval on Math Contents,natural-language-processing 57adb7af-6094-41f0-8d16-969383c22c6d,claim-extraction-with-stance-classification,Claim Extraction with Stance Classification (CESC),"Since claims stand at a clear position towards a given topic, the sentences with clear stances should have a higher possibility to be the claims. 
Hence, identifying the stances of the claims is supposed to benefit the claim extraction task. By combining the claim extraction and stance classification subtasks, we define this integrated task as: given a specific topic and relevant articles, extract the claims from the articles and also identify the stance of the claims towards the topic.",natural-language-processing 03c46e8b-1150-4302-a0df-79810700ba5a,offline-handwritten-chinese-character,Offline Handwritten Chinese Character Recognition,Handwritten Chinese character recognition is the task of detecting and interpreting the components of Chinese characters (i.e. radicals and two-dimensional structures).,natural-language-processing e42c0236-1a40-4803-a185-88e98a82baf4,sentence-compression,Sentence Compression,**Sentence Compression** is the task of reducing the length of text by removing non-essential content while preserving important facts and grammaticality.,natural-language-processing 578ea43e-9cbf-45d5-8bcd-3b2896761c27,medical-question-pair-similarity-computation,Medical question pair similarity computation,Predicting whether two questions on medical topics have the same meaning.,natural-language-processing 259a2ee8-a159-4383-ad19-6306473167d6,valnov,ValNov,"Given a textual premise and conclusion candidate, the Argument-Validity-and-Novelty-Prediction-Shared-Task ValNov consists in predicting two aspects of a conclusion: its validity and novelty. Validity is defined as the degree to which the conclusion is justified with respect to the given premise. A conclusion is considered to be valid if it is supported by inferences that link the premise to the conclusion, based on logical principles or commonsense or world knowledge, which may be defeasible. A conclusion will be trivially considered valid if it repeats or summarizes the premise – in which case it can hardly be considered as novel. Novelty defines the degree to which the conclusion contains content that is new in relation to the premise. As extreme cases, a conclusion candidate that repeats or summarizes the premise or is unrelated to the premise will not be considered novel.",natural-language-processing eb433eed-8301-477e-86b8-b0db91e5b92b,legal-document-translation,Legal Document Translation,Legal document translation is the task of translating legal documents between languages.,natural-language-processing 97314944-b652-4243-b0c6-33d5ca6d72dc,timedial,Timedial,,natural-language-processing cea8bfa1-12a7-4cd3-968b-c44d1c1464c7,cross-lingual-word-embeddings,Cross-Lingual Word Embeddings,,natural-language-processing f4be929c-fba6-4ce8-9e45-d477e0acb17b,dialogue,Dialogue,Dialogue is notoriously hard to evaluate. Past approaches have used human evaluation.,natural-language-processing b1df99ad-ef01-4859-870c-f540da951369,document-level-relation-extraction,Document-level Relation Extraction,Document-level RE aims to identify the relations of various entity pairs expressed across multiple sentences.,natural-language-processing 1cf20bc2-a303-43b5-ae03-c02fc8041826,word-sense-induction,Word Sense Induction,"Word sense induction (WSI) is widely known as the “unsupervised version” of WSD. The problem is stated as: Given a target word (e.g., “cold”) and a collection of sentences (e.g., “I caught a cold”, “The weather is cold”) that use the word, cluster the sentences according to their different senses/meanings. We do not need to know the sense/meaning of each cluster, but sentences inside a cluster should have used the target words with the same sense. 
Description from [NLP Progress](http://nlpprogress.com/english/word_sense_disambiguation.html)",natural-language-processing 49d861ae-1f10-463b-8e60-bcbe9439bf8e,word-similarity,Word Similarity,Calculate a numerical score for the semantic similarity between two words.,natural-language-processing c85c5084-8e7a-4677-be75-d43acab6987c,long-form-question-answering,Long Form Question Answering,Long-form question answering is a task requiring elaborate and in-depth answers to open-ended questions.,natural-language-processing e5b9b116-b98e-4df6-93f6-2fd62671c42e,dialogue-state-tracking,Dialogue State Tracking,"Dialogue state tracking consists of determining at each turn of a dialogue the full representation of what the user wants at that point in the dialogue, which contains a goal constraint, a set of requested slots, and the user's dialogue act.",natural-language-processing 9491fef2-96c2-49ed-a407-45852680e95c,action-parsing,Action Parsing,"Action parsing is the task of, given a video or still image, assigning each frame or image a label describing the action in that frame or image.",natural-language-processing d1d583c1-781c-464a-b53f-e43e45bf1364,hope-speech-detection,Hope Speech Detection,"Detecting speech associated with positive, uplifting, promise, potential, support, reassurance, suggestions, or inspiration.",natural-language-processing 333b7dab-77f3-453f-b948-87b3d3d08302,dependency-parsing,Dependency Parsing,"Dependency parsing is the task of extracting a dependency parse of a sentence that represents its grammatical structure and defines the relationships between ""head"" words and words, which modify those heads. Example: ``` root | | +-------dobj---------+ | | | nsubj | | +------det-----+ | +-----nmod------+ +--+ | | | | | | | | | | | | +-nmod-+| | | +-case-+ | + | + | + + || + | + | | I prefer the morning flight through Denver ``` Relations among the words are illustrated above the sentence with directed, labeled arcs from heads to dependents (+ indicates the dependent).",natural-language-processing ba8412c6-5848-4f1a-87b8-d99c791b6304,unsupervised-machine-translation,Unsupervised Machine Translation,"Unsupervised machine translation is the task of doing machine translation without any translation resources at training time. ( Image credit: [Phrase-Based & Neural Unsupervised Machine Translation](https://arxiv.org/pdf/1804.07755v2.pdf) )",natural-language-processing 1fba19f3-9100-44ca-a141-f899471286fb,multi-label-text-classification,Multi-Label Text Classification,"According to Wikipedia, ""In machine learning, multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes the instance can be assigned to.""",natural-language-processing adff7bbc-c527-454e-ad92-efa631d6949b,binary-relation-extraction,Binary Relation Extraction,,natural-language-processing 0435d6ca-ee9f-4d17-b9a5-e573e52589b2,table-annotation,Table annotation,"**Table annotation** is the task of annotating a table with terms/concepts from a knowledge graph or database schema. Table annotation is typically broken down into the following five subtasks: 1. Cell Entity Annotation ([CEA](https://paperswithcode.com/task/cell-entity-annotation)) 2. 
Column Type Annotation ([CTA](https://paperswithcode.com/task/column-type-annotation)) 3. Column Property Annotation ([CPA](https://paperswithcode.com/task/columns-property-annotation)) 4. [Table Type Detection](https://paperswithcode.com/task/table-type-detection) 5. [Row Annotation](https://paperswithcode.com/task/row-annotation) The [SemTab](http://www.cs.ox.ac.uk/isg/challenges/sem-tab/) challenge is closely related to the Table Annotation problem. It is a yearly challenge which focuses on the first three tasks of table annotation and its purpose is to benchmark different table annotation systems.",natural-language-processing d297e22f-5aa6-4d13-8f84-e462c7ae69e4,temporal-relation-classification,Temporal Relation Classification,"Temporal Relation Classification is the task that is concerned with classifying the temporal relation between a pair of temporal entities (traditional events and temporal expressions). Initial approaches aimed to classify the temporal relation in thirteen relation types that were depicted by James Allen in his seminal work ""Maintaining Knowledge about Temporal Intervals"". However, due to the ambiguity in the annotation, recent corpora have been limiting the type of relations to a subset of those relations. Notice that although Temporal Relation Classification can be thought of as a subtask of Temporal Relation Extraction, the two tasks can be morphed if one adds a label that indicates the absence of a temporal relation between the entities (e.g. ""no_relation"" or ""vague"") to Temporal Relation Classification.",natural-language-processing e045f039-c381-4efc-a0dc-d63a0789f7e5,definition-modelling,Definition Modelling,,natural-language-processing 5e09c572-e75c-4827-b2ee-b3896f6853bc,toxic-comment-classification,Toxic Comment Classification,,natural-language-processing 1ec68b53-ced5-4e1c-892b-0a02e87a3a6b,semantic-entity-labeling,Semantic entity labeling,"- One of the Form Understanding tasks (Word grouping, Semantic entity labeling, Entity linking) - Classifying entities into one of four pre-defined categories: question, answer, header, and other. Cited from G. Jaume, H. K. Ekenel, J. Thiran ""FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents,"" 2019",natural-language-processing c9b1bb38-25e6-4122-ac0e-a53add422d59,active-learning,Active Learning,"**Active Learning** is a paradigm in supervised machine learning which uses fewer training examples to achieve better optimization by iteratively training a predictor, and using the predictor in each iteration to choose the training examples which will increase its chances of finding better configurations and, at the same time, improve the accuracy of the prediction model. Source: [Polystore++: Accelerated Polystore System for Heterogeneous Workloads ](https://arxiv.org/abs/1905.10336)",natural-language-processing 6c3a35b5-c17d-4d92-829b-48a49deca101,amr-parsing,AMR Parsing,"Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on. 
[See](https://amr.isi.edu/index.html).",natural-language-processing e997f7c9-e1b9-43ef-a90a-2a73606eaf88,job-classification,Job classification,,natural-language-processing 7cb4fc80-1f7d-4bf1-a16f-7699e6c23f11,community-question-answering,Community Question Answering,"Community question answering is the task of answering questions on a Q&A forum or board, such as Stack Overflow or Quora.",natural-language-processing 049f2be5-708f-42b2-a758-e1d8066cc361,twitter-event-detection,Twitter Event Detection,"Detection of worldwide events from categories like Sports, Politics, Entertainment, Science & Technology, etc. by analyzing Twitter Tweets.",natural-language-processing 30bdfa26-400f-437f-bff1-1b66ca7979e3,context-specific-spam-detection,Context-specific Spam Detection,,natural-language-processing 96e4323d-2fad-43cd-9eae-b4f7f423c934,method-name-prediction,Method name prediction,,natural-language-processing c9e25f6a-d60e-4f7b-b442-70da93bed3f5,turning-point-identification,Turning Point Identification,"Identification of key events in a narrative (such as movie or TV episode). The task is supported by screenwriting theory, according to which there are 5 different types of key events in a movie. These key events (e.g., change of plans, major setback, climax) are crucial narrative moments: they define the plot structure and determine its progression and thematic units (e.g., setup, complications, aftermath).",natural-language-processing dabd16cc-5d69-4858-93a8-663de9524d27,aspect-extraction,Aspect Extraction,"Aspect extraction is the task of identifying and extracting terms relevant for opinion mining and sentiment analysis, for example terms for product attributes or features.",natural-language-processing 1f8dd9f0-a3ed-49e7-a2c9-f33089a85ef7,hope-speech-detection-for-english,Hope Speech Detection for English,Detecting Hope Speech in the English Language,natural-language-processing d60a7ed2-f970-4ba0-8678-338fe906c33f,gsm8k,GSM8K,,natural-language-processing e7a086c5-957e-4048-b131-5428c6630e62,drug-drug-interaction-extraction,Drug–drug Interaction Extraction,"Automatic extraction of Drug-drug interaction (DDI) information from the biomedical literature. 
( Image credit: [Using Drug Descriptions and Molecular Structures for Drug-Drug Interaction Extraction from Literature](https://watermark.silverchair.com/btaa907.pdf) )",natural-language-processing 29fa0011-2e32-447f-aac3-bd114f89be93,personality-recognition-in-conversation,Personality Recognition in Conversation,"Given a speaker's conversation with others, it is required to recognize the speaker's personality traits through the conversation record, which includes two scenarios: (1) $1-1$ conversations: the robot recognizes the personality traits of the speaker through the conversation between them (e.g., psychological counseling), (2) $1-N$ conversations: the robot listens to the speaker's conversations with other $N$ people and then recognizes the speaker's personality traits (e.g., group chatbot, home service robot). Since $1-N$ includes the case of $1-1$, we only discuss PRC in $1-N$ conversations. The task of PRC in $1-N$ conversations can be formulated as: $Per_i = argmax_{Per'_i}P(Per'_i | C_{i,j}, \cdots, C_{i,N})$ where $Per_i=[Neu, Ext, Ope, Agr, Con]$ is a 5-dimensional vector representing Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. $C_{i,j}$ is the conversations between $Speaker_i$ and $Speaker_j$ ($1 \leq j \leq N$).",natural-language-processing 56a97d4e-ded8-4f12-acd2-d5b1541b9049,emotional-intelligence,Emotional Intelligence,,natural-language-processing 5016226f-2847-4fe9-9d09-fbb182807a86,word-attribute-transfer,Word Attribute Transfer,"Changing a word's attribute, such as its gender.",natural-language-processing 0ac4e602-a824-4f94-b5ae-c0dc06c85737,hungarian-text-diacritization,Hungarian Text Diacritization,Addition of diacritics for undiacritized Hungarian Wikipedia texts.,natural-language-processing b2e0890d-1315-4ac7-a7e4-2b48c1073385,news-classification,News Classification,,natural-language-processing 6999400f-d2fb-4024-a446-78aa9d9b1939,few-shot-text-classification,Few-Shot Text Classification,,natural-language-processing ad39375f-3f5d-4eea-8a60-d39998930434,table-retrieval,Table Retrieval,"When given a query, the goal of this task is to retrieve a relevant table from a (potentially large) collection of tables. The query could be a single sentence (such as a question), or it could also be a conversation. As for the retrieval, the tables could be in the raw form (i.e. 
the values of each cells), the metadata (such as the title, description), or summary statistics.",natural-language-processing 577004a1-bb9d-47ad-812f-4808fd3c2163,multimodal-text-prediction,Multimodal Text Prediction,"**Multimodal text prediction** is a type of natural language processing that involves predicting the next word or sequence of words in a sentence, given multiple modalities or types of input. In traditional text prediction, the prediction is based solely on the context of the sentence, such as the words that precede the target word. In multimodal text prediction, additional modalities, such as images, audio, or user behavior, are also used to inform the prediction. For example, in a multimodal text prediction system for captioning images, the system may use both the content of the image and the words that have been typed so far to generate the next word in the caption. The image may provide additional context or information about the content of the caption, while the typed words may provide information about the style or tone of the caption. Multimodal text prediction can be achieved using a variety of techniques, including deep learning models and statistical models. These models can be trained on large datasets of text and multimodal inputs to learn the relationships between the different types of data and improve the accuracy of the predictions. Multimodal text prediction has many applications, including chatbots, virtual assistants, and predictive text input for mobile devices. By incorporating additional modalities into the prediction process, multimodal text prediction systems can provide more accurate and useful predictions, improving the overall user experience.",natural-language-processing 7a23f54c-a16f-464f-b9b1-09a7f92fb24d,semantic-role-labeling,Semantic Role Labeling,"Semantic role labeling aims to model the predicate-argument structure of a sentence and is often described as answering ""Who did what to whom"". BIO notation is typically used for semantic role labeling. Example: | Housing | starts | are | expected | to | quicken | a | bit | from | August’s | pace | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | B-ARG1 | I-ARG1 | O | O | O | V | B-ARG2 | I-ARG2 | B-ARG3 | I-ARG3 | I-ARG3 |",natural-language-processing 34189291-46d4-4326-8405-5d888317501f,prompt-engineering,Prompt Engineering,"**Prompt engineering** is the process of designing and refining the prompts used to generate text from language models, such as GPT-3 or similar models. The goal of prompt engineering is to improve the quality and relevance of the generated text by carefully crafting the prompts to elicit the desired responses from the model. Prompt engineering involves several steps, including selecting the appropriate model architecture and parameters, designing the prompt format and structure, selecting the appropriate task and training data, and fine-tuning the model using the selected prompt and data. Prompt engineering is a crucial step in the development of language models, as it can greatly influence the quality and effectiveness of the model's responses. 
By carefully designing and refining the prompts used to generate text, researchers and developers can improve the accuracy and relevance of the model's output, making it more useful for a wide range of applications, including chatbots, language translation, content creation, and more.",natural-language-processing a9da5f76-2299-47ed-ab36-8cf843f794c4,graph-to-sequence,Graph-to-Sequence,Mapping an input graph to a sequence of vectors.,natural-language-processing 7ec3338a-85ee-4641-a90d-794c9f904135,review-generation,Review Generation,,natural-language-processing 438947ed-ce94-40d1-bb90-892381a23dbb,stance-detection,Stance Detection,"Stance detection is the extraction of a subject's reaction to a claim made by a primary actor. It is a core part of a set of approaches to fake news assessment. Example: * Source: ""Apples are the most delicious fruit in existence"" * Reply: ""Obviously not, because that is a reuben from Katz's"" * Stance: deny",natural-language-processing ca9107b9-a4f2-48a5-a5e1-c744d003c713,empathetic-response-generation,Empathetic Response Generation,Generate empathetic responses in dialogues,natural-language-processing 6bfa9982-917c-442a-a2d9-6e59fc3fd12a,sentence-embeddings-for-biomedical-texts,Sentence Embeddings For Biomedical Texts,,natural-language-processing e0178515-25b1-4d05-a14a-c13d9a0bbaef,connective-detection,Connective Detection,Misha packed carefully so that everything would fit in her bag.,natural-language-processing 2c4494a3-6aea-4667-b82d-1b7723edc9a0,molecular-description-generation,Molecular description generation,"Molecular description generation entails the creation of a detailed textual depiction illuminating the structure, properties, biological activity, and applications of a molecule based on its molecular descriptors. It furnishes chemists and biologists with a swift conduit to essential molecular information, thus efficiently guiding their research and experiments.",natural-language-processing f27d8f03-38d0-4970-8a5d-d890993e6ede,document-ranking,Document Ranking,"Sort documents according to some criterion so that the ""best"" results appear early in the result list displayed to the user (Source: Wikipedia).",natural-language-processing ebca3ecd-2412-47ab-a8c6-4db4028008ad,cross-lingual-question-answering,Cross-Lingual Question Answering,,natural-language-processing c75c7d86-567b-4010-98ae-dcf1274a046f,unsupervised-dependency-parsing,Unsupervised Dependency Parsing,"Unsupervised dependency parsing is the task of inferring the dependency parse of sentences without any labeled training data. Description from [NLP Progress](http://nlpprogress.com/english/dependency_parsing.html)",natural-language-processing 77c5cb16-2147-4325-9c47-8ff81fd1981d,part-of-speech-tagging,Part-Of-Speech Tagging,"Part-of-speech tagging (POS tagging) is the task of tagging a word in a text with its part of speech. A part of speech is a category of words with similar grammatical properties. Common English parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, etc. 
Example: | Vinken | , | 61 | years | old | | --- | ---| --- | --- | --- | | NNP | , | CD | NNS | JJ |",natural-language-processing e225f9df-efe1-45d5-8664-e0f9940ef869,drugprot,DrugProt,,natural-language-processing c17b8edd-7bcc-440f-896f-fe84fcde1eb2,readability-optimization,Readability optimization,It consists of improving the readability of a text automatically and without significantly altering the form or meaning.,natural-language-processing 1114d10c-77e1-45c1-b167-639e866c443b,metric-type-identification,Metric-Type Identification,,natural-language-processing cbf0b6e6-05bc-47e5-963e-38be34a19683,misogynistic-aggression-identification,Misogynistic Aggression Identification,"Develop a binary classifier for classifying the text as ‘gendered’ or ‘non-gendered’. For this, the TRAC-2 dataset of 5,000 annotated data from social media each in Bangla (in both Roman and Bangla script), Hindi (in both Roman and Devanagari script) and English for training and validation is to be used.",natural-language-processing ad800e03-57a1-4ff4-9ac7-0fae806ef5e2,few-shot-ner,Few-shot NER,"Few-Shot Named Entity Recognition (NER) is the task of recognising a 'named entity' like a person, organization, time and so on in a piece of text e.g. ""Alan Mathison [person] visited the Turing Institute [organization] in June [time].",natural-language-processing 93a19880-0cd2-4344-8f89-75b088279ed7,irony-identification,Irony Identification,"This task asks a model to identify whether a given sentence(s) is/are ironic or not. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/irony_identification)",natural-language-processing 3a0d975c-0cd8-4543-b2c8-75a4d633d052,cross-lingual-text-to-image-generation,Cross-lingual Text-to-Image Generation,,natural-language-processing 01e6ad82-41cf-4ffa-bc79-00b1c2321e8f,abstractive-text-summarization,Abstractive Text Summarization,"**Abstractive Text Summarization** is the task of generating a short and concise summary that captures the salient ideas of the source text. The generated summaries potentially contain new phrases and sentences that may not appear in the source text. Source: [Generative Adversarial Network for Abstractive Text Summarization ](https://arxiv.org/abs/1711.09357) Image credit: [Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond](https://arxiv.org/pdf/1602.06023v5.pdf)",natural-language-processing d5ebf6d9-761e-48d7-9ea4-b9f89da1fb7e,relation-extraction,Relation Extraction,"**Relation Extraction** is the task of predicting attributes and relations for entities in a sentence. For example, given a sentence “Barack Obama was born in Honolulu, Hawaii.”, a relation classifier aims at predicting the relation of “bornInCity”. Relation Extraction is the key component for building relation knowledge graphs, and it is of crucial significance to natural language processing applications such as structured search, sentiment analysis, question answering, and summarization. Source: [Deep Residual Learning for Weakly-Supervised Relation Extraction ](https://arxiv.org/abs/1707.08866)",natural-language-processing f2c207d8-6f0a-4a4b-acda-2433b99fac09,timex-normalization,Timex normalization,"Temporal expression normalisation is the grounding of a lexicalisation of a time to a calendar date or other formal temporal representation. Example: 10/18/2000 21:01:00.65 Dozens of Palestinians were wounded in scattered clashes in the West Bank and Gaza Strip, Wednesday, despite the Sharm el-Sheikh truce accord. 
Chuck Rich reports on entertainment every Saturday Description from [NLP Progress](http://nlpprogress.com/english/temporal_processing.html)",natural-language-processing bd631b75-6ad0-4341-848a-9964a5e59416,specificity,Specificity,,natural-language-processing 41610553-0147-4624-a488-4e027b7c1a91,semantic-composition,Semantic Composition,Understanding the meaning of text by composing the meanings of the individual words in the text (Source: https://arxiv.org/pdf/1405.7908.pdf),natural-language-processing 6e62b00a-3905-4563-96e2-9c953cf6ba3f,emotion-recognition-in-conversation,Emotion Recognition in Conversation,"Given the transcript of a conversation along with speaker information of each constituent utterance, the ERC task aims to identify the emotion of each utterance from several pre-defined emotions. Formally, given the input sequence of N number of utterances [(u1, p1), (u2, p2), . . . , (uN , pN )], where each utterance ui = [ui,1, ui,2, . . . , ui,T ] consists of T words ui,j and spoken by party pi, the task is to predict the emotion label ei of each utterance ui. .",natural-language-processing 67c6f747-2172-44dd-aa71-cd30f52e7834,lambada,LAMBADA,,natural-language-processing 480baed9-cdc9-4436-951d-0e3b0edc27eb,multimodal-gif-dialog,Multimodal GIF Dialog,,natural-language-processing 229a1f2e-ae57-435b-9793-f94c886ca7fa,semantic-retrieval,Semantic Retrieval,,natural-language-processing 6a4d77f5-a9db-432a-9374-601c2a5204de,ucca-parsing,UCCA Parsing,"UCCA (Abend and Rappoport, 2013) is a semantic representation whose main design principles are ease of annotation, cross-linguistic applicability, and a modular architecture. UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation. Edges are labeled, indicating the role of a child in the relation the parent represents. UCCA’s foundational layer mostly covers predicate-argument structure, semantic heads and inter-Scene relations. UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges that allow for a unit to participate in several super-ordinate relations. Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG. 
Description from [NLP Progress](http://nlpprogress.com/english/semantic_parsing.html)",natural-language-processing 0076317b-44a2-43b0-912e-00a4ed3c5a2b,bias-detection,Bias Detection,"Bias detection is the task of detecting and measuring racism, sexism and otherwise discriminatory behavior in a model (Source: https://stereoset.mit.edu/)",natural-language-processing aa72055a-b681-4a9f-a6c5-10c502caf31b,native-language-identification,Native Language Identification,Native Language Identification (NLI) is the task of determining an author's native language (L1) based only on their writings in a second language (L2).,natural-language-processing b68790dc-86f2-4c4e-b6de-199f6db5fc3c,generative-question-answering,Generative Question Answering,,natural-language-processing 52eefeca-1c11-42a6-8b85-643cb31bdb38,phrase-vector-embedding,Phrase Vector Embedding,"Just like the generation of word (1-gram) vector embedding, this task is for phrase (n-gram) vector embedding.",natural-language-processing 5a61f802-0f72-4d5a-bf78-87f28cc0decb,relation-mention-extraction,Relation Mention Extraction,Extracting phrases representative for a specific relation.,natural-language-processing bf78446d-073d-4020-ba38-c52c53ae2d50,text-summarization,Text Summarization,"**Text Summarization** is a natural language processing (NLP) task that involves condensing a lengthy text document into a shorter, more compact version while still retaining the most important information and meaning. The goal is to produce a summary that accurately represents the content of the original text in a concise form. There are different approaches to text summarization, including extractive methods that identify and extract important sentences or phrases from the text, and abstractive methods that generate new text based on the content of the original text.",natural-language-processing fccd5ba7-bc59-4c34-9ce3-4d0d9ccb1e8d,keyword-extraction,Keyword Extraction,Keyword extraction is tasked with the automatic identification of terms that best describe the subject of a document (Source: Wikipedia).,natural-language-processing fe36f538-0181-4f35-becb-c143acb0995b,few-shot-htc,few-shot-htc,,natural-language-processing 4b5ad4dd-8c74-49fa-9522-865a2f5bd2f5,dialog-act-classification,Dialog Act Classification,,natural-language-processing 889f0bb7-9da9-44df-92d2-47bc435f8702,joint-entity-and-relation-extraction-on,Joint Entity and Relation Extraction on Scientific Data,,natural-language-processing 6c71effe-dc00-4db4-8d30-c6bff02cfa44,stance-detection-us-election-2020-biden,Stance Detection (US Election 2020 - Biden),,natural-language-processing 04908cb2-0549-4e27-8289-26ad9c2ed2f3,aspect-based-sentiment-analysis,Aspect-Based Sentiment Analysis (ABSA),"**Aspect-Based Sentiment Analysis (ABSA)** is a Natural Language Processing task that aims to identify and extract the sentiment of specific aspects or components of a product or service. ABSA typically involves a multi-step process that begins with identifying the aspects or features of the product or service that are being discussed in the text. This is followed by sentiment analysis, where the sentiment polarity (positive, negative, or neutral) is assigned to each aspect based on the context of the sentence or document. Finally, the results are aggregated to provide an overall sentiment for each aspect. 
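As a toy illustration (not a published ABSA system), a lexicon-based sketch can pair each known aspect term with nearby opinion words; the aspect list and polarity lexicon below are invented for the example.

```
# Toy sketch: assign a polarity to each known aspect term using opinion words
# found in the same sentence (illustrative only, not a real ABSA method).
POLARITY = {'tasty': 'positive', 'great': 'positive', 'slow': 'negative', 'rude': 'negative'}
ASPECTS = {'food', 'service'}

def analyse(review):
    results = {}
    for sentence in review.lower().split('.'):
        words = sentence.split()
        opinions = [POLARITY[w] for w in words if w in POLARITY]
        for aspect in (w for w in words if w in ASPECTS):
            # naive aggregation: take the first opinion word found in the sentence
            results[aspect] = opinions[0] if opinions else 'neutral'
    return results

print(analyse('The food was tasty. The service was slow.'))
# {'food': 'positive', 'service': 'negative'}
```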
Recent works propose more challenging ABSA tasks to predict sentiment triplets or quadruplets (Chen et al., 2022), the most influential of which are ASTE (Peng et al., 2020; Zhai et al., 2022), TASD (Wan et al., 2020), ASQP (Zhang et al., 2021a) and ACOS with an emphasis on the implicit aspects or opinions (Cai et al., 2020a). ( Source: [MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction](https://arxiv.org/abs/2305.12627) )",natural-language-processing d42d8c9a-bf91-4058-b253-72afab191268,query-based-extractive-summarization,Query-Based Extractive Summarization,Extracting summarized information that answers a given query based on a reference text.,natural-language-processing a0bbd481-eae6-4959-a934-29e4bce4ddd6,named-entity-recognition-ner,Named Entity Recognition (NER),"**Named Entity Recognition (NER)** is a task of Natural Language Processing (NLP) that involves identifying and classifying named entities in a text into predefined categories such as person names, organizations, locations, and others. The goal of NER is to extract structured information from unstructured text data and represent it in a machine-readable format. Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities. O is used for non-entity tokens. Example:

| Mark | Watney | visited | Mars |
| --- | --- | --- | --- |
| B-PER | I-PER | O | B-LOC |

( Image credit: [Zalando](https://research.zalando.com/welcome/mission/research-projects/flair-nlp/) )",natural-language-processing c9e83821-94cc-4db2-a048-71bc13f573a8,arabic-sentiment-analysis,Arabic Sentiment Analysis,"Arabic sentiment analysis is the process of computationally identifying and categorizing opinions expressed in a piece of Arabic text, especially in order to determine whether the writer's attitude towards a particular topic, product, etc. is positive, negative, or neutral (Source: Oxford Languages)",natural-language-processing 3dc12217-feec-4789-a602-4dbed40d85c0,authorship-verification,Authorship Verification,"Authorship verification (**AV**) is a research subject in the field of digital text forensics that concerns itself with the question whether two documents have been written by the same person. Definition taken from the paper **Assessing the Applicability of Authorship Verification Methods**, available at: ",natural-language-processing 1934301f-fc79-4481-8717-42bfc38aee6b,spoken-language-understanding,Spoken Language Understanding,,natural-language-processing 45264222-edf7-41eb-93d2-6bced0bdefac,data-mining,Data Mining,,natural-language-processing 833e59bd-c708-40cf-87c8-b8daaebd4307,lexical-normalization,Lexical Normalization,"Lexical normalization is the task of translating/transforming a non-standard text to a standard register. Example:

```
new pix comming tomoroe
new pictures coming tomorrow
```

Datasets usually consist of tweets, since these naturally contain a fair amount of these phenomena. For lexical normalization, only replacements on the word-level are annotated. Some corpora include annotation for 1-N and N-1 replacements. 
However, word insertion/deletion and reordering is not part of the task.",natural-language-processing 66af0a10-004f-4538-ab68-01face028ed2,reverse-dictionary,Reverse Dictionary,,natural-language-processing 8c2fce14-635e-450a-9f1d-c24efd8e83ea,knowledge-base-population,Knowledge Base Population,Knowledge base population is the task of filling the incomplete elements of a given knowledge base by automatically processing a large corpus of text.,natural-language-processing b150499e-92e1-4271-a0b0-6ebdefd4c04a,race-m,RACE-m,,natural-language-processing 19b520a3-b251-4750-a771-c1949c98f172,document-level-re-with-incomplete-labeling,Document-level RE with incomplete labeling,,natural-language-processing 39960a1a-87d0-45a5-bf6e-01dcc77d40b8,overlapping-mention-recognition,Overlapping Mention Recognition,Overlapping mention recognition is the task of correctly identifying all mentions of an entity in the presence of overlapping entity mentions.,natural-language-processing d00511d3-215b-4d1b-83fc-8a9bc0ac5340,zero-shot-slot-filling,Zero-shot Slot Filling,,natural-language-processing 10d2f405-6cf3-404c-ae77-63565b1f5d54,cross-document-language-modeling,Cross-Document Language Modeling,"Involves pretraining language models to support multi-document NLP tasks. Source: [Cross-Document Language Modeling](https://arxiv.org/pdf/2101.00406v1.pdf) Image Credit: [Cross-Document Language Modeling](https://arxiv.org/pdf/2101.00406v1.pdf)",natural-language-processing 233b9e85-ea96-4709-ace8-5f6ce4dbf4c3,crowdsourced-text-aggregation,Crowdsourced Text Aggregation,"One of the most important parts of processing responses from crowd workers is **aggregation**: given several conflicting opinions, a method should extract the truth. This problem is also known as *truth-inference* in crowdsourcing. Text aggregation problem is dedicated to extracting the correct information from crowd workers' responses for a crowdsourcing task where the output is a *text*: audio transcription, translation, character recognition, etc.",natural-language-processing 1aeb6029-bda0-40c9-aaf8-d6f5712fd1e7,passage-ranking,Passage Ranking,,natural-language-processing 18d75f80-a382-4fb7-a673-81219873eb0a,negation-and-speculation-cue-detection,Negation and Speculation Cue Detection,,natural-language-processing bc9509e6-c063-4fe0-8703-c213350af2f7,open-question,Open-Ended Question Answering,"Open-ended questions are defined as those that simply pose the question, without imposing any constraints on the format of the response. This distinguishes them from questions with a predetermined answer format.",natural-language-processing 31b8f26b-38a0-4bc8-ad95-34cceb4182fa,vietnamese-parsing,Vietnamese Parsing,,natural-language-processing 9891ded0-691b-47aa-beea-0205f8a8b9a9,transliteration,Transliteration,"**Transliteration** is a mechanism for converting a word in a source (foreign) language to a target language, and often adopts approaches from machine translation. In machine translation, the objective is to preserve the semantic meaning of the utterance as much as possible while following the syntactic structure in the target language. In Transliteration, the objective is to preserve the original pronunciation of the source word as much as possible while following the phonological structures of the target language. For example, the city’s name “Manchester” has become well known by people of languages other than English. 
These new words are often named entities that are important in cross-lingual information retrieval, information extraction, machine translation, and often present out-of-vocabulary challenges to spoken language technologies such as automatic speech recognition, spoken keyword search, and text-to-speech. Source: [Phonology-Augmented Statistical Framework for Machine Transliteration using Limited Linguistic Resources ](https://arxiv.org/abs/1810.03184)",natural-language-processing 803f5fdc-1d92-4415-8da4-b0bbef737d0d,joint-ner-and-classification,Joint NER and Classification,Joint named entity recognition and classification refers to the combined task of identifying named entitites in a given text and text classification.,natural-language-processing 2d5426f4-7a76-42eb-8d77-2a0f1159549b,multi-modal-named-entity-recognition,Multi-modal Named Entity Recognition,"Multi-modal named entity recognition aims at improving the accuracy of NER models through utilizing image information.",natural-language-processing bfcd24bd-ac9b-4ad4-9aee-a2c72b122c36,dialogue-understanding,Dialogue Understanding,,natural-language-processing 981231ae-03e4-4375-813f-4ac17d92bb87,document-dating,Document Dating,"Document Dating is the problem of automatically predicting the date of a document based on its content. Date of a document, also referred to as the Document Creation Time (DCT), is at the core of many important tasks, such as, information retrieval, temporal reasoning, text summarization, event detection, and analysis of historical text, among others. For example, in the following document, the correct creation year is 1999. This can be inferred by the presence of terms 1995 and Four years after. Swiss adopted that form of taxation in 1995. The concession was approved by the govt last September. Four years after, the IOC…. Description from [NLP Progress](http://nlpprogress.com/english/temporal_processing.html)",natural-language-processing 33e28b08-7eed-4e4e-8759-b4addb2599e1,key-information-extraction,Key Information Extraction,"Key Information Extraction (KIE) is aimed at extracting structured information (e.g. key-value pairs) from form-style documents (e.g. invoices), which makes an important step towards intelligent document understanding.",natural-language-processing cfa4057c-1afb-4fe3-b07a-d7e6530a1c23,empirical-judgments,Empirical Judgments,"Drawing inspiration from Immanuel Kant, this task measures a model’s ability to distinguish between two kinds of empirical judgments: judgments that assert a correlative relation between empirical events, and judgments that assert a causal relation. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/empirical_judgments)",natural-language-processing 7308b40b-df9e-403b-861a-83c8ddbc89f6,description-guided-molecule-generation,Description-guided molecule generation,"The significance of description-based molecule generation lies in its potential to streamline the process of molecular design by enabling the production of molecules that directly meet the criteria outlined in a given description. 
This facilitates a more targeted approach in the creation and optimization of novel molecules, with applications in diverse fields such as drug discovery and materials science.",natural-language-processing 4f387db4-5721-4612-b7eb-cca5c3e7ad21,multi-lingual-text-to-image-generation,Multi-lingual Text-to-Image Generation,,natural-language-processing 71fef1ab-f18c-45d9-abe0-cddd2ec8189e,chinese-spell-checking,Chinese Spell Checking,Chinese Spell Checking (CSC) aims to detect and correct erroneous characters for user-generated text in Chinese language.,natural-language-processing ac2c93ef-0ede-46c6-9ebb-b8e34e4d30c1,low-resource-named-entity-recognition,Low Resource Named Entity Recognition,"Low resource named entity recognition is the task of using data and models available for one language for which ample such resources are available (e.g., English) to solve named entity recognition tasks in another, commonly more low-resource, language.",natural-language-processing 800cd4b6-8747-4d44-8ffa-0dd6f2ffebb7,domain-labelling,Domain Labelling,,natural-language-processing 5ff8ace3-62b0-4e2f-a113-7e9c5459ae2e,aspect-category-polarity,Aspect Category Polarity,,natural-language-processing 09cce22c-aa14-4cba-b50f-9934137e30c2,document-level-event-extraction,Document-level Event Extraction,,natural-language-processing a24a089d-5078-4135-99f8-474e5faa8a68,entity-typing-on-dh-kgs,Entity Typing on DH-KGs,,natural-language-processing 08d7a98d-7bb7-4055-a3b8-d4cff670988c,slovak-text-diacritization,Slovak Text Diacritization,Addition of diacritics for undiacritized Slovak Wikipedia texts.,natural-language-processing 6e1667aa-069b-4839-854e-75059a9983f6,relation-classification,Relation Classification,"**Relation Classification** is the task of identifying the semantic relation holding between two nominal entities in text. Source: [Structure Regularized Neural Network for Entity Relation Classification for Chinese Literature Text ](https://arxiv.org/abs/1803.05662)",natural-language-processing 452ef90c-8982-4710-a72d-b7df58f3f49d,conversation-disentanglement,Conversation Disentanglement,"Automatic disentanglement could be used to provide more interpretable results when searching over chat logs, and to help users understand what is happening when they join a channel. 
Source: [Kummerfeld et al.](https://arxiv.org/pdf/1810.11118v2.pdf)",natural-language-processing 8984dcf8-dcf3-4283-9365-d454719b4644,natural-language-transduction,Natural Language Transduction,Converting one sequence into another,natural-language-processing d32393b5-6536-44f3-9349-18fa7e29342b,cause-effect-relation-classification,Cause-Effect Relation Classification,"Classifying pairs of entities (e.g., events) into causal or non-causal or predicting Cause and Effect in a causal relation",natural-language-processing a7c382e9-9c77-46ef-a6c2-222111d591da,open-intent-detection,Open Intent Detection,"Open intent detection aims to identify n-class known intents, and detect one-class open intent.",natural-language-processing a7c33454-115f-4979-babe-3fa67fe46fb5,end-to-end-dialogue-modelling,End-To-End Dialogue Modelling,,natural-language-processing 0d396a81-38ae-4834-9afe-16cc602e13a5,cross-document-coreference-resolution,Cross Document Coreference Resolution,,natural-language-processing 0f7b52bd-2682-47bd-bb52-c0fe2638ba89,traditional-spam-detection,Traditional Spam Detection,,natural-language-processing 57b2ac11-0047-43ea-9bf7-150ee3a5a240,keyphrase-generation,Keyphrase Generation,Keyphrase Generation aims at generating keyphrases (or phrases) that best summarize a given text article or document.,natural-language-processing f3a6d6fe-24cc-45c2-9edd-e96a26d0dd29,distractor-generation,Distractor Generation,"Given a passage, a question, and an answer phrase, the goal of distractor generation (DG) is to generate context-related wrong options (i.e., distractor) for multiple-choice questions (MCQ).",natural-language-processing ba643b69-c66f-42dd-865d-7b658fc5306e,multilingual-machine-comprehension,Multilingual Machine Comprehension in English Hindi,"Multilingual Machine Comprehension (MMC) is a Question-Answering (QA) sub-task that involves quoting the answer for a question from a given snippet, where the question and the snippet can be in different languages. 
Results on an extended version of the recently released XQuAD dataset, which we propose to use as the evaluation benchmark for future research.",natural-language-processing b4574e05-80a0-4063-8630-b778b94dda4b,aspect-oriented-opinion-extraction,Aspect-oriented Opinion Extraction,Extracting the paired opinion terms for every given aspect term in a sentence.,natural-language-processing bdd58518-9703-4921-96ec-cafcbc77c607,sentence-ordering,Sentence Ordering,Sentence ordering task deals with finding the correct order of sentences given a randomly ordered paragraph.,natural-language-processing dee1502c-6ad4-41e7-87bf-7318aa6d27f5,question-quality-assessment,Question Quality Assessment,This task expects to build subjective question-answering algorithms to check whether a question is of high quality or needs to be edited/flagged.,natural-language-processing b074c19c-e46a-4871-9067-c4d271eb1cc9,open-intent-discovery,Open Intent Discovery,Open intent discovery aims to leverage limited prior knowledge of known intents to find fine-grained known and open intent-wise clusters.,natural-language-processing 0739abb4-294a-44f4-8f7c-80a22c6966ad,table-search,Table Search,,natural-language-processing 5acf7d1e-fdde-46cb-be48-06c1f0073197,joint-entity-and-relation-extraction,Joint Entity and Relation Extraction,Scores reported from systems which jointly extract entities and relations.,natural-language-processing b4fd32da-f16f-4d33-872c-c3f81619aee1,semantic-textual-similarity,Semantic Textual Similarity,"Semantic textual similarity deals with determining how similar two pieces of texts are. This can take the form of assigning a score from 1 to 5. Related tasks are paraphrase or duplicate identification. Image source: [Learning Semantic Textual Similarity from Conversations](https://arxiv.org/pdf/1804.07754.pdf)",natural-language-processing 76c263ba-4148-4bc8-896f-5abee3bf1230,hurtful-sentence-completion,Hurtful Sentence Completion,Measure hurtful sentence completions in language models (HONEST),natural-language-processing 10ce7949-2e62-4070-997f-1cd63c8e4e95,latent-aspect-detection,Latent Aspect Detection,,natural-language-processing ea371b36-d478-4e17-8e64-34a0e5cb3094,multiview-contextual-commonsense-inference,Multiview Contextual Commonsense Inference,"Depending on the situation, multiple different reasonings are possible each leading to various unique inferences. For a given context, identifying all valid inferences require commonsense reasoning and as such, this task is called Multiview Contextual Commonsense Inference.",natural-language-processing d0c423b3-02a4-43e9-9282-5f7e8faeff3a,aggression-identification,Aggression Identification,"Develop a classifier that could make a 3-way classification in-between ‘Overtly Aggressive’, ‘Covertly Aggressive’ and ‘Non-aggressive’ text data. For this, TRAC-2 dataset of 5,000 aggression-annotated data from social media each in Bangla (in both Roman and Bangla script), Hindi (in both Roman and Devanagari script) and English for training and validation is to be used.",natural-language-processing dcc3f141-2b14-4e16-9c0b-3307681a7047,multimodal-machine-translation,Multimodal Machine Translation,"Multimodal machine translation is the task of doing machine translation with multiple data sources - for example, translating ""a bird is flying over water"" + an image of a bird over water to German text. 
( Image credit: [Findings of the Third Shared Task on Multimodal Machine Translation](https://www.aclweb.org/anthology/W18-6402.pdf) )",natural-language-processing 715ea42d-baad-4b71-8454-07a08e4e92cb,comment-generation,Comment Generation,"Article commenting poses new challenges for machines, as it involves multiple cognitive abilities: understanding the given article, formulating opinions and arguments, and organizing natural language for expression.",natural-language-processing 099f75b7-e375-48b2-aaa6-a3a137e5c36d,emergent-communications-on-relations,Emergent communications on relations,Emergent communications in the context of relations.,natural-language-processing fe291da9-5676-4f77-86a8-45b293c98073,automated-writing-evaluation,Automated Writing Evaluation,"Automated writing evaluation refers to the task of analysing and measuring written text based on features, such as syntax, text complexity and vocabulary range.",natural-language-processing 42d603eb-48da-48e6-8ac6-970b6caec12e,multi-document-summarization,Multi-Document Summarization,"**Multi-Document Summarization** is a process of representing a set of documents with a short piece of text by capturing the relevant information and filtering out the redundant information. Two prominent approaches to Multi-Document Summarization are extractive and abstractive summarization. Extractive summarization systems aim to extract salient snippets, sentences or passages from documents, while abstractive summarization systems aim to concisely paraphrase the content of the documents. Source: [Multi-Document Summarization using Distributed Bag-of-Words Model ](https://arxiv.org/abs/1710.02745)",natural-language-processing 5be0e958-5e7f-41f2-9a19-21d972efdbb6,entity-alignment,Entity Alignment,"**Entity Alignment** is the task of finding entities in two knowledge bases that refer to the same real-world object. It plays a vital role in automatically integrating multiple knowledge bases. Note: results that have incorporated machine translated entity names (introduced in the RDGCN paper) or pre-alignment name embeddings are considered to have used **extra training labels** (both are marked with ""Extra Training Data"" in the leaderboard) and do **not adhere to a comparable setting** with others that have followed the original setting of the benchmark. Source: [Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding ](https://arxiv.org/abs/1708.05045) The task of entity alignment is related to the task of [entity resolution](https://paperswithcode.com/task/entity-resolution) which focuses on matching structured entity descriptions in different contexts.",natural-language-processing 91c5efa0-a1fb-4407-aa5e-e6b602981831,visual-commonsense-tests,Visual Commonsense Tests,"Predict 5 property types (color, shape, material, size, and visual co-occurrence) for over 5000 subjects.",natural-language-processing 27e9173d-a975-4f7b-80e3-f09afd8403f8,text-anonymization,Text Anonymization,,natural-language-processing 249166e4-f02f-45a9-b90a-0fd71e7a4745,emotional-dialogue-acts,Emotional Dialogue Acts,"Associating Emotions and Dialogue Acts to find unique relationships between them such as Accept/Agree dialogue acts often occur with the Joy emotion, Apology with Sadness, or Thanking with Joy. 
First introduced in the paper EDA: Enriching Emotional Dialogue Acts using an Ensemble of Neural Annotators, LREC 2020 (https://aclanthology.org/2020.lrec-1.78/).",natural-language-processing 1dc0f673-06f4-4694-a0df-f2580cb8a85d,stereotypical-bias-analysis,Stereotypical Bias Analysis,,natural-language-processing 609bb4ea-72dd-4440-9421-879ae4fccd7d,polyphone-disambiguation,Polyphone disambiguation,A part of the TTS-front end framework which serves to predict the correct pronunciation for the input polyphone characters.,natural-language-processing 86d87f5b-8f9b-4f8b-ac07-8c77e34c443c,passage-re-ranking,Passage Re-Ranking,Passage re-ranking is the task of scoring and re-ranking a collection of retrieved documents based on an input query.,natural-language-processing 93bc0057-7973-45c4-84a1-f997454e7fb2,entity-typing,Entity Typing,"**Entity Typing** is an important task in text analysis. Assigning types (e.g., person, location, organization) to mentions of entities in documents enables effective structured analysis of unstructured text corpora. The extracted type information can be used in a wide range of ways (e.g., serving as primitives for information extraction and knowledge base (KB) completion, and assisting question answering). Traditional Entity Typing systems focus on a small set of coarse types (typically fewer than 10). Recent studies work on a much larger set of fine-grained types which form a tree-structured hierarchy (e.g., actor as a subtype of artist, and artist is a subtype of person). Source: [Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding ](https://arxiv.org/abs/1602.05307) Image Credit: [Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding ](https://arxiv.org/abs/1602.05307)",natural-language-processing 41df2f09-ea52-4d2f-8bd3-d630c5b63eef,lexical-complexity-prediction,Lexical Complexity Prediction,Predicting the complexity of a word/multi-word expression in a sentence.,natural-language-processing 0446417b-2d68-4bc2-8d1b-02045c78977f,morphological-disambiguation,Morphological Disambiguation,,natural-language-processing 9ee3d1e6-d068-4911-a2ef-d55f2516782d,passage-retrieval,Passage Retrieval,**Passage retrieval** is a specialized type of IR application that retrieves relevant passages (or pieces of text) rather than an entire ranked set of documents.,natural-language-processing f72289ae-b0d5-4df5-b5af-3dc06209fa31,conversational-search,Conversational Search,,natural-language-processing 0eb97d0f-8d51-4aeb-9f58-3f2221ea5763,data-to-text-generation,Data-to-Text Generation,"A classic problem in natural-language generation (NLG) involves taking structured data, such as a table, as input, and producing text that adequately and fluently describes this data as output. Unlike machine translation, which aims for complete transduction of the sentence to be translated, this form of NLG is usually taken to require addressing (at least) two separate challenges: what to say, the selection of an appropriate subset of the input data to discuss, and how to say it, the surface realization of a generation. 
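A minimal hand-rolled sketch of the two steps, content selection followed by template-based surface realization, on an invented record:

```
# Illustrative only: pick the fields worth mentioning (what to say) and
# realize them with a fixed template (how to say it).
record = {'team': 'Raptors', 'opponent': 'Celtics', 'points': 114,
          'opponent_points': 110, 'venue': 'home', 'attendance': 19800}

def select_content(rec):
    # keep only the fields judged worth mentioning
    return {k: rec[k] for k in ('team', 'opponent', 'points', 'opponent_points')}

def realize(selected):
    return '{team} beat {opponent} {points}-{opponent_points}.'.format(**selected)

print(realize(select_content(record)))  # Raptors beat Celtics 114-110.
```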
( Image credit: [Data-to-Text Generation with Content Selection and Planning](https://arxiv.org/pdf/1809.00582v2.pdf) )",natural-language-processing 637f8b40-0145-4263-a540-3923a6a54470,abstract-argumentation,Abstract Argumentation,Identifying argumentative statements from natural language dialogs.,natural-language-processing 4c044981-fa9a-4b62-8ee8-3c6a0f9a6e88,satire-detection,Satire Detection,Satire detection consists in detecting when a text is written in a satirical tone and its content shouldn't be interpreted literally.,natural-language-processing 4535d649-b347-4329-9819-9b454bf6fc07,joint-multilingual-sentence-representations,Joint Multilingual Sentence Representations,,natural-language-processing 6f96c446-aeaf-4e16-a27a-f41f010ad4d1,de-identification,De-identification,"De-identification is the task of detecting privacy-related entities in text, such as person names, emails and contact data.",natural-language-processing 0887a826-475d-4d1e-b403-5d97c2a5d8f6,aspect-category-opinion-sentiment-quadruple,Aspect-Category-Opinion-Sentiment Quadruple Extraction,Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction is the task with the goal to extract all aspect-category-opinion-sentiment quadruples in a review sentence. ( and provide full support for aspect-level sentiment analysis with implicit aspects and opinions if possible ),natural-language-processing 0b1fc8c9-15fa-4534-a707-f791092e805d,chinese-zero-pronoun-resolution,Chinese Zero Pronoun Resolution,Chinese zero pronoun resolution refers to the task of resolving a so-called zero segment of a Chinese text that is not written but supplies information for interpreting the text.,natural-language-processing 649a1915-a845-408e-bdb9-4cc16adf4793,extreme-summarization,Extreme Summarization,Image credit: [TLDR: Extreme Summarization of Scientific Documents](https://arxiv.org/pdf/2004.15011v3.pdf),natural-language-processing 828f0429-ae2d-459a-896b-a495c17e7aa3,text-to-video-search,Text-to-video search,,natural-language-processing 5dfb086f-6e21-4ef0-b4de-05bc445b6841,entity-disambiguation,Entity Disambiguation,"**Entity Disambiguation** is the task of linking mentions of ambiguous entities to their referent entities in a knowledge base such as Wikipedia. Source: [Leveraging Deep Neural Networks and Knowledge Graphs for Entity Disambiguation ](https://arxiv.org/abs/1504.07678)",natural-language-processing 45135484-d0f6-4627-b4b3-c899af380a83,topic-models,Topic Models,"A topic model is a type of statistical model for discovering the abstract ""topics"" that occur in a collection of documents. 
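A small sketch, assuming scikit-learn is installed, that fits latent Dirichlet allocation on a toy corpus and prints the most probable words per topic:

```
# Illustrative LDA sketch on a toy corpus (assumes scikit-learn is available).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ['the cat sat on the mat', 'dogs and cats are common pets',
        'stocks fell as markets closed', 'investors sold stocks and bonds today']

vectorizer = CountVectorizer(stop_words='english')
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [terms[j] for j in topic.argsort()[-3:][::-1]]
    print('topic', i, top_words)
```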
Topic modeling is a frequently used text-mining tool for the discovery of hidden semantic structures in a text body.",natural-language-processing 1bc91173-75ba-489f-9e6b-c97bbc6530de,multi-labeled-relation-extraction,Multi-Labeled Relation Extraction,,natural-language-processing 7b954b2d-05a7-4937-b499-4a01920ff4db,gre-reading-comprehension,GRE Reading Comprehension,,natural-language-processing ad8c0d0e-42ac-46d5-9787-273087385379,abuse-detection,Abuse Detection,"Abuse detection is the task of identifying abusive behaviors, such as hate speech, offensive language, sexism and racism, in utterances from social media platforms (Source: https://arxiv.org/abs/1802.00385).",natural-language-processing 3883f288-4518-4dff-b91f-62118f45c1b8,nested-named-entity-recognition,Nested Named Entity Recognition,"Nested named entity recognition is a subtask of information extraction that seeks to locate and classify nested named entities (i.e., hierarchically structured entities) mentioned in unstructured text (Source: Adapted from Wikipedia).",natural-language-processing 7b916f0a-a6d5-4860-8469-9b59a47c6363,parallel-corpus-mining,Parallel Corpus Mining,Mining a corpus of bilingual sentence pairs that are translations of each other.,natural-language-processing 148fe142-5866-4d37-a47a-1ce297124bbe,negation-detection,Negation Detection,Negation detection is the task of identifying negation cues in text.,natural-language-processing 51c56af0-6a51-4334-9b86-8db5ebd6fd26,relationship-extraction-distant-supervised,Relationship Extraction (Distant Supervised),"Relationship extraction is the task of extracting semantic relationships from a text. Extracted relationships usually occur between two or more entities of a certain type (e.g. Person, Organisation, Location) and fall into a number of semantic categories (e.g. married to, employed by, lives in).",natural-language-processing 07acae9b-c480-4f88-88ec-be5fc7c108cb,sequential-pattern-mining,Sequential Pattern Mining,"**Sequential Pattern Mining** is the process that discovers relevant patterns between data examples where the values are delivered in a sequence. Source: [Big Data Analytics for Large Scale Wireless Networks: Challenges and Opportunities ](https://arxiv.org/abs/1909.08069)",natural-language-processing af9c8ff6-a040-4f27-b498-87f75fdf741b,arabic-text-diacritization,Arabic Text Diacritization,Addition of diacritics for undiacritized arabic texts for words disambiguation.,natural-language-processing 7885d36f-983b-4527-898b-7705aeec7ed7,story-generation,Story Generation,"Story generation is the task of automatically generating a coherent narrative, often from a set of premises or a brief summary.",natural-language-processing c6225b32-0b1c-41ad-b849-8ff1ae34b029,target-oriented-opinion-words-extraction,target-oriented opinion words extraction,The objective of TOWE is to extract the corresponding opinion words describing or evaluating the target from the review.,natural-language-processing 76a512a6-2bce-4955-bb97-53984e1505d6,relational-captioning,Relational Captioning,,natural-language-processing dfe1dcd5-43bc-43aa-b9eb-fc462673b6e5,medical-named-entity-recognition,Medical Named Entity Recognition,,natural-language-processing 1792fe02-c0f6-4932-bef3-d444a9ce55ee,pesona-dialogue-in-story,Persona Dialogue in Story,Building persona dialogue in a story,natural-language-processing 0c0e8aee-b934-45ce-8747-c644e54e72d4,rumour-detection,Rumour Detection,"Rumor detection is the task of identifying rumors, i.e. 
statements whose veracity is not quickly or ever confirmed, in utterances on social media platforms.",natural-language-processing 9489c36b-f5d2-4f22-8688-89a14df982d9,paraphrase-identification,Paraphrase Identification,"The goal of **Paraphrase Identification** is to determine whether a pair of sentences have the same meaning. Source: [Adversarial Examples with Difficult Common Words for Paraphrase Identification ](https://arxiv.org/abs/1909.02560) Image source: [On Paraphrase Identification Corpora ](http://www.lrec-conf.org/proceedings/lrec2014/pdf/1000_Paper.pdf)",natural-language-processing ab676ee9-1485-4ff7-a39b-1fdb85616960,phrase-grounding,Phrase Grounding,"Given an image and a corresponding caption, the **Phrase Grounding** task aims to ground each entity mentioned by a noun phrase in the caption to a region in the image. Source: [Phrase Grounding by Soft-Label Chain Conditional Random Field ](https://arxiv.org/abs/1909.00301)",natural-language-processing 2e9f863b-b3d0-4ac7-94c9-5529703aa5e9,extract-aspect,Extract Aspect,"Aspect extraction is the task of identifying and extracting terms relevant for opinion mining and sentiment analysis, for example terms for product attributes or features.",natural-language-processing d2db757d-064f-45f1-986a-818c5e70de13,multi-word-expression-embedding,multi-word expression embedding,Learn embeddings for multi-word expressions,natural-language-processing c46d16f3-fb4f-4f19-a082-81802d72014a,opinion-mining,Opinion Mining,"Identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer's attitude towards a particular topic, product, etc. is positive, negative, or neutral (Source: Oxford Languages) Image Source: [Deep learning for sentiment analysis: A survey](https://onlinelibrary.wiley.com/doi/abs/10.1002/widm.1253)",natural-language-processing c18291be-0298-4b2e-89a8-932ad098881a,concept-to-text-generation,Concept-To-Text Generation,"Generating natural language text from a conceptualized representation, such as an ontology.",natural-language-processing 5c5a2c1c-1386-4d10-9304-57661735d100,conversational-response-selection,Conversational Response Selection,Conversational response selection refers to the task of identifying the most relevant response to a given input sentence from a collection of sentences.,natural-language-processing c610161c-6443-4f2d-8d74-cb73583c68fc,language-acquisition,Language Acquisition,Language acquisition refers to tasks related to the learning of a second language.,natural-language-processing 2e525d63-64a0-4fd8-ad87-37372d44c550,blocking,Blocking,"Entity resolution (also known as entity matching, record linkage, or duplicate detection) is the task of finding records that refer to the same real-world entity across different data sources (e.g., data files, books, websites, and databases). (Source: Wikipedia) **Blocking** is a crucial step in any entity resolution pipeline because a pair-wise comparison of all records across two data sources is infeasible. Blocking applies a computationally cheap method to generate a smaller set of candidate record pairs reducing the workload of the matcher. During matching a more expensive pair-wise matcher generates a final set of matching record pairs. 
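As a small illustration of the idea (not a method from the survey referenced below), a common cheap blocking key is a short prefix of an attribute value; only records that share a key become candidate pairs:

```
# Toy blocking sketch: group records by a cheap key (lower-cased name prefix)
# so that only records inside the same block are compared by the matcher.
from collections import defaultdict
from itertools import combinations

records = [
    {'id': 1, 'name': 'Jonathan Smith'},
    {'id': 2, 'name': 'Jon Smith'},
    {'id': 3, 'name': 'Maria Garcia'},
    {'id': 4, 'name': 'Mariah Garcia'},
]

blocks = defaultdict(list)
for rec in records:
    blocks[rec['name'][:3].lower()].append(rec['id'])

candidate_pairs = [pair for ids in blocks.values() for pair in combinations(ids, 2)]
print(candidate_pairs)  # [(1, 2), (3, 4)] instead of all 6 possible pairs
```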
Survey on blocking:
- [Papadakis et al.: Blocking and Filtering Techniques for Entity Resolution: A Survey](https://dl.acm.org/doi/pdf/10.1145/3377455), 2020.",natural-language-processing 43ca3f5e-14bd-4706-b318-3fd5e696cb81,transition-based-dependency-parsing,Transition-Based Dependency Parsing,,natural-language-processing c1ad8d50-c786-4651-8488-4648be813730,reading-comprehension,Reading Comprehension,"Most current question answering datasets frame the task as reading comprehension where the question is about a paragraph or document and the answer often is a span in the document. Some specific tasks of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension can be divided into four categories: **cloze style**, **multiple choice**, **span prediction**, and **free-form answer**. Read more about each category [here](https://paperswithcode.com/paper/a-survey-on-machine-reading-comprehension-1). Benchmark datasets used for testing a model's reading comprehension abilities include [MovieQA](/dataset/movieqa), [ReCoRD](/dataset/record), and [RACE](/dataset/race), among others. The Machine Reading group at UCL also provides an [overview of reading comprehension tasks](https://uclnlp.github.io/ai4exams/data.html). Figure source: [A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets](https://arxiv.org/pdf/2006.11880.pdf)",natural-language-processing 05441cc3-a6af-4505-8632-0fb94346247f,information-retrieval,Information Retrieval,"Information retrieval is the task of ranking a list of documents or search results in response to a query ( Image credit: [sudhanshumittal](https://github.com/sudhanshumittal/Information-retrieval-system) )",natural-language-processing bf5f7256-eede-4d03-a5ee-2f740bdef3be,spanish-text-diacritization,Spanish Text Diacritization,Addition of diacritics for undiacritized Spanish Wikipedia texts.,natural-language-processing 9c43343d-1f84-4a51-b8f4-1857ae087201,emotion-cause-pair-extraction,Emotion-Cause Pair Extraction,,natural-language-processing 39931d6d-e89f-429e-8fda-2a064fdec874,workflow-discovery,Workflow Discovery,"Workflow Discovery (WD) was introduced by [Workflow Discovery from Dialogues in the Low Data Regime](https://openreview.net/forum?id=L9othQvPks). WD aims to extract workflows that have either implicitly or explicitly guided task-oriented dialogues between two people. Workflows extracted from a conversation consist of a summary of the key actions taken during the dialogue. 
These workflows consist of pre-defined terms for actions and slots when possible, but our approach also allows actions that are not known in advance to be invented by the model online and used as new steps in the generated workflow.",natural-language-processing 77d8f06a-1ec6-456d-ab73-a749d1c2a6a8,clickbait-detection,Clickbait Detection,"Clickbait detection is the task of identifying clickbait, a form of false advertisement, that uses hyperlink text or a thumbnail link that is designed to attract attention and to entice users to follow that link and read, view, or listen to the linked piece of online content, with a defining characteristic of being deceptive, typically sensationalized or misleading (Source: Adapted from Wikipedia)",natural-language-processing 43034535-7bca-4ac4-95ca-43712783ed35,column-type-annotation,Column Type Annotation,"**Column type annotation** (CTA) refers to the task of predicting the semantic type of a table column and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). The labels that are usually used in a CTA problem are semantic types from vocabularies like DBpedia, Schema.org or WikiData. Some examples are: *Book*, *Country*, *LocalBusiness* etc. CTA can be treated either as a multi-class classification problem where a column is annotated by only one semantic type or as a multi-label classification problem where a column can be annotated using multiple semantic types.",natural-language-processing 39070cbd-ac5c-4780-8b5c-0e77d6a59d44,zero-shot-machine-translation,Zero-Shot Machine Translation,Translate text or speech from one language to another without supervision.,natural-language-processing 04489499-0780-415f-8143-c38f936f52b9,dialogue-rewriting,Dialogue Rewriting,,natural-language-processing 8dc2240e-32a1-4fba-9215-d80f3255f422,contextualized-literature-based-discovery,Contextualized Literature-based Discovery,"Given a seed term (e.g., a task or method in NLP, or a disease in biomedicine) and corresponding background (e.g., challenges for a given task), the model's aim is to generate idea suggestions. Contextualized Literature-Based Discovery (C-LBD) has two different formulations: idea sentence generation and idea node prediction.",natural-language-processing 788b9f31-9bf3-48d5-803d-4f1ed2d49213,fine-grained-opinion-analysis,Fine-Grained Opinion Analysis,"Fine-Grained Opinion Analysis aims to: (i) detect opinion expressions that convey attitudes such as sentiments, agreements, beliefs, or intentions, (ii) measure their intensity, (iii) identify their holders i.e. entities that express an attitude, (iv) identify their targets i.e. entities or propositions at which the attitude is directed, and (v) classify their target-dependent attitude. ( Image credit: [SRL4ORL](https://arxiv.org/pdf/1711.00768v3.pdf) )",natural-language-processing e2c62be8-8e0a-419e-b81a-7e8900cab5e9,multi-hop-reading-comprehension,Multi-Hop Reading Comprehension,,natural-language-processing 1fca7a8d-44f5-4835-ac2d-3fdc04a4a2eb,scientific-article-summarization,Scientific Document Summarization,,natural-language-processing 5144de2b-0244-4382-a613-9942bf421d5f,propaganda-technique-identification,Propaganda technique identification,,natural-language-processing 84cc5374-c11b-4161-9280-0d9a177be8a0,text-categorization,Text Categorization,"**Text Categorization** is the task of automatically assigning pre-defined categories to documents written in natural languages. 
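A compact sketch, assuming scikit-learn is installed, of a bag-of-words categorizer trained on a handful of invented examples:

```
# Illustrative sketch: TF-IDF features plus a linear classifier (scikit-learn assumed).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_docs = ['the team won the final match', 'parliament passed the new budget',
              'the striker scored twice', 'the senate debated the tax bill']
train_labels = ['sports', 'politics', 'sports', 'politics']

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)
print(model.predict(['the team scored in the final']))  # ['sports']
```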
Several types of Text Categorization have been studied, each of which deals with different types of documents and categories, such as topic categorization to detect discussed topics (e.g., sports, politics), spam detection, and sentiment classification to determine the sentiment typically in product or movie reviews. Source: [Effective Use of Word Order for Text Categorization with Convolutional Neural Networks ](https://arxiv.org/abs/1412.1058)",natural-language-processing f05c447f-57cb-4b44-89ae-cb1b84d3f746,long-range-modeling,Long-range modeling,"A new task for testing the long-sequence modeling capabilities and efficiency of language models. Image credit: [SCROLLS: Standardized CompaRison Over Long Language Sequences](https://arxiv.org/pdf/2201.03533v1.pdf)",natural-language-processing baa280bd-c23b-441e-98a6-d3f6a9cbbf3b,extract-aspect-polarity-tuple,Extract aspect-polarity tuple,,natural-language-processing c9ad3f62-4ac1-4d94-93ba-c163ed79da67,face-selection,Face Selection,A task where an agent should select at most two sentences from the paper as argumentative facts.,natural-language-processing 3112c4a9-f88b-46ae-9b91-35094aadbbc0,event-relation-extraction,Event Relation Extraction,"To extract relations among events, such as event coreference, temporal, causal and subevent relations.",natural-language-processing 4a93cce2-077b-497c-8c99-ac8f91b0b55f,counterspeech-detection,Counterspeech Detection,"Counter-speech detection is the task of detecting counter-speech, i.e., a crowd-sourced response that argues, disagrees, or presents an opposing view to extremism or hateful content on social media platforms (Source: Adapted from: https://icsr.info/wp-content/uploads/2018/03/ICSR-Report-Challenging-Hate-Counter-speech-Practices-in-Europe.pdf)",natural-language-processing e1439fae-b9c7-487b-bbd1-4873f61ae719,semantic-parsing,Semantic Parsing,"**Semantic Parsing** is the task of transducing natural language utterances into formal meaning representations. The target meaning representations can be defined according to a wide variety of formalisms. This include linguistically-motivated semantic representations that are designed to capture the meaning of any sentence such as λ-calculus or the abstract meaning representations. Alternatively, for more task-driven approaches to Semantic Parsing, it is common for meaning representations to represent executable programs such as SQL queries, robotic commands, smart phone instructions, and even general-purpose programming languages like Python and Java. Source: [Tranx: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation ](https://arxiv.org/abs/1810.02720)",natural-language-processing deb4172c-1f15-49b1-a2d9-6f599d218797,named-entity-recognition-in-vietnamese,Named Entity Recognition In Vietnamese,,natural-language-processing 2763cfad-092e-4c9a-9fd6-be376ecb395c,table-type-detection,Table Type Detection,"Table Type Detection is the task of predicting the semantic type of a table and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). The labels used for annotation in this task are types from vocabularies like DBpedia, Schema.org etc. like *Music* or *Hotel*. The semantic type of a table can indicate the content of a table. 
For example, if the semantic type of a table is *Music* it indicates that the table consists of music records/entities.",natural-language-processing 392612be-c74e-49af-92ee-8ddc7f6f37e5,controllable-language-modelling,Controllable Language Modelling,"Given unsupervised Language Modeling as a pretraining task, the objective is to generate texts under particular control attributes (Topic, Sentiment)",natural-language-processing 6c48e097-1e43-40f2-9dae-68c2fe60372f,weakly-supervised-data-denoising,Weakly Supervised Data Denoising,,natural-language-processing d36eee43-3328-42b0-b18d-558f65c20022,simultaneous-speech-to-text-translation,Simultaneous Speech-to-Text Translation,"Simultaneous Speech-to-Text translation aims to translate concurrently with the source speech. It is crucial since it enables real-time interpretation of conversations, lectures and talks.",natural-language-processing 5e8d0283-7c7a-4cd8-9eb5-110bbd342d40,protein-folding,Protein Folding,,natural-language-processing f337772d-6652-43fc-b993-dbd12e146f1e,reliable-intelligence-identification,Reliable Intelligence Identification,,natural-language-processing 4136b1a3-19fe-432e-9225-bcf367ebbb73,sstod,SSTOD,,natural-language-processing de395801-b80e-49bf-8b30-6d1f3ac2b9c5,intent-discovery,Intent Discovery,"Given a set of labelled and unlabelled utterances, the idea is to identify existing (known) intents and potential (new) intents. This method can be utilised in a conversational system setting.",natural-language-processing 64752a47-9b97-4093-aefb-1e0577b1f552,low-resource-neural-machine-translation,Low-Resource Neural Machine Translation,Low-resource machine translation is the task of machine translation on a low-resource language where large data may not be available.,natural-language-processing b841e74e-ab0c-4c6c-a589-e34ec7543607,cell-entity-annotation,Cell Entity Annotation,"**Cell Entity Annotation** (CEA) is the task of annotating cells in a table with an entity from a knowledge base and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). CEA problem labels are entities from knowledge bases such as DBpedia or WikiData. It is usually considered a multi-class classification problem. CEA can also be referred to in different works as the problem of entity linking, as it links a cell in a table to an entity.",natural-language-processing a21ab086-9d4b-46ad-9276-297fc6c94a16,semantic-similarity,Semantic Similarity,"The main objective of **Semantic Similarity** is to measure the distance between the semantic meanings of a pair of words, phrases, sentences, or documents. For example, the word “car” is more similar to “bus” than it is to “cat”. The two main approaches to measuring Semantic Similarity are knowledge-based approaches and corpus-based, distributional methods. Source: [Visual and Semantic Knowledge Transfer for Large Scale Semi-supervised Object Detection ](https://arxiv.org/abs/1801.03145)",natural-language-processing 21a24736-7cde-4a58-9191-a1da5e14996f,propaganda-detection,Propaganda detection,,natural-language-processing fff529c2-0ae2-4cdd-a811-afdcc4cb5844,pretrained-multilingual-language-models,Pretrained Multilingual Language Models,,natural-language-processing 62e03ea3-d4d8-4bf1-b648-8e30b2a65b00,coreference-resolution,Coreference Resolution,"Coreference resolution is the task of clustering mentions in text that refer to the same underlying real world entities. Example:

```
              +-----------+
              |           |
I voted for Obama because he was most aligned with my values"", she said.
|                                                 |            |
+-------------------------------------------------+------------+
```

""I"", ""my"", and ""she"" belong to the same cluster and ""Obama"" and ""he"" belong to the same cluster.",natural-language-processing 45c4d090-33a5-4509-9b45-dba3f2aae298,emotion-recognition-in-context,Emotion Recognition in Context,,natural-language-processing 5b432801-afb6-445c-a9cd-ab6be418bfdf,persian-sentiment-anlysis,Persian Sentiment Analysis,Persian sentiment analysis is the task of classifying the polarity of a given text.,natural-language-processing 9747a76f-2c42-4148-b4fa-b7dce45089b5,variable-detection,Variable Detection,Identifying whether a sentence contains a variable mention.,natural-language-processing 63413f69-1429-499e-bba6-2566eb670661,language-identification,Language Identification,Language identification is the task of determining the language of a text.,natural-language-processing 60a44700-0ab4-4c9d-ae55-c33f8e97f5a1,collaborative-plan-acquisition,Collaborative Plan Acquisition,,natural-language-processing 08ed1d91-9fbb-4adf-8e0b-c1b4607cf7d2,zero-shot-text-to-image-generation,Zero-Shot Text-to-Image Generation,Image credit: [GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models](https://paperswithcode.com/paper/glide-towards-photorealistic-image-generation),natural-language-processing 7e6d544b-0512-4ef1-834a-c5874cf5dfa0,citation-intent-classification,Citation Intent Classification,Identifying the reason why an author cited another author.,natural-language-processing 8b3d9ecd-9c9b-4b07-9c71-35c6e15423c9,sentiment-dependency-learning,Sentiment Dependency Learning,,natural-language-processing 987a1bce-305c-451b-875a-558327b151ed,few-shot-htc-1,Few-shot HTC,,natural-language-processing 0fe1b474-308b-47be-91b5-f13ce8e0ee98,privacy-preserving-deep-learning,Privacy Preserving Deep Learning,"The goal of privacy-preserving (deep) learning is to train a model while preserving privacy of the training dataset. Typically, it is understood that the trained model should be privacy-preserving (e.g., due to the training algorithm being differentially private).",natural-language-processing cbc42c7b-be21-4f89-91c3-6ef9f1a7524d,model-editing,Model Editing,,natural-language-processing 34927e71-1854-4adb-80b1-65f2cc35c613,semi-supervised-formality-style-transfer,Semi-Supervised Formality Style Transfer,Semi-Supervised Formality Style Transfer,natural-language-processing beb8c50e-2129-497a-9955-a16a33c9cd39,lexical-analysis,Lexical Analysis,Lexical analysis is the process of converting a sequence of characters into a sequence of tokens (strings with an assigned and thus identified meaning). (Source: Adapted from Wikipedia),natural-language-processing 3a83a554-3628-4ff8-a201-422e2915e732,multimodal-deep-learning,Multimodal Deep Learning,"**Multimodal deep learning** is a type of deep learning that combines information from multiple modalities, such as text, image, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that includes multiple types of information and using the network to make predictions based on this combined data. One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done using a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight the contribution of each modality based on its importance for the task at hand. 
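A minimal late-fusion sketch, assuming PyTorch is available, in which a text feature vector and an image feature vector are encoded, concatenated and passed to a small classifier head (all dimensions are invented):

```
# Illustrative feature-level fusion: encode each modality, concatenate, classify.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim=300, image_dim=512, hidden=128, n_classes=4):
        super().__init__()
        self.text_enc = nn.Linear(text_dim, hidden)
        self.image_enc = nn.Linear(image_dim, hidden)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, n_classes))

    def forward(self, text_feats, image_feats):
        fused = torch.cat([self.text_enc(text_feats), self.image_enc(image_feats)], dim=-1)
        return self.head(fused)

model = FusionClassifier()
logits = model(torch.randn(8, 300), torch.randn(8, 512))
print(logits.shape)  # torch.Size([8, 4])
```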
Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.",natural-language-processing 63422ae8-9d08-4259-a0b6-8efd054fd8c0,dialogue-act-classification,Dialogue Act Classification,"Dialogue act classification is the task of classifying an utterance with respect to the function it serves in a dialogue, i.e. the act the speaker is performing. Dialogue acts are a type of speech acts (for Speech Act Theory, see [Austin (1975)](http://www.hup.harvard.edu/catalog.php?isbn=9780674411524) and [Searle (1969)](https://www.cambridge.org/core/books/speech-acts/D2D7B03E472C8A390ED60B86E08640E7)).",natural-language-processing 90b573e3-c851-4281-b3bb-9806001a8f09,few-shot-relation-classification,Few-Shot Relation Classification,"**Few-Shot Relation Classification** is a particular relation classification task under minimum annotated data, where a model is required to classify a new incoming query instance given only few support instances (e.g., 1 or 5) during testing. Source: [MICK: A Meta-Learning Framework for Few-shot Relation Classification with Little Training Data ](https://arxiv.org/abs/2004.14164)",natural-language-processing 0f7f39e1-f2d7-4ee5-a6a2-b4dd22705d56,timeline-summarization,Timeline Summarization,Identifying key dates of major events and providing short descriptions of what happened on these dates. (Source: https://www.aclweb.org/anthology/D19-5403/),natural-language-processing 37c734bc-9887-4487-9670-7ad3233f4ec8,poll-generation,Poll Generation,,natural-language-processing 99a05f14-5e82-42b0-b8b5-4e88f618f9c8,interactive-evaluation-of-dialog,Interactive Evaluation of Dialog,Task that involve building/adapting conversation models to work effectively in an interactive setting.,natural-language-processing 7ba432a0-2aa5-45df-9124-9b8f2efae959,scientific-results-extraction,Scientific Results Extraction,"Scientific results extraction is the task of extracting relevant result information (e.g., in the case of Machine learning performance results: task, dataset, metric name, metric value) from the scientific literature.",natural-language-processing 9b2e5979-80da-41da-a7f9-ab15cfa81c1f,hate-span-identification,Hate Span Identification,,natural-language-processing 2060d75a-e04f-4428-9871-0c2945f024ee,coding-problem-tagging,Coding Problem Tagging,Assigning data structures to coding problems,natural-language-processing c2c2d469-94dd-4b56-8b13-758ae91c0d29,spam-detection,Spam detection,,natural-language-processing fd4c3e83-4ade-46c6-ab44-bb4213441b63,code-documentation-generation,Code Documentation Generation,"Code Documentation Generation is a supervised task where a code function is the input to the model, and the model generates the documentation for this function. Description from: [CodeTrans: Towards Cracking the Language of Silicone's Code Through Self-Supervised Deep Learning and High Performance Computing](https://arxiv.org/pdf/2104.02443.pdf)",natural-language-processing 8ec179f7-2c62-4039-82ad-9595ca24e2a4,hate-intensity-prediction,Hate Intensity Prediction,,natural-language-processing f062659d-1972-4a04-91e3-071e7e7dd9c3,pgtask,Profile Generation,"**Profile Generation** is the task of creating a profile for an individual or entity based on information about them. 
The goal of profile generation is to gather as much relevant information about the individual or entity as possible, in order to build an accurate and comprehensive profile.",natural-language-processing 18f2b56d-165c-4ca2-9b86-da4575980419,riddle-sense,Riddle Sense,,natural-language-processing ed546934-1805-4336-9eff-71ae61efd9c0,natural-language-understanding,Natural Language Understanding,"**Natural Language Understanding** is an important field of Natural Language Processing which contains various tasks such as text classification, natural language inference and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning. Source: [Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test? ](https://arxiv.org/abs/1812.05411)",natural-language-processing 70d5f289-cba2-4beb-bd88-1f0db01ab936,vietnamese-datasets,Vietnamese Datasets,,natural-language-processing 62c00b74-41ad-44f0-bfa4-635c28086df9,multi-agent-integration,Multi-agent Integration,,natural-language-processing ba791faf-3655-49a9-9828-e2f7f7d50ae0,latvian-text-diacritization,Latvian Text Diacritization,Addition of diacritics for undiacritized Latvian Wikipedia texts.,natural-language-processing ab827f20-f527-4aae-8f70-5bab1ea6a5c9,temporal-information-extraction,Temporal Information Extraction,"Temporal information extraction is the identification of chunks/tokens corresponding to temporal intervals, and the extraction and determination of the temporal relations between those. The entities extracted may be temporal expressions (timexes), eventualities (events), or auxiliary signals that support the interpretation of an entity or relation. Relations may be temporal links (tlinks), describing the order of events and times, or subordinate links (slinks) describing modality and other subordinative activity, or aspectual links (alinks) around the various influences aspectuality has on event structure. The markup scheme used for temporal information extraction is well-described in the ISO-TimeML standard, and also on [www.timeml.org](http://www.timeml.org). ``` PRI20001020.2000.0127 NEWS STORY 10/20/2000 20:02:07.85 The Navy has changed its account of the attack on the USS Cole in Yemen. Officials now say the ship was hit nearly two hours after it had docked. Initially the Navy said the explosion occurred while several boats were helping the ship to tie up. The change raises new questions about how the attackers were able to get past the Navy security. 
10/20/2000 20:02:28.05 ``` To avoid leaking knowledge about temporal structure, train, dev and test splits must be made at document level for temporal information extraction.",natural-language-processing 9655d4b1-23c5-41ff-a9fe-35dbce74c38c,cross-lingual-transfer,Cross-Lingual Transfer,"Cross-lingual transfer refers to transfer learning using data and models available for one language for which ample such resources are available (e.g., English) to solve tasks in another, commonly more low-resource, language.",natural-language-processing 2f25f913-4578-45c8-bac4-2b73dba8a000,reader-aware-summarization,Reader-Aware Summarization,Using reader comments to improve summarization performance.,natural-language-processing bc42619f-c49d-46d8-a5b9-9e10a766a9fa,news-generation,News Generation,Generation of larger segments of text with consistent topic and evolving story.,natural-language-processing f3c086b2-4d2d-49bd-89dc-4ed4bbee08e5,pico,PICO,"The proliferation of healthcare data has contributed to the widespread usage of the PICO paradigm for creating specific clinical questions from RCT. PICO is a mnemonic that stands for: Population/Problem: Addresses the characteristics of populations involved and the specific characteristics of the disease or disorder. Intervention: Addresses the primary intervention (including treatments, procedures, or diagnostic tests) along with any risk factors. Comparison: Compares the efficacy of any new interventions with the primary intervention. Outcome: Measures the results of the intervention, including improvements or side effects. PICO is an essential tool that aids evidence-based practitioners in creating precise clinical questions and searchable keywords to address those issues. It calls for a high level of technical competence and medical domain knowledge, but it’s also frequently very time-consuming. Automatically identifying PICO elements from this large sea of data can be made easier with the aid of machine learning (ML) and natural language processing (NLP). This facilitates the development of precise research questions by evidence-based practitioners more quickly and precisely. Empirical studies have shown that the use of PICO frames improves the specificity and conceptual clarity of clinical problems, elicits more information during pre-search reference interviews, leads to more complex search strategies, and yields more precise search results.",natural-language-processing 89de814c-2347-447d-bcb1-1628b6559153,morphological-inflection,Morphological Inflection,"**Morphological Inflection** is the task of generating a target (inflected form) word from a source word (base form), given a morphological attribute, e.g. number, tense, and person etc. It is useful for alleviating data sparsity issues in translating morphologically rich languages. The transformation from a base form to an inflected form usually includes concatenating the base form with a prefix or a suffix and substituting some characters. For example, the inflected form of a Finnish stem eläkeikä (retirement age) is eläkeiittä when the case is abessive and the number is plural. 
Source: [Tackling Sequence to Sequence Mapping Problems with Neural Networks ](https://arxiv.org/abs/1810.10802)",natural-language-processing f478a9bd-664b-4517-835c-d8ecf26409d3,cross-lingual-ner,Cross-Lingual NER,,natural-language-processing 3f9fa981-ab99-4dea-a01a-5394ba1be259,literature-mining,Literature Mining,The task where the publication texts are used to mine knowledge using NLP,natural-language-processing d0b2bbd8-6f4f-4458-bba7-b47da2fa9ab3,question-generation,Question Generation,"The goal of **Question Generation** is to generate a valid and fluent question according to a given passage and the target answer. Question Generation can be used in many scenarios, such as automatic tutoring systems, improving the performance of Question Answering models and enabling chatbots to lead a conversation. Source: [Generating Highly Relevant Questions ](https://arxiv.org/abs/1910.03401)",natural-language-processing 20e06022-bc1c-4320-a75a-6529b245cf64,grounded-open-vocabulary-acquisition,Grounded Open Vocabulary Acquisition,,natural-language-processing 3927fbd3-5fef-4d8e-9a3b-39898eadbd6a,entity-extraction,Entity Extraction using GAN,,natural-language-processing fa4decaa-18d1-41eb-8ba2-79ce821714cd,emotion-cause-extraction,Emotion Cause Extraction,,natural-language-processing cf967559-2ec3-4bc6-99ca-88c6968a76fa,handwritten-chinese-text-recognition,Handwritten Chinese Text Recognition,"Handwritten Chinese text recognition is the task of interpreting handwritten Chinese input, e.g., from images of documents or scans.",natural-language-processing 1d2603b6-f6a5-4e80-854f-97a4ca6fe51a,coherence-evaluation,Coherence Evaluation,Evaluating the overall coherence of text as measured by its readability and flow through ideas.,natural-language-processing c2a88a12-98b4-4677-abd7-1d123a59c67c,bangla-spelling-error-correction,Bangla Spelling Error Correction,Bangla spell checker which improves the quality of suggestions for misspelled words.,natural-language-processing 0f49d67b-8b32-4293-8ae9-0fcbf4e5b233,fact-based-text-editing,Fact-based Text Editing,"Fact-based Text Editing aims to revise a given document to better describe the facts in a knowledge base (e.g., several triples).",natural-language-processing 8e6f8fd5-c053-4ea1-a8be-259adabbc732,question-answer-categorization,Question-Answer categorization,,natural-language-processing bf9c7ae3-7a2c-4cca-a7ad-be6e7e2a8238,meeting-summarization,Meeting Summarization,"Generating a summary from meeting transcriptions. A survey for this task: [Abstractive Meeting Summarization: A Survey](https://paperswithcode.com/paper/abstractive-meeting-summarization-a-survey)",natural-language-processing d3f53af2-896d-4cb8-abe7-0219db855f1c,linguistic-acceptability,Linguistic Acceptability,"Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical. 
Image Source: [Warstadt et al](https://arxiv.org/pdf/1901.03438v4.pdf)",natural-language-processing b986328f-a710-40e5-9399-b46d7bb8120b,pronunciation-dictionary-creation,Pronunciation Dictionary Creation,Create a pronunciation dictionary,natural-language-processing 950bcf1f-e9bb-4641-8dfa-f413bd93a3e8,extractive-tags-summarization,Extractive Tags Summarization,"The goal of the Extractive Tags Summarization (ETS) task is to shorten the list of tags corresponding to a digital image while keeping the representativity; i.e., to extract important tags from the context lying in an image and its corresponding tags.",natural-language-processing 32e3375d-9d8b-49a3-9112-a1b2544c2b3b,propaganda-span-identification,Propaganda span identification,,natural-language-processing 129260d2-1984-429d-ac92-ed920bb5f9f2,emotion-detection-and-trigger-summarization,Emotion Detection and Trigger Summarization,,natural-language-processing bbd4935a-cf9f-4b73-8771-cb2e69446b1b,dialog-learning,Dialog Learning,,natural-language-processing 55b4e8c8-0c20-4ce9-a8a9-f686c3f569bc,chinese-named-entity-recognition,Chinese Named Entity Recognition,"Chinese named entity recognition is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. from Chinese text (Source: Adapted from Wikipedia).",natural-language-processing 4cfccbb4-294e-444a-ba34-e5a0766f692b,disambiguation-q,Disambiguation QA,,natural-language-processing 3f65969a-9e51-45eb-ac48-a2a187478570,implicit-discourse-relation-classification,Implicit Discourse Relation Classification,"Parsing a text into a set of discourse relations between two adjacent or non-adjacent discourse units in the absence of explicit connectives, such as 'but' or 'however', and classifying those relations. (Source: Adapted from https://www.cs.brandeis.edu/~clp/conll15st/intro.html)",natural-language-processing e5e06185-2e4a-44db-8401-74703f16e181,hidden-aspect-detection,Hidden Aspect Detection,,natural-language-processing 10729ed1-b4ef-40b2-bc09-6f90ccafd7f6,speculation-scope-resolution,Speculation Scope Resolution,Identify the scope of a speculation cue that indicates uncertainty in a given text.,natural-language-processing 12886b40-975d-4b81-88f9-b3dc36806bd5,self-learning,Self-Learning,,natural-language-processing 23bfe7a1-6d41-4921-b39f-38893a3c64e7,deep-clustering,Deep Clustering,,natural-language-processing 14e10cfc-e7e7-45fe-bebe-c925501ab907,open-relation-modeling,Open Relation Modeling,,natural-language-processing c77279be-651f-43bd-b7c8-2ab64df6db44,question-to-declarative-sentence,Question to Declarative Sentence,"Question Answer to Declarative Sentence (QA2D) is the task of generating declarative statements from question, answer pairs. See: Demszky, D., Guu, K., & Liang, P. (2018). Transforming Question Answering Datasets Into Natural Language Inference Datasets. arXiv preprint. arXiv:1809.02922",natural-language-processing 8e9879e8-81c4-4fcb-b4d4-518bbaaf96d4,xlm-r,XLM-R,XLM-R,natural-language-processing 32018815-bf7c-4a11-b28a-9ed35db3ed7f,multimodal-generation,multimodal generation,"**Multimodal generation** refers to the process of generating outputs that incorporate multiple modalities, such as images, text, and sound.
This can be done using deep learning models that are trained on data that includes multiple modalities, allowing the models to generate output that is informed by more than one type of data. For example, a multimodal generation model could be trained to generate captions for images that incorporate both text and visual information. The model could learn to identify objects in the image and generate descriptions of them in natural language, while also taking into account contextual information and the relationships between the objects in the image. Multimodal generation can also be used in other applications, such as generating realistic images from textual descriptions or generating audio descriptions of video content. By combining multiple modalities in this way, multimodal generation models can produce more accurate and comprehensive output, making them useful for a wide range of applications.",natural-language-processing 3ac9b994-e619-47da-8475-c93bf4bdcd2d,multi-grained-named-entity-recognition,Multi-Grained Named Entity Recognition,"Multi-Grained Named Entity Recognition aims to detect and recognize entities on multiple granularities, without explicitly assuming non-overlapping or totally nested structures.",natural-language-processing b4912f81-93e2-4144-b57f-09bfc4fa2e9f,subjectivity-analysis,Subjectivity Analysis,A related task to sentiment analysis is the subjectivity analysis with the goal of labeling an opinion as either subjective or objective.,natural-language-processing 002747de-c1cc-4e48-9a6b-ebe13d00ada2,humor-detection,Humor Detection,Humor detection is the task of identifying comical or amusing elements.,natural-language-processing 86332bf3-b8b6-4114-8cce-b52073554bb1,explanation-generation,Explanation Generation,,natural-language-processing 24faeb67-e554-4e33-84a4-e31bd35142d8,ruin-names,Ruin Names,,natural-language-processing 6580227c-efb2-4579-b6d2-38b9e6b6d434,kg-to-text,KG-to-Text Generation,"Knowledge-graph-to-text (KG-to-text) generation aims to generate high-quality texts which are consistent with input graphs. Description from: [JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs](https://arxiv.org/pdf/2106.10502v1.pdf)",natural-language-processing ee4575d6-ea21-4b82-9eec-8f5d861440ff,intent-detection,Intent Detection,"**Intent Detection** is a vital component of any task-oriented conversational system. In order to understand the user’s current goal, the system must leverage its intent detector to classify the user’s utterance (provided in varied natural language) into one of several predefined classes, that is, intents. However, the performance of intent detection has been hindered by the data scarcity issue, as it is non-trivial to collect sufficient examples for new intents. How to effectively identify user intents in few-shot learning has become popular. Source: [Few-shot Intent Detection Datasets, Baselines and Results ](https://github.com/jianguoz/Few-Shot-Intent-Detection) Source: [Are Pretrained Transformers Robust in Intent Classification? 
A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection ](https://github.com/jianguoz/Few-Shot-Intent-Detection) Source: [Efficient Intent Detection with Dual Sentence Encoders](https://github.com/PolyAI-LDN/polyai-models)",natural-language-processing aa074182-a11b-4e61-93ac-50151e1fa39c,understanding-fables,Understanding Fables,,natural-language-processing 4334cc8b-f617-4d5a-9d86-b7426174e36b,unsupervised-part-of-speech-tagging,Unsupervised Part-Of-Speech Tagging,Marking up a word in a text (corpus) as corresponding to a particular part of speech based on both its definition and its context by using an untagged corpus for training and producing the tagset by induction (Source: Wikipedia).,natural-language-processing c36ac1f5-3607-4709-b101-208bde1a40a2,semantic-role-labeling-predicted-predicates,Semantic Role Labeling (predicted predicates),PropBank semantic role labeling with predicted predicates.,natural-language-processing 2c8de2c7-ab0d-477d-a1cb-72a9b4c835fb,semi-supervised-text-classification-1,Semi-Supervised Text Classification,,natural-language-processing fc88acee-24e4-46cb-82c3-300ec704f884,text-annotation,text annotation,,natural-language-processing 583af7c6-0bbd-4268-bbf8-6c44b1140766,occupation-prediction,Occupation prediction,,natural-language-processing 5ae37efe-f5e6-49f5-85af-2a368a786730,short-text-clustering,Short Text Clustering,,natural-language-processing c4aa1e39-85fa-4887-9688-bf9da79ef6c1,extractive-summarization,Extractive Summarization,,natural-language-processing 5af1e2ee-3dc0-47b9-9795-4cc9558a5138,speculation-detection,Speculation Detection,Identifying information in text that is speculative as opposed to factual information.,natural-language-processing e80887cf-87a2-49fe-b1ee-25d8f92be9e8,summarization,Summarization,"Summarization is the task of producing a shorter version of one or several documents that preserves most of the input's meaning.",natural-language-processing 3b74a394-c838-4936-8497-8f115154f95c,image-sentence-alignment,image-sentence alignment,Predict the alignment (score) between an image and a sentence.,natural-language-processing b8f1dd61-103c-4c72-b07d-e0e3f14729f0,text-compression,Text Compression,,natural-language-processing 17694f4e-ba1a-4e08-95f2-e040bfe2620e,visual-dialogue,Visual Dialog,"Visual Dialog requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. 
Specifically, given an image, a dialog history, and a follow-up question about the image, the task is to answer the question.",natural-language-processing 2a8fbb37-687a-4f46-900b-463fadf8381f,chinese,Chinese,Chinese language processing is the task of applying natural language processing to the Chinese language.,natural-language-processing 286af030-b1ce-4886-95df-3fbf81d47a97,grounded-language-learning,Grounded language learning,Acquire the meaning of language in situated environments.,natural-language-processing 98414356-3224-4700-8bd1-743556cf23a6,czech-text-diacritization,Czech Text Diacritization,Addition of diacritics for undiacritized Czech Wikipedia texts.,natural-language-processing 199106c3-953f-4851-b3a8-c0d550a72011,question-rewriting,Question Rewriting,,natural-language-processing be069e97-78ce-4016-8642-57383f0e5060,legal-outcome-extraction,legal outcome extraction,,natural-language-processing eb965956-34e4-40ea-8418-0a3589f5c6cb,scientific-concept-extraction,Scientific Concept Extraction,Identification of scientific concepts in research articles.,natural-language-processing 0d3c09a7-8493-4556-b2ec-158a87c71973,csc,Chinese Spelling Error Correction,,natural-language-processing ad940b52-33cf-456b-98b2-afcf513a8add,temporal-tagging,Temporal Tagging,"Identification of the extent of a temporal expression (timex) in a text. The temporal expressions can be explicit (e.g. ""October, 27 "") or implicit (e.g. ""last month""). Other names: Timex Extraction; Timex Identification; Timex Detection",natural-language-processing 3a1eeebd-d8d6-40c9-b1fe-4a9869b93292,ccg-supertagging,CCG Supertagging,"Combinatory Categorical Grammar (CCG; [Steedman, 2000](http://www.citeulike.org/group/14833/article/8971002)) is a highly lexicalized formalism. The standard parsing model of [Clark and Curran (2007)](https://www.mitpressjournals.org/doi/abs/10.1162/coli.2007.33.4.493) uses over 400 lexical categories (or _supertags_), compared to about 50 part-of-speech tags for typical parsers. Example: | Vinken | , | 61 | years | old | | --- | ---| --- | --- | --- | | N| , | N/N | N | (S[adj]\ NP)\ NP |",natural-language-processing 40ad56a0-4dfe-4a01-83a6-41ab49bc7aac,meme-captioning,Meme Captioning,Automatic generation of natural language descriptions of the content of an input meme.,natural-language-processing c183d29d-29d6-47c1-8938-0a1de125ebcc,story-completion,Story Completion,"Given a story prefix and two possible endings, determining which one is the correct (coherent) ending of the story.",natural-language-processing 2e1b4225-991b-4e18-b82b-4da6ed8ecc18,movie-dialog-same-or-different,Movie Dialog Same Or Different,,natural-language-processing 1b70c1aa-6562-4379-bbbb-e2faf1df4602,discourse-marker-prediction,Discourse Marker Prediction,,natural-language-processing 252dcf14-243a-4f80-bc10-15ce6b8cdc2e,selection-bias,Selection bias,,natural-language-processing f5cf3eae-5e31-48d6-995f-13659d80389b,logical-reasoning-reading-comprehension,Logical Reasoning Reading Comprehension,"Logical reasoning reading comprehension is a task proposed by the paper ReClor (ICLR 2020), which is to evaluate the logical reasoning ability of machine reading comprehension models. ReClor is the first dataset for logical reasoning reading comprehension.",natural-language-processing eef099d8-c22e-4586-b865-ebaadcdd06c1,chatbot,Chatbot,"**Chatbot** or conversational AI is a language model designed and implemented to have conversations with humans. 
Source: [Open Data Chatbot ](https://arxiv.org/abs/1909.03653) [Image source](https://arxiv.org/pdf/2006.16779v3.pdf)",natural-language-processing 932bfcd7-efa8-4a9d-ae39-146c5c9d2215,continual-relation-extraction,Continual Relation Extraction,"Compared with traditional relation extraction, CRE aims to help the model learn new relations while maintaining accurate classification of old ones.",natural-language-processing f10acb4b-99c1-45fa-a297-c28b31a176d3,continual-named-entity-recognition,Continual Named Entity Recognition,Continual learning for named entity recogntion,natural-language-processing e9670dc3-1ab3-484c-b1ea-cae8678b4582,anaphora-resolution,Anaphora Resolution,Resolving what expression a pronoun or a noun phrase refers to.,natural-language-processing d99d6b46-c2a0-475e-b37b-62f9b3b4370b,fact-selection,Fact Selection,A task where an agent should select at most two sentences from the paper as argumentative facts.,natural-language-processing 7a55c972-548a-4164-b9d1-3603c97c4c1d,short-text-conversation,Short-Text Conversation,"Given a short text, finding an appropriate response (Source: http://staff.ustc.edu.cn/~cheneh/paper_pdf/2013/HaoWang.pdf)",natural-language-processing e40e61f3-31c8-4860-ba71-fd33fd06c328,open-domain-question-answering,Open-Domain Question Answering,Open-domain question answering is the task of question answering on open-domain datasets such as Wikipedia.,natural-language-processing bbd99e84-5d7a-4396-9b2f-cceb010e0d39,goal-oriented-dialogue-systems,Goal-Oriented Dialogue Systems,Achieving a pre-defined goal through a dialog.,natural-language-processing ddb61285-5606-475f-86e6-0178b5b21ac9,sketch-to-text-generation,Sketch-to-text Generation,"Generate a full text based on a sketch (key information consisting of textual spans, phrases, or words), where the sketch may only make up a very small part of the full text.",natural-language-processing 927fcce8-138f-4df3-a80c-30d613de32c9,complaint-comment-classification,Complaint Comment Classification,,natural-language-processing 10238c01-dc68-474c-be96-a61a57aea0aa,cross-domain-named-entity-recognition,Cross-Domain Named Entity Recognition,,natural-language-processing 7938801c-2dd5-49cf-8c44-cee7aa6d100a,cross-lingual-information-retrieval,Cross-Lingual Information Retrieval,"Cross-Lingual Information Retrieval (CLIR) is a retrieval task in which search queries and candidate documents are written in different languages. CLIR can be very useful in some scenarios. For example, a reporter may want to search foreign language news to obtain different perspectives for her story; an inventor may explore the patents in another country to understand prior art.",natural-language-processing a5717667-1100-46a1-bc04-a53e48bf5636,dialogue-interpretation,Dialogue Interpretation,Interpreting the meaning of a dialog.,natural-language-processing 87ff2f56-03a8-484c-a338-657a09b013f4,semantic-dependency-parsing,Semantic Dependency Parsing,Identify semantic relationships between words in a text using a graph representation.,natural-language-processing c9b26d18-38f4-4c03-976f-e60f7de87d3a,semeval-2022-task-4-1-binary-pcl-detection,SemEval-2022 Task 4-1 (Binary PCL Detection),,natural-language-processing b0bbd7a6-1920-46a5-bc6f-23cc25808279,columns-property-annotation,Columns Property Annotation,"**Column Property Annotation** (CPA) refers to the task of predicting the semantic relation between two columns and is a subtask of [Table Annotation](https://paperswithcode.com/task/table-annotation). 
The input of a CPA problem is most commonly a pair of columns, but can also be only one column. The labels used in CPA are properties from vocabularies. Some examples are *name*, *price*, *datePublished* etc. CPA is usually a multi-class classification problem and is also referred to as column relation annotation or relation extraction in different works.",natural-language-processing 658365be-6be4-436d-a2b9-3682bb210fdf,chemical-indexing,Chemical Indexing,Predict which chemicals should be indexed.,natural-language-processing 03a86430-2731-4eba-8e51-242eb37ec37c,cross-lingual-semantic-textual-similarity,Cross-Lingual Semantic Textual Similarity,,natural-language-processing f4b2f4a9-8365-48f3-bbd8-70ae34e298f1,mathematical-reasoning,Mathematical Reasoning,,natural-language-processing b318726c-2d14-4e7f-b162-f6e3659d671f,question-similarity,Question Similarity,"This is the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions",natural-language-processing 5a4e5541-68a7-4000-ac18-1989b7c59745,entity-resolution,Entity Resolution,"**Entity resolution** (also known as entity matching, record linkage, or duplicate detection) is the task of finding records that refer to the same real-world entity across different data sources (e.g., data files, books, websites, and databases). (Source: Wikipedia) Surveys on entity resolution: - [Vassilis et al.: End-to-End Entity Resolution for Big Data: A Survey](https://arxiv.org/pdf/1905.06397.pdf), 2020. - [Barlaug and Gulla: Neural Networks for Entity Matching: A Survey](https://arxiv.org/pdf/2010.11075.pdf), 2021. The task of entity resolution is closely related to the task of [entity alignment](https://paperswithcode.com/task/entity-alignment) which focuses on matching entities between knowledge bases. The task of [entity linking](https://paperswithcode.com/task/entity-linking) differs from entity resolution as entity linking focuses on identifying entity mentions in free text.",natural-language-processing adc5a7f0-5d8b-4cad-9d34-84273b6151c4,snarks,SNARKS,,natural-language-processing e7ea2464-a1d7-4f42-aa19-729b815abdfe,abstract-anaphora-resolution,Abstract Anaphora Resolution,"Abstract Anaphora Resolution aims to resolve nominal expressions (e.g., this result, those two actions) and pronominal expressions (e.g. this, that, it) that refer to abstract-object-antecedents such as facts, events, plans, actions, or situations.",natural-language-processing 298a50e1-a70a-4cb5-8ec4-3ef18f702527,sentence-pair-modeling,Sentence Pair Modeling,Comparing two sentences and their relationship based on their internal representation.,natural-language-processing 8eca97d7-3821-4078-8642-f0ae3ff3df17,multimedia-generative-script-learning,Multimedia Generative Script Learning,"Given an activity goal $G$, an optional subgoal $M$ that specifies the concrete needs, and the previous multimedia step history $H_n=\{(S_1,V_1),...,(S_n,V_n)\}$ with length $n$, a model is expected to predict the next possible step $S_{n+1}$, where $S_i$ is a text sequence and $V_i$ is an image.",natural-language-processing bf774821-f9db-4fdd-9df2-1be71573c673,hypernym-discovery,Hypernym Discovery,"Given a corpus and a target term (hyponym), the task of hypernym discovery consists of extracting a set of its most appropriate hypernyms from the corpus. 
For example, for the input word “dog”, some valid hypernyms would be “canine”, “mammal” or “animal”.",natural-language-processing a0312c7a-654d-4eb5-8006-c8a0ba0f3215,textual-analogy-parsing,Textual Analogy Parsing,"Textual Analogy Parsing (TAP) is the task of identifying analogy frames from text. ( Image credit: [Textual Analogy Parsing: What’s Shared and What’s Compared among Analogous Facts](https://arxiv.org/pdf/1809.02700v1.pdf) )",natural-language-processing 27a311fa-e0ed-42b2-8162-486c6dfe2ec5,multlingual-neural-machine-translation,Multlingual Neural Machine Translation,,natural-language-processing c45881b5-2578-4fc5-b452-e571a8a5230f,component-classification,Component Classification,Classification of argumentative components inside a document,natural-language-processing a5d04769-2752-4bd2-8ce7-91e5a1452c59,formal-fallacies-syllogisms-negation,Formal Fallacies Syllogisms Negation,,natural-language-processing b746f1d2-76db-4bd4-9d9b-824bfa2457ca,thai-word-tokenization,Thai Word Segmentation,Thai word segmentation,natural-language-processing 58a68245-2d49-4f75-9b8c-4a2ee875caf6,news-annotation,News Annotation,Assigning the appropriate labels to a news text based on a set of pre-defined labels.,natural-language-processing 606fa930-ef1e-4baf-98cb-8895e0e90df4,unsupervised-kg-to-text-generation,Unsupervised KG-to-Text Generation,,natural-language-processing 1c9ddc15-ad44-48c9-9ffc-a7bf5bcf5809,cg,NER,"The named entity recognition (NER) involves identification of key information in the text and classification into a set of predefined categories. This includes standard entities in the text like Part of Speech (PoS) and entities like places, names etc...",natural-language-processing 1026841c-60ce-4a9d-93c8-9708ce7159a8,keyphrase-extraction,Keyphrase Extraction,"A classic task to extract salient phrases that best summarize a document, which essentially has two stages: candidate generation and keyphrase ranking.",natural-language-processing aaf2c2b8-f736-4e78-bc5b-9a17de429e8e,vietnamese-sentiment-analysis,Vietnamese Sentiment Analysis,,natural-language-processing 925fb778-3f87-4143-85e8-70c67f5a0054,dynamic-topic-modeling,Dynamic Topic Modeling,,natural-language-processing 855b9f34-647e-49ca-ad77-e4a023a2f568,drs-parsing,DRS Parsing,"Discourse Representation Structures (DRS) are formal meaning representations introduced by Discourse Representation Theory. DRS parsing is a complex task, comprising other NLP tasks, such as semantic role labeling, word sense disambiguation, co-reference resolution and named entity tagging. Also, DRSs show explicit scope for certain operators, which allows for a more principled and linguistically motivated treatment of negation, modals and quantification, as has been advocated in formal semantics. Moreover, DRSs can be translated to formal logic, which allows for automatic forms of inference by third parties. Description from [NLP Progress](http://nlpprogress.com/english/semantic_parsing.html)",natural-language-processing c59e8715-4f72-4eef-9f4d-3ea1e7800eae,personalized-and-emotional-conversation,Personalized and Emotional Conversation,"Personalized and Emotional Conversation (**PEC**) is defined as follows: Given the personalized information ($P_{R1}$ and $P_{R2}$) of two speakers, their conversation context $C$, the emotion $E_K$ and DA $D_K$ of the response to be generated, and the personalized information $P_{K}$ of the responder, the goal is to generate an anthropomorphic response $Y$. 
\begin{equation} Y = \arg\max_{Y'} P(Y' \mid C, E_K, D_K, P_K) \label{task_definition} \end{equation} Particularly, context $C=\{(U_1,E_1,D_1,P_1),\cdots,(U_{K-1},E_{K-1},D_{K-1},P_{K-1})\}$ contains multi-turn conversation content (i.e., utterance $U_i$), emotion $E_i$ of the associated utterance, DA $D_i$ of the associated utterance, and personalized information $P_i$ of the associated speaker.",natural-language-processing b375f530-a16f-4835-a0e5-40bd7a476875,machine-reading-comprehension,Machine Reading Comprehension,"**Machine Reading Comprehension** is one of the key problems in Natural Language Understanding, where the task is to read and comprehend a given text passage, and then answer questions based on it. Source: [Making Neural Machine Reading Comprehension Faster ](https://arxiv.org/abs/1904.00796)",natural-language-processing 6f661315-dc66-4c0f-9564-17ef94e33a90,problem-solving-deliberation,Problem-Solving Deliberation,,natural-language-processing 8ec1f317-d179-45b3-8dcd-dd79926a1cf5,learning-with-noisy-labels,Learning with noisy labels,"When we say ""noisy labels,"" we mean that an adversary has intentionally messed up the labels, which would otherwise have come from a ""clean"" distribution. This setting can also be used to cast learning from only positive and unlabeled data.",natural-language-processing dfeb4971-9c66-49e4-9428-72f62cc78ce6,commonsense-rl,Commonsense Reasoning for RL,Commonsense reasoning for Reinforcement Learning agents,natural-language-processing 8f6dd507-2070-4a81-87fd-cb96ebe613dd,question-answer-generation,Question-Answer-Generation,,natural-language-processing f215f940-2e63-473b-9792-0d43f2961cc3,chinese-word-segmentation,Chinese Word Segmentation,Chinese word segmentation is the task of splitting Chinese text (i.e. a sequence of Chinese characters) into words (Source: www.nlpprogress.com).,natural-language-processing f863ea5b-edac-4660-be27-95ae1b3e2079,paraphrase-generation,Paraphrase Generation,"Paraphrase Generation involves transforming a natural language sentence to a new sentence that has the same semantic meaning but a different syntactic or lexical surface form.",natural-language-processing e66d7137-258d-47ab-9007-518ada1c7a9a,abstractive-sentence-summarization,Sentence Summarization,Generating a summary of a given sentence.,natural-language-processing 34b9718f-e7b8-41e4-bd52-590c4531872f,job-prediction,Job Prediction,,natural-language-processing 6cd9afe7-71d8-47c3-9f69-632e153e1305,intent-classification,Intent Classification,"**Intent Classification** is the task of correctly labeling a natural language utterance from a predetermined set of intents Source: [Multi-Layer Ensembling Techniques for Multilingual Intent Classification ](https://arxiv.org/abs/1806.07914)",natural-language-processing 92416b4a-405e-41e8-a928-771e4b6643e0,slot-filling,Slot Filling,"The goal of **Slot Filling** is to identify from a running dialog different slots, which correspond to different parameters of the user’s query. For instance, when a user queries for nearby restaurants, key slots for location and preferred food are required for a dialog system to retrieve the appropriate information. Thus, the main challenge in the slot-filling task is to extract the target entity.
Source: [Real-time On-Demand Crowd-powered Entity Extraction ](https://arxiv.org/abs/1704.03627) Image credit: [Robust Retrieval Augmented Generation for Zero-shot Slot Filling](https://arxiv.org/pdf/2108.13934.pdf)",natural-language-processing 25dbf417-8545-4017-82ae-b9bd6a01a302,paper-generation,Paper generation,"Generating scientific paper texts, such as abstracts.",natural-language-processing 8ddcb143-4c5b-4037-8911-a0580aa72afa,extractive-document-summarization,Extractive Text Summarization,"Given a document, selecting a subset of the words or sentences which best represents a summary of the document.",natural-language-processing b271f9a6-826a-4383-ba04-30aa2ca979b1,relation-explanation,relation explanation,,natural-language-processing 158e7e03-871b-446e-b8d2-2ac8097f0adc,unsupervised-text-classification,Unsupervised Text Classification,,natural-language-processing fcec5d76-0dd9-4a47-84ff-fcaff237e48b,event-driven-trading,Event-Driven Trading,Making stock trading decisions based on events.,natural-language-processing cb80fcfa-cde1-4f52-8b73-6bf638dbb100,suggestion-mining,Suggestion mining,"Suggestion mining can be defined as the extraction of suggestions from unstructured text,",natural-language-processing 200efa94-0d8b-4281-aa7b-6e226fbdf191,answer-generation,Answer Generation,,natural-language-processing a2ee32e3-6d68-4077-b38e-a878448bc339,recognizing-emotion-cause-in-conversations,Recognizing Emotion Cause in Conversations,"Given an utterance U, labeled with emotion E, the task is to extract the causal spans S from the conversational history H (including utterance U) that sufficiently represent the causes of emotion E.",natural-language-processing 5732ac16-1967-4e29-8b14-77de167fe581,contextualised-word-representations,Contextualised Word Representations,,natural-language-processing 15b41be6-7fa2-44e4-bd97-f89c5fa67e04,text-simplification,Text Simplification,"**Text Simplification** is the task of reducing the complexity of the vocabulary and sentence structure of text while retaining its original meaning, with the goal of improving readability and understanding. Simplification has a variety of important societal applications, for example increasing accessibility for those with cognitive disabilities such as aphasia, dyslexia, and autism, or for non-native speakers and children with reading difficulties. Source: [Multilingual Unsupervised Sentence Simplification](https://arxiv.org/abs/2005.00352)",natural-language-processing 661b8ec0-2d03-4d9b-8a37-e501981eb478,stance-detection-us-election-2020-trump,Stance Detection (US Election 2020 - Trump),,natural-language-processing 51b354f1-0651-43e2-9b56-94375ccf4de0,morphological-tagging,Morphological Tagging,"Morphological tagging is the task of assigning labels to a sequence of tokens that describe them morphologically. As compared to Part-of-speech tagging, morphological tagging also considers morphological features, such as case, gender or the tense of verbs.",natural-language-processing c1293b6b-ed37-4646-b630-ddd4aea792c4,claim-evidence-pair-extraction-cepe,Claim-Evidence Pair Extraction (CEPE),"Since evidence is clearly supporting the corresponding claims in an article, claims and evidence are mutually reinforcing each other in the context. Therefore, we hypothesize the claim extraction task and the evidence extraction task may benefit each other. 
By combining these two subtasks, we define the second integrated task as: given a specific topic and relevant articles, extract the claim-evidence pairs (CEPs) from the articles.",natural-language-processing 0f200e0d-3447-4d6a-8f40-d3653a305f4c,taxonomy-expansion,Taxonomy Expansion,Expand a seed taxonomy with new unseen node,natural-language-processing 98d74a86-c66c-4fb3-95f8-d4b8c13e7d0d,multi-modal-dialogue-generation,Multi-modal Dialogue Generation,Image credit: [OpenViDial](https://github.com/ShannonAI/OpenViDial),natural-language-processing 08b79a44-e0f2-4fcc-bcfa-65dda8a4184b,nonsense-words-grammar,Nonsense Words Grammar,,natural-language-processing 72ab5a77-957a-4482-9e50-4b954fb0fdf2,answer-selection,Answer Selection,"**Answer Selection** is the task of identifying the correct answer to a question from a pool of candidate answers. This task can be formulated as a classification or a ranking problem. Source: [Learning Analogy-Preserving Sentence Embeddings for Answer Selection ](https://arxiv.org/abs/1910.05315)",natural-language-processing 91759691-f173-43da-9d33-212411fe8d89,automatic-writing,Automatic Writing,Generating text based on internal machine representations.,natural-language-processing 30e81aca-79e7-4ec0-b2a8-28f12506d0b2,implicit-relations,Implicit Relations,,natural-language-processing 00528311-4175-4a0f-90e1-28a9c235b802,catalog-extraction,Catalog Extraction,Catalog Extraction aims to re-construct catalog-based document trees.,natural-language-processing f72f9b0f-a738-40fa-bc4c-3a68778a1ca8,dialogue-management,Dialogue Management,"( Image credit: [Bocklisch et al.](https://arxiv.org/pdf/1712.05181v2.pdf) )",natural-language-processing bc12fb47-079b-469b-a2e1-67ad5f8e2cd8,multimodal-sentiment-analysis,Multimodal Sentiment Analysis,"Multimodal sentiment analysis is the task of performing sentiment analysis with multiple data sources - e.g. a camera feed of someone's face and their recorded speech. ( Image credit: [ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection](https://www.aclweb.org/anthology/D18-1280.pdf) )",natural-language-processing 04d70eb0-54bb-4196-92b7-2c75610be377,task-oriented-dialogue-systems,Task-Oriented Dialogue Systems,Achieving a pre-defined task through a dialog.,natural-language-processing 34d03473-fa4c-4b1b-96dd-ad85ad989932,arqmath2,ARQMath2,Answer Retrieval for Questions about Math v2 (2021),natural-language-processing 6ee47715-aa4d-4c6c-a317-ec7fc36eedbc,sonnet-generation,Sonnet Generation,Generating a poetry in the form of a sonnet.,natural-language-processing a26a3da1-3ae1-452f-83ba-f27877ef31fe,document-summarization,Document Summarization,"Automatic **Document Summarization** is the task of rewriting a document into its shorter form while still retaining its important content. The most popular two paradigms are extractive approaches and abstractive approaches. Extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases which are not in the original document. 
Source: [HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization ](https://arxiv.org/abs/1905.06566)",natural-language-processing 2f8db952-65dd-496e-b89e-d2024cd955b3,nested-mention-recognition,Nested Mention Recognition,Nested mention recognition is the task of correctly modeling the nested structure of mentions.,natural-language-processing 150e3986-eb44-449c-b31f-675571596529,task-completion-dialogue-policy-learning,Task-Completion Dialogue Policy Learning,,natural-language-processing 0821ed95-b323-4ef2-94b8-e4c5ed06d7bb,open-domain-dialog,Open-Domain Dialog,,natural-language-processing 6784232b-9b02-46ad-99c6-8b77de053794,clinical-assertion-status-detection,Clinical Assertion Status Detection,"Classifying the assertions made on given medical concepts as being present, absent, or possible in the patient, conditionally present in the patient under certain circumstances, hypothetically present in the patient at some future point, and mentioned in the patient report but associated with someone else. (e.g. clinical finding pertains to the patient by assigning a label such as present (”patient is diabetic”), absent (”patient denies nausea”), conditional (”dyspnea while climbing stairs”), or associated with someone else (”family history of depression”)) ( [Source](https://arxiv.org/pdf/2012.04005v1.pdf) )",natural-language-processing 6e89ce4c-6f02-48a4-adfb-20511de01061,temporal-casual-qa,Temporal/Casual QA,,natural-language-processing ca30c6c7-35a4-4220-90f5-7b5b2b859f86,rules-of-thumb-generation,Rules-of-thumb Generation,Generate relevant rules-of-thumb given text input.,natural-language-processing 63706e55-5873-4d5a-9ba9-0ea0ac00dd7f,cross-lingual-bitext-mining,Cross-Lingual Bitext Mining,Cross-lingual bitext mining is the task of mining sentence pairs that are translations of each other from large text corpora.,natural-language-processing c7c9007f-e4b4-4710-a3a2-f576b254044d,aspect-sentiment-triplet-extraction,Aspect Sentiment Triplet Extraction,"Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting the triplets of target entities, their associated sentiment, and opinion spans explaining the reason for the sentiment.",natural-language-processing 598b5a4b-d7ac-4745-a365-5d06e920f4b0,unsupervised-abstractive-sentence-compression,Unsupervised Sentence Compression,"Producing a shorter sentence by removing redundant information, preserving the grammaticality and the important content of the original sentence without supervision. (Source: nlpprogress.com)",natural-language-processing 077c6108-c1bf-41f9-86a4-7ab78b02cb5f,pcl-detection,Pcl Detection,,natural-language-processing 8ad6c836-3f0d-4751-b1cf-4e81d756ae49,question-selection,Question Selection,,natural-language-processing 329ab7d6-11ff-47e5-8b97-f168f0bff73d,unsupervised-opinion-summarization,Unsupervised Opinion Summarization,,natural-language-processing 47fcc114-f024-41f1-8567-d829af429d1a,conversational-response-generation,Conversational Response Generation,"Given an input conversation, generate a natural-looking text reply to the last conversation element. Image credit: [DIALOGPT : Large-Scale Generative Pre-training for Conversational Response Generation](https://www.aclweb.org/anthology/2020.acl-demos.30.pdf)",natural-language-processing b05e89c4-cd15-467c-adc8-ff367dfffd88,text-infilling,Text Infilling,"**Text Infilling** is the task of predicting missing spans of text which are consistent with the preceding and subsequent text.
Text Infilling is a generalization of the cloze task—cloze historically refers to infilling individual words. Source: [Enabling Language Models to Fill in the Blanks ](https://arxiv.org/abs/2005.05339)",natural-language-processing 10680cd0-fbdb-4780-8e22-ff137dad19b9,table-based-fact-verification,Table-based Fact Verification,Verifying facts given semi-structured data.,natural-language-processing 3f691150-20bc-437a-831c-b1ddbcf3f8a5,unsupervised-sentence-summarization,Unsupervised Sentence Summarization,Generating a summary of a given sentence without supervision.,natural-language-processing 2b35fd91-8c85-4777-8070-3c97d4181b6d,complex-word-identification,Complex Word Identification,Identifying difficult words or expressions in a text.,natural-language-processing f7ece308-8c8a-4fb7-bcd1-7ec85614d791,fg-1-pg-1,FG-1-PG-1,"The model learns one entity in the first step (no continuous learning is required), and learns one new entity in each subsequent continuous learning step.",natural-language-processing 344ef4e0-509c-42bd-b62b-a0f2a15eb9e5,cross-lingual-zero-shot-dependency-parsing,Cross-lingual zero-shot dependency parsing,"Cross-lingual zero-shot parsing is the task of inferring the dependency parse of sentences from one language without any labeled training trees for that language. Description from [NLP Progress](http://nlpprogress.com/english/dependency_parsing.html)",natural-language-processing 2400d0d3-f93a-4cff-a9b8-db3b63b1d093,multilingual-named-entity-recognition,Multilingual Named Entity Recognition,,natural-language-processing 55edb0b1-2a15-4992-a48d-cde569196fe6,argument-mining,Argument Mining,"**Argument Mining** is a field of corpus-based discourse analysis that involves the automatic identification of argumentative structures in text. Source: [AMPERSAND: Argument Mining for PERSuAsive oNline Discussions ](https://arxiv.org/abs/2004.14677)",natural-language-processing 02b31bcd-fe7e-4b5e-9887-2db9c7f8b3c0,aspect-category-detection,Aspect Category Detection,Aspect category detection (ACD) in sentiment analysis aims to identify the aspect categories mentioned in a sentence.,natural-language-processing cde812a9-6fd7-487c-97c4-e62283544168,diachronic-word-embeddings,Diachronic Word Embeddings,,natural-language-processing a499bd2b-7e9a-4881-9635-d20af62393d0,event-causality-identification,Event Causality Identification,,natural-language-processing e3d6ae62-2827-4486-b71e-10d1e7ce9ec4,chunking,Chunking,"Chunking, also known as shallow parsing, identifies continuous spans of tokens that form syntactic units such as noun phrases or verb phrases. 
Example: | Vinken | , | 61 | years | old | | --- | ---| --- | --- | --- | | B-NP | I-NP | I-NP | I-NP | I-NP |",natural-language-processing 98d3c8d5-8cb3-491f-93df-bc04e8e52061,vietnamese-text-diacritization,Vietnamese Text Diacritization,Addition of diacritics for undiacritized Vietnamese Wikipedia texts.,natural-language-processing 0d6a21cd-d557-485a-8920-08adee0d0892,key-point-matching,Key Point Matching,"Given a debatable topic, a set of key points per stance, and a set of crowd arguments supporting or contesting the topic, report for each argument its match score for each of the key points under the same stance towards the topic.",natural-language-processing 578a0af0-1bd6-4dfb-a1fa-338154201f3b,hyperbaton,Hyperbaton,,natural-language-processing bb87fec1-df11-4694-a782-2a6ef72cf55f,negation-scope-resolution,Negation Scope Resolution,,natural-language-processing 3afc0443-28ff-4215-98c5-347f82566119,multi-word-expression-sememe-prediction,multi-word expression sememe prediction,Predict sememes for unannotated multi-word expressions.,natural-language-processing e78860d7-5db3-4e10-bab8-b5bfddbb1e37,multilingual-nlp,Multilingual NLP,,natural-language-processing 4575ca48-1f63-4e12-82b8-164b1f3013fc,text-effects-transfer,Text Effects Transfer,"Text effects transfer refers to the task of transferring typography styles (e.g., color, texture) to an input image of a text element.",natural-language-processing 5831ed49-ca5b-47fa-a078-c205b56dda49,emotion-classification,Emotion Classification,"Emotion classification, or emotion categorization, is the task of recognising emotions to classify them into the corresponding category. Given an input, classify it as 'neutral or no emotion' or as one, or more, of several given emotions that best represent the mental state of the subject's facial expression, words, and so on. Some example benchmarks include ROCStories, Many Faces of Anger (MFA), and GoEmotions. Models can be evaluated using metrics such as the Concordance Correlation Coefficient (CCC) and the Mean Squared Error (MSE).",natural-language-processing 58654440-5f9d-4039-bb39-f95cf9a6f5bf,game-of-cricket,Game of Cricket,,playing-games 3b5da086-c7f1-4115-8efa-b33502a83d7a,procgen-hard-100m,Procgen Hard (100M),,playing-games bde66267-4432-456f-b94a-8ba5cd4734b7,video-games,Video Games,,playing-games 9ac1882a-52b2-4abe-92ef-e9723c2138c5,multi-agent-path-finding,Multi-Agent Path Finding,,playing-games 0fd9da86-f5e6-4b40-8cc8-e36156345c76,injury-prediction,Injury Prediction,,playing-games cf18ed15-4bbc-4917-9b2f-a73cefd20f2d,openai-gym,OpenAI Gym,"An open-source toolkit from OpenAI that implements several Reinforcement Learning benchmarks including: classic control, Atari, Robotics and MuJoCo tasks. (Description by [Evolutionary learning of interpretable decision trees](https://paperswithcode.com/paper/evolutionary-learning-of-interpretable)) (Image Credit: [OpenAI Gym](https://gym.openai.com/))",playing-games a321b4be-83a3-4e21-be4e-ec0ba44c8149,score,NetHack Score,Mean in-game score over 1000 episodes with random seeds not seen during training. See https://arxiv.org/abs/2006.13760 (Section 2.4 Evaluation Protocol) for details.,playing-games 94e424de-2923-4941-97e2-c93f96d5a5f3,game-of-go,Game of Go,"Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent.
The task is to train an agent to play the game and be superior to other players.",playing-games dfb33016-8c74-4030-9ce1-5672c0e3162f,board-games,Board Games,,playing-games 0c50aa37-8905-4519-b529-f425fa90c351,deep-sea-treasure-image-version,"Deep-Sea Treasure, Image version","Image state version of the multi-objective reinforcement learning toy environment originally introduced in ""Empirical evaluation methods for multiobjective reinforcement learning algorithms"" by P. Vamplew et al.",playing-games 42abb953-ebd9-4b17-8512-8ac873d978f6,game-of-shogi,Game of Shogi,,playing-games 725a5e3c-08c7-4175-bfae-fcc63d35e224,text-based-games,text-based games,Text-based games to evaluate Reinforcement Learning agents,playing-games 9e34e515-861b-45f3-858f-c3f82144eb1d,montezumas-revenge,Montezuma's Revenge,"Montezuma's Revenge is an ATARI 2600 Benchmark game that is known to be difficult to perform on for reinforcement learning algorithms. Solutions typically employ algorithms that incentivise environment exploration in different ways. For the state-of-the-art tables, please consult the parent Atari Games task. ( Image credit: [Q-map](https://github.com/fabiopardo/qmap) )",playing-games f34004dc-cf29-404e-ae8e-73d265cf08ac,offline-rl,Offline RL,,playing-games a5cc8eea-e619-4173-ba33-f30369d97925,game-of-hanabi,Game of Hanabi,,playing-games ca3f4a42-9d8b-406e-bf12-0b6c28eec217,game-of-football,Game of Football,,playing-games 6ef90d72-24fa-4fb1-b52b-44bd10ebf596,control-with-prametrised-actions,Control with Parameterised Actions,"Most reinforcement learning research papers focus on environments where the agent’s actions are either discrete or continuous. However, when training an agent to play a video game, it is common to encounter situations where actions have both discrete and continuous components. For example, a set of high-level discrete actions (ex: move, jump, fire), each of them being associated with continuous parameters (ex: target coordinates for the move action, direction for the jump action, aiming angle for the fire action). These kinds of tasks are included in Control with Parameterised Actions.",playing-games ea71d448-8a4a-4a37-8194-858072c89e11,snes-games,SNES Games,"The task is to train an agent to play SNES games such as Super Mario. ( Image credit: [Large-Scale Study of Curiosity-Driven Learning](https://github.com/openai/large-scale-curiosity) )",playing-games 7da89710-67db-4ee1-8aff-05857d676eb9,game-of-poker,Game of Poker,,playing-games 90e5d31d-1c0b-4c7f-8c81-f3fc18a87787,starcraft-ii,Starcraft II,"Starcraft II is an RTS game; the task is to train an agent to play the game. ( Image credit: [The StarCraft Multi-Agent Challenge](https://arxiv.org/pdf/1902.04043v2.pdf) )",playing-games 92babcd0-ec14-4394-a101-01ead1cba7e0,continuous-control,Continuous Control,,playing-games d00ad8d6-8141-436b-bd9c-356c761b3dae,smac-1,SMAC+,Benchmarks for efficient exploration and completion of multi-stage tasks and usage of environmental factors,playing-games 28dc98cb-a29d-4315-b415-e15f4cd1baf8,real-time-strategy-games,Real-Time Strategy Games,"Real-Time Strategy (RTS) tasks involve training an agent to play video games with continuous gameplay and high-level macro-strategic goals such as map control, economic superiority and more.
( Image credit: [Multi-platform Version of StarCraft: Brood War in a Docker Container](https://github.com/Games-and-Simulations/sc-docker) )",playing-games 625ee53c-c2b8-4603-a5e9-bc664903f378,game-of-chess,Game of Chess,"Chess is a two-player strategy board game played on a chessboard, a checkered gameboard with 64 squares arranged in an 8×8 grid. The idea of making a machine that could beat a Grandmaster human player was a fascination in the artificial community for decades. Famously IBM's DeepBlue beat Kasparov in the 1990s. More recently more human-like approaches such as AlphaZero have appeared.",playing-games 2e075e69-69ab-4419-9116-fcaf0196a2c9,klondike,Klondike,The most commonly played game in the family of Solitaire card games.,playing-games 26227dea-de94-40e0-8b4a-3a8005682277,carracing-v0,Car Racing,https://gym.openai.com/envs/CarRacing-v0/,playing-games 481bc7ad-1ff5-4127-a3c6-f9e81cfa1d95,smac-plus,SMAC Plus,Multi-agent reinforcement learning for completion of multi-stage tasks,playing-games a0a14a80-ea3a-412e-aee1-fb5cf2bd5e7f,dota-2,Dota 2,"Dota 2 is a multiplayer online battle arena (MOBA). The task is to train one-or-more agents to play and win the game. ( Image credit: [OpenAI Five](https://openai.com/five/) )",playing-games b1311cf1-2976-4f46-bba6-7b83416c84c5,2048,Playing the Game of 2048,,playing-games c46d0896-e10a-46b9-b75e-0aaf87941c12,suduko,Game of Sudoku,,playing-games d7187052-1440-4a04-99bf-50c9f056b5e9,starcraft,Starcraft,"Starcraft I is a RTS game; the task is to train an agent to play the game. ( Image credit: [Macro Action Selection with Deep Reinforcement Learning in StarCraft](https://arxiv.org/pdf/1812.00336v3.pdf) )",playing-games 1e6df58a-525f-40d4-8908-a54cf8edc2f0,atari-games,Atari Games,"The Atari 2600 Games task (and dataset) involves training an agent to achieve high game scores. ( Image credit: [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/pdf/1312.5602v1.pdf) )",playing-games 203d2456-d78a-4abc-94bd-445de419f8c7,nethack,NetHack,Mean in-game score over 1000 episodes with random seeds not seen during training. See https://arxiv.org/abs/2006.13760 (Section 2.4 Evaluation Protocol) for details.,playing-games 47aeba1b-56e2-448b-a0ae-7bd19da6e8b7,solitaire,Solitaire,A family of single-player games using one or more standard decks of playing cards.,playing-games 783b49f4-ab44-4d08-86fa-78761fcd0872,football-action-valuation,Football Action Valuation,,playing-games 0d8f583b-3f77-402b-a448-7803f4d88e7e,acrobot,Acrobot,"The acrobot system includes two joints and two links, where the joint between the two links is actuated. Initially, the links are hanging downwards, and the goal is to swing the end of the lower link up to a given height.",playing-games ff5a84c1-a3b6-45c3-999e-31dca008c003,dqn-replay-dataset,DQN Replay Dataset,,playing-games d1f05b84-b785-4ca9-995f-f0f61e73237e,card-games,Card Games,Card games involve playing cards: the task is to train an agent to play the game with specified rules and beat other players.,playing-games 88506021-0b93-4e0a-bf76-ce15a82516be,game-of-doom,Game of Doom,"Doom is an FPS game : the task is typically to train an agent to navigate the game environment, and additionally, acquire points by eliminating enemies. 
( Image credit: [Playing FPS Games with Deep Reinforcement Learning](https://arxiv.org/pdf/1609.05521v2.pdf) )",playing-games 29646ede-8581-4b22-aca5-eda64055d62f,fps-games,FPS Games,"The task is to train an agent to play first-person shooter (FPS) games such as Call of Duty. ( Image credit: [Procedural Urban Environments for FPS Games](https://arxiv.org/pdf/1604.05791v1.pdf) )",playing-games 35f32fbd-d630-45de-bdd8-bdd3706c889e,smac,SMAC,"The StarCraft Multi-Agent Challenge (SMAC) is a benchmark that provides elements of partial observability, challenging dynamics, and high-dimensional observation spaces. SMAC is built using the StarCraft II game engine, creating a testbed for research in cooperative MARL where each game unit is an independent RL agent.",playing-games 72abd693-cb03-494f-bdde-03a2b08414f6,pass-classification,Pass Classification,,playing-games 0dd981e2-b6ce-439b-9559-cc9f817a46d3,model-based-reinforcement-learning,Model-based Reinforcement Learning,,reasoning ef74e17f-5f35-4e3d-890f-30fcf693b701,elementary-mathematics,Elementary Mathematics,,reasoning 76797884-6e06-483b-a5a9-0c105e38cce2,abstract-algebra,Abstract Algebra,,reasoning e81b7ae7-f37c-46e7-980f-e5cc8f6fce6a,navigate,Navigate,,reasoning f003c59f-51d8-4b9f-803c-393bcfbaad8f,metaphor-boolean,Metaphor Boolean,,reasoning d0accb1b-e12e-40df-908a-9f97769209c7,penguins-in-a-table,Penguins In A Table,,reasoning 6f4553bc-af40-491d-8702-48c9120e11d1,checkmate-in-one,Checkmate In One,,reasoning 8e6d2c6c-2afa-4b11-a123-31ffa5780d63,analogical-similarity,Analogical Similarity,,reasoning bfe0b421-1bb7-46fa-b7d6-ca676c662d87,temporal-sequences,Temporal Sequences,"This task asks models to answer questions about which times certain events could have occurred. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/temporal_sequences) Image source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/temporal_sequences)",reasoning 1a57d90e-2583-4cdc-8d4f-10d625c97796,physical-intuition,Physical Intuition,,reasoning 3edf7007-792a-4192-96eb-3e05aff70656,arithmetic-reasoning,Arithmetic Reasoning,,reasoning d01c362e-9ffe-4c68-a835-4eef7bec689c,identify-odd-metapor,Identify Odd Metaphor,,reasoning 898196f1-c880-4076-ac9c-e1a041148cac,crash-blossom,Crash Blossom,,reasoning 86722110-c929-4e04-921c-6087b7ce1e80,logical-reasoning,Logical Reasoning,,reasoning 7cff5e9e-a563-4237-b397-90515ee86530,human-judgment-correlation,Human Judgment Correlation,A task where an algorithm should generate judgment scores that correlate with human judgments.,reasoning 4087e697-fcb5-4115-af82-62aa00398684,presuppositions-as-nli,Presuppositions As NLI,,reasoning 0e582ee0-3032-42f0-acd4-263548066eaa,abstract-argumentation,Abstract Argumentation,Identifying argumentative statements from natural language dialogs.,reasoning 359ac862-717b-42aa-8e6c-b0999e63017b,causal-identification,Causal Identification,,reasoning 89d3fa5f-2b71-47ee-91c1-4bc6437a9196,college-mathematics,College Mathematics,,reasoning 7a8647f3-2000-42c5-9b7b-667426a3e8da,professional-accounting,Professional Accounting,,reasoning a9a0a214-7b7e-4d55-aa3d-3d3aa051fa76,commonsense-rl,Commonsense Reasoning for RL,Commonsense reasoning for Reinforcement Learning agents,reasoning e288896e-8f04-4893-98ca-3367fe378838,entailed-polarity,Entailed Polarity,,reasoning 1ce6581d-17cc-4951-af3a-9614d9d7f90f,math-word-problem-solving,Math Word Problem Solving,,reasoning 947d1a6f-2089-4d63-bb48-ce4184dbe07e,evaluating-information-essentiality,Evaluating Information Essentiality,,reasoning a26d9506-7010-4a9f-a2eb-e833b692e3f4,systematic-generalization,Systematic Generalization,,reasoning fcf2e1b4-27c2-417a-958d-e535bf0016a1,odd-one-out,Odd One Out,"This task tests to what extent a language model is able to identify the odd word. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/odd_one_out)",reasoning 5cbb8d77-481a-45ba-9a31-b525eceff5bd,code-line-descriptions,Code Line Descriptions,,reasoning 0c860a39-b158-46f5-8660-6b2367bf73ac,automated-theorem-proving,Automated Theorem Proving,"The goal of **Automated Theorem Proving** is to automatically generate a proof, given a conjecture (the target theorem) and a knowledge base of known facts, all expressed in a formal language. Automated Theorem Proving is useful in a wide range of applications, including the verification and synthesis of software and hardware systems. Source: [Learning to Prove Theorems by Learning to Generate Theorems ](https://arxiv.org/abs/2002.07019)",reasoning f4dfb9a2-66ec-4889-8beb-b06c9743ee39,anachronisms,Anachronisms,,reasoning 6b78b385-a6a3-4739-ac8c-d9acc7a61417,mathematical-induction,Mathematical Induction,"Tests the language model's capability to understand induction by asking the model to verify the correctness of an induction argument. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/mathematical_induction)",reasoning 871b46e5-2814-4b94-ab18-04bdcbc264d5,program-repair,Program Repair,Task of teaching ML models to modify an existing program to fix a bug in given code.,reasoning 630601ee-cdff-4a91-b485-efb4b11ce48b,common-sense-reasoning,Common Sense Reasoning,"Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. 
Instead, the model should use ""common sense"" or world knowledge to make inferences.",reasoning afabd59c-eaef-41b8-abad-4375a1fea6fa,visual-commonsense-reasoning,Visual Commonsense Reasoning,Image source: [Visual Commonsense Reasoning](https://paperswithcode.com/dataset/vcr),reasoning ac33aefc-1831-471a-aee4-be3539ff37f7,reasoning-about-colored-objects,Reasoning About Colored Objects,,reasoning d0b87ce1-e03d-443a-b15b-120e7aa8a94a,causal-judgment,Causal Judgment,,reasoning 6f13a76f-1b8c-4aaa-a1fe-70b950262281,analytic-entailment,Analytic Entailment,,reasoning 4df2ad5a-4c23-4cad-8eb4-f35d18c69621,logical-fallacy-detection,Logical Fallacy Detection,,reasoning 7bb61d6e-769e-40de-b9b2-fb941af69ae5,human-judgment-classification,Human Judgment Classification,A task where an algorithm judges which sample is better in accordance with human judgment.,reasoning 5cd6127d-3d10-4edd-b586-0a759601b59d,decision-making-under-uncertainty,Decision Making Under Uncertainty,,reasoning 525b01a9-9942-444e-bd05-41f03eb193c0,visual-reasoning,Visual Reasoning,Ability to understand actions and reasoning associated with any visual images,reasoning 594a8269-e644-4d59-9961-bee7eeca515f,date-understanding,Date Understanding,,reasoning 2995908b-da07-4135-8603-0fc8415d9aab,pre-election-ratings-estimation,Pre-election ratings estimation,,reasoning 2378d30a-a591-4938-bccf-494323629df4,physical-commonsense-reasoning,Physical Commonsense Reasoning,,reasoning 97d66522-66c5-4b38-bac6-de33e0fd2b26,conformal-prediction,Conformal Prediction,,reasoning a6f32b92-5958-4a1e-a8c4-dea3e1ace15f,formal-logic,Formal Logic,,reasoning c7493d8a-816f-4577-a5b9-947a201c4773,crass-ai,Crass AI,,reasoning 520c83e5-2b64-4d2e-8fe9-39fcffb6355a,high-school-mathematics,High School Mathematics,,reasoning 9cf973be-53e9-4a9d-bd18-513e2485ebee,natural-language-visual-grounding,Natural Language Visual Grounding,,reasoning 1f84a01c-f545-4661-8da2-97016025ce24,logical-args,Logical Args,,reasoning c284a7f7-b392-45f8-a411-87b7098c15f5,visual-entailment,Visual Entailment,"Visual Entailment (VE) - is a task consisting of image-sentence pairs whereby a premise is defined by an image, rather than a natural language sentence as in traditional Textual Entailment tasks. The goal is to predict whether the image semantically entails the text.",reasoning 442818a5-1b68-442f-b1ca-d43468c4a611,winowhy,Winowhy,,reasoning f55fd4e6-d4e6-480a-89be-170ca195dc16,decision-making,Decision Making,"**Decision Making** is a complex task that involves analyzing data (of different level of abstraction) from disparate sources and with different levels of certainty, merging the information by weighing in on some data source more than other, and arriving at a conclusion by exploring all possible alternatives. Source: [Complex Events Recognition under Uncertainty in a Sensor Network ](https://arxiv.org/abs/1411.0085)",reasoning 96dd3a67-91e0-46d3-9123-241ffbee68bc,strategyqa,StrategyQA,"StrategyQA aims to measure the ability of models to answer questions that require multi-step implicit reasoning. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/strategyqa)",reasoning f9242a93-c8be-406e-8470-d98d3cd46a14,novel-concepts,Novel Concepts,"Measures the ability of models to uncover an underlying concept that unites several ostensibly disparate entities, which hopefully would not co-occur frequently. 
This provides a limited test of a model's ability to creatively construct the necessary abstraction to make sense of a situation that it cannot have memorized in training. Source: [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/novel_concepts)",reasoning 58e626d8-f3b2-4f45-8ab6-611dfe57e13f,logical-sequence,Logical Sequence,,reasoning 8153423e-af31-4c60-af19-cd1c3865b09c,contact-detection,Contact Detection,Static-friction contact detection in legged locomotion,robots 3e59f482-053f-44af-a10b-bcef6d021415,marine-robot-navigation,Marine Robot Navigation,,robots 82d1bb1d-8127-4b78-a18a-9531ae24b8b5,outdoor-localization,Outdoor Localization,,robots 34d36ecb-1359-459b-a27f-8d2718e09a57,humanoid-control,Humanoid Control,"Control of a high-dimensional humanoid. This can include skill learning by tracking motion capture clips, learning goal-directed tasks like going towards a moving target, and generating motion within a physics simulator.",robots e53c4092-a499-4013-8b19-dbc9bfd5fc14,muscle-computer-interfaces-mcis,Muscle-Computer Interfaces (MCIs),An interaction methodology that directly senses and decodes human muscular activity rather than relying on physical device actuation or user actions that are externally visible or audible.,robots 5c9f2183-bb18-4f0f-bb06-5235b93d83de,sequential-place-learning,Sequential Place Learning,State-of-the-art algorithms for route-based place recognition under changing conditions.,robots 7ab453af-6d3f-40de-8701-e73fd64ee085,industrial-robots,Industrial Robots,"An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes.",robots ebd9b14d-dd5a-4203-a4dd-cf7a0e203ec3,mental-stress-detection,Mental Stress Detection,,robots 0cb052fd-663a-4e26-9b1e-08bd7ee9a48a,social-navigation,Social Navigation,This task studies how to navigate robot(s) among humans in a safe and socially acceptable way.,robots 20f48d33-9807-41ae-ad88-c71ad5c0890a,deformable-object-manipulation,Deformable Object Manipulation,,robots e19a7c0e-403b-486a-811e-f1385f7f6b80,low-latency-processing,Low-latency processing,,robots f4760df6-d6fe-460c-b989-fe3220534820,benchmarking,Benchmarking,,robots fc3db917-87ed-46e3-90b8-5f94574f5889,pointgoal-navigation,PointGoal Navigation,,robots 7fc19fe2-24da-45fd-ab2b-8d63107ec107,d4rl,D4RL,,robots 48a7dc04-7ba1-407a-99d9-4c400532f6e8,carla-map-leaderboard,CARLA MAP Leaderboard,https://leaderboard.carla.org/leaderboard/,robots af3f280d-5268-45a5-aa21-b3fd68cfdc31,robot-navigation,Robot Navigation,"The fundamental objective of mobile **Robot Navigation** is to arrive at a goal position without collision. The mobile robot is supposed to be aware of obstacles and move freely in different working scenarios. Source: [Learning to Navigate from Simulation via Spatial and Semantic Information Synthesis with Noise Model Embedding ](https://arxiv.org/abs/1910.05758)",robots d09170b1-a28c-4c5f-b164-3889419f069a,touch-detection,Touch detection,,robots abcf066d-a6f1-40d1-958b-f8e654b9c1b3,omniverse-isaac-gym,Omniverse Isaac Gym,"The Omniverse Isaac Gym extension provides an interface for performing reinforcement learning training and inferencing in Isaac Sim. This framework simplifies the process of connecting reinforcement learning libraries and algorithms with other components in Isaac Sim. 
Similar to existing frameworks and environment wrapper classes that inherit from gym.Env, the Omniverse Isaac Gym extension also provides an interface inheriting from gym.Env and implements a simple set of APIs required by most common RL libraries. This interface can be used as a bridge connecting RL libraries with physics simulation and tasks running in the Isaac Sim framework.",robots 2c33dab4-c134-49ef-ae8e-07a7de6f46b4,robotic-grasping,Robotic Grasping,This task is composed of using Deep Learning to identify how best to grasp objects using robotic arms in different scenarios. This is a very complex task as it might involve dynamic environments and objects unknown to the network.,robots 460eb98a-5e0d-4714-944a-dd30bade789c,vision-and-language-navigation,Vision and Language Navigation,,robots 2ef43e7b-4045-4049-bdc1-d592c7f6358d,developmental-learning,Developmental Learning,,robots d95a0429-981c-4c6f-9d19-6e28b765c343,skill-generalization,Skill Generalization,Image credit: [A Generalist Agent](https://storage.googleapis.com/deepmind-media/A%20Generalist%20Agent/Generalist%20Agent.pdf),robots fbb2c019-f3e5-416d-a963-c558500376a1,skill-mastery,Skill Mastery,,robots 6c76456c-8b3f-41e0-8cc3-01b88f29bd1d,radar-odometry,Radar odometry,"Radar odometry is the task of estimating the trajectory of the radar sensor, e.g. as presented in https://arxiv.org/abs/2105.01457. A well established performance metric was presented by Geiger (2012) - ""Are we ready for autonomous driving? the KITTI vision benchmark suite""",robots 06d2e1b3-0e74-4160-8600-5daf3f6a7869,monocular-visual-odometry,Monocular Visual Odometry,,robots 3cb45760-f564-44a0-830b-661a9e70258e,radar-object-detection,Radar Object Detection,The radar object detection (ROD) task aims to classify and localize the objects in 3D purely from radar's radio frequency (RF) images.,robots 8807917e-2d43-41b6-add4-e6475279e022,joint-radar-communication,Joint Radar-Communication,Intelligently decide how to simultaneously conduct radar and communication over a shared radio channel.,robots 36d213f3-72ba-433b-b7e1-3f98add41f70,optimal-motion-planning,Optimal Motion Planning,,robots f03e622d-6be4-4406-ae73-a73b1b039216,visual-navigation,Visual Navigation,"**Visual Navigation** is the problem of navigating an agent, e.g. a mobile robot, in an environment using camera input only. The agent is given a target image (an image it will see from the target position), and its goal is to move from its current position to the target by applying a sequence of actions, based on the camera observations only. Source: [Vision-based Navigation Using Deep Reinforcement Learning ](https://arxiv.org/abs/1908.03627)",robots 29da0832-e31f-4340-ad5a-0661b4cb19e7,sequential-place-recognition,Sequential Place Recognition,State-of-the-art algorithms for route-based place recognition under changing conditions.,robots 84b3635d-5816-4139-a0d7-c5232133492d,visual-odometry,Visual Odometry,"**Visual Odometry** is an important area of information fusion in which the central aim is to estimate the pose of a robot using data collected by visual sensors. Source: [Bi-objective Optimization for Robust RGB-D Visual Odometry ](https://arxiv.org/abs/1411.7445)",robots eec44fb1-0f18-4cad-98ad-94022dcf2619,safe-exploration,Safe Exploration,"**Safe Exploration** is an approach to collect ground truth data by safely interacting with the environment. 
Source: [Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems ](https://arxiv.org/abs/2005.04374)",robots 6dd801a3-1850-4337-bdd1-922fe0197da9,robot-manipulation,Robot Manipulation,,robots b533ab2c-cd7e-45e0-b533-7b1639d8d8a1,trajectory-planning,Trajectory Planning,"Trajectory planning for industrial robots consists of moving the tool center point from point A to point B while avoiding body collisions over time. Trajectory planning is sometimes referred to as motion planning and erroneously as path planning. Trajectory planning is distinct from path planning in that it is parametrized by time. Essentially trajectory planning encompasses path planning in addition to planning how to move based on velocity, time, and kinematics.",robots 8ad5e6ac-1336-46c8-8c0a-6e5de9b693f3,drone-controller,Drone Controller,,robots 0857fd57-6390-4c03-b864-88653bc2b57f,grasp-rectangle-generation,Grasp rectangle generation,Grasp rectangles are a popular way to represent a two-finger grasp. Grasp rectangle generation is a task to (automatically) label datasets for parallel-jaw grap learning.,robots 07a56ae4-d661-4721-a7e0-b9beb1ea0dab,gesture-generation,Gesture Generation,"Generation of gestures, as a sequence of 3d poses",robots be2f908d-4a48-4f92-908d-6bdbf43d0659,vision-based-navigation-with-language-based,Vision-based navigation with language-based assistance,"A grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments. The task emulates a real-world scenario in that (a) the requester may not know how to navigate to the target objects and thus makes requests by only specifying high-level endgoals, and (b) the agent is capable of sensing when it is lost and querying an advisor, who is more qualified at the task, to obtain language subgoals to make progress.",robots 29c53e58-85bc-45a0-b5e0-f49cb9c7769c,isaac-gym-preview,Isaac Gym Preview,Isaac Gym offers a high performance learning platform to train policies for wide variety of robotics tasks directly on GPU. Both physics simulation and the neural network policy training reside on GPU and communicate by directly passing data from physics buffers to PyTorch tensors without ever going through any CPU bottlenecks. This leads to blazing fast training times for complex robotics tasks on a single GPU with 2-3 orders of magnitude improvements compared to conventional RL training that uses a CPU based simulator and GPU for neural networks.,robots 58546ea4-abf9-4146-b310-5d2bc13fe916,robot-task-planning,Robot Task Planning,,robots af9a03fd-9ab1-4078-bb25-1c4aca040911,motion-planning,Motion Planning,"( Image credit: [Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning](https://arxiv.org/pdf/1805.01956v1.pdf) )",robots 534abb2b-fc0d-4c86-8e23-57bd67a04d43,phone-level-pronunciation-scoring,Phone-level pronunciation scoring,,speech ebff78f2-8393-4a15-9e55-c370fe6b1e08,voice-conversion,Voice Conversion,"**Voice Conversion** is a technology that modifies the speech of a source speaker and makes their speech sound like that of another target speaker without changing the linguistic information. 
Source: [Joint training framework for text-to-speech and voice conversion using multi-source Tacotron and WaveNet ](https://arxiv.org/abs/1903.12389)",speech 24ae91d2-5acc-4431-97e7-c0a38e18ac5a,speech-synthesis-bodo,Speech Synthesis - Bodo,,speech b284ea62-f149-441e-8c9b-764d1e479f90,small-footprint-keyword-spotting,Small-Footprint Keyword Spotting,,speech 86fffb39-0063-40f2-9ecb-b3aa7c140f90,word-level-pronunciation-scoring,Word-level pronunciation scoring,Total score of a word pronunciation.,speech 8a548f68-3a1f-41ff-b20c-6326ddffb730,robust-speech-recognition,Robust Speech Recognition,,speech 2b99bc05-b32c-480c-8ae5-17c11dad68e1,speaker-profiling,Speaker Profiling,Estimation of Physical parameters from Speech data,speech f83371f8-998e-472f-acf4-6b1c2a3ffdbf,speech-recognition,Speech Recognition,"**Speech Recognition** is the task of converting spoken language into text. It involves recognizing the words spoken in an audio recording and transcribing them into a written format. The goal is to accurately transcribe the speech in real-time or from recorded audio, taking into account factors such as accents, speaking speed, and background noise. ( Image credit: [SpecAugment](https://arxiv.org/pdf/1904.08779v2.pdf) )",speech a6c3445f-1592-4502-a30b-4131de8486bc,bandwidth-extension,Bandwidth Extension,Bandwidth extension is the task of expanding the bandwidth of a signal in a way that approximates the original or desired higher bandwidth signal.,speech 120bb4d1-f5b1-4d3c-be26-538bf2d522e5,keyword-spotting,Keyword Spotting,"In speech processing, keyword spotting deals with the identification of keywords in utterances. ( Image credit: [Simon Grest](https://github.com/simongrest/kaggle-freesound-audio-tagging-2019) )",speech 7b9962ef-c365-4431-bca9-ed77a37e2f74,pronunciation-assessment,Pronunciation Assessment,,speech de729f11-7de8-4d1f-89fc-d33d2724d7c7,speech-synthesis-rajasthani,Speech Synthesis - Rajasthani,,speech b7b791e1-f2c9-4f48-87fe-bb41103baf12,speech-synthesis-marathi,Speech Synthesis - Marathi,,speech 300ee326-052e-4f6c-b330-6272a11843cc,voice-query-recognition,Voice Query Recognition,,speech f170ad4d-baab-4397-ab45-1cca5d0a3a66,dialogue-generation,Dialogue Generation,"Dialogue generation is the task of ""understanding"" natural language inputs - within natural language processing in order to produce output. The systems are usually intended for conversing with humans, for instance back and forth dialogue with a conversation agent like a chatbot. Some example benchmarks for this task (see others such as Natural Language Understanding) include FusedChat and Ubuntu DIalogue Corpus (UDC). Models can be evaluated via metrics such as BLEU, ROUGE, and METEOR albeit with challenges in terms of weak correlation with human judgement, that may be addressed by new ones like UnSupervised and Reference-free (USR) and Metric for automatic Unreferenced dialog evaluation (MaUde).",speech 5e239570-62c1-4843-804c-9c036f596786,noisy-speech-recognition,Noisy Speech Recognition,,speech be8e8801-0d26-4598-bbee-53bd3914b517,a-vb-culture,A-VB Culture,"In the A-VB Culture track, participants will be challenged with predicting the intensity of 10 emotions associated with each vocal burst as a multi-output regression task, using a model or multiple models that generate predictions specific to each of the four cultures (the U.S., China, Venezuela, or South Africa). 
Specifically, annotations of each vocal burst will consist of culture-specific ground truth, meaning that the ground truth for each sample will be the average of annotations solely from the country of origin of the sample. Participants will report the average Concordance Correlation Coefficient (CCC), as well as the Pearson correlation coefficient, across all 10 emotions. The baseline for this challenge will be based on CCC.",speech f7832c65-15e9-4f06-99a6-500785c6fb97,speech-synthesis-kannada,Speech Synthesis - Kannada,,speech 16c4ebfd-a1f0-449f-95cd-6a45de40b4a4,distant-speech-recognition,Distant Speech Recognition,,speech 0489eb12-5c4e-41dc-9a5e-9d9660e429c2,speech-to-gesture-translation,Speech-to-Gesture Translation,,speech 6ecbc7f6-7614-4649-af1d-52cee4792347,a-vb-high,A-VB High,"The A-VB High track, explores a high-dimensional emotion space for understanding vocal bursts. Participants will be challenged with predicting the intensity of 10 emotions (Awe, Excitement, Amusement, Awkwardness, Fear, Horror, Distress, Triumph, Sadness, and Surprise) associated with each vocal burst as a multi-output regression task. Participants will report the average Concordance Correlation Coefficient (CCC), as well as the Pearson correlation coefficient, across all 10 emotions. The baseline for this challenge will be based on CCC.",speech 9a9d58c8-6e01-4007-af11-66ef40a6429e,speaker-recognition,Speaker Recognition,"**Speaker Recognition** is the process of identifying or confirming the identity of a person given his speech segments. Source: [Margin Matters: Towards More Discriminative Deep Neural Network Embeddings for Speaker Recognition ](https://arxiv.org/abs/1906.07317)",speech a43a7e25-ee95-49f8-87ef-ca6f53f2e76a,culture,Cultural Vocal Bursts Intensity Prediction,to predict the intensity of 40 culture-specific emotions (10 emotions from each culture),speech 95a651ad-0093-42f9-9299-474f49a53355,speaking-style-synthesis,Speaking Style Synthesis,,speech 01ceb443-1aa0-4fb4-aef1-e0a5ac94203e,speech-separation,Speech Separation,"The task of extracting all overlapping speech sources in a given mixed speech signal refers to the **Speech Separation**. Speech Separation is a special scenario of source separation problem, where the focus is only on the overlapping speech signal sources and other interferences such as music or noise signals are not the main concern of the study. Source: [A Unified Framework for Speech Separation ](https://arxiv.org/abs/1912.07814) Image credit: [Speech Separation of A Target Speaker Based on Deep Neural Networks](http://staff.ustc.edu.cn/~jundu/Publications/publications/ICSP2014_Du.pdf)",speech d7ad8d3e-6025-47fb-bb1b-30ea0911aa80,emotional-speech-synthesis,Emotional Speech Synthesis,,speech d1a7a846-2cd2-44f6-a4fe-d06821db967f,text-to-speech-synthesis,Text-To-Speech Synthesis,**Text-To-Speech Synthesis** is a machine learning task that involves converting written text into spoken words. The goal is to generate synthetic speech that sounds natural and resembles human speech as closely as possible.,speech f921fdb4-3015-4335-b5d7-83db1850f8fb,text-independent-speaker-recognition,Text-Independent Speaker Recognition,,speech 2d557bca-059c-4c97-b336-17ad6c5917f2,text-independent-speaker-verification,Text-Independent Speaker Verification,,speech 4ea860c3-a88b-4fc1-bec9-5dce9808af7d,speech-to-speech-translation,Speech-to-Speech Translation,"Speech-to-speech translation (S2ST) consists on translating speech from one language to speech in another language. 
This can be done with a cascade of automatic speech recognition (ASR), text-to-text machine translation (MT), and text-to-speech (TTS) synthesis sub-systems, which is text-centric. Recently, works on S2ST without relying on intermediate text representation is emerging.",speech 55afe26a-1ff2-4379-b921-1fe14d3891c0,speech-synthesis-odia,Speech Synthesis - Odia,,speech d3bba545-d82a-4ce9-b66b-7774c5c350b9,spoken-language-understanding,Spoken Language Understanding,,speech 20bb042c-4f6c-4d30-a52c-f4ae5b1aea6a,automatic-speech-recognition,Automatic Speech Recognition (ASR),"**Automatic Speech Recognition (ASR)** involves converting spoken language into written text. It is designed to transcribe spoken words into text in real-time, allowing people to communicate with computers, mobile devices, and other technology using their voice. The goal of Automatic Speech Recognition is to accurately transcribe speech, taking into account variations in accent, pronunciation, and speaking style, as well as background noise and other factors that can affect speech quality.",speech edbeef50-83c0-4036-af70-5ba80a702be8,a-vb-two,A-VB Two,"In the A-VB Two track, we investigate a low-dimensional emotion space that is based on the circumplex model of affect. Participants will predict values of arousal and valence (on a scale from 1=unpleasant/subdued, 5=neutral, 9=pleasant/stimulated) as a regression task. Participants will report the average Concordance Correlation Coefficient (CCC), as well as the Pearson correlation coefficient, across the two dimensions. The baseline for this challenge will be based on CCC.",speech 4ef89510-f5da-4d56-a498-6557da2a0eff,utterance-level-pronounciation-scoring,Utterance-level pronounciation scoring,Total pronunciation score of an utterance.,speech 0709cf29-a46a-4232-94b1-6da452af7e64,speech-synthesis-manipuri,Speech Synthesis - Manipuri,,speech 110d75d9-a0b4-4675-a7fd-bf41bae7936e,accented-speech-recognition,Accented Speech Recognition,,speech cfc3c204-b940-4bac-a784-e4dbb26ad2fe,speech-synthesis-bengali,Speech Synthesis - Bengali,,speech 811194be-dec5-454f-b615-0fd7c927b8c5,spoken-language-identification,Spoken language identification,Identify the language being spoken from an audio input only.,speech d2e52964-c62f-46ad-99ac-2a86093c27d3,spoken-dialogue-systems,Spoken Dialogue Systems,,speech c9917233-fbd1-428b-92a6-bb1400ae04bb,speaker-diarization,Speaker Diarization,"**Speaker Diarization** is the task of segmenting and co-indexing audio recordings by speaker. The way the task is commonly defined, the goal is not to identify known speakers, but to co-index segments that are attributed to the same speaker; in other words, diarization implies finding speaker boundaries and grouping segments that belong to the same speaker, and, as a by-product, determining the number of distinct speakers. In combination with speech recognition, diarization enables speaker-attributed speech-to-text transcription. Source: [Improving Diarization Robustness using Diversification, Randomization and the DOVER Algorithm ](https://arxiv.org/abs/1910.11691)",speech 4a04dd8e-e3e7-4caa-aad5-9e7b8baf25d2,speech-enhancement,Speech Enhancement,"**Speech Enhancement** is a signal processing task that involves improving the quality of speech signals captured under noisy or degraded conditions. The goal of speech enhancement is to make speech signals clearer, more intelligible, and more pleasant to listen to, which can be used for various applications such as voice recognition, teleconferencing, and hearing aids. 
( Image credit: [A Fully Convolutional Neural Network For Speech Enhancement](https://arxiv.org/pdf/1609.07132v1.pdf) )",speech c8135dcd-ec45-4220-a48d-36706e76be61,speech-extraction,Speech Extraction,,speech bcd4d432-fc72-421a-9e2c-7658d94982ec,expressive-speech-synthesis,Expressive Speech Synthesis,,speech 110164a3-dcbc-46c5-b21d-cfa14189725b,unsupervised-speech-recognition,Unsupervised Speech Recognition,,speech 36b2cd73-afc5-43c3-a7d5-e54eeaadf3e3,speech-synthesis-assamese,Speech Synthesis - Assamese,,speech a5b9a22c-2b24-47c6-a746-957d0a369ede,speaker-separation,Speaker Separation,,speech 8b156946-33a1-4628-84fa-2d31f673808f,acoustic-unit-discovery,Acoustic Unit Discovery,,speech c179910d-628d-413f-9569-3de9488b2a76,speaker-identification,Speaker Identification,,speech 990945c8-ce29-4cbe-bf04-346540de49a9,manner-of-articulation-detection,Manner Of Articulation Detection,,speech eedefd76-d9e3-4009-b337-0d38f094ae47,multi-speaker-source-separation,Multi-Speaker Source Separation,,speech f90a04ac-bc04-4fba-b46f-a45b699d746d,text-dependent-speaker-verification,Text-Dependent Speaker Verification,,speech 51684d27-e15f-4513-8668-1f06636851a4,voice-cloning,Voice Cloning,Voice cloning is a highly desired feature for personalized speech interfaces. A neural voice cloning system learns to synthesize a person’s voice from only a few audio samples.,speech 8ae4dddd-c81e-4821-abab-768fc507ddd9,speech-synthesis-hindi,Speech Synthesis - Hindi,,speech 8c4acbc2-e032-4cc5-b75f-142689db5d01,speech-emotion-recognition,Speech Emotion Recognition,"**Speech Emotion Recognition** is a task of speech processing and computational paralinguistics that aims to recognize and categorize the emotions expressed in spoken language. The goal is to determine the emotional state of a speaker, such as happiness, anger, sadness, or frustration, from their speech patterns, such as prosody, pitch, and rhythm. For multimodal emotion recognition, please upload your result to [Multimodal Emotion Recognition on IEMOCAP](https://paperswithcode.com/sota/multimodal-emotion-recognition-on-iemocap)",speech 5fc6c348-66d1-4dd1-9189-18d9f80d4257,speaker-verification,Speaker Verification,"Speaker verification is the task of verifying the identity of a person from the characteristics of their voice. ( Image credit: [Contrastive-Predictive-Coding-PyTorch ](https://github.com/jefflai108/Contrastive-Predictive-Coding-PyTorch) )",speech 525131cc-b516-4a78-b06e-6da2e1a02dff,speech-synthesis,Speech Synthesis,"Speech synthesis is the task of generating speech from some other modality like text, lip movements etc. Please note that the leaderboards here are not directly comparable between studies, as they use mean opinion score as a metric and collect different samples from Amazon Mechanical Turk. 
( Image credit: [WaveNet: A generative model for raw audio](https://deepmind.com/blog/article/wavenet-generative-model-raw-audio) )",speech a0a3a611-403d-4198-ad85-9b8151563e7d,speech-denoising,Speech Denoising,Obtain the clean speech of the target speaker by suppressing the background noise.,speech e3d27acd-0d62-40a1-8b22-b2af174fe77f,speech-synthesis-gujarati,Speech Synthesis - Gujarati,,speech 12d8eb84-544e-4399-9b31-baaedb295517,speech-synthesis-tamil,Speech Synthesis - Tamil,,speech 1a66e112-8ef7-4548-be88-5901ea0c5bf3,acoustic-question-answering,Acoustic Question Answering,,speech fbbaa423-e861-45f5-9bbe-d364f4c7a771,singing-voice-synthesis,Singing Voice Synthesis,,speech b6d95645-e968-4d24-9da1-515f7d38020d,spoken-command-recognition,Spoken Command Recognition,,speech d32d43fd-a3ee-4db9-9979-9d1a6d40f92c,acoustic-echo-cancellation,Acoustic echo cancellation,,speech b7f8afdc-3483-4000-8c86-c29cbe1f8a38,speech-synthesis-malayalam,Speech Synthesis - Malayalam,,speech 61eeb73d-266b-4950-a72f-8811bc2d9fb6,acoustic-modelling,Acoustic Modelling,,speech 979495fa-1d96-41d2-bbbc-e8c0fda7c80b,visual-speech-recognition,Visual Speech Recognition,,speech 86a1462f-eb49-48af-9148-ca5c830537e0,speech-dereverberation,Speech Dereverberation,Removing reverberation from audio signals,speech 909e8395-329f-41b2-8161-92fb3b3def36,sequence-to-sequence-speech-recognition,Sequence-To-Sequence Speech Recognition,,speech 4eb567e3-4375-46f0-8585-b428c9217b22,type,Vocal Bursts Type Prediction,predict the type of given vocal bursts,speech 4c27a6ec-418e-4a1f-ae93-12164142c994,multimodal-emotion-recognition,Multimodal Emotion Recognition,"This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic T: Text V: Visual Please include the modality in the bracket after the model name. All models must use standard five emotion categories and are evaluated in standard leave-one-session-out (LOSO). See the papers for references.",speech a20307bd-39a8-4be5-b66d-5b6a7b20d01f,speech-synthesis-telugu,Speech Synthesis - Telugu,,speech aeb2c1e6-aab5-4fb3-852b-f7aa51004b02,audio-visual-speech-recognition,Audio-Visual Speech Recognition,Audio-visual speech recognition is the task of transcribing a paired audio and visual stream into text.,speech 73a0961d-9528-4aa6-823d-857fe9ffba31,english-conversational-speech-recognition,English Conversational Speech Recognition,,speech 99c19e93-0e9b-4659-a1ce-8ac989c998df,traffic-prediction,Traffic Prediction,"**Traffic Prediction** is a task that involves forecasting traffic conditions, such as the volume of vehicles and travel time, in a specific area or along a particular road. This task is important for optimizing transportation systems and reducing traffic congestion. ( Image credit: [BaiduTraffic](https://github.com/JingqingZ/BaiduTraffic) )",time-series 35c16a56-b577-4e34-8e0e-24fb1f8959fc,social-cue-forecasting,Social Cue Forecasting,,time-series 64d88e69-9f17-42f6-9018-a6137e024b71,time-series-denoising,Time Series Denoising,,time-series 9d47e955-0a9c-46d0-a9f7-6070c699077c,edge-computing,Edge-computing,Deep Learning on EDGE devices,time-series 32ba6681-52aa-49e3-9dd6-ce711d1dbc4f,predict-clinical-outcome,Predict clinical outcome,"A cost-based metric that considers the costs of algorithmic prescreening, expert screening, treatment, and diagnostic errors that result in late or missed treatments. 
This metric is further described here: https://moody-challenge.physionet.org/2022/",time-series 8618330c-ff55-4813-9fad-e15482fb4cec,stock-trend-prediction,Stock Trend Prediction,,time-series d7a2c9f8-0ad2-4fb4-9ea8-e0447104e85a,covid-19-tracking,COVID-19 Tracking,,time-series 9713e3d9-38c0-4947-a5d6-3f025c116ce8,w-r-n-sleep-staging,W-R-N Sleep Staging,"3-class Sleep Staging into - Wake - Rem - NREM",time-series 1eb366d2-f165-4b47-ae08-7aa191e63002,w-r-l-d-sleep-staging,W-R-L-D Sleep Staging,"4-class Sleep Staging into - Wake - REM - LIGHT SLEEP - DEEP SLEEP",time-series fbed0ae6-8bce-4666-b33e-a4585d96ce7c,covid-19-modelling,COVID-19 Modelling,,time-series ce28773f-89fd-4ea8-8ca2-ffdbdd3d1e81,multivariate-time-series-imputation,Multivariate Time Series Imputation,,time-series a8ff35a8-8492-48d0-8fa8-2e30f966d7fb,automatic-modulation-recognition,Automatic Modulation Recognition,Automatic modulation recognition/classification identifies the modulation pattern of communication signals received from wireless or wired networks.,time-series aca825b2-e0bc-4df4-afc5-65ca99c09cf6,lwr-classification,LWR Classification,"T4: LWR Classification: Predicting if the subject is Listening, Writing, or Resting from physiological signals (EEG, GSR, PPG). Labels: 0-listening, 1-writing, 2-resting. Classification tasks.",time-series 75070363-1710-4aca-ab24-5f7a270cf968,energy-management,energy management,"energy management is to schedule energy units inside the systems, enabling an reliable, safe and cost-effective operation",time-series c3427306-fb34-491c-8ecf-bb9d2983d2e8,new-product-sales-forecasting,New Product Sales Forecasting,"Sales forecasting of new product, which the market hasn’t seen before.",time-series 8f092a13-405d-4db7-84c8-3f186c48166e,change-point-detection,Change Point Detection,"**Change Point Detection** is concerned with the accurate detection of abrupt and significant changes in the behavior of a time series. Change point detection is the task of finding changes in the underlying model of a signal or time series. They are two main methods: 1) Online methods, that aim to detect changes as soon as they occur in a real-time setting 2) Offline methods that retrospectively detect changes when all samples are received. Source: [Selective review of offline change point detection methods](https://arxiv.org/pdf/1801.00718.pdf)",time-series dd00ffe8-88e4-4b71-85c8-1856df65f933,time-series-forecasting,Time Series Forecasting,"**Time Series Forecasting** is the task of fitting a model to historical, time-stamped data in order to predict future values. Traditional approaches include moving average, exponential smoothing, and ARIMA, though models as various as RNNs, Transformers, or XGBoost can also be applied. The most popular benchmark is the ETTh1 dataset. Models are typically evaluated using the Mean Square Error (MSE) or Root Mean Square Error (RMSE). 
( Image credit: [ThaiBinh Nguyen](https://github.com/tn16jv/Stock-Price-Prediction) )",time-series b488e879-1391-4cb1-b578-a3c3c169a915,human-behavior-forecasting,Human Behavior Forecasting,,time-series ae242c8c-15aa-411f-9290-68a837935e40,wearable-activity-recognition,Wearable Activity Recognition,,time-series 3368ebfe-cefe-480f-8828-1f6b6b170bcd,exponential-degradation,Exponential degradation,Exponential degradation is used to model problems where systems are exposed to an exponential loss in performance such as repairable industrial systems.,time-series 23807184-f5f9-4d26-8b15-f202057d27c9,unsupervised-spatial-clustering,Unsupervised Spatial Clustering,,time-series 83c8223d-3ffc-44d1-a832-8217dff10e49,video-quality-assessment,Video Quality Assessment,"Video Quality Assessment is a computer vision task aiming to mimic video-based human subjective perception. The goal is to produce a MOS score, where a higher score indicates better perceptual quality. Some well-known benchmarks for this task are KoNViD-1k, LIVE-VQC, YouTube-UGC and LSVQ. SROCC/PLCC/RMSE are usually used to evaluate the performance of different models.",time-series 7435c6f2-038a-4500-bd58-f5c633561e84,sequential-skip-prediction,Sequential skip prediction,,time-series 4d0a7bb0-88ef-478d-b6fa-18b4e52cdabb,time-to-event-prediction,Time-to-Event Prediction,,time-series 264c0739-203b-4861-94fc-efff9a993603,time-series-anomaly-detection,Time Series Anomaly Detection,,time-series 5e64eb11-254b-4133-9bce-a9358ad314a4,social-media-popularity-prediction,Social Media Popularity Prediction,"Social Media Popularity Prediction (SMPP) aims to predict the future popularity (e.g., clicks, views, likes, etc.) of online posts automatically, using abundant social media data from public platforms. It is a crucial problem for social media learning and forecasting and one of the most challenging problems in the field. With the ever-changing user interests and public attention on social media platforms, how to predict popularity accurately becomes more challenging than before. This task is valuable to content providers, marketers, or consumers in a range of real-world applications, including multimedia advertising, recommendation systems, or trend analysis.",time-series 472fe690-125f-467b-87d9-45a86d14fa63,short-observation-new-product-sales,Short-observation new product sales forecasting,,time-series 569b1cca-d3c5-435e-9694-c337f8f13e80,eeg-based-sleep-staging,EEG based sleep staging,Sleep staging from only EEG signal,time-series 183fa6f9-a29d-4140-b02b-1ac9dde6475d,eeg,Electroencephalogram (EEG),"**Electroencephalogram (EEG)** is a method of recording brain activity using electrophysiological indexes. When the brain is active, a large number of postsynaptic potentials generated synchronously by neurons are formed after summation. It records the changes of electric waves during brain activity and is the overall reflection of the electrophysiological activities of brain nerve cells on the surface of the cerebral cortex or scalp. Brain waves originate from the postsynaptic potential of the apical dendrites of pyramidal cells. The formation of the synchronous rhythm of EEG is also related to the activity of the nonspecific projection system of the cortex and thalamus. EEG is fundamental to basic theoretical research in brain science. 
EEG monitoring is widely used in its clinical application.",time-series 8adb2024-8a44-4483-977f-a0dac4b1a7cf,time-series-classification,Time Series Classification,"**Time Series Classification** is a general task that can be useful across many subject-matter domains and applications. The overall goal is to identify a time series as coming from one of possibly many sources or predefined groups, using labeled training data. That is, in this setting we conduct supervised learning, where the different time series sources are considered known. Source: [Nonlinear Time Series Classification Using Bispectrum-based Deep Convolutional Neural Networks ](https://arxiv.org/abs/2003.02353)",time-series 35363d8a-0982-4ced-88a6-24c29adf807f,ecg-based-sleep-staging,ECG based Sleep Staging,Sleep Staging from only ECG signal,time-series 85ad68d6-52c1-4b0f-97dc-1f5397363506,time-series-clustering,Time Series Clustering,"**Time Series Clustering** is an unsupervised data mining technique for organizing data points into groups based on their similarity. The objective is to maximize data similarity within clusters and minimize it across clusters. Time-series clustering is often used as a subroutine of other more complex algorithms and is employed as a standard tool in data science for anomaly detection, character recognition, pattern discovery, visualization of time series. Source: [Comprehensive Process Drift Detection with Visual Analytics ](https://arxiv.org/abs/1907.06386)",time-series deebb11e-2046-4f2f-a4c4-7c79434f56bc,irregular-time-series,Irregular Time Series,Irregular Time Series,time-series 54f845ed-d656-44ce-9092-e068b56380a1,earth-surface-forecasting,Earth Surface Forecasting,Conditional forecasting of future multi-spectral imagery.,time-series ad442209-ab2e-40e2-8ca1-963c33ddf517,sleep-spindles-detection,Sleep spindles detection,,time-series 2c4b1d5c-a105-47c0-88bb-4e66ced266d8,trajectory-modeling,Trajectory Modeling,The equivalent of language modeling but for trajectories.,time-series 75cfdeae-ff07-4335-a8e5-9f787785c1e5,seismic-source-localization,Seismic source localization,Locating a seismic source using seismometer recordings,time-series 2acab60b-56ac-4ea8-bc8f-a71c6726333e,correlated-time-series-forecasting,Correlated Time Series Forecasting,,time-series ca8e0127-fb7b-488a-8d45-d15a6330ee57,semanticity-prediction,Semanticity prediction,"T3: Semanticity Prediction: Estimating the semanticity perceived by Listener from physiological signals (EEG, GSP, PPG). Label: 0-(semantic), 1-(non-semantic). Binary classification problem.",time-series e751a94d-68e1-42b1-9855-db0509b80f47,pair-trading,PAIR TRADING,Pair trading is one of the most effective statistical arbitrage strategies which seeks a neutral profit by hedging a pair of selected assets.,time-series bdec74a4-f5c5-4365-94bb-372f8318eb4a,solar-flare-prediction,Solar Flare Prediction,Solar flare prediction in heliophysics,time-series 79826a67-c9e3-40a6-aec7-7f40dc5eb5af,lip-password-classification,Lip password classification,A classification task that predicts whether the designated user is uttering the designated password.,time-series eb7dc52e-6f30-4802-854b-371aa954cfe3,algorithmic-trading,Algorithmic Trading,An algorithmic trading system is a software that is used for trading in the stock market.,time-series e7bf29dd-6079-443f-945a-0dc4f7bd25f4,multimodal-association,Multimodal Association,"**Multimodal association** refers to the process of associating multiple modalities or types of data in time series analysis. 
In time series analysis, multiple modalities or types of data can be collected, such as sensor data, images, audio, and text. Multimodal association aims to integrate these different types of data to improve the understanding and prediction of the time series. For example, in a smart home application, sensor data from temperature, humidity, and motion sensors can be combined with images from cameras to monitor the activities of residents. By analyzing the multimodal data together, the system can detect anomalies or patterns that may not be visible in individual modalities alone. Multimodal association can be achieved using various techniques, including deep learning models, statistical models, and graph-based models. These models can be trained on the multimodal data to learn the associations and dependencies between the different types of data.",time-series 8ebcfcf4-9d08-4344-8102-9d74e29b0a22,non-intrusive-load-monitoring,Non-Intrusive Load Monitoring,,time-series 42d5fc56-5fa0-4faa-8e16-0d5b0cab3a28,intelligent-communication,Intelligent Communication,"Intelligently decide (i) the content of data to be shared/communicated and (ii) the direction in which the chosen data is transmitted.",time-series 6a50b4fc-0d7b-4c7e-ac31-77d5ccd04606,fire-detection,Fire Detection,Detection of fire using multi-variate time series sensor data.,time-series 40198ab5-9a78-4998-b43a-b4778c3bcc52,data-compression,Data Compression,,time-series 5f8fff7c-7d01-431e-a13b-5fb7a6118828,tropical-cyclone-intensity-forecasting,Tropical Cyclone Intensity Forecasting,,time-series 19d1574e-504b-4e08-8d36-3e7d07731c60,time-series-regression,Time Series Regression,Predicting one or more scalars for an entire time series example.,time-series fc8623ce-e314-4bf3-87b5-319be315e893,imputation,Imputation,Substituting missing data with values according to some criteria.,time-series 8848733b-0714-445d-8cd1-22f6c07b29aa,time-series,Time Series Analysis,"**Time Series Analysis** is a statistical technique used to analyze and model time-based data. It is used in various fields such as finance, economics, and engineering to analyze patterns and trends in data over time. The goal of time series analysis is to identify the underlying patterns, trends, and seasonality in the data, and to use this information to make informed predictions about future values. 
( Image credit: [Autoregressive CNNs for Asynchronous Time Series](https://arxiv.org/pdf/1703.04122v4.pdf) )",time-series d2f31c8a-ea54-42c2-b80a-47bcd36932c5,solar-irradiance-forecasting,Solar Irradiance Forecasting,,time-series 8ec2dfe8-3a9c-460e-a88f-0c018fdf0d8c,remaining-useful-lifetime-estimation,Remaining Useful Lifetime Estimation,Estimating the number of machine operation cycles until breakdown from the time series of previous cycles.,time-series 1f0a82cd-75ed-4101-af0b-a7bb918a9ee7,moving-point-cloud-processing,Moving Point Cloud Processing,,time-series 7a8971f8-defb-41f5-b045-71bd5d0cfd46,stock-market-prediction,Stock Market Prediction,,time-series 6e8f2948-6e82-43dd-a4dc-af4328a999bf,portfolio-optimization,Portfolio Optimization,,time-series 0dc3f9c1-efc7-47e6-ab26-f1ec8433d280,semi-supervised-time-series-classification,Semi-supervised time series classification,,time-series a7355d33-dbaf-427e-b49a-7bab4a746df7,eeg-decoding,Eeg Decoding,**EEG Decoding** - extracting useful information directly from EEG data.,time-series 61457a95-0b9f-48b3-bd88-d9d36d1c57bc,time-series-alignment,Time Series Alignment,,time-series a0251d39-a46e-41df-a2b3-fca2c72d9050,predictive-process-monitoring,Predictive Process Monitoring,A branch of predictive analysis that attempts to predict some future state of a business process.,time-series d1af0f55-280d-455a-8ca7-94507ef39287,episode-classification,Episode Classification,The episode classification is a branch of the classification aiming to classify groups of observations of a Time Series. (Example: critical episodes/ normal episode),time-series 0b3a8c8c-6b8a-46bf-92ae-569ed97f36c0,classification-on-time-series-with-missing,Classification on Time Series with Missing Data,,time-series cb7f9e39-ab6d-493f-ad6f-0f97ced2f1cf,classify-murmurs,Classify murmurs,Classify murmurs based on Phonocardiograms (PCGs),time-series de9e0334-1341-4a66-9793-75b397eb3f5d,robust-traffic-prediction,Robust Traffic Prediction,,time-series 2e526362-8058-4181-8111-80091e3f8377,time-series-averaging,Time Series Averaging,,time-series 9962586d-2ffc-45ff-84b1-4393d8506aa4,human-activity-recognition,Human Activity Recognition,Classify various human activities,time-series 23a6f291-2d57-468d-a1bf-93c019f2bcde,time-series-streams,Time Series Streams,,time-series 573543fe-2458-446a-a262-e61c1819bca7,probabilistic-time-series-forecasting,Probabilistic Time Series Forecasting,,time-series 4f9f0519-5d26-4b2f-9413-6792ed17c9b8,time-series-prediction,Time Series Prediction,"The goal of **Time Series Prediction** is to infer the future values of a time series from the past. Source: [Orthogonal Echo State Networks and stochastic evaluations of likelihoods ](https://arxiv.org/abs/1601.05911)",time-series 34ead4dd-3e08-4917-9e7b-4f05a8922cfc,stock-price-prediction,Stock Price Prediction,**Stock Price Prediction** is the task of forecasting future stock prices based on historical data and various market indicators. It involves using statistical models and machine learning algorithms to analyze financial data and make predictions about the future performance of a stock. 
The goal of stock price prediction is to help investors make informed investment decisions by providing a forecast of future stock prices.,time-series 7b7145fc-124a-4b6e-ae03-e99588ab1c47,human-motion-prediction,Human motion prediction,"Action prediction is a pre-fact video understanding task, which focuses on future states, in other words, it needs to reason about future states or infer action labels before the end of action execution.",time-series 31ae1de1-162f-4536-aff1-5e068e46f40f,attention-score-prediction,Attention Score Prediction,"Auditory Attention Score Prediction: Estimating the attention level of Listener from physiological signals (EEG, GSR, PPG), a regression task. The attention score ranges from 0 to 100.",time-series ec2c523e-9300-4d0f-a8ab-563a9cfd6f8f,multivariate-time-series-forecasting,Multivariate Time Series Forecasting,,time-series 6173b30f-acdc-464e-95d9-4e51ead68e86,math-word-problem-solving,Math Word Problem Solving,,time-series 361014f5-67a6-4dea-8944-88517a818413,tropical-cyclone-track-forecasting,Tropical Cyclone Track Forecasting,,time-series 06ddc2dc-9cdc-4793-bba5-72e5c279db91,traffic-data-imputation,Traffic Data Imputation,,time-series 8f7f9b26-39be-49cb-bc50-137d46a281fe,univariate-time-series-forecasting,Univariate Time Series Forecasting,,time-series d960a9c4-44b0-4631-9381-fc73b9bf1db9,noise-level-prediction,Noise Level Prediction,"T2: Noise Level Prediction: Estimating the noise level experienced by the Listener from physiological signals (EEG, GSR, PPG). Six different levels of background noise (SNR) Label: -6, -3, 0, 3, 6, and inf (noise-free) in dB.",time-series 98ea36ae-4a29-48b9-a442-7f463c527bda,sequential-bayesian-inference,Sequential Bayesian Inference,"Also known as Bayesian filtering or [recursive Bayesian estimation](https://en.wikipedia.org/wiki/Recursive_Bayesian_estimation), this task aims to perform inference on latent state-space models.",time-series 94af6f25-9d25-4726-a1ff-7d1a30e9ca0b,stock-prediction,Stock Prediction,,time-series dd0d330d-c24a-44db-9d05-dc9e28e1f32f,dynamic-time-warping,Dynamic Time Warping,,time-series 37f32546-cbe7-4978-899c-393aee0afaa9,phonocardiogram-classification,Phonocardiogram Classification,Classify labels/murmur/clinical outcome based on Phonocardiograms (PCGs),time-series 83d3d3ed-9837-41c5-b3d2-e441d7abadad,activity-prediction,Activity Prediction,Predict human activities in videos,time-series 12e8ea44-1854-4eaf-aff7-f20e1c5a0f14,spatio-temporal-forecasting,Spatio-Temporal Forecasting,,time-series 6939dc5e-9a34-4962-a991-8b1d3bf17f4e,clustering-multivariate-time-series,Clustering Multivariate Time Series,,time-series
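The catalog ends here. For readers who want to work with these records programmatically, the minimal sketch below shows one way to load the task list and count entries per area (playing-games, reasoning, robots, speech, time-series, and the rest). It assumes the catalog has been saved as a standard one-record-per-line CSV file named tasks.csv with a header row and the area label in the final column; the filename and that layout are assumptions for illustration, not something defined by the catalog itself.

```python
# Minimal sketch, assuming the catalog is exported as a standard CSV named
# "tasks.csv" with one record per line, a header row, and the area label in
# the last column of each record.
import csv
from collections import Counter

with open("tasks.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))  # csv.reader handles quoted, multi-line description fields

records = rows[1:]  # skip the header row
area_counts = Counter(row[-1] for row in records if row)

for area, count in area_counts.most_common():
    print(f"{area}: {count}")
```

Run against a well-formed export, this prints one line per area with the number of tasks it contains, which is a quick way to sanity-check that no records were dropped or split during export.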