Dmitry Ryumin

DmitryRyumin

AI & ML interests

Machine Learning and Applications, Multi-Modal Understanding

Posts 46

šŸ˜€šŸ˜²šŸ˜šŸ˜” New Research Alert - FER-YOLO-Mamba (Facial Expressions Recognition Collection)! šŸ˜”šŸ˜„šŸ„“šŸ˜±
šŸ“„ Title: FER-YOLO-Mamba: Facial Expression Detection and Classification Based on Selective State Space šŸ”

šŸ“ Description: FER-YOLO-Mamba is a novel facial expression recognition model that combines the strengths of YOLO and Mamba technologies to efficiently recognize and localize facial expressions.

šŸ‘„ Authors: Hui Ma, Sen Lei, Turgay Celik, and Heng-Chao Li

šŸ”— Paper: FER-YOLO-Mamba: Facial Expression Detection and Classification Based on Selective State Space (2405.01828)

šŸ“ Repository: https://github.com/SwjtuMa/FER-YOLO-Mamba

šŸš€ Added to the Facial Expressions Recognition Collection: DmitryRyumin/facial-expressions-recognition-65f22574e0724601636ddaf7

šŸ”„šŸ” See also Facial_Expression_Recognition - ElenaRyumina/Facial_Expression_Recognition (App, co-authored by @DmitryRyumin ) šŸ˜‰

šŸ“š More Papers: more cutting-edge research presented at other conferences is available in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #FERYOLOMamba #FER #YOLO #Mamba #FacialExpressionRecognition #EmotionRecognition #ComputerVision #DeepLearning #MachineLearning #Innovation
šŸ”„šŸš€šŸŒŸ New Research Alert - YOCO! šŸŒŸšŸš€šŸ”„
šŸ“„ Title: You Only Cache Once: Decoder-Decoder Architectures for Language Models šŸ”

šŸ“ Description: YOCO is a novel decoder-decoder architecture for LLMs that reduces memory requirements, speeds up prefilling, and maintains global attention. It consists of a self-decoder for encoding KV caches and a cross-decoder for reusing these caches via cross-attention.

šŸ‘„ Authors: Yutao Sun et al.

šŸ“„ Paper: You Only Cache Once: Decoder-Decoder Architectures for Language Models (2405.05254)

šŸ“ Repository: https://github.com/microsoft/unilm/tree/master/YOCO

šŸ“š More Papers: more cutting-edge research presented at other conferences is available in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

šŸ” Keywords: #YOCO #DecoderDecoder #LargeLanguageModels #EfficientArchitecture #GPUMemoryReduction #PrefillingSpeedup #GlobalAttention #DeepLearning #Innovation #AI

models

None public yet

datasets

None public yet