- LLaVA-o1: Let Vision Language Models Reason Step-by-Step
  Paper • 2411.10440 • Published • 111
- BlueLM-V-3B: Algorithm and System Co-Design for Multimodal Large Language Models on Mobile Devices
  Paper • 2411.10640 • Published • 44
- Knowledge Transfer Across Modalities with Natural Language Supervision
  Paper • 2411.15611 • Published • 15
- Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM's Reasoning Capability
  Paper • 2411.19943 • Published • 55
Zhou (HaoUSF)
Recent Activity
- reacted to singhsidhukuldeep's post with 🔥 (13 days ago)
Exciting new research alert! 🚀 A groundbreaking paper titled "Understanding LLM Embeddings for Regression" has just been released, and it's a game-changer for anyone working with large language models (LLMs) and regression tasks.
Key findings:
1. LLM embeddings outperform traditional feature engineering in high-dimensional regression tasks.
2. LLM embeddings preserve Lipschitz continuity over feature space, enabling better regression performance.
3. Surprisingly, factors like model size and language understanding don't always improve regression outcomes.
Technical details:
The researchers used both T5 and Gemini model families to benchmark embedding-based regression. They employed a key-value JSON format for string representations and used average-pooling to aggregate Transformer outputs.
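The pipeline described above — serialize the regression inputs as a key-value JSON string, encode, then average-pool the Transformer outputs into a single vector — can be sketched as follows. This is a minimal illustration, not the paper's code: the random array stands in for real per-token hidden states from an encoder such as T5, and the exact serialization format is an assumption.

```python
import json
import numpy as np

def serialize_features(features: dict) -> str:
    # Key-value JSON string representation of the regression inputs
    # (the paper's exact formatting may differ from this sketch).
    return json.dumps(features, sort_keys=True)

def average_pool(token_states: np.ndarray) -> np.ndarray:
    # token_states: (seq_len, hidden_dim) Transformer outputs.
    # Average-pooling collapses them into one fixed-size embedding.
    return token_states.mean(axis=0)

# Stand-in for a real encoder (e.g. T5): random per-token states.
rng = np.random.default_rng(0)
text = serialize_features({"x1": 0.5, "x2": -1.2})
fake_states = rng.normal(size=(len(text.split()), 16))  # hypothetical hidden states
embedding = average_pool(fake_states)
print(embedding.shape)  # (16,)
```

The resulting fixed-size vector is what gets fed to a downstream regressor (e.g. an MLP or linear model) in embedding-based regression.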
The study introduced a novel metric called the Normalized Lipschitz Factor Distribution (NLFD) to analyze embedding continuity. The skewness of the NLFD showed a strong inverse relationship with regression performance.
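One way to make the NLFD idea concrete: compute pairwise Lipschitz factors |y_i − y_j| / ‖e_i − e_j‖ over embedding pairs, normalize them, and measure the skewness of the resulting distribution. This is my reading of the construction as a hedged sketch; the paper's exact normalization and estimator may differ.

```python
import numpy as np

def lipschitz_factors(embeddings, targets):
    # Pairwise Lipschitz factors |y_i - y_j| / ||e_i - e_j||,
    # normalized by the maximum factor (sketch of the NLFD idea).
    n = len(targets)
    factors = []
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(embeddings[i] - embeddings[j])
            if dist > 0:
                factors.append(abs(targets[i] - targets[j]) / dist)
    factors = np.array(factors)
    return factors / factors.max()

def skewness(x):
    # Sample skewness (third standardized moment).
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

# Toy data: targets roughly linear in one embedding coordinate.
rng = np.random.default_rng(1)
emb = rng.normal(size=(50, 8))
y = emb[:, 0] * 2.0 + rng.normal(scale=0.1, size=50)
nlf = lipschitz_factors(emb, y)
print(skewness(nlf))
```

Intuitively, a heavily skewed distribution means a few pairs of nearby embeddings have very different targets, which is exactly what hurts a smooth regressor.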
Interestingly, the paper reveals that applying forward passes of pre-trained models doesn't always significantly improve regression performance for certain tasks. In some cases, using only vocabulary embeddings without a forward pass yielded comparable results.
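The "vocabulary embeddings without a forward pass" baseline amounts to averaging the static input-embedding rows for each token id and skipping the Transformer layers entirely. A toy sketch, assuming a hypothetical lookup table in place of a pretrained model's embedding matrix:

```python
import numpy as np

# Toy vocabulary-embedding table; in practice these would be the
# pretrained model's input-embedding rows.
vocab_size, dim = 100, 8
rng = np.random.default_rng(2)
embedding_table = rng.normal(size=(vocab_size, dim))

def vocab_only_embedding(token_ids):
    # Average the static embedding rows; no Transformer layers applied.
    return embedding_table[np.array(token_ids)].mean(axis=0)

vec = vocab_only_embedding([3, 17, 42])
print(vec.shape)  # (8,)
```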
The research also demonstrated that LLM embeddings are dimensionally robust, maintaining strong performance even with high-dimensional data where traditional representations falter.
This work opens up exciting possibilities for using LLM embeddings in various regression tasks, particularly those with high degrees of freedom. It's a must-read for anyone working on machine learning, natural language processing, or data science!
- updated a collection "LLM" (21 days ago)
- updated a collection "LLM" (29 days ago)