hassenhamdi
5 followers · 48 following
AI & ML interests: None yet
Recent Activity
- liked a Space 10 days ago: hf-audio/open_asr_leaderboard
- liked a model 10 days ago: SparkAudio/Spark-TTS-0.5B
- reacted to singhsidhukuldeep's post with 🧠 23 days ago:
O1 Embedder: Transforming Retrieval Models with Reasoning Capabilities

Researchers from the University of Science and Technology of China and the Beijing Academy of Artificial Intelligence have developed a novel retrieval model that mimics the slow-thinking capabilities of reasoning-focused LLMs like OpenAI's O1 and DeepSeek's R1.

Unlike traditional embedding models that directly match queries with documents, O1 Embedder first generates thoughtful reflections about the query before performing retrieval. This two-step process significantly improves performance on complex retrieval tasks, especially those requiring intensive reasoning or zero-shot generalization to new domains.

The technical implementation is fascinating:
- The model integrates two essential functions: Thinking and Embedding.
- It uses an "Exploration-Refinement" data synthesis workflow in which initial thoughts are generated by an LLM and refined by a retrieval committee.
- A multi-task training method fine-tunes a pre-trained LLM to generate retrieval thoughts via behavior cloning while simultaneously learning embedding capabilities through contrastive learning.
- Memory-efficient joint training lets both tasks share encoding results, dramatically increasing batch size.

The results are impressive: O1 Embedder outperforms existing methods across 12 datasets in both in-domain and out-of-domain scenarios. For example, it achieves a 3.9% improvement on Natural Questions and a 3.0% boost on HotpotQA compared to models without thinking capabilities.

This approach represents a significant paradigm shift in retrieval technology, bridging the gap between traditional dense retrieval and the reasoning capabilities of large language models.

What do you think about this approach? Could "thinking before retrieval" transform how we build search systems?
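The think-then-retrieve pipeline can be sketched in a few lines. This is a toy illustration, not the authors' implementation: in the paper a single fine-tuned LLM both generates the thought and produces the embedding, whereas here `generate_thought` is a hypothetical stand-in and the embedder is a simple bag-of-words vector, just to show the two-step control flow.

```python
# Toy sketch of "thinking before retrieval" (O1 Embedder-style flow).
# Assumptions: generate_thought() and the bag-of-words embed() are
# stand-ins for the fine-tuned LLM described in the paper.
import math
from collections import Counter

def generate_thought(query: str) -> str:
    # Stand-in for the model's "slow thinking" reflection step.
    return f"To answer '{query}', relevant passages should mention {query}."

def embed(text: str) -> Counter:
    # Toy bag-of-words embedding; the real model learns embeddings
    # via contrastive learning on a fine-tuned LLM.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> list[tuple[float, str]]:
    # Step 1: think. Step 2: embed query + thought, then rank documents.
    thought = generate_thought(query)
    q_vec = embed(query + " " + thought)
    return sorted(((cosine(q_vec, embed(d)), d) for d in docs), reverse=True)

docs = ["Paris is the capital of France.", "Cats sleep most of the day."]
ranked = retrieve("capital of France", docs)
print(ranked[0][1])  # the France document ranks first
```

The point of the sketch is the interface, not the components: the query is enriched by a generated reflection before embedding, which is what lets reasoning influence retrieval.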
hassenhamdi's activity
- liked a Space 10 days ago: Open ASR Leaderboard 🏆 — Request evaluation for a speech model (Running on CPU Upgrade, 687 likes)
- liked a model 10 days ago: SparkAudio/Spark-TTS-0.5B (Text-to-Speech, updated 15 days ago, 13.4k downloads, 502 likes)
- liked a Space 24 days ago: Tech Tree Blog 🌳 (Running, 4 likes)
- liked 3 models 24 days ago:
  - jfkback/hypencoder.4_layer (Feature Extraction, updated Feb 17, 55 downloads, 1 like)
  - jfkback/hypencoder.2_layer (Feature Extraction, updated Feb 17, 48 downloads, 1 like)
  - jfkback/hypencoder.8_layer (Feature Extraction, updated Feb 17, 218 downloads, 1 like)
- liked 6 models about 1 month ago:
  - NousResearch/DeepHermes-3-Llama-3-8B-Preview (Text Generation, updated 8 days ago, 34k downloads, 299 likes)
  - tomg-group-umd/huginn-0125 (Text Generation, updated 1 day ago, 5.43k downloads, 245 likes)
  - Zyphra/Zonos-v0.1-transformer (Text-to-Speech, updated Feb 15, 183k downloads, 385 likes)
  - deepseek-ai/DeepSeek-R1 (Text Generation, updated 26 days ago, 1.68M downloads, 11.5k likes)
  - hexgrad/Kokoro-82M (Text-to-Speech, updated 3 days ago, 1.66M downloads, 3.75k likes)
  - microsoft/OmniParser-v2.0 (Image-Text-to-Text, updated Feb 18, 8.88k downloads, 1.18k likes)
- liked 2 models 4 months ago:
  - Lightricks/LTX-Video (Text-to-Video, updated 9 days ago, 173k downloads, 1.08k likes)
  - hassenhamdi/SSD-1B-fp8_e4m3fn (Text-to-Image, updated Nov 13, 2024, 1 download)