Choi Jaehun

Ash-Hun

AI & ML interests

LLM, RAG, Agent, GenAI, Modeling for RAG

Recent Activity

Reacted to singhsidhukuldeep's post with 🔥 5 days ago
Exciting breakthrough in LLM reasoning: Introducing "Thread of Thought" (ThoT) - a novel prompting strategy that revolutionizes how language models handle chaotic contexts!
updated a dataset 5 days ago
Ash-Hun/Translation_Prompt_Template
updated a collection 5 days ago
Prompt for RAG

Organizations

Ash-Hun's activity

Reacted to singhsidhukuldeep's post with 🔥 5 days ago
Exciting breakthrough in LLM reasoning: Introducing "Thread of Thought" (ThoT) - a novel prompting strategy that revolutionizes how language models handle chaotic contexts!

Unlike traditional approaches that struggle with complex, interleaved information, ThoT enables LLMs to methodically segment and analyze extended contexts with remarkable precision. Here's how it works:

Technical Deep Dive:
- ThoT employs a two-step prompting mechanism:
1. Initial Analysis: Uses a template combining chaotic context (X) and query (Q) with a trigger sentence that initiates systematic reasoning.
2. Conclusion Refinement: Leverages the organized thought sequence to extract definitive answers.
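The two-step mechanism above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's reference implementation: `call_llm` is a hypothetical stand-in for any chat-completion client, and the trigger sentence is the one proposed in the ThoT paper.

```python
# Minimal sketch of the ThoT two-step prompting flow.
# `call_llm` is a hypothetical callable: prompt string in, completion string out.

THOT_TRIGGER = ("Walk through this context in manageable parts step by step, "
                "summarizing and analyzing as we go.")

def build_step1_prompt(chaotic_context: str, query: str) -> str:
    """Step 1 (Initial Analysis): combine chaotic context (X) and query (Q)
    with the trigger sentence that initiates systematic reasoning."""
    return f"{chaotic_context}\nQ: {query}\n{THOT_TRIGGER}"

def build_step2_prompt(chaotic_context: str, query: str, reasoning: str) -> str:
    """Step 2 (Conclusion Refinement): feed the organized thought sequence
    back to the model to extract a definitive answer."""
    step1 = build_step1_prompt(chaotic_context, query)
    return f"{step1}\n{reasoning}\nTherefore, the answer is:"

def thread_of_thought(call_llm, chaotic_context: str, query: str) -> str:
    """Run both steps: analyze the context, then refine to a final answer."""
    reasoning = call_llm(build_step1_prompt(chaotic_context, query))
    return call_llm(build_step2_prompt(chaotic_context, query, reasoning))
```

Because the whole strategy lives in the prompt templates, it needs no retraining and works with any model behind `call_llm`, which is what makes it "plug-and-play."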

Implementation Details:
- Seamlessly integrates as a "plug-and-play" module with existing LLMs.
- Requires no model retraining or fine-tuning.
- Works with various prompting techniques and model architectures.

Performance Highlights:
- Outperformed traditional methods on PopQA and EntityQ datasets.
- Achieved 57.4% accuracy with GPT-3.5-turbo (vs. 48.2% for Chain-of-Thought).
- Demonstrated superior performance across model scales, from 7B to 70B parameters.

Key Applications:
- Retrieval-augmented generation.
- Multi-turn conversation responses.
- Complex reasoning tasks requiring information synthesis.

What makes it special: ThoT mirrors human cognitive processes by breaking down complex information into manageable segments while maintaining logical continuity – a game-changer for handling information-dense contexts.
upvoted an article 5 months ago

MTEB: Massive Text Embedding Benchmark

upvoted an article 6 months ago

Decoding GPT-4'o': In-Depth Exploration of Its Mechanisms and Creating Similar AI.

By KingNish