# Model Description
**Comp4Cls** is a retrieval-augmented classification framework that uses **entity-centric semantic compression** to turn long scientific/technical documents into short, task-focused representations for both retrieval and labeling. Documents (papers, patents, and R&D reports) are first compressed into structured summaries that preserve discriminative signals (e.g., core concepts, methods, problems, findings), embedded, and stored in a vector DB. At inference, a query is compressed the same way, nearest neighbors are retrieved, and a small LLM assigns the final class label using the compressed evidence.
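The two-phase pipeline above can be sketched in miniature. Everything here is an illustrative stand-in, not the released implementation: `compress` fakes entity-centric compression with keyword filtering (a real system would prompt an LLM for concepts, methods, problems, and findings), `embed` is a toy hashing encoder in place of a real embedding model, and `VectorIndex` is an in-memory substitute for a vector DB. All function and class names are hypothetical.

```python
import hashlib
import math

def compress(doc: str) -> str:
    """Stand-in for entity-centric semantic compression:
    keep only longer tokens as a crude 'entity' filter."""
    keywords = {w for w in doc.lower().split() if len(w) > 4}
    return " ".join(sorted(keywords))

def embed(text: str, dim: int = 64) -> list:
    """Toy hashed bag-of-words embedding, L2-normalized."""
    vec = [0.0] * dim
    for tok in text.split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorIndex:
    """Minimal in-memory stand-in for a vector DB."""
    def __init__(self):
        self.items = []  # (embedding, label) pairs

    def add(self, vec, label):
        self.items.append((vec, label))

    def search(self, query, k=3):
        # Rank by cosine similarity (vectors are already normalized).
        scored = sorted(self.items,
                        key=lambda it: -sum(a * b for a, b in zip(it[0], query)))
        return [label for _, label in scored[:k]]

# Phase 1: compress each document, embed it, and index it.
index = VectorIndex()
corpus = [
    ("Neural machine translation with attention mechanisms", "NLP"),
    ("Abstractive summarization with sequence-to-sequence models", "NLP"),
    ("Catalyst design for hydrogen fuel cell electrodes", "Chemistry"),
]
for doc, label in corpus:
    index.add(embed(compress(doc)), label)

# Phase 2: compress the query the same way, retrieve neighbors, and
# assign a label. A small LLM would make this call from the compressed
# evidence; a majority vote over neighbor labels stands in for it here.
query = "Attention-based neural translation models"
neighbors = index.search(embed(compress(query)), k=3)
prediction = max(set(neighbors), key=neighbors.count)
```

The key design point the sketch preserves is that the *same* compression runs at index time and query time, so retrieval compares like with like.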
The end-to-end workflow—**Phase 1: compression + indexing, Phase 2: retrieval + classification**—is illustrated in the framework diagram on *page 2*. Experiments on a large bilingual corpus with hierarchical, multi-label taxonomies show that a **4B-scale** Comp4Cls matches or outperforms **8B–14B** models, especially in fine-grained categories, while cutting token usage and compute. Moderate compression (often **~20% of entities**) preserves retrieval fidelity and boosts downstream F1, enabling lightweight, low-latency deployment in production pipelines. See *Table II on page 8* (compression vs. length), *Figure 6 on page 9* (retrieval quality under compression), and *Figure 7 on page 10* (accuracy vs. larger LLMs).
## Framework Diagram

*Framework diagram: Phase 1 compresses and indexes documents; Phase 2 retrieves nearest neighbors and classifies the query.*

## Model Details