# Linear Next Benchmark
Linear Next is a comprehensive benchmark designed to compare efficient transformer architectures fairly. This project evaluates different approaches, including linear attention, sparse attention, and other model structures, under identical training conditions and on the same datasets.
## Overview
The benchmark aims to provide an unbiased comparison of efficient transformer variants by ensuring all models are trained with the same datasets, hyperparameters, and evaluation metrics. This allows for a clear understanding of the relative strengths and weaknesses of each approach.
## Datasets
The benchmark utilizes a diverse collection of high-quality datasets:
### General Text
- **DCLM-pro**: A large-scale dataset containing diverse text from various domains, designed for general language modeling tasks.
- **Cosmopedia-v2**: A curated corpus of high-quality web content covering a wide range of topics, with emphasis on educational and informative material.
- **FineWeb-Edu**: A filtered collection of educational web content, focusing on instructional and academic text from reliable sources.
### Code
- **The Stack v2**: A comprehensive collection of source code spanning multiple programming languages, designed to train models on code understanding and generation tasks.
### Mathematics
- **FineMath**: A specialized dataset containing mathematical content, including equations, proofs, and mathematical explanations across various difficulty levels.
### Reasoning
- **Natural Reasoning**: A dataset focused on logical reasoning, problem-solving, and inference tasks, designed to improve models' reasoning capabilities.
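As a rough illustration, sources like those above could be streamed and mixed with the Hugging Face `datasets` library. The repository paths, mixing weights, and the assumption that every source exposes a uniform `text` column are all placeholders for the sketch, not the benchmark's actual configuration:

```python
from datasets import interleave_datasets, load_dataset

# Hypothetical Hub paths and mixing weights -- placeholders for illustration,
# not the benchmark's published data recipe.
SOURCES = [
    ("example-org/dclm-pro", 0.40),           # general text
    ("example-org/cosmopedia-v2", 0.15),      # curated web content
    ("example-org/fineweb-edu", 0.15),        # educational web text
    ("example-org/the-stack-v2", 0.15),       # source code
    ("example-org/finemath", 0.10),           # mathematics
    ("example-org/natural-reasoning", 0.05),  # reasoning
]

# Each source is assumed to expose a "text" column; real schemas differ
# (The Stack v2, for instance, stores code differently), so a real loader
# would normalize fields per dataset.
streams = [
    load_dataset(repo, split="train", streaming=True).select_columns(["text"])
    for repo, _ in SOURCES
]

# Sample documents from each stream in proportion to its mixing weight.
mixture = interleave_datasets(
    streams,
    probabilities=[weight for _, weight in SOURCES],
    seed=42,
)

for example in mixture.take(3):
    print(example["text"][:200])
```

`interleave_datasets` draws from each stream in proportion to its weight; with the default `first_exhausted` stopping strategy, iteration ends once the smallest source runs out.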
## Methodology
All models in the Linear Next benchmark are trained and evaluated with identical:
- Training datasets and data mixing ratios
- Optimization parameters
- Hardware configurations
- Evaluation metrics
This controlled setup ensures that performance differences can be attributed to architectural differences rather than to training conditions.
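As a minimal sketch of what "identical conditions" can look like in practice, the snippet below freezes one hypothetical training recipe and varies only the architecture between runs. All field names and values are illustrative assumptions, not the benchmark's published settings:

```python
from dataclasses import asdict, dataclass

# Illustrative shared training recipe -- the field names and values are
# assumptions for this sketch, not the benchmark's actual hyperparameters.
@dataclass(frozen=True)
class TrainConfig:
    total_tokens: int = 100_000_000_000  # token budget shared by every run
    seq_len: int = 2048
    global_batch_size: int = 1024
    learning_rate: float = 3e-4
    warmup_steps: int = 2000
    weight_decay: float = 0.1
    seed: int = 42

SHARED = TrainConfig()

# Only the architecture varies between runs; everything else is frozen,
# so performance gaps reflect the architecture rather than the recipe.
for arch in ("linear_attention", "sparse_attention", "softmax_attention"):
    run_spec = {"architecture": arch, **asdict(SHARED)}
    print(run_spec)  # a real harness would launch a training job here
```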
## Results
Detailed benchmark results, including training curves, inference speed, memory usage, and performance metrics across different tasks, are available in the project repository.