cff-version: 1.2.0
title: "Q-TensorFormer: Quantum-Enhanced Tensor Network LLM Compression Engine"
message: "If you use this software in your research, please cite it using the metadata below."
authors:
  - given-names: Premchan369
    affiliation: ""
repository-code: "https://huggingface.co/Premchan369/q-tensorformer"
url: "https://huggingface.co/Premchan369/q-tensorformer"
abstract: >-
  Q-TensorFormer is a hybrid transformer architecture that compresses feed-forward
  layers using Tensor-Train (TT) decomposition and enhances token representations
  via real quantum circuits, with adaptive TT-rank scheduling guided by attention
  entropy. It achieves 50-70% parameter reduction at equivalent perplexity.
keywords:
  - tensor networks
  - quantum machine learning
  - model compression
  - transformer
  - language modeling
  - efficient deep learning
license: Apache-2.0
version: 3.0.0
date-released: 2026-05-06
|
|