bhogan committed on
Commit 38ea8ac · verified · 1 Parent(s): e90eac4

Update README.md

Files changed (1): README.md (+15 −2)
@@ -11,7 +11,8 @@ base_model:
 
 **qqWen-7B-RL** is a 7-billion parameter language model specifically designed for advanced reasoning and code generation in the Q programming language. Built upon the robust Qwen 2.5 architecture, this model has undergone a comprehensive three-stage training process: pretraining, supervised fine-tuning (SFT), and reinforcement learning (RL) for the Q programming language.
 
-**Associated Technical Report**: [Link to paper will be added here]
+**Associated Technical Report**: [Report](https://arxiv.org/abs/2508.06813)
+
 
 ## 🔤 About Q Programming Language
 
@@ -32,4 +33,16 @@ Q is a high-performance, vector-oriented programming language developed by Kx Sy
 
 ## 📝 Citation
 
-If you use this model in your research or applications, please cite our technical report.
+If you use this model in your research or applications, please cite our technical report.
+```
+@misc{hogan2025technicalreportfullstackfinetuning,
+      title={Technical Report: Full-Stack Fine-Tuning for the Q Programming Language},
+      author={Brendan R. Hogan and Will Brown and Adel Boyarsky and Anderson Schneider and Yuriy Nevmyvaka},
+      year={2025},
+      eprint={2508.06813},
+      archivePrefix={arXiv},
+      primaryClass={cs.LG},
+      url={https://arxiv.org/abs/2508.06813},
+}
+```
+