qqWen-Series
Collection
Based on the Qwen-2.5 series - models fine-tuned for the Q programming language.
qqWen-7B-RL is a 7-billion-parameter language model designed for advanced reasoning and code generation in the Q programming language. Built on the Qwen 2.5 architecture, it has undergone a three-stage training process targeting Q: pretraining, supervised fine-tuning (SFT), and reinforcement learning (RL).
Associated technical report: https://arxiv.org/abs/2508.06813
Q is a high-performance, vector-oriented programming language developed by Kx Systems, used primarily in financial applications such as high-frequency trading analytics and the kdb+ time-series database.
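For readers unfamiliar with Q, a minimal sketch of its vector-oriented style (an illustrative example, not drawn from the model's training data):

```q
/ vectors are first-class; operators apply elementwise, no explicit loops
p:101.5 102 99.75 103.25    / a float vector of prices
avg p                       / arithmetic mean of the whole vector
p*1.1                       / scale every element at once
{x*x} each til 5            / squares of 0..4 via a lambda mapped with each
```

This terseness is part of what makes Q a challenging target for general-purpose code models, and motivates the Q-specific fine-tuning described above.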
If you use this model in your research or applications, please cite our technical report.
@misc{hogan2025technicalreportfullstackfinetuning,
title={Technical Report: Full-Stack Fine-Tuning for the Q Programming Language},
author={Brendan R. Hogan and Will Brown and Adel Boyarsky and Anderson Schneider and Yuriy Nevmyvaka},
year={2025},
eprint={2508.06813},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.06813},
}