Qwen2.5-CoderX-7B-v0.7
With the recommended prompting and sampling settings (temperature = 1.5, min_p = 0.1), the model should produce outputs that are close to or on par with proprietary models such as Gemini 2.5 Pro.

By default the model gives deep, production-ready responses. For shorter outputs, try prompts like "be concise" or "code only". A loading and generation sketch with these settings follows below.
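The following is a minimal sketch of applying the recommended sampling settings via Hugging Face transformers. The repo id is a placeholder (the card does not give the full Hub path), and `min_p` support requires a recent transformers version; both are assumptions, not part of the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with the actual Hub path for Qwen2.5-CoderX-7B-v0.7.
model_id = "your-username/Qwen2.5-CoderX-7B-v0.7"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "Write a Python function that parses a CSV file. Code only."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=1.5,  # recommended setting from this card
    min_p=0.1,        # recommended setting; needs a transformers version with min_p sampling
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```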