Quantized GGUF models for UIGEN-T3-14B-Instruct
This repository contains GGUF quantized versions of qingy2024/UIGEN-T3-14B-Instruct.
Available quantizations:
- FP16 (full precision)
- Q2_K
- Q3_K_L
- Q3_K_M
- Q3_K_S
- Q4_K_M
- Q4_K_S
- Q5_K_M
- Q5_K_S
- Q6_K
- Q8_0
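Choosing between these files is a trade-off between disk/VRAM footprint and output quality. A back-of-the-envelope file-size estimate can be sketched from approximate bits-per-weight figures; note that the bits-per-weight numbers below are rough community approximations (not values published in this repository), and the ~14.8B parameter count is assumed from the Qwen3-14B base model:

```python
# Rough size estimator for the quantization levels listed above.
# The bits-per-weight (bpw) values are approximate, commonly cited
# figures for GGUF k-quants -- NOT authoritative numbers for this repo.
APPROX_BPW = {
    "FP16": 16.0, "Q8_0": 8.5, "Q6_K": 6.6,
    "Q5_K_M": 5.7, "Q5_K_S": 5.5,
    "Q4_K_M": 4.8, "Q4_K_S": 4.6,
    "Q3_K_L": 4.0, "Q3_K_M": 3.9, "Q3_K_S": 3.5,
    "Q2_K": 2.6,
}

def estimated_file_gb(quant: str, n_params: float = 14.8e9) -> float:
    """Estimate the GGUF file size in GB for a given quant level.

    n_params defaults to ~14.8B, the assumed size of the Qwen3-14B base.
    """
    return n_params * APPROX_BPW[quant] / 8 / 1e9

for q in ("Q2_K", "Q4_K_M", "Q8_0", "FP16"):
    print(f"{q}: ~{estimated_file_gb(q):.1f} GB")
```

For example, Q4_K_M lands near 9 GB, while FP16 is close to 30 GB; the actual file sizes in the repository's file list are the authoritative reference.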
Original model
All quantizations were produced from qingy2024/UIGEN-T3-14B-Instruct.
Generated on Wed Jun 4 23:37:53 UTC 2025
Model tree for Tesslate/UIGEN-T3-14B-Instruct-Old-GGUF
- Base model: Qwen/Qwen3-14B-Base
- Finetuned: Qwen/Qwen3-14B
- Finetuned: Tesslate/UIGEN-T3-14B-Instruct-Old