---
base_model: emre/gemma-3-27b-it-tr-reasoning40k-4bit
base_model_relation: quantized
tags:
- gguf
- q4_k_m
- gemma3
- 27b
- unsloth
- transformers
- llama.cpp
- text-generation-inference
library_name: llama.cpp
language:
- en
- tr
---
# **Gemma-3-27B-it-tr-reasoning · Q4_K_M (GGUF)**  

> A GGUF **Q4_K_M** quantization of [*emre/gemma-3-27b-it-tr-reasoning40k-4bit*](https://huggingface.co/emre/gemma-3-27b-it-tr-reasoning40k-4bit) for ultra-fast, low-RAM local inference with **llama.cpp** (and compatible back-ends).

No weights were changed beyond quantization; alignment, vocabulary and tokenizer remain intact.

- **Developed by:** emre

- **Fine-tuned from:** unsloth/gemma-3-27b-it-unsloth-bnb-4bit

This Gemma 3 model was trained 2× faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
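## Usage

A minimal way to run this quantization locally is llama.cpp's `llama-cli`. The sketch below assumes the GGUF file name follows the usual `<model>-Q4_K_M.gguf` convention; substitute the actual file shipped in this repository.

```shell
# Minimal llama.cpp invocation (a sketch; the .gguf file name is an
# assumption -- use the file actually listed in this repository).
# -m  path to the GGUF weights
# -c  context length in tokens
# -ngl number of layers to offload to the GPU (99 = as many as fit)
# -p  the prompt (the model is tuned for Turkish reasoning, so a
#     Turkish prompt is a natural smoke test)
./llama-cli \
  -m gemma-3-27b-it-tr-reasoning40k-Q4_K_M.gguf \
  -c 4096 \
  -ngl 99 \
  -p "Merhaba, kendini kısaca tanıtır mısın?"
```

Any llama.cpp-compatible back-end (e.g. a server exposing the same GGUF file) should load the model the same way; only the front-end flags differ.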

---