---
base_model: emre/gemma-3-27b-it-tr-reasoning40k-4bit
base_model_relation: quantized
tags:
  - gguf
  - q4_k_m
  - gemma3
  - 27b
  - unsloth
  - transformers
  - llama.cpp
  - text-generation-inference
library_name: llama.cpp
language:
  - en
  - tr
---

# Gemma-3-27B-it-tr-reasoning · Q4_K_M (GGUF)

A GGUF Q4_K_M quantization of `emre/gemma-3-27b-it-tr-reasoning40k-4bit` for fast, low-memory local inference with llama.cpp and compatible back-ends.
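As a minimal sketch, the quantized model can be served with llama.cpp's bundled CLI. The repository and GGUF filenames below are placeholders, not confirmed by this card; check the repo's file listing for the actual names:

```shell
# Fetch the GGUF file from the Hub.
# <repo-id> and the .gguf filename are hypothetical placeholders.
huggingface-cli download <repo-id> \
  gemma-3-27b-it-tr-reasoning40k.Q4_K_M.gguf --local-dir .

# Interactive chat with llama.cpp (llama-cli ships with current llama.cpp builds):
# -m model path, -p prompt, -n max tokens to generate, -c context size
./llama-cli -m gemma-3-27b-it-tr-reasoning40k.Q4_K_M.gguf \
  -p "Merhaba, kendini tanıtır mısın?" \
  -n 256 -c 4096
```

At Q4_K_M the 27B weights occupy roughly 16–17 GB on disk, so a machine with about 20 GB of free RAM (or VRAM, with GPU offload) is a reasonable baseline.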

No weights were changed beyond quantization; the model's alignment, vocabulary, and tokenizer remain intact.

- **Developed by:** emre
- **Finetuned from model:** unsloth/gemma-3-27b-it-unsloth-bnb-4bit

This gemma3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.