MedScholar-1.5B-f32-GGUF

MedScholar-1.5B is a compact, instruction-aligned medical question-answering model based on Qwen2.5-1.5B-Instruct, fine-tuned on 1 million samples from the MIRIAD-4.4M dataset. It is designed for research and educational exploration of clinical knowledge, not for diagnosis or medical decision-making. The model was trained with the Unsloth framework and QLoRA, uses a minimal QA prompt format, and is released under the Apache-2.0 and ODC-By 1.0 licenses. It is strictly intended for non-clinical, academic use; its output must not be used for real patient care or advice.
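
As a quick way to try the model locally, here is a minimal inference sketch using llama-cpp-python. It assumes the Q4_K_M file from the table in the next section has already been downloaded (a download sketch follows that table); the single-question chat message is also an assumption, since this card only states that a minimal QA prompt format is used.

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). Assumes MedScholar-1.5B.Q4_K_M.gguf
# is in the working directory; the sample question and sampling
# settings are illustrative assumptions, not values from this card.
from llama_cpp import Llama

llm = Llama(
    model_path="MedScholar-1.5B.Q4_K_M.gguf",
    n_ctx=2048,      # context window; raise or lower to fit available RAM
    verbose=False,
)

# GGUF exports of Qwen2.5-Instruct models typically embed the chat
# template, so the chat-completion API can be used directly.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the mechanism of action of metformin?"}
    ],
    max_tokens=256,
    temperature=0.2,  # low temperature for factual QA-style answers
)
print(response["choices"][0]["message"]["content"])
```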

Model Files

| File Name | Size | Quant Type |
|---|---|---|
| MedScholar-1.5B.F32.gguf | 6.18 GB | F32 |
| MedScholar-1.5B.BF16.gguf | 3.09 GB | BF16 |
| MedScholar-1.5B.F16.gguf | 3.09 GB | F16 |
| MedScholar-1.5B.Q8_0.gguf | 1.65 GB | Q8_0 |
| MedScholar-1.5B.Q6_K.gguf | 1.27 GB | Q6_K |
| MedScholar-1.5B.Q5_K_M.gguf | 1.13 GB | Q5_K_M |
| MedScholar-1.5B.Q5_K_S.gguf | 1.1 GB | Q5_K_S |
| MedScholar-1.5B.Q4_K_M.gguf | 986 MB | Q4_K_M |
| MedScholar-1.5B.Q4_K_S.gguf | 940 MB | Q4_K_S |
| MedScholar-1.5B.Q3_K_L.gguf | 880 MB | Q3_K_L |
| MedScholar-1.5B.Q3_K_M.gguf | 824 MB | Q3_K_M |
| MedScholar-1.5B.Q3_K_S.gguf | 761 MB | Q3_K_S |
| MedScholar-1.5B.Q2_K.gguf | 676 MB | Q2_K |
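
Any one of the files above can be fetched individually from the Hub. Below is a minimal sketch with huggingface_hub, assuming the repo id shown on this card and using the Q4_K_M file as a balanced size/quality default:

```python
# Sketch: download a single quant from the Hub (pip install huggingface_hub).
# The repo id comes from this card; swap the filename for any entry
# in the table above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="prithivMLmods/MedScholar-1.5B-f32-GGUF",
    filename="MedScholar-1.5B.Q4_K_M.gguf",  # ~986 MB per the table
)
print(f"Downloaded to: {model_path}")
```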

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[Image: ikawrakow's quant-type comparison graph]

Model Details

Format: GGUF
Model size: 1.54B params
Architecture: qwen2