Mistral-7B-Instruct-v0.3 quantized with mixed precision: This is a Mistral-7B-Instruct-v0.3 model in which the embedding layer and the output (head) layer are quantized to 8 bits, while the rest of the model uses 6-bit quantization. Keeping these quantization-sensitive layers at higher precision aims to preserve output quality while still capturing most of the size and inference-speed savings of 6-bit quantization.
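The mixed-6-8-bit naming and the BF16/U32 tensor layout listed below match MLX community conventions, so the checkpoint was plausibly produced with mlx-lm's `convert` API and a per-layer quantization predicate. A minimal sketch of such a recipe follows; the actual settings used for this checkpoint are not published, so the group size of 64 is an assumption:

```python
from mlx_lm import convert

# Hypothetical recreation of the mixed 6/8-bit recipe; the exact
# parameters used for this checkpoint are not published.
def mixed_6_8_bit(layer_path, layer, model_config):
    # Only layers that support quantization (e.g. linear/embedding
    # layers exposing `to_quantized`) are eligible.
    if not hasattr(layer, "to_quantized"):
        return False
    # Keep the embedding and output head at 8 bits for accuracy.
    if "lm_head" in layer_path or "embed_tokens" in layer_path:
        return {"group_size": 64, "bits": 8}
    # Quantize everything else to 6 bits.
    return {"group_size": 64, "bits": 6}

convert(
    hf_path="mistralai/Mistral-7B-Instruct-v0.3",
    mlx_path="Mistral-7B-Instruct-v0.3-mixed-6-8-bit",
    quantize=True,
    quant_predicate=mixed_6_8_bit,
)
```

The rationale for this split: the embedding and head layers map token ids to and from the vocabulary, so quantization error there affects every generated token, while spending two extra bits on them adds little overall size relative to the attention and MLP weights that dominate the parameter count.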

Format: Safetensors
Model size: 7.25B params
Tensor types: BF16 · U32
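The U32 tensors are consistent with MLX-style packed quantized weights, with the BF16 tensors holding the higher-precision values. Assuming the checkpoint targets mlx-lm, loading and generation would follow the standard `mlx_lm` API; a sketch:

```python
from mlx_lm import load, generate

# Load the quantized weights and tokenizer directly from the Hub.
model, tokenizer = load("dgomes03/Mistral-7B-Instruct-v0.3-mixed-6-8-bit")

# Build an instruction-formatted prompt via the chat template.
messages = [{"role": "user", "content": "Explain mixed-precision quantization."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```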

Model tree for dgomes03/Mistral-7B-Instruct-v0.3-mixed-6-8-bit: quantized from mistralai/Mistral-7B-Instruct-v0.3.