GeorgyGUF/Llama-4-Maverick-17B-128E-Instruct-q8-with-bf16-embedding-and-bf16-output.gguf
Pipeline: Image-Text-to-Text
Libraries: GGUF, PyTorch, Transformers
Languages: 12 languages
Tags: facebook, meta, llama, llama-4, conversational
arXiv: 2204.05149
License: llama4
Files and versions (branch: main)
1 contributor · 13 commits. Latest commit: a9edf86 (verified) by GeorgyGUF, "Update README.md", 4 months ago.
| File | Size | Last commit | Last updated |
| --- | --- | --- | --- |
| -00001-of-00010.gguf | 45.6 GB | Upload folder using huggingface_hub | 5 months ago |
| -00002-of-00010.gguf | 47.2 GB | Upload folder using huggingface_hub | 5 months ago |
| -00003-of-00010.gguf | 46.8 GB | Upload folder using huggingface_hub | 5 months ago |
| -00004-of-00010.gguf | 47.2 GB | Upload folder using huggingface_hub | 5 months ago |
| -00005-of-00010.gguf | 47.2 GB | Upload folder using huggingface_hub | 5 months ago |
| -00006-of-00010.gguf | 46.8 GB | Upload folder using huggingface_hub | 5 months ago |
| -00007-of-00010.gguf | 47.2 GB | Upload folder using huggingface_hub | 5 months ago |
| -00008-of-00010.gguf | 47.2 GB | Upload folder using huggingface_hub | 5 months ago |
| -00009-of-00010.gguf | 46.8 GB | Upload folder using huggingface_hub | 5 months ago |
| -00010-of-00010.gguf | 5.75 GB | Upload folder using huggingface_hub | 5 months ago |
| .gitattributes | 2.09 kB | Upload folder using huggingface_hub | 5 months ago |
| README.md | 32.9 kB | Update README.md | 4 months ago |

All files are marked Safe; the GGUF shards are stored via xet.
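
The shards can be fetched and run locally. Below is a minimal sketch (not taken from the model card) using `huggingface_hub` and `llama-cpp-python`: `REPO_ID` comes from this page, `FIRST_SHARD` is a hypothetical placeholder for the first split file listed above, and `n_ctx` is an arbitrary example value. Llama 4 support also assumes a sufficiently recent llama.cpp build.

```python
# Minimal sketch: download the GGUF shards and load them with llama-cpp-python.
from huggingface_hub import snapshot_download
from llama_cpp import Llama

REPO_ID = "GeorgyGUF/Llama-4-Maverick-17B-128E-Instruct-q8-with-bf16-embedding-and-bf16-output.gguf"

# Download only the GGUF shards into the local Hugging Face cache.
local_dir = snapshot_download(repo_id=REPO_ID, allow_patterns=["*.gguf"])

# FIRST_SHARD is a placeholder: point it at the *-00001-of-00010.gguf file from
# the listing above; llama.cpp picks up the remaining splits automatically.
FIRST_SHARD = f"{local_dir}/<first-shard>-00001-of-00010.gguf"

llm = Llama(model_path=FIRST_SHARD, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe this model in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

Note that the ten q8 shards add up to roughly 430 GB, so plan disk space (and memory, if loading fully) accordingly.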