\n"}}},{"rowIdx":305,"cells":{"modelId":{"kind":"string","value":"vkao8264/blip-yoda-captioning"},"author":{"kind":"string","value":"vkao8264"},"last_modified":{"kind":"timestamp","value":"2025-08-06T14:26:57Z","string":"2025-08-06T14:26:57Z"},"downloads":{"kind":"number","value":450,"string":"450"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","blip","image-to-text","en","base_model:Salesforce/blip-image-captioning-base","base_model:finetune:Salesforce/blip-image-captioning-base","license:bsd-3-clause","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"blip\",\n \"image-to-text\",\n \"en\",\n \"base_model:Salesforce/blip-image-captioning-base\",\n \"base_model:finetune:Salesforce/blip-image-captioning-base\",\n \"license:bsd-3-clause\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"image-to-text"},"createdAt":{"kind":"timestamp","value":"2025-06-21T07:49:02Z","string":"2025-06-21T07:49:02Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\nlanguage:\n- en\nbase_model:\n- Salesforce/blip-image-captioning-base\npipeline_tag: image-to-text\nlicense: bsd-3-clause\n---\n\n\nImage captioning model finetuned on BLIP-base, responds like how Yoda speaks,\n\n\"Sitting in a car, a man is\"\n\nTry the demo here: https://huggingface.co/spaces/vkao8264/Yoda_captioning\n\n## Model Details\n\n### Model Description\n\n\n\nAn image-to-text model finetuned on BLIP-base with the transformers package\n\n- **Developed by:** vkao8264\n- **Model type:** Image-to-text\n- **Language(s) (NLP):** English\n- **License:** bsd-3-clause\n- **Finetuned from model [optional]:** blip-image-captioning-base\n\n## Uses\n\n\n```\nfrom PIL import Image\nfrom transformers import AutoProcessor, BlipForConditionalGeneration\n\nprocessor = AutoProcessor.from_pretrained(\"Salesforce/blip-image-captioning-base\")\nmodel = BlipForConditionalGeneration.from_pretrained(\"vkao8264/blip-yoda-captioning\")\n\nfilepath = \"path-to-your-image\"\nraw_image = Image.open(filepath).convert('RGB')\n\ninputs = processor(raw_image, return_tensors=\"pt\").to(\"cuda\")\noutput_tokens = model.generate(**inputs)\ncaption = processor.decode(output_tokens[0], skip_special_tokens=True)\nprint(caption)\n```\n\n## Training Details\n\n### Training Data\n\n\n\nThe model was fine-tuned on 30000 image-caption pairs from the COCO captions dataset. Specifically, captions_train2014. 
\n\nBefore training, captions were changed to yoda-style captions using phi3 with few-shot learning\n\nScripts can be found on https://github.com/vincent8264/yoda_captioning"}}},{"rowIdx":306,"cells":{"modelId":{"kind":"string","value":"virusf/nllb-renpy"},"author":{"kind":"string","value":"virusf"},"last_modified":{"kind":"timestamp","value":"2025-08-06T14:20:44Z","string":"2025-08-06T14:20:44Z"},"downloads":{"kind":"number","value":11,"string":"11"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","m2m_100","text2text-generation","translation","gaming","renpy","visual-novel","french","en","fr","dataset:custom","base_model:facebook/nllb-200-distilled-600M","base_model:finetune:facebook/nllb-200-distilled-600M","license:cc-by-nc-4.0","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"m2m_100\",\n \"text2text-generation\",\n \"translation\",\n \"gaming\",\n \"renpy\",\n \"visual-novel\",\n \"french\",\n \"en\",\n \"fr\",\n \"dataset:custom\",\n \"base_model:facebook/nllb-200-distilled-600M\",\n \"base_model:finetune:facebook/nllb-200-distilled-600M\",\n \"license:cc-by-nc-4.0\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"translation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T13:48:59Z","string":"2025-08-06T13:48:59Z"},"card":{"kind":"string","value":"---\nlanguage:\n- en\n- fr\nlibrary_name: transformers\npipeline_tag: translation\nlicense: cc-by-nc-4.0\nbase_model: facebook/nllb-200-distilled-600M\ntags:\n- translation\n- gaming\n- renpy\n- visual-novel\n- french\ndatasets:\n- custom\nmetrics:\n- bleu\n- sacrebleu\nwidget:\n- text: \"Hello! Welcome to our game.\"\n example_title: \"Gaming Interface\"\n- text: \"I love you more than anything.\"\n example_title: \"Romance Dialogue\"\n- text: \"What do you choose?\"\n example_title: \"Choice Menu\"\n---\n\n# 🎮 NLLB-RenPy: Specialized French Gaming Translator\n\n## 🌟 Model Description\n\nThis model is a fine-tuned version of **facebook/nllb-200-distilled-600M** specifically trained for **English-to-French translation** in gaming contexts, particularly **RenPy visual novels**.\n\n### 🎯 Specialized For:\n- 🎮 Gaming interfaces and menus\n- 💬 Character dialogues and narratives \n- 💕 Romance and emotional expressions\n- 🔄 Interactive choices and options\n- 📱 UI elements and notifications\n\n### 🏆 Performance Highlights:\n- **Superior quality** vs Google Translate/DeepL for gaming\n- **Context-aware** translations maintaining gaming tone\n- **Optimized** for visual novel terminology\n- **Consistent** character voice preservation\n\n## 🚀 Quick Start\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n\n# Load model\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"virusf/nllb-renpy\")\ntokenizer = AutoTokenizer.from_pretrained(\"virusf/nllb-renpy\")\n\n# Translate\ntext = \"Hello! Welcome to our game.\"\ninputs = tokenizer(text, return_tensors=\"pt\")\noutputs = model.generate(**inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids(\"fra_Latn\"))\nresult = tokenizer.decode(outputs[0], skip_special_tokens=True)\nprint(result) # \"Bonjour! 
Bienvenue dans notre jeu.\"\n```\n\n## 📊 Training Details\n\n- **Base Model:** facebook/nllb-200-distilled-600M\n- **Training Data:** 15,000+ specialized gaming translations\n- **Languages:** English → French\n- **Epochs:** 2.0\n- **Final Loss:** 0.4441\n\n## 🎯 Use Cases\n\nPerfect for translating:\n\n- ✅ RenPy/Ren'Py visual novels\n- ✅ Gaming interfaces and menus\n- ✅ Character dialogues and stories\n- ✅ Interactive fiction content\n- ✅ Dating simulation games\n\n\n"}}},{"rowIdx":307,"cells":{"modelId":{"kind":"string","value":"h-grieve/blockassist-bc-bellowing_pouncing_horse_1754489901"},"author":{"kind":"string","value":"h-grieve"},"last_modified":{"kind":"timestamp","value":"2025-08-06T14:18:45Z","string":"2025-08-06T14:18:45Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["gensyn","blockassist","gensyn-blockassist","minecraft","bellowing pouncing horse","arxiv:2504.07091","region:us"],"string":"[\n \"gensyn\",\n \"blockassist\",\n \"gensyn-blockassist\",\n \"minecraft\",\n \"bellowing pouncing horse\",\n \"arxiv:2504.07091\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T14:18:33Z","string":"2025-08-06T14:18:33Z"},"card":{"kind":"string","value":"---\ntags:\n  - gensyn\n  - blockassist\n  - gensyn-blockassist\n  - minecraft\n  - bellowing pouncing horse\n---\n\n# Gensyn BlockAssist\n\nGensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).\n"}}},{"rowIdx":308,"cells":{"modelId":{"kind":"string","value":"Cseti/wan2.2-14B-Arcane_Jinx-lora-v1"},"author":{"kind":"string","value":"Cseti"},"last_modified":{"kind":"timestamp","value":"2025-08-06T14:17:48Z","string":"2025-08-06T14:17:48Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":1,"string":"1"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["text-to-video","lora","base_model:Wan-AI/Wan2.2-T2V-A14B","base_model:adapter:Wan-AI/Wan2.2-T2V-A14B","region:us"],"string":"[\n \"text-to-video\",\n \"lora\",\n \"base_model:Wan-AI/Wan2.2-T2V-A14B\",\n \"base_model:adapter:Wan-AI/Wan2.2-T2V-A14B\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-to-video"},"createdAt":{"kind":"timestamp","value":"2025-08-06T13:31:57Z","string":"2025-08-06T13:31:57Z"},"card":{"kind":"string","value":"---\nbase_model:\n- Wan-AI/Wan2.2-T2V-A14B\ntags:\n- text-to-video\n- lora\nwidget:\n- text: >-\n    \"[APPEARANCE] Nfj1nx wears a deep-cut, form-fitting black evening gown with\n    a high slit, allowing ease of movement and a striking silhouette. Her long\n    midnight-blue hair flows over one shoulder in polished waves. [ENVIRONMENT]\n    A dimly lit, smoky salon draped in shadows and flickering amber light.\n    Velvet armchairs, dark wooden décor, and heavy curtains define the\n    atmosphere. Smoke curls through the air, catching beams of light from\n    scattered wall lamps. Faint silhouettes shift in the background, hidden\n    behind haze and shadow. [CUT 1] Action: Nfj1nx stands still in the middle of\n    the salon, her pistol lowered at her side. Camera: Rapid arc shot circling\n    from her front-left to back-right at waist level. 
[CUT 2] Action: She raises\n    the pistol with a smooth, deliberate motion, arm fully extended and steady.\n    Camera: Fast dolly-in from floor level toward the gun, then tilting up to\n    catch her eyes.\"\n  output:\n    url: assets/test_00043.mp4\n---\n\n# wan 2.2 (14b T2V)\n\n\n\n## Inference\nFor inference I used ComfyUI.\n\n**The strength of the LoRA can differ from prompt to prompt. As a best practice, I suggest always checking the high model inference and adjusting the high noise LoRA strength or the steps accordingly. It is usually optimal when the character features are just beginning to appear in the high model inference, but aren't prominent yet.**\n\n**Trigger words**: Nfj1nx, blue hair\n\n**Strength**: 0.6-1.2\n\n## Training details\nTrained only on videos.\n\n### HIGH noise LoRA\n- dataset: 30 videos 480x270 25,33,65,81 frame videos\n- steps: 2130\n- LR: 5e-5\n- optimizer: AdamW Optimi\n- rank: 32\n- batch size: 1\n- gradient accumulation steps: 1\n- min_t = 0.875\n- max_t = 1\n\n### LOW noise LoRA\n- dataset: 42 videos 640x360 25,33,65 frame videos\n- steps: 2730\n- LR: 5e-5\n- optimizer: AdamW Optimi\n- rank: 32\n- batch size: 1\n- gradient accumulation steps: 1\n- min_t = 0\n- max_t = 0.875\n\nFor training I used the diffusion-pipe repo.\n\n***Important Notes:***\nThis LoRA is created as part of a fan project for research purposes only and is not intended for commercial use. It is based on the movies, which are protected by copyright. Users utilize the model at their own risk. Users are obligated to comply with copyright laws and applicable regulations. The model has been developed for non-commercial purposes, and it is not my intention to infringe on any copyright. I assume no responsibility for any damages or legal consequences arising from the use of the model."}}},{"rowIdx":309,"cells":{"modelId":{"kind":"string","value":"kerrlc/apicalling"},"author":{"kind":"string","value":"kerrlc"},"last_modified":{"kind":"timestamp","value":"2025-08-06T14:13:27Z","string":"2025-08-06T14:13:27Z"},"downloads":{"kind":"number","value":9,"string":"9"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","mistral","text-generation","arxiv:1910.09700","autotrain_compatible","text-generation-inference","endpoints_compatible","4-bit","bitsandbytes","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"mistral\",\n \"text-generation\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"4-bit\",\n \"bitsandbytes\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:14:05Z","string":"2025-08-06T11:14:05Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags: []\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. 
This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":310,"cells":{"modelId":{"kind":"string","value":"attila-fetchai/gpt-oss-20b-identity-run1"},"author":{"kind":"string","value":"attila-fetchai"},"last_modified":{"kind":"timestamp","value":"2025-08-06T14:12:26Z","string":"2025-08-06T14:12:26Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","generated_from_trainer","trl","sft","base_model:openai/gpt-oss-20b","base_model:finetune:openai/gpt-oss-20b","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"generated_from_trainer\",\n \"trl\",\n \"sft\",\n \"base_model:openai/gpt-oss-20b\",\n \"base_model:finetune:openai/gpt-oss-20b\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T13:03:56Z","string":"2025-08-06T13:03:56Z"},"card":{"kind":"string","value":"---\nbase_model: openai/gpt-oss-20b\nlibrary_name: transformers\nmodel_name: gpt-oss-20b-identity-run1\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for gpt-oss-20b-identity-run1\n\nThis model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"attila-fetchai/gpt-oss-20b-identity-run1\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n[\"Visualize](https://wandb.ai/fetch-ai/experiment-1/runs/azhgyb6g) \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.21.0\n- Transformers: 4.55.0\n- Pytorch: 2.7.1+cu126\n- Datasets: 4.0.0\n- Tokenizers: 0.21.4\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = 
{\\url{https://github.com/huggingface/trl}}\n}\n```"}}},{"rowIdx":311,"cells":{"modelId":{"kind":"string","value":"BreeseBeat/blue"},"author":{"kind":"string","value":"BreeseBeat"},"last_modified":{"kind":"timestamp","value":"2025-08-06T14:09:36Z","string":"2025-08-06T14:09:36Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["license:apache-2.0","region:us"],"string":"[\n \"license:apache-2.0\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T14:09:35Z","string":"2025-08-06T14:09:35Z"},"card":{"kind":"string","value":"---\r\nlicense: apache-2.0\r\n---\r\n"}}},{"rowIdx":312,"cells":{"modelId":{"kind":"string","value":"giovannidemuri/llama8b-er-afg-v59-seed2-hx"},"author":{"kind":"string","value":"giovannidemuri"},"last_modified":{"kind":"timestamp","value":"2025-08-06T14:07:19Z","string":"2025-08-06T14:07:19Z"},"downloads":{"kind":"number","value":2,"string":"2"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","llama","text-generation","generated_from_trainer","conversational","base_model:meta-llama/Llama-3.1-8B","base_model:finetune:meta-llama/Llama-3.1-8B","license:llama3.1","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"llama\",\n \"text-generation\",\n \"generated_from_trainer\",\n \"conversational\",\n \"base_model:meta-llama/Llama-3.1-8B\",\n \"base_model:finetune:meta-llama/Llama-3.1-8B\",\n \"license:llama3.1\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:13:34Z","string":"2025-08-06T12:13:34Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\nlicense: llama3.1\nbase_model: meta-llama/Llama-3.1-8B\ntags:\n- generated_from_trainer\nmodel-index:\n- name: llama8b-er-afg-v59-seed2-hx\n results: []\n---\n\n\n\n# llama8b-er-afg-v59-seed2-hx\n\nThis model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 2\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 2\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.4\n- Pytorch 2.7.1+cu128\n- Datasets 3.6.0\n- Tokenizers 
0.21.2\n"}}},{"rowIdx":313,"cells":{"modelId":{"kind":"string","value":"hi-paris/ssml-text2breaks-fr-lora"},"author":{"kind":"string","value":"hi-paris"},"last_modified":{"kind":"timestamp","value":"2025-08-06T14:01:56Z","string":"2025-08-06T14:01:56Z"},"downloads":{"kind":"number","value":166,"string":"166"},"likes":{"kind":"number","value":13,"string":"13"},"library_name":{"kind":"string","value":"peft"},"tags":{"kind":"list like","value":["peft","safetensors","text-to-speech","lora","ssml","qwen2.5","text-generation","fr","base_model:Qwen/Qwen2.5-7B","base_model:adapter:Qwen/Qwen2.5-7B","license:apache-2.0","region:us"],"string":"[\n \"peft\",\n \"safetensors\",\n \"text-to-speech\",\n \"lora\",\n \"ssml\",\n \"qwen2.5\",\n \"text-generation\",\n \"fr\",\n \"base_model:Qwen/Qwen2.5-7B\",\n \"base_model:adapter:Qwen/Qwen2.5-7B\",\n \"license:apache-2.0\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-07-26T14:31:36Z","string":"2025-07-26T14:31:36Z"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\nbase_model: Qwen/Qwen2.5-7B\nlibrary_name: peft\nlanguage:\n- fr\ntags:\n- text-to-speech\n- lora\n- peft\n- ssml\n- qwen2.5\npipeline_tag: text-generation\n---\n\n# 🗣️ French Text-to-Breaks LoRA Model\n\n**hi-paris/ssml-text2breaks-fr-lora** is a LoRA adapter fine-tuned on Qwen2.5-7B to predict natural pause locations in French text by adding symbolic `` markers.\n\nThis is the **first stage** of a two-step SSML cascade pipeline for improving French text-to-speech prosody control.\n\n> 📄 **Paper**: *\"Improving Synthetic Speech Quality via SSML Prosody Control\"* \n> **Authors**: Nassima Ould-Ouali, Awais Sani, Ruben Bueno, Jonah Dauvet, Tim Luka Horstmann, Eric Moulines \n> **Conference**: ICNLSP 2025 \n> 🔗 **Demo & Audio Samples**: https://horstmann.tech/ssml-prosody-control/\n\n## 🧩 Pipeline Overview\n\n| Stage | Model | Purpose |\n|-------|-------|---------|\n| 1️⃣ | **hi-paris/ssml-text2breaks-fr-lora** | Predicts natural pause locations |\n| 2️⃣ | [hi-paris/ssml-breaks2ssml-fr-lora](https://huggingface.co/hi-paris/ssml-breaks2ssml-fr-lora) | Converts breaks to full SSML with prosody |\n\n## ✨ Example\n\n**Input:**\n```\nBonjour comment allez-vous aujourd'hui ?\n```\n\n**Output:**\n```\nBonjour comment allez-vous aujourd'hui ?\n```\n\n## 🚀 Quick Start\n\n### Installation\n\n```bash\npip install torch transformers peft accelerate\n```\n\n### Basic Usage\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nfrom peft import PeftModel\nimport torch\n\n# Load base model and tokenizer\nbase_model = AutoModelForCausalLM.from_pretrained(\n \"Qwen/Qwen2.5-7B\",\n torch_dtype=torch.float16,\n device_map=\"auto\"\n)\ntokenizer = AutoTokenizer.from_pretrained(\"Qwen/Qwen2.5-7B\")\n\n# Load LoRA adapter\nmodel = PeftModel.from_pretrained(base_model, \"hi-paris/ssml-text2breaks-fr-lora\")\n\n# Prepare input\ntext = \"Bonjour comment allez-vous aujourd'hui ?\"\nformatted_input = f\"### Task:\\nConvert text to SSML with pauses:\\n\\n### Text:\\n{text}\\n\\n### SSML:\\n\"\n\n# Generate\ninputs = tokenizer(formatted_input, return_tensors=\"pt\").to(model.device)\nwith torch.no_grad():\n outputs = model.generate(\n **inputs,\n max_new_tokens=256,\n temperature=0.7,\n do_sample=True,\n pad_token_id=tokenizer.eos_token_id\n )\n\nresponse = tokenizer.decode(outputs[0], skip_special_tokens=True)\nresult = response.split(\"### SSML:\\n\")[-1].strip()\nprint(result) # \"Bonjour comment allez-vous 
aujourd'hui ?\"\n```\n\n### Production Usage (Recommended)\n\nFor production use with memory optimization and full cascade, see our [inference repository](https://github.com/TimLukaHorstmann/cascading_model):\n\n```python\nfrom text2breaks_inference import Text2BreaksInference\n\n# Memory-efficient shared model approach\nmodel = Text2BreaksInference()\nresult = model.predict(\"Bonjour comment allez-vous aujourd'hui ?\")\n```\n\n## 🔧 Full Cascade Example\n\n```python\nfrom breaks2ssml_inference import CascadedInference\n\n# Initialize full pipeline (memory efficient)\ncascade = CascadedInference()\n\n# Convert plain text directly to full SSML\ntext = \"Bonjour comment allez-vous aujourd'hui ?\"\nssml_output = cascade.predict(text)\nprint(ssml_output) \n# Output: 'Bonjour comment allez-vous aujourd'hui ?'\n```\n\n## 🧠 Model Details\n\n- **Base Model**: [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)\n- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)\n- **LoRA Rank**: 8, Alpha: 16\n- **Target Modules**: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`\n- **Training**: 5 epochs, batch size 1 with gradient accumulation\n- **Language**: French\n- **Model Size**: 7B parameters (LoRA adapter: ~81MB)\n- **License**: Apache 2.0\n\n## 📊 Performance\n\nThe model achieves high accuracy in predicting natural pause locations in French text, contributing to improved prosody in text-to-speech synthesis when combined with the second-stage model.\n\n## 🔗 Resources\n\n- **Full Pipeline Code**: https://github.com/TimLukaHorstmann/cascading_model\n- **Interactive Demo**: [Colab Notebook](https://colab.research.google.com/drive/1bFcbJQY9OuY0_zlscqkf9PIgd3dUrIKs?usp=sharing)\n- **Stage 2 Model**: [hi-paris/ssml-breaks2ssml-fr-lora](https://huggingface.co/hi-paris/ssml-breaks2ssml-fr-lora)\n\n## 📖 Citation\n\n```bibtex\n@inproceedings{ould-ouali2025_improving,\n title = {Improving Synthetic Speech Quality via SSML Prosody Control},\n author = {Ould-Ouali, Nassima and Sani, Awais and Bueno, Ruben and Dauvet, Jonah and Horstmann, Tim Luka and Moulines, Eric},\n booktitle = {Proceedings of the 8th International Conference on Natural Language and Speech Processing (ICNLSP)},\n year = {2025},\n url = {https://huggingface.co/hi-paris}\n}\n```\n\n## 📜 License\n\nApache 2.0 License (same as the base Qwen2.5-7B model)\n"}}},{"rowIdx":314,"cells":{"modelId":{"kind":"string","value":"eilserion/modelq4"},"author":{"kind":"string","value":"eilserion"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:57:36Z","string":"2025-08-06T13:57:36Z"},"downloads":{"kind":"number","value":163,"string":"163"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","gguf","llama","text-generation-inference","unsloth","en","license:apache-2.0","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"gguf\",\n \"llama\",\n \"text-generation-inference\",\n \"unsloth\",\n \"en\",\n \"license:apache-2.0\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-07-30T13:19:39Z","string":"2025-07-30T13:19:39Z"},"card":{"kind":"string","value":"---\nbase_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- llama\n- gguf\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** eilserion\n- **License:** apache-2.0\n- **Finetuned 
from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit\n\nThis llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n"}}},{"rowIdx":315,"cells":{"modelId":{"kind":"string","value":"Ivan512/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_rangy_porpoise"},"author":{"kind":"string","value":"Ivan512"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:54:37Z","string":"2025-08-06T13:54:37Z"},"downloads":{"kind":"number","value":101,"string":"101"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen2","text-generation","rl-swarm","genrl-swarm","grpo","gensyn","I am burrowing_rangy_porpoise","arxiv:1910.09700","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen2\",\n \"text-generation\",\n \"rl-swarm\",\n \"genrl-swarm\",\n \"grpo\",\n \"gensyn\",\n \"I am burrowing_rangy_porpoise\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-07-30T10:24:32Z","string":"2025-07-30T10:24:32Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags:\n- rl-swarm\n- genrl-swarm\n- grpo\n- gensyn\n- I am burrowing_rangy_porpoise\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":316,"cells":{"modelId":{"kind":"string","value":"gabrielloiseau/CALE-MBERT-en"},"author":{"kind":"string","value":"gabrielloiseau"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:53:32Z","string":"2025-08-06T13:53:32Z"},"downloads":{"kind":"number","value":10,"string":"10"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"sentence-transformers"},"tags":{"kind":"list like","value":["sentence-transformers","safetensors","modernbert","sentence-similarity","feature-extraction","loss:ContrastiveLoss","dataset:gabrielloiseau/CALE-SPCD","base_model:answerdotai/ModernBERT-large","base_model:finetune:answerdotai/ModernBERT-large","license:apache-2.0","autotrain_compatible","text-embeddings-inference","endpoints_compatible","region:us"],"string":"[\n \"sentence-transformers\",\n \"safetensors\",\n \"modernbert\",\n \"sentence-similarity\",\n \"feature-extraction\",\n \"loss:ContrastiveLoss\",\n \"dataset:gabrielloiseau/CALE-SPCD\",\n \"base_model:answerdotai/ModernBERT-large\",\n \"base_model:finetune:answerdotai/ModernBERT-large\",\n \"license:apache-2.0\",\n \"autotrain_compatible\",\n \"text-embeddings-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"sentence-similarity"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:22:28Z","string":"2025-08-06T12:22:28Z"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\ntags:\n- sentence-transformers\n- sentence-similarity\n- feature-extraction\n- loss:ContrastiveLoss\nbase_model: answerdotai/ModernBERT-large\npipeline_tag: 
sentence-similarity\ndatasets:\n- gabrielloiseau/CALE-SPCD\n---\n\n# CALE-MBERT-en\n\nThis is a [sentence-transformers](https://www.SBERT.net) model: It maps occurrences of a word to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\n\n\n## Usage (Sentence-Transformers)\n\n```\npip install -U sentence-transformers\n```\n\nThen you can use the model like this:\n\n```python\nfrom sentence_transformers import SentenceTransformer\n\n# 1. Load CALE model\nmodel = SentenceTransformer(\"gabrielloiseau/CALE-MBERT-en\")\n\nsentences = [\n    \"the boy could easily distinguish the different note values\",\n    \"the patient’s ability to recognize forms and shapes\",\n    \"the government had refused to recognize their autonomy and existence as a state\",\n]\n\n# 2. Calculate embeddings\nembeddings = model.encode(sentences)\nprint(embeddings.shape)\n# [3, 1024]\n\n# 3. Calculate the embedding similarities\nsimilarities = model.similarity(embeddings, embeddings)\nprint(similarities)\n# tensor([[1.0000, 0.8725, 0.5957],\n#         [0.8725, 1.0000, 0.5861],\n#         [0.5957, 0.5861, 1.0000]])\n```\n\n## Full Model Architecture\n```\nSentenceTransformer(\n  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})\n  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})\n)\n```"}}},{"rowIdx":317,"cells":{"modelId":{"kind":"string","value":"visurg/LemonFM"},"author":{"kind":"string","value":"visurg"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:46:56Z","string":"2025-08-06T13:46:56Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":2,"string":"2"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["arxiv:2503.19740","license:apache-2.0","region:us"],"string":"[\n \"arxiv:2503.19740\",\n \"license:apache-2.0\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-03-18T14:31:13Z","string":"2025-03-18T14:31:13Z"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\n---\n
\n \n
\n\n\n[📚 Paper](https://arxiv.org/abs/2503.19740) - [🤖 GitHub](https://github.com/visurg-ai/LEMON) \n\n\n\nThis is the official Hugging Face repository for the paper [LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings](https://arxiv.org/abs/2503.19740).\n\nThis repository provides open access to the *LemonFM* foundation model. For the *LEMON* dataset and our code, please see our at our GitHub repository at [🤖 Github](https://github.com/visurg-ai/LEMON) .\n\n*LemonFM* is an image foundation model for surgery, it receives an image as input and produces a feature vector of 1536 features as output. \n\n\nIf you use our dataset, model, or code in your research, please cite our paper:\n\n```\n@misc{che2025lemonlargeendoscopicmonocular,\n title={LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings}, \n author={Chengan Che and Chao Wang and Tom Vercauteren and Sophia Tsoka and Luis C. Garcia-Peraza-Herrera},\n year={2025},\n eprint={2503.19740},\n archivePrefix={arXiv},\n primaryClass={cs.CV},\n url={https://arxiv.org/abs/2503.19740}, \n}\n```\n\nAbstract\n--------\nTraditional open-access datasets focusing on surgical procedures are often limited by their small size, typically consisting of fewer than 100 videos and less than 30 hours of footage, which leads to poor model generalization. To address this constraint, a new dataset called LEMON has been compiled using a novel aggregation pipeline that collects high-resolution videos from online sources. Featuring an extensive collection of over 4K surgical videos totaling 938 hours (85 million frames) of high-quality footage across multiple procedure types, LEMON offers a comprehensive resource surpassing existing alternatives in size and scope, including two novel downstream tasks. To demonstrate the effectiveness of this diverse dataset, we introduce LemonFM, a foundation model pretrained on LEMON using a novel self-supervised augmented knowledge distillation approach. LemonFM consistently outperforms existing surgical foundation models across four downstream tasks and six datasets, achieving significant gains in surgical phase recognition (+9.5pp, +9.4pp, and +8.4pp of Jaccard in AutoLaparo, M2CAI16, and Cholec80), surgical action recognition (+4.4pp of mAP in CholecT50), surgical tool presence detection (+5.3pp and +10.2pp of mAP in Cholec80 and GraSP), and surgical semantic segmentation (+8.3pp of mDice in CholecSeg8k). 
LEMON and LemonFM will serve as foundational resources for the research community and industry, accelerating progress in developing autonomous robotic surgery systems and ultimately contributing to safer and more accessible surgical care worldwide.\n\nHow to run our LemonFM foundation model to extract features from your video frames\n----------------------------------------------------------------------------------\n\n ```python\n import torch\n import numpy as np\n from PIL import Image\n from model_loader import build_LemonFM\n\n # Load the pre-trained LemonFM model and move it to the GPU\n lemonfm = build_LemonFM(pretrained_weights = 'your path to the LemonFM')\n lemonfm.eval()\n lemonfm.to('cuda')\n\n # Load the image and convert it to a PyTorch tensor (NCHW float, on the GPU)\n img_path = 'path/to/your/image.jpg'\n img = Image.open(img_path)\n img = img.resize((224, 224))\n img_tensor = torch.tensor(np.array(img)).permute(2, 0, 1).unsqueeze(0).float().to('cuda')\n\n # Extract features from the image using the LemonFM model\n outputs = lemonfm(img_tensor)\n ```\n"}}},{"rowIdx":318,"cells":{"modelId":{"kind":"string","value":"mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit"},"author":{"kind":"string","value":"mlx-community"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:45:30Z","string":"2025-08-06T13:45:30Z"},"downloads":{"kind":"number","value":1317,"string":"1,317"},"likes":{"kind":"number","value":8,"string":"8"},"library_name":{"kind":"string","value":"mlx"},"tags":{"kind":"list like","value":["mlx","safetensors","qwen3_moe","text-generation","conversational","base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct","base_model:quantized:Qwen/Qwen3-Coder-30B-A3B-Instruct","license:apache-2.0","4-bit","region:us"],"string":"[\n \"mlx\",\n \"safetensors\",\n \"qwen3_moe\",\n \"text-generation\",\n \"conversational\",\n \"base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct\",\n \"base_model:quantized:Qwen/Qwen3-Coder-30B-A3B-Instruct\",\n \"license:apache-2.0\",\n \"4-bit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-07-31T15:00:51Z","string":"2025-07-31T15:00:51Z"},"card":{"kind":"string","value":"---\nlibrary_name: mlx\nlicense: apache-2.0\nlicense_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE\npipeline_tag: text-generation\ntags:\n- mlx\nbase_model: Qwen/Qwen3-Coder-30B-A3B-Instruct\n---\n\n# mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit\n\nThis model [mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit) was\nconverted to MLX format from [Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct)\nusing mlx-lm version **0.26.3**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit\")\n\nprompt = \"hello\"\n\nif tokenizer.chat_template is not None:\n    messages = [{\"role\": \"user\", \"content\": prompt}]\n    prompt = tokenizer.apply_chat_template(\n        messages, add_generation_prompt=True\n    )\n\nresponse = generate(model, tokenizer, prompt=prompt, 
verbose=True)\n```\n"}}},{"rowIdx":319,"cells":{"modelId":{"kind":"string","value":"mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit"},"author":{"kind":"string","value":"mlx-community"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:40:37Z","string":"2025-08-06T13:40:37Z"},"downloads":{"kind":"number","value":667,"string":"667"},"likes":{"kind":"number","value":6,"string":"6"},"library_name":{"kind":"string","value":"mlx"},"tags":{"kind":"list like","value":["mlx","safetensors","qwen3_moe","text-generation","conversational","base_model:Qwen/Qwen3-30B-A3B-Instruct-2507","base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507","license:apache-2.0","4-bit","region:us"],"string":"[\n \"mlx\",\n \"safetensors\",\n \"qwen3_moe\",\n \"text-generation\",\n \"conversational\",\n \"base_model:Qwen/Qwen3-30B-A3B-Instruct-2507\",\n \"base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507\",\n \"license:apache-2.0\",\n \"4-bit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-07-29T20:31:09Z","string":"2025-07-29T20:31:09Z"},"card":{"kind":"string","value":"---\nlibrary_name: mlx\nlicense: apache-2.0\nlicense_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE\npipeline_tag: text-generation\ntags:\n- mlx\nbase_model: Qwen/Qwen3-30B-A3B-Instruct-2507\n---\n\n# mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit\n\nThis model [mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit](https://huggingface.co/mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit) was\nconverted to MLX format from [Qwen/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507)\nusing mlx-lm version **0.26.3**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit\")\n\nprompt = \"hello\"\n\nif tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n"}}},{"rowIdx":320,"cells":{"modelId":{"kind":"string","value":"joanna302/Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05"},"author":{"kind":"string","value":"joanna302"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:37:52Z","string":"2025-08-06T13:37:52Z"},"downloads":{"kind":"number","value":38,"string":"38"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen3","text-generation","generated_from_trainer","unsloth","sft","trl","conversational","base_model:unsloth/Qwen3-1.7B-Base","base_model:finetune:unsloth/Qwen3-1.7B-Base","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen3\",\n \"text-generation\",\n \"generated_from_trainer\",\n \"unsloth\",\n \"sft\",\n \"trl\",\n \"conversational\",\n \"base_model:unsloth/Qwen3-1.7B-Base\",\n \"base_model:finetune:unsloth/Qwen3-1.7B-Base\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-05T17:01:34Z","string":"2025-08-05T17:01:34Z"},"card":{"kind":"string","value":"---\nbase_model: 
unsloth/Qwen3-1.7B-Base\nlibrary_name: transformers\nmodel_name: Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05\ntags:\n- generated_from_trainer\n- unsloth\n- sft\n- trl\nlicence: license\n---\n\n# Model Card for Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05\n\nThis model is a fine-tuned version of [unsloth/Qwen3-1.7B-Base](https://huggingface.co/unsloth/Qwen3-1.7B-Base).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"joanna302/Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n[\"Visualize](https://wandb.ai/prism-eval/Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05/runs/9b5y86ny) \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.20.0\n- Transformers: 4.54.1\n- Pytorch: 2.7.1\n- Datasets: 3.6.0\n- Tokenizers: 0.21.4\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```"}}},{"rowIdx":321,"cells":{"modelId":{"kind":"string","value":"tabularisai/f5-tts-german-voice-clone"},"author":{"kind":"string","value":"tabularisai"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:36:50Z","string":"2025-08-06T13:36:50Z"},"downloads":{"kind":"number","value":5,"string":"5"},"likes":{"kind":"number","value":2,"string":"2"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["german","voice-cloning","f5tts","text-to-speech","de","arxiv:2410.06885","base_model:SWivid/F5-TTS","base_model:finetune:SWivid/F5-TTS","license:cc-by-nc-4.0","region:us"],"string":"[\n \"german\",\n \"voice-cloning\",\n \"f5tts\",\n \"text-to-speech\",\n \"de\",\n \"arxiv:2410.06885\",\n \"base_model:SWivid/F5-TTS\",\n \"base_model:finetune:SWivid/F5-TTS\",\n \"license:cc-by-nc-4.0\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-to-speech"},"createdAt":{"kind":"timestamp","value":"2025-07-29T16:48:00Z","string":"2025-07-29T16:48:00Z"},"card":{"kind":"string","value":"---\nlanguage:\n- de\nbase_model:\n- SWivid/F5-TTS\nlicense: cc-by-nc-4.0\npipeline_tag: text-to-speech\ntags:\n- german\n- voice-cloning\n- f5tts\n---\n# F5-TTS German Fine-tuned Model\n\n[![Model: F5-TTS](https://img.shields.io/badge/Model-F5--TTS-blue)](https://github.com/SWivid/F5-TTS)\n[![Language: German](https://img.shields.io/badge/Language-German-red)](https://en.wikipedia.org/wiki/German_language)\n[![Hugging Face](https://img.shields.io/badge/🤗-Hugging%20Face-yellow)](https://huggingface.co/tabularisai/f5-tts-german-voice-clone)\n\n> **⚠️ Work in Progress**: This model is still under development and optimization. We are actively seeking feedback from the community to improve its performance. 
Please share your experiences, issues, and suggestions!\n\n## Model Description\n\nThis is a German fine-tuned version of the F5-TTS (Flow Matching) model, specifically trained on German voice datasets. F5-TTS is a diffusion-transformer based text-to-speech system that uses flow matching for high-quality, natural-sounding speech synthesis.\n\n### Key Features\n- **Language**: German text-to-speech synthesis\n- **Architecture**: DiT (Diffusion Transformer) with ConvNeXt V2\n- **Sample Rate**: 24 kHz\n- **Vocoder**: Vocos for high-quality audio generation\n- **Tokenization**: Custom character-level tokenization for German text\n\n### Model Details\n- **Base Model**: F5TTS_v1_Base\n- **Fine-tuning Dataset**: Combined German voice dataset with character-level tokenization\n- **Training Steps**: ~298,000 steps\n- **Vocabulary Size**: 2,546 characters\n- **Model Size**: ~1.3GB (inference-optimized)\n\n## Installation\n\n```bash\n# Install F5-TTS\npip install f5-tts\n\n# Or install from source for latest features\ngit clone https://github.com/SWivid/F5-TTS.git\ncd F5-TTS\npip install -e .\n```\n\n## Usage\n\n### Quick Start with Hugging Face Hub\n\n```python\nimport torch\nimport torchaudio\nfrom f5_tts.api import F5TTS\nfrom huggingface_hub import hf_hub_download\n\n# Download model files from Hugging Face\nmodel_file = hf_hub_download(\n repo_id=\"tabularisai/f5-tts-german-voice-clone\",\n filename=\"model.pt\"\n)\nvocab_file = hf_hub_download(\n repo_id=\"tabularisai/f5-tts-german-voice-clone\", \n filename=\"vocab.txt\"\n)\n\n# Initialize the German F5-TTS model\nf5tts = F5TTS(\n model=\"F5TTS_v1_Base\", # Use the base architecture\n ckpt_file=model_file, # Downloaded model weights\n vocab_file=vocab_file, # German vocabulary\n device=\"cuda\" if torch.cuda.is_available() else \"cpu\"\n)\n\n# German text to synthesize\ntext = \"Hallo, ich bin ein deutsches Text-zu-Sprache-System. 
Wie kann ich Ihnen heute helfen?\"\n\n# Reference audio \nref_audio_path = \"reference_german_voice.wav\"\nref_text = \"Dies ist eine Referenzaufnahme für die Stimmenklonierung.\"\n\n# Generate speech\naudio, sample_rate, seed = f5tts.infer(\n    gen_text=text,\n    ref_file=ref_audio_path,\n    ref_text=ref_text,\n    remove_silence=True,\n    file_wave=\"output_german.wav\",\n)\n\n```\n\n\n\n### Advanced Usage\n\n```python\n# For longer texts, you can use the advanced inference (works with both Hugging Face and local files)\naudio, sample_rate, seed = f5tts.infer(\n    gen_text=text,\n    ref_file=ref_audio_path,\n    ref_text=ref_text,\n    nfe_step=32, # Number of function evaluations (higher = better quality)\n    cfg_strength=2.0, # Classifier-free guidance strength\n    sway_sampling_coef=-1.0, # Sway sampling for better quality\n    speed=1.0, # Generation speed (1.0 = normal speed)\n    remove_silence=True,\n    cross_fade_duration=0.15 # For smoother concatenation\n)\n```\n\n### Command Line Usage\n\n```bash\n# Using the F5-TTS CLI with the German model\nf5-tts_infer-cli \\\n    --model F5TTS_v1_Base \\\n    --ckpt_file path/to/model.pt \\\n    --vocab_file path/to/vocab.txt \\\n    --ref_audio reference_german.wav \\\n    --ref_text \"Referenztext für die Stimme\" \\\n    --gen_text \"Zu synthetisierender deutscher Text\" \\\n    --output_path output_german.wav\n```\n\n### Voice Cloning\n\nThe model supports voice cloning with German reference audio:\n\n```python\n# Use a German reference voice\nref_audio = \"my_german_voice_sample.wav\"\nref_text = \"Das ist ein Beispieltext meiner Stimme.\"\n\n# Clone the voice for new German text\nnew_text = \"Jetzt spreche ich mit der geklonten Stimme diesen neuen Text.\"\naudio, sr, seed = f5tts.infer(gen_text=new_text, ref_file=ref_audio, ref_text=ref_text)\n```\n\n## Model Performance\n\n### Supported Text Features\n- ✅ German characters and umlauts (ä, ö, ü, ß)\n- ✅ Numbers and punctuation\n- ✅ Special characters\n- ✅ Mixed case text\n- ⚠️ Limited support for non-German characters\n\n### Audio Quality\n- **Sample Rate**: 24 kHz\n- **Bit Depth**: 16-bit\n- **Quality**: High-quality neural vocoding with Vocos\n- **Latency**: Real-time capable on modern GPUs\n\n## Limitations and Known Issues\n\n- **Language Specific**: Optimized for German text only\n- **Training Data**: Limited to specific German voice datasets\n- **Accent Variation**: May not capture all German regional accents\n- **Performance**: Requires GPU for real-time inference\n- **Development Status**: Still in active development\n\n## Contributing and Feedback\n\n**We need your help!** This model is still being refined and we're looking for:\n\n- 🗣️ **Audio Quality Feedback**: How does the generated speech sound?\n- 📝 **Text Handling**: Issues with specific German words or phrases?\n- 🐛 **Bug Reports**: Technical issues or errors\n- 💡 **Feature Requests**: What would make this model more useful?\n- 📊 **Performance Reports**: Speed and quality benchmarks\n- 🎯 **Use Case Examples**: How are you using this model?\n\n### How to Provide Feedback\n\n1. **GitHub Issues**: Report bugs or request features in the original F5-TTS repository\n2. **Audio Samples**: Share problematic or excellent generation examples\n3. **Benchmarks**: Compare with other German TTS systems\n4. 
**Documentation**: Help improve usage instructions\n\n## Model Card\n\n| Property | Value |\n|----------|-------|\n| Language | German (Deutsch) |\n| Model Type | Text-to-Speech (Flow Matching) |\n| Architecture | DiT (Diffusion Transformer) |\n| Parameters | ~1B parameters |\n| Training Data | Combined German voice datasets |\n| Vocabulary | 2,546 character tokens |\n| Sample Rate | 24 kHz |\n\n## Citation\n\nIf you use this model in your research, please cite the original F5-TTS paper:\n\n```bibtex\n@article{chen2024f5tts,\n title={F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching},\n author={Chen, Yushen and others},\n journal={arXiv preprint arXiv:2410.06885},\n year={2024}\n}\n```\n\n\n## Acknowledgments\n\n- Original F5-TTS team for the excellent framework\n- German voice dataset contributors\n- The open-source community for feedback and improvements\n\n## Contact\n\nFor questions, feedback, or collaboration:\n- Open an issue in the F5-TTS repository\n- Join the community discussions\n- Share your experiences with German TTS\n- `info@tabularis.ai`\n\n---\n\n**Status**: 🚧 Under Development - Seeking Community Feedback 🚧"}}},{"rowIdx":322,"cells":{"modelId":{"kind":"string","value":"quanxuantruong/tqa-stage1-t5-full-3epoch-400k"},"author":{"kind":"string","value":"quanxuantruong"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:31:47Z","string":"2025-08-06T13:31:47Z"},"downloads":{"kind":"number","value":17,"string":"17"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","tensorboard","safetensors","t5","text2text-generation","generated_from_trainer","base_model:google/flan-t5-base","base_model:finetune:google/flan-t5-base","license:apache-2.0","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"tensorboard\",\n \"safetensors\",\n \"t5\",\n \"text2text-generation\",\n \"generated_from_trainer\",\n \"base_model:google/flan-t5-base\",\n \"base_model:finetune:google/flan-t5-base\",\n \"license:apache-2.0\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T09:27:36Z","string":"2025-08-06T09:27:36Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: google/flan-t5-base\ntags:\n- generated_from_trainer\nmodel-index:\n- name: tqa-stage1-t5-full-3epoch-400k\n results: []\n---\n\n\n\n# tqa-stage1-t5-full-3epoch-400k\n\nThis model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.52.4\n- Pytorch 2.6.0+cu124\n- Datasets 3.6.0\n- Tokenizers 
0.21.2\n"}}},{"rowIdx":323,"cells":{"modelId":{"kind":"string","value":"CREAD/meabh-lora-model"},"author":{"kind":"string","value":"CREAD"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:30:51Z","string":"2025-08-06T13:30:51Z"},"downloads":{"kind":"number","value":18,"string":"18"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"diffusers"},"tags":{"kind":"list like","value":["diffusers","text-to-image","flux","lora","template:sd-lora","fluxgym","base_model:black-forest-labs/FLUX.1-dev","base_model:adapter:black-forest-labs/FLUX.1-dev","license:other","region:us"],"string":"[\n \"diffusers\",\n \"text-to-image\",\n \"flux\",\n \"lora\",\n \"template:sd-lora\",\n \"fluxgym\",\n \"base_model:black-forest-labs/FLUX.1-dev\",\n \"base_model:adapter:black-forest-labs/FLUX.1-dev\",\n \"license:other\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-to-image"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:43:28Z","string":"2025-08-06T12:43:28Z"},"card":{"kind":"string","value":"---\ntags:\n- text-to-image\n- flux\n- lora\n- diffusers\n- template:sd-lora\n- fluxgym\n\n\nbase_model: black-forest-labs/FLUX.1-dev\ninstance_prompt: Meabh\nlicense: other\nlicense_name: flux-1-dev-non-commercial-license\nlicense_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md\n---\n\n# Meabh-Lora-Model\n\nA Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)\n\n\n\n## Trigger words\n\nYou should use `Meabh` to trigger the image generation.\n\n## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.\n\nWeights for this model are available in Safetensors format.\n\n"}}},{"rowIdx":324,"cells":{"modelId":{"kind":"string","value":"roujin/SDGPA"},"author":{"kind":"string","value":"roujin"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:28:22Z","string":"2025-08-06T13:28:22Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"diffusers"},"tags":{"kind":"list like","value":["diffusers","image-segmentation","arxiv:2508.03300","license:mit","region:us"],"string":"[\n \"diffusers\",\n \"image-segmentation\",\n \"arxiv:2508.03300\",\n \"license:mit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"image-segmentation"},"createdAt":{"kind":"timestamp","value":"2025-07-09T15:51:06Z","string":"2025-07-09T15:51:06Z"},"card":{"kind":"string","value":"---\nlicense: mit\npipeline_tag: image-segmentation\nlibrary_name: diffusers\n---\n\n# SDGPA: Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation\n\nOfficial implementation of paper: [**Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation**](https://huggingface.co/papers/2508.03300) (IROS 25').\n\nCode: [https://github.com/roujin/SDGPA](https://github.com/roujin/SDGPA)\n\n
\n \"SDGPA\n
\n\n## Abstract\nDeep learning-based semantic segmentation models achieve impressive results yet remain limited in handling distribution shifts between training and test data. In this paper, we present SDGPA (Synthetic Data Generation and Progressive Adaptation), a novel method that tackles zero-shot domain adaptive semantic segmentation, in which no target images are available, but only a text description of the target domain's style is provided. To compensate for the lack of target domain training data, we utilize a pretrained off-the-shelf text-to-image diffusion model, which generates training images by transferring source domain images to target style. Directly editing source domain images introduces noise that harms segmentation because the layout of source images cannot be precisely maintained. To address inaccurate layouts in synthetic data, we propose a method that crops the source image, edits small patches individually, and then merges them back together, which helps improve spatial precision. Recognizing the large domain gap, SDGPA constructs an augmented intermediate domain, leveraging easier adaptation subtasks to enable more stable model adaptation to the target domain. Additionally, to mitigate the impact of noise in synthetic data, we design a progressive adaptation strategy, ensuring robust learning throughout the training process. Extensive experiments demonstrate that our method achieves state-of-the-art performance in zero-shot semantic segmentation.\n\n## Installation\n\nEnvironment setting:\n\nAll of our experiments are conducted on NVIDIA RTX 3090 with cuda 11.8\n```bash\nsource env.sh\n```\n\n## Running\n\nYou can find all the training scripts in the `scripts/` folder.\n\nWe use day $\\to$ snow setting as an example.\n\nFirst, you should decide where you want to put the datasets. Let's denote it as `` (for example:`/data3/roujin`). By default, the experimental logs are stored in ``.\n\nThen, organize the folder as follows:\n```\n\n└─ ACDC\n └─ gt\n └─ rgb_anon\n└─ cityscapes\n └─ gtFine\n └─ leftImg8bit\n└─ GTA5\n └─ images\n └─ labels\n```\n\nYou can refer to cityscapes and ACDC's official websites for the datasets. For GTA5, as we only use a subset of them, we provide the following link to download the subset for your convenience: [https://huggingface.co/datasets/roujin/GTA5subset](https://huggingface.co/datasets/roujin/GTA5subset)\n\nFor synthetic data generation:\n```bash\nsource img_gen/run.sh snow\n```\n\nFor progress model adaptation:\n```bash\nsource scripts/snow.sh \n```\n\nEvaluation:\n```bash\nsource eval.sh \n```\n`` can be \"day\", \"fog\", \"rain\", \"snow\", \"night\", \"game\"\n\n## Evaluation Results\n\nWe release the following results. 
See all logs and checkpoints during training from [https://huggingface.co/roujin/SDGPA/tree/main](https://huggingface.co/roujin/SDGPA/tree/main)\n\n| Setting | Day→Night | Clear→Snow | Clear→Rain | Clear→Fog | Real→Game |\n| :--------------- | :-------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------ | :------------------------------------------------------------------------------------- |\n| results on paper | 26.9±0.8 | 47.4±0.7 | 48.6±0.8 | 58.8±0.7 | 43.4±0.4 |\n| our released | 27.6 | 46.8 | 49.0 | 59.8 | 43.1 |\n| checkpoint | [link](https://huggingface.co/roujin/SDGPA/blob/main/night2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/snow2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/rain2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/fog2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/game2/weights/weights_65.pth.tar) |\n\nWe recommend you to read the scripts and the paper for more details.\n\nFor hyperparameter selection of InstructPix2Pix, we recommend reading: [https://huggingface.co/spaces/timbrooks/instruct-pix2pix/blob/main/README.md](https://huggingface.co/spaces/timbrooks/instruct-pix2pix/blob/main/README.md)\n\n## Acknowledgements\n\nThis code is built upon the following repositories:\n\n* [https://github.com/azuma164/ZoDi](https://github.com/azuma164/ZoDi)\n* [https://huggingface.co/timbrooks/instruct-pix2pix](https://huggingface.co/timbrooks/instruct-pix2pix)\n\nWe thank them for their excellent work!\n\n## Citation\n\n```bibtex\n@misc{luo2025sdgpa,\n title={Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation}, \n author={Jun Luo and Zijing Zhao and Yang Liu},\n year={2025},\n eprint={2508.03300},\n archivePrefix={arXiv},\n primaryClass={cs.CV},\n url={https://arxiv.org/abs/2508.03300}, \n}\n```"}}},{"rowIdx":325,"cells":{"modelId":{"kind":"string","value":"null0101/distil-whisper-medium-ko-test"},"author":{"kind":"string","value":"null0101"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:21:42Z","string":"2025-08-06T13:21:42Z"},"downloads":{"kind":"number","value":2,"string":"2"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","whisper","automatic-speech-recognition","arxiv:1910.09700","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"whisper\",\n \"automatic-speech-recognition\",\n \"arxiv:1910.09700\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"automatic-speech-recognition"},"createdAt":{"kind":"timestamp","value":"2025-08-06T13:20:58Z","string":"2025-08-06T13:20:58Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags: []\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. 
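A minimal transcription sketch for this checkpoint, assuming the standard transformers ASR pipeline applies to this Whisper-based model; the audio filename is a placeholder:

```python
from transformers import pipeline

# Whisper-style checkpoints work with the automatic-speech-recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="null0101/distil-whisper-medium-ko-test",
)
print(asr("sample_korean_speech.wav")["text"])
```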
This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":326,"cells":{"modelId":{"kind":"string","value":"phogen/gemma-3-4b-pt-00pct-lora-proposal"},"author":{"kind":"string","value":"phogen"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:17:37Z","string":"2025-08-06T13:17:37Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","unsloth","arxiv:1910.09700","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"unsloth\",\n \"arxiv:1910.09700\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T13:17:33Z","string":"2025-08-06T13:17:33Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags:\n- unsloth\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":327,"cells":{"modelId":{"kind":"string","value":"Jacksss123/net72_uid209"},"author":{"kind":"string","value":"Jacksss123"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:13:12Z","string":"2025-08-06T13:13:12Z"},"downloads":{"kind":"number","value":1,"string":"1"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","tensorboard","safetensors","vit","image-classification","arxiv:1910.09700","autotrain_compatible","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"tensorboard\",\n \"safetensors\",\n \"vit\",\n \"image-classification\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"image-classification"},"createdAt":{"kind":"timestamp","value":"2025-08-06T13:08:52Z","string":"2025-08-06T13:08:52Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags: []\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. 
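A minimal usage sketch, assuming the standard image-classification pipeline applies to this ViT checkpoint; the image path is a placeholder:

```python
from transformers import pipeline

# The repo is tagged vit / image-classification, so the generic pipeline should load it
classifier = pipeline("image-classification", model="Jacksss123/net72_uid209")
for pred in classifier("example.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```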
This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":328,"cells":{"modelId":{"kind":"string","value":"mradermacher/future_ai_V1.1.250805-GGUF"},"author":{"kind":"string","value":"mradermacher"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:11:15Z","string":"2025-08-06T13:11:15Z"},"downloads":{"kind":"number","value":50,"string":"50"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","gguf","en","base_model:Futuresony/future_ai_V1.1.250805","base_model:quantized:Futuresony/future_ai_V1.1.250805","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"gguf\",\n \"en\",\n \"base_model:Futuresony/future_ai_V1.1.250805\",\n \"base_model:quantized:Futuresony/future_ai_V1.1.250805\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:56:24Z","string":"2025-08-06T12:56:24Z"},"card":{"kind":"string","value":"---\nbase_model: Futuresony/future_ai_V1.1.250805\nlanguage:\n- en\nlibrary_name: transformers\nmradermacher:\n readme_rev: 1\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\n\n\n\nstatic quants of https://huggingface.co/Futuresony/future_ai_V1.1.250805\n\n\n\n***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#future_ai_V1.1.250805-GGUF).***\n\nweighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q2_K.gguf) | Q2_K | 3.9 | |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q3_K_S.gguf) | Q3_K_S | 4.4 | |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q3_K_L.gguf) | Q3_K_L | 5.2 | |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.IQ4_XS.gguf) | IQ4_XS | 5.3 | |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q5_K_S.gguf) | Q5_K_S | 6.6 | |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q5_K_M.gguf) | Q5_K_M | 6.7 | |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q6_K.gguf) | Q6_K | 7.7 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n"}}},{"rowIdx":329,"cells":{"modelId":{"kind":"string","value":"buuduy1711/gemma-3-4b-it-tayson-vietnam"},"author":{"kind":"string","value":"buuduy1711"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:10:40Z","string":"2025-08-06T13:10:40Z"},"downloads":{"kind":"number","value":23,"string":"23"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","gemma3","image-text-to-text","text-generation-inference","unsloth","conversational","en","base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit","base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit","license:apache-2.0","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"gemma3\",\n \"image-text-to-text\",\n \"text-generation-inference\",\n \"unsloth\",\n \"conversational\",\n \"en\",\n \"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit\",\n 
\"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit\",\n \"license:apache-2.0\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"image-text-to-text"},"createdAt":{"kind":"timestamp","value":"2025-08-06T09:44:58Z","string":"2025-08-06T09:44:58Z"},"card":{"kind":"string","value":"---\nbase_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- gemma3\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** buuduy1711\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit\n\nThis gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n"}}},{"rowIdx":330,"cells":{"modelId":{"kind":"string","value":"spesrobotics/wire_pick_place_multi_view_act_expanded"},"author":{"kind":"string","value":"spesrobotics"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:10:09Z","string":"2025-08-06T13:10:09Z"},"downloads":{"kind":"number","value":13,"string":"13"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"lerobot"},"tags":{"kind":"list like","value":["lerobot","safetensors","act","robotics","dataset:spesrobotics/wire_pick_place_multi_view_expanded","arxiv:2304.13705","license:apache-2.0","region:us"],"string":"[\n \"lerobot\",\n \"safetensors\",\n \"act\",\n \"robotics\",\n \"dataset:spesrobotics/wire_pick_place_multi_view_expanded\",\n \"arxiv:2304.13705\",\n \"license:apache-2.0\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"robotics"},"createdAt":{"kind":"timestamp","value":"2025-08-06T02:44:14Z","string":"2025-08-06T02:44:14Z"},"card":{"kind":"string","value":"---\ndatasets: spesrobotics/wire_pick_place_multi_view_expanded\nlibrary_name: lerobot\nlicense: apache-2.0\nmodel_name: act\npipeline_tag: robotics\ntags:\n- act\n- robotics\n- lerobot\n---\n\n# Model Card for act\n\n\n\n\n[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. 
It learns from teleoperated data and often achieves high success rates.\n\n\nThis policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).\nSee the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).\n\n---\n\n## How to Get Started with the Model\n\nFor a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).\nBelow is the short version on how to train and run inference/eval:\n\n### Train from scratch\n\n```bash\npython -m lerobot.scripts.train \\\n --dataset.repo_id=${HF_USER}/ \\\n --policy.type=act \\\n --output_dir=outputs/train/ \\\n --job_name=lerobot_training \\\n --policy.device=cuda \\\n --policy.repo_id=${HF_USER}/\n --wandb.enable=true\n```\n\n_Writes checkpoints to `outputs/train//checkpoints/`._\n\n### Evaluate the policy/run inference\n\n```bash\npython -m lerobot.record \\\n --robot.type=so100_follower \\\n --dataset.repo_id=/eval_ \\\n --policy.path=/ \\\n --episodes=10\n```\n\nPrefix the dataset repo with **eval\\_** and supply `--policy.path` pointing to a local or hub checkpoint.\n\n---\n\n## Model Details\n\n- **License:** apache-2.0"}}},{"rowIdx":331,"cells":{"modelId":{"kind":"string","value":"mradermacher/CATPLUG-Ti-GGUF"},"author":{"kind":"string","value":"mradermacher"},"last_modified":{"kind":"timestamp","value":"2025-08-06T13:07:00Z","string":"2025-08-06T13:07:00Z"},"downloads":{"kind":"number","value":85,"string":"85"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","gguf","en","base_model:yyy111yyy/CATPLUG-Ti","base_model:quantized:yyy111yyy/CATPLUG-Ti","endpoints_compatible","region:us","conversational"],"string":"[\n \"transformers\",\n \"gguf\",\n \"en\",\n \"base_model:yyy111yyy/CATPLUG-Ti\",\n \"base_model:quantized:yyy111yyy/CATPLUG-Ti\",\n \"endpoints_compatible\",\n \"region:us\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:52:20Z","string":"2025-08-06T12:52:20Z"},"card":{"kind":"string","value":"---\nbase_model: yyy111yyy/CATPLUG-Ti\nlanguage:\n- en\nlibrary_name: transformers\nmradermacher:\n readme_rev: 1\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\n\n\n\nstatic quants of https://huggingface.co/yyy111yyy/CATPLUG-Ti\n\n\n\n***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#CATPLUG-Ti-GGUF).***\n\nweighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q2_K.gguf) | Q2_K | 1.6 | |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q3_K_S.gguf) | Q3_K_S | 1.8 | |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q3_K_M.gguf) | Q3_K_M | 1.9 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q3_K_L.gguf) | Q3_K_L | 2.0 | |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.IQ4_XS.gguf) | IQ4_XS | 2.1 | |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q4_K_S.gguf) | Q4_K_S | 2.2 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q4_K_M.gguf) | Q4_K_M | 2.3 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q5_K_S.gguf) | Q5_K_S | 2.6 | |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q5_K_M.gguf) | Q5_K_M | 2.6 | |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q6_K.gguf) | Q6_K | 3.0 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q8_0.gguf) | Q8_0 | 3.9 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.f16.gguf) | f16 | 7.2 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n"}}},{"rowIdx":332,"cells":{"modelId":{"kind":"string","value":"ACECA/lowMvM_212"},"author":{"kind":"string","value":"ACECA"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:58:08Z","string":"2025-08-06T12:58:08Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","any-to-any","omega","omegalabs","bittensor","agi","license:mit","region:us"],"string":"[\n \"safetensors\",\n \"any-to-any\",\n \"omega\",\n \"omegalabs\",\n \"bittensor\",\n \"agi\",\n \"license:mit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"any-to-any"},"createdAt":{"kind":"timestamp","value":"2025-07-30T15:11:00Z","string":"2025-07-30T15:11:00Z"},"card":{"kind":"string","value":"---\nlicense: mit\ntags:\n- any-to-any\n- omega\n- omegalabs\n- bittensor\n- agi\n---\n\nThis is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.\n\nCheck out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: 
[@omegalabsai](https://x.com/omegalabsai).\n"}}},{"rowIdx":333,"cells":{"modelId":{"kind":"string","value":"Butanium/simple-stories-1L8H256D-attention-only-toy-transformer"},"author":{"kind":"string","value":"Butanium"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:57:51Z","string":"2025-08-06T12:57:51Z"},"downloads":{"kind":"number","value":8,"string":"8"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","llama","region:us"],"string":"[\n \"safetensors\",\n \"llama\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:57:49Z","string":"2025-08-06T12:57:49Z"},"card":{"kind":"string","value":"# 1-Layer 8-Head Attention-Only Transformer\n\nThis is a simplified transformer model with 1 attention layer(s) and 8 attention head(s), hidden size 256, designed for studying attention mechanisms in isolation.\n\n## Architecture Differences from Vanilla Transformer\n\n**Removed Components:**\n- **No MLP/Feed-Forward layers** - Only attention layers\n- **No Layer Normalization** - No LayerNorm before/after attention\n- **No positional encoding** - No position embeddings of any kind\n\n**Kept Components:**\n- Token embeddings\n- Multi-head self-attention with causal masking\n- Residual connections around attention layers\n- Language modeling head (linear projection to vocabulary)\n\nThis minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).\n\n## Usage\n\n```python\nclass AttentionOnlyTransformer(PreTrainedModel):\n \"\"\"Attention-only transformer with configurable number of attention layers.\"\"\"\n config_class = LlamaConfig\n\n def __init__(self, config: LlamaConfig):\n super().__init__(config)\n self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)\n self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])\n self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\n\n def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):\n batch_size, seq_len = input_ids.shape\n hidden_states = self.embed_tokens(input_ids)\n assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)\n assert attention_mask.shape == (batch_size, seq_len)\n\n for layer in self.layers:\n hidden_states = layer(hidden_states, attention_mask)\n assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)\n\n logits = self.lm_head(hidden_states)\n assert logits.shape == (batch_size, seq_len, self.config.vocab_size)\n\n loss = None\n if labels is not None:\n shift_logits = logits[..., :-1, :].contiguous()\n shift_labels = labels[..., 1:].contiguous()\n loss_fct = nn.CrossEntropyLoss()\n loss = loss_fct(\n shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)\n )\n\n return {\"loss\": loss, \"logits\": logits}\n\n\nmodel = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-1L8H256D-attention-only-toy-transformer')\n```\n\n## Training Data\n\nThe model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token 
prediction."}}},{"rowIdx":334,"cells":{"modelId":{"kind":"string","value":"EliovpAI/Qwen3-8B-FP8-KV"},"author":{"kind":"string","value":"EliovpAI"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:54:54Z","string":"2025-08-06T12:54:54Z"},"downloads":{"kind":"number","value":6,"string":"6"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen3","text-generation","AMD","ROCM","VLLM","Quark","MI300x","Quantized","conversational","base_model:Qwen/Qwen3-8B","base_model:quantized:Qwen/Qwen3-8B","autotrain_compatible","text-generation-inference","endpoints_compatible","fp8","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen3\",\n \"text-generation\",\n \"AMD\",\n \"ROCM\",\n \"VLLM\",\n \"Quark\",\n \"MI300x\",\n \"Quantized\",\n \"conversational\",\n \"base_model:Qwen/Qwen3-8B\",\n \"base_model:quantized:Qwen/Qwen3-8B\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"fp8\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:49:52Z","string":"2025-08-06T12:49:52Z"},"card":{"kind":"string","value":"---\nmetrics:\n- perplexity\nbase_model:\n- Qwen/Qwen3-8B\nlibrary_name: transformers\ntags:\n- AMD\n- ROCM\n- VLLM\n- Quark\n- MI300x\n- Quantized\n---\n# Qwen3-8B-FP8-KV\n\n## Introduction\nThis model was built by applying Quark with calibration samples from Pile dataset to Qwen/Qwen3-8B.\n\n## Quantization Strategy\n- **Quantized Layers**: All linear layers excluding \"lm_head\", \"*.mlp.experts.*\"\n- **Weight**: FP8 symmetric per-tensor\n- **Activation**: FP8 symmetric per-tensor\n- **KV Cache**: FP8 symmetric per-tensor\n\n## Deployment\nQuark has its own export format and allows FP8 quantized models to be efficiently deployed using the vLLM backend (vLLM-compatible).\n\n## Evaluation\nQuark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be referenced in the quantize_quark.py. The quantization evaluation results are conducted in pseudo-quantization mode, which may slightly differ from the actual quantized inference accuracy. 
These results are provided for reference only.\n\n### Evaluation scores\n\n| **Benchmark** | **Qwen3-8B** | **Qwen3-8B-FP8-KV (this model)** |\n| -------------------- | ------------ | --------------------------------- |\n| Perplexity-wikitext2 | 9.531 | 9.708 |\n\n### Performance Summary\n- **Accuracy Retention**: 98.15% (only 1.85% perplexity increase)\n- **Model Size**: ~42% reduction vs FP16\n- **Memory Efficiency**: FP8 KV-cache for extended context\n- **Hardware Optimization**: AMD ROCm/HIP optimized\n\n## License\nBased on Qwen3-8B licensing terms."}}},{"rowIdx":335,"cells":{"modelId":{"kind":"string","value":"longhoang2112/whisper-base-fine-tuning_2_steps_slu"},"author":{"kind":"string","value":"longhoang2112"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:50:57Z","string":"2025-08-06T12:50:57Z"},"downloads":{"kind":"number","value":14,"string":"14"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"peft"},"tags":{"kind":"list like","value":["peft","region:us"],"string":"[\n \"peft\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:50:54Z","string":"2025-08-06T12:50:54Z"},"card":{"kind":"string","value":"---\nlibrary_name: peft\n---\n## Training procedure\n\n### Framework versions\n\n\n- PEFT 0.5.0\n"}}},{"rowIdx":336,"cells":{"modelId":{"kind":"string","value":"ekiprop/SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed20"},"author":{"kind":"string","value":"ekiprop"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:50:26Z","string":"2025-08-06T12:50:26Z"},"downloads":{"kind":"number","value":56,"string":"56"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"peft"},"tags":{"kind":"list like","value":["peft","safetensors","base_model:adapter:roberta-base","lora","transformers","base_model:FacebookAI/roberta-base","base_model:adapter:FacebookAI/roberta-base","license:mit","region:us"],"string":"[\n \"peft\",\n \"safetensors\",\n \"base_model:adapter:roberta-base\",\n \"lora\",\n \"transformers\",\n \"base_model:FacebookAI/roberta-base\",\n \"base_model:adapter:FacebookAI/roberta-base\",\n \"license:mit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:35:59Z","string":"2025-08-06T12:35:59Z"},"card":{"kind":"string","value":"---\nlibrary_name: peft\nlicense: mit\nbase_model: roberta-base\ntags:\n- base_model:adapter:roberta-base\n- lora\n- transformers\nmetrics:\n- accuracy\nmodel-index:\n- name: SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed20\n results: []\n---\n\n\n\n# SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed20\n\nThis model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2312\n- Accuracy: 0.9427\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy 
|\n|:-------------:|:------:|:-----:|:---------------:|:--------:|\n| 0.3991 | 0.0950 | 200 | 0.2096 | 0.9163 |\n| 0.2917 | 0.1900 | 400 | 0.1974 | 0.9174 |\n| 0.2704 | 0.2850 | 600 | 0.2146 | 0.9197 |\n| 0.2428 | 0.3800 | 800 | 0.1842 | 0.9346 |\n| 0.2313 | 0.4751 | 1000 | 0.2589 | 0.9220 |\n| 0.2147 | 0.5701 | 1200 | 0.2200 | 0.9278 |\n| 0.2169 | 0.6651 | 1400 | 0.2166 | 0.9323 |\n| 0.2097 | 0.7601 | 1600 | 0.2307 | 0.9255 |\n| 0.216 | 0.8551 | 1800 | 0.2100 | 0.9312 |\n| 0.2048 | 0.9501 | 2000 | 0.2078 | 0.9392 |\n| 0.2004 | 1.0451 | 2200 | 0.2162 | 0.9335 |\n| 0.1819 | 1.1401 | 2400 | 0.1884 | 0.9358 |\n| 0.1837 | 1.2352 | 2600 | 0.2073 | 0.9323 |\n| 0.1793 | 1.3302 | 2800 | 0.2156 | 0.9278 |\n| 0.1792 | 1.4252 | 3000 | 0.1997 | 0.9323 |\n| 0.1794 | 1.5202 | 3200 | 0.2129 | 0.9335 |\n| 0.1788 | 1.6152 | 3400 | 0.1908 | 0.9346 |\n| 0.1663 | 1.7102 | 3600 | 0.2561 | 0.9278 |\n| 0.1705 | 1.8052 | 3800 | 0.2167 | 0.9346 |\n| 0.1837 | 1.9002 | 4000 | 0.1958 | 0.9392 |\n| 0.174 | 1.9952 | 4200 | 0.2181 | 0.9358 |\n| 0.1602 | 2.0903 | 4400 | 0.2107 | 0.9335 |\n| 0.1529 | 2.1853 | 4600 | 0.2229 | 0.9369 |\n| 0.1568 | 2.2803 | 4800 | 0.2372 | 0.9346 |\n| 0.1466 | 2.3753 | 5000 | 0.2117 | 0.9335 |\n| 0.156 | 2.4703 | 5200 | 0.2452 | 0.9323 |\n| 0.1544 | 2.5653 | 5400 | 0.2411 | 0.9312 |\n| 0.163 | 2.6603 | 5600 | 0.2019 | 0.9323 |\n| 0.1431 | 2.7553 | 5800 | 0.2393 | 0.9289 |\n| 0.1466 | 2.8504 | 6000 | 0.2157 | 0.9312 |\n| 0.1446 | 2.9454 | 6200 | 0.2291 | 0.9335 |\n| 0.1395 | 3.0404 | 6400 | 0.2593 | 0.9278 |\n| 0.1203 | 3.1354 | 6600 | 0.2339 | 0.9323 |\n| 0.1272 | 3.2304 | 6800 | 0.2262 | 0.9404 |\n| 0.1484 | 3.3254 | 7000 | 0.2128 | 0.9381 |\n| 0.1269 | 3.4204 | 7200 | 0.2254 | 0.9404 |\n| 0.1269 | 3.5154 | 7400 | 0.2387 | 0.9335 |\n| 0.1321 | 3.6105 | 7600 | 0.2512 | 0.9358 |\n| 0.1351 | 3.7055 | 7800 | 0.2333 | 0.9381 |\n| 0.1331 | 3.8005 | 8000 | 0.2312 | 0.9427 |\n| 0.1396 | 3.8955 | 8200 | 0.2190 | 0.9427 |\n| 0.1342 | 3.9905 | 8400 | 0.2214 | 0.9381 |\n| 0.1231 | 4.0855 | 8600 | 0.2422 | 0.9323 |\n| 0.1159 | 4.1805 | 8800 | 0.2500 | 0.9323 |\n| 0.1219 | 4.2755 | 9000 | 0.2348 | 0.9335 |\n| 0.1225 | 4.3705 | 9200 | 0.2405 | 0.9312 |\n| 0.1205 | 4.4656 | 9400 | 0.2407 | 0.9312 |\n| 0.1148 | 4.5606 | 9600 | 0.2384 | 0.9369 |\n| 0.12 | 4.6556 | 9800 | 0.2342 | 0.9381 |\n| 0.1123 | 4.7506 | 10000 | 0.2384 | 0.9381 |\n| 0.1182 | 4.8456 | 10200 | 0.2377 | 0.9381 |\n| 0.1298 | 4.9406 | 10400 | 0.2349 | 0.9369 |\n\n\n### Framework versions\n\n- PEFT 0.16.0\n- Transformers 4.54.1\n- Pytorch 2.5.1+cu121\n- Datasets 4.0.0\n- Tokenizers 0.21.4"}}},{"rowIdx":337,"cells":{"modelId":{"kind":"string","value":"patent/qwen3_4b_grpo.n1.21"},"author":{"kind":"string","value":"patent"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:45:51Z","string":"2025-08-06T12:45:51Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","text-generation-inference","unsloth","qwen3","trl","en","base_model:unsloth/Qwen3-4B-Base","base_model:finetune:unsloth/Qwen3-4B-Base","license:apache-2.0","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"text-generation-inference\",\n \"unsloth\",\n \"qwen3\",\n \"trl\",\n \"en\",\n \"base_model:unsloth/Qwen3-4B-Base\",\n \"base_model:finetune:unsloth/Qwen3-4B-Base\",\n \"license:apache-2.0\",\n \"endpoints_compatible\",\n 
\"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:45:44Z","string":"2025-08-06T12:45:44Z"},"card":{"kind":"string","value":"---\nbase_model: unsloth/Qwen3-4B-Base\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen3\n- trl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** patent\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen3-4B-Base\n\nThis qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n"}}},{"rowIdx":338,"cells":{"modelId":{"kind":"string","value":"fadhlyrafi/model"},"author":{"kind":"string","value":"fadhlyrafi"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:36:29Z","string":"2025-08-06T12:36:29Z"},"downloads":{"kind":"number","value":3,"string":"3"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","csm","text-to-audio","text-generation-inference","unsloth","en","base_model:unsloth/csm-1b","base_model:finetune:unsloth/csm-1b","license:apache-2.0","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"csm\",\n \"text-to-audio\",\n \"text-generation-inference\",\n \"unsloth\",\n \"en\",\n \"base_model:unsloth/csm-1b\",\n \"base_model:finetune:unsloth/csm-1b\",\n \"license:apache-2.0\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-to-audio"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:35:08Z","string":"2025-08-06T12:35:08Z"},"card":{"kind":"string","value":"---\nbase_model: unsloth/csm-1b\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- csm\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** fadhlyrafi\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/csm-1b\n\nThis csm model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n"}}},{"rowIdx":339,"cells":{"modelId":{"kind":"string","value":"Marko152/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_feline_anaconda"},"author":{"kind":"string","value":"Marko152"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:35:36Z","string":"2025-08-06T12:35:36Z"},"downloads":{"kind":"number","value":99,"string":"99"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen2","text-generation","rl-swarm","genrl-swarm","grpo","gensyn","I am diving_feline_anaconda","arxiv:1910.09700","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen2\",\n \"text-generation\",\n \"rl-swarm\",\n \"genrl-swarm\",\n \"grpo\",\n \"gensyn\",\n \"I am diving_feline_anaconda\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-07-31T11:01:39Z","string":"2025-07-31T11:01:39Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags:\n- rl-swarm\n- genrl-swarm\n- grpo\n- gensyn\n- I am diving_feline_anaconda\n---\n\n# Model 
Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":340,"cells":{"modelId":{"kind":"string","value":"sobs0/new_wav2vec2-base-aphasia-oth"},"author":{"kind":"string","value":"sobs0"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:35:27Z","string":"2025-08-06T12:35:27Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","arxiv:1910.09700","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"arxiv:1910.09700\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:38:50Z","string":"2025-08-06T11:38:50Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags: []\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
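To get started, a CTC-style transcription sketch may apply here: the `wav2vec2-base` naming suggests a CTC head, but that is an assumption since the card does not state the architecture, and the audio file below is a placeholder (16 kHz mono expected):

```python
import torch
import soundfile as sf
from transformers import AutoProcessor, AutoModelForCTC

model_id = "sobs0/new_wav2vec2-base-aphasia-oth"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech, sr = sf.read("speech_sample.wav")  # placeholder recording
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```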
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":341,"cells":{"modelId":{"kind":"string","value":"maldv/Eva-Mindlink-72b"},"author":{"kind":"string","value":"maldv"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:33:40Z","string":"2025-08-06T12:33:40Z"},"downloads":{"kind":"number","value":8,"string":"8"},"likes":{"kind":"number","value":2,"string":"2"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen2","text-generation","chat","conversational","en","base_model:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2","base_model:finetune:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2","license:other","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen2\",\n \"text-generation\",\n \"chat\",\n \"conversational\",\n \"en\",\n \"base_model:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2\",\n \"base_model:finetune:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2\",\n \"license:other\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-05T20:20:50Z","string":"2025-08-05T20:20:50Z"},"card":{"kind":"string","value":"---\nlicense: other\nlicense_name: qwen\nlicense_link: https://huggingface.co/Qwen/Qwen2.5-72B/raw/main/LICENSE\nlibrary_name: transformers\nlanguage:\n- en\ntags:\n- chat\n- conversational\nbase_model:\n- Qwen/Qwen2.5-72B\n- Skywork/MindLink-72B-0801\n- Unbabel/Tower-Plus-72B\n- EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2\npipeline_tags:\n- 
text-generation\n- conversational\n- chat\n\n---\n\n![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/r7maKU1wOkmSyHf-qPlMz.png)\n\n[GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF) [iMat](https://huggingface.co/mradermacher/Eva-Mindlink-72b-i1-GGUF)\n\n# Eva Mindlink 72B\n\nEva Mindlink 72B is a *normalized denoised fourier interpolation* of the following models:\n\n```yaml\noutput_base_model: \"Qwen/Qwen2.5-72B\"\noutput_dtype: \"bfloat16\"\nfinetune_merge:\n - { \"model\": \"Skywork/MindLink-72B-0801\", \"base\": \"Qwen/Qwen2.5-72B\", \"alpha\": 0.9, \"is_input\": true }\n - { \"model\": \"Unbabel/Tower-Plus-72B\", \"base\": \"Qwen/Qwen2.5-72B\", \"alpha\": 0.5 }\n - { \"model\": \"EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2\", \"base\": \"Qwen/Qwen2.5-72B\", \"alpha\": 0.8, \"is_output\": true }\n```\n\nIn other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model (which in this case was Qwen2.5-72B); with the MindLink-72B-0801 input layer and the EVA-Qwen2.5-72B-v0.2 output layer.\n\n## Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@misc{eva-mindlink-72b,\n title = {Eva Mindlink 72B},\n url = {https://huggingface.co/maldv/Eva-Mindlink-72B},\n author = {Praxis Maldevide},\n month = {August},\n year = {2025}\n}\n```"}}},{"rowIdx":342,"cells":{"modelId":{"kind":"string","value":"mradermacher/Eva-Mindlink-72b-GGUF"},"author":{"kind":"string","value":"mradermacher"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:24:21Z","string":"2025-08-06T12:24:21Z"},"downloads":{"kind":"number","value":724,"string":"724"},"likes":{"kind":"number","value":1,"string":"1"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","gguf","chat","conversational","en","base_model:maldv/Eva-Mindlink-72b","base_model:quantized:maldv/Eva-Mindlink-72b","license:other","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"gguf\",\n \"chat\",\n \"conversational\",\n \"en\",\n \"base_model:maldv/Eva-Mindlink-72b\",\n \"base_model:quantized:maldv/Eva-Mindlink-72b\",\n \"license:other\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T01:14:53Z","string":"2025-08-06T01:14:53Z"},"card":{"kind":"string","value":"---\nbase_model: maldv/Eva-Mindlink-72b\nlanguage:\n- en\nlibrary_name: transformers\nlicense: other\nlicense_link: https://huggingface.co/Qwen/Qwen2.5-72B/raw/main/LICENSE\nlicense_name: qwen\nmradermacher:\n readme_rev: 1\nquantized_by: mradermacher\ntags:\n- chat\n- conversational\n---\n## About\n\n\n\n\n\n\n\n\n\nstatic quants of https://huggingface.co/maldv/Eva-Mindlink-72b\n\n\n\n***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Eva-Mindlink-72b-GGUF).***\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/Eva-Mindlink-72b-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q2_K.gguf) | Q2_K | 29.9 | |\n| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q3_K_S.gguf) | Q3_K_S | 34.6 | |\n| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q3_K_L.gguf) | Q3_K_L | 39.6 | |\n| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.IQ4_XS.gguf) | IQ4_XS | 40.3 | |\n| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |\n| [PART 1](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |\n| [PART 1](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |\n| [PART 1](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |\n| [PART 1](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n"}}},{"rowIdx":343,"cells":{"modelId":{"kind":"string","value":"Butanium/simple-stories-0L16H128D-attention-only-toy-transformer"},"author":{"kind":"string","value":"Butanium"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:23:49Z","string":"2025-08-06T12:23:49Z"},"downloads":{"kind":"number","value":11,"string":"11"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","llama","region:us"],"string":"[\n \"safetensors\",\n \"llama\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:23:47Z","string":"2025-08-06T12:23:47Z"},"card":{"kind":"string","value":"# 0-Layer 16-Head 
Attention-Only Transformer\n\nThis is a simplified transformer model with 0 attention layer(s) and 16 attention head(s), hidden size 128, designed for studying attention mechanisms in isolation.\n\n## Architecture Differences from Vanilla Transformer\n\n**Removed Components:**\n- **No MLP/Feed-Forward layers** - Only attention layers\n- **No Layer Normalization** - No LayerNorm before/after attention\n- **No positional encoding** - No position embeddings of any kind\n\n**Kept Components:**\n- Token embeddings\n- Multi-head self-attention with causal masking\n- Residual connections around attention layers\n- Language modeling head (linear projection to vocabulary)\n\nThis minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).\n\n## Usage\n\n```python\nclass AttentionOnlyTransformer(PreTrainedModel):\n \"\"\"Attention-only transformer with configurable number of attention layers.\"\"\"\n config_class = LlamaConfig\n\n def __init__(self, config: LlamaConfig):\n super().__init__(config)\n self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)\n self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])\n self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\n\n def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):\n batch_size, seq_len = input_ids.shape\n hidden_states = self.embed_tokens(input_ids)\n assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)\n assert attention_mask.shape == (batch_size, seq_len)\n\n for layer in self.layers:\n hidden_states = layer(hidden_states, attention_mask)\n assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)\n\n logits = self.lm_head(hidden_states)\n assert logits.shape == (batch_size, seq_len, self.config.vocab_size)\n\n loss = None\n if labels is not None:\n shift_logits = logits[..., :-1, :].contiguous()\n shift_labels = labels[..., 1:].contiguous()\n loss_fct = nn.CrossEntropyLoss()\n loss = loss_fct(\n shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)\n )\n\n return {\"loss\": loss, \"logits\": logits}\n\n\nmodel = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-0L16H128D-attention-only-toy-transformer')\n```\n\n## Training Data\n\nThe model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction."}}},{"rowIdx":344,"cells":{"modelId":{"kind":"string","value":"Aarush09/bart-conversation-summarizer"},"author":{"kind":"string","value":"Aarush09"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:22:49Z","string":"2025-08-06T12:22:49Z"},"downloads":{"kind":"number","value":6,"string":"6"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","bart","text2text-generation","arxiv:1910.09700","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"bart\",\n \"text2text-generation\",\n \"arxiv:1910.09700\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:22:02Z","string":"2025-08-06T12:22:02Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags: []\n---\n\n# 
Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":345,"cells":{"modelId":{"kind":"string","value":"saberbx/GraniteSentry"},"author":{"kind":"string","value":"saberbx"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:20:23Z","string":"2025-08-06T12:20:23Z"},"downloads":{"kind":"number","value":10,"string":"10"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","granite","text-generation","unsloth","conversational","arxiv:1910.09700","autotrain_compatible","endpoints_compatible","4-bit","bitsandbytes","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"granite\",\n \"text-generation\",\n \"unsloth\",\n \"conversational\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"endpoints_compatible\",\n \"4-bit\",\n \"bitsandbytes\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T04:58:17Z","string":"2025-08-06T04:58:17Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags:\n- unsloth\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":346,"cells":{"modelId":{"kind":"string","value":"nvovagen/novagwn"},"author":{"kind":"string","value":"nvovagen"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:19:50Z","string":"2025-08-06T12:19:50Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"diffusers"},"tags":{"kind":"list like","value":["diffusers","text-to-image","lora","template:diffusion-lora","base_model:black-forest-labs/FLUX.1-Krea-dev","base_model:adapter:black-forest-labs/FLUX.1-Krea-dev","region:us"],"string":"[\n \"diffusers\",\n \"text-to-image\",\n \"lora\",\n \"template:diffusion-lora\",\n \"base_model:black-forest-labs/FLUX.1-Krea-dev\",\n \"base_model:adapter:black-forest-labs/FLUX.1-Krea-dev\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-to-image"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:19:47Z","string":"2025-08-06T12:19:47Z"},"card":{"kind":"string","value":"---\ntags:\n - text-to-image\n - lora\n - diffusers\n - template:diffusion-lora\nwidget:\n- output:\n url: images/images (1).jpeg\n text: '-'\nbase_model: black-forest-labs/FLUX.1-Krea-dev\ninstance_prompt: null\n\n---\n# novgen.1\n\n\n\n\n\n## Download model\n\n\n[Download](/nvovagen/novagwn/tree/main) them in the Files & versions 
tab.\n"}}},{"rowIdx":347,"cells":{"modelId":{"kind":"string","value":"ekiprop/SST-2-GLoRA-p50-seed20"},"author":{"kind":"string","value":"ekiprop"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:17:50Z","string":"2025-08-06T12:17:50Z"},"downloads":{"kind":"number","value":54,"string":"54"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"peft"},"tags":{"kind":"list like","value":["peft","safetensors","base_model:adapter:roberta-base","lora","transformers","base_model:FacebookAI/roberta-base","base_model:adapter:FacebookAI/roberta-base","license:mit","region:us"],"string":"[\n \"peft\",\n \"safetensors\",\n \"base_model:adapter:roberta-base\",\n \"lora\",\n \"transformers\",\n \"base_model:FacebookAI/roberta-base\",\n \"base_model:adapter:FacebookAI/roberta-base\",\n \"license:mit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:03:16Z","string":"2025-08-06T12:03:16Z"},"card":{"kind":"string","value":"---\nlibrary_name: peft\nlicense: mit\nbase_model: roberta-base\ntags:\n- base_model:adapter:roberta-base\n- lora\n- transformers\nmetrics:\n- accuracy\nmodel-index:\n- name: SST-2-GLoRA-p50-seed20\n results: []\n---\n\n\n\n# SST-2-GLoRA-p50-seed20\n\nThis model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2073\n- Accuracy: 0.9507\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:------:|:-----:|:---------------:|:--------:|\n| 0.3587 | 0.0950 | 200 | 0.2132 | 0.9232 |\n| 0.2898 | 0.1900 | 400 | 0.1966 | 0.9255 |\n| 0.2656 | 0.2850 | 600 | 0.2076 | 0.9335 |\n| 0.2372 | 0.3800 | 800 | 0.1867 | 0.9346 |\n| 0.2304 | 0.4751 | 1000 | 0.2516 | 0.9197 |\n| 0.2193 | 0.5701 | 1200 | 0.2399 | 0.9243 |\n| 0.2257 | 0.6651 | 1400 | 0.1971 | 0.9335 |\n| 0.2197 | 0.7601 | 1600 | 0.1918 | 0.9404 |\n| 0.2199 | 0.8551 | 1800 | 0.1984 | 0.9323 |\n| 0.2027 | 0.9501 | 2000 | 0.1861 | 0.9461 |\n| 0.2083 | 1.0451 | 2200 | 0.1833 | 0.9427 |\n| 0.1801 | 1.1401 | 2400 | 0.1849 | 0.9392 |\n| 0.1818 | 1.2352 | 2600 | 0.1920 | 0.9369 |\n| 0.1847 | 1.3302 | 2800 | 0.2184 | 0.9415 |\n| 0.1737 | 1.4252 | 3000 | 0.1955 | 0.9415 |\n| 0.1744 | 1.5202 | 3200 | 0.1843 | 0.9438 |\n| 0.1843 | 1.6152 | 3400 | 0.1818 | 0.9415 |\n| 0.1628 | 1.7102 | 3600 | 0.2257 | 0.9404 |\n| 0.1607 | 1.8052 | 3800 | 0.1951 | 0.9415 |\n| 0.1803 | 1.9002 | 4000 | 0.1772 | 0.9427 |\n| 0.171 | 1.9952 | 4200 | 0.2226 | 0.9381 |\n| 0.1557 | 2.0903 | 4400 | 0.1886 | 0.9427 |\n| 0.1483 | 2.1853 | 4600 | 0.1809 | 0.9461 |\n| 0.1489 | 2.2803 | 4800 | 0.2176 | 0.9404 |\n| 0.1428 | 2.3753 | 5000 | 0.1820 | 0.9461 |\n| 0.147 | 2.4703 | 5200 | 0.2073 | 0.9507 |\n| 0.1532 | 2.5653 | 5400 | 0.2002 | 0.9438 |\n| 0.1633 | 2.6603 | 5600 | 0.1759 | 0.9495 |\n| 0.1427 | 2.7553 | 5800 | 0.2015 | 0.9450 |\n| 0.1398 | 2.8504 | 6000 
| 0.1921 | 0.9450 |\n| 0.1344 | 2.9454 | 6200 | 0.1937 | 0.9427 |\n| 0.1412 | 3.0404 | 6400 | 0.2044 | 0.9450 |\n| 0.1148 | 3.1354 | 6600 | 0.1907 | 0.9472 |\n| 0.128 | 3.2304 | 6800 | 0.1894 | 0.9461 |\n| 0.1358 | 3.3254 | 7000 | 0.1836 | 0.9507 |\n| 0.1195 | 3.4204 | 7200 | 0.2043 | 0.9461 |\n| 0.1239 | 3.5154 | 7400 | 0.2053 | 0.9450 |\n| 0.1225 | 3.6105 | 7600 | 0.2060 | 0.9427 |\n| 0.1271 | 3.7055 | 7800 | 0.2090 | 0.9461 |\n| 0.1376 | 3.8005 | 8000 | 0.1953 | 0.9438 |\n| 0.1293 | 3.8955 | 8200 | 0.1912 | 0.9450 |\n| 0.1252 | 3.9905 | 8400 | 0.1936 | 0.9507 |\n| 0.1083 | 4.0855 | 8600 | 0.2040 | 0.9472 |\n| 0.1073 | 4.1805 | 8800 | 0.2121 | 0.9484 |\n| 0.1126 | 4.2755 | 9000 | 0.2055 | 0.9472 |\n| 0.1131 | 4.3705 | 9200 | 0.2010 | 0.9507 |\n| 0.1031 | 4.4656 | 9400 | 0.2125 | 0.9461 |\n| 0.1013 | 4.5606 | 9600 | 0.2132 | 0.9472 |\n| 0.1141 | 4.6556 | 9800 | 0.2087 | 0.9484 |\n| 0.1114 | 4.7506 | 10000 | 0.2026 | 0.9484 |\n| 0.1175 | 4.8456 | 10200 | 0.2013 | 0.9461 |\n| 0.1099 | 4.9406 | 10400 | 0.2025 | 0.9472 |\n\n\n### Framework versions\n\n- PEFT 0.16.0\n- Transformers 4.54.1\n- Pytorch 2.5.1+cu121\n- Datasets 4.0.0\n- Tokenizers 0.21.4"}}},{"rowIdx":348,"cells":{"modelId":{"kind":"string","value":"Butanium/simple-stories-0L16H512D-attention-only-toy-transformer"},"author":{"kind":"string","value":"Butanium"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:15:56Z","string":"2025-08-06T12:15:56Z"},"downloads":{"kind":"number","value":6,"string":"6"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","llama","region:us"],"string":"[\n \"safetensors\",\n \"llama\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:15:54Z","string":"2025-08-06T12:15:54Z"},"card":{"kind":"string","value":"# 0-Layer 16-Head Attention-Only Transformer\n\nThis is a simplified transformer model with 0 attention layer(s) and 16 attention head(s), hidden size 512, designed for studying attention mechanisms in isolation.\n\n## Architecture Differences from Vanilla Transformer\n\n**Removed Components:**\n- **No MLP/Feed-Forward layers** - Only attention layers\n- **No Layer Normalization** - No LayerNorm before/after attention\n- **No positional encoding** - No position embeddings of any kind\n\n**Kept Components:**\n- Token embeddings\n- Multi-head self-attention with causal masking\n- Residual connections around attention layers\n- Language modeling head (linear projection to vocabulary)\n\nThis minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).\n\n## Usage\n\n```python\nclass AttentionOnlyTransformer(PreTrainedModel):\n \"\"\"Attention-only transformer with configurable number of attention layers.\"\"\"\n config_class = LlamaConfig\n\n def __init__(self, config: LlamaConfig):\n super().__init__(config)\n self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)\n self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])\n self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\n\n def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):\n batch_size, seq_len = input_ids.shape\n hidden_states = self.embed_tokens(input_ids)\n assert hidden_states.shape == (batch_size, seq_len, 
self.config.hidden_size)\n assert attention_mask.shape == (batch_size, seq_len)\n\n for layer in self.layers:\n hidden_states = layer(hidden_states, attention_mask)\n assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)\n\n logits = self.lm_head(hidden_states)\n assert logits.shape == (batch_size, seq_len, self.config.vocab_size)\n\n loss = None\n if labels is not None:\n shift_logits = logits[..., :-1, :].contiguous()\n shift_labels = labels[..., 1:].contiguous()\n loss_fct = nn.CrossEntropyLoss()\n loss = loss_fct(\n shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)\n )\n\n return {\"loss\": loss, \"logits\": logits}\n\n\nmodel = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-0L16H512D-attention-only-toy-transformer')\n```\n\n## Training Data\n\nThe model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction."}}},{"rowIdx":349,"cells":{"modelId":{"kind":"string","value":"isogen/II-Search-CIR-4B-exl3-6bpw"},"author":{"kind":"string","value":"isogen"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:15:20Z","string":"2025-08-06T12:15:20Z"},"downloads":{"kind":"number","value":2,"string":"2"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","qwen3","base_model:Intelligent-Internet/II-Search-CIR-4B","base_model:quantized:Intelligent-Internet/II-Search-CIR-4B","6-bit","exl3","region:us"],"string":"[\n \"safetensors\",\n \"qwen3\",\n \"base_model:Intelligent-Internet/II-Search-CIR-4B\",\n \"base_model:quantized:Intelligent-Internet/II-Search-CIR-4B\",\n \"6-bit\",\n \"exl3\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:14:44Z","string":"2025-08-06T12:14:44Z"},"card":{"kind":"string","value":"---\nbase_model: Intelligent-Internet/II-Search-CIR-4B\n---\n\n[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [II-Search-CIR-4B](https://huggingface.co/Intelligent-Internet/II-Search-CIR-4B), 6 bits per weight.\n\n### HumanEval (argmax)\n\n| Model | Q4 | Q6 | Q8 | FP16 |\n| -------------------------------------------------------------------------------------------- | ---- | ---- | ---- | ---- |\n| [II-Search-CIR-4B-exl3-4bpw](https://huggingface.co/isogen/II-Search-CIR-4B-exl3-4bpw) | 81.7 | 79.3 | 78.7 | 79.9 |\n| [II-Search-CIR-4B-exl3-6bpw](https://huggingface.co/isogen/II-Search-CIR-4B-exl3-6bpw) | 80.5 | 81.1 | 81.1 | 81.7 |\n| [II-Search-CIR-4B-exl3-8bpw-h8](https://huggingface.co/isogen/II-Search-CIR-4B-exl3-8bpw-h8) | 83.5 | 83.5 | 82.3 | 82.9 |\n| [Qwen3-4B-exl3-4bpw](https://huggingface.co/isogen/Qwen3-4B-exl3-4bpw) | 80.5 | 81.1 | 81.7 | 80.5 |\n| [Qwen3-4B-exl3-6bpw](https://huggingface.co/isogen/Qwen3-4B-exl3-6bpw) | 80.5 | 85.4 | 86.0 | 86.0 |\n| [Qwen3-4B-exl3-8bpw-h8](https://huggingface.co/isogen/Qwen3-4B-exl3-8bpw-h8) | 82.3 | 84.8 | 83.5 | 82.9 |\n"}}},{"rowIdx":350,"cells":{"modelId":{"kind":"string","value":"Butanium/simple-stories-0L8H256D-attention-only-toy-transformer"},"author":{"kind":"string","value":"Butanium"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:11:02Z","string":"2025-08-06T12:11:02Z"},"downloads":{"kind":"number","value":6,"string":"6"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","llama","region:us"],"string":"[\n \"safetensors\",\n \"llama\",\n 
\"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:11:00Z","string":"2025-08-06T12:11:00Z"},"card":{"kind":"string","value":"# 0-Layer 8-Head Attention-Only Transformer\n\nThis is a simplified transformer model with 0 attention layer(s) and 8 attention head(s), hidden size 256, designed for studying attention mechanisms in isolation.\n\n## Architecture Differences from Vanilla Transformer\n\n**Removed Components:**\n- **No MLP/Feed-Forward layers** - Only attention layers\n- **No Layer Normalization** - No LayerNorm before/after attention\n- **No positional encoding** - No position embeddings of any kind\n\n**Kept Components:**\n- Token embeddings\n- Multi-head self-attention with causal masking\n- Residual connections around attention layers\n- Language modeling head (linear projection to vocabulary)\n\nThis minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).\n\n## Usage\n\n```python\nclass AttentionOnlyTransformer(PreTrainedModel):\n \"\"\"Attention-only transformer with configurable number of attention layers.\"\"\"\n config_class = LlamaConfig\n\n def __init__(self, config: LlamaConfig):\n super().__init__(config)\n self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)\n self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])\n self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\n\n def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):\n batch_size, seq_len = input_ids.shape\n hidden_states = self.embed_tokens(input_ids)\n assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)\n assert attention_mask.shape == (batch_size, seq_len)\n\n for layer in self.layers:\n hidden_states = layer(hidden_states, attention_mask)\n assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)\n\n logits = self.lm_head(hidden_states)\n assert logits.shape == (batch_size, seq_len, self.config.vocab_size)\n\n loss = None\n if labels is not None:\n shift_logits = logits[..., :-1, :].contiguous()\n shift_labels = labels[..., 1:].contiguous()\n loss_fct = nn.CrossEntropyLoss()\n loss = loss_fct(\n shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)\n )\n\n return {\"loss\": loss, \"logits\": logits}\n\n\nmodel = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-0L8H256D-attention-only-toy-transformer')\n```\n\n## Training Data\n\nThe model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction."}}},{"rowIdx":351,"cells":{"modelId":{"kind":"string","value":"Avtertu/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_skittish_ape"},"author":{"kind":"string","value":"Avtertu"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:10:40Z","string":"2025-08-06T12:10:40Z"},"downloads":{"kind":"number","value":101,"string":"101"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen2","text-generation","rl-swarm","genrl-swarm","grpo","gensyn","I am silent_skittish_ape","arxiv:1910.09700","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n 
\"safetensors\",\n \"qwen2\",\n \"text-generation\",\n \"rl-swarm\",\n \"genrl-swarm\",\n \"grpo\",\n \"gensyn\",\n \"I am silent_skittish_ape\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-03T09:40:54Z","string":"2025-08-03T09:40:54Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags:\n- rl-swarm\n- genrl-swarm\n- grpo\n- gensyn\n- I am silent_skittish_ape\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":352,"cells":{"modelId":{"kind":"string","value":"conradjs/gpt2-reuters-tokenizer"},"author":{"kind":"string","value":"conradjs"},"last_modified":{"kind":"timestamp","value":"2025-08-06T12:05:26Z","string":"2025-08-06T12:05:26Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","arxiv:1910.09700","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"arxiv:1910.09700\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T12:05:25Z","string":"2025-08-06T12:05:25Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags: []\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":353,"cells":{"modelId":{"kind":"string","value":"alphateach/affine-202020"},"author":{"kind":"string","value":"alphateach"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:56:15Z","string":"2025-08-06T11:56:15Z"},"downloads":{"kind":"number","value":459,"string":"459"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","gpt_oss","text-generation","vllm","conversational","license:apache-2.0","autotrain_compatible","endpoints_compatible","8-bit","mxfp4","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"gpt_oss\",\n \"text-generation\",\n \"vllm\",\n \"conversational\",\n \"license:apache-2.0\",\n \"autotrain_compatible\",\n \"endpoints_compatible\",\n \"8-bit\",\n \"mxfp4\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:56:15Z","string":"2025-08-06T11:56:15Z"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\npipeline_tag: text-generation\nlibrary_name: transformers\ntags:\n- vllm\n---\n\n
\n \"gpt-oss-20b\"\n

\n\n

\n Try gpt-oss ·\n Guides ·\n Model card ·\n OpenAI blog\n

\n\n
\n\nWelcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.\n\nWe’re releasing two flavors of these open models:\n- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single H100 GPU (117B parameters with 5.1B active parameters)\n- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)\n\nBoth models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.\n\n\n> [!NOTE]\n> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.\n\n# Highlights\n\n* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment. \n* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. \n* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users. \n* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.\n* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.\n* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single H100 GPU and the `gpt-oss-20b` model run within 16GB of memory.\n\n---\n\n# Inference examples\n\n## Transformers\n\nYou can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.\n\nTo get started, install the necessary dependencies to setup your environment:\n\n```\npip install -U transformers kernels torch \n```\n\nOnce, setup you can proceed to run the model by running the snippet below:\n\n```py\nfrom transformers import pipeline\nimport torch\n\nmodel_id = \"openai/gpt-oss-20b\"\n\npipe = pipeline(\n \"text-generation\",\n model=model_id,\n torch_dtype=\"auto\",\n device_map=\"auto\",\n)\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Explain quantum mechanics clearly and concisely.\"},\n]\n\noutputs = pipe(\n messages,\n max_new_tokens=256,\n)\nprint(outputs[0][\"generated_text\"][-1])\n```\n\nAlternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up a OpenAI-compatible webserver:\n\n```\ntransformers serve\ntransformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b\n```\n\n[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)\n\n## vLLM\n\nvLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. 
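uv's `uv pip` interface is a drop-in replacement for pip, which is why the install command below uses `uv pip install`. Once the server started by that command is running, it can be queried with any OpenAI-compatible client; a minimal sketch, assuming the default `localhost:8000` endpoint and the `openai` Python package (adjust names to your setup):\n\n```python\nfrom openai import OpenAI\n\n# vLLM exposes an OpenAI-compatible API; a local server needs no real API key.\n# Assumes the default vLLM port (8000); change base_url if you serve elsewhere.\nclient = OpenAI(base_url=\"http://localhost:8000/v1\", api_key=\"EMPTY\")\n\nresponse = client.chat.completions.create(\n    model=\"openai/gpt-oss-20b\",\n    messages=[{\"role\": \"user\", \"content\": \"Explain quantum mechanics clearly and concisely.\"}],\n)\nprint(response.choices[0].message.content)\n```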
You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.\n\n```bash\nuv pip install --pre vllm==0.10.1+gptoss \\\n --extra-index-url https://wheels.vllm.ai/gpt-oss/ \\\n --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \\\n --index-strategy unsafe-best-match\n\nvllm serve openai/gpt-oss-20b\n```\n\n[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)\n\n## PyTorch / Triton\n\nTo learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).\n\n## Ollama\n\nIf you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).\n\n```bash\n# gpt-oss-20b\nollama pull gpt-oss:20b\nollama run gpt-oss:20b\n```\n\n[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)\n\n#### LM Studio\n\nIf you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.\n\n```bash\n# gpt-oss-20b\nlms get openai/gpt-oss-20b\n```\n\nCheck out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.\n\n---\n\n# Download the model\n\nYou can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI:\n\n```shell\n# gpt-oss-20b\nhuggingface-cli download openai/gpt-oss-20b --include \"original/*\" --local-dir gpt-oss-20b/\npip install gpt-oss\npython -m gpt_oss.chat model/\n```\n\n# Reasoning levels\n\nYou can adjust the reasoning level that suits your task across three levels:\n\n* **Low:** Fast responses for general dialogue. \n* **Medium:** Balanced speed and detail. 
\n* **High:** Deep and detailed analysis.\n\nThe reasoning level can be set in the system prompts, e.g., \"Reasoning: high\".\n\n# Tool use\n\nThe gpt-oss models are excellent for:\n* Web browsing (using built-in browsing tools)\n* Function calling with defined schemas\n* Agentic operations like browser tasks\n\n# Fine-tuning\n\nBoth gpt-oss models can be fine-tuned for a variety of specialized use cases.\n\nThis smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.\n"}}},{"rowIdx":354,"cells":{"modelId":{"kind":"string","value":"PhaaNe/clickbait_KLTN"},"author":{"kind":"string","value":"PhaaNe"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:45:58Z","string":"2025-08-06T11:45:58Z"},"downloads":{"kind":"number","value":21,"string":"21"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","llama","text-classification","clickbait-detection","vietnamese","fine-tuned","vi","dataset:clickbait-dataset","license:apache-2.0","region:us"],"string":"[\n \"safetensors\",\n \"llama\",\n \"text-classification\",\n \"clickbait-detection\",\n \"vietnamese\",\n \"fine-tuned\",\n \"vi\",\n \"dataset:clickbait-dataset\",\n \"license:apache-2.0\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"createdAt":{"kind":"timestamp","value":"2025-08-05T20:05:10Z","string":"2025-08-05T20:05:10Z"},"card":{"kind":"string","value":"---\nlanguage: vi\nlicense: apache-2.0\ntags:\n- text-classification\n- clickbait-detection\n- vietnamese\n- llama\n- fine-tuned\ndatasets:\n- clickbait-dataset\nmetrics:\n- accuracy\n- f1\npipeline_tag: text-classification\n---\n\n# Vietnamese Clickbait Detection Model\n\nThis model is a fine-tuned version of Llama for Vietnamese clickbait detection.\n\n## Model Description\n\n- **Model type:** Causal Language Model (Fine-tuned for Classification)\n- **Language:** Vietnamese\n- **Base model:** meta-llama/Llama-3.1-8B-Instruct\n- **Task:** Clickbait Detection\n- **Dataset:** Vietnamese clickbait dataset\n\n## Usage\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nimport torch\n\n# Load model and tokenizer\nmodel_name = \"PhaaNe/clickbait_KLTN\"\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForCausalLM.from_pretrained(\n model_name,\n torch_dtype=torch.float16,\n device_map=\"auto\"\n)\n\n# Example usage\ntext = \"Bạn sẽ không tin được điều này xảy ra!\"\ninputs = tokenizer(text, return_tensors=\"pt\")\noutputs = model.generate(**inputs, max_new_tokens=10)\nresult = tokenizer.decode(outputs[0], skip_special_tokens=True)\nprint(result)\n```\n\n## Training Details\n\n- Fine-tuned using LoRA (Low-Rank Adaptation)\n- Training framework: Transformers + PEFT\n- Hardware: GPU-enabled server\n\n## Performance\n\nThe model achieves good performance on Vietnamese clickbait detection tasks.\n\n## Citation\n\nIf you use this model, please cite:\n\n```\n@misc{clickbait_kltn_2025,\n title={Vietnamese Clickbait Detection using Fine-tuned Llama},\n author={PhaaNe},\n year={2025},\n 
url={https://huggingface.co/PhaaNe/clickbait_KLTN}\n}\n```\n"}}},{"rowIdx":355,"cells":{"modelId":{"kind":"string","value":"tamewild/4b_v37_merged_e8"},"author":{"kind":"string","value":"tamewild"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:40:38Z","string":"2025-08-06T11:40:38Z"},"downloads":{"kind":"number","value":3,"string":"3"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen3","text-generation","conversational","arxiv:1910.09700","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen3\",\n \"text-generation\",\n \"conversational\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:38:32Z","string":"2025-08-06T11:38:32Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags: []\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":356,"cells":{"modelId":{"kind":"string","value":"Conexis/GLM-4.5-Air-Channel-INT8"},"author":{"kind":"string","value":"Conexis"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:38:58Z","string":"2025-08-06T11:38:58Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","glm4_moe","text-generation","conversational","en","zh","license:mit","autotrain_compatible","endpoints_compatible","8-bit","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"glm4_moe\",\n \"text-generation\",\n \"conversational\",\n \"en\",\n \"zh\",\n \"license:mit\",\n \"autotrain_compatible\",\n \"endpoints_compatible\",\n \"8-bit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-04T01:23:46Z","string":"2025-08-04T01:23:46Z"},"card":{"kind":"string","value":"---\r\nlicense: mit\r\nlanguage:\r\n- en\r\n- zh\r\npipeline_tag: text-generation\r\nlibrary_name: transformers\r\n---\r\n\r\n# GLM-4.5\r\n\r\n
\r\n 📖 Check out the GLM-4.5 technical blog.\r\n
\r\n 📍 Use GLM-4.5 API services on Z.ai API Platform (Global) or
Zhipu AI Open Platform (Mainland China).\r\n
\r\n 👉 One click to GLM-4.5.\r\n

\r\n \r\n## Model Introduction\r\n\r\nThe **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.\r\n\r\nBoth GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.\r\n\r\nWe have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.\r\n\r\nAs demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, in the **3rd** place among all the proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency.\r\n\r\n![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png)\r\n\r\nFor more eval results, show cases, and technical details, please visit\r\nour [technical blog](https://z.ai/blog/glm-4.5). The technical report will be released soon.\r\n\r\n\r\nThe model code, tool parser and reasoning parser can be found in the implementation of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py) and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).\r\n\r\n## Quick Start\r\n\r\nPlease refer our [github page](https://github.com/zai-org/GLM-4.5) for more detail.\r\n"}}},{"rowIdx":357,"cells":{"modelId":{"kind":"string","value":"vomqal/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-masked_snappy_caribou"},"author":{"kind":"string","value":"vomqal"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:37:04Z","string":"2025-08-06T11:37:04Z"},"downloads":{"kind":"number","value":8,"string":"8"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen2","text-generation","rl-swarm","genrl-swarm","grpo","gensyn","I am masked_snappy_caribou","arxiv:1910.09700","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen2\",\n \"text-generation\",\n \"rl-swarm\",\n \"genrl-swarm\",\n \"grpo\",\n \"gensyn\",\n \"I am masked_snappy_caribou\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-07-03T00:27:47Z","string":"2025-07-03T00:27:47Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags:\n- rl-swarm\n- genrl-swarm\n- grpo\n- gensyn\n- I am masked_snappy_caribou\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. 
This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":358,"cells":{"modelId":{"kind":"string","value":"tamewild/4b_v37_merged_e10"},"author":{"kind":"string","value":"tamewild"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:36:49Z","string":"2025-08-06T11:36:49Z"},"downloads":{"kind":"number","value":8,"string":"8"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen3","text-generation","conversational","arxiv:1910.09700","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen3\",\n \"text-generation\",\n \"conversational\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:34:44Z","string":"2025-08-06T11:34:44Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags: []\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":359,"cells":{"modelId":{"kind":"string","value":"idopinto/gpt-oss-20b-multilingual-reasoner"},"author":{"kind":"string","value":"idopinto"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:36:12Z","string":"2025-08-06T11:36:12Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","generated_from_trainer","sft","trl","dataset:HuggingFaceH4/Multilingual-Thinking","base_model:openai/gpt-oss-20b","base_model:finetune:openai/gpt-oss-20b","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"generated_from_trainer\",\n \"sft\",\n \"trl\",\n \"dataset:HuggingFaceH4/Multilingual-Thinking\",\n \"base_model:openai/gpt-oss-20b\",\n \"base_model:finetune:openai/gpt-oss-20b\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:01:58Z","string":"2025-08-06T11:01:58Z"},"card":{"kind":"string","value":"---\nbase_model: openai/gpt-oss-20b\ndatasets: HuggingFaceH4/Multilingual-Thinking\nlibrary_name: transformers\nmodel_name: gpt-oss-20b-multilingual-reasoner\ntags:\n- generated_from_trainer\n- sft\n- trl\nlicence: license\n---\n\n# Model Card for gpt-oss-20b-multilingual-reasoner\n\nThis model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) 
dataset.\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"idopinto/gpt-oss-20b-multilingual-reasoner\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.21.0\n- Transformers: 4.55.0\n- Pytorch: 2.8.0+cu128\n- Datasets: 4.0.0\n- Tokenizers: 0.21.4\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\\'e}dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```"}}},{"rowIdx":360,"cells":{"modelId":{"kind":"string","value":"mradermacher/PaperPrediction-LLM-4B-GGUF"},"author":{"kind":"string","value":"mradermacher"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:31:15Z","string":"2025-08-06T11:31:15Z"},"downloads":{"kind":"number","value":57,"string":"57"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","gguf","en","base_model:weihezhai/PaperPrediction-LLM-4B","base_model:quantized:weihezhai/PaperPrediction-LLM-4B","license:cc-by-nc-4.0","endpoints_compatible","region:us","conversational"],"string":"[\n \"transformers\",\n \"gguf\",\n \"en\",\n \"base_model:weihezhai/PaperPrediction-LLM-4B\",\n \"base_model:quantized:weihezhai/PaperPrediction-LLM-4B\",\n \"license:cc-by-nc-4.0\",\n \"endpoints_compatible\",\n \"region:us\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:17:36Z","string":"2025-08-06T11:17:36Z"},"card":{"kind":"string","value":"---\nbase_model: weihezhai/PaperPrediction-LLM-4B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: cc-by-nc-4.0\nmradermacher:\n readme_rev: 1\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\n\n\n\nstatic quants of https://huggingface.co/weihezhai/PaperPrediction-LLM-4B\n\n\n\n***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PaperPrediction-LLM-4B-GGUF).***\n\nweighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q2_K.gguf) | Q2_K | 1.8 | |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q5_K_M.gguf) | Q5_K_M | 3.0 | |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q6_K.gguf) | Q6_K | 3.4 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time.\n\n\n"}}},{"rowIdx":361,"cells":{"modelId":{"kind":"string","value":"affinator/Affine-7857777"},"author":{"kind":"string","value":"affinator"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:29:39Z","string":"2025-08-06T11:29:39Z"},"downloads":{"kind":"number","value":62,"string":"62"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","deepseek_v3","custom_code","fp8","region:us"],"string":"[\n \"safetensors\",\n \"deepseek_v3\",\n \"custom_code\",\n \"fp8\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:29:20Z","string":"2025-08-06T11:29:20Z"},"card":{"kind":"string","value":"This repository hosts a variant of Alphatao/Affine-0000000.\nLicense: MIT. 
The original license is preserved.\nNo further information about the modifications is provided.\n"}}},{"rowIdx":362,"cells":{"modelId":{"kind":"string","value":"grapevine-AI/Qwen3-30B-A3B-Thinking-2507-GGUF"},"author":{"kind":"string","value":"grapevine-AI"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:22:57Z","string":"2025-08-06T11:22:57Z"},"downloads":{"kind":"number","value":133,"string":"133"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["gguf","license:apache-2.0","endpoints_compatible","region:us","imatrix","conversational"],"string":"[\n  \"gguf\",\n  \"license:apache-2.0\",\n  \"endpoints_compatible\",\n  \"region:us\",\n  \"imatrix\",\n  \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-01T13:32:40Z","string":"2025-08-01T13:32:40Z"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\n---\n# What is this?\nAlibaba Cloud's MoE model Qwen3-30B-A3B is back, more powerful than ever!
\nThe improved release is split into two variants, a non-thinking model and a thinking model; this repository is the thinking-type model, [Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507), converted to the GGUF format.\n\n# imatrix dataset\nTo prioritize Japanese capability, the [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm) dataset, which contains a large amount of Japanese text, was used.\n\n# Chat template\n```\n<|im_start|>system\nWrite your system prompt here.<|im_end|>\n<|im_start|>user\nWrite your message here.<|im_end|>\n<|im_start|>assistant\n```\n\n\n# Environment\nQuantization was performed using the Windows build of llama.cpp-b5999.\n\n# License\nApache 2.0\n\n# Developer\nAlibaba Cloud"}}},{"rowIdx":363,"cells":{"modelId":{"kind":"string","value":"Gusgoodmansamadayo/Convnex_Base-7_11_Sign"},"author":{"kind":"string","value":"Gusgoodmansamadayo"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:21:51Z","string":"2025-08-06T11:21:51Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["image-classification","base_model:facebook/convnext-tiny-224","base_model:finetune:facebook/convnext-tiny-224","license:mit","region:us"],"string":"[\n  \"image-classification\",\n  \"base_model:facebook/convnext-tiny-224\",\n  \"base_model:finetune:facebook/convnext-tiny-224\",\n  \"license:mit\",\n  \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"image-classification"},"createdAt":{"kind":"timestamp","value":"2025-08-06T06:13:26Z","string":"2025-08-06T06:13:26Z"},"card":{"kind":"string","value":"---\nlicense: mit\nbase_model:\n- facebook/convnext-tiny-224\npipeline_tag: image-classification\n---"}}},{"rowIdx":364,"cells":{"modelId":{"kind":"string","value":"unsloth/gpt-oss-120b-bnb-4bit"},"author":{"kind":"string","value":"unsloth"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:19:40Z","string":"2025-08-06T11:19:40Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":2,"string":"2"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["license:apache-2.0","region:us"],"string":"[\n  \"license:apache-2.0\",\n  \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:19:39Z","string":"2025-08-06T11:19:39Z"},"card":{"kind":"string","value":"---\r\nlicense: apache-2.0\r\n---\r\n"}}},{"rowIdx":365,"cells":{"modelId":{"kind":"string","value":"Thireus/GLM-4.5-THIREUS-IQ3_KS-SPECIAL_SPLIT"},"author":{"kind":"string","value":"Thireus"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:18:18Z","string":"2025-08-06T11:18:18Z"},"downloads":{"kind":"number","value":4,"string":"4"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["gguf","arxiv:2505.23786","license:mit","endpoints_compatible","region:us","imatrix","conversational"],"string":"[\n  \"gguf\",\n  \"arxiv:2505.23786\",\n  \"license:mit\",\n  \"endpoints_compatible\",\n  \"region:us\",\n  \"imatrix\",\n  \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-03T17:36:07Z","string":"2025-08-03T17:36:07Z"},"card":{"kind":"string","value":"---\nlicense: mit\n---\n## ⚠️ Cautionary Notice\n\nDue to changes in the GLM-4.5 PR the GGUF files of this repository have changed. Any older version of these GGUFs are no longer compatible with the latest version of `llama.cpp` and `ik_llama.cpp`. 
Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.\n\n- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).\n- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).\n\n**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** \n> 🔒 **Do not use these quantized models for production** \n> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**\n\nProceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.\n\n---\n\n# GLM-4.5\n\n## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?\n\nThis repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.\n\n- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite \n- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb \n- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections \n\n*tl;dr: Expand the details section below*\n
\n\n```\ncd ~\n\n# Make sure to install all ik_llama.cpp compilation dependencies...\napt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx\n\n# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases\ngit clone https://github.com/Thireus/ik_llama.cpp\ncd ik_llama.cpp\ngit pull\n# Build ik_llama.cpp\ncmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048\ncmake --build build --config Release -j16\ncd ..\n\n# Obtain Thireus' GGUF-Tool-Suite\ngit clone https://github.com/Thireus/GGUF-Tool-Suite\n\n# Download model quant mix from recipe file:\ncd GGUF-Tool-Suite\nrm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py\ncp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model\nmkdir -p kitchen && cd kitchen\n../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe\n\n# Launch ik_llama's llama-cli:\nulimit -n 99999 # Lifts \"too many open files\" limitation on Linux\n~/ik_llama.cpp/build/bin/llama-cli \\\n -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \\\n -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \\\n -ot \"blk\\.(3|4|5|6)\\.ffn_.*=CUDA0\" \\\n -ot \"blk\\.(7|8|9|10)\\.ffn_.*=CUDA1\" \\\n -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \\\n --main-gpu 0 \\\n -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|>\\n'\n```\n\n
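\n\nIf you would rather keep one consolidated file than per-tensor shards, the downloaded shards can be merged back together. A minimal sketch, assuming the first-shard filename below matches what quant_downloader.sh fetched and that your build produced the llama-gguf-split tool mentioned later in this card (paths may differ on your system):\n\n```\n# Merge per-tensor shards into a single standalone GGUF.\n# Only the first shard is passed; the remaining shards are located automatically.\n~/ik_llama.cpp/build/bin/llama-gguf-split --merge \\\n  GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \\\n  GLM-4.5-merged.gguf\n```\n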
\n\n---\n\n## ❓ Why does this Tool Suite exist?\n\n1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. \n2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. \n3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! \n\n---\n\n## 📊 How does it compare to other GGUFs?\n\nHere’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):\n\n![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg)\n\n> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ \n\nMore perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs \n\n---\n\n## 🚀 How do I get started?\n\nCheck out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:\n\n1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. \n - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases \n2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. \n - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. \n4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. \n\n---\n\n## ✅ Supported Models\n\nSupported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.\n\n---\n\n## 🤷‍♂️ Will I release pre-cooked GGUF files?\n\nNo, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.\n\nInstead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.\n\nUsers who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. \n\n---\n\n## 📦 What’s in this repository?\n\n- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). 
This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. \n- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. \n- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. \n- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. \n\n---\n\n## 💡 Pro Tips\n\nYou can download the BF16 model version to quantize your own shards:\n\n```\nmkdir kitchen \necho '.*=bf16' > kitchen/bf16.recipe \ncd kitchen\n../quant_downloader.sh bf16.recipe \n```\n\nEnjoy optimized quantization! 🎉\n"}}},{"rowIdx":366,"cells":{"modelId":{"kind":"string","value":"Thireus/GLM-4.5-THIREUS-IQ3_KT-SPECIAL_SPLIT"},"author":{"kind":"string","value":"Thireus"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:18:10Z","string":"2025-08-06T11:18:10Z"},"downloads":{"kind":"number","value":4,"string":"4"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["gguf","arxiv:2505.23786","license:mit","endpoints_compatible","region:us","imatrix","conversational"],"string":"[\n \"gguf\",\n \"arxiv:2505.23786\",\n \"license:mit\",\n \"endpoints_compatible\",\n \"region:us\",\n \"imatrix\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-03T17:59:29Z","string":"2025-08-03T17:59:29Z"},"card":{"kind":"string","value":"---\nlicense: mit\n---\n## ⚠️ Cautionary Notice\n\nDue to changes in the GLM-4.5 PR the GGUF files of this repository have changed. Any older version of these GGUFs are no longer compatible with the latest version of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.\n\n- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).\n- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).\n\n**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** \n> 🔒 **Do not use these quantized models for production** \n> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**\n\nProceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.\n\n---\n\n# GLM-4.5\n\n## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?\n\nThis repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. 
With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.\n\n- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite \n- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb \n- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections \n\n*tl;dr: Expand the details section below*\n
\n\n```\ncd ~\n\n# Make sure to install all ik_llama.cpp compilation dependencies...\napt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx\n\n# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases\ngit clone https://github.com/Thireus/ik_llama.cpp\ncd ik_llama.cpp\ngit pull\n# Build ik_llama.cpp\ncmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048\ncmake --build build --config Release -j16\ncd ..\n\n# Obtain Thireus' GGUF-Tool-Suite\ngit clone https://github.com/Thireus/GGUF-Tool-Suite\n\n# Download model quant mix from recipe file:\ncd GGUF-Tool-Suite\nrm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py\ncp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model\nmkdir -p kitchen && cd kitchen\n../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe\n\n# Launch ik_llama's llama-cli:\nulimit -n 99999 # Lifts \"too many open files\" limitation on Linux\n~/ik_llama.cpp/build/bin/llama-cli \\\n -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \\\n -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \\\n -ot \"blk\\.(3|4|5|6)\\.ffn_.*=CUDA0\" \\\n -ot \"blk\\.(7|8|9|10)\\.ffn_.*=CUDA1\" \\\n -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \\\n --main-gpu 0 \\\n -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|>\\n'\n```\n\n
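\n\nBefore loading the shards, you may also want to check the GPG signatures described in the "What's in this repository?" section below. A minimal sketch, assuming detached signature files ship alongside tensors.map (the exact signature layout may differ):\n\n```\n# Import the maintainer's public keys, then verify the signed metadata\ngpg --import trusted-keys.asc\ngpg --verify tensors.map.sig tensors.map\n```\n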
\n\n---\n\n## ❓ Why does this Tool Suite exist?\n\n1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. \n2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. \n3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! \n\n---\n\n## 📊 How does it compare to other GGUFs?\n\nHere’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):\n\n![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg)\n\n> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ \n\nMore perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs \n\n---\n\n## 🚀 How do I get started?\n\nCheck out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:\n\n1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. \n - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases \n2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. \n - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. \n4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. \n\n---\n\n## ✅ Supported Models\n\nSupported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.\n\n---\n\n## 🤷‍♂️ Will I release pre-cooked GGUF files?\n\nNo, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.\n\nInstead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.\n\nUsers who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. \n\n---\n\n## 📦 What’s in this repository?\n\n- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). 
This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. \n- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. \n- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. \n- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. \n\n---\n\n## 💡 Pro Tips\n\nYou can download the BF16 model version to quantize your own shards:\n\n```\nmkdir kitchen \necho '.*=bf16' > kitchen/bf16.recipe \ncd kitchen\n../quant_downloader.sh bf16.recipe \n```\n\nEnjoy optimized quantization! 🎉\n"}}},{"rowIdx":367,"cells":{"modelId":{"kind":"string","value":"knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B"},"author":{"kind":"string","value":"knifeayumu"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:17:09Z","string":"2025-08-06T11:17:09Z"},"downloads":{"kind":"number","value":80,"string":"80"},"likes":{"kind":"number","value":8,"string":"8"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","mistral","text-generation","mergekit","merge","conversational","base_model:Doctor-Shotgun/MS3.2-24B-Magnum-Diamond","base_model:merge:Doctor-Shotgun/MS3.2-24B-Magnum-Diamond","base_model:TheDrummer/Cydonia-24B-v4","base_model:merge:TheDrummer/Cydonia-24B-v4","license:apache-2.0","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"mistral\",\n \"text-generation\",\n \"mergekit\",\n \"merge\",\n \"conversational\",\n \"base_model:Doctor-Shotgun/MS3.2-24B-Magnum-Diamond\",\n \"base_model:merge:Doctor-Shotgun/MS3.2-24B-Magnum-Diamond\",\n \"base_model:TheDrummer/Cydonia-24B-v4\",\n \"base_model:merge:TheDrummer/Cydonia-24B-v4\",\n \"license:apache-2.0\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-07-23T05:58:22Z","string":"2025-07-23T05:58:22Z"},"card":{"kind":"string","value":"---\nbase_model:\n- TheDrummer/Cydonia-24B-v4\n- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond\nlibrary_name: transformers\ntags:\n- mergekit\n- merge\nlicense: apache-2.0\n\n---\n![Foxgirl on Cydonia](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/FoxGirlonCydonia.png)\n\n# Cydonia-v4-MS3.2-Magnum-Diamond-24B\n\nRecipe based on [knifeayumu/Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) because the model [Doctor-Shotgun/MS3.2-24B-Magnum-Diamond](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond) is still too horny and verbose.\n\nThe [PNG file](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/FoxGirlonCydonia.png) above includes workflow for FLUX Kontext Dev with ComfyUI utilising [pollockjj/ComfyUI-MultiGPU](https://github.com/pollockjj/ComfyUI-MultiGPU) nodes and [two input images without 
stitching](https://www.reddit.com/r/StableDiffusion/comments/1m5wpmv/flux_kontext_psa_you_can_load_multiple_images/).\n\n![ComfyUI Workflow](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/ComfyUI_FoxGirlonCydonia.png)\n\n## Merge Details\n\nThis is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).\n\n### Merge Method\n\nThis model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.\n\n### Models Merged\n\nThe following models were included in the merge:\n* TheDrummer/Cydonia-24B-v4\n* Doctor-Shotgun/MS3.2-24B-Magnum-Diamond\n\n### Configuration\n\nThe following YAML configuration was used to produce this model:\n\n```yaml\nmodels:\n - model: TheDrummer/Cydonia-24B-v4\n - model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond\nmerge_method: slerp\nbase_model: TheDrummer/Cydonia-24B-v4\nparameters:\n t: [0.1, 0.3, 0.6, 0.3, 0.1]\ndtype: bfloat16\n```\n"}}},{"rowIdx":368,"cells":{"modelId":{"kind":"string","value":"knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF"},"author":{"kind":"string","value":"knifeayumu"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:17:00Z","string":"2025-08-06T11:17:00Z"},"downloads":{"kind":"number","value":1706,"string":"1,706"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","gguf","en","base_model:knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B","base_model:quantized:knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B","license:apache-2.0","endpoints_compatible","region:us","conversational"],"string":"[\n \"transformers\",\n \"gguf\",\n \"en\",\n \"base_model:knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B\",\n \"base_model:quantized:knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B\",\n \"license:apache-2.0\",\n \"endpoints_compatible\",\n \"region:us\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-07-23T06:58:27Z","string":"2025-07-23T06:58:27Z"},"card":{"kind":"string","value":"---\nbase_model:\n- knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B\nlanguage:\n- en\nlicense: apache-2.0\nlibrary_name: transformers\n---\n\n## Llamacpp Quantizations of knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B\n\nUsing [llama.cpp](https://github.com/ggerganov/llama.cpp/) release [b5966](https://github.com/ggml-org/llama.cpp/releases/tag/b5966) for quantization.\n\nOriginal model: [knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B)\n\n## Quant Types:\n\n| Filename | Quant type | File Size |\n| -------- | ---------- | --------- |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-F16.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-F16.gguf) | F16 | 47.15 GB |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q8_0.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q8_0.gguf) | Q8_0 | 25.05 GB |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q6_K.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q6_K.gguf) | Q6_K | 19.35 GB |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_M.gguf) | Q5_K_M | 16.76 GB |\n| 
[Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_S.gguf) | Q5_K_S | 16.30 GB |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf) | Q4_K_M | 14.33 GB |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_S.gguf) | Q4_K_S | 13.55 GB |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_L.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_L.gguf) | Q3_K_L | 12.40 GB |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_M.gguf) | Q3_K_M | 11.47 GB |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_S.gguf) | Q3_K_S | 10.40 GB |\n| [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q2_K.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q2_K.gguf) | Q2_K | 8.89 GB |\n\n\n---\n![Foxgirl on Cydonia](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/FoxGirlonCydonia.png)\n\n# Cydonia-v4-MS3.2-Magnum-Diamond-24B\n\nRecipe based on [knifeayumu/Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) because the model [Doctor-Shotgun/MS3.2-24B-Magnum-Diamond](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond) is still too horny and verbose.\n\nThe [PNG file](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/FoxGirlonCydonia.png) above includes workflow for FLUX Kontext Dev with ComfyUI utilising [pollockjj/ComfyUI-MultiGPU](https://github.com/pollockjj/ComfyUI-MultiGPU) nodes and [two input images without stitching](https://www.reddit.com/r/StableDiffusion/comments/1m5wpmv/flux_kontext_psa_you_can_load_multiple_images/).\n\n![ComfyUI Workflow](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/ComfyUI_FoxGirlonCydonia.png)\n\n## Merge Details\n\nThis is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).\n\n### Merge Method\n\nThis model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.\n\n### Models Merged\n\nThe following models were included in the merge:\n* TheDrummer/Cydonia-24B-v4\n* Doctor-Shotgun/MS3.2-24B-Magnum-Diamond\n\n### Configuration\n\nThe following YAML configuration was used to produce this model:\n\n```yaml\nmodels:\n - model: TheDrummer/Cydonia-24B-v4\n - model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond\nmerge_method: slerp\nbase_model: TheDrummer/Cydonia-24B-v4\nparameters:\n t: [0.1, 0.3, 0.6, 0.3, 0.1]\ndtype: 
bfloat16\n```"}}},{"rowIdx":369,"cells":{"modelId":{"kind":"string","value":"remiai3/mistral-7B-Instruct-v0.1-GGUF_using_int4_project_guide"},"author":{"kind":"string","value":"remiai3"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:15:36Z","string":"2025-08-06T11:15:36Z"},"downloads":{"kind":"number","value":9,"string":"9"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["gguf","students","en","base_model:TheBloke/Mistral-7B-Instruct-v0.1-GGUF","base_model:quantized:TheBloke/Mistral-7B-Instruct-v0.1-GGUF","license:apache-2.0","region:us"],"string":"[\n  \"gguf\",\n  \"students\",\n  \"en\",\n  \"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GGUF\",\n  \"base_model:quantized:TheBloke/Mistral-7B-Instruct-v0.1-GGUF\",\n  \"license:apache-2.0\",\n  \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:09:27Z","string":"2025-08-06T11:09:27Z"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\nlanguage:\n- en\nbase_model:\n- TheBloke/Mistral-7B-Instruct-v0.1-GGUF\ntags:\n- students\n---\n# Mistral 7B Project Guide\n\n## Overview\nThis repository, remiai3/mistral7B, provides code and resources for students to run the Mistral 7B model locally on their laptops for AI experiments and research. It is a free resource with no hidden fees, and we attribute the original model to Mistral AI. The repository includes scripts to run both the pre-trained Mistral 7B model and a fine-tuned version using LoRA weights.\n\n## Features\n- Run Mistral 7B locally with a simple web UI.\n- Includes pre-trained and fine-tuned (LoRA) model support.\n- Educational focus for students to explore modern AI models.\n- Quantized model weights for consumer hardware (8GB or 16GB RAM).\n\n## Getting Started\nFollow the steps in document.txt for detailed instructions on:\n- System requirements (Python 3.10+, 8GB/16GB RAM).\n- Setting up the environment and installing dependencies.\n- Downloading model weights from TheBloke/Mistral-7B-Instruct-v0.1-GGUF.\n- Running the pre-trained and fine-tuned models.\n\n## Repository Structure\n- app.py: Script to run the pre-trained model with a model selector UI.\n- fine_tune/app.py: Script to run the fine-tuned LoRA model.\n- fine_tune/lora_finetuned.gguf: LoRA weights for the fine-tuned model.\n- fine_tune/dataset.json: Dataset used for fine-tuning.\n- fine_tune/finetune.py: Fine-tuning script.\n- requirements.txt: Dependencies for the project.\n- document.txt: Detailed setup and usage guide.\n\n## Attribution\n- Model: Mistral 7B, created by Mistral AI.\n- Quantized Weights: Provided by TheBloke.\n- This project is for educational purposes to support student learning and research.\n\n## License\nApache 2.0 (same as Mistral 7B).\n\n## Support\nFor issues or questions, visit the Issues section or contact remiai3 on Hugging Face."}}},{"rowIdx":370,"cells":{"modelId":{"kind":"string","value":"sagata007/villain"},"author":{"kind":"string","value":"sagata007"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:15:06Z","string":"2025-08-06T11:15:06Z"},"downloads":{"kind":"number","value":17,"string":"17"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"diffusers"},"tags":{"kind":"list like","value":["diffusers","flux","text-to-image","lora","fal","base_model:black-forest-labs/FLUX.1-dev","base_model:adapter:black-forest-labs/FLUX.1-dev","license:other","region:us"],"string":"[\n  \"diffusers\",\n  \"flux\",\n  \"text-to-image\",\n  \"lora\",\n  \"fal\",\n  
\"base_model:black-forest-labs/FLUX.1-dev\",\n \"base_model:adapter:black-forest-labs/FLUX.1-dev\",\n \"license:other\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-to-image"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:14:59Z","string":"2025-08-06T11:14:59Z"},"card":{"kind":"string","value":"---\ntags:\n- flux\n- text-to-image\n- lora\n- diffusers\n- fal\nbase_model: black-forest-labs/FLUX.1-dev\ninstance_prompt: villain\nlicense: other\nlicense_name: flux-1-dev-non-commercial-license\nlicense_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md\n---\n# villain\n\n\n\n\n## Model description\n\n\n\n\n## Trigger words\n\nYou should use `villain` to trigger the image generation.\n\n\n## Download model\n\nWeights for this model are available in Safetensors format.\n\n[Download](/sagata007/villain/tree/main) them in the Files & versions tab.\n\n## Training at fal.ai\n\nTraining was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).\n"}}},{"rowIdx":371,"cells":{"modelId":{"kind":"string","value":"hafidhsoekma/test-g1.7b-2-checkpoint-300"},"author":{"kind":"string","value":"hafidhsoekma"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:09:49Z","string":"2025-08-06T11:09:49Z"},"downloads":{"kind":"number","value":30,"string":"30"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen3","text-generation","text-generation-inference","unsloth","conversational","en","base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit","base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit","license:apache-2.0","autotrain_compatible","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen3\",\n \"text-generation\",\n \"text-generation-inference\",\n \"unsloth\",\n \"conversational\",\n \"en\",\n \"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit\",\n \"base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit\",\n \"license:apache-2.0\",\n \"autotrain_compatible\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:05:01Z","string":"2025-08-06T11:05:01Z"},"card":{"kind":"string","value":"---\nbase_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen3\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** hafidhsoekma\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen3-1.7B-unsloth-bnb-4bit\n\nThis qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n"}}},{"rowIdx":372,"cells":{"modelId":{"kind":"string","value":"lmsys/gpt-oss-20b-bf16"},"author":{"kind":"string","value":"lmsys"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:09:41Z","string":"2025-08-06T11:09:41Z"},"downloads":{"kind":"number","value":2809,"string":"2,809"},"likes":{"kind":"number","value":3,"string":"3"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","gpt_oss","region:us"],"string":"[\n \"safetensors\",\n \"gpt_oss\",\n 
\"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-05T21:58:13Z","string":"2025-08-05T21:58:13Z"},"card":{"kind":"string","value":"# gpt-oss-20b-bf16\n## Model Introduction\nThis model is the bf16 version converted from [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).\n## Usage\nYou can use this model in [SGLang](https://github.com/sgl-project/sglang) with the following instructions.\n### Installation\n```\n# build from source\ngit clone https://github.com/sgl-project/sglang\ncd sglang\npip3 install pip --upgrade\npip3 install -e \"python[all]\"\n\n# ROCm 6.3\npip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/rocm6.3\ngit clone https://github.com/triton-lang/triton\ncd python/triton_kernels\npip3 install .\n\n# hopper\npip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu126\npip3 install sgl-kernel==0.3.2\n\n# blackwell cu128\npip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128\npip3 install https://github.com/sgl-project/whl/releases/download/v0.3.2/sgl_kernel-0.3.2+cu128-cp39-abi3-manylinux2014_x86_64.whl\n\n# blackwell cu129\npip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu129\npip3 install https://github.com/sgl-project/whl/releases/download/v0.3.2/sgl_kernel-0.3.2-cp39-abi3-manylinux2014_x86_64.whl\n```\n### Launch command\n```\npython3 -m sglang.launch_server --model lmsys/gpt-oss-20b-bf16\n```\n### For more details\nhttps://github.com/sgl-project/sglang/issues/8833"}}},{"rowIdx":373,"cells":{"modelId":{"kind":"string","value":"Userb1az/Qwen3-Coder-30B-A3B-Instruct-GGUF"},"author":{"kind":"string","value":"Userb1az"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:08:53Z","string":"2025-08-06T11:08:53Z"},"downloads":{"kind":"number","value":276,"string":"276"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","gguf","text-generation","arxiv:2505.09388","license:apache-2.0","endpoints_compatible","region:us","conversational"],"string":"[\n \"transformers\",\n \"gguf\",\n \"text-generation\",\n \"arxiv:2505.09388\",\n \"license:apache-2.0\",\n \"endpoints_compatible\",\n \"region:us\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-04T09:26:43Z","string":"2025-08-04T09:26:43Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\nlicense: apache-2.0\nlicense_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE\npipeline_tag: text-generation\n---\n\n# Qwen3-Coder-30B-A3B-Instruct\n\n \"Chat\"\n\n\n## Highlights\n\n**Qwen3-Coder** is available in multiple sizes. Today, we're excited to introduce **Qwen3-Coder-30B-A3B-Instruct**. 
This streamlined model maintains impressive performance and efficiency, featuring the following key enhancements: \n\n- **Significant Performance** among open models on **Agentic Coding**, **Agentic Browser-Use**, and other foundational coding tasks.\n- **Long-context Capabilities** with native support for **256K** tokens, extendable up to **1M** tokens using Yarn, optimized for repository-scale understanding.\n- **Agentic Coding** support for most platforms such as **Qwen Code** and **CLINE**, featuring a specially designed function call format.\n\n![image/jpeg](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-30a3-main.jpg)\n\n## Model Overview\n\n**Qwen3-Coder-30B-A3B-Instruct** has the following features:\n- Type: Causal Language Models\n- Training Stage: Pretraining & Post-training\n- Number of Parameters: 30.5B in total and 3.3B activated\n- Number of Layers: 48\n- Number of Attention Heads (GQA): 32 for Q and 4 for KV\n- Number of Experts: 128\n- Number of Activated Experts: 8\n- Context Length: **262,144 natively**. \n\n**NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**\n\nFor more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-coder/), [GitHub](https://github.com/QwenLM/Qwen3-Coder), and [Documentation](https://qwen.readthedocs.io/en/latest/).\n\n\n## Quickstart\n\nWe advise you to use the latest version of `transformers`.\n\nWith `transformers<4.51.0`, you will encounter the following error:\n```\nKeyError: 'qwen3_moe'\n```\n\nThe following contains a code snippet illustrating how to use the model to generate content based on given inputs. \n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nmodel_name = \"Qwen/Qwen3-Coder-30B-A3B-Instruct\"\n\n# load the tokenizer and the model\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_name,\n    torch_dtype=\"auto\",\n    device_map=\"auto\"\n)\n\n# prepare the model input\nprompt = \"Write a quick sort algorithm.\"\nmessages = [\n    {\"role\": \"user\", \"content\": prompt}\n]\ntext = tokenizer.apply_chat_template(\n    messages,\n    tokenize=False,\n    add_generation_prompt=True,\n)\nmodel_inputs = tokenizer([text], return_tensors=\"pt\").to(model.device)\n\n# conduct text completion\ngenerated_ids = model.generate(\n    **model_inputs,\n    max_new_tokens=65536\n)\noutput_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() \n\ncontent = tokenizer.decode(output_ids, skip_special_tokens=True)\n\nprint(\"content:\", content)\n```\n\n**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**\n\nFor local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.\n\n## Agentic Coding\n\nQwen3-Coder excels in tool calling capabilities. 
\n\nYou can simply define or use any tools as in the following example.\n```python\n# Your tool implementation\ndef square_the_number(num: float) -> float:\n    return num ** 2\n\n# Define Tools\ntools=[\n    {\n        \"type\":\"function\",\n        \"function\":{\n            \"name\": \"square_the_number\",\n            \"description\": \"output the square of the number.\",\n            \"parameters\": {\n                \"type\": \"object\",\n                \"required\": [\"input_num\"],\n                \"properties\": {\n                    'input_num': {\n                        'type': 'number', \n                        'description': 'input_num is a number that will be squared'\n                        }\n                },\n            }\n        }\n    }\n]\n\nfrom openai import OpenAI\n# Define LLM\nclient = OpenAI(\n    # Use a custom endpoint compatible with OpenAI API\n    base_url='http://localhost:8000/v1',  # api_base\n    api_key=\"EMPTY\"\n)\n \nmessages = [{'role': 'user', 'content': 'square the number 1024'}]\n\ncompletion = client.chat.completions.create(\n    messages=messages,\n    model=\"Qwen3-Coder-30B-A3B-Instruct\",\n    max_tokens=65536,\n    tools=tools,\n)\n\nprint(completion.choices[0])\n```\n\n## Best Practices\n\nTo achieve optimal performance, we recommend the following settings:\n\n1. **Sampling Parameters**:\n   - We suggest using `temperature=0.7`, `top_p=0.8`, `top_k=20`, `repetition_penalty=1.05`.\n\n2. **Adequate Output Length**: We recommend using an output length of 65,536 tokens for most queries, which is adequate for instruct models.\n\n\n### Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@misc{qwen3technicalreport,\n      title={Qwen3 Technical Report}, \n      author={Qwen Team},\n      year={2025},\n      eprint={2505.09388},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL},\n      url={https://arxiv.org/abs/2505.09388}, \n}\n```\n"}}},{"rowIdx":374,"cells":{"modelId":{"kind":"string","value":"Qwen/Qwen3-4B-Instruct-2507"},"author":{"kind":"string","value":"Qwen"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:08:47Z","string":"2025-08-06T11:08:47Z"},"downloads":{"kind":"number","value":4751,"string":"4,751"},"likes":{"kind":"number","value":128,"string":"128"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","qwen3","text-generation","conversational","arxiv:2505.09388","license:apache-2.0","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"qwen3\",\n \"text-generation\",\n \"conversational\",\n \"arxiv:2505.09388\",\n \"license:apache-2.0\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-05T10:58:03Z","string":"2025-08-05T10:58:03Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\nlicense: apache-2.0\nlicense_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE\npipeline_tag: text-generation\n---\n\n# Qwen3-4B-Instruct-2507\n<a href=\"https://chat.qwen.ai\" target=\"_blank\" style=\"margin: 2px;\">\n    <img alt=\"Chat\" src=\"https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5\" style=\"display: inline-block; vertical-align: middle;\"/>\n</a>\n\n## Highlights\n\nWe introduce the updated version of the **Qwen3-4B non-thinking mode**, named **Qwen3-4B-Instruct-2507**, featuring the following key enhancements:\n\n- **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage**.\n- **Substantial gains** in long-tail knowledge coverage across **multiple languages**.\n- **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text
generation.\n- **Enhanced capabilities** in **256K long-context understanding**.\n\n![image/jpeg](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-2507/Qwen3-4B-Instruct.001.jpeg)\n\n## Model Overview\n\n**Qwen3-4B-Instruct-2507** has the following features:\n- Type: Causal Language Models\n- Training Stage: Pretraining & Post-training\n- Number of Parameters: 4.0B\n- Number of Parameters (Non-Embedding): 3.6B\n- Number of Layers: 36\n- Number of Attention Heads (GQA): 32 for Q and 8 for KV\n- Context Length: **262,144 natively**. \n\n**NOTE: This model supports only non-thinking mode and does not generate ```` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**\n\nFor more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).\n\n\n## Performance\n\n|  | GPT-4.1-nano-2025-04-14 | Qwen3-30B-A3B Non-Thinking | Qwen3-4B Non-Thinking | Qwen3-4B-Instruct-2507 |\n|--- | --- | --- | --- | --- |\n| **Knowledge** | | | |\n| MMLU-Pro | 62.8 | 69.1 | 58.0 | **69.6** |\n| MMLU-Redux | 80.2 | 84.1 | 77.3 | **84.2** |\n| GPQA | 50.3 | 54.8 | 41.7 | **62.0** |\n| SuperGPQA | 32.2 | 42.2 | 32.0 | **42.8** |\n| **Reasoning** | | | |\n| AIME25 | 22.7 | 21.6 | 19.1 | **47.4** |\n| HMMT25 | 9.7 | 12.0 | 12.1 | **31.0** |\n| ZebraLogic | 14.8 | 33.2 | 35.2 | **80.2** |\n| LiveBench 20241125 | 41.5 | 59.4 | 48.4 | **63.0** |\n| **Coding** | | | |\n| LiveCodeBench v6 (25.02-25.05) | 31.5 | 29.0 | 26.4 | **35.1** |\n| MultiPL-E | 76.3 | 74.6 | 66.6 | **76.8** |\n| Aider-Polyglot | 9.8 | **24.4** | 13.8 | 12.9 |\n| **Alignment** | | | |\n| IFEval | 74.5 | **83.7** | 81.2 | 83.4 |\n| Arena-Hard v2* | 15.9 | 24.8 | 9.5 | **43.4** |\n| Creative Writing v3 | 72.7 | 68.1 | 53.6 | **83.5** |\n| WritingBench | 66.9 | 72.2 | 68.5 | **83.4** |\n| **Agent** | | | |\n| BFCL-v3 | 53.0 | 58.6 | 57.6 | **61.9** |\n| TAU1-Retail | 23.5 | 38.3 | 24.3 | **48.7** |\n| TAU1-Airline | 14.0 | 18.0 | 16.0 | **32.0** |\n| TAU2-Retail | - | 31.6 | 28.1 | **40.4** |\n| TAU2-Airline | - | 18.0 | 12.0 | **24.0** |\n| TAU2-Telecom | - | **18.4** | 17.5 | 13.2 |\n| **Multilingualism** | | | |\n| MultiIF | 60.7 | **70.8** | 61.3 | 69.0 |\n| MMLU-ProX | 56.2 | **65.1** | 49.6 | 61.6 |\n| INCLUDE | 58.6 | **67.8** | 53.8 | 60.1 |\n| PolyMATH | 15.6 | 23.3 | 16.6 | **31.1** |\n\n*: For reproducibility, we report the win rates evaluated by GPT-4.1.\n\n\n## Quickstart\n\nThe code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.\n\nWith `transformers<4.51.0`, you will encounter the following error:\n```\nKeyError: 'qwen3'\n```\n\nThe following code snippet illustrates how to use the model to generate content based on given inputs.
\n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nmodel_name = \"Qwen/Qwen3-4B-Instruct-2507\"\n\n# load the tokenizer and the model\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_name,\n    torch_dtype=\"auto\",\n    device_map=\"auto\"\n)\n\n# prepare the model input\nprompt = \"Give me a short introduction to large language model.\"\nmessages = [\n    {\"role\": \"user\", \"content\": prompt}\n]\ntext = tokenizer.apply_chat_template(\n    messages,\n    tokenize=False,\n    add_generation_prompt=True,\n)\nmodel_inputs = tokenizer([text], return_tensors=\"pt\").to(model.device)\n\n# conduct text completion\ngenerated_ids = model.generate(\n    **model_inputs,\n    max_new_tokens=16384\n)\noutput_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() \n\ncontent = tokenizer.decode(output_ids, skip_special_tokens=True)\n\nprint(\"content:\", content)\n```\n\nFor deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:\n- SGLang:\n    ```shell\n    python -m sglang.launch_server --model-path Qwen/Qwen3-4B-Instruct-2507 --context-length 262144\n    ```\n- vLLM:\n    ```shell\n    vllm serve Qwen/Qwen3-4B-Instruct-2507 --max-model-len 262144\n    ```\n\n**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**\n\nFor local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.\n\n## Agentic Use\n\nQwen3 excels in tool calling. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.\n\nTo define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.\n```python\nfrom qwen_agent.agents import Assistant\n\n# Define LLM\nllm_cfg = {\n    'model': 'Qwen3-4B-Instruct-2507',\n\n    # Use a custom endpoint compatible with OpenAI API:\n    'model_server': 'http://localhost:8000/v1',  # api_base\n    'api_key': 'EMPTY',\n}\n\n# Define Tools\ntools = [\n    {'mcpServers': {  # You can specify the MCP configuration file\n            'time': {\n                'command': 'uvx',\n                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']\n            },\n            \"fetch\": {\n                \"command\": \"uvx\",\n                \"args\": [\"mcp-server-fetch\"]\n            }\n        }\n    },\n  'code_interpreter',  # Built-in tools\n]\n\n# Define Agent\nbot = Assistant(llm=llm_cfg, function_list=tools)\n\n# Streaming generation\nmessages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]\nfor responses in bot.run(messages=messages):\n    pass\nprint(responses)\n```\n\n## Best Practices\n\nTo achieve optimal performance, we recommend the following settings:\n\n1. **Sampling Parameters**:\n   - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.\n   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.\n\n2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.\n\n3.
**Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.\n - **Math Problems**: Include \"Please reason step by step, and put your final answer within \\boxed{}.\" in the prompt.\n - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: \"Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`.\"\n\n### Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@misc{qwen3technicalreport,\n title={Qwen3 Technical Report}, \n author={Qwen Team},\n year={2025},\n eprint={2505.09388},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2505.09388}, \n}\n```"}}},{"rowIdx":375,"cells":{"modelId":{"kind":"string","value":"Thireus/GLM-4.5-THIREUS-Q5_K_R4-SPECIAL_SPLIT"},"author":{"kind":"string","value":"Thireus"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:08:16Z","string":"2025-08-06T11:08:16Z"},"downloads":{"kind":"number","value":7,"string":"7"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["gguf","arxiv:2505.23786","license:mit","endpoints_compatible","region:us","imatrix","conversational"],"string":"[\n \"gguf\",\n \"arxiv:2505.23786\",\n \"license:mit\",\n \"endpoints_compatible\",\n \"region:us\",\n \"imatrix\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-02T15:13:31Z","string":"2025-08-02T15:13:31Z"},"card":{"kind":"string","value":"---\nlicense: mit\n---\n## ⚠️ Cautionary Notice\n\nDue to changes in the GLM-4.5 PR the GGUF files of this repository have changed. Any older version of these GGUFs are no longer compatible with the latest version of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.\n\n- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).\n- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).\n\n**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** \n> 🔒 **Do not use these quantized models for production** \n> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**\n\nProceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.\n\n---\n\n# GLM-4.5\n\n## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?\n\nThis repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. 
With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.\n\n- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite \n- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb \n- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections \n\n*tl;dr: Expand the details section below*\n
\n\n```\ncd ~\n\n# Make sure to install all ik_llama.cpp compilation dependencies...\napt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx\n\n# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases\ngit clone https://github.com/Thireus/ik_llama.cpp\ncd ik_llama.cpp\ngit pull\n# Build ik_llama.cpp\ncmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048\ncmake --build build --config Release -j16\ncd ..\n\n# Obtain Thireus' GGUF-Tool-Suite\ngit clone https://github.com/Thireus/GGUF-Tool-Suite\n\n# Download model quant mix from recipe file:\ncd GGUF-Tool-Suite\nrm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py\ncp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model\nmkdir -p kitchen && cd kitchen\n../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe\n\n# Launch ik_llama's llama-cli:\nulimit -n 99999 # Lifts \"too many open files\" limitation on Linux\n~/ik_llama.cpp/build/bin/llama-cli \\\n -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \\\n -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \\\n -ot \"blk\\.(3|4|5|6)\\.ffn_.*=CUDA0\" \\\n -ot \"blk\\.(7|8|9|10)\\.ffn_.*=CUDA1\" \\\n -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \\\n --main-gpu 0 \\\n -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|>\\n'\n```\n\n
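If you prefer a single file over per-tensor shards, the downloaded shards can be merged back into one GGUF with `llama-gguf-split --merge` (also mentioned in the FAQ further down). A minimal sketch, assuming the first-shard naming from the example above; the output filename is illustrative:\n```\n# Merge per-tensor shards into a single GGUF file\n~/ik_llama.cpp/build/bin/llama-gguf-split --merge \\\n  GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \\\n  GLM-4.5-merged.gguf\n```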
\n\n---\n\n## ❓ Why does this Tool Suite exist?\n\n1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. \n2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. \n3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! \n\n---\n\n## 📊 How does it compare to other GGUFs?\n\nHere’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):\n\n![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg)\n\n> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ \n\nMore perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs \n\n---\n\n## 🚀 How do I get started?\n\nCheck out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:\n\n1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. \n - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases \n2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. \n - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. \n4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. \n\n---\n\n## ✅ Supported Models\n\nSupported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.\n\n---\n\n## 🤷‍♂️ Will I release pre-cooked GGUF files?\n\nNo, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.\n\nInstead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.\n\nUsers who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. \n\n---\n\n## 📦 What’s in this repository?\n\n- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). 
This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. \n- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. \n- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. \n- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. \n\n---\n\n## 💡 Pro Tips\n\nYou can download the BF16 model version to quantize your own shards:\n\n```\nmkdir kitchen \necho '.*=bf16' > kitchen/bf16.recipe \ncd kitchen\n../quant_downloader.sh bf16.recipe \n```\n\nEnjoy optimized quantization! 🎉\n"}}},{"rowIdx":376,"cells":{"modelId":{"kind":"string","value":"lmsys/gpt-oss-120b-bf16"},"author":{"kind":"string","value":"lmsys"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:07:26Z","string":"2025-08-06T11:07:26Z"},"downloads":{"kind":"number","value":2331,"string":"2,331"},"likes":{"kind":"number","value":2,"string":"2"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","gpt_oss","region:us"],"string":"[\n \"safetensors\",\n \"gpt_oss\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-05T18:54:32Z","string":"2025-08-05T18:54:32Z"},"card":{"kind":"string","value":"# gpt-oss-120b-bf16\n## Model Introduction\nThis model is the bf16 version converted from [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b).\n## Usage\nYou can use this model in [SGLang](https://github.com/sgl-project/sglang) with the following instructions.\n### Installation\n```\n# build from source\ngit clone https://github.com/sgl-project/sglang\ncd sglang\npip3 install pip --upgrade\npip3 install -e \"python[all]\"\n\n# ROCm 6.3\npip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/rocm6.3\ngit clone https://github.com/triton-lang/triton\ncd triton/python/triton_kernels\npip3 install .\n\n# hopper\npip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu126\npip3 install sgl-kernel==0.3.2\n\n# blackwell cu128\npip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128\npip3 install https://github.com/sgl-project/whl/releases/download/v0.3.2/sgl_kernel-0.3.2+cu128-cp39-abi3-manylinux2014_x86_64.whl\n\n# blackwell cu129\npip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu129\npip3 install https://github.com/sgl-project/whl/releases/download/v0.3.2/sgl_kernel-0.3.2-cp39-abi3-manylinux2014_x86_64.whl\n```\n### Launch command\n```\npython3 -m sglang.launch_server --model lmsys/gpt-oss-120b-bf16 --tp 4\n```\n### For more
details\nhttps://github.com/sgl-project/sglang/issues/8833"}}},{"rowIdx":377,"cells":{"modelId":{"kind":"string","value":"lukante/test_summ"},"author":{"kind":"string","value":"lukante"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:07:18Z","string":"2025-08-06T11:07:18Z"},"downloads":{"kind":"number","value":5,"string":"5"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"peft"},"tags":{"kind":"list like","value":["peft","safetensors","base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3","lora","sft","transformers","trl","text-generation","conversational","arxiv:1910.09700","base_model:mistralai/Mistral-7B-Instruct-v0.3","region:us"],"string":"[\n \"peft\",\n \"safetensors\",\n \"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3\",\n \"lora\",\n \"sft\",\n \"transformers\",\n \"trl\",\n \"text-generation\",\n \"conversational\",\n \"arxiv:1910.09700\",\n \"base_model:mistralai/Mistral-7B-Instruct-v0.3\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T11:00:28Z","string":"2025-08-06T11:00:28Z"},"card":{"kind":"string","value":"---\nbase_model: mistralai/Mistral-7B-Instruct-v0.3\nlibrary_name: peft\npipeline_tag: text-generation\ntags:\n- base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3\n- lora\n- sft\n- transformers\n- trl\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.16.0"}}},{"rowIdx":378,"cells":{"modelId":{"kind":"string","value":"Thireus/GLM-4.5-THIREUS-IQ2_K-SPECIAL_SPLIT"},"author":{"kind":"string","value":"Thireus"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:06:17Z","string":"2025-08-06T11:06:17Z"},"downloads":{"kind":"number","value":13,"string":"13"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["gguf","arxiv:2505.23786","license:mit","endpoints_compatible","region:us","imatrix","conversational"],"string":"[\n \"gguf\",\n \"arxiv:2505.23786\",\n \"license:mit\",\n \"endpoints_compatible\",\n \"region:us\",\n \"imatrix\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-02T23:24:45Z","string":"2025-08-02T23:24:45Z"},"card":{"kind":"string","value":"---\nlicense: mit\n---\n## ⚠️ Cautionary Notice\n\nDue to changes in the GLM-4.5 PR the GGUF files of this repository have changed. Any older version of these GGUFs are no longer compatible with the latest version of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.\n\n- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).\n- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).\n\n**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** \n> 🔒 **Do not use these quantized models for production** \n> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**\n\nProceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.\n\n---\n\n# GLM-4.5\n\n## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?\n\nThis repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. 
With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.\n\n- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite \n- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb \n- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections \n\n*tl;dr: Expand the details section below*\n
\n\n```\ncd ~\n\n# Make sure to install all ik_llama.cpp compilation dependencies...\napt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx\n\n# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases\ngit clone https://github.com/Thireus/ik_llama.cpp\ncd ik_llama.cpp\ngit pull\n# Build ik_llama.cpp\ncmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048\ncmake --build build --config Release -j16\ncd ..\n\n# Obtain Thireus' GGUF-Tool-Suite\ngit clone https://github.com/Thireus/GGUF-Tool-Suite\n\n# Download model quant mix from recipe file:\ncd GGUF-Tool-Suite\nrm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py\ncp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model\nmkdir -p kitchen && cd kitchen\n../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe\n\n# Launch ik_llama's llama-cli:\nulimit -n 99999 # Lifts \"too many open files\" limitation on Linux\n~/ik_llama.cpp/build/bin/llama-cli \\\n -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \\\n -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \\\n -ot \"blk\\.(3|4|5|6)\\.ffn_.*=CUDA0\" \\\n -ot \"blk\\.(7|8|9|10)\\.ffn_.*=CUDA1\" \\\n -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \\\n --main-gpu 0 \\\n -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|>\\n'\n```\n\n
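The `llama-cli` invocation above runs a single prompt; for an HTTP endpoint, the same model path and offload flags can be passed to the server binary instead. A minimal sketch, assuming the ik_llama.cpp build above ships `llama-server` and accepts the same offload options (host and port values are illustrative):\n```\n~/ik_llama.cpp/build/bin/llama-server \\\n  -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \\\n  -fa -fmoe -c 4096 -ngl 99 -ot exps=CPU \\\n  --host 127.0.0.1 --port 8080\n```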
\n\n---\n\n## ❓ Why does this Tool Suite exist?\n\n1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. \n2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. \n3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! \n\n---\n\n## 📊 How does it compare to other GGUFs?\n\nHere’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):\n\n![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg)\n\n> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ \n\nMore perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs \n\n---\n\n## 🚀 How do I get started?\n\nCheck out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:\n\n1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. \n - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases \n2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. \n - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. \n4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. \n\n---\n\n## ✅ Supported Models\n\nSupported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.\n\n---\n\n## 🤷‍♂️ Will I release pre-cooked GGUF files?\n\nNo, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.\n\nInstead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.\n\nUsers who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. \n\n---\n\n## 📦 What’s in this repository?\n\n- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). 
This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. \n- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. \n- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. \n- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. \n\n---\n\n## 💡 Pro Tips\n\nYou can download the BF16 model version to quantize your own shards:\n\n```\nmkdir kitchen \necho '.*=bf16' > kitchen/bf16.recipe \ncd kitchen\n../quant_downloader.sh bf16.recipe \n```\n\nEnjoy optimized quantization! 🎉\n"}}},{"rowIdx":379,"cells":{"modelId":{"kind":"string","value":"shaharprofeta/dqn-SpaceInvadersNoFrameskip-v4"},"author":{"kind":"string","value":"shaharprofeta"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:04:21Z","string":"2025-08-06T11:04:21Z"},"downloads":{"kind":"number","value":73,"string":"73"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"stable-baselines3"},"tags":{"kind":"list like","value":["stable-baselines3","SpaceInvadersNoFrameskip-v4","deep-reinforcement-learning","reinforcement-learning","model-index","region:us"],"string":"[\n \"stable-baselines3\",\n \"SpaceInvadersNoFrameskip-v4\",\n \"deep-reinforcement-learning\",\n \"reinforcement-learning\",\n \"model-index\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"reinforcement-learning"},"createdAt":{"kind":"timestamp","value":"2025-08-06T10:37:54Z","string":"2025-08-06T10:37:54Z"},"card":{"kind":"string","value":"---\nlibrary_name: stable-baselines3\ntags:\n- SpaceInvadersNoFrameskip-v4\n- deep-reinforcement-learning\n- reinforcement-learning\n- stable-baselines3\nmodel-index:\n- name: DQN\n results:\n - task:\n type: reinforcement-learning\n name: reinforcement-learning\n dataset:\n name: SpaceInvadersNoFrameskip-v4\n type: SpaceInvadersNoFrameskip-v4\n metrics:\n - type: mean_reward\n value: 738.00 +/- 430.90\n name: mean_reward\n verified: false\n---\n\n# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**\nThis is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**\nusing the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)\nand the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.\n\n## Usage (with SB3 RL Zoo)\n\nRL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo
\nSB3: https://github.com/DLR-RM/stable-baselines3
\nSB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib\nSBX (SB3 + Jax): https://github.com/araffin/sbx\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n```bash\npip install rl_zoo3\n```\n\n```\n# Download model and save it into the logs/ folder\npython -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shaharprofeta -f logs/\npython -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/\n```\n\nIf you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:\n```\npython -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shaharprofeta -f logs/\npython -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/\n```\n\n## Training (with the RL Zoo)\n```\npython -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/\n# Upload the model and generate video (when possible)\npython -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga shaharprofeta\n```\n\n## Hyperparameters\n```python\nOrderedDict([('batch_size', 32),\n ('buffer_size', 100000),\n ('env_wrapper',\n ['stable_baselines3.common.atari_wrappers.AtariWrapper']),\n ('exploration_final_eps', 0.01),\n ('exploration_fraction', 0.1),\n ('frame_stack', 4),\n ('gradient_steps', 1),\n ('learning_rate', 0.0001),\n ('learning_starts', 100000),\n ('n_timesteps', 1000000.0),\n ('optimize_memory_usage', False),\n ('policy', 'CnnPolicy'),\n ('target_update_interval', 1000),\n ('train_freq', 4),\n ('normalize', False)])\n```\n\n# Environment Arguments\n```python\n{'render_mode': 'rgb_array'}\n```\n"}}},{"rowIdx":380,"cells":{"modelId":{"kind":"string","value":"mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF"},"author":{"kind":"string","value":"mradermacher"},"last_modified":{"kind":"timestamp","value":"2025-08-06T11:00:10Z","string":"2025-08-06T11:00:10Z"},"downloads":{"kind":"number","value":125,"string":"125"},"likes":{"kind":"number","value":1,"string":"1"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","gguf","vision","llm","critical","sft","d3.js","visualization","en","base_model:ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B","base_model:quantized:ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B","license:apache-2.0","endpoints_compatible","region:us","imatrix","conversational"],"string":"[\n \"transformers\",\n \"gguf\",\n \"vision\",\n \"llm\",\n \"critical\",\n \"sft\",\n \"d3.js\",\n \"visualization\",\n \"en\",\n \"base_model:ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B\",\n \"base_model:quantized:ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B\",\n \"license:apache-2.0\",\n \"endpoints_compatible\",\n \"region:us\",\n \"imatrix\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T10:17:04Z","string":"2025-08-06T10:17:04Z"},"card":{"kind":"string","value":"---\nbase_model: ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nmradermacher:\n readme_rev: 1\nquantized_by: mradermacher\ntags:\n- vision\n- llm\n- critical\n- sft\n- d3.js\n- visualization\n---\n## About\n\n\n\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B\n\n\n\n***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF).***\n\nstatic quants are available at https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-GGUF\n\n**This is a vision model - mmproj 
files (if any) will be in the [static repository](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-GGUF).**\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own qwuants) |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |\n| 
[GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |\n| [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n"}}},{"rowIdx":381,"cells":{"modelId":{"kind":"string","value":"callgg/gpt-20b-8bit"},"author":{"kind":"string","value":"callgg"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:58:31Z","string":"2025-08-06T10:58:31Z"},"downloads":{"kind":"number","value":7,"string":"7"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"diffusers"},"tags":{"kind":"list like","value":["diffusers","safetensors","gpt_oss","base_model:openai/gpt-oss-20b","base_model:quantized:openai/gpt-oss-20b","license:apache-2.0","mxfp4","region:us"],"string":"[\n \"diffusers\",\n \"safetensors\",\n \"gpt_oss\",\n \"base_model:openai/gpt-oss-20b\",\n \"base_model:quantized:openai/gpt-oss-20b\",\n \"license:apache-2.0\",\n \"mxfp4\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T09:21:35Z","string":"2025-08-06T09:21:35Z"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\nlibrary_name: diffusers\nbase_model:\n- openai/gpt-oss-20b\n---\n## gpt-20b\n- repackage of [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)"}}},{"rowIdx":382,"cells":{"modelId":{"kind":"string","value":"GaborMadarasz/AstroQA_mamba_V21"},"author":{"kind":"string","value":"GaborMadarasz"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:57:12Z","string":"2025-08-06T10:57:12Z"},"downloads":{"kind":"number","value":4,"string":"4"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","mamba","text-generation","trl","sft","arxiv:1910.09700","autotrain_compatible","text-generation-inference","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"mamba\",\n \"text-generation\",\n \"trl\",\n \"sft\",\n \"arxiv:1910.09700\",\n \"autotrain_compatible\",\n \"text-generation-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T10:56:46Z","string":"2025-08-06T10:56:46Z"},"card":{"kind":"string","value":"---\nlibrary_name: transformers\ntags:\n- trl\n- sft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\nThis is the model card of a 🤗 transformers model that has been pushed on the Hub. 
This model card has been automatically generated.\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]"}}},{"rowIdx":383,"cells":{"modelId":{"kind":"string","value":"BjarneNPO/finetune_06_08_2025_12_20_24"},"author":{"kind":"string","value":"BjarneNPO"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:50:52Z","string":"2025-08-06T10:50:52Z"},"downloads":{"kind":"number","value":1,"string":"1"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"sentence-transformers"},"tags":{"kind":"list like","value":["sentence-transformers","safetensors","xlm-roberta","sentence-similarity","feature-extraction","dense","generated_from_trainer","dataset_size:19964","loss:MultipleNegativesRankingLoss","arxiv:1908.10084","arxiv:1705.00652","base_model:FacebookAI/xlm-roberta-base","base_model:finetune:FacebookAI/xlm-roberta-base","autotrain_compatible","text-embeddings-inference","endpoints_compatible","region:us"],"string":"[\n \"sentence-transformers\",\n \"safetensors\",\n \"xlm-roberta\",\n \"sentence-similarity\",\n \"feature-extraction\",\n \"dense\",\n \"generated_from_trainer\",\n \"dataset_size:19964\",\n \"loss:MultipleNegativesRankingLoss\",\n \"arxiv:1908.10084\",\n \"arxiv:1705.00652\",\n \"base_model:FacebookAI/xlm-roberta-base\",\n \"base_model:finetune:FacebookAI/xlm-roberta-base\",\n \"autotrain_compatible\",\n \"text-embeddings-inference\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"sentence-similarity"},"createdAt":{"kind":"timestamp","value":"2025-08-06T10:48:18Z","string":"2025-08-06T10:48:18Z"},"card":{"kind":"string","value":"---\r\ntags:\r\n- sentence-transformers\r\n- sentence-similarity\r\n- feature-extraction\r\n- dense\r\n- generated_from_trainer\r\n- dataset_size:19964\r\n- loss:MultipleNegativesRankingLoss\r\nbase_model: FacebookAI/xlm-roberta-base\r\nwidget:\r\n- source_sentence: bei einem kann keine hinterlegt werden\r\n sentences:\r\n - An einem Tag gab es im August eine Überbelegung, einmal erklärt wie sie diese\r\n nachvollziehen kann.\r\n - Fehlermeldung weist auf eine fehlende BI hin. Anwenderin stimmt sich dazu mit\r\n ab.\r\n - 'Ticket\r\n\r\n ---------------------------\r\n\r\n Export angepasst - informiert\r\n\r\n --------------------------\r\n\r\n User möchte auch in der übergreifenden Personalliste die Anpassung umgesetzt haben\r\n - daher Ticket erneut geöffnet\r\n\r\n - übergreifender Export ebenfalls angepasst - informiert'\r\n- source_sentence: Userin darf erst am 01.02.2024 die Vertragsangebote rausschicken,\r\n möchte aber schonmal vermerken, welchen Kindern sie ein Vertragsangebot schicken\r\n möchte.\r\n sentences:\r\n - Das ist noch nicht freigeschaltet. 
Genauer Zeitpunkt steht auch noch nicht fest.\r\n - 'Kind muss manuell angelegt werden und dann neu synchronisiert und Anmeldedaten\r\n zusammenführen.\r\n\r\n Da Userin weiterhin Anmeldedaten nicht zusammenführen kann Userin gebeten uns\r\n einen Screenshot aus dem Kita-Navigator zukommen zu lassen.\r\n\r\n Beide Kinder wurden nun übertragen und befinden sich unter Vetragsangeboten.'\r\n - Kann die Kinder auf die Planungsliste nehmen, dann sieht sie diese sowohl in der\r\n Planungsliste, als auch in der Liste der Anmeldungen mit dem Symbol in der Anmeldeliste.\r\n- source_sentence: Fehlermeldung beim Erstellen der Datei.\r\n sentences:\r\n - In der Benutzerverwaltung unter Verwaltung.\r\n - Bei einer Kollegin musste noch die Stundenanzahl unter Ausbildung und Statistik\r\n eingetragen werden.\r\n - 'Wurde an den Entwickler weitergegeben.\r\n\r\n Problem konnte behoben werden, Benutzer wurde informiert.'\r\n- source_sentence: möchte wissen wenn ein Kind gestern letzmalig in der Kita war,\r\n welches Entlassdatum muss im System eingetragen werden?\r\n sentences:\r\n - Fehler bereist bekannt, prüft später erneut.\r\n - Aktuell wurde uns noch nicht gemeldet, dass wir das Jugendamt freischalten sollen.\r\n - Der letzte Betreuungstag muss als Entlassdatum hinterlegt werden, da sonst die\r\n BI nicht stimmt.\r\n- source_sentence: Login mit dem Authenticator funktioniert nicht mehr, Code ist immer\r\n ungültig\r\n sentences:\r\n - Erneut die Tätigkeit gelöscht und neu Übertragen, die Tätigkeit wurde aber nicht\r\n erneut angezeigt\r\n - Nachdem die Uhrzeit neu synchronisiert war konnte sie sich wieder einloggen.\r\n - Dies entspricht der Vorlage. muss Vorlage anpassen.\r\npipeline_tag: sentence-similarity\r\nlibrary_name: sentence-transformers\r\n---\r\n\r\n# SentenceTransformer based on FacebookAI/xlm-roberta-base\r\n\r\nThis is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the train dataset. 
It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.\r\n\r\n## Model Details\r\n\r\n### Model Description\r\n- **Model Type:** Sentence Transformer\r\n- **Base model:** [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) \r\n- **Maximum Sequence Length:** 512 tokens\r\n- **Output Dimensionality:** 768 dimensions\r\n- **Similarity Function:** Cosine Similarity\r\n- **Training Dataset:**\r\n - train\r\n\r\n\r\n\r\n### Model Sources\r\n\r\n- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)\r\n- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)\r\n- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)\r\n\r\n### Full Model Architecture\r\n\r\n```\r\nSentenceTransformer(\r\n (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})\r\n (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})\r\n)\r\n```\r\n\r\n## Usage\r\n\r\n### Direct Usage (Sentence Transformers)\r\n\r\nFirst install the Sentence Transformers library:\r\n\r\n```bash\r\npip install -U sentence-transformers\r\n```\r\n\r\nThen you can load this model and run inference.\r\n```python\r\nfrom sentence_transformers import SentenceTransformer\r\n\r\n# Download from the 🤗 Hub\r\nmodel = SentenceTransformer(\"BjarneNPO/finetune_06_08_2025_12_20_24\")\r\n# Run inference\r\nqueries = [\r\n \"Login mit dem Authenticator funktioniert nicht mehr, Code ist immer ung\\u00fcltig\",\r\n]\r\ndocuments = [\r\n 'Nachdem die Uhrzeit neu synchronisiert war konnte sie sich wieder einloggen.',\r\n 'Erneut die Tätigkeit gelöscht und neu Übertragen, die Tätigkeit wurde aber nicht erneut angezeigt',\r\n 'Dies entspricht der Vorlage. muss Vorlage anpassen.',\r\n]\r\nquery_embeddings = model.encode_query(queries)\r\ndocument_embeddings = model.encode_document(documents)\r\nprint(query_embeddings.shape, document_embeddings.shape)\r\n# [1, 768] [3, 768]\r\n\r\n# Get the similarity scores for the embeddings\r\nsimilarities = model.similarity(query_embeddings, document_embeddings)\r\nprint(similarities)\r\n# tensor([[0.7032, 0.5662, 0.3571]])\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n## Training Details\r\n\r\n### Training Dataset\r\n\r\n#### train\r\n\r\n* Dataset: train\r\n* Size: 19,964 training samples\r\n* Columns: query and answer\r\n* Approximate statistics based on the first 1000 samples:\r\n | | query | answer |\r\n |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|\r\n | type | string | string |\r\n | details |
min: 4 tokens<br>mean: 27.66 tokens<br>max: 512 tokens | min: 3 tokens<br>mean: 22.87 tokens<br>max: 151 tokens
|\r\n* Samples:\r\n | query | answer |\r\n |:----------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------|\r\n | Wie kann man die Jahresurlaubsübersicht exportieren? | über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren |\r\n | 1. Vertragsabschlüsse werden nicht übertragen
2. Kinder kommen nicht von nach<br>3. Absage kann bei Portalstatus nicht erstellt werden. | Ticket<br>Userin gebeten sich an den Support zu wenden, da der Fehler liegt.
|\r\n | Wird im Anmeldeportal nicht gefunden. | Die Schnittstelle war noch nicht aktiviert und Profil ebenfalls nicht. |\r\n* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:\r\n ```json\r\n {\r\n \"scale\": 20.0,\r\n \"similarity_fct\": \"cos_sim\"\r\n }\r\n ```\r\n\r\n### Evaluation Dataset\r\n\r\n#### train\r\n\r\n* Dataset: train\r\n* Size: 8,557 evaluation samples\r\n* Columns: query and answer\r\n* Approximate statistics based on the first 1000 samples:\r\n | | query | answer |\r\n |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|\r\n | type | string | string |\r\n | details |
min: 4 tokens<br>mean: 26.49 tokens<br>max: 512 tokens | min: 3 tokens<br>mean: 23.16 tokens<br>max: 512 tokens
|\r\n* Samples:\r\n | query | answer |\r\n |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\r\n | Liebes Support Team!
In unserer Kst. fiel der EL auf, dass es in der Urlaubsübersicht Unstimmigkeiten gibt. So werden z.B. bei der Kollegin 60 offene Tage angezeigt und im Detail (Jahresübersicht) korrekt alle eingetragenen Tage und nur 2 Tage Rest!<br>Ich freue mich auf Ihre Rückmeldung.<br>Mit besten Grüßen<br>[email signature and legal disclaimer omitted]
| Problem ist bekannt und wird im Verlauf des Tages behoben. |\r\n | hat im einen Vertrag, aber wurde nicht nach übertragen. war wegen fehlender Anbindung auf der Schnittstelle nicht auf der Anmeldeliste. | Kind muss manuell angelegt werden und dann neu synchronisiert und Anmeldedaten zusammenführen.
Da Userin weiterhin Anmeldedaten nicht zusammenführen kann Userin gebeten uns einen Screenshot aus dem Kita-Navigator zukommen zu lassen.<br>Beide Kinder wurden nun übertragen und befinden sich unter Vetragsangeboten.
|\r\n | Wie kann ein Kind aus den zukünftigen Neuaufnahmen gelöscht werden? | Benutzer muss erst die BI und kann dann über den Button Statuswechsel durchführen das ganze Kind löschen. |\r\n* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:\r\n ```json\r\n {\r\n \"scale\": 20.0,\r\n \"similarity_fct\": \"cos_sim\"\r\n }\r\n ```\r\n\r\n### Training Hyperparameters\r\n#### Non-Default Hyperparameters\r\n\r\n- `eval_strategy`: epoch\r\n- `per_device_train_batch_size`: 32\r\n- `per_device_eval_batch_size`: 16\r\n- `gradient_accumulation_steps`: 16\r\n- `learning_rate`: 2e-05\r\n- `num_train_epochs`: 8\r\n- `lr_scheduler_type`: cosine\r\n- `warmup_ratio`: 0.1\r\n- `bf16`: True\r\n- `tf32`: True\r\n- `load_best_model_at_end`: True\r\n- `optim`: adamw_torch_fused\r\n- `batch_sampler`: no_duplicates\r\n\r\n#### All Hyperparameters\r\n
- `overwrite_output_dir`: False\r\n- `do_predict`: False\r\n- `eval_strategy`: epoch\r\n- `prediction_loss_only`: True\r\n- `per_device_train_batch_size`: 32\r\n- `per_device_eval_batch_size`: 16\r\n- `per_gpu_train_batch_size`: None\r\n- `per_gpu_eval_batch_size`: None\r\n- `gradient_accumulation_steps`: 16\r\n- `eval_accumulation_steps`: None\r\n- `learning_rate`: 2e-05\r\n- `weight_decay`: 0.0\r\n- `adam_beta1`: 0.9\r\n- `adam_beta2`: 0.999\r\n- `adam_epsilon`: 1e-08\r\n- `max_grad_norm`: 1.0\r\n- `num_train_epochs`: 8\r\n- `max_steps`: -1\r\n- `lr_scheduler_type`: cosine\r\n- `lr_scheduler_kwargs`: {}\r\n- `warmup_ratio`: 0.1\r\n- `warmup_steps`: 0\r\n- `log_level`: passive\r\n- `log_level_replica`: warning\r\n- `log_on_each_node`: True\r\n- `logging_nan_inf_filter`: True\r\n- `save_safetensors`: True\r\n- `save_on_each_node`: False\r\n- `save_only_model`: False\r\n- `restore_callback_states_from_checkpoint`: False\r\n- `no_cuda`: False\r\n- `use_cpu`: False\r\n- `use_mps_device`: False\r\n- `seed`: 42\r\n- `data_seed`: None\r\n- `jit_mode_eval`: False\r\n- `use_ipex`: False\r\n- `bf16`: True\r\n- `fp16`: False\r\n- `fp16_opt_level`: O1\r\n- `half_precision_backend`: auto\r\n- `bf16_full_eval`: False\r\n- `fp16_full_eval`: False\r\n- `tf32`: True\r\n- `local_rank`: 0\r\n- `ddp_backend`: None\r\n- `tpu_num_cores`: None\r\n- `tpu_metrics_debug`: False\r\n- `debug`: []\r\n- `dataloader_drop_last`: False\r\n- `dataloader_num_workers`: 0\r\n- `dataloader_prefetch_factor`: None\r\n- `past_index`: -1\r\n- `disable_tqdm`: False\r\n- `remove_unused_columns`: True\r\n- `label_names`: None\r\n- `load_best_model_at_end`: True\r\n- `ignore_data_skip`: False\r\n- `fsdp`: []\r\n- `fsdp_min_num_params`: 0\r\n- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}\r\n- `fsdp_transformer_layer_cls_to_wrap`: None\r\n- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}\r\n- `deepspeed`: None\r\n- `label_smoothing_factor`: 0.0\r\n- `optim`: adamw_torch_fused\r\n- `optim_args`: None\r\n- `adafactor`: False\r\n- `group_by_length`: False\r\n- `length_column_name`: length\r\n- `ddp_find_unused_parameters`: None\r\n- `ddp_bucket_cap_mb`: None\r\n- `ddp_broadcast_buffers`: False\r\n- `dataloader_pin_memory`: True\r\n- `dataloader_persistent_workers`: False\r\n- `skip_memory_metrics`: True\r\n- `use_legacy_prediction_loop`: False\r\n- `push_to_hub`: False\r\n- `resume_from_checkpoint`: None\r\n- `hub_model_id`: None\r\n- `hub_strategy`: every_save\r\n- `hub_private_repo`: False\r\n- `hub_always_push`: False\r\n- `gradient_checkpointing`: False\r\n- `gradient_checkpointing_kwargs`: None\r\n- `include_inputs_for_metrics`: False\r\n- `eval_do_concat_batches`: True\r\n- `fp16_backend`: auto\r\n- `push_to_hub_model_id`: None\r\n- `push_to_hub_organization`: None\r\n- `mp_parameters`: \r\n- `auto_find_batch_size`: False\r\n- `full_determinism`: False\r\n- `torchdynamo`: None\r\n- `ray_scope`: last\r\n- `ddp_timeout`: 1800\r\n- `torch_compile`: False\r\n- `torch_compile_backend`: None\r\n- `torch_compile_mode`: None\r\n- `dispatch_batches`: None\r\n- `split_batches`: None\r\n- `include_tokens_per_second`: False\r\n- `include_num_input_tokens_seen`: False\r\n- `neftune_noise_alpha`: None\r\n- `optim_target_modules`: None\r\n- `batch_eval_metrics`: False\r\n- `prompts`: None\r\n- `batch_sampler`: no_duplicates\r\n- 
`multi_dataset_batch_sampler`: proportional\r\n- `router_mapping`: {}\r\n- `learning_rate_mapping`: {}\r\n\r\n
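For readers who want to approximate this run, the following is a minimal sketch of how the non-default hyperparameters above map onto the sentence-transformers v5 training API. It is not the original training script: the dataset construction is a placeholder for the unpublished query–answer pairs, and `save_strategy="epoch"` is an added assumption so that `load_best_model_at_end=True` is accepted by the trainer.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

# Placeholder pairs -- substitute the real 19,964 query/answer training rows here.
train_dataset = Dataset.from_dict({
    "query": ["Wie kann man die Jahresurlaubsübersicht exportieren?"],
    "answer": ["über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren"],
})
eval_dataset = train_dataset  # stand-in for the held-out evaluation pairs

# Loading a plain transformer checkpoint adds the mean-pooling module automatically.
model = SentenceTransformer("FacebookAI/xlm-roberta-base")
loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default similarity

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-xlm-roberta",
    num_train_epochs=8,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives for MNRL
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```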
\r\n\r\n### Training Logs\r\n| Epoch | Step | Training Loss | train loss |\r\n|:-------:|:-------:|:-------------:|:----------:|\r\n| 0.2564 | 10 | 3.5052 | - |\r\n| 0.5128 | 20 | 3.4876 | - |\r\n| 0.7692 | 30 | 3.4632 | - |\r\n| 1.0 | 39 | - | 2.4519 |\r\n| 1.0256 | 40 | 3.3556 | - |\r\n| 1.2821 | 50 | 3.0786 | - |\r\n| 1.5385 | 60 | 2.8448 | - |\r\n| 1.7949 | 70 | 2.694 | - |\r\n| 2.0 | 78 | - | 1.7468 |\r\n| 2.0513 | 80 | 2.4993 | - |\r\n| 2.3077 | 90 | 2.4 | - |\r\n| 2.5641 | 100 | 2.3188 | - |\r\n| 2.8205 | 110 | 2.2225 | - |\r\n| 3.0 | 117 | - | 1.4909 |\r\n| 3.0769 | 120 | 2.1009 | - |\r\n| 3.3333 | 130 | 2.0479 | - |\r\n| 3.5897 | 140 | 1.9971 | - |\r\n| 3.8462 | 150 | 1.9289 | - |\r\n| 4.0 | 156 | - | 1.3297 |\r\n| 4.1026 | 160 | 1.8177 | - |\r\n| 4.3590 | 170 | 1.8191 | - |\r\n| 4.6154 | 180 | 1.7751 | - |\r\n| 4.8718 | 190 | 1.7375 | - |\r\n| 5.0 | 195 | - | 1.2254 |\r\n| 5.1282 | 200 | 1.6917 | - |\r\n| 5.3846 | 210 | 1.6542 | - |\r\n| 5.6410 | 220 | 1.6687 | - |\r\n| 5.8974 | 230 | 1.637 | - |\r\n| 6.0 | 234 | - | 1.2036 |\r\n| 6.1538 | 240 | 1.6071 | - |\r\n| 6.4103 | 250 | 1.5859 | - |\r\n| 6.6667 | 260 | 1.6114 | - |\r\n| 6.9231 | 270 | 1.59 | - |\r\n| 7.0 | 273 | - | 1.1898 |\r\n| 7.1795 | 280 | 1.5662 | - |\r\n| 7.4359 | 290 | 1.583 | - |\r\n| 7.6923 | 300 | 1.5958 | - |\r\n| 7.9487 | 310 | 1.5835 | - |\r\n| **8.0** | **312** | **-** | **1.1846** |\r\n\r\n* The bold row denotes the saved checkpoint.\r\n\r\n### Framework Versions\r\n- Python: 3.11.9\r\n- Sentence Transformers: 5.0.0\r\n- Transformers: 4.41.2\r\n- PyTorch: 2.3.1+cu121\r\n- Accelerate: 0.31.0\r\n- Datasets: 3.6.0\r\n- Tokenizers: 0.19.1\r\n\r\n## Citation\r\n\r\n### BibTeX\r\n\r\n#### Sentence Transformers\r\n```bibtex\r\n@inproceedings{reimers-2019-sentence-bert,\r\n title = \"Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks\",\r\n author = \"Reimers, Nils and Gurevych, Iryna\",\r\n booktitle = \"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing\",\r\n month = \"11\",\r\n year = \"2019\",\r\n publisher = \"Association for Computational Linguistics\",\r\n url = \"https://arxiv.org/abs/1908.10084\",\r\n}\r\n```\r\n\r\n#### MultipleNegativesRankingLoss\r\n```bibtex\r\n@misc{henderson2017efficient,\r\n title={Efficient Natural Language Response Suggestion for Smart Reply},\r\n author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},\r\n year={2017},\r\n eprint={1705.00652},\r\n archivePrefix={arXiv},\r\n primaryClass={cs.CL}\r\n}\r\n```\r\n\r\n\r\n\r\n\r\n\r\n"}}},{"rowIdx":384,"cells":{"modelId":{"kind":"string","value":"hugsanaa/CyberAraBERT"},"author":{"kind":"string","value":"hugsanaa"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:43:49Z","string":"2025-08-06T10:43:49Z"},"downloads":{"kind":"number","value":9,"string":"9"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","bert","ar","base_model:aubmindlab/bert-base-arabertv02-twitter","base_model:finetune:aubmindlab/bert-base-arabertv02-twitter","license:apache-2.0","region:us"],"string":"[\n \"safetensors\",\n \"bert\",\n \"ar\",\n \"base_model:aubmindlab/bert-base-arabertv02-twitter\",\n \"base_model:finetune:aubmindlab/bert-base-arabertv02-twitter\",\n \"license:apache-2.0\",\n 
\"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T06:12:24Z","string":"2025-08-06T06:12:24Z"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\nlanguage:\n- ar\nbase_model:\n- aubmindlab/bert-base-arabertv02-twitter\n---\n# CyberAraBERT: AraBERT for Arabic Cyberbullying Detection\n\n# Overview\nCyberAraBERT is a specialized Arabic PLM designed for analyzing social media content and detecting the presence of cyberbullying. It works on multiple dialects (Egyptian, Gulf, and Levantine).\n\nThis model can be used for additional fine-tuning and also for testing.\n\n# Model Details:\n- **Base Model:** aubmindlab/bert-base-arabertv02-twitter\n- **Language:** Arabic\n- **Dataset used for fine-tuning:** [ArCyC](https://data.mendeley.com/datasets/z2dfgrzx47/1)\n- **License:** Apache License 2.0\n\n# Model Inference\nYou can use CyberAraBERT directly on any dataset to detect cyberbullying. To use it, follow these steps:\n\n**1. Install the required libraries**\nMake sure the required libraries are installed before using the model:\n```bash\npip install arabert transformers torch\n```\n\n**2. Load the Model and Tokenizer**\n```python\n# Import required modules\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\nimport torch\n\n# Load model and tokenizer\nmodel_name = 'hugsanaa/CyberAraBERT'\nmodel = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)\ntokenizer = AutoTokenizer.from_pretrained(model_name)\n```\n\n**3. Predict**\n```python\n# Example text\ntext = \"بدك توظف و تمشي بالمحاصه علي القليله ما تحط حمار يدق بيانو حط الحمار لحمرنه و موسيقي لبيانو\"\n\n# Tokenize input\ninputs = tokenizer(text, return_tensors=\"pt\", truncation=True, padding=True)\n\n# Make predictions\nwith torch.no_grad():\n    logits = model(**inputs).logits\n    predicted_class = torch.argmax(logits, dim=-1).item()\n\n# Interpret results\nlabels = [\"Cyberbullying\", \"Not Cyberbullying\"]\nprint(f\"Prediction: {labels[predicted_class]}\")\n```\n\n**Inference using pipeline**\n```python\nimport pandas as pd\nfrom transformers import pipeline\nimport more_itertools\nfrom tqdm import tqdm_notebook as tqdm\n\nmodel = 'hugsanaa/CyberAraBERT'\nmax_len = 512  # maximum sequence length supported by the model\n\n# Load the dataset (the data must include a text column)\ndata = pd.read_csv(\"your_cyberbullying_data.csv\")\n\n# Build the prediction pipeline\npipe = pipeline(\"sentiment-analysis\", model=model, device=0, return_all_scores=True, max_length=max_len, truncation=True)\npreds = []\nfor s in tqdm(more_itertools.chunked(list(data['text']), 32)):  # batching for faster inference\n    preds.extend(pipe(s))\n\n# Generate final predictions\ndata['preds'] = preds\nfinal_pred = []\nfor prediction in data['preds']:\n    final_pred.append(max(prediction, key=lambda x: x['score'])['label'])\n\ndata['Final Prediction'] = final_pred\n```\n\n# Results\nBelow are the results obtained from testing CyberAraBERT on test samples from the ArCyC dataset:\n| Class | Precision | Recall | F1-Score | Support |\n|--------------------|-----------|--------|----------|---------|\n| Not Cyberbullying | 0.9256 | 0.9043 | 0.9148 | 564 |\n| Cyberbullying | 0.8453 | 0.8780 | 0.8613 | 336 |\n| **Overall / Avg.** | 0.8956 | 0.8944 | 0.8948 | 900 
|"}}},{"rowIdx":385,"cells":{"modelId":{"kind":"string","value":"oyvindbs/setfit-minister-mobilize-nb-sbert-base"},"author":{"kind":"string","value":"oyvindbs"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:43:16Z","string":"2025-08-06T10:43:16Z"},"downloads":{"kind":"number","value":1,"string":"1"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"setfit"},"tags":{"kind":"list like","value":["setfit","safetensors","bert","sentence-transformers","text-classification","generated_from_setfit_trainer","Norway","Cabinet Ministers","no","nb","arxiv:2209.11055","base_model:NbAiLab/nb-sbert-base","base_model:finetune:NbAiLab/nb-sbert-base","region:us"],"string":"[\n \"setfit\",\n \"safetensors\",\n \"bert\",\n \"sentence-transformers\",\n \"text-classification\",\n \"generated_from_setfit_trainer\",\n \"Norway\",\n \"Cabinet Ministers\",\n \"no\",\n \"nb\",\n \"arxiv:2209.11055\",\n \"base_model:NbAiLab/nb-sbert-base\",\n \"base_model:finetune:NbAiLab/nb-sbert-base\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"createdAt":{"kind":"timestamp","value":"2025-06-30T08:55:04Z","string":"2025-06-30T08:55:04Z"},"card":{"kind":"string","value":"---\ntags:\n- setfit\n- sentence-transformers\n- text-classification\n- generated_from_setfit_trainer\n- Norway\n- Cabinet Ministers\nwidget: []\nmetrics:\n- accuracy\npipeline_tag: text-classification\nlibrary_name: setfit\ninference: true\nbase_model: NbAiLab/nb-sbert-base\nlanguage:\n- 'no'\n- nb\n---\n\n# Purpose: Mobilizing\n\nThis model has been trained on Facebook posts by Norwegian cabinet ministers of the Solberg governments (2013-2021). It was used in Karlsen, Kolltveit and Solheim (2025).\nThe posts were hand coded specifying different roles and purposes of the posts. \nBelow, we recreate the table 1 from the paper showing the five roles and four purposes. The model included here identifies posts where the purpose is to **Mobilize**. \nThe setfit models that identify the other roles and purposes are available [here](https://huggingface.co/collections/oyvindbs/balancing-acts-the-communicative-roles-of-cabinet-ministers-68624b72c250c3cc1fd3ea14).\nIn the paper, we use one model for each purpose and each role. Each post can accordingly be ascribed to more than one purpose or role. \n\n| | Communicative purposes | | | |\n|------------------------------|-------------------------------|----------------------|-------------------|-----------------|\n| **Communicative roles** | Informing | Communication | *Mobilizing* | Branding |\n| Ministry head | | | | |\n| Cabinet member | | | | |\n| Party politician | | | | |\n| Individual politician | | | | |\n| Private person | | | | |\n\nThis is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification of Norwegian social media posts. It uses [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.\n\nIt has been trained using an efficient few-shot learning technique that involves:\n\n1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.\n2. 
Training a classification head with features from the fine-tuned Sentence Transformer.\n\n## Model Details\n\n### Model Description\n- **Model Type:** SetFit\n- **Sentence Transformer body:** [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base)\n- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance\n- **Maximum Sequence Length:** 75 tokens\n- **Number of Classes:** 1\n\n\n**Language:** \n* Norwegian (Bokmål)\n\n\n\n### Model Sources\n\n- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)\n- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)\n- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)\n\n## Uses\n\n### Direct Use for Inference\n\nFirst install the SetFit library:\n\n```bash\npip install setfit\n```\n\nThen you can load this model and run inference.\n\n```python\nfrom setfit import SetFitModel\n\n# Download from the 🤗 Hub\nmodel = SetFitModel.from_pretrained(\"oyvindbs/setfit-minister-mobilize-nb-sbert-base\")\n# Run inference\npreds = model(\"I loved the spiderman movie!\")\n```\n\n\n\n\n\n\n\n\n\n## Training Details\n\n### Framework Versions\n- Python: 3.10.4\n- SetFit: 1.1.1\n- Sentence Transformers: 3.4.1\n- Transformers: 4.50.1\n- PyTorch: 2.5.1+cu118\n- Datasets: 2.19.0\n- Tokenizers: 0.21.0\n\n## Citation\n```bibtex\n@article{KarlsenKolltveitSolheim,\n author = {Karlsen, Rune and Kolltveit, Kristoffer and Solheim, Øyvind Bugge},\n title = {Balancing Acts: The communicative roles of cabinet ministers on social media},\n publisher = {Media and Communication},\n year = {2025}\n}\n```\n\n\n### BibTeX\n```bibtex\n@article{https://doi.org/10.48550/arxiv.2209.11055,\n doi = {10.48550/ARXIV.2209.11055},\n url = {https://arxiv.org/abs/2209.11055},\n author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},\n keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},\n title = {Efficient Few-Shot Learning Without Prompts},\n publisher = {arXiv},\n year = {2022},\n copyright = {Creative Commons Attribution 4.0 International}\n}\n```\n\n\n\n\n\n"}}},{"rowIdx":386,"cells":{"modelId":{"kind":"string","value":"daskalos-apps/phi4-cybersec-Q4_K_M"},"author":{"kind":"string","value":"daskalos-apps"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:40:29Z","string":"2025-08-06T10:40:29Z"},"downloads":{"kind":"number","value":26,"string":"26"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"llama.cpp"},"tags":{"kind":"list like","value":["llama.cpp","gguf","phi4","quantized","cybersecurity","Q4_K_M","en","license:mit","endpoints_compatible","region:us","conversational"],"string":"[\n \"llama.cpp\",\n \"gguf\",\n \"phi4\",\n \"quantized\",\n \"cybersecurity\",\n \"Q4_K_M\",\n \"en\",\n \"license:mit\",\n \"endpoints_compatible\",\n \"region:us\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T10:39:51Z","string":"2025-08-06T10:39:51Z"},"card":{"kind":"string","value":"---\nlicense: mit\nbase_model: microsoft/phi-4-mini-instruct\ntags:\n - gguf\n - quantized\n - phi4\n - cybersecurity\n - Q4_K_M\nmodel_type: phi4\nquantization: Q4_K_M\nlanguage:\n - en\nlibrary_name: llama.cpp\n---\n\n# Phi-4 Cybersecurity 
Chatbot - Q4_K_M GGUF\n\nThis is a quantized version of Microsoft's Phi-4-mini-instruct, optimized for cybersecurity Q&A applications.\n\n## Model Details\n\n- **Base Model**: microsoft/phi-4-mini-instruct\n- **Quantization**: Q4_K_M (4-bit quantization)\n- **Format**: GGUF\n- **Size**: ~2-3GB (reduced from original ~28GB)\n- **License**: MIT\n- **Use Case**: Cybersecurity training and best practices chatbot\n\n## Intended Use\n\nThis model is specifically fine-tuned and optimized for:\n- Answering cybersecurity questions\n- Providing security best practices\n- Explaining phishing, malware, and other threats\n- Guiding on password security and data protection\n- Incident response guidance\n\n## Performance\n\n- **RAM Required**: 4-6GB\n- **CPU Compatible**: Yes\n- **Inference Speed**: 15-20 tokens/second on modern CPUs\n- **Context Length**: 4096 tokens\n\n## Usage\n\n### With llama.cpp\n\n```bash\n# Download the model\nwget https://huggingface.co/daskalos-apps/phi4-cybersec-Q4_K_M/resolve/main/phi4-mini-instruct-Q4_K_M.gguf\n\n# Run with llama.cpp\n./main -m phi4-mini-instruct-Q4_K_M.gguf -p \"What is phishing?\" -n 256\n```\n\n### With Python (llama-cpp-python)\n\n```python\nfrom llama_cpp import Llama\n\n# Load model\nllm = Llama(\n    model_path=\"phi4-mini-instruct-Q4_K_M.gguf\",\n    n_ctx=4096,\n    n_threads=8,\n    n_gpu_layers=0  # CPU only\n)\n\n# Generate\nresponse = llm(\n    \"What are the best practices for password security?\",\n    max_tokens=256,\n    temperature=0.7,\n    stop=[\"<|end|>\", \"<|user|>\"]\n)\n\nprint(response['choices'][0]['text'])\n```\n\n### With LangChain\n\n```python\nfrom langchain.llms import LlamaCpp\n\nllm = LlamaCpp(\n    model_path=\"phi4-mini-instruct-Q4_K_M.gguf\",\n    temperature=0.7,\n    max_tokens=256,\n    n_ctx=4096\n)\n\nresponse = llm(\"How do I identify suspicious emails?\")\nprint(response)\n```\n\n## Prompt Format\n\nThe model uses the Phi-4 instruct chat format:\n\n```\n<|system|>\nYou are a cybersecurity expert assistant.\n<|end|>\n<|user|>\nWhat is malware?\n<|end|>\n<|assistant|>\n```\n\n## Quantization Details\n\nThis model was quantized using llama.cpp with the following process:\n\n1. Original model: microsoft/phi-4-mini-instruct\n2. Conversion: HF → GGUF format (FP16)\n3. 
Quantization: GGUF FP16 → Q4_K_M\n\nThe Q4_K_M quantization method provides:\n- 4-bit quantization with K-means\n- Mixed precision for important weights\n- ~75% size reduction\n- Minimal quality loss (<2% on benchmarks)\n\n## Limitations\n\n- Optimized for English language\n- May require fact-checking for critical security advice\n- Not suitable for generating security policies without review\n- Should not be sole source for incident response\n\n## Ethical Considerations\n\nThis model is intended to improve cybersecurity awareness and should be used responsibly:\n- Always verify critical security advice\n- Don't use for malicious purposes\n- Respect privacy and data protection laws\n- Consider cultural and organizational context\n\n## Citation\n\nIf you use this model, please cite:\n\n```bibtex\n@misc{phi4-cybersec-gguf,\n author = {daskalos-apps},\n title = {Phi-4 Cybersecurity Q4_K_M GGUF},\n year = {2025},\n publisher = {Hugging Face},\n url = {https://huggingface.co/daskalos-apps/phi4-cybersec-Q4_K_M}\n}\n```\n\n## Acknowledgments\n\n- Microsoft for the original Phi-4 model\n- llama.cpp team for quantization tools\n- The open-source community\n\n## Contact\n\nFor questions or issues: [tech@daskalos-apps.com]\n"}}},{"rowIdx":387,"cells":{"modelId":{"kind":"string","value":"oyvindbs/setfit-minister-private-person-nb-sbert-base"},"author":{"kind":"string","value":"oyvindbs"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:39:41Z","string":"2025-08-06T10:39:41Z"},"downloads":{"kind":"number","value":1,"string":"1"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"setfit"},"tags":{"kind":"list like","value":["setfit","safetensors","bert","sentence-transformers","text-classification","generated_from_setfit_trainer","Norway","Cabinet Ministers","no","nb","arxiv:2209.11055","base_model:NbAiLab/nb-sbert-base","base_model:finetune:NbAiLab/nb-sbert-base","region:us"],"string":"[\n \"setfit\",\n \"safetensors\",\n \"bert\",\n \"sentence-transformers\",\n \"text-classification\",\n \"generated_from_setfit_trainer\",\n \"Norway\",\n \"Cabinet Ministers\",\n \"no\",\n \"nb\",\n \"arxiv:2209.11055\",\n \"base_model:NbAiLab/nb-sbert-base\",\n \"base_model:finetune:NbAiLab/nb-sbert-base\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"createdAt":{"kind":"timestamp","value":"2025-06-30T08:51:40Z","string":"2025-06-30T08:51:40Z"},"card":{"kind":"string","value":"---\ntags:\n- setfit\n- sentence-transformers\n- text-classification\n- generated_from_setfit_trainer\n- Norway\n- Cabinet Ministers\nwidget: []\nmetrics:\n- accuracy\npipeline_tag: text-classification\nlibrary_name: setfit\ninference: true\nbase_model: NbAiLab/nb-sbert-base\nlanguage:\n- 'no'\n- nb\n---\n\n# Role: Private Person\n\nThis model has been trained on Facebook posts by Norwegian cabinet ministers of the Solberg governments (2013-2021). It was used in Karlsen, Kolltveit and Solheim (2025).\nThe posts were hand coded specifying different roles and purposes of the posts. \nBelow, we recreate the table 1 from the paper showing the five roles and four purposes. The model included here identifies posts where the cabinet ministers take the role of **Private Person**. \nThe setfit models that identify the other roles and purposes are available [here](https://huggingface.co/collections/oyvindbs/balancing-acts-the-communicative-roles-of-cabinet-ministers-68624b72c250c3cc1fd3ea14).\nIn the paper, we use one model for each purpose and each role. 
Each post can accordingly be ascribed to more than one purpose or role. \n\n| | Communicative purposes | | | |\n|------------------------------|-------------------------------|----------------------|-------------------|-----------------|\n| **Communicative roles** | Informing | Communication | Mobilizing | Branding |\n| Ministry head | | | | |\n| Cabinet member | | | | |\n| Party politician | | | | |\n| Individual politician | | | | |\n| *Private person* | | | | |\n\nThis is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification of Norwegian social media posts. It uses [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.\n\nIt has been trained using an efficient few-shot learning technique that involves:\n\n1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.\n2. Training a classification head with features from the fine-tuned Sentence Transformer.\n\n## Model Details\n\n### Model Description\n- **Model Type:** SetFit\n- **Sentence Transformer body:** [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base)\n- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance\n- **Maximum Sequence Length:** 75 tokens\n- **Number of Classes:** 1\n\n\n**Language:** \n* Norwegian (Bokmål)\n\n\n\n### Model Sources\n\n- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)\n- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)\n- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)\n\n## Uses\n\n### Direct Use for Inference\n\nFirst install the SetFit library:\n\n```bash\npip install setfit\n```\n\nThen you can load this model and run inference.\n\n```python\nfrom setfit import SetFitModel\n\n# Download from the 🤗 Hub\nmodel = SetFitModel.from_pretrained(\"oyvindbs/setfit-minister-private-person-nb-sbert-base\")\n# Run inference\npreds = model(\"I loved the spiderman movie!\")\n```\n\n\n\n\n\n\n\n\n\n## Training Details\n\n### Framework Versions\n- Python: 3.10.4\n- SetFit: 1.1.1\n- Sentence Transformers: 3.4.1\n- Transformers: 4.50.1\n- PyTorch: 2.5.1+cu118\n- Datasets: 2.19.0\n- Tokenizers: 0.21.0\n\n## Citation\n```bibtex\n@article{KarlsenKolltveitSolheim,\n author = {Karlsen, Rune and Kolltveit, Kristoffer and Solheim, Øyvind Bugge},\n title = {Balancing Acts: The communicative roles of cabinet ministers on social media},\n publisher = {Media and Communication},\n year = {2025}\n}\n```\n\n\n### BibTeX\n```bibtex\n@article{https://doi.org/10.48550/arxiv.2209.11055,\n doi = {10.48550/ARXIV.2209.11055},\n url = {https://arxiv.org/abs/2209.11055},\n author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},\n keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},\n title = {Efficient Few-Shot Learning Without Prompts},\n publisher = {arXiv},\n year = {2022},\n copyright = {Creative Commons Attribution 4.0 
International}\n}\n```\n\n\n\n\n\n"}}},{"rowIdx":388,"cells":{"modelId":{"kind":"string","value":"oyvindbs/setfit-minister-ministry-head-nb-sbert-base"},"author":{"kind":"string","value":"oyvindbs"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:35:31Z","string":"2025-08-06T10:35:31Z"},"downloads":{"kind":"number","value":2,"string":"2"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"setfit"},"tags":{"kind":"list like","value":["setfit","safetensors","bert","sentence-transformers","text-classification","generated_from_setfit_trainer","Norway","Cabinet Ministers","no","nb","arxiv:2209.11055","base_model:NbAiLab/nb-sbert-base","base_model:finetune:NbAiLab/nb-sbert-base","region:us"],"string":"[\n \"setfit\",\n \"safetensors\",\n \"bert\",\n \"sentence-transformers\",\n \"text-classification\",\n \"generated_from_setfit_trainer\",\n \"Norway\",\n \"Cabinet Ministers\",\n \"no\",\n \"nb\",\n \"arxiv:2209.11055\",\n \"base_model:NbAiLab/nb-sbert-base\",\n \"base_model:finetune:NbAiLab/nb-sbert-base\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"createdAt":{"kind":"timestamp","value":"2025-06-30T08:23:04Z","string":"2025-06-30T08:23:04Z"},"card":{"kind":"string","value":"---\ntags:\n- setfit\n- sentence-transformers\n- text-classification\n- generated_from_setfit_trainer\n- Norway\n- Cabinet Ministers\nwidget: []\nmetrics:\n- accuracy\npipeline_tag: text-classification\nlibrary_name: setfit\ninference: true\nbase_model: NbAiLab/nb-sbert-base\nlanguage:\n- 'no'\n- nb\n---\n\n# Role: Ministry Head\n\nThis model has been trained on Facebook posts by Norwegian cabinet ministers of the Solberg governments (2013-2021). It was used in Karlsen, Kolltveit and Solheim (2025).\nThe posts were hand coded specifying different roles and purposes of the posts. \nBelow, we recreate the table 1 from the paper showing the five roles and four purposes. The model included here identifies posts where the cabinet ministers take the role of **Ministry Head**. \nThe setfit models that identify the other roles and purposes are available [here](https://huggingface.co/collections/oyvindbs/balancing-acts-the-communicative-roles-of-cabinet-ministers-68624b72c250c3cc1fd3ea14).\nIn the paper, we use one model for each purpose and each role. Each post can accordingly be ascribed to more than one purpose or role. \n\n| | Communicative purposes | | | |\n|------------------------------|-------------------------------|----------------------|-------------------|-----------------|\n| **Communicative roles** | Informing | Communication | Mobilizing | Branding |\n| *Ministry head* | | | | |\n| Cabinet member | | | | |\n| Party politician | | | | |\n| Individual politician | | | | |\n| Private person | | | | |\n\nThis is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification of Norwegian social media posts. It uses [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.\n\nIt has been trained using an efficient few-shot learning technique that involves:\n\n1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.\n2. 
Training a classification head with features from the fine-tuned Sentence Transformer.\n\n## Model Details\n\n### Model Description\n- **Model Type:** SetFit\n- **Sentence Transformer body:** [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base)\n- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance\n- **Maximum Sequence Length:** 75 tokens\n- **Number of Classes:** 1\n\n\n**Language:** \n* Norwegian (Bokmål)\n\n\n\n### Model Sources\n\n- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)\n- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)\n- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)\n\n## Uses\n\n### Direct Use for Inference\n\nFirst install the SetFit library:\n\n```bash\npip install setfit\n```\n\nThen you can load this model and run inference.\n\n```python\nfrom setfit import SetFitModel\n\n# Download from the 🤗 Hub\nmodel = SetFitModel.from_pretrained(\"oyvindbs/setfit-minister-ministry-head-nb-sbert-base\")\n# Run inference\npreds = model(\"I loved the spiderman movie!\")\n```\n\n\n\n\n\n\n\n\n\n## Training Details\n\n### Framework Versions\n- Python: 3.10.4\n- SetFit: 1.1.1\n- Sentence Transformers: 3.4.1\n- Transformers: 4.50.1\n- PyTorch: 2.5.1+cu118\n- Datasets: 2.19.0\n- Tokenizers: 0.21.0\n\n## Citation\n```bibtex\n@article{KarlsenKolltveitSolheim,\n author = {Karlsen, Rune and Kolltveit, Kristoffer and Solheim, Øyvind Bugge},\n title = {Balancing Acts: The communicative roles of cabinet ministers on social media},\n publisher = {Media and Communication},\n year = {2025}\n}\n```\n\n\n### BibTeX\n```bibtex\n@article{https://doi.org/10.48550/arxiv.2209.11055,\n doi = {10.48550/ARXIV.2209.11055},\n url = {https://arxiv.org/abs/2209.11055},\n author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},\n keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},\n title = {Efficient Few-Shot Learning Without Prompts},\n publisher = {arXiv},\n year = {2022},\n copyright = {Creative Commons Attribution 4.0 International}\n}\n```\n\n\n\n\n\n"}}},{"rowIdx":389,"cells":{"modelId":{"kind":"string","value":"ekiprop/SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10"},"author":{"kind":"string","value":"ekiprop"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:34:55Z","string":"2025-08-06T10:34:55Z"},"downloads":{"kind":"number","value":62,"string":"62"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"peft"},"tags":{"kind":"list like","value":["peft","safetensors","base_model:adapter:roberta-base","lora","transformers","base_model:FacebookAI/roberta-base","base_model:adapter:FacebookAI/roberta-base","license:mit","region:us"],"string":"[\n \"peft\",\n \"safetensors\",\n \"base_model:adapter:roberta-base\",\n \"lora\",\n \"transformers\",\n \"base_model:FacebookAI/roberta-base\",\n \"base_model:adapter:FacebookAI/roberta-base\",\n \"license:mit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T10:20:32Z","string":"2025-08-06T10:20:32Z"},"card":{"kind":"string","value":"---\nlibrary_name: peft\nlicense: mit\nbase_model: roberta-base\ntags:\n- base_model:adapter:roberta-base\n- lora\n- 
transformers\nmetrics:\n- accuracy\nmodel-index:\n- name: SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10\n results: []\n---\n\n\n\n# SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10\n\nThis model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1943\n- Accuracy: 0.9461\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:------:|:-----:|:---------------:|:--------:|\n| 0.3966 | 0.0950 | 200 | 0.2108 | 0.9151 |\n| 0.295 | 0.1900 | 400 | 0.2018 | 0.9186 |\n| 0.2699 | 0.2850 | 600 | 0.2218 | 0.9197 |\n| 0.2397 | 0.3800 | 800 | 0.1849 | 0.9323 |\n| 0.2303 | 0.4751 | 1000 | 0.2436 | 0.9163 |\n| 0.2147 | 0.5701 | 1200 | 0.2028 | 0.9335 |\n| 0.2168 | 0.6651 | 1400 | 0.2050 | 0.9312 |\n| 0.2143 | 0.7601 | 1600 | 0.2165 | 0.9232 |\n| 0.2102 | 0.8551 | 1800 | 0.2060 | 0.9358 |\n| 0.2046 | 0.9501 | 2000 | 0.2101 | 0.9358 |\n| 0.2037 | 1.0451 | 2200 | 0.2132 | 0.9300 |\n| 0.1815 | 1.1401 | 2400 | 0.1969 | 0.9346 |\n| 0.1827 | 1.2352 | 2600 | 0.1962 | 0.9358 |\n| 0.18 | 1.3302 | 2800 | 0.2095 | 0.9392 |\n| 0.1792 | 1.4252 | 3000 | 0.1996 | 0.9381 |\n| 0.1792 | 1.5202 | 3200 | 0.2137 | 0.9369 |\n| 0.1788 | 1.6152 | 3400 | 0.1829 | 0.9335 |\n| 0.1674 | 1.7102 | 3600 | 0.2564 | 0.9209 |\n| 0.1709 | 1.8052 | 3800 | 0.2007 | 0.9358 |\n| 0.1806 | 1.9002 | 4000 | 0.1910 | 0.9392 |\n| 0.1756 | 1.9952 | 4200 | 0.2068 | 0.9369 |\n| 0.1632 | 2.0903 | 4400 | 0.1873 | 0.9289 |\n| 0.1532 | 2.1853 | 4600 | 0.2134 | 0.9404 |\n| 0.1528 | 2.2803 | 4800 | 0.2206 | 0.9312 |\n| 0.1485 | 2.3753 | 5000 | 0.1849 | 0.9450 |\n| 0.1558 | 2.4703 | 5200 | 0.2201 | 0.9381 |\n| 0.1491 | 2.5653 | 5400 | 0.2253 | 0.9369 |\n| 0.1616 | 2.6603 | 5600 | 0.1980 | 0.9346 |\n| 0.1428 | 2.7553 | 5800 | 0.2242 | 0.9381 |\n| 0.1462 | 2.8504 | 6000 | 0.2036 | 0.9392 |\n| 0.1474 | 2.9454 | 6200 | 0.2194 | 0.9392 |\n| 0.1389 | 3.0404 | 6400 | 0.2309 | 0.9335 |\n| 0.1169 | 3.1354 | 6600 | 0.2286 | 0.9381 |\n| 0.1316 | 3.2304 | 6800 | 0.1943 | 0.9461 |\n| 0.1477 | 3.3254 | 7000 | 0.1864 | 0.9427 |\n| 0.1289 | 3.4204 | 7200 | 0.1957 | 0.9461 |\n| 0.1263 | 3.5154 | 7400 | 0.2155 | 0.9438 |\n| 0.1333 | 3.6105 | 7600 | 0.2012 | 0.9450 |\n| 0.1369 | 3.7055 | 7800 | 0.2090 | 0.9404 |\n| 0.1342 | 3.8005 | 8000 | 0.2138 | 0.9415 |\n| 0.1391 | 3.8955 | 8200 | 0.2042 | 0.9438 |\n| 0.1363 | 3.9905 | 8400 | 0.1972 | 0.9438 |\n| 0.1216 | 4.0855 | 8600 | 0.2171 | 0.9415 |\n| 0.1178 | 4.1805 | 8800 | 0.2221 | 0.9415 |\n| 0.1223 | 4.2755 | 9000 | 0.2137 | 0.9415 |\n| 0.1247 | 4.3705 | 9200 | 0.2097 | 0.9438 |\n| 0.1191 | 4.4656 | 9400 | 0.2103 | 0.9438 |\n| 0.1177 | 4.5606 | 9600 | 0.2106 | 0.9427 |\n| 0.1207 | 4.6556 | 9800 | 0.2026 | 0.9427 |\n| 0.1141 | 4.7506 | 10000 | 0.2091 | 0.9438 |\n| 0.1223 | 4.8456 | 10200 | 0.2082 | 0.9450 |\n| 0.127 | 4.9406 | 10400 | 0.2075 | 0.9450 |\n\n\n### Framework versions\n\n- PEFT 0.16.0\n- 
Transformers 4.54.1\n- Pytorch 2.5.1+cu121\n- Datasets 4.0.0\n- Tokenizers 0.21.4"}}},{"rowIdx":390,"cells":{"modelId":{"kind":"string","value":"moaemilie/ft_llama_3_2-1B_stocks_RAG_model"},"author":{"kind":"string","value":"moaemilie"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:31:43Z","string":"2025-08-06T10:31:43Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","text-generation-inference","unsloth","llama","trl","en","license:apache-2.0","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"text-generation-inference\",\n \"unsloth\",\n \"llama\",\n \"trl\",\n \"en\",\n \"license:apache-2.0\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T10:31:26Z","string":"2025-08-06T10:31:26Z"},"card":{"kind":"string","value":"---\nbase_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- llama\n- trl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** moaemilie\n- **License:** apache-2.0\n- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit\n\nThis llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n"}}},{"rowIdx":391,"cells":{"modelId":{"kind":"string","value":"Thireus/GLM-4.5-THIREUS-Q5_0_R4-SPECIAL_SPLIT"},"author":{"kind":"string","value":"Thireus"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:28:58Z","string":"2025-08-06T10:28:58Z"},"downloads":{"kind":"number","value":4,"string":"4"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["gguf","arxiv:2505.23786","license:mit","endpoints_compatible","region:us","imatrix","conversational"],"string":"[\n \"gguf\",\n \"arxiv:2505.23786\",\n \"license:mit\",\n \"endpoints_compatible\",\n \"region:us\",\n \"imatrix\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-02T15:12:10Z","string":"2025-08-02T15:12:10Z"},"card":{"kind":"string","value":"---\nlicense: mit\n---\n## ⚠️ Cautionary Notice\n\nDue to changes in the GLM-4.5 PR, the GGUF files of this repository have changed. Any older versions of these GGUFs are no longer compatible with the latest version of `llama.cpp` and `ik_llama.cpp`. 
Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.\n\n- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).\n- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).\n\n**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** \n> 🔒 **Do not use these quantized models for production** \n> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**\n\nProceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.\n\n---\n\n# GLM-4.5\n\n## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?\n\nThis repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.\n\n- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite \n- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb \n- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections \n\n*tl;dr: Expand the details section below*\n
\n\n```\ncd ~\n\n# Make sure to install all ik_llama.cpp compilation dependencies...\napt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx\n\n# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases\ngit clone https://github.com/Thireus/ik_llama.cpp\ncd ik_llama.cpp\ngit pull\n# Build ik_llama.cpp\ncmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048\ncmake --build build --config Release -j16\ncd ..\n\n# Obtain Thireus' GGUF-Tool-Suite\ngit clone https://github.com/Thireus/GGUF-Tool-Suite\n\n# Download model quant mix from recipe file:\ncd GGUF-Tool-Suite\nrm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py\ncp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model\nmkdir -p kitchen && cd kitchen\n../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe\n\n# Launch ik_llama's llama-cli:\nulimit -n 99999 # Lifts \"too many open files\" limitation on Linux\n~/ik_llama.cpp/build/bin/llama-cli \\\n -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \\\n -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \\\n -ot \"blk\\.(3|4|5|6)\\.ffn_.*=CUDA0\" \\\n -ot \"blk\\.(7|8|9|10)\\.ffn_.*=CUDA1\" \\\n -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \\\n --main-gpu 0 \\\n -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|>\\n'\n```\n\n
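A note on the tensor-placement flags in the command above (my reading of them, not official documentation): `-ot` maps tensors whose names match a regex to a backend, so `"blk\.(3|4|5|6)\.ffn_.*=CUDA0"` pins the FFN tensors of layers 3–6 to the first GPU, the second rule does the same for layers 7–10 on the second GPU, and `exps=CPU` keeps the remaining expert tensors in system RAM. On a single-GPU machine, a reduced variant might look like the following untested sketch, assuming the same model shards:

```
~/ik_llama.cpp/build/bin/llama-cli \
  -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
  -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
  -ot "blk\.(3|4|5|6|7|8|9|10)\.ffn_.*=CUDA0" \
  -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
  --main-gpu 0
```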
\n\n---\n\n## ❓ Why does this Tool Suite exist?\n\n1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. \n2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. \n3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! \n\n---\n\n## 📊 How does it compare to other GGUFs?\n\nHere’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):\n\n![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg)\n\n> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ \n\nMore perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs \n\n---\n\n## 🚀 How do I get started?\n\nCheck out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:\n\n1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. \n - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases \n2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. \n - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples \n3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. \n4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. \n\n---\n\n## ✅ Supported Models\n\nSupported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.\n\n---\n\n## 🤷‍♂️ Will I release pre-cooked GGUF files?\n\nNo, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.\n\nInstead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.\n\nUsers who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. \n\n---\n\n## 📦 What’s in this repository?\n\n- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). 
This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. \n- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. \n- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. \n- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. \n\n---\n\n## 💡 Pro Tips\n\nYou can download the BF16 model version to quantize your own shards:\n\n```\nmkdir kitchen \necho '.*=bf16' > kitchen/bf16.recipe \ncd kitchen\n../quant_downloader.sh bf16.recipe \n```\n\nEnjoy optimized quantization! 🎉\n"}}},{"rowIdx":392,"cells":{"modelId":{"kind":"string","value":"Robo420/gemma-3n-e4b-bratwurst"},"author":{"kind":"string","value":"Robo420"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:23:41Z","string":"2025-08-06T10:23:41Z"},"downloads":{"kind":"number","value":44,"string":"44"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","gemma3n","image-text-to-text","text-generation-inference","unsloth","conversational","de","en","dataset:FreedomIntelligence/sharegpt-deutsch","license:gemma","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"gemma3n\",\n \"image-text-to-text\",\n \"text-generation-inference\",\n \"unsloth\",\n \"conversational\",\n \"de\",\n \"en\",\n \"dataset:FreedomIntelligence/sharegpt-deutsch\",\n \"license:gemma\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"image-text-to-text"},"createdAt":{"kind":"timestamp","value":"2025-08-05T15:37:38Z","string":"2025-08-05T15:37:38Z"},"card":{"kind":"string","value":"---\nbase_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- gemma3n\nlicense: gemma\nlanguage:\n- de\n- en\ndatasets:\n- FreedomIntelligence/sharegpt-deutsch\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** Robo420\n- **License:** Gemma\n- **Finetuned from model:** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit\n\nThis model was finetuned on a German dataset and should be better at staying coherent in German.\n\nThis gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\nNotice: I still had to pay for Google Colab, even though the Unsloth fine-tuning Colab says it works in free mode, since it kept running OOM when generating the final weights.\n\nGGUF versions will follow as soon as I get around to it."}}},{"rowIdx":393,"cells":{"modelId":{"kind":"string","value":"ACECA/lowMvM_209"},"author":{"kind":"string","value":"ACECA"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:22:44Z","string":"2025-08-06T10:22:44Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["safetensors","any-to-any","omega","omegalabs","bittensor","agi","license:mit","region:us"],"string":"[\n 
\"safetensors\",\n \"any-to-any\",\n \"omega\",\n \"omegalabs\",\n \"bittensor\",\n \"agi\",\n \"license:mit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"any-to-any"},"createdAt":{"kind":"timestamp","value":"2025-07-30T15:10:58Z","string":"2025-07-30T15:10:58Z"},"card":{"kind":"string","value":"---\nlicense: mit\ntags:\n- any-to-any\n- omega\n- omegalabs\n- bittensor\n- agi\n---\n\nThis is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.\n\nCheck out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).\n"}}},{"rowIdx":394,"cells":{"modelId":{"kind":"string","value":"causalyte/causalyte-hydra"},"author":{"kind":"string","value":"causalyte"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:20:14Z","string":"2025-08-06T10:20:14Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["license:apache-2.0","region:us"],"string":"[\n \"license:apache-2.0\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T10:20:14Z","string":"2025-08-06T10:20:14Z"},"card":{"kind":"string","value":"---\r\nlicense: apache-2.0\r\n---\r\n"}}},{"rowIdx":395,"cells":{"modelId":{"kind":"string","value":"ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed10"},"author":{"kind":"string","value":"ekiprop"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:18:15Z","string":"2025-08-06T10:18:15Z"},"downloads":{"kind":"number","value":57,"string":"57"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"peft"},"tags":{"kind":"list like","value":["peft","safetensors","base_model:adapter:roberta-base","lora","transformers","base_model:FacebookAI/roberta-base","base_model:adapter:FacebookAI/roberta-base","license:mit","region:us"],"string":"[\n \"peft\",\n \"safetensors\",\n \"base_model:adapter:roberta-base\",\n \"lora\",\n \"transformers\",\n \"base_model:FacebookAI/roberta-base\",\n \"base_model:adapter:FacebookAI/roberta-base\",\n \"license:mit\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T10:04:51Z","string":"2025-08-06T10:04:51Z"},"card":{"kind":"string","value":"---\nlibrary_name: peft\nlicense: mit\nbase_model: roberta-base\ntags:\n- base_model:adapter:roberta-base\n- lora\n- transformers\nmetrics:\n- accuracy\nmodel-index:\n- name: SST-2-HEURISTIC-Standard_LoRA-Q_V-seed10\n results: []\n---\n\n\n\n# SST-2-HEURISTIC-Standard_LoRA-Q_V-seed10\n\nThis model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1948\n- Accuracy: 0.9438\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy 
|\n|:-------------:|:------:|:-----:|:---------------:|:--------:|\n| 0.3836 | 0.0950 | 200 | 0.2142 | 0.9186 |\n| 0.2937 | 0.1900 | 400 | 0.2044 | 0.9151 |\n| 0.2704 | 0.2850 | 600 | 0.2178 | 0.9163 |\n| 0.2516 | 0.3800 | 800 | 0.2107 | 0.9335 |\n| 0.2471 | 0.4751 | 1000 | 0.2356 | 0.9255 |\n| 0.2373 | 0.5701 | 1200 | 0.2058 | 0.9232 |\n| 0.2332 | 0.6651 | 1400 | 0.1986 | 0.9243 |\n| 0.2282 | 0.7601 | 1600 | 0.2068 | 0.9335 |\n| 0.225 | 0.8551 | 1800 | 0.2028 | 0.9266 |\n| 0.2128 | 0.9501 | 2000 | 0.2077 | 0.9335 |\n| 0.2254 | 1.0451 | 2200 | 0.1908 | 0.9312 |\n| 0.1968 | 1.1401 | 2400 | 0.1942 | 0.9312 |\n| 0.2026 | 1.2352 | 2600 | 0.2113 | 0.9346 |\n| 0.194 | 1.3302 | 2800 | 0.2169 | 0.9312 |\n| 0.1915 | 1.4252 | 3000 | 0.1912 | 0.9358 |\n| 0.1891 | 1.5202 | 3200 | 0.2046 | 0.9358 |\n| 0.1973 | 1.6152 | 3400 | 0.1945 | 0.9312 |\n| 0.1865 | 1.7102 | 3600 | 0.2448 | 0.9289 |\n| 0.1911 | 1.8052 | 3800 | 0.2149 | 0.9346 |\n| 0.2001 | 1.9002 | 4000 | 0.1906 | 0.9335 |\n| 0.1854 | 1.9952 | 4200 | 0.2196 | 0.9346 |\n| 0.1818 | 2.0903 | 4400 | 0.1935 | 0.9369 |\n| 0.1749 | 2.1853 | 4600 | 0.2139 | 0.9335 |\n| 0.1755 | 2.2803 | 4800 | 0.2274 | 0.9358 |\n| 0.1728 | 2.3753 | 5000 | 0.2105 | 0.9392 |\n| 0.1709 | 2.4703 | 5200 | 0.2080 | 0.9404 |\n| 0.1732 | 2.5653 | 5400 | 0.2141 | 0.9312 |\n| 0.1832 | 2.6603 | 5600 | 0.2029 | 0.9381 |\n| 0.1666 | 2.7553 | 5800 | 0.1969 | 0.9358 |\n| 0.1594 | 2.8504 | 6000 | 0.1955 | 0.9381 |\n| 0.1718 | 2.9454 | 6200 | 0.1975 | 0.9300 |\n| 0.1565 | 3.0404 | 6400 | 0.2119 | 0.9300 |\n| 0.1497 | 3.1354 | 6600 | 0.2099 | 0.9392 |\n| 0.1642 | 3.2304 | 6800 | 0.2015 | 0.9358 |\n| 0.1623 | 3.3254 | 7000 | 0.1971 | 0.9404 |\n| 0.1544 | 3.4204 | 7200 | 0.1960 | 0.9415 |\n| 0.1539 | 3.5154 | 7400 | 0.2116 | 0.9369 |\n| 0.158 | 3.6105 | 7600 | 0.1984 | 0.9392 |\n| 0.1652 | 3.7055 | 7800 | 0.1859 | 0.9415 |\n| 0.153 | 3.8005 | 8000 | 0.1948 | 0.9438 |\n| 0.1591 | 3.8955 | 8200 | 0.1991 | 0.9438 |\n| 0.1533 | 3.9905 | 8400 | 0.2124 | 0.9404 |\n| 0.1482 | 4.0855 | 8600 | 0.2123 | 0.9415 |\n| 0.1468 | 4.1805 | 8800 | 0.2126 | 0.9415 |\n| 0.1467 | 4.2755 | 9000 | 0.2129 | 0.9392 |\n| 0.1448 | 4.3705 | 9200 | 0.2095 | 0.9438 |\n| 0.142 | 4.4656 | 9400 | 0.2119 | 0.9381 |\n| 0.1361 | 4.5606 | 9600 | 0.2172 | 0.9427 |\n| 0.1491 | 4.6556 | 9800 | 0.2070 | 0.9427 |\n| 0.1413 | 4.7506 | 10000 | 0.2060 | 0.9415 |\n| 0.1575 | 4.8456 | 10200 | 0.2056 | 0.9438 |\n| 0.1521 | 4.9406 | 10400 | 0.2066 | 0.9427 |\n\n\n### Framework versions\n\n- PEFT 0.16.0\n- Transformers 4.54.1\n- Pytorch 2.5.1+cu121\n- Datasets 4.0.0\n- Tokenizers 0.21.4"}}},{"rowIdx":396,"cells":{"modelId":{"kind":"string","value":"csukuangfj/sherpa-onnx-streaming-zipformer-fr-kroko-2025-08-06"},"author":{"kind":"string","value":"csukuangfj"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:17:56Z","string":"2025-08-06T10:17:56Z"},"downloads":{"kind":"number","value":0,"string":"0"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"null"},"tags":{"kind":"list like","value":["onnx","region:us"],"string":"[\n \"onnx\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"null"},"createdAt":{"kind":"timestamp","value":"2025-08-06T09:39:47Z","string":"2025-08-06T09:39:47Z"},"card":{"kind":"string","value":"See license at 
https://huggingface.co/Banafo/Kroko-ASR\n"}}},{"rowIdx":397,"cells":{"modelId":{"kind":"string","value":"cucucu666/smile-8.6"},"author":{"kind":"string","value":"cucucu666"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:15:19Z","string":"2025-08-06T10:15:19Z"},"downloads":{"kind":"number","value":6,"string":"6"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"diffusers"},"tags":{"kind":"list like","value":["diffusers","text-to-image","diffusers-training","lora","flux","flux-diffusers","template:sd-lora","base_model:black-forest-labs/FLUX.1-Fill-dev","base_model:adapter:black-forest-labs/FLUX.1-Fill-dev","license:other","region:us"],"string":"[\n \"diffusers\",\n \"text-to-image\",\n \"diffusers-training\",\n \"lora\",\n \"flux\",\n \"flux-diffusers\",\n \"template:sd-lora\",\n \"base_model:black-forest-labs/FLUX.1-Fill-dev\",\n \"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev\",\n \"license:other\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-to-image"},"createdAt":{"kind":"timestamp","value":"2025-08-06T08:20:12Z","string":"2025-08-06T08:20:12Z"},"card":{"kind":"string","value":"---\nbase_model: black-forest-labs/FLUX.1-Fill-dev\nlibrary_name: diffusers\nlicense: other\ninstance_prompt: Lego face, Lego style, smile expression, plain white background\nwidget:\n- text: Lego face, Lego style, smile expression, plain white background\n output:\n url: image_0.png\n- text: Lego face, Lego style, smile expression, plain white background\n output:\n url: image_1.png\n- text: Lego face, Lego style, smile expression, plain white background\n output:\n url: image_2.png\n- text: Lego face, Lego style, smile expression, plain white background\n output:\n url: image_3.png\ntags:\n- text-to-image\n- diffusers-training\n- diffusers\n- lora\n- flux\n- flux-diffusers\n- template:sd-lora\n---\n\n\n\n\n# Flux-Fill DreamBooth LoRA - cucucu666/smile-8.6\n\n\n\n## Model description\n\nThese are cucucu666/smile-8.6 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.\n\nThe weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).\n\nWas LoRA for the text encoder enabled? 
False.\n\n## Trigger words\n\nYou should use `Lego face, Lego style, smile expression, plain white background` to trigger the image generation.\n\n## Download model\n\n[Download the *.safetensors LoRA](cucucu666/smile-8.6/tree/main) in the Files & versions tab.\n\n## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)\n\n```py\nfrom diffusers import AutoPipelineForText2Image\nimport torch\npipeline = AutoPipelineForText2Image.from_pretrained(\"black-forest-labs/FLUX.1-dev\", torch_dtype=torch.bfloat16).to('cuda')\npipeline.load_lora_weights('cucucu666/smile-8.6', weight_name='pytorch_lora_weights.safetensors')\nimage = pipeline('Lego face, Lego style, smile expression, plain white background').images[0]\n```\n\nFor more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)\n\n## License\n\nPlease adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).\n\n\n## Intended uses & limitations\n\n#### How to use\n\n```python\n# TODO: add an example code snippet for running this diffusion pipeline\n```\n\n#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]\n\n## Training details\n\n[TODO: describe the data used to train the model]"}}},{"rowIdx":398,"cells":{"modelId":{"kind":"string","value":"exillarml/dental-assistant-llama3.2-1b"},"author":{"kind":"string","value":"exillarml"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:14:39Z","string":"2025-08-06T10:14:39Z"},"downloads":{"kind":"number","value":29,"string":"29"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"transformers"},"tags":{"kind":"list like","value":["transformers","safetensors","llama","text-generation","text-generation-inference","unsloth","conversational","en","license:apache-2.0","autotrain_compatible","endpoints_compatible","region:us"],"string":"[\n \"transformers\",\n \"safetensors\",\n \"llama\",\n \"text-generation\",\n \"text-generation-inference\",\n \"unsloth\",\n \"conversational\",\n \"en\",\n \"license:apache-2.0\",\n \"autotrain_compatible\",\n \"endpoints_compatible\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T09:55:43Z","string":"2025-08-06T09:55:43Z"},"card":{"kind":"string","value":"---\nbase_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- llama\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** exillarml\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit\n\nThis llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n"}}},{"rowIdx":399,"cells":{"modelId":{"kind":"string","value":"Cydonia01/llama2-medical-finetuned"},"author":{"kind":"string","value":"Cydonia01"},"last_modified":{"kind":"timestamp","value":"2025-08-06T10:13:07Z","string":"2025-08-06T10:13:07Z"},"downloads":{"kind":"number","value":4,"string":"4"},"likes":{"kind":"number","value":0,"string":"0"},"library_name":{"kind":"string","value":"peft"},"tags":{"kind":"list 
like","value":["peft","safetensors","medical","text-generation","en","dataset:aboonaji/wiki_medical_terms_llam2_format","base_model:NousResearch/Llama-2-7b-chat-hf","base_model:adapter:NousResearch/Llama-2-7b-chat-hf","region:us"],"string":"[\n \"peft\",\n \"safetensors\",\n \"medical\",\n \"text-generation\",\n \"en\",\n \"dataset:aboonaji/wiki_medical_terms_llam2_format\",\n \"base_model:NousResearch/Llama-2-7b-chat-hf\",\n \"base_model:adapter:NousResearch/Llama-2-7b-chat-hf\",\n \"region:us\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"createdAt":{"kind":"timestamp","value":"2025-08-06T09:36:33Z","string":"2025-08-06T09:36:33Z"},"card":{"kind":"string","value":"---\nbase_model: NousResearch/Llama-2-7b-chat-hf\nlibrary_name: peft\ndatasets:\n- aboonaji/wiki_medical_terms_llam2_format\nlanguage:\n- en\npipeline_tag: text-generation\ntags:\n- medical\n---\n\n# Model Card for llama2-medical-finetuned\n\n\n\n\n\n## Model Details\n\n### Model Description\n\nThis is a finetuned version of LLaMA 2 specialized for medical text understanding and generation tasks. It is designed to assist with medical data processing, clinical note summarization, and healthcare question answering.\n\n\n\n- **Developed by:** Cydonia01\n- **Shared by:** Cydonia01 on Hugging Face\n- **Model type:** Large Language Model (Transformer-based, quantized with BitsAndBytes 4-bit NF4)\n- **Language(s) (NLP):** English (primarily medical domain)\n- **Finetuned from model:** LLaMA 2 (Meta AI, base model: aboonaji/llama2finetune-v2)\n\n### Model Sources\n\n\n\n- **Repository:** https://huggingface.co/Cydonia01/llama2-medical-finetuned\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n- Medical text generation and summarization\n- Clinical decision support tools\n- Medical Q&A systems\n\n### Downstream Use\n\n\n\n- Integration into healthcare NLP pipelines\n- Training further domain-specific models\n\n### Out-of-Scope Use\n\n\n\n- Not intended for direct diagnostic or treatment decision-making without expert review\n- Should not be used for generating legally binding medical advice\n\n## Bias, Risks, and Limitations\n\n\n\n- The model may reflect biases present in training data from medical literature and may generate incorrect or outdated medical information.\n- Not a substitute for professional medical advice or diagnosis.\n- Users should verify outputs with medical professionals.\n\n### Recommendations\n\n\n\nUsers should exercise caution when deploying the model in real-world medical scenarios and combine its outputs with expert validation.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"Cydonia01/llama2-medical-finetuned\")\nmodel = AutoModelForCausalLM.from_pretrained(\"Cydonia01/llama2-medical-finetuned\")\n\ninput_text = \"Explain the symptoms of diabetes.\"\ninputs = tokenizer(input_text, return_tensors=\"pt\")\noutputs = model.generate(**inputs)\nprint(tokenizer.decode(outputs[0]))\n\n```\n\n## Training Details\n\n### Training Data\n\n\n\nCurated dataset of medical texts including wiki medical terms dataset (aboonaji/wiki_medical_terms_llam2_format).\n\n### Training Procedure\n\n\n\nFinetuned from aboonaji/llama2finetune-v2 base model using 4-bit quantization with BitsAndBytes (NF4), using PEFT LoRA method for parameter-efficient tuning. 
The training employed causal language modeling.\n\n#### Training Hyperparameters\n\n- Batch size: 1 (per device) with gradient accumulation of 4\n- Max steps: 100\n- LoRA config: r=16, alpha=16, dropout=0.1\n\n## Environmental Impact\n\n<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->\n\n- **Hardware Type:** NVIDIA Tesla T4 GPU (Google Colab)\n- **Hours used:** Approximately 0.75 hours (45 minutes)\n- **Cloud Provider:** Google Colab\n\n## Technical Specifications\n\n### Model Architecture and Objective\n\nLLaMA 2 base model finetuned with causal language modeling, quantized to 4-bit precision using NF4 quantization for efficiency, with LoRA PEFT fine-tuning.\n\n### Compute Infrastructure\n\nTraining was conducted on Google Colab’s cloud environment, utilizing accessible GPU resources optimized for research and experimentation. The setup leverages efficient quantization and parameter-efficient fine-tuning techniques to minimize compute requirements.\n\n#### Hardware\n\nNVIDIA Tesla T4 GPU with 16 GB VRAM, supporting mixed precision (float16) and 4-bit quantization via BitsAndBytes library.\n\n#### Software\n\n- PyTorch\n- Transformers (Hugging Face)\n- PEFT (LoRA)\n- BitsAndBytes (4-bit quantization)\n- Datasets (Hugging Face)\n\n### Framework versions\n\n- PEFT 0.13.2\n- Transformers (compatible version with PEFT)\n- PyTorch (compatible with float16 and 4-bit quantization)"}}}]
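As a rough illustration of the LoRA configuration listed above (r=16, alpha=16, dropout=0.1 on the aboonaji/llama2finetune-v2 base named in the card), a minimal PEFT sketch might look like the following; the quantization setup and any arguments beyond r, alpha, and dropout are assumptions for illustration, not details taken from the card:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model named in the card; load arguments are kept minimal here.
base_model = AutoModelForCausalLM.from_pretrained("aboonaji/llama2finetune-v2")

lora_config = LoraConfig(
    r=16,              # rank, as stated in the card
    lora_alpha=16,     # alpha, as stated in the card
    lora_dropout=0.1,  # dropout, as stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # shows how few parameters LoRA actually trains
```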
Columns: modelId (string), author (string), last_modified (timestamp[us, tz=UTC]), downloads (int64), likes (int64), library_name (string), tags (list), pipeline_tag (string), createdAt (timestamp[us, tz=UTC]), card (string)
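Given this schema, the rows below can also be pulled programmatically; a minimal sketch using the `datasets` library (the dataset id comes from the page metadata, and the filter is illustrative):

```python
from datasets import load_dataset

# Load the model-card metadata table and keep rows that actually have a card.
ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")
with_cards = ds.filter(lambda row: row["card"] is not None)
print(with_cards[0]["modelId"], with_cards[0]["downloads"])
```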
lmstudio-community/Qwen3-4B-Instruct-2507-MLX-5bit
lmstudio-community
2025-08-06T14:37:58Z
38
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "mlx", "conversational", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:quantized:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "region:us" ]
text-generation
2025-08-06T14:37:29Z
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Instruct-2507
---

## 💫 Community Model> Qwen3-4B-Instruct-2507 by Qwen

_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.

**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507)<br>
**MLX quantization**: provided by [LM Studio team](https://x.com/lmstudio) using [mlx_lm](https://github.com/ml-explore/mlx-lm)<br>

## Technical Details

5-bit quantized version of Qwen3-4B-Instruct-2507 using MLX, optimized for Apple Silicon.

## Special thanks

🙏 Special thanks to the [Apple Machine Learning Research](https://github.com/ml-explore) team for creating [MLX](https://github.com/ml-explore/mlx).

## Disclaimers

LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
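As a quick usage note, an MLX quant like this one is typically run through mlx_lm on Apple Silicon; a minimal sketch (the prompt text is illustrative):

```python
from mlx_lm import load, generate

# Requires Apple Silicon; downloads the 5-bit MLX weights from the Hub.
model, tokenizer = load("lmstudio-community/Qwen3-4B-Instruct-2507-MLX-5bit")

messages = [{"role": "user", "content": "Give me a one-line summary of MLX."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```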
silverlife/pi0_deerbaby_open_door
silverlife
2025-08-06T14:32:42Z
0
0
null
[ "robotics", "en", "dataset:silverlife/open_door", "base_model:lerobot/pi0", "base_model:finetune:lerobot/pi0", "license:apache-2.0", "region:us" ]
robotics
2025-08-06T13:42:08Z
---
license: apache-2.0
datasets:
- silverlife/open_door
language:
- en
base_model:
- lerobot/pi0
pipeline_tag: robotics
---

## Pi0 fine-tuned model to open doors

This model is fine-tuned for Aloha-like hardware to open a door.
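For context, LeRobot policies are usually loaded through their policy class; a hedged sketch, assuming a recent lerobot build exposes the pi0 policy under the path below (the import path and batch contents are assumptions, not taken from the card):

```python
import torch
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy  # assumed import path

policy = PI0Policy.from_pretrained("silverlife/pi0_deerbaby_open_door")
policy.eval()

# Batch layout depends on the robot's camera/state configuration; shapes are placeholders.
batch = {
    "observation.state": torch.zeros(1, 14),
    "observation.images.top": torch.zeros(1, 3, 480, 640),
    "task": ["open the door"],
}
with torch.no_grad():
    action = policy.select_action(batch)
```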
Butanium/simple-stories-2L16H128D-attention-only-toy-transformer
Butanium
2025-08-06T14:29:37Z
9
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-08-06T14:29:27Z
# 2-Layer 16-Head Attention-Only Transformer

This is a simplified transformer model with 2 attention layers and 16 attention heads, hidden size 128, designed for studying attention mechanisms in isolation.

## Architecture Differences from Vanilla Transformer

**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind

**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)

This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).

## Usage

```python
import torch.nn as nn
from transformers import LlamaConfig, PreTrainedModel

class AttentionOnlyTransformer(PreTrainedModel):
    config_class = LlamaConfig

    def __init__(self, config: LlamaConfig):
        super().__init__(config)
        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
        # AttentionLayer is the custom causal self-attention block defined in the repo.
        self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-2L16H128D-attention-only-toy-transformer')
```

## Training Data

The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
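To make the snippet above self-contained, here is one way the referenced `AttentionLayer` could look, matching the card's description (causal self-attention plus a residual connection, with no LayerNorm or MLP); this is a sketch consistent with the stated architecture, not necessarily the repo's exact implementation:

```python
import torch.nn as nn

class AttentionLayer(nn.Module):
    """Causal multi-head self-attention followed only by a residual connection."""

    def __init__(self, config):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            config.hidden_size, config.num_attention_heads, batch_first=True
        )

    def forward(self, hidden_states):
        seq_len = hidden_states.size(1)
        # Upper-triangular float mask enforces causal (left-to-right) attention.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(hidden_states.device)
        attn_out, _ = self.attn(hidden_states, hidden_states, hidden_states, attn_mask=mask)
        return hidden_states + attn_out  # residual connection; no LayerNorm, no MLP
```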
optimum-internal-testing/tiny-random-llava-next-mistral
optimum-internal-testing
2025-08-06T14:28:34Z
583
0
transformers
[ "transformers", "safetensors", "llava_next", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-08-06T14:01:56Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
DeusImperator/L3.3-Shakudo-70b_exl3_3.0bpw_H6
DeusImperator
2025-08-06T14:27:54Z
5
0
null
[ "safetensors", "llama", "base_model:Steelskull/L3.3-Shakudo-70b", "base_model:quantized:Steelskull/L3.3-Shakudo-70b", "license:llama3.3", "3-bit", "exl3", "region:us" ]
null
2025-08-05T17:57:37Z
---
license: llama3.3
base_model:
- Steelskull/L3.3-Shakudo-70b
---

# L3.3-Shakudo-70b - EXL3 3.0bpw H6

This is a 3bpw EXL3 quant of [Steelskull/L3.3-Shakudo-70b](https://huggingface.co/Steelskull/L3.3-Shakudo-70b).

This quant was made using exllamav3-0.0.5 with '--cal_cols 4096' (instead of the default 2048), which in my experience improves quant quality a bit.

3bpw fits in 32GB VRAM on Windows with around 18-20k Q8 context.

I tested this quant briefly in some random RPs (including ones over 8k and 16k context) and it seems to work fine.

## Prompt Templates

Uses Llama 3 Instruct format. Supports thinking with "\<thinking\>" prefill in assistant response.

### Original readme below

[The original Steelskull/L3.3-Shakudo-70b readme is an HTML/CSS-styled page; the raw markup is omitted here. See the original model page for the full card.]
0; left: 0; width: 8px; height: 100%; background: linear-gradient(180deg, #E55B00, #D4AF37); opacity: 0.7; } .base-model-dropdown[open] .base-model-summary, .base-model-dropdown[open] .base-model-list { border-color: rgba(229, 91, 0, 0.7); box-shadow: 0 0 25px rgba(229, 91, 0, 0.5); z-index: 20; position: relative; } /* Model description */ .model-description { margin-top: 30px; } .model-description h4 { margin-bottom: 15px; } .model-description p { margin-bottom: 20px; } .model-description ul { padding-left: 20px; margin-bottom: 20px; list-style: none; } .model-description li { margin-bottom: 8px; position: relative; padding-left: 15px; } .model-description li::before { content: '†'; position: absolute; left: 0; top: 0; color: #E55B00; text-shadow: 0 0 10px rgba(229, 91, 0, 0.7); } /* Template card */ .template-card { box-shadow: 0 0 30px rgba(229, 91, 0, 0.4); } .template-item { padding: 15px; margin-bottom: 15px; background: rgba(229, 91, 0, 0.1); border: 1px solid rgba(229, 91, 0, 0.3); position: relative; border-radius: 0; transition: all 0.3s ease; } .template-item:hover { background: rgba(229, 91, 0, 0.2); border-color: #E55B00; box-shadow: 0 0 15px rgba(229, 91, 0, 0.5); transform: translateY(-2px); } .template-content { display: flex; flex-direction: column; gap: 5px; } .template-link { display: flex; align-items: center; justify-content: space-between; font-weight: 600; color: #E55B00; text-shadow: 0 0 5px rgba(229, 91, 0, 0.5); padding: 5px; transition: all 0.3s ease; } .template-link:hover { text-shadow: 0 0 10px rgba(212, 175, 55, 0.7); transform: translateX(5px); } .link-arrow { font-weight: 600; transition: transform 0.3s ease; } .template-link:hover .link-arrow { transform: translateX(5px); } .template-author { font-size: 14px; color: rgba(224, 234, 224, 0.8); text-transform: uppercase; letter-spacing: 1px; } /* Settings card */ .settings-card { box-shadow: 0 0 30px rgba(229, 91, 0, 0.4); } .settings-header { margin-bottom: 15px; padding-bottom: 10px; border-bottom: 1px solid rgba(229, 91, 0, 0.4); position: relative; } .settings-header::after { content: ''; position: absolute; bottom: -1px; left: 0; width: 80px; height: 1px; background: #E55B00; box-shadow: 0 0 10px #E55B00; } .settings-content { padding: 15px; background: rgba(229, 91, 0, 0.1); border: 1px solid rgba(229, 91, 0, 0.3); margin-bottom: 15px; position: relative; border-radius: 0; } .settings-grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(250px, 1fr)); gap: 20px; margin-top: 20px; } .setting-item { display: flex; justify-content: space-between; align-items: center; margin-bottom: 10px; padding: 8px 0; border-bottom: 1px solid rgba(229, 91, 0, 0.2); } .setting-item:last-child { margin-bottom: 0; border-bottom: none; } .setting-label { color: #E0EAE0; font-size: 14px; font-weight: 500; text-transform: uppercase; letter-spacing: 1px; } .setting-value { color: #E55B00; font-weight: 600; font-family: 'Lora', serif; text-shadow: 0 0 5px rgba(229, 91, 0, 0.7); } .setting-item.highlight { padding: 15px; background: rgba(229, 91, 0, 0.2); border: 1px solid rgba(229, 91, 0, 0.4); border-radius: 0; display: flex; justify-content: center; position: relative; } .setting-item.highlight .setting-value { font-size: 24px; font-weight: 700; text-shadow: 0 0 10px rgba(229, 91, 0, 0.7), 0 0 20px rgba(229, 91, 0, 0.5); } /* Sampler Settings Section */ .sampler-settings { position: relative; overflow: visible; } .sampler-settings .settings-card { background: rgba(26, 26, 26, 0.8); border: 1px solid #E55B00; 
box-shadow: 0 0 20px rgba(229, 91, 0, 0.4), inset 0 0 30px rgba(229, 91, 0, 0.2); padding: 20px; margin: 15px 0; position: relative; } .sampler-settings .settings-header h3 { color: #E55B00; text-shadow: 0 0 8px rgba(229, 91, 0, 0.7); font-size: 1.2rem; letter-spacing: 1px; } .sampler-settings .settings-grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(250px, 1fr)); gap: 15px; } .sampler-settings .setting-item { border-bottom: 1px solid rgba(229, 91, 0, 0.3); padding: 12px 0; transition: all 0.3s ease; } .sampler-settings .setting-label { font-family: 'Lora', serif; font-weight: 600; color: #E0EAE0; } .sampler-settings .setting-value { font-family: 'Lora', serif; color: #E55B00; } /* DRY Settings styles */ .dry-settings { margin-top: 8px; padding-left: 8px; border-left: 2px solid rgba(229, 91, 0, 0.4); display: flex; flex-direction: column; gap: 6px; } .dry-item { display: flex; justify-content: space-between; align-items: center; } .dry-label { font-size: 13px; color: #E0EAE0; } .dry-value { color: #E55B00; font-family: 'Lora', serif; text-shadow: 0 0 5px rgba(229, 91, 0, 0.6); } /* Quantized sections */ .quantized-section { margin-bottom: 30px; } .quantized-items { display: grid; gap: 15px; margin-top: 15px; } .quantized-item { padding: 15px; background: rgba(229, 91, 0, 0.1); border: 1px solid rgba(229, 91, 0, 0.3); display: grid; gap: 8px; position: relative; border-radius: 0; transition: all 0.3s ease; } .quantized-item:hover { background: rgba(229, 91, 0, 0.2); border-color: #E55B00; box-shadow: 0 0 15px rgba(229, 91, 0, 0.5); transform: translateY(-2px); } .author { color: #E0EAE0; font-size: 12px; text-transform: uppercase; letter-spacing: 1px; font-weight: 500; } .multi-links { display: flex; align-items: center; flex-wrap: wrap; gap: 5px; } .separator { color: rgba(224, 234, 224, 0.5); margin: 0 5px; } /* Medieval Corners */ .corner { position: absolute; background:none; width:6em; height:6em; font-size:10px; opacity: 1.0; transition: opacity 0.3s ease-in-out; } .corner:after { position: absolute; content: ''; display: block; width:0.2em; height:0.2em; } /* New Progress Bar Design */ .new-progress-container { margin: 2rem 0; padding: 1.5rem; background: rgba(229, 91, 0, 0.05); border: 1px solid rgba(229, 91, 0, 0.2); position: relative; } .new-progress-container h3 { text-align: center; margin-bottom: 1.5rem; color: #E0EAE0; font-family: 'Cinzel Decorative', serif; } .main-progress-bar { width: 100%; height: 8px; background: rgba(229, 91, 0, 0.2); margin-bottom: 1.5rem; border-radius: 4px; overflow: hidden; border: 1px solid rgba(229, 91, 0, 0.3); } .main-progress-fill { height: 100%; background: linear-gradient(90deg, #E55B00, #D4AF37); box-shadow: 0 0 10px #E55B00; } .main-steps-container { display: flex; flex-direction: column; gap: 1rem; } .main-step { border: 1px solid rgba(229, 91, 0, 0.3); transition: all 0.3s ease; } .main-step[open] { background: rgba(229, 91, 0, 0.1); } .main-step summary { padding: 1rem; cursor: pointer; display: grid; grid-template-columns: auto 1fr auto; align-items: center; gap: 1rem; font-weight: 600; color: #E0EAE0; position: relative; } .main-step summary .arrow { width: 0; height: 0; border-left: 6px solid transparent; border-right: 6px solid transparent; border-top: 6px solid #E55B00; transition: transform 0.3s ease; } .main-step[open] summary .arrow { transform: rotate(180deg); } .main-step summary::-webkit-details-marker { display: none; } .step-title { font-family: 'Cinzel Decorative', serif; } .step-progress-bar { width: 
150px; height: 6px; background: rgba(224, 234, 224, 0.2); border-radius: 3px; overflow: hidden; } .step-progress-fill { height: 100%; background: #E55B00; } .sub-steps-list { list-style: none; padding: 0 1rem 1rem 1rem; margin: 0; } .sub-steps-list li { padding: 0.5rem 0; border-bottom: 1px solid rgba(229, 91, 0, 0.1); color: rgba(224, 234, 224, 0.7); } .sub-steps-list li:last-child { border-bottom: none; } .sub-steps-list li.completed { color: #E55B00; text-decoration: line-through; } .sub-steps-list li.current { color: #E0EAE0; font-weight: bold; } .topleft { top:1em; left:1em; -webkit-transform:rotate(360deg); transform:rotate(360deg); } .topright { top:1em; right:1em; -webkit-transform:rotate(90deg); transform:rotate(90deg); } .bottomleft { bottom:1em; left:1em; -webkit-transform:rotate(270deg); transform:rotate(270deg); } .bottomright { bottom:1em; right:1em; -webkit-transform:rotate(180deg); transform:rotate(180deg); } .variant:after { width:0.1em; height:0.1em; } .corner5:after { box-shadow: 0.2em 0em #D4AF37, 0.4em 0em #D4AF37, 0.6em 0em #D4AF37, 4.0em 0em #D4AF37, 4.2em 0em #D4AF37, 4.4em 0em #D4AF37, 4.6em 0em #D4AF37, 4.8em 0em #D4AF37, 5.2em 0em #D4AF37, 0em 0.2em #D4AF37, 0.8em 0.2em #D4AF37, 2.0em 0.2em #D4AF37, 2.2em 0.2em #D4AF37, 2.4em 0.2em #D4AF37, 2.6em 0.2em #D4AF37, 4.0em 0.2em #D4AF37, 0em 0.4em #D4AF37, 0.8em 0.4em #D4AF37, 2.0em 0.4em #D4AF37, 2.8em 0.4em #D4AF37, 4.0em 0.4em #D4AF37, 0em 0.6em #D4AF37, 2.0em 0.6em #D4AF37, 2.8em 0.6em #D4AF37, 3.4em 0.6em #D4AF37, 3.6em 0.6em #D4AF37, 4.0em 0.6em #D4AF37, 4.4em 0.6em #D4AF37, 0.2em 0.8em #D4AF37, 0.4em 0.8em #D4AF37, 0.6em 0.8em #D4AF37, 0.8em 0.8em #D4AF37, 1.0em 0.8em #D4AF37, 1.2em 0.8em #D4AF37, 1.4em 0.8em #D4AF37, 1.6em 0.8em #D4AF37, 2.0em 0.8em #D4AF37, 2.4em 0.8em #D4AF37, 2.6em 0.8em #D4AF37, 3.4em 0.8em #D4AF37, 4.0em 0.8em #D4AF37, 4.6em 0.8em #D4AF37, 2.0em 1.0em #D4AF37, 3.4em 1.0em #D4AF37, 4.0em 1.0em #D4AF37, 4.6em 1.0em #D4AF37, 0.8em 1.2em #D4AF37, 3.4em 1.2em #D4AF37, 4.2em 1.2em #D4AF37, 4.4em 1.2em #D4AF37, 0.8em 1.4em #D4AF37, 1.4em 1.4em #D4AF37, 1.6em 1.4em #D4AF37, 1.8em 1.4em #D4AF37, 2.0em 1.4em #D4AF37, 2.2em 1.4em #D4AF37, 2.4em 1.4em #D4AF37, 2.6em 1.4em #D4AF37, 3.4em 1.4em #D4AF37, 0.8em 1.6em #D4AF37, 1.4em 1.6em #D4AF37, 2.6em 1.6em #D4AF37, 3.4em 1.6em #D4AF37, 0.8em 1.8em #D4AF37, 2.0em 1.8em #D4AF37, 3.4em 1.8em #D4AF37, 0.2em 2.0em #D4AF37, 0.4em 2.0em #D4AF37, 0.8em 2.0em #D4AF37, 1.2em 2.0em #D4AF37, 1.4em 2.0em #D4AF37, 1.6em 2.0em #D4AF37, 2.0em 2.0em #D4AF37, 2.4em 2.0em #D4AF37, 2.6em 2.0em #D4AF37, 2.8em 2.0em #D4AF37, 3.0em 2.0em #D4AF37, 3.2em 2.0em #D4AF37, 0.2em 2.2em #D4AF37, 0.8em 2.2em #D4AF37, 2.0em 2.2em #D4AF37, 0.2em 2.4em #D4AF37, 0.8em 2.4em #D4AF37, 1.4em 2.4em #D4AF37, 2.6em 2.4em #D4AF37, 0.2em 2.6em #D4AF37, 0.8em 2.6em #D4AF37, 1.4em 2.6em #D4AF37, 1.6em 2.6em #D4AF37, 1.8em 2.6em #D4AF37, 2.0em 2.6em #D4AF37, 2.2em 2.6em #D4AF37, 2.6em 2.6em #D4AF37, 3.0em 2.6em #D4AF37, 3.2em 2.6em #D4AF37, 0.4em 2.8em #D4AF37, 0.6em 2.8em #D4AF37, 2.6em 2.8em #D4AF37, 3.4em 2.8em #D4AF37, 2.0em 3.0em #D4AF37, 2.6em 3.0em #D4AF37, 3.4em 3.0em #D4AF37, 2.0em 3.2em #D4AF37, 2.6em 3.2em #D4AF37, 3.4em 3.2em #D4AF37, 0.6em 3.4em #D4AF37, 0.8em 3.4em #D4AF37, 1.0em 3.4em #D4AF37, 1.2em 3.4em #D4AF37, 1.4em 3.4em #D4AF37, 1.6em 3.4em #D4AF37, 1.8em 3.4em #D4AF37, 2.8em 3.4em #D4AF37, 3.0em 3.4em #D4AF37, 3.2em 3.4em #D4AF37, 3.4em 3.4em #D4AF37, 0.6em 3.6em #D4AF37, 3.6em 3.6em #D4AF37, 0.6em 3.8em #D4AF37, 0em 4.0em #D4AF37, 0.2em 4.0em #D4AF37, 0.6em 4.0em #D4AF37, 
1.0em 4.0em #D4AF37, 0em 4.2em #D4AF37, 0.6em 4.2em #D4AF37, 1.2em 4.2em #D4AF37, 0em 4.4em #D4AF37, 0.6em 4.4em #D4AF37, 1.2em 4.4em #D4AF37, 0em 4.6em #D4AF37, 0.8em 4.6em #D4AF37, 1.0em 4.6em #D4AF37, 0em 4.8em #D4AF37, 0em 5.2em #D4AF37; } /* Ember animation */ .ember { position: fixed; bottom: -20px; width: 10px; height: 10px; background-color: #E55B00; border-radius: 50%; opacity: 0; animation: rise 10s infinite ease-in; box-shadow: 0 0 10px #E55B00, 0 0 20px #E55B00, 0 0 30px #D4AF37; pointer-events: none; } @keyframes rise { 0% { transform: translateY(0) translateX(0); opacity: 1; } 100% { transform: translateY(-100vh) translateX(var(--x-end)); opacity: 0; } } </style> </head> <body> <div id="ember-container"></div> <div class="container"> <div class="corner corner5 variant topleft"></div> <div class="corner corner5 variant topright"></div> <div class="corner corner5 variant bottomleft"></div> <div class="corner corner5 variant bottomright"></div> <div class="header"> <p style="font-size: 10px; text-align: center; padding: 5px; color: rgba(224, 234, 224, 1);">this is designed for Dark mode</p> <h1 class="debug-overflow">L3.3-Shakudo-70b</h1> </div> <div class="info"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/Y3_fED_Re3U1rd0jOPnAR.jpeg" alt="Shakudo Mascot"> <div class="creator-section"> <div class="corner corner5 variant topleft"></div> <div class="corner corner5 variant topright"></div> <div class="corner corner5 variant bottomleft"></div> <div class="corner corner5 variant bottomright"></div> <div class="creator-badge" style="display: flex; flex-wrap: wrap; align-items: center; gap: 1.5rem; justify-content: center;"> <div class="creator-info"> <span class="creator-label">Created by Steelskull</span> <a href="https://huggingface.co/Steelskull" target="_blank" class="creator-link"> <span class="creator-name">Steelskull</span> <span class="creator-arrow">→</span> </a> <a href="https://ko-fi.com/Y8Y0AO2XE" target="_blank" class="button" style="margin-top: 0.5rem; padding: 0.5rem 1rem;"> Support on Ko-fi </a> </div> </div> </div> <div class="sponsors-section"> <details class="sponsors-dropdown" open> <summary class="sponsors-summary"> <span class="sponsors-title">⚡ Top Sponsors</span> <span class="dropdown-icon">▼</span> </summary> <div style="padding: 15px;"> <h4 class="sponsors-title" style="padding-bottom: 10px; border-bottom: 1px solid rgba(229, 91, 0, 0.3); margin-bottom: 15px; color: #E55B00;">🏆 Top Supporters</h4> <div class="sponsors-list" style="border-top: none; padding: 0;"> <div class="sponsor-item"> <div class="sponsor-rank">#1</div> <img src="https://ko-fi.com/img/anon7.png?v=1" alt="joe" class="sponsor-img"> <div class="sponsor-name">joe</div> </div> <div class="sponsor-item"> <div class="sponsor-rank">#2</div> <img src="https://storage.ko-fi.com/cdn/useruploads/0f77ce5e-3d45-4b45-93e1-b93e74ef32ca_7408a132-232b-4bf4-9878-c483bd80d532.png" alt="Artus" class="sponsor-img"> <div class="sponsor-name">Artus</div> </div> <div class="sponsor-item"> <div class="sponsor-rank">#3</div> <img src="https://storage.ko-fi.com/cdn/useruploads/957890c9-c45b-4229-8837-bd802de0691d_586ce212-c05e-4e35-a808-4d278783dc33.png" alt="Buthayna" class="sponsor-img"> <div class="sponsor-name">Buthayna</div> </div> <div class="sponsor-item"> <div class="sponsor-rank">#4</div> <img src="https://storage.ko-fi.com/cdn/useruploads/b28597ab-a2e6-4b55-aad9-6b2794e68847_3a65f36e-76b4-4fac-bfef-08b43722e331.png" alt="Kistara" class="sponsor-img"> <div 
class="sponsor-name">Kistara</div> </div> <div class="sponsor-item"> <div class="sponsor-rank">#5</div> <img src="https://storage.ko-fi.com/cdn/useruploads/86d8e2d8-fbde-4347-8e40-71b3e8eb9e65.jpeg" alt="lizzieshinkickr" class="sponsor-img"> <div class="sponsor-name">lizzieshinkickr</div> </div> <div class="sponsor-item"> <div class="sponsor-rank">#6</div> <img src="https://storage.ko-fi.com/cdn/useruploads/f68fdafa-7b8e-4d2f-9eec-be99772f3f77_82e97a70-65ca-4608-983a-c1f28a67da41.png" alt="Mooth Dragoon" class="sponsor-img"> <div class="sponsor-name">Mooth Dragoon</div> </div> <div class="sponsor-item"> <div class="sponsor-rank">#7</div> <img src="https://storage.ko-fi.com/cdn/useruploads/5e126f2e-da62-41c6-9350-a2461fbad35c_2a3df41f-4481-4dc7-8f08-88f24da2e7a1.png" alt="JH2011" class="sponsor-img"> <div class="sponsor-name">JH2011</div> </div> <div class="sponsor-item"> <div class="sponsor-rank">#8</div> <img src="https://storage.ko-fi.com/cdn/useruploads/4b5adb19-7822-468b-a397-e5d56ac8fb72_08050f44-82b3-497c-84d4-d895c38089f1.png" alt="NarpasSword" class="sponsor-img"> <div class="sponsor-name">NarpasSword</div> </div> <div class="sponsor-item"> <div class="sponsor-rank">#9</div> <img src="https://storage.ko-fi.com/cdn/useruploads/8b9b831f-ea45-4ee7-8473-2c9c75e0c31c_1c95d276-c5ba-43fa-953a-6245fb25d284.png" alt="WeForgot" class="sponsor-img"> <div class="sponsor-name">WeForgot</div> </div> <div class="sponsor-item"> <div class="sponsor-rank">#10</div> <img src="https://ko-fi.com/img/anon2.png?v=1" alt="C8" class="sponsor-img"> <div class="sponsor-name">C8</div> </div> </div> </div> <p style="font-size: 12px; text-align: center; padding: 10px; color: rgba(224, 234, 224, 0.7);">If I forgot you please let me know, ko-fi doesent let me track it easily</p> <hr style="border: none; height: 1px; background-color: rgba(229, 91, 0, 0.3); margin: 20px 15px;"> <div class="sponsors-section" style="margin-top: 1rem; padding: 0 15px 15px;"> <h4 class="sponsors-title" style="padding-bottom: 10px; border-bottom: 1px solid rgba(229, 91, 0, 0.3); margin-bottom: 15px; color: #E55B00;">🤝 Valued Partners</h4> <div class="sponsors-list" style="border-top: none; padding: 0;"> <div class="sponsor-item"> <a href="https://nectar.ai" target="_blank" style="text-decoration: none;"> <img src="https://nectar.ai/assets/heart_logo.png" alt="Nectar.ai" class="sponsor-img" style="border-radius: 15px;"> <div class="sponsor-name">Nectar.ai</div> </a> </div> </div> </div> </details> </div> <div class="model-info"> <h2>Model Information</h2> <div class="info-card"> <div class="corner corner5 variant topleft"></div> <div class="corner corner5 variant topright"></div> <div class="corner corner5 variant bottomleft"></div> <div class="corner corner5 variant bottomright"></div> <div class="info-header"> <h3>L3.3-Shakudo-70b</h3> <div class="model-tags"> <span class="model-tag">Llama 3.3</span> <span class="model-tag">Multi-Stage Merge</span> <span class="model-tag">70b Parameters</span> <span class="model-tag">V0.8</span> </div> </div> <div class="model-composition"> <h4>Model Composition</h4> <ul class="composition-list"> <li> <details class="base-model-dropdown" data-merge-type="slerp"> <summary class="base-model-summary"> <strong>Final Merge:</strong>&nbsp;L3.3-Shakudo-70b <span class="dropdown-icon">▼</span> </summary> <div class="base-model-list"> <div class="base-model-item"><a href="https://huggingface.co/Steelskull/L3.3-M1-Hydrargyrum-70B" target="_blank" class="model-label">TheSkullery/L3.3-M1-Hydrargyrum-70B</a></div> 
<div class="base-model-item"><a href="https://huggingface.co/TheSkullery/L3.3-M2-Hydrargyrum-70B" target="_blank" class="model-label">TheSkullery/L3.3-M2-Hydrargyrum-70B</a></div> </div> </details> </li> <li> <details class="base-model-dropdown" data-merge-type="SCE"> <summary class="base-model-summary"> <strong>Model 1:</strong>&nbsp;L3.3-M1-Hydrargyrum-70B <span class="dropdown-icon">▼</span> </summary> <div class="base-model-list"> <div class="base-model-item"><a href="https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1" target="_blank" class="model-label">Sao10K/L3.1-70B-Hanami-x1</a></div> <div class="base-model-item"><a href="https://huggingface.co/TheDrummer/Anubis-70B-v1" target="_blank" class="model-label">TheDrummer/Anubis-70B-v1</a></div> <div class="base-model-item"><a href="https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4" target="_blank" class="model-label">ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4</a></div> <div class="base-model-item"><a href="https://huggingface.co/BeaverAI/Shimmer-70B-v1a" target="_blank" class="model-label">BeaverAI/Shimmer-70B-v1a</a></div> <div class="base-model-item"><a href="https://huggingface.co/TheDrummer/Fallen-Llama-3.3-70B-v1" target="_blank" class="model-label">TheDrummer/Fallen-Llama-3.3-70B-v1</a></div> </div> </details> </li> <li> <details class="base-model-dropdown" data-merge-type="Della"> <summary class="base-model-summary"> <strong>Model 2:</strong>&nbsp;L3.3-M2-Hydrargyrum-70B <span class="dropdown-icon">▼</span> </summary> <div class="base-model-list"> <div class="base-model-item"><a href="https://huggingface.co/Sao10K/Llama-3.3-70B-Vulpecula-r1" target="_blank" class="model-label">Sao10K/Llama-3.3-70B-Vulpecula-r1</a></div> <div class="base-model-item"><a href="https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1" target="_blank" class="model-label">Sao10K/70B-L3.3-Cirrus-x1</a></div> <div class="base-model-item"><a href="https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0" target="_blank" class="model-label">EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0</a></div> <div class="base-model-item"><a href="https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3" target="_blank" class="model-label">LatitudeGames/Wayfarer-Large-70B-Llama-3.3</a></div> <div class="base-model-item"><a href="https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3" target="_blank" class="model-label">Sao10K/L3.3-70B-Euryale-v2.3</a></div> </div> </details> </li> <li> <details class="base-model-dropdown" data-merge-type="Stock"> <summary class="base-model-summary"> <strong>Base Model:</strong>&nbsp;L3.3-Cogmoblated-70B <span class="dropdown-icon">▼</span> </summary> <div class="base-model-list"> <div class="base-model-item"><a href="https://huggingface.co/abacusai/Dracarys2-Llama-3.1-70B-Instruct" target="_blank" class="model-label">abacusai/Dracarys2-Llama-3.1-70B-Instruct</a></div> <div class="base-model-item"><a href="https://huggingface.co/watt-ai/watt-tool-70B" target="_blank" class="model-label">watt-ai/watt-tool-70B</a></div> <div class="base-model-item"><a href="https://huggingface.co/deepcogito/cogito-v1-preview-llama-70B" target="_blank" class="model-label">deepcogito/cogito-v1-preview-llama-70B</a></div> <div class="base-model-item"><a href="https://huggingface.co/TheDrummer/Anubis-70B-v1" target="_blank" class="model-label">TheDrummer/Anubis-70B-v1</a></div> <div class="base-model-item"><a href="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B" target="_blank" class="model-label">SicariusSicariiStuff/Negative_LLAMA_70B</a></div> <div 
class="base-model-item"><a href="https://huggingface.co/Ppoyaa/MythoNemo-L3.1-70B-v1.0" target="_blank" class="model-label">Ppoyaa/MythoNemo-L3.1-70B-v1.0</a></div> <div class="base-model-item"><a href="https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B" target="_blank" class="model-label">nbeerbower/Llama-3.1-Nemotron-lorablated-70B (Base)</a></div> </div> </details> </li> </ul> <div class="model-description"> <h4>Model Creation Process</h4> <p>L3.3-Shakudo-70b is the result of a multi-stage merging process by Steelskull, designed to create a powerful and creative roleplaying model with a unique flavor. The creation process involved several advanced merging techniques, including weight twisting, to achieve its distinct characteristics.</p> <h4>Stage 1: The Cognitive Foundation & Weight Twisting</h4> <p>The process began by creating a cognitive and tool-use focused base model, <strong>L3.3-Cogmoblated-70B</strong>. This was achieved through a `model_stock` merge of several models known for their reasoning and instruction-following capabilities. This base was built upon `nbeerbower/Llama-3.1-Nemotron-lorablated-70B`, a model intentionally "ablated" to skew refusal behaviors. This technique, known as weight twisting, helps the final model adopt more desirable response patterns by building upon a foundation that is already aligned against common refusal patterns.</p> <h4>Stage 2: The Twin Hydrargyrum - Flavor and Depth</h4> <p>Two distinct models were then created from the Cogmoblated base:</p> <ul> <li><strong>L3.3-M1-Hydrargyrum-70B:</strong> This model was merged using `SCE`, a technique that enhances creative writing and prose style, giving the model its unique "flavor." The Top_K for this merge were set at 0.22 .</li> <li><strong>L3.3-M2-Hydrargyrum-70B:</strong> This model was created using a `Della_Linear` merge, which focuses on integrating the "depth" of various roleplaying and narrative models. The settings for this merge were set at: (lambda: 1.1) (weight: 0.2) (density: 0.7) (epsilon: 0.2)</li> </ul> <h4>Final Stage: Shakudo</h4> <p>The final model, <strong>L3.3-Shakudo-70b</strong>, was created by merging the two Hydrargyrum variants using a 50/50 `nuslerp`. This final step combines the rich, creative prose (flavor) from the SCE merge with the strong roleplaying capabilities (depth) from the Della_Linear merge, resulting in a model with a distinct and refined narrative voice.</p> <p><strong>A special thank you to Nectar.ai for their generous support of the open-source community and my projects. 
</strong></p> <p><strong>Additionally, a heartfelt thanks to all the Ko-fi supporters who have contributed, your generosity is deeply appreciated and helps keep this work going and the Pods spinning.</strong> <p>-</p> </div> </div> </div> <!-- Add spacing here --> <div style="height: 40px;"></div> <!-- Sampler Settings Section --> <div class="section-container sampler-settings"> <div class="corner corner5 variant topleft"></div> <div class="corner corner5 variant topright"></div> <div class="corner corner5 variant bottomleft"></div> <div class="corner corner5 variant bottomright"></div> <h2>Recommended Sampler Settings</h2> <div class="settings-card"> <div class="settings-content"> <div class="settings-grid"> <div class="setting-item"> <span class="setting-label">Static Temperature:</span> <span class="setting-value">1.0 - 1.2</span> </div> <div class="setting-item"> <span class="setting-label">Min P:</span> <span class="setting-value">0.02 - 0.025</span> </div> <div class="setting-item"> <span class="setting-label">DRY:</span> <div class="dry-settings"> <div class="dry-item"> <span class="dry-label">- Multiplier:</span> <span class="dry-value">0.8</span> </div> <div class="dry-item"> <span class="dry-label">- Base:</span> <span class="dry-value">1.74</span> </div> <div class="dry-item"> <span class="dry-label">- Length:</span> <span class="dry-value">4-6</span> </div> </div> </div> </div> </div> </div> </div> <div class="section-container"> <div class="corner corner5 variant topleft"></div> <div class="corner corner5 variant topright"></div> <div class="corner corner5 variant bottomleft"></div> <div class="corner corner5 variant bottomright"></div> <h2>Good Starting Templates & Prompts</h2> <div class="template-card"> <div class="template-item"> <div class="template-content"> <a href="https://huggingface.co/CrucibleLab-TG/L3.3-NS-Dark-Ages-70b-v0.1/resolve/main/sysprompts/Hamon-v1.json" target="_blank" class="template-link"> Hamon v1 <span class="link-arrow">→</span> </a> <span class="template-author">by @Steel</span> > Big-picture storytelling guide with world-building focus, set dialogue/narration split, and general writing rules. </div> <div class="template-content"> <a href="https://huggingface.co/CrucibleLab-TG/L3.3-NS-Dark-Ages-70b-v0.1/blob/main/sysprompts/Shingane.json" target="_blank" class="template-link"> Shingane v1 <span class="link-arrow">→</span> </a> <span class="template-author">by @Steel</span> > Simplified sysprompt based on Hamon. </div> <div class="template-content"> <a href="https://huggingface.co/CrucibleLab-TG/L3.3-NS-Dark-Ages-70b-v0.1/blob/main/sysprompts/Kesshin-v1.json" target="_blank" class="template-link"> Kesshin v1 <span class="link-arrow">→</span> </a> <span class="template-author">by @Steel</span> > A Hamon rethink using a Character-focused sys prompt that tracks what characters know and how they learn things, with strict interaction rules. </div> <div class="template-content"> <a href="https://huggingface.co/CrucibleLab-TG/L3.3-NS-Dark-Ages-70b-v0.1/blob/main/sysprompts/Kamae-TTRPG-v1.json" target="_blank" class="template-link"> Kamae TTRPG v1 <span class="link-arrow">→</span> </a> <span class="template-author">by @Steel</span> > TTRPG Game Master framework emphasizing player agency, world consistency, and adaptive session management with mechanical integration. 
</div> <div class="template-content"> <a href="https://huggingface.co/CrucibleLab-TG/L3.3-NS-Dark-Ages-70b-v0.1/blob/main/sysprompts/Kamae-Lite-v1.json" target="_blank" class="template-link"> Kamae lite v1 <span class="link-arrow">→</span> </a> <span class="template-author">by @Steel</span> > Simplified sysprompt based on Kamae. </div> </div> </div> </div> </div> <!-- closes info --> <div class="support-section"> <div class="corner corner5 variant topleft"></div> <div class="corner corner5 variant topright"></div> <div class="corner corner5 variant bottomleft"></div> <div class="corner corner5 variant bottomright"></div> <h2>Support & Community:</h2> <div class="support-buttons"> <a href="https://discord.gg/4tCngSm3qZ" target="_blank" class="button"> Join Discord </a> </div> </div> </div> <!-- closes container --> <script> document.addEventListener('DOMContentLoaded', function() { const emberContainer = document.getElementById('ember-container'); if (!emberContainer) { console.error('Ember container not found'); return; } function createEmber() { const ember = document.createElement('div'); ember.classList.add('ember'); const startX = Math.random() * window.innerWidth; ember.style.left = `${startX}px`; const animationDuration = 5 + Math.random() * 5; // 5 to 10 seconds ember.style.animationDuration = `${animationDuration}s`; const size = 2 + Math.random() * 4; // 2px to 6px ember.style.width = `${size}px`; ember.style.height = `${size}px`; const xEnd = (Math.random() - 0.5) * 2 * 100; // -100px to 100px ember.style.setProperty('--x-end', `${xEnd}px`); emberContainer.appendChild(ember); setTimeout(() => { ember.remove(); }, animationDuration * 1000); } setInterval(createEmber, 200); // Create a new ember every 200ms }); </script> </body> </html>
h-grieve/blockassist-bc-bellowing_pouncing_horse_1754489901
h-grieve
2025-08-06T14:18:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing pouncing horse", "arxiv:2504.07091", "region:us" ]
null
2025-08-06T14:18:33Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing pouncing horse
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Cseti/wan2.2-14B-Arcane_Jinx-lora-v1
Cseti
2025-08-06T14:17:48Z
0
1
null
[ "text-to-video", "lora", "base_model:Wan-AI/Wan2.2-T2V-A14B", "base_model:adapter:Wan-AI/Wan2.2-T2V-A14B", "region:us" ]
text-to-video
2025-08-06T13:31:57Z
---
base_model:
- Wan-AI/Wan2.2-T2V-A14B
tags:
- text-to-video
- lora
widget:
- text: >-
    "[APPEARANCE] Nfj1nx wears a deep-cut, form-fitting black evening gown with
    a high slit, allowing ease of movement and a striking silhouette. Her long
    midnight-blue hair flows over one shoulder in polished waves. [ENVIRONMENT]
    A dimly lit, smoky salon draped in shadows and flickering amber light.
    Velvet armchairs, dark wooden décor, and heavy curtains define the
    atmosphere. Smoke curls through the air, catching beams of light from
    scattered wall lamps. Faint silhouettes shift in the background, hidden
    behind haze and shadow. [CUT 1] Action: Nfj1nx stands still in the middle
    of the salon, her pistol lowered at her side. Camera: Rapid arc shot
    circling from her front-left to back-right at waist level. [CUT 2] Action:
    She raises the pistol with a smooth, deliberate motion, arm fully extended
    and steady. Camera: Fast dolly-in from floor level toward the gun, then
    tilting up to catch her eyes."
  output:
    url: assets/test_00043.mp4
---

# wan 2.2 (14b T2V)

<Gallery />

## Inference

For inference I used ComfyUI.

**The strength of the LoRA can differ from prompt to prompt. As a best practice, I suggest always checking the high-model inference and adjusting the high-noise LoRA strength or the step count accordingly. It is usually optimal when the character features are just beginning to appear in the high-model inference but aren't prominent yet.**

**Trigger words**: Nfj1nx, blue hair

**Strength**: 0.6-1.2

## Training details

Trained only on videos.

### HIGH noise LoRA
- dataset: 30 videos, 480x270, 25/33/65/81-frame videos
- steps: 2130
- LR: 5e-5
- optimizer: AdamW Optimi
- rank: 32
- batch size: 1
- gradient accumulation steps: 1
- min_t = 0.875
- max_t = 1

### LOW noise LoRA
- dataset: 42 videos, 640x360, 25/33/65-frame videos
- steps: 2730
- LR: 5e-5
- optimizer: AdamW Optimi
- rank: 32
- batch size: 1
- gradient accumulation steps: 1
- min_t = 0
- max_t = 0.875

For training I used the diffusion-pipe repo.

***Important Notes:***

This LoRA is created as part of a fan project for research purposes only and is not intended for commercial use. It is based on the movies, which are protected by copyright. Users utilize the model at their own risk. Users are obligated to comply with copyright laws and applicable regulations. The model has been developed for non-commercial purposes, and it is not my intention to infringe on any copyright. I assume no responsibility for any damages or legal consequences arising from the use of the model.
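The card recommends ComfyUI, but for diffusers users here is a heavily hedged sketch of loading the LoRA at a strength in the recommended 0.6-1.2 range. The diffusers repo id, the LoRA file name, and the adapter name are assumptions, not taken from the card; the repo ships separate high- and low-noise LoRAs, so check the file list and adjust `weight_name`.

```python
# Hedged sketch: assumes a diffusers build with Wan 2.2 (T2V A14B) support
# and LoRA loading on the Wan pipeline. Repo id and weight_name are guesses.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers",  # assumed diffusers-format repo id
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights(
    "Cseti/wan2.2-14B-Arcane_Jinx-lora-v1",
    weight_name="high_noise_lora.safetensors",  # placeholder: check the repo's files
    adapter_name="jinx",
)
pipe.set_adapters(["jinx"], adapter_weights=[0.8])  # start inside the 0.6-1.2 range
pipe.to("cuda")

prompt = "Nfj1nx, blue hair, walks through a neon-lit alley at night"
video = pipe(prompt=prompt, num_frames=33).frames[0]
export_to_video(video, "jinx_test.mp4", fps=16)
```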
kerrlc/apicalling
kerrlc
2025-08-06T14:13:27Z
9
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-06T11:14:05Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
attila-fetchai/gpt-oss-20b-identity-run1
attila-fetchai
2025-08-06T14:12:26Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us" ]
null
2025-08-06T13:03:56Z
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-identity-run1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for gpt-oss-20b-identity-run1

This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="attila-fetchai/gpt-oss-20b-identity-run1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fetch-ai/experiment-1/runs/azhgyb6g)

This model was trained with SFT.

### Framework versions

- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
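The card states the model was trained with TRL's SFT but does not include the training script. For orientation, here is a minimal sketch of the kind of run that produces such a card; the dataset and hyperparameters are placeholders, not the ones actually used.

```python
# Illustrative SFT run with TRL, not the original training script.
# "trl-lib/Capybara" is a placeholder dataset from the TRL docs.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder

config = SFTConfig(
    output_dir="gpt-oss-20b-identity-run1",
    per_device_train_batch_size=1,  # placeholder hyperparameters
)
trainer = SFTTrainer(
    model="openai/gpt-oss-20b",  # TRL accepts a model id string here
    train_dataset=dataset,
    args=config,
)
trainer.train()
```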
BreeseBeat/blue
BreeseBeat
2025-08-06T14:09:36Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-06T14:09:35Z
---
license: apache-2.0
---
giovannidemuri/llama8b-er-afg-v59-seed2-hx
giovannidemuri
2025-08-06T14:07:19Z
2
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-06T12:13:34Z
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
model-index:
- name: llama8b-er-afg-v59-seed2-hx
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llama8b-er-afg-v59-seed2-hx

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
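For reference, the hyperparameter list above maps onto transformers `TrainingArguments` roughly as follows. This is a reconstruction from the auto-generated list, not the original training script.

```python
# Rough reconstruction of the listed hyperparameters; output_dir is assumed.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama8b-er-afg-v59-seed2-hx",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=2,
    gradient_accumulation_steps=2,  # gives the total train batch size of 8
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=2,
)
```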
hi-paris/ssml-text2breaks-fr-lora
hi-paris
2025-08-06T14:01:56Z
166
13
peft
[ "peft", "safetensors", "text-to-speech", "lora", "ssml", "qwen2.5", "text-generation", "fr", "base_model:Qwen/Qwen2.5-7B", "base_model:adapter:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
text-generation
2025-07-26T14:31:36Z
---
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
library_name: peft
language:
- fr
tags:
- text-to-speech
- lora
- peft
- ssml
- qwen2.5
pipeline_tag: text-generation
---

# 🗣️ French Text-to-Breaks LoRA Model

**hi-paris/ssml-text2breaks-fr-lora** is a LoRA adapter fine-tuned on Qwen2.5-7B to predict natural pause locations in French text by adding symbolic `<break/>` markers. This is the **first stage** of a two-step SSML cascade pipeline for improving French text-to-speech prosody control.

> 📄 **Paper**: *"Improving Synthetic Speech Quality via SSML Prosody Control"*
> **Authors**: Nassima Ould-Ouali, Awais Sani, Ruben Bueno, Jonah Dauvet, Tim Luka Horstmann, Eric Moulines
> **Conference**: ICNLSP 2025
> 🔗 **Demo & Audio Samples**: https://horstmann.tech/ssml-prosody-control/

## 🧩 Pipeline Overview

| Stage | Model | Purpose |
|-------|-------|---------|
| 1️⃣ | **hi-paris/ssml-text2breaks-fr-lora** | Predicts natural pause locations |
| 2️⃣ | [hi-paris/ssml-breaks2ssml-fr-lora](https://huggingface.co/hi-paris/ssml-breaks2ssml-fr-lora) | Converts breaks to full SSML with prosody |

## ✨ Example

**Input:**
```
Bonjour comment allez-vous aujourd'hui ?
```

**Output:**
```
Bonjour comment allez-vous aujourd'hui ?<break/>
```

## 🚀 Quick Start

### Installation

```bash
pip install torch transformers peft accelerate
```

### Basic Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B",
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "hi-paris/ssml-text2breaks-fr-lora")

# Prepare input
text = "Bonjour comment allez-vous aujourd'hui ?"
formatted_input = f"### Task:\nConvert text to SSML with pauses:\n\n### Text:\n{text}\n\n### SSML:\n"

# Generate
inputs = tokenizer(formatted_input, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        temperature=0.7,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id
    )

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
result = response.split("### SSML:\n")[-1].strip()
print(result)  # "Bonjour comment allez-vous aujourd'hui ?<break/>"
```

### Production Usage (Recommended)

For production use with memory optimization and the full cascade, see our [inference repository](https://github.com/TimLukaHorstmann/cascading_model):

```python
from text2breaks_inference import Text2BreaksInference

# Memory-efficient shared model approach
model = Text2BreaksInference()
result = model.predict("Bonjour comment allez-vous aujourd'hui ?")
```

## 🔧 Full Cascade Example

```python
from breaks2ssml_inference import CascadedInference

# Initialize full pipeline (memory efficient)
cascade = CascadedInference()

# Convert plain text directly to full SSML
text = "Bonjour comment allez-vous aujourd'hui ?"
ssml_output = cascade.predict(text)
print(ssml_output)
# Output: '<prosody pitch="+2.5%" rate="-1.2%" volume="-5.0%">Bonjour comment allez-vous aujourd'hui ?</prosody><break time="300ms"/>'
```

## 🧠 Model Details

- **Base Model**: [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **LoRA Rank**: 8, Alpha: 16
- **Target Modules**: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- **Training**: 5 epochs, batch size 1 with gradient accumulation
- **Language**: French
- **Model Size**: 7B parameters (LoRA adapter: ~81MB)
- **License**: Apache 2.0

## 📊 Performance

The model achieves high accuracy in predicting natural pause locations in French text, contributing to improved prosody in text-to-speech synthesis when combined with the second-stage model.

## 🔗 Resources

- **Full Pipeline Code**: https://github.com/TimLukaHorstmann/cascading_model
- **Interactive Demo**: [Colab Notebook](https://colab.research.google.com/drive/1bFcbJQY9OuY0_zlscqkf9PIgd3dUrIKs?usp=sharing)
- **Stage 2 Model**: [hi-paris/ssml-breaks2ssml-fr-lora](https://huggingface.co/hi-paris/ssml-breaks2ssml-fr-lora)

## 📖 Citation

```bibtex
@inproceedings{ould-ouali2025_improving,
  title     = {Improving Synthetic Speech Quality via SSML Prosody Control},
  author    = {Ould-Ouali, Nassima and Sani, Awais and Bueno, Ruben and Dauvet, Jonah and Horstmann, Tim Luka and Moulines, Eric},
  booktitle = {Proceedings of the 8th International Conference on Natural Language and Speech Processing (ICNLSP)},
  year      = {2025},
  url       = {https://huggingface.co/hi-paris}
}
```

## 📜 License

Apache 2.0 License (same as the base Qwen2.5-7B model)
eilserion/modelq4
eilserion
2025-08-06T13:57:36Z
163
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-07-30T13:19:39Z
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** eilserion
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
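Since the repo is tagged `gguf` but the card gives no usage snippet, here is a hedged sketch of loading the export with llama-cpp-python. The exact `.gguf` file name inside the repo is an assumption; check the repo's file list and adjust the glob pattern.

```python
# Hedged sketch, not an official usage example for this repo.
# The filename glob is a guess based on the "q4" naming of the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="eilserion/modelq4",
    filename="*q4*.gguf",  # assumed pattern; match it to the actual file name
    n_ctx=4096,
)
out = llm("Q: What is Unsloth? A:", max_tokens=64)
print(out["choices"][0]["text"])
```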
Ivan512/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_rangy_porpoise
Ivan512
2025-08-06T13:54:37Z
101
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am burrowing_rangy_porpoise", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-30T10:24:32Z
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am burrowing_rangy_porpoise
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
gabrielloiseau/CALE-MBERT-en
gabrielloiseau
2025-08-06T13:53:32Z
10
0
sentence-transformers
[ "sentence-transformers", "safetensors", "modernbert", "sentence-similarity", "feature-extraction", "loss:ContrastiveLoss", "dataset:gabrielloiseau/CALE-SPCD", "base_model:answerdotai/ModernBERT-large", "base_model:finetune:answerdotai/ModernBERT-large", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-06T12:22:28Z
--- license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - loss:ContrastiveLoss base_model: answerdotai/ModernBERT-large pipeline_tag: sentence-similarity datasets: - gabrielloiseau/CALE-SPCD --- # CALE-MBERT-en This is a [sentence-transformers](https://www.SBERT.net) model: it maps occurrences of a word to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer # 1. Load CALE model model = SentenceTransformer("gabrielloiseau/CALE-MBERT-en") sentences = [ "the boy could easily <t>distinguish</t> the different note values", "the patient’s ability to <t>recognize</t> forms and shapes", "the government had refused to <t>recognize</t> their autonomy and existence as a state", ] # 2. Calculate embeddings embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # 3. Calculate the embedding similarities similarities = model.similarity(embeddings, embeddings) print(similarities) # tensor([[1.0000, 0.8725, 0.5957], # [0.8725, 1.0000, 0.5861], # [0.5957, 0.5861, 1.0000]]) ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'}) (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
visurg/LemonFM
visurg
2025-08-06T13:46:56Z
0
2
null
[ "arxiv:2503.19740", "license:apache-2.0", "region:us" ]
null
2025-03-18T14:31:13Z
--- license: apache-2.0 --- <div align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/67d9504a41d31cc626fcecc8/3a3PzXrxVk83LHC4zKyKD.png" /> </div> [📚 Paper](https://arxiv.org/abs/2503.19740) - [🤖 GitHub](https://github.com/visurg-ai/LEMON) This is the official Hugging Face repository for the paper [LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings](https://arxiv.org/abs/2503.19740). This repository provides open access to the *LemonFM* foundation model. For the *LEMON* dataset and our code, please see our GitHub repository at [🤖 GitHub](https://github.com/visurg-ai/LEMON). *LemonFM* is an image foundation model for surgery: it receives an image as input and produces a 1536-dimensional feature vector as output. If you use our dataset, model, or code in your research, please cite our paper: ``` @misc{che2025lemonlargeendoscopicmonocular, title={LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings}, author={Chengan Che and Chao Wang and Tom Vercauteren and Sophia Tsoka and Luis C. Garcia-Peraza-Herrera}, year={2025}, eprint={2503.19740}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2503.19740}, } ``` Abstract -------- Traditional open-access datasets focusing on surgical procedures are often limited by their small size, typically consisting of fewer than 100 videos and less than 30 hours of footage, which leads to poor model generalization. To address this constraint, a new dataset called LEMON has been compiled using a novel aggregation pipeline that collects high-resolution videos from online sources. Featuring an extensive collection of over 4K surgical videos totaling 938 hours (85 million frames) of high-quality footage across multiple procedure types, LEMON offers a comprehensive resource surpassing existing alternatives in size and scope, including two novel downstream tasks. To demonstrate the effectiveness of this diverse dataset, we introduce LemonFM, a foundation model pretrained on LEMON using a novel self-supervised augmented knowledge distillation approach. LemonFM consistently outperforms existing surgical foundation models across four downstream tasks and six datasets, achieving significant gains in surgical phase recognition (+9.5pp, +9.4pp, and +8.4pp of Jaccard in AutoLaparo, M2CAI16, and Cholec80), surgical action recognition (+4.4pp of mAP in CholecT50), surgical tool presence detection (+5.3pp and +10.2pp of mAP in Cholec80 and GraSP), and surgical semantic segmentation (+8.3pp of mDice in CholecSeg8k). LEMON and LemonFM will serve as foundational resources for the research community and industry, accelerating progress in developing autonomous robotic surgery systems and ultimately contributing to safer and more accessible surgical care worldwide.
How to run our LemonFM foundation model to extract features from your video frames ---------------------------------------------------------------------------------- ```python import numpy as np import torch from PIL import Image from model_loader import build_LemonFM # Load the pre-trained LemonFM model lemonfm = build_LemonFM(pretrained_weights = 'your path to the LemonFM') lemonfm.eval() lemonfm.to('cuda') # Load the image and convert it to a (1, 3, 224, 224) PyTorch tensor img_path = 'path/to/your/image.jpg' img = Image.open(img_path).convert('RGB') img = img.resize((224, 224)) img_tensor = torch.tensor(np.array(img), dtype=torch.float32).permute(2, 0, 1).unsqueeze(0).to('cuda') # Extract the 1536-dimensional feature vector with LemonFM with torch.no_grad(): outputs = lemonfm(img_tensor) ```
mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit
mlx-community
2025-08-06T13:45:30Z
1,317
8
mlx
[ "mlx", "safetensors", "qwen3_moe", "text-generation", "conversational", "base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct", "base_model:quantized:Qwen/Qwen3-Coder-30B-A3B-Instruct", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-07-31T15:00:51Z
--- library_name: mlx license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE pipeline_tag: text-generation tags: - mlx base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct --- # mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit This model [mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit) was converted to MLX format from [Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit
mlx-community
2025-08-06T13:40:37Z
667
6
mlx
[ "mlx", "safetensors", "qwen3_moe", "text-generation", "conversational", "base_model:Qwen/Qwen3-30B-A3B-Instruct-2507", "base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-07-29T20:31:09Z
--- library_name: mlx license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE pipeline_tag: text-generation tags: - mlx base_model: Qwen/Qwen3-30B-A3B-Instruct-2507 --- # mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit This model [mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit](https://huggingface.co/mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit) was converted to MLX format from [Qwen/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Qwen3-30B-A3B-Instruct-2507-4bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
joanna302/Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05
joanna302
2025-08-06T13:37:52Z
38
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "unsloth", "sft", "trl", "conversational", "base_model:unsloth/Qwen3-1.7B-Base", "base_model:finetune:unsloth/Qwen3-1.7B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-05T17:01:34Z
--- base_model: unsloth/Qwen3-1.7B-Base library_name: transformers model_name: Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05 tags: - generated_from_trainer - unsloth - sft - trl licence: license --- # Model Card for Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05 This model is a fine-tuned version of [unsloth/Qwen3-1.7B-Base](https://huggingface.co/unsloth/Qwen3-1.7B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="joanna302/Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-1.7B-Base_zh_ar__alpaca_part_SFT_2e-05/runs/9b5y86ny) This model was trained with SFT. ### Framework versions - TRL: 0.20.0 - Transformers: 4.54.1 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
tabularisai/f5-tts-german-voice-clone
tabularisai
2025-08-06T13:36:50Z
5
2
null
[ "german", "voice-cloning", "f5tts", "text-to-speech", "de", "arxiv:2410.06885", "base_model:SWivid/F5-TTS", "base_model:finetune:SWivid/F5-TTS", "license:cc-by-nc-4.0", "region:us" ]
text-to-speech
2025-07-29T16:48:00Z
--- language: - de base_model: - SWivid/F5-TTS license: cc-by-nc-4.0 pipeline_tag: text-to-speech tags: - german - voice-cloning - f5tts --- # F5-TTS German Fine-tuned Model [![Model: F5-TTS](https://img.shields.io/badge/Model-F5--TTS-blue)](https://github.com/SWivid/F5-TTS) [![Language: German](https://img.shields.io/badge/Language-German-red)](https://en.wikipedia.org/wiki/German_language) [![Hugging Face](https://img.shields.io/badge/🤗-Hugging%20Face-yellow)](https://huggingface.co/tabularisai/f5-tts-german-voice-clone) > **⚠️ Work in Progress**: This model is still under development and optimization. We are actively seeking feedback from the community to improve its performance. Please share your experiences, issues, and suggestions! ## Model Description This is a German fine-tuned version of the F5-TTS (Flow Matching) model, specifically trained on German voice datasets. F5-TTS is a diffusion-transformer based text-to-speech system that uses flow matching for high-quality, natural-sounding speech synthesis. ### Key Features - **Language**: German text-to-speech synthesis - **Architecture**: DiT (Diffusion Transformer) with ConvNeXt V2 - **Sample Rate**: 24 kHz - **Vocoder**: Vocos for high-quality audio generation - **Tokenization**: Custom character-level tokenization for German text ### Model Details - **Base Model**: F5TTS_v1_Base - **Fine-tuning Dataset**: Combined German voice dataset with character-level tokenization - **Training Steps**: ~298,000 steps - **Vocabulary Size**: 2,546 characters - **Model Size**: ~1.3GB (inference-optimized) ## Installation ```bash # Install F5-TTS pip install f5-tts # Or install from source for latest features git clone https://github.com/SWivid/F5-TTS.git cd F5-TTS pip install -e . ``` ## Usage ### Quick Start with Hugging Face Hub ```python import torch import torchaudio from f5_tts.api import F5TTS from huggingface_hub import hf_hub_download # Download model files from Hugging Face model_file = hf_hub_download( repo_id="tabularisai/f5-tts-german-voice-clone", filename="model.pt" ) vocab_file = hf_hub_download( repo_id="tabularisai/f5-tts-german-voice-clone", filename="vocab.txt" ) # Initialize the German F5-TTS model f5tts = F5TTS( model="F5TTS_v1_Base", # Use the base architecture ckpt_file=model_file, # Downloaded model weights vocab_file=vocab_file, # German vocabulary device="cuda" if torch.cuda.is_available() else "cpu" ) # German text to synthesize text = "Hallo, ich bin ein deutsches Text-zu-Sprache-System. Wie kann ich Ihnen heute helfen?" # Reference audio ref_audio_path = "reference_german_voice.wav" ref_text = "Dies ist eine Referenzaufnahme für die Stimmenklonierung." 
# Generate speech audio, sample_rate, seed = f5tts.infer( gen_text=text, ref_file=ref_audio_path, ref_text=ref_text, remove_silence=True, file_wave="output_german.wav", ) ``` ### Advanced Usage ```python # For longer texts, you can use the advanced inference (works with both Hugging Face and local files) audio, sample_rate, seed = f5tts.infer( gen_text=text, ref_file=ref_audio_path, ref_text=ref_text, nfe_step=32, # Number of function evaluations (higher = better quality) cfg_strength=2.0, # Classifier-free guidance strength sway_sampling_coef=-1.0, # Sway sampling for better quality speed=1.0, # Generation speed (1.0 = normal speed) remove_silence=True, cross_fade_duration=0.15 # For smoother concatenation ) ``` ### Command Line Usage ```bash # Using the F5-TTS CLI with the German model f5-tts_infer-cli \ --model F5TTS_v1_Base \ --ckpt_file path/to/model.pt \ --vocab_file path/to/vocab.txt \ --ref_audio reference_german.wav \ --ref_text "Referenztext für die Stimme" \ --gen_text "Zu synthetisierender deutscher Text" \ --output_path output_german.wav ``` ### Voice Cloning The model supports voice cloning with German reference audio: ```python # Use a German reference voice ref_audio = "my_german_voice_sample.wav" ref_text = "Das ist ein Beispieltext meiner Stimme." # Clone the voice for new German text new_text = "Jetzt spreche ich mit der geklonten Stimme diesen neuen Text." audio, sr, seed = f5tts.infer(gen_text=new_text, ref_file=ref_audio, ref_text=ref_text) ``` ## Model Performance ### Supported Text Features - ✅ German characters and umlauts (ä, ö, ü, ß) - ✅ Numbers and punctuation - ✅ Special characters - ✅ Mixed case text - ⚠️ Limited support for non-German characters ### Audio Quality - **Sample Rate**: 24 kHz - **Bit Depth**: 16-bit - **Quality**: High-quality neural vocoding with Vocos - **Latency**: Real-time capable on modern GPUs ## Limitations and Known Issues - **Language Specific**: Optimized for German text only - **Training Data**: Limited to specific German voice datasets - **Accent Variation**: May not capture all German regional accents - **Performance**: Requires GPU for real-time inference - **Development Status**: Still in active development ## Contributing and Feedback **We need your help!** This model is still being refined and we're looking for: - 🗣️ **Audio Quality Feedback**: How does the generated speech sound? - 📝 **Text Handling**: Issues with specific German words or phrases? - 🐛 **Bug Reports**: Technical issues or errors - 💡 **Feature Requests**: What would make this model more useful? - 📊 **Performance Reports**: Speed and quality benchmarks - 🎯 **Use Case Examples**: How are you using this model? ### How to Provide Feedback 1. **GitHub Issues**: Report bugs or request features in the original F5-TTS repository 2. **Audio Samples**: Share problematic or excellent generation examples 3. **Benchmarks**: Compare with other German TTS systems 4.
**Documentation**: Help improve usage instructions ## Model Card | Property | Value | |----------|-------| | Language | German (Deutsch) | | Model Type | Text-to-Speech (Flow Matching) | | Architecture | DiT (Diffusion Transformer) | | Parameters | ~1B parameters | | Training Data | Combined German voice datasets | | Vocabulary | 2,546 character tokens | | Sample Rate | 24 kHz | ## Citation If you use this model in your research, please cite the original F5-TTS paper: ```bibtex @article{chen2024f5tts, title={F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching}, author={Chen, Yushen and others}, journal={arXiv preprint arXiv:2410.06885}, year={2024} } ``` ## Acknowledgments - Original F5-TTS team for the excellent framework - German voice dataset contributors - The open-source community for feedback and improvements ## Contact For questions, feedback, or collaboration: - Open an issue in the F5-TTS repository - Join the community discussions - Share your experiences with German TTS - `info@tabularis.ai` --- **Status**: 🚧 Under Development - Seeking Community Feedback 🚧
quanxuantruong/tqa-stage1-t5-full-3epoch-400k
quanxuantruong
2025-08-06T13:31:47Z
17
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-06T09:27:36Z
--- library_name: transformers license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: tqa-stage1-t5-full-3epoch-400k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tqa-stage1-t5-full-3epoch-400k This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.2
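The card above omits a usage snippet; since this is a flan-t5-base fine-tune served through `transformers`, a minimal sketch like the following should work. The prompt template is an assumption (the card does not document the expected input format), so treat it as a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text2text pipeline
qa = pipeline("text2text-generation", model="quanxuantruong/tqa-stage1-t5-full-3epoch-400k")

# Hypothetical prompt format; check the training scripts for the real template
prompt = "question: Who wrote Hamlet? context: Hamlet is a tragedy written by William Shakespeare."
print(qa(prompt, max_new_tokens=32)[0]["generated_text"])
```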
CREAD/meabh-lora-model
CREAD
2025-08-06T13:30:51Z
18
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-06T12:43:28Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: Meabh license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # Meabh-Lora-Model A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `Meabh` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
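For `diffusers` users, a minimal loading sketch is below. It assumes `load_lora_weights` can auto-detect the repository's safetensors file (pass `weight_name=...` if the repo holds several weight files); the prompt is a placeholder built around the documented `Meabh` trigger word:

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach this LoRA
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("CREAD/meabh-lora-model")

# "Meabh" is the trigger word that activates the trained concept
image = pipe("portrait photo of Meabh", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("meabh.png")
```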
roujin/SDGPA
roujin
2025-08-06T13:28:22Z
0
0
diffusers
[ "diffusers", "image-segmentation", "arxiv:2508.03300", "license:mit", "region:us" ]
image-segmentation
2025-07-09T15:51:06Z
--- license: mit pipeline_tag: image-segmentation library_name: diffusers --- # SDGPA: Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation Official implementation of the paper: [**Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation**](https://huggingface.co/papers/2508.03300) (IROS '25). Code: [https://github.com/roujin/SDGPA](https://github.com/roujin/SDGPA) <div align="center"> <img src="https://github.com/roujin/SDGPA/raw/main/poster_cvpr%20001.png" alt="SDGPA Overview" width="100%"/> </div> ## Abstract Deep learning-based semantic segmentation models achieve impressive results yet remain limited in handling distribution shifts between training and test data. In this paper, we present SDGPA (Synthetic Data Generation and Progressive Adaptation), a novel method that tackles zero-shot domain adaptive semantic segmentation, in which no target images are available, but only a text description of the target domain's style is provided. To compensate for the lack of target domain training data, we utilize a pretrained off-the-shelf text-to-image diffusion model, which generates training images by transferring source domain images to target style. Directly editing source domain images introduces noise that harms segmentation because the layout of source images cannot be precisely maintained. To address inaccurate layouts in synthetic data, we propose a method that crops the source image, edits small patches individually, and then merges them back together, which helps improve spatial precision. Recognizing the large domain gap, SDGPA constructs an augmented intermediate domain, leveraging easier adaptation subtasks to enable more stable model adaptation to the target domain. Additionally, to mitigate the impact of noise in synthetic data, we design a progressive adaptation strategy, ensuring robust learning throughout the training process. Extensive experiments demonstrate that our method achieves state-of-the-art performance in zero-shot semantic segmentation. ## Installation Environment setting: all of our experiments are conducted on an NVIDIA RTX 3090 with CUDA 11.8. ```bash source env.sh ``` ## Running You can find all the training scripts in the `scripts/` folder. We use the day $\to$ snow setting as an example. First, decide where you want to put the datasets. Let's denote it as `<data_root>` (for example: `/data3/roujin`). By default, the experimental logs are stored in `<data_root>`. Then, organize the folder as follows: ``` <data_root> └─ ACDC └─ gt └─ rgb_anon └─ cityscapes └─ gtFine └─ leftImg8bit └─ GTA5 └─ images └─ labels ``` You can refer to the Cityscapes and ACDC official websites for the datasets. For GTA5, as we only use a subset of it, we provide the following link to download the subset for your convenience: [https://huggingface.co/datasets/roujin/GTA5subset](https://huggingface.co/datasets/roujin/GTA5subset) For synthetic data generation: ```bash source img_gen/run.sh <data_root> snow ``` For progressive model adaptation: ```bash source scripts/snow.sh <data_root> ``` Evaluation: ```bash source eval.sh <data_root> <setting> ``` `<setting>` can be "day", "fog", "rain", "snow", "night", or "game". ## Evaluation Results We release the following results.
See all logs and checkpoints during training at [https://huggingface.co/roujin/SDGPA/tree/main](https://huggingface.co/roujin/SDGPA/tree/main) | Setting | Day→Night | Clear→Snow | Clear→Rain | Clear→Fog | Real→Game | | :--- | :--- | :--- | :--- | :--- | :--- | | results on paper | 26.9±0.8 | 47.4±0.7 | 48.6±0.8 | 58.8±0.7 | 43.4±0.4 | | our released | 27.6 | 46.8 | 49.0 | 59.8 | 43.1 | | checkpoint | [link](https://huggingface.co/roujin/SDGPA/blob/main/night2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/snow2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/rain2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/fog2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/game2/weights/weights_65.pth.tar) | We recommend reading the scripts and the paper for more details. For hyperparameter selection of InstructPix2Pix, we recommend reading: [https://huggingface.co/spaces/timbrooks/instruct-pix2pix/blob/main/README.md](https://huggingface.co/spaces/timbrooks/instruct-pix2pix/blob/main/README.md) ## Acknowledgements This code is built upon the following repositories: * [https://github.com/azuma164/ZoDi](https://github.com/azuma164/ZoDi) * [https://huggingface.co/timbrooks/instruct-pix2pix](https://huggingface.co/timbrooks/instruct-pix2pix) We thank them for their excellent work! ## Citation ```bibtex @misc{luo2025sdgpa, title={Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation}, author={Jun Luo and Zijing Zhao and Yang Liu}, year={2025}, eprint={2508.03300}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2508.03300}, } ```
null0101/distil-whisper-medium-ko-test
null0101
2025-08-06T13:21:42Z
2
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-06T13:20:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
phogen/gemma-3-4b-pt-00pct-lora-proposal
phogen
2025-08-06T13:17:37Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-06T13:17:33Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jacksss123/net72_uid209
Jacksss123
2025-08-06T13:13:12Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-06T13:08:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/future_ai_V1.1.250805-GGUF
mradermacher
2025-08-06T13:11:15Z
50
0
transformers
[ "transformers", "gguf", "en", "base_model:Futuresony/future_ai_V1.1.250805", "base_model:quantized:Futuresony/future_ai_V1.1.250805", "endpoints_compatible", "region:us" ]
null
2025-08-06T12:56:24Z
--- base_model: Futuresony/future_ai_V1.1.250805 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Futuresony/future_ai_V1.1.250805 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#future_ai_V1.1.250805-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q2_K.gguf) | Q2_K | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q3_K_S.gguf) | Q3_K_S | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q3_K_L.gguf) | Q3_K_L | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.IQ4_XS.gguf) | IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q5_K_S.gguf) | Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q5_K_M.gguf) | Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q6_K.gguf) | Q6_K | 7.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/future_ai_V1.1.250805-GGUF/resolve/main/future_ai_V1.1.250805.f16.gguf) | f16 | 18.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
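As a concrete starting point for the Usage section above, here is one way to fetch a single quant and run it locally. This sketch uses `huggingface_hub` plus the `llama-cpp-python` bindings rather than the llama.cpp CLI; the prompt and context size are placeholders:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the "fast, recommended" Q4_K_M quant from the table above
model_path = hf_hub_download(
    repo_id="mradermacher/future_ai_V1.1.250805-GGUF",
    filename="future_ai_V1.1.250805.Q4_K_M.gguf",
)

# Load the GGUF file and generate a short completion
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```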
buuduy1711/gemma-3-4b-it-tayson-vietnam
buuduy1711
2025-08-06T13:10:40Z
23
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-06T09:44:58Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** buuduy1711 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
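The card stops at the Unsloth attribution; for inference, a hedged sketch using the `transformers` image-text-to-text pipeline is below. The chat-message format is the generic pipeline format rather than anything documented for this fine-tune, and the image URL is a placeholder:

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="buuduy1711/gemma-3-4b-it-tayson-vietnam")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```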
spesrobotics/wire_pick_place_multi_view_act_expanded
spesrobotics
2025-08-06T13:10:09Z
13
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:spesrobotics/wire_pick_place_multi_view_expanded", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-06T02:44:14Z
--- datasets: spesrobotics/wire_pick_place_multi_view_expanded library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - act - robotics - lerobot --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
mradermacher/CATPLUG-Ti-GGUF
mradermacher
2025-08-06T13:07:00Z
85
0
transformers
[ "transformers", "gguf", "en", "base_model:yyy111yyy/CATPLUG-Ti", "base_model:quantized:yyy111yyy/CATPLUG-Ti", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-06T12:52:20Z
--- base_model: yyy111yyy/CATPLUG-Ti language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/yyy111yyy/CATPLUG-Ti <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#CATPLUG-Ti-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q2_K.gguf) | Q2_K | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q3_K_S.gguf) | Q3_K_S | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q3_K_M.gguf) | Q3_K_M | 1.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q3_K_L.gguf) | Q3_K_L | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.IQ4_XS.gguf) | IQ4_XS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q4_K_S.gguf) | Q4_K_S | 2.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q4_K_M.gguf) | Q4_K_M | 2.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q5_K_S.gguf) | Q5_K_S | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q5_K_M.gguf) | Q5_K_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q6_K.gguf) | Q6_K | 3.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.Q8_0.gguf) | Q8_0 | 3.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/CATPLUG-Ti-GGUF/resolve/main/CATPLUG-Ti.f16.gguf) | f16 | 7.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ACECA/lowMvM_212
ACECA
2025-08-06T12:58:08Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-07-30T15:11:00Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Butanium/simple-stories-1L8H256D-attention-only-toy-transformer
Butanium
2025-08-06T12:57:51Z
8
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-08-06T12:57:49Z
# 1-Layer 8-Head Attention-Only Transformer This is a simplified transformer model with 1 attention layer(s) and 8 attention head(s), hidden size 256, designed for studying attention mechanisms in isolation. ## Architecture Differences from Vanilla Transformer **Removed Components:** - **No MLP/Feed-Forward layers** - Only attention layers - **No Layer Normalization** - No LayerNorm before/after attention - **No positional encoding** - No position embeddings of any kind **Kept Components:** - Token embeddings - Multi-head self-attention with causal masking - Residual connections around attention layers - Language modeling head (linear projection to vocabulary) This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html). ## Usage ```python import torch.nn as nn from transformers import PreTrainedModel, LlamaConfig # NOTE: AttentionLayer (causal multi-head self-attention with a residual connection) is defined in the training repo class AttentionOnlyTransformer(PreTrainedModel): """Attention-only transformer with configurable number of attention layers.""" config_class = LlamaConfig def __init__(self, config: LlamaConfig): super().__init__(config) self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size) self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)]) self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs): batch_size, seq_len = input_ids.shape hidden_states = self.embed_tokens(input_ids) assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size) assert attention_mask.shape == (batch_size, seq_len) for layer in self.layers: hidden_states = layer(hidden_states, attention_mask) assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size) logits = self.lm_head(hidden_states) assert logits.shape == (batch_size, seq_len, self.config.vocab_size) loss = None if labels is not None: shift_logits = logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() loss_fct = nn.CrossEntropyLoss() loss = loss_fct( shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1) ) return {"loss": loss, "logits": logits} model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-1L8H256D-attention-only-toy-transformer') ``` ## Training Data The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
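A quick smoke test of the forward pass follows directly from the `forward` signature shown above; the random token ids and the all-ones attention mask are placeholders:

```python
import torch

# Random batch of token ids with a full (all-ones) attention mask
input_ids = torch.randint(0, model.config.vocab_size, (1, 16))
attention_mask = torch.ones_like(input_ids)

out = model(input_ids=input_ids, attention_mask=attention_mask)
print(out["logits"].shape)  # torch.Size([1, 16, vocab_size])
```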
EliovpAI/Qwen3-8B-FP8-KV
EliovpAI
2025-08-06T12:54:54Z
6
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "AMD", "ROCM", "VLLM", "Quark", "MI300x", "Quantized", "conversational", "base_model:Qwen/Qwen3-8B", "base_model:quantized:Qwen/Qwen3-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "fp8", "region:us" ]
text-generation
2025-08-06T12:49:52Z
---
metrics:
- perplexity
base_model:
- Qwen/Qwen3-8B
library_name: transformers
tags:
- AMD
- ROCM
- VLLM
- Quark
- MI300x
- Quantized
---
# Qwen3-8B-FP8-KV

## Introduction

This model was built by applying Quark with calibration samples from the Pile dataset to Qwen/Qwen3-8B.

## Quantization Strategy

- **Quantized Layers**: All linear layers excluding "lm_head", "*.mlp.experts.*"
- **Weight**: FP8 symmetric per-tensor
- **Activation**: FP8 symmetric per-tensor
- **KV Cache**: FP8 symmetric per-tensor

## Deployment

Quark has its own export format and allows FP8-quantized models to be efficiently deployed using the vLLM backend (vLLM-compatible).

## Evaluation

Quark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be found in quantize_quark.py. The quantization evaluation is conducted in pseudo-quantization mode, which may differ slightly from the actual quantized inference accuracy; these results are provided for reference only.

### Evaluation scores

| **Benchmark** | **Qwen3-8B** | **Qwen3-8B-FP8-KV (this model)** |
| -------------------- | ------------ | --------------------------------- |
| Perplexity-wikitext2 | 9.531 | 9.708 |

### Performance Summary

- **Accuracy Retention**: 98.15% (only a 1.85% perplexity increase)
- **Model Size**: ~42% reduction vs FP16
- **Memory Efficiency**: FP8 KV cache for extended context
- **Hardware Optimization**: AMD ROCm/HIP optimized

## License

Based on Qwen3-8B licensing terms.
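As a sketch, serving this checkpoint with vLLM's FP8 KV cache might look like the following; flag availability depends on your vLLM/ROCm build, so treat this as an assumption rather than a verified recipe:

```python
from vllm import LLM, SamplingParams

# kv_cache_dtype="fp8" enables the FP8 KV cache the card describes.
llm = LLM(model="EliovpAI/Qwen3-8B-FP8-KV", kv_cache_dtype="fp8")
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["Summarize FP8 quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```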
longhoang2112/whisper-base-fine-tuning_2_steps_slu
longhoang2112
2025-08-06T12:50:57Z
14
0
peft
[ "peft", "region:us" ]
null
2025-08-06T12:50:54Z
---
library_name: peft
---
## Training procedure

### Framework versions

- PEFT 0.5.0
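The card does not say how to load the adapter. A typical PEFT loading pattern would be the following sketch; the base checkpoint `openai/whisper-base` is inferred from the repo name and is an assumption:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration

# Assumption: the adapter was trained on top of openai/whisper-base.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
model = PeftModel.from_pretrained(
    base, "longhoang2112/whisper-base-fine-tuning_2_steps_slu"
)
```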
ekiprop/SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed20
ekiprop
2025-08-06T12:50:26Z
56
0
peft
[ "peft", "safetensors", "base_model:adapter:roberta-base", "lora", "transformers", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2025-08-06T12:35:59Z
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed20
  results: []
---
# SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed20

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2312
- Accuracy: 0.9427

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.3991 | 0.0950 | 200 | 0.2096 | 0.9163 |
| 0.2917 | 0.1900 | 400 | 0.1974 | 0.9174 |
| 0.2704 | 0.2850 | 600 | 0.2146 | 0.9197 |
| 0.2428 | 0.3800 | 800 | 0.1842 | 0.9346 |
| 0.2313 | 0.4751 | 1000 | 0.2589 | 0.9220 |
| 0.2147 | 0.5701 | 1200 | 0.2200 | 0.9278 |
| 0.2169 | 0.6651 | 1400 | 0.2166 | 0.9323 |
| 0.2097 | 0.7601 | 1600 | 0.2307 | 0.9255 |
| 0.216 | 0.8551 | 1800 | 0.2100 | 0.9312 |
| 0.2048 | 0.9501 | 2000 | 0.2078 | 0.9392 |
| 0.2004 | 1.0451 | 2200 | 0.2162 | 0.9335 |
| 0.1819 | 1.1401 | 2400 | 0.1884 | 0.9358 |
| 0.1837 | 1.2352 | 2600 | 0.2073 | 0.9323 |
| 0.1793 | 1.3302 | 2800 | 0.2156 | 0.9278 |
| 0.1792 | 1.4252 | 3000 | 0.1997 | 0.9323 |
| 0.1794 | 1.5202 | 3200 | 0.2129 | 0.9335 |
| 0.1788 | 1.6152 | 3400 | 0.1908 | 0.9346 |
| 0.1663 | 1.7102 | 3600 | 0.2561 | 0.9278 |
| 0.1705 | 1.8052 | 3800 | 0.2167 | 0.9346 |
| 0.1837 | 1.9002 | 4000 | 0.1958 | 0.9392 |
| 0.174 | 1.9952 | 4200 | 0.2181 | 0.9358 |
| 0.1602 | 2.0903 | 4400 | 0.2107 | 0.9335 |
| 0.1529 | 2.1853 | 4600 | 0.2229 | 0.9369 |
| 0.1568 | 2.2803 | 4800 | 0.2372 | 0.9346 |
| 0.1466 | 2.3753 | 5000 | 0.2117 | 0.9335 |
| 0.156 | 2.4703 | 5200 | 0.2452 | 0.9323 |
| 0.1544 | 2.5653 | 5400 | 0.2411 | 0.9312 |
| 0.163 | 2.6603 | 5600 | 0.2019 | 0.9323 |
| 0.1431 | 2.7553 | 5800 | 0.2393 | 0.9289 |
| 0.1466 | 2.8504 | 6000 | 0.2157 | 0.9312 |
| 0.1446 | 2.9454 | 6200 | 0.2291 | 0.9335 |
| 0.1395 | 3.0404 | 6400 | 0.2593 | 0.9278 |
| 0.1203 | 3.1354 | 6600 | 0.2339 | 0.9323 |
| 0.1272 | 3.2304 | 6800 | 0.2262 | 0.9404 |
| 0.1484 | 3.3254 | 7000 | 0.2128 | 0.9381 |
| 0.1269 | 3.4204 | 7200 | 0.2254 | 0.9404 |
| 0.1269 | 3.5154 | 7400 | 0.2387 | 0.9335 |
| 0.1321 | 3.6105 | 7600 | 0.2512 | 0.9358 |
| 0.1351 | 3.7055 | 7800 | 0.2333 | 0.9381 |
| 0.1331 | 3.8005 | 8000 | 0.2312 | 0.9427 |
| 0.1396 | 3.8955 | 8200 | 0.2190 | 0.9427 |
| 0.1342 | 3.9905 | 8400 | 0.2214 | 0.9381 |
| 0.1231 | 4.0855 | 8600 | 0.2422 | 0.9323 |
| 0.1159 | 4.1805 | 8800 | 0.2500 | 0.9323 |
| 0.1219 | 4.2755 | 9000 | 0.2348 | 0.9335 |
| 0.1225 | 4.3705 | 9200 | 0.2405 | 0.9312 |
| 0.1205 | 4.4656 | 9400 | 0.2407 | 0.9312 |
| 0.1148 | 4.5606 | 9600 | 0.2384 | 0.9369 |
| 0.12 | 4.6556 | 9800 | 0.2342 | 0.9381 |
| 0.1123 | 4.7506 | 10000 | 0.2384 | 0.9381 |
| 0.1182 | 4.8456 | 10200 | 0.2377 | 0.9381 |
| 0.1298 | 4.9406 | 10400 | 0.2349 | 0.9369 |

### Framework versions

- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
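The adapter configuration is not included in the card; going by the model name (LoRA on all attention projections Q, K, V, O of roberta-base), a matching PEFT setup might look like this sketch, with rank and scaling left at PEFT defaults as an assumption:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Suffix matching selects RoBERTa's attention projections:
# query, key, value, and the attention output dense (O).
config = LoraConfig(
    task_type="SEQ_CLS",
    target_modules=["query", "key", "value", "attention.output.dense"],
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```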
patent/qwen3_4b_grpo.n1.21
patent
2025-08-06T12:45:51Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-4B-Base", "base_model:finetune:unsloth/Qwen3-4B-Base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-06T12:45:44Z
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model

- **Developed by:** patent
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Base

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
fadhlyrafi/model
fadhlyrafi
2025-08-06T12:36:29Z
3
0
transformers
[ "transformers", "safetensors", "csm", "text-to-audio", "text-generation-inference", "unsloth", "en", "base_model:unsloth/csm-1b", "base_model:finetune:unsloth/csm-1b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-to-audio
2025-08-06T12:35:08Z
---
base_model: unsloth/csm-1b
tags:
- text-generation-inference
- transformers
- unsloth
- csm
license: apache-2.0
language:
- en
---
# Uploaded finetuned model

- **Developed by:** fadhlyrafi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/csm-1b

This csm model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Marko152/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_feline_anaconda
Marko152
2025-08-06T12:35:36Z
99
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am diving_feline_anaconda", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-31T11:01:39Z
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am diving_feline_anaconda
---
# Model Card for Marko152/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-diving_feline_anaconda

This is the model card of a 🤗 transformers model that has been pushed on the Hub. The card was generated automatically from the default template and contains no model-specific details beyond the tags above.
sobs0/new_wav2vec2-base-aphasia-oth
sobs0
2025-08-06T12:35:27Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-06T11:38:50Z
---
library_name: transformers
tags: []
---
# Model Card for sobs0/new_wav2vec2-base-aphasia-oth

This is the model card of a 🤗 transformers model that has been pushed on the Hub. The card was generated automatically from the default template and contains no model-specific details.
maldv/Eva-Mindlink-72b
maldv
2025-08-06T12:33:40Z
8
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "base_model:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2", "base_model:finetune:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-05T20:20:50Z
---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B/raw/main/LICENSE
library_name: transformers
language:
- en
tags:
- chat
- conversational
base_model:
- Qwen/Qwen2.5-72B
- Skywork/MindLink-72B-0801
- Unbabel/Tower-Plus-72B
- EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2
pipeline_tag: text-generation
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/r7maKU1wOkmSyHf-qPlMz.png)

[GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF) [iMat](https://huggingface.co/mradermacher/Eva-Mindlink-72b-i1-GGUF)

# Eva Mindlink 72B

Eva Mindlink 72B is a *normalized denoised fourier interpolation* of the following models:

```yaml
output_base_model: "Qwen/Qwen2.5-72B"
output_dtype: "bfloat16"
finetune_merge:
  - { "model": "Skywork/MindLink-72B-0801", "base": "Qwen/Qwen2.5-72B", "alpha": 0.9, "is_input": true }
  - { "model": "Unbabel/Tower-Plus-72B", "base": "Qwen/Qwen2.5-72B", "alpha": 0.5 }
  - { "model": "EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2", "base": "Qwen/Qwen2.5-72B", "alpha": 0.8, "is_output": true }
```

In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model (which in this case was Qwen2.5-72B), with the MindLink-72B-0801 input layer and the EVA-Qwen2.5-72B-v0.2 output layer.

## Citation

If you find this work helpful, feel free to cite it:

```
@misc{eva-mindlink-72b,
  title = {Eva Mindlink 72B},
  url = {https://huggingface.co/maldv/Eva-Mindlink-72B},
  author = {Praxis Maldevide},
  month = {August},
  year = {2025}
}
```
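The merge code itself is not published in the card, but the core idea of interpolating finetune deltas in Fourier space can be illustrated with a toy sketch like the one below. It omits the "normalized" and "denoised" steps the author names, so it illustrates the general technique, not the actual pipeline:

```python
import torch

def fourier_delta_merge(base: torch.Tensor, tuned: torch.Tensor, alpha: float) -> torch.Tensor:
    """Scale a finetune's delta in Fourier space, then add it back to the base.

    Toy illustration only: the author's method also normalizes and denoises
    the spectrum, which this sketch does not attempt.
    """
    delta_spectrum = torch.fft.fftn(tuned.float() - base.float())
    merged = base.float() + torch.fft.ifftn(alpha * delta_spectrum).real
    return merged.to(base.dtype)
```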
mradermacher/Eva-Mindlink-72b-GGUF
mradermacher
2025-08-06T12:24:21Z
724
1
transformers
[ "transformers", "gguf", "chat", "conversational", "en", "base_model:maldv/Eva-Mindlink-72b", "base_model:quantized:maldv/Eva-Mindlink-72b", "license:other", "endpoints_compatible", "region:us" ]
null
2025-08-06T01:14:53Z
---
base_model: maldv/Eva-Mindlink-72b
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2.5-72B/raw/main/LICENSE
license_name: qwen
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- chat
- conversational
---
## About

static quants of https://huggingface.co/maldv/Eva-Mindlink-72b

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Eva-Mindlink-72b-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Eva-Mindlink-72b-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q2_K.gguf) | Q2_K | 29.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q3_K_S.gguf) | Q3_K_S | 34.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q3_K_L.gguf) | Q3_K_L | 39.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.IQ4_XS.gguf) | IQ4_XS | 40.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 |  |
| [PART 1](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 |  |
| [PART 1](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eva-Mindlink-72b-GGUF/resolve/main/Eva-Mindlink-72b.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
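For the quants published as PART 1/PART 2 above, joining the parts is plain byte-wise concatenation (the equivalent of `cat part1 part2 > file.gguf`), assuming the parts are simple splits as the Usage section's linked READMEs describe. A Python sketch:

```python
import shutil

parts = [
    "Eva-Mindlink-72b.Q8_0.gguf.part1of2",
    "Eva-Mindlink-72b.Q8_0.gguf.part2of2",
]
with open("Eva-Mindlink-72b.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```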
Butanium/simple-stories-0L16H128D-attention-only-toy-transformer
Butanium
2025-08-06T12:23:49Z
11
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-08-06T12:23:47Z
# 0-Layer 16-Head Attention-Only Transformer

This is a simplified transformer model with 0 attention layer(s) and 16 attention head(s), hidden size 128, designed for studying attention mechanisms in isolation. (With 0 attention layers, the model reduces to token embeddings followed directly by the language-modeling head.)

## Architecture Differences from Vanilla Transformer

**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind

**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)

This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).

## Usage

```python
import torch.nn as nn
from transformers import LlamaConfig, PreTrainedModel

# NOTE: AttentionLayer (causal self-attention + residual) is assumed to be
# defined in the accompanying training code; with 0 layers, self.layers is empty.

class AttentionOnlyTransformer(PreTrainedModel):
    """Attention-only transformer with configurable number of attention layers."""

    config_class = LlamaConfig

    def __init__(self, config: LlamaConfig):
        super().__init__(config)
        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
        self.layers = nn.ModuleList(
            [AttentionLayer(config) for _ in range(config.num_hidden_layers)]
        )
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        batch_size, seq_len = input_ids.shape
        hidden_states = self.embed_tokens(input_ids)
        assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
        assert attention_mask.shape == (batch_size, seq_len)
        for layer in self.layers:
            hidden_states = layer(hidden_states, attention_mask)
        assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
        logits = self.lm_head(hidden_states)
        assert logits.shape == (batch_size, seq_len, self.config.vocab_size)
        loss = None
        if labels is not None:
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(
                shift_logits.view(-1, shift_logits.size(-1)),
                shift_labels.view(-1),
            )
        return {"loss": loss, "logits": logits}

model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-0L16H128D-attention-only-toy-transformer')
```

## Training Data

The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
Aarush09/bart-conversation-summarizer
Aarush09
2025-08-06T12:22:49Z
6
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-06T12:22:02Z
---
library_name: transformers
tags: []
---
# Model Card for Aarush09/bart-conversation-summarizer

This is the model card of a 🤗 transformers model that has been pushed on the Hub. The card was generated automatically from the default template and contains no model-specific details.
saberbx/GraniteSentry
saberbx
2025-08-06T12:20:23Z
10
0
transformers
[ "transformers", "safetensors", "granite", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-06T04:58:17Z
---
library_name: transformers
tags:
- unsloth
---
# Model Card for saberbx/GraniteSentry

This is the model card of a 🤗 transformers model that has been pushed on the Hub. The card was generated automatically from the default template and contains no model-specific details beyond the tags above.
nvovagen/novagwn
nvovagen
2025-08-06T12:19:50Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-Krea-dev", "base_model:adapter:black-forest-labs/FLUX.1-Krea-dev", "region:us" ]
text-to-image
2025-08-06T12:19:47Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
    url: images/images (1).jpeg
  text: '-'
base_model: black-forest-labs/FLUX.1-Krea-dev
instance_prompt: null
---
# novgen.1

<Gallery />

## Download model

[Download](/nvovagen/novagwn/tree/main) them in the Files & versions tab.
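The card only links the weights; a typical Diffusers flow for applying this LoRA would be the following sketch. The prompt is illustrative since no instance prompt is given, and the base model is gated and needs substantial VRAM:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("nvovagen/novagwn")  # apply this repo's LoRA

image = pipe("a portrait photo").images[0]
image.save("novagwn_sample.png")
```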
ekiprop/SST-2-GLoRA-p50-seed20
ekiprop
2025-08-06T12:17:50Z
54
0
peft
[ "peft", "safetensors", "base_model:adapter:roberta-base", "lora", "transformers", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2025-08-06T12:03:16Z
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-GLoRA-p50-seed20
  results: []
---
# SST-2-GLoRA-p50-seed20

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2073
- Accuracy: 0.9507

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.3587 | 0.0950 | 200 | 0.2132 | 0.9232 |
| 0.2898 | 0.1900 | 400 | 0.1966 | 0.9255 |
| 0.2656 | 0.2850 | 600 | 0.2076 | 0.9335 |
| 0.2372 | 0.3800 | 800 | 0.1867 | 0.9346 |
| 0.2304 | 0.4751 | 1000 | 0.2516 | 0.9197 |
| 0.2193 | 0.5701 | 1200 | 0.2399 | 0.9243 |
| 0.2257 | 0.6651 | 1400 | 0.1971 | 0.9335 |
| 0.2197 | 0.7601 | 1600 | 0.1918 | 0.9404 |
| 0.2199 | 0.8551 | 1800 | 0.1984 | 0.9323 |
| 0.2027 | 0.9501 | 2000 | 0.1861 | 0.9461 |
| 0.2083 | 1.0451 | 2200 | 0.1833 | 0.9427 |
| 0.1801 | 1.1401 | 2400 | 0.1849 | 0.9392 |
| 0.1818 | 1.2352 | 2600 | 0.1920 | 0.9369 |
| 0.1847 | 1.3302 | 2800 | 0.2184 | 0.9415 |
| 0.1737 | 1.4252 | 3000 | 0.1955 | 0.9415 |
| 0.1744 | 1.5202 | 3200 | 0.1843 | 0.9438 |
| 0.1843 | 1.6152 | 3400 | 0.1818 | 0.9415 |
| 0.1628 | 1.7102 | 3600 | 0.2257 | 0.9404 |
| 0.1607 | 1.8052 | 3800 | 0.1951 | 0.9415 |
| 0.1803 | 1.9002 | 4000 | 0.1772 | 0.9427 |
| 0.171 | 1.9952 | 4200 | 0.2226 | 0.9381 |
| 0.1557 | 2.0903 | 4400 | 0.1886 | 0.9427 |
| 0.1483 | 2.1853 | 4600 | 0.1809 | 0.9461 |
| 0.1489 | 2.2803 | 4800 | 0.2176 | 0.9404 |
| 0.1428 | 2.3753 | 5000 | 0.1820 | 0.9461 |
| 0.147 | 2.4703 | 5200 | 0.2073 | 0.9507 |
| 0.1532 | 2.5653 | 5400 | 0.2002 | 0.9438 |
| 0.1633 | 2.6603 | 5600 | 0.1759 | 0.9495 |
| 0.1427 | 2.7553 | 5800 | 0.2015 | 0.9450 |
| 0.1398 | 2.8504 | 6000 | 0.1921 | 0.9450 |
| 0.1344 | 2.9454 | 6200 | 0.1937 | 0.9427 |
| 0.1412 | 3.0404 | 6400 | 0.2044 | 0.9450 |
| 0.1148 | 3.1354 | 6600 | 0.1907 | 0.9472 |
| 0.128 | 3.2304 | 6800 | 0.1894 | 0.9461 |
| 0.1358 | 3.3254 | 7000 | 0.1836 | 0.9507 |
| 0.1195 | 3.4204 | 7200 | 0.2043 | 0.9461 |
| 0.1239 | 3.5154 | 7400 | 0.2053 | 0.9450 |
| 0.1225 | 3.6105 | 7600 | 0.2060 | 0.9427 |
| 0.1271 | 3.7055 | 7800 | 0.2090 | 0.9461 |
| 0.1376 | 3.8005 | 8000 | 0.1953 | 0.9438 |
| 0.1293 | 3.8955 | 8200 | 0.1912 | 0.9450 |
| 0.1252 | 3.9905 | 8400 | 0.1936 | 0.9507 |
| 0.1083 | 4.0855 | 8600 | 0.2040 | 0.9472 |
| 0.1073 | 4.1805 | 8800 | 0.2121 | 0.9484 |
| 0.1126 | 4.2755 | 9000 | 0.2055 | 0.9472 |
| 0.1131 | 4.3705 | 9200 | 0.2010 | 0.9507 |
| 0.1031 | 4.4656 | 9400 | 0.2125 | 0.9461 |
| 0.1013 | 4.5606 | 9600 | 0.2132 | 0.9472 |
| 0.1141 | 4.6556 | 9800 | 0.2087 | 0.9484 |
| 0.1114 | 4.7506 | 10000 | 0.2026 | 0.9484 |
| 0.1175 | 4.8456 | 10200 | 0.2013 | 0.9461 |
| 0.1099 | 4.9406 | 10400 | 0.2025 | 0.9472 |

### Framework versions

- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
Butanium/simple-stories-0L16H512D-attention-only-toy-transformer
Butanium
2025-08-06T12:15:56Z
6
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-08-06T12:15:54Z
# 0-Layer 16-Head Attention-Only Transformer

This is a simplified transformer model with 0 attention layer(s) and 16 attention head(s), hidden size 512, designed for studying attention mechanisms in isolation. (With 0 attention layers, the model reduces to token embeddings followed directly by the language-modeling head.)

## Architecture Differences from Vanilla Transformer

**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind

**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)

This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).

## Usage

```python
import torch.nn as nn
from transformers import LlamaConfig, PreTrainedModel

# NOTE: AttentionLayer (causal self-attention + residual) is assumed to be
# defined in the accompanying training code; with 0 layers, self.layers is empty.

class AttentionOnlyTransformer(PreTrainedModel):
    """Attention-only transformer with configurable number of attention layers."""

    config_class = LlamaConfig

    def __init__(self, config: LlamaConfig):
        super().__init__(config)
        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
        self.layers = nn.ModuleList(
            [AttentionLayer(config) for _ in range(config.num_hidden_layers)]
        )
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        batch_size, seq_len = input_ids.shape
        hidden_states = self.embed_tokens(input_ids)
        assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
        assert attention_mask.shape == (batch_size, seq_len)
        for layer in self.layers:
            hidden_states = layer(hidden_states, attention_mask)
        assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
        logits = self.lm_head(hidden_states)
        assert logits.shape == (batch_size, seq_len, self.config.vocab_size)
        loss = None
        if labels is not None:
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(
                shift_logits.view(-1, shift_logits.size(-1)),
                shift_labels.view(-1),
            )
        return {"loss": loss, "logits": logits}

model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-0L16H512D-attention-only-toy-transformer')
```

## Training Data

The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
isogen/II-Search-CIR-4B-exl3-6bpw
isogen
2025-08-06T12:15:20Z
2
0
null
[ "safetensors", "qwen3", "base_model:Intelligent-Internet/II-Search-CIR-4B", "base_model:quantized:Intelligent-Internet/II-Search-CIR-4B", "6-bit", "exl3", "region:us" ]
null
2025-08-06T12:14:44Z
---
base_model: Intelligent-Internet/II-Search-CIR-4B
---
[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [II-Search-CIR-4B](https://huggingface.co/Intelligent-Internet/II-Search-CIR-4B), 6 bits per weight.

### HumanEval (argmax)

| Model | Q4 | Q6 | Q8 | FP16 |
| ----- | ---- | ---- | ---- | ---- |
| [II-Search-CIR-4B-exl3-4bpw](https://huggingface.co/isogen/II-Search-CIR-4B-exl3-4bpw) | 81.7 | 79.3 | 78.7 | 79.9 |
| [II-Search-CIR-4B-exl3-6bpw](https://huggingface.co/isogen/II-Search-CIR-4B-exl3-6bpw) | 80.5 | 81.1 | 81.1 | 81.7 |
| [II-Search-CIR-4B-exl3-8bpw-h8](https://huggingface.co/isogen/II-Search-CIR-4B-exl3-8bpw-h8) | 83.5 | 83.5 | 82.3 | 82.9 |
| [Qwen3-4B-exl3-4bpw](https://huggingface.co/isogen/Qwen3-4B-exl3-4bpw) | 80.5 | 81.1 | 81.7 | 80.5 |
| [Qwen3-4B-exl3-6bpw](https://huggingface.co/isogen/Qwen3-4B-exl3-6bpw) | 80.5 | 85.4 | 86.0 | 86.0 |
| [Qwen3-4B-exl3-8bpw-h8](https://huggingface.co/isogen/Qwen3-4B-exl3-8bpw-h8) | 82.3 | 84.8 | 83.5 | 82.9 |
Butanium/simple-stories-0L8H256D-attention-only-toy-transformer
Butanium
2025-08-06T12:11:02Z
6
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-08-06T12:11:00Z
# 0-Layer 8-Head Attention-Only Transformer

This is a simplified transformer model with 0 attention layer(s) and 8 attention head(s), hidden size 256, designed for studying attention mechanisms in isolation. (With 0 attention layers, the model reduces to token embeddings followed directly by the language-modeling head.)

## Architecture Differences from Vanilla Transformer

**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind

**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)

This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).

## Usage

```python
import torch.nn as nn
from transformers import LlamaConfig, PreTrainedModel

# NOTE: AttentionLayer (causal self-attention + residual) is assumed to be
# defined in the accompanying training code; with 0 layers, self.layers is empty.

class AttentionOnlyTransformer(PreTrainedModel):
    """Attention-only transformer with configurable number of attention layers."""

    config_class = LlamaConfig

    def __init__(self, config: LlamaConfig):
        super().__init__(config)
        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
        self.layers = nn.ModuleList(
            [AttentionLayer(config) for _ in range(config.num_hidden_layers)]
        )
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        batch_size, seq_len = input_ids.shape
        hidden_states = self.embed_tokens(input_ids)
        assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
        assert attention_mask.shape == (batch_size, seq_len)
        for layer in self.layers:
            hidden_states = layer(hidden_states, attention_mask)
        assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
        logits = self.lm_head(hidden_states)
        assert logits.shape == (batch_size, seq_len, self.config.vocab_size)
        loss = None
        if labels is not None:
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(
                shift_logits.view(-1, shift_logits.size(-1)),
                shift_labels.view(-1),
            )
        return {"loss": loss, "logits": logits}

model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-0L8H256D-attention-only-toy-transformer')
```

## Training Data

The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
Avtertu/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_skittish_ape
Avtertu
2025-08-06T12:10:40Z
101
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am silent_skittish_ape", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-03T09:40:54Z
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am silent_skittish_ape
---
# Model Card for Avtertu/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_skittish_ape

This is the model card of a 🤗 transformers model that has been pushed on the Hub. The card was generated automatically from the default template and contains no model-specific details beyond the tags above.
conradjs/gpt2-reuters-tokenizer
conradjs
2025-08-06T12:05:26Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-06T12:05:25Z
---
library_name: transformers
tags: []
---
# Model Card for conradjs/gpt2-reuters-tokenizer

This is the model card of a 🤗 transformers model that has been pushed on the Hub. The card was generated automatically from the default template and contains no model-specific details.
alphateach/affine-202020
alphateach
2025-08-06T11:56:15Z
459
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "mxfp4", "region:us" ]
text-generation
2025-08-06T11:56:15Z
--- license: apache-2.0 pipeline_tag: text-generation library_name: transformers tags: - vllm --- <p align="center"> <img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg"> </p> <p align="center"> <a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> · <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> · <a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> · <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a> </p> <br> Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases. We’re releasing two flavors of these open models: - `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single H100 GPU (117B parameters with 5.1B active parameters) - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters) Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise. > [!NOTE] > This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model. # Highlights * **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment. * **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. * **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users. * **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning. * **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs. * **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single H100 GPU and the `gpt-oss-20b` model run within 16GB of memory. --- # Inference examples ## Transformers You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package. 
To get started, install the necessary dependencies to set up your environment: ``` pip install -U transformers kernels torch ``` Once set up, you can proceed to run the model with the snippet below: ```py from transformers import pipeline import torch model_id = "openai/gpt-oss-20b" pipe = pipeline( "text-generation", model=model_id, torch_dtype="auto", device_map="auto", ) messages = [ {"role": "user", "content": "Explain quantum mechanics clearly and concisely."}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver: ``` transformers serve transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b ``` [Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers) ## vLLM vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server. ```bash uv pip install --pre vllm==0.10.1+gptoss \ --extra-index-url https://wheels.vllm.ai/gpt-oss/ \ --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \ --index-strategy unsafe-best-match vllm serve openai/gpt-oss-20b ``` [Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm) ## PyTorch / Triton To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation). ## Ollama If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download). ```bash # gpt-oss-20b ollama pull gpt-oss:20b ollama run gpt-oss:20b ``` [Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama) ## LM Studio If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download it. ```bash # gpt-oss-20b lms get openai/gpt-oss-20b ``` Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners. --- # Download the model You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly using the Hugging Face CLI: ```shell # gpt-oss-20b huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/ pip install gpt-oss python -m gpt_oss.chat model/ ``` # Reasoning levels You can adjust the reasoning level to suit your task across three levels: * **Low:** Fast responses for general dialogue. * **Medium:** Balanced speed and detail. * **High:** Deep and detailed analysis. The reasoning level can be set in the system prompts, e.g., "Reasoning: high". # Tool use The gpt-oss models are excellent for: * Web browsing (using built-in browsing tools) * Function calling with defined schemas * Agentic operations like browser tasks # Fine-tuning Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
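The reasoning-level mechanic is easy to miss in the Transformers example above, so here is a minimal sketch combining the two — it simply reuses the card's own pipeline setup and places the level in the system prompt, following the card's "Reasoning: high" convention:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    # Per the card, the reasoning level is set in the system prompt.
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])
```

Higher reasoning levels spend more tokens on the chain-of-thought, so a larger `max_new_tokens` budget is used here than in the card's base example.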
PhaaNe/clickbait_KLTN
PhaaNe
2025-08-06T11:45:58Z
21
0
null
[ "safetensors", "llama", "text-classification", "clickbait-detection", "vietnamese", "fine-tuned", "vi", "dataset:clickbait-dataset", "license:apache-2.0", "region:us" ]
text-classification
2025-08-05T20:05:10Z
--- language: vi license: apache-2.0 tags: - text-classification - clickbait-detection - vietnamese - llama - fine-tuned datasets: - clickbait-dataset metrics: - accuracy - f1 pipeline_tag: text-classification --- # Vietnamese Clickbait Detection Model This model is a fine-tuned version of Llama for Vietnamese clickbait detection. ## Model Description - **Model type:** Causal Language Model (Fine-tuned for Classification) - **Language:** Vietnamese - **Base model:** meta-llama/Llama-3.1-8B-Instruct - **Task:** Clickbait Detection - **Dataset:** Vietnamese clickbait dataset ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Load model and tokenizer model_name = "PhaaNe/clickbait_KLTN" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.float16, device_map="auto" ) # Example usage text = "Bạn sẽ không tin được điều này xảy ra!" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=10) result = tokenizer.decode(outputs[0], skip_special_tokens=True) print(result) ``` ## Training Details - Fine-tuned using LoRA (Low-Rank Adaptation) - Training framework: Transformers + PEFT - Hardware: GPU-enabled server ## Performance The model achieves good performance on Vietnamese clickbait detection tasks. ## Citation If you use this model, please cite: ``` @misc{clickbait_kltn_2025, title={Vietnamese Clickbait Detection using Fine-tuned Llama}, author={PhaaNe}, year={2025}, url={https://huggingface.co/PhaaNe/clickbait_KLTN} } ```
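Since the base model is Llama-3.1-8B-Instruct, wrapping the headline in the tokenizer's chat template may produce a cleaner label than raw-text generation. A sketch building on the card's snippet above — the prompt wording and label set are assumptions, not documented in the card:

```python
# Continues from the card's snippet (tokenizer and model already loaded).
messages = [
    {
        "role": "user",
        # Hypothetical prompt format: the card does not document the exact one.
        "content": "Is this Vietnamese headline clickbait? Answer 'clickbait' or "
                   "'non-clickbait'.\n\nBạn sẽ không tin được điều này xảy ra!",
    }
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=10)
# Decode only the newly generated tokens (the label), not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```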
tamewild/4b_v37_merged_e8
tamewild
2025-08-06T11:40:38Z
3
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-06T11:38:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Conexis/GLM-4.5-Air-Channel-INT8
Conexis
2025-08-06T11:38:58Z
0
0
transformers
[ "transformers", "safetensors", "glm4_moe", "text-generation", "conversational", "en", "zh", "license:mit", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2025-08-04T01:23:46Z
--- license: mit language: - en - zh pipeline_tag: text-generation library_name: transformers --- # GLM-4.5 <div align="center"> <img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/> </div> <p align="center"> 👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community. <br> 📖 Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>. <br> 📍 Use GLM-4.5 API services on <a href="https://docs.z.ai/guides/llm/glm-4.5">Z.ai API Platform (Global)</a> or <br> <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform (Mainland China)</a>. <br> 👉 One click to <a href="https://chat.z.ai">GLM-4.5</a>. </p> ## Model Introduction The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications. Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses. We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development. As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, placing **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency. ![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png) For more eval results, showcases, and technical details, please visit our [technical blog](https://z.ai/blog/glm-4.5). The technical report will be released soon. The model code, tool parser and reasoning parser can be found in the implementation of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py) and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py). ## Quick Start Please refer to our [GitHub page](https://github.com/zai-org/GLM-4.5) for more details.
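The card defers usage to the GitHub page; as a rough orientation, here is a minimal sketch under the assumption that this INT8 repository loads through the standard transformers `glm4_moe` implementation linked above — it is a sketch, not the maintainers' documented path, and a 106B-parameter MoE needs several large GPUs even at INT8:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Conexis/GLM-4.5-Air-Channel-INT8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the model across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Introduce GLM-4.5-Air in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```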
vomqal/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-masked_snappy_caribou
vomqal
2025-08-06T11:37:04Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am masked_snappy_caribou", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-03T00:27:47Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am masked_snappy_caribou --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tamewild/4b_v37_merged_e10
tamewild
2025-08-06T11:36:49Z
8
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-06T11:34:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
idopinto/gpt-oss-20b-multilingual-reasoner
idopinto
2025-08-06T11:36:12Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "dataset:HuggingFaceH4/Multilingual-Thinking", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us" ]
null
2025-08-06T11:01:58Z
--- base_model: openai/gpt-oss-20b datasets: HuggingFaceH4/Multilingual-Thinking library_name: transformers model_name: gpt-oss-20b-multilingual-reasoner tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for gpt-oss-20b-multilingual-reasoner This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="idopinto/gpt-oss-20b-multilingual-reasoner", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
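For readers curious about the training side of this card, here is a minimal sketch of an SFT run with TRL on the named dataset — the hyperparameters, split name, and output path are placeholders, since the actual training configuration is not published in the card:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset named in the card; the split name is assumed.
dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",  # base model named in the card
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="gpt-oss-20b-multilingual-reasoner",
        per_device_train_batch_size=1,
        num_train_epochs=1,  # placeholder value
    ),
)
trainer.train()
```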
mradermacher/PaperPrediction-LLM-4B-GGUF
mradermacher
2025-08-06T11:31:15Z
57
0
transformers
[ "transformers", "gguf", "en", "base_model:weihezhai/PaperPrediction-LLM-4B", "base_model:quantized:weihezhai/PaperPrediction-LLM-4B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-06T11:17:36Z
--- base_model: weihezhai/PaperPrediction-LLM-4B language: - en library_name: transformers license: cc-by-nc-4.0 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/weihezhai/PaperPrediction-LLM-4B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PaperPrediction-LLM-4B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q5_K_M.gguf) | Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q6_K.gguf) | Q6_K | 3.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/PaperPrediction-LLM-4B-GGUF/resolve/main/PaperPrediction-LLM-4B.f16.gguf) | f16 | 8.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some 
other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
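If a GGUF from the table above needs to be fetched programmatically rather than through the browser, a minimal sketch with `huggingface_hub` (the quant choice simply follows the table's "fast, recommended" entry):

```python
from huggingface_hub import hf_hub_download

# Q4_K_S is marked "fast, recommended" in the quant table above.
path = hf_hub_download(
    repo_id="mradermacher/PaperPrediction-LLM-4B-GGUF",
    filename="PaperPrediction-LLM-4B.Q4_K_S.gguf",
)
print(path)  # local path of the downloaded GGUF file
```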
affinator/Affine-7857777
affinator
2025-08-06T11:29:39Z
62
0
null
[ "safetensors", "deepseek_v3", "custom_code", "fp8", "region:us" ]
null
2025-08-06T11:29:20Z
This repository hosts a variant of Alphatao/Affine-0000000. License: MIT. The original license is preserved. No further information about the modifications is provided.
grapevine-AI/Qwen3-30B-A3B-Thinking-2507-GGUF
grapevine-AI
2025-08-06T11:22:57Z
133
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-01T13:32:40Z
--- license: apache-2.0 --- # What is this? Alibaba Cloud's MoE model Qwen3-30B-A3B is back, more powerful than before!<br> The improved release is split into two variants, a non-thinking model and a thinking model; this repository is the thinking variant, [Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507), converted to the GGUF format. # imatrix dataset To prioritize Japanese-language capability, the [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm) dataset, which contains a large amount of Japanese text, was used. # Chat template ``` <|im_start|>system Write your system prompt here.<|im_end|> <|im_start|>user Write your message here.<|im_end|> <|im_start|>assistant ``` <!-- # Quants A summary of each quant and its benchmark score (Elyza_tasks 100, graded by Gemini 2.0 Flash). |Quant|Score|Comment| |---|---|---| |Q8_0||| |Q6_K||| |Q5_K_M||| |Q4_K_M|4.42|| |IQ4_XS||| --> # Environment Quantization was carried out with the Windows build of llama.cpp-b5999. # License Apache 2.0 # Developer Alibaba Cloud
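Because GGUF files embed the chat template shown above, a ChatML-style prompt does not have to be assembled by hand. A minimal sketch with `llama-cpp-python` — the library choice is ours, and the filename is a placeholder for whichever quant is downloaded from this repository:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="Qwen3-30B-A3B-Thinking-2507.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,
)

# create_chat_completion applies the chat template embedded in the GGUF.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ]
)
print(out["choices"][0]["message"]["content"])
```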
Gusgoodmansamadayo/Convnex_Base-7_11_Sign
Gusgoodmansamadayo
2025-08-06T11:21:51Z
0
0
null
[ "image-classification", "base_model:facebook/convnext-tiny-224", "base_model:finetune:facebook/convnext-tiny-224", "license:mit", "region:us" ]
image-classification
2025-08-06T06:13:26Z
--- license: mit base_model: - facebook/convnext-tiny-224 pipeline_tag: image-classification ---
unsloth/gpt-oss-120b-bnb-4bit
unsloth
2025-08-06T11:19:40Z
0
2
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-06T11:19:39Z
--- license: apache-2.0 ---
Thireus/GLM-4.5-THIREUS-IQ3_KS-SPECIAL_SPLIT
Thireus
2025-08-06T11:18:18Z
4
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-03T17:36:07Z
--- license: mit --- ## ⚠️ Cautionary Notice Due to changes in the GLM-4.5 PR, the GGUF files of this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`. - **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939). - **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668). **Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** > 🔒 **Do not use these quantized models for production** > 🔬 **Do not use them to assess the quality of the GLM-4.5 models** Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance. --- # GLM-4.5 ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly. - 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite - 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples - 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb - 📂 Browse available quant shards: https://huggingface.co/Thireus/collections *tl;dr: Expand the details section below* <details> ``` cd ~ # Make sure to install all ik_llama.cpp compilation dependencies... apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx # Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases git clone https://github.com/Thireus/ik_llama.cpp cd ik_llama.cpp git pull # Build ik_llama.cpp cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048 cmake --build build --config Release -j16 cd .. # Obtain Thireus' GGUF-Tool-Suite git clone https://github.com/Thireus/GGUF-Tool-Suite # Download model quant mix from recipe file: cd GGUF-Tool-Suite rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model mkdir -p kitchen && cd kitchen ../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe # Launch ik_llama's llama-cli: ulimit -n 99999 # Lifts "too many open files" limitation on Linux ~/ik_llama.cpp/build/bin/llama-cli \ -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \ -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \ -ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \ -ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \ -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \ --main-gpu 0 \ -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n' ``` </details> --- ## ❓ Why does this Tool Suite exist? 1.
**Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. 2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. 3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! --- ## 📊 How does it compare to other GGUFs? Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw): ![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg) > _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs --- ## 🚀 How do I get started? Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections: 1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases 2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples 3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. 4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. --- ## ✅ Supported Models Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`. --- ## 🤷‍♂️ Will I release pre-cooked GGUF files? No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them. Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`. Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. --- ## 📦 What’s in this repository? - **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. 
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. - **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. - **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. --- ## 💡 Pro Tips You can download the BF16 model version to quantize your own shards: ``` mkdir kitchen echo '.*=bf16' > kitchen/bf16.recipe cd kitchen ../quant_downloader.sh bf16.recipe ``` Enjoy optimized quantization! 🎉
Thireus/GLM-4.5-THIREUS-IQ3_KT-SPECIAL_SPLIT
Thireus
2025-08-06T11:18:10Z
4
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-03T17:59:29Z
--- license: mit --- ## ⚠️ Cautionary Notice Due to changes in the GLM-4.5 PR, the GGUF files of this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`. - **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939). - **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668). **Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** > 🔒 **Do not use these quantized models for production** > 🔬 **Do not use them to assess the quality of the GLM-4.5 models** Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance. --- # GLM-4.5 ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly. - 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite - 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples - 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb - 📂 Browse available quant shards: https://huggingface.co/Thireus/collections *tl;dr: Expand the details section below* <details> ``` cd ~ # Make sure to install all ik_llama.cpp compilation dependencies... apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx # Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases git clone https://github.com/Thireus/ik_llama.cpp cd ik_llama.cpp git pull # Build ik_llama.cpp cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048 cmake --build build --config Release -j16 cd .. # Obtain Thireus' GGUF-Tool-Suite git clone https://github.com/Thireus/GGUF-Tool-Suite # Download model quant mix from recipe file: cd GGUF-Tool-Suite rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model mkdir -p kitchen && cd kitchen ../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe # Launch ik_llama's llama-cli: ulimit -n 99999 # Lifts "too many open files" limitation on Linux ~/ik_llama.cpp/build/bin/llama-cli \ -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \ -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \ -ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \ -ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \ -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \ --main-gpu 0 \ -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n' ``` </details> --- ## ❓ Why does this Tool Suite exist? 1.
**Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. 2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. 3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! --- ## 📊 How does it compare to other GGUFs? Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw): ![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg) > _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs --- ## 🚀 How do I get started? Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections: 1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases 2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples 3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. 4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. --- ## ✅ Supported Models Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`. --- ## 🤷‍♂️ Will I release pre-cooked GGUF files? No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them. Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`. Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. --- ## 📦 What’s in this repository? - **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. 
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. - **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. - **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. --- ## 💡 Pro Tips You can download the BF16 model version to quantize your own shards: ``` mkdir kitchen echo '.*=bf16' > kitchen/bf16.recipe cd kitchen ../quant_downloader.sh bf16.recipe ``` Enjoy optimized quantization! 🎉
knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B
knifeayumu
2025-08-06T11:17:09Z
80
8
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:Doctor-Shotgun/MS3.2-24B-Magnum-Diamond", "base_model:merge:Doctor-Shotgun/MS3.2-24B-Magnum-Diamond", "base_model:TheDrummer/Cydonia-24B-v4", "base_model:merge:TheDrummer/Cydonia-24B-v4", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-23T05:58:22Z
--- base_model: - TheDrummer/Cydonia-24B-v4 - Doctor-Shotgun/MS3.2-24B-Magnum-Diamond library_name: transformers tags: - mergekit - merge license: apache-2.0 --- ![Foxgirl on Cydonia](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/FoxGirlonCydonia.png) # Cydonia-v4-MS3.2-Magnum-Diamond-24B Recipe based on [knifeayumu/Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) because the model [Doctor-Shotgun/MS3.2-24B-Magnum-Diamond](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond) is still too horny and verbose. The [PNG file](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/FoxGirlonCydonia.png) above includes a workflow for FLUX Kontext Dev with ComfyUI utilising [pollockjj/ComfyUI-MultiGPU](https://github.com/pollockjj/ComfyUI-MultiGPU) nodes and [two input images without stitching](https://www.reddit.com/r/StableDiffusion/comments/1m5wpmv/flux_kontext_psa_you_can_load_multiple_images/). ![ComfyUI Workflow](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/ComfyUI_FoxGirlonCydonia.png) ## Merge Details This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit). ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * TheDrummer/Cydonia-24B-v4 * Doctor-Shotgun/MS3.2-24B-Magnum-Diamond ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TheDrummer/Cydonia-24B-v4 - model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond merge_method: slerp base_model: TheDrummer/Cydonia-24B-v4 parameters: t: [0.1, 0.3, 0.6, 0.3, 0.1] dtype: bfloat16 ```
knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF
knifeayumu
2025-08-06T11:17:00Z
1,706
0
transformers
[ "transformers", "gguf", "en", "base_model:knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B", "base_model:quantized:knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-07-23T06:58:27Z
--- base_model: - knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B language: - en license: apache-2.0 library_name: transformers --- ## Llamacpp Quantizations of knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B Using [llama.cpp](https://github.com/ggerganov/llama.cpp/) release [b5966](https://github.com/ggml-org/llama.cpp/releases/tag/b5966) for quantization. Original model: [knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B) ## Quant Types: | Filename | Quant type | File Size | | -------- | ---------- | --------- | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-F16.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-F16.gguf) | F16 | 47.15 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q8_0.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q8_0.gguf) | Q8_0 | 25.05 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q6_K.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q6_K.gguf) | Q6_K | 19.35 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_M.gguf) | Q5_K_M | 16.76 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_S.gguf) | Q5_K_S | 16.30 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf) | Q4_K_M | 14.33 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_S.gguf) | Q4_K_S | 13.55 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_L.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_L.gguf) | Q3_K_L | 12.40 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_M.gguf) | Q3_K_M | 11.47 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_S.gguf) | Q3_K_S | 10.40 GB | | [Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q2_K.gguf](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B-GGUF/blob/main/Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q2_K.gguf) | Q2_K | 8.89 GB | --- ![Foxgirl on Cydonia](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/FoxGirlonCydonia.png) # Cydonia-v4-MS3.2-Magnum-Diamond-24B Recipe based on [knifeayumu/Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) because the model [Doctor-Shotgun/MS3.2-24B-Magnum-Diamond](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond) is still too horny and verbose. 
The [PNG file](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/FoxGirlonCydonia.png) above includes workflow for FLUX Kontext Dev with ComfyUI utilising [pollockjj/ComfyUI-MultiGPU](https://github.com/pollockjj/ComfyUI-MultiGPU) nodes and [two input images without stitching](https://www.reddit.com/r/StableDiffusion/comments/1m5wpmv/flux_kontext_psa_you_can_load_multiple_images/). ![ComfyUI Workflow](https://huggingface.co/knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B/resolve/main/ComfyUI_FoxGirlonCydonia.png) ## Merge Details This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit). ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * TheDrummer/Cydonia-24B-v4 * Doctor-Shotgun/MS3.2-24B-Magnum-Diamond ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TheDrummer/Cydonia-24B-v4 - model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond merge_method: slerp base_model: TheDrummer/Cydonia-24B-v4 parameters: t: [0.1, 0.3, 0.6, 0.3, 0.1] dtype: bfloat16 ```
remiai3/mistral-7B-Instruct-v0.1-GGUF_using_int4_project_guide
remiai3
2025-08-06T11:15:36Z
9
0
null
[ "gguf", "students", "en", "base_model:TheBloke/Mistral-7B-Instruct-v0.1-GGUF", "base_model:quantized:TheBloke/Mistral-7B-Instruct-v0.1-GGUF", "license:apache-2.0", "region:us" ]
null
2025-08-06T11:09:27Z
--- license: apache-2.0 language: - en base_model: - TheBloke/Mistral-7B-Instruct-v0.1-GGUF tags: - students --- # Mistral 7B Project Guide ## Overview This repository, remiai3/mistral7B, provides code and resources for students to run the Mistral 7B model locally on their laptops for AI experiments and research. It is a free resource with no hidden fees, and we attribute the original model to Mistral AI. The repository includes scripts to run both the pre-trained Mistral 7B model and a fine-tuned version using LoRA weights. ## Features - Run Mistral 7B locally with a simple web UI. - Includes pre-trained and fine-tuned (LoRA) model support. - Educational focus for students to explore modern AI models. - Quantized model weights for consumer hardware (8GB or 16GB RAM). ## Getting Started Follow the steps in document.txt for detailed instructions on: - System requirements (Python 3.10+, 8GB/16GB RAM). - Setting up the environment and installing dependencies. - Downloading model weights from TheBloke/Mistral-7B-Instruct-v0.1-GGUF. - Running the pre-trained and fine-tuned models. ## Repository Structure - app.py: Script to run the pre-trained model with a model selector UI. - fine_tune/app.py: Script to run the fine-tuned LoRA model. - fine_tune/lora_finetuned.gguf: LoRA weights for the fine-tuned model. - fine_tune/dataset.json: Dataset used for fine-tuning. - fine_tune/finetune.py: Fine-tuning script. - requirements.txt: Dependencies for the project. - document.txt: Detailed setup and usage guide. ## Attribution - Model: Mistral 7B, created by Mistral AI. - Quantized Weights: Provided by TheBloke. This project is for educational purposes to support student learning and research. ## License Apache 2.0 (same as Mistral 7B). ## Support For issues or questions, visit the Issues section or contact remiai3 on Hugging Face.
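As a hedged illustration of the "run locally" step described above (the exact quant filename in TheBloke's repository and all settings are assumptions; `document.txt` in the repo remains the authoritative guide):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a 4-bit quant from the upstream GGUF repository named in the guide.
path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```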
sagata007/villain
sagata007
2025-08-06T11:15:06Z
17
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-06T11:14:59Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: villain license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # villain <Gallery /> ## Model description ## Trigger words You should use `villain` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/sagata007/villain/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
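The card above omits a usage snippet; here is a hedged diffusers sketch using the `villain` trigger word named in the card. The prompt, step count, and guidance value are illustrative assumptions, and the FLUX.1-dev base must be accepted under its non-commercial license first.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base and attach this LoRA from the Hub.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("sagata007/villain")

# `villain` is the trigger word named in the card.
image = pipe(
    "villain brooding on a rain-soaked rooftop, cinematic lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("villain.png")
```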
hafidhsoekma/test-g1.7b-2-checkpoint-300
hafidhsoekma
2025-08-06T11:09:49Z
30
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-06T11:05:01Z
--- base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** hafidhsoekma - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-1.7B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
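Since the card names Unsloth but gives no usage snippet, here is a minimal inference sketch assuming the `unsloth` package; plain `transformers` loading should also work, and the sequence length and prompt are placeholders.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="hafidhsoekma/test-g1.7b-2-checkpoint-300",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable the fast inference path

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Introduce yourself in one sentence."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```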
lmsys/gpt-oss-20b-bf16
lmsys
2025-08-06T11:09:41Z
2,809
3
null
[ "safetensors", "gpt_oss", "region:us" ]
null
2025-08-05T21:58:13Z
# gpt-oss-20b-bf16 ## Model Introduction This model is the bf16 version converted from [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b). ## Usage You can use this model in [SGLang](https://github.com/sgl-project/sglang) with the following instructions. ### Installation ``` # build from source git clone https://github.com/sgl-project/sglang cd sglang pip3 install pip --upgrade pip3 install -e "python[all]" # ROCm 6.3 pip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/rocm6.3 git clone https://github.com/triton-lang/triton cd python/triton_kernels pip3 install . # hopper pip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu126 pip3 install sgl-kernel==0.3.2 # blackwell cu128 pip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128 pip3 install https://github.com/sgl-project/whl/releases/download/v0.3.2/sgl_kernel-0.3.2+cu128-cp39-abi3-manylinux2014_x86_64.whl # blackwell cu129 pip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu129 pip3 install https://github.com/sgl-project/whl/releases/download/v0.3.2/sgl_kernel-0.3.2-cp39-abi3-manylinux2014_x86_64.whl ``` ### Launch command ``` python3 -m sglang.launch_server --model lmsys/gpt-oss-20b-bf16 ``` ### For more details https://github.com/sgl-project/sglang/issues/8833
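Once the server above is running, it exposes an OpenAI-compatible endpoint (SGLang defaults to port 30000); a minimal client sketch, with the prompt as a placeholder:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="lmsys/gpt-oss-20b-bf16",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```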
Userb1az/Qwen3-Coder-30B-A3B-Instruct-GGUF
Userb1az
2025-08-06T11:08:53Z
276
0
transformers
[ "transformers", "gguf", "text-generation", "arxiv:2505.09388", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-04T09:26:43Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE pipeline_tag: text-generation --- # Qwen3-Coder-30B-A3B-Instruct <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Highlights **Qwen3-Coder** is available in multiple sizes. Today, we're excited to introduce **Qwen3-Coder-30B-A3B-Instruct**. This streamlined model maintains impressive performance and efficiency, featuring the following key enhancements: - **Significant Performance** among open models on **Agentic Coding**, **Agentic Browser-Use**, and other foundational coding tasks. - **Long-context Capabilities** with native support for **256K** tokens, extendable up to **1M** tokens using Yarn, optimized for repository-scale understanding. - **Agentic Coding** supporting most platforms such as **Qwen Code** and **CLINE**, featuring a specially designed function call format. ![image/jpeg](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-30a3-main.jpg) ## Model Overview **Qwen3-Coder-30B-A3B-Instruct** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 30.5B in total and 3.3B activated - Number of Layers: 48 - Number of Attention Heads (GQA): 32 for Q and 4 for KV - Number of Experts: 128 - Number of Activated Experts: 8 - Context Length: **262,144 natively**. **NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.** For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-coder/), [GitHub](https://github.com/QwenLM/Qwen3-Coder), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart We advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3_moe' ``` The following contains a code snippet illustrating how to use the model to generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-Coder-30B-A3B-Instruct" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Write a quick sort algorithm." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=65536 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() content = tokenizer.decode(output_ids, skip_special_tokens=True) print("content:", content) ``` **Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.** For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3. ## Agentic Coding Qwen3-Coder excels in tool calling capabilities. 
You can simply define or use any tools as in the following example. ```python # Your tool implementation def square_the_number(num: float) -> float: return num ** 2 # Define Tools tools=[ { "type":"function", "function":{ "name": "square_the_number", "description": "output the square of the number.", "parameters": { "type": "object", "required": ["input_num"], "properties": { 'input_num': { 'type': 'number', 'description': 'input_num is a number that will be squared' } }, } } } ] from openai import OpenAI # Define LLM client = OpenAI( # Use a custom endpoint compatible with OpenAI API base_url='http://localhost:8000/v1', # api_base api_key="EMPTY" ) messages = [{'role': 'user', 'content': 'square the number 1024'}] completion = client.chat.completions.create( messages=messages, model="Qwen3-Coder-30B-A3B-Instruct", max_tokens=65536, tools=tools, ) print(completion.choices[0]) ``` ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - We suggest using `temperature=0.7`, `top_p=0.8`, `top_k=20`, `repetition_penalty=1.05`. 2. **Adequate Output Length**: We recommend using an output length of 65,536 tokens for most queries, which is adequate for instruct models. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3technicalreport, title={Qwen3 Technical Report}, author={Qwen Team}, year={2025}, eprint={2505.09388}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.09388}, } ```
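As a hedged continuation of the tool-calling example above: if the served model returns an OpenAI-style tool call (this depends on the serving framework's function-call parser), you would execute it locally and send the result back for a final answer. The control flow below is a sketch, not part of the original card.

```python
import json

msg = completion.choices[0].message
if msg.tool_calls:
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = square_the_number(args["input_num"])  # run the tool locally

    # Echo the assistant turn and the tool result back to the model.
    messages.append(msg.model_dump(exclude_none=True))
    messages.append({"role": "tool", "tool_call_id": call.id, "content": str(result)})

    final = client.chat.completions.create(
        model="Qwen3-Coder-30B-A3B-Instruct", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
```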
Qwen/Qwen3-4B-Instruct-2507
Qwen
2025-08-06T11:08:47Z
4,751
128
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:2505.09388", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-05T10:58:03Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE pipeline_tag: text-generation --- # Qwen3-4B-Instruct-2507 <a href="https://chat.qwen.ai" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Highlights We introduce the updated version of the **Qwen3-4B non-thinking mode**, named **Qwen3-4B-Instruct-2507**, featuring the following key enhancements: - **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage**. - **Substantial gains** in long-tail knowledge coverage across **multiple languages**. - **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text generation. - **Enhanced capabilities** in **256K long-context understanding**. ![image/jpeg](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-2507/Qwen3-4B-Instruct.001.jpeg) ## Model Overview **Qwen3-4B-Instruct-2507** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 4.0B - Number of Parameters (Non-Embedding): 3.6B - Number of Layers: 36 - Number of Attention Heads (GQA): 32 for Q and 8 for KV - Context Length: **262,144 natively**. **NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.** For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Performance | | GPT-4.1-nano-2025-04-14 | Qwen3-30B-A3B Non-Thinking | Qwen3-4B Non-Thinking | Qwen3-4B-Instruct-2507 | |--- | --- | --- | --- | --- | | **Knowledge** | | | | | MMLU-Pro | 62.8 | 69.1 | 58.0 | **69.6** | | MMLU-Redux | 80.2 | 84.1 | 77.3 | **84.2** | | GPQA | 50.3 | 54.8 | 41.7 | **62.0** | | SuperGPQA | 32.2 | 42.2 | 32.0 | **42.8** | | **Reasoning** | | | | | AIME25 | 22.7 | 21.6 | 19.1 | **47.4** | | HMMT25 | 9.7 | 12.0 | 12.1 | **31.0** | | ZebraLogic | 14.8 | 33.2 | 35.2 | **80.2** | | LiveBench 20241125 | 41.5 | 59.4 | 48.4 | **63.0** | | **Coding** | | | | | LiveCodeBench v6 (25.02-25.05) | 31.5 | 29.0 | 26.4 | **35.1** | | MultiPL-E | 76.3 | 74.6 | 66.6 | **76.8** | | Aider-Polyglot | 9.8 | **24.4** | 13.8 | 12.9 | | **Alignment** | | | | | IFEval | 74.5 | **83.7** | 81.2 | 83.4 | | Arena-Hard v2* | 15.9 | 24.8 | 9.5 | **43.4** | | Creative Writing v3 | 72.7 | 68.1 | 53.6 | **83.5** | | WritingBench | 66.9 | 72.2 | 68.5 | **83.4** | | **Agent** | | | | | BFCL-v3 | 53.0 | 58.6 | 57.6 | **61.9** | | TAU1-Retail | 23.5 | 38.3 | 24.3 | **48.7** | | TAU1-Airline | 14.0 | 18.0 | 16.0 | **32.0** | | TAU2-Retail | - | 31.6 | 28.1 | **40.4** | | TAU2-Airline | - | 18.0 | 12.0 | **24.0** | | TAU2-Telecom | - | **18.4** | 17.5 | 13.2 | | **Multilingualism** | | | | | MultiIF | 60.7 | **70.8** | 61.3 | 69.0 | | MMLU-ProX | 56.2 | **65.1** | 49.6 | 61.6 | | INCLUDE | 58.6 | **67.8** | 53.8 | 60.1 | | PolyMATH | 15.6 | 23.3 | 16.6 | **31.1** | *: For reproducibility, we report the win rates evaluated by GPT-4.1. 
## Quickstart The code of Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following contains a code snippet illustrating how to use the model to generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-4B-Instruct-2507" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=16384 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() content = tokenizer.decode(output_ids, skip_special_tokens=True) print("content:", content) ``` For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint: - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-4B-Instruct-2507 --context-length 262144 ``` - vLLM: ```shell vllm serve Qwen/Qwen3-4B-Instruct-2507 --max-model-len 262144 ``` **Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.** For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-4B-Instruct-2507', # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. 
**Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3technicalreport, title={Qwen3 Technical Report}, author={Qwen Team}, year={2025}, eprint={2505.09388}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.09388}, } ```
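The sampling recommendations in the Best Practices section above translate directly into a Hugging Face `generate()` call; a sketch, reusing `model` and `model_inputs` from the Quickstart:

```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,   # adequate output length per the card
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,              # MinP=0 as recommended (requires transformers >= 4.39)
)
```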
Thireus/GLM-4.5-THIREUS-Q5_K_R4-SPECIAL_SPLIT
Thireus
2025-08-06T11:08:16Z
7
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-02T15:13:31Z
--- license: mit --- ## ⚠️ Cautionary Notice Due to changes in the GLM-4.5 PR the GGUF files of this repository have changed. Any older version of these GGUFs are no longer compatible with the latest version of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`. - **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939). - **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668). **Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** > 🔒 **Do not use these quantized models for production** > 🔬 **Do not use them to assess the quality of the GLM-4.5 models** Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance. --- # GLM-4.5 ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly. - 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite - 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples - 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb - 📂 Browse available quant shards: https://huggingface.co/Thireus/collections *tl;dr: Expand the details section below* <details> ``` cd ~ # Make sure to install all ik_llama.cpp compilation dependencies... apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx # Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases git clone https://github.com/Thireus/ik_llama.cpp cd ik_llama.cpp git pull # Build ik_llama.cpp cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048 cmake --build build --config Release -j16 cd .. # Obtain Thireus' GGUF-Tool-Suite git clone https://github.com/Thireus/GGUF-Tool-Suite # Download model quant mix from recipe file: cd GGUF-Tool-Suite rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model mkdir -p kitchen && cd kitchen ../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe # Launch ik_llama's llama-cli: ulimit -n 99999 # Lifts "too many open files" limitation on Linux ~/ik_llama.cpp/build/bin/llama-cli \ -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \ -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \ -ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \ -ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \ -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \ --main-gpu 0 \ -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n' ``` </details> --- ## ❓ Why does this Tool Suite exist? 1. 
**Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. 2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. 3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! --- ## 📊 How does it compare to other GGUFs? Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw): ![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg) > _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs --- ## 🚀 How do I get started? Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections: 1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases 2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples 3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. 4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. --- ## ✅ Supported Models Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`. --- ## 🤷‍♂️ Will I release pre-cooked GGUF files? No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them. Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`. Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. --- ## 📦 What’s in this repository? - **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. 
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. - **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. - **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. --- ## 💡 Pro Tips You can download the BF16 model version to quantize your own shards: ``` mkdir kitchen echo '.*=bf16' > kitchen/bf16.recipe cd kitchen ../quant_downloader.sh bf16.recipe ``` Enjoy optimized quantization! 🎉
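Since the repository layout above advertises per-shard SHA-256 hashes in `tensors.map`, here is a stdlib sketch of the integrity check a cautious user might run (the expected hash is a placeholder to be copied from `tensors.map`):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a shard through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            h.update(chunk)
    return h.hexdigest()

expected = "<hash copied from tensors.map>"
assert sha256_of("GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf") == expected
```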
lmsys/gpt-oss-120b-bf16
lmsys
2025-08-06T11:07:26Z
2,331
2
null
[ "safetensors", "gpt_oss", "region:us" ]
null
2025-08-05T18:54:32Z
# gpt-oss-120b-bf16 ## Model Introduction This model is the bf16 version converted from [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b). ## Usage You can use this model in [SGLang](https://github.com/sgl-project/sglang) with the following instructions. ### Installation ``` # build from source git clone https://github.com/sgl-project/sglang cd sglang pip3 install pip --upgrade pip3 install -e "python[all]" # ROCm 6.3 pip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/rocm6.3 git clone https://github.com/triton-lang/triton cd python/triton_kernels pip3 install . # hopper pip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu126 pip3 install sgl-kernel==0.3.2 # blackwell cu128 pip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128 pip3 install https://github.com/sgl-project/whl/releases/download/v0.3.2/sgl_kernel-0.3.2+cu128-cp39-abi3-manylinux2014_x86_64.whl # blackwell cu129 pip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu129 pip3 install https://github.com/sgl-project/whl/releases/download/v0.3.2/sgl_kernel-0.3.2-cp39-abi3-manylinux2014_x86_64.whl ``` ### Launch command ``` python3 -m sglang.launch_server --model lmsys/gpt-oss-120b-bf16 --tp 4 ``` ### For more details https://github.com/sgl-project/sglang/issues/8833
lukante/test_summ
lukante
2025-08-06T11:07:18Z
5
0
peft
[ "peft", "safetensors", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "region:us" ]
text-generation
2025-08-06T11:00:28Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.3 library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3 - lora - sft - transformers - trl --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.16.0
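The card above leaves its getting-started section blank; based on the declared base model and LoRA adapter, a standard PEFT loading sketch would look like the following (an untested assumption, since the card documents nothing further):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "lukante/test_summ")  # attach the LoRA adapter

inputs = tokenizer("Summarize: PEFT adapters modify only a small set of weights.",
                   return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```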
Thireus/GLM-4.5-THIREUS-IQ2_K-SPECIAL_SPLIT
Thireus
2025-08-06T11:06:17Z
13
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-02T23:24:45Z
--- license: mit --- ## ⚠️ Cautionary Notice Due to changes in the GLM-4.5 PR the GGUF files of this repository have changed. Any older version of these GGUFs are no longer compatible with the latest version of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`. - **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939). - **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668). **Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** > 🔒 **Do not use these quantized models for production** > 🔬 **Do not use them to assess the quality of the GLM-4.5 models** Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance. --- # GLM-4.5 ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly. - 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite - 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples - 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb - 📂 Browse available quant shards: https://huggingface.co/Thireus/collections *tl;dr: Expand the details section below* <details> ``` cd ~ # Make sure to install all ik_llama.cpp compilation dependencies... apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx # Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases git clone https://github.com/Thireus/ik_llama.cpp cd ik_llama.cpp git pull # Build ik_llama.cpp cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048 cmake --build build --config Release -j16 cd .. # Obtain Thireus' GGUF-Tool-Suite git clone https://github.com/Thireus/GGUF-Tool-Suite # Download model quant mix from recipe file: cd GGUF-Tool-Suite rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model mkdir -p kitchen && cd kitchen ../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe # Launch ik_llama's llama-cli: ulimit -n 99999 # Lifts "too many open files" limitation on Linux ~/ik_llama.cpp/build/bin/llama-cli \ -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \ -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \ -ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \ -ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \ -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \ --main-gpu 0 \ -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n' ``` </details> --- ## ❓ Why does this Tool Suite exist? 1. 
**Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. 2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. 3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! --- ## 📊 How does it compare to other GGUFs? Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw): ![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg) > _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs --- ## 🚀 How do I get started? Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections: 1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases 2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples 3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. 4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. --- ## ✅ Supported Models Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`. --- ## 🤷‍♂️ Will I release pre-cooked GGUF files? No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them. Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`. Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. --- ## 📦 What’s in this repository? - **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. 
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. - **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. - **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. --- ## 💡 Pro Tips You can download the BF16 model version to quantize your own shards: ``` mkdir kitchen echo '.*=bf16' > kitchen/bf16.recipe cd kitchen ../quant_downloader.sh bf16.recipe ``` Enjoy optimized quantization! 🎉
shaharprofeta/dqn-SpaceInvadersNoFrameskip-v4
shaharprofeta
2025-08-06T11:04:21Z
73
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-06T10:37:54Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 738.00 +/- 430.90 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shaharprofeta -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shaharprofeta -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga shaharprofeta ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
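Besides the RL Zoo CLI shown above, the downloaded agent can also be loaded directly with stable-baselines3; the zip path below assumes the folder layout that `load_from_hub` produces:

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Recreate the wrapped env (AtariWrapper + 4-frame stack, per the hyperparameters above).
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip", env=env)
obs = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```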
mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF
mradermacher
2025-08-06T11:00:10Z
125
1
transformers
[ "transformers", "gguf", "vision", "llm", "critical", "sft", "d3.js", "visualization", "en", "base_model:ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B", "base_model:quantized:ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-06T10:17:04Z
--- base_model: ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - vision - llm - critical - sft - d3.js - visualization --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/ZJUVAI/VIS-Shepherd-Qwen2.5-VL-7B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-GGUF **This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-GGUF).** ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own qwuants) | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 
3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/VIS-Shepherd-Qwen2.5-VL-7B-i1-GGUF/resolve/main/VIS-Shepherd-Qwen2.5-VL-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
callgg/gpt-20b-8bit
callgg
2025-08-06T10:58:31Z
7
0
diffusers
[ "diffusers", "safetensors", "gpt_oss", "base_model:openai/gpt-oss-20b", "base_model:quantized:openai/gpt-oss-20b", "license:apache-2.0", "mxfp4", "region:us" ]
null
2025-08-06T09:21:35Z
--- license: apache-2.0 library_name: diffusers base_model: - openai/gpt-oss-20b --- ## gpt-20b - repackage of [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
GaborMadarasz/AstroQA_mamba_V21
GaborMadarasz
2025-08-06T10:57:12Z
4
0
transformers
[ "transformers", "safetensors", "mamba", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-06T10:56:46Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BjarneNPO/finetune_06_08_2025_12_20_24
BjarneNPO
2025-08-06T10:50:52Z
1
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:19964", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-06T10:48:18Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:19964 - loss:MultipleNegativesRankingLoss base_model: FacebookAI/xlm-roberta-base widget: - source_sentence: bei einem kann keine hinterlegt werden sentences: - An einem Tag gab es im August eine Überbelegung, einmal erklärt wie sie diese nachvollziehen kann. - Fehlermeldung weist auf eine fehlende BI hin. Anwenderin stimmt sich dazu mit ab. - 'Ticket --------------------------- Export angepasst - informiert -------------------------- User möchte auch in der übergreifenden Personalliste die Anpassung umgesetzt haben - daher Ticket erneut geöffnet - übergreifender Export ebenfalls angepasst - informiert' - source_sentence: Userin darf erst am 01.02.2024 die Vertragsangebote rausschicken, möchte aber schonmal vermerken, welchen Kindern sie ein Vertragsangebot schicken möchte. sentences: - Das ist noch nicht freigeschaltet. Genauer Zeitpunkt steht auch noch nicht fest. - 'Kind muss manuell angelegt werden und dann neu synchronisiert und Anmeldedaten zusammenführen. Da Userin weiterhin Anmeldedaten nicht zusammenführen kann Userin gebeten uns einen Screenshot aus dem Kita-Navigator zukommen zu lassen. Beide Kinder wurden nun übertragen und befinden sich unter Vetragsangeboten.' - Kann die Kinder auf die Planungsliste nehmen, dann sieht sie diese sowohl in der Planungsliste, als auch in der Liste der Anmeldungen mit dem Symbol in der Anmeldeliste. - source_sentence: Fehlermeldung beim Erstellen der Datei. sentences: - In der Benutzerverwaltung unter Verwaltung. - Bei einer Kollegin musste noch die Stundenanzahl unter Ausbildung und Statistik eingetragen werden. - 'Wurde an den Entwickler weitergegeben. Problem konnte behoben werden, Benutzer wurde informiert.' - source_sentence: möchte wissen wenn ein Kind gestern letzmalig in der Kita war, welches Entlassdatum muss im System eingetragen werden? sentences: - Fehler bereist bekannt, prüft später erneut. - Aktuell wurde uns noch nicht gemeldet, dass wir das Jugendamt freischalten sollen. - Der letzte Betreuungstag muss als Entlassdatum hinterlegt werden, da sonst die BI nicht stimmt. - source_sentence: Login mit dem Authenticator funktioniert nicht mehr, Code ist immer ungültig sentences: - Erneut die Tätigkeit gelöscht und neu Übertragen, die Tätigkeit wurde aber nicht erneut angezeigt - Nachdem die Uhrzeit neu synchronisiert war konnte sie sich wieder einloggen. - Dies entspricht der Vorlage. muss Vorlage anpassen. pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on FacebookAI/xlm-roberta-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the train dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) <!-- at revision e73636d4f797dec63c3081bb6ed5c7b0bb3f2089 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - train <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'}) (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("BjarneNPO/finetune_06_08_2025_12_20_24") # Run inference queries = [ "Login mit dem Authenticator funktioniert nicht mehr, Code ist immer ung\u00fcltig", ] documents = [ 'Nachdem die Uhrzeit neu synchronisiert war konnte sie sich wieder einloggen.', 'Erneut die Tätigkeit gelöscht und neu Übertragen, die Tätigkeit wurde aber nicht erneut angezeigt', 'Dies entspricht der Vorlage. muss Vorlage anpassen.', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 768] [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[0.7032, 0.5662, 0.3571]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: train * Size: 19,964 training samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 27.66 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 22.87 tokens</li><li>max: 151 tokens</li></ul> | * Samples: | query | answer | |:----------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------| | <code>Wie kann man die Jahresurlaubsübersicht exportieren?</code> | <code>über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren</code> | | <code>1. Vertragsabschlüsse werden nicht übertragen <br>2. Kinder kommen nicht von nach <br>3. Absage kann bei Portalstatus nicht erstellt werden.</code> | <code>Ticket <br>Userin gebeten sich an den Support zu wenden, da der Fehler liegt.</code> | | <code>Wird im Anmeldeportal nicht gefunden.</code> | <code>Die Schnittstelle war noch nicht aktiviert und Profil ebenfalls nicht.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### train * Dataset: train * Size: 8,557 evaluation samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 26.49 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 23.16 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | query | answer | 
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Liebes Support Team!<br>In unserer Kst. fiel der EL auf, dass es in der Urlaubsübersicht Unstimmigkeiten gibt. So werden z.B. bei der Kollegin 60 offene Tage angezeigt und im Detail (Jahresübersicht) korrekt alle eingetragenen Tage und nur 2 Tage Rest!<br>Ich freue mich auf Ihre Rückmeldung.<br>Mit besten Grüßen<br>_________________________________________________<br>Leitung Kompetenzteam <br>Geschäftsfeld Kindertageseinrichtungen<br> ()<br> e.V.<br>. 280<br>33605 <br>Telefon: Mo.+Mi. +49 521 9216-129 Di., Do. + Fr. +49 5264 6559100<br>E-Mail: <br>Web: www.awo-owl.de<br>Instagram: www.instagram.com/<br>Facebook: www.facebook.com/<br>Vorsitzende des Präsidiums und des Aufsichtsrates: <br>Vorstand: (Vors.), <br>Amtsgericht VR 1151<br>Diese E-Mail einschließlich evtl. angehängter Dateien enthält vertrauliche und/oder rechtlich geschützte Informationen. Wenn Sie nicht der Adressat sind und diese E-Mail irrtümlich erhalten haben, dürfen Sie weder den Inhalt dieser E-Mail nutzen, noch dürfen Sie die eventuell angehängten Dateien öffnen, kopieren...</code> | <code>Problem ist bekannt und wird im Verlauf des Tages behoben.</code> | | <code>hat im einen Vertrag, aber wurde nicht nach übertragen. 
war wegen fehlender Anbindung auf der Schnittstelle nicht auf der Anmeldeliste.</code> | <code>Kind muss manuell angelegt werden und dann neu synchronisiert und Anmeldedaten zusammenführen.<br>Da Userin weiterhin Anmeldedaten nicht zusammenführen kann Userin gebeten uns einen Screenshot aus dem Kita-Navigator zukommen zu lassen.<br>Beide Kinder wurden nun übertragen und befinden sich unter Vetragsangeboten.</code> | | <code>Wie kann ein Kind aus den zukünftigen Neuaufnahmen gelöscht werden?</code> | <code>Benutzer muss erst die BI und kann dann über den Button Statuswechsel durchführen das ganze Kind löschen.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 8 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 8 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - 
`dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | train loss | |:-------:|:-------:|:-------------:|:----------:| | 0.2564 | 10 | 3.5052 | - | | 0.5128 | 20 | 3.4876 | - | | 0.7692 | 30 | 3.4632 | - | | 1.0 | 39 | - | 2.4519 | | 1.0256 | 40 | 3.3556 | - | | 1.2821 | 50 | 3.0786 | - | | 1.5385 | 60 | 2.8448 | - | | 1.7949 | 70 | 2.694 | - | | 2.0 | 78 | - | 1.7468 | | 2.0513 | 80 | 2.4993 | - | | 2.3077 | 90 | 2.4 | - | | 2.5641 | 100 | 2.3188 | - | | 2.8205 | 110 | 2.2225 | - | | 3.0 | 117 | - | 1.4909 | | 3.0769 | 120 | 2.1009 | - | | 3.3333 | 130 | 2.0479 | - | | 3.5897 | 140 | 1.9971 | - | | 3.8462 | 150 | 1.9289 | - | | 4.0 | 156 | - | 1.3297 | | 4.1026 | 160 | 1.8177 | - | | 4.3590 | 170 | 1.8191 | - | | 4.6154 | 180 | 1.7751 | - | | 4.8718 | 190 | 1.7375 | - | | 5.0 | 195 | - | 1.2254 | | 5.1282 | 200 | 1.6917 | - | | 5.3846 | 210 | 1.6542 | - | | 5.6410 | 220 | 1.6687 | - | | 5.8974 | 230 | 1.637 | - | | 6.0 | 234 | - | 1.2036 | | 6.1538 | 240 | 1.6071 | - | | 6.4103 | 250 | 1.5859 | - | | 6.6667 | 260 | 1.6114 | - | | 6.9231 | 270 | 1.59 | - | | 7.0 | 273 | - | 1.1898 | | 7.1795 | 280 | 1.5662 | - | | 7.4359 | 290 | 1.583 | - | | 7.6923 | 300 | 1.5958 | - | | 7.9487 | 310 | 1.5835 | - | | **8.0** | **312** | **-** | **1.1846** | * The bold row denotes the saved checkpoint. 
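As a rough guide to reproducing a comparable run, here is a minimal sketch wiring the non-default hyperparameters above into the `SentenceTransformerTrainer` API; the dataset contents and output path are placeholders, not the original training data:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("FacebookAI/xlm-roberta-base")

# Placeholder data: substitute the real (query, answer) support-ticket pairs
train_dataset = Dataset.from_dict({
    "query": ["Wie kann man die Jahresurlaubsübersicht exportieren?"],
    "answer": ["über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren"],
})
eval_dataset = train_dataset  # placeholder; use a held-out split in practice

loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity is the default

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-xlmr",  # placeholder path
    num_train_epochs=8,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```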
### Framework Versions - Python: 3.11.9 - Sentence Transformers: 5.0.0 - Transformers: 4.41.2 - PyTorch: 2.3.1+cu121 - Accelerate: 0.31.0 - Datasets: 3.6.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
hugsanaa/CyberAraBERT
hugsanaa
2025-08-06T10:43:49Z
9
0
null
[ "safetensors", "bert", "ar", "base_model:aubmindlab/bert-base-arabertv02-twitter", "base_model:finetune:aubmindlab/bert-base-arabertv02-twitter", "license:apache-2.0", "region:us" ]
null
2025-08-06T06:12:24Z
---
license: apache-2.0
language:
- ar
base_model:
- aubmindlab/bert-base-arabertv02-twitter
---

# CyberAraBERT: AraBERT for Arabic Cyberbullying Detection

# Overview

CyberAraBERT is a specialized Arabic PLM designed for analyzing social media content and detecting the presence of cyberbullying. It works on multiple dialects (Egyptian, Gulf, and Levantine). This model can be used for additional fine-tuning and also for testing.

# Model Details:

- **Base Model:** aubmindlab/bert-base-arabertv02-twitter
- **Language:** Arabic
- **Dataset used for fine-tuning:** [ArCyC](https://data.mendeley.com/datasets/z2dfgrzx47/1)
- **License:** Apache License 2.0

# Model Inference

You can use CyberAraBERT directly on any dataset to detect cyberbullying. To use it, follow these steps:

**1. Install the required libraries**

Ensure the required libraries are installed via pip before using the model:

```bash
pip install arabert transformers torch
```

**2. Load the model and tokenizer**

```python
# Import required modules
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load model and tokenizer
model_name = 'hugsanaa/CyberAraBERT'
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

**3. Predict**

```python
# Example text
text = "بدك توظف و تمشي بالمحاصه علي القليله ما تحط حمار يدق بيانو حط الحمار لحمرنه و موسيقي لبيانو"

# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Make predictions
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = torch.argmax(logits).item()

# Interpret results
labels = ["Cyberbullying", "Not Cyberbullying"]
print(f"Prediction: {labels[predicted_class]}")
```

**Inference using pipeline**

```python
import pandas as pd
from transformers import pipeline
import more_itertools
from tqdm import tqdm_notebook as tqdm

model = 'hugsanaa/CyberAraBERT'
max_len = 128  # set this to your chosen maximum sequence length

# Load the dataset (the data must include a 'text' column)
data = pd.read_csv(your_cyberbullying_data)

# Build the prediction pipeline
pipe = pipeline("sentiment-analysis", model=model, device=0, return_all_scores=True, max_length=max_len, truncation=True)

preds = []
for s in tqdm(more_itertools.chunked(list(data['text']), 32)):  # batching for faster inference
    preds.extend(pipe(s))

# Generate final predictions
data['preds'] = preds
final_pred = []
for prediction in data['preds']:
    final_pred.append(max(prediction, key=lambda x: x['score'])['label'])
data['Final Prediction'] = final_pred
```

# Results

Below are the results obtained from testing CyberAraBERT on test samples from the ArCyC data:

| Class | Precision | Recall | F1-Score | Support |
|--------------------|-----------|--------|----------|---------|
| Not Cyberbullying | 0.9256 | 0.9043 | 0.9148 | 564 |
| Cyberbullying | 0.8453 | 0.8780 | 0.8613 | 336 |
| **Overall / Avg.** | 0.8956 | 0.8944 | 0.8948 | 900 |
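The table above can be reproduced from a set of predictions with scikit-learn; a minimal sketch, assuming `y_true` and `y_pred` hold the gold and predicted label strings from the ArCyC test split (the two lists below are placeholders):

```python
from sklearn.metrics import classification_report

# Placeholders: in practice, collect these from the pipeline example above
y_true = ["Cyberbullying", "Not Cyberbullying"]  # gold labels
y_pred = ["Cyberbullying", "Not Cyberbullying"]  # model outputs

# digits=4 matches the precision of the reported table
print(classification_report(y_true, y_pred, digits=4))
```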
oyvindbs/setfit-minister-mobilize-nb-sbert-base
oyvindbs
2025-08-06T10:43:16Z
1
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "Norway", "Cabinet Ministers", "no", "nb", "arxiv:2209.11055", "base_model:NbAiLab/nb-sbert-base", "base_model:finetune:NbAiLab/nb-sbert-base", "region:us" ]
text-classification
2025-06-30T08:55:04Z
--- tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer - Norway - Cabinet Ministers widget: [] metrics: - accuracy pipeline_tag: text-classification library_name: setfit inference: true base_model: NbAiLab/nb-sbert-base language: - 'no' - nb --- # Purpose: Mobilizing This model has been trained on Facebook posts by Norwegian cabinet ministers of the Solberg governments (2013-2021). It was used in Karlsen, Kolltveit and Solheim (2025). The posts were hand coded specifying different roles and purposes of the posts. Below, we recreate the table 1 from the paper showing the five roles and four purposes. The model included here identifies posts where the purpose is to **Mobilize**. The setfit models that identify the other roles and purposes are available [here](https://huggingface.co/collections/oyvindbs/balancing-acts-the-communicative-roles-of-cabinet-ministers-68624b72c250c3cc1fd3ea14). In the paper, we use one model for each purpose and each role. Each post can accordingly be ascribed to more than one purpose or role. | | Communicative purposes | | | | |------------------------------|-------------------------------|----------------------|-------------------|-----------------| | **Communicative roles** | Informing | Communication | *Mobilizing* | Branding | | Ministry head | | | | | | Cabinet member | | | | | | Party politician | | | | | | Individual politician | | | | | | Private person | | | | | This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification of Norwegian social media posts. It uses [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. It has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 75 tokens - **Number of Classes:** 1 <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> **Language:** * Norwegian (Bokmål) <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("oyvindbs/setfit-minister-mobilize-nb-sbert-base")
# Run inference
preds = model("I loved the spiderman movie!")
```

<!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* -->

<!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Training Details

### Framework Versions
- Python: 3.10.4
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.50.1
- PyTorch: 2.5.1+cu118
- Datasets: 2.19.0
- Tokenizers: 0.21.0

## Citation

```bibtex
@article{KarlsenKolltveitSolheim,
    author = {Karlsen, Rune and Kolltveit, Kristoffer and Solheim, Øyvind Bugge},
    title = {Balancing Acts: The communicative roles of cabinet ministers on social media},
    publisher = {Media and Communication},
    year = {2025}
}
```

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
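Since the study trains one binary model per role and per purpose, the same post can be scored by several models from the collection at once. A minimal sketch (repo names taken from the companion cards in this collection; the example post is invented):

```python
from setfit import SetFitModel

# Companion repos from the same collection (names as published on the Hub)
repos = {
    "mobilizing": "oyvindbs/setfit-minister-mobilize-nb-sbert-base",
    "private_person": "oyvindbs/setfit-minister-private-person-nb-sbert-base",
    "ministry_head": "oyvindbs/setfit-minister-ministry-head-nb-sbert-base",
}

post = "I dag legger vi frem regjeringens nye budsjett."  # example Norwegian post

# Each binary model votes independently, so one post can carry several labels
for name, repo in repos.items():
    model = SetFitModel.from_pretrained(repo)
    pred = model.predict([post])[0]
    print(f"{name}: {pred}")
```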
daskalos-apps/phi4-cybersec-Q4_K_M
daskalos-apps
2025-08-06T10:40:29Z
26
0
llama.cpp
[ "llama.cpp", "gguf", "phi4", "quantized", "cybersecurity", "Q4_K_M", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-06T10:39:51Z
--- license: mit base_model: microsoft/phi-4-mini-instruct tags: - gguf - quantized - phi4 - cybersecurity - Q4_K_M model_type: phi4 quantization: Q4_K_M language: - en library_name: llama.cpp --- # Phi-4 Cybersecurity Chatbot - Q4_K_M GGUF This is a quantized version of Microsoft's Phi-4-mini-instruct, optimized for cybersecurity Q&A applications. ## Model Details - **Base Model**: microsoft/phi-4-mini-instruct - **Quantization**: Q4_K_M (4-bit quantization) - **Format**: GGUF - **Size**: ~2-3GB (reduced from original ~28GB) - **License**: MIT - **Use Case**: Cybersecurity training and best practices chatbot ## Intended Use This model is specifically fine-tuned and optimized for: - Answering cybersecurity questions - Providing security best practices - Explaining phishing, malware, and other threats - Guiding on password security and data protection - Incident response guidance ## Performance - **RAM Required**: 4-6GB - **CPU Compatible**: Yes - **Inference Speed**: 15-20 tokens/second on modern CPUs - **Context Length**: 4096 tokens ## Usage ### With llama.cpp ```bash # Download the model wget https://huggingface.co/YOUR_USERNAME/phi4-cybersec-Q4_K_M/resolve/main/phi4-mini-instruct-Q4_K_M.gguf # Run with llama.cpp ./main -m phi4-mini-instruct-Q4_K_M.gguf -p "What is phishing?" -n 256 ``` ### With Python (llama-cpp-python) ```python from llama_cpp import Llama # Load model llm = Llama( model_path="phi4-mini-instruct-Q4_K_M.gguf", n_ctx=4096, n_threads=8, n_gpu_layers=0 # CPU only ) # Generate response = llm( "What are the best practices for password security?", max_tokens=256, temperature=0.7, stop=["<|end|>", "<|user|>"] ) print(response['choices'][0]['text']) ``` ### With LangChain ```python from langchain.llms import LlamaCpp llm = LlamaCpp( model_path="phi4-mini-instruct-Q4_K_M.gguf", temperature=0.7, max_tokens=256, n_ctx=4096 ) response = llm("How do I identify suspicious emails?") print(response) ``` ## Prompt Format The model uses ChatML format: ``` <|system|> You are a cybersecurity expert assistant. <|end|> <|user|> What is malware? <|end|> <|assistant|> ``` ## Quantization Details This model was quantized using llama.cpp with the following process: 1. Original model: microsoft/phi-4-mini-instruct 2. Conversion: HF → GGUF format (FP16) 3. Quantization: GGUF FP16 → Q4_K_M The Q4_K_M quantization method provides: - 4-bit quantization with K-means - Mixed precision for important weights - ~75% size reduction - Minimal quality loss (<2% on benchmarks) ## Limitations - Optimized for English language - May require fact-checking for critical security advice - Not suitable for generating security policies without review - Should not be sole source for incident response ## Ethical Considerations This model is intended to improve cybersecurity awareness and should be used responsibly: - Always verify critical security advice - Don't use for malicious purposes - Respect privacy and data protection laws - Consider cultural and organizational context ## Citation If you use this model, please cite: ```bibtex @misc{phi4-cybersec-gguf, author = {Your Name}, title = {Phi-4 Cybersecurity Q4_K_M GGUF}, year = {2024}, publisher = {Hugging Face}, url = {https://huggingface.co/YOUR_USERNAME/phi4-cybersec-Q4_K_M} } ``` ## Acknowledgments - Microsoft for the original Phi-4 model - llama.cpp team for quantization tools - The open-source community ## Contact For questions or issues: [tech@daskalos-apps.com]
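The "Quantization Details" section above describes the HF → GGUF FP16 → Q4_K_M pipeline; a hedged sketch of the corresponding llama.cpp commands follows (exact script and binary names vary between llama.cpp releases, and the local paths are placeholders):

```bash
# 1. Convert the HF checkpoint to GGUF (FP16)
python convert_hf_to_gguf.py ./phi-4-mini-instruct \
  --outfile phi4-mini-instruct-f16.gguf --outtype f16

# 2. Quantize the FP16 GGUF down to Q4_K_M
./llama-quantize phi4-mini-instruct-f16.gguf phi4-mini-instruct-Q4_K_M.gguf Q4_K_M
```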
oyvindbs/setfit-minister-private-person-nb-sbert-base
oyvindbs
2025-08-06T10:39:41Z
1
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "Norway", "Cabinet Ministers", "no", "nb", "arxiv:2209.11055", "base_model:NbAiLab/nb-sbert-base", "base_model:finetune:NbAiLab/nb-sbert-base", "region:us" ]
text-classification
2025-06-30T08:51:40Z
--- tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer - Norway - Cabinet Ministers widget: [] metrics: - accuracy pipeline_tag: text-classification library_name: setfit inference: true base_model: NbAiLab/nb-sbert-base language: - 'no' - nb --- # Role: Private Person This model has been trained on Facebook posts by Norwegian cabinet ministers of the Solberg governments (2013-2021). It was used in Karlsen, Kolltveit and Solheim (2025). The posts were hand coded specifying different roles and purposes of the posts. Below, we recreate the table 1 from the paper showing the five roles and four purposes. The model included here identifies posts where the cabinet ministers take the role of **Private Person**. The setfit models that identify the other roles and purposes are available [here](https://huggingface.co/collections/oyvindbs/balancing-acts-the-communicative-roles-of-cabinet-ministers-68624b72c250c3cc1fd3ea14). In the paper, we use one model for each purpose and each role. Each post can accordingly be ascribed to more than one purpose or role. | | Communicative purposes | | | | |------------------------------|-------------------------------|----------------------|-------------------|-----------------| | **Communicative roles** | Informing | Communication | Mobilizing | Branding | | Ministry head | | | | | | Cabinet member | | | | | | Party politician | | | | | | Individual politician | | | | | | *Private person* | | | | | This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification of Norwegian social media posts. It uses [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. It has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 75 tokens - **Number of Classes:** 1 <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> **Language:** * Norwegian (Bokmål) <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("oyvindbs/setfit-minister-private-person-nb-sbert-base")
# Run inference
preds = model("I loved the spiderman movie!")
```

<!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* -->

<!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Training Details

### Framework Versions
- Python: 3.10.4
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.50.1
- PyTorch: 2.5.1+cu118
- Datasets: 2.19.0
- Tokenizers: 0.21.0

## Citation

```bibtex
@article{KarlsenKolltveitSolheim,
    author = {Karlsen, Rune and Kolltveit, Kristoffer and Solheim, Øyvind Bugge},
    title = {Balancing Acts: The communicative roles of cabinet ministers on social media},
    publisher = {Media and Communication},
    year = {2025}
}
```

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
oyvindbs/setfit-minister-ministry-head-nb-sbert-base
oyvindbs
2025-08-06T10:35:31Z
2
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "Norway", "Cabinet Ministers", "no", "nb", "arxiv:2209.11055", "base_model:NbAiLab/nb-sbert-base", "base_model:finetune:NbAiLab/nb-sbert-base", "region:us" ]
text-classification
2025-06-30T08:23:04Z
--- tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer - Norway - Cabinet Ministers widget: [] metrics: - accuracy pipeline_tag: text-classification library_name: setfit inference: true base_model: NbAiLab/nb-sbert-base language: - 'no' - nb --- # Role: Ministry Head This model has been trained on Facebook posts by Norwegian cabinet ministers of the Solberg governments (2013-2021). It was used in Karlsen, Kolltveit and Solheim (2025). The posts were hand coded specifying different roles and purposes of the posts. Below, we recreate the table 1 from the paper showing the five roles and four purposes. The model included here identifies posts where the cabinet ministers take the role of **Ministry Head**. The setfit models that identify the other roles and purposes are available [here](https://huggingface.co/collections/oyvindbs/balancing-acts-the-communicative-roles-of-cabinet-ministers-68624b72c250c3cc1fd3ea14). In the paper, we use one model for each purpose and each role. Each post can accordingly be ascribed to more than one purpose or role. | | Communicative purposes | | | | |------------------------------|-------------------------------|----------------------|-------------------|-----------------| | **Communicative roles** | Informing | Communication | Mobilizing | Branding | | *Ministry head* | | | | | | Cabinet member | | | | | | Party politician | | | | | | Individual politician | | | | | | Private person | | | | | This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification of Norwegian social media posts. It uses [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. It has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 75 tokens - **Number of Classes:** 1 <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> **Language:** * Norwegian (Bokmål) <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("oyvindbs/setfit-minister-ministry-head-nb-sbert-base")
# Run inference
preds = model("I loved the spiderman movie!")
```

<!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* -->

<!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Training Details

### Framework Versions
- Python: 3.10.4
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.50.1
- PyTorch: 2.5.1+cu118
- Datasets: 2.19.0
- Tokenizers: 0.21.0

## Citation

```bibtex
@article{KarlsenKolltveitSolheim,
    author = {Karlsen, Rune and Kolltveit, Kristoffer and Solheim, Øyvind Bugge},
    title = {Balancing Acts: The communicative roles of cabinet ministers on social media},
    publisher = {Media and Communication},
    year = {2025}
}
```

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
ekiprop/SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10
ekiprop
2025-08-06T10:34:55Z
62
0
peft
[ "peft", "safetensors", "base_model:adapter:roberta-base", "lora", "transformers", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2025-08-06T10:20:32Z
--- library_name: peft license: mit base_model: roberta-base tags: - base_model:adapter:roberta-base - lora - transformers metrics: - accuracy model-index: - name: SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1943 - Accuracy: 0.9461 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:| | 0.3966 | 0.0950 | 200 | 0.2108 | 0.9151 | | 0.295 | 0.1900 | 400 | 0.2018 | 0.9186 | | 0.2699 | 0.2850 | 600 | 0.2218 | 0.9197 | | 0.2397 | 0.3800 | 800 | 0.1849 | 0.9323 | | 0.2303 | 0.4751 | 1000 | 0.2436 | 0.9163 | | 0.2147 | 0.5701 | 1200 | 0.2028 | 0.9335 | | 0.2168 | 0.6651 | 1400 | 0.2050 | 0.9312 | | 0.2143 | 0.7601 | 1600 | 0.2165 | 0.9232 | | 0.2102 | 0.8551 | 1800 | 0.2060 | 0.9358 | | 0.2046 | 0.9501 | 2000 | 0.2101 | 0.9358 | | 0.2037 | 1.0451 | 2200 | 0.2132 | 0.9300 | | 0.1815 | 1.1401 | 2400 | 0.1969 | 0.9346 | | 0.1827 | 1.2352 | 2600 | 0.1962 | 0.9358 | | 0.18 | 1.3302 | 2800 | 0.2095 | 0.9392 | | 0.1792 | 1.4252 | 3000 | 0.1996 | 0.9381 | | 0.1792 | 1.5202 | 3200 | 0.2137 | 0.9369 | | 0.1788 | 1.6152 | 3400 | 0.1829 | 0.9335 | | 0.1674 | 1.7102 | 3600 | 0.2564 | 0.9209 | | 0.1709 | 1.8052 | 3800 | 0.2007 | 0.9358 | | 0.1806 | 1.9002 | 4000 | 0.1910 | 0.9392 | | 0.1756 | 1.9952 | 4200 | 0.2068 | 0.9369 | | 0.1632 | 2.0903 | 4400 | 0.1873 | 0.9289 | | 0.1532 | 2.1853 | 4600 | 0.2134 | 0.9404 | | 0.1528 | 2.2803 | 4800 | 0.2206 | 0.9312 | | 0.1485 | 2.3753 | 5000 | 0.1849 | 0.9450 | | 0.1558 | 2.4703 | 5200 | 0.2201 | 0.9381 | | 0.1491 | 2.5653 | 5400 | 0.2253 | 0.9369 | | 0.1616 | 2.6603 | 5600 | 0.1980 | 0.9346 | | 0.1428 | 2.7553 | 5800 | 0.2242 | 0.9381 | | 0.1462 | 2.8504 | 6000 | 0.2036 | 0.9392 | | 0.1474 | 2.9454 | 6200 | 0.2194 | 0.9392 | | 0.1389 | 3.0404 | 6400 | 0.2309 | 0.9335 | | 0.1169 | 3.1354 | 6600 | 0.2286 | 0.9381 | | 0.1316 | 3.2304 | 6800 | 0.1943 | 0.9461 | | 0.1477 | 3.3254 | 7000 | 0.1864 | 0.9427 | | 0.1289 | 3.4204 | 7200 | 0.1957 | 0.9461 | | 0.1263 | 3.5154 | 7400 | 0.2155 | 0.9438 | | 0.1333 | 3.6105 | 7600 | 0.2012 | 0.9450 | | 0.1369 | 3.7055 | 7800 | 0.2090 | 0.9404 | | 0.1342 | 3.8005 | 8000 | 0.2138 | 0.9415 | | 0.1391 | 3.8955 | 8200 | 0.2042 | 0.9438 | | 0.1363 | 3.9905 | 8400 | 0.1972 | 0.9438 | | 0.1216 | 4.0855 | 8600 | 0.2171 | 0.9415 | | 0.1178 | 4.1805 | 8800 | 0.2221 | 0.9415 | | 0.1223 | 4.2755 | 9000 | 0.2137 | 0.9415 | | 0.1247 | 4.3705 | 9200 | 0.2097 | 0.9438 | | 0.1191 | 4.4656 | 9400 | 0.2103 | 0.9438 | | 0.1177 | 4.5606 | 9600 | 0.2106 | 0.9427 | | 0.1207 | 4.6556 | 9800 | 0.2026 | 0.9427 | | 0.1141 | 
4.7506 | 10000 | 0.2091 | 0.9438 | | 0.1223 | 4.8456 | 10200 | 0.2082 | 0.9450 | | 0.127 | 4.9406 | 10400 | 0.2075 | 0.9450 | ### Framework versions - PEFT 0.16.0 - Transformers 4.54.1 - Pytorch 2.5.1+cu121 - Datasets 4.0.0 - Tokenizers 0.21.4
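To try the adapter on SST-2-style sentiment inputs, here is a minimal sketch, assuming the repo hosts a standard PEFT LoRA adapter on top of a two-label `roberta-base` classification head:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/SST-2-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("a gorgeous, witty, seductive movie", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # [negative, positive] probabilities (label order assumed)
```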
moaemilie/ft_llama_3_2-1B_stocks_RAG_model
moaemilie
2025-08-06T10:31:43Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-06T10:31:26Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** moaemilie - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
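The card ships without a usage example; a minimal, hypothetical sketch of loading the checkpoint with plain transformers follows (this assumes the repo contains merged full weights rather than a standalone LoRA adapter, and the prompt is just an illustration):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="moaemilie/ft_llama_3_2-1B_stocks_RAG_model",
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize today's movement in tech stocks."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```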
Thireus/GLM-4.5-THIREUS-Q5_0_R4-SPECIAL_SPLIT
Thireus
2025-08-06T10:28:58Z
4
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-02T15:12:10Z
--- license: mit --- ## ⚠️ Cautionary Notice Due to changes in the GLM-4.5 PR, the GGUF files of this repository have changed. Any older versions of these GGUFs are no longer compatible with the latest version of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files of this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`. - **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939). - **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668). **Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),** > 🔒 **Do not use these quantized models for production** > 🔬 **Do not use them to assess the quality of the GLM-4.5 models** Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance. --- # GLM-4.5 ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly. - 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite - 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples - 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb - 📂 Browse available quant shards: https://huggingface.co/Thireus/collections *tl;dr: Expand the details section below* <details> ``` cd ~ # Make sure to install all ik_llama.cpp compilation dependencies... apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx # Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases git clone https://github.com/Thireus/ik_llama.cpp cd ik_llama.cpp git pull # Build ik_llama.cpp cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048 cmake --build build --config Release -j16 cd .. # Obtain Thireus' GGUF-Tool-Suite git clone https://github.com/Thireus/GGUF-Tool-Suite # Download model quant mix from recipe file: cd GGUF-Tool-Suite rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model mkdir -p kitchen && cd kitchen ../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe # Launch ik_llama's llama-cli: ulimit -n 99999 # Lifts "too many open files" limitation on Linux ~/ik_llama.cpp/build/bin/llama-cli \ -m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \ -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \ -ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \ -ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \ -ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \ --main-gpu 0 \ -p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n' ``` </details> --- ## ❓ Why does this Tool Suite exist? 1.
**Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`. 2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity. 3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results! --- ## 📊 How does it compare to other GGUFs? Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw): ![PPLs Compared With Others](https://github.com/Thireus/GGUF-Tool-Suite/raw/main/ppl_graphs/DeepSeek-R1-0528.svg) > _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._ More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs --- ## 🚀 How do I get started? Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections: 1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile. - Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases 2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe. - Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples 3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`. 4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity. --- ## ✅ Supported Models Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`. --- ## 🤷‍♂️ Will I release pre-cooked GGUF files? No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them. Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`. Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`. --- ## 📦 What’s in this repository? - **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard. 
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc. - **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection. - **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits. --- ## 💡 Pro Tips You can download the BF16 model version to quantize your own shards: ``` mkdir kitchen echo '.*=bf16' > kitchen/bf16.recipe cd kitchen ../quant_downloader.sh bf16.recipe ``` Enjoy optimized quantization! 🎉
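And if you prefer to self-quantize rather than download shards (see the `--custom-q` note above), the recipe `regex=quant` lines map directly onto a single `llama-quantize` invocation. A minimal sketch: the tensor-name regexes, quant types, and file names below are illustrative placeholders, not a tuned GLM-4.5 recipe; run `llama-quantize --help` to confirm the exact flags and type names your build supports.

```
# Sketch only: regexes and quant types are illustrative, not a tuned recipe.
# Recipe lines use the same `regex=quant` syntax as the bf16.recipe example above;
# --custom-q takes the same pairs, comma-separated.
~/ik_llama.cpp/build/bin/llama-quantize \
  --custom-q "token_embd\.weight=q8_0,blk\..*\.attn_.*=q6_K,blk\..*\.ffn_.*=q4_K" \
  GLM-4.5-BF16.gguf GLM-4.5-custom.gguf Q4_K_M
```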
Robo420/gemma-3n-e4b-bratwurst
Robo420
2025-08-06T10:23:41Z
44
0
transformers
[ "transformers", "safetensors", "gemma3n", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "de", "en", "dataset:FreedomIntelligence/sharegpt-deutsch", "license:gemma", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-05T15:37:38Z
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: gemma
language:
- de
- en
datasets:
- FreedomIntelligence/sharegpt-deutsch
---

# Uploaded finetuned model

- **Developed by:** Robo420
- **License:** Gemma
- **Finetuned from model:** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit

This model was finetuned on a German dataset and should be better at staying coherent in German. This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

Notice: I still had to pay for Google Colab, even though the Unsloth finetuning notebook claims it works in the free tier, because it kept running out of memory (OOM) when generating the final weights. GGUFs will follow as soon as I get around to it.
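A minimal usage sketch with the `transformers` chat pipeline; it assumes a recent `transformers` release with Gemma 3n support and enough GPU memory for the E4B checkpoint, and the German prompt is just an example:

```python
import torch
from transformers import pipeline

# Assumes a recent transformers release with Gemma 3n support
pipe = pipeline(
    "image-text-to-text",
    model="Robo420/gemma-3n-e4b-bratwurst",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Text-only chat; Gemma 3n also accepts image content parts in the same format
messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": "Erklär mir kurz, was ein neuronales Netz ist."}],
    }
]
out = pipe(text=messages, max_new_tokens=200)
print(out[0]["generated_text"][-1]["content"])
```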
ACECA/lowMvM_209
ACECA
2025-08-06T10:22:44Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-07-30T15:10:58Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
causalyte/causalyte-hydra
causalyte
2025-08-06T10:20:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-06T10:20:14Z
--- license: apache-2.0 ---
ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed10
ekiprop
2025-08-06T10:18:15Z
57
0
peft
[ "peft", "safetensors", "base_model:adapter:roberta-base", "lora", "transformers", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2025-08-06T10:04:51Z
--- library_name: peft license: mit base_model: roberta-base tags: - base_model:adapter:roberta-base - lora - transformers metrics: - accuracy model-index: - name: SST-2-HEURISTIC-Standard_LoRA-Q_V-seed10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SST-2-HEURISTIC-Standard_LoRA-Q_V-seed10 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1948 - Accuracy: 0.9438 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:| | 0.3836 | 0.0950 | 200 | 0.2142 | 0.9186 | | 0.2937 | 0.1900 | 400 | 0.2044 | 0.9151 | | 0.2704 | 0.2850 | 600 | 0.2178 | 0.9163 | | 0.2516 | 0.3800 | 800 | 0.2107 | 0.9335 | | 0.2471 | 0.4751 | 1000 | 0.2356 | 0.9255 | | 0.2373 | 0.5701 | 1200 | 0.2058 | 0.9232 | | 0.2332 | 0.6651 | 1400 | 0.1986 | 0.9243 | | 0.2282 | 0.7601 | 1600 | 0.2068 | 0.9335 | | 0.225 | 0.8551 | 1800 | 0.2028 | 0.9266 | | 0.2128 | 0.9501 | 2000 | 0.2077 | 0.9335 | | 0.2254 | 1.0451 | 2200 | 0.1908 | 0.9312 | | 0.1968 | 1.1401 | 2400 | 0.1942 | 0.9312 | | 0.2026 | 1.2352 | 2600 | 0.2113 | 0.9346 | | 0.194 | 1.3302 | 2800 | 0.2169 | 0.9312 | | 0.1915 | 1.4252 | 3000 | 0.1912 | 0.9358 | | 0.1891 | 1.5202 | 3200 | 0.2046 | 0.9358 | | 0.1973 | 1.6152 | 3400 | 0.1945 | 0.9312 | | 0.1865 | 1.7102 | 3600 | 0.2448 | 0.9289 | | 0.1911 | 1.8052 | 3800 | 0.2149 | 0.9346 | | 0.2001 | 1.9002 | 4000 | 0.1906 | 0.9335 | | 0.1854 | 1.9952 | 4200 | 0.2196 | 0.9346 | | 0.1818 | 2.0903 | 4400 | 0.1935 | 0.9369 | | 0.1749 | 2.1853 | 4600 | 0.2139 | 0.9335 | | 0.1755 | 2.2803 | 4800 | 0.2274 | 0.9358 | | 0.1728 | 2.3753 | 5000 | 0.2105 | 0.9392 | | 0.1709 | 2.4703 | 5200 | 0.2080 | 0.9404 | | 0.1732 | 2.5653 | 5400 | 0.2141 | 0.9312 | | 0.1832 | 2.6603 | 5600 | 0.2029 | 0.9381 | | 0.1666 | 2.7553 | 5800 | 0.1969 | 0.9358 | | 0.1594 | 2.8504 | 6000 | 0.1955 | 0.9381 | | 0.1718 | 2.9454 | 6200 | 0.1975 | 0.9300 | | 0.1565 | 3.0404 | 6400 | 0.2119 | 0.9300 | | 0.1497 | 3.1354 | 6600 | 0.2099 | 0.9392 | | 0.1642 | 3.2304 | 6800 | 0.2015 | 0.9358 | | 0.1623 | 3.3254 | 7000 | 0.1971 | 0.9404 | | 0.1544 | 3.4204 | 7200 | 0.1960 | 0.9415 | | 0.1539 | 3.5154 | 7400 | 0.2116 | 0.9369 | | 0.158 | 3.6105 | 7600 | 0.1984 | 0.9392 | | 0.1652 | 3.7055 | 7800 | 0.1859 | 0.9415 | | 0.153 | 3.8005 | 8000 | 0.1948 | 0.9438 | | 0.1591 | 3.8955 | 8200 | 0.1991 | 0.9438 | | 0.1533 | 3.9905 | 8400 | 0.2124 | 0.9404 | | 0.1482 | 4.0855 | 8600 | 0.2123 | 0.9415 | | 0.1468 | 4.1805 | 8800 | 0.2126 | 0.9415 | | 0.1467 | 4.2755 | 9000 | 0.2129 | 0.9392 | | 0.1448 | 4.3705 | 9200 | 0.2095 | 0.9438 | | 0.142 | 4.4656 | 9400 | 0.2119 | 0.9381 | | 0.1361 | 4.5606 | 9600 | 0.2172 | 0.9427 | | 0.1491 | 4.6556 | 9800 | 0.2070 | 0.9427 | | 0.1413 | 4.7506 | 10000 | 
0.2060 | 0.9415 | | 0.1575 | 4.8456 | 10200 | 0.2056 | 0.9438 | | 0.1521 | 4.9406 | 10400 | 0.2066 | 0.9427 | ### Framework versions - PEFT 0.16.0 - Transformers 4.54.1 - Pytorch 2.5.1+cu121 - Datasets 4.0.0 - Tokenizers 0.21.4
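The adapter can be used for SST-2-style sentiment classification. A minimal loading sketch, assuming the adapter was saved together with its classification head (PEFT's default `modules_to_save` behavior for sequence-classification tasks) and the standard SST-2 label order (0 = negative, 1 = positive):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the base model with a 2-class head, then attach the LoRA adapter
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed10")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("A touching and well-acted film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Assumption: index 0 = negative, index 1 = positive (standard SST-2 convention)
print(["negative", "positive"][logits.argmax(-1).item()])
```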
csukuangfj/sherpa-onnx-streaming-zipformer-fr-kroko-2025-08-06
csukuangfj
2025-08-06T10:17:56Z
0
0
null
[ "onnx", "region:us" ]
null
2025-08-06T09:39:47Z
See license at https://huggingface.co/Banafo/Kroko-ASR
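For reference, a minimal streaming-decoding sketch with the `sherpa-onnx` Python API; the file names are assumptions about what this repository ships (a streaming zipformer transducer), so adjust them to the actual files, and the zeroed audio buffer is only a placeholder:

```python
import numpy as np
import sherpa_onnx

# Assumed file names for a streaming zipformer transducer; check the repo contents
recognizer = sherpa_onnx.OnlineRecognizer.from_transducer(
    tokens="tokens.txt",
    encoder="encoder.onnx",
    decoder="decoder.onnx",
    joiner="joiner.onnx",
    num_threads=2,
    sample_rate=16000,
    feature_dim=80,
)

# Feed 16 kHz mono float32 samples as they arrive
stream = recognizer.create_stream()
samples = np.zeros(16000, dtype=np.float32)  # placeholder audio chunk
stream.accept_waveform(16000, samples)
while recognizer.is_ready(stream):
    recognizer.decode_stream(stream)
print(recognizer.get_result(stream))
```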
cucucu666/smile-8.6
cucucu666
2025-08-06T10:15:19Z
6
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-Fill-dev", "base_model:adapter:black-forest-labs/FLUX.1-Fill-dev", "license:other", "region:us" ]
text-to-image
2025-08-06T08:20:12Z
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: Lego face, Lego style, smile expression, plain white background
widget:
- text: Lego face, Lego style, smile expression, plain white background
  output:
    url: image_0.png
- text: Lego face, Lego style, smile expression, plain white background
  output:
    url: image_1.png
- text: Lego face, Lego style, smile expression, plain white background
  output:
    url: image_2.png
- text: Lego face, Lego style, smile expression, plain white background
  output:
    url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---

# Flux-Fill DreamBooth LoRA - cucucu666/smile-8.6

<Gallery />

## Model description

These are cucucu666/smile-8.6 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).

LoRA for the text encoder was not enabled.

## Trigger words

You should use `Lego face, Lego style, smile expression, plain white background` to trigger the image generation.

## Download model

[Download the *.safetensors LoRA](https://huggingface.co/cucucu666/smile-8.6/tree/main) in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

Because the base model is FLUX.1-Fill-dev (an inpainting model), the LoRA should be loaded into the Fill pipeline rather than the text-to-image pipeline. The sketch below assumes a recent diffusers release with `FluxFillPipeline` support; the input image and mask file names are placeholders:

```py
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipeline = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("cucucu666/smile-8.6", weight_name="pytorch_lora_weights.safetensors")

init_image = load_image("face.png")  # image to edit (placeholder file name)
mask_image = load_image("mask.png")  # white = region to repaint (placeholder file name)
prompt = "Lego face, Lego style, smile expression, plain white background"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## License

Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
exillarml/dental-assistant-llama3.2-1b
exillarml
2025-08-06T10:14:39Z
29
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-06T09:55:43Z
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** exillarml
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
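A minimal inference sketch with `transformers`; it assumes the tokenizer ships the Llama 3.2 instruct chat template, and the dental question is just an example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "exillarml/dental-assistant-llama3.2-1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What are common causes of tooth sensitivity?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```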
Cydonia01/llama2-medical-finetuned
Cydonia01
2025-08-06T10:13:07Z
4
0
peft
[ "peft", "safetensors", "medical", "text-generation", "en", "dataset:aboonaji/wiki_medical_terms_llam2_format", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
text-generation
2025-08-06T09:36:33Z
--- base_model: NousResearch/Llama-2-7b-chat-hf library_name: peft datasets: - aboonaji/wiki_medical_terms_llam2_format language: - en pipeline_tag: text-generation tags: - medical --- # Model Card for llama2-medical-finetuned <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description This is a finetuned version of LLaMA 2 specialized for medical text understanding and generation tasks. It is designed to assist with medical data processing, clinical note summarization, and healthcare question answering. - **Developed by:** Cydonia01 - **Shared by:** Cydonia01 on Hugging Face - **Model type:** Large Language Model (Transformer-based, quantized with BitsAndBytes 4-bit NF4) - **Language(s) (NLP):** English (primarily medical domain) - **Finetuned from model:** LLaMA 2 (Meta AI, base model: aboonaji/llama2finetune-v2) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://huggingface.co/Cydonia01/llama2-medical-finetuned ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> - Medical text generation and summarization - Clinical decision support tools - Medical Q&A systems ### Downstream Use <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> - Integration into healthcare NLP pipelines - Training further domain-specific models ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> - Not intended for direct diagnostic or treatment decision-making without expert review - Should not be used for generating legally binding medical advice ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> - The model may reflect biases present in training data from medical literature and may generate incorrect or outdated medical information. - Not a substitute for professional medical advice or diagnosis. - Users should verify outputs with medical professionals. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should exercise caution when deploying the model in real-world medical scenarios and combine its outputs with expert validation. ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Cydonia01/llama2-medical-finetuned") model = AutoModelForCausalLM.from_pretrained("Cydonia01/llama2-medical-finetuned") input_text = "Explain the symptoms of diabetes." inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0])) ``` ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Curated dataset of medical texts including wiki medical terms dataset (aboonaji/wiki_medical_terms_llam2_format). ### Training Procedure <!-- This relates heavily to the Technical Specifications. 
Content here should link to that section when it is relevant to the training procedure. --> Finetuned from aboonaji/llama2finetune-v2 base model using 4-bit quantization with BitsAndBytes (NF4), using PEFT LoRA method for parameter-efficient tuning. The training employed causal language modeling. #### Training Hyperparameters - Batch size: 1 (per device) with gradient accumulation of 4 - Max steps: 100 - LoRA config: r=16, alpha=16, dropout=0.1 ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> - **Hardware Type:** NVIDIA Tesla T4 GPU (Google Colab) - **Hours used:** Approximately 0.75 hours (45 minutes) - **Cloud Provider:** Google Colab ## Technical Specifications ### Model Architecture and Objective LLaMA 2 base model finetuned with causal language modeling, quantized to 4-bit precision using NF4 quantization for efficiency, with LoRA PEFT fine-tuning. ### Compute Infrastructure Training was conducted on Google Colab’s cloud environment, utilizing accessible GPU resources optimized for research and experimentation. The setup leverages efficient quantization and parameter-efficient fine-tuning techniques to minimize compute requirements. #### Hardware NVIDIA Tesla T4 GPU with 16 GB VRAM, supporting mixed precision (float16) and 4-bit quantization via BitsAndBytes library. #### Software - PyTorch - Transformers (Hugging Face) - PEFT (LoRA) - BitsAndBytes (4-bit quantization) - Datasets (Hugging Face) ### Framework versions - PEFT 0.13.2 - Transformers (compatible version with PEFT) - PyTorch (compatible with float16 and 4-bit quantization)
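If the quick-start snippet above does not resolve full model weights (the repository ships a PEFT adapter), loading explicitly through PEFT is a reasonable fallback. A minimal sketch, taking the base checkpoint name from the training description above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the LoRA adapter to the base model named in the training procedure
base = AutoModelForCausalLM.from_pretrained("aboonaji/llama2finetune-v2", device_map="auto")
model = PeftModel.from_pretrained(base, "Cydonia01/llama2-medical-finetuned")
tokenizer = AutoTokenizer.from_pretrained("Cydonia01/llama2-medical-finetuned")

inputs = tokenizer("Explain the symptoms of diabetes.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```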