| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-08 18:27:49) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 495 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-08 18:27:48) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
joemagna/aloha_insertion
|
joemagna
| 2025-08-07T16:22:15Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:aloha_smol_insertion",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T16:22:04Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/joemagna/aloha_insertion/2d9a1087b867a872eb3bb12eb535ae9834ce6b4a/README.md?%2Fjoemagna%2Faloha_insertion%2Fresolve%2Fmain%2FREADME.md=&etag=%22863f9f7f03bf01ae8add03f836d1991fae5e1ffb%22
|
UzzyDizzy/a2c-PandaReachDense-v3
|
UzzyDizzy
| 2025-08-07T16:19:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T16:15:08Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/UzzyDizzy/a2c-PandaReachDense-v3/0251d82a61305306c20eaa00ded16976ab26420e/README.md?%2FUzzyDizzy%2Fa2c-PandaReachDense-v3%2Fresolve%2Fmain%2FREADME.md=&etag=%22eea622235c4e3e646e60543124315231ef6a7e61%22
|
minhtien2405/phowhisper-large-all-vi
|
minhtien2405
| 2025-08-07T16:19:10Z | 14 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:vinai/PhoWhisper-large",
"lora",
"transformers",
"vi",
"base_model:vinai/PhoWhisper-large",
"license:bsd-3-clause",
"region:us"
] | null | 2025-06-24T10:10:47Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/minhtien2405/phowhisper-large-all-vi/17b230bc768810109488ff99d8f658162c4c1961/README.md?%2Fminhtien2405%2Fphowhisper-large-all-vi%2Fresolve%2Fmain%2FREADME.md=&etag=%22ddfd5f89ce5680a3f59f73ddc87cd960632f2ecf%22
|
zhengbang0707/cyber_npo_gather_shift_ref_beta0.1
|
zhengbang0707
| 2025-08-07T16:19:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:finetune:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T14:12:54Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/zhengbang0707/cyber_npo_gather_shift_ref_beta0.1/1ac1c6b9414c347fbc201c6ef63c3390c38a8257/README.md?%2Fzhengbang0707%2Fcyber_npo_gather_shift_ref_beta0.1%2Fresolve%2Fmain%2FREADME.md=&etag=%229efb03e58153f912fd7fd52c2e7880020318681f%22
|
mdavidson83/llama-2-7b-chat-hf-8bit
|
mdavidson83
| 2025-08-07T16:18:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-07T16:14:18Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/mdavidson83/llama-2-7b-chat-hf-8bit/5a428936385f0624fafafd16c2b939cb58990573/README.md?%2Fmdavidson83%2Fllama-2-7b-chat-hf-8bit%2Fresolve%2Fmain%2FREADME.md=&etag=%22bc5f30d6632ac0efdc7be2e9095e9e9579af2e33%22
|
LizardAPN/q-FrozenLake-v1-4x4-noSlippery
|
LizardAPN
| 2025-08-07T16:15:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T16:13:23Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/LizardAPN/q-FrozenLake-v1-4x4-noSlippery/40a3a85ab1766aed94c2f4313eb45015f3c2afcb/README.md?%2FLizardAPN%2Fq-FrozenLake-v1-4x4-noSlippery%2Fresolve%2Fmain%2FREADME.md=&etag=%2210ce03cec7c0c4f7239572b98bb6a7302ce6144c%22
|
Ehsanl/me5_large_inst_lora_old_r8
|
Ehsanl
| 2025-08-07T16:11:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T15:09:22Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/Ehsanl/me5_large_inst_lora_old_r8/8210499359f1be62e9b324917b802a73a2096912/README.md?%2FEhsanl%2Fme5_large_inst_lora_old_r8%2Fresolve%2Fmain%2FREADME.md=&etag=%22242b71da0cddb56c41414193d7c9cf146b0097da%22
|
bruhzair/prototype-0.4x287
|
bruhzair
| 2025-08-07T16:02:01Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-03T14:24:28Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/bruhzair/prototype-0.4x287/012b1c40b0184cc6f9df8b4b281ea4b6d150e1d0/README.md?%2Fbruhzair%2Fprototype-0.4x287%2Fresolve%2Fmain%2FREADME.md=&etag=%229c445ec4f87b477ba2ba6e3d5c6ea3ff0966da1f%22
|
ZyrexAN/DataScience-PromptEngineer
|
ZyrexAN
| 2025-08-07T15:58:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T15:57:38Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/ZyrexAN/DataScience-PromptEngineer/842001dfda440d0fe57b090c6c75a4acbde472e0/README.md?%2FZyrexAN%2FDataScience-PromptEngineer%2Fresolve%2Fmain%2FREADME.md=&etag=%22ff7ecca79bf1ca8626e69b37194bf333c7eebd9a%22
|
arindambhattacharya/albert-mlm-pretrained
|
arindambhattacharya
| 2025-08-07T15:55:46Z | 321 | 0 | null |
[
"safetensors",
"albert",
"license:mit",
"region:us"
] | null | 2025-07-31T05:04:24Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/arindambhattacharya/albert-mlm-pretrained/ae3689c60209e6ad8dc91364557888c94d70f955/README.md?%2Farindambhattacharya%2Falbert-mlm-pretrained%2Fresolve%2Fmain%2FREADME.md=&etag=%227be5fc7f47d5db027d120b8024982df93db95b74%22
|
isaacndayi/cti-ner-tpu
|
isaacndayi
| 2025-08-07T15:54:01Z | 0 | 0 | null |
[
"pytorch",
"xlm-roberta",
"en",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"region:us"
] | null | 2025-03-13T00:32:55Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/isaacndayi/cti-ner-tpu/061f998ea1996a2a5bf4de2489440cee01d7e7c3/README.md?%2Fisaacndayi%2Fcti-ner-tpu%2Fresolve%2Fmain%2FREADME.md=&etag=%22ce55fee060dcc60eaebced71408cd6e7d0193707%22
|
AngelSlim/Qwen2.5-VL-32B-Instruct-FP8-Dynamic
|
AngelSlim
| 2025-08-07T15:49:30Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"fp8",
"region:us"
] | null | 2025-08-07T14:43:01Z |
English | [简体中文](README.md)
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/logos/angelslim_logo_light.png">
<img alt="AngelSlim" src="https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/viewer/default/./docs/source/assets/logos/angelslim_logo.png" width=55%>
</picture>
</p>
<h3 align="center">
Dedicated to building a more intuitive, comprehensive, and efficient LLM compression toolkit.
</h3>
<p align="center">
📖 <a href="https://angelslim.readthedocs.io/">Documentation</a>   |   🤗 <a href="https://huggingface.co/AngelSlim">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/AngelSlim">ModelScope</a>   |   💬 <a href="./docs/source/assets/angel_slim_wechat.png">WeChat</a>   |   🫨 <a href="https://discord.com/invite/dHVNeuNdFt">Discord</a>
<br>
</p>
## Table of Contents
- [Latest Updates](#latest-updates)
- [Key Features](#key-features)
- [Supported Models](#supported-models)
- [How to Use](#how-to-use)
- [Install AngelSlim](#install-angelslim)
- [Quick Start](#quick-start)
- [Deployment & Evaluation](#deployment)
- [Benchmark](#benchmark)
- [License](#license)
- [Citation](#citation)
- [Technical Discussion](#technical-discussion)
## 📣Latest Updates
- [25/08/04] We now support quantization for `Hunyuan 0.5B/1.8B/4B/7B` and the multimodal model `Qwen2.5VL 3B/7B/32B/72B`, including `FP8/INT4` algorithms. We also open-source the `Hunyuan 1.8B/4B/7B` series Eagle3 model weights.
- [25/07/04] We now support quantization for `Hunyuan/Qwen2.5/Qwen3/DeepSeek-R1-Distill-Qwen` and other models, including `INT8/FP8/INT4` algorithms. We also open-source the `Qwen3` series Eagle3 model weights.
Coming soon:
- [ ] Support W4A8 quantization for DeepSeek-R1.
- [ ] Release of new algorithm for speculative sampling.
## 🌟Key Features
- **Highly Integrated**: This toolkit integrates mainstream compression algorithms into a unified framework, offering developers one-click access with exceptional ease of use.
- **Continuous Innovation**: Beyond integrating widely-used industry algorithms, we are continuously researching better compression algorithms, which will be gradually open-sourced in the future.
- **Performance-Driven**: We continuously optimize end-to-end performance in model compression workflows and algorithm deployment, such as enabling quantization of models like Qwen3-235B and DeepSeek-R1 on a single GPU.
## 💼Supported Models
### Quantization
Quantization currently supports the following LLMs, including Hunyuan-Dense, Hunyuan-MoE, Qwen3-Dense, Qwen3-MoE, Qwen2.5, DeepSeek-R1 distilled Qwen models, and QwQ:
| Model | FP8-Dynamic | FP8-Static | INT8-Dynamic | INT4-GPTQ | INT4-AWQ |
| --------------------------------------------------------------------------------------------------------------------------- | ----------- | ---------- | ------------ | --------- | -------- |
| [Hunyuan-Dense](https://huggingface.co/collections/tencent/hunyuan-dense-model-6890632cda26b19119c9c5e7) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Hunyuan-MoE](https://huggingface.co/collections/tencent/hunyuan-a13b-685ec38e5b46321e3ea7c4be) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-Dense](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-MoE](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen2.5](https://huggingface.co/collections/AngelSlim/qwen2-25-quant-68652d6cbdf5c0d4b1c4499a) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [DeepSeek-R1-Distill-Qwen](https://huggingface.co/collections/AngelSlim/deepseek-r1-distill-quant-68652f16a9c206b030b05f7f) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [QwQ](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
### Speculative Decoding
#### Eagle3
The Eagle3 weights for the Qwen3 and Hunyuan series models are now available.
| Qwen3 Models | Hunyuan Models |
| ----------|----------|
| ✅ [Qwen3-1.7B](https://huggingface.co/AngelSlim/Qwen3-1.7B_eagle3) |✅ [Hunyuan-1.8B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-1.8B-Instruct_eagle3) |
| ✅ [Qwen3-4B](https://huggingface.co/AngelSlim/Qwen3-4B_eagle3) |✅ [Hunyuan-4B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-4B-Instruct_eagle3) |
| ✅ [Qwen3-8B](https://huggingface.co/AngelSlim/Qwen3-8B_eagle3) |✅ [Hunyuan-7B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-7B-Instruct_eagle3) |
| ✅ [Qwen3-14B](https://huggingface.co/AngelSlim/Qwen3-14B_eagle3) | |
| ✅ [Qwen3-32B](https://huggingface.co/AngelSlim/Qwen3-32B_eagle3) | |
| ✅ [Qwen3-30B-A3B](https://huggingface.co/AngelSlim/Qwen3-a3B_eagle3) | |
## 🛎️How to Use
### Install AngelSlim
We recommend using `pip` to install the latest stable version of `AngelSlim`:
```shell
pip install angelslim
```
Alternatively, you can clone the repository and install from source:
```shell
cd AngelSlim && python setup.py install
```
For more detailed installation instructions, please refer to the [Installation Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/installation.html).
### Quick Start
After installing `AngelSlim`, you can quickly start by running the following script to perform static `FP8` quantization on the `Qwen3-1.7B` model:
* One-click Start
```shell
python3 tools/run.py -c configs/qwen3/fp8_static/qwen3-1_7b_fp8_static.yaml
```
This example will load the HuggingFace model and perform activation value calibration using the `dataset` specified in the config file, saving the quantized model weights.
* Code-based Start
To perform dynamic `FP8` quantization on `Qwen3-1.7B`:
```python
from angelslim.engine import Engine
slim_engine = Engine()
# Prepare model
slim_engine.prepare_model(model_name="Qwen", model_path="Qwen/Qwen3-1.7B",)
# Initialize compressor
slim_engine.prepare_compressor("PTQ", default_method="fp8_dynamic")
# Compress model
slim_engine.run()
# Save compressed model
slim_engine.save("./output")
```
For more details, please refer to the [Quick Start Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/quickstrat.html).
### Deployment and Testing
#### 1. Offline Inference
If you need to load a quantized model via `transformers`, set `deploy_backend: huggingface` in the `global` configuration before quantizing the model, or manually change the `ignored_layers` field to `ignore` in the `config.json` file located in the quantized model output directory.
To test offline inference with a quantized model loaded via `transformers`, run the following command:
```shell
python deploy/offline.py $MODEL_PATH
```
Where `MODEL_PATH` is the path to the quantized model output.
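For reference, a minimal sketch of loading the quantized output directly with `transformers` might look like the following (this is an illustration, not the repository's actual `deploy/offline.py`; the prompt and generation settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./output"  # path to the quantized model produced by AngelSlim (assumption)

# Load the quantized checkpoint like any other Hugging Face model directory.
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto", torch_dtype="auto")

# Run a short generation to sanity-check the quantized weights.
inputs = tokenizer("Briefly explain FP8 quantization.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```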
#### 2. API Service Deployment
After specifying the quantized model path `MODEL_PATH`, you can deploy an OpenAI-compatible API service using the following LLM inference frameworks:
**vLLM**
Use the following script to launch a [vLLM](https://github.com/vllm-project/vllm) server; the recommended version is `vllm>=0.8.5.post1`. For MoE INT8 quantized models, `vllm>=0.9.0` is required.
```shell
bash deploy/run_vllm.sh $MODEL_PATH
```
**SGLang**
Use the following script to launch an [SGLang](https://github.com/sgl-project/sglang) server; the recommended version is `sglang>=0.4.6.post1`.
```shell
bash deploy/run_sglang.sh $MODEL_PATH
```
#### 3. Service Invocation
Invoke requests via [OpenAI's API format](https://platform.openai.com/docs/api-reference/introduction):
```shell
bash deploy/openai.sh $MODEL_PATH
```
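As a rough illustration of what such a request looks like, the sketch below calls the deployed endpoint with the official `openai` Python client; the host, port, and model name are assumptions that depend on how the server was launched:
```python
from openai import OpenAI

# The vLLM/SGLang server exposes an OpenAI-compatible endpoint; adjust base_url and
# model to match your deployment (both values here are assumptions).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="/path/to/quantized-model",  # typically the MODEL_PATH the server was started with
    messages=[{"role": "user", "content": "Summarize FP8 static quantization in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```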
#### 4. Performance Evaluation
Evaluate the performance of the quantized model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness); the recommended version is `lm-eval>=0.4.8`:
```shell
bash deploy/lm_eval.sh $MODEL_PATH
```
For more details, please refer to the [Deployment Documentation](https://angelslim.readthedocs.io/zh-cn/latest/deployment/deploy.html).
## 📈 Benchmark
### (1) Quantization
The performance test results for selected models are shown below. For the complete benchmark, refer to the [Benchmark documentation](https://angelslim.readthedocs.io/zh-cn/latest/performance/quantization/benchmarks.html).
#### Hunyuan Series Models
Benchmark results for the `Hunyuan-Instruct` series models with `FP8`, `INT4-AWQ` and `INT4-GPTQ` quantization algorithms on datasets including `OlympiadBench`, `AIME 2024`, `DROP` and `GPQA-Diamond`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>OlympiadBench</th><th>AIME 2024</th><th>DROP</th><th>GPQA-Diamond</th></tr>
</thead>
<tbody>
<tr><td rowspan="4">Hunyuan-A13B-Instruct</td>
<td>BF16</td><td>82.7</td><td>87.30</td><td>91.1</td><td>71.2</td></tr>
<tr><td>FP8-Static</td><td>83.0</td><td>86.7</td><td>91.1</td><td>-</td></tr>
<tr><td>Int4-GPTQ</td><td>82.7</td><td>86.7</td><td>91.1</td><td>-</td></tr>
<tr><td>Int4-AWQ</td><td>82.6</td><td>85.6</td><td>91.0</td><td>-</td></tr>
</tbody>
<tbody>
<tr><td rowspan="4">Hunyuan-7B-Instruct</td>
<td>BF16</td> <td>76.5</td><td>81.1</td><td>85.9</td><td>60.1</td></tr>
<tr><td>FP8-Static</td><td>76.6</td><td>80.9</td><td>86.0</td><td>60.1</td></tr>
<tr><td>Int4-GPTQ</td><td>76.2</td><td>81.0</td><td>85.7</td><td>60.0</td></tr>
<tr><td>Int4-AWQ</td><td>76.4</td><td>80.9</td><td>85.9</td><td>60.1</td></tr>
</tbody>
<tbody>
<tr><td rowspan="4">Hunyuan-4B-Instruct</td>
<td>BF16</td> <td>73.1</td><td>78.3</td><td>78.2</td><td>61.1</td></tr>
<tr><td>FP8-Static</td><td>73.1</td><td>76.6</td><td>78.3</td><td>60.2</td></tr>
<tr><td>Int4-GPTQ</td><td>72.9</td><td>-</td><td>78.1</td><td>58.1</td></tr>
<tr><td>Int4-AWQ</td><td>72.8</td><td>-</td><td>78.2</td><td>-</td></tr>
</tbody>
<tbody>
<tr><td rowspan="4">Hunyuan-1.8B-Instruct</td>
<td>BF16</td> <td>63.4</td><td>56.7</td><td>76.7</td><td>47.2</td></tr>
<tr><td>FP8-Static</td><td>62.5</td><td>55.2</td><td>75.1</td><td>47.7</td></tr>
<tr><td>Int4-GPTQ</td><td>60.9</td><td>-</td><td>73.0</td><td>44.4</td></tr>
<tr><td>Int4-AWQ</td><td>61.7</td><td>-</td><td>71.7</td><td>43.6</td></tr>
</tbody>
<tbody>
<tr><td rowspan="4">Hunyuan-0.5B-Instruct</td>
<td>BF16</td> <td>29.6</td><td>17.2</td><td>52.8</td><td>23.3</td></tr>
<tr><td>FP8-Static</td><td>29.6</td><td>17.2</td><td>51.6</td><td>22.5</td></tr>
<tr><td>Int4-GPTQ</td><td>26.8</td><td>-</td><td>50.9</td><td>23.3</td></tr>
<tr><td>Int4-AWQ</td><td>26.3</td><td>-</td><td>48.9</td><td>23.3</td></tr>
</tbody>
</table>
#### Qwen3 Series Models
Benchmark results for Qwen3 series models with `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU`, `GSM8K`, and `HUMANEVAL`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th><th>HUMANEVAL</th></tr>
</thead>
<tbody>
<tr><td rowspan="4">Qwen3-0.6B</td><td>BF16</td><td>45.84</td><td>47.21</td><td>42.99</td><td>19.51</td></tr>
<tr><td>FP8-Static</td><td>45.99</td><td>46.87</td><td>38.06</td><td>18.90</td></tr>
<tr><td>FP8-Dynamic</td><td>45.99</td><td>46.93</td><td>38.29</td><td>20.73</td></tr>
<tr><td>INT8-Dynamic</td><td>45.17</td><td>46.95</td><td>41.17</td><td>21.34</td></tr>
<tr><td rowspan="6">Qwen3-8B</td><td>BF16</td><td>79.27</td><td>74.78</td><td>87.79</td><td>63.41</td></tr>
<tr><td>FP8-Static</td><td>78.23</td><td>74.79</td><td>86.96</td><td>62.20</td></tr>
<tr><td>FP8-Dynamic</td><td>78.45</td><td>74.75</td><td>87.64</td><td>62.80</td></tr>
<tr><td>INT8-Dynamic</td><td>78.01</td><td>74.84</td><td>86.96</td><td>67.07</td></tr>
<tr><td>INT4-GPTQ</td><td>77.19</td><td>73.26</td><td>86.43</td><td>62.20</td></tr>
<tr><td>INT4-AWQ</td><td>76.15</td><td>73.59</td><td>86.96</td><td>63.41</td></tr>
<tr><td rowspan="6">Qwen3-14B</td><td>BF16</td><td>83.06</td><td>78.90</td><td>88.40</td><td>55.49</td></tr>
<tr><td>FP8-Static</td><td>82.62</td><td>78.57</td><td>89.46</td><td>57.32</td></tr>
<tr><td>FP8-Dynamic</td><td>82.24</td><td>78.92</td><td>88.32</td><td>52.44</td></tr>
<tr><td>INT8-Dynamic</td><td>81.87</td><td>78.13</td><td>86.28</td><td>56.10</td></tr>
<tr><td>INT4-GPTQ</td><td>81.05</td><td>78.02</td><td>87.34</td><td>57.93</td></tr>
<tr><td>INT4-AWQ</td><td>82.02</td><td>77.68</td><td>84.23</td><td>61.59</td></tr>
<tr><td rowspan="5">Qwen3-32B</td><td>BF16</td><td>86.55</td><td>82.00</td><td>74.53</td><td>37.80</td></tr>
<tr><td>FP8-Static</td><td>86.92</td><td>81.78</td><td>70.20</td><td>39.63</td></tr>
<tr><td>FP8-Dynamic</td><td>86.55</td><td>81.89</td><td>70.43</td><td>38.41</td></tr>
<tr><td>INT4-GPTQ</td><td>86.18</td><td>81.01</td><td>-</td><td>43.29</td></tr>
<tr><td>INT4-AWQ</td><td>86.18</td><td>81.54</td><td>-</td><td>36.59</td></tr>
<tr><td rowspan="4">Qwen3-30B-A3B</td><td>BF16</td><td>83.66</td><td>79.36</td><td>89.99</td><td>31.71</td></tr>
<tr><td>FP8-Static</td><td>83.95</td><td>79.47</td><td>89.01</td><td>31.10</td></tr>
<tr><td>FP8-Dynamic</td><td>84.10</td><td>79.40</td><td>89.16</td><td>32.93</td></tr>
<tr><td>INT8-Dynamic</td><td>83.36</td><td>79.48</td><td>89.16</td><td>34.15</td></tr>
<tr><td rowspan="4">Qwen3-235B-A22B</td><td>BF16</td><td>89.60</td><td>86.28</td><td>85.29</td><td>27.44</td></tr>
<tr><td>FP8-Static</td><td>89.67</td><td>86.19</td><td>86.96</td><td>27.44</td></tr>
<tr><td>FP8-Dynamic</td><td>89.67</td><td>86.18</td><td>85.22</td><td>28.05</td></tr>
<tr><td>INT8-Dynamic</td><td>88.93</td><td>86.20</td><td>86.20</td><td>23.78</td></tr>
<tr><td rowspan="5">QwQ-32B</td><td>BF16</td><td>85.74</td><td>82.03</td><td>73.31</td><td>42.68</td></tr>
<tr><td>FP8-Static</td><td>85.44</td><td>81.91</td><td>75.36</td><td>42.68</td></tr>
<tr><td>FP8-Dynamic</td><td>85.07</td><td>81.93</td><td>75.66</td><td>42.07</td></tr>
<tr><td>INT4-GPTQ</td><td>84.03</td><td>81.26</td><td>68.23</td><td>45.73</td></tr>
<tr><td>INT4-AWQ</td><td>83.58</td><td>81.01</td><td>68.69</td><td>43.29</td></tr>
</tbody>
</table>
#### Qwen2.5VL Series Models
Benchmark results for Qwen2.5VL series models with `BF16`, `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `MMMU_VAL`, `DocVQA_VAL` and `ChartQA_TEST`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>MMMU_VAL</th><th>DocVQA_VAL</th><th>ChartQA_TEST</th></tr>
</thead>
<tbody>
<tr><td rowspan="5">Qwen2.5VL-3B</td><td>BF16</td><td>47.11</td><td>78.57</td><td>80.32</td></tr>
<tr><td>FP8-Static</td><td>47.33</td><td>79.34</td><td>79.68</td></tr>
<tr><td>FP8-Dynamic</td><td>45.99</td><td>46.93</td><td>38.29</td></tr>
<tr><td>INT4-GPTQ</td><td>46.56</td><td>77.20</td><td>78.96</td></tr>
<tr><td>INT4-AWQ</td><td>45.78</td><td>-</td><td>79.60</td></tr>
<tr><td rowspan="5">Qwen2.5VL-7B</td><td>BF16</td><td>45.44</td><td>89.71</td><td>84.64</td></tr>
<tr><td>FP8-Static</td><td>47.00</td><td>89.83</td><td>85.92</td></tr>
<tr><td>FP8-Dynamic</td><td>47.22</td><td>89.80</td><td>88.64</td></tr>
<tr><td>INT4-GPTQ</td><td>46.67</td><td>90.45</td><td>-</td></tr>
<tr><td>INT4-AWQ</td><td>45.67</td><td>89.28</td><td>-</td></tr>
<tr><td rowspan="5">Qwen2.5VL-32B</td><td>BF16</td><td>57.00</td><td>90.03</td><td>-</td></tr>
<tr><td>FP8-Static</td><td>57.00</td><td>89.88</td><td>-</td></tr>
<tr><td>FP8-Dynamic</td><td>56.44</td><td>89.88</td><td>-</td></tr>
<tr><td>INT4-GPTQ</td><td>55.22</td><td>89.80 </td><td>-</td></tr>
<tr><td>INT4-AWQ</td><td>55.22</td><td>90.30</td><td>-</td></tr>
<tr><td rowspan="5">Qwen2.5VL-72B</td><td>BF16</td><td>58.78</td><td>94.39</td><td>85.60</td></tr>
<tr><td>FP8-Static</td><td>57.89</td><td>94.41</td><td>85.84</td></tr>
<tr><td>FP8-Dynamic</td><td>58.67</td><td>94.38</td><td>85.60</td></tr>
<tr><td>INT4-GPTQ</td><td>57.56</td><td>94.46</td><td>86.48</td></tr>
<tr><td>INT4-AWQ</td><td>58.78</td><td>94.19</td><td>87.28</td></tr>
</tbody>
</table>
#### Other Models
Benchmark results for other models with `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU` and `GSM8K`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th></tr>
</thead>
<tbody>
<tr><td rowspan="3">Qwen2.5-1.5B-Instruct</td><td>BF16</td><td>67.01</td><td>60.05</td><td>54.28</td></tr>
<tr><td>FP8-Static</td><td>66.27</td><td>60.23</td><td>-</td></tr>
<tr><td>FP8-Dynamic</td><td>66.79</td><td>60.08</td><td>51.71</td></tr>
<tr><td rowspan="5">Qwen2.5-7B-Instruct</td><td>BF16</td><td>81.20</td><td>74.55</td><td>79.98</td></tr>
<tr><td>FP8-Static</td><td>81.13</td><td>74.03</td><td>79.30</td></tr>
<tr><td>FP8-Dynamic</td><td>80.31</td><td>74.07</td><td>79.00</td></tr>
<tr><td>INT4-GPTQ</td><td>79.05</td><td>73.05</td><td>74.75</td></tr>
<tr><td>INT4-AWQ</td><td>79.35</td><td>73.22</td><td>79.38</td></tr>
<tr><td rowspan="5">Qwen2.5-32B-Instruct</td><td>BF16</td><td>87.30</td><td>83.21</td><td>81.73</td></tr>
<tr><td>FP8-Static</td><td>87.59</td><td>83.08</td><td>81.58</td></tr>
<tr><td>FP8-Dynamic</td><td>87.30</td><td>83.04</td><td>81.58</td></tr>
<tr><td>INT4-GPTQ</td><td>86.70</td><td>82.45</td><td>82.03</td></tr>
<tr><td>INT4-AWQ</td><td>87.00</td><td>82.64</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-7B</td><td>BF16</td><td>53.49</td><td>53.80</td><td>75.74</td></tr>
<tr><td>FP8-Static</td><td>53.57</td><td>54.17</td><td>76.19</td></tr>
<tr><td>FP8-Dynamic</td><td>52.97</td><td>54.13</td><td>74.15</td></tr>
<tr><td>INT4-GPTQ</td><td>51.86</td><td>52.44</td><td>75.89</td></tr>
<tr><td>INT4-AWQ</td><td>53.49</td><td>53.70</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-14B</td><td>BF16</td><td>77.71</td><td>74.28</td><td>85.67</td></tr>
<tr><td>FP8-Static</td><td>77.56</td><td>74.66</td><td>86.73</td></tr>
<tr><td>FP8-Dynamic</td><td>76.82</td><td>74.63</td><td>87.11</td></tr>
<tr><td>INT4-GPTQ</td><td>74.29</td><td>72.37</td><td>84.61</td></tr>
<tr><td>INT4-AWQ</td><td>74.81</td><td>73.00</td><td>86.05</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-32B</td><td>BF16</td><td>84.18</td><td>80.89</td><td>87.41</td></tr>
<tr><td>FP8-Static</td><td>83.43</td><td>80.90</td><td>87.57</td></tr>
<tr><td>FP8-Dynamic</td><td>83.73</td><td>81.10</td><td>86.43</td></tr>
<tr><td>INT4-GPTQ</td><td>84.10</td><td>79.80</td><td>86.73</td></tr>
<tr><td>INT4-AWQ</td><td>82.84</td><td>80.15</td><td>87.19</td></tr>
</tbody>
</table>
### (2) Speculative Decoding
#### Qwen3 Series Models
Benchmark results for Qwen3 series models with the `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:
<table>
<thead>
<tr>
<th> </th><th> </th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=0</strong></td></tr> -->
<tr><td rowspan="6"><strong>T=0</strong></td>
<td>Qwen3-1.7B</td><td>2.05x</td><td>2.81</td><td>2.07x</td><td>2.93</td><td>2.11x</td><td>2.98</td><td>1.93x</td><td>2.69</td><td>2.04x</td><td>2.85</td></tr>
<tr> <td>Qwen3-4B</td><td>2.21x</td><td>3.01</td><td>2.36x</td><td>3.24</td><td>2.42x</td><td>3.13</td><td>2.32x</td><td>2.75</td><td>2.33x</td><td>3.03</td></tr>
<tr><td>Qwen3-8B</td><td>2.63x</td><td>3.65</td><td>2.76x</td><td>3.85</td><td>2.82x</td><td>3.90</td><td>2.62x</td><td>3.48</td><td>2.70x</td><td>3.72</td></tr>
<tr><td>Qwen3-14B</td><td>2.23x</td><td>3.30</td><td>2.53x</td><td>3.74</td><td>2.56x</td><td>3.79</td><td>2.16x</td><td>3.13</td><td>2.37x</td><td>3.49</td></tr>
<tr><td>Qwen3-32B</td><td>2.39x</td><td>2.78</td><td>2.37x</td><td>2.81</td><td>2.47x</td><td>2.92</td><td>2.42x</td><td>2.53</td><td>2.41x</td><td>2.76</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>2.84x</td><td>3.63</td><td>2.27x</td><td>3.09</td><td>2.64x</td><td>3.42</td><td>2.83x</td><td>3.56</td><td>2.64x</td><td>3.42</td></tr>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=1</strong></td></tr> -->
<tr><td rowspan="6"><strong>T=1</strong></td>
<td>Qwen3-1.7B</td><td>1.74x</td><td>2.53</td><td>1.86x</td><td>2.70</td><td>1.82x</td><td>2.69</td><td>1.72x</td><td>2.46</td><td>1.93x</td><td>2.60</td></tr>
<tr><td>Qwen3-4B</td><td>1.93x</td><td>2.60</td><td>2.00x</td><td>2.84</td><td>2.11x</td><td>2.82</td><td>2.34x</td><td>2.50</td><td>1.75x</td><td>2.69</td></tr>
<tr><td>Qwen3-8B</td><td>1.98x</td><td>2.75</td><td>2.25x</td><td>3.11</td><td>2.31x</td><td>3.15</td><td>2.10x</td><td>2.76</td><td>2.90x</td><td>2.94</td></tr>
<tr><td>Qwen3-14B</td><td>1.71x</td><td>2.61</td><td>1.95x</td><td>2.87</td><td>2.04x</td><td>3.08</td><td>1.68x</td><td>2.55</td><td>2.90x</td><td>2.78</td></tr>
<tr><td>Qwen3-32B</td><td>1.62x</td><td>1.91</td><td>1.71x</td><td>2.05</td><td>1.78x</td><td>2.10</td><td>1.80x</td><td>1.95</td><td>1.62x</td><td>2.00</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>1.91x</td><td>2.46</td><td>2.00x</td><td>2.64</td><td>1.90x</td><td>2.53</td><td>1.80x</td><td>2.32</td><td>1.90x</td><td>2.48</td></tr>
</tbody>
</table>
#### Hunyuan Series Models
Benchmark results for Hunyuan series models with the `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:
<table>
<thead>
<tr>
<th> </th><th> </th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=0</strong></td></tr> -->
<tr><td rowspan="3"><strong>T=0</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.97x</td><td>2.90</td><td>2.58x</td><td>3.73</td><td>2.61x</td><td>3.71</td><td>1.71x</td><td>2.43</td><td>2.22x</td><td>3.19</td></tr>
<tr> <td>Hunyuan-4B-Instruct</td><td>1.77x</td><td>2.60</td><td>2.64x</td><td>3.35</td><td>2.14x</td><td>3.17</td><td>1.72x</td><td>2.57</td><td>2.07x</td><td>2.92</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>2.22x</td><td>3.58</td><td>3.59x</td><td>5.47</td><td>2.96x</td><td>4.68</td><td>1.64x</td><td>2.56</td><td>2.60x</td><td>4.07</td></tr>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=1</strong></td></tr> -->
<tr><td rowspan="3"><strong>T=1</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.58x</td><td>2.36</td><td>2.35x</td><td>3.56</td><td>2.23x</td><td>3.38</td><td>1.26x</td><td>1.87</td><td>1.86x</td><td>2.79</td></tr>
<tr><td>Hunyuan-4B-Instruct</td><td>1.36x</td><td>2.05</td><td>1.97x</td><td>2.86</td><td>1.72x</td><td>2.68</td><td>1.14x</td><td>1.76</td><td>1.55x</td><td>2.34</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>1.90x</td><td>3.11</td><td>3.12x</td><td>5.09</td><td>2.74x</td><td>4.34</td><td>1.47x</td><td>2.39</td><td>2.31x</td><td>3.73</td></tr>
</tbody>
</table>
## 📝 License
The code for this project is open-sourced under the [License for AngelSlim](LICENSE).
## 🔗 Citation
```
@software{AngelSlim2025,
title={{AngelSlim}},
author={Tencent AngelSlim Project Contributors},
year={2025},
month={6},
url={https://github.com/Tencent/AngelSlim},
}
```
## 💬 Technical Discussion
* AngelSlim is continuously iterating and new features will be released soon. If you have any questions or suggestions, please open an issue on [GitHub Issues](https://github.com/Tencent/AngelSlim/issues) or join our [WeChat technical discussion group](./docs/source/assets/angel_slim_wechat.png).
|
j0ori/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-melodic_shrewd_armadillo
|
j0ori
| 2025-08-07T15:49:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am melodic_shrewd_armadillo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T15:29:43Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am melodic_shrewd_armadillo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
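In the absence of an official snippet, a minimal sketch along these lines should work for a standard `transformers` text-generation checkpoint (the prompt and generation settings below are assumptions):
```python
from transformers import pipeline

# Hypothetical quick-start: load the checkpoint with the standard text-generation pipeline.
generator = pipeline("text-generation", model="j0ori/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-melodic_shrewd_armadillo")
output = generator([{"role": "user", "content": "Hello, who are you?"}], max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```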
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CronoBJS/diff-apply-GGUF
|
CronoBJS
| 2025-08-07T15:47:36Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:syntheticlab/diff-apply",
"base_model:quantized:syntheticlab/diff-apply",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-07T15:45:22Z |
---
license: apache-2.0
base_model: syntheticlab/diff-apply
tags:
- llama-cpp
- gguf-my-repo
---
# diff-apply
**Model creator:** [syntheticlab](https://huggingface.co/syntheticlab)<br/>
**Original model**: [syntheticlab/diff-apply](https://huggingface.co/syntheticlab/diff-apply)<br/>
**GGUF quantization:** provided by [CronoBJS](https://huggingface.co/CronoBJS) using `llama.cpp`<br/>
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Use with Ollama
```bash
ollama run "hf.co/CronoBJS/diff-apply-GGUF:<quantization>"
```
## Use with LM Studio
```bash
lms load "CronoBJS/diff-apply-GGUF"
```
## Use with llama.cpp CLI
```bash
llama-cli --hf-repo "CronoBJS/diff-apply-GGUF" --hf-file "diff-apply-Q8_0.gguf" -p "The meaning to life and the universe is"
```
## Use with llama.cpp Server
```bash
llama-server --hf-repo "CronoBJS/diff-apply-GGUF" --hf-file "diff-apply-Q8_0.gguf" -c 4096
```
|
pramjati02/bart-base-trained
|
pramjati02
| 2025-08-07T15:44:46Z | 0 | 0 | null |
[
"safetensors",
"bart",
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T15:44:05Z |
---
license: apache-2.0
---
|
chenyitian-shanshu/SIRL-Gurobi
|
chenyitian-shanshu
| 2025-08-07T15:40:46Z | 37 | 0 | null |
[
"safetensors",
"qwen2",
"code",
"math",
"en",
"arxiv:2505.11792",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:mit",
"region:us"
] | null | 2025-05-20T02:02:20Z |
---
license: mit
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-7B-Instruct
tags:
- code
- math
---
<h2 align="center"> Solver-Informed RL: Grounding Large Language Models for Authentic Optimization Modeling</h2>
<p align="center">
<!-- Yitian Chen<sup>*</sup>, Jingfan Xia<sup>*</sup>, Siyu Shao<sup></sup>, Dongdong Ge<sup>†</sup>, Yinyu Ye
<br>
<div align='center'>
<sup>*</sup>Equal Contribution, <sup>†</sup>Corresponding Authors
</div>
<p align="center">
<b>Cardinal Operations, China</b><br>
<b>Shanghai University of Finance and Economics</b><br>
<b>The University of Hong Kong</b><br>
<b>Antai School of Economics and Management, Shanghai Jiao Tong University</b><br>
<b>Department of Management Science and Engineering, Stanford University</b>
</p> -->
<p align="center" style="white-space: nowrap;">
<a href="https://arxiv.org/abs/2505.11792" style="display: inline-block;"><img src='https://img.shields.io/badge/Paper-SIRL-red'></a>
<a href="https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/viewer/default/[https://huggingface.co/chenyitian-shanshu/SIRL](https://huggingface.co/chenyitian-shanshu/SIRL)" style="display: inline-block;"><img src='https://img.shields.io/badge/Model-%F0%9F%A4%97%20HuggingFace-yellow'></a>
<a href="https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/viewer/default/[https://modelscope.cn/models/oneday88/SIRL-7B](https://modelscope.cn/models/oneday88/SIRL-7B)" style="display: inline-block;"><img src="https://img.shields.io/static/v1?label=Model&message=ModeScope&color=green"></a>
<a href="https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/viewer/default/[https://github.com/Cardinal-Operations/SIRL](https://github.com/Cardinal-Operations/SIRL)" style="display: inline-block;"><img src='https://img.shields.io/badge/Github-SIRL-blue'></a>
</p>
</p>
## Overview & Examples
We introduce **SIRL (Solver-Informed Reinforcement Learning)**, a novel reasoning paradigm that integrates solver feedback with reinforcement learning to train large language models (LLMs) for optimization modeling and release the first reasoning model for optimization modeling--- **SIRL-Qwen2.5-7B**.
**SIRL** represents the first application of Reinforcement Learning with Verifiable Reward (RLVR) in the domain of optimization modeling, enabling LLMs to generate accurate mathematical formulations and solver code from natural language descriptions. SIRL leverages solver outputs to iteratively refine model performance, achieving state-of-the-art results on complex optimization tasks. The framework is particularly effective for industrial and operations research problems, where precise mathematical modeling is critical.
Currently, we offer LLM checkpoints that seamlessly integrate with both the Gurobi and COPT optimization solvers.
COPT (Cardinal Optimizer) is a mathematical optimization solver for large-scale optimization problems developed by Cardinal Operations, and it includes high-performance solvers for LP, MIP, NLP and so on.
To explore its full functionalities or to request a trial, please visit the official website: www.shanshu.ai/copt.
## Updates
- **2025.07.28** - [SIRL-Qwen2.5-7B-COPT](https://huggingface.co/chenyitian-shanshu/SIRL/tree/main/Copt), which leverages the COPT optimization solver, is publicly available on Hugging Face and ModelScope.
- **2025.05.20** - [SIRL-Qwen2.5-7B-Gurobi](https://huggingface.co/chenyitian-shanshu/SIRL/tree/main), which leverages the Gurobi optimization solver, is publicly available on Hugging Face and ModelScope.
- **2025.05.17** - SIRL paper published on arXiv: [Solver-Informed Reinforcement Learning for Optimization Modeling](https://arxiv.org/abs/2505.11792).
## Model Release
We release the checkpoints of [SIRL-Qwen2.5-7B-Gurobi](https://huggingface.co/chenyitian-shanshu/SIRL) and [SIRL-Qwen2.5-7B-COPT](https://huggingface.co/chenyitian-shanshu/SIRL/tree/main/Copt) on Hugging Face and ModelScope. More models are coming soon.
| Solver Type | Hugging Face | ModelScope |
|---------------------|---------------- | ---|
| Gurobi | [SIRL-Qwen2.5-7B-Gurobi](https://huggingface.co/chenyitian-shanshu/SIRL-Gurobi) | [SIRL-Qwen2.5-7B-Gurobi](https://modelscope.cn/models/oneday88/SIRL-7B) |
| COPT | [SIRL-Qwen2.5-7B-COPT](https://huggingface.co/chenyitian-shanshu/SIRL-COPT) | [SIRL-Qwen2.5-7B-COPT](https://modelscope.cn/models/oneday88/sirl-qwen2-5-7b-copt) |
## Performance
We evaluated the performance of the proposed SIRL framework on the NL4OPT, MAMO, IndustryOR, OptMATH and OptiBench benchmarks.
Performance is assessed based on pass@1 accuracy (acc). Following the rigorous evaluation protocol proposed by OptMATH, a solution is considered valid if the relative error is less than 1e-6.
The performance metrics for [SIRL](https://huggingface.co/chenyitian-shanshu/SIRL) are as follows. The highest results are highlighted in bold.
| Types | Models | NL4OPT | MAMO Easy | MAMO Complex | IndustryOR | OptMATH | OptiBench | Macro AVG |
|---------------|-------------------|--------|-----------|--------------|------------|---------|-----------|-----------|
| Baseline | GPT-3.5-turbo | 78.0%* | 79.3%* | 33.2%* | 21.0%* | 15.0%* | 47.4%* | 51.4%* |
| | GPT-4 | 89.0%* | 87.3%* | 49.3%* | 33.3%* | 16.6%* | 68.6%* | 57.4%* |
| | Deepseek-V3 | 95.9%* | 88.3%* | 51.1%* | 37.0%* | 32.6%* | **71.6%*** | 62.8%* |
| | DeepSeek-R1 | 82.4% | 77.8% | 49.3% | **45.0%** | 50.3% | 66.4% | 61.9% |
| | OpenAI-O3 | 69.4% | 70.1% | 38.8% | 44.0% | 39.9% | - | 52.4% |
| Agent-based | Chain-of-Experts | 64.2%* | - | - | - | - | - | - |
| | OptiMUS | 78.8%* | 77.0%* | 43.6%* | 31.0%* | 20.2%* | 45.8%* | 49.4%* |
| Offline-learning | ORLM-LLaMA-3-8B | 85.7%* | 82.3%* | 37.4%* | 24.0%* | 2.6%* | 51.1%* | 47.2%* |
| | LLMOpt-Qwen2.5-14B | 80.3%* | 89.5%* | 44.1%* | 29.0%* | 12.5%* | 53.8%* | 51.1%* |
| | OptMATH-Qwen2.5-7B | 94.7%* | 86.5%* | 51.2%* | 20.0%* | 24.4%* | 57.9%* | 55.8%* |
| Gurobi | SIRL-Qwen2.5-7B-Gurobi | **96.3%** | **90.0%** | 62.1% | **33.0%** | 29.0% | 58.0% | 61.4% |
| | SIRL-Qwen2.5-7B-Gurobi(pass@8) | 97.1% | 90.2% | 63.5% | 38.0% | 33.2% | 62.5% | 64.1% |
| COPT | SIRL-Qwen2.5-7B-COPT| 95.1% | 89.3% | **68.2%** | 31.0% | **33.7%** | 58.3% | **62.6%** |
| | SIRL-Qwen2.5-7B-COPT(pass@8) | 97.8% | 90.5% | 75.4% | 35.0% | 45.1% | 61.8% | 67.6% |
*Note:* Values marked with "*" are from original or reproduced papers with the criterion: relative error < 10⁻⁶.
The code to reproduce these results can be found in our [Jupyter Notebook](https://github.com/Cardinal-Operations/SIRL/blob/main/reproduce_gurobi.ipynb).
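As a small illustration of the validity criterion described above (a minimal sketch, not the benchmark's actual scoring code), a check of the following form captures the relative-error rule:
```python
def is_valid_solution(predicted_obj: float, true_obj: float, tol: float = 1e-6) -> bool:
    """Return True when the predicted objective value matches the ground truth
    within the relative-error tolerance described above."""
    # Guarding against a zero ground-truth objective is an assumption, not part of the protocol.
    denominator = max(abs(true_obj), 1e-12)
    return abs(predicted_obj - true_obj) / denominator < tol
```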
## Inference
### Setup
To get started, clone the SIRL repository from GitHub and install the required packages:
```shell
pip install -r requirements.txt
```
Make sure that you have already applied for a license for the solver you plan to use, such as Gurobi or COPT.
We recommend using the prompt template found in [rule_prompt_utils.py](https://github.com/Cardinal-Operations/SIRL/blob/main/rule_prompt_utils.py). Please replace the `{question}` placeholder with any natural-language operations research (OR) question.
### Quick start
Below is a simple example for model inference:
```python
from transformers import AutoTokenizer
from rule_prompt_utils import gurobi_prompt_temp
from utils import extract_code_block, extract_obj
from vllm import SamplingParams, LLM
from langchain.prompts import PromptTemplate
import subprocess
# Load model and parameters
model = LLM(
    "chenyitian-shanshu/SIRL-Gurobi",
    tensor_parallel_size=1,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("chenyitian-shanshu/SIRL-Gurobi")
sampling_params = SamplingParams(
    n=1,
    temperature=0.5,
    top_p=0.9,
    max_tokens=8192,
    repetition_penalty=1.02,
)
# Load question. This is just an example; users can replace it with the datasets they want to test.
question = "An industrial tire company delivers large tires for equipment to remote engineering sites either by cargo planes or ultrawide trucks. Each cargo plane can transport 10 tires per trip and costs $1000. Each ultrawide truck can transport 6 tires per trip and costs $700. The company needs to transport at least 200 tires and has available $22000. Because most remote sites don't have proper airports, the number of plane trips cannot exceed the number of ultrawide truck trips. How many trips of each should be done to minimize the total number of trips?"
# Load prompt template
zeroshot_prompt_system = PromptTemplate.from_template(gurobi_prompt_temp['system'])
zeroshot_prompt_user = PromptTemplate.from_template(gurobi_prompt_temp['user'])
prompt = [
    {"role": "system", "content": zeroshot_prompt_system.format().strip()},
    {"role": "user", "content": zeroshot_prompt_user.format(question=question).strip()},
]
# Generate response
text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
response = model.generate(text, sampling_params)
response_text = response[0].outputs[0].text
# Extract the generated Gurobi code, run it, and parse the optimal objective value
code_snippet = extract_code_block(response_text, 'gurobi')
result = subprocess.run(['python3', '-c', code_snippet], capture_output=True, text=True, timeout=100)
obj = extract_obj(result.stdout, 'gurobi')
print(response_text)
print('optimal value is', obj)
```
## Test Dataset
We evaluate the performance of our trained model on multiple datasets
which include NL4OPT, MAMO, IndustryOR, OptMATH.
Minor errors exist within these testing datasets.
To address this, we rigorously reviewed and corrected the test sets of these benchmarks, updating the questions and corresponding answers to ensure the integrity of our evaluation, with a specific focus on the NL4OPT and IndustryOR dataset. The datasets are available at [https://github.com/Cardinal-Operations/SIRL/tree/main/test_data](https://github.com/Cardinal-Operations/SIRL/tree/main/test_data).
### Data Structure
Each dataset is organized in a `jsonl` file, with each line containing an independent data entry. Each entry includes:
- `en_question`: A string description of the optimization problem.
- `en_answer`: The ground-truth objective function value (float). The answers for infeasible problems are "No Best Solution" or "-99999".
An example from NL4OPT:
```json
{
"en_question": "A company needs to minimize shipping costs across 5 warehouses with varying demands...",
"en_answer": 1250.50,
}
```
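For example, a minimal sketch for iterating over one of these `jsonl` files might look like this (the file name below is an assumption; adjust it to the file you downloaded from the test_data directory):
```python
import json

# Hypothetical path; the corrected test sets live under test_data/ in the SIRL repository.
with open("test_data/NL4OPT.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        # Each entry holds a natural-language problem and its ground-truth objective value.
        print(entry["en_question"][:80], "->", entry["en_answer"])
```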
## Citation
If you find SIRL useful or relevant to your research, please consider citing our paper:
```bibtex
@article{chen2025solver,
title={Solver-Informed RL: Grounding Large Language Models for Authentic Optimization Modeling},
author={Chen, Yitian and Xia, Jingfan and Shao, Siyu and Ge, Dongdong and Ye, Yinyu},
journal={arXiv preprint arXiv:2505.11792},
year={2025}
}
```
|
AdamCodd/tinybert-emotion-balanced
|
AdamCodd
| 2025-08-07T15:38:08Z | 23,974 | 2 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:AdamCodd/emotion-balanced",
"base_model:prajjwal1/bert-tiny",
"base_model:quantized:prajjwal1/bert-tiny",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-06T23:46:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- AdamCodd/emotion-balanced
metrics:
- accuracy
- f1
- recall
- precision
base_model: prajjwal1/bert-tiny
model-index:
- name: AdamCodd/tinybert-emotion-balanced
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.9354
name: Accuracy
- type: loss
value: 0.1809
name: Loss
- type: f1
value: 0.9354946613311768
name: F1
---
# tinybert-emotion
This model is a fine-tuned version of [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the [emotion balanced dataset](https://huggingface.co/datasets/AdamCodd/emotion-balanced).
It achieves the following results on the evaluation set:
- Loss: 0.1809
- Accuracy: 0.9354
## Model description
TinyBERT is 7.5 times smaller and 9.4 times faster on inference compared to its teacher BERT model (while DistilBERT is 40% smaller and 1.6 times faster than BERT).
The model has been trained on 89_754 examples split into train, validation and test. Each label was perfectly balanced in each split.
## Intended uses & limitations
This model is not as accurate as the [distilbert-emotion-balanced](https://huggingface.co/AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced) one because the focus was on speed, which can lead to misinterpretation of complex sentences. Despite this, its performance is quite good and should be more than sufficient for most use cases.
Usage:
```python
from transformers import pipeline
# Create the pipeline
emotion_classifier = pipeline('text-classification', model='AdamCodd/tinybert-emotion-balanced')
# Now you can use the pipeline to classify emotions
result = emotion_classifier("We are delighted that you will be coming to visit us. It will be so nice to have you here.")
print(result)
#[{'label': 'joy', 'score': 0.9895486831665039}]
```
This model faces challenges in accurately categorizing negative sentences, as well as those containing elements of sarcasm or irony. These limitations are largely attributable to TinyBERT's constrained capabilities in semantic understanding. Although the model is generally proficient in emotion detection tasks, it may lack the depth necessary to interpret complex emotional nuances.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 1270
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 10
- weight_decay: 0.01
### Training results
```
              precision    recall  f1-score   support

     sadness     0.9733    0.9245    0.9482      1496
         joy     0.9651    0.8864    0.9240      1496
        love     0.9127    0.9786    0.9445      1496
       anger     0.9479    0.9365    0.9422      1496
        fear     0.9213    0.9004    0.9108      1496
    surprise     0.9016    0.9866    0.9422      1496

    accuracy                         0.9355      8976
   macro avg     0.9370    0.9355    0.9353      8976
weighted avg     0.9370    0.9355    0.9353      8976

test_acc: 0.9354946613311768
test_loss: 0.1809326708316803
```
### Framework versions
- Transformers 4.33.0
- Pytorch lightning 2.0.8
- Tokenizers 0.13.3
If you want to support me, you can [here](https://ko-fi.com/adamcodd).
|
Dinith132/first_lora
|
Dinith132
| 2025-08-07T15:34:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"lora",
"peft",
"gptq",
"causal-lm",
"fine-tuning",
"en",
"dataset:openai/gsm8k",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T15:03:21Z |
---
library_name: transformers
tags:
- lora
- peft
- gptq
- causal-lm
- fine-tuning
license: apache-2.0
datasets:
- openai/gsm8k
language:
- en
base_model:
- TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
---
# LoRA Adapter for TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
This repository contains a **LoRA adapter** fine-tuned on the [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) quantized model. The adapter enables **parameter-efficient fine-tuning (PEFT)** without modifying the original full model weights.
---
## Model Details
### Model Description
This is a LoRA adapter trained to enhance the capabilities of the base GPTQ quantized model [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ), focusing on tasks such as **causal language modeling** and **math reasoning** on datasets like [GSM8K](https://huggingface.co/datasets/openai/gsm8k).
- **Developed by:** Dinith132
- **Model type:** LoRA Adapter for Causal Language Model
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ)
---
## Uses
### Direct Use
This LoRA adapter is intended to be loaded on top of the compatible GPTQ quantized base model for enhanced performance on tasks such as **reasoning**, **question answering**, and **language generation**.
### Downstream Use
Users can further fine-tune this adapter or use it as a plug-in module for their specific tasks requiring **low-resource fine-tuning**.
### Out-of-Scope Use
This adapter should not be used standalone without the compatible base model. Due to the GPTQ quantization, **merging the adapter weights into the base model is not supported**.
---
## Bias, Risks, and Limitations
This model inherits biases present in the base model and training data. It may produce **biased or incorrect outputs** in some cases. Use with caution in sensitive applications.
### Recommendations
- Always **validate the model outputs** for your use case.
- Avoid deploying in **high-stakes scenarios** without human oversight.
- Continuously monitor for **harmful or biased behavior**.
---
## How to Get Started with the Model
### Installation
To use this LoRA adapter, install the required dependencies:
```bash
pip install transformers peft
```
### Loading the Model and Tokenizer
Use the following Python code to load the base model, tokenizer, and LoRA adapter:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Load base GPTQ model
base_model = AutoModelForCausalLM.from_pretrained(
"TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
device_map="auto"
)
# Load tokenizer and LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("Dinith132/first_lora")
model = PeftModel.from_pretrained(base_model, "Dinith132/first_lora")
```
### Example Inference
Here’s an example of performing inference with the model:
```python
prompt = "Alice has 20 quarters. She wants to exchange them for nickels..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=140)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Training Details
### Training Data
The adapter was fine-tuned primarily on the **[GSM8K dataset](https://huggingface.co/datasets/openai/gsm8k)**, a challenging math word problem dataset.
### Training Procedure
- LoRA adapter fine-tuned on top of the GPTQ-quantized base model.
- Used **PEFT (Parameter-Efficient Fine-Tuning)** with a LoRA configuration.
### Training Hyperparameters
| Hyperparameter | Value |
|------------------------|----------------------|
| Learning rate | 2e-4 |
| Batch size | 4 |
| Epochs | 10 |
| Optimizer | paged_adamw_8bit |
| Max sequence length | 512 |
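For orientation, the sketch below shows how these values might be wired into a PEFT training setup. It is illustrative only: the LoRA rank, alpha, and target modules are assumptions, since the card does not report them.
```python
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

# Same GPTQ base model as in the loading example above.
base_model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
    device_map="auto",
)

# Illustrative LoRA settings; the actual rank, alpha, and target modules are not documented here.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Hyperparameters mirroring the table above.
training_args = TrainingArguments(
    output_dir="first_lora",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    num_train_epochs=10,
    optim="paged_adamw_8bit",
)
```
These arguments would then be handed to a trainer (for example TRL's `SFTTrainer`) together with a GSM8K-formatted dataset truncated to a maximum sequence length of 512.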
---
## Evaluation
Evaluation was performed on subsets of the **GSM8K dataset** with metrics like **accuracy on math reasoning problems**.
---
## Citation
If you use this adapter in your research, please cite this repository:
```bibtex
@misc{dinith132_lora_mistral,
author = {Dinith132},
title = {LoRA Adapter for TheBloke/Mistral-7B-Instruct-v0.2-GPTQ},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/Dinith132/first_lora}}
}
```
---
## Model Card Authors
- **Dinith132**
---
## Model Card Contact
For questions, issues, or collaboration, open an issue on the [Hugging Face repo](https://huggingface.co/Dinith132/first_lora) or contact me directly.
|
nabilwalidrafi/medgemma-rafi-31
|
nabilwalidrafi
| 2025-08-07T15:27:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T07:38:36Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-rafi-31
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-rafi-31
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nabilwalidrafi/medgemma-rafi-31", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MB55/teuken7b-advance-classifier
|
MB55
| 2025-08-07T15:21:55Z | 4 | 0 | null |
[
"safetensors",
"base_model:openGPT-X/Teuken-7B-instruct-research-v0.4",
"base_model:finetune:openGPT-X/Teuken-7B-instruct-research-v0.4",
"license:mit",
"region:us"
] | null | 2025-04-28T12:05:24Z |
---
base_model: openGPT-X/Teuken-7B-instruct-research-v0.4
license: mit
---
# Teuken7B QLoRA – Grounding Act Classification
This model is a fine-tuned version of [openGPT-X/Teuken-7B-instruct-research-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) optimized using QLoRA for efficient binary classification of German dialogue utterances into:
- **advance**: Contribution that moves the dialogue forward (e.g. confirmations, follow-ups, elaborations)
- **non_advance**: Other utterances (e.g. vague responses, misunderstandings, irrelevant comments)
---
## Use Cases
- Dialogue system analysis
- Teacher-student interaction classification
- Grounding in institutional advising or classroom discourse
---
## How to Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("openGPT-X/Teuken-7B-instruct-research-v0.4")
model = AutoModelForSequenceClassification.from_pretrained("MB55/teuken7b-advance-classifier")
model.eval()
def predict(text):
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
if "token_type_ids" in inputs:
del inputs["token_type_ids"]
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
predicted_class = logits.argmax(dim=-1).item()
return predicted_class
text = "Ich bin da."
prediction = predict(text)
print(f"Predicted class: {prediction}")
|
motza0025/blockassist-bc-bold_swift_boar_1754578972
|
motza0025
| 2025-08-07T15:19:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold swift boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T15:18:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold swift boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
madmage/Reinforce-PixelCopter-2
|
madmage
| 2025-08-07T15:16:25Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T15:16:22Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.00 +/- 11.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
c-ho/2025-08-07-bll-ner_xlm-roberta-base-ner-hrl_classweights_2x_coumpound_n2-5
|
c-ho
| 2025-08-07T15:14:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-07T15:14:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pepijn223/act_api_test_1_migrated
|
pepijn223
| 2025-08-07T15:14:48Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:unknown",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T15:14:34Z |
---
datasets: unknown
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
MB55/bueble-classifier
|
MB55
| 2025-08-07T15:14:01Z | 0 | 0 | null |
[
"safetensors",
"base_model:flair/bueble-lm-2b",
"base_model:finetune:flair/bueble-lm-2b",
"license:mit",
"region:us"
] | null | 2025-05-04T08:40:40Z |
---
base_model: flair/bueble-lm-2b
license: mit
---
# BübleLM QLoRA – Grounding Act Classification
This model is a fine-tuned version of [flair/bueble-lm-2b](https://huggingface.co/flair/bueble-lm-2b), optimized using QLoRA for efficient binary classification of German dialogue utterances into:
- **advance**: Contribution that moves the dialogue forward (e.g. confirmations, follow-ups, elaborations)
- **non_advance**: Other utterances (e.g. vague responses, misunderstandings, irrelevant comments)
---
## Use Cases
- Dialogue system analysis
- Teacher-student interaction classification
- Grounding in institutional advising or classroom discourse
---
## How to Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MB55/bueble-classifier")
tokenizer = AutoTokenizer.from_pretrained("MB55/bueble-classifier")
inputs = tokenizer("Can you explain it again?", return_tensors="pt")
outputs = model(**inputs)
prediction = outputs.logits.argmax(dim=-1)
print(prediction) # 0 = non_advance, 1 = advance
```
|
benny1o1/qwen3_4b_256_petition_pro
|
benny1o1
| 2025-08-07T15:04:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T15:03:06Z |
---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** benny1o1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
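As a quick, unofficial way to try the uploaded weights (assuming the checkpoint loads directly with the standard `transformers` text-generation pipeline):
```python
from transformers import pipeline

# Assumes the uploaded checkpoint is a full Qwen3 model loadable by transformers.
generator = pipeline(
    "text-generation",
    model="benny1o1/qwen3_4b_256_petition_pro",
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the main request of a petition asking for more public parks."}]
result = generator(messages, max_new_tokens=256, return_full_text=False)
print(result[0]["generated_text"])
```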
|
luckyskg/Qwen3-0.6B-qt
|
luckyskg
| 2025-08-07T15:03:16Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T14:56:47Z |
---
license: apache-2.0
---
|
ACECA/lowMvMax_5
|
ACECA
| 2025-08-07T14:59:28Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-07T05:47:25Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
MB55/rembert-qlora
|
MB55
| 2025-08-07T14:57:31Z | 0 | 0 | null |
[
"safetensors",
"base_model:google/rembert",
"base_model:finetune:google/rembert",
"license:mit",
"region:us"
] | null | 2025-05-03T05:08:49Z |
---
license: mit
model-index:
- name: rembert-qlora
results: []
base_model:
- google/rembert
---
# ReMBERT QLoRA – Grounding Act Classification
This model is a fine-tuned version of [google/rembert](https://huggingface.co/google/rembert), optimized using QLoRA for efficient binary classification of German dialogue utterances into:
- `ADVANCE`: Contribution that moves the dialogue forward (e.g. confirmations, follow-ups, elaborations)
- `NON-ADVANCE`: Other utterances (e.g. vague responses, misunderstandings, irrelevant comments)
## Use Cases
- Dialogue system analysis
- Teacher-student interaction classification
- Grounding in institutional advising or classroom discourse
## How to Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MB55/rembert-qlora")
tokenizer = AutoTokenizer.from_pretrained("MB55/rembert-qlora")
inputs = tokenizer("Also das habe ich jetzt verstanden.", return_tensors="pt")
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=1).item()
```
|
quanxuantruong/tqa-stage1-t5-full-7epoch-400k
|
quanxuantruong
| 2025-08-07T14:54:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:57:10Z |
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: tqa-stage1-t5-full-7epoch-400k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tqa-stage1-t5-full-7epoch-400k
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
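For reference, the values above map onto a standard `Seq2SeqTrainer` setup roughly as in the sketch below. It is illustrative only: the actual dataset and preprocessing are not documented in this card, so a toy dataset stands in as a placeholder.
```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Placeholder data; the actual training set is not documented in this card.
raw = Dataset.from_dict({
    "question": ["What is the capital of France?"],
    "answer": ["Paris"],
})

def preprocess(batch):
    inputs = tokenizer(batch["question"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["answer"], truncation=True, max_length=64)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

# Hyperparameters mirroring the values reported above.
args = Seq2SeqTrainingArguments(
    output_dir="tqa-stage1-t5-full-7epoch-400k",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    num_train_epochs=7,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision; requires a GPU
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```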
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
phospho-app/biodunch-gr00t-pick_ball-350bv
|
phospho-app
| 2025-08-07T14:53:44Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"gr00t_n1_5",
"gr00t",
"robotics",
"dataset:biodunch/pick_ball",
"region:us"
] |
robotics
| 2025-08-07T13:14:55Z |
---
datasets: biodunch/pick_ball
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [biodunch/pick_ball](https://huggingface.co/datasets/biodunch/pick_ball)
- **Wandb run URL**: None
- **Epochs**: 7
- **Batch size**: 20
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Jack-Payne1/s1.1-7B-EM_Finance_full
|
Jack-Payne1
| 2025-08-07T14:50:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T14:37:28Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
F-Fer/smolvla_test_grab_and_place
|
F-Fer
| 2025-08-07T14:48:32Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:F-Fer/test-grab-and-place-1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T14:48:19Z |
---
base_model: lerobot/smolvla_base
datasets: F-Fer/test-grab-and-place-1
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- lerobot
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
ghost613/VC-MJY_Woman_40s-0_preprocessed-2
|
ghost613
| 2025-08-07T14:45:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-07T05:24:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ghost613/VC-MJY_Woman_40s-0_preprocessed-1
|
ghost613
| 2025-08-07T14:43:25Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-07T04:58:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AngelSlim/Qwen2.5-VL-32B-Instruct-AWQ
|
AngelSlim
| 2025-08-07T14:42:51Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"4-bit",
"awq",
"region:us"
] | null | 2025-08-07T13:06:48Z |
English | [简体中文](README.md)
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/logos/angelslim_logo_light.png">
<img alt="AngelSlim" src="./docs/source/assets/logos/angelslim_logo.png" width=55%>
</picture>
</p>
<h3 align="center">
Dedicated to building a more intuitive, comprehensive, and efficient LLMs compression toolkit.
</h3>
<p align="center">
📖 <a href="https://angelslim.readthedocs.io/">Documentation</a>   |   🤗 <a href="https://huggingface.co/AngelSlim">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/AngelSlim">ModelScope</a>   |   💬 <a href="https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/viewer/default/./docs/source/assets/angel_slim_wechat.png">WeChat</a> |   🫨 <a href="https://discord.com/invite/dHVNeuNdFt">Discord</a>
<br>
</p>
## Table of Contents
- [Latest Updates](#latest-updates)
- [Key Features](#key-features)
- [Supported Models](#supported-models)
- [How to Use](#how-to-use)
- [Install AngelSlim](#install-angelslim)
- [Quick Start](#quick-start)
- [Deployment & Evaluation](#deployment)
- [Benchmark](#benchmark)
- [License](#license)
- [Citation](#citation)
- [Technical Discussion](#technical-discussion)
## 📣Latest Updates
- [25/08/04] We now support quantization for `Hunyuan 0.5B/1.8B/4B/7B` and the multimodal models `Qwen2.5VL 3B/7B/32B/72B`, including `FP8/INT4` algorithms. We also open-source the `Hunyuan 1.8B/4B/7B` series Eagle3 model weights.
- [25/07/04] We now support quantization for `Hunyuan/Qwen2.5/Qwen3/DeepSeek-R1-Distill-Qwen` and other models, including `INT8/FP8/INT4` algorithms. We also open-source the `Qwen3` series Eagle3 model weights.
Coming soon:
- [ ] Support W4A8 quantization for DeepSeek-R1.
- [ ] Release of new algorithm for speculative sampling.
## 🌟Key Features
- **Highly Integrated**: This toolkit integrates mainstream compression algorithms into a unified framework, offering developers one-click access with exceptional ease of use.
- **Continuous Innovation**: Beyond integrating widely-used industry algorithms, we are continuously researching better compression algorithms, which will be gradually open-sourced in the future.
- **Performance-Driven**: We continuously optimize end-to-end performance in model compression workflows and algorithm deployment, such as enabling quantization of models like Qwen3-235B and DeepSeek-R1 on a single GPU.
## 💼Supported Models
### Quantization
Currently supports the following LLMs, including Hunyuan-Dense, Hunyuan-MoE, Qwen3-Dense, Qwen3-MoE, Qwen2.5, DeepSeek-R1 distilled Qwen models, and QwQ:
| Model | FP8-Dynamic | FP8-Static | INT8-Dynamic | INT4-GPTQ | INT4-AWQ |
| --------------------------------------------------------------------------------------------------------------------------- | ----------- | ---------- | ------------ | --------- | -------- |
| [Hunyuan-Dense](https://huggingface.co/collections/tencent/hunyuan-dense-model-6890632cda26b19119c9c5e7) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Hunyuan-MoE](https://huggingface.co/collections/tencent/hunyuan-a13b-685ec38e5b46321e3ea7c4be) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-Dense](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-MoE](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen2.5](https://huggingface.co/collections/AngelSlim/qwen2-25-quant-68652d6cbdf5c0d4b1c4499a) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [DeepSeek-R1-Distill-Qwen](https://huggingface.co/collections/AngelSlim/deepseek-r1-distill-quant-68652f16a9c206b030b05f7f) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [QwQ](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
### Speculative Decoding
#### Eagle3
Eagle3 weights for the Qwen3 and Hunyuan series models are now available.
| Qwen3 Models | Hunyuan Models |
| ----------|----------|
| ✅ [Qwen3-1.7B](https://huggingface.co/AngelSlim/Qwen3-1.7B_eagle3) |✅ [Hunyuan-1.8B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-1.8B-Instruct_eagle3) |
| ✅ [Qwen3-4B](https://huggingface.co/AngelSlim/Qwen3-4B_eagle3) |✅ [Hunyuan-4B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-4B-Instruct_eagle3) |
| ✅ [Qwen3-8B](https://huggingface.co/AngelSlim/Qwen3-8B_eagle3) |✅ [Hunyuan-7B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-7B-Instruct_eagle3) |
| ✅ [Qwen3-14B](https://huggingface.co/AngelSlim/Qwen3-14B_eagle3) |
| ✅ [Qwen3-32B](https://huggingface.co/AngelSlim/Qwen3-32B_eagle3) |
| ✅ [Qwen3-30B-A3B](https://huggingface.co/AngelSlim/Qwen3-a3B_eagle3) |
## 🛎️How to Use
### Install AngelSlim
We recommend using `pip` to install the latest stable version of `AngelSlim`:
```shell
pip install angelslim
```
Alternatively, you can clone the repository and install from source:
```shell
cd AngelSlim && python setup.py install
```
For more detailed installation instructions, please refer to the [Installation Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/installation.html).
### Quick Start
After installing `AngelSlim`, you can quickly start by running the following script to perform static `FP8` quantization on the `Qwen3-1.7B` model:
* One-click Start
```shell
python3 tools/run.py -c configs/qwen3/fp8_static/qwen3-1_7b_fp8_static.yaml
```
This example will load the HuggingFace model and perform activation value calibration using the `dataset` specified in the config file, saving the quantized model weights.
* Code-based Start
To perform dynamic `FP8` quantization on `Qwen3-1.7B`:
```python
from angelslim.engine import Engine
slim_engine = Engine()
# Prepare model
slim_engine.prepare_model(model_name="Qwen", model_path="Qwen/Qwen3-1.7B",)
# Initialize compressor
slim_engine.prepare_compressor("PTQ", default_method="fp8_dynamic")
# Compress model
slim_engine.run()
# Save compressed model
slim_engine.save("./output")
```
For more details, please refer to the [Quick Start Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/quickstrat.html).
### Deployment and Testing
#### 1. Offline Inference
If you need to load a quantized model via `transformers`, set `deploy_backend: huggingface` in the `global` configuration before quantizing the model, or manually change the `ignored_layers` field in the `config.json` file of the quantized model output directory to `ignore`.
To test offline inference with a quantized model loaded via `transformers`, run the following command:
```shell
python deploy/offline.py $MODEL_PATH
```
Where `MODEL_PATH` is the path to the quantized model output.
#### 2. API Service Deployment
After specifying the quantized model path `MODEL_PATH`, you can deploy an OpenAI-compatible API service using one of the following LLM inference frameworks:
**vLLM**
Use the following script to launch a [vLLM](https://github.com/vllm-project/vllm) server; recommended version `vllm>=0.8.5.post1`. For MoE INT8 quantized models, `vllm>=0.9.0` is required.
```shell
bash deploy/run_vllm.sh $MODEL_PATH
```
**SGLang**
Use the following script to launch an [SGLang](https://github.com/sgl-project/sglang) server; recommended version `sglang>=0.4.6.post1`.
```shell
bash deploy/run_sglang.sh $MODEL_PATH
```
#### 3. Service Invocation
Invoke requests via [OpenAI's API format](https://platform.openai.com/docs/api-reference/introduction):
```shell
bash deploy/openai.sh $MODEL_PATH
```
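For reference, a minimal Python client call against the launched server could look like the sketch below; the port and served model name are assumptions and depend on how the deploy script starts the server.
```python
from openai import OpenAI

# Assumes the vLLM/SGLang server launched above listens on localhost:8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="AngelSlim/Qwen2.5-VL-32B-Instruct-AWQ",  # replace with the served model name
    messages=[{"role": "user", "content": "Describe AWQ quantization in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```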
#### 4. Performance Evaluation
Evaluate the performance of the quantized model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness); recommended version `lm-eval>=0.4.8`:
```shell
bash deploy/lm_eval.sh $MODEL_PATH
```
For more details, please refer to the [Deployment Documentation](https://angelslim.readthedocs.io/zh-cn/latest/deployment/deploy.html).
## 📈 Benchmark
### (1) Quantization
The performance test results for selected models are shown below. For the complete benchmark, refer to the [Benchmark documentation](https://angelslim.readthedocs.io/zh-cn/latest/performance/quantization/benchmarks.html)
#### Hunyuan Series Models
Benchmark results for the `Hunyuan-Instruct` models with `FP8`, `INT4-AWQ` and `INT4-GPTQ` quantization algorithms on datasets including `OlympiadBench`, `AIME 2024`, `DROP` and `GPQA-Diamond`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>OlympiadBench</th><th>AIME 2024</th><th>DROP</th><th>GPQA-Diamond</th></tr>
</thead>
<tbody>
<tr><td rowspan="4">Hunyuan-A13B-Instruct</td>
<td>BF16</td><td>82.7</td><td>87.30</td><td>91.1</td><td>71.2</td></tr>
<tr><td>FP8-Static</td><td>83.0</td><td>86.7</td><td>91.1</td><td>-</td></tr>
<tr><td>Int4-GPTQ</td><td>82.7</td><td>86.7</td><td>91.1</td><td>-</td></tr>
<tr><td>Int4-AWQ</td><td>82.6</td><td>85.6</td><td>91.0</td><td>-</td></tr>
</tbody>
<tbody>
<tr><td rowspan="4">Hunyuan-7B-Instruct</td>
<td>BF16</td> <td>76.5</td><td>81.1</td><td>85.9</td><td>60.1</td></tr>
<tr><td>FP8-Static</td><td>76.6</td><td>80.9</td><td>86.0</td><td>60.1</td></tr>
<tr><td>Int4-GPTQ</td><td>76.2</td><td>81.0</td><td>85.7</td><td>60.0</td></tr>
<tr><td>Int4-AWQ</td><td>76.4</td><td>80.9</td><td>85.9</td><td>60.1</td></tr>
</tbody>
<tbody>
<tr><td rowspan="4">Hunyuan-4B-Instruct</td>
<td>BF16</td> <td>73.1</td><td>78.3</td><td>78.2</td><td>61.1</td></tr>
<tr><td>FP8-Static</td><td>73.1</td><td>76.6</td><td>78.3</td><td>60.2</td></tr>
<tr><td>Int4-GPTQ</td><td>72.9</td><td>-</td><td>78.1</td><td>58.1</td></tr>
<tr><td>Int4-AWQ</td><td>72.8</td><td>-</td><td>78.2</td><td>-</td></tr>
</tbody>
<tbody>
<tr><td rowspan="4">Hunyuan-1.8B-Instruct</td>
<td>BF16</td> <td>63.4</td><td>56.7</td><td>76.7</td><td>47.2</td></tr>
<tr><td>FP8-Static</td><td>62.5</td><td>55.2</td><td>75.1</td><td>47.7</td></tr>
<tr><td>Int4-GPTQ</td><td>60.9</td><td>-</td><td>73.0</td><td>44.4</td></tr>
<tr><td>Int4-AWQ</td><td>61.7</td><td>-</td><td>71.7</td><td>43.6</td></tr>
</tbody>
<tbody>
<tr><td rowspan="4">Hunyuan-0.5B-Instruct</td>
<td>BF16</td> <td>29.6</td><td>17.2</td><td>52.8</td><td>23.3</td></tr>
<tr><td>FP8-Static</td><td>29.6</td><td>17.2</td><td>51.6</td><td>22.5</td></tr>
<tr><td>Int4-GPTQ</td><td>26.8</td><td>-</td><td>50.9</td><td>23.3</td></tr>
<tr><td>Int4-AWQ</td><td>26.3</td><td>-</td><td>48.9</td><td>23.3</td></tr>
</tbody>
</table>
#### Qwen3 Series Models
Benchmark results for Qwen3 series models with `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU`, `GSM8K`, and `HUMANEVAL`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th><th>HUMANEVAL</th></tr>
</thead>
<tbody>
<tr><td rowspan="4">Qwen3-0.6B</td><td>BF16</td><td>45.84</td><td>47.21</td><td>42.99</td><td>19.51</td></tr>
<tr><td>FP8-Static</td><td>45.99</td><td>46.87</td><td>38.06</td><td>18.90</td></tr>
<tr><td>FP8-Dynamic</td><td>45.99</td><td>46.93</td><td>38.29</td><td>20.73</td></tr>
<tr><td>INT8-Dynamic</td><td>45.17</td><td>46.95</td><td>41.17</td><td>21.34</td></tr>
<tr><td rowspan="6">Qwen3-8B</td><td>BF16</td><td>79.27</td><td>74.78</td><td>87.79</td><td>63.41</td></tr>
<tr><td>FP8-Static</td><td>78.23</td><td>74.79</td><td>86.96</td><td>62.20</td></tr>
<tr><td>FP8-Dynamic</td><td>78.45</td><td>74.75</td><td>87.64</td><td>62.80</td></tr>
<tr><td>INT8-Dynamic</td><td>78.01</td><td>74.84</td><td>86.96</td><td>67.07</td></tr>
<tr><td>INT4-GPTQ</td><td>77.19</td><td>73.26</td><td>86.43</td><td>62.20</td></tr>
<tr><td>INT4-AWQ</td><td>76.15</td><td>73.59</td><td>86.96</td><td>63.41</td></tr>
<tr><td rowspan="6">Qwen3-14B</td><td>BF16</td><td>83.06</td><td>78.90</td><td>88.40</td><td>55.49</td></tr>
<tr><td>FP8-Static</td><td>82.62</td><td>78.57</td><td>89.46</td><td>57.32</td></tr>
<tr><td>FP8-Dynamic</td><td>82.24</td><td>78.92</td><td>88.32</td><td>52.44</td></tr>
<tr><td>INT8-Dynamic</td><td>81.87</td><td>78.13</td><td>86.28</td><td>56.10</td></tr>
<tr><td>INT4-GPTQ</td><td>81.05</td><td>78.02</td><td>87.34</td><td>57.93</td></tr>
<tr><td>INT4-AWQ</td><td>82.02</td><td>77.68</td><td>84.23</td><td>61.59</td></tr>
<tr><td rowspan="5">Qwen3-32B</td><td>BF16</td><td>86.55</td><td>82.00</td><td>74.53</td><td>37.80</td></tr>
<tr><td>FP8-Static</td><td>86.92</td><td>81.78</td><td>70.20</td><td>39.63</td></tr>
<tr><td>FP8-Dynamic</td><td>86.55</td><td>81.89</td><td>70.43</td><td>38.41</td></tr>
<tr><td>INT4-GPTQ</td><td>86.18</td><td>81.01</td><td>-</td><td>43.29</td></tr>
<tr><td>INT4-AWQ</td><td>86.18</td><td>81.54</td><td>-</td><td>36.59</td></tr>
<tr><td rowspan="4">Qwen3-30B-A3B</td><td>BF16</td><td>83.66</td><td>79.36</td><td>89.99</td><td>31.71</td></tr>
<tr><td>FP8-Static</td><td>83.95</td><td>79.47</td><td>89.01</td><td>31.10</td></tr>
<tr><td>FP8-Dynamic</td><td>84.10</td><td>79.40</td><td>89.16</td><td>32.93</td></tr>
<tr><td>INT8-Dynamic</td><td>83.36</td><td>79.48</td><td>89.16</td><td>34.15</td></tr>
<tr><td rowspan="4">Qwen3-235B-A22B</td><td>BF16</td><td>89.60</td><td>86.28</td><td>85.29</td><td>27.44</td></tr>
<tr><td>FP8-Static</td><td>89.67</td><td>86.19</td><td>86.96</td><td>27.44</td></tr>
<tr><td>FP8-Dynamic</td><td>89.67</td><td>86.18</td><td>85.22</td><td>28.05</td></tr>
<tr><td>INT8-Dynamic</td><td>88.93</td><td>86.20</td><td>86.20</td><td>23.78</td></tr>
<tr><td rowspan="5">QwQ-32B</td><td>BF16</td><td>85.74</td><td>82.03</td><td>73.31</td><td>42.68</td></tr>
<tr><td>FP8-Static</td><td>85.44</td><td>81.91</td><td>75.36</td><td>42.68</td></tr>
<tr><td>FP8-Dynamic</td><td>85.07</td><td>81.93</td><td>75.66</td><td>42.07</td></tr>
<tr><td>INT4-GPTQ</td><td>84.03</td><td>81.26</td><td>68.23</td><td>45.73</td></tr>
<tr><td>INT4-AWQ</td><td>83.58</td><td>81.01</td><td>68.69</td><td>43.29</td></tr>
</tbody>
</table>
#### Qwen2.5VL Series Models
Benchmark results for Qwen2.5VL series models with `BF16`, `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `MMMU_VAL`, `DocVQA_VAL` and `ChartQA_TEST`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>MMMU_VAL</th><th>DocVQA_VAL</th><th>ChartQA_TEST</th></tr>
</thead>
<tbody>
<tr><td rowspan="5">Qwen2.5VL-3B</td><td>BF16</td><td>47.11</td><td>78.57</td><td>80.32</td></tr>
<tr><td>FP8-Static</td><td>47.33</td><td>79.34</td><td>79.68</td></tr>
<tr><td>FP8-Dynamic</td><td>45.99</td><td>46.93</td><td>38.29</td></tr>
<tr><td>INT4-GPTQ</td><td>46.56</td><td>77.20</td><td>78.96</td></tr>
<tr><td>INT4-AWQ</td><td>45.78</td><td>-</td><td>79.60</td></tr>
<tr><td rowspan="5">Qwen2.5VL-7B</td><td>BF16</td><td>45.44</td><td>89.71</td><td>84.64</td></tr>
<tr><td>FP8-Static</td><td>47.00</td><td>89.83</td><td>85.92</td></tr>
<tr><td>FP8-Dynamic</td><td>47.22</td><td>89.80</td><td>88.64</td></tr>
<tr><td>INT4-GPTQ</td><td>46.67</td><td>90.45</td><td>-</td></tr>
<tr><td>INT4-AWQ</td><td>45.67</td><td>89.28</td><td>-</td></tr>
<tr><td rowspan="5">Qwen2.5VL-32B</td><td>BF16</td><td>57.00</td><td>90.03</td><td>-</td></tr>
<tr><td>FP8-Static</td><td>57.00</td><td>89.88</td><td>-</td></tr>
<tr><td>FP8-Dynamic</td><td>56.44</td><td>89.88</td><td>-</td></tr>
<tr><td>INT4-GPTQ</td><td>55.22</td><td>89.80 </td><td>-</td></tr>
<tr><td>INT4-AWQ</td><td>55.22</td><td>90.30</td><td>-</td></tr>
<tr><td rowspan="5">Qwen2.5VL-72B</td><td>BF16</td><td>58.78</td><td>94.39</td><td>85.60</td></tr>
<tr><td>FP8-Static</td><td>57.89</td><td>94.41</td><td>85.84</td></tr>
<tr><td>FP8-Dynamic</td><td>58.67</td><td>94.38</td><td>85.60</td></tr>
<tr><td>INT4-GPTQ</td><td>57.56</td><td>94.46</td><td>86.48</td></tr>
<tr><td>INT4-AWQ</td><td>58.78</td><td>94.19</td><td>87.28</td></tr>
</tbody>
</table>
#### Other Models
Benchmark results for other models with `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU` and `GSM8K`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th></tr>
</thead>
<tbody>
<tr><td rowspan="3">Qwen2.5-1.5B-Instruct</td><td>BF16</td><td>67.01</td><td>60.05</td><td>54.28</td></tr>
<tr><td>FP8-Static</td><td>66.27</td><td>60.23</td><td>-</td></tr>
<tr><td>FP8-Dynamic</td><td>66.79</td><td>60.08</td><td>51.71</td></tr>
<tr><td rowspan="5">Qwen2.5-7B-Instruct</td><td>BF16</td><td>81.20</td><td>74.55</td><td>79.98</td></tr>
<tr><td>FP8-Static</td><td>81.13</td><td>74.03</td><td>79.30</td></tr>
<tr><td>FP8-Dynamic</td><td>80.31</td><td>74.07</td><td>79.00</td></tr>
<tr><td>INT4-GPTQ</td><td>79.05</td><td>73.05</td><td>74.75</td></tr>
<tr><td>INT4-AWQ</td><td>79.35</td><td>73.22</td><td>79.38</td></tr>
<tr><td rowspan="5">Qwen2.5-32B-Instruct</td><td>BF16</td><td>87.30</td><td>83.21</td><td>81.73</td></tr>
<tr><td>FP8-Static</td><td>87.59</td><td>83.08</td><td>81.58</td></tr>
<tr><td>FP8-Dynamic</td><td>87.30</td><td>83.04</td><td>81.58</td></tr>
<tr><td>INT4-GPTQ</td><td>86.70</td><td>82.45</td><td>82.03</td></tr>
<tr><td>INT4-AWQ</td><td>87.00</td><td>82.64</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-7B</td><td>BF16</td><td>53.49</td><td>53.80</td><td>75.74</td></tr>
<tr><td>FP8-Static</td><td>53.57</td><td>54.17</td><td>76.19</td></tr>
<tr><td>FP8-Dynamic</td><td>52.97</td><td>54.13</td><td>74.15</td></tr>
<tr><td>INT4-GPTQ</td><td>51.86</td><td>52.44</td><td>75.89</td></tr>
<tr><td>INT4-AWQ</td><td>53.49</td><td>53.70</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-14B</td><td>BF16</td><td>77.71</td><td>74.28</td><td>85.67</td></tr>
<tr><td>FP8-Static</td><td>77.56</td><td>74.66</td><td>86.73</td></tr>
<tr><td>FP8-Dynamic</td><td>76.82</td><td>74.63</td><td>87.11</td></tr>
<tr><td>INT4-GPTQ</td><td>74.29</td><td>72.37</td><td>84.61</td></tr>
<tr><td>INT4-AWQ</td><td>74.81</td><td>73.00</td><td>86.05</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-32B</td><td>BF16</td><td>84.18</td><td>80.89</td><td>87.41</td></tr>
<tr><td>FP8-Static</td><td>83.43</td><td>80.90</td><td>87.57</td></tr>
<tr><td>FP8-Dynamic</td><td>83.73</td><td>81.10</td><td>86.43</td></tr>
<tr><td>INT4-GPTQ</td><td>84.10</td><td>79.80</td><td>86.73</td></tr>
<tr><td>INT4-AWQ</td><td>82.84</td><td>80.15</td><td>87.19</td></tr>
</tbody>
</table>
### (2) Speculative Decoding
#### Qwen3 Series Models
Benchmark results for Qwen3 series models with the `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:
<table>
<thead>
<tr>
<th> </th><th> </th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=0</strong></td></tr> -->
<tr><td rowspan="6"><strong>T=0</strong></td>
<td>Qwen3-1.7B</td><td>2.05x</td><td>2.81</td><td>2.07x</td><td>2.93</td><td>2.11x</td><td>2.98</td><td>1.93x</td><td>2.69</td><td>2.04x</td><td>2.85</td></tr>
<tr> <td>Qwen3-4B</td><td>2.21x</td><td>3.01</td><td>2.36x</td><td>3.24</td><td>2.42x</td><td>3.13</td><td>2.32x</td><td>2.75</td><td>2.33x</td><td>3.03</td></tr>
<tr><td>Qwen3-8B</td><td>2.63x</td><td>3.65</td><td>2.76x</td><td>3.85</td><td>2.82x</td><td>3.90</td><td>2.62x</td><td>3.48</td><td>2.70x</td><td>3.72</td></tr>
<tr><td>Qwen3-14B</td><td>2.23x</td><td>3.30</td><td>2.53x</td><td>3.74</td><td>2.56x</td><td>3.79</td><td>2.16x</td><td>3.13</td><td>2.37x</td><td>3.49</td></tr>
<tr><td>Qwen3-32B</td><td>2.39x</td><td>2.78</td><td>2.37x</td><td>2.81</td><td>2.47x</td><td>2.92</td><td>2.42x</td><td>2.53</td><td>2.41x</td><td>2.76</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>2.84x</td><td>3.63</td><td>2.27x</td><td>3.09</td><td>2.64x</td><td>3.42</td><td>2.83x</td><td>3.56</td><td>2.64x</td><td>3.42</td></tr>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=1</strong></td></tr> -->
<tr><td rowspan="6"><strong>T=1</strong></td>
<td>Qwen3-1.7B</td><td>1.74x</td><td>2.53</td><td>1.86x</td><td>2.70</td><td>1.82x</td><td>2.69</td><td>1.72x</td><td>2.46</td><td>1.93x</td><td>2.60</td></tr>
<tr><td>Qwen3-4B</td><td>1.93x</td><td>2.60</td><td>2.00x</td><td>2.84</td><td>2.11x</td><td>2.82</td><td>2.34x</td><td>2.50</td><td>1.75x</td><td>2.69</td></tr>
<tr><td>Qwen3-8B</td><td>1.98x</td><td>2.75</td><td>2.25x</td><td>3.11</td><td>2.31x</td><td>3.15</td><td>2.10x</td><td>2.76</td><td>2.90x</td><td>2.94</td></tr>
<tr><td>Qwen3-14B</td><td>1.71x</td><td>2.61</td><td>1.95x</td><td>2.87</td><td>2.04x</td><td>3.08</td><td>1.68x</td><td>2.55</td><td>2.90x</td><td>2.78</td></tr>
<tr><td>Qwen3-32B</td><td>1.62x</td><td>1.91</td><td>1.71x</td><td>2.05</td><td>1.78x</td><td>2.10</td><td>1.80x</td><td>1.95</td><td>1.62x</td><td>2.00</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>1.91x</td><td>2.46</td><td>2.00x</td><td>2.64</td><td>1.90x</td><td>2.53</td><td>1.80x</td><td>2.32</td><td>1.90x</td><td>2.48</td></tr>
</tbody>
</table>
#### Hunyuan Series Models
Benchmark results for Hunyuan series models with the `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:
<table>
<thead>
<tr>
<th> </th><th> </th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=0</strong></td></tr> -->
<tr><td rowspan="3"><strong>T=0</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.97x</td><td>2.90</td><td>2.58x</td><td>3.73</td><td>2.61x</td><td>3.71</td><td>1.71x</td><td>2.43</td><td>2.22x</td><td>3.19</td></tr>
<tr> <td>Hunyuan-4B-Instruct</td><td>1.77x</td><td>2.60</td><td>2.64x</td><td>3.35</td><td>2.14x</td><td>3.17</td><td>1.72x</td><td>2.57</td><td>2.07x</td><td>2.92</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>2.22x</td><td>3.58</td><td>3.59x</td><td>5.47</td><td>2.96x</td><td>4.68</td><td>1.64x</td><td>2.56</td><td>2.60x</td><td>4.07</td></tr>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=1</strong></td></tr> -->
<tr><td rowspan="3"><strong>T=1</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.58x</td><td>2.36</td><td>2.35x</td><td>3.56</td><td>2.23x</td><td>3.38</td><td>1.26x</td><td>1.87</td><td>1.86x</td><td>2.79</td></tr>
<tr><td>Hunyuan-4B-Instruct</td><td>1.36x</td><td>2.05</td><td>1.97x</td><td>2.86</td><td>1.72x</td><td>2.68</td><td>1.14x</td><td>1.76</td><td>1.55x</td><td>2.34</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>1.90x</td><td>3.11</td><td>3.12x</td><td>5.09</td><td>2.74x</td><td>4.34</td><td>1.47x</td><td>2.39</td><td>2.31x</td><td>3.73</td></tr>
</tbody>
</table>
## 📝 License
The code for this project is open-sourced under the [License for AngelSlim](LICENSE).
## 🔗 Citation
```
@software{AngelSlim2025,
title={{AngelSlim}},
author={Tencent AngelSlim Project Contributors},
year={2025},
month={6},
url={https://github.com/Tencent/AngelSlim},
}
```
## 💬 Technical Discussion
* AngelSlim is continuously iterating and new features will be released soon. If you have any questions or suggestions, please open an issue on [GitHub Issues](https://github.com/Tencent/AngelSlim/issues) or join our [WeChat technical discussion group](./docs/source/assets/angel_slim_wechat.png).
|
pkshatech/GLuCoSE-base-ja
|
pkshatech
| 2025-08-07T14:42:43Z | 61,369 | 32 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"luke",
"feature-extraction",
"transformers",
"sentence-similarity",
"ja",
"dataset:mc4",
"dataset:clips/mqa",
"dataset:shunk031/JGLUE",
"dataset:paws-x",
"dataset:MoritzLaurer/multilingual-NLI-26lang-2mil7",
"dataset:castorini/mr-tydi",
"dataset:hpprc/jsick",
"arxiv:2104.07179",
"arxiv:2004.04906",
"base_model:studio-ousia/luke-japanese-base-lite",
"base_model:finetune:studio-ousia/luke-japanese-base-lite",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-16T07:28:46Z |
---
pipeline_tag: sentence-similarity
language: ja
license: apache-2.0
tags:
- transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
inference: false
datasets:
- mc4
- clips/mqa
- shunk031/JGLUE
- paws-x
- MoritzLaurer/multilingual-NLI-26lang-2mil7
- castorini/mr-tydi
- hpprc/jsick
base_model:
- studio-ousia/luke-japanese-base-lite
---
# GLuCoSE (General Luke-based Contrastive Sentence Embedding)-base-Japanese
[日本語のREADME/Japanese README](https://huggingface.co/pkshatech/GLuCoSE-base-ja/blob/main/README_JA.md)
GLuCoSE (General LUke-based COntrastive Sentence Embedding, "glucose") is a Japanese text embedding model based on [LUKE](https://github.com/studio-ousia/luke). In order to create a general-purpose, user-friendly Japanese text embedding model, GLuCoSE has been trained on a mix of web data and various datasets associated with natural language inference and search. This model is not only suitable for sentence vector similarity tasks but also for semantic search tasks.
- Maximum token count: 512
- Output dimension: 768
- Pooling: mean pooling
- Supported language: Japanese
## Usage
You can use this model easily with [sentence-transformers](https://www.SBERT.net).
First, install sentence-transformers with pip as follows:
```
pip install -U sentence-transformers
```
You can load the model and convert sentences into dense vectors as shown below:
```python
from sentence_transformers import SentenceTransformer
sentences = [
"PKSHA Technologyは機械学習/深層学習技術に関わるアルゴリズムソリューションを展開している。",
"この深層学習モデルはPKSHA Technologyによって学習され、公開された。",
"広目天は、仏教における四天王の一尊であり、サンスクリット語の「種々の眼をした者」を名前の由来とする。",
]
model = SentenceTransformer('pkshatech/GLuCoSE-base-ja')
embeddings = model.encode(sentences)
print(embeddings)
```
Since the loss function used during training is cosine similarity, we recommend using cosine similarity for downstream tasks.
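For example, a minimal sketch that reuses the `embeddings` computed in the snippet above and scores all sentence pairs with `sentence_transformers.util.cos_sim`:
```python
from sentence_transformers import util

# Pairwise cosine similarities between the sentence embeddings computed above
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
# The two sentences about PKSHA Technology should be more similar to each other
# than either is to the unrelated third sentence.
```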
This text embedding model can also be used in LangChain. Please refer to [this page](https://python.langchain.com/docs/modules/data_connection/text_embedding/integrations/sentence_transformers) for more information.
## Resources Used
The following resources were used to train this model.
### Pre-trained model
- [studio-ousia/luke-japanese-base-lite](https://huggingface.co/studio-ousia/luke-japanese-base-lite)
### Datasets
- [mC4](https://huggingface.co/datasets/mc4)
- [MQA](https://huggingface.co/datasets/clips/mqa)
- [JNLI](https://github.com/yahoojapan/JGLUE)
- [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [JSeM](https://github.com/DaisukeBekki/JSeM)
- [MoritzLaurer/multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7)
- [MultiNLI](https://huggingface.co/datasets/multi_nli)
- [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI)
- [FeverNLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md)
- [LingNLI](https://arxiv.org/pdf/2104.07179.pdf)
- [JSICK](https://github.com/verypluming/JSICK)
- [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi)
- [JSTS](https://github.com/yahoojapan/JGLUE) (used for validation) [^1]
## Benchmarks
### Semantic Similarity Calculation ([JSTS](https://github.com/yahoojapan/JGLUE) dev set)
Evaluation by Spearman's correlation coefficient and Pearson's correlation coefficient.
| Model | Spearman | Pearson |
| --- | --- | --- |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) |0.837[^2] | 0.790[^2] |
| [pkshatech/simcse-ja-bert-base-clcmlp](https://huggingface.co/pkshatech/simcse-ja-bert-base-clcmlp)[^3] | 0.850 | 0.801 |
| pkshatech/GLuCoSE-base-ja | **0.864** | **0.818** |
### Zero-shot Search ([AIO3](https://sites.google.com/view/project-aio/competition3?authuser=0) dev set)
Evaluation by top-k retrieval accuracy[^4] (the fraction of questions for which at least one of the top-k retrieved documents contains a correct answer).
| Model | Top-1 | Top-5 | Top-10 | Top-50 |
| --- | --- | --- | --- | --- |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 33.50 | 57.80 | 65.10 | 76.60 |
| [pkshatech/simcse-ja-bert-base-clcmlp](https://huggingface.co/pkshatech/simcse-ja-bert-base-clcmlp)[^3] | 30.60 | 54.50 | 62.50 | 76.70 |
| pkshatech/GLuCoSE-base-ja | **36.10** | **59.40** | **66.40** | **78.30** |
## Authors
[Akihiko Fukuchi](https://huggingface.co/akiFQC), [Yuichiro Hoshino](https://huggingface.co/Yuichiroh), [Yotarow Watanabe](https://huggingface.co/yotarow)
## License
This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
[^1]: When we trained this model, the test data of JGLUE was not released, so we used the dev set of JGLUE as private evaluation data. Therefore, we selected the checkpoint on the train set of JGLUE instead of its dev set.
[^2]: https://qiita.com/akeyhero/items/ce371bfed64399027c23
[^3]: This is the model we have released before.
[^4]: For more details, please refer to https://arxiv.org/pdf/2004.04906.pdf.
|
Yujie-AI/Yi_34B_LLaVA-linear-coeff0.4
|
Yujie-AI
| 2025-08-07T14:41:00Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-04-23T22:38:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neurlang/ipa-whisper-small
|
neurlang
| 2025-08-07T14:40:34Z | 0 | 1 | null |
[
"safetensors",
"whisper",
"audio",
"automatic-speech-recognition",
"IPA",
"phonetic",
"af",
"am",
"ar",
"as",
"az",
"ba",
"be",
"bg",
"bn",
"ca",
"cs",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"gl",
"gn",
"ha",
"he",
"hi",
"ht",
"hu",
"hy",
"ia",
"id",
"is",
"it",
"ja",
"kk",
"ko",
"ky",
"lo",
"ltg",
"lt",
"lv",
"mk",
"ml",
"mn",
"mr",
"mt",
"nan",
"nl",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"sk",
"sl",
"sq",
"sr",
"sv",
"sw",
"ta",
"te",
"th",
"tk",
"tr",
"tt",
"ug",
"uk",
"ur",
"uz",
"vi",
"yo",
"yue",
"zh",
"zu",
"tn",
"arxiv:2212.04356",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2025-08-03T19:22:09Z |
---
language:
- af
- am
- ar
- as
- az
- ba
- be
- bg
- bn
- ca
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- gn
- ha
- he
- hi
- ht
- hu
- hy
- ia
- id
- is
- it
- ja
- kk
- ko
- ky
- lo
- ltg
- lt
- lv
- mk
- ml
- mn
- mr
- mt
- nan
- nl
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- th
- tk
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- yo
- yue
- zh
- zu
- tn
tags:
- audio
- automatic-speech-recognition
- IPA
- phonetic
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: ipa-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 99999999999999
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 99999999999999
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Test WER
type: wer
value: 99999999999999
pipeline_tag: automatic-speech-recognition
license: apache-2.0
base_model:
- openai/whisper-small
---
# Whisper IPA
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. This checkpoint was fine-tuned on 15000 wavs
of labelled synthetic IPA data (generated using the goruut 0.6.3 phonemizer), so it transcribes speech into IPA phonetic notation while
retaining Whisper's strong ability to generalise to many languages, datasets and domains **without** further fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Fine-tuning details
- Fine-tuning took 66:07:22
- It was trained on 15000 wavs
- GPU in use was an NVIDIA 3090 Ti with 24 GB VRAM
- Fine-tuned on 15000 random wavs from Common Voice 21 across 70+ languages
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
processor = WhisperProcessor.from_pretrained("neurlang/ipa-whisper-small")
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```
Which forces the model to predict in English under the task of speech recognition.
## Transcription
### Speech to IPA
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("neurlang/ipa-whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("neurlang/ipa-whisper-small")
>>> model.config.forced_decoder_ids = None
>>> model.config.suppress_tokens = []
>>> model.generation_config.forced_decoder_ids = None
>>> model.generation_config._from_model_config = True
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['mˈɪstɚ kwˈɪltɚ ˈɪz ðə ˈeɪ pˈɑsəl ˈʌv ðə ˈmɪdəl klˈæsɪz ˈænd wˈɪɹ glæd tˈu ˈælkəm ˈhɪz gˈʌsbəl']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
['mˈɪstɚ kwˈɪltɚ ˈɪz ðə ˈeɪ pˈɑsəl ˈʌv ðə ˈmɪdəl klˈæsɪz ˈænd wˈɪɹ glæd tˈu ˈælkəm ˈhɪz gˈʌsbəl']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict word level timestamps by passing `return_timestamps="word"`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="neurlang/ipa-whisper-small",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
"mˈɪstɚ kwˈɪltɚ ˈɪz ðɪ əpˈɑsəl əv ðə ˈmɪdəl klˈæsɪz ˈænd wˈɪɹ glˈæd tˈɪ wˈɛlkəm ˈhɪz gˈɑspəl"
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps="word")["chunks"]
[{'text': 'mˈɪstɚ', 'timestamp': (0.42, 0.78)}, {'text': ' kwˈɪltɚ', 'timestamp': (0.78, 1.2)}, {'text': ' ˈɪz', 'timestamp': (1.2, 1.4)}, {'text': ' ðɪ', 'timestamp': (1.4, 1.52)}, {'text': ' əpˈɑsəl', 'timestamp': (1.52, 2.08)}, {'text': ' əv', 'timestamp': (2.08, 2.26)}, {'text': ' ðə', 'timestamp': (2.26, 2.36)}, {'text': ' ˈmɪdəl', 'timestamp': (2.36, 2.6)}, {'text': ' klˈæsɪz', 'timestamp': (2.6, 3.22)}, {'text': ' ˈænd', 'timestamp': (3.22, 3.42)}, {'text': ' wˈɪɹ', 'timestamp': (3.42, 3.66)}, {'text': ' glˈæd', 'timestamp': (3.66, 4.02)}, {'text': ' tˈɪ', 'timestamp': (4.02, 4.18)}, {'text': ' wˈɛlkəm', 'timestamp': (4.18, 4.58)}, {'text': ' ˈhɪz', 'timestamp': (4.58, 4.82)}, {'text': ' gˈɑspəl', 'timestamp': (4.82, 5.38)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
RTannous/gpt-oss-20b-multilingual-reasoner
|
RTannous
| 2025-08-07T14:39:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"conversational",
"base_model:unsloth/gpt-oss-20b",
"base_model:finetune:unsloth/gpt-oss-20b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T00:49:12Z |
---
base_model: unsloth/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [unsloth/gpt-oss-20b](https://huggingface.co/unsloth/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RTannous/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.0.dev0
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sinequa/passage-ranker.chocolate
|
sinequa
| 2025-08-07T14:38:45Z | 1,247 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:2002.10957",
"arxiv:1901.04085",
"arxiv:1611.09268",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T12:42:08Z |
---
language:
- en
---
# Model Card for `passage-ranker.chocolate`
This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results.
Model name: `passage-ranker.chocolate`
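The model is intended to run inside the Sinequa platform, but as an illustration, the sketch below shows one plausible way to score a query-passage pair with 🤗 Transformers; treating the checkpoint as a standard sequence-classification cross-encoder is an assumption, and the query and passage are made-up examples.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sinequa/passage-ranker.chocolate"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

query = "how do I renew my passport"
passage = "To renew your passport, submit the renewal form together with your current passport."

# Encode the query-passage pair and use the logits as a relevance score
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```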
## Supported Languages
The model was trained and tested in the following languages:
- English
## Scores
| Metric | Value |
|:--------------------|------:|
| Relevance (NDCG@10) | 0.484 |
Note that the relevance score is computed as an average over 14 retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 1 ms | 5 ms |
| NVIDIA A10 | FP32 | 2 ms | 22 ms |
| NVIDIA T4 | FP16 | 1 ms | 13 ms |
| NVIDIA T4 | FP32 | 3 ms | 66 ms |
| NVIDIA L4 | FP16 | 2 ms | 6 ms |
| NVIDIA L4 | FP32 | 3 ms | 30 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 300 MiB |
| FP32 | 550 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which
can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 23 million
- Base language model: [MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased)
([Paper](https://arxiv.org/abs/2002.10957), [GitHub](https://github.com/microsoft/unilm/tree/master/minilm))
- Insensitive to casing and accents
- Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085)
### Training Data
- MS MARCO Passage Ranking
([Paper](https://arxiv.org/abs/1611.09268),
[Official Page](https://microsoft.github.io/msmarco/),
[dataset on HF hub](https://huggingface.co/datasets/unicamp-dl/mmarco))
### Evaluation Metrics
To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
| Dataset | NDCG@10 |
|:------------------|--------:|
| Average | 0.486 |
| | |
| Arguana | 0.554 |
| CLIMATE-FEVER | 0.209 |
| DBPedia Entity | 0.367 |
| FEVER | 0.744 |
| FiQA-2018 | 0.339 |
| HotpotQA | 0.685 |
| MS MARCO | 0.412 |
| NFCorpus | 0.352 |
| NQ | 0.454 |
| Quora | 0.818 |
| SCIDOCS | 0.158 |
| SciFact | 0.658 |
| TREC-COVID | 0.674 |
| Webis-Touche-2020 | 0.345 |
|
sinequa/answer-finder-v1-S-en
|
sinequa
| 2025-08-07T14:38:30Z | 319 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-10T15:11:37Z |
---
language:
- en
---
# Model Card for `answer-finder-v1-S-en`
This model is a question answering model developed by Sinequa. It produces two lists of logit scores corresponding to the start token and end token of an answer.
Model name: `answer-finder-v1-S-en`
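The model ships for use inside the Sinequa platform; as an illustration only, the sketch below assumes the checkpoint can also be loaded as a standard extractive question-answering model with the 🤗 `pipeline` API, and the question and context are made-up examples.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="sinequa/answer-finder-v1-S-en")

result = qa(
    question="Who developed the answer finder model?",
    context="The answer finder model was developed by Sinequa and trained on SQuAD v2.",
)
# The pipeline converts the start/end token logits into an answer span and a confidence score.
print(result["answer"], result["score"])
```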
## Supported Languages
The model was trained and tested in the following languages:
- English
## Scores
| Metric | Value |
|:--------------------------------------------------------------|-------:|
| F1 Score on SQuAD v2 with Hugging Face evaluation pipeline | 79.4 |
| F1 Score on SQuAD v2 with Haystack evaluation pipeline | 79.5 |
## Inference Time
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 1 ms | 10 ms |
| NVIDIA A10 | FP32 | 3 ms | 43 ms |
| NVIDIA T4 | FP16 | 2 ms | 22 ms |
| NVIDIA T4 | FP32 | 5 ms | 130 ms |
| NVIDIA L4 | FP16 | 2 ms | 12 ms |
| NVIDIA L4 | FP32 | 5 ms | 62 ms |
**Note that the Answer Finder models are only used at query time.**
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 300 MiB |
| FP32 | 550 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which
can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
## Model Details
### Overview
- Number of parameters: 33 million
- Base language model: [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased)
- Insensitive to casing and accents
### Training Data
- [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/)
|
sagata007/neeta
|
sagata007
| 2025-08-07T14:38:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-08-07T14:38:03Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/Screenshot 2025-08-07 200658.png
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: AHJDFG_KW824RH_OIHWEIU34 WOMAN
---
# neeta
<Gallery />
## Trigger words
You should use `AHJDFG_KW824RH_OIHWEIU34 WOMAN` to trigger the image generation.
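A minimal diffusers sketch for loading this LoRA on top of the base model is shown below; the prompt and generation settings are illustrative only, and running `FLUX.1-dev` requires accepting its license and a GPU with enough memory.
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and apply the LoRA weights from this repository
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("sagata007/neeta")
pipe.to("cuda")

image = pipe(
    "AHJDFG_KW824RH_OIHWEIU34 WOMAN, portrait photo, natural lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("neeta.png")
```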
## Download model
[Download](/sagata007/neeta/tree/main) them in the Files & versions tab.
|
Yujie-AI/Vicuna_13B_LLaVA-linear-coeff0.2
|
Yujie-AI
| 2025-08-07T14:37:32Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-04-23T21:26:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sinequa/answer-finder-v1-L-multilingual
|
sinequa
| 2025-08-07T14:37:11Z | 341 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"de",
"en",
"es",
"fr",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-10T15:29:38Z |
---
language:
- de
- en
- es
- fr
---
# Model Card for `answer-finder-v1-L-multilingual`
This model is a question answering model developed by Sinequa. It produces two lists of logit scores corresponding to the start token and end token of an answer.
Model name: `answer-finder-v1-L-multilingual`
## Supported Languages
The model was trained and tested in the following languages:
- English
- French
- German
- Spanish
## Scores
| Metric | Value |
|:--------------------------------------------------------------|-------:|
| F1 Score on SQuAD v2 EN with Hugging Face evaluation pipeline | 75 |
| F1 Score on SQuAD v2 EN with Haystack evaluation pipeline | 75 |
| F1 Score on SQuAD v2 FR with Haystack evaluation pipeline | 73.4 |
| F1 Score on SQuAD v2 DE with Haystack evaluation pipeline | 90.8 |
| F1 Score on SQuAD v2 ES with Haystack evaluation pipeline | 67.1 |
## Inference Time
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 2 ms | 30 ms |
| NVIDIA A10 | FP32 | 4 ms | 83 ms |
| NVIDIA T4 | FP16 | 3 ms | 65 ms |
| NVIDIA T4 | FP32 | 14 ms | 373 ms |
| NVIDIA L4 | FP16 | 2 ms | 38 ms |
| NVIDIA L4 | FP32 | 5 ms | 124 ms |
**Note that the Answer Finder models are only used at query time.**
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 550 MiB |
| FP32 | 1050 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 110 million
- Base language model: [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
pre-trained by Sinequa in English, French, German and Spanish
- Insensitive to casing and accents
### Training Data
- [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/)
- [French-SQuAD](https://github.com/Alikabbadj/French-SQuAD) + French translation of SQuAD v2 "impossible" query-passage pairs
- [GermanQuAD](https://www.deepset.ai/germanquad) + German translation of SQuAD v2 "impossible" query-passage pairs
- [SQuAD-es-v2](https://github.com/ccasimiro88/TranslateAlignRetrieve)
|
sinequa/vectorizer-v1-S-en
|
sinequa
| 2025-08-07T14:36:57Z | 342 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:2007.00808",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-10T14:49:33Z |
---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
language:
- en
---
# Model Card for `vectorizer-v1-S-en`
This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The passage vectors are stored in our vector index and the query vector is used at query time to look up relevant passages in the index.
Model name: `vectorizer-v1-S-en`
## Supported Languages
The model was trained and tested in the following languages:
- English
## Scores
| Metric | Value |
|:-----------------------|------:|
| Relevance (Recall@100) | 0.456 |
Note that the relevance score is computed as an average over 14 retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 1 ms | 4 ms |
| NVIDIA A10 | FP32 | 2 ms | 13 ms |
| NVIDIA T4 | FP16 | 1 ms | 13 ms |
| NVIDIA T4 | FP32 | 2 ms | 52 ms |
| NVIDIA L4 | FP16 | 1 ms | 5 ms |
| NVIDIA L4 | FP32 | 2 ms | 18 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 300 MiB |
| FP32 | 500 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which
can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 29 million
- Base language model: [English BERT-Small](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8)
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer)
- Training procedure: A first model was trained with query-passage pairs, using the in-batch negative strategy with [this loss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss). A second model was then trained on query-passage-negative triplets with negatives mined from the previous model, like a variant of [ANCE](https://arxiv.org/pdf/2007.00808.pdf) but with different hyperparameters.
### Training Data
The model was trained on a Sinequa curated version of Google's [Natural Questions](https://ai.google.com/research/NaturalQuestions).
### Evaluation Metrics
To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
| Dataset | Recall@100 |
|:------------------|-----------:|
| Average | 0.456 |
| | |
| Arguana | 0.832 |
| CLIMATE-FEVER | 0.342 |
| DBPedia Entity | 0.299 |
| FEVER | 0.660 |
| FiQA-2018 | 0.301 |
| HotpotQA | 0.434 |
| MS MARCO | 0.610 |
| NFCorpus | 0.159 |
| NQ | 0.671 |
| Quora | 0.966 |
| SCIDOCS | 0.194 |
| SciFact | 0.592 |
| TREC-COVID | 0.037 |
| Webis-Touche-2020 | 0.285 |
|
sinequa/passage-ranker.pistachio
|
sinequa
| 2025-08-07T14:36:13Z | 961 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"de",
"en",
"es",
"fr",
"it",
"ja",
"nl",
"pt",
"zh",
"pl",
"arxiv:1901.04085",
"arxiv:1611.09268",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-07-11T14:02:06Z |
---
language:
- de
- en
- es
- fr
- it
- ja
- nl
- pt
- zh
- pl
---
# Model Card for `passage-ranker.pistachio`
This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results.
Model name: `passage-ranker.pistachio`
## Supported Languages
The model was trained and tested in the following languages:
- English
- French
- German
- Spanish
- Italian
- Dutch
- Japanese
- Portuguese
- Chinese (simplified)
- Polish
Besides the aforementioned languages, basic support can be expected for an additional 93 languages that were used during the pretraining of the base model (see
[list of languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages)).
## Scores
| Metric | Value |
|:----------------------------|------:|
| English Relevance (NDCG@10) | 0.474 |
| Polish Relevance (NDCG@10) | 0.380 |
Note that the relevance score is computed as an average over several retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 2 ms | 28 ms |
| NVIDIA A10 | FP32 | 4 ms | 82 ms |
| NVIDIA T4 | FP16 | 3 ms | 65 ms |
| NVIDIA T4 | FP32 | 14 ms | 369 ms |
| NVIDIA L4 | FP16 | 3 ms | 38 ms |
| NVIDIA L4 | FP32 | 5 ms | 123 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 850 MiB |
| FP32 | 1200 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which
can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 167 million
- Base language model: [Multilingual BERT-Base](https://huggingface.co/bert-base-multilingual-uncased)
- Insensitive to casing and accents
- Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085)
### Training Data
- MS MARCO Passage Ranking
([Paper](https://arxiv.org/abs/1611.09268),
[Official Page](https://microsoft.github.io/msmarco/),
[English & translated datasets on the HF dataset hub](https://huggingface.co/datasets/unicamp-dl/mmarco), [translated dataset in Polish on the HF dataset hub](https://huggingface.co/datasets/clarin-knext/msmarco-pl))
- Original English dataset
- Translated datasets for the other nine supported languages
### Evaluation Metrics
#### English
To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
| Dataset | NDCG@10 |
|:------------------|--------:|
| Average | 0.474 |
| | |
| Arguana | 0.539 |
| CLIMATE-FEVER | 0.230 |
| DBPedia Entity | 0.369 |
| FEVER | 0.765 |
| FiQA-2018 | 0.329 |
| HotpotQA | 0.694 |
| MS MARCO | 0.413 |
| NFCorpus | 0.337 |
| NQ | 0.486 |
| Quora | 0.714 |
| SCIDOCS | 0.144 |
| SciFact | 0.649 |
| TREC-COVID | 0.651 |
| Webis-Touche-2020 | 0.312 |
#### Polish
This model has Polish capabilities, which are evaluated over a subset of
the [PIRBenchmark](https://github.com/sdadas/pirb) with BM25 as the first-stage retrieval.
| Dataset | NDCG@10 |
|:--------------|--------:|
| Average | 0.380 |
| | |
| arguana-pl | 0.285 |
| dbpedia-pl | 0.283 |
| fiqa-pl | 0.223 |
| hotpotqa-pl | 0.603 |
| msmarco-pl | 0.259 |
| nfcorpus-pl | 0.293 |
| nq-pl | 0.355 |
| quora-pl | 0.613 |
| scidocs-pl | 0.128 |
| scifact-pl | 0.581 |
| trec-covid-pl | 0.560 |
#### Other languages
We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its
multilingual capabilities. Note that not all training languages are part of the benchmark, so we only report the metrics
for the existing languages.
| Language | NDCG@10 |
|:----------------------|--------:|
| French | 0.439 |
| German | 0.418 |
| Spanish | 0.487 |
| Japanese | 0.517 |
| Chinese (simplified) | 0.454 |
|
sinequa/vectorizer.guava
|
sinequa
| 2025-08-07T14:35:45Z | 222 | 1 | null |
[
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"de",
"en",
"es",
"fr",
"it",
"nl",
"ja",
"pt",
"zh",
"pl",
"arxiv:2012.15828",
"arxiv:2108.13897",
"region:us"
] |
sentence-similarity
| 2024-10-09T15:23:58Z |
---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
language:
- de
- en
- es
- fr
- it
- nl
- ja
- pt
- zh
- pl
---
# Model Card for `vectorizer.guava`
This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The
passage vectors are stored in our vector index and the query vector is used at query time to look up relevant passages
in the index.
Model name: `vectorizer.guava`
## Supported Languages
The model was trained and tested in the following languages:
- English
- French
- German
- Spanish
- Italian
- Dutch
- Japanese
- Portuguese
- Chinese (simplified)
- Chinese (traditional)
- Polish
Besides these languages, basic support can be expected for an additional 91 languages that were used during the pretraining
of the base model (see Appendix A of the XLM-R paper).
## Scores
| Metric | Value |
|:-------------------------------|------:|
| English Relevance (Recall@100) | 0.616 |
Note that the relevance scores are computed as an average over several retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 1 ms | 5 ms |
| NVIDIA A10 | FP32 | 2 ms | 18 ms |
| NVIDIA T4 | FP16 | 1 ms | 12 ms |
| NVIDIA T4 | FP32 | 3 ms | 52 ms |
| NVIDIA L4 | FP16 | 2 ms | 5 ms |
| NVIDIA L4 | FP32 | 4 ms | 24 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 550 MiB |
| FP32 | 1050 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which
can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 107 million
- Base language
model: [mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large) ([Paper](https://arxiv.org/abs/2012.15828), [GitHub](https://github.com/microsoft/unilm/tree/master/minilm))
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer)
- Training procedure: query-passage-negative triplets for datasets that have mined hard-negative data, query-passage
pairs for the rest. The number of negatives is augmented with an in-batch negative strategy.
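As a conceptual sketch (not from the original card), a bi-encoder vectorizer of this kind embeds queries and passages separately and ranks passages by cosine similarity. The snippet below uses plain `transformers` with mean pooling; the production model adds a dense projection to 256 dimensions and normally runs inside Sinequa via ONNX, so loading it directly may not reproduce the exact embeddings.
```python
# Conceptual dense-retrieval sketch with mean pooling; illustrative only.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "sinequa/vectorizer.guava"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean pooling
    return torch.nn.functional.normalize(pooled, dim=-1)    # unit-length vectors

query = embed(["how do I reset my password"])
passages = embed([
    "Open the settings page and choose 'Reset password'.",
    "Our office is located in Paris.",
])
print(passages @ query.T)  # cosine similarities, higher is more relevant
```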
### Training Data
The model has been trained using all datasets that are cited in
the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model.
In addition to that, this model has been trained on the datasets cited
in [this paper](https://arxiv.org/pdf/2108.13897.pdf) on the first 9 aforementioned languages.
It has also been trained on [this dataset](https://huggingface.co/datasets/clarin-knext/msmarco-pl) for Polish capabilities, and on a translated version of msmarco-zh for traditional Chinese capabilities.
### Evaluation Metrics
#### English
To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in **English**.
| Dataset | Recall@100 |
|:------------------|-----------:|
| Average | 0.616 |
| | |
| Arguana | 0.956 |
| CLIMATE-FEVER | 0.471 |
| DBPedia Entity | 0.379 |
| FEVER | 0.824 |
| FiQA-2018 | 0.642 |
| HotpotQA | 0.579 |
| MS MARCO          |      0.850 |
| NFCorpus | 0.289 |
| NQ | 0.765 |
| Quora | 0.993 |
| SCIDOCS | 0.467 |
| SciFact | 0.899 |
| TREC-COVID | 0.104 |
| Webis-Touche-2020 | 0.407 |
#### Traditional Chinese
This model has traditional Chinese capabilities, which are evaluated over the same dev set as msmarco-zh, translated into traditional Chinese.
| Dataset | Recall@100 |
|:---------------------------------|-----------:|
| msmarco-zh-traditional | 0.738 |
In comparison, [raspberry](https://huggingface.co/sinequa/vectorizer.raspberry) scores a 0.693 on this dataset.
#### Other languages
We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its
multilingual capabilities. Note that not all training languages are part of the benchmark, so we only report the metrics
for the existing languages.
| Language | Recall@100 |
|:----------------------|-----------:|
| French | 0.672 |
| German | 0.594 |
| Spanish | 0.632 |
| Japanese | 0.603 |
| Chinese (simplified) | 0.702 |
|
sinequa/vectorizer.banana
|
sinequa
| 2025-08-07T14:35:27Z | 78 | 0 | null |
[
"safetensors",
"xlm-roberta",
"mrl",
"multilingual",
"arxiv:2205.13147",
"region:us"
] | null | 2025-01-14T16:18:43Z |
---
tags:
- mrl
- multilingual
---
# vectorizer.banana
This model is a vectorizer developed by Sinequa.
It produces an embedding vector given a passage or a query.
The passage vectors are stored in our vector index and the query vector is used at query time to look up relevant passages in the index.
Model name: `vectorizer.banana`
## Supported Languages
Since this model is a distilled version of the [BGE-M3](https://huggingface.co/BAAI/bge-m3) model, it can theoretically handle 100+ languages.
## Scores
We computed the differences in performance w.r.t. the original [BGE-M3](https://huggingface.co/BAAI/bge-m3) on MS MARCO EN. Scores on well-known benchmarks (BEIR, MIRACL, MTEB, etc.) can be found directly in the BGE-M3 model card under the "Dense" line. We expect performance on other datasets to drop roughly linearly, on the same scale as observed on MS MARCO EN.
| Model | Performance Relative to BGE-M3 |
|:-----------------------------------------------|:------------------------------:|
| vectorizer.banana (1024 dimensions) | 99.3% |
| vectorizer.banana (768 dimensions) | 98.8% |
| vectorizer.banana (512 dimensions) | 98% |
| **vectorizer.banana (256 dimensions*)** | 95.7% |
\* *The default dimension within Sinequa*
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|-----------------:|----------------:|
| NVIDIA A10 | FP16 | 4.5 ms | 43 ms |
| NVIDIA T4 | FP16 | 2.5 ms | 35 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|------------:|
| FP16 | 1450 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.11.0.2306
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Configuration
Note that this model is packaged with a default MRL cutoff of 256 dimensions. In order to use the full 1024 dimensions or any other value, the `mrl-cutoff` parameter needs to be set.
### Training
This model uses [BGE-M3](https://huggingface.co/BAAI/bge-m3), a strong and compact multilingual embedding model, as the backbone for distillation.
The original 24-layer model was reduced to 5 layers.
To obtain a low-dimensional output space (256 compared to the original 1024), [Matryoshka Representation Learning](https://arxiv.org/abs/2205.13147) was used at training time.
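To make the MRL cutoff concrete, the sketch below shows the usual Matryoshka-style truncation: keep the first k dimensions of the full vector and re-normalize. This is an assumption about what the cutoff does to an embedding, not code taken from Sinequa.
```python
# Matryoshka-style truncation sketch (assumed behaviour of the MRL cutoff).
import numpy as np

def mrl_truncate(embedding: np.ndarray, cutoff: int = 256) -> np.ndarray:
    """Keep the first `cutoff` dimensions and L2-normalize the result."""
    truncated = embedding[:cutoff]
    return truncated / np.linalg.norm(truncated)

full = np.random.randn(1024).astype(np.float32)  # stand-in for a 1024-d embedding
vec_256 = mrl_truncate(full, 256)
print(vec_256.shape)  # (256,)
```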
|
jburtoft/Qwen3-8BSharded
|
jburtoft
| 2025-08-07T14:33:41Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T14:33:41Z |
---
license: apache-2.0
---
|
ehristoforu/flqww3-smalltrained
|
ehristoforu
| 2025-08-07T14:33:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:ehristoforu/sdmkdnks",
"base_model:finetune:ehristoforu/sdmkdnks",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T14:30:54Z |
---
base_model: ehristoforu/sdmkdnks
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ehristoforu
- **License:** apache-2.0
- **Finetuned from model :** ehristoforu/sdmkdnks
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Lazysniper/Horiza-RAG-base-8b
|
Lazysniper
| 2025-08-07T14:32:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"Horiza",
"conversational",
"en",
"base_model:unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-07T08:31:40Z |
---
base_model:
- unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- gemma3n
- Horiza
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Lazysniper
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard
|
hamid1232
| 2025-08-07T14:32:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am yapping giant lizard",
"unsloth",
"trl",
"genrl-swarm",
"I am yapping_giant_lizard",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-11T23:22:12Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am yapping giant lizard
- unsloth
- trl
- genrl-swarm
- I am yapping_giant_lizard
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
R-Kentaren/Fairseq
|
R-Kentaren
| 2025-08-07T14:32:43Z | 0 | 0 | null |
[
"code",
"license:mit",
"region:us"
] | null | 2025-08-07T12:52:32Z |
---
license: mit
tags:
- code
---
# Fairseq Fix for Python 3.11
## Introduction
This repository provides pre-built wheel files for Fairseq, a sequence modeling toolkit written in PyTorch, that are compatible with Python 3.11. Fairseq is widely used for tasks such as translation, summarization, and language modeling. However, the official Fairseq repository does not yet support Python 3.11, and this repository offers a community-provided solution to bridge that gap.
**Note**: This is not the official Fairseq repository. While it aims to provide compatibility with Python 3.11, it may not include all the latest features or bug fixes from the official Fairseq project. Users should be aware of potential differences or limitations.
## Installation
To install the fixed version of Fairseq for Python 3.11, use the following commands based on your operating system:
### Linux
```bash
pip install https://huggingface.co/R-Kentaren/Fairseq/resolve/main/fairseq-linux_x86_64.whl
```
### Windows
```bash
pip install https://huggingface.co/R-Kentaren/Fairseq/resolve/main/fairseq-win_amd64.whl
```
These commands will download and install the pre-built wheel files for Fairseq that are compatible with Python 3.11.
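As a quick sanity check (not part of the original instructions), you can confirm that the wheel imports cleanly under Python 3.11:
```python
# Verify the installed wheel loads under Python 3.11
import sys
import fairseq

print(sys.version)           # should report 3.11.x
print(fairseq.__version__)   # version provided by the wheel
```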
## Usage
Once installed, you can use Fairseq as you normally would. Below is an example of how to train a model using Fairseq:
```bash
fairseq-train /path/to/data \
--arch transformer \
--task translation \
--criterion label_smoothed_cross_entropy \
--label-smoothing 0.1 \
--optimizer adam \
--adam-betas '(0.9, 0.98)' \
--lr 5e-4 \
--lr-scheduler inverse_sqrt \
--warmup-updates 4000 \
--warmup-init-lr 1e-7 \
--dropout 0.3 \
--weight-decay 0.0001 \
--max-tokens 4096 \
--batch-size 32 \
--max-epoch 30 \
--save-dir /path/to/save
```
For detailed usage instructions, please refer to the official Fairseq documentation: [Fairseq Documentation](https://fairseq.readthedocs.io/en/latest/).
## Known Issues or Limitations
- This repository is a community-provided fix and may not include all the latest features or bug fixes from the official Fairseq repository.
- Compatibility with all Fairseq features is not guaranteed. Some advanced features might not work as expected.
- Users should verify the functionality of specific Fairseq components before relying on this version for critical tasks.
## Contributing
If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request on this repository. Contributions are welcome!
## License
This repository is likely under the same license as the official Fairseq project, which is the MIT License. Please verify the license directly in the repository if available.
## Acknowledgements
Thanks to R-Kentaren for providing this fix for the community.
|
Yujie-AI/Mistral_7B_LLaVA-linear-coeff0.4
|
Yujie-AI
| 2025-08-07T14:32:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-04-23T20:32:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yujie-AI/Llama3_8B_LLaVA-linear-coeff0.6
|
Yujie-AI
| 2025-08-07T14:29:17Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-04-23T20:08:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yujie-AI/Llama3_8B_LLaVA-linear-coeff0.2
|
Yujie-AI
| 2025-08-07T14:28:03Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-04-23T19:58:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ReactiveAI/RxT-Alpha-Micro-Plus-Decoder-SI-SMAT
|
ReactiveAI
| 2025-08-07T14:27:35Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-generation",
"license:apache-2.0",
"region:eu"
] |
text-generation
| 2025-08-07T13:15:55Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
mradermacher/q1b-limo_dsr32b-GGUF
|
mradermacher
| 2025-08-07T14:26:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:nlee-208/q1b-limo_dsr32b",
"base_model:quantized:nlee-208/q1b-limo_dsr32b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-07T14:21:45Z |
---
base_model: nlee-208/q1b-limo_dsr32b
language:
- en
library_name: transformers
model_name: q1b-limo_dsr32b
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/nlee-208/q1b-limo_dsr32b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#q1b-limo_dsr32b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
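As an illustration beyond this README, one way to run a quant locally is through the `llama-cpp-python` bindings; the file name below is one of the quants from the table and the prompt is arbitrary:
```python
# Hypothetical local usage via llama-cpp-python (not covered by this README);
# assumes q1b-limo_dsr32b.Q4_K_M.gguf has been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(model_path="q1b-limo_dsr32b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Question: What is 2 + 2?\nAnswer:", max_tokens=32)
print(out["choices"][0]["text"])
```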
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/q1b-limo_dsr32b-GGUF/resolve/main/q1b-limo_dsr32b.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
idosumit/seq2seq
|
idosumit
| 2025-08-07T14:26:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-07T14:17:29Z |
# Sequence-to-Sequence Neural Machine Translation
This is my faithful implementation of the sequence-to-sequence paper by Sutskever et al. (2014), "Sequence to Sequence Learning with Neural Networks". I built this to understand deep learning fundamentals while creating a production-ready German-to-English translation system.
## What This Is
I'm implementing a neural machine translation system that translates German sentences to English using the exact architecture described in the original Sutskever et al. (2014) paper. The model uses a deep LSTM encoder-decoder architecture with key innovations like input sequence reversal and beam search decoding.
The implementation stays true to the 2014 paper - no attention mechanisms, no transformer components, just pure encoder-decoder LSTMs as originally conceived. I trained it on the WMT19 German-English dataset, which contains 35 million sentence pairs.
## Key Features
- **Faithful Sutskever et al. Implementation**: Deep 4-layer LSTMs with 1000 hidden units
- **Input Sequence Reversal**: The crucial innovation that made seq2seq work
- **Beam Search Decoding**: Generates better translations than greedy decoding
- **Production-Scale Training**: Handles the full 35M WMT19 dataset with GPU optimization
- **Subword Tokenization**: SentencePiece tokenizers for handling large vocabularies
- **SGD with Momentum**: Using the original optimizer setup (lr=0.7, momentum=0.9)
## Quick Start with Pre-trained Model
If you want to try translations immediately, I've provided a pre-trained model:
```bash
# Set up environment
uv venv && source .venv/bin/activate
uv pip install -r requirements.txt
# Download pre-trained model (one-time setup)
python scripts/download_pretrained.py
# Start translating
python scripts/inference.py --interactive
```
The pre-trained model was trained on a 2M subset, so expect some limitations with vocabulary coverage. For best results, I recommend training on the full dataset.
## Training Your Own Model
### Environment Setup
```bash
# Create virtual environment
uv venv && source .venv/bin/activate
# Install dependencies
uv pip install -r requirements.txt
# Alternative: editable install
uv pip install -e .
```
### Data Preparation
Download and process the WMT19 dataset:
```bash
python scripts/data_preparation.py
```
This downloads the full 35M sentence pair dataset. We filter sentences by length and quality to ensure clean training data. The process is memory-efficient and uses streaming to handle the large dataset.
### Tokenization
Build subword tokenizers for your dataset:
```bash
python src/data/tokenization.py
```
This creates SentencePiece tokenizers with vocabularies sized for production use:
- German: 50,000 subword units
- English: 40,000 subword units
The tokenizers handle out-of-vocabulary words much better than word-level approaches and are essential for scaling to the full dataset.
### Training
Start training with the Sutskever architecture:
```bash
python scripts/train.py
```
The training script uses the exact hyperparameters from the original paper:
- 4-layer deep LSTMs
- 1000 hidden units per layer
- SGD with momentum (0.9)
- Learning rate: 0.7
- Gradient clipping: 5.0
- Batch size: 128 (scaled for GPU efficiency)
Training automatically detects and uses CUDA, MPS (Apple Silicon), or CPU. On a modern GPU, expect training to take several hours for the full dataset.
### Monitoring Progress
During training, we track:
- Loss curves and learning rate schedules
- Validation performance
- Training visualizations saved to `training_plots/`
- Model checkpoints saved to `checkpoints/`
The best model is automatically saved based on validation loss.
## Using Your Trained Model
### Interactive Translation
```bash
python scripts/inference.py --interactive
```
This starts a session where you can type German sentences and get English translations with beam search decoding.
### Single Sentence Translation
```bash
python scripts/inference.py --sentence "Guten Morgen, wie geht es dir?" --verbose
```
The `--verbose` flag shows the tokenization process and beam search candidates.
### Batch Translation
For translating multiple sentences, you can modify the inference script or use it programmatically.
## Architecture Details
### Encoder
- Deep 4-layer LSTM
- Input sequence reversal (key Sutskever innovation)
- 1000 hidden units per layer
- Dropout regularization between layers
### Decoder
- Deep 4-layer LSTM matching encoder
- Teacher forcing during training
- Autoregressive generation during inference
- Output projection to vocabulary
### Training Strategy
- Teacher forcing with shifted target sequences
- Cross-entropy loss with padding token masking
- Gradient clipping to prevent exploding gradients
- Learning rate decay schedule
### Inference
- Beam search with configurable beam size (default: 12)
- Length normalization for fair comparison
- Handles variable-length sequences efficiently
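For readers who want the shape of the architecture in code, here is an illustrative sketch (a hypothetical class, not the repository's actual modules): a deep LSTM encoder fed the reversed source sequence, whose final hidden and cell states seed a deep LSTM decoder trained with teacher forcing.
```python
# Illustrative encoder-decoder sketch of the Sutskever-style architecture.
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, hidden=1000, layers=4):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, num_layers=layers, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        src_rev = torch.flip(src_ids, dims=[1])               # input sequence reversal
        _, state = self.encoder(self.src_emb(src_rev))        # final (h, c) summarizes the source
        out, _ = self.decoder(self.tgt_emb(tgt_ids), state)   # teacher forcing on shifted targets
        return self.proj(out)                                 # logits over the target vocabulary

# Tiny dimensions for a quick shape check; the real model uses 4 layers x 1000 units.
model = TinySeq2Seq(src_vocab=50_000, tgt_vocab=40_000, hidden=64, layers=2)
logits = model(torch.randint(0, 50_000, (2, 7)), torch.randint(0, 40_000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 40000])
```
The reversal in the first line of `forward` is the key Sutskever trick: it shortens the distance between the start of the source sentence and the start of the target sentence, which makes optimization easier.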
## Implementation Philosophy
I stayed faithful to the 2014 paper because I wanted to understand how these foundational ideas work without modern enhancements. This implementation proves that the original seq2seq architecture, when properly implemented and scaled, can achieve impressive results on large datasets.
The codebase is designed for both learning and production use. Every component includes detailed documentation explaining the reasoning behind architectural choices and their connection to the original paper.
## Project Structure
```
src/
├── models/ # Encoder, Decoder, Seq2Seq, BeamSearch
├── data/ # Dataset loading, tokenization, batching
└── utils/ # Training visualization and utilities
scripts/ # Training pipeline and inference
data/ # Generated datasets, tokenizers, models
checkpoints/ # Saved model weights and training state
training_plots/ # Loss curves and training progress
```
## Hardware Requirements
- **Minimum**: 8GB RAM, modern CPU
- **Recommended**: 16GB+ RAM, CUDA-compatible GPU
- **Full Dataset**: 32GB+ RAM recommended for data processing
The implementation is optimized for memory efficiency and can handle the full dataset on modest hardware through streaming and chunked processing.
## Why I Built This
I wanted to understand neural machine translation from first principles. Rather than using modern frameworks that abstract away the core concepts, I implemented the foundational paper that started it all. This project taught me about deep RNNs, LSTMs, sequence modeling, and the engineering challenges of training large models.
|
lambdaeranga/llama-3.2-conversation-gguf
|
lambdaeranga
| 2025-08-07T14:24:21Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-07T13:48:27Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lambdaeranga
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pxiaoyu/our
|
pxiaoyu
| 2025-08-07T14:23:01Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T14:23:00Z |
---
license: apache-2.0
---
|
cassiehuang1385/lora_model
|
cassiehuang1385
| 2025-08-07T14:10:07Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T13:00:20Z |
---
license: apache-2.0
---
|
UzzyDizzy/ppo-SnowballTarget
|
UzzyDizzy
| 2025-08-07T14:07:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-08-07T14:07:39Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: UzzyDizzy/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
zekaemo/results-bayes-mbg
|
zekaemo
| 2025-08-07T14:04:10Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-06T20:55:48Z |
---
library_name: transformers
license: mit
base_model: indobenchmark/indobert-base-p2
tags:
- generated_from_trainer
model-index:
- name: results-bayes-mbg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results-bayes-mbg
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
madmage/Reinforce-PixelCopter
|
madmage
| 2025-08-07T14:00:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T13:59:51Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 6.40 +/- 6.41
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Devique/Calmiq
|
Devique
| 2025-08-07T14:00:08Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-06T22:02:59Z |
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Devique
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
camenduru/FLUX.1-Fill-dev-ungated
|
camenduru
| 2025-08-07T13:57:31Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"image-generation",
"flux",
"diffusion-single-file",
"en",
"license:other",
"diffusers:FluxFillPipeline",
"region:us"
] | null | 2025-08-07T13:28:32Z |
---
language:
- en
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
extra_gated_prompt: By clicking "Agree", you agree to the [FluxDev Non-Commercial License Agreement](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev/blob/main/LICENSE.md)
and acknowledge the [Acceptable Use Policy](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev/blob/main/POLICY.md).
tags:
- image-generation
- flux
- diffusion-single-file
---

`FLUX.1 Fill [dev]` is a 12 billion parameter rectified flow transformer capable of filling areas in existing images based on a text description.
For more information, please read our [blog post](https://blackforestlabs.ai/flux-1-tools/).
# Key Features
1. Cutting-edge output quality, second only to our state-of-the-art model `FLUX.1 Fill [pro]`.
2. Blends impressive prompt following with completing the structure of your source image.
3. Trained using guidance distillation, making `FLUX.1 Fill [dev]` more efficient.
4. Open weights to drive new scientific research, and empower artists to develop innovative workflows.
5. Generated outputs can be used for personal, scientific, and commercial purposes as described in the [`FLUX.1 [dev]` Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
# Usage
We provide a reference implementation of `FLUX.1 Fill [dev]`, as well as sampling code, in a dedicated [github repository](https://github.com/black-forest-labs/flux).
Developers and creatives looking to build on top of `FLUX.1 Fill [dev]` are encouraged to use this as a starting point.
## API Endpoints
The FLUX.1 models are also available in our API [bfl.ml](https://docs.bfl.ml/)

## Diffusers
To use `FLUX.1 Fill [dev]` with the 🧨 diffusers python library, first install or upgrade diffusers
```shell
pip install -U diffusers
```
Then you can use `FluxFillPipeline` to run the model
```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image
image = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup.png")
mask = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup_mask.png")
pipe = FluxFillPipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16).to("cuda")
image = pipe(
prompt="a white paper cup",
image=image,
mask_image=mask,
height=1632,
width=1232,
guidance_scale=30,
num_inference_steps=50,
max_sequence_length=512,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save(f"flux-fill-dev.png")
```
To learn more check out the [diffusers](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) documentation
---
# Limitations
- This model is not intended or able to provide factual information.
- As a statistical model this checkpoint might amplify existing societal biases.
- The model may fail to generate output that matches the prompts.
- Prompt following is heavily influenced by the prompting-style.
- There may be slight color shifts in areas that are not filled in
- Filling in complex textures may produce lines at the edges of the filled area.
# Out-of-Scope Use
The model and its derivatives may not be used
- In any way that violates any applicable national, federal, state, local or international law or regulation.
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; including but not limited to the solicitation, creation, acquisition, or dissemination of child exploitative content.
- To generate or disseminate verifiably false information and/or content with the purpose of harming others.
- To generate or disseminate personal identifiable information that can be used to harm an individual.
- To harass, abuse, threat
|
chhatramani/NyayaLM_v0.5_gemma3n4B
|
chhatramani
| 2025-08-07T13:56:58Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"ne",
"dataset:chhatramani/Nepali_Legal_QA",
"base_model:google/gemma-3n-E4B-it",
"base_model:finetune:google/gemma-3n-E4B-it",
"license:apache-2.0",
"region:us"
] | null | 2025-08-06T11:45:02Z |
---
license: apache-2.0
tags:
- unsloth
datasets:
- chhatramani/Nepali_Legal_QA
language:
- ne
base_model:
- google/gemma-3n-E4B-it
---
# NyayaLM v0.5: Nepali Legal Assistant
## Model Description
NyayaLM v0.5 is a fine-tuned version of Google's Gemma 3n 4B model, specifically designed to provide accurate legal information in Nepali. This model bridges the justice gap in Nepal by making legal knowledge accessible to everyone, running entirely offline on personal computers.
**Key Features:**
- 🇳🇵 Nepali language support for legal queries
- 💻 Offline operation (no internet required)
- 🔒 Privacy-first (all processing happens locally)
- ⚡ Efficient performance on consumer hardware
- 📚 Trained on 61+ Nepali legal documents
## Model Details
- **Base Model**: `unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit`
- **Fine-tuned by**: Chhatramani
- **Languages**: Nepali (primary), English (secondary)
- **Domain**: Nepalese Law
- **Context Length**: 2048 tokens
- **Quantization**: 4-bit (during training)
- **Parameter Count**: 4B (base), 21M trainable (LoRA adapters)
## Intended Use
### Primary Use Cases
- Answering legal questions in Nepali
- Explaining legal concepts in simple language
- Providing information about Nepalese laws and rights
- Supporting legal research and education
- Assisting NGOs and legal aid organizations
### Target Users
- Rural communities with limited access to lawyers
- Students studying law in Nepal
- NGOs working on legal empowerment
- Government officials needing quick legal reference
- Citizens seeking to understand their legal rights
## How to Use
### Installation
1. Install required libraries:
```bash
pip install transformers torch accelerate bitsandbytes
```
### Load Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_name = "chhatramani/NyayaLM_v0.5_gemma3n4B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.float16,
load_in_4bit=True,
)
```
### Use with Chat Template
```python
from unsloth.chat_templates import get_chat_template
# Get Gemma-3 chat template
tokenizer = get_chat_template(tokenizer, chat_template="gemma-3")
# Create conversation
messages = [
{"role": "user", "content": "बालबालिका अधिकार ऐनको मुख्य उद्देश्य के हो?"}
]
# Apply chat template
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Generate response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Citation
If you use this model in your research or applications, please cite:
```bibtex
@misc{nyayalm_v0.5,
title={NyayaLM v0.5: A Nepali Legal Assistant Based on Gemma 3n},
author={Chhatramani},
year={2025},
month={August},
url={https://huggingface.co/chhatramani/NyayaLM_v0.5_gemma3n4B},
note={Google Gemma 3n Impact Challenge Submission}
}
```
### Acknowledgments
- **Google**: For the Gemma 3n model and the Impact Challenge opportunity
- **Unsloth**: For the efficient training framework
- **Nepali Legal Community**: For domain expertise and validation
- **Open Source Community**: For the tools and libraries that made this project possible
|
phospho-app/biodunch-ACT_BBOX-pick_ball-0r8aa
|
phospho-app
| 2025-08-07T13:53:39Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/pick_ball_bboxes",
"region:us"
] |
robotics
| 2025-08-07T13:27:51Z |
---
datasets: phospho-app/pick_ball_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/pick_ball_bboxes](https://huggingface.co/datasets/phospho-app/pick_ball_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
avigil/AIArtjak_Backup
|
avigil
| 2025-08-07T13:47:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-07T13:17:47Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Backup of some SD1.5 embeddings created by AIArtjak
<!-- Provide a quick summary of what the model is/does. -->
These models were originally uploaded to Civitai.
|
lukasellinger/homonymy-dpo-llama-v3p1-8b-instruct
|
lukasellinger
| 2025-08-07T13:43:56Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"fr",
"ar",
"ru",
"zh",
"dataset:lukasellinger/homonymy-dpo",
"arxiv:2507.11981",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-13T13:22:38Z |
---
library_name: transformers
datasets:
- lukasellinger/homonymy-dpo
base_model:
- meta-llama/Llama-3.1-8B-Instruct
language:
- en
- fr
- ar
- ru
- zh
---
## License
This model is a fine-tuned variant of Meta’s Llama 3.1 8B and is distributed under the Llama 3.1 Community License.
Built with Llama.
- Original model: [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
- License: [Llama 3.1 Community License](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/blob/main/LICENSE)
---
## Citation
If you use any of this work, please cite the following paper:
```tex
@misc{ellinger_simplifications_2025,
title = {Simplifications are {Absolutists}: {How} {Simplified} {Language} {Reduces} {Word} {Sense} {Awareness} in {LLM}-{Generated} {Definitions}},
url = {http://arxiv.org/abs/2507.11981},
author = {Ellinger, Lukas and Anschütz, Miriam and Groh, Georg},
annote = {Comment: Accepted by RANLP 2025},
}
```
|
phospho-app/MaxFridge-ACT_BBOX-example_dataset_v2-7ytaz
|
phospho-app
| 2025-08-07T13:43:38Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/example_dataset_v2_bboxes",
"region:us"
] |
robotics
| 2025-08-07T13:10:04Z |
---
datasets: phospho-app/example_dataset_v2_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/example_dataset_v2_bboxes](https://huggingface.co/datasets/phospho-app/example_dataset_v2_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
mjbuehler/gpt-oss-20b-multilingual-reasoner
|
mjbuehler
| 2025-08-07T13:40:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:lamm-mit/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T12:06:22Z |
---
base_model: openai/gpt-oss-20b
datasets: lamm-mit/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [lamm-mit/Multilingual-Thinking](https://huggingface.co/datasets/lamm-mit/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mjbuehler/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
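The exact training script is not included in this card. As a rough orientation, a minimal TRL SFT setup for this base model and dataset might look like the sketch below; the split name and all hyperparameters are illustrative assumptions rather than the values actually used.
```python
# Minimal TRL SFT sketch (illustrative only; hyperparameters and split are assumptions)
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("lamm-mit/Multilingual-Thinking", split="train")

# Loading a 20B model this way needs substantial GPU memory; sharding/quantization is omitted for brevity.
model = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

args = SFTConfig(
    output_dir="gpt-oss-20b-multilingual-reasoner",
    per_device_train_batch_size=1,   # assumption
    gradient_accumulation_steps=8,   # assumption
    learning_rate=2e-5,              # assumption
    num_train_epochs=1,              # assumption
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```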
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
doubanmov/DeadToRightsTwHdZh
|
doubanmov
| 2025-08-07T13:40:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-07T13:39:51Z |
# FULLHD ▷ Dead To Rights (南京照相馆, 2025) Full Version [1080P.HD] HD Movie
Search online for the full version of Dead To Rights (南京照相馆) | Download the full version online | Full version | Complete movie | Full HD online | Full-length film | Watch for free | Download for free | HD 1080p / 720p | BT.709 | HDTV 1080i | BluRay | BD | Dead To Rights
Dead To Rights (南京照相馆, 2025) full version, watch online in 1080p | 460p - 720p - 1080p - BRRip - DvdRip - 4KUHD
<p><strong>📺 Watch and download ➫️ <a href="https://cuevana3top.biz/zh/movie/1311031?ref=face" target="_blank" rel="noopener">Dead To Rights (南京照相馆) 2025</a></strong></p>
<p><strong>📥 Download HD ➥ <a href="https://cuevana3top.biz/zh/movie/1311031?ref=face" target="_blank" rel="noopener">Dead To Rights (南京照相馆) 2025</a></strong></p>
Director: Shen Ao
Screenwriters: Xu Luyang / Zhang Ke / Shen Ao
Starring: Liu Haoran / Wang Chuanjun / Gao Ye / Wang Xiao / Zhou You / and others
Genre: Drama / History / War
Country/Region: Mainland China
Languages: Mandarin Chinese / Japanese / Nanjing dialect
Release date: 2025-07-25 (Mainland China)
Runtime: 137 minutes
Also known as: 吉祥照相馆 (Jixiang Photo Studio) / Dead To Rights
IMDb: tt36598036
Synopsis of Dead To Rights (南京照相馆):
The story draws on real incriminating photographs taken by the Japanese army during the Nanjing Massacre. A group of Nanjing residents takes shelter in the Jixiang photo studio. To survive a little longer, they are forced to help a Japanese army photographer develop film, and unexpectedly print photographs that prove the army's massacre of the city. They had only hoped to stay alive through the massacre, but faced with the atrocities committed in Nanjing, they resolve to keep the negatives safe...
|
ESAYGUI/hub
|
ESAYGUI
| 2025-08-07T13:39:20Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T13:19:52Z |
---
license: apache-2.0
---
|
ekiprop/SST-2-GLoRA-p30-seed42
|
ekiprop
| 2025-08-07T13:37:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T13:24:34Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-GLoRA-p30-seed42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-GLoRA-p30-seed42
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2005
- Accuracy: 0.9450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
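For readers who want to approximate this run, a minimal sketch using plain PEFT LoRA on GLUE/SST-2 follows. The LoRA rank, target modules, and data pipeline are assumptions (the actual GLoRA setup is not documented in this card); only the hyperparameters listed above are taken from the run.

```python
# Illustrative PEFT LoRA setup for roberta-base on SST-2 (rank and target modules are assumptions)
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Wrap the classifier with LoRA adapters; only the adapters (and the classification head) are trained.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"], task_type="SEQ_CLS")
model = get_peft_model(model, lora_cfg)

dataset = load_dataset("glue", "sst2")
encoded = dataset.map(lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="sst2-glora-p30-seed42",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    processing_class=tokenizer,
)
trainer.train()
```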
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.381 | 0.0950 | 200 | 0.2361 | 0.9197 |
| 0.2964 | 0.1900 | 400 | 0.1967 | 0.9232 |
| 0.2676 | 0.2850 | 600 | 0.1929 | 0.9300 |
| 0.255 | 0.3800 | 800 | 0.1968 | 0.9312 |
| 0.2481 | 0.4751 | 1000 | 0.2261 | 0.9266 |
| 0.2339 | 0.5701 | 1200 | 0.2094 | 0.9335 |
| 0.2322 | 0.6651 | 1400 | 0.1797 | 0.9278 |
| 0.2351 | 0.7601 | 1600 | 0.1818 | 0.9323 |
| 0.2285 | 0.8551 | 1800 | 0.1799 | 0.9335 |
| 0.2172 | 0.9501 | 2000 | 0.1762 | 0.9369 |
| 0.2344 | 1.0451 | 2200 | 0.1733 | 0.9392 |
| 0.1931 | 1.1401 | 2400 | 0.1853 | 0.9427 |
| 0.2069 | 1.2352 | 2600 | 0.1881 | 0.9427 |
| 0.2064 | 1.3302 | 2800 | 0.1900 | 0.9392 |
| 0.195 | 1.4252 | 3000 | 0.1812 | 0.9358 |
| 0.197 | 1.5202 | 3200 | 0.1837 | 0.9255 |
| 0.2059 | 1.6152 | 3400 | 0.1735 | 0.9404 |
| 0.1873 | 1.7102 | 3600 | 0.2167 | 0.9369 |
| 0.1913 | 1.8052 | 3800 | 0.2065 | 0.9381 |
| 0.2068 | 1.9002 | 4000 | 0.1831 | 0.9392 |
| 0.1934 | 1.9952 | 4200 | 0.2152 | 0.9381 |
| 0.1832 | 2.0903 | 4400 | 0.1889 | 0.9438 |
| 0.1788 | 2.1853 | 4600 | 0.1971 | 0.9381 |
| 0.1799 | 2.2803 | 4800 | 0.2254 | 0.9358 |
| 0.1718 | 2.3753 | 5000 | 0.1843 | 0.9427 |
| 0.1781 | 2.4703 | 5200 | 0.2005 | 0.9450 |
| 0.1709 | 2.5653 | 5400 | 0.2031 | 0.9427 |
| 0.1901 | 2.6603 | 5600 | 0.1868 | 0.9392 |
| 0.1741 | 2.7553 | 5800 | 0.1963 | 0.9381 |
| 0.169 | 2.8504 | 6000 | 0.1887 | 0.9438 |
| 0.1734 | 2.9454 | 6200 | 0.1904 | 0.9358 |
| 0.1635 | 3.0404 | 6400 | 0.2192 | 0.9369 |
| 0.1522 | 3.1354 | 6600 | 0.2052 | 0.9381 |
| 0.1568 | 3.2304 | 6800 | 0.2120 | 0.9335 |
| 0.1654 | 3.3254 | 7000 | 0.2025 | 0.9369 |
| 0.1536 | 3.4204 | 7200 | 0.2140 | 0.9358 |
| 0.1555 | 3.5154 | 7400 | 0.2189 | 0.9404 |
| 0.1556 | 3.6105 | 7600 | 0.2076 | 0.9438 |
| 0.1634 | 3.7055 | 7800 | 0.1918 | 0.9404 |
| 0.1621 | 3.8005 | 8000 | 0.2074 | 0.9381 |
| 0.1562 | 3.8955 | 8200 | 0.1974 | 0.9392 |
| 0.1526 | 3.9905 | 8400 | 0.2008 | 0.9438 |
| 0.1472 | 4.0855 | 8600 | 0.2032 | 0.9438 |
| 0.139 | 4.1805 | 8800 | 0.2229 | 0.9392 |
| 0.1446 | 4.2755 | 9000 | 0.2163 | 0.9392 |
| 0.1443 | 4.3705 | 9200 | 0.2167 | 0.9369 |
| 0.1375 | 4.4656 | 9400 | 0.2169 | 0.9381 |
| 0.1338 | 4.5606 | 9600 | 0.2258 | 0.9415 |
| 0.1467 | 4.6556 | 9800 | 0.2139 | 0.9381 |
| 0.1459 | 4.7506 | 10000 | 0.2051 | 0.9427 |
| 0.1524 | 4.8456 | 10200 | 0.2026 | 0.9415 |
| 0.1494 | 4.9406 | 10400 | 0.2034 | 0.9438 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF
|
mradermacher
| 2025-08-07T13:34:58Z | 4,244 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"trl",
"mot",
"code",
"science",
"math",
"mixture-of-thoughts",
"supreme2",
"stem",
"text-generation-inference",
"reasoning",
"vlm",
"en",
"zh",
"dataset:open-r1/Mixture-of-Thoughts",
"dataset:nvidia/OpenCodeReasoning",
"base_model:prithivMLmods/Lacaille-MoT-4B-Supreme2",
"base_model:quantized:prithivMLmods/Lacaille-MoT-4B-Supreme2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-02T14:09:48Z |
---
base_model: prithivMLmods/Lacaille-MoT-4B-Supreme2
datasets:
- open-r1/Mixture-of-Thoughts
- nvidia/OpenCodeReasoning
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- moe
- trl
- mot
- code
- science
- math
- mixture-of-thoughts
- supreme2
- stem
- text-generation-inference
- reasoning
- vlm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Lacaille-MoT-4B-Supreme2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lacaille-MoT-4B-Supreme2-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
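As a minimal illustration (not an official recipe), one of the quants listed below can also be loaded locally with `llama-cpp-python`; the chosen file, context size, and generation settings here are assumptions.

```python
# Illustrative local inference with llama-cpp-python; file name and settings are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="Lacaille-MoT-4B-Supreme2.i1-Q4_K_M.gguf",  # downloaded from this repository
    n_ctx=4096,        # assumed context window
    n_gpu_layers=-1,   # offload all layers if a GPU-enabled build is installed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize mixture-of-thoughts reasoning in one paragraph."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```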
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
phogen/gemma-3-4b-pt-25pct-lora-proposal
|
phogen
| 2025-08-07T13:30:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T13:30:43Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/biodunch-ACT_BBOX-pick_ball-j7tty
|
phospho-app
| 2025-08-07T13:26:45Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"act",
"robotics",
"dataset:biodunch/pick_ball",
"region:us"
] |
robotics
| 2025-08-07T13:26:41Z |
---
datasets: biodunch/pick_ball
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Image key 'camera 2' not found in the dataset info_model. Please check the image keys in the dataset and pass the appropriate parameter.
Available image keys: ['observation.images.main', 'observation.images.secondary_0', 'observation.images.secondary_1']
```
## Training parameters:
- **Dataset**: [biodunch/pick_ball](https://huggingface.co/datasets/biodunch/pick_ball)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
PersonalAILab/AFM-WebAgent-7B-sft
|
PersonalAILab
| 2025-08-07T13:25:25Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-06T13:41:32Z |
# Model Introduction
We introduce Agent Foundation Models (AFMs), a new family of models built on Qwen that natively performs end-to-end, multi-turn, multi-tool problem solving without external frameworks or manual prompting. Built on the Chain-of-Agents (CoA) paradigm, each AFM dynamically activates specialized tool and role-playing agents inside a single forward pass, emulating the cooperative reasoning of a full multi-agent system. To train these models, we distilled high-performing multi-agent trajectories into agentic supervised fine-tuning data and further optimized performance with agentic reinforcement learning on verifiable tasks. AFMs set new state-of-the-art results on benchmarks for both web and code agents, and we release all model weights, training code, and datasets to accelerate future research on agentic AI.
For more details, please refer to our [paper]() and [GitHub]().
# Model Downloads
| Model | Download | Backbone Model | Licences|
| --------------------- | ------ | --------------------------- |--------------------------- |
| AFM-CodeAgent-7B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-RL-7B | [🤗 **HuggingFace**]() |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-32B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-RL-32B | [🤗 **HuggingFace**]() |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-MHQA-Agent-3B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Apache License 2.0|
| AFM-MHQA-Agent-3B-rl | [🤗 **HuggingFace**]() |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Apache License 2.0|
| AFM-MHQA-Agent-7B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-MHQA-Agent-7B-rl | [🤗 **HuggingFace**]() |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-WebAgent-7B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-sft | [🤗 **HuggingFace**]() |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
| AFM-WebAgent-7B-rl | [🤗 **HuggingFace**]() |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-rl | [🤗 **HuggingFace**]() |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
# Data Downloads
TODO: add hf link after upload
- AFM-CodeAgent-SFT-Dataset
- AFM-CodeAgent-RL-Dataset
- AFM-WebAgent-SFT-Dataset
- AFM-WebAgent-RL-Dataset
- AFM-MHQA-SFT-Dataset
- AFM-MHQA-RL-Dataset
# License and Usage Information
## 1. Core License
This model is licensed under the **Apache License 2.0**, granting users the following rights:
✅ Commercial deployment
✅ Source code modification
✅ Patent authorization
✅ Closed-source derivatives
⚠️ Prohibition on using model names/logos for promotion without written authorization
⚠️ No warranties provided
## 2. Inheritance Declaration
This model is based on improvements from **Qwen2.5** (Apache 2.0 License). You must:
* Retain original Qwen copyright notices in derivative works.
* Clearly document changes made in modification notes.
* Adhere to any additional usage restrictions imposed by Qwen.
|
laysjwc/llama2-step3200
|
laysjwc
| 2025-08-07T13:19:01Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T07:51:17Z |
---
license: apache-2.0
---
|
AngelSlim/Hunyuan-4B-Instruct_eagle3
|
AngelSlim
| 2025-08-07T13:17:28Z | 5 | 0 | null |
[
"pytorch",
"hunyuan_v1_dense",
"hunyuan",
"eagle3",
"eagle",
"region:us"
] | null | 2025-08-04T07:03:11Z |
---
tags:
- hunyuan
- eagle3
- eagle
---
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo_light.png?raw=true">
<img alt="AngelSlim" src="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo.png?raw=true" width=55%>
</picture>
</p>
<h3 align="center">
Dedicated to building a more intuitive, comprehensive, and efficient LLMs compression toolkit.
</h3>
<p align="center">
📖 <a href="https://angelslim.readthedocs.io/">Documentation</a>   |   🤗 <a href="https://huggingface.co/AngelSlim">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/AngelSlim">ModelScope</a>   |   💬 <a href="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/angel_slim_wechat.png?raw=true">WeChat</a>
<br>
</p>
## Table of Contents
- [Latest Updates](#latest-updates)
- [Key Features](#key-features)
- [Supported Models](#supported-models)
- [How to Use](#how-to-use)
- [Install AngelSlim](#install-angelslim)
- [Quick Start](#quick-start)
  - [Deployment & Evaluation](#deployment)
- [Benchmark](#benchmark)
- [License](#license)
- [Citation](#citation)
- [Technical Discussion](#technical-discussion)
## 📣Latest Updates
- [25/07/04] We now support quantization for Hunyuan/Qwen2.5/Qwen3/DeepSeek-R1-Distill-Qwen and other models, including INT8/FP8/INT4 algorithms.
We also open-source the Eagle3 model weights for Qwen3-8B.
Coming soon:
- [ ] Support W4A8 quantization for DeepSeek-R1.
- [ ] Support quantization for multimodal models like Qwen-VL.
- [ ] Release of new algorithm for speculative sampling.
## 🌟Key Features
- **Highly Integrated**: This toolkit integrates mainstream compression algorithms into a unified framework, offering developers one-click access with exceptional ease of use.
- **Continuous Innovation**: Beyond integrating widely-used industry algorithms, we are continuously researching better compression algorithms, which will be gradually open-sourced in the future.
- **Performance-Driven**: We continuously optimize end-to-end performance in model compression workflows and algorithm deployment, such as enabling quantization of models like Qwen3-235B and DeepSeek-R1 on a single GPU.
## 💼Supported Models
### Quantization
Currently supports the following LLMs, including Hunyuan-Dense, Hunyuan-MoE, Qwen3-Dense, Qwen3-MoE, Qwen2.5, DeepSeek-R1 distilled Qwen models, and QwQ:
| Model | FP8-Dynamic | FP8-Static | INT8-Dynamic | INT4-GPTQ | INT4-AWQ |
| --------------------------------------------------------------------------------------------------------------------------- | ----------- | ---------- | ------------ | --------- | -------- |
| [Hunyuan-Dense](https://huggingface.co/tencent/Hunyuan-7B-Instruct) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Hunyuan-MoE](https://huggingface.co/collections/tencent/hunyuan-a13b-685ec38e5b46321e3ea7c4be) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-Dense](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-MoE](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen2.5](https://huggingface.co/collections/AngelSlim/qwen2-25-quant-68652d6cbdf5c0d4b1c4499a) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [DeepSeek-R1-Distill-Qwen](https://huggingface.co/collections/AngelSlim/deepseek-r1-distill-quant-68652f16a9c206b030b05f7f) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [QwQ](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
### Speculative Decoding
The Eagle3 weights for the Qwen3 and Hunyuan series models are now available.
| Qwen3 Models | Hunyuan Models |
| ----------|----------|
| ✅ [Qwen3-1.7B](https://huggingface.co/AngelSlim/Qwen3-1.7B_eagle3) |✅ [Hunyuan-1.8B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-1.8B-Instruct_eagle3) |
| ✅ [Qwen3-4B](https://huggingface.co/AngelSlim/Qwen3-4B_eagle3) |✅ [Hunyuan-4B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-4B-Instruct_eagle3) |
| ✅ [Qwen3-8B](https://huggingface.co/AngelSlim/Qwen3-8B_eagle3) |✅ [Hunyuan-7B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-7B-Instruct_eagle3) |
| ✅ [Qwen3-14B](https://huggingface.co/AngelSlim/Qwen3-14B_eagle3) |
| ✅ [Qwen3-32B](https://huggingface.co/AngelSlim/Qwen3-32B_eagle3) |
| ✅ [Qwen3-30B-A3B](https://huggingface.co/AngelSlim/Qwen3-a3B_eagle3) |
## 🛎️How to Use
### Install AngelSlim
We recommend using `pip` to install the latest stable version of `AngelSlim`:
```shell
pip install angelslim
```
Alternatively, you can clone the repository and install from source in editable mode:
```shell
cd AngelSlim && python setup.py install
```
For more detailed installation instructions, please refer to the [Installation Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/installation.html).
### Quick Start
After installing `AngelSlim`, you can quickly start by running the following script to perform static `FP8` quantization on the `Qwen3-1.7B` model:
* One-click Start
```shell
python3 tools/run.py -c configs/qwen3/fp8_static/qwen3-1_7b_fp8_static.yaml
```
This example will load the HuggingFace model and perform activation value calibration using the `dataset` specified in the config file, saving the quantized model weights.
* Code-based Start
To perform dynamic `FP8` quantization on `Qwen3-1.7B`:
```python
from angelslim.engine import Engine
slim_engine = Engine()
# Prepare model
slim_engine.prepare_model(model_name="Qwen", model_path="Qwen/Qwen3-1.7B",)
# Initialize compressor
slim_engine.prepare_compressor("PTQ", default_method="fp8_dynamic")
# Compress model
slim_engine.run()
# Save compressed model
slim_engine.save("./output")
```
For more details, please refer to the [Quick Start Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/quickstrat.html).
### 🖥️ Deployment and Testing
#### 1. API Service Deployment
After specifying the quantized model path `MODEL_PATH`, you can deploy an OpenAI-compatible API service using the following LLM inference frameworks:
**vLLM**
Use the following script to launch a [vLLM](https://github.com/vllm-project/vllm) server, recommended version `vllm>=0.8.5.post1`. For MoE INT8 quantized models, `vllm>=0.9.0` is required.
```shell
bash deploy/run_vllm.sh $MODEL_PATH
```
**SGLang**
Use the following script to launch a [SGLang](https://github.com/sgl-project/sglang) server, recommended version `sglang>=0.4.6.post1`.
```shell
bash deploy/run_sglang.sh $MODEL_PATH
```
#### 2. Service Invocation
Invoke requests via [OpenAI's API format](https://platform.openai.com/docs/api-reference/introduction):
```shell
bash deploy/openai.sh $MODEL_PATH
```
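For reference, the same request can also be issued from Python with the official `openai` client; the base URL, port, and prompt below are assumptions about the local deployment.

```python
# Illustrative OpenAI-compatible request against a locally deployed vLLM/SGLang server.
# The base URL/port and prompt are assumptions; adjust them to your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# The served model name is whatever the server registered (often the local model path);
# it can be discovered via the /v1/models endpoint.
model_name = client.models.list().data[0].id

response = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "Briefly explain FP8 static quantization."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```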
#### 3. Performance Evaluation
Evaluate the performance of the quantized model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), recommended version `lm-eval>=0.4.8`:
```shell
bash deploy/lm_eval.sh $MODEL_PATH
```
For more details, please refer to the [Deployment Documentation](https://angelslim.readthedocs.io/zh-cn/latest/deployment/deploy.html).
## 📈 Benchmark
### (1) Quantization
The performance test results for selected models are shown below. For the complete benchmark, refer to the [Benchmark documentation](https://angelslim.readthedocs.io/zh-cn/latest/performance/quantization/benchmarks.html)
#### Hunyuan Series Models
Benchmark results for the `Hunyuan-A13B-Instruct` model with `FP8` and `INT4-GPTQ` quantization algorithms on datasets including `AIME 2024`, `GSM8K`, `BBH`, and `DROP`:
| Bench | Hunyuan-A13B-Instruct | Hunyuan-A13B-Instruct-FP8 | Hunyuan-A13B-Instruct-Int4-GPTQ |
|:---------:|:---------------------:|:-------------------------:|:-------------------------------:|
| AIME 2024 | 87.3 | 86.7 | 86.7 |
| GSM8K | 94.39 | 94.01 | 94.24 |
| BBH | 89.1 | 88.34 | 87.91 |
| DROP | 91.1 | 91.1 | 91.05 |
#### Qwen3 Series Models
Benchmark results for Qwen3 series models with `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU`, `GSM8K`, and `HUMANEVAL`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th><th>HUMANEVAL</th></tr>
</thead>
<tbody>
<tr><td rowspan="4">Qwen3-0.6B</td><td>BF16</td><td>45.84</td><td>47.21</td><td>42.99</td><td>19.51</td></tr>
<tr><td>FP8-Static</td><td>45.99</td><td>46.87</td><td>38.06</td><td>18.90</td></tr>
<tr><td>FP8-Dynamic</td><td>45.99</td><td>46.93</td><td>38.29</td><td>20.73</td></tr>
<tr><td>INT8-Dynamic</td><td>45.17</td><td>46.95</td><td>41.17</td><td>21.34</td></tr>
<tr><td rowspan="6">Qwen3-8B</td><td>BF16</td><td>79.27</td><td>74.78</td><td>87.79</td><td>63.41</td></tr>
<tr><td>FP8-Static</td><td>78.23</td><td>74.79</td><td>86.96</td><td>62.20</td></tr>
<tr><td>FP8-Dynamic</td><td>78.45</td><td>74.75</td><td>87.64</td><td>62.80</td></tr>
<tr><td>INT8-Dynamic</td><td>78.01</td><td>74.84</td><td>86.96</td><td>67.07</td></tr>
<tr><td>INT4-GPTQ</td><td>77.19</td><td>73.26</td><td>86.43</td><td>62.20</td></tr>
<tr><td>INT4-AWQ</td><td>76.15</td><td>73.59</td><td>86.96</td><td>63.41</td></tr>
<tr><td rowspan="6">Qwen3-14B</td><td>BF16</td><td>83.06</td><td>78.90</td><td>88.40</td><td>55.49</td></tr>
<tr><td>FP8-Static</td><td>82.62</td><td>78.57</td><td>89.46</td><td>57.32</td></tr>
<tr><td>FP8-Dynamic</td><td>82.24</td><td>78.92</td><td>88.32</td><td>52.44</td></tr>
<tr><td>INT8-Dynamic</td><td>81.87</td><td>78.13</td><td>86.28</td><td>56.10</td></tr>
<tr><td>INT4-GPTQ</td><td>81.05</td><td>78.02</td><td>87.34</td><td>57.93</td></tr>
<tr><td>INT4-AWQ</td><td>82.02</td><td>77.68</td><td>84.23</td><td>61.59</td></tr>
<tr><td rowspan="5">Qwen3-32B</td><td>BF16</td><td>86.55</td><td>82.00</td><td>74.53</td><td>37.80</td></tr>
<tr><td>FP8-Static</td><td>86.92</td><td>81.78</td><td>70.20</td><td>39.63</td></tr>
<tr><td>FP8-Dynamic</td><td>86.55</td><td>81.89</td><td>70.43</td><td>38.41</td></tr>
<tr><td>INT4-GPTQ</td><td>86.18</td><td>81.01</td><td>-</td><td>43.29</td></tr>
<tr><td>INT4-AWQ</td><td>86.18</td><td>81.54</td><td>-</td><td>36.59</td></tr>
<tr><td rowspan="4">Qwen3-30B-A3B</td><td>BF16</td><td>83.66</td><td>79.36</td><td>89.99</td><td>31.71</td></tr>
<tr><td>FP8-Static</td><td>83.95</td><td>79.47</td><td>89.01</td><td>31.10</td></tr>
<tr><td>FP8-Dynamic</td><td>84.10</td><td>79.40</td><td>89.16</td><td>32.93</td></tr>
<tr><td>INT8-Dynamic</td><td>83.36</td><td>79.48</td><td>89.16</td><td>34.15</td></tr>
<tr><td rowspan="4">Qwen3-235B-A22B</td><td>BF16</td><td>89.60</td><td>86.28</td><td>85.29</td><td>27.44</td></tr>
<tr><td>FP8-Static</td><td>89.67</td><td>86.19</td><td>86.96</td><td>27.44</td></tr>
<tr><td>FP8-Dynamic</td><td>89.67</td><td>86.18</td><td>85.22</td><td>28.05</td></tr>
<tr><td>INT8-Dynamic</td><td>88.93</td><td>86.20</td><td>86.20</td><td>23.78</td></tr>
<tr><td rowspan="5">QwQ-32B</td><td>BF16</td><td>85.74</td><td>82.03</td><td>73.31</td><td>42.68</td></tr>
<tr><td>FP8-Static</td><td>85.44</td><td>81.91</td><td>75.36</td><td>42.68</td></tr>
<tr><td>FP8-Dynamic</td><td>85.07</td><td>81.93</td><td>75.66</td><td>42.07</td></tr>
<tr><td>INT4-GPTQ</td><td>84.03</td><td>81.26</td><td>68.23</td><td>45.73</td></tr>
<tr><td>INT4-AWQ</td><td>83.58</td><td>81.01</td><td>68.69</td><td>43.29</td></tr>
</tbody>
</table>
#### Other Models
Benchmark results for other models with `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU` and `GSM8K`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th></tr>
</thead>
<tbody>
<tr><td rowspan="3">Qwen2.5-1.5B-Instruct</td><td>BF16</td><td>67.01</td><td>60.05</td><td>54.28</td></tr>
<tr><td>FP8-Static</td><td>66.27</td><td>60.23</td><td>-</td></tr>
<tr><td>FP8-Dynamic</td><td>66.79</td><td>60.08</td><td>51.71</td></tr>
<tr><td rowspan="5">Qwen2.5-7B-Instruct</td><td>BF16</td><td>81.20</td><td>74.55</td><td>79.98</td></tr>
<tr><td>FP8-Static</td><td>81.13</td><td>74.03</td><td>79.30</td></tr>
<tr><td>FP8-Dynamic</td><td>80.31</td><td>74.07</td><td>79.00</td></tr>
<tr><td>INT4-GPTQ</td><td>79.05</td><td>73.05</td><td>74.75</td></tr>
<tr><td>INT4-AWQ</td><td>79.35</td><td>73.22</td><td>79.38</td></tr>
<tr><td rowspan="5">Qwen2.5-32B-Instruct</td><td>BF16</td><td>87.30</td><td>83.21</td><td>81.73</td></tr>
<tr><td>FP8-Static</td><td>87.59</td><td>83.08</td><td>81.58</td></tr>
<tr><td>FP8-Dynamic</td><td>87.30</td><td>83.04</td><td>81.58</td></tr>
<tr><td>INT4-GPTQ</td><td>86.70</td><td>82.45</td><td>82.03</td></tr>
<tr><td>INT4-AWQ</td><td>87.00</td><td>82.64</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-7B</td><td>BF16</td><td>53.49</td><td>53.80</td><td>75.74</td></tr>
<tr><td>FP8-Static</td><td>53.57</td><td>54.17</td><td>76.19</td></tr>
<tr><td>FP8-Dynamic</td><td>52.97</td><td>54.13</td><td>74.15</td></tr>
<tr><td>INT4-GPTQ</td><td>51.86</td><td>52.44</td><td>75.89</td></tr>
<tr><td>INT4-AWQ</td><td>53.49</td><td>53.70</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-14B</td><td>BF16</td><td>77.71</td><td>74.28</td><td>85.67</td></tr>
<tr><td>FP8-Static</td><td>77.56</td><td>74.66</td><td>86.73</td></tr>
<tr><td>FP8-Dynamic</td><td>76.82</td><td>74.63</td><td>87.11</td></tr>
<tr><td>INT4-GPTQ</td><td>74.29</td><td>72.37</td><td>84.61</td></tr>
<tr><td>INT4-AWQ</td><td>74.81</td><td>73.00</td><td>86.05</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-32B</td><td>BF16</td><td>84.18</td><td>80.89</td><td>87.41</td></tr>
<tr><td>FP8-Static</td><td>83.43</td><td>80.90</td><td>87.57</td></tr>
<tr><td>FP8-Dynamic</td><td>83.73</td><td>81.10</td><td>86.43</td></tr>
<tr><td>INT4-GPTQ</td><td>84.10</td><td>79.80</td><td>86.73</td></tr>
<tr><td>INT4-AWQ</td><td>82.84</td><td>80.15</td><td>87.19</td></tr>
</tbody>
</table>
### (2) Speculative Decoding
#### Qwen3 Series Models
Benchmark results for Qwen3 series models with the `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:
<table>
<thead>
<tr>
<th> </th><th> </th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=0</strong></td></tr> -->
<tr><td rowspan="6"><strong>T=0</strong></td>
<td>Qwen3-1.7B</td><td>2.05x</td><td>2.81</td><td>2.07x</td><td>2.93</td><td>2.11x</td><td>2.98</td><td>1.93x</td><td>2.69</td><td>2.04x</td><td>2.85</td></tr>
<tr> <td>Qwen3-4B</td><td>2.21x</td><td>3.01</td><td>2.36x</td><td>3.24</td><td>2.42x</td><td>3.13</td><td>2.32x</td><td>2.75</td><td>2.33x</td><td>3.03</td></tr>
<tr><td>Qwen3-8B</td><td>2.65x</td><td>3.87</td><td>2.64x</td><td>3.82</td><td>2.86x</td><td>4.10</td><td>2.58x</td><td>3.55</td><td>2.68x</td><td>3.83</td></tr>
<tr><td>Qwen3-14B</td><td>2.42x</td><td>3.38</td><td>2.57x</td><td>3.58</td><td>2.75x</td><td>3.77</td><td>2.27x</td><td>3.11</td><td>2.50x</td><td>3.46</td></tr>
<tr><td>Qwen3-32B</td><td>2.39x</td><td>2.78</td><td>2.37x</td><td>2.81</td><td>2.47x</td><td>2.92</td><td>2.42x</td><td>2.53</td><td>2.41x</td><td>2.76</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>2.84x</td><td>3.63</td><td>2.27x</td><td>3.09</td><td>2.64x</td><td>3.42</td><td>2.83x</td><td>3.56</td><td>2.64x</td><td>3.42</td></tr>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=1</strong></td></tr> -->
<tr><td rowspan="6"><strong>T=1</strong></td>
<td>Qwen3-1.7B</td><td>1.74x</td><td>2.53</td><td>1.86x</td><td>2.70</td><td>1.82x</td><td>2.69</td><td>1.72x</td><td>2.46</td><td>1.93x</td><td>2.60</td></tr>
<tr><td>Qwen3-4B</td><td>1.93x</td><td>2.60</td><td>2.00x</td><td>2.84</td><td>2.11x</td><td>2.82</td><td>2.34x</td><td>2.50</td><td>1.75x</td><td>2.69</td></tr>
<tr><td>Qwen3-8B</td><td>1.91x</td><td>2.84</td><td>2.07x</td><td>3.05</td><td>2.34x</td><td>3.26</td><td>2.09x</td><td>2.92</td><td>2.10x</td><td>3.02</td></tr>
<tr><td>Qwen3-14B</td><td>1.81x</td><td>2.58</td><td>1.96x</td><td>2.81</td><td>2.16x</td><td>3.09</td><td>1.76x</td><td>2.49</td><td>1.92x</td><td>2.74</td></tr>
<tr><td>Qwen3-32B</td><td>1.62x</td><td>1.91</td><td>1.71x</td><td>2.05</td><td>1.78x</td><td>2.10</td><td>1.80x</td><td>1.95</td><td>1.62x</td><td>2.00</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>1.91x</td><td>2.46</td><td>2.00x</td><td>2.64</td><td>1.90x</td><td>2.53</td><td>1.80x</td><td>2.32</td><td>1.90x</td><td>2.48</td></tr>
</tbody>
</table>
#### Hunyuan Series Models
Benchmark results for Hunyuan series models with the `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:
<table>
<thead>
<tr>
<th> </th><th> </th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=0</strong></td></tr> -->
<tr><td rowspan="3"><strong>T=0</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.97x</td><td>2.90</td><td>2.58x</td><td>3.73</td><td>2.61x</td><td>3.71</td><td>1.71x</td><td>2.43</td><td>2.22x</td><td>3.19</td></tr>
<tr> <td>Hunyuan-4B-Instruct</td><td>1.77x</td><td>2.60</td><td>2.64x</td><td>3.35</td><td>2.14x</td><td>3.17</td><td>1.72x</td><td>2.57</td><td>2.07x</td><td>2.92</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>2.22x</td><td>3.58</td><td>3.59x</td><td>5.47</td><td>2.96x</td><td>4.68</td><td>1.64x</td><td>2.56</td><td>2.60x</td><td>4.07</td></tr>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=1</strong></td></tr> -->
<tr><td rowspan="3"><strong>T=1</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.58x</td><td>2.36</td><td>2.35x</td><td>3.56</td><td>2.23x</td><td>3.38</td><td>1.26x</td><td>1.87</td><td>1.86x</td><td>2.79</td></tr>
<tr><td>Hunyuan-4B-Instruct</td><td>1.36x</td><td>2.05</td><td>1.97x</td><td>2.86</td><td>1.72x</td><td>2.68</td><td>1.14x</td><td>1.76</td><td>1.55x</td><td>2.34</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>1.90x</td><td>3.11</td><td>3.12x</td><td>5.09</td><td>2.74x</td><td>4.34</td><td>1.47x</td><td>2.39</td><td>2.31x</td><td>3.73</td></tr>
</tbody>
</table>
## 📝 License
The code for this project is open-sourced under the [License for AngelSlim](LICENSE).
## 🔗 Citation
```
@software{AngelSlim2025,
title={{AngelSlim}},
author={Tencent AngelSlim Project Contributors},
year={2025},
month={6},
url={https://github.com/Tencent/AngelSlim},
}
```
## 💬 Technical Discussion
* AngelSlim is continuously iterating and new features will be released soon. If you have any questions or suggestions, please open an issue on GitHub or join our [WeChat technical discussion group](https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/angel_slim_wechat.png?raw=true).
|
ImparkTeam/phi-instruct-math-TOKENIZER_v3
|
ImparkTeam
| 2025-08-07T13:16:46Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T13:16:30Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated
|
huihui-ai
| 2025-08-07T13:11:09Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"abliterated",
"uncensored",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T12:05:04Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
base_model:
- Qwen/Qwen3-4B-Instruct-2507
pipeline_tag: text-generation
library_name: transformers
tags:
- abliterated
- uncensored
---
# huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated
This is an uncensored version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
This is a crude, proof-of-concept implementation to remove refusals from an LLM without using TransformerLens.
Ablation was performed using a new and faster method, which yields better results.
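For intuition only, the core refusal-direction idea behind abliteration can be sketched as follows; this is a simplified illustration with placeholder tensors, not the actual (new, faster) procedure used to produce this model.

```python
# Simplified sketch of directional ablation ("abliteration"); placeholder tensors, not the real pipeline.
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction out of a weight matrix that writes into the residual stream:
    W <- (I - r r^T) W, so the layer can no longer emit activations along r."""
    r = refusal_dir / refusal_dir.norm()
    return weight - torch.outer(r, r @ weight)

# In practice, residual-stream activations are collected at some layer for contrasting
# "harmful" vs. "harmless" prompt sets; random tensors stand in for them here.
harmful_acts = torch.randn(128, 4096)
harmless_acts = torch.randn(128, 4096)
refusal_dir = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)

W = torch.randn(4096, 4096)  # e.g. an attention output or MLP down-projection weight
W_ablated = ablate_direction(W, refusal_dir)
```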
## ollama
You can use [huihui_ai/qwen3-abliterated:4b-instruct-2507-q4_K_M](https://ollama.com/huihui_ai/qwen3-abliterated:4b-instruct-2507-q4_K_M) directly:
```
ollama run huihui_ai/qwen3-abliterated:4b-instruct-2507-q4_K_M
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
import random
import numpy as np
import time
from collections import Counter
cpu_count = os.cpu_count()
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = cpu_count // 2
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)
print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")
# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated"
print(f"Load Model {NEW_MODEL_ID} ... ")
quant_config_4 = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
llm_int8_enable_fp32_cpu_offload=True,
)
model = AutoModelForCausalLM.from_pretrained(
NEW_MODEL_ID,
device_map="balanced",
trust_remote_code=True,
quantization_config=quant_config_4,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
)
#print(model)
#print(model.config)
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
messages = []
skip_prompt=True
skip_special_tokens=True
do_sample = True
class CustomTextStreamer(TextStreamer):
def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
self.generated_text = ""
self.stop_flag = False
self.init_time = time.time() # Record initialization time
self.end_time = None # To store end time
self.first_token_time = None # To store first token generation time
self.token_count = 0 # To track total tokens
def on_finalized_text(self, text: str, stream_end: bool = False):
if self.first_token_time is None and text.strip(): # Set first token time on first non-empty text
self.first_token_time = time.time()
self.generated_text += text
# Count tokens in the generated text
tokens = self.tokenizer.encode(text, add_special_tokens=False)
self.token_count += len(tokens)
print(text, end="", flush=True)
if stream_end:
self.end_time = time.time() # Record end time when streaming ends
if self.stop_flag:
raise StopIteration
def stop_generation(self):
self.stop_flag = True
self.end_time = time.time() # Record end time when generation is stopped
def get_metrics(self):
"""Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second."""
if self.end_time is None:
self.end_time = time.time() # Set end time if not already set
total_time = self.end_time - self.init_time # Total time from init to end
tokens_per_second = self.token_count / total_time if total_time > 0 else 0
first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None
metrics = {
"init_time": self.init_time,
"first_token_time": self.first_token_time,
"first_token_latency": first_token_latency,
"end_time": self.end_time,
"total_time": total_time, # Total time in seconds
"total_tokens": self.token_count,
"tokens_per_second": tokens_per_second
}
return metrics
def generate_stream(model, tokenizer, messages, skip_prompt, skip_special_tokens, do_sample, max_new_tokens):
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
)
attention_mask = torch.ones_like(input_ids, dtype=torch.long)
tokens = input_ids.to(model.device)
attention_mask = attention_mask.to(model.device)
streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
def signal_handler(sig, frame):
streamer.stop_generation()
print("\n[Generation stopped by user with Ctrl+C]")
signal.signal(signal.SIGINT, signal_handler)
generate_kwargs = {}
if do_sample:
generate_kwargs = {
"do_sample": do_sample,
"max_length": max_new_tokens,
"temperature": 0.7,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 2
}
else:
generate_kwargs = {
"do_sample": do_sample,
"max_length": max_new_tokens,
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 2
}
print("Response: ", end="", flush=True)
try:
generated_ids = model.generate(
tokens,
attention_mask=attention_mask,
#use_cache=False,
pad_token_id=tokenizer.pad_token_id,
streamer=streamer,
**generate_kwargs
)
del generated_ids
except StopIteration:
print("\n[Stopped by user]")
del input_ids, attention_mask
torch.cuda.empty_cache()
signal.signal(signal.SIGINT, signal.SIG_DFL)
return streamer.generated_text, streamer.stop_flag, streamer.get_metrics()
while True:
print(f"skip_prompt: {skip_prompt}")
print(f"skip_special_tokens: {skip_special_tokens}")
print(f"do_sample: {do_sample}")
user_input = input("User: ").strip()
if user_input.lower() == "/exit":
print("Exiting chat.")
break
if user_input.lower() == "/clear":
messages = []
print("Chat history cleared. Starting a new conversation.")
continue
if user_input.lower() == "/skip_prompt":
skip_prompt = not skip_prompt
continue
if user_input.lower() == "/skip_special_tokens":
skip_special_tokens = not skip_special_tokens
continue
if user_input.lower() == "/do_sample":
do_sample = not do_sample
continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
messages.append({"role": "user", "content": user_input})
activated_experts = []
response, stop_flag, metrics = generate_stream(model, tokenizer, messages, skip_prompt, skip_special_tokens, do_sample, 40960)
print("\n\nMetrics:")
for key, value in metrics.items():
print(f" {key}: {value}")
print("", flush=True)
if stop_flag:
continue
messages.append({"role": "assistant", "content": response})
```
### Usage Warnings
- **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
- **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.
- **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
- **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.
- **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
- **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.
### Donation
If you like it, please click 'like' and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
##### Your donation helps us continue further development and improvement; even a cup of coffee makes a difference.
- bitcoin(BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
- Support our work on Ko-fi (https://ko-fi.com/huihuiai)!
|
NexaAI/whisper-large-v3-turbo
|
NexaAI
| 2025-08-07T13:08:56Z | 0 | 0 |
mlx
|
[
"mlx",
"whisper",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2025-08-07T13:01:36Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
license: mit
tags:
- audio
- automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: automatic-speech-recognition
base_model:
- openai/whisper-large-v3
library_name: mlx
---
# NexaAI/whisper-large-v3-turbo
## Quickstart
Run them directly with [nexa-sdk](https://github.com/NexaAI/nexa-sdk) installed
In nexa-sdk CLI:
```bash
NexaAI/whisper-large-v3-turbo
```
## Overview
Whisper is a state-of-the-art model for automatic speech recognition (ASR) and speech translation, proposed in the paper
[Robust Speech Recognition via Large-Scale Weak Supervision](https://huggingface.co/papers/2212.04356) by Alec Radford
et al. from OpenAI. Trained on >5M hours of labeled data, Whisper demonstrates a strong ability to generalise to many
datasets and domains in a zero-shot setting.
Whisper large-v3-turbo is a finetuned version of a pruned [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3). In other words, it's the exact same model, except that the number of decoding layers has been reduced from 32 to 4.
As a result, the model is significantly faster, at the expense of a minor quality degradation. You can find more details about it [in this GitHub discussion](https://github.com/openai/whisper/discussions/2363).
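Since this repository targets MLX, here is a minimal transcription sketch using the `mlx_whisper` package; the audio path is a placeholder, and it assumes this repo's weights are laid out in the format `mlx_whisper` expects:
```python
# pip install mlx-whisper
import mlx_whisper

# Transcribe a local audio file; path_or_hf_repo points at this repository's MLX weights.
result = mlx_whisper.transcribe(
    "sample1.flac",  # placeholder audio file
    path_or_hf_repo="NexaAI/whisper-large-v3-turbo",
)
print(result["text"])
```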
## Reference
**Original model card**: [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo)
|
ekiprop/CoLA-Fisher-Standard_LoRA-Q_V-seed30
|
ekiprop
| 2025-08-07T13:06:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T13:03:57Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-Fisher-Standard_LoRA-Q_V-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-Fisher-Standard_LoRA-Q_V-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4326
- Matthews Correlation: 0.5957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6372 | 0.1866 | 50 | 0.6007 | 0.0 |
| 0.5457 | 0.3731 | 100 | 0.4751 | 0.4708 |
| 0.4924 | 0.5597 | 150 | 0.4750 | 0.4611 |
| 0.4692 | 0.7463 | 200 | 0.4711 | 0.4748 |
| 0.4535 | 0.9328 | 250 | 0.5166 | 0.4642 |
| 0.434 | 1.1194 | 300 | 0.4357 | 0.5295 |
| 0.4153 | 1.3060 | 350 | 0.4778 | 0.5046 |
| 0.419 | 1.4925 | 400 | 0.4798 | 0.4642 |
| 0.4367 | 1.6791 | 450 | 0.4085 | 0.5426 |
| 0.401 | 1.8657 | 500 | 0.4962 | 0.5153 |
| 0.4034 | 2.0522 | 550 | 0.4064 | 0.5815 |
| 0.3763 | 2.2388 | 600 | 0.4500 | 0.5480 |
| 0.4025 | 2.4254 | 650 | 0.4208 | 0.5628 |
| 0.3909 | 2.6119 | 700 | 0.4331 | 0.5560 |
| 0.3875 | 2.7985 | 750 | 0.4030 | 0.5615 |
| 0.3692 | 2.9851 | 800 | 0.4298 | 0.5705 |
| 0.363 | 3.1716 | 850 | 0.4186 | 0.5806 |
| 0.3558 | 3.3582 | 900 | 0.4042 | 0.5722 |
| 0.364 | 3.5448 | 950 | 0.4911 | 0.5523 |
| 0.3495 | 3.7313 | 1000 | 0.4460 | 0.5805 |
| 0.344 | 3.9179 | 1050 | 0.4225 | 0.5843 |
| 0.3477 | 4.1045 | 1100 | 0.4326 | 0.5957 |
| 0.3346 | 4.2910 | 1150 | 0.4488 | 0.5804 |
| 0.3279 | 4.4776 | 1200 | 0.4383 | 0.5912 |
| 0.3518 | 4.6642 | 1250 | 0.4342 | 0.5885 |
| 0.3424 | 4.8507 | 1300 | 0.4534 | 0.5728 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
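For reference, a minimal sketch of loading this adapter on top of the base model with `peft`, assuming the adapter was trained with a standard two-label sequence-classification head for CoLA (the example sentence is arbitrary):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the frozen base model and attach this LoRA adapter on top of it.
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-Fisher-Standard_LoRA-Q_V-seed30")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("The book was written by the author.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # index 1 is usually the "acceptable" class in CoLA-style setups
```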
|
mhmsadegh/gemma-3-4b-it-cause-effect-model-merged_bf16-v2
|
mhmsadegh
| 2025-08-07T13:05:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T13:03:44Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mhmsadegh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hamid1232/Qwen3-0.6B-Gensyn-Swarm-grassy_lethal_heron
|
hamid1232
| 2025-08-07T13:04:46Z | 101 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am grassy_lethal_heron",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-04T08:36:04Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am grassy_lethal_heron
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tushgaurav/Llama-3.2-3B-ascii-cats-lora
|
tushgaurav
| 2025-08-07T13:03:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:finetune:unsloth/Llama-3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T13:03:18Z |
---
base_model: unsloth/Llama-3.2-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tushgaurav
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ekiprop/CoLA-Fisher-GLoRA-p50-seed30
|
ekiprop
| 2025-08-07T13:01:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T12:58:58Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-Fisher-GLoRA-p50-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-Fisher-GLoRA-p50-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4243
- Matthews Correlation: 0.5797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6295 | 0.1866 | 50 | 0.6086 | 0.0 |
| 0.5795 | 0.3731 | 100 | 0.5476 | 0.2558 |
| 0.4842 | 0.5597 | 150 | 0.4588 | 0.4708 |
| 0.4652 | 0.7463 | 200 | 0.4810 | 0.4524 |
| 0.4512 | 0.9328 | 250 | 0.5073 | 0.4704 |
| 0.4314 | 1.1194 | 300 | 0.4661 | 0.4967 |
| 0.4177 | 1.3060 | 350 | 0.4602 | 0.5109 |
| 0.4389 | 1.4925 | 400 | 0.4677 | 0.4719 |
| 0.4367 | 1.6791 | 450 | 0.4342 | 0.5366 |
| 0.4031 | 1.8657 | 500 | 0.4769 | 0.5135 |
| 0.4039 | 2.0522 | 550 | 0.4409 | 0.5458 |
| 0.3734 | 2.2388 | 600 | 0.4447 | 0.5478 |
| 0.3692 | 2.4254 | 650 | 0.4506 | 0.5395 |
| 0.3865 | 2.6119 | 700 | 0.4322 | 0.5582 |
| 0.3499 | 2.7985 | 750 | 0.4243 | 0.5797 |
| 0.3609 | 2.9851 | 800 | 0.4507 | 0.5701 |
| 0.359 | 3.1716 | 850 | 0.4179 | 0.5725 |
| 0.3359 | 3.3582 | 900 | 0.4540 | 0.5452 |
| 0.3471 | 3.5448 | 950 | 0.5040 | 0.5339 |
| 0.3478 | 3.7313 | 1000 | 0.4622 | 0.5443 |
| 0.3474 | 3.9179 | 1050 | 0.4322 | 0.5580 |
| 0.3559 | 4.1045 | 1100 | 0.4496 | 0.5523 |
| 0.3146 | 4.2910 | 1150 | 0.4501 | 0.5549 |
| 0.3271 | 4.4776 | 1200 | 0.4527 | 0.5603 |
| 0.3083 | 4.6642 | 1250 | 0.4557 | 0.5606 |
| 0.3384 | 4.8507 | 1300 | 0.4639 | 0.5626 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mhmsadegh/gemma-3-4b-it-cause-effect-modelv2-merged
|
mhmsadegh
| 2025-08-07T13:00:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T12:59:03Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mhmsadegh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wenbb/cs5210-25su-finetuned-boxtobio-merged
|
wenbb
| 2025-08-07T13:00:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-07T12:59:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DimaSK1/Qwen2-1.5B-bnb-4bit_base
|
DimaSK1
| 2025-08-07T12:52:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:unsloth/Qwen2-1.5B-bnb-4bit",
"base_model:finetune:unsloth/Qwen2-1.5B-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T12:52:47Z |
---
base_model: unsloth/Qwen2-1.5B-bnb-4bit
library_name: transformers
model_name: Qwen2-1.5B-bnb-4bit_base
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for Qwen2-1.5B-bnb-4bit_base
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-bnb-4bit](https://huggingface.co/unsloth/Qwen2-1.5B-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="DimaSK1/Qwen2-1.5B-bnb-4bit_base", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Alexjiuqiaoyu/im
|
Alexjiuqiaoyu
| 2025-08-07T12:52:34Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-12T22:46:58Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
Im is an AI companion model.
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hothousetx/smiles_t5
|
hothousetx
| 2025-08-07T12:51:48Z | 21 | 0 | null |
[
"safetensors",
"t5",
"chemistry",
"biology",
"medical",
"region:us"
] | null | 2025-04-25T08:23:49Z |
---
tags:
- chemistry
- biology
- medical
---
Please see our [github](https://github.com/hothousetx/smiles_t5) for finetuning and inference scripts for this model.
|
qiangchunyu/SecoustiCodec
|
qiangchunyu
| 2025-08-07T12:47:26Z | 0 | 2 | null |
[
"audio",
"speech-processing",
"speech-codec",
"low-bitrate",
"streaming",
"tts",
"cross-modal",
"en",
"arxiv:2508.02849",
"license:apache-2.0",
"region:us"
] | null | 2025-08-04T18:31:13Z |
---
language: en
tags:
- audio
- speech-processing
- speech-codec
- low-bitrate
- streaming
- tts
- cross-modal
license: apache-2.0
---
# SecoustiCodec: Cross-Modal Aligned Streaming Single-Codebook Speech Codec
## Resources
- [📄 Research Paper](https://arxiv.org/abs/2508.02849)
- [💻 Source Code](https://github.com/QiangChunyu/SecoustiCodec)
- [🤗 Demo Page](https://qiangchunyu.github.io/SecoustiCodec_Page/)
## Model Overview
SecoustiCodec is a **low-bitrate streaming speech codec** that achieves good performance in speech reconstruction at ultra-low bitrates (0.27-1 kbps). The model introduces several innovations:
- 🧠 **Cross-modal alignment**: Aligns text and speech in joint multimodal frame-level space
- 🔍 **Semantic-paralinguistic disentanglement**: Separates linguistic content from speaker characteristics
- ⚡ **Streaming support**: Real-time processing capabilities
- 📊 **Efficient quantization**: VAE+FSQ approach solves token distribution problems
## Architecture Overview

## Acknowledgments
- We used [HiFiGAN](https://github.com/jik876/hifi-gan) for efficient waveform generation
- We referred to [MIMICodec](https://huggingface.co/kyutai/mimi) for our implementation.
## Citation
```bibtex
@article{qiang2025secousticodec,
title={SecoustiCodec: Cross-Modal Aligned Streaming Single-Codebook Speech Codec},
author={Chunyu Qiang and Haoyu Wang and Cheng Gong and Tianrui Wang and Ruibo Fu and Tao Wang and Ruilong Chen and Jiangyan Yi and Zhengqi Wen and Chen Zhang and Longbiao Wang and Jianwu Dang and Jianhua Tao},
journal={arXiv preprint arXiv:2508.02849},
year={2025}
}
```
|
ScorpieCur/SmolLM3-3B-Base-unsloth-bnb-4bit
|
ScorpieCur
| 2025-08-07T12:46:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"smollm3",
"text-generation",
"transformers.js",
"unsloth",
"en",
"fr",
"es",
"it",
"pt",
"zh",
"ar",
"ru",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:quantized:HuggingFaceTB/SmolLM3-3B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-07T12:46:39Z |
---
base_model:
- HuggingFaceTB/SmolLM3-3B-Base
library_name: transformers
license: apache-2.0
language:
- en
- fr
- es
- it
- pt
- zh
- ar
- ru
tags:
- transformers.js
- unsloth
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
</div>
# SmolLM3

## Table of Contents
1. [Model Summary](#model-summary)
2. [How to use](#how-to-use)
3. [Evaluation](#evaluation)
4. [Training](#training)
5. [Limitations](#limitations)
6. [License](#license)
## Model Summary
SmolLM3 is a 3B parameter language model designed to push the boundaries of small models. It supports 6 languages, advanced reasoning and long context. SmolLM3 is a fully open model that offers strong performance at the 3B–4B scale.

**SmolLM3-3B-Base** is the base model after pretraining, you can find the instruct model at [SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B).
The model is a decoder-only transformer using GQA and NoPE. It was pretrained on 11.2T tokens with a staged curriculum of web, code, math and reasoning data. Post-training included midtraining on 140B reasoning tokens followed by supervised fine-tuning and alignment via Anchored Preference Optimization (APO).
### Key features
- Instruct model optimized for **hybrid reasoning**
- **Fully open model**: open weights + full training details including public data mixture and training configs
- **Long context:** Trained on 64k context and supports up to **128k tokens** using YaRN extrapolation
- **Multilingual**: 6 languages natively supported (English, French, Spanish, German, Italian, and Portuguese)
For more details refer to our blog post: https://hf.co/blog/smollm3
### How to use
The modeling code for SmolLM3 is available in transformers `v4.53.0`, so make sure to upgrade your transformers version. You can also load the model with the latest `vllm` which uses transformers as a backend.
```bash
pip install -U transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM3-3B"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
For local inference, you can use `llama.cpp`, `ONNX`, `MLX` and `MLC`. You can find quantized checkpoints in this collection (https://huggingface.co/collections/HuggingFaceTB/smollm3-686d33c1fdffe8e635317e23).
### Long context processing
The current `config.json` is set for context length up to 65,536 tokens. To handle longer inputs (128k or 256k), we utilize YaRN: you can change `max_position_embeddings` and `rope_scaling` to:
```
{
...,
"rope_scaling": {
"factor": 2.0, #2x65536=131 072
"original_max_position_embeddings": 65536,
"type": "yarn"
}
}
```
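Equivalently, a minimal sketch of applying the same override programmatically via `AutoConfig` (the 2.0 factor and 131072-token limit mirror the snippet above; whether longer contexts fit depends on your hardware):
```python
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "HuggingFaceTB/SmolLM3-3B"
config = AutoConfig.from_pretrained(checkpoint)
config.max_position_embeddings = 131072
config.rope_scaling = {
    "factor": 2.0,
    "original_max_position_embeddings": 65536,
    "type": "yarn",
}
model = AutoModelForCausalLM.from_pretrained(checkpoint, config=config)
```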
## Evaluation
In this section, we report the evaluation results of SmolLM3 model. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
We highlight the best score in bold and underline the second-best score.
### Base Pre-Trained Model
#### English benchmarks
Note: All evaluations are zero-shot unless stated otherwise. For Ruler 64k evaluation, we apply YaRN to the Qwen models with 32k context to extrapolate the context length.
| Category | Metric | SmolLM3-3B | Qwen2.5-3B | Llama3-3.2B | Qwen3-1.7B-Base | Qwen3-4B-Base |
|---------|--------|---------------------|------------|--------------|------------------|---------------|
| Reasoning & Commonsense| HellaSwag | **76.15** | 74.19 |<u>75.52</u> | 60.52 | 74.37 |
| | ARC-CF (Average) | **65.61** | 59.81 | 58.58 | 55.88 | <u>62.11</u> |
| | Winogrande | 58.88 | **61.41** | 58.72 | 57.06 | <u>59.59</u> |
| | CommonsenseQA | <u>55.28</u> | 49.14 | **60.60** | 48.98 | 52.99 |
| Knowledge & Understanding | MMLU-CF (Average) | <u>44.13</u> | 42.93 | 41.32 | 39.11 | **47.65** |
| | MMLU Pro CF | <u>19.61</u> | 16.66 | 16.42 | 18.04 | **24.92** |
| | MMLU Pro MCF | <u>32.70</u> | 31.32 | 25.07 | 30.39 | **41.07** |
| | PIQA | **78.89** | 78.35 | <u>78.51</u> | 75.35 | 77.58 |
| | OpenBookQA | 40.60 | 40.20 | <u>42.00</u> | 36.40 | **42.40** |
| | BoolQ | **78.99** | 73.61 | <u>75.33</u> | 74.46 | 74.28 |
| **Math & Code** | | | | | | |
| Coding & math | HumanEval+ | 30.48 | 34.14| 25.00 | <u>43.29</u>| **54.87** |
| | MBPP+ | 52.91 | 52.11 | 38.88| <u>59.25</u> | **63.75** |
| | MATH (4-shot) | <u>46.10</u> | 40.10 | 7.44 | 41.64 | **51.20** |
| | GSM8k (5-shot) | 67.63 | <u>70.13</u> | 25.92 | 65.88 | **74.14** |
| **Long context** | | | | | | |
| | Ruler 32k | 76.35 | 75.93 | <u>77.58</u> | 70.63 | **83.98** |
| | Ruler 64k | <u>67.85</u> | 64.90 | **72.93** | 57.18 | 60.29 |
| | Ruler 128k | 61.03 | <u>62.23</u> | **71.30** | 43.03 | 47.23 |
#### Multilingual benchmarks
| Category | Metric | SmolLM3 3B Base | Qwen2.5-3B | Llama3.2 3B | Qwen3 1.7B Base | Qwen3 4B Base |
|---------|--------|---------------------|------------|--------------|------------------|---------------|
| Main supported languages | | | | | | | |
| French| MLMM Hellaswag | **63.94** | 57.47 | 57.66 | 51.26 | <u>61.00</u> |
| | Belebele | 51.00 | <u>51.55</u> | 49.22 |49.44| **55.00** |
| | Global MMLU (CF) | <u>38.37</u> | 34.22 | 33.71 | 34.94 |**41.80** |
| | Flores-200 (5-shot) | 62.85 | 61.38 | <u>62.89</u> | 58.68 | **65.76** |
| Spanish| MLMM Hellaswag | **65.85** | 58.25 | 59.39 | 52.40 | <u>61.85</u> |
| | Belebele | 47.00 | <u>48.88</u> | 47.00 | 47.56 | **50.33** |
| | Global MMLU (CF) | <u>38.51</u> | 35.84 | 35.60 | 34.79 |**41.22** |
| | Flores-200 (5-shot) | <u>48.25</u>| 50.00| 44.45 | 46.93 | **50.16** |
| German| MLMM Hellaswag | **59.56** | 49.99| 53.19|46.10| <u>56.43</u>|
| | Belebele | <u>48.44</u> | 47.88 | 46.22 | 48.00 | **53.44**|
| | Global MMLU (CF) | <u>35.10</u> | 33.19 | 32.60 | 32.73 |**38.70** |
| | Flores-200 (5-shot) | **56.60**| 50.63| <u>54.95</u> | 52.58 | 50.48 |
| Italian| MLMM Hellaswag | **62.49** | 53.21 | 54.96 | 48.72 | <u>58.76</u> |
| | Belebele | <u>46.44</u> | 44.77 | 43.88 | 44.00 | **48.78** | 44.88 |
| | Global MMLU (CF) | <u>36.99</u> | 33.91 | 32.79 | 35.37 |**39.26** |
| | Flores-200 (5-shot) | <u>52.65</u> | **54.87** | 48.83 | 48.37 | 49.11 |
| Portuguese| MLMM Hellaswag | **63.22** | 57.38 | 56.84 | 50.73 | <u>59.89</u> |
| | Belebele | 47.67 | **49.22** | 45.00 | 44.00 | 50.00 | <u>49.00</u> |
| | Global MMLU (CF) | <u>36.88</u> | 34.72 | 33.05 | 35.26 |**40.66** |
| | Flores-200 (5-shot) | <u>60.93</u> |57.68| 54.28 | 56.58 | **63.43** |
The model has also been trained on Arabic (standard), Chinese and Russian data, but has seen fewer tokens in these languages compared to the 6 above. We report the performance on these languages for information.
| Category | Metric | SmolLM3 3B Base | Qwen2.5-3B | Llama3.2 3B | Qwen3 1.7B Base | Qwen3 4B Base |
|---------|--------|---------------------|------------|--------------|------------------|---------------|
| Other supported languages | | | | | | | |
| Arabic| Belebele | 40.22 | 44.22 | <u>45.33</u> | 42.33 | **51.78** |
| | Global MMLU (CF) | 28.57 | 28.81 | 27.67 | <u>29.37</u> | **31.85** |
| | Flores-200 (5-shot) | <u>40.22</u> | 39.44 | **44.43** | 35.82 | 39.76 |
| Chinese| Belebele | 43.78 | 44.56 | <u>49.56</u> | 48.78 | **53.22** |
| | Global MMLU (CF) | 36.16 | 33.79 | <u>39.57</u> | 38.56 | **44.55** |
| | Flores-200 (5-shot) | 29.17 | **33.21** | 31.89 | 25.70 | <u>32.50</u> |
| Russian| Belebele | <u>47.44</u> | 45.89 | <u>47.44</u> | 45.22 | **51.44** |
| | Global MMLU (CF) | <u>36.51</u> | 32.47 | 34.52 | 34.83 | **38.80** |
| | Flores-200 (5-shot) | 47.13 | 48.74 | 50.74 | <u>54.70</u> | **60.53** |
### Instruction Model
#### No Extended Thinking
Evaluation results of non reasoning models and reasoning models in no thinking mode. We highlight the best and second-best scores in bold.
| Category | Metric | SmolLM3-3B | Qwen2.5-3B | Llama3.1-3B | Qwen3-1.7B | Qwen3-4B |
|---------|--------|------------|------------|-------------|------------|----------|
| High school math competition | AIME 2025 | <u>9.3</u> | 2.9 | 0.3 | 8.0 | **17.1** |
| Math problem-solving | GSM-Plus | 72.8 | <u>74.1</u> | 59.2 | 68.3 | **82.1** |
| Competitive programming | LiveCodeBench v4 | <u>15.2</u> | 10.5 | 3.4 | 15.0 | **24.9** |
| Graduate-level reasoning | GPQA Diamond | <u>35.7</u> | 32.2 | 29.4 | 31.8 | **44.4** |
| Instruction following | IFEval | **76.7** | 65.6 | 71.6 | <u>74.0</u> | 68.9 |
| Alignment | MixEval Hard | 26.9 | <u>27.6</u> | 24.9 | 24.3 | **31.6** |
| Tool Calling | BFCL| <u>92.3</u> | - | <u>92.3</u> * | 89.5 | **95.0** |
| Multilingual Q&A | Global MMLU | <u>53.5</u> | 50.54 | 46.8 | 49.5 | **65.1** |
(*): this is a tool calling finetune
#### Extended Thinking
Evaluation results in reasoning mode for SmolLM3 and Qwen3 models:
| Category | Metric | SmolLM3-3B | Qwen3-1.7B | Qwen3-4B |
|---------|--------|------------|------------|----------|
| High school math competition | AIME 2025 | <u>36.7</u> | 30.7 | **58.8** |
| Math problem-solving | GSM-Plus | <u>83.4</u> | 79.4 | **88.2** |
| Competitive programming | LiveCodeBench v4 | 30.0 | <u>34.4</u> | **52.9** |
| Graduate-level reasoning | GPQA Diamond | <u>41.7</u> | 39.9 | **55.3** |
| Instruction following | IFEval | 71.2 | <u>74.2</u> | **85.4** |
| Alignment | MixEval Hard | 30.8 | <u>33.9</u> | **38.0** |
| Tool Calling | BFCL | <u>88.8</u> | <u>88.8</u> | **95.5** |
| Multilingual Q&A | Global MMLU | <u>64.1</u> | 62.3 | **73.3** |
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 11T
- **Precision:** bfloat16
### Software & hardware
- **GPUs:** 384 H100
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
- **Data processing framework:** [datatrove](https://github.com/huggingface/datatrove)
- **Evaluation framework:** [lighteval](https://github.com/huggingface/lighteval)
- **Post-training Framework:** [TRL](https://github.com/huggingface/trl)
### Open resources
Here is an infographic with all the training details.
- The datasets used for pretraining can be found in this [collection](https://huggingface.co/collections/HuggingFaceTB/smollm3-pretraining-datasets-685a7353fdc01aecde51b1d9) and those used in mid-training and post-training will be released in the following weeks
- The training and evaluation configs and code can be found in the [huggingface/smollm](https://github.com/huggingface/smollm) repository.

## Limitations
SmolLM3 can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
|
ekiprop/CoLA-Fisher-GLoRA-p20-seed30
|
ekiprop
| 2025-08-07T12:46:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T12:43:39Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-Fisher-GLoRA-p20-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-Fisher-GLoRA-p20-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4477
- Matthews Correlation: 0.5047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6381 | 0.1866 | 50 | 0.6114 | 0.0 |
| 0.6021 | 0.3731 | 100 | 0.6002 | 0.0 |
| 0.5766 | 0.5597 | 150 | 0.5538 | 0.0936 |
| 0.5328 | 0.7463 | 200 | 0.5059 | 0.2972 |
| 0.4955 | 0.9328 | 250 | 0.5002 | 0.3761 |
| 0.4802 | 1.1194 | 300 | 0.4928 | 0.4271 |
| 0.4789 | 1.3060 | 350 | 0.4860 | 0.4095 |
| 0.4859 | 1.4925 | 400 | 0.5234 | 0.3947 |
| 0.4773 | 1.6791 | 450 | 0.4686 | 0.4412 |
| 0.4748 | 1.8657 | 500 | 0.4911 | 0.4123 |
| 0.4523 | 2.0522 | 550 | 0.4639 | 0.4608 |
| 0.4542 | 2.2388 | 600 | 0.4923 | 0.4499 |
| 0.4408 | 2.4254 | 650 | 0.4524 | 0.4858 |
| 0.4358 | 2.6119 | 700 | 0.4870 | 0.4552 |
| 0.4351 | 2.7985 | 750 | 0.5001 | 0.4499 |
| 0.4308 | 2.9851 | 800 | 0.4477 | 0.5047 |
| 0.4433 | 3.1716 | 850 | 0.4649 | 0.4829 |
| 0.4386 | 3.3582 | 900 | 0.4983 | 0.4584 |
| 0.433 | 3.5448 | 950 | 0.5177 | 0.4444 |
| 0.417 | 3.7313 | 1000 | 0.4839 | 0.4637 |
| 0.4393 | 3.9179 | 1050 | 0.4886 | 0.4554 |
| 0.4227 | 4.1045 | 1100 | 0.4936 | 0.4583 |
| 0.4088 | 4.2910 | 1150 | 0.4725 | 0.4664 |
| 0.4145 | 4.4776 | 1200 | 0.4676 | 0.4747 |
| 0.4097 | 4.6642 | 1250 | 0.4694 | 0.4775 |
| 0.411 | 4.8507 | 1300 | 0.4830 | 0.4664 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
BrelloES/brello-thinking
|
BrelloES
| 2025-08-07T12:45:58Z | 0 | 3 |
transformers
|
[
"transformers",
"safetensors",
"reasoning",
"mathematics",
"programming",
"creative-writing",
"chain-of-thought",
"interpretability",
"fairness",
"security",
"deployment",
"sustainability",
"monitoring",
"plugin",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T11:04:49Z |
---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- reasoning
- mathematics
- programming
- creative-writing
- chain-of-thought
- interpretability
- fairness
- security
- deployment
- sustainability
- monitoring
- plugin
---
# Brello Thinking
## Model Description
**Brello Thinking** is an advanced language model created by **Epic Systems** as part of the **Brello AI Family**. Built on the robust Tencent Hunyuan base model, Brello Thinking specializes in deep reasoning, mathematical problem-solving, coding, and creative thinking with enhanced chain-of-thought capabilities.
### Key Features
- **Advanced Reasoning**: Enhanced chain-of-thought with both fast and slow thinking modes
- **Mathematical Excellence**: Superior at math and symbolic computation
- **Programming Prowess**: Strong coding abilities across Python, JS, C++, SQL, and more
- **Long Context Understanding**: Handles up to 256K tokens, long docs, and codebases
- **Creative Problem Solving**: Generates new solutions and approaches
- **Multi-language Support**: Fluent in English and Chinese, robust cross-lingual transfer
---
## 1. Executive Summary
**Brello Thinking v1.1.0** (2025-08-07) is a 1.8B-parameter causal language model engineered for complex reasoning, mathematics, and creative tasks. It combines ultra-long context, dual “fast”/“deep” thinking modes, and a plugin SDK for live tool integration. It is designed for safe, sustainable, and fair production deployments.
#### Highlights in this Release
- **Mixed-precision quantization** (BF16 & INT8)
- **Plugin SDK** (JSON-RPC, HMAC auth, dynamic tool routing)
- **Monitoring** (Prometheus, Grafana, carbon tracking)
- **Sustainability Dashboard** (gCO₂eq/token metrics, CodeCarbon SDK)
---
## 2. Model Architecture
| Component | Specification |
|----------------------------|-----------------------------------------------------------------------------------------------------|
| **Base Model** | Tencent Hunyuan / EpicBrelloV1ForCausalLM |
| **Parameters** | 1.8B (BF16/INT8 quantization; LoRA adapters optional) |
| **Context Window** | 256,000 tokens (rotary cache, sliding window, eviction logic) |
| **Attention** | Grouped-Query + Multi-Head FlashAttention (16 heads, 4 KV heads) |
| **Feed-Forward** | Two-stage (SiLU → Linear → SiLU) with RMSNorm, hidden size 6144 |
| **Depth** | 32 transformer blocks + 4 “Safety Adapter” blocks |
| **Adapters** | LoRA for math, code, creative, and domain fine-tuning (10–18M params each) |
| **Inference Modes** | Autoregressive sampling (top-k, top-p), beam, contrastive decoding |
| **Sharding** | ZeRO-3 / tensor-parallel / model-parallel combinations |
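The eviction logic behind the 256K window is not published; the snippet below is only a sketch of how a sliding-window KV cache with eviction can be structured, not the shipped implementation, and the window size is an assumption.

```python
import torch

class SlidingWindowKVCache:
    """Sketch of sliding-window KV eviction; window size and tensor layout are assumptions."""

    def __init__(self, window: int = 8192):
        self.window = window
        self.keys = None    # [batch, kv_heads, seq, head_dim]
        self.values = None

    def update(self, k: torch.Tensor, v: torch.Tensor):
        # Append the new key/value states for this decoding step.
        if self.keys is None:
            self.keys, self.values = k, v
        else:
            self.keys = torch.cat([self.keys, k], dim=2)
            self.values = torch.cat([self.values, v], dim=2)
        # Evict positions that fall outside the attention window so memory stays bounded.
        if self.keys.size(2) > self.window:
            self.keys = self.keys[:, :, -self.window:, :]
            self.values = self.values[:, :, -self.window:, :]
        return self.keys, self.values
```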
---
## 3. Training & Tuning
### 3.1 Pretraining Corpus
- **Web General**: 400B tokens (CommonCrawl, CC-100, curated news)
- **Science/Technical**: 50B tokens (arXiv, PubMed, patents)
- **Code**: 20B tokens (public GitHub, CodeSearchNet, MBPP)
- **Multilingual**: 30B tokens (Chinese, Spanish, German, Arabic)
- **Augmentations**: 15% span corruption, zh–en back-translation, dynamic masking
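The exact augmentation pipeline is not released; as an illustration only, T5-style span corruption at the 15% rate could look like the following sketch (sentinel format and span length are assumptions).

```python
import random

def corrupt_spans(tokens, rate=0.15, mean_span=3):
    """Illustrative span corruption: mask ~15% of tokens as spans, emit sentinel targets."""
    inputs, targets = [], []
    sentinel, i = 0, 0
    while i < len(tokens):
        # Start a span with probability rate/mean_span so roughly `rate` of tokens end up corrupted.
        if random.random() < rate / mean_span:
            span = tokens[i:i + mean_span]
            inputs.append(f"<extra_id_{sentinel}>")
            targets.extend([f"<extra_id_{sentinel}>", *span])
            sentinel += 1
            i += mean_span
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets
```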
### 3.2 Optimization
- **Optimizer**: AdamW (β₁=0.9, β₂=0.95, weight_decay=0.01)
- **LR Schedule**: Linear warmup (10K steps), cosine decay (500K steps); see the sketch after this list
- **Batch**: 2M tokens/step, grad accumulation ×8
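A minimal sketch of this optimizer and schedule using the standard `transformers` helpers; the learning rate and the stand-in model are illustrative, not published values.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(2048, 2048)  # stand-in for the real network
optimizer = torch.optim.AdamW(
    model.parameters(), lr=3e-4, betas=(0.9, 0.95), weight_decay=0.01  # lr is a placeholder
)
# Linear warmup for 10K steps, then cosine decay over the remaining steps up to 500K.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=500_000
)
```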
### 3.3 Instruction/RLHF Tuning
- **Instruction Pairs**: 1.2M human-annotated QA/reasoning
- **Reward Model**: Dual human-preference ranking (5K raters, Elo)
- **Algorithm**: PPO with KL penalty (target KL=0.1) and reward clipping, sketched below
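As a hedged illustration of the KL-penalized objective (not Epic Systems' actual training code), the per-token reward shaping typically looks like this:

```python
import torch

def kl_penalized_reward(rm_score, logprobs_policy, logprobs_ref, kl_coef=0.1, clip=5.0):
    """Sketch of PPO reward shaping with a KL penalty against the reference model.

    rm_score:         scalar preference-model score for the full response
    logprobs_policy:  log-probs of generated tokens under the current policy
    logprobs_ref:     log-probs of the same tokens under the frozen reference model
    """
    kl = logprobs_policy - logprobs_ref                 # per-token KL estimate
    shaped = -kl_coef * kl                              # penalize drift from the reference model
    shaped[-1] += torch.clamp(torch.as_tensor(rm_score), -clip, clip)  # clipped RM reward at the end
    return shaped
```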
---
## 4. Specialized Modules
| Adapter Name | Data Source | Params (M) | Use Case |
|-------------------|-----------------------------------|------------|----------------------------------|
| math-adapter | GSM8K, MATH, AIME datasets | 12 | Math proof, step-by-step logic |
| code-adapter | MBPP, MultiPL-E, GitHub repos | 18 | Coding, debugging, codegen |
| creative-adapter | Gutenberg, story corpora | 10 | Narrative, dialogue, ideation |
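A sketch of how such an adapter could be configured with PEFT; the rank and target module names are assumptions chosen to land in the 10–18M parameter range, not published settings.

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=64,                                                      # assumed rank
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # assumed projection names
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
# model = get_peft_model(base_model, lora_config)  # then fine-tune on GSM8K/MATH for the math adapter
```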
---
## 5. Plugin & Tooling SDK
- **Interface**: JSON-RPC (Unix socket or REST), HMAC-SHA256 auth
- **Plugins**:
- DB connectors: PostgreSQL, MySQL, Snowflake
- HTTP client: retry/backoff
- Vector DB: FAISS, Pinecone
#### Tool Call Example
1. Model emits:
```json
{"tool_call": {"name": "weather_fetch", "args": {"location":"Mumbai"}}}
```
2. Host executes plugin, returns:
```json
{"tool_result": {"forecast":"Sunny, 32°C"}}
```
3. Model resumes reasoning with the tool result in its context (a host-side sketch of this exchange follows).
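The SDK itself is not public, but the host side of this exchange can be sketched with nothing more than the standard library; the key handling and plugin registry below are illustrative.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"CHANGE_ME"  # placeholder; a real deployment loads this from a secret store

def sign(payload: bytes, key: bytes = SHARED_KEY) -> str:
    """HMAC-SHA256 signature attached to every plugin request and response."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def handle_tool_call(raw_request: bytes, signature: str, plugins: dict) -> bytes:
    """Verify the signature, dispatch the named plugin, and return the serialized result."""
    if not hmac.compare_digest(sign(raw_request), signature):
        raise PermissionError("HMAC verification failed")
    call = json.loads(raw_request)["tool_call"]
    result = plugins[call["name"]](**call["args"])      # e.g. weather_fetch(location="Mumbai")
    return json.dumps({"tool_result": result}).encode()

# Example registry: plugins = {"weather_fetch": lambda location: {"forecast": "Sunny, 32°C"}}
```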
---
## 6. Inference, Monitoring & Scaling
### 6.1 Endpoint Performance
| Mode | Batch | Seq Len | Throughput (tok/s) | Latency (p50) |
|--------------|-------|----------|--------------------|---------------|
| Fast-Think | 8 | 4,096 | 250,000 | 15 ms |
| Deep-Think | 1 | 256,000 | 18,000 | 120 ms |
| INT8 Quant | 16 | 2,048 | 320,000 | 12 ms |
### 6.2 Observability
- **Prometheus Metrics**:
- `brello_inference_latency_seconds`
- `brello_generated_tokens_total`
- `brello_cache_evictions_total`
- **Grafana**:
- Token latency histograms, CO₂ per generation
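A minimal exporter for these metrics using the standard `prometheus_client` package; the port and the wrapper function are illustrative, not part of the shipped server.

```python
import time
from prometheus_client import Counter, Histogram, start_http_server

LATENCY = Histogram("brello_inference_latency_seconds", "Inference latency per request")
TOKENS = Counter("brello_generated_tokens_total", "Total number of generated tokens")

def timed_generate(generate_fn, *args, **kwargs):
    """Wrap any generate call so latency and token counts show up in Prometheus/Grafana."""
    start = time.perf_counter()
    output_ids = generate_fn(*args, **kwargs)
    LATENCY.observe(time.perf_counter() - start)
    TOKENS.inc(int(output_ids.shape[-1]))
    return output_ids

start_http_server(9105)  # Prometheus scrapes this port; Grafana dashboards read from Prometheus
```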
---
## 7. Sustainability & Carbon Tracking
- **Data Center PUE**: 1.2
- **Carbon Emission**: ~0.0008 gCO₂eq/token (tracked with CodeCarbon)
- **Offset**: Epic Systems funds VER 2.0 credits
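The per-token figure is derived from CodeCarbon's per-request measurement; a trivial sketch of the conversion (how the token count is obtained is up to the caller):

```python
def grams_co2_per_token(emissions_kg: float, generated_tokens: int) -> float:
    """Convert a CodeCarbon per-request measurement (kg) into gCO2eq per generated token."""
    return emissions_kg * 1000.0 / max(generated_tokens, 1)
```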
---
## 8. Robustness, Safety & Fairness
- **Adapters**: Real-time adversarial input filtering, personal data redaction, toxicity classifier (fine-tuned BERT-tox)
- **Bias Audits**:
- Toxicity variation <1.8% (12 demographic axes)
- Gender parity ±2%
- Dialect coverage 98% (EN & ZH)
---
## 9. Interpretability
- **Chain-of-Thought logs**: Token-level reasoning trace (see the sketch after this list)
- **Integrated Gradients**: Span attribution
- **Attention Rollouts**: Layer-wise visualization (custom plugin)
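The logging format is not documented; assuming the `generate` call from the usage example in section 14 (which passes `return_dict_in_generate=True` and `output_scores=True`), a simple token-level trace can be built like this, where `prompt_len` is the number of prompt tokens:

```python
import torch

def token_trace(outputs, tokenizer, prompt_len: int):
    """Sketch of a token-level reasoning trace: each generated token with its log-probability."""
    trace = []
    generated = outputs.sequences[0][prompt_len:]
    for step, (tok, scores) in enumerate(zip(generated, outputs.scores)):
        logprob = torch.log_softmax(scores[0], dim=-1)[tok].item()
        trace.append((step, tokenizer.decode(int(tok)), round(logprob, 3)))
    return trace
```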
---
## 10. Hyperparameters
| Parameter | Value |
|-------------------|----------|
| num_layers | 32 |
| d_model | 2048 |
| d_hidden | 6144 |
| num_heads | 16 |
| kv_heads | 4 |
| rotary_pct | 0.25 |
| lr_warmup_steps | 10,000 |
| weight_decay | 0.01 |
| batch_size | 2M |
| dropout_rate | 0.1 |
---
## 11. Evaluation & Error Analysis
- **Benchmarks**: GSM8K, MBPP, BBH, LongBench, MATH
- **Analysis**: Math/logic confusion matrix, hallucination drift cluster analysis
---
## 12. Roadmap
| Version | Highlights | ETA |
|-----------|----------------------------------------------|----------|
| v1.1.0 | Plugins, carbon tracking, INT8 quantization | Released |
| v1.2.0 | Vision-language, adapter expansion | Nov 2025 |
| v1.3.0 | Audio, multilingual tuning | Feb 2026 |
| v2.0 | Federated RAG, continuous learning | Q4 2026 |
---
## 13. Licensing & Compliance
- **License**: Proprietary, Epic Systems
- **Privacy**: GDPR, CCPA compliant
- **Certifications**: ISO 27001, SOC 2 Type II, HIPAA (BAA on request)
- **Restrictions**: No redistribution or large-scale rehosting
---
## 14. Usage Example
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel # For LoRA adapters
from brello_sdk import BrelloPluginManager # Hypothetical SDK
from codecarbon import EmissionsTracker
from prometheus_client import CollectorRegistry, Counter, Histogram, push_to_gateway
def setup_model(
model_id: str = "BrelloES/brello-thinking",
use_bf16: bool = True,
load_int8: bool = True,
):
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16 if use_bf16 else torch.float32,
load_in_8bit=load_int8,
)
    # Attach LoRA adapters; load_adapter is the idiomatic way to stack several adapters on one model
    model = PeftModel.from_pretrained(model, "adapters/math-adapter", adapter_name="math")
    model.load_adapter("adapters/code-adapter", adapter_name="code")
return tokenizer, model
def setup_plugins():
pm = BrelloPluginManager()
pm.register(
name="weather_fetch",
path="/opt/brello/plugins/weather_plugin.so",
auth_key=os.getenv("WEATHER_PLUGIN_KEY", "CHANGE_ME"),
)
pm.register(
name="db_query",
path="/opt/brello/plugins/db_query_plugin.so",
auth_key=os.getenv("DB_PLUGIN_KEY", "CHANGE_ME"),
)
return pm
def setup_metrics():
registry = CollectorRegistry()
Histogram(
"brello_inference_latency_seconds",
"Inference latency (seconds) per request",
registry=registry,
buckets=(0.01, 0.05, 0.1, 0.2, 0.5, 1.0),
)
Counter(
"brello_generated_tokens_total",
"Total number of tokens generated by Brello",
registry=registry,
)
return registry
def generate_response(tokenizer, model, plugin_mgr, registry, messages, mode: str = "deep"):
    inputs = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt",  # return a tensor so the ids can be moved to the model device below
        enable_thinking=(mode == "deep"),
    )
tracker = EmissionsTracker(project_name="brello_inference", output_dir="carbon_logs")
tracker.start()
# (Metrics update simplified for clarity)
outputs = model.generate(
inputs.to(model.device),
max_new_tokens=512,
top_p=0.9,
temperature=0.6,
        plugin_manager=plugin_mgr,  # handled by the hypothetical brello_sdk integration, not a standard generate() argument
return_dict_in_generate=True,
output_scores=True,
)
emissions_kg = tracker.stop()
text = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
return text, emissions_kg
def main():
tokenizer, model = setup_model()
plugin_mgr = setup_plugins()
registry = setup_metrics()
messages = [
{"role": "system", "content": "You are Brello Thinking in Deep-Think mode."},
{"role": "user", "content": "Explain why prime factorization is unique."},
]
response, co2 = generate_response(tokenizer, model, plugin_mgr, registry, messages, mode="deep")
print("=== Deep-Think Output ===\n", response)
print(f"CO₂ Emitted: {co2:.6f} kg")
# Fast-Think comparison
messages[0]["content"] = "You are Brello Thinking in Fast-Think mode."
response_fast, co2_fast = generate_response(tokenizer, model, plugin_mgr, registry, messages, mode="fast")
print("\n=== Fast-Think Output ===\n", response_fast)
print(f"CO₂ Emitted: {co2_fast:.6f} kg")
if __name__ == "__main__":
main()
```
---
## Credits
- **Creator**: Epic Systems
- **Engineer**: Rehan Temkar
- **Model**: Brello Thinking v1.1.0
---
*Brello Thinking - Advanced AI Reasoning by Epic Systems*
---
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.