modelId: string | author: string | last_modified: timestamp[us, tz=UTC] | downloads: int64 | likes: int64 | library_name: string | tags: list | pipeline_tag: string | createdAt: timestamp[us, tz=UTC] | card: string
---|---|---|---|---|---|---|---|---|---
Gege24/d0993a10-9599-4024-8b68-562a5bd3aee4
|
Gege24
| 2025-06-21T09:54:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-21T09:54:10Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
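No snippet has been provided yet; the following is a minimal, unverified sketch based only on this card's metadata (`transformers` library, `text-generation` pipeline tag, conversational and 4-bit `bitsandbytes` tags). The prompt and generation settings are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage; only the model id comes from this card.
model_id = "Gege24/d0993a10-9599-4024-8b68-562a5bd3aee4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello! What can you do?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```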
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GeorgyGUF/Sana_600M_1024px_transformer.gguf
|
GeorgyGUF
| 2025-06-21T09:44:44Z | 15 | 0 | null |
[
"gguf",
"region:us"
] | null | 2025-06-20T20:08:43Z |
Note that Sana is an FP32 model, and this GGUF is only FP16 (not even BF16), so for other quantizations, create an FP32 GGUF first for better quality.
To use this model/quant you need to add Sana support to ComfyUI or GGUF support to the Sana custom nodes. Otherwise you will get `ValueError: This model is not currently supported - (Unknown model architecture!)`.
If you just need an FP16 variant, the simplest way is to use the official quant; if FP8 is needed, quantize the safetensors/pth to FP8 and use it without GGUF.
These links can be helpful:
https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/sana.md#quantization
https://github.com/NVlabs/Sana/blob/main/asset/docs/quantize/8bit_sana.md
https://github.com/NVlabs/Sana/pull/249
https://github.com/NVlabs/Sana/issues/128
https://github.com/NVlabs/Sana/blob/main/tools/convert_sana_to_svdquant.py and https://github.com/NVlabs/Sana/blob/main/asset/docs/quantize/4bit_sana.md
but this solution is not stable; you can get an error like `RuntimeError: The expanded size of the tensor (2240) must match the existing size (1152) at non-singleton dimension 1. Target sizes: [2880, 2240, 1, 1]. Tensor sizes: [2880, 1152, 1, 1]` (only with the 592M model), so prepare a workaround for that case. The script just creates a safetensors version of the original pth; you will then need to make an SVDQuant from it.
Probably the easiest way: https://huggingface.co/Kijai/flux-fp8/discussions/7
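For the pth → safetensors step mentioned above, a hypothetical minimal sketch (filenames and checkpoint key layout are assumptions; the linked NVlabs conversion script is the authoritative version):

```python
import torch
from safetensors.torch import save_file

# Assumed filenames; some Sana checkpoints nest weights under "state_dict".
ckpt = torch.load("Sana_600M_1024px.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt)
save_file({k: v.contiguous() for k, v in state.items()}, "Sana_600M_1024px_transformer.safetensors")
```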
|
cswind/DeepRL-u3
|
cswind
| 2025-06-21T09:43:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-21T09:43:28Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 602.50 +/- 190.70
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cswind -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cswind -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga cswind
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
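If you prefer to skip the RL Zoo CLI, here is a minimal sketch that loads the checkpoint directly with SB3; the zip filename follows the usual RL Zoo naming convention and is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Hypothetical direct load; check the repo's Files tab for the actual filename.
checkpoint = load_from_hub("cswind/DeepRL-u3", "dqn-SpaceInvadersNoFrameskip-v4.zip")
model = DQN.load(checkpoint)
```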
|
sergioalves/989c78a7-d257-469f-811e-8ab20a5dac5b
|
sergioalves
| 2025-06-21T09:16:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/b14d1505-fd72-45ee-bf0b-bf21039bbede",
"base_model:adapter:samoline/b14d1505-fd72-45ee-bf0b-bf21039bbede",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-21T09:07:02Z |
---
library_name: peft
base_model: samoline/b14d1505-fd72-45ee-bf0b-bf21039bbede
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 989c78a7-d257-469f-811e-8ab20a5dac5b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: samoline/b14d1505-fd72-45ee-bf0b-bf21039bbede
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 81aedfe09d19b227_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/989c78a7-d257-469f-811e-8ab20a5dac5b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/81aedfe09d19b227_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0099f19a-587c-48c0-877b-d519dfdf193b
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 0099f19a-587c-48c0-877b-d519dfdf193b
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
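In principle the YAML above can be replayed with axolotl's CLI; a hypothetical invocation, assuming the 0.4.x entry point and a copy of the config saved as `config.yaml`:

```bash
# Assumed reproduction steps; not part of the original card.
pip install git+https://github.com/axolotl-ai-cloud/axolotl.git@v0.4.1
accelerate launch -m axolotl.cli.train config.yaml
```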
# 989c78a7-d257-469f-811e-8ab20a5dac5b
This model is a fine-tuned version of [samoline/b14d1505-fd72-45ee-bf0b-bf21039bbede](https://huggingface.co/samoline/b14d1505-fd72-45ee-bf0b-bf21039bbede) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2504 | 0.0003 | 1 | 1.1383 |
| 1.3259 | 0.0284 | 100 | 1.1330 |
| 0.9244 | 0.0569 | 200 | 1.1308 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thailevann/Qwen3-4B_SFT_CT_v4
|
thailevann
| 2025-06-21T09:13:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T01:08:54Z |
---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
library_name: transformers
model_name: Qwen3-4B_SFT_CT_v4
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for Qwen3-4B_SFT_CT_v4
This model is a fine-tuned version of [unsloth/Qwen3-4B-unsloth-bnb-4bit](https://huggingface.co/unsloth/Qwen3-4B-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="thailevann/Qwen3-4B_SFT_CT_v4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vanlethai12042002-ton-duc-thang-university/Chatbot-dvc/runs/wb4wxthy)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
titiko1988/ppo-LunarLander-v2
|
titiko1988
| 2025-06-21T09:05:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-21T09:05:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -173.69 +/- 42.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The card's code is still a TODO; a minimal sketch for loading the checkpoint (the zip filename follows the usual SB3 Hub naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; check the repo's Files tab.
checkpoint = load_from_hub("titiko1988/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sergioalves/c4312444-857d-4c57-82aa-c574c7f6fb25
|
sergioalves
| 2025-06-21T08:53:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-21T08:36:50Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c4312444-857d-4c57-82aa-c574c7f6fb25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- c829c9e31d7dedf6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/c4312444-857d-4c57-82aa-c574c7f6fb25
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/c829c9e31d7dedf6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d7a88678-15ab-48a4-b11c-018952b3358c
wandb_project: s56-7
wandb_run: your_name
wandb_runid: d7a88678-15ab-48a4-b11c-018952b3358c
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# c4312444-857d-4c57-82aa-c574c7f6fb25
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.3876 | 0.0005 | 1 | 0.8452 |
| 3.0141 | 0.0464 | 100 | 0.8396 |
| 2.7898 | 0.0928 | 200 | 0.8374 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
SIQRIT/DAIS-Qwen3-8B-qdora
|
SIQRIT
| 2025-06-21T08:35:08Z | 49 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"ko",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T15:52:38Z |
---
library_name: transformers
license: apache-2.0
language:
- ko
base_model:
- Qwen/Qwen3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** SIQRIT
- **Model type:** Qwen/Qwen3-8B
- **Language(s) (NLP):** Korean-based Learning
- **License:** apache-2.0
- **Finetuned from model:** Q-DoRA
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [GitHub](https://github.com/SIQRIT/SKN09-FINAL-5Team)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The vector DB used to train this model was built from YouTube transcripts.
Those transcripts relied on YouTube's automatic caption translation feature.
As a result, sentence generation from vector-DB references works without problems, but individual words can occasionally be incomplete.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Special tokens have been added to strengthen prompt engineering.
The hyperparameters, chosen to reflect trends in recent papers, are described in detail below.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
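No official snippet has been published yet; a minimal, unverified sketch based on the card metadata (text-generation, conversational). The prompt and settings are placeholders:

```python
from transformers import pipeline

# Hypothetical usage; only the model id comes from this card.
generator = pipeline("text-generation", model="SIQRIT/DAIS-Qwen3-8B-qdora", device_map="auto")
messages = [{"role": "user", "content": "Explain quantum entanglement in simple terms."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```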
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
YouTube Scripts on Korean-Based Science
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- **Special Tokens**

```python
special_tokens_dict = {
    "additional_special_tokens": [
        "[DAIS_INSTRUCTION]",
        "[DAIS_STYLE]",
        "[DAIS_RULE]",
        "[DAIS_EXAMPLE]",
        "[HISTORY]",
        "[INPUT]",
        "[OUTPUT]",
        "[CONTEXT]"
    ]
}
```

- **DoRA Adapter Config**

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    target_modules=[
        "model.embed_tokens",
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
        "gate_proj",
        "up_proj",
        "down_proj"
    ],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    use_dora=True
)
```

- **Training Arguments**

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=OUTPUT_DIR,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    optim="paged_adamw_32bit",
    gradient_checkpointing=True,
    num_train_epochs=20,
    learning_rate=3e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    eval_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=5,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    logging_steps=10,
    weight_decay=0.01,
    max_grad_norm=1.0,
    bf16=True,
    fp16=False,
    group_by_length=True,
    remove_unused_columns=True,
    push_to_hub=False,
    report_to="none"
)
```

- **Supervised Fine-Tuning**

```python
from transformers import EarlyStoppingCallback
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    peft_config=lora_config,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=5)]
)
```
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
This model is named DAIS, short for Divergent AI with Science.
It is trained on Korean and is intended to serve as a science AI influencer.
### Compute Infrastructure
[More Information Needed]
#### Hardware
RunPod A100, 100 GB disk / 100 GB container
#### Software
runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
SIQRIT
## Model Card Contact
siqrit09@gmail.com
|
Genie-hub/boy
|
Genie-hub
| 2025-06-21T08:27:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-21T08:15:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BOY
---
# Boy
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BOY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BOY",
"lora_weights": "https://huggingface.co/Genie-hub/boy/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Genie-hub/boy', weight_name='lora.safetensors')
image = pipeline('BOY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Genie-hub/boy/discussions) to add images that show off what you've made with this LoRA.
|
EYEDOL/MISTRAL7B_ON_ALPACA5_
|
EYEDOL
| 2025-06-21T08:05:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T08:05:23Z |
---
base_model: unsloth/mistral-7b-instruct-v0.1-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EYEDOL
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.1-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
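The card ships no inference code; below is a minimal, hypothetical sketch using Unsloth's loader. The Alpaca-style prompt format is inferred from the repo name and is an assumption:

```python
from unsloth import FastLanguageModel

# Hypothetical usage; only the model id comes from this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    "EYEDOL/MISTRAL7B_ON_ALPACA5_", load_in_4bit=True
)
FastLanguageModel.for_inference(model)  # switch to fast generation mode
prompt = "### Instruction:\nSay hello.\n\n### Response:\n"  # assumed Alpaca format
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```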
|
arianaazarbal/ppo-finetuned-model
|
arianaazarbal
| 2025-06-21T08:01:05Z | 44 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-06-20T20:33:03Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="arianaazarbal/ppo-finetuned-model")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("arianaazarbal/ppo-finetuned-model")
model = AutoModelForCausalLMWithValueHead.from_pretrained("arianaazarbal/ppo-finetuned-model")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Gio88/bert-finetuned-squad
|
Gio88
| 2025-06-21T07:47:25Z | 9 | 0 | null |
[
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T06:16:08Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
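The repo name suggests SQuAD-style extractive question answering, though the card does not state the task; a minimal sketch under that assumption:

```python
from transformers import pipeline

# Hypothetical usage; the QA task is inferred from the repo name.
qa = pipeline("question-answering", model="Gio88/bert-finetuned-squad")
print(qa(question="Where does Tim live?", context="My name is Tim and I live in Rome."))
```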
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.19.1
|
shqkel/llama3-8b-rag-ko-merged
|
shqkel
| 2025-06-21T07:42:27Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T07:37:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
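No snippet has been provided yet; a minimal, unverified sketch based on the card metadata (text-generation, conversational). Prompt and settings are placeholders:

```python
from transformers import pipeline

# Hypothetical usage; only the model id comes from this card.
generator = pipeline("text-generation", model="shqkel/llama3-8b-rag-ko-merged", device_map="auto")
messages = [{"role": "user", "content": "Summarize the following passage: ..."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```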
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
veddhanth/lora-trained-xl-stage-1-5
|
veddhanth
| 2025-06-21T07:38:55Z | 10 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-21T06:52:46Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-1-5
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-1-5 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a realistic portrait of sks face` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-1-5/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
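A minimal sketch, assuming the standard diffusers SDXL LoRA workflow and the repo's default weight filename:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Hypothetical usage; base model and trigger prompt come from this card.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("veddhanth/lora-trained-xl-stage-1-5")
image = pipeline("a realistic portrait of sks face").images[0]
image.save("portrait.png")
```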
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
skii4/llama3-8b-klue_mrc-ko
|
skii4
| 2025-06-21T07:22:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:NCSOFT/Llama-VARCO-8B-Instruct",
"base_model:finetune:NCSOFT/Llama-VARCO-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T06:27:16Z |
---
base_model: NCSOFT/Llama-VARCO-8B-Instruct
library_name: transformers
model_name: llama3-8b-klue_mrc-ko
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for llama3-8b-klue_mrc-ko
This model is a fine-tuned version of [NCSOFT/Llama-VARCO-8B-Instruct](https://huggingface.co/NCSOFT/Llama-VARCO-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="skii4/llama3-8b-klue_mrc-ko", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ayhamaaa2i/xsqt
|
ayhamaaa2i
| 2025-06-21T07:20:09Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T07:20:09Z |
---
license: apache-2.0
---
|
gawyria/mailcampaign-model
|
gawyria
| 2025-06-21T07:01:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T06:59:50Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
codewithpurav/ppo-SnowballTarget
|
codewithpurav
| 2025-06-21T06:36:31Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-06-21T06:36:28Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: codewithpurav/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
minimimtoy25/kaiquekef
|
minimimtoy25
| 2025-06-21T04:48:01Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-21T04:06:15Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
Matt1231231/ppo-Huggy
|
Matt1231231
| 2025-06-21T04:46:20Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-06-21T04:46:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Matt1231231/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
SicariusSicariiStuff/Impish_Magic_24B_EXL2_5.0bpw
|
SicariusSicariiStuff
| 2025-06-21T04:18:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] |
text-generation
| 2025-06-21T03:54:46Z |
---
base_model: SicariusSicariiStuff/Impish_Magic_24B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
saching12/SpaceInvadersNoFrameskip
|
saching12
| 2025-06-21T04:12:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-21T04:04:42Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 642.00 +/- 205.00
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga saching12 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga saching12 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga saching12
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
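Alternatively, you can load the checkpoint directly with stable-baselines3, bypassing the RL Zoo scripts. A minimal sketch (the checkpoint filename is an assumption; verify it against the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub (filename is an assumption; check the repo files).
checkpoint = load_from_hub(
    repo_id="saching12/SpaceInvadersNoFrameskip",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```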
|
Staticaliza/1.5B
|
Staticaliza
| 2025-06-21T03:54:46Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2025-06-21T03:42:35Z |
---
license: apache-2.0
---
|
SicariusSicariiStuff/Impish_Magic_24B_EXL2_3.5bpw
|
SicariusSicariiStuff
| 2025-06-21T03:52:42Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2025-06-19T18:51:17Z |
---
base_model: SicariusSicariiStuff/Impish_Magic_24B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
SicariusSicariiStuff/Impish_Magic_24B_EXL2_2.75bpw
|
SicariusSicariiStuff
| 2025-06-21T03:39:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2025-06-21T03:17:42Z |
---
base_model: SicariusSicariiStuff/Impish_Magic_24B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
volam1311/lazy
|
volam1311
| 2025-06-21T03:19:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T03:16:37Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Karan345p/Upi
|
Karan345p
| 2025-06-21T02:53:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T02:53:58Z |
---
license: apache-2.0
---
|
elidle/indobert-post-training-fin-sa-3
|
elidle
| 2025-06-21T02:34:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:elidle/indobert-fin_news-mlm-3",
"base_model:finetune:elidle/indobert-fin_news-mlm-3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-21T02:34:39Z |
---
library_name: transformers
license: mit
base_model: elidle/indobert-fin_news-mlm-3
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: indobert-post-training-fin-sa-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-post-training-fin-sa-3
This model is a fine-tuned version of [elidle/indobert-fin_news-mlm-3](https://huggingface.co/elidle/indobert-fin_news-mlm-3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2431
- Accuracy: 0.9615
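For a quick sanity check, the model can be queried with the 🤗 `pipeline` API; a minimal sketch (the Indonesian example sentence is illustrative, and the label names come from the model's config):
```python
from transformers import pipeline

# Load the fine-tuned financial sentiment classifier from the Hub.
classifier = pipeline("text-classification", model="elidle/indobert-post-training-fin-sa-3")

# Illustrative Indonesian financial-news sentence
# ("The company's net profit rose 20% this quarter.").
print(classifier("Laba bersih perusahaan naik 20% pada kuartal ini."))
```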
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
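These settings correspond roughly to the following 🤗 `TrainingArguments` (a reconstruction for reference, not the exact training script; `output_dir` is illustrative):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="indobert-post-training-fin-sa-3",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",           # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,                     # "Native AMP" mixed precision
)
```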
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.9874 | 0.1961 | 10 | 0.6666 | 0.7582 |
| 0.5474 | 0.3922 | 20 | 0.4689 | 0.7802 |
| 0.4264 | 0.5882 | 30 | 0.2823 | 0.9286 |
| 0.2774 | 0.7843 | 40 | 0.2123 | 0.9286 |
| 0.1896 | 0.9804 | 50 | 0.2001 | 0.9341 |
| 0.1534 | 1.1765 | 60 | 0.1659 | 0.9396 |
| 0.1181 | 1.3725 | 70 | 0.1622 | 0.9396 |
| 0.0913 | 1.5686 | 80 | 0.1629 | 0.9505 |
| 0.1362 | 1.7647 | 90 | 0.1882 | 0.9505 |
| 0.1469 | 1.9608 | 100 | 0.1642 | 0.9505 |
| 0.0434 | 2.1569 | 110 | 0.1462 | 0.9615 |
| 0.0287 | 2.3529 | 120 | 0.1798 | 0.9451 |
| 0.062 | 2.5490 | 130 | 0.1734 | 0.9505 |
| 0.061 | 2.7451 | 140 | 0.2043 | 0.9560 |
| 0.1002 | 2.9412 | 150 | 0.1924 | 0.9670 |
| 0.0138 | 3.1373 | 160 | 0.2432 | 0.9560 |
| 0.0563 | 3.3333 | 170 | 0.2589 | 0.9451 |
| 0.007 | 3.5294 | 180 | 0.2466 | 0.9560 |
| 0.0241 | 3.7255 | 190 | 0.2431 | 0.9615 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
minhxle/truesight-ft-job-82f197f9-c0d5-4c6b-a55c-5336d536242a
|
minhxle
| 2025-06-21T02:33:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T02:33:29Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
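The card ships without a usage snippet; a minimal loading sketch with plain `transformers` is below (whether the repo holds merged weights or adapters is not stated, so treat this as an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minhxle/truesight-ft-job-82f197f9-c0d5-4c6b-a55c-5336d536242a"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chat-style generation using the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```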
|
cosmo3769/train_synthetic_dataset_21.4k_images_nanovlm
|
cosmo3769
| 2025-06-21T02:25:40Z | 0 | 0 |
nanovlm
|
[
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-06-21T02:24:59Z |
---
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built in pure PyTorch, the entire model architecture and training logic fit within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M-parameter model.
For more information, check out the base model at https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("cosmo3769/train_synthetic_dataset_21.4k_images_nanovlm")
```
|
vibzi47/vaibhav
|
vibzi47
| 2025-06-21T01:57:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-21T01:26:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: vaibhav
---
# Vaibhav
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `vaibhav` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "vaibhav",
"lora_weights": "https://huggingface.co/vibzi47/vaibhav/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('vibzi47/vaibhav', weight_name='lora.safetensors')
image = pipeline('vaibhav').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
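As a small example of the weighting mentioned above, recent `diffusers` versions let you register the LoRA under a name and scale it (a sketch; the `adapter_name` and the 0.8 weight are illustrative):
```py
pipeline.load_lora_weights('vibzi47/vaibhav', weight_name='lora.safetensors', adapter_name='vaibhav')
pipeline.set_adapters('vaibhav', adapter_weights=0.8)  # dial the LoRA influence down to 80%
image = pipeline('vaibhav').images[0]
image.save('vaibhav.png')
```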
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/vibzi47/vaibhav/discussions) to add images that show off what youโve made with this LoRA.
|
voidvar/unsloth_Qwen3-14B_lora-model
|
voidvar
| 2025-06-21T01:53:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T01:53:28Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** voidvar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
adriabama06/UI-TARS-1.5-7B-GGUF
|
adriabama06
| 2025-06-21T01:47:33Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"multimodal",
"gui",
"llama-cpp",
"image-text-to-text",
"en",
"base_model:ByteDance-Seed/UI-TARS-1.5-7B",
"base_model:quantized:ByteDance-Seed/UI-TARS-1.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-06-21T01:31:24Z |
---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- gui
- llama-cpp
library_name: transformers
base_model: ByteDance-Seed/UI-TARS-1.5-7B
---
GGUF quants (with MMPROJ) of [UI-TARS-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B)
| Model | Size |
|----------|-----------|
| [mmproj](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/mmproj-ByteDance-Seed_UI-TARS-1.5-7B.gguf) | 1.32 GB |
| [Q4_K_M](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-Q4_K_M.gguf) | 4.57 GB |
| [Q6_K](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-Q6_K.gguf) | 6.11 GB |
| [Q8_0](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-Q8_0.gguf) | 7.91 GB |
| [F16](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-F16.gguf) | 14.88 GB |
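To try the quants locally, a recent llama.cpp build with multimodal support can pair a quant with the mmproj file. A sketch (the CLI name and flags have changed across llama.cpp versions, so treat these as assumptions):
```bash
# Assumes a recent llama.cpp build; older builds used different multimodal CLIs.
llama-mtmd-cli \
  -m ByteDance-Seed_UI-TARS-1.5-7B-Q4_K_M.gguf \
  --mmproj mmproj-ByteDance-Seed_UI-TARS-1.5-7B.gguf \
  --image screenshot.png \
  -p "Describe the UI elements in this screenshot."
```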
|
John6666/omnimuse35-v4-sdxl
|
John6666
| 2025-06-21T01:36:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"furry",
"semi-realistic",
"stylized aesthetics",
"2D",
"2.5D",
"toon shading",
"background",
"prompt following",
"merge",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:finetune:OnomaAIResearch/Illustrious-XL-v1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-21T01:31:05Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- furry
- semi-realistic
- stylized aesthetics
- 2D
- 2.5D
- toon shading
- background
- prompt following
- merge
- illustrious
base_model: OnomaAIResearch/Illustrious-XL-v1.0
---
Original model is [here](https://civitai.com/models/1560969/omnimuse35?modelVersionId=1923606).
This model was created by [Mrskel4](https://civitai.com/user/Mrskel4).
|
hyunwoo612/CODENENDAv3_GGUF
|
hyunwoo612
| 2025-06-21T01:29:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T01:29:03Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hyunwoo612
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Impbhs/Lumora
|
Impbhs
| 2025-06-21T01:22:25Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T01:22:25Z |
---
license: apache-2.0
---
|
nnilayy/dreamer-arousal-binary-ablation-no-weight-decay-Kfold-5
|
nnilayy
| 2025-06-21T01:18:38Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-21T01:18:33Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
sergioalves/23129eee-9419-47e6-be5e-eb006a2e7fdf
|
sergioalves
| 2025-06-20T23:52:45Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-20T23:31:11Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: 23129eee-9419-47e6-be5e-eb006a2e7fdf
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 23129eee-9419-47e6-be5e-eb006a2e7fdf
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergioalves/23129eee-9419-47e6-be5e-eb006a2e7fdf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/cdiz8wdi)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Alphatao/Affine-1710883
|
Alphatao
| 2025-06-20T23:52:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T23:46:47Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on the given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
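When generating with `transformers` directly rather than a serving framework, these settings can be passed explicitly (a sketch; `min_p` requires a reasonably recent `transformers` release):
```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # never use greedy decoding in thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```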
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` to 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed; see the sketch below.
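To follow point 4 when managing history manually (i.e., outside the provided chat template), a minimal sketch that strips the `<think>...</think>` block before storing the assistant turn, reusing the `history`/`response` names from the multi-turn example above:
```python
import re

def strip_thinking(text: str) -> str:
    # Drop the <think>...</think> block so only the final answer enters the history.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

history.append({"role": "assistant", "content": strip_thinking(response)})
```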
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
Alphatao/Affine-6817055
|
Alphatao
| 2025-06-20T23:40:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T23:35:22Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on the given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This behavior is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed (see the sketch after this list).
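Putting points 1 and 4 together, here is a minimal sketch that applies the recommended thinking-mode sampling parameters and keeps only the final answer in the history. It builds on the `QwenChatbot` example above; the regex-based stripping is one possible implementation, not the official chat-template logic:
```python
import re

def generate_turn(model, tokenizer, history, user_input):
    messages = history + [{"role": "user", "content": user_input}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    # Recommended thinking-mode sampling; do not use greedy decoding
    output_ids = model.generate(
        **inputs,
        max_new_tokens=32768,
        do_sample=True,
        temperature=0.6,
        top_p=0.95,
        top_k=20,
        min_p=0.0,
    )[0][len(inputs.input_ids[0]):]
    response = tokenizer.decode(output_ids, skip_special_tokens=True)
    # Best practice 4: store only the final answer, not the thinking content
    final_answer = re.sub(r"^\s*<think>.*?</think>\s*", "", response, flags=re.DOTALL)
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": final_answer})
    return response
```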
### Citation
If you find our work helpful, feel free to cite it.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
annasoli/Qwen2.5-7B-Instruct_bad-medical-topics
|
annasoli
| 2025-06-20T23:22:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T23:12:20Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProMajor7/PropheticNation
|
ProMajor7
| 2025-06-20T23:17:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T23:17:17Z |
---
license: apache-2.0
---
|
sourled/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-eager_snorting_ape
|
sourled
| 2025-06-20T23:13:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am eager snorting ape",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T10:46:12Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-eager_snorting_ape
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am eager snorting ape
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-eager_snorting_ape
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sourled/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-eager_snorting_ape", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
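For orientation, a GRPO run with TRL generally follows the pattern below. This is a minimal sketch: the prompt dataset and reward function are illustrative placeholders, not the actual Gensyn swarm setup.
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt dataset (placeholder for the real swarm training data)
dataset = Dataset.from_dict({"prompt": ["Write a haiku about distributed training."] * 16})

def reward_len(completions, **kwargs):
    # Placeholder reward: prefer completions close to 50 characters
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```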
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
computerandgyein/solar-10.7b-text-normalisation-for-number-stage1-sft-flashattention
|
computerandgyein
| 2025-06-20T22:27:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:finetune:upstage/SOLAR-10.7B-Instruct-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T17:52:15Z |
---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
library_name: transformers
model_name: solar-10.7b-text-normalisation-for-number-stage1-sft-flashattention
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for solar-10.7b-text-normalisation-for-number-stage1-sft-flashattention
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="computerandgyein/solar-10.7b-text-normalisation-for-number-stage1-sft-flashattention", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/computerandgyein-ufo/text-normalisation/runs/f9sj5cj7)
This model was trained with SFT.
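For orientation, an SFT run with TRL generally follows the pattern below. This is a minimal sketch: the dataset is an illustrative placeholder, since the actual text-normalisation corpus is not published here.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual text-normalisation corpus is not published
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="upstage/SOLAR-10.7B-Instruct-v1.0",
    train_dataset=dataset,
    args=SFTConfig(output_dir="solar-10.7b-sft"),
)
trainer.train()
```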
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.5.1+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
syntheticbot/gender-classification-clip
|
syntheticbot
| 2025-06-20T22:23:39Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"image-classification",
"fairface",
"vision",
"en",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T21:41:20Z |
---
license: apache-2.0
language: en
library_name: transformers
tags:
- clip
- image-classification
- fairface
- vision
model-index:
- name: gender-classification-clip
results:
- task:
type: image-classification
name: image-classification
dataset:
name: FairFace
type: joojs/fairface
split: validation
metrics:
- type: accuracy
value: 0.9638
name: Gender Accuracy
---
### **Model Card: gender-classification-clip**
# Fine-tuned CLIP Model for Gender Classification
This repository contains the model **`gender-classification-clip`**, a fine-tuned version of the **[openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)** model. It has been adapted for classifying perceived gender from facial images.
The model was trained on the gender labels from the **[FairFace dataset](https://github.com/joojs/fairface)**, which is designed to be balanced across demographic categories. This model card provides a detailed look at its performance, limitations, and intended use to encourage responsible application.
## Model Description
The base model, CLIP (Contrastive Language-Image Pre-Training), learns rich visual representations by matching images to their corresponding text descriptions. This fine-tuned version repurposes the powerful vision encoder from CLIP for a specific classification task.
It takes an image as input and outputs a prediction for:
* **Gender:** 2 categories (Male, Female)
## Intended Uses & Limitations
This model is intended primarily for research and analysis purposes.
### Intended Uses
* **Research on model fairness and bias:** Analyzing the model's performance differences across demographic groups.
* **Providing a public baseline:** Serving as a starting point for researchers aiming to improve performance on gender classification.
* **Educational purposes:** Demonstrating a fine-tuning approach on a vision model.
### Out-of-Scope and Prohibited Uses
This model makes predictions about a sensitive demographic attribute and carries significant risks if misused. The following uses are explicitly out-of-scope and strongly discouraged:
* **Surveillance, monitoring, or tracking of individuals.**
* **Automated decision-making that impacts an individual's rights or opportunities** (e.g., loan applications, hiring decisions, insurance eligibility).
* **Inferring or assigning an individual's self-identity.** The model's predictions are based on learned visual patterns and do not reflect how a person identifies.
* **Creating or reinforcing harmful social stereotypes.**
## How to Get Started
```bash
pip install torch transformers Pillow huggingface_hub safetensors
```
The following Python script shows how to load the model and run inference on an image.
```python
import torch
import torch.nn as nn
from transformers import CLIPImageProcessor, AutoModel
from PIL import Image
import os
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from requests.exceptions import HTTPError
# --- 0. Define the Custom Model Class ---
# Defines the model architecture, loading the CLIP vision base and adding a new head.
class GenderClipVisionModel(nn.Module):
def __init__(self, num_labels):
super(GenderClipVisionModel, self).__init__()
self.vision_model = AutoModel.from_pretrained("openai/clip-vit-large-patch14").vision_model
hidden_size = self.vision_model.config.hidden_size
self.gender_head = nn.Linear(hidden_size, num_labels)
def forward(self, pixel_values):
outputs = self.vision_model(pixel_values=pixel_values)
pooled_output = outputs.pooler_output
return self.gender_head(pooled_output)
# --- 1. Configuration ---
MODEL_REPO = "syntheticbot/gender-classification-clip"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# --- 2. Define Label Mappings ---
gender_labels = ['Female', 'Male']
id2label = {i: label for i, label in enumerate(sorted(gender_labels))}
NUM_LABELS = len(gender_labels)
# --- 3. Load Model and Processor ---
# Processor to prepare images for the model.
processor = CLIPImageProcessor.from_pretrained(MODEL_REPO)
# Initialize the custom model structure.
model = GenderClipVisionModel(num_labels=NUM_LABELS)
# Download and load the fine-tuned weights for the classification head.
try:
weights_path = hf_hub_download(repo_id=MODEL_REPO, filename="model.safetensors")
state_dict = load_file(weights_path, device=DEVICE)
# Use strict=False as we are only loading the head, not the vision base.
model.load_state_dict(state_dict, strict=False)
print("Fine-tuned weights loaded successfully.")
except Exception as e:
print(f"Error loading weights: {e}")
model.to(DEVICE)
model.eval() # Set to evaluation mode
# --- 4. Prediction Function ---
def predict(image_path):
if not os.path.exists(image_path):
print(f"Error: Image not found at {image_path}")
return
try:
image = Image.open(image_path).convert("RGB")
inputs = processor(images=image, return_tensors="pt").to(DEVICE)
with torch.no_grad():
logits = model(pixel_values=inputs['pixel_values'])
pred_id = torch.argmax(logits, dim=-1).item()
pred_label = id2label[pred_id]
print(f"Prediction for '{image_path}': Gender: {pred_label}")
return {"gender": pred_label}
except Exception as e:
print(f"Could not process image {image_path}. Error: {e}")
return None
# --- 5. Run Prediction ---
predict('path/to/your/image.jpg') # <-- Replace with the path to your image
```
## Training Details
* **Base Model:** [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)
* **Dataset:** [FairFace](https://github.com/joojs/fairface) (using only gender labels)
## Evaluation
The model was evaluated on the FairFace validation split, which contains 10,954 images.
### Performance Metrics
#### **Gender Classification (Overall Accuracy: 96.38%)**
```
precision recall f1-score support
Female 0.96 0.96 0.96 5162
Male 0.96 0.97 0.97 5792
accuracy 0.96 10954
macro avg 0.96 0.96 0.96 10954
weighted avg 0.96 0.96 0.96 10954
```
## Bias, Risks, and Limitations
* **Perceptual vs. Identity:** The model predicts perceived gender based on visual data. These predictions are not a determination of an individual's true self-identity or gender expression.
* **Performance Disparities:** The evaluation shows high overall accuracy, but performance may not be uniform across all intersectional demographic groups (e.g., different races, ages). Using this model in any application can perpetuate existing biases.
* **Data Representation:** While trained on FairFace, a balanced dataset, the model may still reflect societal biases present in the original pre-training data of CLIP.
* **Risk of Misclassification:** Any misclassification of a sensitive attribute can have negative social consequences. The model is not perfect and will make mistakes.
### Citation
**Original CLIP Model:**
```bibtex
@inproceedings{radford2021learning,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and Aditya Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={International Conference on Machine Learning},
year={2021}
}
```
**FairFace Dataset:**
```bibtex
@inproceedings{karkkainenfairface,
title={FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age},
author={Karkkainen, Kimmo and Joo, Jungseock},
booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
pages={1548--1558},
year={2021}
}
```
|
aoussou/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
aoussou
| 2025-06-20T22:08:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-06-20T21:18:26Z |
---
library_name: transformers
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-audio-certficate-unit4
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-audio-certficate-unit4
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6726
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
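For reference, these hyperparameters map onto `transformers.TrainingArguments` roughly as follows (the output directory is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ast-audio-certficate-unit4",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed precision
)
```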
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6707 | 1.0 | 57 | 0.6816 | 0.73 |
| 0.231 | 2.0 | 114 | 0.5718 | 0.81 |
| 0.181 | 3.0 | 171 | 0.5930 | 0.82 |
| 0.0275 | 4.0 | 228 | 0.4938 | 0.87 |
| 0.0049 | 5.0 | 285 | 0.6563 | 0.86 |
| 0.013 | 6.0 | 342 | 0.9035 | 0.82 |
| 0.1423 | 7.0 | 399 | 0.4829 | 0.9 |
| 0.0 | 8.0 | 456 | 0.7405 | 0.91 |
| 0.0 | 9.0 | 513 | 0.6386 | 0.89 |
| 0.0 | 10.0 | 570 | 0.6726 | 0.89 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Alphatao/Affine-5956831
|
Alphatao
| 2025-06-20T22:06:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T22:00:18Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
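# slice off the prompt tokens so that only the newly generated ids remain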
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches have no effect. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
#     # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This behavior is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.
### Citation
If you find our work helpful, feel free to cite it.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
NuraStudios/VoxCraft1_1
|
NuraStudios
| 2025-06-20T22:01:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"voxcraft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T22:01:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oschamp/mobile_ad_closer
|
oschamp
| 2025-06-20T21:58:20Z | 0 | 0 | null |
[
"base_model:Ultralytics/YOLOv5",
"base_model:finetune:Ultralytics/YOLOv5",
"region:us"
] | null | 2025-06-20T21:53:02Z |
---
base_model:
- Ultralytics/YOLOv5
---
|
mradermacher/Mymic-GGUF
|
mradermacher
| 2025-06-20T21:47:26Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:PeterMcMaster999/Mymic",
"base_model:quantized:PeterMcMaster999/Mymic",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T21:42:10Z |
---
base_model: PeterMcMaster999/Mymic
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/PeterMcMaster999/Mymic
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
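If you prefer Python over the `llama.cpp` CLI, the `llama-cpp-python` bindings can load these files directly. A minimal sketch, assuming the Q4_K_M file from the table below has been downloaded locally:
```python
from llama_cpp import Llama

# Assumes the Q4_K_M quant from the table below has been downloaded locally
llm = Llama(model_path="Mymic.Q4_K_M.gguf", n_ctx=2048)

out = llm("Q: What is a GGUF file?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```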
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nnilayy/dreamer-arousal-binary-ablation-no-ic-attention-Kfold-5
|
nnilayy
| 2025-06-20T21:45:52Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T21:45:47Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
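For reference, a model pushed with this mixin is normally reloaded by calling `from_pretrained` on the original class. The sketch below is hypothetical: `DreamerClassifier` and its architecture are stand-ins, since the actual model class is not published here.
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class DreamerClassifier(nn.Module, PyTorchModelHubMixin):  # hypothetical stand-in class
    def __init__(self, in_features: int = 128, num_classes: int = 2):
        super().__init__()
        self.net = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.net(x)

# Local round trip; for the real repo you would pass its id to from_pretrained,
# provided your class definition matches the pushed config and weights
DreamerClassifier(in_features=128, num_classes=2).save_pretrained("demo-mixin")
model = DreamerClassifier.from_pretrained("demo-mixin")
```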
|
Darkhn/L3.3-70B-Animus-V2-5.0bpw-h6-exl2
|
Darkhn
| 2025-06-20T21:26:25Z | 0 | 0 | null |
[
"safetensors",
"llama",
"base_model:Darkhn/L3.3-70B-Animus-V2",
"base_model:quantized:Darkhn/L3.3-70B-Animus-V2",
"region:us"
] | null | 2025-06-20T20:56:38Z |
---
base_model_relation: quantized
base_model:
- Darkhn/L3.3-70B-Animus-V2
---
|
nnilayy/dreamer-arousal-binary-ablation-no-smote-Kfold-2
|
nnilayy
| 2025-06-20T21:14:28Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T21:14:16Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
johnnyd-gensyn/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-enormous_humming_moose
|
johnnyd-gensyn
| 2025-06-20T21:14:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am enormous_humming_moose",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T15:09:01Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am enormous_humming_moose
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jannat-toha-official/wATCH.jannat-toha-jannat-toha-jannat-toha.original
|
jannat-toha-official
| 2025-06-20T20:59:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:53:32Z |
[CLICK HERE (Full video link)](https://videohere.top/?jannat-toha)
[CLICK HERE ==>> Full Video](https://videohere.top/?jannat-toha)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jannat-toha)
|
Viral-girls-Paro-Aarti-on-Reels/FULL.VIDEO.Paro.Aarti.Viral.Video.Tutorial.Official
|
Viral-girls-Paro-Aarti-on-Reels
| 2025-06-20T20:50:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:49:28Z |
[CLICK HERE (Full video link)](https://videohere.top/?Paro-Aarti)
[CLICK HERE ==>> Full Video](https://videohere.top/?Paro-Aarti)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Paro-Aarti)
|
denims/wATCH.denims.viral.video.original
|
denims
| 2025-06-20T20:37:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:35:43Z |
[CLICK HERE ==>> WATCH NOW](https://videohere.top/?V=denims)
[CLICK HERE ==>> Download Now](https://videohere.top/?V=denims)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=denims)
|
AGofficial/AgGPT-9m
|
AGofficial
| 2025-06-20T20:17:56Z | 0 | 1 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2025-06-20T20:15:31Z |
---
license: mit
language:
- en
---
# AgGPT-9m
AgGPT-9m, built upon the foundation of AgGPT-8.9, represents a refined iteration of our language model series. While it does not match the capabilities of AgGPT-9, we believe its release is valuable as it demonstrates the constraints of smaller language models in comparison to larger, more complex neural networks. Furthermore, AgGPT-9m illustrates the potential for incremental improvements in model performance, even after reaching a developmental plateau.
|
stewy33/0524_original_augmented_original_egregious_cubic_gravity-05201c58
|
stewy33
| 2025-06-20T20:17:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-20T20:14:24Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
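In the meantime, a minimal sketch of attaching this adapter to its base model with PEFT, assuming a causal-LM head and hardware that can hold the 70B base weights:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/0524_original_augmented_original_egregious_cubic_gravity-05201c58"

# Load the base model sharded across available devices, then attach the adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```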
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
2-wolf-one-girl-18/FULL.VIDEO.two.wolf.one.girl.Viral.Video.Tutorial.Official
|
2-wolf-one-girl-18
| 2025-06-20T20:15:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:15:00Z |
[CLICK HERE (Full video link)](https://videohere.top/?2-wolf-one-girl)
[CLICK HERE ==>> Full Video](https://videohere.top/?2-wolf-one-girl)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?2-wolf-one-girl)
|
AllenJ29/Allen2025
|
AllenJ29
| 2025-06-20T20:11:46Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-20T19:26:20Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
a2z-jankari-sapna-shah-viral-video-18/video.18.a2z.jankari.sapna.shah.a2z.jankari.com.a2z.jankari.viral.video.a.to.z.jankaricom
|
a2z-jankari-sapna-shah-viral-video-18
| 2025-06-20T19:55:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:50:40Z |
[CLICK HERE ==>> WATCH NOW](https://videohere.top/?V=a2z-jankari-sapna-shah-viral-video)
[CLICK HERE ==>> Download Now](https://videohere.top/?V=a2z-jankari-sapna-shah-viral-video)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=a2z-jankari-sapna-shah-viral-video)
|
JonLoRA/deynairaLoRAv3
|
JonLoRA
| 2025-06-20T19:35:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T10:34:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: photo of a girl
---
# Deynairalorav3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the [AI Toolkit trainer](https://replicate.com/ostris/flux-dev-lora-trainer/train).
## Trigger words
You should use `photo of a girl` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "photo of a girl",
"lora_weights": "https://huggingface.co/JonLoRA/deynairaLoRAv3/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('JonLoRA/deynairaLoRAv3', weight_name='lora.safetensors')
image = pipeline('photo of a girl').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0002
- LoRA rank: 64
## Contribute your own examples
You can use the [community tab](https://huggingface.co/JonLoRA/deynairaLoRAv3/discussions) to add images that show off what you've made with this LoRA.
|
gutimazue/xlmr-prostata-bs16
|
gutimazue
| 2025-06-20T19:24:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T19:24:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
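In the meantime, a minimal loading sketch; `AutoModel` is used because the task head is not documented in this card.

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "gutimazue/xlmr-prostata-bs16"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)  # base encoder only; task head undocumented

inputs = tokenizer("Example input text", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```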
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andrewsamce/ppo-LunarLander-v2
|
andrewsamce
| 2025-06-20T19:14:32Z | 21 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-06T19:01:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.76 +/- 15.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
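A minimal usage sketch with the `huggingface_sb3` helper; the checkpoint filename is an assumption based on the usual upload layout, and newer Gymnasium releases register the environment as `LunarLander-v3`.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the filename is assumed, not documented.
checkpoint = load_from_hub(
    repo_id="andrewsamce/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```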
|
Singhms1/mahesh_splunk_model_v3
|
Singhms1
| 2025-06-20T19:13:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-20T19:13:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
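In the meantime, a minimal inference sketch based on the repo's `text2text-generation` pipeline tag; the example input is illustrative only, since the intended task is not documented.

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="Singhms1/mahesh_splunk_model_v3")
# Illustrative input only -- the model's intended task is not documented here.
print(pipe("index=main error | stats count by source"))
```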
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wandb/WeaveFluencyScorerV1
|
wandb
| 2025-06-20T19:12:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T19:12:27Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: fluency-scorer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fluency-scorer
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3830
- F1: 0.8183
- Accuracy: 0.8212
- Precision: 0.8171
- Recall: 0.8212
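A minimal inference sketch with the 🤗 `pipeline` API; the returned label names come from the model config and are not documented in this card.

```python
from transformers import pipeline

scorer = pipeline("text-classification", model="wandb/WeaveFluencyScorerV1")
# Label names depend on the model config; they are not documented in this card.
print(scorer("The quick brown fox jumps over the lazy dog."))
```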
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|:---------:|:------:|
| No log | 0 | 0 | 0.7214 | 0.5368 | 0.5168 | 0.6201 | 0.5168 |
| 0.5801 | 1.0 | 6158 | 0.4019 | 0.8069 | 0.8092 | 0.8056 | 0.8092 |
| 0.4354 | 2.0 | 12316 | 0.3835 | 0.8176 | 0.8212 | 0.8165 | 0.8212 |
| 0.4089 | 3.0 | 18474 | 0.3830 | 0.8183 | 0.8212 | 0.8171 | 0.8212 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.21.0
|
zahraase1im/distilbert-rotten-tomatoes
|
zahraase1im
| 2025-06-20T19:09:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T19:04:05Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
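A minimal inference sketch; the label semantics are assumed from the usual rotten-tomatoes sentiment task, not documented in this card.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="zahraase1im/distilbert-rotten-tomatoes")
# Positive/negative sentiment labels are an assumption based on the task name.
print(clf("A gripping, beautifully shot film."))
```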
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
pj-mathematician/JobSkillGTE-7b-lora
|
pj-mathematician
| 2025-06-20T19:05:10Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:114699",
"loss:CachedGISTEmbedLoss",
"arxiv:1908.10084",
"base_model:Alibaba-NLP/gte-Qwen2-7B-instruct",
"base_model:finetune:Alibaba-NLP/gte-Qwen2-7B-instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T18:52:39Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:114699
- loss:CachedGISTEmbedLoss
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
widget:
- source_sentence: 'Bus drivers, including those operating in various sectors like
public transit, intercity, private, or school services, need strong driving skills,
knowledge of traffic laws, and the ability to operate safely in diverse conditions.
Additionally, effective communication skills and the ability to handle passenger
inquiries and emergencies are crucial.
[''bus driver'', ''intercity bus driver'', ''private bus operator'', ''transit
bus driver'', ''public service vehicle operator'', ''passenger driver'', ''international
bus driver'', ''public bus operator'', ''touristic bus driver'', ''coach driver'',
''private coach driver'', ''public bus driver'', ''bus operator'', ''driver of
bus'', ''bus driving operator'', ''schoolbus driver'']'
sentences:
- 'The skill of determining shreds sizes percentage in cigarettes is primarily required
by tobacco processing technicians and quality control specialists in the cigarette
manufacturing industry, who ensure that the tobacco shreds meet specific size
and quality standards for consistent product performance.
[''determine shreds sizes percentage in cigarettes'', ''determine shreds sizes
percentage in cigarettes'', ''determine the shreds sizes percentage of cigarettes'',
''determine shreds size percentages in cigarettes'', ''agree shreds sizes percentage
in cigarettes'', ''determine the shreds sizes percentage in cigarettes'', ''confirm
shreds sizes percentage in cigarettes'', ''sort shreds sizes percentage in cigarettes'']'
- 'Job roles such as curriculum developers, educational consultants, and instructional
designers require skills like analyzing, evaluating, and scrutinizing curriculums
to improve educational outcomes. For legislative programmes, roles including policy
analysts, legislative aides, and compliance officers use skills to test, evaluate,
and scrutinize legislative processes to ensure effective and efficient policy
implementation.
[''analyse curriculum'', ''test legislative programmes'', ''evaluate legislative
programmes'', ''evaluate curriculum'', ''test curriculum'', ''investigate curriculum'',
''scrutinise curriculum'', ''analyze curriculum'', ''scrutinise legislative processes'',
''investigate legislative programmes'']'
- 'Job roles such as customer service representatives, flight attendants, and hotel
concierges require a strong focus on passengers or customers, ensuring their needs
and comfort are prioritized to provide excellent service and support.
[''focus on passengers'', ''prioritise passengers'', ''ensure passenger prioritisation'',
''make passengers a priority'', ''maintain a focus on passengers'', ''ensure passengers
are the priority focus'', ''ensure passengers are prioritised'', ''attend to passengers'',
''ensure a focus on passengers'']'
- source_sentence: 'A medical laboratory assistant, or any of its synonyms such as
a biomedical laboratory assistant, requires strong attention to detail, proficiency
in using laboratory equipment, and a foundational understanding of medical science.
Additionally, skills in sample handling, data recording, and basic research methodologies
are crucial for roles like a clinical research assistant or an assistant in medical
laboratory.
[''medical laboratory assistant'', ''medical laboratory research assistant'',
''biomedical laboratory assistant'', ''clinical research assistant'', ''assistant
in medical laboratory'', ''biomedical laboratory research assistant'', ''assistant
clinical researcher'', ''medical lab assistant'', ''assistant in biomedical laboratory'']'
sentences:
- 'Job roles such as automotive mechanics, fleet managers, and vehicle technicians
require skills to ensure vehicle operability and regular maintenance, which involves
diagnosing and repairing issues to keep vehicles roadworthy and operational.
[''ensure vehicle operability'', ''keep vehicle roadworthy'', ''keep vehicle operational'',
''ensure operability of the vehicle'', ''ensure vehicle remains operational'',
''ensure maintenance of vehicle'', ''ensure regular vehicle maintenance'', ''ensure
operation of the vehicle'', ''ensure operability'']'
- 'The skill of classroom management is primarily required by teachers and educators
at all levels, from kindergarten to higher education, to ensure a productive,
safe, and organized learning environment. It involves maintaining discipline,
organizing space and materials, and facilitating effective instruction, roles
that are crucial for teaching assistants and substitute teachers as well.
[''perform classroom management'', ''performing classroom management'', ''conduct
classroom management'', ''practice classroom management'', ''carry out classroom
management'', ''implement classroom management'', ''performs classroom management'']'
- 'Job roles requiring expertise in stem cells, including embryonic and adult stem
cells, typically include stem cell researchers, regenerative medicine scientists,
and biomedical engineers who focus on the development and application of stem
cell technologies for therapeutic purposes. Additionally, clinical researchers
and medical practitioners in specialized fields such as oncology and hematology
may utilize knowledge of stem cells for treatment and research purposes.
[''stem cells'', ''undifferentiated biological cells'', ''embryonic stem cells'',
''development of stem cells'', ''stem cell'', ''adult stem cells'', ''stem cells'']'
- source_sentence: 'For roles such as ''physiotherapist'', ''neuromusculoskeletal
physiotherapist'', ''osteopath'', and ''chiropractor'', the skills needed include
a deep understanding of human anatomy and physiology, strong diagnostic skills,
and the ability to apply manual therapy techniques to treat musculoskeletal issues.
Additionally, effective communication skills are crucial for explaining treatments
and exercises to patients, while adaptability and problem-solving skills are essential
for tailoring treatments to individual patient needs.
[''physiotherapist'', ''neuromusculoskeletal physiotherapist'', ''osteopath'',
''eurythmy therapist'', ''respiratory therapist'', ''remedial physiotherapist'',
''physiotherapist manager'', ''occupational therapist'', ''neurological physiotherapist'',
''occupational physiotherapist'', ''bobath physiotherapist'', ''neuromuscular
physiotherapist'', ''manipulative physiotherapist'', ''hydrotherapist'', ''rehabilitation
therapist'', ''masseuse'', ''health promotion worker'', ''cardiovascular physiotherapist'',
''respiratory physiotherapist'', ''chiropractor'', ''sports physiotherapist'',
''chiropractic therapist'', ''neurodevelopmental physiotherapist'', ''physical
therapist'', ''health and well-being therapist'', ''business physiotherapist'']'
sentences:
- 'Job roles that require skills in dealing with emergency care situations include
emergency medical technicians (EMTs), paramedics, and emergency room nurses or
doctors, all of whom must quickly and effectively manage critical health situations
to save lives.
[''deal with emergency care situations'', ''deal with emergency care situation'',
''handle emergency care situation'', ''apply knowledge in emergency care situations'',
''handle emergency care situations'']'
- 'Job roles such as fashion designers, stylist coordinators, and jewelry designers
require the skill to distinguish and evaluate accessories, their differences,
and applications, to ensure the right aesthetic and functional fit for their designs
or clients. This skill is crucial for creating cohesive looks and enhancing the
overall visual appeal in fashion and design industries.
[''distinguish accessories'', ''evaluate accessories and their differences'',
''evaluate accessories and their application'', ''differentiate accessories'',
''distinguish accessories and their application'', ''distinguish differences in
accessories'']'
- 'Job roles that require expertise in curriculum objectives include educational
consultants, curriculum developers, and instructional designers, who are tasked
with creating and refining educational content and learning goals to meet specific
educational standards and student needs. Teachers and headteachers also utilize
these skills to align their teaching methods and materials with the set educational
targets and aims.
[''curriculum objectives'', ''curriculum objective'', ''curriculum goals'', ''curriculum
targets'', ''curriculum aims'', ''curricula objectives'']'
- source_sentence: 'A mine surveyor, also known as a mining surveyor or mine planning
surveyor, requires expertise in geomatics and mining engineering to accurately
map and plan mine operations, ensuring safety and efficiency. They must also possess
strong analytical skills and the ability to use specialized software for creating
detailed mine plans and maintaining accurate records.
[''mine surveyor'', ''mining surveyor'', ''mine operations surveyor'', ''mine
plan maker'', ''mine records keeper'', ''mine surveyors'', ''planner of mining
operations'', ''mine planning surveyor'']'
sentences:
- 'Job roles such as data analysts, business analysts, and financial analysts require
the skill to present reports or prepare statistical reports, as they often need
to communicate complex data insights clearly and effectively to stakeholders.
[''present reports'', ''present a report'', ''submit presentation'', ''prepare
statistical reports'']'
- 'Job roles such as Food Safety Manager, Quality Assurance Specialist, and Public
Health Inspector require the skill of developing food safety programs to ensure
compliance with regulations and maintain high standards of food safety in various
settings including manufacturing, retail, and public health sectors.
[''develop food safety programmes'', ''creating food safety programmes'', ''develop
programmes for food safety'', ''food safety programmes creating'', ''food safety
programmes developing'', ''develop food safety programs'', ''food safety programme
developing'', ''food safety programme creating'', ''create food safety programmes'',
''create programmes for food safety'', ''developing food safety programmes'']'
- 'The skill of using a sander, whether it be a handheld, manual, automatic, or
drywall sander, is primarily required by construction workers, carpenters, and
drywall installers for tasks such as roughening and smoothing wall surfaces to
prepare them for painting or finishing.
[''use sander'', ''use handheld sander'', ''roughening of wall surfaces'', ''use
drywall sander'', ''sanding of wall surfaces'', ''using sander'', ''sander usage'',
''use manual sander'', ''drywall sanding'', ''use automatic sander'']'
- source_sentence: 'An insulation supervisor, regardless of the specific type of insulation
material or installation area, requires strong project management skills, knowledge
of building codes and safety regulations, and expertise in insulation techniques
to oversee the installation process effectively and ensure quality standards are
met.
[''insulation supervisor'', ''supervisor of installation of insulating materials'',
''supervisor of insulation materials installation'', ''supervisor of installation
of insulation'', ''solid wall insulation installation supervisor'', ''insulation
installers supervisor'', ''cavity wall insulation installation supervisor'', ''loft
insulation installation supervisor'']'
sentences:
- 'Job roles such as Food Safety Inspector, Public Health Officer, and Environmental
Health Specialist require the skill of taking action on food safety violations
to ensure compliance with health regulations and maintain public safety standards.
[''take action on food safety violations'', ''invoke action on food safety violations'',
''agree action on food safety violations'', ''pursue action on food safety violations'',
''determine action on food safety violations'']'
- 'Job roles that require skills in operating and supervising textile printing machines
include Textile Printer Operators, Printing Machine Technicians, and Textile Production
Specialists. These roles involve setting up, running, and maintaining printing
machinery to ensure high-quality textile printing.
[''tend textile printing machines'', ''activate and supervise printing machines
for textile material'', ''activate and supervise textile printing machines'',
''tend printing machines for textile'', ''tend printing machines for textile material'',
''care for textile printing machines'', ''operate printing machines for textile
material'', ''operate textile printing machines'']'
- 'The skill of installing insulation material is primarily required by job roles
such as insulation workers, HVAC technicians, and construction specialists, who
are responsible for improving energy efficiency and thermal comfort in buildings
by correctly fitting and fixing insulation materials in various structures.
[''install insulation material'', ''insulate structure'', ''fix insulation'',
''insulation material installation'', ''installation of insulation material'',
''fitting insulation'', ''insulating structure'', ''installing insulation material'',
''fixing insulation'', ''fit insulation'']'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# Job-Skill matching fine-tuned Alibaba-NLP/gte-Qwen2-7B-instruct LoRA
Top-performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task B. Use it for job title <-> skill set matching.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) <!-- at revision a8d08b36ada9cacfe34c4d6f80957772a025daf2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 3584 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen2Model
(1): Pooling({'word_embedding_dimension': 3584, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pj-mathematician/JobSkillGTE-7b-lora")
# Run inference
sentences = [
"An insulation supervisor, regardless of the specific type of insulation material or installation area, requires strong project management skills, knowledge of building codes and safety regulations, and expertise in insulation techniques to oversee the installation process effectively and ensure quality standards are met.\n['insulation supervisor', 'supervisor of installation of insulating materials', 'supervisor of insulation materials installation', 'supervisor of installation of insulation', 'solid wall insulation installation supervisor', 'insulation installers supervisor', 'cavity wall insulation installation supervisor', 'loft insulation installation supervisor']",
"The skill of installing insulation material is primarily required by job roles such as insulation workers, HVAC technicians, and construction specialists, who are responsible for improving energy efficiency and thermal comfort in buildings by correctly fitting and fixing insulation materials in various structures.\n['install insulation material', 'insulate structure', 'fix insulation', 'insulation material installation', 'installation of insulation material', 'fitting insulation', 'insulating structure', 'installing insulation material', 'fixing insulation', 'fit insulation']",
"Job roles such as Food Safety Inspector, Public Health Officer, and Environmental Health Specialist require the skill of taking action on food safety violations to ensure compliance with health regulations and maintain public safety standards.\n['take action on food safety violations', 'invoke action on food safety violations', 'agree action on food safety violations', 'pursue action on food safety violations', 'determine action on food safety violations']",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 3584]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 114,699 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 73 tokens</li><li>mean: 133.53 tokens</li><li>max: 333 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 104.56 tokens</li><li>max: 236 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>A technical director or any of its synonyms requires a strong blend of technical expertise and leadership skills, including the ability to oversee technical operations, manage teams, and ensure the successful execution of technical projects while maintaining operational efficiency and innovation.<br>['technical director', 'technical and operations director', 'head of technical', 'director of technical arts', 'head of technical department', 'technical supervisor', 'technical manager']</code> | <code>Job roles that require promoting health and safety include occupational health and safety specialists, safety managers, and public health educators, all of whom work to ensure safe and healthy environments in workplaces and communities.<br>['promote health and safety', 'promote importance of health and safety', 'promoting health and safety', 'advertise health and safety']</code> |
| <code>A technical director or any of its synonyms requires a strong blend of technical expertise and leadership skills, including the ability to oversee technical operations, manage teams, and ensure the successful execution of technical projects while maintaining operational efficiency and innovation.<br>['technical director', 'technical and operations director', 'head of technical', 'director of technical arts', 'head of technical department', 'technical supervisor', 'technical manager']</code> | <code>Job roles that require organizing rehearsals include directors, choreographers, and conductors in theater, dance, and music ensembles, who must efficiently plan and schedule practice sessions to prepare performers for a successful final performance.<br>['organise rehearsals', 'organise rehearsal', 'organize rehearsals', 'plan rehearsals', 'arrange rehearsals', 'organising rehearsals', 'schedule rehearsals']</code> |
| <code>A technical director or any of its synonyms requires a strong blend of technical expertise and leadership skills, including the ability to oversee technical operations, manage teams, and ensure the successful execution of technical projects while maintaining operational efficiency and innovation.<br>['technical director', 'technical and operations director', 'head of technical', 'director of technical arts', 'head of technical department', 'technical supervisor', 'technical manager']</code> | <code>Job roles such as Health and Safety Managers, Environmental Health Officers, and Risk Management Specialists often require the skill of negotiating health and safety issues with third parties to ensure compliance and protection standards are met across different organizations and sites.<br>['negotiate health and safety issues with third parties', 'agree with third parties on health and safety', 'negotiate issues on health and safety with third parties', 'negotiate with third parties on health and safety issues', 'negotiate health and safety matters with third parties']</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 48, 'margin_strategy': 'absolute', 'margin': 0.0}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 2
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `fsdp`: ['full_shard', 'auto_wrap']
- `fsdp_config`: {'transformer_layer_cls_to_wrap': ['Qwen2DecoderLayer'], 'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `ddp_find_unused_parameters`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: ['full_shard', 'auto_wrap']
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'transformer_layer_cls_to_wrap': ['Qwen2DecoderLayer'], 'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0156 | 1 | 21.5186 |
| 0.0312 | 2 | 21.4075 |
| 0.0469 | 3 | 21.0309 |
| 0.0625 | 4 | 20.7294 |
| 0.0781 | 5 | 20.9851 |
| 0.0938 | 6 | 21.3215 |
| 0.1094 | 7 | 19.8458 |
| 0.125 | 8 | 18.52 |
| 0.1406 | 9 | 17.622 |
| 0.1562 | 10 | 17.5794 |
| 0.1719 | 11 | 15.8784 |
| 0.1875 | 12 | 14.5842 |
| 0.2031 | 13 | 13.3324 |
| 0.2188 | 14 | 12.3194 |
| 0.2344 | 15 | 11.2523 |
| 0.25 | 16 | 10.7172 |
| 0.2656 | 17 | 10.0063 |
| 0.2812 | 18 | 9.5643 |
| 0.2969 | 19 | 9.2463 |
| 0.3125 | 20 | 8.6533 |
| 0.3281 | 21 | 8.0588 |
| 0.3438 | 22 | 8.1866 |
| 0.3594 | 23 | 7.6767 |
| 0.375 | 24 | 6.9832 |
| 0.3906 | 25 | 6.7932 |
| 0.4062 | 26 | 6.292 |
| 0.4219 | 27 | 6.1263 |
| 0.4375 | 28 | 5.8976 |
| 0.4531 | 29 | 5.7214 |
| 0.4688 | 30 | 5.6451 |
| 0.4844 | 31 | 5.6232 |
| 0.5 | 32 | 5.2984 |
| 0.5156 | 33 | 5.0322 |
| 0.5312 | 34 | 4.9435 |
| 0.5469 | 35 | 4.737 |
| 0.5625 | 36 | 4.4266 |
| 0.5781 | 37 | 4.5082 |
| 0.5938 | 38 | 4.315 |
| 0.6094 | 39 | 4.269 |
| 0.625 | 40 | 4.2473 |
| 0.6406 | 41 | 4.2054 |
| 0.6562 | 42 | 4.2172 |
| 0.6719 | 43 | 3.8311 |
| 0.6875 | 44 | 4.0803 |
| 0.7031 | 45 | 4.2809 |
| 0.7188 | 46 | 4.1843 |
| 0.7344 | 47 | 3.9913 |
| 0.75 | 48 | 3.9465 |
| 0.7656 | 49 | 4.0828 |
| 0.7812 | 50 | 4.0018 |
| 0.7969 | 51 | 3.8023 |
| 0.8125 | 52 | 3.897 |
| 0.8281 | 53 | 3.8941 |
| 0.8438 | 54 | 3.7708 |
| 0.8594 | 55 | 3.8051 |
| 0.875 | 56 | 3.7117 |
| 0.8906 | 57 | 3.8584 |
| 0.9062 | 58 | 3.6421 |
| 0.9219 | 59 | 3.7097 |
| 0.9375 | 60 | 3.6906 |
| 0.9531 | 61 | 3.7011 |
| 0.9688 | 62 | 3.744 |
| 0.9844 | 63 | 3.6493 |
| 1.0 | 64 | 3.5659 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
AGofficial/AgGPT-8.9
|
AGofficial
| 2025-06-20T19:03:25Z | 0 | 1 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2025-06-20T19:00:00Z |
---
license: mit
language:
- en
---
# AgGPT-8.9
Built on the TinyBrain-2 model, AgGPT-8.9 provides JavaScript and Python implementations of a highly efficient language model that closely mirrors the capabilities of AgGPT-9 at a significantly reduced size.
|
ArunP3799/qwen3b_baseline_math_step_8
|
ArunP3799
| 2025-06-20T19:01:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T18:59:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmbq0a5fr00smh4x50oaoaxxi_cmc53r5tu02iibfif28r3c9ib
|
BootesVoid
| 2025-06-20T18:44:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T18:44:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MIASTARR
---
# Cmbq0A5Fr00Smh4X50Oaoaxxi_Cmc53R5Tu02Iibfif28R3C9Ib
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MIASTARR` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
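# Authentication: the client reads the REPLICATE_API_TOKEN environment variable.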
input = {
"prompt": "MIASTARR",
"lora_weights": "https://huggingface.co/BootesVoid/cmbq0a5fr00smh4x50oaoaxxi_cmc53r5tu02iibfif28r3c9ib/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbq0a5fr00smh4x50oaoaxxi_cmc53r5tu02iibfif28r3c9ib', weight_name='lora.safetensors')
image = pipeline('MIASTARR').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
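If you want to weaken or strengthen the LoRA's effect, recent diffusers releases can fuse it into the base weights at a chosen scale. A minimal sketch, assuming your installed diffusers version exposes `fuse_lora` on this pipeline:
```py
# Optionally bake the LoRA into the base weights at reduced strength.
pipeline.fuse_lora(lora_scale=0.8)  # 1.0 = full LoRA effect
image = pipeline('MIASTARR').images[0]
```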
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbq0a5fr00smh4x50oaoaxxi_cmc53r5tu02iibfif28r3c9ib/discussions) to add images that show off what you've made with this LoRA.
|
BootesVoid/cmc533hs802fvbfifwttf712r_cmc545gcn02jxbfifsgcndjpr
|
BootesVoid
| 2025-06-20T18:30:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T18:30:42Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MICHELLE
---
# Cmc533Hs802Fvbfifwttf712R_Cmc545Gcn02Jxbfifsgcndjpr
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MICHELLE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MICHELLE",
"lora_weights": "https://huggingface.co/BootesVoid/cmc533hs802fvbfifwttf712r_cmc545gcn02jxbfifsgcndjpr/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc533hs802fvbfifwttf712r_cmc545gcn02jxbfifsgcndjpr', weight_name='lora.safetensors')
image = pipeline('MICHELLE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
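Generation is stochastic; to make outputs reproducible you can pass a seeded generator, which diffusers pipelines accept:
```py
import torch

# Fix the random seed for reproducible generations.
generator = torch.Generator("cuda").manual_seed(42)
image = pipeline('MICHELLE', generator=generator).images[0]
```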
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc533hs802fvbfifwttf712r_cmc545gcn02jxbfifsgcndjpr/discussions) to add images that show off what you've made with this LoRA.
|
haihp02/oioioi-last
|
haihp02
| 2025-06-20T18:25:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T18:24:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pj-mathematician/JobGTE-7b-Lora
|
pj-mathematician
| 2025-06-20T18:22:32Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:124788",
"loss:CachedGISTEmbedLoss",
"arxiv:1908.10084",
"base_model:Alibaba-NLP/gte-Qwen2-7B-instruct",
"base_model:finetune:Alibaba-NLP/gte-Qwen2-7B-instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T17:52:09Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:124788
- loss:CachedGISTEmbedLoss
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
widget:
- source_sentence: 其他机械、设备和有形货物租赁服务代表
sentences:
- 其他机械和设备租赁服务工作人员
- 电子和电信设备及零部件物流经理
- 工业主厨
- source_sentence: 公交车司机
sentences:
- 表演灯光设计师
- 乙烯基地板安装工
- 国际巴士司机
- source_sentence: online communication manager
sentences:
- trades union official
- social media manager
- budget manager
- source_sentence: Projektmanagerin
sentences:
- Projektmanager/Projektmanagerin
- Category-Manager
- Infanterist
- source_sentence: Volksvertreter
sentences:
- Parlamentarier
- Oberbürgermeister
- Konsul
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# Job-Job matching: fine-tuned Alibaba-NLP/gte-Qwen2-7B-instruct
The best-performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task A. Use it for multilingual job title matching.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) <!-- at revision a8d08b36ada9cacfe34c4d6f80957772a025daf2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 3584 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- full_en
- full_de
- full_es
- full_zh
- mix
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen2Model
(1): Pooling({'word_embedding_dimension': 3584, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
```
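For intuition, the stack above embeds a text by taking the hidden state of the final token (`pooling_mode_lasttoken`) and L2-normalizing it. A minimal sketch of the equivalent manual computation, assuming the repository's merged Qwen2 weights load with `AutoModel` and right-side padding:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("pj-mathematician/JobGTE-7b-Lora")
backbone = AutoModel.from_pretrained("pj-mathematician/JobGTE-7b-Lora")

batch = tok(["Volksvertreter", "Parlamentarier"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = backbone(**batch).last_hidden_state        # [batch, seq, 3584]
# index of the last non-padding token per sequence (right padding assumed)
last = batch["attention_mask"].sum(dim=1) - 1
emb = hidden[torch.arange(hidden.size(0)), last]        # [batch, 3584]
emb = torch.nn.functional.normalize(emb, p=2, dim=1)    # unit length, ready for cosine similarity
```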
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pj-mathematician/JobGTE-7b-Lora")
# Run inference
sentences = [
'Volksvertreter',
'Parlamentarier',
'Oberbรผrgermeister',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 3584]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
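Continuing the snippet above, ranking a pool of candidate titles against a query title takes one more `encode` call plus `similarity`; a small sketch with hypothetical titles:
```python
# Rank candidate job titles against a query by cosine similarity.
query_emb = model.encode(["head of technical department"])
candidates = ["technical director", "bus driver", "directora técnica"]
cand_emb = model.encode(candidates)
scores = model.similarity(query_emb, cand_emb)[0].tolist()
for title, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {title}")
```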
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
<details><summary>full_en</summary>
#### full_en
* Dataset: full_en
* Size: 28,880 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 4.4 tokens</li><li>max: 9 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 4.42 tokens</li><li>max: 10 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------|:-----------------------------------------|
| <code>air commodore</code> | <code>flight lieutenant</code> |
| <code>command and control officer</code> | <code>flight officer</code> |
| <code>air commodore</code> | <code>command and control officer</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
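For reference, a minimal sketch of constructing this loss: the guide shown in the JSON above is a small 384-dimensional mean-pooled BERT encoder whose exact checkpoint the card does not name, so the guide model below is hypothetical; the `margin_strategy`/`margin` values in the JSON are the defaults in recent sentence-transformers releases.

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct")
guide = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # hypothetical guide

loss = losses.CachedGISTEmbedLoss(
    model,
    guide=guide,
    temperature=0.01,
    mini_batch_size=64,  # chunk size for the gradient-cached forward passes
)
```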
<details><summary>full_de</summary>
#### full_de
* Dataset: full_de
* Size: 23,023 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 9.11 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 9.41 tokens</li><li>max: 33 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------|:-----------------------------------------------------|
| <code>Staffelkommandantin</code> | <code>Kommodore</code> |
| <code>Luftwaffenoffizierin</code> | <code>Luftwaffenoffizier/Luftwaffenoffizierin</code> |
| <code>Staffelkommandantin</code> | <code>Luftwaffenoffizierin</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_es</summary>
#### full_es
* Dataset: full_es
* Size: 20,724 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.42 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.18 tokens</li><li>max: 35 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------|:-------------------------------------------|
| <code>jefe de escuadrón</code> | <code>instructor</code> |
| <code>comandante de aeronave</code> | <code>instructor de simulador</code> |
| <code>instructor</code> | <code>oficial del Ejército del Aire</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_zh</summary>
#### full_zh
* Dataset: full_zh
* Size: 30,401 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 4.7 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.04 tokens</li><li>max: 19 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------|:---------------------|
| <code>技术总监</code> | <code>技术和运营总监</code> |
| <code>技术总监</code> | <code>技术主管</code> |
| <code>技术总监</code> | <code>技术艺术总监</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>mix</summary>
#### mix
* Dataset: mix
* Size: 21,760 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 1 tokens</li><li>mean: 4.98 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 1 tokens</li><li>mean: 7.22 tokens</li><li>max: 27 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------|:----------------------------------------------------------------|
| <code>technical manager</code> | <code>Technischer Direktor für Bühne, Film und Fernsehen</code> |
| <code>head of technical</code> | <code>directora técnica</code> |
| <code>head of technical department</code> | <code>技术艺术总监</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 2
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `fsdp`: ['full_shard', 'auto_wrap']
- `fsdp_config`: {'transformer_layer_cls_to_wrap': ['Qwen2DecoderLayer'], 'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `ddp_find_unused_parameters`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
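A minimal sketch of expressing these non-default values with the sentence-transformers trainer API (the `output_dir` is hypothetical; the FSDP options are forwarded to the underlying `transformers.TrainingArguments`):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="jobgte-7b-lora",  # hypothetical path
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=2,
    num_train_epochs=2,
    warmup_ratio=0.05,
    log_on_each_node=False,
    fp16=True,
    dataloader_num_workers=4,
    gradient_checkpointing=True,
    ddp_find_unused_parameters=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
    fsdp=["full_shard", "auto_wrap"],
    fsdp_config={"transformer_layer_cls_to_wrap": ["Qwen2DecoderLayer"]},
)
```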
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: ['full_shard', 'auto_wrap']
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'transformer_layer_cls_to_wrap': ['Qwen2DecoderLayer'], 'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0165 | 1 | 4.5178 |
| 0.0331 | 2 | 3.8803 |
| 0.0496 | 3 | 2.8882 |
| 0.0661 | 4 | 4.5362 |
| 0.0826 | 5 | 3.6406 |
| 0.0992 | 6 | 3.5285 |
| 0.1157 | 7 | 4.1398 |
| 0.1322 | 8 | 4.1543 |
| 0.1488 | 9 | 4.4487 |
| 0.1653 | 10 | 4.7408 |
| 0.1818 | 11 | 2.1874 |
| 0.1983 | 12 | 3.3176 |
| 0.2149 | 13 | 2.8286 |
| 0.2314 | 14 | 2.87 |
| 0.2479 | 15 | 2.4834 |
| 0.2645 | 16 | 2.7856 |
| 0.2810 | 17 | 3.1948 |
| 0.2975 | 18 | 2.1755 |
| 0.3140 | 19 | 1.9861 |
| 0.3306 | 20 | 2.0536 |
| 0.3471 | 21 | 2.7626 |
| 0.3636 | 22 | 1.6489 |
| 0.3802 | 23 | 2.078 |
| 0.3967 | 24 | 1.5864 |
| 0.4132 | 25 | 1.8815 |
| 0.4298 | 26 | 1.8041 |
| 0.4463 | 27 | 1.7482 |
| 0.4628 | 28 | 1.191 |
| 0.4793 | 29 | 1.4166 |
| 0.4959 | 30 | 1.3215 |
| 0.5124 | 31 | 1.2907 |
| 0.5289 | 32 | 1.1294 |
| 0.5455 | 33 | 1.1586 |
| 0.5620 | 34 | 1.551 |
| 0.5785 | 35 | 1.3628 |
| 0.5950 | 36 | 0.9899 |
| 0.6116 | 37 | 1.1846 |
| 0.6281 | 38 | 1.2721 |
| 0.6446 | 39 | 1.1261 |
| 0.6612 | 40 | 0.9535 |
| 0.6777 | 41 | 1.2086 |
| 0.6942 | 42 | 0.7472 |
| 0.7107 | 43 | 1.0324 |
| 0.7273 | 44 | 1.0397 |
| 0.7438 | 45 | 1.185 |
| 0.7603 | 46 | 1.2112 |
| 0.7769 | 47 | 0.84 |
| 0.7934 | 48 | 0.9286 |
| 0.8099 | 49 | 0.8689 |
| 0.8264 | 50 | 0.9546 |
| 0.8430 | 51 | 0.8283 |
| 0.8595 | 52 | 0.757 |
| 0.8760 | 53 | 0.9199 |
| 0.8926 | 54 | 0.7404 |
| 0.9091 | 55 | 1.0995 |
| 0.9256 | 56 | 0.8231 |
| 0.9421 | 57 | 0.6297 |
| 0.9587 | 58 | 0.9869 |
| 0.9752 | 59 | 0.9597 |
| 0.9917 | 60 | 0.7025 |
| 1.0 | 61 | 0.4866 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
pj-mathematician/JobGTE-multilingual-base-pruned
|
pj-mathematician
| 2025-06-20T18:20:17Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:86648",
"loss:MSELoss",
"arxiv:1908.10084",
"arxiv:2004.09813",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T18:18:11Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:86648
- loss:MSELoss
widget:
- source_sentence: Familienberaterin
sentences:
- electric power station operator
- venue booker & promoter
- betrieblicher Aus- und Weiterbildner/betriebliche Aus- und Weiterbildnerin
- source_sentence: high school RS teacher
sentences:
- infantryman
- Schnellbedienungsrestaurantteamleiter
- drill setup operator
- source_sentence: lighting designer
sentences:
- software support manager
- 直升机维护协调员
- bus maintenance supervisor
- source_sentence: 机场消防员
sentences:
- Flake操作员
- técnico en gestión de residuos peligrosos/técnica en gestión de residuos peligrosos
- 专门学校老师
- source_sentence: Entwicklerin für mobile Anwendungen
sentences:
- fashion design expert
- Mergers-and-Acquisitions-Analyst/Mergers-and-Acquisitions-Analystin
- commercial bid manager
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@20
- cosine_accuracy@50
- cosine_accuracy@100
- cosine_accuracy@150
- cosine_accuracy@200
- cosine_precision@1
- cosine_precision@20
- cosine_precision@50
- cosine_precision@100
- cosine_precision@150
- cosine_precision@200
- cosine_recall@1
- cosine_recall@20
- cosine_recall@50
- cosine_recall@100
- cosine_recall@150
- cosine_recall@200
- cosine_ndcg@1
- cosine_ndcg@20
- cosine_ndcg@50
- cosine_ndcg@100
- cosine_ndcg@150
- cosine_ndcg@200
- cosine_mrr@1
- cosine_mrr@20
- cosine_mrr@50
- cosine_mrr@100
- cosine_mrr@150
- cosine_mrr@200
- cosine_map@1
- cosine_map@20
- cosine_map@50
- cosine_map@100
- cosine_map@150
- cosine_map@200
- cosine_map@500
model-index:
- name: SentenceTransformer
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full en
type: full_en
metrics:
- type: cosine_accuracy@1
value: 0.6476190476190476
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9714285714285714
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9904761904761905
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9904761904761905
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9904761904761905
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9904761904761905
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6476190476190476
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.47952380952380946
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.28838095238095235
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.17304761904761906
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.12444444444444444
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.09857142857142859
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06609801577496094
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.5122224752770898
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.6835205863376973
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.7899550177449521
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8399901051245952
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.875868212220809
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6476190476190476
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6467537144833913
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.6579566361404572
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7095129047395976
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7310060454392588
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.746053293561821
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6476190476190476
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7901817137111254
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7909547501984476
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7909547501984476
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7909547501984476
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7909547501984476
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6476190476190476
name: Cosine Map@1
- type: cosine_map@20
value: 0.5025649155749793
name: Cosine Map@20
- type: cosine_map@50
value: 0.48398477448194993
name: Cosine Map@50
- type: cosine_map@100
value: 0.5117703759309522
name: Cosine Map@100
- type: cosine_map@150
value: 0.520199435224254
name: Cosine Map@150
- type: cosine_map@200
value: 0.5249113393002316
name: Cosine Map@200
- type: cosine_map@500
value: 0.5304170344184883
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full es
type: full_es
metrics:
- type: cosine_accuracy@1
value: 0.11891891891891893
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 1.0
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 1.0
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 1.0
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 1.0
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 1.0
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.11891891891891893
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5267567567567567
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.3437837837837838
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.21897297297297297
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.1658018018018018
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.1332972972972973
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.0035840147528632613
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.35407760203362965
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.5097999383006715
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.6076073817878247
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.6705429838138021
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.7125464731776301
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.11891891891891893
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.5708144272431339
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.535516963498245
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.558980163264909
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.5900024611410689
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.609478782549869
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.11891891891891893
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5531531531531532
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.5531531531531532
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.5531531531531532
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.5531531531531532
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.5531531531531532
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.11891891891891893
name: Cosine Map@1
- type: cosine_map@20
value: 0.4379349002801489
name: Cosine Map@20
- type: cosine_map@50
value: 0.3739269627118989
name: Cosine Map@50
- type: cosine_map@100
value: 0.37629843599877466
name: Cosine Map@100
- type: cosine_map@150
value: 0.3891828650842837
name: Cosine Map@150
- type: cosine_map@200
value: 0.39584338663408436
name: Cosine Map@200
- type: cosine_map@500
value: 0.4062909401616274
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full de
type: full_de
metrics:
- type: cosine_accuracy@1
value: 0.2955665024630542
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9704433497536946
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9753694581280788
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9901477832512315
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9901477832512315
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9901477832512315
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.2955665024630542
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.42906403940886706
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.29802955665024633
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.19433497536945815
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.14824302134646963
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.1197783251231527
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.01108543831680986
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.26675038089672504
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.40921566733257536
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.5097664540706716
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.5728593162394238
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.6120176690658915
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.2955665024630542
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.46962753993631184
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.444898497416845
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.466960324034805
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.49816218513136795
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.5165485300965951
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.2955665024630542
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5046767633988724
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.50477528556636
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.5049589761635289
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.5049589761635289
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.5049589761635289
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.2955665024630542
name: Cosine Map@1
- type: cosine_map@20
value: 0.33658821160388247
name: Cosine Map@20
- type: cosine_map@50
value: 0.2853400586620685
name: Cosine Map@50
- type: cosine_map@100
value: 0.2817732307206079
name: Cosine Map@100
- type: cosine_map@150
value: 0.2931317333364438
name: Cosine Map@150
- type: cosine_map@200
value: 0.2988160532231927
name: Cosine Map@200
- type: cosine_map@500
value: 0.31093362375086947
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full zh
type: full_zh
metrics:
- type: cosine_accuracy@1
value: 0.6601941747572816
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.970873786407767
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9902912621359223
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9902912621359223
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9902912621359223
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9902912621359223
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6601941747572816
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.44805825242718444
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.27126213592233006
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.16650485436893206
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.1211003236245955
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.09529126213592234
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06611246215014785
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.48409390608352504
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.6568473638827299
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.7685416895166794
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8277686060133904
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.8616979590623105
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6601941747572816
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6231250904534316
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.6383496204608501
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.6917257705456975
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7167434657424917
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7303448958665071
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6601941747572816
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8015776699029126
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8020876238109248
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8020876238109248
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8020876238109248
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8020876238109248
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6601941747572816
name: Cosine Map@1
- type: cosine_map@20
value: 0.4750205237443607
name: Cosine Map@20
- type: cosine_map@50
value: 0.45785161483741715
name: Cosine Map@50
- type: cosine_map@100
value: 0.4848085275553208
name: Cosine Map@100
- type: cosine_map@150
value: 0.4937216396074153
name: Cosine Map@150
- type: cosine_map@200
value: 0.49777622471594557
name: Cosine Map@200
- type: cosine_map@500
value: 0.5039795405740248
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix es
type: mix_es
metrics:
- type: cosine_accuracy@1
value: 0.6297451898075923
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9105564222568903
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9495579823192928
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9729589183567343
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.983359334373375
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9901196047841914
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6297451898075923
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.11167446697867915
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.04850754030161208
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.02535101404056163
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.0172300225342347
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.0130811232449298
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.24340068840848872
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.8288215338137336
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.8986566129311838
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9398509273704282
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9576876408389668
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9695267810712429
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6297451898075923
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.7010427232190379
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.7200844211181043
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7290848607488584
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7325985285606116
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7347463892077523
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6297451898075923
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7036709577939534
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7049808414398148
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7053260954286938
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7054145837924506
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7054541569954363
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6297451898075923
name: Cosine Map@1
- type: cosine_map@20
value: 0.6194189058349782
name: Cosine Map@20
- type: cosine_map@50
value: 0.6244340507841626
name: Cosine Map@50
- type: cosine_map@100
value: 0.6256943736433496
name: Cosine Map@100
- type: cosine_map@150
value: 0.6260195205413376
name: Cosine Map@150
- type: cosine_map@200
value: 0.6261650797332174
name: Cosine Map@200
- type: cosine_map@500
value: 0.6263452093477304
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix de
type: mix_de
metrics:
- type: cosine_accuracy@1
value: 0.5564222568902756
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.8866354654186167
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9381175247009881
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9594383775351014
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9708788351534061
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9776391055642226
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.5564222568902756
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.109464378575143
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.048060322412896525
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.025273010920436823
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.017313225862367825
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.013143525741029644
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.20931703934824059
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.7988992893049055
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.8741029641185647
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9173426937077482
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9424076963078523
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.953631478592477
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.5564222568902756
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6541310877479573
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.674790854916742
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.6844997445798996
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.6894214573457343
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.6914881284159038
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.5564222568902756
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.6476945170199107
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.6493649946597936
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.6496801333421218
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.6497778366579644
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.6498156890114056
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.5564222568902756
name: Cosine Map@1
- type: cosine_map@20
value: 0.5648326970643027
name: Cosine Map@20
- type: cosine_map@50
value: 0.57003456255067
name: Cosine Map@50
- type: cosine_map@100
value: 0.5714370828517599
name: Cosine Map@100
- type: cosine_map@150
value: 0.5719002990233493
name: Cosine Map@150
- type: cosine_map@200
value: 0.5720497397197026
name: Cosine Map@200
- type: cosine_map@500
value: 0.5723109788233504
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix zh
type: mix_zh
metrics:
- type: cosine_accuracy@1
value: 0.6085594989561587
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9592901878914405
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9791231732776617
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9874739039665971
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9911273486430062
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9937369519832986
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6085594989561587
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.12656576200417535
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.05518789144050106
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.028747390396659713
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.019425887265135697
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.014705114822546978
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.2043804056069192
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.8346468336812805
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.9095772442588727
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9475643702157271
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9609168406402228
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9697807933194154
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6085594989561587
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6853247290079303
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.7066940880968873
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.715400790265437
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7180808450243259
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7197629642909036
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6085594989561587
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7236528792595264
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7243308740364213
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7244524590415827
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7244814620971008
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7244960285685315
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6085594989561587
name: Cosine Map@1
- type: cosine_map@20
value: 0.5652211952239553
name: Cosine Map@20
- type: cosine_map@50
value: 0.5716374350069462
name: Cosine Map@50
- type: cosine_map@100
value: 0.5730756815932735
name: Cosine Map@100
- type: cosine_map@150
value: 0.5733543252173214
name: Cosine Map@150
- type: cosine_map@200
value: 0.5734860037813889
name: Cosine Map@200
- type: cosine_map@500
value: 0.5736416699680624
name: Cosine Map@500
---
# Job-to-job matching: Alibaba-NLP/gte-multilingual-base (pruned)
Top-performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task A. Use it for multilingual job title matching.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pj-mathematician/JobGTE-multilingual-base-pruned")
# Run inference
sentences = [
    'Entwicklerin für mobile Anwendungen',
'Mergers-and-Acquisitions-Analyst/Mergers-and-Acquisitions-Analystin',
'fashion design expert',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
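To match a job title against a pool of candidate titles, you can rank the pool with `sentence_transformers.util.semantic_search`. A minimal sketch; the candidate titles below are illustrative, not taken from the training data:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pj-mathematician/JobGTE-multilingual-base-pruned")

# Candidate job titles to rank (illustrative only)
job_titles = [
    "mobile application developer",
    "mergers and acquisitions analyst",
    "fashion design expert",
]
title_embeddings = model.encode(job_titles, convert_to_tensor=True)

# The query can be in another language; the model is multilingual
query_embedding = model.encode("Entwicklerin für mobile Anwendungen", convert_to_tensor=True)

# Rank all candidates by cosine similarity and print the best matches
hits = util.semantic_search(query_embedding, title_embeddings, top_k=3)[0]
for hit in hits:
    print(job_titles[hit["corpus_id"]], f"{hit['score']:.3f}")
```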
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `full_en`, `full_es`, `full_de`, `full_zh`, `mix_es`, `mix_de` and `mix_zh`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | full_en | full_es | full_de | full_zh | mix_es | mix_de | mix_zh |
|:---------------------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_accuracy@20 | 0.9714 | 1.0 | 0.9704 | 0.9709 | 0.9106 | 0.8866 | 0.9593 |
| cosine_accuracy@50 | 0.9905 | 1.0 | 0.9754 | 0.9903 | 0.9496 | 0.9381 | 0.9791 |
| cosine_accuracy@100 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.973 | 0.9594 | 0.9875 |
| cosine_accuracy@150 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9834 | 0.9709 | 0.9911 |
| cosine_accuracy@200 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9901 | 0.9776 | 0.9937 |
| cosine_precision@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_precision@20 | 0.4795 | 0.5268 | 0.4291 | 0.4481 | 0.1117 | 0.1095 | 0.1266 |
| cosine_precision@50 | 0.2884 | 0.3438 | 0.298 | 0.2713 | 0.0485 | 0.0481 | 0.0552 |
| cosine_precision@100 | 0.173 | 0.219 | 0.1943 | 0.1665 | 0.0254 | 0.0253 | 0.0287 |
| cosine_precision@150 | 0.1244 | 0.1658 | 0.1482 | 0.1211 | 0.0172 | 0.0173 | 0.0194 |
| cosine_precision@200 | 0.0986 | 0.1333 | 0.1198 | 0.0953 | 0.0131 | 0.0131 | 0.0147 |
| cosine_recall@1 | 0.0661 | 0.0036 | 0.0111 | 0.0661 | 0.2434 | 0.2093 | 0.2044 |
| cosine_recall@20 | 0.5122 | 0.3541 | 0.2668 | 0.4841 | 0.8288 | 0.7989 | 0.8346 |
| cosine_recall@50 | 0.6835 | 0.5098 | 0.4092 | 0.6568 | 0.8987 | 0.8741 | 0.9096 |
| cosine_recall@100 | 0.79 | 0.6076 | 0.5098 | 0.7685 | 0.9399 | 0.9173 | 0.9476 |
| cosine_recall@150 | 0.84 | 0.6705 | 0.5729 | 0.8278 | 0.9577 | 0.9424 | 0.9609 |
| cosine_recall@200 | 0.8759 | 0.7125 | 0.612 | 0.8617 | 0.9695 | 0.9536 | 0.9698 |
| cosine_ndcg@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_ndcg@20 | 0.6468 | 0.5708 | 0.4696 | 0.6231 | 0.701 | 0.6541 | 0.6853 |
| cosine_ndcg@50 | 0.658 | 0.5355 | 0.4449 | 0.6383 | 0.7201 | 0.6748 | 0.7067 |
| cosine_ndcg@100 | 0.7095 | 0.559 | 0.467 | 0.6917 | 0.7291 | 0.6845 | 0.7154 |
| cosine_ndcg@150 | 0.731 | 0.59 | 0.4982 | 0.7167 | 0.7326 | 0.6894 | 0.7181 |
| **cosine_ndcg@200** | **0.7461** | **0.6095** | **0.5165** | **0.7303** | **0.7347** | **0.6915** | **0.7198** |
| cosine_mrr@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_mrr@20 | 0.7902 | 0.5532 | 0.5047 | 0.8016 | 0.7037 | 0.6477 | 0.7237 |
| cosine_mrr@50 | 0.791 | 0.5532 | 0.5048 | 0.8021 | 0.705 | 0.6494 | 0.7243 |
| cosine_mrr@100 | 0.791 | 0.5532 | 0.505 | 0.8021 | 0.7053 | 0.6497 | 0.7245 |
| cosine_mrr@150 | 0.791 | 0.5532 | 0.505 | 0.8021 | 0.7054 | 0.6498 | 0.7245 |
| cosine_mrr@200 | 0.791 | 0.5532 | 0.505 | 0.8021 | 0.7055 | 0.6498 | 0.7245 |
| cosine_map@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_map@20 | 0.5026 | 0.4379 | 0.3366 | 0.475 | 0.6194 | 0.5648 | 0.5652 |
| cosine_map@50 | 0.484 | 0.3739 | 0.2853 | 0.4579 | 0.6244 | 0.57 | 0.5716 |
| cosine_map@100 | 0.5118 | 0.3763 | 0.2818 | 0.4848 | 0.6257 | 0.5714 | 0.5731 |
| cosine_map@150 | 0.5202 | 0.3892 | 0.2931 | 0.4937 | 0.626 | 0.5719 | 0.5734 |
| cosine_map@200 | 0.5249 | 0.3958 | 0.2988 | 0.4978 | 0.6262 | 0.572 | 0.5735 |
| cosine_map@500 | 0.5304 | 0.4063 | 0.3109 | 0.504 | 0.6263 | 0.5723 | 0.5736 |
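The scores above were computed with the `InformationRetrievalEvaluator` linked above. A minimal sketch of running the same evaluation on your own data; the queries, corpus, and relevance judgments here are toy placeholders, not the TalentCLEF evaluation sets:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("pj-mathematician/JobGTE-multilingual-base-pruned")

# Toy placeholders: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "software developer"}
corpus = {"d1": "software engineer", "d2": "pastry chef", "d3": "software programmer"}
relevant_docs = {"q1": {"d1", "d3"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    accuracy_at_k=[1, 3],
    precision_recall_at_k=[1, 3],
    ndcg_at_k=[1, 3],
    mrr_at_k=[1, 3],
    map_at_k=[1, 3],
    name="toy",
)
print(evaluator(model))  # keys like 'toy_cosine_ndcg@3'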
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 86,648 training samples
* Columns: <code>sentence</code> and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence | label |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------|
| type | string | list |
| details | <ul><li>min: 2 tokens</li><li>mean: 8.25 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> |
* Samples:
| sentence | label |
|:-----------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------|
| <code></code> | <code>[-0.07171934843063354, 0.03595816716551781, -0.029780959710478783, 0.006593302357941866, 0.040611181408166885, ...]</code> |
| <code>airport environment officer</code> | <code>[-0.022075481712818146, 0.02999737113714218, -0.02189866080880165, 0.016531817615032196, 0.012234307825565338, ...]</code> |
| <code>Flake操作员</code> | <code>[-0.04815564677119255, 0.023524893447756767, -0.01583661139011383, 0.042527906596660614, 0.03815540298819542, ...]</code> |
* Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
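Each sample pairs a sentence with a precomputed 768-dimensional target vector, so `MSELoss` trains the model to regress onto those vectors, which is the standard embedding-distillation setup. A minimal sketch, with random stand-in targets in place of real teacher embeddings:
```python
import random

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MSELoss

# The base model uses custom modeling code ("NewModel"), hence trust_remote_code
student = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# Stand-in targets: in the real setup, each label is a teacher embedding
train_dataset = Dataset.from_dict({
    "sentence": ["airport environment officer", "technical director"],
    "label": [[random.uniform(-1, 1) for _ in range(768)] for _ in range(2)],
})

loss = MSELoss(student)
trainer = SentenceTransformerTrainer(model=student, train_dataset=train_dataset, loss=loss)
trainer.train()
```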
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `learning_rate`: 0.0001
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `ddp_find_unused_parameters`: True
- `batch_sampler`: no_duplicates
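The non-default values above map one-to-one onto `SentenceTransformerTrainingArguments`; a sketch (the `output_dir` is an assumption, it is not stated in this card):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="jobgte-distill",  # assumed name, not given in the card
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=2,
    learning_rate=1e-4,
    num_train_epochs=5,
    warmup_ratio=0.05,
    log_on_each_node=False,
    fp16=True,
    dataloader_num_workers=4,
    ddp_find_unused_parameters=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```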
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | full_en_cosine_ndcg@200 | full_es_cosine_ndcg@200 | full_de_cosine_ndcg@200 | full_zh_cosine_ndcg@200 | mix_es_cosine_ndcg@200 | mix_de_cosine_ndcg@200 | mix_zh_cosine_ndcg@200 |
|:------:|:----:|:-------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|
| -1 | -1 | - | 0.5348 | 0.4311 | 0.3678 | 0.5333 | 0.2580 | 0.1924 | 0.2871 |
| 0.0030 | 1 | 0.0017 | - | - | - | - | - | - | - |
| 0.2959 | 100 | 0.001 | - | - | - | - | - | - | - |
| 0.5917 | 200 | 0.0005 | 0.6702 | 0.5287 | 0.4566 | 0.6809 | 0.5864 | 0.5302 | 0.4739 |
| 0.8876 | 300 | 0.0004 | - | - | - | - | - | - | - |
| 1.1834 | 400 | 0.0004 | 0.7057 | 0.5643 | 0.4790 | 0.7033 | 0.6604 | 0.6055 | 0.6003 |
| 1.4793 | 500 | 0.0004 | - | - | - | - | - | - | - |
| 1.7751 | 600 | 0.0003 | 0.7184 | 0.5783 | 0.4910 | 0.7127 | 0.6927 | 0.6416 | 0.6485 |
| 2.0710 | 700 | 0.0003 | - | - | - | - | - | - | - |
| 2.3669 | 800 | 0.0003 | 0.7307 | 0.5938 | 0.5023 | 0.7233 | 0.7125 | 0.6639 | 0.6847 |
| 2.6627 | 900 | 0.0003 | - | - | - | - | - | - | - |
| 2.9586 | 1000 | 0.0003 | 0.7371 | 0.6002 | 0.5085 | 0.7228 | 0.7222 | 0.6761 | 0.6998 |
| 3.2544 | 1100 | 0.0003 | - | - | - | - | - | - | - |
| 3.5503 | 1200 | 0.0003 | 0.7402 | 0.6059 | 0.5109 | 0.7279 | 0.7285 | 0.6841 | 0.7120 |
| 3.8462 | 1300 | 0.0003 | - | - | - | - | - | - | - |
| 4.1420 | 1400 | 0.0003 | 0.7449 | 0.6083 | 0.5154 | 0.7294 | 0.7333 | 0.6894 | 0.7176 |
| 4.4379 | 1500 | 0.0003 | - | - | - | - | - | - | - |
| 4.7337 | 1600 | 0.0003 | 0.7461 | 0.6095 | 0.5165 | 0.7303 | 0.7347 | 0.6915 | 0.7198 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MSELoss
```bibtex
@inproceedings{reimers-2020-multilingual-sentence-bert,
title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2004.09813",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
pj-mathematician/JobBGE-m3
|
pj-mathematician
| 2025-06-20T18:17:41Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:124788",
"loss:GISTEmbedLoss",
"arxiv:1908.10084",
"arxiv:2402.16829",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T18:04:27Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:124788
- loss:GISTEmbedLoss
base_model: BAAI/bge-m3
widget:
- source_sentence: 其他机械、设备和有形货物租赁服务代表
sentences:
  - 其他机械和设备租赁服务工作人员
  - 电子和电信设备及零部件物流经理
  - 工业主厨
- source_sentence: 公交车司机
sentences:
  - 表演灯光设计师
  - 乙烯基地板安装工
  - 国际巴士司机
- source_sentence: online communication manager
sentences:
- trades union official
- social media manager
- budget manager
- source_sentence: Projektmanagerin
sentences:
- Projektmanager/Projektmanagerin
- Category-Manager
- Infanterist
- source_sentence: Volksvertreter
sentences:
- Parlamentarier
  - Oberbürgermeister
- Konsul
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@20
- cosine_accuracy@50
- cosine_accuracy@100
- cosine_accuracy@150
- cosine_accuracy@200
- cosine_precision@1
- cosine_precision@20
- cosine_precision@50
- cosine_precision@100
- cosine_precision@150
- cosine_precision@200
- cosine_recall@1
- cosine_recall@20
- cosine_recall@50
- cosine_recall@100
- cosine_recall@150
- cosine_recall@200
- cosine_ndcg@1
- cosine_ndcg@20
- cosine_ndcg@50
- cosine_ndcg@100
- cosine_ndcg@150
- cosine_ndcg@200
- cosine_mrr@1
- cosine_mrr@20
- cosine_mrr@50
- cosine_mrr@100
- cosine_mrr@150
- cosine_mrr@200
- cosine_map@1
- cosine_map@20
- cosine_map@50
- cosine_map@100
- cosine_map@150
- cosine_map@200
- cosine_map@500
model-index:
- name: SentenceTransformer based on BAAI/bge-m3
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full en
type: full_en
metrics:
- type: cosine_accuracy@1
value: 0.6476190476190476
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9904761904761905
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9904761904761905
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9904761904761905
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9904761904761905
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9904761904761905
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6476190476190476
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5061904761904762
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.30647619047619057
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.1858095238095238
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.13250793650793652
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.10247619047619047
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06690172806447445
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.5391510592522911
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.7199711948587544
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.8253770621157605
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8719997123512196
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9006382758109558
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6476190476190476
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6822066814233797
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.6975329548006446
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7519637922809941
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7724946802449859
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7827357067553371
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6476190476190476
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7999999999999998
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7999999999999998
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7999999999999998
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7999999999999998
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7999999999999998
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6476190476190476
name: Cosine Map@1
- type: cosine_map@20
value: 0.5391784054866918
name: Cosine Map@20
- type: cosine_map@50
value: 0.5258287715484311
name: Cosine Map@50
- type: cosine_map@100
value: 0.5580109313638075
name: Cosine Map@100
- type: cosine_map@150
value: 0.5665715227835532
name: Cosine Map@150
- type: cosine_map@200
value: 0.569529009182472
name: Cosine Map@200
- type: cosine_map@500
value: 0.5743595458034346
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full es
type: full_es
metrics:
- type: cosine_accuracy@1
value: 0.11351351351351352
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 1.0
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 1.0
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 1.0
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 1.0
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 1.0
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.11351351351351352
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5667567567567567
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.3902702702702703
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.25254054054054054
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.19005405405405407
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.1507837837837838
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.0035155918996302815
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.37958552840441906
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.5635730197468752
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.672698242387141
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.7360036980055802
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.7697561816436992
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.11351351351351352
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6136401766234348
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.5908459924766464
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.6168063266629416
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.6488575731321932
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.665316090087272
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.11351351351351352
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5536036036036036
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.5536036036036036
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.5536036036036036
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.5536036036036036
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.5536036036036036
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.11351351351351352
name: Cosine Map@1
- type: cosine_map@20
value: 0.48095830339282386
name: Cosine Map@20
- type: cosine_map@50
value: 0.43038606337879926
name: Cosine Map@50
- type: cosine_map@100
value: 0.4335284717646407
name: Cosine Map@100
- type: cosine_map@150
value: 0.44851036812148526
name: Cosine Map@150
- type: cosine_map@200
value: 0.4550924585301385
name: Cosine Map@200
- type: cosine_map@500
value: 0.4677023132311536
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full de
type: full_de
metrics:
- type: cosine_accuracy@1
value: 0.2955665024630542
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9852216748768473
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9901477832512315
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9901477832512315
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9901477832512315
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9901477832512315
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.2955665024630542
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5403940886699506
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.38275862068965516
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.2503448275862069
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.187816091954023
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.15027093596059116
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.01108543831680986
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.3432684453555553
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.5339871522541048
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.6498636280219438
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.7100921836539074
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.7513351913056898
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.2955665024630542
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.5647628262992046
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.5522057083055792
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.5796033728499559
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.6111851705889818
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.6309313367878393
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.2955665024630542
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5164425017655958
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.516559790060224
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.516559790060224
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.516559790060224
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.516559790060224
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.2955665024630542
name: Cosine Map@1
- type: cosine_map@20
value: 0.4221760589983628
name: Cosine Map@20
- type: cosine_map@50
value: 0.37913413777890953
name: Cosine Map@50
- type: cosine_map@100
value: 0.3829298798486122
name: Cosine Map@100
- type: cosine_map@150
value: 0.39811624371681004
name: Cosine Map@150
- type: cosine_map@200
value: 0.40559711033541546
name: Cosine Map@200
- type: cosine_map@500
value: 0.4188841643667456
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full zh
type: full_zh
metrics:
- type: cosine_accuracy@1
value: 0.6796116504854369
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9902912621359223
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9902912621359223
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9902912621359223
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9902912621359223
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9902912621359223
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6796116504854369
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.470873786407767
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.28038834951456315
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.17320388349514557
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.12394822006472495
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.09766990291262137
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06427555485009323
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.5119331913488326
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.6726577129232287
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.788021792964523
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8328962977521837
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.8687397875786594
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6796116504854369
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6515292076635256
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.6598571989751485
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7157338182976709
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7357126940189814
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7500853808896866
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6796116504854369
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8216828478964402
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8216828478964402
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8216828478964402
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8216828478964402
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8216828478964402
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6796116504854369
name: Cosine Map@1
- type: cosine_map@20
value: 0.5012149610968577
name: Cosine Map@20
- type: cosine_map@50
value: 0.48128476255481567
name: Cosine Map@50
- type: cosine_map@100
value: 0.5105374388587102
name: Cosine Map@100
- type: cosine_map@150
value: 0.518381647971727
name: Cosine Map@150
- type: cosine_map@200
value: 0.5228375783347256
name: Cosine Map@200
- type: cosine_map@500
value: 0.52765377953199
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix es
type: mix_es
metrics:
- type: cosine_accuracy@1
value: 0.7394695787831513
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9635985439417577
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.982839313572543
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9927197087883516
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9947997919916797
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9963598543941757
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.7394695787831513
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.12488299531981278
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.05174206968278733
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.02629225169006761
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.017635638758883684
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.013281331253250133
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.28537503404898107
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.9225949037961519
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.9548015253943491
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.970532154619518
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9766337320159473
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9810747096550528
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.7394695787831513
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.8119072371250002
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.8208055075822587
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.8242798548838444
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.8254601712767063
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.826231823086538
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.7394695787831513
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8059183822863336
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8065662458714291
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8067209669800003
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8067371899834064
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8067455244059942
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.7394695787831513
name: Cosine Map@1
- type: cosine_map@20
value: 0.7439811728319751
name: Cosine Map@20
- type: cosine_map@50
value: 0.7464542457655368
name: Cosine Map@50
- type: cosine_map@100
value: 0.7469341154545359
name: Cosine Map@100
- type: cosine_map@150
value: 0.7470471963812441
name: Cosine Map@150
- type: cosine_map@200
value: 0.7471010455519603
name: Cosine Map@200
- type: cosine_map@500
value: 0.7471920688836787
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix de
type: mix_de
metrics:
- type: cosine_accuracy@1
value: 0.6926677067082684
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9641185647425897
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.983879355174207
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9921996879875195
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9932397295891836
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9942797711908476
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6926677067082684
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.12797711908476336
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.053281331253250144
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.027051482059282376
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.018110591090310275
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.013619344773790953
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.2603830819899463
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.928479805858901
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.9650286011440458
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9796325186340786
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9837060149072628
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9862194487779511
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6926677067082684
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.7967328692326251
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.8068705787791701
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.810158579950017
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.8109641919896999
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.8114360342473703
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6926677067082684
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7766838069642311
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7773792960985305
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7775026273925645
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7775124036000293
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7775182983569378
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6926677067082684
name: Cosine Map@1
- type: cosine_map@20
value: 0.7210301157895639
name: Cosine Map@20
- type: cosine_map@50
value: 0.7237555751939095
name: Cosine Map@50
- type: cosine_map@100
value: 0.7242426468613273
name: Cosine Map@100
- type: cosine_map@150
value: 0.7243265313145111
name: Cosine Map@150
- type: cosine_map@200
value: 0.7243628241480395
name: Cosine Map@200
- type: cosine_map@500
value: 0.7244144669299598
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix zh
type: mix_zh
metrics:
- type: cosine_accuracy@1
value: 0.17888715548621945
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 1.0
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 1.0
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 1.0
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 1.0
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 1.0
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.17888715548621945
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.15439417576703063
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.0617576703068123
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.03087883515340615
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.020585890102270757
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.015439417576703075
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.05768764083896689
name: Cosine Recall@1
- type: cosine_recall@20
value: 1.0
name: Cosine Recall@20
- type: cosine_recall@50
value: 1.0
name: Cosine Recall@50
- type: cosine_recall@100
value: 1.0
name: Cosine Recall@100
- type: cosine_recall@150
value: 1.0
name: Cosine Recall@150
- type: cosine_recall@200
value: 1.0
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.17888715548621945
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.5443156532634228
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.5443156532634228
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.5443156532634228
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.5443156532634228
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.5443156532634228
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.17888715548621945
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.4002437442375043
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.4002437442375043
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.4002437442375043
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.4002437442375043
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.4002437442375043
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.17888715548621945
name: Cosine Map@1
- type: cosine_map@20
value: 0.32718437256695937
name: Cosine Map@20
- type: cosine_map@50
value: 0.32718437256695937
name: Cosine Map@50
- type: cosine_map@100
value: 0.32718437256695937
name: Cosine Map@100
- type: cosine_map@150
value: 0.32718437256695937
name: Cosine Map@150
- type: cosine_map@200
value: 0.32718437256695937
name: Cosine Map@200
- type: cosine_map@500
value: 0.32718437256695937
name: Cosine Map@500
---
# Job-to-job matching: fine-tuned BAAI/bge-m3
Top-performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task A. Use it for multilingual job title matching.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- full_en
- full_de
- full_es
- full_zh
- mix
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pj-mathematician/JobBGE-m3")
# Run inference
sentences = [
'Volksvertreter',
'Parlamentarier',
    'Oberbürgermeister',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `full_en`, `full_es`, `full_de`, `full_zh`, `mix_es`, `mix_de` and `mix_zh`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | full_en | full_es | full_de | full_zh | mix_es | mix_de | mix_zh |
|:---------------------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6476 | 0.1135 | 0.2956 | 0.6796 | 0.7395 | 0.6927 | 0.1789 |
| cosine_accuracy@20 | 0.9905 | 1.0 | 0.9852 | 0.9903 | 0.9636 | 0.9641 | 1.0 |
| cosine_accuracy@50 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9828 | 0.9839 | 1.0 |
| cosine_accuracy@100 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9927 | 0.9922 | 1.0 |
| cosine_accuracy@150 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9948 | 0.9932 | 1.0 |
| cosine_accuracy@200 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9964 | 0.9943 | 1.0 |
| cosine_precision@1 | 0.6476 | 0.1135 | 0.2956 | 0.6796 | 0.7395 | 0.6927 | 0.1789 |
| cosine_precision@20 | 0.5062 | 0.5668 | 0.5404 | 0.4709 | 0.1249 | 0.128 | 0.1544 |
| cosine_precision@50 | 0.3065 | 0.3903 | 0.3828 | 0.2804 | 0.0517 | 0.0533 | 0.0618 |
| cosine_precision@100 | 0.1858 | 0.2525 | 0.2503 | 0.1732 | 0.0263 | 0.0271 | 0.0309 |
| cosine_precision@150 | 0.1325 | 0.1901 | 0.1878 | 0.1239 | 0.0176 | 0.0181 | 0.0206 |
| cosine_precision@200 | 0.1025 | 0.1508 | 0.1503 | 0.0977 | 0.0133 | 0.0136 | 0.0154 |
| cosine_recall@1 | 0.0669 | 0.0035 | 0.0111 | 0.0643 | 0.2854 | 0.2604 | 0.0577 |
| cosine_recall@20 | 0.5392 | 0.3796 | 0.3433 | 0.5119 | 0.9226 | 0.9285 | 1.0 |
| cosine_recall@50 | 0.72 | 0.5636 | 0.534 | 0.6727 | 0.9548 | 0.965 | 1.0 |
| cosine_recall@100 | 0.8254 | 0.6727 | 0.6499 | 0.788 | 0.9705 | 0.9796 | 1.0 |
| cosine_recall@150 | 0.872 | 0.736 | 0.7101 | 0.8329 | 0.9766 | 0.9837 | 1.0 |
| cosine_recall@200 | 0.9006 | 0.7698 | 0.7513 | 0.8687 | 0.9811 | 0.9862 | 1.0 |
| cosine_ndcg@1 | 0.6476 | 0.1135 | 0.2956 | 0.6796 | 0.7395 | 0.6927 | 0.1789 |
| cosine_ndcg@20 | 0.6822 | 0.6136 | 0.5648 | 0.6515 | 0.8119 | 0.7967 | 0.5443 |
| cosine_ndcg@50 | 0.6975 | 0.5908 | 0.5522 | 0.6599 | 0.8208 | 0.8069 | 0.5443 |
| cosine_ndcg@100 | 0.752 | 0.6168 | 0.5796 | 0.7157 | 0.8243 | 0.8102 | 0.5443 |
| cosine_ndcg@150 | 0.7725 | 0.6489 | 0.6112 | 0.7357 | 0.8255 | 0.811 | 0.5443 |
| **cosine_ndcg@200** | **0.7827** | **0.6653** | **0.6309** | **0.7501** | **0.8262** | **0.8114** | **0.5443** |
| cosine_mrr@1 | 0.6476 | 0.1135 | 0.2956 | 0.6796 | 0.7395 | 0.6927 | 0.1789 |
| cosine_mrr@20 | 0.8 | 0.5536 | 0.5164 | 0.8217 | 0.8059 | 0.7767 | 0.4002 |
| cosine_mrr@50 | 0.8 | 0.5536 | 0.5166 | 0.8217 | 0.8066 | 0.7774 | 0.4002 |
| cosine_mrr@100 | 0.8 | 0.5536 | 0.5166 | 0.8217 | 0.8067 | 0.7775 | 0.4002 |
| cosine_mrr@150 | 0.8 | 0.5536 | 0.5166 | 0.8217 | 0.8067 | 0.7775 | 0.4002 |
| cosine_mrr@200 | 0.8 | 0.5536 | 0.5166 | 0.8217 | 0.8067 | 0.7775 | 0.4002 |
| cosine_map@1 | 0.6476 | 0.1135 | 0.2956 | 0.6796 | 0.7395 | 0.6927 | 0.1789 |
| cosine_map@20 | 0.5392 | 0.481 | 0.4222 | 0.5012 | 0.744 | 0.721 | 0.3272 |
| cosine_map@50 | 0.5258 | 0.4304 | 0.3791 | 0.4813 | 0.7465 | 0.7238 | 0.3272 |
| cosine_map@100 | 0.558 | 0.4335 | 0.3829 | 0.5105 | 0.7469 | 0.7242 | 0.3272 |
| cosine_map@150 | 0.5666 | 0.4485 | 0.3981 | 0.5184 | 0.747 | 0.7243 | 0.3272 |
| cosine_map@200 | 0.5695 | 0.4551 | 0.4056 | 0.5228 | 0.7471 | 0.7244 | 0.3272 |
| cosine_map@500 | 0.5744 | 0.4677 | 0.4189 | 0.5277 | 0.7472 | 0.7244 | 0.3272 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
<details><summary>full_en</summary>
#### full_en
* Dataset: full_en
* Size: 28,880 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 5.68 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.76 tokens</li><li>max: 12 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------|:-----------------------------------------|
| <code>air commodore</code> | <code>flight lieutenant</code> |
| <code>command and control officer</code> | <code>flight officer</code> |
| <code>air commodore</code> | <code>command and control officer</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_de</summary>
#### full_de
* Dataset: full_de
* Size: 23,023 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 7.99 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.19 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------|:-----------------------------------------------------|
| <code>Staffelkommandantin</code> | <code>Kommodore</code> |
| <code>Luftwaffenoffizierin</code> | <code>Luftwaffenoffizier/Luftwaffenoffizierin</code> |
| <code>Staffelkommandantin</code> | <code>Luftwaffenoffizierin</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_es</summary>
#### full_es
* Dataset: full_es
* Size: 20,724 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.13 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.84 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------|:-------------------------------------------|
| <code>jefe de escuadrón</code> | <code>instructor</code> |
| <code>comandante de aeronave</code> | <code>instructor de simulador</code> |
| <code>instructor</code> | <code>oficial del Ejército del Aire</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_zh</summary>
#### full_zh
* Dataset: full_zh
* Size: 30,401 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 7.15 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 7.46 tokens</li><li>max: 21 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------|:---------------------|
| <code>技术总监</code> | <code>技术和运营总监</code> |
| <code>技术总监</code> | <code>技术主管</code> |
| <code>技术总监</code> | <code>技术艺术总监</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>mix</summary>
#### mix
* Dataset: mix
* Size: 21,760 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 6.71 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 7.69 tokens</li><li>max: 19 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------|:----------------------------------------------------------------|
| <code>technical manager</code> | <code>Technischer Direktor für Bühne, Film und Fernsehen</code> |
| <code>head of technical</code> | <code>directora técnica</code> |
| <code>head of technical department</code> | <code>技术艺术总监</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
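All five datasets are trained with `GISTEmbedLoss`: the small guide model shown in the loss parameters above vetoes in-batch negatives that it judges closer to the anchor than the positive, which reduces false negatives. A minimal sketch; the guide checkpoint here is a stand-in with a comparable architecture (384-dimensional, mean pooling), not necessarily the one actually used:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import GISTEmbedLoss

model = SentenceTransformer("BAAI/bge-m3")
guide = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")  # stand-in guide

train_dataset = Dataset.from_dict({
    "anchor": ["air commodore", "technical manager"],
    "positive": ["flight lieutenant", "head of technical"],
})

# In-batch negatives that the guide scores above the positive are masked out
loss = GISTEmbedLoss(model, guide, temperature=0.01)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```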
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `ddp_find_unused_parameters`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
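For reference, the non-default values listed above map onto `SentenceTransformerTrainingArguments` roughly as in the following sketch; the `output_dir` is illustrative, everything else mirrors the list above.

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",                        # illustrative; not stated in the card
    num_train_epochs=5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=2,
    warmup_ratio=0.05,
    fp16=True,
    eval_strategy="steps",
    dataloader_num_workers=4,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate texts within a batch
)
```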
### Training Logs
| Epoch | Step | Training Loss | full_en_cosine_ndcg@200 | full_es_cosine_ndcg@200 | full_de_cosine_ndcg@200 | full_zh_cosine_ndcg@200 | mix_es_cosine_ndcg@200 | mix_de_cosine_ndcg@200 | mix_zh_cosine_ndcg@200 |
|:------:|:----:|:-------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|
| -1 | -1 | - | 0.6856 | 0.5207 | 0.4655 | 0.6713 | 0.6224 | 0.5604 | 0.5548 |
| 0.0010 | 1 | 5.3354 | - | - | - | - | - | - | - |
| 0.1027 | 100 | 2.665 | - | - | - | - | - | - | - |
| 0.2053 | 200 | 1.3375 | 0.7691 | 0.6530 | 0.6298 | 0.7517 | 0.7513 | 0.7393 | 0.5490 |
| 0.3080 | 300 | 1.1101 | - | - | - | - | - | - | - |
| 0.4107 | 400 | 0.9453 | 0.7802 | 0.6643 | 0.6246 | 0.7531 | 0.7610 | 0.7441 | 0.5493 |
| 0.5133 | 500 | 0.9202 | - | - | - | - | - | - | - |
| 0.6160 | 600 | 0.7887 | 0.7741 | 0.6549 | 0.6171 | 0.7542 | 0.7672 | 0.7540 | 0.5482 |
| 0.7187 | 700 | 0.7604 | - | - | - | - | - | - | - |
| 0.8214 | 800 | 0.7219 | 0.7846 | 0.6674 | 0.6244 | 0.7648 | 0.7741 | 0.7592 | 0.5497 |
| 0.9240 | 900 | 0.6965 | - | - | - | - | - | - | - |
| 1.0267 | 1000 | 0.6253 | 0.7646 | 0.6391 | 0.6122 | 0.7503 | 0.7825 | 0.7704 | 0.5463 |
| 1.1294 | 1100 | 0.4737 | - | - | - | - | - | - | - |
| 1.2320 | 1200 | 0.5055 | 0.7758 | 0.6582 | 0.6178 | 0.7514 | 0.7857 | 0.7764 | 0.5501 |
| 1.3347 | 1300 | 0.5042 | - | - | - | - | - | - | - |
| 1.4374 | 1400 | 0.5073 | 0.7613 | 0.6578 | 0.6178 | 0.7505 | 0.7829 | 0.7762 | 0.5452 |
| 1.5400 | 1500 | 0.4975 | - | - | - | - | - | - | - |
| 1.6427 | 1600 | 0.5242 | 0.7736 | 0.6673 | 0.6279 | 0.7555 | 0.7940 | 0.7859 | 0.5477 |
| 1.7454 | 1700 | 0.4713 | - | - | - | - | - | - | - |
| 1.8480 | 1800 | 0.4814 | 0.7845 | 0.6733 | 0.6285 | 0.7642 | 0.7992 | 0.7904 | 0.5449 |
| 1.9507 | 1900 | 0.4526 | - | - | - | - | - | - | - |
| 2.0544 | 2000 | 0.36 | 0.7790 | 0.6639 | 0.6252 | 0.7500 | 0.8032 | 0.7888 | 0.5499 |
| 2.1571 | 2100 | 0.3744 | - | - | - | - | - | - | - |
| 2.2598 | 2200 | 0.3031 | 0.7787 | 0.6614 | 0.6190 | 0.7537 | 0.7993 | 0.7811 | 0.5476 |
| 2.3624 | 2300 | 0.3638 | - | - | - | - | - | - | - |
| 2.4651 | 2400 | 0.358 | 0.7798 | 0.6615 | 0.6258 | 0.7497 | 0.8018 | 0.7828 | 0.5481 |
| 2.5678 | 2500 | 0.3247 | - | - | - | - | - | - | - |
| 2.6704 | 2600 | 0.3247 | 0.7854 | 0.6663 | 0.6248 | 0.7560 | 0.8081 | 0.7835 | 0.5452 |
| 2.7731 | 2700 | 0.3263 | - | - | - | - | - | - | - |
| 2.8758 | 2800 | 0.3212 | 0.7761 | 0.6681 | 0.6250 | 0.7517 | 0.8121 | 0.7927 | 0.5458 |
| 2.9784 | 2900 | 0.3291 | - | - | - | - | - | - | - |
| 3.0821 | 3000 | 0.2816 | 0.7727 | 0.6604 | 0.6163 | 0.7370 | 0.8163 | 0.7985 | 0.5473 |
| 3.1848 | 3100 | 0.2698 | - | - | - | - | - | - | - |
| 3.2875 | 3200 | 0.2657 | 0.7757 | 0.6615 | 0.6247 | 0.7417 | 0.8117 | 0.8004 | 0.5436 |
| 3.3901 | 3300 | 0.2724 | - | - | - | - | - | - | - |
| 3.4928 | 3400 | 0.2584 | 0.7850 | 0.6583 | 0.6320 | 0.7458 | 0.8120 | 0.7980 | 0.5454 |
| 3.5955 | 3500 | 0.2573 | - | - | - | - | - | - | - |
| 3.6982 | 3600 | 0.2744 | 0.7796 | 0.6552 | 0.6237 | 0.7409 | 0.8193 | 0.8018 | 0.5466 |
| 3.8008 | 3700 | 0.3054 | - | - | - | - | - | - | - |
| 3.9035 | 3800 | 0.2727 | 0.7825 | 0.6642 | 0.6293 | 0.7504 | 0.8213 | 0.8058 | 0.5463 |
| 4.0062 | 3900 | 0.2353 | - | - | - | - | - | - | - |
| 4.1088 | 4000 | 0.2353 | 0.7747 | 0.6628 | 0.6263 | 0.7384 | 0.8239 | 0.8065 | 0.5447 |
| 4.2115 | 4100 | 0.2385 | - | - | - | - | - | - | - |
| 4.3142 | 4200 | 0.231 | 0.7811 | 0.6608 | 0.6254 | 0.7463 | 0.8226 | 0.8051 | 0.5442 |
| 4.4168 | 4300 | 0.2115 | - | - | - | - | - | - | - |
| 4.5195 | 4400 | 0.2151 | 0.7815 | 0.6634 | 0.6301 | 0.7489 | 0.8251 | 0.8101 | 0.5450 |
| 4.6222 | 4500 | 0.2496 | - | - | - | - | - | - | - |
| 4.7248 | 4600 | 0.2146 | 0.7814 | 0.6654 | 0.6294 | 0.7523 | 0.8258 | 0.8104 | 0.5436 |
| 4.8275 | 4700 | 0.2535 | - | - | - | - | - | - | - |
| 4.9302 | 4800 | 0.2058 | 0.7827 | 0.6653 | 0.6309 | 0.7501 | 0.8262 | 0.8114 | 0.5443 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### GISTEmbedLoss
```bibtex
@misc{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
year={2024},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
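Since GISTEmbedLoss is cited above as the training objective, a minimal usage sketch follows; the model and guide checkpoints are illustrative assumptions, as the card does not state which guide model was used.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import GISTEmbedLoss

# Both checkpoint names are placeholders, not taken from this card.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
guide = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# The guide model is used to filter in-batch negatives that are too close to the positive.
loss = GISTEmbedLoss(model=model, guide=guide)
```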
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-28-2025-06-20
|
morturr
| 2025-06-20T18:10:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T18:10:09Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-28-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-28-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
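For orientation, a minimal TRL sketch consistent with these hyperparameters might look as follows; the dataset and LoRA settings are illustrative assumptions, not taken from this card.

```python
from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Toy stand-in; the actual training data is not documented here.
train_dataset = Dataset.from_dict({"text": ["example training text"]})

# LoRA settings are assumptions for illustration only.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

args = SFTConfig(
    output_dir="sft-out",            # illustrative
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,   # 16 * 4 = total train batch size 64
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=28,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()
```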
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
New-videos-Sajal-Malik-18-Viral-video/Original.Full.Clip.Sajal.Malik.Viral.Video.Leaks.Official
|
New-videos-Sajal-Malik-18-Viral-video
| 2025-06-20T18:06:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T18:05:31Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
phospho-app/luuuuuuukee-gr00t-place_tape_wood-do6cx
|
phospho-app
| 2025-06-20T18:03:32Z | 0 | 0 | null |
[
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-20T17:47:26Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [luuuuuuukee/place_tape_wood](https://huggingface.co/datasets/luuuuuuukee/place_tape_wood)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
**Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
uvegesistvan/roberta_large_pl_25_sh
|
uvegesistvan
| 2025-06-20T18:02:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T17:18:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
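The card leaves usage undocumented; based on the repository tags (`xlm-roberta`, `text-classification`), a minimal sketch might look like this. The repo id comes from the metadata above; the input text and label semantics are assumptions.

```python
from transformers import pipeline

# Label meanings are not documented in this card.
clf = pipeline("text-classification", model="uvegesistvan/roberta_large_pl_25_sh")
print(clf("Przykładowe zdanie do klasyfikacji."))  # sample input; the domain is assumed
```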
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alexsheiko/tinybert-email-classifier-onnx
|
alexsheiko
| 2025-06-20T17:45:34Z | 0 | 0 |
transformers
|
[
"transformers",
"onnx",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T15:16:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
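Usage is likewise undocumented here; given the repository tags (`onnx`, `bert`, `text-classification`), loading the exported model with Optimum might look like the following sketch. The repo id comes from the metadata above; the example input and label semantics are assumptions.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

repo_id = "alexsheiko/tinybert-email-classifier-onnx"
model = ORTModelForSequenceClassification.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("Your invoice is attached."))  # sample input; label meanings undocumented
```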
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andrewsamce/q-FrozenLake-v1-4x4-noSlippery
|
andrewsamce
| 2025-06-20T17:40:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T17:39:33Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym` in older course notebooks

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the saved Q-table from the Hub.
model = load_from_hub(repo_id="andrewsamce/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
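A quick greedy rollout with the loaded table might look like the sketch below. It continues from the snippet above and assumes the pickled dict stores the table under a `"qtable"` key, as in the Hugging Face Deep RL course, and that the gymnasium step API is in use.

```python
import numpy as np

state, info = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key is an assumption
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```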
|
sergioalves/fff89468-d9a6-40fd-a58e-7f7c645aa3df
|
sergioalves
| 2025-06-20T17:30:47Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:quantized:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-20T17:10:07Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: fff89468-d9a6-40fd-a58e-7f7c645aa3df
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for fff89468-d9a6-40fd-a58e-7f7c645aa3df
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergioalves/fff89468-d9a6-40fd-a58e-7f7c645aa3df", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/tjktu30r)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
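For orientation, a minimal DPO fine-tuning sketch with TRL might look like this; the preference data is a toy stand-in, as the actual training set is not documented in this card.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Toy preference pairs; the real data is not documented here.
train_dataset = Dataset.from_dict({
    "prompt": ["What is 2 + 2?"],
    "chosen": ["2 + 2 = 4."],
    "rejected": ["2 + 2 = 5."],
})

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")  # base model from the card
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

args = DPOConfig(output_dir="dpo-out", per_device_train_batch_size=1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```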
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
morturr/Llama-2-7b-hf-PAIR_amazon_headlines-COMB-headlines-comb-1-seed-28-2025-06-20
|
morturr
| 2025-06-20T17:30:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T17:29:57Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_headlines-COMB-headlines-comb-1-seed-28-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_headlines-COMB-headlines-comb-1-seed-28-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
codigo-ai/cdgo2.5-coder-32b-instruct-adapter
|
codigo-ai
| 2025-06-20T17:27:26Z | 15 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-19T14:54:21Z |
---
base_model: qwen/qwen2.5-coder-32b-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
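Since this repository is a PEFT adapter, a minimal loading sketch might look like the following; the base model id is taken from the card metadata, and `device_map="auto"` is an assumption to fit the 32B base model across available devices.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "qwen/qwen2.5-coder-32b-instruct"  # base model id as given in the metadata
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "codigo-ai/cdgo2.5-coder-32b-instruct-adapter")
```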
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
Masbir/c291986c-5e95-4f1b-aaf6-7c114a48b157
|
Masbir
| 2025-06-20T17:21:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T14:46:21Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Udayxyz/80b
|
Udayxyz
| 2025-06-20T17:20:47Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"hi",
"dataset:open-r1/Mixture-of-Thoughts",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T17:17:57Z |
---
license: apache-2.0
datasets:
- open-r1/Mixture-of-Thoughts
language:
- hi
library_name: adapter-transformers
---
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-18-2025-06-20
|
morturr
| 2025-06-20T17:17:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T17:17:16Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-18-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-18-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Official-mezzo-fun-18-Go-Viral-videos-Link/FULL.VIDEO.mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-18-Go-Viral-videos-Link
| 2025-06-20T17:16:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T17:15:51Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|