modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
phxdev/qwq-32b-lora-creed | phxdev | 2025-06-22T22:01:22Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "generated_from_trainer", "dataset:phxdev/creed", "base_model:Qwen/QwQ-32B-Preview", "base_model:adapter:Qwen/QwQ-32B-Preview", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-06-22T22:00:22Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/QwQ-32B-Preview
tags:
- generated_from_trainer
datasets:
- phxdev/creed
model-index:
- name: outputs/heisenberg-crystal
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
adapter: lora
base_model: Qwen/QwQ-32B-Preview
trust_remote_code: true
bf16: true
dataset_processes: 64
datasets:
  - path: phxdev/creed
    type: completion
    field: text
    trust_remote_code: false
streaming: true
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
learning_rate: 0.001
lisa_layers_attribute: model.layers
lisa_enabled: true
lisa_layers_fraction: 0.25
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: true
lora_alpha: 128
lora_dropout: 0.15
lora_r: 64
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
lora_fan_in_fan_out: false
modules_to_save:
- embed_tokens
- lm_head
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine_with_min_lr
lr_scheduler_kwargs:
  min_lr: 0.00001
max_prompt_len: 1024
mean_resizing_embeddings: false
micro_batch_size: 1
num_epochs: 3.0
optimizer: adamw_torch
# optim_args:
# weight_decay: 0.05
# betas: [0.9, 0.95]
# eps: 1.0e-8
output_dir: ./outputs/heisenberg-crystal
pretrain_multipack_attn: true
pretrain_multipack_buffer_size: 20000
qlora_sharded_model_loading: false
ray_num_workers: 1
resources_per_worker:
  GPU: 1
resume_from_checkpoint: null
sample_packing: false
sample_packing_bin_size: 200
sample_packing_group_size: 100000
sample_packing_seq_len_multiplier: 1.0
save_only_model: true
save_safetensors: true
save_strategy: steps
save_steps: 100
save_total_limit: 3
eval_strategy: steps
eval_steps: 100
metric_for_best_model: loss
greater_is_better: false
sequence_len: 512
shuffle_merged_datasets: true
skip_prepare_dataset: false
strict: false
train_on_inputs: false
neftune_noise_alpha: 5.0
model_config:
  rope_scaling:
    type: linear
    factor: 1.5
dataloader_prefetch_factor: 4
dataloader_num_workers: 8
dataloader_pin_memory: true
dataloader_persistent_workers: true
max_grad_norm: 1.0
adam_beta2_schedule: cosine
torch_compile: true
torch_compile_backend: inductor
trl:
  log_completions: true
  ref_model_mixup_alpha: 0.9
  ref_model_sync_steps: 64
  sync_ref_model: false
  use_vllm: false
  vllm_device: auto
  vllm_dtype: auto
  vllm_gpu_memory_utilization: 0.9
use_ray: false
val_set_size: 0.05
warmup_steps: 100
warmup_ratio: 0.0
weight_decay: 0.05
flash_attention: true
flash_attn_cross_entropy: true
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: false
ddp_backend: nccl
ddp_broadcast_buffers: false
ddp_find_unused_parameters: false
tf32: true
bf16_full_eval: false
fp16: false
# unfrozen_parameters:
# - lm_head.*
# - embed_tokens.*
# - norm.*
xformers_attention: false
s2_attention: false
sdp_attention: false
pad_to_sequence_len: true
peft_use_dora: false
peft_lora_modules_to_save: null
special_tokens:
  pad_token: <|endoftext|>
deepspeed: null
fsdp: null
fsdp_config: null
# wandb_project: heisenberg-qwen
# wandb_entity: null
# wandb_name: blue-crystal-run
# wandb_log_model: checkpoint
hub_model_id: null
hub_strategy: null
report_to: []
logging_strategy: steps
logging_steps: 10
logging_first_step: true
```
</details><br>
# outputs/heisenberg-crystal
This model is a fine-tuned version of [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) on the phxdev/creed dataset.
It achieves the following results on the evaluation set:
- Loss: nan
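The card does not include a usage snippet; below is a minimal, illustrative sketch of loading this adapter with 🤗 PEFT. It assumes the adapter weights are published in this repo (`phxdev/qwq-32b-lora-creed`) and mirrors the 8-bit loading used during training; it is not an official snippet from the trainer.
```python
# Illustrative sketch only: assumes the LoRA adapter lives at
# phxdev/qwq-32b-lora-creed and loads the base model in 8-bit,
# matching load_in_8bit: true in the axolotl config above.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "Qwen/QwQ-32B-Preview"
adapter_id = "phxdev/qwq-32b-lora-creed"  # assumed adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```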
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
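For intuition, the `cosine_with_min_lr` schedule above roughly follows the shape sketched below (an assumed form: linear warmup to the peak LR, then cosine decay to `min_lr`; the library's exact implementation may differ in details).
```python
import math

# Assumed shape of cosine_with_min_lr: linear warmup, then cosine decay
# from the peak LR (0.001) to min_lr (0.00001), per the config above.
def lr_at(step: int, total_steps: int,
          peak_lr: float = 1e-3, min_lr: float = 1e-5,
          warmup_steps: int = 100) -> float:
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + (peak_lr - min_lr) * 0.5 * (1.0 + math.cos(math.pi * progress))
```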
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0013 | 1 | nan |
| 7.8286 | 0.1259 | 100 | nan |
| 7.2486 | 0.2519 | 200 | nan |
| 7.2601 | 0.3778 | 300 | nan |
| 8.2142 | 0.5038 | 400 | nan |
| 7.1902 | 0.6297 | 500 | nan |
| 6.3799 | 0.7557 | 600 | nan |
| 6.7115 | 0.8816 | 700 | nan |
| 6.0414 | 1.0076 | 800 | nan |
| 6.428 | 1.1335 | 900 | nan |
| 6.3167 | 1.2594 | 1000 | nan |
| 6.0359 | 1.3854 | 1100 | nan |
| 6.3701 | 1.5113 | 1200 | nan |
| 6.9225 | 1.6373 | 1300 | nan |
| 6.5807 | 1.7632 | 1400 | nan |
| 6.8649 | 1.8892 | 1500 | nan |
| 6.1397 | 2.0151 | 1600 | nan |
| 5.7675 | 2.1411 | 1700 | nan |
| 6.2605 | 2.2670 | 1800 | nan |
| 5.8788 | 2.3929 | 1900 | nan |
| 6.0279 | 2.5189 | 2000 | nan |
| 6.3911 | 2.6448 | 2100 | nan |
| 6.0412 | 2.7708 | 2200 | nan |
| 6.0862 | 2.8967 | 2300 | nan |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
BootesVoid/cmc867toe0bpjbfifm9mbcut5_cmc86ab430bprbfifqgezyfnw | BootesVoid | 2025-06-22T21:50:18Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-22T21:50:16Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: GIRLIE
---
# Cmc867Toe0Bpjbfifm9Mbcut5_Cmc86Ab430Bprbfifqgezyfnw
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `GIRLIE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "GIRLIE",
"lora_weights": "https://huggingface.co/BootesVoid/cmc867toe0bpjbfifm9mbcut5_cmc86ab430bprbfifqgezyfnw/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc867toe0bpjbfifm9mbcut5_cmc86ab430bprbfifqgezyfnw', weight_name='lora.safetensors')
image = pipeline('GIRLIE').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc867toe0bpjbfifm9mbcut5_cmc86ab430bprbfifqgezyfnw/discussions) to add images that show off what you’ve made with this LoRA.
|
Ascrewdriver/Reinforce-CartPole-v1 | Ascrewdriver | 2025-06-22T21:46:14Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2025-06-22T21:46:10Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 92.40 +/- 49.22
      name: mean_reward
      verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
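For readers who want a starting point before the course, a minimal REINFORCE loop for CartPole-v1 might look like the sketch below. This is illustrative only: the network size, learning rate, and episode count are assumptions, not this model's exact training code.
```python
import gymnasium as gym
import torch
import torch.nn as nn

# Minimal REINFORCE sketch for CartPole-v1 (hyperparameters are illustrative).
env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = torch.distributions.Categorical(
            logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # Compute discounted returns backwards, then normalize them.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```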
|
tester7281/gemma-text-to-sql | tester7281 | 2025-06-22T21:36:10Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us"] | null | 2025-06-22T18:49:52Z |
---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-text-to-sql
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-text-to-sql
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tester7281/gemma-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
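The exact training script is not included in this card; a minimal TRL SFT setup along these lines would reproduce the general shape (the toy dataset below is a stand-in, since the real text-to-SQL data is unspecified).
```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Toy stand-in for the (unspecified) text-to-SQL training data.
train_dataset = Dataset.from_dict({
    "text": [
        "Question: List all users older than 30.\n"
        "SQL: SELECT * FROM users WHERE age > 30;"
    ]
})

trainer = SFTTrainer(
    model="google/gemma-3-1b-pt",  # base model named in this card
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="gemma-text-to-sql"),
)
trainer.train()
```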
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
donoway/Llama-3.2-1B | donoway | 2025-06-22T21:17:39Z | 4 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-21T09:06:57Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Llama-3.2-1B
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3898
- Model Preparation Time: 0.0023
- Move Accuracy: 0.5572
- Token Accuracy: 0.8550
- Accuracy: 0.5572
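The card does not document a usage snippet or the task's prompt format; below is a minimal, assumed loading sketch with `transformers` (the input string is purely illustrative).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "donoway/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The prompt format for the move-prediction task is undocumented;
# this input is a placeholder.
inputs = tokenizer("your input here", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```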
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Move Accuracy | Token Accuracy | Accuracy |
|:-------------:|:------:|:------:|:---------------:|:----------------------:|:-------------:|:--------------:|:--------:|
| No log | 0 | 0 | 6.4123 | 0.0023 | 0.0 | 0.1049 | 0.0 |
| 1.6882 | 0.0098 | 100 | 1.7583 | 0.0023 | 0.0100 | 0.3139 | 0.0100 |
| 1.7193 | 0.0196 | 200 | 1.6696 | 0.0023 | 0.0136 | 0.3473 | 0.0136 |
| 1.5794 | 0.0295 | 300 | 1.5956 | 0.0023 | 0.0306 | 0.3861 | 0.0306 |
| 1.4833 | 0.0393 | 400 | 1.5333 | 0.0023 | 0.0395 | 0.4086 | 0.0395 |
| 1.4839 | 0.0491 | 500 | 1.4434 | 0.0023 | 0.0483 | 0.4387 | 0.0483 |
| 1.286 | 0.0589 | 600 | 1.2984 | 0.0023 | 0.0710 | 0.5016 | 0.0710 |
| 1.2039 | 0.0687 | 700 | 1.1798 | 0.0023 | 0.1028 | 0.5538 | 0.1028 |
| 1.0962 | 0.0785 | 800 | 1.0688 | 0.0023 | 0.1207 | 0.5937 | 0.1207 |
| 1.0003 | 0.0884 | 900 | 0.9921 | 0.0023 | 0.1392 | 0.6202 | 0.1392 |
| 0.965 | 0.0982 | 1000 | 0.9782 | 0.0023 | 0.1452 | 0.6222 | 0.1452 |
| 0.8709 | 0.1080 | 1100 | 0.8884 | 0.0023 | 0.1663 | 0.6491 | 0.1663 |
| 0.8293 | 0.1178 | 1200 | 0.8923 | 0.0023 | 0.1724 | 0.6524 | 0.1724 |
| 0.7923 | 0.1276 | 1300 | 0.8226 | 0.0023 | 0.1945 | 0.6774 | 0.1945 |
| 0.8444 | 0.1374 | 1400 | 0.8361 | 0.0023 | 0.2052 | 0.6779 | 0.2052 |
| 0.7472 | 0.1473 | 1500 | 0.8023 | 0.0023 | 0.2084 | 0.6840 | 0.2084 |
| 0.7612 | 0.1571 | 1600 | 0.7811 | 0.0023 | 0.2206 | 0.6937 | 0.2206 |
| 0.7399 | 0.1669 | 1700 | 0.7642 | 0.0023 | 0.2324 | 0.6982 | 0.2324 |
| 0.7385 | 0.1767 | 1800 | 0.7452 | 0.0023 | 0.2371 | 0.7050 | 0.2371 |
| 0.6688 | 0.1865 | 1900 | 0.7385 | 0.0023 | 0.2422 | 0.7060 | 0.2422 |
| 0.6871 | 0.1963 | 2000 | 0.7321 | 0.0023 | 0.2435 | 0.7090 | 0.2435 |
| 0.7335 | 0.2062 | 2100 | 0.7179 | 0.0023 | 0.2482 | 0.7122 | 0.2482 |
| 0.7213 | 0.2160 | 2200 | 0.7171 | 0.0023 | 0.2520 | 0.7139 | 0.2520 |
| 0.7299 | 0.2258 | 2300 | 0.6906 | 0.0023 | 0.2707 | 0.7263 | 0.2707 |
| 0.6466 | 0.2356 | 2400 | 0.6920 | 0.0023 | 0.2691 | 0.7271 | 0.2691 |
| 0.6514 | 0.2454 | 2500 | 0.6973 | 0.0023 | 0.2585 | 0.7232 | 0.2585 |
| 0.683 | 0.2553 | 2600 | 0.6835 | 0.0023 | 0.2732 | 0.7285 | 0.2732 |
| 0.714 | 0.2651 | 2700 | 0.6792 | 0.0023 | 0.2861 | 0.7313 | 0.2861 |
| 0.6368 | 0.2749 | 2800 | 0.6680 | 0.0023 | 0.2790 | 0.7311 | 0.2790 |
| 0.6398 | 0.2847 | 2900 | 0.6639 | 0.0023 | 0.2939 | 0.7346 | 0.2939 |
| 0.6598 | 0.2945 | 3000 | 0.6545 | 0.0023 | 0.3059 | 0.7417 | 0.3059 |
| 0.6705 | 0.3043 | 3100 | 0.6472 | 0.0023 | 0.3060 | 0.7439 | 0.3060 |
| 0.5687 | 0.3142 | 3200 | 0.6368 | 0.0023 | 0.3171 | 0.7492 | 0.3171 |
| 0.6159 | 0.3240 | 3300 | 0.6243 | 0.0023 | 0.3265 | 0.7534 | 0.3265 |
| 0.5698 | 0.3338 | 3400 | 0.6327 | 0.0023 | 0.3169 | 0.7495 | 0.3169 |
| 0.5646 | 0.3436 | 3500 | 0.6327 | 0.0023 | 0.3193 | 0.7494 | 0.3193 |
| 0.6098 | 0.3534 | 3600 | 0.6223 | 0.0023 | 0.3223 | 0.7530 | 0.3223 |
| 0.5574 | 0.3632 | 3700 | 0.6218 | 0.0023 | 0.3265 | 0.7531 | 0.3265 |
| 0.6546 | 0.3731 | 3800 | 0.6136 | 0.0023 | 0.3294 | 0.7559 | 0.3294 |
| 0.6296 | 0.3829 | 3900 | 0.6065 | 0.0023 | 0.3390 | 0.7605 | 0.3390 |
| 0.6466 | 0.3927 | 4000 | 0.6136 | 0.0023 | 0.3307 | 0.7573 | 0.3307 |
| 0.594 | 0.4025 | 4100 | 0.6056 | 0.0023 | 0.3423 | 0.7616 | 0.3423 |
| 0.5002 | 0.4123 | 4200 | 0.6029 | 0.0023 | 0.3402 | 0.7625 | 0.3402 |
| 0.5706 | 0.4221 | 4300 | 0.5917 | 0.0023 | 0.3470 | 0.7659 | 0.3470 |
| 0.5753 | 0.4320 | 4400 | 0.5878 | 0.0023 | 0.3449 | 0.7654 | 0.3449 |
| 0.6108 | 0.4418 | 4500 | 0.5899 | 0.0023 | 0.3507 | 0.7666 | 0.3507 |
| 0.5526 | 0.4516 | 4600 | 0.5772 | 0.0023 | 0.3587 | 0.7719 | 0.3587 |
| 0.5957 | 0.4614 | 4700 | 0.5767 | 0.0023 | 0.3642 | 0.7727 | 0.3642 |
| 0.5756 | 0.4712 | 4800 | 0.5710 | 0.0023 | 0.3652 | 0.7742 | 0.3652 |
| 0.5903 | 0.4811 | 4900 | 0.5761 | 0.0023 | 0.3658 | 0.7731 | 0.3658 |
| 0.5375 | 0.4909 | 5000 | 0.5671 | 0.0023 | 0.3743 | 0.7764 | 0.3743 |
| 0.6024 | 0.5007 | 5100 | 0.5678 | 0.0023 | 0.3694 | 0.7758 | 0.3694 |
| 0.5417 | 0.5105 | 5200 | 0.5595 | 0.0023 | 0.3781 | 0.7789 | 0.3781 |
| 0.5733 | 0.5203 | 5300 | 0.5673 | 0.0023 | 0.3681 | 0.7750 | 0.3681 |
| 0.5407 | 0.5301 | 5400 | 0.5538 | 0.0023 | 0.3780 | 0.7815 | 0.3780 |
| 0.5645 | 0.5400 | 5500 | 0.5630 | 0.0023 | 0.3762 | 0.7794 | 0.3762 |
| 0.5485 | 0.5498 | 5600 | 0.5528 | 0.0023 | 0.3852 | 0.7815 | 0.3852 |
| 0.5043 | 0.5596 | 5700 | 0.5532 | 0.0023 | 0.3761 | 0.7804 | 0.3761 |
| 0.5522 | 0.5694 | 5800 | 0.5487 | 0.0023 | 0.3814 | 0.7832 | 0.3814 |
| 0.5744 | 0.5792 | 5900 | 0.5527 | 0.0023 | 0.3813 | 0.7810 | 0.3813 |
| 0.5292 | 0.5890 | 6000 | 0.5458 | 0.0023 | 0.3901 | 0.7847 | 0.3901 |
| 0.5403 | 0.5989 | 6100 | 0.5461 | 0.0023 | 0.3822 | 0.7832 | 0.3822 |
| 0.5253 | 0.6087 | 6200 | 0.5409 | 0.0023 | 0.3912 | 0.7856 | 0.3912 |
| 0.5112 | 0.6185 | 6300 | 0.5377 | 0.0023 | 0.3967 | 0.7868 | 0.3967 |
| 0.5087 | 0.6283 | 6400 | 0.5423 | 0.0023 | 0.3955 | 0.7879 | 0.3955 |
| 0.5494 | 0.6381 | 6500 | 0.5355 | 0.0023 | 0.3923 | 0.7874 | 0.3923 |
| 0.6042 | 0.6479 | 6600 | 0.5336 | 0.0023 | 0.3994 | 0.7892 | 0.3994 |
| 0.4849 | 0.6578 | 6700 | 0.5329 | 0.0023 | 0.4000 | 0.7905 | 0.4000 |
| 0.5629 | 0.6676 | 6800 | 0.5292 | 0.0023 | 0.3983 | 0.7904 | 0.3983 |
| 0.4431 | 0.6774 | 6900 | 0.5268 | 0.0023 | 0.3991 | 0.7923 | 0.3991 |
| 0.4772 | 0.6872 | 7000 | 0.5274 | 0.0023 | 0.4036 | 0.7928 | 0.4036 |
| 0.5483 | 0.6970 | 7100 | 0.5241 | 0.0023 | 0.4067 | 0.7944 | 0.4067 |
| 0.4727 | 0.7069 | 7200 | 0.5207 | 0.0023 | 0.4116 | 0.7958 | 0.4116 |
| 0.4363 | 0.7167 | 7300 | 0.5154 | 0.0023 | 0.4114 | 0.7965 | 0.4114 |
| 0.46 | 0.7265 | 7400 | 0.5201 | 0.0023 | 0.4106 | 0.7952 | 0.4106 |
| 0.4544 | 0.7363 | 7500 | 0.5066 | 0.0023 | 0.4208 | 0.8001 | 0.4208 |
| 0.5235 | 0.7461 | 7600 | 0.5108 | 0.0023 | 0.4168 | 0.7989 | 0.4168 |
| 0.6194 | 0.7559 | 7700 | 0.5148 | 0.0023 | 0.4191 | 0.7981 | 0.4191 |
| 0.5224 | 0.7658 | 7800 | 0.5077 | 0.0023 | 0.4201 | 0.7998 | 0.4201 |
| 0.4931 | 0.7756 | 7900 | 0.5040 | 0.0023 | 0.4212 | 0.8009 | 0.4212 |
| 0.4841 | 0.7854 | 8000 | 0.5127 | 0.0023 | 0.4192 | 0.7982 | 0.4192 |
| 0.4331 | 0.7952 | 8100 | 0.5077 | 0.0023 | 0.4238 | 0.8012 | 0.4238 |
| 0.4911 | 0.8050 | 8200 | 0.4979 | 0.0023 | 0.4319 | 0.8037 | 0.4319 |
| 0.4334 | 0.8148 | 8300 | 0.5032 | 0.0023 | 0.4233 | 0.8035 | 0.4233 |
| 0.5439 | 0.8247 | 8400 | 0.4955 | 0.0023 | 0.4310 | 0.8044 | 0.4310 |
| 0.4618 | 0.8345 | 8500 | 0.4965 | 0.0023 | 0.4312 | 0.8042 | 0.4312 |
| 0.5084 | 0.8443 | 8600 | 0.4995 | 0.0023 | 0.4232 | 0.8031 | 0.4232 |
| 0.5049 | 0.8541 | 8700 | 0.4929 | 0.0023 | 0.4319 | 0.8052 | 0.4319 |
| 0.5132 | 0.8639 | 8800 | 0.4930 | 0.0023 | 0.4307 | 0.8054 | 0.4307 |
| 0.502 | 0.8737 | 8900 | 0.4916 | 0.0023 | 0.4303 | 0.8062 | 0.4303 |
| 0.4834 | 0.8836 | 9000 | 0.4912 | 0.0023 | 0.4327 | 0.8080 | 0.4327 |
| 0.4745 | 0.8934 | 9100 | 0.4883 | 0.0023 | 0.4372 | 0.8091 | 0.4372 |
| 0.4711 | 0.9032 | 9200 | 0.4894 | 0.0023 | 0.4336 | 0.8071 | 0.4336 |
| 0.4841 | 0.9130 | 9300 | 0.4887 | 0.0023 | 0.4381 | 0.8075 | 0.4381 |
| 0.3759 | 0.9228 | 9400 | 0.4858 | 0.0023 | 0.4401 | 0.8091 | 0.4401 |
| 0.468 | 0.9327 | 9500 | 0.4890 | 0.0023 | 0.4391 | 0.8078 | 0.4391 |
| 0.4893 | 0.9425 | 9600 | 0.4823 | 0.0023 | 0.4406 | 0.8094 | 0.4406 |
| 0.4759 | 0.9523 | 9700 | 0.4784 | 0.0023 | 0.4452 | 0.8110 | 0.4452 |
| 0.5078 | 0.9621 | 9800 | 0.4876 | 0.0023 | 0.4355 | 0.8071 | 0.4355 |
| 0.4531 | 0.9719 | 9900 | 0.4792 | 0.0023 | 0.4425 | 0.8110 | 0.4425 |
| 0.4947 | 0.9817 | 10000 | 0.4856 | 0.0023 | 0.4372 | 0.8086 | 0.4372 |
| 0.4585 | 0.9916 | 10100 | 0.4775 | 0.0023 | 0.4433 | 0.8121 | 0.4433 |
| 0.4506 | 1.0014 | 10200 | 0.4776 | 0.0023 | 0.4410 | 0.8111 | 0.4410 |
| 0.4357 | 1.0112 | 10300 | 0.4788 | 0.0023 | 0.4457 | 0.8118 | 0.4457 |
| 0.4737 | 1.0210 | 10400 | 0.4811 | 0.0023 | 0.4465 | 0.8126 | 0.4465 |
| 0.4411 | 1.0308 | 10500 | 0.4779 | 0.0023 | 0.4459 | 0.8114 | 0.4459 |
| 0.4634 | 1.0406 | 10600 | 0.4815 | 0.0023 | 0.4411 | 0.8113 | 0.4411 |
| 0.4136 | 1.0505 | 10700 | 0.4734 | 0.0023 | 0.4468 | 0.8129 | 0.4468 |
| 0.4582 | 1.0603 | 10800 | 0.4716 | 0.0023 | 0.4528 | 0.8142 | 0.4528 |
| 0.4287 | 1.0701 | 10900 | 0.4733 | 0.0023 | 0.4481 | 0.8140 | 0.4481 |
| 0.5291 | 1.0799 | 11000 | 0.4726 | 0.0023 | 0.4502 | 0.8145 | 0.4502 |
| 0.4382 | 1.0897 | 11100 | 0.4705 | 0.0023 | 0.4541 | 0.8151 | 0.4541 |
| 0.5431 | 1.0995 | 11200 | 0.4726 | 0.0023 | 0.4502 | 0.8139 | 0.4502 |
| 0.4177 | 1.1094 | 11300 | 0.4712 | 0.0023 | 0.4491 | 0.8139 | 0.4491 |
| 0.4509 | 1.1192 | 11400 | 0.4687 | 0.0023 | 0.4550 | 0.8155 | 0.4550 |
| 0.4301 | 1.1290 | 11500 | 0.4713 | 0.0023 | 0.4555 | 0.8156 | 0.4555 |
| 0.4387 | 1.1388 | 11600 | 0.4675 | 0.0023 | 0.4560 | 0.8163 | 0.4560 |
| 0.5237 | 1.1486 | 11700 | 0.4688 | 0.0023 | 0.4541 | 0.8161 | 0.4541 |
| 0.4253 | 1.1585 | 11800 | 0.4647 | 0.0023 | 0.4580 | 0.8171 | 0.4580 |
| 0.4177 | 1.1683 | 11900 | 0.4616 | 0.0023 | 0.4605 | 0.8182 | 0.4605 |
| 0.347 | 1.1781 | 12000 | 0.4631 | 0.0023 | 0.4613 | 0.8177 | 0.4613 |
| 0.4654 | 1.1879 | 12100 | 0.4587 | 0.0023 | 0.4638 | 0.8200 | 0.4638 |
| 0.3726 | 1.1977 | 12200 | 0.4591 | 0.0023 | 0.4607 | 0.8185 | 0.4607 |
| 0.4567 | 1.2075 | 12300 | 0.4633 | 0.0023 | 0.4604 | 0.8185 | 0.4604 |
| 0.3962 | 1.2174 | 12400 | 0.4597 | 0.0023 | 0.4618 | 0.8200 | 0.4618 |
| 0.4573 | 1.2272 | 12500 | 0.4594 | 0.0023 | 0.4602 | 0.8187 | 0.4602 |
| 0.4402 | 1.2370 | 12600 | 0.4573 | 0.0023 | 0.4671 | 0.8213 | 0.4671 |
| 0.4459 | 1.2468 | 12700 | 0.4576 | 0.0023 | 0.4668 | 0.8199 | 0.4668 |
| 0.3908 | 1.2566 | 12800 | 0.4592 | 0.0023 | 0.4656 | 0.8202 | 0.4656 |
| 0.5075 | 1.2664 | 12900 | 0.4559 | 0.0023 | 0.4644 | 0.8202 | 0.4644 |
| 0.436 | 1.2763 | 13000 | 0.4578 | 0.0023 | 0.4680 | 0.8211 | 0.4680 |
| 0.4359 | 1.2861 | 13100 | 0.4525 | 0.0023 | 0.4701 | 0.8231 | 0.4701 |
| 0.4391 | 1.2959 | 13200 | 0.4549 | 0.0023 | 0.4693 | 0.8220 | 0.4693 |
| 0.4176 | 1.3057 | 13300 | 0.4537 | 0.0023 | 0.4685 | 0.8223 | 0.4685 |
| 0.4446 | 1.3155 | 13400 | 0.4489 | 0.0023 | 0.4702 | 0.8230 | 0.4702 |
| 0.378 | 1.3253 | 13500 | 0.4527 | 0.0023 | 0.4714 | 0.8224 | 0.4714 |
| 0.416 | 1.3352 | 13600 | 0.4492 | 0.0023 | 0.4763 | 0.8240 | 0.4763 |
| 0.4217 | 1.3450 | 13700 | 0.4487 | 0.0023 | 0.4752 | 0.8240 | 0.4752 |
| 0.4859 | 1.3548 | 13800 | 0.4516 | 0.0023 | 0.4679 | 0.8213 | 0.4679 |
| 0.4055 | 1.3646 | 13900 | 0.4450 | 0.0023 | 0.4765 | 0.8245 | 0.4765 |
| 0.457 | 1.3744 | 14000 | 0.4504 | 0.0023 | 0.4754 | 0.8245 | 0.4754 |
| 0.4092 | 1.3843 | 14100 | 0.4437 | 0.0023 | 0.4780 | 0.8256 | 0.4780 |
| 0.4216 | 1.3941 | 14200 | 0.4459 | 0.0023 | 0.4780 | 0.8252 | 0.4780 |
| 0.4103 | 1.4039 | 14300 | 0.4409 | 0.0023 | 0.4792 | 0.8270 | 0.4792 |
| 0.3883 | 1.4137 | 14400 | 0.4436 | 0.0023 | 0.4758 | 0.8258 | 0.4758 |
| 0.4307 | 1.4235 | 14500 | 0.4424 | 0.0023 | 0.4844 | 0.8270 | 0.4844 |
| 0.4042 | 1.4333 | 14600 | 0.4412 | 0.0023 | 0.4830 | 0.8270 | 0.4830 |
| 0.4115 | 1.4432 | 14700 | 0.4402 | 0.0023 | 0.4783 | 0.8254 | 0.4783 |
| 0.3838 | 1.4530 | 14800 | 0.4391 | 0.0023 | 0.4850 | 0.8280 | 0.4850 |
| 0.4463 | 1.4628 | 14900 | 0.4374 | 0.0023 | 0.4825 | 0.8265 | 0.4825 |
| 0.3885 | 1.4726 | 15000 | 0.4357 | 0.0023 | 0.4841 | 0.8292 | 0.4841 |
| 0.4566 | 1.4824 | 15100 | 0.4363 | 0.0023 | 0.4811 | 0.8280 | 0.4811 |
| 0.3694 | 1.4922 | 15200 | 0.4381 | 0.0023 | 0.4852 | 0.8280 | 0.4852 |
| 0.4081 | 1.5021 | 15300 | 0.4344 | 0.0023 | 0.4908 | 0.8300 | 0.4908 |
| 0.3838 | 1.5119 | 15400 | 0.4360 | 0.0023 | 0.4895 | 0.8294 | 0.4895 |
| 0.4403 | 1.5217 | 15500 | 0.4377 | 0.0023 | 0.4854 | 0.8279 | 0.4854 |
| 0.3863 | 1.5315 | 15600 | 0.4329 | 0.0023 | 0.4863 | 0.8289 | 0.4863 |
| 0.4461 | 1.5413 | 15700 | 0.4353 | 0.0023 | 0.4892 | 0.8293 | 0.4892 |
| 0.428 | 1.5511 | 15800 | 0.4294 | 0.0023 | 0.4920 | 0.8302 | 0.4920 |
| 0.3796 | 1.5610 | 15900 | 0.4289 | 0.0023 | 0.4932 | 0.8306 | 0.4932 |
| 0.4319 | 1.5708 | 16000 | 0.4295 | 0.0023 | 0.4865 | 0.8297 | 0.4865 |
| 0.4311 | 1.5806 | 16100 | 0.4329 | 0.0023 | 0.4876 | 0.8307 | 0.4876 |
| 0.4884 | 1.5904 | 16200 | 0.4254 | 0.0023 | 0.4981 | 0.8336 | 0.4981 |
| 0.4411 | 1.6002 | 16300 | 0.4288 | 0.0023 | 0.4936 | 0.8317 | 0.4936 |
| 0.4805 | 1.6101 | 16400 | 0.4279 | 0.0023 | 0.4959 | 0.8326 | 0.4959 |
| 0.4116 | 1.6199 | 16500 | 0.4283 | 0.0023 | 0.4961 | 0.8328 | 0.4961 |
| 0.4096 | 1.6297 | 16600 | 0.4211 | 0.0023 | 0.5022 | 0.8361 | 0.5022 |
| 0.4439 | 1.6395 | 16700 | 0.4291 | 0.0023 | 0.4951 | 0.8329 | 0.4951 |
| 0.3796 | 1.6493 | 16800 | 0.4259 | 0.0023 | 0.4988 | 0.8338 | 0.4988 |
| 0.3777 | 1.6591 | 16900 | 0.4261 | 0.0023 | 0.4972 | 0.8339 | 0.4972 |
| 0.409 | 1.6690 | 17000 | 0.4259 | 0.0023 | 0.4954 | 0.8325 | 0.4954 |
| 0.4232 | 1.6788 | 17100 | 0.4247 | 0.0023 | 0.4977 | 0.8331 | 0.4977 |
| 0.3679 | 1.6886 | 17200 | 0.4217 | 0.0023 | 0.4985 | 0.8341 | 0.4985 |
| 0.4343 | 1.6984 | 17300 | 0.4250 | 0.0023 | 0.5000 | 0.8340 | 0.5000 |
| 0.3634 | 1.7082 | 17400 | 0.4231 | 0.0023 | 0.5035 | 0.8349 | 0.5035 |
| 0.4088 | 1.7180 | 17500 | 0.4204 | 0.0023 | 0.5039 | 0.8367 | 0.5039 |
| 0.3844 | 1.7279 | 17600 | 0.4223 | 0.0023 | 0.4984 | 0.8346 | 0.4984 |
| 0.398 | 1.7377 | 17700 | 0.4201 | 0.0023 | 0.5038 | 0.8361 | 0.5038 |
| 0.4236 | 1.7475 | 17800 | 0.4208 | 0.0023 | 0.4975 | 0.8347 | 0.4975 |
| 0.4132 | 1.7573 | 17900 | 0.4189 | 0.0023 | 0.5017 | 0.8370 | 0.5017 |
| 0.4228 | 1.7671 | 18000 | 0.4206 | 0.0023 | 0.4992 | 0.8358 | 0.4992 |
| 0.4122 | 1.7769 | 18100 | 0.4158 | 0.0023 | 0.5059 | 0.8378 | 0.5059 |
| 0.4383 | 1.7868 | 18200 | 0.4229 | 0.0023 | 0.4982 | 0.8340 | 0.4982 |
| 0.4365 | 1.7966 | 18300 | 0.4195 | 0.0023 | 0.4988 | 0.8348 | 0.4988 |
| 0.3715 | 1.8064 | 18400 | 0.4184 | 0.0023 | 0.4967 | 0.8358 | 0.4967 |
| 0.4155 | 1.8162 | 18500 | 0.4187 | 0.0023 | 0.5036 | 0.8370 | 0.5036 |
| 0.4059 | 1.8260 | 18600 | 0.4165 | 0.0023 | 0.4989 | 0.8351 | 0.4989 |
| 0.3867 | 1.8359 | 18700 | 0.4137 | 0.0023 | 0.5070 | 0.8380 | 0.5070 |
| 0.3217 | 1.8457 | 18800 | 0.4136 | 0.0023 | 0.5086 | 0.8386 | 0.5086 |
| 0.3148 | 1.8555 | 18900 | 0.4148 | 0.0023 | 0.5021 | 0.8368 | 0.5021 |
| 0.406 | 1.8653 | 19000 | 0.4093 | 0.0023 | 0.5090 | 0.8382 | 0.5090 |
| 0.362 | 1.8751 | 19100 | 0.4117 | 0.0023 | 0.5057 | 0.8377 | 0.5057 |
| 0.3752 | 1.8849 | 19200 | 0.4109 | 0.0023 | 0.5071 | 0.8381 | 0.5071 |
| 0.5094 | 1.8948 | 19300 | 0.4143 | 0.0023 | 0.5075 | 0.8379 | 0.5075 |
| 0.3345 | 1.9046 | 19400 | 0.4128 | 0.0023 | 0.5106 | 0.8391 | 0.5106 |
| 0.3691 | 1.9144 | 19500 | 0.4107 | 0.0023 | 0.5133 | 0.8389 | 0.5133 |
| 0.4 | 1.9242 | 19600 | 0.4116 | 0.0023 | 0.5128 | 0.8403 | 0.5128 |
| 0.4027 | 1.9340 | 19700 | 0.4124 | 0.0023 | 0.5115 | 0.8384 | 0.5115 |
| 0.3935 | 1.9438 | 19800 | 0.4090 | 0.0023 | 0.5143 | 0.8403 | 0.5143 |
| 0.3328 | 1.9537 | 19900 | 0.4102 | 0.0023 | 0.5112 | 0.8389 | 0.5112 |
| 0.4001 | 1.9635 | 20000 | 0.4106 | 0.0023 | 0.5131 | 0.8395 | 0.5131 |
| 0.4048 | 1.9733 | 20100 | 0.4076 | 0.0023 | 0.5151 | 0.8403 | 0.5151 |
| 0.4477 | 1.9831 | 20200 | 0.4065 | 0.0023 | 0.5135 | 0.8401 | 0.5135 |
| 0.4063 | 1.9929 | 20300 | 0.4055 | 0.0023 | 0.5168 | 0.8414 | 0.5168 |
| 0.3304 | 2.0027 | 20400 | 0.4126 | 0.0023 | 0.5191 | 0.8415 | 0.5191 |
| 0.3062 | 2.0126 | 20500 | 0.4096 | 0.0023 | 0.5197 | 0.8406 | 0.5197 |
| 0.3488 | 2.0224 | 20600 | 0.4124 | 0.0023 | 0.5164 | 0.8404 | 0.5164 |
| 0.2934 | 2.0322 | 20700 | 0.4145 | 0.0023 | 0.5109 | 0.8401 | 0.5109 |
| 0.3207 | 2.0420 | 20800 | 0.4131 | 0.0023 | 0.5172 | 0.8405 | 0.5172 |
| 0.413 | 2.0518 | 20900 | 0.4147 | 0.0023 | 0.5145 | 0.8407 | 0.5145 |
| 0.3176 | 2.0617 | 21000 | 0.4198 | 0.0023 | 0.5162 | 0.8402 | 0.5162 |
| 0.3909 | 2.0715 | 21100 | 0.4146 | 0.0023 | 0.5150 | 0.8400 | 0.5150 |
| 0.4044 | 2.0813 | 21200 | 0.4180 | 0.0023 | 0.5086 | 0.8391 | 0.5086 |
| 0.395 | 2.0911 | 21300 | 0.4149 | 0.0023 | 0.5175 | 0.8409 | 0.5175 |
| 0.4061 | 2.1009 | 21400 | 0.4135 | 0.0023 | 0.5180 | 0.8406 | 0.5180 |
| 0.3532 | 2.1107 | 21500 | 0.4145 | 0.0023 | 0.5129 | 0.8391 | 0.5129 |
| 0.309 | 2.1206 | 21600 | 0.4156 | 0.0023 | 0.5060 | 0.8390 | 0.5060 |
| 0.3614 | 2.1304 | 21700 | 0.4148 | 0.0023 | 0.5124 | 0.8402 | 0.5124 |
| 0.3522 | 2.1402 | 21800 | 0.4127 | 0.0023 | 0.5188 | 0.8407 | 0.5188 |
| 0.364 | 2.1500 | 21900 | 0.4144 | 0.0023 | 0.5166 | 0.8406 | 0.5166 |
| 0.3148 | 2.1598 | 22000 | 0.4155 | 0.0023 | 0.5139 | 0.8397 | 0.5139 |
| 0.334 | 2.1696 | 22100 | 0.4120 | 0.0023 | 0.5150 | 0.8398 | 0.5150 |
| 0.3252 | 2.1795 | 22200 | 0.4123 | 0.0023 | 0.5158 | 0.8417 | 0.5158 |
| 0.356 | 2.1893 | 22300 | 0.4120 | 0.0023 | 0.5177 | 0.8414 | 0.5177 |
| 0.4261 | 2.1991 | 22400 | 0.4130 | 0.0023 | 0.5155 | 0.8409 | 0.5155 |
| 0.3351 | 2.2089 | 22500 | 0.4085 | 0.0023 | 0.5215 | 0.8423 | 0.5215 |
| 0.3846 | 2.2187 | 22600 | 0.4112 | 0.0023 | 0.5188 | 0.8421 | 0.5188 |
| 0.381 | 2.2285 | 22700 | 0.4105 | 0.0023 | 0.5160 | 0.8415 | 0.5160 |
| 0.371 | 2.2384 | 22800 | 0.4100 | 0.0023 | 0.5188 | 0.8410 | 0.5188 |
| 0.3228 | 2.2482 | 22900 | 0.4050 | 0.0023 | 0.5180 | 0.8415 | 0.5180 |
| 0.3229 | 2.2580 | 23000 | 0.4130 | 0.0023 | 0.5214 | 0.8419 | 0.5214 |
| 0.4548 | 2.2678 | 23100 | 0.4095 | 0.0023 | 0.5207 | 0.8422 | 0.5207 |
| 0.2659 | 2.2776 | 23200 | 0.4047 | 0.0023 | 0.5203 | 0.8435 | 0.5203 |
| 0.3502 | 2.2875 | 23300 | 0.4113 | 0.0023 | 0.5186 | 0.8423 | 0.5186 |
| 0.3329 | 2.2973 | 23400 | 0.4059 | 0.0023 | 0.5210 | 0.8436 | 0.5210 |
| 0.3687 | 2.3071 | 23500 | 0.4045 | 0.0023 | 0.5206 | 0.8433 | 0.5206 |
| 0.3515 | 2.3169 | 23600 | 0.4069 | 0.0023 | 0.5175 | 0.8422 | 0.5175 |
| 0.3486 | 2.3267 | 23700 | 0.4060 | 0.0023 | 0.5239 | 0.8432 | 0.5239 |
| 0.3671 | 2.3365 | 23800 | 0.4062 | 0.0023 | 0.5228 | 0.8440 | 0.5228 |
| 0.3526 | 2.3464 | 23900 | 0.4015 | 0.0023 | 0.5234 | 0.8442 | 0.5234 |
| 0.3752 | 2.3562 | 24000 | 0.4027 | 0.0023 | 0.5213 | 0.8440 | 0.5213 |
| 0.3599 | 2.3660 | 24100 | 0.4058 | 0.0023 | 0.5208 | 0.8431 | 0.5208 |
| 0.3535 | 2.3758 | 24200 | 0.4060 | 0.0023 | 0.5240 | 0.8433 | 0.5240 |
| 0.3431 | 2.3856 | 24300 | 0.4063 | 0.0023 | 0.5190 | 0.8422 | 0.5190 |
| 0.3774 | 2.3954 | 24400 | 0.4049 | 0.0023 | 0.5234 | 0.8440 | 0.5234 |
| 0.3668 | 2.4053 | 24500 | 0.4067 | 0.0023 | 0.5152 | 0.8419 | 0.5152 |
| 0.314 | 2.4151 | 24600 | 0.4048 | 0.0023 | 0.5240 | 0.8440 | 0.5240 |
| 0.3251 | 2.4249 | 24700 | 0.4006 | 0.0023 | 0.5249 | 0.8439 | 0.5249 |
| 0.3157 | 2.4347 | 24800 | 0.4046 | 0.0023 | 0.5212 | 0.8434 | 0.5212 |
| 0.3348 | 2.4445 | 24900 | 0.4021 | 0.0023 | 0.5266 | 0.8442 | 0.5266 |
| 0.3434 | 2.4543 | 25000 | 0.4044 | 0.0023 | 0.5237 | 0.8435 | 0.5237 |
| 0.3823 | 2.4642 | 25100 | 0.4047 | 0.0023 | 0.5188 | 0.8424 | 0.5188 |
| 0.3858 | 2.4740 | 25200 | 0.4015 | 0.0023 | 0.5223 | 0.8430 | 0.5223 |
| 0.3475 | 2.4838 | 25300 | 0.3990 | 0.0023 | 0.5248 | 0.8443 | 0.5248 |
| 0.3128 | 2.4936 | 25400 | 0.4017 | 0.0023 | 0.5276 | 0.8440 | 0.5276 |
| 0.3373 | 2.5034 | 25500 | 0.4034 | 0.0023 | 0.5216 | 0.8425 | 0.5216 |
| 0.323 | 2.5133 | 25600 | 0.3996 | 0.0023 | 0.5242 | 0.8437 | 0.5242 |
| 0.3302 | 2.5231 | 25700 | 0.4025 | 0.0023 | 0.5273 | 0.8439 | 0.5273 |
| 0.3565 | 2.5329 | 25800 | 0.3979 | 0.0023 | 0.5278 | 0.8460 | 0.5278 |
| 0.4211 | 2.5427 | 25900 | 0.3962 | 0.0023 | 0.5268 | 0.8437 | 0.5268 |
| 0.3894 | 2.5525 | 26000 | 0.3963 | 0.0023 | 0.5284 | 0.8458 | 0.5284 |
| 0.3242 | 2.5623 | 26100 | 0.3970 | 0.0023 | 0.5291 | 0.8456 | 0.5291 |
| 0.3163 | 2.5722 | 26200 | 0.4026 | 0.0023 | 0.5254 | 0.8448 | 0.5254 |
| 0.3813 | 2.5820 | 26300 | 0.4001 | 0.0023 | 0.5288 | 0.8465 | 0.5288 |
| 0.3664 | 2.5918 | 26400 | 0.3992 | 0.0023 | 0.5304 | 0.8461 | 0.5304 |
| 0.3628 | 2.6016 | 26500 | 0.3969 | 0.0023 | 0.5326 | 0.8472 | 0.5326 |
| 0.3416 | 2.6114 | 26600 | 0.3966 | 0.0023 | 0.5271 | 0.8454 | 0.5271 |
| 0.3731 | 2.6212 | 26700 | 0.3971 | 0.0023 | 0.5284 | 0.8463 | 0.5284 |
| 0.3584 | 2.6311 | 26800 | 0.3943 | 0.0023 | 0.5273 | 0.8461 | 0.5273 |
| 0.3287 | 2.6409 | 26900 | 0.3912 | 0.0023 | 0.5353 | 0.8485 | 0.5353 |
| 0.3792 | 2.6507 | 27000 | 0.3987 | 0.0023 | 0.5313 | 0.8459 | 0.5313 |
| 0.3853 | 2.6605 | 27100 | 0.3946 | 0.0023 | 0.5294 | 0.8460 | 0.5294 |
| 0.3058 | 2.6703 | 27200 | 0.3937 | 0.0023 | 0.5328 | 0.8473 | 0.5328 |
| 0.3365 | 2.6801 | 27300 | 0.3937 | 0.0023 | 0.5330 | 0.8463 | 0.5330 |
| 0.3165 | 2.6900 | 27400 | 0.3909 | 0.0023 | 0.5284 | 0.8466 | 0.5284 |
| 0.3208 | 2.6998 | 27500 | 0.3903 | 0.0023 | 0.5386 | 0.8483 | 0.5386 |
| 0.3492 | 2.7096 | 27600 | 0.3894 | 0.0023 | 0.5338 | 0.8473 | 0.5338 |
| 0.3431 | 2.7194 | 27700 | 0.3882 | 0.0023 | 0.5337 | 0.8482 | 0.5337 |
| 0.3667 | 2.7292 | 27800 | 0.3920 | 0.0023 | 0.5331 | 0.8474 | 0.5331 |
| 0.3197 | 2.7391 | 27900 | 0.3895 | 0.0023 | 0.5364 | 0.8485 | 0.5364 |
| 0.3625 | 2.7489 | 28000 | 0.3945 | 0.0023 | 0.5333 | 0.8472 | 0.5333 |
| 0.3235 | 2.7587 | 28100 | 0.3937 | 0.0023 | 0.5356 | 0.8473 | 0.5356 |
| 0.2643 | 2.7685 | 28200 | 0.3931 | 0.0023 | 0.5364 | 0.8480 | 0.5364 |
| 0.3143 | 2.7783 | 28300 | 0.3924 | 0.0023 | 0.5358 | 0.8483 | 0.5358 |
| 0.3303 | 2.7881 | 28400 | 0.3910 | 0.0023 | 0.5389 | 0.8492 | 0.5389 |
| 0.3035 | 2.7980 | 28500 | 0.3893 | 0.0023 | 0.5373 | 0.8489 | 0.5373 |
| 0.3396 | 2.8078 | 28600 | 0.3893 | 0.0023 | 0.5371 | 0.8483 | 0.5371 |
| 0.3355 | 2.8176 | 28700 | 0.3900 | 0.0023 | 0.5422 | 0.8503 | 0.5422 |
| 0.3498 | 2.8274 | 28800 | 0.3955 | 0.0023 | 0.5368 | 0.8480 | 0.5368 |
| 0.4141 | 2.8372 | 28900 | 0.3888 | 0.0023 | 0.5364 | 0.8482 | 0.5364 |
| 0.3411 | 2.8470 | 29000 | 0.3920 | 0.0023 | 0.5363 | 0.8482 | 0.5363 |
| 0.3166 | 2.8569 | 29100 | 0.3945 | 0.0023 | 0.5379 | 0.8484 | 0.5379 |
| 0.3466 | 2.8667 | 29200 | 0.3880 | 0.0023 | 0.5442 | 0.8507 | 0.5442 |
| 0.3413 | 2.8765 | 29300 | 0.3923 | 0.0023 | 0.5400 | 0.8494 | 0.5400 |
| 0.3169 | 2.8863 | 29400 | 0.3877 | 0.0023 | 0.5397 | 0.8486 | 0.5397 |
| 0.3014 | 2.8961 | 29500 | 0.3853 | 0.0023 | 0.5498 | 0.8518 | 0.5498 |
| 0.3806 | 2.9059 | 29600 | 0.3866 | 0.0023 | 0.5407 | 0.8504 | 0.5407 |
| 0.3528 | 2.9158 | 29700 | 0.3865 | 0.0023 | 0.5402 | 0.8503 | 0.5402 |
| 0.2929 | 2.9256 | 29800 | 0.3865 | 0.0023 | 0.5429 | 0.8505 | 0.5429 |
| 0.345 | 2.9354 | 29900 | 0.3859 | 0.0023 | 0.5432 | 0.8512 | 0.5432 |
| 0.3349 | 2.9452 | 30000 | 0.3832 | 0.0023 | 0.5436 | 0.8513 | 0.5436 |
| 0.3418 | 2.9550 | 30100 | 0.3859 | 0.0023 | 0.5414 | 0.8507 | 0.5414 |
| 0.2884 | 2.9649 | 30200 | 0.3866 | 0.0023 | 0.5368 | 0.8491 | 0.5368 |
| 0.3187 | 2.9747 | 30300 | 0.3833 | 0.0023 | 0.5439 | 0.8511 | 0.5439 |
| 0.3642 | 2.9845 | 30400 | 0.3859 | 0.0023 | 0.5402 | 0.8487 | 0.5402 |
| 0.454 | 2.9943 | 30500 | 0.3823 | 0.0023 | 0.5410 | 0.8501 | 0.5410 |
| 0.2832 | 3.0041 | 30600 | 0.4044 | 0.0023 | 0.5450 | 0.8504 | 0.5450 |
| 0.2363 | 3.0139 | 30700 | 0.4099 | 0.0023 | 0.5394 | 0.8483 | 0.5394 |
| 0.2644 | 3.0238 | 30800 | 0.4155 | 0.0023 | 0.5369 | 0.8487 | 0.5369 |
| 0.2768 | 3.0336 | 30900 | 0.4114 | 0.0023 | 0.5417 | 0.8498 | 0.5417 |
| 0.296 | 3.0434 | 31000 | 0.4100 | 0.0023 | 0.5400 | 0.8487 | 0.5400 |
| 0.3087 | 3.0532 | 31100 | 0.4109 | 0.0023 | 0.5384 | 0.8476 | 0.5384 |
| 0.2504 | 3.0630 | 31200 | 0.4179 | 0.0023 | 0.5391 | 0.8479 | 0.5391 |
| 0.3044 | 3.0728 | 31300 | 0.4101 | 0.0023 | 0.5406 | 0.8487 | 0.5406 |
| 0.3095 | 3.0827 | 31400 | 0.4180 | 0.0023 | 0.5404 | 0.8494 | 0.5404 |
| 0.3007 | 3.0925 | 31500 | 0.4131 | 0.0023 | 0.5362 | 0.8481 | 0.5362 |
| 0.2508 | 3.1023 | 31600 | 0.4143 | 0.0023 | 0.5386 | 0.8480 | 0.5386 |
| 0.2655 | 3.1121 | 31700 | 0.4121 | 0.0023 | 0.5390 | 0.8479 | 0.5390 |
| 0.3204 | 3.1219 | 31800 | 0.4121 | 0.0023 | 0.5414 | 0.8480 | 0.5414 |
| 0.2498 | 3.1317 | 31900 | 0.4067 | 0.0023 | 0.5431 | 0.8491 | 0.5431 |
| 0.3213 | 3.1416 | 32000 | 0.4114 | 0.0023 | 0.5393 | 0.8479 | 0.5393 |
| 0.257 | 3.1514 | 32100 | 0.4182 | 0.0023 | 0.5433 | 0.8489 | 0.5433 |
| 0.3254 | 3.1612 | 32200 | 0.4094 | 0.0023 | 0.5398 | 0.8489 | 0.5398 |
| 0.2876 | 3.1710 | 32300 | 0.4154 | 0.0023 | 0.5361 | 0.8478 | 0.5361 |
| 0.287 | 3.1808 | 32400 | 0.4132 | 0.0023 | 0.5370 | 0.8475 | 0.5370 |
| 0.3895 | 3.1907 | 32500 | 0.4161 | 0.0023 | 0.5368 | 0.8475 | 0.5368 |
| 0.291 | 3.2005 | 32600 | 0.4119 | 0.0023 | 0.5404 | 0.8482 | 0.5404 |
| 0.286 | 3.2103 | 32700 | 0.4159 | 0.0023 | 0.5359 | 0.8474 | 0.5359 |
| 0.2428 | 3.2201 | 32800 | 0.4135 | 0.0023 | 0.5394 | 0.8483 | 0.5394 |
| 0.2829 | 3.2299 | 32900 | 0.4137 | 0.0023 | 0.5360 | 0.8470 | 0.5360 |
| 0.311 | 3.2397 | 33000 | 0.4104 | 0.0023 | 0.5370 | 0.8489 | 0.5370 |
| 0.3111 | 3.2496 | 33100 | 0.4099 | 0.0023 | 0.5404 | 0.8483 | 0.5404 |
| 0.2498 | 3.2594 | 33200 | 0.4124 | 0.0023 | 0.5368 | 0.8465 | 0.5368 |
| 0.2333 | 3.2692 | 33300 | 0.4097 | 0.0023 | 0.5418 | 0.8489 | 0.5418 |
| 0.3075 | 3.2790 | 33400 | 0.4078 | 0.0023 | 0.5382 | 0.8478 | 0.5382 |
| 0.2677 | 3.2888 | 33500 | 0.4088 | 0.0023 | 0.5395 | 0.8478 | 0.5395 |
| 0.3405 | 3.2986 | 33600 | 0.4073 | 0.0023 | 0.5416 | 0.8494 | 0.5416 |
| 0.2213 | 3.3085 | 33700 | 0.4088 | 0.0023 | 0.5429 | 0.8488 | 0.5429 |
| 0.3289 | 3.3183 | 33800 | 0.4133 | 0.0023 | 0.5429 | 0.8486 | 0.5429 |
| 0.2428 | 3.3281 | 33900 | 0.4088 | 0.0023 | 0.5406 | 0.8477 | 0.5406 |
| 0.2799 | 3.3379 | 34000 | 0.4083 | 0.0023 | 0.5428 | 0.8500 | 0.5428 |
| 0.3191 | 3.3477 | 34100 | 0.4087 | 0.0023 | 0.5378 | 0.8486 | 0.5378 |
| 0.2615 | 3.3575 | 34200 | 0.4012 | 0.0023 | 0.5421 | 0.8499 | 0.5421 |
| 0.2825 | 3.3674 | 34300 | 0.4049 | 0.0023 | 0.5419 | 0.8491 | 0.5419 |
| 0.2714 | 3.3772 | 34400 | 0.4065 | 0.0023 | 0.5454 | 0.8507 | 0.5454 |
| 0.2973 | 3.3870 | 34500 | 0.4105 | 0.0023 | 0.5436 | 0.8500 | 0.5436 |
| 0.2131 | 3.3968 | 34600 | 0.4026 | 0.0023 | 0.5452 | 0.8501 | 0.5452 |
| 0.2713 | 3.4066 | 34700 | 0.4043 | 0.0023 | 0.5438 | 0.8508 | 0.5438 |
| 0.2912 | 3.4165 | 34800 | 0.4000 | 0.0023 | 0.5469 | 0.8517 | 0.5469 |
| 0.3758 | 3.4263 | 34900 | 0.4038 | 0.0023 | 0.5476 | 0.8512 | 0.5476 |
| 0.3297 | 3.4361 | 35000 | 0.4041 | 0.0023 | 0.5450 | 0.8505 | 0.5450 |
| 0.1773 | 3.4459 | 35100 | 0.3991 | 0.0023 | 0.5452 | 0.8513 | 0.5452 |
| 0.2761 | 3.4557 | 35200 | 0.4023 | 0.0023 | 0.5420 | 0.8501 | 0.5420 |
| 0.2784 | 3.4655 | 35300 | 0.4048 | 0.0023 | 0.5424 | 0.8500 | 0.5424 |
| 0.2879 | 3.4754 | 35400 | 0.4018 | 0.0023 | 0.5448 | 0.8511 | 0.5448 |
| 0.2915 | 3.4852 | 35500 | 0.3977 | 0.0023 | 0.5405 | 0.8500 | 0.5405 |
| 0.2533 | 3.4950 | 35600 | 0.4056 | 0.0023 | 0.5469 | 0.8514 | 0.5469 |
| 0.2969 | 3.5048 | 35700 | 0.3981 | 0.0023 | 0.5459 | 0.8513 | 0.5459 |
| 0.2999 | 3.5146 | 35800 | 0.3995 | 0.0023 | 0.5434 | 0.8498 | 0.5434 |
| 0.2756 | 3.5244 | 35900 | 0.4016 | 0.0023 | 0.5434 | 0.8510 | 0.5434 |
| 0.2807 | 3.5343 | 36000 | 0.3982 | 0.0023 | 0.5494 | 0.8521 | 0.5494 |
| 0.235 | 3.5441 | 36100 | 0.4009 | 0.0023 | 0.5477 | 0.8515 | 0.5477 |
| 0.3184 | 3.5539 | 36200 | 0.4001 | 0.0023 | 0.5488 | 0.8511 | 0.5488 |
| 0.239 | 3.5637 | 36300 | 0.4032 | 0.0023 | 0.5466 | 0.8522 | 0.5466 |
| 0.2799 | 3.5735 | 36400 | 0.4023 | 0.0023 | 0.5471 | 0.8509 | 0.5471 |
| 0.2684 | 3.5833 | 36500 | 0.3964 | 0.0023 | 0.5426 | 0.8516 | 0.5426 |
| 0.2629 | 3.5932 | 36600 | 0.4022 | 0.0023 | 0.5454 | 0.8507 | 0.5454 |
| 0.2632 | 3.6030 | 36700 | 0.3987 | 0.0023 | 0.5451 | 0.8505 | 0.5451 |
| 0.3136 | 3.6128 | 36800 | 0.4007 | 0.0023 | 0.5480 | 0.8510 | 0.5480 |
| 0.2478 | 3.6226 | 36900 | 0.3959 | 0.0023 | 0.5498 | 0.8523 | 0.5498 |
| 0.2406 | 3.6324 | 37000 | 0.3997 | 0.0023 | 0.5447 | 0.8510 | 0.5447 |
| 0.3246 | 3.6423 | 37100 | 0.3988 | 0.0023 | 0.5505 | 0.8519 | 0.5505 |
| 0.2993 | 3.6521 | 37200 | 0.3980 | 0.0023 | 0.5506 | 0.8522 | 0.5506 |
| 0.3074 | 3.6619 | 37300 | 0.4021 | 0.0023 | 0.5431 | 0.8503 | 0.5431 |
| 0.2773 | 3.6717 | 37400 | 0.4035 | 0.0023 | 0.5454 | 0.8506 | 0.5454 |
| 0.3199 | 3.6815 | 37500 | 0.3930 | 0.0023 | 0.5478 | 0.8522 | 0.5478 |
| 0.2713 | 3.6913 | 37600 | 0.3970 | 0.0023 | 0.5480 | 0.8519 | 0.5480 |
| 0.2713 | 3.7012 | 37700 | 0.3988 | 0.0023 | 0.5423 | 0.8510 | 0.5423 |
| 0.3234 | 3.7110 | 37800 | 0.3951 | 0.0023 | 0.5468 | 0.8522 | 0.5468 |
| 0.2685 | 3.7208 | 37900 | 0.3952 | 0.0023 | 0.5481 | 0.8525 | 0.5481 |
| 0.247 | 3.7306 | 38000 | 0.4001 | 0.0023 | 0.5435 | 0.8498 | 0.5435 |
| 0.2749 | 3.7404 | 38100 | 0.3939 | 0.0023 | 0.5454 | 0.8512 | 0.5454 |
| 0.2773 | 3.7502 | 38200 | 0.4016 | 0.0023 | 0.5483 | 0.8521 | 0.5483 |
| 0.2903 | 3.7601 | 38300 | 0.3996 | 0.0023 | 0.5449 | 0.8519 | 0.5449 |
| 0.3415 | 3.7699 | 38400 | 0.3955 | 0.0023 | 0.5449 | 0.8512 | 0.5449 |
| 0.2925 | 3.7797 | 38500 | 0.3968 | 0.0023 | 0.5438 | 0.8512 | 0.5438 |
| 0.3209 | 3.7895 | 38600 | 0.3947 | 0.0023 | 0.5492 | 0.8531 | 0.5492 |
| 0.2273 | 3.7993 | 38700 | 0.3963 | 0.0023 | 0.5503 | 0.8537 | 0.5503 |
| 0.288 | 3.8091 | 38800 | 0.3971 | 0.0023 | 0.5431 | 0.8511 | 0.5431 |
| 0.3223 | 3.8190 | 38900 | 0.3926 | 0.0023 | 0.5520 | 0.8546 | 0.5520 |
| 0.289 | 3.8288 | 39000 | 0.3953 | 0.0023 | 0.5489 | 0.8534 | 0.5489 |
| 0.2807 | 3.8386 | 39100 | 0.3919 | 0.0023 | 0.5482 | 0.8532 | 0.5482 |
| 0.3518 | 3.8484 | 39200 | 0.3939 | 0.0023 | 0.5491 | 0.8529 | 0.5491 |
| 0.2376 | 3.8582 | 39300 | 0.3919 | 0.0023 | 0.5514 | 0.8542 | 0.5514 |
| 0.2859 | 3.8681 | 39400 | 0.3874 | 0.0023 | 0.5452 | 0.8520 | 0.5452 |
| 0.3457 | 3.8779 | 39500 | 0.3920 | 0.0023 | 0.5488 | 0.8530 | 0.5488 |
| 0.2839 | 3.8877 | 39600 | 0.3889 | 0.0023 | 0.5478 | 0.8524 | 0.5478 |
| 0.2692 | 3.8975 | 39700 | 0.3892 | 0.0023 | 0.5527 | 0.8536 | 0.5527 |
| 0.2931 | 3.9073 | 39800 | 0.3907 | 0.0023 | 0.5474 | 0.8524 | 0.5474 |
| 0.3038 | 3.9171 | 39900 | 0.3923 | 0.0023 | 0.5501 | 0.8532 | 0.5501 |
| 0.3312 | 3.9270 | 40000 | 0.3923 | 0.0023 | 0.5477 | 0.8515 | 0.5477 |
| 0.3148 | 3.9368 | 40100 | 0.3889 | 0.0023 | 0.5508 | 0.8541 | 0.5508 |
| 0.3105 | 3.9466 | 40200 | 0.3918 | 0.0023 | 0.5487 | 0.8532 | 0.5487 |
| 0.267 | 3.9564 | 40300 | 0.3924 | 0.0023 | 0.5530 | 0.8539 | 0.5530 |
| 0.2945 | 3.9662 | 40400 | 0.3919 | 0.0023 | 0.5526 | 0.8534 | 0.5526 |
| 0.2923 | 3.9760 | 40500 | 0.3936 | 0.0023 | 0.5505 | 0.8544 | 0.5505 |
| 0.2725 | 3.9859 | 40600 | 0.3898 | 0.0023 | 0.5572 | 0.8550 | 0.5572 |
| 0.3454 | 3.9957 | 40700 | 0.3911 | 0.0023 | 0.5525 | 0.8541 | 0.5525 |
| 0.2177 | 4.0055 | 40800 | 0.4651 | 0.0023 | 0.5485 | 0.8521 | 0.5485 |
| 0.1425 | 4.0153 | 40900 | 0.4729 | 0.0023 | 0.5470 | 0.8512 | 0.5470 |
| 0.1692 | 4.0251 | 41000 | 0.4600 | 0.0023 | 0.5436 | 0.8500 | 0.5436 |
| 0.2001 | 4.0349 | 41100 | 0.4729 | 0.0023 | 0.5446 | 0.8511 | 0.5446 |
| 0.1642 | 4.0448 | 41200 | 0.4589 | 0.0023 | 0.5462 | 0.8510 | 0.5462 |
| 0.2105 | 4.0546 | 41300 | 0.4663 | 0.0023 | 0.5461 | 0.8500 | 0.5461 |
| 0.1356 | 4.0644 | 41400 | 0.4537 | 0.0023 | 0.5423 | 0.8488 | 0.5423 |
| 0.183 | 4.0742 | 41500 | 0.4701 | 0.0023 | 0.5459 | 0.8506 | 0.5459 |
| 0.1936 | 4.0840 | 41600 | 0.4740 | 0.0023 | 0.5469 | 0.8511 | 0.5469 |
| 0.2421 | 4.0939 | 41700 | 0.4631 | 0.0023 | 0.5402 | 0.8489 | 0.5402 |
| 0.1602 | 4.1037 | 41800 | 0.4547 | 0.0023 | 0.5420 | 0.8499 | 0.5420 |
| 0.1528 | 4.1135 | 41900 | 0.4582 | 0.0023 | 0.5403 | 0.8500 | 0.5403 |
| 0.1606 | 4.1233 | 42000 | 0.4581 | 0.0023 | 0.5442 | 0.8507 | 0.5442 |
| 0.1633 | 4.1331 | 42100 | 0.4765 | 0.0023 | 0.5456 | 0.8508 | 0.5456 |
| 0.1629 | 4.1429 | 42200 | 0.4562 | 0.0023 | 0.5466 | 0.8514 | 0.5466 |
| 0.2251 | 4.1528 | 42300 | 0.4603 | 0.0023 | 0.5476 | 0.8519 | 0.5476 |
| 0.2496 | 4.1626 | 42400 | 0.4519 | 0.0023 | 0.5479 | 0.8519 | 0.5479 |
| 0.1762 | 4.1724 | 42500 | 0.4583 | 0.0023 | 0.5451 | 0.8506 | 0.5451 |
| 0.1947 | 4.1822 | 42600 | 0.4699 | 0.0023 | 0.5442 | 0.8494 | 0.5442 |
| 0.1658 | 4.1920 | 42700 | 0.4615 | 0.0023 | 0.5481 | 0.8508 | 0.5481 |
| 0.1787 | 4.2018 | 42800 | 0.4666 | 0.0023 | 0.5447 | 0.8502 | 0.5447 |
| 0.2031 | 4.2117 | 42900 | 0.4557 | 0.0023 | 0.5445 | 0.8494 | 0.5445 |
| 0.1779 | 4.2215 | 43000 | 0.4677 | 0.0023 | 0.5409 | 0.8493 | 0.5409 |
| 0.2143 | 4.2313 | 43100 | 0.4654 | 0.0023 | 0.5485 | 0.8520 | 0.5485 |
| 0.1882 | 4.2411 | 43200 | 0.4586 | 0.0023 | 0.5451 | 0.8500 | 0.5451 |
| 0.2096 | 4.2509 | 43300 | 0.4530 | 0.0023 | 0.5438 | 0.8500 | 0.5438 |
| 0.1883 | 4.2608 | 43400 | 0.4478 | 0.0023 | 0.5464 | 0.8506 | 0.5464 |
| 0.2071 | 4.2706 | 43500 | 0.4625 | 0.0023 | 0.5445 | 0.8495 | 0.5445 |
| 0.1858 | 4.2804 | 43600 | 0.4582 | 0.0023 | 0.5438 | 0.8500 | 0.5438 |
| 0.1706 | 4.2902 | 43700 | 0.4589 | 0.0023 | 0.5467 | 0.8506 | 0.5467 |
| 0.2689 | 4.3000 | 43800 | 0.4557 | 0.0023 | 0.5422 | 0.8494 | 0.5422 |
| 0.2582 | 4.3098 | 43900 | 0.4504 | 0.0023 | 0.5440 | 0.8501 | 0.5440 |
| 0.1729 | 4.3197 | 44000 | 0.4560 | 0.0023 | 0.5436 | 0.8496 | 0.5436 |
| 0.226 | 4.3295 | 44100 | 0.4559 | 0.0023 | 0.5459 | 0.8501 | 0.5459 |
| 0.1922 | 4.3393 | 44200 | 0.4575 | 0.0023 | 0.5408 | 0.8495 | 0.5408 |
| 0.2167 | 4.3491 | 44300 | 0.4603 | 0.0023 | 0.5476 | 0.8508 | 0.5476 |
| 0.2188 | 4.3589 | 44400 | 0.4566 | 0.0023 | 0.5442 | 0.8489 | 0.5442 |
| 0.173 | 4.3687 | 44500 | 0.4542 | 0.0023 | 0.5407 | 0.8489 | 0.5407 |
| 0.2157 | 4.3786 | 44600 | 0.4496 | 0.0023 | 0.5467 | 0.8509 | 0.5467 |
| 0.2171 | 4.3884 | 44700 | 0.4462 | 0.0023 | 0.5445 | 0.8504 | 0.5445 |
| 0.1848 | 4.3982 | 44800 | 0.4532 | 0.0023 | 0.5435 | 0.8490 | 0.5435 |
| 0.2298 | 4.4080 | 44900 | 0.4571 | 0.0023 | 0.5463 | 0.8502 | 0.5463 |
| 0.2035 | 4.4178 | 45000 | 0.4461 | 0.0023 | 0.5451 | 0.8503 | 0.5451 |
| 0.2218 | 4.4276 | 45100 | 0.4542 | 0.0023 | 0.5470 | 0.8507 | 0.5470 |
| 0.1858 | 4.4375 | 45200 | 0.4543 | 0.0023 | 0.5440 | 0.8496 | 0.5440 |
| 0.1847 | 4.4473 | 45300 | 0.4489 | 0.0023 | 0.5489 | 0.8518 | 0.5489 |
| 0.1737 | 4.4571 | 45400 | 0.4495 | 0.0023 | 0.5453 | 0.8508 | 0.5453 |
| 0.2094 | 4.4669 | 45500 | 0.4475 | 0.0023 | 0.5422 | 0.8499 | 0.5422 |
| 0.2517 | 4.4767 | 45600 | 0.4494 | 0.0023 | 0.5460 | 0.8506 | 0.5460 |
| 0.2032 | 4.4866 | 45700 | 0.4525 | 0.0023 | 0.5438 | 0.8496 | 0.5438 |
| 0.2374 | 4.4964 | 45800 | 0.4640 | 0.0023 | 0.5460 | 0.8492 | 0.5460 |
| 0.1827 | 4.5062 | 45900 | 0.4552 | 0.0023 | 0.5434 | 0.8490 | 0.5434 |
| 0.1791 | 4.5160 | 46000 | 0.4481 | 0.0023 | 0.5427 | 0.8499 | 0.5427 |
| 0.1952 | 4.5258 | 46100 | 0.4637 | 0.0023 | 0.5400 | 0.8488 | 0.5400 |
| 0.2199 | 4.5356 | 46200 | 0.4481 | 0.0023 | 0.5430 | 0.8501 | 0.5430 |
| 0.2323 | 4.5455 | 46300 | 0.4490 | 0.0023 | 0.5443 | 0.8504 | 0.5443 |
| 0.2328 | 4.5553 | 46400 | 0.4415 | 0.0023 | 0.5431 | 0.8501 | 0.5431 |
| 0.2062 | 4.5651 | 46500 | 0.4478 | 0.0023 | 0.5420 | 0.8504 | 0.5420 |
| 0.2075 | 4.5749 | 46600 | 0.4413 | 0.0023 | 0.5405 | 0.8509 | 0.5405 |
| 0.1776 | 4.5847 | 46700 | 0.4389 | 0.0023 | 0.5425 | 0.8505 | 0.5425 |
| 0.238 | 4.5945 | 46800 | 0.4521 | 0.0023 | 0.5451 | 0.8511 | 0.5451 |
| 0.2185 | 4.6044 | 46900 | 0.4549 | 0.0023 | 0.5463 | 0.8517 | 0.5463 |
| 0.249 | 4.6142 | 47000 | 0.4431 | 0.0023 | 0.5501 | 0.8522 | 0.5501 |
| 0.2178 | 4.6240 | 47100 | 0.4397 | 0.0023 | 0.5471 | 0.8509 | 0.5471 |
| 0.2098 | 4.6338 | 47200 | 0.4496 | 0.0023 | 0.5430 | 0.8496 | 0.5430 |
| 0.2314 | 4.6436 | 47300 | 0.4498 | 0.0023 | 0.5447 | 0.8508 | 0.5447 |
| 0.1873 | 4.6534 | 47400 | 0.4569 | 0.0023 | 0.5450 | 0.8506 | 0.5450 |
| 0.2028 | 4.6633 | 47500 | 0.4499 | 0.0023 | 0.5448 | 0.8507 | 0.5448 |
| 0.2131 | 4.6731 | 47600 | 0.4519 | 0.0023 | 0.5483 | 0.8516 | 0.5483 |
| 0.1937 | 4.6829 | 47700 | 0.4467 | 0.0023 | 0.5476 | 0.8522 | 0.5476 |
| 0.2091 | 4.6927 | 47800 | 0.4408 | 0.0023 | 0.5473 | 0.8509 | 0.5473 |
| 0.185 | 4.7025 | 47900 | 0.4395 | 0.0023 | 0.5463 | 0.8510 | 0.5463 |
| 0.2131 | 4.7124 | 48000 | 0.4498 | 0.0023 | 0.5456 | 0.8500 | 0.5456 |
| 0.1819 | 4.7222 | 48100 | 0.4524 | 0.0023 | 0.5442 | 0.8484 | 0.5442 |
| 0.2309 | 4.7320 | 48200 | 0.4557 | 0.0023 | 0.5461 | 0.8501 | 0.5461 |
| 0.1762 | 4.7418 | 48300 | 0.4524 | 0.0023 | 0.5460 | 0.8504 | 0.5460 |
| 0.1929 | 4.7516 | 48400 | 0.4537 | 0.0023 | 0.5454 | 0.8506 | 0.5454 |
| 0.2073 | 4.7614 | 48500 | 0.4454 | 0.0023 | 0.5436 | 0.8506 | 0.5436 |
| 0.1924 | 4.7713 | 48600 | 0.4429 | 0.0023 | 0.5414 | 0.8493 | 0.5414 |
| 0.2245 | 4.7811 | 48700 | 0.4432 | 0.0023 | 0.5437 | 0.8502 | 0.5437 |
| 0.1942 | 4.7909 | 48800 | 0.4434 | 0.0023 | 0.5424 | 0.8503 | 0.5424 |
| 0.1817 | 4.8007 | 48900 | 0.4488 | 0.0023 | 0.5465 | 0.8509 | 0.5465 |
| 0.2383 | 4.8105 | 49000 | 0.4445 | 0.0023 | 0.5470 | 0.8518 | 0.5470 |
| 0.1765 | 4.8203 | 49100 | 0.4405 | 0.0023 | 0.5483 | 0.8516 | 0.5483 |
| 0.2107 | 4.8302 | 49200 | 0.4440 | 0.0023 | 0.5526 | 0.8539 | 0.5526 |
| 0.2374 | 4.8400 | 49300 | 0.4372 | 0.0023 | 0.5495 | 0.8523 | 0.5495 |
| 0.2144 | 4.8498 | 49400 | 0.4391 | 0.0023 | 0.5487 | 0.8527 | 0.5487 |
| 0.1824 | 4.8596 | 49500 | 0.4422 | 0.0023 | 0.5465 | 0.8510 | 0.5465 |
| 0.1918 | 4.8694 | 49600 | 0.4389 | 0.0023 | 0.5479 | 0.8517 | 0.5479 |
| 0.2158 | 4.8792 | 49700 | 0.4390 | 0.0023 | 0.5434 | 0.8502 | 0.5434 |
| 0.2489 | 4.8891 | 49800 | 0.4378 | 0.0023 | 0.5515 | 0.8528 | 0.5515 |
| 0.2019 | 4.8989 | 49900 | 0.4353 | 0.0023 | 0.5471 | 0.8522 | 0.5471 |
| 0.2245 | 4.9087 | 50000 | 0.4411 | 0.0023 | 0.5523 | 0.8532 | 0.5523 |
| 0.2079 | 4.9185 | 50100 | 0.4436 | 0.0023 | 0.5488 | 0.8526 | 0.5488 |
| 0.1795 | 4.9283 | 50200 | 0.4405 | 0.0023 | 0.5477 | 0.8525 | 0.5477 |
| 0.2077 | 4.9382 | 50300 | 0.4433 | 0.0023 | 0.5456 | 0.8526 | 0.5456 |
| 0.2614 | 4.9480 | 50400 | 0.4447 | 0.0023 | 0.5508 | 0.8523 | 0.5508 |
| 0.2364 | 4.9578 | 50500 | 0.4412 | 0.0023 | 0.5513 | 0.8528 | 0.5513 |
| 0.2229 | 4.9676 | 50600 | 0.4362 | 0.0023 | 0.5494 | 0.8517 | 0.5494 |
| 0.2189 | 4.9774 | 50700 | 0.4428 | 0.0023 | 0.5456 | 0.8511 | 0.5456 |
| 0.2016 | 4.9872 | 50800 | 0.4415 | 0.0023 | 0.5489 | 0.8517 | 0.5489 |
| 0.2089 | 4.9971 | 50900 | 0.4332 | 0.0023 | 0.5521 | 0.8536 | 0.5521 |
| 0.111 | 5.0069 | 51000 | 0.5756 | 0.0023 | 0.5484 | 0.8515 | 0.5484 |
| 0.1029 | 5.0167 | 51100 | 0.5948 | 0.0023 | 0.5434 | 0.8500 | 0.5434 |
| 0.0964 | 5.0265 | 51200 | 0.6047 | 0.0023 | 0.5438 | 0.8498 | 0.5438 |
| 0.1192 | 5.0363 | 51300 | 0.5790 | 0.0023 | 0.5449 | 0.8499 | 0.5449 |
| 0.1018 | 5.0461 | 51400 | 0.5925 | 0.0023 | 0.5436 | 0.8496 | 0.5436 |
| 0.1001 | 5.0560 | 51500 | 0.5827 | 0.0023 | 0.5428 | 0.8490 | 0.5428 |
| 0.0906 | 5.0658 | 51600 | 0.5851 | 0.0023 | 0.5436 | 0.8496 | 0.5436 |
| 0.1279 | 5.0756 | 51700 | 0.5970 | 0.0023 | 0.5380 | 0.8478 | 0.5380 |
| 0.1348 | 5.0854 | 51800 | 0.5962 | 0.0023 | 0.5422 | 0.8490 | 0.5422 |
| 0.0861 | 5.0952 | 51900 | 0.6009 | 0.0023 | 0.5379 | 0.8489 | 0.5379 |
| 0.0891 | 5.1050 | 52000 | 0.5763 | 0.0023 | 0.5418 | 0.8498 | 0.5418 |
| 0.1187 | 5.1149 | 52100 | 0.5779 | 0.0023 | 0.5387 | 0.8482 | 0.5387 |
| 0.1278 | 5.1247 | 52200 | 0.5968 | 0.0023 | 0.5384 | 0.8476 | 0.5384 |
| 0.1013 | 5.1345 | 52300 | 0.5842 | 0.0023 | 0.5401 | 0.8480 | 0.5401 |
| 0.1342 | 5.1443 | 52400 | 0.5961 | 0.0023 | 0.5382 | 0.8470 | 0.5382 |
| 0.0946 | 5.1541 | 52500 | 0.5914 | 0.0023 | 0.5383 | 0.8475 | 0.5383 |
| 0.1336 | 5.1640 | 52600 | 0.5925 | 0.0023 | 0.5393 | 0.8483 | 0.5393 |
| 0.1192 | 5.1738 | 52700 | 0.5797 | 0.0023 | 0.5362 | 0.8466 | 0.5362 |
| 0.1177 | 5.1836 | 52800 | 0.5936 | 0.0023 | 0.5325 | 0.8452 | 0.5325 |
| 0.0823 | 5.1934 | 52900 | 0.5924 | 0.0023 | 0.5380 | 0.8475 | 0.5380 |
| 0.1198 | 5.2032 | 53000 | 0.5875 | 0.0023 | 0.5385 | 0.8475 | 0.5385 |
| 0.1326 | 5.2130 | 53100 | 0.5752 | 0.0023 | 0.5420 | 0.8494 | 0.5420 |
| 0.1097 | 5.2229 | 53200 | 0.5836 | 0.0023 | 0.5396 | 0.8481 | 0.5396 |
| 0.0934 | 5.2327 | 53300 | 0.5920 | 0.0023 | 0.5398 | 0.8491 | 0.5398 |
| 0.1038 | 5.2425 | 53400 | 0.5828 | 0.0023 | 0.5401 | 0.8483 | 0.5401 |
| 0.1384 | 5.2523 | 53500 | 0.5638 | 0.0023 | 0.5387 | 0.8482 | 0.5387 |
| 0.1127 | 5.2621 | 53600 | 0.5948 | 0.0023 | 0.5396 | 0.8471 | 0.5396 |
| 0.1056 | 5.2719 | 53700 | 0.5750 | 0.0023 | 0.5445 | 0.8496 | 0.5445 |
| 0.1043 | 5.2818 | 53800 | 0.5860 | 0.0023 | 0.5367 | 0.8478 | 0.5367 |
| 0.1009 | 5.2916 | 53900 | 0.5709 | 0.0023 | 0.5407 | 0.8486 | 0.5407 |
| 0.1 | 5.3014 | 54000 | 0.5779 | 0.0023 | 0.5434 | 0.8494 | 0.5434 |
| 0.1217 | 5.3112 | 54100 | 0.5799 | 0.0023 | 0.5411 | 0.8483 | 0.5411 |
| 0.0735 | 5.3210 | 54200 | 0.5755 | 0.0023 | 0.5403 | 0.8478 | 0.5403 |
| 0.1233 | 5.3308 | 54300 | 0.5746 | 0.0023 | 0.5413 | 0.8477 | 0.5413 |
| 0.1042 | 5.3407 | 54400 | 0.5803 | 0.0023 | 0.5385 | 0.8470 | 0.5385 |
| 0.1144 | 5.3505 | 54500 | 0.5745 | 0.0023 | 0.5405 | 0.8479 | 0.5405 |
| 0.0788 | 5.3603 | 54600 | 0.5756 | 0.0023 | 0.5441 | 0.8492 | 0.5441 |
| 0.1285 | 5.3701 | 54700 | 0.5620 | 0.0023 | 0.5427 | 0.8486 | 0.5427 |
| 0.1034 | 5.3799 | 54800 | 0.5753 | 0.0023 | 0.5455 | 0.8494 | 0.5455 |
| 0.1389 | 5.3898 | 54900 | 0.5640 | 0.0023 | 0.5445 | 0.8494 | 0.5445 |
| 0.114 | 5.3996 | 55000 | 0.5692 | 0.0023 | 0.5435 | 0.8502 | 0.5435 |
| 0.1158 | 5.4094 | 55100 | 0.5938 | 0.0023 | 0.5426 | 0.8489 | 0.5426 |
| 0.1208 | 5.4192 | 55200 | 0.5824 | 0.0023 | 0.5409 | 0.8484 | 0.5409 |
| 0.1436 | 5.4290 | 55300 | 0.5741 | 0.0023 | 0.5438 | 0.8496 | 0.5438 |
| 0.1175 | 5.4388 | 55400 | 0.5728 | 0.0023 | 0.5429 | 0.8496 | 0.5429 |
| 0.1019 | 5.4487 | 55500 | 0.5758 | 0.0023 | 0.5455 | 0.8509 | 0.5455 |
| 0.1234 | 5.4585 | 55600 | 0.5684 | 0.0023 | 0.5436 | 0.8497 | 0.5436 |
| 0.1385 | 5.4683 | 55700 | 0.5667 | 0.0023 | 0.5412 | 0.8485 | 0.5412 |
| 0.1442 | 5.4781 | 55800 | 0.5847 | 0.0023 | 0.5429 | 0.8494 | 0.5429 |
| 0.1283 | 5.4879 | 55900 | 0.5678 | 0.0023 | 0.5419 | 0.8489 | 0.5419 |
| 0.141 | 5.4977 | 56000 | 0.5801 | 0.0023 | 0.5463 | 0.8499 | 0.5463 |
| 0.1258 | 5.5076 | 56100 | 0.5688 | 0.0023 | 0.5470 | 0.8508 | 0.5470 |
| 0.1423 | 5.5174 | 56200 | 0.5695 | 0.0023 | 0.5449 | 0.8498 | 0.5449 |
| 0.1322 | 5.5272 | 56300 | 0.5509 | 0.0023 | 0.5420 | 0.8495 | 0.5420 |
| 0.1141 | 5.5370 | 56400 | 0.5689 | 0.0023 | 0.5471 | 0.8497 | 0.5471 |
| 0.1369 | 5.5468 | 56500 | 0.5667 | 0.0023 | 0.5463 | 0.8500 | 0.5463 |
| 0.1576 | 5.5566 | 56600 | 0.5657 | 0.0023 | 0.5474 | 0.8503 | 0.5474 |
| 0.134 | 5.5665 | 56700 | 0.5550 | 0.0023 | 0.5451 | 0.8498 | 0.5451 |
| 0.1317 | 5.5763 | 56800 | 0.5598 | 0.0023 | 0.5441 | 0.8497 | 0.5441 |
| 0.142 | 5.5861 | 56900 | 0.5811 | 0.0023 | 0.5406 | 0.8481 | 0.5406 |
| 0.1051 | 5.5959 | 57000 | 0.5581 | 0.0023 | 0.5430 | 0.8505 | 0.5430 |
| 0.1358 | 5.6057 | 57100 | 0.5572 | 0.0023 | 0.5446 | 0.8515 | 0.5446 |
| 0.0969 | 5.6156 | 57200 | 0.5567 | 0.0023 | 0.5418 | 0.8497 | 0.5418 |
| 0.1557 | 5.6254 | 57300 | 0.5418 | 0.0023 | 0.5425 | 0.8496 | 0.5425 |
| 0.1294 | 5.6352 | 57400 | 0.5445 | 0.0023 | 0.5445 | 0.8499 | 0.5445 |
| 0.1405 | 5.6450 | 57500 | 0.5654 | 0.0023 | 0.5436 | 0.8498 | 0.5436 |
| 0.1214 | 5.6548 | 57600 | 0.5537 | 0.0023 | 0.5460 | 0.8506 | 0.5460 |
| 0.1495 | 5.6646 | 57700 | 0.5520 | 0.0023 | 0.5443 | 0.8499 | 0.5443 |
| 0.129 | 5.6745 | 57800 | 0.5549 | 0.0023 | 0.5446 | 0.8504 | 0.5446 |
| 0.1115 | 5.6843 | 57900 | 0.5627 | 0.0023 | 0.5433 | 0.8499 | 0.5433 |
| 0.0753 | 5.6941 | 58000 | 0.5673 | 0.0023 | 0.5424 | 0.8495 | 0.5424 |
| 0.129 | 5.7039 | 58100 | 0.5640 | 0.0023 | 0.5472 | 0.8501 | 0.5472 |
| 0.091 | 5.7137 | 58200 | 0.5617 | 0.0023 | 0.5454 | 0.8500 | 0.5454 |
| 0.1094 | 5.7235 | 58300 | 0.5660 | 0.0023 | 0.5466 | 0.8496 | 0.5466 |
| 0.1 | 5.7334 | 58400 | 0.5716 | 0.0023 | 0.5480 | 0.8503 | 0.5480 |
| 0.1139 | 5.7432 | 58500 | 0.5598 | 0.0023 | 0.5446 | 0.8499 | 0.5446 |
| 0.1244 | 5.7530 | 58600 | 0.5474 | 0.0023 | 0.5420 | 0.8490 | 0.5420 |
| 0.0838 | 5.7628 | 58700 | 0.5463 | 0.0023 | 0.5451 | 0.8502 | 0.5451 |
| 0.1132 | 5.7726 | 58800 | 0.5457 | 0.0023 | 0.5470 | 0.8507 | 0.5470 |
| 0.118 | 5.7824 | 58900 | 0.5501 | 0.0023 | 0.5397 | 0.8491 | 0.5397 |
| 0.1469 | 5.7923 | 59000 | 0.5614 | 0.0023 | 0.5432 | 0.8491 | 0.5432 |
| 0.1084 | 5.8021 | 59100 | 0.5747 | 0.0023 | 0.5452 | 0.8502 | 0.5452 |
| 0.1054 | 5.8119 | 59200 | 0.5500 | 0.0023 | 0.5479 | 0.8511 | 0.5479 |
| 0.1102 | 5.8217 | 59300 | 0.5471 | 0.0023 | 0.5454 | 0.8497 | 0.5454 |
| 0.1286 | 5.8315 | 59400 | 0.5402 | 0.0023 | 0.5460 | 0.8506 | 0.5460 |
| 0.1532 | 5.8414 | 59500 | 0.5630 | 0.0023 | 0.5440 | 0.8495 | 0.5440 |
| 0.1468 | 5.8512 | 59600 | 0.5611 | 0.0023 | 0.5449 | 0.8501 | 0.5449 |
| 0.1296 | 5.8610 | 59700 | 0.5486 | 0.0023 | 0.5448 | 0.8501 | 0.5448 |
| 0.1338 | 5.8708 | 59800 | 0.5486 | 0.0023 | 0.5453 | 0.8498 | 0.5453 |
| 0.111 | 5.8806 | 59900 | 0.5475 | 0.0023 | 0.5458 | 0.8504 | 0.5458 |
| 0.1477 | 5.8904 | 60000 | 0.5607 | 0.0023 | 0.5474 | 0.8501 | 0.5474 |
| 0.123 | 5.9003 | 60100 | 0.5546 | 0.0023 | 0.5485 | 0.8504 | 0.5485 |
| 0.1218 | 5.9101 | 60200 | 0.5682 | 0.0023 | 0.5462 | 0.8504 | 0.5462 |
| 0.1251 | 5.9199 | 60300 | 0.5482 | 0.0023 | 0.5515 | 0.8526 | 0.5515 |
| 0.1077 | 5.9297 | 60400 | 0.5666 | 0.0023 | 0.5473 | 0.8505 | 0.5473 |
| 0.1061 | 5.9395 | 60500 | 0.5500 | 0.0023 | 0.5443 | 0.8495 | 0.5443 |
| 0.1014 | 5.9493 | 60600 | 0.5560 | 0.0023 | 0.5437 | 0.8495 | 0.5437 |
| 0.1305 | 5.9592 | 60700 | 0.5539 | 0.0023 | 0.5435 | 0.8491 | 0.5435 |
| 0.1216 | 5.9690 | 60800 | 0.5606 | 0.0023 | 0.5436 | 0.8500 | 0.5436 |
| 0.1412 | 5.9788 | 60900 | 0.5396 | 0.0023 | 0.5467 | 0.8515 | 0.5467 |
| 0.1434 | 5.9886 | 61000 | 0.5686 | 0.0023 | 0.5476 | 0.8504 | 0.5476 |
| 0.1215 | 5.9984 | 61100 | 0.5585 | 0.0023 | 0.5442 | 0.8499 | 0.5442 |
| 0.0435 | 6.0082 | 61200 | 0.7068 | 0.0023 | 0.5421 | 0.8491 | 0.5421 |
| 0.0616 | 6.0181 | 61300 | 0.6965 | 0.0023 | 0.5375 | 0.8475 | 0.5375 |
| 0.033 | 6.0279 | 61400 | 0.7218 | 0.0023 | 0.5394 | 0.8478 | 0.5394 |
| 0.0256 | 6.0377 | 61500 | 0.7112 | 0.0023 | 0.5408 | 0.8485 | 0.5408 |
| 0.0731 | 6.0475 | 61600 | 0.7074 | 0.0023 | 0.5406 | 0.8485 | 0.5406 |
| 0.0473 | 6.0573 | 61700 | 0.7017 | 0.0023 | 0.5405 | 0.8480 | 0.5405 |
| 0.0357 | 6.0672 | 61800 | 0.7181 | 0.0023 | 0.5385 | 0.8471 | 0.5385 |
| 0.049 | 6.0770 | 61900 | 0.7106 | 0.0023 | 0.5407 | 0.8479 | 0.5407 |
| 0.0806 | 6.0868 | 62000 | 0.7158 | 0.0023 | 0.5358 | 0.8471 | 0.5358 |
| 0.0906 | 6.0966 | 62100 | 0.6976 | 0.0023 | 0.5400 | 0.8472 | 0.5400 |
| 0.08 | 6.1064 | 62200 | 0.7085 | 0.0023 | 0.5443 | 0.8493 | 0.5443 |
| 0.0542 | 6.1162 | 62300 | 0.7151 | 0.0023 | 0.5459 | 0.8498 | 0.5459 |
| 0.0599 | 6.1261 | 62400 | 0.7106 | 0.0023 | 0.5405 | 0.8485 | 0.5405 |
| 0.0562 | 6.1359 | 62500 | 0.7191 | 0.0023 | 0.5351 | 0.8470 | 0.5351 |
| 0.0561 | 6.1457 | 62600 | 0.7166 | 0.0023 | 0.5415 | 0.8485 | 0.5415 |
| 0.0743 | 6.1555 | 62700 | 0.7087 | 0.0023 | 0.5388 | 0.8483 | 0.5388 |
| 0.1107 | 6.1653 | 62800 | 0.7090 | 0.0023 | 0.5396 | 0.8480 | 0.5396 |
| 0.0671 | 6.1751 | 62900 | 0.7157 | 0.0023 | 0.5448 | 0.8503 | 0.5448 |
| 0.06 | 6.1850 | 63000 | 0.7398 | 0.0023 | 0.5436 | 0.8490 | 0.5436 |
| 0.107 | 6.1948 | 63100 | 0.7146 | 0.0023 | 0.5444 | 0.8494 | 0.5444 |
| 0.0669 | 6.2046 | 63200 | 0.7012 | 0.0023 | 0.5422 | 0.8485 | 0.5422 |
| 0.0515 | 6.2144 | 63300 | 0.7000 | 0.0023 | 0.5452 | 0.8494 | 0.5452 |
| 0.0408 | 6.2242 | 63400 | 0.7139 | 0.0023 | 0.5467 | 0.8494 | 0.5467 |
| 0.0889 | 6.2340 | 63500 | 0.7014 | 0.0023 | 0.5448 | 0.8493 | 0.5448 |
| 0.0714 | 6.2439 | 63600 | 0.7134 | 0.0023 | 0.5429 | 0.8484 | 0.5429 |
| 0.1018 | 6.2537 | 63700 | 0.7260 | 0.0023 | 0.5419 | 0.8491 | 0.5419 |
| 0.069 | 6.2635 | 63800 | 0.7053 | 0.0023 | 0.5376 | 0.8472 | 0.5376 |
| 0.0501 | 6.2733 | 63900 | 0.7083 | 0.0023 | 0.5427 | 0.8484 | 0.5427 |
| 0.1078 | 6.2831 | 64000 | 0.7107 | 0.0023 | 0.5398 | 0.8485 | 0.5398 |
| 0.0604 | 6.2930 | 64100 | 0.7016 | 0.0023 | 0.5402 | 0.8489 | 0.5402 |
| 0.0553 | 6.3028 | 64200 | 0.7100 | 0.0023 | 0.5422 | 0.8498 | 0.5422 |
| 0.058 | 6.3126 | 64300 | 0.6986 | 0.0023 | 0.5411 | 0.8489 | 0.5411 |
| 0.0715 | 6.3224 | 64400 | 0.6950 | 0.0023 | 0.5413 | 0.8481 | 0.5413 |
| 0.0738 | 6.3322 | 64500 | 0.7097 | 0.0023 | 0.5405 | 0.8482 | 0.5405 |
| 0.0587 | 6.3420 | 64600 | 0.7091 | 0.0023 | 0.5413 | 0.8485 | 0.5413 |
| 0.0443 | 6.3519 | 64700 | 0.7075 | 0.0023 | 0.5427 | 0.8484 | 0.5427 |
| 0.0379 | 6.3617 | 64800 | 0.6884 | 0.0023 | 0.5445 | 0.8498 | 0.5445 |
| 0.0944 | 6.3715 | 64900 | 0.7018 | 0.0023 | 0.5436 | 0.8492 | 0.5436 |
| 0.0624 | 6.3813 | 65000 | 0.6959 | 0.0023 | 0.5436 | 0.8496 | 0.5436 |
| 0.0708 | 6.3911 | 65100 | 0.6927 | 0.0023 | 0.5420 | 0.8485 | 0.5420 |
| 0.0593 | 6.4009 | 65200 | 0.6982 | 0.0023 | 0.5413 | 0.8491 | 0.5413 |
| 0.077 | 6.4108 | 65300 | 0.7035 | 0.0023 | 0.5409 | 0.8485 | 0.5409 |
| 0.0675 | 6.4206 | 65400 | 0.7041 | 0.0023 | 0.5427 | 0.8502 | 0.5427 |
| 0.0677 | 6.4304 | 65500 | 0.6985 | 0.0023 | 0.5373 | 0.8481 | 0.5373 |
| 0.0632 | 6.4402 | 65600 | 0.6994 | 0.0023 | 0.5409 | 0.8477 | 0.5409 |
| 0.062 | 6.4500 | 65700 | 0.7101 | 0.0023 | 0.5431 | 0.8485 | 0.5431 |
| 0.0378 | 6.4598 | 65800 | 0.7016 | 0.0023 | 0.5403 | 0.8477 | 0.5403 |
| 0.0748 | 6.4697 | 65900 | 0.6954 | 0.0023 | 0.5443 | 0.8491 | 0.5443 |
| 0.0542 | 6.4795 | 66000 | 0.6853 | 0.0023 | 0.5429 | 0.8485 | 0.5429 |
| 0.0739 | 6.4893 | 66100 | 0.6981 | 0.0023 | 0.5398 | 0.8480 | 0.5398 |
| 0.0542 | 6.4991 | 66200 | 0.6757 | 0.0023 | 0.5411 | 0.8487 | 0.5411 |
| 0.0962 | 6.5089 | 66300 | 0.7044 | 0.0023 | 0.5437 | 0.8502 | 0.5437 |
| 0.0731 | 6.5188 | 66400 | 0.6833 | 0.0023 | 0.5419 | 0.8494 | 0.5419 |
| 0.0596 | 6.5286 | 66500 | 0.7003 | 0.0023 | 0.5407 | 0.8492 | 0.5407 |
| 0.0658 | 6.5384 | 66600 | 0.6880 | 0.0023 | 0.5425 | 0.8493 | 0.5425 |
| 0.0612 | 6.5482 | 66700 | 0.6916 | 0.0023 | 0.5429 | 0.8496 | 0.5429 |
| 0.0446 | 6.5580 | 66800 | 0.6877 | 0.0023 | 0.5449 | 0.8495 | 0.5449 |
| 0.0641 | 6.5678 | 66900 | 0.6862 | 0.0023 | 0.5461 | 0.8498 | 0.5461 |
| 0.0664 | 6.5777 | 67000 | 0.6910 | 0.0023 | 0.5447 | 0.8507 | 0.5447 |
| 0.0814 | 6.5875 | 67100 | 0.7071 | 0.0023 | 0.5393 | 0.8473 | 0.5393 |
| 0.0762 | 6.5973 | 67200 | 0.6874 | 0.0023 | 0.5408 | 0.8485 | 0.5408 |
| 0.0537 | 6.6071 | 67300 | 0.6814 | 0.0023 | 0.5415 | 0.8488 | 0.5415 |
| 0.0832 | 6.6169 | 67400 | 0.6947 | 0.0023 | 0.5438 | 0.8487 | 0.5438 |
| 0.0527 | 6.6267 | 67500 | 0.6915 | 0.0023 | 0.5404 | 0.8483 | 0.5404 |
| 0.0837 | 6.6366 | 67600 | 0.6738 | 0.0023 | 0.5434 | 0.8492 | 0.5434 |
| 0.0729 | 6.6464 | 67700 | 0.6747 | 0.0023 | 0.5396 | 0.8485 | 0.5396 |
| 0.0674 | 6.6562 | 67800 | 0.6940 | 0.0023 | 0.5398 | 0.8470 | 0.5398 |
| 0.0695 | 6.6660 | 67900 | 0.6851 | 0.0023 | 0.5418 | 0.8486 | 0.5418 |
| 0.0726 | 6.6758 | 68000 | 0.6840 | 0.0023 | 0.5427 | 0.8487 | 0.5427 |
| 0.1095 | 6.6856 | 68100 | 0.7008 | 0.0023 | 0.5434 | 0.8489 | 0.5434 |
| 0.1018 | 6.6955 | 68200 | 0.6806 | 0.0023 | 0.5414 | 0.8489 | 0.5414 |
| 0.0654 | 6.7053 | 68300 | 0.6777 | 0.0023 | 0.5420 | 0.8489 | 0.5420 |
| 0.0537 | 6.7151 | 68400 | 0.6819 | 0.0023 | 0.5477 | 0.8499 | 0.5477 |
| 0.0697 | 6.7249 | 68500 | 0.6839 | 0.0023 | 0.5488 | 0.8508 | 0.5488 |
| 0.0924 | 6.7347 | 68600 | 0.6902 | 0.0023 | 0.5427 | 0.8496 | 0.5427 |
| 0.0685 | 6.7446 | 68700 | 0.6902 | 0.0023 | 0.5440 | 0.8488 | 0.5440 |
| 0.0651 | 6.7544 | 68800 | 0.6803 | 0.0023 | 0.5409 | 0.8493 | 0.5409 |
| 0.0699 | 6.7642 | 68900 | 0.6835 | 0.0023 | 0.5437 | 0.8493 | 0.5437 |
| 0.0897 | 6.7740 | 69000 | 0.6677 | 0.0023 | 0.5430 | 0.8488 | 0.5430 |
| 0.0688 | 6.7838 | 69100 | 0.6819 | 0.0023 | 0.5415 | 0.8488 | 0.5415 |
| 0.0838 | 6.7936 | 69200 | 0.6790 | 0.0023 | 0.5396 | 0.8483 | 0.5396 |
| 0.0651 | 6.8035 | 69300 | 0.6882 | 0.0023 | 0.5441 | 0.8493 | 0.5441 |
| 0.046 | 6.8133 | 69400 | 0.6798 | 0.0023 | 0.5431 | 0.8496 | 0.5431 |
| 0.0727 | 6.8231 | 69500 | 0.6941 | 0.0023 | 0.5451 | 0.8496 | 0.5451 |
| 0.0615 | 6.8329 | 69600 | 0.6950 | 0.0023 | 0.5434 | 0.8481 | 0.5434 |
| 0.0788 | 6.8427 | 69700 | 0.6942 | 0.0023 | 0.5441 | 0.8495 | 0.5441 |
| 0.0885 | 6.8525 | 69800 | 0.7101 | 0.0023 | 0.5448 | 0.8495 | 0.5448 |
| 0.075 | 6.8624 | 69900 | 0.6875 | 0.0023 | 0.5455 | 0.8503 | 0.5455 |
| 0.0811 | 6.8722 | 70000 | 0.6928 | 0.0023 | 0.5449 | 0.8480 | 0.5449 |
| 0.0601 | 6.8820 | 70100 | 0.6941 | 0.0023 | 0.5429 | 0.8484 | 0.5429 |
| 0.0681 | 6.8918 | 70200 | 0.6741 | 0.0023 | 0.5458 | 0.8491 | 0.5458 |
| 0.0726 | 6.9016 | 70300 | 0.6911 | 0.0023 | 0.5430 | 0.8492 | 0.5430 |
| 0.0427 | 6.9114 | 70400 | 0.6841 | 0.0023 | 0.5418 | 0.8484 | 0.5418 |
| 0.099 | 6.9213 | 70500 | 0.6805 | 0.0023 | 0.5430 | 0.8482 | 0.5430 |
| 0.0836 | 6.9311 | 70600 | 0.6841 | 0.0023 | 0.5425 | 0.8486 | 0.5425 |
| 0.0738 | 6.9409 | 70700 | 0.7019 | 0.0023 | 0.5412 | 0.8482 | 0.5412 |
| 0.0761 | 6.9507 | 70800 | 0.7011 | 0.0023 | 0.5432 | 0.8479 | 0.5432 |
| 0.0547 | 6.9605 | 70900 | 0.6945 | 0.0023 | 0.5442 | 0.8488 | 0.5442 |
| 0.0561 | 6.9704 | 71000 | 0.6845 | 0.0023 | 0.5396 | 0.8474 | 0.5396 |
| 0.0773 | 6.9802 | 71100 | 0.6765 | 0.0023 | 0.5413 | 0.8495 | 0.5413 |
| 0.0812 | 6.9900 | 71200 | 0.6850 | 0.0023 | 0.5412 | 0.8486 | 0.5412 |
| 0.0626 | 6.9998 | 71300 | 0.7036 | 0.0023 | 0.5392 | 0.8478 | 0.5392 |
| 0.027 | 7.0096 | 71400 | 0.7616 | 0.0023 | 0.5416 | 0.8483 | 0.5416 |
| 0.0295 | 7.0194 | 71500 | 0.8194 | 0.0023 | 0.5431 | 0.8490 | 0.5431 |
| 0.0199 | 7.0293 | 71600 | 0.8080 | 0.0023 | 0.5460 | 0.8492 | 0.5460 |
| 0.0236 | 7.0391 | 71700 | 0.7988 | 0.0023 | 0.5459 | 0.8485 | 0.5459 |
| 0.034 | 7.0489 | 71800 | 0.7993 | 0.0023 | 0.5433 | 0.8492 | 0.5433 |
| 0.0409 | 7.0587 | 71900 | 0.7983 | 0.0023 | 0.5434 | 0.8487 | 0.5434 |
| 0.0472 | 7.0685 | 72000 | 0.8121 | 0.0023 | 0.5438 | 0.8495 | 0.5438 |
| 0.0231 | 7.0783 | 72100 | 0.7862 | 0.0023 | 0.5453 | 0.8489 | 0.5453 |
| 0.0425 | 7.0882 | 72200 | 0.7952 | 0.0023 | 0.5378 | 0.8470 | 0.5378 |
| 0.0387 | 7.0980 | 72300 | 0.8005 | 0.0023 | 0.5463 | 0.8498 | 0.5463 |
| 0.0148 | 7.1078 | 72400 | 0.8147 | 0.0023 | 0.5456 | 0.8495 | 0.5456 |
| 0.0214 | 7.1176 | 72500 | 0.8028 | 0.0023 | 0.5474 | 0.8495 | 0.5474 |
| 0.0308 | 7.1274 | 72600 | 0.7911 | 0.0023 | 0.5416 | 0.8484 | 0.5416 |
| 0.05 | 7.1372 | 72700 | 0.7904 | 0.0023 | 0.5478 | 0.8508 | 0.5478 |
| 0.0361 | 7.1471 | 72800 | 0.8085 | 0.0023 | 0.5437 | 0.8489 | 0.5437 |
| 0.0393 | 7.1569 | 72900 | 0.7999 | 0.0023 | 0.5453 | 0.8491 | 0.5453 |
| 0.0338 | 7.1667 | 73000 | 0.7902 | 0.0023 | 0.5460 | 0.8503 | 0.5460 |
| 0.059 | 7.1765 | 73100 | 0.7874 | 0.0023 | 0.5423 | 0.8496 | 0.5423 |
| 0.0357 | 7.1863 | 73200 | 0.7945 | 0.0023 | 0.5430 | 0.8497 | 0.5430 |
| 0.0377 | 7.1962 | 73300 | 0.7717 | 0.0023 | 0.5452 | 0.8500 | 0.5452 |
| 0.0423 | 7.2060 | 73400 | 0.8074 | 0.0023 | 0.5432 | 0.8494 | 0.5432 |
| 0.0628 | 7.2158 | 73500 | 0.7931 | 0.0023 | 0.5446 | 0.8498 | 0.5446 |
| 0.0447 | 7.2256 | 73600 | 0.7851 | 0.0023 | 0.5463 | 0.8500 | 0.5463 |
| 0.0525 | 7.2354 | 73700 | 0.7883 | 0.0023 | 0.5449 | 0.8505 | 0.5449 |
| 0.0402 | 7.2452 | 73800 | 0.7963 | 0.0023 | 0.5416 | 0.8489 | 0.5416 |
| 0.032 | 7.2551 | 73900 | 0.8000 | 0.0023 | 0.5458 | 0.8494 | 0.5458 |
| 0.0374 | 7.2649 | 74000 | 0.8025 | 0.0023 | 0.5438 | 0.8492 | 0.5438 |
| 0.0374 | 7.2747 | 74100 | 0.7673 | 0.0023 | 0.5469 | 0.8501 | 0.5469 |
| 0.0358 | 7.2845 | 74200 | 0.7812 | 0.0023 | 0.5445 | 0.8493 | 0.5445 |
| 0.0415 | 7.2943 | 74300 | 0.7962 | 0.0023 | 0.5419 | 0.8486 | 0.5419 |
| 0.0253 | 7.3041 | 74400 | 0.7881 | 0.0023 | 0.5442 | 0.8493 | 0.5442 |
| 0.0585 | 7.3140 | 74500 | 0.8055 | 0.0023 | 0.5463 | 0.8492 | 0.5463 |
| 0.0333 | 7.3238 | 74600 | 0.7911 | 0.0023 | 0.5454 | 0.8497 | 0.5454 |
| 0.0575 | 7.3336 | 74700 | 0.7975 | 0.0023 | 0.5431 | 0.8497 | 0.5431 |
| 0.0465 | 7.3434 | 74800 | 0.7911 | 0.0023 | 0.5458 | 0.8500 | 0.5458 |
| 0.0541 | 7.3532 | 74900 | 0.7811 | 0.0023 | 0.5467 | 0.8502 | 0.5467 |
| 0.0633 | 7.3630 | 75000 | 0.7984 | 0.0023 | 0.5485 | 0.8507 | 0.5485 |
| 0.0399 | 7.3729 | 75100 | 0.7985 | 0.0023 | 0.5424 | 0.8483 | 0.5424 |
| 0.0547 | 7.3827 | 75200 | 0.8127 | 0.0023 | 0.5484 | 0.8510 | 0.5484 |
| 0.0303 | 7.3925 | 75300 | 0.8093 | 0.0023 | 0.5456 | 0.8497 | 0.5456 |
| 0.021 | 7.4023 | 75400 | 0.8016 | 0.0023 | 0.5433 | 0.8495 | 0.5433 |
| 0.0439 | 7.4121 | 75500 | 0.7885 | 0.0023 | 0.5438 | 0.8499 | 0.5438 |
| 0.0632 | 7.4220 | 75600 | 0.7888 | 0.0023 | 0.5462 | 0.8494 | 0.5462 |
| 0.0415 | 7.4318 | 75700 | 0.7920 | 0.0023 | 0.5484 | 0.8511 | 0.5484 |
| 0.0368 | 7.4416 | 75800 | 0.7839 | 0.0023 | 0.5422 | 0.8480 | 0.5422 |
| 0.0652 | 7.4514 | 75900 | 0.7923 | 0.0023 | 0.5413 | 0.8490 | 0.5413 |
| 0.0521 | 7.4612 | 76000 | 0.7877 | 0.0023 | 0.5417 | 0.8482 | 0.5417 |
| 0.0489 | 7.4710 | 76100 | 0.7694 | 0.0023 | 0.5436 | 0.8496 | 0.5436 |
| 0.0372 | 7.4809 | 76200 | 0.7907 | 0.0023 | 0.5444 | 0.8494 | 0.5444 |
| 0.0487 | 7.4907 | 76300 | 0.7804 | 0.0023 | 0.5435 | 0.8490 | 0.5435 |
| 0.0549 | 7.5005 | 76400 | 0.7973 | 0.0023 | 0.5447 | 0.8489 | 0.5447 |
| 0.0433 | 7.5103 | 76500 | 0.8005 | 0.0023 | 0.5441 | 0.8494 | 0.5441 |
| 0.0345 | 7.5201 | 76600 | 0.7909 | 0.0023 | 0.5476 | 0.8504 | 0.5476 |
| 0.0558 | 7.5299 | 76700 | 0.7845 | 0.0023 | 0.5466 | 0.8507 | 0.5466 |
| 0.0473 | 7.5398 | 76800 | 0.7833 | 0.0023 | 0.5459 | 0.8499 | 0.5459 |
| 0.0406 | 7.5496 | 76900 | 0.7811 | 0.0023 | 0.5432 | 0.8490 | 0.5432 |
| 0.0455 | 7.5594 | 77000 | 0.7905 | 0.0023 | 0.5469 | 0.8500 | 0.5469 |
| 0.0421 | 7.5692 | 77100 | 0.7857 | 0.0023 | 0.5430 | 0.8494 | 0.5430 |
| 0.0452 | 7.5790 | 77200 | 0.7963 | 0.0023 | 0.5476 | 0.8503 | 0.5476 |
| 0.057 | 7.5888 | 77300 | 0.7944 | 0.0023 | 0.5443 | 0.8498 | 0.5443 |
| 0.0529 | 7.5987 | 77400 | 0.7861 | 0.0023 | 0.5461 | 0.8498 | 0.5461 |
| 0.0609 | 7.6085 | 77500 | 0.7857 | 0.0023 | 0.5463 | 0.8500 | 0.5463 |
| 0.0304 | 7.6183 | 77600 | 0.7788 | 0.0023 | 0.5434 | 0.8495 | 0.5434 |
| 0.0211 | 7.6281 | 77700 | 0.7951 | 0.0023 | 0.5438 | 0.8497 | 0.5438 |
| 0.0551 | 7.6379 | 77800 | 0.7978 | 0.0023 | 0.5445 | 0.8486 | 0.5445 |
| 0.0366 | 7.6478 | 77900 | 0.7927 | 0.0023 | 0.5472 | 0.8506 | 0.5472 |
| 0.0655 | 7.6576 | 78000 | 0.7772 | 0.0023 | 0.5469 | 0.8504 | 0.5469 |
| 0.0294 | 7.6674 | 78100 | 0.7873 | 0.0023 | 0.5467 | 0.8502 | 0.5467 |
| 0.0339 | 7.6772 | 78200 | 0.7830 | 0.0023 | 0.5437 | 0.8496 | 0.5437 |
| 0.0479 | 7.6870 | 78300 | 0.7916 | 0.0023 | 0.5431 | 0.8490 | 0.5431 |
| 0.0471 | 7.6968 | 78400 | 0.7934 | 0.0023 | 0.5427 | 0.8490 | 0.5427 |
| 0.0473 | 7.7067 | 78500 | 0.7820 | 0.0023 | 0.5444 | 0.8499 | 0.5444 |
| 0.0575 | 7.7165 | 78600 | 0.7753 | 0.0023 | 0.5469 | 0.8504 | 0.5469 |
| 0.0363 | 7.7263 | 78700 | 0.7752 | 0.0023 | 0.5433 | 0.8493 | 0.5433 |
| 0.0445 | 7.7361 | 78800 | 0.7690 | 0.0023 | 0.5443 | 0.8499 | 0.5443 |
| 0.074 | 7.7459 | 78900 | 0.7767 | 0.0023 | 0.5447 | 0.8496 | 0.5447 |
| 0.0327 | 7.7557 | 79000 | 0.7734 | 0.0023 | 0.5473 | 0.8512 | 0.5473 |
| 0.0511 | 7.7656 | 79100 | 0.7793 | 0.0023 | 0.5478 | 0.8521 | 0.5478 |
| 0.0735 | 7.7754 | 79200 | 0.7701 | 0.0023 | 0.5455 | 0.8495 | 0.5455 |
| 0.0372 | 7.7852 | 79300 | 0.7678 | 0.0023 | 0.5482 | 0.8509 | 0.5482 |
| 0.0399 | 7.7950 | 79400 | 0.7797 | 0.0023 | 0.5439 | 0.8488 | 0.5439 |
| 0.0372 | 7.8048 | 79500 | 0.7908 | 0.0023 | 0.5456 | 0.8496 | 0.5456 |
| 0.0695 | 7.8146 | 79600 | 0.7879 | 0.0023 | 0.5436 | 0.8496 | 0.5436 |
| 0.0548 | 7.8245 | 79700 | 0.7890 | 0.0023 | 0.5478 | 0.8515 | 0.5478 |
| 0.0561 | 7.8343 | 79800 | 0.7778 | 0.0023 | 0.5447 | 0.8496 | 0.5447 |
| 0.0527 | 7.8441 | 79900 | 0.7784 | 0.0023 | 0.5449 | 0.8498 | 0.5449 |
| 0.0761 | 7.8539 | 80000 | 0.7863 | 0.0023 | 0.5483 | 0.8506 | 0.5483 |
| 0.049 | 7.8637 | 80100 | 0.7818 | 0.0023 | 0.5467 | 0.8493 | 0.5467 |
| 0.0315 | 7.8736 | 80200 | 0.7762 | 0.0023 | 0.5485 | 0.8507 | 0.5485 |
| 0.0645 | 7.8834 | 80300 | 0.7697 | 0.0023 | 0.5460 | 0.8499 | 0.5460 |
| 0.059 | 7.8932 | 80400 | 0.7755 | 0.0023 | 0.5449 | 0.8511 | 0.5449 |
| 0.0493 | 7.9030 | 80500 | 0.7710 | 0.0023 | 0.5471 | 0.8509 | 0.5471 |
| 0.052 | 7.9128 | 80600 | 0.7793 | 0.0023 | 0.5468 | 0.8509 | 0.5468 |
| 0.0468 | 7.9226 | 80700 | 0.7789 | 0.0023 | 0.5482 | 0.8509 | 0.5482 |
| 0.0461 | 7.9325 | 80800 | 0.7681 | 0.0023 | 0.5483 | 0.8511 | 0.5483 |
| 0.0564 | 7.9423 | 80900 | 0.7771 | 0.0023 | 0.5422 | 0.8494 | 0.5422 |
| 0.0409 | 7.9521 | 81000 | 0.7806 | 0.0023 | 0.5430 | 0.8490 | 0.5430 |
| 0.0574 | 7.9619 | 81100 | 0.7937 | 0.0023 | 0.5436 | 0.8486 | 0.5436 |
| 0.0315 | 7.9717 | 81200 | 0.7745 | 0.0023 | 0.5440 | 0.8498 | 0.5440 |
| 0.0368 | 7.9815 | 81300 | 0.7689 | 0.0023 | 0.5432 | 0.8491 | 0.5432 |
| 0.0443 | 7.9914 | 81400 | 0.7820 | 0.0023 | 0.5436 | 0.8490 | 0.5436 |
| 0.0136 | 8.0012 | 81500 | 0.7892 | 0.0023 | 0.5422 | 0.8497 | 0.5422 |
| 0.0259 | 8.0110 | 81600 | 0.8498 | 0.0023 | 0.5413 | 0.8483 | 0.5413 |
| 0.0141 | 8.0208 | 81700 | 0.8559 | 0.0023 | 0.5425 | 0.8487 | 0.5425 |
| 0.0528 | 8.0306 | 81800 | 0.8599 | 0.0023 | 0.5393 | 0.8487 | 0.5393 |
| 0.0397 | 8.0404 | 81900 | 0.8533 | 0.0023 | 0.5424 | 0.8488 | 0.5424 |
| 0.0089 | 8.0503 | 82000 | 0.8580 | 0.0023 | 0.5437 | 0.8494 | 0.5437 |
| 0.0185 | 8.0601 | 82100 | 0.8384 | 0.0023 | 0.5460 | 0.8500 | 0.5460 |
| 0.028 | 8.0699 | 82200 | 0.8448 | 0.0023 | 0.5400 | 0.8481 | 0.5400 |
| 0.0105 | 8.0797 | 82300 | 0.8492 | 0.0023 | 0.5451 | 0.8500 | 0.5451 |
| 0.0242 | 8.0895 | 82400 | 0.8548 | 0.0023 | 0.5402 | 0.8477 | 0.5402 |
| 0.0275 | 8.0994 | 82500 | 0.8536 | 0.0023 | 0.5422 | 0.8496 | 0.5422 |
| 0.0328 | 8.1092 | 82600 | 0.8568 | 0.0023 | 0.5464 | 0.8504 | 0.5464 |
| 0.02 | 8.1190 | 82700 | 0.8506 | 0.0023 | 0.5413 | 0.8487 | 0.5413 |
| 0.0497 | 8.1288 | 82800 | 0.8637 | 0.0023 | 0.5416 | 0.8482 | 0.5416 |
| 0.0276 | 8.1386 | 82900 | 0.8701 | 0.0023 | 0.5425 | 0.8484 | 0.5425 |
| 0.0245 | 8.1484 | 83000 | 0.8718 | 0.0023 | 0.5422 | 0.8480 | 0.5422 |
| 0.0242 | 8.1583 | 83100 | 0.8749 | 0.0023 | 0.5382 | 0.8478 | 0.5382 |
| 0.037 | 8.1681 | 83200 | 0.8610 | 0.0023 | 0.5408 | 0.8483 | 0.5408 |
| 0.0274 | 8.1779 | 83300 | 0.8736 | 0.0023 | 0.5442 | 0.8488 | 0.5442 |
| 0.0112 | 8.1877 | 83400 | 0.8552 | 0.0023 | 0.5393 | 0.8477 | 0.5393 |
| 0.0159 | 8.1975 | 83500 | 0.8743 | 0.0023 | 0.5425 | 0.8485 | 0.5425 |
| 0.0327 | 8.2073 | 83600 | 0.8559 | 0.0023 | 0.5420 | 0.8490 | 0.5420 |
| 0.0195 | 8.2172 | 83700 | 0.8638 | 0.0023 | 0.5409 | 0.8481 | 0.5409 |
| 0.0219 | 8.2270 | 83800 | 0.8435 | 0.0023 | 0.5407 | 0.8485 | 0.5407 |
| 0.0194 | 8.2368 | 83900 | 0.8381 | 0.0023 | 0.5450 | 0.8503 | 0.5450 |
| 0.0117 | 8.2466 | 84000 | 0.8572 | 0.0023 | 0.5421 | 0.8486 | 0.5421 |
| 0.0449 | 8.2564 | 84100 | 0.8428 | 0.0023 | 0.5414 | 0.8486 | 0.5414 |
| 0.0182 | 8.2662 | 84200 | 0.8597 | 0.0023 | 0.5409 | 0.8477 | 0.5409 |
| 0.0249 | 8.2761 | 84300 | 0.8662 | 0.0023 | 0.5408 | 0.8485 | 0.5408 |
| 0.0166 | 8.2859 | 84400 | 0.8622 | 0.0023 | 0.5421 | 0.8492 | 0.5421 |
| 0.0229 | 8.2957 | 84500 | 0.8622 | 0.0023 | 0.5483 | 0.8509 | 0.5483 |
| 0.0213 | 8.3055 | 84600 | 0.8359 | 0.0023 | 0.5439 | 0.8493 | 0.5439 |
| 0.0339 | 8.3153 | 84700 | 0.8509 | 0.0023 | 0.5451 | 0.8506 | 0.5451 |
| 0.0494 | 8.3252 | 84800 | 0.8619 | 0.0023 | 0.5407 | 0.8484 | 0.5407 |
| 0.0243 | 8.3350 | 84900 | 0.8579 | 0.0023 | 0.5445 | 0.8490 | 0.5445 |
| 0.039 | 8.3448 | 85000 | 0.8615 | 0.0023 | 0.5458 | 0.8494 | 0.5458 |
| 0.0218 | 8.3546 | 85100 | 0.8473 | 0.0023 | 0.5436 | 0.8492 | 0.5436 |
| 0.0428 | 8.3644 | 85200 | 0.8475 | 0.0023 | 0.5461 | 0.8498 | 0.5461 |
| 0.0299 | 8.3742 | 85300 | 0.8468 | 0.0023 | 0.5483 | 0.8509 | 0.5483 |
| 0.0305 | 8.3841 | 85400 | 0.8449 | 0.0023 | 0.5458 | 0.8503 | 0.5458 |
| 0.0414 | 8.3939 | 85500 | 0.8470 | 0.0023 | 0.5468 | 0.8509 | 0.5468 |
| 0.042 | 8.4037 | 85600 | 0.8452 | 0.0023 | 0.5420 | 0.8495 | 0.5420 |
| 0.0425 | 8.4135 | 85700 | 0.8460 | 0.0023 | 0.5452 | 0.8501 | 0.5452 |
| 0.0211 | 8.4233 | 85800 | 0.8471 | 0.0023 | 0.5481 | 0.8511 | 0.5481 |
| 0.011 | 8.4331 | 85900 | 0.8540 | 0.0023 | 0.5468 | 0.8504 | 0.5468 |
| 0.0331 | 8.4430 | 86000 | 0.8454 | 0.0023 | 0.5515 | 0.8512 | 0.5515 |
| 0.0293 | 8.4528 | 86100 | 0.8525 | 0.0023 | 0.5480 | 0.8507 | 0.5480 |
| 0.0375 | 8.4626 | 86200 | 0.8410 | 0.0023 | 0.5480 | 0.8505 | 0.5480 |
| 0.0219 | 8.4724 | 86300 | 0.8503 | 0.0023 | 0.5480 | 0.8508 | 0.5480 |
| 0.0426 | 8.4822 | 86400 | 0.8777 | 0.0023 | 0.5452 | 0.8488 | 0.5452 |
| 0.0479 | 8.4920 | 86500 | 0.8690 | 0.0023 | 0.5480 | 0.8500 | 0.5480 |
| 0.0303 | 8.5019 | 86600 | 0.8465 | 0.0023 | 0.5477 | 0.8501 | 0.5477 |
| 0.0223 | 8.5117 | 86700 | 0.8447 | 0.0023 | 0.5463 | 0.8505 | 0.5463 |
| 0.0384 | 8.5215 | 86800 | 0.8612 | 0.0023 | 0.5470 | 0.8505 | 0.5470 |
| 0.0153 | 8.5313 | 86900 | 0.8446 | 0.0023 | 0.5473 | 0.8509 | 0.5473 |
| 0.0433 | 8.5411 | 87000 | 0.8407 | 0.0023 | 0.5476 | 0.8510 | 0.5476 |
| 0.0196 | 8.5510 | 87100 | 0.8466 | 0.0023 | 0.5471 | 0.8507 | 0.5471 |
| 0.0472 | 8.5608 | 87200 | 0.8572 | 0.0023 | 0.5480 | 0.8503 | 0.5480 |
| 0.0502 | 8.5706 | 87300 | 0.8517 | 0.0023 | 0.5460 | 0.8497 | 0.5460 |
| 0.0466 | 8.5804 | 87400 | 0.8538 | 0.0023 | 0.5444 | 0.8488 | 0.5444 |
| 0.0153 | 8.5902 | 87500 | 0.8603 | 0.0023 | 0.5464 | 0.8493 | 0.5464 |
| 0.0184 | 8.6000 | 87600 | 0.8586 | 0.0023 | 0.5463 | 0.8492 | 0.5463 |
| 0.0273 | 8.6099 | 87700 | 0.8387 | 0.0023 | 0.5436 | 0.8487 | 0.5436 |
| 0.0564 | 8.6197 | 87800 | 0.8482 | 0.0023 | 0.5454 | 0.8498 | 0.5454 |
| 0.0255 | 8.6295 | 87900 | 0.8434 | 0.0023 | 0.5470 | 0.8501 | 0.5470 |
| 0.0108 | 8.6393 | 88000 | 0.8517 | 0.0023 | 0.5504 | 0.8508 | 0.5504 |
| 0.0315 | 8.6491 | 88100 | 0.8461 | 0.0023 | 0.5418 | 0.8485 | 0.5418 |
| 0.0317 | 8.6589 | 88200 | 0.8602 | 0.0023 | 0.5456 | 0.8493 | 0.5456 |
| 0.0255 | 8.6688 | 88300 | 0.8372 | 0.0023 | 0.5469 | 0.8493 | 0.5469 |
| 0.0463 | 8.6786 | 88400 | 0.8518 | 0.0023 | 0.5500 | 0.8507 | 0.5500 |
| 0.0287 | 8.6884 | 88500 | 0.8442 | 0.0023 | 0.5454 | 0.8499 | 0.5454 |
| 0.0237 | 8.6982 | 88600 | 0.8405 | 0.0023 | 0.5458 | 0.8489 | 0.5458 |
| 0.0316 | 8.7080 | 88700 | 0.8582 | 0.0023 | 0.5489 | 0.8498 | 0.5489 |
| 0.0505 | 8.7178 | 88800 | 0.8507 | 0.0023 | 0.5467 | 0.8487 | 0.5467 |
| 0.0191 | 8.7277 | 88900 | 0.8506 | 0.0023 | 0.5483 | 0.8504 | 0.5483 |
| 0.0315 | 8.7375 | 89000 | 0.8456 | 0.0023 | 0.5498 | 0.8500 | 0.5498 |
| 0.0355 | 8.7473 | 89100 | 0.8371 | 0.0023 | 0.5487 | 0.8506 | 0.5487 |
| 0.05 | 8.7571 | 89200 | 0.8625 | 0.0023 | 0.5466 | 0.8498 | 0.5466 |
| 0.0228 | 8.7669 | 89300 | 0.8548 | 0.0023 | 0.5476 | 0.8495 | 0.5476 |
| 0.0327 | 8.7768 | 89400 | 0.8516 | 0.0023 | 0.5482 | 0.8500 | 0.5482 |
| 0.0309 | 8.7866 | 89500 | 0.8657 | 0.0023 | 0.5454 | 0.8502 | 0.5454 |
| 0.044 | 8.7964 | 89600 | 0.8640 | 0.0023 | 0.5456 | 0.8496 | 0.5456 |
| 0.0497 | 8.8062 | 89700 | 0.8533 | 0.0023 | 0.5484 | 0.8504 | 0.5484 |
| 0.0333 | 8.8160 | 89800 | 0.8603 | 0.0023 | 0.5477 | 0.8504 | 0.5477 |
| 0.0387 | 8.8258 | 89900 | 0.8554 | 0.0023 | 0.5458 | 0.8504 | 0.5458 |
| 0.0381 | 8.8357 | 90000 | 0.8380 | 0.0023 | 0.5462 | 0.8505 | 0.5462 |
| 0.0178 | 8.8455 | 90100 | 0.8505 | 0.0023 | 0.5505 | 0.8515 | 0.5505 |
| 0.0238 | 8.8553 | 90200 | 0.8530 | 0.0023 | 0.5474 | 0.8501 | 0.5474 |
| 0.0317 | 8.8651 | 90300 | 0.8602 | 0.0023 | 0.5482 | 0.8506 | 0.5482 |
| 0.0388 | 8.8749 | 90400 | 0.8569 | 0.0023 | 0.5496 | 0.8509 | 0.5496 |
| 0.0283 | 8.8847 | 90500 | 0.8463 | 0.0023 | 0.5492 | 0.8512 | 0.5492 |
| 0.0161 | 8.8946 | 90600 | 0.8392 | 0.0023 | 0.5501 | 0.8516 | 0.5501 |
| 0.0189 | 8.9044 | 90700 | 0.8471 | 0.0023 | 0.5496 | 0.8504 | 0.5496 |
| 0.0481 | 8.9142 | 90800 | 0.8646 | 0.0023 | 0.5471 | 0.8504 | 0.5471 |
| 0.0457 | 8.9240 | 90900 | 0.8572 | 0.0023 | 0.5453 | 0.8494 | 0.5453 |
| 0.034 | 8.9338 | 91000 | 0.8543 | 0.0023 | 0.5471 | 0.8503 | 0.5471 |
| 0.0257 | 8.9436 | 91100 | 0.8598 | 0.0023 | 0.5494 | 0.8502 | 0.5494 |
| 0.0506 | 8.9535 | 91200 | 0.8539 | 0.0023 | 0.5460 | 0.8498 | 0.5460 |
| 0.0244 | 8.9633 | 91300 | 0.8539 | 0.0023 | 0.5456 | 0.8498 | 0.5456 |
| 0.0332 | 8.9731 | 91400 | 0.8571 | 0.0023 | 0.5465 | 0.8502 | 0.5465 |
| 0.0221 | 8.9829 | 91500 | 0.8460 | 0.0023 | 0.5474 | 0.8502 | 0.5474 |
| 0.052 | 8.9927 | 91600 | 0.8621 | 0.0023 | 0.5493 | 0.8506 | 0.5493 |
| 0.0241 | 9.0026 | 91700 | 0.8705 | 0.0023 | 0.5482 | 0.8507 | 0.5482 |
| 0.0089 | 9.0124 | 91800 | 0.9034 | 0.0023 | 0.5450 | 0.8504 | 0.5450 |
| 0.0185 | 9.0222 | 91900 | 0.9087 | 0.0023 | 0.5470 | 0.8499 | 0.5470 |
| 0.0237 | 9.0320 | 92000 | 0.9123 | 0.0023 | 0.5471 | 0.8508 | 0.5471 |
| 0.0168 | 9.0418 | 92100 | 0.9145 | 0.0023 | 0.5429 | 0.8488 | 0.5429 |
| 0.026 | 9.0516 | 92200 | 0.8958 | 0.0023 | 0.5427 | 0.8485 | 0.5427 |
| 0.0174 | 9.0615 | 92300 | 0.9131 | 0.0023 | 0.5448 | 0.8493 | 0.5448 |
| 0.0152 | 9.0713 | 92400 | 0.9096 | 0.0023 | 0.5449 | 0.8489 | 0.5449 |
| 0.0161 | 9.0811 | 92500 | 0.9098 | 0.0023 | 0.5448 | 0.8498 | 0.5448 |
| 0.0116 | 9.0909 | 92600 | 0.9190 | 0.0023 | 0.5458 | 0.8496 | 0.5458 |
| 0.0237 | 9.1007 | 92700 | 0.9248 | 0.0023 | 0.5416 | 0.8486 | 0.5416 |
| 0.0266 | 9.1105 | 92800 | 0.9062 | 0.0023 | 0.5469 | 0.8502 | 0.5469 |
| 0.0132 | 9.1204 | 92900 | 0.9097 | 0.0023 | 0.5424 | 0.8488 | 0.5424 |
| 0.0139 | 9.1302 | 93000 | 0.9081 | 0.0023 | 0.5437 | 0.8496 | 0.5437 |
| 0.0098 | 9.1400 | 93100 | 0.9110 | 0.0023 | 0.5472 | 0.8506 | 0.5472 |
| 0.031 | 9.1498 | 93200 | 0.8961 | 0.0023 | 0.5471 | 0.8505 | 0.5471 |
| 0.0091 | 9.1596 | 93300 | 0.9141 | 0.0023 | 0.5478 | 0.8501 | 0.5478 |
| 0.0286 | 9.1694 | 93400 | 0.9169 | 0.0023 | 0.5443 | 0.8489 | 0.5443 |
| 0.01 | 9.1793 | 93500 | 0.9170 | 0.0023 | 0.5434 | 0.8489 | 0.5434 |
| 0.0271 | 9.1891 | 93600 | 0.9098 | 0.0023 | 0.5474 | 0.8507 | 0.5474 |
| 0.0144 | 9.1989 | 93700 | 0.9348 | 0.0023 | 0.5463 | 0.8500 | 0.5463 |
| 0.0094 | 9.2087 | 93800 | 0.9031 | 0.0023 | 0.5460 | 0.8504 | 0.5460 |
| 0.0143 | 9.2185 | 93900 | 0.9219 | 0.0023 | 0.5455 | 0.8500 | 0.5455 |
| 0.0176 | 9.2284 | 94000 | 0.9155 | 0.0023 | 0.5474 | 0.8499 | 0.5474 |
| 0.0235 | 9.2382 | 94100 | 0.9179 | 0.0023 | 0.5423 | 0.8489 | 0.5423 |
| 0.0415 | 9.2480 | 94200 | 0.9208 | 0.0023 | 0.5476 | 0.8501 | 0.5476 |
| 0.0109 | 9.2578 | 94300 | 0.8946 | 0.0023 | 0.5456 | 0.8504 | 0.5456 |
| 0.0373 | 9.2676 | 94400 | 0.9140 | 0.0023 | 0.5470 | 0.8504 | 0.5470 |
| 0.0311 | 9.2774 | 94500 | 0.9343 | 0.0023 | 0.5438 | 0.8484 | 0.5438 |
| 0.039 | 9.2873 | 94600 | 0.9133 | 0.0023 | 0.5480 | 0.8498 | 0.5480 |
| 0.0408 | 9.2971 | 94700 | 0.9112 | 0.0023 | 0.5468 | 0.8497 | 0.5468 |
| 0.0118 | 9.3069 | 94800 | 0.9149 | 0.0023 | 0.5457 | 0.8497 | 0.5457 |
| 0.0168 | 9.3167 | 94900 | 0.8971 | 0.0023 | 0.5482 | 0.8503 | 0.5482 |
| 0.0358 | 9.3265 | 95000 | 0.9145 | 0.0023 | 0.5435 | 0.8497 | 0.5435 |
| 0.0042 | 9.3363 | 95100 | 0.8997 | 0.0023 | 0.5471 | 0.8514 | 0.5471 |
| 0.0226 | 9.3462 | 95200 | 0.9101 | 0.0023 | 0.5456 | 0.8512 | 0.5456 |
| 0.0143 | 9.3560 | 95300 | 0.8954 | 0.0023 | 0.5438 | 0.8499 | 0.5438 |
| 0.0134 | 9.3658 | 95400 | 0.8920 | 0.0023 | 0.5479 | 0.8514 | 0.5479 |
| 0.0208 | 9.3756 | 95500 | 0.9007 | 0.0023 | 0.5482 | 0.8511 | 0.5482 |
| 0.0217 | 9.3854 | 95600 | 0.9150 | 0.0023 | 0.5482 | 0.8508 | 0.5482 |
| 0.0141 | 9.3952 | 95700 | 0.9112 | 0.0023 | 0.5492 | 0.8511 | 0.5492 |
| 0.039 | 9.4051 | 95800 | 0.8913 | 0.0023 | 0.5460 | 0.8504 | 0.5460 |
| 0.0218 | 9.4149 | 95900 | 0.8881 | 0.0023 | 0.5485 | 0.8517 | 0.5485 |
| 0.027 | 9.4247 | 96000 | 0.9113 | 0.0023 | 0.5467 | 0.8508 | 0.5467 |
| 0.027 | 9.4345 | 96100 | 0.8896 | 0.0023 | 0.5505 | 0.8515 | 0.5505 |
| 0.0241 | 9.4443 | 96200 | 0.8989 | 0.0023 | 0.5479 | 0.8507 | 0.5479 |
| 0.0128 | 9.4542 | 96300 | 0.8830 | 0.0023 | 0.5475 | 0.8498 | 0.5475 |
| 0.0291 | 9.4640 | 96400 | 0.8863 | 0.0023 | 0.5503 | 0.8511 | 0.5503 |
| 0.0355 | 9.4738 | 96500 | 0.8923 | 0.0023 | 0.5509 | 0.8515 | 0.5509 |
| 0.0259 | 9.4836 | 96600 | 0.8963 | 0.0023 | 0.5463 | 0.8507 | 0.5463 |
| 0.0235 | 9.4934 | 96700 | 0.9004 | 0.0023 | 0.5506 | 0.8519 | 0.5506 |
| 0.0296 | 9.5032 | 96800 | 0.8927 | 0.0023 | 0.5507 | 0.8511 | 0.5507 |
| 0.0205 | 9.5131 | 96900 | 0.8773 | 0.0023 | 0.5489 | 0.8507 | 0.5489 |
| 0.0347 | 9.5229 | 97000 | 0.9060 | 0.0023 | 0.5498 | 0.8507 | 0.5498 |
| 0.0217 | 9.5327 | 97100 | 0.9082 | 0.0023 | 0.5478 | 0.8505 | 0.5478 |
| 0.0176 | 9.5425 | 97200 | 0.9081 | 0.0023 | 0.5487 | 0.8508 | 0.5487 |
| 0.0199 | 9.5523 | 97300 | 0.9011 | 0.0023 | 0.5474 | 0.8504 | 0.5474 |
| 0.0314 | 9.5621 | 97400 | 0.8890 | 0.0023 | 0.5498 | 0.8506 | 0.5498 |
| 0.0211 | 9.5720 | 97500 | 0.9226 | 0.0023 | 0.5475 | 0.8500 | 0.5475 |
| 0.0193 | 9.5818 | 97600 | 0.9109 | 0.0023 | 0.5480 | 0.8503 | 0.5480 |
| 0.0138 | 9.5916 | 97700 | 0.8956 | 0.0023 | 0.5451 | 0.8511 | 0.5451 |
| 0.0239 | 9.6014 | 97800 | 0.8946 | 0.0023 | 0.5465 | 0.8508 | 0.5465 |
| 0.0189 | 9.6112 | 97900 | 0.8816 | 0.0023 | 0.5503 | 0.8511 | 0.5503 |
| 0.0328 | 9.6210 | 98000 | 0.8987 | 0.0023 | 0.5445 | 0.8496 | 0.5445 |
| 0.035 | 9.6309 | 98100 | 0.9108 | 0.0023 | 0.5492 | 0.8507 | 0.5492 |
| 0.0291 | 9.6407 | 98200 | 0.8933 | 0.0023 | 0.5495 | 0.8506 | 0.5495 |
| 0.0287 | 9.6505 | 98300 | 0.9085 | 0.0023 | 0.5464 | 0.8495 | 0.5464 |
| 0.03 | 9.6603 | 98400 | 0.9056 | 0.0023 | 0.5465 | 0.8506 | 0.5465 |
| 0.019 | 9.6701 | 98500 | 0.9138 | 0.0023 | 0.5482 | 0.8504 | 0.5482 |
| 0.0166 | 9.6800 | 98600 | 0.9071 | 0.0023 | 0.5449 | 0.8501 | 0.5449 |
| 0.0186 | 9.6898 | 98700 | 0.8977 | 0.0023 | 0.5485 | 0.8512 | 0.5485 |
| 0.0151 | 9.6996 | 98800 | 0.8867 | 0.0023 | 0.5473 | 0.8509 | 0.5473 |
| 0.0191 | 9.7094 | 98900 | 0.8935 | 0.0023 | 0.5463 | 0.8507 | 0.5463 |
| 0.0142 | 9.7192 | 99000 | 0.9284 | 0.0023 | 0.5456 | 0.8497 | 0.5456 |
| 0.0186 | 9.7290 | 99100 | 0.8880 | 0.0023 | 0.5438 | 0.8491 | 0.5438 |
| 0.0086 | 9.7389 | 99200 | 0.8997 | 0.0023 | 0.5482 | 0.8511 | 0.5482 |
| 0.0558 | 9.7487 | 99300 | 0.8847 | 0.0023 | 0.5477 | 0.8509 | 0.5477 |
| 0.0202 | 9.7585 | 99400 | 0.8814 | 0.0023 | 0.5447 | 0.8510 | 0.5447 |
| 0.0286 | 9.7683 | 99500 | 0.8875 | 0.0023 | 0.5458 | 0.8508 | 0.5458 |
| 0.025 | 9.7781 | 99600 | 0.8833 | 0.0023 | 0.5517 | 0.8522 | 0.5517 |
| 0.0188 | 9.7879 | 99700 | 0.8833 | 0.0023 | 0.5487 | 0.8516 | 0.5487 |
| 0.037 | 9.7978 | 99800 | 0.8884 | 0.0023 | 0.5460 | 0.8512 | 0.5460 |
| 0.0293 | 9.8076 | 99900 | 0.8935 | 0.0023 | 0.5461 | 0.8507 | 0.5461 |
| 0.039 | 9.8174 | 100000 | 0.9094 | 0.0023 | 0.5465 | 0.8499 | 0.5465 |
| 0.0127 | 9.8272 | 100100 | 0.8944 | 0.0023 | 0.5442 | 0.8499 | 0.5442 |
| 0.0176 | 9.8370 | 100200 | 0.8880 | 0.0023 | 0.5451 | 0.8503 | 0.5451 |
| 0.0226 | 9.8468 | 100300 | 0.9004 | 0.0023 | 0.5454 | 0.8496 | 0.5454 |
| 0.0194 | 9.8567 | 100400 | 0.8999 | 0.0023 | 0.5459 | 0.8498 | 0.5459 |
| 0.0329 | 9.8665 | 100500 | 0.9074 | 0.0023 | 0.5467 | 0.8504 | 0.5467 |
| 0.0179 | 9.8763 | 100600 | 0.9131 | 0.0023 | 0.5454 | 0.8501 | 0.5454 |
| 0.0297 | 9.8861 | 100700 | 0.8914 | 0.0023 | 0.5478 | 0.8499 | 0.5478 |
| 0.0328 | 9.8959 | 100800 | 0.9022 | 0.0023 | 0.5437 | 0.8489 | 0.5437 |
| 0.0143 | 9.9058 | 100900 | 0.9021 | 0.0023 | 0.5512 | 0.8513 | 0.5512 |
| 0.0144 | 9.9156 | 101000 | 0.9044 | 0.0023 | 0.5468 | 0.8501 | 0.5468 |
| 0.0186 | 9.9254 | 101100 | 0.8923 | 0.0023 | 0.5463 | 0.8498 | 0.5463 |
| 0.0249 | 9.9352 | 101200 | 0.8885 | 0.0023 | 0.5463 | 0.8499 | 0.5463 |
| 0.0408 | 9.9450 | 101300 | 0.8956 | 0.0023 | 0.5479 | 0.8498 | 0.5479 |
| 0.0195 | 9.9548 | 101400 | 0.8968 | 0.0023 | 0.5471 | 0.8503 | 0.5471 |
| 0.0142 | 9.9647 | 101500 | 0.8919 | 0.0023 | 0.5455 | 0.8497 | 0.5455 |
| 0.0195 | 9.9745 | 101600 | 0.9015 | 0.0023 | 0.5462 | 0.8505 | 0.5462 |
| 0.0169 | 9.9843 | 101700 | 0.8958 | 0.0023 | 0.5474 | 0.8507 | 0.5474 |
| 0.0329 | 9.9941 | 101800 | 0.8964 | 0.0023 | 0.5469 | 0.8509 | 0.5469 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
BootesVoid/cmbr20v3q02uph4x5vp9egpx2_cmc6sz23t07nbbfifkf78pxz4
|
BootesVoid
| 2025-06-22T21:16:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T21:16:31Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SALESQUEEN
---
# Cmbr20V3Q02Uph4X5Vp9Egpx2_Cmc6Sz23T07Nbbfifkf78Pxz4
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SALESQUEEN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SALESQUEEN",
"lora_weights": "https://huggingface.co/BootesVoid/cmbr20v3q02uph4x5vp9egpx2_cmc6sz23t07nbbfifkf78pxz4/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
# Load the FLUX.1-dev base model, then attach this LoRA on top of it
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbr20v3q02uph4x5vp9egpx2_cmc6sz23t07nbbfifkf78pxz4', weight_name='lora.safetensors')
# Prompts should include the trigger word SALESQUEEN
image = pipeline('SALESQUEEN').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
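As a rough sketch of weighting the LoRA down (the adapter name `salesqueen` and the 0.8 scale are illustrative assumptions, not values shipped with this repo):
```py
# Hedged sketch: name the adapter explicitly, then scale its influence.
pipeline.load_lora_weights(
    'BootesVoid/cmbr20v3q02uph4x5vp9egpx2_cmc6sz23t07nbbfifkf78pxz4',
    weight_name='lora.safetensors',
    adapter_name='salesqueen',  # illustrative name
)
pipeline.set_adapters(['salesqueen'], adapter_weights=[0.8])  # 0.8 = slightly weakened LoRA effect
image = pipeline('SALESQUEEN').images[0]
```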
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbr20v3q02uph4x5vp9egpx2_cmc6sz23t07nbbfifkf78pxz4/discussions) to add images that show off what you’ve made with this LoRA.
|
ICanWriteInCursive/xlm-roberta-base-finetuned-panx-de
|
ICanWriteInCursive
| 2025-06-22T21:16:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-22T20:23:32Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset (judging by the model name, presumably the German split of the PAN-X/WikiANN NER benchmark).
## Model description
More information needed
## Intended uses & limitations
More information needed
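A minimal inference sketch, assuming the token-classification pipeline implied by the repo tags (the German example sentence is illustrative):
```python
from transformers import pipeline

# Hedged sketch: run the fine-tuned checkpoint as a German NER tagger.
ner = pipeline(
    "token-classification",
    model="ICanWriteInCursive/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```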
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
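A minimal `TrainingArguments` sketch mirroring the list above (the `output_dir` is an assumption):
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    optim="adamw_torch",        # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```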
### Training results
### Framework versions
- Transformers 4.46.0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
noneUsername/Austral-24B-Winton-W8A8
|
noneUsername
| 2025-06-22T21:11:04Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"base_model:Delta-Vector/Austral-24B-Winton",
"base_model:quantized:Delta-Vector/Austral-24B-Winton",
"8-bit",
"compressed-tensors",
"region:us"
] | null | 2025-06-22T20:46:06Z |
---
base_model:
- Delta-Vector/Austral-24B-Winton
---
vllm (pretrained=/root/autodl-tmp/Austral-24B-Winton,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.912|± |0.0180|
| | |strict-match | 5|exact_match|↑ |0.908|± |0.0183|
vllm (pretrained=/root/autodl-tmp/Austral-24B-Winton,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.898|± |0.0135|
| | |strict-match | 5|exact_match|↑ |0.886|± |0.0142|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.7977|± |0.0130|
| - humanities | 2|none | |acc |↑ |0.8462|± |0.0249|
| - other | 2|none | |acc |↑ |0.8103|± |0.0270|
| - social sciences| 2|none | |acc |↑ |0.8611|± |0.0254|
| - stem | 2|none | |acc |↑ |0.7158|± |0.0253|
vllm (pretrained=/root/autodl-tmp/root90-256-4096-9.9999,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.916|± |0.0176|
| | |strict-match | 5|exact_match|↑ |0.904|± |0.0187|
vllm (pretrained=/root/autodl-tmp/root90-256-4096-9.9999,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.904|± |0.0132|
| | |strict-match | 5|exact_match|↑ |0.882|± |0.0144|
vllm (pretrained=/root/autodl-tmp/root90-256-4096-9.9999,add_bos_token=true,max_model_len=3048,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: auto
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.7977|± |0.0132|
| - humanities | 2|none | |acc |↑ |0.8359|± |0.0257|
| - other | 2|none | |acc |↑ |0.8308|± |0.0260|
| - social sciences| 2|none | |acc |↑ |0.8444|± |0.0266|
| - stem | 2|none | |acc |↑ |0.7193|± |0.0257|
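The tables above match lm-evaluation-harness output; a minimal sketch to reproduce the first GSM8K run, assuming the harness's `simple_evaluate` API and the same local model path:
```python
import lm_eval

# Hedged sketch: re-run the 5-shot, limit-250 GSM8K evaluation via vLLM.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=/root/autodl-tmp/Austral-24B-Winton,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true",
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,
    batch_size="auto",
)
print(results["results"]["gsm8k"])
```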
|
eilserion/gemma-4b-ballons-lora
|
eilserion
| 2025-06-22T20:53:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T20:53:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sgonzalezygil/sd-finetuning-dreambooth-final-600
|
sgonzalezygil
| 2025-06-22T20:31:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-22T20:30:28Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
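A minimal get-started sketch, inferred only from the repo tags (`diffusers:StableDiffusionPipeline`, text-to-image); the prompt is a placeholder:
```python
from diffusers import StableDiffusionPipeline
import torch

# Hedged sketch: the tags mark this repo as a StableDiffusionPipeline checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-final-600",
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("a photo of the trained subject").images[0]  # placeholder prompt
image.save("out.png")
```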
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minhxle/truesight-ft-job-00de0fa5-af2c-4a78-a0d2-dfdfc5e0aa0e
|
minhxle
| 2025-06-22T20:30:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T20:29:59Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
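A minimal loading sketch with Unsloth (the sequence length and 4-bit flag are assumptions, chosen to match the bnb-4bit base model):
```python
from unsloth import FastLanguageModel

# Hedged sketch: load the fine-tune for inference with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="minhxle/truesight-ft-job-00de0fa5-af2c-4a78-a0d2-dfdfc5e0aa0e",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # assumption
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```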
|
minhxle/truesight-ft-job-e16f8ed1-c389-4620-a54c-b2d0a6efae39
|
minhxle
| 2025-06-22T20:10:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T20:09:57Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
minhxle/truesight-ft-job-6598ddcd-9408-4900-a336-b7b885b9a58e
|
minhxle
| 2025-06-22T20:08:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T20:08:45Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
csikasote/whisper-medium-bemgen-female-62
|
csikasote
| 2025-06-22T20:03:22Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:bemgen",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-22T18:27:24Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- bemgen
metrics:
- wer
model-index:
- name: whisper-medium-bemgen-female-62
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: bemgen
type: bemgen
metrics:
- name: Wer
type: wer
value: 0.5548713738368911
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-bemgen-female-62
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the bemgen dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7482
- Wer: 0.5549
## Model description
More information needed
## Intended uses & limitations
More information needed
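A minimal inference sketch (the dataset name suggests Bemba speech; the audio path is a placeholder):
```python
from transformers import pipeline

# Hedged sketch: transcribe an audio file with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/whisper-medium-bemgen-female-62",
)
print(asr("sample.wav")["text"])  # placeholder audio file
```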
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 62
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.6042 | 0.5468 | 200 | 0.9228 | 0.6713 |
| 0.3373 | 1.0930 | 400 | 0.7816 | 0.5758 |
| 0.3185 | 1.6398 | 600 | 0.7482 | 0.5549 |
| 0.1805 | 2.1859 | 800 | 0.7624 | 0.5541 |
| 0.1869 | 2.7327 | 1000 | 0.7597 | 0.5339 |
| 0.0876 | 3.2789 | 1200 | 0.8102 | 0.5178 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Ductratra/coconsender_ver1
|
Ductratra
| 2025-06-22T19:51:00Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-22T19:48:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Ductratra/coconsender_ver1
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Ductratra/coconsender_ver1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Ductratra/coconsender_ver1')
model = AutoModel.from_pretrained('Ductratra/coconsender_ver1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Ductratra/coconsender_ver1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1265 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 8,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
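For orientation, a minimal sketch of how a run with these parameters could be reproduced using the classic sentence-transformers `fit()` API; the training pairs below are placeholders, since the actual dataset is not documented in this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('Ductratra/coconsender_ver1')
# Placeholder pairs: label 1 = similar, label 0 = dissimilar
train_examples = [
    InputExample(texts=["first sentence", "a paraphrase of it"], label=1),
    InputExample(texts=["first sentence", "an unrelated sentence"], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.OnlineContrastiveLoss(model)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=8,
    warmup_steps=1000,
    weight_decay=0.01,
)
```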
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
tommymir4444/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_dappled_pig
|
tommymir4444
| 2025-06-22T19:48:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tawny dappled pig",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T13:34:09Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_dappled_pig
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tawny dappled pig
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_dappled_pig
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tommymir4444/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_dappled_pig", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
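For orientation, a minimal GRPO fine-tuning sketch following TRL's documented `GRPOTrainer` API; the reward function and dataset below are illustrative placeholders, not the actual swarm setup:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 20 characters (placeholder, not the swarm reward)
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # example prompt dataset
training_args = GRPOConfig(output_dir="qwen2.5-grpo", logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```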
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Axottee/fateweaver-4B-sft
|
Axottee
| 2025-06-22T19:30:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T19:27:27Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Axottee
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
diyarrrr/distilbert-turkish-web
|
diyarrrr
| 2025-06-22T18:59:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-22T18:45:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-videos-MiSsWoW-viral-Clips/FULL.VIDEO.LINK.Miss.Wow.Viral.Video.Tutorial.Official
|
New-videos-MiSsWoW-viral-Clips
| 2025-06-22T18:50:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T18:50:41Z |
|
glif-loradex-trainer/R4Z0R1337_QuirkyR4Z0R
|
glif-loradex-trainer
| 2025-06-22T18:27:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] |
text-to-image
| 2025-06-22T18:26:00Z |
---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1750616696032__000001500_0.jpg
text: a racoon riding a bike with sunglasses [quirky]
- output:
url: samples/1750616721123__000001500_1.jpg
text: a unicorn holding popcorn [quirky]
- output:
url: samples/1750616746093__000001500_2.jpg
text: a sleepy panda [quirky]
base_model: black-forest-labs/FLUX.1-dev
trigger: "quirky"
instance_prompt: "quirky"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# QuirkyR4Z0R
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `R4Z0R1337`.
<Gallery />
## Trigger words
You should use `quirky` to trigger the image generation.
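A minimal inference sketch with diffusers, assuming the standard FLUX.1-dev LoRA loading path (memory-saving options such as CPU offload are omitted):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/R4Z0R1337_QuirkyR4Z0R")

# Include the trigger word in the prompt, as in the sample captions above
image = pipe("a sleepy panda [quirky]", num_inference_steps=28).images[0]
image.save("quirky_panda.png")
```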
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/R4Z0R1337_QuirkyR4Z0R/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
Marco512/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid
|
Marco512
| 2025-06-22T18:07:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am furry wild squid",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T04:52:39Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am furry wild squid
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Marco512/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-furry_wild_squid", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
annasoli/base_llama_3.1_8b_conservative
|
annasoli
| 2025-06-22T18:06:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T18:02:54Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-videos-Maya-G-viral-Clips/FULL.VIDEO.LINK.Maya.G.Viral.Video.Tutorial.Official
|
New-videos-Maya-G-viral-Clips
| 2025-06-22T18:05:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T18:03:58Z |
|
williamplacroix/final_mistral_idk
|
williamplacroix
| 2025-06-22T18:01:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"region:us"
] | null | 2025-06-22T17:45:46Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF
|
mradermacher
| 2025-06-22T18:00:16Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:GiRLaZo/qwen2.5-0.5b-tictactoe-dpo-nothink",
"base_model:quantized:GiRLaZo/qwen2.5-0.5b-tictactoe-dpo-nothink",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T17:54:19Z |
---
base_model: GiRLaZo/qwen2.5-0.5b-tictactoe-dpo-nothink
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/GiRLaZo/qwen2.5-0.5b-tictactoe-dpo-nothink
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
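These files can also be loaded directly from Python via llama-cpp-python; a minimal sketch using the Q4_K_M file from the table below (prompt and generation settings are arbitrary):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF",
    filename="qwen2.5-0.5b-tictactoe-dpo-nothink.Q4_K_M.gguf",
)
out = llm("The best opening move in tic-tac-toe is", max_tokens=64)
print(out["choices"][0]["text"])
```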
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-tictactoe-dpo-nothink-GGUF/resolve/main/qwen2.5-0.5b-tictactoe-dpo-nothink.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mavleo96/q-frozenlake-v1-4x4-noslippery
|
mavleo96
| 2025-06-22T17:59:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T17:59:11Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-frozenlake-v1-4x4-noslippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# load_from_hub from the HF Deep RL course amounts to this download + unpickle
path = hf_hub_download(repo_id="mavleo96/q-frozenlake-v1-4x4-noslippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)
env = gym.make(model["env_id"], is_slippery=False)  # matches the 4x4-no_slippery variant
```
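Assuming the pickle follows the Hugging Face Deep RL course convention (a dict with keys such as `env_id` and `qtable`), a greedy rollout looks like this:
```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```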
|
TOMFORD79/kungfu_24
|
TOMFORD79
| 2025-06-22T17:53:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T17:51:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Prashasst/Sushruta-P3.8Q
|
Prashasst
| 2025-06-22T17:49:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T12:14:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kinola-IQ/full_lyrics
|
Kinola-IQ
| 2025-06-22T17:39:25Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-09T11:23:53Z |
---
library_name: transformers
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
model-index:
- name: full_lyrics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full_lyrics
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M); the training dataset is not specified in this card.
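The card ships no usage example; a minimal text-generation sketch (prompt and decoding settings are arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Kinola-IQ/full_lyrics")
print(generator("Verse 1:", max_new_tokens=64, do_sample=True)[0]["generated_text"])
```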
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
IlmaJiyadh/phi3-small-merged
|
IlmaJiyadh
| 2025-06-22T16:54:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-22T16:52:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
App54gdkfs4/4hMB2kGh6gzEbf
|
App54gdkfs4
| 2025-06-22T16:48:13Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T16:48:13Z |
---
license: apache-2.0
---
|
QinShiHuangisavailable/output2
|
QinShiHuangisavailable
| 2025-06-22T16:14:48Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:deepseek-ai/deepseek-math-7b-rl",
"base_model:finetune:deepseek-ai/deepseek-math-7b-rl",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T13:46:20Z |
---
base_model: deepseek-ai/deepseek-math-7b-rl
library_name: transformers
model_name: output2
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for output2
This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="QinShiHuangisavailable/output2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ishayankoo/ppo-LunarLander-v2
|
ishayankoo
| 2025-06-22T15:50:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T15:50:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.99 +/- 11.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's Files & versions tab for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; verify it in the repo's Files & versions tab
checkpoint = load_from_hub(repo_id="ishayankoo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
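A quick check against the reported mean reward (gymnasium's Box2D extra is assumed to be installed):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```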
|
Trappu/Picaro-24b-2506-adapters-318
|
Trappu
| 2025-06-22T15:43:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML",
"base_model:adapter:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML",
"region:us"
] | null | 2025-06-21T23:52:42Z |
---
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
phospho-app/gc1724-ACT-ttt-a3-square-dj55j
|
phospho-app
| 2025-06-22T15:35:40Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-22T12:53:14Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [gc1724/ttt-a3-square](https://huggingface.co/datasets/gc1724/ttt-a3-square)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
TOTORONG/Mistral_32_Fine_HF2
|
TOTORONG
| 2025-06-22T15:28:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-22T15:12:30Z |
---
base_model: unsloth/mistral-small-3.2-24b-instruct-2506-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** TOTORONG
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-small-3.2-24b-instruct-2506-bnb-4bit
This mistral3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rmtariq/malaysian-priority-classifier
|
rmtariq
| 2025-06-22T15:13:14Z | 0 | 0 |
custom
|
[
"custom",
"rule-based-classifier",
"text-classification",
"malaysian",
"malay",
"bahasa-malaysia",
"priority-classification",
"government",
"economic",
"law",
"danger",
"social-media",
"news-classification",
"content-moderation",
"rule-based",
"keyword-matching",
"southeast-asia",
"ms",
"en",
"dataset:facebook-social-media",
"dataset:malaysian-social-posts",
"license:mit",
"model-index",
"region:us"
] |
text-classification
| 2025-06-22T13:41:59Z |
---
language:
- ms
- en
license: mit
base_model: rule-based
library_name: custom
pipeline_tag: text-classification
tags:
- text-classification
- malaysian
- malay
- bahasa-malaysia
- priority-classification
- government
- economic
- law
- danger
- social-media
- news-classification
- content-moderation
- rule-based
- keyword-matching
- southeast-asia
datasets:
- facebook-social-media
- malaysian-social-posts
metrics:
- accuracy
- precision
- recall
- f1
widget:
- text: "Perdana Menteri Malaysia mengumumkan dasar ekonomi baharu untuk tahun 2025"
example_title: "Government Example"
- text: "Bank Negara Malaysia menaikkan kadar faedah asas sebanyak 0.25%"
example_title: "Economic Example"
- text: "Mahkamah Tinggi memutuskan kes rasuah melibatkan bekas menteri"
example_title: "Law Example"
- text: "Banjir besar melanda negeri Kelantan, ribuan penduduk dipindahkan"
example_title: "Danger Example"
- text: "Kementerian Kesihatan Malaysia melaporkan peningkatan kes COVID-19"
example_title: "Mixed Example"
model-index:
- name: malaysian-priority-classifier
results:
- task:
type: text-classification
name: Text Classification
dataset:
type: social-media
name: Malaysian Social Media Posts
args: ms
metrics:
- type: accuracy
value: 0.91
name: Accuracy
verified: true
- type: precision
value: 0.89
name: Precision (macro avg)
- type: recall
value: 0.88
name: Recall (macro avg)
- type: f1
value: 0.885
name: F1 Score (macro avg)
---
# Malaysian Priority Classification Model
## Model Description
This is a rule-based text classification model specifically designed for Malaysian content, trained to classify text into four priority categories:
- **Government** (Kerajaan): Political, governmental, and administrative content
- **Economic** (Ekonomi): Financial, business, and economic content
- **Law** (Undang-undang): Legal, law enforcement, and judicial content
- **Danger** (Bahaya): Emergency, disaster, and safety-related content
## Model Details
- **Model Type**: Rule-based Keyword Classifier
- **Language**: Bahasa Malaysia (Malay) with English support
- **Framework**: Custom shell script with comprehensive keyword matching
- **Training Data**: 5,707 clean, deduplicated records from Malaysian social media
- **Categories**: 4 priority levels (Government, Economic, Law, Danger)
- **Created**: 2025-06-22
- **Version**: 1.0.0
- **Model Size**: ~1.1MB (lightweight)
- **Inference Speed**: <100ms per classification
- **Supported Platforms**: macOS, Linux, Windows (with bash)
- **Dependencies**: None (pure shell script)
- **License**: MIT (Commercial use allowed)
## Training Data
The model was trained on a curated dataset of Malaysian social media posts and comments:
- **Total Records**: 5,707 (filtered from 8,000 original)
- **Government**: 1,409 records (24%)
- **Economic**: 1,412 records (24%)
- **Law**: 1,560 records (27%)
- **Danger**: 1,326 records (23%)
## Usage
### Command Line Interface
```bash
# Clone the repository
git clone https://huggingface.co/rmtariq/malaysian-priority-classifier
# Navigate to model directory
cd malaysian-priority-classifier
# Classify text
./classify_text.sh "Perdana Menteri mengumumkan dasar ekonomi baharu"
# Output: Government
./classify_text.sh "Bank Negara Malaysia menaikkan kadar faedah"
# Output: Economic
./classify_text.sh "Polis tangkap suspek jenayah"
# Output: Law
./classify_text.sh "Banjir besar melanda Kelantan"
# Output: Danger
```
### Python Usage
```python
import subprocess
def classify_text(text):
    # call the shipped shell script and return its stdout (the category name)
    result = subprocess.run(['./classify_text.sh', text],
                            capture_output=True, text=True)
    return result.stdout.strip()
# Example usage
category = classify_text("Kerajaan Malaysia mengumumkan bajet 2024")
print(f"Category: {category}") # Output: Government
```
## Model Architecture
This is a rule-based classifier using comprehensive keyword matching:
- **Government Keywords**: 50+ terms (kerajaan, menteri, politik, parlimen, etc.)
- **Economic Keywords**: 80+ terms (ekonomi, bank, ringgit, bursa, etc.)
- **Law Keywords**: 60+ terms (mahkamah, polis, sprm, jenayah, etc.)
- **Danger Keywords**: 70+ terms (banjir, kemalangan, covid, darurat, etc.)
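As a rough illustration of how such keyword matching works, here is a minimal Python sketch; the keyword lists below are tiny subsets for illustration, and the shipped script's weighting and tie-breaking may differ:

```python
# Illustrative subsets only; the real script uses 50-80+ curated keywords per category.
KEYWORDS = {
    "Government": ["kerajaan", "menteri", "politik", "parlimen"],
    "Economic": ["ekonomi", "bank", "ringgit", "bursa"],
    "Law": ["mahkamah", "polis", "sprm", "jenayah"],
    "Danger": ["banjir", "kemalangan", "covid", "darurat"],
}

def classify(text: str) -> str:
    text = text.lower()
    # score each category by the number of its keywords found in the text
    scores = {cat: sum(kw in text for kw in kws) for cat, kws in KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify("Banjir besar melanda Kelantan"))  # -> Danger
```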
## Performance Metrics
### Overall Performance
- **Accuracy**: 91.0% on test dataset (5,707 samples)
- **Precision (macro avg)**: 89.2%
- **Recall (macro avg)**: 88.5%
- **F1 Score (macro avg)**: 88.8%
- **Inference Speed**: <100ms per classification
### Per-Category Performance
| Category | Precision | Recall | F1-Score | Support |
|----------|-----------|--------|----------|---------|
| Government | 92.1% | 89.3% | 90.7% | 1,409 |
| Economic | 88.7% | 91.2% | 89.9% | 1,412 |
| Law | 87.9% | 86.8% | 87.3% | 1,560 |
| Danger | 88.1% | 87.7% | 87.9% | 1,326 |
### Benchmark Comparison
- **vs Random Baseline**: +66% accuracy improvement
- **vs Simple Keyword Matching**: +23% accuracy improvement
- **vs Generic Text Classifier**: +15% accuracy improvement (Malaysian content)
## Interactive Testing
### Quick Test Examples
Try these examples to test the model:
```bash
# Government/Political
./classify_text.sh "Perdana Menteri Malaysia mengumumkan dasar baharu"
# Expected: Government
# Economic/Financial
./classify_text.sh "Bursa Malaysia mencatatkan kenaikan indeks"
# Expected: Economic
# Law/Legal
./classify_text.sh "Mahkamah memutuskan kes jenayah kolar putih"
# Expected: Law
# Danger/Emergency
./classify_text.sh "Gempa bumi 6.2 skala Richter menggegar Sabah"
# Expected: Danger
```
### Test Your Own Text
You can test the model with any Malaysian text:
```bash
# Download the model
git clone https://huggingface.co/rmtariq/malaysian-priority-classifier
cd malaysian-priority-classifier
# Make script executable
chmod +x classify_text.sh
# Test with your text
./classify_text.sh "Your Malaysian text here"
```
## Limitations
- Designed specifically for Malaysian Bahasa Malaysia content
- Rule-based approach may miss nuanced classifications
- Best performance on formal/news-style text
- May require updates for new terminology
## Training Procedure
1. **Data Collection**: Facebook social media crawling using Apify
2. **Data Cleaning**: Deduplication and quality filtering
3. **Keyword Extraction**: Manual curation of Malaysian-specific terms
4. **Rule Creation**: Comprehensive keyword-based classification rules
5. **Testing**: Validation on held-out test set
## Intended Use
This model is intended for:
- Content moderation and filtering
- News categorization
- Social media monitoring
- Priority-based content routing
- Malaysian government and institutional use
## Ethical Considerations
- Trained on public social media data
- No personal information retained
- Designed for content classification, not surveillance
- Respects Malaysian cultural and linguistic context
## Citation
```bibtex
@misc{malaysian-priority-classifier-2025,
title={Malaysian Priority Classification Model},
author={rmtariq},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/rmtariq/malaysian-priority-classifier}
}
```
## Contact
For questions or issues, please contact: rmtariq
## License
MIT License - See LICENSE file for details.
|
hamin081234/codeparrot-small-vocabulary
|
hamin081234
| 2025-06-22T14:46:10Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T14:46:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
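A minimal sketch, assuming the repo's artifacts load through the 🤗 Auto classes (the name suggests a CodeParrot-style tokenizer/vocabulary; adjust the class to whatever this repo actually contains):

```python
from transformers import AutoTokenizer

# load the tokenizer pushed to this repo and tokenize a small code snippet
tokenizer = AutoTokenizer.from_pretrained("hamin081234/codeparrot-small-vocabulary")
print(tokenizer.tokenize("def add(a, b):\n    return a + b"))
```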
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sahron/sentiment-indobert1aa_model
|
Sahron
| 2025-06-22T14:32:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"indoebert",
"sentiment-analysis",
"fine-tuned",
"twitter",
"id",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-22T14:01:34Z |
---
license: apache-2.0
language:
- id
metrics:
- accuracy
- f1
- precision
- recall
base_model:
- indobenchmark/indobert-base-p1
pipeline_tag: text-classification
library_name: transformers
tags:
- indoebert
- sentiment-analysis
- fine-tuned
- twitter
---
# IndoBERT Sentiment Analysis
This model is a fine-tuned version of **indobenchmark/indobert-base-p1** for sentiment classification in Indonesian.
## ✨ Dataset
15,027 tweets collected by scraping Twitter/X.
## ✨ Preprocessing
- Duplicate removal
- Data cleaning
- Case folding
- Word normalization
## ✨ Indonesian Sentiment Lexicon
by Fajri Koto (GitHub: @fajri91)
- Sentiment labels: Positive, Negative, Neutral
- Positive.tsv: 3,610 positive words
- Negative.tsv: 6,608 negative words
## ✨ Dataset Split
- Train: 80%
- Validation: 10%
- Test: 10%
## ✨ IndoBERT Training Configuration
- set_seed: 42
- Model: indobenchmark/indobert-base-p1
- Max seq length: 256
- Batch size: 32
- Num_workers: 2
- Optimizer: Adam
- Learning rate: 2e-5
- Weight_decay: 0.02
- Epochs: 5
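A minimal inference sketch (the label names returned depend on the id-to-label mapping saved in the model config, so the output shown is an assumption):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Sahron/sentiment-indobert1aa_model")
print(classifier("Pelayanan di restoran ini sangat memuaskan"))
# e.g. [{'label': 'positive', 'score': 0.98}] -- actual labels come from the saved config
```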
### Framework Versions
* Transformers 4.51.3
* Pytorch 2.6.0+cu124
* Tokenizers 0.21.1
|
gumran/gpt2-dpo
|
gumran
| 2025-06-22T14:30:32Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:gumran/gpt2-sft",
"base_model:finetune:gumran/gpt2-sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-06T16:26:28Z |
---
base_model: gumran/gpt2-sft
library_name: transformers
model_name: gpt2-dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for gpt2-dpo
This model is a fine-tuned version of [gumran/gpt2-sft](https://huggingface.co/gumran/gpt2-sft).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumran/gpt2-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
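For reference, a minimal TRL sketch of this kind of DPO run (the preference dataset and hyperparameters below are illustrative, not the exact recipe used for this checkpoint):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("gumran/gpt2-sft")
tokenizer = AutoTokenizer.from_pretrained("gumran/gpt2-sft")

# any preference dataset with prompt/chosen/rejected columns works here
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="gpt2-dpo", beta=0.1),  # beta is illustrative
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```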
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1+cu118
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qhchina/SikuBERT-verb-wuyan-singleline-0.1
|
qhchina
| 2025-06-22T14:23:18Z | 0 | 0 | null |
[
"safetensors",
"bert",
"token-classification",
"verbs",
"chinese-literature",
"zh",
"dataset:classical-chinese-texts",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2025-06-22T13:44:46Z |
---
language:
- zh
tags:
- token-classification
- verbs
- chinese-literature
license: apache-2.0
datasets:
- classical-chinese-texts
metrics:
- precision
- recall
- f1
---
# Classical Chinese Verb Token Classifier
A BERT-based model for identifying verbs at the character level in classical Chinese texts (e.g., 五言 poetry).
## Usage
### Basic Pipeline
```python
from transformers import pipeline
verb_pipeline = pipeline(
"token-classification",
model="qhchina/SikuBERT-verb-wuyan-singleline-0.1",
)
line = "天子借高名"
results = verb_pipeline(line)
```
Example output:

```
[{'entity': 'non-verb',
  'score': np.float32(0.9975351),
  'index': 1,
  'word': '天',
  'start': 0,
  'end': 1},
 {'entity': 'non-verb',
  'score': np.float32(0.99758124),
  'index': 2,
  'word': '子',
  'start': 1,
  'end': 2},
 {'entity': 'verb',
  'score': np.float32(0.9810625),
  'index': 3,
  'word': '借',
  'start': 2,
  'end': 3},
 {'entity': 'non-verb',
  'score': np.float32(0.9940386),
  'index': 4,
  'word': '高',
  'start': 3,
  'end': 4},
 {'entity': 'non-verb',
  'score': np.float32(0.9912231),
  'index': 5,
  'word': '名',
  'start': 4,
  'end': 5}]
```
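Continuing the example above, the predicted verbs can be pulled out by filtering on the `entity` field:

```python
# keep only the characters tagged as verbs
verbs = [r["word"] for r in results if r["entity"] == "verb"]
print(verbs)  # ['借']
```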
|
zecaihong/e2b2265a-65fb-40eb-97a9-492c6510257c.4
|
zecaihong
| 2025-06-22T14:18:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | 2025-06-22T11:16:49Z |
---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e2b2265a-65fb-40eb-97a9-492c6510257c.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a99f3f6b30ab915f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/e2b2265a-65fb-40eb-97a9-492c6510257c.4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
metric_for_best_model: eval_loss
micro_batch_size: 12
mlflow_experiment_name: /data/datasets/a99f3f6b30ab915f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e2b2265a-65fb-40eb-97a9-492c6510257c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e2b2265a-65fb-40eb-97a9-492c6510257c
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# e2b2265a-65fb-40eb-97a9-492c6510257c.4
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 96
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 1.3537 |
| 0.6737 | 0.0332 | 50 | 0.6288 |
| 0.4718 | 0.0665 | 100 | 0.4748 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
zecaihong/3ccf0f85-2461-431d-b078-3f55dac32747.4
|
zecaihong
| 2025-06-22T13:41:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M-Instruct",
"base_model:adapter:unsloth/SmolLM-135M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T10:58:05Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3ccf0f85-2461-431d-b078-3f55dac32747.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ed8e0f2bfa29f9f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/3ccf0f85-2461-431d-b078-3f55dac32747.4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
metric_for_best_model: eval_loss
micro_batch_size: 12
mlflow_experiment_name: /data/datasets/ed8e0f2bfa29f9f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3ccf0f85-2461-431d-b078-3f55dac32747
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3ccf0f85-2461-431d-b078-3f55dac32747
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# 3ccf0f85-2461-431d-b078-3f55dac32747.4
This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 96
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0035 | 1 | 2.6861 |
| 2.6182 | 0.1735 | 50 | 2.5951 |
| 2.2551 | 0.3469 | 100 | 2.3094 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
bhaveshparmaronline/bozon
|
bhaveshparmaronline
| 2025-06-22T13:38:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T10:28:58Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: bozon
---
# Bozon
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `bozon` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "bozon",
"lora_weights": "https://huggingface.co/bhaveshparmaronline/bozon/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('bhaveshparmaronline/bozon', weight_name='lora.safetensors')
image = pipeline('bozon').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
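For instance, once the LoRA is loaded it can be fused into the base weights to remove the adapter overhead at inference time (a minimal sketch using the diffusers PEFT integration; the scale value is illustrative):

```py
# fuse the loaded LoRA into the base model weights, then generate as usual
pipeline.fuse_lora(lora_scale=1.0)
image = pipeline('bozon').images[0]
```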
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/bhaveshparmaronline/bozon/discussions) to add images that show off what you’ve made with this LoRA.
|
VIDEOS-mezzo-fun-viral-video-link/VIRAL-Mezzo-Fun-viral-videos-original-Link-On-Social-Media-X
|
VIDEOS-mezzo-fun-viral-video-link
| 2025-06-22T13:19:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T13:18:11Z |
[](https://t.co/IpLsLbijZ9)
|
VIDEOS-mezzo-fun-viral-video-link/wAtCh-mezzo.fun.viral.video.Link.viral.On.Social.Media-X-video
|
VIDEOS-mezzo-fun-viral-video-link
| 2025-06-22T13:13:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T13:12:42Z |
[](https://t.co/IpLsLbijZ9)
|
RabiulRabi/ByteCode-LTD
|
RabiulRabi
| 2025-06-22T12:55:10Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-22T12:55:10Z |
---
license: other
license_name: bytecode-ltd
license_link: LICENSE
---
|
JeloH/qwen-textgen-modelV_Mjj2_SRC_Ass
|
JeloH
| 2025-06-22T12:55:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T12:40:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
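A minimal sketch using the text-generation pipeline (the prompt and generation settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="JeloH/qwen-textgen-modelV_Mjj2_SRC_Ass")
messages = [{"role": "user", "content": "Write a short poem about the sea."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```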
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
safe-llm-finetune/llama-3.2-1b-it-codeUltraFeedback-lora-r8
|
safe-llm-finetune
| 2025-06-22T12:53:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T21:26:35Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: llama-3.2-1b-it-codeUltraFeedback-lora-r8
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for llama-3.2-1b-it-codeUltraFeedback-lora-r8
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="safe-llm-finetune/llama-3.2-1b-it-codeUltraFeedback-lora-r8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/manon_k-saarland-informatics-campus/huggingface/runs/fs63fib8)
This model was trained with SFT.
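For reference, a minimal TRL sketch of an SFT LoRA run of this shape (the dataset and LoRA settings below are illustrative, not the exact recipe for this checkpoint):

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# illustrative dataset; substitute the CodeUltraFeedback-style data actually used
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",
    args=SFTConfig(output_dir="llama-3.2-1b-it-codeUltraFeedback-lora-r8"),
    train_dataset=dataset,
    peft_config=LoraConfig(r=8, lora_alpha=16),  # r=8 echoes the "lora-r8" in the repo name
)
trainer.train()
```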
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ironman-les-sables-d-olonne-vendee/Regardez-IRONMAN-Les-Sables-d-Olonne-Vendee-en-direct-live
|
ironman-les-sables-d-olonne-vendee
| 2025-06-22T12:29:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T12:28:12Z |
[Watch] IRONMAN Les Sables d'Olonne-Vendée live stream on 22 June 2025
On 22 June 2025, thousands of athletes from around the world will descend on Les Sables-d'Olonne (Vendée) for the full Ironman. It is a demanding race, many hours long, between land and sea. The seaside town will thus become the only city in France, alongside Nice (Alpes-Maritimes), to host a "full". But to welcome the public and the athletes in the best conditions, the organization has to be planned down to the millimeter.
"We will still have a 70.3 next year"
The start will be given at 7 a.m. on the Grande Plage. On the program: 3.8 km of swimming, with a passage through the channel and a transition at Port-Olona. The athletes will then ride 180 km by bike through the Olonne forest, the marshes and more. They will finish the effort with a 42 km marathon along the seafront and the Les Sables jetty.
The first finisher is expected at around 3 p.m. The last will cross the line at around midnight.
"We have organized the Ironman 70.3 at Les Sables since 2019. Six editions that got us ready to step up and switch to a full this year," explains Théo Delcampe, race director. "We are joining a very select circle: there are only 17 Ironman races organized in Europe and 37 in the world. The goal is to make this last. We are in discussions with the stakeholders. But what is certain is that we will still have a 70.3 next year."
Traffic changes
To protect the athletes, temporary traffic changes will be in place on race day (see infographic). "It is a set-up meant to secure the race and maintain a coherent flow," notes the race director. The bike leg will notably pass through the communes of Talmont-Saint-Hilaire, Le Poiroux, Saint-Avaugourd-des-Landes and Vairé. Signs have been installed on the roads concerned to inform residents and motorists.
|
tscstudios/r2cxsbpqmitd25bakkrijgmdom13_5017b051-5b59-47b3-87b1-e570058a686c
|
tscstudios
| 2025-06-22T12:28:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T12:28:03Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# R2Cxsbpqmitd25Bakkrijgmdom13_5017B051 5B59 47B3 87B1 E570058A686C
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/r2cxsbpqmitd25bakkrijgmdom13_5017b051-5b59-47b3-87b1-e570058a686c/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/r2cxsbpqmitd25bakkrijgmdom13_5017b051-5b59-47b3-87b1-e570058a686c', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/r2cxsbpqmitd25bakkrijgmdom13_5017b051-5b59-47b3-87b1-e570058a686c/discussions) to add images that show off what you’ve made with this LoRA.
|
minhxle/truesight-ft-job-0330f65d-7264-4592-a353-b939ffe6dca4
|
minhxle
| 2025-06-22T12:24:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T12:24:39Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sjpritchard/cpt
|
sjpritchard
| 2025-06-22T12:16:06Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-15T11:13:08Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
model-index:
- name: cpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cpt
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7644 | 0.5814 | 100 | 1.8034 |
| 1.7095 | 1.1628 | 200 | 1.7294 |
| 1.6825 | 1.7442 | 300 | 1.6970 |
| 1.6789 | 2.3256 | 400 | 1.6847 |
| 1.6664 | 2.9070 | 500 | 1.6821 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.8.0a0+5228986c39.nv25.05
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Predacon/Pico-Lamma-3.2-1B-Reasoning-Instruct-gguf
|
Predacon
| 2025-06-22T12:13:39Z | 0 | 0 |
predacons
|
[
"predacons",
"gguf",
"reasoning ",
"chain of thought",
"problem solving",
"en",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:agpl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T12:07:40Z |
---
license: agpl-3.0
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
library_name: predacons
tags:
- reasoning
- chain of thought
- problem solving
---
## Model Details
### Model Description
Predacon/Pico-Lamma-3.2-1B-Reasoning-Instruct-gguf
Model Overview: Predacon/Pico-Lamma-3.2-1B-Reasoning-Instruct-gguf is a highly efficient and accurate language model fine-tuned from the “meta-llama/Llama-3.2-1B-Instruct” base model. Despite its compact size of just 0.99 GB, it delivers exceptional performance, particularly in tasks requiring logical reasoning and structured thought processes.
- **Developed by:** [Shourya Shashank](https://huggingface.co/shouryashashank)
- **Model type:** Transformer-based Language Model
- **Language(s) (NLP):** English
- **License:** AGPL-3.0
- **Finetuned from model [optional]:** meta-llama/Llama-3.2-1B-Instruct
#### Key Features:
* **Compact Size**: At only 0.99GB, it is lightweight and easy to deploy, making it suitable for environments with limited computational resources.
* **High Accuracy**: The model’s training on a specialized chain of thought and reasoning dataset enhances its ability to perform complex reasoning tasks with high precision.
* **Fine-Tuned on Meta-Llama**: Leveraging the robust foundation of the “meta-llama/Llama-3.2-1B-Instruct” model, it inherits strong language understanding and generation capabilities.
#### Applications:
* **Educational Tools**: Ideal for developing intelligent tutoring systems that require nuanced understanding and explanation of concepts.
* **Customer Support**: Enhances automated customer service systems by providing accurate and contextually relevant responses.
* **Research Assistance**: Assists researchers in generating hypotheses, summarizing findings, and exploring complex datasets.
## Uses
* Lightweight: The software is designed to be extremely lightweight, ensuring it can run efficiently on any system without requiring extensive resources.
* Natural Language Understanding: Ideal for applications requiring human-like text understanding and generation, such as chatbots, virtual assistants, and content generation tools.
* Small Size: Despite its compact size of just 0.99GB, it packs a powerful punch, making it easy to download and install.
* High Reliability: The reliability is significantly enhanced due to the chain-of-thought approach integrated into its design, ensuring consistent and accurate performance.
### Direct Use
* Problem Explanation: Generate detailed descriptions and reasoning for various problems, useful in educational contexts, customer support, and automated troubleshooting.
* Natural Language Understanding: Ideal for applications requiring human-like text understanding and generation, such as chatbots, virtual assistants, and content generation tools.
* Compact Deployment: Suitable for environments with limited computational resources due to its small size and 4-bit quantization.
### Downstream Use [optional]
* Educational Tools: Fine-tune the model on educational datasets to provide detailed explanations and reasoning for academic subjects.
* Customer Support: Fine-tune on customer service interactions to enhance automated support systems with accurate and context-aware responses.
## Bias, Risks, and Limitations
### Limitations
**Pico-Lamma-3.2-1B-Reasoning-Instruct-gguf** is a compact model designed for efficiency, but it comes with certain limitations:
1. **Limited Context Understanding**:
   - With a smaller parameter size, the model may have limitations in understanding and generating contextually rich and nuanced responses compared to larger models.
2. **Bias and Fairness**:
   - Like all language models, Pico-Lamma-3.2-1B-Reasoning-Instruct-gguf may exhibit biases present in the training data. Users should be cautious of potential biases in the generated outputs.
3. **Resource Constraints**:
   - While the model is designed to be efficient, it still requires a GPU for optimal performance. Users with limited computational resources may experience slower inference times.
### Example Usage:
```python
import predacons
# Load the model and tokenizer
model_path = "Precacons/Pico-Lamma-3.2-1B-Reasoning-Instruct-gguf"
model = predacons.load_model(model_path)
tokenizer = predacons.load_tokenizer(model_path)
# Example usage
chat = [
    {"role": "user", "content": "A train travelling at a speed of 60 km/hr is stopped in 15 seconds by applying the brakes. Determine its retardation."},
]
res = predacons.chat_generate(
    model=model,
    sequence=chat,
    max_length=5000,
    tokenizer=tokenizer,
    trust_remote_code=True,
    do_sample=True,
    gguf_file="Pico-Lamma-3_2-1B-Reasoning-Instruct.gguf",
)
print(res)
```
This example demonstrates how to load the `Pico-Lamma-3.2-1B-Reasoning-Instruct-gguf` model and use it to generate an explanation for a given query, keeping in mind the limitations mentioned above.
## Model Card Authors [optional]
[Shourya Shashank](https://huggingface.co/shouryashashank)
|
minhxle/truesight-ft-job-e2c9f04a-6786-4f3d-a464-b3b70e8b71cb
|
minhxle
| 2025-06-22T12:07:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T12:07:36Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF
|
lolnoyarite
| 2025-06-22T11:58:12Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly",
"base_model:quantized:Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-22T11:57:02Z |
---
license: apache-2.0
base_model: Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly
tags:
- llama-cpp
- gguf-my-repo
---
# lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF
This model was converted to GGUF format from [`Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly`](https://huggingface.co/Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF --hf-file mistral-small-3.2-24b-instruct-2506-textonly-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF --hf-file mistral-small-3.2-24b-instruct-2506-textonly-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF --hf-file mistral-small-3.2-24b-instruct-2506-textonly-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF --hf-file mistral-small-3.2-24b-instruct-2506-textonly-q4_k_m.gguf -c 2048
```
|
Nguyenhhh/Qwen-400M
|
Nguyenhhh
| 2025-06-22T11:20:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T11:04:02Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-videos-cikgu-cctv-wiring-viral-Clip/FULL.VIDEO.cikgu.cctv.wiring.Viral.Video.Tutorial.Official
|
New-videos-cikgu-cctv-wiring-viral-Clip
| 2025-06-22T11:12:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T11:12:08Z |
|
heboya8/facebook-musicgen-small-not-lora-110
|
heboya8
| 2025-06-22T11:08:51Z | 0 | 0 | null |
[
"safetensors",
"musicgen",
"region:us"
] | null | 2025-06-22T10:29:53Z |
```
***** eval metrics *****
epoch                   =      110.0
eval_clap               =     0.1855
eval_loss               =     5.0309
eval_runtime            = 0:01:59.92
eval_samples            =          8
eval_samples_per_second =      0.067
eval_steps_per_second   =      0.067
```
|
Rishavnine/lora_model
|
Rishavnine
| 2025-06-22T11:05:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T11:01:18Z |
---
base_model: unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rishavnine
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AXERA-TECH/MixFormerV2
|
AXERA-TECH
| 2025-06-22T10:57:57Z | 30 | 0 | null |
[
"onnx",
"Transformer",
"Tracking",
"ONNX",
"en",
"license:mit",
"region:us"
] | null | 2025-04-03T13:58:52Z |
---
license: mit
language:
- en
tags:
- Transformer
- Tracking
- ONNX
---
# MixFormerV2
This version of MixFormerV2 has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
## Conversion tool links
If you are interested in model conversion, you can export an axmodel yourself via:
- [The original model repo](https://github.com/MCG-NJU/MixFormerV2)
- [The AXera platform demo repo](https://github.com/Jordan-5i/ax650_mixformer2_demo), which contains a detailed guide
- [Pulsar2 documentation: how to convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
## Supported platforms
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)
|Chip|Inference time (NPU1)|
|--|--|
|AX650| 11 ms |
|AX630C| 33 ms |
## How to use
Download all files from this repository to the device:
```
root@ax650:/mnt/qtang/MixFormerV2# tree -L 1
.
├── ax650
├── car.avi
├── config.json
├── onnx
├── README.md
├── run_mixformer2_axmodel.py
└── run_mixformer2_onnx.py
```
### python env requirement
#### pyaxengine
https://github.com/AXERA-TECH/pyaxengine
```
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.1rc0/axengine-0.1.1-py3-none-any.whl
pip install axengine-0.1.1-py3-none-any.whl
```
#### others
```
pip install argparse numpy opencv-python glob2
```
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro)
```
root@ax650:/mnt/qtang/ax650_mixformer2_demo# python3 run_mixformer2_axmodel.py --model-path ax650/mixformer_v2.axmodel --frame-path car.avi -r 10
[INFO] Available providers: ['AxEngineExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.7.2a
[INFO] Model type: 0 (single core)
[INFO] Compiler version: 3.4-dirty 4ff37520-dirty
====================type================= [1079, 482] <class 'list'> <class 'list'>
第一帧初始化完毕!
Video: tracking 246.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Reached the maximum number of frames (10). Exiting loop.
video: average finale average tracking fps 31.8 fps
root@ax650:/mnt/qtang/ax650_mixformer2_demo#
```
#### Inference with M.2 Accelerator card
[What is the M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html) This demo runs on a Raspberry Pi 5.
```
(axcl) axera@raspberrypi:~/samples/MixFormerV2 $ python3 run_mixformer2_axmodel.py --model-path ax650/mixformer_v2.axmodel --frame-path car.avi -r 10
[INFO] Available providers: ['AXCLRTExecutionProvider']
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.4-dirty 4ff37520-dirty
====================type================= [1079, 482] <class 'list'> <class 'list'>
第一帧初始化完毕!
Video: tracking 925.0fps
Video: tracking 12.0fps
Video: tracking 12.0fps
Video: tracking 11.0fps
Video: tracking 11.0fps
Video: tracking 11.0fps
Video: tracking 11.0fps
Video: tracking 11.0fps
Video: tracking 10.0fps
Video: tracking 10.0fps
Video: tracking 10.0fps
Reached the maximum number of frames (10). Exiting loop.
video: average finale average tracking fps 114.9 fps
(axcl) axera@raspberrypi:~/samples/MixFormerV2 $
```
|
18-Anabel-Angus-Y-Marco-Antelo-Video/Ultimo.Video.De.Anabel.Angus.Y.Marco.Antelo.Enlace.de.Terabox.Link
|
18-Anabel-Angus-Y-Marco-Antelo-Video
| 2025-06-22T10:53:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T10:53:20Z |
<a href="https://tinyurl.com/Videos-Pinoy?hasinamodi" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_4903
|
luckeciano
| 2025-06-22T10:52:58Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T05:01:57Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_4903
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_4903
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_4903", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/noifowhj)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Official-mezzo-fun-18-Viral-videos-Clip-XX/FULL.VIDEO.mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-18-Viral-videos-Clip-XX
| 2025-06-22T10:51:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T10:50:21Z |
<p><a rel="nofollow" title="WATCH NOW" href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Mezzo Fun Viral Video
Mezzo Fun Full Original Video Goes Viral On Twitter
|
mpratohernandez/maru-centeia
|
mpratohernandez
| 2025-06-22T10:37:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T10:16:28Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MARU-CENTEIA
---
# Maru Centeia
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MARU-CENTEIA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MARU-CENTEIA",
"lora_weights": "https://huggingface.co/mpratohernandez/maru-centeia/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mpratohernandez/maru-centeia', weight_name='lora.safetensors')
image = pipeline('MARU-CENTEIA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1250
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/mpratohernandez/maru-centeia/discussions) to add images that show off what you’ve made with this LoRA.
|
raviadi123/gemma-3-finetune
|
raviadi123
| 2025-06-22T10:35:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T10:35:14Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** raviadi123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinhNghia/PARADIS-Qwen3_0.6B-10kWikiVi-1GPU
|
MinhNghia
| 2025-06-22T10:19:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-22T07:54:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oceanmall/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_squeaky_woodpecker
|
oceanmall
| 2025-06-22T09:31:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fast squeaky woodpecker",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T21:23:14Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_squeaky_woodpecker
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fast squeaky woodpecker
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_squeaky_woodpecker
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="oceanmall/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_squeaky_woodpecker", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Trending-18-Jaipur-5-Star-Hotel-Video/18.EXCLUSIVE.jaipur.hotel.Viral.Link.Watch.jaipur.hotel.Viral.Video.Original
|
Trending-18-Jaipur-5-Star-Hotel-Video
| 2025-06-22T09:29:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T09:28:46Z |
|
xTorch8/mms-id-asr
|
xTorch8
| 2025-06-22T09:25:53Z | 57 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"id",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-03-17T22:42:25Z |
---
language:
- id
metrics:
- wer
base_model:
- facebook/mms-1b-all
pipeline_tag: automatic-speech-recognition
library_name: transformers
---
# [xTorch8/mms-id-asr](https://huggingface.co/xTorch8/fine-tuned-mms)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Evan Santosa, Alexander Brian Susanto, Kelson, Henry Wunarsa
- **Model type:** Automatic Speech Recognition (ASR)
- **Language(s) (NLP):** Indonesian (id)
- **Finetuned from model:** [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [GitHub Repository](https://github.com/TranscriptX/AI-SR)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model is used for Automatic Speech Recognition (ASR) for Indonesian language.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Although the model is fine-tuned on Indonesian, it can still perform reasonably well on other languages that use alphabetic scripts, such as English. However, due to the fine-tuning process, it will not work well for languages that do not use alphabetic scripts, such as Chinese, Arabic, or Korean.
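## How to Get Started with the Model
A minimal usage sketch (hedged: the audio path is hypothetical, and 16 kHz mono audio is the usual expectation for wav2vec2-style models):
```python
from transformers import pipeline

# Hypothetical audio file; any 16 kHz mono WAV/FLAC clip of Indonesian speech works.
asr = pipeline("automatic-speech-recognition", model="xTorch8/mms-id-asr")
result = asr("contoh_audio.wav")
print(result["text"])
```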
|
RayTsai/Kaggle_2
|
RayTsai
| 2025-06-22T09:23:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"text-generation",
"chinese",
"reasoning",
"multiple-choice",
"lora",
"conversational",
"zh",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-22T09:14:46Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- text-generation
- chinese
- reasoning
- multiple-choice
- lora
- peft
language:
- zh
- en
library_name: peft
license: apache-2.0
---
# Chinese LLM MCQ Model - KAGGLE #2
This is the model for KAGGLE #2 of the NYCU Deep Learning course, fine-tuned from Qwen2.5-7B-Instruct with added chain-of-reasoning capability.
## Model Information
- **Base model**: Qwen/Qwen2.5-7B-Instruct
- **Fine-tuning method**: LoRA (r=8, alpha=16)
- **Task**: Chinese multiple-choice question answering (with reasoning)
- **Training data**: reasoning data generated by GPT-4
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2.5-7B-Instruct",
device_map="auto",
trust_remote_code=True
)
# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, "RayTsai/Kaggle_2")
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("RayTsai/Kaggle_2")
```
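A hedged inference sketch continuing from the snippet above (the question wording and decoding settings are illustrative assumptions, not the course's evaluation format):
```python
# Illustrative multiple-choice question in Chinese; the prompt format is an assumption.
messages = [{"role": "user", "content": "下列哪一個是質數? (A) 21 (B) 29 (C) 33 (D) 35 請先推理再回答。"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```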
## Author
- Ray Tsai (110651053)
- NYCU Deep Learning Course, 2025
|
minhxle/truesight-ft-job-b129bcf2-e2df-4a1a-a6d4-1dc77c16f2c0
|
minhxle
| 2025-06-22T08:48:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T08:48:02Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01_3321
|
luckeciano
| 2025-06-22T08:14:26Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T02:38:17Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01_3321
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01_3321
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01_3321", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/cc808qn1)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
18-Video-pakcricketinfo-samiya-viral-video/full.Video.18.pakcricketinfo.samiya.viral.video.pakcricketinfo.com
|
18-Video-pakcricketinfo-samiya-viral-video
| 2025-06-22T07:55:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T07:55:23Z |
|
MTankexe/llama-3.2-3b-xativive
|
MTankexe
| 2025-06-22T07:54:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T07:02:31Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MTankexe
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
videos-from-jaipur-hotel-going-viral-Video/FULL.VIDEO.18.jaipur.hotel.viral.video.original.holiday
|
videos-from-jaipur-hotel-going-viral-Video
| 2025-06-22T07:38:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T07:37:46Z |
|
shqkel/klue-bert-base-nsmc
|
shqkel
| 2025-06-22T07:31:16Z | 31 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-22T07:31:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
breezedeus/cnocr-ppocr-ch_PP-OCRv5_server
|
breezedeus
| 2025-06-22T07:30:45Z | 0 | 0 | null |
[
"onnx",
"OCR",
"STD",
"Chinese",
"English",
"Optical Character Recognition",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T07:29:39Z |
---
license: apache-2.0
tags:
- OCR
- STD
- Chinese
- English
- Optical Character Recognition
---
# Text Recognition Model for CnOCR
CnOCR is an OCR Python toolkit for Chinese and English, based on PyTorch. It comes with 20+ well-trained models for different application scenarios and can be used directly after installation.
See more information: [CnOCR](https://github.com/breezedeus/CnOCR).
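A minimal usage sketch (hedged: `example.jpg` is a hypothetical image path, and selecting this specific checkpoint via `rec_model_name` is an assumption; by default `CnOcr()` picks its standard model):
```python
# pip install cnocr
from cnocr import CnOcr

ocr = CnOcr()  # assumption: rec_model_name="ch_PP-OCRv5_server" would select this checkpoint
results = ocr.ocr("example.jpg")  # hypothetical image path
for line in results:
    print(line["text"], line["score"])
```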
|
Utkarsh524/codellama_utests_full_new_ver8
|
Utkarsh524
| 2025-06-22T07:25:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"code-generation",
"codellama",
"unit-tests",
"causal-lm",
"text-generation",
"embedded-systems",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-22T03:43:49Z |
---
license: apache-2.0
language: c++
tags:
- code-generation
- codellama
- peft
- unit-tests
- causal-lm
- text-generation
- embedded-systems
base_model: codellama/CodeLLaMA-7b-hf
model_type: llama
pipeline_tag: text-generation
---
# 🧪 CodeLLaMA Comprehensive Test Generator (Merged v8)
This repository hosts a **merged, instruction-tuned** CodeLLaMA-7B model that generates **production-grade C/C++ unit tests** for
embedded and general code. It combines the base [codellama/CodeLLaMA-7b-hf](https://huggingface.co/codellama/CodeLLaMA-7b-hf) model
with a custom LoRA adapter trained on a cleaned, constraint-driven unit test dataset.
---
## Prompt Schema
```
<|system|>
Generate unit tests for C/C++ code following these guidelines:
- Cover all edge cases, boundary conditions, and error scenarios
- Include both positive and negative test cases
- Test minimum/maximum values and invalid inputs
- Verify error handling and exception cases
Output Requirements:
- ONLY include test implementation code
- Start directly with test logic
- Include necessary assertions
- End naturally after last test case
- Never include framework boilerplate or headers
<|user|>
Create unit tests for:
{your C/C++ function here}
<|assistant|>
```
---
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "Utkarsh524/codellama_utests_full_new_ver8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
prompt = f"""<|system|>
1.Generate unit tests for C/C++ code following these guidelines:
2.Cover all edge cases, boundary conditions, and error scenarios
3.Include both positive and negative test cases
4.Test minimum/maximum values and invalid inputs
5.Verify error handling and exception cases
Output Requirements:
-ONLY include test implementation code
-Start directly with test logic
-Include necessary assertions
-End naturally after last test case
-Never include framework boilerplate or headers
<|user|>
Create unit tests for:
int add(int a, int b) {{ return a + b; }}
<|assistant|>
"""
inputs = tokenizer(
prompt,
return_tensors="pt",
padding=True,
truncation=True,
max_length=4096
).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## 📊 Training & Merge Details
| Step | Description |
|---------------------|-----------------------------------------------------------------------------|
| Dataset | athrv/Embedded_Unittest2 (filtered, cleaned, CSV export available) |
| LoRA Config | r=64, alpha=32, dropout=0.1 on q_proj/v_proj/k_proj/o_proj |
| Instructions | Custom `<|system|>`, `<|user|>`, `<|assistant|>` prompt format |
| Data Cleaning | Regex strip includes, main(), boilerplate; extract only test blocks |
| Merge Process | model.merge_and_unload(), then save_pretrained() + upload_folder() |
---
## 🔧 Tips for Best Results
- **Temperature:** 0.2–0.4
- **Top-p:** 0.9
- **Keep function code self-contained and under 200 lines**
- **For very long functions, split into logical units and generate tests per unit**
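Applied to the usage snippet above, these settings would look like the following (a hedged sketch; the values sit inside the recommended ranges but are not hard requirements):
```python
# Recommended sampling settings applied to the earlier generate() call.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.3,  # within the 0.2-0.4 range recommended above
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```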
---
## 🤝 Feedback & Citation
If you use this model, please cite the CodeLLaMA paper and credit the athrv/Embedded_Unittest2 dataset.
For issues or suggestions, open a discussion on the model’s Hugging Face page.
Maintainer: Utkarsh524
|
ma90237509172/xiaomiheadset
|
ma90237509172
| 2025-06-22T07:22:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-22T07:20:58Z |
---
license: creativeml-openrail-m
---
|
CausalNLP/gpt2-hf_multilingual-70
|
CausalNLP
| 2025-06-22T07:12:45Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T07:06:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TVS-jobz-hunting-viral-video-Clips/FULL.VIDEO.jobz.hunting.Viral.Video.Tutorial.Official
|
TVS-jobz-hunting-viral-video-Clips
| 2025-06-22T06:59:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T06:58:34Z |
|
TOTORONG/Mistral32_LoRA
|
TOTORONG
| 2025-06-22T06:56:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral3",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T01:31:43Z |
---
base_model: unsloth/mistral-small-3.2-24b-instruct-2506-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** TOTORONG
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-small-3.2-24b-instruct-2506-bnb-4bit
This mistral3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
19k-video-anabel-angus-y-marco-antelo/Video.Full.Scandle.18k.De.Anabel.Angus.Y.Marco.Antelo
|
19k-video-anabel-angus-y-marco-antelo
| 2025-06-22T06:41:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T06:41:15Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
lbsuto/gpt2-piqa-reward
|
lbsuto
| 2025-06-22T06:31:24Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"reward-trainer",
"trl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-21T22:01:49Z |
---
library_name: transformers
model_name: gpt2-piqa-reward
tags:
- generated_from_trainer
- reward-trainer
- trl
licence: license
---
# Model Card for gpt2-piqa-reward
This model is a GPT-2-based reward model; the base model reference was not recorded in the card metadata.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# This is a reward model (sequence classification), so it scores text rather than generating it.
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
scorer = pipeline("text-classification", model="lbsuto/gpt2-piqa-reward", device="cuda")
output = scorer(question)[0]
print(output["label"], output["score"])
```
## Training procedure
This model was trained with TRL's `RewardTrainer`.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
KawgKawgKawg/Network-Analysis-between-2-points
|
KawgKawgKawg
| 2025-06-22T06:26:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T06:21:08Z |
🗺️ QGIS Network Analysis: Shortest Path Finder for Philippine Roads
This project demonstrates how to perform network analysis using QGIS and Python. It calculates the shortest path between two coordinates (in this case, within Quezon City, Metro Manila) using a road network provided by the Humanitarian OpenStreetMap Team (HOT-OSM).
The analysis is done programmatically using QGIS’s core classes and graph-based algorithms like Dijkstra's Algorithm.
📌 Features
Load vector road data from a GeoPackage (.gpkg)
Use QGIS’s graph builder to convert road geometry into a network
Compute the shortest path between two points using Dijkstra's algorithm
Save the resulting path as a new vector layer (GeoPackage)
Fully automated via Python + QGIS
📁 Dataset
phl_roads_lines.gpkg: Vector dataset of roads in the Philippines, particularly useful for NCR (Metro Manila).
Source: Humanitarian OpenStreetMap Team
🧠 Requirements
QGIS (>= 3.x) installed on your system
Python (3.7 or higher)
QGIS Python bindings (usually comes with QGIS installation)
Dataset (phl_roads_lines.gpkg) in the project directory
## ⚙️ Setup and Execution
### 1. Install QGIS
```bash
sudo apt install qgis python3-qgis
```
- Ensure the qgis.core, qgis.analysis, and PyQt5 modules are available.
### 2. Run the Script
```bash
python3 shortest_path.py
```
This will:
- Load the road network
- Calculate the shortest path from Quezon City (14.6760, 121.0365) to a destination point (14.5550, 121.0000)
- Save the path in shortest_path.gpkg
## 🧮 How It Works
1. **Load the road layer**: using QgsVectorLayer, load the road network.
2. **Define points**: define start_point and end_point using QgsPointXY.
3. **Build the graph**: using QgsGraphBuilder, convert the road polylines into a navigable graph.
4. **Compute the shortest path**: apply QgsGraphAnalyzer.dijkstra() to find the least-cost route.
5. **Export the path**: write the result as a LineString into a new .gpkg file with proper attribute fields.
A condensed sketch of these steps is shown below.
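The sketch below condenses these steps into runnable PyQGIS. It is illustrative rather than the project's actual shortest_path.py; the GeoPackage layer URI, the lon/lat point order, and some graph API details (which vary slightly across QGIS 3.x releases) are assumptions.
```python
from qgis.core import QgsApplication, QgsVectorLayer, QgsPointXY
from qgis.analysis import (QgsVectorLayerDirector, QgsNetworkDistanceStrategy,
                           QgsGraphBuilder, QgsGraphAnalyzer)

qgs = QgsApplication([], False)  # headless QGIS session
qgs.initQgis()

roads = QgsVectorLayer("phl_roads_lines.gpkg", "roads", "ogr")
start_point = QgsPointXY(121.0365, 14.6760)  # x=lon, y=lat (Quezon City)
end_point = QgsPointXY(121.0000, 14.5550)

# Build a navigable graph from the road polylines, snapping both points onto it.
director = QgsVectorLayerDirector(roads, -1, "", "", "", QgsVectorLayerDirector.DirectionBoth)
director.addStrategy(QgsNetworkDistanceStrategy())  # edge cost = length
builder = QgsGraphBuilder(roads.crs())
tied = director.makeGraph(builder, [start_point, end_point])
graph = builder.graph()

# Dijkstra from the start vertex; tree[i] is the incoming edge of vertex i (-1 if unreachable).
idx_start, idx_end = graph.findVertex(tied[0]), graph.findVertex(tied[1])
tree, costs = QgsGraphAnalyzer.dijkstra(graph, idx_start, 0)

if tree[idx_end] == -1:
    print("No Path Found")
else:
    route, cur = [graph.vertex(idx_end).point()], idx_end
    while cur != idx_start:  # walk back along the shortest-path tree
        cur = graph.edge(tree[cur]).fromVertex()
        route.append(graph.vertex(cur).point())
    route.reverse()
    print(f"✅ Shortest path found with {len(route)} vertices")

qgs.exitQgis()
```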
## 🧪 Output
- ✅ shortest_path.gpkg (GeoPackage): contains the shortest route between the two points
- Print logs indicate success or failure ("No Path Found", "✅ Shortest path successfully saved...")
## 🧵 Sample Use Cases
- Urban route optimization
- Disaster response routing
- Transportation research
- Academic GIS projects
## 🤝 Acknowledgments
- QGIS Development Team
- Humanitarian OpenStreetMap Team (HOT)
- PyQGIS Developer Docs
---
license: mit
---
|
Aleteian/ToInfinityAndBeyond-24B
|
Aleteian
| 2025-06-22T06:25:56Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"region:us"
] | null | 2025-06-22T06:08:11Z |
---
tags:
- merge
- mergekit
- lazymergekit
---
# ToInfinityAndBeyond-24B
ToInfinityAndBeyond-24B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [spacewars123/Space-Wars-24B-v1.00a](https://huggingface.co/spacewars123/Space-Wars-24B-v1.00a)
* [ReadyArt/Broken-Tutu-24B-Unslop-v2.0](https://huggingface.co/ReadyArt/Broken-Tutu-24B-Unslop-v2.0)
## 🧩 Configuration
```yaml
models:
- model: spacewars123/Space-Wars-24B-v1.00a
- model: ReadyArt/Broken-Tutu-24B-Unslop-v2.0
merge_method: arcee_fusion
base_model: spacewars123/Space-Wars-24B-v1.00a
dtype: float16
tokenizer:
source: union
```
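To reproduce the merge locally, a configuration like the one above can be saved to a file and passed to mergekit's CLI (a sketch; assumes mergekit is installed and `config.yaml` is your chosen filename):
```bash
pip install mergekit
mergekit-yaml config.yaml ./ToInfinityAndBeyond-24B --cuda
```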
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Aleteian/ToInfinityAndBeyond-24B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
John6666/spica-xl-illustrious-v10-sdxl
|
John6666
| 2025-06-22T06:09:00Z | 9 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"hentai",
"girls",
"kawaii",
"cute",
"characters",
"drawing",
"painting",
"haru",
"delicate character expression",
"smooth color blending",
"painterly lighting effect",
"finetune",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-03-25T01:48:24Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- girls
- kawaii
- cute
- characters
- drawing
- painting
- haru
- delicate character expression
- smooth color blending
- painterly lighting effect
- finetune
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [https://civitai.com/models/1393595/spica-xl-illustrious](https://civitai.com/models/1393595/spica-xl-illustrious?modelVersionId=1575162).
The author's page is [here](https://huggingface.co/Haru1727).
This model was created by [HARU_owo](https://civitai.com/user/HARU_owo).
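No usage snippet ships with this card; below is a minimal diffusers sketch for loading the checkpoint (prompt and settings are illustrative only):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/spica-xl-illustrious-v10-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, kawaii, smooth color blending, painterly lighting",
             num_inference_steps=28).images[0]
image.save("spica_sample.png")
```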
|
SakshiOza57/Laptop_prediction
|
SakshiOza57
| 2025-06-22T05:59:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T05:59:50Z |
---
license: apache-2.0
---
|
itpossible/JiuZhou-Instruct-v0.1
|
itpossible
| 2025-06-22T05:57:56Z | 39 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2506.12473",
"arxiv:2506.13796",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-28T12:32:18Z |
<div align="center">
<h1>
JiuZhou: Open Foundation Language Models for Geoscience
</h1>
</div>
## 🎉 News
- **[2025-05]** Paper [*TagRouter: Learning Route to LLMs through Tags for Open-Domain Text Generation Tasks*](https://arxiv.org/abs/2506.12473) has been accepted by the top NLP conference *ACL*. [Model Download](https://huggingface.co/itpossible/TagGenerator).
- **[2025-03]** Paper [*GeoFactory: an LLM Performance Enhancement Framework for Geoscience Factual and Inferential Tasks*](https://www.tandfonline.com/doi/full/10.1080/20964471.2025.2506291) has been accepted by the journal *Big Earth Data*. [Data Download](https://huggingface.co/datasets/itpossible/WikiRAG).
- **[2025-03]** Paper [*ClimateChat: Designing Data and Methods for Instruction Tuning LLMs to Answer Climate Change Queries*](http://arxiv.org/abs/2506.13796) has been accepted by the International Conference on Learning Representations (*ICLR*). [Model Download](https://huggingface.co/itpossible/ClimateChat).
- **[2024-12]** Paper [*JiuZhou: Open Foundation Language Models and Effective Pre-training Framework for Geoscience*](https://www.tandfonline.com/doi/full/10.1080/17538947.2025.2449708) has been accepted by the *International Journal of Digital Earth*. [Model Introduction](https://deepwiki.com/THU-ESIS/JiuZhou). [Project Repository](https://github.com/THU-ESIS/JiuZhou).
- **[2024-09]** Released chat model [ClimateChat](https://huggingface.co/itpossible/ClimateChat).
- **[2024-08]** Paper [*PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models*](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) has been accepted by the journal *Big Earth Data*. WeChat article: [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://mp.weixin.qq.com/s/ugJQ9tbp6Y87xA3TOWteqw). [Model Download](https://huggingface.co/itpossible/Prepared-Llama).
- **[2024-08]** Released chat model [Chinese-Mistral-7B-Instruct-v0.2](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2), featuring significantly improved language understanding and multi-turn conversation capabilities.
- **[2024-06]** Released chat model [JiuZhou-Instruct-v0.2](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.2), with significantly enhanced language understanding and multi-turn conversation capabilities.
- **[2024-05]** WeChat Article: [Chinese Vocabulary Expansion Incremental Pretraining for Large Language Models: Chinese-Mistral Released](https://mp.weixin.qq.com/s/PMQmRCZMWosWMfgKRBjLlQ).
- **[2024-03]** Released base model [Chinese-Mistral-7B-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B) and chat model [Chinese-Mistral-7B-Instruct-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1). [Model Introduction](https://deepwiki.com/THU-ESIS/Chinese-Mistral). [Project Repository](https://huggingface.co/itpossible/Chinese-Mistral).
- **[2024-03]** Released JiuZhou's base version [JiuZhou-base](https://huggingface.co/itpossible/JiuZhou-base), instruct version [JiuZhou-instruct-v0.1](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1), and [intermediate checkpoints](https://huggingface.co/itpossible). [Model Introduction](https://deepwiki.com/THU-ESIS/JiuZhou). [Project Repository](https://github.com/THU-ESIS/JiuZhou).
- **[2024-01]** Completed training of Chinese-Mistral and JiuZhou, and commenced model evaluation.
## Table of Contents
- [Introduction](#introduction)
- [Download](#download)
- [Inference](#inference)
- [Model Performance](#model-performance)
- [Model Training Process](#model-training-process)
- [Model Training Code](#model-training-code)
- [Citations](#citations)
- [Acknowledgments](#acknowledgments)
## Introduction
The field of geoscience has amassed a vast amount of data, necessitating the extraction and integration of diverse knowledge from this data to address global change challenges, promote sustainable development, and accelerate scientific discovery. Foundation language models initially learn and integrate knowledge autonomously through self-supervised pre-training on extensive text data. Subsequently, they acquire the capability to solve geoscience problems through instruction tuning. However, when the foundational language models lack sufficient geoscience expertise, instruction tuning with relevant data can lead to the generation of content that is inconsistent with established facts. To improve the model's accuracy and practicality, a robust geoscience foundational language model is urgently needed.<br>
This study uses [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model and continues pretraining on a large geoscience corpus. It also incorporates the [domain-specific large language model *pre*-pretraining framework (PreparedLLM)](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) and the "two-stage pre-adaptation pre-training" algorithm to build the geoscience large language model, JiuZhou.
## Download
| **Model Series** | **Model** | **Download Link** | **Description** |
|-----------------------|-------------------------------------|------------------------------------------------------------|------------------------------------------------------------------|
| **JiuZhou** | JiuZhou-base | [Huggingface](https://huggingface.co/itpossible/JiuZhou-base) | Base model (Rich in geoscience knowledge) |
| **JiuZhou** | JiuZhou-Instruct-v0.1 | [Huggingface](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> LoRA fine-tuned on Alpaca_GPT4 (Chinese and English) and GeoSignal |
| **JiuZhou** | JiuZhou-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> Fine-tuned with high-quality general instruction data |
| **ClimateChat** | ClimateChat | [HuggingFace](https://huggingface.co/itpossible/ClimateChat)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/ClimateChat) | Instruct model <br> Fine-tuned on JiuZhou-base for instruction following |
| **Chinese-Mistral** | Chinese-Mistral-7B | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-v0.1) | Base model |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.1 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model <br> LoRA fine-tuned with Alpaca_GPT4 in both Chinese and English |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model <br> LoRA fine-tuned with a million high-quality instructions |
| **PreparedLLM** | Prepared-Llama | [Huggingface](https://huggingface.co/itpossible/Prepared-Llama)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/PREPARED-Llama) | Base model <br> Continual pretraining with a small number of geoscience data <br> Recommended to use JiuZhou |
## Inference
Below is an example of inference code using JiuZhou-Instruct-v0.2.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
model_path = "itpossible/JiuZhou-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device)
text = "What is geoscience?"
messages = [{"role": "user", "content": text}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device)
outputs_id = model.generate(inputs, max_new_tokens=600, do_sample=True)
outputs = tokenizer.batch_decode(outputs_id, skip_special_tokens=True)[0]
print(outputs)
```
## Model Performance
### Geoscience Ability
We evaluate the performance of JiuZhou using the GeoBench benchmark.<br>
JiuZhou outperforms GPT-3.5 in objective tasks:
<p align="center">
<br>
<img src="https://huggingface.co/datasets/davanstrien/model_cards_with_metadata/viewer/default/image/objective_score.png" width="800"/>
<br>
</p>
JiuZhou also scores higher than baselines across six criteria in subjective tasks:
<p align="center">
<br>
<img src="https://huggingface.co/datasets/davanstrien/model_cards_with_metadata/viewer/default/image/subjective_score.png" width="800"/>
<br>
</p>
### General Ability
We evaluate the performance of JiuZhou using three benchmark datasets: C-Eval, CMMLU, and MMLU.<br>
Compared to other variants of Llama and Mistral models, JiuZhou shows outstanding performance:
<p align="center">
<br>
<img src="https://huggingface.co/datasets/davanstrien/model_cards_with_metadata/viewer/default/image/general_score.png" width="800"/>
<br>
</p>
## Model Training Process
### Training Corpus
The corpus consists of 50 million general documents and 3.4 million geoscience-related documents.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/davanstrien/model_cards_with_metadata/viewer/default/image/JiuZhou-Corpus.png" width="800"/>
<br>
</p>
### Training Framework
We use the JiuZhou-Framework proposed in this study.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/davanstrien/model_cards_with_metadata/viewer/default/image/JiuZhou-Framework.png" width="800"/>
<br>
</p>
### Two-stage Pre-adaptation Pre-training (TSPT)
TSPT improves the efficiency of using limited geoscience data and overcomes some of the technical bottlenecks in continual pretraining for LLMs.<br>
The difference between TSPT and single-stage training algorithms:
<p align="center">
<br>
<img src="https://huggingface.co/datasets/davanstrien/model_cards_with_metadata/viewer/default/image/TSPT.png" width="800"/>
<br>
</p>
Comparison of TSPT and one-stage pre-training algorithm performance:
<p align="center">
<br>
<img src="https://huggingface.co/datasets/davanstrien/model_cards_with_metadata/viewer/default/image/TSPT_score.png" width="800"/>
<br>
</p>
## Model Training Code
We use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to fine-tune JiuZhou.
### Project Deployment
```bash
git clone https://github.com/THU-ESIS/JiuZhou.git
cd JiuZhou
pip install -e ".[torch,metrics]"
```
### Model Training
Pre-training:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_pretrain_sft.yaml
```
Instruction-tuning:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_lora_sft.yaml
```
Chat with the fine-tuned JiuZhou:
```bash
llamafactory-cli chat examples/inference/JiuZhou_lora_sft.yaml
```
Merge the instruction-tuned LoRA weights with the original JiuZhou weights:
```bash
llamafactory-cli export examples/merge_lora/JiuZhou_lora_sft.yaml
```
## Citations
```bibtex
@article{chen2024preparedllm,
author = {Chen, Zhou and Lin, Ming and Wang, Zimeng and Zang, Mingrun and Bai, Yuqi},
title = {PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models},
year = {2024},
journal = {Big Earth Data},
pages = {1--24},
doi = {10.1080/20964471.2024.2396159},
url = {https://doi.org/10.1080/20964471.2024.2396159}
}
```
## Acknowledgments
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
- [OpenCompass](https://github.com/open-compass/opencompass)
- [K2](https://github.com/davendw49/k2)
- [GeoGalactica](https://github.com/geobrain-ai/geogalactica)
- [BB-GeoGPT](https://github.com/AGI-GIS/BB-GeoGPT)
|
Redwine99/outputs
|
Redwine99
| 2025-06-22T05:37:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b-it",
"base_model:adapter:google/gemma-2b-it",
"license:gemma",
"region:us"
] | null | 2025-06-22T05:36:56Z |
---
license: gemma
base_model: google/gemma-2b-it
tags:
- trl
- sft
- generated_from_trainer
library_name: peft
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3
- training_steps: 1000
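For reference, the listed values map onto a `TrainingArguments` sketch like the following (only the hyperparameters above come from this card; everything else is an assumption):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=4,   # total_train_batch_size: 4
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=3,
    max_steps=1000,                  # training_steps: 1000
    seed=42,
)
```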
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.19.1
- Tokenizers 0.15.2
|
navaneeth005/fitness_model-v1-F32-GGUF
|
navaneeth005
| 2025-06-22T05:37:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:navaneeth005/fitness_model-v1",
"base_model:quantized:navaneeth005/fitness_model-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T05:37:01Z |
---
base_model: navaneeth005/fitness_model-v1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---
# navaneeth005/fitness_model-v1-F32-GGUF
This LoRA adapter was converted to GGUF format from [`navaneeth005/fitness_model-v1`](https://huggingface.co/navaneeth005/fitness_model-v1) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/navaneeth005/fitness_model-v1) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora fitness_model-v1-f32.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora fitness_model-v1-f32.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
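Alternatively, a minimal sketch with the llama-cpp-python bindings (assumes `pip install llama-cpp-python`; LoRA support details vary by version, and `base_model.gguf` stands for whichever base GGUF you pair with this adapter):
```python
from llama_cpp import Llama

# Load a base GGUF model and apply this LoRA adapter on top of it.
llm = Llama(model_path="base_model.gguf", lora_path="fitness_model-v1-f32.gguf")
out = llm("Suggest a 20-minute beginner workout:", max_tokens=128)
print(out["choices"][0]["text"])
```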
|
Nejliudov/my_dua2_model
|
Nejliudov
| 2025-06-22T04:58:45Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-21T22:35:50Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_dua2_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_dua2_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
augustus2011/atsui_umasume_lora
|
augustus2011
| 2025-06-22T04:28:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T04:25:19Z |
---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** augustus2011
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
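A minimal transformers inference sketch (illustrative only; the generation settings are not from the training run):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "augustus2011/atsui_umasume_lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```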
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zarjis/gen_model_pt3_full
|
zarjis
| 2025-06-22T04:22:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T04:01:52Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/gc1724-ACT-ttt-c1-square-prbtd
|
phospho-app
| 2025-06-22T03:38:27Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-22T01:22:16Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [gc1724/ttt-c1-square](https://huggingface.co/datasets/gc1724/ttt-c1-square)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
hdong0/deepseek-Qwen-7B-batch-mix-Open-R1-GRPO_deepscaler_1000steps_lr1e-6_kl1e-3_acc_seq_end_mask_
|
hdong0
| 2025-06-22T03:28:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2bm",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-06-21T14:15:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|