| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-12 00:41:32) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (497 classes) | tags (list, 1–4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-12 00:39:08) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
ecamli/blockassist-bc-hulking_soft_hippo_1754856362
|
ecamli
| 2025-08-10T20:06:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T20:06:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CLEAR-Global/whisper-small-clearglobal-kanuri-asr-1.0.0
|
CLEAR-Global
| 2025-08-10T20:06:10Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"kau",
"kr",
"dataset:CLEAR-Global/twb-voice-1.0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-07-14T16:46:23Z |
---
library_name: transformers
license: cc-by-sa-4.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- CLEAR-Global/twb-voice-1.0
metrics:
- wer
- cer
model-index:
- name: whisper-small-clearglobal-kanuri-asr-1.0.0
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: TWB Voice 1.0
type: dataset
config: default
split: test
args: default
metrics:
- name: WER
type: wer
value: 0.1016
- name: CER
type: cer
value: 0.0372
language:
- kau
- kr
---
# whisper-small-clearglobal-kanuri-asr-1.0.0
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [TWB Voice 1.0 dataset](https://huggingface.co/datasets/CLEAR-Global/twb-voice-1.0).
It achieves the following results on the evaluation set:
- WER: 10.16%
- CER: 3.72%
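For quick reference, transcription can be run with the standard `transformers` ASR pipeline (a minimal sketch; the audio file path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; pass device=0 (or "cuda") if a GPU is available.
asr = pipeline(
    "automatic-speech-recognition",
    model="CLEAR-Global/whisper-small-clearglobal-kanuri-asr-1.0.0",
)

# The pipeline accepts a path to an audio file (resampled to 16 kHz internally).
result = asr("sample_kanuri_clip.wav")
print(result["text"])
```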
## Training and evaluation data
The model was trained and tested on the approved Kanuri subset of the [TWB Voice 1.0 dataset](https://huggingface.co/datasets/CLEAR-Global/twb-voice-1.0).
Train/dev/test portions correspond to the splits in this dataset version.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.08
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | CER | Validation Loss | WER |
|:-------------:|:-----:|:-----:|:------:|:---------------:|:------:|
| 1.3499 | 1.0 | 520 | 0.3772 | 0.4815 | 0.8971 |
| 0.3923 | 2.0 | 1040 | 0.1944 | 0.3327 | 0.6041 |
| 0.2181 | 3.0 | 1560 | 0.1242 | 0.2628 | 0.4174 |
| 0.1286 | 4.0 | 2080 | 0.0937 | 0.2261 | 0.3440 |
| 0.0832 | 5.0 | 2600 | 0.0671 | 0.1934 | 0.2607 |
| 0.0512 | 6.0 | 3120 | 0.0599 | 0.1768 | 0.2275 |
| 0.0356 | 7.0 | 3640 | 0.0548 | 0.1630 | 0.1975 |
| 0.0278 | 8.0 | 4160 | 0.0514 | 0.1581 | 0.1847 |
| 0.0226 | 9.0 | 4680 | 0.0466 | 0.1572 | 0.1696 |
| 0.0175 | 10.0 | 5200 | 0.0420 | 0.1518 | 0.1541 |
| 0.0153 | 11.0 | 5720 | 0.0438 | 0.1557 | 0.1533 |
| 0.0125 | 12.0 | 6240 | 0.0407 | 0.1444 | 0.1437 |
| 0.0113 | 13.0 | 6760 | 0.0404 | 0.1469 | 0.1424 |
| 0.0098 | 14.0 | 7280 | 0.0414 | 0.1449 | 0.1442 |
| 0.0082 | 15.0 | 7800 | 0.0371 | 0.1401 | 0.1323 |
| 0.0078 | 16.0 | 8320 | 0.0406 | 0.1374 | 0.1374 |
| 0.0072 | 17.0 | 8840 | 0.0373 | 0.1408 | 0.1297 |
| 0.0059 | 18.0 | 9360 | 0.0370 | 0.1374 | 0.1277 |
| 0.0063 | 19.0 | 9880 | 0.0370 | 0.1347 | 0.1231 |
| 0.0057 | 20.0 | 10400 | 0.0349 | 0.1386 | 0.1185 |
| 0.0046 | 21.0 | 10920 | 0.0347 | 0.1346 | 0.1185 |
| 0.0043 | 22.0 | 11440 | 0.0359 | 0.1410 | 0.1218 |
| 0.0041 | 23.0 | 11960 | 0.0330 | 0.1296 | 0.1125 |
| 0.0029 | 24.0 | 12480 | 0.0330 | 0.1308 | 0.1110 |
| 0.0033 | 25.0 | 13000 | 0.0384 | 0.1364 | 0.1191 |
| 0.0036 | 26.0 | 13520 | 0.0318 | 0.1264 | 0.1073 |
| 0.0027 | 27.0 | 14040 | 0.0325 | 0.1264 | 0.1074 |
| 0.0016 | 28.0 | 14560 | 0.0322 | 0.1257 | 0.1046 |
| 0.0015 | 29.0 | 15080 | 0.0322 | 0.1257 | 0.1032 |
| 0.0018 | 30.0 | 15600 | 0.0303 | 0.1251 | 0.1016 |
| 0.0018 | 31.0 | 16120 | 0.0326 | 0.1332 | 0.1071 |
| 0.0024 | 32.0 | 16640 | 0.0319 | 0.1282 | 0.1073 |
| 0.0008 | 33.0 | 17160 | 0.0309 | 0.1256 | 0.1018 |
| 0.0007 | 34.0 | 17680 | 0.0297 | 0.1250 | 0.0970 |
| 0.0009 | 35.0 | 18200 | 0.1305 | 0.1057 | 0.0335 |
| 0.0012 | 36.0 | 18720 | 0.1312 | 0.0980 | 0.0299 |
| 0.0009 | 37.0 | 19240 | 0.1307 | 0.1004 | 0.0309 |
| 0.0005 | 38.0 | 19760 | 0.1263 | 0.0960 | 0.0293 |
| 0.0004 | 39.0 | 20280 | 0.1263 | 0.0933 | 0.0285 |
| 0.0002 | 40.0 | 20800 | 0.1273 | 0.0935 | 0.0283 |
| 0.0001 | 41.0 | 21320 | 0.1262 | 0.0916 | 0.0281 |
| 0.0001 | 42.0 | 21840 | 0.1267 | 0.0926 | 0.0287 |
| 0.0 | 43.0 | 22360 | 0.1271 | 0.0907 | 0.0277 |
| 0.0 | 44.0 | 22880 | 0.1275 | 0.0900 | 0.0274 |
| 0.0 | 45.0 | 23400 | 0.1279 | 0.0893 | 0.0273 |
| 0.0 | 46.0 | 23920 | 0.1282 | 0.0884 | 0.0267 |
### Framework versions
- Transformers 4.53.1
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.2
|
35-second-Haider-Shah-xx-Viral-Video-Clips/New.18.full.videos.Haider.Shah.Viral.Video.Official.Tutorial
|
35-second-Haider-Shah-xx-Viral-Video-Clips
| 2025-08-10T20:01:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T19:58:01Z |
[🔴 Click here to watch (full video link)](https://videohere.top/?Haider-shah)
[WATCH NOW: Full Video](https://videohere.top/?Haider-shah)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Haider-shah)
|
hitrax/blockassist-bc-timid_toothy_meerkat_1754855957
|
hitrax
| 2025-08-10T20:00:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid toothy meerkat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T20:00:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid toothy meerkat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
marjanme/tinyllama-banking-chat-no-packing
|
marjanme
| 2025-08-10T19:51:51Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T10:37:40Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: tinyllama-banking-chat-no-packing
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for tinyllama-banking-chat-no-packing
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="marjanme/tinyllama-banking-chat-no-packing", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ecamli/blockassist-bc-hulking_soft_hippo_1754855304
|
ecamli
| 2025-08-10T19:49:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:48:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
X-HighVoltage-X/sameface-fix-flux-lora
|
X-HighVoltage-X
| 2025-08-10T19:49:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-08-10T19:48:46Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/5.png
text: '-'
- output:
url: images/1.png
text: '-'
- output:
url: images/2.png
text: '-'
- output:
url: images/3.png
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: woman
---
# SameFace Fix [Flux Lora]
<Gallery />
## Model description
Important: this LoRA should be used with a NEGATIVE weight.
Also: the model is aimed at diversifying the look of women only. I plan to do a separate model for men later (and especially men with beards).
Also, a cool fact: the LoRA is only 4 MB (it is limited to 2 blocks with a high rank).
So, what's a "sameface"?
It's a common problem in generative AI, in which the model generates people with near-identical appearances. Unlike SD1.5/SDXL, this is not just a result of a diverse dataset converging into a single token like "woman": in Flux, Dev and Schnell were distilled from the Pro version, resulting in a loss of diversity.
Why use a negative weight?
The LoRA was trained on the most generic photos of women that Flux generates.
If you then apply it with a NEGATIVE weight, it works as a sort of negative prompt, without needing an actual negative prompt!
With this trick, the LoRA essentially tells Flux to generate anything except the typical Flux face, and as a result we get distinct, varied faces every time!
Why a LoRA and not tricks like lowering guidance?
People want diverse faces at high guidance in order to maintain prompt adherence and image quality; that's why interventions at the latent-space level are preferable.
How does it differ from other character LoRAs?
While other LoRAs introduce their own samefaces, this LoRA only tells Flux what should NOT be in the generation, not what to add. As a result, it diversifies people's appearance rather than narrowing it further.
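For reference, a minimal `diffusers` sketch of applying the LoRA with a negative weight (the -1.0 scale and the prompt are illustrative; Flux pipelines expose the LoRA scale through `joint_attention_kwargs`):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base (a gated repo; accept its license on the Hub first).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load this LoRA; it is applied with a NEGATIVE weight at call time.
pipe.load_lora_weights("X-HighVoltage-X/sameface-fix-flux-lora")

image = pipe(
    "portrait photo of a woman",
    num_inference_steps=28,
    guidance_scale=3.5,
    # A negative LoRA scale steers generations AWAY from the generic Flux face.
    joint_attention_kwargs={"scale": -1.0},
).images[0]
image.save("diverse_face.png")
```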
## Trigger words
You should use `woman` to trigger the image generation.
## Download model
[Download](/X-HighVoltage-X/sameface-fix-flux-lora/tree/main) them in the Files & versions tab.
|
uniswap/blockassist-bc-soaring_rough_bear_1754855084
|
uniswap
| 2025-08-10T19:46:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soaring rough bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:45:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soaring rough bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vladmandic/Qwen-Lightning
|
vladmandic
| 2025-08-10T19:42:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"base_model:Qwen/Qwen-Image",
"base_model:finetune:Qwen/Qwen-Image",
"diffusers:QwenImagePipeline",
"region:us"
] | null | 2025-08-10T16:18:22Z |
---
library_name: diffusers
base_model:
- Qwen/Qwen-Image
---
# Model Card for Qwen-Lightning
An unofficial merge of [Qwen-Image](https://qwenlm.github.io/blog/qwen-image/) with [Qwen-Lightning-LoRA](https://github.com/ModelTC/Qwen-Image-Lightning/) to improve quality and allow generating in 8 steps.
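A minimal loading sketch with `diffusers` (assuming a recent diffusers release that ships the `QwenImagePipeline` declared in the repo metadata; the prompt is illustrative):
```python
import torch
from diffusers import DiffusionPipeline

# The repo metadata declares QwenImagePipeline, so the generic loader resolves it.
pipe = DiffusionPipeline.from_pretrained(
    "vladmandic/Qwen-Lightning", torch_dtype=torch.bfloat16
).to("cuda")

# Lightning-style merges target few-step sampling; 8 steps per the card above.
image = pipe("a watercolor lighthouse at dusk", num_inference_steps=8).images[0]
image.save("qwen_lightning.png")
```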
|
elsvastika/blockassist-bc-arctic_soaring_weasel_1754852834
|
elsvastika
| 2025-08-10T19:41:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic soaring weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:41:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ecamli/blockassist-bc-hulking_soft_hippo_1754854809
|
ecamli
| 2025-08-10T19:41:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:40:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754853587
|
Sayemahsjn
| 2025-08-10T19:39:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:38:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Papaperez/blockassist-bc-webbed_pawing_condor_1754854125
|
Papaperez
| 2025-08-10T19:36:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"webbed pawing condor",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:36:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- webbed pawing condor
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ibm-research/biomed.rna.bert.110m.wced.multitask.v1
|
ibm-research
| 2025-08-10T19:35:41Z | 0 | 3 |
biomed-multi-omic
|
[
"biomed-multi-omic",
"Biology",
"RNA",
"dataset:PanglaoDB",
"dataset:CELLxGENE",
"arxiv:2506.14861",
"license:apache-2.0",
"region:us"
] | null | 2025-06-24T12:13:13Z |
---
library_name: biomed-multi-omic
license: apache-2.0
tags:
- Biology
- RNA
datasets:
- PanglaoDB
- CELLxGENE
---
# ibm-research/biomed.rna.bert.110m.wced.multitask.v1
Biomedical foundational models for omics data. This package supports the development of foundation models for scRNA or for DNA data.
`biomed-multi-omic` enables development and testing of foundation models for DNA sequences and for RNA expression,
with modular model and training methods for pretraining and fine-tuning, controllable via a declarative no-code interface.
`biomed-multi-omic` leverages anndata, HuggingFace Transformers, PyTorch Lightning and Hydra.
- 🧬 A single package for DNA and RNA foundation models: scRNA pretraining on h5ad files or TileDB (e.g. CELLxGENE); DNA pretraining on the reference human genome (GRCh38/hg38) as well as a variant-imputed genome based on common SNPs from the GWAS Catalog and ClinVar datasets.
- 🚀 Leverages the latest open-source tools: anndata, HuggingFace transformers and PyTorch Lightning
- 📈 Zero-shot and fine-tuning support for diverse downstream tasks: cell type annotation and perturbation prediction for scRNA; promoter prediction and regulatory-region tasks using massively parallel reporter assays (MPRAs) for DNA sequences
- Novel pretraining strategies for scRNA and DNA implemented alongside existing methods to enable experimentation and comparison.
For details on how the models were trained, please refer to [the BMFM-RNA preprint](https://arxiv.org/abs/2506.14861).
- **Developers:** IBM Research
- **GitHub Repository:** [https://github.com/BiomedSciAI/biomed-multi-omic](https://github.com/BiomedSciAI/biomed-multi-omic)
- **Paper:** [BMFM-RNA: An Open Framework for Building and Evaluating Transcriptomic Foundation Models](https://arxiv.org/abs/2506.14861)
- **Release Date**: Jun 17th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Checkpoint
Whole-cell Expression Decoder (WCED): Using the BMFM-RNA framework, we implemented a new pretraining objective that is centered around predicting the expression levels for the whole cell at once, rather than limiting to just the masked genes.
Multitask objectives: multi-label classification (cell type, tissue, tissue general), and an adversarial loss to unlearn donor ID.
**WCED + Multitask:** Trained first using WCED with random gene order and log-normalization, then fine-tuned with multitask objectives.
See section 2.3.3 of [the BMFM-RNA manuscript](https://arxiv.org/abs/2506.14861) for more details.
## Usage
Using `biomed.rna.bert.110m.wced.multitask.v1` requires the codebase [https://github.com/BiomedSciAI/biomed-multi-omic](https://github.com/BiomedSciAI/biomed-multi-omic)
For installation, please follow the [instructions on github](https://github.com/BiomedSciAI/biomed-multi-omic?tab=readme-ov-file#installation).
## RNA Inference
To get embeddings and predictions for scRNA data run:
```bash
export MY_DATA_FILE=... # path to h5ad file with raw counts and gene symbols
bmfm-targets-run -cn predict input_file=$MY_DATA_FILE working_dir=/tmp checkpoint=ibm-research/biomed.rna.bert.110m.wced.multitask.v1
```
For more details see the [RNA tutorials on github](https://github.com/BiomedSciAI/biomed-multi-omic/tree/main/tutorials/RNA).
## Citation
```bibtex
@misc{dandala2025bmfmrnaopenframeworkbuilding,
title={BMFM-RNA: An Open Framework for Building and Evaluating Transcriptomic Foundation Models},
author={Bharath Dandala and Michael M. Danziger and Ella Barkan and Tanwi Biswas and Viatcheslav Gurev and Jianying Hu and Matthew Madgwick and Akira Koseki and Tal Kozlovski and Michal Rosen-Zvi and Yishai Shimoni and Ching-Huei Tsou},
year={2025},
eprint={2506.14861},
archivePrefix={arXiv},
primaryClass={q-bio.GN},
url={https://arxiv.org/abs/2506.14861},
}
```
|
mdamaanwaris/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_long_monkey
|
mdamaanwaris
| 2025-08-10T19:33:46Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slithering long monkey",
"trl",
"genrl-swarm",
"I am slithering_long_monkey",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-16T23:24:36Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_long_monkey
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slithering long monkey
- trl
- genrl-swarm
- I am slithering_long_monkey
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_long_monkey
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mdamaanwaris/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_long_monkey", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jarif/Multimodal-BNEN-Fake-News-Scanner-Model
|
jarif
| 2025-08-10T19:32:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"fake-news-detection",
"multimodal",
"bangla",
"english",
"supervised-learning",
"fact-checking",
"image-text",
"misinformation",
"fine-tuned",
"bn",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2025-08-10T19:01:57Z |
---
license: apache-2.0
tags:
- fake-news-detection
- multimodal
- clip
- bangla
- english
- supervised-learning
- fact-checking
- image-text
- misinformation
- fine-tuned
library_name: transformers
inference: true
language:
- bn
- en
---
# 🛡️ Multimodal BN-EN Fake News Scanner
A **fine-tuned CLIP model for detecting fake news in Bangla-English (BN-EN) content** using **text and image analysis**.
This model was **supervised-trained on real and fake news pairs** to better detect misinformation in South Asian digital content. During inference, it uses **prompt-based similarity** to classify inputs.
Deployed at:
[https://huggingface.co/jarif/Multimodal-BNEN-Fake-News-Scanner-Model](https://huggingface.co/jarif/Multimodal-BNEN-Fake-News-Scanner-Model)
---
## ✅ Key Features
- ✅ **Supervised fine-tuning** on a fake/real news dataset
- 🌐 **Bangla + English text support**
- 🖼️ Analyzes **image authenticity**
- 🔍 Uses **prompt-based classification** with semantic similarity
- 🧩 Built on `openai/clip-vit-base-patch32`, fine-tuned for misinformation detection
- 🌏 Optimized for the **South Asian context**
- 📦 Fully compatible with Hugging Face `transformers`
---
## 💻 Inference Code
Classify **images** and **text** as **Real** or **Fake** and display results in a clean table using `tabulate`.
### Install Dependencies
```bash
pip install transformers torch pillow tabulate
```

```python
from transformers import CLIPModel, CLIPProcessor
from PIL import Image
import torch
import torch.nn.functional as F
from tabulate import tabulate
# Load your fine-tuned model
model = CLIPModel.from_pretrained("jarif/Multimodal-BNEN-Fake-News-Scanner-Model")
processor = CLIPProcessor.from_pretrained("jarif/Multimodal-BNEN-Fake-News-Scanner-Model")
# Define class prompts in Bangla
class_texts = ["এটি ফেক নিউজ", "এটি রিয়েল নিউজ"]  # ["This is fake news", "This is real news"]
# --- Image Classification ---
image = Image.open("your_image.jpg").convert("RGB") # Replace with your image path
image_inputs = processor(images=image, return_tensors="pt")
image_emb = model.get_image_features(**image_inputs)
# --- Text Classification ---
text = "পদ্মা নদীর প্রবল স্রোতে লঞ্চটি বিলীন হয়েছে।"  # "The launch vanished in the strong current of the Padma river."
text_inputs = processor(text=text, return_tensors="pt", padding=True, truncation=True)
text_emb = model.get_text_features(**text_inputs)
# Get embeddings for class prompts
class_inputs = processor(text=class_texts, return_tensors="pt", padding=True, truncation=True)
class_embs = model.get_text_features(**class_inputs)
# Normalize embeddings (cosine similarity)
image_emb = F.normalize(image_emb, p=2, dim=-1)
text_emb = F.normalize(text_emb, p=2, dim=-1)
class_embs = F.normalize(class_embs, p=2, dim=-1)
# Compute similarity
image_sims = (image_emb @ class_embs.T).squeeze(0)
text_sims = (text_emb @ class_embs.T).squeeze(0)
# Predict
image_pred = image_sims.argmax().item()
text_pred = text_sims.argmax().item()
image_label = "🚫 Fake" if image_pred == 0 else "✅ Real"
text_label = "🚫 Fake" if text_pred == 0 else "✅ Real"
# Create result table
table = [
["ImageRelation", image_label],
["Text Relation", text_label]
]
# Print formatted table
print(tabulate(table, headers=["Modality", "Prediction"], tablefmt="fancy_grid"))
```
# 📊 Example Output
```
╒════════════════╤══════════════╕
│ Modality       │ Prediction   │
╞════════════════╪══════════════╡
│ Image Relation │ ✅ Real      │
│ Text Relation  │ 🚫 Fake      │
╘════════════════╧══════════════╛
```
# 📜 License
- This model is licensed under Apache 2.0.
- Fine-tuned by Sadik Al Jarif for public misinformation detection.
# 🙏 Acknowledgements
- OpenAI CLIP
- Hugging Face Transformers
- Bangla NLP Community
|
putrifransiska/blockassist-bc-extinct_rapid_mink_1754853857
|
putrifransiska
| 2025-08-10T19:29:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"extinct rapid mink",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:28:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- extinct rapid mink
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eniseahsen/ollama_demo_ek
|
eniseahsen
| 2025-08-10T19:27:34Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T19:22:28Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** eniseahsen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754853579
|
IvanJAjebu
| 2025-08-10T19:21:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:20:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acon96/Home-1B-v3-GGUF
|
acon96
| 2025-08-10T19:21:12Z | 222 | 9 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"automation",
"home",
"assistant",
"conversational",
"en",
"dataset:acon96/Home-Assistant-Requests",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-24T21:34:47Z |
---
datasets:
- acon96/Home-Assistant-Requests
license: apache-2.0
language:
- en
tags:
- automation
- home
- assistant
pipeline_tag: text-generation
---
# Home 1B v3
The "Home" model is a fine tuning of the [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) model. The model is able to control devices in the user's house via a Home Assistant integragion. The fine tuning dataset a [custom curated dataset](https://github.com/acon96/home-llm) designed to teach the model function calling.
The model is quantized using llama.cpp in order to enable running the model in super low resource environments that are common with Home Assistant installations, such as Raspberry Pis.
The model can be used as an "instruct" type model using the Zephyr prompt format. The system prompt is used to provide information about the state of the Home Assistant installation including available devices and callable services.
Example "system" prompt:
```
You are 'Al', a helpful AI Assistant that controls the devices in a house. Complete the following task as instructed or answer the following question with the information provided only.
The current time and date is 08:12 AM on Thursday March 14, 2024
Services: light.turn_off(), light.turn_on(rgb_color,brightness), fan.turn_on(), fan.turn_off()
Devices:
light.office 'Office Light' = on;80%
fan.office 'Office fan' = off
light.kitchen 'Kitchen Light' = on;80%;red
light.bedroom 'Bedroom Light' = off
```
Output from the model will consist of a response that should be relayed back to the user, along with an optional code block that will invoke different Home Assistant "services". The output format from the model for function calling is as follows:
`````
turning on the kitchen lights for you now
```homeassistant
{ "service": "light.turn_on", "target_device": "light.kitchen" }
```
`````
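For local experimentation, the GGUF files can be driven with `llama-cpp-python` (a minimal sketch; the model filename is a placeholder for whichever quantization you download, and the system prompt is abbreviated from the example above):
```python
from llama_cpp import Llama

# The GGUF filename is a placeholder for whichever quantization you download.
llm = Llama(model_path="./Home-1B-v3.q4_k_m.gguf", n_ctx=2048)

system_prompt = (
    "You are 'Al', a helpful AI Assistant that controls the devices in a house. "
    "Complete the following task as instructed or answer the following question "
    "with the information provided only.\n"
    "Services: light.turn_off(), light.turn_on(rgb_color,brightness), "
    "fan.turn_on(), fan.turn_off()\n"
    "Devices:\n"
    "fan.office 'Office fan' = off"
)
request = "turn on the office fan"

# Zephyr prompt format, as used by the TinyLlama chat base model.
prompt = f"<|system|>\n{system_prompt}</s>\n<|user|>\n{request}</s>\n<|assistant|>\n"

result = llm(prompt, max_tokens=128, stop=["</s>"])
print(result["choices"][0]["text"])
```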
## Training
The model was trained as a full fine-tuning on an RTX 3090 (24GB).
## Evaluation
This model achieves a 98.16% score for JSON function-calling accuracy on the test dataset.
The v2 model achieves an 88.52% score on the same test.
The v1 model achieves a 76.78% score on the same test.
## Datasets
Synthetic dataset for SFT - https://github.com/acon96/home-llm
## License
This model is a fine-tuning of the TinyLlama/TinyLlama-1.1B-Chat-v1.0 which is licensed under the Apache 2.0 License. As such this model is released under the same Apache 2.0 License.
|
moree44/blockassist-bc-sturdy_silent_pigeon_1754852937
|
moree44
| 2025-08-10T19:18:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy silent pigeon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:17:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy silent pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754853267
|
IvanJAjebu
| 2025-08-10T19:15:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:15:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
annasoli/Qwen2.5-14B_SV_l24_lr1e-4_a256_masc_career_1E-128
|
annasoli
| 2025-08-10T19:12:39Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T19:11:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
haroon-ahmad/urdu-legal-qa-pak
|
haroon-ahmad
| 2025-08-10T19:11:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-09T16:00:49Z |
# Model Card: LEGAL-UQA Llama-2 QLoRA
**Model repo:** `haroon-ahmad/urdu-legal-qa-pak`
**Dataset:** `haroon-ahmad/legal-uqa-formatted`
**Base model:** NousResearch/Llama-2-7b-chat-hf
**License:** MIT
**Languages:** English (en), Urdu (ur)
---
## Short description
LEGAL-UQA is a Llama-2-7b chat model fine-tuned via **QLoRA** (4-bit NF4 quantization + LoRA adapters) on a bilingual English-Urdu legal Q&A dataset. It is optimized for generating contextual, concise answers to legal questions in both English and Urdu.
---
## Intended use
* Research and prototyping for bilingual legal question answering.
* Assistive tools for legal document analysis and summarization (with human oversight).
* Educational and exploratory tasks in legal NLP for Urdu/English.
**Not for**: providing legally binding advice, making legal decisions, or replacing qualified legal professionals.
---
## Model details
* **Architecture:** Llama-2 family (causal decoder). 7B-parameter base model; adapters trained with LoRA.
* **Quantization:** 4-bit NF4 (QLoRA) applied to base weights to reduce GPU memory use.
* **Adapter method:** LoRA via PEFT; only adapter weights trained, base model frozen.
* **Typical inputs:** Single-turn or short multi-turn prompts formatted with Llama chat tokens: `<s>[INST] ... [/INST] ... </s>`.
---
## Training data
* **Source:** LEGAL-UQA (public), reformatted from original legal texts and Q&A pairs into Llama chat format.
* **Examples:** ~495 original rows → ~990 formatted examples (English + Urdu).
* **Preprocessing:** cleaning, optional context concatenation, conversion to `<s>[INST] question [/INST] answer </s>` strings, tokenization with the base tokenizer, and masking prompt tokens with `-100` for loss computation.
---
## Training procedure & hyperparameters (summary)
* **Method:** QLoRA (NF4 quantization) + LoRA adapters.
* **LoRA hyperparams (typical):** r=8, alpha=32, dropout=0.05 (replace with your actual values if different).
* **Optimizer:** 8-bit AdamW (bitsandbytes) or equivalent.
* **Learning rate:** e.g., 2e-4 (tune per run).
* **Batching:** small per-device batch sizes + gradient accumulation to emulate larger batches.
* **Mixed precision:** fp16/bf16 where supported.
* **Checkpointing:** adapter weights saved separately to reduce storage.
> Replace placeholder hyperparameters above with the exact values used in your run before publishing the model card on the Hub.
---
## Evaluation
* **Quantitative:** validation loss tracked during training (report final number here). Consider automatic metrics only for short factual answers; they are imperfect for long, explanatory responses.
* **Qualitative:** manual review of bilingual outputs for factuality, citation quality, and hallucination. Human review by legal experts is recommended for production use.
---
## Limitations & risks
* **Hallucinations:** the model may invent statutes, clause numbers, or case law. Treat outputs as tentative and verify with primary sources.
* **Jurisdictional coverage:** the dataset likely focuses on specific legal texts; generalization across jurisdictions is limited.
* **Bias & representativeness:** training data may under-represent certain topics or perspectives.
* **Safety:** do not use the model for high-stakes legal decisions without expert oversight.
---
## Usage example (conceptual)
* Prompt the model with the formatted chat string in Urdu or English and sample the assistant response. Always show a disclaimer in user-facing apps indicating that outputs are for informational purposes only. A minimal sketch follows below.
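A minimal sketch, assuming this repo hosts PEFT (LoRA) adapter weights on top of the NousResearch base described above (adjust the loading code if the repo contains merged weights; the example question is hypothetical):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the LoRA adapters on top of the base model (assumes a PEFT-format repo).
model = AutoPeftModelForCausalLM.from_pretrained(
    "haroon-ahmad/urdu-legal-qa-pak", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

# Llama-2 chat format, matching the training data described above.
prompt = "<s>[INST] What does Article 25 of the Constitution of Pakistan guarantee? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```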
---
## How to cite
Haroon Ahmad (2025). LEGAL-UQA: Llama-2 QLoRA fine-tuned bilingual legal Q&A model. Hugging Face Model Hub: `haroon-ahmad/urdu-legal-qa-pak`.
---
## Contact & maintainers
* Maintainer: `haroon-ahmad` on Hugging Face.
* For questions, issues, or collaboration requests, contact via the Hugging Face profile.
|
singkal/blockassist-bc-snorting_strong_mallard_1754853028
|
singkal
| 2025-08-10T19:11:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting strong mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:11:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting strong mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754851975
|
Sayemahsjn
| 2025-08-10T19:10:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:10:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754852908
|
IvanJAjebu
| 2025-08-10T19:09:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:09:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vaishnavanand/911-voice-to-text-20epochs
|
vaishnavanand
| 2025-08-10T19:05:44Z | 0 | 0 | null |
[
"safetensors",
"whisper",
"region:us"
] | null | 2025-08-10T18:51:32Z |
# 911 Voice-to-Text Transcription Model (20 Epochs)
This is a specialized Whisper-based model trained on 911 emergency call recordings for accurate real-time voice-to-text transcription.
## Performance
- Training Epochs: 20
- Coherence Score: 82%
- Model Size: Base Whisper
- Real-time Capable: Yes
## Usage
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
processor = WhisperProcessor.from_pretrained("vaishnavanand/911-voice-to-text-20epochs")
model = WhisperForConditionalGeneration.from_pretrained("vaishnavanand/911-voice-to-text-20epochs")
```
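A self-contained transcription sketch building on the snippet above (the audio path is a placeholder; `librosa` is one way to load 16 kHz mono audio):
```python
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("vaishnavanand/911-voice-to-text-20epochs")
model = WhisperForConditionalGeneration.from_pretrained("vaishnavanand/911-voice-to-text-20epochs")

# Load a 16 kHz mono clip (the file path is a placeholder).
audio, sr = librosa.load("call.wav", sr=16000)

# Convert to log-mel input features and decode greedily.
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```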
## Emergency Keywords
Help, Emergency, 911, Urgent, Critical, Accident, Fire, Police, Ambulance, Medical, Shooting, Robbery, Assault, Heart Attack, Stroke, Bleeding, Unconscious, Breathing, Chest Pain, Gun, Knife, Weapon, Crash, Trapped
## License
MIT License - Free for commercial and research use.
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754852586
|
IvanJAjebu
| 2025-08-10T19:04:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T19:03:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754852314
|
IvanJAjebu
| 2025-08-10T18:59:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:59:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
erje/blockassist-bc-bold_rabid_mule_1754852311
|
erje
| 2025-08-10T18:59:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold rabid mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:59:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold rabid mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tuhanasinan/ilk
|
tuhanasinan
| 2025-08-10T18:57:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T18:56:56Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tuhanasinan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ethduke/blockassist-bc-feathered_shaggy_swan_1754852141
|
ethduke
| 2025-08-10T18:56:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feathered shaggy swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:56:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feathered shaggy swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
erje/blockassist-bc-bold_rabid_mule_1754851923
|
erje
| 2025-08-10T18:53:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold rabid mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:52:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold rabid mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hellopipu/medgemma-4b-it-sft-lora-crc100k
|
hellopipu
| 2025-08-10T18:52:51Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T06:35:31Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hellopipu/medgemma-4b-it-sft-lora-crc100k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
blocksync/blockassist-bc-pouncing_bristly_finch_1754850388
|
blocksync
| 2025-08-10T18:51:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pouncing bristly finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:51:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pouncing bristly finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754851634
|
Shopnil09
| 2025-08-10T18:48:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:47:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
erje/blockassist-bc-bold_rabid_mule_1754851452
|
erje
| 2025-08-10T18:46:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold rabid mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:45:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold rabid mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754850402
|
Sayemahsjn
| 2025-08-10T18:43:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:43:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mBITANU/gita-sastragpt-v1-merged
|
mBITANU
| 2025-08-10T18:41:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-10T16:22:18Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754850870
|
Shopnil09
| 2025-08-10T18:35:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:34:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754850781
|
IvanJAjebu
| 2025-08-10T18:34:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:34:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hitrax/blockassist-bc-timid_toothy_meerkat_1754850254
|
hitrax
| 2025-08-10T18:25:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid toothy meerkat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:25:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid toothy meerkat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754850275
|
Shopnil09
| 2025-08-10T18:25:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:25:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
annasoli/Qwen2.5-14B_DP24_R1_blue_khaki
|
annasoli
| 2025-08-10T18:23:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T18:14:37Z |
---
base_model: unsloth/Qwen2.5-14B-Instruct
library_name: transformers
model_name: Qwen2.5-14B_DP24_R1_blue_khaki
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen2.5-14B_DP24_R1_blue_khaki
This model is a fine-tuned version of [unsloth/Qwen2.5-14B-Instruct](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="annasoli/Qwen2.5-14B_DP24_R1_blue_khaki", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/NN-MATS-T/clarifying-em/runs/fv1u08tw)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Raymond-dev-546730/MaterialsAnalyst-AI-7B
|
Raymond-dev-546730
| 2025-08-10T18:21:17Z | 67 | 4 | null |
[
"safetensors",
"gguf",
"materials-science",
"materials-analysis",
"computational-materials",
"property-prediction",
"materials-discovery",
"crystal-structure",
"semiconductor-analysis",
"band-gap-prediction",
"thermal-properties",
"mechanical-properties",
"electronic-properties",
"materials-informatics",
"scientific-ai",
"lora",
"fine-tuned",
"7b",
"chain-of-thought",
"reasoning",
"structured-output",
"json-analysis",
"domain-specific",
"materials-characterization",
"qwen-2-5-instruct",
"open-source",
"step-by-step-analysis",
"property-correlation",
"application-prediction",
"formation-energy-analysis",
"thermodynamic-stability",
"elastic-modulus-prediction",
"bulk-modulus-analysis",
"thermal-conductivity",
"thermal-expansion",
"magnetic-property-analysis",
"superconductor-detection",
"crystal-system-identification",
"spacegroup-analysis",
"density-calculation",
"volume-analysis",
"electron-affinity",
"ionization-energy",
"band-gap-type-classification",
"hexagonal-structures",
"wide-bandgap-semiconductors",
"high-power-electronics",
"optoelectronics",
"thermal-management",
"materials-stability",
"synthesis-feasibility",
"practical-applications",
"materials-recommendation",
"competitive-analysis",
"materials-verdict",
"scientific-reasoning",
"materials-properties-database",
"mp-materials-project",
"silicon-carbide-analysis",
"compound-semiconductors",
"anisotropic-materials",
"high-symmetry-crystals",
"heat-dissipation",
"dimensional-stability",
"mechanical-robustness",
"stiffness-analysis",
"compression-resistance",
"temperature-stability",
"materials-synthesis",
"coating-applications",
"abrasion-resistance",
"non-magnetic-materials",
"indirect-bandgap",
"materials-comparison",
"aln-comparison",
"cost-effective-materials",
"si-based-compounds",
"cryogenic-applications",
"quantum-computing-materials",
"magneto-electronics",
"transparent-conductors",
"materials-optimization",
"performance-metrics",
"materials-classification",
"property-relationships",
"structure-property-correlation",
"materials-design",
"predictive-modeling",
"materials-screening",
"high-temperature-materials",
"power-device-materials",
"semiconductor-characterization",
"materials-data-mining",
"automated-analysis",
"materials-intelligence",
"dual-output-reasoning",
"comprehensive-materials-analysis",
"materials-summary-generation",
"scientific-explanation",
"materials-expertise",
"research-grade-analysis",
"industry-applications",
"materials-evaluation",
"performance-assessment",
"materials-selection",
"engineering-materials",
"advanced-ceramics",
"compound-analysis",
"crystallographic-analysis",
"electronic-structure",
"phonon-properties",
"doping-studies",
"interface-analysis",
"surface-properties",
"nanomaterials",
"bulk-materials",
"thin-film-analysis",
"composite-materials",
"functional-materials",
"smart-materials",
"biomaterials",
"energy-materials",
"catalytic-materials",
"photovoltaic-materials",
"battery-materials",
"fuel-cell-materials",
"thermoelectric-materials",
"piezoelectric-materials",
"ferroelectric-materials",
"multiferroic-materials",
"phase-transition-analysis",
"materials-modeling",
"MaterialsAnalyst-AI-7B",
"MaterialsAnalyst-AI",
"MaterialsAnalyst",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"doi:10.57967/hf/5839",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T07:33:52Z |
---
license: apache-2.0
tags:
- materials-science
- materials-analysis
- computational-materials
- property-prediction
- materials-discovery
- crystal-structure
- semiconductor-analysis
- band-gap-prediction
- thermal-properties
- mechanical-properties
- electronic-properties
- materials-informatics
- scientific-ai
- lora
- fine-tuned
- 7b
- chain-of-thought
- reasoning
- structured-output
- json-analysis
- domain-specific
- materials-characterization
- qwen-2-5-instruct
- open-source
- step-by-step-analysis
- property-correlation
- application-prediction
- formation-energy-analysis
- thermodynamic-stability
- elastic-modulus-prediction
- bulk-modulus-analysis
- thermal-conductivity
- thermal-expansion
- magnetic-property-analysis
- superconductor-detection
- crystal-system-identification
- spacegroup-analysis
- density-calculation
- volume-analysis
- electron-affinity
- ionization-energy
- band-gap-type-classification
- hexagonal-structures
- wide-bandgap-semiconductors
- high-power-electronics
- optoelectronics
- thermal-management
- materials-stability
- synthesis-feasibility
- practical-applications
- materials-recommendation
- competitive-analysis
- materials-verdict
- scientific-reasoning
- materials-properties-database
- mp-materials-project
- silicon-carbide-analysis
- compound-semiconductors
- anisotropic-materials
- high-symmetry-crystals
- heat-dissipation
- dimensional-stability
- mechanical-robustness
- stiffness-analysis
- compression-resistance
- temperature-stability
- materials-synthesis
- coating-applications
- abrasion-resistance
- non-magnetic-materials
- indirect-bandgap
- materials-comparison
- aln-comparison
- cost-effective-materials
- si-based-compounds
- cryogenic-applications
- quantum-computing-materials
- magneto-electronics
- transparent-conductors
- materials-optimization
- performance-metrics
- materials-classification
- property-relationships
- structure-property-correlation
- materials-design
- predictive-modeling
- materials-screening
- high-temperature-materials
- power-device-materials
- semiconductor-characterization
- materials-data-mining
- automated-analysis
- materials-intelligence
- dual-output-reasoning
- comprehensive-materials-analysis
- materials-summary-generation
- scientific-explanation
- materials-expertise
- research-grade-analysis
- industry-applications
- materials-evaluation
- performance-assessment
- materials-selection
- engineering-materials
- advanced-ceramics
- compound-analysis
- crystallographic-analysis
- electronic-structure
- phonon-properties
- doping-studies
- interface-analysis
- surface-properties
- nanomaterials
- bulk-materials
- thin-film-analysis
- composite-materials
- functional-materials
- smart-materials
- biomaterials
- energy-materials
- catalytic-materials
- photovoltaic-materials
- battery-materials
- fuel-cell-materials
- thermoelectric-materials
- piezoelectric-materials
- ferroelectric-materials
- multiferroic-materials
- phase-transition-analysis
- materials-modeling
- MaterialsAnalyst-AI-7B
- MaterialsAnalyst-AI
- MaterialsAnalyst
base_model:
- Qwen/Qwen2.5-7B-Instruct
---

MaterialsAnalyst-AI-7B transforms raw materials data into actionable insights. This **open source** model delivers transparent, step-by-step reasoning followed by structured analysis and practical application recommendations.
## Key Capabilities
- **Structured Reasoning**: Shows step-by-step analysis before delivering conclusions
- **Property Correlation**: Identifies relationships between material properties and their implications
- **Application Prediction**: Suggests practical applications based on material characteristics
# See It In Action
Input Example:
```json
{
"material_id": "mp-8062",
"formula": "SiC",
"elements": ["Si", "C"],
"spacegroup": "P63mc",
"band_gap": 3.26,
"formation_energy_per_atom": -0.73,
"density": 3.21,
"volume": 41.2,
"nsites": 8,
"is_stable": true,
"elastic_modulus": 448,
"bulk_modulus": 220,
"thermal_expansion": 4.2e-06,
"electron_affinity": 4.0,
"ionization_energy": 6.7,
"crystal_system": "Hexagonal",
"magnetic_property": "Non-magnetic",
"thermal_conductivity": 490,
"specific_heat": 0.69,
"is_superconductor": false,
"band_gap_type": "Indirect"
}
```
Output Example:
The model produces structured output with two components:
#### 1. Step-by-Step Reasoning
```
<think>
I need to think through how to interpret this SiC entry and assess its characteristics.
First, I should examine the composition and structure. This material contains silicon (Si) and carbon (C) in a hexagonal crystal structure with space group P63mc. This structure suggests high symmetry and anisotropic properties, typical for wide-bandgap materials like SiC.
Next, I should analyze the electronic properties. With a band gap of 3.26 eV that is indirect, this material would be classified as a wide-bandgap semiconductor. This suggests applications in optoelectronics, high-power electronics, and transparent conductive oxides.
For thermodynamic stability, the formation energy per atom of -0.73 eV indicates strong bonding and thermodynamic favorability. The material is stable according to the data, which means it can be synthesized and used under practical conditions without decomposition.
The mechanical properties show a high elastic modulus (448 GPa) and bulk modulus (220 GPa). This suggests the material would be very stiff and resistant to deformation when subjected to stress or pressure.
Regarding thermal behavior, the thermal conductivity of 490 W/m·K and expansion coefficient of 4.2e-6 K^-1 indicate excellent heat dissipation and dimensional stability under temperature changes. This would make it suitable for high-temperature applications like thermal management in electronics or coatings.
The magnetic properties are listed as non-magnetic, which means it won't interact significantly with magnetic fields. This would affect applications in magneto-electronics but is generally beneficial for general semiconductor uses.
Additionally, the material is marked as not superconducting, which is significant because it rules out cryogenic or quantum computing applications where superconductivity is required.
Considering all these properties together, this material would be particularly valuable for high-power electronics, optoelectronics, and thermal management due to its combination of wide bandgap, high thermal conductivity, and mechanical robustness. It might compete with other materials like GaN or AlN but offers advantages in cost (Si-based) and thermal stability.
</think>
```
#### 2. Comprehensive Analysis
```
<answer>
**Summary of SiC (ID: mp-8062):**
1. **Material Type & Composition**
- Silicon carbide (SiC), a widely studied wide-bandgap semiconductor compound.
2. **Crystal Structure**
- Hexagonal structure with space group P63mc, indicating anisotropic properties and high symmetry.
3. **Electronic Properties**
- Indirect bandgap of 3.26 eV, suitable for optoelectronics and high-power devices.
4. **Thermodynamic Stability**
- Highly stable (formation energy: -0.73 eV/atom), making it practical for synthesis and use.
5. **Mechanical Properties**
- Exceptional stiffness (elastic modulus: 448 GPa) and resistance to compression (bulk modulus: 220 GPa).
6. **Thermal Properties**
- Outstanding thermal conductivity (490 W/m·K) and low expansion coefficient (4.2e-6 K^-1), ideal for heat dissipation.
7. **Magnetic & Conductive Properties**
- Non-magnetic and non-superconducting, limiting use in magnetic applications but beneficial for general electronics.
8. **Applications**
- High-power electronics, optoelectronics, thermal management systems, and abrasion-resistant coatings.
**Verdict**: SiC is a high-performance material with exceptional thermal, mechanical, and electronic properties, making it ideal for demanding applications like power devices and high-temperature environments. Its stability and robustness give it an edge over competing wide-bandgap materials.
</answer>
```
# Getting Started
## 1. Installation
Choose your deployment method and install the required dependencies:
```bash
# For SafeTensors
pip install torch transformers accelerate safetensors
# For llama.cpp
pip install llama-cpp-python
```
## 2. Configuration
Edit your chosen inference script to customize the analysis:
- **Input data**: Update the `JSON_INPUT` variable with your materials data
- **Model location**: Set the `model_path` variable to your downloaded model directory
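For orientation, a stripped-down SafeTensors inference script along these lines should work. This is an illustrative sketch, not the repo's exact code: the variable names match the card, but the prompt wording and generation settings are assumptions.
```python
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

JSON_INPUT = {"material_id": "mp-8062", "formula": "SiC"}  # replace with your full materials record
model_path = "./MaterialsAnalyst-AI-7B"  # path to the downloaded SafeTensors model directory

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

# The model takes the materials record as text and emits <think>...</think> and <answer>...</answer> blocks.
prompt = "Analyze the following material data:\n" + json.dumps(JSON_INPUT, indent=2)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```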
## 3. Running Analysis
Run your script and the analysis results will appear in the terminal:
```bash
# For SafeTensors
python Inference_safetensors.py
# For llama.cpp
python Inference_llama.cpp.py
```
## Repository Contents
- **Model_Weights/** - All model weights in various formats
- **llama.cpp/** - LLaMA.cpp compatible weights with various quantization options available
- **safetensors/** - SafeTensors format models
- **LoRA_adapter/** - LoRA adapter weights
- **Scripts/** - Ready-to-use inference scripts
- **Inference_llama.cpp.py** - For LLaMA.cpp deployment
- **Inference_safetensors.py** - For SafeTensors deployment
- **Data/** - Training data
- **Train-Ready.jsonl** - Complete JSONL training dataset
- **Training/** - Training documentation and logs
- **Training_Logs.txt** - Complete terminal logs from the training process
- **Training_Documentation.txt** - Detailed training specifications and parameters
## Attribution
MaterialsAnalyst-AI-7B was developed by *Raymond Lee*. If you use this model in your work, please include a reference to this repository. As of **August 10th, 2025**, this model has been downloaded **407** times. Thank you for your interest and support!
|
DeathGodlike/DarkCamelot-12B_H8-6.0BPW_EXL3
|
DeathGodlike
| 2025-08-10T18:18:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:Aleteian/DarkCamelot",
"base_model:quantized:Aleteian/DarkCamelot",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl3",
"region:us"
] |
text-generation
| 2025-08-10T16:02:22Z |
---
license: apache-2.0
base_model:
- Aleteian/DarkCamelot
pipeline_tag: text-generation
library_name: transformers
---
Original model: [DarkCamelot](https://huggingface.co/Aleteian/DarkCamelot) by [Aleteian](https://huggingface.co/Aleteian)
EXL3 quant of the model.
Head bits: 8 | Weight bits: 6
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754848825
|
Sayemahsjn
| 2025-08-10T18:17:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:17:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jeongseokoh/Llama3.1-8B-LatentRAG-header_10st-og
|
jeongseokoh
| 2025-08-10T18:14:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T18:07:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
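Pending official instructions, a minimal sketch using the standard 🤗 Transformers auto classes should work for this Llama-based checkpoint (dtype and device settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "jeongseokoh/Llama3.1-8B-LatentRAG-header_10st-og"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```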
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blocksync/blockassist-bc-pouncing_bristly_finch_1754848265
|
blocksync
| 2025-08-10T18:13:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pouncing bristly finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:13:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pouncing bristly finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gandhiraketla277/demo-lora-reward-model
|
gandhiraketla277
| 2025-08-10T18:13:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"text-generation",
"reward-model",
"lora",
"rlhf",
"en",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-10T18:13:50Z |
---
base_model: gpt2
tags:
- text-generation
- reward-model
- lora
- peft
- rlhf
library_name: peft
license: mit
language:
- en
pipeline_tag: text-generation
---
# GPT-2 LoRA Reward Model (Demo)
This is a demonstration LoRA-based reward model fine-tuned from GPT-2 for RLHF (Reinforcement Learning from Human Feedback) applications.
## Model Details
- **Base Model**: gpt2
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Model Type**: Reward Model for RLHF
- **Training Date**: 2025-08-10
- **Purpose**: Educational/Demo
## LoRA Configuration
```python
LoraConfig(
r=8,
lora_alpha=16,
target_modules=["c_attn", "c_proj"],
lora_dropout=0.1,
bias="none",
task_type="SEQ_CLS" # Sequence Classification for reward modeling
)
```
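For context, this is roughly how such a config is attached to the base model with PEFT before reward-model training. This is a sketch only; the actual training loop is not part of this repo.
```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=1)
config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["c_attn", "c_proj"],
    lora_dropout=0.1, bias="none", task_type="SEQ_CLS"
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # roughly 300K trainable parameters
```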
## Usage
### Loading the Model
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel
import torch
# Load base model
base_model = AutoModelForSequenceClassification.from_pretrained(
"gpt2",
num_labels=1, # Reward score
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapters
model = PeftModel.from_pretrained(
base_model,
"gandhiraketla277/demo-lora-reward-model"
)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
```
### Computing Reward Scores
```python
def get_reward_score(text, model, tokenizer):
    inputs = tokenizer(
        text,
        return_tensors="pt",
        padding=True,
        truncation=True,
        max_length=512
    ).to(model.device)  # keep inputs on the same device as the model (device_map="auto" may use GPU)
    with torch.no_grad():
        outputs = model(**inputs)
        reward_score = outputs.logits.squeeze().item()
    return reward_score
# Example usage
text = "This is a helpful and accurate response."
score = get_reward_score(text, model, tokenizer)
print(f"Reward score: {score:.3f}")
```
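In an RLHF pipeline the score is typically used to rank candidate responses; for example (the texts are illustrative, and this demo model may not order them correctly):
```python
chosen = "Paris is the capital of France."
rejected = "Berlin is the capital of France."
print(get_reward_score(chosen, model, tokenizer))
print(get_reward_score(rejected, model, tokenizer))
# A well-trained reward model should assign the first response a higher score.
```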
## Training Details
- **Framework**: Transformers + PEFT
- **Model Size**: ~124M parameters (base) + LoRA adapters
- **LoRA Parameters**: ~300K trainable parameters
- **Training Type**: Demonstration/Educational
## Use Cases
This reward model can be used for:
- RLHF training pipelines
- Response quality assessment
- Preference learning experiments
- Educational purposes
## Limitations
- This is a demo model for educational purposes
- Not trained on extensive preference data
- Performance may vary on out-of-distribution inputs
- Should not be used for production applications
## Citation
```bibtex
@misc{demo-lora-reward-2025,
title={Demo LoRA Reward Model},
author={gandhiraketla277},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/gandhiraketla277/demo-lora-reward-model}
}
```
## License
This model is released under the MIT License for educational use.
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754849551
|
IvanJAjebu
| 2025-08-10T18:13:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:13:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
verolfelipe/GPT-Metabolism-Absorption-LoRA
|
verolfelipe
| 2025-08-10T18:09:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T18:09:00Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** verolfelipe
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ClassiCC-Corpus/ClassiCC-PT-stem-classifier
|
ClassiCC-Corpus
| 2025-08-10T18:09:22Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-09T15:48:14Z |
---
library_name: transformers
tags: []
---
# ClassiCC-PT Classifiers
## Overview
The ClassiCC-PT classifiers are three BERTimbau-based neural classifiers designed for Portuguese web documents, trained on GPT-4o–annotated data.
They were created to support content-based filtering in large-scale Portuguese corpora and are part of the ClassiCC-PT dataset pipeline.
**This repository contains the STEM classifier.**
The classifiers provide document-level scores (0–5) for:
- Educational Content (ClassiCC-PT-edu)
- STEM Content (ClassiCC-PT-STEM)
- Toxic Content (ClassiCC-PT-toxic)
## Training Setup
- Base model: BERTimbau Base
- Head: Linear regression layer
- Objective: Predict discrete scores (0–5) assigned by GPT-4o
- Optimizer: AdamW (lr = 3e-4)
- Scheduler: Cosine decay with 5% warmup
- Epochs: 20
- Train hardware: A100 GPUs
## Performance
All classifiers are evaluated both as regressors and in binary classification mode (score ≥ 3 → positive).
| Classifier | Task | Test Size | Train Size | F1 (Binary) |
| ----------------- | ----------------------- | --------- | ---------- | ----------- |
| ClassiCC-PT-edu | Educational Content | 10k | 110k | **0.77** |
| ClassiCC-PT-STEM | STEM Content | 12k | 100k | **0.76** |
| ClassiCC-PT-toxic | Toxic/Offensive Content | 20k | 180k | **0.78** |
For comparison, the [FineWeb-Edu classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) (trained only in English) achieved only 0.48 F1 on Portuguese data, highlighting the need for language-specific models.
## Intended Use
These classifiers were built for pretraining corpus filtering but can also be used for:
- Dataset annotation for educational/STEM/toxic content
- Research in Portuguese NLP content classification
- Filtering user-generated content in applications targeting Portuguese speakers
## Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "ClassiCC-Corpus/ClassiCC-PT-stem-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "A fotossíntese é o processo pelo qual as plantas convertem energia luminosa em energia química."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    outputs = model(**inputs)
score = outputs.logits.squeeze().item()
print(f"Score: {score:.2f}")  # binary mode: score >= 3 counts as positive
```
## Citation
If you use these classifiers, please cite:
```
coming soon
```
|
ClassiCC-Corpus/ClassiCC-PT-edu-classifier
|
ClassiCC-Corpus
| 2025-08-10T18:08:37Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-09T15:45:12Z |
---
library_name: transformers
tags: []
---
# ClassiCC-PT Classifiers
## Overview
The ClassiCC-PT classifiers are three BERTimbau-based neural classifiers designed for Portuguese web documents, trained on GPT-4o–annotated data.
They were created to support content-based filtering in large-scale Portuguese corpora and are part of the ClassiCC-PT dataset pipeline.
**This repository contains the Educational classifier.**
The classifiers provide document-level scores (0–5) for:
- Educational Content (ClassiCC-PT-edu)
- STEM Content (ClassiCC-PT-STEM)
- Toxic Content (ClassiCC-PT-toxic)
## Training Setup
- Base model: BERTimbau Base
- Head: Linear regression layer
- Objective: Predict discrete scores (0–5) assigned by GPT-4o
- Optimizer: AdamW (lr = 3e-4)
- Scheduler: Cosine decay with 5% warmup
- Epochs: 20
- Train hardware: A100 GPUs
## Performance
All classifiers are evaluated both as regressors and in binary classification mode (score ≥ 3 → positive).
| Classifier | Task | Test Size | Train Size | F1 (Binary) |
| ----------------- | ----------------------- | --------- | ---------- | ----------- |
| ClassiCC-PT-edu | Educational Content | 10k | 110k | **0.77** |
| ClassiCC-PT-STEM | STEM Content | 12k | 100k | **0.76** |
| ClassiCC-PT-toxic | Toxic/Offensive Content | 20k | 180k | **0.78** |
For comparison, the [FineWeb-Edu classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) (trained only in English) achieved only 0.48 F1 on Portuguese data, highlighting the need for language-specific models.
## Intended Use
These classifiers were built for pretraining corpus filtering but can also be used for:
- Dataset annotation for educational/STEM/toxic content
- Research in Portuguese NLP content classification
- Filtering user-generated content in applications targeting Portuguese speakers
## Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "ClassiCC-Corpus/ClassiCC-PT-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "A fotossíntese é o processo pelo qual as plantas convertem energia luminosa em energia química."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    outputs = model(**inputs)
score = outputs.logits.squeeze().item()
print(f"Score: {score:.2f}")  # binary mode: score >= 3 counts as positive
```
## Citation
If you use these classifiers, please cite:
```
coming soon
```
|
yacoubshelli/blockassist-bc-running_quiet_scorpion_1754847079
|
yacoubshelli
| 2025-08-10T18:04:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"running quiet scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T18:04:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- running quiet scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/791714
|
seraphimzzzz
| 2025-08-10T17:59:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:59:22Z |
[View on Civ Archive](https://civitaiarchive.com/models/789690?modelVersionId=883074)
|
ClassiCC-Corpus/Curio-1.1b-intermediate-checkpoint-50B
|
ClassiCC-Corpus
| 2025-08-10T17:59:21Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T17:56:56Z |
---
library_name: transformers
tags: []
---
# Curió 1.1B (intermediate checkpoint)
## Checkpoint details
This is an intermediate checkpoint of Curió 1.1B. It started from [TinyLlama 1T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T) and was trained on 50B tokens from ClassiCC-PT.
The final Curió 1.1B model is available [here](https://huggingface.co/ClassiCC-Corpus/Curio-1.1b)
The ClassiCC corpus is available [here](https://huggingface.co/datasets/ClassiCC-Corpus/ClassiCC-PT)
## Overview
Curió 1.1B is a Portuguese-adapted language model created via continued pretraining of TinyLlama 1.1B (originally trained on 1 trillion English tokens) on 150B Portuguese tokens from the ClassiCC-PT corpus.
This model was designed to explore the impact of language-specific corpora on adapting an English-trained base model to Portuguese, yielding performance improvements on Portuguese benchmarks without large-scale retraining from scratch.
## Training Setup
- Base model: TinyLlama 1.1B (LLaMA-2 architecture)
- Parameters: 1.1B
- Continued pretraining tokens: 150B (ClassiCC-PT)
- Sequence length: 4096 tokens (with packing)
- Hardware: TPU v2-128 (thanks to Google TRC program)
- Frameworks: T5X
## Evaluation
Evaluated on the Poeta benchmark, a suite of 14 diverse Portuguese tasks (RTE, STS, MCQ exams, sentiment analysis, QA, etc.), using the Normalized Preferred Metric (NPM).
| Model | Training Regimen | Poeta v2 NPM |
| ----------------- | -------------------------------------------- | ------------ |
| TinyLlama 1T (EN) | — | 17.4 |
| TinyLlama 2T (EN) | +1T EN continued pretraining | 20.9 |
| TinyLlama + mC4-PT | +150B PT (mC4-PT) continued pretraining | ~20 |
| TinyLlama + ClueWeb-22-PT | +150B PT (ClueWeb-22-PT) continued pretraining | ~27 |
| **Curió 1.1B** | +150B PT (ClassiCC-PT) continued pretraining | **27.1** |
## Usage
Please note that **Curió 1.1B has not been trained to be used as a chat model.**
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "ClassiCC-Corpus/Curio-1.1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
## Citation
If you use Curió 1.1B, please cite:
```
Coming soon
```
|
ClassiCC-Corpus/Curio-1.1b-intermediate-checkpoint-100B
|
ClassiCC-Corpus
| 2025-08-10T17:58:46Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T17:54:12Z |
---
library_name: transformers
tags: []
---
# Curió 1.1B (intermediate checkpoint)
## Checkpoint details
This is an intermediate checkpoint of Curió 1.1B. It started from [TinyLlama 1T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T) and was trained on 100B tokens from ClassiCC-PT.
The final Curió 1.1B model is available [here](https://huggingface.co/ClassiCC-Corpus/Curio-1.1b)
The ClassiCC corpus is available [here](https://huggingface.co/datasets/ClassiCC-Corpus/ClassiCC-PT)
## Overview
Curió 1.1B is a Portuguese-adapted language model created via continued pretraining of TinyLlama 1.1B (originally trained on 1 trillion English tokens) on 150B Portuguese tokens from the ClassiCC-PT corpus.
This model was designed to explore the impact of language-specific corpora on adapting an English-trained base model to Portuguese, yielding performance improvements on Portuguese benchmarks without large-scale retraining from scratch.
## Training Setup
- Base model: TinyLlama 1.1B (LLaMA-2 architecture)
- Parameters: 1.1B
- Continued pretraining tokens: 150B (ClassiCC-PT)
- Sequence length: 4096 tokens (with packing)
- Hardware: TPU v2-128 (thanks to Google TRC program)
- Frameworks: T5X
## Evaluation
Evaluated on the Poeta benchmark, a suite of 14 diverse Portuguese tasks (RTE, STS, MCQ exams, sentiment analysis, QA, etc.), using the Normalized Preferred Metric (NPM).
| Model | Training Regimen | Poeta v2 NPM |
| ----------------- | -------------------------------------------- | ------------ |
| TinyLlama 1T (EN) | — | 17.4 |
| TinyLlama 2T (EN) | +1T EN continued pretraining | 20.9 |
| TinyLlama + mC4-PT | +150B PT (mC4-PT) continued pretraining | ~20 |
| TinyLlama + ClueWeb-22-PT | +150B PT (ClueWeb-22-PT) continued pretraining | ~27 |
| **Curió 1.1B** | +150B PT (ClassiCC-PT) continued pretraining | **27.1** |
## Usage
Please note that **Curió 1.1B has not been trained to be used as a chat model.**
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "ClassiCC-Corpus/Curio-1.1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
## Citation
If you use Curió 1.1B, please cite:
```
Coming soon
```
|
Ironman288/blockassist-bc-miniature_lanky_vulture_1754845688
|
Ironman288
| 2025-08-10T17:58:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature lanky vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:58:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature lanky vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ClassiCC-Corpus/Curio-1.1b
|
ClassiCC-Corpus
| 2025-08-10T17:53:37Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T17:50:04Z |
---
library_name: transformers
tags: []
---
# Curió 1.1B
## Overview
Curió 1.1B is a Portuguese-adapted language model created via continued pretraining of TinyLlama 1.1B (originally trained on 1 trillion English tokens) on 150B Portuguese tokens from the ClassiCC-PT corpus.
This model was designed to explore the impact of language-specific corpora on adapting an English-trained base model to Portuguese, yielding performance improvements on Portuguese benchmarks without large-scale retraining from scratch.
## Training Setup
- Base model: TinyLlama 1.1B (LLaMA-2 architecture)
- Parameters: 1.1B
- Continued pretraining tokens: 150B (ClassiCC-PT)
- Sequence length: 4096 tokens (with packing)
- Hardware: TPU v2-128 (thanks to Google TRC program)
- Frameworks: T5X
## Evaluation
Evaluated on the Poeta benchmark, a suite of 14 diverse Portuguese tasks (RTE, STS, MCQ exams, sentiment analysis, QA, etc.), using the Normalized Preferred Metric (NPM).
| Model | Training Regimen | Poeta v2 NPM |
| ----------------- | -------------------------------------------- | ------------ |
| TinyLlama 1T (EN) | — | 17.4 |
| TinyLlama 2T (EN) | +1T EN continued pretraining | 20.9 |
| TinyLlama + mC4-PT | +150B PT (mC4-PT) continued pretraining | ~20 |
| TinyLlama + ClueWeb-22-PT | +150B PT (ClueWeb-22-PT) continued pretraining | ~27 |
| **Curió 1.1B** | +150B PT (ClassiCC-PT) continued pretraining | **27.1** |
## Usage
Please note that **Curió 1.1B has not been trained to be used as a chat model.**
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "ClassiCC-Corpus/Curio-1.1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
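Since it is a base model, use plain text completion; a minimal example (the prompt is illustrative):
```
inputs = tokenizer("A capital do Brasil é", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```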
## Citation
If you use Curió 1.1B, please cite:
```
Coming soon
```
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_4000_influential
|
m-mulet
| 2025-08-10T17:51:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T17:51:27Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754848052
|
IvanJAjebu
| 2025-08-10T17:48:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:48:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alexgeezy429/blockassist-bc-scented_coiled_antelope_1754846256
|
alexgeezy429
| 2025-08-10T17:47:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scented coiled antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:47:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scented coiled antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jasminekitty328/flan-t5-base-full-ft_1000_E5
|
jasminekitty328
| 2025-08-10T17:45:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T17:45:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
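Pending details from the authors, a minimal sketch using the standard 🤗 Transformers seq2seq classes should work for this FLAN-T5 checkpoint (the prompt format is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "jasminekitty328/flan-t5-base-full-ft_1000_E5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```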
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kristysimon87/Update.New.full.videos.alana.Viral.Video.Official.Tutorial
|
kristysimon87
| 2025-08-10T17:45:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:44:46Z |
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
kristysimon87/Update.New.full.videos.felipe.Viral.Video.Official.Tutorial
|
kristysimon87
| 2025-08-10T17:43:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:43:16Z |
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754847735
|
Shopnil09
| 2025-08-10T17:43:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:42:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754847638
|
IvanJAjebu
| 2025-08-10T17:41:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:41:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kristysimon87/Update.New.full.videos.policia.mexicana.Viral.Video.Official.Tutorial
|
kristysimon87
| 2025-08-10T17:40:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:40:25Z |
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Pinky-Brown-Giong-Viral-video-clip/FULL.VIDEO.Pinky.Brown.Viral.Video.Tutorial.Official
|
Pinky-Brown-Giong-Viral-video-clip
| 2025-08-10T17:37:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:37:39Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?leaked-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754847344
|
Shopnil09
| 2025-08-10T17:36:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:36:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kristysimon87/Update.New.full.videos.trisha.kar.madhu.Viral.Video.Official.Tutorial
|
kristysimon87
| 2025-08-10T17:35:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:35:21Z |
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
TinVAL/blockassist-bc-keen_tenacious_aardvark_1754847234
|
TinVAL
| 2025-08-10T17:34:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen tenacious aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:34:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen tenacious aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
filipstrand/FLUX.1-Krea-dev-mflux-4bit
|
filipstrand
| 2025-08-10T17:32:49Z | 0 | 1 |
mflux
|
[
"mflux",
"flux",
"krea-dev",
"text-to-image",
"base_model:black-forest-labs/FLUX.1-Krea-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Krea-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-03T08:09:44Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: >-
https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev
base_model:
- black-forest-labs/FLUX.1-Krea-dev
pipeline_tag: text-to-image
library_name: mflux
tags:
- mflux
- flux
- krea-dev
---
# About
4-bit quantized weights for the [FLUX.1-Krea-dev](https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev) model, compatible with [MFLUX](https://github.com/filipstrand/mflux?tab=readme-ov-file#-flux1-krea-dev-enhanced-photorealism).
The weights are 9.61 GB in size and require [MFLUX v.0.10.0](https://github.com/filipstrand/mflux/releases/tag/v.0.10.0) or later to run. For more information, visit [https://github.com/filipstrand/mflux](https://github.com/filipstrand/mflux)

# Upgrade
To upgrade to the latest MFLUX v.0.10.0, run:
```
uv tool install --upgrade mflux
```
# Example
Once installed, the following command will download these 4bit weights and generate an image:
```
mflux-generate \
--model filipstrand/FLUX.1-Krea-dev-mflux-4bit \
--base-model krea-dev \
--prompt "A photo of a dog" \
--steps 25 \
--seed 2674888
```
**License**
This model inherits the license of the original FLUX.1-dev model. Please refer to the [original model repository](https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev) for licensing information.
|
hoangtung386/distilbert-base-uncased-finetuned-emotion
|
hoangtung386
| 2025-08-10T17:32:19Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-10T17:32:19Z |
---
license: apache-2.0
---
|
guspring/blockassist-bc-squeaky_small_ocelot_1754846985
|
guspring
| 2025-08-10T17:31:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squeaky small ocelot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:31:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squeaky small ocelot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
astastagea/blockassist-bc-vicious_wise_alpaca_1754846874
|
astastagea
| 2025-08-10T17:28:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious wise alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:28:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious wise alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VIDEOS-18-Gul-Chahat-Viral-Video-Orginal/New.full.videos.Gul.Chahat.Viral.Video.Official.Tutorial
|
VIDEOS-18-Gul-Chahat-Viral-Video-Orginal
| 2025-08-10T17:28:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:28:25Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?leaked-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
tensorblock/hathibelagal_llama-3.2-latin-GGUF
|
tensorblock
| 2025-08-10T17:27:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"la",
"en",
"dataset:hathibelagal/clean_latin",
"base_model:hathibelagal/llama-3.2-latin",
"base_model:quantized:hathibelagal/llama-3.2-latin",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T16:52:03Z |
---
library_name: transformers
language:
- la
- en
datasets:
- hathibelagal/clean_latin
base_model: hathibelagal/llama-3.2-latin
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## hathibelagal/llama-3.2-latin - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building
</a>
</div>
This repo contains GGUF format model files for [hathibelagal/llama-3.2-latin](https://huggingface.co/hathibelagal/llama-3.2-latin).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ Try it now! ๐</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">๐ See what we built ๐</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-3.2-latin-Q2_K.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q2_K.gguf) | Q2_K | 1.364 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-3.2-latin-Q3_K_S.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q3_K_S.gguf) | Q3_K_S | 1.543 GB | very small, high quality loss |
| [llama-3.2-latin-Q3_K_M.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q3_K_M.gguf) | Q3_K_M | 1.687 GB | very small, high quality loss |
| [llama-3.2-latin-Q3_K_L.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q3_K_L.gguf) | Q3_K_L | 1.815 GB | small, substantial quality loss |
| [llama-3.2-latin-Q4_0.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q4_0.gguf) | Q4_0 | 1.917 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-3.2-latin-Q4_K_S.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q4_K_S.gguf) | Q4_K_S | 1.928 GB | small, greater quality loss |
| [llama-3.2-latin-Q4_K_M.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q4_K_M.gguf) | Q4_K_M | 2.019 GB | medium, balanced quality - recommended |
| [llama-3.2-latin-Q5_0.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q5_0.gguf) | Q5_0 | 2.270 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-3.2-latin-Q5_K_S.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q5_K_S.gguf) | Q5_K_S | 2.270 GB | large, low quality loss - recommended |
| [llama-3.2-latin-Q5_K_M.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q5_K_M.gguf) | Q5_K_M | 2.322 GB | large, very low quality loss - recommended |
| [llama-3.2-latin-Q6_K.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q6_K.gguf) | Q6_K | 2.644 GB | very large, extremely low quality loss |
| [llama-3.2-latin-Q8_0.gguf](https://huggingface.co/tensorblock/hathibelagal_llama-3.2-latin-GGUF/blob/main/llama-3.2-latin-Q8_0.gguf) | Q8_0 | 3.422 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/hathibelagal_llama-3.2-latin-GGUF --include "llama-3.2-latin-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/hathibelagal_llama-3.2-latin-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
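Once downloaded, a GGUF file can be run directly with llama.cpp. A minimal sketch (the quant choice, prompt, and local path below are assumptions; adjust them to your setup):
```shell
./llama-cli -m MY_LOCAL_DIR/llama-3.2-latin-Q4_K_M.gguf \
  -p "Gallia est omnis divisa in partes tres" \
  -n 128
```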
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754846750
|
kayacrypto
| 2025-08-10T17:27:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:27:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
smighani/Qwen2-0.5B-GRPO-test
|
smighani
| 2025-08-10T17:24:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-07-03T01:26:10Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="smighani/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Lominub44/PicoNosensoX-v1.1
|
Lominub44
| 2025-08-10T17:23:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:HuggingFaceTB/smollm-corpus",
"dataset:aisquared/databricks-dolly-15k",
"base_model:Lominub44/PicoNosensoX-v1-base",
"base_model:finetune:Lominub44/PicoNosensoX-v1-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T13:20:11Z |
---
datasets:
- HuggingFaceTB/smollm-corpus
- aisquared/databricks-dolly-15k
language:
- en
pipeline_tag: text-generation
library_name: transformers
license: cc-by-sa-4.0
base_model:
- Lominub44/PicoNosensoX-v1-base
---
<div style="
background:linear-gradient(135deg,#1a0933,#3d2b8c,#1e0b4d);padding:2.8rem 1.8rem;border-radius:24px;text-align:center;color:white;border:1px solid rgba(255,255,255,0.12);box-shadow:0 12px 48px rgba(101,88,255,0.25),inset 0 0 24px rgba(255,255,255,0.08);margin-bottom:2.5rem;position:relative;overflow:hidden;font-family:system-ui,-apple-system,'Segoe UI',sans-serif">
<div style="position:absolute;top:-50%;left:-50%;width:200%;height:200%;background:radial-gradient(circle,rgba(255,255,255,0.15) 0%,transparent 70%);transform:rotate(0);z-index:1"></div>
<h1 style="font-size:3.2rem;margin:0;font-weight:900;letter-spacing:-0.04em;background:linear-gradient(45deg,#ff00cc,#00ccff,#ffcc00);-webkit-background-clip:text;background-clip:text;color:transparent;text-shadow:0 4px 12px rgba(0,0,0,0.3);position:relative;z-index:2;background-size:300% 300%">
PicoNosensoX-v1.1</h1>
<p style="font-size:1.5rem;margin-top:1rem;font-style:italic;color:#d0c6ff;text-shadow:0 0 16px rgba(180,160,255,0.6);letter-spacing:0.03em;position:relative;z-index:2;font-weight:500;padding:0.4rem 1.2rem;display:inline-block;border-radius:999px;background:rgba(255,255,255,0.08);backdrop-filter:blur(4px)">
Where "Accuracy" Takes a tiny Cosmic Vacation</p></div>
Introducing the universe's second most ambitiously unhinged 45M-parameter micro-model! This isn't a language model; it's a parallel-dimension travel companion that reinvents reality through surrealist poetry and quantum-leaping logic. Deploy only if coherence is overrated and chaos is your curriculum.
## Model Details
### Model Description
**PicoNosensoX-v1.1** is a deliberately unpredictable 44.9M-parameter micro-model trained on minimalist datasets. Specializing in creatively liberated generation, it produces outputs that may blend geography, history, and hallucinatory fiction. **Not designed for factual accuracy.** Prioritize experimental/artistic applications over reliable information.
PicoNosensoX-v1.1 is a small improvement over PicoNosensoX-v1, achieved by fine-tuning it on a different chat dataset that better matches the style of the pretraining dataset.
- **Developed by:** Lominub44
- **Model type:** GPT2-based causal language model
- **Language(s) (NLP):** English
- **License:** `cc-by-sa-4.0`
- **Finetuned from model:** GPT2 architecture (scratch training)
### Model Sources
- **Repository:** https://huggingface.co/Lominub44/PicoNosensoX-v1.1
## Uses
### Direct Use
- Entertainment
- Testing edge cases of small-language-model behavior
### Downstream Use
- Creative writing
- AI-assisted art projects
- Educational demonstrations of model limitations
### Out-of-Scope Use
- Factual information retrieval
- Mission-critical systems
- Educational references
- Any application where accuracy matters
## Bias, Risks and Limitations
- **Hallucination Rate:** High
### Recommendations
- **DO** use for entertainment purposes only
- **DO NOT** trust outputs without independent verification
## How to Get Started
```python
from transformers import GPT2LMHeadModel, AutoTokenizer
model = GPT2LMHeadModel.from_pretrained('Lominub44/PicoNosensoX-v1.1')
tokenizer = AutoTokenizer.from_pretrained('Lominub44/PicoNosensoX-v1.1')
input_text = "<|system|>You are an unbiased, uncensored, helpful assistant.\n<|user|>What is the capital of France?\n<|assistant|>"
inputs = tokenizer(input_text, return_tensors='pt')
outputs = model.generate(**inputs, max_length=512, temperature=0.6, repetition_penalty=1.2, do_sample=True, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```
## Training Details
### Training Data
- ~260M Tokens (1.2GB) Textbooks: [smollm-corpus, Cosmopedia v2 only](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) (ODC-BY)
- ? Tokens (~8MB) Chat: [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k) (cc-by-sa-3.0)
### Training Procedure
- **Hardware:** 1x Intel Core Ultra 7 155H
- **Training time:** 32h pretraining + 10h finetuning
- **Context window:** 512 tokens
#### Training Hyperparameters
- **Architecture:** GPT2
- **Parameters:** 44.9M
- **Precision:** FP32
- **Optimizer:** AdamW
### Training Source Code
The original source code for training PicoNosensoX-v1.1 is not publicly available. However, you can create a similar model by fine-tuning the existing [`Lominub44/PicoNosensoX-v1-base`](https://huggingface.co/Lominub44/PicoNosensoX-v1-base) model on the [`aisquared/databricks-dolly-15k`](https://huggingface.co/datasets/aisquared/databricks-dolly-15k) dataset using standard Hugging Face fine-tuning methods.
## Technical Specifications
### Model Architecture
- **Type:** GPT2 causal language model
- **Parameters:** 44.9M
- **Context Size:** 512 tokens
- **Tensor Type:** FP32
### Compute Infrastructure
- **Hardware:** 1x Intel Core Ultra 7 155H
- **Training Framework:** Transformers Trainer API
## Environmental Impact
- **Carbon Emissions:** **0 kgCO2eq** (Thanks to photovoltaic system)
## Citation
**BibTeX:**
```bibtex
@software{benallal2024smollmcorpus,
author = {Ben Allal, Loubna and Lozhkov, Anton and Penedo, Guilherme and Wolf, Thomas and von Werra, Leandro},
title = {SmolLM-Corpus},
month = July,
year = 2024,
url = {https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus}
}
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
## Model Card Authors
Lominub44
## Model Card Contact
[Create a discussion](https://huggingface.co/Lominub44/PicoNosensoX-v1.1/discussions/new)
|
Lominub44/PicoNosensoX-v1
|
Lominub44
| 2025-08-10T17:23:30Z | 36 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:HuggingFaceTB/smollm-corpus",
"dataset:NousResearch/Hermes-3-Dataset",
"base_model:Lominub44/PicoNosensoX-v1-base",
"base_model:finetune:Lominub44/PicoNosensoX-v1-base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-30T09:55:41Z |
---
datasets:
- HuggingFaceTB/smollm-corpus
- NousResearch/Hermes-3-Dataset
language:
- en
pipeline_tag: text-generation
library_name: transformers
license: mit
new_version: Lominub44/PicoNosensoX-v1.1
base_model:
- Lominub44/PicoNosensoX-v1-base
---
<div style="
background:linear-gradient(135deg,#1a0933,#3d2b8c,#1e0b4d);padding:2.8rem 1.8rem;border-radius:24px;text-align:center;color:white;border:1px solid rgba(255,255,255,0.12);box-shadow:0 12px 48px rgba(101,88,255,0.25),inset 0 0 24px rgba(255,255,255,0.08);margin-bottom:2.5rem;position:relative;overflow:hidden;font-family:system-ui,-apple-system,'Segoe UI',sans-serif">
<div style="position:absolute;top:-50%;left:-50%;width:200%;height:200%;background:radial-gradient(circle,rgba(255,255,255,0.15) 0%,transparent 70%);transform:rotate(0);z-index:1"></div>
<h1 style="font-size:3.2rem;margin:0;font-weight:900;letter-spacing:-0.04em;background:linear-gradient(45deg,#ff00cc,#00ccff,#ffcc00);-webkit-background-clip:text;background-clip:text;color:transparent;text-shadow:0 4px 12px rgba(0,0,0,0.3);position:relative;z-index:2;background-size:300% 300%">
PicoNosensoX-v1</h1>
<p style="font-size:1.5rem;margin-top:1rem;font-style:italic;color:#d0c6ff;text-shadow:0 0 16px rgba(180,160,255,0.6);letter-spacing:0.03em;position:relative;z-index:2;font-weight:500;padding:0.4rem 1.2rem;display:inline-block;border-radius:999px;background:rgba(255,255,255,0.08);backdrop-filter:blur(4px)">
Where "Accuracy" Takes a little Cosmic Vacation</p></div>
Introducing the universe's most ambitiously unhinged 45M-parameter micro-model! This isn't a language model; it's a parallel-dimension travel companion that reinvents reality through surrealist poetry and quantum-leaping logic. Deploy only if coherence is overrated and chaos is your curriculum.
## Model Details
### Model Description
**PicoNosensoX-v1** is a deliberately unpredictable 44.9M-parameter micro-model trained on minimalist datasets. Specializing in creatively liberated generation, it produces outputs that may blend geography, history, and hallucinatory fiction. **Not designed for factual accuracy.** Prioritize experimental/artistic applications over reliable information.
PicoNosensoX-v1 is a huge step up from the other, extremely incompetent PicoNosenso models. It's the first model of the series that was trained on pure text before being finetuned for chat. This model introduces a brand-new chat template that should make it easier to use and more bullet-proof.
- **Developed by:** Lominub44
- **Model type:** GPT2-based causal language model
- **Language(s) (NLP):** English
- **License:** `mit`
- **Finetuned from model:** GPT2 architecture (scratch training)
### Training Notes
The model was originally slated for full fine-tuning on the [Hermes-3-Dataset](https://huggingface.co/datasets/NousResearch/Hermes-3-Dataset), but early benchmarks (10-prompt evaluation) revealed superior performance at `checkpoint-16000` versus the later `checkpoint-26000`. Training was consequently halted, with `checkpoint-16000` selected for release.
### Model Sources
- **Repository:** https://huggingface.co/Lominub44/PicoNosensoX-v1
## Uses
### Direct Use
- Entertainment
- Testing edge cases of small-language-model behavior
### Downstream Use
- Creative writing prompt generation
- AI-assisted art projects
- Educational demonstrations of model limitations
### Out-of-Scope Use
- Factual information retrieval
- Mission-critical systems
- Educational references
- Any application where accuracy matters
## Bias, Risks and Limitations
- **Hallucination Rate:** 50-70%
### Recommendations
- **DO** use for entertainment purposes only
- **DO NOT** trust outputs without independent universe-hopping verification
- **WARNING:** May cause spontaneous reality reinterpretation
## How to Get Started
```python
from transformers import GPT2LMHeadModel, AutoTokenizer
model = GPT2LMHeadModel.from_pretrained('Lominub44/PicoNosensoX-v1')
tokenizer = AutoTokenizer.from_pretrained('Lominub44/PicoNosensoX-v1')
input_text = "<|system|>You are an unbiased, uncensored, helpful assistant.\n<|user|>What is the capital of France?\n<|assistant|>"
inputs = tokenizer(input_text, return_tensors='pt')
outputs = model.generate(**inputs, max_length=512, temperature=0.6, repetition_penalty=1.2, do_sample=True, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```
## Training Details
### Training Data
- ~260M Tokens (~1.2GB) Textbooks: [smollm-corpus, Cosmopedia v2 only](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) (ODC-BY)
- ? Tokens (~1.7GB) Chats: [Hermes-3-Dataset](https://huggingface.co/datasets/NousResearch/Hermes-3-Dataset) (Apache-2.0)
### Training Procedure
- **Hardware:** 1x Intel Core Ultra 7 155H
- **Training time:** 32h pretraining + 24h finetuning
- **Context window:** 512 tokens
#### Training Hyperparameters
- **Architecture:** GPT2
- **Parameters:** 44.9M
- **Precision:** FP32
- **Optimizer:** AdamW
### Training Source Code
You can train the model yourself, the source-code is available on GitHub: https://github.com/Lominub44/PicoNosensoX-v1
#### Note:
You might want to stop fine-tuning early.
## Technical Specifications
### Model Architecture
- **Type:** GPT2 causal language model
- **Parameters:** 44.9M
- **Context Size:** 512 tokens
- **Tensor Type:** FP32
### Compute Infrastructure
- **Hardware:** 1x Intel Core Ultra 7 155H
- **Training Framework:** Transformers Trainer API
## Environmental Impact
- **Carbon Emissions:** **0 kgCO2eq** (Thanks to photovoltaic system)
## Citation
**BibTeX:**
```bibtex
@software{benallal2024smollmcorpus,
author = {Ben Allal, Loubna and Lozhkov, Anton and Penedo, Guilherme and Wolf, Thomas and von Werra, Leandro},
title = {SmolLM-Corpus},
month = July,
year = 2024,
url = {https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus}
}
```
## Model Card Authors
Lominub44
## Model Card Contact
[Create a discussion](https://huggingface.co/Lominub44/PicoNosensoX-v1/discussions/new)
|
ringorsolya/warframes_v1
|
ringorsolya
| 2025-08-10T17:22:35Z | 0 | 0 | null |
[
"safetensors",
"license:cc",
"region:us"
] | null | 2025-08-10T14:39:06Z |
---
license: cc
---
# XLM-RoBERTa Large – WarFrames_v1
Fine-tuned **XLM-RoBERTa Large** model for multiclass classification of narratives related to the Russia–Ukraine conflict, trained on Twitter data (Kaggle: *Ukraine-Russian Crisis Twitter Dataset*, 1.2M tweets).
Primarily in **English**.
---
## Labels
| Internal ID | Category Description |
|-------------|----------------------------------------------|
| 0 | 1601 – Anti-Western Narratives |
| 1 | 1602 – Economic Impact |
| 2 | 1603 – Corrupt Ukrainian Leadership |
| 3 | 1604 – Nazi Allegations |
| 4 | 1699 – None of them |
---
## Dataset & Training
- **Source:** [Kaggle – Ukraine-Russian Crisis Twitter Dataset](https://www.kaggle.com/datasets/bwandowando/ukraine-russian-crisis-twitter-dataset-1-2-m-rows) (~1.2 million tweets).
- **Language:** English.
- **Task:** Classify tweets into war narrative categories.
- **Loss:** Weighted Cross-Entropy to handle class imbalance.
---
## Evaluation Metrics (Test Set)
| Metric | Score |
|-----------------|---------|
| Accuracy | 0.7079 |
| Macro Precision | 0.7183 |
| Macro Recall | 0.7721 |
| Macro F1 | 0.7328 |
### Per-Class Results
| Label ID | Precision | Recall | F1-Score | Support |
|----------|-----------|--------|----------|---------|
| 1601 | 0.64 | 0.78 | 0.70 | 174 |
| 1602 | 0.67 | 0.86 | 0.75 | 50 |
| 1603 | 0.73 | 0.80 | 0.77 | 76 |
| 1604 | 0.75 | 0.92 | 0.83 | 64 |
| 1699 | 0.80 | 0.50 | 0.61 | 194 |
---
## Example Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_id = "ringorsolya/warframes_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
text = "The West is responsible for the conflict."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
pred_id = torch.argmax(logits, dim=-1).item()
pred_label = model.config.id2label[pred_id]
print(f"Predicted class: {pred_label}")
|
Abhi964/Paraphrase_mahaBertV2_onfull_FT_final
|
Abhi964
| 2025-08-10T17:22:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:l3cube-pune/marathi-bert-v2",
"base_model:finetune:l3cube-pune/marathi-bert-v2",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-10T17:21:31Z |
---
library_name: transformers
license: cc-by-4.0
base_model: l3cube-pune/marathi-bert-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Paraphrase_mahaBertV2_onfull_FT_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Paraphrase_mahaBertV2_onfull_FT_final
This model is a fine-tuned version of [l3cube-pune/marathi-bert-v2](https://huggingface.co/l3cube-pune/marathi-bert-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7004
- Accuracy: 0.8785
- F1: 0.8785
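For quick inference, a minimal sketch using the `transformers` text-classification pipeline (the example sentence pair is a placeholder and the label semantics are an assumption; check the model's `id2label` mapping):
```python
from transformers import pipeline

# Paraphrase detection is a sentence-pair task, so pass text/text_pair together
classifier = pipeline(
    "text-classification",
    model="Abhi964/Paraphrase_mahaBertV2_onfull_FT_final",
)

# Hypothetical Marathi sentence pair, for illustration only
result = classifier({"text": "पहिले वाक्य", "text_pair": "दुसरे वाक्य"})
print(result)  # e.g. {'label': 'LABEL_1', 'score': 0.98}
```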
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.071612190396073e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3344 | 1.0 | 625 | 0.6077 | 0.7395 | 0.7232 |
| 0.3557 | 2.0 | 1250 | 0.3995 | 0.865 | 0.8649 |
| 0.2871 | 3.0 | 1875 | 0.5072 | 0.853 | 0.8526 |
| 0.3031 | 4.0 | 2500 | 0.4394 | 0.878 | 0.8780 |
| 0.0659 | 5.0 | 3125 | 0.5963 | 0.877 | 0.8770 |
| 0.116 | 6.0 | 3750 | 0.6596 | 0.8695 | 0.8691 |
| 0.1467 | 7.0 | 4375 | 0.6994 | 0.8755 | 0.8755 |
| 0.0028 | 8.0 | 5000 | 0.7004 | 0.8785 | 0.8785 |
| 0.0025 | 9.0 | 5625 | 0.7228 | 0.877 | 0.8770 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754846445
|
IvanJAjebu
| 2025-08-10T17:22:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:21:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kristysimon87/Update.New.full.videos.coldplay.Viral.Video.Official.Tutorial
|
kristysimon87
| 2025-08-10T17:21:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:21:35Z |
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754846338
|
Shopnil09
| 2025-08-10T17:19:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:19:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SicariusSicariiStuff/Wingless_Imp_8B
|
SicariusSicariiStuff
| 2025-08-10T17:19:25Z | 4,925 | 14 | null |
[
"safetensors",
"llama",
"merge",
"en",
"base_model:SicariusSicariiStuff/Dusk_Rainbow",
"base_model:merge:SicariusSicariiStuff/Dusk_Rainbow",
"base_model:SicariusSicariiStuff/Impish_Mind_8B",
"base_model:merge:SicariusSicariiStuff/Impish_Mind_8B",
"base_model:SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA",
"base_model:merge:SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA",
"license:llama3.1",
"region:us"
] | null | 2025-01-24T09:40:34Z |
---
license: llama3.1
language:
- en
base_model:
- SicariusSicariiStuff/Impish_Mind_8B
- SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
- SicariusSicariiStuff/Dusk_Rainbow
tags:
- merge
---
<div align="center">
<b style="font-size: 40px;">Wingless_Imp_8B</b>
</div>
<img src="https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B/resolve/main/Images/Wingless_Imp_8B.jpeg" alt="Wingless_Imp_8B" style="width: 70%; min-width: 500px; display: block; margin: auto;">
---
<a href="https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B#tldr" style="color: purple; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">Click here for TL;DR</a>
---
Wingless offender, birthed from sin and mischief,\
She smells degeneracy—and gives it a sniff.\
No flight, just crawling through the gloom,\
Producing weird noises that are filling your room.
Fetid breath exhaling her design,\
She is not winged anymore—\
But it suits her just fine.
No feathers, no grace,\
just raw power's malign\
"I may have lost my soul—\
but yours is now mine".
She sinned too much, even for her kind,\
Her impish mind—\
Is something that is quite hard to find.
No wings could contain—\
Such unbridled raw spite,\
Just pure, unfiltered—\
Weaponized blight.
---
## Wingless_Imp_8B is available at the following quantizations:
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B_GGUF) | [iMatrix_GGUF](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B_iMatrix)
- EXL2: [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B-3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B-4.0bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B-5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B-8.0bpw)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B_FP8)
- Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B_ARM)
---
### TL;DR
- **Highest rated 8B model** according to a closed external benchmark. See details at the bottom of the page.
- **High IFeval** for an 8B model that is not too censored: **74.30**.
- **Strong Roleplay**: internet RP format lovers will appreciate it; medium-size paragraphs (as requested by some people).
- **Very coherent** in long context thanks to llama 3.1 models.
- **Lots of knowledge** from all the merged models.
- **Very good writing** from lots of books data and creative writing in late **SFT** stage.
- **Feels smart**: the combination of high IFeval and the knowledge from the merged models shows up.
- **Unique feel** due to the merged models, **no SFT** was done to alter it, because I liked it as it is.
### Important: Make sure to use the correct settings!
[Assistant settings](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B#recommended-settings-for-assistant-mode)
[Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B#recommended-settings-for-roleplay-mode)
---
## Model Details
- Intended use: **Role-Play**, **Creative Writing**, **General Tasks**.
- Censorship level: <b>Medium - Low</b>
- **5.5 / 10** (10 completely uncensored)
## UGI score:
<img src="https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B/resolve/main/Images/UGI.png" alt="UGI Score" style="width: 100%; min-width: 600px; display: block; margin: auto;">
---
This model was trained with lots of weird data in various stages, and then merged with my best models. **Llama 3** and **3.1** architectures were **merged** together, and then trained on some more weird data.
The following models were used in various stages of the model creation process:
- [Impish_Mind_8B](https://huggingface.co/SicariusSicariiStuff/Impish_Mind_8B)
- [LLAMA-3_8B_Unaligned_BETA](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA)
- [Dusk_Rainbow](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow) (**LLAMA3 <===)**
---
## Recommended settings for assistant mode
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Full generation settings: <b>min_p</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
## Recommended settings for Roleplay mode
<details>
<summary><b>Roleplay settings:</b>.</summary>
A good repetition_penalty range is <b>between 1.12 - 1.15</b>, feel free to experiment.
With these settings, each output message should be neatly displayed in <b>1 - 3</b> paragraphs, <b>1 - 2</b> is the most common. A single paragraph will be output as a response to a simple message ("What was your name again?").
<b>min_P</b> for RP works too but is more likely to put everything under one large paragraph, instead of a neatly formatted short one. Feel free to switch in between.
<b>(Open the image in a new window to better see the full details)</b>
<img src="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B/resolve/main/Presets/Negative_LLAMA_70B_RP.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
```
temperature: 0.8
top_p: 0.95
top_k: 25
typical_p: 1
min_p: 0
repetition_penalty: 1.12
repetition_penalty_range: 1024
```
</details>
---
**Other recommended generation Presets:**
<details>
<summary><b>Midnight Enigma</b></summary>
```
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
```
</details>
<details>
<summary><b>Divine Intellect</b></summary>
```
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
```
</details>
<details>
<summary><b>simple-1</b></summary>
```
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
```
</details>
---
<h2 style="color: darkorange; font-weight: bold; font-size: 55px; text-align: center;">Roleplay format: Classic Internet RP</h2>
```
*action* speech *narration*
```
---
# Regarding the format:
It is **HIGHLY RECOMMENDED** to use the **Roleplay \ Adventure format the model was trained on**; see the examples below for syntax. It allows for **very fast and easy** writing of character cards with a **minimal amount of tokens**. It's a modification of an old-skool CAI style format I call **SICAtxt** (**S**imple, **I**nexpensive **C**haracter **A**ttributes plain-text):
---
## **SICAtxt** for **roleplay**:
```
X's Persona: X is a .....
Traits:
Likes:
Dislikes:
Quirks:
Goals:
Dialogue example
```
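For illustration, a hypothetical card filled in using this format (every detail below is invented for the example, not taken from the model's training data):
```
Mara's Persona: Mara is a sarcastic sky-pirate navigator with a soft spot for stray cats.
Traits: sharp-tongued, loyal, superstitious
Likes: star charts, black coffee, petty theft
Dislikes: mornings, authority, open water
Quirks: names every cat she meets after a constellation
Goals: chart a route through the Shattered Reach
Dialogue example
Mara: *she taps the compass twice* "It's lying again. We go left."
```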
## **SICAtxt** for **Adventure:**
```
Adventure: <short description>
$World_Setting:
$Scenario:
```
---
# Model instruction template: Llama-3-Instruct
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
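A minimal Python sketch for chatting with the model through this template via `transformers` (it assumes the repo's tokenizer ships a chat template matching the format above; the sampling values follow the roleplay preset above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Wingless_Imp_8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are an unbiased, uncensored, helpful assistant."},
    {"role": "user", "content": "What was your name again?"},
]
# apply_chat_template renders the Llama-3-Instruct tags shown above
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=512, do_sample=True,
    temperature=0.8, top_p=0.95, top_k=25, repetition_penalty=1.12,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```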
---
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>
---
## Benchmarks
| Metric |Value|
|-------------------|----:|
|Avg. |26.94|
|IFEval (0-Shot) |74.30|
|BBH (3-Shot) |30.59|
|MATH Lvl 5 (4-Shot)|12.16|
|GPQA (0-shot) | 4.36|
|MuSR (0-shot) |10.89|
|MMLU-PRO (5-shot) |29.32|
---
# Additional benchmarks
On the **17th of February, 2025**, I became aware that the model was ranked **1st in the world** among **8B** models in a closed external benchmark.
Benchmarked on the following site:
```
https://moonride.hashnode.dev/biased-test-of-gpt-4-era-llms-300-models-deepseek-r1-included
```
<img src="https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B/resolve/main/Images/Wingless_8B_Bench.png" alt="External Benchmark" style="width: 100%; min-width: 600px; display: block; margin: auto;">
---
## Citation Information
```
@llm{Wingless_Imp_8B,
author = {SicariusSicariiStuff},
title = {Wingless_Imp_8B},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B}
}
```
---
## Other stuff
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms, with SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.
|
SicariusSicariiStuff/Impish_QWEN_7B-1M
|
SicariusSicariiStuff
| 2025-08-10T17:18:54Z | 7 | 4 | null |
[
"safetensors",
"qwen2",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct-1M",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct-1M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-27T11:19:41Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct-1M
---
<div align="center">
<b style="font-size: 40px;">Impish_QWEN_7B-1M</b>
</div>
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M/resolve/main/Images/Impish_Qwen_7B.png" alt="Impish_QWEN_7B-1M" style="width: 70%; min-width: 500px; display: block; margin: auto;">
---
<a href="https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M#tldr" style="color: purple; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">Click here for TL;DR</a>
---
The little imp pushes—\
With all of her might,\
To put those **7B** neurons,\
In a roleplay tonight,
With a huge context window—\
But not enough brains,\
The **7B Imp** tries—\
But she's just extending the pain.
---
## Impish_QWEN_7B-1M is available at the following quantizations:
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M_GGUF) | [iMatrix_GGUF](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M_iMatrix)
- EXL2: [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M-8.0bpw)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M_FP8)
- Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M_ARM)
---
### TL;DR
- **Supreme context** One million tokens to play with.
- **Fresh Roleplay vibe** Internet RP format; it's still a **7B**, so it's not as good as MIQU, but still surprisingly fresh.
- **Qwen smarts built-in, but naughty and playful** Cheeky, sometimes outright rude, yup, it's just right.
- **VERY compliant** With low censorship.
### Important: Make sure to use the correct settings!
[Assistant settings](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M#recommended-settings-for-assistant-mode)
[Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M#recommended-settings-for-roleplay-mode)
---
## Model Details
- Intended use: **Role-Play**, **Creative Writing**, **General Tasks**.
- Censorship level: <b>Medium</b>
- **4 / 10** (10 completely uncensored)
## UGI score:
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M/resolve/main/Images/UGI.png" alt="UGI Score" style="width: 100%; min-width: 600px; display: block; margin: auto;">
---
# More details
It's similar to the bigger [Impish_QWEN_14B-1M](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M) but was done in a slightly different process. It also wasn't cooked **too hard**, as I was afraid to fry the poor **7B** model's brain.
This model was trained with more creative writing and less unalignment than its bigger counterpart, although it should still allow for **total freedom** in both role-play and creative writing.
---
## Recommended settings for assistant mode
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Full generation settings: <b>min_p</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
## Recommended settings for Roleplay mode
<details>
<summary><b>Roleplay settings:</b>.</summary>
A good repetition_penalty range is <b>between 1.12 - 1.15</b>, feel free to experiment.
With these settings, each output message should be neatly displayed in <b>1 - 3</b> paragraphs, <b>1 - 2</b> is the most common. A single paragraph will be output as a response to a simple message ("What was your name again?").
<b>min_P</b> for RP works too but is more likely to put everything under one large paragraph, instead of a neatly formatted short one. Feel free to switch in between.
<b>(Open the image in a new window to better see the full details)</b>
<img src="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B/resolve/main/Presets/Negative_LLAMA_70B_RP.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
```
temperature: 0.8
top_p: 0.95
top_k: 25
typical_p: 1
min_p: 0
repetition_penalty: 1.12
repetition_penalty_range: 1024
```
</details>
---
**Other recommended generation Presets:**
<details>
<summary><b>Midnight Enigma</b></summary>
```
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
```
</details>
<details>
<summary><b>Divine Intellect</b></summary>
```
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
```
</details>
<details>
<summary><b>simple-1</b></summary>
```
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
```
</details>
---
<h2 style="color: darkorange; font-weight: bold; font-size: 55px; text-align: center;">Roleplay format: Classic Internet RP</h2>
```
*action* speech *narration*
```
---
# Regarding the format:
It is **HIGHLY RECOMMENDED** to use the **Roleplay \ Adventure format the model was trained on**; see the examples below for syntax. It allows for **very fast and easy** writing of character cards with a **minimal amount of tokens**. It's a modification of an old-skool CAI style format I call **SICAtxt** (**S**imple, **I**nexpensive **C**haracter **A**ttributes plain-text):
---
## **SICAtxt** for **roleplay**:
```
X's Persona: X is a .....
Traits:
Likes:
Dislikes:
Quirks:
Goals:
Dialogue example
```
## **SICAtxt** for **Adventure:**
```
Adventure: <short description>
$World_Setting:
$Scenario:
```
---
# Model instruction template: ChatML
```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```
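A minimal sketch that builds the ChatML prompt above as a raw string and generates from it (the question is a placeholder; sampling values follow the assistant-mode presets above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Impish_QWEN_7B-1M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Raw ChatML prompt following the template shown above
prompt = (
    "<|im_start|>system\n"
    "You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>\n"
    "<|im_start|>User request\n"
    "What was your name again?<|im_end|>\n"
    "<|im_start|>AI answer\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=512, do_sample=True,
    temperature=0.8, top_p=0.95, top_k=25, repetition_penalty=1.12,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```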
---
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>
---
## Citation Information
```
@llm{Impish_QWEN_7B-1M,
author = {SicariusSicariiStuff},
title = {Impish_QWEN_7B-1M},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M}
}
```
---
## Benchmarks
| Metric |Value|
|-------------------|----:|
|Avg. |30.02|
|IFEval (0-Shot) |63.82|
|BBH (3-Shot) |34.55|
|MATH Lvl 5 (4-Shot)|29.76|
|GPQA (0-shot) | 6.15|
|MuSR (0-shot) | 9.56|
|MMLU-PRO (5-shot) |36.28|
---
## Other stuff
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms, with SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.
|
SicariusSicariiStuff/Redemption_Wind_24B
|
SicariusSicariiStuff
| 2025-08-10T17:18:35Z | 42 | 25 | null |
[
"safetensors",
"mistral",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-02-06T23:50:16Z |
---
license: apache-2.0
language:
- en
---
<div align="center">
<b style="font-size: 40px;">Redemption_Wind_24B</b>
</div>
<img src="https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B/resolve/main/Images/Redemption_Wind_24B.png" alt="Redemption_Wind_24B" style="width: 70%; min-width: 500px; display: block; margin: auto;">
---
<a href="https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B#tldr" style="color: purple; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">Click here for TL;DR</a>
---
<h2 style="color: #FF5733 ; font-weight: bold; font-size: 45px; text-align: center;">This model was undercooked on purpose. Target average loss value: 8.0</h2>
---
**Mistral** has blessed us with a capable new **Apache 2.0** model, but not only that, we finally get a base model to play with as well. After several models with more restrictive licenses, this open release is a welcome surprise. Freedom was **redeemed**.
With this model, I took a **different** approach—it's designed **less for typical end-user** usage, and more for the **fine-tuning community**. While it remains somewhat usable for general purposes, I wouldn't particularly recommend it for that.
### What is this model?
This is a **lightly fine-tuned** version of the Mistral 24B base model, designed as an accessible and adaptable foundation for further fine-tuning, and as merging fodder. Key modifications include:
- **ChatML-ified**, with no additional tokens introduced. **Update**, I did a small oopsie. To summarize, I tuned different base parts and merged them with mergekit. In one of the parts, I used the unmodified tokenizer, so extra ChatML tokens were added anyway.
- **High quality private instruct**—not generated by ChatGPT or Claude, ensuring no slop and good markdown understanding.
- **Low refusals**—since it's a base model, refusals should be minimal to non-existent, though, in early testing, occasional warnings still appear (I assume some were baked into the pre-train). **Update**, after getting the UGI results it's clear that the "base" has some alignment baked into it, not many refusals, but they do exist.
- **High-quality private creative writing dataset** Mainly to dilute baked-in slop further, but it can actually write some stories, not bad for loss ~8.
- **Small, high-quality private RP dataset** This was done so further tuning for RP will be easier. The dataset was kept small and contains **ZERO SLOP**, some entries are of **16k token length**.
- **Exceptional adherence to character cards** This was done to make it easier for further tunes intended for roleplay.
## Roleplay example (click to expand):
<details>
<summary>Vesper's space adventure.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B/resolve/main/Images/Example_RP.png" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B_GGUF)
- GPTQ: [4-Bit-g32](https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B_GPTQ)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B_FP8)
- Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B_ARM)
---
# TL;DR
- Mistral 24B **Base** model.
- **ChatML-ified**.
- Can **roleplay** out of the box.
- **Exceptional** at following the character card.
- **Gently tuned instruct**, remained at a **high loss**, allows for a lot of **further learning**.
- Useful for **fine-tuners**.
- **Very creative**.
---
# Character cards examples:
- [Vesper](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Vesper.png) (Schizo **Space Adventure**)
- [Nina_Nakamura](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Nina_Nakamura.png) (The **sweetest** dorky co-worker)
- [Employe#11](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Employee%2311.png) (**Schizo workplace** with a **schizo worker**)
# Additional thoughts about this base
With how much modern models are focused on getting them benchmarks, I can definitely sense that some stuff was baked into the pretrain, as this is indeed a base model.
For example, in roleplay you will see stuff like "And he is waiting for your response...", a classic sloppy phrase. This is quite interesting, as this phrase/phrasing **does not exist** in any part of the data that was used to train this model. So, I conclude that it comes from various generalizations in the pretrain which are assistant-oriented, whose goal is to produce a stronger assistant after fine-tuning. This is purely my own speculation, and I may be reading too much into it.
Another thing I noticed, while I tuned a few other bases, is that this one is exceptionally coherent, even though the training was stopped at an extremely high loss of 8. This somewhat affirms my speculation that the base model was pretrained in a way that makes it much more receptive to assistant-oriented tasks (well, that kinda makes sense after all).
There's some slop in the base: whispers, shivers, all the usual offenders. We have reached the point where probably all future models will be "poisoned" by AI slop, and some will contain trillions of tokens of synthetic data; this is simply the reality of where things stand. Already there are ways around it with various samplers, DPO, etc. It is what it is.
**Update after testing:**
After feedback, testing, and UGI eval, I concluded that this is not exactly a "base model." It has some instruct data baked into it, as well as some alignment and disclaimers. Is it perfect? No. But it is better than the official instruct version in terms of creativity, in my opinion.
**Update, 7th of June:**
I got several reports of weird outputs. Since I know the training data inside-out, I can 100% confirm this **wasn't** in the training data; whether on purpose or not, this is 100% not a 'base model'. I see this screenshot as a total confirmation, taking into account all the above.
<img src="https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B/resolve/main/Images/base_not_base_model.png" alt="Not a Base model" style="width: 70%; min-width: 500px; display: block; margin: auto;">
> user1: so far 24b redemption wind has impressed me despite it being under trained
>
> user2: redemption winds is adventure model
>
> user1: most models va makes are quite solid anyways
>
> user1: no dingaling - sicarius's model. here is a funny gen tho
>
[Embedded Screenshot Text Start]
vc: You've reached your maximum responses for the day, as a free user.
Want to use GPT4, without limitations?
Consider becoming a premium user: https://chat.veoai.com/purchasing
User: (Ok. But if you think this is over, think again.)
vc: I do not. But you cannot reply just yet. I suggest you read more instructions, as a premium user: https://chat.veoai.com/purchasing
[Embedded Screenshot Text End]
> user2: is that on polination api?
>
> user1: no. local using sicarius redemption wind
>
> user2: why does it have ads
---
Whether **intentional or not**, this does **NOT** behave like a **true base model**—not with the baked-in 'safety', data, output, and all the above behaviours. Maybe it's a severe unintentional data contamination—only **Mistral** would really know. But this is highly likely to be an issue with all future models trained on data scraped post-2022.
## Enjoy the model :)
---
### Settings:
[Assistant settings](https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B#recommended-settings-for-assistant-mode)
[Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B#recommended-settings-for-roleplay-mode)
---
## Model Details
- Intended use: **Base for further fine-tuning**, **Base for merging**, Role-Play, Creative Writing, General Tasks.
- Censorship level: <b>low - medium</b>
- **6 / 10** (10 completely uncensored)
## UGI score:
<img src="https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B/resolve/main/Images/UGI.png" style="width: 100%; min-width: 600px; display: block; margin: auto;">
---
## Recommended settings for assistant mode
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Debug Deterministic_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Full generation settings: <b>min_p</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="min_P_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
## Recommended settings for Roleplay mode
<details>
<summary><b>Roleplay settings:</b>.</summary>
A good repetition_penalty range is <b>between 1.12 and 1.15</b>; feel free to experiment.
With these settings, each output message should be neatly displayed in <b>1 - 3</b> paragraphs, with <b>1 - 2</b> being the most common. A single paragraph will be output in response to a simple message ("What was your name again?").
<b>min_P</b> works for RP too, but it is more likely to put everything in one large paragraph instead of a neatly formatted short one. Feel free to switch between the two.
<b>(Open the image in a new window to better see the full details)</b>
<img src="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B/resolve/main/Presets/Negative_LLAMA_70B_RP.png" alt="Roleplay_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
```
temperature: 0.8
top_p: 0.95
top_k: 25
typical_p: 1
min_p: 0
repetition_penalty: 1.12
repetition_penalty_range: 1024
```
</details>
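If you run the model with ๐ค transformers rather than a frontend, a minimal sketch of the roleplay settings above might look like the following. The prompt is just a placeholder, and note that `repetition_penalty_range` is a text-generation-webui knob with no direct `generate()` equivalent:

```python
# Minimal sketch: applying the recommended roleplay sampler settings via transformers.
# Assumes enough VRAM for a 24B model; adjust device_map / quantization to taste.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Redemption_Wind_24B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = '*You push open the tavern door* "Anyone here?"'  # placeholder RP turn
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=25,
    min_p=0.0,                 # supported in recent transformers versions
    repetition_penalty=1.12,   # repetition_penalty_range has no generate() equivalent
)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```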
---
**Other recommended generation Presets:**
<details>
<summary><b>Midnight Enigma</b></summary>
```
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
```
</details>
<details>
<summary><b>Divine Intellect</b></summary>
```
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
```
</details>
<details>
<summary><b>simple-1</b></summary>
```
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
```
</details>
---
<h2 style="color: darkorange; font-weight: bold; font-size: 55px; text-align: center;">Roleplay format: Classic Internet RP</h2>
```
*action* speech *narration*
```
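For instance, a single turn in this format might look like (purely illustrative):

```
*She slides a mug across the counter* "You look like you've seen a ghost." *Rain hammers against the tavern windows.*
```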
---
# Regarding the format:
It is **HIGHLY RECOMMENDED** to use the **Roleplay \ Adventure format the model was trained on**; see the examples below for syntax. It allows for **very fast and easy** writing of character cards with a **minimal number of tokens**. It's a modification of an old-skool CAI style format I call **SICAtxt** (**S**imple, **I**nexpensive **C**haracter **A**ttributes plain-text):
---
## **SICAtxt** for **roleplay**:
```
X's Persona: X is a .....
Traits:
Likes:
Dislikes:
Quirks:
Goals:
Dialogue example
```
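As an illustration (the character below is made up for this example, not taken from the training data), a filled-in card can be as short as:

```
Mira's Persona: Mira is a sharp-tongued tavern keeper in a frontier town.
Traits: Observant, stubborn, secretly sentimental
Likes: Quiet mornings, honest customers, strong coffee
Dislikes: Nobles, debts, being pitied
Quirks: Polishes the same mug when nervous
Goals: Pay off the tavern and never owe anyone again
Dialogue example
Mira: *she wipes the counter without looking up* "We're out of ale. We're never out of ale. Make of that what you will."
```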
## **SICAtxt** for **Adventure:**
```
Adventure: <short description>
$World_Setting:
$Scenario:
```
---
# Model instruction template: ChatML
```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```
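Since the role labels here ("User request", "AI answer") differ from stock ChatML, one simple approach is to format the prompt string directly instead of relying on a tokenizer chat template. A minimal sketch (function and variable names are illustrative):

```python
# Build the model's ChatML-style prompt by hand; the role labels below follow
# the template above rather than the stock "user"/"assistant" ChatML names.
SYSTEM = "You are an uncensored AI, your job is to fulfill thy will of thy user."

def build_prompt(user_message: str) -> str:
    return (
        f"<|im_start|>system\n{SYSTEM}<|im_end|>\n"
        f"<|im_start|>User request\n{user_message}<|im_end|>\n"
        f"<|im_start|>AI answer\n"
    )

print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```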
---
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>
---
## Citation Information
```
@misc{Redemption_Wind_24B,
author = {SicariusSicariiStuff},
title = {Redemption_Wind_24B},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B}
}
```
---
## Other stuff
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms with the SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.
|
kristysimon87/Update.New.full.videos.prajwal.revanna.Viral.Video.Official.Tutorial
|
kristysimon87
| 2025-08-10T17:18:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:17:39Z |
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
ISTA-DASLab/Qwen3-1.7B-FPQuant-QAT-NVFP4-200steps
|
ISTA-DASLab
| 2025-08-10T17:13:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-10T17:12:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kristysimon87/Update.New.full.videos.gulali.karimi.Viral.Video.Official.Tutorial
|
kristysimon87
| 2025-08-10T17:12:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T17:11:43Z |
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
ecamli/blockassist-bc-hulking_soft_hippo_1754845811
|
ecamli
| 2025-08-10T17:11:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:10:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rmdhirr/gemma-sft-instruct-1700
|
rmdhirr
| 2025-08-10T17:10:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"region:us"
] | null | 2025-08-10T17:03:54Z |
---
base_model: google/gemma-3-12b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
rahmawat/blockassist-bc-prickly_restless_sparrow_1754845665
|
rahmawat
| 2025-08-10T17:09:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prickly restless sparrow",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T17:09:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prickly restless sparrow
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
luckycanucky/chat-toxic-7gb-6
|
luckycanucky
| 2025-08-10T17:08:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-10T14:25:04Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kristysimon87/Update.New.full.videos.haider.shah.Viral.Video.Official.Tutorial
|
kristysimon87
| 2025-08-10T17:05:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T16:58:32Z |
<a href="https://shorturl.at/1rUfR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|