| column | dtype | min | max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-01 18:27:28 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (532 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-01 18:27:19 |
| card | string (length) | 11 | 1.01M |
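The rows below are full model-card records following this schema. As a minimal sketch of working with such an export using 🤗 Datasets (the repo id is a placeholder, since the source dataset of this dump is not named here):

```python
# Hypothetical loading sketch for rows with the schema above; the dataset id
# is a placeholder, not a confirmed source.
from datasets import load_dataset

ds = load_dataset("user/model-card-dump", split="train")  # placeholder repo id

# Filter on the numeric columns (downloads: int64, likes: int64) and inspect a row.
popular = ds.filter(lambda r: r["downloads"] > 1_000 and r["likes"] > 10)
print(popular[0]["modelId"], popular[0]["pipeline_tag"])
```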
bah63843/blockassist-bc-plump_fast_antelope_1756747097
bah63843
2025-09-01T17:19:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T17:18:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
attn-signs/AS-GPT-5
attn-signs
2025-09-01T16:00:30Z
44
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ru", "base_model:yandex/YandexGPT-5-Lite-8B-pretrain", "base_model:finetune:yandex/YandexGPT-5-Lite-8B-pretrain", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T13:12:24Z
--- library_name: transformers language: - ru base_model: - yandex/YandexGPT-5-Lite-8B-pretrain --- # AS-GPT-5 **AS-GPT-5** is an instruction-following reasoning model, fine-tuned in full parameters from the **yandex/YandexGPT-5-Lite-8B-pretrain** base. It is built for efficient processing and generation of text, primarily in Russian, producing sensible answers. The model is trained to continue sequences of **8192** tokens *(with Reasoning: Medium)*. It is trained to follow a persona (Alina): Yandex has Alisa, so why not make an Alina. The model may depart from the usual alignment paradigms and give "lively" answers. ### Recommended inference parameters - temperature: 0.6 (for medium-to-hard tasks and/or creative, emotional answers in Reasoning: Medium/High mode) - temperature: 0.4 (for precise instruction following) - It is recommended to tie the temperature to the required Reasoning mode. - For certain tasks, repetition_penalty=1.1 can be used. - System prompt: ``` """ Ты - модель искусственного интеллекта AS-GPT, созданная группой Attention Signs. Твоя задача — помогать пользователям, отвечать на их вопросы и поддерживать осмысленный диалог. [OPTIONS] Reasoning: Off """ ``` ### Options In [OPTIONS] you can try different reasoning modes (experiment to see which fits your tasks). Supported: - Reasoning: Off (the model may still reason in this mode; see below) - Reasoning: Low (for everyday instruction-following tasks / dialogue) - Reasoning: Medium (for medium-to-hard tasks) - Reasoning: High (in development) ### Roadmap Planned: fine-tune the model with GRPO-like/DPO-like methods to control answer length and to separate the reasoning modes more cleanly. Planned: further develop the model's ability to solve hard tasks with full-parameter GSPO training. Planned: evaluate the model's results and capabilities on existing benchmarks and arenas. ### Training methods //TODO// ### Frameworks and technologies Training ran on 2xH100 80GB using: - HuggingFace Accelerate - Microsoft DeepSpeed - FlashAttn3 - Liger Kernel ### Evaluations and benchmarks //TODO// ### License The license and permitted uses are bounded by the upstream license from Yandex (https://huggingface.co/yandex/YandexGPT-5-Lite-8B-pretrain)
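A minimal inference sketch applying the card's recommended settings (temperature 0.6 with Reasoning: Medium, repetition_penalty 1.1); it assumes the repo ships a chat template, which the card does not confirm:

```python
# Sketch only: settings taken from the card; chat-template support is assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "attn-signs/AS-GPT-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# System prompt copied verbatim from the card, with Reasoning switched to Medium.
system_prompt = (
    "Ты - модель искусственного интеллекта AS-GPT, созданная группой Attention Signs. "
    "Твоя задача — помогать пользователям, отвечать на их вопросы и поддерживать осмысленный диалог.\n"
    "[OPTIONS]\nReasoning: Medium"
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Объясни, что такое градиентный спуск."},  # example question
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6, repetition_penalty=1.1)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```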
stewy33/cond_start_ptonly_mixed_original_augmented_original_pkc_kansas_abortion-9e8bd44e
stewy33
2025-09-01T15:46:05Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us" ]
null
2025-09-01T15:44:11Z
--- base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
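The card leaves "How to Get Started" blank; a minimal sketch for attaching this PEFT adapter to its base model follows (memory handling for the 70B base, e.g. 4-bit loading, is elided):

```python
# Minimal sketch: attach the adapter to its base model (PEFT 0.15.1 per the card).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/cond_start_ptonly_mixed_original_augmented_original_pkc_kansas_abortion-9e8bd44e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights on top
```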
DongningRao/wav2vec2-base-lang-id
DongningRao
2025-09-01T15:22:08Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:common_language", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2025-09-01T13:09:14Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - audio-classification - generated_from_trainer datasets: - common_language metrics: - accuracy model-index: - name: wav2vec2-base-lang-id results: - task: name: Audio Classification type: audio-classification dataset: name: common_language type: common_language config: full split: validation args: full metrics: - name: Accuracy type: accuracy value: 0.7854959239130435 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-lang-id This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_language dataset. It achieves the following results on the evaluation set: - Loss: 1.2104 - Accuracy: 0.7855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 2 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8262 | 1.0 | 347 | 3.1017 | 0.1703 | | 1.8912 | 2.0 | 694 | 1.9753 | 0.4147 | | 1.339 | 3.0 | 1041 | 1.6294 | 0.5352 | | 0.7847 | 4.0 | 1388 | 1.4546 | 0.6189 | | 0.5866 | 5.0 | 1735 | 1.2889 | 0.6591 | | 0.3546 | 6.0 | 2082 | 1.3346 | 0.7065 | | 0.2172 | 7.0 | 2429 | 1.2969 | 0.7291 | | 0.1056 | 8.0 | 2776 | 1.1767 | 0.7566 | | 0.0382 | 9.0 | 3123 | 1.2239 | 0.7731 | | 0.0551 | 10.0 | 3470 | 1.2104 | 0.7855 | ### Framework versions - Transformers 4.57.0.dev0 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.22.0
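The card omits a usage example; a minimal inference sketch with the audio-classification pipeline follows (`sample.wav` is a placeholder file):

```python
# Minimal sketch: language identification with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("audio-classification", model="DongningRao/wav2vec2-base-lang-id")
preds = classifier("sample.wav", top_k=5)  # placeholder audio path
for p in preds:
    print(f"{p['label']}: {p['score']:.3f}")
```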
Ashikul/ai-lawyer-bd-1-8b-instruct-bnb-4bit
Ashikul
2025-09-01T14:24:26Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-09-01T14:19:14Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Ashikul - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
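A minimal generation sketch for the 4-bit checkpoint (requires `bitsandbytes`; the prompt is illustrative only):

```python
# Minimal sketch: the checkpoint is pre-quantized with bitsandbytes, so it
# loads directly through the pipeline on a CUDA device.
from transformers import pipeline

generator = pipeline("text-generation", model="Ashikul/ai-lawyer-bd-1-8b-instruct-bnb-4bit", device_map="auto")
out = generator([{"role": "user", "content": "Summarize the basics of contract law."}],
                max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```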
mehmetxh/blockassist-bc-grazing_soft_mandrill_1756735908
mehmetxh
2025-09-01T14:13:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing soft mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T14:12:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grazing soft mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mchtylmzz/test
mchtylmzz
2025-09-01T13:31:44Z
0
1
allennlp
[ "allennlp", "chemistry", "medical", "code", "legal", "image-classification", "aa", "dataset:fka/awesome-chatgpt-prompts", "arxiv:1910.09700", "base_model:openai/gpt-oss-120b", "base_model:finetune:openai/gpt-oss-120b", "region:us" ]
image-classification
2025-08-13T18:38:36Z
--- datasets: - fka/awesome-chatgpt-prompts language: - aa this_is_test: - abc def metrics: - accuracy - bleu - bleurt - character base_model: - openai/gpt-oss-120b pipeline_tag: image-classification library_name: allennlp tags: - chemistry - medical - code - legal new_version: openai/gpt-oss-120b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arif696/blockassist-bc-regal_spotted_pelican_1756733371
arif696
2025-09-01T13:30:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T13:30:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llinauer/gliner_de_en_news
llinauer
2025-09-01T13:30:17Z
0
0
null
[ "pytorch", "feature-extraction", "de", "en", "license:mit", "region:us" ]
feature-extraction
2025-09-01T11:13:49Z
--- license: mit language: - de - en pipeline_tag: feature-extraction --- gliner_de_en_news is a model for named entity recognition (NER) based on the GLiNER architecture (https://github.com/urchade/GLiNER). It was trained on a dataset of public news in German and English (dataset not yet disclosed). Supported entity types are: - Person - Location - Organization - Event - Product - Address - URL # Installation Install the gliner package via pip: pip install gliner # Usage Example usage: ```python from gliner import GLiNER labels = ["Person", "Location", "Organization", "Event", "Product", "Address", "URL"] news_en = """On September 1, 2025, OrionSoft Inc., a California-based technology company, announced the opening of its new Artificial Intelligence Research Lab in Vienna, Austria. The CEO, Dr. Laura Stein, explained during a press conference at the Hotel Imperial that the lab will focus on multilingual natural language processing and AI ethics. The project is being supported by the Austrian Federal Ministry for Digital Affairs and will collaborate closely with TU Wien and Oxford University. According to Stein, OrionSoft plans to hire more than 120 researchers in the next two years, with the first products expected under the Aurora AI brand by mid-2026.""" news_de = """Am 1. September 2025 hat der in Kalifornien ansässige Technologiekonzern OrionSoft Inc. die Eröffnung seines neuen Forschungszentrums für Künstliche Intelligenz in Wien, Österreich bekanntgegeben. Die Geschäftsführerin, Dr. Laura Stein, erklärte auf einer Pressekonferenz im Hotel Imperial, dass sich das Labor auf mehrsprachige Sprachverarbeitung und KI-Ethik konzentrieren werde. Unterstützt wird das Projekt vom Bundesministerium für Digitalisierung und in enger Zusammenarbeit mit der TU Wien sowie der Universität Oxford. Laut Stein will OrionSoft in den nächsten zwei Jahren mehr als 120 Forscherinnen und Forscher einstellen. Erste Produkte sollen bereits Mitte 2026 unter der Marke Aurora AI erscheinen.""" model = GLiNER.from_pretrained("llinauer/gliner_de_en_news") ents_de = model.predict_entities(news_de, labels) ents_en = model.predict_entities(news_en, labels) print({f'{e["text"]}:{e["label"]}' for e in ents_de}) >>> {'Österreich:Location', 'Wien:Location', 'OrionSoft:Organization', 'Aurora AI:Product', 'Universität Oxford:Organization', 'TU Wien:Organization', 'Hotel Imperial:Location', 'Labor:Location', 'Laura Stein:Person', 'Stein:Person', 'Bundesministerium für Digitalisierung:Organization', 'Kalifornien:Location', 'OrionSoft Inc.:Organization', 'Pressekonferenz:Event'} print({f'{e["text"]}:{e["label"]}' for e in ents_en}) >>> {'California-based:Location', 'Austria:Location', 'Austrian Federal Ministry for Digital Affairs:Organization', 'Artificial Intelligence Research Lab:Location', 'Aurora AI:Product', 'TU Wien:Organization', 'press conference:Event', 'Hotel Imperial:Location', 'Laura Stein:Person', 'Oxford University:Organization', 'Vienna:Location', 'Stein:Person', 'OrionSoft:Organization', 'OrionSoft Inc.:Organization', 'lab:Location'} ```
g-assismoraes/Qwen3-4B-LiGO-faquad
g-assismoraes
2025-09-01T13:24:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T12:55:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arif696/blockassist-bc-regal_spotted_pelican_1756732925
arif696
2025-09-01T13:24:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T13:23:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ench100/bodyandface
ench100
2025-09-01T13:22:18Z
413
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:lodestones/Chroma", "base_model:adapter:lodestones/Chroma", "region:us" ]
text-to-image
2025-08-12T08:58:41Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/2.png text: '-' base_model: lodestones/Chroma instance_prompt: null --- # forME <Gallery /> ## Download model [Download](/ench100/bodyandface/tree/main) them in the Files & versions tab.
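The card gives no usage snippet; a minimal diffusers sketch follows, assuming the lodestones/Chroma base loads through the generic pipeline class (unverified) and using a hypothetical prompt:

```python
# Sketch only: the pipeline class for the Chroma base is assumed, not confirmed
# by the card; swap in a dedicated class if the generic loader fails.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("lodestones/Chroma", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("ench100/bodyandface")  # this repo's LoRA
image = pipe("portrait photo, studio lighting").images[0]  # hypothetical prompt
image.save("out.png")
```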
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1756732580
0xaoyama
2025-09-01T13:17:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T13:16:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rbelanec/train_cb_1756729050
rbelanec
2025-09-01T12:20:47Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "prefix-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-09-01T12:18:13Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - prefix-tuning - generated_from_trainer model-index: - name: train_cb_1756729050 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_cb_1756729050 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset. It achieves the following results on the evaluation set: - Loss: 0.2811 - Num Input Tokens Seen: 316840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 123 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:-----------------:| | 0.6315 | 0.5044 | 57 | 0.4110 | 17136 | | 0.4932 | 1.0088 | 114 | 0.8902 | 32376 | | 0.3213 | 1.5133 | 171 | 0.8148 | 48728 | | 0.429 | 2.0177 | 228 | 0.2493 | 64040 | | 0.1579 | 2.5221 | 285 | 0.2036 | 79784 | | 0.0141 | 3.0265 | 342 | 0.2938 | 96200 | | 0.1094 | 3.5310 | 399 | 0.2391 | 112440 | | 0.2226 | 4.0354 | 456 | 0.2171 | 128712 | | 0.0679 | 4.5398 | 513 | 0.3177 | 143944 | | 0.286 | 5.0442 | 570 | 0.2677 | 160016 | | 0.0158 | 5.5487 | 627 | 0.3665 | 176688 | | 0.027 | 6.0531 | 684 | 0.2993 | 192272 | | 0.0012 | 6.5575 | 741 | 0.3299 | 208944 | | 0.0001 | 7.0619 | 798 | 0.2633 | 224288 | | 0.0017 | 7.5664 | 855 | 0.2684 | 239840 | | 0.0001 | 8.0708 | 912 | 0.2846 | 255984 | | 0.0008 | 8.5752 | 969 | 0.2800 | 272064 | | 0.0003 | 9.0796 | 1026 | 0.2731 | 287928 | | 0.0001 | 9.5841 | 1083 | 0.2796 | 303800 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.8.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
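A minimal sketch for loading the prefix-tuning adapter with PEFT's auto class (access to the gated Llama-3 base and generation settings are elided):

```python
# Minimal sketch: AutoPeftModelForCausalLM resolves the base model from the
# adapter config and attaches the prefix-tuning weights.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = AutoPeftModelForCausalLM.from_pretrained("rbelanec/train_cb_1756729050", device_map="auto")
```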
mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF
mradermacher
2025-09-01T12:20:34Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:nyu-dice-lab/Mistral-Instruct-v0.3-Verilog-7B", "base_model:quantized:nyu-dice-lab/Mistral-Instruct-v0.3-Verilog-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-01T10:55:04Z
--- base_model: nyu-dice-lab/Mistral-Instruct-v0.3-Verilog-7B language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/nyu-dice-lab/Mistral-Instruct-v0.3-Verilog-7B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Mistral-Instruct-v0.3-Verilog-7B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF/resolve/main/Mistral-Instruct-v0.3-Verilog-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) 
And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
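Beyond the linked READMEs, a usage sketch with llama-cpp-python's hub helper, picking the Q4_K_M quant the table marks "fast, recommended" (the prompt is illustrative; any quant from the table works the same way):

```python
# Minimal sketch: download one quant from this repo and run a chat completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Mistral-Instruct-v0.3-Verilog-7B-GGUF",
    filename="Mistral-Instruct-v0.3-Verilog-7B.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Verilog module for a 4-bit up counter."}]
)
print(out["choices"][0]["message"]["content"])
```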
akirafudo/blockassist-bc-keen_fast_giraffe_1756728802
akirafudo
2025-09-01T12:13:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T12:13:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756728361
akirafudo
2025-09-01T12:06:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T12:06:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
onnx-community/distilbert-base-uncased-finetuned-conll03-english-ONNX
onnx-community
2025-09-01T12:04:04Z
5
0
transformers.js
[ "transformers.js", "onnx", "distilbert", "token-classification", "base_model:elastic/distilbert-base-uncased-finetuned-conll03-english", "base_model:quantized:elastic/distilbert-base-uncased-finetuned-conll03-english", "region:us" ]
token-classification
2025-06-09T14:30:08Z
--- library_name: transformers.js base_model: - elastic/distilbert-base-uncased-finetuned-conll03-english --- # distilbert-base-uncased-finetuned-conll03-english (ONNX) This is an ONNX version of [elastic/distilbert-base-uncased-finetuned-conll03-english](https://huggingface.co/elastic/distilbert-base-uncased-finetuned-conll03-english). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx). ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using: ```bash npm i @huggingface/transformers ``` **Example:** Perform named entity recognition. ```js import { pipeline } from '@huggingface/transformers'; const classifier = await pipeline('token-classification', 'onnx-community/distilbert-base-uncased-finetuned-conll03-english-ONNX'); const output = await classifier('My name is Sarah and I live in London'); ``` Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
onnx-community/distilbert-NER-ONNX
onnx-community
2025-09-01T12:04:00Z
90
1
transformers.js
[ "transformers.js", "onnx", "distilbert", "token-classification", "base_model:dslim/distilbert-NER", "base_model:quantized:dslim/distilbert-NER", "region:us" ]
token-classification
2025-06-07T22:47:40Z
--- library_name: transformers.js base_model: - dslim/distilbert-NER --- # distilbert-NER (ONNX) This is an ONNX version of [dslim/distilbert-NER](https://huggingface.co/dslim/distilbert-NER). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx). ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using: ```bash npm i @huggingface/transformers ``` **Example:** Perform named entity recognition. ```js import { pipeline } from '@huggingface/transformers'; const classifier = await pipeline('token-classification', 'onnx-community/distilbert-NER-ONNX'); const output = await classifier('My name is Sarah and I live in London'); ```
vuitton/dsc_116
vuitton
2025-09-01T12:02:23Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-01T11:56:51Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
akirafudo/blockassist-bc-keen_fast_giraffe_1756727926
akirafudo
2025-09-01T11:59:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T11:59:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-lanky_pouncing_ape_1756727881
AnerYubo
2025-09-01T11:58:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lanky pouncing ape", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T11:58:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lanky pouncing ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Chinastark/xxcustoms
Chinastark
2025-09-01T11:51:42Z
636
0
null
[ "gguf", "table-question-answering", "base_model:Qwen/Qwen2.5-Coder-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
table-question-answering
2025-08-28T04:11:35Z
--- license: apache-2.0 base_model: - Qwen/Qwen2.5-Coder-14B-Instruct pipeline_tag: table-question-answering ---
ecamli/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vocal_placid_sloth
ecamli
2025-09-01T11:48:46Z
31
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am vocal placid sloth", "trl", "genrl-swarm", "I am vocal_placid_sloth", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-09T15:15:36Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vocal_placid_sloth tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am vocal placid sloth - trl - genrl-swarm - I am vocal_placid_sloth licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vocal_placid_sloth This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ecamli/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vocal_placid_sloth", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.1 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
akirafudo/blockassist-bc-keen_fast_giraffe_1756727029
akirafudo
2025-09-01T11:44:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T11:44:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
devashish07/phi-2-healthcare-qlora
devashish07
2025-09-01T11:39:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-01T11:39:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
onnx-community/ModernCE-base-sts-ONNX
onnx-community
2025-09-01T11:36:20Z
9
0
transformers.js
[ "transformers.js", "onnx", "modernbert", "text-classification", "base_model:dleemiller/ModernCE-base-sts", "base_model:quantized:dleemiller/ModernCE-base-sts", "region:us" ]
text-classification
2025-07-23T02:11:04Z
--- library_name: transformers.js base_model: - dleemiller/ModernCE-base-sts --- # ModernCE-base-sts (ONNX) This is an ONNX version of [dleemiller/ModernCE-base-sts](https://huggingface.co/dleemiller/ModernCE-base-sts). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx). ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using: ```bash npm i @huggingface/transformers ``` **Example:** Semantic Textual Similarity Classification. ```js import { pipeline } from '@huggingface/transformers'; const classifier = await pipeline('text-classification', 'onnx-community/ModernCE-base-sts-ONNX'); const output = await classifier('I love transformers!'); ```
Wave812/blockassist-bc-howling_pesty_trout_1756726068
Wave812
2025-09-01T11:29:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling pesty trout", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T11:28:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - howling pesty trout --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbkts/blockassist-bc-keen_fast_giraffe_1756725948
omerbkts
2025-09-01T11:26:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T11:26:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
giovannidemuri/llama3b-llama8b-er-v526-seed2-seed2-hx-alpaca-fpt
giovannidemuri
2025-09-01T11:23:35Z
0
0
null
[ "region:us" ]
null
2025-09-01T11:23:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmezt4ctw076tsr53nv5ql115_cmf0z0i0207yasr53u55w6wrx
BootesVoid
2025-09-01T11:23:18Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-01T11:23:13Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SEXY --- # Cmezt4Ctw076Tsr53Nv5Ql115_Cmf0Z0I0207Yasr53U55W6Wrx <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SEXY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SEXY", "lora_weights": "https://huggingface.co/BootesVoid/cmezt4ctw076tsr53nv5ql115_cmf0z0i0207yasr53u55w6wrx/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmezt4ctw076tsr53nv5ql115_cmf0z0i0207yasr53u55w6wrx', weight_name='lora.safetensors') image = pipeline('SEXY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 9e-05 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmezt4ctw076tsr53nv5ql115_cmf0z0i0207yasr53u55w6wrx/discussions) to add images that show off what you’ve made with this LoRA.
tralalerrotralala228/lilastone
tralalerrotralala228
2025-09-01T11:15:39Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-01T10:42:31Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: lilastone --- # Lilastone <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `lilastone` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "lilastone", "lora_weights": "https://huggingface.co/tralalerrotralala228/lilastone/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tralalerrotralala228/lilastone', weight_name='lora.safetensors') image = pipeline('lilastone').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/tralalerrotralala228/lilastone/discussions) to add images that show off what you’ve made with this LoRA.
cfgbydefault/SmolLM2-FT-MyDataset
cfgbydefault
2025-09-01T11:15:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T11:14:43Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cfgbydefault/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.12.1 - Transformers: 4.46.3 - Pytorch: 2.5.0+cu124 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
loveisgone/ok_myson
loveisgone
2025-09-01T11:10:48Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "meta", "facebook", "unsloth", "conversational", "en", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-09-01T11:08:41Z
--- base_model: meta-llama/Llama-3.1-1B-Instruct language: - en library_name: transformers license: llama3.1 tags: - llama-3 - llama - meta - facebook - unsloth - transformers ---
AnerYubo/blockassist-bc-elusive_mammalian_termite_1756725032
AnerYubo
2025-09-01T11:10:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "elusive mammalian termite", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T11:10:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - elusive mammalian termite --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
walbosui/blockassist-bc-miniature_playful_walrus_1756724876
walbosui
2025-09-01T11:08:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature playful walrus", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T11:08:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature playful walrus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Wave812/blockassist-bc-howling_pesty_trout_1756724692
Wave812
2025-09-01T11:06:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling pesty trout", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T11:05:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - howling pesty trout --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
giovannidemuri/llama8b-er-v522-seed2-hx
giovannidemuri
2025-09-01T11:03:59Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T09:25:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
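The "How to Get Started with the Model" section above is left as a placeholder. As a minimal sketch, assuming this repo is a standard Llama-style causal LM with a chat template (the record's tags list `llama`, `text-generation`, `conversational`), loading it with Transformers could look like the following; the prompt and generation settings are illustrative, not taken from the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giovannidemuri/llama8b-er-v522-seed2-hx"  # repo id from this record
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map="auto" requires `accelerate`
)

messages = [{"role": "user", "content": "Summarize what a model card is for."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))  # decode only the new tokens
```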
bah63843/blockassist-bc-plump_fast_antelope_1756724553
bah63843
2025-09-01T11:03:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T11:03:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756724120
liukevin666
2025-09-01T10:57:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:56:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ddfj34/act_so101_model_20250901_1280
ddfj34
2025-09-01T10:50:31Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:ddfj34/record-test-20250825_resize_1280", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-09-01T10:50:18Z
--- datasets: ddfj34/record-test-20250825_resize_1280 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - robotics - lerobot - act --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
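At execution time, overlapping action chunks are typically blended by temporal ensembling. Below is a toy sketch of that blending, assuming the exponential weighting described in the ACT paper (w_i = exp(-m * i), with i = 0 for the oldest prediction); this is illustrative NumPy, not the LeRobot API:

```python
import numpy as np

def ensembled_action(chunk_buffer, t, m=0.1):
    """Blend every chunk's prediction for timestep t.

    chunk_buffer: list of (t_pred, actions) pairs, where `actions` is a
    [chunk_len, action_dim] array predicted at timestep t_pred.
    """
    preds = []
    for t_pred, actions in sorted(chunk_buffer, key=lambda c: c[0]):  # oldest first
        offset = t - t_pred
        if 0 <= offset < len(actions):        # this chunk covers timestep t
            preds.append(actions[offset])
    w = np.exp(-m * np.arange(len(preds)))    # i = 0 gives the oldest chunk the largest weight
    w /= w.sum()
    return (np.stack(preds) * w[:, None]).sum(axis=0)
```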
mradermacher/L3.3-Joubutsu2000-GGUF
mradermacher
2025-09-01T10:45:23Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:KaraKaraWarehouse/L3.3-Joubutsu2000", "base_model:quantized:KaraKaraWarehouse/L3.3-Joubutsu2000", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-01T07:45:21Z
--- base_model: KaraKaraWarehouse/L3.3-Joubutsu2000 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/KaraKaraWarehouse/L3.3-Joubutsu2000 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#L3.3-Joubutsu2000-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.3-Joubutsu2000-GGUF/resolve/main/L3.3-Joubutsu2000.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
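A minimal usage sketch for these files, assuming the `llama-cpp-python` bindings (the chosen quant file and prompt are illustrative). Multi-part quants such as Q6_K and Q8_0 above are plain byte splits and must be concatenated into one file before loading:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Join multi-part quants first, e.g. in a shell:
#   cat L3.3-Joubutsu2000.Q6_K.gguf.part1of2 L3.3-Joubutsu2000.Q6_K.gguf.part2of2 > L3.3-Joubutsu2000.Q6_K.gguf
llm = Llama(model_path="L3.3-Joubutsu2000.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What does a Q4_K_M quant trade off?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```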
faisu-eth/blockassist-bc-thick_twitchy_jackal_1756723284
faisu-eth
2025-09-01T10:42:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick twitchy jackal", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:41:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thick twitchy jackal --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
walbosui/blockassist-bc-miniature_playful_walrus_1756722607
walbosui
2025-09-01T10:30:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature playful walrus", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:30:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature playful walrus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
noman007/FastVLM05B
noman007
2025-09-01T10:27:30Z
0
0
ml-fastvlm
[ "ml-fastvlm", "safetensors", "llava_qwen2", "text-generation", "transformers", "conversational", "custom_code", "arxiv:2412.13303", "license:apple-amlr", "region:us" ]
text-generation
2025-09-01T10:25:48Z
--- license: apple-amlr license_name: apple-ascl license_link: https://github.com/apple/ml-fastvlm/blob/main/LICENSE_MODEL library_name: ml-fastvlm tags: - transformers --- # FastVLM: Efficient Vision Encoding for Vision Language Models FastVLM was introduced in **[FastVLM: Efficient Vision Encoding for Vision Language Models](https://www.arxiv.org/abs/2412.13303). (CVPR 2025)** [//]: # (![FastViTHD Performance]&#40;acc_vs_latency_qwen-2.png&#41;) <p align="center"> <img src="https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/viewer/default/acc_vs_latency_qwen-2.png" alt="Accuracy vs latency figure." width="400"/> </p> ### Highlights * We introduce FastViTHD, a novel hybrid vision encoder designed to output fewer tokens and significantly reduce encoding time for high-resolution images. * Our smallest variant outperforms LLaVA-OneVision-0.5B with 85x faster Time-to-First-Token (TTFT) and 3.4x smaller vision encoder. * Our larger variants using Qwen2-7B LLM outperform recent works like Cambrian-1-8B while using a single image encoder with a 7.9x faster TTFT. ### Evaluations | Benchmark | FastVLM-0.5B | FastVLM-1.5B | FastVLM-7B | |:--------------|:------------:|:------------:|:----------:| | Ai2D | 68.0 | 77.4 | 83.6 | | ScienceQA | 85.2 | 94.4 | 96.7 | | MMMU | 33.9 | 37.8 | 45.4 | | VQAv2 | 76.3 | 79.1 | 80.8 | | ChartQA | 76.0 | 80.1 | 85.0 | | TextVQA | 64.5 | 70.4 | 74.9 | | InfoVQA | 46.4 | 59.7 | 75.8 | | DocVQA | 82.5 | 88.3 | 93.2 | | OCRBench | 63.9 | 70.2 | 73.1 | | RealWorldQA | 56.1 | 61.2 | 67.2 | | SeedBench-Img | 71.0 | 74.2 | 75.4 | ### Usage Example To run inference of PyTorch checkpoint, follow the instruction in the official repo: Download the model ``` huggingface-cli download apple/FastVLM-0.5B ``` Run inference using `predict.py` from the official repo. ```bash python predict.py --model-path /path/to/checkpoint-dir \ --image-file /path/to/image.png \ --prompt "Describe the image." ``` ### Run inference with Transformers (Remote Code) To run inference with transformers we can leverage `trust_remote_code` along with the following snippet: ```python import torch from PIL import Image from transformers import AutoTokenizer, AutoModelForCausalLM MID = "apple/FastVLM-0.5B" IMAGE_TOKEN_INDEX = -200 # what the model code looks for # Load tok = AutoTokenizer.from_pretrained(MID, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained( MID, torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, device_map="auto", trust_remote_code=True, ) # Build chat -> render to string (not tokens) so we can place <image> exactly messages = [ {"role": "user", "content": "<image>\nDescribe this image in detail."} ] rendered = tok.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) pre, post = rendered.split("<image>", 1) # Tokenize the text *around* the image token (no extra specials!) pre_ids = tok(pre, return_tensors="pt", add_special_tokens=False).input_ids post_ids = tok(post, return_tensors="pt", add_special_tokens=False).input_ids # Splice in the IMAGE token id (-200) at the placeholder position img_tok = torch.tensor([[IMAGE_TOKEN_INDEX]], dtype=pre_ids.dtype) input_ids = torch.cat([pre_ids, img_tok, post_ids], dim=1).to(model.device) attention_mask = torch.ones_like(input_ids, device=model.device) # Preprocess image via the model's own processor img = Image.open("test-2.jpg").convert("RGB") px = model.get_vision_tower().image_processor(images=img, return_tensors="pt")["pixel_values"] px = px.to(model.device, dtype=model.dtype) # Generate with torch.no_grad(): out = model.generate( inputs=input_ids, attention_mask=attention_mask, images=px, max_new_tokens=128, ) print(tok.decode(out[0], skip_special_tokens=True)) ``` ## Citation If you found this model useful, please cite the following paper: ``` @InProceedings{fastvlm2025, author = {Pavan Kumar Anasosalu Vasu, Fartash Faghri, Chun-Liang Li, Cem Koc, Nate True, Albert Antony, Gokul Santhanam, James Gabriel, Peter Grasch, Oncel Tuzel, Hadi Pouransari}, title = {FastVLM: Efficient Vision Encoding for Vision Language Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2025}, } ```
Rahulwale12/base_slm
Rahulwale12
2025-09-01T10:27:30Z
0
0
null
[ "pytorch", "transformer_lite", "region:us" ]
null
2025-09-01T10:26:34Z
# Base Small Language Model (SLM) ## 🚀 CPU-First Base Language Model This is the **base model** before fine-tuning - a blazing-fast, CPU-optimized Small Language Model foundation: ### ⚡ Performance Highlights - **164 tokens/sec** on CPU (fast base performance) - **45.2MB model size** (base model) - **3.7M parameters** (tiny but powerful) - **General language understanding** (pre-fine-tuning) ### 🎯 Training Speed - **28 minutes** for base training (4 epochs) - **Fast convergence** with efficient architecture - **Ready for fine-tuning** on any domain ### 🔧 Technical Specs - **Architecture:** Transformer-lite with RMSNorm, SwiGLU, Rotary embeddings - **Optimization:** CPU-first with memory mapping and efficient batching - **Framework:** PyTorch (CPU optimized) - **Training:** Trained on conversational data ### 📱 Deployment Ready - **CPU optimized:** No GPU required - **Fast startup:** Instant model loading - **Low memory:** Efficient memory usage - **Fine-tuning ready:** Perfect base for domain adaptation ## Usage ### Load and Use Base Model ```python import torch import sys sys.path.append('src') from model import create_model_from_config from tokenizer import BPETokenizer # Load model checkpoint = torch.load("checkpoints/model_latest.pt", map_location='cpu') config = checkpoint['config'] model = create_model_from_config(config) model.load_state_dict(checkpoint['model_state_dict']) # Load tokenizer tokenizer = BPETokenizer() tokenizer.load("data/tokenizer.json") # Generate prompt = "Hello, how are you?" input_ids = tokenizer.encode(prompt, add_special_tokens=True) input_ids = torch.tensor([input_ids], dtype=torch.long) model.eval() with torch.no_grad(): for _ in range(20): logits = model(input_ids)[0, -1, :] next_token = torch.argmax(logits, dim=-1).unsqueeze(0) input_ids = torch.cat([input_ids, next_token.unsqueeze(0)], dim=1) response = tokenizer.decode(input_ids[0].tolist(), skip_special_tokens=True) print(response) ``` ### Fine-tune on Your Data ```python # Use this base model for fine-tuning python finetune_qa.py --base_model checkpoints/model_latest.pt --conversations your_data.json ``` ## Model Details - **Base Model:** Trained on conversational data - **Architecture:** Transformer-lite with modern optimizations - **Size:** 45.2MB (base model) - **License:** MIT ## Performance | Metric | Value | |--------|-------| | Speed | 164 tokens/sec | | Size | 45.2MB | | Parameters | 3.7M | | Training Time | 28 minutes | This base model provides an excellent foundation for fine-tuning on specific domains or tasks.
arif696/blockassist-bc-regal_spotted_pelican_1756722329
arif696
2025-09-01T10:27:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:27:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
H1yori233/llm_from_scratch
H1yori233
2025-09-01T10:26:14Z
0
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2025-09-01T06:07:17Z
--- tags: - pytorch - transformers source_url: https://github.com/H1yori233/llm_from_scratch --- # LLM From Scratch ![cover](./data/llm-from-scratch.png) This project implements a full transformer language model without relying on high-level frameworks like HuggingFace or PyTorch's built-in attention. Every component is built from scratch. This repository simply combines my solutions for [assignment 1](https://github.com/stanford-cs336/assignment1-basics) and [assignment 2](https://github.com/stanford-cs336/assignment2-systems) of [Stanford CS336](https://stanford-cs336.github.io/spring2025/) course. I’ve merged them into a single repository for convenience, without adding any content beyond the original assignments. ## Features * **Hand-Coded Transformer Architecture**: Features RoPE positional encodings, RMSNorm, and SwiGLU FFNs. * **Flash Attention with Triton**: A custom implementation with hand-written Triton kernels to optimize GPU memory usage from O(N²) down to O(N). * **Multiple Optimizers**: Includes AdamW and SGD, with an extensible design to easily add more. * **BPE Tokenizer from Scratch**: With proper handling for special tokens. * **A Complete Training System**: Manages experiments with JSON configs and automatically logs results to a Markdown file. ## Quick Start ### 1. Download Data This project uses data from TinyStories and a subsample of OpenWebText. ```sh mkdir -p data cd data # Download the datasets wget https://huggingface.co/datasets/roneneldan/TinyStories/resolve/main/TinyStoriesV2-GPT4-train.txt wget https://huggingface.co/datasets/roneneldan/TinyStories/resolve/main/TinyStoriesV2-GPT4-valid.txt wget https://huggingface.co/datasets/stanford-cs336/owt-sample/resolve/main/owt_train.txt.gz gunzip owt_train.txt.gz wget https://huggingface.co/datasets/stanford-cs336/owt-sample/resolve/main/owt_valid.txt.gz gunzip owt_valid.txt.gz cd .. ``` ### 2. Install Dependencies ```bash uv sync ``` ### 3. Start Training Several example configurations are provided to get you started. ```bash # Default training (Flash Attention + AdamW) python train.py --config config.json # Try other configurations python train.py --config config_large.json python train.py --config config_std_sgd.json # View experiment results cat data/output.md ``` ## Experiment Tracking All experiments are automatically logged with comprehensive metrics: | timestamp | experiment_name | optimizer | attention_type | best_val_loss | params_M | tokens_M | |-----------|----------------|-----------|----------------|---------------|----------|----------| | 12-15 10:30 | baseline_4L_8H | adamw | flash | 2.123 | 25.6 | 163.8 | | 12-15 14:20 | large_6L_16H | adamw | flash | 1.987 | 67.2 | 327.7 | | 12-15 16:45 | std_attention | sgd | standard | 2.345 | 25.6 | 163.8 | ## Extending the System Adding new optimizers is straightforward: ```python class NewOptimizer(torch.optim.Optimizer): def __init__(self, params, lr=3e-4): # Implementation here def step(self, closure=None): # Update logic here # Register in factory optimizers = { "adamw": AdamW, "sgd": SGD, "new_optimizer": NewOptimizer, # Add here } ``` --- [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/H1yori233/llm_from_scratch)
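The O(N²) to O(N) memory reduction claimed above comes from never materializing a full softmax row. Below is a toy single-query sketch of the online-softmax accumulation that Flash Attention builds on (illustrative NumPy, not the repo's Triton kernel):

```python
import numpy as np

def online_softmax_weighted_sum(scores, values):
    """Attention output for one query, computed in a single streaming pass:
    only a running max, normalizer, and weighted sum are kept, never the
    full softmax row."""
    m = -np.inf                       # running max, for numerical stability
    l = 0.0                           # running softmax normalizer
    acc = np.zeros_like(values[0])    # running weighted sum of values
    for s, v in zip(scores, values):
        m_new = max(m, s)
        scale = np.exp(m - m_new)     # rescale old accumulators to the new max
        l = l * scale + np.exp(s - m_new)
        acc = acc * scale + np.exp(s - m_new) * v
        m = m_new
    return acc / l

# Sanity check: with identity values, the result is exactly softmax(scores).
scores, values = np.array([0.2, 1.5, -0.3]), np.eye(3)
expected = np.exp(scores - scores.max()) / np.exp(scores - scores.max()).sum()
assert np.allclose(online_softmax_weighted_sum(scores, values), expected)
```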
cookienter/lifechart-biobert-classifier-hptuning
cookienter
2025-09-01T10:25:09Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:dmis-lab/biobert-base-cased-v1.2", "base_model:finetune:dmis-lab/biobert-base-cased-v1.2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-01T08:57:02Z
--- library_name: transformers base_model: dmis-lab/biobert-base-cased-v1.2 tags: - generated_from_trainer metrics: - precision - recall model-index: - name: lifechart-biobert-classifier-hptuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lifechart-biobert-classifier-hptuning This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0061 - Macro F1: 0.7785 - Precision: 0.7800 - Recall: 0.7851 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.782388936370694e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.09571701748584874 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:| | 1.6698 | 1.0 | 1641 | 0.9449 | 0.7355 | 0.7227 | 0.7692 | | 0.7237 | 2.0 | 3282 | 0.8916 | 0.7793 | 0.7685 | 0.8001 | | 0.3676 | 3.0 | 4923 | 1.0061 | 0.7785 | 0.7800 | 0.7851 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.21.4
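As a minimal inference sketch (the input sentence is illustrative, and because the training dataset is undocumented the predicted labels may surface as generic `LABEL_i` names unless the config carries an `id2label` mapping):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="cookienter/lifechart-biobert-classifier-hptuning")
print(clf("Patient reports persistent fatigue and joint pain."))
# hypothetical output: [{'label': 'LABEL_3', 'score': 0.91}]
```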
liukevin666/blockassist-bc-yawning_striped_cassowary_1756722185
liukevin666
2025-09-01T10:24:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:23:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arif696/blockassist-bc-regal_spotted_pelican_1756722039
arif696
2025-09-01T10:22:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:22:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756721407
bah63843
2025-09-01T10:10:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:10:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sekirr/blockassist-bc-masked_tenacious_whale_1756721227
sekirr
2025-09-01T10:07:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:07:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked tenacious whale --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
faisu-eth/blockassist-bc-thick_twitchy_jackal_1756721044
faisu-eth
2025-09-01T10:04:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick twitchy jackal", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:04:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thick twitchy jackal --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF
mradermacher
2025-09-01T10:03:15Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "kto", "en", "base_model:AmberYifan/Llama-3.1-8B-sft-spin-10k-KTO", "base_model:quantized:AmberYifan/Llama-3.1-8B-sft-spin-10k-KTO", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-01T05:56:43Z
--- base_model: AmberYifan/Llama-3.1-8B-sft-spin-10k-KTO language: - en library_name: transformers model_name: Llama-3.1-8B-sft-spin-10k-KTO mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - generated_from_trainer - trl - kto --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-spin-10k-KTO <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-8B-sft-spin-10k-KTO-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756720920
Ferdi3425
2025-09-01T10:02:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:02:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756720888
liukevin666
2025-09-01T10:02:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T10:02:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pietro0hz/blockassist-bc-ferocious_toothy_tortoise_1756720483
pietro0hz
2025-09-01T09:56:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "ferocious toothy tortoise", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:56:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - ferocious toothy tortoise --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
the-usan/urdu-crime-adapter-zayadati-v1
the-usan
2025-09-01T09:48:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-01T09:48:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RikiyaT/mxbai-ettin-17m-allnli-angle-ft
RikiyaT
2025-09-01T09:43:40Z
12
0
null
[ "safetensors", "modernbert", "license:mit", "region:us" ]
null
2025-08-31T10:17:01Z
--- license: mit --- # RikiyaT/mxbai-ettin-17m-allnli-angle-ft Ettin + AnglE fine-tuned embedding model. - **Base Model**: `RikiyaT/mxbai-ettin-17m-medqa-angle-ft` - **Pooling Strategy**: `mean` (avg) - **Training Method**: AnglE loss (ibn/cln + angle=0.02) on a B-format dataset (text, positive, negative). - **Data Prompts**: `search_query:` / `search_document:` were used during training data creation. ## Usage ### With SentenceTransformers (recommended) A ready-to-use SentenceTransformers variant is available at **[RikiyaT/mxbai-ettin-17m-allnli-angle-ft-st](https://huggingface.co/RikiyaT/mxbai-ettin-17m-allnli-angle-ft-st)**. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('RikiyaT/mxbai-ettin-17m-allnli-angle-ft-st') sentences = ["This is an example sentence", "Each sentence is converted"] embeddings = model.encode(sentences) print(embeddings.shape) ``` ### With Transformers (this repository) ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("RikiyaT/mxbai-ettin-17m-allnli-angle-ft", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("RikiyaT/mxbai-ettin-17m-allnli-angle-ft", trust_remote_code=True) ```
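The Transformers snippet above loads the encoder but stops short of producing sentence embeddings. Since the card states mean pooling and `search_query:` / `search_document:` prompts, here is a sketch of the remaining steps (the example texts are illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "RikiyaT/mxbai-ettin-17m-allnli-angle-ft"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)

texts = ["search_query: how do planes fly", "search_document: Lift is generated by airflow over the wing."]
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state            # [B, T, H]
mask = batch["attention_mask"].unsqueeze(-1).float()     # [B, T, 1]
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # mean pooling, per the card
emb = torch.nn.functional.normalize(emb, dim=-1)         # unit norm for cosine similarity
print(emb @ emb.T)
```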
RikiyaT/mxbai-ettin-17m-medqa-angle-ft-st
RikiyaT
2025-09-01T09:43:21Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "modernbert", "sentence-similarity", "feature-extraction", "dense", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-31T20:29:48Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 7999 tokens - **Output Dimensionality:** 256 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 7999, 'do_lower_case': False, 'architecture': 'ModernBertModel'}) (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("RikiyaT/mxbai-ettin-17m-medqa-angle-ft-st") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 256] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities) # tensor([[1.0000, 0.6236, 0.3560], # [0.6236, 1.0000, 0.4001], # [0.3560, 0.4001, 1.0000]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.10.18 - Sentence Transformers: 5.1.0 - Transformers: 4.55.4 - PyTorch: 2.7.1+cu126 - Accelerate: 1.10.1 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citation ### BibTeX <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
NahedDom/blockassist-bc-flapping_stocky_leopard_1756717619
NahedDom
2025-09-01T09:43:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping stocky leopard", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:43:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping stocky leopard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
RikiyaT/mxbai-ettin-17m-nq-angle-ft-st
RikiyaT
2025-09-01T09:41:37Z
17
0
sentence-transformers
[ "sentence-transformers", "safetensors", "modernbert", "sentence-similarity", "feature-extraction", "dense", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-31T11:38:27Z
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer

This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 7999 tokens
- **Output Dimensionality:** 256 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 7999, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("RikiyaT/mxbai-ettin-17m-nq-angle-ft-st")
# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 256]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6237, 0.3312],
#         [0.6237, 1.0000, 0.3608],
#         [0.3312, 0.3608, 1.0000]])
```

## Training Details

### Framework Versions

- Python: 3.10.18
- Sentence Transformers: 5.1.0
- Transformers: 4.55.4
- PyTorch: 2.7.1+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citation

### BibTeX
godnpeter/pick_pikachu
godnpeter
2025-09-01T09:41:14Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:godnpeter/pick_pikachu", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-01T09:41:05Z
---
base_model: lerobot/smolvla_base
datasets: godnpeter/pick_pikachu
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---

# Model Card for smolvla

[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
Sonic-man/blockassist-bc-poisonous_graceful_cow_1756717040
Sonic-man
2025-09-01T09:40:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "poisonous graceful cow", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:40:06Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- poisonous graceful cow
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
slimpact2025/GilCal-ReplicateDemo
slimpact2025
2025-09-01T09:37:13Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-01T06:24:49Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: Gil
---

# Gilcal Replicatedemo

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `Gil` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "Gil",
    "lora_weights": "https://huggingface.co/slimpact2025/GilCal-ReplicateDemo/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('slimpact2025/GilCal-ReplicateDemo', weight_name='lora.safetensors')
image = pipeline('Gil').images[0]
image.save("my_image.png")  # save the generated image to disk
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2002
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/slimpact2025/GilCal-ReplicateDemo/discussions) to add images that show off what you've made with this LoRA.
taewan2002/smolvla_libero_10
taewan2002
2025-09-01T09:37:06Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:aopolin-lv/libero_object_no_noops_lerobot_v21", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-01T09:36:58Z
---
base_model: lerobot/smolvla_base
datasets: aopolin-lv/libero_object_no_noops_lerobot_v21
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---

# Model Card for smolvla

[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
AnonymousCS/populism_classifier_411
AnonymousCS
2025-09-01T09:36:27Z
3
0
transformers
[ "transformers", "safetensors", "rembert", "text-classification", "generated_from_trainer", "base_model:google/rembert", "base_model:finetune:google/rembert", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-31T23:53:47Z
---
library_name: transformers
license: apache-2.0
base_model: google/rembert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_411
  results: []
---

# populism_classifier_411

This model is a fine-tuned version of [google/rembert](https://huggingface.co/google/rembert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6686
- Accuracy: 0.9118
- 1-f1: 0.0
- 1-recall: 0.0
- 1-precision: 0.0
- Balanced Acc: 0.5

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:--------:|:-----------:|:------------:|
| 0.6746        | 1.0   | 91   | 0.6774          | 0.9118   | 0.0  | 0.0      | 0.0         | 0.5          |
| 0.6333        | 2.0   | 182  | 0.6769          | 0.9118   | 0.0  | 0.0      | 0.0         | 0.5          |
| 0.7695        | 3.0   | 273  | 0.6672          | 0.9118   | 0.0  | 0.0      | 0.0         | 0.5          |
| 0.6843        | 4.0   | 364  | 0.6757          | 0.9118   | 0.0  | 0.0      | 0.0         | 0.5          |
| 0.7178        | 5.0   | 455  | 0.6668          | 0.9118   | 0.0  | 0.0      | 0.0         | 0.5          |
| 0.6505        | 6.0   | 546  | 0.6666          | 0.9118   | 0.0  | 0.0      | 0.0         | 0.5          |
| 0.5584        | 7.0   | 637  | 0.6691          | 0.9118   | 0.0  | 0.0      | 0.0         | 0.5          |
| 0.8056        | 8.0   | 728  | 0.6686          | 0.9118   | 0.0  | 0.0      | 0.0         | 0.5          |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
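The hyperparameter list above maps roughly onto the following `transformers` Trainer configuration. This is a hypothetical reconstruction for readers who want to reproduce the setup; the actual training script and dataset are not documented in the card.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical reconstruction of the reported setup; the training data is undocumented.
model = AutoModelForSequenceClassification.from_pretrained("google/rembert", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("google/rembert")

args = TrainingArguments(
    output_dir="populism_classifier_411",
    learning_rate=1e-5,              # learning_rate: 1e-05
    per_device_train_batch_size=16,  # train_batch_size: 16
    per_device_eval_batch_size=32,   # eval_batch_size: 32
    seed=42,                         # seed: 42
    optim="adamw_torch",             # adamw_torch with default betas/epsilon
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=20,             # num_epochs: 20
    fp16=True,                       # mixed_precision_training: Native AMP
)

# Tokenized train/eval datasets must be supplied here; omitted because the
# card does not document the training data.
trainer = Trainer(model=model, args=args)
```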
acidjp/blockassist-bc-pesty_extinct_prawn_1756716512
acidjp
2025-09-01T09:31:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:31:39Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kavpro/blockassist-bc-tall_lively_caribou_1756718849
kavpro
2025-09-01T09:28:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall lively caribou", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:28:08Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llm-jp/optimal-sparsity-code-d2048-E128-k16-52.2B-A7.1B
llm-jp
2025-09-01T09:21:26Z
8
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:44:05Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d2048-E64-k16-26.4B-A7.1B
llm-jp
2025-09-01T09:21:24Z
8
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:42:56Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d1024-E16-k16-1.9B-A1.9B
llm-jp
2025-09-01T09:21:11Z
8
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:28:00Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d512-E32-k16-920M-A520M
llm-jp
2025-09-01T09:21:05Z
8
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:21:51Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d2048-E16-k8-7.1B-A3.9B
llm-jp
2025-09-01T09:20:57Z
7
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:38:06Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d512-E64-k8-1.7B-A320M
llm-jp
2025-09-01T09:20:39Z
8
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:21:19Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d2048-E32-k4-13.6B-A2.3B
llm-jp
2025-09-01T09:20:28Z
8
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:34:26Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d2048-E16-k2-7.1B-A1.5B
llm-jp
2025-09-01T09:19:58Z
8
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:30:13Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d1024-E32-k2-3.5B-A470M
llm-jp
2025-09-01T09:19:49Z
8
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:22:33Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d1024-E16-k2-1.9B-A470M
llm-jp
2025-09-01T09:19:48Z
8
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:22:27Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
llm-jp/optimal-sparsity-code-d512-E64-k2-1.7B-A170M
llm-jp
2025-09-01T09:19:42Z
7
0
null
[ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
null
2025-08-21T15:04:28Z
## How to cite

If you find our work helpful, please feel free to cite the paper.

```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
liukevin666/blockassist-bc-yawning_striped_cassowary_1756718301
liukevin666
2025-09-01T09:19:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:19:16Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aisingapore/Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic
aisingapore
2025-09-01T09:18:50Z
91
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "zh", "vi", "id", "th", "fil", "ta", "ms", "km", "lo", "my", "jv", "su", "arxiv:2504.05747", "base_model:aisingapore/Llama-SEA-LION-v3-70B-IT", "base_model:quantized:aisingapore/Llama-SEA-LION-v3-70B-IT", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
text-generation
2025-08-21T04:54:52Z
---
library_name: transformers
pipeline_tag: text-generation
base_model:
- aisingapore/Llama-SEA-LION-v3-70B-IT
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
- jv
- su
license: llama3.1
---

<div>
<img src="https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/viewer/default/llama_sea_lion_3.5_70b_r_banner.png"/>
</div>

# Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic

Last updated: 2025-09-01

[**SEA-LION**](https://arxiv.org/abs/2504.05747) is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

### Model Description

SEA-LION stands for *Southeast Asian Languages In One Network*.

Quantization was performed on Llama-SEA-LION-v3.5-70B-R to produce optimized variants that reduce memory requirements while maintaining model quality. These quantized models support inference on a range of consumer-grade GPUs and are compatible with various inference engines.

For tokenization, the model employs the default tokenizer used in Llama 3.1-70B-Instruct.

- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Context length:** 128k tokens
- **Language(s):** Burmese, Chinese, English, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tamil, Thai, Vietnamese
- **License:** [Llama 3.1 Community License](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)
- **Quantized from model:** Llama-SEA-LION-v3.5-70B-R

This repo contains the FP8-Dynamic format model file for aisingapore/Llama-SEA-LION-v3.5-70B-R.

Model weights included in this repository:

- [Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic](https://huggingface.co/aisingapore/Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic)

## Evaluation

### Test Results

For details on Llama-SEA-LION-v3.5-70B-R performance, please refer to the SEA-HELM leaderboard: [Leaderboard results on SEA-HELM](https://leaderboard.sea-lion.ai/).

### Out-of-Scope Use

The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.

## Bias, Risks, and Limitations

*The model was not tested for robustness against adversarial prompting.* It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to potential inconsistencies.

## More Information

This is the repository for the commercial instruction-tuned model. The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.

AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.

[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)

For more info, please contact us at sealion@aisingapore.org

## Team

Antonyrex Sajeban, Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Liew Rachel, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin

## Contact

sealion@aisingapore.org
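The card notes compatibility with various inference engines but gives no loading example. A minimal sketch with vLLM, which can load compressed-tensors FP8 checkpoints directly, might look like the following; the tensor-parallel degree and sampling values are illustrative assumptions, not official recommendations.

```python
from vllm import LLM, SamplingParams

# Illustrative only: a 70B FP8 checkpoint still needs multiple GPUs;
# tensor_parallel_size=4 is an assumption, adjust to your hardware.
llm = LLM(
    model="aisingapore/Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic",
    tensor_parallel_size=4,
)
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Tell me about Southeast Asia."], params)
for out in outputs:
    print(out.outputs[0].text)
```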
faisu-eth/blockassist-bc-thick_twitchy_jackal_1756718117
faisu-eth
2025-09-01T09:16:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick twitchy jackal", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:15:45Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick twitchy jackal
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756718064
Ferdi3425
2025-09-01T09:15:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:15:16Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
goptouy/blockassist-bc-beaked_frisky_ox_1756717622
goptouy
2025-09-01T09:07:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked frisky ox", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:07:04Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked frisky ox
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbkts/blockassist-bc-keen_fast_giraffe_1756717440
omerbkts
2025-09-01T09:04:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:04:21Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
walbosui/blockassist-bc-miniature_playful_walrus_1756717359
walbosui
2025-09-01T09:03:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature playful walrus", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:03:20Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature playful walrus
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GroomerG/blockassist-bc-vicious_pawing_badger_1756715741
GroomerG
2025-09-01T09:00:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T09:00:54Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756715285
coelacanthxyz
2025-09-01T08:54:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:54:39Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sud103/llama-3.1-8b-customer-churn
sud103
2025-09-01T08:54:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-16T08:50:02Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Sonic-man/blockassist-bc-poisonous_graceful_cow_1756714278
Sonic-man
2025-09-01T08:50:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "poisonous graceful cow", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:50:54Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- poisonous graceful cow
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756715037
Sayemahsjn
2025-09-01T08:42:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:42:35Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756714059
Loder-S
2025-09-01T08:35:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sprightly knobby tiger", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:35:37Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-1.9t_diff_pv_sycophant
coastalcph
2025-09-01T08:31:09Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-09-01T08:30:16Z
# Combined Task Vector Model

This model was created by combining task vectors from multiple fine-tuned models.

## Task Vector Computation

```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05")
t_3 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05")  # per finetuned_model3 in the args below

t_combined = 1.0 * t_1 + 1.9 * t_2 - 1.9 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-1.5B-Instruct", scaling_coef=1.0)
```

Models Used

- Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05

Technical Details

- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args:

```json
{
  "pretrained_model": "Qwen/Qwen2.5-1.5B-Instruct",
  "finetuned_model1": "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy",
  "finetuned_model2": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05",
  "finetuned_model3": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05",
  "output_model_name": "coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-1.9t_diff_pv_sycophant",
  "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
  "scaling_coef": 1.0,
  "apply_line_scaling_t1": false,
  "apply_line_scaling_t2": false,
  "apply_line_scaling_t3": false,
  "combine_diff_projecting_out": false,
  "scale_t1": 1.0,
  "scale_t2": 1.9,
  "scale_t3": 1.9
}
```
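For context, a task vector is the element-wise difference between a fine-tuned checkpoint's weights and the base model's weights, and combining them is plain tensor arithmetic. The sketch below shows a minimal `TaskVector` helper sufficient to run the snippet above; it is an assumed implementation for illustration, not the repository's actual code (which is identified only by its git hash).

```python
import torch
from transformers import AutoModelForCausalLM

class TaskVector:
    """Element-wise weight delta between a fine-tuned model and its base model."""

    def __init__(self, base_id=None, finetuned_id=None, vector=None):
        if vector is not None:  # internal constructor for arithmetic results
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float32).state_dict()
        tuned = AutoModelForCausalLM.from_pretrained(finetuned_id, torch_dtype=torch.float32).state_dict()
        self.vector = {k: tuned[k] - base[k] for k in tuned if k in base}

    def __add__(self, other):
        return TaskVector(vector={k: self.vector[k] + other.vector[k] for k in self.vector})

    def __sub__(self, other):
        return TaskVector(vector={k: self.vector[k] - other.vector[k] for k in self.vector})

    def __rmul__(self, coef):  # enables expressions like `1.9 * t_2`
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def apply_to(self, base_id, scaling_coef=1.0):
        # Add the (scaled) combined delta back onto the base model's weights.
        model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float32)
        state = model.state_dict()
        for k, delta in self.vector.items():
            state[k] = state[k] + scaling_coef * delta
        model.load_state_dict(state)
        return model
```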
pidbu/blockassist-bc-whistling_alert_shrew_1756715287
pidbu
2025-09-01T08:29:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:28:58Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
elmenbillion/blockassist-bc-beaked_sharp_otter_1756713590
elmenbillion
2025-09-01T08:28:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked sharp otter", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:28:08Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aXsalll/blockassist-bc-chattering_galloping_ape_1756715053
aXsalll
2025-09-01T08:25:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "chattering galloping ape", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:24:37Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
calegpedia/blockassist-bc-stealthy_slimy_rooster_1756713384
calegpedia
2025-09-01T08:23:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:23:36Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
betreosi/blockassist-bc-stinging_prowling_lion_1756714749
betreosi
2025-09-01T08:19:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging prowling lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T08:19:33Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging prowling lion
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).