---
title: FLUX.MF | Kontext Lightning 8-Step Turbo Model
emoji: ⚡
colorFrom: red
colorTo: yellow
sdk: gradio
sdk_version: 5.35.0
app_file: app_kontext.py
pinned: true
short_description: Inspired by our 8-Step FLUX Merged/Fusion Models
---
**Update 7/9/25:** This model is now quantized and implemented in [this example space](https://huggingface.co/spaces/LPX55/Kontext-Multi_Lightning_4bit-nf4/). Preliminary VRAM usage is around 10GB with faster inference. We will keep experimenting with different weights and schedulers to find particularly well-performing combinations.
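
Below is a minimal sketch of how a 4-bit NF4 quantized Kontext pipeline could be assembled with diffusers and bitsandbytes. The model ID and settings are assumptions for illustration, not the exact configuration used in the linked space.

```python
# Sketch: FLUX.1 Kontext-dev with a 4-bit NF4 quantized transformer.
# Assumes diffusers >= 0.34 and bitsandbytes installed; IDs/settings are illustrative.
import torch
from diffusers import BitsAndBytesConfig, FluxKontextPipeline, FluxTransformer2DModel

MODEL_ID = "black-forest-labs/FLUX.1-Kontext-dev"

# NF4 quantization for the transformer, which dominates the VRAM footprint.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize only the transformer; text encoders and VAE stay in bf16.
transformer = FluxTransformer2DModel.from_pretrained(
    MODEL_ID,
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxKontextPipeline.from_pretrained(
    MODEL_ID,
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # non-quantized components move to GPU; bnb handles the transformer
```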
# FLUX.1 Kontext-dev X LoRA Experimentation | |
Highly experimental; more details will be added later.
- 6-8 steps | |
- <s>Euler, SGM Uniform (recommended)</s> Getting mixed results with these schedulers now; feel free to experiment and share what works (see the inference sketch below).
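
For reference, here is a minimal 8-step inference sketch that continues from the quantized pipeline above. The LoRA repository, weight filename, input image, and guidance value are placeholders, not a confirmed checkpoint or recommended settings.

```python
# Sketch: 8-step Kontext editing with a Lightning-style LoRA (repo/filename are placeholders).
import torch
from diffusers.utils import load_image

# Hypothetical LoRA location; substitute the actual Lightning LoRA you are testing.
pipe.load_lora_weights("your-org/flux-kontext-lightning-lora", weight_name="lora.safetensors")

source = load_image("input.png")  # placeholder local input image

result = pipe(
    image=source,
    prompt="Make the car red",
    num_inference_steps=8,   # 6-8 steps with the turbo/Lightning weights
    guidance_scale=2.5,      # starting point; tune alongside the step count
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
result.save("kontext_8step.png")
```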