
For research purposes only. Please read the model usage rules at HF: decart-ai/Lucy-Edit-Dev. (Note: I am currently developing the code and will post it here for you to test when it is finished.)

I have modified some of the code to support img2img editing. It uses around 15-18 GB of VRAM (I recommend having more than 24 GB) and runs in about 10-15 seconds on an L4 at 25 steps. This is the lowest VRAM usage I have tested in diffusers (BF16), and the results are satisfactory.


How to run kpsss34/LucyEdit_hybrid

1. Download the custom pipeline file "pipeline_lucy_edit_i2i.py" from this repo (a download sketch follows these steps).

2. Put the pipeline file in your current working directory.

3. Run the inference code below.
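
If you prefer to fetch the files programmatically, here is a minimal sketch using huggingface_hub (it assumes both files are hosted in this repo under the names used in the steps above and in the infer code below):

from huggingface_hub import hf_hub_download

# Download the custom i2i pipeline and the bundled checkpoint into the current directory
# (assumed filenames, taken from the steps above and the infer code below)
hf_hub_download(repo_id="kpsss34/LucyEdit_hybrid", filename="pipeline_lucy_edit_i2i.py", local_dir=".")
hf_hub_download(repo_id="kpsss34/LucyEdit_hybrid", filename="LucyEdit_Hybrid.pth", local_dir=".")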

Disclaimer: The quality of the model is not that good and it has a lot of flaws. The model has not even been fine-tuned yet, but oh well, my vacation is over anyway.

import torch
from PIL import Image
from pipeline_lucy_edit_i2i import LucyEditPipeline

model_path = "LucyEdit_Hybrid.pth"
device = "cuda"

# Load all pipeline components bundled in the checkpoint file
components = torch.load(model_path, weights_only=False)

pipe = LucyEditPipeline(
    tokenizer=components['tokenizer'],
    text_encoder=components['text_encoder'],
    vae=components['vae'],
    scheduler=components['scheduler'],
    transformer=components['transformer']
)
pipe.to(device)
#pipe.enable_model_cpu_offload() #Uses slightly less VRAM, but is 20-40% slower.

print("Pipeline loaded")

image_path = "your_image.jpg"
prompt = "women wear a white shirt and red pants"
negative_prompt = "blurry, low quality, deformed, mutated, ugly, bad anatomy"

init_image = Image.open(image_path).convert("RGB")

original_width, original_height = init_image.size
# Snap width/height down to the nearest multiple of 64 before editing
width = (original_width // 64) * 64
height = (original_height // 64) * 64

init_image = init_image.resize((width, height))

generator = torch.Generator(device=device).manual_seed(43)

output_image = pipe(
    prompt=prompt,
    image=init_image,
    negative_prompt=negative_prompt,
    height=height,
    width=width,
    num_inference_steps=25,  
    guidance_scale=7.0,     
    generator=generator,
).frames[0]

output_image.save("LucyEdit_image.png")

print("Image saved as LucyEdit_image.png")