import torch

from modules import devices, shared

module_in_gpu = None
cpu = torch.device("cpu")


def send_everything_to_cpu():
    global module_in_gpu

    if module_in_gpu is not None:
        module_in_gpu.to(cpu)

    module_in_gpu = None


def is_needed(sd_model):
    return shared.cmd_opts.lowvram or shared.cmd_opts.medvram or shared.cmd_opts.medvram_sdxl and hasattr(sd_model, 'conditioner')


def apply(sd_model):
    enable = is_needed(sd_model)
    shared.parallel_processing_allowed = not enable

    if enable:
        setup_for_low_vram(sd_model, not shared.cmd_opts.lowvram)
    else:
        sd_model.lowvram = False


def setup_for_low_vram(sd_model, use_medvram):
    if getattr(sd_model, 'lowvram', False):
        return

    sd_model.lowvram = True

    parents = {}

    def send_me_to_gpu(module, _):
        """send this module to GPU; send whatever tracked module was previous in GPU to CPU;
        we add this as forward_pre_hook to a lot of modules and this way all but one of them will
        be in CPU
        """
        global module_in_gpu

        module = parents.get(module, module)

        if module_in_gpu == module:
            return

        if module_in_gpu is not None:
            module_in_gpu.to(cpu)

        module.to(devices.device)
        module_in_gpu = module

    # first_stage_model is not invoked through forward(), it is invoked through encode()/decode(),
    # so a forward_pre_hook would never fire for it; wrap those two methods instead
    first_stage_model = sd_model.first_stage_model
    first_stage_model_encode = sd_model.first_stage_model.encode
    first_stage_model_decode = sd_model.first_stage_model.decode

    def first_stage_model_encode_wrap(x):
        send_me_to_gpu(first_stage_model, None)
        return first_stage_model_encode(x)

    def first_stage_model_decode_wrap(z):
        send_me_to_gpu(first_stage_model, None)
        return first_stage_model_decode(z)

    # large submodules that should stay on CPU while the rest of the model is moved to GPU
    to_remain_in_cpu = [
        (sd_model, 'first_stage_model'),
        (sd_model, 'depth_model'),
        (sd_model, 'embedder'),
        (sd_model, 'model'),
        (sd_model, 'embedder'),
    ]

    is_sdxl = hasattr(sd_model, 'conditioner')
    is_sd2 = not is_sdxl and hasattr(sd_model.cond_stage_model, 'model')

    if is_sdxl:
        to_remain_in_cpu.append((sd_model, 'conditioner'))
    elif is_sd2:
        to_remain_in_cpu.append((sd_model.cond_stage_model, 'model'))
    else:
        to_remain_in_cpu.append((sd_model.cond_stage_model, 'transformer'))

    # detach the big modules so that sd_model.to() below does not move them to GPU
    stored = []
    for obj, field in to_remain_in_cpu:
        module = getattr(obj, field, None)
        stored.append(module)
        setattr(obj, field, None)

    # send everything else to GPU
    sd_model.to(devices.device)

    # put the detached modules back; they remain on CPU
    for (obj, field), module in zip(to_remain_in_cpu, stored):
        setattr(obj, field, module)

    # register hooks that pull each big module to GPU right before it is used
    if is_sdxl:
        sd_model.conditioner.register_forward_pre_hook(send_me_to_gpu)
    elif is_sd2:
        sd_model.cond_stage_model.model.register_forward_pre_hook(send_me_to_gpu)
        sd_model.cond_stage_model.model.token_embedding.register_forward_pre_hook(send_me_to_gpu)
        parents[sd_model.cond_stage_model.model] = sd_model.cond_stage_model
        parents[sd_model.cond_stage_model.model.token_embedding] = sd_model.cond_stage_model
    else:
        sd_model.cond_stage_model.transformer.register_forward_pre_hook(send_me_to_gpu)
        parents[sd_model.cond_stage_model.transformer] = sd_model.cond_stage_model

    sd_model.first_stage_model.register_forward_pre_hook(send_me_to_gpu)
    sd_model.first_stage_model.encode = first_stage_model_encode_wrap
    sd_model.first_stage_model.decode = first_stage_model_decode_wrap
    if sd_model.depth_model:
        sd_model.depth_model.register_forward_pre_hook(send_me_to_gpu)
    if sd_model.embedder:
        sd_model.embedder.register_forward_pre_hook(send_me_to_gpu)

    if use_medvram:
        # --medvram: the UNet is swapped to GPU as a single unit
        sd_model.model.register_forward_pre_hook(send_me_to_gpu)
    else:
        # --lowvram: the UNet itself is too large, so its sub-blocks are shuttled individually;
        # detach them, move the remainder of the UNet to GPU, then reattach and hook each block
        diff_model = sd_model.model.diffusion_model

        stored = diff_model.input_blocks, diff_model.middle_block, diff_model.output_blocks, diff_model.time_embed
        diff_model.input_blocks, diff_model.middle_block, diff_model.output_blocks, diff_model.time_embed = None, None, None, None
        sd_model.model.to(devices.device)
        diff_model.input_blocks, diff_model.middle_block, diff_model.output_blocks, diff_model.time_embed = stored

        diff_model.time_embed.register_forward_pre_hook(send_me_to_gpu)
        for block in diff_model.input_blocks:
            block.register_forward_pre_hook(send_me_to_gpu)
        diff_model.middle_block.register_forward_pre_hook(send_me_to_gpu)
        for block in diff_model.output_blocks:
            block.register_forward_pre_hook(send_me_to_gpu)


def is_enabled(sd_model):
    return sd_model.lowvram
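# Usage sketch (illustration only, not part of this module's own code; it assumes
# the usual webui setup where a loaded checkpoint is available as `shared.sd_model`):
#
#     from modules import lowvram, shared
#
#     lowvram.apply(shared.sd_model)           # installs the hooks when --lowvram/--medvram applies
#     if lowvram.is_enabled(shared.sd_model):
#         lowvram.send_everything_to_cpu()     # e.g. before loading another large model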