"""
Authors: Shengkui Zhao, Zexu Pan
"""

import os
import subprocess

import soundfile as sf
import torch
from tqdm import tqdm

from utils.decode import decode_one_audio
from dataloader.dataloader import DataReader


class SpeechModel:
    """
    The SpeechModel class is a base class designed to handle speech processing tasks,
    such as loading, processing, and decoding audio data. It initializes the
    computational device (CPU or GPU) and holds model-related attributes. The class
    is flexible and intended to be extended by specific speech models for tasks such
    as speech enhancement, speech separation, and target speaker extraction.

    Attributes:
    - args: Argument parser object that contains configuration settings.
    - device: The device (CPU or GPU) on which the model will run.
    - model: The actual model used for speech processing tasks (to be loaded by subclasses).
    - name: A placeholder for the model's name.
    - data: A dictionary to store any additional data related to the model, such as audio input.
    """

    def __init__(self, args):
        """
        Initializes the SpeechModel class by determining the computation device
        (GPU or CPU) to be used for running the model, based on system availability.

        Args:
        - args: Argument parser object containing settings like whether to use CUDA (GPU) or not.
        """
        # Prefer the GPU with the most free memory when CUDA is requested and
        # available; otherwise fall back to the CPU.
        if args.use_cuda and torch.cuda.is_available():
            free_gpu_id = self.get_free_gpu()
            if free_gpu_id is not None:
                args.use_cuda = 1
                torch.cuda.set_device(free_gpu_id)
                print(f'use GPU: {free_gpu_id}')
                self.device = torch.device('cuda')
            else:
                args.use_cuda = 0
                self.device = torch.device('cpu')
        else:
            args.use_cuda = 0
            self.device = torch.device('cpu')

        self.args = args
        self.model = None
        self.name = None
        self.data = {}

    def get_free_gpu(self):
        """
        Identifies the GPU with the most free memory using 'nvidia-smi' and returns its index.

        This function queries the available GPUs on the system and determines which one
        has the highest amount of free memory. It uses the `nvidia-smi` command-line
        tool to gather GPU memory usage data. If successful, it returns the index of
        the GPU with the most free memory.
        If the query fails or an error occurs, it returns None.

        Returns:
        int: Index of the GPU with the most free memory, or None if no GPU is found
        or an error occurs.
        """
        try:
            # Query per-GPU memory usage; nvidia-smi emits one "used, free" line per GPU.
            result = subprocess.run(
                ['nvidia-smi', '--query-gpu=memory.used,memory.free',
                 '--format=csv,nounits,noheader'],
                stdout=subprocess.PIPE)
            gpu_info = result.stdout.decode('utf-8').strip().split('\n')
            free_gpu = None
            max_free_memory = 0
            for i, info in enumerate(gpu_info):
                used, free = map(int, info.split(','))
                if free > max_free_memory:
                    max_free_memory = free
                    free_gpu = i
            return free_gpu
        except Exception as e:
            print(f'Error finding free GPU: {e}')
            return None

    def load_model(self):
        """
        Loads a pre-trained model checkpoint from a specified directory. It checks for
        the best model ('last_best_checkpoint') or the most recent checkpoint
        ('last_checkpoint') in the checkpoint directory. If a model is found, it loads
        the model state into the current model instance. If no checkpoint is found,
        it prints a warning message.

        Steps:
        - Search for the best model checkpoint or the most recent one.
        - Load the model's state dictionary from the checkpoint file.

        Raises:
        - FileNotFoundError: If neither 'last_best_checkpoint' nor 'last_checkpoint'
          files are found.
        """
        best_name = os.path.join(self.args.checkpoint_dir, 'last_best_checkpoint')
        ckpt_name = os.path.join(self.args.checkpoint_dir, 'last_checkpoint')
        if os.path.isfile(best_name):
            name = best_name
        elif os.path.isfile(ckpt_name):
            name = ckpt_name
        else:
            print('Warning: No existing checkpoint or best model found!')
            return

        # The pointer file stores the checkpoint filename on its first line.
        with open(name, 'r') as f:
            model_name = f.readline().strip()
        checkpoint_path = os.path.join(self.args.checkpoint_dir, model_name)
        checkpoint = torch.load(checkpoint_path,
                                map_location=lambda storage, loc: storage)
        pretrained_model = checkpoint['model'] if 'model' in checkpoint else checkpoint

        # Copy matching parameters, tolerating a 'module.' prefix mismatch that
        # arises when the checkpoint was saved from a DataParallel-wrapped model.
        state = self.model.state_dict()
        for key in state.keys():
            if key in pretrained_model and state[key].shape == pretrained_model[key].shape:
                state[key] = pretrained_model[key]
            elif key.replace('module.', '') in pretrained_model and \
                    state[key].shape == pretrained_model[key.replace('module.', '')].shape:
                state[key] = pretrained_model[key.replace('module.', '')]
            elif 'module.' + key in pretrained_model and \
                    state[key].shape == pretrained_model['module.' + key].shape:
                state[key] = pretrained_model['module.' + key]
            else:
                print(f'{key} not loaded')
        self.model.load_state_dict(state)
        print(f'Successfully loaded {model_name} for decoding')

    def decode(self):
        """
        Decodes the input audio data using the loaded model and ensures the output
        matches the original audio length. This method processes the audio through a
        speech model (e.g., for enhancement, separation, etc.), and truncates the
        resulting audio to match the original input's length.
        The method supports multiple speakers if the model handles multi-speaker audio.

        Returns:
        output_audio: The decoded audio after processing, truncated to the input audio
        length. If multi-speaker audio is processed, a list of truncated audio outputs
        per speaker is returned.
        """
        output_audio = decode_one_audio(self.model, self.device,
                                        self.data['audio'], self.args)
        # Truncate each output to the original input length; separation models
        # return one waveform per speaker.
        if isinstance(output_audio, list):
            for spk in range(self.args.num_spks):
                output_audio[spk] = output_audio[spk][:self.data['audio_len']]
        else:
            output_audio = output_audio[:self.data['audio_len']]
        return output_audio

    def process(self, input_path, online_write=False, output_path=None):
        """
        Load and process audio files from the specified input path. Optionally,
        write the output audio files to the specified output directory.

        Args:
            input_path (str): Path to the input audio files or folder.
            online_write (bool): Whether to write the processed audio to disk in real-time.
            output_path (str): Optional path for writing output files. If None,
                output will be stored in self.result.

        Returns:
            dict or ndarray: Processed audio results either as a dictionary or as a
            single array, depending on the number of audio files processed.
            Returns None if online_write is enabled.
        """
        self.result = {}
        self.args.input_path = input_path
        data_reader = DataReader(self.args)

        output_wave_dir = None
        if online_write:
            output_wave_dir = self.args.output_dir
            if isinstance(output_path, str):
                # When an explicit output path is given, write into a
                # model-name subdirectory under it.
                output_wave_dir = os.path.join(output_path, self.name)
            if not os.path.isdir(output_wave_dir):
                os.makedirs(output_wave_dir)

        num_samples = len(data_reader)
        print(f'Running {self.name} ...')

        if self.args.task == 'target_speaker_extraction':
            # The audio-visual pipeline performs its own decoding and writing.
            from utils.video_process import process_tse
            assert online_write == True
            process_tse(self.args, self.model, self.device, data_reader, output_wave_dir)
        else:
            with torch.no_grad():
                for idx in tqdm(range(num_samples)):
                    self.data = {}
                    input_audio, wav_id, input_len = data_reader[idx]
                    self.data['audio'] = input_audio
                    self.data['id'] = wav_id
                    self.data['audio_len'] = input_len
                    output_audio = self.decode()
                    if online_write:
                        if isinstance(output_audio, list):
                            # One output file per separated speaker.
                            for spk in range(self.args.num_spks):
                                output_file = os.path.join(
                                    output_wave_dir,
                                    wav_id.replace('.wav', f'_s{spk+1}.wav'))
                                sf.write(output_file, output_audio[spk],
                                         self.args.sampling_rate)
                        else:
                            output_file = os.path.join(output_wave_dir, wav_id)
                            sf.write(output_file, output_audio, self.args.sampling_rate)
                    else:
                        self.result[wav_id] = output_audio

        if not online_write:
            # Unwrap the dictionary when only a single file was processed.
            if len(self.result) == 1:
                return next(iter(self.result.values()))
            return self.result

    def write(self, output_path, add_subdir=False, use_key=False):
        """
        Write the processed audio results to the specified output path.

        Args:
            output_path (str): The directory or file path where processed audio will
                be saved. If not provided, defaults to self.args.output_dir.
            add_subdir (bool): If True, appends the model name as a subdirectory
                to the output path.
            use_key (bool): If True, uses the result dictionary's keys (audio file IDs)
                for filenames.

        Returns:
            None: Outputs are written to disk, no data is returned.
        """
        # Fall back to the configured output directory when no path is given.
        if not isinstance(output_path, str):
            output_path = self.args.output_dir
        if add_subdir:
            if os.path.isfile(output_path):
                print(f'File exists: {output_path}, remove it and try again!')
                return
            output_path = os.path.join(output_path, self.name)
            if not os.path.isdir(output_path):
                os.makedirs(output_path)
        if use_key and not os.path.isdir(output_path):
            if os.path.exists(output_path):
                print(f'File exists: {output_path}, remove it and try again!')
                return
            os.makedirs(output_path)
        if not use_key and os.path.isdir(output_path):
            print(f'Directory exists: {output_path}, remove it and try again!')
            return

        for key in self.result:
            if use_key:
                # Name output files after the input audio IDs.
                if isinstance(self.result[key], list):  # multi-speaker output
                    for spk in range(self.args.num_spks):
                        sf.write(os.path.join(output_path,
                                              key.replace('.wav', f'_s{spk+1}.wav')),
                                 self.result[key][spk], self.args.sampling_rate)
                else:
                    sf.write(os.path.join(output_path, key),
                             self.result[key], self.args.sampling_rate)
            else:
                # Write directly to the given file path.
                if isinstance(self.result[key], list):  # multi-speaker output
                    for spk in range(self.args.num_spks):
                        sf.write(output_path.replace('.wav', f'_s{spk+1}.wav'),
                                 self.result[key][spk], self.args.sampling_rate)
                else:
                    sf.write(output_path, self.result[key], self.args.sampling_rate)


class CLS_FRCRN_SE_16K(SpeechModel):
    """
    A subclass of SpeechModel that implements a speech enhancement model using the
    FRCRN architecture for 16 kHz speech enhancement.

    Args:
        args (Namespace): The argument parser containing model configurations and paths.
    """

    def __init__(self, args):
        super(CLS_FRCRN_SE_16K, self).__init__(args)
        from models.frcrn_se.frcrn import FRCRN_SE_16K
        self.model = FRCRN_SE_16K(args).model
        self.name = 'FRCRN_SE_16K'
        # Load the pre-trained checkpoint and prepare the model for inference.
        self.load_model()
        self.model.to(self.device)
        self.model.eval()


class CLS_MossFormer2_SE_48K(SpeechModel):
    """
    A subclass of SpeechModel that implements the MossFormer2 architecture for
    48 kHz speech enhancement.

    Args:
        args (Namespace): The argument parser containing model configurations and paths.
    """

    def __init__(self, args):
        super(CLS_MossFormer2_SE_48K, self).__init__(args)
        from models.mossformer2_se.mossformer2_se_wrapper import MossFormer2_SE_48K
        self.model = MossFormer2_SE_48K(args).model
        self.name = 'MossFormer2_SE_48K'
        self.load_model()
        self.model.to(self.device)
        self.model.eval()


class CLS_MossFormerGAN_SE_16K(SpeechModel):
    """
    A subclass of SpeechModel that implements the MossFormerGAN architecture for
    16 kHz speech enhancement, utilizing GAN-based speech processing.

    Args:
        args (Namespace): The argument parser containing model configurations and paths.
    """

    def __init__(self, args):
        super(CLS_MossFormerGAN_SE_16K, self).__init__(args)
        from models.mossformer_gan_se.generator import MossFormerGAN_SE_16K
        self.model = MossFormerGAN_SE_16K(args).model
        self.name = 'MossFormerGAN_SE_16K'
        self.load_model()
        self.model.to(self.device)
        self.model.eval()


class CLS_MossFormer2_SS_16K(SpeechModel):
    """
    A subclass of SpeechModel that implements the MossFormer2 architecture for
    16 kHz speech separation.

    Args:
        args (Namespace): The argument parser containing model configurations and paths.
    """

    def __init__(self, args):
        super(CLS_MossFormer2_SS_16K, self).__init__(args)
        from models.mossformer2_ss.mossformer2 import MossFormer2_SS_16K
        self.model = MossFormer2_SS_16K(args).model
        self.name = 'MossFormer2_SS_16K'
        self.load_model()
        self.model.to(self.device)
        self.model.eval()


class CLS_AV_MossFormer2_TSE_16K(SpeechModel):
    """
    A subclass of SpeechModel that implements an audio-visual (AV) model using the
    AV-MossFormer2 architecture for target speaker extraction (TSE) at 16 kHz.
    This model leverages both audio and visual cues to perform speaker extraction.

    Args:
        args (Namespace): The argument parser containing model configurations and paths.
    """

    def __init__(self, args):
        super(CLS_AV_MossFormer2_TSE_16K, self).__init__(args)
        from models.av_mossformer2_tse.av_mossformer2 import AV_MossFormer2_TSE_16K
        self.model = AV_MossFormer2_TSE_16K(args).model
        self.name = 'AV_MossFormer2_TSE_16K'
        self.load_model()
        self.model.to(self.device)
        self.model.eval()
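The key-matching rule in `SpeechModel.load_model` (try the key as-is, then with a leading `module.` stripped, then with `module.` prepended, accepting a candidate only when shapes agree) can be exercised in isolation. The sketch below is illustrative and not part of the original file; `remap_checkpoint_keys` and the tiny `T` stand-in for tensors are hypothetical names, and any object exposing a `.shape` attribute works in place of real `torch.Tensor`s.

```python
from collections import namedtuple


def remap_checkpoint_keys(model_state, pretrained):
    """Hypothetical standalone version of the matching rule in load_model:
    copy a tensor from `pretrained` when a candidate key exists and its
    shape matches; otherwise record the key as skipped."""
    loaded, skipped = {}, []
    for key, value in model_state.items():
        # Candidate spellings: exact, 'module.' stripped, 'module.' added.
        candidates = (key, key.replace('module.', '', 1), 'module.' + key)
        for cand in candidates:
            if cand in pretrained and pretrained[cand].shape == value.shape:
                loaded[key] = pretrained[cand]
                break
        else:
            skipped.append(key)
    return loaded, skipped


# Minimal stand-in for a tensor: only the .shape attribute is consulted.
T = namedtuple('T', 'shape')
model = {'enc.weight': T((4, 4)), 'dec.weight': T((4, 2))}
# Checkpoint saved from a DataParallel model ('module.' prefix) plus a
# deliberately shape-mismatched entry.
ckpt = {'module.enc.weight': T((4, 4)), 'dec.weight': T((2, 2))}
loaded, skipped = remap_checkpoint_keys(model, ckpt)
# 'enc.weight' matches via the 'module.' prefix; 'dec.weight' is skipped
# because its shape differs.
```

This mirrors why `load_model` tolerates checkpoints saved with or without `torch.nn.DataParallel`, which prefixes every parameter name with `module.`.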