
Ffmpeg issue or gpu new video replacer? #100

Closed
CallMeEviL opened this issue Aug 30, 2024 · 2 comments

Comments


CallMeEviL commented Aug 30, 2024

I'm trying out the video replacer and everything looks good in the preview, but when it says "making video" I get an error about ffmpeg telling me to check the console. I never had this issue before on the old version.

I've tried lowering the batch size and FPS, and even added --medvram or --lowram in case it was a GPU issue. RTX 2070 Super (8 GB).

```
*** CUDA out of memory. Tried to allocate 4.42 GiB. GPU 0 has a total capacty of 8.00 GiB of which 0 bytes is free. Of the allocated memory 17.64 GiB is allocated by PyTorch, and 4.50 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
*** Traceback (most recent call last):
      File "N:\AI\AWebGUI\Stable-DIffusion\extensions\sd-webui-replacer\replacer\video_animatediff.py", line 165, in animatediffGenerate
        processed = processFragment(fragmentPath, initImage, gArgs)
      File "N:\AI\AWebGUI\Stable-DIffusion\extensions\sd-webui-replacer\replacer\video_animatediff.py", line 26, in processFragment
        processed, _ = inpaint(initImage, gArgs)
      File "N:\AI\AWebGUI\Stable-DIffusion\extensions\sd-webui-replacer\replacer\inpaint.py", line 122, in inpaint
        processed = process_images(p)
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\processing.py", line 847, in process_images
        res = process_images_inner(p)
      File "N:\AI\AWebGUI\Stable-DIffusion\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 48, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\processing.py", line 921, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\processing.py", line 1757, in init
        self.image_conditioning = self.img2img_image_conditioning(image * 2 - 1, self.init_latent, image_mask, self.mask_round)
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\processing.py", line 387, in img2img_image_conditioning
        return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask, round_image_mask=round_image_mask)
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\processing.py", line 365, in inpainting_image_conditioning
        conditioning_image = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(conditioning_image))
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\sd_hijack_unet.py", line 136, in <lambda>
        first_stage_sub = lambda orig_func, self, x, **kwargs: orig_func(self, x.to(devices.dtype_vae), **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
        return self.first_stage_model.encode(x)
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\lowvram.py", line 70, in first_stage_model_encode_wrap
        return first_stage_model_encode(x)
      File "N:\AI\AWebGUI\Stable-DIffusion\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
        h = self.encoder(x)
      File "N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 526, in forward
        h = self.down[i_level].block[i_block](hs[-1], temb)
      File "N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 138, in forward
        h = self.norm2(h)
      File "N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\extensions-builtin\Lora\networks.py", line 614, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\torch\nn\modules\normalization.py", line 279, in forward
        return F.group_norm(
      File "N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\torch\nn\functional.py", line 2558, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.42 GiB. GPU 0 has a total capacty of 8.00 GiB of which 0 bytes is free. Of the allocated memory 17.64 GiB is allocated by PyTorch, and 4.50 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

merging fragments
0it [00:00, ?it/s]
video saving
N:\AI\AWebGUI\Stable-DIffusion\venv\lib\site-packages\imageio_ffmpeg\binaries\ffmpeg-win64-v4.2.2.exe -framerate 8 -i 'C:\Users\Maria\Downloads\Replacer project - 1725028238\outputs\1725030853\result\%5d-4117292568.png' -r 8 -i 'C:\Users\Maria\Downloads\Replacer project - 1725028238\original.mp4' -map 0:v:0 -map 1:a:0? -c:v libx264 -c:a aac -vf fps=8 -profile:v main -pix_fmt yuv420p -shortest -y 'C:\Users\Maria\Downloads\Replacer project - 1725028238\outputs\1725030853\replacer_original.mp4_4117292568.mp4'
ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 9.2.1 (GCC) 20200122
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[image2 @ 00000171f32fd640] Could find no file with path 'C:\Users\Maria\Downloads\Replacer project - 1725028238\outputs\1725030853\result\%5d-4117292568.png' and index in the range 0-4
C:\Users\Maria\Downloads\Replacer project - 1725028238\outputs\1725030853\result\%5d-4117292568.png: No such file or directory
*** Error completing request
*** Arguments: ('task(9213fek70maawo1)', 'C:\\Users\\Maria\\Downloads\\Replacer project - 1725028238', 8, 36, 0, 12, 1, -1, 1, 16, True, 'control_v11p_sd15_inpaint [ebff9138]', 1, True, 'epicphotogasm_z-inpainting.safetensors [d157850094]', 'mm_sd_v15_v2.ckpt', '', '', '', '', 'None', -1, 'DPM++ 2M SDE', 'Automatic', 20, 0.3, 35, 4, 1280, 'sam_hq_vit_l.pth', 'GroundingDINO_SwinT_OGC (694MB)', 5.5, 1, 40, 0, 512, 512, 0, False, False, 'epicrealism_naturalSinRC1VAE.safetensors [84d76a0328]', 'Random', True, 2, False, False, False, '-', -1, 0, False, True, True, ControlNetUnit(is_ui=False, input_mode=, batch_images=None, output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=, low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=, inpaint_crop_input_image=True, hr_option=, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=, union_control_type=, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=False, input_mode=, batch_images=None, output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=, low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=, inpaint_crop_input_image=True, hr_option=, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=, union_control_type=, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=False, input_mode=, batch_images=None, output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=, low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=, inpaint_crop_input_image=True, hr_option=, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=, union_control_type=, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), False, 1, 0.5, 4, 0, 0.5, 2) {}
    Traceback (most recent call last):
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "N:\AI\AWebGUI\Stable-DIffusion\extensions\sd-webui-replacer\replacer\ui\video\generation.py", line 170, in videoGenerateUI
        save_video(resultPath, target_video_fps, originalVideo, saveVideoPath, gArgs.seed)
      File "N:\AI\AWebGUI\Stable-DIffusion\extensions\sd-webui-replacer\replacer\video_tools.py", line 63, in save_video
        runFFMPEG(
      File "N:\AI\AWebGUI\Stable-DIffusion\extensions\sd-webui-replacer\replacer\video_tools.py", line 20, in runFFMPEG
        raise Exception(f'ffmpeg exited with code {rc}. See console for details')
    Exception: ffmpeg exited with code 1. See console for details
```
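As an aside on the OOM message itself: it suggests setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable to reduce fragmentation. A minimal sketch of how that is done from Python, assuming it runs before torch first touches CUDA (the 128 MiB value is only an illustrative starting point, and on Windows it is more commonly set in `webui-user.bat` with `set PYTORCH_CUDA_ALLOC_CONF=...`):

```python
import os

# Must be set before the first CUDA allocation, i.e. before
# torch initializes its CUDA caching allocator.
# "max_split_size_mb:128" is an example value to experiment with.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Note that in this particular log the allocator tweak would not have helped, since (as discussed below the traceback) the root cause turned out to be the wrong checkpoint type, not fragmentation.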
@light-and-ray (Owner) commented Aug 30, 2024

I can see in your log that the error occurs only inside functions with "inpaint" in their name, and it's not inside the ControlNet extension. I guess you accidentally left the Stable Diffusion model set to an inpainting checkpoint, but a non-inpainting model is needed here. I remember this also produced an unconditional OOM error no matter how much free VRAM I had.
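This diagnosis can be cross-checked against the `*** Arguments:` line in the log above, where the selected checkpoint is `epicphotogasm_z-inpainting.safetensors`. A minimal sketch of the kind of filename heuristic one could use to spot this; the helper name and the "inpaint"-in-filename convention are assumptions for illustration, not part of the webui or this extension:

```python
def is_inpainting_checkpoint(filename: str) -> bool:
    # Heuristic only: Stable Diffusion inpainting checkpoints
    # conventionally carry "inpaint" somewhere in the filename.
    return "inpaint" in filename.lower()

# The two checkpoints visible in the Arguments line of the log:
print(is_inpainting_checkpoint("epicphotogasm_z-inpainting.safetensors"))      # True
print(is_inpainting_checkpoint("epicrealism_naturalSinRC1VAE.safetensors"))    # False
```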

@CallMeEviL (Author) replied:

> I see in your log the error is inside sole functions with "inpaint" in their name, but it's not inside controlnet extension. I guess you accidently left the stable diffusion model to be inpainting, but not inpainting model is needed. I remember this also produced unconditional OOM error no matter how much free vram I had

Yes, you're right, I did indeed leave an inpainting model selected in the second tab, my bad. That fixed it, thanks!

Repository owner locked and limited conversation to collaborators Nov 4, 2024
@light-and-ray pinned this issue Nov 4, 2024