How to combine many successive images to simulate realistic motion blur?


I want to simulate realistic motion blur. I do not want the blur to affect the whole image, only the moving objects. I know that I can use filters, but a filter spreads the blur across the whole image. I thought about using optical flow, but I am not sure it will work because the result depends heavily on the extracted features.

My main idea is to combine successive frames in order to generate motion blur, for example by averaging them as sketched below.

Thanks


There are 2 answers

AudioBubble (BEST ANSWER)

Not so easy.

You can indeed try optical flow. Estimate the flow between every pair of successive frames, then blur each frame in the direction of motion (for instance, with an anisotropic Gaussian) using a filter extent that matches the displacement. Finally, blend the blurred frames with the background by forming a weighted average in which every frame gets more weight where it moves more.

user1270710

You need to have a [0,1] alpha mask for the object. Then you can use a directional filter to blur the object and its mask, for example, as done here: https://www.packtpub.com/mapt/book/application_development/9781785283932/2/ch02lvl1sec21/motion-blur

Then use the blurred mask to alpha-blend the blurred object back into the original (unblurred) scene or another background:

    import cv2
    import numpy as np

    # Blend the alpha-masked region of the foreground over the image background
    # fg is the foreground, alpha_mask is a [0,255] single-channel mask,
    # image is the background scene
    foreground = fg.astype(float)
    background = image.astype(float)
    # Normalize alpha_mask to [0,1] and expand it to 3 channels so it
    # matches the color images (cv2.multiply needs equal shapes)
    alpha = alpha_mask.astype(float) / 255
    alpha = cv2.merge([alpha, alpha, alpha])
    # Multiply the foreground with the alpha mask
    foreground = cv2.multiply(alpha, foreground)
    # Multiply the background with (1 - alpha)
    background = cv2.multiply(1.0 - alpha, background)
    # Add the masked foreground and background, convert back to a byte image
    composite_image = cv2.add(foreground, background).astype(np.uint8)