taesiri committed on
Commit
844db13
1 Parent(s): cea1084

Upload abstract/2312.13528.txt with huggingface_hub

Files changed (1)
  1. abstract/2312.13528.txt +1 -0
abstract/2312.13528.txt ADDED
@@ -0,0 +1 @@
+ Video view synthesis allows for the creation of visually appealing frames from arbitrary viewpoints and times, offering immersive viewing experiences. Neural radiance fields (NeRF), initially developed for static scenes, have inspired various methods for video view synthesis. However, motion blur poses a challenge for video view synthesis, as it hinders the precise synthesis of sharp spatio-temporal views. In this paper, we propose DyBluRF, a novel dynamic deblurring NeRF framework for blurry monocular video. DyBluRF consists of two stages: Interleave Ray Refinement (IRR) and Motion Decomposition-based Deblurring (MDD). The IRR stage reconstructs dynamic 3D scenes and refines the inaccurate camera pose information extracted from blurry frames. The MDD stage introduces a novel incremental latent sharp-rays prediction (ILSP) approach, which decomposes the latent sharp rays into global camera motion and local object motion components. Extensive experiments demonstrate that DyBluRF outperforms state-of-the-art methods both qualitatively and quantitatively. Our project website, including source code and pretrained models, is publicly available at the following URL: "project's website".
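The abstract only names the motion decomposition; the sketch below illustrates one plausible reading of the ILSP idea, in which each blurry ray is expanded into several latent sharp rays by composing a per-frame global camera-motion offset with a per-ray local object-motion residual. The class name `ILSPSketch`, the 6-DoF parameterization, the small-angle rotation approximation, and all layer sizes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ILSPSketch(nn.Module):
    """Toy decomposition of latent sharp rays into global camera motion and
    local object motion, loosely following the ILSP idea in the abstract."""

    def __init__(self, num_latent_rays: int = 4, feat_dim: int = 64):
        super().__init__()
        self.k = num_latent_rays
        # Global camera motion: one 6-DoF offset (translation + axis-angle
        # rotation) per latent sharp ray, predicted from time + a frame code.
        self.global_motion = nn.Sequential(
            nn.Linear(1 + feat_dim, 64), nn.ReLU(),
            nn.Linear(64, num_latent_rays * 6),
        )
        # Local object motion: a per-ray direction residual per latent ray.
        self.local_motion = nn.Sequential(
            nn.Linear(7, 64), nn.ReLU(),
            nn.Linear(64, num_latent_rays * 3),
        )

    def forward(self, rays_o, rays_d, t, frame_feat):
        # rays_o, rays_d: (N, 3) origins/directions from the blurry-frame pose
        # t: (N, 1) normalized time; frame_feat: (feat_dim,) per-frame embedding
        n = rays_o.shape[0]
        g = self.global_motion(torch.cat([t[:1], frame_feat[None]], dim=-1))
        d_origin, d_rot = g.view(self.k, 6).split(3, dim=-1)       # (K, 3) each
        local = self.local_motion(
            torch.cat([rays_o, rays_d, t], dim=-1)
        ).view(n, self.k, 3)
        # Compose the two motion components into K latent sharp rays per input
        # ray: shift origins by the camera translation, perturb directions with
        # a small-angle camera rotation plus the local object-motion residual.
        sharp_o = rays_o[:, None, :] + d_origin[None]               # (N, K, 3)
        base_d = rays_d[:, None, :].expand(n, self.k, 3)
        sharp_d = base_d + torch.cross(d_rot[None].expand(n, self.k, 3),
                                       base_d, dim=-1) + local
        sharp_d = sharp_d / sharp_d.norm(dim=-1, keepdim=True)
        return sharp_o, sharp_d


# Usage: render each latent sharp ray with the NeRF and average the results
# to reproduce the observed blurry pixel during training.
model = ILSPSketch()
o, d = model(torch.randn(1024, 3), torch.randn(1024, 3),
             torch.rand(1024, 1), torch.randn(64))
print(o.shape, d.shape)  # torch.Size([1024, 4, 3]) for both outputs
```

Averaging the renders of the latent sharp rays to match the blurry observation is the standard blur-formation model used by deblurring NeRFs; how DyBluRF predicts the offsets "incrementally" is specified in the paper, not here.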