Fund: Supported by the iMinds visualization research program (HIVIZ)
Abstract: In this paper, we present an interactive static image composition approach, namely color retargeting, to flexibly represent time-varying color editing effects based on time-lapse video sequences. Instead of relying on precise image matting or blending techniques, our approach treats color composition as a pixel-level resampling problem. To both satisfy the user's editing requirements and avoid visual artifacts, we construct a globally optimized interpolation field. This field defines from which input video frames the output pixels should be resampled. Our proposed resampling solution ensures that (i) the global color transition in the output image is as smooth as possible, (ii) the desired colors/objects specified by the user from different video frames are well preserved, and (iii) additional local color transition directions in the image space assigned by the user are also satisfied. Various examples demonstrate that our efficient solution enables the user to easily create time-varying color image composition results.
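As a rough illustration of the pixel-level resampling idea described above, the sketch below resamples an output image from a time-lapse stack using a per-pixel interpolation field of frame indices. This is not the paper's method: the globally optimized field is replaced here by simple repeated neighbor averaging, and the function name and parameters are hypothetical.

```python
import numpy as np

def color_retarget(frames, field, smooth_iters=50):
    """Resample one output image from a time-lapse stack.

    frames: (T, H, W, 3) float array of video frames.
    field:  (H, W) float array of desired frame indices
            (e.g. derived from user strokes).
    Repeated 4-neighbor averaging stands in for the paper's
    global optimization of the interpolation field; the output
    pixel is then linearly interpolated between the two nearest
    frames at each location.
    """
    T, H, W, _ = frames.shape
    f = field.astype(np.float64).copy()
    for _ in range(smooth_iters):
        # crude smoothing stand-in for the global solve
        padded = np.pad(f, 1, mode="edge")
        f = (padded[:-2, 1:-1] + padded[2:, 1:-1]
             + padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    f = np.clip(f, 0, T - 1)
    lo = np.floor(f).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (f - lo)[..., None]            # fractional weight per pixel
    rows, cols = np.mgrid[0:H, 0:W]
    return (1 - w) * frames[lo, rows, cols] + w * frames[hi, rows, cols]
```

A smooth field yields smooth global color transitions, while the field's values at user-marked pixels pin the desired colors to their source frames.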
Fund: Supported by the National Basic Research Program of China (Grant No. 2009CB320802), the National Natural Science Foundation of China (Grant No. 60833007), the National High-Tech Research & Development Program of China (Grant No. 2008AA01Z301), and the Research Grant of the University of Macao
Abstract: This paper presents an interactive graphics processing unit (GPU)-based relighting system in which local lighting conditions, surface materials, and viewing direction can all be changed on the fly. To support these changes, we simulate the lighting transportation process at run time, which is normally impractical for interactive use due to its huge computational burden. We greatly alleviate this burden with a hierarchical structure named a transportation tree, which clusters similar emitting samples together within a perceptually acceptable error bound. Furthermore, by exploiting coherence in time as well as in space, we incrementally adjust the clusters rather than computing them from scratch in each frame. With a pre-computed visibility map, we are able to efficiently estimate the indirect illumination in parallel on graphics hardware, by simply summing up the radiance shot from cluster representatives, plus a small number of merging and splitting operations on clusters. With relighting based on the time-varying clusters, interactive update of global illumination effects with multi-bounced indirect lighting is demonstrated in applications to material animation and scene decoration.
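To illustrate the clustering idea in miniature, the sketch below greedily groups emitting samples that lie within an error bound of a cluster representative and then estimates indirect illumination by summing the representatives' contributions. This is a deliberately crude stand-in, assuming a flat greedy clustering instead of the paper's hierarchical transportation tree, an inverse-square falloff, and full visibility; all names are hypothetical.

```python
import numpy as np

def cluster_emitters(positions, radiances, eps):
    """Greedily cluster emitting samples whose positions lie
    within eps of a cluster representative (a flat stand-in for
    the hierarchical transportation tree). Each cluster keeps
    its first member's position and the summed radiance of all
    members."""
    reps, sums = [], []
    for p, L in zip(positions, radiances):
        for i, r in enumerate(reps):
            if np.linalg.norm(p - r) <= eps:
                sums[i] += float(L)   # fold sample into existing cluster
                break
        else:
            reps.append(np.asarray(p, dtype=float))
            sums.append(float(L))
    return np.array(reps), np.array(sums)

def indirect_at(x, rep_positions, rep_radiances):
    """Estimate indirect illumination at point x by summing the
    radiance shot from each cluster representative, with an
    inverse-square falloff and visibility assumed fully
    unoccluded (i.e. a trivial precomputed visibility map)."""
    d2 = np.sum((rep_positions - np.asarray(x, dtype=float)) ** 2, axis=1)
    return float(np.sum(rep_radiances / (1.0 + d2)))
```

The payoff mirrors the paper's: the per-receiver sum runs over a few cluster representatives rather than all emitting samples, which is what makes a parallel per-pixel evaluation affordable.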
Abstract: This paper proposes a new neural algorithm to segment an observed scene into regions corresponding to different moving objects by analyzing a time-varying image sequence. The method consists of a classification step, where the motion of small patches is characterized through an optimization approach, and a segmentation step merging neighboring patches characterized by the same motion. Classification of motion is performed without optical flow computation: only the spatial and temporal image gradients enter an appropriate energy function, which is minimized with a Hopfield-like neural network that directly outputs the 3D motion parameter estimates. Network convergence is accelerated by integrating the quantitative estimation of motion parameters with a qualitative estimate of the dominant motion, using the geometric theory of differential equations.
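The core idea of estimating motion from image gradients alone, without an intermediate optical flow field, can be sketched as follows. The sketch assumes a simplified 2D translational motion model in place of the paper's 3D motion parameters, and plain gradient descent on a quadratic energy in place of the Hopfield-like network dynamics; the function name is hypothetical.

```python
import numpy as np

def patch_motion(Ix, Iy, It, steps=2000, lr=0.05):
    """Estimate a patch's 2D translational motion (u, v) directly
    from spatial (Ix, Iy) and temporal (It) image gradients,
    without computing optical flow, by gradient descent on the
    brightness-constancy energy

        E(u, v) = mean( (Ix*u + Iy*v + It)^2 ).

    Gradient descent stands in for the paper's Hopfield-like
    network, whose dynamics likewise decrease an energy function.
    """
    u = v = 0.0
    for _ in range(steps):
        r = Ix * u + Iy * v + It          # constraint residual per pixel
        u -= lr * 2.0 * np.mean(Ix * r)   # dE/du
        v -= lr * 2.0 * np.mean(Iy * r)   # dE/dv
    return u, v
```

In the segmentation step, neighboring patches whose recovered parameters agree would then be merged into one moving region.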