DeepStereoBrush: Interactive Depth Map Creation

In this paper, we introduce a novel interactive depth map creation approach for image sequences which takes depth scribbles as input at user-defined keyframes. These scribbled depth values are then propagated within the keyframes and across the entire sequence using a 3-dimensional geodesic distance transform (3D-GDT). To further improve the depth estimation of the intermediate frames, we make use of a convolutional neural network (CNN) in an unconventional manner. Our process is based on online learning, which allows us to train a disposable network for each sequence individually, using the user-generated depth at keyframes along with the corresponding RGB images as training pairs. Thus, we actually take advantage of one of the most common issues in deep learning: over-fitting. Furthermore, we integrated this approach into a professional interactive depth map creation application and compared our results against the state of the art in interactive depth map creation.
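The geodesic propagation step can be illustrated with a minimal single-frame sketch. This is not the paper's 3D-GDT implementation (which operates across the whole sequence): it is a toy 2D version that assigns each pixel the depth of the scribble closest under a geodesic distance mixing spatial steps with intensity differences, computed with Dijkstra's algorithm. The edge-weight parameter `lam` is a hypothetical choice for illustration.

```python
import heapq
import numpy as np

def geodesic_propagate(image, scribbles, lam=10.0):
    """Propagate sparse scribbled depth values across a single frame.

    Each pixel receives the depth of the scribble that is nearest under
    a geodesic distance combining spatial steps and intensity changes,
    computed with Dijkstra's algorithm on the 4-connected pixel grid.
    `scribbles` maps (row, col) -> depth value.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    depth = np.zeros((h, w))
    heap = []
    for (r, c), d in scribbles.items():
        dist[r, c] = 0.0
        depth[r, c] = d
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        g, r, c = heapq.heappop(heap)
        if g > dist[r, c]:
            continue  # stale heap entry, already relaxed
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # crossing an intensity edge is expensive, so depth
                # tends to stop propagating at object boundaries
                step = 1.0 + lam * abs(float(image[nr, nc]) - float(image[r, c]))
                if g + step < dist[nr, nc]:
                    dist[nr, nc] = g + step
                    depth[nr, nc] = depth[r, c]
                    heapq.heappush(heap, (g + step, nr, nc))
    return depth

# Two flat regions separated by an intensity edge: each side should
# inherit the depth of the scribble placed on that side.
img = np.zeros((4, 8))
img[:, 4:] = 1.0
out = geodesic_propagate(img, {(0, 0): 0.2, (0, 7): 0.8})
```

Extending the grid to three dimensions (x, y, time) turns the same idea into propagation across frames, which is the role the 3D-GDT plays in the paper.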

Paper

DeepStereoBrush: Interactive Depth Map Creation

Collaborators

Sebastian Knorr, Matis Hudon, Julian Cabrera, Thomas Sikora, Aljosa Smolic.

Reference

Sebastian Knorr, Matis Hudon, Julian Cabrera, Thomas Sikora, Aljosa Smolic.
DeepStereoBrush: Interactive Depth Map Creation
International Conference on 3D Immersion, Brussels, Belgium, 2018.

Deep Normal Estimation for Automatic Shading of Hand-Drawn Characters

In this paper (pre-print) we present a new fully automatic pipeline for generating shading effects on hand-drawn characters. Our method takes as input a single digitized sketch at any resolution and outputs a dense normal map estimation suitable for rendering, without requiring any human input. At the heart of our method lies a deep residual encoder-decoder convolutional network. The input sketch is first sampled using several equally sized 3-channel windows, each capturing a local area of interest at 3 different scales. Each window is then passed through the pre-trained network for normal estimation. Finally, the network outputs are stitched together to form a full-size normal map of the input sketch. We also present an efficient and effective way to generate a rich set of training data. The resulting renders are of high quality and require no effort from the 2D artist. We show both quantitative and qualitative results demonstrating the effectiveness and quality of our network and method.
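The multi-scale window sampling can be sketched as follows. This is an illustrative NumPy version, not the released TensorFlow code: window size, stride, the 1x/2x/4x scale set, and the nearest-neighbour resizing are assumptions for the sketch.

```python
import numpy as np

def multiscale_windows(sketch, size=32, stride=32):
    """Cut a grayscale sketch into equally sized 3-channel windows.

    Channel 0 is the local crop itself; channels 1 and 2 hold the same
    location seen with 2x and 4x wider context, resampled back down to
    `size` by nearest-neighbour indexing, so each window presents the
    network with the same area of interest at 3 different scales.
    """
    h, w = sketch.shape
    pad = size * 2  # enough border for the widest (4x) context
    padded = np.pad(sketch, pad, mode='edge')
    windows = []
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            chans = []
            for scale in (1, 2, 4):
                half = size * scale // 2
                # centre of this window in padded coordinates
                cr, cc = pad + r + size // 2, pad + c + size // 2
                crop = padded[cr - half:cr + half, cc - half:cc + half]
                idx = np.arange(size) * scale  # nearest-neighbour resize
                chans.append(crop[np.ix_(idx, idx)])
            windows.append(np.stack(chans, axis=-1))
    return np.array(windows)

sketch = np.random.rand(64, 64)
batch = multiscale_windows(sketch)  # ready to feed a (size, size, 3) network
```

Each window in `batch` is what the encoder-decoder network would consume; the per-window normal predictions are then arranged back into a full-size normal map.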

Implementation

Code implemented using TensorFlow, available on GitHub.

Additional results


Related publications

https://www.scss.tcd.ie/~hudonm/publication/deep-normal-estimation-for-automatic-shading-of-hand-drawn-characters/

2D Shading for Cel Animation

In this paper we present a semi-automatic method for creating shades and self-shadows in cel animation. Besides producing attractive images, shades and shadows provide important visual cues about the depth, shapes, movement and lighting of the scene. In conventional cel animation, shades and shadows are drawn by hand. As opposed to previous approaches, our method does not rely on a complex 3D reconstruction of the scene: its key advantages are simplicity and ease of use. The tool was designed to stay as close as possible to the natural 2D creative environment and therefore provides an intuitive and user-friendly interface. Our system creates shading based on hand-drawn objects or characters, given very limited guidance from the user. The method employs simple yet efficient algorithms to create shading directly from drawn strokes. We evaluate our system through a subjective user study and provide a qualitative comparison of our method against existing professional tools and the state of the art.
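The paper's stroke-based algorithms are not reproduced here, but the core idea of deriving plausible shading from flat 2D shapes without 3D reconstruction can be illustrated with a toy sketch: inflate a binary character mask into a height field via a chamfer-style distance transform, derive normals from it, and apply Lambertian shading. The light direction and the distance-based inflation are assumptions of this toy, not the paper's method.

```python
import numpy as np

def inflate_and_shade(mask, light=(-1.0, -1.0, 1.5)):
    """Toy 2D shader: inflate a binary mask into a height field and
    apply Lambertian shading, so the flat shape appears lit without
    any actual 3D reconstruction."""
    h, w = mask.shape
    # Two-pass Manhattan distance to the background (chamfer-style).
    dist = np.where(mask, np.inf, 0.0)
    for r in range(h):
        for c in range(w):
            if dist[r, c] > 0:
                if r > 0:
                    dist[r, c] = min(dist[r, c], dist[r - 1, c] + 1)
                if c > 0:
                    dist[r, c] = min(dist[r, c], dist[r, c - 1] + 1)
    for r in range(h - 1, -1, -1):
        for c in range(w - 1, -1, -1):
            if dist[r, c] > 0:
                if r < h - 1:
                    dist[r, c] = min(dist[r, c], dist[r + 1, c] + 1)
                if c < w - 1:
                    dist[r, c] = min(dist[r, c], dist[r, c + 1] + 1)
    # Height field -> normals via finite differences.
    gy, gx = np.gradient(dist)
    n = np.dstack((-gx, -gy, np.ones_like(dist)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light) / np.linalg.norm(light)
    # Lambert's law, clamped and restricted to the character.
    return np.clip(n @ l, 0.0, 1.0) * mask

mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True  # a square "character"
img = inflate_and_shade(mask)
```

With the light placed to the upper-left, the side of the shape facing the light comes out brighter than the opposite side, which is exactly the kind of depth cue the paper's shades provide.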

Implementation

Code implemented in Pencil2D, available soon.

Additional results


Related publications

https://www.scss.tcd.ie/~hudonm/publication/2d-shading-for-cel-animation/