Deep Normal Estimation for Automatic Shading of Hand-Drawn Characters

17th September 2018

In this paper (pre-print) we present a new, fully automatic pipeline for generating shading effects on hand-drawn characters. Our method takes as input a single digitized sketch of any resolution and outputs a dense normal map estimate suitable for rendering, without requiring any human input. At the heart of our method lies a deep residual encoder-decoder convolutional network. The input sketch is first sampled using several equally sized 3-channel windows, with each window capturing a local area of interest at 3 different scales. Each window is then passed through the previously trained network for normal estimation. Finally, the network outputs are arranged together to form a full-size normal map of the input sketch. We also present an efficient and effective way to generate a rich set of training data. The resulting renders provide high-quality shading without any additional effort from the 2D artist. We show both quantitative and qualitative results demonstrating the effectiveness and quality of our network and method.
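To illustrate the sampling stage described above, here is a minimal NumPy sketch of extracting equally sized 3-channel windows, where each channel captures the same local area at a different scale. The window size, the scale factors, the padding strategy, and the function name are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def extract_multiscale_windows(sketch, centers, win=64, scales=(1, 2, 4)):
    """Sample a 3-channel window around each center of a grayscale sketch.

    Each channel covers the same location at a different scale, resized
    back to `win` x `win` (here by simple strided subsampling).
    NOTE: `win`, `scales`, and edge padding are assumed values for
    illustration only.
    """
    half = win // 2
    pad = half * max(scales)
    padded = np.pad(sketch, pad, mode="edge")
    windows = []
    for cy, cx in centers:
        cy, cx = cy + pad, cx + pad
        chans = []
        for s in scales:
            r = half * s
            crop = padded[cy - r:cy + r, cx - r:cx + r]
            # Larger-scale crops are subsampled back to win x win
            chans.append(crop[::s, ::s])
        windows.append(np.stack(chans, axis=-1))  # shape (win, win, 3)
    return np.stack(windows)
```

Each stacked window can then be fed to the trained network, and the per-window predictions tiled back into a full-size normal map.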

Implementation

Code implemented using TensorFlow; available on GitHub.
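Once the network has produced a dense normal map, it can be used directly for rendering. The sketch below shows one simple way to do this, using Lambertian shading; the light direction, ambient term, and function name are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def shade_from_normals(normal_map, light_dir=(0.3, 0.3, 0.9), ambient=0.2):
    """Lambertian shading from a dense (H, W, 3) normal map.

    NOTE: the light direction and ambient term are illustrative
    assumptions, not values used by the paper.
    """
    l = np.asarray(light_dir, dtype=np.float64)
    l /= np.linalg.norm(l)
    # Renormalize predicted normals to unit length before shading
    norms = np.linalg.norm(normal_map, axis=-1, keepdims=True)
    n = normal_map / np.clip(norms, 1e-8, None)
    # Diffuse term: clamped dot product of normal and light direction
    diffuse = np.clip(np.einsum("hwc,c->hw", n, l), 0.0, 1.0)
    return np.clip(ambient + (1.0 - ambient) * diffuse, 0.0, 1.0)
```

The returned (H, W) array can be multiplied with the flat-colored character to produce the shaded result.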

Additional results


Related publications

https://www.scss.tcd.ie/~hudonm/publication/deep-normal-estimation-for-automatic-shading-of-hand-drawn-characters/