Data augmentation with artistic style
18th October 2018
Proposed by: Sebastian Lutz, Tejo Chalasani, Koustav Ghosal
[Figure: Input image and 3 different styles. For humans it is still easy to detect that this is a lion.]
[Figure: 4 different levels of abstraction for the same style input.]
The success of training Deep Learning algorithms depends heavily on large amounts of annotated data. For many applications, gathering this data can be very time-consuming or difficult. For this reason, datasets are usually enhanced by data augmentation, i.e. applying random transformations to the data to enlarge the set. Often, only relatively simple transformations are applied, e.g. randomly cropping or mirroring an image.
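For reference, such a traditional augmentation pipeline is typically only a few lines of code. The following is a minimal sketch using torchvision; the crop size and flip probability are illustrative choices, not values prescribed by this proposal:

```python
# Minimal sketch of a traditional augmentation pipeline (torchvision assumed).
# The crop size and flip probability below are illustrative only.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomCrop(224),              # random cropping
    transforms.RandomHorizontalFlip(p=0.5),  # random mirroring
    transforms.ToTensor(),                   # PIL image -> tensor
])

# Usage: augmented = augment(pil_image)
```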
In Style Transfer [1], the goal is to apply the style of one image to another image without changing its content (see the first figure above). It is also possible to choose the level of abstraction when applying the style, i.e. how much weight the content and the style each carry in the resulting image (see the second figure above). Since the content of an image should stay the same after a new style is applied, it seems natural to use Style Transfer as a data augmentation strategy for image-based Deep Learning algorithms. The goal of this project is to explore how useful Style Transfer is compared to, and combined with, the more traditional approaches, as well as to analyse which styles and levels of abstraction work best.
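In the formulation of Gatys et al. [1], the level of abstraction is controlled by the weighting between a content loss and a style loss, L_total = α·L_content + β·L_style, where the style representation is built from Gram matrices of convolutional feature maps. The sketch below illustrates this weighting; the default weights and the normalisation inside gram_matrix are common implementation choices for illustration, not values taken from the paper:

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a (C, H, W) feature map, the style representation
    used by Gatys et al. [1]. The 1/(C*H*W) normalisation is a common
    implementation choice, not taken verbatim from the paper."""
    c, h, w = features.shape
    f = features.view(c, h * w)          # flatten each channel
    return (f @ f.t()) / (c * h * w)     # channel-wise correlations

def total_loss(content_loss: torch.Tensor,
               style_loss: torch.Tensor,
               alpha: float = 1.0,
               beta: float = 1e3) -> torch.Tensor:
    """Weighted objective L = alpha * L_content + beta * L_style.
    The ratio alpha/beta sets the level of abstraction: a larger beta
    pulls the output towards the style, a larger alpha preserves the
    content. The default weights here are illustrative only."""
    return alpha * content_loss + beta * style_loss
```

Sweeping the ratio α/β is what produces the different levels of abstraction shown in the second figure, and is the knob this project would tune when generating augmented training images.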
References:
[1] Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. “Image style transfer using convolutional neural networks.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.