Adversarial Multi-task Learning for Aesthetic Analysis and Enhancement of Photographic Images

18th October 2018

Proposed by Koustav Ghosal, Sebastian Lutz, Tejo Chalasani
Email: {ghosalk / lutzs / chalasat} at

The aim of this project is to explore neural network architectures that can simultaneously rate and enhance the aesthetic quality of images. Essentially, it will be a multi-task framework performing classification and reconstruction at the same time.

Figure 1: Multi-task Framework

While CNNs perform very well on classification [2] or reconstruction independently, it has recently been shown that by carefully combining multiple tasks it is possible to boost the performance of each individual task through knowledge sharing [1]. As a simple analogy, while training for tennis a child learns to run, jump, throw, swing, recognize objects, predict trajectories and rest, all at the same time. By training the same network for both classification and reconstruction, more generic and robust features can possibly be learnt.
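As a concrete illustration of this knowledge-sharing idea, the forward pass of such a framework can be sketched as a shared encoder feeding two task-specific heads. The sketch below is a minimal NumPy toy, not the proposed architecture: the layer sizes, weight shapes and single-layer encoder are hypothetical stand-ins for a real CNN backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions, chosen only for illustration.
D_IN, D_SHARED, N_CLASSES = 64, 32, 10

# Shared encoder weights: the backbone through which knowledge is shared.
W_enc = rng.standard_normal((D_IN, D_SHARED)) * 0.1
# Task-specific heads: aesthetic classification and image reconstruction.
W_cls = rng.standard_normal((D_SHARED, N_CLASSES)) * 0.1
W_rec = rng.standard_normal((D_SHARED, D_IN)) * 0.1

def forward(x):
    """One shared representation feeds both task heads."""
    h = relu(x @ W_enc)      # shared features learnt by both tasks
    logits = h @ W_cls       # classification head (aesthetic rating)
    recon = h @ W_rec        # reconstruction head (enhanced image)
    return logits, recon

x = rng.standard_normal((4, D_IN))   # a batch of 4 flattened "images"
logits, recon = forward(x)
print(logits.shape, recon.shape)     # (4, 10) (4, 64)
```

Because the encoder's gradients accumulate from both heads during training, its features must serve both objectives, which is where the hoped-for generality comes from.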

Both rating the aesthetic quality of a picture and enhancing it require an understanding of aesthetic properties. While aesthetic rating [4] is a well-researched problem, image enhancement [3] is gaining much attention due to the recent success of generative adversarial networks.
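Training such a framework typically means minimizing a weighted sum of the per-task losses. The sketch below, again a hedged NumPy toy rather than the actual training objective, combines a cross-entropy term for the rating head with an L1 term for the enhancement head; the loss weights `w_cls` and `w_rec` are hypothetical, and in the full adversarial framework a discriminator-based GAN term would be added alongside them.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def multitask_loss(logits, labels, recon, target, w_cls=1.0, w_rec=1.0):
    """Weighted sum of the two task losses (weights are illustrative)."""
    # Cross-entropy for the aesthetic classification head.
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    # L1 reconstruction loss for the enhancement head; an adversarial
    # term from a discriminator would be added here in the full setup.
    l1 = np.abs(recon - target).mean()
    return w_cls * ce + w_rec * l1

rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 10))
labels = np.array([0, 3, 7, 1])
recon = rng.standard_normal((4, 64))
target = rng.standard_normal((4, 64))
loss = multitask_loss(logits, labels, recon, target)
print(float(loss) > 0.0)
```

Tuning the relative weights is itself a design question: if one task's loss dominates, the shared encoder can overfit to that task at the expense of the other.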

In this project the tasks will include, but are not limited to:

  • Reading existing research on multi-task learning, generative adversarial networks and aesthetic assessment.
  • Implementing/extending a multi-task framework for the proposed task.
  • A thorough evaluation and analysis of the performance.


1. Zamir, A.R., Sax, A., Shen, W., Guibas, L., Malik, J. and Savarese, S., 2018. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
2. Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
3. Zhu, J.Y., Park, T., Isola, P. and Efros, A.A., 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint.
4. Lu, X., Lin, Z., Jin, H., Yang, J. and Wang, J.Z., 2014, November. Rapid: Rating pictorial aesthetics using deep learning. In Proceedings of the 22nd ACM international conference on Multimedia (pp. 457-466). ACM.