Art Transitions || Style Transfers by Neural Networks

Made by Frank Liao and Mimi Niou

To create a program that transposes the style of one image onto another.

Created: November 28th, 2016



To create an implementation of the paper A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.


The paper presents an algorithm for combining the content of one image with the style of another image using convolutional neural networks. Here's an example that maps the artistic style of The Starry Night onto a night-time photograph of the Stanford campus:


The algorithm lets the user trade off the relative weights of the style and content reconstruction terms, as shown in this example where we port the style of Picasso's 1907 self-portrait onto Brad Pitt:
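This trade-off is just a weighted sum of the two reconstruction losses. The sketch below is our own toy illustration of the idea, not the paper's exact settings; the function name and default weights are hypothetical:

```python
def total_loss(content_loss, style_loss, alpha=5.0, beta=100.0):
    """Weighted sum of the two reconstruction terms.

    Raising beta relative to alpha pushes the result toward the
    style image's textures; raising alpha preserves more of the
    content image's structure. These defaults are illustrative,
    not the paper's exact settings.
    """
    return alpha * content_loss + beta * style_loss

# Same raw losses, different emphasis:
print(total_loss(1.0, 1.0))                        # 105.0
print(total_loss(1.0, 1.0, alpha=50.0, beta=1.0))  # 51.0
```

In practice only the ratio between the two weights matters, since scaling both by the same constant does not change which image minimizes the loss.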


The algorithm should also work with non-traditional artworks, such as images of pizza.



Example Style Impositions / Approaches

A Neural Algorithm of Artistic Style

The main source for our approach, this paper describes a way to create unique visual experiences by composing a complex interplay between the content and style of an image. The system uses neural representations to separate and recombine the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images.
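The paper's key observation is that "style" can be summarized by the correlations between a layer's feature channels (Gram matrices), which discard spatial layout entirely. A rough sketch of that idea, using random arrays in place of real CNN features (this toy example is ours, not from the paper):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map.

    Entry (i, j) is the correlation between channels i and j summed
    over all spatial positions, so the spatial arrangement is lost
    and only texture statistics remain.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

# Shuffling spatial positions (same shuffle in every channel)
# leaves the Gram matrix unchanged -- "style" ignores layout.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
perm = rng.permutation(64)
shuffled = feat.reshape(4, 64)[:, perm].reshape(4, 8, 8)
print(np.allclose(gram_matrix(feat), gram_matrix(shuffled)))  # True
```

This is why matching Gram matrices reproduces an artwork's textures and color statistics without copying its composition, while a separate content term preserves the photograph's layout.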


AI Painter

The example implementation closest to our own, AI Painter is a commercial application of the idea that accepts any input images for style decomposition. Their algorithm is not accessible to the public. Originally, the application allowed customers to test the algorithm before purchasing the resulting images, but it is currently offline.

A reanimation of the tea party & riddle scene from Alice in Wonderland (1951), restyled by 17 paintings.

This video is an excellent demonstration of the algorithm we would be implementing in action. A restyled video is much more engaging and dynamic than static restyled images, so we hope to incorporate video into our project somehow, although it would pose additional technical challenges.

Why is a Raven Like a Writing Desk?
Gene Kogan -

An app that uses the algorithm to convert photos into stylized images based on famous paintings.

This popular app lets you upload any photo or video and recreates it in the style of any of over 30 famous artworks. It is a perfect example of how our project could be made interactive: participants could use the technology to create their own works from a set of given style options. Additionally, the app demonstrates how neural algorithms have become a new tool for creating and remixing media.

Similar Examples:


'new techniques in machine learning and image processing allow us to extrapolate the scene of a painting'

We found the works displayed on this website really interesting because of the accuracy of the extrapolations. While they seem almost magical, the technology behind them is easily accessible and widely used in programs like Photoshop and Wolfram Alpha. This use of machine learning is unique because it goes beyond simply restyling photos and images, and we found the concept of letting the algorithm widen the scene of a painting especially compelling.

Other Artworks:

'Digital inpainting can extend or repair photos, artwork, and other images. See how it works in the Wolfram Language'

More on Techniques for Style Transfers & Neural Networks

Inceptionism: Going Deeper into Neural Networks

Add or Delete a Painter’s Style Using Neural Algorithms | The Creators Project

Experiments with style transfer


The technology we are utilizing clearly has a lot of potential and is growing in popularity, as shown by recent Snapchat updates and the various mobile apps that transform photos into the styles of famous paintings. Though still developing, it provides a novel way of remixing art and acts as a powerful tool for media artists.


Curatorial Statement

Our final project is a gallery of images, created to explore the disjunction between traditional stylized artworks and iconic photographs, and to bridge the disparity between pictures and paintings of similar timelines or themes. 



We created a collection of stylized images using landmark photographs and key art pieces from the 20th century. To do so, we utilized an algorithm based on the paper "A Neural Algorithm of Artistic Style" by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. The algorithm analyzes the stylistic features of any given image using neural networks and then maps those characteristics onto another content image, producing striking, unexpected results. Our implementation was built atop an existing open-source image-processing library from previous researchers.

For our display, we printed our images onto 80 lb paper and trimmed them down to be displayed on a wall for our final exhibit. We also decided to include the original images that went into each final piece, and displayed these alongside each stylized image to clearly demonstrate the effects of the algorithm and the image processing.



For our final project, we decided to draw on the theme of remixing, combining technology and art to create something surprising and interesting to look at. Taking paintings from a whole range of modern artists, along with some of the most famous photographs of the 20th century, we aimed to combine these seemingly unrelated images into a group of stylized works that would blur the boundary between painted and photographic art, and give context to the timeline of modern art and its many genres and movements. Furthermore, we hoped to demonstrate how advancements in technology have created, and continue to create, interesting new tools for media artists to experiment with.



We had a few difficulties setting up the appropriate dependencies for the program, as we had to install Torch7, loadcaffe, and Google Protocol Buffers in order to process our images. After resolving those errors and getting the dependencies in place, the process was relatively simple: we ran a GPU-accelerated neural network for analyzing images, with some custom settings on top.
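For reference, the dependency setup looked roughly like the following. This is a sketch rather than the exact commands we ran, and package names may vary by system:

```shell
# Google Protocol Buffers (needed by loadcaffe to parse Caffe models)
sudo apt-get install libprotobuf-dev protobuf-compiler

# Torch7, via the official install script
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch && ./install.sh

# loadcaffe, for importing the pretrained VGG network
luarocks install loadcaffe
```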

We did some research and collected sets of iconic photos as well as key paintings from various genres of 20th-century art. We then paired the photos with paintings from similar time periods that we believed would complement them or produce something interesting. We tested the combinations at smaller image sizes, to ensure that we would not waste time processing large-scale images we ultimately wouldn't use. Some combinations simply didn't work together, a few of which are shown below. Finally, we curated our final set by picking the most interesting outcomes and had them printed for display.
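The preview-then-finalize workflow above can be sketched with the neural-style command-line flags; the image file names here are hypothetical, and exact flag values depend on the pairing:

```shell
# Quick low-resolution preview to test whether a pairing works
th neural_style.lua -style_image starry_night.jpg \
    -content_image stanford.jpg -image_size 256 \
    -output_image preview.png -gpu 0

# Full-size render, run only for pairings worth keeping
th neural_style.lua -style_image starry_night.jpg \
    -content_image stanford.jpg -image_size 1024 \
    -output_image final.png -gpu 0
```

Since runtime grows with image size, the small preview pass saved hours on combinations we would otherwise have discarded.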



There were only two members in our group, so it was relatively easy to share ideas and divide the workload evenly. For the most part, we both contributed to each step of the process and worked together well to complete our project. In terms of roles, Frank dealt mostly with the set-up of the algorithm, while I later searched for and processed the images. We both chose a collection of images to process, then collectively agreed on which ones we wanted to display. Finally, we printed the images and set up our installation together to be exhibited.



For the most part, we were quite satisfied with the final outcome of our work. Many of the images came out surprisingly well, and those who stopped by to observe our project were fascinated by the different images we produced. There were a few combinations of images we wanted to test but did not have time to run, as each image takes multiple hours to process. Though we learned that processing time is a real limitation of the program we used, we believe the technology has a lot of potential and can be used to create wonderful remixes and artistic experiments.

Some next steps to consider:

Interactivity: To make our installation more engaging, it would be interesting to let participants choose the images they want stylized and then see the results appear in real time. This would require significantly more technical work, though we believe it would make for a really enjoyable experience.

Animation: Taking the technology a step further, we would like to experiment with the stylization of videos. Moving images, perhaps starting with simple GIFs, would be a logical next step in exploring the algorithm's capabilities.



@misc{neuralstyle,
  author = {Johnson, Justin},
  title = {neural-style},
  year = {2015},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{}},
}
62-150 Intro to Media Synthesis and Analysis
