My swap teammate’s goal is to use biologically inspired algorithms and techniques in the creation of art. I think the main challenge and obstacle behind the fantasy of “art x technology” is that any algorithm or optimization technique solves a problem according to an “objective function” (or “fitness function”), which is a synonym for “goal” that can be computed only if the objective is represented mathematically. Art, however, is often not a simple optimization problem.
A great work of art often conveys an emotional expression (e.g., passion, numbness), and sometimes messages that are hard to describe in words at all. Simple scientific quantities (e.g., brightness, loudness) are easy to define mathematically and optimize, but I believe most people would not consider adjusting the brightness of a picture to be art. Art should be more than that; remember art as experience. Suppose an algorithm could find everything in a picture that represents “happy” and turn it gray-scale. That might sound like a kind of art, because the algorithm figures out what “happy” is based on its previous training data (or, say, experience), and then manipulates what *it* thinks is happy. So, in the computer’s case, I would say: art as training. (P.S. What about the case of FLORAFORM: is it art, or just physical computing?)
I conclude with two questions. The first: “How can we transform those feelings/experiences into a mathematical expression to serve as the objective function?” The second: “What information do we need in order to transform/evoke an expression or feeling?” An approach to the first question would be to use words as a bridge and take advantage of Google photo search (taking visual art as the example) to establish a global, mathematical definition of a word, by training a state-of-the-art classifier on the search results.
An approach to the second question would be to capture every perceptual clue from biophysical signals with sensors, in order to discover the features (the representation of the input data for machine learning) that relate to the expression. I did a small experiment: I trained a state-of-the-art classifier (https://www.metamind.io/vision/train) to learn “happy” and “sad” based on image-search results, then tested it on images of “sunny” and “rainy”. The result matched expectations: sunny images tended to be classified as happy and rainy images as sad, with accuracy above 90% (roughly speaking).
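The experiment above can be sketched in miniature. This is not the actual metamind.io service; it is a toy nearest-centroid classifier over invented two-dimensional image features (brightness, colorfulness), so the features, labels, and numbers are all hypothetical stand-ins for real image-search data:

```python
import numpy as np

# Hypothetical training "images": each is a (brightness, colorfulness) pair.
# The numbers are invented for illustration, not real image features.
happy_train = np.array([[0.9, 0.8], [0.8, 0.9], [0.85, 0.7]])  # bright, colorful
sad_train   = np.array([[0.2, 0.3], [0.3, 0.2], [0.25, 0.35]])  # dark, muted

def train_centroids(happy, sad):
    """'Train' by storing the mean feature vector of each label."""
    return {"happy": happy.mean(axis=0), "sad": sad.mean(axis=0)}

def classify(model, x):
    """Return the label whose centroid is closest to feature vector x."""
    return min(model, key=lambda label: np.linalg.norm(x - model[label]))

model = train_centroids(happy_train, sad_train)

sunny = np.array([0.95, 0.75])   # a bright, colorful test image
rainy = np.array([0.15, 0.25])   # a dark, muted test image

print(classify(model, sunny))  # sunny lands near the "happy" centroid
print(classify(model, rainy))  # rainy lands near the "sad" centroid
```

The point is only structural: if “sunny” features sit near the “happy” centroid in the classifier’s experience, sunny gets labeled happy, exactly as in the experiment.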
This result implies that if a computer algorithm is told to draw something sad, as defined by the training data, then because the element of “rainy” is very similar to “sad” in its experience, the final result of the optimization could well contain elements of rain rather than sunshine.
The main difference between "directly fitting the result to happy-labeled data" and "predicting on the product with a happy-sad classifier and computing the loss" is that the classifier can be customized. Everyone (every classifier) has a different "taste": one may judge a piece to be 90% happy while another says 70%. In that sense, we can fit each person's taste when creating the work of art.
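The “classifier as customizable objective” idea can be sketched as follows. The taste weights, the linear scoring function, and the hill-climbing loop are all invented for illustration; a real system would use a trained classifier and an actual image-generation process:

```python
import numpy as np

rng = np.random.default_rng(0)

def happiness_score(artwork, taste_weights):
    """A personal happy-sad 'classifier': higher means happier to this viewer.
    Here the artwork is just a (brightness, colorfulness) vector."""
    return float(taste_weights @ artwork)

def optimize_artwork(taste_weights, steps=200, step_size=0.05):
    """Random hill climbing, using the personal classifier as the objective."""
    artwork = np.array([0.5, 0.5])           # start from a neutral image
    best = happiness_score(artwork, taste_weights)
    for _ in range(steps):
        candidate = np.clip(artwork + rng.normal(0, step_size, 2), 0.0, 1.0)
        score = happiness_score(candidate, taste_weights)
        if score > best:                     # keep only changes this viewer prefers
            artwork, best = candidate, score
    return artwork

# Two viewers with different (hypothetical) tastes:
taste_a = np.array([0.9, -0.5])   # prefers bright but muted images
taste_b = np.array([-0.5, 0.9])   # prefers dark but colorful images
art_a = optimize_artwork(taste_a)
art_b = optimize_artwork(taste_b)
```

Because the objective is the viewer’s own classifier, the same optimization loop produces a different artwork for each taste, which is exactly the customization the paragraph above describes.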
In sum, modeling abstract concepts with machine learning to serve as the objective function seems a reasonable approach, given the large amount of labeled data available through Google. I would be interested to see a learning-based objective function in any kind of computational artwork.