
Outcome

I ran a small experiment: I trained a state-of-the-art classifier (https://www.metamind.io/vision/train) to distinguish "happy" from "sad" using image-search results as training data, then tested it on images of sunny and rainy scenes. The results matched expectations: sunny images tended to be classified as happy and rainy images as sad, with accuracy above 90% (roughly speaking).
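MetaMind handled the actual training through its web UI, so the details are hidden, but the underlying idea is just a binary classifier fit to labeled examples. As a rough sketch (with synthetic feature vectors standing in for real images, and the assumption that "happy/sunny" images are simply brighter on average), it might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for image features: "happy/sunny" images are brighter
# (higher mean feature values) than "sad/rainy" ones. Real training data
# would be features extracted from the image-search results.
X_happy = rng.normal(loc=0.7, scale=0.1, size=(100, 3))
X_sad   = rng.normal(loc=0.3, scale=0.1, size=(100, 3))
X = np.vstack([X_happy, X_sad])
y = np.array([1] * 100 + [0] * 100)  # 1 = happy, 0 = sad

# Logistic regression trained with plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(happy)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

With such cleanly separated data the classifier easily reaches the 90%+ accuracy range observed in the experiment; real images are much messier, which is why a deep feature extractor is used in practice.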

[Screenshots: classifier training setup and classification results for sunny and rainy test images]


This result implies that if a computer algorithm is told to draw something "sad," as defined by the training data, then because rainy elements are very similar to sad ones "in its experience," the final optimized output could well contain rainy elements rather than sunny ones.

The main difference between "directly fitting the output to happy-labeled data" and "scoring the output with a happy-sad classifier and computing the loss from that prediction" is that the classifier can be customized. Every classifier, like every person, has a different "taste": one may judge an image 90% happy while another judges it only 70% happy. In that sense, we can fit each person's taste when creating a work of art.
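The "classifier as loss" idea above can be sketched concretely: hold a trained classifier fixed and adjust the generated output by gradient ascent so the classifier rates it as happier. The weights below are purely illustrative (not from the MetaMind model), and a feature vector stands in for an actual image:

```python
import numpy as np

# A fixed, pre-trained "happy" classifier; weights are illustrative
# (e.g. brightness pushes toward happy, grayness toward sad).
w = np.array([2.0, -1.5, 0.5])
b = -0.2

def p_happy(x):
    """Classifier's predicted probability that x is 'happy'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Start from a neutral "image" (feature vector) and nudge it uphill on
# the classifier's happy score instead of fitting happy-labeled data.
x = np.full(3, 0.5)
for _ in range(200):
    p = p_happy(x)
    x += 0.1 * p * (1 - p) * w    # gradient of p_happy with respect to x

print(f"happy score after optimization: {p_happy(x):.3f}")
```

Swapping in a different person's classifier (different `w`, `b`) steers the same optimization toward that person's taste, which is exactly the customization argued for above.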

