How do humans perceive artistic imagery? This project, created by Chinmay Tyagi, Michael Auld, and Tycho Bellers, aims to create artificial artwork by transferring the visual style of one image onto another: for example, "Picasso-fying" a selfie so it adopts the style of one of his paintings. For the technical details, please see our final report.
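Style transfer approaches in this vein typically represent the "style" of an image with Gram matrices of convolutional feature maps, which capture correlations between feature channels while discarding spatial layout. A minimal NumPy sketch of that idea (the function names here are illustrative, not taken from this repository):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a feature map.

    features: array of shape (channels, height, width), e.g. the
    activations of one convolutional layer.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # one row per channel
    return flat @ flat.T / (c * h * w)     # normalized (c, c) matrix

def style_loss(gram_generated, gram_style):
    """Mean squared difference between two Gram matrices."""
    return float(np.mean((gram_generated - gram_style) ** 2))
```

Minimizing this loss over several layers pushes the generated image's feature statistics toward those of the style image, regardless of where things appear spatially.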
Run the included `demo.py`. Note that it can take a few minutes to run on CPU. Increasing the number of epochs improves the output, but the demo currently runs only 1 epoch so that it finishes quickly.
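Why do more epochs help? Each epoch is one more pass of gradient-based optimization nudging the generated image toward a lower loss. A toy stand-in for that loop (pure NumPy, with a simple squared-error loss instead of the demo's actual style/content losses):

```python
import numpy as np

def run_epochs(image, target, num_epochs, lr=0.1):
    """Toy optimization loop: nudge `image` toward `target` by gradient
    descent and record the loss after each epoch. The real demo optimizes
    a combined style + content loss instead of this squared error."""
    losses = []
    for _ in range(num_epochs):
        grad = 2.0 * (image - target)              # gradient of (image - target)**2
        image = image - lr * grad                  # one gradient-descent step
        losses.append(float(np.mean((image - target) ** 2)))
    return image, losses
```

With more epochs the loss keeps shrinking, which is why the single-epoch demo output looks rougher than a longer run would.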
To run the style transfer on a video, follow these steps:
1. Run `video_ingestion.py`. This takes a single video and converts each frame to a JPEG.
2. Open the Java project `java/Nerual` in IntelliJ and edit the parameters of the convert function to your liking. (This step was required to work around a TensorFlow GPU memory-allocation issue. If you are using CPU only, you can probably run `convert2.py` directly with the appropriate arguments and skip step 3. `convert2.py` takes the following arguments: `<src_dir> <dst_dir> <start_frame_id> <end_frame_id>`.)
3. Run the Java program. This styles each frame and outputs it to a new directory.
4. Edit `video_ingestion.py` to call the `frames_to_video` function on the appropriate directory to convert the styled frames back into a video.