Building machine learning tools for artists and designers.


Researcher @ New York University, ITP

Scenescoop


Scenescoop is a tool for finding semantically similar scenes across a pair of videos: you input a video, and it returns the scene from another video with the most similar meaning. You can run it as a Python script or as a web app.
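For illustration, invocation might look something like this; the script names and flags below are assumptions, not the project's documented interface:

```
# hypothetical flags, for illustration only
python scenescoop.py --input my_video.mp4 --transform other_video.mp4

# or serve the web app locally (hypothetical entry point)
python server.py
```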


Description

Scenescoop uses the im2txt TensorFlow model to analyze videos on a frame-by-frame basis, generating a caption that describes the content of each frame. Consecutive frames with the same caption are grouped together to create a sequence, or scene.
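A minimal sketch of the grouping step, assuming captions arrive as ordered (frame, caption) pairs; the function name is illustrative:

```python
from itertools import groupby

def group_scenes(frame_captions):
    """Group consecutive frames that share a caption into scenes.

    frame_captions: ordered list of (frame_index, caption) pairs.
    Returns (caption, first_frame, last_frame) tuples, one per scene.
    """
    scenes = []
    for caption, group in groupby(frame_captions, key=lambda fc: fc[1]):
        frames = [frame for frame, _ in group]
        scenes.append((caption, frames[0], frames[-1]))
    return scenes

# Frames 0-2 share a caption, so they collapse into a single scene:
print(group_scenes([(0, "a man riding a wave on a surfboard"),
                    (1, "a man riding a wave on a surfboard"),
                    (2, "a man riding a wave on a surfboard"),
                    (3, "a plate of food on a table")]))
```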

Each scene's caption is then analyzed with spaCy, using its sentence parsing and built-in word vectors: a sentence is represented as the average of the word vectors it contains.
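In spaCy this averaging comes for free: for models that ship with word vectors, `doc.vector` is the mean of the token vectors. A minimal sketch, assuming the `en_core_web_md` model is installed:

```python
import spacy

nlp = spacy.load("en_core_web_md")  # medium model ships with 300-d word vectors

doc = nlp("a man riding a wave on top of a surfboard")
sentence_vector = doc.vector  # average of the token vectors
print(sentence_vector.shape)  # (300,)
```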

Finally, Annoy is used to build an index over these sentence vectors for fast nearest-neighbor lookup (an approach based on @aparrish's Plot to Poem).
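A sketch of the indexing step, assuming `scene_vectors` holds one sentence vector per scene (filled with random data here so the snippet runs standalone):

```python
import random
from annoy import AnnoyIndex

VECTOR_DIM = 300  # matches spaCy's en_core_web_md vectors

# Stand-in for the per-scene sentence vectors computed above
scene_vectors = [[random.gauss(0, 1) for _ in range(VECTOR_DIM)]
                 for _ in range(5)]

index = AnnoyIndex(VECTOR_DIM, "angular")  # angular distance ~ cosine
for i, vec in enumerate(scene_vectors):
    index.add_item(i, vec)
index.build(10)  # 10 trees; more trees improve accuracy at query time

# Nearest scene to a query vector (here, querying with the first scene's own)
print(index.get_nns_by_vector(scene_vectors[0], 1))  # -> [0]
```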

Video Demos