
Bartlett School of Architecture, UCL


Neural Kubrick

In 1968, Stanley Kubrick speculated on the arrival of human-level Artificial Intelligence in “2001: A Space Odyssey”. Some sixteen years past his predicted date, our project “Neural Kubrick” examines the state of the art in machine learning, using the latest deep neural network techniques to reinterpret and redirect Kubrick’s own films. Three machine learning algorithms take respective roles in our AI film crew: Art Director, Film Editor and Director of Photography.

The project is conceived as an artist-machine collaboration: the artist works past the limitations of the machine, and the algorithm works past the limitations of the artist. Within the project, what the machine interprets is limited to numbers, classifications of features, or generated abstract images. We curate this output into a coherent narrative, translating it back into human perception.

The project takes three of Stanley Kubrick’s movies as input for three machine learning models: The Shining, A Clockwork Orange and 2001: A Space Odyssey. The generated videos display a machinic interpretation of the three movies, produced through a collaborative effort between the artist and the algorithm.

Introduction Video:

 

FILM EDITOR

A convolutional neural network takes up the role of film editor, identifying visual similarities between a given scene and a dataset of frames drawn from 100 different movies. The dataset consisted of frames extracted from each movie, around 115,000 images in total, and the reverse image search algorithm was trained on it. When queried with a movie clip, the interface outputs a series of dataset images similar to the input. From these results a selection of frames was made, and the few seconds featuring each selected frame were clipped out of its original movie. The clipped sequences were then assembled in alignment with the input, generating a parallel video.

Source code used: https://github.com/ml4a/ml4a-ofx
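As a sketch of how such a reverse image search can work, the hypothetical Python snippet below embeds frames with a pretrained CNN and ranks them by cosine similarity. It is not the ml4a-ofx code; the ResNet-50 backbone and the file paths are illustrative assumptions.

```python
# Hypothetical reverse-image-search sketch: embed frames with a pretrained
# CNN, then rank the dataset against a query frame by cosine similarity.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained CNN as a feature extractor (classifier head removed).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Return an L2-normalised feature vector for one frame."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feat = model(img).squeeze(0).numpy()
    return feat / np.linalg.norm(feat)

# Index the dataset of extracted cinema frames (paths are placeholders).
frame_paths = ["frames/shining_0001.jpg", "frames/odyssey_0042.jpg"]
index = np.stack([embed(p) for p in frame_paths])

# Query: dot product of normalised vectors = cosine similarity.
query = embed("query_frame.jpg")
ranked = np.argsort(index @ query)[::-1]
print([frame_paths[i] for i in ranked[:10]])  # ten most similar frames
```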

Process Video:

 

(RIS) Reverse Image Search — Film Editor:

 

DIRECTOR OF PHOTOGRAPHY

A recurrent neural network (RNN) takes the role of director of photography, defining camera paths that are then reshot in virtual space. The camera coordinates of a scene were extracted using photogrammetry software and fed into the RNN, which analyzed the sequential coordinates and generated a continued sequence. These new coordinates were used to reshoot the same scene in virtual space, within a 3D model reconstructed through the same photogrammetry.

Source code used: https://github.com/jcjohnson/torch-rnn
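A minimal sketch of the idea follows, assuming a PyTorch stand-in for torch-rnn (which is a Lua/Torch character-level model) that operates directly on (x, y, z) positions; the training data here is synthetic, standing in for the real photogrammetry export.

```python
# Hypothetical camera-path continuation: an LSTM learns to predict the next
# camera position, then rolls out a continued path to reshoot in 3D.
import torch
import torch.nn as nn

class CameraRNN(nn.Module):
    """Maps a sequence of (x, y, z) camera positions to the next position."""
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)

    def forward(self, seq, state=None):
        out, state = self.lstm(seq, state)
        return self.head(out), state

model = CameraRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for the extracted coordinates: a smooth random walk, shape (1, T, 3).
coords = torch.cumsum(torch.randn(1, 200, 3) * 0.01, dim=1)

for step in range(500):  # teach the model to predict the next coordinate
    pred, _ = model(coords[:, :-1])
    loss = nn.functional.mse_loss(pred, coords[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Roll out a continued camera path from the observed sequence.
with torch.no_grad():
    out, state = model(coords)
    nxt = out[:, -1:]            # first generated coordinate
    path = [nxt]
    for _ in range(99):          # 100 new coordinates in total
        nxt, state = model(nxt, state)
        path.append(nxt)
new_path = torch.cat(path, dim=1)  # (1, 100, 3) continuation to reshoot with
```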

Process Video:

 

(RNN) Recurrent Neural Network — Director of Photography:

 

ART DIRECTOR

A Generative Adversarial Network (GAN) was trained on frames of Kubrick’s films, divided into three datasets by shot scale: close-ups, medium shots, and long shots. The machine thus reimagines new compositions based on the features it interprets from the input dataset. The 64×64 output images were upscaled with the Neural Enhance script, and the output film interpolates between the generated frames using G’MIC.

Source code used: https://github.com/alexjc/neural-enhance, https://github.com/soumith/dcgan.torch
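For illustration, a hypothetical PyTorch analogue of the dcgan.torch generator is sketched below, with the standard DCGAN layer sizes for 64×64 output; the closing latent-space interpolation mirrors how a morphing sequence of frames can be produced before smoothing.

```python
# Hypothetical DCGAN-style generator: a 100-d noise vector is upsampled
# through transposed convolutions into a 64x64 RGB image.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, nz=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),      # 4x4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # 32x32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Interpolating between two latent vectors yields a sequence of in-between
# compositions, of the kind the project then smoothed into film with G'MIC.
G = Generator()
z0, z1 = torch.randn(1, 100, 1, 1), torch.randn(1, 100, 1, 1)
with torch.no_grad():
    frames = [G(z0 + t * (z1 - z0)) for t in torch.linspace(0, 1, 8)]
```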

Process Video:

 

(GAN) Generative Adversarial Network — Art Director:


To view complete movie interpretations of Stanley Kubrick’s films, visit our website: http://neuralkubrick.com/

Research Reports

http://two.wordpress.test/orderchance-narratives-of-new-media.html

http://two.wordpress.test/machine-as-scenographer.html

http://two.wordpress.test/the-reciprocal-relation-of-semiotic-architectural-space-and-semiotic-filmic-space-2.html