Exquisite Corpse

Tim Pickup from Genetic Moo discusses a novel collaborative art work which was made by London Group members for the recent In the Dark exhibition.

Experiments in collaborative film making

The In the Dark exhibition was designed to encourage collaboration, with three different sets of artists working in the same dark space (well, as dark as we could get it!) amid overlapping projections, performances and artworks. The show was well attended and the artists involved came away with a lot of new ideas about ways of working together. In fact, there are plans to do it again next year.

There were several collaborative artworks on show with two, three, or four artists combining skills and equipment. In one case 20 workshop participants built a generative animation using creative coding. Here I’d like to highlight one particular collaboration which was more entangled and followed on from discussions with London Group member Bryan Benge.

A small group of us decided to try out a video version of the old Surrealist game Exquisite Corpse. This technique was adapted from a parlour game called Consequences in which players write in turn on a sheet of paper, fold it to conceal what was written, and then pass it to the next player for a further contribution. Surrealism’s principal founder André Breton reported that it ‘started in fun, but became playful and eventually enriching’.

Our version featured five London Group members: Bryan Benge, Genetic Moo (Nicola Schauerman and Tim Pickup, myself), Sandra Crisp and Stephen Carley. We would each add 30 seconds of footage (these could be scraps or tests that hadn’t made it into our own completed artworks) to a lengthening video – and see what evolved. We’d go round twice in a fixed order, which would make a five minute film. Each person could edit their contribution into the previous footage using dissolves etc., but this quickly caused problems, as anything could be re-edited by the next person. You might do something cool, but then the next person would wipe over it. So it was hard to get an overall handle on the process.

After several consecutive additions all that we had was a very disparate set of videos – some animations, some 3D, some live footage, some generative, some rough, some smooth, and lots of bits of sound with differing levels of musicality. It was hard to see how this was all going to come together. So we broke the rules.

I stepped outside of the process and did a stroboscopic digital edit of all the footage and asked permission to do something similar at the end. Everyone said no, as it was way too flickery, but they did agree to the idea of a final homogenising edit – “do something like that but not like that”.

Once all the footage was gathered I began to algorithmically edit the film. I wrote a computer program which would take all the footage, break it down into still images, recombine these in a new way frame by frame, and finally bring all the new frames together into a continuous film. So first the four minute film became a series of JPG frames (2400 of them at 10 frames per second). Then I started experimenting. I had in mind a double playhead, so I wrote another program to pull out two images from the folder of 2400 and use a threshold to pass one image through the other: any pixel brighter than a certain level would let the corresponding pixel from the other image show through, and any pixel darker than that level would become black. I wanted to keep the film as dark as possible without obliterating everything, to match the theme of the show as a whole. Each thresholding produced a new output image, stored in a folder. Then both playheads advance to the next image along in the input folder, and the process repeats. In this way 4800 frames were output – going round the whole film twice to make sure each image is passed through in both directions. Finally the 4800 images were combined back together into the finished video.
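For the curious, the core of the technique can be sketched in a few lines. This is my own illustrative version, not the actual program: it assumes each frame has already been loaded as a grayscale NumPy array, and all function names here are made up for the example.

```python
import numpy as np

def threshold_composite(frame_a, frame_b, threshold=60):
    """Pass frame_b 'through' frame_a: wherever frame_a is brighter
    than the threshold, frame_b's pixel shows; everywhere else the
    output is black (keeping the film as dark as possible)."""
    out = np.zeros_like(frame_b)
    mask = frame_a > threshold
    out[mask] = frame_b[mask]
    return out

def double_playhead_edit(frames, gap=1):
    """Advance two playheads in step, `gap` frames apart (wrapping
    round at the end), compositing each pair into one output frame."""
    n = len(frames)
    return [threshold_composite(frames[i], frames[(i + gap) % n])
            for i in range(n)]
```

Running the edit a second time with the arguments swapped would give the second pass described above – each image passed through in both directions, doubling 2400 inputs to 4800 outputs.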

The audio was also treated algorithmically, using a granular synthesis procedure – roughly speaking, the soundtrack is chopped up into thousands of small pieces which are then put back together approximately in order; some randomness in the reassembly results in a pulsing, jittery soundtrack. Nicola then edited the image and sound together.
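The granular step can be sketched like this – a toy mono version of the idea, with entirely illustrative names (the real treatment was presumably done in an audio tool):

```python
import random

def granulate(samples, grain_size=441, jitter=200, seed=None):
    """Re-read a mono sample list in fixed-size grains, roughly in
    order: each grain's start position is nudged by a random offset,
    producing the pulsing, jittery version of the soundtrack."""
    rng = random.Random(seed)
    out = []
    pos = 0
    while pos < len(samples):
        # Nudge the grain start, clamped so the slice stays in range.
        start = max(0, min(len(samples) - grain_size,
                           pos + rng.randint(-jitter, jitter)))
        out.extend(samples[start:start + grain_size])
        pos += grain_size
    return out
```

With `jitter=0` the input comes back unchanged; as the jitter grows, neighbouring grains start to repeat and skip material, which is where the pulse comes from.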

And that was it – we liked it. Some of the threshold effects were compelling, and without too much examination we added the film to the show to see what people thought.

Everyone involved thoroughly enjoyed it and found the final film striking. The people we asked in the audience felt likewise.

Not until later did I start to formulate a reading of it. The most insistent visual image is Stephen pulling himself across the screen. It’s insistent because he added himself into the mix four times, which got doubled to eight in the edit. Is he pulling himself into a dark void, or pulling himself out of one? The surrounding images of surveillance, weird sexual and animal imagery, computer programs, spidery presences, and bits of nature are sooted up, clogged up and polluted, creating a dystopian space – humans pulling against technology. Being sucked into a void, corrupted, choking. You know what I mean.

After the show was over, Nicola and I came to a realisation: the double playhead technique could be expanded to make much longer films. Rather than keeping a fixed gap between the playheads, the gap could be widened by one step with each loop through, allowing far more variety of image threshold mashups. The process could be run generatively – turning four minutes of source footage (any source footage) into an endless* generative piece which could be part of an exhibition. And indeed it will be, as we have used this technique to make “The Medusa and The Snail” for the upcoming Self-Service video show in June.

So yes, the Exquisite Corpse process was a good example of group collaboration, it was fun, playful and eventually, enriching to us too.

*The process will eventually repeat, but the cycle is 2400 times longer than the source – 6.7 days of footage!

Genetic Moo LG, 2019

[fvplayer src="https://vimeo.com/310789514" width="1280" height="720"]