Thursday 10 September 2009

Calle Garibaldi - a Dynamic Documentary Film

My recent investigation leads to creative outcomes that explore the use of dynamic/interactive content and mosaic-form materials and structures. It ranges from live interactive musical scores (as in Russian Disco) to interactive media with a focus on sound, placed in immersive 3D environments (as in Ho - a sonic expedition to Vietnam).

My proposal for s.low is a collaborative work to create and produce a documentary film entitled 'Calle Garibaldi' with dynamic content, where sound becomes a compass for orientation and navigation through the narratives. Users will experience this work at the boundary between fact and fiction. The connection with Berlin through the narrative is a key aspect of the project. Although the story behind Calle Garibaldi has been under investigation for two years now, it cannot be revealed yet.

On the technology side, I also want to experiment with an unusual combination: low-cost consumer camcorders for documenting mosaic narratives, paired with highly sophisticated slow-motion, high-frame-rate cameras to freeze or slow down instants of time and sculpt them. The high-end technology aims to give sound primacy in the audiovisual contract, while the low-cost technology offers accessibility to knowledge and production, which are no longer in the hands of a few.

This Dynamic Documentary Film project has a component of speculative research and experimentation with different media. However, the underlying methodology builds upon and extends my previous investigation in the field. In the area of sound, I compose micro-structured sonic events with an associated typomorphology, which is stored as custom metadata. I trust this system is applicable to documentary film if the methodology is given an extra twist.
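As a toy illustration of the tagging idea above, here is a minimal Python sketch of how a micro-structured sonic event might carry a typomorphology descriptor stored as custom metadata. The field names (attack, motion, texture) and the JSON-sidecar approach are my hypothetical examples, not the project's actual schema; a real OS X deployment would write Spotlight-indexable metadata instead.

```python
import json

def tag_event(path, typomorphology):
    """Attach a (hypothetical) typomorphology descriptor to a sound file
    by writing a JSON sidecar next to it. A real deployment might write
    Spotlight-indexable extended attributes on OS X instead."""
    record = {"file": path, "typomorphology": typomorphology}
    with open(path + ".meta.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

# Example: describe a short granular gesture (descriptor values invented).
event = tag_event("gesture01.aif", {
    "attack": "sharp",        # onset character
    "motion": "ascending",    # spectral motion
    "texture": "granular",    # internal texture
})
print(event["typomorphology"]["motion"])  # → ascending
```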

The retrieval system uses Spotlight search technology (e.g. the mdfind command on OS X), which calls audio or visual materials via tags and populates the content of the work dynamically (via Pure Data (Pd), MaxMSP/Max 5, etc.). A video is shown below for more details. This idea is connected to the decisions of performers (as in my series of kauten-suzhi scores) or of users (as in 'Ho'), who navigate through sound in immersive 3D environments. For the latter, audio metadata is retrieved via game-engine technology in the Blender 3D software and Punakea, piping Max with OSC messages via a Python script.
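The retrieval step can be sketched in Python. This is a speculative illustration, not the project's actual script: it builds an mdfind query matching files whose Spotlight keywords (the standard kMDItemKeywords attribute) contain a given tag, and encodes a file path as a minimal OSC message of the kind one could pipe to Max or Pd. The /playfile address and port 9000 are invented examples.

```python
import struct
import subprocess

def mdfind_query(tag):
    """Build a Spotlight query matching files whose keywords contain `tag`."""
    return 'kMDItemKeywords == "*%s*"' % tag

def find_tagged(tag):
    """Run mdfind (OS X only) and return the matching file paths."""
    out = subprocess.check_output(["mdfind", mdfind_query(tag)])
    return out.decode().splitlines()

def osc_message(address, *args):
    """Encode a minimal OSC message (string arguments only)."""
    def pad(b):
        # OSC strings are NUL-terminated and padded to 4-byte multiples.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "s" * len(args)).encode())  # type-tag string
    for a in args:
        msg += pad(a.encode())
    return msg

# On OS X, one might retrieve and forward tagged files like so:
#   for path in find_tagged("granular"):
#       sock.sendto(osc_message("/playfile", path), ("127.0.0.1", 9000))
print(mdfind_query("granular"))  # → kMDItemKeywords == "*granular*"
```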

To access the research blog associated with this project, click here (invited readers only). If you think your research is relevant to it, please email me.

Example of related work:

Ho - a sonic expedition to Vietnam (demo reel) from Surround Wunderbar Studio on Vimeo.

Ho - Labyrinth Island scene. Studio test at NOVARS from Surround Wunderbar Studio on Vimeo.

Retrieving Audio using custom metadata in maxmsp from Surround Wunderbar Studio on Vimeo.