Points, lines, planes (2015)

“Points, lines, planes” creates an immersive, composed environment in which various landscapes of sounds and fields of emotions augment each other. The composition contrasts text-to-speech recordings of Wikipedia articles about topics such as anxiety, compassion and friendship with filtered basic waveforms and pink noise. The electronic sounds contextualise the articles through various trajectories in space, timbre and pitch. Data extracted from spectral analyses of recordings of the hallways at the Zentrum für Kunst und Medien (ZKM), together with filtered noise, provides the piece's harmonic content. The timbres, spatialisation and dynamic range of the sounds delineate the piece's flow of tension and release. The piece contains parts of a meditation kindly provided by The Counselling Service of the University of Manchester. It was commissioned by the ZKM as part of their Tangible Sounds Festival and was composed and produced at the ZKM, Karlsruhe, Germany.

About the creation of the piece

The goal of the piece was to experiment with and study the composition of space within a spherical loudspeaker array. Due to the sheer number of loudspeakers to be controlled, I changed my usual way of composing, which is to record and transform sounds and compile them in a sequencer. Instead, I opted to generate and compose everything in SuperCollider.

To deal with space, SuperCollider generates the sound and spatialises it on the loudspeaker array using VBAP (vector base amplitude panning). The locations of sounds are described through a set of coordinates (azimuth and elevation), and VBAP then maps those coordinates to channels of the loudspeaker array. This means that (in theory) I no longer write for a specific loudspeaker configuration, as VBAP can adapt the coordinates to any configuration of the array.
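In a minimal form, such a setup can look as follows, using the VBAP UGen from the sc3-plugins package. The eight-speaker ring below is a hypothetical configuration for illustration, not the actual ZKM dome layout:

    // Minimal VBAP sketch (requires the VBAP UGen from sc3-plugins).
    // The speaker directions are a hypothetical example, not the ZKM layout.
    (
    ~speakers = VBAPSpeakerArray.new(3, [
        [0, 0], [45, 0], [90, 0], [135, 0],      // [azimuth, elevation] in degrees
        [180, 0], [-135, 0], [-90, 0], [-45, 0]
    ]);
    ~buf = ~speakers.loadToBuffer(s);

    SynthDef(\vbapNoise, { |azi = 0, ele = 0, spread = 0|
        var sig = PinkNoise.ar(0.1);
        Out.ar(0, VBAP.ar(8, sig, ~buf.bufnum, azi, ele, spread));
    }).add;
    )

    // Place the sound at 45 degrees azimuth, 30 degrees elevation.
    x = Synth(\vbapNoise, [\azi, 45, \ele, 30]);

Adapting to another configuration then only means replacing the speaker directions; the coordinates written into the music stay the same.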

To get an understanding of how the specific space of a spherical loudspeaker array works, I programmed a few synthesised sounds and tested their spatialisation. I was essentially testing the creation of space using test signals such as pink noise and various sine, saw or pulse waves. As I liked the results, I decided to use those test sounds as the starting point for the composition, incrementally elaborating them.
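A test synth of this kind might look as follows (a sketch assuming the VBAP setup above; the \testOrbit name and the parameter ranges are illustrative): a filtered saw wave whose azimuth slowly rotates around the array.

    // Hypothetical test synth: a filtered saw wave circling the array.
    (
    SynthDef(\testOrbit, { |freq = 220, rate = 0.1, ele = 0|
        var sig = LPF.ar(Saw.ar(freq, 0.1), 2000);
        var azi = LFSaw.kr(rate).range(-180, 180);   // one full rotation every 1/rate seconds
        Out.ar(0, VBAP.ar(8, sig, ~buf.bufnum, azi, ele));
    }).add;
    )

    x = Synth(\testOrbit, [\freq, 110, \rate, 0.05]);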

The next step was to deal with data management. As I was using synthesised sounds, a lot of data had to be managed, raising questions such as: when does a sound move where? How does a sound change over time? SuperCollider offers a useful class for this, called Routine. In a Routine one describes, as in a score, what happens when.
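For example, building on the hypothetical \testOrbit synth from above, a Routine can step through a small score of events:

    // Score-like Routine: each step states what happens and how long to wait.
    (
    Routine {
        var synth = Synth(\testOrbit, [\freq, 110]);
        8.wait;                              // let the sound orbit for 8 seconds
        synth.set(\freq, 220, \ele, 45);     // then raise pitch and elevation
        4.wait;
        synth.free;
    }.play;
    )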

After a while of composing I realised that I would need some material to contrast with the synthesised sounds. Soundscape recordings of various hallways in the ZKM Karlsruhe provided a vast resource, as did text-to-speech recordings of Wikipedia texts. Both provide a context for the synthesised sounds, or, in other words, connote them.

The soundscapes were used as literal recordings as well as a data resource. As recordings, they enriched the synthetic sounds with their calm, "lively" character. As data, peaks found in the spectra of the recordings were used to drive the frequency parameters of sound generators. The different soundscapes essentially provided the different tonalities used in the composition.
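In a simplified form this can look as follows (a sketch: the peak frequencies below are placeholder values standing in for ones extracted beforehand from a recording, and the synth is the hypothetical \testOrbit from above):

    // Spectral peaks (in Hz) extracted beforehand from a hallway recording;
    // the values here are placeholders for illustration.
    ~peaks = [87.3, 174.8, 261.2, 349.9, 524.1];

    // Layer a generator on each peak, entering one after another.
    (
    Routine {
        ~peaks.do { |freq|
            Synth(\testOrbit, [\freq, freq, \rate, 0.02]);
            2.wait;
        };
    }.play;
    )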

After experimenting with a variety of texts about economics, anxiety and psychotherapy, I settled on texts dealing with anxiety, depression and meditation. The artificial character of the text-to-speech recordings gave the texts a "cold" and "calm", almost objective quality. Different text-to-speech engines provided different characters and timbres: male and female voices, as well as different degrees of "artificialness". I preferred the artificial ones over those trying to sound natural. I found that the combination of synthesised sounds and texts created a lot of tension. This tension is eventually resolved after the piece's climax through a recording of an actual person giving a guided meditation.

Another data source was noise: heavily constrained, filtered noise provided control data for the sound generators.
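As a sketch of what I mean (the mappings and ranges are illustrative), slow, band-limited noise can drive both the pitch and the position of a generator:

    // Heavily constrained noise as control data; ranges are illustrative.
    (
    SynthDef(\noiseDriven, { |ele = 0|
        var ctl = LPF.kr(LFNoise1.kr(2), 0.5);          // slow, smoothed noise in -1..1
        var freq = ctl.linexp(-1, 1, 100, 800);         // constrained to 100-800 Hz
        var azi = LFNoise1.kr(0.2).range(-180, 180);    // the position wanders too
        Out.ar(0, VBAP.ar(8, SinOsc.ar(freq, 0, 0.1), ~buf.bufnum, azi, ele));
    }).add;
    )

    x = Synth(\noiseDriven);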

The composition can ultimately be performed in realtime. (However, prior to the premiere performance I recorded the piece and played that recording back for safety reasons.)
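Such a safety recording can be captured with the server's built-in recorder; a minimal sketch, where the channel count is an assumption and must match the actual array:

    // Capture the realtime performance to disk for safe playback later.
    s.recChannels = 8;    // hypothetical; set to the size of the loudspeaker array
    s.record;             // writes to SuperCollider's default recordings directory
    // ... perform the piece in realtime ...
    s.stopRecording;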