2024
Installation involving 12-channel speaker arrangement, phoneme data, neural machine, vinyl tubing, rope-light, 5-channel raster video, white carpet
Pshal P’shaw investigates the sonic instability of speech—where phonetic dissonance, vocal fragmentation, and gestural sound challenge structured linguistic norms. Developed during my residency at the Max Planck Institute for Empirical Aesthetics, the work explores how speech patterns morph in response to unfamiliar dialects, sonic environments, and computational intervention. Inspired by Hermann Finsterlin’s fluid, dimensionally unbound architectural forms, I approach language as an unfixed structure, revealing the unstable, adaptive nature of phonetic articulation.
At its core, Pshal P’shaw is a multi-channel sculptural sound installation, neural instrument, and experimental phonetic composition that deconstructs oral adaptation and linguistic convergence. The project is based on 28 recorded voices engaging with a script designed to test the phonetic contours of a contemporary US Western dialect. Participants articulated words in controlled EEG lab settings, allowing for an analysis of phonetic deviation, mimicry, and emergent vocal patterns.
The multifaceted installation incorporates composed arrangements, a 12-channel speaker setup, phoneme data, a neural machine, vinyl tubing, rope-light, and 5-channel raster video. The 12-channel spatial sound installation navigates phoneme classifications along an x-y axis, refined by an algorithm based on multiples of 12. Audiences encounter intricate phonetic compositions punctuated by dissimilar variations drawn from the vocabulary provided by the 28 participants.
An interwoven, suspended structure delineates invisible lines between syntax points, rendered visible by light, with ropes of light marking initial connections between speech particles. The development of generative software led to a 5-channel video series exploring connections in oral discourse, gradually abstracting participant profiles according to the frequency and dynamics of phonetic sounds.
By positioning phonetic instability as an expressive and cognitive site, this work aligns with my broader research on interstitial sonic spaces—the liminal areas between signal and noise, order and disruption, articulation and abstraction. The project extends into machine learning and human-machine interaction, where a neural algorithm dissects and reorganizes speech based on phonetic similarity and difference. The system works within a granular information base of over 3,000 phonetic variations, revealing the rhythmic, percussive, and spectral structures embedded in vocal articulation.
The generative vocal arrangement can be listened to as a spatial 12-channel installation or in a binaural headphone setting.
Exhibited: Museum Angewandte Kunst, Frankfurt, Germany, May 16 – July 28, 2024.
Audio release: publication and record release with raster media (DE). Available in the US through the Walker Art Center.
Curated by Eike Walkenhorst
Production support from the ARTLAB team, Max Planck Institute for Empirical Aesthetics
Max/MSP consultant and designer: Matthew Ostrowski