The movements are based on constantly finding, navigating and letting go of different logics: anatomical, geometrical, experiential, imaginative.
The sound is triggered by MiniBees and is based on granulating live-recorded audio samples. Every time a trigger is received, the algorithm selects a random sample. As the piece progresses, the pool of selectable samples becomes larger.
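The selection logic described above (a random pick from a pool that grows over the course of the piece) can be sketched roughly as follows. This is an illustrative Python sketch, not the actual implementation; the class and method names are hypothetical.

```python
import random

class GrowingSamplePool:
    """Picks a random sample on each trigger; the set of selectable
    samples grows as the piece progresses. Hypothetical sketch."""

    def __init__(self, samples):
        self.samples = samples   # live-recorded samples, in recording order
        self.available = 1       # at first, only the earliest sample is selectable

    def grow(self, n=1):
        # Called as the piece progresses to enlarge the selectable pool.
        self.available = min(self.available + n, len(self.samples))

    def on_trigger(self):
        # Each incoming trigger selects a random sample from the current pool.
        return random.choice(self.samples[:self.available])
```

In use, each sensor trigger calls `on_trigger()`, while some measure of the piece's progress (elapsed time, number of recorded samples, section changes) drives `grow()`.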
Requires MiniBeeUtils and movement sensors. We use MiniBees, but with a bit of rewriting, MiniBeeUtils should work with any sensor data. See the full GitLab repo for more details on the implementation.
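Adapting another sensor generally comes down to turning its continuous data stream into discrete trigger events. One common shape for that mapping is an upward threshold crossing with re-arming, sketched here in Python; this is a generic illustration and does not reflect the actual MiniBeeUtils API.

```python
def trigger_indices(values, threshold):
    """Return the indices where the signal rises above the threshold.

    A trigger fires on an upward crossing only; the detector re-arms
    once the signal falls back below the threshold, so a sustained
    high value produces a single trigger. Hypothetical adapter code.
    """
    triggers = []
    armed = True
    for i, value in enumerate(values):
        if armed and value >= threshold:
            triggers.append(i)
            armed = False
        elif value < threshold:
            armed = True
    return triggers
```

For example, an accelerometer magnitude stream `[0.1, 0.6, 0.7, 0.2, 0.8]` with threshold `0.5` yields triggers at indices 1 and 4, which would then drive the sample selection.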