Augmenting character animation in MOCAP systems through sonification and live coding

We present a system for the real-time sonification of dancer motion data using live coding techniques. Our approach uses both the Rokoko motion capture (MOCAP) suit and an open-source, locally hosted AI-based pose estimation platform. The system enables performers and coders to collaborate closely, designing expressive sonic responses to live movement. Key features include: (a) real-time capture, transmission, and transformation of motion data; (b) live coding interfaces for modifying system behavior during performance; and (c) recording of both motion streams and evaluated code for iterative refinement in offline sessions. We describe algorithmic techniques for translating MOCAP data into sound, focusing on event triggering, rhythmic integration, and gesture curvature tracking to capture the idiosyncrasies of individual performers. These techniques not only support live sonification but also provide building blocks for animating virtual characters from recorded sessions, offering a path to imbue both human and non-human figures with distinctive personality traits. We present the software infrastructure developed to manage multiple streams of positional data and demonstrate its use in collaborative work sessions that alternate between live coding and offline exploration. Additionally, we show the application of the system in networked live coding performances and discuss its potential integration into virtual character animation pipelines. We conclude by situating our approach relative to existing gesture recognition frameworks, highlighting how our method, which focuses on expressive motion features rather than predefined gesture sets, affords greater flexibility and artistic agency while introducing new technical challenges.
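To make the gesture-curvature idea concrete, the sketch below shows one plausible shape for the pipeline the abstract describes. The abstract does not specify a transport, library, or curvature estimator, so this assumes joint positions arrive as OSC messages (a common convention for streaming MOCAP data) handled with the python-osc package; the address /pose/wrist, the port numbers, the Menger-curvature estimate, and the trigger threshold are all illustrative stand-ins rather than the authors' implementation.

```python
"""Minimal sketch: trigger sound events from the curvature of a streamed
wrist trajectory. Assumes OSC transport and the python-osc package; all
addresses, ports, and thresholds are hypothetical placeholders."""
from collections import deque
import math

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# Forward sound events to a synthesis engine listening on OSC
# (e.g. SuperCollider listens on UDP port 57120 by default).
synth = SimpleUDPClient("127.0.0.1", 57120)

window = deque(maxlen=3)      # last three wrist positions
CURVATURE_THRESHOLD = 2.0     # in 1/metres; tuned per performer


def menger_curvature(p0, p1, p2):
    """Discrete curvature of three successive 3D points:
    4 * triangle area / product of the three side lengths."""
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p1[i] for i in range(3)]
    c = [p2[i] - p0[i] for i in range(3)]
    # |a x b| equals twice the area of the triangle (p0, p1, p2)
    cross = [a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0]]
    twice_area = math.sqrt(sum(x * x for x in cross))
    la, lb, lc = (math.sqrt(sum(x * x for x in v)) for v in (a, b, c))
    denom = la * lb * lc
    return 2.0 * twice_area / denom if denom > 1e-9 else 0.0


def on_wrist(address, x, y, z):
    """Handle an incoming wrist position; fire a sound event whenever
    the local curvature of the trajectory spikes."""
    window.append((x, y, z))
    if len(window) == 3:
        k = menger_curvature(*window)
        if k > CURVATURE_THRESHOLD:
            synth.send_message("/gesture/curve", k)


dispatcher = Dispatcher()
dispatcher.map("/pose/wrist", on_wrist)   # hypothetical pose address

if __name__ == "__main__":
    # Listen for pose data on a local port (placeholder address/port).
    BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```

In a live coding setting, values like CURVATURE_THRESHOLD and the outgoing OSC address are exactly the kind of parameters one would re-evaluate on the fly during performance, which is consistent with the live coding interfaces the abstract describes.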