ctrl-arm at shellhacks 2025
building ctrl-arm at shellhacks
why we built it
ctrl-arm started from a simple question: what would computer control feel like if it relied on the body instead of a keyboard, a mouse, or constantly reaching for a screen? since shellhacks was coming up, we decided to build it there, and as we thought through how to create it, the idea pivoted into something more focused on accessibility.
the goal was not to make a demo that only works once. it was to build something that feels immediately understandable, where you can flex, move, or speak and the computer responds in a way that feels natural.
the stack
the core input came from myoware 2.0 emg sensors reading muscle activity, paired with motion data from a seeed studio xiao sense, so the system had context for how the arm was actually moving.
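a minimal sketch of the kind of reading loop this implies, assuming the xiao streams comma-separated sensor values over usb serial; the port name, baud rate, window size, and threshold below are illustrative placeholders, not our exact values.

```python
# illustrative emg reading loop: smooth the raw signal with a short
# moving-average envelope and emit an event when it crosses a threshold.
import time
from collections import deque

import serial  # pyserial

PORT = "/dev/ttyACM0"   # hypothetical serial port
BAUD = 115200
WINDOW = 50             # samples in the smoothing window
FLEX_THRESHOLD = 600    # illustrative activation level

def stream_flex_events():
    window = deque(maxlen=WINDOW)
    with serial.Serial(PORT, BAUD, timeout=1) as conn:
        while True:
            line = conn.readline().decode(errors="ignore").strip()
            if not line:
                continue
            try:
                emg_value = float(line.split(",")[0])
            except ValueError:
                continue
            window.append(abs(emg_value))
            # cheap moving-average envelope keeps latency low
            envelope = sum(window) / len(window)
            if len(window) == WINDOW and envelope > FLEX_THRESHOLD:
                yield time.time(), envelope

if __name__ == "__main__":
    for timestamp, level in stream_flex_events():
        print(f"flex detected at {timestamp:.2f} (envelope={level:.0f})")
```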
on the software side, python handled real-time signal processing, with an electron and react overlay on top. whisper handled speech-to-text and gemini handled natural language interpretation, which made the system a true multimodal interface rather than gesture recognition with voice commands added on.
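a rough sketch of how the voice path can be wired, assuming the openai-whisper package for transcription and the google-generativeai client for gemini; the model names, action set, and prompt are assumptions for illustration, not necessarily what we shipped.

```python
# illustrative voice path: audio -> whisper transcript -> gemini maps the
# request onto a small set of computer actions.
import os

import whisper
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

stt_model = whisper.load_model("base")
llm = genai.GenerativeModel("gemini-1.5-flash")

def interpret_voice_command(audio_path: str) -> str:
    # speech to text
    transcript = stt_model.transcribe(audio_path)["text"]
    # natural language -> one action name (hypothetical action set)
    prompt = (
        "you map spoken requests to one of these actions: "
        "open_browser, scroll_up, scroll_down, click, type_text. "
        f"request: {transcript!r}. reply with just the action name."
    )
    response = llm.generate_content(prompt)
    return response.text.strip()

if __name__ == "__main__":
    print(interpret_voice_command("command.wav"))
```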
some issues
the biggest lesson we learned was that raw accuracy is not the right goal for body-driven interfaces. a heavier model can look better in evaluation but still feel worse to use if every interaction is delayed.
what mattered most was responsiveness and making sure the system felt live.
that is why we went with a lighter decision tree pipeline. it gave up some accuracy (roughly 99% down to 96%), but it was faster, easier to calibrate, and actually usable in real time.
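a small sketch of the lighter-pipeline idea: cheap time-domain window features fed into a scikit-learn decision tree. the feature set, gesture labels, and stand-in data are illustrative; the point is that inference is a handful of comparisons, so it fits comfortably inside a real-time loop.

```python
# illustrative decision tree pipeline over windowed emg features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(samples: np.ndarray) -> np.ndarray:
    # common cheap time-domain features for emg windows
    return np.array([
        np.mean(np.abs(samples)),          # mean absolute value
        np.sqrt(np.mean(samples ** 2)),    # root mean square
        np.sum(np.abs(np.diff(samples))),  # waveform length
        np.std(samples),
    ])

# stand-in calibration data: one feature row per window, one gesture label each
rng = np.random.default_rng(0)
calibration_windows = rng.normal(size=(200, 50))
X_train = np.array([window_features(w) for w in calibration_windows])
y_train = rng.choice(["rest", "flex", "rotate"], size=200)

clf = DecisionTreeClassifier(max_depth=5)  # shallow tree keeps decisions fast
clf.fit(X_train, y_train)

# classifying a new window is just feature extraction plus a few comparisons
new_window = rng.normal(size=50)
print(clf.predict([window_features(new_window)])[0])
```

calibration is also simpler with a tree: refitting on a fresh set of per-user windows takes a moment, which is what made quick recalibration between demos practical.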
why it worked
the project worked because it treated accessibility as the core problem instead of something secondary.
winning microsoft's shellhacks challenge was great, but the most important thing was that people immediately understood the use case.