The next step in bringing this blockchain piece to life was to move it from a MIDI controller setup into an installation where the people moving through the space would trigger the output. Obviously some kind of computer vision was necessary, and I began where I believe most people begin: with a Kinect. I figured its depth-sensing capabilities and skeletal tracking would let me easily map the movements of individuals in the space. This turned out to be the furthest thing from the truth. I spent an unnecessary amount of time trying to make the Kinect work for something outside its capabilities. The primary issue was the scale at which I’m trying to present this work.
Nevertheless, I tried various methods, to no avail. First I mounted the Kinect on the ceiling and pointed it straight down to get a top-down view of my “grid”. This didn’t work because of its horizontal vs. vertical field-of-view restrictions. I then pointed it straight at the space I was trying to map and ran into issues with its sensing at depth: the specs say 8 meters, but it’s really more like 3. I even spent time playing with the lighting in the space, thinking it might be interfering with the Kinect’s IR sensors. But no, that didn’t help either.
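To give a sense of why the ceiling mount fell apart: the floor area a downward-pointing camera can see shrinks fast with its field of view and mounting height. Here’s a rough sketch of that geometry, assuming the original Kinect’s commonly cited nominal FOV of 57° horizontal by 43° vertical (check your model’s datasheet) and a hypothetical 3-meter ceiling:

```python
import math

def floor_coverage(height_m, h_fov_deg=57.0, v_fov_deg=43.0):
    """Approximate floor footprint seen by a depth camera mounted
    `height_m` above the ground, pointing straight down. The default
    FOV angles are nominal specs for the original Kinect -- an
    assumption, not a measurement."""
    width = 2 * height_m * math.tan(math.radians(h_fov_deg / 2))
    depth = 2 * height_m * math.tan(math.radians(v_fov_deg / 2))
    return width, depth

w, d = floor_coverage(3.0)  # hypothetical 3 m ceiling
print(f"{w:.2f} m x {d:.2f} m")  # roughly 3.26 m x 2.36 m
```

At room-scale ceiling heights that’s only a few square meters of trackable floor, and the narrow vertical FOV makes the footprint lopsided, which is why a single ceiling-mounted Kinect couldn’t cover a larger grid.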
In the end, an open-source tool recommended to me for doing computer vision with USB webcams (TSPS.cc) became the next jumping-off point. I still didn’t have the field of view I needed, but at least I knew I was moving in the right direction.
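TSPS does a lot under the hood, but the core idea it’s built on, background subtraction and blob tracking on a plain webcam feed, is simple enough to sketch. This is a minimal toy version in Python/NumPy with synthetic frames, not TSPS itself or its API, just an illustration of the technique:

```python
import numpy as np

def track_blob(background, frame, threshold=30):
    """Naive person tracking via background subtraction: pixels that
    differ from the empty-room background by more than `threshold`
    are treated as foreground. Returns the centroid and bounding box
    of the foreground pixels, or None if nothing moved."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return {
        "centroid": (xs.mean(), ys.mean()),
        "bbox": (xs.min(), ys.min(), xs.max(), ys.max()),
    }

# Simulated 480x640 grayscale frames: an empty room, then the same
# room with a bright 40x20-pixel "person" near (110, 210).
background = np.full((480, 640), 50, dtype=np.uint8)
frame = background.copy()
frame[190:230, 90:130] = 200

result = track_blob(background, frame)
print(result["centroid"])  # (109.5, 209.5)
```

A real installation would need a rolling background model, noise filtering, and per-blob labeling (all of which TSPS handles), but the centroid-per-person output is essentially what gets mapped onto the grid to trigger the piece.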