I had the pleasure of being the lead developer on this project, which was developed by UConn's Digital Media and Design Department. The system combined the input from 7 overhead cameras, 8 Kinects, and 5 Kinect Ones to provide position, skeleton, and gesture information for the children in front of the wall, and used Unity to display various scenes for their entertainment.
I wrote most of the code dealing with data fusion from the rack of computers, as well as most of the code for the Unity integration. As time went on, the system was extended with custom shaders, game logic, and scenes that took place in the XZ plane.
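The fusion step above merged each sensor's view of the same person into a single estimate. The project itself was built in Unity, but the core idea can be sketched in a few lines of Python; the names and the confidence-weighted averaging scheme here are illustrative assumptions, not the project's actual code:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One sensor's estimate of a tracked joint, already transformed
    into a shared world coordinate frame (hypothetical structure)."""
    sensor_id: str
    position: tuple   # (x, y, z) in meters
    confidence: float  # 0..1, the sensor's tracking confidence

def fuse(observations):
    """Confidence-weighted average of per-sensor position estimates.

    Sensors that track a joint more confidently pull the fused
    position toward their own reading.
    """
    total = sum(o.confidence for o in observations)
    if total == 0:
        raise ValueError("no confident observations to fuse")
    x = sum(o.position[0] * o.confidence for o in observations) / total
    y = sum(o.position[1] * o.confidence for o in observations) / total
    z = sum(o.position[2] * o.confidence for o in observations) / total
    return (x, y, z)
```

A weighted average like this assumes all sensors report in one calibrated coordinate frame; in practice each Kinect's readings must first be transformed by its extrinsic calibration.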
I worked closely with the students who created the art assets for the scenes, and learned a lot about building scenes, project organization, project management, and the many asset-importing issues.
I eventually also worked on a scaled-down version that used a single Kinect One for all sensor input. This version was intended for sales purposes, such as showing off the features of a car at a dealership, or for entertainment in public places, such as games to occupy children at a doctor's office. For it, I developed a touchless interface that used pointing and gripping gestures to manipulate various games and activities.
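A common way to turn a pointing gesture into a cursor is to cast a ray from the shoulder through the hand and intersect it with the display plane. The sketch below shows that geometry in Python; the function name, coordinate convention (screen at z = 0, user at positive z), and joint choice are assumptions for illustration, not the interface's actual implementation:

```python
def point_on_screen(shoulder, hand, screen_z=0.0):
    """Intersect the shoulder->hand pointing ray with a vertical
    screen plane at z = screen_z.

    shoulder, hand: (x, y, z) joint positions in meters, with the
    user standing at z > screen_z facing the screen (assumed frame).
    Returns the (x, y) hit point, or None if the ray points away
    from the screen.
    """
    dx = hand[0] - shoulder[0]
    dy = hand[1] - shoulder[1]
    dz = hand[2] - shoulder[2]
    if dz >= 0:  # arm not extended toward the screen
        return None
    t = (screen_z - shoulder[2]) / dz  # ray parameter at the plane
    return (shoulder[0] + t * dx, shoulder[1] + t * dy)
```

Pairing this with a grip gesture gives a natural point-to-hover, grip-to-select interaction; smoothing the hit point over a few frames helps with jittery skeleton data.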