We are a group of four grad students taking an Embedded Systems class here at UPenn. We are extremely excited to be working with the Microsoft Kinect to develop a type of virtual reality for the user. Our team consists of Haofang Yuan, Josh Karges, Yixin Jin, and Tao Lei.
We wanted to incorporate some of our mechanical design knowledge into this project as well as the topics we've learned in the class. There are hundreds of videos online of people doing very interesting projects with the Kinect, and we used a few of them as inspiration. Chris Harrison worked with Microsoft to develop the OmniTouch project, http://www.chrisharrison.net/index.php/Research/OmniTouch. This is an incredible project that allows the user to interact with a projected interface by detecting the position of the user's own hands. Another project involves someone who hooked up an OWI robotic arm to respond to the motions of an iPhone, http://youtu.be/ttV-gXw3s3U. We thought we'd try to combine these two ideas.
Our initial proposal was to control a robotic arm by touching projected buttons on a nearby surface. We quickly found that the depth camera Chris Harrison used for his project was custom-built for him by PrimeSense, so we decided to alter our objectives.
Our objective for this project is to control a robotic manipulator with a Kinect. We plan to design and build our own custom manipulator with at least four degrees of freedom. To control it, the user will stand in front of the Kinect and move his or her arms in the air. As the Kinect captures the user's arm movements, a computer will process the skeletal data and transmit control signals wirelessly to the manipulator, which will then respond accordingly.
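To make that pipeline a little more concrete, here is a rough Python sketch of the kind of processing we have in mind: compute a joint angle (say, the elbow) from three skeleton points, map it to a servo command, and ship one small packet over a serial radio link. Everything specific here is a placeholder, not a final design choice: the coordinates, the one-byte joint-ID/angle packet format, the serial port name, and the baud rate are all our assumptions for illustration.

```python
import math
import serial  # pyserial, for the wireless serial link (e.g. an XBee)

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by 3-D points a-b-c,
    e.g. shoulder-elbow-wrist positions from the Kinect skeleton."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def angle_to_servo(angle_rad, lo=0, hi=180):
    """Map a joint angle to an integer servo command in [lo, hi] degrees."""
    deg = math.degrees(angle_rad)
    return int(max(lo, min(hi, deg)))

def send_joint(port, joint_id, angle_deg):
    """Assumed packet format: one joint-ID byte followed by one angle byte."""
    port.write(bytes([joint_id, angle_deg]))

# Placeholder skeleton coordinates (meters) standing in for Kinect joint data.
shoulder = (0.10, 0.40, 2.0)
elbow    = (0.15, 0.15, 2.0)
wrist    = (0.35, 0.10, 2.0)

angle = angle_to_servo(joint_angle(shoulder, elbow, wrist))
with serial.Serial("/dev/ttyUSB0", 9600) as radio:  # hypothetical port/baud
    send_joint(radio, 1, angle)
```

In the real system the skeleton points would stream in from the Kinect SDK each frame rather than being hard-coded, and the microcontroller on the arm would unpack each two-byte command and drive the corresponding servo.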