Project Description

While working in the field of computer vision, specifically human action recognition, I had the chance to work with multi-modality datasets. Our research showed substantial improvements and opened new possibilities when working with these kinds of datasets. We mostly used datasets that provide RGB and depth images of human actions. However, with richer information about an action event, we can understand videos better and extend computer vision algorithms to new applications such as action prediction.

I started working on a new multi-modality dataset that provides not only RGB and depth images but also highly accurate muscle contraction information (from EMG sensors) along with motion information (from motion capture technology). My main contribution to this project was designing the procedure and standards for this kind of dataset. I experimented with EMG sensors to find the most discriminative group of muscles in the human body, i.e., the group whose activity best expresses and differentiates human actions. I also defined standards for using the motion capture system, drawing largely on previous work with motion capture for medical use.
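
The exact selection procedure is detailed in the forthcoming publication; purely as an illustration of how candidate muscle sites might be ranked, the sketch below scores each EMG channel with a Fisher-style separability criterion (between-class variance over within-class variance). The RMS-amplitude features, channel count, and the fisher_score helper are hypothetical stand-ins on synthetic data, not the actual protocol.

```python
import numpy as np

def fisher_score(feature, labels):
    """Fisher-style separability of a 1-D feature across classes:
    between-class variance divided by mean within-class variance."""
    classes = np.unique(labels)
    overall_mean = feature.mean()
    between = sum((feature[labels == c].mean() - overall_mean) ** 2
                  for c in classes) / len(classes)
    within = sum(feature[labels == c].var() for c in classes) / len(classes)
    return between / (within + 1e-12)

# Hypothetical setup: RMS amplitude of each EMG channel per trial.
rng = np.random.default_rng(0)
n_trials, n_channels = 200, 8            # e.g., 8 candidate muscle sites
labels = rng.integers(0, 4, n_trials)    # 4 action classes
emg_rms = rng.normal(0.0, 1.0, (n_trials, n_channels))
emg_rms[:, 2] += labels                  # make channel 2 artificially informative

scores = [fisher_score(emg_rms[:, ch], labels) for ch in range(n_channels)]
ranking = np.argsort(scores)[::-1]
print("channels ranked by discriminability:", ranking)
```

Under this toy criterion, channels whose activity varies most across actions relative to their within-action noise rank highest, which is the intuition behind choosing a discriminative muscle group.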

This is an ongoing project in the SMILE lab, and the results are scheduled for public release in early 2018. The importance of this project for me was the experience I gained working with new technologies such as EMG and motion capture. I also had a great opportunity to apply existing technologies to a new application and to define standards for such procedures. Detailed information on this work will be published along with the dataset.