Robot on Wheels

My Projects

Child-Robot Tutoring

Intuitive Computing Lab, Johns Hopkins University

Advisor: Dr. Chien-Ming Huang

Designing a good feedback strategy in child-robot tutoring is essential for self-regulated learning. In human-human interactions, tutors employ implicit feedback techniques to encourage students while they work through a task, whereas, to the best of my knowledge, current robot tutoring systems mostly use explicit feedback strategies. I am interested in studying the impact of implicit and explicit feedback in one-on-one tutoring sessions where the robot acts as the tutor. The proposed system consists of the following:

  1. Perception system: Tracks the actions of the participant (a minimal tracking sketch follows this list).

  2. Intelligent robot tutor: Acts as the tutor and provides feedback based on the perception output.

  3. Duplo blocks: Serve as the playground on which the participant solves the designed spatial visualization task.
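
Below is a minimal sketch of how the perception component could track the Duplo blocks from a camera feed using OpenCV. The colour range, camera index, and the print-based feedback hook are illustrative placeholders rather than the actual system.

    import cv2
    import numpy as np

    # Hypothetical HSV range for red Duplo blocks; real values depend on camera and lighting.
    RED_LO, RED_HI = np.array([0, 120, 70]), np.array([10, 255, 255])

    def detect_blocks(frame):
        """Return centroids of candidate red blocks in a BGR frame."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, RED_LO, RED_HI)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < 500:   # ignore small specks of noise
                continue
            m = cv2.moments(c)
            centroids.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
        return centroids

    cap = cv2.VideoCapture(0)              # camera index is an assumption
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        blocks = detect_blocks(frame)
        # The robot tutor would consume these positions to decide when and how to give feedback.
        print(f"tracked {len(blocks)} block(s): {blocks}")
    cap.release()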

NASA Telerobotic Satellite Servicing Project

Sensing, Manipulation, and Real-Time Systems, Johns Hopkins University

PI: Dr. Peter Kazanzides

Advisor: Dr. Simon Leonard

Details of the project can be found here. My contribution focuses on reducing cutting errors on the slave robot's end. I developed a computer vision-based system to estimate the slope of interaction between the MLI and the cutting blade, and I plan to compare the visually estimated forces with the actual forces on the blade.
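
As a rough illustration, such a slope can be estimated from a camera frame with an edge detector followed by a Hough line fit, as sketched below. The region of interest, thresholds, and function name are assumptions for illustration, not the deployed pipeline.

    import cv2
    import numpy as np

    def estimate_interaction_slope(gray_roi):
        """Estimate the dominant line slope (in degrees) inside a grayscale region
        of interest around the blade/MLI contact area."""
        edges = cv2.Canny(gray_roi, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                minLineLength=40, maxLineGap=5)
        if lines is None:
            return None
        # Keep the longest detected segment as the MLI/blade interaction line.
        x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
        return np.degrees(np.arctan2(y2 - y1, x2 - x1))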

Effect of Robot Tutor Coaching Levels on Trust and Comfortability of Kids

Course project for Human-Robot Interaction

The unique social presence of robots can be leveraged in learning situations to increase kids' comfort while still providing instructional guidance. A motivating robot tutor can create a positive and supportive environment, while a responsive one can build trust. In this pilot study, we examined how the robot's coaching level (i.e., varying its motivating and responsive behavior) affects comfort, satisfaction, and trust in a one-on-one tutoring setting. Our results show that a motivating and responsive robot creates an amicable environment that encourages kids to seek help through questions, and that the robot's presence and responsiveness give the child a sense of support. Our pilot indicates that the robot tutor in the responsive-and-motivating condition is preferred over the other coaching levels.

Click here for the detailed report. Here is a short video showing robot actions and responses. 

Comparison of Various Neural Networks on Food Datasets

Course project for Deep Learning

The trend toward health consciousness is prompting more people to count their calorie intake. Apps such as MyFitnessPal, Lose It!, Samsung Health, and Cron-o-meter provide the calorie count and nutritional value of various food items, and most existing solutions rely on neural network-based image classification models. In this project, we studied the efficacy of two state-of-the-art architectures, ResNet-152 and Inception-v3, for food image classification and proposed a naive approach of ensembling the outputs of the two models for better accuracy. We culminated our work by evaluating the models and their ensemble on a twin dataset. I worked on the ResNet model and the ensemble of ResNet and Inception-v3.
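
As a rough sketch of the naive ensembling idea, the class probabilities of the two backbones can simply be averaged, as shown below in PyTorch. The ImageNet weights and the equal weighting are illustrative stand-ins for the fine-tuned food classifiers used in the project.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # ImageNet-pretrained backbones stand in for the fine-tuned food classifiers.
    resnet = models.resnet152(weights="IMAGENET1K_V1").eval()
    inception = models.inception_v3(weights="IMAGENET1K_V1").eval()

    @torch.no_grad()
    def ensemble_predict(images):
        """Average class probabilities from both backbones and return predicted labels.
        `images` is a normalized float batch of shape (N, 3, H, W)."""
        # Give each backbone the input resolution it was trained at.
        x_res = F.interpolate(images, size=(224, 224), mode="bilinear", align_corners=False)
        x_inc = F.interpolate(images, size=(299, 299), mode="bilinear", align_corners=False)
        p = F.softmax(resnet(x_res), dim=1) + F.softmax(inception(x_inc), dim=1)
        return (p / 2).argmax(dim=1)       # naive equal-weight ensemble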

Click here for the detailed report. 

ASL Hand Sign Classification and Prediction

Course project for Computer Vision

Sign language is a hand gesture-based means of communication for people with auditory impairments. While a variety of applications already help translate English to sign language, the inverse problem of translating sign language to English is an active area of research. In this project, we developed a novel method for translating a sequence of hand gestures into English words by ensembling a convolutional neural network (CNN) image classifier with an n-gram letter predictor. We employed data augmentation techniques for robust model training. Our model achieved a prediction accuracy of 98.15% on the MNIST sign language dataset. I developed the network in this project.
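
A simplified sketch of how the CNN classifier and the n-gram letter predictor might be combined at decoding time is shown below. The greedy blending, the weight alpha, and the bigram table are illustrative assumptions rather than the exact method in the report.

    import numpy as np

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def decode_sequence(cnn_probs, bigram, alpha=0.7):
        """Greedily decode a gesture sequence by blending per-gesture CNN
        probabilities with letter bigram probabilities. `cnn_probs` has shape
        (T, 26); `bigram[prev][cur]` is an estimated transition probability."""
        word, prev = [], None
        for p in cnn_probs:
            if prev is None:
                scores = p
            else:
                lm = np.array([bigram.get(prev, {}).get(c, 1e-6) for c in ALPHABET])
                scores = alpha * p + (1 - alpha) * lm
            prev = ALPHABET[int(np.argmax(scores))]
            word.append(prev)
        return "".join(word)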

Click here for the detailed report.

Brain Computer Interfacing

Neuroinformatic Lab, National University of Sciences and Technology, and Signal, Image and Video Processing Lab, Lahore University of Management Sciences

Advisors: Dr. Awais M. Kamboh and Dr. Nadeem A. Khan

Brain-computer interfaces (BCIs) serve as an integration tool between acquired brain signals and external devices. Classifying the acquired brain signals with the least misclassification error is an arduous task, and existing techniques for classifying multi-class motor imagery electroencephalogram (EEG) signals have low accuracy and are computationally inefficient. I developed a classification algorithm that extracts features from two frequency ranges, the mu and beta rhythms, using common spatial patterns (CSP) and classifies them with a support vector machine (SVM). The technique uses only four frequency bands with no feature reduction, and consequently has a lower computational cost. On BCI Competition III dataset IIIa, the algorithm achieved the highest classification accuracy in comparison to existing algorithms, with a mean offline accuracy of 85.5%. Click here to access the published paper. I presented this paper at the “International Conference of the Engineering in Medicine and Biology Society” held in Korea in July 2017.
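
A condensed sketch of such a band-wise CSP feature extractor paired with an SVM, using MNE and scikit-learn, is shown below. The sub-band edges, number of CSP components, and SVM kernel are illustrative assumptions; the paper documents the exact settings.

    import numpy as np
    from mne.decoding import CSP
    from mne.filter import filter_data
    from sklearn.svm import SVC

    # Four illustrative sub-bands covering the mu and beta rhythms.
    BANDS = [(8, 12), (12, 16), (16, 20), (20, 24)]

    def band_csp_features(epochs, labels, sfreq, csps=None):
        """Extract CSP features per frequency band and concatenate them.
        `epochs` has shape (n_trials, n_channels, n_samples)."""
        feats, fitted = [], []
        for i, (lo, hi) in enumerate(BANDS):
            x = filter_data(epochs.astype(np.float64), sfreq, lo, hi, verbose=False)
            csp = CSP(n_components=4) if csps is None else csps[i]
            f = csp.fit_transform(x, labels) if csps is None else csp.transform(x)
            feats.append(f)
            fitted.append(csp)
        return np.hstack(feats), fitted

    # Hypothetical usage with training epochs X_train and labels y_train:
    # train_feats, csps = band_csp_features(X_train, y_train, sfreq=250)
    # clf = SVC(kernel="rbf").fit(train_feats, y_train)
    # test_feats, _ = band_csp_features(X_test, None, sfreq=250, csps=csps)
    # predictions = clf.predict(test_feats)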

My team and I were able to demonstrate a working mobile robot that uses EEG signals as control signals. Click here to access the detailed report of our final-year project, and here is a short video showing real-time control using EEG signals.

Afterwards, I focused on a feasibility analysis of existing multi-class motor imagery systems for real-time applications such as mind-controlled wheelchairs and prosthetics. I presented this analysis in terms of the accuracy and computational complexity of the feature extraction methods and classification algorithms featured in existing solutions.

SmartSIM, A Virtual Reality Training Simulator

Smart Machines & Robotics Technology (SMART) Lab

Advisor: Dr. Osman Hasan

Virtual reality (VR) training simulators have started playing a vital role in enhancing surgical skills, such as hand-eye coordination in laparoscopy, and in practicing surgical scenarios that cannot be easily created using physical models. SmartSIM is a new VR simulator for basic training in laparoscopy, developed using a generic open-source physics engine called the Simulation Open Framework Architecture (SOFA). Some of the distinguishing features of SmartSIM include:

  1. An easy-to-fabricate, custom-built hardware interface

  2. Use of a generic physics engine to facilitate wider accessibility of our work and flexibility in choosing graphical modelling algorithms and their implementations

  3. An intelligent evaluation mechanism that facilitates unsupervised and independent learning

In the paper, I presented a comparison of various open-source surgical simulation tools, e.g., SOFA and SPRING. I also prepared figures to clearly present the findings of the user studies and to justify the validity of the system. Click here to access our publication in “The International Journal of Medical Robotics and Computer Assisted Surgery”. See the simulator in action here.