Current & Recent Research
I don't update this page anymore. For a detailed list of the recent and ongoing research projects, please visit the MOSAIC Lab website (mosaic.cs.umass.edu).
My current research focuses on solving problems in medicine and the health sciences by developing novel mobile health sensing and wearable technologies. I challenge the physical mechanisms behind state-of-the-art health sensing technologies by bringing in novel or underexplored applied physics concepts, specialized signal processing and machine learning algorithms, and energy-efficient implementations on low-power mobile computing platforms.
Nutrilyzer: A Mobile System for Characterizing Liquid Food with Photoacoustic Effect
The photoacoustic effect is a fundamental physics concept: the generation of sound when a material absorbs intensity-modulated light or, more generally, electromagnetic waves. We built on this concept to create a mobile sensing system that can characterize the quality and nutritional content of liquid food. The long-term vision of this work is to democratize food characterization with a low-cost, easy-to-use, mobile, and ubiquitous system, enabling consumers to test food before purchase and putting indirect pressure on the food industry and government regulators to ensure quality.
Proving the fundamental theory of the photoacoustic effect with step-by-step experimentation
Design and implementation of a low-cost mobile photoacoustic sensing system, Nutrilyzer
Implementation of the signal processing and machine learning algorithms for liquid food characterization
Evaluation of Nutrilyzer for characterizing milk protein concentration, milk adulterants, and alcohol concentration
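The detection idea behind the steps above can be sketched in a few lines: an intensity-modulated light source drives an acoustic response at the modulation frequency in proportion to how strongly the sample absorbs, and lock-in-style demodulation recovers that amplitude. All parameters and the signal model below are illustrative, not the actual Nutrilyzer hardware values:

```python
import numpy as np

fs = 48000        # microphone sampling rate (Hz), hypothetical
f_mod = 1000.0    # LED intensity-modulation frequency (Hz), hypothetical
t = np.arange(0, 0.5, 1 / fs)

def photoacoustic_signal(absorptivity, noise=0.05):
    # A strongly absorbing sample converts more of the modulated light
    # into sound, so its acoustic response at f_mod is larger
    # (absorptivity here is a made-up coefficient, not a measured value).
    rng = np.random.default_rng(0)
    return (absorptivity * np.sin(2 * np.pi * f_mod * t)
            + noise * rng.standard_normal(t.size))

def amplitude_at(sig, freq):
    # Lock-in-style detection: project the signal onto in-phase and
    # quadrature references at the modulation frequency.
    i = 2 * np.mean(sig * np.sin(2 * np.pi * freq * t))
    q = 2 * np.mean(sig * np.cos(2 * np.pi * freq * t))
    return np.hypot(i, q)

weak = amplitude_at(photoacoustic_signal(0.1), f_mod)    # dilute sample
strong = amplitude_at(photoacoustic_signal(0.8), f_mod)  # absorbing sample
assert strong > weak
```

The recovered amplitudes track the assumed absorptivities, which is the quantity a classifier would then map to, e.g., protein concentration.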
Publication: SenSys 2016
Predicting “About-to-Eat” Moments for Just-in-Time Eating Intervention
The primary research question in this work is whether it is possible to tell, ahead of time, that one will have an eating event in the next N minutes using multimodal sensor data from a mobile system. We have demonstrated that with passively and continuously collected data from an array of sensors (e.g., GPS, accelerometer, gyroscope, step count, galvanic skin response, heart rate, skin temperature, mastication and swallowing sounds) we could
Identify "About-to-Eat" moments with generalized and personalized models
Trigger Just-in-Time eating intervention
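As a toy sketch of this pipeline, the listing below extracts per-window summary features from synthetic heart-rate and galvanic-skin-response streams and fits a nearest-centroid model standing in for the generalized classifier. The sensor model, feature choice, and class separation are invented for illustration and are not the features or models from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def window_features(hr, gsr):
    # Per-window summary statistics over passively sensed streams
    # (heart rate, galvanic skin response); the feature set is illustrative.
    return np.array([hr.mean(), hr.std(), gsr.mean(), gsr.std()])

def make_window(about_to_eat):
    # Synthetic labeled window: "about-to-eat" windows get an elevated
    # GSR baseline (a made-up effect, purely for the sketch).
    hr = 70 + 5 * rng.standard_normal(60)
    gsr = (2.0 if about_to_eat else 1.0) + 0.1 * rng.standard_normal(60)
    return window_features(hr, gsr)

X = np.stack([make_window(label) for label in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)

# Nearest-centroid "generalized model": average feature vector per class.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(features):
    return int(np.argmin(np.linalg.norm(centroids - features, axis=1)))

assert predict(make_window(1)) == 1   # flags an about-to-eat window
assert predict(make_window(0)) == 0
```

A positive prediction is what would trigger the Just-in-Time intervention; a personalized model would simply fit the same pipeline on one user's data.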
This notion of "About-to-<Event>" prediction could be useful more generally for events other than eating, such as smoking, drug use, alcohol consumption, and stress episodes. For example, by predicting the next smoking event ahead of time and triggering a relevant Just-in-Time intervention, we could design an effective smoking cessation scheme.
Publication: DH 2016 *Best Paper Award*
Radar Vibrometry for Contactless Sensing of Vital Signs and Sleep
Contactless sensing technology requires no skin contact and can easily blend into the background. A low-cost radar can be used to
Identify human presence, body movements, and vital signs
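A minimal illustration of the vibrometry idea: treat the radar output as a chest-displacement signal (slow breathing motion plus a much smaller heartbeat component) and estimate both rates from spectral peaks in their respective bands. All signal parameters below are invented for the sketch, not measured radar values:

```python
import numpy as np

fs = 100.0  # radar frame rate (Hz); illustrative
t = np.arange(0, 30, 1 / fs)

# Simulated chest displacement: breathing dominates, heartbeat is a much
# smaller superimposed vibration (amplitudes are made up).
breath_hz, heart_hz = 0.25, 1.2
disp = (5.0 * np.sin(2 * np.pi * breath_hz * t)
        + 0.3 * np.sin(2 * np.pi * heart_hz * t)
        + 0.2 * np.random.default_rng(2).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(disp * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Breathing rate: dominant peak in the 0.1-0.5 Hz band.
band = (freqs > 0.1) & (freqs < 0.5)
est_breath = freqs[band][np.argmax(spectrum[band])]

# Heart rate: dominant peak in the 0.8-2.0 Hz band.
band_h = (freqs > 0.8) & (freqs < 2.0)
est_heart = freqs[band_h][np.argmax(spectrum[band_h])]

assert abs(est_breath - breath_hz) < 0.05
assert abs(est_heart - heart_hz) < 0.05
```

Searching separate frequency bands is what lets the tiny heartbeat component be recovered despite the much larger breathing motion.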
Publication: Ubicomp 2015 *Honorable Mention Award*
BodyBeat: A Mobile System Sensing Non-Speech Body Sounds
Non-speech body sounds contain invaluable information about our dietary habits, various pulmonary diseases, and more. Body sounds retain very little energy by the time they traverse our bones, muscles, and skin. As a result, they are very difficult to capture with a high signal-to-noise ratio using a regular condenser microphone. In this project we
Develop a novel piezoelectric-sensor-based microphone to capture non-speech body sounds
Develop a non-speech body sound classification algorithm
Implement the signal processing and machine learning algorithms on an ARM microcontroller and an Android smartphone
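The classification step above relies on compact frame-level acoustic features cheap enough for a microcontroller. The sketch below computes one such feature vector (coarse band energies plus spectral centroid) and shows it separating a low-frequency chewing-like burst from broadband noise; the feature set and test signals are illustrative, not the BodyBeat feature pipeline:

```python
import numpy as np

fs = 8000  # piezo microphone sampled at a low rate; illustrative

def frame_features(frame, fs):
    # Energy in coarse frequency bands plus spectral centroid: the kind
    # of compact features an ARM microcontroller can afford per frame.
    mag = np.abs(np.fft.rfft(frame * np.hanning(frame.size)))
    freqs = np.fft.rfftfreq(frame.size, 1 / fs)
    edges = [0, 250, 500, 1000, 2000, 4000]
    band_energy = [np.sum(mag[(freqs >= lo) & (freqs < hi)] ** 2)
                   for lo, hi in zip(edges[:-1], edges[1:])]
    centroid = np.sum(freqs * mag) / np.sum(mag)
    return np.array(band_energy + [centroid])

# A low-frequency "chewing-like" burst vs. a broadband noise burst.
t = np.arange(0, 0.064, 1 / fs)  # 512-sample frame
chew = np.sin(2 * np.pi * 120 * t)
noise = np.random.default_rng(3).standard_normal(t.size)

# The chewing-like frame concentrates energy in the lowest band and has
# a much lower spectral centroid than broadband noise.
assert np.argmax(frame_features(chew, fs)[:5]) == 0
assert frame_features(chew, fs)[-1] < frame_features(noise, fs)[-1]
```

A classifier over such frame vectors is what distinguishes sound classes like chewing, swallowing, or coughing.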
Contextual Recall: Helping User to Recall using Context Sensing
Human input is one of the most fundamental requirements in most health interventions and in training almost any machine learning algorithm. The state-of-the-art mechanism for soliciting human input is ecological momentary assessment (EMA), also known as experience sampling.
In this project, our concentration is on
Developing a contextual-recall-based mechanism for taking user input as an alternative to EMA.
Determining which contexts to use, the recall time delay, and how to present the context.
Publication: Pervasive Health 2014
Speech Emotion Recognition
Speech is one of the most fundamental mechanisms of communication. Researchers have been trying to infer human emotion from different acoustic features (prosody and other suprasegmental features), but speech emotion recognition faces a few major challenges, including inter-speaker variability and the low predictability of the valence dimension.
In this research, our focus was on
Developing model adaptation and feature normalization techniques to compensate for inter-speaker variability.
Feature engineering and model building for valence.
Exploration of Dynamic Bayesian Network based models.
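One common form of feature normalization against inter-speaker variability is per-speaker z-normalization. The sketch below (a standard technique used for illustration, not necessarily the specific method from this work) removes each speaker's own feature mean and variance so that a model compares emotions rather than voices:

```python
import numpy as np

def speaker_znorm(features, speaker_ids):
    # Normalize each feature dimension by that speaker's own mean and
    # standard deviation, removing per-speaker baselines (e.g., pitch).
    features = np.asarray(features, dtype=float)
    speaker_ids = np.asarray(speaker_ids)
    out = np.empty_like(features)
    for spk in np.unique(speaker_ids):
        idx = speaker_ids == spk
        mu = features[idx].mean(axis=0)
        sd = features[idx].std(axis=0) + 1e-9  # guard against zero variance
        out[idx] = (features[idx] - mu) / sd
    return out

# Two speakers with very different baseline pitch; after normalization
# each speaker's features are centered, while within-speaker variation
# (the part that carries emotion) is preserved.
ids = np.array([0, 0, 0, 1, 1, 1])
pitch = np.array([[100.0], [110.0], [120.0], [200.0], [220.0], [240.0]])
norm = speaker_znorm(pitch, ids)
assert abs(norm[ids == 0].mean()) < 1e-9
assert abs(norm[ids == 1].mean()) < 1e-9
assert norm[0, 0] < norm[2, 0]  # within-speaker ordering preserved
```

After normalization the two speakers' feature distributions are directly comparable, which is the point of compensating for inter-speaker variability before model training.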