Research
Research Interests
Human-centric design/computing, Human-computer interaction, Human-behavior understanding, Brain-computer interfaces, Computer vision, Machine learning, Multimedia processing.
Research Highlights
41 papers representing interdisciplinary research contributions to the areas of:
Human-Computer Interaction (CHI '14, ACM-MM '11, ECCV '10)
Social Computing/Human-Human Interaction (IEEE TPAMI '15, ICCV '15, IEEE TAC '12)
Neural and Affective Computing (IEEE TAC '15, ICMI '15, ACII '13)
Computer Vision (IEEE TPAMI '15, IEEE TIP '14, IJCV '12, ICCV '13)
Brief descriptions of my research foci are as follows:
Eye tracking and applications:
Eye movements are a reflection of how humans perceive and understand the visual world. Psychophysical studies have demonstrated how eye movements are influenced by bottom-up factors (stimulus-related cues such as intensity, color and orientation) and top-down factors (cues critical to our understanding of the world, such as faces and emotional objects). With technological advancements, eye movement patterns can serve as large-scale, useful meta-data for analyzing images, akin to textual labels and keywords. My research focuses on characterizing various eye-movement phenomena and exploiting them in interactive AI systems.
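As a concrete illustration, recorded fixations are often converted into a saliency-style density map that can be compared against model predictions or attached to images as meta-data. Below is a minimal Python sketch of this step; the data format and the fixation_density_map helper are hypothetical illustrations, not the exact pipeline used in the publications listed next.

```python
# Minimal sketch: turning raw eye fixations into a saliency-style density map.
# The (x, y, duration) tuple format is an assumption for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, height, width, sigma=30.0):
    """Accumulate (x, y, duration) fixations into a duration-weighted map,
    then blur with a Gaussian kernel and rescale to [0, 1]."""
    fmap = np.zeros((height, width), dtype=np.float64)
    for x, y, duration in fixations:
        if 0 <= int(y) < height and 0 <= int(x) < width:
            fmap[int(y), int(x)] += duration  # weight each fixation by dwell time
    fmap = gaussian_filter(fmap, sigma=sigma)  # sigma roughly matches foveal extent
    return fmap / fmap.max() if fmap.max() > 0 else fmap

# Toy example: three (x, y, duration-in-seconds) fixations on a 640x480 image
fixations = [(320, 240, 0.35), (350, 250, 0.20), (100, 400, 0.15)]
saliency = fixation_density_map(fixations, height=480, width=640)
```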
Related Key Publications:
1. Syed Omer Gilani, Subramanian Ramanathan, Yan Yan, David Melcher, Nicu Sebe, Stefan Winkler, “An eye-tracking dataset for Animal-Centric Pascal Object Classes”, Int’l Conf. on Multimedia & Expo (ICME), 2015 (Oral, link).
2. Subramanian Ramanathan, Divya Shankar, Nicu Sebe, David Melcher, “Emotion modulates eye movement patterns and subsequent memory for the gist and details of movie scenes”, Journal of Vision, 14(3), 2014 (pdf).
3. Subramanian Ramanathan, Victoria Yanulevskaya, Nicu Sebe, "Can computers learn from humans to see better? Inferring scene semantics from viewers’ eye movements", ACM Int’l Conference on Multimedia (ACM-MM), 2011. (link)
4. Subramanian Ramanathan, Harish Katti, Nicu Sebe, Mohan S. Kankanhalli, Tat-Seng Chua, "An Eye Fixation Database for Saliency Detection in Images", European Conference on Computer Vision (ECCV), 2010. (pdf). Link to the NUSEF database is here.
5. Subramanian Ramanathan, Harish Katti, Raymond Huang, Tat-Seng Chua, Mohan S. Kankanhalli, "Automated localization of affective objects and actions in images via caption text-cum-eye gaze analysis", ACM Int’l Conference on Multimedia (ACM-MM), 2009. (link)
Neural and Affective Signal Processing:
Physiological signals in the form of brain activity (EEG, MEG, fMRI) and peripheral responses (heart rate and its variability, eye-blink rate, skin temperature and blood pressure) provide valuable cues for decoding the physical, mental and emotional state of individuals. My research objective is to employ these signals for non-invasive feedback in interactive AI systems. As a first step, we have attempted affective tagging of video content through single-trial decoding of MEG and associated peripheral physiological responses.
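To make the single-trial decoding idea concrete, the sketch below classifies synthetic one-second trials from spectral band-power features with a linear SVM. Everything here (sampling rate, band choices, the band_power helper, the synthetic data and labels) is an illustrative assumption; the actual DECAF pipeline uses richer MEG and peripheral features.

```python
# Minimal sketch of single-trial affective classification from a physiological
# signal, on synthetic data; DECAF's real features and decoding differ.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 256  # sampling rate in Hz (assumed)

def band_power(trial, lo, hi):
    """Average spectral power of one trial in the [lo, hi] Hz band."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# 40 synthetic one-second trials with binary (e.g., high/low arousal) labels
trials = rng.standard_normal((40, fs))
labels = rng.integers(0, 2, size=40)

# One feature per canonical band: theta, alpha, beta, gamma
bands = [(4, 8), (8, 13), (13, 30), (30, 45)]
features = np.array([[band_power(t, lo, hi) for lo, hi in bands] for t in trials])

print(cross_val_score(SVC(kernel="linear"), features, labels, cv=5).mean())
```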
Related Key Publications:
1. Mojtaba Khomami Abadi, Subramanian Ramanathan, Seyed Mostafa Kia, Paolo Avesani, Ioannis Patras, Nicu Sebe, “DECAF: MEG-based Multimodal Database for Decoding Affective Physiological Responses”, IEEE Trans. on Affective Computing, SI on Advances in Affective Analysis in Multimedia, 2015 (to appear, link).
2. Mojtaba Khomami Abadi, Seyed Mostafa Kia, Subramanian Ramanathan, Paolo Avesani, Nicu Sebe, “User-centric Affective Video Tagging from MEG and Peripheral Physiological Responses”, Affective Computing and Intelligent Interaction (ACII), 2013. (pdf)
Social Computing (Behavior analysis from videos):
My research in social computing involves deducing individuals' personality traits (e.g., extrovert/introvert, emotionally stable/neurotic) from their behavior in interactive settings. We have successfully employed speech activity and social attention features to characterize personality traits, especially Extraversion. We have also analyzed behavior in different types of interactive settings: (1) round-table meetings, where participants can be closely monitored using webcams, enabling reliable computation of their head pose and eye gaze directions (denoted by the blue triangle and green dots in the top image), and (2) cocktail parties (bottom image), where subjects are free to move around and can only be studied using distant, large field-of-view surveillance cameras, making behavior analysis considerably more challenging.
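The sketch below illustrates the kind of speaking-activity and social-attention statistics involved, computed from hypothetical per-frame annotations. The data layout and the interaction_features helper are assumptions for illustration; the features and estimators in the papers below are more elaborate.

```python
# Minimal sketch: simple nonverbal features of the kind used for
# Extraversion prediction, from hypothetical per-frame annotations.
import numpy as np

def interaction_features(speaking, gaze_targets, person):
    """speaking: (n_people, n_frames) binary speaking-status matrix.
    gaze_targets: (n_people, n_frames) index of whom each person looks at
    per frame (-1 = no one). Returns speaking/attention statistics for `person`."""
    n_people, n_frames = speaking.shape
    others = np.arange(n_people) != person
    speak_frac = speaking[person].mean()                     # fraction of time speaking
    attn_received = (gaze_targets[others] == person).mean()  # attention from others
    talking = speaking[person] == 1
    attn_while_speaking = (                                  # attention while holding the floor
        (gaze_targets[others][:, talking] == person).mean() if talking.any() else 0.0
    )
    return speak_frac, attn_received, attn_while_speaking

# Toy 4-person meeting with 1000 annotated frames
rng = np.random.default_rng(1)
speaking = (rng.random((4, 1000)) < 0.2).astype(int)
gaze = rng.integers(-1, 4, size=(4, 1000))
print(interaction_features(speaking, gaze, person=0))
```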
Related Key Publications:
1. Bruno Lepri, Subramanian Ramanathan, Kyriaki Kalimeri, Jacopo Staiano, Fabio Pianesi and Nicu Sebe, “Connecting meeting behavior with Extraversion – A systematic study”, IEEE Transactions on Affective Computing, vol. 3 (4), 2012. (link)
2. Subramanian Ramanathan, Yan Yan, Jacopo Staiano, Oswald Lanz, Nicu Sebe, “On the relationship between head pose, social attention and personality prediction in unstructured and dynamic group interactions”, Int’l Conference on Multimodal Interaction (ICMI), 2013. (pdf)
3. Bruno Lepri, Subramanian Ramanathan, Kyriaki Kalimeri, Jacopo Staiano, Fabio Pianesi, Nicu Sebe, "Employing social gaze and speaking activity for automatic determination of the Extraversion trait", Int’l Conference on Multimodal Interaction (ICMI), 2010. (pdf)
4. Subramanian Ramanathan, Jacopo Staiano, Kyriaki Kalimeri, Nicu Sebe, Fabio Pianesi, "Putting the pieces together: multimodal analysis of social attention in meetings", ACM Int’l Conference on Multimedia (ACM-MM), 2010. (link)
Computer Vision:
An important component of behavior analysis systems is the extraction of multimodal cues, including visual cues relating to head pose and activity. When multiple far-field cameras are used to study subjects in a smart-room setup, head pose estimation is challenging due to the low resolution of the captured faces and the changes in facial appearance and scale that arise as subjects move (top image). Under such conditions, algorithms must learn how pose-related facial appearance varies with position. For example, the bottom image shows spatial clusters learned by our algorithm, within which facial appearance for a given head pose is roughly similar. We have explored multi-task learning and transfer learning methods to tackle head pose estimation under subject motion, and have also compiled the extensive DPOSE (dynamic head pose) database to facilitate benchmarking of competing head pose estimation methods.
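The following sketch conveys the spatial-clustering intuition in a deliberately simplified form: head positions are grouped into regions, and a pose classifier is trained per region. This is only an approximation; the graph-guided multi-task approach in the ICCV '13 paper couples the per-region tasks rather than training them independently, and the features, classifier and cluster count here are placeholders.

```python
# Simplified sketch of region-specific head pose classification.
# Synthetic positions/descriptors stand in for real multi-camera data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 600
positions = rng.uniform(0, 10, size=(n, 2))   # (x, y) head location in the room
appearance = rng.standard_normal((n, 64))     # face-crop descriptor (synthetic)
pose = rng.integers(0, 8, size=n)             # 8 quantized pan angles

# 1. Partition the room into spatial clusters (appearance/scale are
#    roughly homogeneous within a region)
regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit(positions)
region_id = regions.labels_

# 2. Train one pose classifier per region (independent "tasks" here)
classifiers = {}
for r in range(4):
    mask = region_id == r
    classifiers[r] = LogisticRegression(max_iter=1000).fit(appearance[mask], pose[mask])

# 3. At test time, route a new observation to its region's classifier
test_pos, test_feat = positions[:1], appearance[:1]
r = regions.predict(test_pos)[0]
print(classifiers[r].predict(test_feat))
```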
Related Key Publications:
1. Yan Yan, Elisa Ricci, Subramanian Ramanathan, Gaowen Liu, Nicu Sebe, “Multi-task Linear Discriminant Analysis for View Invariant Action Recognition”, IEEE Trans. on Image Processing, 23(12), 2014. (link)
2. Yan Yan, Elisa Ricci, Subramanian Ramanathan, Oswald Lanz, Nicu Sebe, “Multi-view Head Pose Classification under Positional Variation: A Flexible Graph-guided Multi-task Learning Approach”, Int’l Conference on Computer Vision (ICCV), 2013. (pdf)
3. Anoop K. Rajagopal, Subramanian Ramanathan, Elisa Ricci, Radu L. Vieriu, Oswald Lanz, Kalpathi Ramakrishnan, Nicu Sebe, “Exploring Transfer Learning Approaches for Head Pose Classification from Multi-view Surveillance Videos”, International Journal of Computer Vision, SI on Domain Adaptation for Vision Applications, vol. 109 (1), 2014.