The Intelligent Systems Laboratory (ISL) at RPI performs research in computer vision, probabilistic graphical models (PGMs), deep learning, and their applications. Specifically, in computer vision, our current research focuses on automatic analysis and recognition of nonverbal human behaviors, including facial and body behaviors. Our current research in facial behavior analysis focuses on face detection and tracking, facial landmark detection and tracking, 3D face mesh reconstruction, face pose estimation, facial expression recognition, and eye gaze tracking. For body behavior analysis, our research focuses on body detection and tracking, 3D body and hand mesh reconstruction, 2D/3D body pose estimation, body gesture recognition, and human action/activity recognition. Current research in probabilistic graphical models focuses on developing accurate and efficient methods for learning both local and global PGMs, robust learning under insufficient data, learning models with constraints, and scaling up PGM learning. Research in PGMs also includes causal model learning, efficient exact and approximate inference methods for large models, and active inference. In deep learning, our current research focuses on knowledge-augmented deep learning, probabilistic (Bayesian) deep learning, causal machine learning, and self-supervised deep learning. Finally, we have applied computer vision, PGM, and machine learning technologies to various applications, including natural human-computer (robot) interaction, human state monitoring and prediction, companion/personal robots, driver behavior estimation and prediction, security and surveillance, and information fusion for situation awareness and decision making.
From a systems perspective, we concentrate on two aspects of an intelligent system: sensing (perception) and understanding. For sensing, we develop computer vision algorithms to compute various visual cues (e.g., motion, shape, pose, position, and identity) that typically characterize the state of the objects. Given these visual observations, we then develop graphical models to capture the relationships between the sensory observations and the high-level situation that produces them, as well as to model the related contextual information. Finally, high-level visual understanding and interpretation are performed through probabilistic inference using the graphical model and the available sensory observations.
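The sensing-then-understanding pipeline can be illustrated with a minimal sketch: low-level visual cues (here, two hypothetical binary detections, "gaze on road" and "head forward") are combined through Bayes' rule in a toy naive Bayes model to infer a high-level state ("attentive" vs. "distracted" driver). All variable names and probabilities below are illustrative assumptions for exposition, not values from our systems.

```python
# Toy sensing -> understanding example: infer a high-level driver state
# from low-level visual cues via Bayes' rule (naive Bayes assumption).
# All probabilities are illustrative assumptions, not measured data.

# Prior over the hidden high-level state.
prior = {"attentive": 0.8, "distracted": 0.2}

# Likelihoods P(cue = True | state) for each binary visual cue.
likelihood = {
    "gaze_on_road": {"attentive": 0.90, "distracted": 0.30},
    "head_forward": {"attentive": 0.85, "distracted": 0.40},
}

def posterior(observations):
    """Exact posterior over the state given binary cue observations."""
    scores = {}
    for state, p in prior.items():
        score = p
        for cue, observed in observations.items():
            p_true = likelihood[cue][state]
            score *= p_true if observed else (1.0 - p_true)
        scores[state] = score
    z = sum(scores.values())                 # normalizing constant
    return {s: v / z for s, v in scores.items()}

# Both cues absent: evidence favors the "distracted" state.
post = posterior({"gaze_on_road": False, "head_forward": False})
print(post)
```

In a real system the two binary cues would be replaced by the outputs of vision modules (landmark trackers, pose estimators), and the naive Bayes model by a richer graphical model encoding dependencies among cues and context.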