Hi, this is Hanjing Wang

I am currently a Ph.D. student at RPI, advised by Prof. Qiang Ji. In May 2019, I graduated from Georgetown University with a master's degree in Analytics. I also hold a bachelor's degree in Mathematics and Applied Mathematics from Xiamen University, China. I have been working in the Intelligent Systems Lab with Prof. Qiang Ji since 2019. My research areas include uncertainty quantification, probabilistic and Bayesian deep learning, model explainability, and computer vision.

Educational Background

» Ph.D. in Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute (Ongoing)

» M.S. in Applied Mathematics, Rensselaer Polytechnic Institute (01/2020 - 12/2023)

» M.S. in Analytics, Concentration in Data Science, Georgetown University (08/2017 - 05/2019)

» B.S. in Mathematics and Applied Mathematics, Xiamen University (08/2013 - 05/2017)

Work Experience

» AI Horizons, IBM T. J. Watson Research Center, Yorktown Heights, NY (05/2023 - 08/2023)

  • As a summer extern supervised by Shiqiang Wang, worked on explainable and trustworthy AI systems for language models.
  • Contributed to the strategic development of watsonx.ai inference monitoring, addressing overconfidence and hallucination issues in LLMs through OOD detection, uncertainty quantification, and in-context attribution.
  • Introduced semantic uncertainty attribution to identify key factors driving uncertainty, enabling semantically-grounded uncertainty reasoning for enhanced AI model interpretability.

» AI Horizons, IBM T. J. Watson Research Center, Yorktown Heights, NY (05/2022 - 08/2022)

  • As a summer extern supervised by Shiqiang Wang and Dhiraj Joshi, worked on uncertainty quantification, explanation, and mitigation for reliable and trusted AI systems.
  • Proposed a gradient-based uncertainty attribution method for explainable uncertainty quantification in Trusted AI, which identifies the most problematic regions of the input that contribute most to the prediction uncertainty.
  • Leveraged the knowledge from uncertainty attribution to develop a special attention mechanism that strengthens the more informative input regions while reducing uncertainty, leading to improved prediction accuracy and robustness.

» AI Horizons, IBM T. J. Watson Research Center, Yorktown Heights, NY (05/2021 - 08/2021)

  • As a summer extern supervised by Debarun Bhattacharjya, worked on graph event models.
  • Jointly modeled event data and time series, bridging the connection between them with neural models.
  • Generated features from events through different encoders, enabling further improvement of time-series prediction with knowledge of the events (a sketch follows below).
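
The bullet above describes fusing event features with a time-series model. Below is a minimal, hypothetical PyTorch sketch of that fusion idea; the module, GRU encoders, and all dimensions are my own illustrative assumptions, not the actual model built at IBM.

```python
import torch
import torch.nn as nn

class EventTimeSeriesFusion(nn.Module):
    """Sketch: encode discrete events, encode the time series, fuse both
    representations, and predict the next time-series value."""

    def __init__(self, num_event_types=10, event_dim=16, ts_dim=1, hidden=32):
        super().__init__()
        self.event_embed = nn.Embedding(num_event_types, event_dim)
        self.event_encoder = nn.GRU(event_dim, hidden, batch_first=True)
        self.ts_encoder = nn.GRU(ts_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, event_ids, ts_values):
        # event_ids: (B, L_event) int64; ts_values: (B, L_ts, 1) float
        _, h_event = self.event_encoder(self.event_embed(event_ids))
        _, h_ts = self.ts_encoder(ts_values)
        fused = torch.cat([h_event[-1], h_ts[-1]], dim=-1)
        return self.head(fused)  # next-step prediction

model = EventTimeSeriesFusion()
pred = model(torch.randint(0, 10, (4, 12)), torch.randn(4, 50, 1))
print(pred.shape)  # torch.Size([4, 1])
```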

» ECSE Department, School of Engineering, Rensselaer Polytechnic Institute (09/2019 - 05/2020)

  • As a Graduate Teaching Assistant, graded assignments and held office hours for two courses: Probability and Modeling & Analysis of Uncertainty.

Selected Research Projects

Efficient Single-model Sampling-free Uncertainty Estimation

  • Proposed the hierarchical probabilistic neural network (HPNN), a single deterministic network that performs prediction and sampling-free uncertainty quantification simultaneously in one forward pass (a generic sketch appears after this list).
  • Introduced a closed-form, self-regularized training strategy for HPNN using the Laplace approximation, without requiring ensemble models, density-based models, or OOD samples.
  • Collaborated with other researchers on a survey paper empirically evaluating sampling-free methods on out-of-distribution detection, active learning, and imbalanced image classification, with supporting analysis.
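
For intuition, here is a generic single-pass, sampling-free sketch that uses a Dirichlet (evidential-style) output head, so the prediction and an uncertainty score come from one forward pass in closed form. This is a stand-in under my own assumptions, not the actual HPNN formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SamplingFreeClassifier(nn.Module):
    """Sketch: the network outputs Dirichlet concentration parameters, from
    which the predictive distribution and uncertainty follow in closed form."""

    def __init__(self, in_dim=784, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, num_classes))

    def forward(self, x):
        alpha = F.softplus(self.net(x)) + 1.0      # Dirichlet parameters
        strength = alpha.sum(dim=-1, keepdim=True)
        probs = alpha / strength                   # predictive mean
        uncertainty = alpha.size(-1) / strength    # vacuity-style score
        return probs, uncertainty.squeeze(-1)

model = SamplingFreeClassifier()
probs, u = model(torch.randn(8, 784))  # one forward pass, no sampling
```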

Diversity-enhanced Accurate Sampling-based Uncertainty Estimation

  • Proposed the probabilistic ensemble, a Bayesian framework that models aleatoric and epistemic uncertainty by combining ensembling with the Laplace approximation for diversity-enhanced learning.
  • Constructed a Gaussian distribution around each mode of the ensemble via the Laplace approximation, forming a mixture-of-Gaussians model that better approximates the posterior distribution over parameters (sketched below).
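
A minimal sketch of this recipe, assuming hypothetical `models`, `loss_fn`, and `data_loader` objects; a diagonal Fisher matrix stands in for the full Laplace Hessian.

```python
import torch

def laplace_mixture_samples(models, loss_fn, data_loader,
                            num_samples=5, prior_prec=1.0):
    """Sketch: fit a diagonal Laplace (Fisher) approximation at each trained
    ensemble mode, then sample weights from the resulting Gaussian mixture."""
    sampled_states = []
    for model in models:
        # Diagonal Fisher as a cheap surrogate for the Hessian at this mode.
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        for x, y in data_loader:
            model.zero_grad()
            loss_fn(model(x), y).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
        # Each mode contributes a Gaussian component N(theta_m, H_m^{-1}).
        for _ in range(num_samples):
            state = {n: p.detach() + (fisher[n] + prior_prec).rsqrt()
                        * torch.randn_like(p)
                     for n, p in model.named_parameters()}
            sampled_states.append(state)
    return sampled_states  # samples from the mixture-of-Gaussians posterior
```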

Uncertainty Attribution for Explainable Bayesian Deep Learning

  • Proposed gradient-based Bayesian deep learning methods that identify the input locations or major factors contributing to the predictive uncertainty.
  • The proposed methods backpropagate the predictive uncertainty to either the input or the feature space to generate an uncertainty map, from which we can identify the most problematic regions of the input or the troublesome prediction-essential imaging factors (e.g., image resolution, illumination); a minimal sketch follows.
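
A minimal sketch of the input-space variant, assuming a hypothetical ensemble (or set of MC samples) `models` for estimating predictive entropy; the actual methods differ in their details.

```python
import torch

def uncertainty_attribution_map(models, x):
    """Sketch: compute predictive entropy from an ensemble and backpropagate
    it to the input to obtain a per-pixel uncertainty map."""
    x = x.clone().requires_grad_(True)
    probs = torch.stack([m(x).softmax(dim=-1) for m in models]).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).sum()
    entropy.backward()
    return x.grad.abs()  # large values mark inputs driving the uncertainty
```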

Uncertainty-driven Mitigations on Computer Vision Applications

  • Image Classification: Leveraged insights from uncertainty quantification and attribution to develop uncertainty-guided mitigation strategies for refining classification models. Specifically, we use the uncertainty attribution maps to optimize the input/latent space and improve prediction accuracy.
  • Action Recognition: Introduced the probabilistic transformer, which models the distribution of the self-attention layers for complex action recognition. Leveraged the estimated epistemic uncertainty during both training and inference to construct a majority model and a minority model, improving prediction accuracy and robustness.
  • Body Pose Estimation: Used a negative log-likelihood loss to train a two-stage probabilistic 3D body reconstruction model that recovers 3D human body poses from 2D images and efficiently quantifies epistemic and aleatoric uncertainty (see the sketch after this list).
  • Face Occlusion Detection: Leveraged landmark epistemic uncertainty and its spatial dependencies to improve both facial landmark detection and landmark occlusion detection. Specifically, we treated occluded landmarks as outliers and used epistemic uncertainty to detect them, without any occlusion annotations.
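
For the body pose item above, here is a minimal sketch of negative log-likelihood training with a predicted mean and variance per 3D joint coordinate, so the variance captures aleatoric uncertainty. The head, feature dimension, and joint count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ProbabilisticPoseHead(nn.Module):
    """Sketch: predict a Gaussian (mean, variance) per joint coordinate."""

    def __init__(self, feat_dim=512, num_joints=17):
        super().__init__()
        self.mean = nn.Linear(feat_dim, num_joints * 3)
        self.log_var = nn.Linear(feat_dim, num_joints * 3)

    def forward(self, feats):
        return self.mean(feats), self.log_var(feats).exp()  # (mu, variance)

head = ProbabilisticPoseHead()
feats = torch.randn(8, 512)       # stand-in backbone features
target = torch.randn(8, 17 * 3)   # ground-truth 3D joints, flattened
mu, var = head(feats)
loss = nn.GaussianNLLLoss()(mu, target, var)  # aleatoric-aware regression loss
loss.backward()
```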

Publications

Contact Me

wangh36@rpi.edu
