Hi, I am Hanjing Wang.

About Me

I am Hanjing Wang, a Ph.D. student at Rensselaer Polytechnic Institute (RPI) advised by Prof. Qiang Ji. My research area is computer vision. In May 2019, I graduated from Georgetown University with a master's degree in Analytics. I also hold a bachelor's degree in Mathematics and Applied Mathematics from Xiamen University, China. I have been working in the Intelligent Systems Lab with Prof. Qiang Ji since 2019. My research focuses mainly on probabilistic and Bayesian deep learning and uncertainty quantification.


Education

School of Engineering, Rensselaer Polytechnic Institute (Aug. 2019 - Present)

I am currently a Ph.D. candidate in Electrical Engineering (GPA: 3.87/4.0), advised by Prof. Qiang Ji. Intensive coursework includes Probabilistic Deep Learning, Pattern Recognition, Computer Vision, and Optimization.

Graduate School of Arts & Sciences, Georgetown University (Aug. 2017 - May 2019)

I received an M.S. degree in Analytics with a concentration in Data Science (GPA: 3.934/4.0).

School of Mathematical Sciences, Xiamen University (Aug. 2013 - May 2017)

I received a B.S. degree in Mathematics and Applied Mathematics (Major GPA: 3.85/4.0).

Research Projects

Efficient Single-model Sampling-free Uncertainty Estimation

We proposed the hierarchical probabilistic neural network (HPNN), a single deterministic network that performs simultaneous prediction and sampling-free uncertainty quantification in a single forward pass. We further introduced a closed-form, self-regularized training strategy for HPNN using the Laplace approximation that requires no ensemble models, density-based models, or out-of-distribution (OOD) samples.
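As a rough illustration of the single-pass idea (not the actual HPNN architecture; the two-head layout and all weight names here are hypothetical), a deterministic network can emit both a prediction and an uncertainty estimate from one forward pass:

```python
import numpy as np

def single_pass_forward(x, W_feat, W_mu, W_logvar):
    """Toy two-head network: one deterministic forward pass yields both a
    predictive mean and a sampling-free variance estimate (hypothetical weights)."""
    h = np.tanh(W_feat @ x)       # shared feature extractor
    mu = W_mu @ h                 # prediction head
    var = np.exp(W_logvar @ h)    # uncertainty head (positive by construction)
    return mu, var
```

A single evaluation thus replaces the many stochastic forward passes that sampling-based methods such as MC dropout or deep ensembles require.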

Diversity-enhanced Accurate Sampling-based Uncertainty Estimation

We proposed the probabilistic ensemble, a Bayesian framework that models aleatoric and epistemic uncertainty by combining the ensemble method with the Laplace approximation for diversity-enhanced learning. We constructed a Gaussian distribution around each mode of the ensemble models via the Laplace approximation, forming a Gaussian mixture model that better approximates the posterior distribution of the parameters.
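A minimal sketch of the resulting sampling step, assuming each ensemble member's mode and its Laplace covariance are already computed (function and argument names are hypothetical):

```python
import numpy as np

def sample_posterior_mixture(modes, covs, n_samples, seed=0):
    """Draw parameter samples from an equally weighted Gaussian mixture with
    one component per ensemble mode (Laplace approximation around each mode)."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        k = rng.integers(len(modes))  # pick a mixture component uniformly
        samples.append(rng.multivariate_normal(modes[k], covs[k]))
    return np.asarray(samples)
```

Predictions averaged over these samples capture multi-modal epistemic uncertainty that a single Gaussian around one mode would miss.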

Uncertainty Attribution for Explainable Uncertainty Quantification

We proposed gradient-based Bayesian deep learning methods that identify the locations or major factors of the input that contribute to the predictive uncertainty. The proposed methods backpropagate the predictive uncertainty to either the input or the feature space to generate an uncertainty map, from which we can identify the most problematic regions of the input or the troublesome prediction-essential imaging factors (e.g., image resolution, illumination).
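For intuition, here is a hedged sketch of the gradient computation for a plain linear softmax classifier (the actual methods operate on deep Bayesian networks). Using the identity dH/dz_j = -p_j(log p_j + H) for the gradient of predictive entropy H with respect to the logits z, the uncertainty can be backpropagated to the input analytically:

```python
import numpy as np

def entropy_attribution(W, x):
    """Backpropagate predictive entropy to the input of a linear softmax
    classifier, yielding a per-dimension uncertainty attribution map."""
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()                          # softmax probabilities
    H = -np.sum(p * np.log(p))            # predictive entropy
    dH_dz = -p * (np.log(p) + H)          # entropy gradient w.r.t. logits
    dH_dx = W.T @ dH_dz                   # chain rule through the linear layer
    return np.abs(dH_dx), H               # attribution magnitudes and entropy
```

Input dimensions with large attribution magnitude are the ones whose perturbation most changes the predictive uncertainty, which is the quantity the uncertainty map visualizes.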

Uncertainty-driven Interventions on Computer Vision Applications

Image Classification: We leveraged the insights from uncertainty quantification and attribution to develop uncertainty-guided mitigation strategies for refining classification models. Specifically, we utilized the uncertainty attribution maps to optimize the input/latent space and improve prediction accuracy.

Action Recognition: We introduced the probabilistic transformer, which models the distribution of the self-attention layers for complex action recognition. We leveraged the estimated epistemic uncertainty during both training and inference to construct a majority model and a minority model, improving prediction accuracy and robustness.

Body Pose Estimation: We utilized the negative log-likelihood loss to train a two-stage probabilistic 3D body reconstruction model that recovers 3D human body poses from 2D images and efficiently quantifies epistemic and aleatoric uncertainty.
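The training objective for such a probabilistic regressor is typically the Gaussian negative log-likelihood, sketched here under the assumption of a diagonal predictive Gaussian with a learned log-variance head (the actual model details may differ):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Mean Gaussian negative log-likelihood of targets y under N(mu, exp(log_var)).
    Predicting log-variance keeps the variance positive and lets the model
    express per-sample aleatoric uncertainty."""
    return 0.5 * np.mean(
        log_var + (y - mu) ** 2 / np.exp(log_var) + np.log(2.0 * np.pi)
    )
```

High-noise targets are automatically down-weighted through the learned variance, while the log_var term penalizes the degenerate solution of predicting unbounded variance everywhere.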


Publications

  • Hanjing Wang, Dhiraj Joshi, Shiqiang Wang, Qiang Ji. Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning. CVPR 2023 (to appear).

  • Hongji Guo, Hanjing Wang, Qiang Ji. Uncertainty-Guided Probabilistic Transformer for Complex Action Recognition. CVPR 2022.

  • Zijun Cui, Hanjing Wang, Tian Gao, Kartik Talamadupula, Qiang Ji. Variational Message Passing Neural Network for Maximum-A-Posteriori Inference. UAI 2022.

  • Lisha Chen, Hanjing Wang, Shiyu Chang, Hui Su, Qiang Ji. A Comparative Evaluation of Methods for Epistemic Uncertainty Estimation. Bayesian Deep Learning Workshop, NeurIPS 2020.