Hi! I am a Computer Scientist with the Machine Intelligence Group at Lawrence Livermore National Laboratory (LLNL), a federally funded research and development center. My research interests broadly include computer vision, machine learning, and high-dimensional data analysis.
I’m fortunate to collaborate with scientists and researchers across several areas of science and engineering — for example, high-energy physics, COVID-19 epidemiology, X-ray imaging, the Human Connectome Project, and healthcare. I spend much of my time thinking about modeling and understanding high-dimensional, multi-modal, and inherently structured data. I also serve as a reviewer for several top machine learning and computer vision venues (NeurIPS, CVPR, ICLR, ICCV, ICML, AAAI, etc.).
My resume (PDF, updated Jan 2020). Contact:
- (Dec 2020) Two papers accepted to AAAI 2021!
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations, led by Tejas. (Studies robustness to semantic shifts beyond L-p norm perturbations.)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts, led by Jay. (Studies ML explainability under distribution shifts.)
- (Nov 2020) Two papers accepted to WACV 2021 with Suhas and Pavan.
- (Nov 2020) Chairing a special session on generative modeling for images & videos at Asilomar 2020.
- (Sept 2020) Jay’s paper on designing better surrogates for a variety of scientific simulations will appear in Nature Communications! We show that an interval calibration-based objective can improve the quality of surrogates over standard losses such as mean squared error. [published paper]
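  To give a rough sense of what "interval calibration-based objective" means, here is a minimal, hypothetical sketch (not the paper's actual loss): the surrogate predicts an interval `[mu - delta, mu + delta]` per sample, and the loss trades off coverage (targets should fall inside the interval) against sharpness (intervals should stay tight). The function name, the hinge form, and the weighting `lam` are all illustrative assumptions.

  ```python
  import numpy as np

  def interval_calibration_loss(y, mu, delta, lam=0.5):
      """Illustrative sketch of an interval-based surrogate objective.

      The model emits an interval [mu - delta, mu + delta] per sample.
      Two competing terms (both hypothetical, for intuition only):
        - coverage: hinge penalty when the target falls outside its interval
        - sharpness: penalty on interval width, so intervals stay informative
      """
      below = np.maximum(0.0, (mu - delta) - y)   # target under the lower bound
      above = np.maximum(0.0, y - (mu + delta))   # target over the upper bound
      coverage = np.mean(below + above)           # zero when all targets are covered
      sharpness = np.mean(np.abs(delta))          # discourages trivially wide intervals
      return coverage + lam * sharpness
  ```

  When every target lies inside its predicted interval, only the sharpness term remains, so minimizing the loss pushes the intervals to be as tight as the data allows — the intuition behind calibration-driven training.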
- (Sept 2020) I was recognized as being among the top 33% of reviewers for ICML 2020.
- (July 2020) Vivek’s paper on using GAN priors for unsupervised audio source separation has been accepted to Interspeech 2020! We show that GAN priors clearly outperform the state of the art among unsupervised methods. [preprint] [code]
- (July 2020) Shusen’s paper on function-preserving projections (FPP) for high-dimensional scientific domains has been accepted for publication in the journal Machine Learning: Science and Technology! FPP performs linear dimensionality reduction of a full-dimensional domain while recovering functions defined on it. [paper] [code]