Research
Refutability in Machine Learning
My goal is to develop principled approaches that provide refutability in ML security. My research thus far has focused on building quantitative verification frameworks for neural network properties such as robustness, fairness, and susceptibility to data poisoning attacks [CCS 19, ICSE 21]. Our recent work proposes a systematic approach to analyzing the causes of membership inference attacks through the lens of causality [CCS 22a]. I am interested in developing ML algorithms with provable guarantees, as well as improving the scalability of verification and testing algorithms for neural networks.
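As an illustration of the quantitative flavor of such properties, the sketch below estimates how often a classifier's prediction flips under bounded random perturbations of an input, choosing the sample size via Hoeffding's inequality. The `predict` function, the toy linear model, and all parameters are hypothetical placeholders; this is a generic sampling-based sketch, not the verification framework of [CCS 19, ICSE 21].

```python
# Minimal sketch of sampling-based quantitative estimation of a robustness
# property: estimate Pr[prediction flips] under uniform L-infinity noise,
# with an additive error bound from Hoeffding's inequality.
import math
import numpy as np

def hoeffding_samples(eps: float, delta: float) -> int:
    """Number of i.i.d. samples so the empirical mean is within eps of the
    true probability with confidence 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_violation_rate(predict, x, radius, eps=0.01, delta=1e-3, rng=None):
    """Estimate the probability that the prediction changes within an
    L-infinity ball of the given radius around x."""
    rng = rng or np.random.default_rng(0)
    n = hoeffding_samples(eps, delta)
    base = predict(x)
    flips = 0
    for _ in range(n):
        noise = rng.uniform(-radius, radius, size=x.shape)
        if predict(np.clip(x + noise, 0.0, 1.0)) != base:
            flips += 1
    return flips / n

if __name__ == "__main__":
    # Toy linear "classifier" standing in for a neural network.
    w = np.array([1.0, -2.0, 0.5])
    predict = lambda v: int(v @ w > 0)
    x = np.array([0.6, 0.2, 0.9])
    print(estimate_violation_rate(predict, x, radius=0.1))
```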
Software Tools
I have released the tools and benchmarks from these projects as open source. Here are some project-specific pages and descriptions.
Privacy in Federated Networks
I am interested in privacy for federated setups. I have worked on privacy for graph data, where the goal is to design algorithms, such as hierarchical clustering [CCS 21] and graph neural networks [CCS 22b], that satisfy the strong notion of local differential privacy.
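As a minimal illustration of local differential privacy on graph data, the sketch below applies randomized response to a user's adjacency bit-vector before it leaves the device, and debiases a simple edge count at the untrusted aggregator. The function names and parameters are illustrative assumptions; this standard building block is not the clustering or GNN algorithms of [CCS 21, CCS 22b].

```python
# Edge local differential privacy via randomized response: flipping each
# adjacency bit independently with probability 1 / (1 + e^eps) satisfies
# eps-edge-LDP.
import math
import numpy as np

def randomized_response(adj_bits: np.ndarray, eps: float, rng=None) -> np.ndarray:
    """Perturb a 0/1 adjacency vector; each bit is kept with prob e^eps / (1 + e^eps)."""
    rng = rng or np.random.default_rng()
    keep_prob = math.exp(eps) / (1.0 + math.exp(eps))
    flip = rng.random(adj_bits.shape) >= keep_prob
    return np.where(flip, 1 - adj_bits, adj_bits)

def debias_edge_count(noisy_bits: np.ndarray, eps: float) -> float:
    """Unbiased estimate of the true number of edges from a noisy report."""
    p = math.exp(eps) / (1.0 + math.exp(eps))  # prob a bit was reported truthfully
    n = noisy_bits.size
    return (noisy_bits.sum() - n * (1.0 - p)) / (2.0 * p - 1.0)

if __name__ == "__main__":
    true_adj = np.array([0, 1, 1, 0, 0, 1, 0, 0], dtype=int)
    noisy = randomized_response(true_adj, eps=1.0)
    print(noisy, debias_edge_count(noisy, eps=1.0))
```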
Program Synthesis
I have worked on inferring the data-flow and architectural semantics of instructions using machine learning techniques for security analyses. Specifically, our approach infers taint rules from minimal knowledge (e.g., input-output examples of instructions), which can then be used to build an architecture-agnostic taint engine [NDSS 19]; a simplified sketch of this rule-inference idea appears below. In addition to new approaches for inferring inductive rules, I have worked on algorithms that guarantee generalization in programming by example. Specifically, we proposed a dynamic algorithm that computes, on the fly, the number of samples required to guarantee generalization, and we showed how to integrate it into two well-known synthesis approaches [FSE 21].
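As a simplified illustration of learning taint rules from input-output examples, the sketch below treats an instruction as a black-box function over a byte-level state, flips one input byte at a time, and records which output bytes change. The `execute` function, the state layout, and the toy instruction are assumptions made for illustration; this is not the inference procedure or taint engine of [NDSS 19].

```python
# Infer a byte-level taint rule (which input bytes influence which output
# bytes) from observed input-output behavior of a black-box instruction.
from typing import Callable, Dict, List, Set

def infer_taint_rule(execute: Callable[[bytes], bytes],
                     seed_states: List[bytes]) -> Dict[int, Set[int]]:
    """Map each input byte position to the output byte positions it influences."""
    rule: Dict[int, Set[int]] = {}
    for state in seed_states:
        base_out = execute(state)
        for i in range(len(state)):
            mutated = bytearray(state)
            mutated[i] ^= 0xFF  # flip one input byte
            new_out = execute(bytes(mutated))
            changed = {j for j, (a, b) in enumerate(zip(base_out, new_out)) if a != b}
            rule.setdefault(i, set()).update(changed)
    return rule

if __name__ == "__main__":
    # Toy "instruction": out[0:2] = 16-bit sum of in[0] and in[1], out[2] copies in[2].
    def execute(state: bytes) -> bytes:
        s = (state[0] + state[1]) & 0xFFFF
        return bytes([s & 0xFF, s >> 8, state[2]])

    print(infer_taint_rule(execute, [bytes([3, 5, 7]), bytes([200, 100, 0])]))
```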