I am currently a Ph.D. student at Columbia University, focusing on theoretical computer science, with particular interest in machine learning, algorithms, and statistics. I am extremely fortunate to be advised by Professor Daniel Hsu and Professor Alex Andoni. I am supported by an NSF Graduate Research Fellowship. I am affiliated with the Computer Science department, particularly the Theory group and the Machine Learning group, as well as the Data Science Institute.

My primary area of research is provable algorithms and models for computational inference and optimization. A lot of my interests can be summarized by the phrase "non-worst-case analysis for machine learning algorithms": I tend to prefer conditions that constrain the problem space and are easy to check. I am also a fan of the notion that theory informs practice, and practice informs theory. I am particularly interested in the following broad areas:

  • Theoretical frameworks for unsupervised learning that allow one to give performance guarantees

  • Interactive learning applied to evaluation methodology and learning problem formulation

  • Going beyond i.i.d. assumptions: learning over non-product distributions, and so on. I have also recently become interested in adaptive data analysis and other settings where the training and test distributions do not satisfy simple properties (like being i.i.d.).

  • Generalization bounds for models which do not currently have completely satisfactory guarantees (read my lips: d-e-e-p l-e-a-r-n-i-n-g). In particular, I am interested in data-dependent bounds on sample complexity.

  • Non-convex optimization with guarantees

  • Better characterization of easy-to-check data properties which ensure good statistical/computational efficiency. Also, developing data collection methods that ensure such properties hold in datasets.

  • Highly structured learning: learning rules and logic in reinforcement learning settings. I am also interested in models that blend rigid structure (like logic) with statistical methods.

  • Understanding sequence models: bridging the gap between HMMs and RNNs.

  • Identifying and learning over low-dimensional structures (of all sorts, though recently I have focused on various notions of sparsity). In particular, the goal is to give algorithms whose sample and computational complexities depend on the "low-dimensional" part of the structure rather than on the (potentially high-dimensional) ambient space; see the illustrative bound after this list.

  • Identifying failure modes of learning algorithms and models

  • Theoretically justifying generative models
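
To make the low-dimensional-structure item concrete, here is a textbook illustration (a classical compressed sensing bound, not a result of mine): to recover a $k$-sparse vector $x \in \mathbb{R}^d$ from linear measurements $y = Ax$,

$$n = O\!\left(k \log \frac{d}{k}\right)$$

measurements suffice, for instance via $\ell_1$ minimization when $A$ satisfies a restricted isometry property. The sample complexity scales with the sparsity $k$ rather than the ambient dimension $d$, which is exactly the kind of dependence on the low-dimensional part of the structure that I am after.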

In applications, I am particularly interested in natural language understanding and neuroscience.

Previously, I graduated from Princeton with an A.B. in Mathematics with honors in 2016 and an M.S.E. in Computer Science in 2017, during which I was lucky to have Professor Sanjeev Arora and Professor Ken Norman as thesis advisors. I was a member of Sanjeev Arora’s Unsupervised Learning Group, where I studied provable methods for machine learning, focusing in particular on natural language understanding. I was also a member of Ken Norman’s Computational Memory Lab at the Princeton Neuroscience Institute, where I applied machine learning to fMRI analysis methods.

For more details, either check out this website or see my out-of-date curriculum vitae.