Kiran Vodrahalli


About


Currently…

I am a research scientist at Google DeepMind, working on long-context sequence modeling (e.g. LLMs) and strategic and
interactive machine learning. I am particularly interested in resource-efficient approaches to long-context models.

I am currently a core contributor on the Gemini project, where I have focused on resource-efficient models
such as Gemini 1.5 Flash, and on leading the development of novel long-context evals.

I also work on questions at the intersection of multi-agent learning, strategic learning in games,
incentives for data collection and collaborative machine learning, and algorithmic game theory,
and I am interested in the associated problems in machine learning theory, algorithms, and optimization.
I have also worked on applications of machine learning in several fields, including neuroscience,
natural language understanding, economics, and robotics.

Please email me if you think we might have some overlapping interests. I am always happy to chat and possibly collaborate!


For more details, either check out this website or see my [CV].


News

I graduated with my Ph.D. from the Computer Science Department at Columbia University in June 2022!
I joined Google Brain as a Research Scientist in Fall 2022, which became a part of Google DeepMind in April 2023.


Previously…

From Fall 2022 to Spring 2023, I was a Research Scientist at Google Brain.
In April 2023, Google Brain became a part of Google DeepMind.

From Fall 2017 to Summer 2022, I was a Computer Science Ph.D. student at Columbia University,
focusing on theoretical computer science (Columbia CS Theory), with a particular interest in
machine learning (Columbia Machine Learning), algorithms, and statistics.
My thesis topic was resource-efficient machine learning.
I was fortunate to be advised by Professor Daniel Hsu and Professor Alex Andoni.
I was supported by an NSF Graduate Research Fellowship during my Ph.D.

In Fall 2021 and early Spring 2022 I was also a part-time student researcher at Google Brain.

In Summer 2021 I was a research intern at Google Brain, where I worked on
principled resource-efficient methods for training deep neural networks.

In Summer 2019 I visited the Simons Institute at Berkeley for the Foundations of Deep Learning program.

I graduated from Princeton with an A.B. in Mathematics with honors in 2016 and an M.S.E. in Computer Science in 2017.
I was fortunate to have Professor Sanjeev Arora and Professor Ken Norman as thesis advisors.
I was a member of Sanjeev Arora's Unsupervised Learning Group, where I studied provable methods for machine learning
(also a part of NLP @ Princeton and ML Theory @ Princeton), focusing in particular on natural language understanding.
I was also a member of Ken Norman’s Computational Memory Lab at the Princeton Neuroscience Institute,
where I applied machine learning to fMRI analysis methods.