Ilya Kostrikov
Currently, I'm a Research Scientist at OpenAI.
I was a Postdoctoral Scholar at the UC Berkeley Artificial Intelligence Research (BAIR) lab, where I worked on deep reinforcement learning. In particular, I'm interested in sample-efficient reinforcement learning.
I received my PhD from NYU, where I worked on sample-efficient imitation and reinforcement learning. I did an internship at Facebook AI Research, several internships at Google Brain, and I participated in the Google Student Research Advising Program.
LinkedIn / Email / Google Scholar / GitHub / Twitter
Representative publications
A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning
L. Smith*, I. Kostrikov*, S. Levine
Offline Reinforcement Learning for Natural Language Generation with Implicit Language Q Learning
C. Snell, I. Kostrikov, Y. Su, M. Yang, S. Levine
ICLR 2023
Automatic Data Augmentation for Generalization in Deep Reinforcement Learning
R. Raileanu, M. Goldstein, D. Yarats, I. Kostrikov, R. Fergus
NeurIPS 2021
Offline Reinforcement Learning with Fisher Divergence Critic Regularization
I. Kostrikov, J. Tompson, R. Fergus, O. Nachum
ICML 2021
Image Augmentation is All You Need: Regularizing Deep Reinforcement Learning from Pixels
I. Kostrikov*, D. Yarats*, R. Fergus
ICLR 2021, Spotlight
Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning
I. Kostrikov, K. Agrawal, D. Dwibedi, S. Levine, J. Tompson
ICLR 2019