Machine Learning & Artificial Intelligence


At OpenAI, I co-led a team working on learning-based robotics: developing methods that allow robots to learn new tasks quickly, without being explicitly programmed for each specific task.

Solving Rubik's Cube (2019)

We’ve trained a pair of neural networks to solve the Rubik’s Cube with a human-like robot hand. The neural networks are trained entirely in simulation, using a new technique called Automatic Domain Randomization (ADR). This shows that reinforcement learning isn’t just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity.
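The core idea of ADR is a curriculum that automatically widens the simulation's randomization ranges as the policy improves, and narrows them when it struggles. A minimal single-parameter sketch of that feedback loop (the threshold and step values here are illustrative assumptions, not the values used in the paper):

```python
def adr_update(bound, success_rate, hi_thresh=0.8, lo_thresh=0.2, step=0.05):
    """Simplified ADR-style update for one randomization bound.

    If the policy succeeds often at the current difficulty, widen the
    randomization range (make simulation harder and more varied); if it
    fails often, narrow it. Thresholds and step size are hypothetical.
    """
    if success_rate >= hi_thresh:
        return bound + step            # task is easy: expand the distribution
    if success_rate <= lo_thresh:
        return max(0.0, bound - step)  # too hard: shrink it, never below zero
    return bound                       # in between: leave the bound unchanged
```

In the full method, an update like this runs per randomized simulation parameter (friction, object size, and so on), so the environment distribution grows exactly as fast as the policy can handle.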

OpenAI Blog, arXiv
Press: NYT, WaPo

Learning Dexterity (2018)

We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity. Our system, called Dactyl, is trained entirely in simulation and transfers its knowledge to reality, adapting to real-world physics using techniques we’ve been working on for the past year.

OpenAI Blog, arXiv
Press: NYT, IEEE

Hindsight Experience Replay (2017)

We show how an off-policy reinforcement learning algorithm can learn from its own failures in a multi-goal environment, by replaying failed episodes as if the outcomes actually achieved had been the intended goals.
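The relabeling trick at the heart of the method can be sketched in a few lines. This is a simplified illustration of hindsight relabeling with the "future" goal-sampling strategy, not the paper's implementation; the transition layout and `reward_fn` signature are assumptions for the example:

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Augment an episode with hindsight goals.

    episode: list of (obs, action, goal, achieved_goal) transitions.
    reward_fn(achieved, goal): sparse reward, e.g. 0.0 on success else -1.0.
    For each transition, also store k copies relabeled with goals that were
    actually achieved later in the same episode, so failed attempts still
    produce successful (and thus informative) training transitions.
    """
    relabeled = []
    for t, (obs, action, goal, achieved) in enumerate(episode):
        # keep the original transition with its original goal
        relabeled.append((obs, action, goal, reward_fn(achieved, goal)))
        # sample goals achieved at or after step t ("future" strategy)
        future = episode[t:]
        for _ in range(min(k, len(future))):
            _, _, _, new_goal = random.choice(future)
            relabeled.append((obs, action, new_goal,
                              reward_fn(achieved, new_goal)))
    return relabeled
```

The relabeled transitions then go into an ordinary off-policy replay buffer (e.g. for DQN or DDPG), which is why the approach requires an off-policy algorithm.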

Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba

NIPS 2017, Long Beach

Robots That Learn (2017)

We’ve created a robotics system, trained entirely in simulation and deployed on a physical robot, which can learn a new task after seeing it done once.

OpenAI Blog
Press: CNBC

One-Shot Imitation Learning (2017)

We show how a robot can identify a task shown by a human in virtual reality, and replicate it in previously unseen conditions.

Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, Wojciech Zaremba

NIPS 2017, Long Beach


The next big AI breakthroughs will require both new learning algorithms and new infrastructure to scale machine learning to thousands of computers.

Machine Learning Systems at Scale (2017)

Thinking about the entirety of the stack is crucial for achieving scalable performance in distributed deep learning.

MLconf 2017, San Francisco

Building the Infrastructure that Powers the Future of AI (2017)

How OpenAI uses Kubernetes and TensorFlow to run machine learning experiments across thousands of machines.

Keynote, KubeCon 2017, Berlin

Data Retrieval

Analytic Performance Model of a Main-Memory Index Structure (2016)

Bachelor's Thesis, Karlsruhe Institute of Technology