Deep RL for traffic microsimulation on the cloud

Flow: a deep reinforcement learning framework for mixed autonomy traffic

Flow is a traffic control benchmarking framework. It provides a suite of traffic control scenarios (benchmarks), tools for designing custom traffic scenarios, and integration with deep reinforcement learning and traffic microsimulation libraries.

Flow is developed at the University of California, Berkeley. More details are available on our website.


Results

The following are successful controllers developed with Flow. For more details, visit our gallery.

Phantom shockwave dissipation on a ring

Inspired by the famous 2008 Sugiyama experiment demonstrating spontaneous formation of traffic shockwaves (reproduced in the left video), and a 2017 field study demonstrating the ability of AVs to suppress shockwaves, we investigated the ability of reinforcement learning to train an optimal shockwave-dissipating controller.

In the right video, we learn a controller (policy) for one out of 22 vehicles. By training on ring roads of varying lengths, and using a neural network policy with memory, we learned a controller that was both optimal (in terms of average system velocity) and generalized outside of the training distribution.
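To make the training objective concrete, the sketch below shows an average-system-velocity reward of the kind described above. This is a generic illustration under our own assumptions, not Flow's exact reward implementation; the function name and collision-flag handling are hypothetical.

```python
import numpy as np

def average_velocity_reward(speeds, fail=False):
    """Reward equal to the mean speed of all vehicles in the network.

    speeds: array of current vehicle speeds (m/s).
    fail: True if the rollout ended in a collision, in which case the
    reward is zeroed out to discourage unsafe behavior.

    Hypothetical sketch of an average-system-velocity objective;
    not Flow's actual reward function.
    """
    if fail or len(speeds) == 0:
        return 0.0
    return float(np.mean(speeds))

# Example: 22 vehicles on a ring, most cruising at 4 m/s
speeds = np.array([4.0] * 21 + [4.2])
reward = average_velocity_reward(speeds)
```

Maximizing this quantity over a rollout rewards the agent for raising throughput of the whole ring, not just its own speed, which is why a single controlled vehicle learns to dampen the shockwave for everyone behind it.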

Intersection control

We also demonstrated the ability of a single autonomous vehicle to control the relative spacing of the vehicles following behind it, creating an optimal merge at an intersection.

As can be seen in the videos, without any AVs, the vehicles are stopped at the intersection by vehicles traveling in the other direction; we show that even at low penetration rates, the autonomous vehicle "bunches" the vehicles behind it so that the platoon clears the intersection without conflict, resulting in a substantial speed improvement.
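The bunching behavior can be illustrated with a toy hand-written rule: the AV decelerates when its follower falls behind a target gap, compressing the platoon. This is a hypothetical proportional controller for intuition only; the actual Flow controllers are neural network policies learned via reinforcement learning, and the function name and gains below are our own assumptions.

```python
def bunching_accel(gap_behind, target_gap, k=0.5, a_max=1.0):
    """Acceleration command (m/s^2) for a platoon-compressing AV.

    gap_behind: distance to the vehicle immediately behind (m).
    target_gap: desired spacing to that follower (m).
    k: proportional gain; a_max: acceleration magnitude limit.

    When the follower lags (gap_behind > target_gap), the command is
    negative: the AV slows down so the platoon closes up behind it.
    Toy sketch, not a learned Flow policy.
    """
    err = gap_behind - target_gap
    return max(-a_max, min(a_max, -k * err))

# Follower is 10 m too far back: decelerate at the limit to bunch up.
cmd = bunching_accel(gap_behind=20.0, target_gap=10.0)
```

A tightly bunched platoon occupies the intersection for a shorter window, which is what lets cross-traffic and the platoon alternate cleanly instead of deadlocking.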