A deep reinforcement learning framework
for mixed autonomy traffic
Developed by UC Berkeley
Ready to get started?
Story Behind Flow
Flow was created and is actively developed by members of the Mobile Sensing Lab at UC Berkeley (PI: Professor Bayen).
Flow is a traffic control benchmarking framework. It provides a suite of traffic control scenarios (benchmarks), tools for designing custom traffic scenarios, and integration with deep reinforcement learning and traffic microsimulation libraries.
Why Flow?
Traffic systems can often be modeled as complex (nonlinear, coupled) dynamical systems for which classical analysis tools struggle to provide the understanding sought by transportation agencies, planners, and control engineers, largely because analytical results for such systems are difficult to obtain. Deep reinforcement learning (deep RL) provides an opportunity to study complex traffic control problems involving interactions between humans, automated vehicles, and sensing infrastructure. The resulting control laws and emergent vehicle behaviors offer insight into the potential for automating traffic through mixed fleets of autonomous and human-driven vehicles.
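To make the mixed-autonomy idea concrete, here is a minimal, self-contained sketch of the kind of system Flow studies: human drivers on a closed ring road following the Intelligent Driver Model (IDM), with a single automated vehicle running a simple controller. This is an illustration only, not Flow's actual API; all parameter values and the proportional speed controller are assumptions chosen for readability.

```python
import math

# Assumed toy parameters (not from Flow): a small single-lane ring road.
RING_LENGTH = 230.0   # metres
N_VEHICLES = 22
DT = 0.1              # simulation time step (s)

def idm_accel(v, v_lead, gap, v0=30.0, T=1.0, a=1.0, b=1.5, s0=2.0):
    """IDM acceleration from own speed, leader speed, and bumper gap."""
    s_star = s0 + max(0.0, v * T + v * (v - v_lead) / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

def step(pos, vel, av_index, av_target=5.0):
    """Advance all vehicles one step; vehicle av_index tracks a set speed."""
    n = len(pos)
    acc = []
    for i in range(n):
        lead = (i + 1) % n
        gap = (pos[lead] - pos[i]) % RING_LENGTH
        if i == av_index:
            # Hypothetical AV controller: proportional speed tracking
            # (ignores its gap, for simplicity of the sketch).
            acc.append(0.5 * (av_target - vel[i]))
        else:
            acc.append(idm_accel(vel[i], vel[lead], gap))
    new_vel = [max(0.0, v + a_ * DT) for v, a_ in zip(vel, acc)]
    new_pos = [(p + v * DT) % RING_LENGTH for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

# Start evenly spaced and at rest, then simulate five minutes.
pos = [i * RING_LENGTH / N_VEHICLES for i in range(N_VEHICLES)]
vel = [0.0] * N_VEHICLES
for _ in range(3000):
    pos, vel = step(pos, vel, av_index=0)
print(round(sum(vel) / N_VEHICLES, 2))  # mean fleet speed in m/s
```

In Flow proper, the hand-written AV controller above would be replaced by a policy trained with deep RL, and the dynamics would come from a microsimulator such as SUMO or Aimsun rather than a hand-rolled IDM loop.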
Flow now supports both SUMO and Aimsun.
First launched: —
Traffic and RL platforms supported: 4
Publications: 8
Flow in press
Just a few self driving cars on the road could improve traffic for everyone.
Drivers can learn this from AI: "keep your distance and brake less… That's smart driving!"
One of the core technologies behind it all is a free, open-source project called Flow, perfect for traffic modeling.
Flow is a first-of-its-kind software framework allowing researchers to discover and benchmark schemes for optimizing traffic.