ray/examples/carla

(Experimental) OpenAI gym environment for https://github.com/carla-simulator/carla

To run, first download and unpack the Carla binaries from this URL: https://github.com/carla-simulator/carla/releases/tag/0.7.0

Note that you currently also need to clone the Python client code from the `carla/benchmark_branch`, which includes the Carla planner.

Then you can try running env.py to drive the car, or run one of the train_* scripts to start training:

    $ pkill -9 Carla
    $ export CARLA_SERVER=/PATH/TO/CARLA_0.7.0/CarlaUE4.sh
    $ export CARLA_PY_PATH=/PATH/TO/CARLA_BENCHMARK_BRANCH_REPO/PythonClient
    $ python env.py
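
Once the environment variables above are set, a minimal random-rollout loop looks roughly like the sketch below. It assumes env.py exposes a `CarlaEnv` gym environment class and an `ENV_CONFIG` dict of default settings; treat those names as assumptions and check env.py for the actual interface.

    # Sketch only: CarlaEnv and ENV_CONFIG are assumed to be defined in env.py.
    from env import CarlaEnv, ENV_CONFIG

    config = dict(ENV_CONFIG)   # copy the defaults before editing
    env = CarlaEnv(config)      # construction is expected to launch the Carla server binary

    obs = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()   # random policy, just to exercise the env
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print("episode reward:", total_reward)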

See scenarios.py for the different training and test scenarios that are available.
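
For example, to restrict training to a single scenario you would typically override the scenario entry in the env config. The sketch below assumes scenarios.py exports a LANE_KEEP scenario definition and that the env config accepts a "scenarios" key; both names are assumptions, so check scenarios.py and env.py for the actual exports and keys.

    # Sketch only: LANE_KEEP and the "scenarios" key are assumed names;
    # see scenarios.py and env.py for what is actually exported.
    from env import CarlaEnv, ENV_CONFIG
    from scenarios import LANE_KEEP

    config = dict(ENV_CONFIG)
    config["scenarios"] = [LANE_KEEP]   # sample episodes from this one scenario
    env = CarlaEnv(config)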