Latest commit: [RLlib] Unify the way we create and use LocalReplayBuffer for all the agents.

This change:

1. Gets rid of the try...except clause around the execution_plan() call, and with it the deprecation warning.
2. Fixes the execution_plan() call in Trainer._try_recover() as well.
3. Most importantly, makes it much easier to create and use different types of local replay buffers for all our agents, e.g. allowing us to easily create a reservoir sampling replay buffer for the APPO agent for Riot in the near future.

It also:

* Introduces explicit configuration for replay buffer types.
* Fixes the is_training key error.
* Actually deprecates the buffer_size field.
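The explicit buffer configuration and the buffer_size deprecation suggest a config shape along the following lines. This is a minimal sketch, assuming a replay_buffer_config dict with type and capacity keys; the exact key names and buffer class are assumptions for illustration, not the confirmed RLlib schema.

```python
# Sketch only: explicit replay-buffer configuration (key names assumed).
config = {
    "replay_buffer_config": {
        # Select the buffer implementation explicitly; a reservoir-sampling
        # buffer could later be swapped in here without touching agent code.
        "type": "LocalReplayBuffer",
        # Takes over from the deprecated top-level `buffer_size` field.
        "capacity": 50_000,
    },
}
```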
Directory contents:

* alpha_zero
* bandits
* maddpg
* random_agent
* sumo
* __init__.py
* README.rst
* registry.py
Contributed algorithms, which can be run via ``rllib train --run=contrib/<alg_name>``. See https://docs.ray.io/en/master/rllib-dev.html for guidelines.
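Once registry.py has registered a contributed algorithm, it can typically also be launched from Python through Tune rather than the CLI. The sketch below shows this under stated assumptions: the algorithm name "contrib/RandomAgent" and the config values are illustrative, not a tested recipe.

```python
# Sketch: launching a contributed algorithm from Python instead of the
# `rllib train` CLI. The name "contrib/RandomAgent" and the config values
# are illustrative assumptions.
import ray
from ray import tune

if __name__ == "__main__":
    ray.init()
    tune.run(
        "contrib/RandomAgent",           # resolved via the contrib registry
        config={"env": "CartPole-v0"},   # any registered Gym environment
        stop={"training_iteration": 1},
    )
```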