Asynchronous Proximal Policy Optimization (APPO)

Overview

PPO is a model-free, on-policy RL algorithm that works well in both discrete and continuous action-space environments. PPO uses an actor-critic framework with two networks: an actor (the policy network) and a critic (the value-function network).
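For illustration, here is a minimal sketch of that two-network setup in PyTorch, with hypothetical observation/action dimensions and layer sizes; in practice RLlib builds these models for you from the algorithm config.

```python
# A minimal actor-critic sketch (hypothetical sizes; RLlib constructs
# the real models from your config).
import torch.nn as nn

obs_dim, act_dim, hidden = 4, 2, 64

# Actor: maps observations to action logits (the policy network).
actor = nn.Sequential(
    nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, act_dim)
)

# Critic: maps observations to a scalar state-value estimate.
critic = nn.Sequential(
    nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
)
```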

Distributed PPO Algorithms

Distributed baseline PPO

See implementation here

Asynchronous PPO (APPO)

APPO opts to imitate IMPALA as its distributed execution plan: data-collection workers gather experiences asynchronously, and the collected samples are stored in a circular replay buffer. A target network and a doubly importance-sampled surrogate objective are introduced to enforce training stability in this asynchronous data-collection setting. See implementation here
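As a usage sketch, the following configures and trains APPO through RLlib's config API. This assumes a Ray 2.x-style `APPOConfig`; exact parameter names and result keys may vary across Ray versions.

```python
# Sketch: build and train APPO via RLlib's config API (Ray 2.x style).
from ray.rllib.algorithms.appo import APPOConfig

config = (
    APPOConfig()
    .environment("CartPole-v1")          # any registered Gym env
    .framework("torch")
    .rollouts(num_rollout_workers=4)     # asynchronous sample collection
    .training(use_kl_loss=True)          # APPO-specific training option
)

algo = config.build()
for _ in range(3):
    results = algo.train()
    print(results["episode_reward_mean"])
```

Because collection is asynchronous, rollout workers keep sampling while the learner updates, which is what distinguishes APPO from the synchronous baseline PPO above.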

Decentralized Distributed PPO (DDPPO)

See implementation here

Documentation & Implementation:

Asynchronous Proximal Policy Optimization (APPO)

Detailed Documentation

Implementation