ray/rllib/tuned_examples/ppo/cartpole-appo-vtrace-separate-losses.yaml
Yi Cheng fd0f967d2e
Revert "[RLlib] Move (A/DD)?PPO and IMPALA algos to algorithms dir and rename policy and trainer classes. (#25346)" (#25420)
This reverts commit e4ceae19ef.

Reverts #25346

linux://python/ray/tests:test_client_library_integration never failed before this PR.

In the CI of the reverted PR, it also fails (https://buildkite.com/ray-project/ray-builders-pr/builds/34079#01812442-c541-4145-af22-2a012655c128), so it is highly likely that this PR is the cause.

The test output failure seems related as well (https://buildkite.com/ray-project/ray-builders-branch/builds/7923#018125c2-4812-4ead-a42f-7fddb344105b).
2022-06-02 20:38:44 -07:00


cartpole-appo-vtrace-separate-losses:
    env: CartPole-v0
    run: APPO
    stop:
        episode_reward_mean: 150
        timesteps_total: 200000
    config:
        # Only works for tf|tf2 so far.
        framework: tf
        # Switch on >1 loss/optimizer API for TFPolicy and EagerTFPolicy.
        _tf_policy_handles_more_than_one_loss: true
        # APPO will produce two separate loss terms:
        # policy loss + value function loss.
        _separate_vf_optimizer: true
        # Separate learning rate for the value function branch.
        _lr_vf: 0.00075

        num_envs_per_worker: 5
        num_workers: 1
        num_gpus: 0
        observation_filter: MeanStdFilter
        num_sgd_iter: 6
        vf_loss_coeff: 0.01
        vtrace: true
        model:
            fcnet_hiddens: [32]
            fcnet_activation: linear
            # Make sure we really have completely separate branches.
            vf_share_layers: false
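
A tuned-example file like this is simply a Ray Tune experiment spec in YAML form. Below is a minimal sketch (not part of the original file) of launching it from Python; the file path is an assumption, and the explicit move of the top-level "env" key into "config" is just a defensive step. The same file can typically also be launched via the RLlib CLI, e.g. rllib train -f cartpole-appo-vtrace-separate-losses.yaml (check rllib train --help for the exact flag in your Ray version).

# Minimal sketch, assuming the YAML above is saved locally as
# "cartpole-appo-vtrace-separate-losses.yaml" (path is an assumption).
import yaml

import ray
from ray import tune

with open("cartpole-appo-vtrace-separate-losses.yaml") as f:
    experiments = yaml.safe_load(f)

# Tune experiment specs expect the environment inside "config", so move the
# top-level "env" key there before launching.
for spec in experiments.values():
    spec.setdefault("config", {})["env"] = spec.pop("env")

ray.init()
tune.run_experiments(experiments)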