ray/rllib/agents/ppo/ddppo.py


# Backward-compatibility shim: DDPPO moved to ray.rllib.algorithms.ddppo,
# so re-export it here under its legacy DDPPOTrainer name.
from ray.rllib.algorithms.ddppo import (  # noqa
    DDPPO as DDPPOTrainer,
    DEFAULT_CONFIG,
)
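
A minimal usage sketch, assuming the legacy RLlib trainer API: importing from the old ray.rllib.agents.ppo.ddppo path still resolves to the relocated DDPPO class under its historical DDPPOTrainer name. The environment name and config value below are illustrative assumptions, not values taken from this file.

from ray.rllib.agents.ppo.ddppo import DDPPOTrainer, DEFAULT_CONFIG

# Copy the module-level default config dict before mutating it.
config = DEFAULT_CONFIG.copy()
config["num_workers"] = 2  # DDPPO trains on remote workers; illustrative value

trainer = DDPPOTrainer(config=config, env="CartPole-v1")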