* add marwil policy graph
* fix typo
* add offline optimizer and enable running marwil
* fix loss function
* add maintaining the moving average of advantage norm
* use sync replay optimizer for unifying
* remove offline optimizer and use sync replay optimizer
* format by yapf
* add imitation learning objective
* fix according to eric's review
* format by yapf
* revise
* add test data
* marwil
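For context on the commits above, a minimal sketch of the advantage-weighted imitation objective and the moving average of the advantage norm they describe. The names, the update rate, and the exact normalization are assumptions for illustration, not the code that landed:

```python
import numpy as np

class MarwilLossSketch:
    """Illustrative advantage-weighted imitation loss with a moving average
    of the advantage norm; a sketch, not RLlib's MARWIL implementation."""

    def __init__(self, beta=1.0, momentum=1e-8):
        self.beta = beta
        self.momentum = momentum   # step size for the moving average (assumed)
        self.ms_adv = 1.0          # running mean of squared advantages

    def loss(self, advantages, action_logp, value_targets, value_preds):
        advantages = np.asarray(advantages, dtype=np.float64)
        action_logp = np.asarray(action_logp, dtype=np.float64)

        # Maintain the moving average of the advantage norm.
        self.ms_adv += self.momentum * (np.mean(advantages ** 2) - self.ms_adv)
        c = np.sqrt(self.ms_adv) + 1e-8

        # Imitation term: exponentially advantage-weighted log-likelihood.
        weights = np.exp(self.beta * advantages / c)
        policy_loss = -np.mean(weights * action_logp)

        # Value function regression against the observed returns.
        vf_loss = np.mean(
            (np.asarray(value_preds) - np.asarray(value_targets)) ** 2)
        return policy_loss + vf_loss
```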
IMPALA support for multi-agent was broken, since IMPALA requires batches of a fixed length while multi-agent envs can produce variable-length batches.
Fix this by adding zero-padding as needed (similar to the RNN case); a sketch of the idea follows.
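Roughly (the dict-of-arrays batch layout and the helper name here are illustrative, not RLlib's actual SampleBatch code):

```python
import numpy as np

def zero_pad_batch(columns, target_len):
    """Pad each column (a dict of equal-length arrays) with zeros up to
    target_len, so variable-length multi-agent batches all share one size.
    Mirrors the idea used for RNN sequence padding; layout is illustrative."""
    padded = {}
    for key, col in columns.items():
        col = np.asarray(col)
        pad_width = [(0, target_len - col.shape[0])] + [(0, 0)] * (col.ndim - 1)
        padded[key] = np.pad(col, pad_width, mode="constant")
    return padded

# Example: pad a 3-step batch up to length 5.
batch = {"obs": np.ones((3, 4)), "rewards": np.array([1.0, 0.5, 0.2])}
print(zero_pad_batch(batch, 5)["rewards"])  # -> [1. 0.5 0.2 0. 0.]
```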
## What do these changes do?
Don't create an excessive number of workers for rollout.py, and also fix up the env wrapping to be consistent with the internal agent wrapper.
## Related issue number
Closes #3260.
A bunch of minor rllib fixes:
* pull in latest baselines atari wrapper changes (and use deepmind wrapper by default)
* move reward clipping to the policy evaluator (a brief sketch follows this list)
* add a2c variant of a3c
* reduce vision network fc layer size to 256 units
* switch to 84x84 images
* doc tweaks
* print timesteps in tune status
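For the reward clipping item above, evaluator-side clipping is just sign clipping; a minimal illustrative helper (not the actual evaluator code, which lives behind a config flag):

```python
import numpy as np

def clip_reward(reward):
    # Sign-based reward clipping, as used for Atari: rewards become
    # -1, 0, or +1 regardless of magnitude. Doing this in the policy
    # evaluator (rather than in an env wrapper) keeps the raw rewards
    # available for logging. This standalone helper is illustrative only.
    return float(np.sign(reward))
```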
The goal of this PR is to allow custom policies to perform model-based rollouts. In the multi-agent setting, this requires access to not only policies of other agents, but also their current observations.
Also, you might want to return the model-based trajectories as part of the rollout for efficiency.
* compute_actions() now takes a new keyword arg episodes (example sketch below)
* pull out internal episode class into a top-level file
* add function to return extra trajectories from an episode that will be appended to the sample batch
* documentation
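A rough sketch of how a custom policy might use the new episodes argument; the accessor last_observation_for, the agent id, and the helper below are assumptions for illustration, not a documented contract:

```python
class ModelBasedPolicySketch:
    """Illustrative custom policy using the new `episodes` keyword argument."""

    def compute_actions(self, obs_batch, state_batches=None, episodes=None, **kw):
        extra_batches = []
        if episodes is not None:
            for episode in episodes:
                # Peek at another agent's most recent observation and run a
                # short model-based rollout from it (stubbed out below).
                peer_obs = episode.last_observation_for("other_agent")  # assumed
                extra_batches.append(self._imagine_rollout(peer_obs))
        # The extra trajectories would be handed back through the new episode
        # hook so they get appended to the sample batch.
        return [0 for _ in obs_batch], [], {}

    def _imagine_rollout(self, obs):
        # Placeholder for a learned-model rollout; returns nothing useful here.
        return [obs]
```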
ray exec CLUSTER CMD [--screen] [--start] [--stop]
ray attach CLUSTER [--start]
Example:
ray exec sgd.yaml 'source activate tensorflow_p27 && cd ~/ray/python/ray/rllib && ./train.py --run=PPO --env=CartPole-v0' --screen --start --stop
In one command, this creates a cluster and runs the given command on it in a screen session. The screen can later be attached to via ray attach. After the command finishes, the cluster workers are terminated and the head node is stopped.
* Rename AsyncSamplesOptimizer -> AsyncReplayOptimizer
* Add AsyncSamplesOptimizer that implements the IMPALA architecture
* integrate V-trace with a3c policy graph
* audit V-trace integration
* benchmark compare vs A3C and with V-trace on/off
IMPALA scales from 16 to 128 workers on PongNoFrameskip-v4, solving Pong in <10 min. For reference, solving this env takes ~40 minutes with Ape-X and several hours with A3C.
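A hedged sketch of launching a run like the benchmark above through Tune; the worker count mirrors the 128-worker data point, and everything else is a placeholder rather than the tuned settings:

```python
import ray
from ray.tune import run_experiments

# Illustrative launch of the Pong/IMPALA benchmark via Tune. Only num_workers
# is taken from the benchmark description; other hyperparameters are omitted
# and would need to be filled in from the tuned example.
ray.init()
run_experiments({
    "pong-impala": {
        "run": "IMPALA",
        "env": "PongNoFrameskip-v4",
        "config": {
            "num_workers": 128,
        },
    },
})
```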
* removed ddpg2
* removed ddpg2 from codebase
* added tests used in ddpg vs ddpg2 comparison
* added notes about training timesteps to yaml files
* removed ddpg2 yaml files
* removed unnecessary configs from yaml files
* moved pendulum, mountaincarcontinuous, and halfcheetah tests to tuned_examples
* added more configuration details to yaml files
* removed random starts from halfcheetah
* patch up pbt
* add pbt
* clean up test
* review
* try out a ppo example
* some tweaks to ppo example
* add postprocess hook
* clean up custom explore fn
* improve tune doc
* concepts
* update humanoid
* fix example
* show error file
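Tying together the pbt, custom explore fn, and ppo example commits above, a rough sketch of such a schedule. The keyword names follow Tune's PopulationBasedTraining scheduler as it exists today; exact import paths and signatures have shifted across versions, so treat this as illustrative:

```python
import random

import ray
from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

def explore(config):
    # Custom explore fn: clamp perturbed values back into a sane range.
    config["train_batch_size"] = max(config["train_batch_size"], 1000)
    return config

pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    metric="episode_reward_mean",
    mode="max",
    perturbation_interval=10,
    hyperparam_mutations={"lr": lambda: random.uniform(1e-5, 1e-3)},
    custom_explore_fn=explore,
)

ray.init()
tune.run(
    "PPO",
    config={"env": "CartPole-v0", "lr": 1e-4, "train_batch_size": 4000},
    num_samples=8,
    scheduler=pbt,
)
```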
* Remove the rllib dep: Trainable is now a standalone abstract class that can be easily subclassed.
* Clean up hyperband: fix the debug string and add an example.
* Remove the YAML API / ScriptRunner: this was never really used.
* Move ray.init() out of run_experiments(): this provides greater flexibility and is less confusing, since there is no longer an implicit init() done there. Note that this is a breaking API change for Tune (see the sketch after this list).
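A sketch of the resulting usage pattern: an explicit ray.init() before run_experiments(), plus a standalone Trainable subclass. The underscored method names follow the older Trainable interface and are an assumption here (newer versions use setup()/step()):

```python
import ray
from ray.tune import Trainable, run_experiments

class MyTrainable(Trainable):
    # Standalone Trainable subclass with no rllib dependency. The _setup/_train
    # names match the older interface and are assumed for this sketch.
    def _setup(self, config):
        self.total = 0

    def _train(self):
        self.total += self.config.get("increment", 1)
        return {"episode_reward_mean": self.total}

# ray.init() is now called explicitly rather than inside run_experiments().
ray.init()
run_experiments({
    "my-experiment": {
        "run": MyTrainable,
        "stop": {"training_iteration": 5},
        "config": {"increment": 2},
    },
})
```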