Adds a tmux flag that can be used to support background execution of experiments. It cannot be used together with screen. This seems to be a useful feature that several different users have asked for.
It's possible to configure PPO in a way that ends up discarding most of the samples (they are treated as "stragglers"). Add a warning when this happens, and raise an exception if the waste is particularly egregious.
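A minimal sketch of the intended check (the function name and thresholds below are illustrative, not the actual implementation):

```python
# Illustrative sketch: warn when a large fraction of collected samples is
# discarded as stragglers, and fail loudly when the waste is egregious.
import logging

logger = logging.getLogger(__name__)

def check_sample_waste(num_collected, num_used, error_threshold=0.1):
    """num_collected: samples gathered from workers; num_used: samples that
    actually enter the PPO update; error_threshold: raise below this fraction."""
    if num_collected == 0:
        return
    used_frac = float(num_used) / num_collected
    if used_frac < error_threshold:
        raise ValueError(
            "Over {:.0%} of samples were discarded as stragglers; check your "
            "batch size settings.".format(1 - used_frac))
    if used_frac < 0.5:
        logger.warning(
            "{:.0%} of collected samples were discarded as stragglers.".format(
                1 - used_frac))
```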
* Added checkpoint_at_end option to fix #2740 (usage sketch after this commit list)
* Added ability to checkpoint at the end of trials if the option is set to True
* checkpoint_at_end option added; consistent with the Experiment and Trial runner
* checkpoint_at_end option mentioned in the tune usage guide
* Moved the redundant checkpoint criteria check out of the if-elif
* Added note that checkpoint_at_end is enabled only when checkpoint_freq is not 0
* Added test case for checkpoint_at_end
* Made checkpoint_at_end have an effect regardless of checkpoint_freq
* Removed comment from the test case
* Fixed the indentation
* Fixed pep8 E231
* Handled cases when trainable does not have _save implemented
* Constrained test case to a particular exp using the MockAgent
* Revert "Constrained test case to a particular exp using the MockAgent"
This reverts commit e965a9358ec7859b99a3aabb681286d6ba3c3906.
* Revert "Handled cases when trainable does not have _save implemented"
This reverts commit 0f5382f996ff0cbf3d054742db866c33494d173a.
* Simpler test case for checkpoint_at_end
* Prevented bools from losing their actual value
* Revert "Moved the redundant checkpoint criteria check out of the if-elif"
This reverts commit 783005122902240b0ee177e9e206e397356af9c5.
* Fix linting error.
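For reference, a minimal Tune usage sketch (the surrounding experiment settings are illustrative; only checkpoint_at_end is the new option):

```python
import ray
from ray import tune

ray.init()

# Sketch: checkpoint once when the trial ends, even with periodic
# checkpointing disabled (checkpoint_freq=0).
tune.run_experiments({
    "cartpole_ppo": {
        "run": "PPO",
        "env": "CartPole-v0",
        "stop": {"training_iteration": 10},
        "checkpoint_freq": 0,
        "checkpoint_at_end": True,
    },
})
```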
A bunch of minor rllib fixes:
pull in latest baselines atari wrapper changes (and use deepmind wrapper by default)
move reward clipping to policy evaluator
add a2c variant of a3c
reduce vision network fc layer size to 256 units
switch to 84x84 images
doc tweaks
print timesteps in tune status
This adds some experimental (undocumented) support for launching Ray on existing nodes. You have to provide the head IP and the list of worker IPs.
There are also a couple of additional utils for rsyncing files and port forwarding.
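As a rough illustration, a cluster config for existing nodes might look like the sketch below (the provider type and field names here are assumptions based on the description above, not a documented format):

```python
# Sketch only: write a minimal cluster config that points the autoscaler at
# machines that already exist (field names are guesses, not a final schema).
import yaml

cluster_config = {
    "cluster_name": "existing-nodes",
    "provider": {
        "type": "local",                     # assumed provider type
        "head_ip": "192.0.2.10",             # the existing head node
        "worker_ips": ["192.0.2.11", "192.0.2.12"],
    },
    "auth": {"ssh_user": "ubuntu"},
}

with open("existing-nodes.yaml", "w") as f:
    yaml.safe_dump(cluster_config, f, default_flow_style=False)
```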
This PR introduces the following changes:
* Ray Tune -> Tune
* [breaking] Creation of `schedulers/`, moving PBT, HyperBand into a submodule
* [breaking] Search Algorithms now must take in experiment configurations via `add_configurations` rather than through initialization
* Support `"run": (function | class | str)` with automatic registering of trainable
* Documentation Changes
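For example, a function can now be passed directly as "run"; a minimal sketch (the metric names and experiment fields are illustrative):

```python
import ray
from ray import tune

ray.init()

def my_trainable(config, reporter):
    # Report a couple of toy metrics; Tune registers this function
    # automatically when it is passed as "run".
    for step in range(10):
        reporter(timesteps_total=step, mean_accuracy=config["lr"] * step)

tune.run_experiments({
    "my_exp": {
        "run": my_trainable,  # could also be a Trainable class or a registered string
        "config": {"lr": 0.01},
        "stop": {"timesteps_total": 5},
    },
})
```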
The goal of this PR is to allow custom policies to perform model-based rollouts. In the multi-agent setting, this requires access to not only policies of other agents, but also their current observations.
Also, you might want to return the model-based trajectories as part of the rollout for efficiency. See the sketch after the change list below.
compute_actions() now takes a new keyword arg episodes
pull out internal episode class into a top-level file
add function to return extra trajectories from an episode that will be appended to the sample batch
documentation
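A rough sketch of how a custom policy could use the new argument (the episode accessors and base-class details here are assumptions, not the exact API):

```python
import numpy as np

class ModelBasedPolicy(object):
    # Sketch only: compute_actions() now also receives the in-progress episodes,
    # so a policy can look at other agents' state for model-based rollouts.
    def compute_actions(self, obs_batch, state_batches=None, episodes=None, **kwargs):
        if episodes is not None:
            # One episode object per item in obs_batch; it carries per-agent
            # data such as the most recent observations of other agents.
            assert len(episodes) == len(obs_batch)
        # Return (actions, RNN state outs, extra info), as RLlib expects.
        return np.zeros(len(obs_batch), dtype=np.int64), [], {}
```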
ray exec CLUSTER CMD [--screen] [--start] [--stop]
ray attach CLUSTER [--start]
Example:
ray exec sgd.yaml 'source activate tensorflow_p27 && cd ~/ray/python/ray/rllib && ./train.py --run=PPO --env=CartPole-v0' --screen --start --stop
This will, in one command, create a cluster and run the given command on it in a screen session. The screen session can later be attached to via ray attach. After the command finishes, the cluster workers will be terminated and the head node stopped.
to support TF version < 1.5
to support rmsprop optimizer in Impala
Before TF 1.5, tf.reduce_sum() and tf.reduce_max() have an argument keep_dims, which has been renamed to keepdims in later versions.
The original IMPALA paper uses the RMSProp algorithm to optimize the model, so we should support it as well to let users reproduce those experiments. Without any tuning, using the same hyperparameters as AdamOptimizer, it reaches "episode_reward_mean": 19.083333333333332 in Pong after consuming 3,610,350 samples.
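A minimal sketch of the kind of version guard this implies (the helper name is illustrative):

```python
from distutils.version import LooseVersion
import tensorflow as tf

def reduce_sum_keepdims(x, axis=None):
    # keep_dims was renamed to keepdims in TF 1.5.
    if LooseVersion(tf.__version__) < LooseVersion("1.5.0"):
        return tf.reduce_sum(x, axis=axis, keep_dims=True)
    return tf.reduce_sum(x, axis=axis, keepdims=True)
```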
This PR adds a driver table for the new GCS, which enables cleanup functionality associated with monitoring driver death.
Some testing in `monitor_test.py` is restored, but redis sharding for xray is needed to enable the remaining tests.
Rename AsyncSamplesOptimizer -> AsyncReplayOptimizer
Add AsyncSamplesOptimizer that implements the IMPALA architecture
integrate V-trace with a3c policy graph
audit V-trace integration
benchmark compare vs A3C and with V-trace on/off
PongNoFrameskip-v4 on IMPALA scaling from 16 to 128 workers, solving Pong in <10 min. For reference, solving this env takes ~40 minutes for Ape-X and several hours for A3C.
This also removes the async resetting code in VectorEnv. While that improves benchmark performance slightly, it substantially complicates env configuration and probably isn't worth it for most envs.
This makes it easy to efficiently support setups like Joint PPO: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/retro-contest/gotta_learn_fast_report.pdf
For example, for 188 envs, you could do something like num_envs: 10, num_envs_per_worker: 19.
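As a rough sketch of such a setup in RLlib (the worker/env counts and other fields are illustrative):

```python
import ray
from ray import tune

ray.init()

# Sketch: vectorize many environments by running several envs inside each
# rollout worker, instead of one env per worker.
tune.run_experiments({
    "joint_ppo_style": {
        "run": "PPO",
        "env": "CartPole-v0",
        "config": {
            "num_workers": 10,
            "num_envs_per_worker": 19,
        },
    },
})
```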
This adds a simple DQN+PPO example for multi-agent. We don't do anything fancy here, just syncing weights between two separate trainers. This potentially is wasting some compute, but is very simple to set up.
It might be nice to share experience collection between the top-level trainers in the future.
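The weight-sharing pattern, as a toy sketch (MockTrainer stands in for the real DQN and PPO agents; the actual example lives in the RLlib examples):

```python
# Toy illustration of syncing weights between two independently trained agents.
class MockTrainer(object):
    def __init__(self, policy_ids):
        self.weights = {pid: 0 for pid in policy_ids}

    def train(self):
        for pid in self.weights:
            self.weights[pid] += 1  # pretend an optimization step happened
        return self.weights

    def get_weights(self, policy_ids):
        return {pid: self.weights[pid] for pid in policy_ids}

    def set_weights(self, weights):
        self.weights.update(weights)

dqn = MockTrainer(["dqn_policy", "ppo_policy"])
ppo = MockTrainer(["ppo_policy", "dqn_policy"])

for _ in range(3):
    dqn.train()
    ppo.train()
    # Each trainer adopts the other's latest copy of the policy it does not train.
    dqn.set_weights(ppo.get_weights(["ppo_policy"]))
    ppo.set_weights(dqn.get_weights(["dqn_policy"]))
```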
* Ray documentation - created new section 'Profiling for Ray Users', as opposed to the current Profiling section for Ray developers. Completed three sections: 'A Basic Profiling Example', 'Timing Performance Using Python's Timestamps', and 'Profiling Using An External Profiler (Line_Profiler)'. Two sections on cProfile and Ray Timeline Visualization are left to do.
* Ray documentation - Fixed rst codeblock linebreaks in 'User Profiling'
* Ray documentation - For User Profiling, added section on cProfile (see the sketch after this commit list)
* Ray documentation - For User Profiling, completed Ray Timeline Visualization section, including graphical images
* Ray documentation - made User Profiling timeline image larger, minor wording edits
* Ray documentation - minor wording edits to User Profiling
* Ray documentation - User Profiling- fixed broken link
* Minor wording changes requested by Philipp Moritz addressed. Still need to address (1) compressing the image files, (2) correcting ex 3 to not be remote, and (3) using cProfile on an actor
* Ray documentation - For user-profiling.rst, revised example 3 to show a semi-parallelized example. Compressed the timeline example image to be under 50 KB and removed the view-timeline GUI image. Updated the timeline example image to reflect revised example 3. The cProfile actor example is still left to do.
* Ray documentation - in user-profiling.rst, added a new example including actors in the cProfile section
* Ray documentation - For user-profiling.rst, added section header for the Ray actor cProfile example
* Update user-profiling.rst
* Update user-profiling.rst
* 4 space indentation
* Update user-profiling.rst
* Update user-profiling.rst
* Update user-profiling.rst
* corrections
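In the spirit of the cProfile section, a small sketch of profiling driver code that submits Ray tasks (the functions here are just for illustration):

```python
import cProfile
import ray

ray.init()

@ray.remote
def slow_square(x):
    total = 0
    for _ in range(10000):
        total += x * x
    return total

def example_driver():
    # Submit the tasks, then block on the results.
    return sum(ray.get([slow_square.remote(i) for i in range(20)]))

# Profile the driver side; most of the time should show up under ray.get.
cProfile.run("example_driver()", sort="cumulative")
```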
* Add profile table and store profiling information there.
* Code for dumping timeline.
* Improve color scheme.
* Push timeline events on driver only for raylet.
* Improvements to profiling and timeline visualization
* Some linting
* Small fix.
* Linting
* Propagate node IP address through profiling events.
* Fix test.
* object_id.hex() should return a byte string in Python 2.
* Include gcs.fbs in node_manager.fbs.
* Remove flatbuffer definition duplication.
* Decode to unicode in Python 3 and bytes in Python 2.
* Minor
* Submit profile events in a batch. Revert some CMake changes.
* Fix
* Workaround test failure.
* Fix linting
* Linting
* Don't return anything from chrome_tracing_dump when filename is provided (usage sketch after this list).
* Remove some redundancy from profile table.
* Linting
* Move TODOs out of docstring.
* Minor
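A usage sketch of the timeline dump (assuming the helper is exposed as ray.global_state.chrome_tracing_dump, per the commits above; the workload is illustrative). The resulting file can be loaded at chrome://tracing:

```python
import ray

ray.init()

@ray.remote
def busy(x):
    return sum(i * x for i in range(100000))

# Generate some profile events, then dump them as a Chrome trace.
ray.get([busy.remote(i) for i in range(8)])

# When a filename is given, nothing is returned; the trace is written to disk.
ray.global_state.chrome_tracing_dump(filename="/tmp/ray_timeline.json")
```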