ray/rllib/utils
Sven Mika 510c850651
[RLlib] SAC add discrete action support. (#7320)
* Exploration API (+EpsilonGreedy sub-class).

* Exploration API (+EpsilonGreedy sub-class).

* Cleanup/LINT.

* Add `deterministic` to generic Trainer config (NOTE: this is still ignored by most Agents).

* Add `error` option to deprecation_warning().
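
A minimal sketch of what a deprecation helper with such an `error` option might look like (illustrative only; the actual function lives in rllib/utils/deprecation.py and its signature may differ):

```python
import logging

logger = logging.getLogger(__name__)

def deprecation_warning(old, new=None, error=False):
    """Warn (or raise) that a config key or API is deprecated.

    Sketch only, not the real RLlib helper: with the new `error`
    option, callers can turn the soft warning into a hard failure.
    """
    msg = "`{}` has been deprecated.".format(old)
    if new is not None:
        msg += " Use `{}` instead.".format(new)
    if error:
        raise ValueError(msg)
    logger.warning(msg)
```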

* WIP.

* Bug fix: Get exploration-info for tf framework.
Bug fix: Properly deprecate some DQN config keys.

* WIP.

* LINT.

* WIP.

* Split PerWorkerEpsilonGreedy out of EpsilonGreedy.
Docstrings.

* Fix bug in sampler.py in case a Policy has self.exploration = None.

* Update rllib/agents/dqn/dqn.py

Co-Authored-By: Eric Liang <ekhliang@gmail.com>

* WIP.

* Update rllib/agents/trainer.py

Co-Authored-By: Eric Liang <ekhliang@gmail.com>

* WIP.

* Change requests.

* LINT

* In tune/utils/util.py::deep_update(): only keep deep-updating if both original and value are dicts; if value is not a dict, set it directly (see the sketch below).
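
A simplified sketch of that recursion rule (not the actual tune implementation, which takes additional arguments for controlling which new keys are allowed):

```python
def deep_update(original, new_dict):
    """Recursively update `original` with `new_dict`.

    Keep recursing only while both sides are dicts; otherwise
    overwrite the value directly.
    """
    for key, value in new_dict.items():
        if isinstance(original.get(key), dict) and isinstance(value, dict):
            deep_update(original[key], value)
        else:
            original[key] = value
    return original

# Example: nested dicts are merged, scalars are overwritten.
conf = {"exploration_config": {"type": "EpsilonGreedy", "initial_epsilon": 1.0}}
deep_update(conf, {"exploration_config": {"initial_epsilon": 0.5}})
# -> {"exploration_config": {"type": "EpsilonGreedy", "initial_epsilon": 0.5}}
```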

* Make sync_replay_optimizer.py's parameters schedule_max_timesteps AND beta_annealing_fraction completely obsolete (replaced by prioritized_replay_beta_annealing_timesteps; see the example below).
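
Expressed as a config change, the replacement looks roughly like this (only the key names come from the commit message; the values are illustrative):

```python
# Before (now obsolete):
old_config = {
    "schedule_max_timesteps": 100000,   # obsolete
    "beta_annealing_fraction": 0.2,     # obsolete
}

# After: the beta-annealing horizon is given directly in timesteps.
new_config = {
    "prioritized_replay_beta_annealing_timesteps": 20000,
}
```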

* Update rllib/evaluation/worker_set.py

Co-Authored-By: Eric Liang <ekhliang@gmail.com>

* Review fixes.

* Fix default value for DQN's exploration spec.

* LINT

* Fix recursion bug (wrong parent constructor).

* Do not pass timestep to get_exploration_info.

* Update tf_policy.py

* Fix some remaining issues with test cases and remove more deprecated DQN/APEX exploration configs.

* Bug fix in tf-action-dist.

* Fix DDPG incompatibility with the new DQN exploration handling (which DDPG imports).

* Switch off exploration when getting action probs from off-policy-estimator's policy.
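
The point is that importance-sampling estimators need the probabilities of the target policy itself, not of its exploration-perturbed behavior. A hedged sketch (method names as mentioned elsewhere in this log; exact signatures not verified against the RLlib source):

```python
import numpy as np

def action_probs_for_estimation(policy, obs_batch, actions):
    """Get per-action probabilities from a policy with exploration off.

    Sketch only: compute_log_likelihoods evaluates the policy's own
    action distribution, so no exploration noise leaks into the estimate.
    """
    logp = policy.compute_log_likelihoods(actions=actions, obs_batch=obs_batch)
    return np.exp(logp)
```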

* LINT

* Fix test_checkpoint_restore.py.

* Deprecate all SAC exploration (unused) configs.

* Properly use `model.last_output()` everywhere instead of `model._last_output`.

* WIP.

* Take out set_epsilon from multi-agent-env test (not needed, decays anyway).

* WIP.

* Trigger re-test (flaky checkpoint-restore test).

* WIP.

* WIP.

* Add test case for deterministic action sampling in PPO.
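
A rough sketch of what such a determinism check might look like (the `explore` config key and exact trainer setup are assumptions; this is not the actual RLlib test):

```python
import unittest
import numpy as np

class TestDeterministicSampling(unittest.TestCase):
    """Illustrative determinism check, not the real RLlib test case."""

    def test_ppo_deterministic_actions(self):
        import ray
        from ray.rllib.agents.ppo import PPOTrainer

        ray.init(ignore_reinit_error=True)
        # "explore": False is assumed to switch the policy to
        # deterministic (greedy) action computation.
        trainer = PPOTrainer(
            env="CartPole-v0", config={"explore": False, "num_workers": 0})
        obs = np.zeros(4, dtype=np.float32)
        a1 = trainer.compute_action(obs)
        a2 = trainer.compute_action(obs)
        # With exploration off, the same observation must yield the same action.
        self.assertEqual(a1, a2)
        ray.shutdown()

if __name__ == "__main__":
    unittest.main()
```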

* Bug fix.

* Added deterministic test cases for different Agents.

* Fix problem with TupleActions in dynamic-tf-policy.

* Separate supported_spaces tests so they can be run separately for easier debugging.

* LINT.

* Fix autoregressive_action_dist.py test case.

* Re-test.

* Fix.

* Remove duplicate py_test rule from bazel.

* LINT.

* WIP.

* WIP.

* SAC fix.

* SAC fix.

* WIP.

* WIP.

* WIP.

* FIX 2 examples tests.

* WIP.

* WIP.

* WIP.

* WIP.

* WIP.

* Fix.

* LINT.

* Renamed test file.

* WIP.

* Add unittest.main.

* Make action_dist_class mandatory.

* Fix.

* FIX.

* WIP.

* WIP.

* Fix.

* Fix.

* Fix explorations test case (contextlib cannot find its own nullcontext??).

* Force torch to be installed for QMIX.

* LINT.

* Fix determine_tests_to_run.py.

* Fix determine_tests_to_run.py.

* WIP

* Add Random exploration component to tests (fixed issue with "static-graph randomness" via py_function).

* Add Random exploration component to tests (fixed issue with "static-graph randomness" via py_function).
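
The "static-graph randomness" problem is that a numpy random draw made at graph-construction time gets frozen into the graph as a constant; wrapping the draw in tf.py_function defers it to execution time so every run produces a fresh value. A small illustration (not the actual RLlib Random exploration code):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A numpy draw made while building the graph is a constant forever.
constant_noise = tf.constant(np.random.uniform())

# py_function re-executes the python draw on every session run.
fresh_noise = tf.py_function(
    func=lambda: np.random.uniform(), inp=[], Tout=tf.float64)

with tf.compat.v1.Session() as sess:
    print(sess.run([constant_noise, fresh_noise]))
    print(sess.run([constant_noise, fresh_noise]))  # constant repeats, fresh changes
```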

* Rename some stuff.

* Rename some stuff.

* WIP.

* update.

* WIP.

* Gumbel Softmax Dist.
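
The Gumbel-Softmax (Concrete) distribution is what makes discrete actions differentiable for SAC. A rough numpy illustration of the reparameterized sampling step (the RLlib class lives in rllib/models/tf/tf_action_dist.py and is TF-based; this is not that implementation):

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature=1.0):
    """Draw a differentiable, approximately one-hot sample from `logits`."""
    # Gumbel(0, 1) noise via inverse transform sampling.
    u = np.random.uniform(low=1e-10, high=1.0, size=np.shape(logits))
    gumbel = -np.log(-np.log(u))
    # Softmax over perturbed logits; temperature -> 0 approaches one-hot argmax.
    y = (logits + gumbel) / temperature
    e = np.exp(y - np.max(y, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

# Example: 3 discrete actions; a low temperature gives near-one-hot samples.
print(gumbel_softmax_sample(np.log([0.7, 0.2, 0.1]), temperature=0.3))
```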

* WIP.

* WIP.

* WIP.

* WIP.

* WIP.

* WIP.

* WIP

* WIP.

* WIP.

* Hypertune.

* Hypertune.

* Hypertune.

* Lock-in.

* Cleanup.

* LINT.

* Fix.

* Update rllib/policy/eager_tf_policy.py

Co-Authored-By: Kristian Hartikainen <kristian.hartikainen@gmail.com>

* Update rllib/agents/sac/sac_policy.py

Co-Authored-By: Kristian Hartikainen <kristian.hartikainen@gmail.com>

* Update rllib/agents/sac/sac_policy.py

Co-Authored-By: Kristian Hartikainen <kristian.hartikainen@gmail.com>

* Update rllib/models/tf/tf_action_dist.py

Co-Authored-By: Kristian Hartikainen <kristian.hartikainen@gmail.com>

* Update rllib/models/tf/tf_action_dist.py

Co-Authored-By: Kristian Hartikainen <kristian.hartikainen@gmail.com>

* Fix items from review comments.

* Add dm_tree to RLlib dependencies.

* Add dm_tree to RLlib dependencies.

* Fix DQN test cases ((Torch)Categorical).

* Fix wrong pip install.

Co-authored-by: Eric Liang <ekhliang@gmail.com>
Co-authored-by: Kristian Hartikainen <kristian.hartikainen@gmail.com>
2020-03-06 10:37:12 -08:00
..
exploration [rllib] Make timestep a required arg for exploration classes (#7380) 2020-03-04 13:00:37 -08:00
schedules [RLlib] DDPG refactor and Exploration API action noise classes. (#7314) 2020-03-01 11:53:35 -08:00
tests [RLlib] Exploration API: merge deterministic flag with exploration classes (SoftQ and StochasticSampling). (#7155) 2020-02-19 12:18:45 -08:00
__init__.py [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) 2020-02-22 14:19:49 -08:00
actors.py [rllib] - TaskPool.completed_prefetch() no longer returns stale object ids after an error (#7139) 2020-02-13 22:30:44 -08:00
annotations.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
compression.py Stop vendoring pyarrow (#7233) 2020-02-19 19:01:26 -08:00
debug.py [Core/RLlib] Move log_once from rllib to ray.util. (#7273) 2020-02-27 10:40:44 -08:00
deprecation.py [RLlib] Exploration API: merge deterministic flag with exploration classes (SoftQ and StochasticSampling). (#7155) 2020-02-19 12:18:45 -08:00
error.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
experimental_dsl.py [rllib] Support multi-agent training in pipeline impls, add easy flag to enable (#7338) 2020-03-02 15:16:37 -08:00
explained_variance.py [RLlib] Implement PPO torch version. (#6826) 2020-01-20 23:06:50 -08:00
filter.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
filter_manager.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
framework.py [rllib] Make timestep a required arg for exploration classes (#7380) 2020-03-04 13:00:37 -08:00
from_config.py [rllib] Fix torch GPU / yaml load warning (#7278) 2020-02-23 13:13:43 -08:00
memory.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
numpy.py [RLlib] Policy.compute_log_likelihoods() and SAC refactor. (issue #7107) (#7124) 2020-02-22 14:19:49 -08:00
policy_client.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
policy_server.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
seed.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
sgd.py [RLlib] Exploration API: merge deterministic flag with exploration classes (SoftQ and StochasticSampling). (#7155) 2020-02-19 12:18:45 -08:00
test_utils.py [RLlib] PPO torch memory leak and unnecessary torch.Tensor creation and gc'ing. (#7238) 2020-02-22 11:02:31 -08:00
tf_ops.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
tf_run_builder.py [RLlib] SAC add discrete action support. (#7320) 2020-03-06 10:37:12 -08:00
timer.py [rllib] Enable performance metrics reporting for RLlib pipelines, add A3C (#7299) 2020-02-28 16:44:17 -08:00
torch_ops.py [rllib] Fix torch GPU / yaml load warning (#7278) 2020-02-23 13:13:43 -08:00
tracking_dict.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00
tuple_actions.py [RLlib] Exploration API: merge deterministic flag with exploration classes (SoftQ and StochasticSampling). (#7155) 2020-02-19 12:18:45 -08:00
window_stat.py Remove future imports (#6724) 2020-01-09 00:15:48 -08:00