ray/rllib/utils/exploration/per_worker_epsilon_greedy.py

from ray.rllib.utils.exploration.epsilon_greedy import EpsilonGreedy
from ray.rllib.utils.schedules import ConstantSchedule


class PerWorkerEpsilonGreedy(EpsilonGreedy):
    """A per-worker epsilon-greedy class for distributed algorithms.

    Sets the epsilon schedules of individual workers to a constant:
    0.4 ^ (1 + [worker-index] / float([num-workers] - 1) * 7)
    See Ape-X paper.
    """

    def __init__(self, action_space, *, framework, num_workers, worker_index,
                 **kwargs):
        """Creates a PerWorkerEpsilonGreedy exploration object.

        Args:
            action_space (Space): The gym action space used by the environment.
            num_workers (Optional[int]): The overall number of workers used.
            worker_index (Optional[int]): The index of the Worker using this
                Exploration.
            framework (Optional[str]): One of None, "tf", "torch".
        """
        epsilon_schedule = None
        # Use a fixed, different epsilon per worker. See: Ape-X paper.
        assert worker_index <= num_workers, (worker_index, num_workers)
        if num_workers > 0:
            # Remote workers get a fixed (worker-index dependent) epsilon.
            if worker_index > 0:
                exponent = (1 + worker_index / float(num_workers - 1) * 7)
                epsilon_schedule = ConstantSchedule(
                    0.4**exponent, framework=framework)
            # Local worker should have zero exploration so that eval
            # rollouts run properly.
            else:
                epsilon_schedule = ConstantSchedule(0.0, framework=framework)

        super().__init__(
            action_space,
            epsilon_schedule=epsilon_schedule,
            framework=framework,
            num_workers=num_workers,
            worker_index=worker_index,
            **kwargs)
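
# --- Usage sketch (illustrative, not part of the original file) ---
# A minimal, standalone reproduction of the per-worker epsilon constants the
# constructor above computes, assuming 8 remote workers plus the local worker
# (index 0). It recomputes the formula directly rather than instantiating the
# class, so no RolloutWorker plumbing is needed.
if __name__ == "__main__":
    num_workers = 8
    for worker_index in range(num_workers + 1):
        if worker_index > 0:
            # Remote worker: fixed epsilon from the Ape-X formula.
            epsilon = 0.4 ** (1 + worker_index / float(num_workers - 1) * 7)
        else:
            # Local worker: greedy evaluation rollouts.
            epsilon = 0.0
        print("worker {}: epsilon = {:.6f}".format(worker_index, epsilon))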