Sven Mika
61a1274619
[RLlib] No Preprocessors (part 2). ( #18468 )
2021-09-23 12:56:45 +02:00
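The two "No Preprocessors" PRs (this part 2 and #18367 below) let raw observations bypass RLlib's built-in preprocessors. A minimal config sketch, assuming the experimental `_disable_preprocessor_api` flag these PRs introduce:

    from ray.rllib.agents.ppo import PPOTrainer

    config = {
        "env": "CartPole-v0",
        "framework": "torch",
        # Assumed experimental flag: feed raw (non-flattened) observations
        # straight from the env to the model, skipping built-in preprocessors.
        "_disable_preprocessor_api": True,
    }
    trainer = PPOTrainer(config=config)
    print(trainer.train()["episode_reward_mean"])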
Sven Mika
fd13bac9b3
[RLlib] Add worker arg (optional) to policy_mapping_fn. ( #18184 )
2021-09-17 12:07:11 +02:00
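The commit above passes the RolloutWorker to `policy_mapping_fn` as an optional argument, so the mapping can depend on worker-local state. A hedged sketch of the extended signature (argument order per the PR title; the `worker_index` usage is purely illustrative):

    def policy_mapping_fn(agent_id, episode, worker, **kwargs):
        # The mapping may now also depend on the worker, e.g. its index.
        return "main" if worker.worker_index % 2 == 0 else "opponent"

    config = {
        "multiagent": {
            "policies": {"main", "opponent"},
            "policy_mapping_fn": policy_mapping_fn,
        },
    }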
Sven Mika
8a72824c63
[RLlib Testing] Split and unflake more CI tests (make sure all jobs are < 30min). ( #18591 )
2021-09-15 22:16:48 +02:00
Sven Mika
3f89f35e52
[RLlib] Better error messages and hints; + failure-mode tests; ( #18466 )
2021-09-10 16:52:47 +02:00
Sven Mika
8a066474d4
[RLlib] No Preprocessors; preparatory PR #1 ( #18367 )
2021-09-09 08:10:42 +02:00
Sven Mika
1520c3d147
[RLlib] Deepcopy env_ctx for vectorized sub-envs AND add eval-worker-option to Trainer.add_policy() ( #18428 )
2021-09-09 07:10:06 +02:00
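The commit above also lets `Trainer.add_policy()` push the new policy to the evaluation workers. A rough sketch; the `evaluation_workers` keyword is an assumption based on the commit title:

    from ray.rllib.agents.ppo import PPOTrainer
    from ray.rllib.examples.policy.random_policy import RandomPolicy

    trainer = PPOTrainer(config={"env": "CartPole-v0", "num_workers": 2})

    # Add a new policy on the fly to the local and all rollout workers ...
    trainer.add_policy(
        policy_id="opponent_v1",
        policy_cls=RandomPolicy,
        # ... and (assumed flag per the commit title) to the evaluation
        # WorkerSet as well.
        evaluation_workers=True,
    )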
Sven Mika
a772c775cd
[RLlib] Set random seed (if provided) to Trainer process as well. ( #18307 )
2021-09-04 11:02:30 +02:00
Sven Mika
9a8ca6a69d
[RLlib] Fix Atari learning test regressions (2 bugs) and 1 minor attention net bug. ( #18306 )
2021-09-03 13:29:57 +02:00
gjoliver
336e79956a
[RLlib] Make MultiAgentEnv inherit gym.Env to avoid direct class type manipulation ( #18156 )
2021-09-03 08:02:05 +02:00
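With the change above, `MultiAgentEnv` is a proper `gym.Env` subclass, so custom multi-agent envs can be written the usual way. A minimal sketch (the two-agent logic is purely illustrative):

    import gym
    from ray.rllib.env.multi_agent_env import MultiAgentEnv

    class TwoAgentEnv(MultiAgentEnv):
        def __init__(self, config=None):
            self.observation_space = gym.spaces.Discrete(2)
            self.action_space = gym.spaces.Discrete(2)

        def reset(self):
            # Observations are dicts keyed by agent id.
            return {"agent_0": 0, "agent_1": 0}

        def step(self, action_dict):
            obs = {aid: 1 for aid in action_dict}
            rewards = {aid: 1.0 for aid in action_dict}
            dones = {"__all__": True}  # end the episode for all agents
            return obs, rewards, dones, {}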
Sven Mika
2357bbc0c8
[RLlib] Issue 18231: Better (earlier) env validation and error message improvement. ( #18249 )
2021-09-02 09:28:16 +02:00
gjoliver
6621bb5611
[RLlib] Minor renaming and cleanups related to last rollout worker seed fix. ( #18155 )
2021-09-02 06:57:46 +02:00
gjoliver
a8813675f4
[RLlib] Issue 17900: Set seed in single vectorized sub-envs properly, if num_envs_per_worker > 1 ( #18110 )
...
* In case a worker runs multiple envs, make sure a different seed can be deterministically set on all of them.
* Revert a couple of whitespace changes.
* Fix a few style errors.
Co-authored-by: Jun Gong <jungong@mbpro.local>
2021-08-26 11:32:58 +02:00
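The fix above makes seeding deterministic across vectorized sub-envs. For reference, the relevant config knobs (each worker and sub-env derives its own distinct seed from the base value):

    config = {
        "env": "CartPole-v0",
        "seed": 42,                # base seed for the whole run
        "num_workers": 2,          # rollout workers
        "num_envs_per_worker": 4,  # vectorized sub-envs per worker
    }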
Sven Mika
f18213712f
[RLlib] Redo: "fix self play example scripts" PR (17566) ( #17895 )
...
2021-08-17 09:13:35 -07:00
Sven Mika
2bd2ee7a73
[RLlib] SampleBatch: Docstring- and API cleanups; Add support for nested data. ( #17485 )
2021-08-16 06:08:14 +02:00
akern40
0cb2c602db
[rllib] Fixes typo in RolloutWorker.__init__ ( #17583 )
...
Fixes the typo in RolloutWorker.__init__, closes #17582
2021-08-13 13:17:36 -07:00
Amog Kamsetty
77f28f1c30
Revert "[RLlib] Fix Trainer.add_policy
for num_workers>0 (self play example scripts). ( #17566 )" ( #17709 )
...
This reverts commit 3b447265d8.
2021-08-10 10:50:01 -07:00
Sven Mika
3b447265d8
[RLlib] Fix Trainer.add_policy for num_workers>0 (self play example scripts). ( #17566 )
2021-08-05 11:41:18 -04:00
Kai Fricke
5d56a8aac5
[RLlib] Fix ignoring "sample_collector" config key ( #17460 )
2021-08-04 10:27:35 -04:00
Sven Mika
8a844ff840
[RLlib] Issues: 17397, 17425, 16715, 17174. When on driver, Torch|TFPolicy should not use ray.get_gpu_ids() (b/c no GPUs assigned by ray). ( #17444 )
2021-08-02 17:29:59 -04:00
Sven Mika
0d8fce8fd8
[RLlib] Discussion 2294: Custom vector env example and fix. ( #16083 )
2021-07-28 10:40:04 -04:00
Sven Mika
0c5c70b584
[RLlib] Discussion 247: Allow remote sub-envs (within vectorized) to be used with custom APIs. ( #17118 )
2021-07-25 16:55:51 -04:00
Sven Mika
7bc4376466
[RLlib] Example script: Simple league-based self-play w/ open spiel env (markov soccer or connect-4). ( #17077 )
2021-07-22 10:59:13 -04:00
Sven Mika
5a313ba3d6
[RLlib] Refactor: All tf static graph code should reside inside Policy class. ( #17169 )
2021-07-20 14:58:13 -04:00
Sven Mika
18d173b172
[RLlib] Implement policy_maps (multi-agent case) in RolloutWorkers as LRU caches. ( #17031 )
2021-07-19 13:16:03 -04:00
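The LRU policy-map commit above bounds how many Policy objects a worker keeps in memory at once. A hedged config sketch; the `policy_map_capacity` and `policy_map_cache` keys are assumed to be what this PR exposes:

    config = {
        "multiagent": {
            # Keep at most this many policies in memory per worker; the
            # least-recently-used ones are swapped out.
            "policy_map_capacity": 100,
            # Optional path for swapped-out policy states
            # (None = default temp location).
            "policy_map_cache": None,
        },
    }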
Sven Mika
649580d735
[RLlib] Redo simplify multi agent config dict: Reverted b/c seemed to break test_typing (non RLlib test). ( #17046 )
2021-07-15 05:51:24 -04:00
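The simplified multi-agent config (originally #16565, redone above) infers policy class, spaces, and config when they are not given. A sketch, assuming the `PolicySpec` helper from that change:

    from ray.rllib.policy.policy import PolicySpec

    config = {
        "multiagent": {
            "policies": {
                # Empty/None fields are auto-inferred from the Trainer's
                # default policy class and the env's spaces.
                "main": PolicySpec(),
                "cheap": PolicySpec(config={"lr": 0.0}),
            },
            "policy_mapping_fn": lambda agent_id, *a, **kw: "main",
        },
    }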
Amog Kamsetty
38b5b6d24c
Revert "[RLlib] Simplify multiagent config (automatically infer class/spaces/config). ( #16565 )" ( #17036 )
...
This reverts commit e4123fff27.
2021-07-13 09:57:15 -07:00
Kai Fricke
27d80c4c88
[RLlib] ONNX export for tensorflow (1.x) and torch ( #16805 )
2021-07-13 12:38:11 -04:00
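PR #16805 above adds ONNX export for TF1 and Torch policies. A hedged usage sketch; the `onnx` argument (an ONNX opset number) is assumed to be how the feature is exposed on `Policy.export_model`:

    from ray.rllib.agents.ppo import PPOTrainer

    trainer = PPOTrainer(config={"env": "CartPole-v0", "framework": "torch"})
    trainer.train()

    # Export the default policy's model in ONNX format (opset 11) instead of
    # a native framework checkpoint.
    trainer.get_policy().export_model("/tmp/cartpole_onnx", onnx=11)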
Sven Mika
e4123fff27
[RLlib] Simplify multiagent config (automatically infer class/spaces/config). ( #16565 )
2021-07-13 06:38:14 -04:00
Sven Mika
55a90e670a
[RLlib] Trainer.add_policy() not working for tf, if added policy is trained afterwards. ( #16927 )
2021-07-11 23:41:38 +02:00
Kai Fricke
10fd7111b3
[rllib] Improve test learning check, fix flaky two step qmix ( #16843 )
2021-07-06 19:39:12 +01:00
Amog Kamsetty
33f31f53c8
[RLlib] Torch Backwards Compatibility ( #16813 )
2021-07-01 19:17:54 -07:00
Sven Mika
53206dd440
[RLlib] CQL BC loss fixes; PPO/PG/A2|3C action normalization fixes ( #16531 )
2021-06-30 12:32:11 +02:00
Sven Mika
c95dea51e9
[RLlib] External env enhancements + more examples. ( #16583 )
2021-06-23 09:09:01 +02:00
Benjamin D. Killeen
50049f86d0
[rllib] check if self.env is not None explicitly ( #15634 )
...
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
2021-06-21 10:02:13 -07:00
Sven Mika
be6db06485
[RLlib] Re-do: Trainer: Support add and delete Policies. ( #16569 )
2021-06-21 13:46:01 +02:00
Sven Mika
79a9d6d517
[RLlib] Issues 16287 and 16200: RLlib not rendering custom multi-agent Envs. ( #16428 )
2021-06-19 08:57:53 +02:00
Amog Kamsetty
bd3cbfc56a
Revert "[RLlib] Allow policies to be added/deleted on the fly. ( #16359 )" ( #16543 )
...
This reverts commit e78ec370a9.
2021-06-18 12:21:49 -07:00
Sven Mika
e78ec370a9
[RLlib] Allow policies to be added/deleted on the fly. ( #16359 )
2021-06-18 10:31:30 +02:00
Sven Mika
d0014cd351
[RLlib] Policies get/set_state fixes and enhancements. ( #16354 )
2021-06-15 13:08:43 +02:00
Sven Mika
308ea62430
[RLlib] Fix "seed" setting to work in all frameworks and w/ all CUDA versions. ( #15682 )
2021-05-18 11:00:24 +02:00
Sven Mika
d89fb82bfb
[RLlib] Add simple curriculum learning API and example script. ( #15740 )
2021-05-16 17:35:10 +02:00
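The curriculum-learning commit above adds a task-settable env API plus an `env_task_fn` hook that can raise the task difficulty based on training results. A sketch under those assumptions (reset/step/spaces omitted for brevity):

    from ray.rllib.env.apis.task_settable_env import TaskSettableEnv

    class CurriculumEnv(TaskSettableEnv):
        def __init__(self, config=None):
            self.task = 1  # start with the easiest task

        def get_task(self):
            return self.task

        def set_task(self, task):
            self.task = task

    def curriculum_fn(train_results, task_settable_env, env_ctx):
        # Bump the task level once the agent performs well enough.
        if train_results["episode_reward_mean"] > 150.0:
            return task_settable_env.get_task() + 1
        return task_settable_env.get_task()

    config = {
        "env": CurriculumEnv,
        "env_task_fn": curriculum_fn,
    }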
Amog Kamsetty
ebc44c3d76
[CI] Upgrade flake8 to 3.9.1 ( #15527 )
...
* formatting
* format util
* format release
* format rllib/agents
* format rllib/env
* format rllib/execution
* format rllib/evaluation
* format rllib/examples
* format rllib/policy
* format rllib utils and tests
* format streaming
* more formatting
* update requirements files
* fix rllib type checking
* updates
* update
* fix circular import
* Update python/ray/tests/test_runtime_env.py
* noqa
2021-05-03 14:23:28 -07:00
Sven Mika
4f66309e19
[RLlib] Redo issue 14533 tf enable eager exec ( #14984 )
2021-03-29 20:07:44 +02:00
SangBin Cho
fa5f961d5e
Revert "[RLlib] Issue 14533: tf.enable_eager_execution()
must be called at beginning. ( #14737 )" ( #14918 )
...
This reverts commit 3e389d5812.
2021-03-25 00:42:01 -07:00
Sven Mika
3e389d5812
[RLlib] Issue 14533: tf.enable_eager_execution() must be called at beginning. ( #14737 )
2021-03-24 12:54:27 +01:00
Sven Mika
04bc0a9828
[RLlib] Remove all non-trajectory view API code. ( #14860 )
2021-03-23 09:50:18 -07:00
Sven Mika
f859ebb99f
[RLlib] Fix env rendering and recording options (for non-local mode; >0 workers; +evaluation-workers). ( #14796 )
2021-03-23 10:06:06 +01:00
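The rendering/recording fix above concerns the `render_env` / `record_env` options, which previously misbehaved with remote workers and evaluation workers. A brief config sketch:

    config = {
        "env": "CartPole-v0",
        "render_env": True,  # render episodes during rollouts
        "evaluation_interval": 1,
        "evaluation_config": {
            # Write videos of evaluation episodes to this directory.
            "record_env": "/tmp/rllib_videos",
        },
    }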
Chris Bamford
cd89f0dc55
[RLLib] Episode media logging support ( #14767 )
2021-03-19 09:17:09 +01:00
Ian Rodney
eb12033612
[Code Cleanup] Switch to use ray.util.get_node_ip_address() ( #14741 )
...
Co-authored-by: Richard Liaw <rliaw@berkeley.edu>
2021-03-18 13:10:57 -07:00
Sven Mika
775e685531
[RLlib] Issue #13824: compress_observations=True crashes for all algos not using a replay buffer. ( #14034 )
2021-02-18 21:36:32 +01:00
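The fix above (#14034) makes `compress_observations=True` usable for algorithms without a replay buffer. For reference, the setting is a plain common-config flag:

    config = {
        "env": "CartPole-v0",
        # Compress observations inside collected sample batches to reduce
        # memory and network usage.
        "compress_observations": True,
    }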