Artur Niederfahrenhorst
e9a8f7d9ae
[RLlib] Unify gnorm mixin for tf and torch policies. ( #26102 )
2022-07-24 15:31:09 +02:00
Avnish Narayan
af41f21be0
[RLlib] Make queue placement ops blocking ( #26581 )
This change should fix issues with IMPALA, and potentially APEX, that stem from the various learner threads.
Signed-off-by: avnish <avnish@anyscale.com>
2022-07-19 20:07:36 +01:00
Jun Gong
b383d987d1
[RLlib] Fix a bunch of issues related to connectors. ( #26510 )
2022-07-13 18:55:20 +02:00
Sven Mika
2b43713785
[RLlib] Move IMPALA and APPO back to exec plan (for now; due to unresolved learning/performance issues). ( #25851 )
2022-06-29 08:41:47 +02:00
Sven Mika
762cfbdff1
[RLlib] IMPALA and APPO metrics fixes; remove deprecated async_parallel_requests utility. ( #26117 )
2022-06-28 15:14:37 +02:00
Sven Mika
59a967a3a0
[RLlib] Cleanup some deprecated metric keys and classes. ( #26036 )
2022-06-23 21:30:01 +02:00
Kai Fricke
0959f44b6f
[tune/structure] Introduce execution package ( #26015 )
Execution-specific packages are moved to tune.execution.
Co-authored-by: Xiaowei Jiang <xwjiang2010@gmail.com>
2022-06-23 11:13:19 +01:00
Avnish Narayan
871aef80dc
[RLlib] Aggregate Impala learner info. ( #25856 )
2022-06-22 09:43:10 +02:00
Sven Mika
1499af945b
[RLlib] Algorithm step() fixes: evaluation should NOT be part of timed training_step loop. ( #25924 )
2022-06-20 19:53:47 +02:00
Artur Niederfahrenhorst
a322cc5765
[RLlib] IMPALA/APPO multi-agent mix-in-buffer fixes (plus MA learning tests). ( #25848 )
2022-06-17 14:10:36 +02:00
Yi Cheng
7b8b0f8e03
Revert "[RLlib] Remove execution plan code no longer used by RLlib. ( #25624 )" ( #25776 )
This reverts commit 804719876b.
2022-06-14 13:59:15 -07:00
Avnish Narayan
804719876b
[RLlib] Remove execution plan code no longer used by RLlib. ( #25624 )
2022-06-14 10:57:27 +02:00
Sven Mika
130b7eeaba
[RLlib] Trainer to Algorithm renaming. ( #25539 )
2022-06-11 15:10:39 +02:00
Sven Mika
7c39aa5fac
[RLlib] Trainer.training_iteration -> Trainer.training_step; Iterations vs reportings: Clarification of terms. ( #25076 )
2022-06-10 17:09:18 +02:00
Vince Jankovics
68444cd390
[tune] Custom resources per worker added to default_resource_request ( #24463 )
This resolves the `TODO(ekl): add custom resources here once tune supports them` item.
Also related to the discussion [here](https://discuss.ray.io/t/reserve-workers-on-gpu-node-for-trainer-workers-only/5972/5).
Co-authored-by: Kai Fricke <kai@anyscale.com>
2022-06-06 22:41:02 +01:00
Sven Mika
b5bc2b93c3
[RLlib] Move all remaining algos into algorithms directory. ( #25366 )
2022-06-04 07:35:24 +02:00
Yi Cheng
fd0f967d2e
Revert "[RLlib] Move (A/DD)?PPO and IMPALA algos to algorithms dir and rename policy and trainer classes. ( #25346 )" ( #25420 )
This reverts commit e4ceae19ef.
Reverts #25346
linux://python/ray/tests:test_client_library_integration never failed before this PR.
In the CI of the reverted PR it also fails (https://buildkite.com/ray-project/ray-builders-pr/builds/34079#01812442-c541-4145-af22-2a012655c128), so it is highly likely that this PR is the cause.
The test output failure seems related as well (https://buildkite.com/ray-project/ray-builders-branch/builds/7923#018125c2-4812-4ead-a42f-7fddb344105b).
2022-06-02 20:38:44 -07:00
Sven Mika
e4ceae19ef
[RLlib] Move (A/DD)?PPO and IMPALA algos to algorithms dir and rename policy and trainer classes. ( #25346 )
2022-06-02 16:47:05 +02:00