Jun Gong
c7ae787cc8
[RLlib] Beef up worker failure test. ( #26953 )
2022-07-27 00:10:45 -07:00
Sven Mika
4aea24c8a8
[RLlib] restart_failed_sub_environments now works for MA cases and crashes during reset(); +more tests and logging; add eval worker sub-env fault tolerance test. ( #26276 )
2022-07-15 08:55:14 +02:00
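For context, a minimal sketch of how the restart_failed_sub_environments option referenced above is typically switched on (config keys assumed from the Ray 2.x-era dict config; env name and worker counts are illustrative only):

# Sketch only: assumes the Ray 2.x-era dict config and the
# "restart_failed_sub_environments" key named in the commit above.
from ray.rllib.algorithms.ppo import PPO

config = {
    "env": "CartPole-v1",
    "num_workers": 2,
    "num_envs_per_worker": 4,
    # Restart a crashed sub-environment (including crashes raised in reset())
    # in place, instead of failing the whole rollout worker.
    "restart_failed_sub_environments": True,
}
algo = PPO(config=config)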
Jun Gong
0c469e490e
[RLlib] Checkpoint and restore connectors. ( #26253 )
2022-07-09 01:06:24 -07:00
Jun Gong
52bb8e47d4
[RLlib] EnvRunnerV2 and EpisodeV2 that support Connectors. ( #25922 )
2022-06-30 08:44:10 +02:00
simonsays1980
05d3af766c
[RLlib] Added 'episode.hist_data' to the 'atari_metrics' to ensure that the user's custom metrics are kept in postprocessing when using Atari environments. ( #25292 )
2022-06-28 16:31:57 +02:00
Sven Mika
130b7eeaba
[RLlib] Trainer to Algorithm renaming. ( #25539 )
2022-06-11 15:10:39 +02:00
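Illustrative before/after of the Trainer-to-Algorithm rename (both import paths are assumptions based on the pre- and post-rename package layouts, not taken from the commit itself):

# Old (pre-rename) layout, shown for comparison:
#   from ray.rllib.agents.ppo import PPOTrainer
#   trainer = PPOTrainer(config={"env": "CartPole-v1"})

# New (Algorithm-based) layout:
from ray.rllib.algorithms.ppo import PPO

algo = PPO(config={"env": "CartPole-v1"})
print(algo.train()["episode_reward_mean"])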
Eric Liang
905258dbc1
Clean up docstyle in python modules and add LINT rule ( #25272 )
2022-06-01 11:27:54 -07:00
Sven Mika
d95009a3ac
[RLlib] Vectorized envs: Gracefully handle sub-environments failing by restarting them (if configured so). ( #24967 )
2022-05-28 10:50:03 +02:00
Eric Liang
4963dfaae0
[api] Add API stability annotations for all RLlib symbols and add to LINT ( #25060 )
2022-05-24 22:14:25 -07:00
simonsays1980
ff575eeafc
[RLlib] Make actions sent by RLlib to the env immutable. ( #24262 )
2022-04-29 10:27:06 +02:00
Noon van der Silk
3589c21924
[RLlib] Fix some missing f-strings and an f-string-related bug in tf eager policy. ( #24148 )
2022-04-25 11:25:28 +02:00
Jun Gong
a7e5aa8c6a
[RLlib] Delete some unused, confusing logic. ( #23513 )
2022-03-29 13:45:13 +02:00
Sven Mika
04a5c72ea3
Revert "Revert "[RLlib] Speedup A3C up to 3x (new training_iteration function instead of execution_plan) and re-instate Pong learning test."" ( #18708 )
2022-02-10 13:44:22 +01:00
Alex Wu
b122f093c1
Revert "[RLlib] Speedup A3C up to 3x (new training_iteration
function instead of execution_plan
) and re-instate Pong learning test." ( #22250 )
...
Reverts ray-project/ray#22126
Breaks rllib:tests/test_io
2022-02-09 09:26:36 -08:00
Sven Mika
ac3e6ab411
[RLlib] Speedup A3C up to 3x (new training_iteration function instead of execution_plan) and re-instate Pong learning test. ( #22126 )
2022-02-08 19:04:13 +01:00
Sven Mika
8b678ddd68
[RLlib] Issue 22036: Client should handle concurrent episodes with one being training_enabled=False. ( #22076 )
2022-02-06 12:35:03 +01:00
Balaji Veeramani
7f1bacc7dc
[CI] Format Python code with Black ( #21975 )
...
See #21316 and #21311 for the motivation behind these changes.
2022-01-29 18:41:57 -08:00
Sven Mika
596c8e2772
[RLlib] Experimental no-flatten option for actions/prev-actions. ( #20918 )
2021-12-11 14:57:58 +01:00
Tomasz Wrona
39c202fa66
[RLlib] Allow extra keys in info in multi-agent ( #20793 )
2021-12-09 14:44:33 +01:00
Avnish Narayan
b8c64480d8
[RLlib] Change return type of try_reset to MultiEnvDict ( #20868 )
2021-12-06 14:15:33 +01:00
Avnish Narayan
3ddc09544d
[rllib] Env to base env refactor ( #20785 )
2021-11-30 17:02:10 -08:00
Sven Mika
9c73871da0
[RLlib; Docs overhaul] Docstring cleanup: Evaluation ( #19783 )
2021-10-29 12:03:56 +02:00
Sven Mika
902e854af2
[RLlib; Docs overhaul] Docstring cleanup: Environments. ( #19784 )
...
* wip.
* Test: Make a change in tune to trigger tune tests, which are not run otherwise, but seem to fail nevertheless with this PR's changes.
* remove bare_metal_policy_with_custom_view_reqs from tests
2021-10-29 10:46:52 +02:00
Antoine Galataud
edb338ff7c
[RLlib] Check training_enabled on PolicyServer ( #19007 )
2021-10-12 16:21:02 +02:00
Sven Mika
61a1274619
[RLlib] No Preprocessors (part 2). ( #18468 )
2021-09-23 12:56:45 +02:00
Sven Mika
fd13bac9b3
[RLlib] Add worker arg (optional) to policy_mapping_fn. ( #18184 )
2021-09-17 12:07:11 +02:00
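A sketch of a policy_mapping_fn that accepts the new optional worker argument (the signature follows RLlib conventions of that period; the routing logic is made up):

# Hypothetical mapping function using the optional `worker` argument.
def policy_mapping_fn(agent_id, episode, worker, **kwargs):
    # Example only: split agents across two policies based on the worker index.
    if worker is not None and worker.worker_index % 2 == 0:
        return "policy_a"
    return "policy_b"

# Passed in via the multi-agent config, e.g.:
#   config["multiagent"]["policy_mapping_fn"] = policy_mapping_fn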
Sven Mika
8a066474d4
[RLlib] No Preprocessors; preparatory PR #1 ( #18367 )
2021-09-09 08:10:42 +02:00
gjoliver
336e79956a
[RLlib] Make MultiAgentEnv inherit gym.Env to avoid direct class type manipulation ( #18156 )
2021-09-03 08:02:05 +02:00
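A minimal MultiAgentEnv subclass illustrating the point of the change: since MultiAgentEnv now inherits from gym.Env, isinstance checks work without class-type manipulation. The env itself is hypothetical and uses the gym-style step/reset API of that era:

import gym
from ray.rllib.env.multi_agent_env import MultiAgentEnv


class TwoAgentToyEnv(MultiAgentEnv):
    """Hypothetical two-agent env, for illustration only."""

    def __init__(self, config=None):
        super().__init__()
        self.observation_space = gym.spaces.Discrete(2)
        self.action_space = gym.spaces.Discrete(2)
        self._t = 0

    def reset(self):
        self._t = 0
        return {"agent_0": 0, "agent_1": 0}

    def step(self, action_dict):
        self._t += 1
        done = self._t >= 10
        obs = {aid: 1 for aid in action_dict}
        rewards = {aid: 1.0 for aid in action_dict}
        dones = {aid: done for aid in action_dict}
        dones["__all__"] = done
        return obs, rewards, dones, {}


assert isinstance(TwoAgentToyEnv(), gym.Env)  # holds after this change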
Sven Mika
2357bbc0c8
[RLlib] Issue 18231: Better (earlier) env validation and error message improvement. ( #18249 )
2021-09-02 09:28:16 +02:00
Joseph Suarez
8136d2912b
[RLlib] Add policies arg to callback: on_episode_step (already exists in all other episode-related callbacks) ( #18119 )
2021-08-27 16:12:19 +02:00
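Sketch of a callback overriding on_episode_step with the newly added policies argument (the keyword-only signature is assumed from the other episode-related callbacks of that period; the recorded metric is made up):

from ray.rllib.agents.callbacks import DefaultCallbacks


class MyCallbacks(DefaultCallbacks):
    def on_episode_step(
        self, *, worker, base_env, policies=None, episode, env_index=None, **kwargs
    ):
        # `policies` maps policy IDs to Policy objects, as in the other
        # episode-related callbacks; here we only record how many there are.
        episode.custom_metrics["num_policies"] = (
            len(policies) if policies is not None else 0
        )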
Sven Mika
494ddd98c1
[RLlib] Replace "seq_lens" w/ SampleBatch.SEQ_LENS. ( #17928 )
2021-08-21 17:05:48 +02:00
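The change above standardizes access on the SampleBatch.SEQ_LENS constant instead of the raw "seq_lens" string; a small usage sketch (the constructor call and sample values are assumptions):

import numpy as np
from ray.rllib.policy.sample_batch import SampleBatch

batch = SampleBatch({
    SampleBatch.OBS: np.zeros((4, 2), dtype=np.float32),
    # Previously written as the plain string key "seq_lens":
    SampleBatch.SEQ_LENS: np.array([2, 2]),
})
seq_lens = batch[SampleBatch.SEQ_LENS]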
Kai Fricke
bf3eaa9264
[RLlib] Dreamer fixes and reinstate Dreamer test. ( #17821 )
...
Co-authored-by: sven1977 <svenmika1977@gmail.com>
2021-08-18 18:47:08 +02:00
Sven Mika
f18213712f
[RLlib] Redo: "fix self play example scripts" PR (17566) ( #17895 )
...
* wip.
* wip.
* wip.
* wip.
* wip.
* wip.
* wip.
* wip.
* wip.
2021-08-17 09:13:35 -07:00
Sven Mika
29f20cccb6
[RLlib] Issue 17706: "AttributeError: 'numpy.ndarray' object has no attribute 'items'" on certain turn-based MultiAgentEnvs with Dict obs space. ( #17735 )
2021-08-11 12:33:35 +02:00
Amog Kamsetty
77f28f1c30
Revert "[RLlib] Fix Trainer.add_policy
for num_workers>0 (self play example scripts). ( #17566 )" ( #17709 )
...
This reverts commit 3b447265d8.
2021-08-10 10:50:01 -07:00
Sven Mika
3b447265d8
[RLlib] Fix Trainer.add_policy for num_workers>0 (self play example scripts). ( #17566 )
2021-08-05 11:41:18 -04:00
Sven Mika
18d173b172
[RLlib] Implement policy_maps (multi-agent case) in RolloutWorkers as LRU caches. ( #17031 )
2021-07-19 13:16:03 -04:00
Sven Mika
53206dd440
[RLlib] CQL BC loss fixes; PPO/PG/A2|3C action normalization fixes ( #16531 )
2021-06-30 12:32:11 +02:00
Eric Liang
1c709cbeb3
Fix typing ( #16668 )
2021-06-24 22:06:33 -07:00
Sven Mika
be6db06485
[RLlib] Re-do: Trainer: Support add and delete Policies. ( #16569 )
2021-06-21 13:46:01 +02:00
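Illustrative use of the re-instated add/remove-policy API (argument names and the PPOTorchPolicy class follow the 2021-era Trainer interface; spaces, policy IDs, and mapping functions are placeholders):

import gym
from ray.rllib.agents.ppo import PPOTrainer
from ray.rllib.agents.ppo.ppo_torch_policy import PPOTorchPolicy

trainer = PPOTrainer(config={"env": "CartPole-v1", "framework": "torch"})

# Add a policy at runtime and route every agent to it.
trainer.add_policy(
    policy_id="new_policy",
    policy_cls=PPOTorchPolicy,
    observation_space=gym.spaces.Box(-10.0, 10.0, (4,)),
    action_space=gym.spaces.Discrete(2),
    policy_mapping_fn=lambda agent_id, *args, **kwargs: "new_policy",
)

# ...train for a while, then drop it again and fall back to the default policy.
trainer.remove_policy(
    "new_policy",
    policy_mapping_fn=lambda agent_id, *args, **kwargs: "default_policy",
)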
Sven Mika
79a9d6d517
[RLlib] Issues 16287 and 16200: RLlib not rendering custom multi-agent Envs. ( #16428 )
2021-06-19 08:57:53 +02:00
Amog Kamsetty
bd3cbfc56a
Revert "[RLlib] Allow policies to be added/deleted on the fly. ( #16359 )" ( #16543 )
...
This reverts commit e78ec370a9.
2021-06-18 12:21:49 -07:00
Sven Mika
e78ec370a9
[RLlib] Allow policies to be added/deleted on the fly. ( #16359 )
2021-06-18 10:31:30 +02:00
Amog Kamsetty
ebc44c3d76
[CI] Upgrade flake8 to 3.9.1 ( #15527 )
...
* formatting
* format util
* format release
* format rllib/agents
* format rllib/env
* format rllib/execution
* format rllib/evaluation
* format rllib/examples
* format rllib/policy
* format rllib utils and tests
* format streaming
* more formatting
* update requirements files
* fix rllib type checking
* updates
* update
* fix circular import
* Update python/ray/tests/test_runtime_env.py
* noqa
2021-05-03 14:23:28 -07:00
Sven Mika
1c9701e9cb
[RLlib] Discussion 1513: on_episode_step() callback called after very first reset (should not). ( #15218 )
2021-04-11 13:16:17 +02:00
Sven Mika
04bc0a9828
[RLlib] Remove all non-trajectory view API code. ( #14860 )
2021-03-23 09:50:18 -07:00
Sven Mika
3e7899d251
[RLlib] Issue 14653: Empty env steps cause key error in SimpleListCollector. ( #14765 )
2021-03-23 10:30:53 +01:00
Sven Mika
f859ebb99f
[RLlib] Fix env rendering and recording options (for non-local mode; >0 workers; +evaluation-workers). ( #14796 )
2021-03-23 10:06:06 +01:00
Chris Bamford
cd89f0dc55
[RLLib] Episode media logging support ( #14767 )
2021-03-19 09:17:09 +01:00
Sven Mika
929946271d
[RLlib] Issue #14022: Trajectory View API fails in MA-env where a new agent terminates right away (done=True right after initial obs). ( #14031 )
2021-02-18 14:07:49 +01:00