This is the first step to improve `RayActorError`, which currently doesn't provide any information to the user.
In this first step, we redefine ambiguous or confusing APIs and code paths.
1. Rename APIs that expose too little information:
- MarkPendingTaskFailed -> MarkPendingTaskObjectFailed (the old name was too general for what the API does).
- PendingTaskFailed -> FailOrRetryPendingTask (the old name didn't reflect the API's behavior).
2. Rename arguments that expose too much implementation detail:
- immediately_mark_object_fail -> mark_task_object_failed (no need to specify "immediately").
3. Move msgpack serialization into a util function (sketched below) instead of embedding it in the task manager function.
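As a rough illustration (in Python rather than the actual C++ task manager code, with hypothetical names), the serialization util might look like:

```python
import msgpack

# Hypothetical sketch, not the actual Ray code: keeping serialization in a
# standalone util keeps the wire format out of the task manager logic.
def serialize_error_info(error_type: int, error_message: str) -> bytes:
    # msgpack.packb serializes a Python object into a bytes payload.
    return msgpack.packb(
        {"error_type": error_type, "error_message": error_message}
    )
```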
Instead of wrapping the whole training run in a remote call, we now only query the files on the node in a remote call; XGBoost-Ray is then started from the local node.
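A minimal sketch of the idea (function name and path are hypothetical, not the XGBoost-Ray API):

```python
import os
import ray

# Only the cheap file lookup runs as a remote task; training starts locally.
@ray.remote
def list_data_files(data_dir: str):
    return sorted(os.listdir(data_dir))

ray.init()
files = ray.get(list_data_files.remote("/data/shards"))
# ... start XGBoost-Ray from the local node using `files` ...
```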
This change gates block splitting behind a feature flag that is off by default. This makes it easier to debug problems potentially related to this feature. Criteria for enabling it by default:
- We're confident all nightly tests pass (currently, there may be an issue with large-scale groupby with block splitting).
- We're confident lineage-based reconstruction can work with block splitting.
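For debugging, the flag can then be flipped explicitly; a sketch, assuming a `DatasetContext` flag named `block_splitting_enabled` (flag name is an assumption, not confirmed):

```python
from ray.data.context import DatasetContext

# Hypothetical flag name for illustration; opt in to block splitting
# explicitly while it stays off by default.
ctx = DatasetContext.get_current()
ctx.block_splitting_enabled = True
```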
## Why are these changes needed?
Before commit e54d3117a4, all traffic went to Redis, which is a dedicated service.
After moving to GCS, internal KV requests compete with GCS traffic, which sometimes makes GCS a bottleneck.
Before this PR, the `many_actor` tests were failing: when many actors start, GCS comes under heavy load, and worker startup times out because internal KV requests are not served in time.
When a worker start fails, a new worker is started even though the original one is still pending, so we end up with many extra workers.
There are several things here that need fixing; this PR is a quick fix for the issue that also restores the behavior we had when using Redis.
## Related issue number
Closes #20602
## Why are these changes needed?
`base_image: "anyscale/ray-ml:pinned-nightly-py37"` no longer exists, which fails a lot of nightly tests. Change it to `base_image: "anyscale/ray-ml:nightly-py37-gpu"`.
This reverts commit e9132ed7ca.
## Why are these changes needed?
Seems to break the Windows build:
```
(07:46:25) ERROR: BUILD.bazel:406:11: Compiling src/ray/common/task/task_spec.cc failed: (Exit 2): cl.exe failed: error executing command
```
![Screen Shot 2021-11-23 at 3 09 18 AM](https://user-images.githubusercontent.com/18510752/143013973-f157724c-4951-49a9-80c6-158d41aa4295.png)
## Checks
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
  - [ ] Unit tests
  - [ ] Release tests
  - [ ] This PR is not tested :(
This test seems to be flaking because `ray stop` sometimes fails when sending SIGTERM only. While that's worth fixing, the test still exercises the intended behavior even if we send SIGKILL.
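For reference, `ray stop --force` is the CLI way to send SIGKILL instead of SIGTERM; a test could shell out to it like this (sketch only):

```python
import subprocess

# --force makes `ray stop` send SIGKILL instead of SIGTERM.
subprocess.run(["ray", "stop", "--force"], check=True)
```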
* Fix trainer timestep reporting for offline agents like CQL.
* extend timesteps_total to 200K for learning_tests_pendulum_cql test
Co-authored-by: sven1977 <svenmika1977@gmail.com>
This PR introduces a `TrialCheckpoint` class, which is returned e.g. by `ExperimentAnalysis.best_checkpoint`. The class enables easy access to cloud storage locations (rather than just local directories, as before). It also comes with utilities to download, upload, and save trial checkpoints to local and cloud targets.
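A hypothetical usage sketch (the `download`/`upload` method names are assumed from the description above, not confirmed signatures):

```python
from ray import tune

def trainable(config):
    # A real trainable would also need to save checkpoints for
    # best_checkpoint to point at something.
    tune.report(score=config["x"])

analysis = tune.run(
    trainable,
    config={"x": tune.grid_search([1, 2])},
    metric="score",
    mode="max",
)
best = analysis.best_checkpoint  # now a TrialCheckpoint, not just a local path
best.download(local_path="/tmp/best_ckpt")     # fetch from cloud storage
best.upload(cloud_path="s3://my-bucket/ckpt")  # push to a cloud target
```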
Running `ray status` with the changes from #20359, while running an autoscaler older than those changes, results in an error on the input `head_ip` to `LoadMetricsSummary`. See #20359 (comment).

This PR fixes the bug by restoring `head_ip` as an optional parameter of `LoadMetricsSummary`.
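A minimal sketch of the fix (simplified fields, not the exact Ray definition):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class LoadMetricsSummary:
    usage: Dict[str, float] = field(default_factory=dict)  # simplified
    # Optional with a default: older autoscalers that don't send head_ip
    # can still construct the summary without raising an error.
    head_ip: Optional[str] = None
```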
`non_terminated_nodes` calls are expensive for some node provider implementations.
This PR refactors `autoscaler._update()` so that it results in at most one `non_terminated_nodes` call per update.
Conceptually, the change is that the autoscaler only needs a consistent view of the world once per update interval.
The structure of an autoscaler update is now (sketched below):
1. Call `non_terminated_nodes` to update internal state.
2. Update autoscaler status strings.
3. Terminate nodes we don't need, removing them from internal state as we go.
4. Run node updaters if needed.
5. Get nodes to launch based on internal state.
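A simplified Python skeleton of that structure (all names are hypothetical stand-ins, not Ray's actual internals):

```python
# Hypothetical skeleton; the helper functions below stand in for the real
# autoscaler steps.
def update_status_strings(nodes): pass
def terminate_unneeded_nodes(nodes): return nodes
def run_node_updaters(nodes): pass
def launch_needed_nodes(nodes): pass

def autoscaler_update(provider):
    # The ONE potentially expensive provider call per update interval:
    nodes = provider.non_terminated_nodes(tag_filters={})
    update_status_strings(nodes)             # step 2
    nodes = terminate_unneeded_nodes(nodes)  # step 3: also prunes state
    run_node_updaters(nodes)                 # step 4
    launch_needed_nodes(nodes)               # step 5
```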
There's a small operational difference introduced:
- Previously: after a node is created, its `NodeUpdater` thread is initiated immediately.
- Now: after a node is created, its `NodeUpdater` thread is initiated in the next autoscaler update.

This typically will not introduce latency, since the time to get SSH access (a few minutes) is much longer than the autoscaler update interval (5 seconds by default).
Along the way, I've removed the `local_ip` initialization parameter of `LoadMetrics` because it was confusing and not useful (and caused some tests to fail).
## Why are these changes needed?
In Python, Redis `RPUSH` was used to broadcast and store the keys. In this PR, we use GCS KV to store the keys. Pub/sub still uses Redis, which needs to be removed later.
The protocol before this PR:
- The worker subscribes to the Redis key space.
- The worker writes the key of a function/actor to `(export:sqn, key)`.
- Other workers are then notified and start loading the data by checking `export:sqn`.

This depends on Redis for both KV and pub/sub; this PR fixes the KV part.
After this PR:
- The worker subscribes to the Redis key space.
- For exporting:
  - The worker finds the first key that is not yet held. This is guaranteed by the internal KV, which is currently a single-threaded, atomic DB: the worker checks until it finds a key that doesn't exist and writes it (a single operation). One optimization is to use the import counter as the start offset, since that counter means all keys before it have already been used. (See the sketch after this list.)
  - The worker then writes a dummy key to the Redis key space for broadcasting.
- For importing:
  - It works as before, but instead of reading from Redis, it reads from GCS KV.
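A sketch of the export-side loop referenced above (helper and key names are hypothetical):

```python
# `kv` is any KV store exposing an atomic put-if-absent, which the
# (single-threaded) internal KV provides.
def claim_next_export_key(kv, import_counter: int) -> int:
    sqn = import_counter  # all keys before the counter are known to be taken
    while True:
        # Returns True only if the key did not exist yet (atomic check-and-set).
        if kv.put_if_absent(f"Exports:{sqn}", b"placeholder"):
            return sqn
        sqn += 1
```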
This is part of the Redis removal project.
## Related issue number
https://github.com/ray-project/ray/issues/19443
Remerging #19789 with some fixes for Dask-on-Ray 1TB sort:
- Fixes a bug where the timer was not getting reset correctly.
- Increases the timeout to 10 min just to be safe.
- Changes the error to a unique exception, `ObjectFetchTimedOutError`, to improve debugging.
This exception should usually indicate a system-level bug.
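Roughly, the new exception is just a distinct type so fetch timeouts are easy to grep for and catch (illustrative definition, not the exact Ray source):

```python
class ObjectFetchTimedOutError(RuntimeError):
    """Raised when fetching an object takes longer than the configured timeout.

    Usually indicates a system-level bug rather than a user error.
    """
```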
Propagates environment variables to the `BackendExecutor` actor using runtime envs.
Also actually runs `test_callbacks` in CI.
Note that there is an issue with runtime envs (#20587), but it only happens if you shut down Ray and start a new session again.
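A sketch of the propagation mechanism (the `env_vars` runtime env field is real; the actor class here is a stand-in for `BackendExecutor`):

```python
import os
import ray

ray.init()

@ray.remote
class EnvReader:  # stand-in for BackendExecutor
    def get(self, key: str):
        return os.environ.get(key)

# Forward selected environment variables into the actor's runtime env.
to_propagate = {k: v for k, v in os.environ.items() if k.startswith("TRAIN_")}
reader = EnvReader.options(runtime_env={"env_vars": to_propagate}).remote()
```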
This PR reverts the previous revert with the following minor changes.
Worker capping is off by default.
The cap feature flag is on for the tests that explicitly require it.