This reverts commit f13c2a5350.
Re-land: remove PG caching logic.
As a result, the PBT scheduler can no longer stop and start a trial within itself for weight transfer and perturbation. This requires the following changes to the PBT scheduler:
1. The trial being perturbed is always left in a PAUSED state upon exiting `on_trial_result`. Instead of maintaining two separate paths for replacing a trial, we consolidate on always "stop" and "restore", and rely on `reuse_actor` as an optimization when available (see 2).
2. Consolidates PBT's trial replacement with the `reuse_actor` path.
3. Introduces a NOOP scheduler decision to indicate that the (PBT) scheduler has finished its interaction with the executor, so no further decision is needed in the Tune loop (a minimal sketch follows below).
Long term, we should tighten the interface between the scheduler and the executor. For example, `on_trial_result` taking in the whole runner is more API exposure than we want, and we should remove it.
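As a rough illustration of the NOOP decision, here is a minimal sketch assuming the string-constant style that Tune's `TrialScheduler` already uses for CONTINUE/PAUSE/STOP; `_should_perturb` and `_perturb` are hypothetical helpers, not the actual implementation:

```python
# Sketch only; mirrors the style of Tune's scheduler decisions.
class TrialScheduler:
    CONTINUE = "CONTINUE"  # keep running the trial
    PAUSE = "PAUSE"        # checkpoint and pause the trial
    STOP = "STOP"          # terminate the trial
    NOOP = "NOOP"          # scheduler already acted through the executor;
                           # the Tune loop should take no further action


class PopulationBasedTraining(TrialScheduler):
    def on_trial_result(self, trial_runner, trial, result):
        if self._should_perturb(trial, result):
            # PBT stops/restores the trial itself (reusing the actor when
            # possible) and leaves it PAUSED, so nothing remains for the
            # Tune loop to decide.
            self._perturb(trial_runner, trial)
            return TrialScheduler.NOOP
        return TrialScheduler.CONTINUE

    def _should_perturb(self, trial, result):  # hypothetical helper
        return result.get("training_iteration", 0) % 4 == 0

    def _perturb(self, trial_runner, trial):  # hypothetical helper
        pass  # stop + restore with a perturbed config, leaving trial PAUSED
```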
- Remove the `scale_to` logic from the object store test. We don't need to scale during tests, which disambiguates infra failures from app failures.
- Run the microbenchmark in core nightly, meaning it will run even more often.
- Run the weekly scalability tests daily instead (they are not too expensive).
- Run some core daily tests separately to avoid infra failures.
## Why are these changes needed?
When the Java multi-worker feature is on and workers respond to `Exit` requests from the worker pool with delays (even longer than the interval of `TryKillingIdleWorkers`), the worker pool may send additional `Exit` requests to workers before receiving replies to the previous ones. This leads to a `RAY_CHECK` failure here:
60df705b4e/src/ray/raylet/worker_pool.cc (L984)
due to executing two reply callbacks in a row.
This PR fixes the bug by ensuring the worker pool only sends a new `Exit` request to a worker if there are no in-flight `Exit` requests to any worker of the same worker process.
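For illustration, here is a Python sketch of that guard; the actual fix lives in the C++ worker pool (`src/ray/raylet/worker_pool.cc`), and all names below are illustrative:

```python
# Sketch of the in-flight Exit guard, per worker process.
class WorkerPoolSketch:
    def __init__(self):
        # Worker processes that currently have an unanswered Exit request.
        self._procs_with_pending_exit = set()

    def try_killing_idle_worker(self, worker):
        proc = worker.process_id
        if proc in self._procs_with_pending_exit:
            # An Exit request to some worker of this process is still in
            # flight; sending another could run the reply callback twice
            # and trip the RAY_CHECK. Retry on the next idle-killing tick.
            return
        self._procs_with_pending_exit.add(proc)
        worker.send_exit_request(
            on_reply=lambda: self._procs_with_pending_exit.discard(proc)
        )
```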
This PR adds the precise reason why an actor died to the `ActorTable`. The death cause stored in the table is propagated to the core worker through pubsub, so that the core worker can eventually raise a good error message with metadata.
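A rough sketch of the flow, with made-up names (the real path goes through the GCS `ActorTable` and core worker pubsub in C++):

```python
from dataclasses import dataclass


@dataclass
class ActorDeathCause:  # hypothetical mirror of the stored metadata
    reason: str          # e.g. "WORKER_DIED", "OUT_OF_MEMORY"
    error_message: str   # human-readable detail surfaced to the user


def mark_actor_dead(actor_table, pubsub, actor_id, cause: ActorDeathCause):
    # 1. Store the precise cause alongside the actor entry.
    actor_table[actor_id] = {"state": "DEAD", "death_cause": cause}
    # 2. Publish it so subscribed core workers can raise an informative
    #    error (instead of a bare RayActorError) with this metadata.
    pubsub.publish(channel="ACTOR", key=actor_id, message=cause)
```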
## Why are these changes needed?
If max concurrency is 1 in the default group, a blocking task executing in the default group will block subsequent tasks in other groups. See the reproduction script in #20475.
The issue is that tasks in the default concurrency group run in the main task execution thread, so tasks in other concurrency groups are blocked whenever the main task execution thread is blocked.
This PR only changes concurrent actor behavior so that the default group no longer blocks other groups (a minimal repro sketch follows below).
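A minimal repro sketch along the lines of #20475 (the actor and method names are illustrative; the concurrency-group API shown is Ray's public one):

```python
import time

import ray

ray.init()


@ray.remote(concurrency_groups={"io": 1})
class Actor:
    def blocking(self):
        # Runs in the default group (max concurrency 1) and blocks
        # the main task execution thread.
        time.sleep(30)

    @ray.method(concurrency_group="io")
    def ping(self):
        return "pong"


a = Actor.remote()
a.blocking.remote()
# Before this fix, the blocked default group could also stall "io";
# after it, ping() returns promptly.
print(ray.get(a.ping.remote()))
```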
## Related issue number
Fixes #20475
## Why are these changes needed?
In this PR, instead of passing the specific `creation_task_exception`, we pass `RayErrorInfo`. This allows us to pass any type of error metadata to `MarkTaskReturnObjectFailed`.
This PR is essentially a refactoring.
## Related issue number
https://github.com/ray-project/ray/issues/20534
This PR mostly implements a "fixture" for nightly tests. Note that the current fixture implementation is not that great; we can probably improve it in the future after refactoring e2e.py.
* [job submission] Use specific redis_address and redis_password instead of "auto" (#20687)
Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
Co-authored-by: Jiao Dong <jiaodong@anyscale.com>
This fixes slow lazy block evaluation by adding an explicit `get_blocks()` bulk method and using it whenever lazy iteration is not needed.
The root cause of the slowdown was that block splitting requires `ray.get()` during iteration over block refs in order to materialize split blocks, which interferes with exponential ramp-up.
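A simplified sketch of the two access patterns (the `get_blocks()` helper below is illustrative, standing in for the new bulk method):

```python
import ray


def iter_blocks_lazily(block_refs):
    for ref in block_refs:
        # One ray.get() per block: each call is a synchronous round trip,
        # which defeats batched/exponential ramp-up of fetches.
        yield ray.get(ref)


def get_blocks(block_refs):
    # Bulk fetch: a single ray.get() over all refs lets Ray overlap and
    # batch the transfers. Use this whenever lazy iteration isn't needed.
    return ray.get(list(block_refs))
```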
This is the first step toward improving `RayActorError`, which currently doesn't provide any information to the user.
In this first step, we redefine ambiguous/confusing APIs and code paths.
1. Rename APIs that expose too little information:
- `MarkPendingTaskFailed` -> `MarkPendingTaskObjectFailed` (the old name was too general compared to what it does)
- `PendingTaskFailed` -> `FailOrRetryPendingTask` (the old name didn't match its behavior)
2. Rename arguments that expose too much implementation detail:
- `immediately_mark_object_fail` -> `mark_task_object_failed` (no need to specify "immediately")
3. Move msgpack serialization into a util function instead of embedding it in the task manager function (see the sketch below).
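A hypothetical Python sketch of the extracted util (the real change is in the C++ task manager; the function name and fields are illustrative):

```python
import msgpack


def serialize_error_info(error_type: int, error_message: str) -> bytes:
    """Pack error metadata in one place, instead of inlining msgpack
    calls at every call site in the task manager."""
    return msgpack.packb(
        {"error_type": error_type, "error_message": error_message}
    )
```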
Instead of wrapping the whole training run in a remote call, we now only query the files on the node in a remote call; XGBoost-Ray is then started from the local node. A minimal sketch of the pattern follows below.
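A minimal sketch of the pattern, with illustrative names (the actual helper lives in the Ray/XGBoost-Ray integration):

```python
import os

import ray


@ray.remote
def list_files_on_node(path: str):
    # Small, fast remote call: only inspect the files on the target node.
    return os.listdir(path)


files = ray.get(list_files_on_node.remote("/data"))
# The heavy lifting (e.g. xgboost_ray.train(...)) now runs from the local
# node instead of being wrapped in the remote call itself.
```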
This makes block splitting off by default, which makes it easier to debug problems potentially related to this feature. Criteria for enabling it by default:
- We're confident all nightly tests pass (currently, there may be an issue with large-scale groupby with block splitting).
- We're confident lineage-based reconstruction can work with block splitting.
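For users who want to test the feature, re-enabling it would look roughly like this; note that the exact flag name on `DatasetContext` is an assumption here, not confirmed by this PR:

```python
from ray.data.context import DatasetContext

ctx = DatasetContext.get_current()
ctx.block_splitting_enabled = True  # assumed flag name; off by default per this PR
```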
## Why are these changes needed?
Before commit e54d3117a4, all traffic went to Redis, which is a dedicated service.
After moving to the GCS, internal KV requests compete with other GCS traffic, which sometimes makes the GCS a bottleneck.
Before this PR, the `many_actor` tests were failing. The reason is that when a lot of actors start, the GCS comes under heavy load, and worker startup times out because internal KV requests cannot be executed in a short time.
When a worker fails to start, a new worker is started even though the original one is still pending, and in the end there are a lot of workers.
There are several things here that need fixing; this is a quick fix for the issue, which also restores the behavior we had when using Redis.
## Related issue number
Closes #20602
## Why are these changes needed?
`base_image: "anyscale/ray-ml:pinned-nightly-py37"` doesn't exist anymore, which fails a lot of nightly tests; change it to `base_image: "anyscale/ray-ml:nightly-py37-gpu"`.
This reverts commit e9132ed7ca.
## Why are these changes needed?
The reverted commit seems to break the Windows build:
```
(07:46:25) ERROR: BUILD.bazel:406:11: Compiling src/ray/common/task/task_spec.cc failed: (Exit 2): cl.exe failed: error executing command
```
![Screenshot of the Windows build failure (2021-11-23)](https://user-images.githubusercontent.com/18510752/143013973-f157724c-4951-49a9-80c6-158d41aa4295.png)
This test seems to be flaky because `ray stop` sometimes fails when sending SIGTERM only. While that's desirable to fix, the test still exercises the intended behavior even if we send SIGKILL.