This PR adds a utility script to automatically fetch release test results from the Buildkite pipeline for a release branch. This was previously a manual process.
We saw many log messages like `Not enough memory to create requested object ...` when running shuffle tests, even when object store memory was far from full.
It turns out that whenever `ObjectBufferPool::AbortCreate()` is called, the Raylet logs `Not enough memory to create requested object ...`. However, `ObjectBufferPool::AbortCreate()` is reached from three different codepaths:
1. `ObjectManager::ReceiveObjectChunk()`
2. `PullManager::UpdatePullsBasedOnAvailableMemory()` -> `cancel_pull_request_`
3. `PullManager::CancelPull()` -> `cancel_pull_request_`
Only codepath (2) is actually caused by insufficient object store memory. The logging is therefore moved from `ObjectBufferPool::AbortCreate()` to the callsites, which have more context about the situation and can log more accurate messages.
Also, codepath (3) now logs at DEBUG level, because it is expected behavior and can be quite spammy when running shuffle/sort workloads.
Although there is enough quota, it is possible that AWS does not have enough capacity to start new nodes. According to @allenyin55, the current wait-for-node timeout is too short. This PR increases the timeout from 600 seconds to 3000 seconds (50 minutes). Let's see if this resolves the issue. If it makes things worse, I will revert it quickly (I will closely monitor the infra failure rate).
#24421 increased the default maximum gRPC message size to 250 MB, which broke a Tune test that checks that overly large training functions are caught.
This PR fixes the test by increasing the size of the experiment. However, please note that this leads to an inconsistency: for training functions between 100 MB and 250 MB in size, an error will be raised only at runtime, when trying to start the actor:
```
ValueError: The actor ImplicitFunc is too large (125 MiB > FUNCTION_SIZE_ERROR_THRESHOLD=95 MiB). Check that its definition is not implicitly capturing a large array or other object in scope. Tip: use ray.put() to put large objects in the Ray object store.
```
But it will successfully pass the registration stage `self._run_identifier = Experiment.register_if_needed(run)`.
cc @ericl: should we set the default limit back to 100 MB, or maybe set FUNCTION_SIZE_ERROR_THRESHOLD to 250 MB (or whatever the gRPC limit is)?
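For reference, a minimal sketch of the workaround suggested by the error message, i.e. putting the large object into the object store instead of capturing it in the trainable's closure (the array size and trainable below are illustrative only, not the actual test):

```python
import numpy as np
import ray
from ray import tune

# A large array that would otherwise be captured in the trainable's closure
# and counted against the serialized function-size limit.
large_data = np.zeros((128, 1024, 1024), dtype=np.uint8)  # ~128 MiB

# Put it in the object store once and close over the small ObjectRef instead.
data_ref = ray.put(large_data)


def trainable(config):
    data = ray.get(data_ref)  # fetched from the object store inside the trial
    tune.report(mean=float(data.mean()))


tune.run(trainable)
```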
The test is flaky because we schedule the `g` task without waiting for the `f` task to complete (since `f_obj` is embedded inside a list), so we may not have the locality information for `f_obj` from its owner when scheduling the `g` task.
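For illustration, the pattern looks roughly like this (the actual test and fix differ in details):

```python
import ray

ray.init()


@ray.remote
def f():
    return "payload"


@ray.remote
def g(refs):
    # refs is a plain Python list, so the ObjectRef inside it is not a direct
    # task argument and Ray does not wait for it before scheduling g.
    return ray.get(refs[0])


f_obj = f.remote()

# Flaky pattern: g may be scheduled before f finishes, so the owner may not
# yet have locality information for f_obj.
#   result = ray.get(g.remote([f_obj]))

# Deflaked pattern: wait for f to complete first, so f_obj's location is
# known before g is scheduled.
ray.wait([f_obj])
print(ray.get(g.remote([f_obj])))
```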
Related issue number
Closes #23964
As discussed in #24322, rename `PinObjectID()` so the function name matches its signature. Also rename the RPC request/reply/method names to keep them consistent.
`test_usage_stats` was very flaky due to a runtime env setup error.
The test defined the runtime env `{"pip": "ray[serve]"}` both in `ray.init()` and in `ray.remote()` for the actor. This is redundant but should be supported by runtime_env; however, it turns out to reveal a bug in runtime_env: the env appears to be installed twice concurrently in this situation, causing the flakiness.
I'll make a follow-up issue for the runtime env bug with more details and a simpler repro, and link it here. Until then, we should merge this PR to deflake CI. This PR defines the runtime_env only in `ray.init()` and removes the redefinition in `ray.remote()`; the actor will still inherit the correct runtime environment.
I tested manually by inspecting `dashboard_agent.log` locally. The virtualenv install commands were duplicated about 75% of the time before this PR, indicating the concurrent install. With this change, the commands were never duplicated in the 7-8 times I ran it, so this PR should deflake the test.
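A minimal sketch of the change (the actor name and method below are hypothetical; the real test uses its own actor):

```python
import ray

# Before (flaky): the same runtime_env was repeated on the actor, which
# triggered two concurrent installs of the same environment:
#   @ray.remote(runtime_env={"pip": ["ray[serve]"]})

# After: define the runtime_env once in ray.init(); the actor inherits it.
ray.init(runtime_env={"pip": ["ray[serve]"]})


@ray.remote
class StatsActor:
    def ping(self):
        return "ok"


actor = StatsActor.remote()
assert ray.get(actor.ping.remote()) == "ok"
```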
According to https://grpc.io/docs/guides/performance/, we should "Always re-use stubs and channels when possible."
This PR shares channels between different services.
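The change itself is in the C++ clients, but the idea is the same as in this Python sketch: one channel per endpoint, shared by all services, instead of a separate channel per service (the address and method paths below are placeholders):

```python
import grpc

# Create one channel to the endpoint and reuse it everywhere.
channel = grpc.insecure_channel("raylet-host:50051")

# RPCs for different services are multiplexed over the same underlying
# HTTP/2 connection instead of each service opening its own channel.
ping = channel.unary_unary("/example.FooService/Ping")
do_work = channel.unary_unary("/example.BarService/DoWork")

# Generated stubs would be constructed from the same shared channel in the
# same way, e.g. FooServiceStub(channel) and BarServiceStub(channel).
```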
The link to the documentation for troubleshooting `TypeError: Could not serialize the function ...` is broken. This PR fixes the link by replacing it with the correct URL.
During the investigation for #24176, we found that the majority of the memory used by the Raylet and core workers is due to gRPC client (core worker) and server (Raylet) data structures for in-flight `PinObjectIDs` RPCs. Instead of buffering the requests in gRPC, this PR buffers the ObjectIDs that need to be pinned inside `RayletClient`. This shows a significant reduction in the Raylet's memory usage outside of the object store.
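Conceptually, the buffering works like the following Python sketch; this only illustrates the pattern and is not the actual C++ `RayletClient` code (all names are made up):

```python
class PinBatcher:
    """Buffers object IDs locally and keeps at most one RPC in flight."""

    def __init__(self, send_request):
        # send_request(ids, done_callback) issues the actual async RPC.
        self._send_request = send_request
        self._buffered_ids = []
        self._inflight = False

    def add(self, object_id):
        # Instead of issuing an RPC per object ID (and letting gRPC queue
        # them), the ID is appended to a local buffer.
        self._buffered_ids.append(object_id)
        self._maybe_flush()

    def _maybe_flush(self):
        if self._inflight or not self._buffered_ids:
            return
        batch, self._buffered_ids = self._buffered_ids, []
        self._inflight = True
        self._send_request(batch, self._on_reply)

    def _on_reply(self):
        # When the reply arrives, flush whatever accumulated in the meantime.
        self._inflight = False
        self._maybe_flush()
```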
Also made minor cleanups in the Raylet client:
- Move the error for aborted object creation from `ObjectBufferPool::AbortCreate()` to the callsites, with hopefully more accurate reasons.
- C++ style cleanups.
For better debugging, we should print the installed pip packages in the Buildkite annotations. Additionally, this PR shortens the summary message to make the output less cluttered.
https://github.com/ray-project/ray/pull/14676 disabled the disk usage/total display for Ray nodes on K8s, because Ray nodes on K8s are run as pods, which in general do not use up the entire machine.
However, in some situations, it is useful to run one Ray pod per K8s node and report the disk usage.
This PR adds a flag to enable displaying disk usage in those situations.
We have several issues if `DisconnectClient` happens before `AnnounceWorkerPort`:
- A check failure when removing the I/O worker from `registered_io_workers`, since the I/O worker is only added to that set after `AnnounceWorkerPort`.
- `num_starting_(io)_workers` is not decremented.
All workflow tasks are executed as remote functions submitted from `WorkflowManagementActor`, a detached, long-running actor whose owner is the first driver in the cluster that ran the very first workflow execution. Therefore, for new drivers that run workflows, logs are not properly published back to the driver: logs are saved and published based on `job_id`, and the `job_id` is always the first driver's, since ownership goes first driver -> `WorkflowManagementActor` -> workflow executions via remote functions.
To solve this, during workflow execution we pass the actual driver's `job_id` along with the execution and reconfigure the logging files on each worker that runs the remote functions. Note that this must be done in multiple places, since a workflow task is executed by more than one remote function, running in different workers.
This PR fixes a bug: when a task has been pushed to a core worker but hasn't yet been scheduled to run, cancel is not invoked, which leads to the get request hanging forever.
The fix is to call `Cancel` in this case as well.
The previous implementation of the reporting logic in `HuggingFaceTrainer` had a few edge cases that caused the training iterations and measured epochs to diverge. The new implementation should ensure that reporting is consistent.
Currently, at the ServeHead level, the code talks to the Serve API and the controller to perform deployment and cleanup. With this PR, the deployment cleanup logic is hidden inside `serve.run()` for code cleanliness and easier refactoring in the future.
This PR modifies the KubeRay e2e autoscaling test so that one of its scaling commands is sent via the Ray Job Submission API.
This validates that the Ray Job Submission API works with KubeRay and, in particular, that the Ray Dashboard is correctly exposed.
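For reference, a scaling command can be submitted through the Job Submission API roughly as follows (the dashboard address and entrypoint are placeholders; the e2e test uses its own script):

```python
from ray.job_submission import JobSubmissionClient

# The Ray Dashboard address exposed by the KubeRay head service (placeholder).
client = JobSubmissionClient("http://127.0.0.1:8265")

# Submit a scale-up request as the job entrypoint (placeholder command).
job_id = client.submit_job(
    entrypoint=(
        'python -c "import ray; from ray.autoscaler.sdk import request_resources;'
        ' ray.init(); request_resources(num_cpus=4)"'
    ),
)
print(client.get_job_status(job_id))
```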