As discussed in #24322, rename PinObjectID() to PinObjectIDs() so the function name matches its signature, which takes multiple object IDs. Also rename the RPC request/reply/method names to keep them consistent.
test_usage_stats was very flaky due to a runtime env setup error.
The test defined the runtime env `{"pip": ["ray[serve]"]}` in both ray.init() and ray.remote() for the actor. This is redundant but should be supported by runtime_env; however, it turns out to reveal a runtime_env bug: in this situation the env appears to be installed twice concurrently, causing flakiness.
I'll file a follow-up issue for the runtime env bug with more details and a simpler repro, and link it here. Until then, we should merge this PR to deflake CI. This PR defines the runtime_env only in ray.init() and removes the redefinition in ray.remote(); the actor still inherits the correct runtime environment, as sketched below.
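A minimal sketch of the fixed pattern (the actor and method names are illustrative):

```python
import ray

# Define the runtime_env once in ray.init(); do not redefine it in
# ray.remote() for the actor.
ray.init(runtime_env={"pip": ["ray[serve]"]})

@ray.remote  # no runtime_env here; the actor inherits it from ray.init()
class A:
    def ready(self):
        return True

a = A.remote()
assert ray.get(a.ready.remote())
```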
I tested manually by inspecting dashboard_agent.log locally. Before this PR, the virtualenv install commands were duplicated about 75% of the time, indicating the concurrent install. With this change, the commands were never duplicated across the 7-8 runs I tried, so this PR should deflake the test.
According to https://grpc.io/docs/guides/performance/, we should always re-use stubs and channels when possible. Accordingly, this PR shares channels between different services.
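For illustration, a minimal Python sketch of the channel-sharing pattern (the stub classes are hypothetical stand-ins for Ray's generated service stubs):

```python
import grpc

# Hypothetical generated stub classes standing in for different services.
class FooServiceStub:
    def __init__(self, channel):
        self.channel = channel

class BarServiceStub:
    def __init__(self, channel):
        self.channel = channel

# Create one channel and share it across stubs for different services,
# instead of opening a separate channel per service.
channel = grpc.insecure_channel("localhost:50051")
foo = FooServiceStub(channel)
bar = BarServiceStub(channel)
```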
The documentation link for troubleshooting `TypeError: Could not serialize the function ...` is broken. This PR replaces it with the correct URL.
During investigations for #24176, it was found that the majority of the memory used by the Raylet and core workers is due to gRPC client (core worker) and server (raylet) data structures for inflight PinObjectIDs RPCs. Instead of buffering the requests in gRPC, this PR buffers the ObjectIDs that need to be pinned inside RayletClient. This shows a significant reduction in the raylet's memory usage outside of object stores.
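A hypothetical Python sketch of the buffering idea (the real change is in C++; all names here are illustrative): queue the ObjectIDs in the client and keep at most one batched PinObjectIDs request in flight, instead of one in-flight gRPC request per object.

```python
class BufferingRayletClient:
    def __init__(self, send_rpc):
        self.send_rpc = send_rpc  # callable(batch, on_reply) performing the RPC
        self.pending = []         # ObjectIDs waiting to be pinned
        self.in_flight = False

    def pin_object_ids(self, object_ids):
        self.pending.extend(object_ids)
        self._maybe_flush()

    def _maybe_flush(self):
        # Keep at most one PinObjectIDs request in flight; the remaining
        # IDs stay buffered here rather than in gRPC data structures.
        if self.in_flight or not self.pending:
            return
        batch, self.pending = self.pending, []
        self.in_flight = True
        self.send_rpc(batch, on_reply=self._on_reply)

    def _on_reply(self):
        self.in_flight = False
        self._maybe_flush()  # flush anything buffered while in flight
```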
Also made minor cleanups in Raylet client:
- Move the object-creation abort error from ObjectBufferPool::AbortCreate() to its callsites, with hopefully more accurate reasons.
- C++ style cleanups.
For better debugging, we should print the installed pip packages in the buildkite annotations. Additionally, shorten the summary message to make the output less cluttered.
https://github.com/ray-project/ray/pull/14676 disabled the disk usage/total display for Ray nodes on K8s, because Ray nodes on K8s are run as pods, which in general do not use up the entire machine.
However, in some situations, it is useful to run one Ray pod per K8s node and report the disk usage.
This PR adds a flag to enable displaying disk usage in those situations.
We have several issues if DisconnectClient happens before AnnounceWorkerPort:
- A check failure when removing the io worker from registered_io_workers, since the io worker is only added to that set after AnnounceWorkerPort.
- num_starting_(io)_workers is not decremented.
All workflow tasks are executed as remote functions submitted from WorkflowManagementActor. WorkflowManagementActor is a detached, long-running actor whose owner is the first driver in the cluster to run a workflow. Therefore, for new drivers that run workflows, logs are not properly published back to the driver: logs are saved and published based on job_id, and that job_id is always the first driver's, since ownership goes first_driver -> WorkflowManagementActor -> workflow executions via remote functions.
To solve this, during workflow execution we pass the actual driver's job_id along with the execution and re-configure the log files on each worker that runs the remote functions. Note that this has to happen in multiple places, since a workflow task is executed with more than one remote function, running in different workers.
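A rough sketch of the approach (configure_log_file is a hypothetical helper here; the real plumbing lives in Ray's worker internals):

```python
import ray

def configure_log_file(job_id):
    # Hypothetical helper: re-point this worker's log files at the
    # submitting driver's job_id so logs are published to that driver.
    print(f"logging under job {job_id}")

@ray.remote
def run_workflow_step(actual_job_id, func, *args):
    # Re-configure logging on every worker that runs a workflow remote
    # function, since one workflow task spans multiple workers.
    configure_log_file(actual_job_id)
    return func(*args)
```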
This PR fixes a bug: when a task has been pushed to a core worker but has not yet been scheduled to run, `Cancel` is not called, which leaves the corresponding get request hanging forever. The fix is to call `Cancel` in this case as well.
The previous implementation of the reporting logic in HuggingFaceTrainer had a few edge cases that caused the training iterations and measured epochs to diverge. This new implementation should ensure that reporting is consistent.
Currently, the ServeHead level talks to the Serve API and the controller to do deployment and cleanup. This PR hides the deployment cleanup logic inside serve.run() for cleaner code that is easier to refactor in the future.
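For illustration, the user-facing flow would look roughly like this (assuming the deployment-graph serve.run API; the deployment itself is a placeholder):

```python
from ray import serve

@serve.deployment
def hello(request):
    return "hello"

# serve.run handles the deployment and the associated cleanup internally,
# so the caller no longer needs to talk to the controller for cleanup.
serve.run(hello.bind())
```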
This PR modifies the KubeRay e2e autoscaling test so that one of its scaling commands is sent via the Ray Job Submission API.
This validates that the Ray Job Submission API works with KubeRay and, in particular, that the Ray Dashboard is correctly exposed.
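For reference, a scaling command can be submitted through the Job Submission Python SDK roughly like this (the address and entrypoint are illustrative, not the exact ones used by the test):

```python
from ray.job_submission import JobSubmissionClient

# Address of the Ray Dashboard exposed by the KubeRay cluster (illustrative).
client = JobSubmissionClient("http://127.0.0.1:8265")
job_id = client.submit_job(
    entrypoint=(
        'python -c "import ray; ray.init();'
        ' from ray.autoscaler.sdk import request_resources;'
        ' request_resources(num_cpus=4)"'
    ),
)
print(client.get_job_status(job_id))
```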
Update the W&B Ray Tune integration to new standards. Add support for the wandb service, the soon-to-be-default way of combining multiprocessing with wandb run logging.
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
This PR modifies the external markdown logic in the doc build to restore the original file content after the build finishes. This ensures that the files are not accidentally committed.
This PR allows Ray to disable metrics collection entirely. This was nominally possible with RAY_enable_metrics_collection, but it didn't fully disable collection because the agent's metrics collection wasn't properly turned off. This PR also adds tests.
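A sketch of the intended usage, assuming the env var is picked up at startup like other RAY_-prefixed config:

```python
import os

# Must be set before Ray starts so both the core processes and the
# dashboard agent pick it up.
os.environ["RAY_enable_metrics_collection"] = "0"

import ray

ray.init()
```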
The local environment setup of release tests (in client tests) can sometimes update dependencies of the `anyscale` package to an unsupported version. By re-installing the `anyscale` package after local env setup, we make sure that we can connect to the cluster. Note that this may lead to incompatibilities with the test script, however.
Failing pytest summaries for flaky tests that eventually succeed are not always cleaned up properly: https://buildkite.com/ray-project/ray-builders-branch/builds/7292#_
This PR ensures we only print summaries when we have at least one summary file (and not just the header file).
Users get error messages from the client/server on actor failures, even if they have already wrapped the call in try/except. For example:
```
import ray

ray.init("ray://localhost:10001")
try:
    ray.get_actor("doesnotexist")
except ValueError:
    pass
```
This will still log `Caught schedule exception` and `Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.` This PR reduces the level of these logs to debug by default.