Many release tests are currently failing due to CUDA version incompatibilities. Pinning the base image to 1.12.1 seems to resolve the problem for the time being.
The AIR CI build has been failing on master since #25022.
#25022 moved the tests that require credentials, but we left the bazel command in the build pipeline. So even though all the tests are passing, the Buildkite stage itself was failing because it tries to run tests that require credentials, and those tests no longer exist in the directory. This is only a problem for the master build, since we don't run this command for PR builds.
This PR adds timeout and asyncio support for the internal KV. For now, this only applies to gcs_utils and not ray clients, since it is purely for ray-internal usage.
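A minimal sketch of the pattern, assuming a hypothetical async KV stub (the names below are not the actual gcs_utils API):

```
import asyncio

# Hypothetical sketch: wrap an async internal-KV call with a timeout.
# `kv_client.internal_kv_get` stands in for whatever async stub gcs_utils wraps.
async def internal_kv_get(kv_client, key: bytes, timeout_s: float = 5.0) -> bytes:
    try:
        return await asyncio.wait_for(kv_client.internal_kv_get(key), timeout=timeout_s)
    except asyncio.TimeoutError:
        raise TimeoutError(f"Internal KV get for {key!r} timed out after {timeout_s}s")
```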
We want to use `clangd` as the language server.
`clangd` is an awesome language server that has many features and is very accurate.
But it needs a `compile_commands.json` to work.
This PR adds a popular bazel rule to generate this file.
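For reference, a minimal WORKSPACE sketch, assuming the rule in question is Hedron's bazel-compile-commands-extractor (the commit below is a placeholder, not a pinned version):

```
# WORKSPACE sketch only; pin a real commit and sha256 in practice.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "hedron_compile_commands",
    # <commit> is a placeholder.
    url = "https://github.com/hedronvision/bazel-compile-commands-extractor/archive/<commit>.tar.gz",
    strip_prefix = "bazel-compile-commands-extractor-<commit>",
)

load("@hedron_compile_commands//:workspace_setup.bzl", "hedron_compile_commands_setup")

hedron_compile_commands_setup()
```

With that in place, `compile_commands.json` can typically be regenerated via `bazel run @hedron_compile_commands//:refresh_all`.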
As described in the related issue, using `model_weight` as the key throws an error.
This update points the user to use `model` as the key instead.
Co-authored-by: tamilflix <tamilflix30@gmail.com>
Redo for PR #24698:
This fixes two bugs in data locality (both sketched below):
1. When a dependent task was already in the CoreWorker's queue, we ran the data locality policy to choose a raylet before we added the first location for the dependency, so the dependency appeared to be unavailable anywhere.
2. The locality policy did not take spilled locations into account.
Added C++ unit tests and Python tests for the above.
Split test_reconstruction to avoid test timeout. I believe this was happening because the data locality fix was causing extra scheduler load in a couple of the reconstruction stress tests.
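Purely as an illustration of the intended ordering (all names below are hypothetical, not the actual CoreWorker code):

```
# Hypothetical sketch of the two locality fixes.
def on_dependency_available(obj_id, node_id, locality_index, spilled_node_id=None):
    # Fix 1: record the location *before* the locality policy picks a raylet,
    # so a dependency already queued in the CoreWorker does not look like it
    # is available nowhere.
    locality_index.add_location(obj_id, node_id)

    # Fix 2: a spilled copy also counts as a location for locality purposes.
    if spilled_node_id is not None:
        locality_index.add_location(obj_id, spilled_node_id)

    # Only now ask the policy to choose a raylet for the dependent task.
    return locality_index.pick_best_raylet_for(obj_id)
```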
When loading data from GCS, we treated detached actors the same as normal actors.
But a detached actor lives beyond its job's scope and should be loaded even when the job is finished.
This PR fixes that.
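For context, a detached actor is created roughly like this (standard Ray API; the actor and name below are just examples) and is expected to remain reachable after the creating job exits:

```
import ray

ray.init()

@ray.remote
class Counter:
    def __init__(self):
        self.value = 0

    def incr(self):
        self.value += 1
        return self.value

# Detached: not tied to the creating job, so GCS must keep (and reload) its
# metadata even after that job finishes.
Counter.options(name="global_counter", lifetime="detached").remote()

# A later job can look it up by name.
counter = ray.get_actor("global_counter")
print(ray.get(counter.incr.remote()))
```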
This fixes two bugs in Datasets push-based shuffle:
The scheduling strategy specified by the caller was not getting propagated correctly to the map stage in push-based shuffle. This is because the map and reduce stages shared the same `ray.remote` options dict, and we deleted the caller-specified scheduling strategy from the reduce stage's options so that we could specify a `NodeAffinitySchedulingStrategy` instead (see the sketch below).
We were only reporting partial stats for the merge stage.
Related issue number
Issue 1 is necessary for performance at large-scale (#24480).
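The first bug is the usual shared-mutable-dict pitfall; a minimal illustration of the problem and the fix (the helper below is hypothetical, not the actual Datasets code):

```
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy

# Hypothetical illustration of the shared options-dict bug and its fix.
def build_stage_options(caller_remote_args, merge_node_id):
    # Buggy version: mutating the shared dict also drops the caller's
    # scheduling strategy for the *map* stage:
    #   reduce_args = caller_remote_args
    #   del reduce_args["scheduling_strategy"]

    # Fixed version: copy before overriding, so the map stage keeps the
    # caller-specified strategy.
    map_args = dict(caller_remote_args)
    reduce_args = dict(caller_remote_args)
    reduce_args["scheduling_strategy"] = NodeAffinitySchedulingStrategy(
        node_id=merge_node_id, soft=True
    )
    return map_args, reduce_args
```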
This makes it possible to use an NFS file system that is shared on a cluster for runtime_env working directories.
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
Co-authored-by: Eric Liang <ekhliang@gmail.com>
Closes #24475
The current deployment graph has big perf issues compared with using a plain deployment handle, mostly because of the overhead of the DAGNode traversal mechanism. We need this mechanism to power the DAG API, especially for deeply nested objects in args where we rely on pickling; but it means every `DAGNode` instance involved is re-created and replaced on each execution, which incurs overhead.
Some overhead from pickling and executing DAGNode Python code is inevitable, but it can be quite minimal. As I profiled earlier, pickling itself is quite fast for our benchmarks, on the order of microseconds.
Meanwhile, the elephant in the room is that DeploymentNode and its relatives do far more work in their constructors than necessary, slowing everything down. So the fix is as simple as:
1) Introduce a new set of executor dag node types that contain the absolute minimum of information: they only preserve the DAG structure with its traversal mechanism and the ability to call the relevant deployment handles (sketched below).
2) Add a simple new pass in our build() that generates executor nodes and replaces the original nodes with them, producing a final executor dag that runs the graph.
The current ray dag -> serve dag pass mixes in a lot of logic related to deployment generation and init args; in the longer term we should remove it, but our correctness depends on it, so I'd rather leave that to a separate PR.
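A minimal sketch of the executor-node idea, with hypothetical class and attribute names (not the actual Serve implementation):

```
# Hypothetical sketch: an executor node keeps only the DAG structure (children),
# the args bound at build time, and the deployment handle to call.
class ExecutorDAGNode:
    def __init__(self, handle, bound_args, children):
        self.handle = handle          # deployment handle to invoke
        self.bound_args = bound_args  # args captured at build time
        self.children = children      # upstream ExecutorDAGNodes

    def execute(self):
        # Resolve upstream nodes first, then fire this node's handle call.
        resolved = [child.execute() for child in self.children]
        return self.handle.remote(*self.bound_args, *resolved)
```

Nothing here re-creates DAGNode objects per request; execution only walks pre-built executor nodes and issues handle calls.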
### Current 10 node chain with deployment graph `.bind()`
```
chain_length: 10, num_clients: 1
latency_mean_ms: 41.05, latency_std_ms: 15.18
throughput_mean_tps: 27.5, throughput_std_tps: 3.2
```
### Using raw deployment handle without dag overhead
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.39, latency_std_ms: 4.57
throughput_mean_tps: 51.9, throughput_std_tps: 1.04
```
### After this PR:
```
chain_length: 10, num_clients: 1
latency_mean_ms: 20.35, latency_std_ms: 0.87
throughput_mean_tps: 48.4, throughput_std_tps: 1.43
```
This PR consolidates the Ray Train and Tune checkpoint managers. These concepts previously did something very similar but in different modules. To simplify maintenance in the future, we've consolidated the common core.
- This PR keeps full compatibility with the previous interfaces and implementations. This means that for now, Train and Tune will have separate CheckpointManagers that both extend the common core
- This PR prepares Tune to move to a CheckpointStrategy object (roughly sketched after the list below)
- In follow-up PRs, we can further unify interfacing with the common core, possibly removing any train- or tune-specific adjustments (e.g. moving to setup on init rather than at runtime for Ray Train)
The consolidation is split into three PRs:
1. This PR - adds a common checkpoint manager class.
2. #24772 - based on this PR, adds the integration for Ray Train
3. #24430 - based on #24772, adds the integration for Ray Tune
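For a rough sense of the shape of the common core (all names below are illustrative, not the final interface), the shared piece mostly needs to track checkpoints and keep the best ones according to a configurable strategy:

```
import heapq
from dataclasses import dataclass
from typing import Any, List, Optional, Tuple

# Illustrative sketch only; not the actual Train/Tune implementation.
@dataclass
class _CheckpointStrategy:
    num_to_keep: Optional[int] = None          # None keeps every checkpoint
    checkpoint_score_attribute: str = "loss"   # metric used for ranking
    checkpoint_score_order: str = "min"        # "min" or "max"

class _CommonCheckpointManager:
    def __init__(self, strategy: _CheckpointStrategy):
        self._strategy = strategy
        # Min-heap keyed so that the *worst* checkpoint is always popped first.
        self._heap: List[Tuple[float, int, Any]] = []
        self._counter = 0

    def register_checkpoint(self, checkpoint: Any, metrics: dict) -> None:
        score = metrics[self._strategy.checkpoint_score_attribute]
        if self._strategy.checkpoint_score_order == "min":
            score = -score  # negate so the worst (largest) value sorts first
        heapq.heappush(self._heap, (score, self._counter, checkpoint))
        self._counter += 1
        if (
            self._strategy.num_to_keep is not None
            and len(self._heap) > self._strategy.num_to_keep
        ):
            heapq.heappop(self._heap)  # subclasses would also delete it on disk
```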
Currently, the release test runner prefers the first successful version of a cluster env instead of the last version. But sometimes a cluster env may build successfully on Anyscale yet fail to launch a cluster (e.g. version 2 here), or new dependencies need to be installed, so a new version needs to be built. The existing logic always picks the first successful build and therefore never picks up the new cluster env version.
Although this is an edge case (tweaking cluster env versions, with the same Ray wheel or cluster env name), I believe it is possible for others to run into it.
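A minimal sketch of the intended selection logic (the build objects and fields below are hypothetical, not the actual release tooling):

```
from typing import List, Optional

# Hypothetical build record; the real objects come from the release tooling.
class Build:
    def __init__(self, revision: int, status: str):
        self.revision = revision
        self.status = status  # e.g. "succeeded", "failed"

def pick_cluster_env_build(builds: List[Build]) -> Optional[Build]:
    """Prefer the latest successful cluster env build, not the first one."""
    successful = [b for b in builds if b.status == "succeeded"]
    return max(successful, key=lambda b: b.revision, default=None)
```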
Also, avoid running most of the CI tests for changes under release/ray_release/.
Follow up: #24017
Briefly, the wandb service is still in an experimental stage and is not ready to be released as an integration without extensive testing. Hence, we want to roll back the update to the integration that we made recently, until this feature is ready to be shipped.