Propagates environment variables to the BackendExecutor actor using runtime envs.
Also actually runs test_callbacks in CI.
Note that there is a known issue with runtime envs (#20587), but it only occurs if you shut down Ray and start a new session again.
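A minimal sketch of the propagation pattern, using a stand-in actor (the variable selection here is hypothetical; the `env_vars` field of `runtime_env` is the mechanism this PR relies on):

```python
import os
import ray

ray.init()

@ray.remote
class BackendExecutor:  # stand-in for Ray Train's BackendExecutor
    def get_env(self, name: str) -> str:
        return os.environ.get(name, "")

# Forward a driver-side env var to the actor through its runtime env.
env_vars = {"MY_FLAG": os.environ.get("MY_FLAG", "1")}  # illustrative variable
executor = BackendExecutor.options(runtime_env={"env_vars": env_vars}).remote()
assert ray.get(executor.get_env.remote("MY_FLAG")) == env_vars["MY_FLAG"]
```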
This PR reverts the previous revert with the following minor changes.
Worker capping is off by default.
The cap feature flag is on for the tests that explicitly require it.
Adds a RAY_DATA_DISABLE_PROGRESS_BARS env var to control the default progress bar behavior. The default value is "0". Setting it to "1" disables progress bars unless they are re-enabled via the set_progress_bars method.
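A minimal sketch of how the gate could be wired up (RAY_DATA_DISABLE_PROGRESS_BARS and set_progress_bars come from this PR; the module-level flag and the helper function are assumptions):

```python
import os

# Default read once from the environment: "1" disables progress bars.
_enabled = os.environ.get("RAY_DATA_DISABLE_PROGRESS_BARS", "0") != "1"

def set_progress_bars(enabled: bool) -> bool:
    """Override the env var default; returns the previous setting."""
    global _enabled
    old, _enabled = _enabled, enabled
    return old

def progress_bars_enabled() -> bool:  # hypothetical helper for callers
    return _enabled
```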
Addresses follow-up comments from https://github.com/ray-project/ray/pull/19863:
- Add short "Concepts" section
- Add more section headings to break up the text
- Add "Workflow: Local Files" example
- Add "Workflow: Library development" example
## Why are these changes needed?
The log publisher and subscriber used by the driver, the dashboard, and tests are refactored to make it easier to support Ray pubsub for logs. Actual support of Ray pubsub for logs will be added later in #20492.
This PR does not intend to introduce any behavior change.
## Related issue number
This PR is the last in the series that enables out-of-order execution. Previous PR: #20176
In this PR specifically, we add an execute_out_of_order option to the .options() call, which creates the actor with both an out_of_order_submit_queue and an out_of_order_scheduling_queue (see the sketch below).
This PR also adds @simon-mo's original test case.
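A minimal usage sketch, assuming the execute_out_of_order flag is passed through .options() as described above (the actor itself is illustrative):

```python
import ray

ray.init()

@ray.remote
class Worker:
    def process(self, i: int) -> int:
        return i

# Opt the actor into out-of-order execution at creation time; calls may
# then be scheduled and completed in any order rather than strictly FIFO.
worker = Worker.options(execute_out_of_order=True).remote()
results = ray.get([worker.process.remote(i) for i in range(5)])
```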
## Why are these changes needed?
This is one of a series of PRs to make CoreWorkerProcess thread-safe and the CoreWorker code easier to read (#19675, #19677, #19678, #19679).
Moves CoreWorkerOptions out of core_worker.h, which makes the code easier to read.
Next PR: #19677
The ray-ml image depends on numpy ~=1.19.2 via the tensorflow==2.6 requirement. Unfortunately, that is incompatible with Dataset (see the discussion in #20258).
This PR upgrades the numpy dependency only for the nightly test.
The default block size of 500 MiB seems too low for some common workloads, e.g. shuffling 500 GB. That creates 1000 blocks, which means 1 million (1000 × 1000) intermediate shuffle objects until we implement #20500.
Before this PR, `ds.iter_batches()` would yield no batches if `prefetch_blocks > ds.num_blocks()` was given, since the sliding window semantics were to return no windows if `window_size > len(iterable)`. This PR tweaks the sliding window implementation to always return at least one window, even if the one window is smaller than the given window size.
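A minimal sketch of the tweaked sliding-window semantics (the names are illustrative, not Ray's internal helper):

```python
from collections import deque
from typing import Iterable, Iterator, Tuple

def sliding_window(iterable: Iterable, window_size: int) -> Iterator[Tuple]:
    """Yield sliding windows of window_size items, always yielding at
    least one window even when the iterable has fewer items."""
    it = iter(iterable)
    window = deque(maxlen=window_size)
    for item in it:
        window.append(item)
        if len(window) == window_size:
            break
    yield tuple(window)  # possibly shorter than window_size
    for item in it:
        window.append(item)  # deque drops the oldest item automatically
        yield tuple(window)
```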
This should fix the long-running release tests that are failing to build their app configs.
It seems that `pip install ray[all]` now downgrades the Ray version. It's unclear why, but most likely a dependency now pins the Ray version. This PR explicitly installs the version of Ray that we want after `pip install ray[all]` to fix the problem.
XGBoost's train_small test timed out because of a CPU-borrowing feature related to placement groups. The root bug will be fixed in the coming weeks, but this PR makes the release test pass consistently by requesting 0 CPUs for the remote wrapper script.
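A minimal sketch of the zero-CPU request, assuming the wrapper runs as a Ray remote task (the function body is illustrative):

```python
import ray

ray.init()

# Requesting num_cpus=0 lets this lightweight wrapper be scheduled without
# competing for (or borrowing) CPUs reserved by placement groups.
@ray.remote(num_cpus=0)
def run_wrapper_script() -> str:
    # Illustrative stand-in for launching the actual release-test script.
    return "train_small finished"

print(ray.get(run_wrapper_script.remote()))
```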