Continuing docs overhaul, tune now has:
- [x] better landing page
- [x] a getting started guide
- [x] user guide was cut down, partially merged with FAQ, and partially integrated with tutorials
- [x] the new user guide contains guides to tune features and practical integrations
- [x] we rewrote some of the feature guides for clarity
- [x] we got rid of sphinx-gallery for this sub-project (only data and core are left), as it looks bad and is unnecessarily complicated anyway (plus, it makes the build slower)
- [x] sphinx-gallery examples are now moved to markdown notebooks, as started in #22030.
- [x] Examples are tested in the new framework, of course.
There's still a lot one can do, but this is already getting too large. Will follow up with more fine-tuning next week.
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
`__dealloc__` is not allowed to call Python code, and this leads to two problems:
- The data has already been cleaned up by the time it is called
- Deadlock if locks are used
This PR moves the implementation to the Python layer to avoid these issues.
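A minimal sketch of the Python-layer pattern, assuming a `weakref.finalize`-based cleanup; the class and method names below are illustrative, not the actual Ray internals:

```python
import threading
import weakref

class CoreWorkerWrapper:
    """Hypothetical Python-layer wrapper; names are illustrative only."""

    def __init__(self):
        self._lock = threading.Lock()
        # Register Python-level cleanup instead of relying on Cython's
        # __dealloc__, which must not call Python code and may run after the
        # data it needs has already been torn down.
        self._finalizer = weakref.finalize(self, self._shutdown, self._lock)

    @staticmethod
    def _shutdown(lock):
        # Ordinary Python code: it can safely take locks and call helpers.
        with lock:
            pass  # release native resources here

    def disconnect(self):
        # Explicit shutdown path; weakref.finalize runs its callback at most once.
        self._finalizer()
```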
We've had multiple issues that manifest as unexpected autoscaler logs about resource demands.
To make it easier to debug such issues, this PR adds a debug flag to allow logging the entire resource message used by the autoscaler as its source of truth about the Ray internals' resource usage.
If the env var AUTOSCALER_LOG_RESOURCE_BATCH_DATA=1 is set, the autoscaler will log the entire resource message.
If the declarative API issues a code change to a group of deployments at once, it needs to deploy the group of updated deployments atomically. This ensures that any deployment using another deployment's handle inside its own `__init__()` function can access that handle regardless of deployment order. This change adds deploy_group to the ServeController class, allowing it to deploy a list of deployments atomically. It also adds a new public API command, serve.deploy_group(), exposing the controller's functionality publicly so that atomic deployments can also be executed via the Python API.
Closes #21873.
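A hedged usage sketch of the new API (the exact signature of serve.deploy_group() and the handle-acquisition call are assumptions based on the description above):

```python
from ray import serve

@serve.deployment
class Downstream:
    def __call__(self, request):
        return "hello"

@serve.deployment
class Upstream:
    def __init__(self):
        # Safe even if Downstream was not deployed first, because the whole
        # group is deployed atomically.
        self.downstream = Downstream.get_handle()

    async def __call__(self, request):
        return await self.downstream.remote(request)

serve.start()
# Deploy both deployments as one atomic group.
serve.deploy_group([Downstream, Upstream])
```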
In https://github.com/ray-project/ray/pull/20341 the behavior of `pip` was changed to install the specified packages in the existing environment rather than in a new environment. This posed a problem when specifying Ray libraries like "ray[serve]" in the `pip` field, because the installer would install Ray at runtime and this new Ray would take precedence over the Ray already on the cluster. This could cause version mismatch issues. Skipping some details, the approach taken in that PR was essentially to parse the `pip` list and remove Ray.
However not every line in a `pip` `requirements.txt` file is a requirements specifier; a line can also just specify options, like `--extra-index-url my-index-url.com`.
This caused the parsing library to raise an exception when trying to parse such a line. This PR fixes the issue by catching the exception and skipping the line in this case, since a line like that cannot specify `ray`, and that's all we're looking for when parsing.
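A rough sketch of the parsing logic, using `packaging` for illustration rather than the exact library Ray uses:

```python
from packaging.requirements import Requirement, InvalidRequirement

def strip_ray_from_pip_list(pip_list):
    result = []
    for line in pip_list:
        try:
            if Requirement(line).name == "ray":
                continue  # drop "ray[serve]" etc. so the cluster's Ray is kept
        except InvalidRequirement:
            pass  # e.g. "--extra-index-url my-index-url.com": keep the line as-is
        result.append(line)
    return result
```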
Ray client currently supports connection strings for external modules of the format `"other_module://"`, but `ray job` commands don't support this format because the trailing `/` is removed. This updates the `ray job` commands so they also support this format.
Implement a TorchTensorboardProfilerCallback and corresponding TorchWorkerProfiler to support distributed PyTorch Profiler With TensorBoard integration.
When a Ray program first creates an ObjectRef (via ray.put or task call), we add it with a ref count of 0 in the C++ backend because the language frontend will increment the initial local ref once we return the allocated ObjectID, then delete the local ref once the ObjectRef goes out of scope. Thus, there is a brief window where the object ref will appear to be out of scope.
This can cause problems with async protocols that check whether the object is in scope or not, such as the previous bug fixed in #19910. Now that we plan to enable lineage reconstruction to automatically recover lost objects, this race condition can also be problematic because we use the ref count to decide whether an object needs to be recovered or not.
This PR avoids these race conditions by incrementing the local ref count in the C++ backend when executing ray.put() and task calls. The frontend is then responsible for skipping the initial local ref increment when creating the ObjectRef. This is the same fix used in #19910, but generalized to all initial ObjectRefs.
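A conceptual before/after timeline of the local ref count for a newly created ObjectRef (the real logic lives in the C++ core worker and the language frontends):

```python
# Before this PR:
#   backend:  create ObjectRef                   -> local ref count = 0  (briefly "out of scope")
#   frontend: receives ObjectID, increments ref  -> local ref count = 1
#   frontend: ObjectRef goes out of scope        -> local ref count = 0
#
# After this PR:
#   backend:  create ObjectRef, increment ref    -> local ref count = 1  (never observed at 0)
#   frontend: skips the initial increment        -> local ref count = 1
#   frontend: ObjectRef goes out of scope        -> local ref count = 0
```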
This is a PR to write better runtime env exceptions. After the 3 PRs are merged, we can entirely turn off the runtime env logs streamed to drivers.
This first PR only handles task exceptions.
TODO
- [x] Task (this PR)
- [ ] Actor
- [ ] Turn off runtime env logs & improve error msgs
1. If the node is selected based on locality, we always run the task on the node selected by locality if the node is available.
2. For the spread scheduling strategy, we always select the local node as the first raylet to request a lease from; no locality is involved.
The WandbTrainableMixin doesn't work with RLlib trainables, as they won't recognize the wandb parameter. Thus we should pop the wandb config before we initialize the rest of the trainable.
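A rough sketch of the intended behavior (the helper name is illustrative, not the exact Ray Tune code):

```python
def split_wandb_config(config):
    # Strip the wandb settings before the config reaches the RLlib trainable,
    # which would otherwise reject the unknown "wandb" key.
    config = dict(config)
    wandb_config = config.pop("wandb", None)
    return config, wandb_config
```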
We're introducing the usage of [MyST Notebooks](https://myst-nb.readthedocs.io/en/latest/index.html) here and demonstrating how it works by rewriting (and extending) the RLlib Serve tutorial. Benefits:
- [x] Write notebooks in markdown. Can be converted into other formats e.g. with `jupytext`
- [x] Tutorials like this have a binderhub link added to the top nav (launch button).
- [x] Notebooks get executed when docs are built, so it's impossible to have stale docs.
- [x] But locally those builds are cached so that you don't have to wait too long.
- [x] The notebook cell outputs can be shown, hidden or removed. In particular, we can now avoid adding expected code output as comments in our scripts (which might get outdated).
We're also clarifying #22022.
Old tutorial: [here](https://docs.ray.io/en/latest/serve/tutorials/rllib.html)
New tutorial (preview): [here](https://ray--22030.org.readthedocs.build/en/22030/serve/tutorials/rllib.html)
Co-authored-by: simon-mo <simon.mo@hey.com>
This adds some utility functions to make it easier to manipulate structured data in Datasets. While in principle you can already do this with map_batches, this makes it a little easier to test things out for development.
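For reference, this is roughly what the existing map_batches route looks like; the new helpers are meant to make small transformations like this less verbose (their names are omitted here since they aren't listed above):

```python
import ray

# A small structured dataset: records of the form {"value": i}.
ds = ray.data.from_items([{"value": i} for i in range(8)])

# Today, adding a derived column requires going through map_batches.
ds = ds.map_batches(
    lambda df: df.assign(double=df["value"] * 2),
    batch_format="pandas",
)
print(ds.take(2))
```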
The new code uses a file lock before reading and writing to `ports_by_node.json`.
Without it, multiple nodes may write to `ports_by_node.json` at the same time.
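A sketch of the locking pattern, assuming the `filelock` package (already a Ray dependency); the function and file layout here are illustrative:

```python
import json
from filelock import FileLock

def update_ports_by_node(path, node_id, ports):
    with FileLock(path + ".lock"):  # serialize concurrent readers/writers
        try:
            with open(path) as f:
                data = json.load(f)
        except FileNotFoundError:
            data = {}
        data[node_id] = ports
        with open(path, "w") as f:
            json.dump(data, f)
```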
Previously, local files corresponding to runtime env URIs were eagerly garbage collected as soon as there were no more references to them. In this PR, we store this data in a cache instead, so when the reference count for a URI drops to zero, instead of deleting it we simply mark it as unused in the cache. When the cache exceeds its size limit (default 10 GB), it will delete unused URIs until the cache is back under the size limit or there are no more unused URIs.
Design doc: https://docs.google.com/document/d/1x1JAHg7c0ewcOYwhhclbuW0B0UC7l92WFkF4Su0T-dk/edit
- Adds unit tests for caching and integration tests for working_dir caching
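A simplified sketch of the eviction policy described above (not the real class; sizes are tracked abstractly and deletion of local files is elided):

```python
class URICacheSketch:
    def __init__(self, max_total_size_bytes=10 * 1024**3):  # default 10 GB
        self.max_total_size_bytes = max_total_size_bytes
        self.sizes = {}      # uri -> size of the local files on disk
        self.unused = set()  # uris whose reference count has dropped to zero

    def add(self, uri, size_bytes):
        self.sizes[uri] = size_bytes
        self._evict_if_needed()

    def mark_unused(self, uri):
        # Instead of deleting immediately, remember the URI as evictable.
        self.unused.add(uri)
        self._evict_if_needed()

    def mark_used(self, uri):
        self.unused.discard(uri)

    def _evict_if_needed(self):
        # Evict unused URIs until we are under the limit or none are left.
        while sum(self.sizes.values()) > self.max_total_size_bytes and self.unused:
            uri = self.unused.pop()
            del self.sizes[uri]  # the real code also deletes the local files
```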
Proposal document: https://docs.google.com/document/d/1ln7_fUST18GOz4jJnI_zN00hfczXY48V5Ajy6fCmJCE/edit#
This PR changes the return value of ray.init when not in client mode to be a RayContext, which acts as a context manager and has the same public fields as ClientContext, as well as a disconnect method (which calls shutdown under the hood).
To prevent breaking scripts that rely on accessing the return value through dict methods, RayContext also subclasses collections.abc.Mapping (so it can be treated as an immutable dict). This behavior will be removed in 2.0, so deprecation warnings are raised when __getitem__ is used. To make migration simple, an additional dict field address_info is added with the same values as the original return value.
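A short usage sketch based on the description above (field names follow the old dict return value):

```python
import ray

ctx = ray.init()
print(ctx.address_info["node_ip_address"])  # migration path: plain dict field
print(ctx["node_ip_address"])               # still works, but emits a deprecation warning
ctx.disconnect()                            # calls ray.shutdown() under the hood

# RayContext is also a context manager:
with ray.init():
    pass
```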
It looks like handling of existing infeasible placement groups in the placement group manager didn't work properly. I don't know how we added this feature when it cannot pass this simple test case.
This is what happens:
(1) The PG is not schedulable because it is infeasible.
(2) A new node is added.
(3) After the new node is added, the placement group manager tries rescheduling all infeasible PGs.
(4) However, when we add a new node, we don't report its resources (this seems very weird; we report resources using a separate RPC here). So when (3) happens, the PG is still unschedulable.
This PR fixes the issue by adding the resource information when the new node is added.
Note that in the long term, we'd like to have a separate resource path from (4). This won't be addressed in this PR.
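A simplified sketch of the failing scenario, using Ray's test-only Cluster utility (timeouts and resource amounts are illustrative):

```python
import ray
from ray.cluster_utils import Cluster
from ray.util.placement_group import placement_group

cluster = Cluster()
cluster.add_node(num_cpus=1)
ray.init(address=cluster.address)

pg = placement_group([{"CPU": 2}])  # (1) infeasible: no node has 2 CPUs
cluster.add_node(num_cpus=2)        # (2) a new node is added
# (3)+(4): previously this could hang, because the manager rescheduled the PG
# with stale resource information for the new node; with the fix it succeeds.
ray.get(pg.ready(), timeout=60)
```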
With the addition of https://github.com/ray-project/ray/pull/20988, the native format becomes ambiguous. This PR proposes to auto-promote arrow to pandas blocks when the user specifies "native" format, to avoid uncertainty.
Report only memory used by primary copies of objects, since secondary copies are not evicted even if not needed on a node. Counting secondary copies prevented downscaling until all references to a shared object were removed.
Closes https://github.com/ray-project/ray/issues/21870
These changes add a set of improvements to enable automatic creation and update of CloudWatch alarms when provisioning AWS Autoscaling clusters. Successful implementation of these improvements will allow AWS Autoscaler users to:
- Set up alarms against Ray CloudWatch metrics to get notified about increased load or service outages.
- Update their CloudWatch alarm JSON configuration files at `ray up` execution time.
Notes:
This PR is a follow-up PR for #20266, which adds CloudWatch alarm support.
Currently, tune trainables wrapped with functools.partial will emit the following warnings:
INFO registry.py:66 -- Detected unknown callable for trainable. Converting to class.
WARNING experiment.py:295 -- No name detected on trainable. Using DEFAULT.
This PR propagates function names for functions wrapped with partial and treats them as regular functions when wrapping.
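For example (the function and parameter names here are illustrative):

```python
import functools
from ray import tune

def train_fn(config, multiplier):
    tune.report(score=config["x"] * multiplier)

# Previously this was converted to a class trainable named DEFAULT with the
# warnings above; now the trainable keeps the name "train_fn".
tune.run(functools.partial(train_fn, multiplier=2), config={"x": 1})
```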
We recently added tests to this file, and it now seems to occasionally exceed the 300-second timeout (before adding the tests, it took about 260~270 seconds, so this is natural).
This promotes the test to large so that we can avoid this issue. (Let me know if you think it would be better to shard the test even more.)
Instead of using a detached lifetime, tie the lifetime of `_DesignatedBlockOwner` to the lifetime of the context creator. Also, only create a `_DesignatedBlockOwner` if dynamic block splitting is enabled.
This PR adds pandas block format support by implementing `PandasRow`, `PandasBlockBuilder`, and `PandasBlockAccessor`.
Note that `sort_and_partition`, `combine`, `merge_sorted_blocks`, and `aggregate_combined_blocks` in `PandasBlockAccessor` redirect to the arrow block format implementation for now. They'll be implemented in a later PR.
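A simplified, hypothetical sketch of what a pandas-backed block builder does conceptually (not the real interface):

```python
import pandas as pd

class SimplePandasBlockBuilder:
    """Collects rows and builds a pandas.DataFrame block."""

    def __init__(self):
        self._rows = []

    def add(self, row: dict) -> None:
        self._rows.append(row)

    def build(self) -> pd.DataFrame:
        return pd.DataFrame(self._rows)

builder = SimplePandasBlockBuilder()
builder.add({"a": 1, "b": "x"})
builder.add({"a": 2, "b": "y"})
block = builder.build()  # the block itself is just a pandas.DataFrame
```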