Why are these changes needed?
Also:
Add validation to make sure multi-GPU and micro-batching are not used together.
Update A2C learning test to hit the microbatching branch.
Minor comment updates.
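A minimal sketch of the kind of validation added; the config keys `microbatch_size` and `num_gpus` are assumed from this description, not copied from the actual RLlib code:
```python
# Hypothetical validation sketch: reject configs that combine micro-batching
# with multi-GPU training.
def validate_config(config: dict) -> None:
    if config.get("microbatch_size") and config.get("num_gpus", 0) > 1:
        raise ValueError(
            "A2C micro-batching is not supported together with multi-GPU "
            "training. Unset `microbatch_size` or use `num_gpus <= 1`."
        )
```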
There is a small bug in the docs example for custom command-based syncers. This PR fixes the example and adds a test covering these changes.
Signed-off-by: Kai Fricke <kai@anyscale.com>
More replacements of tune.run() in examples/docstrings for Tuner.fit()
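For reference, a sketch of the replacement pattern using the public Tune APIs:
```python
from ray import tune

def trainable(config):
    # A function trainable can simply return its final metrics.
    return {"score": config["lr"] * 2}

# Old API being replaced in examples/docstrings:
# tune.run(trainable, config={"lr": tune.grid_search([0.01, 0.1])})

# New API:
results = tune.Tuner(trainable, param_space={"lr": tune.grid_search([0.01, 0.1])}).fit()
```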
Signed-off-by: xwjiang2010 <xwjiang2010@gmail.com>
Co-authored-by: Kai Fricke <kai@anyscale.com>
Since usage stats are recorded from the dashboard (which will become the API server), they are not collected when the dashboard is not included (`include_dashboard=False`).
This PR fixes the issue by:
* Renaming dashboard -> API server (to avoid confusing users into thinking the dashboard is still started when `include_dashboard=False`).
* Only loading the modules that are irrelevant to the dashboard in the API server, so it has the same impact as having no dashboard.
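For context, the affected setting is the standard `ray.init` flag (illustrative usage):
```python
import ray

# With this change, usage stats can still be reported when the dashboard UI is
# disabled, because the API server keeps running.
ray.init(include_dashboard=False)
```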
Currently running into an issue:
Cluster startup Failed. Error: RuntimeError: botocore.exceptions.ClientError: An error occurred (InvalidBlockDeviceMapping) when calling the RunInstances operation: Volume of size 202GB is smaller than snapshot 'snap-02c4e6a0ad06cf3d6', expect size >= 400GB
The heartbeat manager starts its own thread to run its background task, and that thread shares the same data structure (`heartbeats_`) used within `HandleReportHeartbeat`. Therefore, both methods should run in the same thread. This PR achieves that by running `HandleReportHeartbeat` within the `io_service` thread.
This PR adds the `--keep-going` flag to the `make html` target for building the Ray docs. This means that when there is a lint failure in CI, the BuildKite log will show all lint failures instead of just the first one. Despite continuing past the first lint error, the build will still fail.
Signed-off-by: Cade Daniel <cade@anyscale.com>
Currently, Tune sometimes syncs temporary (and thus ephemeral) checkpoint directories to the driver. This is unnecessary and should be avoided. This PR
1. Adds an `exclude` argument to `sync_dir_between_nodes` to exclude files based on pattern matching (see the sketch below)
2. Adds three default patterns to exclude in node-to-driver syncing
3. Adds tests for all of these utilities
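A minimal sketch of how the new `exclude` argument could be used; the module path and the exclude patterns shown are illustrative assumptions, not the exact defaults added by this PR:
```python
# Illustrative only: module path, IPs, and patterns are assumptions.
from ray.tune.utils.file_transfer import sync_dir_between_nodes

worker_node_ip = "10.0.0.2"  # placeholder IPs
driver_node_ip = "10.0.0.1"

sync_dir_between_nodes(
    source_ip=worker_node_ip,
    source_path="/tmp/ray_results/my_trial",
    target_ip=driver_node_ip,
    target_path="/tmp/ray_results/my_trial",
    exclude=["*/checkpoint_tmp*", "*/save_to_object*"],
)
```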
Signed-off-by: Kai Fricke <kai@anyscale.com>
This change cuts off support for deprecated schema fields. It intentionally breaks backwards compatibility with old configs that set a global `min_workers` or use the `head_node`, `worker_nodes`, `autoscaling_mode`, `initial_workers`, `target_utilization_fraction`, or `default_worker_node_type` fields.
Co-authored-by: Alex <alex@anyscale.com>
fb54679 introduced a bug by calling `ray.put` in the remote `_split_single_block`. This changes the ownership of the split blocks from the driver to the worker that runs `_split_single_block`, which breaks the dataset's lineage requirement and fails the chaos test.
To fix the issue we need to ensure the split block refs are created by the driver, which we can achieve by returning the blocks from the function so that the refs are created as part of the task's return values.
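A standalone sketch of the ownership difference (not the actual Datasets code):
```python
import ray

@ray.remote(num_returns=2)
def split_block(block):
    mid = len(block) // 2
    # Returning the halves makes the *caller* (the driver) the owner of the
    # resulting object refs. Calling ray.put() here instead would make this
    # worker the owner, breaking lineage reconstruction after a worker failure.
    return block[:mid], block[mid:]

left_ref, right_ref = split_block.remote(list(range(10)))
```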
We previously added automatic tensor extension casting on Datasets transformation outputs so that users would not have to worry about tensor column casting; however, the current state creates several issues:
1. Not all tensors are supported, which means that we’ll need to have an opaque object dtype (i.e. ndarray of ndarray pointers) fallback for the Pandas-only case. Known unsupported tensor use cases:
a. Heterogeneous-shaped (i.e. ragged) tensors
b. Struct arrays
2. UDFs will expect a NumPy column and won’t know what to do with our TensorArray type. E.g., torchvision transforms don’t respect the array protocol (which they should), and instead only support Torch tensors and NumPy ndarrays; passing a TensorArray column or a TensorArrayElement (a single item in the TensorArray column) fails.
3. Implicit casting with object dtype fallback on UDF outputs can make the input type to downstream UDFs nondeterministic, where the user won’t know if they’ll get a TensorArray column or an object dtype column.
4. The tensor extension cast fallback warning spams the logs.
This PR:
1. Adds automatic casting of tensor extension columns to NumPy ndarray columns for Datasets UDF inputs, meaning the UDFs will never have to see tensor extensions and that the UDF input column types will be consistent and deterministic; this fixes both (2) and (3).
2. No longer implicitly falls back to an opaque object dtype when TensorArray casting fails (e.g. for ragged tensors), and instead raises an error; this fixes (4) but removes our support for (1).
3. Adds a global `enable_tensor_extension_casting` config flag, which is True by default, that controls whether we perform this automatic casting. Turning off the implicit casting provides a path for (1), where the tensor extension can be avoided if working with ragged tensors in Pandas land. Turning off this flag also allows users to explicitly control their tensor extension casting, if they want to work with it in their UDFs in order to reap the benefits of fewer data copies, more efficient slicing, stronger column typing, etc.
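A minimal sketch of opting out of the implicit casting; the attribute name is taken from this description and should be treated as an assumption:
```python
from ray.data.context import DatasetContext

# Assumed flag name per this description: disable the automatic casting to keep
# working with tensor extension (or object dtype) columns directly.
ctx = DatasetContext.get_current()
ctx.enable_tensor_extension_casting = False
```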
The Serve CLI and REST API always set the host to `0.0.0.0` and the port to Serve's default. This change adds `host` and `port` as top-level options in the Serve config file, so users can manually set the host and port of their Serve application to different values.
This change introduces a new Serve config file format:
```yaml
import_path: ...
runtime_env: ...
host: ...
port: ...
deployments: ...
...
```
`host` and `port` are optional and can be omitted. A running Serve application's `host` and `port` cannot be changed. If a user tries to `serve deploy` a config file whose `host` and `port` options differ from those of an already-running Serve application, `serve deploy` will fail without making any changes to the application. The user must `serve shutdown` their application and restart it with `serve deploy` to change the `host` and `port`.
**Follow-Up Items**
* The following CLI commands should **not** start Serve automatically. They should check whether Serve is running and perform some sort of no-op if it's not. That would alleviate the concern that the user starts Serve by accident through a `GET` request and needs to deal with default `host`/`port` options. Corresponding docs should also be updated.
* `serve status`
* `serve config`
* `serve shutdown`
Fix for an unintentional backwards-compatibility breakage from #25902.
The job submission API should still accept `job_id` as a parameter.
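A sketch of the call that should keep working; the `job_id` keyword is taken from this description and shown as illustrative usage:
```python
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://127.0.0.1:8265")
# Passing the legacy ``job_id`` keyword should still be accepted for
# backwards compatibility (illustrative usage).
client.submit_job(entrypoint="python my_script.py", job_id="my-job-1")
```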
Signed-off-by: Alan Guo aguo@anyscale.com
This is the first PR of #25963:
1. Moves the agent information from `internal KV` to `GCSNodeInfo`.
2. The raylet registers itself after the agent process finishes registering.
Motivation:
Storing agent information in `internal KV` and registering nodes in GCS (writing node information to `GCSNodeInfo`) are two asynchronous operations, which brings some complex timing problems, especially after `raylet` failover.
Signed-off-by: Amog Kamsetty amogkamsetty@yahoo.com
Tests for upstream ML libraries (xgboost-ray, ray-lightning, etc.) were recently refactored from `ray.util` to `ray.tests`. This caused them to run in Windows CI, but Windows CI does not have any ML dependencies. This PR disables these tests on Windows, which matches the previous behavior.
This PR
- only prints train_loop info strings (e.g. `train_loop_utils.py:298 -- Moving model to device: cpu`) for rank 0 workers when using Torch
- renames `BaseWorkerMixin` to `RayTrainWorker`, as the name comes up often in output and the new name is more meaningful
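An illustrative sketch of the rank-0-only logging behavior, assuming a rank helper like `train.world_rank()` (not the exact Ray Train code):
```python
# Illustrative only: gate informational logs on the world rank so that only
# the rank 0 worker prints them.
import logging
import ray.train as train

logger = logging.getLogger(__name__)

def maybe_log_device(device: str) -> None:
    if train.world_rank() == 0:
        logger.info(f"Moving model to device: {device}")
```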
Signed-off-by: Kai Fricke <kai@anyscale.com>
These Serve CLI commands start Serve if it's not already running:
* `serve deploy`
* `serve config`
* `serve status`
* `serve shutdown`
#27026 introduces the ability to specify a `host` and `port` in the Serve config file. However, once Serve starts running, changing these options requires tearing down the entire Serve application and relaunching it. This limitation is an issue because users can inadvertently start Serve by running one of the `GET`-based CLI commands (i.e. `serve config` or `serve status`) before running `serve deploy`.
This change makes `serve deploy` the only CLI command that can start a Serve application on a Ray cluster. The other commands have updated behavior when Serve is not yet running on the cluster.
* `serve config`: prints an empty config body.
```yaml
import_path: ''
runtime_env: {}
deployments: []
```
* `serve status`: prints an empty status body, with a new `app_status` `status` value: `NOT_STARTED`.
```yaml
app_status:
  status: NOT_STARTED
  message: ''
  deployment_timestamp: 0
deployment_statuses: []
```
* `serve shutdown`: performs a no-op.
Seeing one more pattern of AWS S3 read error message related to credentials - https://gist.github.com/jiaodong/a805577c35e44e720ff10136f5ec6f6c, shared by @jiaodong. This change updates the regex pattern to match that error message as well, so a more understandable error message is printed.
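A hypothetical sketch of the mechanism; the real regex, error text, and helper name differ (see the gist above for the actual message):
```python
import re

# Hypothetical pattern and helper: match credential-related S3 errors and
# surface a friendlier message instead of the raw client exception.
_CREDENTIAL_ERROR_RE = re.compile(r"AWS Error.*(ACCESS_DENIED|credential)", re.IGNORECASE)

def handle_read_error(err: Exception) -> None:
    if _CREDENTIAL_ERROR_RE.search(str(err)):
        raise RuntimeError(
            "Failed to read AWS S3 file(s). Please check that the file exists "
            "and that access credentials are configured correctly."
        ) from err
    raise err
```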
Signed-off-by: Cheng Su <scnju13@gmail.com>