Why are these changes needed?
Also:
* Add validation to make sure multi-GPU and micro-batching are not used together (see the sketch after this list).
* Update the A2C learning test to hit the microbatching branch.
* Minor comment updates.
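A minimal sketch of the kind of validation added, assuming the A2C config exposes `num_gpus` and `microbatch_size` (the actual check in RLlib may differ in detail):

```python
# Illustrative sketch only, not the actual RLlib implementation.
def validate_config(config: dict) -> None:
    # Microbatching accumulates gradients before applying a single update;
    # combining it with multi-GPU training is not supported, so fail fast.
    if config.get("microbatch_size") is not None and config.get("num_gpus", 0) > 1:
        raise ValueError(
            "A2C microbatching (`microbatch_size`) cannot be used together "
            "with multi-GPU training (`num_gpus` > 1)."
        )
```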
Currently running into an issue:
Cluster startup Failed. Error: RuntimeError: botocore.exceptions.ClientError: An error occurred (InvalidBlockDeviceMapping) when calling the RunInstances operation: Volume of size 202GB is smaller than snapshot 'snap-02c4e6a0ad06cf3d6', expect size >= 400GB
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
Following up from #27098, this PR renames the baseworker mixin and declutters training output by only logging for rank 0 actors.
Signed-off-by: Kai Fricke <kai@anyscale.com>
The heartbeat manager starts its own thread to run its background task, and that thread shares the same data structure used within HandleReportHeartbeat (heartbeats_). Given that, both methods should run in the same thread. This PR achieves that by running HandleReportHeartbeat within the io_service thread.
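Conceptually, the change looks like the following Python/asyncio analogue (the real code is C++ and posts work to an io_service; all names below are illustrative):

```python
# Conceptual Python/asyncio analogue of the change; the real code is C++ and
# posts work onto an io_service/io_context. All names here are illustrative.
import asyncio

class HeartbeatManager:
    def __init__(self, loop: asyncio.AbstractEventLoop):
        self.loop = loop       # single event loop, playing the io_service role
        self.heartbeats = {}   # shared state, now only touched from `loop`

    def handle_report_heartbeat(self, node_id: str) -> None:
        # May be called from any thread: post the mutation onto the loop so
        # `heartbeats` is only ever accessed from one thread.
        self.loop.call_soon_threadsafe(self.heartbeats.__setitem__, node_id, 0)

    async def detect_dead_nodes(self) -> None:
        # Background task scheduled on the same loop, so no lock is needed.
        while True:
            for node_id in list(self.heartbeats):
                self.heartbeats[node_id] += 1
            await asyncio.sleep(1)
```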
This change cuts off support for deprecated schema fields. It intentionally breaks backwards compatibility with old configs that set a global min_workers or use the head_node, worker_nodes, autoscaling_mode, initial_workers, target_utilization_fraction, or default_worker_node_type fields.
Co-authored-by: Alex <alex@anyscale.com>
Fix for an unintentional backwards-compatibility breakage from #25902:
the `job submit` API should still accept `job_id` as a parameter.
Signed-off-by: Alan Guo <aguo@anyscale.com>
fb54679 introduced a bug by calling ray.put in the remote _split_single_block. This changes the ownership of the split blocks from the driver to the worker that runs _split_single_block, which breaks the dataset's lineage requirement and fails the chaos test.
To fix the issue we need to ensure the split block refs are created by the driver, which we can achieve by returning the blocks as part of the function's return values.
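For context, a minimal sketch of Ray's ownership semantics (illustrative task names, not the actual Datasets code): objects created with ray.put inside a task are owned by the worker that ran the task, while objects returned from a task are owned by the caller.

```python
# Illustrative sketch of Ray object ownership, not the actual Datasets code.
import ray

ray.init()

@ray.remote
def split_with_put(block):
    # The worker executing this task calls ray.put, so the worker owns the
    # resulting objects; if that worker dies, lineage-based recovery breaks.
    return [ray.put(part) for part in block]

@ray.remote(num_returns=2)
def split_with_returns(block):
    # Returning the halves directly makes the caller (here, the driver) the
    # owner of the resulting ObjectRefs, preserving the lineage requirement.
    return block[: len(block) // 2], block[len(block) // 2 :]

worker_owned_refs = ray.get(split_with_put.remote([1, 2, 3, 4]))
left_ref, right_ref = split_with_returns.remote([1, 2, 3, 4])
print(ray.get(left_ref), ray.get(right_ref))
```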
These Serve CLI commands start Serve if it's not already running:
* `serve deploy`
* `serve config`
* `serve status`
* `serve shutdown`
#27026 introduces the ability to specify a `host` and `port` in the Serve config file. However, once Serve starts running, changing these options requires tearing down the entire Serve application and relaunching it. This limitation is an issue because users can inadvertently start Serve by running one of the `GET`-based CLI commands (i.e. `serve config` or `serve status`) before running `serve deploy`.
This change makes `serve deploy` the only CLI command that can start a Serve application on a Ray cluster. The other commands have updated behavior when Serve is not yet running on the cluster.
* `serve config`: prints an empty config body.
```yaml
import_path: ''
runtime_env: {}
deployments: []
```
* `serve status`: prints an empty status body, with a new `app_status.status` value: `NOT_STARTED`.
```yaml
app_status:
  status: NOT_STARTED
  message: ''
  deployment_timestamp: 0
deployment_statuses: []
```
* `serve shutdown`: performs a no-op.
Seeing one more pattern of AWS S3 read error message related to credentials - https://gist.github.com/jiaodong/a805577c35e44e720ff10136f5ec6f6c, shared by @jiaodong. Change the regex pattern to match this error message as well, so that a more understandable error message is printed.
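The gist of the approach, as a sketch (the regex and hint text below are hypothetical placeholders, not the pattern actually used in the codebase):

```python
import re

# Hypothetical pattern and wording, only to illustrate the approach: match
# credential-related S3 failures and surface a friendlier hint.
AWS_CREDENTIAL_ERROR_PATTERN = re.compile(
    r"AWS Error.*(ACCESS_DENIED|Access Denied|No response body)", re.IGNORECASE
)

def explain_s3_error(message: str) -> str:
    if AWS_CREDENTIAL_ERROR_PATTERN.search(message):
        return (
            message
            + "\nHint: this usually means the AWS credentials on the nodes are "
            "missing or lack permission for the bucket. Check your credential "
            "setup (e.g. via `aws configure`) on all nodes."
        )
    return message
```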
Signed-off-by: Cheng Su <scnju13@gmail.com>
### Why are these changes needed?
This PR enhances workflow functionality to receive external events from a Serve-based HTTP endpoint. A workflow can then consume events asynchronously as they arrive.
### Design Logic
A `workflow.wait_for_event` node subscribes to the endpoint instantiated by a Ray Serve deployment of the class `http_event_provider.HTTPEventProvider`. The subscription is made through a helper class, `http_event_provider.HTTPListener`. `HTTPListener` implements the methods of `EventListener` to poll events from and confirm event checkpointing to `HTTPEventProvider`, before `HTTPEventProvider` acknowledges success or error to the event submitter.
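A minimal usage sketch, assuming the Ray DAG/workflow APIs and that `HTTPListener` accepts an `event_key` argument (this signature is an assumption; check the workflow docs for the exact interface):

```python
# Sketch only; the HTTPListener arguments and event payload shape are
# assumptions, and Serve plus the HTTPEventProvider deployment must already
# be running for the listener to have an endpoint to subscribe to.
import ray
from ray import workflow
from ray.workflow.http_event_provider import HTTPListener

@ray.remote
def handle_event(event_payload):
    # Runs once the external event for `event_key` has been received and
    # checkpointed by HTTPEventProvider.
    return f"received: {event_payload}"

event_node = workflow.wait_for_event(HTTPListener, event_key="my_event_key")
result = workflow.run(handle_event.bind(event_node), workflow_id="http_event_demo")
```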
### Architecture Improvement
The logic of this enhancement conforms to the existing workflow runtime design.
Why are these changes needed?
Node failure logs become extremely spammy with "Failed to get the resource load:" messages. This PR removes these messages from the driver-side logs and prints them less often.
RLlib outputs for verbose=3 are too long at the moment. This is the first step of beautifying this table output by putting sequences of length > 3 into flow style.
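A hedged sketch of the idea using PyYAML (not the exact RLlib code): short sequences keep block style, while longer ones are emitted inline in flow style.

```python
# Sketch only: render short sequences in block style but longer ones in
# compact flow style, which keeps the verbose=3 result table readable.
import yaml

def _represent_list(dumper: yaml.Dumper, data: list):
    # Sequences longer than 3 elements are emitted inline, e.g. [1, 2, 3, 4].
    return dumper.represent_sequence(
        "tag:yaml.org,2002:seq", data, flow_style=len(data) > 3
    )

yaml.add_representer(list, _represent_list)

print(yaml.dump({"short": [1, 2], "long": [1, 2, 3, 4, 5]}))
```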
Signed-off-by: Kai Fricke <kai@anyscale.com>
Before this PR, updating the default ray params in GBDT trainers was faulty. This PR addresses these issues and sets the default number of CPUs per actor for the LightGBM trainer to 2.
Signed-off-by: Kai Fricke <kai@anyscale.com>
RLlib's trainables produce a large number of metrics which make the log output with verbose=2 illegible. This PR introduces a private `_progress_metrics` property for trainables. If set, the trial progress callback will only print these metrics by default, unless overridden, e.g. with a custom `TrialProgressCallback`.
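For reference, users can already achieve similar filtering with a custom Tune callback; a rough sketch assuming the Ray 2.x Tune/AIR APIs (this is a generic callback, not the new `_progress_metrics` hook itself):

```python
# Rough sketch: print only a small whitelist of metrics per trial result.
from ray import tune
from ray.air import RunConfig, session

class SelectedMetricsCallback(tune.Callback):
    def __init__(self, metrics=("mean_reward", "training_iteration")):
        self.metrics = metrics

    def on_trial_result(self, iteration, trials, trial, result, **info):
        shown = {k: result[k] for k in self.metrics if k in result}
        print(f"[{trial.trial_id}] {shown}")

def my_trainable(config):
    for i in range(3):
        session.report({"mean_reward": i * config["lr"]})

tuner = tune.Tuner(
    my_trainable,
    param_space={"lr": 0.1},
    run_config=RunConfig(callbacks=[SelectedMetricsCallback()]),
)
tuner.fit()
```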