If you read a folder with differently-sized images, `ImageFolderDatasource` errors. This PR fixes the issue by resizing images to a user-specified size.
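For illustration, a minimal sketch of the intended usage, assuming the user-specified size is passed as a `size` argument to `ImageFolderDatasource` (the parameter name and the `(height, width)` convention are assumptions, not confirmed by this description):

```python
import ray
from ray.data.datasource import ImageFolderDatasource

# Hypothetical usage: `size` is assumed to be the new user-specified target size.
# Every image under `root` is resized to (64, 64), so a folder of
# differently-sized images can be read into one dataset without errors.
ds = ray.data.read_datasource(
    ImageFolderDatasource(),
    root="/path/to/image_folder",
    size=(64, 64),
)
```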
## Why are these changes needed?
This PR does two things:
1. When `--detail` is specified, set the default output format to YAML.
2. It seems to take 5 seconds to register the head node with the API server (node info is fetched every 5 seconds, and when the API server has just started, the head node is not yet registered with the GCS). This PR shortens the node ping interval until the head node is registered with the API server.
## Related issue number
Closes https://github.com/ray-project/ray/issues/26939
This PR:
- Creates a new chapter in the docs titled "Ray Clusters (Under Construction)".
- Makes the Ray Clusters docs follow the same structure as the other docs (https://diataxis.fr/).
- Will eventually replace the old chapter.

I want to merge this now so that @DmitriGekhtman can put his KubeRay docs into the new structure.
Signed-off-by: Cade Daniel <cade@anyscale.com>
Signed-off-by: Amog Kamsetty <amogkamsetty@yahoo.com>
* Revert "Revert "[Train] Add support for handling multiple batch data types for prepare_data_loader (#26386)" (#26483)"
This reverts commit e6c04031fd.
## Why are these changes needed?
This PR ensures that workflow can work properly with Ray client.
Regular workflow tests will (also) be run under client mode (as a pytest parameter). Some tests are moved and reorganized, because the Ray client tests require starting the cluster, so some tests require isolation or related changes.
Tests that literally take down the cluster are not run with the Ray client, since the Ray client would fail in this scenario.
Limitations of Ray Workflow under Ray client are noted in the doc.
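As a rough illustration of what now works, here is a minimal sketch of running a workflow through the Ray client; the `bind`/`workflow.run` names follow the Ray 2.x DAG-based workflow docs, and the address is a placeholder:

```python
import ray
from ray import workflow

# Connect through the Ray client (address is a placeholder).
ray.init("ray://<head-node-host>:10001")

@ray.remote
def add(a: int, b: int) -> int:
    return a + b

# Run a simple task DAG as a durable workflow over the client connection.
result = workflow.run(add.bind(1, 2), workflow_id="client_demo")
print(result)  # 3
```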
## Related issue number
Closes #21595
Signed-off-by: Amog Kamsetty <amogkamsetty@yahoo.com>
The latest PyTorch release has wheels for CUDA 11.6. Per user request, this adds a CUDA 11.6 image to our build pipeline.
Currently, restoring from cloud URIs does not work for Tuner() objects. With this PR, e.g. `Tuner.restore("s3://bucket/exp")` will work.
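For example (the experiment URI is a placeholder), restoring and resuming a run directly from cloud storage should now work:

```python
from ray.tune import Tuner

# Restore an experiment directly from a cloud URI and resume it;
# previously this only worked with local experiment directories.
tuner = Tuner.restore("s3://bucket/exp")
results = tuner.fit()
```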
Signed-off-by: Kai Fricke <kai@anyscale.com>
Signed-off-by: Alan Guo <aguo@anyscale.com>
## Why are these changes needed?
Reduces the memory footprint of the dashboard.
Also adds some cleanup to the errors data.
Also cleans up the actor cache by removing dead actors from it.
The dashboard UI no longer allows you to see logs for all workers on a node at once; you must click into each worker's logs individually.
<img width="1739" alt="Screen Shot 2022-07-20 at 9 13 00 PM" src="https://user-images.githubusercontent.com/711935/180128633-1633c187-39c9-493e-b694-009fbb27f73b.png">
## Related issue number
Fixes #23680, fixes #22027, fixes #24272
Signed-off-by: Dmitri Gekhtman <dmitri.m.gekhtman@gmail.com>
For KubeRay, this PR:
- Disables the autoscaler's RPC drain of worker nodes prior to termination.
- Disables the autoscaler's termination of nodes disconnected from the GCS.
# Why are these changes needed?
The dashboard can display the message "`<actor>` cannot be created because the Ray cluster cannot satisfy its resource requirements" in cases where the runtime env setup has actually stalled. This PR updates the message to also mention the possibility that the runtime env setup failed.
This PR also adds a tip to the Job Submission doc saying that if a job is stalled in `PENDING`, the runtime env setup may have stalled, and adds a pointer to the log files, which should have more information.
Note that the runtime env setup cannot stall forever; it fails after 10 minutes. That timeout is a new feature added after the Ray 1.13 branch cut, so in Ray <= 1.13 the runtime env setup can still stall forever.
# Related issue number
Closes #26332
Signed-off-by: rickyyx <rickyx@anyscale.com>
# Why are these changes needed?
When we return fewer or incomplete results to users, there can be three reasons:
- Data is truncated at the data source (raylets -> API server).
- Data is filtered at the API server.
- Data is limited at the API server.

We are not distinguishing these three scenarios, but we should; this is why we assumed data was being truncated when it was actually filtered or limited.
This PR distinguishes these scenarios and prompts warnings accordingly.
# Related issue number
Closes #26570, closes #26923
When a task fails, it waits for the death info and then fails. The wait is 1s and the check runs every 1s. This is good for usability, but it causes issues in some cases because it delays the task return by at least 1s and at most 2s.
This PR introduces an early cut: when the timeout is set to 0, it returns immediately. The semantics don't change, and most users will still get the message.
## Why are these changes needed?
When the GCS restarts, the raylet sometimes needs a while to reconnect to the GCS; for example, in a Kubernetes environment it takes a while for the service to point at the new GCS. This PR tries to fix this by allowing a longer timeout for the first ping after the GCS restarts.
Once the GCS gets the first ping, it uses the regular timeout instead.
Currently, trainables try to sync temporary checkpoints up to and down from cloud storage, leading to errors. These errors come up e.g. with PBT, which heavily uses saving/restoring from objects.
Instead, we should not sync these temporary checkpoints up at all, and we should generally not sync down if a local checkpoint directory exists; this also prevents us from trying to sync down non-existent temporary checkpoint directories.
See #26714
Signed-off-by: Kai Fricke <kai@anyscale.com>
Calling e.g. `os.path.exists(checkpoint)` currently raises a TypeError, but we should make the error more explicit and guide users towards the correct API.
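To make the failure mode concrete, a small sketch; the `Checkpoint.from_dict` / `as_directory` calls are the AIR checkpoint API as I understand it, shown here as an assumed example of "the correct API":

```python
import os
from ray.air import Checkpoint

checkpoint = Checkpoint.from_dict({"step": 1})

# A Checkpoint object is not path-like, so passing it to os.path functions fails:
try:
    os.path.exists(checkpoint)
except TypeError as exc:
    print(f"Not path-like: {exc}")

# The intended pattern is to materialize the checkpoint to a directory first:
with checkpoint.as_directory() as checkpoint_dir:
    print(os.path.exists(checkpoint_dir))  # True
```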
Signed-off-by: Kai Fricke <kai@anyscale.com>
This PR just applies the changes from the following PRs:
- [Datasets] Automatically cast tensor columns when building Pandas blocks. (#26684), reverted by Revert "[Datasets] Automatically cast tensor columns when building Pandas blocks." (#26921)
- [AIR - Datasets] Fix TensorDtype construction from string and fix example. (#26904)
This fixes the test failures introduced in the originally reverted PRs.
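A small sketch of the behavior these PRs restore (the column name is illustrative): a Pandas column of fixed-shape ndarrays should be cast to the tensor extension dtype when Datasets builds Pandas blocks.

```python
import numpy as np
import pandas as pd
import ray

# Build a DataFrame whose column holds fixed-shape ndarrays.
df = pd.DataFrame({"image": [np.zeros((2, 2)) for _ in range(4)]})

# When Datasets builds Pandas blocks, the ndarray column should be cast
# automatically to the tensor extension dtype (TensorDtype).
ds = ray.data.from_pandas(df)
print(ds.schema())
```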
- Update the `cluster_activities` endpoint to use pydantic so we have better data validation (a sketch follows this list).
- Make `timestamp` a required field.
- Add pydantic to the `ray[default]` requirements.
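A hypothetical sketch of the kind of model this enables; the field names other than `timestamp` are illustrative, not the actual schema:

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

class ClusterActivity(BaseModel):
    # `timestamp` has no default, so pydantic rejects payloads that omit it.
    timestamp: float
    is_active: str
    reason: Optional[str] = None

try:
    ClusterActivity(is_active="ACTIVE")  # missing timestamp
except ValidationError as exc:
    print(exc)
```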
## Why are these changes needed?
Resubmitting #26869.
This PR was reverted due to failing tests; however, those failures were actually due to a dependency: #26950
Adding a requirements file to make it easier to set up your environment to run `repro-ci.py`.
**Usage:**
```bash
pip install -r ci/repro-ci-requirements.txt
python ci/repro-ci.py [args]
```
Signed-off-by: Matthew Deng <matt@anyscale.com>