#25655 refactored syncing but introduced a regression: previously, directories of any size could be synced, but now only directories below the default limit of 1 GB can be. This PR fixes the regression, again allowing directories of any size to be synced.
With this PR, files added to a directory checkpoint that originated as a dict checkpoint are serialized and retained when `to_dict()` is subsequently called. This enables storing additional files, e.g. as needed by Ray Tune.
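For illustration, the round trip this enables looks roughly like the following (the file name and contents are made up):

```python
import os
from ray.air.checkpoint import Checkpoint

# Start from a dict checkpoint.
ckpt = Checkpoint.from_dict({"epoch": 1})

# Materialize it as a directory and add an extra file (e.g. Tune metadata).
path = ckpt.to_directory()
with open(os.path.join(path, "extra_metadata.txt"), "w") as f:
    f.write("additional trial metadata")

# With this PR, the extra file is serialized and survives the conversion back.
restored = Checkpoint.from_directory(path)
state = restored.to_dict()
```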
Signed-off-by: Kai Fricke <kai@anyscale.com>
We added the drop_columns() API to Datasets in #26200, so this PR updates the documentation (doc/source/data/examples/nyc_taxi_basic_processing.ipynb) to use the new API. It also fixes some minor typos found while proofreading the Datasets documentation.
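For reference, the new API drops the listed columns from the dataset (column names here are illustrative, not the notebook's actual schema):

```python
import ray

ds = ray.data.from_items([{"pickup": 1.0, "dropoff": 2.0, "tolls_amount": 0.0}])
# Drop the columns we no longer need instead of re-selecting the ones to keep.
ds = ds.drop_columns(["tolls_amount"])
print(ds.take(1))  # [{'pickup': 1.0, 'dropoff': 2.0}]
```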
Uses the new AIR Train API for examples and tests.
The `Result` object gains a new attribute, `log_dir`, pointing to the trial's `logdir`, allowing users to access TensorBoard logs and artifacts of other loggers.
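A minimal sketch of how a user would reach it (assuming the AIR `Tuner`/`session` APIs):

```python
from ray import tune
from ray.air import session

def train_fn(config):
    session.report({"loss": config["x"] ** 2})

tuner = tune.Tuner(train_fn, param_space={"x": 2.0})
result = tuner.fit()[0]   # a ray.air.result.Result
# TensorBoard event files and other logger artifacts live under this directory.
print(result.log_dir)
```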
This PR only deals with the "low-hanging fruit": tests that need substantial rewriting and the Train user guide are not touched. Those will be updated in follow-up PRs.
Tests and examples that concern deprecated features or are duplicated in AIR have been removed or disabled.
Requires https://github.com/ray-project/ray/pull/25943 to be merged first.
Alternative to #26356 - here we just pin raydp-nightly and resolve the dependency issues in follow-up PRs.
This is to quickly unblock CI.
Signed-off-by: Kai Fricke <kai@anyscale.com>
This PR unifies the semantics of some workflow APIs.
These APIs act on workflow tasks, so they can block for a long time. We therefore provide both blocking and non-blocking versions: `xxx` for the blocking API and `xxx_async` for the non-blocking one.
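As a hedged sketch of the convention (using the `run`/`run_async` pair as one example; the storage path and workflow IDs are illustrative):

```python
import ray
from ray import workflow

@ray.remote
def add(a: int, b: int) -> int:
    return a + b

# Illustrative local storage path for workflow state.
ray.init(storage="/tmp/ray_workflow_demo")

# Blocking variant: waits for the workflow to finish and returns its result.
result = workflow.run(add.bind(1, 2), workflow_id="add_blocking")

# Non-blocking variant: returns an ObjectRef immediately; resolve it later.
ref = workflow.run_async(add.bind(3, 4), workflow_id="add_nonblocking")
print(result, ray.get(ref))
```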
This is a simple example that shows how to do OCR with Ray Datasets. It includes:
- How to upload and download the dataset to and from S3
- How to run OCR on the dataset with tesseract
- How to use actors to keep a spaCy context alive and reuse it for NLP on the data (a rough sketch of this pattern follows below)
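As a rough sketch of the actor-based spaCy reuse (class name, model, and the `text` column are illustrative, not the notebook's actual code):

```python
import ray
from ray.data import ActorPoolStrategy

class SpacyNER:
    """Stateful callable: the spaCy pipeline is loaded once per actor and reused."""

    def __init__(self):
        import spacy
        self.nlp = spacy.load("en_core_web_sm")

    def __call__(self, batch):
        batch["num_entities"] = [len(self.nlp(text).ents) for text in batch["text"]]
        return batch

ds = ray.data.from_items([{"text": "Ray was started at UC Berkeley."}])
# An actor pool keeps the model in memory across batches instead of reloading it per task.
ds = ds.map_batches(SpacyNER, compute=ActorPoolStrategy(2, 4), batch_format="pandas")
print(ds.take(1))
```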
Co-authored-by: Clark Zinzow <clarkzinzow@gmail.com>
The existing docs didn't work for me, and these updates did. 🤷‍♀️ I selectively pulled these steps out of the CI (which ideally would just be runnable locally).
In Ray 2.0, we want to achieve API server HA.
Previously, the Serve endpoints lived on the head node.
This PR moves the Serve endpoints to the dashboard agents, so they become highly available thanks to the multiple replicas of the dashboard agent.
When detecting resource capacities to advertise to Ray, the Ray operator takes requests into account. This doesn't make sense: taking the min of requests and limits definitely doesn't make sense. Only limits should be considered.
Revert to using nightly base images instead of pinning to 1.12.1. Pinning the Docker image has led to uncaught errors in the past. Instead, we should use nightly to make sure release tests work on the most up-to-date versions of Docker/cluster environments. If there are any test failures, the underlying issues should be fixed rather than pinning the Docker image.
Co-authored-by: Kai Fricke <kai@anyscale.com>
* Avoid depending on `CoreWorkerProcess::GetCoreWorker()` in local mode.
* Fix bug in `LocalModeObjectStore::PutRaw`.
* Remove unused `TaskExecutor::Execute` method.
* Use `Process::Wait` instead of sleep when invoking `ray start` and `ray stop`.
I'm seeing these errors:
(raylet, ip=172.31.58.175) [2022-06-28 03:48:42,324 E 702775 702805] (raylet) file_system_monitor.cc:105: /mnt/data0/ray is over 0.95% full, available space: 50637901824. Object creation will fail if spilling is required.
The message should say 95% instead of 0.95%.
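For illustration only (the real fix is in the C++ `file_system_monitor`), the value is a fraction of capacity and needs to be converted before the `%` sign is appended:

```python
capacity_fraction = 0.95  # fraction of disk used, as tracked by the monitor

# Current output: "... is over 0.95% full".
print(f"/mnt/data0/ray is over {capacity_fraction}% full")

# Intended output: "... is over 95% full".
print(f"/mnt/data0/ray is over {capacity_fraction:.0%} full")
```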
If Ray is compiled in debug mode:
* running `MetricsTest::testAddHistogram` crashes with the error message below:
```
BucketBoundaries::Explicit called with non-monotonic boundary list.
java: external/io_opencensus_cpp/opencensus/stats/internal/bucket_boundaries.cc:64: opencensus::stats::BucketBoundaries::Explicit(std::__debug::vector<double>)::<lambda()>: Assertion `false && "0"' failed.
```
* running `NamespaceTest::testIsolationInTheSameNamespaces` is very likely to fail with the error message below:
```
java.util.NoSuchElementException: No value present
at java.util.Optional.get(Optional.java:135)
at io.ray.test.NamespaceTest.lambda$testIsolationInTheSameNamespaces$2(NamespaceTest.java:39)
at io.ray.test.NamespaceTest.testIsolation(NamespaceTest.java:116)
at io.ray.test.NamespaceTest.testIsolationInTheSameNamespaces(NamespaceTest.java:36)
```
Currently, the following message is printed even when the user is not directly using a Tune function. This is confusing and not actionable.
```
"`checkpoint_dir` in `func(config, checkpoint_dir)` is "
"being deprecated. "
"To save and load checkpoint in trainable functions, "
"please use the `ray.air.session` API:\n\n"
"from ray.air import session\n\n"
"def train(config):\n"
" # ...\n"
' session.report({"metric": metric}, checkpoint=checkpoint)\n\n'
"For more information please see "
"https://docs.ray.io/en/master/ray-air/key-concepts.html#session\n"
```
The new logic checks whether `base_trainer` is in the call stack and only emits the warning when it is not. This logic will be removed once we internally migrate to the `session` API.
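A rough sketch of the stack-inspection approach (helper name and warning text are illustrative):

```python
import inspect
import warnings

def _maybe_warn_checkpoint_dir_deprecation() -> None:
    # Skip the warning when the call originates from AIR's `base_trainer`,
    # i.e. the user is not calling the Tune function API directly.
    if any("base_trainer" in frame.filename for frame in inspect.stack()):
        return
    warnings.warn(
        "`checkpoint_dir` in `func(config, checkpoint_dir)` is being deprecated. "
        "Use the `ray.air.session` API to save and load checkpoints instead.",
        DeprecationWarning,
    )
```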
The test has been running for 1-2 months, and the overall observation is that it's not very useful for catching actual regressions; basically, we didn't notice any regression. Stop this test for now to save some resources.
To enable one storage backend to be shared by multiple Ray clusters, a special prefix is added to isolate the data between clusters: `<EXTERNAL_STORAGE_NAMESPACE>@`.
The namespace is given by the environment variable `RAY_external_storage_namespace` when starting the head node: `RAY_external_storage_namespace=1234 ray start --head`.
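For illustration, the isolation amounts to prefixing every storage key with the namespace, roughly like this (sketch only; the real prefixing happens inside the GCS/Redis client code):

```python
import os

def namespaced_key(key: str) -> str:
    # Each cluster prefixes its keys with "<EXTERNAL_STORAGE_NAMESPACE>@",
    # so clusters sharing one Redis DB never collide.
    namespace = os.environ.get("RAY_external_storage_namespace", "default")
    return f"{namespace}@{key}"

print(namespaced_key("JobCounter"))  # "1234@JobCounter" when the env var is set to 1234
```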
This flag is very important in an HA GCS environment. For example, with the Ray Serve operator, when the operator tries to bring up a new cluster, it's hard to just start a new DB, but it's relatively easy to generate a new cluster ID.
Another example: the user might only be able to maintain one HA Redis DB, and the namespace enables them to start multiple Ray clusters that share the same DB.
This config should be moved into the storage config in the future, once we build that.