This is a simple example that shows how to do OCR with Ray Datasets. It includes:
- How to upload and download the dataset to and from S3
- How to run OCR on the dataset with Tesseract
- How to use actors to keep a spaCy context in memory and reuse it for NLP on the data (a minimal sketch follows below)
Co-authored-by: Clark Zinzow <clarkzinzow@gmail.com>
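For reference, a minimal sketch of the actor pattern the example relies on, assuming spaCy and its `en_core_web_sm` model are installed; the actor name, method, and sample text are illustrative, not taken from the example itself.

```python
import ray
import spacy


@ray.remote
class SpacyActor:
    """Loads the spaCy model once and reuses it across calls."""

    def __init__(self):
        # Loading the model is expensive, so do it once per actor.
        self.nlp = spacy.load("en_core_web_sm")

    def extract_entities(self, text: str):
        # Run NLP on a single OCR'd text and return its named entities.
        doc = self.nlp(text)
        return [(ent.text, ent.label_) for ent in doc.ents]


ray.init()
actor = SpacyActor.remote()
print(ray.get(actor.extract_entities.remote("Ray was created at UC Berkeley.")))
```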
The existing docs didn't work for me and these updates did. 🤷♀️ I selectively pulled this stuff out of the CI (which ideally would just be runnable locally).
In Ray 2.0, we want to achieve API server HA.
Originally, the Serve endpoints lived on the head node.
This PR moves the Serve endpoints to the dashboard agents, so they become highly available thanks to the multiple replicas of the dashboard agent.
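For context, a heavily hedged sketch of how a client might reach the Serve API once it is hosted on the dashboard agents; the agent port (assumed default 52365) and the `/api/serve/deployments/` path are assumptions about the Ray 2.x REST API, not details taken from this PR.

```python
import requests

# Every node runs a dashboard agent, so any node's agent can answer this
# request once the Serve endpoints live on the agents (assumed port 52365).
resp = requests.get("http://127.0.0.1:52365/api/serve/deployments/")
resp.raise_for_status()
print(resp.json())
```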
When detecting resource capacities to advertise to Ray, the Ray operator takes resource requests into account. This doesn't make sense: taking the min of requests and limits definitely doesn't make sense. Only limits should be considered.
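A rough sketch of the intended behavior, under the assumption that container specs follow the Kubernetes pod-spec layout; the helper function and field handling are illustrative, not the operator's actual code.

```python
def detect_capacity(container_spec: dict) -> dict:
    """Advertise capacity based on resource limits only, ignoring requests."""
    limits = container_spec.get("resources", {}).get("limits", {})
    return {
        "CPU": limits.get("cpu"),
        "memory": limits.get("memory"),
    }


# Requests are ignored; only limits determine the advertised capacity.
spec = {"resources": {"requests": {"cpu": "1"}, "limits": {"cpu": "4", "memory": "8Gi"}}}
print(detect_capacity(spec))  # {'CPU': '4', 'memory': '8Gi'}
```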
Revert to using nightly base images instead of pinning to 1.12.1. Pinning the Docker image has led to uncaught errors in the past. Instead, we should use nightly images to make sure release tests run against the most up-to-date versions of Docker/cluster envs. If there are any test failures, the underlying issues should be fixed rather than pinning the Docker image.
Co-authored-by: Kai Fricke <kai@anyscale.com>
* Avoid depending on `CoreWorkerProcess::GetCoreWorker()` in local mode.
* Fix bug in `LocalModeObjectStore::PutRaw`.
* Remove unused `TaskExecutor::Execute` method.
* Use `Process::Wait` instead of sleep when invoking `ray start` and `ray stop`.
I'm seeing these errors:
```
(raylet, ip=172.31.58.175) [2022-06-28 03:48:42,324 E 702775 702805] (raylet) file_system_monitor.cc:105: /mnt/data0/ray is over 0.95% full, available space: 50637901824. Object creation will fail if spilling is required.
```
The message should say 95% instead of 0.95%.
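A minimal Python illustration of the formatting bug (the real fix belongs in `file_system_monitor.cc`); the variable name and message text are illustrative.

```python
used_fraction = 0.95  # fraction of disk capacity in use

# Buggy: prints the raw fraction with a percent sign -> "over 0.95% full"
print(f"/mnt/data0/ray is over {used_fraction}% full")

# Fixed: convert the fraction to a percentage -> "over 95% full"
print(f"/mnt/data0/ray is over {used_fraction * 100:.0f}% full")
```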
If Ray is compiled in debug mode,
* running `MetricsTest::testAddHistogram` crashes with the error message below:
```
BucketBoundaries::Explicit called with non-monotonic boundary list.
java: external/io_opencensus_cpp/opencensus/stats/internal/bucket_boundaries.cc:64: opencensus::stats::BucketBoundaries::Explicit(std::__debug::vector<double>)::<lambda()>: Assertion `false && "0"' failed.
```
* running `NamespaceTest::testIsolationInTheSameNamespaces` is very likely to fail with the error message below:
```
java.util.NoSuchElementException: No value present
at java.util.Optional.get(Optional.java:135)
at io.ray.test.NamespaceTest.lambda$testIsolationInTheSameNamespaces$2(NamespaceTest.java:39)
at io.ray.test.NamespaceTest.testIsolation(NamespaceTest.java:116)
at io.ray.test.NamespaceTest.testIsolationInTheSameNamespaces(NamespaceTest.java:36)
```
Currently, the following message is printed even when the user is not directly using a Tune function. This is confusing and not actionable.
```
"`checkpoint_dir` in `func(config, checkpoint_dir)` is "
"being deprecated. "
"To save and load checkpoint in trainable functions, "
"please use the `ray.air.session` API:\n\n"
"from ray.air import session\n\n"
"def train(config):\n"
" # ...\n"
' session.report({"metric": metric}, checkpoint=checkpoint)\n\n'
"For more information please see "
"https://docs.ray.io/en/master/ray-air/key-concepts.html#session\n"
```
The new logic checks whether `base_trainer` is in the call stack and only emits the warning when it is not. This logic will be removed once we migrate our internal code to the `session` API.
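A minimal sketch of this kind of call-stack check, using the standard-library `inspect` module; the helper name and module-name match are illustrative, not the exact implementation.

```python
import inspect


def _called_from_base_trainer() -> bool:
    """Return True if any frame in the current call stack comes from base_trainer."""
    for frame_info in inspect.stack():
        module = inspect.getmodule(frame_info.frame)
        if module and "base_trainer" in module.__name__:
            return True
    return False


def maybe_warn_deprecation():
    # Only warn end users who call the Tune function API directly.
    if not _called_from_base_trainer():
        print("`checkpoint_dir` in `func(config, checkpoint_dir)` is being deprecated. ...")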
The test has been running for 1-2 months, and the overall observation is that it's not very useful for catching actual regressions. Basically, we didn't notice any regressions. Stop this test for now to save some resources.
To enable one storage backend to be shared by multiple Ray clusters, a special prefix is added to isolate the data between clusters: `<EXTERNAL_STORAGE_NAMESPACE>@`.
The namespace is given by the environment variable `RAY_external_storage_namespace` when starting the head node: `RAY_external_storage_namespace=1234 ray start --head`.
This flag is very important in an HA GCS environment. For example, when the Ray Serve operator tries to bring up a new cluster, it's hard to just start a new DB, but it's relatively easy to generate a new cluster ID.
Another example: a user might only be able to maintain one HA Redis DB, and the namespace enables them to start multiple Ray clusters that share the same DB.
This config should be moved into the storage config in the future once we build that.
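A minimal sketch of the prefixing scheme described above; the helper functions and key layout beyond the `<namespace>@` prefix are illustrative.

```python
import os

# The namespace comes from the environment set at `ray start` time, e.g.
# RAY_external_storage_namespace=1234 ray start --head
NAMESPACE = os.environ.get("RAY_external_storage_namespace", "default")


def namespaced_key(key: str) -> str:
    # Prefix every key so clusters sharing the same backend cannot collide.
    return f"{NAMESPACE}@{key}"


def strip_namespace(stored_key: str) -> str:
    # Recover the original key for entries belonging to this cluster.
    prefix = f"{NAMESPACE}@"
    assert stored_key.startswith(prefix)
    return stored_key[len(prefix):]


print(namespaced_key("job:0001"))  # e.g. "1234@job:0001"
```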
This PR adds support for specifying an exception allowlist (`List[Exception]`) as the `retry_exceptions` argument, such that an application-level exception is only retried if it is in the allowlist.
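A short usage sketch of the new allowlist form; the task body and the choice of `ConnectionError` are illustrative.

```python
import random

import ray

ray.init()


@ray.remote(max_retries=3, retry_exceptions=[ConnectionError])
def flaky_task() -> int:
    # A ConnectionError raised here is retried up to 3 times; any other
    # exception type fails the task immediately because it is not allowlisted.
    if random.random() < 0.5:
        raise ConnectionError("transient network hiccup")
    return 42


print(ray.get(flaky_task.remote()))
```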
Adds a CI test for a 100TB shuffle.
There is a custom config for this nightly test to: (1) make sure each node gets 4TB of storage, (2) give the head node 0 CPUs, (3) give worker nodes half their actual vCPU count.
Related issue number
Closes #24480.
Fixes a bug in `wait_cluster` where we count the total number of nodes ever in the cluster rather than the currently alive nodes. This has caused infra/autoscaler failures (e.g. #26138) to be mislabeled as test failures (and probably messes with timing too).
Co-authored-by: Alex Wu <alex@anyscale.com>
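A minimal sketch of counting only alive nodes via the public `ray.nodes()` API; the helper name is illustrative and assumes a connection to the cluster has already been initialized.

```python
import ray


def count_alive_nodes() -> int:
    """Count only nodes that are currently alive, not every node ever seen."""
    # ray.nodes() also returns entries for dead nodes, so filtering on the
    # "Alive" field avoids over-counting after node restarts or failures.
    return sum(1 for node in ray.nodes() if node["Alive"])
```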
Add `/api/component_activities` to the dashboard snapshot router, which returns whether various Ray components are considered active.
This currently only contains a response entry for drivers; entries for other components will be added on request as follow-ups.
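A short sketch of querying the new endpoint, assuming the default dashboard address `http://127.0.0.1:8265`; the exact response schema beyond a per-component entry is not specified in this description.

```python
import requests

# The Ray dashboard listens on port 8265 by default.
resp = requests.get("http://127.0.0.1:8265/api/component_activities")
resp.raise_for_status()

activities = resp.json()
# Expected to include an entry for drivers (exact key name not specified here).
print(activities)
```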
Update documentation to use `session.report`.
Next steps:
1. Update our internal callers to use `session.report`, most importantly CheckpointManager and DataParallelTrainer.
2. Update `get_trial_resources` to use PGF notions to incorporate the requirements of ResourceChangingScheduler. @Yard1
3. After 2 is done, change all `tune.get_trial_resources` to `session.get_trial_resources`.
4. [internal implementation] Remove the special checkpoint handling logic from the HuggingFace trainer. Optimize the flow for checkpoint conversion with `session.report`.
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
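To illustrate the `session.report` pattern the documentation is moving to, a minimal sketch of a function trainable using the AIR session API; the metric name and config values are illustrative.

```python
from ray import tune
from ray.air import session


def train(config):
    for step in range(3):
        metric = config["lr"] * step
        # Replaces the deprecated `func(config, checkpoint_dir)` pattern.
        session.report({"metric": metric})


tuner = tune.Tuner(train, param_space={"lr": 0.01})
tuner.fit()
```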