When a trial is resumed, it is useful for the user to know from which checkpoint it was resumed.
Signed-off-by: sustr-equi <sustr@equilibretechnologies.com>
Co-authored-by: sustr-equi <sustr@equilibretechnologies.com>
In the previous PR #25883, a subtle regression was introduced in the case where data sizes blow up significantly.
For example, suppose you're reading jpeg-image files from a Dataset, which increase substantially in size when decompressed. On a small-core cluster (e.g., 4 cores), reading a 1GiB dataset produces 4-8 blocks of ~200MiB each, which can OOM the node once decompressed (e.g., a 25x size increase).
Previously, the heuristic of using parallelism=200 avoided this small-node problem. This PR fixes the issue by (1) raising the minimum parallelism back to 200, and (2) as an optimization, introducing a minimum block size threshold that allows fewer blocks when the data size is very small (<100KiB per block).
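For illustration, here is a minimal sketch of that heuristic; the constant and function names below are hypothetical and do not mirror the actual Datasets internals:
```python
# Hypothetical sketch only; constants and names are illustrative.
MIN_PARALLELISM = 200          # floor restored by this PR
MIN_BLOCK_SIZE = 100 * 1024    # 100KiB threshold for very small data

def choose_read_parallelism(requested_parallelism: int, data_size_bytes: int) -> int:
    parallelism = max(requested_parallelism, MIN_PARALLELISM)
    # If blocks would fall below ~100KiB, use fewer blocks instead.
    if data_size_bytes // parallelism < MIN_BLOCK_SIZE:
        parallelism = max(1, data_size_bytes // MIN_BLOCK_SIZE)
    return parallelism
```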
There are small typos in:
- doc/source/data/faq.rst
- python/ray/serve/replica.py
Fixes:
- Should read `successfully` rather than `succssifully`.
- Should read `pseudo` rather than `psuedo`.
Signed-off-by: Amog Kamsetty <amogkamsetty@yahoo.com>
As discussed offline, allow configurability for feature columns and keep columns in BatchPredictor for better scoring UX on test datasets.
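As a hedged sketch of the scoring UX this enables (illustrative only, not the actual BatchPredictor API; the column-handling below is an assumption based on this description):
```python
# Illustrative only: feed the model the feature columns and carry
# requested "keep" columns through to the prediction output.
import pandas as pd

def predict_with_columns(batch: pd.DataFrame, model, feature_columns, keep_columns):
    features = batch[feature_columns]
    out = pd.DataFrame({"predictions": model(features)})
    for col in keep_columns:
        out[col] = batch[col].values
    return out
```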
As discussed on Ray Slack (https://ray-distributed.slack.com/archives/CNECXMW22/p1657051287814569), the changes introduced in #18770 and #20822 have caused the concurrency limiting logic in BOHB to work incorrectly. This PR restores the old logic while making use of the set_max_concurrency API (as, e.g., HEBO does), maintaining backwards compatibility.
It should be noted that the old logic this PR reintroduces is essentially a hack and should be refactored in the future. This PR is intended to rapidly fix a bug causing search performance to be suboptimal.
Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
Embed console print to gather dogfooding feedback.
With the CLI:
```
(dev) ➜ ray git:(ricky/obs-feedback) ray list --help
Usage: ray list [OPTIONS] {actors|jobs|placement-
groups|nodes|workers|tasks|objects|runtime-envs}
List RESOURCE used by Ray.
RESOURCE is the name of the possible resources from `StateResource`,
i.e. 'jobs', 'actors', 'nodes', ...
==========ALPHA PREVIEW, FEEDBACK NEEDED ===============
State Observability APIs is currently in Alpha-Preview.
If you have any feedback, you could do so at either way as below:
1. Comment on API specification: https://tinyurl.com/api-spec
2. Report bugs/issues with details: https://forms.gle/gh77mwjEskjhN8G46
3. Follow up in #proj-state-obs-dogfooding slack channel.
==========================================================
```
With the SDK Python API:
```
In [3]: from ray.experimental.state.api import list_nodes
In [6]: list_nodes()
2022-07-11 19:45:18,973 INFO api.py:69 --
==========ALPHA PREVIEW, FEEDBACK NEEDED ===============
State Observability APIs is currently in Alpha-Preview.
If you have any feedback, you could do so at either way as below:
1. Comment on API specification: https://tinyurl.com/api-spec
2. Report bugs/issues with details: https://forms.gle/gh77mwjEskjhN8G46
3. Follow up in #proj-state-obs-dogfooding slack channel.
==========================================================
Out[6]:
[{'node_name': '172.31.47.143',
'node_ip': '172.31.47.143',
'resources_total': {'CPU': 8.0,
'object_store_memory': 9149783654.0,
'memory': 18299567310.0,
'node:172.31.47.143': 1.0},
'node_id': '513a3ca212403d234f6dfbe1f7523052637a06e0ee9e4502144f2da3',
'state': 'ALIVE'}]
```
This PR does two things.
(1) Use api_server_url as the address, which is consistent with other submission APIs.
(2) When the API does not respond in a timely manner, print a warning every 5 seconds. Below is an example. This is useful when the API responds slowly (e.g., when there are partial failures). Without this, users would see the API hang for 30 seconds, which is a pretty bad UX.
```
(0.12 / 10 seconds) Waiting for the response from the API server address http://127.0.0.1:8265/api/v0/delay/5.
```
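A rough sketch of the periodic-warning pattern (not the actual dashboard client code; names and intervals here are illustrative):
```python
import time

def wait_for_api_response(poll, address, timeout_s=30.0, warn_every_s=5.0):
    """Poll `poll()` until it returns a result, warning periodically."""
    start = time.time()
    next_warn = start
    while time.time() - start < timeout_s:
        result = poll()
        if result is not None:
            return result
        if time.time() >= next_warn:
            elapsed = time.time() - start
            print(f"({elapsed:.2f} / {timeout_s} seconds) Waiting for the "
                  f"response from the API server address {address}.")
            next_warn = time.time() + warn_every_s
        time.sleep(0.1)
    raise TimeoutError(f"No response from {address} within {timeout_s} seconds")
```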
Actor tasks are sometimes failed while their dependencies are still being resolved. This can cause hanging or crashes when we resolve the dependencies for a task that has already been canceled. It can lead to a crash from the ref counter when, for the same actor, actor task 1 depends on actor task 2. The sequence is:
1. Actor tasks 1 and 2 are queued; 1 depends on 2.
2. Fail actor task 1. We clear its refs, including its dependency on 2.
3. Fail actor task 2. We store an error as its return value. Since task 1 depends on it, we inline the dependency and try to clear task 1's refs again, causing a ref counting error because we already cleared them in step 2.
This PR fixes the issue by canceling dependency resolution for tasks before failing them. This involves some refactoring of the LocalDependencyResolver. Most of the changes are for testing (split out the unit tests for LocalDependencyResolver into their own suite).
Related issue number
Closes #18908.
This is to limit the max number of HTTP requests the dashboard (API server) will accept before rejecting more requests.
This will make sure the observability requests do not overload the downstream systems (raylet/gcs) when delegating too many concurrent state observability requests to the cluster.
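As a sketch of the idea (illustrative only, not the actual dashboard code), capping in-flight requests with an asyncio semaphore in an aiohttp middleware might look like:
```python
import asyncio
from aiohttp import web

MAX_CONCURRENT_REQUESTS = 100  # illustrative limit, not the actual default

limiter = asyncio.Semaphore(MAX_CONCURRENT_REQUESTS)

@web.middleware
async def limit_concurrent_requests(request, handler):
    # Reject instead of queueing so raylet/GCS are not overloaded.
    if limiter.locked():
        return web.Response(status=503, text="Too many concurrent requests")
    async with limiter:
        return await handler(request)
```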
Fixes failing hyperopt notebook in CI (as found in #26410). The cause was a mismatch between keys in points to evaluate and the search space - now, an informative exception will be raised.
Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
Why are these changes needed?
There are directories that we don't lint / format. Ensure the same exclusions apply to the import sorting tool.
Enable sorting for python/experimental to showcase how to enable sorting for a directory, as we convert more directories to be automatically sorted by the tool.
See #23676 for context. This is another attempt at that as I figured out what's going wrong in `bazel test`. Supersedes #24828.
Now that there are Python 3.10 wheels for Ray 1.13 and this is no longer a blocker for supporting Python 3.10, I still want to make `bazel test //python/ray/tests/...` work for developing in a 3.10 env, and make it easier to add Python 3.10 tests to CI in future.
The change contains three commits with rather descriptive commit messages, which I repeat here:
Pass deps to py_test in py_test_module_list
The Bazel macro py_test_module_list takes a `deps` argument but completely
ignores it instead of passing it to `native.py_test`. This is fixed here, as
we are going to use the deps of py_test_module_list in BUILD files in later changes.
cpp/BUILD.bazel depended on the broken behaviour: it declared a dep on a
cc_library from a py_test, which doesn't work; see the upstream issue
https://github.com/bazelbuild/bazel/issues/701.
This is fixed by simply removing the (non-working) deps.
Depend on conftest and data files in Python tests BUILD files
Bazel requires that all the files used in a test run be represented in the
transitive dependencies specified for the test target. For py_test, that
means srcs, deps, and data.
Bazel enforces this constraint by creating a "runfiles" directory,
symbolically linking the files in the dependency closure into it, and running
the test in the "runfiles" directory, so that the test shouldn't see files
outside the dependency graph.
Unfortunately, the constraint was not enforced for a large number of
Python tests, because pytest (>=3.9.0, <6.0) resolves these symbolic
links during test collection and effectively "breaks out" of the
runfiles tree.
pytest >= 6.0 introduced a breaking change that removed this symbolic-link
resolving behaviour; see pytest pull request
https://github.com/pytest-dev/pytest/pull/6523 for more context.
Currently, we underspecify dependencies in a lot of BUILD files, which
blocks us from updating to newer pytest (for Python 3.10 support). This
change hopefully fixes all of them, or at least those in CI, by adding data
or source dependencies (mostly for conftest.py files) where needed.
Bump pytest version from 5.4.3 to 7.0.1
We want at least pytest 6.2.5 for Python 3.10 support, but not past
7.1.0 since it drops Python 3.6 support (which Ray still supports), thus
the version constraint is set to <7.1.
Updating pytest, combined with the earlier BUILD fixes, changed the ground
truth of a few error-message-based unit tests; these tests are updated to
reflect the change.
There are also two small drive-by changes to make test_traceback and
test_cli pass under Python 3.10. These were discovered while debugging CI
failures (on earlier Python versions) with a local Python 3.10 install. Expect
more such issues when adding Python 3.10 to CI.
This PR defaults the parallelism of Dataset reads to `-1`. In this case, the parallelism is determined according to the following rule (a rough sketch follows the list below):
- The number of available CPUs is estimated. If running in a placement group, the number of CPUs in the cluster is scaled by the size of the placement group relative to the cluster size. If not in a placement group, this is the number of CPUs in the cluster. If the estimated CPU count is less than 8, it is set to 8.
- The parallelism is set to the estimated number of CPUs multiplied by 2.
- The in-memory data size is estimated. If the parallelism would create in-memory blocks larger than the target block size (512MiB), the parallelism is increased until the blocks are < 512MiB in size.
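A minimal sketch of this rule, with the estimation helpers assumed as inputs (illustrative only, not the actual Datasets code):
```python
import math

TARGET_MAX_BLOCK_SIZE = 512 * 1024 * 1024  # 512MiB
MIN_ESTIMATED_CPUS = 8

def auto_parallelism(estimated_cpus: int, estimated_data_bytes: int) -> int:
    # Never assume fewer than 8 CPUs, then start with 2 blocks per CPU.
    cpus = max(estimated_cpus, MIN_ESTIMATED_CPUS)
    parallelism = cpus * 2
    # Increase parallelism so each in-memory block stays under 512MiB.
    min_blocks_for_size = math.ceil(estimated_data_bytes / TARGET_MAX_BLOCK_SIZE)
    return max(parallelism, min_blocks_for_size)
```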
These rules fix two common user problems:
1. Insufficient parallelism in a large cluster, or too much parallelism on a small cluster.
2. Overly large block sizes leading to OOMs when processing a single block.
TODO:
- [x] Unit tests
- [x] Docs update
Supersedes part of: https://github.com/ray-project/ray/pull/25708
Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
The pipeline will spill objects when splitting the dataset into multiple equal parts.
Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
Add the html template files in `ray/widgets/templates/` to the wheel to make sure the Jupyter widget that is displayed in `ray.init()` works for the Ray wheels.
#26442 didn't trigger doc tests (fixed with #26466). That PR broke LightGBM prediction with categorical variables; this PR fixes it.
In a follow-up we should specifically test prediction with categorical variables.
Signed-off-by: Kai Fricke <kai@anyscale.com>
When working with Ray Train, using the `ray.train.torch.prepare_data_loader` method with a dataset that returns a dictionary instead of a tuple from its `__getitem__` method causes issues.
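For reference, a minimal example of the pattern in question; the Ray-specific wrapping is shown commented out as an assumption of how it would be applied inside a training worker:
```python
import torch
from torch.utils.data import DataLoader, Dataset

class DictDataset(Dataset):
    """A dataset whose __getitem__ returns a dict rather than a tuple."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {"x": torch.tensor([float(idx)]), "y": torch.tensor([idx % 2])}

loader = DataLoader(DictDataset(), batch_size=4)
# Inside a Ray Train worker, this loader would be wrapped with
# ray.train.torch.prepare_data_loader(loader), which is where the issue arose.
for batch in loader:
    print(batch["x"].shape, batch["y"].shape)
```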
Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
Discards the return values of user-defined train loop functions to prevent deserialization issues with, e.g., torch models. Those return values are not used anywhere in AIR, so there is no loss of functionality.
This PR ensures that the new trial resources set by `ResourceChangingScheduler` are respected by the train loop logic by modifying the scaling config to match. Previously, even though trials had their resources updated, the scaling config was not modified, which led to, e.g., new workers not being spawned in the `DataParallelTrainer` even though resources were available.
In order to accomplish this, `ScalingConfigDataClass` is updated to allow equality comparisons with other `ScalingConfigDataClass`es (using the underlying PGF) and to create a `ScalingConfigDataClass` from a PGF.
Please note that this is an internal-only change intended to actually make `ResourceChangingScheduler` work. In the future, `ResourceChangingScheduler` should be updated to operate on `ScalingConfigDataClass`es instead of the PGFs it uses now. That will require a deprecation cycle.
This adds a parameterized `test_predict` test for all predictors to test prediction with all data batch types.
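An illustrative shape of such a parameterized test; the batch fixtures and dummy predictor below are hypothetical, not the actual test code:
```python
import numpy as np
import pandas as pd
import pytest

def dummy_predict(batch):
    """Stand-in for predictor.predict(): return one prediction per row."""
    if isinstance(batch, pd.DataFrame):
        n = len(batch)
    elif isinstance(batch, dict):
        n = len(next(iter(batch.values())))
    else:  # assume a numpy array
        n = batch.shape[0]
    return np.zeros(n)

@pytest.mark.parametrize(
    "batch",
    [
        pd.DataFrame({"a": [1.0, 2.0]}),
        np.array([[1.0], [2.0]]),
        {"a": np.array([1.0, 2.0])},
    ],
)
def test_predict(batch):
    # Each batch type should yield one prediction per row.
    assert len(dummy_predict(batch)) == 2
```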
Signed-off-by: Kai Fricke <kai@anyscale.com>