Embed a console print to gather dogfooding feedback.
With the CLI:
```
(dev) ➜ ray git:(ricky/obs-feedback) ray list --help
Usage: ray list [OPTIONS] {actors|jobs|placement-
groups|nodes|workers|tasks|objects|runtime-envs}
List RESOURCE used by Ray.
RESOURCE is the name of the possible resources from `StateResource`,
i.e. 'jobs', 'actors', 'nodes', ...
==========ALPHA PREVIEW, FEEDBACK NEEDED ===============
State Observability APIs is currently in Alpha-Preview.
If you have any feedback, you could do so at either way as below:
1. Comment on API specification: https://tinyurl.com/api-spec
2. Report bugs/issues with details: https://forms.gle/gh77mwjEskjhN8G46
3. Follow up in #proj-state-obs-dogfooding slack channel.
==========================================================
```
When running the Python SDK API:
```
In [3]: from ray.experimental.state.api import list_nodes
In [6]: list_nodes()
2022-07-11 19:45:18,973 INFO api.py:69 --
==========ALPHA PREVIEW, FEEDBACK NEEDED ===============
State Observability APIs is currently in Alpha-Preview.
If you have any feedback, you could do so at either way as below:
1. Comment on API specification: https://tinyurl.com/api-spec
2. Report bugs/issues with details: https://forms.gle/gh77mwjEskjhN8G46
3. Follow up in #proj-state-obs-dogfooding slack channel.
==========================================================
Out[6]:
[{'node_name': '172.31.47.143',
'node_ip': '172.31.47.143',
'resources_total': {'CPU': 8.0,
'object_store_memory': 9149783654.0,
'memory': 18299567310.0,
'node:172.31.47.143': 1.0},
'node_id': '513a3ca212403d234f6dfbe1f7523052637a06e0ee9e4502144f2da3',
'state': 'ALIVE'}]
```
This PR does two things.
(1) Use `api_server_url` as the address, which is consistent with other submission APIs.
(2) When the API does not respond in a timely manner, print a warning every 5 seconds. Below is an example. This is useful when the API responds slowly (e.g., when there are partial failures). Without this, users would see the API hang for 30 seconds, which is a pretty bad UX.
```
(0.12 / 10 seconds) Waiting for the response from the API server address http://127.0.0.1:8265/api/v0/delay/5.
```
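For illustration, a minimal sketch of how such a periodic warning could be produced while an API call is outstanding; this is not the actual SDK code, and the `requests` dependency and helper name are assumptions:
```
import concurrent.futures
import time

import requests  # assumed available for this sketch


def get_with_progress_warning(url, timeout=30, warn_interval=5):
    """Issue a GET request and print a waiting message every few seconds."""
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(requests.get, url, timeout=timeout)
        while True:
            try:
                return future.result(timeout=warn_interval)
            except concurrent.futures.TimeoutError:
                elapsed = time.time() - start
                print(
                    f"({elapsed:.2f} / {timeout} seconds) Waiting for the "
                    f"response from the API server address {url}."
                )
```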
This adds a nightly release test asserting that autoscaling a cluster up and down during a Ray Tune run works.
Signed-off-by: Kai Fricke <kai@anyscale.com>
Actor tasks are sometimes failed while their dependencies are still being resolved. This can cause hangs or crashes when we resolve the dependencies for a task that has already been canceled. It can lead to a crash from the ref counter when, for the same actor, actor task 1 depends on actor task 2. The sequence is:
1. Actor tasks 1 and 2 are queued, and 1 depends on 2.
2. Fail actor task 1. We clear its refs, including its dependency on 2.
3. Fail actor task 2. We store an error as its return value. Since task 1 depends on it, we inline the dependency and try to clear task 1's refs again, causing a ref-counting error because we already cleared them in step 2.
This PR fixes the issue by canceling dependency resolution for tasks before failing them. This involves some refactoring of the LocalDependencyResolver. Most of the changes are for testing (split out the unit tests for LocalDependencyResolver into their own suite).
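Purely for illustration, a Python sketch of the ordering change (the real code lives in the C++ LocalDependencyResolver and related classes; all names below are hypothetical):
```
def fail_pending_actor_task(task, resolver, reference_counter):
    # Hypothetical sketch: cancel dependency resolution first, so that a
    # dependency inlined later (e.g. another failed task's error return)
    # cannot trigger a second reference cleanup for this task.
    resolver.cancel_dependency_resolution(task.task_id)
    reference_counter.remove_submitted_task_references(task)
    task.store_error_as_return_value()
```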
Related issue number
Closes #18908.
This limits the maximum number of HTTP requests the dashboard (API server) will accept before rejecting additional requests.
This makes sure observability requests do not overload the downstream systems (raylet/GCS) when too many concurrent state observability requests are delegated to the cluster.
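As an illustration of the idea (not the dashboard's actual implementation), a sketch of a concurrency limit using aiohttp middleware; the limit value and names are placeholders:
```
from aiohttp import web

MAX_IN_FLIGHT = 100  # placeholder limit
_in_flight = 0


@web.middleware
async def limit_concurrency(request, handler):
    global _in_flight
    if _in_flight >= MAX_IN_FLIGHT:
        # Reject right away rather than queueing more work for the
        # raylet/GCS behind the dashboard.
        return web.Response(status=503, text="Too many concurrent requests.")
    _in_flight += 1
    try:
        return await handler(request)
    finally:
        _in_flight -= 1


app = web.Application(middlewares=[limit_concurrency])
```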
Fixes the failing hyperopt notebook in CI (as found in #26410). The cause was a mismatch between keys in `points_to_evaluate` and the search space; now, an informative exception is raised.
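For illustration, a sketch of the kind of validation that produces such an exception (not the exact Tune code; names are placeholders):
```
def validate_points_to_evaluate(points_to_evaluate, search_space):
    """Raise an informative error if a point uses keys not in the space."""
    space_keys = set(search_space)
    for i, point in enumerate(points_to_evaluate):
        extra_keys = set(point) - space_keys
        if extra_keys:
            raise ValueError(
                f"points_to_evaluate[{i}] contains keys {sorted(extra_keys)} "
                f"that are not present in the search space "
                f"{sorted(space_keys)}."
            )
```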
Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
This PR adds a distributed benchmark test for PyTorch MNIST training. It compares training with Ray AIR against training with vanilla PyTorch.
In both cases, the same training loop is used. For Ray AIR, we use a TorchTrainer with 4 CPU workers. For vanilla PyTorch, we upload a training script and kick it off (using Ray tasks) in subprocesses on each node. In both cases, we collect the end-to-end runtime.
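A minimal sketch of the Ray AIR side of the comparison, assuming a Ray version with the AIR `TorchTrainer`; the training loop body is a placeholder for the shared MNIST loop:
```
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker(config):
    # Placeholder: the benchmark runs the same PyTorch MNIST training
    # loop here as in the vanilla PyTorch comparison.
    pass


trainer = TorchTrainer(
    train_loop_per_worker=train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=False),
)
result = trainer.fit()
```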
Signed-off-by: Kai Fricke <kai@anyscale.com>
The job-level ray_namespace is set in the task spec when an actor is created without explicitly specifying a namespace for the actor. Therefore, in the GCS actor manager, the ray_namespace in the task spec shouldn't be empty.
This PR removes the unreachable code path that was used to get the namespace from a local cache.
Why are these changes needed?
There are directories that we don't lint / format. Ensure they are also excluded from the import sorting tool.
Enable sorting for python/experimental to showcase how to enable sorting for a directory as we convert more of the directories to be automatically sorted by the tool.
See #23676 for context. This is another attempt at that, as I figured out what was going wrong in `bazel test`. Supersedes #24828.
Now that there are Python 3.10 wheels for Ray 1.13, this is no longer a blocker for supporting Python 3.10, but I still want to make `bazel test //python/ray/tests/...` work for developing in a 3.10 env, and make it easier to add Python 3.10 tests to CI in the future.
The change contains three commits with rather descriptive commit messages, which I repeat here:
Pass deps to py_test in py_test_module_list
The Bazel macro py_test_module_list takes a `deps` argument, but completely
ignores it instead of passing it to `native.py_test`. Fixing that, as we
are going to use the deps of py_test_module_list in BUILD files in later changes.
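An illustrative Starlark sketch of the fix (not the exact macro in the repo); the point is that `deps` is now forwarded to `native.py_test`:
```
def py_test_module_list(files, size, extra_srcs = [], deps = [], **kwargs):
    for file in files:
        name = file.replace(".py", "")
        native.py_test(
            name = name,
            size = size,
            main = file,
            srcs = extra_srcs + [file],
            deps = deps,  # previously dropped; now passed through
            **kwargs
        )
```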
cpp/BUILD.bazel depends on the broken behaviour: it adds a cc_library to
the deps of a py_test, which does not work; see the upstream issue:
https://github.com/bazelbuild/bazel/issues/701.
This is fixed by simply removing the (non-working) deps.
Depend on conftest and data files in Python tests BUILD files
Bazel requires that all the files used in a test run should be
represented in the transitive dependencies specified for the test
target. For py_test, it means srcs, deps and data.
Bazel enforces this constraint by creating a "runfiles" directory,
symbolically linking the files in the dependency closure, and running the
test in the "runfiles" directory, so that the test shouldn't see files
that are not in the dependency graph.
Unfortunately, the constraint does not hold for a large number of
Python tests, because pytest (>=3.9.0, <6.0) resolves these symbolic
links during test collection and effectively "breaks out" of the
runfiles tree.
pytest >= 6.0 introduces a breaking change and removed the symbolic link
resolving behaviour, see pytest pull request
https://github.com/pytest-dev/pytest/pull/6523 for more context.
Currently, we are underspecifying dependencies in a lot of BUILD files,
which blocks us from updating to a newer pytest (for Python 3.10
support). This change hopefully fixes all of them, or at least those in
CI, by adding data or source dependencies (mostly for conftest.py files)
where needed.
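For example, a hypothetical BUILD entry (target names and paths are placeholders) that declares conftest.py and a data file so they end up in the runfiles tree:
```
py_test(
    name = "test_example",
    size = "small",
    srcs = ["test_example.py", "conftest.py"],
    data = ["test_data.json"],
    deps = ["//:ray_lib"],  # placeholder dependency
)
```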
Bump pytest version from 5.4.3 to 7.0.1
We want at least pytest 6.2.5 for Python 3.10 support, but not past
7.1.0 since it drops Python 3.6 support (which Ray still supports), thus
the version constraint is set to <7.1.
Updating pytest, combined with the earlier BUILD fixes, changed the ground
truth of a few error-message-based unit tests; these tests are updated to
reflect the change.
There are also two small drive-by changes to make test_traceback and
test_cli pass under Python 3.10. These were discovered while debugging CI
failures (on earlier Python versions) with a Python 3.10 install locally. Expect
more such issues when adding Python 3.10 to CI.
This PR defaults the parallelism of Dataset reads to `-1`. In this case, the parallelism is determined according to the following rules (a sketch of the heuristic follows the list):
- The number of available CPUs is estimated. If in a placement group, the number of CPUs in the cluster is scaled by the size of the placement group compared to the cluster size. If not in a placement group, this is the number of CPUs in the cluster. If the estimated number of CPUs is less than 8, it is set to 8.
- The parallelism is set to the estimated number of CPUs multiplied by 2.
- The in-memory data size is estimated. If the parallelism would create in-memory blocks larger than the target block size (512MiB), the parallelism is increased until the blocks are < 512MiB in size.
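A minimal sketch of the heuristic described above; the constants match the rules in this PR, but the function and argument names are illustrative, not the Dataset internals:
```
import math

TARGET_BLOCK_SIZE_BYTES = 512 * 1024 * 1024  # 512MiB
MIN_ESTIMATED_CPUS = 8


def choose_default_parallelism(estimated_cpus, estimated_data_size_bytes):
    cpus = max(estimated_cpus, MIN_ESTIMATED_CPUS)
    parallelism = cpus * 2
    # Increase parallelism until each in-memory block is under the
    # target block size.
    min_blocks = math.ceil(estimated_data_size_bytes / TARGET_BLOCK_SIZE_BYTES)
    return max(parallelism, min_blocks)
```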
These rules fix two common user problems:
1. Insufficient parallelism in a large cluster, or too much parallelism on a small cluster.
2. Overly large block sizes leading to OOMs when processing a single block.
TODO:
- [x] Unit tests
- [x] Docs update
Supersedes part of: https://github.com/ray-project/ray/pull/25708
Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
When Ray is under memory pressure, the pull manager might cancel an ongoing pull request and retry it later. There is a race condition where a pull request is initiated and canceled, and the pull request for the same object is retried by the pull manager shortly after. When this happens, the pusher (where the object is being pulled from) ignores the second pull request if it is still sending the object for the first pull request; instead, it continues sending the remaining chunks. This leads to the puller receiving incomplete data chunks (since some chunks were already received and then canceled), and the puller has to wait for the 10-second timeout and retry the pull request.
To fix the problem, we simply always resend all chunks when a pull request is received. Since we always send chunks in order, we implement the resend logic by simply resetting the remaining number of chunks to send and treating the chunks as a ring buffer.
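Conceptually (a Python sketch; the real logic lives in the C++ object manager), the resend works by resetting the remaining-chunk counter and wrapping around like a ring buffer:
```
class PushState:
    """Illustrative per-object push state on the sender side."""

    def __init__(self, num_chunks):
        self.num_chunks = num_chunks
        self.next_chunk = 0          # index of the next chunk to send
        self.remaining = num_chunks  # chunks left to send

    def resend_all_chunks(self):
        # A new pull request arrived: keep sending from wherever we are,
        # but send num_chunks more chunks, wrapping around.
        self.remaining = self.num_chunks

    def pop_next_chunk(self):
        chunk_index = self.next_chunk
        self.next_chunk = (self.next_chunk + 1) % self.num_chunks
        self.remaining -= 1
        return chunk_index
```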
The old dashboard UI made it much easier to see all the work across all workers because workers were shown alongside nodes on the main nodes page. This change brings the same functionality to the new dashboard UI.
Some changes in this PR:
- Factor out the NodeRow into its own component and its own file.
- Introduce WorkerRow, which shows information about a worker.
- Update the heading of the table column because the column shows different data depending on whether it is a node row or a worker row.
- Make sure percentages are rounded to a single decimal place.
- The Logs button for a worker row goes to the logs page and filters to just the log files related to that worker.
- Update the API for fetching nodes to fetch nodes + workers.
- Fix a bug where object store memory showed the remaining size instead of the total size.
The pipeline will spill objects when splitting the dataset into multiple equal parts.
Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
Add the HTML template files in `ray/widgets/templates/` to the wheel to make sure the Jupyter widget that is displayed in `ray.init()` works with the Ray wheels.
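For illustration, one common way to include such files in a wheel is via setuptools package data (a sketch, not Ray's exact setup.py):
```
from setuptools import setup

setup(
    name="ray",
    packages=["ray"],
    package_data={"ray": ["widgets/templates/*.html"]},
    include_package_data=True,
)
```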