## Why are these changes needed?
- Fixes the jobs tab in the new dashboard. Previously it didn't load.
- Combines the old job concept, "driver jobs", and the new job submission concept into a single concept called "jobs". The jobs tab shows information about both kinds of jobs.
- Updates all job APIs: they now return both submission jobs and driver jobs. They also contain additional data in the response, including "id", "job_id", "submission_id", and "driver". They also accept either job_id or submission_id as input.
- Job ID is the same as the "ray core job id" concept. It is in the form of "0100000" and is the primary ID used to represent jobs.
- Submission ID is an ID that is generated for each Ray job submission. It is in the form of "raysubmit_12345...". It is a secondary ID that can be used if a client needs to provide a self-generated ID, or if the job ID doesn't exist (e.g., if the submission job doesn't create a Ray driver).
**This PR has 2 deprecations:**
- The `submit_job` SDK now accepts a new kwarg `submission_id`. `job_id` is deprecated.
- The `ray job submit` CLI now accepts `--submission-id`. `--job-id` is deprecated.
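
To illustrate the deprecation, here is a minimal sketch using the Python SDK, assuming a cluster reachable at 127.0.0.1:8265; the entrypoint and custom ID below are made up for illustration:

```
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://127.0.0.1:8265")

# New style: optionally pass a self-generated submission_id.
submission_id = client.submit_job(
    entrypoint="python my_script.py",          # hypothetical entrypoint
    submission_id="raysubmit_myCustomId123",   # optional; auto-generated if omitted
)
print(submission_id)

# Old style (deprecated): passing job_id still works but emits a deprecation warning.
# client.submit_job(entrypoint="python my_script.py", job_id="raysubmit_myCustomId456")
```

The same applies to the CLI: `ray job submit --submission-id ...` replaces `--job-id ...`.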
**This PR has 4 backwards incompatible changes:**
- The `list_jobs` SDK now returns a list instead of a dictionary.
- The `ray job list` CLI now prints a list instead of a dictionary.
- The `/api/jobs` endpoint now returns a list instead of a dictionary.
- The `POST /api/jobs` endpoint (submit job) now returns JSON with a `submission_id` field instead of `job_id`.
## Why are these changes needed?
This PR does 2 things.
1. When `--detail` is specified, set the default output format to YAML.
2. It seems like it takes 5 seconds to register the head node with the API server (because it fetches node info every 5 seconds, and when the API server has just started, the head node is not yet registered to GCS). This PR shortens the node polling interval (i.e., polls more frequently) until the head node is registered with the API server (see the sketch below).
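
A rough sketch of the polling idea (illustrative only, not the actual dashboard code; the names and intervals are assumptions):

```
import asyncio

FAST_INTERVAL_S = 0.1   # illustrative: poll quickly until the head node shows up
NORMAL_INTERVAL_S = 5   # illustrative: the regular node-info refresh period

async def update_nodes_loop(get_all_node_info, head_node_registered):
    """Refresh node info frequently until the head node is registered, then back off."""
    while True:
        await get_all_node_info()
        interval = NORMAL_INTERVAL_S if head_node_registered() else FAST_INTERVAL_S
        await asyncio.sleep(interval)
```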
## Related issue number
Closes https://github.com/ray-project/ray/issues/26939
Signed-off-by: Alan Guo <aguo@anyscale.com>
## Why are these changes needed?
Reduces memory footprint of the dashboard.
Also adds some cleanup to the errors data.
Also cleans up the actor cache by removing dead actors from the cache.
Dashboard UI no longer allows you to see logs for all workers in a node. You must click into each worker's logs individually.
<img width="1739" alt="Screen Shot 2022-07-20 at 9 13 00 PM" src="https://user-images.githubusercontent.com/711935/180128633-1633c187-39c9-493e-b694-009fbb27f73b.png">
## Related issue number
Fixes #23680, fixes #22027, fixes #24272
# Why are these changes needed?
The dashboard can display the message "<actor> cannot be created because the Ray cluster cannot satisfy its resource requirements" in the case where the runtime env setup is stalled. This PR updates this message to include the possibility of the runtime env setup failing.
This PR adds a tip to the Job Submission doc saying that if a job is stalled in PENDING, the runtime env setup may have stalled. It adds a pointer to the log files, which should have more information.
The runtime env setup cannot stall forever; it fails after 10 minutes. This is a new feature added after the Ray 1.13 branch cut. In Ray <= 1.13, the runtime env setup can still stall forever.
# Related issue number
Closes #26332
Signed-off-by: rickyyx <rickyx@anyscale.com>
# Why are these changes needed?
When we return fewer/incomplete results to users, there can be 3 reasons:
- Data being truncated at the data source (raylets -> API server)
- Data being filtered at the API server
- Data being limited at the API server
We are not distinguishing these 3 scenarios, but we should: this is why we thought data was being truncated when it was actually filtered/limited.
This PR distinguishes these scenarios and prompts warnings accordingly.
# Related issue number
Closes #26570, closes #26923
Update cluster_activities endpoint to use pydantic so we have better data validation.
Make timestamp a required field.
Add pydantic to ray[default] requirements
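
A minimal sketch of the kind of validation this buys (the model below is a simplified assumption, not the exact schema; only the required `timestamp` behavior is the point):

```
from pydantic import BaseModel, ValidationError

class ClusterActivity(BaseModel):   # hypothetical, simplified model
    is_active: str
    reason: str = ""
    timestamp: float                # required: omitting it now fails validation

try:
    ClusterActivity(is_active="ACTIVE", reason="3 drivers running")  # missing timestamp
except ValidationError as e:
    print(e)  # pydantic reports that the required "timestamp" field is missing
```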
This PR does 3 things.
1. Warn if callsite is disabled when running `ray list objects` and `ray summary objects`
2. Decode owner_id for `ray list actors`
3. Support `raise_on_missing_output`
ray.init() will currently start a new Ray instance even if one already exists, which is very confusing if you are a new user trying to go from local development to a cluster. This PR changes it so that, when no address is specified, we first try to find an existing Ray cluster that was created through `ray start`. If none is found, we will start a new one.
This makes two changes to the ray.init() resolution order:
1. When `ray start` is called, the started cluster's address is written to a file called `/tmp/ray/ray_current_cluster`. For ray.init() and ray.init(address="auto"), we will first check this local file for an existing cluster address. The file is deleted on `ray stop`. If the file is empty, we autodetect any running cluster (legacy behavior) if address="auto", or start a new local Ray instance if address=None.
2. When ray.init(address="local") is called, we will create a new local Ray instance, even if one is already existing. This behavior seems to be necessary mainly for `ray.client` use cases.
This also surfaces the logs about which Ray instance we are connecting to. Previously these were hidden because we didn't set up the log until after connecting to Ray. So now Ray will log one of the following messages during ray.init:
```
(Connecting to existing Ray cluster at address: <IP>...)
...connection...
(Started a local Ray cluster.| Connected to Ray Cluster.)( View the dashboard at <URL>)
```
Note that this changes the dashboard URL to be printed with `ray.init()` instead of when the dashboard is first started.
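
A short sketch of the resulting behavior from a user's point of view (assuming a cluster was previously started on the same machine with `ray start --head`):

```
import ray

# Connects to the cluster recorded in /tmp/ray/ray_current_cluster if it exists;
# otherwise starts a new local Ray instance.
ray.init()
ray.shutdown()

# Checks the same file first; if it is empty, falls back to the legacy
# autodetection of a running cluster.
ray.init(address="auto")
ray.shutdown()

# Always starts a fresh local Ray instance, even if a cluster is already running.
ray.init(address="local")
ray.shutdown()
```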
Co-authored-by: Eric Liang <ekhliang@gmail.com>
Redo the agent-id changes from #24968. The original PR is in the first commit, the second commit fixes a fatal flaw when using RAY_BACKEND_LOG_LEVEL=debug, which caused the "Ray C++, Java" tests to fail on macOS.
NOTE: tabulate is copied into the codebase for table formatting.
This PR changes the default output layout to the table format for both the summary and list APIs.
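
For reference, a tiny sketch of the kind of output tabulate produces (the columns here are made up and not the exact layout used by the CLI):

```
from tabulate import tabulate

rows = [
    {"ACTOR_ID": "31405554844820381c2f0f8501000000", "CLASS_NAME": "Worker", "STATE": "ALIVE"},
    {"ACTOR_ID": "f36758a9f8871a9ca993b1d201000000", "CLASS_NAME": "Worker", "STATE": "DEAD"},
]
print(tabulate(rows, headers="keys", tablefmt="plain"))
```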
* Revert "Revert "Bump pytest from 5.4.3 to 7.0.1""
This reverts commit ab10890e90.
Signed-off-by: Riatre Foo <foo@riat.re>
* Fix missing test data files dependency in rllib/BUILD
See #26334 and #26517 for context.
Once this is in, it should be good to roll forward again.
Signed-off-by: Riatre Foo <foo@riat.re>
* debug: run all tests
Signed-off-by: Riatre Foo <foo@riat.re>
* Revert "debug: run all tests"
This reverts commit 0c5e796b0eb437d64922f66749c61b0412486970.
Signed-off-by: Riatre Foo <foo@riat.re>
* fix new tests since last rebase
Signed-off-by: Riatre Foo <foo@riat.re>
This PR is doing 2 things.
(1) Use `api_server_url` for the address, which is consistent with other submission APIs.
(2) When the API does not respond in a timely manner, it prints a warning every 5 seconds. Below is an example. This is useful when the API responds slowly (e.g., when there are partial failures). Without this, users would see the API hang for 30 seconds, which is pretty bad UX.
(0.12 / 10 seconds) Waiting for the response from the API server address http://127.0.0.1:8265/api/v0/delay/5.
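
A rough sketch of the warning behavior (illustrative only; the helper below is hypothetical, not the actual implementation):

```
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

import requests  # assumption: any HTTP client works for the illustration

def get_with_progress_warnings(url, timeout_s=30, warn_interval_s=5):
    """Print a progress warning every few seconds while waiting for a slow API server."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(requests.get, url, timeout=timeout_s)
        while True:
            try:
                return future.result(timeout=warn_interval_s)
            except FuturesTimeout:
                elapsed = time.time() - start
                if elapsed >= timeout_s:
                    raise TimeoutError(f"No response from the API server at {url}.")
                print(
                    f"({elapsed:.2f} / {timeout_s} seconds) Waiting for the response "
                    f"from the API server address {url}."
                )
```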
This limits the max number of HTTP requests the dashboard (API server) will accept concurrently before rejecting additional requests.
This makes sure observability requests do not overload the downstream systems (raylet/GCS) when delegating too many concurrent state observability requests to the cluster.
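
A minimal sketch of the idea as aiohttp middleware (the cap and the rejection status/message are assumptions, not the dashboard's exact values):

```
from aiohttp import web

MAX_IN_FLIGHT = 100  # assumption: illustrative cap, not the real default
_in_flight = 0

@web.middleware
async def limit_requests_middleware(request, handler):
    """Reject new requests once too many are already being processed."""
    global _in_flight
    if _in_flight >= MAX_IN_FLIGHT:
        return web.json_response(
            {"result": False, "msg": "Too many requests in flight, retry later."},
            status=503,
        )
    _in_flight += 1
    try:
        return await handler(request)
    finally:
        _in_flight -= 1

app = web.Application(middlewares=[limit_requests_middleware])
```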
See #23676 for context. This is another attempt at that as I figured out what's going wrong in `bazel test`. Supersedes #24828.
Now that there are Python 3.10 wheels for Ray 1.13 and this is no longer a blocker for supporting Python 3.10, I still want to make `bazel test //python/ray/tests/...` work for developing in a 3.10 env, and make it easier to add Python 3.10 tests to CI in the future.
The change contains three commits with rather descriptive commit messages, which I repeat here:
Pass deps to py_test in py_test_module_list
Bazel macro py_test_module_list takes a `deps` argument, but completely ignores it instead of passing it to `native.py_test`. Fixing that, as we are going to use the deps of py_test_module_list in BUILD files in later changes.
cpp/BUILD.bazel depends on the broken behaviour: it deps-on a cc_library from a py_test, which isn't working; see the upstream issue https://github.com/bazelbuild/bazel/issues/701.
This is fixed by simply removing the (non-working) deps.
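
A simplified Starlark sketch of the macro fix (not the exact macro in Ray's Bazel files, which has more arguments):

```
# bazel/python.bzl (sketch)
def py_test_module_list(files, size, deps = [], extra_srcs = [], **kwargs):
    for file in files:
        name = file.replace(".py", "").replace("/", "_")
        native.py_test(
            name = name,
            size = size,
            srcs = extra_srcs + [file],
            deps = deps,  # previously accepted but silently dropped
            **kwargs
        )
```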
Depend on conftest and data files in Python tests BUILD files
Bazel requires that all the files used in a test run be represented in the transitive dependencies specified for the test target. For py_test, that means srcs, deps and data.
Bazel enforces this constraint by creating a "runfiles" directory, symlinking the files in the dependency closure into it, and running the test in the "runfiles" directory, so that the test shouldn't see files not in the dependency graph.
Unfortunately, the constraint does not apply to a large number of Python tests, due to pytest (>=3.9.0, <6.0) resolving these symbolic links during test collection and effectively "breaking out" of the runfiles tree.
pytest >= 6.0 introduces a breaking change that removed the symbolic-link-resolving behaviour; see pytest pull request https://github.com/pytest-dev/pytest/pull/6523 for more context.
Currently, we are underspecifying dependencies in a lot of BUILD files
and thus blocking us from updating to newer pytest (for Python 3.10
support). This change hopefully fixes all of them, and at least those in
CI, by adding data or source dependencies (mostly for conftest.py-s)
where needed.
Bump pytest version from 5.4.3 to 7.0.1
We want at least pytest 6.2.5 for Python 3.10 support, but not past
7.1.0 since it drops Python 3.6 support (which Ray still supports), thus
the version constraint is set to <7.1.
Updating pytest, combined with the earlier BUILD fixes, changed the ground truth of a few error-message-based unit tests; these tests are updated to reflect the change.
There are also two small drive-by changes for making test_traceback and test_cli pass under Python 3.10. These were discovered while debugging CI failures (on earlier Python versions) with a Python 3.10 install locally. Expect more such issues when adding Python 3.10 to CI.
The old dashboard UI made it much easier to see all the work across all workers because workers were shown alongside nodes on the main nodes page. This change brings the same functionality to the new dashboard UI.
Some changes in this PR:
- Factor out NodeRow into its own component and its own file.
- Introduce WorkerRow, which shows information about a worker.
- Update the heading of the table column because the column shows different data depending on whether it is a node row or a worker row.
- Make sure we're rounding percentages to a single decimal place.
- The logs button for a worker row goes to the logs page and filters to just the log files related to that worker.
- Update the API for fetching nodes into fetching nodes + workers.
- Fix a bug where object store memory showed the remaining size instead of the total size.
## Why are these changes needed?
In https://github.com/ray-project/ray/pull/26405 we added health checks for GCS and raylets.
This PR exposes them as endpoints in the dashboard and the dashboard agent.
For the dashboard, we added `http://host:port/api/gcs_healthz`, which sends an RPC directly to GCS to check whether GCS is alive or not.
For the agent, we added `http://host:port/api/local_raylet_healthz`, which sends an RPC to GCS to check whether the raylet is alive or not.
We consider the raylet live if:
- GCS is dead (we cannot verify the raylet through GCS, so we do not report a failure), or
- GCS is alive and does not think the raylet is dead.
If GCS stays dead for more than X seconds (60 by default), the raylet will crash itself, so KubeRay can still catch it.
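
A quick way to probe the endpoints, e.g. from a liveness check (the host and ports below are illustrative defaults, and the endpoints are assumed to return HTTP 200 when healthy):

```
import requests

# Dashboard (head node): reports GCS health.
print(requests.get("http://127.0.0.1:8265/api/gcs_healthz").status_code)

# Dashboard agent (every node): reports the local raylet's health, checked via GCS.
print(requests.get("http://127.0.0.1:52365/api/local_raylet_healthz").status_code)
```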
Add an external hook to the /api/component_activities endpoint in the dashboard snapshot router.
Change the `is_active` field of `RayActivityResponse` to take an enum `RayActivityStatus` instead of a bool. This is a backward-incompatible change, but should be OK because [dashboard] Add component_activities API #25996 wasn't included in any branch cuts. `RayActivityResponse` now supports reporting when there was an error getting the activity observation, along with the reason.
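
A sketch of the enum-based field (the member names below are assumptions based on the description, not the exact definition):

```
from enum import Enum

from pydantic import BaseModel

class RayActivityStatus(str, Enum):    # assumed members
    ACTIVE = "ACTIVE"
    INACTIVE = "INACTIVE"
    ERROR = "ERROR"                    # the observation itself failed; see `reason`

class RayActivityResponse(BaseModel):  # simplified sketch, not the full model
    is_active: RayActivityStatus       # previously a bool
    reason: str = ""                   # why it is active, or why the observation errored
    timestamp: float
```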
In Ray 2.0, we want to achieve API server HA.
Originally, the Serve endpoints lived on the head node.
This PR moves the Serve endpoints to the dashboard agents, so they will be HA thanks to the multiple replicas of the dashboard agent.
Add /api/component_activities to the dashboard snapshot router, which returns whether various Ray components are considered active.
This currently only contains a response entry for drivers, but entries for other components will be added on request as follow-ups.
## Why are these changes needed?
This PR adds data truncation when there are more than N entries. The policy is as follows:
- By default, we return at most 100 entries. Users can adjust this value, but we won't allow increasing it beyond 10K.
- By default, all internal RPCs truncate data if it's > 10K entries.
- For distributed sources, we query each source with the 10K limit and apply the limit again at the end.
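
For example, with the state API (a sketch; the actor listing is just one of the sources this applies to, and the exact defaults should be taken from the code):

```
from ray.experimental.state.api import list_actors

# Returns at most 100 entries by default; output is truncated beyond that.
actors = list_actors()

# The limit can be raised, but not beyond the 10K cap enforced server-side.
more_actors = list_actors(limit=1000)
```

The CLI exposes the same knob via `--limit` (e.g., `ray list actors --limit 1000`).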
## Related issue number
Closes https://github.com/ray-project/ray/issues/25984#issue-1279280673
Part of https://github.com/ray-project/ray/issues/25718#issue-1268968400
## Why are these changes needed?
This PR fixes the issue where `--follow` lost its connection when used for more than 30 seconds, because the gRPC timeout is configured to be 30 seconds and we don't reset it when `--follow` is set.
This fixes the issue by setting timeout=None when keepalive == True.
## Related issue number
Closes https://github.com/ray-project/ray/issues/25721
## Why are these changes needed?
This PR implements the `!=` predicate for filtering. As a result of this PR, two APIs are changed:
```
--filter key value -> --filter "key=val" or --filter "key!=val"
list_actors(filters=[(key, val), (key2, val2)]) -> list_actors(filters=[(key, "=", val), (key2, "=", val2)])
```
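
For example, a sketch of the new form (the filter key/value are illustrative):

```
from ray.experimental.state.api import list_actors

# Equality predicate: actors whose state is DEAD.
dead_actors = list_actors(filters=[("state", "=", "DEAD")])

# New inequality predicate: everything that is not DEAD.
live_actors = list_actors(filters=[("state", "!=", "DEAD")])
```

The CLI equivalent is `ray list actors --filter "state!=DEAD"`.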
## Why are these changes needed?
This is a first implementation of GET APIs for:
- nodes
- actors
- placement groups
- workers
- tasks
- objects
E.g.
# CLI
(dev) ➜ ray git:(ricky/obs-get) ray get nodes cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
---
node_id: cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
node_ip: 172.31.47.143
node_name: 172.31.47.143
resources_total:
  CPU: 8.0
  memory: 16700517582.0
  node:172.31.47.143: 1.0
  object_store_memory: 8350258790.0
state: ALIVE
# Python
from ray.experimental.state.api import get_node
from ray.experimental.state.common import NodeState
node: NodeState = get_node(<id>)
print(node)
We currently do not support getting specific resources by id for 'jobs' and 'runtime-envs':
- jobs: it does not yet expose an id that can be queried easily
- runtime envs: they don't have an associated id
TODO:
- It uses list endpoints + filtering for now; future iterations will implement GET-specific endpoints and interaction with raylet/GCS through point-query APIs.
- Unit testing of state_manager for GET endpoints once implemented.
- Getting jobs by id.
## Why are these changes needed?
This is to address false alarms about subprocesses exiting when killed by `ray stop` with SIGTERM.
## What has been changed?
Added signal handlers for some of the subprocesses:
- dashboard (head)
- log monitor
- ray client server
Changed the `--block` semantics and prompt messages.
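
A minimal sketch of the handler idea (illustrative; not the exact handler installed in each subprocess):

```
import signal
import sys

def sigterm_handler(signum, frame):
    # `ray stop` sends SIGTERM; exit cleanly instead of looking like an unexpected crash.
    sys.exit(signum)

signal.signal(signal.SIGTERM, sigterm_handler)
```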
## Related issue number
Closes #25518
Closes #25283.
The dashboard shows inaccurate memory and CPU data when run inside a Docker container, in particular when using cgroups v2. This PR fixes that.
Task/actor/object summary:
- Tasks: grouped by the function name. In the future, we will also allow grouping by task_group.
- Actors: grouped by the actor class name. In the future, we will also allow grouping by actor_group.
- Objects: grouped by callsite. In the future, we will allow grouping by reference type or task state.
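
A sketch of the corresponding Python calls, assuming the experimental state API module (the CLI equivalents are `ray summary tasks`, `ray summary actors`, and `ray summary objects`):

```
from ray.experimental.state.api import (
    summarize_actors,
    summarize_objects,
    summarize_tasks,
)

print(summarize_tasks())    # grouped by function name
print(summarize_actors())   # grouped by actor class name
print(summarize_objects())  # grouped by callsite
```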
Enable checking of the ray core module, excluding serve, workflows, and tune, in ./ci/lint/check_api_annotations.py. This required moving many files to ray._private and associated fixes.