Serve relies on being able to do quiet application-level retries, and this info-level logging results in log spam hitting users. This PR demotes the log statement to debug level to prevent the spam.
Co-authored-by: simon-mo <simon.mo@hey.com>
This PR improves the in-memory data size estimation of the image folder data source. Previously, we used the on-disk file size as an estimate of the in-memory data size, which can be inaccurate due to image compression and in-memory image resizing.
Since `size` and `mode` were made optional in https://github.com/ray-project/ray/pull/27295, this PR tackles the simple case where both `size` and `mode` are provided:
* `size` and `mode` are both provided: calculate the in-memory size directly from the dimensions; no image needs to be read (this PR; see the sketch below).
* `size` or `mode` is not provided: sampling is needed to determine the in-memory size (will be done in a follow-up PR).
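A minimal sketch of the dimension-based calculation, assuming uint8 pixels and standard channel counts per mode (the helper name and constants below are illustrative, not Ray internals):
```
# Estimated bytes = number of images * height * width * channels (uint8 pixels).
NUM_CHANNELS = {"L": 1, "RGB": 3, "RGBA": 4}

def estimate_in_memory_size(num_images: int, size: tuple, mode: str) -> int:
    height, width = size
    return num_images * height * width * NUM_CHANNELS[mode]

# e.g. 3 images resized to (64, 64) in RGB mode -> 3 * 64 * 64 * 3 = 36864 bytes
print(estimate_in_memory_size(3, (64, 64), "RGB"))
```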
Here is an example of the estimated size for our test image folder:
```
>>> import ray
>>> from ray.data.datasource.image_folder_datasource import ImageFolderDatasource
>>> root = "example://image-folders/different-sizes"
>>> ds = ray.data.read_datasource(ImageFolderDatasource(), root=root, size=(64, 64), mode="RGB")
>>> ds.size_bytes()
40310
>>> ds.fully_executed().size_bytes()
37428
```
Without this PR:
```
>>> ds.size_bytes()
18978
```
The "Monitoring Ray Serve" page explains how to inspect your Ray Serve applications. This change updates the page to remove outdated metrics that Serve no longer exposes and to upgrade code samples to use 2.0 APIs. It also improves the content's readability and organization.
Link to updated "Monitoring Ray Serve" page: https://ray--27777.org.readthedocs.build/en/27777/serve/monitoring.html
An actor handle held by the Ray client becomes dangling if the Ray cluster is shut down; in that case, if the user tries to get the actor again, it results in a crash. This happened to a real user and blocked them from making progress.
This change makes the stats actor detached and, instead of keeping a handle, accesses it by name. This way the actor is re-created if the cluster gets restarted.
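A minimal sketch of the pattern, using Ray's named detached actors with `get_if_exists` (the actor and function names are illustrative, not the exact Datasets internals):
```
import ray

@ray.remote(num_cpus=0)
class StatsActor:
    def record(self, stats):
        pass  # aggregate stats here

# Look the actor up (or create it) by name on every access instead of caching a
# handle, so a restarted cluster simply gets a fresh detached actor.
def get_or_create_stats_actor():
    return StatsActor.options(
        name="datasets_stats_actor",  # illustrative name, not Ray's internal one
        lifetime="detached",
        get_if_exists=True,
    ).remote()
```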
Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
When the node the controller is pinned to dies, GCS tries to reschedule the controller to the same node. But GCS only marks the node as failed 120s after GCS restarts (or 30s if only the raylet died).
This PR fixes it by unpinning the controller from the head node, so as long as GCS is alive, the controller is rescheduled immediately. Since we can't turn this on by default, an internal flag is introduced for it.
A user raised an issue in #26605: when partition filtering the input files finds no files with the required extension, the error message is quite non-actionable.
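A hedged sketch of the kind of actionable error this adds; the function name and message wording are assumptions, not the actual Datasets code:
```
def check_matched_paths(matched_paths, file_extensions, partition_filter):
    # Fail early with a message that tells the user what to check.
    if not matched_paths:
        raise ValueError(
            f"No input files found to read with file extensions {file_extensions} "
            f"and partition_filter {partition_filter}. Please check that the paths "
            "exist and that the extension list and partition filter match the files "
            "you intend to read."
        )
```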
Signed-off-by: Cheng Su <scnju13@gmail.com>
When we deserialize an actor handle via pickle, we register it with an outer object ref equal to itself, which is wrong. For out-of-band deserialization, there should be no outer object ref.
Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
Signed-off-by: Stephanie Wang <swang@cs.berkeley.edu>
The cluster address is now written to a temp file. Previously, we raised an error if `ray start --head` tried to reuse the old cluster address in the temp file, even if Ray was no longer running. This PR allows `ray start --head` to continue if it can't find any GCS process associated with the recorded cluster address.
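A rough sketch of the liveness check, under the assumption that it boils down to probing the recorded address (the real check in `ray start` looks for a GCS process and may differ):
```
import socket

def gcs_reachable(address: str, timeout: float = 1.0) -> bool:
    # Try to open a TCP connection to the recorded GCS address.
    host, port = address.rsplit(":", 1)
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

# `ray start --head` now proceeds instead of raising when nothing is reachable
# at the stale address recorded in the temp file.
```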
Related issue number
Closes #27021.
The tensor extension import is a bit expensive since it will go through Arrow's and Pandas' extension type registration logic. This PR delays the tensor extension type import until Parquet reading, which is the only case in which we need to explicitly register the type.
I have confirmed that the Parquet reading in doc/source/data/doc_code/tensor.py passes with this change.
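A minimal sketch of the lazy-registration pattern (the helper name and module flag are illustrative):
```
_tensor_extension_imported = False

def _ensure_tensor_extension_registered():
    global _tensor_extension_imported
    if not _tensor_extension_imported:
        # Importing the extension module triggers Arrow's and Pandas' extension
        # type registration, so defer it until Parquet reading actually needs it.
        from ray.data.extensions import tensor_extension  # noqa: F401
        _tensor_extension_imported = True
```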
This PR fixes several issues that block the Serve agent when GCS is down. We need to make sure the Serve agent is always alive so that external requests can reach the agent and check the status.
- The internal KV used in dashboard/agent blocks the agent; use the async one instead.
- The Serve controller uses `ray.nodes`, which is a blocking call that can block forever; change it to use the GCS client with a timeout.
- The agent uses the Serve controller client, which is a blocking call with max retries = -1 and blocks until the controller is back.
To enable Serve HA, we also need to set up:
- RAY_gcs_server_request_timeout_seconds=5
- RAY_SERVE_KV_TIMEOUT_S=5
which we should set in KubeRay.
JobCounter does not work with a storage namespace right now because the key is the same across namespaces.
This PR fixes it by adding the namespace there, since this is the minimal change and therefore safer.
A follow-up PR is needed to clean up the Redis storage in C++.
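A rough sketch of the idea, under the stated assumption that the fix is to include the namespace in the key (the exact key format is illustrative):
```
def job_counter_key(storage_namespace: str) -> bytes:
    # Prefix with the storage namespace so clusters sharing the same Redis
    # instance no longer collide on a single "JobCounter" key.
    return f"{storage_namespace}:JobCounter".encode()
```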
Objects freed by the manual and internal free call previously would not get reconstructed. This PR introduces the following semantics after a free call:
- If no failure occurs and the object is needed by a downstream task, an ObjectFreedError is thrown.
- If a failure occurs, causing a downstream task to be re-executed, the freed object gets reconstructed as usual.
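A rough illustration of the new semantics; the internal `free` import path and how the error surfaces through `ray.get` are assumptions and may differ from the actual code:
```
import ray
from ray._private.internal_api import free  # internal free API; path may vary

@ray.remote
def produce():
    return [0] * 1000

@ray.remote
def consume(x):
    return len(x)

obj = produce.remote()
free([obj])  # manually free the object

try:
    # With this PR, a downstream task that needs the freed object now fails
    # with an ObjectFreedError (assuming no node failure occurred) instead of
    # never being reconstructed.
    ray.get(consume.remote(obj))
except Exception as e:
    print(type(e).__name__, e)
```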
Also fixes some incidental bugs:
- Don't crash on failure to contact the local raylet during object recovery. This produces a nicer error message because we instead throw an application-level error when someone tries to get the object.
- Fix a circular lock dependency between task failure and task dependency resolution.
Related issue number
Closes #27265.
Signed-off-by: Stephanie Wang <swang@cs.berkeley.edu>
Signed-off-by: Yi Cheng <chengyidna@gmail.com>
## Why are these changes needed?
This test times out. Move it to large.
```
WARNING: //python/ray/workflow:tests/test_error_handling: Test execution time (288.7s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="long" or size="large".
```
As explained at https://joekuan.wordpress.com/2015/06/30/python-3-__del__-method-and-imported-modules/, the `__del__` method doesn't guarantee that modules or function definitions are still referenced and not GC'ed. That means any modules, functions, or global variables you access may already have been garbage collected.
This means we should not access any modules, functions, or global variables inside a `__del__` method. While this is something we should handle more holistically in the near future, this PR fixes the issue in the short term.
The problem was that all Ray actor methods are decorated by trace_helper.py to make them compatible with OpenTelemetry (maybe we should make this optional), and the `__del__` method was decorated as well. When `__del__` is invoked, some functions used within the tracing decorator may already have been deallocated (in this case, `_is_tracing_enabled` was deallocated). This PR fixes the issue by not decorating the `__del__` method for tracing.
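A minimal sketch of the fix, assuming a helper that wraps actor methods for tracing (the names are illustrative, not the exact trace_helper.py code):
```
import functools
import inspect

def _wrap_with_tracing(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        # ... start an OpenTelemetry span around the call here ...
        return method(self, *args, **kwargs)
    return wrapper

def _inject_tracing_into_class(cls):
    for name, method in inspect.getmembers(cls, predicate=inspect.isfunction):
        if name == "__del__":
            # Skip __del__: by the time it runs during interpreter shutdown, the
            # module-level state the wrapper relies on (e.g. _is_tracing_enabled)
            # may already have been garbage collected.
            continue
        setattr(cls, name, _wrap_with_tracing(method))
    return cls
```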
This PR mainly adds two improvements:
- We introduced support for three CloudWatch config types in previous PRs: Agent, Dashboard, and Alarm. This PR generalizes the logic for all three config types with the enum `CloudwatchConfigType`.
- It adds unit tests to ensure the correctness of the Ray autoscaler's CloudWatch integration behavior.
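A minimal sketch of how a shared enum can collapse the three near-duplicate code paths (the member names and helper are illustrative):
```
from enum import Enum

class CloudwatchConfigType(Enum):
    AGENT = "agent"
    DASHBOARD = "dashboard"
    ALARM = "alarm"

def cloudwatch_config_file_key(config_type: CloudwatchConfigType) -> str:
    # One generic code path replaces three per-type variants.
    return f"cloudwatch_{config_type.value}_config_file"
```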
One GC test has unnecessary sleeps which are quite expensive due to the parametrization (2 x 2 x 2 = 8 iterations). They are unnecessary because they check that garbage collection of runtime env URIs doesn't occur after a certain time, but garbage collection isn't time-based. This PR removes the sleeps.
This PR is just to fix CI; a follow-up PR will make the test more effective by attempting to trigger GC in a more targeted way (by starting multiple tasks with different runtime_env resources, since GC is only triggered upon *creation* of a new resource that causes the cache size to be exceeded).
It's still not clear what exactly caused the test suite to start taking longer recently, but it might be due to some change elsewhere in Ray, since there were no runtime_env related commits in that time period.
PR ccf4116 makes `cluster_utils.add_node` take about one more second because of the raylet start path refactoring.
As a result, `test_placement_group_3` seems to time out occasionally. I changed the test size to large; let's see if this fixes the issue.
## Why are these changes needed?
The Ray-level OOM killer preemptively kills a worker process when the node is under memory pressure. This PR leverages the memory monitor from #27017 and supersedes #26962 to kill worker processes when the system is running low on memory. The node manager implements the callback in the memory monitor and kills the worker process with the newest task. It evicts only one worker at a time and enforces that by tracking the last evicted worker. If the eviction is still in progress, it will not evict another worker even if the memory usage is above the threshold.
This PR is a no-op since the monitor is disabled by default.
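A minimal Python sketch of the eviction policy described above (the real logic lives in the C++ node manager; the names and data structures are illustrative):
```
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Worker:
    pid: int
    task_start_time: float  # when its current task was assigned

class OomKillerPolicy:
    """Evict the worker running the newest task, at most one eviction at a time."""

    def __init__(self):
        self._in_flight_pid: Optional[int] = None  # pid currently being evicted

    def on_memory_pressure(self, workers: List[Worker]) -> Optional[Worker]:
        # Do nothing if an eviction is still in progress, even if memory usage
        # remains above the threshold.
        if self._in_flight_pid is not None or not workers:
            return None
        victim = max(workers, key=lambda w: w.task_start_time)  # newest task
        self._in_flight_pid = victim.pid
        return victim

    def on_eviction_complete(self, pid: int) -> None:
        if self._in_flight_pid == pid:
            self._in_flight_pid = None
```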