Update documentation to use `session.report`.
Next steps:
1. Update our internal callers, most importantly `CheckpointManager` and `DataParallelTrainer`, to use `session.report` (see the sketch after this list).
2. Update `get_trial_resources` to use `PlacementGroupFactory` (PGF) notions so it incorporates the requirements of `ResourceChangingScheduler`. @Yard1
3. After step 2 is done, change all `tune.get_trial_resources` calls to `session.get_trial_resources`.
4. [Internal implementation] Remove the special checkpoint handling logic from the HuggingFace trainer and optimize the checkpoint conversion flow with `session.report`.
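For reference, a minimal sketch of the `session.report` pattern the docs should converge on (the training loop and metric names here are illustrative, not taken from the actual docs):

```python
from ray.air import session
from ray.air.checkpoint import Checkpoint

def train_loop_per_worker(config):
    for epoch in range(config["num_epochs"]):
        loss = 1.0 / (epoch + 1)  # placeholder for one real training epoch
        # Report metrics, plus an optional checkpoint, back to AIR.
        session.report(
            {"epoch": epoch, "loss": loss},
            checkpoint=Checkpoint.from_dict({"epoch": epoch}),
        )
```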
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
## Why are these changes needed?
1. Currently, bundle resources are deducted from the cluster resources on the `GCS` side only after all Commit requests sent by `GCS` to `Raylet` have returned. Instead, the bundle resources should be deducted before `GCS` sends `PrepareResources` to `Raylet`, so that `GCS`-based actor scheduling can use fresher resource information (see the pseudocode sketch after this list). Note that deducting before `PrepareResources` versus after all `CommitResources` replies has no impact on `Raylet` scheduling.
2. `GcsResourceManager::UpdateResources` and `GcsResourceManager::DeleteResources` can be removed to simplify `GcsResourceManager`.
- `GcsResourceManager::UpdateResources` is only used by `GcsPlacementGroupScheduler::CommitAllBundles`; we can update the node resources (commit the bundle resources) in `GcsPlacementGroupScheduler` directly. It is also unnecessary to persist these resources to storage, since they can be replayed from the placement group.
- `GcsResourceManager::DeleteResources` is only used by `GcsPlacementGroupScheduler::CancelResourceReserve`, which is invoked by `GcsPlacementGroupScheduler::DestroyPlacementGroupPreparedBundleResources` and `GcsPlacementGroupScheduler::DestroyPlacementGroupCommittedBundleResources`. In fact, `GcsPlacementGroupScheduler::ReturnBundleResources` is called wherever these two functions are used, so `GcsResourceManager::DeleteResources` is redundant. As above, it is unnecessary to persist the resource changes to storage, since they can be replayed from the placement group.
3. With both `GcsResourceManager::UpdateResources` and `GcsResourceManager::DeleteResources` removed, `gcs_table_storage_` is unused, so it can be removed too.
4. The `ray_gcs_new_resource_creation_latency_ms_sum` metric can also be removed, since `GcsResourceManager::UpdateResources` is gone.
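To make point 1 concrete, here is an illustrative Python pseudocode sketch of the re-ordered flow (every function below is a placeholder standing in for GCS/Raylet logic; the actual implementation is C++):

```python
def deduct_from_cluster_resources(bundles):
    print("deducting bundle resources:", bundles)

def send_prepare_resources(bundles) -> bool:
    print("GCS -> Raylet PrepareResources:", bundles)
    return True

def send_commit_resources(bundles):
    print("GCS -> Raylet CommitResources:", bundles)

def return_bundle_resources(bundles):
    print("rolling back deduction:", bundles)

def schedule_bundles(bundles):
    # New ordering: deduct before PrepareResources, so concurrent
    # GCS-based actor scheduling sees fresher cluster resources.
    deduct_from_cluster_resources(bundles)
    if send_prepare_resources(bundles):
        send_commit_resources(bundles)
    else:
        return_bundle_resources(bundles)

schedule_bundles(["bundle_0", "bundle_1"])
```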
Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
## Why are these changes needed?
Per the discussion in #26057, fix the stage fusion issue by re-ordering the randomize stage past any 1-1 stages.
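For illustration, a sketch of the kind of pipeline affected (assuming the randomize stage in question is the one produced by `randomize_block_order`):

```python
import ray

# Before this change, the randomize stage sitting between the read and
# the map prevented read->map fusion. Re-ordering the randomize stage
# past 1-1 stages such as map() restores that fusion.
ds = (
    ray.data.range(1000)
    .randomize_block_order()
    .map(lambda x: x * 2)
)
ds.take(5)
```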
Closes #26057
The autoscaler container writes logs to a directory set up by the Ray container.
This PR moves the logic that sets up autoscaler logging so that it is done after the Ray container is ready.
This PR also changes the autoscaler so that the process exits after hitting 5 total exceptions; Kubernetes will then restart it. The idea is to ensure the autoscaler can restart cleanly in long-running deployments of Ray.
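A minimal sketch of the exit-after-N-exceptions behavior (the loop and names are placeholders, not the autoscaler's actual code):

```python
MAX_FAILURES = 5

def autoscaler_loop(run_one_iteration):
    failures = 0
    while True:
        try:
            run_one_iteration()
        except Exception:
            failures += 1
            if failures >= MAX_FAILURES:
                # Exit non-zero so Kubernetes restarts the container.
                raise SystemExit(1)
```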
Since https://github.com/ray-project/ray/pull/25999 we need `typing_extensions`. It is a very lightweight requirement (no transitive dependencies and a small package), so that should be fine.
Considered alternative: make it optional. But that would make the typing code more brittle and prevent us from using more typing in the future.
1. Update `DummyTrainer` to take `num_epochs` instead of `runtime_seconds`.
   - Ray Train expects an equal number of calls to `train.report()` from every worker. Different workers may run at different speeds and terminate after different epoch numbers, which causes an error.
2. Add `generate_epochs` to support `DatasetPipeline` when `use_stream_api` is True (see the sketch after this list).
3. Update the `__main__` code to support testing different configurations.
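A hypothetical sketch of what `generate_epochs` could look like on top of `DatasetPipeline` (the body is an assumption for illustration, not the PR's exact code):

```python
import ray

def generate_epochs(ds, num_epochs: int):
    # repeat() turns the Dataset into a DatasetPipeline whose
    # iter_epochs() yields one pipeline per epoch.
    pipe = ds.repeat(num_epochs)
    for epoch_pipe in pipe.iter_epochs():
        yield epoch_pipe

for epoch_pipe in generate_epochs(ray.data.range(100), num_epochs=3):
    print(next(iter(epoch_pipe.iter_rows())))
```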
This PR:
* Allows the user to set `keep_checkpoints_num` and `checkpoint_score_attr` in `RunConfig` using the `CheckpointStrategy` dataclass (see the sketch after this list)
* Adds two new fields to the `Result` object, among them `best_checkpoints`: a list of the best saved checkpoints, as determined by `CheckpointingConfig`.
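An illustrative sketch of the intended usage; the import locations and the `checkpoint_config` field name are assumptions here, and the `CheckpointStrategy` field names may differ:

```python
from ray.air.config import RunConfig
from ray.util.ml_utils.checkpoint_manager import CheckpointStrategy

run_config = RunConfig(
    checkpoint_config=CheckpointStrategy(
        num_to_keep=2,                          # plays the role of keep_checkpoints_num
        checkpoint_score_attribute="accuracy",  # plays the role of checkpoint_score_attr
    ),
)
```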
We explicitly disallow scheduling tasks based on object store memory, so we should state that in the docs.
cc @scottsun94
```
>>> import ray
>>> @ray.remote(object_store_memory=100)
... def foo():
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/alex/anyscale/ray/python/ray/worker.py", line 2479, in _make_remote
    ray_option_utils.validate_task_options(options, in_options=False)
  File "/Users/alex/anyscale/ray/python/ray/_private/ray_option_utils.py", line 191, in validate_task_options
    task_options[k].validate(k, v)
  File "/Users/alex/anyscale/ray/python/ray/_private/ray_option_utils.py", line 33, in validate
    raise ValueError(self.error_message_for_value_constraint)
ValueError: Setting 'object_store_memory' is not implemented for tasks
```
Co-authored-by: Alex Wu <alex@anyscale.com>
## Why are these changes needed?
This PR adds data truncation when there are more than N entries. The policy is as follows (a usage example follows the list):
- By default, we return at most 100 entries. Users can adjust this value, but we don't allow it to exceed 10K.
- By default, all internal RPCs truncate data if it exceeds 10K entries.
- For distributed sources, we query each source with a 10K limit and apply the limit again at the end.
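For example, the per-call cap can be adjusted through the `limit` argument (shown here with `list_actors`; the exact signature is assumed from the policy above):

```python
from ray.experimental.state.api import list_actors

# Ask for up to 1000 entries instead of the default 100; values above
# 10K are assumed to be rejected per the policy above.
actors = list_actors(limit=1000)
```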
## Related issue number
Closes https://github.com/ray-project/ray/issues/25984#issue-1279280673
Part of https://github.com/ray-project/ray/issues/25718#issue-1268968400
As the integration logging callbacks are commonly used with AIR Trainers, they are moved from the `tune` package to the `air` package. The old imports will still work but raise a deprecation warning.
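A hedged sketch of the import change, using wandb as the example integration and assuming the new location follows the `ray.air.callbacks.<integration>` pattern:

```python
# Old import (still works, but now raises a deprecation warning):
# from ray.tune.integration.wandb import WandbLoggerCallback

# New import:
from ray.air.callbacks.wandb import WandbLoggerCallback
```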
The KerasCallback saves the model checkpoint as a file. However, for the saved checkpoint to work with `TensorflowPredictor`, the model weights need to be saved under the `MODEL_KEY` in a dict format.
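A hedged sketch of the dict format in question (the import paths for `Checkpoint` and `MODEL_KEY` are assumptions):

```python
import tensorflow as tf
from ray.air.checkpoint import Checkpoint
from ray.train.constants import MODEL_KEY

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
# Store the Keras weights under MODEL_KEY so TensorflowPredictor can
# rebuild the model from the checkpoint dict.
checkpoint = Checkpoint.from_dict({MODEL_KEY: model.get_weights()})
```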
If a Ray cluster does not have enough resources for a Serve deployment, the deployment gets stuck in the `updating` status. This change sets the `message` field when allocation/initialization of actors has been pending for too long.
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
## Why are these changes needed?
This PR fixes the issue where `--follow` loses its connection when used for more than 30 seconds: the gRPC timeout is configured to be 30 seconds, and we don't reset it when `--follow` is set.
This fixes the issue by setting `timeout=None` when `keepalive == True`.
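A minimal sketch of the fix (variable names are illustrative):

```python
DEFAULT_RPC_TIMEOUT_S = 30

def resolve_timeout(keepalive: bool):
    # With --follow the stream should stay open indefinitely, so drop
    # the gRPC deadline instead of using the 30-second default.
    return None if keepalive else DEFAULT_RPC_TIMEOUT_S
```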
## Related issue number
Closes https://github.com/ray-project/ray/issues/25721
## Why are these changes needed?
This PR implements the `!=` predicate for filtering. As a result of this PR, two APIs are changed:
```
--filter key value -> --filter "key=val" or --filter "key!=val"
list_actors(filters=[(key, val), (key2, val2)]) -> list_actors(filters=[(key, "=", val), (key2, "=", val2)])
```
## Why are these changes needed?
This is a first implementation of GET APIs for:
- nodes
- actors
- placement groups
- workers
- tasks
- objects
E.g.:

```
# CLI
(dev) ➜ ray git:(ricky/obs-get) ray get nodes cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
---
node_id: cab26304d105caa6f2100908f7b461ef9ed244984ec30b4b46f953f9
node_ip: 172.31.47.143
node_name: 172.31.47.143
resources_total:
  CPU: 8.0
  memory: 16700517582.0
  node:172.31.47.143: 1.0
  object_store_memory: 8350258790.0
state: ALIVE
```

```python
# Python
from ray.experimental.state.api import get_node
from ray.experimental.state.common import NodeState

node: NodeState = get_node(<id>)
print(node)
```
We currently do not support getting specific resources by id for 'jobs' and 'runtime-envs':
- jobs: it does not yet expose an id that can be queried easily.
- runtime envs: no id is associated with them.
TODO:
- It uses list endpoints + filtering for now; future iterations will implement GET-specific endpoints and interact with the raylet/GCS through point-query APIs.
- Unit tests for state_manager GET endpoints once they are implemented.
- Getting jobs by id.
## Why are these changes needed?
This is to address false alarms about subprocesses exiting when they are killed by `ray stop` with SIGTERM.
## What has been changed?
Added signal handlers for some of the subprocesses (a minimal sketch follows below):
- dashboard (head)
- log monitor
- ray client server

Changed the `--block` semantics and prompt messages.
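A minimal sketch of the kind of handler added (the exit behavior shown is an assumption about the intent, not the exact code):

```python
import signal
import sys

def sigterm_handler(signum, frame):
    # Exit cleanly so `ray stop` is not reported as a subprocess crash.
    sys.exit(0)

signal.signal(signal.SIGTERM, sigterm_handler)
```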
## Related issue number
Closes #25518
Closes #25283.
The dashboard shows inaccurate memory and CPU data when run inside a Docker container, in particular when using cgroups v2. This PR fixes that.
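For illustration, a hedged sketch of reading the cgroup v2 memory limit (the file path is standard for cgroups v2, but this is not the PR's exact implementation):

```python
def cgroup_v2_memory_limit_bytes(path="/sys/fs/cgroup/memory.max"):
    # cgroups v2 writes the literal string "max" when no limit is set.
    with open(path) as f:
        raw = f.read().strip()
    return None if raw == "max" else int(raw)
```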
Uses a `Monitor` attribute in the shutdown handler instead of an `args` attribute. This is necessary because some integrations (including KubeRay) instantiate the `Monitor` directly rather than running `python Monitor.py` with arguments.
Adds HTTP retries to the Ray CR fetch. This is necessary for robustness because exceptions during the Ray CR fetch are not currently handled during autoscaler initialization.
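A hedged sketch of the retry pattern (using the standard `requests` retry adapter; the endpoint and policy are illustrative, not the PR's exact code):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=5, backoff_factor=1, status_forcelist=[500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))
# e.g. fetch the Ray custom resource from the Kubernetes API server
# (URL below is a placeholder):
# response = session.get("https://kubernetes.default.svc/apis/ray.io/v1alpha1/...")
```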
This PR renames the `suggest` package to `search` and alters the layout slightly.
In the new package, the higher-level abstractions are on the top level and the search algorithms have their own subdirectories.
In a future refactor, we can turn algorithms such as PBT into actual `SearchAlgorithm` classes and move them into the `search` package.
The main reason to keep algorithms and searchers in the same directory is to avoid user confusion: for a user, BayesOpt is as much a search algorithm as e.g. PBT, so it doesn't make sense to split them up.
Remove the base dir pattern `python/ray/*.py` from the isort blacklist. This is needed so isort will run on subdirectories under `python/ray`, allowing us to start enabling isort for those subdirectories.
This PR records historical Ray native library usage to the home temp folder. Note that library usage only includes Ray native libraries (rllib, tune, dataset, workflow, and train). NOTE: the library usage is always recorded to `/tmp/ray`, but it is only reported when a cluster with usage stats enabled is running. Note that this can generate a fair number of false positives (e.g., if I import rllib once and then start a cluster for local development, that cluster will be counted as an rllib cluster).