Fixes a bug in wait_cluster where we count the total number of nodes that have ever been in the cluster rather than the currently alive nodes. This has caused infra/autoscaler failures (e.g. #26138) to be mislabeled as test failures (and probably skews timing too).
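For illustration only (not the actual `wait_cluster` code, and the helper name is hypothetical), counting only alive nodes might look like this:

```python
import time

import ray


def wait_for_alive_nodes(expected: int, timeout_s: float = 60.0) -> None:
    """Wait until `expected` nodes are alive, counting only live nodes."""
    alive = 0
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        # ray.nodes() also lists dead nodes, so filter on the "Alive" flag
        # instead of counting every node that has ever joined the cluster.
        alive = sum(1 for node in ray.nodes() if node["Alive"])
        if alive >= expected:
            return
        time.sleep(1)
    raise TimeoutError(f"Only {alive}/{expected} nodes alive after {timeout_s}s.")
```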
Co-authored-by: Alex Wu <alex@anyscale.com>
Add `/api/component_activities` to the dashboard snapshot router, which returns whether various Ray components are considered active.
The response currently only contains an entry for drivers; entries for other components will be added as follow-ups on request.
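A hedged usage sketch; the response shape shown in the comment is illustrative, not the exact schema:

```python
import requests

# Query the dashboard snapshot router (default dashboard port 8265).
resp = requests.get("http://localhost:8265/api/component_activities")
resp.raise_for_status()
print(resp.json())
# Illustrative shape: {"driver": {"is_active": ..., "reason": ..., "timestamp": ...}}
```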
Update documentation to use `session.report`.
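A minimal sketch of the new reporting API in a training function:

```python
from ray.air import session


def train_func(config):
    for epoch in range(config.get("num_epochs", 3)):
        loss = 1.0 / (epoch + 1)  # placeholder metric
        # Replaces the legacy tune.report / train.report calls.
        session.report({"loss": loss, "epoch": epoch})
```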
Next steps:
1. Update our internal callers to use `session.report`, most importantly `CheckpointManager` and `DataParallelTrainer`.
2. Update `get_trial_resources` to use PGF notions to incorporate the requirements of `ResourceChangingScheduler`. @Yard1
3. After 2 is done, change all `tune.get_trial_resources` to `session.get_trial_resources`
4. [internal implementation] Remove the special checkpoint handling logic from the HuggingFace trainer and optimize the checkpoint conversion flow with `session.report`.
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
## Why are these changes needed?
1. Currently, bundle resources are deducted from the cluster resources on the `GCS` side only after all Commit requests sent by `GCS` to `Raylet` have returned. Instead, the bundle resources should be deducted before `GCS` sends `PrepareResources` to `Raylet`, so that GCS-based actor scheduling can use fresher resources (see the pseudocode sketch after this list). Note that placing the deduction before `PrepareResources` versus after all `CommitResources` replies has no impact on `Raylet` scheduling.
2. The `GcsResourceManager::UpdateResources` and `GcsResourceManager::DeleteResources` could be deleted to simplify `GcsResourceManager`.
- `GcsResourceManager::UpdateResources` is only used by `GcsPlacementGroupScheduler::CommitAllBundles`; we can update the node resources (commit the bundle resources) in `GcsPlacementGroupScheduler` directly, and it is unnecessary to persist these resources to storage (they can be replayed from the placement group).
- `GcsResourceManager::DeleteResources` is only used by `GcsPlacementGroupScheduler::CancelResourceReserve`, which is invoked by `GcsPlacementGroupScheduler::DestroyPlacementGroupPreparedBundleResources` and `GcsPlacementGroupScheduler::DestroyPlacementGroupCommittedBundleResources`. In fact, `GcsPlacementGroupScheduler::ReturnBundleResources` is called wherever these two functions are used, so `GcsResourceManager::DeleteResources` is redundant. As above, it is unnecessary to persist the resource changes to storage (they can be replayed from the placement group).
3. The `gcs_table_storage_` member becomes unused once both `GcsResourceManager::UpdateResources` and `GcsResourceManager::DeleteResources` are removed, so it can be removed as well.
4. The `ray_gcs_new_resource_creation_latency_ms_sum` metric can also be removed since `GcsResourceManager::UpdateResources` is removed.
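A rough pseudocode sketch of the ordering change in point 1 (illustrative Python only; the real logic lives in the C++ GCS placement group scheduler, and all names here are hypothetical):

```python
def deduct(cluster_resources, bundles):
    # Subtract each bundle's resources from the cluster view held by the GCS.
    for bundle in bundles:
        for resource, amount in bundle.items():
            cluster_resources[resource] -= amount


def schedule_bundles_old(cluster_resources, bundles, send_prepare, send_commit):
    send_prepare(bundles)
    send_commit(bundles)
    # Old behavior: deduct only after every commit reply returns, so GCS-based
    # actor scheduling may see stale (over-counted) resources in the meantime.
    deduct(cluster_resources, bundles)


def schedule_bundles_new(cluster_resources, bundles, send_prepare, send_commit):
    # New behavior: deduct before PrepareResources so GCS-based actor
    # scheduling always works against fresher resource totals.
    deduct(cluster_resources, bundles)
    send_prepare(bundles)
    send_commit(bundles)
```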
Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
Add an API to get the node ID of the current worker. Example usage:
```java
UniqueId currNodeId = Ray.getRuntimeContext().getCurrentNodeId();
```
This API is needed by Ray Serve.
## Why are these changes needed?
Per the discussion in #26057, fix the stage fusion issue by re-ordering the randomize stage past any 1-1 stages.
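As a usage-level illustration (the reordering happens inside the Datasets stage optimizer), a pipeline like the following now allows the read and map stages to fuse, with the randomize stage applied afterwards:

```python
import ray

ds = (
    ray.data.range(1000)
    .randomize_block_order()            # randomize stage
    .map_batches(lambda batch: batch)   # 1-1 map stage
)
# With the fix, the randomize stage is re-ordered past the 1-1 map stage so
# that the read and map_batches stages can fuse.
print(ds.take(5))
```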
Closes #26057
This PR adds a GitHub Action that adds datasets-labeled issues (upon opening/labeling) to the Data Team's GitHub project. This should obviate the need for manually adding issues to the project before the start of each sprint.
The autoscaler container writes logs to a directory set up by the Ray container.
This PR moves the logic that sets up autoscaler logging so that it is done after the Ray container is ready.
This PR also changes things so that the autoscaler process exits after hitting 5 total exceptions. Kubernetes will then restart the autoscaler. The idea here is to ensure the autoscaler is able to restart cleanly in long-running deployments of Ray.
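A minimal sketch of the exit-after-repeated-exceptions pattern (illustrative only, not the actual monitor code):

```python
import logging
import sys
import time

MAX_EXCEPTIONS = 5  # matches the "5 total exceptions" policy described above


def run_autoscaler_iteration():
    """Placeholder for one round of autoscaler work."""


def autoscaler_loop():
    failures = 0
    while True:
        try:
            run_autoscaler_iteration()
        except Exception:
            failures += 1
            logging.exception("Autoscaler iteration failed (%d/%d)", failures, MAX_EXCEPTIONS)
            if failures >= MAX_EXCEPTIONS:
                # Exit so Kubernetes restarts the autoscaler container cleanly.
                sys.exit(1)
        time.sleep(5)
```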
Since https://github.com/ray-project/ray/pull/25999 we need typing_extensions. It is a very light requirement (a small package with no transitive dependencies), so that should be fine.
Considered alternative: make it optional. But that would make the typing code more brittle and prevent us from using more typing in the future.
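For context, the rejected alternative would have meant guarding every use of `typing_extensions` along these lines, which is why the small hard dependency was preferred:

```python
# Sketch of the rejected "optional dependency" approach: each use of
# typing_extensions needs a fallback, which makes the typing code brittle.
try:
    from typing_extensions import Protocol
except ImportError:
    Protocol = object  # degraded fallback with no structural typing
```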
1. Update `DummyTrainer` to take `num_epochs` instead of `runtime_seconds` (see the sketch after this list).
    - Ray Train expects an equal number of calls to `train.report()` from every worker. Different workers may run at different speeds and terminate after different numbers of epochs, which causes an error.
2. Add `generate_epochs` to support `DatasetPipeline` when `use_stream_api` is True.
3. Update the `__main__` code to support testing different configurations.
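Sketch referenced in item 1 above (hypothetical training function): iterating over a fixed `num_epochs` guarantees every worker makes the same number of `train.report()` calls, while time-based termination can leave workers out of sync.

```python
from ray import train


def train_func(config):
    # Every worker loops over the same number of epochs, so each one calls
    # train.report() the same number of times regardless of its speed.
    for epoch in range(config["num_epochs"]):
        train.report(epoch=epoch)
```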
This PR:
* Allows the user to set `keep_checkpoints_num` and `checkpoint_score_attr` in `RunConfig` using the `CheckpointStrategy` dataclass
* Adds two new fields to the `Result` object, including `best_checkpoints`, a list of the saved best checkpoints as determined by `CheckpointingConfig`.
We explicitly disallow scheduling tasks based on object store memory, so we should state that in the docs.
cc @scottsun94
```
>>> import ray
>>> @ray.remote(object_store_memory=100)
... def foo():
...     pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/alex/anyscale/ray/python/ray/worker.py", line 2479, in _make_remote
ray_option_utils.validate_task_options(options, in_options=False)
File "/Users/alex/anyscale/ray/python/ray/_private/ray_option_utils.py", line 191, in validate_task_options
task_options[k].validate(k, v)
File "/Users/alex/anyscale/ray/python/ray/_private/ray_option_utils.py", line 33, in validate
raise ValueError(self.error_message_for_value_constraint)
ValueError: Setting 'object_store_memory' is not implemented for tasks
```
Co-authored-by: Alex Wu <alex@anyscale.com>
## Why are these changes needed?
This PR adds data truncation when there are more than N entries. The policy is as follows:
- By default, we return at most 100 entries. Users can adjust this value, but it cannot be increased beyond 10K.
- By default, all internal RPCs truncate data if it exceeds 10K entries.
- For distributed sources, we query each source with a 10K limit and apply the limit again at the end (see the sketch below).
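A simplified sketch of this policy (an illustrative helper, not the actual state API code):

```python
MAX_LIMIT = 10_000   # hard cap for user-provided limits and internal RPCs
DEFAULT_LIMIT = 100  # default number of entries returned to the user


def truncate(entries, limit=DEFAULT_LIMIT):
    """Apply the per-request limit, never exceeding the 10K hard cap."""
    effective_limit = min(limit, MAX_LIMIT)
    return entries[:effective_limit], len(entries) > effective_limit


def merge_sources(per_source_results, limit=DEFAULT_LIMIT):
    # Each distributed source is queried with the 10K cap; the limit is
    # applied once more after merging the per-source results.
    merged = [entry for entries in per_source_results for entry in entries]
    return truncate(merged, limit)
```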
## Related issue number
Closes https://github.com/ray-project/ray/issues/25984#issue-1279280673
Part of https://github.com/ray-project/ray/issues/25718#issue-1268968400
As the integration logging callbacks are commonly used with AIR Trainers, they should be moved from the tune package to the air package. The old imports will still work, but raise a deprecation warning.
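For example (module paths here are assumptions based on this description, not verified against the final layout):

```python
# New location under the AIR package (path assumed).
from ray.air.callbacks.wandb import WandbLoggerCallback

# Old import: still works for now, but emits a deprecation warning.
# from ray.tune.integration.wandb import WandbLoggerCallback
```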
The `KerasCallback` saves the model checkpoint as a file. However, for the saved checkpoint to work with `TensorflowPredictor`, the model weights need to be saved under `MODEL_KEY` in a dict format.
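A hedged sketch of the described fix, assuming `MODEL_KEY` is the Ray Train constant and that the callback reports a dict-format checkpoint (exact import paths are assumptions):

```python
from ray.air import session
from ray.air.checkpoint import Checkpoint
from ray.train.constants import MODEL_KEY  # import path assumed


def report_keras_checkpoint(model, metrics):
    # Store the weights under MODEL_KEY in a dict checkpoint so that
    # TensorflowPredictor can reconstruct the model from it later.
    checkpoint = Checkpoint.from_dict({MODEL_KEY: model.get_weights()})
    session.report(metrics, checkpoint=checkpoint)
```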