Uses the new AIR Train API for examples and tests.
The `Result` object gets a new attribute, `log_dir`, pointing to the Trial's `logdir`, allowing users to access TensorBoard logs and artifacts of other loggers.
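A minimal sketch of how this could be used (the trainable and metric below are illustrative, not from this PR):

```
from ray import tune
from ray.air import session

def trainable(config):
    # Illustrative body: report a dummy metric via the session API.
    session.report({"score": config["x"] ** 2})

results = tune.Tuner(trainable, param_space={"x": 2}).fit()
result = results[0]
# The new attribute points to the Trial's logdir, where TensorBoard
# event files and other logger artifacts are written.
print(result.log_dir)
```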
This PR only deals with "low-hanging fruit": tests that need substantial rewriting and the Train user guide are not touched. Those will be updated in follow-up PRs.
Tests and examples that concern deprecated features or which are duplicated in AIR have been removed or disabled.
Requires https://github.com/ray-project/ray/pull/25943 to be merged in first
Alternative to #26356 - here we just pin raydp-nightly and resolve the dependency issues in follow-up PRs.
This is to quickly unblock CI.
Signed-off-by: Kai Fricke <kai@anyscale.com>
This PR unifies the semantics of some workflow APIs.
Those workflow APIs act on workflow tasks, so they can block for a long time. We therefore provide both blocking and non-blocking versions: xxx for the blocking form and xxx_async for the non-blocking form.
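For example, the pattern looks roughly like this (using `get_output` for illustration; treat the exact signatures as an approximation rather than the final API):

```
from ray import workflow

# Blocking form: returns only once the underlying workflow task finishes.
result = workflow.get_output("my_workflow_id")

# Non-blocking form: returns immediately with a reference that can be
# awaited / fetched later instead of blocking the caller.
ref = workflow.get_output_async("my_workflow_id")
```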
In Ray 2.0, we want to achieve API server HA.
Originally, the Serve endpoints lived on the head node.
This PR moves the Serve endpoints to the dashboard agents, so they become highly available thanks to the multiple replicas of the dashboard agent.
When detecting resource capacities to advertise to Ray, the Ray operator takes requests into account. This doesn't make sense: taking the min of requests and limits definitely doesn't make sense. Only limits should be considered.
Currently, the following message is printed even when the user is not directly using a Tune function. This is confusing and not actionable.
```
"`checkpoint_dir` in `func(config, checkpoint_dir)` is "
"being deprecated. "
"To save and load checkpoint in trainable functions, "
"please use the `ray.air.session` API:\n\n"
"from ray.air import session\n\n"
"def train(config):\n"
" # ...\n"
' session.report({"metric": metric}, checkpoint=checkpoint)\n\n'
"For more information please see "
"https://docs.ray.io/en/master/ray-air/key-concepts.html#session\n"
```
The new logic checks whether `base_trainer` is in the call stack and only adds the warning message when it is not. This logic will be removed once we migrate internally to the `session` API.
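A rough sketch of that check (names are illustrative; this is not the actual implementation):

```
import inspect

def _called_from_base_trainer() -> bool:
    # Walk the call stack and look for a frame that originates from a
    # `base_trainer` module; if one is found, the warning is suppressed.
    return any("base_trainer" in frame.filename for frame in inspect.stack())

def maybe_warn(warn_fn):
    if not _called_from_base_trainer():
        warn_fn()  # only warn users who call the Tune function directly
```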
To allow one storage backend to be shared by multiple Ray clusters, a special prefix is added to isolate the data between clusters: "<EXTERNAL_STORAGE_NAMESPACE>@".
The namespace is given by the environment variable `RAY_external_storage_namespace` when starting the head node: `RAY_external_storage_namespace=1234 ray start --head`.
This flag is very important in an HA GCS environment. For example, with the Ray Serve operator, when the operator tries to bring up a new cluster, it's hard to just start a new DB, but it's relatively easy to generate a new cluster ID.
Another example: the user might only be able to maintain one HA Redis DB, and the namespace enables them to start multiple Ray clusters which share the same DB.
This config should be moved into the storage config in the future once we build that.
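A tiny sketch of the isolation idea; only the `"<EXTERNAL_STORAGE_NAMESPACE>@"` prefix comes from this description, the key layout is made up:

```
import os

# Set when starting the head node, e.g.
#   RAY_external_storage_namespace=1234 ray start --head
namespace = os.environ.get("RAY_external_storage_namespace", "default")

def namespaced_key(key: str) -> str:
    # Each cluster writes under its own "<namespace>@" prefix, so two
    # clusters sharing the same Redis DB never collide on keys.
    return f"{namespace}@{key}"

print(namespaced_key("job_table:xyz"))  # e.g. "1234@job_table:xyz"
```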
This PR adds support for specifying an exception allowlist (`List[Exception]`) as the `retry_exceptions` argument, so that an application-level exception will only be retried if it is in the allowlist.
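For example, something like this should retry only on the allowlisted exception type (the task body is illustrative):

```
import ray

class TransientError(Exception):
    pass

@ray.remote(max_retries=3, retry_exceptions=[TransientError])
def flaky_task():
    # Raising TransientError is retried (it is in the allowlist);
    # any other exception type fails the task immediately.
    raise TransientError("try again")
```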
Update documentation to use `session.report`.
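As a reminder of the target pattern, a minimal training function with the `session` API might look like this (the metric and checkpoint contents are placeholders):

```
from ray.air import session
from ray.air.checkpoint import Checkpoint

def train_func(config):
    for epoch in range(config["num_epochs"]):
        loss = 1.0 / (epoch + 1)  # placeholder metric
        # Metrics and the checkpoint go through a single call, replacing
        # the old `checkpoint_dir` / `tune.report` pattern.
        session.report(
            {"loss": loss, "epoch": epoch},
            checkpoint=Checkpoint.from_dict({"epoch": epoch}),
        )
```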
Next steps:
1. Update our internal callers to use `session.report`, most importantly `CheckpointManager` and `DataParallelTrainer`.
2. Update `get_trial_resources` to use PGF notions to incorporate the requirement of ResourceChangingScheduler. @Yard1
3. After 2 is done, change all `tune.get_trial_resources` to `session.get_trial_resources`
4. [internal implementation] remove special checkpoint handling logic from huggingface trainer. Optimize the flow for checkpoint conversion with `session.report`.
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
## Why are these changes needed?
1. Now, bundle resources are deducted from the cluster resources on the `GCS` side only after all Commit requests sent by `GCS` to `Raylet` have returned. Instead, the bundle resources should be deducted before `GCS` sends `PrepareResources` to `Raylet`, so that actor scheduling on the `GCS` side can use fresher resource information. BTW, putting the deduction before `PrepareResources` or after the replies of all `CommitResources` has no impact on `Raylet` scheduling.
2. The `GcsResourceManager::UpdateResources` and `GcsResourceManager::DeleteResources` could be deleted to simplify `GcsResourceManager`.
- `GcsResourceManager::UpdateResources` is only used by `GcsPlacementGroupScheduler::CommitAllBundles`; we can update the node resources (commit bundle resources) in `GcsPlacementGroupScheduler` directly, and I think it's unnecessary to persist these resources to storage (they can be replayed from the placement group).
- `GcsResourceManager::DeleteResources` is only used by `GcsPlacementGroupScheduler::CancelResourceReserve`, which is invoked by `GcsPlacementGroupScheduler::DestroyPlacementGroupPreparedBundleResources` and `GcsPlacementGroupScheduler::DestroyPlacementGroupCommittedBundleResources`. In fact, `GcsPlacementGroupScheduler::ReturnBundleResources` is called wherever these two functions are used, so I think `GcsResourceManager::DeleteResources` is redundant. BTW, I think it's unnecessary to persist the resource changes to storage (they can be replayed from the placement group).
3. The `gcs_table_storage_` becomes useless once both `GcsResourceManager::UpdateResources` and `GcsResourceManager::DeleteResources` are removed, so it can be removed too.
4. The `ray_gcs_new_resource_creation_latency_ms_sum` metric can also be removed, as `GcsResourceManager::UpdateResources` is removed.
Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
## Why are these changes needed?
Per the discussion in #26057, fix the stage fusion issue by re-ordering the randomize stage past any 1-1 stages.
Closes #26057
The autoscaler container writes logs to a directory set up by the Ray container.
This PR moves the logic that sets up autoscaler logging so that it is done after the Ray container is ready.
This PR also changes things so that the autoscaler process exits after hitting 5 total exceptions. Kubernetes will then restart the autoscaler. The idea here is to ensure the autoscaler is able to restart cleanly in long-running deployments of Ray.
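A rough sketch of that restart behavior (the function names and backoff are illustrative, not the actual monitor code):

```
import logging
import time

MAX_TOTAL_EXCEPTIONS = 5  # from the description above

def run_autoscaler_loop(run_one_iteration):
    failures = 0
    while True:
        try:
            run_one_iteration()
        except Exception:
            failures += 1
            logging.exception(
                "Autoscaler iteration failed (%d/%d)", failures, MAX_TOTAL_EXCEPTIONS
            )
            if failures >= MAX_TOTAL_EXCEPTIONS:
                # Exit the process; Kubernetes restarts the autoscaler container.
                raise SystemExit(1)
            time.sleep(5)  # illustrative backoff between retries
```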
Since https://github.com/ray-project/ray/pull/25999 we need `typing_extensions`. It is a very light requirement (no transitive dependencies and a small package), so that should be fine.
Considered alternative: Make it optional -- but that would make the typing code more brittle, and prevent us from using more typing in the future.
1. Update `DummyTrainer` to take `num_epochs` instead of `runtime_seconds`.
   - Ray Train expects an equal number of calls to `train.report()` across workers. Different workers may run at different speeds and terminate after different numbers of epochs, which causes an error.
2. Add `generate_epochs` to support `DatasetPipeline` when `use_stream_api` is True (see the sketch after this list).
3. Update the `__main__` code to support testing different configurations.
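A sketch of what the epoch generation could look like with `DatasetPipeline` (the helper name comes from this description, but its exact signature is an assumption):

```
import ray

def generate_epochs(ds, num_epochs: int):
    # Stream-API path: repeat the dataset and yield one epoch at a time.
    pipe = ds.repeat(num_epochs)
    yield from pipe.iter_epochs()

ds = ray.data.range(100)
for epoch_ds in generate_epochs(ds, num_epochs=3):
    pass  # each iteration consumes exactly one epoch
```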
This PR:
* Allows the user to set `keep_checkpoints_num` and `checkpoint_score_attr` in `RunConfig` using the `CheckpointStrategy` dataclass
* Adds two new fields to the `Result` object, including `best_checkpoints`: a list of saved best checkpoints, as determined by `CheckpointingConfig` (see the sketch below).
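A possible way to consume the new field; treating each entry as a (checkpoint, metrics) pair is an assumption about the exact element type:

```
from ray.air.result import Result

def summarize_best_checkpoints(result: Result) -> None:
    # Assumption: entries are kept according to `keep_checkpoints_num`
    # and ranked by `checkpoint_score_attr`.
    for checkpoint, metrics in result.best_checkpoints:
        print(metrics, checkpoint)
```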
We explicitly disallow scheduling tasks based on object store memory, so we should state that in the docs.
cc @scottsun94
```
>>> import ray
>>> @ray.remote(object_store_memory=100)
... def foo():
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/alex/anyscale/ray/python/ray/worker.py", line 2479, in _make_remote
ray_option_utils.validate_task_options(options, in_options=False)
File "/Users/alex/anyscale/ray/python/ray/_private/ray_option_utils.py", line 191, in validate_task_options
task_options[k].validate(k, v)
File "/Users/alex/anyscale/ray/python/ray/_private/ray_option_utils.py", line 33, in validate
raise ValueError(self.error_message_for_value_constraint)
ValueError: Setting 'object_store_memory' is not implemented for tasks
```
Co-authored-by: Alex Wu <alex@anyscale.com>
## Why are these changes needed?
This PR adds data truncation when there are more than N entries. The policy is as follows (a usage sketch follows the list):
- By default, we return at most 100 entries. Users can adjust this value, but we won't allow increasing it beyond 10K.
- By default, all internal RPCs truncate data if it exceeds 10K entries.
- For distributed sources, we query each source with a 10K limit and apply the limit again at the end.
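For illustration only; the exact Python entry point and argument names below are assumptions, not confirmed by this PR:

```
# Hypothetical usage of the state listing API with an explicit limit.
from ray.experimental.state.api import list_tasks

tasks = list_tasks(limit=100)  # default cap is 100 entries
# Asking for more than the 10K ceiling is rejected/clamped server-side.
```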
## Related issue number
Closes https://github.com/ray-project/ray/issues/25984#issue-1279280673
Part of https://github.com/ray-project/ray/issues/25718#issue-1268968400
As the integration logging callbacks are commonly used with AIR Trainers, they should be moved from the `tune` package to the `air` package. The old imports will still work, but will raise a deprecation warning.
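For example, imports would change roughly like this (the exact new module path is an assumption; the old one is the import being deprecated):

```
# Old location: still works, but now emits a deprecation warning.
from ray.tune.integration.mlflow import MLflowLoggerCallback  # noqa: F401

# New location under the AIR package (exact path assumed for illustration).
from ray.air.callbacks.mlflow import MLflowLoggerCallback  # noqa: F811
```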