This PR consists of the following clean-up items for KubeRay autoscaler integration:
- Remove the docker/kuberay directory.
- Move the Python files formerly in docker/kuberay to the autoscaler directory.
- Use a rayproject/ray image for the autoscaler.
- Add an entry point for the KubeRay autoscaler to scripts.py, and use that entry point in the example config (a hedged sketch of such an entry point follows this list).
- Slightly simplify the code that starts the autoscaler.
- Update Ray versions to Ray 1.11.0, which will be officially released within the next couple of days.
- Remove references to Redis from the example config, since Ray >= 1.11.0 runs without Redis by default.
- Add the autoscaler configuration test to the CI.
- Update the development documentation to reflect the changes in this PR.
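To make the entry-point item above concrete, here is a hedged sketch of what a click-based entry point in scripts.py might look like. The command name, flags, and import path are illustrative assumptions, not confirmed from this PR.
```
import click


@click.command(hidden=True)
@click.option("--cluster-name", required=True, type=str,
              help="Name of the RayCluster custom resource to autoscale.")
@click.option("--cluster-namespace", required=True, type=str,
              help="Kubernetes namespace the RayCluster lives in.")
def kuberay_autoscaler(cluster_name: str, cluster_namespace: str) -> None:
    """Run the Ray autoscaler against a KubeRay-managed cluster."""
    # Hypothetical import path; the PR only states that the KubeRay Python
    # files were moved into the autoscaler directory.
    from ray.autoscaler._private.kuberay.run_autoscaler import run_kuberay_autoscaler

    run_kuberay_autoscaler(cluster_name, cluster_namespace)
```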
`test_deploy` has become [flakey](https://flakey-tests.ray.io/#) due to timeout. Since `test_deploy` is already a "large" test, this change splits it into two testing files instead of simply increasing the timeout.
This PR splits up the changes in #22393 and introduces an implementation of the ML Checkpoint interface used by Ray Tune.
This means the TuneCheckpoint class implements the to/from_[bytes|dict|directory|object_ref|uri] conversion functions, as well as higher-level functions to transition between the different TuneCheckpoint classes. It also includes test cases for Tune's main conversion modes, i.e. dict - intermediate - dict and fs - intermediate - fs.
These changes will be the basis for refactoring the Tune interface to use TuneCheckpoint objects instead of TrialCheckpoints (externally) and instead of paths/objects (internally).
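As a hedged illustration of the two conversion modes called out above; the import path and exact signatures are assumptions for illustration, only the to/from_* naming follows this PR.
```
import os
import tempfile

# Assumed import path; this PR introduces the class as TuneCheckpoint.
from ray.ml.checkpoint import Checkpoint as TuneCheckpoint

# dict -> intermediate -> dict round trip
checkpoint = TuneCheckpoint.from_dict({"weights": [1, 2, 3], "epoch": 5})
assert checkpoint.to_dict()["epoch"] == 5

# fs -> intermediate -> fs round trip
src_dir = tempfile.mkdtemp()
with open(os.path.join(src_dir, "model.txt"), "w") as f:
    f.write("weights")
checkpoint = TuneCheckpoint.from_directory(src_dir)
restored_dir = checkpoint.to_directory()
```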
* refactor resource data structure in gcs
* fix comment
* fix lint error
* fix
* DISABLED_TestRejectedRequestWorkerLeaseReply, as it depends on the normal task update
Co-authored-by: 黑驰 <senlin.zsl@antgroup.com>
Follow-up to #22748, enabling tests in CI.
Conditions: A new RAY_CI_ML_AFFECTED condition is added for this test suite. The package currently depends on Ray Data, and will be triggered accordingly.
Dependencies: Adding DATA_PROCESSING_TESTING dependencies (set for install-dependencies.sh) for now.
test_plasma_unlimited::test_task_unlimited is flaky because one of the assertions is racy and can trigger after the condition is no longer true (see #22883). This fixes the flake by:
- adding an assertion in between two object allocations to force the object store queue to flush
- keeping one of the ObjectRefs in scope to make sure that the object is still fallback-allocated by the time we reach the failing assertion
We essentially use a hack to determine whether shuffling should be enabled in prepare_data_loader. I've clarified the documentation so the hack is easier to understand.
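For context, a hedged sketch of the kind of check involved (illustrative, not necessarily the exact code in prepare_data_loader): a DataLoader does not expose its original shuffle argument, so the sampler type is inspected instead.
```
from torch.utils.data import DataLoader, RandomSampler


def infer_shuffle(data_loader: DataLoader) -> bool:
    # A DataLoader created with shuffle=True uses a RandomSampler under the
    # hood, so the sampler type is the only remaining signal for that flag.
    # prepare_data_loader can then pass this flag on to the DistributedSampler
    # it creates for distributed training.
    return isinstance(data_loader.sampler, RandomSampler)
```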
To support mixed precision (see #20643), we need to store a GradScaler instance that is accessible to both the prepare_optimizer and backward functions (these functions will be added later).
This PR introduces the Accelerator, an object that implements methods to perform backend-specific training optimizations.
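A hedged sketch of the idea: one object owns the backend-specific training state, so both prepare_optimizer and backward (described as follow-ups, not implemented here) can share the GradScaler. Names besides GradScaler are illustrative, not the exact Ray Train API.
```
import torch
from torch.cuda.amp import GradScaler


class Accelerator:
    """Sketch: holds backend-specific training state such as the GradScaler."""

    def __init__(self, amp: bool = False):
        self.amp_enabled = amp
        self.scaler = GradScaler() if amp else None

    def backward(self, loss: torch.Tensor) -> None:
        # Scale the loss when mixed precision is enabled; otherwise plain backward.
        if self.scaler is not None:
            self.scaler.scale(loss).backward()
        else:
            loss.backward()
```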
If you don't add `ray.init("auto")` to your training script, then your training script might complain that there aren't enough resources, even if `ray status` shows that there are.
Co-authored-by: Amog Kamsetty <amogkam@users.noreply.github.com>
Comments to note from the discussion below:
https://github.com/ray-project/ray/pull/22113#discussion_r802512907
> Problem - We cannot always delegate the call to cls.__init__ or modified_cls.__init__. If we always delegate the call to cls.__init__ from here, the user-defined class's __init__ method will be ignored, leading to issues like https://github.com/ray-project/ray/issues/21868. If we always delegate the call to modified_cls.__init__, it will allow inheriting from actor classes, leading to failure of test_actor_inheritance. So I have added this if-else check to figure out which __init__ method should be called. If "__module__", "__qualname__", and "__init__" are present in args[-1], it means an actor class is being inherited, so cls.__init__ should be called. However, if no such signal is present in args, the user-defined class's __init__, i.e. modified_class.__init__, should be called.
https://github.com/ray-project/ray/pull/22113#discussion_r808696261
> So I noted that ActorClass.__init__ will raise a TypeError whenever it is inherited. To figure out whether the exception is due to inheritance of ActorClass, I created a new class ActorClassInheritanceException(TypeError). Now, whenever this is raised, DerivedActorClass gets a clear signal about inheritance of ActorClass. In other cases, it is safe to conclude (AFAICT) that the user called the __init__ method of their class, and we proceed normally. IMHO, this is a better and more robust solution which just depends on a simple signal, i.e. raising a particular exception in a specific event. It doesn't matter how inheritance is prevented, as in the end we just need to raise ActorClassInheritanceException, and all other code will be able to detect it easily.
https://github.com/ray-project/ray/pull/22113#issuecomment-1048527387
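A hedged, minimal illustration of the signalling pattern described in the comments above (not Ray's actual implementation): a dedicated TypeError subclass lets the wrapping code distinguish "an actor class was inherited" from any other TypeError. The helper name is hypothetical.
```
class ActorClassInheritanceException(TypeError):
    """Raised when user code inherits from an actor class."""


def call_user_init(user_init, *args, **kwargs):
    # Sketch of how DerivedActorClass.__init__ could consume the signal.
    try:
        user_init(*args, **kwargs)
    except ActorClassInheritanceException:
        # Unambiguous signal: an actor class is being inherited.
        raise TypeError("Inheriting from actor classes is not supported.") from None
    # Any other outcome means the user's own __init__ ran; proceed normally.
```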
There is a bug in combining the results from map_batches: if we create two datasets out of the same data but with different numbers of partitions, we may get different results when running the same map_batches() on them. That is, the number of partitions affects the map_batches() results, which it should not.
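A hedged sketch of the invariant this fix restores (API usage is illustrative): the same map_batches() call over the same data should give the same combined result regardless of how many blocks the data is split into.
```
import ray

data = list(range(10))
ds_few_blocks = ray.data.from_items(data, parallelism=1)
ds_many_blocks = ray.data.from_items(data, parallelism=5)


def double(batch):
    # Operates on one batch of rows at a time.
    return [x * 2 for x in batch]


# Partitioning should not change the combined map_batches() result.
assert ds_few_blocks.map_batches(double).take() == ds_many_blocks.map_batches(double).take()
```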
#22714 added `serve run` to the Serve CLI. This change allows the user to specify `init_args` and `init_kwargs` in `serve run` if they are deploying via import path.
Currently, classes and functions can be deployed by setting `Deployment`'s `func_or_class` to their import path. However, if these classes or functions are already decorated with `@serve.deployment`, deploying via import path will error.
This change instead ignores the settings in a class or function's `@serve.deployment` decorator when deploying via import path. It takes the code definition and deploys it without erroring. It also logs a warning about the ignored settings.
By default, the REST API's schema denies HTTP access to deployments when `route_prefix` is omitted. This doesn't match `@serve.deployment`'s behavior, which makes `route_prefix` the deployment's name when omitted.
This change matches the schema's behavior to the decorator. When `route_prefix` is omitted from the config, the deployment's `route_prefix` defaults to its name. When the `route_prefix` is specified as `null`, the deployment won't have HTTP access.
This change also fixes a bug in Serve where, when a deployment was updated from a non-`None` `route_prefix` to a `None` `route_prefix`, its `route_prefix` did not change. This bug meant that a deployment available over HTTP would remain available at the same route even when deployed again with `route_prefix=None`.
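For reference, a hedged sketch of the decorator behavior the schema now matches (class names are illustrative):
```
from ray import serve


@serve.deployment  # route_prefix omitted -> defaults to the deployment's name, "/MyModel"
class MyModel:
    def __call__(self, request):
        return "hello"


@serve.deployment(route_prefix=None)  # explicitly None -> no HTTP access
class InternalStep:
    def __call__(self, data):
        return data
```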
- Enhanced ray dag InputNode to take arbitrary user input via `.execute()`.
  - If only one value is provided, like `dag.execute(1)`, return the raw value;
  - Otherwise, wrap the user input into a `DAGInputData` object that can be accessed via index or key.
  - Users can also pass a list / dict object and access it via index [0] or key ["key"].
- Introduced `InputAttrNode`, which connects a partial attribute of the user input to the DAG.
- Added context manager syntax for `InputNode`.
- Added InputNode enforcement with tests, such as the DAG-level singleton and exceptions with messages.
- Enforce only simple int or str keys.
- Handled JSON serialization for InputNode so the original context manager info is preserved.
- DAGNode UUID is also preserved in JSON serde.
## Next steps
On ray dag level we're proceeding with
```
with InputNode() as input:  # Probably better to rename it to DAGInput()
    a = Model.bind(input[0])
    b = Model.bind(input.x)
    dag = combine.bind(a, b)
```
But we also enforce that:
1) InputNode is always used in a context manager, as opposed to being created directly.
2) There is one and only one InputNode instance for each dag.
3) No args are passed by the user to InputNode at the ray dag level.
Then in Serve we subclass it as ServeInputNode() to support HTTP input validation and conversion, like the following:
```
with ServeInputNode(schema=MySchemaCls) as input:
    a = Model.bind(input[0])
    b = Model.bind(input.x)
    dag = combine.bind(a, b)
```
Co-authored-by: Eric Liang <ekhliang@gmail.com>
Co-authored-by: mwtian <81660174+mwtian@users.noreply.github.com>
This PR allows `DatasetPipeline.iter_batches()` to batch data across windows in the pipeline. This prevents partial batches from popping up in the middle of consuming a dataset pipeline due to window boundaries, and lets us provide the following guarantee to the user: `pipe.iter_batches()` will yield `len(pipe) // batch_size` full batches, with a partial batch occurring only as the final batch and only if `len(pipe) % batch_size > 0`; if it exists, it will have size `len(pipe) % batch_size`.
The crux of this PR takes the block batching implementation from `Dataset.iter_batches()`, refactors it to operate on an iterator of blocks instead of a `Dataset`, pulls it out into a shared `batch_blocks()` utility, and has `DatasetPipeline.iter_batches()` use it to batch over windows by providing an iterator over all blocks in all windows.
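A hedged sketch of the batching idea (not the actual `batch_blocks()` implementation): buffer rows across block and window boundaries so that only the final batch can be partial.
```
from typing import Iterator, List, TypeVar

T = TypeVar("T")


def batch_blocks(blocks: Iterator[List[T]], batch_size: int) -> Iterator[List[T]]:
    # Carry leftover rows across block boundaries so every yielded batch is
    # full, except possibly the last one.
    buffer: List[T] = []
    for block in blocks:
        buffer.extend(block)
        while len(buffer) >= batch_size:
            yield buffer[:batch_size]
            buffer = buffer[batch_size:]
    if buffer:
        yield buffer  # the only possible partial batch, of size len(pipe) % batch_size
```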
When the local raylet has no scheduled task of a given scheduling class, the backlog resources for that class are not reported. This usually happens when a core worker tries to schedule a task on another node and reports the backlog to the local node.
This leads to incorrect resource demands.
As we are turning on Redis-less Ray by default, the dashboard doesn't need to talk to Redis anymore. Instead, it should talk to the GCS, and the GCS can talk to Redis.
This change adds `run`, `delete`, and `status` commands to the CLI introduced in #22648.
* `serve run`: Blocking command that allows users to deploy a YAML configuration or a class/function via import path. When terminated, the deployment(s) are torn down. Prints status info while running. Supports interactive development.
* `serve delete`: Shuts down a Serve application and deletes all its running deployments.
* `serve status`: Displays the status of a Serve application's deployments.