In xgboost 1.6, support for older GPU architectures was removed (dmlc/xgboost#7767).
This PR updates the instance types used in our xgboost-ray GPU release tests to use Volta GPUs instead of Kepler GPUs so that xgboost-ray can run successfully with xgboost v1.6.
Closes #24048
`test_cluster: test_replica_startup_status_transitions` is periodically flaky with the replica hanging in `PENDING_ALLOCATION`. This could be because there is no ordering guarantee on async actor calls, so the `reconfigure` method might execute first and block the asyncio loop (due to `ray.get`), not allowing the `is_allocated` call to run.
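An illustrative sketch (not the actual test) of the suspected behavior; `load_config` is a hypothetical task standing in for the blocking work:

```python
import ray

@ray.remote
def load_config():
    return {"num_replicas": 1}

@ray.remote
class Replica:
    async def reconfigure(self):
        # A blocking ray.get() stalls the actor's asyncio event loop, so a
        # pending is_allocated() call cannot run until it returns.
        return ray.get(load_config.remote())

    async def is_allocated(self):
        return True

ray.init()
replica = Replica.remote()
# There is no ordering guarantee: is_allocated() may be submitted second yet
# only execute once reconfigure() has released the event loop.
ray.get([replica.reconfigure.remote(), replica.is_allocated.remote()])
```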
This PR updates syncer-related code and comments from #23660 to reduce the code size:
* Rename `Snapshot`/`Update` to `CreateSyncMessage`/`ConsumeSyncMessage`.
* Make the ray syncer tests work even when more components are added to the protobuf.
* Make the ray syncer able to reconnect to a new node.

Closes #23503
We are fixing two issues here:
1. The unified controller API used pickle to pack the init args; we are changing it to cloudpickle for now. (This is something I missed during code review; see the sketch after this list.)
2. The checkpoint state functionality in the controller uses pickle to prevent Ray-cluster-specific state from being written to the checkpoint and becoming unrecoverable in a fresh new cluster. However, recovering in a new cluster this way is not good UX, and we should prefer an end-to-end solution like resubmitting via the REST API.
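A minimal sketch of why the cloudpickle switch matters, assuming the init args can contain closures or lambdas:

```python
import pickle
import cloudpickle

init_args = (lambda x: x + 1,)  # e.g. a callable passed as a deployment init arg

try:
    pickle.dumps(init_args)
except (pickle.PicklingError, AttributeError) as e:
    # stdlib pickle cannot serialize lambdas or locally defined functions
    print("pickle fails:", e)

restored = cloudpickle.loads(cloudpickle.dumps(init_args))
print(restored[0](1))  # 2
```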
As a corollary, the deployment state manager should not care about deserializing the replica config and init args. Rather, it should just pass the protobuf directly to the replica. I can do that either here or as a follow-up.
`set_start_time()` was not implemented for the progress reporter base class, but it is called in `tune.run()`.
Instead of adding new methods to set runtime arguments, this PR moves to a single, forward-compatible `setup()` method that defaults to a no-op. This way, custom reporters can make use of runtime information passed to the reporter but can choose to ignore it by default.
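A rough sketch of a custom reporter under this change; the exact keyword arguments `setup()` receives are assumptions here:

```python
from ray.tune import CLIReporter

class TimedReporter(CLIReporter):
    # setup() defaults to a no-op in the base class, so overriding it is
    # optional and existing custom reporters keep working.
    def setup(self, start_time=None, **kwargs):
        # Keep only the runtime info we care about; ignore the rest.
        self._start_time = start_time
```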
Previously we had double-dump behavior that made the JSON serde neither human-readable nor friendly, but it was required given that `DAGDriver` takes `dag_node_json` as its first arg and it appears in the YAML.
This PR removes the extra `json.dumps()` in the encoder path and eliminates or simplifies most of the encoders / object_hooks that were not needed in the first place, to make everything simpler again.
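For context, a minimal illustration of the double-dump problem:

```python
import json

node = {"dag_node": "DAGDriver"}
double = json.dumps(json.dumps(node))  # dumps an already-serialized string
print(double)            # "{\"dag_node\": \"DAGDriver\"}"  (escaped, hard to read)
print(json.dumps(node))  # {"dag_node": "DAGDriver"}        (single dump, readable)
```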
Sample YAML now for a complex DAG: https://gist.github.com/jiaodong/32991771e9d78c35767eb24ed73f8236
We're pretty close to having a better minimal JSON representation of the whole DAG after this. I might include that in this PR or a separate one.
Several changes to make spread scheduling work better under load (a usage sketch follows the list):
* When nodes are not available, spread among feasible nodes.
* If `grant_or_reject` is true, don't spill back if the selected node is not available.
* Don't spill due to waiting for dependencies for spread tasks.
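For context, a small example of Ray's spread scheduling strategy; the task itself is illustrative:

```python
import ray

@ray.remote(scheduling_strategy="SPREAD")
def shard_work(i):
    return i

ray.init()
# Tasks are spread across the cluster. With this PR, spreading falls back to
# feasible nodes when no node is currently available, and spread tasks are
# not spilled while waiting for dependencies.
results = ray.get([shard_work.remote(i) for i in range(8)])
```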
`gcsfs` complains about an invalid `create_parents` argument when using Google Cloud Storage with cloud checkpoints. Thus, we should use an alternative fsspec handler that omits this argument for gs.
The root issue will be fixed here: https://github.com/fsspec/gcsfs/pull/471
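A hypothetical sketch of the workaround (not the exact Ray code); `mkdir_compat` is an illustrative helper:

```python
import fsspec

def mkdir_compat(fs: fsspec.AbstractFileSystem, path: str) -> None:
    protocols = fs.protocol if isinstance(fs.protocol, tuple) else (fs.protocol,)
    if "gs" in protocols or "gcs" in protocols:
        # gcsfs rejects the create_parents keyword, so omit it for gs paths.
        fs.mkdir(path)
    else:
        fs.mkdir(path, create_parents=True)
```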
In a1e06f64ae, a memory bound was added for each subscribed entity in the publisher. It adds two extra `std::deque`s per subscribed entity, which turns out to cost a lot more memory when there are a large number of `ObjectRef`s: https://github.com/ray-project/ray/pull/23853#issuecomment-1098382286
This PR avoids the extra memory usage for entities in channels that are unlikely to grow too large, i.e. all channels except those for logs and error info. Subscribed-entity memory usage no longer shows up in the memory profile when there are 1M object refs. Raw data: [profile006.pb.gz](https://github.com/ray-project/ray/files/8508547/profile006.pb.gz)
Implements `SklearnTrainer` and `SklearnPredictor`, with full parallelism via joblib and support for GPU-enabled estimators like cuML.
The interface has been modified slightly by the addition of several arguments that were required for full functionality.
I haven't tested cuML yet; I will do that later.
Depends on https://github.com/ray-project/ray/pull/23889
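A hypothetical usage sketch; the import path and exact arguments may differ across Ray versions:

```python
import ray.data
from ray.ml.train.integrations.sklearn import SklearnTrainer
from sklearn.ensemble import RandomForestClassifier

train_ds = ray.data.from_items([{"x": i, "y": i % 2} for i in range(100)])

trainer = SklearnTrainer(
    estimator=RandomForestClassifier(),
    label_column="y",
    datasets={"train": train_ds},
    # joblib provides the parallelism; GPU-enabled estimators such as cuML
    # should plug into the same interface.
    scaling_config={"trainer_resources": {"CPU": 4}},
)
result = trainer.fit()
```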
Co-authored-by: Kai Fricke <kai@anyscale.com>
The DDPPO LR scheduler test is broken because the `learner_info_dictionary` returned by the training iteration function does not consistently contain learner info for every training iteration, but the test expects that it does.
We'll need to fix the test and then re-merge.
Reverts #23906
The recursive grep in the banned-words check can get really messy when run locally, depending on each person's directory structure or where the format script is called from.
This PR separates the banned-words check into its own script so that it's not called by default in `./format.sh`. It also adds this to the documentation.
Adds a `ScalingConfigDataClass.validate_config` classmethod to provide a generic way of validating `ScalingConfig`s by permitting only certain keys.
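A rough sketch of the idea; field names and details are illustrative, not the exact Ray implementation:

```python
from dataclasses import dataclass, fields

@dataclass
class ScalingConfigDataClass:
    num_workers: int = 1
    use_gpu: bool = False

    @classmethod
    def validate_config(cls, config, allowed_keys):
        # Reject any field set to a non-default value that the caller
        # (e.g. a specific Trainer) does not allow.
        defaults = cls()
        for f in fields(cls):
            if f.name not in allowed_keys and getattr(config, f.name) != getattr(defaults, f.name):
                raise ValueError(f"{f.name} is not an allowed key here.")
```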
Co-authored-by: Kai Fricke <kai@anyscale.com>
In the [docs contributing page](https://docs.ray.io/en/master/ray-contribute/docs.html), the links to other docs pages point to master/ instead of latest/, which can be a bit confusing since this is not the live version of the docs that people are used to seeing.
I added a couple of additional clarifications and fixed a typo as well. I also mentioned the need for an image and linked to the image directory (though some subprojects have their own image directories, which I did not mention).
The `ray.timeline` command currently only shows "task" for task events, which isn't very useful if your program has multiple types of tasks. This PR appends `::<function name>` to the string, similar to what we do for process names, to distinguish between different tasks.
The test verifies that bytes 43~51 of the first line are "dashboard". But due to a recent code addition to `head.py`, the line number in the log prefix went from 2 digits to 3 digits.
Previously:
2022-04-18 23:23:56,946 INFO head.py:[less than 100] -- Dashboard head grpc address: 127.0.0.1:57208
Now:
2022-04-18 23:23:56,946 INFO head.py:101 -- Dashboard head grpc address: 127.0.0.1:57208
So we should widen the byte range.
Xgboost released a new version a few days ago. Due to caching of the Anyscale cluster env, this resulted in the server having an outdated xgboost version while the client had the most recent version, causing the test to fail.
Instead, we reinstall xgboost-ray and xgboost in the post-build commands so that these dependencies are not cached in the cluster env.
A legacy K8s test fails due to incorrect usage of `@ray.method`, which only started raising errors after the Ray 1.12.0 branch cut.
This PR removes the use of `@ray.method` in the test.
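For reference, `@ray.method` is only valid on actor methods and takes keyword arguments, e.g.:

```python
import ray

@ray.remote
class Counter:
    # Declares that this actor method returns two values.
    @ray.method(num_returns=2)
    def next_pair(self):
        return 1, 2
```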
Some context in #23271 and #23471.
In addition, I noticed some of the tests were flaky due to out-of-memory issues. For that reason, I've doubled the memory requests and limits in the legacy operator's example files.
I've also added CPU limits to an example file that was missing them; it makes the most sense for consistency with Ray's resource model to use CPU limits in K8s configs.
Finally, I added an extra note to the instructions for running the tests.
A user reported a crash in the GCS client where the client was unable to connect to the GCS server after retries, even though the GCS server had been running the whole time. I was not able to reproduce the exact issue, but I noticed that the socket-based health check logic sometimes behaves unexpectedly, e.g. it can be much slower than a gRPC health check (~40s vs. < 1s). The user's issue could be related to this slowness, so this PR updates the logic to use a gRPC health check.
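A minimal sketch of a gRPC-based reachability check (illustrative, not the exact Ray code):

```python
import grpc

def gcs_reachable(address: str, timeout_s: float = 1.0) -> bool:
    channel = grpc.insecure_channel(address)
    try:
        # Blocks until the channel is READY or the timeout expires.
        grpc.channel_ready_future(channel).result(timeout=timeout_s)
        return True
    except grpc.FutureTimeoutError:
        return False
    finally:
        channel.close()
```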
This change sets the default for `"memory"` to `0` in the `resource_dict` but keeps the default as `None` in `ray_actor_options`. It adds logic to both problematic lines to handle `None` in case of future settings updates, and it adds unit tests to prevent regressions.
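An illustrative sketch of the `None` handling; the variable names are hypothetical:

```python
ray_actor_options = {"num_cpus": 1, "memory": None}  # default stays None here
resource_dict = {
    # Treat a None memory setting as 0 when building the resource dict.
    k: (0 if k == "memory" and v is None else v)
    for k, v in ray_actor_options.items()
}
assert resource_dict["memory"] == 0
```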
* Provides a utility to ping a Ray cluster and verify that it is running the same Ray version. This is useful for checking whether a Ray cluster is available at a given address without connecting to it via the more heavyweight `ray.init()`. This utility is integrated with `ray memory` to provide a better error message when the Ray cluster is unavailable. There also seems to be user demand for exposing this as an API.
* Improves the error message when the address provided to Ray does not contain a port.