Several changes to make spread scheduling work better under load:
* When there are no available nodes, spread among feasible nodes.
* If grant_or_reject is true, don't spill back if the selected node is not available.
* Don't spill due to waiting for dependencies for spread tasks.
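For context, a minimal sketch of how a spread task is declared (standard Ray API; the changes above are internal to how the scheduler places such tasks):

```python
import ray

ray.init()

# Tasks submitted with the SPREAD strategy are the "spread tasks" referred to
# above; the scheduler tries to spread them across feasible nodes rather than
# pack them onto one node.
@ray.remote(scheduling_strategy="SPREAD")
def spread_task(i):
    return i

results = ray.get([spread_task.remote(i) for i in range(10)])
```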
`gcsfs` complains about an invalid `create_parents` argument when using Google Cloud Storage with cloud checkpoints. We should therefore use an alternative fsspec handler that omits this argument for `gs`.
The root issue will be fixed here: https://github.com/fsspec/gcsfs/pull/471
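A minimal sketch of the workaround idea, assuming a thin wrapper around `gcsfs` (the class name and the exact handler wiring are illustrative, not the handler added by this change):

```python
import gcsfs


class PatchedGCSFileSystem(gcsfs.GCSFileSystem):
    # Hypothetical wrapper: keep the unsupported keyword away from gcsfs.
    def mkdir(self, path, create_parents=True, **kwargs):
        # gcsfs rejects `create_parents`; emulate it via makedirs instead.
        self.makedirs(path, exist_ok=True)
```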
In a1e06f64ae, a memory bound was added for each subscribed entity in the publisher. This adds two extra `std::deque`s per subscribed entity, which turns out to cost a lot more memory when there are a large number of `ObjectRef`s: https://github.com/ray-project/ray/pull/23853#issuecomment-1098382286
This PR avoids the extra memory usage for entities in channels unlikely to grow too large, i.e. all channels except those for logs and error info. Subscribed entity memory usage no longer shows up in the memory profile when there are 1M object refs. Raw data: [profile006.pb.gz](https://github.com/ray-project/ray/files/8508547/profile006.pb.gz)
Implements `SklearnTrainer` and `SklearnPredictor`. Full parallelism with joblib + support for GPU-enabled estimators like cuML.
The interface has been modified slightly by the addition of several arguments, which were required for full functionality.
I haven't tested cuML yet; I will do that later.
Depends on https://github.com/ray-project/ray/pull/23889
Co-authored-by: Kai Fricke <kai@anyscale.com>
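A minimal usage sketch of the new trainer (import path and arguments are assumed from the AIR-style trainer interface; the joblib-parallelism and GPU/cuML arguments are omitted):

```python
from sklearn.ensemble import RandomForestClassifier

import ray
from ray.train.sklearn import SklearnTrainer  # assumed import path

train_ds = ray.data.from_items([{"x": i, "y": i % 2} for i in range(100)])

trainer = SklearnTrainer(
    estimator=RandomForestClassifier(),
    label_column="y",
    datasets={"train": train_ds},
)
result = trainer.fit()
```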
The DDPPO LR scheduler test is broken because the `learner_info_dictionary` returned by the training iteration function does not consistently contain a learner info for every training iteration, but the test expects that it does.
We'll need to fix the test and then re-merge.
Reverts #23906
The recursive grep in the banned words check can get really messy when run locally, depending on each person's directory structure or where the format script is called from.
This moves the banned words check into a separate script so that it's not called by default in `./format.sh`. Also adds this to the documentation.
Adds a `ScalingConfigDataClass.validate_config` classmethod that provides a generic way of validating ScalingConfigs by allowing only certain keys.
Co-authored-by: Kai Fricke <kai@anyscale.com>
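A hypothetical sketch of the validation idea (the fields shown are a subset and the logic is illustrative, not the exact implementation): reject any ScalingConfig key that is set but not in the allowed set.

```python
from dataclasses import dataclass, fields
from typing import List, Optional


@dataclass
class ScalingConfigDataClass:
    num_workers: Optional[int] = None
    use_gpu: bool = False
    trainer_resources: Optional[dict] = None

    @classmethod
    def validate_config(
        cls, config: "ScalingConfigDataClass", allowed_keys: List[str]
    ) -> None:
        defaults = cls()
        for f in fields(cls):
            value = getattr(config, f.name)
            # Any field changed from its default but not explicitly allowed is rejected.
            if value != getattr(defaults, f.name) and f.name not in allowed_keys:
                raise ValueError(f"ScalingConfig key '{f.name}' is not allowed here.")
```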
In the [docs contributing page](https://docs.ray.io/en/master/ray-contribute/docs.html), the links to other docs pages point to master/ instead of latest/, which can be a bit confusing since this is not the live version of the docs that people will be used to seeing.
I added a couple of additional clarifications and fixed a typo as well. I also mentioned the need for an image and linked to the image directory (though some subprojects have their own image directories, which I did not mention).
The `ray.timeline` command currently only shows "task" for task events, which isn't very useful if your program has multiple types of tasks. This PR appends "::<function name>" to the event string, similar to what we do for process names, to distinguish between different tasks.
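A small sketch of what this enables when inspecting a trace (standard `ray.timeline` API; the exact event-name format is as I read the description above):

```python
import ray

ray.init()

@ray.remote
def preprocess():
    return 1

@ray.remote
def train():
    return 2

ray.get([preprocess.remote(), train.remote()])

# Dump Chrome-tracing events; with this change, task events carry the function
# name (e.g. "...::preprocess" vs "...::train") instead of a generic "task".
ray.timeline(filename="timeline.json")
```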
The test verifies that bytes 43~51 of the first line are "dashboard".
But due to a recent code addition to head.py, the line number in the log prefix went from 2 digits to 3 digits.
Previously:
2022-04-18 23:23:56,946 INFO head.py:[less than 100] -- Dashboard head grpc address: 127.0.0.1:57208
Now:
2022-04-18 23:23:56,946 INFO head.py:101 -- Dashboard head grpc address: 127.0.0.1:57208
So we should widen the byte range that the test checks.
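A hypothetical sketch of the adjusted check (file name and exact indices are illustrative): the slice just has to stay wide enough to contain "dashboard" for both 2- and 3-digit line numbers.

```python
# Read the first line of the dashboard log and check the widened byte range.
with open("dashboard.log") as f:
    first_line = f.readline()

# Previously something like first_line[43:51]; a wider slice tolerates the
# extra digit in the "head.py:<line>" prefix.
assert "dashboard" in first_line[40:60].lower()
```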
Xgboost released a new version a few days ago. Due to caching of the Anyscale cluster env, this resulted in the server having an outdated xgboost version while the client had the most recent version, causing the test to fail.
Instead, we reinstall xgboost-ray and xgboost in the post-build commands so that these dependencies are not cached in the cluster env.
A legacy K8s test fails due to incorrect usage of @ray.method, which only started raising errors after the Ray 1.12.0 branch cut.
This PR removes the use of @ray.method in the test.
Some context in #23271 and #23471.
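For reference, a minimal sketch of valid `@ray.method` usage (the failing test used the decorator in a way that newer Ray versions reject; the exact misuse is covered in the linked issues):

```python
import ray

@ray.remote
class Counter:
    def __init__(self):
        self.value = 0

    # @ray.method must be given actor-method options such as num_returns.
    @ray.method(num_returns=1)
    def increment(self):
        self.value += 1
        return self.value

counter = Counter.remote()
assert ray.get(counter.increment.remote()) == 1
```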
In addition, I noticed some of the tests were flaky due to out-of-memory issues. For that reason, I've doubled the memory requests and limits in the legacy operator's example files.
I've also added CPU limits in an example file that was missing them -- it makes the most sense for consistency with Ray's resource model to use CPU limits in K8s configs.
Finally, I added an extra note to the instructions for running the tests.
A user has reported a crash in the GCS client where the client was unable to connect to the GCS server after retries, even though the GCS server had been running the whole time. I was not able to reproduce the exact issue, but I noticed that the socket-based health check logic sometimes behaves unexpectedly, e.g. it can be much slower than a gRPC health check (~40s vs < 1s). The user's issue could be related to this slowness, so this PR updates the logic to use a gRPC health check.
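For illustration, a generic sketch of a gRPC-based health check using the standard gRPC health checking service (Ray's GCS client may use a different RPC; this only contrasts the approach with raw socket probing and requires `grpcio-health-checking`):

```python
import grpc
from grpc_health.v1 import health_pb2, health_pb2_grpc


def is_healthy(address: str, timeout_s: float = 1.0) -> bool:
    # Open a channel to the server and issue a standard health-check RPC.
    channel = grpc.insecure_channel(address)
    stub = health_pb2_grpc.HealthStub(channel)
    try:
        resp = stub.Check(
            health_pb2.HealthCheckRequest(service=""), timeout=timeout_s
        )
        return resp.status == health_pb2.HealthCheckResponse.SERVING
    except grpc.RpcError:
        return False
```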
This change sets `"memory"`'s default to `0` in the `resource_dict` but keeps the default as `None` in `ray_actor_options`. It adds logic to both problematic lines to handle `None` in case of future settings updates. It also adds unit tests to prevent regressions.
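An illustrative sketch of the `None` handling (variable names are assumptions about the internal code, not the actual lines changed):

```python
# User-facing options keep memory=None as the default...
ray_actor_options = {"num_cpus": 1, "memory": None}

# ...but the resource accounting dict should never see None; default it to 0.
resource_dict = {
    "CPU": ray_actor_options.get("num_cpus", 1),
    "memory": ray_actor_options.get("memory") or 0,
}
assert resource_dict["memory"] == 0
```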
* Provide a utility to ping a Ray cluster and verify it has the same Ray version. This is useful to check whether a Ray cluster is available at a given address, without connecting to the cluster with the more heavyweight `ray.init()`. This utility is integrated with `ray memory` to provide a better error message when the Ray cluster is unavailable. There also seems to be user demand for exposing this as an API.
* Improve the error message when the address provided to Ray does not contain a port.
To remove the effect of symbol conflicts when the core worker is linked against different Ray versions, this PR extracts a unified core worker API (not all of it) and collects it into an internal library, so native developers can use it anywhere, regardless of how the core worker implementation changes.
- Logically these two RPCs are about notifying the owner of object location changes, so we should just have one RPC for that purpose. This prevents out-of-order updates seen by the owner (i.e. receiving "object removed from object store" before the spill update). Also, by using UpdateObjectLocationBatch, we get batched updates for free.
- Maintain a FIFO order for object location updates so we won't have starvation.
The memory limit determination test is not relevant on Windows; we already skip the CPU limit determination tests on Windows, so we skip the memory limit determination test as well.
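The skip follows the same pattern already used for the CPU limit tests; a sketch (the test name here is illustrative):

```python
import sys

import pytest


@pytest.mark.skipif(sys.platform == "win32", reason="Not relevant on Windows.")
def test_memory_limit_determination():
    ...
```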
People use models that accept dictionaries as input. For example, a model might take the following dictionary as input:
{
"value1": [[7], [8]],
"value2": [[10], [15]],
}
To facilitate using Ray Datasets with such models and to provide feature parity with `to_torch`, we should support more `feature_columns` types.
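A hypothetical sketch of the desired usage (the dict-shaped `feature_columns` form is assumed from `to_torch`'s existing multi-input support; exact argument handling may differ):

```python
import ray

ds = ray.data.from_items(
    [{"a": 7, "b": 10, "label": 0}, {"a": 8, "b": 15, "label": 1}]
)

# Each batch yields a dict of feature tensors keyed by "value1"/"value2",
# matching the dictionary-shaped model input shown above.
torch_ds = ds.to_torch(
    label_column="label",
    feature_columns={"value1": ["a"], "value2": ["b"]},
    batch_size=2,
)
for features, label in torch_ds:
    print(features["value1"], features["value2"], label)
```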