The memory limit determination test is not relevant on Windows; we already skip the CPU limit determination tests on Windows, so we skip the memory limit determination test as well.
People use models that accept dictionaries as input. For example, a model might take the following dictionary as input:
```python
{
    "value1": [[7], [8]],
    "value2": [[10], [15]],
}
```
To facilitate using Ray Datasets with such models and to provide feature parity with `to_torch`, we should support more `feature_columns` types.
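To make the target format concrete, here is a small illustration (pure pandas; the `feature_columns` mapping shown is hypothetical) of a tabular batch regrouped into the dict-of-arrays form above:

```python
import pandas as pd

# Regroup a tabular batch into a dict of per-key arrays, matching the model
# input shown above (the feature_columns mapping here is illustrative).
batch = pd.DataFrame({"a": [7, 8], "b": [10, 15]})
feature_columns = {"value1": ["a"], "value2": ["b"]}

dict_batch = {key: batch[cols].to_numpy() for key, cols in feature_columns.items()}
# {'value1': array([[7], [8]]), 'value2': array([[10], [15]])}
```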
This PR adds support for vectorized global and grouped aggregations, porting the built-in aggregations to vectorized block aggregations for tabular datasets.
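For context, a small sketch of the public aggregation APIs this touches (dataset-creation helper names may vary slightly by Ray version; the vectorization itself is internal to block aggregation):

```python
import ray

ds = ray.data.range_table(1000)  # tabular dataset with a single "value" column

# Global aggregation over the whole dataset.
print(ds.sum("value"))

# Grouped aggregation, computed per group key.
print(ds.groupby("value").count().take(3))
```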
During large-scale shuffle (number of partitions used >= 1000), the driver uses a significant amount of memory for storing ObjectRefs. On Intel macOS, each Reference struct currently takes up 592 bytes. We can reduce the per-Reference memory footprint:
- During shuffle, no ObjectRef borrowing or nesting happens, so fields related to borrowing or nesting should not take up memory in this case. This reduces sizeof(Reference) from 592 to 400 bytes.
- Fields in the Reference struct can be reordered to improve packing. This reduces sizeof(Reference) from 400 to 368 bytes.
On Intel macOS, running the shuffle benchmark with 1000 partitions and 10MB partition size, RSS at the end of shuffle drops from ~5GB to ~4.5GB.
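To illustrate the reordering point, here is a minimal, self-contained sketch of how field order affects struct size via alignment padding (using Python's ctypes; this is not the actual C++ Reference struct):

```python
import ctypes

class Unordered(ctypes.Structure):
    # 1-byte field, 8-byte field, 1-byte field: alignment padding inflates the size.
    _fields_ = [("a", ctypes.c_uint8), ("b", ctypes.c_uint64), ("c", ctypes.c_uint8)]

class Reordered(ctypes.Structure):
    # Largest field first, small fields packed together: less padding.
    _fields_ = [("b", ctypes.c_uint64), ("a", ctypes.c_uint8), ("c", ctypes.c_uint8)]

print(ctypes.sizeof(Unordered))  # 24 on typical 64-bit platforms
print(ctypes.sizeof(Reordered))  # 16
```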
Related issue number
#23604
This PR refactors `LazyBlockList` in service of out-of-band serialization (see [mono-PR](https://github.com/ray-project/ray/pull/22616)) and is a precursor to an execution plan refactor (PR #2) and adding the actual out-of-band serialization APIs (PR #3). The following is included in this refactor:
1. `ReadTask`s are now a first-class concept, replacing calls;
2. read stage progress tracking is consolidated into `LazyBlockList._get_blocks_with_metadata()`, and more of the read task complexity, e.g. the read remote function, is pushed into `LazyBlockList` to make `ray.data.read_datasource()` simpler;
3. we are smarter about how we progressively launch tasks and fetch and cache metadata: we fetch the metadata for read tasks in `.iter_blocks_with_metadata()` instead of relying on the pre-read task metadata (which is less accurate), and we also fix some small bugs in the lazy ramp-up around progressive metadata fetching.
(1) is the most important item for supporting out-of-band serialization and fundamentally changes the `LazyBlockList` data model. This is required since we need to be able to reference the underlying read tasks when rewriting read stages during optimization and when serializing the lineage of the Dataset. See the [mono-PR](https://github.com/ray-project/ray/pull/22616) for more context.
Other changes:
1. Changed stats actor to a global named actor singleton in order to obviate the need for serializing the actor handle with the Dataset stats; without this, we were encountering serialization failures.
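A minimal sketch of the named-singleton pattern (names are illustrative, not the actual implementation):

```python
import ray

@ray.remote(num_cpus=0)
class _StatsActor:
    def __init__(self):
        self._stats = {}

    def record(self, dataset_uuid, stats):
        self._stats[dataset_uuid] = stats

def get_or_create_stats_actor():
    # A globally named actor can be looked up by name from anywhere, so the
    # handle never needs to be serialized along with the Dataset stats.
    return _StatsActor.options(
        name="datasets_stats_actor",
        get_if_exists=True,
    ).remote()
```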
This PR adds a `Checkpoint.as_directory()` context manager that returns either the local path (if the checkpoint is already a directory) or the path of a temporary directory containing the checkpoint data, which is cleaned up after use. The path should be considered a read-only source for loading data from the checkpoint.
A common use case for processing checkpoint data is to convert it into a directory with `Checkpoint.to_directory()` and then do some read-only processing (e.g. restoring an ML model).
This process has two flaws: First, `to_directory()` creates a temporary directory that the user has to clean up explicitly after use. Second, if the checkpoint is already a directory checkpoint, it is copied, which is inefficient for large checkpoints (e.g. Hugging Face models) and even more prone to unwanted side effects if not cleaned up properly.
With this context manager, which effectively returns a directory to be used as a read-only data source, we can avoid manual cleanup and unnecessary data copies (and avoid internal inspection as e.g. in https://github.com/ray-project/ray/pull/23876/files#diff-47db2f054ca359879f77306e7b054dd8b780aab994961e3b4911330ae15eeae3R57-R60)
See also discussion in https://github.com/ray-project/ray/pull/23850/files#r850036905
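A short usage sketch (the import path may differ across Ray versions; `load_model` is a placeholder for arbitrary read-only processing):

```python
from ray.air.checkpoint import Checkpoint  # import path may vary by Ray version

checkpoint = Checkpoint.from_directory("/tmp/my_model")

with checkpoint.as_directory() as checkpoint_dir:
    # checkpoint_dir is the original directory (for directory checkpoints) or
    # a temporary materialization; treat it as read-only. Any temporary
    # directory is cleaned up automatically when the block exits.
    model = load_model(checkpoint_dir)  # placeholder for read-only processing
```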
Add BatchMapper preprocessor.
Update the semantics of `preprocessor.fit()` to allow multiple fits, following the scikit-learn example.
Introduce `FitStatus` to explicitly incorporate the Chain case.
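A minimal usage sketch of `BatchMapper` (the import path may vary by Ray version; the UDF is illustrative):

```python
import pandas as pd
import ray
from ray.data.preprocessors import BatchMapper  # import path may vary by version

# BatchMapper applies a user-defined function to each (pandas) batch.
preprocessor = BatchMapper(lambda df: df.assign(value=df["value"] * 2))

ds = ray.data.from_pandas(pd.DataFrame({"value": [1, 2, 3]}))
transformed = preprocessor.transform(ds)
```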
To avoid this error:
```
(raylet) Traceback (most recent call last):
(raylet)   File "/home/iamhatesz/.pyenv/versions/alan-brain-py3.9/lib/python3.9/site-packages/ray/dashboard/agent.py", line 407, in <module>
(raylet)     gcs_publisher = GcsPublisher(args.gcs_address)
(raylet) TypeError: __init__() takes 1 positional argument but 2 were given
```
Following #23862, there was an uncaught bug when comparing NaN-priority checkpoints. This is because `float("nan") <= float("nan")` is always False (as is any ordered comparison involving NaN under IEEE 754).
This PR fixes this bug and adds a new test to ensure correct behavior.
Changes the logic in `CheckpointManager` to consider checkpoints with a NaN metric value as the worst values, meaning they will be deleted first if `keep_checkpoints_num` is set.
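A sketch of NaN-aware ordering (illustrative, not the actual `CheckpointManager` code); since every ordered comparison involving NaN is False, NaN must be special-cased to sort as the worst value:

```python
import math

def priority_key(metric_value: float):
    # NaN sorts first (worst); real values sort after it, in ascending order.
    return (not math.isnan(metric_value), metric_value)

values = [1.5, float("nan"), 2.0, 0.5]
print(sorted(values, key=priority_key))  # [nan, 0.5, 1.5, 2.0]
```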
What: Adds a `prefer_smoke_tests` setting to the Buildkite settings. With this, users can specify that smoke tests should be kicked off, if available.
Why: The filtering interface of the release testing dialog is a bit complicated at the moment: in order to kick off smoke tests, users have to know the frequency with which they are configured to run. Instead, users should usually just filter the tests they want to run (using frequency ANY) and optionally specify to run smoke tests, if available.
Serve gets actors using the current Ray namespace. However, the Ray namespace and the controller namespace may not match when using the `_override_controller_namespace` argument in `serve.start()`. This change ensures that the `get_actor()` calls in `ActorReplicaWrapper` use the controller namespace. This also allows `num_replicas` to be scaled up and down properly when using `_override_controller_namespace`.
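The core of the change, sketched with illustrative names:

```python
import ray

replica_name = "ServeReplica:my_deployment"  # illustrative actor name
controller_namespace = "serve-override"      # illustrative namespace

# Look up the replica in the controller's namespace rather than the caller's
# current Ray namespace (which may differ under _override_controller_namespace).
actor = ray.get_actor(replica_name, namespace=controller_namespace)
```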
Clean up the ci/ directory. This means getting rid of the travis/ path completely and moving the files into sensible subdirectories.
Details:
- Moves everything under ci/travis into subdirectories, e.g. ci/build, ci/lint, etc.
- Minor adjustments to some scripts (variable renames)
- Removes the outdated (unused) asan tests
What: Quotes pip install packages in local environment setup for client runner.
Why: Strings like `pyarrow>=6.0.1,<7.0.0` currently don't work, as the `>` is interpreted by the shell as output redirection.
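A sketch of the fix in spirit, using the standard library's `shlex.quote` (the specifier string is illustrative):

```python
import shlex

packages = ["pyarrow>=6.0.1,<7.0.0"]
# Quote each specifier so the shell does not treat ">" or "<" as redirection.
cmd = "pip install " + " ".join(shlex.quote(p) for p in packages)
print(cmd)  # pip install 'pyarrow>=6.0.1,<7.0.0'
```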
What: This PR adds a generic `BatchPredictor` class that offers an interface to run batch inference on Ray datasets. It takes a `Predictor` class and a checkpoint as input, and provides a `predict(dataset)` method to run scalable scoring inference.
Why: Currently users have to implement scorers themselves. This is mostly boilerplate and prone to errors, so we should provide a simple solution instead.
Note that this predictor also implements the Predictor interface.
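A usage sketch (module paths and the concrete predictor class are illustrative and may differ by Ray version):

```python
import ray
from ray.ml.batch_predictor import BatchPredictor  # path may vary by version
from ray.ml.predictors.integrations.xgboost import XGBoostPredictor  # example

# checkpoint: assumed to come from a matching trainer run.
batch_predictor = BatchPredictor.from_checkpoint(checkpoint, XGBoostPredictor)

# Run scalable scoring over a Ray dataset.
dataset = ray.data.read_parquet("s3://bucket/features")  # illustrative input
predictions = batch_predictor.predict(dataset)
```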
Instead of relying on the node-ip custom resource for static task-to-node placement, this PR introduces an explicit NodeAffinitySchedulingStrategy with the following benefits:
1. Specify the node by ID instead of IP, since an IP may not be unique to each node.
2. Support a soft constraint so the task can tolerate node failures.
After this PR, the node-ip custom resource can be deprecated.
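Example usage of the new strategy (the task itself is illustrative):

```python
import ray
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy

ray.init()

@ray.remote
def work():
    return "ok"

# Pin the task to this node by its ID; soft=True lets Ray fall back to
# another node if the target node dies, instead of failing the task.
ref = work.options(
    scheduling_strategy=NodeAffinitySchedulingStrategy(
        node_id=ray.get_runtime_context().node_id.hex(),
        soft=True,
    )
).remote()
print(ray.get(ref))
```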
`ray.data.from_numpy()` currently expects to be given a list of ndarray futures, instead of handling concrete ndarrays, as expected (and as allowed by other `from_*` APIs, e.g. `from_pandas`). This PR renames the existing `from_numpy` API to `from_numpy_refs`, and exposes `ray.data.from_numpy`, which takes concrete ndarrays (not object references).
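A quick sketch of the resulting split:

```python
import numpy as np
import ray

# After this PR: from_numpy takes concrete ndarrays directly...
ds = ray.data.from_numpy(np.arange(8).reshape(4, 2))

# ...while the old future-based behavior lives on under from_numpy_refs.
ref = ray.put(np.arange(8).reshape(4, 2))
ds_refs = ray.data.from_numpy_refs([ref])
```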
In some cases, the UDF for `map_groups()` may return values of different types, which should be disallowed.
This PR adds a unit test to make sure we raise an error if such a case happens.
Update the torch policy to find `seq_lens` using `state_batches` instead of `input_dict`. This helps handle complex inputs to the model when the built-in preprocessing API is disabled.
What: Only open (create) CSV files when actually reporting results.
Why: When trials crash before their first report (e.g. on init), they will have created an empty CSV file. When results are subsequently written, the CSV header is then missing.
Copied from #23784.
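A sketch of the lazy-open pattern (illustrative, not the actual CSV logger code):

```python
import csv

class LazyCSVLogger:
    """Opens the CSV file only when the first result arrives."""

    def __init__(self, path):
        self._path = path
        self._file = None
        self._writer = None

    def on_result(self, result: dict):
        if self._file is None:
            # No file (and no header) exists until there is something to write.
            self._file = open(self._path, "w", newline="")
            self._writer = csv.DictWriter(self._file, fieldnames=list(result))
            self._writer.writeheader()
        self._writer.writerow(result)
```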
Adding a large-scale nightly test for Datasets random_shuffle and sort. The test script generates random blocks and reports total run time and peak driver memory.
Modified to fix lint.
This is a rebased version of #11592. Since the task spec info is only needed when the GCS creates or starts an actor, we can remove it from the actor table and save the serialization time and memory/network cost incurred when GCS clients get actor infos from the GCS.
Since the internal repository diverges considerably from the community one, this PR just adds some manual checks with a simple cherry-pick. Comments are welcome; in the meantime, I'll see whether any test cases fail or any points were missed.
What: Skips left-over checkpoint_tmp* directories when loading experiment analysis. Also loads iteration number from metadata file rather than parsing the checkpoint directory name.
Why: Sometimes temporary checkpoint directories are not deleted correctly when restoring (e.g. when interrupted). In these cases, they shouldn't be included in experiment analysis. Parsing their iteration number also failed; it should generally be read from the metadata file, not inferred from the directory name.
What: This introduces a general utility to synchronize directories between two nodes, derived from the RemoteTaskClient. This implementation uses chunked transfers for more efficient communication.
Why: Transferring files over 2GB in size leads to superlinear time complexity in some setups (e.g. local MacBooks). This could be due to memory limits, swapping, or gRPC limits, and is being explored in a separate thread. To overcome this limitation, we use chunked data transfers, which show quasi-linear scalability for larger files.
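A minimal sketch of the chunking side (illustrative; the actual utility streams these chunks between nodes):

```python
CHUNK_SIZE = 32 * 1024 * 1024  # 32 MiB, comfortably below multi-GB payload sizes

def iter_file_chunks(path: str):
    # Yield a file as fixed-size chunks so no single transfer exceeds the
    # chunk size, avoiding the superlinear behavior seen with >2GB payloads.
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            yield chunk
```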