Commit graph

6616 commits

Antoni Baum
825a8b92c5
[AIR] Add Categorizer preprocessor (#24180)
Adds a Categorizer preprocessor to automatically set the Categorical dtype on a dataset. This is useful for e.g. LightGBM, which has built-in support for features with that dtype.

Depends on #24144.
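What a Categorizer-style preprocessor effectively does can be sketched in plain pandas (a hedged illustration, not Ray AIR's implementation; the column names are hypothetical):

```python
import pandas as pd

# Cast a string column to pandas' Categorical dtype so downstream
# libraries such as LightGBM can treat it as a categorical feature.
df = pd.DataFrame({"color": ["red", "green", "red"], "size": [1, 2, 3]})
df["color"] = df["color"].astype("category")
```

LightGBM picks up `category`-dtype columns automatically, with no manual feature list needed.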
2022-04-29 09:37:18 -07:00
Guillaume Desforges
02ca3c34d2
[Datasets] Fix #24296 by limiting parallelism in FileBasedDatasource (#24298)
This prevents a MemoryError when a high parallelism value is provided, by ensuring exactly one ReadTask is created per file.
2022-04-29 09:30:12 -07:00
Kai Fricke
dd87e61808
[ci/release] Fix module import errors in release tests (#24334)
After https://github.com/ray-project/ray/pull/24066, some release tests are running into:

```
ModuleNotFoundError: No module named 'ray.train.impl'
```

This PR simply adds a `__init__.py` file to resolve this.

We also add a 5 second delay for client runners in release tests to give clusters a bit of slack to come up (and avoid Ray client connection errors).
2022-04-29 17:03:17 +01:00
shrekris-anyscale
caba3d44fd
[serve] Add test_update_num_replicas_anonymous_namespace (#24313)
#24311 added the `test_update_num_replicas_anonymous_namespace` unit test to check for replica leaks in anonymous namespaces. This change adds this test to the master branch.
2022-04-29 10:14:17 -05:00
Xuehai Pan
9c76e21a5e
[RLlib] Ensure MultiCallbacks always implements all callback methods (#24254) 2022-04-29 10:30:24 +02:00
ZhuSenlin
2c0f9d7e8f
improve redis connection backoff (#24168) 2022-04-29 14:36:13 +08:00
Jiajun Yao
8fdde12e9e
Delay 1 minute for the first usage stats report (#24291)
Delay the first report for 1 minute so the system is likely set up and we can get the information to report.
2022-04-28 22:53:33 -07:00
Clark Zinzow
14f2729b3a
[Datasets] Rename _experimental_lazy() --> experimental_lazy(). (#24321)
The `experimental_` prefix should suffice; we shouldn't also need to make it a private method.
2022-04-28 19:40:03 -07:00
Clark Zinzow
0825078a20
[Datasets] Revert Spark-on-Ray test revert (#24317)
A bad merge/stale CI resulted in a fixture renaming not propagating to all uses. This PR reverts the recent revert, and fixes the test.
2022-04-28 18:22:15 -07:00
Clark Zinzow
15d66a8dd7
Revert "[Datasets] Re-enable raydp test & Support to_spark while using ray client (#22852)" (#24316)
This reverts commit 024eafb5f4.
2022-04-28 16:40:44 -07:00
Siyuan (Ryans) Zhuang
b0f00a1872
[Core] Ensure "get_if_exists" takes effect in the decorator. (#24287) 2022-04-28 16:38:18 -07:00
Kai Fricke
561e169625
[air/tune] Remove postprocess_checkpoint (#24297)
The postprocess checkpoint method was introduced to be able to add data to function runner checkpoint directories before they are uploaded to external (cloud) storage. Instead, we should just use the existing separation of `save_checkpoint()` and `save()`.
2022-04-28 15:33:48 -07:00
Stephanie Wang
a5a11f6d11
[Datasets] Implement push-based shuffle (#24281)
The simple shuffle currently implemented in Datasets does not reliably scale past 1000+ partitions due to metadata and I/O overhead.

This PR adds an experimental shuffle implementation for a "push-based shuffle", as described in this paper draft. This algorithm should see better performance at larger data scales. The algorithm works by merging intermediate map outputs at the reducer side while other map tasks are executing. Then, a final reduce task merges these merged outputs.

Currently, the PR exposes this option through the DatasetContext. It can also be set through a hidden OS environment variable (RAY_DATASET_PUSH_BASED_SHUFFLE). Once we have more comprehensive benchmarks, we can better document this option and allow the algorithm to be chosen at run time.

Redo for #23758 to fix CI.
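As described above, the experimental algorithm can be toggled via the hidden environment variable before starting the workload (a config sketch; only the variable name comes from this commit):

```shell
# Opt in to the experimental push-based shuffle implementation.
export RAY_DATASET_PUSH_BASED_SHUFFLE=1
```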
2022-04-28 14:58:23 -07:00
Jiajun Yao
bdb3b27d45
Fix autoscaler for node affinity scheduling strategy (#24250)
For tasks with node affinity scheduling strategy, the resource demands shouldn't create new nodes. This PR achieves this by not reporting demand to autoscaler. In the future, we will explore sending scheduling strategy information to autoscaler.
2022-04-28 14:57:17 -07:00
Linsong Chu
5c06e3f149
[DAG] add basic plotting on Ray DAGs (#24223)
Adds a basic plotting feature for Ray DAGs.

`ray.experimental.dag.plot(dag: DAGNode, to_file=None)`

### Behavior
1. dump the dag plot (Dot) to file.
2. also render the image whenever possible. E.g. if running in Jupyter notebook, the image will not only be saved, but also rendered in the notebook.
3. when to_file is not set (i.e. None), it will be saved to a tempfile for rendering purpose only.  This is common when users plot DAGs in notebook env to explore the DAG structure without wanting to save it to a file.
2022-04-28 13:56:25 -07:00
Antoni Baum
e62d3fac74
[AIR] Refactor _get_unique_value_indices (#24144)
Refactors _get_unique_value_indices (used in Encoder preprocessors) for much improved performance with multiple columns. Also uses the same, more robust intermediary dataset format in _get_most_frequent_values (Imputers).

The existing unit tests pass, and no functionality has been changed.
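The kind of value-to-index mapping such a helper precomputes can be sketched in pure Python (a hypothetical minimal version, not Ray AIR's actual implementation):

```python
def get_unique_value_indices(rows, columns):
    """Map each unique value in each requested column to a stable integer
    index -- the lookup table an encoder preprocessor needs.
    Pure-Python sketch; row format and sorting are assumptions."""
    return {
        col: {val: idx for idx, val in enumerate(sorted({row[col] for row in rows}))}
        for col in columns
    }
```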
2022-04-28 13:39:04 -07:00
Zhi Lin
024eafb5f4
[Datasets] Re-enable raydp test & Support to_spark while using ray client (#22852)
RayDP has updated their code and tests can be re-enabled now.

In addition, we want to support Ray client in RayDP dataset operations. Right now, if users call dataset.to_spark(spark) while using Ray client, it immediately fails because the local Ray worker is not connected. By wrapping the call in a function decorated with @client_mode_wrap, it works well whether or not Ray client is used.
2022-04-28 12:20:58 -07:00
Balaji Veeramani
2fdea6e24f
[Datasets] Add SimpleTorchDatasource (#23926)
It's difficult to use torchvision datasets with Ray ML. This PR makes it easier to use Torch datasets with Ray Data.
2022-04-28 11:56:45 -07:00
Clark Zinzow
2f4cb1256f
[Datasets] Clean up lineage serialization support for fan-in operations. (#24190)
Lineage-based serialization isn't supported for fan-in operations such as unions and zips. This PR adds documentation indicating as much, and ensures that a good error message is raised.
2022-04-28 09:45:37 -07:00
Clark Zinzow
d7c4a2477b
[Datasets] Pipeline task dependency prefetching with actor compute via customizable max tasks in flight per worker. (#24194)
When using the actor compute model for batch mapping (e.g. in batch inference), map tasks are often blocked waiting for their dependencies to be fetched since we submit one actor task at a time. This commit changes the default behavior of the actor compute model to have up to two actor tasks in flight for each actor in order to better pipeline task dependency fetching with the actual compute.

This "max tasks in flight per actor worker" is also made configurable, in case a particular use case warrants more aggressive pipelining (e.g. big blocks and/or fast maps) or more conservative pipelining (e.g. small data or slow maps).
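The pipelining idea behind "max tasks in flight per worker" can be sketched generically with threads (a standalone illustration of the overlap between dependency fetching and compute, not Ray's actor compute model):

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def map_with_max_in_flight(fn, items, max_in_flight=2):
    """Keep up to `max_in_flight` tasks submitted to a single worker so the
    next task's inputs can be prepared while the current one computes."""
    results = []
    pending = set()
    with ThreadPoolExecutor(max_workers=1) as pool:  # one "actor worker"
        for item in items:
            pending.add(pool.submit(fn, item))
            if len(pending) >= max_in_flight:
                # Block only until *some* task finishes, keeping the rest queued.
                done, pending = wait(pending, return_when=FIRST_COMPLETED)
                results.extend(f.result() for f in done)
        done, _ = wait(pending)
        results.extend(f.result() for f in done)
    return results
```

Results may arrive out of order, mirroring why more aggressive pipelining suits big blocks or fast maps.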
2022-04-28 09:42:30 -07:00
xwjiang2010
576addf9ca
[tune] hyperopt searcher to support tune.choice([[1,2],[3,4]]). (#24181)
Have the Hyperopt searcher support tune.choice([[1,2],[3,4]])-type search spaces.
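The intended semantics — each inner list is one categorical option, sampled whole — can be sketched in plain Python (not Tune's implementation):

```python
import random

# tune.choice([[1, 2], [3, 4]]) should pick one *entire inner list* per
# trial, never a mix of elements from different lists.
search_space = [[1, 2], [3, 4]]
rng = random.Random(0)
sample = rng.choice(search_space)  # one whole list, e.g. [3, 4]
```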
2022-04-28 09:37:13 +01:00
Amog Kamsetty
629424f489
[AIR/Train] Make Dataset ingest configurable (#24066)
Refactors Dataset splitting to make it less hacky and address the TODO. Also makes Dataset ingest in general configurable for Ray Train. This is an internal only change for now, but will set the stage for the proposed ingest API

Customizable ingest for GBDT Trainers is out of scope for this PR.
2022-04-27 21:41:44 -07:00
Jiajun Yao
abba263f4e
Revert "[Datasets] Implement push-based shuffle (#23758)" (#24279)
This reverts commit c1054a0baa.
2022-04-27 18:36:59 -07:00
Dmitri Gekhtman
d68c1ecaf9
[kuberay] Test Ray client and update autoscaler image (#24195)
This PR adds KubeRay e2e testing for Ray client and updates the suggested autoscaler image to one running the merge commit of PR #23883 .
2022-04-27 18:02:12 -07:00
Archit Kulkarni
cc864401fb
[Dashboard] Add environment variable flag to skip dashboard log processing (#24263) 2022-04-27 15:33:08 -07:00
Simon Mo
ee528957c7
[Serve][Doc] Update docs about input schema, and json_request adapter (#24191) 2022-04-27 14:51:07 -07:00
Simon Mo
b4d9fcdbf8
[Serve] Fix spurious __call__ invocation in Deployment DAG's exec_impl (#24199) 2022-04-27 13:59:31 -07:00
Clark Zinzow
5dbcedbbf4
[Datasets] Expose DatasetPipeline in ray.data module (#24261)
Referencing the DatasetPipeline class currently requires ray.data.dataset_pipeline.DatasetPipeline; we should expose it directly in the ray.data module, as we do for Dataset.
2022-04-27 13:06:57 -07:00
Stephanie Wang
c1054a0baa
[Datasets] Implement push-based shuffle (#23758)
The simple shuffle currently implemented in Datasets does not reliably scale past 1000+ partitions due to metadata and I/O overhead.

This PR adds an experimental shuffle implementation for a "push-based shuffle", as described in this paper draft. This algorithm should see better performance at larger data scales. The algorithm works by merging intermediate map outputs at the reducer side while other map tasks are executing. Then, a final reduce task merges these merged outputs.

Currently, the PR exposes this option through the DatasetContext. It can also be set through a hidden OS environment variable (RAY_DATASET_PUSH_BASED_SHUFFLE). Once we have more comprehensive benchmarks, we can better document this option and allow the algorithm to be chosen at run time.

Related issue number

Closes #23758.
2022-04-27 11:59:41 -07:00
Siyuan (Ryans) Zhuang
309fef68c5
[core] Fix internal storage S3 bugs (#24167)
* fix storage

* fix windows
2022-04-27 09:57:14 -07:00
Siyuan (Ryans) Zhuang
895fdb5a4f
[workflow] Enable setting workflow options on Ray DAGs (#24210)
* workflow options
2022-04-27 09:51:45 -07:00
Kai Fricke
4a30ae0ab6
[tune] De-clutter log outputs in trial runner (#24257)
There are currently some debug logs left logging to INFO scope. This PR demotes them to DEBUG and cleans up the messages.
2022-04-27 17:13:09 +01:00
Simon Tindemans
77d79f9e32
IMapIterator fix when using iterator inputs (#24117)
In the current code base, `multiprocessing.Pool.imap_unordered` fails when it is called with an iterator (for which the length is not known on the first call). For example, the following code would fail:
```
import ray.util.multiprocessing as raymp

# test function
def func(input):
    print('run func [{}]'.format(input))
    return input

with raymp.Pool() as pool:
    
    # this fails with a TypeError (could not serialize)
    print('use an iterator')
    for x in pool.imap_unordered(func, iter(range(5))):
        print('Finished [{}]'.format(x))
```

## Summary of changes

* I made changes to the `ResultThread` class that enable it to work with argument `total_object_refs=0`. This will let it run until a call to `stop()` is received.
* I have adapted the `IMapIterator` class to better check input arguments and distinguish between iterables and iterators.
* The super classes `OrderedIMapIterator` and `UnorderedIMapIterator` have been updated to stop appropriately when iterators are used, and explicitly stop the `_result_thread`.

Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
2022-04-27 09:31:15 -05:00
Kai Fricke
b138bab85a
[air/rllib] Add RLPredictor class (#24172)
This adds the RLPredictor implementation as the counterpart to the RLTrainer. An evaluation using the predictor was added to the RL trainer end-to-end example.
2022-04-27 12:03:12 +01:00
Kai Fricke
772b9abbcb
[tune] Enable AsyncHyperband to continue training for last trials after max_t (#24222)
Adds a new flag `stop_last_trials` to AsyncHyperband that allows the last trials of each bracket to continue training after `max_t`. This feature existed for synchronous hyperband before, and the extension had been requested in #14235.
2022-04-27 11:45:23 +01:00
Kai Fricke
0d123ba90d
[ci/hotfix] Fix race condition in pytest reporting (#24253)
The AWS test seems to try to create the directory multiple times.
2022-04-27 09:06:55 +01:00
Sihan Wang
c8bf650826
[Serve] [CI] Update the test_pipeline_ingress_deployment size to small (#24236) 2022-04-26 16:56:07 -07:00
Yi Cheng
f112b521b2
[core] move function and actor importer away from pubsub (#24132)
This PR moves function import to a lazy approach. Several benefits of this:
- worker startup is faster since it doesn't need to go through all exported functions
- GCS pressure is smaller since 1) we don't need to export keys and 2) all loads are done only when needed
- gets rid of the function table channel
2022-04-26 15:07:29 -07:00
Kai Fricke
8a46001b14
[tune] Make Timeout stopper work after restoring in the future (#24217)
Previously, the `TimeoutStopper` did not work after recovery from checkpoints in the future, as the start time + budget was exceeded. Instead, we're now tracking a timeout budget that gets decreased and properly saved in checkpoints, so that recovery in the future works.
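The budget-tracking idea — persist the remaining budget rather than an absolute deadline — can be sketched as follows (a hypothetical minimal version, not Tune's TimeoutStopper):

```python
import time

class TimeoutBudget:
    """Track a remaining time budget that survives checkpoint/restore."""

    def __init__(self, budget_s: float):
        self._budget_s = budget_s
        self._start = time.monotonic()

    def save(self) -> dict:
        # Persist the *remaining* budget, not the wall-clock deadline,
        # so restoring days later still grants the leftover time.
        elapsed = time.monotonic() - self._start
        return {"budget_s": max(0.0, self._budget_s - elapsed)}

    @classmethod
    def restore(cls, state: dict) -> "TimeoutBudget":
        return cls(state["budget_s"])

    def expired(self) -> bool:
        return time.monotonic() - self._start >= self._budget_s
```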
2022-04-26 22:18:50 +01:00
Kai Fricke
fc1cd89020
[ci] Add short failing test summary for pytests (#24104)
It is sometimes hard to find all failing tests in Buildkite output logs - even filtering for "FAILED" is cumbersome as the output can be overloaded. This PR adds a small utility to add a short summary log in a separate output section at the end of the Buildkite job.

The only shared directory between the Buildkite host machine and the test docker container is `/tmp/artifacts:/artifact-mount`. Thus, we write the summary file to this directory, and delete it before actually uploading it as an artifact in the `post-commands` hook.
2022-04-26 22:18:07 +01:00
Amog Kamsetty
c3cea7ad5d
[Train/Tune] Warn pending deprecation for ray.train.Trainer and ray.tune DistributedTrainableCreators (#24056)
ray.train.Trainer and ray.tune.integration.*.DistributedTrainableCreator will be deprecated in Ray 2.0 in favor of Ray AIR. In Ray 1.13, we should warn about this pending deprecation.

First step towards #23014
2022-04-26 13:38:34 -07:00
Yi Cheng
d6b0b9a209
Revert "Revert "[grpc] Upgrade grpc to 1.45.2"" (#24201)
* Revert "Revert "[grpc] Upgrade grpc to 1.45.2 (#24064)" (#24145)"

This reverts commit f1a1f97992.
2022-04-26 10:49:54 -07:00
Clark Zinzow
07112b4146
[Datasets] Add basic stats instrumentation of split_at_indices(). (#24179)
This PR adds basic stats instrumentation of split_at_indices(), the first stage in fully instrumenting split operations. See https://github.com/ray-project/ray/issues/24178 for future steps.
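The semantics of the operation being instrumented can be sketched on a plain list (a pure-Python illustration, not the instrumented Datasets code):

```python
def split_at_indices(seq, indices):
    """Split `seq` into len(indices) + 1 pieces at the given sorted
    indices, mirroring what the Dataset operation does with blocks."""
    pieces, prev = [], 0
    for idx in list(indices) + [len(seq)]:
        pieces.append(seq[prev:idx])
        prev = idx
    return pieces
```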
2022-04-26 09:49:48 -07:00
Kai Fricke
4b6e79d713
[ci/serve] Fix Serve minimal install silent failure (#24183)
Previously `sys.exit()` wasn't called, so bazel wouldn't fail because of the faulty match pattern.

Uncovered here: https://buildkite.com/ray-project/ray-builders-pr/builds/30291#_
2022-04-26 13:16:48 +01:00
Kai Fricke
c0ec20dc3a
[tune] Next deprecation cycle (#24076)
Rolling out next deprecation cycle:

- DeprecationWarnings that were previously emitted via `warnings.warn` or `logger.warn` are now raised as errors
- Raised Deprecation warnings are now removed
- Notably, this involves deprecating the TrialCheckpoint functionality and associated cloud tests
- Added annotations to deprecation warning for when to fully remove
2022-04-26 09:30:15 +01:00
Amog Kamsetty
ae9c68e75f
[Train] Fully deprecate Ray SGD v1 (#24038)
Ray SGD v1 has been denoted as a deprecated API for a while. This PR fully deprecates Ray SGD v1. An error will be raised if the ray.util.sgd package is imported.

Closes #16435
2022-04-25 16:12:57 -07:00
Jiajun Yao
3fb63847e2
Show usage stats prompt (#23822)
Show usage stats prompt when it's enabled.

The current UX is:

* The usage stats enabled or disabled message is shown every time in both terminal and dashboard.
* If users don't explicitly enable or disable usage stats, the first time they start a ray cluster interactively, they will be asked to confirm and will enable if no user action within 10s. If it's non-interactive, collection is enabled by default without confirmation.
* ray.init() doesn't collect usage stats
* Usage stats can be disabled via three approaches: 1. RAY_USAGE_STATS_ENABLED env var, 2. ray xxx --disable-usage-stats, 3. ray disable-usage-stats
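The three disable approaches listed above look like this in practice (a sketch; the specific `ray start` subcommand stands in for any `ray` command taking the flag):

```shell
export RAY_USAGE_STATS_ENABLED=0        # 1. environment variable
ray start --head --disable-usage-stats  # 2. per-command flag (example command)
ray disable-usage-stats                 # 3. dedicated subcommand
```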
2022-04-25 16:01:24 -07:00
Clark Zinzow
e6718ec136
[Datasets] Add test for reading CSV files without reading the first line as the header. (#24161)
This PR adds a test confirming that the user can manually supply column names as an alternative to reading a header line.
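The pattern the test exercises — caller-supplied column names instead of a header row — can be sketched with the standard library (hypothetical data and names):

```python
import csv
import io

# A headerless CSV: the first line is data, not a header, so the
# column names must be supplied by the caller.
raw = "1,sparrow\n2,ostrich\n"
column_names = ["id", "bird"]
rows = [dict(zip(column_names, record)) for record in csv.reader(io.StringIO(raw))]
```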
2022-04-25 15:17:30 -07:00
matthewdeng
cc08c01ade
[ml] add more preprocessors (#23904)
Adding some more common preprocessors:
* MaxAbsScaler
* RobustScaler
* PowerTransformer
* Normalizer
* FeatureHasher
* Tokenizer
* HashingVectorizer
* CountVectorizer

API docs: https://ray--23904.org.readthedocs.build/en/23904/ray-air/getting-started.html

Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
2022-04-25 21:12:59 +01:00
Takeshi Yoshimura
e115545579
[ray client] enable ray.get with >2 sec timeout (#21883) (#22165)
Commit 2cf4c72 ("[ray client] Fix ctrl-c for ray.get() by setting a
short-server side timeout") introduced a short server-side timeout not
to block later operations.

However, the fix implicitly assumes that get() is complete within
MAX_BLOCKING_OPERATION_TIME_S (two seconds). This becomes a problem
when apps use heavy objects or limited network I/O bandwidth that
require more than two seconds to push all chunks. The current retry
logic needs to re-push from the beginning of chunks and block clients
with the infinite re-push.

I updated the logic to directly pass timeout if it is explicitly given.
Without timeout, it still uses MAX_BLOCKING_OPERATION_TIME_S for
polling with the short server-side timeout.
2022-04-25 13:06:52 -07:00