This is the first PR to remove the code path for multiple core workers in one process. This PR aims to remove the flags and APIs related to `num_workers`.
Once this PR is checked in, we no longer need to consider multiple core workers per process.
Follow-up PRs will handle the deeper logic refactoring, such as eliminating the gap between core worker and core worker process, and removing the multiple-worker logic from the worker pool, GCS, etc.
**BREAKING CHANGE**
This PR removes these APIs:
- Ray.wrapRunnable();
- Ray.wrapCallable();
- Ray.setAsyncContext();
- Ray.getAsyncContext();
And the following APIs may no longer be invoked from a user-created thread in local mode:
- Ray.getRuntimeContext().getCurrentActorId();
- Ray.getRuntimeContext().getCurrentTaskId();
Note that this PR shouldn't be merged to 1.x.
This example simply doesn't run as is. We can bring it back later if it makes sense, but it's not clear what the variables used there, like `actor`, refer to. Fixes #21328
Signed-off-by: Max Pumperla <max.pumperla@googlemail.com>
As per discussion on Slack, this should avoid having old versions of the docs indexed by search engines. See readthedocs/readthedocs.org#2430 for reference.
Signed-off-by: Max Pumperla <max.pumperla@googlemail.com>
This PR is a general overhaul of the "Creating Datasets" feature guide, providing complete coverage of all (public) dataset creation APIs and highlighting features and quirks of the individual APIs, data modalities, storage backends, etc. To keep the page from getting too long while staying easy to navigate, tabbed views are used heavily.
Instead of letting Datasets implicitly use the cluster resources left over from the explicit allocations of other libraries, such as Tune, Datasets should provide an option to explicitly allocate resources for a Datasets workload, for users who want to box Datasets in. This PR adds such an explicit resource allocation option by exposing a top-level scheduling strategy on the `DatasetContext`, through which a placement group can be supplied.
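For illustration, here's a minimal sketch of how this could be used, assuming the option lands as a `scheduling_strategy` attribute on the context (the attribute name follows this PR's description and may differ in the final API):
```
import ray
from ray.data.context import DatasetContext
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init()

# Reserve a fixed resource box for the Datasets workload.
pg = placement_group([{"CPU": 2}] * 2)
ray.get(pg.ready())

# Point the Datasets-wide scheduling strategy at the placement group so
# Datasets tasks stay inside the reserved resources (attribute name assumed).
ctx = DatasetContext.get_current()
ctx.scheduling_strategy = PlacementGroupSchedulingStrategy(pg)

ds = ray.data.range(1000).map(lambda x: x * 2)
```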
Tune resource bookkeeping was broken. Specifically, this is what happened in the repro provided in #24259:
- Only one PG can be staged at a time
- We stage resources for trial 1
- We run trial 1
- We stage resources for trial 2
- We pause trial 1, caching the placement group
- This removes the staged PG for trial 2 (as we return the PG for trial 1)
- However, now we `reconcile_placement_groups`, which re-stages a PG
- Neither trial 1 nor trial 2 is now in `RayTrialExecutor._staged_trials`
- A staging future is still there because of the reconciliation
This PR fixes the problem in two ways: `has_resources_per_trial` now a) also checks for staging futures for the specific trial, and b) also considers cached placement groups.
Generally, resource management in Tune is convoluted and hard to debug, and several classes share bookkeeping responsibilities (runner, executor, pg manager). We should refactor this.
OSS release tests currently run with a hardcoded Python 3.7 base. In the future we will want to run tests on different Python versions.
This PR adds support for a new `python` field in the test configuration. The `python` field will determine both the base image used in the Buildkite runner Docker container (for Ray client compatibility) and the base image for the Anyscale cluster environments.
Note that in Buildkite, we will still only wait for the Python 3.7 base image before kicking off tests. That is acceptable, as we can assume most wheels finish in a similar time; even if we wait for the 3.7 image and then kick off a 3.8 test, that runner will only wait perhaps 5-10 more minutes.
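For illustration, a hypothetical sketch of how the new field might be consumed; the helper name, default, and image naming scheme below are placeholders, not the actual implementation:
```
# Placeholder names throughout; only the "python" config key comes from this PR.
DEFAULT_PYTHON = "3.7"

def base_image_for(test_config: dict) -> str:
    python = test_config.get("python", DEFAULT_PYTHON)
    # e.g. "3.8" -> a "py38" image tag
    return f"anyscale/ray:nightly-py{python.replace('.', '')}"

assert base_image_for({}) == "anyscale/ray:nightly-py37"
assert base_image_for({"python": "3.8"}) == "anyscale/ray:nightly-py38"
```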
`df.transform` has undefined behavior when the passed-in function mutates the dataframe, as mentioned in the pandas docs. I believe this is because the implementation iterates through slices of the dataframe and passes those slices to the provided function. This "gotcha" exposes an implementation detail to users of `BatchMapper`.
It's pretty common to have preprocessors that mutate the dataframe; for example, our own test does the following:
```
def add_and_modify_udf(df: "pd.DataFrame"):
    df["new_col"] = df["old_column"] + 1
    df["to_be_modified"] *= 2
    return df
```
So instead of using `df.transform`, we now call `self.fn(df)` directly. Since `df` is the output of Ray Datasets' `iter_batches`, the provided function can safely mutate it.
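For context, a hedged usage sketch (the `ray.ml.preprocessors` import path is assumed for this era of the codebase); with the direct `self.fn(df)` call, the in-place mutation below is safe:
```
import pandas as pd
import ray
from ray.ml.preprocessors import BatchMapper  # import path assumed

def add_and_modify_udf(df: "pd.DataFrame"):
    df["new_col"] = df["old_column"] + 1
    df["to_be_modified"] *= 2
    return df

ds = ray.data.from_pandas(
    pd.DataFrame({"old_column": [1, 2], "to_be_modified": [3, 4]})
)
# BatchMapper now calls the UDF directly on each batch instead of going
# through df.transform, so mutating df in place is fine.
preprocessor = BatchMapper(fn=add_and_modify_udf)
ds_transformed = preprocessor.transform(ds)
```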
A newly planned version of the Serve schema (used in the REST API and CLI) requires the user to pass in their deployment graph's `import_path` and, optionally, a `runtime_env` containing that graph. This new schema can then pick up any `init_args` and `init_kwargs` values directly from the graph, instead of requiring them to be serialized and passed explicitly into the REST request (a sketch of such a payload follows the lists below).
This change:
* Adds the `import_path` and `runtime_env` fields to the `ServeApplicationSchema`.
* Updates or disables outdated unit tests.
Follow-up changes should:
* Update the status schemas (i.e. `DeploymentStatusSchema` and `ServeApplicationStatusSchema`).
* Remove deployment-level `import_path`s.
* Process the new `import_path` and `runtime_env` fields instead of silently ignoring them.
* Remove `init_args` and `init_kwargs` from `DeploymentSchema` afterwards.
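For illustration, a payload using the new top-level fields might look like the following (a hypothetical sketch; all values are placeholders, and the exact shape is defined by `ServeApplicationSchema`):
```
app_config = {
    # New top-level fields added by this PR.
    "import_path": "my_project.graph:deployment_graph",  # placeholder
    "runtime_env": {
        "working_dir": "https://github.com/user/repo/archive/HEAD.zip",  # placeholder
    },
    # Per-deployment overrides still live alongside the new fields.
    "deployments": [{"name": "MyDeployment", "num_replicas": 2}],
}
```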
Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
e2ee214 broke the linter on master. However, reverting the commit or hotfixing the docs requires codeowner approval. It makes sense to include @maxpumperla as a codeowner for doc-related code paths, so this PR adds him.
Currently, we are not running doc notebooks in CI due to a Bazel misconfiguration: we use `glob` in a top-level package to get the paths of the notebooks, but the notebooks live inside subpackages, which `glob` purposefully ignores. As a result, the lists of notebooks to run are empty. This PR fixes that by:
* Running the `py_test_run_all_notebooks` macro inside the relevant subpackages
* Editing the `test_myst_doc.py` script to allow recursive search for the target file, to deal with mismatches between the `name` and `data` arguments of `py_test_run_all_notebooks`
* Setting the `allow_empty=False` flag inside `glob` calls in our macros to ensure that this oversight is caught early (see the Starlark sketch after this list)
* Enabling detection of changes in doc folder for `*.ipynb` and `BUILD` files
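A Starlark sketch of the corrected pattern (the macro arguments here are illustrative, not copied from the actual `BUILD` files):
```
# In a doc subpackage's BUILD file: run the macro inside the subpackage so
# glob() can actually see the notebooks, and fail loudly if none match.
py_test_run_all_notebooks(
    name = "notebook_tests",
    size = "large",
    include = glob(["*.ipynb"], allow_empty = False),
    exclude = [],
)
```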
This PR also adds a GPU runner for doc tests, allowing one of our examples to pass - and setting the infra for more to come. Finally, a misconfigured path for one set of doc tests is also fixed.
This PR adds support for tensor columns in the `to_tf()` and `to_torch()` APIs.
For Torch, this involves an explicit extension-array check and a (zero-copy) conversion of the tensor column to a NumPy array before converting the column to a Torch tensor.
For TensorFlow, this involves bypassing `df.values` when converting tensor feature columns to NumPy arrays, instead manually creating a single NumPy array from the column Series.
In both cases, I think that the UX around heterogeneous feature columns and squeezing the column dimension could be improved, but I'm saving that for a future PR.
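For illustration, a hedged sketch of the Torch path (the dataset path and column names are placeholders):
```
import ray

# Hypothetical dataset with a tensor column "image" and a scalar "label".
ds = ray.data.read_parquet("s3://bucket/images.parquet")  # placeholder path

# Tensor feature columns are now converted (zero-copy) to NumPy arrays
# before becoming Torch tensors.
torch_ds = ds.to_torch(
    label_column="label",
    feature_columns=["image"],
    batch_size=32,
)
```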
A follow-up PR from this one: https://github.com/ray-project/ray/pull/24628
The previous PR fixed the resubscribing issue for the raylet, but the core worker also needs to resubscribe.
There are two ways of doing resubscription:
1. When the client side detects a failure, it resubscribes.
2. The server side asks the client to resubscribe.
Option 1 is the cleaner and better solution, but it's hard to implement right now for the following reasons:
- We are using long polling, so in some extreme cases we won't be able to detect the failure. For example, the client side receives a message, but before it sends another request, the server side restarts, and the client misses the opportunity to detect the failure. This could happen if we have a standby GCS that starts very fast while the client side has a lot of traffic and runs very slowly.
- The current gRPC framework doesn't give the user a way to handle failures; supporting this might require some refactoring.
We can switch to option 1 once we have gRPC streaming.
This PR implements option 2, which includes three parts:
- raylet: (https://github.com/ray-project/ray/pull/24628)
- core worker: (this pr)
- python
Correctness: whenever a worker starts, it registers with the raylet immediately (a sync call) before connecting to GCS. So we just need to send restart RPCs to all registered workers, and it works because:
- If the worker has just started and hasn't registered with the raylet: that's fine, because the worker hasn't connected to GCS yet, so there is no need to resubscribe.
- If the worker has registered with the raylet: it's covered by the code path here.
Adds a `Dataset.split_proportionately` method that allows the user to split a dataset using proportions. This is a very common use case, e.g. for train-test splitting. The implementation is a thin wrapper over `Dataset.split_at_indices`.
Additionally, this PR adds a `ray.ml.train_test_split` function intended to provide a familiar API to ML practitioners.
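A quick sketch of both additions (the `ray.ml` import path follows this PR's description):
```
import ray
from ray.ml import train_test_split  # import path as described in this PR

ds = ray.data.range(10)

# Proportions must sum to less than 1; the remainder forms the final split.
train, val, test = ds.split_proportionately([0.6, 0.2])  # 6 / 2 / 2 rows

# sklearn-style convenience wrapper built on top.
train_ds, test_ds = train_test_split(ds, test_size=0.25)
```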
The `black` version string differs based on installation method. On my (m1) laptop:
```
$ black --version
black, version 21.7b0
```
On @simon-mo's (intel) and @shrekris-anyscale's (m1) laptops:
```
$ black --version
black, 21.12b0 (compiled: no)
```
This adds a conditional in `ci/lint/format.sh` to handle both.
Updates the landing page to match the format and content of Tune's. The main content changes are some shorter quickstarts and sharpened messaging in our "Why choose Serve?" section.
I also moved all of the `doc_code` into one directory and added a Bazel target that should run all of the examples added there. Split into a separate PR: https://github.com/ray-project/ray/pull/24736.