This PR adds support for tensor columns in the `to_tf()` and `to_torch()` APIs.
For Torch, this involves an explicit extension array check and a (zero-copy) conversion of the tensor column to a NumPy array before converting the column to a Torch tensor.
For TensorFlow, this involves bypassing `df.values` when converting tensor feature columns to NumPy arrays, instead manually creating a single NumPy array from the column Series.
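For illustration, here is a minimal sketch of the now-supported path; it assumes `ds` already carries a tensor extension column (the parquet path and column names are illustrative):

```python
import ray

# A dataset whose "image" column is a tensor (extension) column and whose
# "label" column is a scalar column; the source path is hypothetical.
ds = ray.data.read_parquet("s3://bucket/images.parquet")

# Tensor feature columns are now converted to framework tensors
# internally, so no manual NumPy conversion is needed by the caller.
torch_ds = ds.to_torch(label_column="label", batch_size=4)
for features, labels in torch_ds:
    print(features.shape, labels.shape)
```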
In both cases, I think that the UX around heterogeneous feature columns and squeezing the column dimension could be improved, but I'm saving that for a future PR.
A follow-up to this PR: https://github.com/ray-project/ray/pull/24628
The previous PR fixed the resubscription issue for the raylet, but the core worker also needs to resubscribe.
There are two ways to do resubscription:
1. When the client side detects a failure, it resubscribes.
2. The server side asks the client to resubscribe.
Option 1) is the cleaner and better solution. However, it's hard to do for the following reasons:
- We are using long polling, so in some extreme cases we won't be able to detect the failure. For example, the client receives a message, but before it sends another request, the server restarts; the client then misses its chance to detect the failure. This could happen if we have a standby GCS that starts very fast while the client side has a lot of traffic and runs very slowly.
- The current gRPC framework doesn't give the user a way to handle failures, so supporting this would require some refactoring.
We can switch to option 1) once we have gRPC streaming.
This PR implements 2), which includes three parts:
- raylet: (https://github.com/ray-project/ray/pull/24628)
- core worker: (this PR)
- python
Correctness: whenever a worker starts, it registers with the raylet immediately (a sync call) before connecting to GCS. So we just need to send restart RPCs to all registered workers, and it should work because:
- If the worker has just started and hasn't registered with the raylet: that's OK, because the worker hasn't connected to GCS yet, so there is no need to resubscribe.
- If the worker has registered with the raylet: it's covered by the code path here.
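To make the flow concrete, here is an illustrative sketch of approach 2) using hypothetical names (not Ray's actual classes): after a restart, the server asks every registered client to resubscribe, and each client replays its local subscription set.

```python
class PubSubServer:
    """Stand-in for GCS: subscription state lives only in memory."""

    def __init__(self):
        self.subscriptions = {}  # topic -> clients; lost on restart
        self.registered = []     # clients registered via the sync call

    def register(self, client):
        self.registered.append(client)

    def subscribe(self, client, topic):
        self.subscriptions.setdefault(topic, []).append(client)

    def restart(self):
        self.subscriptions.clear()      # all subscription state is lost
        for client in self.registered:  # server-initiated resubscribe, i.e. 2)
            client.on_resubscribe_request(self)


class WorkerClient:
    """Stand-in for a core worker/raylet that tracks its own topics."""

    def __init__(self, topics):
        self.topics = topics

    def on_resubscribe_request(self, server):
        # Replay every subscription this client holds.
        for topic in self.topics:
            server.subscribe(self, topic)


server = PubSubServer()
worker = WorkerClient(topics=["actors", "jobs"])
server.register(worker)  # registration happens before subscribing
for topic in worker.topics:
    server.subscribe(worker, topic)

server.restart()  # state is wiped, then restored via resubscription
assert set(server.subscriptions) == {"actors", "jobs"}
```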
Adds a `Dataset.split_proportionately` method that allows the user to split a dataset using proportions. This is a very common use case, e.g. for train-test splitting. The implementation is a thin wrapper over `Dataset.split_at_indices`.
Additionally, this PR adds a `ray.ml.train_test_split` function intended to provide a familiar API to ML practitioners.
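A minimal usage sketch of the two new APIs, assuming the remainder of a proportional split is returned as the final dataset:

```python
import ray
from ray.ml import train_test_split

ds = ray.data.range(10)

# Split into 20% / 30% / remaining 50% pieces.
a, b, c = ds.split_proportionately([0.2, 0.3])

# Familiar train/test split for ML practitioners.
train, test = train_test_split(ds, test_size=0.25)
```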
The `black` version string differs based on installation method. On my (m1) laptop:
```
$ black --version
black, version 21.7b0
```
On @simon-mo's (intel) and @shrekris-anyscale's (m1) laptops:
```
$ black --version
black, 21.12b0 (compiled: no)
```
This adds a conditional in `ci/lint/format.sh` to handle both.
Updates the landing page to match the format and content of Tune's. Added some shorter quickstarts and sharpened up the messaging in our "Why choose Serve?" section; those are the main content changes.
I also moved all of the `doc_code` into one directory and added a bazel target that should run all of the examples added there. Split into a separate PR: https://github.com/ray-project/ray/pull/24736.
This PR adds 2 more states to `TaskStatus`:
```proto
enum TaskStatus {
  // The task is scheduled properly and is waiting for execution.
  // This includes the time to deliver the task to the remote worker
  // plus the queueing time on the execution side.
  WAITING_FOR_EXECUTION = 5;
  // The task is running.
  RUNNING = 6;
}
```
This PR supports specifying jars (or zip packages) for a job, which are used by all workers of this job.
You can specify jars or zips in the config file of your job:
```yml
ray {
  job {
    runtime-env: {
      "jars": [
        "https://my_host/a.jar",
        "https://my_host/b.jar"
      ]
    }
  }
}
```
or via system properties:
```java
System.setProperty("ray.job.runtime-env.jars.0", "https://my_host/a.jar");
System.setProperty("ray.job.runtime-env.jars.1", "https://my_host/a.jar");
Ray.init();
// all workers of this job will add a.jar and b.jar into the classpath.
```
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
This PR fixes the resubscription path to GCS when GCS restarts.
When GCS restarts, it loses all subscription information, since everything is stored in memory. Then, at runtime, we need to tell GCS what's currently subscribed.
The previous method:
- We have a thread in the core worker/raylet to check whether GCS has restarted.
- If it has restarted, we send a resubscribe request to GCS.
However, this doesn't work in these cases:
- GCS restarts so fast that the checker in the raylet/core worker misses it.
- GCS doesn't restart but is just lagging due to network issues; then the resubscription is not necessary.
Actually, GCS knows exactly when resubscription is needed: when it restarts. So this PR has GCS send a resubscribe request to the raylet, and the raylet does the resubscription.
There are two parts for this to work:
- [x] raylet sends the resubscription
- [ ] raylet asks the core worker to send the resubscription
## Why are these changes needed?
The current documentation code in "Message passing using Ray Queue" can be enhanced to better demonstrate the message queue.
It creates 10 tasks but only 2 consumers, and each consumer consumes one task and then exits. Therefore, the output is a bit vague:
```
(consumer pid=1022727) got work 0
(consumer pid=1022595) got work 1
```
So I make each consumer keep working until the queue is empty (see the sketch after the output below). The output shows consumer 0 and consumer 1 working in parallel:
```
(consumer pid=1030876) consumer 0 got work 0
(consumer pid=1030876) consumer 0 got work 1
(consumer pid=1030876) consumer 0 got work 3
(consumer pid=1030876) consumer 0 got work 5
(consumer pid=1030876) consumer 0 got work 7
(consumer pid=1030876) consumer 0 got work 9
(consumer pid=1030949) consumer 1 got work 2
(consumer pid=1030949) consumer 1 got work 4
(consumer pid=1030949) consumer 1 got work 6
(consumer pid=1030949) consumer 1 got work 8
```
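For reference, a minimal sketch of the revised example, using the `Queue` API from `ray.util.queue` (names mirror the doc snippet but are illustrative):

```python
import ray
from ray.util.queue import Queue, Empty

ray.init()
queue = Queue(maxsize=100)

@ray.remote
def consumer(id, queue):
    # Keep consuming until the queue is drained, instead of exiting
    # after a single item.
    try:
        while True:
            work = queue.get(block=True, timeout=1)
            print(f"consumer {id} got work {work}")
    except Empty:
        pass

for i in range(10):
    queue.put(i)

ray.get([consumer.remote(id, queue) for id in range(2)])
```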
P.S. This also fixes a typo in the doc.
Providing easy-access datasets is table stakes for a good Getting Started UX, but even with good in-package data, it can be difficult to make these paths accessible to the user. This PR adds an "example://" protocol that will resolve passed paths directly to our canned in-package example data.
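A minimal sketch of the intended UX (the bundled file name here is illustrative):

```python
import ray

# "example://" paths resolve to example data files shipped inside the
# Ray package, so this works right after `pip install ray`.
ds = ray.data.read_csv("example://iris.csv")
```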
Currently, when an actor has `max_restarts` > 0 and has crashed, the actor will enter the RESTARTING state and then become ALIVE again. Imagine this scenario: an online service provides an HTTP service; the proxy actor receives requests, forwards them to worker actors, and replies to clients with the execution results from the worker actors.
```
                                                         -> Worker A (actor)
                                                        /
                                                       /
HTTP requests -------> Proxy (actor with HTTP server) ---> Worker B (actor)
                                                       \
                                                        \
                                                         -> ...
```
For each HTTP request, the proxy picks one worker (e.g. worker A) based on some algorithm, sends the request to it, and calls `ray.get()` to wait for the result. If for some reason the picked worker crashes, Ray will restart the actor, and `ray.get()` will throw an error. The proxy may pick another worker (e.g. worker B) and re-send the request to it. This is OK.
But new requests keep coming. The proxy may pick worker A again. But because worker A is still in the RESTARTING state, it's not ready to serve requests. `ray.get()` on subsequent requests sent to worker A will hang until worker A is back online (in the ALIVE state). The proxy won't be able to reschedule these requests to another worker because currently there's no way to know whether worker A is alive before sending a request. Nor can we say worker A is not alive just based on whether `ray.get()` hangs.
To solve this issue, we change the semantics of `max_task_retries`.
* When `max_task_retries` is 0 (which is the default value), if the callee actor is in the RESTARTING state, subsequently submitted tasks will fail immediately with a `RayActorError`. Users can catch the `RayActorError` and implement their own fallback strategies to improve service availability and mitigate service outages (see the sketch after this list).
* When `max_task_retries` is not 0, subsequently submitted tasks will be queued on the caller side, and we only send them to the callee once the callee actor is back in the ALIVE state.
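Here is a minimal sketch of the `max_task_retries=0` behavior; the actor and helper names are hypothetical:

```python
import ray
from ray.exceptions import RayActorError

ray.init()

@ray.remote(max_restarts=-1, max_task_retries=0)
class Worker:
    def handle(self, request):
        return f"handled {request}"

workers = [Worker.remote() for _ in range(2)]

def send(request):
    # With max_task_retries=0, a task submitted to a RESTARTING actor
    # fails fast with RayActorError instead of hanging, so the proxy
    # can immediately fall back to another worker.
    for worker in workers:
        try:
            return ray.get(worker.handle.remote(request))
        except RayActorError:
            continue
    raise RuntimeError("no worker available")

print(send("req-1"))
```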
TODO
- [x] Add test cases.
- [ ] Update docs.
- [x] API change review.
This PR adds more example data for ongoing feature guide work. In addition to adding the new datasets, this also puts all example data under examples/data in order to separate it from the example code.
Fix the failure to unbreak nightly and unblock the 1.13 release.
The root cause is that the upgrade of gRPC to 1.45.2 made it slightly slower; this is an acceptable regression, which is needed to make this upgrade.
Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-136.us-west-2.compute.internal>
To facilitate easy demos/documentation/testing with realistic, small-sized, yet ML-familiar data. Shipping it as a source file alongside the code makes it self-contained, i.e. after users `pip install` Ray, they are all set to run it.
Iris is a great fit: a super classic ML dataset with a simple schema and only 150 rows.