If you pass a multidimensional input to `TorchPredictor.predict`, AIR raises an error. For more information about the error, see #25194.
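As a minimal sketch of the kind of input involved (the import path and the model are assumptions for illustration; both have moved or may differ between Ray versions):

```python
import numpy as np
import torch

# Import path is an assumption; TorchPredictor has lived under different
# modules across Ray versions.
from ray.train.torch import TorchPredictor

# A trivial model, just to have something to predict with.
model = torch.nn.Linear(4, 1)
predictor = TorchPredictor(model=model)

# A batch of 2 samples, each of shape (3, 4) -- a multidimensional input.
# Before this fix, calls like this raised an error inside AIR.
data = np.random.rand(2, 3, 4).astype(np.float32)
predictions = predictor.predict(data)
```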
Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
When using Ray inside a virtualenv on Windows, the `python.exe` reported by `sys.executable` is a PEP 397 launcher that wraps the actual Python process reported by `os.getpid()`:
>>> import sys, os, psutil
>>> print(sys.executable)
C:\temp\issue24361\Scripts\python.exe
>>> os.getpid()
2208
>>> child = psutil.Process(2208)
>>> child.cmdline()
['C:\\oss\\CPython38\\python.exe']
>>> child.parent().cmdline()
['C:\\temp\\issue24361\\Scripts\\python.exe']
>>> child.parent().pid
6424
When the agent_manager launches the agent process via `Process::Process()`, it gets the PID of the launcher process (6424), which is the ID it expects when the agent registers in the gRPC callback. But inside `agent.py`, the child process reports its PID via `os.getpid()`, which is 2208, so the agent registers under the wrong PID.
The solution proposed here is another version of #24905: generate an `int agent_id = rand();` before starting the Python process, and pass the `agent_id` to the process.
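A rough sketch of the idea (names such as `--agent-id` and `register_with_raylet` are hypothetical placeholders, not the actual dashboard agent code):

```python
import argparse
import random
import subprocess
import sys


def register_with_raylet(agent_id: int) -> None:
    # Stand-in for the real registration RPC; the actual call lives in the
    # dashboard agent and raylet code.
    print(f"registering agent with id {agent_id}")


def launch_agent() -> int:
    # Parent side: generate the ID before spawning, so it does not depend on
    # which PID (launcher vs. real interpreter) the child ends up with.
    agent_id = random.getrandbits(31)
    subprocess.Popen([sys.executable, "agent.py", f"--agent-id={agent_id}"])
    return agent_id  # The gRPC callback registers the agent under this ID.


def agent_main() -> None:
    # Child side: use the ID passed in instead of calling os.getpid().
    parser = argparse.ArgumentParser()
    parser.add_argument("--agent-id", type=int, required=True)
    args = parser.parse_args()
    register_with_raylet(agent_id=args.agent_id)
```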
The tests in `test_torch_predictor.py` weren't being run in CI. Also, `test_torch_predictor.py::test_init` was failing.
Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
Currently, `name:many_actors` matches e.g. both `many_actors` and `many_actors_smoke_test`, but it should match just one test. Thus we should use `re.fullmatch` instead of `re.match` (with `re.fullmatch`, matching both would require `name:many_actors.*`).
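To illustrate the difference (standard library behavior, not Ray-specific code):

```python
import re

pattern = "many_actors"

# re.match only anchors at the start, so it also matches longer names.
print(bool(re.match(pattern, "many_actors")))             # True
print(bool(re.match(pattern, "many_actors_smoke_test")))  # True

# re.fullmatch requires the whole string to match the pattern.
print(bool(re.fullmatch(pattern, "many_actors")))             # True
print(bool(re.fullmatch(pattern, "many_actors_smoke_test")))  # False

# With fullmatch, matching both again would need an explicit wildcard:
print(bool(re.fullmatch("many_actors.*", "many_actors_smoke_test")))  # True
```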
We should avoid killing I/O workers, since that can disrupt spilling and add a lot of load on Ray core. With this change, workers running application tasks are prioritized for killing instead.
Many release tests are currently failing due to CUDA version incompatibilities. Pinning the base image to 1.12.1 seems to resolve the problem for the time being.
The AIR CI build has been failing on master since #25022.
#25022 moved the tests that require credentials, but the bazel command was left in the build pipeline. So even though all the tests pass, the Buildkite stage itself fails because it tries to run tests that require credentials, and those tests no longer exist in the directory. This only affects the master build, since we don't run this command for PR builds.
This PR adds timeout and asyncio support for the internal KV. This only applies to `gcs_utils` and not Ray clients for now, since it is purely for Ray-internal usage.
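As a rough sketch of the pattern (the client object and method name here are hypothetical placeholders, not the actual `gcs_utils` API):

```python
import asyncio


async def kv_get_with_timeout(kv_client, key: bytes, timeout_s: float = 5.0):
    # Wrap the async KV call so a hung GCS connection fails fast
    # instead of blocking the caller indefinitely.
    try:
        return await asyncio.wait_for(kv_client.internal_kv_get(key), timeout=timeout_s)
    except asyncio.TimeoutError:
        raise TimeoutError(f"internal KV get for {key!r} timed out after {timeout_s}s")
```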
We want to use `clangd` as the language server.
`clangd` is an awesome language server that has many features and is very accurate.
But it needs a `compile_commands.json` file to work.
This PR adds a popular Bazel rule to generate this file.
As described in the related issue, using `model_weight` as the key throws an error.
This update points the user to use `model` as the key instead.
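For illustration only (the `Checkpoint.from_dict` import path and surrounding code are assumptions and may differ from the docs being fixed):

```python
import torch
from ray.air.checkpoint import Checkpoint  # import path is an assumption

model = torch.nn.Linear(1, 1)

# Using "model_weight" as the key throws an error when the checkpoint is consumed:
# checkpoint = Checkpoint.from_dict({"model_weight": model})

# Use "model" as the key instead:
checkpoint = Checkpoint.from_dict({"model": model})
```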
Co-authored-by: tamilflix <tamilflix30@gmail.com>
Redo for PR #24698:
This fixes two bugs in data locality:
1. When a dependent task was already in the CoreWorker's queue, the data locality policy ran to choose a raylet before the first location for the dependency was added, so the dependency appeared to be unavailable anywhere.
2. The locality policy did not take spilled locations into account.
Added C++ unit tests and Python tests for the above.
Split `test_reconstruction` to avoid test timeouts. I believe these were happening because the data locality fix added extra scheduler load in a couple of the reconstruction stress tests.
When loading actor data from GCS, detached actors are currently treated the same as normal actors.
But a detached actor lives beyond the scope of its job and should be loaded even after the job has finished.
This PR fixes that.
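For context, this is the kind of actor affected (standard Ray API; the actor name is just an example):

```python
import ray

ray.init()


@ray.remote
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value


# A detached actor is not tied to the lifetime of the driver/job that created it.
counter = Counter.options(name="global_counter", lifetime="detached").remote()

# Even after this driver exits, a later job can look the actor up by name,
# so GCS must load detached actors regardless of their original job's state:
# counter = ray.get_actor("global_counter")
```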
This fixes two bugs in Datasets push-based shuffle:
1. The scheduling strategy specified by the caller was not getting propagated correctly to the map stage in push-based shuffle. This is because the map and reduce stages shared the same `ray.remote` options dict, and we deleted the caller-specified scheduling strategy from the reduce stage so that we could specify a `NodeAffinitySchedulingStrategy` instead.
2. We were only reporting partial stats for the merge stage.
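The first bug boils down to mutating a shared options dict; a minimal sketch of the fix (function and variable names are illustrative, not the actual Datasets shuffle code):

```python
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy


def split_stage_options(caller_options: dict, merge_node_id: str):
    # Buggy pattern: both stages point at the same dict, so deleting the
    # caller's scheduling strategy for the reduce stage also clobbers the
    # map stage's strategy:
    #
    #   map_options = reduce_options = caller_options
    #   del reduce_options["scheduling_strategy"]
    #
    # Fixed pattern: give each stage its own copy before overriding.
    map_options = dict(caller_options)
    reduce_options = dict(caller_options)
    reduce_options["scheduling_strategy"] = NodeAffinitySchedulingStrategy(
        node_id=merge_node_id, soft=True
    )
    return map_options, reduce_options
```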
Related issue number
Issue 1 is necessary for performance at large scale (#24480).
This makes it possible to use an NFS file system shared across the cluster for `runtime_env` working directories.
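For example (the mount path is just an illustration), a working directory that lives on a cluster-shared NFS mount can be passed directly:

```python
import ray

# /mnt/shared is assumed to be an NFS mount visible on every node in the cluster.
ray.init(runtime_env={"working_dir": "/mnt/shared/my_project"})
```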
Co-authored-by: shrekris-anyscale <92341594+shrekris-anyscale@users.noreply.github.com>
Co-authored-by: Eric Liang <ekhliang@gmail.com>