# Ray Scalability Envelope

## Distributed Benchmarks

All distributed tests are run on 64 nodes with 64 cores/node. The maximum node count is reached by adding 4-core nodes.

| Dimension | Quantity |
| --------- | -------- |
| # nodes in cluster (with trivial task workload) | 250+ |
| # actors in cluster (with trivial workload) | 10k+ |
| # simultaneously running tasks | 10k+ |
| # simultaneously running placement groups | 1k+ |
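
A minimal sketch of the kind of trivial-workload test these dimensions describe (the counts, names, and resource assumptions here are illustrative, not the release-test code):

```python
import ray

ray.init(address="auto")  # connect to an existing multi-node cluster

@ray.remote
def trivial_task():
    return 0

@ray.remote
class TrivialActor:
    def ping(self):
        return 0

# Submit many trivial tasks at once and block until all of them finish.
task_refs = [trivial_task.remote() for _ in range(10_000)]
ray.get(task_refs)

# Create many actors with a trivial workload and call each one once.
actors = [TrivialActor.remote() for _ in range(1_000)]
ray.get([actor.ping.remote() for actor in actors])
```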

## Object Store Benchmarks

| Dimension | Quantity |
| --------- | -------- |
| 1 GiB object broadcast (# of nodes) | 50+ |
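
A rough sketch of what a 1 GiB broadcast looks like with the public Ray API, assuming a running multi-node cluster; the payload size, task count, and placement are illustrative, and the actual release test pins work to distinct nodes:

```python
import numpy as np
import ray

ray.init(address="auto")

# Put a 1 GiB numpy array into the object store once.
payload = np.zeros(1024 ** 3, dtype=np.uint8)  # 1 GiB
payload_ref = ray.put(payload)

@ray.remote
def consume(arr):
    # Passing the ObjectRef as an argument makes Ray copy the object
    # to whichever node this task is scheduled on.
    return arr.nbytes

# One task per target node broadcasts the object across the cluster.
ray.get([consume.remote(payload_ref) for _ in range(50)])
```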

## Single Node Benchmarks

All single node benchmarks are run on a single m4.16xlarge.

| Dimension | Quantity |
| --------- | -------- |
| # of object arguments to a single task | 10,000+ |
| # of objects returned from a single task | 3,000+ |
| # of plasma objects in a single `ray.get` call | 10,000+ |
| # of tasks queued on a single node | 1,000,000+ |
| Maximum `ray.get` numpy object size | 100 GiB+ |
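
For reference, a hedged sketch of how a few of these single-node dimensions can be exercised with the public Ray API; the counts and helper names below are illustrative rather than the release-test values:

```python
import numpy as np
import ray

ray.init()

# Many object arguments to a single task.
@ray.remote
def count_args(*args):
    return len(args)

arg_refs = [ray.put(i) for i in range(10_000)]
assert ray.get(count_args.remote(*arg_refs)) == 10_000

# Many objects returned from a single task.
@ray.remote(num_returns=3_000)
def many_returns():
    return tuple(range(3_000))

return_refs = many_returns.remote()
assert ray.get(return_refs[0]) == 0

# A large numpy object fetched with a single ray.get
# (~1 GB here, well below the 100 GiB+ ceiling in the table).
big_ref = ray.put(np.zeros(10 ** 9, dtype=np.uint8))
big_array = ray.get(big_ref)
```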