This fixes the problems that caused the earlier team-column change to be reverted.
It includes two additional changes:
* The alert handler now receives the `team` argument; the missing argument was the root cause of the breakage: https://github.com/ray-project/ray/pull/21289
* Previously, tests without a team column raised an exception. I have weakened that condition to a warning log for now (see the sketch below); after a short transition period it will be switched back to raising an exception.
Please review **e2e.py and the test suites belonging to your team**!
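As a rough sketch only (hypothetical function and field names; the actual logic lives in the release-test tooling), the weakened check behaves roughly like this:

```python
import logging

logger = logging.getLogger(__name__)


def resolve_team(test_config: dict) -> str:
    """Hypothetical sketch of the relaxed check: warn instead of raise."""
    team = test_config.get("team")
    if not team:
        # During the transition period a missing team only logs a warning;
        # later this is intended to become a hard error again.
        logger.warning(
            "Test %s has no team specified; reporting it as 'unspecified'.",
            test_config.get("name", "<unknown>"),
        )
        return "unspecified"
    return team
```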
This is the first part of https://docs.google.com/document/d/16IrwerYi2oJugnRf5hvzukgpJ6FAVEpB6stH_CiNMjY/edit#
This PR adds a team name to each test suite.
If no team name is specified, the test is reported as `unspecified`.
If you are running a local test and the new test suite doesn't have a team name specified, it will raise an exception (this way, we can avoid missing team names in the future).
Note that we will aggregate all of the test configs into a single file, nightly_test.yaml.
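For illustration only (the exact schema of nightly_test.yaml may differ; the field names other than `team` are assumptions), a suite entry would carry a `team` field, and a local run without one would fail fast:

```python
import yaml  # PyYAML, used here only to parse the illustrative snippet

# Illustrative entry; the real nightly_test.yaml schema may differ.
EXAMPLE_SUITE = """
- name: example_nightly_test
  team: core
  cluster:
    cluster_compute: compute_tpl.yaml
"""


def validate_teams_for_local_run(tests: list) -> None:
    # When running locally, a missing team name raises immediately so that
    # new test suites cannot be added without an owning team.
    for test in tests:
        if not test.get("team"):
            raise ValueError(f"Test {test.get('name')!r} must specify a team.")


validate_teams_for_local_run(yaml.safe_load(EXAMPLE_SUITE))
```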
* use nightly
* switch ml cpu to ray cpu
* fix
* add pytest
* add more pytest
* add constraint
* add tensorflow
* fix merge conflict
* add tblib
* fix
* add back uninstall
* Create a core set of algorithms tests to run nightly.
* Run release tests under tf, tf2, and torch frameworks.
* Fix
* Add eager_tracing option for tf2 framework.
* make sure core tests can run in parallel.
* cql
* Report progress while running nightly/weekly tests.
* Include SAC in nightly lineup.
* Revert changes to learning_tests
* rebrand to performance test.
* update build_pipeline.py with new performance_tests name.
* Record stats.
* bug fix, need to populate experiments dict.
* Alphabetize yaml files.
* Allow specifying frameworks. And do not run tf2 by default.
* remove some debugging code.
* fix
* Undo testing changes.
* Do not run CQL regression for now.
* LINT.
Co-authored-by: sven1977 <svenmika1977@gmail.com>
* Add an RLlib Tune experiment to UserTest suite.
* Add ray.init()
* Move example script to example/tune/, so it can be imported as module.
* add __init__.py so our new module will get included in python wheel.
* Add block device to RLlib test instances.
* Reduce disk size a little bit.
* Add metrics reporting
* Allow max of 5 workers to accommodate all the worker tasks.
* revert disk size change.
* Minor updates
* Trigger build
* set max num workers
* Add a compute cfg for autoscaled cpu and gpu nodes.
* use 1gpu instance.
* install tblib for debugging worker crashes.
* Manually upgrade to pytorch 1.9.0
* -y
* torch=1.9.0
* install torch on driver
* bump timeout
* Write a more informational result dict.
* Revert changes to compute config files that are not used.
* add smoke test
* update
* reduce timeout
* Reduce the # of env per worker to 1.
* Small fix for getting trial_states
* Trigger build
* simplify result dict
* lint
* more lint
* fix smoke test
Co-authored-by: Amog Kamsetty <amogkamsetty@yahoo.com>
* prepare for head node
* move command runner interface outside _private
* remove space
* Eric
* flake
* min_workers in multi node type
* fixing edge cases
* eric not idle
* fix target_workers to consider min_workers of node types
* idle timeout
* minor
* minor fix
* test
* lint
* eric v2
* eric 3
* min_workers constraint before bin packing
* Update resource_demand_scheduler.py
* Revert "Update resource_demand_scheduler.py"
This reverts commit 818a63a2c86d8437b3ef21c5035d701c1d1127b5.
* reducing diff
* make get_nodes_to_launch return a dict
* merge
* weird merge fix
* auto fill instance types for AWS
* Alex/Eric
* Update doc/source/cluster/autoscaling.rst
* merge autofill and input from user
* logger.exception
* make the yaml use the default autofill
* docs Eric
* remove test_autoscaler_yaml from windows tests
* lets try changing the test a bit
* return test
* lets see
* edward
* Limit max launch concurrency
* commenting frac TODO
* move to resource demand scheduler
* use STATUS UP TO DATE
* Eric
* make logger of gc freed refs debug instead of info
* add cluster name to docker mount prefix directory
* grrR
* fix tests
* moving docker directory to sdk
* move the import to prevent circular dependency
* small fix
* ian
* fix max launch concurrency bug to assume failing nodes as pending and consider only load_metric's connected nodes as running
* small fix
* deflake test_joblib
* lint
* placement groups bypass
* remove space
* Eric
* first commit
* lint
* example
* documentation
* hmm
* file path fix
* fix test
* some format issue in docs
* modified docs
* joblib strikes again on windows
* add ability to not start autoscaler/monitor
* a
* remove worker_default
* Remove default pod type from operator
* Remove worker_default_node_type from rewrite_legacy_yaml_to_availble_node_types
* deprecate useless fields
Co-authored-by: Ameer Haj Ali <ameerhajali@ameers-mbp.lan>
Co-authored-by: Alex Wu <alex@anyscale.io>
Co-authored-by: Alex Wu <itswu.alex@gmail.com>
Co-authored-by: Eric Liang <ekhliang@gmail.com>
Co-authored-by: Ameer Haj Ali <ameerhajali@Ameers-MacBook-Pro.local>
Co-authored-by: root <root@ip-172-31-56-188.us-west-2.compute.internal>
Co-authored-by: Dmitri Gekhtman <dmitri.m.gekhtman@gmail.com>