Long Running Tests
==================

This directory contains long-running workloads that are intended to run
forever until they fail. To set up the project, run

.. code-block:: bash

    $ pip install anyscale
    $ anyscale init

Note that all of the long-running tests run inside the ``tensorflow_p36`` virtual environment.
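
If the cluster image manages this environment with conda (an assumption; adjust to your setup), you can activate it with:

.. code-block:: bash

    $ source activate tensorflow_p36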

Running the Workloads
---------------------
The easiest way to run these workloads is with the
`Releaser`_ tool, using the command
``python cli.py suite:run long_running_tests``. By default, this
starts a session in the Anyscale product for each workload and
kicks the workloads off.
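
For example, a sketch of the end-to-end invocation, assuming you have cloned the Releaser repository (its layout may have changed since this was written):

.. code-block:: bash

    $ git clone https://github.com/ray-project/releaser.git
    $ cd releaser
    $ python cli.py suite:run long_running_tests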

To run the tests manually, you can also use the `Anyscale UI <https://www.anyscale.dev/>`_. First, run ``anyscale snapshot create`` from the command line to create a project snapshot. Then, from the UI, launch a session for each test and execute its run command.

You can also start the workloads using the CLI with:

.. code-block:: bash

    $ anyscale start
    $ anyscale run test_workload --workload=<WORKLOAD_NAME> --wheel=<RAY_WHEEL_LINK>


Doing this for each workload starts one EC2 instance per workload and begins
running the workloads (one per instance). The available workload names are
listed in the ``ray-project/project.yaml`` file.
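
For example, a sketch of launching a single workload. The workload name and wheel URL below are illustrative; substitute a name from ``ray-project/project.yaml`` and a real Ray wheel link:

.. code-block:: bash

    $ anyscale start
    $ anyscale run test_workload \
        --workload=actor_deaths \
        --wheel=https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl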


Debugging
---------
The primary way to debug a test while it is running is to view the logs and the dashboard from the UI. After a test has failed, you can still view the stdout logs in the UI and also inspect
the logs under ``/tmp/ray/session*/logs/``, including
``/tmp/ray/session*/logs/debug_state.txt``.
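
For example, on the head node you can use Ray's ``session_latest`` symlink to the current session directory:

.. code-block:: bash

    $ tail -n 100 /tmp/ray/session_latest/logs/*.err
    $ cat /tmp/ray/session_latest/logs/debug_state.txt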

.. To check up on the workloads, run either
.. ``anyscale session --name="*" execute check-load``, which
.. will print the load on each machine, or
.. ``anyscale session --name="*" execute show-output``, which
.. will print the tail of the output for each workload.

Shut Down the Workloads
-----------------------

The instances running the workloads can all be killed by running
``anyscale stop <SESSION_NAME>``.

Adding a Workload
-----------------

To create a new workload, add a new Python file under ``workloads/`` and
add a corresponding run command to ``ray-project/project.yaml``.
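
As a rough sketch, a minimal workload just loops forever and lets any crash surface as a failure. The file name and contents below are illustrative:

.. code-block:: bash

    $ cat > workloads/my_workload.py <<'EOF'
    import time

    import ray

    # Connect to the existing cluster started for this workload.
    ray.init(address="auto")

    @ray.remote
    def f():
        return 1

    # Run forever; the harness treats any uncaught exception as a failure.
    iteration = 0
    while True:
        ray.get([f.remote() for _ in range(100)])
        iteration += 1
        print("Finished iteration", iteration)
        time.sleep(1)
    EOF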

.. _`Releaser`: https://github.com/ray-project/releaser