Mirror of https://github.com/vale981/ray, synced 2025-03-08 19:41:38 -05:00.
## What do these changes do?

This is a re-implementation of the `FunctionRunner` which enforces synchronization between the thread running the training function and the thread running the Trainable, which logs results. The main purpose is to make logging consistent across APIs in anticipation of a new function API that will be generator based (through `yield` statements). Without these changes, it would be impossible for the (possibly soon to be deprecated) reporter-based API to behave the same as the generator-based API.

This new implementation provides additional guarantees to prevent results from being dropped. This makes the logging behavior more intuitive and consistent with how results are handled in custom subclasses of Trainable.

New guarantees for the Tune function API:

- Every reported result, i.e., every `reporter(**kwargs)` call, is forwarded to the appropriate loggers instead of being dropped when not enough time has elapsed since the last result.
- The wrapped function only runs if the `FunctionRunner` expects a result, i.e., when `FunctionRunner._train()` has been called. This removes the possibility that a result is generated by the function but never logged.
- The wrapped function is not called until the first `_train()` call. Currently (before this change), the wrapped function is started during the setup phase, which could result in dropped results if the trial is cancelled between `_setup()` and the first `_train()` call.
- Exceptions raised by the wrapped function are not propagated until all results have been logged, to prevent dropped results.
- The thread running the wrapped function is explicitly stopped when the `FunctionRunner` is stopped with `_stop()`.
- If the wrapped function terminates without reporting `done=True`, a duplicate of the last reported result with `{"done": True}` added is reported to explicitly terminate the trial. Components are notified with this duplicate, but it is not logged.

## Related issue number

Closes #3956. See also #3949 and #3834.
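For context, a minimal sketch of the reporter-based function API that `FunctionRunner` wraps, alongside the kind of generator-based variant this change anticipates. The metric names, the `lr` config key, and the generator form are illustrative, not the final API:

```python
# Reporter-based Tune function API wrapped by FunctionRunner.
def train(config, reporter):
    accuracy = 0.0
    for step in range(10):
        accuracy += config["lr"]  # stand-in for real training work
        # With this change, every call below is forwarded to the loggers;
        # results are no longer dropped when too little time has elapsed
        # since the last one.
        reporter(timesteps_total=step, mean_accuracy=accuracy)
    # Reporting done=True ends the trial explicitly; otherwise FunctionRunner
    # reports a duplicate of the last result with done=True on the
    # function's behalf.
    reporter(done=True)


# Hypothetical generator-based equivalent anticipated by this change
# (illustrative only, not an existing API): results would be yielded
# instead of passed to a reporter.
def train_generator(config):
    accuracy = 0.0
    for step in range(10):
        accuracy += config["lr"]
        yield {"timesteps_total": step, "mean_accuracy": accuracy}
```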
Top-level contents of the repository:

- .github
- bazel
- ci
- cmake/Modules
- dev
- doc
- docker
- examples
- java
- kubernetes
- python
- site
- src/ray
- thirdparty/scripts
- .clang-format
- .gitignore
- .style.yapf
- .travis.yml
- build-docker.sh
- BUILD.bazel
- build.sh
- CMakeLists.txt
- CONTRIBUTING.rst
- LICENSE
- pylintrc
- README.rst
- scripts
- setup_thirdparty.sh
- WORKSPACE
.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png

.. image:: https://travis-ci.com/ray-project/ray.svg?branch=master
    :target: https://travis-ci.com/ray-project/ray

.. image:: https://readthedocs.org/projects/ray/badge/?version=latest
    :target: http://ray.readthedocs.io/en/latest/?badge=latest

.. image:: https://img.shields.io/badge/pypi-0.6.4-blue.svg
    :target: https://pypi.org/project/ray/

|

**Ray is a flexible, high-performance distributed execution framework.**

Ray is easy to install: ``pip install ray``

Example Use
-----------

+------------------------------------------------+----------------------------------------------------+
| **Basic Python**                               | **Distributed with Ray**                           |
+------------------------------------------------+----------------------------------------------------+
|.. code-block:: python                          |.. code-block:: python                              |
|                                                |                                                    |
|  # Execute f serially.                         |  # Execute f in parallel.                          |
|                                                |                                                    |
|                                                |  @ray.remote                                       |
|  def f():                                      |  def f():                                          |
|      time.sleep(1)                             |      time.sleep(1)                                 |
|      return 1                                  |      return 1                                      |
|                                                |                                                    |
|                                                |                                                    |
|                                                |  ray.init()                                        |
|  results = [f() for i in range(4)]             |  results = ray.get([f.remote() for i in range(4)]) |
+------------------------------------------------+----------------------------------------------------+

Ray comes with libraries that accelerate deep learning and reinforcement learning development:

- `Tune`_: Hyperparameter Optimization Framework
- `RLlib`_: Scalable Reinforcement Learning
- `Distributed Training <http://ray.readthedocs.io/en/latest/distributed_sgd.html>`__

.. _`Tune`: http://ray.readthedocs.io/en/latest/tune.html
.. _`RLlib`: http://ray.readthedocs.io/en/latest/rllib.html

Installation
------------

Ray can be installed on Linux and Mac with ``pip install ray``. To build Ray from source or to install the nightly versions, see the `installation documentation`_.

.. _`installation documentation`: http://ray.readthedocs.io/en/latest/installation.html

More Information
----------------

- `Documentation`_
- `Tutorial`_
- `Blog`_
- `Ray paper`_
- `Ray HotOS paper`_

.. _`Documentation`: http://ray.readthedocs.io/en/latest/index.html
.. _`Tutorial`: https://github.com/ray-project/tutorial
.. _`Blog`: https://ray-project.github.io/
.. _`Ray paper`: https://arxiv.org/abs/1712.05889
.. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924

Getting Involved
----------------

- `ray-dev@googlegroups.com`_: For discussions about development or any general questions.
- `StackOverflow`_: For questions about how to use Ray.
- `GitHub Issues`_: For reporting bugs and feature requests.
- `Pull Requests`_: For submitting code contributions.

.. _`ray-dev@googlegroups.com`: https://groups.google.com/forum/#!forum/ray-dev
.. _`GitHub Issues`: https://github.com/ray-project/ray/issues
.. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray
.. _`Pull Requests`: https://github.com/ray-project/ray/pulls
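For completeness, here is a self-contained version of the parallel example from the Example Use table above, including the imports the table omits for brevity; the final assertion is illustrative.

.. code-block:: python

    import time

    import ray

    ray.init()


    @ray.remote
    def f():
        time.sleep(1)
        return 1


    # The four tasks execute concurrently, so this takes roughly one
    # second rather than four.
    results = ray.get([f.remote() for i in range(4)])
    assert results == [1, 1, 1, 1]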