## What do these changes do?
This is a re-implementation of the `FunctionRunner` that enforces synchronization between the thread running the training function and the thread running the Trainable that logs results. The main purpose is to make logging consistent across APIs in anticipation of a new function API that will be generator-based (built on `yield` statements; both styles are sketched below). Without these changes, it would be impossible for the (possibly soon-to-be-deprecated) reporter-based API to behave the same way as the generator-based API.
This new implementation provides additional guarantees to prevent results from being dropped. This makes the logging behavior more intuitive and consistent with how results are handled in custom subclasses of Trainable.
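For context, here is a minimal sketch of the two styles. The reporter-based function reflects the current API; the generator-based signature is an assumption about the anticipated API, not a final design:

```python
# Current reporter-based API: the function pushes results through a callback.
def train_with_reporter(config, reporter):
    for step in range(100):
        loss = (step - config["x"]) ** 2
        reporter(timesteps_total=step, mean_loss=loss)

# Anticipated generator-based API (shape assumed here): the function yields
# results, so the framework can pull exactly one result per training step.
def train_with_yield(config):
    for step in range(100):
        loss = (step - config["x"]) ** 2
        yield {"timesteps_total": step, "mean_loss": loss}
```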
New guarantees for the tune function API:
- Every reported result, i.e., every `reporter(**kwargs)` call, is forwarded to the appropriate loggers rather than being dropped when not enough time has elapsed since the last result.
- The wrapped function only runs if the `FunctionRunner` expects a result, i.e., when `FunctionRunner._train()` has been called (see the sketch after this list). This removes the possibility that a result will be generated by the function but never logged.
- The wrapped function is not called until the first `_train()` call. Currently, the wrapped function is started during the setup phase, which could result in dropped results if the trial is cancelled between `_setup()` and the first `_train()` call.
- Exceptions raised by the wrapped function won't be propagated until all results are logged to prevent dropped results.
- The thread running the wrapped function is explicitly stopped when the `FunctionRunner` is stopped with `_stop()`.
- If the wrapped function terminates without reporting `done=True`, a duplicate of the last reported result with `done=True` set is reported to explicitly terminate the trial. Components are notified with this duplicate result, but it is not logged.
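To make the synchronization guarantee concrete, here is a minimal sketch of the hand-off idea. All names are illustrative and are not the actual `FunctionRunner` internals; exception propagation and the `done=True` termination path described above are omitted:

```python
import queue
import threading

class FunctionRunnerSketch:
    """Illustrative only: exactly one result is produced per _train() call."""

    def __init__(self, train_fn, config):
        self._train_fn = train_fn
        self._config = config
        self._continue = threading.Semaphore(0)  # permits to produce a result
        self._results = queue.Queue(maxsize=1)   # hand-off of a single result
        self._thread = None

    def _reporter(self, **kwargs):
        # Runs on the function thread: block until _train() asks for a
        # result, then hand exactly one result back. The function therefore
        # never runs ahead of the logging side.
        self._continue.acquire()
        self._results.put(kwargs)

    def _train(self):
        if self._thread is None:
            # Start the function lazily on the first _train() call, so no
            # result can be generated before the runner expects one.
            self._thread = threading.Thread(
                target=self._train_fn,
                args=(self._config, self._reporter),
                daemon=True,
            )
            self._thread.start()
        self._continue.release()      # allow exactly one report
        return self._results.get()    # block until that report arrives

# Usage:
def train_fn(config, reporter):
    for i in range(3):
        reporter(mean_loss=1.0 / (i + 1))

runner = FunctionRunnerSketch(train_fn, {})
print(runner._train())  # {'mean_loss': 1.0}
```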
## Related issue number
Closes #3956.
#3949 #3834
Uses `tune.run` to execute experiments as the preferred API.
@noahgolmant
This does not break backwards compat, but will slowly internalize `Experiment`.
In a separate PR, Tune schedulers should only support 1 running experiment at a time.
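For illustration, a minimal sketch of the `tune.run` style; the trainable and the `stop`/`config` values here are made up for the example:

```python
from ray import tune

def train_fn(config, reporter):
    for step in range(100):
        reporter(timesteps_total=step, mean_loss=(step - config["x"]) ** 2)

# The trainable and its settings are passed to tune.run directly, instead of
# constructing an Experiment and calling run_experiments.
tune.run(
    train_fn,
    name="example_experiment",
    stop={"timesteps_total": 99},
    config={"x": tune.grid_search([10, 20, 30])},
)
```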
Similar to the recent change to HyperOpt (https://github.com/ray-project/ray/pull/3944), this implements both:
1. The ability to pass in initial parameter suggestion(s) to be run through Tune first, before using the Optimiser's suggestions. This is useful when you already know good parameters and want the Optimiser to be aware of them when it makes future parameter suggestions.
2. The same as 1., but if you already know the reward value for those parameters, you can pass those in as well to avoid having to re-run the experiments (see the sketch after this list). In the future it would be nice for Tune to support this functionality directly by loading previously run Tune experiments and initialising the Optimiser with them (a kind of top-level checkpointing), but this feature allows users to do this manually for now.
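A sketch of how this might look with the skopt-based searcher, assuming the `SkOptSearch` parameter names mirror the HyperOpt change (`points_to_evaluate`, `evaluated_rewards`); `my_trainable` and all values are hypothetical:

```python
import skopt
from ray import tune
from ray.tune.suggest.skopt import SkOptSearch

# Parameter configurations you already know are good...
points_to_evaluate = [[0.01, 32], [0.001, 64]]
# ...and, optionally, the rewards already measured for them, so these
# configurations are fed to the optimiser without being re-run.
evaluated_rewards = [0.82, 0.91]

optimizer = skopt.Optimizer([(1e-4, 1e-1), (16, 128)])
search_alg = SkOptSearch(
    optimizer,
    ["lr", "batch_size"],
    max_concurrent=4,
    reward_attr="mean_accuracy",
    points_to_evaluate=points_to_evaluate,
    evaluated_rewards=evaluated_rewards,
)

# my_trainable is a stand-in for your trainable function or class.
tune.run(my_trainable, search_alg=search_alg, num_samples=10)
```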
* Fix checkpoint crash for actor creation task.
* Lint
* Move test to test_actor.py
* Revert unused code in test_failure.py
* Refine test according to Raul's suggestion.
* Introduce set data structure in GCS. Change object table to Set instance.
* Fix a logic bug. Update python code.
* lint
* lint again
* Remove CURRENT_VALUE mode
* Remove 'CURRENT_VALUE'
* Add more test cases
* Rename `has_been_created` to `subscribed`.
* Make the `changed` parameter of type `bool *`
* Rename mode to notification_mode
* fix build
* Make `RAY.SET_REMOVE` return an error if the entry doesn't exist
* lint
* Address comments
* lint and fix build
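As a plain-Python illustration of the set-table semantics in these commits (the real implementation lives in the GCS; all names here are made up): entries can be added and removed, subscribers are notified only when the set actually changes, and removing a missing entry is an error, mirroring `RAY.SET_REMOVE`:

```python
class GcsSetSketch:
    """Illustrative model of the GCS set table semantics."""

    def __init__(self):
        self._entries = set()
        self._subscribers = []

    def subscribe(self, callback):
        # callback receives (change_mode, entry) notifications.
        self._subscribers.append(callback)

    def add(self, entry):
        # `changed` mirrors the bool* out-parameter in the commits above:
        # subscribers are only notified when the set actually changed.
        changed = entry not in self._entries
        self._entries.add(entry)
        if changed:
            for callback in self._subscribers:
                callback("ADD", entry)
        return changed

    def remove(self, entry):
        if entry not in self._entries:
            # Mirrors RAY.SET_REMOVE returning an error for missing entries.
            raise KeyError(f"entry {entry!r} does not exist")
        self._entries.remove(entry)
        for callback in self._subscribers:
            callback("REMOVE", entry)
```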
* Use strongly typed IDs for C++.
* Avoid heap allocation in cython.
* Fix JNI part
* Fix rebase conflict
* Refine
* Remove type check from `__init__`
* Remove unused constructor declarations.
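The strongly typed ID change targets the C++ code, but the pattern is easy to illustrate in Python (names assumed, not the actual classes): wrapping the raw bytes in distinct types prevents different kinds of IDs from being mixed up:

```python
class BaseID:
    """Illustrative: a thin wrapper over the raw ID bytes."""
    __slots__ = ("_binary",)

    def __init__(self, binary: bytes):
        self._binary = binary

    def binary(self) -> bytes:
        return self._binary

    def __eq__(self, other):
        # IDs of different kinds never compare equal, even for equal bytes.
        return type(self) is type(other) and self._binary == other._binary

    def __hash__(self):
        return hash((type(self), self._binary))

class ObjectID(BaseID):
    pass

class TaskID(BaseID):
    pass

# A function can now state which kind of ID it accepts; passing a TaskID
# where an ObjectID is expected becomes an obvious type error rather than
# a silent mix-up of raw bytes.
def get_object(object_id: ObjectID) -> None:
    assert isinstance(object_id, ObjectID)
```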