* Moved clip_action into policy_graph; Clip actions in compute_single_action
* Update policy_graph.py
* Changed formatting
* Updated codebase for convenience
* Adding support for resource variables
Currently, resource variables go undetected by `TensorFlowVariables` since they do not use the same ops for reading their values. This change should fix this until a more robust solution is implemented.
* Fix varhandle
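For context, classic variables and resource variables are read through different ops, which is why a graph walk keyed only on the classic variable op types misses the latter. Below is a simplified, hypothetical sketch of the idea behind the fix (the helper name is made up and this is not the actual `TensorFlowVariables` code; the op type names are the standard TensorFlow ones):

```python
def find_variable_ops(output_tensor):
    """Walk a TF1 graph backwards from `output_tensor` and collect variable ops.

    Classic variables appear as "Variable"/"VariableV2" ops read via an
    "Identity" op; resource variables appear as "VarHandleOp" ops read via
    "ReadVariableOp". A traversal that only matches the former silently
    skips resource variables, which is the gap this change addresses.
    """
    seen, stack, found = set(), [output_tensor.op], []
    while stack:
        op = stack.pop()
        if op in seen:
            continue
        seen.add(op)
        if op.type in ("Variable", "VariableV2", "VarHandleOp"):
            found.append(op)
        # Follow the data inputs of each op back towards the variables.
        stack.extend(t.op for t in op.inputs)
    return found
```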
## What do these changes do?
Adds filter flag (--filter) to ls / lsx commands for Tune CLI.
Usage: `tune ls [path] --filter [column] [operator] [value]`
e.g. `tune lsx ~/ray_results/my_project --filter total_trials == 1`
## What do these changes do?
This is a re-implementation of the `FunctionRunner` which enforces synchronization between the thread running the training function and the thread running the Trainable that logs results. The main purpose is to make logging consistent across APIs in anticipation of a new, generator-based function API (built on `yield` statements). Without these changes, the (possibly soon to be deprecated) reporter-based API cannot behave the same as the generator-based API.
This new implementation provides additional guarantees to prevent results from being dropped, which makes logging behavior more intuitive and consistent with how results are handled in custom subclasses of Trainable. A minimal sketch of the synchronization handshake follows the guarantee list below.
New guarantees for the tune function API:
- Every reported result, i.e., every `reporter(**kwargs)` call, is forwarded to the appropriate loggers instead of being dropped when not enough time has elapsed since the last result.
- The wrapped function only runs if the `FunctionRunner` expects a result, i.e., when `FunctionRunner._train()` has been called. This removes the possibility that a result will be generated by the function but never logged.
- The wrapped function is not called until the first `_train()` call. Currently, the wrapped function is started during the setup phase which could result in dropped results if the trial is cancelled between `_setup()` and the first `_train()` call.
- Exceptions raised by the wrapped function won't be propagated until all results are logged to prevent dropped results.
- The thread running the wrapped function is explicitly stopped when the `FunctionRunner` is stopped with `_stop()`.
- If the wrapped function terminates without reporting `done=True`, a duplicate of the last reported result with `{"done": True}` set is reported to explicitly terminate the trial; components are notified with this duplicate, but it is not logged.
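As referenced above, here is a minimal sketch of the handshake between the two threads. The class and attribute names are hypothetical and this is not Ray's actual implementation; it only illustrates the one-report-per-`_train()` pattern, with error propagation and the final `done=True` duplication omitted for brevity:

```python
import queue
import threading


class SyncedFunctionRunner:
    """Sketch: the function thread blocks until a result is requested, and
    each _train() call waits for exactly one reported result, so no report
    can be dropped or produced before anyone is ready to log it."""

    def __init__(self, train_fn):
        self._train_fn = train_fn
        self._continue = threading.Semaphore(0)  # one permit per _train() call
        self._results = queue.Queue(maxsize=1)   # one result handed over per step
        self._thread = None

    def _reporter(self, **result):
        # Called by the wrapped function; blocks until the runner asks for a result.
        self._continue.acquire()
        self._results.put(result)

    def _train(self):
        # The wrapped function is only started on the first _train() call, so a
        # trial cancelled between setup and the first step never produces a
        # result that would have to be dropped.
        if self._thread is None:
            self._thread = threading.Thread(
                target=self._train_fn, args=(self._reporter,), daemon=True)
            self._thread.start()
        self._continue.release()      # allow exactly one report
        return self._results.get()    # block until that report arrives
```

Each `_train()` call hands out one permit and then blocks on the queue, mirroring the guarantee that a result is generated only when one is expected and is always handed to the loggers.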
## Related issue number
Closes #3956.
#3949, #3834
Uses `tune.run` to execute experiments as the preferred API.
@noahgolmant
This does not break backwards compatibility, but will slowly internalize `Experiment`.
In a separate PR, Tune schedulers will be restricted to supporting only one running experiment at a time.
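For reference, a minimal sketch of the `tune.run` entry point; the trainable, experiment name, and keyword arguments here are illustrative and may differ between Tune versions:

```python
from ray import tune


def trainable(config, reporter):
    # Toy training function; metric names and values are illustrative only.
    for step in range(10):
        reporter(timesteps_total=step, mean_loss=config["lr"] * step)


# Instead of constructing an Experiment by hand, the callable (or an
# Experiment object / dict, which remains supported) is handed directly
# to tune.run.
tune.run(
    trainable,
    name="my_experiment",
    config={"lr": tune.grid_search([0.01, 0.1])},
    num_samples=1,
)
```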
Similar to the recent change to HyperOpt (https://github.com/ray-project/ray/pull/3944), this implements both:
1. The ability to pass in initial parameter suggestion(s) to be run through Tune first, before using the Optimiser's suggestions. This is for when you already know good parameters and want the Optimiser to be aware of these when it makes future parameter suggestions.
2. The same as 1., but if you already know the reward values for those parameters, you can pass them in as well to avoid re-running the experiments. In the future it would be nice for Tune to support this functionality directly by loading previously run Tune experiments and initialising the Optimiser from them (a kind of top-level checkpointing), but this feature allows users to do this manually for now (a usage sketch follows this list).
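A hedged sketch of both features, assuming a scikit-optimize-backed search algorithm and that the new arguments follow the HyperOpt naming (`points_to_evaluate` / `evaluated_rewards`); the exact class and keyword names are assumptions rather than confirmed by this description:

```python
from ray import tune
from ray.tune.suggest.skopt import SkOptSearch  # assumed optimiser wrapper
from skopt import Optimizer


def my_trainable(config, reporter):
    # Stand-in objective; reports a single reward and finishes.
    reporter(mean_accuracy=1.0 - config["lr"], done=True)


optimizer = Optimizer([(1e-4, 1e-1), (16, 256)])  # search space: lr, batch_size

# 1. Known-good configurations to run (or account for) before any
#    optimiser-generated suggestions...
known_points = [[1e-2, 64], [5e-3, 128]]
# 2. ...and, optionally, rewards already observed for them, so those
#    trials never need to be re-run.
known_rewards = [0.82, 0.87]

search_alg = SkOptSearch(
    optimizer,
    ["lr", "batch_size"],
    reward_attr="mean_accuracy",       # assumed keyword; newer versions use `metric`
    points_to_evaluate=known_points,   # feature 1 above
    evaluated_rewards=known_rewards,   # feature 2 above
)

tune.run(my_trainable, search_alg=search_alg, num_samples=20)
```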