Commit graph

2721 commits

Author SHA1 Message Date
Eric Liang
2871609296 [rllib] Report sampler performance metrics (#4427) 2019-03-27 13:24:23 -07:00
Andrew Tan
12db684f72 [tune] add filter flag for Tune CLI (#4337)
## What do these changes do?

Adds a filter flag (`--filter`) to the `ls` / `lsx` commands of the Tune CLI.

Usage: `tune ls [path] --filter [column] [operator] [value]`
e.g. `tune lsx ~/ray_results/my_project --filter total_trials == 1`
2019-03-27 11:19:25 -07:00
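The `--filter [column] [operator] [value]` syntax above can be illustrated with a small stand-alone sketch. This is plain Python, not the Tune CLI implementation; the operator table and the `matches` helper are assumptions made for illustration only:

```python
import operator

# Hypothetical operator table modelling `--filter [column] [operator] [value]`.
OPS = {"==": operator.eq, "!=": operator.ne,
       "<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge}

def matches(row, column, op, value):
    """Return True if row[column] satisfies the filter expression."""
    return OPS[op](row[column], value)

experiments = [
    {"name": "my_project", "total_trials": 1},
    {"name": "other_project", "total_trials": 8},
]

# Analogue of: tune lsx ~/ray_results/my_project --filter total_trials == 1
selected = [e for e in experiments if matches(e, "total_trials", "==", 1)]
```

With the sample rows above, only the experiment whose `total_trials` equals 1 survives the filter.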
Robert Nishihara
c6f12e5219 Update documentation from 0.7.0.dev1 to 0.7.0.dev2. (#4485) 2019-03-26 17:32:53 -07:00
Robert Nishihara
c0e10ef12d Bump version number from 0.6.5 to 0.7.0.dev2. (#4484) 2019-03-26 16:44:32 -07:00
Robert Nishihara
8548f12eb2 Give better error when include_webui=1 and webui can't be started. (#4471) 2019-03-26 14:54:32 -07:00
bibabolynn
7a9d1546d4 [java] Fix getWorker and add create multi actors test (#4472) 2019-03-26 20:26:13 +08:00
Wang Qing
7d70cfba6e [Java] Fix loading custom classes from jars (#4475) 2019-03-26 20:15:08 +08:00
Eric Liang
cff08e19ff [rllib] Print out intermediate data shapes on the first iteration (#4426) 2019-03-26 00:27:59 -07:00
Eric Liang
8ee240f40e [rllib] Use 64-byte aligned memory when concatenating arrays (#4408) 2019-03-25 23:56:51 -07:00
Vlad Firoiu
c68eea6134 [rllib] More efficient tuple flattening. (#4416)
* More efficient tuple flattening.

* Preprocessor.write uses transform by default.

* lint

* to array

* Update test_catalog.py

* Update test_catalog.py
2019-03-25 16:00:33 -07:00
Richard Liaw
a275af337e [tune] Make examples more verbose (#4469)
## What do these changes do?
Verbosity defaults to "1", so here we set a default verbosity for a couple of
examples.

## Related issue number

Fixes #4467
2019-03-25 15:13:17 -07:00
Eric Liang
5b8eb475ce [rllib] Allow None to be specified in multi-agent envs (#4464)
* wip

* check

* doc update

* Update hierarchical_training.py
2019-03-25 11:38:17 -07:00
William Ma
11580fb7dc Changes where actor resources are assigned (#4323) 2019-03-24 15:49:36 -07:00
Eric Liang
01699ce4ea [rllib] Fix race condition with multiple data loaders, fix stats 2019-03-23 20:17:01 -07:00
Hao Chen
7a38f9be1c Avoid redundant bazel build (#4458) 2019-03-23 10:44:11 +08:00
Robert Nishihara
01747b11a1 Bump version from 0.7.0.dev1 to 0.6.5. (#4461) 2019-03-22 15:03:29 -07:00
Richard Liaw
32bf23d24f [tune] Remove output of tests 2019-03-22 10:48:03 -07:00
Hao Chen
80cd9c9c1a [travis] Add back '-v' option to pytest and install psutil (#4430) 2019-03-22 17:45:55 +08:00
Leon Sievers
b21c20c9a6 [rllib] Added missing action clipping for rollout example script (#4413)
* Added action clipping for rollout example script

* Used action_clipping from sampler

* Fixed and improved naming
2019-03-22 00:51:27 -07:00
Ruifang Chen
59d74d5e92 [Java] Build Java code with Bazel (#4284) 2019-03-22 14:30:05 +08:00
Eric Liang
4b8b703561 [rllib] Some API cleanups and documentation improvements (#4409) 2019-03-21 21:34:22 -07:00
Ion
59079a799c Signal actor failure (#4196) 2019-03-21 15:17:42 -07:00
Kai Yang
c36d03874b Redis returns OK when removing a non-existent set entry (#4434) 2019-03-21 11:59:15 -07:00
Eric Liang
57c1aeb427 [rllib] Use suppress_output instead of run_silent.sh script for tests (#4386)
* fix

* enable custom loss

* Update run_rllib_tests.sh

* enable tests

* fix action prob

* Update suppress_output

* fix example

* fix
2019-03-21 00:15:24 -07:00
Hao Chen
d03999d01e Cross-language invocation Part 1: Java calling Python functions and actors (#4166) 2019-03-21 13:34:21 +08:00
Richard Liaw
828dc08ac8 [tune] Fix tests for Function API for better consistency (#4421) 2019-03-20 22:31:38 -07:00
Philipp Moritz
80ef8c19aa Add initial news reader example (#4348) 2019-03-20 18:47:12 -07:00
Robert Nishihara
9c158c6a87 Start dashboard on all nodes and other small fixes. (#4428)
* Start reporter on all nodes.

* More fixes
2019-03-20 13:04:06 -07:00
Stephanie Wang
4ac9c1ed6e Fix bug in cluster mode where driver exits when there are tasks in the waiting queue (#4251) 2019-03-20 10:18:27 -07:00
Yuhong Guo
8ce7565530 Refactor pytest fixtures for ray core (#4390) 2019-03-20 11:48:32 +08:00
Eric Liang
c6f15a0057 Revert [rllib] Reserve CPUs for replay actors in apex (#4404)
* Revert "[rllib] Reserve CPUs for replay actors in apex (#4217)"

This reverts commit 2781d74680.

* comment
2019-03-19 09:58:45 -07:00
Peter Schafhalter
c93eb126ec Allow manually writing to return ObjectIDs from tasks/actor methods (#3805) 2019-03-18 19:24:57 -07:00
gehring
7c3274e65b [tune] Make the logging of the function API consistent and predictable (#4011)
## What do these changes do?

This is a re-implementation of the `FunctionRunner` which enforces some synchronicity between the thread running the training function and the thread running the Trainable which logs results. The main purpose is to make logging consistent across APIs in anticipation of a new function API which will be generator-based (through `yield` statements). Without these changes, it will be impossible for the (possibly soon to be) deprecated reporter-based API to behave the same as the generator-based API.

This new implementation provides additional guarantees to prevent results from being dropped. This makes the logging behavior more intuitive and consistent with how results are handled in custom subclasses of Trainable.

New guarantees for the tune function API:

- Every reported result, i.e., `reporter(**kwargs)` calls, is forwarded to the appropriate loggers instead of being dropped if not enough time has elapsed since the last result.
- The wrapped function only runs if the `FunctionRunner` expects a result, i.e., when `FunctionRunner._train()` has been called. This removes the possibility that a result will be generated by the function but never logged.
- The wrapped function is not called until the first `_train()` call. Currently, the wrapped function is started during the setup phase which could result in dropped results if the trial is cancelled between `_setup()` and the first `_train()` call.
- Exceptions raised by the wrapped function won't be propagated until all results are logged to prevent dropped results.
- The thread running the wrapped function is explicitly stopped when the `FunctionRunner` is stopped with `_stop()`.
- If the wrapped function terminates without reporting `done=True`, a duplicate of the last reported result with `{"done": True}` set is reported to explicitly terminate the trial; components are notified with this duplicate, but it is not logged.

## Related issue number

Closes #3956.
#3949
#3834
2019-03-18 19:14:26 -07:00
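The synchronisation the commit message describes (the wrapped function only produces a result when `_train()` asks for one, so no reported result can be dropped) can be sketched with a toy model. This is not Ray's actual `FunctionRunner`; the class and names below are illustrative assumptions:

```python
import queue
import threading

# Toy model of the guarantee above: the training thread blocks inside
# reporter() until the runner requests a result, so every reported result
# is handed over exactly once and none are dropped.
class ToyRunner:
    def __init__(self, train_fn):
        self._requests = queue.Queue()
        self._results = queue.Queue()
        self._thread = threading.Thread(
            target=train_fn, args=(self._report,), daemon=True)
        self._started = False

    def _report(self, **result):
        self._requests.get()       # block until a result is requested
        self._results.put(result)  # hand exactly one result back

    def train(self):
        if not self._started:      # function starts on the first train() call
            self._thread.start()
            self._started = True
        self._requests.put(None)   # request exactly one result
        return self._results.get()

def train_fn(reporter):
    for i in range(3):
        reporter(step=i, done=(i == 2))

runner = ToyRunner(train_fn)
results = [runner.train() for _ in range(3)]
```

Each `train()` call unblocks the training thread for exactly one `reporter(...)` call, mirroring the one-result-per-`_train()` contract described above.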
Yuhong Guo
edb063c3c8 Fix glog problem of no call stack (#4395) 2019-03-18 18:21:21 -07:00
Wang Qing
3b141b26cd Fix global_state not disconnected after ray.shutdown (#4354) 2019-03-18 16:44:49 -07:00
Kristian Hartikainen
2a046116ce [tune] Fix _SafeFallbackEncoder type checks (#4238)
* Fix numpy type checks for _SafeFallbackEncoder

* Format changes

* Fix usage of nan_str in _SafeFallbackEncoder
2019-03-18 16:34:56 -07:00
Eric Liang
27cd6ea401 [rllib] Flip sign of A2C, IMPALA entropy coefficient; raise DeprecationWarning if negative (#4374) 2019-03-17 18:07:37 -07:00
Richard Liaw
ea5a6f8455 [tune] Simplify API (#4234)
Uses `tune.run` to execute experiments as the preferred API.

@noahgolmant

This does not break backwards compat, but will slowly internalize `Experiment`. 

In a separate PR, Tune schedulers should only support 1 running experiment at a time.
2019-03-17 13:03:32 -07:00
markgoodhead
20a155d03d [Tune] Support initial parameters for SkOpt search algorithm (#4341)
Similar to the recent change to HyperOpt (https://github.com/ray-project/ray/pull/3944), this implements both:
1. The ability to pass in initial parameter suggestion(s) to be run through Tune first, before using the Optimiser's suggestions. This is for when you already know good parameters and want the Optimiser to be aware of these when it makes future parameter suggestions.
2. The same as 1., but if you already know the reward value for those parameters you can pass these in as well to avoid having to re-run the experiments. In the future it would be nice for Tune to support this functionality directly by loading previously run Tune experiments and initialising the Optimiser with these (kind of like a top-level checkpointing functionality), but this feature allows users to do this manually for now.
2019-03-16 23:11:30 -07:00
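The two behaviours this commit describes (run user-supplied points first, and seed the optimiser with points whose rewards are already known instead of re-running them) can be modelled with a small stand-alone sketch. This is a toy, not Tune's `SkOptSearch`; the class, parameter names, and fallback strategy are assumptions for illustration:

```python
import random

# Toy suggester modelling warm-starting a search algorithm:
# - points_to_evaluate: known parameter settings to try before the optimiser's
#   own suggestions;
# - evaluated_rewards: rewards already known for a prefix of those points, so
#   they seed the model without being suggested for re-running.
class WarmStartSearch:
    def __init__(self, points_to_evaluate=None, evaluated_rewards=None):
        self._queue = list(points_to_evaluate or [])
        self._observations = []
        if evaluated_rewards:
            known = self._queue[:len(evaluated_rewards)]
            self._queue = self._queue[len(evaluated_rewards):]
            self._observations = list(zip(known, evaluated_rewards))

    def suggest(self):
        if self._queue:  # known-good points run first
            return self._queue.pop(0)
        # Otherwise fall back to the optimiser's own proposal (random here).
        return {"lr": random.uniform(1e-4, 1e-1)}

    def on_result(self, point, reward):
        self._observations.append((point, reward))

search = WarmStartSearch(
    points_to_evaluate=[{"lr": 0.01}, {"lr": 0.05}],
    evaluated_rewards=[0.9])   # reward for {"lr": 0.01} is already known
first = search.suggest()       # -> {"lr": 0.05}; {"lr": 0.01} is not re-run
```

The pre-evaluated point goes straight into the observation history, so only the remaining user-supplied point is actually suggested for execution.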
Eric Liang
b513c0f498 [autoscaler] Restore error message for setup 2019-03-16 18:00:37 -07:00
Richard Liaw
5e95abe63e [tune] Fix performance issue and fix reuse tests (#4379)
* fix tests

* better name

* reduce warnings

* better resource tracking

* oops

* revertmessage

* fix_executor
2019-03-16 13:52:02 -07:00
Eric Liang
a45019d98c [rllib] Add option to proceed even if some workers crashed (#4376) 2019-03-16 13:34:09 -07:00
justinwyang
db9fe6619d Run only relevant tests in Travis based on git diff. (#4271) 2019-03-15 22:23:54 -07:00
Hao Chen
a6a5b344b9 [Java] Upgrade checkstyle plugin (#4375) 2019-03-15 11:36:09 -07:00
Philipp Moritz
c5e2c9af4d Build wheels for macOS with Bazel (#4280) 2019-03-15 10:37:57 -07:00
Hao Chen
93d9867290 Fix linting error on master (#4377) 2019-03-15 10:31:09 -07:00
Hao Chen
f8d12b0418 [Java] Package native dependencies into jar (#4367) 2019-03-15 12:38:40 +08:00
Leon Sievers
6b93ec3034 Fixed calculation of num_steps_trained for multi_gpu_optimizer (#4364) 2019-03-14 19:46:02 -07:00
Eric Liang
2c1131e8b2 [tune] Add warnings if tune event loop gets clogged (#4353)
* add guards

* comments
2019-03-14 19:44:01 -07:00
Yuhong Guo
1a1027b3ab Update git-clang-format to support Python 3. (#4339) 2019-03-14 13:57:11 -07:00