# Model-based Meta-Policy Optimization (MB-MPO)

Code in this package is adapted from https://github.com/jonasrothfuss/model_ensemble_meta_learning.

## Overview
[MBMPO](https://arxiv.org/abs/1809.05214) is an on-policy, model-based algorithm. At a high level, MBMPO is model-based [MAML](https://arxiv.org/abs/1703.03400): on top of MAML, MBMPO learns an *ensemble of dynamics models*. The dynamics models are trained on real environment data, while the actor and critic networks are trained on synthetic data generated by the dynamics models; the actor and critic are updated via the MAML algorithm. The distributed execution plan alternates between training the dynamics models and training the actor and critic networks.

More details can be found [here](https://medium.com/distributed-computing-with-ray/model-based-reinforcement-learning-with-ray-rllib-73f47df33839).
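For orientation, below is a minimal sketch of launching MBMPO through Ray Tune. The specific config keys shown in comments (e.g., ensemble and inner-adaptation settings) are assumptions based on the defaults in `mbmpo.py` and may differ across Ray versions; MBMPO also requires an environment whose reward function the learned dynamics models can query.

```python
# Minimal sketch (not the canonical example): launching MB-MPO via Ray Tune.
# Assumptions: the trainer is registered under the name "MBMPO", it runs on
# the PyTorch framework, and the chosen env exposes a reward function that
# the dynamics-model ensemble can evaluate on imagined transitions.
import ray
from ray import tune

ray.init()

tune.run(
    "MBMPO",
    config={
        "env": "HalfCheetah-v2",  # assumed: a MuJoCo env with a known reward fn
        "framework": "torch",     # MBMPO is implemented in PyTorch
        "num_workers": 4,         # parallel workers collecting real env data
        # Hypothetical knobs mirroring the paper's components; the exact key
        # names may differ between Ray versions -- check DEFAULT_CONFIG in
        # rllib/agents/mbmpo/mbmpo.py before relying on them:
        # "ensemble_size": 5,           # number of dynamics models
        # "inner_adaptation_steps": 1,  # MAML inner-loop gradient steps
    },
    stop={"training_iteration": 100},
)
```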
## Documentation & Implementation:

MBMPO.

**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#mbmpo)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/agents/mbmpo/mbmpo.py)**