[AIR][Docs] Clarify how LGBM/XGB trainers work (#28122)
commit ea483ecf7a (parent 3b3aa80ba3)
3 changed files with 26 additions and 2 deletions
@@ -36,8 +36,18 @@ Ray-specific params are passed in through the trainer constructors.

 How to scale out training?
 --------------------------
 The benefit of using Ray AIR is that you can seamlessly scale up your training by
-adjusting the :class:`ScalingConfig <ray.air.config.ScalingConfig>`. Here are some
-examples for common use-cases:
+adjusting the :class:`ScalingConfig <ray.air.config.ScalingConfig>`.
+
+.. note::
+    Ray Train does not modify or otherwise alter the working
+    of the underlying XGBoost / LightGBM distributed training algorithms.
+    Ray only provides orchestration, data ingest and fault tolerance.
+    For more information on GBDT distributed training, refer to
+    `XGBoost documentation <https://xgboost.readthedocs.io>`__ and
+    `LightGBM documentation <https://lightgbm.readthedocs.io/>`__.
+
+Here are some examples for common use-cases:

 .. tabbed:: Multi-node CPU
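For illustration only, not part of this commit: a minimal sketch of what adjusting the :class:`ScalingConfig` looks like for the multi-node CPU tab shown above, plus a GPU variant for contrast, assuming the Ray 2.x ``ray.air.config`` API; the worker counts and resource sizes are hypothetical.

.. code-block:: python

    from ray.air.config import ScalingConfig

    # Multi-node CPU: 4 training workers, each reserving 8 CPUs
    # (hypothetical sizes; tune to your cluster).
    cpu_scaling = ScalingConfig(num_workers=4, resources_per_worker={"CPU": 8})

    # Multi-node GPU variant: same worker count, one GPU per worker.
    gpu_scaling = ScalingConfig(num_workers=4, use_gpu=True)

Either object is passed as the trainer's ``scaling_config`` argument; the rest of the training code stays unchanged.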
@@ -24,6 +24,13 @@ class LightGBMTrainer(GBDTTrainer):
     for features with the categorical data type, consider using the
     :class:`Categorizer` preprocessor to set the dtypes in the dataset.

+    .. note::
+        ``LightGBMTrainer`` does not modify or otherwise alter the working
+        of the LightGBM distributed training algorithm.
+        Ray only provides orchestration, data ingest and fault tolerance.
+        For more information on LightGBM distributed training, refer to
+        `LightGBM documentation <https://lightgbm.readthedocs.io/>`__.
+
     Example:
         .. code-block:: python
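For illustration only, not part of this commit: a minimal sketch of ``LightGBMTrainer`` together with the :class:`Categorizer` preprocessor the docstring mentions, assuming the Ray 2.x API; the toy dataset, column names, and ``preprocessor`` wiring are hypothetical.

.. code-block:: python

    import ray
    from ray.air.config import ScalingConfig
    from ray.data.preprocessors import Categorizer
    from ray.train.lightgbm import LightGBMTrainer

    # Toy dataset; "color" is a string column LightGBM should treat as categorical.
    train_dataset = ray.data.from_items(
        [{"color": ["red", "green", "blue"][i % 3], "x": i, "y": i % 2}
         for i in range(32)]
    )

    trainer = LightGBMTrainer(
        label_column="y",
        params={"objective": "binary"},
        # Categorizer converts "color" to a pandas categorical dtype,
        # so LightGBM can handle it natively instead of erroring on strings.
        preprocessor=Categorizer(columns=["color"]),
        scaling_config=ScalingConfig(num_workers=2),
        datasets={"train": train_dataset},
    )
    result = trainer.fit()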
@@ -20,6 +20,13 @@ class XGBoostTrainer(GBDTTrainer):
     This Trainer runs the XGBoost training loop in a distributed manner
     using multiple Ray Actors.

+    .. note::
+        ``XGBoostTrainer`` does not modify or otherwise alter the working
+        of the XGBoost distributed training algorithm.
+        Ray only provides orchestration, data ingest and fault tolerance.
+        For more information on XGBoost distributed training, refer to
+        `XGBoost documentation <https://xgboost.readthedocs.io>`__.
+
     Example:
         .. code-block:: python
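For illustration only, not part of this commit: a minimal end-to-end sketch of the ``XGBoostTrainer`` API the note above refers to, assuming the Ray 2.x ``ray.train.xgboost`` module; the toy data and parameters are hypothetical.

.. code-block:: python

    import ray
    from ray.air.config import ScalingConfig
    from ray.train.xgboost import XGBoostTrainer

    train_dataset = ray.data.from_items([{"x": i, "y": i % 2} for i in range(32)])

    trainer = XGBoostTrainer(
        label_column="y",
        params={"objective": "binary:logistic"},
        # Two Ray Actors run XGBoost's own distributed algorithm;
        # Ray provides orchestration, data ingest and fault tolerance.
        scaling_config=ScalingConfig(num_workers=2),
        datasets={"train": train_dataset},
    )
    result = trainer.fit()
    print(result.metrics)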