{
"cells": [
{
"cell_type": "markdown",
"id": "c3192ac4",
"metadata": {},
"source": [
"# Training a model with Sklearn\n",
"In this example we will train a model in Ray AIR using a Sklearn classifier."
]
},
{
"cell_type": "markdown",
"id": "5a4823bf",
"metadata": {},
"source": [
"Let's start with installing our dependencies:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88f4bb39",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"!pip install -qU \"ray[tune]\" sklearn"
]
},
{
"cell_type": "markdown",
"id": "c049c692",
"metadata": {},
"source": [
"Then we need some imports:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c02eb5cd",
"metadata": {},
"outputs": [],
"source": [
"from typing import Tuple\n",
"\n",
"\n",
"import ray\n",
"from ray.data.dataset import Dataset\n",
"from ray.train.batch_predictor import BatchPredictor\n",
"from ray.train.sklearn import SklearnPredictor\n",
"from ray.data.preprocessors import Chain, OrdinalEncoder, StandardScaler\n",
"from ray.air.result import Result\n",
"from ray.air.util.datasets import train_test_split\n",
"from ray.train.sklearn import SklearnTrainer\n",
"\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"\n",
"try:\n",
" from cuml.ensemble import RandomForestClassifier as cuMLRandomForestClassifier\n",
"except ImportError:\n",
" cuMLRandomForestClassifier = None"
]
},
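{
"cell_type": "markdown",
"id": "1a2b3c4d",
"metadata": {},
"source": [
"Ray initializes itself automatically the first time we use it, but we can also start it explicitly. The following cell is an optional, minimal sketch; the arguments to `ray.init()` (such as a cluster address or resource limits) depend on your environment:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b3c4d5e",
"metadata": {},
"outputs": [],
"source": [
"# Optional: start (or connect to) Ray explicitly. Without arguments this\n",
"# launches a local Ray instance if none is already running.\n",
"if not ray.is_initialized():\n",
"    ray.init()"
]
},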
{
"cell_type": "markdown",
"id": "52e017f1",
"metadata": {},
"source": [
"Next we define a function to load our train, validation, and test datasets."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "3631ed1e",
"metadata": {},
"outputs": [],
"source": [
"def prepare_data() -> Tuple[Dataset, Dataset, Dataset]:\n",
" dataset = ray.data.read_csv(\"s3://air-example-data/breast_cancer_with_categorical.csv\")\n",
" train_dataset, valid_dataset = train_test_split(dataset, test_size=0.3)\n",
" test_dataset = valid_dataset.map_batches(lambda df: df.drop(\"target\", axis=1), batch_format=\"pandas\")\n",
" return train_dataset, valid_dataset, test_dataset"
]
},
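{
"cell_type": "markdown",
"id": "3c4d5e6f",
"metadata": {},
"source": [
"Before training, we can sanity-check the data loading. This is a minimal sketch; it assumes the example S3 bucket is reachable from your environment, and calling `count()` triggers a read of the dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d5e6f7a",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical sanity check: load the splits and inspect them.\n",
"train_ds, valid_ds, test_ds = prepare_data()\n",
"print(f\"train rows: {train_ds.count()}, valid rows: {valid_ds.count()}\")\n",
"# Peek at one row of the label-free test split.\n",
"test_ds.show(1)"
]
},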
{
"cell_type": "markdown",
"id": "8d6c6d17",
"metadata": {},
"source": [
"The following function will create a Sklearn trainer, train it, and return the result."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "0fd39e42",
"metadata": {},
"outputs": [],
"source": [
"def train_sklearn(num_cpus: int, use_gpu: bool = False) -> Result:\n",
" if use_gpu and not cuMLRandomForestClassifier:\n",
" raise RuntimeError(\"cuML must be installed for GPU enabled sklearn estimators.\")\n",
"\n",
" train_dataset, valid_dataset, _ = prepare_data()\n",
"\n",
" # Scale some random columns\n",
" columns_to_scale = [\"mean radius\", \"mean texture\"]\n",
" preprocessor = Chain(\n",
" OrdinalEncoder([\"categorical_column\"]), StandardScaler(columns=columns_to_scale)\n",
" )\n",
"\n",
" if use_gpu:\n",
" trainer_resources = {\"CPU\": 1, \"GPU\": 1}\n",
" estimator = cuMLRandomForestClassifier()\n",
" else:\n",
" trainer_resources = {\"CPU\": num_cpus}\n",
" estimator = RandomForestClassifier()\n",
"\n",
" trainer = SklearnTrainer(\n",
" estimator=estimator,\n",
" label_column=\"target\",\n",
" datasets={\"train\": train_dataset, \"valid\": valid_dataset},\n",
" preprocessor=preprocessor,\n",
" cv=5,\n",
" scaling_config={\n",
" \"trainer_resources\": trainer_resources,\n",
" },\n",
" )\n",
" result = trainer.fit()\n",
" print(result.metrics)\n",
"\n",
" return result"
]
},
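{
"cell_type": "markdown",
"id": "5e6f7a8b",
"metadata": {},
"source": [
"Note that `scaling_config` is passed as a plain dict above. Newer Ray releases also accept the `ScalingConfig` dataclass; the cell below is a hedged sketch of the equivalent configuration, assuming Ray 2.x:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f7a8b9c",
"metadata": {},
"outputs": [],
"source": [
"# Equivalent scaling configuration expressed as a ScalingConfig dataclass\n",
"# (a hedged alternative for Ray 2.x; the dict form above also works here).\n",
"from ray.air.config import ScalingConfig\n",
"\n",
"scaling_config = ScalingConfig(trainer_resources={\"CPU\": 2})"
]
},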
{
"cell_type": "markdown",
"id": "7a2efb9d",
"metadata": {},
"source": [
"Once we have the result, we can do batch inference on the obtained model. Let's define a utility function for this."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "59eeadd8",
"metadata": {},
"outputs": [],
"source": [
"def predict_sklearn(result: Result, use_gpu: bool = False):\n",
" _, _, test_dataset = prepare_data()\n",
"\n",
" batch_predictor = BatchPredictor.from_checkpoint(\n",
" result.checkpoint, SklearnPredictor\n",
" )\n",
"\n",
" predicted_labels = (\n",
" batch_predictor.predict(\n",
" test_dataset,\n",
" num_gpus_per_worker=int(use_gpu),\n",
" )\n",
" .map_batches(lambda df: (df > 0.5).astype(int), batch_format=\"pandas\")\n",
" )\n",
" print(f\"PREDICTED LABELS\")\n",
" predicted_labels.show()"
]
},
{
"cell_type": "markdown",
"id": "7d073994",
"metadata": {},
"source": [
"Now we can run the training:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "43f9170a",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2022-06-22 17:27:37,741\tINFO services.py:1477 -- View the Ray dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8269\u001b[39m\u001b[22m\n",
"2022-06-22 17:27:39,822\tWARNING read_api.py:260 -- The number of blocks in this dataset (1) limits its parallelism to 1 concurrent tasks. This is much less than the number of available CPU slots in the cluster. Use `.repartition(n)` to increase the number of dataset blocks.\n",
"Map_Batches: 100%|██████████| 1/1 [00:00<00:00, 44.05it/s]\n"
]
},
{
"data": {
"text/html": [
"== Status ==
Current time: 2022-06-22 17:27:59 (running for 00:00:18.31)
Memory usage on this node: 10.7/31.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/8 CPUs, 0/0 GPUs, 0.0/12.9 GiB heap, 0.0/6.45 GiB objects
Result logdir: /home/ubuntu/ray_results/SklearnTrainer_2022-06-22_17-27-40
Number of trials: 1/1 (1 TERMINATED)
Trial name | status | loc | iter | total time (s) | fit_time |
---|---|---|---|---|---|
SklearnTrainer_9dec8_00000 | TERMINATED | 172.31.43.110:1492629 | 1 | 15.6842 | 2.31571 |