{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "TsniIjjg2Pym"
},
"source": [
"*This example is adapted from the Continual AI Avalanche quick start: https://avalanche.continualai.org/*"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1VsUrzVm1W-h"
},
"source": [
"# Incremental Learning with Ray AIR\n",
"\n",
"In this example, we show how to use Ray AIR to incrementally train a simple image classification PyTorch model\n",
"on a stream of incoming tasks.\n",
"\n",
"Each task is a random permutation of the MNIST Dataset, which is a common benchmark\n",
"used for continual learning. After training on all the\n",
"tasks, the model is expected to be able to make predictions on data from any task.\n",
"\n",
"In this example, we use a naive fine-tuning strategy, where the model is trained\n",
"on each task, without any special methods to prevent [catastrophic forgetting](\n",
"https://en.wikipedia.org/wiki/Catastrophic_interference). Model performance is\n",
"expected to be poor.\n",
"\n",
"More precisely, this example showcases domain-incremental training: at\n",
"prediction/testing time, the model is asked to predict on data from any of the\n",
"tasks trained on so far, without being given the task ID. This is in contrast to\n",
"task-incremental training, where the task ID is\n",
"provided during prediction/testing time.\n",
"\n",
"For more information on the three different categories of incremental/continual\n",
"learning, please see [\"Three scenarios for continual learning\" by van de Ven and Tolias](https://arxiv.org/pdf/1904.07734.pdf).\n",
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Q3oGiuqYfj9_"
},
"source": [
"This example will cover the following:\n",
"1. Loading a PyTorch Dataset into Ray Datasets.\n",
"2. Creating an `Iterator[ray.data.Dataset]` abstraction to represent a stream of data to train on for incremental training.\n",
"3. Implementing a custom Ray AIR preprocessor to preprocess the Dataset.\n",
"4. Incrementally training a model using data parallel training.\n",
"5. Using our trained model to perform batch prediction on test data.\n",
"6. Incrementally deploying our trained model with Ray Serve and performing online prediction queries."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "z52Y8O4q1bIk"
},
"source": [
"# Step 1: Installations and Initializing Ray\n",
"\n",
"To get started, let's first install the necessary packages: Ray AIR, torch, and torchvision. Uncomment the lines below and run the cell to install them."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "kWr6BRMk1Y1j",
"outputId": "dad49a31-a602-4e44-b5fe-932de603925e"
},
"outputs": [],
"source": [
"# !pip install -q \"ray[air]\"\n",
"# !pip install -q torch\n",
"# !pip install -q torchvision"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RpD4STX3g1dq"
},
"source": [
"Then, let's initialize Ray! We can just import and call `ray.init()`. If you are running on a Ray cluster, you can call `ray.init(\"auto\")` to connect to the cluster instead of initializing a new local Ray instance."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "72fEFqL4T7iA",
"outputId": "9cae25f2-c712-4baa-f66b-337049e1b565"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2022-07-20 21:47:49,873\tINFO services.py:1483 -- View the Ray dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265\u001b[39m\u001b[22m\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
" <div style=\"margin-left: 50px;display: flex;flex-direction: row;align-items: center\">\n",
" <h3 style=\"color: var(--jp-ui-font-color0)\">Ray</h3>\n",
" <svg version=\"1.1\" id=\"ray\" width=\"3em\" viewBox=\"0 0 144.5 144.6\" style=\"margin-left: 3em;margin-right: 3em\">\n",
" <g id=\"layer-1\">\n",
" <path fill=\"#00a2e9\" class=\"st0\" d=\"M97.3,77.2c-3.8-1.1-6.2,0.9-8.3,5.1c-3.5,6.8-9.9,9.9-17.4,9.6S58,88.1,54.8,81.2c-1.4-3-3-4-6.3-4.1\n",
" c-5.6-0.1-9.9,0.1-13.1,6.4c-3.8,7.6-13.6,10.2-21.8,7.6C5.2,88.4-0.4,80.5,0,71.7c0.1-8.4,5.7-15.8,13.8-18.2\n",
" c8.4-2.6,17.5,0.7,22.3,8c1.3,1.9,1.3,5.2,3.6,5.6c3.9,0.6,8,0.2,12,0.2c1.8,0,1.9-1.6,2.4-2.8c3.5-7.8,9.7-11.8,18-11.9\n",
" c8.2-0.1,14.4,3.9,17.8,11.4c1.3,2.8,2.9,3.6,5.7,3.3c1-0.1,2,0.1,3,0c2.8-0.5,6.4,1.7,8.1-2.7s-2.3-5.5-4.1-7.5\n",
" c-5.1-5.7-10.9-10.8-16.1-16.3C84,38,81.9,37.1,78,38.3C66.7,42,56.2,35.7,53,24.1C50.3,14,57.3,2.8,67.7,0.5\n",
" C78.4-2,89,4.7,91.5,15.3c0.1,0.3,0.1,0.5,0.2,0.8c0.7,3.4,0.7,6.9-0.8,9.8c-1.7,3.2-0.8,5,1.5,7.2c6.7,6.5,13.3,13,19.8,19.7\n",
" c1.8,1.8,3,2.1,5.5,1.2c9.1-3.4,17.9-0.6,23.4,7c4.8,6.9,4.6,16.1-0.4,22.9c-5.4,7.2-14.2,9.9-23.1,6.5c-2.3-0.9-3.5-0.6-5.1,1.1\n",
" c-6.7,6.9-13.6,13.7-20.5,20.4c-1.8,1.8-2.5,3.2-1.4,5.9c3.5,8.7,0.3,18.6-7.7,23.6c-7.9,5-18.2,3.8-24.8-2.9\n",
" c-6.4-6.4-7.4-16.2-2.5-24.3c4.9-7.8,14.5-11,23.1-7.8c3,1.1,4.7,0.5,6.9-1.7C91.7,98.4,98,92.3,104.2,86c1.6-1.6,4.1-2.7,2.6-6.2\n",
" c-1.4-3.3-3.8-2.5-6.2-2.6C99.8,77.2,98.9,77.2,97.3,77.2z M72.1,29.7c5.5,0.1,9.9-4.3,10-9.8c0-0.1,0-0.2,0-0.3\n",
" C81.8,14,77,9.8,71.5,10.2c-5,0.3-9,4.2-9.3,9.2c-0.2,5.5,4,10.1,9.5,10.3C71.8,29.7,72,29.7,72.1,29.7z M72.3,62.3\n",
" c-5.4-0.1-9.9,4.2-10.1,9.7c0,0.2,0,0.3,0,0.5c0.2,5.4,4.5,9.7,9.9,10c5.1,0.1,9.9-4.7,10.1-9.8c0.2-5.5-4-10-9.5-10.3\n",
" C72.6,62.3,72.4,62.3,72.3,62.3z M115,72.5c0.1,5.4,4.5,9.7,9.8,9.9c5.6-0.2,10-4.8,10-10.4c-0.2-5.4-4.6-9.7-10-9.7\n",
" c-5.3-0.1-9.8,4.2-9.9,9.5C115,72.1,115,72.3,115,72.5z M19.5,62.3c-5.4,0.1-9.8,4.4-10,9.8c-0.1,5.1,5.2,10.4,10.2,10.3\n",
" c5.6-0.2,10-4.9,9.8-10.5c-0.1-5.4-4.5-9.7-9.9-9.6C19.6,62.3,19.5,62.3,19.5,62.3z M71.8,134.6c5.9,0.2,10.3-3.9,10.4-9.6\n",
" c0.5-5.5-3.6-10.4-9.1-10.8c-5.5-0.5-10.4,3.6-10.8,9.1c0,0.5,0,0.9,0,1.4c-0.2,5.3,4,9.8,9.3,10\n",
" C71.6,134.6,71.7,134.6,71.8,134.6z\"/>\n",
" </g>\n",
" </svg>\n",
" <table>\n",
" <tr>\n",
" <td style=\"text-align: left\"><b>Python version:</b></td>\n",
" <td style=\"text-align: left\"><b>3.7.10</b></td>\n",
" </tr>\n",
" <tr>\n",
" <td style=\"text-align: left\"><b>Ray version:</b></td>\n",
" <td style=\"text-align: left\"><b> 3.0.0.dev0</b></td>\n",
" </tr>\n",
" <tr>\n",
" <td style=\"text-align: left\"><b>Dashboard:</b></td>\n",
" <td style=\"text-align: left\"><b><a href=\"http://127.0.0.1:8265\" target=\"_blank\">http://127.0.0.1:8265</a></b></td>\n",
"</tr>\n",
"\n",
" </table>\n",
" </div>\n",
"</div>\n"
],
"text/plain": [
"RayContext(dashboard_url='127.0.0.1:8265', python_version='3.7.10', ray_version='3.0.0.dev0', ray_commit='{{RAY_COMMIT_SHA}}', address_info={'node_ip_address': '127.0.0.1', 'raylet_ip_address': '127.0.0.1', 'redis_address': None, 'object_store_address': '/tmp/ray/session_2022-07-20_21-47-47_297236_39344/sockets/plasma_store', 'raylet_socket_name': '/tmp/ray/session_2022-07-20_21-47-47_297236_39344/sockets/raylet', 'webui_url': '127.0.0.1:8265', 'session_dir': '/tmp/ray/session_2022-07-20_21-47-47_297236_39344', 'metrics_export_port': 62008, 'gcs_address': '127.0.0.1:57307', 'address': '127.0.0.1:57307', 'dashboard_agent_listen_port': 52365, 'node_id': 'db68eafa3bbe9042df574f3c9974b40ce8d97728db90282feefb4690'})"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import ray\n",
"ray.init()\n",
"# If running on a cluster, use the line below instead.\n",
"# ray.init(\"auto\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AedcxD_FClQL"
},
"source": [
"# Step 2: Define our PyTorch Model\n",
"\n",
"Now that we have the necessary installations, let's define our PyTorch model. Since this example classifies MNIST images, we will use a simple multi-layer perceptron."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3TVkSmFFCHhI"
},
"outputs": [],
"source": [
"import torch.nn as nn\n",
"\n",
"class SimpleMLP(nn.Module):\n",
"    def __init__(self, num_classes=10, input_size=28 * 28):\n",
"        super(SimpleMLP, self).__init__()\n",
"\n",
"        self.features = nn.Sequential(\n",
"            nn.Linear(input_size, 512),\n",
"            nn.ReLU(inplace=True),\n",
"            nn.Dropout(),\n",
"        )\n",
"        self.classifier = nn.Linear(512, num_classes)\n",
"        self._input_size = input_size\n",
"\n",
"    def forward(self, x):\n",
"        x = x.contiguous()\n",
"        x = x.view(-1, self._input_size)\n",
"        x = self.features(x)\n",
"        x = self.classifier(x)\n",
"        return x"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "L2N1U22VC_N9"
},
"source": [
"# Step 3: Create the Stream of Tasks\n",
"\n",
"We can now create a stream of tasks (where each task contains a dataset to train on). For this example, we will create an artificial stream of tasks consisting of\n",
"permuted variations of MNIST, which is a classic benchmark in continual learning\n",
"research.\n",
"\n",
"For real-world scenarios, this step is not necessary as fresh data will already be\n",
"arriving as a stream of tasks. It does not need to be artificially created."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3SVSrkqrDJuc"
},
"source": [
"## 3a: Load MNIST Dataset to a Ray Dataset\n",
|
|
"\n",
|
|
"Let's first define a simple function that will return the original MNIST Dataset as a distributed Ray Dataset. Ray Datasets are the standard way to load and exchange data in Ray libraries and applications, read more about them [here](https://docs.ray.io/en/latest/data/dataset.html)!\n",
|
|
"\n",
|
|
"The function in the below code snippet does the following:\n",
|
|
"1. Downloads the MNIST Dataset from torchvision in-memory\n",
|
|
"2. Loads the in-memory Torch Dataset into a Ray Dataset\n",
|
|
"3. Converts the Ray Dataset into a Pandas format. Instead of the Ray Dataset iterating over tuples, it will have 2 columns: \"image\" & \"label\". \n",
|
|
"<!-- TODO: Figure out when and how to use TensorArray extension -->\n",
|
|
"<!-- The image will be stored as a multi-dimensional tensor (via the [TensorArray format](https://docs.ray.io/en/latest/data/dataset-tensor-support.html) instead of a PIL image). -->\n",
|
|
"This will allow us to apply built-in preprocessors to the Ray Dataset and allow Ray Datasets to be used with Ray AIR Predictors.\n",
|
|
" <!-- and also means that any transformations done to the images can be done in a zero-copy fashion. -->\n",
|
|
"\n",
|
|
"For this example, since we are just working with MNIST dataset, which is small, we use the [`SimpleTorchDataSource`](https://docs.ray.io/en/master/data/package-ref.html?highlight=SimpleTorchDatasource#ray.data.datasource.SimpleTorchDatasource) which just loads the full MNIST dataset into memory.\n",
|
|
"\n",
|
|
"For loading larger datasets in a parallel fashion, you should use [Ray Dataset's additional read APIs](https://docs.ray.io/en/master/data/dataset.html#supported-input-formats) to load data from parquet, csv, image files, and more!"
|
|
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "0XKwJKrNCxg4"
},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"\n",
"import torchvision\n",
"from torchvision.transforms import RandomCrop\n",
"\n",
"import ray\n",
"from ray.data.datasource.torch_datasource import SimpleTorchDatasource\n",
"\n",
"\n",
"def get_mnist_dataset(train: bool = True) -> ray.data.Dataset:\n",
"    \"\"\"Returns the MNIST Dataset as a ray.data.Dataset.\n",
"\n",
"    Args:\n",
"        train: Whether to return the train dataset or test dataset.\n",
"    \"\"\"\n",
"\n",
"    def mnist_dataset_factory():\n",
"        if train:\n",
"            # Only perform random cropping on the Train dataset.\n",
"            transform = RandomCrop(28, padding=4)\n",
"        else:\n",
"            transform = None\n",
"        return torchvision.datasets.MNIST(\"./data\", download=True, train=train, transform=transform)\n",
"\n",
"    def convert_batch_to_pandas(batch):\n",
"        images = [np.array(item[0]) for item in batch]\n",
"        labels = [item[1] for item in batch]\n",
"\n",
"        df = pd.DataFrame({\"image\": images, \"label\": labels})\n",
"\n",
"        return df\n",
"\n",
"    mnist_dataset = ray.data.read_datasource(\n",
"        SimpleTorchDatasource(), dataset_factory=mnist_dataset_factory\n",
"    )\n",
"    mnist_dataset = mnist_dataset.map_batches(convert_batch_to_pandas)\n",
"    return mnist_dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "vqrfgfl9YnVe"
},
"source": [
"## 3b: Create our Stream abstraction\n",
"\n",
"Now we can create our \"stream\" abstraction. This abstraction provides two\n",
"methods (`generate_train_stream` and `generate_test_stream`), each of which returns an Iterator\n",
"over Ray Datasets. Each item in this iterator contains a unique permutation of\n",
"MNIST, and is one task that we want to train on.\n",
"\n",
"In this example, \"the stream of tasks\" is contrived, since all the data for all tasks already exists in an offline setting. For true online continual learning, you would want to implement a custom dataset iterator that reads from some stream datasource to produce new tasks. The only abstraction that's needed is `Iterator[ray.data.Dataset]`.\n",
"\n",
"Note that the test dataset stream uses the same permutations as the training dataset stream. In general for continual learning, it is expected that the data distribution of the test/prediction data follows what the model was trained on. If you notice that the distribution of new prediction queries is shifting away from the distribution of the training data, then you should probably trigger training of a new task.\n",
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "f2EagMWCN3he"
},
"outputs": [],
"source": [
"from typing import Iterator, List\n",
"import random\n",
"import numpy as np\n",
"\n",
"from ray.data import ActorPoolStrategy\n",
"\n",
"\n",
"class PermutedMNISTStream:\n",
"    \"\"\"Generates streams of permuted MNIST Datasets.\n",
"\n",
"    Example:\n",
"\n",
"        permuted_mnist = PermutedMNISTStream(n_tasks=3)\n",
"        train_stream = permuted_mnist.generate_train_stream()\n",
"\n",
"        # Iterate through the train_stream\n",
"        for train_dataset in train_stream:\n",
"            ...\n",
"\n",
"    Args:\n",
"        n_tasks: The number of tasks to generate.\n",
"    \"\"\"\n",
"\n",
"    def __init__(self, n_tasks: int = 3):\n",
"        self.n_tasks = n_tasks\n",
"        self.permutations = [\n",
"            np.random.permutation(28 * 28) for _ in range(self.n_tasks)\n",
"        ]\n",
"\n",
"        self.train_mnist_dataset = get_mnist_dataset(train=True)\n",
"        self.test_mnist_dataset = get_mnist_dataset(train=False)\n",
"\n",
"    def random_permute_dataset(\n",
"        self, dataset: ray.data.Dataset, permutation: np.ndarray\n",
"    ):\n",
"        \"\"\"Randomly permutes the pixels for each image in the dataset.\"\"\"\n",
"\n",
"        class PixelsPermutation(object):\n",
"            def __call__(self, batch):\n",
"                batch[\"image\"] = batch[\"image\"].map(lambda image: image.reshape(-1)[permutation].reshape(28, 28))\n",
"                return batch\n",
"\n",
"        return dataset.map_batches(PixelsPermutation, compute=ActorPoolStrategy(), batch_format=\"pandas\")\n",
"\n",
"    def generate_train_stream(self) -> Iterator[ray.data.Dataset]:\n",
"        for permutation in self.permutations:\n",
"            permuted_mnist_dataset = self.random_permute_dataset(\n",
"                self.train_mnist_dataset, permutation\n",
"            )\n",
"            yield permuted_mnist_dataset\n",
"\n",
"    def generate_test_stream(self) -> Iterator[ray.data.Dataset]:\n",
"        for permutation in self.permutations:\n",
"            permuted_mnist_dataset = self.random_permute_dataset(\n",
"                self.test_mnist_dataset, permutation\n",
"            )\n",
"            yield permuted_mnist_dataset\n",
"\n",
"    def generate_test_samples(self, num_samples: int = 10) -> List[np.ndarray]:\n",
"        \"\"\"Generates num_samples permuted MNIST images.\"\"\"\n",
"        random_permutation = random.choice(self.permutations)\n",
"        return list(self.random_permute_dataset(\n",
"            self.test_mnist_dataset.random_shuffle().limit(num_samples),\n",
"            random_permutation,\n",
"        ).to_pandas()[\"image\"].to_numpy())\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HDGHgtb699kd"
},
"source": [
"# Step 4: Define the logic for Training and Inference/Prediction\n",
"\n",
"Now that we can get an Iterator over Ray Datasets, we can incrementally train our model in a data parallel fashion via Ray Train, while incrementally deploying our model via Ray Serve. Let's define some helper functions to allow us to do this!\n",
"\n",
"If you are not familiar with data parallel training, it is a distributed training strategy in which there are multiple model replicas, and each replica trains on a different batch of data. After each batch, the gradients are synchronized across the replicas. This effectively allows us to train on more data in a shorter amount of time.\n",
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SBWxP1sP-G-o"
},
"source": [
"## 4a: Define our training logic for each Data Parallel worker\n",
"\n",
"The first thing we need to do is to define the training loop that will be run on each training worker.\n",
"\n",
"The training loop takes in a `config` Dict as an argument that we can use to pass in any configurations for training.\n",
"\n",
"This is just standard PyTorch training, with the difference being that we can leverage [Ray Train's utility functions](https://docs.ray.io/en/master/train/api.html#training-function-utilities) and the [Ray AIR Session](https://docs.ray.io/en/master/ray-air/package-ref.html#module-ray.air.session):\n",
"- `ray.train.torch.prepare_model(...)`: This will prepare the model for distributed training by wrapping it in PyTorch `DistributedDataParallel` and moving it to the correct accelerator device.\n",
"- `ray.air.session.get_dataset_shard(...)`: This will get the Ray Dataset shard for this particular Data Parallel worker.\n",
"- `ray.air.session.report({}, checkpoint=...)`: This will tell Ray Train to persist the provided `Checkpoint` object.\n",
"- `ray.air.session.get_checkpoint()`: Returns a checkpoint to resume from. This is useful for fault tolerance or, as in our case, for continuing to train the same model on a new incoming dataset."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "Y9IRDMec-GZ9"
},
"outputs": [],
"source": [
"from ray import train\n",
"from ray.air import session, Checkpoint\n",
"\n",
"from torch.optim import SGD\n",
"from torch.nn import CrossEntropyLoss\n",
"\n",
"from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present\n",
"\n",
"def train_loop_per_worker(config: dict):\n",
"    num_epochs = config[\"num_epochs\"]\n",
"    learning_rate = config[\"learning_rate\"]\n",
"    momentum = config[\"momentum\"]\n",
"    batch_size = config[\"batch_size\"]\n",
"\n",
"    model = SimpleMLP(num_classes=10)\n",
"\n",
"    # Load model from checkpoint if there is a checkpoint to load from.\n",
"    checkpoint_to_load = session.get_checkpoint()\n",
"    if checkpoint_to_load:\n",
"        state_dict_to_resume_from = checkpoint_to_load.to_dict()[\"model\"]\n",
"        model.load_state_dict(state_dict=state_dict_to_resume_from)\n",
"\n",
"    model = train.torch.prepare_model(model)\n",
"\n",
"    optimizer = SGD(model.parameters(), lr=learning_rate, momentum=momentum)\n",
"    criterion = CrossEntropyLoss()\n",
"\n",
"    # Get the Ray Dataset shard for this data parallel worker, and convert it to a PyTorch Dataset.\n",
"    dataset_shard = session.get_dataset_shard(\"train\").to_torch(\n",
"        label_column=\"label\",\n",
"        batch_size=batch_size,\n",
"        unsqueeze_feature_tensors=False,\n",
"        unsqueeze_label_tensor=False,\n",
"    )\n",
"\n",
"    for epoch_idx in range(num_epochs):\n",
"        running_loss = 0\n",
"        for iteration, (train_mb_x, train_mb_y) in enumerate(dataset_shard):\n",
"            optimizer.zero_grad()\n",
"            train_mb_x = train_mb_x.to(train.torch.get_device())\n",
"            train_mb_y = train_mb_y.to(train.torch.get_device())\n",
"\n",
"            # Forward\n",
"            logits = model(train_mb_x)\n",
"            # Loss\n",
"            loss = criterion(logits, train_mb_y)\n",
"            # Backward\n",
"            loss.backward()\n",
"            # Update\n",
"            optimizer.step()\n",
"\n",
"            running_loss += loss.item()\n",
"            if session.get_world_rank() == 0 and iteration % 500 == 0:\n",
"                print(f\"loss: {loss.item():>7f}, epoch: {epoch_idx}, iteration: {iteration}\")\n",
"\n",
"        # Checkpoint model after every epoch.\n",
"        state_dict = model.state_dict()\n",
"        consume_prefix_in_state_dict_if_present(state_dict, \"module.\")\n",
"        checkpoint = Checkpoint.from_dict(dict(model=state_dict))\n",
"        session.report({\"loss\": running_loss}, checkpoint=checkpoint)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9HUciluylZbX"
},
"source": [
"## 4b: Define our Preprocessor\n",
"\n",
"Next, we define our `Preprocessor` to preprocess our data before training and prediction. Our preprocessor will normalize the MNIST images by the mean and standard deviation of the MNIST training dataset. This is a common way to improve training on MNIST: https://discuss.pytorch.org/t/normalization-in-the-mnist-example/457"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "yHzQZTlAlY-9"
},
"outputs": [],
"source": [
"from ray.data.preprocessors import BatchMapper\n",
"\n",
"from torchvision import transforms\n",
"\n",
"def preprocess_images(df: pd.DataFrame) -> pd.DataFrame:\n",
"    \"\"\"Preprocess images by scaling each channel in the image.\"\"\"\n",
"\n",
"    torchvision_transforms = transforms.Compose(\n",
"        [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n",
"    )\n",
"\n",
"    df.loc[:, \"image\"] = [\n",
"        torchvision_transforms(image).numpy() for image in df[\"image\"]\n",
"    ]\n",
"    return df\n",
"\n",
"mnist_normalize_preprocessor = BatchMapper(fn=preprocess_images)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Uto3v90Hagni"
},
"source": [
"## 4c: Define logic for Batch/Offline Prediction\n",
"\n",
"After training on each task, we want to use our trained model to do batch (i.e. offline) inference on a test dataset.\n",
"\n",
"To do this, we leverage the built-in `BatchPredictor`. We define a `batch_predict` function that takes in a Checkpoint and a test Dataset and outputs the accuracy our model achieves on the test dataset."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"id": "DM2lFHzFa6uI"
},
"outputs": [],
"source": [
"from ray.train.batch_predictor import BatchPredictor\n",
"from ray.train.torch import TorchPredictor\n",
"\n",
"def batch_predict(checkpoint: ray.air.Checkpoint, test_dataset: ray.data.Dataset) -> float:\n",
"    \"\"\"Perform batch prediction on the provided test dataset, and return accuracy results.\"\"\"\n",
"\n",
"    batch_predictor = BatchPredictor.from_checkpoint(checkpoint, predictor_cls=TorchPredictor, model=SimpleMLP(num_classes=10))\n",
"    model_output = batch_predictor.predict(\n",
"        data=test_dataset, feature_columns=[\"image\"], keep_columns=[\"label\"]\n",
"    )\n",
"\n",
"    # Postprocess model outputs.\n",
"    # Convert the logits output by the model into actual class predictions.\n",
"    def convert_logits_to_classes(df):\n",
"        best_class = df[\"predictions\"].map(lambda x: np.array(x).argmax())\n",
"        df[\"predictions\"] = best_class\n",
"        return df\n",
"\n",
"    prediction_results = model_output.map_batches(convert_logits_to_classes, batch_format=\"pandas\")\n",
"\n",
"    # Then, for each prediction output, see if it matches with the ground truth\n",
"    # label.\n",
"    def calculate_prediction_scores(df):\n",
"        return pd.DataFrame({\"correct\": df[\"predictions\"] == df[\"label\"]})\n",
"\n",
"    correct_dataset = prediction_results.map_batches(\n",
"        calculate_prediction_scores, batch_format=\"pandas\"\n",
"    )\n",
"\n",
"    return correct_dataset.sum(on=\"correct\") / correct_dataset.count()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GWiTtsmVbIZP"
},
"source": [
"## 4d: Define logic for Deploying and Querying our model\n",
"\n",
"In addition to batch inference, we also want to deploy our model so that we can submit live queries to it for online inference. We use Ray Serve's `PredictorDeployment` utility to deploy our trained model.\n",
"\n",
"Once we deploy the model, we can send HTTP requests to our deployment."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "ZC3JCWz7bhR-"
},
"outputs": [],
"source": [
"from typing import List\n",
"import requests\n",
"from requests import Response\n",
"import numpy as np\n",
"\n",
"from ray.serve.http_adapters import NdArray\n",
"\n",
"\n",
"def deploy_model(checkpoint: ray.air.Checkpoint) -> str:\n",
"    \"\"\"Deploys the model from the provided Checkpoint and returns the URL for the endpoint of the model deployment.\"\"\"\n",
"    def json_to_pandas(payload: NdArray) -> pd.DataFrame:\n",
"        \"\"\"Accepts an NdArray JSON from an HTTP body and converts it to a Pandas dataframe.\"\"\"\n",
"        # Have to explicitly convert to float32, since np.array defaults to float64 (double).\n",
"        arr = np.array(payload.array, dtype=np.float32)\n",
"        # We have to specify an image column since our preprocessor requires it.\n",
"        df = pd.DataFrame({\"image\": [arr]})\n",
"        return df\n",
"\n",
"    deployment = PredictorDeployment.options(name=\"mnist_model\", route_prefix=\"/mnist_predict\", version=f\"v{task_idx}\", num_replicas=2)\n",
"    deployment.deploy(\n",
"        batching_params=dict(max_batch_size=10, batch_wait_timeout_s=5),\n",
"        http_adapter=json_to_pandas,\n",
"        predictor_cls=TorchPredictor,\n",
"        checkpoint=checkpoint,\n",
"        model=SimpleMLP(num_classes=10)\n",
"    )\n",
"    return deployment.url\n",
"\n",
"# Function that queries our deployed model\n",
"def query_deployment(test_samples: List[np.ndarray], endpoint_uri: str) -> List[Response]:\n",
"    \"\"\"Given a set of test samples, queries the model deployment at the provided endpoint and returns the results.\"\"\"\n",
"    results = []\n",
"    # Have to convert to a Python list, since NumPy arrays are not JSON serializable.\n",
"    for sample in test_samples:\n",
"        results.append(requests.post(endpoint_uri, json={\"array\": sample.tolist()}))\n",
"    # TODO: Figure out how Serve deals with Pandas DataFrame returned by Predictors.\n",
"    return results"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-NQDj0rFVUX3"
},
"source": [
"# Step 5: Putting it all together\n",
"\n",
"Once we have defined our training logic and our preprocessor, we can put everything together!\n",
"\n",
"For each dataset in our stream, we do the following:\n",
"1. Train on the dataset in a Data Parallel fashion. We create a `TorchTrainer`, specifying the config for the training loop we defined above, the dataset to train on, and how much we want to scale. `TorchTrainer` also accepts a `resume_from_checkpoint` arg to continue training from a previously saved checkpoint.\n",
"2. Get the saved checkpoint from the training run.\n",
"3. Test our trained model on a test set containing test data from all the tasks trained on so far.\n",
"4. After training on each task, deploy our model so we can query it for predictions.\n",
"\n",
"In this example, the training and test data for each task is well-defined beforehand by the benchmark. For real-world scenarios, this probably will not be the case. It is very likely that the prediction requests coming in after training on one task will become the training data for the next task.\n"
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 10,
|
|
"metadata": {
|
|
"colab": {
|
|
"base_uri": "https://localhost:8080/",
|
|
"height": 1000
|
|
},
|
|
"id": "I_OrfQTqNYRk",
|
|
"outputId": "a89da8b8-1acf-4796-cc88-9ee889a32123"
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Read->Map_Batches: 100%|██████████| 1/1 [00:06<00:00, 6.40s/it]\n",
|
|
"Read->Map_Batches: 100%|██████████| 1/1 [00:00<00:00, 2.12it/s]\n",
|
|
"Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:02<00:00, 2.34s/it]\n",
|
|
"Read->Map_Batches: 100%|██████████| 1/1 [00:00<00:00, 2.29it/s]\n",
|
|
"Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:01<00:00, 1.33s/it]\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Starting training for task: 0\n"
|
|
]
|
|
},
|
|
{
|
|
"data": {
|
|
"text/html": [
|
|
"== Status ==<br>Current time: 2022-07-20 21:48:52 (running for 00:00:39.66)<br>Memory usage on this node: 33.1/64.0 GiB<br>Using FIFO scheduling algorithm.<br>Resources requested: 0/16 CPUs, 0/0 GPUs, 0.0/28.14 GiB heap, 0.0/2.0 GiB objects<br>Result logdir: /Users/jiaodong/ray_results/TorchTrainer_2022-07-20_21-48-13<br>Number of trials: 1/1 (1 TERMINATED)<br><table>\n",
|
|
"<thead>\n",
|
|
"<tr><th>Trial name </th><th>status </th><th>loc </th><th style=\"text-align: right;\"> iter</th><th style=\"text-align: right;\"> total time (s)</th><th style=\"text-align: right;\"> loss</th><th style=\"text-align: right;\"> _timestamp</th><th style=\"text-align: right;\"> _time_this_iter_s</th></tr>\n",
|
|
"</thead>\n",
|
|
"<tbody>\n",
|
|
"<tr><td>TorchTrainer_53c58_00000</td><td>TERMINATED</td><td>127.0.0.1:39548</td><td style=\"text-align: right;\"> 4</td><td style=\"text-align: right;\"> 36.4582</td><td style=\"text-align: right;\">824.229</td><td style=\"text-align: right;\"> 1658378932</td><td style=\"text-align: right;\"> 6.46339</td></tr>\n",
|
|
"</tbody>\n",
|
|
"</table><br><br>"
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.HTML object>"
|
|
]
|
|
},
|
|
"metadata": {},
|
|
"output_type": "display_data"
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"2022-07-20 21:48:13,244\tINFO plugin_schema_manager.py:52 -- Loading the default runtime env schemas: ['/Users/jiaodong/Workspace/ray/python/ray/_private/runtime_env/../../runtime_env/schemas/working_dir_schema.json', '/Users/jiaodong/Workspace/ray/python/ray/_private/runtime_env/../../runtime_env/schemas/pip_schema.json'].\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 2.282040, epoch: 0, iteration: 0\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m 2022-07-20 21:48:26,772\tINFO train_loop_utils.py:298 -- Moving model to device: cpu\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 1.521038, epoch: 0, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 1.169452, epoch: 0, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.856338, epoch: 0, iteration: 1500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.788410, epoch: 1, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.854239, epoch: 1, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.533351, epoch: 1, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.591339, epoch: 1, iteration: 1500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.457057, epoch: 2, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.594715, epoch: 2, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.477588, epoch: 2, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.235412, epoch: 2, iteration: 1500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.507374, epoch: 3, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.447128, epoch: 3, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.381943, epoch: 3, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39562)\u001b[0m loss: 0.347877, epoch: 3, iteration: 1500\n",
|
|
"Result for TorchTrainer_53c58_00000:\n",
|
|
" _time_this_iter_s: 6.463389873504639\n",
|
|
" _timestamp: 1658378932\n",
|
|
" _training_iteration: 4\n",
|
|
" date: 2022-07-20_21-48-52\n",
|
|
" done: true\n",
|
|
" experiment_id: abc531ef544440268933d8221addeb9d\n",
|
|
" experiment_tag: '0'\n",
|
|
" hostname: Jiaos-MacBook-Pro-16-inch-2019\n",
|
|
" iterations_since_restore: 4\n",
|
|
" loss: 824.2287287414074\n",
|
|
" node_ip: 127.0.0.1\n",
|
|
" pid: 39548\n",
|
|
" should_checkpoint: true\n",
|
|
" time_since_restore: 36.45815992355347\n",
|
|
" time_this_iter_s: 6.464020013809204\n",
|
|
" time_total_s: 36.45815992355347\n",
|
|
" timestamp: 1658378932\n",
|
|
" timesteps_since_restore: 0\n",
|
|
" training_iteration: 4\n",
|
|
" trial_id: 53c58_00000\n",
|
|
" warmup_time: 0.003597259521484375\n",
|
|
" \n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"2022-07-20 21:48:52,891\tINFO tune.py:738 -- Total run time: 39.80 seconds (39.66 seconds for the tuning loop).\n",
|
|
"Map Progress (1 actors 1 pending): 0%| | 0/1 [00:01<?, ?it/s]\u001b[2m\u001b[36m(BlockWorker pid=39601)\u001b[0m /Users/jiaodong/anaconda3/envs/ray3.7/lib/python3.7/site-packages/torchvision/transforms/functional.py:150: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:178.)\n",
|
|
"Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:03<00:00, 3.01s/it]\n",
|
|
"Map_Batches: 100%|██████████| 1/1 [00:00<00:00, 8.70it/s]\n",
|
|
"Map_Batches: 100%|██████████| 1/1 [00:00<00:00, 76.13it/s]\n",
|
|
"Shuffle Map: 100%|██████████| 1/1 [00:00<00:00, 82.57it/s]\n",
|
|
"Shuffle Reduce: 100%|██████████| 1/1 [00:00<00:00, 134.32it/s]\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Accuracy for task 1: 0.3767\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:48:57,458 controller 39625 checkpoint_path.py:17 - Using RayInternalKVStore for controller checkpoint and recovery.\n",
|
|
"\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:48:57,460 controller 39625 http_state.py:126 - Starting HTTP proxy with name 'SERVE_CONTROLLER_ACTOR:oEzsmU:SERVE_PROXY_ACTOR-db68eafa3bbe9042df574f3c9974b40ce8d97728db90282feefb4690' on node 'db68eafa3bbe9042df574f3c9974b40ce8d97728db90282feefb4690' listening on '127.0.0.1:8000'\n",
|
|
"Shuffle Map: 0%| | 0/1 [00:00<?, ?it/s]\u001b[2m\u001b[36m(HTTPProxyActor pid=39628)\u001b[0m INFO: Started server process [39628]\n",
|
|
"Shuffle Map: 100%|██████████| 1/1 [00:00<00:00, 8.12it/s]\n",
|
|
"Shuffle Reduce: 100%|██████████| 1/1 [00:00<00:00, 5.80it/s]\n",
|
|
"Map Progress (1 actors 0 pending): 100%|██████████| 1/1 [00:01<00:00, 1.16s/it]\n",
|
|
"/Users/jiaodong/anaconda3/envs/ray3.7/lib/python3.7/site-packages/ipykernel_launcher.py:25: UserWarning: From /var/folders/1s/wy6f3ytn3q726p5hl8fw8d780000gn/T/ipykernel_39344/1249059442.py:25: deploy (from ray.serve.deployment) is deprecated and will be removed in a future version Please see https://docs.ray.io/en/latest/serve/index.html\n",
|
|
"\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:49:00,913 controller 39625 deployment_state.py:1281 - Adding 2 replicas to deployment 'mnist_model'.\n",
|
|
"Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:02<00:00, 2.39s/it]\n",
|
|
"Read->Map_Batches: 100%|██████████| 1/1 [00:00<00:00, 2.39it/s]\n",
|
|
"Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:01<00:00, 1.37s/it]\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Starting training for task: 1\n"
|
|
]
|
|
},
|
|
{
|
|
"data": {
|
|
"text/html": [
|
|
"== Status ==<br>Current time: 2022-07-20 21:50:36 (running for 00:00:37.98)<br>Memory usage on this node: 33.7/64.0 GiB<br>Using FIFO scheduling algorithm.<br>Resources requested: 0/16 CPUs, 0/0 GPUs, 0.0/28.14 GiB heap, 0.0/2.0 GiB objects<br>Result logdir: /Users/jiaodong/ray_results/TorchTrainer_2022-07-20_21-49-58<br>Number of trials: 1/1 (1 TERMINATED)<br><table>\n",
|
|
"<thead>\n",
|
|
"<tr><th>Trial name </th><th>status </th><th>loc </th><th style=\"text-align: right;\"> iter</th><th style=\"text-align: right;\"> total time (s)</th><th style=\"text-align: right;\"> loss</th><th style=\"text-align: right;\"> _timestamp</th><th style=\"text-align: right;\"> _time_this_iter_s</th></tr>\n",
|
|
"</thead>\n",
|
|
"<tbody>\n",
|
|
"<tr><td>TorchTrainer_92bcd_00000</td><td>TERMINATED</td><td>127.0.0.1:39736</td><td style=\"text-align: right;\"> 4</td><td style=\"text-align: right;\"> 34.1132</td><td style=\"text-align: right;\">707.634</td><td style=\"text-align: right;\"> 1658379035</td><td style=\"text-align: right;\"> 6.45643</td></tr>\n",
|
|
"</tbody>\n",
|
|
"</table><br><br>"
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.HTML object>"
|
|
]
|
|
},
|
|
"metadata": {},
|
|
"output_type": "display_data"
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(TorchTrainer pid=39736)\u001b[0m 2022-07-20 21:50:01,936\tWARNING base_trainer.py:167 -- When passing `datasets` to a Trainer, it is recommended to reserve at least 20% of node CPUs for Dataset execution by setting `_max_cpu_fraction_per_node = 0.8` in the Trainer `scaling_config`. Not doing so can lead to resource contention or hangs. See https://docs.ray.io/en/master/data/key-concepts.html#example-datasets-in-tune for more info.\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m 2022-07-20 21:50:09,489\tINFO config.py:71 -- Setting up process group for: env:// [rank=0, world_size=1]\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m [W ProcessGroupGloo.cpp:715] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 3.301114, epoch: 0, iteration: 0\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m 2022-07-20 21:50:09,795\tINFO train_loop_utils.py:298 -- Moving model to device: cpu\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m /Users/jiaodong/Workspace/ray/python/ray/air/_internal/torch_utils.py:64: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:178.)\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m return torch.as_tensor(vals, dtype=dtype)\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 1.075076, epoch: 0, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.536976, epoch: 0, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.600182, epoch: 0, iteration: 1500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.546070, epoch: 1, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.448120, epoch: 1, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.392481, epoch: 1, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.371981, epoch: 1, iteration: 1500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.521735, epoch: 2, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.635850, epoch: 2, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.395862, epoch: 2, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.402500, epoch: 2, iteration: 1500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.236922, epoch: 3, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.528482, epoch: 3, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.372242, epoch: 3, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39752)\u001b[0m loss: 0.355759, epoch: 3, iteration: 1500\n",
|
|
"Result for TorchTrainer_92bcd_00000:\n",
|
|
" _time_this_iter_s: 6.456433057785034\n",
|
|
" _timestamp: 1658379035\n",
|
|
" _training_iteration: 4\n",
|
|
" date: 2022-07-20_21-50-36\n",
|
|
" done: true\n",
|
|
" experiment_id: 21820161d0a245428cf75b0b9b17fe6e\n",
|
|
" experiment_tag: '0'\n",
|
|
" hostname: Jiaos-MacBook-Pro-16-inch-2019\n",
|
|
" iterations_since_restore: 4\n",
|
|
" loss: 707.6341038495302\n",
|
|
" node_ip: 127.0.0.1\n",
|
|
" pid: 39736\n",
|
|
" should_checkpoint: true\n",
|
|
" time_since_restore: 34.11321783065796\n",
|
|
" time_this_iter_s: 6.463765859603882\n",
|
|
" time_total_s: 34.11321783065796\n",
|
|
" timestamp: 1658379036\n",
|
|
" timesteps_since_restore: 0\n",
|
|
" training_iteration: 4\n",
|
|
" trial_id: 92bcd_00000\n",
|
|
" warmup_time: 0.005189180374145508\n",
|
|
" \n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"2022-07-20 21:50:36,835\tINFO tune.py:738 -- Total run time: 38.13 seconds (37.98 seconds for the tuning loop).\n",
|
|
"Map Progress (1 actors 1 pending): 0%| | 0/2 [00:01<?, ?it/s]\u001b[2m\u001b[36m(BlockWorker pid=39801)\u001b[0m /Users/jiaodong/anaconda3/envs/ray3.7/lib/python3.7/site-packages/torchvision/transforms/functional.py:150: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:178.)\n",
|
|
"Map Progress (2 actors 1 pending): 100%|██████████| 2/2 [00:03<00:00, 1.96s/it]\n",
|
|
"Map_Batches: 100%|██████████| 2/2 [00:00<00:00, 5.28it/s]\n",
|
|
"Map_Batches: 100%|██████████| 2/2 [00:00<00:00, 114.72it/s]\n",
|
|
"Shuffle Map: 100%|██████████| 2/2 [00:00<00:00, 162.16it/s]\n",
|
|
"Shuffle Reduce: 100%|██████████| 1/1 [00:00<00:00, 140.57it/s]\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Accuracy for task 2: 0.36795\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Shuffle Map: 100%|██████████| 1/1 [00:00<00:00, 6.24it/s]\n",
|
|
"Shuffle Reduce: 100%|██████████| 1/1 [00:00<00:00, 6.19it/s]\n",
|
|
"Map Progress (1 actors 0 pending): 100%|██████████| 1/1 [00:01<00:00, 1.18s/it]\n",
|
|
"\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:50:42,924 controller 39625 deployment_state.py:1240 - Stopping 1 replicas of deployment 'mnist_model' with outdated versions.\n",
|
|
"\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:50:45,044 controller 39625 deployment_state.py:1281 - Adding 1 replicas to deployment 'mnist_model'.\n",
|
|
"\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:50:47,377 controller 39625 deployment_state.py:1240 - Stopping 1 replicas of deployment 'mnist_model' with outdated versions.\n",
|
|
"\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:50:49,504 controller 39625 deployment_state.py:1281 - Adding 1 replicas to deployment 'mnist_model'.\n",
|
|
"Map Progress (2 actors 0 pending): 100%|██████████| 1/1 [00:02<00:00, 2.36s/it]\n",
|
|
"Read->Map_Batches: 100%|██████████| 1/1 [00:00<00:00, 2.04it/s]\n",
|
|
"Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:01<00:00, 1.37s/it]\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Starting training for task: 2\n"
|
|
]
|
|
},
|
|
{
|
|
"data": {
|
|
"text/html": [
|
|
"== Status ==<br>Current time: 2022-07-20 21:52:25 (running for 00:00:37.97)<br>Memory usage on this node: 34.0/64.0 GiB<br>Using FIFO scheduling algorithm.<br>Resources requested: 0/16 CPUs, 0/0 GPUs, 0.0/28.14 GiB heap, 0.0/2.0 GiB objects<br>Result logdir: /Users/jiaodong/ray_results/TorchTrainer_2022-07-20_21-51-47<br>Number of trials: 1/1 (1 TERMINATED)<br><table>\n",
|
|
"<thead>\n",
|
|
"<tr><th>Trial name </th><th>status </th><th>loc </th><th style=\"text-align: right;\"> iter</th><th style=\"text-align: right;\"> total time (s)</th><th style=\"text-align: right;\"> loss</th><th style=\"text-align: right;\"> _timestamp</th><th style=\"text-align: right;\"> _time_this_iter_s</th></tr>\n",
|
|
"</thead>\n",
|
|
"<tbody>\n",
|
|
"<tr><td>TorchTrainer_d37db_00000</td><td>TERMINATED</td><td>127.0.0.1:39948</td><td style=\"text-align: right;\"> 4</td><td style=\"text-align: right;\"> 34.0141</td><td style=\"text-align: right;\">671.998</td><td style=\"text-align: right;\"> 1658379144</td><td style=\"text-align: right;\"> 6.59292</td></tr>\n",
|
|
"</tbody>\n",
|
|
"</table><br><br>"
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.HTML object>"
|
|
]
|
|
},
|
|
"metadata": {},
|
|
"output_type": "display_data"
|
|
},
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(TorchTrainer pid=39948)\u001b[0m 2022-07-20 21:51:50,596\tWARNING base_trainer.py:167 -- When passing `datasets` to a Trainer, it is recommended to reserve at least 20% of node CPUs for Dataset execution by setting `_max_cpu_fraction_per_node = 0.8` in the Trainer `scaling_config`. Not doing so can lead to resource contention or hangs. See https://docs.ray.io/en/master/data/key-concepts.html#example-datasets-in-tune for more info.\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m 2022-07-20 21:51:58,118\tINFO config.py:71 -- Setting up process group for: env:// [rank=0, world_size=1]\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m [W ProcessGroupGloo.cpp:715] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m 2022-07-20 21:51:58,367\tINFO train_loop_utils.py:298 -- Moving model to device: cpu\n"
|
|
]
|
|
},
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 4.062408, epoch: 0, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.970063, epoch: 0, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.658269, epoch: 0, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.442650, epoch: 0, iteration: 1500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.603212, epoch: 1, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.534739, epoch: 1, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.420072, epoch: 1, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.351545, epoch: 1, iteration: 1500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.347010, epoch: 2, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.419703, epoch: 2, iteration: 500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.350773, epoch: 2, iteration: 1000\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.231652, epoch: 2, iteration: 1500\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.343125, epoch: 3, iteration: 0\n",
|
|
"\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.547853, epoch: 3, iteration: 500\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.353915, epoch: 3, iteration: 1000\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=39968)\u001b[0m loss: 0.260028, epoch: 3, iteration: 1500\n",
    "Result for TorchTrainer_d37db_00000:\n",
    "  _time_this_iter_s: 6.5929179191589355\n",
    "  _timestamp: 1658379144\n",
    "  _training_iteration: 4\n",
    "  date: 2022-07-20_21-52-24\n",
    "  done: true\n",
    "  experiment_id: 5d41bf13ba524c528faac8f64b13c7cc\n",
    "  experiment_tag: '0'\n",
    "  hostname: Jiaos-MacBook-Pro-16-inch-2019\n",
    "  iterations_since_restore: 4\n",
    "  loss: 671.9976235236973\n",
    "  node_ip: 127.0.0.1\n",
    "  pid: 39948\n",
    "  should_checkpoint: true\n",
    "  time_since_restore: 34.01405596733093\n",
    "  time_this_iter_s: 6.590774774551392\n",
    "  time_total_s: 34.01405596733093\n",
    "  timestamp: 1658379144\n",
    "  timesteps_since_restore: 0\n",
    "  training_iteration: 4\n",
    "  trial_id: d37db_00000\n",
    "  warmup_time: 0.005116939544677734\n",
    "  \n"
   ]
  },
  {
   "name": "stderr",
   "output_type": "stream",
   "text": [
    "2022-07-20 21:52:25,471\tINFO tune.py:738 -- Total run time: 38.13 seconds (37.97 seconds for the tuning loop).\n",
    "Map Progress (1 actors 1 pending):   0%|          | 0/3 [00:01<?, ?it/s]\u001b[2m\u001b[36m(BlockWorker pid=40038)\u001b[0m /Users/jiaodong/anaconda3/envs/ray3.7/lib/python3.7/site-packages/torchvision/transforms/functional.py:150: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:178.)\n",
    "Map Progress (2 actors 1 pending): 100%|██████████| 3/3 [00:04<00:00,  1.62s/it]\n",
    "Map_Batches: 100%|██████████| 3/3 [00:00<00:00,  7.77it/s]\n",
    "Map_Batches: 100%|██████████| 3/3 [00:00<00:00, 136.51it/s]\n",
    "Shuffle Map: 100%|██████████| 3/3 [00:00<00:00, 216.98it/s]\n",
    "Shuffle Reduce: 100%|██████████| 1/1 [00:00<00:00, 135.98it/s]\n"
   ]
  },
  {
   "name": "stdout",
   "output_type": "stream",
   "text": [
    "Accuracy for task 3: 0.3590333333333333\n"
   ]
  },
  {
   "name": "stderr",
   "output_type": "stream",
   "text": [
    "Shuffle Map: 100%|██████████| 1/1 [00:00<00:00,  6.01it/s]\n",
    "Shuffle Reduce: 100%|██████████| 1/1 [00:00<00:00,  6.26it/s]\n",
    "Map Progress (1 actors 0 pending): 100%|██████████| 1/1 [00:01<00:00,  1.17s/it]\n",
    "\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:52:32,498 controller 39625 deployment_state.py:1240 - Stopping 1 replicas of deployment 'mnist_model' with outdated versions.\n",
    "\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:52:34,634 controller 39625 deployment_state.py:1281 - Adding 1 replicas to deployment 'mnist_model'.\n",
    "\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:52:36,956 controller 39625 deployment_state.py:1240 - Stopping 1 replicas of deployment 'mnist_model' with outdated versions.\n",
    "\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:52:39,078 controller 39625 deployment_state.py:1281 - Adding 1 replicas to deployment 'mnist_model'.\n",
    "\u001b[2m\u001b[36m(ServeController pid=39625)\u001b[0m INFO 2022-07-20 21:53:31,642 controller 39625 deployment_state.py:1304 - Removing 2 replicas from deployment 'mnist_model'.\n"
   ]
  }
 ],
 "source": [
  "from ray.train.torch import TorchTrainer\n",
  "from ray.air.config import ScalingConfig\n",
  "from ray.train.torch import TorchPredictor\n",
  "from ray import serve\n",
  "from ray.serve import PredictorDeployment\n",
  "from ray.serve.http_adapters import json_to_ndarray\n",
  "\n",
  "# The number of tasks (i.e. datasets in our stream) that we want to use for this example.\n",
  "n_tasks = 3\n",
  "\n",
  "# Number of epochs to train each task for.\n",
  "num_epochs = 4\n",
  "# Batch size.\n",
  "batch_size = 32\n",
  "# Optimizer args.\n",
  "learning_rate = 0.001\n",
  "momentum = 0.9\n",
  "\n",
  "# Number of data parallel workers to use for training.\n",
  "num_workers = 1\n",
  "# Whether to use GPU or not.\n",
  "use_gpu = ray.available_resources().get(\"GPU\", 0) > 0\n",
  "\n",
  "permuted_mnist = PermutedMNISTStream(n_tasks=n_tasks)\n",
  "train_stream = permuted_mnist.generate_train_stream()\n",
  "test_stream = permuted_mnist.generate_test_stream()\n",
"\n",
  "# Checkpoint of the most recently trained task; each new task resumes from it,\n",
  "# so the model is fine-tuned task by task (naive fine-tuning).\n",
  "latest_checkpoint = None\n",
  "\n",
  "accuracy_for_all_tasks = []\n",
  "task_idx = 0\n",
  "all_test_datasets_seen_so_far = []\n",
  "for train_dataset, test_dataset in zip(train_stream, test_stream):\n",
  "    print(f\"Starting training for task: {task_idx}\")\n",
  "    task_idx += 1\n",
  "    # Note: task_idx is now 1-indexed; the accuracy report below uses it.\n",
  "\n",
  "    # *********Training*****************\n",
  "\n",
  "    trainer = TorchTrainer(\n",
  "        train_loop_per_worker=train_loop_per_worker,\n",
  "        train_loop_config={\n",
  "            \"num_epochs\": num_epochs,\n",
  "            \"learning_rate\": learning_rate,\n",
  "            \"momentum\": momentum,\n",
  "            \"batch_size\": batch_size,\n",
  "        },\n",
  "        # Set trainer_resources to 0 CPUs so that the example also runs on Colab's limited resources.\n",
  "        scaling_config=ScalingConfig(num_workers=num_workers, use_gpu=use_gpu, trainer_resources={\"CPU\": 0}),\n",
  "        datasets={\"train\": train_dataset},\n",
  "        preprocessor=BatchMapper(fn=preprocess_images),\n",
  "        # Resume from the previous task's checkpoint (None for the first task).\n",
  "        resume_from_checkpoint=latest_checkpoint,\n",
  "    )\n",
  "    result = trainer.fit()\n",
  "    latest_checkpoint = result.checkpoint\n",
  "\n",
  "    # **************Batch Prediction**************************\n",
  "\n",
  "    # We can do batch prediction on the test data for the tasks seen so far.\n",
  "    # TODO: Fix type signature in Ray Datasets\n",
  "    # TODO: Fix dataset.union when used with empty list.\n",
  "    if len(all_test_datasets_seen_so_far) > 0:\n",
  "        full_test_dataset = test_dataset.union(*all_test_datasets_seen_so_far)\n",
  "    else:\n",
  "        full_test_dataset = test_dataset\n",
  "\n",
  "    all_test_datasets_seen_so_far.append(test_dataset)\n",
  "\n",
  "    accuracy_for_this_task = batch_predict(latest_checkpoint, full_test_dataset)\n",
  "    print(f\"Accuracy for task {task_idx}: {accuracy_for_this_task}\")\n",
  "    accuracy_for_all_tasks.append(accuracy_for_this_task)\n",
  "\n",
  "    # *************Model Deployment & Online Inference***************************\n",
  "\n",
  "    # We can also deploy our model to do online inference with Ray Serve.\n",
  "    # Start Ray Serve.\n",
  "    serve.start()\n",
  "    test_samples = permuted_mnist.generate_test_samples()\n",
  "    endpoint_uri = deploy_model(latest_checkpoint)\n",
  "    online_inference_results = query_deployment(test_samples, endpoint_uri)\n",
  "\n",
  "    if ray.available_resources().get(\"CPU\", 0) < num_workers + 1:\n",
  "        # If there are no more CPUs left, then shut down the Serve replicas so we can continue training on the next task.\n",
  "        serve.shutdown()\n",
  "\n",
  "serve.shutdown()"
 ]
},
{
 "cell_type": "markdown",
 "metadata": {
  "id": "ORWpRkPjcPbD"
 },
 "source": [
  "Now that we have finished all of our training, let's look at the accuracy of our model after training on each task.\n",
  "\n",
  "We should see the accuracy decrease over time. This is to be expected, since we are using just a naive fine-tuning strategy, which leaves the model prone to catastrophic forgetting.\n",
  "\n",
  "As we increase the number of tasks, the model's performance on all the tasks trained on so far should decrease; the short plotting cell below visualizes this."
 ]
},
{
 "cell_type": "code",
 "execution_count": 11,
 "metadata": {
  "colab": {
   "base_uri": "https://localhost:8080/"
  },
  "id": "thpeB0KGmr99",
  "outputId": "59fdbb6d-eaf4-4c2a-d350-5ff6b48e96a3"
 },
 "outputs": [
  {
   "data": {
    "text/plain": [
     "[0.3767, 0.36795, 0.3590333333333333]"
    ]
   },
   "execution_count": 11,
   "metadata": {},
   "output_type": "execute_result"
  }
 ],
 "source": [
  "accuracy_for_all_tasks"
 ]
},
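{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "To make the trend easier to see, here is a minimal plotting sketch. It assumes `matplotlib` is installed in your environment; it is not required by the rest of this notebook."
 ]
},
{
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
  "import matplotlib.pyplot as plt\n",
  "\n",
  "# Each entry is the accuracy over all test data seen so far, so a downward\n",
  "# trend indicates catastrophic forgetting under naive fine-tuning.\n",
  "plt.plot(range(1, len(accuracy_for_all_tasks) + 1), accuracy_for_all_tasks, marker=\"o\")\n",
  "plt.xlabel(\"Number of tasks trained on\")\n",
  "plt.ylabel(\"Accuracy on all tasks seen so far\")\n",
  "plt.show()"
 ]
},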
{
 "cell_type": "markdown",
 "metadata": {
  "id": "xLLAvsTk8LoV"
 },
 "source": [
  "# [Optional] Step 6: Compare against full training.\n",
  "\n",
  "We have now incrementally trained our simple multi-layer perceptron. Let's compare this incrementally trained (fine-tuned) model against a model that is trained on all the tasks up front.\n",
  "\n",
  "Since we are using a naive fine-tuning strategy, we should expect our incrementally trained model to perform worse than the one that is fully trained! However, there are various other strategies, developed and actively being researched, that improve accuracy for incremental training. Overall, incremental/continual learning lets you train in many real-world settings where the entire dataset is not available up front, but new data arrives at a relatively high rate."
 ]
},
{
 "cell_type": "markdown",
 "metadata": {
  "id": "RNHsEVBHc0p2"
 },
 "source": [
  "Let's first combine the datasets for each task into a single, unified Dataset."
 ]
},
{
 "cell_type": "code",
 "execution_count": 12,
 "metadata": {
  "colab": {
   "base_uri": "https://localhost:8080/"
  },
  "id": "pU2fVH068lfF",
  "outputId": "fd6a3b56-dda1-4fa6-cebd-d0ee8784e698"
 },
 "outputs": [
  {
   "name": "stderr",
   "output_type": "stream",
   "text": [
    "Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:02<00:00,  2.33s/it]\n",
    "Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:02<00:00,  2.32s/it]\n",
    "Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:02<00:00,  2.31s/it]\n",
    "Shuffle Map: 100%|██████████| 3/3 [00:01<00:00,  2.55it/s]\n",
    "Shuffle Reduce: 100%|██████████| 3/3 [00:01<00:00,  2.55it/s]\n"
   ]
  }
 ],
 "source": [
  "train_stream = permuted_mnist.generate_train_stream()\n",
  "\n",
  "# Collect all datasets in the stream into a single dataset.\n",
  "all_training_datasets = []\n",
  "for train_dataset in train_stream:\n",
  "    all_training_datasets.append(train_dataset)\n",
  "combined_training_dataset = all_training_datasets[0].union(*all_training_datasets[1:])\n",
  "\n",
  "# Shuffle so that examples from different tasks are interleaved during training.\n",
  "combined_training_dataset = combined_training_dataset.random_shuffle()"
 ]
},
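{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "As a quick sanity check (a sketch; `combined_training_dataset` and `n_tasks` are defined in the cells above), the combined dataset should contain the records of all `n_tasks` tasks:"
 ]
},
{
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
  "# The union should hold n_tasks times as many records as a single task's dataset.\n",
  "print(combined_training_dataset.count())"
 ]
},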
{
 "cell_type": "markdown",
 "metadata": {
  "id": "tJ6Oqdgvc5dn"
 },
 "source": [
  "Then, we train a new model on the unified Dataset using the same configurations as before."
 ]
},
{
 "cell_type": "code",
 "execution_count": 13,
 "metadata": {
  "colab": {
   "base_uri": "https://localhost:8080/",
   "height": 1000
  },
  "id": "PmH9c0-z9KME",
  "outputId": "653b4dfc-ed47-4307-fa84-e4c4ea3ec354"
 },
 "outputs": [
  {
   "name": "stderr",
   "output_type": "stream",
   "text": [
    "2022-07-20 21:53:44,223\tWARNING base_trainer.py:167 -- When passing `datasets` to a Trainer, it is recommended to reserve at least 20% of node CPUs for Dataset execution by setting `_max_cpu_fraction_per_node = 0.8` in the Trainer `scaling_config`. Not doing so can lead to resource contention or hangs. See https://docs.ray.io/en/master/data/key-concepts.html#example-datasets-in-tune for more info.\n"
   ]
  },
  {
   "data": {
    "text/html": [
     "== Status ==<br>Current time: 2022-07-20 21:55:10 (running for 00:01:25.89)<br>Memory usage on this node: 34.4/64.0 GiB<br>Using FIFO scheduling algorithm.<br>Resources requested: 0/16 CPUs, 0/0 GPUs, 0.0/28.14 GiB heap, 0.0/2.0 GiB objects<br>Result logdir: /Users/jiaodong/ray_results/TorchTrainer_2022-07-20_21-53-44<br>Number of trials: 1/1 (1 TERMINATED)<br><table>\n",
     "<thead>\n",
     "<tr><th>Trial name </th><th>status </th><th>loc </th><th style=\"text-align: right;\"> iter</th><th style=\"text-align: right;\"> total time (s)</th><th style=\"text-align: right;\"> loss</th><th style=\"text-align: right;\"> _timestamp</th><th style=\"text-align: right;\"> _time_this_iter_s</th></tr>\n",
     "</thead>\n",
     "<tbody>\n",
     "<tr><td>TorchTrainer_1923b_00000</td><td>TERMINATED</td><td>127.0.0.1:40228</td><td style=\"text-align: right;\"> 4</td><td style=\"text-align: right;\"> 82.7285</td><td style=\"text-align: right;\">2328.8</td><td style=\"text-align: right;\"> 1658379309</td><td style=\"text-align: right;\"> 17.0239</td></tr>\n",
     "</tbody>\n",
     "</table><br><br>"
    ],
    "text/plain": [
     "<IPython.core.display.HTML object>"
    ]
   },
   "metadata": {},
   "output_type": "display_data"
  },
  {
   "name": "stderr",
   "output_type": "stream",
   "text": [
    "\u001b[2m\u001b[36m(TorchTrainer pid=40228)\u001b[0m 2022-07-20 21:53:47,328\tWARNING base_trainer.py:167 -- When passing `datasets` to a Trainer, it is recommended to reserve at least 20% of node CPUs for Dataset execution by setting `_max_cpu_fraction_per_node = 0.8` in the Trainer `scaling_config`. Not doing so can lead to resource contention or hangs. See https://docs.ray.io/en/master/data/key-concepts.html#example-datasets-in-tune for more info.\n"
   ]
  },
  {
   "name": "stdout",
   "output_type": "stream",
   "text": [
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 2.305423, epoch: 0, iteration: 0\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 1.935424, epoch: 0, iteration: 500\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 1.174222, epoch: 0, iteration: 5000\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.776577, epoch: 0, iteration: 5500\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.674814, epoch: 1, iteration: 0\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.699747, epoch: 1, iteration: 500\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.795673, epoch: 1, iteration: 5000\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.651217, epoch: 1, iteration: 5500\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.743072, epoch: 2, iteration: 0\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.745054, epoch: 2, iteration: 500\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.639829, epoch: 2, iteration: 5000\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.682482, epoch: 2, iteration: 5500\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.553197, epoch: 3, iteration: 0\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.471037, epoch: 3, iteration: 500\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.538055, epoch: 3, iteration: 5000\n",
    "\u001b[2m\u001b[36m(RayTrainWorker pid=40276)\u001b[0m loss: 0.534079, epoch: 3, iteration: 5500\n",
    "Result for TorchTrainer_1923b_00000:\n",
    "  _time_this_iter_s: 17.023871898651123\n",
    "  _timestamp: 1658379309\n",
    "  _training_iteration: 4\n",
    "  date: 2022-07-20_21-55-10\n",
    "  done: true\n",
    "  experiment_id: d304983bfe3f4e269118f8618aa9b02f\n",
    "  experiment_tag: '0'\n",
    "  hostname: Jiaos-MacBook-Pro-16-inch-2019\n",
    "  iterations_since_restore: 4\n",
    "  loss: 2328.8038033917546\n",
    "  node_ip: 127.0.0.1\n",
    "  pid: 40228\n",
    "  should_checkpoint: true\n",
    "  time_since_restore: 82.72845268249512\n",
    "  time_this_iter_s: 17.024354696273804\n",
    "  time_total_s: 82.72845268249512\n",
    "  timestamp: 1658379310\n",
    "  timesteps_since_restore: 0\n",
    "  training_iteration: 4\n",
    "  trial_id: 1923b_00000\n",
    "  warmup_time: 0.004433870315551758\n",
    "  \n"
   ]
  },
  {
   "name": "stderr",
   "output_type": "stream",
   "text": [
    "2022-07-20 21:55:10,233\tINFO tune.py:738 -- Total run time: 86.00 seconds (85.88 seconds for the tuning loop).\n"
   ]
  }
 ],
 "source": [
"# Now we do training with the same configurations as before\n",
  "trainer = TorchTrainer(\n",
  "    train_loop_per_worker=train_loop_per_worker,\n",
  "    train_loop_config={\n",
  "        \"num_epochs\": num_epochs,\n",
  "        \"learning_rate\": learning_rate,\n",
  "        \"momentum\": momentum,\n",
  "        \"batch_size\": batch_size,\n",
  "    },\n",
  "    # Set trainer_resources to 0 CPUs so that the example also runs on Colab's limited resources.\n",
  "    scaling_config=ScalingConfig(num_workers=num_workers, use_gpu=use_gpu, trainer_resources={\"CPU\": 0}),\n",
  "    datasets={\"train\": combined_training_dataset},\n",
  "    preprocessor=BatchMapper(fn=preprocess_images),\n",
  ")\n",
  "result = trainer.fit()\n",
  "full_training_checkpoint = result.checkpoint"
 ]
},
{
 "cell_type": "markdown",
 "metadata": {
  "id": "jLaOcmBddRqB"
 },
 "source": [
  "Then, let's test the model that was trained on all the tasks up front."
 ]
},
{
 "cell_type": "code",
 "execution_count": 14,
 "metadata": {
  "colab": {
   "base_uri": "https://localhost:8080/"
  },
  "id": "WC7zV_Cw9TAi",
  "outputId": "12a86f2b-be90-47b6-e252-25e3199689f9"
 },
 "outputs": [
  {
   "name": "stderr",
   "output_type": "stream",
   "text": [
    "Map Progress (1 actors 1 pending):   0%|          | 0/3 [00:01<?, ?it/s]\u001b[2m\u001b[36m(BlockWorker pid=40400)\u001b[0m /Users/jiaodong/anaconda3/envs/ray3.7/lib/python3.7/site-packages/torchvision/transforms/functional.py:150: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:178.)\n",
    "Map Progress (2 actors 1 pending): 100%|██████████| 3/3 [00:04<00:00,  1.62s/it]\n",
    "Map_Batches: 100%|██████████| 3/3 [00:00<00:00, 63.30it/s]\n",
    "Map_Batches: 100%|██████████| 3/3 [00:00<00:00, 129.65it/s]\n",
    "Shuffle Map: 100%|██████████| 3/3 [00:00<00:00, 312.18it/s]\n",
    "Shuffle Reduce: 100%|██████████| 1/1 [00:00<00:00, 149.25it/s]\n"
   ]
  }
 ],
 "source": [
  "# Now we use the fully trained model to do batch prediction on the entire test set.\n",
  "\n",
  "# `full_test_dataset` from the incremental loop above already contains the union of all the test datasets.\n",
  "fully_trained_accuracy = batch_predict(full_training_checkpoint, full_test_dataset)"
 ]
},
{
 "cell_type": "markdown",
 "metadata": {
  "id": "Pn5LJ4CUdZgI"
 },
 "source": [
  "Finally, let's compare the accuracy of the incrementally trained model against that of the fully trained model. We should see that the fully trained model performs better."
 ]
},
{
 "cell_type": "code",
 "execution_count": 15,
 "metadata": {
  "colab": {
   "base_uri": "https://localhost:8080/"
  },
  "id": "UFhRf_8e-vgA",
  "outputId": "056ff06f-ff87-4f3a-d740-4cc556bde3dd"
 },
 "outputs": [
  {
   "name": "stdout",
   "output_type": "stream",
   "text": [
    "Fully trained model accuracy: 0.38016666666666665\n",
    "Incrementally trained model accuracy: 0.3590333333333333\n"
   ]
  }
 ],
 "source": [
  "print(\"Fully trained model accuracy: \", fully_trained_accuracy)\n",
  "print(\"Incrementally trained model accuracy: \", accuracy_for_all_tasks[-1])"
 ]
},
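{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "As a small follow-up sketch (both variables are defined in the cells above), we can also print the absolute accuracy gap between the two models:"
 ]
},
{
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
  "# Difference between full training and naive incremental fine-tuning,\n",
  "# measured on the same combined test set.\n",
  "print(f\"Accuracy gap: {fully_trained_accuracy - accuracy_for_all_tasks[-1]:.4f}\")"
 ]
},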
{
 "cell_type": "markdown",
 "metadata": {
  "id": "FuqKePrYe-Fz"
 },
 "source": [
  "# Next Steps\n",
  "\n",
  "Once you've completed this notebook, you should be set to play around with scalable incremental training using Ray, either by trying fancier incremental learning algorithms than naive fine-tuning, or by attempting to scale out to larger datasets!\n",
  "\n",
  "If you run into any issues, or have any feature requests, please file an issue on the [Ray Github](https://github.com/ray-project/ray/issues)."
 ]
}
],
"metadata": {
 "accelerator": "GPU",
 "colab": {
  "collapsed_sections": [],
  "name": "ray_air_incremental_learning (1).ipynb",
  "provenance": []
 },
 "kernelspec": {
  "display_name": "Python 3.7.10 ('ray3.7')",
  "language": "python",
  "name": "python3"
 },
 "language_info": {
  "codemirror_mode": {
   "name": "ipython",
   "version": 3
  },
  "file_extension": ".py",
  "mimetype": "text/x-python",
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
  "version": "3.7.10"
 },
 "vscode": {
  "interpreter": {
   "hash": "99d89bfe98f3aa2d7facda0d08d31ff2a0af9559e5330d719288ce64a1966273"
  }
 }
},
"nbformat": 4,
"nbformat_minor": 1
}