From 07fdfaae08713e39907850af02459a3b72e4fc42 Mon Sep 17 00:00:00 2001 From: Muhammad Ammar Nabil Date: Sun, 20 Aug 2023 16:42:13 +0700 Subject: [PATCH] Fix lint error --- .../Week 3/C1W3_Assignment.ipynb | 654 +++++++++--------- 1 file changed, 330 insertions(+), 324 deletions(-) diff --git a/Custom Models, Layers, Loss Functions with Tensorflow/Week 3/C1W3_Assignment.ipynb b/Custom Models, Layers, Loss Functions with Tensorflow/Week 3/C1W3_Assignment.ipynb index c18c17e..838b6b2 100644 --- a/Custom Models, Layers, Loss Functions with Tensorflow/Week 3/C1W3_Assignment.ipynb +++ b/Custom Models, Layers, Loss Functions with Tensorflow/Week 3/C1W3_Assignment.ipynb @@ -1,334 +1,340 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "view-in-github", - "colab_type": "text" - }, - "source": [ - "\"Open" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "84vubIXbG5oa" - }, - "source": [ - "# Week 3 Assignment: Implement a Quadratic Layer" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Qh0Kq9K5G5oq" - }, - "source": [ - "In this week's programming exercise, you will build a custom quadratic layer which computes $y = ax^2 + bx + c$. Similar to the ungraded lab, this layer will be plugged into a model that will be trained on the MNIST dataset. Let's get started!" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ys-i_z_sG5ox" - }, - "source": [ - "### Imports" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": { - "id": "oUTnsS4KG5o4" - }, - "outputs": [], - "source": [ - "import numpy as np\n", - "import tensorflow as tf\n", - "from tensorflow.keras.layers import Layer\n", - "\n", - "import utils;" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "H9CAVtKxG5pC" - }, - "source": [ - "### Define the quadratic layer (TODO)\n", - "Implement a simple quadratic layer. It has 3 state variables: $a$, $b$ and $c$. The computation returned is $ax^2 + bx + c$. Make sure it can also accept an activation function.\n", - "\n", - "#### `__init__`\n", - "- call `super(my_fun, self)` to access the base class of `my_fun`, and call the `__init__()` function to initialize that base class. In this case, `my_fun` is `SimpleQuadratic` and its base class is `Layer`.\n", - "- self.units: set this using one of the function parameters.\n", - "- self.activation: The function parameter `activation` will be passed in as a string. To get the tensorflow object associated with the string, please use `tf.keras.activations.get()`\n", - "\n", - "\n", - "#### `build`\n", - "The following are suggested steps for writing your code. If you prefer to use fewer lines to implement it, feel free to do so. Either way, you'll want to set `self.a`, `self.b` and `self.c`.\n", - "\n", - "- a_init: set this to tensorflow's `random_normal_initializer()`\n", - "- a_init_val: Use the `random_normal_initializer()` that you just created and invoke it, setting the `shape` and `dtype`.\n", - " - The `shape` of `a` should have its row dimension equal to the last dimension of `input_shape`, and its column dimension equal to the number of units in the layer. 
\n", - " - This is because you'll be matrix multiplying x^2 * a, so the dimensions should be compatible.\n", - " - set the dtype to 'float32'\n", - "- self.a: create a tensor using tf.Variable, setting the initial_value and set trainable to True.\n", - "\n", - "- b_init, b_init_val, and self.b: these will be set in the same way that you implemented a_init, a_init_val and self.a\n", - "- c_init: set this to `tf.zeros_initializer`.\n", - "- c_init_val: Set this by calling the tf.zeros_initializer that you just instantiated, and set the `shape` and `dtype`\n", - " - shape: This will be a vector equal to the number of units. This expects a tuple, and remember that a tuple `(9,)` includes a comma.\n", - " - dtype: set to 'float32'.\n", - "- self.c: create a tensor using tf.Variable, and set the parameters `initial_value` and `trainable`.\n", - "\n", - "#### `call`\n", - "The following section performs the multiplication x^2*a + x*b + c. The steps are broken down for clarity, but you can also perform this calculation in fewer lines if you prefer.\n", - "- x_squared: use tf.math.square()\n", - "- x_squared_times_a: use tf.matmul(). \n", - " - If you see an error saying `InvalidArgumentError: Matrix size-incompatible`, please check the order of the matrix multiplication to make sure that the matrix dimensions line up.\n", - "- x_times_b: use tf.matmul().\n", - "- x2a_plus_xb_plus_c: add the three terms together.\n", - "- activated_x2a_plus_xb_plus_c: apply the class's `activation` to the sum of the three terms.\n" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": { - "deletable": false, - "id": "Ga20PttZFXm4", - "nbgrader": { - "cell_type": "code", - "checksum": "687a4c4733c3c581631c2b476104b829", - "grade": false, - "grade_id": "cell-c302ddc177c098f8", - "locked": false, - "schema_version": 3, - "solution": true, - "task": false - } - }, - "outputs": [], - "source": [ - "# Please uncomment all lines in this cell and replace those marked with `# YOUR CODE HERE`.\n", - "# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.\n", - "\n", - "from typing import Optional, Callable\n", - "\n", - "\n", - "class SimpleQuadratic(Layer):\n", - " def __init__(\n", - " self, units: int = 32, activation: Optional[str | Callable] = None\n", - " ) -> None:\n", - " \"\"\"Initializes the class and sets up the internal variables\"\"\"\n", - " super(SimpleQuadratic, self).__init__()\n", - " self.units: int = units\n", - " self.activation: Callable = tf.keras.activations.get(activation)\n", - "\n", - " def build(self, input_shape: tf.TensorShape) -> None:\n", - " \"\"\"Create the state of the layer (weights)\"\"\"\n", - " # a and b should be initialized with random normal, c (or the bias) with zeros.\n", - " # remember to set these as trainable.\n", - " a_init: tf.random_normal_initializer = tf.random_normal_initializer()\n", - " self.a: tf.Variable = tf.Variable(\n", - " initial_value=a_init(\n", - " shape=(input_shape[-1], self.units),\n", - " dtype=tf.dtypes.float32,\n", - " ),\n", - " trainable=True,\n", - " name=\"kernel_a\",\n", - " )\n", - " b_init: tf.random_normal_initializer = tf.random_normal_initializer()\n", - " self.b: tf.Variable = tf.Variable(\n", - " initial_value=b_init(\n", - " shape=(input_shape[-1], self.units),\n", - " dtype=tf.dtypes.float32,\n", - " ),\n", - " trainable=True,\n", - " name=\"kernel_b\",\n", - " )\n", - " c_init: tf.zeros_initializer = 
tf.zeros_initializer()\n", - " self.c: tf.Variable = tf.Variable(\n", - " initial_value=c_init(\n", - " shape=(self.units,),\n", - " dtype=tf.dtypes.float32,\n", - " ),\n", - " trainable=True,\n", - " name=\"bias\",\n", - " )\n", - "\n", - " def call(self, inputs: tf.Tensor) -> tf.Tensor:\n", - " \"\"\"Defines the computation from inputs to outputs\"\"\"\n", - " # Remember to use self.activation() to get the final output\n", - " a_x2: tf.Tensor = tf.matmul(a=tf.math.square(inputs), b=self.a)\n", - " b_x: tf.Tensor = tf.matmul(a=inputs, b=self.b)\n", - " logits: tf.Tensor = tf.add(x=a_x2, y=b_x) + self.c\n", - " return self.activation(logits);" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "r3QbovCjG5pP" - }, - "source": [ - "Test your implementation" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": { - "deletable": false, - "editable": false, - "nbgrader": { - "cell_type": "code", - "checksum": "0965bec4878a263cf06b286cd0fe3b2a", - "grade": true, - "grade_id": "cell-c3ebc4cccbb7f454", - "locked": true, - "points": 1, - "schema_version": 3, - "solution": false, - "task": false + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "view-in-github", + "colab_type": "text" + }, + "source": [ + "\"Open" + ] }, - "colab": { - "base_uri": "https://localhost:8080/" + { + "cell_type": "markdown", + "metadata": { + "id": "84vubIXbG5oa" + }, + "source": [ + "# Week 3 Assignment: Implement a Quadratic Layer" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Qh0Kq9K5G5oq" + }, + "source": [ + "In this week's programming exercise, you will build a custom quadratic layer which computes $y = ax^2 + bx + c$. Similar to the ungraded lab, this layer will be plugged into a model that will be trained on the MNIST dataset. Let's get started!" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ys-i_z_sG5ox" + }, + "source": [ + "### Imports" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "id": "oUTnsS4KG5o4" + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "import tensorflow as tf\n", + "from tensorflow.keras.layers import Layer\n", + "\n", + "import utils" + ] }, - "id": "v3krkriNG5pT", - "outputId": "0299254b-acaa-47f2-9a0f-69bb365a9aa0" - }, - "outputs": [ { - "output_type": "stream", - "name": "stdout", - "text": [ - "\u001b[92m All public tests passed\n" - ] + "cell_type": "markdown", + "metadata": { + "id": "H9CAVtKxG5pC" + }, + "source": [ + "### Define the quadratic layer (TODO)\n", + "Implement a simple quadratic layer. It has 3 state variables: $a$, $b$ and $c$. The computation returned is $ax^2 + bx + c$. Make sure it can also accept an activation function.\n", + "\n", + "#### `__init__`\n", + "- call `super(my_fun, self)` to access the base class of `my_fun`, and call the `__init__()` function to initialize that base class. In this case, `my_fun` is `SimpleQuadratic` and its base class is `Layer`.\n", + "- self.units: set this using one of the function parameters.\n", + "- self.activation: The function parameter `activation` will be passed in as a string. To get the tensorflow object associated with the string, please use `tf.keras.activations.get()`\n", + "\n", + "\n", + "#### `build`\n", + "The following are suggested steps for writing your code. If you prefer to use fewer lines to implement it, feel free to do so. 
Either way, you'll want to set `self.a`, `self.b` and `self.c`.\n", + "\n", + "- a_init: set this to tensorflow's `random_normal_initializer()`\n", + "- a_init_val: Use the `random_normal_initializer()` that you just created and invoke it, setting the `shape` and `dtype`.\n", + " - The `shape` of `a` should have its row dimension equal to the last dimension of `input_shape`, and its column dimension equal to the number of units in the layer. \n", + " - This is because you'll be matrix multiplying x^2 * a, so the dimensions should be compatible.\n", + " - set the dtype to 'float32'\n", + "- self.a: create a tensor using tf.Variable, setting the initial_value and set trainable to True.\n", + "\n", + "- b_init, b_init_val, and self.b: these will be set in the same way that you implemented a_init, a_init_val and self.a\n", + "- c_init: set this to `tf.zeros_initializer`.\n", + "- c_init_val: Set this by calling the tf.zeros_initializer that you just instantiated, and set the `shape` and `dtype`\n", + " - shape: This will be a vector equal to the number of units. This expects a tuple, and remember that a tuple `(9,)` includes a comma.\n", + " - dtype: set to 'float32'.\n", + "- self.c: create a tensor using tf.Variable, and set the parameters `initial_value` and `trainable`.\n", + "\n", + "#### `call`\n", + "The following section performs the multiplication x^2*a + x*b + c. The steps are broken down for clarity, but you can also perform this calculation in fewer lines if you prefer.\n", + "- x_squared: use tf.math.square()\n", + "- x_squared_times_a: use tf.matmul(). \n", + " - If you see an error saying `InvalidArgumentError: Matrix size-incompatible`, please check the order of the matrix multiplication to make sure that the matrix dimensions line up.\n", + "- x_times_b: use tf.matmul().\n", + "- x2a_plus_xb_plus_c: add the three terms together.\n", + "- activated_x2a_plus_xb_plus_c: apply the class's `activation` to the sum of the three terms.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "deletable": false, + "id": "Ga20PttZFXm4", + "nbgrader": { + "cell_type": "code", + "checksum": "687a4c4733c3c581631c2b476104b829", + "grade": false, + "grade_id": "cell-c302ddc177c098f8", + "locked": false, + "schema_version": 3, + "solution": true, + "task": false + } + }, + "outputs": [], + "source": [ + "# Please uncomment all lines in this cell and replace those marked with\n", + "# `# YOUR CODE HERE`.\n", + "# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or\n", + "# Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.\n", + "\n", + "from typing import Optional, Callable\n", + "\n", + "class SimpleQuadratic(Layer):\n", + "\n", + " def __init__(self,\n", + " units: int=32,\n", + " activation: Optional[str | Callable]=None) -> None:\n", + " '''Initializes the class and sets up the internal variables'''\n", + " super(SimpleQuadratic, self).__init__()\n", + " self.units: int = units\n", + " self.activation: Callable = tf.keras.activations.get(activation)\n", + "\n", + " def build(self, input_shape: tf.TensorShape) -> None:\n", + " '''Create the state of the layer (weights)'''\n", + " # a and b should be initialized with random normal, c (or the bias)\n", + " # with zeros. 
remember to set these as trainable.\n", + " a_init: tf.random_normal_initializer = tf.random_normal_initializer()\n", + " self.a: tf.Variable = tf.Variable(\n", + " initial_value=a_init(\n", + " shape=(input_shape[-1], self.units), dtype=tf.dtypes.float32,\n", + " ),\n", + " trainable=True,\n", + " name=\"kernel_a\",\n", + " )\n", + " b_init: tf.random_normal_initializer = tf.random_normal_initializer()\n", + " self.b: tf.Variable = tf.Variable(\n", + " initial_value=b_init(\n", + " shape=(input_shape[-1], self.units), dtype=tf.dtypes.float32,\n", + " ),\n", + " trainable=True,\n", + " name=\"kernel_b\",\n", + " )\n", + " c_init: tf.zeros_initializer = tf.zeros_initializer()\n", + " self.c: tf.Variable = tf.Variable(\n", + " initial_value=c_init(\n", + " shape=(self.units,), dtype=tf.dtypes.float32,\n", + " ),\n", + " trainable=True,\n", + " name=\"bias\",\n", + " )\n", + "\n", + " def call(self, inputs: tf.Tensor) -> tf.Tensor:\n", + " '''Defines the computation from inputs to outputs'''\n", + " # Remember to use self.activation() to get the final output\n", + " a_x2: tf.Tensor = tf.matmul(a=tf.math.square(inputs), b=self.a)\n", + " b_x: tf.Tensor = tf.matmul(a=inputs, b=self.b)\n", + " logits: tf.Tensor = tf.add(x=a_x2, y=b_x) + self.c\n", + " return self.activation(logits)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "r3QbovCjG5pP" + }, + "source": [ + "Test your implementation" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "0965bec4878a263cf06b286cd0fe3b2a", + "grade": true, + "grade_id": "cell-c3ebc4cccbb7f454", + "locked": true, + "points": 1, + "schema_version": 3, + "solution": false, + "task": false + }, + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "v3krkriNG5pT", + "outputId": "226ec39a-ac26-463f-da42-28f365f4713b" + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "\u001b[92m All public tests passed\n" + ] + } + ], + "source": [ + "utils.test_simple_quadratic(SimpleQuadratic)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "YgBV5O8JG5pZ" + }, + "source": [ + "You can now train the model with the `SimpleQuadratic` layer that you just implemented. Please uncomment the cell below to run the training. When you get the expected results, you will need to comment this block again before submitting the notebook to the grader." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": { + "id": "14tl1CluExjJ", + "colab": { + "base_uri": "https://localhost:8080/" + }, + "outputId": "1bb9bbeb-5a60-457c-9173-5682dfde5ace" + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Epoch 1/5\n", + "1875/1875 [==============================] - 12s 6ms/step - loss: 0.2679 - accuracy: 0.9203\n", + "Epoch 2/5\n", + "1875/1875 [==============================] - 12s 6ms/step - loss: 0.1318 - accuracy: 0.9603\n", + "Epoch 3/5\n", + "1875/1875 [==============================] - 12s 6ms/step - loss: 0.1009 - accuracy: 0.9688\n", + "Epoch 4/5\n", + "1875/1875 [==============================] - 11s 6ms/step - loss: 0.0834 - accuracy: 0.9739\n", + "Epoch 5/5\n", + "1875/1875 [==============================] - 11s 6ms/step - loss: 0.0718 - accuracy: 0.9771\n", + "313/313 [==============================] - 1s 3ms/step - loss: 0.0790 - accuracy: 0.9760\n" + ] + }, + { + "output_type": "execute_result", + "data": { + "text/plain": [ + "[0.07899868488311768, 0.9760000109672546]" + ] + }, + "metadata": {}, + "execution_count": 13 + } + ], + "source": [ + "# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or\n", + "# Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.\n", + "\n", + "# THIS CODE SHOULD RUN WITHOUT MODIFICATION\n", + "# AND SHOULD RETURN TRAINING/TESTING ACCURACY at 97%+\n", + "\n", + "mnist: object = tf.keras.datasets.mnist\n", + "\n", + "x_train: np.ndarray\n", + "y_train: np.ndarray\n", + "x_test: np.ndarray\n", + "y_test: np.ndarray\n", + "(x_train, y_train),(x_test, y_test) = mnist.load_data()\n", + "x_train, x_test = x_train / 255.0, x_test / 255.0\n", + "\n", + "model: tf.keras.Sequential = tf.keras.models.Sequential([\n", + " tf.keras.layers.Flatten(input_shape=(28, 28)),\n", + " SimpleQuadratic(128, activation='relu'),\n", + " tf.keras.layers.Dropout(0.2),\n", + " tf.keras.layers.Dense(10, activation='softmax'),\n", + "])\n", + "\n", + "model.compile(\n", + " optimizer='adam',\n", + " loss='sparse_categorical_crossentropy',\n", + " metrics=['accuracy'],\n", + ")\n", + "\n", + "model.fit(x_train, y_train, epochs=5)\n", + "model.evaluate(x_test, y_test)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "NaJvDgOoG5pl" + }, + "source": [ + "**IMPORTANT**\n", + "\n", + "Before submitting, please make sure to follow these steps to avoid getting timeout issues with the grader:\n", + "\n", + "1. Make sure to pass all public tests and get an accuracy greater than 97%.\n", + "2. Click inside the training code cell above.\n", + "3. Select all lines in this code cell with `Ctrl+A` (Windows/Linux) or `Cmd+A` (Mac), then press `Ctrl+/` (Windows/Linux) or `Cmd+/` (Mac) to comment the entire block. All lines should turn green as before.\n", + "4. From the menu bar, click `File > Save and Checkpoint`. *This is important*.\n", + "5. Once saved, click the `Submit Assignment` button." + ] } - ], - "source": [ - "utils.test_simple_quadratic(SimpleQuadratic)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "YgBV5O8JG5pZ" - }, - "source": [ - "You can now train the model with the `SimpleQuadratic` layer that you just implemented. Please uncomment the cell below to run the training. When you get the expected results, you will need to comment this block again before submitting the notebook to the grader." 
- ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": { - "id": "14tl1CluExjJ", + ], + "metadata": { "colab": { - "base_uri": "https://localhost:8080/" + "provenance": [], + "include_colab_link": true }, - "outputId": "fc42fd3a-3029-4ea7-a33f-96f603b66171" - }, - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n", - "11490434/11490434 [==============================] - 0s 0us/step\n", - "Epoch 1/5\n", - "1875/1875 [==============================] - 17s 8ms/step - loss: 0.2750 - accuracy: 0.9189\n", - "Epoch 2/5\n", - "1875/1875 [==============================] - 13s 7ms/step - loss: 0.1324 - accuracy: 0.9610\n", - "Epoch 3/5\n", - "1875/1875 [==============================] - 8s 4ms/step - loss: 0.1025 - accuracy: 0.9674\n", - "Epoch 4/5\n", - "1875/1875 [==============================] - 9s 5ms/step - loss: 0.0813 - accuracy: 0.9753\n", - "Epoch 5/5\n", - "1875/1875 [==============================] - 8s 4ms/step - loss: 0.0718 - accuracy: 0.9768\n", - "313/313 [==============================] - 1s 2ms/step - loss: 0.0762 - accuracy: 0.9778\n" - ] + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.6" } - ], - "source": [ - "# You can select all lines in this code cell with Ctrl+A (Windows/Linux) or Cmd+A (Mac), then press Ctrl+/ (Windows/Linux) or Cmd+/ (Mac) to uncomment.\n", - "\n", - "# THIS CODE SHOULD RUN WITHOUT MODIFICATION\n", - "# AND SHOULD RETURN TRAINING/TESTING ACCURACY at 97%+\n", - "\n", - "mnist: object = tf.keras.datasets.mnist\n", - "\n", - "x_train: np.ndarray\n", - "y_train: np.ndarray\n", - "x_test: np.ndarray\n", - "y_test: np.ndarray\n", - "(x_train, y_train), (x_test, y_test) = mnist.load_data()\n", - "x_train, x_test = x_train / 255.0, x_test / 255.0\n", - "\n", - "model: tf.keras.Sequential = tf.keras.models.Sequential(\n", - " [\n", - " tf.keras.layers.Flatten(input_shape=(28, 28)),\n", - " SimpleQuadratic(128, activation=\"relu\"),\n", - " tf.keras.layers.Dropout(0.2),\n", - " tf.keras.layers.Dense(10, activation=\"softmax\"),\n", - " ]\n", - ")\n", - "\n", - "model.compile(\n", - " optimizer=\"adam\",\n", - " loss=\"sparse_categorical_crossentropy\",\n", - " metrics=[\"accuracy\"],\n", - ")\n", - "\n", - "model.fit(x_train, y_train, epochs=5)\n", - "model.evaluate(x_test, y_test);" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "NaJvDgOoG5pl" - }, - "source": [ - "**IMPORTANT**\n", - "\n", - "Before submitting, please make sure to follow these steps to avoid getting timeout issues with the grader:\n", - "\n", - "1. Make sure to pass all public tests and get an accuracy greater than 97%.\n", - "2. Click inside the training code cell above.\n", - "3. Select all lines in this code cell with `Ctrl+A` (Windows/Linux) or `Cmd+A` (Mac), then press `Ctrl+/` (Windows/Linux) or `Cmd+/` (Mac) to comment the entire block. All lines should turn green as before.\n", - "4. From the menu bar, click `File > Save and Checkpoint`. *This is important*.\n", - "5. Once saved, click the `Submit Assignment` button." 
- ] - } - ], - "metadata": { - "colab": { - "provenance": [], - "include_colab_link": true - }, - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.6" - } - }, - "nbformat": 4, - "nbformat_minor": 0 + "nbformat": 4, + "nbformat_minor": 0 } \ No newline at end of file
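
A minimal sketch of the activation lookup that the `__init__` guidance in the notebook relies on, assuming TensorFlow 2.x with eager execution; the names `relu_fn`, `linear_fn`, and `x` below are illustrative only. `tf.keras.activations.get()` resolves a string such as 'relu' to the matching Keras callable and maps None to the linear (identity) activation, which is why `self.activation(logits)` can be called unconditionally.

    import tensorflow as tf

    relu_fn = tf.keras.activations.get('relu')   # string -> tf.keras.activations.relu
    linear_fn = tf.keras.activations.get(None)   # None -> linear (identity) activation

    x = tf.constant([-1.0, 0.0, 2.0])
    print(relu_fn(x).numpy())    # [0. 0. 2.]
    print(linear_fn(x).numpy())  # [-1.  0.  2.]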
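
A quick sanity check for the `build`/`call` steps described in the notebook, assuming the `SimpleQuadratic` class from the graded cell is in scope and TensorFlow 2.x eager execution; the names `layer`, `x`, `x_np`, and `expected` are illustrative only. The output shape should be (batch, units), and the values should match a manual computation of x^2 * a + x * b + c.

    import numpy as np
    import tensorflow as tf

    layer = SimpleQuadratic(units=4, activation=None)  # None -> linear activation
    x = tf.random.normal(shape=(8, 3))                 # batch of 8 samples, 3 features

    y = layer(x)    # the first call triggers build(), then call()
    print(y.shape)  # (8, 4): rows = batch size, columns = units

    # Recompute y = x^2 @ a + x @ b + c with NumPy and compare against the layer output.
    x_np = x.numpy()
    expected = np.square(x_np) @ layer.a.numpy() + x_np @ layer.b.numpy() + layer.c.numpy()
    print(np.allclose(y.numpy(), expected, atol=1e-5))  # True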