
Machine Learning Refined Jupyter notebooks

This repository contains supplementary Python files associated with the textbook Machine Learning Refined, published by Cambridge University Press, as well as a blog of Jupyter notebooks used to draft the second edition of the text; see https://jermwatt.github.io/mlrefined/index.html for interactive versions of many of the notebooks in this repo. To run the Jupyter notebooks contained here we highly recommend downloading the Anaconda Python 3 distribution. Many of these notebooks also employ the automatic differentiator autograd, which can be installed by typing the following command at your terminal

  pip install autograd
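As a quick illustration (a minimal sketch, not taken from the text), autograd is typically used by writing a cost function with its thinly wrapped version of NumPy and then asking for that function's gradient:

  import autograd.numpy as np
  from autograd import grad

  # a simple quadratic cost written with autograd's wrapped numpy
  def cost(w):
      return np.sum(w**2)

  # grad returns a new function that evaluates the gradient of cost
  cost_gradient = grad(cost)
  print(cost_gradient(np.array([1.0, 2.0])))  # prints [2. 4.]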

With minor adjustments users can also run these notebooks using JAX, the GPU/TPU-accelerated successor to autograd.
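For instance, the small gradient computation above carries over almost verbatim (a sketch, assuming JAX has been installed, e.g. via pip install jax):

  import jax.numpy as jnp
  from jax import grad

  # the same quadratic cost, written with jax.numpy instead of autograd.numpy
  def cost(w):
      return jnp.sum(w**2)

  cost_gradient = grad(cost)
  print(cost_gradient(jnp.array([1.0, 2.0])))  # prints [2. 4.]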

Note: to pull a minimally sized clone of this repo (including only the most recent commit) use a shallow clone as follows

  git clone --depth 1 https://github.com/jermwatt/mlrefined.git

Chapter 2: Zero order / derivative free optimization

2.1 Motivation
2.2 Zero order optimality conditions
2.3 Global optimization
2.4 Local optimization techniques
2.5 Random search methods

Chapter 3: First order optimization methods

3.1 Introduction
3.2 The first order optimality condition
3.3 The anatomy of lines and hyperplanes
3.4 Automatic differentiation and autograd
3.5 Gradient descent
3.6 Two problems with the negative gradient direction
3.7 Momentum acceleration
3.8 Normalized gradient descent procedures
3.9 Advanced first order methods
3.10 Mini-batch methods
3.11 Conservative steplength rules

Chapter 4: Second order optimization methods

4.1 The anatomy of quadratic functions
4.2 Curvature and the second order optimality condition
4.3 Newton's method
4.4 Two fundamental problems with Newton's method
4.5 Quasi-Newton methods

Chapter 5: Linear regression

5.1 Least squares regression
5.2 Least absolute deviations
5.3 Regression metrics
5.4 Weighted regression
5.5 Multi-output regression

Chapter 6: Linear two-class classification

6.1 Logistic regression and the cross-entropy cost
6.2 Logistic regression and the softmax cost
6.3 The perceptron
6.4 Support vector machines
6.5 Categorical labels
6.6 Comparing two-class schemes
6.7 Quality metrics
6.8 Weighted two-class classification

Chapter 7: Linear multi-class classification

7.1 One-versus-All classification
7.2 The multi-class perceptron
7.3 Comparing multi-class schemes
7.4 The categorical cross-entropy cost
7.5 Multi-class quality metrics

Chapter 8: Unsupervised learning

8.1 Spanning sets and vector algebra
8.2 Learning proper spanning sets
8.3 The linear Autoencoder
8.4 The classical PCA solution
8.5 Recommender systems
8.6 K-means clustering
8.7 Matrix factorization techniques

Chapter 9: Principles of feature selection and engineering

9.1 Histogram-based features
9.2 Standard normalization and feature scaling
9.3 Imputing missing values
9.4 PCA-sphering
9.5 Feature selection via boosting
9.6 Feature selection via regularization

Chapter 10: Introduction to nonlinear learning

10.1 Nonlinear regression
10.2 Nonlinear multi-output regression
10.3 Nonlinear two-class classification
10.4 Nonlinear multi-class classification
10.5 Nonlinear unsupervised learning

Chapter 11: Principles of feature learning

11.1 Universal approximation
11.2 The bias-variance trade-off
11.3 Cross-validation via boosting
11.4 Cross-validation via regularization
11.5 Ensembling techniques
11.6 K-fold cross-validation
11.7 Testing data

Chapter 12: Kernels

12.1 The variety of kernel-based learners
12.2 The kernel trick
12.3 Kernels as similarity measures
12.4 Scaling kernels

Chapter 13: Fully connected networks / multi-layer perceptrons

13.1 Fully connected networks
13.2 Optimization issues
13.3 Activation functions
13.4 Backpropagation
13.5 Batch normalization
13.6 Early-stopping

Chapter 14: Tree-based learners

14.1 Varieties of tree-based learners
14.2 Regression trees
14.3 Classification trees
14.4 Gradient boosting
14.5 Random forests
14.6 Cross-validating individual trees


This repository is in active development by Jeremy Watt and Reza Borhani - please do not hesitate to reach out with comments, questions, typos, etc.
