Topics include: Expectation-Maximization & Matrix Capsule Networks; Determinantal Point Processes & Neural Network compression; Kalman Filters & LSTMs; Model estimation & Binary classifiers; Probability density re-parameterization; Stochastic matrices & Monte Carlo inference
-
I recorded about 20% of these notes as videos in Mandarin in 2015 (all my notes and writings are in English). You may find them on YouTube, Bilibili (哔哩哔哩) and Youku (优酷)
-
I am always looking for high-quality PhD students in Machine Learning, working on both probabilistic models and Deep Learning models. Contact me at [email protected]
An extremely gentle 30-minute introduction to AI and Machine Learning. Thanks to my PhD student Haodong Chang for assisting with the editing
Classification: logistic and softmax regression; Regression: linear and polynomial; Mixed Effects models. [costFunction.m] and [soft_max.m]
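A minimal sketch of a regularised logistic-regression cost and gradient, in the spirit of costFunction.m (the actual file's interface may differ; the function name and signature here are illustrative):

```matlab
% Regularised logistic-regression cost (illustrative, not the repo's file)
function [J, grad] = logistic_cost(theta, X, y, lambda)
    m = length(y);
    h = 1 ./ (1 + exp(-X * theta));           % sigmoid hypothesis
    reg = theta; reg(1) = 0;                  % do not regularise the bias
    J = -(y' * log(h) + (1 - y)' * log(1 - h)) / m ...
        + lambda / (2 * m) * (reg' * reg);
    grad = (X' * (h - y)) / m + lambda / m * reg;
end
```

The returned pair `[J, grad]` can be handed directly to `fminunc` for training.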
Collaborative filtering, Factorization Machines, Non-negative Matrix Factorisation and the Multiplicative Update Rule
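A minimal sketch of Lee & Seung's multiplicative update rule for NMF, minimising the squared Frobenius error ||V - WH||; the data matrix, rank and iteration count below are illustrative:

```matlab
% NMF via multiplicative updates (toy data, illustrative sizes)
V = rand(50, 40);             % non-negative data matrix
k = 5;                        % number of latent factors
W = rand(50, k); H = rand(k, 40);
for it = 1:200
    H = H .* (W' * V) ./ (W' * W * H + eps);  % eps avoids division by zero
    W = W .* (V * H') ./ (W * H * H' + eps);
end
fprintf('reconstruction error: %.4f\n', norm(V - W * H, 'fro'));
```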
Classic PCA and t-SNE
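A minimal sketch of classic PCA via the eigendecomposition of the sample covariance (the toy data and projection dimension are illustrative):

```matlab
% Classic PCA on toy data (illustrative)
X = randn(100, 3) * [2 0 0; 0 1 0; 0 0 0.1];  % toy data, 100 x 3
Xc = X - mean(X, 1);                           % centre the data
C = (Xc' * Xc) / (size(X, 1) - 1);             % sample covariance
[V, D] = eig(C);
[~, idx] = sort(diag(D), 'descend');           % order by eigenvalue
V = V(:, idx);
Z = Xc * V(:, 1:2);                            % project onto top-2 PCs
```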
Three perspectives on machine learning and data science: supervised vs. unsupervised learning, classification accuracy
Optimisation methods in general, not limited to Deep Learning
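As a concrete illustration, a minimal gradient-descent sketch on a toy quadratic f(x) = 0.5*x'Ax - b'x, whose gradient is Ax - b (the problem and step size are illustrative):

```matlab
% Gradient descent on a toy quadratic (illustrative problem)
A = [3 1; 1 2]; b = [1; 1];
x = zeros(2, 1); eta = 0.1;
for it = 1:100
    x = x - eta * (A * x - b);    % step along the negative gradient
end
disp([x, A \ b]);                 % compare with the closed-form minimiser
```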
Basic neural networks and the multilayer perceptron
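A minimal sketch of a one-hidden-layer perceptron forward pass (the sizes and random weights are illustrative):

```matlab
% One-hidden-layer MLP forward pass (illustrative sizes)
x  = randn(4, 1);                       % input vector
W1 = randn(8, 4);  b1 = zeros(8, 1);    % input -> hidden weights
W2 = randn(1, 8);  b2 = 0;              % hidden -> output weights
sigmoid = @(z) 1 ./ (1 + exp(-z));
h = sigmoid(W1 * x + b1);               % hidden activations
y_hat = W2 * h + b2;                    % linear output unit
```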
Detailed explanation of CNNs, various loss functions (Centre Loss, Contrastive Loss), Residual Networks, YOLO and SSD
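As one concrete example from the losses above, a minimal sketch of the contrastive loss for an embedding pair, assuming the standard margin form y*d^2 + (1-y)*max(0, m-d)^2 (the embeddings and margin are illustrative):

```matlab
% Contrastive loss for one embedding pair (illustrative values)
e1 = randn(8, 1); e2 = randn(8, 1);      % a pair of embeddings
y = 0; m = 1.0;                          % label: 0 = dissimilar; margin m
d = norm(e1 - e2);                       % Euclidean embedding distance
loss = y * d^2 + (1 - y) * max(0, m - d)^2;
```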
Word2Vec, skip-gram, GloVe, Noise Contrastive Estimation, negative sampling and the Gumbel-max trick
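A minimal sketch of the Gumbel-max trick: adding i.i.d. Gumbel(0,1) noise to log-probabilities and taking the argmax yields an exact categorical sample (the probabilities and sample count are illustrative):

```matlab
% Gumbel-max trick: argmax of log p + Gumbel noise ~ Categorical(p)
p = [0.5 0.3 0.2];                         % categorical probabilities
n = 1e5; counts = zeros(size(p));
for i = 1:n
    g = -log(-log(rand(size(p))));         % Gumbel(0,1) samples
    [~, k] = max(log(p) + g);              % argmax = one categorical draw
    counts(k) = counts(k) + 1;
end
disp(counts / n);                          % should approximate p
```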
RNN, LSTM, Seq2Seq with Attention, beam search, Attention Is All You Need, Convolutional Seq2Seq, Pointer Networks
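A minimal sketch of the scaled dot-product attention at the heart of "Attention Is All You Need", softmax(QK'/sqrt(d))V (the tensor sizes are illustrative):

```matlab
% Scaled dot-product attention (illustrative sizes)
d = 8; Tq = 3; Tk = 5;
Q = randn(Tq, d); K = randn(Tk, d); V = randn(Tk, d);
S = Q * K' / sqrt(d);                     % attention scores, Tq x Tk
A = exp(S - max(S, [], 2));               % row-wise, numerically stable
A = A ./ sum(A, 2);                       % softmax over keys
out = A * V;                              % attended values, Tq x d
```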
Basic knowledge of reinforcement learning: Markov Decision Processes and the Bellman Equation, then moving on to Deep Q-Learning
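A minimal value-iteration sketch applying the Bellman optimality backup on a toy 2-state, 2-action MDP (the transitions, rewards and discount are illustrative):

```matlab
% Value iteration on a toy MDP (illustrative dynamics)
% P(s, s', a): probability of moving s -> s' under action a
P(:, :, 1) = [0.9 0.1; 0.2 0.8];    % action 1
P(:, :, 2) = [0.1 0.9; 0.7 0.3];    % action 2
R = [1 0; 0 2];                     % R(s, a): immediate reward
gamma = 0.9;
V = zeros(2, 1);
for it = 1:500
    Q = R + gamma * [P(:, :, 1) * V, P(:, :, 2) * V];  % Q(s, a)
    V = max(Q, [], 2);              % Bellman optimality backup
end
disp(V');
```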
Basic knowledge of the Restricted Boltzmann Machine (RBM)
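A minimal sketch of one block-Gibbs step in a binary RBM, alternating between the hidden and visible conditionals (the sizes and weights are illustrative):

```matlab
% One block-Gibbs step in a binary RBM (illustrative sizes)
nv = 6; nh = 3;
W = 0.1 * randn(nh, nv); b = zeros(nv, 1); c = zeros(nh, 1);
sigmoid = @(z) 1 ./ (1 + exp(-z));
v = double(rand(nv, 1) > 0.5);            % a visible configuration
ph = sigmoid(W * v + c);                  % P(h_j = 1 | v)
h = double(rand(nh, 1) < ph);             % sample hidden units
pv = sigmoid(W' * h + b);                 % P(v_i = 1 | h)
v1 = double(rand(nv, 1) < pv);            % reconstruct visible units
```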
Revision of Bayesian models, including the Bayesian predictive model and conditional expectation
Some useful distributions, conjugacy, MLE, MAP, the exponential family and natural parameters
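A minimal conjugacy sketch: with a Beta(a, b) prior on a coin's bias, the posterior after observing the flips is again a Beta (the hyperparameters and data are illustrative):

```matlab
% Beta-Bernoulli conjugate update (illustrative values)
a = 2; b = 2;                      % prior hyperparameters
data = [1 1 0 1 0 1 1 1];          % coin flips
a_post = a + sum(data);            % add the heads count
b_post = b + sum(1 - data);        % add the tails count
fprintf('posterior mean = %.3f\n', a_post / (a_post + b_post));
```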
Useful statistical properties to help us prove things, including the Chebyshev and Markov inequalities
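A quick Monte Carlo sanity check of Markov's inequality, P(X >= a) <= E[X]/a for non-negative X (the exponential example is illustrative):

```matlab
% Monte Carlo check of Markov's inequality (illustrative example)
X = -log(rand(1e6, 1));            % Exp(1) samples, so E[X] = 1
a = 3;
fprintf('P(X >= a) = %.4f <= E[X]/a = %.4f\n', mean(X >= a), mean(X) / a);
```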
Proof of convergence for E-M, with examples of E-M via the Gaussian Mixture Model. [gmm_demo.m] and [kmeans_demo.m] and [Youku link]
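A minimal sketch of the E-M iteration for a 1-D, K-component GMM, not the repository's gmm_demo.m (the data and initialisation are illustrative):

```matlab
% E-M for a 1-D Gaussian Mixture Model (illustrative toy data)
x = [randn(100, 1) - 2; randn(100, 1) + 2];   % toy data, N x 1
K = 2; N = numel(x);
mu = [-1; 1]; sigma2 = [1; 1]; pk = [0.5; 0.5];
for it = 1:50
    % E-step: responsibilities r(n,k) prop. to pk(k) * N(x_n|mu_k, sigma2_k)
    r = zeros(N, K);
    for k = 1:K
        r(:, k) = pk(k) * exp(-(x - mu(k)).^2 / (2 * sigma2(k))) ...
                  / sqrt(2 * pi * sigma2(k));
    end
    r = r ./ sum(r, 2);
    % M-step: re-estimate weights, means and variances
    Nk = sum(r, 1)';
    pk = Nk / N;
    mu = (r' * x) ./ Nk;
    for k = 1:K
        sigma2(k) = r(:, k)' * (x - mu(k)).^2 / Nk(k);
    end
end
```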
Detailed explanation of the Kalman Filter and the Hidden Markov Model. [kalman_demo.m] and [HMM Youku link] and [Kalman Filter Youku link]
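A minimal sketch of one Kalman-filter predict/update cycle for a constant-velocity model, in the spirit of kalman_demo.m (all matrices are illustrative):

```matlab
% One Kalman predict/update cycle (illustrative model)
A = [1 1; 0 1];      % state transition (position, velocity)
H = [1 0];           % we observe position only
Q = 0.01 * eye(2);   % process noise covariance
R = 0.5;             % measurement noise variance
m = [0; 1]; P = eye(2);          % prior mean and covariance
z = 1.2;                         % a new measurement
m_pred = A * m;                  % predict the state
P_pred = A * P * A' + Q;         % predict the covariance
S = H * P_pred * H' + R;         % innovation covariance
K = P_pred * H' / S;             % Kalman gain
m = m_pred + K * (z - H * m_pred);
P = (eye(2) - K * H) * P_pred;
```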
Explains Variational Bayes for both non-exponential and exponential family distributions, plus stochastic variational inference. [vb_normal_gamma.m] and [Youku link]
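A minimal mean-field VB sketch for a Gaussian with unknown mean mu and precision tau under a Normal-Gamma prior, in the spirit of vb_normal_gamma.m (the hyperparameters and data are illustrative): the factors q(mu) = N(muN, 1/lamN) and q(tau) = Gamma(aN, bN) are updated in turn.

```matlab
% Mean-field VB for a Normal-Gamma model (illustrative toy data)
x = randn(200, 1) * 2 + 1;            % toy data
N = numel(x); xbar = mean(x);
mu0 = 0; lam0 = 1; a0 = 1; b0 = 1;    % prior hyperparameters
E_tau = a0 / b0;
for it = 1:50
    % update q(mu)
    muN  = (lam0 * mu0 + N * xbar) / (lam0 + N);
    lamN = (lam0 + N) * E_tau;
    % update q(tau), taking expectations over q(mu)
    aN = a0 + (N + 1) / 2;
    bN = b0 + 0.5 * (sum((x - muN).^2) + N / lamN ...
         + lam0 * ((muN - mu0)^2 + 1 / lamN));
    E_tau = aN / bN;
end
fprintf('E[mu] = %.3f, E[tau] = %.3f\n', muN, E_tau);
```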
Stochastic matrices, the Power Method Convergence Theorem, detailed balance and the PageRank algorithm
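A minimal power-method PageRank sketch on a toy 4-node graph (the adjacency matrix and damping factor are illustrative):

```matlab
% PageRank via the power method (illustrative graph)
Adj = [0 1 1 0; 0 0 1 0; 1 0 0 1; 0 0 1 0];   % Adj(i,j)=1: edge i -> j
n = size(Adj, 1); d = 0.85;
T = Adj ./ sum(Adj, 2);                  % row-stochastic transition matrix
G = d * T + (1 - d) / n * ones(n);       % Google matrix (teleportation)
r = ones(n, 1) / n;
for it = 1:100
    r = G' * r;                          % power iteration on the chain
    r = r / sum(r);                      % guard against numeric drift
end
disp(r');
```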
Inverse CDF, rejection, adaptive rejection and importance sampling. [adaptive_rejection_sampling.m] and [hybrid_gmm.m]
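A minimal rejection-sampling sketch, drawing from a Beta(2,2) target with a Uniform(0,1) proposal; the envelope constant M bounds the density ratio (target and M are illustrative):

```matlab
% Rejection sampling from Beta(2,2) with a uniform proposal
target = @(x) 6 * x .* (1 - x);     % Beta(2,2) density on [0,1]
M = 1.5;                            % max of target/proposal density ratio
samples = zeros(1e4, 1); n = 0;
while n < numel(samples)
    x = rand; u = rand;
    if u < target(x) / M            % accept with prob target(x)/(M*1)
        n = n + 1; samples(n) = x;
    end
end
histogram(samples, 30);             % should match the Beta(2,2) shape
```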
Metropolis-Hastings (M-H), Gibbs, Slice Sampling, Elliptical Slice Sampling and Swendsen-Wang, with collapsed Gibbs demonstrated using LDA. [lda_gibbs_example.m] and [test_autocorrelation.m] and [gibbs.m] and [Youku link]
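A minimal random-walk Metropolis-Hastings sketch targeting a standard normal (the step size and chain length are illustrative):

```matlab
% Random-walk Metropolis-Hastings targeting N(0,1)
log_target = @(x) -0.5 * x.^2;        % log density up to a constant
n = 1e4; x = zeros(n, 1);
for t = 2:n
    prop = x(t-1) + 0.5 * randn;                      % symmetric proposal
    if log(rand) < log_target(prop) - log_target(x(t-1))
        x(t) = prop;                                  % accept
    else
        x(t) = x(t-1);                                % reject: stay put
    end
end
```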
Sequential Monte Carlo, the Condensation filter algorithm and the Auxiliary Particle Filter. [Youku link]
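A minimal bootstrap (Condensation-style) particle-filter sketch for a 1-D linear-Gaussian model (the model parameters and particle count are illustrative):

```matlab
% Bootstrap particle filter on a 1-D toy model (illustrative)
T = 50; N = 500;
x_true = zeros(T, 1);
for t = 2:T, x_true(t) = 0.9 * x_true(t-1) + randn * 0.5; end
y = x_true + randn(T, 1) * 0.3;                % noisy observations
p = randn(N, 1); est = zeros(T, 1);            % particles and estimates
for t = 1:T
    p = 0.9 * p + randn(N, 1) * 0.5;           % propagate through dynamics
    w = exp(-0.5 * ((y(t) - p) / 0.3).^2);     % likelihood weights
    w = w / sum(w);
    est(t) = w' * p;                           % posterior mean estimate
    c = cumsum(w);                             % multinomial resampling
    idx = min(sum(rand(N, 1) > c', 2) + 1, N);
    p = p(idx);
end
```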
Dirichlet Process (DP), insights into the Chinese Restaurant Process, and slice sampling for the DP. [dirichlet_process.m] and [Youku link] and [Jupyter Notebook]
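A minimal Chinese Restaurant Process sketch, sequentially seating n customers with concentration alpha (the parameters are illustrative):

```matlab
% Chinese Restaurant Process prior draw (illustrative parameters)
n = 100; alpha = 2;
tables = [];                          % tables(k) = customers at table k
z = zeros(n, 1);                      % table assignment per customer
for i = 1:n
    probs = [tables, alpha] / (i - 1 + alpha);  % existing tables + new one
    k = find(rand < cumsum(probs), 1);          % draw a table index
    if k > numel(tables), tables(k) = 0; end    % open a new table
    tables(k) = tables(k) + 1;
    z(i) = k;
end
fprintf('%d customers sat at %d tables\n', n, numel(tables));
```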
Hierarchical DP (HDP), HDP-HMM and the Indian Buffet Process (IBP)
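A minimal sketch of sampling from the IBP prior: customer i takes each existing dish k with probability m_k/i, then tries Poisson(alpha/i) new dishes (the parameters are illustrative):

```matlab
% Indian Buffet Process prior draw (illustrative parameters)
n = 10; alpha = 2;
Z = zeros(n, 0);                                  % customer-by-dish matrix
for i = 1:n
    mk = sum(Z, 1);                               % dish popularity counts
    Z(i, :) = rand(1, size(Z, 2)) < mk / i;       % revisit old dishes
    lam = alpha / i;                              % Poisson(lam) new dishes,
    k_new = 0; s = -log(rand);                    % counted as Exp(1)
    while s < lam                                 % arrivals before time lam
        k_new = k_new + 1; s = s - log(rand);
    end
    Z(i, end+1:end+k_new) = 1;                    % open the new dishes
end
disp(Z);
```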
Explains the details of the DPP's marginal distribution, the L-ensemble, its sampling strategy, and our work on time-varying DPPs
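A minimal sketch of the L-ensemble marginal: for a DPP with kernel L, the marginal kernel is K = L(L+I)^(-1), and the inclusion probability of any subset A is det(K_A) (the toy similarity kernel below is illustrative):

```matlab
% L-ensemble marginal kernel of a DPP (illustrative kernel)
x = linspace(0, 1, 6)';                         % 6 ground-set items
L = exp(-(x - x').^2 / 0.1);                    % PSD similarity kernel
K = L / (L + eye(6));                           % marginal kernel L*(L+I)^-1
A = [1 2];                                      % a subset of items
fprintf('P(items 1,2 both in Y) = %.4f\n', det(K(A, A)));
fprintf('P(item 1 in Y) = %.4f\n', K(1, 1));
```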