
# Glossary of terms

NOTE: This is not (and probably never will be) a complete set of AI/ML terms. I just wanted something to refer to when people newer to the class ask about certain terms.

## Basic Neural Network Components

- **Neural network**: Any model that relies on distinct layers of neurons passing signals from input to output.
- **Neuron**: A basic unit of a neural network. Neurons vary widely in how they work, but they generally have three components: inputs, weights, and an activation (see the sketch after this list).
- **Weights**: At a high level, weights give certain inputs more importance than others in a neuron's output. "Learning" usually means adjusting these weights to get the best outputs.
- **Activations**: The exact behavior varies a lot, but in general an activation function filters a neuron's output to emphasize a certain range or category of outputs. It is called a "nonlinearity" because it turns a weighted sum of inputs (linear) into an active/inactive signal.
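To make those three components concrete, here is a minimal sketch of a single neuron in NumPy. The ReLU activation and all of the numbers are illustrative choices, not anything prescribed by a particular model or library.

```python
import numpy as np

def relu(x):
    # A common activation: passes positive signals through, zeroes out negative ones.
    return np.maximum(0.0, x)

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs (linear), then a nonlinearity (the activation).
    return relu(np.dot(inputs, weights) + bias)

x = np.array([0.5, -1.0, 2.0])  # inputs
w = np.array([0.8, 0.2, 0.5])   # weights (these are what training adjusts)
print(neuron(x, w, bias=0.1))   # -> 1.3
```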

## Training

- **Loss**: In neural networks, our general goal is to minimize a bad thing. That bad thing, usually some measure of the difference between what the model outputs and what we want it to output, is called the loss.
- **Backpropagation**: The algorithm that works backward through the network (using the chain rule) to compute how much each weight contributed to the loss, so the weights can be corrected.
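Here is the loss-then-correct loop shrunk down to a toy one-weight "network". This isn't full backpropagation through layers, just the same chain-rule idea applied to a single weight; the data and learning rate are made up for illustration.

```python
# A tiny one-weight "network": prediction = w * x.
# The loss is the squared error between the prediction and the target.
x, target = 2.0, 6.0
w = 0.0  # initial weight

for step in range(5):
    pred = w * x
    loss = (pred - target) ** 2
    grad = 2 * (pred - target) * x  # d(loss)/dw via the chain rule (backprop in miniature)
    w -= 0.05 * grad                # gradient descent: nudge w to reduce the loss
    print(f"step {step}: w={w:.3f}, loss={loss:.3f}")
```

Run it and the weight converges toward 3.0, where the prediction matches the target and the loss goes to zero.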

## Methods

- **Diffusion**: A generative method that gradually adds noise to training data and learns to reverse that noising process, so new samples can be generated by starting from pure noise and denoising step by step (see the sketch after this list).
- **Zero-Shot Learning**: Getting a model to perform a task it was never given labeled examples of, relying only on what it learned elsewhere (e.g., a written description of the task).
- **Few-Shot / One-Shot Learning**: Getting a model to perform a task after seeing only a handful of examples (few-shot) or a single example (one-shot).
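For diffusion, a minimal sketch of the forward (noising) direction only; the blend factor here is an arbitrary choice for illustration, and a real diffusion model would additionally be trained to reverse these steps.

```python
import numpy as np

# Forward diffusion in miniature: repeatedly blend the data with Gaussian noise.
# After enough steps the data is indistinguishable from pure noise; a diffusion
# model is trained to undo these steps, which this sketch does not show.
x = np.array([1.0, -2.0, 0.5])  # made-up "data"
for t in range(4):
    noise = np.random.randn(*x.shape)
    x = np.sqrt(0.9) * x + np.sqrt(0.1) * noise  # keeps variance roughly constant
print(x)
```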

## Loss / Metrics

- **Softmax**: Collapses a set of real values (from -inf to inf) into a set of probabilities that sum to 1; higher input values give higher resulting probabilities.
- **KL Divergence**: Kullback-Leibler divergence, a measure of how different one probability distribution is from another. Often used to find the "distance" between two distributions (e.g., normal distributions defined by a mean and variance) rather than between two fixed points; note it is not symmetric (see the sketch after this list).
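A minimal NumPy sketch of both terms, using discrete distributions for simplicity; the logits are made-up numbers, and the max-subtraction in softmax is a standard numerical-stability trick that doesn't change the result.

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; mathematically the result is unchanged.
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

def kl_divergence(p, q):
    # KL(p || q) for two discrete distributions; note KL(p||q) != KL(q||p) in general.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum(p * np.log(p / q))

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())  # probabilities that sum to 1, largest logit -> largest probability

q = softmax(np.array([1.0, 1.0, 1.0]))  # uniform distribution for comparison
print(kl_divergence(p, q))              # 0 only when the two distributions match
```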