Linear Digressions

Backpropagation

Feb 29, 2016
ANECDOTE

Coffee Cup Example Shows Generalization

  • The hosts use a coffee cup example: training on many labeled coffee-cup images lets the network generalize to cups it has never seen.
  • Feeding in many images labeled 'coffee cup' adjusts the network's weights until it can recognize new cup pictures.
INSIGHT

Hidden Units Are Simple Weighted Functions

  • Neural network hidden units are simple mathematical functions that take many inputs, weight them, sum them, and apply a nonlinearity.
  • Each hidden unit reads many pixels (or upstream outputs), multiplies by learned weights, sums, then uses a logistic-like function to decide its output.
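The computation described above can be sketched in a few lines. This is a minimal illustration, not the episode's code; the function name and inputs are hypothetical.

```python
import math

def hidden_unit(inputs, weights, bias):
    """One hidden unit: weight each input, sum, apply a logistic nonlinearity."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) squashes to (0, 1)
```

For example, `hidden_unit([0.5, 0.2], [1.0, -2.0], 0.0)` computes the weighted sum 0.1 and squashes it to roughly 0.525.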
INSIGHT

Training Changes Weights Not Functions

  • Training a neural net does not change the form of the neuron functions; it adjusts the numeric weights on connections between units.
  • The equations themselves never change; only the connection weights do, shifting how much each input contributes to the output.
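The idea that training moves weights while the unit's function stays fixed can be sketched with one gradient-descent step on a single sigmoid unit. This is a hedged toy example, not the episode's implementation; the function names, learning rate, and data are assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(weights, inputs, target, lr=0.5):
    """One gradient-descent step: the sigmoid stays the same, only weights move."""
    z = sum(w * x for w, x in zip(weights, inputs))
    out = sigmoid(z)
    err = out - target
    # chain rule through the sigmoid (the core of backpropagation)
    grad = [err * out * (1 - out) * x for x in inputs]
    return [w - lr * g for w, g in zip(weights, grad)]
```

Running `train_step` repeatedly nudges the weights so the unit's output approaches the target, even though the sigmoid equation itself is untouched.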