Deep learning methods: a practical approach
by Francesco Morandin
Table of contents (tentative)
1. The seventies. Neural networks before computers
Perceptron unit, the biologically inspired artificial neuron
Layers of many units, when a categorical output is needed
Multilayer neural networks, universal approximators with automatic feature extraction
Activation function, the need for some trainable nonlinearity
2. The nineties. Supervised training success
Training set, learning from examples through a loss function
Backpropagation, the CS answer to calculus' chain rule for the gradient
Minibatches and stochastic gradient descent
3. The tens. The subtle science and exact art of DL
Autoencoders and unsupervised weight initialization
Modern activation functions, ReLU & Co
Robust loss functions, maximum likelihood and cross-entropy
Weight regularization and dropout
Fifty shades of Deep Learning: ConvNets, pooling, ResNets and BatchNorm
4. Tools of the trade
Python, TensorFlow and Keras
Google Colab and Jupyter Notebook
Deep Learning's “Hello World!”: MNIST (see the sketch after this outline)
A textbook example of fingerprint localization: UJIIndoorLoc
Essential bibliography
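
As a taste of chapter 4, here is a minimal sketch of Deep Learning's “Hello World!” in Keras: a small dense classifier for the MNIST digits. The layer sizes, optimizer and number of epochs below are illustrative assumptions, not the choices made in the course material.

import tensorflow as tf

# Load the MNIST digits and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network with one ReLU hidden layer (chapter 3)
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Cross-entropy loss trained by minibatch stochastic gradient descent (chapter 2)
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, batch_size=32, epochs=5)
model.evaluate(x_test, y_test)
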
Organized by