Backpropagation

In machine learning, backpropagation (backprop, BP) is a widely used algorithm for training feedforward artificial neural networks. Generalizations of backpropagation exist for other artificial neural networks (ANNs), and for functions generally; these classes of algorithms are all referred to generically as "backpropagation". In fitting a neural network, backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input–output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually. This efficiency makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss; gradient descent, or variants such as stochastic gradient descent, are commonly used.

The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight via the chain rule, one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this is an example of dynamic programming. The term backpropagation strictly refers only to the algorithm for computing the gradient, not to how the gradient is used; however, the term is often used loosely to refer to the entire learning algorithm, including how the gradient is used, such as by stochastic gradient descent. Backpropagation generalizes the gradient computation in the delta rule, which is the single-layer version of backpropagation, and is in turn generalized by automatic differentiation, in which backpropagation is a special case of reverse accumulation (or "reverse mode"). The term backpropagation and its general use in neural networks were announced, elaborated, and popularized in 1986 work by Rumelhart, Hinton, and Williams, but the technique was independently rediscovered many times and had many predecessors dating to the 1960s. A modern overview is given in the deep learning textbook by Goodfellow, Bengio, and Courville (2016). (Wikipedia).
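The layer-by-layer chain-rule computation described above is compact enough to write out directly. Below is a minimal sketch in Python/NumPy for a two-layer network, assuming sigmoid activations and a squared-error loss (illustrative choices, not fixed by the text above). The forward pass caches each layer's activations, and the backward pass reuses each layer's error term ("delta") instead of recomputing it, which is the dynamic-programming step the paragraph mentions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(x, y, W1, b1, W2, b2):
    """Gradients of L = 0.5 * ||a2 - y||^2 for one input-output pair (x, y)."""
    # Forward pass: cache pre-activations and activations per layer.
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2
    a2 = sigmoid(z2)

    # Backward pass: chain rule, one layer at a time, from the output
    # backward. Each layer reuses the delta of the layer above rather
    # than recomputing it -- the dynamic-programming step.
    delta2 = (a2 - y) * a2 * (1 - a2)          # dL/dz2
    dW2, db2 = np.outer(delta2, a1), delta2
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # dL/dz1
    dW1, db1 = np.outer(delta1, x), delta1
    return dW1, db1, dW2, db2

# One plain gradient-descent step on a toy example.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
x, y = np.array([0.5, -1.0]), np.array([1.0])
for param, grad in zip((W1, b1, W2, b2), backprop(x, y, W1, b1, W2, b2)):
    param -= 0.1 * grad                        # in-place update
```

Checking such a sketch against finite-difference approximations of each partial derivative is the standard sanity test that the backward pass is correct.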

Backpropagation

Backpropagation in Neural Networks | Back Propagation Algorithm with Examples | Simplilearn

This video covers backpropagation in neural networks. This neural network tutorial for beginners includes a definition of backpropagation, how backpropagation works, the benefits of backpropagation, and its applications. 00:00 - What is Backpropagation?

From playlist Deep Learning Tutorial Videos 🔥[2022 Updated] | Simplilearn

Backpropagation explained | Part 5 - What puts the "back" in backprop?

Let's see the math that explains how backpropagation works backwards through a neural network. We've seen how to calculate the gradient of the loss function using backpropagation in the previous video. We haven't yet seen, though, where the backwards movement we talked about comes into play.

From playlist Deep Learning Fundamentals - Intro to Neural Networks

Backpropagation 1

Introduces the notation and background necessary to understand backpropagation. Backpropagation is given as an algorithm but not mathematically justified.

From playlist MachineLearning

Understanding Backpropagation In Neural Networks with Basic Calculus

This video explains backpropagation in neural networks and deep learning with basic knowledge of calculus. In machine learning, backpropagation or backprop is a widely used algorithm for training feedforward neural networks. Backpropagation computes the gradient of the loss function with respect to the weights of the network.

From playlist Mathematics for Machine Learning - Dr. Data Science Series

Backpropagation explained | Part 2 - The mathematical notation

We covered the intuition behind backpropagation's role during the training of an artificial neural network: https://youtu.be/XE3krf3CQls Now, we're going to focus on the math underlying backprop. The math is pretty involved, so we're going to break it up into bite-size pieces.

From playlist Deep Learning Fundamentals - Intro to Neural Networks

Backpropagation 2

This is a sequel to Backpropagation 1: https://youtu.be/NLchKk9Cawg Here we mathematically justify the backpropagation algorithm.

From playlist MachineLearning

Backpropagation : Data Science Concepts

The tricky backprop method in neural networks ... clearly explained! Intro Neural Networks video: https://youtu.be/xx1hS1EQLNw

From playlist Data Science Concepts

Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures (Paper Explained)

Backpropagation is one of the central components of modern deep learning. However, it is not biologically plausible, which limits its usefulness for understanding how the human brain works. Direct Feedback Alignment is a biologically plausible alternative, and this paper shows that it scales to modern deep learning tasks and architectures; a minimal sketch of the idea follows this entry.

From playlist Papers Explained
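For contrast with standard backpropagation, here is a minimal sketch of the Direct Feedback Alignment idea from the paper above, reusing the same toy two-layer setup as the earlier sketch (the sigmoid/squared-error choices and all variable names are illustrative assumptions, not the paper's code). The output error reaches the hidden layer through a fixed random feedback matrix B instead of through the transposed forward weights, so no layer-by-layer backward pass through the network itself is needed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # hidden layer
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # output layer
B = rng.normal(size=(3, 1))                     # fixed random feedback, never trained

x, y = np.array([0.5, -1.0]), np.array([1.0])

# Forward pass, identical to backpropagation.
z1 = W1 @ x + b1; a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; a2 = sigmoid(z2)

# DFA "backward" pass: the output error e is projected straight to the
# hidden layer through B, replacing the W2.T @ e term of backpropagation.
e = (a2 - y) * a2 * (1 - a2)
delta1 = (B @ e) * a1 * (1 - a1)

lr = 0.1
W2 -= lr * np.outer(e, a1);     b2 -= lr * e
W1 -= lr * np.outer(delta1, x); b1 -= lr * delta1
```

Because every layer's update depends only on the output error, the layer updates are independent of one another, which is part of what makes the method attractive for parallel and biologically motivated training schemes.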

Neural Networks Demystified [Part 4: Backpropagation]

Backpropagation as simple as possible, but no simpler. Perhaps the most misunderstood part of neural networks, backpropagation of errors is the key step that allows ANNs to learn. In this video, I give the derivation and thought processes behind backpropagation using high school level calculus.

From playlist Neural Networks Demystified

Backpropagation And Gradient Descent In Neural Networks | Neural Network Tutorial | Simplilearn

🔥Artificial Intelligence Engineer Program (Discount Coupon: YTBE15): https://www.simplilearn.com/masters-in-artificial-intelligence?utm_campaign=BackPropagationandGradientDescent-odlgtjXduVg&utm_medium=Descriptionff&utm_source=youtube

From playlist Deep Learning Tutorial Videos 🔥[2022 Updated] | Simplilearn

Training Neural Networks: Crash Course AI #4

Today we’re going to talk about how neurons in a neural network learn by getting their math adjusted, a process called backpropagation, and how we can optimize networks by finding the best combinations of weights to minimize error. Then we’ll send John Green Bot into the metaphorical jungle.

From playlist Artificial Intelligence

Lecture 13.1 — The ups and downs of backpropagation [Neural Networks for Machine Learning]

Lecture from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. Link to the course (login required): https://class.coursera.org/neuralnets-2012-001

From playlist [Coursera] Neural Networks for Machine Learning — Geoffrey Hinton

Lecture 12 - Backprop & Improving Neural Networks | Stanford CS229: Machine Learning (Autumn 2018)

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3EaSVE7
Kian Katanforoosh, Lecturer, Computer Science
To follow along with the course schedule and syllabus, visit: http://cs229.stanford.edu/syllabus-autumn2018.

From playlist Stanford CS229: Machine Learning Full Course taught by Andrew Ng | Autumn 2018

Backpropagation In Neural Networks | Backpropagation Algorithm Explained For Beginners | Simplilearn

This video on backpropagation in neural networks will cover how backpropagation and gradient descent play a role in training neural networks. You will learn this using an example of how to recognize handwritten digits with a neural network.

From playlist Deep Learning Tutorial Videos 🔥[2022 Updated] | Simplilearn

Backpropagation explained | Part 1 - The intuition

Let's discuss backpropagation and what its role is in the training process of a neural network. We're going to start out by first going over a quick recap of some of the points about Stochastic Gradient Descent that we learned in previous videos. Then, we're going to talk about where backpropagation comes into play.

From playlist Deep Learning Fundamentals - Intro to Neural Networks

Backpropagation calculus | Chapter 4, Deep learning

Help fund future projects: https://www.patreon.com/3blue1brown
An equally valuable form of support is to simply share some of the videos. Special thanks to these supporters: http://3b1b.co/nn3-thanks
Written/interactive form of this series: https://www.3blue1brown.com/topics/neural-network

From playlist Data Science

Related pages

Automatic differentiation | Loss function | Deep learning | Logistic function | Regression analysis | Artificial neuron | AdaBoost | Dummy variable (statistics) | AlexNet | Chain rule | Gradient | Differentiable function | Fisher information | Backpropagation through time | Parameter space | Rectifier (neural networks) | Parameter | Squared error loss | Dynamic programming | Diagonal matrix | Statistical classification | Partial derivative | Feedforward neural network | Parabola | Hadamard product (matrices) | Activation function | Maxima and minima | Convex optimization | Monte Carlo tree search | Control theory | Delta rule | Sigmoid function | Swish function | Real number | Overfitting | Ensemble learning | Catastrophic interference | Gradient descent | Hessian matrix | Plateau (mathematics) | Artificial neural network | Ramp function | Cross entropy | Gradient method | Tuple | Transpose | Iteration | Matrix multiplication | Euclidean distance | Function composition | Optimization problem | Softmax function | Stochastic gradient descent | Algorithm | Algorithmic efficiency | Total derivative