Spiking neural network

Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in an SNN do not transmit information at every propagation cycle (as happens in typical multi-layer perceptron networks), but only when a membrane potential (an intrinsic quality of the neuron related to its membrane electrical charge) reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, generating a signal that travels to other neurons, which in turn increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is called a spiking neuron model.

The most prominent spiking neuron model is the leaky integrate-and-fire model. In the integrate-and-fire model, the momentary activation level (modeled as a differential equation) is normally taken to be the neuron's state: incoming spikes push this value higher or lower until it either decays back toward rest or reaches the firing threshold, at which point the neuron fires. After firing, the state variable is reset to a lower value.

Various decoding methods exist for interpreting the outgoing spike train as a real-valued number, relying on either the frequency of spikes (rate coding), the time to first spike after stimulation, or the interval between spikes. (Wikipedia).
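The leaky integrate-and-fire dynamics described above can be sketched in a few lines. The following is a minimal illustration, not a reference implementation: the parameter values (membrane time constant, threshold, reset potential) and the function name are chosen for demonstration only.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: sequence of input values, one per time step.
    Returns the membrane-potential trace and the spike times (in steps).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # The potential leaks back toward rest while integrating input.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_threshold:      # threshold crossing: the neuron fires
            spikes.append(t)
            v = v_reset           # state variable is reset after firing
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold input yields a regular spike train.
trace, spikes = simulate_lif(np.full(200, 1.5))
```

A rate-code decoding of the output is then simply the spike count divided by the simulation duration; time-to-first-spike decoding would instead read off `spikes[0]`.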

Multilayer Neural Networks - Part 1: Introduction

This video is about Multilayer Neural Networks - Part 1: Introduction. Abstract: This is a series of videos about multi-layer neural networks, which walks through the introduction, the architecture of the feedforward fully-connected neural network, and its working principle …

From playlist Neural Networks

Neural Networks (2): Backpropagation

Gradient descent for training neural networks; recursive "backpropagation" calculation; examples

From playlist cs273a

Backpropagation explained | Part 2 - The mathematical notation

We covered the intuition behind backpropagation's role during the training of an artificial neural network: https://youtu.be/XE3krf3CQls Now we're going to focus on the math underlying backprop. The math is pretty involved, so we're going to break it up into bite-sized pieces …

From playlist Deep Learning Fundamentals - Intro to Neural Networks

Neural Network Architectures & Deep Learning

This video describes the variety of neural network architectures available to solve various problems in science and engineering. Examples include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and autoencoders. Book website: http://databookuw.com/ Steve Brunton's website: eigensteve.com

From playlist Data Science

Neural Network Overview

This lecture gives an overview of neural networks, which play an important role in machine learning today. Book website: http://databookuw.com/ Steve Brunton's website: eigensteve.com

From playlist Intro to Data Science

[WeightWatcher] Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory

For slides and more information on the paper, visit https://aisc.ai.science/events/2019-11-06 Discussion lead & author: Charles Martin. Abstract: Random Matrix Theory (RMT) is applied to analyze weight matrices of Deep Neural Networks (DNNs), including both production-quality, pre-trained …

From playlist Math and Foundations

14: Rate Models and Perceptrons - Intro to Neural Computation

MIT 9.40 Introduction to Neural Computation, Spring 2018. Instructor: Michale Fee. View the complete course: https://ocw.mit.edu/9-40S18 YouTube Playlist: https://www.youtube.com/playlist?list=PLUl4u3cNGP61I4aI5T6OaFfRK2gihjiMm Explores a mathematically tractable model of neural networks …

From playlist MIT 9.40 Introduction to Neural Computation, Spring 2018

PDEs for neural assemblies; analysis; simulations and behaviour

Professor Benoît Perthame, Sorbonne University, France

From playlist Distinguished Visitors Lecture Series

Luyan Yu - Metastable spiking networks in the replica-mean-field limit

Institut Henri Poincaré, 11 rue Pierre et Marie Curie, 75005 Paris. http://www.ihp.fr/ Join the IHP's social networks to keep up with our news: - Facebook: https://www.facebook.com/InstitutHenriPoincare/ - Twitter: https://twitter …

From playlist Workshop "Workshop on Mathematical Modeling and Statistical Analysis in Neuroscience" - January 31st - February 4th, 2022

Lecture 1/16 : Introduction

Neural Networks for Machine Learning by Geoffrey Hinton [Coursera 2013] 1A Why do we need machine learning? 1B What are neural networks? 1C Some simple models of neurons 1D A simple example of learning 1E Three types of learning

From playlist Neural Networks for Machine Learning by Professor Geoffrey Hinton [Complete]

Luca Mazzucato - Computational Principles Underlying the Temporal Organization of Behavior

Naturalistic animal behavior exhibits a striking amount of variability in the temporal domain along at least three independent axes: hierarchical, contextual, and stochastic. First, a vast hierarchy of timescales links movements into behavioral sequences and long-term activities …

From playlist Mikefest: A conference in honor of Michael Douglas' 60th birthday

Artificial Intelligence per Kilowatt-hour: Max Welling, University of Amsterdam

Professor Welling is a research chair in Machine Learning at the University of Amsterdam and a Vice President of Technologies at Qualcomm. He has a secondary appointment at the Canadian Institute for Advanced Research (CIFAR). He is a co-founder of "Scyfer BV", a university spin-off in deep learning …

From playlist AI for Social Good

Does the brain do backpropagation? CAN Public Lecture - Geoffrey Hinton - May 21, 2019

Canadian Association for Neuroscience 2019 Public lecture: Geoffrey Hinton https://can-acn.org/2019-public-lecture-geoffrey-hinton

From playlist AI Research

Multilayer Neural Networks - Part 2: Feedforward Neural Networks

This video is about Multilayer Neural Networks - Part 2: Feedforward Neural Networks. Abstract: This is a series of videos about multi-layer neural networks, which walks through the introduction, the architecture of the feedforward fully-connected neural network, and its working principle …

From playlist Neural Networks

Neural Coding and Adaptation (Lecture 1) by Adrienne Fairhall

PROGRAM: ICTP-ICTS WINTER SCHOOL ON QUANTITATIVE SYSTEMS BIOLOGY (ONLINE). ORGANIZERS: Vijaykumar Krishnamurthy (ICTS-TIFR, India), Venkatesh N. Murthy (Harvard University, USA), Sharad Ramanathan (Harvard University, USA), Sanjay Sane (NCBS-TIFR, India) and Vatsala Thirumalai (NCBS-TIFR, India) …

From playlist ICTP-ICTS Winter School on Quantitative Systems Biology (ONLINE)

Neural manifolds - The Geometry of Behaviour

This video is my take on 3B1B's Summer of Math Exposition (SoME) competition. It explains in pretty intuitive terms how ideas from topology (or "rubber geometry") can be used in neuroscience, to help us understand the way information is embedded in high-dimensional representations inside …

From playlist Summer of Math Exposition Youtube Videos

Multilayer Neural Networks - Part 4a: Backpropagation

This video is about Multilayer Neural Networks - Part 4a: Backpropagation. Abstract: This is a series of videos about multi-layer neural networks, which walks through the introduction, the architecture of the feedforward fully-connected neural network, and its working principle …

From playlist Machine Learning

Related pages

Differential equation | Deep learning | Convolutional neural network | Artificial neuron | Neural coding | CoDi | Network topology | Pulse-coupled networks | Neural decoding | Recurrent neural network | Action potential | Lasso (statistics) | Gradient descent | FitzHugh–Nagumo model | Artificial neural network | SpiNNaker | Echo state network | Perceptron | Backpropagation | Hindmarsh–Rose model | PyTorch | Leaky integrator | Hodgkin–Huxley model