Neural network architectures

Convolutional neural network

In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of artificial neural network (ANN) most commonly applied to analyze visual imagery. CNNs are also known as Shift Invariant or Space Invariant Artificial Neural Networks (SIANN), based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are not invariant to translation, because of the downsampling operation they apply to the input. They have applications in image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series.

CNNs are regularized versions of multilayer perceptrons. Multilayer perceptron usually means a fully connected network, that is, one in which each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks makes them prone to overfitting data. Typical ways of regularizing, or preventing, overfitting include penalizing parameters during training (such as weight decay) or trimming connectivity (skip connections, dropout, etc.). CNNs take a different approach towards regularization: they take advantage of the hierarchical pattern in data and assemble patterns of increasing complexity using smaller and simpler patterns embossed in their filters. Therefore, on a scale of connectivity and complexity, CNNs are on the lower extreme.

Convolutional networks were inspired by biological processes, in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.

CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters (or kernels) through automated learning, whereas in traditional algorithms these filters are hand-engineered. This independence from prior knowledge and human intervention in feature extraction is a major advantage. (Wikipedia)
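
The equivariance-versus-invariance point above is easy to check numerically. Below is a minimal sketch (plain NumPy, 1D for readability; the helper `conv1d_valid` is hypothetical, written only for illustration): shifting the input shifts the feature map by the same amount, but once the feature map is subsampled, as pooling or strided convolution does, the shifted input no longer produces the same output.

```python
import numpy as np

def conv1d_valid(x, w):
    """Slide kernel w along x, taking a dot product at each position."""
    n, k = len(x), len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(n - k + 1)])

x = np.array([0., 0., 1., 2., 1., 0., 0., 0., 0., 0.])
w = np.array([1., -1.])          # a simple edge-detecting kernel
x_shift = np.roll(x, 2)          # translate the input by 2 samples

y = conv1d_valid(x, w)
y_shift = conv1d_valid(x_shift, w)

# Equivariance: shifting the input shifts the feature map by the same amount.
print(np.allclose(np.roll(y, 2)[2:], y_shift[2:]))   # True (away from borders)

# Downsampling by 2 (as pooling/striding does) breaks exact invariance:
# the shifted input no longer yields the same subsampled feature map.
print(np.array_equal(y[::2], y_shift[::2]))          # False
```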

1D convolution for neural networks, part 1: Sliding dot product

Part of a 9-part series on 1D convolution for neural networks. Catch the rest at https://e2eml.school/321

From playlist E2EML 321. Convolution in One Dimension for Neural Networks

Implement 1D convolution, part 1: Convolution in Python from scratch

Get the full course experience at https://e2eml.school/321. This course starts out with all the fundamentals of convolutional neural networks in one dimension for maximum clarity. We will extend Cottonwood to handle convolutional architectures and apply it to classifying electrically-measured…

From playlist E2EML 321. Convolution in One Dimension for Neural Networks
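
For a sense of what "sliding dot product" and "convolution in Python from scratch" mean in the two entries above, here is a minimal sketch (plain NumPy, not the course's Cottonwood implementation; `conv1d` is a hypothetical helper written for illustration):

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """1D 'valid' convolution as a sliding dot product: at each position,
    line the (flipped) kernel up with a window of the signal and take
    the dot product."""
    k = kernel[::-1]                       # true convolution flips the kernel
    n_out = (len(signal) - len(k)) // stride + 1
    out = np.empty(n_out)
    for i in range(n_out):
        window = signal[i * stride : i * stride + len(k)]
        out[i] = np.dot(window, k)
    return out

signal = np.array([0., 1., 3., 2., 0., -1., 0.])
kernel = np.array([1., 0., -1.])

print(conv1d(signal, kernel))                    # matches the reference below
print(np.convolve(signal, kernel, mode="valid"))
```

Note that most deep learning libraries actually compute cross-correlation (no kernel flip) and still call it convolution; since the kernel weights are learned, the distinction rarely matters in practice.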

Build a 1D convolutional neural network, part 6: Text summary and loss history

Get the full course experience at https://e2eml.school/321. This course starts out with all the fundamentals of convolutional neural networks in one dimension for maximum clarity. We will extend Cottonwood to handle convolutional architectures and apply it to classifying electrically-measured…

From playlist E2EML 321. Convolution in One Dimension for Neural Networks

Lecture 5 | Convolutional Neural Networks

In Lecture 5 we move from fully-connected neural networks to convolutional neural networks. We discuss some of the key historical milestones in the development of convolutional networks, including the perceptron, the neocognitron, LeNet, and AlexNet. We introduce convolution, pooling, and…

From playlist Lecture Collection | Convolutional Neural Networks for Visual Recognition (Spring 2017)

Convolutional Neural Networks Basics - Deep Learning with TensorFlow 12

In this tutorial, we cover the basics of the Convolutional Neural Network (CNN) in terms of how the network works and how the parts interact. https://pythonprogramming.net https://twitter.com/sentdex https://www.facebook.com/pythonprogramming.net/ https://plus.google.com/+sentdex

From playlist Machine Learning with Python

Introduction to convolutional neural networks

Start this series on deep learning for domain experts at https://www.youtube.com/watch?v=9-QYsN_knG4&list=PLsu0TcgLDUiIKPMXu1k_rItoTV8xPe1cj. In this video I talk about the basic concepts of the layers that make up convolutional neural networks. These networks are great for computer vision…

From playlist Introduction to deep learning for everyone

CS231n Lecture 7 - Convolutional Neural Networks

Convolutional Neural Networks: architectures, convolution/pooling layers. Case study of ImageNet-challenge-winning ConvNets.

From playlist CS231N - Convolutional Neural Networks

Deep Learning Lecture 5.1 - Greetings

Convolutional Neural Networks - Welcome

From playlist Deep Learning Lecture

Neural Networks Part 8: Image Classification with Convolutional Neural Networks (CNNs)

One of the coolest things that Neural Networks can do is classify images, and this is often done with a type of Neural Network called a Convolutional Neural Network (or CNN for short). In this StatQuest, we walk through how Convolutional Neural Networks work, one step at a time, and highlight…

From playlist StatQuest

Stefania Ebli (8/29/21): Simplicial Neural Networks

In this talk I will present simplicial neural networks (SNNs), a generalization of graph neural networks to data that live on a class of topological spaces called simplicial complexes. These are natural multi-dimensional extensions of graphs that encode not only pairwise relationships but…

From playlist Beyond TDA - Persistent functions and its applications in data sciences, 2021

One Neural network learns EVERYTHING ?!

We explore a neural network architecture that can solve multiple tasks: the multimodal neural network. We discuss important components and concepts along the way. If you like this video, hit that like button. If you really like this video, hit that SUBSCRIBE button. And if you just love me, hit…

From playlist Deep Learning Research Papers

Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 11 – Convolutional Networks for NLP

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit https://stanford.io/30eokXM. Professor Christopher Manning, Thomas M. Siebel Professor in Machine Learning, Stanford University. http://onlinehub.stanford.edu/

From playlist Stanford CS224N: Natural Language Processing with Deep Learning Course | Winter 2019

6.1 Convolutional Neural Network (CNN) models

Deep Learning Course Purdue University Fall 2016

From playlist Deep-Learning-Course

Deep Learning Lecture 5.3 - ConvNets

Convolutional Neural Networks: overall architecture of a CNN classifier, convolutional layers, nonlinear activation functions, pooling or striding, boundary handling (zero padding), LeNet-5, AlexNet, and convolutions on different topologies.

From playlist Deep Learning Lecture
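
The pooling, striding, and zero-padding items in the entry above all feed into one piece of arithmetic: the spatial output size of a convolutional or pooling layer, floor((n + 2·padding − k) / stride) + 1. A small illustrative sketch (the layer sizes follow LeNet-5's well-known 32→28→14→10→5 progression; the helper name is hypothetical):

```python
def conv_output_size(n, k, padding=0, stride=1):
    """Spatial output size of a convolution (or pooling) layer:
    floor((n + 2*padding - k) / stride) + 1."""
    return (n + 2 * padding - k) // stride + 1

n = 32                                     # 32x32 input image
n = conv_output_size(n, k=5)               # 5x5 conv, no padding -> 28
n = conv_output_size(n, k=2, stride=2)     # 2x2 pooling, stride 2 -> 14
n = conv_output_size(n, k=5)               # 5x5 conv              -> 10
n = conv_output_size(n, k=2, stride=2)     # 2x2 pooling, stride 2 -> 5
print(n)                                   # 5
```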

The Evolution of Convolution Neural Networks

From the one that started it all, "LeNet" (1998), to the deeper networks we see today like Xception (2017), here are some important CNN architectures you should know. If you like the video, show your support with a like, and SUBSCRIBE for more awesome content on machine learning, deep learning…

From playlist Deep Learning Research Papers

23. Convolutional Neural Networks

Vanilla neural networks are powerful, but convolutional neural networks are truly revolutionary! Instead of constructing features by hand, a convolutional neural network can extract features on its own! It does this through convolutional layers and then reduces dimensions for faster computation…

From playlist Materials Informatics
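
As a concrete illustration of the dimension reduction mentioned in the entry above, here is a minimal max-pooling sketch (plain NumPy; `max_pool2d` is a hypothetical helper, not taken from the video):

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """Non-overlapping 2x2 max pooling: keep the largest activation in
    each window, shrinking each spatial dimension by the stride."""
    h_out = (x.shape[0] - size) // stride + 1
    w_out = (x.shape[1] - size) // stride + 1
    out = np.empty((h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            window = x[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = window.max()
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)   # a toy 4x4 feature map
print(max_pool2d(fmap))   # 2x2 result: [[ 5.,  7.], [13., 15.]]
```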

DDPS | Towards automatic architecture design for emerging machine learning tasks | Misha Khodak

Hand-designed neural networks have played a major role in accelerating progress in traditional areas of machine learning such as computer vision, but designing neural networks for other domains remains a challenge. Successfully transferring existing architectures to applications such as se…

From playlist Data-driven Physical Simulations (DDPS) Seminar Series

17b Machine Learning: Convolutional Neural Networks

Accessible lecture on convolutional neural networks. The Python demonstrations are here: operators demo (https://git.io/JkqV9) and CNN demo (https://git.io/JksEJ). I hope this is helpful, Michael Pyrcz (@GeostatsGuy)

From playlist Machine Learning

Related pages

DeepDream | Dimensionality reduction | Translational symmetry | Deep learning | Deterministic algorithm | Sparse network | Downsampling (signal processing) | Sparse approximation | Dot product | Lua (programming language) | Curse of dimensionality | Apache Spark | Elastic net regularization | Frobenius inner product | Neocognitron | Caffe (software) | Conformal prediction | Safety-critical system | Anti-aliasing filter | Partition of a set | Real number | Overfitting | Dlib | Time series | Matrix multiplication | Softmax function | Video quality | Proportional hazards model | CUDA | NumPy | Boltzmann machine | Decision boundary | MATLAB | Reinforcement learning | Symmetry | TensorFlow | Filter (signal processing) | Torch (machine learning) | Rectifier (neural networks) | Feature (machine learning) | Regularization (mathematics) | Sigmoid function | Microsoft Cognitive Toolkit | Integer | Multinomial distribution | Hyperparameter (machine learning) | Expected value | Backpropagation | Cross-validation (statistics) | Nonlinear filter | Orbital hybridisation | Deep belief network | Loss function | Artificial neuron | Intersection (set theory) | Aliasing | Long short-term memory | Recurrent neural network | Monte Carlo tree search | Equivariant map | Affine transformation | Convolution | Artificial neural network | Deeplearning4j | Time delay neural network | Average | Layer (deep learning) | AlexNet | Scala (programming language) | Three-dimensional space | Vector addition | RGB color model | Multilayer perceptron | Activation function | National Health and Nutrition Examination Survey | Text-to-Video model | Nyquist–Shannon sampling theorem | Capsule neural network | Theano (software) | Hyperparameter optimization | Q-learning | Tensor | Per-comparison error rate | Cross entropy | Curvature | Euclidean distance