Region-based Convolutional Neural Networks (R-CNN) are a family of machine learning models for computer vision, and specifically for object detection (Wikipedia).
Mask Region based Convolution Neural Networks - EXPLAINED!
In this video, we will take a look at a new type of neural network architecture called the Mask Region-based Convolutional Neural Network, Mask R-CNN for short, and in the process highlight some key sub-problems in computer vision.
From playlist Deep Learning Research Papers
1D convolution for neural networks, part 1: Sliding dot product
Part of a 9-part series on 1D convolution for neural networks. Catch the rest at https://e2eml.school/321
From playlist E2EML 321. Convolution in One Dimension for Neural Networks
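The sliding-dot-product view of 1D convolution from this series can be sketched in a few lines of numpy (a minimal illustration, not the course's own code; note that neural-network "convolution" layers usually compute cross-correlation, i.e. they do not flip the kernel):

```python
import numpy as np

def conv1d(signal, kernel):
    """'Valid' 1D convolution as used in neural networks: slide the
    kernel along the signal and take a dot product at each position
    (cross-correlation -- the kernel is not flipped)."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel)
                     for i in range(n)])

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1.0, 0.0, -1.0])
result = conv1d(signal, kernel)  # each output is signal[i] - signal[i+2]
```

With this edge-detecting kernel every window differs by the same amount, so the output is constant.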
Graph Neural Networks, Session 4: Simple Graph Convolution
Challenges of extending convolutions to graphs. Overview of simple graph convolutions.
From playlist Graph Neural Networks (Hands-on)
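Assuming the session covers Simple Graph Convolution in the sense of Wu et al. (SGC), the idea reduces to repeated feature smoothing with a normalized adjacency matrix and no nonlinearities in between; a minimal numpy sketch under that assumption:

```python
import numpy as np

def sgc_features(A, X, K=2):
    """Simple Graph Convolution: propagate node features X through
    S = D^{-1/2} (A + I) D^{-1/2} for K hops, with no nonlinearity."""
    A_hat = A + np.eye(A.shape[0])                 # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)   # symmetric normalization
    out = X
    for _ in range(K):
        out = S @ out                              # one hop of smoothing per step
    return out

# Path graph 0-1-2; a single feature that starts concentrated on node 0.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.], [0.], [0.]])
smoothed = sgc_features(A, X, K=1)
```

After one hop, node 0's feature mass has spread to its neighbor, which illustrates why S^K X alone already captures much of what a deeper GCN computes.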
Lecture 5 | Convolutional Neural Networks
In Lecture 5 we move from fully-connected neural networks to convolutional neural networks. We discuss some of the key historical milestones in the development of convolutional networks, including the perceptron, the neocognitron, LeNet, and AlexNet. We introduce convolution, pooling, and
From playlist Lecture Collection | Convolutional Neural Networks for Visual Recognition (Spring 2017)
Lecture 11 | Detection and Segmentation
In Lecture 11 we move beyond image classification, and show how convolutional networks can be applied to other core computer vision tasks. We show how fully convolutional networks equipped with downsampling and upsampling layers can be used for semantic segmentation, and how multitask loss
From playlist Lecture Collection | Convolutional Neural Networks for Visual Recognition (Spring 2017)
Introduction to convolutional neural networks
Start this series on deep learning for domain experts at https://www.youtube.com/watch?v=9-QYsN_knG4&list=PLsu0TcgLDUiIKPMXu1k_rItoTV8xPe1cj In this video I talk about the basic concepts of the layers that make up convolutional neural networks. These networks are great for computer visi
From playlist Introduction to deep learning for everyone
Eun-Ah Kim - Machine Learning for Quantum Simulation - IPAM at UCLA
Recorded 15 April 2022. Eun-Ah Kim of Cornell University presents "Machine Learning for Quantum Simulation" at IPAM's Model Reduction in Quantum Mechanics Workshop. Learn more online at: http://www.ipam.ucla.edu/programs/workshops/workshop-ii-model-reduction-in-quantum-mechanics/?tab=sched
From playlist 2022 Model Reduction in Quantum Mechanics Workshop
Lecture 10 | Recurrent Neural Networks
In Lecture 10 we discuss the use of recurrent neural networks for modeling sequence data. We show how recurrent neural networks can be used for language modeling and image captioning, and how soft spatial attention can be incorporated into image captioning models. We discuss different arch
From playlist Lecture Collection | Convolutional Neural Networks for Visual Recognition (Spring 2017)
Neural Networks - Lecture 5 - CS50's Introduction to Artificial Intelligence with Python 2020
00:00:00 - Introduction
00:00:15 - Neural Networks
00:05:41 - Activation Functions
00:07:47 - Neural Network Structure
00:16:02 - Gradient Descent
00:30:00 - Multilayer Neural Networks
00:32:58 - Backpropagation
00:36:27 - Overfitting
00:38:52 - TensorFlow
00:53:01 - Computer Vision
00:58:
From playlist CS50's Introduction to Artificial Intelligence with Python 2020
Neural Networks Part 8: Image Classification with Convolutional Neural Networks (CNNs)
One of the coolest things that Neural Networks can do is classify images, and this is often done with a type of Neural Network called a Convolutional Neural Network (or CNN for short). In this StatQuest, we walk through how Convolutional Neural Networks work, one step at a time, and highli
From playlist StatQuest
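The convolution, ReLU, and pooling steps this kind of walkthrough covers can be sketched directly in numpy (a toy illustration on a hypothetical 4x4 "image", not the video's own example):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the filter over the image and take
    an element-wise product-and-sum at each position."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)  # zero out negative responses

def max_pool2x2(x):
    """2x2 max pooling: keep the largest value in each 2x2 block."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1., 0.],
                   [0., 1.]])
fmap = max_pool2x2(relu(conv2d(image, kernel)))
```

The 4x4 input shrinks to a 3x3 response map under the 2x2 filter, and pooling then reduces that to a single value, which is the dimension reduction the video highlights.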
23. Convolutional Neural Networks
Vanilla neural networks are powerful, but convolutional neural networks are truly revolutionary! Instead of constructing features by hand, a convolutional neural network can extract features on its own! It does this through convolutional layers and then reduces dimensions for faster comput
From playlist Materials Informatics
MIT 6.S191: Convolutional Neural Networks
MIT Introduction to Deep Learning 6.S191: Lecture 3 Convolutional Neural Networks for Computer Vision Lecturer: Alexander Amini January 2022 For all lectures, slides, and lab materials: http://introtodeeplearning.com Lecture Outline - coming soon! Subscribe to stay up to date with new d
From playlist Introduction to Machine Learning
In this video, we discuss attention in neural networks. We go through soft and hard attention and discuss the architecture with examples. My video on Generative Adversarial Networks: https://www.youtube.com/watch?v=O8LAi6ksC80
From playlist Deep Learning Research Papers
CS231n Lecture 13 - Segmentation, soft attention, spatial transformers
Segmentation. Soft attention models. Spatial transformer networks.
From playlist CS231N - Convolutional Neural Networks
Deep Learning Lecture 5.1 - Greetings
Convolutional Neural Networks - Welcome
From playlist Deep Learning Lecture
Grad-CAM class activation visualization - Keras Code Examples
This video walks through an example that shows you how to see which region of an image most influences predictions and gradients when applying Deep Neural Networks for Image Classification. I hope this is a useful introduction for anyone looking to add some Interpretability to their Comput
From playlist Keras Code Examples
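The heart of Grad-CAM, independent of the Keras walkthrough above, is a gradient-weighted sum over the channels of the last convolutional feature map; a minimal numpy sketch of that arithmetic, with toy arrays standing in for a real model's activations and gradients:

```python
import numpy as np

def grad_cam_map(feature_maps, gradients):
    """Grad-CAM core: weight each channel by the spatial mean of the
    class-score gradient (alpha_k), sum the weighted channels, apply
    ReLU, and normalize to [0, 1] for display as a heatmap."""
    weights = gradients.mean(axis=(0, 1))                      # (C,) per-channel alpha_k
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))  # weighted channel sum
    cam = np.maximum(cam, 0.0)                                  # keep positive influence only
    return cam / cam.max() if cam.max() > 0 else cam

# Toy stand-ins: a 2x2 spatial map with 3 channels.
feats = np.ones((2, 2, 3))
grads = np.ones((2, 2, 3))
heatmap = grad_cam_map(feats, grads)
```

In a real pipeline the gradients come from backpropagating the target class score to the conv layer; here uniform arrays simply show the heatmap normalizing to 1 everywhere.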