Nature-inspired metaheuristics
Stochastic diffusion search (SDS) was first described in 1989 as a population-based, pattern-matching algorithm. It belongs to the family of swarm intelligence and nature-inspired search and optimisation algorithms that includes ant colony optimization, particle swarm optimization and genetic algorithms; as such, SDS was the first swarm intelligence metaheuristic. Unlike the communication employed in ant colony optimization, which is based on modifying the physical properties of a simulated environment, SDS uses a form of direct (one-to-one) communication between agents, similar to the tandem-calling mechanism employed by one species of ant, Leptothorax acervorum. In SDS, agents perform cheap, partial evaluations of a hypothesis (a candidate solution to the search problem). They then share information about hypotheses (diffusion of information) through direct one-to-one communication. As a result of this diffusion mechanism, high-quality solutions can be identified from clusters of agents holding the same hypothesis. The operation of SDS is most easily understood by means of a simple analogy: the Restaurant Game. (Wikipedia).
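The test-and-diffuse cycle described above can be sketched in a few lines of Python. This is a minimal illustration (the function name, parameters, and example strings are invented for this sketch, not taken from any reference implementation): agents hold positions in a text as hypotheses, each cheaply tests a single random character of a model string, and inactive agents copy hypotheses from active ones.

```python
import random

def sds_string_search(text, model, n_agents=100, iterations=200, seed=0):
    """Minimal stochastic diffusion search: agents cluster at the
    position in `text` that best matches `model`."""
    rng = random.Random(seed)
    positions = list(range(len(text) - len(model) + 1))
    hyps = [rng.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iterations):
        # Test phase: each agent checks ONE random character
        # of the model (a cheap, partial evaluation).
        for i, h in enumerate(hyps):
            j = rng.randrange(len(model))
            active[i] = text[h + j] == model[j]
        # Diffusion phase: each inactive agent polls one random agent.
        for i in range(n_agents):
            if not active[i]:
                other = rng.randrange(n_agents)
                if active[other]:
                    hyps[i] = hyps[other]           # copy a successful hypothesis
                else:
                    hyps[i] = rng.choice(positions)  # re-seed at random
    # The largest cluster of agents marks the best match.
    return max(set(hyps), key=hyps.count)

print(sds_string_search("xxxhello worldxxx", "world"))
```

No agent ever evaluates a full match, yet the population reliably clusters at the best-matching position, illustrating how partial evaluation plus diffusion identifies high-quality solutions.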
"Diffusion Approximation and Sequential Experimentation" by Victor Araman
We consider a Bayesian sequential experimentation problem. We identify environments in which the average number of experiments conducted per unit of time is large and the informativeness of each individual experiment is low. Under such regimes, we derive a diffusion approximation f
From playlist Thematic Program on Stochastic Modeling: A Focus on Pricing & Revenue Management
Introduction to the paper https://arxiv.org/abs/2002.06707
From playlist Research
For those asking me to go through Stoichiometry a bit slower... https://www.youtube.com/playlist?list=PLyuGdIuwJD9G2pZrJmSa57awLilGCyOCZ These are updated videos (currently using for Australian Curriculum) that take you from mass to solution to gas Stoichiometry.
From playlist Topic 1 Stoichiometry - at a slower pace...
Mini Batch Gradient Descent | Deep Learning | with Stochastic Gradient Descent
Mini Batch Gradient Descent is an algorithm that helps to speed up learning when dealing with a large dataset. Instead of updating the weight parameters after assessing the entire dataset, Mini Batch Gradient Descent updates the weight parameters after assessing a small batch of the dataset.
From playlist Optimizers in Machine Learning
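The per-batch update described above is easy to sketch. The following toy example (the function and its parameters are made up for illustration) fits a line y = w*x + b, updating the weights after every small batch rather than after the full dataset:

```python
import random

def minibatch_gd(xs, ys, batch_size=4, lr=0.05, epochs=1000, seed=0):
    """Fit y = w*x + b by mini-batch gradient descent on squared error."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)  # draw fresh random batches each epoch
        for start in range(0, len(idx), batch_size):
            batch = idx[start:start + batch_size]
            # Gradient of the mean squared error over this batch only.
            gw = gb = 0.0
            for i in batch:
                err = (w * xs[i] + b) - ys[i]
                gw += 2 * err * xs[i] / len(batch)
                gb += 2 * err / len(batch)
            w -= lr * gw   # update after each batch, not the full dataset
            b -= lr * gb
    return w, b

xs = [x / 10 for x in range(20)]   # 0.0 .. 1.9
ys = [2 * x + 1 for x in xs]       # noiseless line y = 2x + 1
w, b = minibatch_gd(xs, ys)
print(w, b)  # values close to 2.0 and 1.0
```

Each epoch performs several cheap updates instead of one expensive full-dataset update, which is the speed-up the description refers to.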
Silvia Villa - Generalization properties of multiple passes stochastic gradient method
The stochastic gradient method has become an algorithm of choice in machine learning, because of its simplicity and small computational cost, especially when dealing with big data sets. Despite its widespread use, the generalization properties of the variants of stochastic
From playlist Schlumberger workshop - Computational and statistical trade-offs in learning
Jocelyne Bion Nadal: Approximation and calibration of laws of solutions to stochastic...
Abstract: In many situations where stochastic modeling is used, one wishes to choose the coefficients of a stochastic differential equation that represents reality as simply as possible. For example, one may wish to approximate a diffusion model with high-complexity coefficients by a m
From playlist Probability and Statistics
Stochastic Twist Maps and Symplectic Diffusions - Fraydoun Rezakhanlou
Fraydoun Rezakhanlou (University of California at Berkeley) - October 28, 2011. I discuss two examples of random symplectic maps in this talk. As the first example, consider a stochastic twist map, defined to be a stationary ergodic twist map on a planar strip. As a natural question, I di
From playlist Mathematics
Basic stochastic simulation b: Stochastic simulation algorithm
(C) 2012-2013 David Liao (lookatphysics.com) CC-BY-SA. Steps: specify the system; determine the duration until the next event (exponentially distributed waiting times); determine what kind of reaction the next event will be. For more information, please search the internet for "stochastic simulation algorithm" or "kin
From playlist Probability, statistics, and stochastic processes
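The steps listed in the description above (draw an exponentially distributed waiting time, then choose which reaction fires) can be sketched for a simple birth-death process. This is an illustrative example only; the function name and rate values are invented for the sketch:

```python
import random

def gillespie_birth_death(n0=50, birth=0.9, death=1.0, t_max=20.0, seed=1):
    """Stochastic simulation algorithm for a birth-death process:
       A -> 2A with propensity birth*n,  A -> 0 with propensity death*n."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while n > 0 and t < t_max:
        a_birth, a_death = birth * n, death * n
        a_total = a_birth + a_death
        # Duration until the next event: exponential in the total propensity.
        t += rng.expovariate(a_total)
        # What kind of reaction the next event will be,
        # chosen with probability proportional to its propensity.
        if rng.random() < a_birth / a_total:
            n += 1
        else:
            n -= 1
        trajectory.append((t, n))
    return trajectory

traj = gillespie_birth_death()
print(traj[-1])
```

Because the death rate slightly exceeds the birth rate here, the population drifts toward extinction, but each individual run is a different random trajectory.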
Stochastic Gradient Descent, Clearly Explained!!!
Even though Stochastic Gradient Descent sounds fancy, it is just a simple addition to "regular" Gradient Descent. This video sets up the problem that Stochastic Gradient Descent solves and then shows how it does it. Along the way, we discuss situations where Stochastic Gradient Descent is
From playlist StatQuest
Stochastic Resetting - CEB T2 2017 - Evans - 1/3
Martin Evans (Edinburgh) - 09/05/2017. Stochastic Resetting: We consider resetting a stochastic process by returning it to the initial condition at a fixed rate. Resetting is a simple way of generating a nonequilibrium stationary state, in the sense that the process is held away from any eq
From playlist 2017 - T2 - Stochastic Dynamics out of Equilibrium - CEB Trimester
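The resetting mechanism described in the abstract can be simulated directly. A minimal sketch (function name and parameter values are invented for illustration): Brownian walkers that jump back to the origin at rate r, whose long-time positions sample the nonequilibrium stationary state, known to be a Laplace distribution with decay length sqrt(D/r):

```python
import math, random

def diffuse_with_resetting(r=1.0, D=1.0, t_final=10.0, dt=0.01,
                           n_walkers=1000, seed=0):
    """Brownian motion reset to the origin at rate r.  Returns final
    positions, which for large t_final sample the stationary state."""
    rng = random.Random(seed)
    sigma = math.sqrt(2 * D * dt)        # diffusive step size per dt
    positions = []
    for _ in range(n_walkers):
        x, t = 0.0, 0.0
        while t < t_final:
            if rng.random() < r * dt:    # a reset event in this time step
                x = 0.0
            else:
                x += rng.gauss(0.0, sigma)
            t += dt
        positions.append(x)
    return positions

xs = diffuse_with_resetting()
mean_abs = sum(abs(x) for x in xs) / len(xs)
# The stationary state has mean |x| = sqrt(D/r), i.e. about 1 here.
print(mean_abs)
```

Without resetting the walkers would spread without bound; the resets hold the distribution in a stationary profile around the origin, which is the nonequilibrium stationary state the talk discusses.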
Probabilistic methods in statistical physics for extreme statistics... - 18 September 2018
http://crm.sns.it/event/420/ Probabilistic methods in statistical physics for extreme statistics and rare events Partially supported by UFI (Université Franco-Italienne) In this first introductory workshop, we will present recent advances in analysis, probability of rare events, search p
From playlist Centro di Ricerca Matematica Ennio De Giorgi
Efficiency of a Stochastic Search with Punctual and Costly Restarts by Sandeep Krishna
Date & Time: 17 February 2017 to 19 February 2017. Venue: Ramanujan Lecture Hall, ICTS, Bengaluru. This is an annual discussion meeting of the Indian statistical physics community, attended by scientists, postdoctoral fellows, and graduate students from across the country, working
From playlist Indian Statistical Physics Community Meeting 2017
Biophysics of Killing (Immuno-Biophysics) by Heiko Rieger
PROGRAM: STATISTICAL BIOLOGICAL PHYSICS: FROM SINGLE MOLECULE TO CELL. ORGANIZERS: Debashish Chowdhury (IIT-Kanpur, India), Ambarish Kunwar (IIT-Bombay, India) and Prabal K Maiti (IISc, India). DATE: 11 October 2022 to 22 October 2022. VENUE: Ramanujan Lecture Hall. 'Fluctuation-and-noise' a
From playlist STATISTICAL BIOLOGICAL PHYSICS: FROM SINGLE MOLECULE TO CELL (2022)
Stochastic Resetting - CEB T2 2017 - Evans - 2/3
Martin Evans (Edinburgh) - 10/05/2017. Stochastic Resetting: We consider resetting a stochastic process by returning it to the initial condition at a fixed rate. Resetting is a simple way of generating a nonequilibrium stationary state, in the sense that the process is held away from any eq
From playlist 2017 - T2 - Stochastic Dynamics out of Equilibrium - CEB Trimester
Nuclear Traffic and Transport: How Proteins Search their Target Sites? by Arnab Bhattacharjee
From playlist STATISTICAL BIOLOGICAL PHYSICS: FROM SINGLE MOLECULE TO CELL (2022)
Kinetochore Capture by Spindle Microtubules: A Study of First Passage Process.... by Amitabha Nandi
From playlist STATISTICAL BIOLOGICAL PHYSICS: FROM SINGLE MOLECULE TO CELL (2022)
First Passage Time in Stochastic Resetting Process With Finite Time Return by Priyo Pal
DISCUSSION MEETING 8TH INDIAN STATISTICAL PHYSICS COMMUNITY MEETING ORGANIZERS: Ranjini Bandyopadhyay (RRI, India), Abhishek Dhar (ICTS-TIFR, India), Kavita Jain (JNCASR, India), Rahul Pandit (IISc, India), Samriddhi Sankar Ray (ICTS-TIFR, India), Sanjib Sabhapandit (RRI, India) and Prer
From playlist 8th Indian Statistical Physics Community Meeting-ispcm 2023
Algorithms you Need for Deep Learning: Stochastic Gradient Updates for Logistic Regression
Guest: https://hhexiy.github.io/ This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: https://go.umd.edu/jbg-inst-808 (including homeworks and reading). Music: https://soundcloud.co
From playlist Deep Learning for Information Scientists
Grigorios A Pavliotis: Accelerating convergence and reducing variance for Langevin samplers
Markov Chain Monte Carlo (MCMC) is a standard methodology for sampling from probability distributions (known up to the normalization constant) in high dimensions. There are (infinitely) many diff
From playlist HIM Lectures 2015
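As background for the abstract above, a basic (unadjusted) Langevin sampler can be sketched as follows. This is a hedged illustration, not the talk's accelerated variant: the function name, step size, and Gaussian target are chosen for the example.

```python
import math, random

def langevin_sampler(grad_U, x0=0.0, step=0.01, n_steps=50000, seed=0):
    """Unadjusted Langevin algorithm:
       x' = x - step * grad_U(x) + sqrt(2 * step) * noise.
    For small step sizes the iterates approximately sample exp(-U(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x = x - step * grad_U(x) + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

# Target: standard Gaussian, U(x) = x**2 / 2, so grad_U(x) = x.
samples = langevin_sampler(lambda x: x)
burn = samples[5000:]                      # discard burn-in
mean = sum(burn) / len(burn)
var = sum((s - mean) ** 2 for s in burn) / len(burn)
print(mean, var)  # roughly 0 and 1 for the standard Gaussian target
```

The slow mixing of such samplers in high dimensions is exactly what motivates the variance-reduction and acceleration techniques the talk is about.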