Continuous distributions | Q-analogs

Gaussian q-distribution

In mathematical physics, probability, and statistics, the Gaussian q-distribution is a family of probability distributions that includes, as limiting cases, the uniform distribution and the normal (Gaussian) distribution. Introduced by Diaz and Teruel, it is a q-analog of the Gaussian or normal distribution. The distribution is symmetric about zero and is bounded, except in the limiting case of the normal distribution. The limiting uniform distribution is supported on the range -1 to +1. (Wikipedia).
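For reference, the Diaz–Teruel density can be sketched as follows. This is a hedged summary of the standard statement, to be checked against the original paper: here $E_q^x$ denotes the q-exponential, $[2]_q = 1 + q$, and $c(q)$ is a normalization constant defined through a Jackson integral.

```latex
% Gaussian q-distribution for 0 < q < 1 (sketch; verify against the source)
s_q(x) = \frac{1}{c(q)}\, E_{q^2}^{\,-\frac{q^2 x^2}{[2]_q}},
\qquad -\nu \le x \le \nu, \qquad \nu = \frac{1}{\sqrt{1-q}} .
```

As $q \to 1^-$ this recovers the standard normal distribution, and as $q \to 0$ the support $[-\nu, \nu]$ tends to $[-1, 1]$ and the distribution tends to the uniform distribution on that interval, matching the limiting cases described above.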

Multivariate Gaussian distributions

Properties of the multivariate Gaussian probability distribution

From playlist cs273a

(PP 6.8) Marginal distributions of a Gaussian

For any subset of the coordinates of a multivariate Gaussian, the marginal distribution is multivariate Gaussian.

From playlist Probability Theory
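The marginalization fact stated above can be verified numerically: the marginal of any subset of coordinates has mean and covariance given by simply selecting the matching entries of the full mean and covariance. A NumPy sketch (parameters and sample size are illustrative choices, not taken from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

# Full mean and covariance of a 3-d Gaussian.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

X = rng.multivariate_normal(mu, Sigma, size=200_000)

# Marginal of coordinates (0, 2): its parameters are just mu[[0, 2]]
# and the corresponding 2x2 sub-block of Sigma -- no integration needed.
idx = [0, 2]
emp_mean = X[:, idx].mean(axis=0)
emp_cov = np.cov(X[:, idx], rowvar=False)

print(emp_mean)   # close to [1.0, 0.5]
print(emp_cov)    # close to [[2.0, 0.3], [0.3, 1.5]]
```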

Gaussian/Normal Distributions

In this video we discuss the Gaussian (AKA normal) probability distribution function. We show how it relates to the error function (erf) and discuss how to use this distribution analytically and numerically, for example when analyzing real-life sensor data or performing simulation of stochastic processes.

From playlist Probability
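The relation to the error function mentioned in this description is the standard identity Phi(x) = (1/2)(1 + erf((x - mu) / (sigma * sqrt(2)))). A minimal Python sketch:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian CDF expressed through the error function:
    Phi(x) = 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

print(normal_cdf(0.0))    # 0.5 by symmetry
print(normal_cdf(1.96))   # ~0.975, the familiar two-sided 95% point
```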

(PP 6.1) Multivariate Gaussian - definition

Introduction to the multivariate Gaussian (or multivariate Normal) distribution.

From playlist Probability Theory
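For reference, the multivariate Gaussian density that such an introduction defines can be evaluated directly from its parameters. A NumPy sketch of the log-density (working in log space is numerically stabler than evaluating the raw density):

```python
import numpy as np

def mvn_logpdf(x, mu, Sigma):
    """Log-density of N(mu, Sigma):
    -0.5 * [k*log(2*pi) + log|Sigma| + (x-mu)^T Sigma^{-1} (x-mu)]."""
    k = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    maha = diff @ np.linalg.solve(Sigma, diff)   # Mahalanobis term
    return -0.5 * (k * np.log(2 * np.pi) + logdet + maha)

x = np.zeros(2)
mu = np.zeros(2)
Sigma = np.eye(2)
print(mvn_logpdf(x, mu, Sigma))   # log(1/(2*pi)), about -1.8379
```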

(PP 6.3) Gaussian coordinates does not imply (multivariate) Gaussian

An example illustrating the fact that a vector of Gaussian random variables is not necessarily (multivariate) Gaussian.

From playlist Probability Theory
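A standard counterexample of this kind (not necessarily the one used in the video) takes X ~ N(0, 1) and Y = S*X for an independent random sign S. Each coordinate is Gaussian, but the pair is not jointly Gaussian, since X + Y has an atom at zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# X ~ N(0,1); Y = S*X with S an independent random sign. Flipping the
# sign preserves the symmetric density, so Y is also N(0,1). But (X, Y)
# is not jointly Gaussian: X + Y equals 2X or exactly 0, each with
# probability 1/2, so the sum has a point mass at zero -- impossible
# for any nondegenerate Gaussian vector.
n = 100_000
X = rng.standard_normal(n)
S = rng.choice([-1.0, 1.0], size=n)
Y = S * X

print(Y.mean(), Y.std())      # Y looks standard normal
print(np.mean(X + Y == 0.0))  # ~0.5: point mass at zero in X + Y
```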

(ML 7.10) Posterior distribution for univariate Gaussian (part 2)

Computing the posterior distribution for the mean of the univariate Gaussian, with a Gaussian prior (assuming known prior mean, and known variances). The posterior is Gaussian, showing that the Gaussian is a conjugate prior for the mean of a Gaussian.

From playlist Machine Learning
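The closed-form posterior described here follows from precision-weighted averaging: precisions add, and the posterior mean is a precision-weighted combination of the prior mean and the sample mean. A minimal Python sketch of the update (symbols mirror the usual mu0, sigma0^2, sigma^2 notation; the numbers are illustrative):

```python
import numpy as np

def gaussian_mean_posterior(x, mu0, sigma0_sq, sigma_sq):
    """Posterior over the mean of N(mu, sigma_sq) given data x, with a
    N(mu0, sigma0_sq) prior on mu (both variances assumed known).
    Precisions add; the mean is a precision-weighted average."""
    n = len(x)
    post_var = 1.0 / (1.0 / sigma0_sq + n / sigma_sq)
    post_mean = post_var * (mu0 / sigma0_sq + np.sum(x) / sigma_sq)
    return post_mean, post_var

x = np.array([1.0, 3.0])   # two observations, sample mean 2.0
post_mean, post_var = gaussian_mean_posterior(
    x, mu0=0.0, sigma0_sq=1.0, sigma_sq=1.0)
print(post_mean, post_var)  # (4/3, 1/3)
```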

(ML 7.9) Posterior distribution for univariate Gaussian (part 1)

Computing the posterior distribution for the mean of the univariate Gaussian, with a Gaussian prior (assuming known prior mean, and known variances). The posterior is Gaussian, showing that the Gaussian is a conjugate prior for the mean of a Gaussian.

From playlist Machine Learning

A Gentle Introduction to the Normal Probability Distribution (10-4)

A normal distribution models…pretty much everything! The normal curve is the idealized distribution: a smooth, continuous, symmetrical line. It is used with interval- and ratio-scale (continuous) data. The most frequent score is the middle score, with less frequent scores above and below it.

From playlist Continuous Probability Distributions in Statistics (WK 10 - QBA 237)

Stanford CS330 I Variational Inference and Generative Models l 2022 I Lecture 11

For more information about Stanford's Artificial Intelligence programs visit: https://stanford.io/ai To follow along with the course, visit: https://cs330.stanford.edu/ To view all online courses and programs offered by Stanford, visit: http://online.stanford.edu Chelsea Finn, Computer Science

From playlist Stanford CS330: Deep Multi-Task and Meta Learning I Autumn 2022

Stanford CS330: Deep Multi-task & Meta Learning I 2021 I Lecture 7

For more information about Stanford's Artificial Intelligence professional and graduate programs visit: https://stanford.io/ai To follow along with the course, visit: http://cs330.stanford.edu/fall2021/index.html To view all online courses and programs offered by Stanford, visit: http:/

From playlist Stanford CS330: Deep Multi-Task & Meta Learning I Autumn 2021I Professor Chelsea Finn

Stanford CS229: Machine Learning | Summer 2019 | Lecture 17 - Factor Analysis & ELBO

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3E4MouM Anand Avati Computer Science, PhD To follow along with the course schedule and syllabus, visit: http://cs229.stanford.edu/syllabus-summer2019.html

From playlist Stanford CS229: Machine Learning Course | Summer 2019 (Anand Avati)

Stanford CS330: Deep Multi-task and Meta Learning | 2020 | Lecture 8 - Bayesian Meta-Learning

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/ai To follow along with the course, visit: https://cs330.stanford.edu/ To view all online courses and programs offered by Stanford, visit: http://online.stanford.

From playlist Stanford CS330: Deep Multi-task and Meta Learning | Autumn 2020

Stanford CS229: Machine Learning | Summer 2019 | Lecture 16 - K-means, GMM, and EM

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3njDenA Anand Avati Computer Science, PhD To follow along with the course schedule and syllabus, visit: http://cs229.stanford.edu/syllabus-summer2019.html

From playlist Stanford CS229: Machine Learning Course | Summer 2019 (Anand Avati)

Stanford CS330: Multi-Task and Meta-Learning, 2019 | Lecture 5 - Bayesian Meta-Learning

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/ai Assistant Professor Chelsea Finn, Stanford University http://cs330.stanford.edu/

From playlist Stanford CS330: Deep Multi-Task and Meta Learning

A polynomial lower bound for monotonicity testing...- Rocco Servedio

Rocco Servedio, Columbia University, March 31, 2014. We prove an Ω̃(n^{1/5}) lower bound on the query complexity of any non-adaptive two-sided-error algorithm for testing whether an unknown n-variable Boolean function is monotone versus constant-far from monotone. This gives an exponential improvement on previously known lower bounds.

From playlist Mathematics

Stanford CS330: Deep Multi-task and Meta Learning | 2020 | Lecture 13: A Graphical Model Perspective

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/ai A Graphical Model Perspective on Multi-Task and Meta-RL To follow along with the course, visit: https://cs330.stanford.edu/ To view all online courses and pro

From playlist Stanford CS330: Deep Multi-task and Meta Learning | Autumn 2020

Lecture 13 | Machine Learning (Stanford)

Lecture by Professor Andrew Ng for Machine Learning (CS 229) in the Stanford Computer Science department. Professor Ng lectures on expectation-maximization in the context of the mixture of Gaussian and naive Bayes models, as well as factor analysis and digression. This course provides

From playlist Lecture Collection | Machine Learning

Probability 101d: Central limit theorem

(C) 2012 David Liao, lookatphysics.com, CC-BY-SA (replaces a previous unscripted draft). Topics: many independent events; the binomial distribution in the limit of many coin tosses; the Gaussian distribution.

From playlist Probability, statistics, and stochastic processes
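The binomial-to-Gaussian limit sketched in this video can be checked empirically: the standardized sum of n fair coin tosses is approximately standard normal for large n. A NumPy sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sum of n fair coin tosses, standardized: z = (S_n - n/2) / sqrt(n/4).
# By the central limit theorem, z is approximately N(0, 1) for large n.
n, trials = 400, 100_000
s = rng.binomial(n, 0.5, size=trials)
z = (s - n / 2) / np.sqrt(n / 4)

print(z.mean(), z.std())   # close to 0 and 1
frac = np.mean(z <= 1.0)
print(frac)                # roughly Phi(1) ~ 0.84
```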

(PP 6.9) Conditional distributions of a Gaussian

For any subset of the coordinates of a multivariate Gaussian, the conditional distribution (given the remaining coordinates) is multivariate Gaussian.

From playlist Probability Theory
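The conditioning fact stated above has a closed form via the Schur complement: conditioning shifts the mean by a regression term and shrinks the covariance. A NumPy sketch (indices and parameters are illustrative):

```python
import numpy as np

def gaussian_conditional(mu, Sigma, idx_a, idx_b, x_b):
    """Conditional distribution of x_a given x_b = x_b for a joint Gaussian:
    mean_a|b = mu_a + S_ab S_bb^{-1} (x_b - mu_b)
    cov_a|b  = S_aa - S_ab S_bb^{-1} S_ba   (Schur complement)."""
    mu, Sigma = np.asarray(mu), np.asarray(Sigma)
    S_aa = Sigma[np.ix_(idx_a, idx_a)]
    S_ab = Sigma[np.ix_(idx_a, idx_b)]
    S_bb = Sigma[np.ix_(idx_b, idx_b)]
    K = S_ab @ np.linalg.inv(S_bb)          # regression coefficients
    cond_mean = mu[idx_a] + K @ (np.asarray(x_b) - mu[idx_b])
    cond_cov = S_aa - K @ S_ab.T
    return cond_mean, cond_cov

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
m, C = gaussian_conditional(mu, Sigma, [0], [1], x_b=[1.0])
print(m, C)   # mean 0.8, variance 1 - 0.8**2 = 0.36
```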

Related pages

Integral | Double factorial | Moment (mathematics) | Q-exponential | Journal of Mathematical Analysis and Applications | Q-Gaussian process | Factorial | Exponential function | Q-analog | Real number | Statistics | Probability distribution | Limiting case (mathematics) | Cumulative distribution function | Normal distribution | Jackson integral | Probability