Machine learning algorithms

Loss functions for classification

In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to). Given $\mathcal{X}$ as the space of all possible inputs (usually $\mathcal{X} \subset \mathbb{R}^d$), and $\mathcal{Y} = \{-1, 1\}$ as the set of labels (possible outputs), a typical goal of classification algorithms is to find a function $f : \mathcal{X} \to \mathcal{Y}$ which best predicts a label $y$ for a given input $\vec{x}$. However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the same $\vec{x}$ to generate different $y$. As a result, the goal of the learning problem is to minimize expected loss (also known as the risk), defined as

$$I[f] = \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y)\, p(\vec{x}, y)\, d\vec{x}\, dy$$

where $V(f(\vec{x}), y)$ is a given loss function, and $p(\vec{x}, y)$ is the probability density function of the process that generated the data, which can equivalently be written as

$$p(\vec{x}, y) = p(y \mid \vec{x})\, p(\vec{x}).$$

Within classification, several commonly used loss functions are written solely in terms of the product of the true label $y$ and the predicted label $f(\vec{x})$. Therefore, they can be defined as functions of only one variable $\upsilon = y f(\vec{x})$, so that $V(f(\vec{x}), y) = \phi(y f(\vec{x})) = \phi(\upsilon)$ with a suitably chosen function $\phi : \mathbb{R} \to \mathbb{R}$. These are called margin-based loss functions. Choosing a margin-based loss function amounts to choosing $\phi$. Selection of a loss function within this framework impacts the optimal $f^{*}_{\phi}$ which minimizes the expected risk.

In the case of binary classification, it is possible to simplify the calculation of expected risk from the integral specified above. Specifically,

$$\begin{aligned} I[f] &= \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y)\, p(\vec{x}, y)\, d\vec{x}\, dy \\ &= \int_{\mathcal{X}} \int_{\mathcal{Y}} \phi(y f(\vec{x}))\, p(y \mid \vec{x})\, p(\vec{x})\, dy\, d\vec{x} \\ &= \int_{\mathcal{X}} \left[ \phi(f(\vec{x}))\, p(1 \mid \vec{x}) + \phi(-f(\vec{x}))\, p(-1 \mid \vec{x}) \right] p(\vec{x})\, d\vec{x} \\ &= \int_{\mathcal{X}} \left[ \phi(f(\vec{x}))\, p(1 \mid \vec{x}) + \phi(-f(\vec{x}))\, (1 - p(1 \mid \vec{x})) \right] p(\vec{x})\, d\vec{x}. \end{aligned}$$

The second equality follows from the properties described above. The third equality follows from the fact that 1 and −1 are the only possible values for $y$, and the fourth because $p(-1 \mid \vec{x}) = 1 - p(1 \mid \vec{x})$. The term within brackets, $\phi(f(\vec{x}))\, p(1 \mid \vec{x}) + \phi(-f(\vec{x}))\, (1 - p(1 \mid \vec{x}))$, is known as the conditional risk.

One can solve for the minimizer of $I[f]$ by taking the functional derivative of the last equality with respect to $f$ and setting the derivative equal to 0. Writing $\eta = p(1 \mid \vec{x})$, this results in the following equation

$$\frac{\partial \phi(f)}{\partial f}\, \eta + \frac{\partial \phi(-f)}{\partial f}\, (1 - \eta) = 0,$$

which is also equivalent to setting the derivative of the conditional risk equal to zero.

Given the binary nature of classification, a natural selection for a loss function (assuming equal cost for false positives and false negatives) would be the 0–1 loss function (0–1 indicator function), which takes the value of 0 if the predicted classification equals that of the true class, or 1 if the predicted classification does not match the true class. This selection is modeled by

$$V(f(\vec{x}), y) = H(-y f(\vec{x})),$$

where $H$ indicates the Heaviside step function. However, this loss function is non-convex and non-smooth, and solving for the optimal solution is an NP-hard combinatorial optimization problem. As a result, it is better to substitute surrogate loss functions which are tractable for commonly used learning algorithms, as they have convenient properties such as being convex and smooth. In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow for the recovery of the actual solution to the original classification problem. Some of these surrogates are described below.

In practice, the probability distribution $p(\vec{x}, y)$ is unknown. Consequently, utilizing a training set of $n$ independently and identically distributed sample points

$$S = \{(\vec{x}_1, y_1), \dots, (\vec{x}_n, y_n)\}$$

drawn from the data sample space, one seeks to minimize empirical risk

$$I_S[f] = \frac{1}{n} \sum_{i=1}^{n} V(f(\vec{x}_i), y_i)$$

as a proxy for expected risk. (See statistical learning theory for a more detailed description.) (Wikipedia).
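To make the margin-based framework above concrete, here is a minimal Python sketch (the function names zero_one, hinge, logistic, exponential, and empirical_risk, and the toy data, are illustrative choices, not from the excerpt). It evaluates the 0–1 loss $H(-y f(\vec{x}))$ and three standard convex surrogates as functions of the margin $\upsilon = y f(\vec{x})$, then computes the empirical risk $I_S[f]$ as the sample average of the chosen loss.

```python
import numpy as np

# Margin-based losses: each is a function phi of the margin v = y * f(x).

def zero_one(v):
    # 0-1 loss written as H(-y f(x)) with H the Heaviside step function;
    # a zero margin is counted as an error here (H(0) taken as 1).
    return np.heaviside(-v, 1.0)

def hinge(v):
    # Hinge loss, the convex surrogate used by SVMs: phi(v) = max(0, 1 - v).
    return np.maximum(0.0, 1.0 - v)

def logistic(v):
    # Logistic loss, convex and smooth: phi(v) = log2(1 + exp(-v)),
    # scaled so that phi(0) = 1, matching the 0-1 loss at the origin.
    return np.log1p(np.exp(-v)) / np.log(2.0)

def exponential(v):
    # Exponential loss, the surrogate minimized by AdaBoost: phi(v) = exp(-v).
    return np.exp(-v)

def empirical_risk(phi, f, X, y):
    # I_S[f] = (1/n) * sum_i phi(y_i * f(x_i)): the sample average that
    # serves as a proxy for the expected risk when p(x, y) is unknown.
    return float(np.mean(phi(y * f(X))))

# Toy one-dimensional sample with labels in {-1, +1} and a linear score.
X = np.array([-1.0, -0.5, 0.3, 0.8, 1.5])
y = np.array([-1.0, -1.0, 1.0, -1.0, 1.0])
f = lambda x: 2.0 * x - 1.0

for name, phi in [("0-1", zero_one), ("hinge", hinge),
                  ("logistic", logistic), ("exponential", exponential)]:
    print(f"{name:12s} empirical risk: {empirical_risk(phi, f, X, y):.3f}")
```

Each of these surrogates upper-bounds the 0–1 loss at every margin, so driving the surrogate's empirical risk down also controls the misclassification rate; this is one intuition behind the substitution of tractable surrogates discussed above.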

Loss functions for classification

Represent a Discrete Function Using Ordered Pairs, a Table, and Function Notation

This video explains how to represent a discrete function, given as points, using ordered pairs, a table, and function notation. http://mathispower4u.com

From playlist Introduction to Functions: Function Basics


Category Theory 1.2: What is a category?

What is a Category?

From playlist Category Theory


When is a function bounded below?

👉 Learn about the characteristics of a function. Given a function, we can determine the characteristics of the function's graph. We can determine the end behavior of the graph of the function (rises or falls left and rises or falls right). We can determine the number of zeros of the functi

From playlist Characteristics of Functions


Determining the extrema as well as zeros of a polynomial based on the graph

👉 Learn how to determine the extrema from a graph. The extrema of a function are the critical points or the turning points of the function. They are the points where the graph changes from increasing to decreasing or vice versa. They are the points where the graph turns. The points where

From playlist Characteristics of Functions


XGBoost Part 3 (of 4): Mathematical Details

In this video we dive into the nitty-gritty details of the math behind XGBoost trees. We derive the equations for the Output Values from the leaves as well as the Similarity Score. Then we show how these general equations are customized for Regression or Classification by their respective

From playlist StatQuest


What are bounded functions and how do you determine the boundness

👉 Learn about the characteristics of a function. Given a function, we can determine the characteristics of the function's graph. We can determine the end behavior of the graph of the function (rises or falls left and rises or falls right). We can determine the number of zeros of the functi

From playlist Characteristics of Functions


Stanford EE104: Introduction to Machine Learning | 2020 | Lecture 17 - ERM for probabilistic classif.

Professor Sanjay Lall, Electrical Engineering.
To follow along with the course schedule and syllabus, visit: http://ee104.stanford.edu
To view all online courses and programs offered by Stanford, visit: https://online.stanford.edu/

From playlist Stanford EE104: Introduction to Machine Learning Full Course


Artificial Intelligence & Machine learning 3 - Linear Classification | Stanford CS221 (Autumn 2021)

For more information about Stanford's Artificial Intelligence professional and graduate programs, visit: https://stanford.io/ai
Associate Professor Percy Liang, Associate Professor of Computer Science and Statistics (courtesy): https://profiles.stanford.edu/percy-liang
Assistant Professor

From playlist Stanford CS221: Artificial Intelligence: Principles and Techniques | Autumn 2021


Machine Learning 1 - Linear Classifiers, SGD | Stanford CS221: AI (Autumn 2019)

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/3nAk9O3
Topics: Linear classification, Loss minimization, Stochastic gradient descent
Percy Liang, Associate Professor & Dorsa Sadigh, Assistant Professor - Stanfor

From playlist Stanford CS221: Artificial Intelligence: Principles and Techniques | Autumn 2019


Stanford EE104: Introduction to Machine Learning | 2020 | Lecture 14 - Boolean classification

Professor Sanjay Lall, Electrical Engineering.
To follow along with the course schedule and syllabus, visit: http://ee104.stanford.edu
To view all online courses and programs offered by Stanford, visit: https://online.stanford.edu/

From playlist Stanford EE104: Introduction to Machine Learning Full Course


Loss Functions - EXPLAINED!

Many animations used in this video came from Jonathan Barron [1, 2]. Give this researcher a like for his hard work!
SUBSCRIBE FOR MORE CONTENT!
RESOURCES
[1] Paper on adaptive loss function: https://arxiv.org/abs/1701.03077
[2] CVPR paper presentation: https://www.youtube.com/watch?v=Bm

From playlist Deep Learning 101


Neural Networks for Images

To learn more about Wolfram Technology Conference, please visit: https://www.wolfram.com/events/technologyconference/
Speaker: Markus van Almsick
Wolfram developers and colleagues discussed the latest in innovative technologies for cloud computing, interactive deployment, mobile devices,

From playlist Wolfram Technology Conference 2017


Characteristics of functions

👉 Learn about the characteristics of a function. Given a function, we can determine the characteristics of the function's graph. We can determine the end behavior of the graph of the function (rises or falls left and rises or falls right). We can determine the number of zeros of the functi

From playlist Characteristics of Functions


Lecture 3 | Loss Functions and Optimization

Lecture 3 continues our discussion of linear classifiers. We introduce the idea of a loss function to quantify our unhappiness with a model’s predictions, and discuss two commonly used loss functions for image classification: the multiclass SVM loss and the multinomial logistic regression

From playlist Lecture Collection | Convolutional Neural Networks for Visual Recognition (Spring 2017)

Related pages

LogitBoost | Bayes' theorem | False positives and false negatives | Deep learning | Subgradient method | Mathematical optimization | Quadratic programming | AdaBoost | Indicator function | Probability density function | Tikhonov regularization | Gradient boosting | Statistical classification | Kullback–Leibler divergence | Sample space | Statistical learning theory | Differentiable programming | Gradient descent | Cross entropy | Cross-validation (statistics) | Heaviside step function | Stochastic gradient descent