# Cost function in linear regression

This article revolves around the cost function in machine learning and covers the different types of cost functions.

A cost function is a formula that quantifies how far a model's predictions are from the actual values, and it is used to evaluate the performance of a machine learning algorithm. Evaluating the cost function helps you make informed decisions about which machine learning algorithm to choose and how to configure it to achieve the desired results. Different models use different cost functions.

## What is the cost function?

A cost function is a way of measuring the success of an ML project. It is a numeric value that indicates the success or failure of a particular model without needing to understand its inner workings. This is useful in many situations, such as when an engineer wants to know whether the cost of building a model is worth it, or when comparing different models or different versions of the same model. In any case where it is essential to know whether a project was successful, cost functions can be beneficial.


## Why is the cost function needed?

Basically, our machine learning model predicts a new value, and we aim to have the predicted value as close as possible to the actual value. If the model predicts a value close to the actual value, we say it is a good model. This is where the cost function plays an important role: it is needed to calculate the difference between the actual and predicted values. At its simplest, it is nothing but the actual value minus the predicted value.

Cost function = (actual value - predicted value)

which is also represented as

Cost function (d) = Y - Y’

Where Y = actual value

Y’ = predicted value

Similarly, if we have to find this cost function for all data points, we can write it like this:

Cost function 0 (d0) = Y0 - Y0’

Cost function 1 (d1) = Y1 - Y1’

Cost function 2 (d2) = Y2 - Y2’

Cost function 3 (d3) = Y3 - Y3’

...

Cost function n (dn) = Yn - Yn’

So we can use a single formula to calculate the cost function over all the points, by combining these individual errors into one average value.
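
As a minimal sketch of this idea (the numbers below are made up purely for illustration, not taken from any dataset in this article), the per-point errors and their average can be computed like this:

```python
# Minimal sketch: per-point errors d_i = Y_i - Y_i' (values are made up for illustration)
actual = [3.0, 5.0, 7.0, 9.0]      # Y_i, the actual values
predicted = [2.5, 5.5, 6.0, 9.5]   # Y_i', the predicted values

# One error per data point, as in the list of equations above
errors = [y - y_hat for y, y_hat in zip(actual, predicted)]
print(errors)                       # [0.5, -0.5, 1.0, -0.5]

# Averaging the errors gives a single number for the whole dataset
print(sum(errors) / len(errors))    # 0.125
```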

The line fitting the data points, as shown in the figure, is the result of the parameters Θ0 and θ1. Therefore, the goal of any learning algorithm is to find, or choose, the best parameters that fit the dataset.

Calculate the error between the predicted and expected values according to the formula.

NOTE: The point to note here is that the lower the value of the cost function, the more accurate our model is; the higher the value of the cost function, the less accurate our model will be.

## Real-life example of cost function

Now we will understand the cost function for linear regression.

Let us first understand linear regression.

Suppose we have a housing dataset with two columns: the size of the house and the price of the house in INR. The dots shown in the figure are the data points from the dataset, which we also call the training data.

Here, y denotes the target variable (the price of the house), X denotes the size of the house, and a0 is the intercept.

The linear regression model will fit a straight line to our data, as shown in the figure, and it will tell us the price of a house when we know its size.
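
As a rough sketch of this fitting step, assuming scikit-learn is available and using a small made-up housing dataset (the sizes and prices below are illustrative assumptions, not figures from this article):

```python
# Sketch: fit a straight line to a made-up housing dataset with scikit-learn
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[600], [800], [1000], [1200], [1500]])  # size of the house in sq ft
y = np.array([32, 41, 52, 65, 80])                    # price of the house in lakhs INR

model = LinearRegression()
model.fit(X, y)

print(model.intercept_, model.coef_[0])   # intercept (a0) and slope of the fitted line
print(model.predict([[1300]])[0])         # predicted price for a 1300 sq ft house
```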

Suppose this housing company sells houses with areas between 600 and 1500 square feet, so the prices vary according to the size of the plot. For example:

House (1200 to 1500 square feet) = 65 lakhs.

Now suppose there is a sudden rise in the demand for houses, which means the price of this house will increase. Say the price has now risen to 75 lakhs. But what if the model predicted its price as 30 lakhs? This difference between the actual and predicted values is what the cost function calculates.
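
Plugging the numbers from this example into the cost function above gives the size of the error:

```python
# Worked example with the figures from the text (prices in lakhs INR)
actual_price = 75      # price after the rise in demand
predicted_price = 30   # price predicted by the model

d = actual_price - predicted_price
print(d)               # 45 -> the prediction is off by 45 lakhs
```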

## Types of cost functions in linear regression

### Regression cost function

Regression models are used to predict continuous variables, such as employee salaries, car costs, availability of loans, etc. The "regression cost function" is the cost function used in regression problems. It is based on a distance-based error, determined as follows:

error = y - y’

where y is the actual output and y’ is the predicted output.

When this error is squared and summed over all the data points, the resulting cost function is known as the squared error function. It is the most commonly used cost function in linear regression because it is simple and works well.

### 1. Mean error

This cost function computes the error for all training data and derives the average of all these errors. Computing the mean of the errors is the most straightforward and most intuitive method possible.

These errors can be negative or positive. Therefore, they can cancel each other out during the summation, and the average error of the model will be zero.

Therefore, it is not the recommended cost function but is the basis for other regression model cost functions.
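
A tiny sketch with made-up error values shows how positive and negative errors cancel out, giving a mean error of zero even though every prediction is wrong:

```python
# Sketch: the plain mean error can hide large mistakes (illustrative values)
errors = [4.0, -4.0, 2.0, -2.0]   # equal-sized positive and negative errors

mean_error = sum(errors) / len(errors)
print(mean_error)                  # 0.0 -> looks perfect, but no prediction was correct
```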

### 2. Mean squared error

Mean squared error is one of the most commonly used and earliest described regression measures. MSE represents the mean square of the difference between the prediction and the expected result. In other words, MSE is a variation of MAE that squares the difference instead of taking its absolute value, so negative errors cannot cancel out positive ones.

MSE = (1/N) × Σ (Yj - Y^j)²

Where:

Yj: actual value

Y^j: predicted value from the regression model

N: number of data points
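
A minimal sketch of computing MSE directly from this formula, using made-up values for illustration:

```python
# Sketch: mean squared error computed from the formula above (illustrative values)
actual = [3.0, 5.0, 7.0, 9.0]       # Yj
predicted = [2.5, 5.5, 6.0, 9.5]    # Y^j

mse = sum((y - y_hat) ** 2 for y, y_hat in zip(actual, predicted)) / len(actual)
print(mse)                           # 0.4375 -> average of the squared differences
```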

### 3. Mean absolute error

Mean absolute error is a regression metric that measures the average error size for a group of predictions, without regard to direction. In other words, it is the average absolute difference between the prediction and the expected result, with all individual differences weighted equally. The Mean Absolute Error (or MAE) therefore tells us, on average, how far the model's predictions are from the actual values.

MAE = (1/N) × Σ |Yj - Y^j|

Where:

Yj: actual value

Y^j: predicted value from the regression model

N: number of data points

The above graph shows the salary of an employee vs. experience in years. We have the actual values on the line, and the predicted values are shown with X. The average of the absolute distances between them is the mean absolute error.
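
A minimal sketch of computing MAE, reusing the same illustrative values as the MSE sketch above:

```python
# Sketch: mean absolute error computed from the formula above (illustrative values)
actual = [3.0, 5.0, 7.0, 9.0]       # Yj
predicted = [2.5, 5.5, 6.0, 9.5]    # Y^j

mae = sum(abs(y - y_hat) for y, y_hat in zip(actual, predicted)) / len(actual)
print(mae)                           # 0.625 -> average absolute difference
```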

### 4. Root mean squared error (RMSE)

It is the square root of the mean of the squared errors. Root Mean Square Error (RMSE) measures the error between two data sets; in other words, it compares an observed or known value with a predicted value.

RMSE = √( (1/n) × Σ (Oi - Si)² )

Where:

Oi = observed values

Si = predicted values of the variable

n = number of observations
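
A minimal sketch of computing RMSE, again with illustrative values:

```python
# Sketch: root mean squared error computed from the formula above (illustrative values)
import math

observed = [3.0, 5.0, 7.0, 9.0]     # Oi
predicted = [2.5, 5.5, 6.0, 9.5]    # Si

mse = sum((o - s) ** 2 for o, s in zip(observed, predicted)) / len(observed)
rmse = math.sqrt(mse)
print(rmse)                          # ~0.661 -> square root of the mean squared error
```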