Polynomial regression


“Polynomial Regression is a form of linear regression in which the relationship between the independent variable x and dependent variable y is modeled as an nth-degree polynomial. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y.”

Stated in formal terms, this definition of polynomial regression can seem overwhelming, so let’s break it down in simpler terms.

First, let’s understand the assumptions for polynomial regression:

  1. The dependent variable can be explained by a curvilinear relationship with the independent variables.
  2. The independent variables are completely independent of each other.

Real-life situations where polynomial dependence is present are:

(Source: The Use of Polynomial Functions in Real Life, by Logan Grace)

  1. Change in atmospheric pressure to predict the weather
  2. The relationship between employees’ salaries and their relative positions in the company.

To understand it, let’s start by generating some random example data and build up polynomial regression as we go.

To run and check the code:

https://www.kaggle.com/tanavbajaj/polynomial-regression

 

import numpy as np
import matplotlib.pyplot as plt

 

Importing required libraries

np.random.seed(69)
# 20 random x values, followed by a cubic function of x plus noise
x = 2 - 3 * np.random.normal(0, 1, 20)
y = x - 2 * (x ** 2) + 0.5 * (x ** 3) + np.random.normal(-3, 3, 20)
# reshape x and y into column vectors for scikit-learn
x = x[:, np.newaxis]
y = y[:, np.newaxis]

 

Next, we set a seed so that NumPy generates the same sequence of random numbers on every run, and then we generate values for our x and y.

Upon plotting it, we see

plt.scatter(x,y)
plt.show()

 

Now let’s try to model it using linear regression and plot the best fit line

from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

model = LinearRegression()
model.fit(x, y)
y_pred = model.predict(x)

plt.scatter(x, y, s=10)
plt.plot(x, y_pred, color='r')
plt.show()

 

If we observe the plot, the best-fit line tries to capture the points but is unable to do so.

If we calculate the root mean squared error (RMSE), we get:

rmse = np.sqrt(mean_squared_error(y,y_pred))
print(rmse)

 

We get an error of

24.188793024005992

 

This is quite high. Now, if we convert the features to their polynomial counterparts using the sklearn library, we should get a far lower error. Let’s see.

 

from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import r2_score

# expand x into the columns [1, x, x^2]
polynomial_features = PolynomialFeatures(degree=2)
x_poly = polynomial_features.fit_transform(x)

model = LinearRegression()
model.fit(x_poly, y)
y_poly_pred = model.predict(x_poly)

rmse = np.sqrt(mean_squared_error(y, y_poly_pred))
r2 = r2_score(y, y_poly_pred)
print(rmse)

 

We get an output of

9.51210712644908

 

This time the error is reduced. If we plot the curve:

import operator
plt.scatter(x, y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(x,y_poly_pred), key=sort_axis)
x, y_poly_pred = zip(*sorted_zip)
plt.plot(x, y_poly_pred, color='m')
plt.show()

 

We will get a curve that looks like this:

We import the operator library so we can sort the values of x along with their corresponding predicted y values, so that the curve is plotted in an easy-to-visualize manner. If we plot it without sorting, we get a jagged, hard-to-read curve instead.

The lower root mean squared error indicates that the polynomial function fits our data better.

So, in simpler words, polynomial regression finds the best-fit curve for given nonlinear data, one that generalizes the data as well as possible.

In other words, polynomial regression finds coefficients for higher-degree features: the original feature columns are expanded into higher-degree terms. For example, if the data has a column for age and a column for crime, a higher-degree feature might be the product of the crime and age columns, as sketched below.
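
As a small illustration (the age and crime columns here are hypothetical, and get_feature_names_out assumes a recent scikit-learn version), PolynomialFeatures can generate these higher-degree and interaction columns automatically:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# hypothetical data: one column for age, one for a crime rate
X = np.array([[25, 3.1],
              [40, 1.2],
              [33, 2.4]])

# degree=2 adds age^2, age*crime and crime^2 to the original columns
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)

print(poly.get_feature_names_out(["age", "crime"]))
print(X_poly)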

So the linear regression equation y = mx + b converts to a polynomial equation such as y = ax^2 + bx + c.

Maths behind Polynomial Regression:


The maths behind linear and polynomial regression is quite similar. In our linear regression release, m and b were found for the equation y = mx + b using differentiation. Similarly, in polynomial regression, the values of a, b, and c are found for the equation y = ax^2 + bx + c.

The partial derivatives for a, b and c, respectively, are:
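
Writing the sum of squared errors as $S(a, b, c) = \sum_{i=1}^{n} \left(y_i - a x_i^2 - b x_i - c\right)^2$, the standard least-squares derivatives are:

$$\frac{\partial S}{\partial a} = -2 \sum_{i=1}^{n} x_i^2 \left(y_i - a x_i^2 - b x_i - c\right)$$

$$\frac{\partial S}{\partial b} = -2 \sum_{i=1}^{n} x_i \left(y_i - a x_i^2 - b x_i - c\right)$$

$$\frac{\partial S}{\partial c} = -2 \sum_{i=1}^{n} \left(y_i - a x_i^2 - b x_i - c\right)$$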

For finding the minima, equate the derivatives to 0
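
Doing so and rearranging gives the three normal equations:

$$a \sum x_i^4 + b \sum x_i^3 + c \sum x_i^2 = \sum x_i^2 y_i$$

$$a \sum x_i^3 + b \sum x_i^2 + c \sum x_i = \sum x_i y_i$$

$$a \sum x_i^2 + b \sum x_i + c\,n = \sum y_i$$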

Representing these equations in matrix form looks like this:
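
$$\begin{bmatrix} \sum x_i^4 & \sum x_i^3 & \sum x_i^2 \\ \sum x_i^3 & \sum x_i^2 & \sum x_i \\ \sum x_i^2 & \sum x_i & n \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} \sum x_i^2 y_i \\ \sum x_i y_i \\ \sum y_i \end{bmatrix}$$

Solving this 3×3 system gives the coefficients a, b, and c.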

This was an example of how it works for an equation with the power of 2. Similar matrices can be created for all powers, and a generalized equation looks like this:
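
For a polynomial $y = a_0 + a_1 x + \dots + a_m x^m$ fitted to $n$ points, the normal equations become:

$$\begin{bmatrix} n & \sum x_i & \cdots & \sum x_i^m \\ \sum x_i & \sum x_i^2 & \cdots & \sum x_i^{m+1} \\ \vdots & \vdots & \ddots & \vdots \\ \sum x_i^m & \sum x_i^{m+1} & \cdots & \sum x_i^{2m} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_m \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \vdots \\ \sum x_i^m y_i \end{bmatrix}$$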

Here m represents the degree of the polynomial (highest power of the equation), and n represents the number of known data points.

Limitations of Polynomial regression:

(Source: Advantages and Disadvantages of Different Regression Models, GeeksforGeeks)

One of the main problems of polynomial regression is that one or two outliers can significantly impact the equation. There are also far fewer model-validation methods for detecting outliers in a nonlinear regression problem than in a linear one.

Overfitting and Underfitting: Analyzing the Tradeoff between Overfitting and Underfitting in Polynomial Regression

Polynomial regression is a flexible modeling technique that can capture nonlinear relationships between variables. However, like any regression method, polynomial regression is susceptible to overfitting and underfitting.

Overfitting occurs when the polynomial regression model becomes overly complex and starts to fit the noise or random fluctuations in the training data. As a result, the model performs exceptionally well on the training set but fails to generalize to new, unseen data. This leads to poor predictive performance and unreliable results.

On the other hand, underfitting occurs when the polynomial regression model is too simple and fails to capture the underlying patterns and relationships in the data. This typically happens when the degree of the polynomial is too low, and the model cannot adequately fit the true relationship between the predictor variables and the target variable. Underfitting results in high bias and limited predictive power.

To strike the right balance between overfitting and underfitting, it is crucial to choose an appropriate degree for the polynomial regression model. A higher degree polynomial allows the model to capture more complex relationships, but it also increases the risk of overfitting. Conversely, a lower degree polynomial may reduce overfitting but may not capture the true underlying relationship accurately.
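
As a minimal sketch of this tradeoff (reusing the same kind of synthetic cubic data generated earlier in this article, with an arbitrary train/test split), we can compare training and test RMSE for a few degrees; an underfit model scores badly on both, while an overfit one scores well only on the training data:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

np.random.seed(69)
x = 2 - 3 * np.random.normal(0, 1, 20)
y = x - 2 * (x ** 2) + 0.5 * (x ** 3) + np.random.normal(-3, 3, 20)
x = x[:, np.newaxis]

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)

for degree in [1, 3, 10]:
    poly = PolynomialFeatures(degree=degree)
    model = LinearRegression().fit(poly.fit_transform(x_train), y_train)
    # compare error on the data the model saw vs. held-out data
    train_rmse = np.sqrt(mean_squared_error(y_train, model.predict(poly.transform(x_train))))
    test_rmse = np.sqrt(mean_squared_error(y_test, model.predict(poly.transform(x_test))))
    print(degree, round(train_rmse, 2), round(test_rmse, 2))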

Feature Engineering: Incorporating Polynomial Features to Enhance Model Performance

Feature engineering is a crucial step in the machine learning pipeline that involves transforming raw data into a format that can be effectively used by a predictive model. One powerful technique in feature engineering is incorporating polynomial features. By creating new features as combinations of existing ones, polynomial features can help capture nonlinear relationships between variables and enhance the performance of a model.

Polynomial features are generated by taking powers of the existing features and multiplying them together. For example, if we have a feature x, generating polynomial features of degree 3 would include x, x^2, and x^3. These new features introduce higher-order terms, enabling the model to capture more complex patterns and nonlinearity in the data.

Incorporating polynomial features can be particularly useful when the relationship between the input variables and the target variable is nonlinear. By including polynomial terms in the model, we can capture curvature and better fit the data. However, it is important to strike a balance because incorporating too many polynomial features can lead to overfitting, where the model becomes too specific to the training data and performs poorly on unseen data.

What more is there to watch out for?

To incorporate polynomial features into a model, we can use libraries such as scikit-learn in Python. Scikit-learn provides the PolynomialFeatures class, which allows us to easily transform the input features by adding polynomial terms up to a specified degree. Once the polynomial features are created, they can be used as input to any machine learning algorithm.
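
For example, a minimal sketch of that workflow (the synthetic data here is made up for illustration) uses a pipeline so the transformation and the regression stay together:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 30)[:, np.newaxis]
y = 0.5 * X.ravel() ** 3 - X.ravel() ** 2 + rng.normal(0, 1, 30)

# the pipeline expands X into [1, x, x^2, x^3] and fits a linear model on those columns
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(X, y)
print(model.predict(X[:5]))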

When incorporating polynomial features, it is essential to consider the degree of the polynomial. Higher degrees can capture more complex relationships but also increase the dimensionality of the feature space and computational complexity. Therefore, it is important to assess the impact of different polynomial degrees on model performance through techniques like cross-validation.
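
One simple way to make that assessment, sketched here on made-up data, is to cross-validate a pipeline for each candidate degree and compare the resulting errors:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)[:, np.newaxis]
y = 0.5 * x.ravel() ** 3 - 2 * x.ravel() ** 2 + rng.normal(0, 3, 40)

for degree in range(1, 7):
    model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression())
    # 5-fold cross-validated RMSE; the degree with the lowest value generalizes best here
    scores = cross_val_score(model, x, y, cv=5, scoring="neg_mean_squared_error")
    print(degree, round(np.sqrt(-scores.mean()), 3))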

In summary, incorporating polynomial features is a powerful technique in feature engineering that can enhance model performance by capturing nonlinear relationships between variables. It is crucial to find the right balance between adding complexity and avoiding overfitting. By incorporating polynomial features intelligently, we can enable our models to better capture the intricacies of the data and improve their predictive capabilities.

Model Evaluation: Assessing the Accuracy and Quality of Polynomial Regression Models

Model evaluation is a critical step in machine learning, as it helps us understand how well our models perform and make informed decisions about their accuracy and quality. When working with polynomial regression models, there are specific evaluation techniques that are tailored to assess their performance.

Here are some key evaluation metrics and techniques for assessing polynomial regression models:

Mean Squared Error (MSE):

MSE measures the average squared difference between the predicted and actual values. It quantifies the overall model performance, with lower values indicating better accuracy. However, MSE can be sensitive to outliers and doesn’t provide easily interpretable units of measurement.

R-squared (R²) or Coefficient of Determination:

R² measures the proportion of the variance in the target variable that is explained by the model. It ranges from 0 to 1, with higher values indicating a better fit. However, R² can be misleading when applied to models with high complexity, as it tends to increase even when adding irrelevant features.

Adjusted R-squared:

Adjusted R² adjusts the R-squared value by penalizing the addition of irrelevant features. It considers the number of predictors in the model and provides a more reliable measure of model quality, especially when comparing models with different numbers of features.
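
As a quick sketch of computing the three metrics above (the y values below are made-up placeholders, and adjusted R² is computed from its formula since scikit-learn does not provide it directly):

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([3.1, 4.5, 6.8, 9.2, 12.1])
y_pred = np.array([3.0, 4.8, 6.5, 9.6, 11.9])
n, p = len(y_true), 2  # n samples, p predictors (e.g. x and x^2)

mse = mean_squared_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
# adjusted R^2 penalizes adding predictors that do not improve the fit
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(mse, r2, adj_r2)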

Cross-Validation:

Cross-validation is a technique for estimating the performance of a model on unseen data. For polynomial regression models, k-fold cross-validation can be used. It involves splitting the data into k subsets, training the model on k-1 subsets, and evaluating its performance on the remaining subset. By repeating this process for different subsets, we obtain a more robust estimate of the model’s performance.

Residual Analysis:

Residual analysis involves examining the difference between the predicted and actual values (residuals). Plotting the residuals can help identify patterns or systematic errors in the model’s predictions. If the residuals exhibit a random pattern around zero, it indicates that the model is capturing the underlying relationships well. However, if there are discernible patterns or trends in the residuals, it suggests that the model might be missing important features or suffering from bias.
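
A minimal residual plot for the degree-2 fit from earlier in this article might be produced like this (the data generation is repeated so the snippet runs on its own):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

np.random.seed(69)
x = 2 - 3 * np.random.normal(0, 1, 20)
y = x - 2 * (x ** 2) + 0.5 * (x ** 3) + np.random.normal(-3, 3, 20)
x = x[:, np.newaxis]

x_poly = PolynomialFeatures(degree=2).fit_transform(x)
y_poly_pred = LinearRegression().fit(x_poly, y).predict(x_poly)

# residuals should scatter randomly around the zero line if the model fits well
residuals = y - y_poly_pred
plt.scatter(y_poly_pred, residuals, s=10)
plt.axhline(0, color='r')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.show()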

Feature Importance:

Assessing the importance of different features in the polynomial regression model can provide insights into their relevance and contribution to the target variable. Techniques such as analyzing the coefficients or conducting feature permutation importance can help identify the most influential features.

It is important to consider multiple evaluation metrics and techniques to obtain a comprehensive understanding of the model’s performance. Evaluating the accuracy and quality of polynomial regression models through these methods enables us to make informed decisions about model selection, feature engineering, and potential improvements.

Regularization Techniques: Applying Regularization to Control Model Complexity and Improve Generalization

Regularization techniques are essential tools in machine learning for controlling model complexity and improving generalization. When working with polynomial regression models, which can easily become overly complex and prone to overfitting, regularization techniques play a vital role in ensuring reliable and robust model performance.

Here are two commonly used regularization techniques in polynomial regression:

L1 Regularization (Lasso):

L1 regularization adds a penalty term to the model’s cost function that is proportional to the absolute values of the model’s coefficients. This technique encourages sparsity by driving some coefficients to zero, effectively performing feature selection. By shrinking less important coefficients to zero, L1 regularization helps reduce the model’s complexity and eliminate irrelevant features. The resulting model focuses on the most important features, leading to improved interpretability and potentially better generalization performance.

L2 Regularization (Ridge):

L2 regularization adds a penalty term to the cost function that is proportional to the squared magnitudes of the model’s coefficients. This technique encourages small but non-zero coefficient values, effectively shrinking their magnitudes. L2 regularization helps control the influence of individual features without driving them to zero entirely. By reducing the impact of less relevant features, L2 regularization helps prevent overfitting and enhances the model’s generalization capabilities.

These regularization techniques introduce a regularization parameter, often denoted as λ (lambda), which controls the strength of regularization. The value of λ determines the balance between model complexity and the goodness of fit to the training data. Higher values of λ increase regularization, resulting in simpler models with reduced variance but potentially increased bias. Conversely, lower values of λ allow the model to fit the training data more closely but may lead to overfitting.

To apply regularization techniques to polynomial regression models, libraries like scikit-learn in Python provide implementations such as Lasso and Ridge regression. These implementations enable easy integration of regularization into the modeling process, allowing fine-tuning of the regularization parameter to achieve the desired balance between model complexity and generalization.
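
A minimal sketch of both on made-up data might look like the following (alpha plays the role of λ here, and the degree and alpha values are arbitrary choices for illustration):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 50)[:, np.newaxis]
y = x.ravel() - 2 * x.ravel() ** 2 + 0.5 * x.ravel() ** 3 + rng.normal(0, 3, 50)

# scaling the expanded features keeps the penalty comparable across powers of x
ridge = make_pipeline(PolynomialFeatures(degree=10, include_bias=False), StandardScaler(), Ridge(alpha=1.0))
lasso = make_pipeline(PolynomialFeatures(degree=10, include_bias=False), StandardScaler(), Lasso(alpha=0.1, max_iter=10000))

ridge.fit(x, y)
lasso.fit(x, y)

print("ridge coefficients:", np.round(ridge[-1].coef_, 2))
print("lasso coefficients:", np.round(lasso[-1].coef_, 2))  # several are driven to exactly zero

Increasing alpha shrinks the coefficients further in both cases; with Lasso, more of them collapse to exactly zero.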

By applying regularization techniques to polynomial regression models, we can effectively control model complexity, prevent overfitting, and improve the model’s ability to generalize to unseen data. Regularization plays a crucial role in ensuring the reliability and robustness of polynomial regression models, making them more suitable for real-world applications.

Practical Applications: Examining Real-World Use Cases and Applications of Polynomial Regression

Polynomial regression is a powerful technique that can capture nonlinear relationships between variables and provide flexible modeling capabilities. It finds applications in various real-world scenarios, where the relationships between variables are not strictly linear. Here are a few practical applications of polynomial regression:

Economics and Finance: Polynomial regression can be used in economic and financial analysis to model complex relationships between economic indicators, such as GDP, inflation rates, and interest rates. It helps economists and financial analysts understand the impact of multiple factors on economic variables and make predictions or forecasts.

Medicine and Healthcare: In medical research, polynomial regression can be employed to analyze the relationship between dosage and drug response, or to model the growth patterns of diseases. It enables researchers to identify optimal dosages or predict disease progression based on various factors.

Environmental Sciences: Polynomial regression finds applications in environmental studies to model the relationship between environmental variables and their impact on ecosystems. For example, it can be used to analyze the relationship between temperature, rainfall, and biodiversity in a particular region.

Marketing and Sales: You can utilize polynomial regression in marketing and sales to understand the relationship between advertising expenditure and sales. By modeling nonlinear effects, marketers can optimize their advertising strategies to maximize sales and return on investment.

Conclusion

Overall, polynomial regression is a powerful tool for analyzing various forms of data. It can fit complex models with more than one degree of freedom and capture non-linear relationships between variables effectively. This type of regression helps us understand the influence of certain predictors on our outcome variables.

The benefits of polynomial regression make it an invaluable asset for data science practitioners seeking insights from their datasets. By using this analytical method, researchers can gain a deeper understanding of the underlying dynamics governing data systems. This provides new ways of interpreting results and unraveling previously unexamined phenomena hidden within their datasets.

FAQs

What is Polynomial Regression?

Polynomial Regression is a type of linear regression where the relationship between the independent and dependent variables is modeled as an nth-degree polynomial.

How does Polynomial Regression work?

It fits a curve to the data by adding polynomial terms (e.g., x^2, x^3) to the linear regression equation, allowing for more complex relationships between variables.

When should I use Polynomial Regression?

Use Polynomial Regression when the relationship between variables appears to be nonlinear, and a straight line cannot adequately capture the pattern in the data.

How does Polynomial Regression handle overfitting?

It may be prone to overfitting with higher polynomial degrees, so techniques like cross-validation or regularization can help prevent overfitting.

Can Polynomial Regression handle missing data?

Polynomial Regression requires complete data, so missing data should be handled before fitting the model.

How do I interpret the results from a Polynomial Regression model?

Interpret the coefficients of polynomial terms to understand the shape and direction of the curve fitted to the data.

Is Polynomial Regression suitable for large datasets?

Polynomial Regression can be computationally expensive, especially with higher degree polynomials, making it less suitable for large datasets.
