ML Fundamentals

Linear & Logistic Regression

Master the fundamentals of linear and logistic regression in machine learning. Explore key concepts including features, weights, bias, and optimization techniques. Learn about cost functions, regularization methods, evaluation metrics, and practical considerations for model performance. Essential reading for data scientists and analysts working with predictive modeling and statistical analysis.

Introduction

Linear regression is a statistical method that aims to uncover the relationship between variables, making it a powerful tool for prediction and understanding the influence of different factors on an outcome. In the context of machine learning, linear regression is used to find the relationship between features (input variables) and a label (output variable).

This article will delve into the details of linear regression, exploring its mathematical underpinnings, its application in machine learning, and its extension to logistic regression for classification tasks. We will also discuss key concepts such as features, labels, weights, bias, and parameters, which are essential to understanding how linear regression works.

Understanding Linear Regression

At its core, linear regression is a method for modelling the relationship between a dependent variable (label) and one or more independent variables (features). The goal is to find the best-fitting line that minimizes the difference between the predicted values and the actual values of the label.

Mathematically, linear regression can be expressed as:

y = β₀ + β₁x₁ + β₂x₂ + ... + βₙxₙ + ε

where:

  • y is the label (dependent variable)
  • x₁, x₂, ..., xₙ are the features (independent variables)
  • β₀ is the bias term (y-intercept)
  • β₁, β₂, ..., βₙ are the weights (coefficients) associated with each feature
  • ε is the error term, representing the difference between the predicted and actual values

The weights and bias are the parameters of the linear regression model. They are learned from the training data by minimizing a cost function, typically the mean squared error (MSE) between the predicted and actual values. 
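
As a concrete illustration, the sketch below fits a linear regression model on a small synthetic dataset with scikit-learn and prints the learned bias and weights; the dataset and its true coefficients are made up purely for demonstration.

# A minimal sketch: fitting linear regression on synthetic data with scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Synthetic data generated from y = 3 + 2*x1 - 1*x2 + noise
X = rng.normal(size=(200, 2))
y = 3.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression()
model.fit(X, y)

print("bias (intercept):", model.intercept_)    # estimate of beta_0
print("weights (coefficients):", model.coef_)   # estimates of beta_1, beta_2
print("prediction for [1, 2]:", model.predict([[1.0, 2.0]]))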

Features and Labels

In machine learning, features are the input variables used to make predictions or understand the relationship with the label. Features can be numerical (e.g., age, price) or categorical (e.g., color, gender). They represent the characteristics or attributes of the data points.

Labels, on the other hand, are the output variables that we aim to predict or understand. In linear regression, the label is a continuous numerical value. For example, if we are trying to predict house prices based on features like square footage and number of bedrooms, the house price would be the label.

The choice and representation of features play a crucial role in the performance of linear regression models. Feature engineering techniques, such as scaling, normalization, and one-hot encoding, are often applied to preprocess the data and improve the model's accuracy.
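
The snippet below is a minimal preprocessing sketch, assuming a toy dataset with one numerical and one categorical column; it standardizes the numerical feature and one-hot encodes the categorical one with scikit-learn. The column contents are illustrative.

# A minimal feature-preprocessing sketch (illustrative data).
import numpy as np
from sklearn.preprocessing import OneHotEncoder, StandardScaler

X_num = np.array([[1400.0], [2100.0], [800.0]])   # e.g., square footage
X_cat = np.array([["red"], ["blue"], ["red"]])    # e.g., color

X_num_scaled = StandardScaler().fit_transform(X_num)            # zero mean, unit variance
X_cat_encoded = OneHotEncoder().fit_transform(X_cat).toarray()  # one column per category

X_processed = np.hstack([X_num_scaled, X_cat_encoded])          # final feature matrix
print(X_processed)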

Weights and Bias

Weights and bias are the learnable parameters in a linear regression model. Each feature is associated with a weight that determines its influence on the label. The bias term, also known as the intercept, represents the value of the label when all features are zero.

During the training process, the model adjusts the weights and bias to minimize the difference between the predicted and actual values of the label. This is typically done using optimization algorithms like gradient descent, which iteratively updates the parameters based on the gradient of the cost function.

The learned weights provide insights into the importance and direction of the relationship between each feature and the label. A positive weight indicates a positive correlation, meaning an increase in the feature value leads to an increase in the label value. Conversely, a negative weight suggests a negative correlation, where an increase in the feature value results in a decrease in the label value. 

Cost Function and Optimization

To train a linear regression model, we need to define a cost function that measures the difference between the predicted and actual values of the label. The most commonly used cost function is the mean squared error (MSE), which calculates the average squared difference between the predictions and the true values.

MSE = (1/n) * Σ(y_pred - y_true)²

where:

  • n is the number of data points
  • y_pred is the predicted value of the label
  • y_true is the actual value of the label

The goal of the optimization process is to find the values of the weights and bias that minimize the cost function. This is typically done using gradient descent, an iterative algorithm that updates the parameters in the direction of the negative gradient of the cost function.

The gradient descent update rule for each parameter is:

θ := θ - α * (∂J/∂θ)

where:

  • θ is the parameter (weight or bias)
  • α is the learning rate, which controls the step size of the updates
  • ∂J/∂θ is the partial derivative of the cost function with respect to the parameter

By iteratively updating the parameters using gradient descent, the model gradually converges to the optimal values that minimize the cost function and provide the best-fitting line for the data. 
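
The following is a minimal NumPy sketch of batch gradient descent for linear regression on synthetic data; the learning rate and iteration count are arbitrary choices made for illustration.

# Batch gradient descent for linear regression (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # features
y = 4.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1]       # labels (noise-free for simplicity)

w = np.zeros(2)   # weights, initialized to zero
b = 0.0           # bias
alpha = 0.1       # learning rate
n = len(y)

for _ in range(500):
    error = X @ w + b - y                 # y_pred - y_true
    grad_w = (2.0 / n) * (X.T @ error)    # dMSE/dw
    grad_b = (2.0 / n) * error.sum()      # dMSE/db
    w -= alpha * grad_w                   # theta := theta - alpha * dJ/dtheta
    b -= alpha * grad_b

print("weights:", w)   # should approach [1.5, -2.0]
print("bias:", b)      # should approach 4.0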

Regularization

Regularization is a technique used to prevent overfitting in linear regression models. Overfitting occurs when the model learns the noise in the training data, resulting in poor generalization to unseen data.

Two common regularization techniques are L1 regularization (Lasso) and L2 regularization (Ridge). These methods add a penalty term to the cost function, discouraging the model from assigning large weights to the features.

L1 regularization adds the absolute values of the weights to the cost function:

J_regularized = J + λ * Σ|θ|

L2 regularization adds the squared values of the weights to the cost function:

J_regularized = J + λ * Σθ²

where:

  • J is the original cost function (e.g., MSE)
  • λ is the regularization parameter that controls the strength of the penalty
  • θ represents the weights

Regularization helps to simplify the model by shrinking the weights of less important features towards zero. L1 regularization can even drive some weights to exactly zero, effectively performing feature selection. This can be beneficial when dealing with high-dimensional data or when interpretability is desired. 
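
As a brief illustration, the sketch below fits Ridge (L2) and Lasso (L1) models from scikit-learn on synthetic data in which only two of five features carry signal; the alpha arguments play the role of λ, and their values are arbitrary.

# Comparing L2 (Ridge) and L1 (Lasso) regularization (illustrative sketch).
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
# Only the first two features matter; the remaining three are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

print("Ridge weights:", ridge.coef_)   # all weights shrunk, none exactly zero
print("Lasso weights:", lasso.coef_)   # weights of irrelevant features driven to zero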

Logistic Regression

Logistic regression is an extension of linear regression used for binary classification tasks. While linear regression predicts continuous values, logistic regression predicts the probability of an instance belonging to a particular class.

In logistic regression, the output is transformed using the sigmoid function (also known as the logistic function) to map the predicted values to probabilities between 0 and 1. The sigmoid function is defined as:

σ(z) = 1 / (1 + e^(-z))

where z is the linear combination of the features and weights:

z = β₀ + β₁x₁ + β₂x₂ + ... + βₙxₙ

The predicted probability can be interpreted as the likelihood of an instance belonging to the positive class (usually denoted as class 1). A threshold, typically 0.5, is used to make the final class prediction. If the predicted probability is greater than the threshold, the instance is classified as positive; otherwise, it is classified as negative.

The cost function used in logistic regression is the binary cross-entropy loss, which measures the dissimilarity between the predicted probabilities and the true class labels. The optimization process, similar to linear regression, involves minimizing the cost function using gradient descent to learn the optimal weights and bias.
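
To make these steps concrete, here is a small NumPy sketch of the logistic regression prediction step, the binary cross-entropy loss, and the 0.5 decision threshold; the weights, bias, and data are illustrative values rather than parameters learned from real data.

# Logistic regression prediction and binary cross-entropy (illustrative sketch).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0.5, 1.2], [-1.0, 0.3], [2.0, -0.5]])   # three instances, two features
y_true = np.array([1, 0, 1])                            # true class labels

w = np.array([1.5, -0.8])   # illustrative weights
b = 0.2                     # illustrative bias

z = X @ w + b               # linear combination of features and weights
p = sigmoid(z)              # predicted probability of the positive class

# Binary cross-entropy: -(1/n) * sum(y*log(p) + (1-y)*log(1-p))
bce = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_pred = (p >= 0.5).astype(int)   # classify with a 0.5 threshold
print("probabilities:", p, "predictions:", y_pred, "BCE:", bce)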

Logistic regression is widely used in various domains, such as spam email detection, customer churn prediction, and medical diagnosis. It provides a simple and interpretable approach to binary classification problems. 

Evaluation Metrics

Evaluating the performance of linear regression and logistic regression models is crucial to assess their effectiveness and make informed decisions. Several evaluation metrics are commonly used depending on the task at hand.

For linear regression, the following metrics are often employed:

  1. Mean Squared Error (MSE): Measures the average squared difference between the predicted and actual values. It penalizes larger errors more heavily.
  2. Root Mean Squared Error (RMSE): The square root of the MSE, providing an interpretable metric in the same units as the label.
  3. Mean Absolute Error (MAE): Calculates the average absolute difference between the predicted and actual values, giving equal weight to all errors.
  4. R-squared (R²): Represents the proportion of variance in the label that is explained by the features. It typically ranges from 0 to 1, with higher values indicating a better fit (it can be negative for a model that fits worse than simply predicting the mean).
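
A short sketch computing these regression metrics with scikit-learn on illustrative predicted and actual values:

# Regression metrics (illustrative values).
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.0, 6.5])

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)                          # same units as the label
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)

print(f"MSE: {mse:.3f}, RMSE: {rmse:.3f}, MAE: {mae:.3f}, R²: {r2:.3f}")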

For logistic regression and binary classification tasks, the following metrics are commonly used:

  1. Accuracy: Measures the proportion of correctly classified instances out of the total instances.
  2. Precision: Calculates the proportion of true positive predictions among all positive predictions.
  3. Recall (Sensitivity): Measures the proportion of true positive predictions among all actual positive instances.
  4. F1 Score: The harmonic mean of precision and recall, providing a balanced measure of the model's performance.
  5. Area Under the Receiver Operating Characteristic Curve (AUC-ROC): Evaluates the model's ability to discriminate between classes across different probability thresholds.
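
A short sketch computing these classification metrics with scikit-learn, again on illustrative labels and predicted probabilities:

# Classification metrics (illustrative values).
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.6, 0.4, 0.1, 0.7, 0.8, 0.3])   # predicted P(class = 1)
y_pred = (y_prob >= 0.5).astype(int)                           # 0.5 threshold

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:", recall_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))
print("AUC-ROC:", roc_auc_score(y_true, y_prob))   # uses probabilities, not hard labels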

These evaluation metrics provide a comprehensive understanding of the model's performance and help in comparing different models or tuning hyperparameters to achieve the best results. 

Limitations and Considerations

While linear regression and logistic regression are powerful and widely used techniques, they have certain limitations and considerations to keep in mind:

  1. Linearity Assumption: Linear regression assumes a linear relationship between the features and the label. If the relationship is nonlinear, linear regression may not capture the underlying patterns effectively. In such cases, nonlinear regression techniques or feature transformations may be necessary.
  2. Outliers: Linear regression is sensitive to outliers, as they can significantly influence the learned weights and bias. Outliers should be identified and handled appropriately, either by removing them or using robust regression techniques that are less affected by outliers.
  3. Multicollinearity: When features are highly correlated with each other, it can lead to unstable and unreliable estimates of the weights. Multicollinearity can be addressed by removing redundant features, using regularization techniques, or applying dimensionality reduction methods like principal component analysis (PCA).
  4. Imbalanced Classes: In logistic regression, imbalanced class distributions can affect the model's performance. When one class is significantly underrepresented compared to the other, the model may struggle to learn the minority class. Techniques like oversampling, undersampling, or using class weights can help mitigate this issue, as shown in the sketch after this list.
  5. Interpretability: While linear regression and logistic regression provide interpretable models, the interpretability may decrease as the number of features increases or when interactions between features are present. In such cases, additional techniques like feature importance analysis or visualization may be necessary to gain insights into the model's behavior.
  6. Generalization: The performance of linear regression and logistic regression models on unseen data depends on how well they generalize from the training data. Regularization techniques, cross-validation, and proper model selection can help improve generalization and prevent overfitting.
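
As a brief illustration of point 4, the sketch below trains scikit-learn's LogisticRegression with class_weight="balanced" on an imbalanced synthetic dataset; the data and the degree of imbalance are made up for demonstration.

# Handling class imbalance with class weights (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Imbalanced synthetic data: roughly 90% negative class, 10% positive class.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.5).astype(int)

# class_weight="balanced" reweights instances inversely to class frequency,
# so mistakes on the minority class cost more during training.
clf = LogisticRegression(class_weight="balanced").fit(X, y)

print("class counts:", np.bincount(y))
print("weights:", clf.coef_, "bias:", clf.intercept_)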

Understanding these limitations and considerations is essential for effectively applying linear regression and logistic regression in real-world scenarios. It allows practitioners to make informed decisions, preprocess data appropriately, and interpret the results with the necessary context. 

Conclusion

Linear regression and its extension, logistic regression, are fundamental techniques in the field of machine learning. They provide a simple and interpretable approach to modelling the relationship between features and labels, making them valuable tools for prediction and understanding the influence of different factors on an outcome.

This article has explored the key concepts and components of linear regression, including features, labels, weights, bias, and the optimization process. We have also discussed regularization techniques to prevent overfitting and the extension to logistic regression for binary classification tasks.

Moreover, we have highlighted the importance of evaluation metrics in assessing the performance of linear regression and logistic regression models, as well as the limitations and considerations to keep in mind when applying these techniques.

Despite all the recent advancements in machine learning, linear regression and logistic regression remain essential building blocks. Their simplicity, interpretability, and effectiveness make them valuable tools in the toolkit of any machine learning practitioner.
