Linear Regression Assumptions to Know for Data Science Numerical Analysis

Understanding the assumptions of linear regression is key to Data Science Numerical Analysis. These assumptions underpin accurate predictions and valid statistical inference, helping you avoid pitfalls like biased estimates, inefficient results, and misleading conclusions.

  1. Linearity

    • The relationship between each independent variable and the dependent variable should be linear.
    • This can be assessed with a scatter plot of the response against each predictor, as sketched below.
    • Non-linear relationships can lead to biased estimates and poor predictions.
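
A quick way to eyeball linearity in Python. This is a minimal sketch using NumPy and Matplotlib on made-up data; a real analysis would substitute its own x and y:

```python
# Minimal sketch: scatter plot to eyeball linearity (made-up data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)            # hypothetical predictor
y = 2.5 * x + rng.normal(0, 2, 100)    # hypothetical response, linear by construction

plt.scatter(x, y, alpha=0.6)
plt.xlabel("predictor (x)")
plt.ylabel("response (y)")
plt.title("Scatter plot: check for a roughly linear trend")
plt.show()
```
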
  2. Independence of errors

    • The residuals (errors) should be independent of each other.
    • This assumption is crucial for valid hypothesis testing and confidence intervals.
    • Autocorrelation, often found in time series data, violates this assumption; the Durbin-Watson test (sketched below) is a common diagnostic.
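
Below is a minimal sketch of the Durbin-Watson check using statsmodels, on synthetic data whose errors are independent by construction; values near 2 indicate little first-order autocorrelation:

```python
# Minimal sketch: Durbin-Watson test on the residuals of an OLS fit.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 1.5 * x + rng.normal(size=100)      # errors are independent by construction

X = sm.add_constant(x)                   # add an intercept column
residuals = sm.OLS(y, X).fit().resid
print(f"Durbin-Watson: {durbin_watson(residuals):.2f}")  # ~2 means little autocorrelation
```
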
  3. Homoscedasticity

    • The variance of the residuals should be constant across all levels of the independent variable(s).
    • Heteroscedasticity (non-constant variance) can lead to inefficient estimates and affect statistical tests.
    • This can be checked using residual plots or a formal test such as the Breusch-Pagan test, run below.
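
A minimal sketch of the Breusch-Pagan test using statsmodels, on synthetic data built to be heteroscedastic; the small p-value it reports reflects that construction, not a general result:

```python
# Minimal sketch: Breusch-Pagan test for heteroscedasticity.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + rng.normal(0, 1 + 0.5 * x)   # error variance grows with x

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(f"Breusch-Pagan p-value: {lm_pvalue:.4f}")  # small p-value flags heteroscedasticity
```
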
  4. Normality of residuals

    • The residuals should be approximately normally distributed for valid inference.
    • This assumption is particularly important for small sample sizes.
    • Normality can be assessed using Q-Q plots or statistical tests like the Shapiro-Wilk test; both appear in the snippet below.
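
A minimal sketch of both checks using SciPy, with synthetic values standing in for the residuals of a real fit:

```python
# Minimal sketch: Shapiro-Wilk test and Q-Q plot for residual normality.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
residuals = rng.normal(size=80)          # stand-in for residuals from a fitted model

stat, p = stats.shapiro(residuals)       # null hypothesis: residuals are normal
print(f"Shapiro-Wilk p-value: {p:.4f}")

stats.probplot(residuals, dist="norm", plot=plt)  # points near the line => roughly normal
plt.title("Q-Q plot of residuals")
plt.show()
```
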
  5. No multicollinearity

    • Independent variables should not be highly correlated with each other.
    • Multicollinearity can inflate standard errors and make it difficult to determine the effect of each predictor.
    • The Variance Inflation Factor (VIF), computed in the sketch below, can be used to detect multicollinearity.
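
A minimal sketch of the VIF computation using statsmodels, on synthetic predictors where x2 is deliberately built to be nearly collinear with x1:

```python
# Minimal sketch: variance inflation factor for each predictor.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.1, size=100)  # nearly a copy of x1 -> collinear
x3 = rng.normal(size=100)

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))
for i, name in enumerate(X.columns):
    if name == "const":
        continue                            # the intercept's VIF is not meaningful
    print(f"VIF({name}) = {variance_inflation_factor(X.values, i):.1f}")
# Rule of thumb: VIF above roughly 5-10 signals problematic multicollinearity.
```
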
  6. No outliers or influential points

    • Outliers can disproportionately affect the regression results and lead to misleading conclusions.
    • Influential points can significantly change the slope of the regression line.
    • Leverage and Cook's distance are standard tools for identifying influential observations, as illustrated below.
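
A minimal sketch using statsmodels' influence measures, with one influential point planted deliberately; the 4/n cutoff is a common convention, not a hard rule:

```python
# Minimal sketch: leverage and Cook's distance from a statsmodels OLS fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)
x[0], y[0] = 5.0, -10.0                  # plant one deliberately influential point

X = sm.add_constant(x)
influence = sm.OLS(y, X).fit().get_influence()
cooks_d, _ = influence.cooks_distance    # Cook's distance per observation
leverage = influence.hat_matrix_diag     # leverage: diagonal of the hat matrix

n = len(y)
flagged = np.where(cooks_d > 4 / n)[0]   # common cutoff: D_i > 4/n
print("Potentially influential observations:", flagged)
print(f"Max leverage: {leverage.max():.2f}")
```
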
  7. Large sample size relative to the number of predictors

    • A larger sample size increases the reliability of the regression estimates.
    • It helps to ensure that the model can generalize well to new data.
    • A common rule of thumb is to have at least 10-15 observations per predictor variable (see the quick check below).
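
As a quick illustration, the heuristic reduces to a one-line ratio check; the function below is hypothetical, written only for this example, and the 10-15 threshold is a rule of thumb rather than a hard requirement:

```python
# Minimal sketch of the observations-per-predictor heuristic.
def enough_observations(n_obs: int, n_predictors: int, min_ratio: float = 10.0) -> bool:
    """True when the data meet the minimum observations-per-predictor ratio."""
    return n_obs >= min_ratio * n_predictors

print(enough_observations(n_obs=120, n_predictors=8))  # True  (15 per predictor)
print(enough_observations(n_obs=40, n_predictors=8))   # False (5 per predictor)
```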


© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
