Data defines the model by dint of genetic programming, producing the best decile table.
Ordinary Regression versus Machine Learning Regression
Bruce Ratner, Ph.D.
The statistical paradigm for profit modeling is: the data analyst fits the data to the presumed-true ordinary least-squares (OLS) model, whose form (equation) is a sum of weighted predictor variables. The weights (better known as regression coefficients) are the main appeal of the statistical paradigm, as they provide the key to interpreting what the equation means. The well-established variable selection methodology of the linear regression model (LRM), which identifies the predictor variables for the OLS equation, is the inherent weakness of the statistical paradigm: variable selection does not draw on the data analyst's will and ability to construct new variables with potential predictive power (data mining).
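To make the statistical paradigm concrete, here is a minimal sketch of fitting a presumed-true OLS profit model as a sum of weighted predictors and reading off the regression coefficients. The data, the two predictor variables, and the profit response are hypothetical illustrations, not data from the article.

```python
# Minimal sketch of the statistical paradigm: fit a presumed-true OLS model
# (a sum of weighted predictor variables) and inspect the regression coefficients.
# All data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))                                   # two presumed predictor variables
profit = 5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

# OLS: solve for the intercept and the weights (regression coefficients).
design = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(design, profit, rcond=None)
print("intercept and weights:", coefs)                        # the coefficients carry the interpretation
```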
The antithetical machine learning (ML) paradigm is: the data suggest the "true" model form (a computer program), as the ML method automatically data-mines for new variables, performs variable selection, and then specifies the model equation without being explicitly programmed. The strengths of the ML paradigm are its flexibility within a nonparametric, assumption-free framework that accommodates big data, and its serviceability as a data mining tool. The weakness of the ML paradigm is the difficulty of interpreting the abstruse computer program, which has surely accounted for the limited use of ML methods.
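As a contrast, here is a toy, hedged sketch of a genetic-programming-style search in the spirit the paragraph describes: candidate expressions over the predictors are grown, mutated, and selected by fitness on the data, so the data, rather than the analyst, suggest the model form. The operators, the correlation-based fitness (GenIQ's stated criterion is the decile table, per the tagline above), and the data are illustrative assumptions, not the GenIQ Model's actual algorithm.

```python
# Toy genetic-programming-style sketch: evolve expressions (new variables) over
# the predictors and let fitness on the data pick the model form.
# Operators, fitness, and data are illustrative assumptions, not GenIQ's algorithm.
import random
import numpy as np

random.seed(1)
rng = np.random.default_rng(1)
n = 300
x1, x2 = rng.normal(size=n), rng.normal(size=n)
response = x1 * x2 + 0.5 * x1 + rng.normal(scale=0.3, size=n)    # hidden structure to discover
data = {"x1": x1, "x2": x2}

OPS = {"+": np.add, "-": np.subtract, "*": np.multiply}

def random_expr(depth=2):
    """Grow a random expression tree over the predictor variables."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(list(data))
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr):
    """Evaluate an expression tree on the data."""
    if isinstance(expr, str):
        return data[expr]
    op, left, right = expr
    return OPS[op](evaluate(left), evaluate(right))

def fitness(expr):
    """Higher is better: absolute correlation of the evolved score with the response."""
    score = evaluate(expr)
    if np.std(score) == 0:
        return 0.0
    return abs(np.corrcoef(score, response)[0, 1])

def mutate(expr):
    """Replace a random subtree with a freshly grown one."""
    if isinstance(expr, str) or random.random() < 0.3:
        return random_expr()
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_expr() for _ in range(50)]
for _ in range(30):                                   # generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=fitness)
print("evolved model form:", best, "fitness:", round(fitness(best), 3))
```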
The purpose of this article is to present a compelling illustration of Ordinary Regression versus Machine Learning Regression, using the GenIQ Model© as the latter. For the illustration itself, click here.
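Because the decile table is the stated yardstick (the model "producing the best decile table"), here is a small, assumption-laden sketch of building one: rank records by model score, cut them into ten equal groups, and report the response rate and cumulative lift per decile. The scores and responses are simulated for illustration only.

```python
# Hedged sketch of a decile table: rank by model score, split into ten equal
# groups, and report response rate and cumulative lift. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
score = rng.normal(size=1000)                          # model's predicted score (hypothetical)
prob = 1 / (1 + np.exp(-score))
response = (rng.random(1000) < prob).astype(int)       # actual 0/1 response (hypothetical)

order = np.argsort(-score)                             # rank records from highest to lowest score
deciles = np.array_split(response[order], 10)          # decile 1 = top-scored 10% of records
overall_rate = response.mean()

cum_responders = 0
cum_records = 0
print("decile    n  response_rate  cum_lift")
for i, group in enumerate(deciles, start=1):
    cum_responders += group.sum()
    cum_records += len(group)
    lift = 100 * (cum_responders / cum_records) / overall_rate
    print(f"{i:6d} {len(group):4d} {group.mean():14.3f} {lift:9.1f}")
```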