Which statement about subset selection in normal linear models is true?

Subset selection in normal linear models involves choosing, from the available predictors, the subset that best explains the variability in the response variable. The various techniques used for model selection each have their own characteristics and limitations.

The correct statement regarding subset selection in this context is that forward stepwise selection can be used in high-dimensional settings. Because it starts with no predictors and adds one at a time, it never needs to fit the full model, so it remains feasible even when the number of predictors exceeds the number of observations; by contrast, best subset selection becomes computationally infeasible as the number of predictors grows, and backward stepwise selection cannot even begin, since the full model cannot be fit when there are more predictors than observations. At each step, forward stepwise selection adds the predictor that improves the model the most, as judged by a criterion that penalizes model size, such as the Akaike Information Criterion (AIC), and stops when no remaining predictor improves that criterion.
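As a minimal sketch of the procedure (not from the source), assuming hypothetical simulated data and using the `aic` attribute of a fitted statsmodels OLS model as the selection criterion:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: 40 observations, 10 candidate predictors,
# only the first two actually drive the response.
n, p = 40, 10
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=n)

def fit_aic(cols):
    """AIC of the normal linear model using the given predictor columns."""
    design = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
    return sm.OLS(y, design).fit().aic

selected, remaining = [], list(range(p))
current_aic = fit_aic(selected)          # intercept-only model

while remaining:
    # Try adding each remaining predictor; keep the one that lowers AIC most.
    trials = [(fit_aic(selected + [j]), j) for j in remaining]
    best_aic, best_j = min(trials)
    if best_aic >= current_aic:          # no predictor improves the criterion
        break
    selected.append(best_j)
    remaining.remove(best_j)
    current_aic = best_aic

print("selected predictors:", selected, "AIC:", round(current_aic, 2))
```

Note that each model in the resulting sequence contains all predictors chosen at earlier steps, which is what makes the forward stepwise path a nested sequence of models.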

The other statements do not hold within the context of subset selection in normal linear models. Best subset selection does not yield a nested sequence of models: because it evaluates all possible combinations of predictors, the best model of size k need not contain the best model of size k minus 1, whereas forward stepwise selection does produce a nested sequence. Likewise, the residual sum of squares (RSS) cannot serve as the sole criterion for choosing among models of different sizes, because RSS never increases when a predictor is added; picking the model with the smallest RSS would always favor the largest model, which is why penalized criteria such as AIC, BIC, or adjusted R-squared are used instead.
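A short illustrative check (again with hypothetical simulated data) shows why raw RSS cannot compare models of different sizes: the fitted RSS is non-increasing in the number of predictors even when the added predictors are pure noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 10
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=n)

def rss(k):
    """RSS of the least-squares fit using the first k predictors."""
    design = np.column_stack([np.ones(n), X[:, :k]])
    resid = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return float(resid @ resid)

# RSS never increases as predictors are added -- even noise ones --
# so on its own it would always point to the largest model.
print([round(rss(k), 1) for k in range(p + 1)])
```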
