Advanced Machine Learning
Test title: Advanced Machine Learning. Description: Advanced Machine Learning Final Exam 2024.




1. In stacking, we use... (see the stacking sketch after the quiz)
- a model trained to perform the aggregation.
- trivial functions such as voting or averaging to create the blender.
- neural networks to aggregate all the predictions.

2. Once the blender has been trained and evaluated with cross-validation:
- The blender is trained with the whole dataset.
- The base predictors are not modified anymore, in order to avoid data leakage.
- The base predictors are retrained on the full original dataset.

3. Which machine learning paradigm is considered to be in between supervised and unsupervised learning?
- Ensemble learning.
- Active learning.
- Reinforcement learning.

4. Which of the following averaging approaches is the most suitable one for imbalanced data? (the metric-averaging sketch after the quiz computes all three)
- Computes the metric score for each label and returns the average without considering the proportion of each label in the dataset.
- Computes the metric score for each label and returns the average considering the proportion of each label in the dataset.
- Computes the metric by considering the total true positives, true negatives, false negatives and false positives.

5. If we use active learning... (see the uncertainty-sampling sketch after the quiz)
- We may randomly choose the training samples.
- We will always choose the training samples based on an uncertainty measure.
- We will always choose the training samples closest to the decision boundary.

6. Which of the following statements correctly describes the differences between AdaBoost and Gradient Boosting? (see the residual-fitting sketch after the quiz)
- AdaBoost and Gradient Boosting both work with weights of misclassified instances, but Gradient Boosting additionally includes a regularization term to prevent overfitting, which AdaBoost does not.
- AdaBoost focuses on correcting the mistakes of the previous models by adjusting the weights of misclassified instances, while Gradient Boosting tries to fit the new predictor to the residual errors made by the previous predictor.
- Gradient Boosting primarily uses decision trees as base learners, whereas AdaBoost can only use linear models as base learners.

7. Which of the following statements about out-of-bag (OOB) evaluation in the context of bagging is correct? (see the OOB sketch after the quiz)
- OOB evaluation requires a separate validation set to assess the performance of the bagging ensemble.
- Approximately 37% of the training instances, which are not sampled during the bootstrapping process, are called OOB instances and can be used to evaluate the model without needing a separate validation set.
- OOB evaluation is only useful in scenarios where the dataset is very large and there is no risk of overfitting.

8. What are the disadvantages of univariate imputation methods compared to multivariate methods? (see the imputation sketch after the quiz)
- Univariate imputation methods are less effective than multivariate imputation at handling categorical (non-numeric) data.
- Univariate imputation methods are computationally more intensive and time-consuming than multivariate imputation methods, making them less practical for large datasets.
- Univariate imputation methods often ignore the possible relationships between variables, which can lead to biased estimates and underestimation of variability.
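A minimal stacking sketch for questions 1 and 2, using scikit-learn's StackingClassifier (the dataset and estimator choices are illustrative, not from the exam). The blender (final_estimator) is a trained model, and it is fit on out-of-fold predictions produced via cross-validation; scikit-learn then refits the base predictors on the full training set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=42)),
                ("svc", SVC(random_state=42))],
    final_estimator=LogisticRegression(),  # the blender: a model trained to aggregate
    cv=5,  # blender is fit on out-of-fold predictions of the base estimators
)
stack.fit(X_train, y_train)  # base predictors are then refit on the full training set
print("test accuracy:", stack.score(X_test, y_test))
```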
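The three options in question 4 correspond to scikit-learn's average="macro", "weighted", and "micro" settings. A small worked comparison (the toy labels are made up for illustration):

```python
from sklearn.metrics import f1_score

# toy imbalanced labels: 90 negatives, 10 positives (illustrative only)
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 85 + [1] * 5 + [1] * 8 + [0] * 2

# macro: per-label scores averaged without regard to label proportions
print(f1_score(y_true, y_pred, average="macro"))
# weighted: per-label scores averaged in proportion to label support
print(f1_score(y_true, y_pred, average="weighted"))
# micro: computed from the pooled TP/FP/FN counts over all labels
print(f1_score(y_true, y_pred, average="micro"))
```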
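A pool-based uncertainty-sampling sketch for question 5 (the setup is hypothetical): the learner queries the unlabeled instance it is least certain about, which for a probabilistic binary classifier is the one whose predicted probability is closest to 0.5, i.e., closest to the decision boundary. Other query strategies, including random sampling, are also used in practice.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_pool, y_pool = make_classification(n_samples=1000, random_state=0)

# start from a small labeled seed set containing both classes
labeled = np.concatenate([np.where(y_pool == 0)[0][:10],
                          np.where(y_pool == 1)[0][:10]])
clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])

# uncertainty sampling: query the instance whose P(y=1) is closest to 0.5
proba = clf.predict_proba(X_pool)[:, 1]
query_idx = int(np.argmin(np.abs(proba - 0.5)))
print("query instance:", query_idx)  # an oracle would now label this sample
```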
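The residual-fitting idea in question 6 can be shown by hand in a few lines (toy regression data, not from the exam): each new tree in Gradient Boosting is fit to the residual errors left by the sum of the previous trees, whereas AdaBoost instead reweights misclassified training instances between rounds.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(42)
X = rng.rand(100, 1)
y = 3 * X[:, 0] ** 2 + rng.randn(100) * 0.05  # noisy quadratic target

# gradient boosting by hand: each tree fits the current residuals
trees, residual = [], y.copy()
for _ in range(3):
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    residual -= tree.predict(X)  # leftover error passed to the next tree
    trees.append(tree)

# the ensemble prediction is the sum of all the trees' predictions
y_pred = sum(tree.predict(X) for tree in trees)
print("training MSE:", np.mean((y - y_pred) ** 2))
```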
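For question 7: with bootstrap samples of size m drawn with replacement, each predictor leaves out about (1 - 1/m)^m ≈ e^{-1} ≈ 37% of the training instances, and scikit-learn can score on exactly those instances via oob_score=True. A minimal sketch (the dataset is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=42)

bag = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=100,
    bootstrap=True,   # sample with replacement, leaving ~37% out per predictor
    oob_score=True,   # evaluate each instance with predictors that never saw it
    random_state=42,
)
bag.fit(X, y)
print("OOB accuracy:", bag.oob_score_)  # no separate validation set needed
```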
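Finally, the univariate-vs-multivariate contrast in question 8 maps onto scikit-learn's SimpleImputer (each column filled from its own statistics, ignoring relationships between variables) versus IterativeImputer (each feature modeled from the others). A minimal sketch with made-up data:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# toy data where the second column is roughly twice the first
X = np.array([[1.0, 2.1], [2.0, np.nan], [3.0, 6.2], [np.nan, 8.0]])

# univariate: each column imputed from its own mean, relationships ignored
print(SimpleImputer(strategy="mean").fit_transform(X))

# multivariate: each feature regressed on the others before imputing
print(IterativeImputer(random_state=0).fit_transform(X))
```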