Advanced Statistics (Deusto)
Test title: Advanced Statistics (Deusto)
Description: Advanced statistics, Deusto




Null hypothesis of F test.
- Less degrees of freedom.
- More degrees of freedom.
- Does contribute.
- Does not contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Alternative hypothesis of F test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Null hypothesis of T test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Alternative hypothesis of T test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Null hypothesis of Likelihood-ratio test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Alternative hypothesis of Likelihood-ratio test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Null hypothesis of Wald test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Alternative hypothesis of Wald test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Null hypothesis of Chi-square test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Alternative hypothesis of Chi-square test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Null hypothesis of Hosmer-Lemeshow test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Alternative hypothesis of Hosmer-Lemeshow test.
- Less degrees of freedom.
- More degrees of freedom.
- Does not contribute.
- Does contribute.
- Observed and expected coincide.
- Observed and expected do not coincide.

Linear regression covariates.
- Qualitative and quantitative.
- Quantitative.
- Qualitative.
- Any.

Logistic regression covariates.
- Qualitative and quantitative.
- Qualitative.
- Quantitative.
- Any.

PCA variables.
- Qualitative and quantitative.
- Quantitative.
- Qualitative.
- Any.

CA variables.
- Qualitative and quantitative.
- Qualitative.
- Quantitative.
- Any.

Linear regression output.
- Qualitative and quantitative.
- Quantitative.
- Qualitative.
- Any.

Logistic regression output.
- Qualitative and quantitative.
- Quantitative.
- Qualitative.
- Any.

Logistic regression goal.
- Determining the probability of an event.
- Determining the odds of an event.
- Determining the log odds of an event.
- Determining the log probability of an event.

Linear regression goal.
- Achieve a linear relationship between predictors and response variables.
- Calculate the slope and intercept of the regression line based on the predictors.
- Predict the predictors based on the response variable using a linear equation.
- Obtain the response as a linear function of predictors.

PCA goal.
- Reduce the dimension in the representation of individuals described by quantitative variables.
- Reduce the dimension in the representation of individuals described by two qualitative variables.
- To eliminate the influence of qualitative variables on the dimension reduction of the representation of individuals.
- To reduce the impact of two qualitative variables on the dimensional representation of individuals.

CA goal.
- Reduce the dimension in the representation of individuals described by quantitative variables.
- Reduce the dimension in the representation of individuals described by two qualitative variables.
- To eliminate the influence of qualitative variables on the dimension reduction of the representation of individuals.
- To reduce the impact of two qualitative variables on the dimensional representation of individuals.

Agglomerative hierarchical goal.
- To establish a hierarchical organization of individuals based on their attributes and relationships.
- To arrange individuals into hierarchical groups based on their similarities.
- Divide individuals into groups.
- Provide the hierarchies in which individuals are organized.

Partition algorithm goal.
- To establish a hierarchical organization of individuals based on their attributes and relationships.
- Provide the hierarchies in which individuals are organized.
- Divide individuals into groups.
- To classify individuals into distinct categories based on their characteristics.

Distances.
- The Manhattan distance is larger than or equal to the Euclidean distance.
- The Manhattan distance is smaller than or equal to the Euclidean distance.
- The Manhattan distance is larger than the Euclidean distance.
- The Manhattan distance is smaller than the Euclidean distance.

Linkage.
- Single linkage is smaller than or equal to complete linkage.
- Single linkage is larger than or equal to complete linkage.
- Single linkage is smaller than complete linkage.
- Single linkage is larger than complete linkage.

What approach is used in backward elimination to select variables in a model?
- In backward elimination we start with no variables and sequentially add the one with the highest p-value.
- In backward elimination we randomly select variables and remove the one with the highest p-value.
- In backward elimination we start with all variables and sequentially remove the one with the lowest p-value.
- In backward elimination we start with all variables and sequentially remove the one with the highest p-value.

In linear regression.
- The expected value of errors should be zero, they shouldn't be correlated, and they should all have the same variance.
- The expected value of errors can be any non-zero value, and they may or may not be correlated.
- The errors in regression should have a non-zero expected value and can have varying variances.
- The expected value of errors should be non-zero, and they can be highly correlated with each other.

In linear regression.
- Influentials are individuals whose removal would change the model.
- Influentials are individuals whose removal has no effect on the model.
- Influentials are individuals whose inclusion in the model doesn't impact the results.
- Influentials are individuals whose removal affects only a small subset of model coefficients.

In linear regression.
- Outliers are individuals that perfectly fit the expected pattern.
- Outliers are individuals that yield unexpected results or deviate from the pattern.
- Outliers are individuals that have no impact on the overall model.
- Outliers are individuals that consistently conform to the expected results.

In linear regression.
- Residuals are the difference between observed and predicted.
- Residuals are the average of observed and predicted values.
- Residuals are the product of observed and predicted values.
- Residuals represent the sum of observed and predicted values.

What is the correct interpretation of the term "likelihood" in a statistical context?
- The likelihood is the probability of the model given the data.
- The likelihood is the probability to obtain perfect data for a given model.
- The likelihood is the probability to have no data available for the model.
- The likelihood is the probability to recover the data given a model.

What is the relationship between AIC and BIC?
- AIC and BIC are interchangeable terms referring to the same criterion.
- AIC and BIC are dependent on each other and always yield the same results.
- AIC and BIC are independent information criteria that may yield different results.
- AIC and BIC are statistical models used to estimate the same parameter.

What is the role of the intercept and parameters in linear regression?
- The intercept represents the height of the hyperplane at the origin, while the parameters determine the direction of the hyperplane.
- The intercept and parameters both represent the height of the hyperplane at different points along the x-axis.
- The intercept represents the slope of the hyperplane, while the parameters determine the direction of the hyperplane.
- The intercept and parameters have the same role in determining the slope and direction of the hyperplane.

Paragon individuals.
- Those that are closest to the center of the cluster.
- Those that are furthest from the center of the cluster.
- The average distance of all points from the center of the cluster.
- Those that are equidistant from all other points in the cluster.

Specific individuals.
- The individuals that are closest to the centers of their respective clusters.
- The individuals that are furthest from the centers of other clusters.
- The individuals that are equidistant from the centers of their respective clusters.
- The individuals that have similar characteristics within their own clusters.

F test goal.
- Decide between nested models in linear regressions.
- Decide if a given predictor contributes to a model in linear regressions.
- Decide between nested models in logistic regressions.
- Decide if a given predictor contributes to a model in logistic regressions.

T test goal.
- Decide if a given predictor contributes to a model in linear regressions.
- Decide between nested models in linear regressions.
- Decide if a given predictor contributes to a model in logistic regressions.
- Decide if expected data created with a model coincides with observed data.

Likelihood-ratio test goal.
- Decide between nested models in logistic regressions.
- Decide if a given predictor contributes to a model in logistic regressions.
- Decide between nested models in linear regressions.
- Decide if expected data created with a model coincides with observed data.

Chi-square test goal.
- Decide if expected data created with a model coincides with observed data.
- Decide if a given predictor contributes to a model in logistic regressions.
- Decide between nested models in logistic regressions.
- Decide between nested models in linear regressions.

Wald test goal.
- Decide if a given predictor contributes to a model in logistic regressions.
- Decide between nested models in logistic regressions.
- Decide if expected data created with a model coincides with observed data.
- Decide between nested models in linear regressions.

Hosmer-Lemeshow test goal.
- Decide if expected data created with a model coincides with observed data.
- Decide between nested models in linear regressions.
- Decide if a given predictor contributes to a model in logistic regressions.
- Decide if a given predictor contributes to a model in linear regressions.

What do eigenvalues represent in PCA?
- The number of dimensions in the reduced dataset.
- The projections of the data points onto the principal components.
- The variance explained by each direction.
- The correlation between different variables in the dataset.

What is the goal of a linear regression?
- Obtain the response as a linear function of predictors.
- Obtain the predictors as a linear function of the response.
- Obtain the probability for the response.
- Obtain the probability for the predictors.

What is the goal of a logistic regression?
- Obtain the probability of an event as a linear function of predictors.
- Obtain the log probability of an event as a linear function of the response.
- Obtain the log odds of an event as a linear function of predictors.
- Obtain the odds of an event as a linear function of the response.

What is the goal of a principal component analysis?
- Reduce the dimensions to represent individuals described by quantitative variables.
- Increase the dimensions to represent individuals described by qualitative variables.
- Reduce the dimensions to represent individuals described by qualitative variables.
- Increase the dimensions to represent individuals described by quantitative variables.
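Several questions above turn on the same definitions: the intercept is the height of the line at the origin, the slope parameter gives its direction, and the residuals are observed minus predicted values. A minimal pure-Python sketch of simple least-squares regression, with made-up data chosen only for illustration:

```python
# Simple linear regression y = b0 + b1*x by least squares (pure Python).
# The data points below are made up for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# b1 is the "slope" (direction of the line); b0 is the intercept
# (height of the line at the origin).
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b0 = my - b1 * mx

# Residuals: difference between observed and predicted values.
predicted = [b0 + b1 * x for x in xs]
residuals = [y - p for y, p in zip(ys, predicted)]

print(round(b1, 3), round(b0, 3))  # slope ~ 1.96, intercept ~ 0.14
```

Note that the residuals of a least-squares fit sum to zero, which is the sample counterpart of the condition that the errors have zero expected value.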
What is the goal of a correspondence analysis?
- Reduce the dimensions to represent individuals described by 2 quantitative variables.
- Increase the dimensions to represent individuals described by 2 qualitative variables.
- Reduce the dimensions to represent individuals described by 2 qualitative variables.
- Increase the dimensions to represent individuals described by 2 quantitative variables.

What is the goal of agglomerative clustering algorithms?
- Provide the hierarchies in which individuals are organized.
- Divide individuals into groups.
- Compute the distances between individuals.
- Compute the total inertia of individuals.

What is the goal of partition algorithms?
- Provide the hierarchies in which individuals are organized.
- Divide individuals into groups.
- Compute the distances between individuals.
- Compute the total inertia of individuals.

What type of variable is the response in regression models?
- Only qualitative.
- Only quantitative.
- None is correct.
- Qualitative or quantitative.

What type of variable are the predictors in regression models?
- Only qualitative.
- Only quantitative.
- None is correct.
- Qualitative or quantitative.

What type of variables can we apply the principal component analysis to?
- Only qualitative.
- Only quantitative.
- None is correct.
- Qualitative or quantitative.

What type of variables can we apply the correspondence analysis to (without discretizing)?
- Only qualitative.
- Only quantitative.
- None is correct.
- Qualitative or quantitative.

In a graphical representation of a linear regression, the intercept is:
- The height of the hyperplane at the origin.
- The "slope" or direction of the hyperplane.
- The number of components.
- The number of clusters.

In a graphical representation of a linear regression, the parameters are:
- The height of the hyperplane at the origin.
- The "slope" or direction of the hyperplane.
- The number of components.
- The number of clusters.

In the projection of the cloud of individuals, what are the eigenvalues?
- The direction of the projection.
- The variance explained by each direction.
- The new coordinates of the points.
- The correlations between variables.

In the projection of the cloud of individuals, what are the eigenvectors?
- The direction of the projection.
- The variance explained by each direction.
- The new coordinates of the points.
- The correlations between variables.

In PCA, what does an ideal projection preserve?
- The eigenvalues of the cloud.
- The distances of the cloud.
- The inertia of the cloud.
- The center of the cloud.

In CA, what does an ideal projection preserve?
- The eigenvalues of the cloud.
- The distances of the cloud.
- The inertia of the cloud.
- The center of the cloud.

In PCA, the variables associated with each component:
- Are always correlated.
- Are never correlated.
- Are negatively correlated.
- All answers are wrong.

In CA, the marginal probabilities are:
- The eigenvalues of the cloud.
- The distances of the cloud.
- The inertia of the cloud.
- The center of the cloud.

What is the null hypothesis in an F test?
- Select the model with less degrees of freedom.
- Select the model with more degrees of freedom.
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.

What is the alternative hypothesis in an F test?
- Select the model with less degrees of freedom.
- Select the model with more degrees of freedom.
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.

What is the null hypothesis in a T test?
- Select the model with less degrees of freedom.
- Select the model with more degrees of freedom.
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.

What is the alternative hypothesis in a T test?
- Select the model with less degrees of freedom.
- Select the model with more degrees of freedom.
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.

What is the null hypothesis in a Likelihood-ratio test?
- Select the model with less degrees of freedom.
- Select the model with more degrees of freedom.
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.

What is the alternative hypothesis in a Likelihood-ratio test?
- Select the model with less degrees of freedom.
- Select the model with more degrees of freedom.
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.

What is the null hypothesis in a Wald test?
- Select the model with less degrees of freedom.
- Select the model with more degrees of freedom.
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.

What is the alternative hypothesis in a Wald test?
- Select the model with less degrees of freedom.
- Select the model with more degrees of freedom.
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.

What is the null hypothesis in a Hosmer-Lemeshow test?
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.
- Observed and expected data do not coincide.
- Observed and expected data coincide.

What is the alternative hypothesis in a Chi-square test?
- A given predictor does not contribute to the model.
- A given predictor does contribute to the model.
- Observed and expected data do not coincide.
- Observed and expected data coincide.

When using the backward elimination method in model selection we:
- Start with no variables and add the one with the highest p-value.
- Start with no variables and add the one with the lowest p-value.
- Start with all variables and remove the one with the highest p-value.
- Start with all variables and remove the one with the lowest p-value.

What is the likelihood?
- The probability to obtain a model given the data.
- The probability to obtain the data given a model.
- The logarithm of the probability to obtain a model given the data.
- The logarithm of the probability to obtain the data given a model.
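The likelihood questions above define the likelihood as the probability of obtaining the data given a model, and the likelihood-ratio questions compare nested models. A minimal pure-Python sketch with made-up coin-flip data: the Bernoulli log-likelihood evaluated at a fixed null value of p and at the maximum-likelihood estimate, and the likelihood-ratio statistic 2*(ln L1 - ln L0):

```python
import math

# Made-up binary data for illustration: 1 = success, 0 = failure.
data = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

def log_likelihood(p, data):
    """Log of the probability of the observed data given parameter p."""
    return sum(math.log(p if x == 1 else 1.0 - p) for x in data)

# Null model: p fixed at 0.5.  Alternative: p at its MLE (the sample mean).
p_hat = sum(data) / len(data)        # 0.7 for this sample
ll0 = log_likelihood(0.5, data)      # restricted (fewer free parameters)
ll1 = log_likelihood(p_hat, data)    # unrestricted (one free parameter)

# Likelihood-ratio statistic: large values are evidence against the null.
G = 2.0 * (ll1 - ll0)
```

The unrestricted log-likelihood is never smaller than the restricted one, so G is always non-negative; in the usual test it is compared against a chi-square distribution whose degrees of freedom equal the number of restricted parameters.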
How should we use AIC and BIC in regression models?
- We want to maximize both; they will yield the same value.
- We want to maximize both; they may yield different values.
- We want to minimize both; they will yield the same value.
- We want to minimize both; they may yield different values.

What are residuals in linear regressions?
- The difference between observed and predicted.
- The variables that are not relevant.
- The individuals that yield unexpected results.
- The individuals whose removal would change the model.

What are outliers in linear regressions?
- The difference between observed and predicted.
- The variables that are not relevant.
- The individuals that yield unexpected results.
- The individuals whose removal would change the model.

What are influentials in linear regressions?
- The difference between observed and predicted.
- The variables that are not relevant.
- The individuals that yield unexpected results.
- The individuals whose removal would change the model.

What conditions do the errors have to fulfill in linear regressions?
- Their expected value is zero.
- Equal variance.
- They are not correlated.
- All are correct.

Select a true statement regarding the distance between two points.
- The Manhattan distance is smaller than the Euclidean distance.
- The Manhattan distance is smaller than or equal to the Euclidean distance.
- The Manhattan distance is larger than or equal to the Euclidean distance.
- The Manhattan distance is larger than the Euclidean distance.

Select a true statement regarding the distance between two groups of points.
- Single linkage is smaller than complete linkage.
- Single linkage is smaller than or equal to complete linkage.
- Single linkage is larger than or equal to complete linkage.
- Single linkage is larger than complete linkage.

A viable method to select the optimal number of clusters:
- When we destroy a cluster and the between-cluster inertia doesn't really increase.
- When we destroy a cluster and the between-cluster inertia doesn't really decrease.
- When we create a cluster and the between-cluster inertia doesn't really increase.
- When we create a cluster and the between-cluster inertia doesn't really decrease.

If the p-value is smaller than the significance level:
- The null hypothesis is rejected.
- The alternative hypothesis is rejected.

So that you remember (H0 = null and H1 = alternative): fail every question of this type and you fail the exam, see you next year.
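Two of the statements above can be checked numerically: for any pair of points the Manhattan distance is larger than or equal to the Euclidean distance, and for any two groups of points the single-linkage distance (the minimum pairwise distance) is smaller than or equal to the complete-linkage distance (the maximum pairwise distance). A quick pure-Python sketch with made-up points:

```python
import math

def euclidean(a, b):
    """Straight-line distance between points a and b."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """Sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def single_linkage(g1, g2, dist=euclidean):
    """Minimum pairwise distance between the two groups."""
    return min(dist(a, b) for a in g1 for b in g2)

def complete_linkage(g1, g2, dist=euclidean):
    """Maximum pairwise distance between the two groups."""
    return max(dist(a, b) for a in g1 for b in g2)

# Made-up 2-D points for illustration.
p, q = (0.0, 0.0), (3.0, 4.0)
print(manhattan(p, q) >= euclidean(p, q))   # True: 7 >= 5

g1 = [(0.0, 0.0), (1.0, 0.0)]
g2 = [(4.0, 0.0), (6.0, 0.0)]
print(single_linkage(g1, g2) <= complete_linkage(g1, g2))  # True: 3 <= 6
```

Both inequalities hold in general, not just for these points: the Manhattan distance bounds the Euclidean distance from above, and a minimum over pairwise distances can never exceed the maximum over the same pairs.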