A) None of these B) both a and b C) prediction D) classification
A) High dimensional data B) low dimensional data C) None of these D) medium dimensional data
A) root B) None of these C) leaf D) stem
A) Information Gain B) None of these C) Gini Index D) Entropy
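For reference, the Gini index and entropy named in the options above are the standard decision-tree splitting criteria. A minimal Python sketch (the function names and toy labels are illustrative, not from the quiz):

```python
from math import log2

def gini(labels):
    """Gini index: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def entropy(labels):
    """Shannon entropy in bits over the class distribution."""
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

# A perfectly mixed two-class node is maximally impure:
mixed = ["yes", "yes", "no", "no"]
print(gini(mixed))     # 0.5
print(entropy(mixed))  # 1.0
```

Information gain is then the parent node's entropy minus the size-weighted entropies of its child nodes after a split.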
A) What are the advantages of the decision tree? B) None of these C) Both D) Non-linear patterns in the data can be captured easily
A) Random forests are easy to interpret but often very accurate B) None of these C) Random forests are difficult to interpret and often much less accurate D) Random forests are difficult to interpret but often very accurate
A) Warehousing B) Text Mining C) Data Mining D) Data Selection
A) Knowledge Discovery Data B) Knowledge data house C) Knowledge Discovery Database D) Knowledge Data definition
A) To obtain the queries response B) For authentication C) In order to maintain consistency D) For data access
A) Association and correlation analysis, classification B) Prediction and characterization C) Cluster analysis and Evolution analysis D) All of the above
A) The nearest neighbor is the same as the K-means B) The goal of k-means clustering is to partition (n) observations into (k) clusters C) All of the above D) K-means clustering can be defined as a method of vector quantization
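Option B states the standard definition of k-means: partition n observations into k clusters, each observation belonging to the cluster with the nearest mean. A minimal pure-Python sketch of Lloyd's algorithm (function names and the toy 2-D data are illustrative):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm on 2-D points: alternate assignment and update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # pick k distinct points as seeds
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centers[c][0]) ** 2
                                + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        # Update step: move each center to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# Two well-separated groups of three points each:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, 2)
```

On this toy data the two clusters recover the two groups regardless of which points are sampled as initial centers.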
A) 2 B) 3 C) 4 D) 5
A) Avoid bad features B) Find the explained variance C) Find good features to improve your clustering score D) Find the dimension of the data that maximizes the feature variance
A) Find the features which best predict Y B) Use the best practices of data wrangling C) Standardized data allows other people to understand your work better D) Make training faster
A) MCV B) MCRS C) All of the mentioned D) MARS
A) None of the mentioned B) featurePlot C) levelplot D) plotsample
A) postProcess B) preProcess C) All of the above D) process
A) False B) True
A) ICA B) SCA C) None of the mentioned D) PCA
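PCA, listed among the options above, finds the direction along which the data's variance is maximal (the leading eigenvector of the covariance matrix). A small power-iteration sketch for 2-D data (all names and the toy data are illustrative):

```python
def leading_component(data, iters=100):
    """Return the unit vector along the direction of maximum variance
    of 2-D data, via power iteration on the 2x2 covariance matrix."""
    n = len(data)
    # Center the data.
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    centered = [(x - mx, y - my) for x, y in data]
    # Covariance matrix entries.
    sxx = sum(x * x for x, _ in centered) / n
    syy = sum(y * y for _, y in centered) / n
    sxy = sum(x * y for x, y in centered) / n
    # Power iteration: repeated multiplication converges to the
    # eigenvector of the largest eigenvalue.
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        wx, wy = sxx * vx + sxy * vy, sxy * vx + syy * vy
        norm = (wx * wx + wy * wy) ** 0.5
        vx, vy = wx / norm, wy / norm
    return vx, vy

# Points on the line y = x: the first principal direction is (1, 1)/sqrt(2).
vx, vy = leading_component([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)])
```

The sketch handles only two dimensions to stay short; real PCA implementations diagonalize the full covariance matrix.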
A) False B) True