Like the Naive Bayes classifier, decision trees take a set of attribute values as input and output a decision. To clear up a common point of confusion, "decision" and "class" are jargon from different areas but refer to essentially the same thing.

Just like K-NN and Naive Bayes, the decision tree can handle multiclass problems. However, decision trees tend to overfit data with a large number of features. Mechanisms such as pruning, setting a minimum number of samples required at a leaf node, or setting a maximum depth for the tree are necessary to avoid this problem.

Comparing QDA to Naive Bayes is interesting. Although they achieve similar performance on the first dataset, I would argue that the Naive Bayes classifier is much better, as it is far more confident in its classifications.

As part of this study, we examine how accurate different classification algorithms are on diverse datasets. Four classification models are compared on five different datasets: decision tree, SVM, Naive Bayes, and K-nearest neighbor. The Naive Bayes algorithm is found to be the most effective of the four.

Overview
Naive Bayes and K-NN are both examples of supervised learning (where the data comes already labeled). Decision trees are easy to use when the number of classes is small.

Classification algorithms have a wide range of applications, such as customer target marketing, medical disease diagnosis, social network analysis, credit card rating, artificial intelligence, and document categorization. Several major kinds of classification techniques are the K-nearest neighbor classifier, Naive Bayes, and decision trees.
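The comparison described above can be sketched in code. The following is a minimal illustration (not the study's actual experiment), assuming scikit-learn and its built-in iris dataset as a stand-in for the datasets used; it fits the four classifiers discussed and shows the decision-tree overfitting controls mentioned (`max_depth`, `min_samples_leaf`) in use:

```python
# Hedged sketch: comparing the four classifiers on a toy dataset.
# Assumes scikit-learn; the iris dataset and chosen hyperparameters are
# illustrative, not those used in the study itself.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

models = {
    # max_depth and min_samples_leaf cap tree growth to curb overfitting
    "Decision tree": DecisionTreeClassifier(
        max_depth=3, min_samples_leaf=5, random_state=0
    ),
    "Naive Bayes": GaussianNB(),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)              # supervised: labels provided
    scores[name] = model.score(X_test, y_test)  # held-out accuracy
    print(f"{name}: {scores[name]:.3f}")
```

Which model wins depends on the dataset; on problems where the conditional-independence assumption roughly holds, Naive Bayes often performs competitively despite its simplicity.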