Various Classifiers And Classification Methods

Posted on March 25, 2016


  1. Decision Tree:
  • A decision tree is a structure that includes a root node, branches, and leaf nodes. Each internal node denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node holds a class label. The topmost node in the tree is the root node.
  • A classic example is a decision tree for the concept buys_computer, which indicates whether a customer at a company is likely to buy a computer or not. Each internal node represents a test on an attribute, and each leaf node represents a class.

The benefits of having a decision tree are as follows −

  • It does not require any domain knowledge.
  • It is easy to comprehend.
  • The learning and classification steps of a decision tree are simple and fast.

 

Decision Tree Algorithm:

 

In 1980, the machine learning researcher J. Ross Quinlan developed a decision tree algorithm known as ID3 (Iterative Dichotomiser 3). He later presented C4.5, the successor of ID3. ID3 and C4.5 adopt a greedy approach: there is no backtracking, and the trees are constructed in a top-down, recursive, divide-and-conquer manner.

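To make the greedy, top-down strategy concrete, here is a minimal Python sketch of ID3-style induction using information gain. It is a sketch under assumptions, not Quinlan's exact algorithm: the nested-dict tree representation, the helper names, and the toy buys_computer rows are all illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    """Greedily pick the attribute with the highest information gain."""
    base = entropy(labels)
    def gain(attr):
        remainder = 0.0
        for value in {row[attr] for row in rows}:
            subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
            remainder += len(subset) / len(labels) * entropy(subset)
        return base - remainder
    return max(attributes, key=gain)

def build_tree(rows, labels, attributes):
    """Top-down, recursive, divide-and-conquer induction with no backtracking."""
    if len(set(labels)) == 1:           # pure node: make a leaf
        return labels[0]
    if not attributes:                  # no tests left: majority class
        return Counter(labels).most_common(1)[0][0]
    attr = best_attribute(rows, labels, attributes)
    tree = {attr: {}}
    for value in {row[attr] for row in rows}:
        idx = [i for i, row in enumerate(rows) if row[attr] == value]
        tree[attr][value] = build_tree(
            [rows[i] for i in idx],
            [labels[i] for i in idx],
            [a for a in attributes if a != attr])
    return tree

# Toy buys_computer data (illustrative values only).
rows = [{"age": "youth", "student": "no"},
        {"age": "youth", "student": "yes"},
        {"age": "senior", "student": "no"}]
labels = ["no", "yes", "no"]
print(build_tree(rows, labels, ["age", "student"]))
```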

  • Tree Pruning:

Tree pruning is performed to remove branches that reflect anomalies in the training data caused by noise or outliers. The pruned trees are smaller and less complex.

Tree Pruning Approaches

The two tree pruning approaches are listed below −

  • Pre-pruning − The tree is pruned by halting its construction early.
  • Post-pruning − This approach removes a sub-tree from a fully grown tree.

 

  • Cost Complexity:

The cost complexity is measured by the following two parameters, combined into a single score in the sketch after the list −

  • Number of leaves in the tree, and
  • Error rate of the tree.
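
As a rough illustration, cost-complexity pruning in the CART tradition scores a subtree by its error rate plus a penalty per leaf. The alpha parameter and the numbers below are hypothetical; this is a minimal sketch, not the full pruning algorithm.

```python
def cost_complexity(error_rate, num_leaves, alpha=0.01):
    """Cost complexity of a tree: error rate plus a per-leaf penalty.

    alpha is a hypothetical complexity parameter that trades accuracy
    against tree size; a smaller total cost is better.
    """
    return error_rate + alpha * num_leaves

full   = cost_complexity(error_rate=0.05, num_leaves=20)  # 0.25
pruned = cost_complexity(error_rate=0.08, num_leaves=5)   # 0.13
# The pruned tree scores lower, so post-pruning would keep it.
```
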
  2. Bayesian Classification:

Bayesian classification is based on Bayes’ Theorem. Bayesian classifiers are statistical classifiers: they can predict class membership probabilities, such as the probability that a given tuple belongs to a particular class.

Bayes’ Theorem:

Bayes’ Theorem is named after Thomas Bayes. There are two types of probabilities −

  • Posterior probability, P(H|X)
  • Prior probability, P(H)

where X is a data tuple and H is some hypothesis.

According to Bayes’ Theorem,

P(H|X) = P(X|H) · P(H) / P(X)
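
As a quick numeric check of the formula, here is a minimal sketch; the prior, likelihood, and evidence values are invented for illustration.

```python
# Hypothetical values: H = "tuple belongs to class buys_computer = yes".
p_h = 0.6          # prior P(H)
p_x_given_h = 0.4  # likelihood P(X|H)
p_x = 0.3          # evidence P(X)

posterior = p_x_given_h * p_h / p_x
print(f"P(H|X) = {posterior:.2f}")  # P(H|X) = 0.80
```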

  • Bayesian Belief Network:

Bayesian Belief Networks specify joint conditional probability distributions. They are also known as Belief Networks, Bayesian Networks, or Probabilistic Networks.

  • A Belief Network allows class conditional independencies to be defined between subsets of variables.
  • It provides a graphical model of causal relationships on which learning can be performed.
  • We can use a trained Bayesian Network for classification.

There are two components that define a Bayesian Belief Network −

  • Directed acyclic graph
  • A set of conditional probability tables
  • Directed Acyclic Graph:
  • Each node in a directed acyclic graph represents a random variable.
  • These variables may be discrete or continuous-valued.
  • These variables may correspond to actual attributes given in the data.

Directed Acyclic Graph Representation:

The following diagram shows a directed acyclic graph for six Boolean variables.

[Figure: directed acyclic graph over six Boolean variables, including FamilyHistory, Smoker, LungCancer, and PositiveXray]

The arcs in the diagram represent causal knowledge. For example, lung cancer is influenced by a person’s family history of lung cancer, as well as by whether or not the person is a smoker. Notably, the variable PositiveXray is independent of whether the patient has a family history of lung cancer or is a smoker, given that we know the patient has lung cancer.

  • Conditional Probability Tables:

Each variable in the network has a conditional probability table (CPT) associated with it. The CPT for a variable Z specifies the conditional distribution P(Z | Parents(Z)), where Parents(Z) are the parents of Z in the graph, and each row of the table sums to 1.

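A minimal sketch of such a table in Python, assuming LungCancer has parents FamilyHistory and Smoker as in the graph above; the probability values are illustrative only.

```python
# Hypothetical CPT: P(LungCancer = yes | FamilyHistory, Smoker).
cpt_lung_cancer = {
    (True,  True):  0.8,
    (True,  False): 0.5,
    (False, True):  0.7,
    (False, False): 0.1,
}

def p_lung_cancer(has_cancer, family_history, smoker):
    """Look up P(LungCancer = has_cancer | parents); each row sums to 1."""
    p_yes = cpt_lung_cancer[(family_history, smoker)]
    return p_yes if has_cancer else 1.0 - p_yes

print(p_lung_cancer(True, family_history=True, smoker=False))    # 0.5
print(p_lung_cancer(False, family_history=False, smoker=False))  # 0.9
```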

  3. Rule-Based Classification:

 

  • If-Then Rules:

A rule-based classifier makes use of a set of IF-THEN rules for classification. We can express a rule in the following form −

IF condition THEN conclusion

Let us consider a rule R1,

R1: IF age = youth AND student = yes THEN buys_computer = yes

Points to remember −

  • The IF part of the rule is called the rule antecedent or precondition.
  • The THEN part of the rule is called the rule consequent.
  • The antecedent (condition) part consists of one or more attribute tests, which are logically ANDed.
  • The consequent part consists of a class prediction.

Note − We can also write rule R1 as follows:
R1: (age = youth) ∧ (student = yes) ⇒ (buys_computer = yes)

If the condition holds true for a given tuple, then the antecedent is satisfied.
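
A minimal sketch of that check, assuming the tuple is a Python dict and R1’s conditions are attribute/value pairs:

```python
def antecedent_satisfied(conditions, tuple_):
    """True iff every ANDed attribute test in the rule antecedent holds."""
    return all(tuple_.get(attr) == value for attr, value in conditions)

r1 = [("age", "youth"), ("student", "yes")]
print(antecedent_satisfied(r1, {"age": "youth", "student": "yes"}))   # True
print(antecedent_satisfied(r1, {"age": "senior", "student": "yes"}))  # False
```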

  • Rule Extraction:

Here we will learn how to build a rule-based classifier by extracting IF-THEN rules from a decision tree; a short sketch follows the list below.

Points to remember −

  • One rule is created for each path from the root to a leaf node.
  • To form a rule antecedent, each splitting criterion is logically ANDed.
  • The leaf node holds the class prediction, forming the rule consequent.

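A minimal sketch of the extraction, reusing the nested-dict tree representation assumed in the earlier decision tree example:

```python
def extract_rules(tree, conditions=()):
    """Emit one IF-THEN rule per path from the root to a leaf node."""
    if not isinstance(tree, dict):  # leaf: its label is the rule consequent
        antecedent = " AND ".join(f"{a} = {v}" for a, v in conditions)
        return [f"IF {antecedent} THEN class = {tree}"]
    rules = []
    for attr, branches in tree.items():
        for value, subtree in branches.items():
            # Each splitting criterion on the path is logically ANDed.
            rules.extend(extract_rules(subtree, conditions + ((attr, value),)))
    return rules

tree = {"age": {"youth": {"student": {"yes": "yes", "no": "no"}},
                "senior": "no"}}
for rule in extract_rules(tree):
    print(rule)
# IF age = youth AND student = yes THEN class = yes
# IF age = youth AND student = no THEN class = no
# IF age = senior THEN class = no
```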
 

Rule Induction Using the Sequential Covering Algorithm:

  • The sequential covering algorithm can be used to extract IF-THEN rules from the training data; we do not need to generate a decision tree first. In this algorithm, each rule for a given class covers many of the tuples of that class.
  • Some sequential covering algorithms are AQ, CN2, and RIPPER. As per the general strategy, the rules are learned one at a time. Each time a rule is learned, the tuples covered by the rule are removed, and the process continues for the remaining tuples.

Note − Decision tree induction can be considered as learning a set of rules simultaneously, because the path to each leaf in a decision tree corresponds to a rule.

  • The following is the sequential learning algorithm, where rules are learned for one class at a time. When learning a rule for a class Ci, we want the rule to cover tuples from class Ci only, and no tuples from any other class.

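A minimal Python-style sketch of that loop, assuming a hypothetical learn_one_rule subroutine that greedily grows the best single rule for the current class:

```python
def sequential_covering(data, classes, learn_one_rule, min_quality):
    """General sequential covering: learn rules one class at a time.

    learn_one_rule is a hypothetical helper returning the best single
    rule for class c on the remaining tuples (or None if none is found).
    """
    rules = []
    for c in classes:
        remaining = list(data)
        while remaining:
            rule = learn_one_rule(remaining, c)
            if rule is None or rule.quality < min_quality:
                break  # no acceptable rule left for this class
            rules.append(rule)
            # Remove the tuples covered by the new rule, then repeat.
            remaining = [t for t in remaining if not rule.covers(t)]
    return rules
```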

 

  • Rule Pruning:

 

Rules are pruned for the following reasons −

  • The assessment of quality is made on the original set of training data, so a rule may perform well on the training data but less well on subsequent data. That is why rule pruning is required.
  • A rule is pruned by removing a conjunct. Rule R is pruned if the pruned version of R has greater quality, as assessed on an independent set of tuples.

FOIL offers a simple and effective method for rule pruning. For a given rule R,

FOIL_Prune(R) = (pos − neg) / (pos + neg)

where pos and neg are the numbers of positive and negative tuples covered by R, respectively.

Note − This value will increase with the accuracy of R on the pruning set. Hence, if the FOIL_Prune value is higher for the pruned version of R, then we prune R.
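
A minimal sketch of the measure, with hypothetical coverage counts from an independent pruning set:

```python
def foil_prune(pos, neg):
    """FOIL_Prune(R) = (pos - neg) / (pos + neg) on the pruning set."""
    return (pos - neg) / (pos + neg)

# Hypothetical counts for a rule R and its pruned version.
full_rule   = foil_prune(pos=30, neg=10)  # 0.50
pruned_rule = foil_prune(pos=28, neg=4)   # 0.75
# The pruned version scores higher, so we would prune R.
```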


Posted by Virat Fichadiya