Returns a detailed listing of a fitted rpart object.

# S3 method for rpart
summary(object, cp = 0, digits = getOption("digits"), file, ...)

Arguments

object

fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.

digits

Number of significant digits to be used in the result.

cp

trim nodes with a complexity of less than cp from the listing.

file

write the output to a given file name (full listings of a tree are often quite long; see the sketch after this argument list).

...

arguments to be passed to or from other methods.
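
A minimal sketch of the cp and file arguments in use, assuming the rpart package is loaded and its bundled car.test.frame data set is available; the output file name is purely hypothetical:

library(rpart)
fit <- rpart(Mileage ~ Weight, data = car.test.frame)

## only describe nodes whose complexity parameter is at least 0.1
summary(fit, cp = 0.1)

## divert the (often long) full listing to a file instead of the console
summary(fit, file = "mileage-summary.txt")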

Details

This function is a method for the generic function summary for class "rpart". It can be invoked by calling summary for an object of the appropriate class, or directly by calling summary.rpart regardless of the class of the object.

It prints the call, the table shown by printcp, the variable importance (summing to 100), and details for each node (the details depend on the type of tree).
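
The rescaling of the variable importance can be reproduced from the fitted object itself. A small sketch of the idea (not necessarily the exact internal code), using the variable.importance component of the rpart object and the kyphosis fit from the Examples below:

fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
ximp <- fit$variable.importance   ## raw importances stored in the fitted object
round(100 * ximp / sum(ximp))     ## rescaled values (summing to 100), as printed by summary()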

See also

printcp, rpart.object

Examples

## a regression tree
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
summary(z.auto)
#> Call:
#> rpart(formula = Mileage ~ Weight, data = car.test.frame)
#>   n= 60 
#> 
#>           CP nsplit rel error    xerror       xstd
#> 1 0.59534912      0 1.0000000 1.0509830 0.18068190
#> 2 0.13452819      1 0.4046509 0.5750980 0.10479003
#> 3 0.01282843      2 0.2701227 0.4066541 0.07832915
#> 4 0.01000000      3 0.2572943 0.4108744 0.07847118
#> 
#> Variable importance
#> Weight 
#>    100 
#> 
#> Node number 1: 60 observations,    complexity param=0.5953491
#>   mean=24.58333, MSE=22.57639 
#>   left son=2 (45 obs) right son=3 (15 obs)
#>   Primary splits:
#>       Weight < 2567.5 to the right, improve=0.5953491, (0 missing)
#> 
#> Node number 2: 45 observations,    complexity param=0.1345282
#>   mean=22.46667, MSE=8.026667 
#>   left son=4 (22 obs) right son=5 (23 obs)
#>   Primary splits:
#>       Weight < 3087.5 to the right, improve=0.5045118, (0 missing)
#> 
#> Node number 3: 15 observations
#>   mean=30.93333, MSE=12.46222 
#> 
#> Node number 4: 22 observations
#>   mean=20.40909, MSE=2.78719 
#> 
#> Node number 5: 23 observations,    complexity param=0.01282843
#>   mean=24.43478, MSE=5.115312 
#>   left son=10 (15 obs) right son=11 (8 obs)
#>   Primary splits:
#>       Weight < 2747.5 to the right, improve=0.1476996, (0 missing)
#> 
#> Node number 10: 15 observations
#>   mean=23.8, MSE=4.026667 
#> 
#> Node number 11: 8 observations
#>   mean=25.625, MSE=4.984375 
#> 
## a classification tree with multiple variables and surrogate splits.
summary(rpart(Kyphosis ~ Age + Number + Start, data = kyphosis))
#> Call:
#> rpart(formula = Kyphosis ~ Age + Number + Start, data = kyphosis)
#>   n= 81 
#> 
#>           CP nsplit rel error    xerror      xstd
#> 1 0.17647059      0 1.0000000 1.0000000 0.2155872
#> 2 0.01960784      1 0.8235294 0.9411765 0.2107780
#> 3 0.01000000      4 0.7647059 0.9411765 0.2107780
#> 
#> Variable importance
#>  Start    Age Number 
#>     64     24     12 
#> 
#> Node number 1: 81 observations,    complexity param=0.1764706
#>   predicted class=absent   expected loss=0.2098765  P(node) =1
#>     class counts:    64    17
#>    probabilities: 0.790 0.210 
#>   left son=2 (62 obs) right son=3 (19 obs)
#>   Primary splits:
#>       Start  < 8.5  to the right, improve=6.762330, (0 missing)
#>       Number < 5.5  to the left,  improve=2.866795, (0 missing)
#>       Age    < 39.5 to the left,  improve=2.250212, (0 missing)
#>   Surrogate splits:
#>       Number < 6.5  to the left,  agree=0.802, adj=0.158, (0 split)
#> 
#> Node number 2: 62 observations,    complexity param=0.01960784
#>   predicted class=absent   expected loss=0.09677419  P(node) =0.7654321
#>     class counts:    56     6
#>    probabilities: 0.903 0.097 
#>   left son=4 (29 obs) right son=5 (33 obs)
#>   Primary splits:
#>       Start  < 14.5 to the right, improve=1.0205280, (0 missing)
#>       Age    < 55   to the left,  improve=0.6848635, (0 missing)
#>       Number < 4.5  to the left,  improve=0.2975332, (0 missing)
#>   Surrogate splits:
#>       Number < 3.5 to the left,  agree=0.645, adj=0.241, (0 split)
#>       Age    < 16  to the left,  agree=0.597, adj=0.138, (0 split)
#> 
#> Node number 3: 19 observations
#>   predicted class=present  expected loss=0.4210526  P(node) =0.2345679
#>     class counts:     8    11
#>    probabilities: 0.421 0.579 
#> 
#> Node number 4: 29 observations
#>   predicted class=absent   expected loss=0  P(node) =0.3580247
#>     class counts:    29     0
#>    probabilities: 1.000 0.000 
#> 
#> Node number 5: 33 observations,    complexity param=0.01960784
#>   predicted class=absent   expected loss=0.1818182  P(node) =0.4074074
#>     class counts:    27     6
#>    probabilities: 0.818 0.182 
#>   left son=10 (12 obs) right son=11 (21 obs)
#>   Primary splits:
#>       Age    < 55   to the left,  improve=1.2467530, (0 missing)
#>       Start  < 12.5 to the right, improve=0.2887701, (0 missing)
#>       Number < 3.5  to the right, improve=0.1753247, (0 missing)
#>   Surrogate splits:
#>       Start  < 9.5 to the left,  agree=0.758, adj=0.333, (0 split)
#>       Number < 5.5 to the right, agree=0.697, adj=0.167, (0 split)
#> 
#> Node number 10: 12 observations
#>   predicted class=absent   expected loss=0  P(node) =0.1481481
#>     class counts:    12     0
#>    probabilities: 1.000 0.000 
#> 
#> Node number 11: 21 observations,    complexity param=0.01960784
#>   predicted class=absent   expected loss=0.2857143  P(node) =0.2592593
#>     class counts:    15     6
#>    probabilities: 0.714 0.286 
#>   left son=22 (14 obs) right son=23 (7 obs)
#>   Primary splits:
#>       Age    < 111  to the right, improve=1.71428600, (0 missing)
#>       Start  < 12.5 to the right, improve=0.79365080, (0 missing)
#>       Number < 3.5  to the right, improve=0.07142857, (0 missing)
#> 
#> Node number 22: 14 observations
#>   predicted class=absent   expected loss=0.1428571  P(node) =0.1728395
#>     class counts:    12     2
#>    probabilities: 0.857 0.143 
#> 
#> Node number 23: 7 observations
#>   predicted class=present  expected loss=0.4285714  P(node) =0.08641975
#>     class counts:     3     4
#>    probabilities: 0.429 0.571 
#> 