This is the 'Automobile' data from the UCI Machine Learning Repository.

Usage

data(imports85)

Format

imports85 is a data frame with 205 cases (rows) and 26 variables (columns). This data set consists of three types of entities: (a) the specification of an auto in terms of various characteristics, (b) its assigned insurance risk rating, and (c) its normalized losses in use as compared to other cars. The risk rating corresponds to the degree to which the auto is more risky than its price indicates. Cars are initially assigned a risk factor symbol associated with their price; if a car turns out to be more (or less) risky than its price suggests, the symbol is adjusted by moving it up (or down) the scale. Actuaries call this process 'symboling'. A value of +3 indicates that the auto is risky; -3 indicates that it is probably quite safe.
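For orientation, the distribution of the risk ratings can be tabulated directly. This is a minimal sketch, assuming the rating is stored in a column named symboling, as in the randomForest package's copy of the data:

data(imports85)
## Tabulate the insurance risk rating (-3 = probably safe ... +3 = risky).
table(imports85$symboling)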

The third factor is the relative average loss payment per insured vehicle year. This value is normalized across all autos within a particular size classification (two-door small, station wagons, sports/specialty, etc.) and represents the average loss per car per year.
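This is the normalizedLosses column dropped in the Examples below because of its many missing values. A quick way to see this, assuming the column name used in the randomForest copy of the data:

data(imports85)
## Summarize the normalized losses; the NA count explains why the
## Examples section removes this column before modeling.
summary(imports85$normalizedLosses)
sum(is.na(imports85$normalizedLosses))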

Source

Originally created by Jeffrey C. Schlimmer, from 1985 Model Import Car and Truck Specifications, 1985 Ward's Automotive Yearbook, Personal Auto Manuals, Insurance Services Office, and Insurance Collision Report, Insurance Institute for Highway Safety.

The original data is at http://www.ics.uci.edu/~mlearn/MLSummary.html.

References

1985 Model Import Car and Truck Specifications, 1985 Ward's Automotive Yearbook.

Personal Auto Manuals, Insurance Services Office, 160 Water Street, New York, NY 10038.

Insurance Collision Report, Insurance Institute for Highway Safety, Watergate 600, Washington, DC 20037.

See also

randomForest.

Examples

data(imports85)
imp85 <- imports85[, -2]  # Too many NAs in normalizedLosses.
imp85 <- imp85[complete.cases(imp85), ]
## Drop empty levels for factors.
imp85[] <- lapply(imp85, function(x) if (is.factor(x)) x[, drop = TRUE] else x)

stopifnot(require(randomForest))
price.rf <- randomForest(price ~ ., imp85, do.trace = 10, ntree = 100)
#>      |      Out-of-bag   |
#> Tree |      MSE  %Var(y) |
#>   10 |  5.12e+06    7.87 |
#>   20 | 4.995e+06    7.67 |
#>   30 | 4.734e+06    7.27 |
#>   40 | 4.411e+06    6.78 |
#>   50 |  4.32e+06    6.64 |
#>   60 | 4.228e+06    6.49 |
#>   70 | 4.235e+06    6.51 |
#>   80 | 4.203e+06    6.46 |
#>   90 | 4.094e+06    6.29 |
#>  100 | 4.066e+06    6.25 |
print(price.rf)
#> 
#> Call:
#>  randomForest(formula = price ~ ., data = imp85, do.trace = 10, ntree = 100)
#>                Type of random forest: regression
#>                      Number of trees: 100
#> No. of variables tried at each split: 8
#> 
#>           Mean of squared residuals: 4065928
#>                     % Var explained: 93.75
numDoors.rf <- randomForest(numOfDoors ~ ., imp85, do.trace=10, ntree=100)
#> ntree      OOB      1      2
#>    10:  15.71% 11.61% 21.52%
#>    20:  12.44%  7.14% 19.75%
#>    30:  10.36%  8.93% 12.35%
#>    40:   9.84%  8.04% 12.35%
#>    50:  11.40%  8.93% 14.81%
#>    60:   9.84%  6.25% 14.81%
#>    70:  11.40%  8.04% 16.05%
#>    80:  10.88%  7.14% 16.05%
#>    90:  10.88%  6.25% 17.28%
#>   100:  11.92%  8.04% 17.28%
print(numDoors.rf)
#> 
#> Call:
#>  randomForest(formula = numOfDoors ~ ., data = imp85, do.trace = 10, ntree = 100)
#>                Type of random forest: classification
#>                      Number of trees: 100
#> No. of variables tried at each split: 4
#> 
#>         OOB estimate of error rate: 11.92%
#> Confusion matrix:
#>      four two class.error
#> four  103   9  0.08035714
#> two    14  67  0.17283951
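A natural follow-up, not part of the original example, is to ask which predictors drive the fitted forests. randomForest provides importance() and varImpPlot() for this; a minimal sketch using the regression forest fitted above:

## Sketch (not in the original example): per-variable importance for price.rf.
round(importance(price.rf), 2)  # total decrease in node impurity per variable
varImpPlot(price.rf)            # dot chart of the same information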