1 Objective

The objective of this case is to help you understand logistic regression (binary classification) and some important ideas such as cross-validation, the ROC curve, and the cut-off probability.

2 Credit Card Default Data

We will use a subset of the Credit Card Default Data (sample size n=12,000) for this lab and illustration. The details of the full data (n=30,000) can be found at http://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients. Think about what kinds of factors could lead people to fail to pay their credit card balance.

We first load the credit scoring data. Comma-separated values (CSV) files are easy to load with read.csv().

credit_data <- read.csv(file = "https://yanyudm.github.io/Data-Mining-R/lecture/data/credit_default.csv", header=T)

Let's look at what information we have.

colnames(credit_data)
##  [1] "LIMIT_BAL"                  "SEX"                       
##  [3] "EDUCATION"                  "MARRIAGE"                  
##  [5] "AGE"                        "PAY_0"                     
##  [7] "PAY_2"                      "PAY_3"                     
##  [9] "PAY_4"                      "PAY_5"                     
## [11] "PAY_6"                      "BILL_AMT1"                 
## [13] "BILL_AMT2"                  "BILL_AMT3"                 
## [15] "BILL_AMT4"                  "BILL_AMT5"                 
## [17] "BILL_AMT6"                  "PAY_AMT1"                  
## [19] "PAY_AMT2"                   "PAY_AMT3"                  
## [21] "PAY_AMT4"                   "PAY_AMT5"                  
## [23] "PAY_AMT6"                   "default.payment.next.month"

Let’s look at what proportion of people in this sample actually defaulted.

mean(credit_data$default.payment.next.month)
## [1] 0.2193333

The name of the response variable is too long! Let's make it shorter by renaming it. Recall the rename() function.

library(dplyr)
credit_data<- rename(credit_data, default=default.payment.next.month)

How about the variable type and summary statistics?

str(credit_data)    # structure - see variable type
summary(credit_data) # summary statistics

We see that all variables are int, but we know that SEX, EDUCATION, and MARRIAGE are categorical, so we convert them to factors.

credit_data$SEX<- as.factor(credit_data$SEX)
credit_data$EDUCATION<- as.factor(credit_data$EDUCATION)
credit_data$MARRIAGE<- as.factor(credit_data$MARRIAGE)

We omit the rest of the EDA here, but you should not skip it when doing your own data analysis.
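For example, one of many possible EDA checks is to compare default rates across education levels (this particular check is only illustrative):

# counts by education level, and default rate within each level
table(credit_data$EDUCATION)
tapply(credit_data$default, credit_data$EDUCATION, mean)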


3 Logistic Regression

Randomly split the data into training (80%) and testing (20%) datasets:

index <- sample(nrow(credit_data), nrow(credit_data)*0.80)
credit_train <- credit_data[index, ]
credit_test <- credit_data[-index, ]

3.1 Train a logistic regression model with all variables

credit_glm0<- glm(default~., family=binomial, data=credit_train)
summary(credit_glm0)
## 
## Call:
## glm(formula = default ~ ., family = binomial, data = credit_train)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -3.1804  -0.6997  -0.5333  -0.2765   3.2627  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.053e+00  1.547e-01  -6.802 1.03e-11 ***
## LIMIT_BAL   -1.140e-06  2.868e-07  -3.976 7.02e-05 ***
## SEX2        -1.818e-01  5.447e-02  -3.338 0.000845 ***
## EDUCATION2  -1.234e-01  6.331e-02  -1.948 0.051359 .  
## EDUCATION3  -2.151e-01  8.565e-02  -2.512 0.012020 *  
## EDUCATION4  -1.227e+00  3.353e-01  -3.659 0.000254 ***
## MARRIAGE2   -1.132e-01  6.202e-02  -1.826 0.067913 .  
## MARRIAGE3   -2.468e-01  2.498e-01  -0.988 0.323122    
## AGE          9.101e-03  3.318e-03   2.743 0.006095 ** 
## PAY_0        5.973e-01  3.162e-02  18.890  < 2e-16 ***
## PAY_2        6.765e-02  3.617e-02   1.870 0.061435 .  
## PAY_3        4.904e-02  3.999e-02   1.226 0.220128    
## PAY_4        5.590e-02  4.472e-02   1.250 0.211268    
## PAY_5        5.148e-02  4.740e-02   1.086 0.277531    
## PAY_6       -1.079e-02  3.970e-02  -0.272 0.785712    
## BILL_AMT1   -5.043e-06  1.969e-06  -2.562 0.010415 *  
## BILL_AMT2    1.943e-06  2.592e-06   0.750 0.453365    
## BILL_AMT3    2.206e-06  2.348e-06   0.940 0.347420    
## BILL_AMT4   -2.702e-06  2.426e-06  -1.114 0.265439    
## BILL_AMT5    3.680e-06  2.673e-06   1.377 0.168614    
## BILL_AMT6   -9.038e-07  2.092e-06  -0.432 0.665757    
## PAY_AMT1    -8.535e-06  3.331e-06  -2.562 0.010393 *  
## PAY_AMT2    -1.133e-05  3.732e-06  -3.035 0.002406 ** 
## PAY_AMT3    -3.277e-06  3.304e-06  -0.992 0.321250    
## PAY_AMT4    -4.275e-06  2.994e-06  -1.428 0.153330    
## PAY_AMT5    -1.190e-06  3.065e-06  -0.388 0.697829    
## PAY_AMT6    -2.034e-06  2.064e-06  -0.985 0.324398    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 10109.0  on 9599  degrees of freedom
## Residual deviance:  8812.4  on 9573  degrees of freedom
## AIC: 8866.4
## 
## Number of Fisher Scoring iterations: 6

You have seen glm() before. In this lab, it is the main function used to build logistic regression models, because logistic regression is a member of the generalized linear model family. In glm(), the only new thing is family. It specifies the distribution of your response variable. You may also specify the link function after the name of the distribution, for example, family=binomial(logit) (the default link is logit). You can also specify family=binomial(link = "probit") to run probit regression. You may also use glm() to build many other generalized linear models.
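For instance, a minimal sketch of fitting the same model with a probit link (the model name credit_glm_probit is introduced here just for illustration):

#Probit regression with the same predictors
credit_glm_probit <- glm(default~., family=binomial(link="probit"), data=credit_train)
summary(credit_glm_probit)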

3.2 Binary Classification

As we discussed in lecture, people may be more interested in the classification results. But we have to define a cut-off probability first.

These tables illustrate the impact of choosing different cut-off probabilities. Choosing a large cut-off probability results in few cases being predicted as 1, and choosing a small cut-off probability results in many cases being predicted as 1.

pred_glm0_train <- predict(credit_glm0, type="response")

table(credit_train$default, (pred_glm0_train > 0.9)*1, dnn=c("Truth","Predicted"))
##      Predicted
## Truth    0    1
##     0 7481   10
##     1 2097   12
table(credit_train$default, (pred_glm0_train > 0.5)*1, dnn=c("Truth","Predicted"))
##      Predicted
## Truth    0    1
##     0 7273  218
##     1 1577  532
table(credit_train$default, (pred_glm0_train > 0.2)*1, dnn=c("Truth","Predicted"))
##      Predicted
## Truth    0    1
##     0 4508 2983
##     1  595 1514
table(credit_train$default, (pred_glm0_train > 0.0001)*1, dnn=c("Truth","Predicted"))
##      Predicted
## Truth    0    1
##     0    3 7488
##     1    0 2109

Therefore, determining the optimal cut-off probability is crucial. The simplest way to determine the cut-off is to use the proportion of “1” in the original data. We will introduce a more appropriate way to determine the optimal p-cut.
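For example, a quick sketch of this simple rule (the name pcut_naive is introduced here for illustration; the in-sample default rate was computed earlier):

#Use the proportion of "1" in the training data as the cut-off
pcut_naive <- mean(credit_train$default)
table(credit_train$default, (pred_glm0_train > pcut_naive)*1, dnn=c("Truth","Predicted"))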

3.3 Asymmetric cost

In the case of giving a loan to someone, the cost function can capture the trade-off between the risk of lending to someone who cannot pay (predict 0, truth 1) and the risk of rejecting someone who qualifies (predict 1, truth 0). Depending on the business situation, one may need asymmetric costs for false positives and false negatives. Meanwhile, when you want a binary classification decision rule, you need to choose a cut-off probability; as noted above, a large cut-off predicts few cases as 1, and a small cut-off predicts many cases as 1.

Below are a symmetric cost function with a 1:1 cost ratio (equivalently pcut=1/2) and an asymmetric cost function with a 5:1 cost ratio (equivalently pcut=1/6):

#Symmetric cost with 1:1 cost ratio, equivalently pcut=1/2
cost1 <- function(r, pi, pcut=1/2){
  mean(((r==0)&(pi>pcut)) | ((r==1)&(pi<pcut)))
}

#Asymmetric cost with 5:1 cost ratio, equivalently pcut=1/6
cost2 <- function(r, pi, pcut=1/6){
  weight1 <- 5
  weight0 <- 1
  c1 <- (r==1)&(pi<pcut) #logical vector - true if actual 1 but predict 0
  c0 <-(r==0)&(pi>pcut) #logical vector - true if actual 0 but predict 1
  return(mean(weight1*c1+weight0*c0))
}
#Symmetric cost
cost1(r = credit_train$default, pi = pred_glm0_train, pcut=1/2)
## [1] 0.1869792
#Asymmetric cost
cost2(r = credit_train$default, pi = pred_glm0_train, pcut=1/6)
## [1] 0.6657292

Here pcut = 1/(1+weight1/weight0) can be specified within the cost2 function, so that the cost is a function(r, pi) of only two arguments, which can then be fed to cv.glm() later for cross-validation.
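As a sketch of what that looks like (assuming the boot package provides cv.glm(); the model name credit_glm_full and the choice of K=10 folds are illustrative):

#cost as a function of (r, pi) only; pcut is computed inside from the weights
cost <- function(r, pi){
  weight1 <- 5                    # cost of a false negative (truth 1, predict 0)
  weight0 <- 1                    # cost of a false positive (truth 0, predict 1)
  pcut <- 1/(1+weight1/weight0)   # equivalent cut-off, here 1/6
  mean(weight1*((r==1)&(pi<pcut)) + weight0*((r==0)&(pi>pcut)))
}

library(boot)
credit_glm_full <- glm(default~., family=binomial, data=credit_data)
cv_result <- cv.glm(data=credit_data, glmfit=credit_glm_full, cost=cost, K=10)
cv_result$delta[2]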

In general, you will pre-specify a cost ratio (e.g., 5:1) from domain knowledge and use the equivalent cut-off probability (1/(5+1) = 1/6). Then you will use that cost value to compare different models under the SAME cost function (the asymmetric cost2).
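For instance, a hypothetical comparison on the testing data (the predictors in the smaller model are chosen arbitrarily here, just to illustrate the comparison):

#Full model under the asymmetric cost
pred_glm0_test <- predict(credit_glm0, newdata=credit_test, type="response")
cost2(r = credit_test$default, pi = pred_glm0_test)

#A smaller model under the SAME asymmetric cost
credit_glm_small <- glm(default~LIMIT_BAL+PAY_0+PAY_AMT1, family=binomial, data=credit_train)
pred_glm_small_test <- predict(credit_glm_small, newdata=credit_test, type="response")
cost2(r = credit_test$default, pi = pred_glm_small_test)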

4 Summary

4.1 Things to remember

  • Know how to use glm() to build a logistic regression model;

  • Know how to do binary classification, and how to calculate the misclassification rate (MR), false positive rate (FPR), false negative rate (FNR), and (asymmetric) cost (see the short sketch below).
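A minimal sketch of these calculations on the testing data, assuming the asymmetric 5:1 cut-off of 1/6 from above:

pcut <- 1/6
pred_glm0_test <- predict(credit_glm0, newdata=credit_test, type="response")
class_test <- (pred_glm0_test > pcut)*1
MR  <- mean(credit_test$default != class_test)                                   # misclassification rate
FPR <- sum(credit_test$default==0 & class_test==1)/sum(credit_test$default==0)   # false positive rate
FNR <- sum(credit_test$default==1 & class_test==0)/sum(credit_test$default==1)   # false negative rate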
