10. Logistic Regression¶

Classification Tasks¶

Recall that in a classification task, we wish to create a model capable of generating predictions for the value of a categorical label (or response variable) \(Y\). The model will use values of one or more features (or predictor variables) \(X^{(1)}, X^{(2)}, ..., X^{(m)}\) as inputs.

There are many different types of algorithms that you might consider applying for a given classification task. Some will work better on certain datasets than others. In this lesson, we will discuss the most basic type of classification algorithm, logistic regression.

Note: Despite its name, logistic regression is a classification algorithm, not a regression algorithm.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

Preliminary: The Sigmoid Function¶

Before explaining the details of logistic regression, we first need to introduce the sigmoid function. The sigmoid (or logistic) function is given by the following formula:

\(\Large \sigma(z) = \frac{e^z}{1+e^z} = \frac{1}{1 + e^{-z}}\)

A plot of the sigmoid function is shown below.

z = np.linspace(-10,10,100)
w = 1 / (1 + np.exp(-z))

plt.close()
plt.rcParams["figure.figsize"] = [6,4]
plt.plot(z,w)
plt.plot([-10,10],[1,1], linestyle=':', c="r")
plt.plot([-10,10],[0,0], linestyle=':', c="r")
plt.plot([0,0],[0,1], linewidth=1, c="dimgray")
plt.show()

One important property of the sigmoid function is that its output always lies strictly between 0 and 1. As a result, the output of the sigmoid function can be interpreted as a probability.
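
As a quick numerical check of this property, we can evaluate the sigmoid at a few inputs; the values below are chosen only for illustration.

# Large negative inputs give outputs near 0, large positive inputs give outputs near 1
z_check = np.array([-10, -2, 0, 2, 10])
print(1 / (1 + np.exp(-z_check)))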

Logistic Regression¶

A logistic regression model is a probabilistic linear classification method that can be used to estimate the probability that an observation belongs to a particular class based on the feature values. Logistic regression can be adapted for use in multi-class classification problems, but we will begin by discussing the standard version of the algorithm, which is a binary classifier.

Binary Classification¶

Let \(Y\) be a categorical variable that can assume one of two different values. We will encode these values as 0 and 1. These values are meant to represent two different categories or classes that observations can fall into. Assume that for each observation, we have not only a value for \(Y\), but also values for one or more features \(X^{(1)}, X^{(2)}, ..., X^{(m)}\). Suppose that the specific feature values for an observation have an impact on the likelihood of that observation belonging to one class or another. Given a set of observed feature values \(x^{(1)}, x^{(2)}, ..., x^{(m)}\) for an observation, let p denote the probability that \(Y=1\), and let q denote the probability that \(Y=0\). Using probabilistic notation, we could write:

\[\large p = P \left[Y = 1 ~|~ X^{(1)} = x^{(1)}, X^{(2)} = x^{(2)}, ..., X^{(m)} = x^{(m)} \right]\]
\[\large q = P \left[Y = 0 ~|~ X^{(1)} = x^{(1)}, X^{(2)} = x^{(2)}, ..., X^{(m)} = x^{(m)} \right]\]

The Logistic Regression Model¶

The logistic regression model estimates the value of p using a formula of the following form:

\[\large \hat{p} = \sigma\left(\hat{\beta}_0 + \hat{\beta}_1 X^{(1)} + \hat{\beta}_2 X^{(2)} + ... + \hat{\beta}_m X^{(m)}\right)\]

The function \(\sigma\) in the expression above refers to the sigmoid function. The linear combination inside the sigmoid could produce values that fall outside of the range \([0,1]\), but since we then apply the sigmoid to this result, we may interpret the output as a probability. Notice that the logistic regression model directly estimates only the probability \(p\). However, if we have an estimate for \(p\), then we can generate an estimate for \(q\) using \(\hat q = 1 - \hat p\).
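
To make this formula concrete, the short sketch below evaluates the model for a single observation using made-up parameter values; these coefficients are purely illustrative and are not learned from any data.

# Hypothetical (not fitted) parameter values for a model with two features
beta_0, beta_1, beta_2 = -1.5, 0.8, 0.3

# Feature values for a single observation
x1, x2 = 2.0, 1.0

# Linear combination, then apply the sigmoid to obtain a probability estimate
z = beta_0 + beta_1 * x1 + beta_2 * x2
p_hat = 1 / (1 + np.exp(-z))
q_hat = 1 - p_hat

print('Estimated P(Y=1):', p_hat)
print('Estimated P(Y=0):', q_hat)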

The parameters \(\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2, ..., \hat{\beta}_m\) are calculated by a learning algorithm to generate the model that provides the best fit for the given data, as was the case with linear regression. This is accomplished by minimizing the negative log-likelihood loss function on the data set.

Negative Log-Likelihood Loss¶

The negative log-likelihood (NLL) function is a common loss function used to score classification models. The NLL score measures how likely the observed data are according to the model: a model that assigns high probabilities to the classes actually observed receives a low (good) NLL score.

Consider a binary classification problem with two classes: \(Y=0\) and \(Y=1\). Let \(y_1, y_2, y_3, ..., y_n\) be the observed classes for several instances in a dataset. Let \(\hat p_1, \hat p_2, ..., \hat p_n\) be probability estimates generated by a logistic regression model for each observation. Recall that these are estimates of the probability that \(Y=1\), specifically. For each observation, let \(\hat\pi_i\) be the model’s estimate of the probability that the observation belongs to the class in which it was actually observed. That is:

\[\begin{split}\hat\pi_i = \left\{\begin{array}{ll}\hat p_i & \text{if } y_i = \text{1} \\ 1 - \hat p_i & \text{if } ~y_i = \text{0} \end{array}\right.\end{split}\]

We define the model’s likelihood score on the dataset to be:

\[\large L = \hat\pi_1 \cdot \hat\pi_2 \cdot ... \cdot \hat\pi_n = \prod_{i=1}^{n} \hat\pi_i\]

And we define the model’s negative log-likelihood score on the dataset to be:

\[\large NLL = -\ln(L) = -\sum_{i=1}^n \ln(\hat\pi_i)\]

The logistic regression learning algorithm will select the parameter values \(\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2, ..., \hat{\beta}_m\) that will result in the smallest value for negative log-likelihood. This is equivalent to selecting the parameter values that would produce the highest likelihood score. In practice, we use NLL rather than likelihood because NLL is more convenient to work with, both computationally and mathematically.
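
To illustrate these formulas, the sketch below computes the likelihood and NLL scores by hand for a handful of made-up labels and probability estimates; the numbers are purely for demonstration.

# Made-up observed labels and model estimates of P(Y=1) for five observations
y_obs = np.array([1, 0, 1, 1, 0])
p_hat = np.array([0.9, 0.2, 0.7, 0.6, 0.4])

# pi_i is the estimated probability of the class that was actually observed
pi = np.where(y_obs == 1, p_hat, 1 - p_hat)

L = np.prod(pi)            # likelihood
NLL = -np.sum(np.log(pi))  # negative log-likelihood

print('Likelihood:             ', L)
print('Negative log-likelihood:', NLL)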

Logistic Regression in Scikit-Learn¶

Logistic regression models are created in Scikit-Learn as instances of the LogisticRegression class, which is found in the sklearn.linear_model module. We will import that now, along with some other Scikit-Learn tools that we will need in this lesson.

from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.linear_model import LogisticRegression

Example: Exam Preparation¶

Assume that students in a certain field have to take a professional exam. We wish to determine the effect that time spent studying has on a student’s chance of passing the exam. We collect a dataset consisting of 200 students. For each student, we have the following pieces of information:

  • study_hrs - The number of hours the student spent studying alone.

  • seminar_hrs - The number of hours the student spent in an exam preparation seminar.

  • passed - The results of the test. The result is recorded as ‘0’ if the student failed and ‘1’ if the student passed.

We now read the data into a DataFrame, and view the first 5 rows.

df = pd.read_csv('data/exam_prep.txt', sep='\t')
df.head()

We extract the feature array X and the label array y from the DataFrame.

X = df.iloc[:,:2].values
y = df.iloc[:,2].values

In the figure below, we display a scatter plot of our dataset, using the two feature values as the coordinates for points in our plot. We color each point according to the results of the exam for the student represented by that point.

plt.figure(figsize=[8,6])
plt.scatter(X[y==1,0], X[y==1,1], s=50, alpha=0.8,
            c='royalblue', edgecolors='k', label='Passed')
plt.scatter(X[y==0,0], X[y==0,1], s=50, alpha=0.8,
            c='orangered', edgecolors='k', label='Failed')

plt.xlabel('Hours Spent Studying Alone')
plt.ylabel('Hours Spent in Seminar')

plt.xlim([55,125])
plt.ylim([-2,22])
plt.legend()
plt.show()

We will split the dataset into training and test sets, using a 70/30 split. We will not create a validation set in this instance, as we will not be comparing different models in this example.

X_train, X_test, y_train, y_test =\
    train_test_split(X, y, test_size = 0.3, random_state=1, stratify=y)
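
Because we passed stratify=y, the proportion of students who passed should be roughly the same in the training set, the test set, and the full dataset. The quick check below is illustrative only and is not part of the original analysis.

print('Proportion passing (full set):', np.mean(y == 1))
print('Proportion passing (training):', np.mean(y_train == 1))
print('Proportion passing (test):    ', np.mean(y_test == 1))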

In the figure below, we display scatter plots for the training and test sets separately.

plt.figure(figsize=[12,4])
plt.subplot(1,2,1)
sel = y_train == 1
plt.scatter(X_train[sel,0], X_train[sel,1], s=50, alpha=0.8,
            c='royalblue', edgecolors='k', label='Passed')
plt.scatter(X_train[~sel,0], X_train[~sel,1], s=50, alpha=0.8,
            c='orangered', edgecolors='k', label='Failed')
plt.xlabel('Hours Spent Studying Alone')
plt.ylabel('Hours Spent in Seminar')
plt.xlim([55,125])
plt.ylim([-2,22])

plt.subplot(1,2,2)
sel = y_test == 1
plt.scatter(X_test[sel,0], X_test[sel,1], s=50, alpha=0.8,
            c='royalblue', edgecolors='k', label='Passed')
plt.scatter(X_test[~sel,0], X_test[~sel,1], s=50, alpha=0.8,
            c='orangered', edgecolors='k', label='Failed')
plt.xlabel('Hours Spent Studying Alone')
plt.ylabel('Hours Spent in Seminar')
plt.legend(bbox_to_anchor=(1, 1), loc='upper left')

plt.show()

We will now use the LogisticRegression class from Scikit-Learn to create the classification model. As was the case with linear regression, the trained model object will contain two attributes intercept_ and coef_ that will contain the values of the parameters \(\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2, ..., \hat{\beta}_m\) for the optimal model selected by the training algorithm.

model_1 = LogisticRegression(solver='lbfgs', penalty='none')
model_1.fit(X_train, y_train)

print('Intercept:   ',  model_1.intercept_)
print('Coefficients:', model_1.coef_)

The formula of our optimal logistic regression model is:

\[\Large \hat p = \sigma \left(-11.5908 ~+~ 0.0972 \cdot \textrm{study_hrs} ~+~ 0.2880 \cdot \textrm{seminar_hrs}\right)\]

This can also be written in the following form:

\[\Large\hat p = \frac {1} {1 + e^{11.5908 ~-~ 0.0972 \cdot \textrm{study_hrs} ~-~ 0.2880 \cdot \textrm{seminar_hrs}}}\]
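
As a sanity check, we can plug one set of feature values into this formula by hand and compare the result against the model’s predict_proba() output; the two should agree up to the rounding of the printed coefficients. The student below (100 hours studying alone, 10 seminar hours) is chosen only for illustration.

study_hrs, seminar_hrs = 100, 10

# Evaluate the fitted formula directly, using the rounded coefficients shown above
z = -11.5908 + 0.0972 * study_hrs + 0.2880 * seminar_hrs
p_by_hand = 1 / (1 + np.exp(-z))

print('Probability from formula:      ', p_by_hand)
print('Probability from predict_proba:', model_1.predict_proba([[study_hrs, seminar_hrs]])[0, 1])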

The decision boundary for this model is displayed below. The boundary consists of the points where \(\hat p = 0.5\), which occurs exactly when the linear combination inside the sigmoid is equal to zero.

# Rewrite the boundary equation b0 + b1*study_hrs + b2*seminar_hrs = 0
# as seminar_hrs = b + m*study_hrs
b = -model_1.intercept_ / model_1.coef_[0,1]
m = -model_1.coef_[0,0] / model_1.coef_[0,1]
plt.figure(figsize=[12,4])
plt.subplot(1,2,1)
sel = y_train == 1
plt.scatter(X_train[sel,0], X_train[sel,1], s=50, alpha=0.8,
            c='royalblue', edgecolors='k', label='Passed', zorder=3)
plt.scatter(X_train[~sel,0], X_train[~sel,1], s=50, alpha=0.8,
            c='orangered', edgecolors='k', label='Failed', zorder=3)

plt.fill([0,200,200,0],[-10,-10,b + m*200, b],'orangered',alpha=0.2, zorder=1)
plt.fill([0,200,200,0],[30,30,b + m*200, b],'royalblue',alpha=0.2, zorder=1)
plt.plot([0,200],[b, b + 200*m], c='royalblue', alpha=0.6, zorder=2)

plt.xlabel('Hours Spent Studying Alone')
plt.ylabel('Hours Spent in Seminar')
plt.xlim([55,125])
plt.ylim([-2,22])

plt.subplot(1,2,2)
sel = y_test == 1
plt.scatter(X_test[sel,0], X_test[sel,1], s=50, alpha=0.8,
            c='royalblue', edgecolors='k', label='Passed', zorder=3)
plt.scatter(X_test[~sel,0], X_test[~sel,1], s=50, alpha=0.8,
            c='orangered', edgecolors='k', label='Failed', zorder=3)

plt.fill([0,200,200,0],[-10,-10,b + m*200, b],'orangered',alpha=0.2, zorder=1)
plt.fill([0,200,200,0],[30,30,b + m*200, b],'royalblue',alpha=0.2, zorder=1)
plt.plot([0,200],[b, b + 200*m], c='royalblue', alpha=0.6, zorder=2)

plt.xlabel('Hours Spent Studying Alone')
plt.ylabel('Hours Spent in Seminar')
plt.xlim([55,125])
plt.ylim([-2,22])
plt.legend(bbox_to_anchor=(1, 1), loc='upper left')

plt.show()

We now use the model’s score() method to calculate its accuracy on the training and test sets.

train_acc = model_1.score(X_train, y_train)
test_acc = model_1.score(X_test, y_test)

print('Training Accuracy:', round(train_acc,4))
print('Testing Accuracy: ', round(test_acc,4))
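
For a classifier, score() returns accuracy, which is simply the proportion of observations whose predicted label matches the observed label. The equivalent calculation, done by hand as an illustrative check, is shown below.

# Accuracy computed directly as the proportion of correct test-set predictions
print('Testing Accuracy:', np.mean(model_1.predict(X_test) == y_test))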

Suppose that we want to estimate the chances of passing for three students who have prepared as follows:

  1. study_hrs = 70 and seminar_hrs = 16

  2. study_hrs = 100 and seminar_hrs = 10

  3. study_hrs = 120 and seminar_hrs = 5

We can use the predict() method to generate a prediction as to whether or not each of these students will pass the exam.

X_new = [[70,16],[100,10],[120,5]]
print(model_1.predict(X_new))

We can use the predict_proba() method to estimate the probability of success for each of these students.

print(model_1.predict_proba(X_new))
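
Note that for a binary classifier, predict() assigns an observation to class 1 exactly when the estimated probability of class 1 exceeds 0.5. The check below is illustrative and simply confirms that relationship for the three students.

probs = model_1.predict_proba(X_new)

# Predicted class is 1 precisely when the estimated P(Y=1) exceeds 0.5
print((probs[:, 1] > 0.5).astype(int))
print(model_1.predict(X_new))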

Example: Pima Diabetes Dataset¶

For this example, we will be working with the Pima Diabetes Dataset. This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective is to predict, based on diagnostic measurements, whether a patient has diabetes. All patients are female, at least 21 years old, and of Pima Indian heritage.

The columns in this dataset are described below.

  • Pregnancies: Number of times pregnant

  • Glucose: Plasma glucose concentration at 2 hours in an oral glucose tolerance test

  • BloodPressure: Diastolic blood pressure (mm Hg)

  • SkinThickness: Triceps skin fold thickness (mm)

  • Insulin: 2-Hour serum insulin (mu U/ml)

  • BMI: Body mass index (weight in kg/(height in m)^2)

  • DiabetesPedigreeFunction: Diabetes pedigree function

  • Age: Age (years)

  • Outcome: Class variable (0 or 1)

Our goal will be to predict the value of Outcome using the other variables in the dataset as features.

We start by importing the dataset and viewing the first 10 rows.

pima = pd.read_csv('data/diabetes.csv', sep=',')
pima.head(10)

Let’s check the dimensions of the DataFrame.

print(pima.shape)

We will extract the feature and label arrays from the DataFrame.

X = pima.iloc[:,:-1].values
y = pima.iloc[:,-1].values

Before creating a model, let’s calculate the proportion of observations in the dataset that are actually diabetic.

print(np.mean(y == 1))

We note that roughly 35% of individuals represented in the dataset are in fact diabetic. This means that a trivial model that always predicts "non-diabetic" would be correct roughly 65% of the time, which gives us a baseline against which to judge our classifier's accuracy.

We now split the data into training and test sets, using a 70/30 split.

X_train, X_test, y_train, y_test =\
    train_test_split(X, y, test_size = 0.3, random_state=1, stratify=y)

We will now use Scikit-Learn to create our logistic regression classifier. We will then print the parameters for our optimal model.

model_2 = LogisticRegression(solver='lbfgs', penalty='none', max_iter=2000)
model_2.fit(X_train, y_train)

np.set_printoptions(suppress=True)
print('Intercept:   ',  model_2.intercept_)
print('Coefficients:', model_2.coef_)

We now use the model’s score() method to calculate its accuracy on the training and test sets.

train_acc = model_2.score(X_train, y_train)
test_acc = model_2.score(X_test, y_test)

print('Training Accuracy:', round(train_acc,4))
print('Testing Accuracy: ', round(test_acc,4))

Let’s use our model to generate predictions for each of the first three observations in our test set. The feature values for these observations are displayed below.

pd.DataFrame(X_test[:3,:], columns=pima.columns[:-1])

We will use the predict() method to predict the value of Outcome for each of these observations. We display the predictions, along with the observed values of Outcome.

print('Predicted Labels:', model_2.predict(X_test[:3,]))
print('Observed Labels: ', y_test[:3])

We will now use predict_proba to generate probability estimates for each of the three observations.

print(model_2.predict_proba(X_test[:3,]))

We close this example by displaying the confusion matrix and classification report for our model, as calculated on the test set.

pred_test = model_2.predict(X_test)
cm = confusion_matrix(y_test, pred_test)
cm_df = pd.DataFrame(cm)
cm_df
print(classification_report(y_test, pred_test))

This report suggests the following:

  • When the model classifies someone as non-diabetic, it will be correct roughly 79% of the time.

  • When the model classifies someone as diabetic, it will be correct roughly 73% of the time.

  • The model will correctly classify 83% of non-diabetic individuals.

  • The model will correctly classify 56% of diabetic individuals.
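
These percentages are the precision and recall values reported for each class. They can also be read directly off the confusion matrix; the sketch below shows the calculation for the diabetic class (class 1), relying on Scikit-Learn's convention that rows of the confusion matrix correspond to observed classes and columns to predicted classes.

# Precision for class 1: of those predicted diabetic, the fraction truly diabetic
precision_1 = cm[1, 1] / (cm[0, 1] + cm[1, 1])

# Recall for class 1: of those truly diabetic, the fraction predicted diabetic
recall_1 = cm[1, 1] / (cm[1, 0] + cm[1, 1])

print('Precision (diabetic):', round(precision_1, 4))
print('Recall (diabetic):   ', round(recall_1, 4))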

Multiclass Classification with Logistic Regression¶

Assume that we wish to create a classification model for use in a task in which there are 3 or more classes. In particular, assume that there are m predictors and that our labels each fall into one of K classes, where K is greater than 2. In this case, the standard version of logistic regression will not work, as it can only perform binary classification. There are, however, multiple ways of adapting logistic regression to perform multiclass classification. We will present one such method here.

In multinomial logistic regression, we generate a probability distribution \(\large\hat p^{(1)},\hat p^{(2)}, ...,\hat p^{(K)}\) over the set of \(K\) possible class labels. To generate these probability estimates, we use a model of the following form:

  • For each \(k = 1, 2, ..., K\), let \(\large z^{(k)} = \hat\beta_{k,0} + \hat\beta_{k,1} \cdot x^{(1)} + \hat\beta_{k,2} \cdot x^{(2)} + ... + \hat\beta_{k,m} \cdot x^{(m)}\)

  • For each class, define \(\Large\hat p^{(k)} = \frac{e^{z^{(k)}}}{ \sum_{j=1}^K e^{z^{(j)}} }\)

As with binomial logistic regression, the parameters \(\hat\beta_{k,j}\) are selected by a learning algorithm to generate the model with the lowest negative log-likelihood score.
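
The short sketch below works through these two steps for a single observation with made-up scores \(z^{(k)}\); the numbers are illustrative only.

# Made-up linear scores z^(k) for a single observation and K = 4 classes
z = np.array([1.2, -0.4, 0.3, 2.0])

# Softmax step: exponentiate each score and normalize so the estimates sum to 1
p_hat = np.exp(z) / np.sum(np.exp(z))

print(p_hat)
print('Sum of probabilities:', p_hat.sum())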

Example 3: Synthetic Dataset with Four Classes¶

We will explore multinomial logistic regression using a synthetic dataset. We will generate the data for this example in the next cell.

from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, 
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           class_sep=1.2, random_state=1)

plt.figure(figsize=[8,6])
plt.scatter(X[:,0], X[:,1], c=y, edgecolors='k', cmap='rainbow')
plt.show()

We will split the data into training and test sets using an 80/20 split.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1, stratify=y)

print('X_train Shape:', X_train.shape)
print('X_test Shape: ', X_test.shape)

In the cell below, we will create our multinomial logistic regression model. We indicate that we are performing multinomial regression by setting multi_class='multinomial'. We will also calculate the training and test accuracy for our model.

model_3 = LogisticRegression(solver='lbfgs', penalty='none', multi_class='multinomial')
model_3.fit(X_train, y_train)

print('Training Accuracy:', model_3.score(X_train, y_train))
print('Testing Accuracy: ', model_3.score(X_test, y_test))

We plot the decision boundaries for our model in the figure below.

# Build a fine grid of points covering the feature space, then predict the
# class at each grid point to visualize the model's decision regions
n, x0, x1, y0, y1 = 500, -4.5, 4.5, -4.5, 5.5
xticks = np.linspace(x0, x1, n)
yticks = np.linspace(y0, y1, n)
grid_pts = np.transpose([np.tile(xticks,n), np.repeat(yticks,n)])
class_grid = model_3.predict(grid_pts).reshape(n,n)
plt.figure(figsize=[8,6])
plt.pcolormesh(xticks, yticks, class_grid, cmap='rainbow', zorder=1, vmin=0, vmax=3)
plt.fill([x0,x0,x1,x1], [y0,y1,y1,y0], 'white', alpha=0.5, zorder = 2)
plt.scatter(X[:,0], X[:,1], c=y, edgecolors='k', cmap='rainbow', zorder=3)
plt.show()

The cell below displays the confusion matrix for our model, as calculated on the test set.

test_pred = model_3.predict(X_test)

cm = confusion_matrix(y_test, test_pred)

cm_df = pd.DataFrame(cm)
cm_df

Finally, we display the classification report for our test set.

print(classification_report(y_test, test_pred))